
Evolution

From Wikipedia, the free encyclopedia


This article is about evolution in biology. For other uses, see Evolution (disambiguation). For a generally accessible and less technical introduction to the topic, see Introduction to evolution.

In biology, evolution is the change in the inherited traits of a population of organisms through successive generations.[1] When a population splits into smaller groups, these groups evolve independently and may develop into new species. Anatomical similarities, geographical distribution of similar species and the fossil record indicate that all organisms are descended from a common ancestor through a long series of these divergence events, stretching back in a tree of life that has grown over the 3,500 million years of life on Earth.[2]

Evolution is the product of two opposing forces: processes that constantly introduce variation in traits, and processes that make particular variants become more common or rare. A trait is a particular characteristic, such as eye color, height, or a behavior, that is expressed when an organism's genes interact with its environment. Genes vary within populations, so organisms show heritable differences (variation) in their traits. The main cause of variation is mutation, which changes the sequence of a gene. Altered genes are then inherited by offspring. There can sometimes also be transfer of genes between species.

Two main processes cause variants to become more common or rare in a population. One is natural selection, which causes traits that aid survival and reproduction to become more common, and traits that hinder survival and reproduction to become more rare.[1][3] Natural selection occurs because only a few individuals in each generation will survive, since resources are limited and organisms produce many more offspring than their environment can support. Over many generations mutations produce successive, small, random changes in traits, which are then filtered by natural selection and the beneficial changes retained. This adjusts traits so they become suited to an organism's environment: these adjustments are called adaptations.[4] Not every trait, however, is an adaptation. Another cause of evolution is genetic drift, an independent process that produces entirely random changes in how common traits are in a population. Genetic drift comes from the role that chance plays in whether a trait will be passed on to the next generation.

Evolutionary biologists document the fact that evolution occurs, and also develop and test theories that explain its causes. The study of evolutionary biology began in the mid-nineteenth century, when research into the fossil record and the diversity of living organisms convinced most scientists that species changed over time.[5][6] However, the mechanism driving these changes remained unclear until the theory of natural selection was independently proposed by Charles Darwin and Alfred Wallace. In 1859, Darwin's seminal work On the Origin of Species brought the new theory of evolution by natural selection to a wide audience,[7] leading to the overwhelming acceptance of evolution among scientists.[8][9][10][11] In the 1930s, Darwinian natural selection was combined with Mendelian inheritance to form the modern evolutionary synthesis,[12] which connected the units of evolution (genes) and the mechanism of evolution (natural selection).
This powerful explanatory and predictive theory has become the central organizing principle of modern biology, directing research and providing a unifying explanation for the history and diversity of life on Earth.[9][10][13] Evolution is therefore applied and studied in fields as diverse as ecology, psychology, paleontology, philosophy, medicine, agriculture and conservation biology.

Contents

1 History of evolutionary thought
2 Heredity
3 Variation
 3.1 Mutation
 3.2 Sex and recombination
 3.3 Population genetics
 3.4 Gene flow
4 Mechanisms
 4.1 Natural selection
 4.2 Genetic drift
5 Outcomes
 5.1 Adaptation
 5.2 Co-evolution
 5.3 Co-operation
 5.4 Speciation
 5.5 Extinction
6 Evolutionary history of life
 6.1 Origin of life
 6.2 Common descent
 6.3 Evolution of life
7 Social and cultural responses
8 Applications
9 See also
10 References
11 Further reading
12 External links

History of evolutionary thought


For more details on this topic, see History of evolutionary thought.

Around 1854 Charles Darwin began writing out what became On the Origin of Species.

The scientific inquiry into the origin of species can be dated to at least the 6th century BCE, with the Greek philosopher Anaximander.[14] Others who considered evolutionary ideas included the Greek philosopher Empedocles, the Roman philosopher-poet Lucretius, the Afro-Arab biologist Al-Jahiz,[15] the Persian philosopher Ibn Miskawayh, the Brethren of Purity,[16] and the Chinese philosopher Zhuangzi.[17] As biological knowledge grew in the 18th century, evolutionary ideas were set out by a few natural philosophers including Pierre Maupertuis in 1745 and Erasmus Darwin in 1796.[18] The ideas of the biologist Jean-Baptiste Lamarck about transmutation of species influenced radicals, but were rejected by mainstream scientists.

Charles Darwin formulated his idea of natural selection in 1838 and was still developing his theory in 1858 when Alfred Russel Wallace sent him a similar theory, and both were presented to the Linnean Society of London in separate papers.[19] At the end of 1859 Darwin's publication of On the Origin of Species explained natural selection in detail and presented evidence leading to increasingly wide acceptance of the occurrence of evolution.

Debate about the mechanisms of evolution continued, and Darwin could not explain the source of the heritable variations which would be acted on by natural selection. Like Lamarck, he thought that parents passed on adaptations acquired during their lifetimes,[20] a theory which was subsequently dubbed Lamarckism.[21] In the 1880s August Weismann's experiments indicated that changes from use and disuse were not heritable, and Lamarckism gradually fell from favour.[22][23] More significantly, Darwin could not account for how traits were passed down from generation to generation. In 1865 Gregor Mendel found that traits were inherited in a predictable manner.[24] When Mendel's work was rediscovered in the 1900s, disagreements over the rate of evolution predicted by early geneticists and biometricians led to a rift between the Mendelian and Darwinian models of evolution.

Yet it was the rediscovery of Gregor Mendel's pioneering work on the fundamentals of genetics (of which Darwin and Wallace were unaware) by Hugo de Vries and others in the early 1900s that provided the impetus for a better understanding of how variation occurs in plant and animal traits. That variation is the main fuel used by natural selection to shape the wide variety of adaptive traits observed in organic life. Even though Hugo de Vries and other early geneticists rejected gradual natural selection, their rediscovery of and subsequent work on genetics eventually provided a solid basis on which the theory of evolution stood even more convincingly than when it was originally proposed.[25]

The apparent contradiction between Darwin's theory of evolution by natural selection and Mendel's work was reconciled in the 1920s and 1930s by evolutionary biologists such as J.B.S. Haldane, Sewall Wright, and particularly Ronald Fisher, who set the foundations for the establishment of the field of population genetics. The end result was a combination of evolution by natural selection and Mendelian inheritance, the modern evolutionary synthesis.[26]

The identification of DNA as the genetic material by Oswald Avery and colleagues in the 1940s, and the subsequent publication of the structure of DNA by James Watson and Francis Crick in 1953, demonstrated the physical basis for inheritance.
Since then, genetics and molecular biology have become core parts of evolutionary biology and have revolutionized the field of phylogenetics.[12]

In its early history, evolutionary biology primarily drew in scientists from traditional taxonomically oriented disciplines, whose specialist training in particular organisms addressed general questions in evolution. As evolutionary biology expanded as an academic discipline, particularly after the development of the modern evolutionary synthesis, it began to draw more widely from the biological sciences.[12] Currently the study of evolutionary biology involves scientists from fields as diverse as biochemistry, ecology, genetics and physiology, and evolutionary concepts are used in even more distant disciplines such as psychology, medicine, philosophy and computer science. In the 21st century, research in evolutionary biology deals with several areas where the modern evolutionary synthesis may need modification or extension, such as assessing the relative importance of various ideas on the unit of selection and evolvability and how to fully incorporate the findings of evolutionary developmental biology.[27][28]

Heredity
Further information: Introduction to genetics, Genetics, and Heredity

DNA structure. Bases are in the center, surrounded by phosphate-sugar chains in a double helix.

Evolution in organisms occurs through changes in heritable traits: particular characteristics of an organism. In humans, for example, eye color is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents.[29] Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype.[30] The complete set of observable traits that make up the structure and behavior of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment.[31] As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than
others, due to differences in their genotype; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.[32]

Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information.[30] DNA is a long polymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. DNA is heritable because the specific pairing of the four bases provides a biochemical mechanism that cells use to accurately transcribe and replicate coded information from one template to another.[33][34] Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. A specific location within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles.

DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes.[35][36] The study of such complex traits is a major area of current genetic research. Another interesting but unsolved question in genetics is whether epigenetics is important in evolution; in epigenetic inheritance, heritable changes occur in organisms without any change to the sequences of their genes.[37]
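The base-pairing rule just described can be made concrete in a few lines of code. The following is a minimal illustrative sketch (not part of the original article): it shows how one strand fully determines its complement, which is what lets a cell copy coded information from one template to another.

```python
# Watson-Crick base pairing: A pairs with T, C pairs with G.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-wise complementary strand implied by pairing."""
    return "".join(PAIRING[base] for base in strand)

print(complement("ATGCCGTA"))  # -> TACGGCAT
```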

Variation
Further information: Genetic diversity and Population genetics

An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. A substantial part of the variation in phenotypes in a population is caused by the differences between their genotypes.[36] The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will fluctuate, becoming more or less prevalent relative to other forms of that gene. Evolutionary forces act by driving these changes in allele frequency in one direction or another. Variation disappears when a new allele reaches the point of fixation: when it either disappears from the population or replaces the ancestral allele entirely.[38]

Variation comes from mutations in genetic material, migration between populations (gene flow), and the reshuffling of genes through sexual reproduction. Variation also comes from exchanges of genes between different species; for example, through horizontal gene transfer in bacteria, and hybridization in plants.[39] Despite the constant introduction of variation through these processes, most of the genome of a species is identical in all individuals of that species.[40] However, even relatively small changes in genotype can lead to dramatic changes in phenotype: chimpanzees and humans differ in only about 5% of their genomes.[41]

Mutation

Further information: Mutation and Molecular evolution

Duplication of part of a chromosome

Random mutations constantly occur in the genomes of organisms; these mutations create genetic variation. Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication.[42][43][44] These mutations involve several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.[45] Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations.[42] Therefore, the optimal mutation rate for a species is a trade-off between the costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes.[46] Viruses that use RNA as their genetic material have rapid mutation rates,[47] which can be an advantage, since these viruses evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system.[48]

Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome.[49] Extra copies of genes are a major source of the raw material needed for new genes to evolve.[50] This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors.[51] For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four are descended from a single ancestral gene.[52] New genes can be created from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because this increases redundancy; one gene in the pair can acquire a new function while the other copy continues to perform its original function.[53][54] Other types of mutation can even create entirely new genes from previously noncoding DNA.[55][56] The creation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to
form new combinations with new functions.[57][58] When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together creating new combinations with new and complex functions.[59] For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyze one step in the overall process, like a step in an assembly line.[60] Changes in chromosome number may involve even larger mutations, where segments of the DNA within chromosomes break and then rearrange. For example, two chromosomes in the Homo genus fused to produce human chromosome 2; this fusion did not occur in the lineage of the other apes, and they retain these separate chromosomes.[61] In evolution, the most important role of such chromosomal rearrangements may be to accelerate the divergence of a population into new species by making populations less likely to interbreed, and thereby preserving genetic differences between these populations.[62] Sequences of DNA that can move about the genome, such as transposons, make up a major fraction of the genetic material of plants and animals, and may have been important in the evolution of genomes.[63] For example, more than a million copies of the Alu sequence are present in the human genome, and these sequences have now been recruited to perform functions such as regulating gene expression.[64] Another effect of these mobile DNA sequences is that when they move within a genome, they can mutate or delete existing genes and thereby produce genetic diversity.[43]
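As a toy illustration of the random point mutations described above, the sketch below applies a per-base mutation probability to a short sequence. The rate and sequence are hypothetical values chosen only for demonstration (real per-base rates are vastly lower):

```python
import random

BASES = "ACGT"

def mutate(sequence: str, rate: float) -> str:
    """Replace each base, with probability `rate`, by one of the
    three other bases; all numbers here are for illustration only."""
    result = []
    for base in sequence:
        if random.random() < rate:
            result.append(random.choice([b for b in BASES if b != base]))
        else:
            result.append(base)
    return "".join(result)

random.seed(1)
print(mutate("ATGCCGTAATGCCGTA", rate=0.2))  # deliberately high demo rate
```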

Sex and recombination


Further information: Genetic recombination and Sexual reproduction

In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes.[65] Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles.[66] Sex usually increases genetic variation and may increase the rate of evolution.[67][68] However, asexuality must be advantageous in some environments, as shown by its evolution in previously sexual animals.[69] Here, asexuality might allow the two sets of alleles in their genome to diverge and gain different functions.[70]

Recombination allows even alleles that are close together in a strand of DNA to be inherited independently. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other, and genes that are close together tend to be inherited together, a phenomenon known as linkage.[71] This tendency is measured by finding how often two alleles occur together on a single chromosome, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective
sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking.[72] When alleles cannot be separated by recombination, as in mammalian Y chromosomes, which pass intact from fathers to sons, harmful mutations accumulate.[73][74]

By breaking up allele combinations, sexual reproduction allows the removal of harmful mutations and the retention of beneficial mutations.[75] In addition, recombination and reassortment can produce individuals with new and advantageous gene combinations. These positive effects are balanced by the fact that sex reduces an organism's reproductive rate, can cause mutations and may separate beneficial combinations of genes.[75] The reasons for the evolution of sexual reproduction are therefore unclear, and this question is still an active area of research in evolutionary biology,[76][77] one that has prompted ideas such as the Red Queen hypothesis.[78]
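The linkage disequilibrium mentioned above has a standard quantitative definition: for alleles A and B at two loci, D = p_AB − p_A·p_B, where p_AB is the frequency of the haplotype carrying both alleles. D = 0 means the two alleles occur together no more often than chance would predict. The sketch below computes it for a hypothetical haplotype sample (illustrative code, not from the original article):

```python
def linkage_disequilibrium(haplotypes: list[tuple[str, str]]) -> float:
    """D = p_AB - p_A * p_B for alleles 'A' and 'B' at two loci."""
    n = len(haplotypes)
    p_ab = sum(1 for h in haplotypes if h == ("A", "B")) / n
    p_a = sum(1 for h in haplotypes if h[0] == "A") / n
    p_b = sum(1 for h in haplotypes if h[1] == "B") / n
    return p_ab - p_a * p_b

# Hypothetical sample of two-locus haplotypes:
sample = [("A", "B")] * 40 + [("A", "b")] * 10 + [("a", "B")] * 10 + [("a", "b")] * 40
print(linkage_disequilibrium(sample))  # 0.4 - 0.5 * 0.5 = 0.15
```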

Population genetics

White peppered moth

Black morph in peppered moth evolution

Further information: Population genetics

From a genetic viewpoint, evolution is a generation-to-generation change in the frequencies of alleles within a population that shares a common gene pool.[79] A population is a localized group of individuals belonging to the same species. For example, all of the moths of the same species living in an isolated forest represent a population. A single gene in this population may have several alternate forms, which account for variations between the phenotypes of the organisms. An example might be a gene for coloration in moths that has two alleles: black and white. A gene pool is the complete set of alleles for a gene in a single population; the allele frequency measures the fraction of the gene pool composed of a single allele (for example, what fraction of moth coloration genes are the black allele). Evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms; for example, the allele for black color in a population of moths becoming more common.
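The allele-frequency bookkeeping above is easy to make concrete. The following minimal sketch (hypothetical moth counts, for illustration only) counts the black allele 'B' in a diploid gene pool:

```python
def allele_frequency(population: list[str], allele: str) -> float:
    """Fraction of the gene pool made up of one allele. Each diploid
    moth contributes two alleles at the coloration locus."""
    gene_pool = [a for genotype in population for a in genotype]
    return gene_pool.count(allele) / len(gene_pool)

# Hypothetical moths: 'B' = black allele, 'W' = white allele.
moths = ["BB"] * 20 + ["BW"] * 50 + ["WW"] * 30
print(allele_frequency(moths, "B"))  # (40 + 50) / 200 = 0.45
```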

To understand the mechanisms that cause a population to evolve, it is useful to consider what conditions are required for a population not to evolve. The Hardy-Weinberg principle states that the frequencies of alleles (variations in a gene) in a sufficiently large population will remain constant if the only forces acting on that population are the random reshuffling of alleles during the formation of the sperm or egg, and the random combination of the alleles in these sex cells during fertilization.[80] Such a population is said to be in Hardy-Weinberg equilibrium; it is not evolving.[81]
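The Hardy-Weinberg principle has a simple algebraic form: with allele frequencies p and q = 1 − p, the expected genotype frequencies at equilibrium are p², 2pq and q². A small sketch, reusing the hypothetical moth frequency from the previous example:

```python
def hardy_weinberg(p: float) -> tuple[float, float, float]:
    """Expected genotype frequencies (BB, BW, WW) at equilibrium,
    given allele frequency p for 'B' and q = 1 - p for 'W'."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

bb, bw, ww = hardy_weinberg(0.45)
print(f"BB={bb:.4f}  BW={bw:.4f}  WW={ww:.4f}")  # the three values sum to 1
```

With the hypothetical sample above (0.20 BB, 0.50 BW, 0.30 WW), the observed genotype frequencies are close to the expected 0.2025, 0.495 and 0.3025, so that population would be near equilibrium; sustained departures from these proportions indicate that some force other than random mating is acting.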

Gene flow
Further information: Gene flow, Hybrid (biology), and Horizontal gene transfer

When they mature, male lions leave the pride where they were born and take over a new pride to mate, causing gene flow between prides.[82]

Gene flow is the exchange of genes between populations, which are usually of the same species.[83] Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Migration into or out of a population can change allele frequencies, as well as introduce genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading genetic differences between the populations. Gene flow is hindered by mountain ranges, oceans and deserts, or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes.[84]

Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules.[85] Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype.[86] The importance of hybridization in creating new species of animals is unclear, although cases have
been seen in many types of animals,[87] with the gray tree frog being a particularly well-studied example.[88] Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals.[89][90] Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis.[91] Polyploids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations.[92]

Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria.[93] In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species.[94] Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred.[95][96] An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants.[97] Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.[98] Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria.[99]
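The homogenizing effect of gene flow described above can be sketched with the classic one-island migration model, in which a fraction m of each generation's gene pool comes from migrants. The numbers below are hypothetical, chosen only to show the local frequency being pulled toward the migrant frequency:

```python
def next_frequency(p_local: float, p_migrant: float, m: float) -> float:
    """One-island migration model: after one generation, a fraction m
    of the gene pool comes from migrants."""
    return (1.0 - m) * p_local + m * p_migrant

p = 0.9  # hypothetical local frequency of an allele
for generation in range(1, 6):
    p = next_frequency(p, p_migrant=0.1, m=0.05)
    print(f"generation {generation}: p = {p:.3f}")
```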

Mechanisms
The two main mechanisms that produce evolution are natural selection and genetic drift. Natural selection favors genes that aid survival and reproduction. Genetic drift is random change in the frequency of alleles, caused by the random sampling of a generation's genes during reproduction. The relative importance of natural selection and genetic drift in a population varies depending on the strength of the selection and the effective population size, which is the number of individuals capable of breeding.[100] Natural selection usually predominates in large populations, while genetic drift dominates in small populations. The dominance of genetic drift in small populations can even lead to the fixation of slightly deleterious mutations.[101] As a result, changing population size can dramatically influence the course of evolution. Population bottlenecks, where the population shrinks temporarily and therefore loses genetic variation, result in a more uniform population.[38]
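One standard way to quantify this balance (a textbook result, not stated in the article itself) is Kimura's diffusion approximation for the probability that a new mutation with selection coefficient s eventually fixes in a population of effective size N_e. When |s| is much smaller than 1/(2N_e), the probability collapses to the neutral value 1/(2N_e): drift dominates. A hedged sketch with hypothetical numbers:

```python
import math

def fixation_probability(s: float, n_e: float) -> float:
    """Kimura's approximation for a new mutation (initial frequency
    1/(2N)). As s -> 0 this approaches the neutral value 1/(2N)."""
    if abs(s) < 1e-12:
        return 1.0 / (2.0 * n_e)
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-4.0 * n_e * s))

s = 0.005  # hypothetical mild advantage
for n_e in (10, 100, 10_000):
    print(f"N_e={n_e:>6}: selected={fixation_probability(s, n_e):.5f} "
          f"neutral={1 / (2 * n_e):.5f}")
```

With these hypothetical numbers, the mildly beneficial allele behaves almost neutrally at N_e = 10 but fixes at roughly 2s, far above the neutral rate, at N_e = 10,000, matching the statement that selection predominates in large populations and drift in small ones.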

Natural selection
Further information: Natural selection and Fitness (biology)

Natural selection of a population for dark coloration.

Natural selection is the process by which genetic mutations that enhance reproduction become, and remain, more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts:
- Heritable variation exists within populations of organisms.
- Organisms produce more offspring than can survive.
- These offspring vary in their ability to survive and reproduce.

These conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors pass these advantageous traits on, while traits that do not confer an advantage are not passed on to the next generation.[102]

The central concept of natural selection is the evolutionary fitness of an organism.[103] Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation.[103] However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes.[104] For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.[103]

If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for". Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer: it is "selected against".[3] Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful.[1] However, even if the
direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).[105][106]

Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorized into three different types. The first is directional selection, which is a shift in the average value of a trait over time: for example, organisms slowly getting taller.[107] Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilizing selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.[102][108] This would, for example, cause organisms to slowly become all the same height.

A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates.[109] Traits that evolved through sexual selection are particularly prominent in males of some animal species, despite traits such as cumbersome antlers, mating calls or bright colors that attract predators, decreasing the survival of individual males.[110] This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.[111]

Natural selection most generally makes nature the measure against which individuals, and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e. exchange of materials between living and nonliving parts) within the system."[112] Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain, and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.

An active area of research is the unit of selection, with natural selection being proposed to work at the level of genes, cells, individual organisms, groups of organisms and species.[113][114] None of these are mutually exclusive and selection can act on multiple levels simultaneously.[115] An example of selection occurring below the level of the individual organism is genes called transposons, which can replicate and spread throughout a genome.[116] Selection at a level above the individual, such as group selection, may allow the evolution of co-operation, as discussed below.[117]
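The claim that a fitness-increasing allele "will become more common within the population" has a standard one-locus form. For a haploid locus with two alleles of relative fitness w_A and w_a, the frequency after one round of selection is p' = p·w_A / (p·w_A + (1 − p)·w_a). A minimal sketch with hypothetical fitness values:

```python
def select(p: float, w_a: float, w_b: float) -> float:
    """One generation of selection on a haploid two-allele locus:
    each allele's share is weighted by its relative fitness."""
    mean_fitness = p * w_a + (1.0 - p) * w_b
    return p * w_a / mean_fitness

p = 0.01  # hypothetical starting frequency of a beneficial allele
for generation in (0, 100, 200, 300):
    print(f"generation {generation}: p = {p:.3f}")
    for _ in range(100):
        p = select(p, w_a=1.05, w_b=1.0)
```

Even this modest 5% fitness advantage carries the allele from 1% to near fixation within a few hundred generations, which is the directional selection described above.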

Genetic drift
Further information: Genetic drift and Effective population size

Simulation of genetic drift of 20 unlinked alleles in populations of 10 (top) and 100 (bottom). Drift to fixation is more rapid in the smaller population.

Genetic drift is the change in allele frequency from one generation to the next that occurs because alleles in offspring are a random sample of those in the parents, as well as from the role that chance plays in determining whether a given individual will survive and reproduce. In mathematical terms, alleles are subject to sampling error. As a result, when selective forces are absent or relatively weak, allele frequencies tend to "drift" upward or downward randomly (in a random walk). This drift halts when an allele eventually becomes fixed, either by disappearing from the population, or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles.[118]

The time for an allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations.[119] The precise measure of population that is important is called the effective population size. The effective population is always smaller than the total population since it takes into account factors such as the level of inbreeding, the number of animals that are too old or young to breed, and the lower probability of animals that live far apart managing to mate with each other.[120]

An example in which genetic drift is probably of central importance in determining a trait is the loss of pigments from animals that live in caves, a change that produces no obvious advantage or disadvantage in complete darkness.[121] However, it is usually difficult to measure the relative importance of selection and drift,[122] so the comparative importance of these two forces in driving evolutionary change is an area of current research.[123] These investigations were prompted by the neutral theory of molecular evolution, which proposed that most evolutionary changes are the result of the fixation of neutral mutations that do not have any immediate effects on the fitness of an organism.[124] Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift.[125] This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature.[126][127]

However, a more recent and better-supported version of this model is the nearly neutral theory, where most mutations only have small effects on fitness.[102]
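The simulation described in the figure caption above is straightforward to reproduce with a Wright-Fisher model: each generation, every one of the 2N allele copies in the next generation is an independent draw from the current allele frequency. The sketch below (standard library only, parameters loosely mirroring the caption) shows that fixation takes far fewer generations in the smaller population:

```python
import random

def generations_to_fixation(n: int, p: float = 0.5, max_gen: int = 100_000) -> int:
    """Wright-Fisher drift at one biallelic locus in n diploid
    individuals: each of the 2n allele copies in the next generation
    is sampled at random from the current frequency."""
    copies = 2 * n
    count = round(p * copies)
    for generation in range(1, max_gen + 1):
        freq = count / copies
        count = sum(1 for _ in range(copies) if random.random() < freq)
        if count in (0, copies):  # allele lost or fixed
            return generation
    return max_gen

random.seed(42)
for n in (10, 100):
    runs = [generations_to_fixation(n) for _ in range(20)]  # 20 unlinked loci
    print(f"N={n:>3}: mean generations to fixation ~ {sum(runs) / len(runs):.0f}")
```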

Outcomes
Evolution influences every aspect of the form and behavior of organisms. Most prominent are the specific behavioral and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by co-operating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.

These outcomes of evolution are sometimes divided into macroevolution, which is evolution that occurs at or above the level of species, such as extinction and speciation, and microevolution, which is smaller evolutionary changes, such as adaptations, within a species or population.[128] In general, macroevolution is regarded as the outcome of long periods of microevolution.[129] Thus, the distinction between micro- and macroevolution is not a fundamental one; the difference is simply the time involved.[130] However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels, with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.[131][132][133]

A common misconception is that evolution has goals or long-term plans; in reality, evolution has no long-term goal and does not necessarily produce greater complexity.[134][135] Although complex species have evolved, this occurs as a side effect of the overall number of organisms increasing, and simple forms of life remain more common.[136] For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size,[137] and constitute the vast majority of Earth's biodiversity.[138] Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable.[139] Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.[140][141]

Adaptation
For more details on this topic, see Adaptation.

Adaptation is one of the basic phenomena of biology,[142] and is the process whereby an organism becomes better suited to its habitat.[143][144] The term adaptation may also refer to a trait that is important for an organism's survival: for example, the adaptation of horses' teeth to
the grinding of grass, or the ability of horses to run fast and escape predators. By using the term adaptation for the evolutionary process, and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection.[145] The following definitions are due to Theodosius Dobzhansky:

1. Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.[146]
2. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.[147]
3. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.[148]

Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell.[149] Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment,[150] Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing,[151][152] and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol.[153][154] An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).[155][156]

A baleen whale skeleton; a and b label flipper bones, which were adapted from front leg bones, while c indicates vestigial leg bones.[157]

Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organization may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor.[158] However, since all living organisms are related to some extent,[159] even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.[160][161]

During adaptation, some structures may lose their original function and become vestigial structures.[162] Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes,[163]
the non-functional remains of eyes in blind cave-dwelling fish,[164] wings in flightless birds,[165] and the presence of hip bones in whales and snakes.[157] Examples of vestigial structures in humans include wisdom teeth,[166] the coccyx,[162] and the vermiform appendix.[162]

However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process.[167] One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree, an exaptation.[167] Within cells, molecular machines such as the bacterial flagella[168] and protein sorting machinery[169] evolved by the recruitment of several pre-existing proteins that previously had different functions.[128] Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.[170][171]

A critical principle of ecology is that of competitive exclusion: no two species can occupy the same niche in the same environment for a long time.[172] Consequently, natural selection will tend to force species to adapt to different ecological niches. This may mean that, for example, two species of cichlid fish adapt to live in different habitats, which will minimize the competition between them for food.[173]

An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations.[174] This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features.[175] These studies have shown that evolution can alter development to create new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals.[176] It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles.[177] It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.[178]

Co-evolution
Further information: Co-evolution

Interactions between organisms can produce both conflict and co-operation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called co-evolution.[179] An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.[180]

Co-operation
Further information: Co-operation (evolution)

However, not all interactions between species involve conflict.[181] Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil.[182] This is a reciprocal relationship, as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.[183]

Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.[42]

Such cooperation within species may have evolved through the process of kin selection, in which one organism acts to help raise a relative's offspring.[184] This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on.[185] Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.[186]

Speciation
Further information: Speciation

The four mechanisms of speciation.

Speciation is the process where a species diverges into two or more descendant species.[187] Evolutionary biologists view species as statistical phenomena and not categories or types. This view is counterintuitive since the classical idea of species is still widely held, with a species seen as a class of organisms exemplified by a "type specimen" that bears all the traits common to this species. Instead, a species is now defined as a separately evolving lineage that forms a single gene pool. Although properties such as genetics and morphology are used to help separate closely related lineages, this definition has fuzzy boundaries.[188] Indeed, the exact definition of the term "species" is still controversial, particularly in prokaryotes,[189] and this is called the species problem.[190] Biologists have proposed a range of more precise definitions, but the definition used is a pragmatic choice that depends on the particularities of the species concerned.[190] Typically the actual focus of biological study is the population, an observable interacting group of organisms, rather than the species, an observable similar group of individuals.

Speciation has been observed multiple times under both controlled laboratory conditions and in nature.[191] In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four mechanisms for speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms.[192][193] As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.[194]

The second mechanism of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental
population. Here, the founder effect causes rapid speciation through both rapid genetic drift and selection on a small gene pool.[195]

The third mechanism of speciation is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations.[187] Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localized metal pollution from mines.[196] Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.[197]

Geographical isolation of finches on the Galápagos Islands produced over a dozen new species.

Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population.[198] Generally, sympatric speciation in animals requires the evolution of both genetic differences and non-random mating, to allow reproductive isolation to evolve.[199]

One type of sympatric speciation involves cross-breeding of two related species to produce a new hybrid species. This is not common in animals, as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants, because plants often double their number of chromosomes, to form polyploids.[200] This allows the chromosomes from each parental species to form a matching pair during meiosis, since each parent's chromosomes are already represented by a pair.[201] An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa cross-bred to give the new species Arabidopsis suecica.[202] This happened about 20,000 years ago,[203] and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms
involved in this process.[204] Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.[90]

Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged.[205] In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.[206]

Extinction
Further information: Extinction

Tyrannosaurus rex. Non-avian dinosaurs died out in the Cretaceous–Tertiary extinction event at the end of the Cretaceous period.

Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation, and disappear through extinction.[207] Nearly all animal and plant species that have lived on earth are now extinct,[208] and extinction appears to be the ultimate fate of all species.[209] These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events.[210] The Cretaceous–Tertiary extinction event, during which the non-avian dinosaurs went extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96 percent of species driven to extinction.[210] The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate, and up to 30 percent of species may be extinct by the mid-21st century.[211] Human activities are now the primary cause of the ongoing extinction event;[212] global warming may further accelerate it in the future.[213]

The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered.[210] The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (competitive exclusion).[12] If one species can out-compete another, this could produce
species selection, with the fitter species surviving and the other species being driven to extinction.[113] The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.[214]

Evolutionary history of life


Main article: Evolutionary history of life
See also: Timeline of evolution and Timeline of human evolution

Origin of life
Further information: Abiogenesis and RNA world hypothesis

The origin of life is a necessary precursor for biological evolution, but understanding that evolution occurred once organisms appeared and investigating how this happens does not depend on understanding exactly how life began.[215] The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions, but it is unclear how this occurred.[216] Not much is certain about the earliest developments in life, the structure of the first living things, or the identity and nature of any last universal common ancestor or ancestral gene pool.[217][218] Consequently, there is no scientific consensus on how life began, but proposals include self-replicating molecules such as RNA,[219] and the assembly of simple cells.[220]

Common descent
Further information: Evidence of common descent, Common descent, and Homology (biology)

The hominoids are descendants of a common ancestor.

All organisms on Earth are descended from a common ancestor or ancestral gene pool.[159] Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events.[221] The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with
no clear purpose resemble functional ancestral traits and, finally, that organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.[7] However, modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species.[222][223]

Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record.[224] By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.

More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids.[225] The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations.[226] For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 96% of their genomes, and analyzing the few areas where they differ helps shed light on when the common ancestor of these species existed.[227]
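The molecular-clock dating mentioned above rests on a simple relation: if neutral substitutions accumulate at a roughly constant rate μ per site per year in each lineage, two lineages diverge from each other at 2μ, so the time since their common ancestor is approximately t = d / (2μ), where d is the observed fraction of differing sites. The sketch below uses purely illustrative, hypothetical numbers:

```python
def divergence_time(d: float, mu: float) -> float:
    """Molecular-clock estimate: d = fraction of differing sites,
    mu = substitution rate per site per year in each lineage.
    Both lineages accumulate changes, hence the factor of 2."""
    return d / (2.0 * mu)

# Hypothetical values, for illustration only:
d = 0.012    # 1.2% sequence divergence at neutral sites
mu = 1.0e-9  # substitutions per site per year
print(f"estimated divergence: {divergence_time(d, mu) / 1e6:.0f} million years")
```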

Evolution of life
For more details on this topic, see Timeline of evolution.

Evolutionary tree showing the divergence of modern species from their common ancestor in the center.[228] The three domains are colored, with bacteria blue, archaea green, and eukaryotes red.

Despite the uncertainty on how life began, it is generally accepted that prokaryotes inhabited the Earth from approximately 3–4 billion years ago.[2][229] No obvious changes in morphology or cellular organization occurred in these organisms over the next few billion years.[230]

The eukaryotes were the next major change in cell structure. These came from ancient bacteria being engulfed by the ancestors of eukaryotic cells, in a cooperative association called endosymbiosis.[99][231] The engulfed bacteria and the host cell then underwent co-evolution, with the bacteria evolving into either mitochondria or hydrogenosomes.[232] An independent second engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.[233] It is not known exactly when the first eukaryotic cells appeared, though they emerged between about 1.6 and 2.7 billion years ago.

The history of life was that of the unicellular eukaryotes, prokaryotes, and archaea until about 610 million years ago, when multicellular organisms began to appear in the oceans in the Ediacaran period.[2][234] The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria.[235] Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct.[236] Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.[237]

About 500 million years ago, plants and fungi colonized the land, and were soon followed by arthropods and other animals.[238] Insects were particularly successful and even today make up the majority of animal species.[239] Amphibians first appeared around 300 million years ago, followed by early amniotes, then mammals around 200 million years ago and birds around 100 million years ago (both from "reptile"-like lineages). However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.[138]

Social and cultural responses


Further information: Social effect of evolutionary theory and Objections to evolution

As Darwinism became widely accepted in the 1870s, caricatures of Charles Darwin with an ape or monkey body symbolised evolution.[240]

In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centered on the philosophical, social and religious implications of evolution. Nowadays, the fact that organisms evolve is uncontested in the scientific literature and the modern evolutionary synthesis is widely accepted by scientists.[12] However, evolution remains a contentious concept for some theists.[241] While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their respective religions and who raise various objections to evolution.[128][242][243] As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that human mental and moral faculties, which had been thought purely spiritual, are not distinctly separated from those of other animals.[6] In some countries, notably the United States, these tensions between science and religion have fueled the current creation-evolution controversy, a religious conflict focusing on politics and public education.[244] While other scientific fields such as cosmology[245] and earth science[246] also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.

The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced about a generation later and legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in the form of intelligent design, to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.[247]

Another example somewhat associated with evolutionary theory that is now widely regarded as unwarranted is "Social Darwinism", a derogatory term associated with the 19th century Malthusian theory developed by the Whig philosopher Herbert Spencer. It was later expanded by others into ideas about "survival of the fittest" in commerce and human societies as a whole, and led to claims that social inequality, sexism, racism, and imperialism were justified.[248] However, these ideas contradict Darwin's own views, and contemporary scientists and philosophers consider these ideas to be neither mandated by evolutionary theory nor supported by data.[249][250][251]

Applications
Further information: Artificial selection and Evolutionary computation

Evolutionary biology, and in particular the understanding of how organisms evolve through natural selection, is an area of science with many practical applications.[252] A major technological application of evolution is artificial selection, which is the intentional selection of certain traits in a population of organisms. Humans have used artificial selection for thousands of years in the domestication of plants and animals.[253] More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA in molecular biology. It is also possible to use repeated rounds of mutation and selection to evolve proteins with particular properties, such as modified enzymes or new antibodies, in a process called directed evolution.[254]

Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders.[255] For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves.[256] This helped identify genes required for vision and pigmentation, such as crystallins and the melanocortin 1 receptor.[257] Similarly, comparing the genome of the Antarctic icefish, which lacks red blood cells, to close relatives such as the zebrafish revealed genes needed to make these blood cells.[258]

As evolution can produce highly optimized processes and networks, it has many applications in computer science. Here, simulation of evolution using evolutionary algorithms and artificial life began with the work of Nils Aall Barricelli in the 1960s and was extended by Alex Fraser, who published a series of papers on simulation of artificial selection.[259] Artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg in the 1960s and early 1970s, who used evolution strategies to solve complex engineering problems.[260] Genetic algorithms in particular became popular through the writing of John Holland.[261] As academic interest grew, dramatic increases in the power of computers allowed practical applications, including the automatic evolution of computer programs.[262] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers, and also to optimize the design of systems.[263]
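As a concrete illustration of the genetic algorithms mentioned above, here is a minimal sketch in Python. The toy "OneMax" fitness function (count of 1-bits), the population size, and the mutation rate are illustrative assumptions, not details from the sources cited; a real application would substitute the engineering objective as the fitness function.

    import random

    # Illustrative toy problem: evolve a bit string toward all ones.
    TARGET_LEN = 20
    POP_SIZE = 50
    MUTATION_RATE = 0.01
    GENERATIONS = 100

    def fitness(individual):
        # Number of 1s; higher is fitter.
        return sum(individual)

    def mutate(individual):
        # Each bit flips independently with small probability,
        # mirroring random mutation of genes.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in individual]

    def crossover(a, b):
        # Single-point recombination, loosely analogous to sexual reproduction.
        point = random.randrange(1, TARGET_LEN)
        return a[:point] + b[point:]

    def select(population):
        # Tournament selection: fitter individuals are more likely to
        # reproduce, the analogue of natural selection.
        contenders = random.sample(population, 3)
        return max(contenders, key=fitness)

    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    best = max(population, key=fitness)
    print(f"best fitness after {GENERATIONS} generations: "
          f"{fitness(best)}/{TARGET_LEN}")

The design mirrors the biological loop described in the text: variation (mutation, recombination) generates candidates, and selection filters them; over many generations the population climbs toward the optimum.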


Bovine spongiform encephalopathy


For other uses, see BSE (disambiguation).

Classic image of a cow with BSE. A feature of the disease is the inability of the infected animal to stand.

Bovine spongiform encephalopathy (BSE), commonly known as mad-cow disease (MCD), is a fatal neurodegenerative disease of cattle that causes a spongy degeneration in the brain and spinal cord. BSE has a long incubation period, about 4 years, usually affecting adult cattle at a peak age of onset of four to five years, all breeds being equally susceptible.[1] In the United Kingdom, the country worst affected, more than 179,000 cattle have been infected and 4.4 million slaughtered during the eradication programme.[2]

Most scientists believe that the disease may be transmitted to human beings who eat the brain or spinal cord of infected carcasses.[3] In humans, it is known as new variant Creutzfeldt-Jakob disease (vCJD or nvCJD), and by October 2009 it had killed 166 people in Britain (the most recent being of a different genotype from other sufferers[4]) and 44 elsewhere,[5] with the number expected to rise because of the disease's long incubation period.[6] Between 460,000 and 482,000 BSE-infected animals had entered the human food chain before controls on high-risk offal were introduced in 1989.[7]

A British inquiry into BSE concluded that the epidemic was caused by cattle, which are normally herbivores, being fed the remains of other cattle in the form of meat and bone meal (MBM), which caused the infectious agent to spread.[8][9] The origin of the disease itself remains unknown. The infectious agent is distinctive for the high temperatures at which it remains viable; this contributed to the spread of the disease in Britain, which had reduced the temperatures used during its rendering process.[8] Another contributory factor was the feeding of infected protein supplements to very young calves.[8][10]

Infectious agent

Microscopic "holes" in tissue sections are examined in the lab.

The infectious agent in BSE is believed to be a specific type of misfolded protein called a prion. These prion proteins carry the disease between individuals and cause deterioration of the brain. BSE is a type of transmissible spongiform encephalopathy (TSE).[11] TSEs can arise in animals that carry an allele which causes previously normal protein molecules to contort by themselves from an alpha-helical arrangement to a beta-pleated sheet, which is the disease-causing shape for the particular protein. Transmission can occur when healthy animals come in contact with tainted tissues from others with the disease. In the brain, these proteins cause native cellular prion protein to deform into the infectious state, which then goes on to deform further prion protein in an exponential cascade. This results in protein aggregates, which then form dense plaque fibers, leading to the microscopic appearance of "holes" in the brain, degeneration of physical and mental abilities, and ultimately death.

Different hypotheses exist for the origin of prion proteins in cattle. Two leading hypotheses suggest that the agent may have jumped species from the disease scrapie in sheep, or that it evolved from a spontaneous form of "mad cow disease" that has been seen occasionally in cattle for many centuries.[12] Publius Flavius Vegetius Renatus records cases of a disease with similar characteristics in the 4th and 5th centuries AD.[13] The British government inquiry took the view that the cause was not scrapie, as had originally been postulated, but some event in the 1970s that it was not possible to identify.[14]

Findings published in PLoS Pathogens (September 12, 2008) suggest that mad cow disease can also be caused by a genetic mutation within the prion protein gene. The research showed, for the first time, that a 10-year-old cow from Alabama with an atypical form of bovine spongiform encephalopathy had the same type of prion protein gene mutation as found in human patients with the genetic form of Creutzfeldt-Jakob disease, also called genetic CJD for short. Besides having a genetic origin, other human forms of prion diseases can be sporadic, as in sporadic CJD, as well as foodborne, that is, contracted when people eat products contaminated with mad cow disease. This foodborne form of Creutzfeldt-Jakob disease is called variant CJD.[15]
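The "exponential cascade" of prion conversion described above can be illustrated with a toy model. This sketch is our own illustration, not a model from the sources cited here; the pool size and rate constant are arbitrary, and the point is only the shape of the curve: a slow start, explosive growth, then saturation as the native protein pool is depleted.

    # Toy prion-conversion cascade (assumed numbers, illustration only).
    normal = 1_000_000.0   # assumed pool of native cellular prion protein
    misfolded = 1.0        # a single misfolded seed
    k = 3e-6               # assumed conversion rate per encounter per step

    for step in range(20):
        # Each misfolded molecule can template the conversion of native
        # protein it encounters, so conversions scale with both pools.
        converted = min(k * misfolded * normal, normal)
        normal -= converted
        misfolded += converted
        print(f"step {step:2d}: misfolded = {misfolded:12.0f}")

Early on the misfolded count roughly quadruples each step (exponential growth); once the native pool runs low, the curve flattens, which is the aggregate-plaque end state the text describes.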

The BSE epidemic in British cattle


Cattle are normally herbivores; in nature, cattle eat grass. In modern industrial cattle-farming, various commercial feeds are used, which may contain ingredients including antibiotics, hormones, pesticides, fertilizers, and protein supplements. The use of meat and bone meal, produced from the ground and cooked left-overs of the slaughtering process as well as from the cadavers of sick and injured animals such as cattle, sheep, or chickens, as a protein supplement in cattle feed was widespread in Europe prior to about 1987.[3] Worldwide, soya bean meal is the primary plant-based protein supplement fed to cattle. However, soya beans do not grow well in Europe, so cattle raisers throughout Europe turned to the less expensive animal by-product feeds as an alternative.

A change to the rendering process in the early 1980s may have resulted in a large increase of the infectious agent in cattle feed. A contributing factor was suggested to have been a change in British law that allowed lower-temperature sterilization of the protein meal. While other European countries like Germany required such animal byproducts to undergo a high-temperature steam boiling process, this requirement had been eased in Britain as a measure to keep prices competitive. The British Inquiry later dismissed this theory, saying "changes in process could not have been solely responsible for the emergence of BSE, and changes in regulation were not a factor at all."[16]

The first animal fell ill with the disease in 1984 in Britain, and lab tests the following year indicated the presence of BSE, but it was only in November 1986 that the UK Ministry of Agriculture accepted it had a new disease on its hands.[citation needed] Subsequently, 165 people (up until October 2009) acquired and died of a disease with similar neurological symptoms, subsequently called vCJD, or (new) variant Creutzfeldt-Jakob disease.[5] This is a separate disease from 'classical' Creutzfeldt-Jakob disease, which is not related to BSE and has been known about since the early 1900s. Three cases of vCJD occurred in people who had lived in or visited Britain: one each in Ireland, Canada and the United States. There is also some concern about those who work with (and therefore inhale) cattle meat and bone meal, such as horticulturists, who use it as fertilizer. Up-to-date statistics on all types of CJD are published by the National Creutzfeldt-Jakob Disease Surveillance Unit (NCJDSU) in Edinburgh.

For many of the vCJD patients, direct evidence exists that they had consumed tainted beef, and this is assumed to be the mechanism by which all affected individuals contracted it. Disease incidence also appears to correlate with slaughtering practices that led to the mixture of nervous system tissue with hamburger and other beef. It is estimated that 400,000 cattle infected with BSE entered the human food chain in the 1980s.[citation needed] Although the BSE epizootic was eventually brought under control by culling all suspect cattle populations, people are still being diagnosed with vCJD each year (though the number of new cases has currently dropped to fewer than 5 per year). This is attributed to the long incubation period for prion diseases, which is typically measured in years or decades. As a result, the full extent of the human vCJD outbreak is still not known.

The scientific consensus is that infectious BSE prion material is not destroyed through normal cooking procedures, meaning that contaminated beef foodstuffs prepared "well done" may remain infectious.[17][18]

In 2004 researchers reported evidence of a second contorted shape of prions in a rare minority of diseased cattle. If valid, this would imply a second strain of BSE prion. Very little is known about the shape of disease-causing prions, because their insolubility and tendency to clump thwart application of the detailed measurement techniques of structural biology. But cruder measures yield a "biochemical signature" by which the newly discovered cattle strain appears different from the familiar one, but similar to the clumped prions in humans with traditional Creutzfeldt-Jakob disease (CJD). The finding of a second strain of BSE prion raises the possibility that transmission of BSE to humans has been underestimated, because some of the individuals diagnosed with spontaneous or "sporadic" CJD may have actually contracted the disease from tainted beef. So far nothing is known about the relative transmissibility of the two disease strains of BSE prion.

Alan Colchester, a professor of neurology at the University of Kent, and Nancy Colchester, writing in the September 3, 2005 issue of the medical journal The Lancet, proposed a theory that the most likely initial origin of BSE in Britain was the importation from the Indian subcontinent of bone meal which contained CJD-infected human remains.[19] The government of India responded vehemently to the research, calling it "misleading, highly mischievous; a figment of imagination; absurd," adding that India maintained constant surveillance and had not had a single case of either BSE or vCJD.[20][21] The authors responded in the January 22, 2006 issue of The Lancet that their theory is unprovable only in the same sense as all other BSE origin theories are, and that the theory warrants further investigation.[22]

UK epizootic and UK licensed medicines


During the course of the investigation into the BSE epizootic, an enquiry was also made into the activities of the Department of Health and its Medicines Control Agency. On May 7, 1999, in his written statement number 476 to the BSE Inquiry, David Osborne Hagger reported on behalf of the Medicines Control Agency that in a previous enquiry the Agency had been asked to: "... identify relevant manufacturers and obtain information about the bovine material contained in children's vaccines, the stocks of these vaccines and how long it would take to switch to other products." It was further reported that the: "... use of bovine insulin in a small group of mainly elderly patients was noted and it was recognised that alternative products for this group were not considered satisfactory." A medicines licensing committee report that same year recommended that: "... no licensing action is required at present in regard to products produced from bovine material or using prepared bovine brain in nutrient media and sourced from outside the United Kingdom, the Channel Isles and the Republic of Ireland provided that the country of origin is known to be free of BSE, has competent veterinary advisers and is known to practise good animal husbandry."

In 1990 the British Diabetic Association became concerned regarding the safety of bovine insulin, and the government licensing agency assured them that: "... there was no insulin sourced from cattle in the UK or Ireland and that the situation in other countries was being monitored." In 1991 a European Community Commission: "... expressed concerns about the possible transmission of the BSE/scrapie agent to man through use of certain cosmetic treatments." Sources in France reported to the British Medicines Control Agency: "... that there were some licensed surgical sutures derived from French bovine material." Concerns were also raised: "... regarding a possible risk of transmission of the BSE agent in gelatin products."

Husbandry practices in the United States relating to BSE


Soybean meal is cheap and plentiful in the United States, so the use of animal byproduct feeds was never common there, as it was in Europe. However, U.S. regulations only partially prohibit the use of animal byproducts in feed. In 1997, regulations prohibited the feeding of mammalian byproducts to ruminants such as cows and goats, but the byproducts of ruminants can still be legally fed to pets or to other livestock such as pigs and poultry.[23] In addition, it is legal for ruminants to be fed byproducts from some of these animals. A January 2004 proposal to end the use of cow blood, restaurant scraps, and poultry litter (fecal matter and feathers)[24] has yet to be implemented,[25] despite the efforts of some advocates[who?] of such a policy, who cite the fact that cows are herbivores and that blood and fecal matter could potentially carry BSE.

Regulatory failures


In February 2001, the U.S. General Accounting Office (GAO) reported that the FDA, which is responsible for regulating feed, had not adequately policed the various bans.[26] Compliance with the regulations was shown to be extremely poor before the discovery of the Washington cow, though industry representatives report that compliance is now 100%. Even so, critics call the partial prohibitions insufficient. Indeed, US meat producer Creekstone Farms alleges that the USDA is preventing BSE testing from being conducted.[27] The USDA has issued recalls of beef supplies that involved the introduction of downer cows into the food supply; Hallmark/Westland Meat Packing Company was found to have used electric shocks to prod downer cows into the slaughtering system in 2007.[28] Possibly due to pressure from large agribusiness, the United States has drastically cut back the number of cows inspected for BSE.[29]

Effect on the beef industry


Japan was the top importer of U.S. beef, buying 240,000 tons valued at $1.4 billion in 2003.[citation needed] After the discovery of the first case of BSE in the U.S. on December 23, 2003, Japan stopped U.S. beef imports in December 2003. In December 2005, Japan once again allowed imports of U.S. beef, but reinstated its ban in mid-January 2006 after a technical violation of the U.S.-Japan beef import agreement: a vertebral column, which should have been removed prior to shipment, was included in a shipment of veal. Japanese consumer groups said that Tokyo had yielded to U.S. pressure to resume imports, ignoring consumer worries about the safety of U.S. beef. Michiko Kamiyama from Food Safety Citizen Watch and Yoko Tomiyama from Consumers Union of Japan[30] said about this: "The government has put priority on the political schedule between the two countries, not on food safety or human health."

Sixty-five nations implemented full or partial restrictions on importing U.S. beef products because of concerns that U.S. testing lacked sufficient rigor. As a result, exports of U.S. beef declined from 1,300,000 metric tons in 2003, before the first mad cow was detected in the US, to 322,000 metric tons in 2004. Exports have increased since then, to 771,000 metric tons in 2007.[31]

On December 31, 2006, Hematech, a biotechnology company based in Sioux Falls, South Dakota, announced that it had used genetic engineering and cloning technology to produce cattle that lacked a gene necessary for prion production, thus theoretically making them immune to BSE.[32]

BSE statistics by country


Country | BSE cases | vCJD cases
Austria | 5 | 0
Belgium | 133[33] | 0
Canada | 15[34] | 1[5]
Czech Republic | 28[35] | 0
Denmark | 14[36] | 0
Falkland Islands | 1 | 0
Finland | 1 | 0
France | 900[37] | 25[5]
Germany | 312 | 0
Greece | 1[38] | 0
Hong Kong | 2 | 0
Ireland | 1,353 | 4[5]
Israel | 1[39] | 0
Italy | 138[40] | 1[5]
Japan | 26 | 1[5]
Liechtenstein | 2 | 0
Luxembourg | 2 | 1
Netherlands | 85[41] | 3[5]
Oman | 2 | 0
Poland | 21 | 0
Portugal | 875 | 2[5]
Saudi Arabia | – | 1[5]
Slovakia | 15 | 0
Slovenia | 7 | 0
Spain | 412 | 5[5]
Sweden | 1 | 0
Switzerland | 453 | 0
Thailand | 2[42] | –
United Kingdom | 183,841 | 170[5]
United States | 3[34] | 3[5]
Total | 188,579 | 214

Map: dark green areas are countries with confirmed human cases of vCJD; light green shows countries which have reported cases of only BSE.

The table above summarizes reported cases of BSE and of vCJD by country.[citation needed] BSE is the disease in cattle, while vCJD is the disease in people. The tests used for detecting BSE vary considerably, as do the regulations in various jurisdictions for when, and which, cattle must be tested. For instance, in the EU the cattle tested are older (30 months or more), while many cattle are slaughtered earlier than that. At the opposite end of the scale, Japan tests all cattle at the time of slaughter. Testing is also difficult because the altered prion protein is present at very low levels in blood or urine, and no other signal has been found. Newer tests are faster, more sensitive, and cheaper, so future figures may be more comprehensive. Even so, currently the only reliable test is examination of tissues during an autopsy.

It is notable that there are no cases reported in Australia, Brazil, New Zealand and Vanuatu, where cattle are mainly fed outside on grass pasture and (mainly in Australia) non-grass feeding is done only as a final finishing process before the animals are processed for meat. As for vCJD in humans, autopsy tests are not always done, so those figures too are likely to be too low, but probably by a lesser fraction. In the UK, anyone with possible vCJD symptoms must be reported to the UK Creutzfeldt-Jakob Disease Surveillance Unit. In the U.S., the CDC has refused to impose a national requirement that physicians and hospitals report cases of the disease. Instead, the agency relies on other methods, including death certificates and urging physicians to send suspicious cases to the National Prion Disease Pathology Surveillance Center (NPDPSC) at Case Western Reserve University in Cleveland, which is funded by the CDC.

Malaria

Classification and external resources

Plasmodium falciparum ring-forms and gametocytes in human blood.

ICD-10: B50
ICD-9: 084
OMIM: 248310
DiseasesDB: 7728
MedlinePlus: 000621
eMedicine: med/1385 emerg/305 ped/1357
MeSH: C03.752.250.552

Malaria is a mosquito-borne infectious disease caused by a eukaryotic protist of the genus Plasmodium. It is widespread in tropical and subtropical regions, including parts of the Americas, Asia, and Africa. Each year, there are approximately 350–500 million cases of malaria,[1] killing between one and three million people, the majority of whom are young children in sub-Saharan Africa.[2] Ninety percent of malaria-related deaths occur in sub-Saharan Africa. Malaria is commonly associated with poverty, but is also a cause of poverty[3] and a major hindrance to economic development.

Five species of the genus Plasmodium can infect humans; the most serious forms of the disease are caused by Plasmodium falciparum. Malaria caused by Plasmodium vivax, Plasmodium ovale and Plasmodium malariae is milder in humans and not generally fatal. A fifth species, Plasmodium knowlesi, is a zoonosis that causes malaria in macaques but can also infect humans.[4][5]

Malaria is naturally transmitted by the bite of a female Anopheles mosquito. When a mosquito bites an infected person, a small amount of blood is taken, which contains malaria parasites. These develop within the mosquito, and about one week later, when the mosquito takes its next blood meal, the parasites are injected with the mosquito's saliva into the person being bitten. After a period of between two weeks and several months (occasionally years) spent in the liver, the malaria parasites start to multiply within red blood cells, causing symptoms that include fever and headache. In severe cases, the disease worsens, leading to coma and death.

A wide variety of antimalarial drugs are available to treat malaria. In the last 5 years, treatment of P. falciparum infections in endemic countries has been transformed by the use of combinations of drugs containing an artemisinin derivative. Severe malaria is treated with intravenous or intramuscular quinine or, increasingly, the artemisinin derivative artesunate.[6] Several drugs are also available to prevent malaria in travellers to malaria-endemic countries (prophylaxis). Resistance has developed to several antimalarial drugs, most notably chloroquine.[7]

Malaria transmission can be reduced by preventing mosquito bites through the distribution of inexpensive mosquito nets and insect repellents, or by mosquito-control measures such as spraying insecticides inside houses and draining standing water where mosquitoes lay their eggs. Although many vaccines are under development, the challenge of producing a widely available vaccine that provides a high level of protection for a sustained period is still to be met.[8]


Signs and symptoms

Main symptoms of malaria.[9]

Symptoms of malaria include fever, shivering, arthralgia (joint pain), vomiting, anemia (caused by hemolysis), hemoglobinuria, retinal damage,[10] and convulsions. The classic symptom of malaria is the cyclical occurrence of sudden coldness followed by rigor and then fever and sweating lasting four to six hours, occurring every two days in P. vivax and P. ovale infections, and every three days for P. malariae.[11] P. falciparum can cause recurrent fever every 36–48 hours, or a less pronounced and almost continuous fever. For reasons that are poorly understood, but that may be related to high intracranial pressure, children with malaria frequently exhibit abnormal posturing, a sign indicating severe brain damage.[12] Malaria has been found to cause cognitive impairments, especially in children. It causes widespread anemia during a period of rapid brain development and also direct brain damage. This neurologic damage results from cerebral malaria, to which children are more vulnerable.[13][14] Cerebral malaria is associated with retinal whitening,[15] which may be a useful clinical sign in distinguishing malaria from other causes of fever.[16]

Species | Periodicity | Persistent in liver?
Plasmodium vivax | tertian | yes
Plasmodium ovale | tertian | yes
Plasmodium falciparum | tertian | no
Plasmodium malariae | quartan | no

Severe malaria is almost exclusively caused by P. falciparum infection, and usually arises 6–14 days after infection.[17] Consequences of severe malaria include coma and death if untreated; young children and pregnant women are especially vulnerable. Splenomegaly (enlarged spleen), severe headache, cerebral ischemia, hepatomegaly (enlarged liver), hypoglycemia, and hemoglobinuria with renal failure may occur. Renal failure may cause blackwater fever, where hemoglobin from lysed red blood cells leaks into the urine. Severe malaria can progress extremely rapidly and cause death within hours or days.[17] In the most severe cases of the disease, fatality rates can exceed 20%, even with intensive care and treatment.[18] In endemic areas, treatment is often less satisfactory and the overall fatality rate for all cases of malaria can be as high as one in ten.[19] Over the longer term, developmental impairments have been documented in children who have suffered episodes of severe malaria.[20]

Chronic malaria is seen in both P. vivax and P. ovale, but not in P. falciparum. Here, the disease can relapse months or years after exposure, due to the presence of latent parasites in the liver. Describing a case of malaria as cured by observing the disappearance of parasites from the bloodstream can, therefore, be deceptive. The longest incubation period reported for a P. vivax infection is 30 years.[17] Approximately one in five cases of P. vivax malaria in temperate areas involves overwintering by hypnozoites (i.e., relapses begin the year after the mosquito bite).[21]

Causes

A Plasmodium sporozoite traverses the cytoplasm of a mosquito midgut epithelial cell in this false-color electron micrograph.

Malaria parasites
Malaria parasites are members of the genus Plasmodium (phylum Apicomplexa). In humans, malaria is caused by P. falciparum, P. malariae, P. ovale, P. vivax and P. knowlesi.[22][23] P. falciparum is the most common cause of infection, responsible for about 80% of all malaria cases and about 90% of malaria deaths.[24] Parasitic Plasmodium species also infect birds, reptiles, monkeys, chimpanzees and rodents.[25] There have been documented human infections with several simian species of malaria, namely P. knowlesi, P. inui, P. cynomolgi,[26] P. simiovale, P. brazilianum, P. schwetzi and P. simium; however, with the exception of P. knowlesi, these are mostly of limited public health importance.[27]

Mosquito vectors and the Plasmodium life cycle


The parasite's primary (definitive) hosts and transmission vectors are female mosquitoes of the Anopheles genus, while humans and other vertebrates are secondary hosts. A mosquito becomes infected when it takes a blood meal from an infected human carrier; infected Anopheles mosquitoes then carry Plasmodium sporozoites in their salivary glands. Once ingested, the parasite gametocytes taken up in the blood differentiate into male or female gametes and then fuse in the mosquito gut. This produces an ookinete that penetrates the gut lining and produces an oocyst in the gut wall. When the oocyst ruptures, it releases sporozoites that migrate through the mosquito's body to the salivary glands, where they are then ready to infect a new human host. This type of transmission is occasionally referred to as anterior station transfer.[28] The sporozoites are injected into the skin, alongside saliva, when the mosquito takes a subsequent blood meal.

Only female mosquitoes feed on blood, so males do not transmit the disease. Females of the Anopheles genus prefer to feed at night. They usually start searching for a meal at dusk, and will continue throughout the night until they take a meal. Malaria parasites can also be transmitted by blood transfusions, although this is rare.[29]

Pathogenesis

The life cycle of malaria parasites in the human body: a mosquito infects a person by taking a blood meal. First, sporozoites enter the bloodstream and migrate to the liver. They infect liver cells (hepatocytes), where they multiply into merozoites, rupture the liver cells, and escape back into the bloodstream. The merozoites then infect red blood cells, where they develop into ring forms, then trophozoites (a feeding stage), then schizonts (a reproduction stage), then back into merozoites. Sexual forms called gametocytes are also produced, which, if taken up by a mosquito, will infect the insect and continue the life cycle.

Malaria in humans develops via two phases: an exoerythrocytic and an erythrocytic phase. The exoerythrocytic phase involves infection of the hepatic system, or liver, whereas the erythrocytic phase involves infection of the erythrocytes, or red blood cells. When an infected mosquito pierces a person's skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream and migrate to the liver. Within 30 minutes of being introduced into the human host, the sporozoites infect hepatocytes, multiplying asexually and asymptomatically for a period of 6–15 days. Once in the liver, these organisms differentiate to yield thousands of merozoites, which, following rupture of their host cells, escape into the blood and infect red blood cells, thus beginning the erythrocytic stage of the life cycle.[30] The parasite escapes from the liver undetected by wrapping itself in the cell membrane of the infected host liver cell.[31]

Within the red blood cells, the parasites multiply further, again asexually, periodically breaking out of their host cells to invade fresh red blood cells. Several such amplification cycles occur. Thus, classical descriptions of waves of fever arise from simultaneous waves of merozoites escaping and infecting red blood cells.

Some P. vivax and P. ovale sporozoites do not immediately develop into exoerythrocytic-phase merozoites, but instead produce hypnozoites that remain dormant for periods ranging from several months (6–12 months is typical) to as long as three years. After a period of dormancy, they reactivate and produce merozoites. Hypnozoites are responsible for long incubation and late relapses in these two species of malaria.[32]

The parasite is relatively protected from attack by the body's immune system because for most of its human life cycle it resides within the liver and blood cells and is relatively invisible to immune surveillance. However, circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P. falciparum parasite displays adhesive proteins on the surface of the infected blood cells, causing the blood cells to stick to the walls of small blood vessels, thereby sequestering the parasite from passage through the general circulation and the spleen.[33] This "stickiness" is the main factor giving rise to hemorrhagic complications of malaria. High endothelial venules (the smallest branches of the circulatory system) can be blocked by the attachment of masses of these infected red blood cells. The blockage of these vessels causes symptoms such as those seen in placental and cerebral malaria. In cerebral malaria the sequestered red blood cells can breach the blood-brain barrier, possibly leading to coma.[34]

Although the red blood cell surface adhesive proteins (called PfEMP1, for Plasmodium falciparum erythrocyte membrane protein 1) are exposed to the immune system, they do not serve as good immune targets because of their extreme diversity; there are at least 60 variations of the protein within a single parasite and effectively limitless versions within parasite populations.[33] The parasite switches between a broad repertoire of PfEMP1 surface proteins, thus staying one step ahead of the pursuing immune system.

Some merozoites turn into male and female gametocytes. If a mosquito pierces the skin of an infected person, it potentially picks up gametocytes within the blood. Fertilization and sexual recombination of the parasite occurs in the mosquito's gut, thereby defining the mosquito as the definitive host of the disease. New sporozoites develop and travel to the mosquito's salivary gland, completing the cycle. Pregnant women are especially attractive to the mosquitoes,[35] and malaria in pregnant women is an important cause of stillbirths, infant mortality and low birth weight,[36] particularly in P. falciparum infection, but also with other species, such as P. vivax.[37]
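The link between synchronized rupture and periodic fever described above can be shown with a toy calculation. The burst size, reinvasion fraction, and starting count below are assumptions for illustration only; the point is that if most parasites complete their erythrocytic cycle in step, merozoite release, and hence fever, recurs on a fixed period (roughly 48 hours for tertian species).

    # Toy model of synchronized erythrocytic cycles (assumed numbers).
    CYCLE_HOURS = 48              # assumed synchronous tertian cycle
    MEROZOITES_PER_SCHIZONT = 16  # assumed burst size per ruptured cell
    REINVASION_FRACTION = 0.5     # assumed share of merozoites that reinvade

    infected_cells = 1000
    for cycle in range(1, 6):
        released = infected_cells * MEROZOITES_PER_SCHIZONT
        # Each synchronized burst corresponds to one wave of fever.
        print(f"hour {cycle * CYCLE_HOURS:3d}: "
              f"burst of {released:,} merozoites (fever spike)")
        infected_cells = int(released * REINVASION_FRACTION)

Because release events cluster at multiples of the cycle length, the model reproduces the classical pattern of fever spikes every two days, with each wave larger than the last as the infection amplifies.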

Diagnosis
Further information: Romanowsky stain, Malaria antigen detection tests

Blood smear from a P. falciparum culture (K1 strain). Several red blood cells have ring stages inside them. Close to the center there is a schizont and on the left a trophozoite.

Since Charles Laveran first visualised the malaria parasite in blood in 1880,[38] the mainstay of malaria diagnosis has been the microscopic examination of blood. Fever and septic shock are commonly misdiagnosed as severe malaria in Africa, leading to a failure to treat other life-threatening illnesses. In malaria-endemic areas, parasitemia does not ensure a diagnosis of severe malaria, because parasitemia can be incidental to other concurrent disease. Recent investigations suggest that malarial retinopathy is better (collective sensitivity of 95% and specificity of 90%) than any other clinical or laboratory feature in distinguishing malarial from non-malarial coma.[39] Although blood is the sample most frequently used to make a diagnosis, both saliva and urine have been investigated as alternative, less invasive specimens.[38]

Symptomatic diagnosis
Areas that cannot afford even simple laboratory diagnostic tests often use only a history of subjective fever as the indication to treat for malaria. Using Giemsa-stained blood smears from children in Malawi, one study showed that when clinical predictors (rectal temperature, nailbed pallor, and splenomegaly) were used as treatment indications, rather than using only a history of subjective fevers, a correct diagnosis increased from 21% to 41% of cases, and unnecessary treatment for malaria was significantly decreased.[40]

Microscopic examination of blood films


For more details on individual parasites, see P. falciparum, P. vivax, P. ovale, P. malariae, and P. knowlesi.

The most economical, preferred, and reliable diagnosis of malaria is microscopic examination of blood films, because each of the four major parasite species has distinguishing characteristics. Two sorts of blood film are traditionally used. Thin films are similar to usual blood films and allow species identification, because the parasite's appearance is best preserved in this preparation. Thick films allow the microscopist to screen a larger volume of blood and are about eleven times more sensitive than the thin film, so picking up low levels of infection is easier on the thick film, but the appearance of the parasite is much more distorted and therefore distinguishing between the different species can be much more difficult. Given the pros and cons of thick and thin smears, both should be used while attempting to make a definitive diagnosis.[41]

From the thick film, an experienced microscopist can detect parasite levels (or parasitemia) as low as 0.0000001% of red blood cells. Diagnosis of species can be difficult because the early trophozoites ("ring form") of all four species look identical, and it is never possible to diagnose species on the basis of a single ring form; species identification is always based on several trophozoites. Note that P. malariae and P. knowlesi (which is the most common cause of malaria in South-east Asia) look very similar under the microscope. However, P. knowlesi parasitemia increases very fast and causes more severe disease than P. malariae, so it is important to identify and treat infections quickly. Therefore, modern methods such as PCR (see "Molecular methods" below) or monoclonal antibody panels that can distinguish between the two should be used in this part of the world.[42]
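For concreteness, parasite density on a thick film is commonly estimated by counting parasites against a fixed number of white blood cells and assuming a standard WBC density. The sketch below uses the widely used convention of 8,000 WBC/µl; the counts and the 5 million RBC/µl figure are illustrative assumptions, not values from the sources cited here.

    # Hedged sketch of thick-film parasite density estimation.
    def parasites_per_ul(parasites_counted, wbc_counted=200,
                         assumed_wbc_per_ul=8000):
        # Parasites are tallied against a fixed number of white blood cells,
        # then scaled by an assumed WBC density.
        return parasites_counted * assumed_wbc_per_ul / wbc_counted

    def percent_parasitemia(density_per_ul, rbc_per_ul=5_000_000):
        # Thin-film parasitemia is usually reported as the percentage of
        # red blood cells infected; ~5 million RBC/ul is a typical adult
        # value (an assumption).
        return 100 * density_per_ul / rbc_per_ul

    density = parasites_per_ul(40)   # 40 parasites seen per 200 WBCs
    print(f"{density:.0f} parasites/ul")                    # -> 1600
    print(f"{percent_parasitemia(density):.3f}% parasitemia")  # -> 0.032%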

Antigen tests
Further information: Malaria antigen detection tests

For areas where microscopy is not available, or where laboratory staff are not experienced at malaria diagnosis, there are commercial antigen detection tests that require only a drop of blood.[43] Immunochromatographic tests (also called malaria rapid diagnostic tests, antigen-capture assays or "dipsticks") have been developed, distributed and field-tested. These tests use finger-stick or venous blood; the completed test takes a total of 15–20 minutes, and the results are read visually as the presence or absence of colored stripes on the dipstick, so they are suitable for use in the field. The threshold of detection by these rapid diagnostic tests is in the range of 100 parasites/µl of blood (commercial kits can range from about 0.002% to 0.1% parasitemia), compared to about 5 parasites/µl by thick film microscopy. One disadvantage is that dipstick tests are qualitative but not quantitative: they can determine if parasites are present in the blood, but not how many.

The first rapid diagnostic tests used P. falciparum glutamate dehydrogenase as the antigen.[44] PGluDH was soon replaced by P. falciparum lactate dehydrogenase (pLDH), a 33 kDa oxidoreductase [EC 1.1.1.27]. It is the last enzyme of the glycolytic pathway, essential for ATP generation, and one of the most abundant enzymes expressed by P. falciparum. PLDH does not persist in the blood but clears at about the same time as the parasites following successful treatment. The lack of antigen persistence after treatment makes the pLDH test useful in predicting treatment failure; in this respect, pLDH is similar to pGluDH. Depending on which monoclonal antibodies are used, this type of assay can distinguish between all five different species of human malaria parasites, because of antigenic differences between their pLDH isoenzymes.

Molecular methods
Molecular methods are available in some clinical laboratories, and rapid real-time assays (for example, QT-NASBA, based on nucleic acid amplification)[45] are being developed with the hope of being able to deploy them in endemic areas. PCR (and other molecular methods) is more accurate than microscopy. However, it is expensive and requires a specialized laboratory. Moreover, levels of parasitemia are not necessarily correlated with the progression of disease, particularly when the parasite is able to adhere to blood vessel walls. Therefore, more sensitive, low-tech diagnostic tools need to be developed in order to detect low levels of parasitemia in the field.[46]

Prevention

Anopheles albimanus mosquito feeding on a human arm. This mosquito is a vector of malaria, and mosquito control is a very effective way of reducing its incidence.

Methods used to prevent the spread of disease, or to protect individuals in areas where malaria is endemic, include prophylactic drugs, mosquito eradication, and the prevention of mosquito bites. The continued existence of malaria in an area requires a combination of high human population density, high mosquito population density, and high rates of transmission from humans to mosquitoes and from mosquitoes to humans. If any of these is lowered sufficiently, the parasite will sooner or later disappear from that area, as happened in North America, Europe and much of the Middle East. However, unless the parasite is eliminated from the whole world, it could become re-established if conditions revert to a combination that favors the parasite's reproduction. Many countries are seeing an increasing number of imported malaria cases due to extensive travel and migration.

Many researchers argue that prevention of malaria may be more cost-effective than treatment of the disease in the long run, but the capital costs required are out of reach of many of the world's poorest people. Economic adviser Jeffrey Sachs estimates that malaria can be controlled for US$3 billion in aid per year. The distribution of funding varies among countries: countries with large populations do not receive the same amount of support, and the 34 countries that received per capita annual support of less than $1 included some of the poorest countries in Africa.

Brazil, Eritrea, India, and Vietnam have, unlike many other developing nations, successfully reduced the malaria burden. Common success factors included conducive country conditions, a targeted technical approach using a package of effective tools, data-driven decision-making, active leadership at all levels of government, involvement of communities, decentralized implementation and control of finances, skilled technical and managerial capacity at national and sub-national levels, hands-on technical and programmatic support from partner agencies, and sufficient and flexible financing.[47]

Vector control
Further information: Mosquito control

Efforts to eradicate malaria by eliminating mosquitoes have been successful in some areas. Malaria was once common in the United States and southern Europe, but vector control programs, in conjunction with the monitoring and treatment of infected humans, eliminated it from those regions. In some areas, the draining of wetland breeding grounds and better sanitation were adequate. Malaria was eliminated from the northern parts of the USA in the early 20th century by such methods, and the use of the pesticide DDT eliminated it from the South by 1951.[48] In 2002, there were 1,059 cases of malaria reported in the US, including eight deaths, but in only five of those cases was the disease contracted in the United States.

Before DDT, malaria was also successfully eradicated or controlled in several tropical areas by removing or poisoning the breeding grounds of the mosquitoes or the aquatic habitats of the larval stages, for example by filling or applying oil to places with standing water. These methods have seen little application in Africa for more than half a century.[49] In the 1950s and 1960s, there was a major public health effort to eradicate malaria worldwide by selectively targeting mosquitoes in areas where malaria was rampant.[50] However, these efforts have so far failed to eradicate malaria in many parts of the developing world; the problem is most prevalent in Africa.

Sterile insect technique is emerging as a potential mosquito control method. Progress towards transgenic, or genetically modified, insects suggests that wild mosquito populations could be made malaria-resistant. Researchers at Imperial College London created the world's first transgenic malaria mosquito,[51] with the first plasmodium-resistant species announced by a team at Case Western Reserve University in Ohio in 2002.[52] Successful replacement of current populations with a new genetically modified population relies upon a drive mechanism, such as transposable elements, to allow for non-Mendelian inheritance of the gene of interest. However, this approach contains many difficulties and success is a distant prospect.[53] An even more futuristic method of vector control is the idea that lasers could be used to kill flying mosquitoes.[54]

Prophylactic drugs
Main article: Malaria prophylaxis

Several drugs, most of which are also used for treatment of malaria, can be taken preventively. Generally, these drugs are taken daily or weekly, at a lower dose than would be used for treatment of a person who had actually contracted the disease. Use of prophylactic drugs is seldom practical for full-time residents of malaria-endemic areas, and their use is usually restricted to short-term visitors and travelers to malarial regions. This is due to the cost of purchasing the drugs, negative side effects from long-term use, and the difficulty of obtaining some effective antimalarial drugs outside of wealthy nations.

Quinine was used starting in the 17th century as a prophylactic against malaria. The development of more effective alternatives such as quinacrine, chloroquine, and primaquine in the 20th century reduced the reliance on quinine. Today, quinine is still used to treat chloroquine-resistant Plasmodium falciparum, as well as severe and cerebral stages of malaria, but is not generally used for prophylaxis.

Modern drugs used preventively include mefloquine (Lariam), doxycycline (available generically), and the combination of atovaquone and proguanil hydrochloride (Malarone). The choice of which drug to use depends on which drugs the parasites in the area are resistant to, as well as side-effects and other considerations. The prophylactic effect does not begin immediately upon starting the drugs, so people temporarily visiting malaria-endemic areas usually begin taking the drugs one to two weeks before arriving and must continue taking them for 4 weeks after leaving (with the exception of atovaquone-proguanil, which only needs to be started 2 days prior and continued for 7 days afterwards). The use of prophylactic drugs where malaria-bearing mosquitoes are present may encourage the development of partial immunity.[55]
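The timing rules above translate into a simple date calculation. The sketch below encodes the quoted lead and tail times (one week before arrival to four weeks after departure for a typical regimen, picking one week from the stated "one to two weeks" range; 2 days before to 7 days after for atovaquone-proguanil); the trip dates are invented for illustration.

    # Hedged sketch of prophylaxis scheduling from the timing rules above.
    from datetime import date, timedelta

    # (days before arrival, days after departure), per the paragraph above
    REGIMENS = {
        "typical weekly/daily regimen": (7, 28),
        "atovaquone-proguanil": (2, 7),
    }

    def prophylaxis_window(arrive, depart, lead_days, tail_days):
        # Dosing must bracket the whole period of potential exposure.
        return (arrive - timedelta(days=lead_days),
                depart + timedelta(days=tail_days))

    arrive, depart = date(2010, 6, 1), date(2010, 6, 14)  # hypothetical trip
    for drug, (lead, tail) in REGIMENS.items():
        start, stop = prophylaxis_window(arrive, depart, lead, tail)
        print(f"{drug}: start {start}, continue through {stop}")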

Indoor residual spraying


Main articles: Indoor residual spraying and DDT use against malaria

Indoor residual spraying (IRS) is the practice of spraying insecticides on the interior walls of homes in malaria-affected areas. After feeding, many mosquito species rest on a nearby surface while digesting the bloodmeal, so if the walls of dwellings have been coated with insecticides, the resting mosquitoes will be killed before they can bite another victim and transfer the malaria parasite.

The first pesticide used for IRS was DDT.[48] Although it was initially used exclusively to combat malaria, its use quickly spread to agriculture. In time, pest control, rather than disease control, came to dominate DDT use, and this large-scale agricultural use led to the evolution of resistant mosquitoes in many regions. The DDT resistance shown by Anopheles mosquitoes can be compared to antibiotic resistance shown by bacteria: the overuse of anti-bacterial soaps and antibiotics led to antibiotic resistance in bacteria, similar to how overspraying of DDT on crops led to DDT resistance in Anopheles mosquitoes. During the 1960s, awareness of the negative consequences of its indiscriminate use increased, ultimately leading to bans on agricultural applications of DDT in many countries in the 1970s. Since DDT use has been limited or banned in agriculture for some time, DDT may now be more effective as a method of disease control.

Although DDT has never been banned for use in malaria control, and there are several other insecticides suitable for IRS, some advocates have claimed that bans are responsible for tens of millions of deaths in tropical countries where DDT had once been effective in controlling malaria. Furthermore, most of the problems associated with DDT use stem specifically from its industrial-scale application in agriculture, rather than its use in public health.[56]

The World Health Organization (WHO) currently advises the use of 12 different insecticides in IRS operations. These include DDT and a series of alternative insecticides (such as the pyrethroids permethrin and deltamethrin), both to combat malaria in areas where mosquitoes are DDT-resistant and to slow the evolution of resistance.[57] This public health use of small amounts of DDT is permitted under the Stockholm Convention on Persistent Organic Pollutants (POPs), which prohibits the agricultural use of DDT.[58] However, because of its legacy, many developed countries discourage DDT use even in small quantities.[59][60]

One problem with all forms of indoor residual spraying is insecticide resistance via the evolution of mosquitoes. According to a study published on mosquito behavior and vector control, mosquito species affected by IRS are endophilic species (species that tend to rest and live indoors), and due to the irritation caused by spraying, their evolutionary descendants are trending towards becoming exophilic (species that tend to rest and live out of doors), meaning that they are not as affected, if affected at all, by the IRS, rendering it somewhat useless as a defense mechanism.[61]

Mosquito nets and bedclothes


Main article: Mosquito net

Mosquito nets help keep mosquitoes away from people and greatly reduce the infection and transmission of malaria. The nets are not a perfect barrier, so they are often treated with an insecticide designed to kill the mosquito before it has time to search for a way past the net. Insecticide-treated nets (ITNs) are estimated to be twice as effective as untreated nets and to offer greater than 70% protection compared with no net.[62] Although ITNs are proven to be very effective against malaria, less than 2% of children in urban areas in Sub-Saharan Africa are protected by them. Since Anopheles mosquitoes feed at night, the preferred method is to hang a large "bed net" above the center of a bed such that it drapes down and covers the bed completely. The distribution of mosquito nets impregnated with insecticides such as permethrin or deltamethrin has been shown to be an extremely effective method of malaria prevention, and it is also one of the most cost-effective methods of prevention. These nets can often be obtained for around $2.50 to $3.50 (2 to 3 euros) from the United Nations, the World Health Organization (WHO), and others. ITNs have been shown to be the most cost-effective prevention method against malaria and are part of the WHO's Millennium Development Goals (MDGs). While some experts argue that international organizations should distribute ITNs and LLINs to people for free in order to maximize coverage (since such a policy would reduce price barriers), others insist that cost-sharing between the international organization and recipients would lead to greater usage of the net (arguing that people will value a good more if they pay for it). Additionally, proponents of cost-sharing argue that such a policy ensures that nets are efficiently allocated to those people who most need them (or are most vulnerable to infection). Through a "selection effect", they argue, those people who most need the bed nets will choose to purchase them, while those less in need will opt out. However, a randomized controlled trial of ITN uptake among pregnant women in Kenya, conducted by the economists Pascaline Dupas and Jessica Cohen, found that cost-sharing does not necessarily increase the usage intensity of ITNs, nor does it induce uptake by those most vulnerable to infection, as compared to a policy of free distribution.[63] In some cases, cost-sharing can actually decrease demand for mosquito nets by erecting a price barrier. Dupas and Cohen's findings support the argument that free distribution of ITNs can be more effective than cost-sharing in both increasing coverage and saving lives. In a cost-effectiveness analysis, Dupas and Cohen note that cost-sharing is at best marginally more cost-effective than free distribution, but that free distribution leads to many more lives saved.[63] The researchers base their conclusions about the cost-effectiveness of free distribution on the proven spillover benefits of increased ITN usage.[64] When a large number of nets are distributed in one residential area, their chemical additives help reduce the number of mosquitoes in the environment. With fewer mosquitoes in the environment, the chances of malaria infection for recipients and non-recipients alike are significantly reduced. (In other words, when ITNs are highly concentrated in one residential cluster or community, the importance of the physical barrier effect of ITNs decreases relative to the positive externality effect of the nets in creating a mosquito-free environment.) For maximum effectiveness, the nets should be re-impregnated with insecticide every six months. This process poses a significant logistical problem in rural areas. New technologies like Olyset or DawaPlus allow for the production of long-lasting insecticidal mosquito nets (LLINs), which release insecticide for approximately 5 years[65] and cost about US$5.50. ITNs protect people sleeping under the net and simultaneously kill mosquitoes that contact the net. Some protection is also provided to others by this method, including people sleeping in the same room but not under the net. While distributing mosquito nets is a major component of malaria prevention, community education and awareness of the dangers of malaria are associated with distribution campaigns to make sure people who receive a net know how to use it. "Hang Up" campaigns, such as the ones conducted by volunteers of the International Red Cross and Red Crescent Movement, consist of visiting households that received a net at the end of the campaign or just before the rainy season, ensuring that the net is being used properly and that the people most vulnerable to malaria, such as young children and the elderly, sleep under it. A study conducted by the CDC in Sierra Leone showed a 22 percent increase in net utilization following a personal visit from a volunteer living in the same community promoting net usage. A study in Togo showed similar improvements.[66] Mosquito nets are often unaffordable to people in developing countries, especially for those most at risk. Only 1 out of 20 people in Africa owns a bed net. Nets are also often distributed through vaccine campaigns using voucher subsidies, such as the measles campaign for children. A study among Afghan refugees in Pakistan found that treating top-sheets and chaddars (head coverings) with permethrin has similar effectiveness to using a treated net, but is much cheaper.[67] Another alternative approach uses spores of the fungus Beauveria bassiana, sprayed on walls and bed nets, to kill mosquitoes. While some mosquitoes have developed resistance to chemicals, they have not been found to develop a resistance to fungal infections.[68]
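The spillover argument above is essentially a claim about how personal protection and the community-wide mosquito-killing effect combine. The following minimal Python sketch illustrates that interaction; apart from the roughly 70% personal protection figure quoted earlier, all parameter values are illustrative assumptions, not estimates from the Dupas and Cohen studies.

```python
# Toy model of the ITN spillover ("positive externality") effect described above.
# Parameter values are illustrative assumptions only.

def infection_risk(coverage, base_risk=0.30, barrier_effect=0.70, kill_effect=0.5):
    """Annual malaria infection risk under community-wide ITN coverage.

    coverage:       fraction of households sleeping under an ITN (0..1)
    base_risk:      assumed risk with no nets in the community
    barrier_effect: personal risk reduction from sleeping under a net
                    (~70%, per the ITN-versus-no-net figure quoted above)
    kill_effect:    assumed community-wide risk reduction at full coverage,
                    from nets killing mosquitoes that contact them
    """
    # Community effect: fewer mosquitoes as more nets are deployed.
    community_risk = base_risk * (1 - kill_effect * coverage)
    protected = community_risk * (1 - barrier_effect)   # net users
    unprotected = community_risk                        # non-users also benefit
    return protected, unprotected

for cov in (0.1, 0.5, 0.9):
    p, u = infection_risk(cov)
    print(f"coverage {cov:.0%}: risk under net {p:.1%}, without net {u:.1%}")
```

As coverage rises, the printed risk falls even for people without nets, which is the spillover benefit the free-distribution argument relies on.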

Vaccination
Further information: Malaria vaccine

Immunity (or, more accurately, tolerance) does occur naturally, but only in response to repeated infection with multiple strains of malaria.[69] Vaccines for malaria are under development, with no completely effective vaccine yet available. The first promising studies demonstrating the potential for a malaria vaccine were performed in 1967 by immunizing mice with live, radiation-attenuated sporozoites, which provided protection to about 60% of the mice upon subsequent injection with normal, viable sporozoites.[70] Since the 1970s, there has been a considerable effort to develop similar vaccination strategies in humans. It was determined that an individual can be protected from a P. falciparum infection if they receive over 1,000 bites from infected, irradiated mosquitoes.[71] It has been generally accepted that this vaccination strategy is impractical to provide to at-risk individuals, but that view has recently been challenged by the work of Dr. Stephen Hoffman, one of the key researchers who originally sequenced the genome of Plasmodium falciparum. His recent work has revolved around solving the logistical problem of isolating and preparing parasites equivalent to 1,000 irradiated mosquitoes for mass storage and inoculation of human beings. His company has recently received several multi-million dollar grants from the Bill & Melinda Gates Foundation and the U.S. government to begin early clinical studies in 2007 and 2008.[72] The Seattle Biomedical Research Institute (SBRI), funded by the Malaria Vaccine Initiative, assures potential volunteers that "the [2009] clinical trials won't be a life-threatening experience. While many volunteers [in Seattle] will actually contract malaria, the cloned strain used in the experiments can be quickly cured, and does not cause a recurring form of the disease." "Some participants will get experimental drugs or vaccines, while others will get placebo."[73] In parallel, much work has been performed to try to understand the immunological processes that provide protection after immunization with irradiated sporozoites. After the mouse vaccination study in 1967,[70] it was hypothesized that the injected sporozoites themselves were being recognized by the immune system, which was in turn creating antibodies against the parasite. It was determined that the immune system was creating antibodies against the circumsporozoite protein (CSP) which coats the sporozoite.[74] Moreover, antibodies against CSP prevented the sporozoite from invading hepatocytes.[75] CSP was therefore chosen as the most promising protein on which to develop a vaccine against the malaria sporozoite. It is for these historical reasons that vaccines based on CSP are the most numerous of all malaria vaccines. Presently, there is a huge variety of vaccine candidates on the table. Pre-erythrocytic vaccines (vaccines that target the parasite before it reaches the blood), in particular vaccines based on CSP, make up the largest group of research for the malaria vaccine. Other vaccine candidates include: those that seek to induce immunity to the blood stages of the infection; those that seek to avoid the more severe pathologies of malaria by preventing adherence of the parasite to blood venules and placenta; and transmission-blocking vaccines that would stop the development of the parasite in the mosquito right after the mosquito has taken a bloodmeal from an infected person.[76] It is hoped that the knowledge of the P. falciparum genome, the sequencing of which was completed in 2002,[77] will provide targets for new drugs or vaccines.[78]

The first vaccine developed that underwent field trials was SPf66, developed by Manuel Elkin Patarroyo in 1987. It presents a combination of antigens from the sporozoite (using CS repeats) and merozoite parasites. During phase I trials a 75% efficacy rate was demonstrated, and the vaccine appeared to be well tolerated by subjects and immunogenic. The phase IIb and III trials were less promising, with the efficacy falling to between 38.8% and 60.2%. A trial carried out in Tanzania in 1993 demonstrated an efficacy of 31% after a year's follow-up; however, the most recent (though controversial) study in The Gambia did not show any effect. Despite the relatively long trial periods and the number of studies carried out, it is still not known how the SPf66 vaccine confers immunity; it therefore remains an unlikely solution to malaria. The CSP-based vaccine was the next vaccine that initially appeared promising enough to undergo trials. It is also based on the circumsporozoite protein, but additionally has the recombinant (Asn-Ala-Pro)15(Asn-Val-Asp-Pro)2-Leu-Arg (R32LR) protein covalently bound to a purified Pseudomonas aeruginosa toxin (A9). However, a complete lack of protective immunity was demonstrated at an early stage in those inoculated: the study group used in Kenya had an 82% incidence of parasitaemia, whilst the control group had an 89% incidence. The vaccine was intended to cause an increased T-lymphocyte response in those exposed; this was also not observed. The efficacy of Patarroyo's vaccine has been disputed, with some US scientists concluding in The Lancet (1997) that "the vaccine was not effective and should be dropped", while Patarroyo accused them of "arrogance", putting their assertions down to the fact that he came from a developing country. The RTS,S/AS02A vaccine is the candidate furthest along in vaccine trials. It is being developed by a partnership between the PATH Malaria Vaccine Initiative (a grantee of the Gates Foundation), the pharmaceutical company GlaxoSmithKline, and the Walter Reed Army Institute of Research.[79] In the vaccine, a portion of CSP has been fused to the immunogenic "S antigen" of the hepatitis B virus; this recombinant protein is injected alongside the potent AS02A adjuvant.[76] In October 2004, the RTS,S/AS02A researchers announced the results of a Phase IIb trial, indicating the vaccine reduced infection risk by approximately 30% and the severity of infection by over 50%. The study looked at over 2,000 Mozambican children.[80] More recent testing of the RTS,S/AS02A vaccine has focused on the safety and efficacy of administering it earlier in infancy: in October 2007, the researchers announced the results of a phase I/IIb trial conducted on 214 Mozambican infants between the ages of 10 and 18 months, in which the full three-dose course of the vaccine led to a 62% reduction of infection with no serious side-effects save some pain at the point of injection.[81] The need for further research will delay this vaccine from commercial release until around 2011.[82]
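The efficacy figures quoted in these trials are conventionally computed from attack rates in the vaccinated and control arms. A short sketch (the helper function is illustrative, not from any trial's analysis code), applied to the Kenyan CSP numbers above, shows why those results were read as a complete lack of protection:

```python
# Vaccine efficacy as conventionally computed from attack rates:
#   efficacy = 1 - (attack rate among vaccinated) / (attack rate among controls)
def vaccine_efficacy(attack_vaccinated, attack_control):
    return 1 - attack_vaccinated / attack_control

# CSP trial in Kenya, as quoted above: 82% parasitaemia in the vaccinated
# group versus 89% in the control group.
print(f"{vaccine_efficacy(0.82, 0.89):.1%}")  # ~7.9%: essentially no protection
```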

Other methods
Education in recognizing the symptoms of malaria has reduced the number of cases in some areas of the developing world by as much as 20%. Recognizing the disease in the early stages can also stop it from becoming a killer. Education can also inform people to cover over areas of stagnant, still water (e.g. water tanks), which are ideal breeding grounds for the mosquito, thus cutting down the risk of transmission between people. This is most often put into practice in urban areas, where large populations are concentrated in a confined space and transmission is most likely. The Malaria Control Project is currently using idle computing power donated by individual volunteers around the world (see Volunteer computing and BOINC) to simulate models of the health effects and transmission dynamics in order to find the best method or combination of methods for malaria control. This modeling is extremely computer-intensive due to the simulations of large human populations with a vast range of parameters related to biological and social factors that influence the spread of the disease. It is expected to take a few months using volunteered computing power, compared to the 40 years it would have taken with the resources previously available to the scientists who developed the program.[83] An example of the importance of computer modeling in planning malaria eradication programs is shown in the paper by Águas and others, who showed that eradication of malaria is crucially dependent on finding and treating the large number of people in endemic areas with asymptomatic malaria, who act as a reservoir for infection.[84] The human malaria parasites have no significant animal reservoir, so eradication of the disease from the human population would be expected to be effective. Other interventions for the control of malaria include mass drug administrations and intermittent preventive therapy.
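The transmission models run by such projects are far richer than anything that fits here, but a minimal Ross-Macdonald-style sketch conveys the basic structure: coupled infection dynamics in humans and mosquitoes. All parameter values below are illustrative assumptions, not calibrated to any real setting.

```python
# Minimal Ross-Macdonald-style malaria transmission sketch (illustrative only;
# the volunteer-computing project referenced above uses far richer simulations).
# x = fraction of humans infected, y = fraction of mosquitoes infected.

a, b, c = 0.3, 0.5, 0.5      # bite rate; mosquito->human, human->mosquito transmission (assumed)
m, r, mu = 10.0, 0.01, 0.1   # mosquitoes per human; human recovery; mosquito death (assumed)
dt, x, y = 0.1, 0.01, 0.0    # time step (days) and initial conditions

for step in range(int(365 / dt)):           # simulate one year, simple Euler steps
    dx = m * a * b * y * (1 - x) - r * x    # humans: new infections minus recovery
    dy = a * c * x * (1 - y) - mu * y       # mosquitoes: new infections minus death
    x, y = x + dx * dt, y + dy * dt

print(f"endemic human prevalence after 1 year: {x:.1%}")
```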

Treatment
Further information: Antimalarial drug

Active malaria infection with P. falciparum is a medical emergency requiring hospitalization. Infection with P. vivax, P. ovale or P. malariae can often be treated on an outpatient basis. Treatment of malaria involves supportive measures as well as specific antimalarial drugs. Most antimalarial drugs are produced industrially and are sold at pharmacies. However, as the cost of such medicines is often too high for most people in the developing world, some herbal remedies (such as Artemisia annua tea[85]) have also been developed, and have gained support from international organisations such as Médecins Sans Frontières. When properly treated, someone with malaria can expect a complete recovery.[86]

Counterfeit drugs
Sophisticated counterfeits have been found in several Asian countries, such as Cambodia,[87] China,[88] Indonesia, Laos, Thailand and Vietnam, and are an important cause of avoidable death in those countries.[89] The WHO has said that studies indicate that up to 40% of artesunate-based malaria medications are counterfeit, especially in the Greater Mekong region, and has established a rapid alert system to enable information about counterfeit drugs to be rapidly reported to the relevant authorities in participating countries.[90] There is no reliable way for doctors or lay people to detect counterfeit drugs without help from a laboratory. Companies are attempting to combat the persistence of counterfeit drugs by using new technology to provide security from source to distribution.

Epidemiology
Further information: Diseases of poverty, Tropical disease

Countries which have regions where malaria is endemic as of 2003 (coloured yellow).[91] Countries in green are free of indigenous cases of malaria in all areas.

Disability-adjusted life year for malaria per 100,000 inhabitants in 2002.


no data  <10  10-50  50-100  100-250  250-500  500-1000  1000-1500  1500-2000  2000-2500  2500-3000  3000-3500  >3500

Malaria causes about 250 million cases of fever and approximately one million deaths annually.[92] The vast majority of cases occur in children under 5 years old;[93] pregnant women are also especially vulnerable. Despite efforts to reduce transmission and increase treatment, there has been little change in which areas are at risk of this disease since 1992.[94] Indeed, if the prevalence of malaria stays on its present upward course, the death rate could double in the next twenty years.[95] Precise statistics are unknown because many cases occur in rural areas where people do not have access to hospitals or the means to afford health care. As a consequence, the majority of cases are undocumented.[95] Although co-infection with HIV and malaria does cause increased mortality, this is less of a problem than HIV/tuberculosis co-infection, because the two diseases usually attack different age ranges, with malaria being most common in the young and active tuberculosis most common in the old.[96] Although HIV/malaria co-infection produces less severe symptoms than the interaction between HIV and TB, HIV and malaria do contribute to each other's spread. This effect comes from malaria increasing viral load and HIV infection increasing a person's susceptibility to malaria infection.[97] Malaria is presently endemic in a broad band around the equator, in areas of the Americas, many parts of Asia, and much of Africa; however, it is in sub-Saharan Africa where 85-90% of malaria fatalities occur.[98] The geographic distribution of malaria within large regions is complex, and malaria-afflicted and malaria-free areas are often found close to each other.[99] In drier areas, outbreaks of malaria can be predicted with reasonable accuracy by mapping rainfall.[100] Malaria is more common in rural areas than in cities; this is in contrast to dengue fever, where urban areas present the greater risk.[101] For example, the cities of Vietnam, Laos and Cambodia are essentially malaria-free, but the disease is present in many rural regions.[102] By contrast, in Africa malaria is present in both rural and urban areas, though the risk is lower in the larger cities.[103] The global endemic levels of malaria have not been mapped since the 1960s. However, the Wellcome Trust, UK, has funded the Malaria Atlas Project[104] to rectify this, providing a more contemporary and robust means with which to assess current and future malaria disease burden.

History
Further information: History of malaria

Malaria has infected humans for over 50,000 years, and Plasmodium may have been a human pathogen for the entire history of the species.[105] Close relatives of the human malaria parasites remain common in chimpanzees.[106] References to the unique periodic fevers of malaria are found throughout recorded history, beginning in 2700 BC in China.[107] The term malaria originates from Medieval Italian: mala aria, "bad air"; the disease was formerly called ague or marsh fever due to its association with swamps and marshland.[108] Malaria was once common in most of Europe and North America, where it is no longer endemic,[109] though imported cases do occur. Scientific studies on malaria made their first significant advance in 1880, when Charles Louis Alphonse Laveran, a French army doctor working in the military hospital of Constantine in Algeria, observed parasites for the first time inside the red blood cells of people suffering from malaria. He therefore proposed that malaria is caused by this organism, the first time a protist was identified as causing disease.[110] For this and later discoveries, he was awarded the 1907 Nobel Prize for Physiology or Medicine. The malarial parasite was named Plasmodium by the Italian scientists Ettore Marchiafava and Angelo Celli.[111] A year later, Carlos Finlay, a Cuban doctor treating patients with yellow fever in Havana, provided strong evidence that mosquitoes were transmitting disease to and from humans.[112] This work followed earlier suggestions by Josiah C. Nott[113] and work by Patrick Manson on the transmission of filariasis.[114] However, it was Britain's Sir Ronald Ross, working in the Presidency General Hospital in Calcutta, who finally proved in 1898 that malaria is transmitted by mosquitoes. He did this by showing that certain mosquito species transmit malaria to birds and by isolating malaria parasites from the salivary glands of mosquitoes that had fed on infected birds.[115] For this work Ross received the 1902 Nobel Prize in Medicine. After resigning from the Indian Medical Service, Ross worked at the newly established Liverpool School of Tropical Medicine and directed malaria-control efforts in Egypt, Panama, Greece and Mauritius.[116] The findings of Finlay and Ross were later confirmed by a medical board headed by Walter Reed in 1900, and its recommendations were implemented by William C. Gorgas in the health measures undertaken during construction of the Panama Canal. This public-health work saved the lives of thousands of workers and helped develop the methods used in future public-health campaigns against the disease. The first effective treatment for malaria came from the bark of the cinchona tree, which contains quinine. This tree grows on the slopes of the Andes, mainly in Peru. A tincture made of this natural product was used by the inhabitants of Peru to control malaria, and the Jesuits introduced the practice to Europe during the 1640s, where it was rapidly accepted.[117] However, it was not until 1820 that the active ingredient, quinine, was extracted from the bark and isolated and named by the French chemists Pierre Joseph Pelletier and Joseph Bienaimé Caventou.[118] In the early 20th century, before antibiotics became available, Julius Wagner-Jauregg discovered that patients with syphilis could be treated by intentionally infecting them with malaria; the resulting fever would kill the syphilis spirochetes, and quinine would then be administered to control the malaria. Although some patients died from malaria, this was considered preferable to the almost-certain death from syphilis.[119]

The first successful continuous P. falciparum culture was established in 1976 by William Trager and James B. Jensen, which substantially facilitated research into the molecular biology of the parasite and the development of new drugs.[120][121] Although the blood stage and mosquito stages of the malaria life cycle were identified in the 19th and early 20th centuries, it was not until the 1980s that the latent liver form of the parasite was observed.[122][123] The discovery of this latent form finally explained why people could appear to be cured of malaria but still relapse years after the parasite had disappeared from their bloodstreams.

Evolutionary pressure of malaria on human genes


Further information: Genetic resistance to malaria

Malaria is thought to have been the greatest selective pressure on the human genome in recent history.[124] This is due to the high levels of mortality and morbidity caused by malaria, especially the P. falciparum species.

Sickle-cell disease

Frequency and origin of malaria cases in 1996.[125]

The most-studied influence of the malaria parasite upon the human genome is the hereditary blood disease sickle-cell disease. Full sickle-cell disease is seriously harmful, but carriers of a single sickle-cell allele (sickle-cell trait) have substantial protection against malaria. In sickle-cell disease, there is a mutation in the HBB gene, which encodes the beta-globin subunit of haemoglobin. The normal allele encodes a glutamate at position six of the beta-globin protein, whereas the sickle-cell allele encodes a valine. This change from a hydrophilic to a hydrophobic amino acid encourages binding between haemoglobin molecules, with polymerization of haemoglobin deforming red blood cells into a "sickle" shape. Such deformed cells are cleared rapidly from the blood, mainly in the spleen, for destruction and recycling. In the merozoite stage of its life cycle, the malaria parasite lives inside red blood cells, and its metabolism changes the internal chemistry of the red blood cell. Infected cells normally survive until the parasite reproduces, but, if the red cell contains a mixture of sickle and normal haemoglobin, it is likely to become deformed and be destroyed before the daughter parasites emerge. Thus, individuals heterozygous for the mutated allele, known as sickle-cell trait, may have a low and usually unimportant level of anaemia, but also have a greatly reduced chance of serious malaria infection. This is a classic example of heterozygote advantage. Individuals homozygous for the mutation have full sickle-cell disease and, in traditional societies, rarely live beyond adolescence. However, in populations where malaria is endemic, the frequency of sickle-cell genes is around 10%. The existence of four haplotypes of sickle-type hemoglobin suggests that this mutation has emerged independently at least four times in malaria-endemic areas, further demonstrating its evolutionary advantage in such affected regions. There are also other mutations of the HBB gene that produce haemoglobin molecules capable of conferring similar resistance to malaria infection. These mutations produce the haemoglobin types HbE and HbC, which are common in Southeast Asia and Western Africa, respectively.

Thalassaemias

Another well-documented set of mutations found in the human genome associated with malaria are those involved in causing blood disorders known as thalassaemias. Studies in Sardinia and Papua New Guinea have found that the gene frequency of β-thalassaemias is related to the level of malarial endemicity in a given population. A study of more than 500 children in Liberia found that those with β-thalassaemia had a 50% decreased chance of getting clinical malaria. Similar studies have found links between gene frequency and malaria endemicity in the α+ form of α-thalassaemia. Presumably these genes have also been selected in the course of human evolution.

Duffy antigens

The Duffy antigens are antigens expressed on red blood cells and other cells in the body, acting as a chemokine receptor. The expression of Duffy antigens on blood cells is encoded by Fy genes (Fya, Fyb, Fyc, etc.). Plasmodium vivax malaria uses the Duffy antigen to enter blood cells. However, it is possible to express no Duffy antigen on red blood cells (Fy-/Fy-). This genotype confers complete resistance to P. vivax infection. The genotype is very rare in European, Asian and American populations, but is found in almost all of the indigenous population of West and Central Africa.[126] This is thought to be due to very high exposure to P. vivax in Africa over the last few thousand years.

G6PD

Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme that normally protects red blood cells from the effects of oxidative stress. However, a genetic deficiency in this enzyme results in increased protection against severe malaria.

HLA and interleukin-4

HLA-B53 is associated with low risk of severe malaria. This MHC class I molecule presents liver-stage and sporozoite antigens to T cells. Interleukin-4, encoded by IL4, is produced by activated T cells and promotes the proliferation and differentiation of antibody-producing B cells. A study of the Fulani of Burkina Faso, who have both fewer malaria attacks and higher levels of antimalarial antibodies than do neighboring ethnic groups, found that the IL4-524 T allele was associated with elevated antibody levels against malaria antigens, which raises the possibility that this might be a factor in increased resistance to malaria.[127]

Resistance in South Asia

The lowest Himalayan foothills and Inner Terai or Doon valleys of Nepal and India are highly malarial, due to a warm climate and marshes sustained during the dry season by groundwater percolating down from the higher hills. Malarial forests were intentionally maintained by the rulers of Nepal as a defensive measure. Humans attempting to live in this zone suffered much higher mortality than at higher elevations or below on the drier Gangetic Plain; however, the Tharu people had lived in this zone long enough to evolve resistance via multiple genes. Endogamy along caste and ethnic lines appears to have confined these genes to the Tharu community. Otherwise these genes would probably have become nearly universal in South Asia and beyond, because of their considerable survival value and the apparent lack of negative effects comparable to those of sickle-cell anaemia.
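The roughly 10% sickle-allele frequency cited above is close to what standard population-genetics theory predicts for a balanced polymorphism. A minimal sketch follows; the fitness costs are illustrative assumptions chosen purely to show the mechanism, not measured values.

```python
# Equilibrium allele frequency under heterozygote advantage (overdominance).
# With relative fitnesses  AA: 1 - s (malaria risk),  AS: 1,  SS: 1 - t
# (sickle-cell disease), the sickle allele S settles at  q* = s / (s + t).

s = 0.10   # assumed fitness cost of AA homozygotes, from malaria
t = 0.80   # assumed fitness cost of SS homozygotes, from sickle-cell disease

q_star = s / (s + t)
print(f"equilibrium sickle-allele frequency: {q_star:.1%}")  # ~11%, near the ~10% observed
```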

Society and culture


Malaria is not just a disease commonly associated with poverty, but is also a cause of poverty and a major hindrance to economic development. Tropical regions are affected most; however, malaria's furthest extent reaches into some temperate zones with extreme seasonal changes. The disease has been associated with major negative economic effects on regions where it is widespread. During the late 19th and early 20th centuries, it was a major factor in the slow economic development of the American southern states.[128] A comparison of average per capita GDP in 1995, adjusted for purchasing power parity, between countries with malaria and countries without malaria gives a fivefold difference ($1,526 USD versus $8,268 USD). In countries where malaria is common, average per capita GDP rose (between 1965 and 1990) only 0.4% per year, compared to 2.4% per year in other countries.[129] Poverty is both cause and effect, however, since the poor do not have the financial capacity to prevent or treat the disease. In Malawi in 1994, the lowest income group spent 32% of its annual income on this disease, compared with 4% for low-to-high-income households.[130] In its entirety, the economic impact of malaria has been estimated to cost Africa $12 billion USD every year. The economic impact includes costs of health care, working days lost due to sickness, days lost in education, decreased productivity due to brain damage from cerebral malaria, and loss of investment and tourism.[93] In some countries with a heavy malaria burden, the disease may account for as much as 40% of public health expenditure, 30-50% of inpatient admissions, and up to 50% of outpatient visits.[131]

Dengue fever
From Wikipedia, the free encyclopedia
"Dengue Fever" redirects here. For the band of the same name, see Dengue Fever (band).

Dengue virus

A TEM micrograph showing Dengue virus virions (the cluster of dark dots near the center).

Virus classification
Group: Group IV ((+)ssRNA)
Family: Flaviviridae
Genus: Flavivirus
Species: Dengue virus

Dengue fever
Classification and external resources
ICD-10: A90
ICD-9: 061
DiseasesDB: 3564
MedlinePlus: 001374
eMedicine: med/528
MeSH: C02.782.417.214

Dengue fever (pronounced UK: /ˈdɛŋɡeɪ/, US: /ˈdɛŋɡiː/) and dengue hemorrhagic fever (DHF) are acute febrile diseases which occur in the tropics, can be life-threatening, and are caused by four closely related virus serotypes of the genus Flavivirus, family Flaviviridae.[1] It is also known as breakbone fever. It occurs widely in the tropics, including northern Argentina, northern Australia, the entirety of Bangladesh, Barbados, Bolivia,[2] Brazil, Cambodia, Costa Rica, the Dominican Republic, El Salvador, Guatemala, Guyana, Honduras, India, Indonesia, Jamaica, Laos, Malaysia, Mexico, Micronesia, Panama, Paraguay,[3] the Philippines, Puerto Rico, Samoa,[4] Singapore, Sri Lanka, Suriname, Taiwan, Thailand, Trinidad, Venezuela and Vietnam, and increasingly in southern China.[5] Unlike malaria, dengue is just as prevalent in the urban districts of its range as in rural areas. Each serotype is sufficiently different that there is no cross-protection, and epidemics caused by multiple serotypes (hyperendemicity) can occur. Dengue is transmitted to humans by the Aedes aegypti or, more rarely, the Aedes albopictus mosquito, both of which feed during the day.[6] The WHO says some 2.5 billion people, two fifths of the world's population, are now at risk from dengue, and estimates that there may be 50 million cases of dengue infection worldwide every year. The disease is now endemic in more than 100 countries.[7]

Contents

1 Signs and symptoms
2 Diagnosis
3 Cause
4 Prevention
  4.1 Vaccine development
  4.2 Mosquito control
    4.2.1 Mesocyclops
    4.2.2 Wolbachia
    4.2.3 Mosquito mapping
  4.3 Potential antiviral approaches
  4.4 Sterile insect technique
5 Treatment
  5.1 Research
  5.2 Alternative medicine
6 Epidemiology
  6.1 Blood transfusion
7 History
  7.1 Etymology
  7.2 History of the literature
8 Society and Culture
  8.1 Use as a biological weapon
9 See also
10 References
11 External links

Signs and symptoms


The disease manifests as a sudden onset of severe headache, muscle and joint pains (myalgias and arthralgias: severe pain that gives it the nickname break-bone fever or bonecrusher disease), fever, and rash.[8] The dengue rash is characteristically bright red petechiae and usually appears first on the lower limbs and the chest; in some patients, it spreads to cover most of the body. There may also be gastritis with some combination of associated abdominal pain, nausea, vomiting, or diarrhea. Some cases develop much milder symptoms, which can be misdiagnosed as influenza or another viral infection when no rash is present. Thus, travelers from tropical areas may pass on dengue inadvertently, having not been properly diagnosed at the height of their illness. Patients with dengue can pass on the infection only through mosquitoes or blood products, and only while they are still febrile. The classic dengue fever lasts about two to seven days, with a smaller peak of fever at the trailing end of the disease (the so-called "biphasic pattern"). Clinically, the platelet count will drop until the patient's temperature is normal. Cases of DHF also show higher fever, variable hemorrhagic phenomena, thrombocytopenia, and hemoconcentration. A small proportion of cases lead to dengue shock syndrome (DSS), which has a high mortality rate.

Diagnosis

The diagnosis of dengue is usually made clinically. The classic picture is high fever with no localising source of infection, plus a petechial rash with thrombocytopenia and relative leukopenia (low platelet and white blood cell counts). Care has to be taken, as a diagnosis of DHF can mask end-stage liver disease and vice versa.

1. Fever, bladder problems, constant headaches, eye pain, severe dizziness and loss of appetite.
2. Hemorrhagic tendency (positive tourniquet test, spontaneous bruising, bleeding from mucosa, gingiva, injection sites, etc.; vomiting blood, or bloody diarrhea).
3. Thrombocytopenia (<100,000 platelets per mm³, or estimated as less than 3 platelets per high-power field).
4. Evidence of plasma leakage (hematocrit more than 20% higher than expected, or a drop in hematocrit of 20% or more from baseline following IV fluid; pleural effusion; ascites; hypoproteinemia).
5. Encephalitic occurrences.

Dengue shock syndrome is defined as dengue hemorrhagic fever plus:

- Weak rapid pulse,
- Narrow pulse pressure (less than 20 mm Hg),
- Cold, clammy skin and restlessness.

Dependable, immediate diagnosis of dengue can be performed in rural areas by the use of rapid diagnostic test kits, which also differentiate between primary and secondary dengue infections.[9] Serology and polymerase chain reaction (PCR) studies are available to confirm the diagnosis of dengue if clinically indicated. Dengue can be a life-threatening illness.
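For illustration, the case definition above can be encoded as a simple checklist. This is a sketch of the criteria as summarized in this section, with hypothetical function and parameter names; it is not clinical or official WHO software.

```python
# Illustrative encoding of the DHF/DSS criteria summarized above.
# Function and parameter names are hypothetical; thresholds follow the text.

def meets_dhf_criteria(fever, hemorrhagic_tendency, platelets_per_mm3,
                       hematocrit_change_pct, other_plasma_leak_signs):
    return (fever
            and hemorrhagic_tendency                 # e.g. positive tourniquet test
            and platelets_per_mm3 < 100_000          # thrombocytopenia
            and (hematocrit_change_pct >= 20         # plasma leakage by hematocrit...
                 or other_plasma_leak_signs))        # ...or effusion, ascites, etc.

def meets_dss_criteria(dhf, weak_rapid_pulse, pulse_pressure_mmHg,
                       cold_clammy_restless):
    # DSS = DHF plus circulatory failure (narrow pulse pressure < 20 mmHg, etc.)
    return (dhf and weak_rapid_pulse
            and pulse_pressure_mmHg < 20
            and cold_clammy_restless)

dhf = meets_dhf_criteria(True, True, 80_000, 25, False)
print(dhf, meets_dss_criteria(dhf, True, 15, True))  # True True
```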

Cause
Dengue fever is caused by Dengue virus (DENV), a mosquito-borne flavivirus. DENV is a positive-strand ssRNA virus of the family Flaviviridae, genus Flavivirus. There are four serotypes of DENV. The virus has a genome of about 11,000 bases that codes for three structural proteins (C, prM, E); seven nonstructural proteins (NS1, NS2a, NS2b, NS3, NS4a, NS4b, NS5); and short non-coding regions at both the 5' and 3' ends.[10] The potential factors causing hemorrhagic fever are varied. The most suspected factors are the host's cross-serotypic immune response and the membrane fusion process.
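The genome organization just described can be written out as a simple ordered structure; a small sketch of the gene order along the positive-sense RNA (5' to 3'), exactly as listed above:

```python
# DENV genome organization as described above: ~11,000 bases encoding three
# structural and seven nonstructural proteins, flanked by non-coding regions.
DENV_GENOME_ORDER = (
    ["5'-UTR"]
    + ["C", "prM", "E"]                                      # structural proteins
    + ["NS1", "NS2a", "NS2b", "NS3", "NS4a", "NS4b", "NS5"]  # nonstructural proteins
    + ["3'-UTR"]
)
print(" -> ".join(DENV_GENOME_ORDER))
```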

Prevention
Vaccine development

There is no tested and approved vaccine for the dengue flavivirus. There are many ongoing vaccine development programs. Among them is the Pediatric Dengue Vaccine Initiative, set up in 2003 with the aim of accelerating the development and introduction of dengue vaccine(s) that are affordable and accessible to poor children in endemic countries.[11] Thai researchers are testing a dengue fever vaccine on 3,000 to 5,000 human volunteers after having successfully conducted tests on animals and a small group of human volunteers.[12] A number of other vaccine candidates are entering phase I or II testing.[13]

Mosquito control

A field technician looking for larvae in standing water containers during the 1965 Aedes aegypti eradication program in Miami, Florida. In the 1960s, a major effort was made to eradicate the principal urban vector mosquito of dengue and yellow fever viruses, Aedes aegypti, from the southeastern United States.

Primary prevention of dengue mainly resides in mosquito control. There are two primary methods: larval control and adult mosquito control.[citation needed] In urban areas, Aedes mosquitoes breed in water collected in artificial containers such as plastic cups, used tires, broken bottles and flower pots. Periodic draining or removal of such containers is the most effective way of reducing the breeding grounds for mosquitoes.[citation needed] Larvicide treatment is another effective way to control the vector larvae, but the larvicide chosen should be long-lasting and preferably have World Health Organization clearance for use in drinking water. There are some very effective insect growth regulators (IGRs) available which are both safe and long-lasting (e.g., pyriproxyfen). For reducing the adult mosquito load, fogging with insecticide is somewhat effective.[citation needed] Prevention of mosquito bites is another way of preventing disease; this can be achieved by using insect repellent, mosquito traps or mosquito nets.

Mesocyclops

In 1998, scientists from the Queensland Institute of Medical Research (QIMR) in Australia and Vietnam's Ministry of Health introduced a scheme that encouraged children to place a water bug, the crustacean Mesocyclops, in water tanks and discarded containers where the Aedes aegypti mosquito was known to thrive.[14] This method is viewed as being more cost-effective and more environmentally friendly than pesticides, though not as effective, and requires the continuing participation of the community.[15]

Even though this method of mosquito control was successful in rural provinces, not much is known about how effective it could be if applied to cities and urban areas. Mesocyclops can survive and breed in large water containers, but would not be able to do so in the small containers that most urban dwellers have in their homes. Also, Mesocyclops are hosts for the guinea worm, which causes a parasitic infection, so this method of mosquito control cannot be used in countries that are still susceptible to the guinea worm. The biggest dilemma with Mesocyclops is that its success depends on the participation of the community. The idea of a possible parasite-bearing creature in household water containers dissuades people from continuing the process of inoculation, and without the support and work of everyone living in the city, this method will not be successful.[16]

Wolbachia

In 2009, scientists from the School of Integrative Biology at The University of Queensland revealed that infecting Aedes mosquitoes with the bacterium Wolbachia halved their adult lifespan.[17] In the study, super-fine needles were used to inject 10,000 mosquito embryos with the bacterium. Once an insect was infected, the bacterium would spread via its eggs to the next generation. A pilot release of infected mosquitoes could begin in Vietnam within three years. If no problems are discovered, a full-scale biological attack against the insects could be launched within five years.[18]

Mosquito mapping

In 2004, scientists from the Federal University of Minas Gerais, Brazil, discovered a fast way to find and count mosquito populations inside urban areas. The technology, named Intelligent Monitoring of Dengue (in Portuguese), uses traps with kairomones that capture gravid Aedes females, and uploads insect counts with a combination of cell phone, GPS and internet technology. The result is a complete map of the mosquitoes in urban areas, updated in real time and accessible remotely, that can inform control methodologies.[19] The technology was recognized with a Tech Museum Award in 2006.[20]

Potential antiviral approaches


Dengue virus belongs to the family Flaviviridae, which includes the hepatitis C, West Nile and yellow fever viruses, among others. Possible laboratory modification of the yellow fever vaccine YF-17D to target the dengue virus via chimeric replacement has been discussed extensively in the scientific literature,[21] but as of 2009 no full-scale studies had been conducted.[22] In 2006, a group of Argentine scientists discovered the molecular replication mechanism of the virus, which could be specifically attacked by disrupting the viral RNA polymerase.[23] In cell culture[24] and murine experiments,[25][26] morpholino antisense oligomers have shown specific activity against Dengue virus. In 2007, virus replication was attenuated in the laboratory by interfering with the activity of the dengue viral protease, and a project to identify drug leads with broad-spectrum activity against the related dengue, hepatitis C, West Nile, and yellow fever viruses was launched.[27][28]

Sterile insect technique


The sterile insect technique, a form of biological control, has long proved difficult with mosquitoes because of the fragility of the males.[29] However, a transgenic strain of Aedes aegypti announced in 2010 might alleviate this problem: the strain produces females that are flightless due to a mis-development of their wings,[30] and so can neither mate nor bite. The genetic defect only takes effect in females, so that males can act as silent carriers.[29]

Treatment
The mainstay of treatment is timely supportive therapy to tackle shock due to hemoconcentration and bleeding. Close monitoring of vital signs during the critical period (between day 2 and day 7 of fever) is essential. Increased oral fluid intake is recommended to prevent dehydration. Supplementation with intravenous fluids may be necessary to prevent dehydration and significant concentration of the blood if the patient is unable to maintain oral intake. A platelet transfusion is indicated in rare cases, if the platelet level drops significantly (below 20,000) or if there is significant bleeding. The presence of melena may indicate internal gastrointestinal bleeding requiring platelet and/or red blood cell transfusion. Aspirin and non-steroidal anti-inflammatory drugs should be avoided, as these drugs may worsen the bleeding tendency associated with some of these infections. Patients may receive paracetamol preparations to deal with these symptoms if dengue is suspected.[31]

Research
Emerging evidence suggests that mycophenolic acid and ribavirin inhibit dengue replication. Initial experiments showed a fivefold increase in defective viral RNA production by cells treated with each drug.[32] In vivo studies, however, have not yet been done. Unlike with HIV therapy, a lack of adequate global interest and funding greatly hampers the development of a treatment regimen.

Alternative medicine


In Brazilian traditional medicine, the cat's claw herb is used to treat patients with dengue.[33] In the Philippines, the tawa-tawa herb is used to treat patients with dengue.[34]

Epidemiology

Disability-adjusted life year for dengue fever per 100,000 inhabitants in 2002.
no data  <15  15-30  30-45  45-60  60-75  75-90  90-105  105-120  120-135  135-150  150-250  >250

Worldwide dengue distribution, 2006. Red: Epidemic dengue. Blue: Aedes aegypti.

Worldwide dengue distribution, 2000.

Dengue is transmitted by Aedes mosquitoes, particularly A. aegypti and A. albopictus. The first recognized dengue epidemics occurred almost simultaneously in Asia, Africa, and North America in the 1780s, shortly after the identification and naming of the disease in 1779. A pandemic began in Southeast Asia in the 1950s, and by 1975 DHF had become a leading cause of death among children in the region. Epidemic dengue has become more common since the 1980s. By the late 1990s, dengue was the most important mosquito-borne disease affecting humans after malaria, with around 40 million cases of dengue fever and several hundred thousand cases of dengue hemorrhagic fever each year. Significant outbreaks of dengue fever tend to occur every five or six months. The cyclical rise and fall in numbers of dengue cases is thought to be the result of seasonal cycles interacting with a short-lived cross-immunity[clarification needed] for all four strains in people who have had dengue. When the cross-immunity wears off, the population is more susceptible to transmission whenever the next seasonal peak occurs. Thus, over time there remain large numbers of susceptible people in affected populations despite previous outbreaks, owing to the four different serotypes of dengue virus and the presence of unexposed individuals from childbirth or immigration. There is significant evidence, originally suggested by S.B. Halstead in the 1970s, that dengue hemorrhagic fever is more likely to occur in patients who have secondary infections by another one of dengue fever's four serotypes. One model to explain this process is known as antibody-dependent enhancement (ADE), which allows for increased uptake and virion replication during a secondary infection with a different strain. Through an immunological phenomenon known as original antigenic sin, the immune system is not able to respond adequately to the stronger infection, and the secondary infection becomes far more serious.[35] This process is also known as superinfection.[36][37]
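The interaction between seasonal transmission and short-lived cross-immunity described above can be caricatured in a few lines. The following toy difference-equation sketch uses entirely illustrative parameters (not fitted to any dengue data) to show how outbreaks deplete the susceptible pool, which then rebuilds as cross-immunity wanes:

```python
# Toy sketch of the cyclical pattern described above: outbreaks deplete the
# susceptible pool, then waning cross-immunity (plus births and immigration)
# rebuilds it before the next seasonal peak. Parameters are illustrative only.
import math

susceptible = 0.5
for month in range(36):
    season = 1 + 0.5 * math.sin(2 * math.pi * month / 12)  # seasonal mosquito abundance
    new_infections = 0.3 * season * susceptible            # simple transmission term
    susceptible -= new_infections                          # infection confers (cross-)immunity
    susceptible += 0.08 * (1 - susceptible)                # immunity waning, births, immigration
    if month % 6 == 0:
        print(f"month {month:2d}: susceptible {susceptible:.2f}, "
              f"new infections {new_infections:.2f}")
```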

Reported cases of dengue are an under-representation of all cases, when accounting for subclinical cases and cases where the patient did not present for medical treatment. There was a serious outbreak in Rio de Janeiro in February 2002, affecting around one million people and killing sixteen. On March 20, 2008, the secretary of health of the state of Rio de Janeiro, Sérgio Côrtes, announced that 23,555 cases of dengue, including 30 deaths, had been recorded in the state in less than three months. Côrtes said, "I am treating this as an epidemic because the number of cases is extremely high." The Federal Minister of Health, José Gomes Temporão, also announced that he was forming a panel to respond to the situation. Cesar Maia, mayor of the city of Rio de Janeiro, denied that there was serious cause for concern, saying that the incidence of cases was in fact declining from a peak at the beginning of February.[38] By April 3, 2008, the number of cases reported had risen to 55,000.[39] In Singapore, there are 4,000 to 5,000 reported cases of dengue fever or dengue haemorrhagic fever every year. In 2004, there were seven deaths from dengue shock syndrome.[40] An epidemic broke out in Bolivia in early 2009, in which 18 people died and 31,000 were infected. An outbreak of dengue fever was declared in Cairns, located in the tropical north of Queensland, Australia, on 1 December 2008. As of 3 March 2009 there were 503 confirmed cases of dengue fever, in a residential population of 152,137. Outbreaks were subsequently declared in the neighbouring cities and towns of Townsville (outbreak declared 5 January 2009), Port Douglas (6 February 2009), Yarrabah (19 February 2009), Injinoo (24 February 2009), Innisfail (27 February 2009) and Rockhampton (10 March 2009). There have been occurrences of dengue types one, two, three and four in the region. On March 4, 2009, Queensland Health confirmed that an elderly woman had died from dengue fever in Cairns, the first fatality since the epidemic began the previous year. The statement said that although the woman had other health problems, she tested positive for dengue and the disease probably contributed to her death. In 2009, in Argentina, a dengue outbreak was declared in the northern provinces of Chaco, Catamarca, Salta, Jujuy, and Corrientes, with over 9,673 cases reported as of April 11, 2009 by the Health Ministry. Some travelers from the affected zones have spread the fever as far south as Buenos Aires. Major efforts to control the epidemic in Argentina are focused on preventing its vector (the Aedes mosquitoes) from reproducing. This is addressed by asking people to dry out all possible water reservoirs where mosquitoes could proliferate (known in Argentina as "descacharrado"). There have also been information campaigns concerning prevention of dengue fever, and the government is fumigating with insecticide in order to control the mosquito population.[41] The first cases of dengue fever have recently been reported on the island of Mauritius, in the Indian Ocean. Sri Lanka remains one of the South Asian countries most severely affected by this problem.[42]

Blood transfusion

Dengue may also be transmitted via infected blood products (blood transfusions, plasma, and platelets),[43][44] and in countries such as Singapore, where dengue is endemic, the risk was estimated to be between 1.6 and 6 per 10,000 blood transfusions.[45]

History
Etymology
The origins of the word dengue are not clear, but one theory is that it is derived from the Swahili phrase "Ka-dinga pepo", which describes the disease as being caused by an evil spirit.[46] The Swahili word "dinga" may possibly have its origin in the Spanish word "dengue" meaning fastidious or careful, which would describe the gait of a person suffering the bone pain of dengue fever.[47] Alternatively, the use of the Spanish word may derive from the similar-sounding Swahili.[48]

History of the literature


Slaves in the West Indies who contracted dengue were said to have the posture and gait of a dandy, and the disease was known as "Dandy Fever".[49] The first record of a case of probable dengue fever is in a Chinese medical encyclopedia from the Jin Dynasty (265-420 AD), which referred to a "water poison" associated with flying insects.[48] The first confirmed case report dates from 1789 and is by Benjamin Rush, who coined the term "breakbone fever" because of the symptoms of myalgia and arthralgia.[50] The viral etiology and the transmission by mosquitoes were discovered in the 20th century by Sir John Burton Cleland. Population movements during World War II spread the disease globally. A pandemic of dengue began in Southeast Asia after World War II and has spread around the globe since then.[51]

Society and Culture


Use as a biological weapon
Dengue fever was one of more than a dozen agents that the United States researched as potential biological weapons before the nation suspended its biological weapons program.[52]


Biofuel
From Wikipedia, the free encyclopedia

Information on a pump regarding an ethanol fuel blend of up to 10%, California.

A bus run on biodiesel.

Renewable energy
Biofuel · Biomass · Geothermal · Hydroelectricity · Solar energy · Tidal power · Wave power · Wind power

Biofuels are a wide range of fuels which are in some way derived from biomass. The term covers solid biomass, liquid fuels and various biogases.[1] Biofuels are gaining increased public and scientific attention, driven by factors such as oil price spikes and the need for increased energy security. Bioethanol is an alcohol made by fermenting the sugar components of plant materials, and it is made mostly from sugar and starch crops. As advanced technology is developed, cellulosic biomass, such as trees and grasses, is also being used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the USA and in Brazil. Biodiesel is made from vegetable oils, animal fats or recycled greases. Biodiesel can be used as a fuel for vehicles in its pure form, but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 1.8% of the world's transport fuel in 2008. Investment in biofuel production capacity exceeded $4 billion worldwide in 2007 and is growing.[2]

Contents

1 Liquid fuels for transportation
  1.1 First generation biofuels
    1.1.1 Bioalcohols
    1.1.2 Green Diesel
    1.1.3 Biodiesel
    1.1.4 Vegetable oil
    1.1.5 Bioethers
    1.1.6 Biogas
    1.1.7 Syngas
    1.1.8 Solid biofuels
  1.2 Second generation biofuels
  1.3 Third generation biofuels
  1.4 Green fuels
  1.5 Ethanol from living algae
  1.6 Helioculture
2 Biofuels by region
3 Issues with biofuel production and use
4 See also
5 References
6 Further reading
7 External links

Liquid fuels for transportation


Most transportation fuels are liquids, because vehicles usually require high energy density, as occurs in liquids and solids. High power density can be provided most inexpensively by an internal combustion engine; these engines require clean-burning fuels to keep the engine clean and to minimize air pollution.

The fuels that are easiest to burn cleanly are typically liquids and gases. Thus liquids (and gases that can be stored in liquid form) meet the requirements of being both portable and clean burning. Also, liquids and gases can be pumped, which means handling is easily mechanized, and thus less laborious.

First generation biofuels


'First-generation biofuels' are biofuels made from sugar, starch, vegetable oil, or animal fats using conventional technology.[3] The basic feedstocks for the production of first generation biofuels are often seeds or grains such as wheat, which yields starch that is fermented into bioethanol, or sunflower seeds, which are pressed to yield vegetable oil that can be used in biodiesel. These feedstocks could instead enter the animal or human food chain, and as the global population has risen, their use in producing biofuels has been criticised for diverting food away from the human food chain, leading to food shortages and price rises. The most common biofuels are listed below.

Bioalcohols

Main article: Alcohol fuel

Neat ethanol on the left (A), gasoline on the right (G) at a filling station in Brazil.

Biologically produced alcohols, most commonly ethanol and less commonly propanol and butanol, are produced by the action of microorganisms and enzymes through the fermentation of sugars or starches (easiest), or cellulose (which is more difficult). Biobutanol (also called biogasoline) is often claimed to provide a direct replacement for gasoline, because it can be used directly in a gasoline engine (in a similar way to biodiesel in diesel engines). Ethanol fuel is the most common biofuel worldwide, particularly in Brazil. Alcohol fuels are produced by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, molasses and any sugar or starch from which alcoholic beverages can be made (like potato and fruit waste, etc.). The ethanol production methods used are enzyme digestion (to release sugars from stored starches), fermentation of the sugars, distillation and drying. The distillation process requires significant energy input for heat (often unsustainable natural gas fossil fuel, but cellulosic biomass such as bagasse, the waste left after sugar cane is pressed to extract its juice, can also be used more sustainably).

The Koenigsegg CCXR Edition at the 2008 Geneva Motor Show. This is an "environmentally friendly" version of the CCX, which can use E85 and E100.

Ethanol can be used in petrol engines as a replacement for gasoline; it can be mixed with gasoline to any percentage. Most existing car petrol engines can run on blends of up to 15% bioethanol with petroleum/gasoline. Ethanol has a smaller energy density than gasoline, which means it takes more fuel (volume and mass) to produce the same amount of work. An advantage of ethanol (CH3CH2OH) is that it has a higher octane rating than ethanol-free gasoline available at roadside gas stations, which allows an increase of an engine's compression ratio for increased thermal efficiency. In high-altitude (thin-air) locations, some states mandate a mix of gasoline and ethanol as a winter oxidizer to reduce atmospheric pollution emissions. Ethanol is also used to fuel bioethanol fireplaces. As they do not require a chimney and are "flueless", bioethanol fires[4] are extremely useful for newly built homes and apartments without a flue. The downside to these fireplaces is that their heat output is slightly less than that of electric and gas fires. In the current alcohol-from-corn production model in the United States, considering the total energy consumed by farm equipment; cultivation, planting, fertilizers, pesticides, herbicides, and fungicides made from petroleum; irrigation systems; harvesting; transport of feedstock to processing plants; fermentation, distillation and drying; transport to fuel terminals and retail pumps; and the lower energy content of ethanol fuel, the net energy content value added and delivered to consumers is very small. And the net benefit (all things considered) does little to reduce the unsustainable imported oil and fossil fuels required to produce the ethanol.[5] Although ethanol from corn and other food stocks has implications both in terms of world food prices and limited, yet positive, energy yield (in terms of energy delivered to customer per unit of fossil fuel used), the technology has led to the development of cellulosic ethanol. According to a joint research agenda conducted through the U.S. Department of Energy,[6] the fossil energy ratios (FER) for cellulosic ethanol, corn ethanol, and gasoline are 10.3, 1.36, and 0.81, respectively.[7][8][9] Many car manufacturers are now producing flexible-fuel vehicles (FFVs), which can safely run on any combination of bioethanol and petrol, up to 100% bioethanol. They dynamically sense exhaust oxygen content and adjust the engine's computer systems, spark, and fuel injection accordingly. This adds initial cost and ongoing increased vehicle maintenance.[citation needed] As with all vehicles, efficiency falls and pollution emissions increase when FFV system maintenance is needed (regardless of the fuel mix being used) but is not performed. FFV internal combustion engines are becoming increasingly complex, as are multiple-propulsion-system FFV hybrid vehicles, which impacts cost, maintenance, reliability, and useful lifetime.[citation needed] Even dry ethanol has roughly one-third lower energy content per unit of volume compared to gasoline, so larger and heavier fuel tanks are required to travel the same distance, or more fuel stops are required; for example, a vehicle that travels 600 km on 50 litres of gasoline would need roughly 75 litres of ethanol for the same trip. With large current unsustainable, non-scalable subsidies, ethanol fuel still costs much more per distance traveled than current high gasoline prices in the United States.[10]

Methanol is currently produced from natural gas, a non-renewable fossil fuel. It can also be produced from biomass, as biomethanol. The methanol economy is an interesting alternative to the hydrogen economy when compared with today's hydrogen production from natural gas, although not when compared with hydrogen produced directly from water using state-of-the-art clean solar thermal energy processes.[11]

Butanol is formed by ABE fermentation (acetone, butanol, ethanol), and experimental modifications of the process show potentially high net energy gains with butanol as the only liquid product. Butanol will produce more energy and allegedly can be burned "straight" in existing gasoline engines (without modification to the engine or car),[12] and is less corrosive and less water-soluble than ethanol, and could be distributed via existing infrastructures. DuPont and BP are working together to help develop butanol. E. coli has also been successfully engineered to produce butanol by hijacking its amino acid metabolism.[13]

Green Diesel

Main article: Green diesel

Green diesel, also known as renewable diesel, is a form of diesel fuel which is derived from renewable feedstock rather than the fossil feedstock used in most diesel fuels. Green diesel is not to be confused with biodiesel, which is chemically quite different and processed using transesterification rather than the traditional fractional distillation used to process green diesel. Green diesel feedstock can be sourced from a variety of oils, including canola, algae, jatropha and salicornia, in addition to tallow.

Biodiesel

Main articles: Biodiesel and Biodiesel around the world

In some countries biodiesel is less expensive than conventional diesel. Biodiesel is the most common biofuel in Europe. It is produced from oils or fats using transesterification and is a liquid similar in composition to fossil/mineral diesel. Its chemical name is fatty acid methyl (or ethyl) ester (FAME). Oils are mixed with sodium hydroxide and methanol (or ethanol), and the chemical reaction produces biodiesel (FAME) and glycerol. One part glycerol is produced for every 10 parts biodiesel. Feedstocks for biodiesel include animal fats, vegetable oils, soy, rapeseed, jatropha, mahua, mustard, flax, sunflower, palm oil, hemp, field pennycress, Pongamia pinnata and algae. Pure biodiesel (B100) is by far the lowest-emission diesel fuel. Although liquefied petroleum gas and hydrogen have cleaner combustion, they are used to fuel much less efficient petrol engines and are not as widely available.

Biodiesel can be used in any diesel engine when mixed with mineral diesel. The majority of vehicle manufacturers limit their recommendations to 15% biodiesel blended with mineral diesel. In some countries manufacturers cover their diesel engines under warranty for B100 use, although Volkswagen of Germany, for example, asks drivers to check by telephone with the VW environmental services department before switching to B100. B100 may become more viscous at lower temperatures, depending on the feedstock used, requiring vehicles to have fuel line heaters. In most cases, biodiesel is compatible with diesel engines from 1994 onwards, which use 'Viton' (by DuPont) synthetic rubber in their mechanical injection systems. Electronically controlled 'common rail' and 'Pumpe-Düse' (unit injector) type systems from the late 1990s onwards may only use biodiesel blended with conventional diesel fuel. These engines have finely metered and atomized multistage injection systems that are very sensitive to the viscosity of the fuel. Many current-generation diesel engines are made so that they can run on B100 without altering the engine itself, although this depends on the fuel rail design. NExBTL is suitable for all diesel engines in the world since it exceeds the requirements of the DIN EN 590 standard.
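The transesterification step described above can be written schematically, and a short mass balance shows where the quoted 10:1 biodiesel-to-glycerol ratio comes from. The worked numbers below assume triolein as a representative triglyceride; this is an illustrative choice, not a figure from the article:

$$\text{triglyceride} + 3\,\mathrm{CH_3OH} \;\xrightarrow{\;\mathrm{NaOH}\;}\; 3\,\mathrm{RCOOCH_3}\ \text{(FAME, biodiesel)} + \mathrm{C_3H_5(OH)_3}\ \text{(glycerol)}$$

For triolein (molar mass about 885 g/mol), one mole of oil yields three moles of methyl oleate (3 × 296.5 ≈ 890 g) plus one mole of glycerol (about 92 g), a biodiesel-to-glycerol mass ratio of 890/92 ≈ 9.7, in line with "one part glycerol for every 10 parts biodiesel".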

Since biodiesel is an effective solvent and cleans residues deposited by mineral diesel, engine filters may need to be replaced more often, as the biofuel dissolves old deposits in the fuel tank and pipes. It also effectively cleans the engine combustion chamber of carbon deposits, helping to maintain efficiency. In many European countries, a 5% biodiesel blend is widely used and is available at thousands of gas stations.[14][15]

Biodiesel is also an oxygenated fuel, meaning that it contains a reduced amount of carbon and higher hydrogen and oxygen content than fossil diesel. This improves the combustion of fossil diesel and reduces the particulate emissions from unburnt carbon. Biodiesel is safe to handle and transport because it is as biodegradable as sugar, 10 times less toxic than table salt, and has a high flash point of about 300 °F (148 °C), compared to petroleum diesel fuel, which has a flash point of 125 °F (52 °C).[16]

In the USA, more than 80% of commercial trucks and city buses run on diesel. The emerging US biodiesel market is estimated to have grown 200% from 2004 to 2005. "By the end of 2006 biodiesel production was estimated to increase fourfold [from 2004] to more than 1 billion gallons".[17]

[edit] Vegetable oil

Filtered waste vegetable oil.
Main article: Vegetable oil used as fuel

Edible vegetable oil is generally not used as fuel, but lower-quality oil can be used for this purpose. Used vegetable oil is increasingly being processed into biodiesel, or (more rarely) cleaned of water and particulates and used as a fuel. To ensure that the fuel injectors atomize the fuel in the correct pattern for efficient combustion, vegetable oil fuel must be heated to reduce its viscosity to that of diesel, either by electric coils or heat exchangers. This is easier in warm or temperate climates.

Large corporations like MAN B&W Diesel, Wärtsilä and Deutz AG, as well as a number of smaller companies such as Elsbett, offer engines that are compatible with straight vegetable oil, without the need for after-market modifications. Vegetable oil can also be used in many older diesel engines that do not use common rail or unit injection electronic diesel injection systems. Due to the design of the combustion chambers in indirect injection engines, these are the best engines for use with vegetable oil, as this arrangement allows the relatively larger oil molecules more time to burn. Some older engines, especially Mercedes models, are run experimentally by enthusiasts without any conversion, and a handful of drivers have experienced limited success with earlier pre-"Pumpe-Düse" VW TDI engines and other similar engines with direct injection. Several companies, such as Elsbett and Wolf, have developed professional conversion kits and successfully installed hundreds of them over the last decades.

Oils and fats can be hydrogenated to give a diesel substitute. The resulting product is a straight-chain hydrocarbon, high in cetane and low in aromatics and sulphur, and does not contain oxygen. Hydrogenated oils can be blended with diesel in all proportions. Hydrogenated oils have several advantages over biodiesel, including good performance at low temperatures, no storage stability problems and no susceptibility to microbial attack.[18]

[edit] Bioethers
Bioethers (also referred to as fuel ethers or fuel oxygenates) are cost-effective compounds that act as octane rating enhancers. They also enhance engine performance whilst significantly reducing engine wear and toxic exhaust emissions. By greatly reducing the amount of ground-level ozone, they contribute to the quality of the air we breathe.[19][20][21]

[edit] Biogas

Pipes carrying biogas.
Main article: Biogas

Biogas is produced by the anaerobic digestion of organic material by anaerobes.[22] It can be produced either from biodegradable waste materials or by the use of energy crops fed into anaerobic digesters to supplement gas yields. The solid byproduct, digestate, can be used as a biofuel or a fertilizer. In the UK, the National Coal Board experimented with microorganisms that digested coal in situ, converting it directly to gases such as methane.

Biogas contains methane and can be recovered from industrial anaerobic digesters and mechanical biological treatment systems. Landfill gas is a less clean form of biogas which is produced in landfills through naturally occurring anaerobic digestion. If it escapes into the atmosphere, it is a potent greenhouse gas.

Oils and gases can be produced from various biological wastes:

- Thermal depolymerization of waste can extract methane and other oils similar to petroleum.
- GreenFuel Technologies Corporation developed a patented bioreactor system that uses nontoxic photosynthetic algae to take in smokestack flue gases and produce biofuels such as biodiesel, biogas and a dry fuel comparable to coal.[23]
- Farmers can produce biogas from their cows' manure using an anaerobic digester (AD).[24]

[edit] Syngas
Main article: Gasification
Syngas, a mixture of carbon monoxide and hydrogen, is produced by partial combustion of biomass, that is, combustion with an amount of oxygen that is not sufficient to convert the biomass completely to carbon dioxide and water.[18] Before partial combustion, the biomass is dried and sometimes pyrolysed. The resulting gas mixture, syngas, is itself a fuel. Using the syngas is more efficient than direct combustion of the original biofuel, as more of the energy contained in the fuel is extracted.

Syngas may be burned directly in internal combustion engines or turbines. The wood gas generator is a wood-fueled gasification reactor mounted on an internal combustion engine. Syngas can be used to produce methanol and hydrogen, or converted via the Fischer-Tropsch process into a synthetic diesel substitute, or a mixture of alcohols that can be blended into gasoline. Gasification normally relies on temperatures above 700 °C. Lower-temperature gasification is desirable when co-producing biochar, but results in syngas polluted with tar.
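Two idealized reactions are commonly used to show why limiting the oxygen supply yields a combustible gas instead of exhaust; this is a textbook simplification of gasification chemistry, not a reaction scheme given in the article:

$$\mathrm{C} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{CO} \qquad \text{(partial oxidation)}$$
$$\mathrm{C} + \mathrm{H_2O} \rightarrow \mathrm{CO} + \mathrm{H_2} \qquad \text{(water-gas reaction)}$$

With ample oxygen, the carbon would instead burn completely to CO2 and the product gas would retain no fuel value, which is why gasifiers deliberately starve the reaction of air.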

[edit] Solid biofuels
Examples include wood, sawdust, grass cuttings, domestic refuse, charcoal, agricultural waste, non-food energy crops (see picture), and dried manure.

When raw biomass is already in a suitable form (such as firewood), it can be burned directly in a stove or furnace to provide heat or raise steam. When raw biomass is in an inconvenient form (such as sawdust, wood chips, grass, urban waste wood or agricultural residues), the typical process is to densify the biomass. This process includes grinding the raw biomass to an appropriate particulate size (known as hogfuel), which depending on the densification type can be from 1 to 3 cm (0.4 to 1.2 in), and then concentrating it into a fuel product. The current types of processes are pellet, cube and puck. The pellet process is most common in Europe and is typically a pure wood product. The other types of densification are larger in size compared to a pellet and are compatible with a broad range of input feedstocks. The resulting densified fuel is easier to transport and feed into thermal generation systems such as boilers.

A problem with the combustion of raw biomass is that it emits considerable amounts of pollutants, such as particulates and polycyclic aromatic hydrocarbons (PAHs). Even modern pellet boilers generate much more pollutants than oil or natural gas boilers. Pellets made from agricultural residues are usually worse than wood pellets, producing much larger emissions of dioxins and chlorophenols.[25]

Notwithstanding the above noted study, numerous studies have shown that biomass fuels have significantly less impact on the environment than fossil-based fuels. Of note is the study Biomass Power and Conventional Fossil Systems with and without CO2 Sequestration: Comparing the Energy Balance, Greenhouse Gas Emissions and Economics, from the U.S. Department of Energy laboratory operated by Midwest Research Institute. Power generation emits significant amounts of greenhouse gases (GHGs), mainly carbon dioxide (CO2). Sequestering CO2 from the power plant flue gas can significantly reduce the GHGs from the power plant itself, but this is not the total picture. CO2 capture and sequestration consumes additional energy, thus lowering the plant's fuel-to-electricity efficiency. To compensate for this, more fossil fuel must be procured and consumed to make up for the lost capacity. Taking this into consideration, the global warming potential (GWP), which is a combination of CO2, methane (CH4) and nitrous oxide (N2O) emissions, and the energy balance of the system need to be examined using a life-cycle approach. This takes into account the upstream processes which remain constant after CO2 sequestration, as well as the steps required for additional power generation (a sketch of the weighted-sum calculation involved appears at the end of this section). The study found that firing biomass instead of coal led to a 148% reduction in GWP.

A derivative of solid biofuel is biochar, which is produced by biomass pyrolysis. Biochar made from agricultural waste can substitute for wood charcoal. As wood stock becomes scarce, this alternative is gaining ground. In the eastern Democratic Republic of Congo, for example, biomass briquettes are being marketed as an alternative to charcoal in order to protect Virunga National Park from deforestation associated with charcoal production.[26]
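The GWP combination mentioned above is a weighted sum over the three gases. The sketch below uses the widely cited IPCC 100-year GWP factors (CH4 ≈ 25, N2O ≈ 298); the emission inventories are placeholders for illustration, not figures from the study:

# Combine CO2, CH4 and N2O emissions into one CO2-equivalent number,
# as life-cycle GWP studies do. Factors: IPCC AR4, 100-year horizon.
GWP_100YR = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2_equivalent(emissions_kg):
    """Total CO2-equivalent (kg) for a dict of per-gas emissions in kg."""
    return sum(GWP_100YR[gas] * kg for gas, kg in emissions_kg.items())

# Placeholder life-cycle inventories (kg per MWh) -- illustrative only.
coal_plant = {"CO2": 1000.0, "CH4": 1.5, "N2O": 0.02}
biomass_plant = {"CO2": 50.0, "CH4": 0.5, "N2O": 0.01}

print(co2_equivalent(coal_plant))     # ~1043 kg CO2e per MWh
print(co2_equivalent(biomass_plant))  # ~65 kg CO2e per MWh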

[edit] Second generation biofuels


Main article: Second generation biofuels
Supporters of biofuels claim that a more viable solution is to increase political and industrial support for, and the speed of, second-generation biofuel implementation from non-food crops. These include waste biomass, the stalks of wheat and corn, wood, and special energy or biomass crops (e.g. Miscanthus). Second-generation (2G) biofuels use biomass-to-liquid technology,[27] including cellulosic biofuels.[28] Many second-generation biofuels are under development, such as biohydrogen, biomethanol, DMF, Bio-DME, Fischer-Tropsch diesel, biohydrogen diesel, mixed alcohols and wood diesel.

Cellulosic ethanol production uses non-food crops or inedible waste products and does not divert food away from the animal or human food chain. Lignocellulose is the "woody" structural material of plants. This feedstock is abundant and diverse, and in some cases (like citrus peels or sawdust) it is in itself a significant disposal problem.

Producing ethanol from cellulose is a difficult technical problem to solve. In nature, ruminant livestock (like cattle) eat grass and then use slow enzymatic digestive processes to break it into glucose (sugar). In cellulosic ethanol laboratories, various experimental processes are being developed to do the same thing, and the sugars released can then be fermented to make ethanol fuel. In 2009, scientists reported developing, using "synthetic biology", "15 new highly stable fungal enzyme catalysts that efficiently break down cellulose into sugars at high temperatures", adding to the 10 previously known.[29] In addition, research conducted at TU Delft by Jack Pronk has shown that elephant yeast, when slightly modified, can also create ethanol from non-edible ground sources (e.g. straw).[30][31]

The recent discovery of the fungus Gliocladium roseum points toward the production of so-called myco-diesel from cellulose. This organism was recently discovered in the rainforests of northern Patagonia and has the unique capability of converting cellulose into medium-length hydrocarbons typically found in diesel fuel.[32] Scientists are also working on experimental recombinant DNA organisms that could increase biofuel potential. Scientists working in New Zealand have developed a technology to use industrial waste gases from steel mills as a feedstock for a microbial fermentation process to produce ethanol.[33][34]

[edit] Third generation biofuels


Main article: Algae fuel
Algae fuel, also called oilgae or third-generation biofuel, is a biofuel made from algae. Algae are low-input, high-yield feedstocks for producing biofuels. Based on laboratory experiments, it is claimed that algae can produce up to 30 times more energy per acre than land crops such as soybeans,[35] but these yields have yet to be achieved commercially. With the higher prices of fossil fuels (petroleum), there is much interest in algaculture (farming algae). One advantage of many biofuels over most other fuel types is that they are biodegradable, and so relatively harmless to the environment if spilled.[36][37][38] Algae fuel still has its difficulties, though; for instance, to produce algae fuel the algae must be mixed uniformly, which, if done by agitation, can affect biomass growth.[39]

The United States Department of Energy estimates that if algae fuel replaced all the petroleum fuel in the United States, it would require 15,000 square miles (38,849 square kilometers), which is roughly the size of Maryland.[35] Second- and third-generation biofuels are also called advanced biofuels. Algae, such as Botryococcus braunii and Chlorella vulgaris, are relatively easy to grow,[40] but the algal oil is hard to extract. There are several approaches, some of which work better than others.[41] Macroalgae (seaweed) also have great potential for bioethanol and biogas production.[42]

[edit] Green fuels


However, if biocatalytic cracking and traditional fractional distillation are used to process properly prepared algal biomass, i.e. biocrude,[43] the resulting distillates include jet fuel, gasoline, diesel and other products. Hence they may be called third-generation or green fuels.

[edit] Ethanol from living algae


Most biofuel production involves harvesting organic matter and then converting it to fuel, but an alternative approach relies on the fact that some algae naturally produce ethanol, which can be collected without killing the algae. The ethanol evaporates and can then be condensed and collected. The company Algenol is trying to commercialize this process.

[edit] Helioculture
Helioculture is a newly developed technology which is claimed to be able to produce 20,000 gallons of fuel per acre per year, and which removes carbon dioxide from the air as a feedstock for the fuel.[44] Helioculture involves the direct conversion of carbon dioxide into fuel using solar power.[45] Helioculture can be used to produce many different fuels and petroleum-derived chemicals without using any fresh water or agricultural land.[46]

[edit] Biofuels by region


Main article: Biofuels by region
Recognizing the importance of implementing bioenergy, there are international organizations such as IEA Bioenergy,[47] established in 1978 by the OECD International Energy Agency (IEA) with the aim of improving cooperation and information exchange between countries that have national programs in bioenergy research, development and deployment. The U.N. International Biofuels Forum is formed by Brazil, China, India, South Africa, the United States and the European Commission.[48] The world leaders in biofuel development and use are Brazil, the United States, France, Sweden and Germany.

See also: Biodiesel around the world

[edit] Issues with biofuel production and use


Main article: Issues relating to biofuels
There are various current issues with biofuel production and use, which are presently being discussed in the popular media and scientific journals. These include: the effect of moderating oil prices, the "food vs. fuel" debate, carbon emissions levels, sustainable biofuel production, deforestation and soil erosion, impact on water resources, human rights issues, poverty reduction potential, biofuel prices, energy balance and efficiency, and centralised versus decentralised production models.

Avian influenza


For the H5N1 subtype of Avian influenza, see Influenza A virus subtype H5N1.

Avian influenza, sometimes avian flu, and commonly bird flu, refers to "influenza caused by viruses adapted to birds."[1][2][3][4][5][6][7][clarification needed] Of greatest concern is highly pathogenic avian influenza (HPAI).

"Bird flu" is a phrase similar to "swine flu," "dog flu," "horse flu," or "human flu" in that it refers to an illness caused by any of many different strains of influenza viruses that have adapted to a specific host. All known viruses that cause influenza in birds belong to the species influenza A virus. All subtypes (but not all strains of all subtypes) of influenza A virus are adapted to birds, which is why, for many purposes, the avian flu virus is the influenza A virus (note that the "A" does not stand for "avian").

Adaptation is non-exclusive: being adapted towards a particular species does not preclude adaptations, or partial adaptations, towards infecting different species. In this way, strains of influenza viruses can be adapted to multiple species, though they may be preferential towards a particular host. For example, viruses responsible for influenza pandemics are adapted to both humans and birds. Recent influenza research into the genes of the Spanish flu virus shows it to have genes adapted to both birds and humans, with more of its genes from birds than in less deadly later pandemic strains.


[edit] Nomenclature and taxonomy



[edit] Genetics
Genetic factors in distinguishing between "human flu viruses" and "avian flu viruses" include:

PB2 (RNA polymerase): Amino acid (residue) position 627 in the PB2 protein encoded by the PB2 RNA gene. Until H5N1, all known avian influenza viruses had a Glu at position 627, while all human influenza viruses had a Lys.

HA (hemagglutinin): Avian influenza HA binds alpha-2,3 sialic acid receptors, while human influenza HA binds alpha-2,6 sialic acid receptors. Swine influenza viruses can bind both types of sialic acid receptors. Hemagglutinin is the major antigen of the virus against which neutralizing antibodies are produced, and influenza virus epidemics are associated with changes in its antigenic structure.
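The PB2-627 marker described above lends itself to a simple sequence check. The sketch below is plain Python; the example sequence is a made-up placeholder (626 alanines followed by a lysine), not a real PB2 protein:

# Check the host-adaptation marker at PB2 residue 627:
# Glu (E) is the classic avian signature, Lys (K) the human one.
def pb2_627_signature(pb2_protein_seq):
    """Label the amino acid at position 627 (1-based) of a PB2 sequence."""
    if len(pb2_protein_seq) < 627:
        raise ValueError("sequence shorter than 627 residues")
    residue = pb2_protein_seq[626]  # 0-based index for position 627
    return {"E": "avian-like (Glu627)",
            "K": "human-like (Lys627)"}.get(residue, f"other ({residue})")

toy_seq = "A" * 626 + "K" + "A" * 132  # placeholder, not a real sequence
print(pb2_627_signature(toy_seq))      # -> human-like (Lys627)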

[edit] Subtypes
There are many subtypes of avian influenza viruses, but only some strains of four subtypes have been highly pathogenic in humans. These are types H5N1, H7N3, H7N7 and H9N2.[8]

Examples of avian influenza A virus strains:[9]

HA subtype   NA subtype   Avian influenza A virus
H1           N1           A/duck/Alberta/35/76(H1N1)
H1           N8           A/duck/Alberta/97/77(H1N8)
H2           N9           A/duck/Germany/1/72(H2N9)
H3           N8           A/duck/Ukraine/63(H3N8)
H3           N8           A/duck/England/62(H3N8)
H3           N2           A/turkey/England/69(H3N2)
H4           N6           A/duck/Czechoslovakia/56(H4N6)
H4           N3           A/duck/Alberta/300/77(H4N3)
H4           N3           A/tern/South Africa/300/77(H4N3)
H5           N9           A/turkey/Ontario/7732/66(H5N9)
H5           N1           A/chick/Scotland/59(H5N1)
H6           N6           A/jyotichinara/Ethiopia/300/77(H6N6)
H6           N2           A/turkey/Massachusetts/3740/65(H6N2)
H6           N8           A/turkey/Canada/63(H6N8)
H6           N5           A/shearwater/Australia/72(H6N5)
H6           N1           A/duck/Germany/1868/68(H6N1)
H7           N7           A/fowl plague virus/Dutch/27(H7N7)
H7           N1           A/chick/Brescia/1902(H7N1)
H7           N3           A/turkey/England/63(H7N3)
H7           N1           A/fowl plague virus/Rostock/34(H7N1)
H8           N4           A/turkey/Ontario/6118/68(H8N4)
H9           N2           A/turkey/Wisconsin/1/66(H9N2)
H9           N6           A/duck/Hong Kong/147/77(H9N6)
H10          N7           A/chick/Germany/N/49(H10N7)
H10          N8           A/quail/Italy/1117/65(H10N8)
H11          N6           A/duck/England/56(H11N6)
H11          N9           A/duck/Memphis/546/74(H11N9)
H12          N5           A/duck/Alberta/60/76(H12N5)
H13          N6           A/gull/Maryland/704/77(H13N6)
H14          N4           A/duck/Gurjev/263/83(H14N4)
H15          N9           A/shearwater/Australia/2576/83(H15N9)
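The strain designations in the table follow a regular pattern (type/host/place/isolate number/year, with the HxNy subtype in parentheses), so the subtype can be extracted mechanically. A minimal sketch, assuming only the naming format shown above:

import re

# Pull the HA/NA subtype out of a designation such as
# "A/duck/Alberta/35/76(H1N1)".
SUBTYPE_RE = re.compile(r"\((H\d+)(N\d+)\)$")

def subtype_of(strain):
    match = SUBTYPE_RE.search(strain)
    if not match:
        raise ValueError(f"no (HxNy) subtype found in: {strain}")
    return match.groups()  # e.g. ("H1", "N1")

print(subtype_of("A/duck/Alberta/35/76(H1N1)"))             # ('H1', 'N1')
print(subtype_of("A/shearwater/Australia/2576/83(H15N9)"))  # ('H15', 'N9')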

[edit] Influenza pandemic


Further information: Influenza pandemic
Pandemic flu viruses have some avian flu virus genes and usually some human flu virus genes. Both the H2N2 and H3N2 pandemic strains contained genes from avian influenza viruses. The new subtypes arose in pigs coinfected with avian and human viruses and were soon transferred to humans. Swine were considered the original "intermediate host" for influenza because they supported reassortment of divergent subtypes. However, other hosts appear capable of similar coinfection (e.g., many poultry species), and direct transmission of avian viruses to humans is possible.[10] The Spanish flu virus strain may have been transmitted directly from birds to humans.[11]

In spite of their pandemic connection, avian influenza viruses are noninfectious for most species. When they are infectious, they are usually asymptomatic, so the carrier does not suffer any disease from them. Thus, while infected with an avian flu virus, the animal does not have a "flu". Typically, when illness (called "flu") from an avian flu virus does occur, it is the result of an avian flu virus strain adapted to one species spreading to another species (usually from one bird species to another bird species). So far as is known, the most common result of this is an illness so minor as to be not worth noticing (and thus little studied).

But with the domestication of chickens and turkeys, humans have created species subtypes (domesticated poultry) that can catch an avian flu virus adapted to waterfowl and have it rapidly mutate into a form that kills over 90% of an entire flock within days, spreads to other flocks with similar mortality, and can only be stopped by killing every domestic bird in the area. Until H5N1 infected humans in the 1990s, this was the only reason avian flu was considered important. Since then, avian flu viruses have been intensively studied, resulting in changes in what is believed about flu pandemics, changes in poultry farming, changes in flu vaccination research, and changes in flu pandemic planning.

H5N1 has evolved into a flu virus strain that infects more species than any previously known flu virus strain, is deadlier than any previously known flu virus strain, and continues to evolve, becoming both more widespread and more deadly. This led Robert G. Webster, a leading expert on avian flu, to publish an article titled "The world is teetering on the edge of a pandemic that could kill a large fraction of the human population" in American Scientist. He called for adequate resources to fight what he sees as a major world threat to possibly billions of lives.[12] Since the article was written, the world community has spent billions of dollars fighting this threat, with limited success.

Vaccines have been formulated against several of the avian H5N1 influenza varieties. Vaccination of poultry against the ongoing H5N1 epizootic is widespread in certain countries. Some vaccines also exist for use in humans, and others are in testing, but none have been made available to civilian populations, nor produced in quantities sufficient to protect more than a tiny fraction of the Earth's population in the event that an H5N1 pandemic breaks out. The World Health Organization has compiled a list of known clinical trials of pandemic influenza prototype vaccines, including those against H5N1.

[edit] H5N1


Further information: Influenza A virus subtype H5N1 and Transmission and infection of H5N1
The highly pathogenic influenza A virus subtype H5N1 is an emerging avian influenza virus that has been causing global concern as a potential pandemic threat. It is often referred to simply as "bird flu" or "avian influenza", even though it is only one subtype of avian influenza-causing virus. H5N1 has killed millions of poultry in a growing number of countries throughout Asia, Europe and Africa. Health experts are concerned that the co-existence of human flu viruses and avian flu viruses (especially H5N1) will provide an opportunity for genetic material to be exchanged between species-specific viruses, possibly creating a new virulent influenza strain that is easily transmissible and lethal to humans.[13]

Since the first H5N1 outbreak occurred in 1997, there has been an increasing number of HPAI H5N1 bird-to-human transmissions, leading to clinically severe and fatal human infections. However, because a significant species barrier exists between birds and humans, the virus does not easily cross over to humans, though some cases of infection are being researched to discern whether human-to-human transmission is occurring.[10] More research is necessary to understand the pathogenesis and epidemiology of the H5N1 virus in humans. Exposure routes and other disease transmission characteristics, such as genetic and immunological factors that may increase the likelihood of infection, are not clearly understood.[14]

On January 18, 2009, a 27-year-old woman from eastern China died of bird flu, Chinese authorities said, making her the second person to die from the deadly virus at that time. Two tests on the woman were positive for H5N1 avian influenza, said the ministry, which did not say how she might have contracted the virus.[15] Although millions of birds have become infected with the virus since its discovery, 262 humans had died from H5N1 in twelve countries according to WHO data as of August 31, 2009.[16] The avian flu claimed at least 200 human lives in Indonesia, Vietnam, Laos, Romania, China, Taiwan, Turkey and Russia.

Epidemiologists are afraid that the next time such a virus mutates, it could pass from human to human; however, the current A/H5N1 virus does not transmit easily from human to human. If this form of transmission occurs, another pandemic could result. Thus disease-control centers around the world are making avian flu a top priority. These organizations encourage poultry-related operations to develop a preemptive plan to prevent the spread of H5N1 and its potentially pandemic strains. The recommended plans center on providing protective clothing for workers and isolating flocks to prevent the spread of the virus.[17]

The Thailand outbreak of avian flu caused massive economic losses, especially among poultry workers. Infected birds were culled. The public lost confidence in poultry products, decreasing the consumption of chicken, and importing countries imposed bans. Several factors aggravated the spread of the virus, including bird migration, cool temperatures (which increase virus survival) and several festivals held at that time.[18]

[edit] In domestic animals


Several domestic species have been infected with, and shown symptoms of, H5N1 viral infection, including cats, dogs, ferrets, pigs, and birds.

[edit] Birds
In the United States, attempts are made to minimize the presence of highly pathogenic avian influenza (HPAI) in poultry through routine surveillance of poultry flocks in commercial poultry operations. Detection of an HPAI virus may result in the immediate culling of the flock. Less pathogenic viruses are controlled by vaccination, which is done primarily in turkey flocks (ATCvet codes: QI01AA23 for the inactivated fowl vaccine, QI01CL01 for the inactivated turkey combination vaccine).[19]

Rizal's Last Poem: Mi Ultimo Adios


Jose Rizal was executed on December 30, 1896. He had been imprisoned in Fort Santiago, Intramuros; he was a revolutionary, and his writings were said to incite insurgency. However, I don't think the Spanish needed too much of an excuse. Before his execution by firing squad at what is now Rizal (Luneta) Park, Jose Rizal wrote his last poem, Mi Ultimo Adios, or "My Ultimate Goodbye". Interestingly enough, his original manuscript was said to have no title; the title Mi Ultimo Adios was given by Mariano Ponce.

Mi Ultimo Adios

Farewell, my adored Land, region of the sun caressed,
Pearl of the Orient Sea, our Eden lost,
With gladness I give you my Life, sad and repressed;
And were it more brilliant, more fresh and at its best,
I would still give it to you for your welfare at most.

On the fields of battle, in the fury of fight,
Others give you their lives without pain or hesitancy,
The place does not matter: cypress laurel, lily white,
Scaffold, open field, conflict or martyrdom's site,
It is the same if asked by home and Country.

I die as I see tints on the sky b'gin to show
And at last announce the day, after a gloomy night;
If you need a hue to dye your matutinal glow,
Pour my blood and at the right moment spread it so,
And gild it with a reflection of your nascent light!

My dreams, when scarcely a lad adolescent,
My dreams when already a youth, full of vigor to attain,
Were to see you, gem of the sea of the Orient,
Your dark eyes dry, smooth brow held to a high plane
Without frown, without wrinkles and of shame without stain.

My life's fancy, my ardent, passionate desire,
Hail! Cries out the soul to you, that will soon part from thee;
Hail! How sweet 'tis to fall that fullness you may acquire;
To die to give you life, 'neath your skies to expire,
And in your mystic land to sleep through eternity!

If over my tomb some day, you would see blow,
A simple humble flow'r amidst thick grasses,
Bring it up to your lips and kiss my soul so,
And under the cold tomb, I may feel on my brow,
Warmth of your breath, a whiff of your tenderness.

Let the moon with soft, gentle light me descry,
Let the dawn send forth its fleeting, brilliant light,
In murmurs grave allow the wind to sigh,
And should a bird descend on my cross and alight,
Let the bird intone a song of peace o'er my site.

Let the burning sun the raindrops vaporize
And with my clamor behind return pure to the sky;
Let a friend shed tears over my early demise;
And on quiet afternoons when one prays for me on high,
Pray too, oh, my Motherland, that in God may rest I.

Pray thee for all the hapless who have died,
For all those who unequalled torments have undergone;
For our poor mothers who in bitterness have cried;
For orphans, widows and captives to tortures were shied,
And pray too that you may see your own redemption.

And when the dark night wraps the cemet'ry
And only the dead to vigil there are left alone,
Don't disturb their repose, don't disturb the mystery:
If you hear the sounds of cithern or psaltery,
It is I, dear Country, who, a song t'you intone.

And when my grave by all is no more remembered,
With neither cross nor stone to mark its place,
Let it be plowed by man, with spade let it be scattered
And my ashes ere to nothingness are restored,
Let them turn to dust to cover your earthly space.

Then it doesn't matter that you should forget me:
Your atmosphere, your skies, your vales I'll sweep;
Vibrant and clear note to your ears I shall be:
Aroma, light, hues, murmur, song, moanings deep,
Constantly repeating the essence of the faith I keep.

My idolized Country, for whom I most gravely pine,
Dear Philippines, to my last goodbye, oh, harken
There I leave all: my parents, loves of mine,
I'll go where there are no slaves, tyrants or hangmen
Where faith does not kill and where God alone does reign.

Farewell, parents, brothers, beloved by me,
Friends of my childhood, in the home distressed;
Give thanks that now I rest from the wearisome day;
Farewell, sweet stranger, my friend, who brightened my way;
Farewell, to all I love. To die is to rest.

My Final Farewell

Farewell, dear Fatherland, clime of the sun caress'd,
Pearl of the Orient seas, our Eden lost!
Gladly now I go to give thee this faded life's best,
And were it brighter, fresher, or more blest,
Still would I give it thee, nor count the cost.

On the field of battle, 'mid the frenzy of fight,
Others have given their lives, without doubt or heed;
The place matters not - cypress or laurel or lily white,
Scaffold or open plain, combat or martyrdom's plight,
'T is ever the same, to serve our home and country's need.

I die just when I see the dawn break,
Through the gloom of night, to herald the day;
And if color is lacking my blood thou shalt take,
Pour'd out at need for thy dear sake,
To dye with its crimson the waking ray.

My dreams, when life first opened to me,
My dreams, when the hopes of youth beat high,
Were to see thy lov'd face, O gem of the Orient sea,
From gloom and grief, from care and sorrow free;
No blush on thy brow, no tear in thine eye.

Dream of my life, my living and burning desire,
All hail! cries the soul that is now to take flight;
All hail! And sweet it is for thee to expire;
To die for thy sake, that thou mayst aspire;
And sleep in thy bosom eternity's long night.

If over my grave some day thou seest grow,
In the grassy sod, a humble flower,
Draw it to thy lips and kiss my soul so,
While I may feel on my brow in the cold tomb below
The touch of thy tenderness, thy breath's warm power.

Let the moon beam over me soft and serene,
Let the dawn shed over me its radiant flashes,
Let the wind with sad lament over me keen;
And if on my cross a bird should be seen,
Let it trill there its hymn of peace to my ashes.

Let the sun draw the vapors up to the sky,
And heavenward in purity bear my tardy protest;
Let some kind soul o'er my untimely fate sigh,
And in the still evening a prayer be lifted on high
From thee, O my country, that in God I may rest.

Pray for all those that hapless have died,
For all who have suffered the unmeasur'd pain;
For our mothers that bitterly their woes have cried,
For widows and orphans, for captives by torture tried,
And then for thyself that redemption thou mayst gain.

And when the dark night wraps the graveyard around,
With only the dead in their vigil to see,
Break not my repose or the mystery profound,
And perchance thou mayst hear a sad hymn resound;
'T is I, O my country, raising a song unto thee.

And even my grave is remembered no more,
Unmark'd by never a cross nor a stone,
Let the plow sweep through it, the spade turn it o'er,
That my ashes may carpet earthly floor,
Before into nothingness at last they are blown.

Then will oblivion bring to me no care,
As over thy vales and plains I sweep;
Throbbing and cleansed in thy space and air,
With color and light, with song and lament I fare,
Ever repeating the faith that I keep.

My Fatherland ador'd, that sadness to my sorrow lends,
Beloved Filipinas, hear now my last good-by!
I give thee all: parents and kindred and friends;
For I go where no slave before the oppressor bends,
Where faith can never kill, and God reigns e'er on high!

Farewell to you all, from my soul torn away,
Friends of my childhood in the home dispossessed!
Give thanks that I rest from the wearisome day!
Farewell to thee, too, sweet friend that lightened my way;
Beloved creatures all, farewell! In death there is rest!

(This is the 1911 translation by Charles Derbyshire of the Spanish original of José Rizal's poem, Mi Último Adiós.)

My Last Farewell

Farewell, beloved Country, treasured region of the sun,
Pearl of the sea of the Orient, our lost Eden!
To you eagerly I surrender this sad and gloomy life;
And were it brighter, fresher, more florid,
Even then I'd give it to you, for your sake alone.

In fields of battle, deliriously fighting,
Others give you their lives, without doubt, without regret;
The place matters not: where there's cypress, laurel or lily,
On a plank or open field, in combat or cruel martyrdom,
It's all the same if the home or country asks.

I die when I see the sky has unfurled its colors
And at last after a cloak of darkness announces the day;
If you need scarlet to tint your dawn,
Shed my blood, pour it as the moment comes,
And may it be gilded by a reflection of the heaven's newly-born light.

My dreams, when scarcely an adolescent,
My dreams, when a young man already full of life,
Were to see you one day, jewel of the sea of the Orient,
Dry those eyes of black, that forehead high,
Without frown, without wrinkles, without stains of shame.

My lifelong dream, my deep burning desire,
This soul that will soon depart cries out: Salud!
To your health! Oh how beautiful to fall to give you flight,
To die to give you life, to die under your sky,
And in your enchanted land eternally sleep.

If upon my grave one day you see appear,
Amidst the dense grass, a simple humble flower,
Place it near your lips and my soul you'll kiss,
And on my brow may I feel, under the cold tomb,
The gentle blow of your tenderness, the warmth of your breath.

Let the moon see me in a soft and tranquil light,
Let the dawn send its fleeting radiance,
Let the wind moan with its low murmur,
And should a bird descend and rest on my cross,
Let it sing its canticle of peace.

Let the burning sun evaporate the rains,
And with my clamor behind, towards the sky may they turn pure;
Let a friend mourn my early demise,
And in the serene afternoons, when someone prays for me,
O Country, pray to God also for my rest!

Pray for all the unfortunate ones who died,
For all who suffered torments unequaled,
For our poor mothers who in their grief and bitterness cry,
For orphans and widows, for prisoners in torture,
And for yourself pray that your final redemption you'll see.

And when the cemetery is enveloped in dark night,
And there, alone, only those who have gone remain in vigil,
Disturb not their rest, nor the mystery,
And should you hear chords from a zither or psaltery,
It is I, beloved Country, singing to you.

And when my grave, then by all forgotten,
Has not a cross nor stone to mark its place,
Let men plow and with a spade scatter it,
And before my ashes return to nothing,
May they be the dust that carpets your fields.

Then nothing matters, cast me in oblivion.
Your atmosphere, your space and valleys I'll cross.
I will be a vibrant and clear note to your ears,
Aroma, light, colors, murmur, moan, and song,
Constantly repeating the essence of my faith.

My idolized country, sorrow of my sorrows,
Beloved Filipinas, hear my last good-bye.
There I leave you all, my parents, my loves.
I'll go where there are no slaves, hangmen nor oppressors,
Where faith doesn't kill, where the one who reigns is God.

Goodbye, dear parents, brother and sisters, fragments of my soul,
Childhood friends in the home now lost,
Give thanks that I rest from this wearisome day;
Goodbye, sweet foreigner, my friend, my joy;
Farewell, loved ones, to die is to rest.

José Rizal, 1896
(Modern English translation by Edwin Agustín Lozada)
