
Predator Tuesday, November 13, 2007 09:59 AM

EDS- notes
 
[B]Eye (anatomy)

I -INTRODUCTION[/B]
Eye (anatomy), light-sensitive organ of vision in animals. The eyes of various species vary from simple structures that are capable only of differentiating between light and dark to complex organs, such as those of humans and other mammals, that can distinguish minute variations of shape, color, brightness, and distance. The actual process of seeing is performed by the brain rather than by the eye. The function of the eye is to translate the electromagnetic vibrations of light into patterns of nerve impulses that are transmitted to the brain.

[B]II -THE HUMAN EYE[/B]
The entire eye, often called the eyeball, is a spherical structure approximately 2.5 cm (about 1 in) in diameter with a pronounced bulge on its forward surface. The outer part of the eye is composed of three layers of tissue. The outside layer is the sclera, a protective coating. It covers about five-sixths of the surface of the eye. At the front of the eyeball, it is continuous with the bulging, transparent cornea. The middle layer of the coating of the eye is the choroid, a vascular layer lining the posterior three-fifths of the eyeball. The choroid is continuous with the ciliary body and with the iris, which lies at the front of the eye. The innermost layer is the light-sensitive retina.

The cornea is a tough, five-layered membrane through which light is admitted to the interior of the eye. Behind the cornea is a chamber filled with clear, watery fluid, the aqueous humor, which separates the cornea from the crystalline lens. The lens itself is a flattened sphere constructed of a large number of transparent fibers arranged in layers. It is connected by ligaments to a ringlike muscle, called the ciliary muscle, which surrounds it. The ciliary muscle and its surrounding tissues form the ciliary body. This muscle, by flattening the lens or making it more nearly spherical, changes its focal length.
The pigmented iris hangs behind the cornea in front of the lens, and has a circular opening in its center. The size of its opening, the pupil, is controlled by a muscle around its edge. This muscle contracts or relaxes, making the pupil larger or smaller, to control the amount of light admitted to the eye.
Behind the lens the main body of the eye is filled with a transparent, jellylike substance, the vitreous humor, enclosed in a thin sac, the hyaloid membrane. The pressure of the vitreous humor keeps the eyeball distended.

The retina is a complex layer, composed largely of nerve cells. The light-sensitive receptor cells lie on the outer surface of the retina in front of a pigmented tissue layer. These cells take the form of rods or cones packed closely together like matches in a box. Directly behind the pupil is a small yellow-pigmented spot, the macula lutea, in the center of which is the fovea centralis, the area of greatest visual acuity of the eye. At the center of the fovea, the sensory layer is composed entirely of cone-shaped cells. Around the fovea both rod-shaped and cone-shaped cells are present, with the cone-shaped cells becoming fewer toward the periphery of the sensitive area. At the outer edges are only rod-shaped cells.

Where the optic nerve enters the eyeball, below and slightly to the inner side of the fovea, a small round area of the retina exists that has no light-sensitive cells. This optic disk forms the blind spot of the eye.

[B]III -FUNCTIONING OF THE EYE[/B]
In general the eyes of all animals resemble simple cameras in that the lens of the eye forms an inverted image of objects in front of it on the sensitive retina, which corresponds to the film in a camera.
Focusing the eye, as mentioned above, is accomplished by a flattening or thickening (rounding) of the lens. The process is known as accommodation. In the normal eye accommodation is not necessary for seeing distant objects. The lens, when flattened by the suspensory ligament, brings such objects to focus on the retina. For nearer objects the lens is increasingly rounded by ciliary muscle contraction, which relaxes the suspensory ligament. A young child can see clearly at a distance as close as 6.3 cm (2.5 in), but with increasing age the lens gradually hardens, so that the limits of close seeing are approximately 15 cm (about 6 in) at the age of 30 and 40 cm (16 in) at the age of 50. In the later years of life most people lose the ability to accommodate their eyes to distances within reading or close working range. This condition, known as presbyopia, can be corrected by the use of special convex lenses for the near range.
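A rough way to quantify this loss of accommodation (a simple sketch, not part of the original notes) is to assume the relaxed eye is focused at infinity; the accommodation needed to focus on a near point is then just the reciprocal of that distance in metres, expressed in diopters. The snippet below applies that rule of thumb to the distances quoted above.

[CODE]
# Rough sketch: amplitude of accommodation (in diopters) implied by a near
# point, assuming the relaxed eye is focused at infinity (a simplification).
near_points_cm = {
    "young child": 6.3,
    "age 30": 15.0,
    "age 50": 40.0,
}

for label, cm in near_points_cm.items():
    diopters = 1.0 / (cm / 100.0)  # reciprocal of the near point in metres
    print(f"{label}: near point {cm} cm -> about {diopters:.1f} diopters")
[/CODE]

The fall from roughly 16 diopters in childhood to about 2.5 diopters at age 50 is the presbyopia described above.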

Structural differences in the size of the eye cause the defects of hyperopia, or farsightedness, and myopia, or nearsightedness. See Eyeglasses; Vision.
As mentioned above, the eye sees with greatest clarity only in the region of the fovea, because of the neural structure of the retina. The cone-shaped cells of the retina are individually connected to other nerve fibers, so that stimuli to each individual cell are reproduced and, as a result, fine details can be distinguished. The rod-shaped cells, on the other hand, are connected in groups so that they respond to stimuli over a general area.

The rods, therefore, respond to small total light stimuli, but do not have the ability to separate small details of the visual image. The result of these differences in structure is that the visual field of the eye is composed of a small central area of great sharpness surrounded by an area of lesser sharpness. In the latter area, however, the sensitivity of the eye to light is great. As a result, dim objects can be seen at night on the peripheral part of the retina when they are invisible to the central part.

The mechanism of seeing at night involves the sensitization of the rod cells by means of a pigment, called visual purple or rhodopsin, that is formed within the cells. Vitamin A is necessary for the production of visual purple; a deficiency of this vitamin leads to night blindness. Visual purple is bleached by the action of light and must be reformed by the rod cells under conditions of darkness. Hence a person who steps from sunlight into a darkened room cannot see until the pigment begins to form. When the pigment has formed and the eyes are sensitive to low levels of illumination, the eyes are said to be dark-adapted.

A brownish pigment present in the outer layer of the retina serves to protect the cone cells of the retina from overexposure to light. If bright light strikes the retina, granules of this brown pigment migrate to the spaces around the cone cells, sheathing and screening them from the light. This action, called light adaptation, has the opposite effect to that of dark adaptation.
Subjectively, a person is not conscious that the visual field consists of a central zone of sharpness surrounded by an area of increasing fuzziness.

The reason is that the eyes are constantly moving, bringing first one part of the visual field and then another to the foveal region as the attention is shifted from one object to another. These motions are accomplished by six muscles that move the eyeball upward, downward, to the left, to the right, and obliquely. The motions of the eye muscles are extremely precise; it has been estimated that the eyes can be moved to focus on no fewer than 100,000 distinct points in the visual field. The muscles of the two eyes, working together, also serve the important function of converging the eyes on any point being observed, so that the images of the two eyes coincide. When convergence is nonexistent or faulty, double vision results. The movement of the eyes and fusion of the images also play a part in the visual estimation of size and distance.

[B]IV -PROTECTIVE STRUCTURES[/B]
Several structures, not parts of the eyeball, contribute to the protection of the eye. The most important of these are the eyelids, two folds of skin and tissue, upper and lower, that can be closed by means of muscles to form a protective covering over the eyeball against excessive light and mechanical injury.
The eyelashes, a fringe of short hairs growing on the edge of either eyelid, act as a screen to keep dust particles and insects out of the eyes when the eyelids are partly closed. Inside the eyelids is a thin protective membrane, the conjunctiva, which doubles over to cover the visible sclera. Each eye also has a tear gland, or lacrimal organ, situated at the outside corner of the eye. The salty secretion of these glands lubricates the forward part of the eyeball when the eyelids are closed and flushes away any small dust particles or other foreign matter on the surface of the eye. Normally the eyelids of human eyes close by reflex action about every six seconds, but if dust reaches the surface of the eye and is not washed away, the eyelids blink more often and more tears are produced. On the edges of the eyelids are a number of small glands, the Meibomian glands, which produce a fatty secretion that lubricates the eyelids themselves and the eyelashes. The eyebrows, located above each eye, also have a protective function in soaking up or deflecting perspiration or rain and preventing the moisture from running into the eyes. The hollow socket in the skull in which the eye is set is called the orbit. The bony edges of the orbit, the frontal bone, and the cheekbone protect the eye from mechanical injury by blows or collisions.

[B]V -COMPARATIVE ANATOMY[/B]
The simplest animal eyes occur in the cnidarians and ctenophores, phyla comprising the jellyfish and somewhat similar primitive animals. These eyes, known as pigment eyes, consist of groups of pigment cells associated with sensory cells and often covered with a thickened layer of cuticle that forms a kind of lens. Similar eyes, usually having a somewhat more complex structure, occur in worms, insects, and mollusks.

Two kinds of image-forming eyes are found in the animal world, single and compound eyes. The single eyes are essentially similar to the human eye, though varying from group to group in details of structure. The lowest species to develop such eyes are some of the large jellyfish. Compound eyes, confined to the arthropods (see Arthropod), consist of a faceted lens, each facet of which forms a separate image on a retinal cell, creating a mosaic field. In some arthropods the structure is more sophisticated, forming a combined image.

The eyes of other vertebrates are essentially similar to human eyes, although important modifications may exist. The eyes of such nocturnal animals as cats, owls, and bats are provided only with rod cells, and the cells are both more sensitive and more numerous than in humans. The eye of a dolphin has 7000 times as many rod cells as a human eye, enabling it to see in deep water. The eyes of most fish have a flat cornea and a globular lens and are hence particularly adapted for seeing close objects. Birds’ eyes are elongated from front to back, permitting larger images of distant objects to be formed on the retina.

[B]VI -EYE DISEASES[/B]
Eye disorders may be classified according to the part of the eye in which the disorders occur.

The most common disease of the eyelids is hordeolum, known commonly as a sty, which is an infection of the follicles of the eyelashes, usually caused by staphylococci. Internal sties that occur inside the eyelid and not on its edge are similar infections of the lubricating Meibomian glands. Abscesses of the eyelids are sometimes the result of penetrating wounds. Several congenital defects of the eyelids occasionally occur, including coloboma, or cleft eyelid, and ptosis, a drooping of the upper lid. Among acquired defects are symblepharon, an adhesion of the inner surface of the eyelid to the eyeball, which is most frequently the result of burns. Entropion, the turning of the eyelid inward toward the cornea, and ectropion, the turning of the eyelid outward, can be caused by scars or by spasmodic muscular contractions resulting from chronic irritation.
The eyelids also are subject to several diseases of the skin such as eczema and acne, and to both benign and malignant tumors. Another eye disease is infection of the conjunctiva, the mucous membranes covering the inside of the eyelids and the outside of the eyeball. See Conjunctivitis; Trachoma.
Disorders of the cornea, which may result in a loss of transparency and impaired sight, are usually the result of injury but may also occur as a secondary result of disease; for example, edema, or swelling, of the cornea sometimes accompanies glaucoma.

The choroid, or middle coat of the eyeball, contains most of the blood vessels of the eye; it is often the site of secondary infections from toxic conditions and bacterial infections such as tuberculosis and syphilis. Cancer may develop in the choroidal tissues or may be carried to the eye from malignancies elsewhere in the body. The light-sensitive retina, which lies just beneath the choroid, also is subject to the same type of infections. The cause of retrolental fibroplasia, however—a disease of premature infants that causes retinal detachment and partial blindness—is unknown. Retinal detachment may also follow cataract surgery. Laser beams are sometimes used to weld detached retinas back onto the eye. Another retinal condition, called macular degeneration, affects the central retina. Macular degeneration is a frequent cause of loss of vision in older persons. Juvenile forms of this condition also exist.

The optic nerve contains the retinal nerve fibers, which carry visual impulses to the brain. The retinal circulation is carried by the central artery and vein, which lie in the optic nerve. The sheath of the optic nerve communicates with the cerebral lymph spaces. Inflammation of that part of the optic nerve situated within the eye is known as optic neuritis, or papillitis; when inflammation occurs in the part of the optic nerve behind the eye, the disease is called retrobulbar neuritis. When pressure inside the skull is elevated (increased intracranial pressure), as in brain tumors, edema and swelling of the optic disk occur where the nerve enters the eyeball, a condition known as papilledema, or choked disk.
For disorders of the crystalline lens, see Cataract. See also Color Blindness.

[B]VII -EYE BANK[/B]
Eye banks are organizations that distribute corneal tissue taken from deceased persons for eye grafts. Blindness caused by cloudiness or scarring of the cornea can sometimes be cured by surgical removal of the affected portion of the corneal tissue. With present techniques, such tissue can be kept alive for only 48 hours, but current experiments in preserving human corneas by freezing give hope of extending its useful life for months. Eye banks also preserve and distribute vitreous humor, the liquid within the larger chamber of the eye, for use in treatment of detached retinas. The first eye bank was opened in New York City in 1945. The Eye-Bank Association of America, in Rochester, New York, acts as a clearinghouse for information.

Predator Tuesday, November 13, 2007 10:02 AM

[B] Fingerprinting

I -INTRODUCTION[/B]
Fingerprinting, method of identification using the impression made by the minute ridge formations or patterns found on the fingertips. No two persons have exactly the same arrangement of ridge patterns, and the patterns of any one individual remain unchanged through life. To obtain a set of fingerprints, the ends of the fingers are inked and then pressed or rolled one by one on some receiving surface. Fingerprints may be classified and filed on the basis of the ridge patterns, setting up an identification system that is almost infallible.

[B]II -HISTORY[/B]
The first recorded use of fingerprints was by the ancient Assyrians and Chinese for the signing of legal documents. Probably the first modern study of fingerprints was made by the Czech physiologist Johannes Evangelista Purkinje, who in 1823 proposed a system of classification that attracted little attention. The use of fingerprints for identification purposes was proposed late in the 19th century by the British scientist Sir Francis Galton, who wrote a detailed study of fingerprints in which he presented a new classification system using prints of all ten fingers, which is the basis of identification systems still in use. In the 1890s the police in Bengal, India, under the British police official Sir Edward Richard Henry, began using fingerprints to identify criminals. As assistant commissioner of metropolitan police, Henry established the first British fingerprint files in London in 1901. Subsequently, the use of fingerprinting as a means for identifying criminals spread rapidly throughout Europe and the United States, superseding the old Bertillon system of identification by means of body measurements.

[B]III -MODERN USE [/B]
As crime-detection methods improved, law enforcement officers found that any smooth, hard surface touched by a human hand would yield fingerprints made by the oily secretion present on the skin. When these so-called latent prints were dusted with powder or chemically treated, the identifying fingerprint pattern could be seen and photographed or otherwise preserved. Today, law enforcement agencies can also use computers to digitally record fingerprints and to transmit them electronically to other agencies for comparison. By comparing fingerprints at the scene of a crime with the fingerprint record of suspected persons, officials can establish absolute proof of the presence or identity of a person.

The confusion and inefficiency caused by the establishment of many separate fingerprint archives in the United States led the federal government to set up a central agency in 1924, the Identification Division of the Federal Bureau of Investigation (FBI). This division was absorbed in 1993 by the FBI’s Criminal Justice Information Services Division, which now maintains the world’s largest fingerprint collection. Currently the FBI has a library of more than 234 million civil and criminal fingerprint cards, representing 81 million people. In 1999 the FBI began full operation of the Integrated Automated Fingerprint Identification System (IAFIS), a computerized system that stores digital images of fingerprints for more than 36 million individuals, along with each individual’s criminal history if one exists. Using IAFIS, authorities can conduct automated searches to identify people from their fingerprints and determine whether they have a criminal record. The system also gives state and local law enforcement agencies the ability to electronically transmit fingerprint information to the FBI. The implementation of IAFIS represented a breakthrough in crimefighting by reducing the time needed for fingerprint identification from weeks to minutes or hours.

Predator Tuesday, November 13, 2007 10:05 AM

[B][U]Infrared Radiation[/U][/B]

Infrared Radiation, emission of energy as electromagnetic waves in the portion of the spectrum just beyond the limit of the red portion of visible radiation (see Electromagnetic Radiation). The wavelengths of infrared radiation are shorter than those of radio waves and longer than those of light waves. They range between approximately 10^-6 and 10^-3 m (about 0.00004 and 0.04 in).
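As a quick check of those figures (a trivial sketch, using only the metric values quoted above and the definition 1 in = 0.0254 m):

[CODE]
# Convert the quoted infrared wavelength range from metres to inches.
for metres in (1e-6, 1e-3):
    print(f"{metres:g} m = {metres / 0.0254:.5f} in")
# prints roughly 0.00004 in and 0.03937 in, matching the figures above
[/CODE]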

Infrared radiation may be detected as heat, and instruments such as bolometers are used to detect it. See Radiation; Spectrum.
Infrared radiation is used to obtain pictures of distant objects obscured by atmospheric haze, because visible light is scattered by haze but infrared radiation is not. The detection of infrared radiation is used by astronomers to observe stars and nebulas that are invisible in ordinary light or that emit radiation in the infrared portion of the spectrum.

An opaque filter that admits only infrared radiation is used for very precise infrared photographs, but an ordinary orange or light-red filter, which will absorb blue and violet light, is usually sufficient for most infrared pictures. Developed about 1880, infrared photography has today become an important diagnostic tool in medical science as well as in agriculture and industry. Use of infrared techniques reveals pathogenic conditions that are not visible to the eye or recorded on X-ray plates. Remote sensing by means of aerial and orbital infrared photography has been used to monitor crop conditions and insect and disease damage to large agricultural areas, and to locate mineral deposits. See Aerial Survey; Satellite, Artificial. In industry, infrared spectroscopy forms an increasingly important part of metal and alloy research, and infrared photography is used to monitor the quality of products. See also Photography: Photographic Films.

Infrared devices such as those used during World War II enable sharpshooters to see their targets in total visual darkness. These instruments consist essentially of an infrared lamp that sends out a beam of infrared radiation, often referred to as black light, and a telescope receiver that picks up returned radiation from the object and converts it to a visible image.

Predator Tuesday, November 13, 2007 10:13 AM

[B]Deoxyribonucleic Acid

I -INTRODUCTION[/B]
Deoxyribonucleic Acid (DNA), genetic material of all cellular organisms and most viruses. DNA carries the information needed to direct protein synthesis and replication. Protein synthesis is the production of the proteins needed by the cell or virus for its activities and development. Replication is the process by which DNA copies itself for each descendant cell or virus, passing on the information needed for protein synthesis. In most cellular organisms, DNA is organized on chromosomes located in the nucleus of the cell.

[B]II -STRUCTURE[/B]
A molecule of DNA consists of two strands, each composed of a large number of chemical compounds, called nucleotides, linked together to form a chain. These chains are arranged like a ladder that has been twisted into the shape of a winding staircase, called a double helix. Each nucleotide consists of three units: a sugar molecule called deoxyribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. The four bases are adenine (A), guanine (G), thymine (T), and cytosine (C). The deoxyribose molecule occupies the center position in the nucleotide, flanked by a phosphate group on one side and a base on the other. The phosphate group of each nucleotide is also linked to the deoxyribose of the adjacent nucleotide in the chain. These linked deoxyribose-phosphate subunits form the parallel side rails of the ladder. The bases face inward toward each other, forming the rungs of the ladder.

The nucleotides in one DNA strand have a specific association with the corresponding nucleotides in the other DNA strand. Because of the chemical affinity of the bases, nucleotides containing adenine are always paired with nucleotides containing thymine, and nucleotides containing cytosine are always paired with nucleotides containing guanine. The complementary bases are joined to each other by weak chemical bonds called hydrogen bonds.
In 1953 American biochemist James D. Watson and British biophysicist Francis Crick published the first description of the structure of DNA. Their model proved to be so important for the understanding of protein synthesis, DNA replication, and mutation that they were awarded the 1962 Nobel Prize for physiology or medicine for their work.
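The pairing rules just described (adenine with thymine, cytosine with guanine) are simple enough to express directly in code. The following is an illustrative sketch only, with an invented example sequence; it is not taken from the notes above.

[CODE]
# Base-pairing sketch: build the strand complementary to a given DNA strand
# using the A-T and C-G pairing rules described above.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary base for each position in the strand."""
    # Strictly, the two strands of a double helix run in opposite directions,
    # so the biological partner strand is this result read in reverse.
    return "".join(PAIRS[base] for base in strand)

print(complement("ATGCCGTA"))  # -> TACGGCAT
[/CODE]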

[B]III -PROTEIN SYNTHESIS[/B]
DNA carries the instructions for the production of proteins. A protein is composed of smaller molecules called amino acids, and the structure and function of the protein is determined by the sequence of its amino acids. The sequence of amino acids, in turn, is determined by the sequence of nucleotide bases in the DNA. A sequence of three nucleotide bases, called a triplet, is the genetic code word, or codon, that specifies a particular amino acid. For instance, the triplet GAC (guanine, adenine, and cytosine) is the codon for the amino acid leucine, and the triplet CAG (cytosine, adenine, and guanine) is the codon for the amino acid valine. A protein consisting of 100 amino acids is thus encoded by a DNA segment consisting of 300 nucleotides. Of the two polynucleotide chains that form a DNA molecule, only one strand contains the information needed for the production of a given amino acid sequence. The other strand aids in replication.

Protein synthesis begins with the separation of a DNA molecule into two strands. In a process called transcription, a section of one strand acts as a template, or pattern, to produce a new strand called messenger RNA (mRNA). The mRNA leaves the cell nucleus and attaches to the ribosomes, specialized cellular structures that are the sites of protein synthesis. Amino acids are carried to the ribosomes by another type of RNA, called transfer RNA (tRNA). In a process called translation, the amino acids are linked together in a particular sequence, dictated by the mRNA, to form a protein.
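A toy sketch of these two steps, transcription and then translation, is shown below. The codon assignments are a small fragment of the standard genetic code (for example, mRNA CUG is leucine and GUC is valine, the transcribed complements of the DNA triplets GAC and CAG mentioned earlier); the template sequence itself is invented for illustration.

[CODE]
# Toy model of transcription and translation.
# Transcription: a DNA template strand is copied into mRNA (A->U, T->A, C->G, G->C).
# Translation: the mRNA is read three bases (one codon) at a time.

TRANSCRIBE = {"A": "U", "T": "A", "C": "G", "G": "C"}

# Small fragment of the standard genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "CUG": "Leu",
    "GUC": "Val", "GAC": "Asp", "UAA": "STOP",
}

def transcribe(template: str) -> str:
    """Build the mRNA strand complementary to a DNA template strand."""
    return "".join(TRANSCRIBE[base] for base in template)

def translate(mrna: str) -> list:
    """Read the mRNA codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

template = "TACAAAGACCAGCTGATT"          # invented 18-base template strand
mrna = transcribe(template)              # -> "AUGUUUCUGGUCGACUAA"
print(mrna, translate(mrna))             # -> ['Met', 'Phe', 'Leu', 'Val', 'Asp']
[/CODE]

A protein of 100 amino acids would need 100 such codons, which is the 300-nucleotide figure given above.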

A gene is a sequence of DNA nucleotides that specify the order of amino acids in a protein via an intermediary mRNA molecule. Substituting one DNA nucleotide with another containing a different base causes all descendant cells or viruses to have the altered nucleotide base sequence. As a result of the substitution, the sequence of amino acids in the resulting protein may also be changed. Such a change in a DNA molecule is called a mutation. Most mutations are the result of errors in the replication process. Exposure of a cell or virus to radiation or to certain chemicals increases the likelihood of mutations.

[B]IV -REPLICATION[/B]
In most cellular organisms, replication of a DNA molecule takes place in the cell nucleus and occurs just before the cell divides. Replication begins with the separation of the two polynucleotide chains, each of which then acts as a template for the assembly of a new complementary chain. As the old chains separate, each nucleotide in the two chains attracts a complementary nucleotide that has been formed earlier by the cell. The nucleotides are joined to one another by hydrogen bonds to form the rungs of a new DNA molecule. As the complementary nucleotides are fitted into place, an enzyme called DNA polymerase links them together by bonding the phosphate group of one nucleotide to the sugar molecule of the adjacent nucleotide, forming the side rail of the new DNA molecule. This process continues until a new polynucleotide chain has been formed alongside the old one, forming a new double-helix molecule.

[B]V -TOOLS AND PROCEDURES[/B]
Scientists use several tools and procedures to study and manipulate DNA. Specialized enzymes, called restriction enzymes, found in bacteria act like molecular scissors to cut the phosphate backbones of DNA molecules at specific base sequences. Strands of DNA that have been cut with restriction enzymes are left with single-stranded tails that are called sticky ends, because they can easily realign with tails from certain other DNA fragments. Scientists take advantage of restriction enzymes and the sticky ends generated by these enzymes to carry out recombinant DNA technology, or genetic engineering. This technology involves removing a specific gene from one organism and inserting the gene into another organism.

Another tool for working with DNA is a procedure called polymerase chain reaction (PCR). This procedure uses the enzyme DNA polymerase to make copies of DNA strands in a process that mimics the way in which DNA replicates naturally within cells. Scientists use PCR to obtain vast numbers of copies of a given segment of DNA.
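Since each cycle roughly doubles the number of copies, PCR amplification is exponential. A back-of-the-envelope sketch (idealized, assuming perfect doubling in every cycle):

[CODE]
# Idealized PCR arithmetic: every cycle doubles the number of DNA copies.
start_copies = 1
for cycles in (10, 20, 30):
    print(f"after {cycles} cycles: about {start_copies * 2 ** cycles:,} copies")
# after 30 cycles a single starting molecule yields roughly a billion copies
[/CODE]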

DNA fingerprinting, also called DNA typing, makes it possible to compare samples of DNA from various sources in a manner that is analogous to the comparison of fingerprints. In this procedure, scientists use restriction enzymes to cleave a sample of DNA into an assortment of fragments. Solutions containing these fragments are placed at the surface of a gel to which an electric current is applied. The electric current causes the DNA fragments to move through the gel. Because smaller fragments move more quickly than larger ones, this process, called electrophoresis, separates the fragments according to their size. The fragments are then marked with probes and exposed on X-ray film, where they form the DNA fingerprint—a pattern of characteristic black bars that is unique for each type of DNA.
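The core idea, cutting at a fixed recognition sequence and then sorting the resulting fragments by size, can be sketched in a few lines. In the illustrative snippet below the sample sequence is invented; the recognition site GAATTC is the one recognized by the common restriction enzyme EcoRI, which cuts one base into the site (G^AATTC) and so leaves the sticky ends mentioned earlier.

[CODE]
# Sketch of a restriction digest followed by size sorting, mimicking the
# way electrophoresis separates fragments by length in a gel.

SITE = "GAATTC"  # recognition sequence of EcoRI (used here purely as an example)

def digest(sequence: str, site: str = SITE, cut_offset: int = 1) -> list:
    """Cut the sequence inside every occurrence of the recognition site.

    cut_offset=1 models EcoRI's G^AATTC cut on this strand."""
    fragments, start = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(sequence[start:pos + cut_offset])
        start = pos + cut_offset
        pos = sequence.find(site, start)
    fragments.append(sequence[start:])
    return fragments

sample = "TTACGAATTCGGCTAGCTAGGAATTCAA"   # invented sample sequence
for fragment in sorted(digest(sample), key=len):
    print(len(fragment), fragment)
# -> 5 TTACG / 7 AATTCAA / 16 AATTCGGCTAGCTAGG (shortest fragments travel farthest)
[/CODE]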
A procedure called DNA sequencing makes it possible to determine the precise order, or sequence, of nucleotide bases within a fragment of DNA. Most versions of DNA sequencing use a technique called primer extension, developed by British molecular biologist Frederick Sanger.
In primer extension, specific pieces of DNA are replicated and modified, so that each DNA segment ends in a fluorescent form of one of the four nucleotide bases. Modern DNA sequencers, pioneered by American molecular biologist Leroy Hood, incorporate both lasers and computers. Scientists have completely sequenced the genetic material of several microorganisms, including the bacterium Escherichia coli. In 1998, scientists achieved the milestone of sequencing the complete genome of a multicellular organism—a roundworm identified as Caenorhabditis elegans. The Human Genome Project, an international research collaboration, has been established to determine the sequence of all of the three billion nucleotide base pairs that make up the human genetic material.

An instrument known as optical tweezers enables scientists to manipulate the three-dimensional structure of DNA molecules. The instrument uses laser beams that act like tweezers, attaching to the ends of a DNA molecule and pulling on them. By manipulating these laser beams, scientists can stretch, or uncoil, fragments of DNA. This work is helping reveal how DNA changes its three-dimensional shape as it interacts with enzymes.

[B]VI -APPLICATIONS[/B]
Research into DNA has had a significant impact on medicine. Through recombinant DNA technology, scientists can modify microorganisms so that they become so-called factories that produce large quantities of medically useful drugs. This technology is used to produce insulin, which is a drug used by diabetics, and interferon, which is used by some cancer patients. Studies of human DNA are revealing genes that are associated with specific diseases, such as cystic fibrosis and breast cancer. This information is helping physicians to diagnose various diseases, and it may lead to new treatments. For example, physicians are using a technology called chimeraplasty, which involves a synthetic molecule containing both DNA and RNA strands, in an effort to develop a treatment for a form of hemophilia.

Forensic science uses techniques developed in DNA research to identify individuals who have committed crimes. DNA from semen, skin, or blood taken from the crime scene can be compared with the DNA of a suspect, and the results can be used in court as evidence.

DNA has helped taxonomists determine evolutionary relationships among animals, plants, and other life forms. Closely related species have more similar DNA than do species that are distantly related. One surprising finding to emerge from DNA studies is that vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa (see Classification).

Techniques of DNA manipulation are used in farming, in the form of genetic engineering and biotechnology. Strains of crop plants to which genes have been transferred may produce higher yields and may be more resistant to insects. Cattle have been similarly treated to increase milk and beef production, as have hogs, to yield more meat with less fat.

[B]VII -SOCIAL ISSUES[/B]
Despite the many benefits offered by DNA technology, some critics argue that its development should be monitored closely. One fear raised by such critics is that DNA fingerprinting could provide a means for employers to discriminate against members of various ethnic groups. Critics also fear that studies of people’s DNA could permit insurance companies to deny health insurance to those people at risk for developing certain diseases. The potential use of DNA technology to alter the genes of embryos is a particularly controversial issue.

The use of DNA technology in agriculture has also sparked controversy. Some people question the safety, desirability, and ecological impact of genetically altered crop plants. In addition, animal rights groups have protested against the genetic engineering of farm animals.

Despite these and other areas of disagreement, many people agree that DNA technology offers a mixture of benefits and potential hazards. Many experts also agree that an informed public can help assure that DNA technology is used wisely.

Predator Tuesday, November 13, 2007 10:21 AM

Blood
 
[B]Blood

I -INTRODUCTION[/B]
Blood, vital fluid found in humans and other animals that provides important nourishment to all body organs and tissues and carries away waste materials. Sometimes referred to as “the river of life,” blood is pumped from the heart through a network of blood vessels collectively known as the circulatory system.

An adult human has about 5 to 6 liters (about 1.3 to 1.6 gal) of blood, which is roughly 7 to 8 percent of total body weight. Infants and children have correspondingly lower volumes of blood, roughly proportionate to their smaller size. The volume of blood in an individual fluctuates. During dehydration, for example while running a marathon, blood volume decreases. Blood volume increases in circumstances such as pregnancy, when the mother’s blood needs to carry extra oxygen and nutrients to the baby.

[B]II -ROLE OF BLOOD[/B]
Blood carries oxygen from the lungs to all the other tissues in the body and, in turn, carries waste products, predominantly carbon dioxide, back to the lungs where they are released into the air. When oxygen transport fails, a person dies within a few minutes. Food that has been processed by the digestive system into smaller components such as proteins, fats, and carbohydrates is also delivered to the tissues by the blood. These nutrients provide the materials and energy needed by individual cells for metabolism, or the performance of cellular function. Waste products produced during metabolism, such as urea and uric acid, are carried by the blood to the kidneys, where they are transferred from the blood into urine and eliminated from the body. In addition to oxygen and nutrients, blood also transports special chemicals, called hormones, that regulate certain body functions. The movement of these chemicals enables one organ to control the function of another even though the two organs may be located far apart. In this way, the blood acts not just as a means of transportation but also as a communications system.

The blood is more than a pipeline for nutrients and information; it is also responsible for the activities of the immune system, helping fend off infection and fight disease. In addition, blood carries the means for stopping itself from leaking out of the body after an injury. The blood does this by carrying special cells and proteins, known as the coagulation system, that start to form clots within a matter of seconds after injury.

Blood is vital to maintaining a stable body temperature; in humans, body temperature normally fluctuates within a degree of 37.0° C (98.6° F). Heat production and heat loss in various parts of the body are balanced out by heat transfer via the bloodstream. This is accomplished by varying the diameter of blood vessels in the skin. When a person becomes overheated, the vessels dilate and an increased volume of blood flows through the skin. Heat dissipates through the skin, effectively lowering the body temperature. The increased flow of blood in the skin makes the skin appear pink or flushed. When a person is cold, the skin may become pale as the vessels narrow, diverting blood from the skin and reducing heat loss.

[B]III -COMPOSITION OF BLOOD[/B]
About 55 percent of the blood is composed of a liquid known as plasma. The rest of the blood is made of three major types of cells: red blood cells (also known as erythrocytes), white blood cells (leukocytes), and platelets (thrombocytes).

[B]A Plasma[/B]
Plasma consists predominantly of water and salts. The kidneys carefully maintain the salt concentration in plasma because small changes in its concentration will cause cells in the body to function improperly. In extreme conditions this can result in seizures, coma, or even death. The pH of plasma, the common measurement of the plasma’s acidity, is also carefully controlled by the kidneys within the neutral range of 6.8 to 7.7. Plasma also contains other small molecules, including vitamins, minerals, nutrients, and waste products. The concentrations of all of these molecules must be carefully regulated.

Plasma is usually yellow in color due to proteins dissolved in it. However, after a person eats a fatty meal, that person’s plasma temporarily develops a milky color as the blood carries the ingested fats from the intestines to other organs of the body.

Plasma carries a large number of important proteins, including albumin, gamma globulin, and clotting factors. Albumin is the main protein in blood. It helps regulate the water content of tissues and blood. Gamma globulin is composed of tens of thousands of unique antibody molecules. Antibodies neutralize or help destroy infectious organisms. Each antibody is designed to target one specific invading organism. For example, chicken pox antibody will target chicken pox virus, but will leave an influenza virus unharmed. Clotting factors, such as fibrinogen, are involved in forming blood clots that seal leaks after an injury. Plasma that has had the clotting factors removed is called serum. Both serum and plasma are easy to store and have many medical uses.

[B]B -Red Blood Cells[/B]
Red blood cells make up almost 45 percent of the blood volume. Their primary function is to carry oxygen from the lungs to every cell in the body. Red blood cells are composed predominantly of hemoglobin, an iron-containing protein that captures oxygen molecules as the blood moves through the lungs and gives blood its red color. As blood passes through body tissues, hemoglobin then releases the oxygen to cells throughout the body. Red blood cells are so packed with hemoglobin that they lack many components, including a nucleus, found in other cells.

The membrane, or outer layer, of the red blood cell is flexible, like a soap bubble, and is able to bend in many directions without breaking. This is important because the red blood cells must be able to pass through the tiniest blood vessels, the capillaries, to deliver oxygen wherever it is needed. The capillaries are so narrow that the red blood cells, normally shaped like a disk with a concave top and bottom, must bend and twist to maneuver single file through them.

[B]C -Blood Type[/B]
There are several types of red blood cells and each person has red blood cells of just one type. Blood type is determined by the occurrence or absence of substances, known as recognition markers or antigens, on the surface of the red blood cell. Type A blood has just marker A on its red blood cells while type B has only marker B. If neither A nor B markers are present, the blood is type O. If both the A and B markers are present, the blood is type AB. Another marker, the Rh antigen (also known as the Rh factor), is present or absent regardless of the presence of A and B markers. If the Rh marker is present, the blood is said to be Rh positive, and if it is absent, the blood is Rh negative. The most common blood type is A positive—that is, blood that has an A marker and also an Rh marker. More than 20 additional red blood cell types have been discovered.

Blood typing is important for many medical reasons. If a person loses a lot of blood, that person may need a blood transfusion to replace some of the lost red blood cells. Since everyone makes antibodies against substances that are foreign, or not of their own body, transfused blood must be matched so as not to contain these substances. For example, a person who is blood type A positive will not make antibodies against the A or Rh markers, but will make antibodies against the B marker, which is not on that person’s own red blood cells. If blood containing the B marker (from types B positive, B negative, AB positive, or AB negative) is transfused into this person, then the transfused red blood cells will be rapidly destroyed by the patient’s anti-B antibodies.

In this case, the transfusion will do the patient no good and may even result in serious harm. For a successful blood transfusion into an A positive blood type individual, blood that is type O negative, O positive, A negative, or A positive is needed because these blood types will not be attacked by the patient’s anti-B antibodies.
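That matching rule (a donor unit is acceptable only if its red cells carry no marker that the recipient's own cells lack) can be written down directly. The sketch below is a simplification that considers only the A, B, and Rh markers discussed here and ignores the additional red blood cell types mentioned above; blood type strings such as 'A+' and 'O-' are just a convenient notation for the example.

[CODE]
# Simplified red-cell compatibility check using only the A, B and Rh markers.
# Rule from the text: a recipient has antibodies against any marker absent
# from his or her own red cells, so donor cells must carry no such marker.

def markers(blood_type: str) -> set:
    """Markers present for a type written like 'A+', 'B-', 'AB+' or 'O-'."""
    group, rh = blood_type[:-1], blood_type[-1]
    present = set(group) - {"O"}      # 'AB' -> {'A', 'B'}; 'O' -> empty set
    if rh == "+":
        present.add("Rh")
    return present

def compatible(donor: str, recipient: str) -> bool:
    """True if the donor unit carries no marker the recipient lacks."""
    return markers(donor) <= markers(recipient)

# The A-positive recipient discussed above:
for donor in ("O-", "O+", "A-", "A+", "B+", "AB+"):
    print(donor, "->", "acceptable" if compatible(donor, "A+") else "rejected")
[/CODE]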

[B]D -White Blood Cells[/B]
White blood cells only make up about 1 percent of blood, but their small number belies their immense importance. They play a vital role in the body’s immune system—the primary defense mechanism against invading bacteria, viruses, fungi, and parasites. They often accomplish this goal through direct attack, which usually involves identifying the invading organism as foreign, attaching to it, and then destroying it. This process is referred to as phagocytosis.

White blood cells also produce antibodies, which are released into the circulating blood to target and attach to foreign organisms. After attachment, the antibody may neutralize the organism, or it may elicit help from other immune system cells to destroy the foreign substance. There are several varieties of white blood cells, including neutrophils, monocytes, and lymphocytes, all of which interact with one another and with plasma proteins and other cell types to form the complex and highly effective immune system.

[B]E -Platelets and Clotting[/B]
The smallest cells in the blood are the platelets, which are designed for a single purpose—to begin the process of coagulation, or forming a clot, whenever a blood vessel is broken. As soon as an artery or vein is injured, the platelets in the area of the injury begin to clump together and stick to the edges of the cut. They also release messengers into the blood that perform a variety of functions: constricting the blood vessels to reduce bleeding, attracting more platelets to the area to enlarge the platelet plug, and initiating the work of plasma-based clotting factors, such as fibrinogen. Through a complex mechanism involving many steps and many clotting factors, the plasma protein fibrinogen is transformed into long, sticky threads of fibrin. Together, the platelets and the fibrin create an intertwined meshwork that forms a stable clot. This self-sealing aspect of the blood is crucial to survival.

[B]IV -PRODUCTION AND ELIMINATION OF BLOOD CELLS[/B]
Blood is produced in the bone marrow, a tissue in the central cavity inside almost all of the bones in the body. In infants, the marrow in most of the bones is actively involved in blood cell formation. By later adult life, active blood cell formation gradually ceases in the bones of the arms and legs and concentrates in the skull, spine, ribs, and pelvis.

Red blood cells, white blood cells, and platelets grow from a single precursor cell, known as a hematopoietic stem cell. Remarkably, experiments have suggested that as few as 10 stem cells can, in four weeks, multiply into 30 trillion red blood cells, 30 billion white blood cells, and 1.2 trillion platelets—enough to replace every blood cell in the body.
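Those figures imply a doubling time that is easy to estimate (a rough consistency check, assuming the population of cells simply doubles at a constant rate over the four weeks):

[CODE]
import math

# Rough check of the stem-cell figures quoted above, assuming constant doubling.
start_cells = 10
final_cells = 30e12 + 30e9 + 1.2e12      # red cells + white cells + platelets

doublings = math.log2(final_cells / start_cells)
hours_per_doubling = (4 * 7 * 24) / doublings
print(f"about {doublings:.0f} doublings, one every {hours_per_doubling:.0f} hours or so")
# -> roughly 42 doublings, i.e. a doubling about every 16 hours
[/CODE]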

Red blood cells have the longest average life span of any of the cellular elements of blood. A red blood cell lives 100 to 120 days after being released from the marrow into the blood. Over that period of time, red blood cells gradually age. Spent cells are removed by the spleen and, to a lesser extent, by the liver. The spleen and the liver also remove any red blood cells that become damaged, regardless of their age. The body efficiently recycles many components of the damaged cells, including parts of the hemoglobin molecule, especially the iron contained within it.

The majority of white blood cells have a relatively short life span. They may survive only 18 to 36 hours after being released from the marrow. However, some of the white blood cells are responsible for maintaining what is called immunologic memory. These memory cells retain knowledge of what infectious organisms the body has previously been exposed to. If one of those organisms returns, the memory cells initiate an extremely rapid response designed to kill the foreign invader. Memory cells may live for years or even decades before dying.

Memory cells make immunizations possible. An immunization, also called a vaccination or an inoculation, is a method of using a vaccine to make the human body immune to certain diseases. A vaccine consists of an infectious agent that has been weakened or killed in the laboratory so that it cannot produce disease when injected into a person, but can spark the immune system to generate memory cells and antibodies specific for the infectious agent. If the infectious agent should ever invade that vaccinated person in the future, these memory cells will direct the cells of the immune system to target the invader before it has the opportunity to cause harm.

Platelets have a life span of seven to ten days in the blood. They either participate in clot formation during that time or, when they have reached the end of their lifetime, are eliminated by the spleen and, to a lesser extent, by the liver.

[B]V -BLOOD DISEASES[/B]
Many diseases are caused by abnormalities in the blood. These diseases are categorized by which component of the blood is affected.

[B]A -Red Blood Cell Diseases[/B]
One of the most common blood diseases worldwide is anemia, which is characterized by an abnormally low number of red blood cells or low levels of hemoglobin. One of the major symptoms of anemia is fatigue, due to the failure of the blood to carry enough oxygen to all of the tissues.

The most common type of anemia, iron-deficiency anemia, occurs because the marrow fails to produce sufficient red blood cells. When insufficient iron is available to the bone marrow, it slows down its production of hemoglobin and red blood cells. In the United States, iron deficiency occurs most commonly due to poor nutrition. In other areas of the world, however, the most common causes of iron-deficiency anemia are certain infections that result in gastrointestinal blood loss and the consequent chronic loss of iron. Adding supplemental iron to the diet is often sufficient to cure iron-deficiency anemia.

Some anemias are the result of increased destruction of red blood cells, as in the case of sickle-cell anemia, a genetic disease most common in persons of African ancestry. The red blood cells of sickle-cell patients assume an unusual crescent shape, causing them to become trapped in some blood vessels, blocking the flow of other blood cells to tissues and depriving them of oxygen.

[B]B -White Blood Cell Diseases[/B]
Some white blood cell diseases are characterized by an insufficient number of white blood cells. This can be caused by the failure of the bone marrow to produce adequate numbers of normal white blood cells, or by diseases that lead to the destruction of crucial white blood cells. These conditions result in severe immune deficiencies characterized by recurrent infections.

Any disease in which excess white blood cells are produced, particularly immature white blood cells, is called leukemia, or blood cancer. Many cases of leukemia are linked to gene abnormalities, resulting in unchecked growth of immature white blood cells. If this growth is not halted, it often results in the death of the patient. These genetic abnormalities are not inherited in the vast majority of cases, but rather occur after birth. Although some causes of these abnormalities are known, for example exposure to high doses of radiation or the chemical benzene, most remain poorly understood.

Treatment for leukemia typically involves the use of chemotherapy, in which strong drugs are used to target and kill leukemic cells, permitting normal cells to regenerate. In some cases, bone marrow transplants are effective. Much progress has been made over the last 30 years in the treatment of this disease. In one type of childhood leukemia, more than 80 percent of patients can now be cured of their disease.

[B]C -Coagulation Diseases[/B]
One disease of the coagulation system is hemophilia, a genetic bleeding disorder in which one of the plasma clotting factors, usually factor VIII, is produced in abnormally low quantities, resulting in uncontrolled bleeding from minor injuries. Although individuals with hemophilia are able to form a good initial platelet plug when blood vessels are damaged, they are not easily able to form the meshwork that holds the clot firmly intact.

As a result, bleeding may occur some time after the initial traumatic event. Treatment for hemophilia relies on giving transfusions of factor VIII. Factor VIII can be isolated from the blood of normal blood donors but it also can be manufactured in a laboratory through a process known as gene cloning.

[B]VI -BLOOD BANKS[/B]
The Red Cross and a number of other organizations run programs, known as blood banks, to collect, store, and distribute blood and blood products for transfusions. When blood is donated, its blood type is determined so that only appropriately matched blood is given to patients needing a transfusion. Before using the blood, the blood bank also tests it for the presence of disease-causing organisms, such as hepatitis viruses and human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
This blood screening dramatically reduces, but does not fully eliminate, the risk to the recipient of acquiring a disease through a blood transfusion. Blood donation, which is extremely safe, generally involves giving about 400 to 500 ml (about 1 pt) of blood, which is only about 7 percent of a person’s total blood.

[B]VII -BLOOD IN NONHUMANS[/B]
One-celled organisms have no need for blood. They are able to absorb nutrients, expel wastes, and exchange gases with their environment directly. Simple multicelled marine animals, such as sponges, jellyfishes, and anemones, also do not have blood. They use the seawater that bathes their cells to perform the functions of blood. However, all more complex multicellular animals have some form of a circulatory system using blood. In some invertebrates, there are no cells analogous to red blood cells. Instead, hemoglobin, or the related copper-containing compound hemocyanin, circulates dissolved in the plasma.

The blood of complex multicellular animals tends to be similar to human blood, but there are also some significant differences, typically at the cellular level. For example, fish, amphibians, and reptiles possess red blood cells that have a nucleus, unlike the red blood cells of mammals. The immune system of invertebrates is more primitive than that of vertebrates, lacking the functionality associated with the white blood cell and antibody system found in mammals. Some arctic fish species produce proteins in their blood that act as a type of antifreeze, enabling them to survive in environments where the blood of other animals would freeze. Nonetheless, the essential transportation, communication, and protection functions that make blood essential to the continuation of life occur throughout much of the animal kingdom.

Predator Tuesday, November 13, 2007 12:07 PM

Environmental Effects of the Fossil Fuel Age
 
[B]Environmental Effects of the Fossil Fuel Age[/B]

Over the last two centuries, human activity has transformed the chemistry of Earth’s water and air, altered the face of Earth itself, and rewoven the web of life. Why has this time period, more than any other, brought so much widespread environmental change? The reasons are many and complex. But a major influence surely is the use of fossil fuels, which has made far more energy available to more people than had ever been available before.

By 1990, humans were using about 80 times as much energy as was being used in 1800. The great majority of this energy was derived from fossil fuels. The availability and use of this new energy source has allowed people to produce more and consume more. Indirectly, this energy source caused a rapid increase in population as people developed much more efficient means of agriculture—such as mechanized farming—that required the use of fossil fuels. Improved farming techniques brought about an increase in food supply, which fostered the population growth. By the end of the 1990s, the human population was about six times what it was in 1800. Widespread changes to the environment resulted from other factors as well. The breakneck pace of urbanization is a factor, as is the equally dizzying speed of technological change. No less important a factor in environmental change is the heightened emphasis of modern governments on economic growth. All of these trends are interrelated, each one helping to advance the others. Together, they have shaped the evolution of human society in modern times. These growth trends have recast the relationships between humanity and other inhabitants of Earth.

For hundreds of thousands of years, human beings and their predecessors have both deliberately and accidentally altered their environments. But only recently, with the harnessing of fossil fuels, has humankind acquired the power to effect thorough changes on air, water, soils, plants, and animals. Armed with fossil fuels, people have changed the environment in ways they never had in pre-modern times—for example, devastating natural habitats and wildlife with oil spills. People have also been able to bring about environmental change much more rapidly, through acceleration of old activities such as deforestation.

[B]Origins of Fossil Fuels[/B]
Fossil fuels include coal, natural gas, and petroleum (also known as oil or crude oil), which are the petrified and liquefied remains of millions of years’ accumulation of decayed plant life. When fossil fuels are burned, their chemical energy becomes heat energy, which, by means of machines such as engines and turbines, is converted into mechanical or electrical energy.
Coal first became an important industrial fuel during the 11th and 12th centuries in China, where iron manufacturing consumed great quantities of this resource. The first major usage of coal as a domestic fuel began in 16th-century London, England. During the Industrial Revolution, which began in the 18th century, coal became a key fuel for industry, powering most steam engines.

Coal was the primary fossil fuel until the middle of the 20th century, when oil replaced it as the fuel of choice in industry, transportation, and other fields. Deep drilling for petroleum was pioneered in western Pennsylvania in 1859, and the first large oil fields were tapped in southeastern Texas in 1901. The world’s biggest oil fields were accessed in the 1940s in Saudi Arabia and in the 1960s in Siberia. Why did oil overshadow coal as the fuel of choice? Oil has certain advantages over coal. It is more efficient than coal, providing more energy per unit of weight than coal does. Oil also causes less pollution and works better in small engines. Oil is less plentiful than coal, however. When the world runs low on oil, copious supplies of coal will remain available.

[B]Modern Air Pollution[/B]
The outermost layer of the Earth’s living environment is the atmosphere, a mixture of gases surrounding the planet. The atmosphere contains a thin layer of ozone, which protects all life on Earth from harmful ultraviolet radiation from the Sun. For most of human history, people had very little effect on the atmosphere. For many thousands of years, humans routinely burned vegetation, causing some intermittent air pollution. In ancient times, the smelting of ores, such as copper ore, released metals that traveled in the atmosphere from the shores of the Mediterranean Sea as far as Greenland. With the development of fossil fuels, however, much more intense air pollution began to trouble humanity.

Before widespread use of fossil fuels, air pollution typically affected cities more than it did rural areas because of the concentration of combustion in cities. People in cold-climate urban areas kept warm by burning wood, but local wood supplies were soon exhausted. As a result of the limited supply, wood became expensive. People then burned comparatively small amounts of wood and heated their homes less. The first city to resolve this problem was London, where residents began using coal to heat their buildings. By the 1800s, half a million chimneys were releasing coal smoke, soot, ash, and sulfur dioxide into the London air.

The development of steam engines in the 18th century introduced coal to industry. The resultant growth from the Industrial Revolution meant more steam engines, more factory chimneys, and, thus, more air pollution. Skies darkened in the industrial heartlands of Britain, Belgium, Germany, and the United States. Cities that combined energy-intensive industries, such as iron and steel manufacturing, with coal-heated buildings were routinely shrouded in smoke and bathed in sulfur dioxide. Pittsburgh, Pennsylvania, one of the United States’ major industrial cities at the time, was sometimes referred to as “Hell with the lid taken off.” The coal consumption of some industries was so great that it could pollute the skies over entire regions, as was the case in the Ruhr region in Germany and around Hanshin, the area near Ōsaka, Japan.

[B]Early Air Pollution Control[/B]
Efforts at smoke abatement were largely ineffective until about 1940, so residents of industrial cities and regions suffered the consequences of life with polluted air. During the Victorian Age in England, dusting household surfaces twice a day to keep up with the dustfall was not uncommon. Residents of industrial cities witnessed the loss of pine trees and some wildlife, due to the high levels of sulfur dioxide. These people suffered rates of pneumonia and bronchitis far higher than those of their ancestors, their relatives living elsewhere, or their descendants.

After 1940, leaders of industrial cities and regions managed to reduce the severity of coal-based air pollution. St. Louis, Missouri, was the first city in the world to make smoke abatement a high priority. Pittsburgh and other U.S. cities followed during the late 1940s and 1950s. London took effective steps during the mid-1950s after the killer fog, an acute bout of pollution in December of 1952, took some 4,000 lives. Germany and Japan made strides toward smoke abatement during the 1960s, using a combination of taller smokestacks, smokestack filters and scrubbers, and the substitution of other fuels for coal.

Even as smoke abatement continued, however, cities acquired new and more complex air pollution problems. As cars became commonplace—first in the United States during the 1920s and then in Western Europe and Japan during the 1950s and 1960s—tailpipe emissions added to the air pollution already flowing out of chimneys and smokestacks. Auto exhaust contained different kinds of pollutants, such as carbon monoxide, nitrogen oxides, and lead. Cars, together with new industries such as the petrochemical industry, therefore complicated and intensified the world’s air pollution problems. Photochemical smog, which is caused by sunlight’s impact on elements of auto exhaust, became a serious health menace in cities where abundant sunshine combined with frequent temperature inversions. The world's worst smog was brewed in sunny, car-clogged cities, such as Athens, Greece; Bangkok, Thailand; Mexico City, Mexico; and Los Angeles, California.

In addition to these local and regional pollution problems, during the late 20th century human activity began to take its toll on the atmosphere. The increased carbon dioxide levels in the atmosphere after 1850, which were mainly a consequence of burning fossil fuels, raised the efficiency with which the air retains the sun's heat. This greater heat retention brought the threat of global warming, an overall increase in Earth’s temperature. Yet another threat to the atmosphere was caused by chemicals known as chlorofluorocarbons, which were invented in 1930 and used widely in industry and as refrigerants after 1950. When chlorofluorocarbons float up to the stratosphere (the upper layer of Earth’s atmosphere), they cause the ozone layer to become thinner, hampering its ability to block harmful ultraviolet radiation.

[B]Water Pollution[/B]
Water has always been a vital resource for human beings—at first just for drinking, later for washing, and eventually for irrigation. With the power conferred by fossil fuels and modern technology, people have rerouted rivers, pumped up deep groundwater, and polluted the Earth’s water supply as never before.

Irrigation, though an ancient practice, affected only small parts of the world until recently. During the 1800s, irrigation practices spread quickly, driven by advances in engineering and increased demand for food by the world’s growing population. In India and North America, huge networks of dams and canals were built. The 1900s saw the construction of still larger dams in these countries, as well as in Central Asia, China, and elsewhere. After the 1930s, dams built for irrigation also served to generate hydroelectric power. Between 1945 and 1980, most of the world's rivers that had met engineers’ criteria for suitability had acquired dams.

Because they provided electric power as well as irrigation water, dams made life easier for millions of people. Convenience came at a price, however, as dams changed established water ecosystems that had developed over the course of centuries. In the Columbia River in western North America, for example, salmon populations suffered because dams blocked the annual migrations of the salmon. In Egypt, where a large dam spanned the Nile at Aswan after 1971, many humans and animals paid the price. Mediterranean sardines died, and the fishermen who caught them lost their livelihoods. Farmers had to resort to chemical fertilizers because the dam prevented the Nile’s spring flooding and the resultant annual coating of fertile silt on land along the river. In addition, many Egyptians who drank Nile water, which carried increasing amounts of fertilizer runoff, experienced negative health effects. In Central Asia, the Aral Sea paid the price. After 1960 this sea shrank because the waters that fed into it were diverted to irrigate cotton fields.

River water alone did not suffice to meet the water needs of agriculture and cities. Groundwater in many parts of the world became an essential source of water. This source was available at low cost, because fossil fuels made pumping much easier. For example, after 1930 an economy based on grain and livestock emerged on the High Plains, from Texas to the Dakotas. This economy drew water from the Ogallala Aquifer, a vast underground reservoir. To meet the drinking, washing, and industrial needs of their growing populations, cities such as Barcelona, Spain; Beijing, China; and Mexico City pumped up groundwater. Beijing and Mexico City began sinking slowly into the ground as they pumped out much of their underlying water. As groundwater supplies dwindled, both cities found they needed to bring water in from great distances. By 1999 humanity was using about 20 times as much fresh water as was used in 1800.

Not only was water use increasing, but more of the water was being polluted by human activity. While water pollution had long existed in river water that flowed through cities, such as the Seine in Paris, France, the fossil fuel age changed the scope and character of water pollution. Water usage increased throughout this era, and a far wider variety of pollutants contaminated the world’s water supplies. For most of human history, water pollution was largely biological, caused mainly by human and animal wastes. However, industrialization introduced countless chemicals into the waters of the world, complicating pollution problems.

[B]Efforts to Control Water Pollution[/B]
Until the early 20th century, biological pollution of the world's lakes and rivers remained a baffling problem. Then experiments in filtration and chemical treatment of water proved fruitful. In Europe and North America, sewage treatment and water filtration assured a cleaner and healthier water supply. As late as the 1880s in Chicago, Illinois, thousands of people died each year from waterborne diseases, such as typhoid fever. By 1920, though, Chicago's water no longer carried fatal illnesses. Many communities around the world, especially in poor countries such as India and Nigeria, could not afford to invest in sewage treatment and water filtration plants, however.

As was the case with air pollution, the industrialization and technological advances of the 20th century brought increasing varieties of water pollution. Scientists invented new chemicals that did not exist in nature, and a few of these chemicals turned out to be very useful in manufacturing and in agriculture. Unfortunately, a few of them also turned out to be harmful pollutants. After 1960 chemicals called polychlorinated biphenyls (PCBs) turned up in dangerous quantities in North American waters, killing and damaging aquatic life and the creatures that feed on it. After 1970, legislation in North America and Europe substantially reduced point pollution, or water pollution derived from single sources. But nonpoint pollution, such as pesticide-laced runoff from farms, proved much harder to control. The worst water pollution prevailed in poorer countries where biological pollution continued unabated, while chemical pollution from industry or agriculture emerged to complement the biological pollution. In the late 1900s, China probably suffered the most from the widest variety of water pollution problems.

[B]Soil Pollution[/B]
During the era of fossil fuels, the surface of Earth also has undergone remarkable change. The same substances that have polluted the air and water often lodge in the soil, occasionally in dangerous concentrations that threaten human health. While this situation normally happened only in the vicinity of industries that generated toxic wastes, the problem of salinization, or salting, which was associated with irrigation, was more widespread.

Although irrigation has always brought the risk of destroying soils by waterlogging and salinization—the ancient Middle Eastern civilization of Mesopotamia probably undermined its agricultural base this way—the modern scale of irrigation has intensified this problem around the world. By the 1990s, fields ruined by salinization were being abandoned as fast as engineers could irrigate new fields. Salinization has been the most severe in dry lands where evaporation occurs the fastest, such as in Mexico, Australia, Central Asia, and the Southwestern United States.

Soil erosion due to human activity was a problem long before salinization was. Modern soil erosion diminished the productivity of agriculture. This problem was worst in the 1800s in the frontier lands newly opened to pioneer settlement in the United States, Canada, Australia, New Zealand, Argentina, and elsewhere. Grasslands that had never been plowed before became vulnerable to wind erosion, which reached disastrous proportions during droughts, such as those during the 1930s in the Dust Bowl of Kansas and Oklahoma. The last major clearing of virgin grassland took place in the Union of Soviet Socialist Republics (USSR) in the 1950s, when Premier Nikita Khrushchev decided to convert northern Kazakhstan into a wheat belt. Fossil fuels also played a crucial role at this time, because railroads and steamships carried the grain and beef raised in these frontiers to distant markets.

By the late 20th century, pioneer settlement had shifted away from the world's grasslands into tropical and mountain forest regions. After 1950, farmers in Asia, Africa, and Latin America increasingly sought land in little-cultivated forests. Often these forests, such as those in Central America or the Philippines, were mountainous and subject to heavy rains. In order to cultivate this land, farmers deforested these mountainsides, which exposed them to heavy rains and invited soil erosion. Erosion caused in this manner stripped soils in the Andes of Bolivia, in the Himalayas of Nepal and northern India, and in the rugged terrain of Rwanda and Burundi. Depleted soils made life harder for farmers in these and other lands.

The impact of soil erosion does not stop with the loss of soil. Eroded soil does not simply disappear. Rather, it flows downhill and downstream, only to rest somewhere else. Often this soil has lodged in inconvenient places, silting up dam reservoirs or covering roads. Within only a few years of being built, some dams in Algeria and China became useless because they were clogged by soil erosion originating upstream.

[B]Animal and Plant Life[/B]
Human activity has affected the world's plants and animals no less than it has the air, water, and soil. For millions of years, life evolved without much impact from human beings. However, as early as the first settlements of Australia and North America, human beings probably caused mass extinctions, either through hunting or through the use of fire. With the domestication of animals, which began perhaps 10,000 years ago, humanity came to play a more active role in biological evolution. By the 1800s and 1900s, the role that human beings played in species survival had expanded to the extent that many species survive only because human beings allow it.

Some animal species survive in great numbers thanks to us. For example, today there are about 10 billion chickens on Earth—about thirteen to fifteen times as many as there were a century ago. This is because people like to eat chickens and raise them for that purpose. Similarly, we protect cattle, sheep, goats, and a few other domesticated animals in order to make use of them. Inadvertently, modern civilizations have ensured the survival of certain other animals. Rat populations propagate because of all of the food available to them, since humans store so much food and generate so much garbage. Squirrels prosper in large part because we have created suburban landscapes with few predators.

Even as modern human beings intentionally or unintentionally encourage the survival of a few species, humans threaten many more. Modern technology and fuels have made hunting vastly more efficient, bringing animals such as the blue whale and the North American bison to the edge of extinction. Many other animals, most notably tropical forest species, suffer from destruction of their preferred habitats. Quite inadvertently, and almost unconsciously, humankind has assumed a central role in determining the fate of many species and the health of Earth’s water, air, and soil. Humans have therefore assumed a central role in biological evolution.

The environmental history of the last two centuries has been one of enormous change. In a mere 200 years, humanity has altered Earth more drastically than in all the time since the dawn of agriculture about 10,000 years ago. Our vital air, water, and soil have been jeopardized; the very web of life hangs on our whims. For the most part, human beings have never been more successful or led easier lives. The age of fossil fuels is changing the human condition in ways previously unimaginable. But whether we understand the impact—and are willing to accept it—remains an unanswered question.
About the author: John R. McNeill is a professor of history at Georgetown University. He is the author of Global Environmental History of the Twentieth Century among numerous other publications.

Predator Tuesday, November 13, 2007 12:12 PM

Darwin, Charles Robert
 
[B]Darwin, Charles Robert

I -INTRODUCTION[/B]
Darwin, Charles Robert (1809-1882), British scientist, who laid the foundation of modern evolutionary theory with his concept of the development of all forms of life through the slow-working process of natural selection. His work was of major influence on the life and earth sciences and on modern thought in general.

Born in Shrewsbury, Shropshire, England, on February 12, 1809, Darwin was the fifth child of a wealthy and sophisticated English family. His maternal grandfather was the successful china and pottery entrepreneur Josiah Wedgwood; his paternal grandfather was the well-known 18th-century physician and savant Erasmus Darwin. After graduating from the elite school at Shrewsbury in 1825, young Darwin went to the University of Edinburgh to study medicine. In 1827 he dropped out of medical school and entered the University of Cambridge, in preparation for becoming a clergyman of the Church of England. There he met two stellar figures: Adam Sedgwick, a geologist, and John Stevens Henslow, a naturalist. Henslow not only helped build Darwin’s self-confidence but also taught his student to be a meticulous and painstaking observer of natural phenomena and collector of specimens. After graduating from Cambridge in 1831, the 22-year-old Darwin was taken aboard the English survey ship HMS Beagle, largely on Henslow’s recommendation, as an unpaid naturalist on a scientific expedition around the world.

[B]II -VOYAGE OF THE BEAGLE[/B]
Darwin’s job as naturalist aboard the Beagle gave him the opportunity to observe the various geological formations found on different continents and islands along the way, as well as a huge variety of fossils and living organisms. In his geological observations, Darwin was most impressed with the effect that natural forces had on shaping the earth’s surface.
At the time, most geologists adhered to the so-called catastrophist theory that the earth had experienced a succession of creations of animal and plant life, and that each creation had been destroyed by a sudden catastrophe, such as an upheaval or convulsion of the earth’s surface (see Geology: History of Geology: Geology in the 18th and 19th Centuries). According to this theory, the most recent catastrophe, Noah’s flood, wiped away all life except those forms taken into the ark. The rest were visible only in the form of fossils. In the view of the catastrophists, species were individually created and immutable, that is, unchangeable for all time.

The catastrophist viewpoint (but not the immutability of species) was challenged by the English geologist Sir Charles Lyell in his three-volume work Principles of Geology (1830-1833). Lyell maintained that the earth’s surface is undergoing constant change, the result of natural forces operating uniformly over long periods.

Aboard the Beagle, Darwin found himself fitting many of his observations into Lyell’s general uniformitarian view. Beyond that, however, he realized that some of his own observations of fossils and living plants and animals cast doubt on the Lyell-supported view that species were specially created. He noted, for example, that certain fossils of supposedly extinct species closely resembled living species in the same geographical area. In the Galápagos Islands, off the coast of Ecuador, he also observed that each island supported its own form of tortoise, mockingbird, and finch; the various forms were closely related but differed in structure and eating habits from island to island. Both observations raised the question, for Darwin, of possible links between distinct but similar species.

[B]III -THEORY OF NATURAL SELECTION[/B]
After returning to England in 1836, Darwin began recording his ideas about changeability of species in his Notebooks on the Transmutation of Species. Darwin’s explanation for how organisms evolved was brought into sharp focus after he read An Essay on the Principle of Population (1798), by the British economist Thomas Robert Malthus, who explained how human populations remain in balance. Malthus argued that any increase in the availability of food for basic human survival could not match the geometrical rate of population growth. The latter, therefore, had to be checked by natural limitations such as famine and disease, or by social actions such as war.

Darwin immediately applied Malthus’s argument to animals and plants, and by 1838 he had arrived at a sketch of a theory of evolution through natural selection (see Species and Speciation). For the next two decades he worked on his theory and other natural history projects. (Darwin was independently wealthy and never had to earn an income.) In 1839 he married his first cousin, Emma Wedgwood, and soon after, moved to a small estate, Down House, outside London. There he and his wife had ten children, three of whom died in infancy.

Darwin’s theory was first announced in 1858 in a paper presented at the same time as one by Alfred Russel Wallace, a young naturalist who had come independently to the theory of natural selection. Darwin’s complete theory was published in 1859, in On the Origin of Species. Often referred to as the “book that shook the world,” the Origin sold out on the first day of publication and subsequently went through six editions.

Darwin’s theory of evolution by natural selection is essentially that, because of the food-supply problem described by Malthus, the young born to any species intensely compete for survival. Those young that survive to produce the next generation tend to embody favorable natural variations (however slight the advantage may be)—the process of natural selection—and these variations are passed on by heredity. Therefore, each generation will improve adaptively over the preceding generations, and this gradual and continuous process is the source of the evolution of species. Natural selection is only part of Darwin’s vast conceptual scheme; he also introduced the concept that all related organisms are descended from common ancestors. Moreover, he provided additional support for the older concept that the earth itself is not static but evolving.

[B]IV -REACTIONS TO THE THEORY[/B]
The reaction to the Origin was immediate. Some biologists argued that Darwin could not prove his hypothesis. Others criticized Darwin’s concept of variation, arguing that he could explain neither the origin of variations nor how they were passed to succeeding generations. This particular scientific objection was not answered until the birth of modern genetics in the early 20th century (see Heredity; Mendel’s Laws). In fact, many scientists continued to express doubts for the following 50 to 80 years. The most publicized attacks on Darwin’s ideas, however, came not from scientists but from religious opponents. The thought that living things had evolved by natural processes denied the special creation of humankind and seemed to place humanity on a plane with the animals; both of these ideas were serious contradictions to orthodox theological opinion.

[B]V -LATER YEARS[/B]
Darwin spent the rest of his life expanding on different aspects of problems raised in the Origin. His later books—including The Variation of Animals and Plants Under Domestication (1868), The Descent of Man (1871), and The Expression of the Emotions in Man and Animals (1872)—were detailed expositions of topics that had been confined to small sections of the Origin. The importance of his work was well recognized by his contemporaries; Darwin was elected to the Royal Society (1839) and the French Academy of Sciences (1878). He was also honored by burial in Westminster Abbey after he died in Downe, Kent, on April 19, 1882.

Predator Tuesday, November 13, 2007 12:19 PM

Adaptation
 
[B]Adaptation[/B]

[B]I -INTRODUCTION[/B]
Adaptation, word used by biologists in two different senses, both of which imply the accommodation of a living organism to its environment. One form of adaptation, called physiological adaptation, involves the acclimatization of an individual organism to a sudden change in environment. The other kind of adaptation, discussed here, occurs during the slow course of evolution and hence is called evolutionary adaptation.

[B]II -MECHANISMS OF ADAPTATION[/B]
Evolutionary adaptations are the result of the competition among individuals of a particular species over many generations in response to an ever-changing environment, including other animals and plants. Certain traits are culled by natural selection (see Evolution), favoring those individual organisms that produce the most offspring. This is such a broad concept that, theoretically, all the features of any animal or plant could be considered adaptive. For example, the leaves, trunk, and roots of a tree all arose by selection and help the individual tree in its competition for space, soil, and sunlight.

Biologists have been accused of assuming adaptiveness for all such features of a species, but few cases have actually been demonstrated. Indeed, biologists find it difficult to be certain whether any particular structure of an organism arose by selection and hence can be called adaptive or whether it arose by chance and is selectively neutral.

The best example of an evolutionary development with evidence for adaptation is mimicry. Biologists can show experimentally that some organisms escape predators by being inconspicuous and blending into their environment and that other organisms imitate the coloration of species distasteful to predators. These tested cases are only a handful, however, and many supposed cases of adaptation are simply assumed.

Conversely, it is possible that some features of an organism may be retained because they are adaptive for special, limited reasons, even though they may be maladaptive on the whole. The large antlers of an elk or moose, for example, may be effective in sexual selection for mating but could well be maladaptive at all other times of the year. In addition, a species feature that now has one adaptive significance may have been produced as an adaptation to quite different circumstances. For example, lungs probably evolved in adaptation to life in water that sometimes ran low on oxygen. Fish with lungs were then “preadapted” in a way that accidentally allowed their descendants to become terrestrial.

[B]III -ADAPTIVE RADIATION[/B]
Because the environment exerts such control over the adaptations that arise by natural selection—including the coadaptations of different species evolving together, such as flowers and pollinators—the kind of organism that would fill a particular environmental niche ought to be predictable in general terms. An example of this process of adaptive radiation, or filling out of environmental niches by the development of new species, is provided by Australia.

When Australia became a separate continent some 60 million years ago, only monotremes and marsupials lived there, with no competition from the placental mammals that were emerging on other continents. Although only two living monotremes are found in Australia today, the marsupials have filled most of the niches open to terrestrial mammals on that continent. Because Australian habitats resemble those in other parts of the world, marsupial equivalents can be found for the major placental herbivores, carnivores, and even rodents and moles.

This pattern can be observed on a restricted scale as well. In some sparsely populated islands, for example, one species of bird might enter the region, find little or no competition, and evolve rapidly into a number of species adapted to the available niches. A well-known instance of such adaptive radiation was discovered by Charles Darwin in the Galápagos Islands. He presumed, probably correctly, that one species of finch colonized the islands thousands of years ago and gave rise to the 14 species of finchlike birds that exist there now. Thus, one finch behaves like a warbler, another like a woodpecker, and so on. The greatest differences in their appearance lie in the shapes of the bills, adapted to the types of food each species eats.

[B]IV -ANALOGY AND HOMOLOGY[/B]
When different species are compared, some adaptive features can be described as analogous or homologous. For example, flight requires certain rigid aeronautical principles of design; yet birds, bats, and insects have all conquered the air. In this case the flight structures are said to be analogous—that is, they have different embryological origins but perform the same function. By contrast, structures that arise from the same structures in the embryo but are used in entirely different kinds of functions, such as the forelimb of a rat and the wing of a bat, are said to be homologous.

Predator Tuesday, November 13, 2007 12:37 PM

Plate Tectonics
 
[B]Plate Tectonics[/B]

[B]I -INTRODUCTION[/B]
Plate Tectonics, theory that the outer shell of the earth is made up of thin, rigid plates that move relative to each other. The theory of plate tectonics was formulated during the early 1960s, and it revolutionized the field of geology. Scientists have successfully used it to explain many geological events, such as earthquakes and volcanic eruptions as well as mountain building and the formation of the oceans and continents.
Plate tectonics arose from an earlier theory proposed by German scientist Alfred Wegener in 1912. Looking at the shapes of the continents, Wegener found that they fit together like a jigsaw puzzle. Using this observation, along with geological evidence he found on different continents, he developed the theory of continental drift, which states that today’s continents were once joined together into one large landmass.

Geologists of the 1950s and 1960s found evidence supporting the idea of tectonic plates and their movement. They applied Wegener’s theory to various aspects of the changing earth and used this evidence to confirm continental drift. By 1968 scientists integrated most geologic activities into a theory called the New Global Tectonics, or more commonly, Plate Tectonics.

[B]II -TECTONIC PLATES[/B]
Tectonic plates are made of either oceanic or continental crust and the very top part of the mantle, a layer of rock inside the earth. This crust and upper mantle form what is called the lithosphere. Under the lithosphere lies a fluid rock layer called the asthenosphere. The rocks in the asthenosphere move in a fluid manner because of the high temperatures and pressures found there. Tectonic plates are able to float upon the fluid asthenosphere because they are made of rigid lithosphere. See also Earth: Plate Tectonics.

[B]A -Continental Crust[/B]
The earth’s solid surface is about 40 percent continental crust. Continental crust is much older, thicker and less dense than oceanic crust. The thinnest continental crust, between plates that are moving apart, is about 15 km (about 9 mi) thick. In other places, such as mountain ranges, the crust may be as much as 75 km (47 mi) thick. Near the surface, it is composed of rocks that are felsic (made up of minerals including feldspar and silica). Deeper in the continental crust, the composition is mafic (made of magnesium, iron, and other minerals).

[B]B -Oceanic Crust[/B]
Oceanic crust makes up the other 60 percent of the earth’s solid surface. Oceanic crust is, in general, thin and dense. It is constantly being produced at the bottom of the oceans in places called mid-ocean ridges—undersea volcanic mountain chains formed at plate boundaries where there is a build-up of ocean crust. This production of crust does not increase the physical size of the earth, so the material produced at mid-ocean ridges must be recycled, or consumed, somewhere else. Geologists believe it is recycled back into the earth in areas called subduction zones, where one plate sinks underneath another and the crust of the sinking plate melts back down into the earth. Oceanic crust is continually recycled so that its age is generally not greater than 200 million years. Oceanic crust averages between 5 and 10 km (between 3 and 6 mi) thick. It is composed of a top layer of sediment, a middle layer of rock called basalt, and a bottom layer of rock called gabbro. Both basalt and gabbro are dark-colored igneous, or volcanic, rocks.

[B]C -Plate Sizes[/B]
Currently, there are seven large and several small plates. The largest plates include the Pacific plate, the North American plate, the Eurasian plate, the Antarctic plate, and the African plate. Smaller plates include the Cocos plate, the Nazca plate, the Caribbean plate, and the Gorda plate. Plate sizes vary a great deal. The Cocos plate is 2000 km (1400 mi) wide, while the Pacific plate is the largest plate at nearly 14,000 km (nearly 9000 mi) wide.

[B]III -PLATE MOVEMENT[/B]
Geologists study how tectonic plates move relative to a fixed spot in the earth’s mantle and how they move relative to each other. The first type of motion is called absolute motion, and it can lead to strings of volcanoes. The second kind of motion, called relative motion, leads to different types of boundaries between plates: plates moving apart from one another form a divergent boundary, plates moving toward one another form a convergent boundary, and plates that slide along one another form a transform plate boundary. In rare instances, three plates may meet in one place, forming a triple junction. Current plate movement is making the Pacific Ocean smaller, the Atlantic Ocean larger, and the Himalayan mountains taller.

[B]A -Measuring Plate Movement[/B]
Geologists discovered absolute plate motion when they found chains of extinct submarine volcanoes. A chain of dead volcanoes forms as a plate moves over a plume, a source of magma, or molten rock, deep within the mantle. These plumes stay in one spot, and each one creates a hot spot in the plate above the plume. These hot spots can form into a volcano on the surface of the earth. An active volcano indicates a hot spot as well as the youngest region of a volcanic chain. As the plate moves, a new volcano forms in the plate over the place where the hot spot occurs. The volcanoes in the chain get progressively older and become extinct as they move away from the hot spot (see Hawaii: Formation of the Islands and Volcanoes).

Scientists use hot spots to measure the speed of tectonic plates relative to a fixed point. To do this, they determine the age of extinct volcanoes and their distance from a hot spot. They then use these numbers to calculate how far the plate has moved in the time since each volcano formed. Today, the plates move at velocities up to 18.5 cm per year (7.3 in per year). On average, they move about 4 to 7 cm per year (2 to 3 in per year).
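A minimal Python sketch of this calculation, using hypothetical volcano ages and distances chosen only for illustration:

[CODE]
# Estimating absolute plate speed from a hot-spot volcanic chain.
# The (age, distance) pairs below are hypothetical illustration values,
# not measurements from any real chain.
volcanoes = [
    (1.0, 85.0),    # age in millions of years, distance from hot spot in km
    (2.5, 210.0),
    (4.0, 340.0),
]

for age_myr, distance_km in volcanoes:
    # 1 km per million years equals 0.1 cm per year
    speed_cm_per_yr = (distance_km / age_myr) * 0.1
    print(f"Volcano {age_myr} Myr old, {distance_km} km away: "
          f"plate speed about {speed_cm_per_yr:.1f} cm/yr")
[/CODE]

Each of these made-up pairs works out to roughly 8 to 9 cm per year, within the range of plate speeds quoted above.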

[B]B -Divergent Plate Boundaries[/B]
Divergent plate boundaries occur where two plates are moving apart from each other. When plates break apart, the lithosphere thins and ruptures to form a divergent plate boundary. In the oceanic crust, this process is called seafloor spreading, because the splitting plates are spreading apart from each other. On land, divergent plate boundaries create rift valleys—deep valley depressions formed as the land slowly splits apart.

When seafloor spreading occurs, magma, or molten rock material, rises to the sea floor surface along the rupture. As the magma cools, it forms new oceanic crust and lithosphere. The new lithosphere is less dense, so it rises, or floats, higher above older lithosphere, producing long submarine mountain chains known as mid-ocean ridges. The Mid-Atlantic Ridge is an underwater mountain range created at a divergent plate boundary in the middle of the Atlantic Ocean. It is part of a worldwide system of ridges made by seafloor spreading. The Mid-Atlantic Ridge is currently spreading at a rate of 2.5 cm per year (1 in per year). The mid-ocean ridges today are 60,000 km (about 40,000 mi) long, forming the largest continuous mountain chain on earth. Earthquakes, faults, underwater volcanic eruptions, and vents, or openings, along the mountain crests produce rugged seafloor features, or topography.
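To get a feel for the scale involved, the sketch below simply projects the quoted spreading rate forward in time, assuming (as a simplification) that the rate stays constant:

[CODE]
# Projecting seafloor spreading at the quoted Mid-Atlantic Ridge rate.
# Simplifying assumption: the rate of 2.5 cm per year stays constant.
rate_cm_per_yr = 2.5

for years in (1_000_000, 10_000_000, 100_000_000):
    new_floor_km = rate_cm_per_yr * years / 1e5   # 100,000 cm in a kilometre
    print(f"{years:>11,} years: about {new_floor_km:,.0f} km of new ocean floor")
[/CODE]

At this rate, a few thousand kilometres of new ocean floor accumulate over roughly 100 million years, which is the order of magnitude of the present width of the Atlantic.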
Divergent boundaries on land cause rifting, in which broad areas of land are uplifted, or moved upward. This uplift, combined with faulting along the rift, results in rift valleys. Examples of rift valleys are found at the Krafla Volcano rift area in Iceland as well as at the East African Rift Zone—part of the Great Rift Valley that extends from Syria to Mozambique and out to the Red Sea. In these areas, volcanic eruptions and shallow earthquakes are common.

[B]C -Convergent Plate Boundaries[/B]
Convergent plate boundaries occur where plates are consumed, or recycled back into the earth’s mantle. There are three types of convergent plate boundaries: between two oceanic plates, between an oceanic plate and a continental plate, and between two continental plates. Subduction zones are convergent regions where oceanic crust is thrust below either oceanic crust or continental crust. Many earthquakes occur at subduction zones, and volcanic ridges and oceanic trenches form in these areas.

In the ocean, convergent plate boundaries occur where an oceanic plate descends beneath another oceanic plate. Chains of active volcanoes develop 100 to 150 km (60 to 90 mi) above the descending slab as magma rises from under the plate. Also, where the crust slides down into the earth, a trench forms. Together, the volcanoes and trench form an intra-oceanic island arc and trench system. A good example of such a system is the Mariana Trench system in the western Pacific Ocean, where the Pacific plate is descending under the Philippine plate. In these areas, earthquakes are frequent but not large. Stress in and behind the arc often causes the arc and trench system to move toward the incoming plate, which opens small ocean basins behind the arc. This process is called back-arc seafloor spreading.

Convergent boundaries that occur between the ocean and land create continental margin arc and trench systems near the margins, or edges, of continents. Volcanoes also form here. Stress can develop in these areas and cause the rock layers to fold, producing breaks in the earth’s crust, called thrust faults, along which earthquakes occur. The folding and thrust faulting thicken the continental crust, producing high mountains. Many of the world’s large destructive earthquakes and major mountain chains, such as the Andes Mountains of western South America, occur along these convergent plate boundaries.

When two continental plates converge, the incoming plate drives against and under the opposing continent. This often affects hundreds of miles of each continent and, at times, doubles the normal thickness of continental crust. Colliding continents cause earthquakes and form mountains and plateaus. The collision of India with Asia has produced the Himalayan Mountains and Tibetan Plateau.

[B]D -Transform Plate Boundaries[/B]
A transform plate boundary, also known as a transform fault system, forms as plates slide past one another in opposite directions without converging or diverging. Early in the plate tectonic revolution, geologists proposed that transform faults were a new class of fault because they “transformed” plate motions from one plate boundary to another. Canadian geophysicist J. Tuzo Wilson studied the direction of faulting along fracture zones that divide the mid-ocean ridge system and confirmed that transform plate boundaries were different from convergent and divergent boundaries. Within the ocean, transform faults are usually simple, straight fault lines that form at a right angle to ocean ridge spreading centers. As plates slide past each other, the transform faults can divide the centers of ocean ridge spreading. By cutting across the ridges of the undersea mountain chains, they create steep cliff slopes. Transform fault systems can also connect spreading centers to subduction zones or other transform fault systems within the continental crust. As a transform plate boundary cuts perpendicularly across the edges of the continental crust near the borders of the continental and oceanic crust, the result is a system such as the San Andreas transform fault system in California.

[B]E -Triple Junctions[/B]
Rarely, a group of three plates, or a combination of plates, faults, and trenches, meet at a point called a triple junction. The East African Rift Zone is a good example of a triple plate junction. The African plate is splitting into two plates and moving away from the Arabian plate as the Red Sea meets the Gulf of Aden. Another example is the Mendocino Triple Junction, which occurs at the intersection of two transform faults (the San Andreas and Mendocino faults) and the plate boundary between the Pacific and Gorda plates.

[B]F -Current Plate Movement[/B]
Plate movement is changing the sizes of our oceans and the shapes of our continents. The Pacific plate moves at an absolute motion rate of 9 cm per year (4 in per year) away from the East Pacific Rise spreading center, the undersea volcanic region in the eastern Pacific Ocean that runs parallel to the western coast of South America. On the other side of the Pacific Ocean, near Japan, the Pacific plate is being subducted, or consumed under, the oceanic arc systems found there. The Pacific Ocean is getting smaller as the North and South American plates move west. The Atlantic Ocean is getting larger as plate movement causes North and South America to move away from Europe and Africa. Since the Eurasian and Antarctic plates are nearly stationary, the Indian Ocean at present is not significantly expanding or shrinking. The plate that includes Australia is just beginning to collide with the plate that forms Southeast Asia, while India’s plate is still colliding with Asia. India moves north at 5 cm per year (2 in per year) as it crashes into Asia, while Australia moves slightly farther away from Antarctica each year.

[B]IV -CAUSES OF PLATE MOTION[/B]
Although plate tectonics has explained most of the surface features of the earth, the driving force of plate tectonics is still unclear. According to geologists, a model that explains plate movement should include three forces. Those three forces are the pull of gravity; convection currents, or the circulating movement of fluid rocky material in the mantle; and thermal plumes, or vertical columns of molten rocky material in the mantle.

[B]A -Plate Movement Caused by Gravity[/B]
Geologists believe that tectonic plates move primarily as a result of their own weight, or the force of gravity acting on them. Since the plates are slightly denser than the underlying asthenosphere, they tend to sink. Their weight causes them to slide down gentle gradients, such as those formed by the higher ocean ridge crests, to the lower subduction zones. Once the plate’s leading edge has entered a subduction zone and penetrated the mantle, the weight of the slab itself will tend to pull the rest of the plate toward the trench. This sinking action is known as slab-pull because the sinking plate edge pulls the remainder of the plate behind it. Another kind of action, called ridge-push, is the opposite of slab-pull, in that gravity also causes plates to slide away from mid-ocean ridges. Scientists believe that plates pushing against one another also causes plate movement.

[B]B -Convection Currents[/B]
In 1929 British geologist Arthur Holmes proposed the concept of convection currents—the movement of molten material circulating deep within the earth—and the concept was modified to explain plate movement. A convection current occurs when hot, molten, rocky material floats up within the asthenosphere, then cools as it approaches the surface. As it cools, the material becomes denser and begins to sink again, moving in a circular pattern. Geologists once thought that convection currents were the primary driving force of plate movement. They now believe that convection currents are not the primary cause, but are an effect of sinking plates that contributes to the overall movement of the plates.

[B]C -Thermal Plumes[/B]
Some scientists have proposed the concept of thermal plumes, vertical columns of molten material, as an additional force of plate movement. Thermal plumes do not circulate like convection currents. Rather, they are columns of material that rise up through the asthenosphere and appear on the surface of the earth as hot spots. Scientists estimate thermal plumes to be between 100 and 250 km (60 and 160 mi) in diameter. They may originate within the asthenosphere or even deeper within the earth at the boundary between the mantle and the core.

[B]V -EXTRATERRESTRIAL PLATE TECTONICS[/B]
Scientists have also observed tectonic activity and fracturing on several moons of other planets in our solar system. Starting in 1985, images from the Voyager probes indicated that Saturn’s satellite Enceladus and Uranus’ moon Miranda also show signs of being tectonically active. In 1989 the Voyager probes sent photographs and data to Earth of volcanic activity on Neptune’s satellite Triton. In 1995 the Galileo probe began to send data and images of tectonic activity on three of Jupiter’s four Galilean satellites. The information that scientists gather from space missions such as these helps increase their understanding of the solar system and our planet. They can apply this knowledge to better understand the forces that created the earth and that continue to act upon it.

Scientists believe that Enceladus has a very tectonically active surface. It has several different terrain types, including craters, plains, and many faults that cross the surface. Miranda has fault canyons and terraced land formations that indicate a diverse tectonic environment. Scientists studying the Voyager 2 images of Triton found evidence of an active geologic past as well as ongoing eruptions of ice volcanoes.

Scientists are still gathering information from the Galileo probe of the Jupiter moon system. Three of Jupiter’s four Galilean satellites show signs of being tectonically active. Europa, Ganymede, and Io all exhibit various features that indicate tectonic motion or volcanism. Europa’s surface is broken apart into large plates similar to the plates found on Earth. The plate movement indicates that the crust is brittle and that the plates move over the top of a softer, more fluid layer. Ganymede probably has a metallic inner core and at least two outer layers that make up a crust and mantle. Io may also have a giant iron core interior that causes the active tectonics and volcanism. It is believed that Io has a partially molten rock mantle and crust. See also Planetary Science: Volcanism and Tectonic Activity.

[B]VI -HISTORY OF TECTONIC THEORY[/B]
The theory of plate tectonics arose from several previous geologic theories and discoveries. As early as the 16th century, explorers began examining the coastlines of Africa and South America and proposed that these continents were once connected. In the 20th century, scientists proposed theories that the continents moved or drifted apart from each other. Additionally, in the 1950s scientists proposed that the earth’s magnetic poles wander, leading to more evidence, such as rocks with similar magnetic patterns around the world, that the continents had drifted. More recently, scientists examining the seafloor have discovered that it is spreading as new seafloor is created, and through this work they have discovered that the magnetic polarity of the earth has changed several times throughout the earth's history. The theory of plate tectonics revolutionized earth sciences by providing a framework that could explain these discoveries, as well as events such as earthquakes and volcanic eruptions, mountain building and the formation of the continents and oceans. See also Earthquake.

[B]A -Continental Drift[/B]
Beginning in the late 16th century and early 17th century, many people, including Flemish cartographer Abraham Ortelius and English philosopher Sir Francis Bacon, were intrigued by the shapes of the South American and African coastlines and the possibility that these continents were once connected. In 1912, German scientist Alfred Wegener eventually developed the idea that the continents were at one time connected into the theory of continental drift. Scientists of the early 20th century found evidence of continental drift in the similarity of the coastlines and geologic features on both continents. Geologists found rocks of the same age and type on opposite sides of the ocean, fossils of similar animals and plants, and similar ancient climate indicators, such as glaciation patterns.

British geologist Arthur Holmes proposed that convection currents drove the drifting movement of continents. Most earth scientists did not seriously consider the theory of continental drift until the 1960s when scientists began to discover other evidence, such as polar wandering, seafloor spreading, and reversals of the earth’s magnetic field. See also Continent.

[B]B -Polar Wandering[/B]
In the 1950s, physicists in England became interested in the observation that certain kinds of rocks produced a magnetic field. They soon decided that the magnetic fields were remnant, or left over, magnetism acquired from the earth’s magnetic field as the rocks cooled and solidified from the hot magma that formed them. Scientists measured the orientation and direction of the acquired magnetic fields and, from these orientations, calculated the direction of the rock’s magnetism and the distance from the place the rock was found to the magnetic poles. As calculations from rocks of varying ages began to accumulate, scientists calculated the position of the earth’s magnetic poles over time. The position of the poles varied depending on where the rocks were collected, and the idea of a polar wander path began to form. When sample paths of polar wander from two continents, such as North America and Europe, were compared, they coincided as if the continents were once joined. This new science and methodology became known as the discipline of paleomagnetism. As a result, discussion of the theory of continental drift increased, but most earth scientists remained skeptical.

[B]C -Seafloor Spreading[/B]
During the 1950s, as people began creating detailed maps of the world’s ocean floor, they discovered a mid-ocean ridge system of mountains nearly 60,000 km (nearly 40,000 mi) long. This ridge goes all the way around the globe. American geologist Harry H. Hess proposed that this mountain chain was the place where new ocean floor was created and that the continents moved as a result of the expansion of the ocean floors. This process was termed seafloor spreading by American geophysicist Robert S. Dietz in 1961. Hess also proposed that since the size of the earth seems to have remained constant, the seafloor must also be recycled back into the mantle beneath mountain chains and volcanic arcs along the deep trenches on the ocean floor.

These studies also found marine magnetic anomalies, or differences, on the sea floor. The anomalies are changes, or switches, in the north and south polarity of the magnetic rock of the seafloor. Scientists discovered that the switches make a striped pattern of the positive and negative magnetic anomalies: one segment, or stripe, is positive, and the segment next to it is negative. The stripes are parallel to the mid-ocean ridge crest, and the pattern is the same on both sides of that crest. Scientists could not explain the cause of these anomalies until they discovered that the earth’s magnetic field periodically reverses direction.

[B]D -Magnetic Field Reversals[/B]
In 1963, British scientists Fred J. Vine and Drummond H. Matthews combined their observations of the marine magnetic anomalies with the concept of reversals of the earth’s magnetic field. They proposed that the marine magnetic anomalies were a “tape recording” of the spreading of the ocean floor as the earth’s magnetic field reversed its direction. At the same time, other geophysicists were studying lava flows from many parts of the world to see how these flows revealed the record of reversals of the direction of the earth’s magnetic field. These studies showed that several reversals have occurred over the past 5 million years. The concept of magnetic field reversals was a breakthrough that explained the magnetic polarity switches seen in seafloor spreading as well as the concept of similar magnetic patterns in the rocks used to demonstrate continental drift.

[B]E -Revolution in Geology[/B]
The theory of plate tectonics tied together the concepts of continental drift, polar wandering, seafloor spreading, and magnetic field reversals into a single theory that completely changed the science of geology. Geologists finally had one theory that could explain all the different evidence they had accumulated to support these previous theories and discoveries. Geologists now use the theory of plate tectonics to integrate geologic events, to explain the occurrence of earthquakes and volcanic eruptions, and to explain the formation of mountain ranges and oceans.

Predator Tuesday, November 13, 2007 02:11 PM

Nuclear Energy
 
[B]Nuclear Energy[/B]

[B]I -INTRODUCTION[/B]
Nuclear Energy, energy released during the splitting or fusing of atomic nuclei. The energy of any system, whether physical, chemical, or nuclear, is manifested by the system’s ability to do work or to release heat or radiation. The total energy in a system is always conserved, but it can be transferred to another system or changed in form.

Until about 1800 the principal fuel was wood, its energy derived from solar energy stored in plants during their lifetimes. Since the Industrial Revolution, people have depended on fossil fuels—coal, petroleum, and natural gas—also derived from stored solar energy. When a fossil fuel such as coal is burned, atoms of hydrogen and carbon in the coal combine with oxygen atoms in air. Water and carbon dioxide are produced and heat is released, equivalent to about 1.6 kilowatt-hours per kilogram or about 10 electron volts (eV) per atom of carbon. This amount of energy is typical of chemical reactions resulting from changes in the electronic structure of the atoms. A part of the energy released as heat keeps the adjacent fuel hot enough to keep the reaction going.

[B]II -THE ATOM[/B]
The atom consists of a small, massive, positively charged core (nucleus) surrounded by electrons (see Atom). The nucleus, containing most of the mass of the atom, is itself composed of neutrons and protons bound together by very strong nuclear forces, much greater than the electrical forces that bind the electrons to the nucleus. The mass number A of a nucleus is the number of nucleons, or protons and neutrons, it contains; the atomic number Z is the number of positively charged protons. A specific nucleus is designated by its element symbol with the mass number written as a superscript and the atomic number as a subscript; the expression ²³⁵₉₂U, for example, represents uranium-235. See Isotope.

The binding energy of a nucleus is a measure of how tightly its protons and neutrons are held together by the nuclear forces. The binding energy per nucleon, the energy required to remove one neutron or proton from a nucleus, is a function of the mass number A. The curve of binding energy implies that if two light nuclei near the left end of the curve coalesce to form a heavier nucleus, or if a heavy nucleus at the far right splits into two lighter ones, more tightly bound nuclei result, and energy will be released. Nuclear energy, measured in millions of electron volts (MeV), is released by the fusion of two light nuclei, as when two heavy hydrogen nuclei, deuterons (²H), combine in the reaction

²H + ²H → ³He + ¹n + 3.2 MeV   (1)

producing a helium-3 atom, a free neutron (¹n), and 3.2 MeV, or 5.1 × 10⁻¹³ J (1.2 × 10⁻¹³ cal). Nuclear energy is also released when the fission of a heavy nucleus such as ²³⁵U is induced by the absorption of a neutron, as in

²³⁵U + ¹n → ¹⁴⁰Cs + ⁹³Rb + 3 ¹n + 200 MeV   (2)

producing cesium-140, rubidium-93, three neutrons, and 200 MeV, or 3.2 × 10⁻¹¹ J (7.7 × 10⁻¹² cal). A nuclear fission reaction releases 10 million times as much energy as is released in a typical chemical reaction. See Nuclear Chemistry.
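The joule and calorie figures quoted above are straightforward unit conversions from electron volts; a minimal Python sketch of the arithmetic:

[CODE]
# Converting the quoted reaction energies from MeV to joules and calories.
EV_TO_JOULE = 1.602e-19     # one electron volt in joules
JOULES_PER_CAL = 4.184      # one calorie in joules

for label, mev in (("deuteron fusion", 3.2), ("uranium-235 fission", 200.0)):
    joules = mev * 1e6 * EV_TO_JOULE
    calories = joules / JOULES_PER_CAL
    print(f"{label}: {mev} MeV = {joules:.1e} J = {calories:.1e} cal")
[/CODE]

This reproduces the 5.1 × 10⁻¹³ J and 3.2 × 10⁻¹¹ J values given for equations (1) and (2).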

[B]III -NUCLEAR ENERGY FROM FISSION[/B]
The two key characteristics of nuclear fission important for the practical release of nuclear energy are both evident in equation (2). First, the energy per fission is very large. In practical units, the fission of 1 kg (2.2 lb) of uranium-235 releases 18.7 million kilowatt-hours as heat. Second, the fission process initiated by the absorption of one neutron in uranium-235 releases about 2.5 neutrons, on the average, from the split nuclei. The neutrons released in this manner quickly cause the fission of two more atoms, thereby releasing four or more additional neutrons and initiating a self-sustaining series of nuclear fissions, or a chain reaction, which results in continuous release of nuclear energy.
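A back-of-the-envelope check on these two points is sketched below, treating every fission as yielding 200 MeV and, in an idealized chain reaction, every released neutron (about 2.5 per fission) as causing a further fission. The per-kilogram result is a rough upper estimate, since not all of the 200 MeV ends up as recoverable heat.

[CODE]
# Rough check: heat from fissioning 1 kg of uranium-235, and the growth of an
# idealized chain reaction in which every released neutron causes a fission.
AVOGADRO = 6.022e23
EV_TO_JOULE = 1.602e-19

nuclei_per_kg = 1000 / 235 * AVOGADRO            # about 2.6e24 U-235 nuclei per kg
energy_joules = nuclei_per_kg * 200e6 * EV_TO_JOULE
energy_kwh = energy_joules / 3.6e6               # 1 kWh = 3.6e6 J
print(f"1 kg of U-235: roughly {energy_kwh / 1e6:.0f} million kWh of heat")

neutrons = 1.0
for generation in range(1, 11):
    neutrons *= 2.5                              # average neutrons per fission
    print(f"generation {generation:2d}: about {neutrons:,.0f} fissions")
[/CODE]

The simple tally lands in the low tens of millions of kilowatt-hours, the same order as the figure quoted above, and the number of fissions grows by a factor of 2.5 with every generation, which is why an uncontrolled chain reaction escalates so quickly.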

Naturally occurring uranium contains only 0.71 percent uranium-235; the remainder is the nonfissile isotope uranium-238. A mass of natural uranium by itself, no matter how large, cannot sustain a chain reaction because only the uranium-235 is easily fissionable. The probability that a fission neutron with an initial energy of about 1 MeV will induce fission is rather low, but the probability can be increased by a factor of hundreds when the neutron is slowed down through a series of elastic collisions with light nuclei such as hydrogen, deuterium, or carbon. This fact is the basis for the design of practical energy-producing fission reactors.
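One way to see why light nuclei make effective moderators is the standard slowing-down estimate from reactor physics (not taken from this article): the average logarithmic energy loss per elastic collision depends only on the mass number of the target nucleus. A minimal sketch:

[CODE]
# How many elastic collisions are needed to slow a ~1 MeV fission neutron to
# thermal energy? Uses the standard mean logarithmic energy decrement formula.
import math

def xi(mass_number):
    """Average logarithmic energy loss per collision with a nucleus of mass A."""
    A = mass_number
    if A == 1:
        return 1.0   # hydrogen (the formula's limiting case)
    return 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

E_initial = 1.0e6   # eV, roughly the fission-neutron energy cited above
E_thermal = 0.025   # eV, a typical thermal energy

for name, A in (("hydrogen", 1), ("deuterium", 2), ("carbon", 12)):
    collisions = math.log(E_initial / E_thermal) / xi(A)
    print(f"{name:9s}: about {collisions:.0f} collisions to reach thermal energy")
[/CODE]

Hydrogen needs only a couple of dozen collisions, while carbon needs on the order of a hundred, which is consistent with water and graphite both serving as practical moderators.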

In December 1942 at the University of Chicago, the Italian physicist Enrico Fermi succeeded in producing the first nuclear chain reaction. This was done with an arrangement of natural uranium lumps distributed within a large stack of pure graphite, a form of carbon. In Fermi's “pile,” or nuclear reactor, the graphite moderator served to slow the neutrons.

[B]IV -NUCLEAR POWER REACTORS[/B]
The first large-scale nuclear reactors were built in 1944 at Hanford, Washington, for the production of nuclear weapons material. The fuel was natural uranium metal; the moderator, graphite. Plutonium was produced in these plants by neutron absorption in uranium-238; the power produced was not used.

[B]A -Light-Water and Heavy-Water Reactors[/B]
A variety of reactor types, characterized by the type of fuel, moderator, and coolant used, have been built throughout the world for the production of electric power. In the United States, with few exceptions, power reactors use nuclear fuel in the form of uranium oxide isotopically enriched to about three percent uranium-235. The moderator and coolant are highly purified ordinary water. A reactor of this type is called a light-water reactor (LWR).

In the pressurized-water reactor (PWR), a version of the LWR system, the water coolant operates at a pressure of about 150 atmospheres. It is pumped through the reactor core, where it is heated to about 325° C (about 620° F). The heated water is pumped through a steam generator, where, through heat exchangers, a secondary loop of water is heated and converted to steam. This steam drives one or more turbine generators, is condensed, and is pumped back to the steam generator. The secondary loop is isolated from the water in the reactor core and, therefore, is not radioactive. A third stream of water from a lake, river, or cooling tower is used to condense the steam. The reactor pressure vessel is about 15 m (about 49 ft) high and 5 m (about 16.4 ft) in diameter, with walls 25 cm (about 10 in) thick. The core houses some 82 metric tons of uranium oxide contained in thin corrosion-resistant tubes clustered into fuel bundles.

In the boiling-water reactor (BWR), a second type of LWR, the water coolant is permitted to boil within the core by operating at somewhat lower pressure. The steam produced in the reactor pressure vessel is piped directly to the turbine generator, is condensed, and is then pumped back to the reactor. Although the steam is radioactive, there is no intermediate heat exchanger between the reactor and the turbine, so no efficiency is lost in an extra heat-transfer step. As in the PWR, the condenser cooling water has a separate source, such as a lake or river. The power level of an operating reactor is monitored by a variety of thermal, flow, and nuclear instruments. Power output is controlled by inserting into or removing from the core a group of neutron-absorbing control rods. The position of these rods determines the power level at which the chain reaction is just self-sustaining.

During operation, and even after shutdown, a large, 1,000-megawatt (MW) power reactor contains billions of curies of radioactivity. Radiation emitted from the reactor during operation and from the fission products after shutdown is absorbed in thick concrete shields around the reactor and primary coolant system. Other safety features include emergency core cooling systems to prevent core overheating in the event of malfunction of the main coolant systems and, in most countries, a large steel and concrete containment building to retain any radioactive elements that might escape in the event of a leak.

Although more than 100 nuclear power plants were operating or being built in the United States at the beginning of the 1980s, in the aftermath of the 1979 Three Mile Island accident in Pennsylvania, safety concerns and economic factors combined to block any additional growth in nuclear power. No orders for nuclear plants have been placed in the United States since 1978, and some plants that have been completed have not been allowed to operate. In 1996 about 22 percent of the electric power generated in the United States came from nuclear power plants. In contrast, in France almost three-quarters of the electricity generated was from nuclear power plants.

In the initial period of nuclear power development in the early 1950s, enriched uranium was available only in the United States and the Union of Soviet Socialist Republics (USSR). The nuclear power programs in Canada, France, and the United Kingdom therefore centered about natural uranium reactors, in which ordinary water cannot be used as the moderator because it absorbs too many neutrons. This limitation led Canadian engineers to develop a reactor cooled and moderated by deuterium oxide (D2O), or heavy water. The Canadian deuterium-uranium reactor known as CANDU has operated satisfactorily in Canada, and similar plants have been built in India, Argentina, and elsewhere.

In the United Kingdom and France the first full-scale power reactors were fueled with natural uranium metal, were graphite-moderated, and were cooled with carbon dioxide gas under pressure. These initial designs have been superseded in the United Kingdom by a system that uses enriched uranium fuel. In France the initial reactor type chosen was dropped in favor of the PWR of U.S. design when enriched uranium became available from French isotope-enrichment plants. Russia and the other successor states of the USSR had a large nuclear power program, using both graphite-moderated and PWR systems.

[B]B -Propulsion Reactors[/B]
Nuclear power plants similar to the PWR are used for the propulsion plants of large surface naval vessels such as the aircraft carrier USS Nimitz. The basic technology of the PWR system was first developed in the U.S. naval reactor program directed by Admiral Hyman G. Rickover. Reactors for submarine propulsion are generally physically smaller and use more highly enriched uranium to permit a compact core. The United States, the United Kingdom, Russia, and France all have nuclear-powered submarines with such power plants.

Three experimental seagoing nuclear cargo ships were operated for limited periods by the United States, Germany, and Japan. Although they were technically successful, economic conditions and restrictive port regulations brought an end to these projects. The Soviet government built the first successful nuclear-powered icebreaker, Lenin, for use in clearing the Arctic sea-lanes.

[B]C -Research Reactors[/B]
A variety of small nuclear reactors have been built in many countries for use in education and training, research, and the production of radioactive isotopes. These reactors generally operate at power levels near one MW, and they are more easily started up and shut down than larger power reactors.
A widely used type is called the swimming-pool reactor. The core is partially or fully enriched uranium-235 contained in aluminum alloy plates, immersed in a large pool of water that serves as both coolant and moderator. Materials may be placed directly in or near the reactor core to be irradiated with neutrons. Various radioactive isotopes can be produced for use in medicine, research, and industry (see Isotopic Tracer). Neutrons may also be extracted from the reactor core by means of beam tubes to be used for experimentation.

[B]D -Breeder Reactors[/B]
Uranium, the natural resource on which nuclear power is based, occurs in scattered deposits throughout the world. Its total supply is not fully known and may be limited unless sources of very low concentration, such as granites and shale, are used. Conservatively estimated U.S. resources of uranium having an acceptable cost lie in the range of two million to five million metric tons. The lower amount could support an LWR nuclear power system providing about 30 percent of U.S. electric power for only about 50 years. The principal reason for this relatively brief life span of the LWR nuclear power system is its very low efficiency in the use of uranium: only approximately one percent of the energy content of the uranium is made available in this system.

The key feature of a breeder reactor is that it produces more fuel than it consumes. It does this by promoting the absorption of excess neutrons in a fertile material. Several breeder reactor systems are technically feasible. The breeder system that has received the greatest worldwide attention uses uranium-238 as the fertile material. When uranium-238 absorbs neutrons in the reactor, it is transmuted to a new fissionable material, plutonium, through a nuclear process called β (beta) decay. The sequence of nuclear reactions is

²³⁸U + ¹n → ²³⁹U
²³⁹U → ²³⁹Np + β⁻
²³⁹Np → ²³⁹Pu + β⁻ (3)

In beta decay a nuclear neutron decays into a proton and a beta particle (a high-energy electron).

When plutonium-239 itself absorbs a neutron, fission can occur, and on the average about 2.8 neutrons are released. In an operating reactor, one of these neutrons is needed to cause the next fission and keep the chain reaction going. On the average about 0.5 neutron is uselessly lost by absorption in the reactor structure or coolant. The remaining 1.3 neutrons can be absorbed in uranium-238 to produce more plutonium via the reactions in equation (3).
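The neutron bookkeeping described in the last two paragraphs reduces to simple subtraction. The sketch below restates it, using the averages quoted above (2.8 neutrons released per plutonium-239 fission, 1 needed to sustain the chain, about 0.5 lost to absorption), to show that roughly 1.3 neutrons per fission remain to breed new fuel from uranium-238 via the reactions in equation (3).

[CODE]
# Neutron bookkeeping for a plutonium-fueled breeder, using the averages quoted above.
neutrons_per_fission = 2.8   # released per Pu-239 fission
sustain_chain = 1.0          # one neutron must trigger the next fission
lost_to_absorption = 0.5     # absorbed uselessly in structure or coolant

left_for_breeding = neutrons_per_fission - sustain_chain - lost_to_absorption
print(f"Neutrons available to breed Pu-239: {left_for_breeding:.1f}")   # about 1.3
print("Produces more fuel than it consumes:", left_for_breeding > 1.0)  # True
[/CODE]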

The breeder system that has had the greatest development effort is called the liquid-metal fast breeder reactor (LMFBR). In order to maximize the production of plutonium-239, the velocity of the neutrons causing fission must remain fast—at or near their initial release energy. Any moderating materials, such as water, that might slow the neutrons must be excluded from the reactor. A molten metal, liquid sodium, is the preferred coolant liquid. Sodium has very good heat transfer properties, melts at about 100° C (about 212° F), and does not boil until about 900° C (about 1650° F). Its main drawbacks are its chemical reactivity with air and water and the high level of radioactivity induced in it in the reactor.

Development of the LMFBR system began in the United States before 1950, with the construction of the first experimental breeder reactor, EBR-1. A larger U.S. program, on the Clinch River, was halted in 1983, and only experimental work was to continue (see Tennessee Valley Authority). In the United Kingdom, France, and Russia and the other successor states of the USSR, working breeder reactors were installed, and experimental work continued in Germany and Japan.

In one design of a large LMFBR power plant, the core of the reactor consists of thousands of thin stainless steel tubes containing mixed uranium and plutonium oxide fuel: about 15 to 20 percent plutonium-239, the remainder uranium. Surrounding the core is a region called the breeder blanket, which contains similar rods filled only with uranium oxide. The entire core and blanket assembly measures about 3 m (about 10 ft) high by about 5 m (about 16.4 ft) in diameter and is supported in a large vessel containing molten sodium that leaves the reactor at about 500° C (about 930° F). This vessel also contains the pumps and heat exchangers that aid in removing heat from the core. Steam is produced in a second sodium loop, separated from the radioactive reactor coolant loop by the intermediate heat exchangers in the reactor vessel. The entire nuclear reactor system is housed in a large steel and concrete containment building.

The first large-scale plant of this type for the generation of electricity, called Super-Phénix, went into operation in France in 1984. (However, concerns about operational safety and environmental contamination led the French government to announce in 1998 that Super-Phénix would be dismantled.) An intermediate-scale plant, the BN-350, was built by the Soviet Union on the shore of the Caspian Sea for the production of power and the desalination of water. The British have a large 250-MW prototype in Scotland.

The LMFBR produces about 20 percent more fuel than it consumes. In a large power reactor enough excess new fuel is produced over 20 years to permit the loading of another similar reactor. In the LMFBR system about 75 percent of the energy content of natural uranium is made available, in contrast to the one percent in the LWR.

[B]V -NUCLEAR FUELS AND WASTES[/B]
The hazardous fuels used in nuclear reactors present handling problems. This is particularly true of the spent fuels, which must be stored or disposed of in some way.

[B]A -The Nuclear Fuel Cycle[/B]
Any electric power generating plant is only one part of a total energy cycle. The uranium fuel cycle that is employed for LWR systems currently dominates worldwide nuclear power production and includes many steps. Uranium, which contains about 0.7 percent uranium-235, is obtained from either surface or underground mines. The ore is concentrated by milling and then shipped to a conversion plant, where its elemental form is changed to uranium hexafluoride gas (UF6). At an isotope enrichment plant, the gas is forced against a porous barrier that permits the lighter uranium-235 to penetrate more readily than uranium-238. This process enriches uranium to about 3 percent uranium-235. The depleted uranium—the tailings—contains about 0.3 percent uranium-235.
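A simple uranium-235 mass balance shows roughly how much natural uranium feed is needed for each kilogram of enriched product at the assay levels given above (about 0.7 percent in the feed, 3 percent in the product, and 0.3 percent in the tailings). The sketch below is only an illustration of that balance; the percentages come from the text, while the balance equations themselves are standard bookkeeping.

[CODE]
# Uranium-235 mass balance across an enrichment plant (illustrative only).
feed_assay = 0.007     # U-235 fraction in natural uranium (~0.7%)
product_assay = 0.03   # U-235 fraction in enriched product (~3%)
tails_assay = 0.003    # U-235 fraction in the depleted tailings (~0.3%)

# For 1 kg of product: feed = product + tails, and the U-235 mass is conserved.
product_kg = 1.0
feed_kg = product_kg * (product_assay - tails_assay) / (feed_assay - tails_assay)
tails_kg = feed_kg - product_kg

print(f"Natural uranium feed: {feed_kg:.1f} kg")   # roughly 6 to 7 kg per kg of product
print(f"Depleted tailings:    {tails_kg:.1f} kg")
[/CODE]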

The enriched product is sent to a fuel fabrication plant, where the UF6 gas is converted to uranium oxide powder, then into ceramic pellets that are loaded into corrosion-resistant fuel rods. These are assembled into fuel elements and are shipped to the reactor power plant. The world’s supply of enriched uranium fuel for powering commercial nuclear power plants is produced by five consortiums located in the United States, Western Europe, Russia, and Japan. The United States consortium—the federally owned United States Enrichment Corporation—produces 40 percent of this enriched uranium.

A typical 1,000-MW pressurized-water reactor has about 200 fuel elements, one-third of which are replaced each year because of the depletion of the uranium-235 and the buildup of fission products that absorb neutrons. At the end of its life in the reactor, the fuel is tremendously radioactive because of the fission products it contains and hence is still producing a considerable amount of energy. The discharged fuel is placed in water storage pools at the reactor site for a year or more.

At the end of the cooling period the spent fuel elements are shipped in heavily shielded casks either to permanent storage facilities or to a chemical reprocessing plant. At a reprocessing plant, the unused uranium and the plutonium-239 produced in the reactor are recovered and the radioactive wastes concentrated. (In the late 1990s neither such facility was yet available in the United States for power plant fuel, and temporary storage was used.)

The spent fuel still contains almost all the original uranium-238, about one-third of the uranium-235, and some of the plutonium-239 produced in the reactor. In cases where the spent fuel is sent to permanent storage, none of this potential energy content is used. In cases where the fuel is reprocessed, the uranium is recycled through the diffusion plant, and the recovered plutonium-239 may be used in place of some uranium-235 in new fuel elements. At the end of the 20th century, no reprocessing of fuel occurred in the United States because of environmental, health, and safety concerns, and the concern that plutonium-239 could be used illegally for the manufacture of weapons.

In the fuel cycle for the LMFBR, plutonium bred in the reactor is always recycled for use in new fuel. The feed to the fuel-element fabrication plant consists of recycled uranium-238, depleted uranium from the isotope separation plant stockpile, and part of the recovered plutonium-239. No additional uranium needs to be mined, as the existing stockpile could support many breeder reactors for centuries. Because the breeder produces more plutonium-239 than it requires for its own refueling, about 20 percent of the recovered plutonium is stored for later use in starting up new breeders. Because new fuel is bred from the uranium-238, instead of using only the natural uranium-235 content, about 75 percent of the potential energy of uranium is made available with the breeder cycle.

The final step in any of the fuel cycles is the long-term storage of the highly radioactive wastes, which remain biologically hazardous for thousands of years. Fuel elements may be stored in shielded, guarded repositories for later disposition or may be converted to very stable compounds, fixed in ceramics or glass, encapsulated in stainless steel canisters, and buried far underground in very stable geologic formations.

However, the safety of such repositories is the subject of public controversy, especially in the geographic region in which the repository is located or is proposed to be built. For example, environmentalists plan to file a lawsuit to close a repository built near Carlsbad, New Mexico. In 1999, this repository began receiving shipments of radioactive waste from the manufacture of nuclear weapons in the United States during the Cold War. Another controversy centers on a proposed repository at Yucca Mountain, Nevada. Opposition from state residents and questions about the geologic stability of this site have helped prolong government studies. Even if opened, the site will not receive shipments of radioactive waste until at least 2010 (see the Waste Management section below).

[B]B -Nuclear Safety[/B]
Public concern about the acceptability of nuclear power from fission arises from two basic features of the system. The first is the high level of radioactivity present at various stages of the nuclear cycle, including disposal. The second is the fact that the nuclear fuels uranium-235 and plutonium-239 are the materials from which nuclear weapons are made. See Nuclear Weapons; Radioactive Fallout.

U.S. President Dwight D. Eisenhower announced the U.S. Atoms for Peace program in 1953. It was perceived as offering a future of cheap, plentiful energy. The utility industry hoped that nuclear power would replace increasingly scarce fossil fuels and lower the cost of electricity. Groups concerned with conserving natural resources foresaw a reduction in air pollution and strip mining. The public in general looked favorably on this new energy source, seeing the program as a realization of hopes for the transition of nuclear power from wartime to peaceful uses.

Nevertheless, after this initial euphoria, reservations about nuclear energy grew as greater scrutiny was given to issues of nuclear safety and weapons proliferation. In the United States and other countries many groups oppose nuclear power. In addition, high construction costs, strict building and operating regulations, and high costs for waste disposal make nuclear power plants much more expensive to build and operate than plants that burn fossil fuels. In some industrialized countries, the nuclear power industry has come under growing pressure to cut operating expenses and become more cost-competitive. Other countries have begun or planned to phase out nuclear power completely.

At the end of the 20th century, many experts viewed Asia as the only possible growth area for nuclear power. In the late 1990s, China, Japan, South Korea, and Taiwan had nuclear power plants under construction. However, many European nations were reducing or reversing their commitments to nuclear power. For example, Sweden committed to phasing out nuclear power by 2010. France canceled several planned reactors and was considering the replacement of aging nuclear plants with environmentally safer fossil-fuel plants. Germany announced plans in 1998 to phase out nuclear energy. In the United States, no new reactors had been ordered since 1978.

In 1996, 21.9 percent of the electricity generated in the United States was produced by nuclear power. By 1998 that amount had decreased to 20 percent. Because no orders for nuclear plants have been placed since 1978, this share should continue to decline as existing nuclear plants are eventually closed. In 1998 Commonwealth Edison, the largest private owner and operator of nuclear plants in the United States, had only four of 12 nuclear power plants online. Industry experts cite economic, safety, and labor problems as reasons for these shutdowns.

[B]B1 -Radiological Hazards[/B]
Radioactive materials emit penetrating, ionizing radiation that can injure living tissues. The commonly used unit of radiation dose equivalent in humans is the sievert. (In the United States, the rem is still used as a measure of dose equivalent; one rem equals 0.01 sievert.) Each individual in the United States and Canada is exposed to about 0.003 sievert per year from natural background radiation sources. A single exposure of about five sieverts is likely to be fatal to an individual. A large population exposed to low levels of radiation will experience about one additional cancer for each 10 sieverts of total dose equivalent. See Radiation Effects, Biological.
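The unit conversion and the population-risk rule of thumb in this paragraph are easy to put into numbers. In the sketch below, the rem-to-sievert conversion, the background dose, and the one-cancer-per-10-sieverts figure are taken from the text, while the population size and individual dose are made-up values chosen purely for illustration.

[CODE]
# Radiation dose arithmetic using the figures quoted above.
SV_PER_REM = 0.01                  # 1 rem = 0.01 sievert
background_sv_per_year = 0.003     # typical natural background dose

print(f"Background dose: {background_sv_per_year / SV_PER_REM:.1f} rem per year")  # 0.3 rem

# Rule of thumb from the text: about 1 additional cancer per 10 sieverts of total dose.
population = 100_000               # hypothetical population (illustration only)
dose_each_sv = 0.001               # hypothetical dose per person (illustration only)
total_dose_sv = population * dose_each_sv
extra_cancers = total_dose_sv / 10.0

print(f"Total dose: {total_dose_sv:.0f} Sv -> about {extra_cancers:.0f} additional cancers")
[/CODE]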

Radiological hazards can arise in most steps of the nuclear fuel cycle. Radon, a colorless radioactive gas produced by the decay of uranium, is therefore a common air pollutant in underground uranium mines. The mining and ore-milling operations leave large amounts of waste material on the ground that still contains small concentrations of uranium. To prevent the release of radioactive radon gas from this waste into the air, the waste must be stored in waterproof basins and covered with a thick layer of soil.

Uranium enrichment and fuel fabrication plants contain large quantities of uranium enriched to about three percent uranium-235, in the form of the corrosive gas uranium hexafluoride (UF6). The radiological hazard, however, is low, and the usual care taken with a valuable material posing a typical chemical hazard suffices to ensure safety.

[B]B2 -Reactor Safety Systems[/B]
The safety of the power reactor itself has received the greatest attention. In an operating reactor, the fuel elements contain by far the largest fraction of the total radioactive inventory. A number of barriers prevent fission products from leaking into the air during normal operation. The fuel is clad in corrosion-resistant tubing. The heavy steel walls of the primary coolant system of the PWR form a second barrier. The water coolant itself absorbs some of the biologically important radioactive isotopes such as iodine. The steel and concrete building is a third barrier.

During the operation of a power reactor, some radioactive compounds are unavoidably released. The total exposure to people living nearby is usually only a few percent of the natural background radiation. Major concerns arise, however, from radioactive releases caused by accidents in which fuel damage occurs and safety devices fail. The major danger to the integrity of the fuel is a loss-of-coolant accident in which the fuel is damaged or even melts. Fission products are released into the coolant, and if the coolant system is breached, fission products enter the reactor building.

Reactor systems rely on elaborate instrumentation to monitor their condition and to control the safety systems used to shut down the reactor under abnormal circumstances. The PWR design also includes backup safety systems that inject boron into the coolant; the boron absorbs neutrons and stops the chain reaction, further assuring shutdown. Light-water reactor plants operate at high coolant pressure. In the event of a large pipe break, much of the coolant would flash into steam and core cooling could be lost. To prevent a total loss of core cooling, reactors are provided with emergency core cooling systems that begin to operate automatically on the loss of primary coolant pressure. In the event of a steam leak into the containment building from a broken primary coolant line, spray coolers are actuated to condense the steam and prevent a hazardous pressure rise in the building.

[B]B3 -Three Mile Island and Chernobyl'[/B]
Despite the many safety features described above, an accident did occur in 1979 at the Three Mile Island PWR near Harrisburg, Pennsylvania. A maintenance error and a defective valve led to a loss-of-coolant accident. The reactor itself was shut down by its safety system when the accident began, and the emergency core cooling system began operating as required a short time into the accident. Then, however, as a result of human error, the emergency cooling system was shut off, causing severe core damage and the release of volatile fission products from the reactor vessel. Although only a small amount of radioactive gas escaped from the containment building, causing a slight rise in individual human exposure levels, the financial damage to the utility was very large, $1 billion or more, and the psychological stress on the public, especially those people who live in the area near the nuclear power plant, was in some instances severe.

The official investigation of the accident named operational error and inadequate control room design, rather than simple equipment failure, as the principal causes of the accident. It led to enactment of legislation requiring the Nuclear Regulatory Commission to adopt far more stringent standards for the design and construction of nuclear power plants. The legislation also required utility companies to assume responsibility for helping state and county governments prepare emergency response plans to protect the public health in the event of other such accidents.

Since 1981, the financial burdens imposed by these requirements have made it difficult to build and operate new nuclear power plants. Combined with other factors, such as high capital costs and long construction periods (which means builders must borrow more money and wait longer periods before earning a return on their investment), safety regulations have forced utility companies in the states of Washington, Ohio, Indiana, and New York to abandon partly completed plants after spending billions of dollars on them.
On April 26, 1986, another serious incident alarmed the world. One of four nuclear reactors at Chernobyl', near Pripyat’, about 130 km (about 80 mi) north of Kyiv (now in Ukraine) in the USSR, exploded and burned. Radioactive material spread over Scandinavia and northern Europe, as discovered by Swedish observers on April 28. According to the official report issued in August, the accident was caused by unauthorized testing of the reactor by its operators. The reactor went out of control; there were two explosions, the top of the reactor blew off, and the core ignited, burning at temperatures of 1500° C (2800° F). People nearest the reactor were exposed to radiation levels about 50 times higher than those at Three Mile Island, and a cloud of radioactive fallout spread westward. Unlike most reactors in western countries, including the United States, the reactor at Chernobyl' did not have a containment building; such a structure could have prevented material from leaving the reactor site.

About 135,000 people were evacuated, and more than 30 died. The plant was encased in concrete. By 1988, however, the other three Chernobyl' reactors were back in operation. One of the three remaining reactors was shut down in 1991 because of a fire in the reactor building. In 1994 Western nations developed a financial aid package to help close the entire plant, and a year later the Ukrainian government finally agreed to a plan that would shut down the remaining reactors by the year 2000.

[B]C -Fuel Reprocessing[/B]
The fuel reprocessing step poses a combination of radiological hazards. One is the accidental release of fission products if a leak should occur in chemical equipment or the cells and building housing it. Another may be the routine release of low levels of inert radioactive gases such as xenon and krypton. In 1966 a commercial reprocessing plant opened in West Valley, New York. But in 1972 this reprocessing plant was closed after generating more than 600,000 gallons of high-level radioactive waste. After the plant was closed, a portion of this radioactive waste was partially treated and cemented into nearly 20,000 steel drums. In 1996, the United States Department of Energy began to solidify the remaining liquid radioactive wastes into glass cylinders. At the end of the 20th century, no reprocessing plants were licensed in the United States.

Of major concern in chemical reprocessing is the separation of plutonium-239, a material that can be used to make nuclear weapons. The hazards of theft of plutonium-239, or its use for intentional but hidden production for weapons purposes, can best be controlled by political rather than technical means. Improved security measures at sensitive points in the fuel cycle and expanded international inspection by the International Atomic Energy Agency (IAEA) offer the best prospects for controlling the hazards of plutonium diversion.

[B]D -Waste Management[/B]
The last step in the nuclear fuel cycle, waste management, remains one of the most controversial. The principal issue here is not so much the present danger as the danger to generations far in the future. Many nuclear wastes remain radioactive for thousands of years, beyond the span of any human institution. The technology for packaging the wastes so that they pose no current hazard is relatively straightforward. The difficulty lies both in being adequately confident that future generations are well protected and in making the political decision on how and where to proceed with waste storage. Permanent but potentially retrievable storage in deep stable geologic formations seems the best solution. In 1988 the U.S. government chose Yucca Mountain, a Nevada desert site with a thick section of porous volcanic rocks, as the nation's first permanent underground repository for more than 36,290 metric tons of nuclear waste. However, opposition from state residents and uncertainty about whether Yucca Mountain is adequately insulated from earthquakes and other hazards have prolonged government studies. For example, a geological study by the U.S. Department of Energy detected water in several mineral samples taken at the Yucca Mountain site. The presence of water in these samples suggests that water may have once risen up through the mountain and later subsided. Because such an event could jeopardize the safety of a nuclear waste repository, the Department of Energy has funded further study of these fluid intrusions.

A $2 billion repository built in underground salt caverns near Carlsbad, New Mexico, is designed to store radioactive waste from the manufacture of nuclear weapons during the Cold War. This repository, located 655 meters (2,150 feet) underground, is designed to slowly collapse and encapsulate the plutonium-contaminated waste in the salt beds. Although the repository began receiving radioactive waste shipments in April 1999, environmentalists planned to file a lawsuit to close the Carlsbad repository.

[B]VI -NUCLEAR FUSION[/B]
The release of nuclear energy can occur at the low end of the binding energy curve, described in Section II, through the fusion of two light nuclei into a heavier one. The energy radiated by stars, including the Sun, arises from such fusion reactions deep in their interiors. At the enormous pressure and at temperatures above 15 million °C (27 million °F) existing there, hydrogen nuclei combine according to equation (1) and give rise to most of the energy released by the Sun.

Nuclear fusion was first achieved on Earth in the early 1930s by bombarding a target containing deuterium, the mass-2 isotope of hydrogen, with high-energy deuterons in a cyclotron (see Particle Accelerators). Accelerating the deuteron beam required a great deal of energy, most of which appeared as heat in the target. As a result, no net useful energy was produced. In the 1950s the first large-scale but uncontrolled release of fusion energy was demonstrated in the tests of thermonuclear weapons by the United States, the USSR, the United Kingdom, and France. This was such a brief and uncontrolled release that it could not be used for the production of electric power.

In the fission reactions discussed earlier, the neutron, which has no electric charge, can easily approach and react with a fissionable nucleus—for example, uranium-235. In the typical fusion reaction, however, the reacting nuclei both have a positive electric charge, and the natural repulsion between them, called Coulomb repulsion, must be overcome before they can join. This occurs when the temperature of the reacting gas is sufficiently high—50 to 100 million °C (90 to 180 million °F). In a gas of the heavy hydrogen isotopes deuterium and tritium at such temperatures, the fusion reaction

²H + ³H → ⁴He + ¹n + 17.6 MeV

occurs, releasing about 17.6 MeV per fusion event. The energy appears first as kinetic energy of the helium-4 nucleus and the neutron, but is soon transformed into heat in the gas and surrounding materials.
If the density of the gas is sufficient—and at these temperatures the density need be only 10⁻⁵ atm, or almost a vacuum—the energetic helium-4 nucleus can transfer its energy to the surrounding hydrogen gas, thereby maintaining the high temperature and allowing subsequent fusion reactions, or a fusion chain reaction, to take place. Under these conditions, “nuclear ignition” is said to have occurred.
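For a sense of scale, the 17.6 MeV released per deuterium-tritium fusion can be converted to an energy content per kilogram of fuel. The arithmetic below uses standard physical constants that are not part of the text and is only a rough sketch; it gives a figure several times larger than the fission value quoted in Section III, which is why fusion fuel is often described as the more concentrated energy source.

[CODE]
# Energy content of deuterium-tritium fuel from the 17.6 MeV per reaction quoted above.
MEV_TO_J = 1.602e-13      # joules per MeV
AMU_TO_KG = 1.6605e-27    # kilograms per atomic mass unit

energy_per_reaction = 17.6 * MEV_TO_J
fuel_mass_per_reaction = 5 * AMU_TO_KG    # one deuteron (2 u) plus one triton (3 u)

j_per_kg = energy_per_reaction / fuel_mass_per_reaction
kwh_per_kg = j_per_kg / 3.6e6

print(f"About {kwh_per_kg / 1e6:.0f} million kWh per kg of D-T fuel")  # on the order of 90 million
[/CODE]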

The basic problems in attaining useful nuclear fusion conditions are (1) to heat the gas to these very high temperatures and (2) to confine a sufficient quantity of the reacting nuclei for a long enough time to permit the release of more energy than is needed to heat and confine the gas. A subsequent major problem is the capture of this energy and its conversion to electricity.
At temperatures of even 100,000° C (180,000° F), all the hydrogen atoms are fully ionized. The gas consists of an electrically neutral assemblage of positively charged nuclei and negatively charged free electrons. This state of matter is called a plasma.

A plasma hot enough for fusion cannot be contained by ordinary materials. The plasma would cool very rapidly, and the vessel walls would be destroyed by the extreme heat. However, since the plasma consists of charged nuclei and electrons, which move in tight spirals around the lines of force of strong magnetic fields, the plasma can be contained in a properly shaped magnetic field region without reacting with material walls.

In any useful fusion device, the energy output must exceed the energy required to confine and heat the plasma. This condition can be met when the product of confinement time t and plasma density n exceeds about 10¹⁴ seconds per cubic centimeter. The relationship nt ≥ 10¹⁴ is called the Lawson criterion.
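Because the Lawson criterion is just a threshold on the product of density and confinement time, it can be checked in one line. The plasma parameters in the sketch below are hypothetical values chosen only to illustrate the comparison, not measurements from any actual device.

[CODE]
# Check hypothetical plasma conditions against the Lawson criterion n*t >= 1e14 quoted above
# (density in particles per cubic centimeter, confinement time in seconds).
def meets_lawson(density_cm3, confinement_s, threshold=1e14):
    """Return True if the density-confinement product reaches the threshold."""
    return density_cm3 * confinement_s >= threshold

print(meets_lawson(1e14, 1.0))    # True: product is exactly 1e14
print(meets_lawson(1e13, 0.5))    # False: product is 5e12, well short of the criterion
[/CODE]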

Numerous schemes for the magnetic confinement of plasma have been tried since 1950 in the United States, Russia, the United Kingdom, Japan, and elsewhere. Thermonuclear reactions have been observed, but the Lawson number rarely exceeded 10¹². One device, however—the tokamak, originally suggested in the USSR by Igor Tamm and Andrey Sakharov—began to give encouraging results in the early 1960s.

The confinement chamber of a tokamak has the shape of a torus, with a minor diameter of about 1 m (about 3.3 ft) and a major diameter of about 3 m (about 9.8 ft). A toroidal (donut-shaped) magnetic field of about 50,000 gauss is established inside this chamber by large electromagnets. A longitudinal current of several million amperes is induced in the plasma by the transformer coils that link the torus. The resulting magnetic field lines, spirals in the torus, stably confine the plasma.

Based on the successful operation of small tokamaks at several laboratories, two large devices were built in the early 1980s, one at Princeton University in the United States and one in the USSR. The enormous magnetic fields in a tokamak confine the plasma while it is heated and compressed to extremely high temperatures and pressures, forcing the atomic nuclei to fuse. As the atomic nuclei are fused together, an extraordinary amount of energy is released. During this fusion process, the temperature in the tokamak reaches three times that of the Sun’s core.
Another possible route to fusion energy is that of inertial confinement. In this concept, the fuel—deuterium or tritium—is contained within a tiny glass sphere that is then bombarded on several sides by a pulsed laser or heavy ion beam. This causes an implosion of the glass sphere, setting off a thermonuclear reaction that ignites the fuel. Several laboratories in the United States and elsewhere are currently pursuing this possibility. In the late 1990s, many researchers concentrated on the use of beams of heavy ions, such as barium ions, rather than lasers to trigger inertial-confinement fusion. Researchers chose heavy ion beams because heavy ion accelerators can produce intense ion pulses at high repetition rates and are extremely efficient at converting electric power into ion beam energy, thus reducing the amount of input power. Also, in comparison to laser beams, ion beams can penetrate the glass sphere and heat the fuel more effectively.

Progress in fusion research has been promising, but the development of practical systems for creating a stable fusion reaction that produces more power than it consumes will probably take decades to realize. The research is expensive, as well. However, some progress was made in the early 1990s. In 1991, for the first time ever, a significant amount of energy—about 1.7 million watts—was produced from controlled nuclear fusion at the Joint European Torus (JET) Laboratory in England. In December 1993, researchers at Princeton University used the Tokamak Fusion Test Reactor to produce a controlled fusion reaction that output 5.6 million watts of power. However, both the JET and the Tokamak Fusion Test Reactor consumed more energy than they produced during their operation.

If fusion energy does become practical, it offers the following advantages: (1) a limitless source of fuel, deuterium from the ocean; (2) no possibility of a reactor accident, as the amount of fuel in the system is very small; and (3) waste products much less radioactive and simpler to handle than those from fission systems.

prissygirl Tuesday, November 13, 2007 02:20 PM

gr8 work
 
2 Attachment(s)
well done predator, but i would like to add some diagrammatic representations through which the posted topics can easily be understood.
[B]EYE[/B]

Predator Tuesday, November 13, 2007 02:35 PM

Aging
 
[B]Aging

I -INTRODUCTION[/B]

Aging, irreversible biological changes that occur in all living things with the passage of time, eventually resulting in death. Although all organisms age, rates of aging vary considerably. Fruit flies, for example, are born, grow old, and die in 30 or 40 days, while field mice have a life span of about three years. Dolphins may live to age 25, elephants to age 50, and Galápagos tortoises to 100. These life spans pale in comparison to those of some species of giant sequoia trees, which live for thousands of years.

Among humans, the effects of aging vary from one individual to another. The average life expectancy for Americans is around 75 years, almost twice what it was in the early 1900s. Although some people never reach this age, and others are beset with illnesses if they do, more and more people are living healthy lives well into their 90s and older. The study of the different aging processes that occur among individuals and the factors that cause these changes is known as gerontology. Geriatrics is a medical specialty concerned with the prevention, diagnosis, and treatment of diseases in the elderly.

[B]II -EFFECTS OF AGING ON THE HUMAN BODY[/B]
Several general changes take place in the human body as it ages: hearing and vision decline, muscle strength lessens, soft tissues such as skin and blood vessels become less flexible, and there is an overall decline in body tone.

Most of the body's organs perform less efficiently with advancing age. For example, the average amount of blood pumped by the heart drops from about 6.9 liters (7.3 quarts) per minute at age 20 to only 3.5 liters (3.7 quarts) pumped per minute at age 85. For this same age range, the average amount of blood flowing through the kidneys drops from approximately 0.6 liters (0.6 quarts) per minute to 0.3 liters (0.3 quarts). Not all people experience decreased organ function to the same degree—some individuals have healthier hearts and kidneys at age 85 than others do at age 50.

The immune system also changes with age. A healthy immune system protects the body against bacteria, viruses, and other harmful agents by producing disease-fighting proteins known as antibodies. A healthy immune system also prevents the growth of abnormal cells, which can become cancerous. With advancing age, the ability of the immune system to carry out these protective functions is diminished—the rate of antibody production may drop by as much as 80 percent between age 20 and age 85. This less-effective immune system explains why a bout of influenza, which may make a young adult sick for a few days, can be fatal for an elderly person. Thus, it is as important for an older person to be vaccinated against the flu and pneumonia as it is for young people to be vaccinated against childhood diseases.

Most of the glands of the endocrine system, the organs that secrete hormones regulating such functions as metabolism, temperature, and blood sugar levels, retain their ability to function into advanced age. However, these glands often become less sensitive to the triggers that direct hormone secretion. In the aging pancreas, for example, higher blood sugar levels are required to stimulate the release of insulin, a hormone that helps the muscles convert blood sugar to energy.

The ovaries and the testes, the endocrine glands that regulate many aspects of sexual reproduction, alter during the aging process. As a man ages, the testes produce less of the male sex hormone, testosterone. A woman's ovaries undergo marked changes from about age 45 to age 55 during a process known as menopause. The ovaries no longer release egg cells, and they no longer generate the hormones that stimulate monthly menstrual cycles. After women have gone through menopause, they are no longer capable of having children without the aid of reproductive technology. The physical changes associated with aging do not have a significant impact on sexual activity—most healthy people maintain an interest in sex all of their lives.

[B]III -THE EFFECTS OF AGING ON THE MIND[/B]
One of the myths of aging is that intelligence diminishes with age. Early studies that used intelligence tests designed for children revealed that older people scored lower than young adults. However, these tests relied heavily on skills commonly used in school classrooms, such as arithmetic, and required the test to be completed within a specific time limit. Older people may require more time to answer questions, and more recent studies based on untimed tests and other measures of intellectual activity, such as problem solving and concept formation, show that there is relatively little decline in mental ability in healthy people at least up to age 70.

The aging brain does undergo a progressive loss of neurons, or nerve cells, but these losses represent only a small percentage of neurons in the brain. The speed of conduction of a nerve impulse declines with age, but it drops only about 15 percent over the age span from 30 to 85 years. Although intelligence is generally not affected by the aging process, studies show that some older people may find it difficult to deal with many stimuli at once. For example, an older individual requires more time to sort out all of the information when many highway signs come into view simultaneously.

Traveling at 97 km/h (60 mph), an elderly driver may miss the information he or she needs or may act on the wrong information. But if older individuals recognize this limitation and adjust their behavior accordingly, they can continue driving safely well into old age.

Many older people experience problems with memory, and up to 10 percent of the elderly have memory problems significant enough to interfere with their ability to function independently. Memory problems were once considered an inevitable effect of the aging process, but researchers have determined that many of the brain-related changes often observed in elderly people, including memory loss, are actually a result of such diseases as Alzheimer’s disease and diseases associated with blood vessels and blood flow in the brain, such as stroke. Memory loss is sometimes treatable, and certain memory-aiding strategies have been found to help reverse the short-term memory loss experienced by many older people.

Another myth about aging is that people tend to grow sour and mean-spirited with age. Research shows that personalities really do not change much over time. A mean-spirited, grumpy old person was probably that way when he or she was 30. And, as humans age, most still like to do the things they did when they were young. For example, those who were athletic in their youth may continue to enjoy athletic activities as they age.

An older person's social environment, however, can have a marked impact on personality. The social isolation that often exists among older people can dramatically influence mental attitudes and behavior. In the United States, 33 percent of all older people live alone, most of them widowed women over the age of 85. About 5 percent of elderly Americans live in some type of long-term care facility, and almost 25 percent of all older Americans live under or near the federal poverty level. These people have little or no money for recreational activities. This poverty and isolation often lead to clinical depression and other problems, such as alcoholism.

[B]IV -CAUSES OF AGING [/B]
Although the exact causes of aging remain unknown, scientists are learning a great deal about the aging process and the mechanisms that drive it. Some of the most promising research on the aging process focuses on the microscopic changes that occur in all living cells as organisms age. In 1965 American microbiologist Leonard Hayflick observed that under laboratory conditions, human cells can duplicate up to 50 times before they stop. Hayflick also noted that when cells stop normal cell division (see Mitosis), they start to age, or senesce. Since Hayflick’s groundbreaking observations, scientists have been searching for the underlying cause, known as the senescent factor (SF), of why cells stop dividing and thus age.

Different theories have been proposed to explain how SF works. One theory is based on the assumption that aging, and diseases that occur more frequently with advancing age, are caused by structural damage to cells. This damage accumulates in tiny amounts each time the cell divides, eventually preventing the cell from carrying out normal functions.

One cause of this damage may be free radicals, which are chemical compounds found in the environment and also generated by normal chemical reactions in the body. Free radicals contain unpaired electrons, which make them highly reactive. In an effort to pair these electrons, free radicals constantly bombard cells and steal electrons from them in a process called oxidation. Free radicals are thought to greatly increase the severity of—or perhaps even cause—such life-shortening diseases as diabetes mellitus, strokes, and heart attacks. Researchers have observed that free radicals exist in smaller amounts in species with relatively long life spans. Increasing the human life span may depend on our ability to prevent free radical damage, and scientists are currently examining the role in the aging process of chemical compounds, called antioxidants, that prevent or reverse oxidative damage.

Another theory suggests that SF is genetically regulated—that is, cells are genetically programmed to carry out about 50 cell divisions and then die. Researchers have identified at least three genes that are involved with human cellular senescence. They have also discovered a protein on the surface membranes of senescent cells that inhibits production of deoxyribonucleic acid (DNA), the essential molecule that carries all genetic information.

Another theory proposes that extra, useless bits of DNA accumulate over time within a cell's nucleus. Eventually this so-called junk DNA builds up to levels that clog normal cell action. If this idea is correct, scientists may be able to find ways to prevent accumulation of junk DNA, thereby slowing down the process of senescence in cells.

Other studies focus on cell division limits. Each time a cell divides, it duplicates its DNA, and in each division the sections at the ends of DNA, called the telomeres, are gradually depleted, or shortened. Eventually the telomeres become so depleted that normal cell division halts, typically within 50 cell divisions. Scientists have found that an enzyme produced by the human body, called telomerase, can prolong the life of the telomeres, thus extending the number of cell divisions. In laboratory studies, cells injected with telomerase continue to divide well beyond the normal limit of 50 cell divisions. These promising results have triggered worldwide attention on telomerase and its relationship to aging.
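The telomere mechanism described above can be caricatured as a simple countdown: each division trims the telomere, and division stops when it is used up, unless telomerase restores the loss. All the numbers in the sketch below (starting length, amount trimmed per division) are hypothetical, chosen only so that the countdown stops near the roughly 50 divisions mentioned in the text.

[CODE]
# Toy model of telomere shortening and the division limit (all numbers hypothetical).
def divisions_before_senescence(telomere_units=100, loss_per_division=2, telomerase=False):
    """Count cell divisions until the telomere is depleted."""
    divisions = 0
    while telomere_units > 0:
        divisions += 1
        if not telomerase:
            telomere_units -= loss_per_division   # telomerase would restore this loss
        if divisions > 10_000:                    # guard: with telomerase, division never stops
            return divisions
    return divisions

print(divisions_before_senescence())                 # 50 divisions with these toy numbers
print(divisions_before_senescence(telomerase=True))  # exceeds the guard: no senescence limit
[/CODE]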

A number of other studies are underway to investigate the effects of aging. Scientists have found, for example, a possible explanation for why women have longer average life spans than men. The difference seems to be biologically determined, and male and female sex hormones are probably responsible. The blood levels of female sex hormones drop sharply during menopause. At that time, the incidence of heart disease and high blood pressure in women increases to match the incidence in men, suggesting that the presence of female sex hormones offers some protection against heart disease.

[B]V -AGING POPULATIONS[/B]
In developed nations, life expectancy has increased more in the 20th century than it has in all of recorded history. A person born in the United States in 1995 can expect to live more than 35 years longer than a person born in 1900. Today more than 34 million Americans are 65 or older, accounting for about 13 percent of the population. By the year 2030, their numbers will more than double: One in every five Americans will be over age 65.

A person who lives 100 years or more—a centenarian—was once a rarity, but today about 60,000 Americans are 100 years or older. By the year 2060, there may be as many as 2.5 million centenarians in the United States. Supercentenarians—people 105 years of age and older—will probably be as commonplace in the next century as centenarians are fast becoming now.

In some parts of the world, 16 to 18 percent of the population is already age 65 or older. By the year 2025, Japan is expected to have twice as many old people as children. Also by that time, there will be more than one billion older people worldwide. This increase in life expectancy is the result of better public health measures, improvements in living conditions, and advances in medical care. A marked reduction in infant mortality rates has also contributed to increased life expectancy statistics.

Aging populations are expected to have profound effects on the way societies care for their elderly members. With a larger proportion of the population over age 65, medical care must become better equipped to deal with the disorders and diseases of the elderly. All health care professionals should have special training in geriatrics. As the percentage of older people in the population exceeds the percentage of young working people, traditional methods for caring for older people may need to be modified. For example, in the United States, workers pay taxes throughout their careers so that when they retire, usually around the age of 65, they can receive money from the federal government to survive. This system, called Social Security, may be in jeopardy as the percentage of retired people increases, placing inordinate demands on the smaller number of people working and supporting them.

In many parts of the world, including the United States, older people who cannot work and have health problems live in long-term care facilities such as nursing homes, where they receive care 24 hours a day. But many families are unable to bear the costs of nursing homes and medical care for the elderly, and health insurance is unable to cover the expense. Other countries face similar problems, and multinational efforts are underway to explore new methods to finance the care of the world’s older persons, soon to number one billion.

prissygirl Tuesday, November 13, 2007 02:51 PM

structure of DNA and its bonding
 
3 Attachment(s)
[B][I]the structure of DNA according to the Watson and Crick model and the bonding within the nucleotides and between the two strands of DNA[/I][/B].

It should be kept in mind that the guanine of one strand is bonded through hydrogen bonds with the cytosine of the complementary strand, and adenine with thymine. There are two hydrogen bonds between adenine and thymine and three hydrogen bonds between guanine and cytosine.

the nitrogenous bases are categorised as[LIST][*]double-ringed purines, including adenine and guanine[*]single-ringed pyrimidines, including thymine and cytosine.[/LIST]the two strands of DNA are antiparallel to each other.
regards

prissygirl Tuesday, November 13, 2007 03:07 PM

transcription Of mRNA
 
2 Attachment(s)
The formation of single-stranded RNA from one of the two strands of DNA, for the purpose of protein synthesis, is called transcription.

Predator Tuesday, November 13, 2007 03:20 PM

Ecosystem
 
[B]Ecosystem[/B]

[B]I -INTRODUCTION[/B]
Ecosystem, organisms living in a particular environment, such as a forest or a coral reef, and the physical parts of the environment that affect them. The term ecosystem was coined in 1935 by the British ecologist Sir Arthur George Tansley, who described natural systems in “constant interchange” among their living and nonliving parts.

The ecosystem concept fits into an ordered view of nature that was developed by scientists to simplify the study of the relationships between organisms and their physical environment, a field known as ecology. At the top of the hierarchy is the planet’s entire living environment, known as the biosphere. Within this biosphere are several large categories of living communities known as biomes that are usually characterized by their dominant vegetation, such as grasslands, tropical forests, or deserts. The biomes are in turn made up of ecosystems. The living, or biotic, parts of an ecosystem, such as the plants, animals, and bacteria found in soil, are known as a community. The physical surroundings, or abiotic components, such as the minerals found in the soil, are known as the environment or habitat.
Any given place may have several different ecosystems that vary in size and complexity. A tropical island, for example, may have a rain forest ecosystem that covers hundreds of square miles, a mangrove swamp ecosystem along the coast, and an underwater coral reef ecosystem. No matter how the size or complexity of an ecosystem is characterized, all ecosystems exhibit a constant exchange of matter and energy between the biotic and abiotic community. Ecosystem components are so interconnected that a change in any one component of an ecosystem will cause subsequent changes throughout the system.

[B]II -HOW ECOSYSTEMS WORK[/B]
The living portion of an ecosystem is best described in terms of feeding levels known as trophic levels. Green plants make up the first trophic level and are known as primary producers. Plants are able to convert energy from the sun into food in a process known as photosynthesis. In the second trophic level, the primary consumers—known as herbivores—are animals and insects that obtain their energy solely by eating the green plants. The third trophic level is composed of the secondary consumers, flesh-eating or carnivorous animals that feed on herbivores. At the fourth level are the tertiary consumers, carnivores that feed on other carnivores. Finally, the fifth trophic level consists of the decomposers, organisms such as fungi and bacteria that break down dead or dying matter into nutrients that can be used again.

Some or all of these trophic levels combine to form what is known as a food web, the ecosystem’s mechanism for circulating and recycling energy and materials. For example, in an aquatic ecosystem algae and other aquatic plants use sunlight to produce energy in the form of carbohydrates. Primary consumers such as insects and small fish may feed on some of this plant matter, and are in turn eaten by secondary consumers, such as salmon. A brown bear may play the role of the tertiary consumer by catching and eating salmon. Bacteria and fungi may then feed upon and decompose the salmon carcass left behind by the bear, enabling the valuable nonliving components of the ecosystem, such as chemical nutrients, to leach back into the soil and water, where they can be absorbed by the roots of plants. In this way nutrients and the energy that green plants derive from sunlight are efficiently transferred and recycled throughout the ecosystem.
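The aquatic food web described in this paragraph can be written down as a small directed graph of who eats whom, which makes the flow of energy from producers to decomposers easy to trace. The sketch below encodes exactly the example in the text (algae, insects and small fish, salmon, bear, decomposers); the graph representation itself is just an illustrative choice.

[CODE]
# The example aquatic food web from the text as a directed graph: eater -> foods.
food_web = {
    "insects and small fish": ["algae and aquatic plants"],   # primary consumers
    "salmon": ["insects and small fish"],                     # secondary consumer
    "brown bear": ["salmon"],                                 # tertiary consumer
    "bacteria and fungi": ["salmon carcass"],                 # decomposers recycle nutrients
}

# Trace the chain of energy back from the bear to the primary producers.
def energy_sources(consumer, web, depth=0):
    for food in web.get(consumer, []):
        print("  " * depth + f"{consumer} <- {food}")
        energy_sources(food, web, depth + 1)

energy_sources("brown bear", food_web)
[/CODE]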

In addition to the exchange of energy, ecosystems are characterized by several other cycles. Elements such as carbon and nitrogen travel throughout the biotic and abiotic components of an ecosystem in processes known as nutrient cycles. For example, nitrogen traveling in the air may be snatched by a tree-dwelling, or epiphytic, lichen that converts it to a form useful to plants. When rain drips through the lichen and falls to the ground, or the lichen itself falls to the forest floor, the nitrogen from the raindrops or the lichen is leached into the soil to be used by plants and trees. Another process important to ecosystems is the water cycle, the movement of water from ocean to atmosphere to land and eventually back to the ocean. An ecosystem such as a forest or wetland plays a significant role in this cycle by storing, releasing, or filtering the water as it passes through the system.

Every ecosystem is also characterized by a disturbance cycle, a regular cycle of events such as fires, storms, floods, and landslides that keeps the ecosystem in a constant state of change and adaptation. Some species even depend on the disturbance cycle for survival or reproduction. For example, longleaf pine forests depend on frequent low-intensity fires for reproduction. The cones of the trees, which contain the reproductive structures, are sealed shut with a resin that melts away to release the seeds only under high heat.

[B]III -ECOSYSTEM MANAGEMENT[/B]
Humans benefit from these smooth-functioning ecosystems in many ways. Healthy forests, streams, and wetlands contribute to clean air and clean water by trapping fast-moving air and water, enabling impurities to settle out or be converted to harmless compounds by plants or soil. The diversity of organisms, or biodiversity, in an ecosystem provides essential foods, medicines, and other materials. But as human populations increase and their encroachment on natural habitats expands, humans are having detrimental effects on the very ecosystems on which they depend. The survival of natural ecosystems around the world is threatened by many human activities: bulldozing wetlands and clear-cutting forests—the systematic cutting of all trees in a specific area—to make room for new housing and agricultural land; damming rivers to harness the energy for electricity and water for irrigation; and polluting the air, soil, and water.

Many organizations and government agencies have adopted a new approach to managing natural resources—naturally occurring materials that have economic or cultural value, such as commercial fisheries, timber, and water—in order to prevent their catastrophic depletion. This strategy, known as ecosystem management, treats resources as interdependent ecosystems rather than simply commodities to be extracted. Using advances in the study of ecology to protect the biodiversity of an ecosystem, ecosystem management encourages practices that enable humans to obtain necessary resources using methods that protect the whole ecosystem. Because regional economic prosperity may be linked to ecosystem health, the needs of the human community are also considered.

Ecosystem management often requires special measures to protect threatened or endangered species that play key roles in the ecosystem. In the commercial shrimp trawling industry, for example, ecosystem management techniques protect loggerhead sea turtles. In the last thirty years, populations of loggerhead turtles on the southeastern coasts of the United States have been declining at alarming rates due to beach development and the ensuing erosion, bright lights, and traffic, which make it nearly impossible for female turtles to build nests on beaches. At sea, loggerheads are threatened by oil spills and plastic debris, offshore dredging, injury from boat propellers, and getting caught in fishing nets and equipment. In 1978 the species was listed as threatened under the Endangered Species Act.

When scientists learned that commercial shrimp trawling nets were trapping and killing between 5,000 and 50,000 loggerhead sea turtles a year, they developed a large metal grid called a Turtle Excluder Device (TED) that fits into the trawl net, preventing 97 percent of trawl-related loggerhead turtle deaths while only minimally reducing the commercial shrimp harvest. In 1992 the National Marine Fisheries Service (NMFS) implemented regulations requiring commercial shrimp trawlers to use TEDs, effectively balancing the commercial demand for shrimp with the health and vitality of the loggerhead sea turtle population.

prissygirl Tuesday, November 13, 2007 03:22 PM

translation/protein synthesis
 
2 Attachment(s)
Simply put, in protein synthesis the mRNA enters the cytoplasm, where ribosomes attach to it, read the genetic code, and translate it into a chain of amino acids, resulting in the formation of polypeptides and proteins.
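
As a rough illustration of what "reading the genetic code" means, the sketch below translates an mRNA string three bases (one codon) at a time using a few entries from the standard codon table. The short sequence and the tiny table are made up for the example.

[CODE]
# Minimal mRNA translation sketch: read codons (3 bases) and map them to
# amino acids until a stop codon. Only a handful of codons are included.
CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GCU": "Ala",
    "UGG": "Trp",
    "UAA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):              # step codon by codon
        amino_acid = CODON_TABLE.get(mrna[i:i+3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("AUGUUUGCUUGGUAA"))   # Met-Phe-Ala-Trp
[/CODE]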

prissygirl Tuesday, November 13, 2007 03:40 PM

replication Of DNA
 
2 Attachment(s)
Before cell division, the single molecule of double-stranded DNA replicates and gives rise to two daughter molecules. In this process each strand acts as a template and synthesizes its complementary strand, so each newly formed DNA molecule has one parental strand and one newly formed (daughter) strand.
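
The "each strand acts as a template" idea can be shown in a few lines of code: given one strand, the complementary strand is built by pairing A with T and G with C. The sequence below is an arbitrary example.

[CODE]
# Build the complementary DNA strand from a template strand using
# Watson-Crick base pairing: A pairs with T, G pairs with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template):
    return "".join(PAIR[base] for base in template)

template = "ATGCCGTAA"                   # arbitrary example strand
print(complementary_strand(template))    # TACGGCATT
# In replication, each daughter molecule keeps the old (template) strand
# and gains one newly synthesized complementary strand.
[/CODE]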

prissygirl Tuesday, November 13, 2007 03:44 PM

Red blood cells
 
1 Attachment(s)
The red color of blood is due to the presence of a protein called haemoglobin in specialised blood cells called red blood cells. These are biconcave, disc-shaped cells which are non-nucleated.

prissygirl Tuesday, November 13, 2007 04:00 PM

White blood cells
 
2 Attachment(s)
White blood cells, or leukocytes, are an important part of the body's defense system.
They are divided into two groups:
Agranulocytes: their cytoplasm is clear and does not contain any granules. They include [B]lymphocytes and monocytes[/B].
Granulocytes: their cytoplasm is rich in granule-like structures, which are basically lysosomes. These lysosomes allow the white blood cell to act upon foreign agents and phagocytose (engulf) them. Examples are [B]eosinophils, basophils, and neutrophils[/B].
All these WBCs can be recognised on the basis of staining methods and the shape of their nuclei.

prissygirl Tuesday, November 13, 2007 04:10 PM

platelets
 
1 Attachment(s)
The components of blood responsible for clotting are called platelets. The activity of platelets is promoted by thrombin, a coagulation protein present in the blood.

Predator Tuesday, November 13, 2007 04:12 PM

Immunization
 
[CENTER][B][U]Immunization[/U][/B][/CENTER]

[B]I -INTRODUCTION[/B]
Immunization, also called vaccination or inoculation, a method of stimulating resistance in the human body to specific diseases using microorganisms—bacteria or viruses—that have been modified or killed. These treated microorganisms do not cause the disease, but rather trigger the body's immune system to build a defense mechanism that continuously guards against the disease. If a person immunized against a particular disease later comes into contact with the disease-causing agent, the immune system is immediately able to respond defensively.

Immunization has dramatically reduced the incidence of a number of deadly diseases. For example, a worldwide vaccination program resulted in the global eradication of smallpox in 1980, and in most developed countries immunization has essentially eliminated diphtheria, poliomyelitis, and neonatal tetanus. The number of cases of Haemophilus influenzae type b meningitis in the United States has dropped 95 percent among infants and children since 1988, when the vaccine for that disease was first introduced. In the United States, more than 90 percent of children receive all the recommended vaccinations by their second birthday. About 85 percent of Canadian children are immunized by age two.

[B]II -TYPES OF IMMUNIZATION[/B]
Scientists have developed two approaches to immunization: active immunization, which provides long-lasting immunity, and passive immunization, which gives temporary immunity. In active immunization, all or part of a disease-causing microorganism or a modified product of that microorganism is injected into the body to make the immune system respond defensively. Passive immunity is accomplished by injecting blood from an actively immunized human being or animal.

[B]A -Active Immunization[/B]
Vaccines that provide active immunization are made in a variety of ways, depending on the type of disease and the organism that causes it. The active components of the vaccinations are antigens, substances found in the disease-causing organism that the immune system recognizes as foreign. In response to the antigen, the immune system develops either antibodies or white blood cells called T lymphocytes, which are special attacker cells. Immunization mimics real infection but presents little or no risk to the recipient. Some immunizing agents provide complete protection against a disease for life. Other agents provide partial protection, meaning that the immunized person can contract the disease, but in a less severe form. These vaccines are usually considered risky for people who have a damaged immune system, such as those infected with the virus that causes acquired immunodeficiency syndrome (AIDS) or those receiving chemotherapy for cancer or organ transplantation. Without a healthy defense system to fight infection, these people may develop the disease that the vaccine is trying to prevent. Some immunizing agents require repeated inoculations—or booster shots—at specific intervals. Tetanus shots, for example, are recommended every ten years throughout life.

In order to make a vaccine that confers active immunization, scientists use an organism or part of one that has been modified so that it has a low risk of causing illness but still triggers the body’s immune defenses against disease. One type of vaccine contains live organisms that have been attenuated—that is, their virulence has been weakened. This procedure is used to protect against yellow fever, measles, smallpox, and many other viral diseases.
Immunization can also occur when a person receives an injection of killed or inactivated organisms that are relatively harmless but that still contain antigens. This type of vaccination is used to protect against diseases such as typhoid fever and, in the injectable form of the vaccine, poliomyelitis.

Some vaccines use only parts of an infectious organism that contain antigens, such as a protein cell wall or a flagellum. Known as acellular vaccines, they produce the desired immunity with a lower risk of producing potentially harmful immune reactions that may result from exposure to other parts of the organism. Acellular vaccines include the Haemophilus influenzae type B vaccine for meningitis and newer versions of the whooping cough vaccine. Scientists use genetic engineering techniques to refine this approach further by isolating a gene or genes within an infectious organism that code for a particular antigen. The subunit vaccines produced by this method cannot cause disease and are safe to use in people who have an impaired immune system. Subunit vaccines for hepatitis B and pneumococcus infection, which causes pneumonia, became available in the late 1990s.

Active immunization can also be carried out using bacterial toxins that have been treated with chemicals so that they are no longer toxic, even though their antigens remain intact. This procedure uses the toxins produced by the bacteria rather than the organisms themselves and is used in vaccinating against tetanus, botulism, and similar toxin-caused diseases.

[B]B -Passive Immunization[/B]
Passive immunization is performed without injecting any antigen. In this method, vaccines contain antibodies obtained from the blood of an actively immunized human being or animal. The antibodies last for two to three weeks, and during that time the person is protected against the disease. Although short-lived, passive immunization provides immediate protection, unlike active immunization, which can take weeks to develop. Consequently, passive immunization can be lifesaving when a person has been infected with a deadly organism.

Occasionally there are complications associated with passive immunization. Diseases such as botulism and rabies once posed a particular problem. Immune globulin (antibody-containing plasma) for these diseases was once derived from the blood serum of horses. Although this animal material was specially treated before administration to humans, serious allergic reactions were common. Today, human-derived immune globulin is more widely available and the risk of side effects is reduced.

[B]III -IMMUNIZATION RECOMMENDATIONS[/B]
More than 50 vaccines for preventable diseases are licensed in the United States. The American Academy of Pediatrics and the U.S. Public Health Service recommend a series of immunizations beginning at birth. The initial series for children is complete by the time they reach the age of two, but booster vaccines are required for certain diseases, such as diphtheria and tetanus, in order to maintain adequate protection. When new vaccines are introduced, it is uncertain how long full protection will last. Recently, for example, it was discovered that a single injection of measles vaccine, first licensed in 1963 and administered to children at the age of 15 months, did not confer protection through adolescence and young adulthood. As a result, in the 1980s a series of measles epidemics occurred on college campuses throughout the United States among students who had been vaccinated as infants. To forestall future epidemics, health authorities now recommend that a booster dose of the measles, mumps, and rubella (also known as German measles) vaccine be administered at the time a child first enters school.
Not only children but also adults can benefit from immunization. Many adults in the United States are not sufficiently protected against tetanus, diphtheria, measles, mumps, and German measles. Health authorities recommend that most adults 65 years of age and older, and those with respiratory illnesses, be immunized against influenza (yearly) and pneumococcus (once).

[B]IV -HISTORY OF IMMUNIZATION[/B]
The use of immunization to prevent disease predated the knowledge of both infection and immunology. In China in approximately 600 BC, smallpox material was inoculated through the nostrils. Inoculation of healthy people with a tiny amount of material from smallpox sores was first attempted in England in 1718 and later in America. Those who survived the inoculation became immune to smallpox. American statesman Thomas Jefferson traveled from his home in Virginia to Philadelphia, Pennsylvania, to undergo this risky procedure.

A significant breakthrough came in 1796 when British physician Edward Jenner discovered that he could immunize patients against smallpox by inoculating them with material from cowpox sores. Cowpox is a far milder disease that, unlike smallpox, carries little risk of death or disfigurement. Jenner inserted matter from cowpox sores into cuts he made on the arm of a healthy eight-year-old boy. The boy caught cowpox. However, when Jenner exposed the boy to smallpox eight weeks later, the child did not contract the disease. The vaccination with cowpox had made him immune to the smallpox virus. Today we know that the cowpox virus antigens are so similar to those of the smallpox virus that they trigger the body's defenses against both diseases.

In 1885 Louis Pasteur created the first successful vaccine against rabies for a young boy who had been bitten 14 times by a rabid dog. Over the course of ten days, Pasteur injected progressively more virulent rabies organisms into the boy, causing the boy to develop immunity in time to avert death from this disease.

Another major milestone in the use of vaccination to prevent disease occurred with the efforts of two American physician-researchers. In 1954 Jonas Salk introduced an injectable vaccine containing an inactivated virus to counter the epidemic of poliomyelitis. Subsequently, Albert Sabin made great strides in the fight against this paralyzing disease by developing an oral vaccine containing a live weakened virus. Since the introduction of the polio vaccine, the disease has been nearly eliminated in many parts of the world.
As more vaccines are developed, a new generation of combined vaccines is becoming available that will allow physicians to administer a single shot for multiple diseases. Work is also under way to develop additional orally administered vaccines and vaccines for sexually transmitted diseases.

Possible future vaccines may include, for example, one that would temporarily prevent pregnancy. Such a vaccine would still operate by stimulating the immune system to recognize and attack antigens, but in this case the antigens would be those of the hormones that are necessary for pregnancy.

Predator Tuesday, November 13, 2007 04:22 PM

Microscope
 
[CENTER][B][U]Microscope[/U][/B][/CENTER]

[B]I -INTRODUCTION[/B]
Microscope, instrument used to obtain a magnified image of minute objects or minute details of objects.

[B]II -OPTICAL MICROSCOPES[/B]
The most widely used microscopes are optical microscopes, which use visible light to create a magnified image of an object. The simplest optical microscope is the double-convex lens with a short focal length (see Optics). Double-convex lenses can magnify an object up to 15 times. The compound microscope uses two lenses, an objective lens and an ocular lens, mounted at opposite ends of a closed tube, to provide greater magnification than is possible with a single lens. The objective lens is composed of several lens elements that form an enlarged real image of the object being examined. The real image formed by the objective lens lies at the focal point of the ocular lens. Thus, the observer looking through the ocular lens sees an enlarged virtual image of the real image. The total magnification of a compound microscope is determined by the focal lengths of the two lens systems and can be more than 2000 times.
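
As a rough illustration of how the two lens systems combine, the total magnification of a compound microscope is commonly approximated as the product of the objective and ocular (eyepiece) magnifications. The lens values below are only example figures, not specifications from the text.

[CODE]
# Approximate total magnification of a compound microscope:
# total = objective magnification x ocular (eyepiece) magnification.
def total_magnification(objective, ocular):
    return objective * ocular

# Example values (illustrative): a 100x objective with a 20x eyepiece
# gives 2000x, consistent with the figure quoted above.
print(total_magnification(100, 20))   # 2000
print(total_magnification(40, 10))    # a more typical 400x setup
[/CODE]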

Optical microscopes have a firm stand with a flat stage to hold the material examined and some means for moving the microscope tube toward and away from the specimen to bring it into focus. Ordinarily, specimens are transparent and are mounted on slides—thin, rectangular pieces of clear glass that are placed on the stage for viewing. The stage has a small hole through which light can pass from a light source mounted underneath the stage—either a mirror that reflects natural light or a special electric light that directs light through the specimen.

In photomicrography, the process of taking photographs through a microscope, a camera is mounted directly above the microscope's eyepiece. Normally the camera does not contain a lens because the microscope itself acts as the lens system.

Microscopes used for research have a number of refinements to enable a complete study of the specimens. Because the image of a specimen is highly magnified and inverted, manipulating the specimen by hand is difficult. Therefore, the stages of high-powered research microscopes can be moved by micrometer screws, and in some microscopes, the stage can also be rotated. Research microscopes are also equipped with three or more objective lenses, mounted on a revolving head, so that the magnifying power of the microscope can be varied.

[B]III -SPECIAL-PURPOSE OPTICAL MICROSCOPES[/B]
Different microscopes have been developed for specialized uses. The stereoscopic microscope, two low-powered microscopes arranged to converge on a single specimen, provides a three-dimensional image.
The petrographic microscope is used to analyze igneous and metamorphic rock. A Nicol prism or other polarizing device polarizes the light that passes through the specimen. Another Nicol prism or analyzer determines the polarization of the light after it has passed through the specimen. Rotating the stage causes changes in the polarization of light that can be measured and used to identify and estimate the mineral components of the rock.

The dark-field microscope employs a hollow, extremely intense cone of light concentrated on the specimen. The field of view of the objective lens lies in the hollow, dark portion of the cone and picks up only scattered light from the object. The clear portions of the specimen appear as a dark background, and the minute objects under study glow brightly against the dark field. This form of illumination is useful for transparent, unstained biological material and for minute objects that cannot be seen in normal illumination under the microscope.

The phase microscope also illuminates the specimen with a hollow cone of light. However, the cone of light is narrower and enters the field of view of the objective lens. Within the objective lens is a ring-shaped device that reduces the intensity of the light and introduces a phase shift of a quarter of a wavelength. This illumination causes minute variations of refractive index in a transparent specimen to become visible. This type of microscope is particularly effective for studying living tissue.

A typical optical microscope cannot resolve images smaller than the wavelength of light used to illuminate the specimen. An ultraviolet microscope uses the shorter wavelengths of the ultraviolet region of the light spectrum to increase resolution or to emphasize details by selective absorption (see Ultraviolet Radiation). Glass does not transmit the shorter wavelengths of ultraviolet light, so the optics in an ultraviolet microscope are usually quartz, fluorite, or aluminized-mirror systems. Ultraviolet radiation is invisible to human eyes, so the image must be made visible through phosphorescence (see Luminescence), photography, or electronic scanning.

The near-field microscope is an advanced optical microscope that is able to resolve details slightly smaller than the wavelength of visible light. This high resolution is achieved by passing a light beam through a tiny hole at a distance from the specimen of only about half the diameter of the hole. The light is played across the specimen until an entire image is obtained.

The magnifying power of a typical optical microscope is limited by the wavelengths of visible light. Details cannot be resolved that are smaller than these wavelengths. To overcome this limitation, the scanning interferometric apertureless microscope (SIAM) was developed. SIAM uses a silicon probe with a tip one nanometer (1 billionth of a meter) wide. This probe vibrates 200,000 times a second and scatters a portion of the light passing through an observed sample. The scattered light is then recombined with the unscattered light to produce an interference pattern that reveals minute details of the sample. The SIAM can currently resolve images 6500 times smaller than conventional light microscopes.

[B]IV -ELECTRON MICROSCOPES[/B]
An electron microscope uses electrons to “illuminate” an object. Electrons have a much smaller wavelength than light, so they can resolve much smaller structures. The smallest wavelength of visible light is about 4000 angstroms (400 nanometers). The wavelength of electrons used in electron microscopes is usually about half an angstrom (50 trillionths of a meter).
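
To see why electrons resolve finer detail, the de Broglie relation gives the electron wavelength from its momentum. The sketch below uses the standard non-relativistic formula with accelerating voltages chosen purely for illustration; relativistic corrections, ignored here, become significant at the high voltages used in real instruments.

[CODE]
import math

# Non-relativistic de Broglie wavelength of an electron accelerated
# through a potential difference V: lambda = h / sqrt(2 * m_e * e * V).
h  = 6.626e-34    # Planck constant (J*s)
me = 9.109e-31    # electron mass (kg)
e  = 1.602e-19    # elementary charge (C)

def electron_wavelength_angstrom(volts):
    lam = h / math.sqrt(2 * me * e * volts)   # metres
    return lam * 1e10                          # convert to angstroms

print(electron_wavelength_angstrom(600))      # ~0.5 angstrom
print(electron_wavelength_angstrom(10000))    # ~0.12 angstrom, far below the
                                              # ~4000 angstroms of visible light
[/CODE]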

Electron microscopes have an electron gun that emits electrons, which then strike the specimen. Conventional lenses used in optical microscopes to focus visible light do not work with electrons; instead, magnetic fields (see Magnetism) are used to create “lenses” that direct and focus the electrons. Since electrons are easily scattered by air molecules, the interior of an electron microscope must be sealed at a very high vacuum. Electron microscopes also have systems that record or display the images produced by the electrons.

There are two types of electron microscopes: the transmission electron microscope (TEM), and the scanning electron microscope (SEM). In a TEM, the electron beam is directed onto the object to be magnified. Some of the electrons are absorbed or bounce off the specimen, while others pass through and form a magnified image of the specimen. The sample must be cut very thin to be used in a TEM, usually no more than a few thousand angstroms thick. A photographic plate or fluorescent screen beyond the sample records the magnified image. Transmission electron microscopes can magnify an object up to one million times. In a scanning electron microscope, a tightly focused electron beam moves over the entire sample to create a magnified image of the surface of the object in much the same way an electron beam scans an image onto the screen of a television. Electrons in the tightly focused beam might scatter directly off the sample or cause secondary electrons to be emitted from the surface of the sample. These scattered or secondary electrons are collected and counted by an electronic device. Each scanned point on the sample corresponds to a pixel on a television monitor; the more electrons the counting device detects, the brighter the pixel on the monitor is. As the electron beam scans over the entire sample, a complete image of the sample is displayed on the monitor.
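
The count-to-brightness mapping described above can be sketched in a few lines. This is a schematic illustration only, with random numbers standing in for real detector readings.

[CODE]
import random

# Schematic SEM image formation: scan a grid of points, "count" the
# scattered/secondary electrons at each point, and map the count to a
# pixel brightness (0-255). The counts here are random placeholders.
WIDTH, HEIGHT, MAX_COUNT = 8, 4, 1000

image = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        count = random.randint(0, MAX_COUNT)         # stand-in for the detector reading
        brightness = round(255 * count / MAX_COUNT)  # more electrons -> brighter pixel
        row.append(brightness)
    image.append(row)

for row in image:
    print(row)
[/CODE]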

An SEM scans the surface of the sample bit by bit, in contrast to a TEM, which looks at a relatively large area of the sample all at once. Samples scanned by an SEM do not need to be thinly sliced, as do TEM specimens, but they must be dehydrated to prevent the secondary electrons emitted from the specimen from being scattered by water molecules in the sample.
Scanning electron microscopes can magnify objects 100,000 times or more. SEMs are particularly useful because, unlike TEMs and powerful optical microscopes, they can produce detailed three-dimensional images of the surface of objects.

The scanning transmission electron microscope (STEM) combines elements of an SEM and a TEM and can resolve single atoms in a sample.
The electron probe microanalyzer, an electron microscope fitted with an X-ray spectrum analyzer, can examine the high-energy X rays emitted by the sample when it is bombarded with electrons. The identity of different atoms or molecules can be determined from their X-ray emissions, so the electron probe analyzer not only provides a magnified image of the sample, but also information about the sample's chemical composition.

[B]V -SCANNING PROBE MICROSCOPES[/B]
A scanning probe microscope uses a probe to scan the surface of a sample and provides a three-dimensional image of atoms or molecules on the surface of the object. The probe is an extremely sharp metal point that can be as narrow as a single atom at the tip.

An important type of scanning probe microscope is the scanning tunneling microscope (STM). Invented in 1981, the STM uses a quantum physics phenomenon called tunneling to provide detailed images of substances that can conduct electricity. The probe is brought to within a few angstroms of the surface of the material being viewed, and a small voltage is applied between the surface and the probe. Because the probe is so close to the surface, electrons leak, or tunnel, across the gap between the probe and surface, generating a current. The strength of the tunneling current depends on the distance between the surface and the probe. If the probe moves closer to the surface, the tunneling current increases, and if the probe moves away from the surface, the tunneling current decreases. As the scanning mechanism moves along the surface of the substance, the mechanism constantly adjusts the height of the probe to keep the tunneling current constant. By tracking these minute adjustments with many scans back and forth along the surface, a computer can create a three-dimensional representation of the surface.
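
The constant-current feedback described above can be caricatured as a simple control loop: measure the tunneling current, compare it to a setpoint, and nudge the tip up or down. The exponential current model, gain, and surface heights below are illustrative assumptions, not instrument parameters.

[CODE]
import math

# Toy constant-current STM feedback. Tunneling current falls off roughly
# exponentially with tip-surface distance; the loop adjusts tip height z
# so the current stays at the setpoint, and the recorded z values trace
# the surface profile. All numbers are illustrative.
KAPPA = 2.0          # decay constant (1/angstrom), assumed
I_SET = 1.0          # current setpoint (nA), assumed
GAIN  = 0.3          # feedback gain, assumed

def tunneling_current(gap):            # gap = tip height minus surface height
    return 10.0 * math.exp(-KAPPA * gap)

surface = [0.0, 0.2, 0.5, 0.3, 0.0, -0.2, 0.0]   # made-up surface heights
z_tip, profile = 2.0, []
for h in surface:
    for _ in range(50):                          # let the loop settle at each point
        error = tunneling_current(z_tip - h) - I_SET
        z_tip += GAIN * error                    # too much current -> retract the tip
    profile.append(round(z_tip, 2))

print(profile)   # recorded tip heights follow the surface bumps
[/CODE]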

Another type of scanning probe microscope is the atomic force microscope (AFM). The AFM does not use a tunneling current, so the sample does not need to conduct electricity. As the metal probe in an AFM moves along the surface of a sample, the electrons in the probe are repelled by the electrons of the atoms in the sample and the AFM adjusts the height of the probe to keep the force on it constant. A sensing mechanism records the up-and-down movements of the probe and feeds the data into a computer, which creates a three-dimensional image of the surface of the sample.

Predator Tuesday, November 13, 2007 05:22 PM

Energy
 
[B][U][CENTER][SIZE="3"]Energy[/SIZE][/CENTER][/U][/B]

Energy, capacity of matter to perform work as the result of its motion or its position in relation to forces acting on it. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum potential energy at the terminal points; at all intermediate positions it has both kinetic and potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are interconvertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two remains always the same.

A weight suspended from a cord has potential energy due to its position, inasmuch as it can perform work in the process of falling. An electric battery has potential energy in chemical form. A piece of magnesium has potential energy stored in chemical form that is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic mechanical energy of the moving rotor of a dynamo is changed into kinetic electrical energy by electromagnetic induction. All forms of energy tend to be transformed into heat, which is the most transient form of energy. In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses.
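
The interchange of potential and kinetic energy can be checked with a short calculation for the suspended weight mentioned above: the potential energy lost in falling (mgh) equals the kinetic energy gained (½mv²). The mass and drop height below are arbitrary example values.

[CODE]
import math

# Energy conservation for a freely falling weight (air resistance ignored).
# Potential energy lost (m*g*h) equals kinetic energy gained (1/2*m*v^2).
g = 9.81          # gravitational acceleration (m/s^2)
m = 2.0           # mass in kg (example value)
h = 5.0           # height of the drop in metres (example value)

pe_lost = m * g * h                  # 98.1 J
v = math.sqrt(2 * g * h)             # speed just before impact, ~9.9 m/s
ke_gained = 0.5 * m * v ** 2         # 98.1 J

print(pe_lost, ke_gained)            # the two values match: energy is conserved
[/CODE]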

Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At higher velocities close to that of light, as in nuclear reactions, energy and matter are interconvertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified.

Predator Wednesday, November 14, 2007 10:12 AM

Fingerprinting
 
[B][U][CENTER][SIZE="3"]Fingerprinting[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Fingerprinting, method of identification using the impression made by the minute ridge formations or patterns found on the fingertips. No two persons have exactly the same arrangement of ridge patterns, and the patterns of any one individual remain unchanged through life. To obtain a set of fingerprints, the ends of the fingers are inked and then pressed or rolled one by one on some receiving surface. Fingerprints may be classified and filed on the basis of the ridge patterns, setting up an identification system that is almost infallible.

[B]II -HISTORY[/B]
The first recorded use of fingerprints was by the ancient Assyrians and Chinese for the signing of legal documents. Probably the first modern study of fingerprints was made by the Czech physiologist Johannes Evangelista Purkinje, who in 1823 proposed a system of classification that attracted little attention. The use of fingerprints for identification purposes was proposed late in the 19th century by the British scientist Sir Francis Galton, who wrote a detailed study of fingerprints in which he presented a new classification system using prints of all ten fingers, which is the basis of identification systems still in use. In the 1890s the police in Bengal, India, under the British police official Sir Edward Richard Henry, began using fingerprints to identify criminals. As assistant commissioner of metropolitan police, Henry established the first British fingerprint files in London in 1901. Subsequently, the use of fingerprinting as a means for identifying criminals spread rapidly throughout Europe and the United States, superseding the old Bertillon system of identification by means of body measurements.

[B]III -MODERN USE[/B]
As crime-detection methods improved, law enforcement officers found that any smooth, hard surface touched by a human hand would yield fingerprints made by the oily secretion present on the skin. When these so-called latent prints were dusted with powder or chemically treated, the identifying fingerprint pattern could be seen and photographed or otherwise preserved. Today, law enforcement agencies can also use computers to digitally record fingerprints and to transmit them electronically to other agencies for comparison. By comparing fingerprints at the scene of a crime with the fingerprint record of suspected persons, officials can establish absolute proof of the presence or identity of a person.

The confusion and inefficiency caused by the establishment of many separate fingerprint archives in the United States led the federal government to set up a central agency in 1924, the Identification Division of the Federal Bureau of Investigation (FBI). This division was absorbed in 1993 by the FBI’s Criminal Justice Information Services Division, which now maintains the world’s largest fingerprint collection. Currently the FBI has a library of more than 234 million civil and criminal fingerprint cards, representing 81 million people. In 1999 the FBI began full operation of the Integrated Automated Fingerprint Identification System (IAFIS), a computerized system that stores digital images of fingerprints for more than 36 million individuals, along with each individual’s criminal history if one exists. Using IAFIS, authorities can conduct automated searches to identify people from their fingerprints and determine whether they have a criminal record. The system also gives state and local law enforcement agencies the ability to electronically transmit fingerprint information to the FBI. The implementation of IAFIS represented a breakthrough in crimefighting by reducing the time needed for fingerprint identification from weeks to minutes or hours.

Predator Wednesday, November 14, 2007 10:15 AM

Infrared Radiation
 
[B][U][CENTER][SIZE="3"]Infrared Radiation[/SIZE][/CENTER][/U][/B]

Infrared Radiation, emission of energy as electromagnetic waves in the portion of the spectrum just beyond the limit of the red portion of visible radiation (see Electromagnetic Radiation). The wavelengths of infrared radiation are shorter than those of radio waves and longer than those of light waves. They range between approximately 10^-6 and 10^-3 m (about 0.00004 and 0.04 in). Infrared radiation may be detected as heat, and instruments such as bolometers are used to detect it. See Radiation; Spectrum.
Infrared radiation is used to obtain pictures of distant objects obscured by atmospheric haze, because visible light is scattered by haze but infrared radiation is not. The detection of infrared radiation is used by astronomers to observe stars and nebulas that are invisible in ordinary light or that emit radiation in the infrared portion of the spectrum.

An opaque filter that admits only infrared radiation is used for very precise infrared photographs, but an ordinary orange or light-red filter, which will absorb blue and violet light, is usually sufficient for most infrared pictures. Developed about 1880, infrared photography has today become an important diagnostic tool in medical science as well as in agriculture and industry. Use of infrared techniques reveals pathogenic conditions that are not visible to the eye or recorded on X-ray plates. Remote sensing by means of aerial and orbital infrared photography has been used to monitor crop conditions and insect and disease damage to large agricultural areas, and to locate mineral deposits. See Aerial Survey; Satellite, Artificial. In industry, infrared spectroscopy forms an increasingly important part of metal and alloy research, and infrared photography is used to monitor the quality of products. See also Photography: Photographic Films.

Infrared devices such as those used during World War II enable sharpshooters to see their targets in total visual darkness. These instruments consist essentially of an infrared lamp that sends out a beam of infrared radiation, often referred to as black light, and a telescope receiver that picks up returned radiation from the object and converts it to a visible image.

Predator Wednesday, November 14, 2007 10:22 AM

Greenhouse Effect
 
[B][U][CENTER][SIZE="3"]Greenhouse Effect[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive.

The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally.

Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over roughly the past 100 years. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns, causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions.

[B]II -HOW THE GREENHOUSE EFFECT WORKS[/B]
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface.

About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere.

Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns back to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life.

The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases. Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (about –2°F), instead of the present average surface temperature of 15°C (59°F).
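
The roughly –19°C figure quoted above can be recovered from a standard zero-dimensional energy-balance estimate using the Stefan-Boltzmann law. The albedo and solar-constant values below are commonly used approximations, not figures from the text.

[CODE]
# Effective temperature of an Earth with no greenhouse effect: absorbed
# sunlight (1 - albedo) * S / 4 must equal emitted radiation sigma * T^4.
SIGMA  = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
S      = 1361.0     # solar constant (W/m^2), approximate
ALBEDO = 0.30       # fraction of sunlight reflected, approximate

absorbed = (1 - ALBEDO) * S / 4          # ~238 W/m^2 averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25       # ~255 K

print(round(T_eff, 1), round(T_eff - 273.15, 1))
# ~255 K, i.e. about -18 C -- close to the figure quoted above,
# versus the observed average surface temperature of about +15 C
[/CODE]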

To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life.

[B]III -TYPES OF GREENHOUSE GASES[/B]
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs).

Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface.

[B]A -Water Vapor[/B]
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentration of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), the evaporation of water from oceans, lakes, rivers, and plants increases, raising the amount of water vapor in the atmosphere.

[B]B -Carbon Dioxide[/B]
Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct.
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide.

As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue.

[B]C -Methane[/B]
Many natural processes produce methane, also known as natural gas. Decomposition of carbon-containing substances found in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels.

Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far less than carbon dioxide, and methane only stays in the atmosphere for a decade or so. But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is 20 times more efficient at trapping infrared radiation radiated from the Earth’s surface than a molecule of carbon dioxide.

[B]D -Nitrous Oxide[/B]
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide.

Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century.

[B]E -Fluorinated Compounds[/B]
Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide.

CFCs, first synthesized in 1928, were widely used in the manufacture of aerosol sprays, blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical processes such as asthma inhalers.

Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of this chemical will be phased out by the year 2030, providing scientists some time to develop a new generation of safer, effective chemicals. HFCs, which do not contain chlorine and only remain in the atmosphere for a short time, are now considered the most effective and safest substitute for CFCs.

[B]F -Other Synthetic Chemicals[/B]
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because it traps heat more effectively than all other known greenhouse gases. The gas is undisputedly produced by industrial processes, but its exact sources remain uncertain.

[B]IV -OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT[/B]
Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas, they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming.

However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century.

[B]V -UNDERSTANDING THE GREENHOUSE EFFECT[/B]
Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof.

In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.”

Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentrations of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is not too far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming.
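
Arrhenius's relationship can be illustrated with the simple logarithmic rule still used for back-of-the-envelope estimates: warming scales with the logarithm of the CO2 concentration ratio. The warming-per-doubling value and the roughly 280 ppm pre-industrial concentration used below are illustrative assumptions, not figures from the text.

[CODE]
import math

# Simple logarithmic warming rule: delta_T = S * log2(C / C0), where S is
# an assumed warming per doubling of CO2 (a hypothetical example value).
S_PER_DOUBLING = 3.0     # degrees Celsius per doubling, illustrative assumption

def warming(c_now, c_reference):
    return S_PER_DOUBLING * math.log2(c_now / c_reference)

print(round(warming(560, 280), 2))   # a full doubling -> 3.0 C by construction
print(round(warming(350, 280), 2))   # ~0.97 C for 350 ppm vs an assumed
                                     # pre-industrial level of about 280 ppm
[/CODE]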

The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory located about 3,400 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an increase of about 11 percent in only 31 years.
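
The size of that rise follows directly from the concentrations quoted above; a quick check of the arithmetic:

[CODE]
# Percentage rise in atmospheric CO2 between the start of the Mauna Loa
# record and 1988, using the concentrations quoted in the text.
start, end = 315.0, 350.0            # ppm in 1957 and 1988
increase = (end - start) / start * 100
print(round(increase, 1))            # ~11.1 percent over 31 years
[/CODE]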

As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992.

Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures. Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and some of the predictions based on these models often spark controversy within the science community. Nevertheless, the basic concept of global warming is widely accepted by most climate scientists.

[B]VI -EFFORTS TO CONTROL GREENHOUSE GASES[/B]
Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. In 1997 representatives met in Kyôto, Japan, and produced an agreement, known as the Kyôto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators are trying to develop a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. Negotiating such detailed emissions trading rules has been a contentious task for the world community since the signing of the Kyôto Protocol. A ratified agreement is still not yet in force, and ratification received a setback in 2001 when newly elected U.S. president George W. Bush renounced the treaty on the grounds that the required carbon-dioxide reductions in the United States would be too costly. He also objected that developing nations would not be bound by similar carbon-dioxide reducing obligations. However, many experts expect that as the scientific evidence about the dangers of global warming continues to mount, nations will be motivated to cooperate more effectively to reduce the risks of climate change.

Predator Wednesday, November 14, 2007 10:30 AM

Antimatter
 
[B][U][CENTER][SIZE="3"]Antimatter[/SIZE][/CENTER][/U][/B]

Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties related to electromagnetism. For example, the antimatter electron, or positron, has opposite electric charge and magnetic moment (a property that determines how it behaves in a magnetic field), but is identical in all other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass, spin, and partial decay, antiparticles are identical with their corresponding particles.

The existence of antiparticles was first proposed by the British physicist Paul Adrien Maurice Dirac, arising from his attempt to apply the techniques of relativistic mechanics (see Relativity) to quantum theory. In 1928 he developed the concept of a positively charged electron, but its actual existence was not established experimentally until 1932. The existence of other antiparticles was presumed but not confirmed until 1955, when antiprotons and antineutrons were observed in particle accelerators. Since then, the full range of antiparticles has been observed or indicated. Antimatter atoms were created for the first time in September 1995 at the European Organization for Nuclear Research (CERN). Positrons were combined with antimatter protons to produce antimatter hydrogen atoms. These atoms of antimatter existed for only forty-billionths of a second, but physicists hope future experiments will determine what differences there are between normal hydrogen and its antimatter counterpart.

A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their nonexistence, except momentarily, on earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet (see Annihilation). Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most of what is known about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
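
The energy released in annihilation follows directly from E = mc². As a rough worked example (using standard values for the electron mass and the speed of light, which are the only inputs assumed here), the short calculation below estimates the energy set free when a single electron and positron annihilate; it works out to about 1.02 MeV, carried off as a pair of gamma-ray photons.

[CODE]
m_e = 9.109e-31        # electron (and positron) mass, kg
c = 2.998e8            # speed of light, m/s
joule_per_MeV = 1.602e-13

energy = 2 * m_e * c**2            # total rest-mass energy of the pair
print(energy)                      # about 1.64e-13 J
print(energy / joule_per_MeV)      # about 1.02 MeV, shared by two gamma rays
[/CODE]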

In 1997 scientists studying data gathered by the Compton Gamma Ray Observatory (GRO) operated by the National Aeronautics and Space Administration (NASA) found that the earth’s home galaxy—the Milky Way—contains large clouds of antimatter particles. Astronomers suggest that these clouds form when high-energy events—such as the collision of neutron stars, exploding stars, or black holes—create radioactive elements that decay into matter and antimatter or heat matter enough to make it split into particles of matter and antimatter. When antimatter particles meet particles of matter, the two annihilate each other and produce a burst of gamma rays. It was these gamma rays that GRO detected.

Predator Wednesday, November 14, 2007 11:35 AM

Magma
 
[B][U][CENTER][SIZE="3"]Magma[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Magma, molten or partially molten rock beneath the earth’s surface. Magma is generated when rock deep underground melts due to the high temperatures and pressures inside the earth. Because magma is lighter than the surrounding rock, it tends to rise. As it moves upward, the magma encounters colder rock and begins to cool. If the temperature of the magma drops low enough, the magma will crystallize underground to form rock; rock that forms in this way is called intrusive, or plutonic, igneous rock, because the magma has intruded into the surrounding rock. If the crust through which the magma passes is sufficiently shallow, warm, or fractured, and if the magma is sufficiently hot and fluid, the magma will erupt at the surface of the earth, possibly forming volcanoes. Magma that erupts is called lava.

[B]II -COMPOSITION OF MAGMA[/B]
Magmas are liquids that contain a variety of melted minerals and dissolved gases. Because magmas form deep underground, however, geologists cannot directly observe and measure their original composition. This difficulty has led to controversy over the exact chemical composition of magmas. Geologists cannot simply assume it is the same as the composition of the rock in the source region. One reason for this is that the source rock may melt only partially, releasing only the minerals with the lowest melting points. For this reason, the composition of magma produced by melting 1 percent of a rock is different from the composition of magma produced by melting 20 percent of a rock. Experiments have shown that the temperature and pressure at a given location within the earth, and the amount of water present there, affect the amount of melting. Because temperature and pressure increase with depth, melting an identical source rock at different depths will produce magmas of different composition. When these considerations are combined with the fact that the composition of the source rock may differ from one geographic region to another, there is a considerable range of possible compositions for magma.

As magma moves toward the surface, the pressure and temperature decrease, which causes partial crystallization, or the formation of mineral crystals within the magma. The compositions of the minerals that crystallize differ from the initial composition of the magma because of the changes in temperature and pressure; hence the composition of the remaining liquid changes. The resultant crystals may separate from the liquid either by sinking or by a process known as filter-pressing, in which pressure compresses the liquid and causes it to move toward regions of lower pressure while leaving the crystals behind. As a result, the composition of the remaining magma is different from that of the initial magma. This process is known as magmatic differentiation, and it is the principal mechanism whereby a wide variety of magmas and rocks can be produced from a single primary magma (see Igneous Rock: Formation of Igneous Rocks).

The composition of magma can also be modified by chemical interactions with, and melting of, the rocks through which it passes on its way upward. This process is known as assimilation. Magma cannot usually supply enough heat to melt a large amount of the surrounding rock, so assimilation seldom produces a significant change in the composition of magma.

Magmas also contain dissolved gases, because gases are especially soluble (easily dissolved) in liquids when the liquids are under pressure. Magma deep underground is under thousands of atmospheres of pressure (one atmosphere is roughly the air pressure at sea level) due to the weight of the overlying rock. Gases commonly dissolved in magma are carbon dioxide, water vapor, and sulfur dioxide.

[B]III -PHYSICAL PROPERTIES OF MAGMA[/B]
The density and viscosity, or thickness, of magma are key physical factors that affect its upward passage. Most rocks expand about 10 percent when they melt, and hence most magma has a density of about 90 percent of that of the equivalent solid rock. This density difference produces sufficient buoyancy in the magma to cause it to rise toward the surface.
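
As a rough worked illustration of that buoyancy, the calculation below shows the density contrast implied by a 10 percent expansion and the upward force it exerts on each cubic meter of melt. The 2,700 kg per cubic meter figure for solid crustal rock is an assumed, typical value, not a number taken from this article.

[CODE]
g = 9.8                       # gravitational acceleration, m/s^2
rho_solid = 2700.0            # assumed density of solid crustal rock, kg/m^3
rho_melt = rho_solid / 1.10   # ~10% expansion on melting -> ~90% of solid density

delta_rho = rho_solid - rho_melt     # density contrast, kg/m^3
buoyancy_per_m3 = delta_rho * g      # net upward force on each cubic meter, N

print(round(rho_melt), "kg/m^3 melt density")    # about 2455 kg/m^3
print(round(buoyancy_per_m3), "N per m^3")       # about 2400 N
[/CODE]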

The viscosity of a fluid is a measure of its resistance to flow. The viscosity of a magma affects how quickly the magma will rise, and it determines whether crystals of significantly different density will sink rapidly enough to change the bulk composition of the magma. Viscosity also influences the rate of release of gases from the magma when pressure is released. The viscosity of magma is closely related to the magma’s chemical composition. Magma rich in silicon and poor in magnesium and iron, called felsic magma, is very viscous, or thick (see Igneous Rock: Felsic Rocks). Magma poor in silicon and rich in magnesium and iron, called mafic magma, is quite fluid (see Igneous Rock: Mafic Rocks).

[B]IV -GEOLOGICAL FEATURES FORMED BY MAGMA[/B]
Some magma reaches the surface of the earth and erupts from volcanoes or fissures before it solidifies. Other magma fails to reach the surface before solidifying. Magma that reaches the surface and is erupted, or extruded, forms extrusive igneous rock. Magma that intrudes, or pushes its way into, rocks deep underground and solidifies there forms intrusive igneous rock.
Volcanoes are cone-shaped mountains formed by the eruption of lava. Magma collects in a reservoir surrounded by rock, called a magma chamber, about 10 to 20 km (6 to 12 mi) below the volcano. A conduit known as a volcanic pipe provides a passage for the magma from the magma chamber to the volcano. As the magma rises in the conduit, the pressure of the overlying rock drops. Gases that were kept dissolved in the magma by the pressure expand and bubble out. The rapidly expanding gases propel the magma up the volcanic pipe, forcing it to the surface and producing an eruption. The same process occurs when a shaken bottle of soda is suddenly opened.

The viscosity and dissolved-gas content of the magma control the character of the eruption. Low-viscosity magmas often have a low gas content. They flow easily from volcanic conduits and result in relatively quiet eruptions. Once the magma reaches the surface, it rapidly spreads out and over the volcano. Such fluid lava creates broad, gently sloped volcanoes known as shield volcanoes because they resemble giant shields lying on the ground.
Low-viscosity lava can also flow from fissures (long cracks in the rock), forming huge lava lakes. Repeated eruptions result in formations called flood basalts. The Columbia Plateau, in the states of Washington, Oregon, and Idaho, is a flood basalt that covers nearly 200,000 sq km (about 80,000 sq mi) and is more than 4000 m (13,000 ft) thick in places.

If a low-viscosity magma contains moderate amounts of dissolved gas, the released gases can eject the magma from the top of the volcano with enough force to form a lava fountain. The blobs of lava that are ejected into the air are called pyroclasts. They accumulate around the base of the fountain, forming a cinder cone.

Medium-viscosity magmas usually contain higher amounts of gases. They tend to form stratovolcanoes. The higher amounts of gases in the magma lead to very explosive eruptions that spew out large amounts of volcanic material. Stratovolcanoes have steeper sides than shield volcanoes. They are also known as composite volcanoes because they are made up of alternating layers of lava flows and deposits of pyroclasts.

High-viscosity magmas do not extrude easily through volcanic conduits. They often have a high gas content that can cause catastrophic eruptions. Both of these properties tend to promote explosive behavior, such as occurred on May 18, 1980, at Mount Saint Helens in Washington, when about 400 m (about 1,300 ft) of rock was blasted off the summit.

Intrusive bodies of rock formed from magma are classified by their size and shape. A batholith is an intrusive body that covers more than 100 sq km (nearly 40 sq mi). Lopoliths are saucer-shaped intrusions and may be up to 100 km (60 mi) in diameter and 8 km (5 mi) thick. Laccoliths have a flat base and a domed ceiling and are usually smaller than lopoliths. Sills and dikes are sheetlike intrusions that are very thin relative to their length. They can be less than one meter (about one yard) to several hundred meters thick but can be larger; the Palisades sill in the state of New York is 300 m (1000 ft) thick and 80 km (50 mi) long. Sills are formed when magma is forced between beds of layered rock; they run parallel to the layering of the surrounding rock. Dikes are formed when magma is forced into cracks in the surrounding rock; they tend to run perpendicular to the layering of the surrounding rock.

Predator Wednesday, November 14, 2007 02:56 PM

Rain
 
[B][U][SIZE="3"][CENTER]Rain[/CENTER][/SIZE][/U][/B]

[B]I -INTRODUCTION[/B]
Rain, precipitation of liquid drops of water. Raindrops generally have a diameter greater than 0.5 mm (0.02 in). They range in size up to about 3 mm (about 0.13 in) in diameter, and their rate of fall increases with their size, up to 7.6 m (25 ft) per second. Larger drops tend to be flattened and broken into smaller drops by their rapid fall through the air. The precipitation of smaller drops, called drizzle, often severely restricts visibility but usually does not produce significant accumulations of water.

Amount or volume of rainfall is expressed as the depth of water that collects on a flat surface, and is measured in a rain gauge to the nearest 0.25 mm (0.01 in). Rainfall is classified as light if not more than 2.5 mm (0.10 in) per hr, heavy if more than 7.50 mm (more than 0.30 in) per hr, and moderate if between these limits.
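
A small helper function makes this classification concrete. The thresholds are taken directly from the figures just quoted; the function name itself is only illustrative.

[CODE]
def classify_rainfall(rate_mm_per_hr):
    """Classify rainfall intensity from an hourly rate, using the limits quoted above."""
    if rate_mm_per_hr <= 2.5:
        return "light"
    elif rate_mm_per_hr > 7.5:
        return "heavy"
    else:
        return "moderate"

print(classify_rainfall(1.0))    # light
print(classify_rainfall(5.0))    # moderate
print(classify_rainfall(12.0))   # heavy
[/CODE]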

[B]II -PROCESS OF PRECIPITATION[/B]
Air masses acquire moisture on passing over warm bodies of water, or over wet land surfaces. The moisture, or water vapor, is carried upward into the air mass by turbulence and convection (see Heat Transfer). The lifting required to cool and condense this water vapor results from several processes, and study of these processes provides a key for understanding the distribution of rainfall in various parts of the world.

The phenomenon of lifting, associated with the convergence of the trade winds (see Wind), results in a band of copious rains near the equator. This band, called the intertropical convergence zone (ITCZ), moves northward or southward with the seasons. In higher latitudes much of the lifting is associated with moving cyclones (see Cyclone), often taking the form of the ascent of warm moist air, over a mass of colder air, along an interface called a front. Lifting on a smaller scale is associated with convection in air that is heated by a warm underlying surface, giving rise to showers and thunderstorms. The heaviest rainfall over short periods of time usually comes from such storms. Air may also be lifted by being forced to rise over a land barrier, with the result that the exposed windward slopes have enhanced amounts of rain while the sheltered, or lee, slopes have little rain.

[B]III -AVERAGE RAINFALL[/B]
In the U.S. the heaviest average rainfall amounts, up to 1778 mm (70 in), are experienced in the Southeast, where air masses from the tropical Atlantic and Gulf of Mexico are lifted frequently by cyclones and by convection. Moderate annual accumulations, from 762 to 1270 mm (30 to 50 in), occur throughout the eastern U.S., and are caused by cyclones in winter and convection in summer. The central plains, being farther from sources of moisture, have smaller annual accumulations, 381 to 1016 mm (15 to 40 in), mainly from summer convective storms. The southwestern U.S. is dominated by widespread descent of air in the subtropical Pacific anticyclone; rainfall is light, less than 254 mm (less than 10 in), except in the mountainous regions. The northwestern states are affected by cyclones from the Pacific Ocean, particularly during the winter; but rainfall is moderate, especially on the westward-facing slopes of mountain ranges.

The world's heaviest average rainfall, about 10,922 mm (about 430 in) per year, occurs at Cherrapunji, in northeastern India, where moisture-laden air from the Bay of Bengal is forced to rise over the Khāsi Hills of Assam State. As much as 26,466 mm (1042 in), or 26 m (87 ft), of rain has fallen there in one year. Other extreme rainfall records include nearly 1168 mm (nearly 46 in) of rain in one day during a typhoon at Baguio, Philippines; 304.8 mm (12 in) within one hour during a thunderstorm at Holt, Missouri; and 62.7 mm (2.48 in) in a 5-minute period at Portobelo, Panama.

[B]IV -ARTIFICIAL PRECIPITATION[/B]
Despite the presence of moisture and lifting, clouds sometimes fail to precipitate rain. This circumstance has stimulated intensive study of precipitation processes, specifically of how single raindrops are produced out of the million or so minute droplets inside clouds. Two precipitation processes are recognized: (1) the evaporation of water from supercooled drops at subfreezing temperatures and its deposition onto ice crystals, which later fall into warmer layers and melt, and (2) the collection of smaller droplets by larger drops that fall at a higher speed.

Efforts to effect or stimulate these processes artificially have led to extensive weather modification operations within the last 20 years (see Meteorology). These efforts have had only limited success, since most areas with deficient rainfall are dominated by air masses that have either inadequate moisture content or inadequate elevation, or both. Nevertheless, some promising results have been realized and much research is now being conducted in order to develop more effective methods of artificial precipitation.

Predator Wednesday, November 14, 2007 04:51 PM

Acid Rain
 
[B][U][CENTER][SIZE="3"]Acid Rain[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Acid Rain, form of air pollution in which airborne acids produced by electric utility plants and other sources fall to Earth in distant regions. The corrosive nature of acid rain causes widespread damage to the environment. The problem begins with the production of sulfur dioxide and nitrogen oxides from the burning of fossil fuels, such as coal, natural gas, and oil, and from certain kinds of manufacturing. Sulfur dioxide and nitrogen oxides react with water and other chemicals in the air to form sulfuric acid, nitric acid, and other pollutants. These acid pollutants reach high into the atmosphere, travel with the wind for hundreds of miles, and eventually return to the ground by way of rain, snow, or fog, and as invisible “dry” forms.

Damage from acid rain has been widespread in eastern North America and throughout Europe, and in Japan, China, and Southeast Asia. Acid rain leaches nutrients from soils, slows the growth of trees, and makes lakes uninhabitable for fish and other wildlife. In cities, acid pollutants corrode almost everything they touch, accelerating natural wear and tear on structures such as buildings and statues. Acids combine with other chemicals to form urban smog, which attacks the lungs, causing illness and premature deaths.

[B]II -FORMATION OF ACID RAIN[/B]
The process that leads to acid rain begins with the burning of fossil fuels. Burning, or combustion, is a chemical reaction in which oxygen from the air combines with carbon, nitrogen, sulfur, and other elements in the substance being burned. The new compounds formed are gases called oxides. When sulfur and nitrogen are present in the fuel, their reaction with oxygen yields sulfur dioxide and various nitrogen oxide compounds. In the United States, 70 percent of sulfur dioxide pollution comes from power plants, especially those that burn coal. In Canada, industrial activities, including oil refining and metal smelting, account for 61 percent of sulfur dioxide pollution. Nitrogen oxides enter the atmosphere from many sources, with motor vehicles emitting the largest share—43 percent in the United States and 60 percent in Canada.
Once in the atmosphere, sulfur dioxide and nitrogen oxides undergo complex reactions with water vapor and other chemicals to yield sulfuric acid, nitric acid, and other pollutants called nitrates and sulfates. The acid compounds are carried by air currents and the wind, sometimes over long distances. When clouds or fog form in acid-laden air, they too are acidic, and so is the rain or snow that falls from them.

Acid pollutants also occur as dry particles and as gases, which may reach the ground without the help of water. When these “dry” acids are washed from ground surfaces by rain, they add to the acids in the rain itself to produce a still more corrosive solution. The combination of acid rain and dry acids is known as acid deposition.

[B]III -EFFECTS OF ACID RAIN[/B]
The acids in acid rain react chemically with any object they contact. Acids are corrosive chemicals that react with other chemicals by giving up hydrogen ions. The acidity of a substance comes from the abundance of free hydrogen ions when the substance is dissolved in water. Acidity is measured on the pH scale, which runs from 0 to 14. Acidic substances have pH values below 7—the lower the pH, the stronger, or more corrosive, the substance. Some nonacidic substances, called bases or alkalis, are like acids in reverse—they readily accept the hydrogen ions that acids give up. Bases have pH values above 7, with higher values indicating increased alkalinity. Pure water has a neutral pH of 7—it is neither acidic nor basic. Rain, snow, or fog with a pH below 5.6 is considered acid rain.
When bases mix with acids, the bases lessen the strength of an acid (see Acids and Bases). This buffering action regularly occurs in nature. Rain, snow, and fog formed in regions free of acid pollutants are slightly acidic, having a pH near 5.6. Alkaline chemicals in the environment, found in rocks, soils, lakes, and streams, regularly neutralize this precipitation. But when precipitation is highly acidic, with a pH below 5.6, naturally occurring acid buffers become depleted over time, and nature’s ability to neutralize the acids is impaired. Acid rain has been linked to widespread environmental damage, including soil and plant degradation, depleted life in lakes and streams, and erosion of human-made structures.
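
Because pH is the negative base-10 logarithm of hydrogen-ion concentration, each whole unit on the scale represents a tenfold change in acidity. The short calculation below is a generic illustration, not tied to any particular measurement: it compares ordinary rain at pH 5.6 with acid rain one unit lower, at pH 4.6.

[CODE]
def hydrogen_ion_concentration(ph):
    """Hydrogen-ion concentration in moles per liter for a given pH."""
    return 10 ** (-ph)

clean_rain = hydrogen_ion_concentration(5.6)   # unpolluted rain
acid_rain = hydrogen_ion_concentration(4.6)    # strongly acidic rain

print(round(acid_rain / clean_rain, 2))   # 10.0 -> one pH unit lower means ten times more acidic
[/CODE]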

[B]A -Soil[/B]
In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Some soils are quite alkaline and can neutralize acid deposition indefinitely; others, especially thin mountain soils derived from granite or gneiss, buffer acid only briefly.

[B]B -Trees[/B]
By removing useful nutrients from the soil, acid rain slows the growth of plants, especially trees. It also attacks trees more directly by eating holes in the waxy coating of leaves and needles, causing brown dead spots. If many such spots form, a tree loses some of its ability to make food through photosynthesis. Also, organisms that cause disease can infect the tree through its injured leaves. Once weakened, trees are more vulnerable to other stresses, such as insect infestations, drought, and cold temperatures.
Spruce and fir forests at higher elevations, where the trees literally touch the acid clouds, seem to be most at risk. Acid rain has been blamed for the decline of spruce forests on the highest ridges of the Appalachian Mountains in the eastern United States. In the Black Forest of southwestern Germany, half of the trees are damaged from acid rain and other forms of pollution.

[B]C -Agriculture[/B]
Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions, such as those in the Midwestern United States, can absorb and neutralize large amounts of acid. Mountain farms are more at risk—the thin soils in these higher elevations cannot neutralize so much acid. Farmers can prevent acid rain damage by monitoring the condition of the soil and, when necessary, adding crushed limestone to the soil to neutralize acid. If excessive amounts of nutrients have been leached out of the soil, farmers can replace them by adding nutrient-rich fertilizer.

[B]D -Surface Waters[/B]
Acid rain falls on and drains into streams, lakes, and marshes. Where there is snow cover in winter, local waters grow suddenly more acidic when the snow melts in the spring. Most natural waters are close to chemically neutral, neither acidic nor alkaline: their pH is between 6 and 8. In the northeastern United States and southeastern Canada, the water in some lakes now has a pH value of less than 5 as a result of acid rain. This means they are at least ten times more acidic than they should be. In the Adirondack Mountains of New York State, a quarter of the lakes and ponds are acidic, and many have lost their brook trout and other fish. In the middle Appalachian Mountains, over 1,300 streams are afflicted. All of Norway’s major rivers have been damaged by acid rain, severely reducing salmon and trout populations.

[B]E -Plants and Animals[/B]
The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered. Some species that live in water are very sensitive to acidity, some less so. Freshwater clams and mayfly young, for instance, begin dying when the water pH reaches 6.0. Frogs can generally survive more acidic water, but if their supply of mayflies is destroyed by acid rain, frog populations may also decline. Fish eggs of most species stop hatching at a pH of 5.0. Below a pH of 4.5, water is nearly sterile, unable to support any wildlife.

Land animals dependent on aquatic organisms are also affected. Scientists have found that populations of snails living in or near water polluted by acid rain are declining in some regions. In The Netherlands songbirds are finding fewer snails to eat. The eggs these birds lay have weakened shells because the birds are receiving less calcium from snail shells.

[B]F -Human-Made Structures[/B]
Acid rain and the dry deposition of acidic particles damage buildings, statues, automobiles, and other structures made of stone, metal, or any other material exposed to weather for long periods. The corrosive damage can be expensive and, in cities with historic buildings, tragic. Both the Parthenon in Athens, Greece, and the Taj Mahal in Agra, India, are deteriorating due to acid pollution.

[B]G -Human Health[/B]
The acidification of surface waters causes little direct harm to people. It is safe to swim in even the most acidified lakes. However, toxic substances leached from soil can pollute local water supplies. In Sweden, as many as 10,000 lakes have been polluted by mercury released from soils damaged by acid rain, and residents have been warned to avoid eating fish caught in these lakes. In the air, acids join with other chemicals to produce urban smog, which can irritate the lungs and make breathing difficult, especially for people who already have asthma, bronchitis, or other respiratory diseases. Solid particles of sulfates, a class of minerals derived from sulfur dioxide, are thought to be especially damaging to the lungs.

[B]H -Acid Rain and Global Warming[/B]
Acid pollution has one surprising effect that may be beneficial. Sulfates in the upper atmosphere reflect some sunlight out into space, and thus tend to slow down global warming. Scientists believe that acid pollution may have delayed the onset of warming by several decades in the middle of the 20th century.

[B]IV -EFFORTS TO CONTROL ACID RAIN[/B]
Acid rain can best be curtailed by reducing the amount of sulfur dioxide and nitrogen oxides released by power plants, motorized vehicles, and factories. The simplest way to cut these emissions is to use less energy from fossil fuels. Individuals can help. Every time a consumer buys an energy-efficient appliance, adds insulation to a house, or takes a bus to work, he or she conserves energy and, as a result, fights acid rain.

Another way to cut emissions of sulfur dioxide and nitrogen oxides is by switching to cleaner-burning fuels. For instance, coal can be high or low in sulfur, and some coal contains sulfur in a form that can be washed out easily before burning. By using more of the low-sulfur or cleanable types of coal, electric utility companies and other industries can pollute less. The gasoline and diesel oil that run most motor vehicles can also be formulated to burn more cleanly, producing less nitrogen oxide pollution. Clean-burning fuels such as natural gas are being used increasingly in vehicles. Natural gas contains almost no sulfur and produces very little nitrogen oxide pollution. Unfortunately, natural gas and the less-polluting coals tend to be more expensive, placing them out of the reach of nations that are struggling economically.
Pollution can also be reduced at the moment the fuel is burned. Several new kinds of burners and boilers alter the burning process to produce less nitrogen oxides and more free nitrogen, which is harmless. Limestone or sandstone added to the combustion chamber can capture some of the sulfur released by burning coal.

Once sulfur dioxide and oxides of nitrogen have been formed, there is one more chance to keep them out of the atmosphere. In smokestacks, devices called scrubbers spray a mixture of water and powdered limestone into the waste gases (flue gases), recapturing the sulfur. Pollutants can also be removed by catalytic converters. In a converter, waste gases pass over small beads coated with metals. These metals promote chemical reactions that change harmful substances to less harmful ones. In the United States and Canada, these devices are required in cars, but they are not often used in smokestacks.

Once acid rain has occurred, a few techniques can limit environmental damage. In a process known as liming, powdered limestone can be added to water or soil to neutralize the acid dropping from the sky. In Norway and Sweden, nations much afflicted with acid rain, lakes are commonly treated this way. Rural water companies may need to lime their reservoirs so that acid does not eat away water pipes. In cities, exposed surfaces vulnerable to acid rain destruction can be coated with acid-resistant paints. Delicate objects like statues can be sheltered indoors in climate-controlled rooms.
Cleaning up sulfur dioxide and nitrogen oxides will reduce not only acid rain but also smog, which will make the air look clearer. Based on a study of the value that visitors to national parks place on clear scenic vistas, the U.S. Environmental Protection Agency thinks that improving the vistas in eastern national parks alone will be worth $1 billion in tourist revenue a year.

[B]A -National Legislation[/B]
In the United States, legislative efforts to control sulfur dioxide and nitrogen oxides began with passage of the Clean Air Act of 1970. This act established emissions standards for pollutants from automobiles and industry. In 1990 Congress approved a set of amendments to the act that impose stricter limits on pollution emissions, particularly pollutants that cause acid rain. These amendments aim to cut the national output of sulfur dioxide from 23.5 million tons to 16 million tons by the year 2010. Although no national target is set for nitrogen oxides, the amendments require that power plants, which emit about one-third of all nitrogen oxides released to the atmosphere, reduce their emissions from 7.5 million tons to 5 million tons by 2010. These rules were applied first to selected large power plants in Eastern and Midwestern states. In the year 2000, smaller, cleaner power plants across the country came under the law.

These 1990 amendments include a novel provision for sulfur dioxide control. Each year the government gives companies permits to release a specified number of tons of sulfur dioxide. Polluters are allowed to buy and sell their emissions permits. For instance, a company can choose to reduce its sulfur dioxide emissions more than the law requires and sell its unused pollution emission allowance to another company that is further from meeting emission goals; the buyer may then pollute above the limit for a certain time. Unused pollution rights can also be "banked" and kept for later use. It is hoped that this flexible market system will clean up emissions more quickly and cheaply than a set of rigid rules.
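
To make the bookkeeping concrete, here is a minimal, entirely hypothetical sketch of how such an allowance market works. The company names and tonnage figures are invented for illustration and are not taken from the legislation.

[CODE]
# Hypothetical annual SO2 allowances and actual emissions, in tons
allowances = {"UtilityA": 100_000, "UtilityB": 100_000}
emissions = {"UtilityA": 80_000, "UtilityB": 115_000}

surplus_a = allowances["UtilityA"] - emissions["UtilityA"]   # 20,000 tons unused
deficit_b = emissions["UtilityB"] - allowances["UtilityB"]   # 15,000 tons over the limit

# UtilityA sells enough of its surplus to cover UtilityB's deficit and banks the rest
traded = min(surplus_a, deficit_b)
banked = surplus_a - traded

print(traded, "tons of allowances sold")     # 15000
print(banked, "tons banked for later use")   # 5000
[/CODE]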

Legislation enacted in Canada restricts the annual amount of sulfur dioxide emissions to 2.3 million tons in all of Canada’s seven easternmost provinces, where acid rain causes the most damage. A national cap on sulfur dioxide emissions has been set at 3.2 million tons per year. Legislation is currently being developed to enforce stricter emission limits by 2010.
Norwegian law sets the goal of reducing sulfur dioxide emissions to 76 percent of 1980 levels and nitrogen oxide emissions to 70 percent of 1986 levels. To encourage cleanup, Norway collects a hefty tax from industries that emit acid pollutants. In some cases these taxes make it more expensive to emit acid pollutants than to reduce emissions.

[B]B -International Agreements[/B]
Acid rain typically crosses national borders, making pollution control an international issue. Canada receives much of its acid pollution from the United States—by some estimates as much as 50 percent. Norway and Sweden receive acid pollutants from Britain, Germany, Poland, and Russia. The majority of acid pollution in Japan comes from China. Debates about responsibilities and cleanup costs for acid pollutants led to international cooperation. In 1988, as part of the Long-Range Transboundary Air Pollution Agreement sponsored by the United Nations, the United States and 24 other nations ratified a protocol promising to hold yearly nitrogen oxide emissions at or below 1987 levels. In 1991 the United States and Canada signed an Air Quality Agreement setting national limits on annual sulfur dioxide emissions from power plants and factories. In 1994 in Oslo, Norway, 12 European nations agreed to reduce sulfur dioxide emissions by as much as 87 percent by 2010.

Legislative actions to prevent acid rain are producing results. The targets established in laws and treaties are being met, usually ahead of schedule. Sulfur emissions in Europe decreased by 40 percent from 1980 to 1994. In Norway sulfur dioxide emissions fell by 75 percent during the same period. Since 1980 annual sulfur dioxide emissions in the United States have dropped from 26 million tons to 18.3 million tons. Canada reports that sulfur dioxide emissions have been reduced to 2.6 million tons, 18 percent below the proposed limit of 3.2 million tons.

Monitoring stations in several nations report that precipitation is actually becoming less acidic. In Europe, lakes and streams are now growing less acidic. However, this does not seem to be the case in the United States and Canada. The reasons are not completely understood, but apparently controls reducing nitrogen oxide emissions began only recently, and their effects have yet to make a mark. In addition, soils in some areas have absorbed so much acid that they contain no more neutralizing alkaline chemicals. The weathering of rock will gradually replace the missing alkaline chemicals, but scientists fear that improvement will be very slow unless pollution controls are made even stricter.

Predator Thursday, November 15, 2007 10:25 AM

Brain
 
[B][U][SIZE="3"][CENTER]Brain[/CENTER][/SIZE][/U][/B]

[B]I -INTRODUCTION[/B]
Brain, portion of the central nervous system contained within the skull. The brain is the control center for movement, sleep, hunger, thirst, and virtually every other vital activity necessary to survival. All human emotions—including love, hate, fear, anger, elation, and sadness—are controlled by the brain. It also receives and interprets the countless signals that are sent to it from other parts of the body and from the external environment. The brain makes us conscious, emotional, and intelligent.

[B]II -ANATOMY[/B]
The adult human brain is a 1.3-kg (3-lb) mass of pinkish-gray jellylike tissue made up of approximately 100 billion nerve cells, or neurons; neuroglia (supporting-tissue) cells; and vascular (blood-carrying) and other tissues.
Between the brain and the cranium—the part of the skull that directly covers the brain—are three protective membranes, or meninges. The outermost membrane, the dura mater, is the toughest and thickest. Below the dura mater is a middle membrane, called the arachnoid layer. The innermost membrane, the pia mater, consists mainly of small blood vessels and follows the contours of the surface of the brain.

A clear liquid, the cerebrospinal fluid, bathes the entire brain and fills a series of four cavities, called ventricles, near the center of the brain. The cerebrospinal fluid protects the internal portion of the brain from varying pressures and transports chemical substances within the nervous system.
From the outside, the brain appears as three distinct but connected parts: the cerebrum (the Latin word for brain)—two large, almost symmetrical hemispheres; the cerebellum (“little brain”)—two smaller hemispheres located at the back of the cerebrum; and the brain stem—a central core that gradually becomes the spinal cord, exiting the skull through an opening at its base called the foramen magnum. Two other major parts of the brain, the thalamus and the hypothalamus, lie in the midline above the brain stem underneath the cerebellum.

The brain and the spinal cord together make up the central nervous system, which communicates with the rest of the body through the peripheral nervous system. The peripheral nervous system consists of 12 pairs of cranial nerves extending from the cerebrum and brain stem; a system of other nerves branching throughout the body from the spinal cord; and the autonomic nervous system, which regulates vital functions not under conscious control, such as the activity of the heart muscle, smooth muscle (involuntary muscle found in the skin, blood vessels, and internal organs), and glands.

[B]A -Cerebrum[/B]
Most high-level brain functions take place in the cerebrum. Its two large hemispheres make up approximately 85 percent of the brain's weight. The exterior surface of the cerebrum, the cerebral cortex, is a convoluted, or folded, grayish layer of cell bodies known as the gray matter. The gray matter covers an underlying mass of fibers called the white matter. The convolutions are made up of ridgelike bulges, known as gyri, separated by small grooves called sulci and larger grooves called fissures. Approximately two-thirds of the cortical surface is hidden in the folds of the sulci. The extensive convolutions enable a very large surface area of brain cortex—about 1.5 sq m (16 sq ft) in an adult—to fit within the cranium. The pattern of these convolutions is similar, although not identical, in all humans.

The two cerebral hemispheres are partially separated from each other by a deep fold known as the longitudinal fissure. Communication between the two hemispheres is through several concentrated bundles of axons, called commissures, the largest of which is the corpus callosum.
Several major sulci divide the cortex into distinguishable regions. The central sulcus, or Rolandic fissure, runs from the middle of the top of each hemisphere downward, forward, and toward another major sulcus, the lateral (“side”), or Sylvian, sulcus. These and other sulci and gyri divide the cerebrum into five lobes: the frontal, parietal, temporal, and occipital lobes and the insula.

The frontal lobe is the largest of the five and consists of all the cortex in front of the central sulcus. Broca's area, a part of the cortex related to speech, is located in the frontal lobe. The parietal lobe consists of the cortex behind the central sulcus to a sulcus near the back of the cerebrum known as the parieto-occipital sulcus. The parieto-occipital sulcus, in turn, forms the front border of the occipital lobe, which is the rearmost part of the cerebrum. The temporal lobe is to the side of and below the lateral sulcus. Wernicke's area, a part of the cortex related to the understanding of language, is located in the temporal lobe. The insula lies deep within the folds of the lateral sulcus.

The cerebrum receives information from all the sense organs and sends motor commands (signals that result in activity in the muscles or glands) to other parts of the brain and the rest of the body. Motor commands are transmitted by the motor cortex, a strip of cerebral cortex extending from side to side across the top of the cerebrum just in front of the central sulcus. The sensory cortex, a parallel strip of cerebral cortex just in back of the central sulcus, receives input from the sense organs.

Many other areas of the cerebral cortex have also been mapped according to their specific functions, such as vision, hearing, speech, emotions, language, and other aspects of perceiving, thinking, and remembering. Cortical regions known as associative cortex are responsible for integrating multiple inputs, processing the information, and carrying out complex responses.

[B]B -Cerebellum[/B]
The cerebellum coordinates body movements. Located at the lower back of the brain beneath the occipital lobes, the cerebellum is divided into two lateral (side-by-side) lobes connected by a fingerlike bundle of white fibers called the vermis. The outer layer, or cortex, of the cerebellum consists of fine folds called folia. As in the cerebrum, the outer layer of cortical gray matter surrounds a deeper layer of white matter and nuclei (groups of nerve cells). Three fiber bundles called cerebellar peduncles connect the cerebellum to the three parts of the brain stem—the midbrain, the pons, and the medulla oblongata.

The cerebellum coordinates voluntary movements by fine-tuning commands from the motor cortex in the cerebrum. The cerebellum also maintains posture and balance by controlling muscle tone and sensing the position of the limbs. All motor activity, from hitting a baseball to fingering a violin, depends on the cerebellum.

[B]C -Thalamus and Hypothalamus[/B]
The thalamus and the hypothalamus lie underneath the cerebrum and connect it to the brain stem. The thalamus consists of two rounded masses of gray tissue lying within the middle of the brain, between the two cerebral hemispheres. The thalamus is the main relay station for incoming sensory signals to the cerebral cortex and for outgoing motor signals from it. All sensory input to the brain, except that of the sense of smell, connects to individual nuclei of the thalamus.

The hypothalamus lies beneath the thalamus on the midline at the base of the brain. It regulates or is involved directly in the control of many of the body's vital drives and activities, such as eating, drinking, temperature regulation, sleep, emotional behavior, and sexual activity. It also controls the function of internal body organs by means of the autonomic nervous system, interacts closely with the pituitary gland, and helps coordinate activities of the brain stem.

[B]D -Brain Stem[/B]
The brain stem is evolutionarily the most primitive part of the brain and is responsible for sustaining the basic functions of life, such as breathing and blood pressure. It includes three main structures lying between and below the two cerebral hemispheres—the midbrain, pons, and medulla oblongata.

[B]D1 -Midbrain[/B]
The topmost structure of the brain stem is the midbrain. It contains major relay stations for neurons transmitting signals to the cerebral cortex, as well as many reflex centers—pathways carrying sensory (input) information and motor (output) commands. Relay and reflex centers for visual and auditory (hearing) functions are located in the top portion of the midbrain. A pair of nuclei called the superior colliculi controls reflex actions of the eye, such as blinking, opening and closing the pupil, and focusing the lens. A second pair of nuclei, called the inferior colliculi, controls auditory reflexes, such as adjusting the ear to the volume of sound. At the bottom of the midbrain are reflex and relay centers relating to pain, temperature, and touch, as well as several regions associated with the control of movement, such as the red nucleus and the substantia nigra.

[B]D2 -Pons[/B]
Continuous with and below the midbrain and directly in front of the cerebellum is a prominent bulge in the brain stem called the pons. The pons consists of large bundles of nerve fibers that connect the two halves of the cerebellum and also connect each side of the cerebellum with the opposite-side cerebral hemisphere. The pons serves mainly as a relay station linking the cerebral cortex and the medulla oblongata.

[B]D3 -Medulla Oblongata[/B]
The long, stalklike lowermost portion of the brain stem is called the medulla oblongata. At the top, it is continuous with the pons and the midbrain; at the bottom, it makes a gradual transition into the spinal cord at the foramen magnum. Sensory and motor nerve fibers connecting the brain and the rest of the body cross over to the opposite side as they pass through the medulla. Thus, the left half of the brain communicates with the right half of the body, and the right half of the brain with the left half of the body.

[B]D4 -Reticular Formation[/B]
Running up the brain stem from the medulla oblongata through the pons and the midbrain is a netlike formation of nuclei known as the reticular formation. The reticular formation controls respiration, cardiovascular function (see Heart), digestion, levels of alertness, and patterns of sleep. It also determines which parts of the constant flow of sensory information into the body are received by the cerebrum.

[B]E -Brain Cells[/B]
There are two main types of brain cells: neurons and neuroglia. Neurons are responsible for the transmission and analysis of all electrochemical communication within the brain and other parts of the nervous system. Each neuron is composed of a cell body called a soma, a major fiber called an axon, and a system of branches called dendrites. Axons, also called nerve fibers, convey electrical signals away from the soma and can be up to 1 m (3.3 ft) in length. Most axons are covered with a protective sheath of myelin, a substance made of fats and protein, which insulates the axon. Myelinated axons conduct neuronal signals faster than do unmyelinated axons. Dendrites convey electrical signals toward the soma, are shorter than axons, and are usually multiple and branching.

Neuroglial cells are twice as numerous as neurons and account for half of the brain's weight. Neuroglia (from glia, Greek for “glue”) provide structural support to the neurons. Neuroglial cells also form myelin, guide developing neurons, take up chemicals involved in cell-to-cell communication, and contribute to the maintenance of the environment around neurons.

[B]F -Cranial Nerves[/B]
Twelve pairs of cranial nerves arise symmetrically from the base of the brain and are numbered, from front to back, in the order in which they arise. They connect mainly with structures of the head and neck, such as the eyes, ears, nose, mouth, tongue, and throat. Some are motor nerves, controlling muscle movement; some are sensory nerves, conveying information from the sense organs; and others contain fibers for both sensory and motor impulses. The first and second pairs of cranial nerves—the olfactory (smell) nerve and the optic (vision) nerve—carry sensory information from the nose and eyes, respectively, to the undersurface of the cerebral hemispheres. The other ten pairs of cranial nerves originate in or end in the brain stem.

[B]III -HOW THE BRAIN WORKS[/B]
The brain functions by complex neuronal, or nerve cell, circuits (see Neurophysiology). Communication between neurons is both electrical and chemical and always travels from the dendrites of a neuron, through its soma, and out its axon to the dendrites of another neuron.
Dendrites of one neuron receive signals from the axons of other neurons through chemicals known as neurotransmitters. The neurotransmitters set off electrical charges in the dendrites, which then carry the signals electrochemically to the soma. The soma integrates the information, which is then transmitted electrochemically down the axon to its tip.

At the tip of the axon, small, bubblelike structures called vesicles release neurotransmitters that carry the signal across the synapse, or gap, between two neurons. There are many types of neurotransmitters, including norepinephrine, dopamine, and serotonin. Neurotransmitters can be excitatory (that is, they excite an electrochemical response in the dendrite receptors) or inhibitory (they block the response of the dendrite receptors).
One neuron may communicate with thousands of other neurons, and many thousands of neurons are involved with even the simplest behavior. It is believed that these connections and their efficiency can be modified, or altered, by experience.

Scientists have used two primary approaches to studying how the brain works. One approach is to study brain function after parts of the brain have been damaged. Functions that disappear or that are no longer normal after injury to specific regions of the brain can often be associated with the damaged areas. The second approach is to study the response of the brain to direct stimulation or to stimulation of various sense organs.

Neurons are grouped by function into collections of cells called nuclei. These nuclei are connected to form sensory, motor, and other systems. Scientists can study the function of somatosensory (pain and touch), motor, olfactory, visual, auditory, language, and other systems by measuring the physiological (physical and chemical) changes that occur in the brain when these senses are activated. For example, electroencephalography (EEG) measures the electrical activity of specific groups of neurons through electrodes attached to the surface of the skull. Electrodes inserted directly into the brain can give readings of individual neurons. Changes in blood flow, glucose (sugar), or oxygen consumption in groups of active cells can also be mapped.

Although the brain appears symmetrical, how it functions is not. Each hemisphere is specialized and dominates the other in certain functions. Research has shown that hemispheric dominance is related to whether a person is predominantly right-handed or left-handed (see Handedness). In most right-handed people, the left hemisphere processes arithmetic, language, and speech. The right hemisphere interprets music, complex imagery, and spatial relationships and recognizes and expresses emotion. In left-handed people, the pattern of brain organization is more variable.

Hemispheric specialization has traditionally been studied in people who have sustained damage to the connections between the two hemispheres, as may occur with stroke, an interruption of blood flow to an area of the brain that causes the death of nerve cells in that area. The division of functions between the two hemispheres has also been studied in people who have had to have the connection between the two hemispheres surgically cut in order to control severe epilepsy, a neurological disease characterized by convulsions and loss of consciousness.

[B]A -Vision[/B]
The visual system of humans is one of the most advanced sensory systems in the body (see Vision). More information is conveyed visually than by any other means. In addition to the structures of the eye itself, several cortical regions—collectively called primary visual and visual associative cortex—as well as the midbrain are involved in the visual system. Conscious processing of visual input occurs in the primary visual cortex, but reflexive—that is, immediate and unconscious—responses occur at the superior colliculus in the midbrain. Associative cortical regions—specialized regions that can associate, or integrate, multiple inputs—in the parietal and frontal lobes along with parts of the temporal lobe are also involved in the processing of visual information and the establishment of visual memories.

[B]B -Language[/B]
Language involves specialized cortical regions in a complex interaction that allows the brain to comprehend and communicate abstract ideas. The motor cortex initiates impulses that travel through the brain stem to produce audible sounds. Neighboring regions of motor cortex, called the supplemental motor cortex, are involved in sequencing and coordinating sounds. Broca's area of the frontal lobe is responsible for the sequencing of language elements for output. The comprehension of language is dependent upon Wernicke's area of the temporal lobe. Other cortical circuits connect these areas.

[B]C -Memory[/B]
Memory is usually considered a diffusely stored associative process—that is, it puts together information from many different sources. Although research has failed to identify specific sites in the brain as locations of individual memories, certain brain areas are critical for memory to function. Immediate recall—the ability to repeat short series of words or numbers immediately after hearing them—is thought to be located in the auditory associative cortex. Short-term memory—the ability to retain a limited amount of information for up to an hour—is located in the deep temporal lobe. Long-term memory probably involves exchanges between the medial temporal lobe, various cortical regions, and the midbrain.

[B]D -The Autonomic Nervous System[/B]
The autonomic nervous system regulates the life support systems of the body reflexively—that is, without conscious direction. It automatically controls the muscles of the heart, digestive system, and lungs; certain glands; and homeostasis—that is, the equilibrium of the internal environment of the body (see Physiology). The autonomic nervous system itself is controlled by nerve centers in the spinal cord and brain stem and is fine-tuned by regions higher in the brain, such as the midbrain and cortex. Reactions such as blushing indicate that cognitive, or thinking, centers of the brain are also involved in autonomic responses.

[B]IV -BRAIN DISORDERS[/B]
The brain is guarded by several highly developed protective mechanisms. The bony cranium, the surrounding meninges, and the cerebrospinal fluid all contribute to the mechanical protection of the brain. In addition, a filtration system called the blood-brain barrier protects the brain from exposure to potentially harmful substances carried in the bloodstream.
Brain disorders have a wide range of causes, including head injury, stroke, bacterial diseases, complex chemical imbalances, and changes associated with aging.

[B]A -Head Injury[/B]
Head injury can initiate a cascade of damaging events. After a blow to the head, a person may be stunned or may become unconscious for a moment.
This injury, called a concussion, usually leaves no permanent damage. If the blow is more severe and hemorrhage (excessive bleeding) and swelling occur, however, severe headache, dizziness, paralysis, a convulsion, or temporary blindness may result, depending on the area of the brain affected. Damage to the cerebrum can also result in profound personality changes.
Damage to Broca's area in the frontal lobe causes difficulty in speaking and writing, a problem known as Broca's aphasia. Injury to Wernicke's area in the left temporal lobe results in an inability to comprehend spoken language, called Wernicke's aphasia.

An injury or disturbance to a part of the hypothalamus may cause a variety of different symptoms, such as loss of appetite with an extreme drop in body weight; increase in appetite leading to obesity; extraordinary thirst with excessive urination (diabetes insipidus); failure in body-temperature control, resulting in either low temperature (hypothermia) or high temperature (fever); excessive emotionality; and uncontrolled anger or aggression. If the relationship between the hypothalamus and the pituitary gland is damaged (see Endocrine System), other vital bodily functions may be disturbed, such as sexual function, metabolism, and cardiovascular activity.
Injury to the brain stem is even more serious because it houses the nerve centers that control breathing and heart action. Damage to the medulla oblongata usually results in immediate death.

[B]B -Stroke[/B]
A stroke is damage to the brain due to an interruption in blood flow. The interruption may be caused by a blood clot (see Embolism; Thrombosis), constriction of a blood vessel, or rupture of a vessel accompanied by bleeding. A pouchlike expansion of the wall of a blood vessel, called an aneurysm, may weaken and burst, for example, because of high blood pressure.

Sufficient quantities of glucose and oxygen, transported through the bloodstream, are needed to keep nerve cells alive. When the blood supply to a small part of the brain is interrupted, the cells in that area die and the function of the area is lost. A massive stroke can cause a one-sided paralysis (hemiplegia) and sensory loss on the side of the body opposite the hemisphere damaged by the stroke.

[B]C -Brain Diseases[/B]
Epilepsy is a broad term for a variety of brain disorders characterized by seizures, or convulsions. Epilepsy can result from a direct injury to the brain at birth or from a metabolic disturbance in the brain at any time later in life.
Some brain diseases, such as multiple sclerosis and Parkinson disease, are progressive, becoming worse over time. Multiple sclerosis damages the myelin sheath around axons in the brain and spinal cord. As a result, the affected axons cannot transmit nerve impulses properly. Parkinson disease destroys the cells of the substantia nigra in the midbrain, resulting in a deficiency in the neurotransmitter dopamine that affects motor functions.

Cerebral palsy is a broad term for brain damage sustained close to birth that permanently affects motor function. The damage may take place in the developing fetus, during birth, or just after birth, and is the result of faulty development or the breaking down of motor pathways. Cerebral palsy is nonprogressive—that is, it does not worsen with time.
A bacterial infection in the cerebrum (see Encephalitis) or in the coverings of the brain (see Meningitis), swelling of the brain (see Edema), or an abnormal growth of healthy brain tissue (see Tumor) can all cause an increase in intracranial pressure and result in serious damage to the brain.

Scientists are finding that certain brain chemical imbalances are associated with mental disorders such as schizophrenia and depression. Such findings have changed scientific understanding of mental health and have resulted in new treatments that chemically correct these imbalances.
During childhood development, the brain is particularly susceptible to damage because of the rapid growth and reorganization of nerve connections. Problems that originate in the immature brain can appear as epilepsy or other brain-function problems in adulthood.

Several neurological problems are common in aging. Alzheimer's disease damages many areas of the brain, including the frontal, temporal, and parietal lobes. The brain tissue of people with Alzheimer's disease shows characteristic patterns of damaged neurons, known as plaques and tangles. Alzheimer's disease produces a progressive dementia (see Senile Dementia), characterized by symptoms such as failing attention and memory, loss of mathematical ability, irritability, and poor orientation in space and time.

[B]V -BRAIN IMAGING[/B]
Several commonly used diagnostic methods give images of the brain without invading the skull. Some portray anatomy—that is, the structure of the brain—whereas others measure brain function. Two or more methods may be used to complement each other, together providing a more complete picture than would be possible by one method alone.
Magnetic resonance imaging (MRI), introduced in the early 1980s, beams high-frequency radio waves into the brain in a highly magnetized field that causes the protons that form the nuclei of hydrogen atoms in the brain to reemit the radio waves. The reemitted radio waves are analyzed by computer to create thin cross-sectional images of the brain. MRI provides the most detailed images of the brain and is safer than imaging methods that use X rays. However, MRI is a lengthy process and also cannot be used with people who have pacemakers or metal implants, both of which are adversely affected by the magnetic field.

Computed tomography (CT), commonly known as the CT scan, was developed in the early 1970s. This imaging method X-rays the brain from many different angles, feeding the information into a computer that produces a series of cross-sectional images. CT is particularly useful for diagnosing blood clots and brain tumors. It is a much quicker process than magnetic resonance imaging and is therefore advantageous in certain situations—for example, with people who are extremely ill.

Changes in brain function due to brain disorders can be visualized in several ways. Magnetic resonance spectroscopy measures the concentration of specific chemical compounds in the brain that may change during specific behaviors. Functional magnetic resonance imaging (fMRI) maps changes in oxygen concentration that correspond to nerve cell activity.

Positron emission tomography (PET), developed in the mid-1970s, uses computed tomography to visualize radioactive tracers (see Isotopic Tracer), radioactive substances introduced into the brain intravenously or by inhalation. PET can measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. Single photon emission computed tomography (SPECT), developed in the 1950s and 1960s, uses radioactive tracers to visualize the circulation and volume of blood in the brain.

Brain-imaging studies have provided new insights into sensory, motor, language, and memory processes, as well as brain disorders such as epilepsy; cerebrovascular disease; Alzheimer's, Parkinson, and Huntington's diseases (see Chorea); and various mental disorders, such as schizophrenia.

[B]VI -EVOLUTION OF THE BRAIN[/B]
In lower vertebrates, such as fish and reptiles, the brain is often tubular and bears a striking resemblance to the early embryonic stages of the brains of more highly evolved animals. In all vertebrates, the brain is divided into three regions: the forebrain (prosencephalon), the midbrain (mesencephalon), and the hindbrain (rhombencephalon). These three regions further subdivide into different structures, systems, nuclei, and layers.

The more highly evolved the animal, the more complex is the brain structure. Human beings have the most complex brains of all animals. Evolutionary forces have also resulted in a progressive increase in the size of the brain. In vertebrates lower than mammals, the brain is small. In meat-eating animals, and particularly in primates, the brain increases dramatically in size.

The cerebrum and cerebellum of higher mammals are highly convoluted in order to fit the most gray matter surface within the confines of the cranium. Such highly convoluted brains are called gyrencephalic. Many lower mammals have a smooth, or lissencephalic (“smooth brain”), cortical surface.

There is also evidence of evolutionary adaptation of the brain. For example, many birds depend on an advanced visual system to identify food at great distances while in flight. Consequently, their optic lobes and cerebellum are well developed, giving them keen sight and outstanding motor coordination in flight. Rodents, on the other hand, as nocturnal animals, do not have a well-developed visual system. Instead, they rely more heavily on other sensory systems, such as a highly developed sense of smell and facial whiskers.

[B]VII -RECENT RESEARCH[/B]
Recent research in brain function suggests that there may be sexual differences in both brain anatomy and brain function. One study indicated that men and women may use their brains differently while thinking. Researchers used functional magnetic resonance imaging to observe which parts of the brain were activated as groups of men and women tried to determine whether sets of nonsense words rhymed. Men used only Broca's area in this task, whereas women used Broca's area plus an area on the right side of the brain.

Predator Thursday, November 15, 2007 10:31 AM

Lava
 
[B][U][CENTER][SIZE="3"]Lava[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Lava, molten or partially molten rock that erupts at the earth’s surface. When lava comes to the surface, it is red-hot, reaching temperatures as high as 1200° C (2200° F). Some lava can be as thick and viscous as toothpaste, while other lava can be as thin and fluid as warm syrup and flow rapidly down the sides of a volcano. Molten rock that has not yet erupted is called magma. Once lava hardens it forms igneous rock. Volcanoes build up where lava erupts from a central vent. Flood basalt forms where lava erupts from huge fissures. The eruption of lava is the principal mechanism whereby new crust is produced (see Plate Tectonics). Since lava is generated at depth, its chemical and physical characteristics provide indirect information about the chemical composition and physical properties of the rocks 50 to 150 km (30 to 90 mi) below the surface.

[B]II -TYPES OF LAVA[/B]
Most lava, on cooling, forms silicate rocks—rocks that contain silicon and oxygen. Lava is classified according to which silicate rocks it forms: basalt, rhyolite, or andesite. Basaltic lava is dark in color and rich in magnesium and iron, but poor in silicon. Rhyolitic lava is light colored and poor in magnesium and iron, but rich in silicon. Andesitic lava is intermediate in composition between basaltic and rhyolitic lava. While color is often sufficient to classify lava informally, formal identification requires chemical analysis in a laboratory. If silica (silicon dioxide) makes up more than 65 percent of the weight of the lava, then the lava is rhyolitic. If the silica content is between 65 percent and 50 percent by weight, then the lava is andesitic. If the silica content is less than 50 percent by weight, then the lava is basaltic.
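
As a rough illustration of these thresholds, the classification can be expressed as a short sketch (the function name and sample values below are illustrative, not taken from any laboratory standard):

[CODE]
def classify_lava(silica_weight_percent):
    """Classify lava by silica (SiO2) weight percent, using the thresholds
    given above: more than 65% rhyolitic, 50-65% andesitic, under 50% basaltic."""
    if silica_weight_percent > 65:
        return "rhyolitic"
    elif silica_weight_percent >= 50:
        return "andesitic"
    else:
        return "basaltic"

# Hypothetical sample analyses, for illustration only
for silica in (72, 58, 48):
    print(silica, "percent silica ->", classify_lava(silica))
[/CODE]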

Many other physical properties, in addition to color, follow the distinctions between basaltic, andesitic, and rhyolitic lava. For example, basaltic lava has a low viscosity, meaning it is thin and runny. Basaltic lava flows easily and spreads out. Rhyolitic lava has a high viscosity and oozes slowly like toothpaste. The viscosity of andesitic lava is intermediate between basaltic and rhyolitic lava. Similarly, basaltic lava tends to erupt at higher temperatures, typically around 1000° to 1200° C (1800° to 2200° F), while rhyolitic lava tends to erupt at temperatures of 800° to 1000° C (1500° to 1800° F). Dissolved gases make up between 1 percent and 9 percent of magma. These gases come out of solution and form gas bubbles as the magma nears the surface. Rhyolitic lava tends to contain the most gas and basaltic lava tends to contain the least.

[B]III -ERUPTIVE STYLES[/B]
Lava can erupt in several different ways depending on the viscosity of the lava and the pressure from the overlying rock. When lava erupts out of a vent or large crack, it may pour like water out of a large pipe. The lava flows downhill like a river and can also form large lava lakes. The rivers and lakes of lava are called lava flows. Other times, the pressure exerted by gas bubbles in the lava is so high that it shatters the overlying rock and shoots lava and rock fragments high into the air with explosive force. The fragments of hot rock and lava shot into the air are called pyroclasts (Greek pyro, “fire”; and klastos, “fragment”). At other times, the pressure may be so high that the volcano itself is destroyed in a cataclysmic explosion.

[B]A -Lava Flows[/B]
When lava flows out of a central vent, it forms a volcano. Basaltic lava is thin and fluid so it quickly spreads out and forms gently sloping volcanoes with slopes of about 5°. The flattest slopes are nearest the top vent, where the lava is hottest and most fluid. These volcanoes are called shield volcanoes because from a distance, they look like giant shields lying on the ground. Mauna Kea and Mauna Loa, on the island of Hawaii, are classic examples of shield volcanoes. Andesitic lava is more viscous and does not travel as far, so it forms steeper volcanoes. Rhyolitic lava is so viscous it does not flow away from the vent. Instead, it forms a cap or dome over the vent.

Sometimes, huge amounts of basaltic lava flow from long cracks or fissures in the earth. These basaltic lava flows, known as flood basalts, can cover more than 100,000 sq km (40,000 sq mi) to a depth of more than 100 m (300 ft). The Columbia River plateau in the states of Washington, Oregon, and Idaho was formed by repeated fissure eruptions. The accumulated basalt deposits are more than 4000 m (13,000 ft) thick in places and cover more than 200,000 sq km (80,000 sq mi). The Paraná flood basalts of Brazil and Paraguay cover an area four times as large. Flood basalts occur on every continent. When basaltic lava cools, it shrinks. In thick sheets of basaltic lava, this shrinkage can produce cracks that often occur in a hexagonal pattern and create hexagonal columns of rock, a process known as columnar jointing.

Two well-known examples of columnar jointing are the Giant’s Causeway on the coast of Northern Ireland and Devil’s Tower in northeastern Wyoming.

Basaltic lava flows and rocks are classified according to their texture. Pahoehoe flows have smooth, ropy-looking surfaces. They form when the semicooled, semihard surface of a lava flow is twisted and wrinkled by the flow of hot fluid lava beneath it. Fluid lava can drain away from beneath hardened pahoehoe surfaces to form empty lava tubes and lava caves. Other basaltic lava flows, known as aa flows, have the appearance of jagged rubble. Very fast-cooling lava can form volcanic glass, such as obsidian.

Vesicular basalt, or scoria, is a solidified froth formed when bubbles of gas trapped in the basaltic lava rise to the surface and cool. Some gas-rich andesitic or rhyolitic lava produces rock, called pumice, that has so many gas bubbles that it will float in water.

Pillow lava is made up of interconnected pillow-shaped and pillow-sized blocks of basalt. It forms when lava erupts underwater. The surface of the lava solidifies rapidly on contact with the water, forming a pillow-shaped object. Pressure of erupting lava beneath the pillow causes the lava to break through the surface and flow out into the water, forming another pillow. Repetition of this process gives rise to piles of pillows. Pillow basalts cover much of the ocean floor.

[B]B -Pyroclastic Eruptions[/B]
Pyroclasts are fragments of hot lava or rock shot into the air when gas-rich lava erupts. Gases easily dissolve in liquids under pressure and come out of solution when the pressure is released. Magma deep underground is under many tons of pressure from the overlying rock. As the magma rises, the pressure from the overlying rocks drops because less weight is pressing down on the magma. Just as the rapid release of bubbles can force a fountain of soda to be ejected from a shaken soda bottle, the rapid release of gas can propel the explosive release of lava.

Pyroclasts come in a wide range of sizes, shapes, and textures. Pieces smaller than peas are called ash. Cinders range from pea sized to walnut sized, and anything larger is called a lava bomb.

Cinders and bombs tend to fall to earth fairly close to where they are ejected, but in very strong eruptions they can travel farther. Lava bombs as large as 100 tons have been found 10 km (6 mi) from the volcano that ejected them. When cinders and bombs accumulate around a volcanic vent, they form a cinder cone. Although the fragments of lava cool rapidly during their brief flight through the air, they are usually still hot and sticky when they land. The sticky cinders weld together to form a rock called tuff.
Ash, because it is so much smaller than cinders, can stay suspended in the air for hours or weeks and travel great distances. The ash from the 1980 eruption of Mount Saint Helens in the state of Washington circled the earth twice.

Many volcanoes have both lava eruptions and pyroclastic eruptions. The resulting volcano is composed of alternating layers of lava and pyroclastic material. These volcanoes are called composite volcanoes or stratovolcanoes. With slopes of 15° to 20°, they are steeper than the gently sloped shield volcanoes. Many stratovolcanoes, such as the picturesque Mount Fuji in Japan, have convex slopes that get steeper closer to the top.

Pyroclastic materials that accumulate on the steep upper slopes of stratovolcanoes often slide down the mountain in huge landslides. If the volcano is still erupting and the loose pyroclastic material is still hot, the resulting slide is called a pyroclastic flow or nuée ardente (French for "glowing cloud"). The flow contains trapped hot gases that suspend the ash and cinders, enabling the flow to travel at great speed. Such flows have temperatures of 800° C (1500° F) and often travel in excess of 150 km/h (100 mph). One such pyroclastic flow killed 30,000 people in the city of Saint-Pierre on the Caribbean island of Martinique in 1902. Only one person in the whole town survived. He was in a basement jail cell.

Loose accumulations of pyroclastic material on steep slopes pose a danger long after the eruption is over. Heavy rains or melting snows can turn the material into mud and set off a catastrophic mudflow called a lahar. In 1985 a small pyroclastic eruption on Nevado del Ruiz, a volcano in Colombia, melted snowfields near the summit. The melted snow, mixed with new and old pyroclastic material, rushed down the mountain as a wall of mud 40 m (about 130 ft) tall. One hour later, it smashed into the town of Armero 55 km (35 mi) away, killing 23,000 people.

[B]C -Explosive Eruptions[/B]
Rhyolitic lava, because it is so viscous, and because it contains so much gas, is prone to cataclysmic eruption. The small amount of lava that does emerge from the vent is too thick to spread. Instead it forms a dome that often caps the vent and prevents the further release of lava or gas. Gas and pressure can build up inside the volcano until the mountaintop blows apart. Such an eruption occurred on Mount Saint Helens in 1980, blowing off the top 400 m (1,300 ft) of the mountain.

Other catastrophic eruptions, called phreatic explosions, occur when rising magma reaches underground water. The water rapidly turns to steam, which powers the explosion. One of the most destructive phreatic explosions of recorded history was the 1883 explosion of Krakatau, in the strait between the Indonesian islands of Java and Sumatra. It destroyed most of the island of Krakatau. The island was uninhabited, so no one died in the actual explosion. However, the explosion caused tsunamis (giant ocean waves) that reached an estimated height of 30 m (100 ft) and hit the nearby islands of Sumatra and Java, destroying 295 coastal towns and killing about 34,000 people. The noise from the explosion was heard nearly 2,000 km (1,200 mi) away in Australia.

Predator Thursday, November 15, 2007 10:36 AM

Milky Way
 
[B][U][SIZE="3"][CENTER]Milky Way[/CENTER][/SIZE][/U][/B]

[B]I -INTRODUCTION[/B]
Milky Way, the large, disk-shaped aggregation of stars, or galaxy, that includes the Sun and its solar system. In addition to the Sun, the Milky Way contains about 400 billion other stars. There are hundreds of billions of other galaxies in the universe, some of which are much larger and contain many more stars than the Milky Way.

The Milky Way is visible at night, appearing as a faintly luminous band that stretches across the sky. The name Milky Way is derived from Greek mythology, in which the band of light was said to be milk from the breast of the goddess Hera. Its hazy appearance results from the combined light of stars too far away to be distinguished individually by the unaided eye. All of the individual stars that are distinct in the sky lie within the Milky Way Galaxy.

From the middle northern latitudes, the Milky Way is best seen on clear, moonless, summer nights, when it appears as a luminous, irregular band circling the sky from the northeastern to the southeastern horizon. It extends through the constellations Perseus, Cassiopeia, and Cepheus. In the region of the Northern Cross it divides into two streams: the western stream, which is bright as it passes through the Northern Cross, fades near Ophiuchus, or the Serpent Bearer, because of dense dust clouds, and appears again in Scorpio; and the eastern stream, which grows brighter as it passes southward through Scutum and Sagittarius. The brightest part of the Milky Way extends from Scutum to Scorpio, through Sagittarius. The center of the galaxy lies in the direction of Sagittarius and is about 25,000 light-years from the Sun (a light-year is the distance light travels in a year, about 9.46 trillion km or 5.88 trillion mi).
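
As a quick worked conversion using the figures just given (a back-of-the-envelope calculation, not an additional measurement), the distance to the galactic center in kilometers is:

[CODE]
# Distance from the Sun to the galactic center, using the figures above
km_per_light_year = 9.46e12   # about 9.46 trillion km
distance_light_years = 25_000

distance_km = distance_light_years * km_per_light_year
print(f"about {distance_km:.1e} km")   # roughly 2.4e17 km
[/CODE]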

[B]II -STRUCTURE[/B]
Galaxies have three common shapes: elliptical, spiral, and irregular. Elliptical galaxies have an ovoid or globular shape and generally contain older stars. Spiral galaxies are disk-shaped with arms that curve around their edges, making these galaxies look like whirlpools. Spiral galaxies contain both old and young stars as well as numerous clouds of dust and gas from which new stars are born. Irregular galaxies have no regular structure. Astronomers believe that their structures were distorted by collisions with other galaxies.

Astronomers classify the Milky Way as a large spiral or possibly a barred spiral galaxy, with several spiral arms coiling around a central bulge about 10,000 light-years thick. Stars in the central bulge are close together, while those in the arms are farther apart. The arms also contain clouds of interstellar dust and gas. The disk is about 100,000 light-years in diameter and is surrounded by a larger cloud of hydrogen gas. Surrounding this cloud in turn is a spherical halo that contains many separate globular clusters of stars mainly lying above or below the disk. This halo may be more than twice as wide as the disk itself. In addition, studies of galactic movements suggest that the Milky Way system contains far more matter than is accounted for by the visible disk and attendant clusters—up to 2,000 billion times more mass than the Sun contains. Astronomers have therefore speculated that the known Milky Way system is in turn surrounded by a much larger ring or halo of undetected matter known as dark matter.

[B]III -TYPES OF STARS[/B]
The Milky Way contains both so-called type I stars (brilliant, blue stars) and type II stars (giant red stars). Blue stars tend to be younger because they burn furiously and use up all of their fuel within a few tens of millions of years. Red stars are usually older and use their fuel at a slower rate that they can sustain for tens of billions of years. The central Milky Way and the halo are largely composed of the type II population. Most of this region is obscured behind dust clouds, which prevent visual observation.

Astronomers have been able to detect light from this region at other wavelengths in the electromagnetic spectrum, however, using radio and infrared telescopes and satellites that detect X rays (see Radio Astronomy; Infrared Astronomy; X-Ray Astronomy). Such studies indicate a compact object near the galactic center, probably a massive black hole. A black hole is an object so dense that nothing, not even light, can escape its intense gravity. The center of the galaxy is home to clouds of antimatter particles, which reveal themselves by emitting gamma rays when they meet particles of matter and annihilate. Astronomers believe the antimatter particles provide more evidence for a massive black hole at the Milky Way’s center.

Observations of stars racing around the center also suggest the presence of a black hole. The stars orbit at speeds up to 1.8 million km/h (1.1 million mph)—17 times the speed at which Earth circles the Sun—even though they are hundreds of times farther from the center than Earth is from the Sun. The greater an object’s mass, the faster an object orbiting it at a given distance will move. Whatever lies at the center of the galaxy must have a tremendous amount of mass packed into a relatively small area in order to cause these stars to orbit so quickly at such a distance. The most likely candidate is a black hole.
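
The reasoning above can be made quantitative with the Keplerian relation M = v^2 r / G, which gives the mass enclosed by a circular orbit of radius r and speed v. The sketch below uses the quoted orbital speed together with an assumed radius of 400 times the Earth-Sun distance (the text says only "hundreds of times farther"), so the result is an order-of-magnitude illustration rather than the measured mass:

[CODE]
# Order-of-magnitude enclosed mass from an orbit: M = v^2 * r / G
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
solar_mass = 1.989e30    # kg
au = 1.496e11            # Earth-Sun distance, m

v = 1.8e6 * 1000 / 3600  # 1.8 million km/h expressed in m/s (500,000 m/s)
r = 400 * au             # assumed orbital radius: 400 times Earth's distance from the Sun

mass = v**2 * r / G
print(f"enclosed mass ~ {mass / solar_mass:,.0f} solar masses")
# roughly 100,000 solar masses even with these conservative, assumed numbers
[/CODE]

Even with these deliberately conservative assumptions, the mass packed inside the orbit comes out to well over a hundred thousand Suns, far more than ordinary stars could supply in so small a region.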

Surrounding the central region is a fairly flat disk comprising stars of both type II and type I; the brightest members of the latter category are luminous, blue supergiants. Imbedded in the disk, and emerging from opposite sides of the central region, are the spiral arms, which contain a majority of the type I population together with much interstellar dust and gas. One arm passes in the vicinity of the Sun and includes the great nebula in Orion. See Nebula.

[B]IV -ROTATION[/B]
The Milky Way rotates around an axis joining the galactic poles. Viewed from the north galactic pole, the rotation of the Milky Way is clockwise, and the spiral arms trail in the same direction. The period of rotation decreases with the distance from the center of the galactic system. In the neighborhood of the solar system the period of rotation is more than 200 million years. The speed of the solar system due to the galactic rotation is about 220 km/sec (about 140 mi/sec).
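
The quoted period and speed are mutually consistent, as a rough check with a circular orbit at the Sun's distance from the center shows (a sketch, assuming the 25,000 light-year distance given earlier):

[CODE]
import math

km_per_light_year = 9.46e12
r_km = 25_000 * km_per_light_year   # Sun's distance from the galactic center
v_km_per_s = 220                    # orbital speed of the solar system

period_seconds = 2 * math.pi * r_km / v_km_per_s
period_years = period_seconds / (3600 * 24 * 365.25)
print(f"orbital period ~ {period_years / 1e6:.0f} million years")  # about 214 million years
[/CODE]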

Predator Thursday, November 15, 2007 11:45 AM

Weather
 
[B][U][CENTER][SIZE="3"]Weather[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Weather, state of the atmosphere at a particular time and place. The elements of weather include temperature, humidity, cloudiness, precipitation, wind, and pressure. These elements are organized into various weather systems, such as monsoons, areas of high and low pressure, thunderstorms, and tornadoes. All weather systems have well-defined cycles and structural features and are governed by the laws of heat and motion. These conditions are studied in meteorology, the science of weather and weather forecasting.
Weather differs from climate, which is the weather that a particular region experiences over a long period of time. Climate includes the averages and variations of all weather elements.

[B]II -TEMPERATURE[/B]
Temperature is a measure of the degree of hotness of the air. Three different scales are used for measuring temperature. Scientists use the Kelvin, or absolute, scale and the Celsius, or centigrade, scale. Most nations use the Celsius scale, although the United States continues to use the Fahrenheit scale.

Temperature on earth averages 15° C (59° F) at sea level but varies according to latitude, elevation, season, and time of day, ranging from a record high of 58° C (136° F) to a record low of -88° C (-126° F). Temperature is generally highest in the Tropics and lowest near the poles. Each day it is usually warmest during midafternoon and coldest around dawn.

Seasonal variations of temperature are generally more pronounced at higher latitudes. Along the equator, all months are equally warm, but away from the equator, it is generally warmest about a month after the summer solstice (around June 21 in the northern hemisphere and around December 21 in the southern hemisphere) and coldest about a month after the winter solstice (around December 21 in the northern hemisphere and around June 21 in the southern hemisphere). Temperature can change abruptly when fronts (boundaries between two air masses with different temperatures or densities) or thunderstorms pass overhead.

Temperature decreases with increasing elevation at an average rate of about 6.5° C per km (about 19° F per mi). As a result, temperatures in the mountains are generally much lower than at sea level. Temperature continues to decrease throughout the atmosphere’s lowest layer, the troposphere, where almost all weather occurs. The troposphere extends to a height of 16 km (10 mi) above sea level over the equator and about 8 km (about 5 mi) above sea level over the poles. Above the troposphere is the stratosphere, where temperature levels off and then begins to increase with height. Almost no weather occurs in the stratosphere.
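
The average lapse rate quoted above gives a quick estimate of how much cooler the air is at altitude. In the sketch below the sea-level temperature and the summit height are illustrative values, not figures from the article:

[CODE]
def temperature_at_altitude(sea_level_temp_c, altitude_km, lapse_rate=6.5):
    """Estimate air temperature aloft using the average tropospheric
    lapse rate of about 6.5 degrees C per km quoted above."""
    return sea_level_temp_c - lapse_rate * altitude_km

# Example: a 4 km high summit on a 15 degree C day at sea level (illustrative values)
print(temperature_at_altitude(15, 4))   # about -11 degrees C
[/CODE]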

[B]III -HUMIDITY[/B]
Humidity is a measure of the amount of water vapor in the air. The air’s capacity to hold vapor is limited but increases dramatically as the air warms, roughly doubling for each temperature increase of 10° C (18° F). There are several different measures of humidity. The specific humidity is the fraction of the mass of air that consists of water vapor, usually given as parts per thousand. Even the warmest, most humid air seldom has a specific humidity greater than 20 parts per thousand. The most common measure of humidity is the relative humidity, or the amount of vapor in the air divided by the air’s vapor-holding capacity at that temperature. If the amount of water vapor in the air remains the same, the relative humidity decreases as the air is heated and increases as the air is cooled. As a result, relative humidity is usually highest around dawn, when the temperature is lowest, and lowest in midafternoon, when the temperature is highest.
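
A short sketch of the relationships just described; only the "roughly doubles per 10° C" rule comes from the text, while the reference capacity at 20° C and the vapor content are assumed, illustrative values:

[CODE]
def vapor_capacity(temp_c, capacity_at_20c=15.0):
    """Approximate vapor-holding capacity in parts per thousand, using the
    rule of thumb above that capacity roughly doubles per 10 C of warming.
    The reference capacity at 20 C is an assumed, illustrative value."""
    return capacity_at_20c * 2 ** ((temp_c - 20) / 10)

specific_humidity = 10.0   # hypothetical vapor content, parts per thousand

for temp_c in (10, 20, 30):
    relative_humidity = 100 * specific_humidity / vapor_capacity(temp_c)
    print(f"{temp_c} C: relative humidity ~ {relative_humidity:.0f}%")
[/CODE]

With the vapor content held fixed, cooling pushes the relative humidity up toward and past saturation, while warming pushes it down, matching the dawn-to-afternoon pattern described above.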

[B]IV -CLOUDINESS[/B]
Most clouds and almost all precipitation are produced by the cooling of air as it rises. When air temperature is reduced, excess water vapor in the air condenses into liquid droplets or ice crystals to form clouds or fog. A cloud can take any of several different forms—including cumulus, cirrus, and stratus—reflecting the pattern of air motions that formed it. Fluffy cumulus clouds form from rising masses of air, called thermals. A cumulus cloud often has a flat base, corresponding to the level at which the water vapor first condenses. If a cumulus cloud grows large, it transforms into a cumulonimbus cloud or a thunderstorm. Fibrous cirrus clouds consist of trails of falling ice crystals twisted by the winds. Cirrus clouds usually form high in the troposphere, and their crystals almost never reach the ground. Stratus clouds form when an entire layer of air cools or ascends obliquely. A stratus cloud often extends for hundreds of miles.

Fog is a cloud that touches the ground. In dense fogs, the visibility may drop below 50 m (55 yd). Fog occurs most frequently when the earth’s surface is much colder than the air directly above it, such as around dawn and over cold ocean currents. Fog is thickened and acidified when the air is filled with sulfur-laden soot particles produced by the burning of coal. Dense acid fogs that killed thousands of people in London in the early 1950s led to legislation in 1956 that prohibited coal burning in cities.

Optical phenomena, such as rainbows and halos, occur when light shines through cloud particles. Rainbows are seen when sunlight from behind the observer strikes the raindrops falling from cumulonimbus clouds. The raindrops act as tiny prisms, bending and reflecting the different colors of light back to the observer’s eye at different angles and creating bands of color. Halos are seen when sunlight or moonlight in front of the observer strikes ice crystals and then passes through high, thin cirrostratus clouds.

[B]V -PRECIPITATION[/B]
Precipitation is produced when the droplets and crystals in clouds grow large enough to fall to the ground. Clouds do not usually produce precipitation until they are more than 1 km (0.6 mi) thick. Precipitation takes a variety of forms, including rain, drizzle, freezing rain, snow, hail, and ice pellets, or sleet. Raindrops have diameters larger than 0.5 mm (0.02 in), whereas drizzle drops are smaller. Few raindrops are larger than about 6 mm (about 0.2 in), because such large drops are unstable and break up easily. Ice pellets are raindrops that have frozen in midair. Freezing rain is rain that freezes on contact with any surface. It often produces a layer of ice that can be very slippery.

Snowflakes are either single ice crystals or clusters of ice crystals. Large snowflakes generally form when the temperature is near 0° C (32° F), because at this temperature the flakes are partly melted and stick together when they collide. Hailstones are balls of ice about 6 to 150 mm (about 0.2 to 6 in) in diameter. They consist of clusters of raindrops that have collided and frozen together. Large hailstones only occur in violent thunderstorms, in which strong updrafts keep the hailstones suspended in the atmosphere long enough to grow large.

Precipitation amounts are usually given in terms of depth. A well-developed winter storm can produce 10 to 30 mm (0.4 to 1.2 in) of rain over a large area in 12 to 24 hours. An intense thunderstorm may produce more than 20 mm (0.8 in) of rain in 10 minutes and cause flash floods (floods in which the water rises suddenly). Hurricanes sometimes produce over 250 mm (10 in) of rain and lead to extensive flooding.

Snow depths are usually much greater than rain depths because of snow’s low density. During intense winter storms, more than 250 mm (10 in) of snow may fall in 24 hours, and the snow can be much deeper in places where the wind piles it up in drifts. Extraordinarily deep snows sometimes accumulate on the upwind side of mountain slopes during severe winter storms or on the downwind shores of large lakes during outbreaks of polar air.

[B]VI -WIND[/B]
Wind is the horizontal movement of air. It is named for the direction from which it comes—for example, a north wind comes from the north. In most places near the ground, the wind speed averages from 8 to 24 km/h (from 5 to 15 mph), but it can be much higher during intense storms. Wind speeds in hurricanes and typhoons exceed 120 km/h (75 mph) near the storm’s center and may approach 320 km/h (200 mph). The highest wind speeds at the surface of the earth—as high as 480 km/h (300 mph)—occur in tornadoes. Except for these storms, wind speed usually increases with height to the top of the troposphere.

[B]VII -PRESSURE[/B]
Pressure plays a vital role in all weather systems. Pressure is the force of the air on a given surface divided by the area of that surface. In most weather systems the air pressure is equal to the weight of the air column divided by the area of the column. Pressure decreases rapidly with height, halving about every 5.5 km (3.4 mi).
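
The halving rule above amounts to a simple exponential decrease with height. In the sketch below the sea-level value of 1013 hPa is the standard reference pressure, used only for illustration:

[CODE]
def pressure_at_height(height_km, sea_level_hpa=1013.25, halving_height_km=5.5):
    """Estimate air pressure aloft, assuming pressure halves roughly every
    5.5 km as described above."""
    return sea_level_hpa * 0.5 ** (height_km / halving_height_km)

for h in (0, 5.5, 11, 16):
    print(f"{h:>4} km: ~{pressure_at_height(h):.0f} hPa")
[/CODE]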

Sea-level pressure varies by only a few percent. Large regions in the atmosphere that have higher pressure than the surroundings are called high-pressure areas. Regions with lower pressure than the surroundings are called low-pressure areas. Most storms occur in low-pressure areas. Rapidly falling pressure usually means a storm is approaching, whereas rapidly rising pressure usually indicates that skies will clear.

[B]VIII -SCALES OF WEATHER[/B]
Weather systems occur on a wide range of scales. Monsoons occur on a global scale and are among the largest weather systems, extending for thousands of miles. Thunderstorms are much smaller, typically 10 to 20 km (6 to 12 mi) across. Tornadoes, which extend from the bases of thunderstorms, range from less than 50 m (55 yd) across to as much as 2 km (1.2 mi) across.
The vertical scale of weather systems is much more limited. Because pressure decreases so rapidly with height and because temperature stops decreasing in the stratosphere, weather systems are confined to the troposphere. Only the tallest thunderstorms reach the stratosphere, which is otherwise almost always clear.

[B]IX -CAUSES OF WEATHER[/B]
All weather is due to heating from the sun. The sun emits energy at an almost constant rate, but a region receives more heat when the sun is higher in the sky and when there are more hours of sunlight in a day. The high sun of the Tropics makes this area much warmer than the poles, and in summer the high sun and long days make the region much warmer than in winter. In the northern hemisphere, the sun climbs high in the sky and the days are long in summer, around July, when the northern end of the earth’s axis is tilted toward the sun. At the same time, it is winter in the southern hemisphere. The southern end of the earth’s axis is tilted away from the sun, so the sun is low in the sky and the days are short.

The temperature differences produced by inequalities in heating cause differences in air density and pressure that propel the winds. Vertical air motions are propelled by buoyancy: A region of air that is warmer and less dense than the surroundings is buoyant and rises. Air is also forced from regions of higher pressure to regions of lower pressure. Once the air begins moving, it is deflected by the Coriolis force, which results from the earth’s rotation. The Coriolis force deflects the wind and all moving objects toward their right in the northern hemisphere and toward their left in the southern hemisphere. It is so gentle that it has little effect on small-scale winds that last less than a few hours, but it has a profound effect on winds that blow for many hours and move over large distances.

[B]X -WEATHER SYSTEMS[/B]
In both hemispheres, the speed of the west wind increases with height up to the top of the troposphere. The core of most rapid winds at the top of the troposphere forms a wavy river of air called the jet stream. Near the ground, where the winds are slowed by friction, the air blows at an acute angle toward areas of low pressure, forming great gyres called cyclones and anticyclones. In the northern hemisphere, the Coriolis force causes air in low-pressure areas to spiral counterclockwise and inward, forming a cyclone, whereas air in high-pressure areas spirals clockwise and outward, forming an anticyclone. In the southern hemisphere, cyclones turn clockwise and anticyclones, counterclockwise.

The air spreading from anticyclones is replaced by sinking air from above. As a result, skies in anticyclones are often fair, and large regions of air called air masses form; these have reasonably uniform temperature and humidity. In cyclones, on the other hand, as air converges to the center, it rises to form extensive clouds and precipitation.

During summer and fall, tropical cyclones, called hurricanes or typhoons, form over warm waters of the oceans in bands parallel to the equator, between about latitude 5° and latitude 30° north and south. Wind speed in hurricanes increases as the air spirals inward. The air either rises in a series of rain bands before reaching the center or proceeds inward and then turns sharply upward in a doughnut-shaped region called the eye wall, where the most intense winds and rain occur. The eye wall surrounds the core, or eye, of the hurricane, which is marked by partly clear skies and gentle winds.

In the middle and high latitudes, polar and tropical air masses are brought together in low-pressure areas called extratropical cyclones, forming narrow zones of sharply changing temperature called fronts. Intense extratropical cyclones can produce blizzard conditions in their northern reaches while at the same time producing warm weather with possible severe thunderstorms and tornadoes in their southern reaches.

Thunderstorms are small, intense convective storms that are produced by buoyant, rapidly rising air. As thunderstorms mature, strong downdrafts of rain- or hail-filled cool air plunge toward the ground, bringing intense showers. However, because thunderstorms are only about 16 km (about 10 mi) wide, they pass over quickly, usually lasting less than an hour. Severe thunderstorms sometimes produce large hail. They may also rotate slowly and spout rapidly rotating tornadoes from their bases.

Most convective weather systems are gentler than thunderstorms. Often, organized circulation cells develop, in which cooler and denser air from the surroundings sinks and blows along the ground to replace the rising heated air. Circulation cells occur on many different scales. On a local scale, along the seashore during sunny spring and summer days, air over the land grows hot while air over the sea remains cool. As the heated air rises, the cooler and denser air from the sea rushes in. This movement of air is popularly called a sea breeze. At night, when the air over the land grows cooler than the air over the sea, the wind reverses and is known as a land breeze.

On a global scale, hot, humid air near the equator rises and is replaced by denser air that sinks in the subtropics and blows back to the equator along the ground. The winds that blow toward the equator are called the trade winds. The trade winds are among the most steady, reliable winds on the earth. They approach the equator obliquely from the northeast and southeast because of the Coriolis force.

The tropical circulation cell is called the Hadley cell. It shifts north and south with the seasons and causes tropical monsoons in India. For example, around July the warm, rising air of the Hadley cell is located over India, and humid winds blow in from the Indian Ocean. Around January the cooler, sinking air of the Hadley cell is located over India, and the winds blow in the opposite direction.

A variable circulation cell called the Walker Circulation exists over the tropical Pacific Ocean. Normally, air rises over the warm waters of the western Pacific Ocean over the Malay Archipelago and sinks over the cold waters in the eastern Pacific Ocean off the coast of Ecuador and Peru. Most years around late December this circulation weakens, and the cold waters off the coast of South America warm up slightly. Because it occurs around Christmas, the phenomenon is called El Niño (The Child). Once every two to five years, the waters of the eastern Pacific Ocean warm profoundly. The Walker Circulation then weakens drastically or even reverses, so that air rises and brings torrential rains to normally dry sections of Ecuador and Peru and hurricanes to Tahiti. On the other side of the Pacific Ocean, air sinks and brings drought to Australia. El Niño can now be predicted with reasonable accuracy several months in advance.

[B]XI -WEATHER FORECASTING[/B]
Since the early 20th century, great strides have been made in weather prediction, largely as a result of computer development but also because of instrumentation such as satellites and radar. Weather data from around the world are collected by the World Meteorological Organization, the National Weather Service, and other agencies and entered into computer models that apply the laws of motion and of the conservation of energy and mass to produce forecasts. In some cases, these forecasts have provided warning of major storms as much as a week in advance. However, because the behavior of weather systems is chaotic, it is impossible to forecast the details of weather more than about two weeks in advance.

Intense small-scale storms, such as thunderstorms and tornadoes, are much more difficult to forecast than are larger weather systems. In areas in which thunderstorms are common, general forecasts can be made several days in advance, but the exact time and location of the storms, as well as of flash floods and tornadoes, can only be forecast about an hour in advance. (For a discussion of weather forecasting methods and technologies, see Meteorology.)

[B]XII -WEATHER MODIFICATION[/B]
Human beings can change weather and climate. Water-droplet clouds with tops colder than about -5° C (about 23° F) can be made to produce rain by seeding them with substances such as silver iodide. Cloud seeding causes ice crystals to form and grow large enough to fall out of a cloud. However, although cloud seeding has been proven effective in individual clouds, its effect over large areas is still unproven.

Weather near the ground is routinely modified for agricultural purposes. For example, soil is darkened to raise its temperature, and fans are turned on during clear, cold nights to stir warmer air down to the ground and help prevent frost damage.

Human activities have also produced inadvertent effects on weather and climate. Adding gases such as carbon dioxide and methane to the atmosphere has increased the greenhouse effect and contributed to global warming by raising the mean temperature of the earth by about 0.5° C (about 0.9° F) since the beginning of the 20th century. More recently, chlorofluorocarbons (CFCs), which are used as refrigerants and in aerosol propellants, have been released into the atmosphere, reducing the amount of ozone worldwide and causing a thinning of the ozone layer over Antarctica each spring (around October). The potential consequences of these changes are vast. Global warming may cause sea level to rise, and the incidence of skin cancer may increase as a result of the reduction of ozone. In an effort to prevent such consequences, production of CFCs has been curtailed and many measures have been suggested to control emission of greenhouse gases, including the development of more efficient engines and the use of alternative energy sources such as solar energy and wind energy.

Predator Thursday, November 15, 2007 04:05 PM

Heart
 
[B][U][CENTER][SIZE="3"]Heart[/SIZE][/CENTER][/U][/B]

[B]I - INTRODUCTION[/B]
Heart, in anatomy, hollow muscular organ that pumps blood through the body. The heart, blood, and blood vessels make up the circulatory system, which is responsible for distributing oxygen and nutrients to the body and carrying away carbon dioxide and other waste products. The heart is the circulatory system’s power supply. It must beat ceaselessly because the body’s tissues—especially the brain and the heart itself—depend on a constant supply of oxygen and nutrients delivered by the flowing blood. If the heart stops pumping blood for more than a few minutes, death will result.

The human heart is shaped like an upside-down pear and is located slightly to the left of center inside the chest cavity. About the size of a closed fist, the heart is made primarily of muscle tissue that contracts rhythmically to propel blood to all parts of the body. This rhythmic contraction begins in the developing embryo about three weeks after conception and continues throughout an individual’s life. The muscle rests only for a fraction of a second between beats. Over a typical life span of 76 years, the heart will beat nearly 2.8 billion times and move 169 million liters (179 million quarts) of blood.

Since prehistoric times people have had a sense of the heart’s vital importance. Cave paintings from 20,000 years ago depict a stylized heart inside the outline of hunted animals such as bison and elephant. The ancient Greeks believed the heart was the seat of intelligence. Others believed the heart to be the source of the soul or of the emotions—an idea that persists in popular culture and various verbal expressions, such as heartbreak, to the present day.

[B]II - STRUCTURE OF THE HEART[/B]
The human heart has four chambers. The upper two chambers, the right and left atria, are receiving chambers for blood. The atria are sometimes known as auricles. They collect blood that pours in from veins, blood vessels that return blood to the heart. The heart’s lower two chambers, the right and left ventricles, are the powerful pumping chambers. The ventricles propel blood into arteries, blood vessels that carry blood away from the heart.

A wall of tissue separates the right and left sides of the heart. Each side pumps blood through a different circuit of blood vessels: The right side of the heart pumps oxygen-poor blood to the lungs, while the left side of the heart pumps oxygen-rich blood to the body. Blood returning from a trip around the body has given up most of its oxygen and picked up carbon dioxide in the body’s tissues. This oxygen-poor blood feeds into two large veins, the superior vena cava and inferior vena cava, which empty into the right atrium of the heart.

The right atrium conducts blood to the right ventricle, and the right ventricle pumps blood into the pulmonary artery. The pulmonary artery carries the blood to the lungs, where it picks up a fresh supply of oxygen and eliminates carbon dioxide. The blood, now oxygen-rich, returns to the heart through the pulmonary veins, which empty into the left atrium. Blood passes from the left atrium into the left ventricle, from where it is pumped out of the heart into the aorta, the body’s largest artery. Smaller arteries that branch off the aorta distribute blood to various parts of the body.

[B]A -Heart Valves[/B]
Four valves within the heart prevent blood from flowing backward in the heart. The valves open easily in the direction of blood flow, but when blood pushes against the valves in the opposite direction, the valves close. Two valves, known as atrioventricular valves, are located between the atria and ventricles. The right atrioventricular valve is formed from three flaps of tissue and is called the tricuspid valve. The left atrioventricular valve has two flaps and is called the bicuspid or mitral valve. The other two heart valves are located between the ventricles and arteries. They are called semilunar valves because they each consist of three half-moon-shaped flaps of tissue. The right semilunar valve, between the right ventricle and pulmonary artery, is also called the pulmonary valve. The left semilunar valve, between the left ventricle and aorta, is also called the aortic valve.

[B]B -Myocardium[/B]
Muscle tissue, known as myocardium or cardiac muscle, wraps around a scaffolding of tough connective tissue to form the walls of the heart’s chambers. The atria, the receiving chambers of the heart, have relatively thin walls compared to the ventricles, the pumping chambers. The left ventricle has the thickest walls—nearly 1 cm (0.4 in) thick in an adult—because it must work the hardest to propel blood to the farthest reaches of the body.

[B]C -Pericardium[/B]
A tough, double-layered sac known as the pericardium surrounds the heart. The inner layer of the pericardium, known as the epicardium, rests directly on top of the heart muscle. The outer layer of the pericardium attaches to the breastbone and other structures in the chest cavity and helps hold the heart in place. Between the two layers of the pericardium is a thin space filled with a watery fluid that helps prevent these layers from rubbing against each other when the heart beats.

[B]D -Endocardium[/B]
The inner surfaces of the heart’s chambers are lined with a thin sheet of shiny, white tissue known as the endocardium. The same type of tissue, more broadly referred to as endothelium, also lines the body’s blood vessels, forming one continuous lining throughout the circulatory system. This lining helps blood flow smoothly and prevents blood clots from forming inside the circulatory system.

[B]E -Coronary Arteries[/B]
The heart is nourished not by the blood passing through its chambers but by a specialized network of blood vessels. Known as the coronary arteries, these blood vessels encircle the heart like a crown. About 5 percent of the blood pumped to the body enters the coronary arteries, which branch from the aorta just above where it emerges from the left ventricle. Three main coronary arteries—the right, the left circumflex, and the left anterior descending—nourish different regions of the heart muscle. From these three arteries arise smaller branches that enter the muscular walls of the heart to provide a constant supply of oxygen and nutrients. Veins running through the heart muscle converge to form a large channel called the coronary sinus, which returns blood to the right atrium.

[B]III -FUNCTION OF THE HEART[/B]
The heart’s duties are much broader than simply pumping blood continuously throughout life. The heart must also respond to changes in the body’s demand for oxygen. The heart works very differently during sleep, for example, than in the middle of a 5-km (3-mi) run. Moreover, the heart and the rest of the circulatory system can respond almost instantaneously to shifting situations—when a person stands up or lies down, for example, or when a person is faced with a potentially dangerous situation.

[B]A -Cardiac Cycle[/B]
Although the right and left halves of the heart are separate, they both contract in unison, producing a single heartbeat. The sequence of events from the beginning of one heartbeat to the beginning of the next is called the cardiac cycle. The cardiac cycle has two phases: diastole, when the heart’s chambers are relaxed, and systole, when the chambers contract to move blood. During the systolic phase, the atria contract first, followed by contraction of the ventricles. This sequential contraction ensures efficient movement of blood from atria to ventricles and then into the arteries. If the atria and ventricles contracted simultaneously, the heart would not be able to move as much blood with each beat.

During diastole, both atria and ventricles are relaxed, and the atrioventricular valves are open. Blood pours from the veins into the atria, and from there into the ventricles. In fact, most of the blood that enters the ventricles simply pours in during diastole. Systole then begins as the atria contract to complete the filling of the ventricles. Next, the ventricles contract, forcing blood out through the semilunar valves and into the arteries, and the atrioventricular valves close to prevent blood from flowing back into the atria. As pressure rises in the arteries, the semilunar valves snap shut to prevent blood from flowing back into the ventricles. Diastole then begins again as the heart muscle relaxes—the atria first, followed by the ventricles—and blood begins to pour into the heart once more.

A health-care professional uses an instrument known as a stethoscope to detect internal body sounds, including the sounds produced by the heart as it is beating. The characteristic heartbeat sounds are made by the valves in the heart—not by the contraction of the heart muscle itself. The sound comes from the leaflets of the valves slapping together. The closing of the atrioventricular valves, just before the ventricles contract, makes the first heart sound. The second heart sound is made when the semilunar valves snap closed. The first heart sound is generally longer and lower than the second, producing a heartbeat that sounds like lub-dup, lub-dup, lub-dup.

Blood pressure, the pressure exerted on the walls of blood vessels by the flowing blood, also varies during different phases of the cardiac cycle. Blood pressure in the arteries is higher during systole, when the ventricles are contracting, and lower during diastole, as the blood ejected during systole moves into the body’s capillaries. Blood pressure is measured in millimeters (mm) of mercury using a sphygmomanometer, an instrument that consists of a pressure-recording device and an inflatable cuff that is usually placed around the upper arm. Normal blood pressure in an adult is less than 120 mm of mercury during systole, and less than 80 mm of mercury during diastole.

Blood pressure is usually noted as a ratio of systolic pressure to diastolic pressure—for example, 120/80. A person’s blood pressure may increase for a short time during moments of stress or strong emotions. However, a prolonged or constant elevation of blood pressure, a condition known as hypertension, can increase a person’s risk for heart attack, stroke, heart and kidney failure, and other health problems.

[B]B -Generation of the Heartbeat[/B]
Unlike most muscles, which rely on nerve impulses to cause them to contract, heart muscle can contract of its own accord. Certain heart muscle cells have the ability to contract spontaneously, and these cells generate electrical signals that spread to the rest of the heart and cause it to contract with a regular, steady beat.

The heartbeat begins with a small group of specialized muscle cells located in the upper right-hand corner of the right atrium. This area is known as the sinoatrial (SA) node. Cells in the SA node generate their electrical signals more frequently than cells elsewhere in the heart, so the electrical signals generated by the SA node synchronize the electrical signals traveling to the rest of the heart. For this reason, the SA node is also known as the heart’s pacemaker.

Impulses generated by the SA node spread rapidly throughout the atria, so that all the muscle cells of the atria contract virtually in unison. Electrical impulses cannot be conducted through the partition between the atria and ventricles, which is primarily made of fibrous connective tissue rather than muscle cells. The impulses from the SA node are carried across this connective tissue partition by a small bridge of muscle called the atrioventricular conduction system. The first part of this system is a group of cells at the lower margin of the right atrium, known as the atrioventricular (AV) node. Cells in the AV node conduct impulses relatively slowly, introducing a delay of about two-tenths of a second before an impulse reaches the ventricles. This delay allows time for the blood in the atria to empty into the ventricles before the ventricles begin contracting.

After making its way through the AV node, an impulse passes along a group of muscle fibers called the bundle of His, which spans the connective tissue wall separating the atria from the ventricles. Once on the other side of that wall, the impulse spreads rapidly among the muscle cells that make up the ventricles. The impulse travels to all parts of the ventricles with the help of a network of fast-conducting fibers called Purkinje fibers. These fibers are necessary because the ventricular walls are so thick and massive.

If the impulse had to spread directly from one muscle cell to another, different parts of the ventricles would not contract together, and the heart would not pump blood efficiently. Although this complicated circuit has many steps, an electrical impulse spreads from the SA node throughout the heart in less than one second.

The journey of an electrical impulse around the heart can be traced by a machine called an electrocardiograph. This instrument consists of a recording device attached to electrodes that are placed at various points on a person’s skin. The recording device measures different phases of the heartbeat and traces these patterns as peaks and valleys in a graphic image known as an electrocardiogram (ECG, sometimes known as EKG). Changes or abnormalities in the heartbeat or in the heart’s rate of contraction register on the ECG, helping doctors diagnose heart problems or identify damage from a heart attack.

[B]C -Control of the Heart Rate[/B]
In an adult, resting heart rate is normally about 70 beats per minute. However, the heart can beat up to three times faster—at more than 200 beats per minute—when a person is exercising vigorously. Younger people have faster resting heart rates than adults do. The normal heart rate is about 120 beats per minute in infants and about 100 beats per minute in young children. Many athletes, by contrast, often have relatively slow resting heart rates because physical training makes the heart stronger and enables it to pump the same amount of blood with fewer beats. An athlete’s resting heart rate may be only 40 to 60 beats per minute.

Although the SA node generates the heartbeat, impulses from nerves cause the heart to speed up or slow down almost instantaneously. The nerves that affect heart rate are part of the autonomic nervous system, which directs activities of the body that are not under conscious control. The autonomic nervous system is made up of two types of nerves, sympathetic and parasympathetic fibers. These fibers come from the spinal cord or brain and deliver impulses to the SA node and other parts of the heart.

Sympathetic nerve fibers increase the heart rate. These fibers are activated in times of stress, and they play a role in the fight or flight response that prepares humans and other animals to respond to danger. In addition to fear or physical danger, exercising or experiencing a strong emotion can also activate sympathetic fibers and cause an increase in heart rate. In contrast, parasympathetic nerve fibers slow the heart rate. In the absence of nerve impulses the SA node would fire about 100 times each minute—parasympathetic fibers are responsible for slowing the heart to the normal rate of about 70 beats per minute.

Chemicals known as hormones carried in the bloodstream also influence the heart rate. Hormones generally take effect more slowly than nerve impulses. They work by attaching to receptors, proteins on the surface of heart muscle cells, to change the way the muscle cells contract. Epinephrine (also called adrenaline) is a hormone made by the adrenal glands, which are located on top of the kidneys. Released during times of stress, epinephrine increases the heart rate much as sympathetic nerve fibers do. Thyroid hormone, which regulates the body’s overall metabolism, also increases the heart rate. Other chemicals—especially calcium, potassium, and sodium—can affect heart rate and rhythm.

[B]D -Cardiac Output[/B]
To determine overall heart function, doctors measure cardiac output, the amount of blood pumped by each ventricle in one minute. Cardiac output is equal to the heart rate multiplied by the stroke volume, the amount of blood pumped by a ventricle with each beat. Stroke volume, in turn, depends on several factors: the rate at which blood returns to the heart through the veins; how vigorously the heart contracts; and the pressure of blood in the arteries, which affects how hard the heart must work to propel blood into them. Normal cardiac output in an adult is about 3 liters per minute per square meter of body surface.

An increase in either heart rate or stroke volume—or both—will increase cardiac output. During exercise, sympathetic nerve fibers increase heart rate. At the same time, stroke volume increases, primarily because venous blood returns to the heart more quickly and the heart contracts more vigorously. Many of the factors that increase heart rate also increase stroke volume. For example, impulses from sympathetic nerve fibers cause the heart to contract more vigorously as well as increasing the heart rate. The simultaneous increase in heart rate and stroke volume enables a larger and more efficient increase in cardiac output than if, say, heart rate alone increased during exercise. In a healthy adult during vigorous exercise, cardiac output can increase six-fold, to 18 liters per minute per square meter of body surface.
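
As a rough illustration of this arithmetic, the Python sketch below multiplies heart rate by stroke volume to get cardiac output and then divides by body surface area to obtain the per-square-meter figures quoted above. The stroke volumes and the body surface area are assumed values, chosen only so that the results land near the resting (about 3 liters per minute per square meter) and exercise (about 18 liters per minute per square meter) figures in the text.

[CODE]
def cardiac_output_l_per_min(heart_rate_bpm, stroke_volume_ml):
    """Cardiac output (liters/minute) = heart rate x stroke volume."""
    return heart_rate_bpm * stroke_volume_ml / 1000.0

# Assumed illustrative values (not given in the text)
body_surface_m2 = 1.7

rest = cardiac_output_l_per_min(heart_rate_bpm=70, stroke_volume_ml=73)
exercise = cardiac_output_l_per_min(heart_rate_bpm=190, stroke_volume_ml=160)

print("Rest:     %.1f L/min (%.1f L/min per square meter)"
      % (rest, rest / body_surface_m2))
print("Exercise: %.1f L/min (%.1f L/min per square meter)"
      % (exercise, exercise / body_surface_m2))
[/CODE]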

[B]IV -DISEASES OF THE HEART[/B]
In the United States and many other industrialized countries, heart disease is the leading cause of death. According to the United States Centers for Disease Control and Prevention (CDC), more than 710,000 people in the United States die of heart disease each year. By far the most common type of heart disease in the United States is coronary heart disease, in which the arteries that nourish the heart become narrowed and unable to supply enough blood and oxygen to the heart muscle. However, many other problems can also affect the heart, including congenital defects (physical abnormalities that are present at birth), malfunction of the heart valves, and abnormal heart rhythms. Any type of heart disease may eventually result in heart failure, in which a weakened heart is unable to pump sufficient blood to the body.

[B]A -Coronary Heart Disease[/B]
Coronary heart disease, the most common type of heart disease in most industrialized countries, is responsible for over 515,000 deaths in the United States yearly. It is caused by atherosclerosis, the buildup of fatty material called plaque on the inside of the coronary arteries (see Arteriosclerosis). Over the course of many years, this plaque narrows the arteries so that less blood can flow through them and less oxygen reaches the heart muscle.

The most common symptom of coronary heart disease is angina pectoris, a squeezing chest pain that may radiate to the neck, jaw, back, and left arm. Angina pectoris is a signal that blood flow to the heart muscle falls short of its needs when extra work is required of it. An attack of angina is typically triggered by exercise or other physical exertion, or by strong emotions. Coronary heart disease can also lead to a heart attack, which usually develops when a blood clot forms at the site of a plaque and severely reduces or completely stops the flow of blood to a part of the heart. In a heart attack, also known as myocardial infarction, part of the heart muscle dies because it is deprived of oxygen. This oxygen deprivation also causes the crushing chest pain characteristic of a heart attack. Other symptoms of a heart attack include nausea, vomiting, and profuse sweating. About one-third of heart attacks are fatal, but patients who seek immediate medical attention when symptoms of a heart attack develop have a good chance of surviving.

One of the primary risk factors for coronary heart disease is the presence of a high level of a fatty substance called cholesterol in the bloodstream. High blood cholesterol is typically the result of a diet that is high in cholesterol and saturated fat, although some genetic disorders also cause the problem. Other risk factors include smoking, high blood pressure, diabetes mellitus, obesity, and a sedentary lifestyle. Coronary heart disease was once thought to affect primarily men, but this is not the case. The disease affects an equal number of men and women, although women tend to develop the disease later in life than men do.

Coronary heart disease cannot be cured, but it can often be controlled with a combination of lifestyle changes and medications. Patients with coronary heart disease are encouraged to quit smoking, exercise regularly, and eat a low-fat diet. Doctors may prescribe a drug such as lovastatin, simvastatin, or pravastatin to help lower blood cholesterol. A wide variety of medications can help relieve angina, including nitroglycerin, beta blockers, and calcium channel blockers. Doctors may recommend that some patients take a daily dose of aspirin, which helps prevent heart attacks by interfering with platelets, tiny blood cells that play a critical role in blood clotting.

In some patients, lifestyle changes and medication may not be sufficient to control angina. These patients may undergo coronary artery bypass surgery or percutaneous transluminal coronary angioplasty (PTCA) to help relieve their symptoms. In bypass surgery, a length of blood vessel is removed from elsewhere in the patient’s body—usually a vein from the leg or an artery from the wrist. The surgeon sews one end to the aorta and the other end to the coronary artery, creating a conduit for blood to flow that bypasses the narrowed segment. Surgeons today commonly use an artery from the inside of the chest wall because bypasses made from this artery are very durable.

In PTCA, commonly referred to as balloon angioplasty, a deflated balloon is threaded through the patient’s coronary arteries to the site of a blockage. The balloon is then inflated, crushing the plaque and restoring the normal flow of blood through the artery.

[B]B -Congenital Defects[/B]
Each year about 25,000 babies in the United States are born with a congenital heart defect (see Birth Defects). A wide variety of heart malformations can occur. One of the most common abnormalities is a septal defect, an opening between the right and left atria or between the right and left ventricles. In other infants, the ductus arteriosus, a fetal blood vessel that usually closes soon after birth, remains open. In babies with these abnormalities, some of the oxygen-rich blood returning from the lungs is pumped to the lungs again, placing extra strain on the right ventricle and on the blood vessels leading to and from the lungs.

Sometimes a portion of the aorta is abnormally narrow and unable to carry sufficient blood to the body. This condition, called coarctation of the aorta, places extra strain on the left ventricle because it must work harder to pump blood beyond the narrow portion of the aorta. With the heart pumping harder, high blood pressure often develops in the upper body and may cause a blood vessel in the brain to burst, a complication that is often fatal. An infant may be born with several different heart defects, as in the condition known as tetralogy of Fallot. In this condition, a combination of four different heart malformations allows mixing of oxygenated and deoxygenated blood pumped by the heart. Infants with tetralogy of Fallot are often known as “blue babies” because of the characteristic bluish tinge of their skin, a condition caused by lack of oxygen.

In many cases, the cause of a congenital heart defect is difficult to identify. Some defects may be due to genetic factors, while others may be the result of viral infections or exposure to certain chemicals during the early part of the mother’s pregnancy. Regardless of the cause, most congenital malformations of the heart can be treated successfully with surgery, sometimes performed within a few weeks or months of birth. For example, a septal defect can be repaired with a patch made from pericardium or synthetic fabric that is sewn over the hole. An open ductus arteriosus is cut, and the pulmonary artery and aorta are stitched closed.

To correct coarctation of the aorta, a surgeon snips out the narrowed portion of the vessel and sews the normal ends together, or sews in a tube of fabric to connect the ends. Surgery for tetralogy of Fallot involves procedures to correct each part of the defect. Success rates for many of these operations are well above 90 percent, and with treatment most children with congenital heart defects live healthy, normal lives.

[B]C -Heart Valve Malfunction[/B]
Malfunction of one of the four valves within the heart can cause problems that affect the entire circulatory system. A leaky valve does not close all the way, allowing some blood to flow backward as the heart contracts. This backward flow decreases the amount of oxygen the heart can deliver to the tissues with each beat. A stenotic valve, which is stiff and does not open fully, requires the heart to pump with increased force to propel blood through the narrowed opening. Over time, either of these problems can lead to damage of the overworked heart muscle.

Some people are born with malformed valves. Such congenital malformations may require treatment soon after birth, or they may not cause problems until a person reaches adulthood. A heart valve may also become damaged during life, due to infection, connective tissue disorders such as Marfan syndrome, hypertension, heart attack, or simply aging.

A well-known, but poorly understood, type of valve malfunction is mitral valve prolapse. In this condition, the leaflets of the mitral valve fail to close properly and bulge backward like a parachute into the left atrium. Mitral valve prolapse is the most common type of valve abnormality, affecting 5 to 10 percent of the United States population, the majority of them women. In most cases, mitral valve prolapse does not cause any problems, but in a few cases the valve’s failure to close properly allows blood to leak backwards through the valve.

Another common cause of valve damage is rheumatic fever, a complication that sometimes develops after an infection with common bacteria known as streptococci. Most common in children, the illness is characterized by inflammation and pain in the joints. Connective tissue elsewhere in the body, including in the heart, heart valves, and pericardium, may also become inflamed. This inflammation can result in damage to the heart, most commonly one of the heart valves, that remains after the other symptoms of rheumatic fever have gone away.

Valve abnormalities are often detected when a health-care professional listens to the heart with a stethoscope. Abnormal valves cause extra sounds in addition to the normal sequence of two heart sounds during each heartbeat. These extra heart sounds are often known as heart murmurs, and not all of them are dangerous. In some cases, a test called echocardiography may be necessary to evaluate an abnormal valve. This test uses ultrasound waves to produce images of the inside of the heart, enabling doctors to see the shape and movement of the valves as the heart pumps.

Damaged or malformed valves can sometimes be surgically repaired. More severe valve damage may require replacement with a prosthetic valve. Some prosthetic valves are made from pig or cow valve tissue, while others are mechanical valves made from silicone and other synthetic materials.

[B]D -Arrhythmias[/B]
Arrhythmias, or abnormal heart rhythms, arise from problems with the electrical conduction system of the heart. Arrhythmias can occur in either the atria or the ventricles. In general, ventricular arrhythmias are more serious than atrial arrhythmias because ventricular arrhythmias are more likely to affect the heart’s ability to pump blood to the body.

Some people have minor arrhythmias that persist for long periods and are not dangerous—in fact, they are simply heartbeats that are normal for that particular person’s heart. A temporary arrhythmia can be caused by alcohol, caffeine, or simply not getting a good night’s sleep. Often, damage to the heart muscle results in a tendency to develop arrhythmias. This heart muscle damage is frequently the result of a heart attack, but can also develop for other reasons, such as after an infection or as part of a congenital defect.
Arrhythmias may involve either abnormally slow or abnormally fast rhythms. An abnormally slow rhythm sometimes results from slower firing of impulses from the SA node itself, a condition known as sinus bradycardia. An abnormally slow heartbeat may also be due to heart block, which arises when some or all of the impulses generated by the SA node fail to be transmitted to the ventricles. Even if impulses from the atria are blocked, the ventricles continue to contract because fibers in the ventricles can generate their own rhythm. However, the rhythm they generate is slow, often only about 40 beats per minute. An abnormally slow heartbeat is dangerous if the heart does not pump enough blood to supply the brain and the rest of the body with oxygen. In this case, episodes of dizziness, lightheadedness, or fainting may occur. Episodes of fainting caused by heart block are known as Stokes-Adams attacks.

Some types of abnormally fast heart rhythms—such as atrial tachycardia, an increased rate of atrial contraction—are usually not dangerous. Atrial fibrillation, in which the atria contract in a rapid, uncoordinated manner, may reduce the pumping efficiency of the heart. In a person with an otherwise healthy heart, this may not be dangerous, but in a person with other heart disease the reduced pumping efficiency may lead to heart failure or stroke.

By far the most dangerous type of rapid arrhythmia is ventricular fibrillation, in which ventricular contractions are rapid and chaotic. Fibrillation prevents the ventricles from pumping blood efficiently, and can lead to death within minutes. Ventricular fibrillation can be reversed with an electrical defibrillator, a device that delivers a shock to the heart. The shock briefly stops the heart from beating, and when the heartbeat starts again the SA node is usually able to resume a normal beat.

Most often, arrhythmias can be diagnosed with the use of an ECG. Some arrhythmias do not require treatment. Others may be controlled with medications such as digitalis, propranolol, or disopyramide. Patients with heart block or several other types of arrhythmias may have an artificial pacemaker implanted in their chest. This small, battery-powered electronic device delivers regular electrical impulses to the heart through wires attached to different parts of the heart muscle. Another type of implantable device, a miniature defibrillator, is used in some patients at risk for serious ventricular arrhythmias. This device works much like the larger defibrillator used by paramedics and in the emergency room, delivering an electric shock to reset the heart when an abnormal rhythm is detected.

[B]E -Other Forms of Heart Disease[/B]
In addition to the relatively common heart diseases described above, a wide variety of other diseases can also affect the heart. These include tumors, heart damage from other diseases such as syphilis and tuberculosis, and inflammation of the heart muscle, pericardium, or endocardium.

Myocarditis, or inflammation of the heart muscle, was commonly caused by rheumatic fever in the past. Today, many cases are due to viral infections, or their cause cannot be identified. Sometimes myocarditis simply goes away on its own. In a minority of patients, who often suffer repeated episodes of inflammation, myocarditis leads to permanent damage of the heart muscle, reducing the heart’s ability to pump blood and making it prone to developing abnormal rhythms.

Cardiomyopathy encompasses any condition that damages and weakens the heart muscle. Scientists believe that viral infections cause many cases of cardiomyopathy. Other causes include vitamin B deficiency, rheumatic fever, underactivity of the thyroid gland, and a genetic disease called hemochromatosis in which iron builds up in the heart muscle cells. Some types of cardiomyopathy can be controlled with medication, but others lead to progressive weakening of the heart muscle and sometimes result in heart failure.

In pericarditis, the most common disorder of the pericardium, the saclike membrane around the heart becomes inflamed. Pericarditis is most commonly caused by a viral infection, but may also be due to arthritis or an autoimmune disease such as systemic lupus erythematosus. It may be a complication of late-stage kidney disease, lung cancer, or lymphoma; it may be a side effect of radiation therapy or certain drugs. Pericarditis sometimes goes away without treatment, but it is often treated with anti-inflammatory drugs. It usually causes no permanent damage to the heart. If too much fluid builds up around the heart during an attack of pericarditis, the fluid may need to be drained with a long needle or in a surgical procedure. Patients who suffer repeated episodes of pericarditis may have the pericardium surgically removed.

Endocarditis is an infection of the inner lining of the heart, but damage from such an infection usually affects only the heart valves. Endocarditis often develops when bacteria from elsewhere in the body enter the bloodstream, settle on the flaps of one of the heart valves, and begin to grow there. The infection can be treated with antibiotics, but if untreated, endocarditis is often fatal. People with congenital heart defects, valve damage due to rheumatic fever, or other valve problems are at greatest risk for developing endocarditis. They often take antibiotics as a preventive measure before undergoing dental surgery or certain other types of surgery that can allow bacteria into the bloodstream. Intravenous drug users who share needles are another population at risk for endocarditis. People who use unclean needles, which allow bacteria into the bloodstream, frequently develop valve damage.

[B]F -Heart Failure[/B]
The final stage in almost any type of heart disease is heart failure, also known as congestive heart failure, in which the heart muscle weakens and is unable to pump enough blood to the body. In the early stages of heart failure, the muscle may enlarge in an attempt to contract more vigorously, but after a time this enlargement of the muscle simply makes the heart inefficient and unable to deliver enough blood to the tissues. In response to this shortfall, the kidneys conserve water in an attempt to increase blood volume, and the heart is stimulated to pump harder. Eventually excess fluid seeps through the walls of tiny blood vessels and into the tissues. Fluid may collect in the lungs, making breathing difficult, especially when a patient is lying down at night. Many patients with heart failure must sleep propped up on pillows to be able to breathe. Fluid may also build up in the ankles, legs, or abdomen. In the later stages of heart failure, any type of physical activity becomes next to impossible.

Almost any condition that overworks or damages the heart muscle can eventually result in heart failure. The most common cause of heart failure is coronary heart disease. Heart failure may develop when the death of heart muscle in a heart attack leaves the heart with less strength to pump blood, or simply as a result of long-term oxygen deprivation due to narrowed coronary arteries. Hypertension or malfunctioning valves that force the heart to work harder over extended periods of time may also lead to heart failure. Viral or bacterial infections, alcohol abuse, and certain chemicals (including some lifesaving drugs used in cancer chemotherapy) can all damage the heart muscle and result in heart failure.

Despite its ominous name, heart failure can sometimes be reversed and can often be effectively treated for long periods with a combination of drugs. About 4.6 million people in the United States are living with heart failure today. Medications such as digitalis are often prescribed to increase the heart’s pumping efficiency, while beta blockers may be used to decrease the heart’s workload. Drugs known as vasodilators relax the arteries and veins so that blood encounters less resistance as it flows. Diuretics stimulate the kidneys to excrete excess fluid.

A last resort in the treatment of heart failure is heart transplantation, in which a patient’s diseased heart is replaced with a healthy heart from a person who has died of other causes. Heart transplantation enables some patients with heart failure to lead active, healthy lives once again. However, a person who has received a heart transplant must take medications to suppress the immune system for the rest of his or her life in order to prevent rejection of the new heart. These drugs can have serious side effects, making a person more vulnerable to infections and certain types of cancer.

The first heart transplant was performed in 1967 by South African surgeon Christiaan Barnard. However, the procedure did not become widespread until the early 1980s, when the immune-suppressing drug cyclosporine became available. This drug helps prevent rejection without making patients as vulnerable to infection as they had been with older immune-suppressing drugs. About 3,500 heart transplants are performed worldwide each year, about 2,500 of them in the United States. Today, about 83 percent of heart transplant recipients survive at least one year, and 71 percent survive for four years.

A shortage of donor hearts is the main limitation on the number of transplants performed today. Some scientists are looking for alternatives to transplantation that would help alleviate this shortage of donor hearts. One possibility is to replace a human heart with a mechanical one. A permanent artificial heart was first implanted in a patient in 1982. Artificial hearts have been used experimentally with mixed success. They are not widely used today because of the risk of infection and bleeding and concerns about their reliability. In addition, the synthetic materials used to fashion artificial hearts can cause blood clots to form in the heart. These blood clots may travel to a vessel in the neck or head, resulting in a stroke. Perhaps a more promising option is the left ventricular assist device (LVAD). This device is implanted inside a person’s chest or abdomen to help the patient’s own heart pump blood. LVADs are used in many people waiting for heart transplants, and could one day become a permanent alternative to transplantation.

Some scientists are working to develop xenotransplantation, in which a patient’s diseased heart would be replaced with a heart from a pig or another species. However, this strategy still requires a great deal of research to prevent the human immune system from rejecting a heart from a different species. Some experts have also raised concerns about the transmission of harmful viruses from other species to humans as a result of xenotransplantation.

[B]V -HISTORY OF HEART RESEARCH [/B]
Scientific knowledge of the heart dates back almost as far as the beginnings of recorded history. The Egyptian physician Imhotep made observations on the pulse during the 2600s BC. During the 400s BC the Greek physician Hippocrates studied and wrote about various signs and symptoms of heart disease, and in the 300s BC the Greek philosopher Aristotle described the beating heart of a chick embryo. Among the first people to investigate and write about the anatomy of the heart was another Greek physician, Erasistratus, around 250 BC. Erasistratus described the appearance of the heart and the four valves inside it. Although he correctly deduced that the valves prevent blood from flowing backward in the heart, he did not understand that the heart was a pump. Galen, a Greek-born Roman physician, also wrote about the heart during the second century AD. He recognized that the heart was made of muscle, but he believed that the liver was responsible for the movement of blood through the body.

Heart research did not greatly expand until the Renaissance in Europe (14th century to 16th century). During that era, scientists began to connect the heart’s structure with its function. In the mid-16th century the Spanish physician and theologian Michael Servetus described how blood passes through the four chambers of the heart and picks up oxygen in the lungs. Perhaps the most significant contributions were made by English physician William Harvey, who discovered the circulation of blood in 1628. Harvey was the first to realize that the heart is a pump responsible for the movement of blood through the body. His work revealed how the heart works with the blood and blood vessels to nourish the body, establishing the concept of the circulatory system.

The 20th century witnessed extraordinary advances in the diagnosis of heart diseases, corrective surgeries, and other forms of treatment for heart problems. Many doctors had become interested in measuring the pulse and abnormal heartbeats. This line of research culminated in the 1902 invention of the electrocardiograph by Dutch physiologist Willem Einthoven, who received the Nobel Prize for this work in 1924. Another major advance in diagnosis was cardiac catheterization, which was pioneered in 1929 by German physician Werner Forssmann. After performing experiments on animals, Forssmann inserted a catheter through a vein in his arm and into his own heart—a stunt for which he was fired from his job. Two American physicians, André Cournand and Dickinson Richards, later continued research on catheterization, and the technique became commonly used during the 1940s. The three scientists received the Nobel Prize in 1956 for their work.

At the beginning of the 20th century, most doctors believed that surgery on the heart would always remain impossible, as the heart was thought to be an extremely delicate organ. Most of the first heart operations were done in life-or-death trauma situations. American physician L. L. Hill performed the first successful heart surgery in the United States in 1902, sewing up a stab wound in the left ventricle of an 8-year-old boy. The next year, French surgeon Marin Théodore Tuffier removed a bullet from a patient’s left atrium.

Surgery to correct some congenital defects involving blood vessels also helped lay the foundations for surgery on the heart itself. In 1938 American surgeon Robert Gross performed the first successful surgery to treat an open ductus arteriosus, tying the vessel closed with thread. In 1944 Gross and Swedish surgeon Clarence Crafoord each performed successful surgery for coarctation of the aorta. The same year, American surgeon Alfred Blalock and surgical assistant Vivien Thomas performed the first successful operation to treat tetralogy of Fallot. But the greatest leap forward came in 1953, when American physician John Gibbon introduced the heart-lung machine, a device to oxygenate and pump blood during surgery on the heart. This invention made open-heart surgery—with the heart stopped for the duration of the operation—possible. It led to now-routine surgical techniques such as valve replacement, correction of congenital defects, and bypass surgery.

The rapid pace of scientific discovery during the 20th century has also led to many nonsurgical treatments for diseases of the heart. The introduction of antibiotics to treat bacterial infections greatly reduced sickness and deaths due to heart disease from rheumatic fever, endocarditis, and other infections involving the heart, although these infections remain a significant threat in many developing nations. Many effective drugs to control hypertension, reduce cholesterol, relieve angina, limit damage from heart attacks, and treat other forms of heart disease have also been developed. Advances in electronics led to implantable pacemakers in 1959 and implantable defibrillators in 1982.

[B]VI -HEARTS IN OTHER ANIMALS [/B]
Among different groups of animals, hearts vary greatly in size and complexity. In insects, the heart is a hollow bulb with muscular walls that contract to push blood into an artery. Many insects have several such hearts arranged along the length of the artery. When the artery ends, blood percolates among the cells of the insect’s body, eventually making its way back to the heart. In an insect, blood may take as long as an hour to complete a trip around the body.

In earthworms and other segmented worms, known as annelids, blood flows toward the back of the body through the ventral blood vessel and toward the front of the body through the dorsal blood vessel. Five pairs of hearts, or aortic arches, help pump blood. The hearts are actually segments of the dorsal blood vessel and are similar in structure to those of insects.

In vertebrates, or animals with a backbone, the heart is a separate, specialized organ rather than simply a segment of a blood vessel. In fish, the heart has two chambers: an atrium (receiving chamber) and a ventricle (pumping chamber). Oxygen-depleted blood returning from the fish’s body empties into the atrium, which pumps blood into the ventricle. The ventricle then pumps the blood to the gills, the respiratory organs of fish. In the gills, the blood picks up oxygen from the water and gets rid of carbon dioxide. The freshly oxygenated blood leaves the gills and travels to various parts of the body. In fish, as in humans, blood passes through the respiratory organs before it is distributed to the body. Unlike in humans, the blood does not return to the heart between visiting the respiratory organs and being distributed to the tissues. Without the added force from a second trip through the heart, blood flows relatively slowly in fish compared to humans and other mammals. However, this sluggish flow is enough to supply the fish’s relatively low oxygen demand.

As vertebrates moved from life in the sea to life on land, they evolved lungs as new respiratory organs for breathing. At the same time, they became more active and developed greater energy requirements. Animals use oxygen to release energy from food molecules in a process called cellular respiration, so land-dwelling vertebrates also developed a greater requirement for oxygen. These changes, in turn, led to changes in the structure of the heart and circulatory system. Amphibians and most reptiles have a heart with three chambers—two atria and a single ventricle. These animals also have separate circuits of blood vessels for oxygenating blood and delivering it to the body.

Deoxygenated blood returning from the body empties into the right atrium. From there, blood is conducted to the ventricle and is then pumped to the lungs. After picking up oxygen and getting rid of carbon dioxide in the lungs, blood returns to the heart and empties into the left atrium. The blood then enters the ventricle a second time and is pumped out to the body. The second trip through the heart keeps blood pressure strong and blood flow rapid as blood is pumped to the tissues, helping the blood deliver oxygen more efficiently.

The three-chambered heart of amphibians and reptiles also creates an opportunity for blood to mix in the ventricle, which pumps both oxygenated and deoxygenated blood with each beat. While in birds and mammals this would be deadly, scientists now understand that a three-chambered heart is actually advantageous for amphibians and reptiles. These animals do not breathe constantly—for example, amphibians absorb oxygen through their skin when they are underwater—and the three-chambered heart enables them to adjust the proportions of blood flowing to the body and the lungs depending on whether the animal is breathing or not. For these animals, the three-chambered heart thus results in more efficient oxygen delivery.

Birds and mammals have high energy requirements even by vertebrate standards, and a correspondingly high demand for oxygen. Their hearts have four chambers—two atria and two ventricles—resulting in a complete separation of oxygenated and deoxygenated blood and highly efficient delivery of oxygen to the tissues. Small mammals have more rapid heart rates than large mammals because they have greater energy needs relative to their body size. The resting heart rate of a mouse is 500 to 600 beats per minute, while that of an elephant is 30 beats per minute. Blood pressure also varies among different mammal species. Blood pressure in a giraffe’s aorta is about 220 mm of mercury when the animal is standing. This pressure would be dangerously high in a human, but is necessary in a giraffe to lift blood up the animal’s long neck to its brain.

Although other groups of vertebrates have hearts with a different structure than those of humans, they are still sufficiently similar that scientists can learn about the human heart from other animals. Scientists use a transparent fish, the zebra fish, to learn how the heart and the blood vessels that connect to it form before birth. Fish embryos are exposed to chemicals known to cause congenital heart defects, and scientists look for resulting genetic changes. Researchers hope that these studies will help us understand why congenital heart malformations occur, and perhaps one day prevent these birth defects.

Predator Thursday, November 15, 2007 04:13 PM

The human heart
 
The human heart is a hollow, pear-shaped organ about the size of a fist. The heart is made of muscle that rhythmically contracts, or beats, pumping blood throughout the body. Oxygen-poor blood from the body enters the heart from two large blood vessels, the inferior vena cava and the superior vena cava, and collects in the right atrium. When the atrium fills, it contracts, and blood passes through the tricuspid valve into the right ventricle. When the ventricle becomes full, it starts to contract, and the tricuspid valve closes to prevent blood from moving back into the atrium.

As the right ventricle contracts, it forces blood into the pulmonary artery, which carries blood to the lungs to pick up fresh oxygen. When blood exits the right ventricle, the ventricle relaxes and the pulmonary valve shuts, preventing blood from passing back into the ventricle. Blood returning from the lungs to the heart collects in the left atrium. When this chamber contracts, blood flows through the mitral valve into the left ventricle. The left ventricle fills and begins to contract, and the mitral valve between the two chambers closes. In the final phase of blood flow through the heart, the left ventricle contracts and forces blood into the aorta. After the blood in the left ventricle has been forced out, the ventricle begins to relax, and the aortic valve at the opening of the aorta closes.

Predator Thursday, November 15, 2007 04:15 PM

valves
 
Thin, fibrous flaps called valves lie at the opening of the heart's pulmonary artery and aorta. Valves are also present between each atrium and ventricle of the heart. Valves prevent blood from flowing backward in the heart. In this illustration of the pulmonary valve, as the heart contracts, blood pressure builds and pushes blood up against the pulmonary valve, forcing it to open. As the heart relaxes between one beat and the next, blood pressure falls. Blood flows back from the pulmonary artery, forcing the pulmonary valve to close, and preventing backflow of blood.

Predator Thursday, November 15, 2007 04:23 PM

Tissues
 
[B][U][CENTER][SIZE="3"]TISSUES[/SIZE][/CENTER][/U][/B]

[B]I -INTRODUCTION[/B]
Tissue, group of associated, similarly structured cells that perform specialized functions for the survival of the organism. Animal tissues, to which this article is limited, take their first form when the blastula cells, arising from the fertilized ovum, differentiate into three germ layers: the ectoderm, mesoderm, and endoderm. Through further cell differentiation, or histogenesis, groups of cells grow into more specialized units to form organs made up, usually, of several tissues of similarly performing cells. Animal tissues are classified into four main groups.

[B]II -EPITHELIAL TISSUES[/B]
These tissues include the skin and the inner surfaces of the body, such as those of the lungs, stomach, intestines, and blood vessels. Because its primary function is to protect the body from injury and infection, epithelium is made up of tightly packed cells with little intercellular substance between them.

About 12 kinds of epithelial tissue occur. One kind is stratified squamous tissue found in the skin and the linings of the esophagus and vagina. It is made up of thin layers of flat, scalelike cells that form rapidly above the blood capillaries and are pushed toward the tissue surface, where they die and are shed. Another is simple columnar epithelium, which lines the digestive system from the stomach to the anus; these cells stand upright and not only control the absorption of nutrients but also secrete mucus through individual goblet cells. Glands are formed by the inward growth of epithelium—for example, the sweat glands of the skin and the gastric glands of the stomach. Outward growth results in hair, nails, and other structures.

[B]III -CONNECTIVE TISSUES[/B]
These tissues, which support and hold parts of the body together, comprise the fibrous and elastic connective tissues, the adipose (fatty) tissues, and cartilage and bone. In contrast to epithelium, the cells of these tissues are widely separated from one another, with a large amount of intercellular substance between them. The cells of fibrous tissue, found throughout the body, connect to one another by an irregular network of strands, forming a soft, cushiony layer that also supports blood vessels, nerves, and other organs. Adipose tissue has a similar function, except that its fibroblasts also contain and store fat. Elastic tissue, found in ligaments, the trachea, and the arterial walls, stretches and contracts again with each pulse beat. In the human embryo, the fibroblast cells that originally secreted collagen for the formation of fibrous tissue later change to secrete a different form of protein called chondrin, for the formation of cartilage; some cartilage later becomes calcified by the action of osteoblasts to form bones. Blood and lymph are also often considered connective tissues.

[B]IV -MUSCLE TISSUES[/B]
These tissues, which contract and relax, comprise the striated, smooth, and cardiac muscles. Striated muscles, also called skeletal or voluntary muscles, include those that are activated by the somatic, or voluntary, nervous system. Their fibers are joined together without boundaries between individual cells and contain several nuclei. The smooth, or involuntary muscles, which are activated by the autonomic nervous system, are found in the internal organs and consist of simple sheets of cells. Cardiac muscles, which have characteristics of both striated and smooth muscles, are joined together in a vast network of interlacing cells and muscle sheaths.

[B]V -NERVE TISSUES[/B]
These highly complex groups of cells transfer information from one part of the body to another; clusters of nerve cell bodies are known as ganglia. Each neuron, or nerve cell, consists of a cell body with branching dendrites and one long fiber, or axon. The dendrites connect one neuron to another; the axon transmits impulses to an organ or collects impulses from a sensory organ.

Predator Thursday, November 15, 2007 04:25 PM

Epithelial_Cell
 
A color-enhanced microscopic photograph reveals the distribution of structures and substances in epithelial cells isolated from the pancreas. The red areas correspond to deoxyribonucleic acid, the blue to microtubules, and the green to actin. The cells secrete bicarbonate which neutralizes acid.

Predator Thursday, November 15, 2007 04:35 PM

Compound Microscope
 
Two convex lenses can form a microscope. The objective lens is positioned close to the object to be viewed. It forms an upside-down and magnified image called a real image because the light rays actually pass through the place where the image lies. The ocular lens, or eyepiece lens, acts as a magnifying glass for this real image. The ocular lens makes the light rays spread more, so that they appear to come from a large inverted image beyond the objective lens. Because light rays do not actually pass through this location, the image is called a virtual image.
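
The overall magnifying power of such a two-lens instrument is often estimated, under the thin-lens approximation, as the objective magnification multiplied by the eyepiece magnification. The Python sketch below shows that arithmetic; the tube length, near-point distance, and focal lengths are illustrative assumptions and are not taken from the text.

[CODE]
def microscope_magnification(objective_focal_mm, eyepiece_focal_mm,
                             tube_length_mm=160.0, near_point_mm=250.0):
    """Approximate total magnification of a two-lens compound microscope.

    Under the thin-lens approximation, the objective magnifies by roughly
    tube_length / objective_focal_length, and the eyepiece, acting as a
    magnifying glass, contributes roughly near_point / eyepiece_focal_length.
    """
    objective_mag = tube_length_mm / objective_focal_mm
    eyepiece_mag = near_point_mm / eyepiece_focal_mm
    return objective_mag * eyepiece_mag

# Assumed illustrative focal lengths: a 4 mm objective and a 25 mm eyepiece
print(microscope_magnification(4.0, 25.0))   # 40 x 10 = 400x overall
[/CODE]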

