  #21  
Old Tuesday, November 13, 2007

Immunization

I -INTRODUCTION
Immunization, also called vaccination or inoculation, is a method of stimulating resistance in the human body to specific diseases using microorganisms—bacteria or viruses—that have been modified or killed. These treated microorganisms do not cause the disease, but rather trigger the body's immune system to build a defense mechanism that continuously guards against the disease. If a person immunized against a particular disease later comes into contact with the disease-causing agent, the immune system is immediately able to respond defensively.

Immunization has dramatically reduced the incidence of a number of deadly diseases. For example, a worldwide vaccination program resulted in the global eradication of smallpox in 1980, and in most developed countries immunization has essentially eliminated diphtheria, poliomyelitis, and neonatal tetanus. The number of cases of Haemophilus influenzae type b meningitis in the United States has dropped 95 percent among infants and children since 1988, when the vaccine for that disease was first introduced. In the United States, more than 90 percent of children receive all the recommended vaccinations by their second birthday. About 85 percent of Canadian children are immunized by age two.

II -TYPES OF IMMUNIZATION
Scientists have developed two approaches to immunization: active immunization, which provides long-lasting immunity, and passive immunization, which gives temporary immunity. In active immunization, all or part of a disease-causing microorganism or a modified product of that microorganism is injected into the body to make the immune system respond defensively. Passive immunity is accomplished by injecting blood from an actively immunized human being or animal.

A -Active Immunization
Vaccines that provide active immunization are made in a variety of ways, depending on the type of disease and the organism that causes it. The active components of the vaccinations are antigens, substances found in the disease-causing organism that the immune system recognizes as foreign. In response to the antigen, the immune system develops either antibodies or white blood cells called T lymphocytes, which are special attacker cells. Immunization mimics real infection but presents little or no risk to the recipient. Some immunizing agents provide complete protection against a disease for life. Other agents provide partial protection, meaning that the immunized person can contract the disease, but in a less severe form. These vaccines are usually considered risky for people who have a damaged immune system, such as those infected with the virus that causes acquired immunodeficiency syndrome (AIDS) or those receiving chemotherapy for cancer or organ transplantation. Without a healthy defense system to fight infection, these people may develop the disease that the vaccine is trying to prevent. Some immunizing agents require repeated inoculations—or booster shots—at specific intervals. Tetanus shots, for example, are recommended every ten years throughout life.

In order to make a vaccine that confers active immunization, scientists use an organism or part of one that has been modified so that it has a low risk of causing illness but still triggers the body’s immune defenses against disease. One type of vaccine contains live organisms that have been attenuated—that is, their virulence has been weakened. This procedure is used to protect against yellow fever, measles, smallpox, and many other viral diseases.
Immunization can also occur when a person receives an injection of killed or inactivated organisms that are relatively harmless but that still contain antigens. This type of vaccine is used to protect against poliomyelitis (the injected Salk vaccine) and against bacterial diseases such as typhoid fever and whooping cough.

Some vaccines use only parts of an infectious organism that contain antigens, such as a protein from the cell wall or a flagellum. Known as acellular vaccines, they produce the desired immunity with a lower risk of producing potentially harmful immune reactions that may result from exposure to other parts of the organism. Acellular vaccines include the Haemophilus influenzae type B vaccine for meningitis and newer versions of the whooping cough vaccine. Scientists use genetic engineering techniques to refine this approach further by isolating a gene or genes within an infectious organism that code for a particular antigen. The subunit vaccines produced by this method cannot cause disease and are safe to use in people who have an impaired immune system. Subunit vaccines for hepatitis B and for pneumococcal infection, a cause of pneumonia, became available in the late 1980s and 1990s.

Active immunization can also be carried out using bacterial toxins that have been treated with chemicals so that they are no longer toxic, even though their antigens remain intact. These inactivated toxins, known as toxoids, are used rather than the organisms themselves in vaccinating against tetanus, diphtheria, botulism, and similar toxin-mediated diseases.

B -Passive Immunization
Passive immunization is performed without injecting any antigen. In this method, vaccines contain antibodies obtained from the blood of an actively immunized human being or animal. The antibodies last for two to three weeks, and during that time the person is protected against the disease. Although short-lived, passive immunization provides immediate protection, unlike active immunization, which can take weeks to develop. Consequently, passive immunization can be lifesaving when a person has been infected with a deadly organism.

Occasionally there are complications associated with passive immunization. Diseases such as botulism and rabies once posed a particular problem. Immune globulin (antibody-containing plasma) for these diseases was once derived from the blood serum of horses. Although this animal material was specially treated before administration to humans, serious allergic reactions were common. Today, human-derived immune globulin is more widely available and the risk of side effects is reduced.

III -IMMUNIZATION RECOMMENDATIONS
More than 50 vaccines for preventable diseases are licensed in the United States. The American Academy of Pediatrics and the U.S. Public Health Service recommend a series of immunizations beginning at birth. The initial series for children is complete by the time they reach the age of two, but booster vaccines are required for certain diseases, such as diphtheria and tetanus, in order to maintain adequate protection. When new vaccines are introduced, it is uncertain how long full protection will last. Recently, for example, it was discovered that a single injection of measles vaccine, first licensed in 1963 and administered to children at the age of 15 months, did not confer protection through adolescence and young adulthood. As a result, in the 1980s a series of measles epidemics occurred on college campuses throughout the United States among students who had been vaccinated as infants. To forestall future epidemics, health authorities now recommend that a booster dose of the measles, mumps, and rubella (also known as German measles) vaccine be administered at the time a child first enters school.
Not only children but also adults can benefit from immunization. Many adults in the United States are not sufficiently protected against tetanus, diphtheria, measles, mumps, and German measles. Health authorities recommend that most adults 65 years of age and older, and those with respiratory illnesses, be immunized against influenza (yearly) and pneumococcus (once).

IV -HISTORY OF IMMUNIZATION
The use of immunization to prevent disease predated the knowledge of both infection and immunology. In China, perhaps as early as the 10th century AD, smallpox material was inoculated through the nostrils. Inoculation of healthy people with a tiny amount of material from smallpox sores was first attempted in England in 1718 and later in America. Those who survived the inoculation became immune to smallpox. American statesman Thomas Jefferson traveled from his home in Virginia to Philadelphia, Pennsylvania, to undergo this risky procedure.

A significant breakthrough came in 1796 when British physician Edward Jenner discovered that he could immunize patients against smallpox by inoculating them with material from cowpox sores. Cowpox is a far milder disease that, unlike smallpox, carries little risk of death or disfigurement. Jenner inserted matter from cowpox sores into cuts he made on the arm of a healthy eight-year-old boy. The boy caught cowpox. However, when Jenner exposed the boy to smallpox eight weeks later, the child did not contract the disease. The vaccination with cowpox had made him immune to the smallpox virus. Today we know that the cowpox virus antigens are so similar to those of the smallpox virus that they trigger the body's defenses against both diseases.

In 1885 Louis Pasteur created the first successful vaccine against rabies for a young boy who had been bitten 14 times by a rabid dog. Over the course of ten days, Pasteur injected progressively more virulent preparations of the rabies virus into the boy, causing the boy to develop immunity in time to avert death from this disease.

Another major milestone in the use of vaccination to prevent disease occurred with the efforts of two American physician-researchers. In 1954 Jonas Salk introduced an injectable vaccine containing an inactivated virus to counter the epidemic of poliomyelitis. Subsequently, Albert Sabin made great strides in the fight against this paralyzing disease by developing an oral vaccine containing a live weakened virus. Since the introduction of the polio vaccine, the disease has been nearly eliminated in many parts of the world.
As more vaccines are developed, a new generation of combined vaccines is becoming available that will allow physicians to administer a single shot for multiple diseases. Work is also under way to develop additional orally administered vaccines and vaccines for sexually transmitted diseases.

Possible future vaccines may include, for example, one that would temporarily prevent pregnancy. Such a vaccine would still operate by stimulating the immune system to recognize and attack antigens, but in this case the antigens would be those of the hormones that are necessary for pregnancy.
  #22  
Old Tuesday, November 13, 2007

Microscope

I -INTRODUCTION
Microscope, instrument used to obtain a magnified image of minute objects or minute details of objects.

II -OPTICAL MICROSCOPES
The most widely used microscopes are optical microscopes, which use visible light to create a magnified image of an object. The simplest optical microscope is a double-convex lens with a short focal length (see Optics). Double-convex lenses can magnify an object up to 15 times. The compound microscope uses two lenses, an objective lens and an ocular lens, mounted at opposite ends of a closed tube, to provide greater magnification than is possible with a single lens. The objective lens is composed of several lens elements that form an enlarged real image of the object being examined. The real image formed by the objective lens lies at the focal point of the ocular lens. Thus, the observer looking through the ocular lens sees an enlarged virtual image of the real image. The total magnification of a compound microscope is determined by the focal lengths of the two lens systems and can be more than 2,000 times.
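
As a rough illustration of how the two focal lengths set the overall power, the following Python sketch multiplies the objective and ocular magnifications; the 160 mm tube length and 250 mm near-point distance are assumed standard values, not figures from the article.

Code:
# Rough estimate of compound-microscope magnification (illustrative sketch).
# Assumes the usual thin-lens approximations:
#   objective magnification ~ tube length / objective focal length
#   ocular magnification    ~ near point (250 mm) / ocular focal length
def total_magnification(f_objective_mm, f_ocular_mm,
                        tube_length_mm=160.0, near_point_mm=250.0):
    m_objective = tube_length_mm / f_objective_mm
    m_ocular = near_point_mm / f_ocular_mm
    return m_objective * m_ocular

# Example: a 2 mm objective with a 25 mm ocular gives about 800x.
print(total_magnification(2.0, 25.0))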

Optical microscopes have a firm stand with a flat stage to hold the material examined and some means for moving the microscope tube toward and away from the specimen to bring it into focus. Ordinarily, specimens are transparent and are mounted on slides—thin, rectangular pieces of clear glass that are placed on the stage for viewing. The stage has a small hole through which light can pass from a light source mounted underneath the stage—either a mirror that reflects natural light or a special electric light that directs light through the specimen.

In photomicrography, the process of taking photographs through a microscope, a camera is mounted directly above the microscope's eyepiece. Normally the camera does not contain a lens because the microscope itself acts as the lens system.

Microscopes used for research have a number of refinements to enable a complete study of the specimens. Because the image of a specimen is highly magnified and inverted, manipulating the specimen by hand is difficult. Therefore, the stages of high-powered research microscopes can be moved by micrometer screws, and in some microscopes the stage can also be rotated. Research microscopes are also equipped with three or more objective lenses, mounted on a revolving head, so that the magnifying power of the microscope can be varied.

III -SPECIAL-PURPOSE OPTICAL MICROSCOPES
Different microscopes have been developed for specialized uses. The stereoscopic microscope, two low-powered microscopes arranged to converge on a single specimen, provides a three-dimensional image.
The petrographic microscope is used to analyze igneous and metamorphic rock. A Nicol prism or other polarizing device polarizes the light that passes through the specimen. Another Nicol prism or analyzer determines the polarization of the light after it has passed through the specimen. Rotating the stage causes changes in the polarization of light that can be measured and used to identify and estimate the mineral components of the rock.

The dark-field microscope employs a hollow, extremely intense cone of light concentrated on the specimen. The field of view of the objective lens lies in the hollow, dark portion of the cone and picks up only scattered light from the object. The clear portions of the specimen appear as a dark background, and the minute objects under study glow brightly against the dark field. This form of illumination is useful for transparent, unstained biological material and for minute objects that cannot be seen in normal illumination under the microscope.

The phase microscope also illuminates the specimen with a hollow cone of light. However, the cone of light is narrower and enters the field of view of the objective lens. Within the objective lens is a ring-shaped device that reduces the intensity of the light and introduces a phase shift of a quarter of a wavelength. This illumination causes minute variations of refractive index in a transparent specimen to become visible. This type of microscope is particularly effective for studying living tissue.

A typical optical microscope cannot resolve images smaller than the wavelength of light used to illuminate the specimen. An ultraviolet microscope uses the shorter wavelengths of the ultraviolet region of the light spectrum to increase resolution or to emphasize details by selective absorption (see Ultraviolet Radiation). Glass does not transmit the shorter wavelengths of ultraviolet light, so the optics in an ultraviolet microscope are usually quartz, fluorite, or aluminized-mirror systems. Ultraviolet radiation is invisible to human eyes, so the image must be made visible through phosphorescence (see Luminescence), photography, or electronic scanning.
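
To see why shorter wavelengths improve resolution, a minimal sketch using the standard Abbe estimate (smallest resolvable detail roughly equal to the wavelength divided by twice the numerical aperture) is shown below; the numerical-aperture value is assumed for illustration and does not come from the article.

Code:
# Smallest resolvable detail for a given illuminating wavelength (Abbe estimate).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2.0 * numerical_aperture)

print(abbe_limit_nm(550, 1.4))  # green visible light, oil-immersion lens: ~200 nm
print(abbe_limit_nm(250, 1.4))  # ultraviolet light, same aperture: ~90 nm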

The near-field microscope is an advanced optical microscope that is able to resolve details slightly smaller than the wavelength of visible light. This high resolution is achieved by passing a light beam through a tiny hole at a distance from the specimen of only about half the diameter of the hole. The light is played across the specimen until an entire image is obtained.

The magnifying power of a typical optical microscope is limited by the wavelengths of visible light. Details cannot be resolved that are smaller than these wavelengths. To overcome this limitation, the scanning interferometric apertureless microscope (SIAM) was developed. SIAM uses a silicon probe with a tip one nanometer (1 billionth of a meter) wide. This probe vibrates 200,000 times a second and scatters a portion of the light passing through an observed sample. The scattered light is then recombined with the unscattered light to produce an interference pattern that reveals minute details of the sample. SIAM can currently resolve details about 6,500 times smaller than those visible with conventional light microscopes.

IV -ELECTRON MICROSCOPES
An electron microscope uses electrons to “illuminate” an object. Electrons have a much smaller wavelength than light, so they can resolve much smaller structures. The smallest wavelength of visible light is about 4,000 angstroms (400 billionths of a meter). The wavelength of electrons used in electron microscopes is usually about half an angstrom (50 trillionths of a meter).
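
The electron wavelength quoted above follows from the de Broglie relation. The sketch below is a simplified, non-relativistic calculation (not from the article) showing how the wavelength depends on the accelerating voltage.

Code:
# Electron wavelength from the de Broglie relation: lambda = h / sqrt(2*m*e*V).
# Relativistic corrections, which matter at high voltages, are ignored here.
import math

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron mass, kg
Q_E = 1.602e-19  # electron charge, C

def electron_wavelength_angstroms(volts):
    return H / math.sqrt(2.0 * M_E * Q_E * volts) * 1e10

print(electron_wavelength_angstroms(600))      # ~0.5 angstrom
print(electron_wavelength_angstroms(100000))   # ~0.04 angstrom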

Electron microscopes have an electron gun that emits electrons, which then strike the specimen. Conventional lenses used in optical microscopes to focus visible light do not work with electrons; instead, magnetic fields (see Magnetism) are used to create “lenses” that direct and focus the electrons. Since electrons are easily scattered by air molecules, the interior of an electron microscope must be sealed at a very high vacuum. Electron microscopes also have systems that record or display the images produced by the electrons.

There are two types of electron microscopes: the transmission electron microscope (TEM), and the scanning electron microscope (SEM). In a TEM, the electron beam is directed onto the object to be magnified. Some of the electrons are absorbed or bounce off the specimen, while others pass through and form a magnified image of the specimen. The sample must be cut very thin to be used in a TEM, usually no more than a few thousand angstroms thick. A photographic plate or fluorescent screen beyond the sample records the magnified image. Transmission electron microscopes can magnify an object up to one million times. In a scanning electron microscope, a tightly focused electron beam moves over the entire sample to create a magnified image of the surface of the object in much the same way an electron beam scans an image onto the screen of a television. Electrons in the tightly focused beam might scatter directly off the sample or cause secondary electrons to be emitted from the surface of the sample. These scattered or secondary electrons are collected and counted by an electronic device. Each scanned point on the sample corresponds to a pixel on a television monitor; the more electrons the counting device detects, the brighter the pixel on the monitor is. As the electron beam scans over the entire sample, a complete image of the sample is displayed on the monitor.
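
The mapping from electron counts to pixel brightness can be pictured with a small sketch; this is a toy illustration, not an actual SEM control program. Each scanned point's count is scaled to a 0 to 255 brightness value.

Code:
# Toy illustration of SEM image formation: more detected electrons -> brighter pixel.
def counts_to_image(counts):
    """counts: 2-D list of detected-electron counts, one entry per scanned point."""
    peak = max(max(row) for row in counts) or 1
    return [[round(255 * c / peak) for c in row] for row in counts]

scan = [[12, 30, 44],
        [10, 55, 60],
        [ 8, 20, 25]]
for row in counts_to_image(scan):
    print(row)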

An SEM scans the surface of the sample bit by bit, in contrast to a TEM, which looks at a relatively large area of the sample all at once. Samples scanned by an SEM do not need to be thinly sliced, as do TEM specimens, but they must be dehydrated to prevent the secondary electrons emitted from the specimen from being scattered by water molecules in the sample.
Scanning electron microscopes can magnify objects 100,000 times or more. SEMs are particularly useful because, unlike TEMs and powerful optical microscopes, they can produce detailed three-dimensional images of the surface of objects.

The scanning transmission electron microscope (STEM) combines elements of an SEM and a TEM and can resolve single atoms in a sample.
The electron probe microanalyzer, an electron microscope fitted with an X-ray spectrum analyzer, can examine the high-energy X rays emitted by the sample when it is bombarded with electrons. The identity of different atoms or molecules can be determined from their X-ray emissions, so the electron probe analyzer not only provides a magnified image of the sample, but also information about the sample's chemical composition.

V -SCANNING PROBE MICROSCOPES
A scanning probe microscope uses a probe to scan the surface of a sample and provides a three-dimensional image of atoms or molecules on the surface of the object. The probe is an extremely sharp metal point that can be as narrow as a single atom at the tip.

An important type of scanning probe microscope is the scanning tunneling microscope (STM). Invented in 1981, the STM uses a quantum physics phenomenon called tunneling to provide detailed images of substances that can conduct electricity. The probe is brought to within a few angstroms of the surface of the material being viewed, and a small voltage is applied between the surface and the probe. Because the probe is so close to the surface, electrons leak, or tunnel, across the gap between the probe and surface, generating a current. The strength of the tunneling current depends on the distance between the surface and the probe. If the probe moves closer to the surface, the tunneling current increases, and if the probe moves away from the surface, the tunneling current decreases. As the scanning mechanism moves along the surface of the substance, the mechanism constantly adjusts the height of the probe to keep the tunneling current constant. By tracking these minute adjustments with many scans back and forth along the surface, a computer can create a three-dimensional representation of the surface.
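
The constant-current idea can be sketched numerically. In the toy model below (illustrative only; the decay constant, setpoint, and gain are assumed values, not taken from the article), the tunneling current falls off exponentially with the tip-surface gap, a simple feedback loop adjusts the tip height until the current matches a setpoint, and the recorded heights then trace the surface.

Code:
# Toy model of constant-current STM feedback (illustrative values).
import math

KAPPA = 1.0  # assumed decay constant, 1/angstrom

def tunneling_current(gap_angstroms, i0=1.0):
    # Current decays exponentially as the gap widens.
    return i0 * math.exp(-2.0 * KAPPA * gap_angstroms)

def track_surface(surface_heights, setpoint=0.01, gain=0.5, steps=50):
    """Return the tip height recorded above each surface point."""
    tip_height = 8.0  # arbitrary starting height
    profile = []
    for z_surface in surface_heights:
        for _ in range(steps):
            current = tunneling_current(tip_height - z_surface)
            # If the current is above the setpoint the tip is too close: raise it.
            error = math.log(current / setpoint)
            tip_height += gain * error / (2.0 * KAPPA)
        profile.append(tip_height)
    return profile

# Each recorded height sits a constant gap above the surface feature beneath it.
print(track_surface([0.0, 0.5, 1.2, 0.3]))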

Another type of scanning probe microscope is the atomic force microscope (AFM). The AFM does not use a tunneling current, so the sample does not need to conduct electricity. As the metal probe in an AFM moves along the surface of a sample, the electrons in the probe are repelled by the electrons of the atoms in the sample and the AFM adjusts the height of the probe to keep the force on it constant. A sensing mechanism records the up-and-down movements of the probe and feeds the data into a computer, which creates a three-dimensional image of the surface of the sample.
  #23  
Old Tuesday, November 13, 2007

Energy


Energy, capacity of matter to perform work as the result of its motion or its position in relation to forces acting on it. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum potential energy at the terminal points; at all intermediate positions it has both kinetic and potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are interconvertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two always remains the same.
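
The pendulum example can be checked numerically. The short sketch below uses an illustrative mass, length, and release angle (assumptions, not figures from the article) and shows that the sum of kinetic and potential energy is the same at every angle of the swing.

Code:
# Energy exchange in a swinging pendulum: kinetic + potential stays constant.
import math

G = 9.81                       # gravitational acceleration, m/s^2
LENGTH = 1.0                   # pendulum length, m (assumed)
MASS = 0.5                     # bob mass, kg (assumed)
THETA_MAX = math.radians(30)   # release angle (assumed)

def energies(theta):
    potential = MASS * G * LENGTH * (1.0 - math.cos(theta))
    total = MASS * G * LENGTH * (1.0 - math.cos(THETA_MAX))
    return total - potential, potential   # kinetic, potential

for angle_deg in (30, 20, 10, 0):
    ke, pe = energies(math.radians(angle_deg))
    print(angle_deg, round(ke, 3), round(pe, 3), round(ke + pe, 3))  # last column constant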

A weight suspended from a cord has potential energy due to its position, inasmuch as it can perform work in the process of falling. An electric battery has potential energy in chemical form. A piece of magnesium has potential energy stored in chemical form that is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic mechanical energy of the moving rotor of a dynamo is changed into kinetic electrical energy by electromagnetic induction. All forms of energy tend to be transformed into heat, which is the most transient form of energy. In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses.

Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At higher velocities close to that of light, as in nuclear reactions, energy and matter are interconvertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified.
  #24  
Old Wednesday, November 14, 2007

Fingerprinting


I -INTRODUCTION
Fingerprinting, method of identification using the impression made by the minute ridge formations or patterns found on the fingertips. No two persons have exactly the same arrangement of ridge patterns, and the patterns of any one individual remain unchanged through life. To obtain a set of fingerprints, the ends of the fingers are inked and then pressed or rolled one by one on some receiving surface. Fingerprints may be classified and filed on the basis of the ridge patterns, setting up an identification system that is almost infallible.

II -HISTORY
The first recorded use of fingerprints was by the ancient Assyrians and Chinese for the signing of legal documents. Probably the first modern study of fingerprints was made by the Czech physiologist Johannes Evangelista Purkinje, who in 1823 proposed a system of classification that attracted little attention. The use of fingerprints for identification purposes was proposed late in the 19th century by the British scientist Sir Francis Galton, who wrote a detailed study of fingerprints in which he presented a new classification system using prints of all ten fingers, which is the basis of identification systems still in use. In the 1890s the police in Bengal, India, under the British police official Sir Edward Richard Henry, began using fingerprints to identify criminals. As assistant commissioner of metropolitan police, Henry established the first British fingerprint files in London in 1901. Subsequently, the use of fingerprinting as a means for identifying criminals spread rapidly throughout Europe and the United States, superseding the old Bertillon system of identification by means of body measurements.

III -MODERN USE
As crime-detection methods improved, law enforcement officers found that any smooth, hard surface touched by a human hand would yield fingerprints made by the oily secretion present on the skin. When these so-called latent prints were dusted with powder or chemically treated, the identifying fingerprint pattern could be seen and photographed or otherwise preserved. Today, law enforcement agencies can also use computers to digitally record fingerprints and to transmit them electronically to other agencies for comparison. By comparing fingerprints at the scene of a crime with the fingerprint record of suspected persons, officials can establish absolute proof of the presence or identity of a person.

The confusion and inefficiency caused by the establishment of many separate fingerprint archives in the United States led the federal government to set up a central agency in 1924, the Identification Division of the Federal Bureau of Investigation (FBI). This division was absorbed in 1993 by the FBI’s Criminal Justice Information Services Division, which now maintains the world’s largest fingerprint collection. Currently the FBI has a library of more than 234 million civil and criminal fingerprint cards, representing 81 million people. In 1999 the FBI began full operation of the Integrated Automated Fingerprint Identification System (IAFIS), a computerized system that stores digital images of fingerprints for more than 36 million individuals, along with each individual’s criminal history if one exists. Using IAFIS, authorities can conduct automated searches to identify people from their fingerprints and determine whether they have a criminal record. The system also gives state and local law enforcement agencies the ability to electronically transmit fingerprint information to the FBI. The implementation of IAFIS represented a breakthrough in crimefighting by reducing the time needed for fingerprint identification from weeks to minutes or hours.
  #25  
Old Wednesday, November 14, 2007

Infrared Radiation


Infrared Radiation, emission of energy as electromagnetic waves in the portion of the spectrum just beyond the limit of the red portion of visible radiation (see Electromagnetic Radiation). The wavelengths of infrared radiation are shorter than those of radio waves and longer than those of light waves. They range between approximately 10^-6 and 10^-3 m (about 0.00004 and 0.04 in). Infrared radiation may be detected as heat, and instruments such as bolometers are used to detect it. See Radiation; Spectrum.
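
For a rough sense of scale, the sketch below (an illustration, not from the article) converts the quoted band limits to frequencies and uses Wien's displacement law to show why warm objects, such as the human body at about 310 K, radiate most strongly in this infrared band.

Code:
# Infrared band limits as frequencies, plus the thermal-emission peak of a warm body.
C = 3.0e8          # speed of light, m/s
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

for wavelength_m in (1e-6, 1e-3):          # band limits quoted above
    print(wavelength_m, "m ->", C / wavelength_m, "Hz")

print(WIEN_B / 310)  # peak emission wavelength of a ~310 K body: ~9.3e-6 m, in the infrared
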
Infrared radiation is used to obtain pictures of distant objects obscured by atmospheric haze, because visible light is scattered by haze but infrared radiation is not. The detection of infrared radiation is used by astronomers to observe stars and nebulas that are invisible in ordinary light or that emit radiation in the infrared portion of the spectrum.

An opaque filter that admits only infrared radiation is used for very precise infrared photographs, but an ordinary orange or light-red filter, which will absorb blue and violet light, is usually sufficient for most infrared pictures. Developed about 1880, infrared photography has today become an important diagnostic tool in medical science as well as in agriculture and industry. Use of infrared techniques reveals pathogenic conditions that are not visible to the eye or recorded on X-ray plates. Remote sensing by means of aerial and orbital infrared photography has been used to monitor crop conditions and insect and disease damage to large agricultural areas, and to locate mineral deposits. See Aerial Survey; Satellite, Artificial. In industry, infrared spectroscopy forms an increasingly important part of metal and alloy research, and infrared photography is used to monitor the quality of products. See also Photography: Photographic Films.

Infrared devices such as those used during World War II enable sharpshooters to see their targets in total visual darkness. These instruments consist essentially of an infrared lamp that sends out a beam of infrared radiation, often referred to as black light, and a telescope receiver that picks up returned radiation from the object and converts it to a visible image.
  #26  
Old Wednesday, November 14, 2007

Greenhouse Effect


I -INTRODUCTION
Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive.

The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally.

Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the last nearly 100 years. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions.

II -HOW THE GREENHOUSE EFFECT WORKS
The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface.

About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere.

Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns back to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life.

The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases. Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (about –2°F), instead of the present average surface temperature of 15°C (59°F).
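
The roughly –19°C figure can be reproduced with a standard back-of-the-envelope energy balance. The sketch below uses textbook values for the solar constant and planetary albedo (assumed here rather than taken from the article) and equates absorbed sunlight with the heat the planet radiates under the Stefan-Boltzmann law.

Code:
# Earth's temperature without the greenhouse effect: absorbed sunlight = radiated heat.
SOLAR_CONSTANT = 1361.0  # W/m^2 at Earth's distance from the Sun (assumed textbook value)
ALBEDO = 0.30            # fraction of sunlight reflected back to space (assumed)
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2*K^4)

absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0   # averaged over the whole sphere
temperature_k = (absorbed / SIGMA) ** 0.25
print(round(temperature_k - 273.15, 1))            # about -19 deg C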

To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life.

III -TYPES OF GREENHOUSE GASES
Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs).

Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface.

A -Water Vapor
Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentration of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), evaporation from oceans, lakes, rivers, and plants increases, raising the amount of water vapor in the atmosphere.

B -Carbon Dioxide
Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct.
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide.

As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue.

C -Methane
Many natural processes produce methane, also known as natural gas. Decomposition of carbon-containing substances found in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels.

Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far less than carbon dioxide, and methane only stays in the atmosphere for a decade or so. But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is 20 times more efficient at trapping infrared radiation radiated from the Earth’s surface than a molecule of carbon dioxide.

D -Nitrous Oxide
Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide.

Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century.

E -Fluorinated Compounds
Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide.

CFCs, first synthesized in 1928, were widely used as propellants in aerosol sprays, as blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical processes such as asthma inhalers.

Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of this chemical will be phased out by the year 2030, providing scientists some time to develop a new generation of safer, effective chemicals. HFCs, which do not contain chlorine and only remain in the atmosphere for a short time, are now considered the most effective and safest substitute for CFCs.

F -Other Synthetic Chemicals
Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because it traps heat more effectively than all other known greenhouse gases. Although the gas is undoubtedly a product of industrial processes, its exact sources remain uncertain.

IV -OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT
Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas, they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming.

However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century.

V -UNDERSTANDING THE GREENHOUSE EFFECT
Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof.

In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.”

Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentrations of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is not too far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming.

The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory, located about 3,400 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an increase of about 11 percent in only 31 years.
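
A quick check of the quoted figures (using only the readings given above) confirms the overall growth and the average annual increase over the 31-year period.

Code:
# Growth in atmospheric CO2 from the Mauna Loa readings quoted above.
readings = {1957: 315, 1970: 323, 1980: 335, 1988: 350}  # ppm

first_year, last_year = min(readings), max(readings)
rise = readings[last_year] - readings[first_year]
print(round(100 * rise / readings[first_year], 1), "% increase")        # ~11 %
print(round(rise / (last_year - first_year), 2), "ppm per year on average")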

As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992.

Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures. Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and some of the predictions based on these models often spark controversy within the science community. Nevertheless, the basic concept of global warming is widely accepted by most climate scientists.

VI -EFFORTS TO CONTROL GREENHOUSE GASES
Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. In 1997 representatives met in Kyôto, Japan, and produced an agreement, known as the Kyôto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators are trying to develop a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. Negotiating such detailed emissions trading rules has been a contentious task for the world community since the signing of the Kyôto Protocol. The agreement has not yet entered into force, and ratification received a setback in 2001 when newly elected U.S. president George W. Bush renounced the treaty on the grounds that the required carbon-dioxide reductions in the United States would be too costly. He also objected that developing nations would not be bound by similar carbon-dioxide reducing obligations. However, many experts expect that as the scientific evidence about the dangers of global warming continues to mount, nations will be motivated to cooperate more effectively to reduce the risks of climate change.
  #27  
Old Wednesday, November 14, 2007

Antimatter


Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties related to electromagnetism. For example, the antimatter electron, or positron, has opposite electric charge and magnetic moment (a property that determines how it behaves in a magnetic field), but is identical in all other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass, spin, and partial decay rates, antiparticles are identical with their corresponding particles.

The existence of antiparticles was first proposed by the British physicist Paul Adrien Maurice Dirac, whose proposal arose from his attempt to apply the techniques of relativistic mechanics (see Relativity) to quantum theory. In 1928 he developed the concept of a positively charged electron, but its actual existence was established experimentally in 1932. The existence of other antiparticles was presumed but not confirmed until 1955, when antiprotons and antineutrons were observed in particle accelerators. Since then, the full range of antiparticles has been observed or indicated. Antimatter atoms were created for the first time in September 1995 at the European Organization for Nuclear Research (CERN). Positrons were combined with antimatter protons to produce antimatter hydrogen atoms. These atoms of antimatter existed for only about forty-billionths of a second, but physicists hope future experiments will determine what differences there are between normal hydrogen and its antimatter counterpart.

A profound problem for particle physics, and for cosmology in general, is the apparent scarcity of antiparticles in the universe. Their absence on earth, except momentarily, is understandable, because particles and antiparticles annihilate each other with a great release of energy when they meet (see Annihilation). Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most of what is known about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
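The scale of the energy released in such annihilations can be estimated from Einstein's mass-energy relation (a back-of-the-envelope illustration added here, not a figure from the original article). When an electron and a positron annihilate, essentially all of their rest mass is converted into gamma-ray photons:

E = 2 m_e c^2 ≈ 2 × (9.11 × 10^-31 kg) × (3.0 × 10^8 m/s)^2 ≈ 1.6 × 10^-13 J ≈ 1.02 MeV

Gram for gram, complete matter-antimatter annihilation releases roughly a thousand times more energy than nuclear fission, which converts only about a tenth of a percent of the fuel's mass into energy.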

In 1997 scientists studying data gathered by the Compton Gamma Ray Observatory (GRO) operated by the National Aeronautics and Space Administration (NASA) found that the earth’s home galaxy—the Milky Way—contains large clouds of antimatter particles. Astronomers suggest that these clouds form when high-energy events—such as the collision of neutron stars, exploding stars, or black holes—create radioactive elements that decay into matter and antimatter or heat matter enough to make it split into particles of matter and antimatter. When antimatter particles meet particles of matter, the two annihilate each other and produce a burst of gamma rays. It was these gamma rays that GRO detected.

Magma


I -INTRODUCTION
Magma, molten or partially molten rock beneath the earth’s surface. Magma is generated when rock deep underground melts due to the high temperatures and pressures inside the earth. Because magma is lighter than the surrounding rock, it tends to rise. As it moves upward, the magma encounters colder rock and begins to cool. If the temperature of the magma drops low enough, the magma will crystallize underground to form rock; rock that forms in this way is called intrusive, or plutonic, igneous rock, because the magma has intruded, or pushed into, the surrounding rocks. If the crust through which the magma passes is sufficiently shallow, warm, or fractured, and if the magma is sufficiently hot and fluid, the magma will erupt at the surface of the earth, possibly forming volcanoes. Magma that erupts is called lava.

II -COMPOSITION OF MAGMA
Magmas are liquids that contain a variety of melted minerals and dissolved gases. Because magmas form deep underground, however, geologists cannot directly observe and measure their original composition. This difficulty has led to controversy over the exact chemical composition of magmas. Geologists cannot simply assume it is the same as the composition of the rock in the source region. One reason for this is that the source rock may melt only partially, releasing only the minerals with the lowest melting points. As a result, the composition of magma produced by melting 1 percent of a rock differs from the composition of magma produced by melting 20 percent of the same rock. Experiments have shown that the temperature and pressure at a given depth within the earth, together with the amount of water present there, affect the amount of melting. Because temperature and pressure increase with depth, melting an identical source rock at different depths will produce magmas of different composition. When these considerations are combined with the fact that the composition of the source rock may differ from one geographic region to another, there is a considerable range of possible compositions for magma.

As magma moves toward the surface, the pressure and temperature decrease, which causes partial crystallization, or the formation of mineral crystals within the magma. The minerals that crystallize as temperature and pressure change have compositions different from that of the initial magma, so the composition of the remaining liquid changes. The resulting crystals may separate from the liquid either by sinking or by a process known as filter-pressing, in which pressure compresses the liquid and causes it to move toward regions of lower pressure while leaving the crystals behind. As a result, the composition of the remaining magma is different from that of the initial magma. This process is known as magmatic differentiation, and it is the principal mechanism by which a wide variety of magmas and rocks can be produced from a single primary magma (see Igneous Rock: Formation of Igneous Rocks).

The composition of magma can also be modified by chemical interactions with, and melting of, the rocks through which it passes on its way upward. This process is known as assimilation. Magma cannot usually supply enough heat to melt a large amount of the surrounding rock, so assimilation seldom produces a significant change in the composition of magma.

Magmas also contain dissolved gases, because gases are especially soluble (easily dissolved) in liquids when the liquids are under pressure. Magma deep underground is under a pressure of thousands of atmospheres (one atmosphere is roughly the air pressure at sea level) due to the weight of the overlying rock. Gases commonly dissolved in magma are carbon dioxide, water vapor, and sulfur dioxide.

III -PHYSICAL PROPERTIES OF MAGMA
The density and viscosity, or thickness, of magma are the key physical factors that affect its upward passage. Most rocks expand about 10 percent when they melt, and hence most magma has a density of about 90 percent of that of the equivalent solid rock. This density difference gives the magma enough buoyancy to cause it to rise toward the surface.
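As a rough worked example (the numbers are illustrative assumptions, not values from the article): if a typical crustal rock with a density of about 2,700 kg per cubic meter expands by 10 percent when it melts, the resulting magma has a density of about

ρ_magma ≈ 2,700 / 1.10 ≈ 2,450 kg/m^3

a contrast of roughly 250 kg per cubic meter. Each cubic meter of magma therefore feels a net upward buoyant force of about Δρ × g ≈ 250 × 9.8 ≈ 2,500 newtons, which is what drives its ascent through the denser surrounding rock.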

The viscosity of a fluid is a measure of its resistance to flow. The viscosity of a magma affects how quickly the magma will rise, and it determines whether crystals of significantly different density will sink rapidly enough to change the bulk composition of the magma. Viscosity also influences the rate of release of gases from the magma when pressure is released. The viscosity of magma is closely related to the magma’s chemical composition. Magma rich in silicon and poor in magnesium and iron, called felsic magma, is very viscous, or thick (see Igneous Rock: Felsic Rocks). Magma poor in silicon and rich in magnesium and iron, called mafic magma, is quite fluid (see Igneous Rock: Mafic Rocks).

IV -GEOLOGICAL FEATURES FORMED BY MAGMA
Some magmas reach the surface of the earth and erupt from volcanoes or fissures before they solidify. Other magmas solidify before they reach the surface. Magma that reaches the surface and is erupted, or extruded, forms extrusive igneous rocks. Magma that intrudes, or pushes its way into, rocks deep underground and solidifies there forms intrusive igneous rock.
Volcanoes are cone-shaped mountains formed by the eruption of lava. Magma collects in a reservoir surrounded by rock, called a magma chamber, about 10 to 20 km (6 to 12 mi) below the volcano. A conduit known as a volcanic pipe provides a passage for the magma from the magma chamber to the volcano. As the magma rises in the conduit, the pressure of the overlying rock drops. Gases that were kept dissolved in the magma by the pressure expand and bubble out. The rapidly expanding gases propel the magma up the volcanic pipe, forcing the magma to the surface and leading to an eruption. The same process occurs when a shaken bottle of soda is suddenly opened.
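The scale of this expansion can be illustrated with a simple estimate (an added illustration under idealized assumptions, not from the article). Treating the released gas as an ideal gas and ignoring temperature changes, Boyle's law P1 V1 = P2 V2 implies that a bubble freed from magma under about 100 MPa of pressure (roughly the load of 3 to 4 km of overlying rock) grows by a factor of about

V2 / V1 = P1 / P2 = 100 MPa / 0.1 MPa = 1,000

by the time it reaches atmospheric pressure (about 0.1 MPa). This thousandfold expansion is what propels the magma up the conduit.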

The viscosity and dissolved-gas content of the magma control the character of the eruption. Low-viscosity magmas often have a low gas content. They flow easily from volcanic conduits and result in relatively quiet eruptions. Once the magma reaches the surface, it rapidly spreads out over the volcano. Such fluid lava creates broad, gently sloped volcanoes called shield volcanoes, because they resemble giant shields lying on the ground.
Low-viscosity lava can also flow from fissures (long cracks in the rock), forming huge lava lakes. Repeated eruptions result in formations called flood basalts. The Columbia Plateau, in the states of Washington, Oregon, and Idaho, is a flood basalt that covers nearly 200,000 sq km (about 80,000 sq mi) and is more than 4000 m (13,000 ft) thick in places.

If a low-viscosity magma contains moderate amounts of dissolved gas, the released gases can eject the magma from the top of the volcano with enough force to form a lava fountain. The blobs of lava that are ejected into the air are called pyroclasts. They accumulate around the base of the fountain, forming a cinder cone.

Medium-viscosity magmas usually contain higher amounts of gases. They tend to form stratovolcanoes. The higher amounts of gases in the magma lead to very explosive eruptions that spew out large amounts of volcanic material. Stratovolcanoes have steeper sides than shield volcanoes. They are also known as composite volcanoes because they are made up of alternating layers of lava flows and deposits of pyroclasts.

High-viscosity magmas do not extrude easily through volcanic conduits. They often have a high gas content as well. Both of these properties tend to promote explosive behavior, such as occurred on May 18, 1980, at Mount Saint Helens in Washington, when about 400 m (about 1,300 ft) of rock was blasted off its summit.

Intrusive bodies of rock formed from magma are classified by their size and shape. A batholith is an intrusive body that covers more than 100 sq km (nearly 40 sq mi). Lopoliths are saucer-shaped intrusions and may be up to 100 km (60 mi) in diameter and 8 km (5 mi) thick. Laccoliths have a flat base and a domed ceiling and are usually smaller than lopoliths. Sills and dikes are sheetlike intrusions that are very thin relative to their length. They range from less than one meter (about one yard) thick to several hundred meters thick; the Palisades sill in the state of New York is 300 m (1,000 ft) thick and 80 km (50 mi) long. Sills are formed when magma is forced between beds of layered rock; they run parallel to the layering of the surrounding rock. Dikes are formed when magma is forced into cracks in the surrounding rock; they tend to run perpendicular to the layering of the surrounding rock.

Rain


I -INTRODUCTION
Rain, precipitation of liquid drops of water. Raindrops generally have a diameter greater than 0.5 mm (0.02 in). They range in size up to about 3 mm (about 0.13 in) in diameter, and their rate of fall increases with their size, up to about 7.6 m (25 ft) per second. Larger drops tend to be flattened and broken into smaller drops by their rapid fall through the air. The precipitation of smaller drops, called drizzle, often severely restricts visibility but usually does not produce significant accumulations of water.

Amount or volume of rainfall is expressed as the depth of water that collects on a flat surface, and is measured in a rain gauge to the nearest 0.25 mm (0.01 in). Rainfall is classified as light if not more than 2.5 mm (0.10 in) per hr, heavy if more than 7.50 mm (more than 0.30 in) per hr, and moderate if between these limits.
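A minimal sketch of this classification rule in Python (added for illustration; the thresholds are the ones given above):

def classify_rainfall(mm_per_hour):
    """Classify rainfall intensity from its rate in millimeters per hour."""
    if mm_per_hour <= 2.5:      # not more than 2.5 mm (0.10 in) per hour
        return "light"
    if mm_per_hour > 7.5:       # more than 7.50 mm (0.30 in) per hour
        return "heavy"
    return "moderate"           # between these limits

print(classify_rainfall(1.0))   # light
print(classify_rainfall(5.0))   # moderate
print(classify_rainfall(12.0))  # heavy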

II -PROCESS OF PRECIPITATION
Air masses acquire moisture on passing over warm bodies of water, or over wet land surfaces. The moisture, or water vapor, is carried upward into the air mass by turbulence and convection (see Heat Transfer). The lifting required to cool and condense this water vapor results from several processes, and study of these processes provides a key for understanding the distribution of rainfall in various parts of the world.

The phenomenon of lifting, associated with the convergence of the trade winds (see Wind), results in a band of copious rains near the equator. This band, called the intertropical convergence zone (ITCZ), moves northward or southward with the seasons. In higher latitudes much of the lifting is associated with moving cyclones (see Cyclone), often taking the form of the ascent of warm moist air, over a mass of colder air, along an interface called a front. Lifting on a smaller scale is associated with convection in air that is heated by a warm underlying surface, giving rise to showers and thunderstorms. The heaviest rainfall over short periods of time usually comes from such storms. Air may also be lifted by being forced to rise over a land barrier, with the result that the exposed windward slopes have enhanced amounts of rain while the sheltered, or lee, slopes have little rain.

III -AVERAGE RAINFALL
In the U.S. the heaviest average rainfall amounts, up to 1778 mm (70 in), are experienced in the Southeast, where air masses from the tropical Atlantic and Gulf of Mexico are lifted frequently by cyclones and by convection. Moderate annual accumulations, from 762 to 1270 mm (30 to 50 in), occur throughout the eastern U.S., and are caused by cyclones in winter and convection in summer. The central plains, being farther from sources of moisture, have smaller annual accumulations, 381 to 1016 mm (15 to 40 in), mainly from summer convective storms. The southwestern U.S. is dominated by widespread descent of air in the subtropical Pacific anticyclone; rainfall is light, less than 254 mm (less than 10 in), except in the mountainous regions. The northwestern states are affected by cyclones from the Pacific Ocean, particularly during the winter, and rainfall there is abundant, especially on the westward-facing slopes of mountain ranges.

The world's heaviest average rainfall, about 10,922 mm (about 430 in) per year, occurs at Cherrapunji, in northeastern India, where moisture-laden air from the Bay of Bengal is forced to rise over the Khāsi Hills of Assam State. As much as 26,466 mm (1042 in), or 26 m (87 ft), of rain has fallen there in one year. Other extreme rainfall records include nearly 1168 mm (nearly 46 in) of rain in one day during a typhoon at Baguio, Philippines; 304.8 mm (12 in) within one hour during a thunderstorm at Holt, Missouri; and 62.7 mm (2.48 in) in a 5-minute period at Portobelo, Panama.

IV -ARTIFICIAL PRECIPITATION
Despite the presence of moisture and lifting, clouds sometimes fail to precipitate rain. This circumstance has stimulated intensive study of precipitation processes, specifically of how single raindrops are produced out of the million or so minute droplets inside clouds. Two precipitation processes are recognized: (1) at subfreezing temperatures, water evaporates from supercooled droplets and is deposited onto ice crystals, which grow until they fall into warmer layers and melt; and (2) larger drops, which fall at a higher speed, collect smaller droplets as they descend.

Efforts to effect or stimulate these processes artificially have led to extensive weather modification operations within the last 20 years (see Meteorology). These efforts have had only limited success, since most areas with deficient rainfall are dominated by air masses that have either inadequate moisture content or inadequate elevation, or both. Nevertheless, some promising results have been realized and much research is now being conducted in order to develop more effective methods of artificial precipitation.

Acid Rain


I -INTRODUCTION
Acid Rain, form of air pollution in which airborne acids produced by electric utility plants and other sources fall to Earth in distant regions. The corrosive nature of acid rain causes widespread damage to the environment. The problem begins with the production of sulfur dioxide and nitrogen oxides from the burning of fossil fuels, such as coal, natural gas, and oil, and from certain kinds of manufacturing. Sulfur dioxide and nitrogen oxides react with water and other chemicals in the air to form sulfuric acid, nitric acid, and other pollutants. These acid pollutants reach high into the atmosphere, travel with the wind for hundreds of miles, and eventually return to the ground by way of rain, snow, or fog, and as invisible “dry” forms.

Damage from acid rain has been widespread in eastern North America and throughout Europe, and in Japan, China, and Southeast Asia. Acid rain leaches nutrients from soils, slows the growth of trees, and makes lakes uninhabitable for fish and other wildlife. In cities, acid pollutants corrode almost everything they touch, accelerating natural wear and tear on structures such as buildings and statues. Acids combine with other chemicals to form urban smog, which attacks the lungs, causing illness and premature deaths.

II -FORMATION OF ACID RAIN
The process that leads to acid rain begins with the burning of fossil fuels. Burning, or combustion, is a chemical reaction in which oxygen from the air combines with carbon, nitrogen, sulfur, and other elements in the substance being burned. The new compounds formed are gases called oxides. When sulfur and nitrogen are present in the fuel, their reaction with oxygen yields sulfur dioxide and various nitrogen oxide compounds. In the United States, 70 percent of sulfur dioxide pollution comes from power plants, especially those that burn coal. In Canada, industrial activities, including oil refining and metal smelting, account for 61 percent of sulfur dioxide pollution. Nitrogen oxides enter the atmosphere from many sources, with motor vehicles emitting the largest share—43 percent in the United States and 60 percent in Canada.
Once in the atmosphere, sulfur dioxide and nitrogen oxides undergo complex reactions with water vapor and other chemicals to yield sulfuric acid, nitric acid, and other pollutants called nitrates and sulfates. The acid compounds are carried by air currents and the wind, sometimes over long distances. When clouds or fog form in acid-laden air, they too are acidic, and so is the rain or snow that falls from them.
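In simplified overall form (the actual atmospheric chemistry involves many intermediate steps and catalysts, so these equations are only a rough summary added for illustration), the main reactions can be written as:

2 SO2 + O2 → 2 SO3, followed by SO3 + H2O → H2SO4 (sulfuric acid)
3 NO2 + H2O → 2 HNO3 + NO (nitric acid)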

Acid pollutants also occur as dry particles and as gases, which may reach the ground without the help of water. When these “dry” acids are washed from ground surfaces by rain, they add to the acids in the rain itself to produce a still more corrosive solution. The combination of acid rain and dry acids is known as acid deposition.

III -EFFECTS OF ACID RAIN
The acids in acid rain react chemically with any object they contact. Acids are corrosive chemicals that react with other chemicals by giving up hydrogen ions. The acidity of a substance comes from the abundance of free hydrogen ions when the substance is dissolved in water. Acidity is measured using a pH scale with units from 0 to 14. Acidic substances have pH numbers from 1 to 6; the lower the pH number, the stronger, or more corrosive, the substance. Some nonacidic substances, called bases or alkalis, are like acids in reverse: they readily accept the hydrogen ions that the acids offer. Bases have pH numbers from 8 to 14, with the higher values indicating increased alkalinity. Pure water has a neutral pH of 7; it is neither acidic nor basic. Rain, snow, or fog with a pH below 5.6 is considered acid rain.
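A small Python sketch of the pH arithmetic described above (added for illustration; pH is the negative base-10 logarithm of the hydrogen ion concentration, so each one-unit drop in pH means a tenfold increase in acidity):

import math

def ph(hydrogen_ion_concentration):
    """pH from the hydrogen ion concentration in moles per liter."""
    return -math.log10(hydrogen_ion_concentration)

def is_acid_rain(ph_value):
    """Rain, snow, or fog with a pH below 5.6 counts as acid rain."""
    return ph_value < 5.6

print(round(ph(1e-7), 1))   # 7.0 -> neutral, like pure water
print(is_acid_rain(4.6))    # True
print(10 ** (5.6 - 4.6))    # 10.0 -> rain at pH 4.6 is ten times more acidic than at pH 5.6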
When bases mix with acids, the bases lessen the strength of an acid (see Acids and Bases). This buffering action regularly occurs in nature. Rain, snow, and fog formed in regions free of acid pollutants are slightly acidic, having a pH near 5.6. Alkaline chemicals in the environment, found in rocks, soils, lakes, and streams, regularly neutralize this precipitation. But when precipitation is highly acidic, with a pH below 5.6, naturally occurring acid buffers become depleted over time, and nature’s ability to neutralize the acids is impaired. Acid rain has been linked to widespread environmental damage, including soil and plant degradation, depleted life in lakes and streams, and erosion of human-made structures.

A -Soil
In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Some soils are quite alkaline and can neutralize acid deposition indefinitely; others, especially thin mountain soils derived from granite or gneiss, buffer acid only briefly.

B -Trees
By removing useful nutrients from the soil, acid rain slows the growth of plants, especially trees. It also attacks trees more directly by eating holes in the waxy coating of leaves and needles, causing brown dead spots. If many such spots form, a tree loses some of its ability to make food through photosynthesis. Also, organisms that cause disease can infect the tree through its injured leaves. Once weakened, trees are more vulnerable to other stresses, such as insect infestations, drought, and cold temperatures.
Spruce and fir forests at higher elevations, where the trees literally touch the acid clouds, seem to be most at risk. Acid rain has been blamed for the decline of spruce forests on the highest ridges of the Appalachian Mountains in the eastern United States. In the Black Forest of southwestern Germany, half of the trees are damaged from acid rain and other forms of pollution.

C -Agriculture
Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions, such as those in the Midwestern United States, can absorb and neutralize large amounts of acid. Mountain farms are more at risk—the thin soils in these higher elevations cannot neutralize so much acid. Farmers can prevent acid rain damage by monitoring the condition of the soil and, when necessary, adding crushed limestone to the soil to neutralize acid. If excessive amounts of nutrients have been leached out of the soil, farmers can replace them by adding nutrient-rich fertilizer.

D -Surface Waters
Acid rain falls directly on streams, lakes, and marshes and also drains into them from the surrounding land. Where there is snow cover in winter, local waters grow suddenly more acidic when the snow melts in the spring. Most natural waters are close to chemically neutral, neither acidic nor alkaline: their pH is between 6 and 8. In the northeastern United States and southeastern Canada, the water in some lakes now has a pH value of less than 5 as a result of acid rain. This means they are at least ten times more acidic than they should be. In the Adirondack Mountains of New York State, a quarter of the lakes and ponds are acidic, and many have lost their brook trout and other fish. In the middle Appalachian Mountains, over 1,300 streams are afflicted. All of Norway’s major rivers have been damaged by acid rain, severely reducing salmon and trout populations.

E -Plants and Animals
The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered. Some species that live in water are very sensitive to acidity, some less so. Freshwater clams and mayfly young, for instance, begin dying when the water pH reaches 6.0. Frogs can generally survive more acidic water, but if their supply of mayflies is destroyed by acid rain, frog populations may also decline. Fish eggs of most species stop hatching at a pH of 5.0. Below a pH of 4.5, water is nearly sterile, unable to support any wildlife.

Land animals dependent on aquatic organisms are also affected. Scientists have found that populations of snails living in or near water polluted by acid rain are declining in some regions. In the Netherlands, songbirds are finding fewer snails to eat. The eggs these birds lay have weakened shells because the birds are obtaining less calcium from snail shells.

F -Human-Made Structures
Acid rain and the dry deposition of acidic particles damage buildings, statues, automobiles, and other structures made of stone, metal, or any other material exposed to weather for long periods. The corrosive damage can be expensive and, in cities with very historic buildings, tragic. Both the Parthenon in Athens, Greece, and the Taj Mahal in Agra, India, are deteriorating due to acid pollution.

G -Human Health
The acidification of surface waters causes little direct harm to people. It is safe to swim in even the most acidified lakes. However, toxic substances leached from soil can pollute local water supplies. In Sweden, as many as 10,000 lakes have been polluted by mercury released from soils damaged by acid rain, and residents have been warned to avoid eating fish caught in these lakes. In the air, acids join with other chemicals to produce urban smog, which can irritate the lungs and make breathing difficult, especially for people who already have asthma, bronchitis, or other respiratory diseases. Solid particles of sulfates, a class of minerals derived from sulfur dioxide, are thought to be especially damaging to the lungs.

H -Acid Rain and Global Warming
Acid pollution has one surprising effect that may be beneficial. Sulfates in the upper atmosphere reflect some sunlight out into space, and thus tend to slow down global warming. Scientists believe that acid pollution may have delayed the onset of warming by several decades in the middle of the 20th century.

IV -EFFORTS TO CONTROL ACID RAIN
Acid rain can best be curtailed by reducing the amount of sulfur dioxide and nitrogen oxides released by power plants, motorized vehicles, and factories. The simplest way to cut these emissions is to use less energy from fossil fuels. Individuals can help. Every time a consumer buys an energy-efficient appliance, adds insulation to a house, or takes a bus to work, he or she conserves energy and, as a result, fights acid rain.

Another way to cut emissions of sulfur dioxide and nitrogen oxides is by switching to cleaner-burning fuels. For instance, coal can be high or low in sulfur, and some coal contains sulfur in a form that can be washed out easily before burning. By using more of the low-sulfur or cleanable types of coal, electric utility companies and other industries can pollute less. The gasoline and diesel oil that run most motor vehicles can also be formulated to burn more cleanly, producing less nitrogen oxide pollution. Clean-burning fuels such as natural gas are being used increasingly in vehicles. Natural gas contains almost no sulfur and produces very little nitrogen oxide pollution. Unfortunately, natural gas and the less-polluting coals tend to be more expensive, placing them out of the reach of nations that are struggling economically.
Pollution can also be reduced at the moment the fuel is burned. Several new kinds of burners and boilers alter the burning process to produce less nitrogen oxides and more free nitrogen, which is harmless. Limestone or sandstone added to the combustion chamber can capture some of the sulfur released by burning coal.

Once sulfur dioxide and oxides of nitrogen have been formed, there is one more chance to keep them out of the atmosphere. In smokestacks, devices called scrubbers spray a mixture of water and powdered limestone into the waste gases (flue gases), recapturing the sulfur. Pollutants can also be removed by catalytic converters. In a converter, waste gases pass over small beads coated with metals. These metals promote chemical reactions that change harmful substances to less harmful ones. In the United States and Canada, these devices are required in cars, but they are not often used in smokestacks.

Once acid rain has occurred, a few techniques can limit environmental damage. In a process known as liming, powdered limestone can be added to water or soil to neutralize the acid dropping from the sky. In Norway and Sweden, nations much afflicted with acid rain, lakes are commonly treated this way. Rural water companies may need to lime their reservoirs so that acid does not eat away water pipes. In cities, exposed surfaces vulnerable to acid rain destruction can be coated with acid-resistant paints. Delicate objects like statues can be sheltered indoors in climate-controlled rooms.
Cleaning up sulfur dioxide and nitrogen oxides will reduce not only acid rain but also smog, which will make the air look clearer. Based on a study of the value that visitors to national parks place on clear scenic vistas, the U.S. Environmental Protection Agency thinks that improving the vistas in eastern national parks alone will be worth $1 billion in tourist revenue a year.

A -National Legislation
In the United States, legislative efforts to control sulfur dioxide and nitrogen oxides began with passage of the Clean Air Act of 1970. This act established emissions standards for pollutants from automobiles and industry. In 1990 Congress approved a set of amendments to the act that impose stricter limits on pollution emissions, particularly pollutants that cause acid rain. These amendments aim to cut the national output of sulfur dioxide from 23.5 million tons to 16 million tons by the year 2010. Although no national target is set for nitrogen oxides, the amendments require that power plants, which emit about one-third of all nitrogen oxides released to the atmosphere, reduce their emissions from 7.5 million tons to 5 million tons by 2010. These rules were applied first to selected large power plants in Eastern and Midwestern states. In the year 2000, smaller, cleaner power plants across the country came under the law.

These 1990 amendments include a novel provision for sulfur dioxide control. Each year the government gives companies permits to release a specified number of tons of sulfur dioxide. Polluters are allowed to buy and sell their emissions permits. For instance, a company can choose to reduce its sulfur dioxide emissions more than the law requires and sell its unused pollution emission allowance to another company that is further from meeting emission goals; the buyer may then pollute above the limit for a certain time. Unused pollution rights can also be "banked" and kept for later use. It is hoped that this flexible market system will clean up emissions more quickly and cheaply than a set of rigid rules.
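The trading mechanism can be sketched with a toy example (the plants, tonnages, and permit price below are hypothetical, added purely for illustration):

# Each plant holds permits for a fixed tonnage of sulfur dioxide per year.
plants = {
    "Plant A": {"permits": 100_000, "emissions": 70_000},   # under its limit
    "Plant B": {"permits": 100_000, "emissions": 120_000},  # over its limit
}
price_per_ton = 200  # hypothetical market price in dollars

surplus = plants["Plant A"]["permits"] - plants["Plant A"]["emissions"]    # 30,000 tons spare
shortfall = plants["Plant B"]["emissions"] - plants["Plant B"]["permits"]  # 20,000 tons short
traded = min(surplus, shortfall)

print(f"Plant A sells {traded:,} tons of allowance to Plant B for ${traded * price_per_ton:,}")
# Combined emissions (190,000 tons) stay within the combined cap (200,000 tons),
# and the plant that can cut emissions most cheaply is rewarded for doing so.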

Legislation enacted in Canada restricts the annual amount of sulfur dioxide emissions to 2.3 million tons in all of Canada’s seven easternmost provinces, where acid rain causes the most damage. A national cap for sulfur dioxide emissions has been set at 3.2 million tons per year. Legislation is currently being developed to enforce stricter pollution emissions by 2010.
Norwegian law sets the goal of reducing sulfur dioxide emissions to 76 percent of 1980 levels and nitrogen oxide emissions to 70 percent of 1986 levels. To encourage cleanup, Norway collects a hefty tax from industries that emit acid pollutants. In some cases these taxes make it more expensive to emit acid pollutants than to reduce emissions.

B -International Agreements
Acid rain typically crosses national borders, making pollution control an international issue. Canada receives much of its acid pollution from the United States—by some estimates as much as 50 percent. Norway and Sweden receive acid pollutants from Britain, Germany, Poland, and Russia. The majority of acid pollution in Japan comes from China. Debates about responsibilities and cleanup costs for acid pollutants led to international cooperation. In 1988, as part of the Long-Range Transboundary Air Pollution Agreement sponsored by the United Nations, the United States and 24 other nations ratified a protocol promising to hold yearly nitrogen oxide emissions at or below 1987 levels. In 1991 the United States and Canada signed an Air Quality Agreement setting national limits on annual sulfur dioxide emissions from power plants and factories. In 1994 in Oslo, Norway, 12 European nations agreed to reduce sulfur dioxide emissions by as much as 87 percent by 2010.

Legislative actions to prevent acid rain have produced results. The targets established in laws and treaties are being met, usually ahead of schedule. Sulfur emissions in Europe decreased by 40 percent from 1980 to 1994. In Norway sulfur dioxide emissions fell by 75 percent during the same period. Since 1980 annual sulfur dioxide emissions in the United States have dropped from 26 million tons to 18.3 million tons. Canada reports sulfur dioxide emissions have been reduced to 2.6 million tons, 18 percent below the proposed limit of 3.2 million tons.

Monitoring stations in several nations report that precipitation is actually becoming less acidic. In Europe, lakes and streams are now growing less acidic as well. However, this does not seem to be the case in the United States and Canada. The reasons are not completely understood, but apparently the controls on nitrogen oxide emissions took effect only recently, and their benefits have yet to appear. In addition, soils in some areas have absorbed so much acid that they no longer contain neutralizing alkaline chemicals. The weathering of rock will gradually replace the missing alkaline chemicals, but scientists fear that improvement will be very slow unless pollution controls are made even stricter.