
Yousuf Shah Khagga Sunday, November 04, 2012 12:38 AM

Comprehensive Notes on Technology On Important Topics
 
[B]From tomorrow, Insha Allah, I will start posting all of this content one by one.
If seniors find any mistakes, kindly correct them, please.[/B]
[B][U][I]1: Technology
2: Computer
3: Airplane
4: Helicopter
5: Radar
6: Laser
7: Telecommunication
8: Wireless Telecommunication
9: Radio
10: Television
11: Telephone
12: Cellular Radio Telephone
13: Nuclear Weapons
14: Electric Motors and Generators
15: Internal-Combustion Engine
16: Steam Engine
17: Turbine
18: Rocket
19: Windmill
20: Motorcycle
21: Automobile
[/I][/U][/B]

Yousuf Shah Khagga Monday, December 31, 2012 03:09 PM

[SIZE="6"][B][U]Technology
[/U][/B][/SIZE]

[B]I INTRODUCTION[/B]
Technology, general term for the processes by which human beings fashion tools and machines to increase their control and understanding of the material environment. The term is derived from the Greek words tekhnē, which refers to an art or craft, and logia, meaning an area of study; thus, technology means, literally, the study, or science, of crafting.
Many historians of science argue not only that technology is an essential condition of advanced, industrial civilization but also that the rate of technological change has developed its own momentum in recent centuries. Innovations now seem to appear at a rate that increases geometrically, without respect to geographical limits or political systems. These innovations tend to transform traditional cultural systems, frequently with unexpected social consequences. Thus technology can be conceived as both a creative and a destructive process.
[B]II SCIENCE AND TECHNOLOGY[/B]
The meanings of the terms science and technology have changed significantly from one generation to another. More similarities than differences, however, can be found between the terms.
Both science and technology imply a thinking process, both are concerned with causal relationships in the material world, and both employ an experimental methodology that results in empirical demonstrations that can be verified by repetition. Science, at least in theory, is less concerned with the practicality of its results and more concerned with the development of general laws, but in practice science and technology are inextricably involved with each other. The varying interplay of the two can be observed in the historical development of such practitioners as chemists, engineers, physicists, astronomers, carpenters, potters, and many other specialists. Differing educational requirements, social status, vocabulary, methodology, and types of rewards, as well as institutional objectives and professional goals, contribute to such distinctions as can be made between the activities of scientists and technologists; but throughout history the practitioners of “pure” science have made many practical as well as theoretical contributions.
Indeed, the concept that science provides the ideas for technological innovations and that pure research is therefore essential for any significant advancement in industrial civilization is essentially a myth. Most of the greatest changes in industrial civilization cannot be traced to the laboratory. Fundamental tools and processes in the fields of mechanics, chemistry, astronomy, metallurgy, and hydraulics were developed before the laws governing their functions were discovered. The steam engine, for example, was commonplace before the science of thermodynamics elucidated the physical principles underlying its operations.
In recent years a sharp value distinction has grown up between science and technology. Advances in science have frequently had their bitter opponents, but today many people have come to fear technology much more than science. For these people, science may be perceived as a serene, objective source for understanding the eternal laws of nature, whereas the practical manifestations of technology in the modern world now seem to them to be out of control.
[B]III ANCIENT AND MEDIEVAL TECHNOLOGY[/B]
Technology has been a dialectical and cumulative process at the center of human experience. It is perhaps best understood in a historical context that traces the evolution of early humans from a period of very simple tools to the complex, large-scale networks that influence most of contemporary human life. For the sake of simplicity, the following account focuses primarily on developments in the Western world, but major contributions from other cultures are also indicated.
[B]A Early Technology[/B]
The earliest known human artifacts are roughly flaked stones used for chopping and scraping, found primarily in eastern Africa. Known as Oldowan tools, they date from about 2.3 million years before present, and serve to define the beginning of the Stone Age. The first toolmakers were nomadic groups of people who used the sharp edges of stone to process food. By about 40,000 years before present, humans had begun to use fire and to make a variety of tools, including pear-shaped axes, scrapers, knives, and other instruments of stone, bone, and other materials. They had also begun to use tools to make clothing and build shelters for protection from inclement weather. The use of tools can be observed in many members of the animal kingdom, but the capacity for creating tools to craft other objects distinguishes humans from all other animals.
The next big step in the history of technology was the control of fire. By striking flint against pyrites to produce sparks, people could kindle fires at will, thereby freeing themselves from the necessity of perpetuating fires obtained from natural sources. Besides the obvious benefits of light and heat, fire was also used to bake clay pots, producing heat-resistant vessels that were then used for cooking grains and for brewing and fermenting. Fired pottery later provided the crucibles in which metals could be refined. Advanced thought processes may well have first developed around the hearth, and it was there that the first domesticated animal, the dog, was tamed.
Early technologies were not centered only on practical tools. Colorful minerals were pulverized to make pigments that were then applied to the human body, to clay utensils, and to baskets, clothing, and other objects. In their search for pigments, early peoples discovered the green mineral malachite and the blue mineral azurite. When these copper-containing ores were hammered they did not turn to powder but bent instead, and they could be polished but not chipped. Because of these qualities, small bits of copper were soon made into jewelry. Early peoples also learned that if this material was repeatedly hammered and put into a fire, it would not split or crack. This process of relieving metal stress, called annealing, eventually brought human civilizations out of the Stone Age—particularly when, about 3000 BC, people also found that alloying tin with copper produces bronze. Bronze is not only more malleable than copper but also holds a better edge, a quality necessary for such objects as swords and sickles.
Although copper deposits existed in the foothills of Syria and Turkey, at the headwaters of the Tigris and Euphrates, the largest deposits of copper in the ancient world were found on the island of Crete (Kríti). With the development of seaworthy ships that could reach this extremely valuable resource, Knossos (Knosós) on Crete became a wealthy mining center during the Bronze Age.
[B]A1 Rise of Agriculture[/B]
By the time of the Bronze Age, the human societies that dotted every continent had long since made a number of other technological advances. They had developed barbed spears, the bow and arrow, animal-oil lamps, and bone needles for making containers and clothing. They had also embarked on a major cultural revolution: the shift from nomadic hunting and herding societies to the more settled practice of agriculture.



[B]D Greek and Roman Technologies[/B]
The Persian Empire of Cyrus the Great was overthrown and succeeded by the empire of Greece's Alexander the Great. Greece had first become a power through its skill in shipbuilding and trading and by its colonization of the shores of the Mediterranean. The Greeks defeated the Persians, in part, because of their naval power.
The Persians and Greeks also introduced a new caste into the division of labor: slavery. By the time of Greece's Golden Age, its civilization depended on slaves for nearly all manual labor. Most scholars agree that in societies that practice slavery, problems in productivity tend to be solved by increasing the number of workers rather than by looking for new production methods or new energy sources. Because of this, theoretical knowledge and learning in Greece—and later in Rome—was largely separated from physical labor and manufacturing.
This is not to say that the Greeks did not develop many new technological ideas. People such as Archimedes, Hero of Alexandria, Ctesibius, and Ptolemy wrote about the principles of siphons, pulleys, levers, cams, fire engines, cogs, valves, and turbines. Some practical contributions of the Greeks were of great importance, such as the water clock of Ctesibius, the dioptra (a surveying instrument) of Hero of Alexandria, and the screw pump of Archimedes. Similarly, Greek shipping was improved by Thales of Miletus, who introduced methods of navigation by triangulation, and by Anaximander, who produced the first world map. Nevertheless, the technological advances of the Greeks were not on a par with their contributions to theoretical knowledge and their wide-ranging speculations.
The Roman Empire that engulfed and succeeded that of the Greeks was somewhat similar in this respect. The Romans, however, were great technologists in the sense of organizing and building; they established an urban civilization that enjoyed the first long peaceful period in human history. The great change in engineering that occurred in the Roman period came as a shift from building tombs, temples, and fortifications to the construction of enormous systems of public works. Using water-resistant cement and the principle of the arch, Roman engineers built 70,800 km (44,000 mi) of roads across their vast empire. They also built numerous sports arenas and public baths and hundreds of aqueducts, sewers, and bridges. The engineer of public works for Rome in the 1st century AD, Sextus Julius Frontinus, fought corruption and illegal practices and took great pride in the public works that provided better sanitary conditions for the citizens of Rome.
Roman engineers were also responsible for introducing the water mill and for the subsequent design of undershot and overshot water wheels, which were used to grind grain, saw wood, and cut marble. In the military sphere, the Romans advanced technology by improving weapons such as the javelin and the catapult.
[B]E Middle Ages[/B]
The period between the fall of Rome and the Industrial Revolution—from approximately AD 500 to 1500—is known as the Middle Ages. Contrary to a popular image, however, this period was not “dark,” isolated, or backward. In fact, greater technological advancements were made in this period than during the Greek and Roman eras. (In addition, the Byzantine and Islamic cultures that thrived during this period were active in the areas of natural philosophy, art, literature, and religion, and Islamic culture in particular made many scientific contributions that would be of great importance in the European Renaissance.) Medieval technologies do not fall into simple categories, however, because the technology of the Middle Ages was eclectic. Medieval society was highly adaptive, willing to acquire new ideas and methods of production from any source—whether the cultures of Islam and Byzantium, or China, or the far-ranging Vikings.
[B]E1 Warfare and Agriculture[/B]
In the area of warfare, cavalry was improved as a military weapon with the invention of the lance and the saddle about the 4th century. These in turn led to the development of heavier armor, the breeding of larger horses, and the building of great castles. The introduction of the crossbow and, later, gunpowder technologies from China, where they had been developed many centuries before, resulted in the manufacture of guns, cannons, and mortars (through the development of the blast furnace), thereby reducing the effectiveness of heavy shields and massive stone fortifications.
The introduction of a heavier plow that had wheels, a horizontal plowshare, and a moldboard, among other new features, made agriculture more productive in the Middle Ages. Three-field crop rotation and the resulting surplus of grains were among the developments that—together with political and social changes—led many peasants to abandon small, individual farming plots and to adopt the successful medieval communal pattern of open-field agriculture.
One of the most important machines of medieval times was the windmill. It not only increased the amount of grain ground and timber sawed; it also produced millwrights experienced with the compound crank, cams, and other technologies for gearing machines and linking their parts with other devices. The spinning wheel, introduced from India in the 13th or 14th century, improved the production of yarn and thread for cloth and became a common machine around the hearth. The hearth itself was transformed by the addition of a chimney to conserve wood, which was becoming scarce because of agricultural expansion. Farm surpluses by AD 1000 led to an increase in trade and in the growth of cities. Within the cities, architectural innovations of many kinds were developed, culminating in the great Gothic cathedrals with their high walls made possible by flying buttresses.
[B]E2 Transportation[/B]
Innovations in transportation during the Middle Ages revolutionized the spread of technologies and ideas across wide areas. Such devices as the horseshoe, the whiffletree (for harnessing animals to wagons effectively), and the spring carriage speeded the transfer of people and goods. Important changes also occurred in marine technology. The development of the deep keel, the triangular lateen sail for greater maneuverability, and the magnetic compass (in the 13th century) made sailing ships the most complex machines of the age. A school was established by Prince Henry of Portugal to teach navigators how to use these machines effectively. Perhaps more than did Copernicus's astronomical theories, Prince Henry's students changed humanity's perception of the world.
[B]E3 Other Major Inventions[/B]
Two other medieval inventions, the clock and the printing press, also have had a permanent influence on all aspects of human life. The invention of a weight-driven clock in 1286 meant that people would no longer live in a world structured primarily by the daily course of the sun and the yearly change of the seasons. The clock was also an immense aid to navigation, and the precise measurement of time was essential for the growth of modern science.
The invention of the printing press, in turn, set off a social revolution that is still in progress. (The Chinese had in fact developed both paper and printing—including textile printing—before the 2nd century AD, but these innovations did not become generally known to the Western world until much later.) The German printing pioneer Johannes Gutenberg solved the problem of molding movable type about 1450. Once developed, printing spread rapidly and began to replace hand-printed texts for a wider audience. Thus, intellectual life soon was no longer the exclusive domain of church and court, and literacy became a necessity of urban existence.
[B]IV MODERN TECHNOLOGY[/B]
By the end of the Middle Ages the technological systems called cities had long since become a central feature of Western life. In 1600 London and Amsterdam each had populations of more than 100,000, and twice that number resided in Paris. Also, the Dutch, English, Spanish, and French were beginning to develop global empires. Colonialism and trade produced a powerful merchant class that helped to create an increasing desire for such luxuries as wine, coffee, tea, cocoa, and tobacco. These merchants acquired libraries, wore clothing made of expensive fabrics and furs, and set a style of life aspired to by the wider populace. By the beginning of the 18th century, capital resources and banking systems were well enough established in Great Britain to initiate investment in mass-production techniques that would satisfy some of these middle-class aspirations.
[B]A The Industrial Revolution[/B]
The Industrial Revolution started in England, because that nation had the technological means, government encouragement, and a large and varied trade network. The first factories appeared in 1740, concentrating on textile production. In 1740 the majority of English people wore woolen garments, but within the next 100 years the scratchy, often soggy and fungus-filled woolens were replaced by cotton—especially after the invention of the cotton gin by Eli Whitney, an American, in 1793. Such English inventions as the flying shuttle and carding machines of John Kay, the water frame of Richard Arkwright, the spinning jenny of James Hargreaves, and the improvements in weaving made by Samuel Crompton were all integrated with a new source of power, the steam engine, developed in England by Thomas Newcomen, James Watt, Richard Trevithick, and in the U.S. by Oliver Evans. Within a 35-year period, from the 1790s to the 1830s, more than 100,000 power looms with 9,330,000 spindles were put into service in England and Scotland.
One of the most important innovations in the weaving process was introduced in France in 1801 by Joseph Jacquard; his loom used cards with holes punched in them to determine the placement of threads in the warp. This use of punched cards inspired the British mathematician Charles Babbage to attempt to design a calculating machine based on the same principle. Although this machine never became fully practical, it presaged the great computer revolution of the 20th century.

[B]A1 New Labor Pattern[/B]

The Industrial Revolution brought a new pattern to the division of labor. It created the modern factory, a technological network whose workers were not required to be artisans and did not necessarily possess craft skills. Because of this, the factory introduced an impersonal remuneration process based on a wage system. As a result of the financial hazards brought on by the economic systems that accompanied such industrial developments, the factory also led to the constant threat of unemployment for its workers.
The factory system was achieved only after much resistance from the English guilds and artisans, who could see clearly the threat to their income and way of life. In musket making, for example, gunsmiths fought the introduction of interchangeable parts and the mass production of rifles. Nevertheless, the factory system became a basic institution of modern technology, and the work of men, women, and children became just another commodity in the production process. The ultimate assembly of a product—whether a mechanical reaper or a sewing machine—was not the work of one person but the result of an integrated, corporate system. This division of labor into operations that were more and more narrowly described became the determining feature of work in the new industrial society, with all the long hours of tedium that this entailed.
[B]A2 Increased Pace of Innovation[/B]
As agricultural productivity increased and medical science developed, Western society came to have a strong belief in the desirability of technological change despite its less pleasant aspects. Pride and a large measure of awe resulted from such engineering achievements as the laying of the first Atlantic telegraph cable, the building of the Suez and Panama canals, and the construction of the Eiffel Tower, the Brooklyn Bridge, and the enormous iron passenger ship, the Great Eastern. The telegraph and railroads connected most of the major cities with one another. In the late 19th century, the American inventor Thomas Edison's light bulb began to replace candles and lamps, and within 30 years every industrial nation was generating electric power for lighting and other systems.
Such 19th- and 20th-century inventions as the telephone, the phonograph, the wireless radio, the motion picture, the automobile, and the airplane served only to add to the nearly universal respect that society in general felt for technology. With the development of assembly-line mass production of automobiles and household appliances, and the building of ever taller skyscrapers, acceptance of innovations became not only a fact of everyday life but also a way of life in itself. Society was being rapidly transformed by increased mobility, rapid communication, and a deluge of available information from mass media.
[B]A3 Technical Education[/B]
One of the several reasons why the United States became a technological leader in the 20th century was its development of an advanced system of technical education. Mechanical arts schools began in Philadelphia in the 18th century, and by the end of the 19th century they had spread to every major American city. In the 20th century, a state-based system of vocational education provided training in basic technical skills. Between 1862 and 1890, engineering and agricultural colleges in every state were funded by a federal program known as the Morrill Land Grant. In addition, since the early 1920s, every rural county in the nation has had a Federal Extension Service office that is responsible for disseminating information to farmers on new technologies and research.

Yousuf Shah Khagga Monday, January 07, 2013 08:40 PM

Computer
I INTRODUCTION

Computer, machine that performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Programs usually reside within the computer and are retrieved and processed by the computer’s electronics. The program results are stored or routed to output devices, such as video display monitors or printers. Computers perform a wide variety of activities reliably, accurately, and quickly.

II USES OF COMPUTERS

People use computers in many ways. In business, computers track inventories with bar codes and scanners, check the credit status of customers, and transfer funds electronically. In homes, tiny computers embedded in the electronic circuitry of most appliances control the indoor temperature, operate home security systems, tell the time, and turn videocassette recorders (VCRs) on and off. Computers in automobiles regulate the flow of fuel, thereby increasing gas mileage, and are used in anti-theft systems. Computers also entertain, creating digitized sound on stereo systems or computer-animated features from a digitally encoded laser disc. Computer programs, or applications, exist to aid every level of education, from programs that teach simple addition or sentence construction to programs that teach advanced calculus. Educators use computers to track grades and communicate with students; with computer-controlled projection units, they can add graphics, sound, and animation to their communications. Computers are used extensively in scientific research to solve mathematical problems, investigate complicated data, or model systems that are too costly or impractical to build, such as testing the air flow around the next generation of aircraft. The military employs computers in sophisticated communications to encode and unscramble messages, and to keep track of personnel and supplies.

III HOW COMPUTERS WORK

The physical computer and its components are known as hardware. Computer hardware includes the memory that stores data and program instructions; the central processing unit (CPU) that carries out program instructions; the input devices, such as a keyboard or mouse, that allow the user to communicate with the computer; the output devices, such as printers and video display monitors, that enable the computer to present information to the user; and buses (hardware lines or wires) that connect these and other computer components. The programs that run the computer are called software. Software generally is designed to perform a particular type of task—for example, to control the arm of a robot to weld a car’s body, to write a letter, to display and modify a photograph, or to direct the general operation of the computer.

A The Operating System

When a computer is turned on, it searches for instructions in its memory. These instructions tell the computer how to start up. Usually, one of the first sets of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of the software and hardware actions. When the user requests that a program run, the operating system loads the program in the computer’s memory and runs the program. Popular operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have graphical user interfaces (GUIs), which use tiny pictures, or icons, to represent various files and commands. To access these files or commands, the user clicks the mouse on the icon or presses a combination of keys on the keyboard. Some operating systems allow the user to carry out these tasks via voice, touch, or other input methods.

B Computer Memory

To process information electronically, data are stored in a computer in the form of binary digits, or bits, each having two possible representations (0 or 1). If a second bit is added to a single bit of information, the number of representations is doubled, resulting in four possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation again doubles the number of combinations, resulting in eight possibilities: 000, 001, 010, 011, 100, 101, 110, or 111. Each time a bit is added, the number of possible patterns is doubled. Eight bits is called a byte; a byte has 256 possible combinations of 0s and 1s.
A byte is a useful quantity in which to store information because it provides enough possible patterns to represent the entire alphabet, in lower and upper cases, as well as numeric digits, punctuation marks, several character-sized graphics symbols, and accented non-English characters. A byte also can be interpreted as a pattern that represents a number between 0 and 255. A kilobyte—1,024 bytes—can store about 1,000 characters; a megabyte can store about 1 million characters; a gigabyte can store about 1 billion characters; and a terabyte can store about 1 trillion characters. Computer programmers usually decide how a given byte should be interpreted—that is, as a single character, a character within a string of text, a single number, or part of a larger number. Numbers can represent anything from chemical bonds to dollar figures to colors to sounds.
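As a rough illustration of how each added bit doubles the number of available patterns, the short C++ fragment below (our own sketch, not part of the original text; the variable names are arbitrary) counts the patterns for 1 through 8 bits and ends with the 256 values a single byte can hold.

#include <iostream>

int main() {
    // Each additional bit doubles the number of distinct patterns.
    unsigned int patterns = 1;
    for (int bits = 1; bits <= 8; ++bits) {
        patterns *= 2;                      // 2, 4, 8, ..., 256
        std::cout << bits << " bit(s): " << patterns << " patterns\n";
    }
    // An 8-bit byte can therefore hold any value from 0 through 255.
    unsigned char byte = 255;
    std::cout << "largest byte value: " << static_cast<int>(byte) << '\n';
    return 0;
}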
The physical memory of a computer is either random access memory (RAM), which can be read or changed by the user or the computer, or read-only memory (ROM), which can be read by the computer but not altered in any way. One way to store memory is within the circuitry of the computer, usually in tiny computer chips that hold millions of bytes of information. The memory within these computer chips is RAM. Memory also can be stored outside the circuitry of the computer on external storage devices, such as magnetic floppy disks, which can store about 2 megabytes of information; hard drives, which can store gigabytes of information; compact discs (CDs), which can store up to 680 megabytes of information; and digital video discs (DVDs), which can store 8.5 gigabytes of information. A single CD can store nearly as much information as several hundred floppy disks, and some DVDs can hold more than 12 times as much data as a CD.


C The Bus

The bus enables the components in a computer, such as the CPU and the memory circuits, to communicate as program instructions are being carried out. The bus is usually a flat cable with numerous parallel wires. Each wire can carry one bit, so the bus can transmit many bits along the cable at the same time. For example, a 16-bit bus, with 16 parallel wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one component to another. Early computer designs utilized a single or very few buses. Modern designs typically use many buses; some of them specialized to carry particular forms of data, such as graphics.
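The sketch below is purely illustrative (no real bus is driven this way from application code): it splits a 32-bit value into the two 16-bit pieces that a 16-bit-wide bus would have to carry in two successive transfers.

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t word = 0xAABBCCDD;              // a 32-bit value to be moved
    std::uint16_t first  = word & 0xFFFF;         // low 2 bytes: first transfer
    std::uint16_t second = (word >> 16) & 0xFFFF; // high 2 bytes: second transfer
    std::cout << std::hex << "transfer 1: 0x" << first
              << ", transfer 2: 0x" << second << '\n';
    return 0;
}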

D Input Devices

Input devices, such as a keyboard or mouse, permit the computer user to communicate with the computer. Other input devices include a joystick, a rodlike device often used by people who play computer games; a scanner, which converts images such as photographs into digital images that the computer can manipulate; a touch panel, which senses the placement of a user’s finger and can be used to execute commands or access files; and a microphone, used to input sounds such as the human voice, which can activate computer commands in conjunction with voice recognition software. “Tablet” computers are being developed that will allow users to interact with their screens using a penlike device.

E The Central Processing Unit

Information from an input device or from the computer’s memory is communicated via the bus to the central processing unit (CPU), which is the part of the computer that translates commands and runs programs. The CPU is a microprocessor chip—that is, a single piece of silicon containing millions of tiny, microscopically wired electrical components. Information is stored in a CPU memory location called a register. Registers can be thought of as the CPU’s tiny scratchpad, temporarily storing instructions or data. When a program is running, one special register called the program counter keeps track of which program instruction comes next by maintaining the memory location of the next program instruction to be executed. The CPU’s control unit coordinates and times the CPU’s functions, and it uses the program counter to locate and retrieve the next instruction from memory.
In a typical sequence, the CPU locates the next instruction in the appropriate memory device. The instruction then travels along the bus from the computer’s memory to the CPU, where it is stored in a special instruction register. Meanwhile, the program counter changes—usually increasing a small amount—so that it contains the location of the instruction that will be executed next. The current instruction is analyzed by a decoder, which determines what the instruction will do. Any data the instruction needs are retrieved via the bus and placed in the CPU’s registers. The CPU executes the instruction, and the results are stored in another register or copied to specific memory locations via a bus. This entire sequence of steps is called an instruction cycle. Frequently, several instructions may be in process simultaneously, each at a different stage in its instruction cycle. This is called pipeline processing.
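The loop below is a deliberately tiny, made-up machine (the opcodes, register, and memory layout are our own invention, not any real CPU's), but it follows the same fetch, decode, and execute cycle, driven by a program counter, that the paragraph above describes.

#include <cstdint>
#include <iostream>
#include <vector>

enum Op : std::uint8_t { HALT = 0, LOAD = 1, ADD = 2, PRINT = 3 };

int main() {
    // A toy program held in "memory": load 5, add 7, print the result, stop.
    std::vector<std::uint8_t> memory = {LOAD, 5, ADD, 7, PRINT, HALT};
    std::size_t pc = 0;       // program counter: location of the next instruction
    int accumulator = 0;      // stands in for a CPU register

    bool running = true;
    while (running) {
        std::uint8_t instruction = memory[pc++];   // fetch, then advance the counter
        switch (instruction) {                     // decode
            case LOAD:  accumulator  = memory[pc++]; break;   // execute
            case ADD:   accumulator += memory[pc++]; break;
            case PRINT: std::cout << "result: " << accumulator << '\n'; break;
            case HALT:  running = false; break;
        }
    }
    return 0;
}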

F Output Devices

Once the CPU has executed the program instruction, the program may request that the information be communicated to an output device, such as a video display monitor or a flat liquid crystal display. Other output devices are printers, overhead projectors, videocassette recorders (VCRs), and speakers.

IV PROGRAMMING LANGUAGES

Programming languages contain the series of commands that create software. A CPU has a limited set of instructions, known as machine code, that it is capable of understanding; it can understand only this language. All other programming languages must be converted to machine code before they can be understood. Computer programmers, however, prefer to use other computer languages that use words or other commands because they are easier to use. Programs written in these languages must first be translated so that the computer can understand them, and the translation can lead to code that is less efficient to run than code written directly in the machine’s language.

A Machine Language

Computer programs that can be run by a computer’s operating system are called executable. An executable program is a sequence of extremely simple instructions known as machine code. These instructions are specific to the individual computer’s CPU and associated hardware; for example, Intel Pentium and Power PC microprocessor chips each have different machine languages and require different sets of codes to perform the same task. Machine code instructions are few in number (roughly 20 to 200, depending on the computer and the CPU). Typical instructions are for copying data from a memory location or for adding the contents of two memory locations (usually registers in the CPU). Complex tasks require a sequence of these simple instructions. Machine code instructions are binary—that is, sequences of bits (0s and 1s). Because these sequences are long strings of 0s and 1s and are usually not easy to understand, computer instructions usually are not written in machine code. Instead, computer programmers write code in languages known as an assembly language or a high-level language.

B Assembly Language

Assembly language uses easy-to-remember commands that are more understandable to programmers than machine-language commands. Each machine language instruction has an equivalent command in assembly language. For example, in one Intel assembly language, the statement “MOV A, B” instructs the computer to copy data from location A to location B. The same instruction in machine code is a string of 16 0s and 1s. Once an assembly-language program is written, it is converted to a machine-language program by another program called an assembler.
Assembly language is fast and powerful because of its correspondence with machine language. It is still difficult to use, however, because assembly-language instructions are a series of abstract codes and each instruction carries out a relatively simple task. In addition, different CPUs use different machine languages and therefore require different programs and different assembly languages. Assembly language is sometimes inserted into a high-level language program to carry out specific hardware tasks or to speed up parts of the high-level program that are executed frequently.

C High-Level Languages

High-level languages were developed because of the difficulty of programming using assembly languages. High-level languages are easier to use than machine and assembly languages because their commands are closer to natural human language. In addition, these languages are not CPU-specific. Instead, they contain general commands that work on different CPUs. For example, a programmer writing in the high-level C++ programming language who wants to display a greeting need include only the following command:

cout << "Hello, Encarta User!" << endl;
This command directs the computer’s CPU to display the greeting, and it will work no matter what type of CPU the computer uses. When this statement is executed, the text that appears between the quotes will be displayed. Although the “cout” and “endl” parts of the above statement appear cryptic, programmers quickly become accustomed to their meanings. For example, “cout” sends the greeting message to the “standard output” (usually the computer user’s screen) and “endl” tells the computer (in the C++ language) to go to a new line after it outputs the message. Like assembly-language instructions, high-level languages also must be translated. This is the task of a special program called a compiler. A compiler turns a high-level program into a CPU-specific machine language. For example, a programmer may write a program in a high-level language such as C++ or Java and then prepare it for different machines, such as a Sun Microsystems workstation or a personal computer (PC), using compilers designed for those machines. This simplifies the programmer’s task and makes the software more portable to different users and machines.
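For readers who want to try the statement shown above, a complete, compilable version might look like the following; the surrounding boilerplate is ours and is not part of the original one-line example. Compiled with any standard C++ compiler and run, it prints the greeting on the standard output.

#include <iostream>   // declares cout and endl

using namespace std;  // lets the statement appear exactly as written in the text

int main() {
    cout << "Hello, Encarta User!" << endl;
    return 0;
}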

V FLOW-MATIC

American naval officer and mathematician Grace Murray Hopper helped develop the first commercially available high-level software language, FLOW-MATIC, in 1957. Hopper is often credited with popularizing the term bug, which indicates a computer malfunction; in 1945 she discovered a hardware failure in the Mark II computer caused by a moth trapped between its mechanical relays. She documented the event in her laboratory notebook, and the term eventually came to represent any computer error, including one based strictly on incorrect instructions in software. Hopper taped the moth into her notebook and wrote, “First actual case of a bug being found.”

VI FORTRAN

From 1954 to 1958 American computer scientist John Backus of International Business Machines, Inc. (IBM) developed Fortran, an acronym for Formula Translation. It became a standard programming language because it could process mathematical formulas. Fortran and its variations are still in use today, especially in physics.

VII BASIC

Hungarian-American mathematician John Kemeny and American mathematician Thomas Kurtz at Dartmouth College in Hanover, New Hampshire, developed BASIC (Beginner’s All-purpose Symbolic Instruction Code) in 1964. The language was easier to learn than its predecessors and became popular due to its friendly, interactive nature and its inclusion on early personal computers. Unlike languages that require all their instructions to be translated into machine code first, BASIC is turned into machine language line by line as the program runs. BASIC commands typify high-level languages because of their simplicity and their closeness to natural human language. For example, a program that divides a number in half can be written as

10 INPUT "ENTER A NUMBER"; X
20 Y = X / 2
30 PRINT "HALF OF THAT NUMBER IS"; Y

The numbers that precede each line are chosen by the programmer to indicate the sequence of the commands. The first line prints “ENTER A NUMBER” on the computer screen followed by a question mark to prompt the user to type in the number labeled “X.” In the next line, that number is divided by two and stored as “Y.” In the third line, the result of the operation is displayed on the computer screen. Even though BASIC is rarely used today, this simple program demonstrates how data are stored and manipulated in most high-level programming languages.
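For comparison, here is one way the same three-line program could be written in C++, the other high-level language illustrated earlier; the variable names simply mirror the BASIC version, and the exact wording of the prompts is not significant.

#include <iostream>

int main() {
    double x = 0.0;
    std::cout << "ENTER A NUMBER? ";   // prompt, like the BASIC INPUT statement
    std::cin >> x;
    double y = x / 2;                  // divide the number in half
    std::cout << "HALF OF THAT NUMBER IS " << y << std::endl;
    return 0;
}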

VIII OTHER HIGH-LEVEL LANGUAGES

Other high-level languages in use today include C, C++, Ada, Pascal, LISP, Prolog, COBOL, Visual Basic, and Java. Some languages, such as the “markup languages” known as HTML, XML, and their variants, are intended to display data, graphics, and media selections, especially for users of the World Wide Web. Markup languages are often not considered programming languages, but they have become increasingly sophisticated.

A Object-Oriented Programming Languages

Object-oriented programming (OOP) languages, such as C++ and Java, are based on traditional high-level languages, but they enable a programmer to think in terms of collections of cooperating objects instead of lists of commands. Objects, such as a circle, have properties such as the radius of the circle and the command that draws it on the computer screen. Classes of objects can inherit features from other classes of objects. For example, a class defining squares can inherit features such as right angles from a class defining rectangles. This set of programming classes simplifies the programmer’s task, resulting in more “reusable” computer code. Reusable code allows a programmer to use code that has already been designed, written, and tested. This makes the programmer’s task easier, and it results in more reliable and efficient programs.
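A minimal C++ sketch of these ideas follows; the class names and members are our own illustration rather than any library. The Square class inherits the data and the area computation written once in Rectangle, which is the kind of code reuse the paragraph describes.

#include <iostream>

class Rectangle {
public:
    Rectangle(double w, double h) : width(w), height(h) {}
    double area() const { return width * height; }   // behaviour written once, reused by subclasses
protected:
    double width, height;
};

class Square : public Rectangle {                     // a square is a rectangle with equal sides
public:
    explicit Square(double side) : Rectangle(side, side) {}
};

int main() {
    Square s(3.0);
    std::cout << "area: " << s.area() << '\n';        // calls the inherited Rectangle::area
    return 0;
}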
IX TYPES OF COMPUTERS

A Digital and Analog

Computers can be either digital or analog. Virtually all modern computers are digital. Digital refers to the processes in computers that manipulate binary numbers (0s or 1s), which represent switches that are turned on or off by electrical current. A bit can have the value 0 or the value 1, but nothing in between 0 and 1. Analog refers to circuits or numerical values that have a continuous range. Both 0 and 1 can be represented by analog computers, but so can 0.5, 1.5, or a number like π (approximately 3.14).
A desk lamp can serve as an example of the difference between analog and digital. If the lamp has a simple on/off switch, then the lamp system is digital, because the lamp either produces light at a given moment or it does not. If a dimmer replaces the on/off switch, then the lamp is analog, because the amount of light can vary continuously from on to off and all intensities in between.
Analog computer systems were the first type to be produced. A popular analog computer used in the 20th century was the slide rule. To perform calculations with a slide rule, the user slides a narrow, gauged wooden strip inside a rulerlike holder. Because the sliding is continuous and there is no mechanism to stop at any exact values, the slide rule is analog. New interest has been shown recently in analog computers, particularly in areas such as neural networks. These are specialized computer designs that attempt to mimic neurons of the brain. They can be built to respond to continuous electrical signals. Most modern computers, however, are digital machines whose components have a finite number of states—for example, the 0 or 1, or on or off bits. These bits can be combined to denote information such as numbers, letters, graphics, sound, and program instructions.

B Range of Computer Ability

Computers exist in a wide range of sizes and power. The smallest are embedded within the circuitry of appliances, such as televisions and wristwatches. These computers are typically preprogrammed for a specific task, such as tuning to a particular television frequency, delivering doses of medicine, or keeping accurate time. They generally are “hard-wired”—that is, their programs are represented as circuits that cannot be reprogrammed.
Programmable computers vary enormously in their computational power, speed, memory, and physical size. Some small computers can be held in one hand and are called personal digital assistants (PDAs). They are used as notepads, scheduling systems, and address books; if equipped with a cellular phone, they can connect to worldwide computer networks to exchange information regardless of location. Hand-held game devices are also examples of small computers.
Portable laptop and notebook computers and desktop PCs are typically used in businesses and at home to communicate on computer networks, for word processing, to track finances, and for entertainment. They have large amounts of internal memory to store hundreds of programs and documents. They are equipped with a keyboard; a mouse, trackball, or other pointing device; and a video display monitor or liquid crystal display (LCD) to display information. Laptop and notebook computers usually have hardware and software similar to PCs, but they are more compact and have flat, lightweight LCDs instead of television-like video display monitors. Most sources consider the terms “laptop” and “notebook” synonymous.
Workstations are similar to personal computers but have greater memory and more extensive mathematical abilities, and they are connected to other workstations or personal computers to exchange data. They are typically found in scientific, industrial, and business environments—especially financial ones, such as stock exchanges—that require complex and fast computations.
Mainframe computers have more memory, speed, and capabilities than workstations and are usually shared by multiple users through a series of interconnected computers. They control businesses and industrial facilities and are used for scientific research. The most powerful mainframe computers, called supercomputers, process complex and time-consuming calculations, such as those used to create weather predictions. Large businesses, scientific institutions, and the military use them. Some supercomputers have many sets of CPUs. These computers break a task into small pieces, and each CPU processes a portion of the task to increase overall speed and efficiency. Such computers are called parallel processors. As computers have increased in sophistication, the boundaries between the various types have become less rigid. The performance of various tasks and types of computing have also moved from one type of computer to another. For example, networked PCs can work together on a given task in a version of parallel processing known as distributed computing.

X NETWORKS

Computers can communicate with other computers through a series of connections and associated hardware called a network. The advantage of a network is that data can be exchanged rapidly, and software and hardware resources, such as hard-disk space or printers, can be shared. Networks also allow remote use of a computer by a user who cannot physically access the computer.
One type of network, a local area network (LAN), consists of several PCs or workstations connected to a special computer called a server, often within the same building or office complex. The server stores and manages programs and data. A server often contains all of a networked group’s data and enables LAN workstations or PCs to be set up without large storage capabilities. In this scenario, each PC may have “local” memory (for example, a hard drive) specific to itself, but the bulk of storage resides on the server. This reduces the cost of the workstation or PC because less expensive computers can be purchased, and it simplifies the maintenance of software because the software resides only on the server rather than on each individual workstation or PC.
Mainframe computers and supercomputers commonly are networked. They may be connected to PCs, workstations, or terminals that have no computational abilities of their own. These “dumb” terminals are used only to enter data into, or receive output from, the central computer.
Wide area networks (WANs) are networks that span large geographical areas. Computers can connect to these networks to use facilities in another city or country. For example, a person in Los Angeles can browse through the computerized archives of the Library of Congress in Washington, D.C. The largest WAN is the Internet, a global consortium of networks linked by common communication programs and protocols (a set of established standards that enable computers to communicate with each other). The Internet is a mammoth resource of data, programs, and utilities. American computer scientist Vinton Cerf was largely responsible for creating the Internet in 1973 as part of the United States Department of Defense Advanced Research Projects Agency (DARPA). In 1984 the development of Internet technology was turned over to private, government, and scientific agencies. The World Wide Web, developed in the 1980s by British physicist Timothy Berners-Lee, is a system of information resources accessed primarily through the Internet. Users can obtain a variety of information in the form of text, graphics, sounds, or video. These data are extensively cross-indexed, enabling users to browse (transfer their attention from one information site to another) via buttons, highlighted text, or sophisticated searching software known as search engines.

XI HISTORY

A Beginnings

The history of computing began with an analog machine. In 1623 German scientist Wilhelm Schickard invented a machine that used 11 complete and 6 incomplete sprocketed wheels that could add and, with the aid of logarithm tables, multiply and divide.
French philosopher, mathematician, and physicist Blaise Pascal invented a machine in 1642 that added and subtracted, automatically carrying and borrowing digits from column to column. Pascal built 50 copies of his machine, but most served as curiosities in parlors of the wealthy. Seventeenth-century German mathematician Gottfried Leibniz designed a special gearing system to enable multiplication on Pascal’s machine.

B First Punch Cards

In the early 19th century French inventor Joseph-Marie Jacquard devised a specialized type of computer: a silk loom. Jacquard’s loom used punched cards to program patterns that helped the loom create woven fabrics. Although Jacquard was rewarded and admired by French emperor Napoleon I for his work, he fled for his life from the city of Lyon pursued by weavers who feared their jobs were in jeopardy due to Jacquard’s invention. The loom prevailed, however: When Jacquard died, more than 30,000 of his looms existed in Lyon. The looms are still used today, especially in the manufacture of fine furniture fabrics.

C Precursor to Modern Computer

Another early mechanical computer was the Difference Engine, designed in the early 1820s by British mathematician and scientist Charles Babbage. Although never completed by Babbage, the Difference Engine was intended to be a machine with a 20-decimal capacity that could solve mathematical problems. Babbage also made plans for another machine, the Analytical Engine, considered the mechanical precursor of the modern computer. The Analytical Engine was designed to perform all arithmetic operations efficiently; however, Babbage’s lack of political skills kept him from obtaining the approval and funds to build it.
Augusta Ada Byron, countess of Lovelace, was a personal friend and student of Babbage. She was the daughter of the famous poet Lord Byron and one of only a few women mathematicians of her time. She prepared extensive notes concerning Babbage’s ideas and the Analytical Engine. Lovelace’s conceptual programs for the machine led to the naming of a programming language (Ada) in her honor. Although the Analytical Engine was never built, its key concepts, such as the capacity to store instructions, the use of punched cards as a primitive memory, and the ability to print, can be found in many modern computers.

XII DEVELOPMENTS IN THE 20TH CENTURY

A Early Electronic Calculators

Herman Hollerith, an American inventor, used an idea similar to Jacquard’s loom when he combined the use of punched cards with devices that created and electronically read the cards. Hollerith’s tabulator was used for the 1890 U.S. census, and it made the computational time three to four times shorter than the time previously needed for hand counts. Hollerith’s Tabulating Machine Company eventually merged with two companies to form the Computing-Tabulating-Recording Company. In 1924 the company changed its name to International Business Machines (IBM).
In 1936 British mathematician Alan Turing proposed the idea of a machine that could process equations without human direction. The machine (now known as a Turing machine) resembled an automatic typewriter that used symbols for math and logic instead of letters. Turing intended the device to be a “universal machine” that could be used to duplicate or represent the function of any other existing machine. Turing’s machine was the theoretical precursor to the modern digital computer. The Turing machine model is still used by modern computational theorists.
In the 1930s American mathematician Howard Aiken developed the Mark I calculating machine, which was built by IBM. This electromechanical calculating machine used relays and electromagnetic components to replace purely mechanical components. In later machines, Aiken used vacuum tubes and solid-state transistors (tiny electrical switches) to manipulate the binary numbers. Aiken also introduced computers to universities by establishing the first computer science program at Harvard University in Cambridge, Massachusetts. Aiken obsessively mistrusted the concept of storing a program within the computer, insisting that the integrity of the machine could be maintained only through a strict separation of program instructions from data. His computer had to read instructions from punched cards, which could be stored away from the computer. He also urged the National Bureau of Standards not to support the development of computers, insisting that there would never be a need for more than five or six of them nationwide.

B EDVAC, ENIAC, and UNIVAC

At the Institute for Advanced Study in Princeton, New Jersey, Hungarian-American mathematician John von Neumann developed one of the first computers used to solve problems in mathematics, meteorology, economics, and hydrodynamics. Von Neumann's 1945 design for the Electronic Discrete Variable Automatic Computer (EDVAC)—in stark contrast to the designs of Aiken, his contemporary—was the first electronic computer design to incorporate a program stored entirely within its memory. This machine led to several others, some with clever names like ILLIAC, JOHNNIAC, and MANIAC.
American physicist John Mauchly proposed the electronic digital computer called ENIAC, the Electronic Numerical Integrator And Computer. He helped build it along with American engineer John Presper Eckert, Jr., at the Moore School of Engineering at the University of Pennsylvania in Philadelphia. ENIAC was operational in 1945 and introduced to the public in 1946. It is regarded as the first successful, general digital computer. It occupied 167 sq m (1,800 sq ft), weighed more than 27,000 kg (60,000 lb), and contained more than 18,000 vacuum tubes. Roughly 2,000 of the computer’s vacuum tubes were replaced each month by a team of six technicians. Many of ENIAC’s first tasks were for military purposes, such as calculating ballistic firing tables and designing atomic weapons. Since ENIAC was initially not a stored program machine, it had to be reprogrammed for each task.
Eckert and Mauchly eventually formed their own company, which was then bought by Remington Rand. They produced the Universal Automatic Computer (UNIVAC), which was used for a broader variety of commercial applications. The first UNIVAC was delivered to the United States Census Bureau in 1951. By 1957, there were 46 UNIVACs in use.
Between 1937 and 1939, while teaching at Iowa State College, American physicist John Vincent Atanasoff built a prototype computing device called the Atanasoff-Berry Computer, or ABC, with the help of his assistant, Clifford Berry. Atanasoff developed the concepts that were later used in the design of the ENIAC. Atanasoff’s device was the first computer to separate data processing from memory, but it is not clear whether a functional version was ever built. Atanasoff did not receive credit for his contributions until 1973, when a lawsuit regarding the patent on ENIAC was settled.

XIII THE TRANSISTOR AND INTEGRATED CIRCUITS TRANSFORM COMPUTING

In 1948, at Bell Telephone Laboratories, American physicists Walter Houser Brattain, John Bardeen, and William Bradford Shockley developed the transistor, a device that can act as an electric switch. The transistor had a tremendous impact on computer design, replacing costly, energy-inefficient, and unreliable vacuum tubes.
In the late 1960s integrated circuits (tiny transistors and other electrical components arranged on a single chip of silicon) replaced individual transistors in computers. Integrated circuits resulted from the simultaneous, independent work of Jack Kilby at Texas Instruments and Robert Noyce of the Fairchild Semiconductor Corporation in the late 1950s. As integrated circuits became miniaturized, more components could be designed into a single computer circuit. In the 1970s refinements in integrated circuit technology led to the development of the modern microprocessor, integrated circuits that contained thousands of transistors. Modern microprocessors can contain more than 40 million transistors.
Manufacturers used integrated circuit technology to build smaller and cheaper computers. The first of these so-called personal computers (PCs)—the Altair 8800—appeared in 1975, sold by Micro Instrumentation Telemetry Systems (MITS). The Altair used an 8-bit Intel 8080 microprocessor, had 256 bytes of RAM, received input through switches on the front panel, and displayed output on rows of light-emitting diodes (LEDs). Refinements in the PC continued with the inclusion of video displays, better storage devices, and CPUs with more computational abilities. Graphical user interfaces were first designed by the Xerox Corporation, then later used successfully by Apple Inc. Today the development of sophisticated operating systems such as Windows, the Mac OS, and Linux enables computer users to run programs and manipulate data in ways that were unimaginable in the mid-20th century.
Several researchers claim the “record” for the largest single calculation ever performed. One large single calculation was accomplished by physicists at IBM in 1995. They solved one million trillion mathematical subproblems by continuously running 448 computers for two years. Their analysis demonstrated the existence of a previously hypothetical subatomic particle called a glueball. Japan, Italy, and the United States are collaborating to develop new supercomputers that will run these types of calculations 100 times faster.
In 1996 IBM challenged Garry Kasparov, the reigning world chess champion, to a chess match with a supercomputer called Deep Blue. The computer had the ability to compute more than 100 million chess positions per second. In a 1997 rematch Deep Blue defeated Kasparov, becoming the first computer to win a match against a reigning world chess champion with regulation time controls. Many experts predict these types of parallel processing machines will soon surpass human chess playing ability, and some speculate that massive calculating power will one day replace intelligence. Deep Blue serves as a prototype for future computers that will be required to solve complex problems. At issue, however, is whether a computer can be developed with the ability to learn to solve problems on its own, rather than one programmed to solve a specific set of tasks.


XIV THE FUTURE OF COMPUTERS

In 1965 semiconductor pioneer Gordon Moore predicted that the number of transistors contained on a computer chip would double every year. This is now known as Moore’s Law, and it has proven to be somewhat accurate. The number of transistors and the computational speed of microprocessors currently doubles approximately every 18 months. Components continue to shrink in size and are becoming faster, cheaper, and more versatile.
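Taking the 18-month doubling figure at face value, a short back-of-the-envelope calculation shows how quickly such doubling compounds; the starting count of 40 million transistors is simply the figure quoted earlier in this article, and the time span chosen is arbitrary.

#include <cmath>
#include <iostream>

int main() {
    const double start = 40e6;            // ~40 million transistors (figure cited above)
    const double doubling_years = 1.5;    // one doubling every 18 months
    for (int year = 0; year <= 9; year += 3) {
        double count = start * std::pow(2.0, year / doubling_years);
        std::cout << "after " << year << " years: about " << count << " transistors\n";
    }
    return 0;
}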
With their increasing power and versatility, computers simplify day-to-day life. Unfortunately, as computer use becomes more widespread, so do the opportunities for misuse. Computer hackers—people who illegally gain access to computer systems—often violate privacy and can tamper with or destroy records. Programs called viruses or worms can replicate and spread from computer to computer, erasing information or causing malfunctions. Other individuals have used computers to electronically embezzle funds and alter credit histories. New ethical issues also have arisen, such as how to regulate material on the Internet and the World Wide Web. Long-standing issues, such as privacy and freedom of expression, are being reexamined in light of the digital revolution. Individuals, companies, and governments are working to solve these problems through informed conversation, compromise, better computer security, and regulatory legislation.
Computers will become more advanced, and they will also become easier to use. Improved speech recognition will make the operation of a computer easier. Virtual reality, the technology of interacting with a computer using all of the human senses, will also contribute to better human and computer interfaces. Standards for virtual-reality programming languages—for example, Virtual Reality Modeling Language (VRML)—are currently in use or are being developed for the World Wide Web.
Other, exotic models of computation are being developed, including biological computing that uses living organisms, molecular computing that uses molecules with particular properties, and computing that uses deoxyribonucleic acid (DNA), the basic unit of heredity, to store data and carry out operations. These are examples of possible future computational platforms that, so far, are limited in abilities or are strictly theoretical. Scientists investigate them because of the physical limitations of miniaturizing circuits embedded in silicon. There are also limitations related to heat generated by even the tiniest of transistors.
Intriguing breakthroughs occurred in the area of quantum computing in the late 1990s. Quantum computers under development use components of a chloroform molecule (made up of carbon, hydrogen, and chlorine atoms) and a variation of a medical procedure called magnetic resonance imaging (MRI) to compute at a molecular level. Scientists use a branch of physics called quantum mechanics, which describes the behavior of subatomic particles (particles that make up atoms), as the basis for quantum computing. Quantum computers may one day be thousands to millions of times faster than current computers, because they take advantage of the laws that govern the behavior of subatomic particles. These laws allow quantum computers to examine all possible answers to a query simultaneously. Future uses of quantum computers could include code breaking and large database queries. Theorists of chemistry, computer science, mathematics, and physics are now working to determine the possibilities and limitations of quantum computing.
Communications between computer users and networks will benefit from new technologies such as broadband communication systems that can carry significantly more data faster or more conveniently to and from the vast interconnected databases that continue to grow in number and type.

Sadia2 Sunday, March 24, 2013 10:43 PM

Don't you think that the content you posted on the above topics is of unnecessary length???

