#1 (Nayla Khan, Wednesday, May 22, 2013)

Syllabus and notes for NTS-GAT Computer Science

1. Programming Fundamentals 06%
2. Object Oriented Paradigm 05%
3. Discrete Structures 05%
4. Data Structures and Algorithms 09%
5. Digital Logic and Computer Organization 06%
6. Operating Systems 06%
7. Database Systems 07%
8. Software Engineering & Development 05%
9. Computer Communication and Networks 06%
10. Computer Architecture & Assembly Language 08%
11. Theory of Automata and Formal Languages 10%
12. Analysis of Algorithms 10%
13. Artificial Intelligence 07%
14. System Programming 05%
15. Numerical Computing 05%
Total 100%
#2 (Nayla Khan, Wednesday, May 22, 2013)

1. PROGRAMMING FUNDAMENTALS:

Overview of computers and programming. Overview of language. Basics of structured and modular programming. Basic algorithms and problem solving: development of basic algorithms, analyzing a problem, designing a solution, testing the designed solution. Fundamental programming constructs, translation of algorithms to programmes, data types, control structures, functions, arrays, records, files, testing programmes.


2. OBJECT ORIENTED PARADIGM:

Evolution of Object Oriented (OO) programming, OO concepts and principles, problem solving in OO paradigm, OO programme design process, classes, methods, objects and encapsulation; constructors and destructors, operator and function overloading, virtual functions, derived classes, inheritance and polymorphism. I/O and file processing, exception handling.

3. DISCRETE STRUCTURES:

Introduction to logic and proofs: Direct proofs; proof by contradiction, Sets, Combinatorics, Sequences, Formal logic, Propositional and predicate calculus, Methods of Proof, Mathematical Induction and Recursion, loop invariants, Relations and functions, Pigeonhole principle, Trees and Graphs, Elementary number theory, Optimization and matching. Fundamental structures: Functions; relations (more specifically recursions); pigeonhole principle; cardinality and countability; probabilistic methods.


4. DATA STRUCTURES AND ALGORITHMS:

Introduction to data structures; Arrays, Stacks, Queues, Priority Queues, Linked Lists, Trees, Spanning Trees, Graphs and Traversals. Recursion, sorting and searching algorithms, Shortest path algorithms, Hashing, Storage and retrieval properties and techniques for the various data structures. Algorithm Complexity, Polynomial and Intractable Algorithms, Classes of Efficient Algorithms: Divide and Conquer, Dynamic Programming, and Greedy algorithms.

5. DIGITAL LOGIC AND COMPUTER ORGANIZATION:

5.1 Digital Logic (3%)
Overview of Binary Numbers, Boolean Algebra, switching algebra, and logic gates; Karnaugh Map and Quine-McCluskey methods, simplification of Boolean functions; Combinational Design: two-level NAND/NOR implementation, Tabular Minimization; Combinational Logic Design: adders, subtractors, code converters, parity checkers, multilevel NAND/NOR/XOR circuits; MSI Components: design and use of encoders, decoders, multiplexers, BCD adders, and comparators; Latches and flip-flops, Synchronous sequential circuit design and analysis, Registers, synchronous and asynchronous counters, and memories; Control Logic Design.

5.2 Computer Organization (3%)
Fundamentals of Computer Design including performance measurements & quantitative principles, principles of Instruction Set Design, Operands, addressing modes and encoding; pipelining of Processors: Issues and Hurdles, exception handling features, Instruction-Level Parallelism and Dynamic handling of Exceptions; Memory Hierarchy Design, Cache Design, Performance Issues and improvements, Main Memory Performance Issues, Storage Systems, Multiprocessors and Thread-Level Parallelism.

6. OPERATING SYSTEMS:

History and Goals, Evolution of multi-user systems, Process and CPU management, Multithreading, Kernel and User Modes, Protection, Problems of cooperative processes, Synchronization, Deadlocks, Memory management and virtual memory, Relocation, External Fragmentation, Paging and Demand Paging, Secondary storage, Security and Protection, File systems, I/O systems, Introduction to distributed operating systems. Scheduling and dispatch, Introduction to concurrency.


7. DATABASE SYSTEMS:

Basic database concepts; Entity Relationship modelling, Relational data model and algebra, Structured Query language; RDBMS; Database design, functional dependencies and normal forms; Transaction processing and optimization concepts; concurrency control and recovery techniques; Database security and authorization. Physical database design: Storage and file structure; indexed files; b-trees; files with dense index; files with variable length records; database efficiency and tuning.


8. SOFTWARE ENGINEERING AND DEVELOPMENT:

Introduction to Computer-based System Engineering; Project Management; Software Specification; Requirements Engineering, System Modelling; Requirements Specifications; Software Prototyping; Software Design: Architectural Design, Object-Oriented Design, UML modelling, Function-Oriented Design, User Interface Design; Quality Assurance; Processes & Configuration Management; Introduction to advanced issues: Reusability, Patterns; Assignments and projects on various stages and deliverables of SDLC.

9. COMPUTER COMMUNICATION AND NETWORKS:

Analogue and digital Transmission, Noise, Media, Encoding, Asynchronous and Synchronous transmission, Protocol design issues. Network system architectures (OSI, TCP/IP), Error Control, Flow Control, Data Link Protocols (HDLC, PPP). Local Area Networks and MAC Layer protocols (Ethernet, Token ring), Multiplexing, Switched and IP Networks, Inter-networking, Routing, Bridging, Transport layer protocols TCP/IP, UDP. Network security issues. Programming exercises, labs or projects involving implementation of protocols at different layers.

10. COMPUTER ARCHITECTURE AND ASSEMBLY LANGUAGE:
Microprocessor Bus Structure: Addressing, Data and Control, Memory Organization and Structure (Segmented and Linear Models), Introduction to Registers and Flags, Data Movement, Arithmetic and Logic, Programme Control, Subroutines, Stack and its operation, Peripheral Control Interrupts.
Objectives and Perspectives of Assembly Language, Addressing Modes, Introduction to the Assembler and Debugger, Manipulate and translate machine and assembly code. Interfacing with high level languages, Real-time application.

11. THEORY OF AUTOMATA AND FORMAL LANGUAGES:

Finite State Models: Language definition preliminaries, Regular expressions/Regular languages, Finite automata (FAs), Transition graphs (TGs), NFAs, Kleene's theorem, Transducers (automata with output), Pumping lemma and non-regular languages. Grammars and PDA: Context-free grammars, Derivations, derivation trees and ambiguity, Simplifying CFLs, Normal form grammars and parsing, Decidability, Chomsky's hierarchy of grammars. Turing Machines Theory: Turing machines, Post machine, Variations on TM, TM encoding, Universal Turing Machine, Context-sensitive Grammars, Defining Computers by TMs.

12. ANALYSIS OF ALGORITHMS:

Asymptotic notations; Recursion and recurrence relations; Divide-and-conquer approach; Sorting; Search trees; Heaps; Hashing; Greedy approach; Dynamic programming; Graph algorithms; Shortest paths; Network flow; Disjoint Sets; Polynomial and matrix calculations; String matching; NP complete problems; Approximation algorithms


13. ARTIFICIAL INTELLIGENCE:

Artificial Intelligence: Introduction, Intelligent Agents. Problem-solving: Solving Problems by Searching, Informed Search and Exploration, Constraint Satisfaction Problems, Adversarial Search. Knowledge and reasoning: Logical Agents, First-Order Logic, Inference in First-Order Logic, Knowledge Representation. Planning and Acting in the Real World. Uncertain knowledge and reasoning: Uncertainty, Probabilistic Reasoning, Probabilistic Reasoning over Time, Making Simple Decisions, Making Complex Decisions. Learning: Learning from Observations, Knowledge in Learning, Statistical Learning Methods, Reinforcement Learning. Communicating, perceiving, and acting: Communication, Probabilistic Language Processing, Perception and Robotics. Introduction to LISP/PROLOG and Expert Systems (ES) and Applications.


14. SYSTEM PROGRAMMING:

System Programming overview: Application vs. System Programming, System Software, Operating System, Device Drivers, OS Calls. Windows System Programming for the Intel 386 Architecture: 16-bit vs. 32-bit programming, 32-bit flat memory model, Windows Architecture, Virtual Machine (VM) Basics, System Virtual Machine, Portable Executable format, Ring 0 computing, Linear Executable format, Virtual Device Drivers (VxD), New Executable format, Module Management, COFF object format (16-bit). Unix and other 32-bit OS programming for the i386: Unix binary file format (ELF), Dynamic shared objects, Unix Kernel Programming (Ring 0), Unix Device Architecture (Character & Block Devices), Device Driver Development, Enhancing the Unix Kernel.

15. NUMERICAL COMPUTING:

The concepts of efficiency, reliability and accuracy of a method. Minimising computational errors. Theory of Differences, Difference Operators, Difference Tables, Forward Differences, Backward Differences and Central Differences. Mathematical Preliminaries, Solution of Equations in one variable, Interpolation and Polynomial Approximation, Numerical Differentiation and Numerical Integration, Initial Value Problems for Ordinary Differential Equations, Direct Methods for Solving Linear Systems, Iterative Techniques in Matrix Algebra, Solution of non-linear equations.
#3 (Nayla Khan, Thursday, May 23, 2013)

OPERATING SYSTEMS


History
Historically, operating systems have been tightly tied to computer architecture, so it is a good idea to study the history of operating systems alongside the architecture of the computers on which they ran.

Operating systems have evolved through a number of distinct phases or generations, which correspond roughly to the decades.

The 1940's - First Generations
• The earliest electronic digital computers had no operating systems.
• Programs were often entered one bit at a time on rows of mechanical switches (plug boards).
• Programming languages were unknown (not even assembly languages).
• Operating systems were unheard of.

The 1950's - Second Generation
• Introduction of punch cards.
• The General Motors Research Laboratories implemented the first operating system in the early 1950s for their IBM 701.
• The systems of the 1950s generally ran one job at a time.
• These were called single-stream batch processing systems because programs and data were submitted in groups or batches.

The 1960's - Third Generation

• Batch processing systems that could run several jobs at once.
• Concept of multiprogramming in which several jobs are in main memory at once, a processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use. While one job was waiting for I/O to complete, another job could be using the CPU.
• Spooling (simultaneous peripheral operations on line). In spooling, a high-speed device like a disk is interposed between a running program and a low-speed device involved with the program in input/output. Instead of writing directly to a printer, for example, output is written to the disk. Programs can run to completion faster, and other programs can be initiated sooner; when the printer becomes available, the output is printed.
• Time-sharing, a variant of multiprogramming, in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests, otherwise user productivity could suffer. Time-sharing systems were developed to multiprogram a large number of simultaneous interactive users.

Fourth Generation

• Operating systems entered the personal computer and workstation age.
• Two operating systems have dominated the personal computer scene:
• MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and
• UNIX, which is dominant on the larger personal computers using the Motorola 68000 CPU family.

Goals of Operating System:
There are two main goals of an operating system:
• Convenience for the user: the operating system is supposed to make it easier to compute. This view is particularly clear when you look at operating systems for small PCs.
• Efficient operation of the computer system: this matters most for large, shared, multi-user systems. These systems are expensive, so it is desirable to make them as efficient as possible.

Evolution of multi-user systems
A multi-user operating system is a computer operating system (OS) that allows multiple users on different computers or terminals to access a single system with one OS on it.

A multi-user operating system allows multiple users to access the data and processes of a single machine from different computers or terminals. These were previously often connected to the larger system through a wired network, though now wireless networking for this type of system is more common.

• These programs are often quite complicated and must be able to properly manage the necessary tasks required by the different users connected to the system.
• The users will typically be at terminals or computers that give them access to the system through a network.
• A multi-user operating system differs from a single-user system on a network in that each user is accessing the same OS at different machines.
• Multiple people require the system to be functioning properly simultaneously. This type of system is often used on mainframes and similar machines, and if the system fails it can affect dozens or even hundreds of people.
• A multi-user operating system is often used in businesses and offices where different users need to access the same resources, but these resources cannot be installed on every system. In a multi-user operating system, the OS must be able to handle the various needs and requests of all of the users effectively.

• The multi-user operating system must ensure that each user does not hinder the efforts of another, and that if the system fails or has an error for one user, it does not necessarily affect all of the other users. This makes a multi-user operating system typically quite a bit more complicated than a single-user system that only needs to handle the requests and operations of one person.

• Example: the OS may need to handle numerous people attempting to use a single printer simultaneously. The system processes the requests and places the print jobs in a queue that keeps them organized and allows each job to print out one at a time (a minimal sketch of such a queue follows). Without a multi-user OS, the jobs could become intermingled and the resulting printed pages would be virtually incomprehensible.
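
To make the print-spooler example concrete, here is a minimal sketch in C of an array-backed FIFO job queue. The job names, queue size, and function names are illustrative assumptions, not anything from the syllabus or a real spooler.

```c
/* Minimal sketch of a print-job queue: jobs from several users are
 * enqueued and printed strictly one at a time (FIFO), so output from
 * different users never intermingles. Names and sizes are assumptions. */
#include <stdio.h>
#include <string.h>

#define MAX_JOBS 16

static char queue[MAX_JOBS][32];
static int head = 0, tail = 0, count = 0;

static int enqueue(const char *job) {
    if (count == MAX_JOBS) return -1;            /* queue full */
    strncpy(queue[tail], job, sizeof queue[tail] - 1);
    queue[tail][sizeof queue[tail] - 1] = '\0';
    tail = (tail + 1) % MAX_JOBS;
    count++;
    return 0;
}

static int dequeue(char *out, size_t n) {
    if (count == 0) return -1;                   /* queue empty */
    strncpy(out, queue[head], n - 1);
    out[n - 1] = '\0';
    head = (head + 1) % MAX_JOBS;
    count--;
    return 0;
}

int main(void) {
    /* Requests arrive "simultaneously" from different users... */
    enqueue("user1:report.txt");
    enqueue("user2:thesis.pdf");
    enqueue("user3:slides.ppt");

    /* ...but the spooler prints them one at a time, in order. */
    char job[32];
    while (dequeue(job, sizeof job) == 0)
        printf("printing %s\n", job);
    return 0;
}
```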
#4 (Nayla Khan, Thursday, May 23, 2013)

OPERATING SYSTEMS (CONTINUED)


CPU MANAGEMENT
The heart of managing the processor comes down to two related issues:
• Ensuring that each process and application receives enough of the processor's time to function properly
• Using as many processor cycles as possible for real work

The basic unit of software that the operating system deals with in scheduling the work done by the processor is either a process or a thread, depending on the operating system.

The application you see (word processor, spreadsheet or game) is, indeed, a process, but that application may cause several other processes to begin, for tasks like communications with other devices or other computers. There are also numerous processes that run without giving you direct evidence that they ever exist. For example, Windows XP and UNIX can have dozens of background processes running to handle the network, memory management, disk management, virus checks and so on.
A process, then, is software that performs some action and can be controlled by a user, by other applications or by the operating system.
It is processes, rather than applications, that the operating system controls and schedules for execution by the CPU. In a single-tasking system, the schedule is straightforward. The operating system allows the application to begin running, suspending the execution only long enough to deal with interrupts and user input.

Interrupts are special signals sent by hardware or software to the CPU. It's as if some part of the computer suddenly raised its hand to ask for the CPU's attention in a lively meeting. Sometimes the operating system will schedule the priority of processes so that interrupts are masked -- that is, the operating system will ignore the interrupts from some sources so that a particular job can be finished as quickly as possible. Non-maskable interrupts (NMIs), by contrast, must be dealt with immediately, regardless of the other tasks at hand.

While interrupts add some complication to the execution of processes in a single-tasking system, the job of the operating system becomes much more complicated in a multi-tasking system. Now, the operating system must arrange the execution of applications so that you believe that there are several things happening at once. This is complicated because the CPU can only do one thing at a time. Today's multi-core processors and multi-processor machines can handle more work, but each processor core is still capable of managing one task at a time.

In order to give the appearance of lots of things happening at the same time, the operating system has to switch between different processes thousands of times a second. Here's how it happens (a toy simulation in C follows the list):
• A process occupies a certain amount of RAM.
• When two processes are multi-tasking, the operating system allots a certain number of CPU execution cycles to one program.
• After that number of cycles, the operating system makes copies of all the registers, stacks and queues used by the process, and notes the point at which the process paused in its execution.
• It then loads all the registers, stacks and queues used by the second process and allows it a certain number of CPU cycles.
• When those are complete, it makes copies of all the registers, stacks and queues used by the second program, and loads the first program again.
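
Below is the toy round-robin simulation of that save/restore cycle, written in C. The "registers" here are just a struct and each "process" is a counting loop; a real kernel performs the switch in assembly with hardware support, so everything named here is an illustrative assumption.

```c
/* Toy round-robin simulation of the save/restore cycle described above.
 * Each "process" keeps its state in a context struct that the scheduler
 * hands back to it for a fixed number of cycles per turn. */
#include <stdio.h>

struct context {            /* the state the OS copies out and back in */
    long pc;                /* simulated program counter */
    long acc;               /* simulated accumulator register */
};

static void run_slice(struct context *ctx, int cycles, const char *name) {
    for (int i = 0; i < cycles; i++) {   /* the allotted CPU cycles */
        ctx->acc += 1;
        ctx->pc  += 1;
    }
    printf("%s paused at pc=%ld acc=%ld\n", name, ctx->pc, ctx->acc);
}

int main(void) {
    struct context a = {0, 0}, b = {0, 0};   /* two runnable processes */
    for (int round = 0; round < 3; round++) {
        run_slice(&a, 5, "process A");  /* A runs, then its state is saved */
        run_slice(&b, 5, "process B");  /* B's saved state is reloaded and it runs */
    }
    return 0;
}
```

Each process resumes exactly where it paused because its context was saved, which is the whole point of the bullet list above.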


MULTITHREADING

Multithreading is the ability of an operating system to concurrently run programs that have been divided into subcomponents, or threads.
Multithreading, when done correctly, offers better utilization of processors and other system resources. Multithreaded programming requires a multitasking/multithreading operating system, such as UNIX/Linux, Windows NT/2000 or OS/2, capable of running many programs concurrently. A word processor can make good use of multithreading, because it can spell check in the foreground while saving to disk and sending output to the system print spooler in the background.
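
As a rough illustration of the word-processor example, here is a small POSIX threads sketch in C in which one thread "saves to disk" in the background while the main thread keeps "spell checking" in the foreground. The sleeps stand in for real work and all names are assumptions.

```c
/* Minimal pthreads sketch: a background autosave thread runs while the
 * main thread does foreground work. Build with: cc example.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *autosave(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) {
        sleep(1);                              /* pretend to write the file */
        printf("[background] document saved\n");
    }
    return NULL;
}

int main(void) {
    pthread_t saver;
    pthread_create(&saver, NULL, autosave, NULL);

    for (int word = 1; word <= 6; word++) {    /* foreground spell check */
        usleep(500000);                        /* pretend to check one word */
        printf("[foreground] checked word %d\n", word);
    }

    pthread_join(saver, NULL);                 /* wait for background save */
    return 0;
}
```

Running it shows the foreground and background messages interleaving, which is the visible effect of the two threads sharing the process.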


KERNEL AND USER MODE


The processor switches between the two modes depending on what type of code is running on the processor. Applications run in user mode, and core operating system components run in kernel mode. Many drivers run in kernel mode, but some drivers run in user mode.

• When you start a user-mode application, Windows creates a process for the application. The process provides the application with a private virtual address space and a private handle table. Because an application's virtual address space is private, one application cannot alter data that belongs to another application. Each application runs in isolation, and if an application crashes, the crash is limited to that one application. Other applications and the operating system are not affected by the crash.
In addition to being private, the virtual address space of a user-mode application is limited. A processor running in user mode cannot access virtual addresses that are reserved for the operating system. Limiting the virtual address space of a user-mode application prevents the application from altering, and possibly damaging, critical operating system data.

• All code that runs in kernel mode shares a single virtual address space. This means that a kernel-mode driver is not isolated from other drivers and the operating system itself. If a kernel-mode driver accidentally writes to the wrong virtual address, data that belongs to the operating system or another driver could be compromised. If a kernel-mode driver crashes, the entire operating system crashes.


PROBLEMS OF COOPERATIVE PROCESSES


• An independent process cannot affect or be affected by the execution of another process.
• A cooperating process can affect or be affected by the execution of another process.
• Any process that shares data with other processes is a cooperating process.

Advantages of process cooperation (a minimal fork/pipe sketch follows this list):
– Information sharing – such as shared files.
– Computation speed-up – to run a task faster, we must break it into subtasks, each of which executes in parallel. This speed-up can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
– Modularity – constructing a system in a modular fashion (i.e., dividing the system functions into separate processes).
– Convenience – one user may have many tasks to work on at one time. For example, a user may be editing, printing, and compiling in parallel.
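
Here is the cooperating-processes sketch referred to above: on a POSIX system, a child process created with fork() computes half of a sum and shares its result with the parent over a pipe. The split of the work and all values are illustrative assumptions meant only to show information sharing and computation speed-up.

```c
/* Two cooperating processes: the child computes a partial result and
 * shares it with the parent through a pipe. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: sum 1..50 */
        long sum = 0;
        for (int i = 1; i <= 50; i++) sum += i;
        write(fd[1], &sum, sizeof sum);   /* share the result */
        close(fd[1]); close(fd[0]);
        exit(0);
    }

    long child_sum = 0, parent_sum = 0;
    for (int i = 51; i <= 100; i++) parent_sum += i;   /* parent: sum 51..100 */
    read(fd[0], &child_sum, sizeof child_sum);         /* receive child's part */
    close(fd[0]); close(fd[1]);
    wait(NULL);

    printf("total = %ld\n", child_sum + parent_sum);   /* prints 5050 */
    return 0;
}
```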

PAGING AND DEMAND PAGING


Paging
The basic method of implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames, so a contiguous memory space is not needed. Paging is carried out by the Memory Management Unit.

Advantages of paging
• Eliminates the need for contiguous memory for a process.
• Avoids external fragmentation (free memory being broken into small, scattered pieces that cannot satisfy larger requests).

Disadvantages of paging
• Internal fragmentation still exists.
• Memory reference overhead: 2 references into memory (1 for the page table and 1 for the frame).

How is paging done?
Every address generated by the CPU is divided into a page number and a page offset. The page number is used as an index into the page table. The page table for a process has one entry for each of the process's pages and records the frame in which each page resides.
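
A small C sketch of that address translation is shown below; the 4 KB page size and the hard-coded four-entry page table are assumptions chosen only to make the arithmetic visible.

```c
/* How a CPU-generated address splits into page number and offset,
 * and how the page table maps the page to a frame. */
#include <stdio.h>

#define PAGE_SIZE 4096u       /* 2^12 bytes per page (assumed) */

int main(void) {
    /* page_table[p] = frame holding page p (made-up values) */
    unsigned page_table[4] = { 7, 2, 9, 5 };

    unsigned addr   = 0x2345;                 /* address generated by the CPU */
    unsigned page   = addr / PAGE_SIZE;       /* index into the page table */
    unsigned offset = addr % PAGE_SIZE;       /* position inside the page   */
    unsigned frame  = page_table[page];
    unsigned phys   = frame * PAGE_SIZE + offset;

    printf("virtual 0x%x -> page %u, offset 0x%x -> frame %u -> physical 0x%x\n",
           addr, page, offset, frame, phys);
    return 0;
}
```

Here 0x2345 falls in page 2 with offset 0x345, so it lands in frame 9 at physical address 0x9345.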

DEMAND PAGING

A demand-paging system is similar to a paging system with swapping. Swapping is the process of rolling pages into and out of main memory: only the needed information is kept in main memory, and the rest is put in secondary storage.

Imagine a process as a collection of a large number of pages. All pages together form the virtual memory. Not all pages are needed at once at any point during execution. Hence, demand paging lets the operating system bring only the relevant pages into physical memory while giving the user the illusion of a much larger virtual memory.

How is demand paging carried out?
• Needed pages are in main memory; other pages are in secondary storage.
• When a required page is not in memory, the lookup in main memory fails and a page fault occurs.
• Only when a page fault occurs is the requested page brought in from secondary storage. The new page is swapped with another page (preferably one that won't be needed in the near future), as the toy simulation below illustrates.
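
The toy simulation mentioned above: a reference string is served from three frames, every miss counts as a page fault, and once the frames are full the oldest page is evicted. The reference string, frame count, and FIFO replacement policy are assumptions for illustration (real systems usually approximate LRU rather than pure FIFO).

```c
/* Toy demand-paging simulation with FIFO replacement: count page faults
 * while serving a reference string from a small set of frames. */
#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof refs / sizeof refs[0];

    int frames[NFRAMES];
    int used = 0, next_victim = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int page = refs[i], hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == page) { hit = 1; break; }

        if (!hit) {                      /* page fault: fetch from "disk" */
            faults++;
            if (used < NFRAMES) {
                frames[used++] = page;   /* a free frame is available */
            } else {
                frames[next_victim] = page;           /* evict oldest page */
                next_victim = (next_victim + 1) % NFRAMES;
            }
        }
        printf("ref %d -> %s\n", page, hit ? "hit" : "page fault");
    }
    printf("%d faults in %d references\n", faults, nrefs);
    return 0;
}
```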

Advantages of demand paging
  • More space in physical memory. Hence more processes can be run in the same memory.
  • Less loading latency occurs at program start-up, as less information is accessed from secondary storage and less information is brought into main memory.

Disadvantages of demand paging
  • A page fault results in a lookup in secondary storage. Secondary storage is slower, so the process is slower. This is one trade-off for memory.

THRASHING

When paging is used, a problem called "thrashing" can occur, in which the computer spends an unsuitably large amount of time swapping pages to and from a backing store, hence slowing down useful work. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help. Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes.

EXTERNAL FRAGMENTATION


In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity and often performance. Fragmentation leads to storage space being "wasted", and the term also refers to the wasted space itself.

There are three different but related forms of fragmentation:
• external fragmentation,
• internal fragmentation,
• data fragmentation.

External fragmentation

External fragmentation arises when free memory is separated into small blocks and is interspersed by allocated memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are too small individually to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions.

For example, consider a situation wherein a program allocates 3 contiguous blocks of memory and then frees the middle block. The memory allocator can use this free block for future allocations. However, it cannot use this block if the memory to be allocated is larger than this free block.
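
The following C sketch simulates that situation with a tiny hand-built free list rather than malloc/free: after the middle block is freed, free memory sits in two separate holes, so a request larger than the biggest hole fails even though the total free space would suffice. The region layout and all sizes are made up for illustration.

```c
/* Toy illustration of external fragmentation: free memory exists, but in
 * holes that are individually too small for the next request. */
#include <stdio.h>

struct hole { int start, len; };

int main(void) {
    /* A [0,100) and C [200,300) remain allocated; B [100,200) was freed,
     * and [300,380) at the end of the region was never allocated. */
    struct hole holes[] = { {100, 100}, {300, 80} };
    int nholes = sizeof holes / sizeof holes[0];

    int request = 150, total_free = 0, largest = 0;
    for (int i = 0; i < nholes; i++) {
        total_free += holes[i].len;
        if (holes[i].len > largest) largest = holes[i].len;
    }

    if (request <= largest)
        printf("request %d can be satisfied\n", request);
    else
        printf("request %d fails: largest hole is %d bytes, "
               "but total free space is %d bytes (external fragmentation)\n",
               request, largest, total_free);
    return 0;
}
```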