CSS Forums > Punjab Public Service Commission > PPSC Others Examinations
Friday, July 05, 2013 | bluesky

Deputy Headmistress Notes

Administrative and Supervisory Structure and Operation

According to the Constitution of Pakistan (1973), the Federal Government is entrusted with the responsibility for policy, planning, and promotion of educational facilities in the federating units. This responsibility is in addition to its overall policymaking, coordinating and advisory authority; otherwise, education is a provincial subject. The Federal Ministry of Education administers the educational institutions located in the federal capital territory. Universities located in the various provinces are administered by the provincial governments but are exclusively funded by the federal government through the Higher Education Commission.

The Federal Ministry of Education is headed by the Minister of Education. The most senior civil servant in the Ministry is the Education Secretary, assisted by a Joint Secretary and the Joint Educational Advisors of each wing. There are six wings in the Federal Ministry of Education, and each wing is headed by a Joint Educational Advisor.

The provincial Education Departments are headed by their respective Provincial Education Ministers. The civil servant in charge of each department is the Provincial Education Secretary. The provinces are further divided into districts for the purpose of administration. The head of the Education Department in a district is the Executive District Officer (EDO). A separate Literacy Department functions only in Punjab and Sindh, where it is headed by an Executive District Officer (EDO) Literacy. In the provinces of KPK and Balochistan, literacy is part of the Education Department. The hierarchy then runs down to the District Education Officer, Sub-district Education Officer, and Supervisors or Assistant Sub-district Education Officers.

At the grass-roots level (the union council level), Learning Coordinators (LCs) provide academic guidance as well as supervise the schools. The administrative structure has been decentralized under the Devolution Plan. Village Education Committees (VECs) / School Management Committees (SMCs) have been set up in the provinces at the grass-roots level.
Present Scenario of Education in Pakistan

The government of Pakistan recognizes that education is the basic right of every citizen and that access to education for every citizen is crucial for economic development and for poverty alleviation. The present government has given much importance to the education sector: it has emphasized not only raising the present literacy rate but also improving the quality of education. The overall estimated literacy rate was 50.5 percent (63 percent for males and 38 percent for females) during 2001-02. The urban literacy rate was 70 percent and the rural literacy rate 30 percent during the same period. Pakistan's net primary enrolment rate was 66 percent (male 82 percent, female 50 percent) and its gross enrolment rate 78 percent (male 91 percent, female 64 percent) during 2000-01. About 45 percent of children who enrol in grade 1 drop out before completing the primary education cycle (male dropout 45 percent, female dropout 54 percent). There are about 4 million children in the 5-9 age group who are out of school; these include children who never enrolled as well as those who dropped out.

Enrolment at the primary level was 16.63 million during 2000-01. Gross enrolment at the middle level was 34 percent (male 36 percent, female 33 percent) in 2000-01. Gross enrolment at the secondary level was 22 percent: 20 percent for females and 24 percent for males. The total number of Arts and Science colleges was 916 (536 for males and 380 for females), with an enrolment of 763,000 during 2000-01. There are 68 universities in Pakistan, with an enrolment of 1.1 million; of these, 40 are managed by the public sector. There are 203,439 educational institutions in Pakistan, of which 36,096 are run by the private sector, giving the private sector a share of about 18 percent.

The major issues and challenges facing the education system include a low literacy rate, a high dropout rate, widespread teacher absenteeism, a weak management and supervision structure, a shortage of trained and qualified teachers (especially female teachers), a lack of teacher dedication, motivation and interest in the profession, and a lack of physical facilities. Moreover, the curriculum is largely outdated and irrelevant and does not meet present-day requirements.
Education For All (EFA)

Education For All refers to the global commitment to ensure that by 2015 all children would complete primary education of good quality (Universal Primary Completion), and that gender disparity would be eliminated in primary and secondary education, preferably by 2005 and no later than 2015. This commitment was made at the World Education Forum in Dakar, Senegal, in April 2000 and reaffirmed in the Millennium Declaration in New York in September 2000. The Government of Pakistan attaches top priority to EFA. The country has a ten-year Perspective Development Plan (2001-11) that lays out long-term macro-economic and sectoral growth strategies; poverty reduction and human development are priority areas of the Plan. A sector-wide development approach covering all sectors of education has been adopted under the Perspective Plan. To address the EFA implications, a linkage plan focusing on the development of other sectors of education has also been prepared.

Nearly 80% of the ESR covers the different goals of Education for All by 2015: reducing illiteracy by 50 percent, with a focus on reducing the gender gap, by 2015; life skills and learning opportunities for youth and adults; and early childhood education. The groups targeted by the EFA goals belong to disadvantaged communities with minimal opportunities. These groups are highly vulnerable, without access to learning facilities, or with access only to public-sector facilities functioning at sub-optimal levels.
Female Education

The Pakistani educational system has demonstrated a discriminatory trend against women. This bias is evident in the pattern of literacy, which shows a strong correlation between gender and literacy rates. The illiteracy rate is very high among Pakistani women of all age groups. In 1998, adult illiteracy rates were 42 percent for males and 71.1 percent for females. In the same year, the illiteracy rates for male and female youth were 25 and 53 percent, respectively. This gender-based discriminatory trend in education has contributed to the persistence of illiteracy and to a chronic shortage of educated people, and has had a major impact on the continued underdevelopment of Pakistan.

Teachers’ Training

In Pakistan, there are 90 Colleges of Elementary Education, which offer teacher-training programs for the Primary Teaching Certificate (PTC) and the Certificate in Teaching (CT) to primary school teachers. For secondary school teachers, there are 16 Colleges of Education offering graduate degrees in education, and there are departments of education in 9 universities that train teachers at the master's level. There are only 4 institutions which offer in-service teacher training. Besides these, the Allama Iqbal Open University, Islamabad, offers a very comprehensive teacher-training program based on distance learning; its total enrolment is about 10,000 per annum, of which 7,000 complete various courses every year.

Private Education Sector

Private-sector involvement in education is encouraging. The Federal Bureau of Statistics survey (1999-2000) indicates that there are 36,096 private educational institutions in Pakistan. About 61 percent of these institutions are in urban areas and 39 percent in rural areas. The private sector's share of enrolment is 18 percent at the primary school level, 16 percent at the middle school level and 14 percent at the high school level.

It has been observed that most private schools select their own curricula and textbooks, which do not conform with those of public schools. The majority of the schools are "English medium", which attracts parents to send their children to them. Most of the schools are overcrowded and do not have adequate physical facilities, and they usually charge high fees. Most of the schools are unregistered; therefore, in most cases the certificates issued by these institutions are not recognized by public schools. The majority of these institutions function in rented buildings.

The National Education Policy 1998-2010 proposed that there shall be regulatory bodies at the national and provincial levels to regulate the activities and ensure the smooth functioning of privately managed schools and institutions of higher education through proper rules and regulations. A reasonable tax rebate shall be granted on the expenditure incurred on setting up educational facilities by the private sector. Grants-in-aid for specific purposes shall be provided to private institutions. The setting up of private technical institutions shall be encouraged. Matching grants shall be provided for establishing educational institutions by the private sector in rural areas or poor urban areas through the Education Foundation. In rural areas, schools shall be established through public-private partnership schemes. The government shall not only provide free land to build the school but also bear a reasonable proportion of the cost of construction and management. Liberal loan facilities shall be extended to private educational institutions by financial institutions.

Despite all the shortcomings of private education mentioned above, the PIHS survey indicates that enrolment rates in public schools have declined since 1995-96, with a particularly large decline in rural areas. Parents generally perceive the quality of education in private schools to be better than in public schools; therefore, those who can afford it prefer to send their children to private schools. These trends indicate that the public education system is unable to meet the public demand for quality education in the country.
Measurement And Testing

As discussed earlier, measurement is an essential component of the evaluation process. It is a critical part, since the resulting decisions are only as good as the data upon which they are based. In a general sense, data collection is involved in all phases of evaluation: the planning phase, the process phase and the product phase. Measurement, however, the process of quantifying the degree to which someone or something possesses a given trait, normally occurs in the process and product phases.

Testing is necessary at certain points and useful at others. Testing can be conducted at the end of an instruction cycle (semester, term or unit). Such posttesting is for the purpose of determining the degree to which objectives, formal or informal, have been achieved, be they instructional objectives or program objectives. Frequently, pretest or baseline data are collected at the beginning of the cycle. Pretests serve several purposes, the most important being that knowledge of the current status of a group may provide guidance for future activities as well as a basis of comparison for posttest results.
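The role of pretest data as a basis of comparison can be sketched numerically. All the scores below are invented for illustration; the point is only that the posttest results become interpretable once baseline data exist:

```python
import statistics

# Hypothetical pretest and posttest scores for the same five students,
# collected at the beginning and end of an instruction cycle.
pretest = [40, 55, 38, 60, 45]
posttest = [58, 70, 52, 71, 60]

# Per-student gain: the posttest result interpreted against the baseline.
gains = [post - pre for pre, post in zip(pretest, posttest)]
average_gain = statistics.mean(gains)
print(average_gain)  # mean improvement over the cycle: 14.6
```

Without the pretest column, a posttest mean of 60.2 says nothing about how much the group improved during the cycle.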
There are a variety of situations where testing is useful. A teacher may administer tests of entry behaviour to determine whether assumed prerequisites have indeed been achieved. A special project designed to reduce dropouts may administer attitude tests and tests of personality variables such as introversion, aggression and anxiety in an effort to identify potential dropouts or to better understand students having difficulties. A school may administer tests of scholastic aptitude in order to determine realistic achievement goals for students and to assist in the guidance process.

Data Collection

There are three major ways to collect data:
i) administer a standardized instrument
ii) administer a locally developed instrument
iii) record naturally available data (such as grade point averages and absenteeism)
Depending upon the situation, one of these ways may be most appropriate, or a combination may be required. Collection of available data, requiring minimum effort, sounds very attractive. There are not very many situations, however, for which this type of data is appropriate. Even when it is appropriate, that is, when it will facilitate the intended decision making, there are problems inherent in this type of data. For example, the same letter grade does not necessarily represent the same level of achievement, even in two different classes in the same school or two different schools in the same system. Further, the records from which the data are taken may be incomplete and disorganized. Developing an instrument for a particular purpose also has several major drawbacks. The development of a 'good' instrument requires considerable time, effort and skill. Training at least equivalent to a course in testing and evaluation is necessary in order to acquire the skills for good instrument development.

In contrast, the time it takes to select an appropriate instrument (usually from among standardized, commercially available instruments) is inevitably less than the time it takes to develop an instrument which measures the same thing. Further, standardized instruments are typically developed by experts who possess the necessary skills. Thousands of standardized instruments are available which yield a wide variety of data for a wide variety of purposes. Major areas for which numerous measuring instruments have been developed include achievement, personality and aptitude. Each of these can in turn be further divided into many subcategories. In general, it is usually a good idea to find out whether a suitable instrument is already available before jumping into instrument development. There are situations, however, for which the use of available instruments is impractical or inappropriate. A teacher-made test, for example, is more appropriate or valid for assessing the degree to which students have achieved the objectives of a given unit.

Classification Schemes

At this point it must be emphasized that a test is not necessarily a written set of questions to which an individual responds in order to determine whether he or she passes. A more inclusive definition of a test is a means of measuring the knowledge, skills, feelings, intelligence or aptitude of an individual or a group. Tests produce numerical scores which can be used to identify, classify or otherwise evaluate test takers. While in practice most tests are in paper-and-pencil form, there are many different kinds of tests and many different ways to classify them. The various classification schemes overlap considerably, and the categories are by no means mutually exclusive. Any test can be classified on more than one dimension.

Response Behaviours

The term response behaviours refers to the way in which the behaviours to be measured are exhibited. While in some cases responses to questions or other stimuli are given orally, usually they are either written or take the form of an actual performance.

A. Written responses

Written tests can be classified as either essay (subjective) or objective, and as either standardized or locally developed.

Essay vs. Objective Tests

i) Essays
An essay test is one in which the number of questions is limited and responders must compose answers, typically lengthy, e.g., 'identify and discuss the major reforms in African education during the colonial period.' Determining the goodness or correctness of answers to such questions involves some degree of subjectivity.

ii) Objective tests
An objective test is one for which subjectivity in scoring is eliminated, at least theoretically. In other words, anyone scoring a given test should come up with the same score. Examples of objective tests are multiple-choice tests, true-false tests, matching tests, and short-answer tests.

Standardized Vs Locally Developed Tests

i) Standardized tests

A standardized test is one that is developed by subject matter and measurement specialists, that is field-tested under uniform administration procedures, that is revised to meet certain criteria, and that is scored and interpreted using uniform procedures and standards. Standardization permeates all aspects of the test, to the degree that it can be administered and scored exactly the same way every time it is given. Although other measurement instruments can be standardized, most standardized tests are objective, written tests requiring written responses. Although exceptions occur, the vast majority of standardized tests have been administered to groups referred to as norm groups. The performance of a norm group on a given test serves as the basis of comparison and interpretation for other groups to whom the test is administered. The scores of a norm group are called norms. Ideally, the norm group is a large, well-defined group which is representative of the group and subgroups for whom the test is intended.

ii) Locally-developed tests

The opposite of a standardized test is obviously a non-standardized test. Such tests are usually developed locally for a specific purpose. The tests used by teachers in the classroom are examples of locally-developed tests. Such tests do not have the characteristics of standardized tests. A locally-developed test may be as good as a standardized test, but not that often. Use of a locally-developed test is usually more practical and more appropriate, and it is likely to reflect what was actually taught in the classroom to a greater degree than a standardized test.

B. Performance Tests

For many objectives and areas of learning, the use of written tests is an inappropriate way of measuring behaviour. You cannot determine how well a student can type a letter, for example, with a multiple-choice test or an open question. Performance is one of these areas. Performance can take the form of a procedure or a product. A procedure is a series of steps, usually in a definite order, executed in performing an act or a task. Examples include adjusting a microscope, passing a football, setting margins on a typewriter, drawing geometric figures, or calculating a sum of figures in Excel. A product is a tangible outcome or result. Examples of a product include a typed letter, a painting, a poem, and a science project. In either case the performance is observed and rated in some way. Thus, a performance test is one which requires the execution of an act or the development of a product in order to determine whether, or to what degree, a given ability or trait exists.

Data Collection Methods

There are many different ways to collect data and classifying data collection
methods is not easy. However, a logical way to categorize them initially is in terms
of whether the data are obtained through self-report or observation.

a) Self Report

Self-report data consist of oral or written responses from individuals. An
obvious type of self-report data is that resulting from the administration
of standardized or locally-developed written tests, including certain
achievement, personality and aptitude tests. Another type of self-report
measure used in certain evaluation efforts is the questionnaire, an
instrument with which you are probably familiar. Also, interviews are
sometimes used.

b) Observation

When observation is used, data are collected not by asking but by observing. A person being observed usually does not write anything; he or she does something, and that behaviour is observed and recorded. For certain evaluation questions, observation is clearly the most appropriate approach. To use an example, you could ask students about their sportsmanship and you could ask teachers how they handle behaviour problems, but more objective information would probably be obtained by actually observing students at sporting events and teachers in their classrooms. Two types of observation which are used in evaluation efforts are natural observation and observation of simulation. Certain kinds of behaviours can only be observed as they occur naturally. In such situations, the observer does not control or manipulate anything, and in fact works very hard at not affecting the observed situation in any way. As an example, classroom behaviour can best be addressed through observation. In simulation observation, the evaluator creates the situation to be observed and tells participants what activities they are to engage in. This technique allows the evaluator to observe behaviour which occurs infrequently in natural situations, or not at all.

c) Rating scale

A rating scale is an instrument with a number of items related to a given variable, each item representing a continuum of categories between two extremes; persons responding to the items place a mark to indicate their position on each item. Rating scales can be used as self-report or as observation instruments, depending on the purpose for which they are used.
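As a concrete sketch, scoring such an instrument usually means summing the marked positions, reverse-scoring any negatively worded items so that all items point in the same direction. The function and data below are hypothetical, not taken from the text:

```python
def score_scale(responses, reversed_items, points=5):
    """Sum a respondent's marks on a rating scale.

    responses: one mark (1..points) per item, in item order.
    reversed_items: indices of negatively worded items, whose marks
    are flipped so that a high total always means the same thing.
    """
    total = 0
    for i, mark in enumerate(responses):
        total += (points + 1 - mark) if i in reversed_items else mark
    return total

# Four hypothetical 5-point items; item 2 is negatively worded,
# so its mark of 2 contributes 6 - 2 = 4 to the total.
print(score_scale([5, 4, 2, 5], reversed_items={2}))  # 18
```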

Behaviours Measured

Virtually all behaviours that can be measured fall into one of three major categories: achievement, character and personality, and aptitude. Tests in all of these categories can be standardized or locally developed. These categories also apply equally well to the three domains of educational outcome, namely the cognitive, affective and psychomotor domains.

1. Achievement

Achievement tests measure the current status of individuals with respect to proficiency in given areas of knowledge or skill. Achievement tests are appropriate for many types of evaluation besides individual student evaluation. An achievement test can be standardized (designed to cover content common to many classes of the same type) or locally developed (designed to measure a particular set of learning outcomes set by a specific teacher). Standardized tests are, in turn, available for individual curriculum areas or in the form of batteries, which measure achievement in several different areas.

A diagnostic test is a type of achievement test which yields multiple scores for each area of achievement; these scores facilitate identification of specific areas of deficiency or learning difficulty. Items in a diagnostic test are intended to identify skills and knowledge that students must have achieved before they proceed to another level. Ideally, diagnosis should be an ongoing process, and the teacher must design such tests in a way that helps him or her find out the problems that students encounter as they proceed in the learning process.

2. Character and Personality

Tests of character and personality are designed to measure characteristics of individuals along a number of dimensions and to assess feelings and attitudes toward self, others, and a variety of activities, institutions and situations. Most tests of character and personality are self-report measures which ask an individual to respond to a series of questions or statements. Instruments in this category are designed to measure the personality, attitudes, creativity, and interests of students.

3. Aptitude

Aptitude tests are measures of potential. They are used to predict how well someone is likely to perform in a future situation. Tests of general aptitude are variously referred to as scholastic aptitude tests, intelligence tests, and tests of general mental ability. Aptitude tests are also available to predict a person's likely level of performance after some specific future instruction or training. Aptitude tests are available in the form of individual tests on specific subjects or content, or in the form of batteries. While virtually all aptitude tests are standardized and administered as part of a school testing program, the results are useful to teachers, counsellors and administrators. Readiness tests (or prognostic tests) are administered prior to instruction or training in a specific area in order to determine whether, and to what degree, a student is ready for, or will profit from, the instruction. Readiness tests, which are a type of aptitude test, typically include measurement of variables such as auditory discrimination, visual discrimination and motor ability.

Performance Standards

Performance standards are the criteria to which the results of measurement are compared in order to interpret them. A test score in and of itself means nothing. If I tell you that Ahmed got 18 correct, what does that tell you about Ahmed's performance? Absolutely nothing. Now if I tell you that the average score on the test was 15, at least you know that he did better than average. If instead I tell you that a score of 17 was required for a mastery classification, you do not know anything about the performance of the rest of the class, but you know that Ahmed attained mastery. These are the two ways in which we can interpret the results of a test: by comparing a score to those of the other students in the class (norm-referenced measurement) or by comparing it to a pre-determined criterion (criterion-referenced measurement).
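The two interpretations of Ahmed's score can be sketched in code. This is an illustrative sketch only; the class scores are invented so that the average is 15 and the mastery cutoff is 17, as in the example above:

```python
def norm_referenced(score, class_scores):
    """Relative standing: percent of the class scoring below this score."""
    below = sum(1 for s in class_scores if s < score)
    return 100.0 * below / len(class_scores)

def criterion_referenced(score, cutoff):
    """Absolute standard: mastery if the score reaches a pre-set cutoff."""
    return "mastery" if score >= cutoff else "non-mastery"

# Invented class scores averaging 15; Ahmed scored 18, the cutoff is 17.
class_scores = [10, 12, 14, 14, 15, 15, 16, 17, 18, 19]
print(norm_referenced(18, class_scores))    # 80.0: Ahmed beat 8 of 10 classmates
print(criterion_referenced(18, cutoff=17))  # mastery
```

Note that the second interpretation never consults `class_scores` at all: under a criterion-referenced standard, everyone in the class could attain mastery, or no one.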

Norm-referenced standards

Any test, standardized or locally developed, which reports and interprets each score in terms of its relative position with respect to other scores on the same test is norm-referenced. If your total IQ score is 100, for example, the interpretation is that your measured intelligence is average, that is, average compared to the scores of a norm group. The raw scores resulting from administration of a standardized test are converted to some other index which indicates relative position. One such index familiar to you is the percentile. A given percentile indicates the percentage of scores that were lower than the corresponding score. For example, Mr. Ahmed might have scored at the 42nd percentile on a standardized math test, which means that 42% of the students who took that test scored below him. In this way of communicating student scores, there is no indication of what Mr. Ahmed knows or does not know. The only interpretation is in terms of his achievement compared to the achievement of others.

Norm-referenced tests are based on the assumption that measured traits involve normal curve properties. The idea of the normal curve is that measured traits exist in different amounts in different people. Some people have a lot of a trait, some people have little of it, and most have some amount called the 'average' amount. For example, if you administer a math test to a class of 100 students, provided the test is of an appropriate level (neither too easy nor too difficult), a small portion of the class will perform high and another, equal portion will perform low, while the majority will perform around the average score.
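The normal-curve assumption can be illustrated with a quick simulation using Python's standard library. Everything here is invented for illustration (the mean of 50 and standard deviation of 10 are arbitrary); the only claim carried over from the text is that most scores cluster around the average:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# 1,000 invented test scores drawn from a normal distribution
# with mean 50 and standard deviation 10.
scores = [random.gauss(50, 10) for _ in range(1000)]

mean = statistics.mean(scores)
# Normal-curve theory says about 68% of scores fall within one standard
# deviation of the mean, with the rest split between the high and low tails.
within_one_sd = sum(1 for s in scores if abs(s - mean) <= 10) / len(scores)
print(round(within_one_sd, 2))
```

The simulated proportion lands close to the theoretical 68%, which is exactly the clustering around the average that norm-referenced interpretation relies on.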

Criterion-referenced standards

Any test which reports and interprets each score in terms of an absolute standard is criterion-referenced. In other words, the interpretation of one person's score has nothing to do with anybody else's score. A score is compared with a standard of performance, not with the scores of other people. When criterion-referenced tests are used, everyone taking the test may do well, or everyone may do poorly. In this context, a criterion can be defined as a domain of behaviours measuring an objective.
Thursday, July 11, 2013 | Amna
Kindly mention the source.
By Ridwan Mohamed Osman, African Virtual University.

Characteristics of a good test
There are a number of characteristics which are desirable for all tests. Standardized tests, which are developed by experts in both subject matter and evaluation, are more likely to live up to these standards. These characteristics, which are discussed in more detail below, include:

a) Test objectivity
b) Discrimination
c) Comprehensiveness
d) Validity
e) Reliability
f) Specification of the conditions of administration
g) Directions for scoring and interpretation

Test Objectivity

Test objectivity means that an individual's score is the same, or essentially the same, regardless of who is doing the scoring. A test is objective when instructor opinion, bias, or individual judgment is not a major factor in scoring. Tests may be scored by more than one person, at the same time or at different times. In education, we ask to what extent the scoring of these individual scorers is the same. If the possible difference between people scoring the same test is high, that test is low in objectivity. Though individuals may naturally differ in the way they perceive information, we assume that the more objective a test is, the more it aspires to the high-quality evaluation we all envision in education. This does not mean that tests which do not have a high degree of objectivity (such as subjective tests) are not of quality. Even in subjective tests, which are designed to measure information that can be looked at from different angles, a certain level of objectivity is necessary. The individual designing the test must have something in mind that constitutes good performance on it. That standard can be multiple, but criteria must be developed to make sure any scoring is fair enough to support discriminating between students on that basis. Objectivity is a relative term.
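One simple way to make "objectivity is a relative term" concrete is to measure the agreement between two independent scorers. This is a minimal sketch with invented essay scores; it uses the crude exact-agreement percentage (more refined indices, such as Cohen's kappa, exist):

```python
def percent_agreement(scorer_a, scorer_b):
    """Percent of papers to which two scorers assigned the same score."""
    matches = sum(1 for a, b in zip(scorer_a, scorer_b) if a == b)
    return 100.0 * matches / len(scorer_a)

# Invented scores given by two independent scorers to the same six essays.
scores_a = [4, 3, 5, 2, 4, 3]
scores_b = [4, 3, 4, 2, 4, 2]
print(round(percent_agreement(scores_a, scores_b), 1))  # 66.7
```

An objective test (multiple-choice, scored by key) would push this figure to 100; the further it falls below that, the lower the test's objectivity.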


Discrimination

The test should be constructed in such a manner that it will detect or measure small differences in achievement or attainment. This is essential if the test is to be used for ranking students on the basis of individual achievement or for assigning grades. It is not an important consideration if the test is used to measure the level of the entire class, or as an instructional quiz where the primary purpose is instruction rather than measurement. As is true with validity, reliability, and objectivity, the discriminating power of a test is increased by concentrating on and improving each individual test item. After the test has been administered, an item analysis can be made that will show the relative difficulty of each item and the extent to which each discriminates between good and poor students. Often, as in obtaining reliability, it is necessary to increase the length of the test to get clear-cut discrimination. A discriminating test: (1) produces a wide range of scores when administered to students who have significantly different achievements; (2) includes items at all levels of difficulty. Some items will be answered correctly only by the best students; others will be relatively easy and will be answered correctly by most students. If all students answer an item correctly, it lacks discrimination.
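The item analysis mentioned above can be sketched as follows. This uses a common textbook formulation (proportion correct in the upper group minus proportion correct in the lower group), which is an assumption on my part, not necessarily the procedure the author had in mind; all the data are invented:

```python
def item_difficulty(item_correct):
    """Proportion of all students answering the item correctly."""
    return sum(item_correct) / len(item_correct)

def discrimination_index(item_correct, total_scores):
    """Proportion correct in the top half minus proportion correct in the
    bottom half, where halves are formed by ranking on total test score."""
    ranked = sorted(zip(total_scores, item_correct), reverse=True)
    n = len(ranked) // 2
    upper = [correct for _, correct in ranked[:n]]
    lower = [correct for _, correct in ranked[-n:]]
    return sum(upper) / n - sum(lower) / n

# 1 = answered this item correctly, 0 = answered it incorrectly.
total_scores = [95, 88, 76, 70, 64, 60]
item_correct = [1, 1, 1, 0, 0, 0]
print(item_difficulty(item_correct))                     # 0.5
print(discrimination_index(item_correct, total_scores))  # 1.0
```

Here the item was answered correctly by exactly the three highest-scoring students, so it discriminates perfectly (index 1.0); an item everyone answers correctly would score 0.0.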


For a test to be comprehensive, it should sample major lesson objectives. It is neither necessary nor practical to test every objective that is taught in a course, but a sufficient number of objectives should be included to provide a valid measure of student achievement in the complete course.


The most important characteristic of a good examination is validity; that is, the extent to which a test measures what it is intended to measure. It is vital for a test to be valid in order for the results to be accurately applied and interpreted. Validity isn’t determined by a single statistic, but by a body of research that demonstrates the relationship between the test and the behaviour it is intended to measure. There are three types of validity:

a) Content validity When a test has content validity, the items on the test represent the entire range of possible items the test should cover. Individual test questions may be drawn from a large pool of items that cover a broad range of topics.
In some instances where a test measures a trait that is difficult to define, an expert judge may rate each item’s relevance. Because each judge is basing their rating on opinion, two independent judges rate the test separately. Items that are rated as strongly relevant by both judges will be included in the final test.

b) Criterion-related Validity A test is said to have criterion-related validity when the test is demonstrated to be effective in predicting a criterion or indicators of a construct. There are two different types of criterion validity:

• Concurrent Validity occurs when the criterion measures are obtained at the same time as the test scores. This indicates the extent to which the test scores accurately estimate an individual’s current state with regard to the criterion. For example, a test that measures levels of depression would be said to have concurrent validity if it measured the current levels of depression experienced by the test taker.

• Predictive Validity occurs when the criterion measures are obtained at a time after the test. Examples of tests with predictive validity are career or aptitude tests, which are helpful in determining who is likely to succeed or fail in certain subjects or occupations.

c) Construct Validity A test has construct validity if it demonstrates an association between the test scores and the prediction of a theoretical trait. Intelligence tests are one example of measurement instruments that should have construct validity. The instructor can ensure that his/her test items are valid by following accepted test construction procedures that include:

(1) Use of the lesson objectives as a basis for the test requirements. An examination so constructed will tend to measure what has been taught.

(2) Review of the test items and the completed examination by other instructors.

(3) Selection of the most appropriate form of test and type of test item. Thus, if the instructor desires to measure “ability to do,” he must select that form of the test that will require the student to demonstrate his “ability to do.” If another less desirable form is used, it must be recognized that the validity of the measurement has been reduced.

(4) Presentation of test requirements in a clear and unambiguous manner. If the test material cannot be interpreted accurately by the student, he or she will not realize what is being covered; hence, he or she will be unable to respond as anticipated. Such a test cannot be valid.

(5) Elimination, so far as is possible, of those factors that are not related to the measurement of the teaching points. A test that is not within the capabilities of the students as to time or educational level may fail to measure their actual learning in the course.


Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but there are several different ways to estimate it.

a) Test-Retest Reliability To gauge test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of a test across time. This type of reliability assumes that there will be no change in the quality or construct being measured. Test-retest reliability is best used for things that are stable over time, such as intelligence. Generally, reliability will be higher when little time has passed between tests.
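Test-retest reliability is commonly expressed as the correlation between the two administrations. The text does not name a statistic, but the Pearson correlation coefficient is the standard choice, and it can be computed directly from its definition. The score lists below are invented for illustration.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation
# between scores from two administrations of the same test.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

first_sitting = [70, 82, 90, 65, 88]
second_sitting = [72, 80, 91, 63, 85]
print(f"test-retest reliability r = {pearson_r(first_sitting, second_sitting):.3f}")
```

A coefficient near 1.0 indicates that students kept roughly the same relative standing across the two sittings, which is what we expect for a stable trait measured over a short interval.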

b) Inter-rater Reliability
This type of reliability is assessed by having two or more independent judges score the test. The scores are then compared to determine the consistency of the raters’ estimates. One way to test inter-rater reliability is to have each rater assign each test item a score. For example, each rater might score items on a scale from 1 to 10. Next, you would calculate the correlation between the two sets of ratings to determine the level of inter-rater reliability. Another means of testing inter-rater reliability is to have raters determine which category each observation falls into and then calculate the percentage of agreement between the raters. So, if the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate.
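The percent-agreement calculation above is simple arithmetic and can be sketched as follows. The category labels and ratings are invented for illustration and deliberately reproduce the 8-out-of-10 case from the text.

```python
# Minimal sketch of percent agreement as an inter-rater reliability measure:
# two raters assign each observation to a category; agreement is the share
# of observations on which they match.

def percent_agreement(rater_a, rater_b):
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

rater_a = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail",
           "pass", "pass", "pass", "pass", "pass"]

# The raters disagree on two of ten observations.
print(f"agreement = {percent_agreement(rater_a, rater_b):.0%}")  # 80%
```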

c) Parallel-Forms Reliability Parallel-forms reliability is gauged by comparing two different tests that were created using the same content. This is accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.

d) Internal Consistency Reliability This form of reliability is used to judge the consistency of results across items on the same test. Essentially, you are comparing test items that measure the same construct to determine the test’s internal consistency. When you see a question that seems very similar to another test question, it may indicate that the two questions are being used to gauge reliability. Because the two questions are similar and designed to measure the same thing, the test taker should answer both questions the same way, which would indicate that the test has internal consistency.
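The text does not name a statistic for internal consistency, but the standard one is Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. The sketch below computes it from first principles on an invented item-score matrix.

```python
# Minimal sketch of Cronbach's alpha, a common internal-consistency estimate.
# Rows are students, columns are items scored on some numeric scale.

def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(scores[0])                  # number of items
    items = list(zip(*scores))          # transpose: one tuple per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

scores = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Items that rise and fall together across students (as in this sample, where each student scores similarly on all three items) drive alpha toward 1, indicating that the items are measuring the same construct.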

The following factors will influence the reliability of a test:

(1) Administration. It is essential that each student have the same time, equipment, instructions, assistance, and examination environment. Test directions should be strictly enforced.
(2) Scoring. Objectivity in scoring contributes to reliability. Every effort should be made to obtain uniformity of scoring standards and practices.
(3) Standards. The standards of performance that are established for one class should be consistent with those used in other classes. A change in grading policies not based upon facts, uniform standards, and experience factors gained from other classes will affect the reliability of test results.
(4) Instruction. The reliability of test results will be affected if the instruction presented to a class tends to overemphasize the teaching points included in the examination. This is often known as “teaching the test” and is undesirable. When the instructor gives students obvious clues as to the test requirements, he not only affects the reliability of the test, but he insults the intelligence of his class.
(5) Length. The more responses required of students, the more reliable will be the test or measuring device.

Specification of conditions of administration

A good test must specify the conditions under which it is to be conducted. The conditions must give all students a fair chance to show what they can do, which will improve the reliability of the test. Standardized tests must come with specifications of the conditions under which students must perform. Administering the test under highly varied conditions will interfere with the results. In general, when administering a test, the following must be kept in mind:
• Physical conditions
  o Light, ventilation, quiet, etc.
• Psychological conditions
  o Avoid inducing test anxiety
  o Try to reduce test anxiety
  o Don’t give the test when other events will distract
• Suggestions
  o Don’t talk unnecessarily before the test
  o Minimize interruptions
  o Don’t give hints to individuals who ask about items
  o Discourage cheating
  o Give students equal time to take the test.

Direction for scoring and interpreting test results

A good test must come with directions on how to score and interpret its results. This is especially important for standardized tests, which are used by individuals other than those who developed the test in the first place. Such directions are also important for locally developed tests, since other individuals may become involved in scoring and interpreting the test due to unforeseen circumstances.
The following guidelines contribute to the development of clear directions for tests.
1. Provide clear descriptions of detailed procedures for administering tests in a standardized manner.
2. Provide information to test takers or test users on test question formats and procedures for answering test questions, including information on the use of any needed materials and equipment.
3. Establish and implement procedures to ensure the security of testing materials during all phases of test development, administration, scoring, and reporting.
4. Provide procedures, materials and guidelines for scoring the tests, and for monitoring the accuracy of the scoring process. If scoring the test is the responsibility of the test developer, provide adequate training for scorers.
5. Correct errors that affect the interpretation of the scores and communicate the corrected results promptly.
6. Develop and implement procedures for ensuring the confidentiality of scores.
Saturday, July 19, 2014
Interview of headmistress/headmaster ppsc

Can anyone tell me what questions are asked of candidates for deputy headmistress in the PPSC interview?
Friday, July 25, 2014

can anyone share interview questions???