D. N. Bezboruah
The present euphoria in Asom over the high percentage of success in the High School Leaving Certificate (HSLC) Examination of 2009 conducted by the Board of Secondary Education, Assam (SEBA) is well justified for at least two reasons. In the first place, the newspapers and TV channels are so full of bad news these days that every bit of good news that touches one and all is cause enough for rejoicing. Secondly, the success rates of the HSLC examinations conducted by the SEBA have been so dismal in the past that a pass percentage above even 50 per cent is deemed to be excellent. And this year we have had a record, with the success rate at 61.55 per cent – the highest in the last ten years. However, the point that is being missed is that in most advanced countries a success rate of 61.55 per cent would be regarded as a rather low level of performance. But the euphoria was bound to be there in a State that has had success rates as low as 26 per cent and 35 per cent for many years in the previous decade. I don’t have to tell anyone that such low rates of success in school final examinations are abnormal and indicative of very serious aberrations in the process of education. When only 26 per cent of all candidates appearing in the school final examination of any country or State pass the examination, the clear message that goes out is that 74 per cent, or nearly three-quarters, of the students taking the examination deserved to fail. I can assure my readers that there is no society anywhere on this planet where 74 per cent of the children deserve to fail at the school final examination. There is such a thing as the normal distribution curve in Statistics (often referred to as the bell-shaped curve) that tells us that a failure rate of 74 per cent is a very serious aberration.
Something like this can happen only when there have been serious lapses in teaching, or when students have been tested on what they haven’t been taught, or when the scoring of answer scripts has been done in a very irresponsible manner. When the situation remains unchanged and the pass percentage stays abysmally low year after year, there is a clear indication that the SEBA has not been alive to the grim significance of the examination results, probably because it does not have anyone trained in educational evaluation or anyone with the guts to tell the bosses about the kind of aberrations that must exist in the teaching and/or testing procedures. My information about the examination wing of SEBA confirms my suspicion about the lack of trained personnel among the SEBA staff in charge of examinations.
When you have a situation like this, you can have all kinds of aberrations related to examinations that even affect the teaching-learning situation. The Board can remain insensitive to abnormally low success rates over many years. Likewise, when the powers that be are suitably embarrassed about the low percentage of success, officers can make examinations very easy, instruct examiners to be very lenient in their marking and increase the grace marks enough to even double the success rate within a year. This is no more than a badly performed ritual which says nothing about the health of the education system in the State but pleases everyone in a society obsessed with examination results even when the examinations are often neither valid nor reliable. Most people would like to know what the words valid (or validity) and reliable (or reliability) signify in the context of examinations. I shall make an attempt to explain them in as simple and untechnical language as possible. But I know that my credentials to talk about examinations are bound to be questioned. And why not? So a little bit of personal background may be in order. At some point in my days related to education I went and did a certificate course in Educational Evaluation conducted by the NCERT. Later on, I was a resource person and consultant on Language Testing and Examination Reform to States like Karnataka, Tamil Nadu, Orissa and Rajasthan, the Union Territory of Pondicherry, and for the Central Board of Secondary Education and SEBA. I have lost count of the workshops and training courses on examination reform and language testing I have participated in. Any test or examination is a measuring device for some kind of achievement or performance. There can be tests for things like aptitude, just as there can be prognostic tests about how someone is likely to perform in a future test on the basis of past performance at other tests.
Any measuring device has to be precise, reliable and valid, meaning that the device enables us to measure what we are trying to measure with considerable accuracy. So, in educational evaluation we are primarily concerned with validity and reliability. Validity has to do with the right choice of the measuring instrument for what we are trying to measure. One can look at it in two ways – one that is appropriate for any situation and another that is directly related to the examination scenario. When I am buying potatoes, the obvious measuring instrument is a pair of scales. No one would dream of using a measuring tape to buy potatoes, just as no one would use anything except a measuring tape or rod to buy cloth. We must use valid measuring instruments for the job at hand. And take the greengrocer who uses a pair of scales quite deftly to weigh peas or potatoes. But take him into a chemistry lab and ask him to use the chemical balance. He may well fail, because he has not been trained to use a chemical balance. So we see that while routine measurements may not call for any training, sophisticated measurements do require training. In the examination situation, anyone who deducts marks for spelling mistakes in a Mathematics paper is making measurements that are not valid. The Mathematics test is not a valid instrument for measuring spelling ability.
Apart from being valid, a measuring instrument must also be reliable. In other words, it should be accurate. But look at some of our language tests. The paper may carry 100 marks. But are the zero and the 100 on that scale really working? How many examinees ever get a zero in a language test? How many ever get above 80? In other words, we have a scale that begins at around seven or eight and ends at around 80. This is an unreliable measuring instrument. And even a reliable measuring instrument can become unreliable when the examiner, too overloaded with answer scripts, gets some of them examined by the spouse.
There are other aspects of unreliability that often get overlooked. When we were students, the few objective-type tests for language that we had were confined to the grammar paper. The other language papers only had questions that called for long essay-type answers. This led to a great deal of subjectivity in assessment that made the measuring instrument unreliable to a degree. This is best established by the fact that the same answer script would get different marks at the hands of different examiners. But while the increasing number of objective-type questions has reduced the subjective element in evaluation, we have got ourselves into another kind of unreliability. In a language test, objective tests and short answer-type tests fail to measure the skill of composition and the ability of the examinee to produce at least a paragraph of correct connected writing. This is a rather unfortunate shortcoming in a language test.
All said and done, the present obsession of parents and guardians with examinations and examination results is unfortunate, considering that we have long been dealing with undependable examination results. These examinations that are deemed to determine the fates of our students are neither always valid nor reliable. They give people undeserved advantages in those cases where the marking tradition of the examining board is very liberal. It is certainly not true that the student from Bihar who has got a job because he got 90 per cent marks in his school final or B.A. examination is better than his counterpart in Asom who has got only 75 per cent in the comparable examination, where the examiners are not quite so liberal. But the candidate from Bihar gets the job in preference to a better candidate merely because his marks read better.
That is precisely why it is so important for parents to get out of their present unhealthy obsession with examinations and examination results as far as their children’s education is concerned. This is perhaps the only country in the world where a few children commit suicide every year for not being able to get the kind of marks at school final examinations that their parents expected. What a terrible tragedy! What a terrible snuffing out of lives at adolescence for no rhyme or reason! It is high time parents got out of this examination obsession and got their children to read for enjoyment, to learn, to reason, to form opinions, evaluate experiences and acquire skills. That way, they would be far better rounded personalities and far better prepared for life than with the present single-minded obsession with scoring high marks at moth-eaten examinations that are often neither valid nor reliable.
THE SENTINEL