Student scores: Statistics can ensure fairness, inclusivity
“Test scores aren’t perfect, but having a test score for math or reading or other things that we can objectively measure is a meaningful component that makes a lot of sense.”
Bill Gates
This year has been a tough one for school students, especially for those who have just completed Plus Two. With the final exams not held, alternative methods were used to compute scores. When different school boards follow different philosophies of evaluation and of awarding marks, it spells trouble for students.
It is good that education departments at the central and state governments rose to the occasion and came up with objective measures to compute scores. However, more needs to be done to bring fairness to the scoring and ranking of performance. There are two parts to this problem. The first part is a fair scoring system; the second is a cumulative ranking system that boosts inclusivity by providing disadvantage points.
A fair scoring system that can peg scores from different school boards on one consistent scale will ensure fairness in assessing performance. Fortunately, comparing scores from different school boards and consistently ranking the scores is a solved problem in statistics.
Statistics offers multiple solutions to this problem, and the kernel of such solutions is: how much higher or lower than the average is a student’s score, and how are the scores of students clustered in different ranges. The solution in its basic version uses the average and the standard deviation to convert each student’s score into something called a “normal” score. Such normal scores from one school board’s results, say CBSE, can be converted to normal scores in another board, say the state board or ICSE, through statistical formulae. With scores from different boards converted to one normal scale, the ranking of students becomes fair and objective. However, this basic version assumes that the scores in each board are distributed “normally”, along the famous bell curve. If the distribution is not “normal”, suitable methods from statistics’ toolkit can be evaluated. Problems like ranking students with the same normalised score can be solved by using suitably defined tie-breaker criteria. The 2013 report submitted to MHRD by a committee chaired by Prof S.K. Joshi discusses many ways to standardise school board scores.
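To see how simple the basic version is, here is a minimal sketch in Python. The score lists and the function names are made up purely for illustration; they do not represent any board’s actual data or any prescribed formula.

```python
import statistics

def normalise(scores):
    """Convert raw scores to z-scores: (score - mean) / standard deviation."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mean) / sd for s in scores]

def to_target_scale(z, target_mean, target_sd):
    """Express a z-score on another board's scale using that board's mean and SD."""
    return target_mean + z * target_sd

# Illustrative (made-up) score lists for two boards
cbse_scores = [92, 85, 78, 95, 88, 70]
state_scores = [80, 65, 72, 90, 68, 75]

cbse_z = normalise(cbse_scores)
state_mean = statistics.mean(state_scores)
state_sd = statistics.stdev(state_scores)

# A CBSE student's normal score, re-expressed on the state board's scale
equivalent = to_target_scale(cbse_z[0], state_mean, state_sd)
print(round(cbse_z[0], 2), round(equivalent, 1))
```

Once every student’s mark is expressed this way on one common scale, a single rank list across boards follows directly.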
The postgraduate engineering entrance exam GATE has been using a statistical method to compute something called the “GATE Score”, which is comparable across different engineering branches and across different years. For example, a GATE Score of 800 in the 2020 computer science paper, the 2020 civil engineering paper or the 2019 mechanical engineering paper is considered equivalent. So we do have evidence that statistical tools can effectively bring fairness and objectivity to scoring.
While a fair scoring mechanism ushers in objective performance assessment, achieving inclusivity is a different problem. To address it, we have to recognise social faultlines like region, caste, religion, gender, type of school, and so on. A good starting point is the matrix system of providing disadvantage points suggested by Yogendra Yadav and Satish Deshpande. This solution was originally proposed in 2006, and it can be tweaked to be implemented within the existing reservation categories and the general category (call them buckets).
In this solution, three tables are prepared for each bucket. In the first table, region, religion, caste and gender combinations are given points. For example, a BC Hindu rural female student will get more points than a BC Hindu urban male, and so on. There will typically be fewer than 50 combinations. Similarly, the second table will contain the school type and corresponding points. A student from a village vernacular-medium school will get more points than one from a town English-medium school, and so on. The third table will cover the family’s financial and occupational status. For example, a labourer’s ward gets more points than a Class I official’s ward. The points to be awarded in each table can be determined statistically to lie within a range, so that outlying excellent performers are still accorded due recognition.
The disadvantage points are then added up, normalised if statistically needed, and added to the fair (normalised) exam score; call this the “final” score. Now we have two scores: the fair score and the final score. The fair score is used to select students into each bucket based on cut-off scores, and within each bucket the final score is used for ranking. With this approach, the selection process can become more inclusive without disturbing the existing reservation percentages.
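A toy sketch of this two-score idea is below. The point values, cut-off and student records are invented solely to illustrate the mechanics; the actual tables and weights would have to be fixed by the committees concerned.

```python
# Hypothetical disadvantage-point tables (values chosen only for illustration)
SOCIAL_POINTS = {("BC", "rural", "female"): 6, ("BC", "urban", "male"): 3}
SCHOOL_POINTS = {"village vernacular": 4, "town English": 1}
FAMILY_POINTS = {"labourer": 5, "class I official": 0}

def disadvantage_points(student):
    """Sum the points from the three tables for one student."""
    return (SOCIAL_POINTS.get((student["caste"], student["region"], student["gender"]), 0)
            + SCHOOL_POINTS.get(student["school"], 0)
            + FAMILY_POINTS.get(student["family"], 0))

def select_and_rank(students, cutoff):
    """Select on the fair (normalised) score; rank on fair score plus points."""
    selected = [s for s in students if s["fair_score"] >= cutoff]
    for s in selected:
        s["final_score"] = s["fair_score"] + disadvantage_points(s)
    return sorted(selected, key=lambda s: s["final_score"], reverse=True)

students = [
    {"name": "A", "fair_score": 71.0, "caste": "BC", "region": "rural",
     "gender": "female", "school": "village vernacular", "family": "labourer"},
    {"name": "B", "fair_score": 74.0, "caste": "BC", "region": "urban",
     "gender": "male", "school": "town English", "family": "class I official"},
]
for s in select_and_rank(students, cutoff=70.0):
    print(s["name"], s["final_score"])
```

In this toy run, both students clear the cut-off on the fair score, but the rural female student from a vernacular-medium school ranks higher on the final score, which is exactly the inclusivity the scheme aims for.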
Would all these statistical gyrations make the selection process arcane? The answer is a counter-question: will the new method of selection bring in candidates from hitherto unrepresented groups, and make the process fair to candidates from all school boards and all social groups? The answer is yes; hence a little more sophistication in selection is worth the effort, even if it creates some degree of opaqueness. The disadvantage tables and points can be updated every year, keeping the process alive to the vicissitudes of social dynamics and able to respond to them quickly. There can even be disadvantage points for wards of migrant labourers, and so on. The solution can thus become a testament to inclusivity and is consistent with pursuing diversity on broad criteria, as suggested by Justice Lewis Powell when addressing questions of affirmative action based on race (US Supreme Court, 1978).
The current process turns a Nelson’s eye to (1) disparate mark distributions in different school boards within a state and (2) under-representation of many social groups across all buckets. The solutions suggested here would be a starting point towards fairness in representation. The milestones for fairness and inclusivity are ever evolving, and newer forms of deprivation, for example those caused by the digital divide, will need to be reckoned with at some point in the future.
It needs to be noted that the two solutions are independent of each other, and score standardisation is the more urgent problem. The fair score solution can be extended to other problems too. For example, Tamil Nadu wants exemption from NEET. Leaving aside the legal questions, we could have multiple modes of selection aided by statistics. Tamil Nadu could form a normalised score list based on school marks; students who cleared NEET could have their scores normalised and converted to the school-marks scale, and a single rank list could then be created.
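The merging step reuses the same normalisation idea. The sketch below shows it with entirely made-up marks; the candidate names and numbers are assumptions for illustration only.

```python
import statistics

def z_scores(marks):
    """marks: dict of candidate -> raw mark; returns candidate -> z-score."""
    values = list(marks.values())
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return {name: (m - mean) / sd for name, m in marks.items()}

# Made-up marks for illustration only
school_marks = {"P": 96.0, "Q": 88.5, "R": 91.0}
neet_marks = {"X": 610, "Y": 545}

school_values = list(school_marks.values())
school_mean, school_sd = statistics.mean(school_values), statistics.stdev(school_values)

# Re-express NEET z-scores on the school-marks scale, then build one merged rank list
neet_on_school_scale = {name: school_mean + z * school_sd
                        for name, z in z_scores(neet_marks).items()}
combined = {**school_marks, **neet_on_school_scale}
rank_list = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
print(rank_list)
```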
While Mark Twain humorously put down statistics as “Lies, Damned Lies and Statistics”, we do have a case for using it now to correct aberrations.