A-levels and algorithms: Not a good result

Isaac is 12, so his GCSEs and A-levels are still a few years away. But they’re not that far off. So I looked on with interest at this year’s A-level results last Thursday.

What I saw maddened and saddened me in equal measure.

In some respects, this year was no different from any other. Jubilation from students who attained or exceeded the grades they needed to confirm places at university. Disappointment from others who fell short and now face an uncertain future, possibly at an alternative university via the clearing process.

Some – make that many – were angry because this year isn’t quite like any other. 2020, of course, is the year that formal public exams were cancelled. Instead, an algorithm reviewed every result and downgraded nearly 40% of them, some by two or more grades.

If that sounds like a lot, well, it is.

What does the algorithm do?

Let’s start by defining what an algorithm actually is.

An algorithm is essentially a process or a set of rules: a program that enables a computer to solve problems by making ‘intelligent’ decisions. It does this by processing thousands or even millions of individual pieces of data at a speed far beyond anything humans can achieve.

Algorithms play a role throughout our modern world. They determine what our Facebook feeds and Google search results look like. They calculate the fastest route from A to B on our car’s sat-nav. And they recommend films we might like to watch on Netflix based on our viewing history.

And now, of course, they review and adjust the A-level grades of hundreds of thousands of students without the need for exam scores.

The purpose of an algorithm is to be 100% objective. Take the data for each individual. Analyse it in a consistent way. Deliver a result.
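
To make that concrete, here’s a minimal sketch of what such a procedure might look like in code. Everything in it – the function name, the weightings, the grade boundaries – is invented for illustration, not taken from any real grading model:

```python
# A minimal sketch of an 'objective' grading procedure.
# All names, weightings and boundaries are invented for illustration;
# this is NOT the actual Ofqual model.

def grade_student(coursework: float, mock_exam: float, teacher_estimate: float) -> str:
    """Apply the same fixed rules to every student's data."""
    # Combine the inputs with fixed (illustrative) weightings.
    score = 0.3 * coursework + 0.3 * mock_exam + 0.4 * teacher_estimate
    # Map the combined score onto a grade boundary.
    if score >= 80:
        return "A"
    if score >= 70:
        return "B"
    if score >= 60:
        return "C"
    return "D or below"

# Same data in, same rules applied, same result out -- for everyone.
print(grade_student(coursework=85, mock_exam=78, teacher_estimate=82))  # A
```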

When algorithms go wrong

However, an algorithm is only as good as the rules a human gives it. If it is told to increase the grades of anyone whose first name has six or fewer letters, it will do so. Or it will adjust grades based on factors such as the historical performance of a school’s past students. If an algorithm’s creator gives it biased rules, it will faithfully produce biased results, as the sketch below shows.
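
Here’s what that can look like in practice. This capping rule is my own invention for illustration – it is not the actual formula Ofqual used – but it shows how a rule that sounds neutral bakes unfairness in:

```python
# A sketch of how a biased rule produces biased results.
# The capping rule below is invented for illustration; it is NOT
# the actual formula Ofqual used.

GRADES = ["E", "D", "C", "B", "A", "A*"]  # lowest to highest

def adjust_grade(teacher_grade: str, school_historical_average: str) -> str:
    """Cap each student's grade at their school's historical average."""
    student = GRADES.index(teacher_grade)
    cap = GRADES.index(school_historical_average)
    # The rule is applied with perfect consistency, but it penalises
    # strong students at historically weaker schools -- the bias lives
    # in the rule itself, not in how faithfully it is executed.
    return GRADES[min(student, cap)]

# An A* student at a school whose past pupils averaged a C is dragged
# down to a C, regardless of their own ability.
print(adjust_grade("A*", "C"))  # C
```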

And, of course, an algorithm only considers data. You can feed it students’ coursework scores, mock exam results and teacher-assessed grades, but it cannot adjust for qualitative factors: students who perform better in exam conditions, students who didn’t take their mocks as seriously as others, or any of the other considerations teachers may have built into their predictions.

Algorithms cannot make judgement calls.

All this raises some worrying questions. Why should a school’s performance in the past affect a pupil’s grades today? How can an algorithm know that overriding a teacher’s assessment produces a fairer result? Even if the algorithm gets it right 90% of the time, how does that justify the other 10%?

For sure, the algorithm will have made mistakes. All algorithms do, particularly when faced with situations their programming hasn’t allowed for. As Thursday went on, more and more odd stories kept appearing on both social and mainstream media.

How could a student who had achieved straight As/A*s throughout their course suddenly be downgraded to a B? What’s the rationale for doing something so counter-intuitive?

Why were students in disadvantaged areas hit harder than others? If this was due to a bias in the algorithm’s construction, this is the education equivalent of racial profiling.

How did the algorithm decide to issue the same person a C for Maths but an A for Further Maths? This is analogous to an 18-year-old failing their driving test while at the same time earning a Formula 1 super-licence. It defies logical explanation.

Now, it’s in the nature of any algorithm that, at a macro level, it is generally more accurate and objective than humans can ever be. But when you drill down into the detail, there are always outliers whose data doesn’t conform to the algorithm’s rules, producing anomalous results like the examples above.
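
Here’s a toy demonstration with invented numbers: the algorithm looks respectable on average, yet one student is left a long way from their true result:

```python
# A toy cohort with invented numbers: the algorithm's predictions are
# close for 'typical' students but miss one genuine outlier badly.
true_marks      = [70, 72, 68, 71, 69, 95]  # one exceptional student: 95
predicted_marks = [70, 71, 69, 70, 70, 71]  # the model assumes everyone is typical

errors = [abs(p - t) for p, t in zip(predicted_marks, true_marks)]

# At the macro level, the algorithm looks respectable...
print(f"average error per student: {sum(errors) / len(errors):.1f} marks")  # 4.7

# ...but drill down and one student has been badly wronged.
print(f"worst individual error: {max(errors)} marks")  # 24
```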

Checks and balances

This wouldn’t be so bad if there was a robust appeals process in place to review and correct errors quickly. But then the exams regulator Ofqual curiously revoked the appeals process on Saturday night.

This is rather worrying.

Algorithms work best when there is also a human element to act as a check and balance. This is particularly the case when the outcome of an algorithm’s decisions has such a major and far-reaching impact. A change in A-level results will have a profound impact on many students’ future prospects. We’re not just talking about Netflix making a slightly odd film recommendation on a Saturday night here. It matters – a lot.

It’s doubly important to have that human element when the algorithm is new and untested like this. But with no appeals process and time being of the essence, there is now a real risk of any errors being corrected too late (or not at all). This could result in many students undeservedly missing out on a place at their first-choice university. Or potentially missing out on university altogether.

This is why total, unchecked reliance on an algorithm that has thrown up so many anomalies is so dangerous.

Man or machine?

How can we trust an algorithm when it throws up so many inexplicable errors? How can we be sure this is fairer than just using teacher-assessed grades?

The fundamental question here is: who do you trust – man or machine?

As a parent, I hope that when the time comes Isaac can be measured on a full two years’ worth of coursework and exams. But if not, I’d rather he was assessed by teachers who know him as a person than by a computer program that views him as a set of numbers.

Wouldn’t you?

If you’re still in doubt, let me frame it another way. You apply for a job and go to an assessment centre, where you have a face-to-face interview and do some aptitude tests. Normally, both elements would be taken into account. But if getting the job came down to one or the other, would you rather the decision was made by the interviewer or by an algorithm?

Do you think that any Prime Minister would allow an algorithm to decide who should be in their Cabinet?

And once you have started using algorithms to make such key decisions, where does it end? How about we solve the backlog of cases in the UK’s criminal justice system by replacing trial by jury with an algorithm too? Sure, a few innocent people would go to jail, but think of the speed and efficiency savings!

While we’re at it, why not give algorithms the nuclear launch codes? (If you grew up in the 1980s, you’d know the answer to this is: because we watched WarGames.)

Okay, I’m being a bit facetious now. But you get my drift, right?

What now?

As a social media manager, it’s part of my job to understand algorithms. I know how they work, their benefits and their flaws. They are so useful for so many applications, both today and in the future.

But not for this. At least, not in this form.

As a parent of three future GCSE/A-level students, I worry about the automation of exam results. I worry about the danger of an algorithm penalising my kids based on factors they cannot control: postcode, even race. And I worry about a government that prizes data more than judgement – and that’s before we even start to talk about the erosion of data privacy rights.

So am I worried that this is just the first of many algorithmic interventions that will shape my kids’ future? Yes. In fact, I’m more than worried: I’m terrified.
