Exam grades "re-adjustment"
Schroedingers Cat
Shipmate
in Hell
From what I have seen, A-level results in the UK have been significantly downgraded, especially if you are in a state school.
Less so in Scotland, because they complained.
And significantly less so in public schools.
Meaning that many many people, especially from the state sector and the more deprived areas, have now lost their university places.
Every time I think the level of utter fuming raging fury I experience has reached its limit, the Corrupt dung-heap in charge here dig deeper. It is clearly and definitively classist.
So many hard-working young people have had their futures torn from them. Yes, many will recover. It will all be behind them and they will have a place in society. But it just makes everything harder.
Even before the patronising comments from Gavin Williamson about people being "promoted beyond their ability". Like him and his fucking cronies.
In. The. Sea.
Comments
I have had a group of young people in my home today who are waiting for GCSE results, due next week.
They are very anxious.
The prospect of being downgraded from mocks and predictions is very upsetting to them
ETA: I just found this article https://www.bbc.com/news/education-53759832
I suspect that this won't go away.
16-18 yr olds study three or four subjects assessed in A-Level exams. Grades from the exams, released in August each year, determine university entrance.
Exams did not take place due to COVID. Mock exams and teacher predictions were fed into a national 'levelling' algorithm that reduced 40% of predicted grades - although private schools seem to have escaped this reduction.
Some straight-A students (all year, in all assessments and mocks) in schools with less than stellar reputations have got Bs and Cs.
The predicted "results" were wrong. That is indisputable.
The open question is what you should do about it.
I'd guess that the levelling was driven by the historic accuracy of the teacher predictions at each school, but I'd be interested to see a link to the algorithm, if anyone has it. So if over the past few years, teachers in this school have predicted that some pupils will get As and Bs, and what they've actually got in the exam has been mostly Cs, then we learn that the teachers at that school are wildly optimistic about their pupils' prospects.
The problem, of course, with such a significant re-scaling of people's marks, is that it's easy to show that it does "the right thing" in aggregate, but there's very much less confidence that it's doing "the right thing" in an individual pupil's case.
I ask because I do not believe that beyond a requisite level of academic achievement that marks show much of anything relevant.
As I understand it, it is not based on the accuracy of school predictions, rather on school performance - so that outstanding students at less successful schools are disadvantaged.
From the FT today:
'Ofqual, the regulator for England, said grades were based primarily on predictions calculated by teachers based on past work, mock exams and student rankings. Using an algorithm, these results were then standardised according to factors including a schools’ past performance and pupils’ past exam results.'
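The standardisation described in that quote can be sketched roughly as a rank-to-distribution mapping: this year's teacher rankings are imposed onto the school's historical grade distribution. This is an illustration only - the function, the grade history, and the quantile rule are my assumptions, not Ofqual's published model:

```python
# Hypothetical sketch: pupils ranked by teachers inherit the grades the
# school historically produced, regardless of individual evidence.

def standardise(rankings, historical_grades):
    """Map this year's teacher-ranked pupils onto past grade distribution.

    rankings: pupils ordered best first (teacher ranking).
    historical_grades: grades awarded at this school in prior years, best first.
    Returns a dict pupil -> imposed grade.
    """
    n = len(rankings)
    m = len(historical_grades)
    # Each pupil at rank i receives the grade at the same quantile of the
    # historical distribution -- individual attainment is ignored.
    return {
        pupil: historical_grades[min(i * m // n, m - 1)]
        for i, pupil in enumerate(rankings)
    }

# A school that historically produced mostly Cs will keep producing mostly
# Cs, even if this year's cohort is genuinely stronger.
pupils = ["P1", "P2", "P3", "P4", "P5"]
history = ["A", "B", "C", "C", "C", "C", "D", "D", "E", "U"]
print(standardise(pupils, history))
```

Note how the second-ranked pupil lands on a C because that is what rank 2-of-5 got at this school in the past - which is exactly the "outstanding student at a less successful school" problem.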
You are absolutely correct that this system is not capable of doing 'the right thing' in an individual pupil's case - and it is this aspect that I think will see the political right take up the cause.
That seems like a bad choice. On a national scale, the statement that each year's cohort will be similar to the adjacent ones is statistically sound. At the individual school level, that statement is very much less sound, particularly when you're dealing with individuals in the tails of the distribution.
But perhaps they don't have complete records of teacher predictions from previous years to match to results from previous years.
What the marks show is precisely the level of academic achievement demonstrated. They don't purport to do anything else.
Exams are "fairer" than teacher grades in the sense that they put everyone on the same playing field, rather than having different teachers award different grades for the same quality of work. There is a perennial discussion about the extent to which exams correlate with the ability of the pupil in some kind of work-like environment (which usually isn't 3 hours of panic with no access to reference materials), but that's a different discussion from the one about teacher grades vs central assessment.
Frankly, from what I understand of measurement, I'd expect a multiple-source assessment to be more valid than a single result on a test.
The oddity is that A-level results are meaningless long-term. They don't show anything specifically relevant, but they are used as the gatekeepers for the next stage of education. Same with GCSEs. So failing to achieve a particular level will disadvantage a generation.
It is possible to compensate, by studying more, by obtaining exam qualifications. But it takes time and money to do so. And it shouldn't be necessary (because they have already done the work).
This government is the one that has put more and more emphasis on exams, and away from continuous assessment. I always used to do OK in exams, and less well in ongoing assessment, but for many, this is not the case.
I gather that approx 40% of A-level estimates were downgraded, but the Secretary of State for Education claimed that normally 75% are overestimated anyway.
The GCSE process (not awarded yet) involved looking at individual school’s past grades and comparing with this year’s estimates and teachers were asked to rank students as well as estimate grades for them. Whilst this would seem fair on the surface (and obviously this is a difficult conundrum to make fair) it can lead to individual unfairness if a school has a better cohort this year than last.
I have problems with ranking students according to ability, having been one of those shy students who went unnoticed. But the other problem is that schools with very small cohorts of students did not have grades adjusted, which advantaged private schools. It was a good year to study music and the classics apparently.
I suspect that future years will now take mocks a lot more seriously!
I suppose one question might be, does the algorithm preserve the year by year standard deviation for the school as well as the year by year average? If it does then there's a semblance of justice. If it doesn't then there's none.
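That question could be checked mechanically if one had per-school grade data. A rough sketch, with invented grades and an assumed points scale (nothing here is real Ofqual data):

```python
# Sketch of the check proposed above: does the adjusted distribution for a
# school preserve last year's spread as well as its average?
from statistics import mean, pstdev

# Assumed points scale for illustration.
GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "U": 0}

def summarise(grades):
    """Return (mean, population standard deviation) of a grade list."""
    pts = [GRADE_POINTS[g] for g in grades]
    return mean(pts), pstdev(pts)

last_year = ["A", "B", "B", "C", "C", "C", "D"]     # invented history
adjusted  = ["B", "B", "C", "C", "C", "C", "C"]     # invented algorithm output

m0, s0 = summarise(last_year)
m1, s1 = summarise(adjusted)
print(f"mean {m0:.2f} -> {m1:.2f}, sd {s0:.2f} -> {s1:.2f}")
# If the sd shrinks while the mean roughly holds, pupils in the tails (the
# outstanding ones) are being squeezed toward the school's average.
```

With these invented numbers the mean barely moves but the standard deviation halves - which is the "no semblance of justice" case.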
May I wish your son and his friends well, as they wait and as they come to terms with what next Thursday brings.
This year has been hard on the mental health of young people, and (IMO) young men in particular (with their not-always-good communication skills).
Asher
The teacher assessment grades sent to exam boards based on continuous assessment are a one off (hopefully).
Students usually have predicted grades on their university application that are notoriously inaccurate & depend more on how much they beg & cajole their teachers & tutors & how realistic their choices are in relation to their abilities & efforts.
(I work with A-Level students)
Not quite. Prelims used to be necessary for use in the old appeals process that allowed a student to have their grade reviewed if the school could present solid evidence of working at a higher grade than the one awarded by the exam. This system disappeared about 5-6 years ago, but most teachers in Scotland continue to take prelims seriously and try to produce an exam that is of a similar standard to the real thing. Some of these will be bought in from companies specialising in this activity, others will be written by teachers or compiled by mixing and matching questions from several papers. My own approach varies depending on how much time I have. And, to clarify, the teacher assessments in Scotland were not solely based on prelims, but on a body of evidence of which prelims may have been a part.
I'm surprised to hear this. I taught Sixth Formers for many years, and never had anyone even ask about their predicted grades. It was a part of the system that students seemed unaware of, in normal times.
Universities will still have the same number of places to be filled and they will want to fill them.
It wasn’t a thing when I was at school either but now more and more will say, “but I need an A to do...” when they’ve been getting C’s all year. I think offers are higher than they used to be too.
Sure, but it's a reasonable concern that this year's debacle might unfairly disadvantage smart kids from generally bad schools in favour of average kids from good schools or private schools.
In other words, if the re-allocation of exam grades is biased in favour of wealthy kids from "nice" state schools and private schools, then the good universities will tend to take even more of those kids, and fewer of the able kids from poor backgrounds.
And that looks like a problem. It's not completely obvious to me how to fix the problem, though.
AIUI, the sources are a.) how pupils did in a mockup of their A-Levels; b.) how their teachers think they would have done in their A-Levels; c.) how well their teachers have historically predicted A-Level results. IOW, the result is still based on a single assessment measure, viz. A-Levels, but because it can't be measured directly, they're trying to measure it indirectly by a combination of inputs.
But there must be some degree of flexibility in the number of places available per course, because when universities make conditional offers, they can't know how many pupils will turn down their offer because they prefer somewhere else.
So if you just allow the results to stand, then more students will go to the better universities this year than in normal years, but those universities should have the flexibility to allow this.
That's two different questions. The algorithm is intended to predict what grades the students would have got in the exams, had they been able to sit them.
It's not intended to predict how successful those students will be in the future, any more than people hope that exam grades predict that - and there's plenty of evidence that coming from a wealthy, supportive background gives you something like a grade advantage per exam, on average, over most of the grade range, over someone from a deprived background, and that that advantage evaporates by the time you get to degree results.
& in terms of universities - they want students! They rely on them to stay afloat. We’ve not heard anything more in recent weeks about the anonymous thirteen universities that were in danger of going bust from the lockdown and lack of international students.
This is because many universities now state that they will not make offers to pupils whose predicted grades fall below a certain level, and this is often quite a high level, like A*AA.
HOWEVER, if you receive an offer, and then fail to make your grades, there is often a fair chance that your chosen university will accept you anyway.
This obviously creates quite a systemic pressure to inflate predicted grades. If it weren't for the latter problem, one could say "There's no point in my inflating your predicted grade anyway if you then fail to hit your offer", but that's not true!
UCAS don't help by having a weaselly form of words saying that the predicted grade should be "the grade an applicant's school or college believes they're likely to achieve in positive circumstances" (my emphases, and that's a direct quote from their official website). What the heck does that mean? What are "positive circumstances" exactly? Does it mean "if they work really really hard between now and the exam although they never did before", for example? Because of course that's what some pupils claim!
I've seen second hand a comment that it broadly maps this year's grades to the previous year's grades, so if a school was lucky last year its average student goes through, while if a school was unlucky last year its bright student doesn't.
Also at that point, you've got no correction for teachers' favourites, so again an average student whom the teacher likes gains at the expense of a bright but annoying student.
So far, purgatorially unfair.
The other corollary is that teachers over-estimate by effectively half a grade across the board, which does seem a bit high (granted, you have the students who crash out and lose multiple grades).
The narration is that this happens more in 'stupid' state schools, whereas 'clever' private schools got the grades right.
However, this system seems to have the extraordinary feature that this year private schools somehow did better. The suggestion is that the correction didn't get applied to small (private) cohorts, in which case they (by an extraordinary coincidence) benefited at the expense of normal schools.
Although the anecdotes in the Guardian seem extreme, with schools receiving the "lowest set of results in their history" and Bs becoming Us (both from different schools named Notre Dame). And if you're getting that sort of output something is wrong (I can believe some new teachers are clueless, but at that point you need some independent validation).
Has some details, but is written to confuse.
We employ people all the time in the expectation that, by the time they start work, they will have achieved X qualification.
Applying after you get your grades doesn't leave much time to plan.
Everyone wants to use the exams, but they don't exist. So they use what exists?
It looks very much like results depend on:
Absolute Prediction (small classes, see 8.4 in link, 5 tapered to 20).
Relative Prediction matched onto centres previous results.
Previous results matched onto new results
(I'm not sure how it decides b- and b+ before shifting; I think that is what the previous bit does)
In which case it looks very biased in the selectivity of its corrections.
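The taper mentioned above (statistical adjustment phased in between cohort sizes of 5 and 20) can be sketched as a simple linear blend weight. The thresholds and the linear form are assumptions for illustration, not Ofqual's exact scheme:

```python
def blend_weight(cohort_size, low=5, high=20):
    """Weight given to the statistical (school-history) prediction.

    At or below `low` pupils, teacher grades are used unchanged (weight 0);
    at or above `high`, the statistical model dominates (weight 1);
    in between, the weight rises linearly.
    """
    if cohort_size <= low:
        return 0.0
    if cohort_size >= high:
        return 1.0
    return (cohort_size - low) / (high - low)

# Small (often private-school) cohorts keep their generous teacher grades;
# a typical state-school class of 25+ gets the full statistical correction.
for n in (4, 5, 12, 20, 30):
    print(n, blend_weight(n))
```

Under this sketch, the bias people are describing falls straight out of the cohort-size cutoff: a class of four is never corrected at all.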
It is a choice to start the academic year in September for universities - you could start it in January.
This feature was fantastic for my other son last year. He is probably on the autistic spectrum and did not meet his grades for his uni offer. Consequently he could not have a place on his chosen course (computer science). Instead they offered him a place on two different undersubscribed courses (electronic engineering with either computer science or music tech). This had clearly been offered after considering his prior GCSE grades, where he had good BTECs in engineering (a distinction) and music tech. He took the first offer up and was very pleased to get into his chosen university to do an appropriate course, and really relieved that he would not have to go through clearing and phone lots of universities; he is loving the course. Presumably it also suits universities, as they get students who are keen to come and who are doing appropriate courses.
The figure I saw quoted was that 75% of predicted grades were overgenerous. In many ways it's a pity that exams have changed from a high proportion of coursework vs final exams - that would be a great predictor, as by March 2020 most coursework would be in.
That said, it is extremely disturbing that the adjustments seem to be targeted and that individual pupils' results are determined not by ability but by their school (and presumably their social background). Heinous.
There perhaps needs to be a reflection on why teachers' grades have historically been off the mark, but that can wait until this is sorted. It would be very interesting if the variances between predicted and achieved were ranked over a period of time by school. It may well have exposed something that needs addressing in other ways.
I guess it comes down to the problem that it is all so heavily based on exams that you need the exams. Without them, using a proxy is incredibly risky - proxies always are, unless you put a lot of information and effort into getting them right. If teacher predictions are always so far out, then they are clearly not a good proxy.
Maybe universities should have addressed this too. Worked out how they could get the best students without exam grades. They knew this was all coming anyway.
It is a mess. And, as usual, those at posh schools will do fine, those at lesser places will suffer.
Is it to certify mastery of a certain subject, or to rank who is the best at that body of knowledge?
People taking the exams tend to think of them as assessing your competence in a subject, not where you rank in your age cohort.
That doesn't address concerns about how employers will view grades issued this year over the next few years though.
Universities would still need a way of normalizing across schools, though. Most universities won't have many applicants from the same school - and certainly not for the same courses - so they'd be in a position of comparing Teacher A's recommendation with Teacher B's recommendation to decide which pupil to take. And clearly, Teachers A and B want their pupils to succeed, and have the best pupil they've had in several years, and unconditionally support their pupil.
If the teachers are from schools that send kids to the university every year, you have a point of comparison ("Tell me how Pupil A compares to Student X, who we admitted last year"). Which is fine for the "good" schools, but not so good for the schools/kids who are claimed to have been disadvantaged by this year's debacle.
In my career, recommendations for promotion were often based on who was doing the recommending.