Exam grades "re-adjustment"

From what I have seen, A-level results in the UK have been significantly downgraded. Mainly if you are in a state school.

Less so in Scotland, because they complained.

And significantly less so in public schools.

Meaning that many many people, especially from the state sector and the more deprived areas, have now lost their university places.

Every time I think the level of utter fuming raging fury I experience has reached its limit, the corrupt dung-heap in charge here digs deeper. It is clearly and definitively classist.

So many hard-working young people have had their futures torn from them. Yes, many will recover. It will all be behind them and they will have a place in society. But it just makes everything harder.

Even before the patronising comments from Gavin Williamson about people being "promoted beyond their ability". Like him and his fucking cronies.

In. The. Sea.

Comments

  • It is desperate.

    I have had a group of young people in my home today who are waiting for GCSE results, due next week.

    They are very anxious.

    The prospect of being downgraded from mocks and predictions is very upsetting to them.
  • Caissa Shipmate
    edited August 2020
    Can you explain this to a Canadian? Our university entrance is usually based solely on high school grades achieved in specific courses.
    ETA: I just found this article https://www.bbc.com/news/education-53759832
  • Just to note that ideas of 'individual effort and reward' are prominent on the right, and so I suspect that the injustice of this will be opposed on both left and right.

    I suspect that this won't go away.
  • Caissa wrote: »
    Can you explain this to a Canadian? Our university entrance is usually based solely on high school grades achieved in specific courses.

    16-18 yr olds study three or four subjects assessed in A-Level exams. Grades from the exams, released in August each year, determine university entrance.

    Exams did not take place due to COVID. Mock exams and teacher predictions were fed into a national 'levelling' algorithm that has downgraded 40% of predicted grades - although private schools seem to have escaped this reduction.

    Some straight A students (all year, in all assessments and mocks) in schools with less than stellar reputations have got Bs and Cs.
  • Let's start with some facts. This cohort is not magically much better than last year's cohort, or the year before, and they won't be magically better than next year's. If the "exam results" - by which you mostly mean "predictions by teachers" - are not broadly similar to last year's, the "results" are wrong.

    The predicted "results" were wrong. That is indisputable.

    The open question is what you should do about it.
  • Caissa Shipmate
    Thanks for the explanation, Asher.
  • asher wrote: »
    Some straight A students (all year, in all assessments and mocks) in schools with less than stellar reputations have got Bs and Cs.

    I'd guess that the levelling was driven by the historic accuracy of the teacher predictions at each school, but I'd be interested to see a link to the algorithm, if anyone has it. So if over the past few years, teachers in this school have predicted that some pupils will get As and Bs, and what they've actually got in the exam has been mostly Cs, then we learn that the teachers at that school are wildly optimistic about their pupils' prospects.

    The problem, of course, with such a significant re-scaling of people's marks, is that it's easy to show that it does "the right thing" in aggregate, but there's very much less confidence that it's doing "the right thing" in an individual pupil's case.
  • Is there a thought that these A-level exams are fairer in some way than an algorithm? Or than teacher-assigned marks from coursework? Has there been data shown regarding statistical reliability and validity?

    I ask because I do not believe that beyond a requisite level of academic achievement that marks show much of anything relevant.
  • asher wrote: »
    Some straight A students (all year, in all assessments and mocks) in schools with less than stellar reputations have got Bs and Cs.

    I'd guess that the levelling was driven by the historic accuracy of the teacher predictions at each school, but I'd be interested to see a link to the algorithm, if anyone has it. So if over the past few years, teachers in this school have predicted that some pupils will get As and Bs, and what they've actually got in the exam has been mostly Cs, then we learn that the teachers at that school are wildly optimistic about their pupils' prospects.

    The problem, of course, with such a significant re-scaling of people's marks, is that it's easy to show that it does "the right thing" in aggregate, but there's very much less confidence that it's doing "the right thing" in an individual pupil's case.

    As I understand it, it is not based on the accuracy of school predictions, rather on school performance - so that outstanding students at less successful schools are disadvantaged.

    From the FT today:
    'Ofqual, the regulator for England, said grades were based primarily on predictions calculated by teachers based on past work, mock exams and student rankings. Using an algorithm, these results were then standardised according to factors including a school's past performance and pupils' past exam results.'


    You are absolutely correct that this system is not capable of doing 'the right thing' in an individual pupil's case - and it is this aspect that I think will see the political right take up the cause.
  • asher wrote: »
    As I understand it, it is not based on the accuracy of school predictions, rather on school performance - so that outstanding students at less successful schools are disadvantaged.

    That seems like a bad choice. On a national scale, the statement that each year's cohort will be similar to the adjacent ones is statistically sound. At the individual school level, that statement is very much less sound, particularly when you're dealing with individuals in the tails of the distribution.

    But perhaps they don't have complete records of teacher predictions from previous years to match to results from previous years.
    Is there a thought that these A-level exams are fairer in some way than an algorithm? Or than teacher-assigned marks from coursework? Has there been data shown regarding statistical reliability and validity?

    I ask because I do not believe that beyond a requisite level of academic achievement that marks show much of anything relevant.

    What the marks show is precisely the level of academic achievement demonstrated. They don't purport to do anything else.

    Exams are "fairer" than teacher grades in the sense that they put everyone on the same playing field, rather than having different teachers award different grades for the same quality of work. There is a perennial discussion about the extent to which exams correlate with the ability of the pupil in some kind of work-like environment (which usually isn't 3 hours of panic with no access to reference materials), but that's a different discussion from the one about teacher grades vs central assessment.
  • edited August 2020
    @asher It is probably as valid as scores from an exam. Validity here meaning that it predicts future student performance. Does the algorithm predict what it is supposed to predict? That would be something to see in the future.

    Frankly, I'd expect from my understanding of measurement that a multiple-source assessment would be more valid than a single result on a test.
  • Well, yes, but we do seem to be wedded to exams.
  • They should have deferred a year, it was always going to be a clusterfuck.
  • I ask because I do not believe that beyond a requisite level of academic achievement that marks show much of anything relevant.

    The oddity is that A-level results are meaningless long-term. They don't show anything specifically relevant, but they are used as the gatekeepers for the next stage of education. Same with GCSEs. So failing to achieve a particular level will disadvantage a generation.

    It is possible to compensate, by studying more, by obtaining exam qualifications. But it takes time and money to do so. And it shouldn't be necessary (because they have already done the work).

    This government is the one that has put more and more emphasis on exams, and away from continuous assessment. I always used to do OK in exams, and less well in ongoing assessment, but for many, this is not the case.
  • Heavenlyannie Shipmate
    edited August 2020
    I have a son awaiting GCSE results next week in a state school.
    I gather that approx 40% of A level estimates were downgraded, but the Secretary of State for Education claimed that 75% are usually overestimated anyway.
    The GCSE process (not awarded yet) involved looking at individual school’s past grades and comparing with this year’s estimates and teachers were asked to rank students as well as estimate grades for them. Whilst this would seem fair on the surface (and obviously this is a difficult conundrum to make fair) it can lead to individual unfairness if a school has a better cohort this year than last.
    I have problems with ranking students according to ability, having been one of those shy students who went unnoticed. But the other problem is that schools with very small cohorts of students did not have grades adjusted, which advantaged private schools. It was a good year to study music and the classics apparently.
  • It was the disparity in grade reductions between schools in poorer and more affluent areas that caused the Scottish U-turn on the issue to say that you couldn't get a worse grade than in your prelims. I think it was something like twice as many for pupils in less well off locations. I believe though that the Scottish system has standardised national mock exams, so can rely slightly more on those scores as a fair indicator of performance, whereas English schools all set and mark their own mock papers, often based on old exam questions, so there is more variability in scores between establishments (e.g. some places base scores on how you would do if that was the real thing, others are quite harsh marking to give students a wake-up call).

    I suspect that future years will now take mocks a lot more seriously!
  • Yes, my son’s school takes mocks seriously in that they are correctly invigilated but some of the papers were real ones which covered subjects not yet studied so are not good as standalones for grade estimates (his estimated grades for sixth form are more realistic). Not all his subjects had mocks.
  • Dafyd Shipmate
    I haven't gathered exactly what the algorithm does. In principle I can see a role for statistical correction. But if that correction is based entirely upon the average past performance of the school the child went to, without regard to the individual performance of the child, then it is entirely unjust.
    I suppose one question might be, does the algorithm preserve the year by year standard deviation for the school as well as the year by year average? If it does then there's a semblance of justice. If it doesn't then there's none.
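    To make that concrete, here is a toy sketch of what such a mapping might look like (an illustration of the principle only, not Ofqual's actual code; the grade ladder, scaling and rounding here are my own assumptions):

```python
from collections import Counter

# Grade ladder, best first (assumed for illustration).
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]

def standardise(ranked_pupils, last_year_grades):
    """Give this year's pupils (ranked best-first by their teachers)
    the same grade distribution the school achieved last year."""
    hist = Counter(last_year_grades)
    n, m = len(ranked_pupils), len(last_year_grades)
    result = {}
    i = 0
    cum = 0
    for g in GRADES:
        cum += hist[g]
        # Number of this year's pupils at grade g or better,
        # scaled to this year's cohort size.
        cutoff = round(cum / m * n)
        while i < min(cutoff, n):
            result[ranked_pupils[i]] = g
            i += 1
    while i < n:  # rounding leftovers get the lowest grade
        result[ranked_pupils[i]] = GRADES[-1]
        i += 1
    return result
```

    By construction this preserves the school's year-on-year average and spread, so it would pass the standard-deviation test - but at the cost that no pupil can beat the school's historical best: `standardise(["star", "avg"], ["C", "C", "C", "C"])` hands the star pupil a C regardless of their own work.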
  • I have a son awaiting GCSE results next week in a state school.
    I gather that approx 40% of A level estimates were downgraded, but the Secretary of State for Education claimed that 75% are usually overestimated anyway.
    The GCSE process (not awarded yet) involved looking at individual school’s past grades and comparing with this year’s estimates and teachers were asked to rank students as well as estimate grades for them. Whilst this would seem fair on the surface (and obviously this is a difficult conundrum to make fair) it can lead to individual unfairness if a school has a better cohort this year than last.
    I have problems with ranking students according to ability, having been one of those shy students who went unnoticed. But the other problem is that schools with very small cohorts of students did not have grades adjusted, which advantaged private schools. It was a good year to study music and the classics apparently.

    May I wish your son and his friends well, as they wait and as they come to terms with what next Thursday brings.

    This year has been hard on the mental health of young people, and (IMO) young men in particular (with their not-always-good communication skills).

    Asher
  • Be careful using the term ‘predicted grades’.
    The teacher assessment grades sent to exam boards based on continuous assessment are a one off (hopefully).
    Students usually have predicted grades on their university application that are renowned to be highly inaccurate & depend more on how much they beg & cajole their teachers & tutors & how realistic their choices are in relation to their abilities & efforts.

    (I work with A-Level students)
  • Pendragon wrote: »
    It was the disparity in grade reductions between schools in poorer and more affluent areas that caused the Scottish U-turn on the issue to say that you couldn't get a worse grade than in your prelims. I think it was something like twice as many for pupils in less well off locations. I believe though that the Scottish system has standardised national mock exams, so can rely slightly more on those scores as a fair indicator of performance, whereas English schools all set and mark their own mock papers, often based on old exam questions, so there is more variability in scores between establishments (e.g. some places base scores on how you would do if that was the real thing, others are quite harsh marking to give students a wake-up call).

    I suspect that future years will now take mocks a lot more seriously!

    Not quite. Prelims used to be necessary for use in the old appeals process that allowed a student to have their grade reviewed if the school could present solid evidence of working at a higher grade than the one awarded by the exam. This system disappeared about 5-6 years ago, but most teachers in Scotland continue to take prelims seriously and try to produce an exam that is of a similar standard to the real thing. Some of these will be bought in from companies specialising in this activity, others will be written by teachers or compiled by mixing and matching questions from several papers. My own approach varies depending on how much time I have. And, to clarify, the teacher assessments in Scotland were not solely based on prelims, but on a body of evidence of which prelims may have been a part.
  • Be careful using the term ‘predicted grades’.
    The teacher assessment grades sent to exam boards based on continuous assessment are a one off (hopefully).
    Students usually have predicted grades on their university application that are renowned to be highly inaccurate & depend more on how much they beg & cajole their teachers & tutors & how realistic their choices are in relation to their abilities & efforts.

    (I work with A-Level students)

    I'm surprised to hear this. I taught Sixth Formers for many years, and never had anyone even ask about their predicted grades. It was a part of the system that students seemed unaware of, in normal times.
  • Telford Shipmate
    From what I have seen, A-level results in the UK have been significantly downgraded. Mainly if you are in a state school.

    Less so in Scotland, because they complained.

    And significantly less so in public schools.

    Meaning that many many people, especially from the state sector and the more deprived areas, have now lost their university places.

    Every time I think the level of utter fuming raging fury I experience has reached its limit, the corrupt dung-heap in charge here digs deeper. It is clearly and definitively classist.

    So many hard-working young people have had their futures torn from them. Yes, many will recover. It will all be behind them and they will have a place in society. But it just makes everything harder.

    Even before the patronising comments from Gavin Williamson about people being "promoted beyond their ability". Like him and his fucking cronies.

    In. The. Sea.

    Universities will still have the same number of places to be filled and they will want to fill them.
  • I'm surprised to hear this. I taught Sixth Formers for many years, and never had anyone even ask about their predicted grades. It was a part of the system that students seemed unaware of, in normal times.

    It wasn’t a thing when I was at school either but now more and more will say, “but I need an A to do...” when they’ve been getting C’s all year. I think offers are higher than they used to be too.

  • Telford wrote: »
    Universities will still have the same number of places to be filled and they will want to fill them.

    Sure, but it's a reasonable concern that this year's debacle might unfairly disadvantage smart kids from generally bad schools in favour of average kids from good schools or private schools.

    In other words, if the re-allocation of exam grades is biased in favour of wealthy kids from "nice" state schools and private schools, then the good universities will tend to take even more of those kids, and fewer of the able kids from poor backgrounds.

    And that looks like a problem. It's not completely obvious to me how to fix the problem, though.
  • @asher It is probably as valid as scores from an exam. Validity here meaning that it predicts future student performance. Does the algorithm predict what it is supposed to predict? That would be something to see in the future.

    Frankly, I'd expect from my understanding of measurement that a multiple-source assessment would be more valid than a single result on a test.

    AIUI, the sources are a.) how pupils did in a mock-up of their A-Levels; b.) how their teachers think they would have done in their A-Levels; c.) how well their teachers have historically predicted A-Level results. IOW, the result is still based on a single assessment measure, viz. A-Levels, but because it can't be measured directly, they're trying to measure it indirectly by a combination of inputs.
  • Telford wrote: »
    Universities will still have the same number of places to be filled and they will want to fill them.

    Sure, but it's a reasonable concern that this year's debacle might unfairly disadvantage smart kids from generally bad schools in favour of average kids from good schools or private schools.

    In other words, if the re-allocation of exam grades is biased in favour of wealthy kids from "nice" state schools and private schools, then the good universities will tend to take even more of those kids, and fewer of the able kids from poor backgrounds.

    And that looks like a problem. It's not completely obvious to me how to fix the problem, though.

    But there must be some degree of flexibility in the number of places available per course, because when universities make conditional offers, they can't know how many pupils will turn down their offer because they prefer somewhere else.

    So if you just allow the results to stand, then more students will go to the better universities this year than in normal years, but those universities should have the flexibility to allow this.
    @asher It is probably as valid as scores from an exam. Validity here meaning that it predicts future student performance. Does the algorithm predict what it is supposed to predict?

    That's two different questions. The algorithm is intended to predict what grades the students would have got in the exams, had they been able to sit them.

    It's not intended to predict how successful those students will be in the future, any more than people hope that exam grades predict that - and there's plenty of evidence that coming from a wealthy, supportive background gives you something like a grade advantage per exam, on average, over most of the grade range, over someone from a deprived background, and that that advantage evaporates by the time you get to degree results.


  • It may also be worth noting that significant drops in grades year on year are an Ofsted inspection trigger. Some institutions that might otherwise have taken a hit and faced reinspection might have escaped on this occasion. Although, who knows what’ll happen with Ofsted next year!?!

    & in terms of universities - they want students! They rely on them to stay afloat. We’ve not heard anything more in recent weeks about the anonymous thirteen universities that were in danger of going bust from the lockdown and lack of international students.
  • I'm surprised to hear this. I taught Sixth Formers for many years, and never had anyone even ask about their predicted grades. It was a part of the system that students seemed unaware of, in normal times.

    It wasn’t a thing when I was at school either but now more and more will say, “but I need an A to do...” when they’ve been getting C’s all year. I think offers are higher than they used to be too.

    This is because many universities now state that they will not make offers to pupils whose predicted grades fall below a certain level, and this is often quite a high level, like A*AA.

    HOWEVER, if you receive an offer, and then fail to make your grades, there is often a fair chance that your chosen university will accept you anyway.

    This obviously creates quite a systemic pressure to inflate predicted grades. If it weren't for the latter problem, one could say "There's no point in my inflating your predicted grade anyway if you then fail to hit your offer", but that's not true!

    UCAS don't help by having a weaselly form of words saying that the predicted grade should be "the grade an applicant's school or college believes they're likely to achieve in positive circumstances" (my emphases, and that's a direct quote from their official website). What the heck does that mean? What are "positive circumstances" exactly? Does it mean "if they work really really hard between now and the exam although they never did before", for example? Because of course that's what some pupils claim!
  • Spot on in my experience TurquoiseTastic. ‘Aspirational grades’ might be a better way of describing them.

  • Indeed, and UCAS even describe them as such elsewhere on their website. How then can it possibly be justified for universities to use them as a selection tool?
  • In normal times it is a dumb system compared to a process where you’d apply *after* you got the grades - like the rest of your life. At the moment it is an even more ridiculous system than normal.
  • jay_emm Shipmate
    Dafyd wrote: »
    I haven't gathered exactly what the algorithm does. In principle I can see a role for statistical correction. But if that correction is based entirely upon the average past performance of the school the child went to, without regard to the individual performance of the child, then it is entirely unjust.
    I suppose one question might be, does the algorithm preserve the year by year standard deviation for the school as well as the year by year average? If it does then there's a semblance of justice. If it doesn't then there's none.

    I've seen second-hand a comment that it broadly maps this year's grades to the previous year's grades, so if a school was lucky last year its average student goes through, while if a school was unlucky last year its bright student doesn't.
    Also at that point, you've got no correction for teachers' favourites, so again an average student that the teacher likes gains at the expense of a bright but annoying student.

    So far, purgatorially unfair.

    The other corollary is that teachers over-estimate by effectively half a grade across the board, which does seem a bit high (granted, you have the students who crash out and lose multiple grades, but still).
    The narration is that this happens more in 'stupid' state schools, whereas 'clever' private schools got the grades right.

    However, somehow this system seems to have the extraordinary feature that private schools did somehow get better this year. And the suggestion is that the correction didn't get applied for small (private) schools. In which case they (by an extraordinary coincidence) benefited at the expense of normal schools.

    Although the anecdotes in the Guardian seem extreme, with schools receiving the "lowest set of results in their history" and Bs becoming Us (both from different schools named Notre Dame). If you're getting that sort of output, something is wrong (I can believe some new teachers are clueless, but at that point you need some independent validation).
  • jay_emm Shipmate
    The Mail (sorry) has some details, but it is written to confuse.
  • In normal times it is a dumb system compared to a process where you’d apply *after* you got the grades - like the rest of your life. At the moment it is an even more ridiculous system than normal.

    We employ people all the time in the expectation that, by the time they start work, they will have achieved X qualification.

    Applying after you get your grades doesn't leave much time to plan.
  • Indeed, and UCAS even describe them as such elsewhere on their website. How then can it possibly be justified for universities to use them as a selection tool?

    Everyone wants to use the exams, but they don't exist. So they use what exists?
  • jay_emm Shipmate
    Here is the gov version of the Mail's article.
    It looks very much like results depend on:
    Absolute prediction (small classes; see 8.4 in the link, 5 tapered to 20).
    Relative prediction matched onto the centre's previous results.
    Previous results matched onto new results.
    (I'm not sure how it decides B- and B+ before shifting; I think that is what the previous bit does.)

    In which case it looks very biased in the selectivity of its corrections.
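    For what it's worth, a taper like that would look something like this sketch (the 5-to-20 bounds are the figures quoted above, not confirmed Ofqual parameters, and the numeric grade scale is my own assumption): below the lower bound the teacher grade stands untouched, above the upper bound the statistical grade wins outright, and in between they blend linearly.

```python
def blended_grade(teacher_score, statistical_score, class_size,
                  lower=5, upper=20):
    """Linear taper between teacher assessment and statistical
    standardisation by class size. Scores are on an assumed numeric
    scale (e.g. U=0 ... A*=6); the lower/upper bounds are the figures
    quoted in the post, not confirmed Ofqual parameters."""
    if class_size <= lower:
        w = 0.0  # small class: teacher grade stands
    elif class_size >= upper:
        w = 1.0  # large class: statistics win outright
    else:
        w = (class_size - lower) / (upper - lower)
    return (1 - w) * teacher_score + w * statistical_score
```

    Which is exactly the small-cohort loophole complained about upthread: a class of four keeps its (often optimistic) teacher grades in full, and tiny classes are far more common in private schools.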
  • In normal times it is a dumb system compared to a process where you’d apply *after* you got the grades - like the rest of your life. At the moment it is an even more ridiculous system than normal.

    We employ people all the time in the expectation that, by the time they start work, they will have achieved X qualification.

    Applying after you get your grades doesn't leave much time to plan.

    It is a choice to start the academic year in September for universities - you could start it in January.

  • Weirdly, the opposite seems to be happening for students in Victoria (I don't know about other parts of Australia), with teachers' assessments expected to boost grades. See
  • HOWEVER, if you receive an offer, and then fail to make your grades, there is often a fair chance that your chosen university will accept you anyway.
    They might not accept you on the course you wanted but might offer alternatives.
    This feature was fantastic for my other son last year. He is probably on the autistic spectrum and did not meet the grades for his uni offer, so he could not have a place on his chosen course (computer science). Instead they offered him a place on two undersubscribed courses (electronic engineering with either computer science or music tech). This had clearly been offered after considering his prior GCSE grades, where he had good BTECs in engineering (a distinction) and music tech. He took the first offer up and was very pleased to get into his chosen university to do an appropriate course, and really relieved that he would not have to go through clearing and phone lots of universities; he is loving the course. Presumably it also suits universities, as they get students who are keen to come and who are doing appropriate courses.
  • ExclamationMark Shipmate
    edited August 2020
    Be careful using the term ‘predicted grades’.
    The teacher assessment grades sent to exam boards based on continuous assessment are a one off (hopefully).
    Students usually have predicted grades on their university application that are renowned to be highly inaccurate & depend more on how much they beg & cajole their teachers & tutors & how realistic their choices are in relation to their abilities & efforts.

    (I work with A-Level students)

    The figure I saw quoted was that 75% of predicted grades were over-generous. In many ways it's a pity that exams have changed from a high proportion of coursework vs final exams - that would have been a great predictor, as by March 2020 most coursework would be in.

    That said, it is extremely disturbing that the adjustments seem to be targeted and that individual pupils' results are determined not by ability but by their school (and presumably their social background). Heinous.
  • Spot on in my experience TurquoiseTastic. ‘Aspirational grades’ might be a better way of describing them.
    When I was at school, my teachers told us what grades we could expect if we buckled down and worked hard (and didn't panic in the exam), as an incentive to do so. Obviously it was unusual for all students to get those grades in all subjects - there were always issues with students slacking a bit, or getting exam questions on topics they hadn't revised as much while the topics they had revised didn't appear, etc. But those "aspirational grades" served the purpose of motivating students to put in the effort.

    I didn't even know that teachers produced expected grades accompanying the UCCA process until one university I'd applied to suggested that I should study chemistry rather than physics, because the predictions were that I'd do better at it in my A levels. (I keep wondering about contacting said university, having got a 1st in Physics, a PhD in Nuclear Structure Physics, and 25 years of post-doctoral environmental physics research....) In which case the predictions given through UCCA were probably more realistic than the aspirational ones I was given, and probably underestimated my grades. I was always good at exams but struggled with coursework - especially coursework that was only internally assessed and didn't contribute to the grade; I was on the leading edge of modernisation, with some assessed coursework for physics contributing to the grade rather than it all being based on exams.


  • ExclamationMark Shipmate
    edited August 2020
    Don't forget too that there's an underlying issue in UK exams which has sat there for years. Grades are based on a distribution curve with no fixed pass marks (these can vary year on year). A certain % will get A grades while a certain % will fail: both percentages are flexible, but again within limits. It's not quite true to say that if you pass you condemn someone else to fail, but it's not a long way off - pass marks can vary, and 45% does not guarantee a pass. In some years it may; in others not.

    There perhaps needs to be a reflection on why teachers' grades have historically been off the mark, but that can wait until this is sorted. It would be very interesting if the variances between predicted and achieved grades were ranked over a period of time by school. It may well expose something that needs addressing in other ways.
  • I think it does show the real flaws with the exam system. We would do better to scrap it and test throughout the course - meaning that pupils know what they have already got, and universities can know that someone has received A* all year, so can make an offer based on them not getting below A overall, for example.

    I guess it comes down to the problem that it is all so heavily based on exams that you need the exams. Without them, using a proxy is incredibly risky - proxies always are, unless you put a lot of information and effort into getting them right. If teacher predictions are always so far out, then they are clearly not a good proxy.

    Maybe universities should have addressed this too. Worked out how they could get the best students without exam grades. They knew this was all coming anyway.

    It is a mess. And, as usual, those at posh schools will do fine, those at lesser places will suffer.
  • I think the situation in Wales is quite instructive. They retained the AS-Level so had the option of awarding at least the same grade as was achieved in the AS, which seems likely to be fairly accurate. It's the decision in England to go the full 2 years without any external assessment at all which is the killer.
  • It is a problem with confusing the purpose of qualifications.

    Is it to certify mastery of a certain subject, or to rank who is best at that body of knowledge?

    People taking the exams tend to think of them as assessing your competence in a subject, not where you rank in your age cohort.
  • The only practical solution for universities and colleges would have been to have good discussions with teachers (more than just the 'predicted grades' they supplied) and based on those offer unconditional places.

    That doesn't address concerns about how employers will view grades issued this year over the next few years though.
  • The only practical solution for universities and colleges would have been to have good discussions with teachers (more than just the 'predicted grades' they supplied) and based on those offer unconditional places.

    Universities would still need a way of normalizing across schools, though. Most universities won't have many applicants from the same school - and certainly not for the same courses - so they'd be in a position of comparing Teacher A's recommendation with Teacher B's recommendation to decide which pupil to take. And clearly, Teachers A and B want their pupils to succeed, and have the best pupil they've had in several years, and unconditionally support their pupil.

    If the teachers are from schools that send kids to the university every year, you have a point of comparison ("Tell me how Pupil A compares to Student X, who we admitted last year"). Which is fine for the "good" schools, but not so good for the schools/kids who are claimed to have been disadvantaged by this year's debacle.
  • Telford Shipmate
    The only practical solution for universities and colleges would have been to have good discussions with teachers (more than just the 'predicted grades' they supplied) and based on those offer unconditional places.

    Universities would still need a way of normalizing across schools, though. Most universities won't have many applicants from the same school - and certainly not for the same courses - so they'd be in a position of comparing Teacher A's recommendation with Teacher B's recommendation to decide which pupil to take. And clearly, Teachers A and B want their pupils to succeed, and have the best pupil they've had in several years, and unconditionally support their pupil.

    If the teachers are from schools that send kids to the university every year, you have a point of comparison ("Tell me how Pupil A compares to Student X, who we admitted last year"). Which is fine for the "good" schools, but not so good for the schools/kids who are claimed to have been disadvantaged by this year's debacle.


    In my career, recommendations for promotion were often based on who was doing the recommending.