The SAMs and additional SAMs have comprehensive mark schemes, but are there any ideas of approximate grade boundaries?
Thank you for your enquiry, which is a popular one within Ask the Expert.
We cannot issue estimated, indicative or ballpark grade boundaries for the new SAMs and SCAMs until we have seen student evidence from the first unit exams (November 2011) and from Controlled Assessment moderation (May 2012).
In January 2012 there will be an examiners’ report which will contain the raw and UMS grade boundaries and similarly in August 2012 for the controlled assessments.
You may like to consider using the UMS system where an A* is 90% UMS, A 80%, B 70% and so on, regardless of raw mark.
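The UMS arithmetic described above can be sketched as code. Note that UMS conversion is piecewise linear between grade boundaries rather than a single multiplier, which is why no flat "conversion factor" from raw to UMS exists before boundaries are set. The raw-mark boundaries in this sketch are hypothetical placeholders invented purely to illustrate the calculation; real boundaries are only set at the Awarding meeting.

```python
# A minimal sketch, assuming a 60-mark Higher Tier paper on an 80-UMS
# unit. UMS conversion interpolates linearly between grade boundaries,
# so the raw boundaries below are HYPOTHETICAL, for illustration only.

def raw_to_ums(raw, anchors):
    """Linearly interpolate a raw mark between (raw, ums) anchor points."""
    anchors = sorted(anchors)
    for (r0, u0), (r1, u1) in zip(anchors, anchors[1:]):
        if r0 <= raw <= r1:
            return round(u0 + (raw - r0) * (u1 - u0) / (r1 - r0))
    return anchors[-1][1]  # above the top anchor: cap at max UMS

# Fixed UMS thresholds for an 80-UMS unit: A* 90% = 72, A 80% = 64,
# B 70% = 56, C 60% = 48, D 50% = 40. The raw marks paired with them
# here (18, 24, 32, 40, 48) are made up for the example.
anchors = [(0, 0), (18, 40), (24, 48), (32, 56), (40, 64), (48, 72), (60, 80)]
print(raw_to_ums(24, anchors))  # a raw mark on the C boundary -> 48 UMS
```

Because each grade band can span a different number of raw marks, the "factor" changes from band to band; this is why a single raw-to-UMS ratio cannot be quoted in advance.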
Hope this helps.
Ask the Expert: ScienceSubjectAdvisor@Edexcelexperts.co.uk
What is the conversion factor to work out raw marks (out of 60) to UMS marks (out of 80)?
This is very poor, as all we as a centre are looking for is some guidance to support pupils who are rightly nervous. They see themselves as guinea pigs, sitting this exam for the first time.
An indicative or ballpark figure on the specimen papers surely shouldn't be too much to ask for?
I wonder if it might help for people to be aware of how grade boundaries are determined, so that you can see why it is not possible for Edexcel to produce grade boundaries for specimen papers.
When a question paper is written, the Principal Examiner writing it has a variety of targets to hit: the paper must cover the specification evenly, it must fit the Assessment Objectives (AOs) listed in the specification with the correct percentages, and it must have a range of question types (multiple choice, short answer, etc.). In addition, it must target a range of grades. Principal Examiners must juggle all these different targets when writing a paper.
In the case of the GCSE Science papers, the grade target is - very roughly - 20 marks to the lowest grades (D / C at Higher Tier; G / F at Foundation), 20 marks to the middle grades (B at Higher, E at Foundation) and 20 marks to the top grades (A / A* at Higher, D / C at Foundation). This means that - again, very roughly - the Principal Examiner writing the paper might expect a candidate to score 10 marks for a D, 20 marks for a C and so on. However, candidates will almost certainly perform differently from that expectation. Factors such as the context used in a question, the command word, a potentially unusual word or words used in a question stem - even the ordering of the questions - can all mean that the mean mark on the question paper varies from session to session.
This is why Awarding Bodies hold a meeting to set grade boundaries - because these grade boundaries change, sometimes quite significantly, from session to session, despite the best efforts of the examining team to produce papers of a consistent standard. If we simply said "40 marks is an A", in some sessions we might find that no one reached this mark, whereas in the next session 45% of students might get an A. This would be extremely unfair on students - their grades would be a lottery depending on whether the paper they sat was straightforward or much tougher.
Much fairer - bearing in mind that the overall ability of all candidates sitting an examination doesn't change that much from series to series - is to ensure that roughly the same percentages of candidates achieve a particular grade at each sitting. This is the purpose of an Awarding meeting to set grade boundaries. The senior examiner team look at "Archive" work - i.e. past student work that had been considered worthy of a particular grade (we only look at A, C and F grades for Awarding purposes) - and then they look at candidate work from the exam session. Using their subject knowledge and guided by information from the Principal Examiner for the paper, plus some of the marking statistics, they select a narrow range of marks where the grade boundary could lie. Statistical evidence is then used to firm up the grade boundary - almost always by trying to match the percentage of candidates achieving that grade, whilst bearing in mind other information we have about the ability of those students compared to those in previous series (information such as KS2/KS3 SAT scores, when they existed, teacher estimates and so on).
As you can see, the process of assigning grade boundaries therefore needs:
1. an archive of scripts that show examiners where the standard has been in the past
2. information about how a large cohort of candidates actually performed on the paper
3. statistical information about the percentages of candidates on each mark on the paper
4. historical statistical information about (a) the candidates who sat this paper and (b) performance on the same paper in previous years.
For SAMs papers, the only information we have is 1 and 4(b). We are lacking, crucially, 2, 3 and 4(a). This means that we are unable to provide any accurate grade boundary information on sample papers, where we have no data on how candidates actually performed on that exact paper. The additional difficulty here is that the papers are very different to the previous style of assessment, so it’s not simply a case of matching grade boundaries across from the old multiple choice tests.
One further point is worth making. If Edexcel issued “suggested grade boundaries” for the sample papers, these would essentially be plucked from thin air and not based on candidate performance, i.e. they would be nonsense. Worse, they would also be misleading to teachers and to students, as there would be no guarantee that they could be relied upon. So, if we said “We think that a C grade on the B1 Higher Tier SAMs is 25/60” and it turned out that, in the first set of live papers, the C grade was set at 35/60, there would be a deluge of complaints from teachers and students alike that the fictional and notional grade boundary we’d provided didn’t match the real ones in November.
Personally, as an ex-teacher and an ex-Principal Examiner, I’d rather know that provided grade boundaries had some meaning. And, actually, I’d rather set my own grade boundaries for my own students using the same Awarding process that the Awarding Bodies use. So, if I know that my GCSE sets usually get 65% C grade or better, I’d set my C grade boundary on their mock exam accordingly. Not only is that a better reflection of where the grade boundary probably lies, it also means that the grades you give are specific to the students in front of you.
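The "set your own boundaries" approach in the last paragraph can be sketched numerically: if your sets historically get 65% at grade C or better, place the mock's C boundary at the mark that the top 65% of the class reached. The helper name and the mock-exam marks below are invented for illustration; they are not Edexcel data.

```python
# A minimal sketch of percentile-style boundary setting, mirroring the
# Awarding principle of matching the percentage of candidates who
# achieve a grade. All data here is MADE UP for illustration.
import math

def boundary_for_top_fraction(marks, fraction):
    """Return the lowest mark achieved by the top `fraction` of candidates,
    so that roughly that fraction of the class sits on or above it."""
    ranked = sorted(marks, reverse=True)
    n = max(1, math.ceil(fraction * len(ranked)))
    return ranked[n - 1]

# Illustrative mock-exam marks (out of 60) for a class of 20:
mock = [12, 15, 18, 20, 22, 23, 25, 26, 27, 28,
        30, 31, 33, 35, 36, 38, 41, 44, 47, 52]
c_boundary = boundary_for_top_fraction(mock, 0.65)
print(c_boundary)  # -> 26: the top 13 of 20 (65%) scored 26 or more
```

This mirrors, in miniature, the statistical side of an Awarding meeting: the boundary is chosen so that a historically plausible proportion of candidates achieves the grade, rather than fixing a raw mark in advance.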
Senior Product Manager for Science