I feel like a massive hypocrite. I wasn't unhappy about the introduction of the idea of 12 core practical skills into A-level that would only be assessed as pass/fail. I thought that it would allow teachers more freedom regarding practical work. Although I qualify that, as I did also say that it would really have been better to go back to the free practical project that we had pre-2008. It was the change to controlled assessment that killed practical work at A-level.
If I were a university I would want to see the student's log book, and I would insist that the student 'pass' the practical part to get in. (What about a student who doesn't pass but gets an A? I would want to know why: poor teaching, illness, extreme lack of organisation or laziness, maybe an impairment of the student, or teacher absence. I think this situation would be rare.) The pass/fail and the log book of the practical work could actually help keep teachers honest about the experiences of their students, particularly, as I say, if universities may ask to see it.
I also reckon that if a student can answer an A-level question working out how to calculate the emf induced across an aeroplane wing, or work out whether a circuit with a particular capacitor could create a vibration quickly enough to produce a sound of a certain pitch, then they can probably read a thermometer, use a stopwatch and even a micrometer and oscilloscope.
However, things are different at GCSE. At GCSE it isn't (always) about the child; the headline figures are more important. Assessment drives teaching and floor targets drive teachers to the limit. Why are teachers going to bust a gut sorting out the practical investigation skills of GCSE students when it doesn't make any contribution to the final grade? If we were gaming the system before then we sure as hell will be now.
Example tasks from A-level physics: "Use a stop watch or light gate for timing", "Correctly construct circuits from circuit diagrams". Firstly, how many schools have enough equipment for everyone to be able to do this independently? If anyone at Ofqual (or anywhere else) thinks that this is suddenly going to make heads who are cutting back on staff and fixing holes in roofs increase the science department capitation, they are living in cloud cuckoo land. So how can even the most honest of teachers realistically say that a student can construct a circuit when it is likely they will work in pairs? What about the two students who have messed around during the lesson and clipped crocodile clips onto each other's blazers? What about the under-pressure NQT who can't get a class quiet enough to explain what to do? What about the student who is off and misses a practical? What about the students who ask 'What is the point of this?' and 'Will anyone care if I fail?' (Only the deputy head.) What about the student who has to work with a weaker student and therefore does all the work? What about the student who copies the person they are working with? What about the student who point blank refuses?
Moreover, what about the student who takes another piece of work to 'copy up'? What about the lost lab books? What about the teacher who fills folders with blank pieces of paper and claims they have done what they should have (yes, I have come across this teacher)? What about the teacher who is really struggling with workload and struggling to be organised? What happens when a teacher falls ill? What happens to the class that has a list of supply teachers? What happens when the technician fails to supply the right chemicals so the experiment does not work? What happens when the department runs out of money and can't afford to do a particular practical? What happens when teacher illness means there is restricted time for practical work? How will the head of department track all the work? Will it be possible to mark this work formatively and allow students a second chance? It doesn't sound that way at A-level.
Are schools really going to take it seriously when in reality no one is checking? Can-do tasks, anyone? I know from examiners that students got 24/24 in can-do tasks and 0 for Science in the News; how likely is that, really?
The current system doesn't create the same level of issues if classes are shared. The controlled assessment is a team effort; it does not need to fall entirely on the shoulders of one member of staff unless it is designed that way (triple science). To my mind the new suggestion adds to workload with no added benefit for students. One-year GCSE courses mean that all the core science coursework has to be completed by the end of Year 10, so it is less likely to walk.
I really think that the administrative burden of controlled assessment and lab books needs to be considered.
However, I am not advocating that we keep the controlled assessment. And unlike A-levels, I am not even suggesting that we return to the circumstances of the previous specification. That had massive issues too. It all did. Encouraging a wider range of practical activities is a good thing, although I do wonder if there really will be a wider range, or if resistance of a wire and sodium thiosulphate will once again be the staple of all science investigations.
I would suggest one of two things. Either we scrap the idea of assessing the practical aspect of practical work, or we accept that students can get a high score in it.
To understand the next part you have to realise that the GCSE specification that I follow has statements divided into low, medium and high demand.
I went to a session run by the examiner for the controlled assessment of our GCSE. He talked about areas that he would expect students to score highly in, risk assessment and drawing tables for example. I would agree with him: some areas are easier than others. In an exam some questions are easier than others too; you only need to read the examiner's report to see the questions described by how many students were able to answer them.
Why can't it be the case that students do better in areas of practical work than in aspects of the exam? There is a lot more to remember for the exam, and you complete the exam in one go. Students can have their controlled assessment broken down skill by skill to ensure they are comfortable with it. I know what is in the controlled assessment, so I can teach to the test, so to speak. We can do similar practicals beforehand, and I have certainly developed teaching activities that train my students to automatically hit 5/8 sections within the controlled assessment with ease.
Why can't practical work be low-demand work? Why does the spread of marks have to be the same as in an exam? It isn't the same thing.
This is especially true now that we have terminal examinations. Students don't need to get a grade or UMS score for each exam/module. Totals can be considered instead. These totals can then be spread over the percentage of each grade Ofqual decides it wants to give out.
So what does this mean I would do? I would have a number of experiments (15, 18 or 21, as they divide by 3!) and from those students would submit 9 tables of results or simple observations/conclusions. From these the teacher would judge the level of competence with the equipment given; the score would be 1, 2 or 3 depending on the quality of the results and the experiment.
The UMS marks for this would contribute 10%. Why 10%? Because that would be worth a grade: it could not be ignored, yet it would not make that much difference overall.
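To put rough numbers on it (this is only my sketch, and the UMS total is an assumption that would vary by board): 9 submissions each marked 1, 2 or 3 gives a raw score out of 27. If the whole GCSE were worth, say, 400 UMS, then 10% is 40 UMS, so the raw mark scales by 40/27; a student scoring 20/27 would pick up about 30 of those 40 UMS. That is roughly a grade's worth of marks across the qualification: straightforward for a head of department to track, impossible to ignore, but not enough to swing the result on its own.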
I would still include references to experiments in the exam, thereby making it necessary to complete all the experiments, or at least to make a persuasive case to management for practical equipment. I actually think that it would be possible to complete an exam on practical work without ever doing the practical work. I think that there is evidence to back this up: schools used to give the back-up data to whole cohorts in the 2006 specifications and students would still do alright in their controlled assessments.
We need to decide if practical work is important to science in the UK, and we need to do it urgently. If we think that it is, then we need to think about how our accountability system, our financing of practical work and our pedagogical approaches support it.
This is a really interesting piece. A couple of thoughts - firstly, controlled assessments do present a problem for awarding. It isn't that students do better at controlled assessment than exams that is the problem - after all, that's what UMS is for. The problem is the range of marks you get on controlled assessments compared to the exams. If you look at the graphs released in the Ofqual consultation, the spread can be seen to be lower than for the written papers and this makes it difficult to differentiate. Pretty much all the graphs show a very small percentage of candidates getting from 0-40% of the marks. It means you have a lot of dead marks in there that don't help you differentiate out the students, which you really don't want (and you end up having to rely on the written aspect). This is in contrast to the written papers, where there's (mostly) a nice spread. Also, if you did a 10% direct assessment for completing practical work then that also fails to differentiate, and you end up trying to differentiate on 90% of the marks, as we found during GCSE 2006 - as I think the consultation says, it's quite difficult to separate out grades of competency in direct practical skills. Now with a completely terminal exam instead of a series of modules, you could argue that students would be stretched out better so there's more latitude to accommodate this, but I don't think it solves the problem entirely.
Secondly, it's going to be really important for the awarding bodies to design assessments in such a way that the students will be at a disadvantage should they not have done the practical work. This will be a challenge, but the released (draft) assessment objectives do require both knowledge and application of practical techniques and procedures, so that does help with this aim.