Archive

Posts Tagged ‘feedback’

What are we testing for?

June 30, 2014 3 comments

There are many types of assessment that take place within our schools. Formative, summative, AfL, etc. are all buzzwords that relate to some kind of assessment that occurs on a daily basis, each with a different perceived purpose. So what is the big picture? What are we testing for? Should we be doing it?

I was recently told by someone that they had received feedback from a lesson observation which marked them down for “lack of AfL”. I asked if they had been given any more information than that and was told that the observer had said, “You need to use lollysticks or whiteboards.” This left me a little underwhelmed. The person I was speaking to was an NQT, and I thought that feedback was fairly meaningless and certainly less than helpful.

AfL

We’re talking “Assessment for Learning”, not Aussie Rules. The basic premise, as set out by Wiliam et al. in the Black Box series, is to assess in lessons as you go along, to check the class understand what you’re teaching them. In theory, I can see this is excellent practice, but in reality it has in many places become a box-ticking exercise. I spoke to a senior teacher this year who described lesson observations as “a game you play”. This frustrated me; I’m a firm believer in trying to be the best teacher I can be every lesson, and I try not to do anything out of the ordinary in observations. (My year 11s did throw the following accusation at me once: “Sir, how comes you pronounced your T’s properly when [the head] was in the room?” This apparent vocal change was entirely subconscious!)

That aside, I do think there is a place for AfL in lessons. I’m a big fan of whiteboards: they’re versatile, and they allow you to check answers from a whole class to ensure they know how to do something. They don’t, however, work magic and allow pupils to remember how to do things forever. Lollysticks, on the other hand, seem less useful for AfL. I’ve always been told they are “AfL”, but I don’t really see how: you still only get an answer from one person. They may be good for some things (whether they are or not is a debate for another time, but you can read Tom Bennett’s (@tombennett71) thoughts here), but I don’t see how they fit here.

If the feedback had been, “You need to ensure whole-class engagement; try using a random name selection method such as lollysticks,” I could have understood it. If it had been, “You need to ensure the whole class are ready for the task; use whiteboards,” I could have understood it. But to say “You need AfL, use lollysticks or whiteboards” just doesn’t help anyone. AfL is great, but use it correctly. Ask yourself, “Why am I doing this?” If the answer is “to tick a box”, then don’t bother! If it’s “because it will aid the pupils’ learning”, then give it a hell yeah.

Formative/Summative Assessment

The dichotomy often drawn between formative and summative assessment seems silly. Yes, there’s a difference between checking progress in a lesson on a whiteboard and sitting a test, but surely the point of end-of-term tests is to see how much pupils have learned? If your class have done half a term on algebra and they all got the expanding brackets questions wrong, then you need to go back over expanding brackets; thus the assessment is still formative.

I think many of us are guilty of over-testing to gather evidence of progress, an inevitable consequence of the raft of policy around this area. This itself can lead to issues. A twenty-minute end-of-topic test which takes place at the end of a lesson where pupils have covered the content may give positive results, but the retention may not be there, and pupils may not be able to recreate those results the following week. We need to structure our schemes of work, our lessons and our testing to create sustained progress and to ensure learning that sticks. I’m not sure how we do this yet, but I think a mastery-based curriculum may be a good start. I’ve read a lot on memory and learning from Joe Kirby (@joe_kirby) which I want to get deeper into. (e.g. this, but he has other posts too.)

I think in-class tests are a vital part of what we do, but their primary purpose should be to inform future teaching and learning, with progress monitoring a by-product. When progress monitoring becomes the primary gain, we’ve got our priorities confused.

High stakes external exams

These are the output of our education system, what we are always building towards. It seems strange to put teenagers through such tribulations at an age when hormones are flying, but I’m not sure what the answer is. We need some form of qualification that distinguishes each of us. @Bigkid4 has some good suggestions here. If you have any ideas, I’d love to hear them.

This post is part of the #blogsync initiative for June 2014; you can read the others here.

Engaging with Written Feedback

July 1, 2013 9 comments

During February the #blogsync topic was on engaging and motivating pupils. In school, as a department, we were looking at ways to engage pupils with written feedback and to motivate them to interact with that feedback and attempt the challenges set. I figured that these two things would dovetail nicely, and the idea behind this post was formed.

Much has been written on written feedback before, and if you are looking for ways to improve your own I would highly recommend reading the four posts on the topic written by Mark Miller (@GoldfishBowlMM on Twitter); they can be found at http://thegoldfishbowl.edublogs.org/category/feedback/. I would also recommend this by David Didau (@learningspy on Twitter): http://learningspy.co.uk/2013/01/26/work-scrutiny-whats-the-point-of-marking-books/

Within our department we have been developing our strategy on written feedback over the last few years, and around Christmas-time one of my colleagues came up with a way to personalise feedback and set questions on a computerised form, which could then be printed and used as a starter for the next lesson with the class. He also set it up to include pupils’ names via a mail merge. This would provide the pupils with feedback on their work and set them a challenge which either focused on a skill they were struggling with or pushed them to the next level. Our marking usually ties in to the mini-assessments, so these questions would tie in there as well. Before this we had been using marking stickers that had a box for pupils to comment in, but this was replaced by the question. Here is a picture of the previous sticker (the size would be 1/4 of an A4 page):

MS1 


And here is the new look one (A5):

MS2
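
For the technically minded, the personalisation idea can be sketched in a few lines of code. This is only an illustration, not our actual setup (we used a mail merge from a spreadsheet); the file name feedback.csv and its columns are assumptions of mine for the sketch:

```python
import csv

# Minimal sketch of a mail-merge-style feedback slip generator.
# Assumes a hypothetical feedback.csv with columns: name, comment, question.
SLIP_TEMPLATE = """\
Feedback for {name}
-------------------
Comment: {comment}
Your question: {question}
"""

with open("feedback.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(SLIP_TEMPLATE.format(**row))  # one printable slip per pupil
```

The printed slips can then be cut up and stuck into books as the starter for the next lesson, which is the workflow described above.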

I wanted to run a check on the effectiveness of the new system, so prior to switching I surveyed three of my classes with the following question:

“On a scale of 1 to 5, where 1 is the lowest and 5 the highest, how much does the marking of books help you with your maths?”

I had responses covering the full range from 1 to 5, and the mean was 2.7 (1 d.p.). Unsurprisingly, my top-set year 8 class had a mean of 3.5, which was much higher than the other classes.

I ran with the new idea for a term and re-asked the same question. The results were slightly higher; this time the mean was 3.2 (3.9 for the aforementioned year 8 class).
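
(The means above are just arithmetic means of the 1-to-5 responses, rounded to one decimal place. A trivial sketch of the calculation; the response list below is hypothetical, as only the reported means come from the actual surveys:

```python
# Hypothetical 1-to-5 survey responses; the real data isn't reproduced
# here, only the reported means (2.7 before, 3.2 after).
responses = [2, 3, 1, 4, 3, 2, 5, 3, 2, 3]

# Arithmetic mean, reported to 1 d.p. as in the post.
mean = sum(responses) / len(responses)
print(f"Mean rating: {mean:.1f}")
```
)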
The data suggests that there is an overall increase in engagement with written feedback. I looked through some of the slips to see if anyone had drastically changed, and there were a few people who had put 2s and now put 4s, and a few who had jumped up one, so I asked them why they thought their perception of the helpfulness of marking had changed. There were two main answers that they all seemed to give a variant of: “Because it’s much easier to read when it’s typed,” and “There is a question to do.” During this time my HOD did a marking scrutiny and commented that my marking was much easier to read when typed, so I have taken this on board and intend to use it consistently in future. Most of the team are using the new system now, and we are going to implement it across the whole department next year to bring consistency to our marking.

I’m under no illusion that these surveys constitute concrete proof that the new marking strategy has improved engagement with the written feedback in my classes, but all pupils are now answering the questions, which certainly shows they are reading it. This is different to before, when the higher-ability pupils would write excellent comments, the lower-ability pupils would write something like “thanks” and the middle-ability pupils would not write anything. The effect seemed greater in lower sets than in higher sets, but this could relate to the fact that their baseline was much lower. I hope to repeat the survey at some point next year with my classes to see how the data looks after a prolonged period of using this marking strategy; this should give me an idea of the long-term effects.

In conclusion:

The evidence suggests that the new strategy has increased engagement within the sample. This is because the feedback is easier to read and it includes something for pupils to attempt, rather than just read. This is enough for me to decide to continue with the strategy.
