There is a wealth of cognitive science research showing which revision strategies work best for embedding information in long-term memory – which is our goal in relation to exam success. Some of it is common sense, but other aspects may surprise you or challenge your thinking.
There are many time-consuming revision strategies that actually fool us into thinking we have embedded knowledge in our long-term memory. For example, simply re-reading texts or notes has been shown to have a low impact on memory retention, especially considering how much time it can take; yet students are happy because it is a relatively undemanding task that requires little mental effort and it feels like effective revision. Re-reading ‘Of Mice and Men’ for an English Literature exam doesn’t have the impact we need, especially given how time-consuming it is as a revision activity, so other, better, strategies should be undertaken. Other edu-myths also cloud effective planning for exam revision. There is an old adage abroad in education that: “We learn: 10 percent of what we read; 20 percent of what we hear; 30 percent of what we both see and hear; 50 percent of what we discuss with others; 80 percent of what we experience personally; 95 percent of what we teach to someone else.” This is a myth based on no evidence. It has been perpetuated because it is a seductively simple formula, but it is unfounded. David Didau lances this particular boil to good effect here. We must go beyond these simplifications and seek answers from more reputable research to judge against our experience.
The following strategies are underpinned by more reputable scientific research and evidence:
– Information retrieval over re-reading: It may prove more challenging in the short term, but getting students to try to remember the content of a given topic is more effective than having them make revision notes from the original content, textbooks etc. ‘Concept mapping’ is an ideal teaching tool for this (think of its popular, image- and colour-laden sibling, ‘mind-mapping’!). At the end of each week, for example, have students attempt to retrieve the information without their notes or books, creating a hierarchy of connections that they attempt to organise conceptually.
Research: http://learninglab.psych.purdue.edu/downloads/2012_Karpicke_CDPS.pdf. Thank you to @websofsubstance, whose excellent blog post on retrieval helped me source this research: http://websofsubstance.wordpress.com/2013/04/06/golden-retrievers/
– Collaborative retrieval: Typically we associate revision activities and memory with individual focus. Indeed, there is some evidence that group work can inhibit some learning, but there is also evidence that students working in groups can benefit, ‘cross-cueing’ one another as they recall information. Put simply, they help one another remember and retrieve aspects of key information they would not have remembered individually. Also, the social nature of working together can create memory cues that help individuals recall information well over time. Of course, any errors in retrieval, whether individual or collaborative, need teacher correction.
– ‘Spacing’ versus ‘massed’ practice: This finding is common sense really. ‘Spacing’ means revising the same information two or three times across a few days, which improves the likelihood of retaining it in long-term memory (Nuttall, 1999). This may include revising a poem and making connections with another poem, then revisiting the key aspects of that poem in the subsequent lesson, before finally creating a ‘concept map’ at the end of the week to revise the learning from that week’s lessons. ‘Massed’ practice, or ‘cramming’, can have a good short-term effect on memory recall, but it fails in the long term in comparison with ‘spacing’ out revision. There is no exact time or number of days over which ‘spaced’ revision should be allocated; however, the research indicated that the ‘spacing’ should be shorter the nearer the exam. In practical terms, over a half-term we could revisit a concept after a couple of weeks, but nearer the exam we would cluster a couple more ‘revisions’ of the concept/information.
David Didau has written an excellent blog explaining spacing etc. and the implications for curriculum planning, and what ‘progress’ in learning may look like here.
Research: http://psi.sagepub.com/content/14/1/4.full.pdf?ijkey=Z10jaVH/60XQM&keytype=ref&siteid=sppsi and for an in-depth focus on ‘spacing’: http://uweb.cas.usf.edu/~drohrer/pdfs/Carpenter_et_al_2012EPR.pdf
– Using ‘worked examples’: This is the common method of using past exemplars, or creating your own through ‘shared writing’ strategies. It gives students a working template for their revision and removes obstacles that stop them learning more knowledge. Ideally, teachers should model worked examples of exam questions, giving students a clear idea of an excellent answer, before fading back and letting students tackle exam questions independently. Of course, once more, quality feedback is key in this process.
A great blog by Joe Kirby goes into great depth about the ‘why’ of using ‘worked examples’ here.
– Regular in-class testing: Drilling answers to tests, under test conditions, can improve both short-term and long-term memory to boost revision (Roediger et al 2011). Like the retrieval practice of ‘concept mapping’, the very act of retrieval without supporting resources proves more memorable than any ‘re-study’ activity. Taking a test can leave students less confident, so quick and accurate feedback is key to making testing highly effective and to building confidence. Research suggests that teachers often drastically overestimate what their students know (Kelly, 1999), so repeated testing is a practical necessity. In terms of learning, there is much research showing that testing revision material has a positive impact on long-term memory in comparison with simply revisiting the material.
Another important consideration is that students naturally revise in a ‘massed’ learning style, i.e. last-minute cramming! Jack Michael labels this the ‘procrastination scallop’ here. This led to a recommended ‘exam a day’ approach, which forces students to distribute their revision more evenly, rather than just cramming. It may seem excessive, but regularly getting students to do challenging retrieval that tells the teacher what they know and don’t know (and, invariably, whether they have revised or not), through quizzes etc., could do the job.
Research: http://people.duke.edu/~ab259/pubs/Roediger&Butler(2010).pdf and the ‘exam a day’ research: http://www.teachpsych.com/ebooks/tips2011/I-07-01Leeming2002.pdf
A lot less scientific, but a fun revision strategy that works for many:
– Building a ‘palace of memory’ is a much less scientific way of improving memory recall, but it is apparently thousands of years old, originating with the Greek poet, Simonides of Ceos, in the fifth century BC. See this Guardian article for an excellent example of the method in action: http://www.guardian.co.uk/lifeandstyle/2012/jan/15/memory-palaces-lists
How does this equate to a revision programme?
I am now avoiding revision activities or homework revision tasks that simply revisit information. I will plan to interleave different topics each week, to create the necessary ‘spacing’ between topics (in my English GCSE class this will mean studying poetry for English Literature at the start of the week, the novel and short stories in the middle of the week, and ending the week with English Language revision). I will give regular mini-tests, drilling individual answers, with ‘worked examples’ in the first instance to model a good answer. The feedback on their answers will be timely and regular. I also want to undertake weekly retrieval activities that reflect upon what they have learnt that week (combining ‘spacing’ and ‘retrieval’).
It is clear that the process of revision happens inside and outside the classroom. Students who possess the grit and resilience to persist with the humdrum nature of revision tasks will have a greater chance at success, but teachers must also identify and plan revision strategies that work. Of course, our experience and intuition about what will work best for our students is important, but we should challenge our assumptions with the wider research that is easily accessible on the web.
“I believe that work of excellence is transformational. Once a student sees that he or she is capable of excellence, that student is never quite the same. There is a new self-image, a new notion of possibility. There is an appetite for excellence. After students have had a taste of excellence, they’re never quite satisfied with less; they’re always hungry.” (page 8, ‘An Ethic of Excellence’ by Ron Berger)
One feedback strategy I have found has enhanced the writing of my students so far this year is ‘gallery critique’. The initial inspiration came from Ron Berger, whose ‘An Ethic of Excellence’ provided inspiration in the pursuit of motivating students. Like any teaching and learning strategy, it is far from flawless, but having trialled it extensively with different groups, from students to teachers themselves in staff training, I think it was well worth nominating.
After having selected the ‘gallery critique‘ strategy to meet the #blogsync brief of identifying a strategy that elicits motivation, it transpired that David Didau then wrote a peerless summary of the strategy here. This synthesis of research, expressed so skilfully, did make me think that my post had become rather redundant, but I wanted to explore some of the evidence base for the effectiveness of the strategy – particularly my specific use with my GCSE class.
More broadly, the evidence base for the effectiveness of feedback and assessment for learning is sound and thorough. Feedback has the greatest impact in John Hattie’s seminal synthesis of research, ‘Visible Learning‘; although, of course, feedback itself is a broad term. Dylan Wiliam is lauded as a guru in this particular area. He defined the five key areas of effective assessment for learning as follows:
– clarifying and understanding learning intentions and criteria for success
– engineering effective classroom discussions, questions and tasks that elicit evidence of learning
– providing feedback that moves learners forward
– activating students as instructional resources for each other, and
– activating students as owners of their own learning
The “big idea” that ties these together is that we use evidence of student learning to adapt teaching and learning, or instruction, to meet student needs.
(From ‘Excellence in Assessment‘ by Dylan Wiliam)
The strategy of ‘gallery critique’ is so appealing because, done well, it addresses each of the five areas of effective assessment for learning. I have learnt, through experience of trialling the strategy, that clarifying the success criteria is essential if students are going to create work worthy of a gallery. Each time I now use the ‘gallery critique’ method I make sure I have used multiple models of high-quality work matching the task as a precursor. Equally crucial is having the highest expectations of behaviour when undertaking the gallery reflection and feedback. It can be an off-putting strategy if you have a challenging group, given you expect students to walk around the classroom, but, like anything in the classroom, they need training until the strategy becomes a ‘new normal’ for how they learn on a regular basis. Of course, it is about being explicit about exactly how students should move about the room. I demand silence during the gallery reflection stage, verbally celebrating students who are undertaking the task with particular focus. I ensure students have a scaffold for their responses using the ‘ABC’ feedback model (they write on their large post-it notes: A for ‘Agree with…’, B for ‘Build upon…’ or C for ‘Challenge…’). I also articulate tight time-frames to ensure students are focused on the job. I then select exemplars that have multiple examples of feedback and talk through them with the class, huddled around in an arc facing the work, questioning students appropriately. Students follow up the ‘gallery critique’ with some sustained ‘dedicated improvement and reflection time’, whilst I attempt to remedy any misapprehensions with individual students.
In terms of evidence, I focused upon using the strategy with my Y10 group preparing for an ‘Of Mice and Men’ controlled assessment. I regularly identified distinct improvements to drafted paragraphs based on using the ‘gallery critique’ method; however, I am suspicious of my own instincts here, because, as Hattie states, almost every teaching intervention makes some form of improvement. That being said, we repeated this method of formative assessment, with the second batch of model paragraphs being distinctly better than the first (I included more exemplar models the second time around). I couldn’t grade this improvement, as it was part of the controlled assessment process, so any marking of drafts isn’t allowed (much to the annoyance of students, who are used to having drafts marked), but the paragraphs were clearly better. I did want the ‘soft data’ of student voice evidence, so I undertook a student voice activity with my trial group. I undertook the questionnaire just before their controlled assessment, so they were nervous and somewhat lacking in confidence (by the end of the lesson I had a different response to the ‘confidence level’ question – with more than half of the group feeling more confident).
The evidence from the questionnaires from my Y10 GCSE group is certainly not a ringing endorsement of the strategy! What clearly came through was that 82% of students in my GCSE group preferred teacher assessment over peer or self-assessment. Only 18% favoured peer assessment. Of course, students are always dependent on, and reassured by, teacher assessment, for good or ill, but it does call into question whether this strategy enhances motivation, or whether it simply defers the true gratification for students that is teacher assessment. One complication is that students know I will not, and cannot, mark a draft of their work, as the controlled assessment process prohibits this, so their annoyance may translate into their views on the questionnaire. 27% of students judged the ‘gallery critique’ method “not useful at all”; 32% thought it was useful at times; 18% deemed it useful and 18% thought it was very useful. Their reflective opinion did appear to clash with the quality of their written outcomes, but it is an interesting piece of evidence (arguably, watching videos would receive a high percentage for its usefulness, but I would be rightly sceptical of that judgement!). Interestingly, 64% of the group thought that reading the work of others was “useful at times”. Clearly, the desire for teacher-led assessment predominates and is indeed the dominant model for education – why wouldn’t students be conditioned to be reliant upon it? Does the strategy motivate students, undertaken in this specific manner in the English classroom? Clearly not as much as I thought.
The next crucial question: does it work? The proof will inevitably be in the summative pudding of the controlled assessment mark. I will be able to compare it with their previous reading assessment (not ideal, as there are differences between the two). I will also be able to compare their performance with other groups (again, recognising that a host of variables are at play) to ensure there is some hard data to supplement the student voice and my teacher observations of progress.
As is the case with most teaching strategies, a balanced variety of well-honed assessment for learning approaches will work best to help students make progress. Peer assessment that is well scaffolded and modelled, and conducted with well-chosen groupings, can be highly effective formative assessment, as the evidence suggests, but striking a delicate balance of assessment for learning is key. Students often dislike self-assessment, but that self-regulating skill is key to success, therefore we must persevere, ensuring our pedagogy scaffolds the assessment to make it purposeful and impactful.
It is only appropriate to end with the inspirational words of Ron Berger when thinking about the value of the ‘gallery critique’ strategy:
“Most discussions of assessment start in the wrong place. The most important assessment that goes on in a school isn’t done to students but goes on inside students. Every student walks around with a picture of what is acceptable, what is good enough. Each time he works on something he looks at it and assesses it. Is this good enough? Do I feel comfortable handing this in? Does it meet my standards? Changing assessment at this level should be the most important assessment goal of every school. How do we get inside students’ heads and turn up the knob that regulates quality and effort.” (P103, ‘An Ethic of Excellence’)
In summary, ‘Gallery critique’ is one very useful formative assessment strategy for getting students to better ‘turn up the knob that regulates quality and effort’, making their work worthy of a gallery.
Does public speaking matter?
What do the Houses of Parliament, the Oxford Union, big business boardrooms, assembly halls and court chambers have in common? They are the seats of power for the people who lead our nation, the great…and the not-so-great and good. What other common factor is at work in such settings? Each requires expert speaking and listening skills. Indeed, power in society equates with the power of knowledge, and with the ability to speak and to listen in such social settings. We must empower every student with the tools to speak in such settings if we seek real social mobility. Now, my argument is that when Gove suggests that we should move towards an ‘all eggs in one basket’ summative exam, we should reject that proposition. We should instead look to a richer, much more varied assessment model that has speaking and listening rooted at its core.
“We value what we measure, rather than measuring what we value” is a common refrain in education. Michael Gove has recently declared that if we are to return to an education system of rigour we must have a fitting assessment model. Now, few professionals could argue with this ambition for rigour, but Gove has indicated that high standards will only be upheld by the narrowest of assessments – an ‘all eggs in one basket’ summative exam approach. Such a narrow model (although it does signal the positive jettisoning of endless resits and time-consuming controlled assessments) fails to prepare our students of today for a complex tomorrow. One shift we must make is to place challenging oral assessments at the heart of our curriculum model, across curriculum subjects, if we are to move towards a curriculum fit for the twenty first century. We need to show we value those key skills for success: speaking and listening skills. They should be rooted in our daily practice – not be seen as burdensome or extraneous high-stakes assessments.
I can remember with vivid immediacy my experience of speaking and listening presentations in my English lessons. Notably, I remember no such challenge outside of English, except a couple of Spanish orals, which were rather less than memorable. I loved many of my English lessons, as you would likely expect, but the prospect of presenting to my peers filled me with dread. At KS3 I gave a dire talk on earthworms; at KS4 I lowered the bar still further with a bleak explanation of cancer. Each time I had to present to the group my fear was nearly insurmountable, resulting in my feigning illness on more than one occasion. Now I am confident speaking to a hall of over one hundred fellow professionals. How has this transformation occurred? Repeated deliberate practice. Was it solely down to those assessments? Of course not – but they made a difference. I was made to undertake that challenge, whereas if the assessment had not been an external requirement I may never have had to complete such a task. If those assessments didn’t exist on a formal basis, would we have undertaken them, given factors like student recalcitrance or mere absence? Ultimately, one lingering impact of those tentative presentations and group discussions is that I am able to be successful at my job and so much more.
Oracy has always been the poor sibling to reading and writing and once more we are failing to exploit a realigned curriculum to raise the status of speaking and listening. Despite its lowly status, educationalists across the globe recognise its primacy in the very act of learning. Even a rudimentary understanding of child language acquisition will spell out that oracy is the very foundation for successful reading and writing. I know, for example, that my young daughter’s oral proficiency will correlate strongly with her future ability to read and write successfully. Indeed, reading itself is a form of listening – described here by E. D. Hirsch:
“Reading—even skimming—is indeed accompanied by “subvocalization.” Although some teachers use this term to refer to children whispering to themselves as they make the transition from reading out loud to silent reading, researchers use this term to refer to the internal voice we all hear while we read silently. We use an inner voice and an inner ear. Reading IS listening.”
To say that listening complements reading also highlights its crucial role in the writing process. ‘Subvocalization’ is also inherent in writing, so much so that we commonly use the phrase ‘the writer’s voice’ without a second thought. You are likely voicing this blog post at this very moment! Extended talk and oral rehearsal can aid the writing process as much as they can prepare for a speaking performance. Put simply, speaking and listening are integral to reading and writing. If we foreground the assessment of speaking and listening, we enrich reading and writing.
I teach English, and at GCSE we have three speaking and listening assessments for English Language (none for English Literature), which account for 20% of the overall grade – not far off what I see as an appropriate percentage for how speaking and listening could be assessed in all subjects. Of course, Modern Foreign Languages has oral assessment at the heart of its curriculum, but in my opinion there is a paucity of high-quality oral assessments inter-connected across our curriculum (which would bolster the learning of foreign languages, a particular need for British students). To use an aural metaphor, we need each teacher in the school to be a player in an orchestra, each contributing to the music that is speaking and listening. We fail to exploit the many rich opportunities for rigorous assessment in the form of debate and individual presentations. We expect students to undertake university interviews, to give seminar presentations, to perform a ‘viva voce’ in further education – not even getting started on the world of work; yet we only tinker at the margins with preparatory assessments that would further nudge teachers and schools to raise the standards of speaking and listening. The opportunities are legion, but too often forsaken.
An approach to public speaking could be rigorous and systematic – a balancing point to end-of-course exams. We can record assessments with ease and relatively cheaply – it is already a requirement for parts of the iGCSE and the International Baccalaureate. This may create something of a burden, but it adds greater rigour and consistency to the process – a price well worth paying. We can also balance internal and external assessment judgements to add greater consistency. One interesting comparison between AQA GCSE English and the International Baccalaureate, for example, is that with the IB all written coursework is assessed externally, and half of the speaking and listening is assessed externally too. It would cost exam boards some money, but it would be roundly welcomed by teachers and it would take away accusations of ‘cheating’ or grade creep levelled at teachers.
A rather unhidden truth is that our assessment models are largely dictated by the exam boards, to which we pay handsome sums of money for the privilege. I am not shocked when a company driven by a profit motive selects an assessment model which prioritises cost over quality. When I consider controlled assessments – the bastard child of coursework and examinations – the reality is that exam boards have a vested interest in assessment models built on cheap, easily digitalised, easily replicable and mass-produced tasks. Reductive written exams are the epitome of an easily outsourced and replicable model – but such exams alone do not provide a rich, holistic model of accurate assessment. Speaking and listening assessments, rigorously assessed, ideally with a balance of internal and external judgements, but at the very least recorded for standardising purposes, cost time and money. But we must ask: what is the best education worth? According to official accounts released at Companies House, Edexcel made profits of more than £60 million in 2010 – compared with just over £10 million in 2004. AQA and OCR are actually charities, with a mission to “do good in education” – a better, more comprehensive assessment model would go some way to doing that ‘good’. We must lobby fiercely for a system of assessment fit for the future.
If we truly measure what we value, rather than value what we measure, and we want to leverage as much social mobility as is possible in a system distorted by social inequality, then we must broaden our assessment model. We must encompass speaking and listening skills, with as many opportunities for public speaking as possible, into our assessment model if we want to develop students who can thrive and succeed.
I recently wrote a post about how a singular ‘all eggs in one basket’ three-hour examination would have a negative and narrowing effect upon our curriculum and, of course, our students. After thinking about what prospective assessments we can look forward to, or not, I thought about our purpose beyond helping students make the right moves along the conveyor belt of passing exams. Before I came to thinking about what assessment model would be more appropriate, I thought about starting with what type of students we are aiming to develop. We often focus upon the quantifiable outcomes in school: league tables, international measures and evidence-based outcomes of cognitive ability, but we too often neglect those non-cognitive learning dispositions which will see our students flourish in a rapidly changing world. We ignore the less easily quantifiable aspects of an education – such as developing character: dispositions like resilience, perseverance and self-discipline. How do we value those aspects in a system so bent on measurement and examined assessments? How do we go some way to balancing cognitive development with character development?
As we teach the International Baccalaureate at my school, alongside A Levels, it occurred to me that its ‘learner profile’ was a good place to start investigating a fitting school curriculum, with a functional assessment model: one that purports to engender confident, flexible and resilient learners who will thrive in a future abounding with complexity and challenge.
International Baccalaureate Learner profile: http://www.ibo.org/programmes/profile/documents/Learnerprofileguide.pdf
The aim of all IB programmes is to develop internationally minded people who, recognizing their common humanity and shared guardianship of the planet, help to create a better and more peaceful world.
IB learners strive to be:
Inquirers: They develop their natural curiosity. They acquire the skills necessary to conduct inquiry and research and show independence in learning. They actively enjoy learning and this love of learning will be sustained throughout their lives.
Knowledgeable: They explore concepts, ideas and issues that have local and global significance. In so doing, they acquire in-depth knowledge and develop understanding across a broad and balanced range of disciplines.
Thinkers: They exercise initiative in applying thinking skills critically and creatively to recognize and approach complex problems, and make reasoned, ethical decisions.
Communicators: They understand and express ideas and information confidently and creatively in more than one language and in a variety of modes of communication. They work effectively and willingly in collaboration with others.
Principled: They act with integrity and honesty, with a strong sense of fairness, justice and respect for the dignity of the individual, groups and communities. They take responsibility for their own actions and the consequences that accompany them.
Open-minded: They understand and appreciate their own cultures and personal histories, and are open to the perspectives, values and traditions of other individuals and communities. They are accustomed to seeking and evaluating a range of points of view, and are willing to grow from the experience.
Caring: They show empathy, compassion and respect towards the needs and feelings of others. They have a personal commitment to service, and act to make a positive difference to the lives of others and to the environment.
Risk-takers: They approach unfamiliar situations and uncertainty with courage and forethought, and have the independence of spirit to explore new roles, ideas and strategies. They are brave and articulate in defending their beliefs.
Balanced: They understand the importance of intellectual, physical and emotional balance to achieve personal well-being for themselves and others.
Reflective: They give thoughtful consideration to their own learning and experience. They are able to assess and understand their strengths and limitations in order to support their learning and personal development.
The IB ‘Learner Profile’ is emblazoned about my school, and although in reality we have a relatively small IB cohort in the context of the whole school, the learner profile sparks my interest each time I walk past it. It makes me think how the IB constructs its aims and shapes its curriculum around its students. The IB is rightly lauded by Gove, and he is critical of our qualifications for not stacking up against such international models, but I am yet to be convinced that he is leading an authentic shift towards our core purpose being centred around our students and their future. While the IB Diploma foregrounds qualities such as ‘open-mindedness’, they are fostered in real terms by having ‘Theory of Knowledge’ at the core of the diploma – a philosophical exploration of knowing, with a rigorous focus upon the domains of knowledge in each other subject area of the IB Diploma. It is placed alongside the ‘Extended Essay’ – a genuinely independent piece of assessment that requires students to devise their own thinking and undertake real inquiry, supported by expert teachers. Not only that: with the ‘Creativity, Action and Service (CAS)’ assessed element of the qualification, active citizenship is made real. The ‘Learner Profile’ isn’t just window dressing – it underpins the philosophy and aims of the qualification, shaping the assessment model to fit those aims.
Another school system celebrated by Gove is that of Singapore. I am interested in the ‘Desired Outcomes of Education’ in Singapore. Once more, a core focus is centred upon what type of learner their system is looking to develop:
1. The Desired Outcomes of Education (DOE) are attributes that educators aspire for every Singaporean to have by the completion of his formal education. These outcomes establish a common purpose for educators, drive our policies and programmes, and allow us to determine how well our education system is doing.
2. The person who is schooled in the Singapore Education system embodies the Desired Outcomes of Education. He has a good sense of self-awareness, a sound moral compass, and the necessary skills and knowledge to take on challenges of the future. He is responsible to his family, community and nation. He appreciates the beauty of the world around him, possesses a healthy mind and body, and has a zest for life.
In sum, he is:
• a confident person who has a strong sense of right and wrong, is adaptable and resilient, knows himself, is discerning in judgment, thinks independently and critically, and communicates effectively;
• a self-directed learner who takes responsibility for his own learning, who questions, reflects and perseveres in the pursuit of learning;
• an active contributor who is able to work effectively in teams, exercises initiative, takes calculated risks, is innovative and strives for excellence; and
• a concerned citizen who is rooted to Singapore, has a strong civic consciousness, is informed, and takes an active role in bettering the lives of others around him.
Lastly, I was interested in another programme praised by Gove, that once more places character development, and a more holistic view of the student, at the heart of its core purpose – of course, alongside exam success etc. – the KIPP programme in America. The debate about KIPP schools fills column inches in America, so a quick Google search will do the job of beginning further research into their system, but I wanted to focus upon their ‘Character Growth Card’. Students are graded on their ‘character’. This may seem anathema to some, but at least it is a recognition that some things are valued in education beyond examination scores.
KIPP Character Growth Card: http://www.kipp.org/files/dmfile/KIPPCharacterGrowthCardandSupportingMaterials.pdf
These qualities best embody what type of students the KIPP programme aims to develop:
OPTIMISM: expecting the best in the future and working to achieve it;
Gets over frustrations and setbacks quickly;
Believes that effort will improve his or her future
ZEST: approaching life with excitement and energy, feeling alive and activated;
GRIT: finishing what one starts, completing something despite obstacles; a combination of persistence and resilience;
Finishes whatever he or she begins;
Tries very hard even after experiencing failure;
Works independently with focus
CURIOSITY: taking an interest in experience and learning new things for its own sake; finding things fascinating;
Is eager to explore new things;
Asks and answers questions to deepen understanding;
Actively listens to others
SOCIAL INTELLIGENCE: being aware of motives and feelings of other people and oneself; including the ability to reason within large and small groups;
Able to find solutions during conflicts with others;
Demonstrates respect for feelings of others;
Knows when and how to include others
GRATITUDE: being aware of and thankful for opportunities that one has and for good things that happen;
Recognises and shows appreciation for others;
Recognises and shows appreciation for his/her opportunities
SELF-CONTROL: regulating what one feels and does; being self-disciplined
SELF-CONTROL – SCHOOL WORK:
Comes to class prepared;
Pays attention and resists distractions;
Remembers and follows directions;
Gets to work right away rather than procrastinating
SELF-CONTROL – INTERPERSONAL
Remains calm even when criticized or otherwise provoked;
Allows others to speak without interruption;
Is polite to adults and peers;
Keeps temper in check.
The formation of ‘character’ being explicitly linked to an education is nothing new – Plato advocated the telling of stories to help “fashion” the minds of the impressionable young; John Locke had the revolutionary idea that women were equally deserving of an education that developed character. Today, educationalists, such as Guy Claxton, have proffered their own version of such skills; creating a sort of ‘character taxonomy’. I do get slightly suspicious when ‘solutions’ are bandied about easily; particularly if such ‘experts‘ start selling their particular ‘brand‘ of character building. Each school should look at its own context and the needs of its students – not buy in some quick fix. I happen to think the whole programme of PSHCE is a rather elaborate sham that doesn’t help create character, any more than reading ‘If’ by Rudyard Kipling over and over can do so! Covering topics such as ‘open mindedness’ in splendid isolation from domains of subject knowledge is foolhardy, but having a curriculum where we reinforce and foreground learning dispositions and character traits throughout the curriculum, in a coherent way, with assessment models constructed for that aim, is entirely valid. Perhaps we could use the time freed up from PSHCE in a more productive way?
I do not doubt that the development of domains of core knowledge is essential (this article by Daniel Willingham brilliantly sums up the importance of knowledge here), but whilst the choice of what knowledge is important is currently up for debate, it should be balanced with what dispositions of character we are seeking to develop in our students – such as the resilience to tackle challenging new domains of knowledge. Of course, assessment matters. What we assess skews how we teach, whether intentionally or more indirectly. If we create a narrowed curriculum of summative three-hour exams alone we risk losing the opportunity to promote a rich range of skills integral to learning new knowledge. With robust and reliable speaking and listening assessments, for example, such as recorded public debates, presentations or a viva voce based upon their research, we can harness and hone communication skills so crucial in the formation of self-confidence and resilience. If we were to raise the profile of guided research and inquiry skills, bound to specific domains of knowledge, in our assessment, such as the IB style ‘Extended Essay’, or portfolio based projects, we could better foster resilience and perseverance, whilst honing skills appropriate for a future where information will only proliferate still further.
In our obsession with easily measurable outcomes (easily packaged, replicable and cheap to administer and judge, of course!) we are forgetting that assessment can work in our favour, if we work backwards from the point of what we want students to know and how we want students to approach their pursuit of knowledge. Jean Piaget’s view of intelligence is apt: “Intelligence is what you use when you don’t know what to do.” The US Department of Education is looking to redress this balance between cognitive and non-cognitive dispositions, focusing upon dispositions such as resilience (indeed, resilience is included in the ‘Common Core Curriculum’ for mathematics). It is summarised in this very useful report: http://www.ed.gov/edblogs/technology/files/2013/02/OET-Draft-Grit-Report-2-17-13.pdf. I think the report is outstanding and the recommendations it makes should frame our curriculum development. Two such recommendations stood out:
“Educators and administrators interested in promoting grit, tenacity, and perseverance should draw on key research-based best practices, for example, (1) provide students with opportunities to take on higher-order or long-term goals that are “worthy” to the student—goals that are “optimally challenging” and aligned with the students’ own interests, and (2) provide a rigorous and supportive environment for accomplishing their goals.” (Page xii of report)
“Administrators and educators need professional development, curriculum materials, and technological supports. Other potentially high-leverage strategies may be restructuring school days to have longer periods and increasing school staffing so that teachers can give individual students more thoughtful feedback and attention.” (Page xiii of report)
Is there a whiff of jargon about the whole business? Yes – and we should be wary of creating a new pseudo-subject akin to PSHCE. Are schools solely responsible for character building? Absolutely not – parental role models trump teachers every time – as John Hattie states: “The effect of parental engagement over a student’s school career is equivalent to adding an extra two to three years to that student’s education”. Should we do our best to reinforce dispositions that help (both students and parents) with learning and foster the qualities of character that make our students happier, healthier citizens? Yes. Should we place character development at the heart of our model for a future curriculum, including, crucially, how we shape our assessment model? I think we should. That does not mean ramming our sense of morality in the faces of our students in the vain hope they will make significant changes to their character, but it is a positive belief that if we enhance our curriculum (keeping it richly broad) and tweak our assessment models towards a holistic and more authentic range of outcomes we can do a better job of developing rounded young adults ready for the future.
Finally, I would like to end with this quote from Nobel Laureate and Professor of Economics at the University of Chicago, Dr James Heckman, from a Boston Review article – see here:
“First, life success depends on more than cognitive skills. Non-cognitive characteristics—including physical and mental health, as well as perseverance, attentiveness, motivation, self-confidence, and other socio-emotional qualities—are also essential. While public attention tends to focus on cognitive skills—as measured by IQ tests, achievement tests, and tests administered by the Programme for International Student Assessment (PISA)—non-cognitive characteristics also contribute to social success and in fact help to determine scores on the tests that we use to evaluate cognitive achievement.”
See here for an excellent research piece by Heckman on ‘soft skills’.
Note: I am aware there are debates about the selectivity of KIPP schools and the ultimate success of their graduates. Singaporean education has also been criticised for being highly conformist and hot-housing students to succeed. I do not believe simple education tourism works, but that we should consider carefully our new curriculum aims and our assessment model – reviewing international models as a point of reference, not as a quick fix.
Marking workload getting on top of you?
Many schools, and departments, have been reflecting on their marking policies ever since OFSTED declared more than a healthy interest in scrutinising books. Progress over time has rightly been identified as more important than single lesson snapshots – and, of course, that evidence is best found in ongoing student work and the attendant formative assessments. This has combined with greater scrutiny of standards of literacy, particularly writing. I have no problem with this; as you would expect from an English teacher, I think it is of paramount importance to have the highest standards for writing across the curriculum. Unfortunately, it appears that in many schools OFSTED fear has fuelled a misguided obsession with marking, resulting in draconian whole-school marking policies that are less about learning and more about monitoring teachers. Marking and assessment must be the servant, and not the master, of our pedagogy and our profession.
Firstly, I think it is important to understand the OFSTED context, so I can then move beyond it to the more important context: the pedagogy and the learning. In the recent guidance to OFSTED inspectors for judging literacy standards in schools – see here – it relates some specific guidance:
“A basic way of reviewing pupils’ work is to select an extended piece of writing from near the beginning of a pupil’s book (or folder of work). This can then be compared with a piece from the middle and one nearer the end. Is there a discernible difference in length, presentation, sophistication (e.g. paragraphing or length of paragraphs), common errors, use of vocabulary and variation in style? Look at the teacher’s marking. Are the same issues highlighted in the later pieces as in the earlier ones? Has the teacher identified any developing strengths or commented on improvement?
When looking at books from other subjects, it is important to form a view of what it is reasonable to expect. If pupils are writing in a form that would be taught in English, it is reasonable to expect that they would draw on what they have learnt already. This is often the case in primary schools. In secondary schools, there is considerably more variety. Do teachers identify important errors (such as some of those contained in questions about literacy in lessons above). Key subject terms should be spelt correctly. Basic sentence punctuation should be accurate. If it is not and is not identified, how will pupils improve?”
This extract outlines that OFSTED inspectors are guided towards a scrutiny that is selective and one that recognises “variety“, whilst maintaining high expectations of formative feedback. Ultimately, the goal is to successfully recognise written feedback that combines high expectations of literacy and guides students towards making progressive improvement in their writing (reflecting their knowledge and understanding). It is therefore key that we do not overreact with a marking policy that has teachers poring over every written word by students, but instead we need one that recognises the importance of formative written and spoken feedback with a “view to what is reasonable to expect“. We can still maintain the highest of standards, whilst marking reasonably and not to excess. We will maintain the highest of standards not by doing more and more writing assessments, but by slowing down the whole process and getting students actively engaging in drafting and proof reading their writing. We must avoid the tyranny of content coverage at the expense of in depth, quality learning.
A wealth of great research and evidence has lauded the impact of feedback and of assessment for learning strategies for decades. Luminaries such as Dylan Wiliam have guided the way. We must use this valid focus on literacy and high standards of formative assessment as positive leverage to improve our pedagogy and refine our use of assessment for learning strategies. Yes, teachers should give written feedback to a high standard, but we must be reasonable regarding what we can expect is realistic and sustainable for teachers. The answer is a balance of quality, selective formative feedback with well trained peer and self-assessment. If we want great lessons planned and executed consistently then marking must be selective; with a process that builds in reflection time for students – not a roller coaster of internal assessment points, arbitrarily set to give the impression of high standards.
This national context has informed, but not misdirected or narrowed, our redesign of the policy for assessment and marking in our English and Media faculty. We have consciously renamed it our ‘feedback policy’. The relabelling of our policy from ‘marking’ to the broader term ‘feedback’ is more than just window dressing. It is a realignment of priorities currently skewed by a fear of OFSTED. Marking quite obviously presupposes a ‘mark’ on the page; whereas much of our daily pedagogy consists of oral formative feedback. Oral feedback has the unassailable strength of being instantaneous in comparison to the delay of written feedback. Regardless of what teaching and learning activity is being undertaken, oral feedback is integral to learning and progression. We have therefore foregrounded its importance in our feedback policy – placing it on a par with written feedback (personally, I think it actually has greater impact on learning). Indeed, our policy is an attempt to unite the two and to enhance our pedagogy, rather than arbitrarily tighten our accountability measures.
Our feedback policy can be found here: 2013 English and Media Faculty Feedback policy
We mark students’ summative work using a separate portfolio approach, with five major end assessments, each supported by a formative mini-task.
Crucially, we have adapted our feedback policy to serve our students and to help them improve, not to tick the OFSTED box; however, by creating a system that records oral feedback more systematically in the students’ books we have managed to meet both requirements. Our approach to feedback is precisely selective and measured. We are also aiming to use assessment and feedback as the servant, not the master, of our pedagogy. We are using ‘Dedicated Improvement and Reflection Time’ (the label borrowed from the outstanding Jackie Beere), as a continuous formative process within lesson time to raise standards of literacy through a targeted and smart use of peer and self-assessment, combined with skilled oral feedback:
Teachers take opportunities during lessons to monitor and formatively guide students’ writing, using our stamp system and getting students to record our comments to identify issues and set targets. We are not carting home bags of books on a weekly basis, on top of our already thorough and rigorous marking regime, for feedback that students give little more than a cursory glance, or struggle to find value in even when given time. The oral feedback becomes the written feedback and students are engaged actively in the process. Students also undertake the standard proof reading exercises, of their own writing and of their peers’, using highlighters, but in a systematic and highly consistent way. We are building good habits for students, whilst maximising lesson time. When students are writing, or undertaking other activities, teachers can constantly have dialogues about their work and how they can best improve.
Here are some examples of using our stamp system simply and effectively during classwork, whilst the students are completing their writing so they can improve instantaneously (well, we hope they improve!):
We view that dialogue as so important that we now have ‘one-to-one weeks’ in each term when we undertake ‘dedicated improvement and reflection time‘ (we must remember that students often struggle with written feedback alone, therefore finding time to discuss their progress is typically more effective – as well as more efficient in terms of teacher workload). Students are once more guided through peer proof reading and self-regulating strategies (with some valuable extended reading time), whilst the teacher has a crucial conversation about their progress. In those often five-minute conversations we can identify issues and/or targets, as well as reviewing their preparatory book work and their portfolio of finished work. The most important part of ‘dedicated improvement and reflection time’ (DIRT) is the time given to students. They need time to reflect on feedback; to analyse and grasp their targets and to ask questions to illuminate how they can progress further. By doing less writing in this manner we will work more slowly, but ultimately standards will likely be higher.
I would reiterate that OFSTED’s focus upon the evidence of written marking has made us reflect upon the efficacy of our practice and attempt to improve it, but we have not forgotten that assessment and marking – rebranded more holistically as feedback – should be the servant of the classroom teacher, not our master. Its very function is to support students – it should not be used as a stick to beat teachers. My key messages about the current ‘marking’ focus for me are as follows:
– We should remember that oral feedback is as valuable as written feedback and we should shape our pedagogy with that in mind – closing the gap between the two. The gap should also be closed between the teacher giving feedback, both orally and in the written form, and students self-assessing their own writing and peers giving effective feedback;
– We should remember that peer and self-assessment done well takes careful training and scaffolding, but we must not ignore decades of research about the impact of AFL, taking the retrograde step of relying solely on written teacher feedback;
– We should undertake written feedback that is selective, targeted and uses precise language;
– We should dedicate more than adequate time for students to act upon feedback;
– We should devote time to engage in dialogue with students to ensure they understand what they need to do to improve.
A great post by Tom Sherrington, with useful strategies to ‘close the marking gap’: http://headguruteacher.com/2012/06/17/264/
Useful OFSTED case study: http://www.ofsted.gov.uk/resources/good-practice-resource-making-marking-matter
The original research about AFL that is still required reading for teachers: ‘Inside the Black Box’, by Black and Wiliam – https://www.measuredprogress.org/documents/10157/15653/InsideBlackBox.pdf