Evaluating and enhancing assessment – why now?
Understanding what good assessment and feedback practices look like, and how best to support those practices sustainably, has concerned educators in the Higher Education sector since long before the pandemic hit.
But while the Covid response was a catalyst for change that provided some much-needed momentum, this rapid digital transformation has created a pressing need to evaluate how feedback and assessment processes have changed, and whether they are meeting the needs of learners first and foremost, but also of assessors and their institutions. We’re hopeful that the lessons learned so far can also hint at future priorities and developments in this space.
As it appears that universities are now settling into variations of ongoing hybrid models of teaching and assessment, the push for digital solutions to support feedback and assessment is gaining strength.
The reasons for this are (or should be) as much about keeping learners and educators connected and ensuring that opportunities for ongoing and incidental feedback aren’t lost, as they are about enabling processes to be streamlined and scalable. But there are challenges involved.
Perhaps one of the biggest challenges remains the temptation to replicate existing methods of assessment via digital means.
It may be possible to transfer somewhat traditional assessment methods, such as exams, online – and indeed, we saw many universities grappling with ways to stage-manage this – but is it desirable? As Jenny Masters wrote in her blog post on rethinking assessment design at the beginning of the pandemic, “it seems logical that we should ask students to engage in learning tasks where the assessment is a natural part of the process … the learner learns through the process of building the assessment artefact and, conversely, the product is tangible evidence of the learning that has taken place.”
Any move to digital should also aim to shift from ‘assessment of’ to ‘assessment for’ learning, and should include ‘going back to basics’ to ask whether our assessments are leading to improvement and developing students as self-regulating learners – preparing them for the way they will continue to be assessed in the real world.
And students are recognising this themselves. Our recent research, published in our Career Readiness Report, found that a whopping 82% of students identified practical and contextualised authentic assessment activity as being quite or very important in preparing them for the future, as opposed to just 63% who said the same about examination-focussed assessment.
Beyond the type of assessment adopted, there is a real need to ensure that students are being supported to develop adequate assessment and feedback literacy.
It’s crucial that students understand why they are being assessed, have a good sense of what the task involves and how their work or performance will be judged, and are enabled to actively engage in the feedback process rather than being passive recipients. One of the best ways to achieve this is to involve students as co-creators of their own assessment – a topic our panellists discussed at some length in our previous webinar on students as co-creators.
A principle-led approach
As a post-pandemic response to the pivot to online assessment, Jisc, the UK-based leader in digital learning advice and research, has released a set of Principles of Good Assessment and Feedback to guide future practice in the online environment.
The 7 principles are:
Principle 1. Help learners understand what good looks like
Principle 2. Support the personalised needs of learners
Principle 3. Foster active learning
Principle 4. Develop autonomous learners
Principle 5. Manage staff and learner workload effectively
Principle 6. Foster a motivated learning community
Principle 7. Promote learner employability
The way forward – aligning principles to practice
The 7 principles provide a sound framework for practitioners to use in the design of learning, teaching and assessment. As part of our webinar we’ll hear examples of practice from Helen Dugmore from Murdoch University and Laurie Murphy from James Cook University in Australia, and Robert Chmielewski from the University of Edinburgh.
It’s always gratifying and instructive for us here at PebblePad to see – and be able to share – ideas for how PebblePad is used to support a wide range of learning, teaching and assessment ambitions.
Reflecting on the development of PebblePad, which started from the basis of enabling pedagogically sound, assessment-for-learning approaches, CEO Shane Sutherland has said, “Our hope was that work would be shared for assessment in its earliest stages of development, and that feedback from teachers or peers would lead to further learning and to ongoing development of the asset. I think it’s reasonable to claim that many of our customers use the platform in just this way – investing their time and effort in providing rich formative feedback for learners as well as summative feedback alongside grading.”
At Bett 2022, Shane participated in a panel discussing the topic “What does good assessment look like?”. As noted above, it’s something we here at PebblePad have long engaged with, and it’s heartening that Shane’s thinking was very much aligned with the Jisc principles. With the goal of developing confident, autonomous learners, Shane’s top priorities included co-created outcomes, students having a clear understanding of the task and the criteria, assessment forming part of the learning, and feedback that is iterative, ipsative and dialogic – and that includes multiple voices and perspectives.
We’re looking forward to furthering the discussion on what good assessment and feedback practice looks like in our digital age, and we very much hope you’ll add your voice.