In a recent article in the Times Higher Education, Lisa Gray, PebblePad’s Senior Consultant, Learning, Teaching and Assessment, discussed the increasing use of language generation tools in higher education and the potential implications for assessment practices.
While some may see these tools as a threat to traditional teaching methods, Lisa argues that they actually present exciting opportunities for advancing assessment practices. As educators explore the capabilities of these tools, they are finding new ways to evaluate student learning and provide personalised feedback.
To learn more about the potential of language generation tools in education, register now for our upcoming webinar on the topic. And in the meantime, check out Lisa’s full article below and let us know what you think – we’re keen to know how our colleagues in higher education are responding to the opportunities and challenges that AI brings.
While the advent of AI tools like ChatGPT undoubtedly prompts a fundamental shift in the way knowledge can be acquired and disseminated through their ability to generate human-like text, it’s important to remember that these tools do not understand concepts and contexts as we do. They predict which words are likely to follow others, creating a response that we, as readers, assign meaning to.
But regardless of the mechanism by which they create such plausible responses, their potential should not be underestimated, particularly the disruption they could bring to assessment practices.
So, what are the implications of the arrival of such tools for the assessment landscape? Is it the end of essays as we know them and a return to face-to-face exams to avoid the risk of collusion and dishonesty? Or another factor tipping the balance for assessment reform in higher education and an opportunity to rethink how we assess our students to better prepare them for an uncertain and ever-changing future?
We argue here for the latter.
The purpose of assessment in a changing world
Much has been written over the years on the need for universities to rethink assessment practices, with questions over whether traditional exams and written essays truly assess the knowledge, skills and behaviours that are desired as course outcomes, and whether they best prepare students for future success.
It’s certainly true that the skills required for a workplace being transformed by technology are ever-changing. Machines are taking over not just automated tasks, but also those requiring thought and decision making. And there is clear evidence that employers increasingly value skills, attitudes, aptitudes and behaviours over degree results. So what does all this mean for how we prepare and assess our learners for success, particularly in the light of tools that can generate such sophisticated responses to the questions posed of them?
A call for better assessment design
We would argue, more now than ever, for a move towards more authentic practices, assessing not just knowledge and facts, but the application of knowledge in context.
By taking the knowledge students have gained (through their course and other routes) and asking them to do something with it, relating to the context of their studies and potential future career environment, we are asking students to engage deeply, be curious, think critically, and analyse and solve problems. These are higher-order skills that will be essential for success in life beyond their educational experience, assessed using approaches that better reflect the ways they will continue to be evaluated throughout their careers.
By moving to these more authentic approaches, educators can create more meaningful and relevant experiences for students, whilst also gaining a more accurate understanding of their knowledge and skills.
By also asking students to surface their learning as part of the assessment experience – for example, asking them to plan how they will approach the task, to explore and critically analyse their sources and materials, to reflect regularly on their learning and plan their next steps, and importantly to engage in dialogue with tutors along the way – we are maximising the learning opportunity while minimising the possibility of cheating.
And we can go further. By engaging students in discussions around learning outcomes – asking them to contextualise these outcomes and co-design learning activities and approaches to evidencing that learning – we are not only developing self-regulating lifelong learners able to develop original ideas and think critically about the material, but also making it increasingly hard for cheating to take place.
And why would students want to cheat in the first place? It’s important to remember that they rarely set out to; cheating is often the result of a lack of clarity or understanding around the purpose of a task. If we engage students in conversations about the purpose of assessment (i.e., enhancing their own development and preparing them for success), and make expectations around collusion and plagiarism clear, cheating becomes much less likely. Developing better ‘assessment literacy’ among both staff and students is an essential part of this picture.
What needs to be in place?
Changing assessment practices at scale is no easy task, particularly now with many university staff still recovering from the demands of teaching through the pandemic. But with the advent of these new tools presenting a very immediate problem for many current assignments, it is a task that can no longer be avoided.
Support will be needed to help busy staff understand the potential opportunities, risks and challenges these tools present, and to design the right solutions for them and for their students. Learning designers, educational developers, library staff and students themselves (as well as relevant professional associations) should be part of that journey – bringing to bear their expertise and experience to ensure the outcomes are the right ones.
A measured approach
Ultimately, a measured approach to the advent of these new tools is essential. They offer opportunities (for example, generating content to free up time better spent on the development and assessment of key skills), as well as risks and challenges that all need further exploration.
The tools are certainly not going away and will only continue to improve in sophistication and availability. So, whilst the debate continues, and regardless of how pervasive the use of these tools becomes in our assessment landscape, there is no escaping that they present a challenge to traditional assessment methods that needs addressing now. Let’s use this opportunity well.