Assessment in the age of AI – unis must do more than tell students what not to do

Written by Thomas Corbin, Research Fellow, Centre for Research in Assessment and Digital Learning, Deakin University

In less than three years, artificial intelligence technology has radically changed the assessment landscape. In this time, universities have taken various approaches, from outright banning the use of generative AI, to allowing it in some circumstances, to allowing AI by default.

But some university teachers and students have reported they remain confused and anxious, unsure about what counts as “appropriate use” of AI. This has been accompanied by concerns AI is facilitating a rise in cheating.

There is also a broader question about the value of university degrees today if AI is used in student assessments.

In a new journal article, we examine current approaches to AI and assessment and ask: how should universities assess students in the age of AI?

Why ‘assessment validity’ matters

Universities have responded to the emergence of generative AI with various policies aimed at clarifying what is allowed and what is not.

For example, the United Kingdom’s University of Leeds set up a “traffic light” framework for when AI tools can be used in assessment: red means no AI, amber allows limited use, green encourages it.

Under this scheme, a “red” light on a traditional essay indicates to students it should be written without any AI assistance at all. An essay marked “amber” might allow AI use for idea generation but not for the writing itself. A “green” light permits students to use AI in any way they choose.

To help ensure students comply with these rules, many institutions, such as the University of Melbourne, require students to declare their use of AI in a statement attached to submitted assessments.

The aim in these and similar cases is to preserve “assessment validity”. This refers to whether the assessment is measuring what we think it is measuring. Is it assessing students’ actual capabilities or learning? Or how well they use the AI? Or how much they paid to use it?

But we argue setting clear rules is not enough to maintain assessment validity.

Our paper

In a new peer-reviewed paper, we present a conceptual argument for how universities and schools can better approach AI in assessments.

We begin by making the distinction between two approaches to AI and assessment:

  • discursive changes: modify only the instructions or rules around an assessment. To work, they rely on students understanding and voluntarily following directions.

  • structural changes: modify the task itself. These constrain or enable behaviours by design, not by directives.

For example, telling students “you may only use AI to edit your take-home essay” is a discursive change. Changing an assessment task to include a sequence of in-class writing tasks where development is observed over time is a structural change.

Telling a student not to use AI tools when writing computer code is discursive. Developing a live, assessed conversation about the choices a student has made is structural.

A reliance on changing the rules

In our paper, we argue most university responses to date (including traffic light frameworks and student declarations) have been discursive. They have only changed the rules around what is or isn’t allowed. They haven’t modified the assessments themselves.

We suggest only structural changes can reliably protect validity in a world where AI makes rule-breaking increasingly undetectable.

So we need to change the task

In the age of generative AI, if we want assessments to be valid and fair, we need structural change.

Structural change means designing assessments where validity is embedded in the task itself, not outsourced to rules or student compliance.

This won’t look the same in every discipline and it won’t be easy. In some cases, it may require assessing students in very different ways from the past. But we can’t avoid the challenge by just telling students what to do and hoping for the best.

If assessment is to retain its function as a meaningful claim about student capability, it must be rethought at the level of design.

Read more https://theconversation.com/assessment-in-the-age-of-ai-unis-must-do-more-than-tell-students-what-not-to-do-257469
