
Reimagining Assessment in the Age of AI

Author

Daniele Di Mitri


AI in Higher Education and the Future of University Assessment

Image credit: Kathryn Conrad / https://betterimagesofai.org/ (CC BY)

The Rise of Generative AI in Higher Education

The explosion of generative AI in education over the past two years has forced universities to confront a fundamental question: what are we, as educators, actually assessing?

This is not a question about how to catch students using ChatGPT or whether we should ban AI tools. Rather, it is a question about the entire paradigm through which we evaluate learning in higher education.

The uncomfortable reality is that students are using generative AI to complete most of their assignments. According to recent data cited in a Nature article on assessment in the AI era [1], 92% of UK undergraduates now use AI in some form. Yet universities continue to evaluate learning through traditional final products: not just essays and exam scripts, but also code, data analyses, and presentation slides.

AI use itself is not the problem. The problem is that traditional higher education assessment is designed to evaluate the product of learning rather than the learning process. Under this product-based model, we cannot know how much of a final deliverable reflects the student's own thinking and critical engagement when AI has participated in its creation.

The Problem with Product-Based Assessment

Focusing solely on the product of learning stems from an outdated, transactional conceptualisation of higher education: students concentrate on producing a "grade-worthy" artefact rather than on the learning that happens along the way.

Generative AI in education has made this weakness more visible and more urgent. It exposes what was already broken: treating assignments as discrete, one-time events rather than iterative learning experiences.

Before redesigning university assessment systems, we need to clarify what we want students to learn.

What Should Universities Be Assessing?

Learning outcomes, according to the European Qualifications Framework (EQF) [2], typically fall into three categories:

  1. Knowledge: concepts, notions, and foundational understanding
  2. Technical skills: programming, data analysis, design
  3. Transversal competencies: collaboration, communication, reflection, and metacognitive awareness

Yet traditional exams and essays capture only fragments of this spectrum.

They cannot properly assess:

  • Collaborative competence
  • Reflective capacity
  • Metacognitive awareness
  • Research mindset development

Meanwhile, generative AI systems can now produce writing that surpasses the quality of most student work. They generate code, create visualisations, and synthesise information. This capability renders traditional written assessment increasingly meaningless as a signal of student learning.

Universities must rethink assessment in the age of AI.

One solution lies in shifting focus to the learning process itself.

Process-based assessment asks students to:

  • Document intermediate steps
  • Articulate decision-making
  • Reflect on challenges and iterations
  • Make AI use transparent

This is what process-folios are designed to capture: transparent records of a student's learning journey, not just the final outcome.

As Berkeley scholar Jason Gulya advocates [3], universities should move away from "one-and-done" assignments toward "process clusters": students complete multiple related assignments that build toward mastery of a specific competency.

Instead of traditional grades, they receive:

  • Complete
  • Try Again

A Complete means the learning objective is met. A Try Again means revision is required.

This reframes higher education assessment as formative, iterative, and growth-oriented rather than purely summative.

Competency-Based Assessment in the AI Era

Documenting process works best when paired with competency-based assessment.

Rather than grading individual assignments, educators assess progress across defined competencies such as:

  • User research
  • Prototyping
  • Collaboration
  • Communication

Assignments are mapped to competencies. Once students demonstrate consistent achievement across tasks, mastery is reached.
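As a rough illustration, the mapping of assignment outcomes to competencies could be tracked programmatically. The sketch below is an assumption for this post, not an existing system: it combines the Complete / Try Again outcomes described above with the 90% mastery threshold mentioned later in the framework. Class and competency names are illustrative.

```python
from collections import defaultdict

# Hypothetical sketch: track Complete / Try Again outcomes per competency
# and declare mastery once a threshold share of outcomes is "complete".
# The 90% threshold follows the framework described in this post.
MASTERY_THRESHOLD = 0.90

class CompetencyTracker:
    def __init__(self):
        # competency name -> list of outcomes ("complete" or "try_again")
        self.outcomes = defaultdict(list)

    def record(self, competency: str, outcome: str) -> None:
        assert outcome in ("complete", "try_again")
        self.outcomes[competency].append(outcome)

    def mastery(self, competency: str) -> bool:
        results = self.outcomes[competency]
        if not results:
            return False
        completed = sum(1 for r in results if r == "complete")
        return completed / len(results) >= MASTERY_THRESHOLD

tracker = CompetencyTracker()
for outcome in ["complete", "complete", "try_again", "complete"]:
    tracker.record("user research", outcome)
print(tracker.mastery("user research"))  # 3/4 = 75% -> False
```

In this sketch, a "try again" does not punish the student; it simply means the competency is not yet demonstrated consistently enough, in line with the formative, growth-oriented framing above.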

This approach aligns strongly with digital university models and online higher education environments.

Continuous Assessment vs High-Stakes Exams

The Nature article proposes alternatives such as:

  1. Conversation-based assessments
  2. Continuous assessment replacing final exams
  3. Transparent frameworks acknowledging AI involvement

For distance-learning institutions like the German University of Digital Science, conversation-based assessment may be complex. However, continuous assessment supported by AI-driven feedback systems offers a powerful alternative to single high-stakes exams.

When educators observe learning across multiple touch points over weeks and months, they develop a richer understanding of conceptual growth and skill acquisition.

This model is already standard in medical education.

There is no reason other disciplines cannot adopt similar continuous evaluation approaches.

AI Transparency and Ethical Integration

A practical step forward is implementing AI transparency statements.

Students describe:

  • Where AI was used
  • How it was integrated
  • What contributions were their own

Structured scales such as the AI Assessment Scale [4,5] allow students to rate AI use across stages:

  • Planning
  • Drafting
  • Revising

This approach does not police AI usage. It integrates AI ethically and transparently into learning processes.
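A transparency statement of this kind could be captured as simple structured data. The following is a hypothetical sketch, not an official schema: the stage names (planning, drafting, revising) come from the discussion above, while the field names and the 0-3 rating range are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of an AI transparency statement as structured data.
# Field names and the 0-3 self-rating scale are illustrative assumptions.
@dataclass
class AITransparencyStatement:
    tools_used: list          # e.g. ["ChatGPT"]
    stage_ratings: dict       # stage -> self-rated extent of AI use (0-3)
    own_contributions: str    # what the student contributed themselves

    def summary(self) -> str:
        stages = ", ".join(f"{s}: {r}" for s, r in self.stage_ratings.items())
        return f"AI tools: {', '.join(self.tools_used)} | {stages}"

statement = AITransparencyStatement(
    tools_used=["ChatGPT"],
    stage_ratings={"planning": 2, "drafting": 1, "revising": 0},
    own_contributions="All analysis and final wording are my own.",
)
print(statement.summary())
# AI tools: ChatGPT | planning: 2, drafting: 1, revising: 0
```

Structuring the statement this way would let an institution aggregate self-reported AI use across a cohort without reading each declaration individually.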

Co-Created Rubrics and Peer Assessment

Another transformative technique is student co-creation of assessment rubrics.

When students define success criteria:

  • They clarify learning objectives
  • They develop metacognitive awareness
  • They internalise quality standards

Peer assessment becomes a natural extension of this process.

This model shifts focus from impressing instructors to genuinely engaging with quality standards — reducing overreliance on AI for superficial perfection.

Scaling Assessment in Digital Universities

A legitimate concern is scalability.

Process-based and competency-driven assessment can be time-intensive.

However, emerging learning analytics and AI-supported systems can:

  • Sustain adaptive dialogue
  • Provide personalised feedback
  • Track competency progress
  • Surface hidden learning patterns

The goal is not to automate judgment but to augment human insight through hybrid AI-supported assessment systems.

A Framework for Scalable AI-Integrated Assessment

At the November 2025 professor retreat at Schwielowsee, a scalable framework was presented:

  • Project-based learning experiences
  • Design thinking integration
  • Student-defined success criteria
  • Prototype implementation
  • Self-assessment and peer assessment
  • Process documentation via collaborative platforms
  • Competency mapping across assignments
  • 90% mastery threshold for competency achievement
  • Complete / Try Again evaluation model

This framework values:

  • Process over product
  • Transparency over detection
  • Iteration over finality
  • Growth over grades

The Future of AI in Higher Education Assessment

The challenge of generative AI in universities is not fundamentally about academic integrity.

It is about recognising that traditional assessment systems were already misaligned with meaningful learning outcomes.

The future requires:

  • Revamped grading policies
  • Process documentation
  • Competency-based frameworks
  • Continuous feedback
  • AI transparency integration

As AI capabilities continue to advance, product-based assessment will become increasingly ineffective.

Universities that adapt will lead the next phase of higher education innovation.

References

[1] Kovanović, V., Barthakur, A., Joksimović, S., & Siemens, G. (2025). Why universities need to radically rethink exams in the age of AI. Nature, 648(8092), 35-37. https://doi.org/10.1038/d41586-025-03915-7

[2] European Qualifications Framework (EQF). https://europass.europa.eu/en/european-qualifications-framework-eqf

[3] Gulya, J. (2025, April 8). Bringing a process mindset to higher ed. Higher Education Digest. https://www.highereducationdigest.com/bringing-a-process-mindset-to-highered/

[4] Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale: A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://doi.org/10.53761/rrm4y757

[5] The AI Assessment Scale. https://aiassessmentscale.com/