Assessment in the AI Era

Sonia Ben Slimane, Learning and Research Quality Assurance Manager, Associate Researcher and ERIM Executive Director at ESCP Business School

Description

Domain: Assessment & Pedagogy
Challenge Area: Authentic and Skills-Relevant Assessment
Status: Emerging Practice (pilots and experimental practices)
Implementation Complexity: Medium

This institutional initiative addresses the challenge of assessing student learning in an era of widespread generative AI use. Rather than prohibiting or policing AI tools, the approach redesigns assessment to align more closely with learning objectives such as critical thinking, judgment, and applied understanding. It promotes assessment formats that emphasise the learning process, oral reasoning, contextualised tasks, and transparency in AI use. By rethinking traditional written assignments and integrating AI-aware pedagogical choices, the initiative aims to preserve academic integrity while enhancing the relevance, fairness, and educational value of assessment in higher education.

Practical Implementation

At the institutional level, this practice was implemented through a coordinated effort across programmes and campuses to rethink assessment in response to student use of generative AI. Faculty were engaged through structured discussions, workshops, and shared reflection sessions to identify the limitations of traditional assessment formats and explore alternatives.

Concrete changes included increased use of oral examinations, in-class and timed assessments, process-based evaluation (such as drafts, logs, and reflective components), and explicit requirements for students to disclose and justify their use of AI tools. These measures supported academic integrity while encouraging responsible AI use and closer alignment between assessment methods and intended learning outcomes.

Impact Measurement

Impact was assessed using qualitative and process-oriented indicators rather than relying solely on quantitative performance metrics. Given the evolving nature of AI-aware assessment, the institution prioritised evidence of pedagogical effectiveness, faculty adoption, and student engagement.

Faculty feedback was collected through workshops, focus groups, and post-implementation discussions, focusing on changes in student behaviour, credibility of assessed work, and instructors’ confidence in evaluating reasoning and understanding. Student-level indicators included the ability to explain and defend work orally, consistency between written submissions and live questioning, and the quality of reflective statements on AI use.

At the programme level, the institution monitored assessment design practices, including the diversification of assessment formats, the inclusion of AI-use guidelines in syllabi, and the adoption of transparency requirements. Collective review sessions enabled cross-disciplinary comparison and the identification of scalable practices. Overall, impact measurement focused on improved alignment between learning objectives, assessment methods, and demonstrated student competencies, rather than on AI detection or punitive measures.

Enablers

  • Institution-wide faculty engagement and workshops
  • AI-aware assessment design principles
  • Oral, process-based, and contextualised assessment formats
  • Clear guidelines for transparent AI use
  • Cross-campus coordination and shared review practices

Files

  • Assessment in the AI Era
  • Worksheet: Adapting Assessment in the AI Era