Evaluating AI-Driven Dialogues in Higher Education
AI-powered chatbot tutors vs. traditional quizzes: examining whether guided learning dialogues enhance knowledge acquisition and critical thinking skills.
Duration: March 2025 – February 2026
Status: Ongoing
Educational Level: Tertiary Level
Topic: Artificial Intelligence (AI)
Keywords: AI Tutors, Formative Assessment, Student Engagement
Initial Situation
Traditional assessment methods in higher education primarily evaluate knowledge recall rather than higher-order thinking skills. While efficient for large cohorts, they offer limited opportunities for open-ended reasoning, critical analysis, and reflective dialogue. Formative assessment through dialogue has proven pedagogically valuable, yet scaling personalized interactions to large cohorts remains challenging. AI-powered educational technologies offer potential solutions, but empirical evidence on their effectiveness in developing critical thinking remains limited. Can AI dialogue systems genuinely foster higher-order cognitive skills, or do they merely simulate surface-level interactions?

Brian’s AI-based dialogue tool allows teachers to create customizable AI tutors with defined completion criteria for open-text exercises. However, systematic research is needed to evaluate whether such tools enhance learning outcomes compared to conventional methods, and how students experience AI-mediated learning. This project addresses this knowledge gap through a rigorous comparative study in higher education settings.
Objectives
This project aims to evaluate whether AI-driven dialogue exercises enhance learning outcomes compared to traditional assessment methods in diverse higher education settings. We will examine student performance, engagement patterns, and experiences with AI-mediated learning. The study will document what works in classroom contexts and identify conditions under which AI dialogue tools show promise for developing critical thinking skills.
Method
We will conduct a study with BFH students across multiple courses during 2025. Where feasible, students will be assigned to either AI dialogue exercises or conventional learning methods. Data collection includes learning outcome measures, learning analytics from the Brian platform (engagement patterns, completion rates), and student surveys examining user experience and perceived effectiveness. We combine quantitative performance data with qualitative feedback to understand how AI dialogue tools function in classroom settings.
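To illustrate the quantitative side of this design, the sketch below compares post-test scores between the two conditions using Cohen's d, a standard effect-size measure for two-group comparisons. The function name, the scores, and the group labels are hypothetical placeholders, not project data; the actual analysis plan may differ.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Effect size for the difference between two groups' mean scores."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across both groups
    pooled_sd = (((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical post-test scores (0-100) for the two conditions
ai_dialogue = [78, 85, 72, 90, 81, 76, 88]
conventional = [74, 80, 69, 83, 77, 71, 79]

print(f"Cohen's d: {cohens_d(ai_dialogue, conventional):.2f}")
```

A positive d would indicate higher mean performance in the AI dialogue condition; conventional benchmarks treat d ≈ 0.2 as small, 0.5 as medium, and 0.8 as large.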
Planned Translation
This project partners with Brian AG to establish a direct research-practice feedback loop: findings will inform platform improvements, and participating BFH lecturers will receive implementation support. Results will be disseminated through conference presentations, and validated features will be integrated into Brian’s platform. The project will contribute empirical evidence on AI dialogue systems in authentic higher education contexts and provide insights for educators considering AI tools. Impact will be assessed through student learning outcomes, platform usage patterns, and faculty feedback on feasibility.