BeLEARN, Conversational Agents in Reflective Writing

Exploring the Effects of Conversational Agents on Learners’ Reflective Writing

This project explores the impact of conversational reflection with an agent (i.e., a chatbot) on business students’ reflection skills and learning.

Duration: January 2024 – December 2024
Status: Completed
Educational Level: Tertiary Level
Topic: Artificial Intelligence (AI), Digital Tools
Keywords: Adaptive Learning System, Artificial Intelligence, Metacognition

Initial Situation

Reflective writing helps students make sense of their learning and build metacognitive skills. Yet many students—especially in business courses—struggle to reflect deeply. They find it hard to structure their thoughts, move beyond description, and translate experiences into actionable insights. Typical solutions (static prompts or end-of-course essays) rarely provide the scaffolding or timely feedback learners need.

Teachers would like to guide reflection but often lack the time and tools to give individual feedback at scale. Conversational agents (chatbots) could help by asking follow-up questions, giving step-by-step guidance, and modelling high-quality reflection. However, evidence on how such agents influence reflection quality, learning, and students’ experience is still limited.

Institutions also need trustworthy, privacy-aware implementations and simple dashboards that let lecturers monitor progress without adding workload. Our project addresses these gaps by designing, testing, and deploying a conversational agent and a journey dashboard that support personalised reflective writing in real university courses.

Objectives

  • Design and integrate MindMate, a conversational agent that guides reflective writing step by step.
  • Develop a dashboard that visualizes learners’ progress.
  • Evaluate effects on reflection quality, metacognitive skills, and engagement versus static prompts.
  • Compare prompt strategies, ensure responsible AI use, and translate results into classroom practice at BFH and partner institutions (Innosuisse project).

Method

We used an iterative, design-based research approach across BFH business courses. Co-design workshops with students and lecturers informed rapid prototypes of MindMate and the dashboard. Field studies integrated both into the Reflect app, with two course pilots comparing conversational and static prompts. Evaluation combined rubric-based analysis of reflections, learning analytics, surveys, and interviews. Secure NLP tools supported thematic and sentiment analyses. Quantitative and qualitative results were synthesised into design principles and teaching guidelines.
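To illustrate what rubric-based reflection analysis can look like, here is a minimal sketch: each entry is scored on description, analysis, and action-orientation, then mapped to a coarse depth level. The cue phrases, level names, and scoring logic are illustrative assumptions, not the project's actual rubric or NLP pipeline.

```python
from dataclasses import dataclass

# Illustrative cue phrases per rubric dimension (assumed, not the real rubric).
CUES = {
    "description": ("i did", "we did", "happened", "during"),
    "analysis": ("because", "why", "realised", "realized", "this shows"),
    "action": ("next time", "i will", "plan to", "going forward"),
}

@dataclass
class RubricScore:
    description: int
    analysis: int
    action: int

    @property
    def depth(self) -> str:
        """Map dimension hits to a coarse reflection level."""
        if self.action and self.analysis:
            return "action-oriented"
        if self.analysis:
            return "analytical"
        return "descriptive"

def score_reflection(text: str) -> RubricScore:
    """Count cue-phrase hits per rubric dimension in one reflection entry."""
    lowered = text.lower()
    hits = {dim: sum(cue in lowered for cue in cues) for dim, cues in CUES.items()}
    return RubricScore(hits["description"], hits["analysis"], hits["action"])

entry = ("During the negotiation exercise we did a role play. "
         "It failed because I ignored the other side's interests; "
         "next time I will ask clarifying questions first.")
print(score_reflection(entry).depth)  # action-oriented
```

In practice such keyword matching would be replaced by trained classifiers or LLM-based coding, but the rubric structure — dimensions plus a depth mapping — stays the same.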

Results

Our pilots show promising results. Students using MindMate produced reflections that were more structured, specific, and action-oriented than those using static prompts. They reported clearer guidance, a non-judgmental space to think, and better links between theory and practice. Effects were strongest when the agent used step-by-step prompts and brief, personalised feedback. Teachers valued the dashboard for providing a quick overview of progress and for targeting feedback time where it mattered most. We also identified important guardrails: be transparent about AI support, avoid over-scaffolding that replaces thinking, and provide clear expectations for academic integrity and data use. Details of the study design, results, and design implications are documented in our CHI 2025 extended abstract on MindMate and a companion paper exploring XR-supported reflective writing.

Implemented Translation

  • In-class use: MindMate is integrated in the Reflect app and has been used in at least two BFH courses.
  • Dashboard for educators: Lecturers track progress and identify students needing support; feedback is streamlined.
  • Teaching resources: Short guides, sample prompts, and activity templates help instructors adopt reflective writing with AI support.
  • Scale-up (Innosuisse grant, 2024–2026): We are extending functionality, strengthening privacy and governance, and preparing roll-out to additional courses and partner institutions. Training and onboarding materials for educators are part of this translation work.
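The dashboard's "identify students needing support" step can be sketched as a simple aggregation: average each student's rubric scores across entries and flag those below a threshold. Student IDs, scores, and the threshold below are invented for illustration; the actual dashboard logic is not published.

```python
from statistics import mean

# student -> rubric scores (0–4) for successive reflection entries (invented data)
entries = {
    "s01": [3, 4, 3],
    "s02": [1, 2, 1],
    "s03": [2, 3, 4],
}

SUPPORT_THRESHOLD = 2.0  # assumed cut-off for "needs support"

def flag_for_support(scores_by_student, threshold=SUPPORT_THRESHOLD):
    """Return students whose average rubric score falls below the threshold."""
    return sorted(student for student, scores in scores_by_student.items()
                  if mean(scores) < threshold)

print(flag_for_support(entries))  # ['s02']
```

Surfacing a short flagged list rather than raw scores is what lets lecturers spend their limited feedback time where it matters most.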

Observed: higher quality and depth of reflections, increased student engagement, and more efficient, targeted teacher feedback.

Expected: stronger metacognitive skills and self-regulated learning, improved transfer from experience to action, and scalable, equitable access to formative feedback. For institutions, the approach supports competence-oriented assessment and contributes to sustainable skills development aligned with lifelong learning goals. Impact is being monitored through rubric scores, analytics, surveys, and educator uptake.

Publications

Neshaei, S. P., Wambsganss, T., El Bouchrifi, H., & Käser, T. (2025, April). MindMate: Exploring the effect of conversational agents on reflective writing. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–9). Association for Computing Machinery (ACM). https://doi.org/10.1145/3706599.3720029

Li, J., Neshaei, S. P., Müller, L., Rietsche, R., Davis, R. L., & Wambsganss, T. (2025, April). SpatiaLearn: Exploring XR learning environments for reflective writing. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–11). Association for Computing Machinery (ACM). https://doi.org/10.1145/3706599.3719742

Project Lead

Prof. Dr. Thiemo Wambsganss, Institute for Digital Technology Management, BFH

Project Collaborators

Dr. Andrew Ellis, Virtuelle Akademie, BFH
Dr. Patrick Jermann, Center for Digital Education, EPFL

Participating Institutions