BeLEARN, Enhancing Creativity in VET with AI

Leveraging Deep Generative Models to Enhance Creativity in Vocational Training

Can AI enhance creativity in vocational education? Our research explores how next-gen generative tools shape creative learning in Swiss VET.

Duration: January 2024 – December 2025
Status: Completed
Educational Level: Upper Secondary Level – Vocational Education
Topic: Artificial Intelligence (AI), Digital Tools
Keywords: Adaptive Learning System, Digital Skills, Artificial Intelligence, Metacognition

Initial Situation

In Swiss vocational education and training (VET), fashion design apprentices must generate many ideas quickly, iterate, and justify their choices. Digital tools exist, but most either focus on technical skills (e.g., pattern making) or produce “one-click” results that bypass learning. Early trials with text-to-image systems showed promise, yet outputs were often hard to control, aesthetics were inconsistent with curriculum goals, and the tools offered little support for reflection. Teachers asked for an approach that:

  1. keeps creativity and authorship with the learner,
  2. scaffolds exploration rather than replaces it, and
  3. fits into short lesson blocks on iPads.

Our project investigates how modern generative AI—especially diffusion models—can be integrated into VET to strengthen creative confidence, idea diversity, and design quality while remaining transparent and teachable.

Objectives

  • Build an AI-assisted “sketch-first” tool that lets apprentices start from their own sketches and then iteratively control shape, color, and texture.
  • Study how such tools affect creative processes, outcomes, and metacognition in VET.
  • Co-design classroom workflows with teachers; ensure usability on iPad.
  • Translate results into practice via pilots, teacher materials, and an open classroom beta.
  • Share findings through peer-reviewed publications and practitioner resources.

Method

We combined research-through-design with mixed methods:

  • Co-design workshops with teachers and apprentices to define requirements.
  • Prototyping of an iPad app using diffusion models and controllable pipelines (sketch guidance, sliders for shape vs. color/texture, prompt presets, and version history).
  • Classroom pilots (BBZ, IDM) comparing “sketch-first” AI to baseline workflows.
  • Data collection: task timings, iteration counts, rubric-based expert ratings, diversity/novelty metrics, think-alouds, interviews, and post-task surveys (creative confidence, workload, satisfaction).
  • Thematic analysis of qualitative data and statistical tests for quantitative measures.
  • Iterative releases addressing usability, transparency, and responsible use.
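The diversity/novelty metrics above can be operationalized in several ways. As a minimal sketch, assuming each design concept is represented as a feature vector (e.g., an image embedding — the representation and the toy values below are illustrative assumptions, not the project's actual metric), one common choice is the mean pairwise cosine distance over a session's concepts, where higher values indicate a more varied set of ideas:

```python
from math import sqrt

def cosine_distance(a, b):
    """1 minus the cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def diversity(concepts):
    """Mean pairwise cosine distance over a set of concept vectors.

    Higher values suggest more distinct ideas within one session;
    a single concept (no pairs) yields 0.0 by convention.
    """
    pairs = [(i, j)
             for i in range(len(concepts))
             for j in range(i + 1, len(concepts))]
    if not pairs:
        return 0.0
    total = sum(cosine_distance(concepts[i], concepts[j]) for i, j in pairs)
    return total / len(pairs)

# Toy example: three hypothetical concept embeddings in 2-D
session = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(round(diversity(session), 3))  # → 0.529
```

Comparing such a score between the "sketch-first" and baseline conditions is one way the reported difference in idea variety could be quantified.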

Results

  • Creativity & speed: Learners produced more distinct concepts in the same time and reached “first satisfying idea” faster.
  • Control matters: Starting from one’s own sketch plus separate controls for shape vs. color/texture yielded higher ratings for authorship and alignment with briefs than pure text prompts.
  • Learning value: Reflection prompts and version history helped students explain choices and compare alternatives; teachers reported better critique sessions.
  • Usability: The iPad app reduced friction versus prior desktop tools and supported short lesson blocks.
  • Cautions: Need for guidance on dataset bias and authorship; students benefit from structured prompts and ethics checklists.

Overall, the approach enhanced creative confidence without turning AI into a “black box”.

Implemented Translation

We piloted the tool in multiple classes at BBZ and IDM with lesson plans, worksheets, and assessment rubrics. Teachers received short trainings and could customize prompt presets to their curriculum. A classroom beta of the iPad app (“SketchAI”) is available for partner schools; feedback cycles every term inform updates. Beyond VET, we initiated collaboration with HKB to explore use in higher arts education.

  • Planned next steps: widen access to additional schools, add onboarding tutorials and privacy controls, and publish an educator’s toolkit (examples, rubrics, safety guidelines). A short demo video can be shared.
  • Measured in pilots: more ideas per session, shorter time to first viable concept, and higher expert ratings for variety and brief-fit; students reported increased creative confidence and clearer rationales in critiques.
  • Expected broader impact: scalable support for exploration and reflection across design subjects, improved equity of participation (quieter students iterate more), and time savings for teachers during ideation blocks. We will track adoption (active classes) and learning indicators (self-efficacy).

Publications

Davis, R. L., Mwaita, K. F., Müller, L., Tozadore, D. C., Novikova, A., Käser, T., & Wambsganss, T. (2025, April). SketchAI: A “sketch-first” approach to incorporating generative AI into fashion design. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–7). Association for Computing Machinery (ACM). https://doi.org/10.1145/3706599.3719782

Mwaita, K. F., Davis, R. L., Müller, L., De Angeli, A., Haller, M., & Wambsganss, T. (2026). Sketch, prompt, or both? Exploring interaction modalities in generative AI [Manuscript submitted for publication in the Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26)]. Association for Computing Machinery (ACM).

Project Lead

Prof. Dr. Thiemo Wambsganss, Institute for Digital Technology Management, BFH

Project Collaborators

Dr. Richard Davis, Department of Learning in Engineering Sciences, KTH
Livia Müller, Institute for Digital Technology Management, BFH
Prof. Dr. Pierre Dillenbourg, Computer-Human Interaction Lab for Learning & Instruction, EPFL

Participating Institutions