AI-Driven Engine for Context-Aware Digital Competences Assessment

Revolutionizing training: Our AI project uses LLMs to add context to Situational Judgment Tests, enhancing digital competency assessments for the workforce

Abstract

This research project aims to address the challenges posed by the rapid digitalization of the workforce, which demands extensive re- and up-skilling across the globe. Recognizing the limitations of current competency frameworks, which often overlook the nuanced scenarios professionals encounter, the project proposes the development of an AI-driven engine for digital competencies. This engine, powered by large language models, seeks to enhance situational judgment tests (SJTs) by generating realistic, context-specific work scenarios. The innovative, dialog-based assessment method integrates personal context, offering a tailored evaluation of digital competencies. This approach not only fills a crucial gap in current educational and professional training programs by improving the relevance and accuracy of competency assessments but also ensures inclusivity and personalization in competency evaluation, paving the way for a more adaptable and skilled future workforce.

(Interim) Results and Project Status

1. SJT guide


This guide translates into practice through systematic steps that enable practitioners to develop, implement, and refine Situational Judgment Tests (SJTs) aligned with specific organizational or educational needs. Competency mapping comes first, identifying the skills and behaviors relevant to the job context. Subject matter experts then generate scenarios, often via the critical incident technique, to capture realistic workplace challenges. Finally, response formats such as closed-ended or constructed-response items are selected to suit the intended assessment goals.
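
The mapping from competency to scenario to response format can be pictured as a simple data structure. The sketch below is illustrative only; the field and class names (`SJTItem`, `competency`, `key`, etc.) are hypothetical and not part of the guide itself.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SJTItem:
    """One SJT item, as produced by the development steps above (illustrative)."""
    competency: str                # skill identified during competency mapping
    scenario: str                  # workplace challenge from the critical incident technique
    response_format: str           # e.g. "closed" or "constructed"
    options: List[str] = field(default_factory=list)  # only used for closed-ended items
    key: Optional[int] = None      # index of the most effective response, if scored

# Example item for a digital-competence context
item = SJTItem(
    competency="Information and data literacy",
    scenario="A colleague shares a report whose figures contradict the source data.",
    response_format="closed",
    options=[
        "Forward the report unchanged",
        "Verify the figures against the source and flag the discrepancies",
    ],
    key=1,
)
```

A constructed-response item would simply leave `options` empty and `key` unset, so both formats fit the same structure.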


During pretesting, pilot samples are recruited to complete the draft SJTs, and data are analyzed to confirm reliability, fairness, and the absence of bias. Rasch modeling or other item response theory approaches may be applied to fine-tune item difficulty and address potential differences across demographic groups. If specific readability targets are required, metrics like the Flesch-Kincaid Grade Level or Gunning Fog Index are used to adjust language complexity and ensure accessibility.
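
The Flesch-Kincaid Grade Level mentioned above is a fixed formula over word, sentence, and syllable counts: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic, not the metric's official counting rule.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels; every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level: 0.39*(W/S) + 11.8*(Syl/W) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Short, one-syllable-word sentences score well below school-grade 1:
flesch_kincaid_grade("The cat sat on the mat. The dog ran.")  # ≈ -2.0
```

In practice, an item whose grade level exceeds the target audience's reading level would be reworded and re-scored until it falls within range.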


Once validation is completed, practitioners can integrate the resulting SJTs into selection processes, training evaluations, or professional development programs. Adaptations for digital delivery are also encouraged, such as incorporating chatbots or online platforms for efficient administration. Regular updates, guided by evolving technologies or job requirements, are recommended. Following these stages and consulting the psychometric evidence offered in the guide empowers practitioners to create robust, fair, and valid SJTs that effectively measure and develop essential competencies in diverse settings.


2. Ready-to-Use Online Platform


A ready-to-use online platform has been developed in collaboration with the Informatics Department, enabling the creation of a personalized DigComp questionnaire. First, users provide their work context so that the questionnaire can be adapted to their specific needs. Next, this contextual information is loaded into an LLM-enabled questionnaire engine, where questions are generated in real time. The adapted questionnaire is then completed by users, after which the results are made available for analysis. This platform integrates advanced large language model features to automatically tailor question items based on each user’s job role, industry, or specific skill gaps. As a result, the final digital competence assessment remains contextually relevant and provides more precise insights for both individual users and organizational stakeholders.
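
The flow described above (collect context, feed it to the questionnaire engine, generate adapted items) could be sketched as follows. This is a hypothetical outline, not the platform's actual code: `generate_item` stands in for the LLM call, using a plain template so the sketch runs on its own.

```python
def generate_item(context: dict, competency: str) -> str:
    # Stand-in for the LLM-enabled engine: in the real platform a large language
    # model drafts a context-specific scenario; a template substitutes here.
    return (f"As a {context['role']} in the {context['industry']} industry, "
            f"describe how you would handle a situation requiring {competency}.")

def build_questionnaire(context: dict, competencies: list) -> list:
    # One adapted item per target competency, tailored to the user's work context.
    return [generate_item(context, c) for c in competencies]

# Example: a user's work context, as collected in the platform's first step
questions = build_questionnaire(
    {"role": "project manager", "industry": "construction"},
    ["safe data sharing", "evaluating online information"],
)
```

The real engine replaces the template with an LLM prompt that also incorporates skill gaps and DigComp area definitions, but the overall context-in, adapted-items-out shape is the same.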

This platform is fully operational for practitioners and will also be showcased at the BFH Business Breakfast.

Translation

The project aims to develop a versatile and adaptable measurement instrument for assessing digital competencies through the Situational Judgement Test (SJT) methodology. Designed for a wide range of settings, from educational institutions to corporate environments, the instrument emphasizes modularity and customization to ensure its long-term relevance and utility. To further its applicability and sustainability, strategies such as offering a ‹white-label› version, partnering for specialized module development, and providing tailored tools and workshops are being explored. Supported by the Institute for Digital Technology Management at BFH, this initiative seeks to create a sustainable resource for effectively measuring and enhancing digital literacy and skills.

Overall Project Lead

Project Collaboration

Participating Institutions