Teaching Engineering and Computing Ethics with Deepfakes

How does ethical priming affect how students perceive deepfakes (emotionally, in terms of attention, and in moral judgement), and what implications does this have for professional ethics education?

Abstract

 

Concept

The rise of deepfakes poses particular educational challenges both for potential consumers of deepfakes and for their potential producers. Since the production and use of deepfakes typically depend on inputs from a wide variety of actors, they provide a clear example of the “many hands” problem in engineering ethics (van de Poel, Royakkers, and Zwart, 2015), in which the attribution of individual responsibility becomes extremely difficult in collective settings. Furthermore, the capacity of technology to create distance between those who produce deepfakes and those affected by their production increases the risk that producers feel freed from traditional social obligations to others (Hoffman, 2000; 2008).

 

At present, the education of engineers and computer scientists seems ill-equipped to face these challenges. Indeed, research evidence from engineering programmes (both internationally and in Switzerland) indicates that, far from developing ethical engagement during their studies, engineering students appear to become increasingly disengaged from ethical concerns (Cech, 2014; Tormey et al., 2015; Lönngren, 2020). Although there are efforts to develop ethics materials that could be used for teaching about deepfakes (see, for example, https://mediaethicsinitiative.org/, or the EPFL AMLD), the design of case studies and other educational materials, as in the field of engineering ethics more widely, is typically not based on evidence about how people learn ethics, or indeed about how they learn more generally (see Hess and Fore, 2019).

 

Method

We propose an experimental design to evaluate two potential educational approaches to using deepfakes to develop students’ moral sensitivity and moral reasoning (Bebeau, 2002). In both conditions, students will watch three videos: an authentic video, a high-quality deepfake, and a low-quality deepfake.

 

Three types of data will be collected: (i) attentional data, using eye-tracking; (ii) emotional responses, using facial emotion recognition software; and (iii) moral judgement, using a protocol modelled on Kohlberg’s “Moral Judgement Interview”.
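
As an illustration only, the sketch below shows one possible way of structuring the three data streams per participant and per video so that they can later be aligned for analysis. All names and fields (Fixation, EmotionSample, Trial, valence, arousal, etc.) are hypothetical assumptions for this sketch; the actual formats will be dictated by the eye-tracker, the facial-coding software, and the interview coding scheme used.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record types for the three data streams described above.

@dataclass
class Fixation:
    """One eye-tracking fixation (attentional data)."""
    timestamp_ms: int
    x: float          # gaze position on screen, in pixels
    y: float
    duration_ms: int

@dataclass
class EmotionSample:
    """One frame of output from the facial emotion recognition software."""
    timestamp_ms: int
    valence: float    # e.g. -1.0 (negative) to +1.0 (positive)
    arousal: float    # e.g. 0.0 (calm) to 1.0 (aroused)

@dataclass
class Trial:
    """One participant watching one of the three videos."""
    participant_id: str
    condition: str    # "control" or "experimental"
    video_type: str   # "authentic", "deepfake_high" or "deepfake_low"
    fixations: List[Fixation] = field(default_factory=list)
    emotions: List[EmotionSample] = field(default_factory=list)
    moral_judgement_notes: str = ""   # coded from the Kohlberg-style interview
```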

 

Students will be primed differently in the two conditions: in the control condition, they will be primed to assess video quality in terms of technical proficiency alone, while in the experimental condition they will be primed to assess video quality in terms of both technical proficiency and ethical considerations. The research participants will be engineering students who have taken at least one machine learning course.
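
Purely as a sketch, the snippet below shows one way participants could be assigned to the two priming conditions and to video presentation orders. Counterbalancing the order of the three videos is an illustrative assumption, not something specified in the design above; the function and identifiers are hypothetical.

```python
import itertools
import random

# Sketch of a possible assignment of participants to conditions and
# (counterbalanced) video presentation orders; assumptions only.

VIDEOS = ["authentic", "deepfake_high", "deepfake_low"]
CONDITIONS = ["control", "experimental"]  # technical-only vs. technical + ethical priming

def assign(participant_ids, seed=42):
    rng = random.Random(seed)
    orders = list(itertools.permutations(VIDEOS))  # six possible presentation orders
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    return [
        {
            "participant_id": pid,
            "condition": CONDITIONS[i % len(CONDITIONS)],  # alternate between conditions
            "video_order": orders[i % len(orders)],        # rotate through the six orders
        }
        for i, pid in enumerate(shuffled)
    ]

if __name__ == "__main__":
    for row in assign([f"P{n:02d}" for n in range(1, 13)]):
        print(row)
```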

 

Contact person(s)

People involved

Institutions involved