
AI ethics as a complex and multifaceted challenge: decoding educators’ AI ethics alignment through the lens of activity theory

Abstract

This study explores university educators’ perspectives on their alignment with artificial intelligence (AI) ethics through the lens of activity theory (AT), which forms the theoretical underpinning of the study. To do so, 37 educators from a higher education institution were asked to write metaphors about AI ethics alignment; 11 of them then attended semi-structured interviews in which they answered questions about their AI ethics alignment and narrated related experiences. The study reveals diverse and often contradictory perspectives on AI ethics, highlighting a general lack of awareness and inconsistent application of ethical principles. Some educators metaphorized AI ethics as fundamental but difficult to understand, while others pointed to the difficulties of regulating ethical violations. The findings highlight the need for targeted professional development on AI ethics, collaborative policymaking, and a multidisciplinary approach to promote the ethical use of AI in higher education. The study also calls for stronger alignment between educators’ personal ethical standards and institutional norms to reduce AI-related risks in educational settings.

Introduction

For the last century, artificial intelligence (AI) has been a subject of fascination and debate centered on its possible ethics and morality (Adams et al., 2022). The ethics of AI in education technology (EdTech) has risen to prominence because generative AI systems can identify intricate connections within human language, which has been called the most sophisticated soft technology invented by humans (Bozkurt, 2023a, b; Bozkurt et al., 2024). These AI technologies are designed to understand and produce human language and thus may have the potential to make EdTech more effective through the collaborative creativity enabled by synthetic content production. Nonetheless, it is essential to consider educators’ ethical standpoints together with those of learners, since educators ought to ensure responsible use of educational AI systems based on best practices in teaching and learning (Akgun & Greenhow, 2022). As classrooms incorporate AI systems ever more deeply into their fabric, educators’ attitudes towards these moral dimensions should be known if we are to understand how each educator perceives and addresses them during instruction (Dieterle et al., 2022).

In the field of higher education, although AI ethics have gained momentum in recent years (Al-Zahrani & Alasmari, 2024; Chan, 2023; Mah & Groß, 2024), there remains a need for further investigation, particularly regarding how educators align with ethical principles in their use of AI (Bond et al., 2024). Bond et al. (2024) called for “an enhanced focus on ethics in AIED [AI in Education] future research” (p. 33) arguing that the rapid integration of AI into education presents unique ethical challenges, which educators may not be fully equipped to address without a more robust framework for ethical alignment. This highlights the pressing need for empirical studies that examine how educators interpret and apply AI ethics in diverse educational contexts, ensuring that ethical considerations keep pace with technological advancements in higher education.

In response to this call, this study explores university educators’ perceptions of their alignment with AI ethics within higher education, offering insights into the complexities of ethical decision-making in academic settings. Furthermore, by utilizing activity theory (AT), this study provides a structured lens through which to examine how educators’ ethical decision-making is influenced by the broader socio-cultural and institutional contexts in which they operate. This approach allows for a deeper understanding of the dynamic interplay between individual agency and systemic factors regarding the ethical considerations surrounding AI integration in higher education. That is, this study seeks to investigate how educators in higher education comprehend and express their ethical positioning vis-à-vis AI in education settings. This broad purpose can be specified in the following research questions:

  1. How do university educators perceive, metaphorize, and conceptualize AI ethics alignment within the context of higher education?

  2. How does AT explain these perspectives?

Literature review

Given the worldwide effects of the digitalization brought about by AI in educational settings, it is necessary to adopt a global perspective that takes into account all relevant ethical principles (Kassymova et al., 2023). Additionally, as Taylor and Deb (2021) state, educators can teach AI ethics principles to cultivate a basic comprehension of AI ideas, thereby promoting awareness of the ethical implications of technology. AI ethics is important, but its focus should not be detached from real life. As Schiff (2022) observes, most national AI policy plans on education prioritize “Education for AI” rather than “AI for Education” (p. 530). Adding to Schiff’s observation, Schultz and Seele (2023) argue that rules in AI ethics should not be synthetic or sound unnatural.

In higher education, there has recently been growing concern regarding the use of AI, such as an increase in plagiarism and cheating (Chan, 2023) or a decrease in students’ critical thinking abilities (Civil, 2023), to name a few. Although recent studies suggest that, with proper training and guidelines, the potential drawbacks of generative AI can be mitigated and that it may even enhance learning by fostering critical thinking skills (Darwin et al., 2024; Ruiz-Rojas et al., 2024), this concern has led some universities to ban AI or hastily revise their plagiarism policies (Chan, 2023). In a recent meta-systematic review, Bond et al. (2024) synthesized systematic reviews of AI in higher education and categorized its top benefits and challenges (see Table 1).

Table 1 Top benefits and challenges of AI in higher education (adapted from Bond et al., 2024)

As can be seen in Table 1, the top challenge of using AI in higher education reported in the reviewed studies is the lack of ethical considerations. Bond et al. (2024) also emphasized this at the end of their study, calling for “increased ethics” (p. 33) in future research. While the role of AI in higher education continues to evolve, another urgent consideration emerges: how to ensure these technologies are used responsibly (Chan, 2023). As AI tools become more embedded in academic environments, the need for educators to align thoughtfully with ethical principles gains prominence (Ray, 2023).

From this perspective, Akgun and Greenhow (2022) discuss several ethical challenges of using AI in education, including the potential perpetuation of systemic biases and discrimination, the responsibility of educators and learners regarding the ethical use of algorithms, and educators’ professional development needs, among others. In contrast, Munn (2022) criticizes the effectiveness of AI ethical principles in shaping ethical outcomes, arguing that such principles are often empty, irrelevant, and lacking enforcement mechanisms. Instead, he proposes approaches that tackle broader oppressions as well as specific concerns such as truthfulness in reporting, auditing accuracy, and control over information flows. Relatedly, Zhou et al. (2024) state that personal judgment and preference, rather than strict adherence to established norms or guidelines, primarily influence individuals’ decisions on whether to comply with rules. Ferretti (2022), however, contends that there are compelling reasons to assert that, in many instances, governments enforcing strict regulation, rather than personal judgment or preference, are the most effective means of ensuring the ethical advancement of AI systems.

Borenstein and Howard (2021) explore different methodologies for addressing AI ethics, including stakeholder engagement during policy formulation, identification of specific institutional ethical concerns, and establishment of technological standards, among others. They suggest that future educators who will employ this technology must understand its effects on people’s lives and should therefore be educated about them so as not to violate AI ethics. Bleher and Braun (2023) support this view, stating that despite the lack of implementation, AI ethics are crucial for safeguarding education and society. Because university educators’ decisions influence students’ learning experiences and privacy, their alignment with AI ethics fosters an environment of trust, equity, and transparency.

Against this backdrop, this study aims to explore university educators’ alignment with AI ethics. By employing AT as the theoretical framework, the study offers a structured lens to examine the complex interactions between educators and AI ethics. AT helps to unpack the dynamic relationships between technology, pedagogy, and ethics, providing insights into how educators navigate ethical challenges as they integrate AI into their teaching practices. This approach allows for a deeper understanding of how ethical considerations are negotiated and implemented in real-world educational settings.

Theoretical framework

Activity theory (AT) originates from cultural-historical theory and was proposed by Vygotsky (1978). Drawing on Marx’s philosophical foundations, it goes beyond the dichotomy of idealism and materialism, as well as the division between the subjective inner world of the knower and the objective outward material world of the known, by employing ‘activity’ as its fundamental unit of analysis (Cole & Engeström, 1993).

It emphasizes that humans acquire knowledge through meaningful actions such as collaborative dialogue and social interaction. Leont’ev (1981) further developed the theory into a conceptual framework, and Engeström (1993) expanded on its ideas. The theory consists of seven elements: subject, object, tools, community, rules, division of labor, and outcome. In its educational application here, the subject refers to the participants, the object is the reason for the activities, the outcome is the intended result of actions, the tools represent content, the community is the environment, the rules are teaching strategies, and the division of labor refers to learning modes.

Since its advent, various studies have employed this framework in mainstream education in general (Kamali et al., 2024; Nazari & Karimpour, 2022) and higher education in particular (Ramírez et al., 2011). Additionally, AT has been employed as a framework for studying human–computer interaction (Nardi, 1996) and as a theoretical underpinning in studies on AI ethics (e.g., Keegan et al., 2024). In a similar vein, we regard AT as an appropriate theoretical underpinning for this study because it gives a comprehensive view of teachers’ thinking about AI ethics alignment. It allows an extensive examination of an individual’s activities that takes into consideration all the constituents of an activity system, namely educators, objects, tools, community, rules, and division of labor. It also illuminates how the different parts of an activity system interact with one another, giving rise to particular behaviors and thinking processes.

To apply AT in this study, we began by identifying the activity system focused on educators’ alignment with AI ethics. This involved defining the key components (Engeström, 1993): subjects (language educators in higher education), object (AI ethics alignment), tools (AI technologies), community (higher education institutions), rules (ethical guidelines), division of labor (distribution of roles), and outcome (discovering educators’ beliefs). Upon completion of data collection (i.e., metaphors and interviews), we analyzed the interactions within the activity system, paying particular attention to the conflicts and motivations that shape educators’ ethical considerations across the different themes (Engeström, 2015).

Method

Research design

The current research is a phenomenological qualitative study (Englander, 2016) that seeks to investigate the underlying levels of teachers’ beliefs about AI ethics and to probe their alignment with them. To achieve this, metaphor analysis was used to investigate the latent facets of teachers’ thinking about AI ethics alignment, while semi-structured interviews were conducted to bring out these individuals’ lived experiences, thereby helping us understand how educators make sense of their relationship with AI-based technological systems.

Furthermore, adopting an AT lens aids our understanding by helping us make sense of the reasons behind teachers’ actions and judgments within the particular sociocultural contexts of their professional work. This approach allows for the consideration of the various factors, including individual experiences, organizational frameworks, and cultural forces, that come into play when AI is involved in teaching in the digital era.

Participants and context

The study was conducted in the school of languages of a university in Türkiye, where students spend a year learning English to prepare for their faculty courses. The participants in phase 1, metaphor elicitation, were 37 educators who worked in the language school. They were aged between 21 and 52 and taught an L2 (English, Arabic, or Turkish) to university students. In the second phase of the study, semi-structured interviews, 11 educators took part and answered follow-up questions about AI ethics alignment. They were aged between 25 and 51 and had 1 to 22 years of teaching experience (Table 2).

Table 2 Demographic information of the participants in the interview

Data collection

Data collection was conducted in two phases: metaphor elicitation and semi-structured interviews. In the first phase, metaphor elicitation, convenience purposeful sampling was employed: 37 educators who worked at the same institution as the researchers were selected based on their familiarity with AI. They received a Google Form in which they completed three metaphors about AI ethics alignment: “AI ethics are like … because …”, “violating AI ethics is like … because …”, and “following AI ethics is like … because …”. In the second phase, 11 educators attended semi-structured interviews in which they answered questions about their AI ethics alignment and narrated related experiences. All interviews were conducted by the first researcher in English and lasted between 25 and 45 minutes, during which the participating educators answered questions such as “How do your colleagues see the AI ethics?” or “How do you ensure that your teaching practices with AI follow ethical guidelines and policies?” (see Appendix A). The questions were all based on the AT themes, including, inter alia, rules, tools, and community, and on the metaphors elicited.

Ethics board approval was obtained before the data collection process. All participating educators were informed about the purpose of the research and their right to withdraw from the study at any point. They were assured that their identity would remain anonymous and that their data would be used only for research purposes.

Data analysis

Like data collection, data analysis was conducted in two phases. In the first phase, systematic metaphor analysis was employed (Schmitt, 2005). Initially, as described under data collection, the researchers located the focal point for metaphor analysis by selecting the statements to be completed. Following this, they sought out background metaphors to inform the design of the statements. Subsequently, the researchers analyzed metaphorical subgroups to delve into potential “metaphoric clusters, models, or concepts” (Schmitt, 2005, p. 372). Lastly, they reconstructed individual instances of metaphorical concepts to distill overarching themes. It is important to note that, as this study followed a theoretical framework (AT) with predetermined categories, any metaphors that did not fit within these categories were excluded.

During the second phase, the interview data underwent transcription, coding, and analysis following the principles of thematic analysis (Braun & Clarke, 2006). The study primarily employed deductive thematic analysis, wherein data were categorized according to predetermined criteria (AT). Initially, the second researcher transcribed the data, after which all researchers familiarized themselves with the transcriptions. The first researcher then coded the data and created concepts relevant to AI ethics alignment. Next, in a joint session, all researchers discussed the emerging codes and subthemes with the aim of ensuring the trustworthiness of the results (Lincoln & Guba, 1985). By the 11th interview, the researchers were consistently encountering repeated information and recurring codes and patterns; they therefore concluded that the data were saturated (Guest et al., 2006), as further data collection did not reveal any new insights pertinent to the research questions. Subsequently, the thematic map was created by the first researcher. Additionally, the Interviewee Transcript Review (ITR) method (Rowlands, 2021) was employed, allowing participants to review the findings section and validate the interpretations.

“Data are not coded in an epistemological vacuum” (Braun & Clarke, 2006, p. 84), which means researcher positionality should be taken into consideration. All researchers were working in the Turkish higher education context from which the participants were drawn. This familiarity with the context and participants supports a deeper understanding of participants’ motivations, reasons, and concerns. The interviewer knew all the interviewed educators, which has both advantages, such as enhanced rapport, and disadvantages, such as potential bias.

Findings

Metaphor analysis

This section discusses the metaphor findings, which are analyzed thematically based on the study’s underlying theoretical framework, AT. Within the constructs of the AT model, the subject of the study is language educators in higher education, AI ethics alignment is the object, and discovering educators’ beliefs about it is the outcome. Therefore, the metaphors are categorized into four sections: rules, tools, community, and division of labor, which are the remaining parts of the AT model.

Rules

The first AT theme to be discussed is the “rules” category, which refers to the rules, norms, and conventions that govern or guide behavior within a particular activity system. The metaphors extracted and thematized under this theme are abundant: AI ethics were likened to a cookbook, a blind eye, a superego, a constitution, a manual, and a conscience. For example, E17 asserted that AI rules are like “a cookbook because while they can suggest delicious recipes, it is important to be open to making adaptations as not everything will work for your liking or the ingredients you have” (E17, Metaphor). This suggests the specificity of rules and the importance of having contextualized, localized AI ethics. According to E17, general AI ethics cannot be applied to all contexts, and there is a need for context-specific AI ethics rules for different nationalities, cultures, and even individual institutions. Another educator metaphorized AI ethics as a conscience “because they are out of sight but effective in making decisions” (E25, Metaphor). The controlling power of AI ethics is elicited here: the educator asserted that AI ethics can prevent misbehavior, misinformation, and misconduct while operating out of sight and, therefore, seem necessary for AI development.

Educators’ metaphors also targeted the alignment and violation of AI ethics rules. Breaking traffic rules, abusing or manipulating, crossing the red light, and telling lies were among the metaphors for violating AI ethics, while metaphors for AI ethics alignment included, inter alia, following traffic rules and a road map, using a candle in the dark, adhering to the Hippocratic oath, and following one’s conscience. E35 gave a surprising justification for his metaphor, “crossing the red light”, expressing that “Violating AI ethics is like crossing the red light because everyone does it but no one admits it” (E35, Metaphor). The metaphor points to the common practice of violating AI ethics, which the offenders do not admit; it is a thought-provoking metaphor that brings the ubiquity of AI ethics violations into the spotlight. E27 amplified the importance of AI ethics alignment by metaphorizing it as the Hippocratic oath, stating that “Following AI ethics rules is like adhering to the Hippocratic oath because you choose to use a technology for a good purpose rather than a damaging one” (E27, Metaphor). This metaphor reiterates the importance of these ethics and why educators should follow them.

Tools

The next theme in the AT model is tools. In AT, tools represent the various resources, both physical and conceptual, that individuals use to accomplish tasks within their activity systems. A large number of metaphors for AI ethics emerged in this category, including, inter alia, torches, a map in a maze, a compass, shade, Google Maps, a guidebook, the foundation of a building, and water in a colander. E32 asserted that AI ethics are like “the foundation of a building because without it everything will collapse” (E32, Metaphor). The foundation metaphor highlights the fundamental vitality of AI ethics, which underpins all other aspects of AI development and innovation. In another metaphor, AI ethics were likened to “water in a colander because they are challenging to control” (E34, Metaphor). The educator pointed to the temptation to violate AI ethics and admitted the difficulty of controlling alignment with them.

Metaphors for AI ethics violations in the “tools” category were also interesting, including opening Pandora’s box, navigating a boat through stormy seas, walking in a minefield, and slapping oneself. Unseen consequences were among the reasons that emerged extensively in these metaphors. For example, E4 stated that AI ethics violation is like “Opening Pandora’s box because it can unleash unforeseen consequences” (E4, Metaphor), and E13 posited that it is like “Slapping oneself because one will feel the consequences later” (E13, Metaphor). As both metaphors demonstrate, the consequences of AI ethics violations are not immediately tangible; time reveals them. AI ethics alignment, on the other hand, was metaphorized by educators as nurturing a garden, a police officer’s gun, a shopping cart, and turning a steering wheel. E8 argued that “Following AI ethics rules is like a police officer’s gun because this makes it a harmless tool in the hands of the knowledgeable” (E8, Metaphor). She suggested that AI ethics can control the inherent danger of AI tools, which could otherwise be used for non-humanitarian purposes.

Community

Community is the next category in AT. It refers to a group of individuals who share a common goal, interest, or activity within a particular context. The metaphors that emerged in this category were politicians, a captain, and bodyguards. In justifying the “captain” metaphor for AI ethics, E31 pointed out that “it helps us to find the true path in a stormy ocean” (E31, Metaphor). His explanation revealed the guiding role of AI ethics and how they can lead us toward a safe shore of security and academic integrity. Violating these rules was metaphorized as criminals and as poisoning a well. Again, E31 believed that violating AI ethics is like “poisoning a well”, his justification being that by this violation we “are polluting fresh and free flow of information and science” (E31, Metaphor). Safety guidelines, insurance, and moving toward utopia were the metaphors educators used for AI ethics alignment in the community category. E16 metaphorized AI ethics alignment as safety guidelines, explaining that “it could help create a safe community for all its users” (E16, Metaphor). This metaphor reiterates the importance of safety in the new era of AI.

Division of labor

The last AT category under which themes emerged is the division of labor. It refers to the distribution of tasks, roles, and responsibilities among participants in the activity. The metaphors that emerged in this category were a jungle, an assistant, an impotent king, and an estate agent. In describing AI ethics as a jungle, E9 reasoned that “it provides an abundance of oxygen to those who want it” (E9, Metaphor). The element of choice implied in this quote suggests that AI is a positive advancement and that people should seize the opportunity to employ it in their work. Another educator argued that AI ethics are like “an impotent king’s orders because they are stated explicitly but no one knows how and whether they are being considered or implemented” (E35, Metaphor). The confusion illustrated in this quote signifies the obscurity and vagueness of AI ethics rules and emphasizes the importance of transparency and clarity vis-à-vis these rules. Returning the shopping cart after you are done with shopping, cheating in an exam, and overspeeding or using the emergency lane were the AI ethics violation metaphors that emerged under the division of labor category. E9, in comparing AI ethics violation to returning the shopping cart after shopping, persuasively attributed it to the lack of “consequence to returning a shopping cart” (E9, Metaphor), arguing that “you can easily get away with it [shopping cart], but you do it just because it is the right thing to do” (E9, Metaphor). The social responsibility implicit in this metaphor convinced us to place it in the division of labor category. Being a responsible citizen and serving oneself were two themes that emerged under AI ethics alignment in the division of labor category. The role of serving oneself was mentioned by E12, who justified her metaphor by stating that “this will enrich and advocate the healthy use of it” (E12, Metaphor). Without such alignment, it is hard to guarantee the safety, security, and positive use of AI.

Semi-structured interviews

In the second phase of the study, the interview data are thematically analyzed with an eye on the study’s theoretical underpinning, AT. As in the metaphor analysis, our focus lies on language educators as the subject, and we investigate their perspectives on AI ethics alignment as the object in order to arrive at the outcome. Consequently, the data are analyzed in four sections: rules, tools, community, and division of labor (for a summary, see Appendix B).

Rules

Four subthemes emerged in this AT category: unforeseen ethical breaches, partiality in information, regulatory void, and prioritizing individual morality over institutional norms.

The first subtheme under the rules theme was discussed by E2, who argued that unethical use of AI is often unintentional. She stated, “… but I can say that even most of them [users] actually, although they want to follow the ethics, they don’t know how to use it. They don’t know the ethics; that’s the issue. They just think that they follow” (E2, Interview). As the quote suggests, a lack of sufficient knowledge about AI ethics can cause violations of it.

The topic of partiality in information was raised in E4’s interview. Drawing on her experience in a language classroom, the educator described an AI-generated response that was wrong or biased:

… I had experiences where AI just was giving me wrong information. I had one thing where I was teaching facts and opinions. So the AI said it was the Sahara desert is the hottest desert in the world. I thought it was true. I took it, I went to the classroom and I read it and the students were like, well it’s not true. So that also created some confusion in the classroom (E4, Interview).

The quote makes evident that ethical rules about AI should account for biased responses drawn from a pool of information and should call for multiple checks or triangulation of results.

E3 argued that rules and regulations regarding AI are not available and believed that there is a lack of them (regulatory void), as the following exchange shows:

E3: I use AI to give reviews, of course, I go through it. Sometimes there is a problem with irregularity. I try to do away with it, but I do use it and I feel I’m more sort of helpful to students this way.

Int: So do you think it is ethical or not?

E3: I don’t think it’s unethical. OK, it may be considered unethical if the organization has straightforward rules about it, but I don’t think we have (E3, Interview).

As the quote shows, E3 uses AI to provide feedback to students without mentioning the source (although he edits the comments), and he believes there is no rule against this. This lack of rules was also mentioned by E6, E7, and E11.

The last subtheme under the rules theme is prioritizing individual morality over institutional norms. E10 put it this way: “… personal ethics should supersede organizational ethics. But if you don’t have any personal ethics, then organizational ethics have to be followed. Otherwise, I would risk my position, right? So I would hope personal ethics would lead the way” (E10, Interview). As the quote suggests, personal ethics, or conscience, can be a strong force stopping people from violating the ethical rules of AI, as in all other aspects of human life.

Tools

Three subthemes emerged in the tools theme of AT: the negative impact of AI on cognitive engagement, unethical aids for student success, and advocacy for ethical tool development.

The first subtheme concerned the negative impact of AI on students’ cognitive engagement, covering effects such as laziness and the suppression of critical thinking. E3 claimed that AI tools had made his students reliant on them, so they had stopped thinking clearly.

E3: It’s making us more lethargic and lazy. We’ve stopped thinking, We’ve stopped writing … We are writing less. We are thinking less … Apart from speaking, everything is AI generated, so it is making us lazy, which in real life drastically reduces our performance because in real life our text needs to be our text, and our speech needs to be our speech, but without any AI help. So, I think we need to have rules or even some ethical considerations for the percentage of using AI in any assignment or even in a day (E3, Interview).

As the quote demonstrates, the educator posits that students’ use of AI should be controlled through ethical restrictions or regulations; otherwise, there will be a crisis of critical thinking and over-reliance on AI tools.

The second subtheme contradicts the first. Unlike E3, E9 argued that even unethical use of AI can have a positive impact on students’ learning.

E9: If the information is, let’s say trustworthy, then it will impact students positively, I think. Although we made a mistake here by not giving the right to authors or resources themselves… So it depends on the quality of the item you’re using or you’re stealing if we say …

Int: So what I understood right now is that although it is not unethical, it can help a student (E9, Interview).

E9: Exactly (E9, Interview).

In the educator’s view, it is acceptable to violate ethical rules as long as doing so helps students learn and supports their interlanguage development.

Advocacy for ethical tool development, the last subtheme in the tools theme of AT, deals with ethical tools that could add trustworthiness to AI-generated language:

E10: I mean, obviously it would not be in the AI creator’s interest to watermark everything, but if we had, if there was a way to watermark and make sure that it was known that these are AI-generated, that would solve a problem. But the people who write these programs certainly don’t want Watermark (E10, Interview).

Beyond suggesting that AI-generated language be watermarked, the educator discussed developers’ unwillingness to do so, which brings business and financial issues into the spotlight.

Community

The community theme of AT in the present study is composed of three subthemes, namely community engagement and inclusivity, ethical role modeling, and scholarly integrity and risk awareness.

The first subtheme, community engagement, was evident in two educators’ interviews (E5 and E8). In response to the question about following AI ethics, E8 provided the following answer:

I think it is talked about and they believe it must be encouraged, especially since there are some new rules from the Ministry of Education that schools and you know, educators in general should be following such as rules of ethics. So in order to maintain a very transparent and fair outcome, let’s say yes (E8, Interview).

As the educator explicitly argued, gatekeepers and policymakers such as ministries should set rules that encourage all AI users to meet ethical considerations. This can guarantee the ethical alignment that leads to the ethical use of these tools.

The second subtheme, expressed by three of the educators (E3, E6, and E9), brought the concept of role modeling into the spotlight. E9 explained, “So if I am the educator, I might be the model for them. Follow these rules. They will unconsciously feel that they need to do the same, OK, by attributing everyone’s or every author the right” (E9, Interview). This shows that being a role model of AI ethics alignment can implicitly teach students to follow ethical rules when using AI.

The last subtheme is scholarly integrity and risk awareness. Here, E10 argued that by not teaching AI ethics alignment, we as educators encourage plagiarism.

I mean if we simply encourage the use without telling them that they need to quote or cite, then we’re condoning this copy-paste attitude and that will lead them down the wrong path obviously. But other than that I can’t. The impact would be definitely negative because they put their scholarships in jeopardy and even attendance at the university (E10, Interview).

As the quote shows, the copy-paste attitude, or, in academic terms, plagiarism, puts students’ academic lives at risk of lost scholarships or suspension and brings harsh consequences for them.

Division of labor

The last AT theme in this study, in which three subthemes emerged, is the division of labor; the subthemes are ethical implementation and oversight, ethical guidance and support, and stakeholder collaboration for ethical AI integration.

The first subtheme was delineated by E7, who explained that using AI to gain a deeper understanding of an issue without citation does not violate ethical regulations. In the interview, E7 stated, “If I use it to get a more in-depth look at something, then that is ethical” (E7, Interview), and continued, “Mostly like the ideas of citing what you have there, not plagiarizing, saying that, for example, this idea is from AI, or do not say this is my idea when you get it from AI, these types of things” (E7, Interview). As E7 implied, it is not necessary to cite explicitly where an idea is taken from; simply not presenting an AI-generated idea as one’s own is enough to follow AI ethics.

The second subtheme was discussed by E6, who asserted that “what I got is that even the rules are not complete now to see if you’re following them or not, or violating them or not” (E6, Interview). E6 rightly argued that AI ethics rules are still in their infancy and incomplete; therefore, to check educators’ AI ethics alignment, there must first be complete rules against which alignment or violation can be assessed.

The last subtheme in the division of labor theme highlights the role of stakeholders in establishing, maintaining, and executing AI ethics rules, as E8 explained:

I would say globally, global rules, but we may also have school rules and university rules maybe. AI is a reality and it’s just gonna increase and it’s just gonna be, it’s just gonna grow as a part of our life daily…. So I think … we should try to change and modify our rules, keeping in mind that it is a helpful tool. There should be more sort of guidelines on how to use AI (E8, Interview).

As the quote shows, rules related to AI ethics are needed, and establishing them requires the involvement of all parties engaged in different aspects of AI use. It also requires an interdisciplinary effort to ensure the usefulness and effectiveness of the rules.

Discussion

The present study explores educators’ beliefs about their alignment with AI ethics by examining its hidden aspects through metaphor analysis and by probing educators’ lived experiences through semi-structured interviews (Fig. 1). The findings, in line with previous studies (e.g., Bergman et al., 2024; Gabriel, 2020; Jobin et al., 2019; Morley et al., 2020; Ray, 2023), indicate that educators’ AI ethics alignment is in its infancy. This study, however, adds to the body of research on several grounds. The findings call for increasing awareness about AI ethics and encourage further investigation into both devising and applying context-sensitive rules across the different layers of AT. The other findings will be discussed in turn.

Fig. 1 Activity theoretic exploration of educators’ AI ethics alignment

Metaphors and interviews in the first AT theme (rules) revealed that AI ethics are followed and violated like any other rules in human life (e.g., traffic rules) and that such regulations are lacking. This is in line with Zhou et al. (2024), who posited that individuals violate rules at their own discretion. We therefore conclude that establishing and enforcing new rules is important; however, rules may not help much if individuals decide not to follow them. Given this, awareness of these rules and of their impact on society’s health and safety should be raised. The interviews also brought the lack of regulations, biased information, and unintended violations into the spotlight. The findings of this study add to those of Munn (2022), who posited that the existing “flood of AI guidelines and codes of ethics” (p. 869) amounts to “meaningless principles” (p. 869), by showing that the majority of educators are not even aware of these rules. The last subtheme, prioritizing individual morality over institutional norms, directly challenges the findings of Ferretti (2022), who argued that “there are good reasons to conclude that, in many cases, governments implementing hard regulation are in principle (if not yet in practice) the best instruments to secure an ethical development of AI systems” (p. 239). This emphasizes the importance of context-specific rules for AI ethics.

The second theme of the AT model, tools, revealed that educators associated serious consequences with violating AI ethics, metaphorizing it as “slapping oneself” and “walking in a minefield”, which shows how dangerous such violations might be. Our metaphor findings corroborate the existing literature on AI ethics (Bleher & Braun, 2023; Zhou & Chen, 2023), which asserts that AI ethics, albeit not widely practiced, play an important role in safeguarding education and society. This finding contrasts with the interview results for this theme, in which educators showed low awareness of AI ethics rules and of alignment with them, suggesting a nuanced picture. Despite the apparent lack of awareness of these rules and educators’ admissions of potential non-compliance, a deeper sentiment emerges: while surface-level awareness may be lacking and some educators express reluctance to adhere to these rules, there exists an underlying recognition of the potential severity of the issues that can arise from disregarding AI ethics.

In the third AT theme, community, educators metaphorized AI ethics alignment as a ship’s captain who shows the path. In line with Schultz and Seele (2023), this shows the vital role of AI ethics, without which we get lost, as a ship does without its captain. The interview data also revealed that the community should be engaged in AI ethics by encouraging alignment with them, and that educators must be ethical role models for students by following AI ethics rules. These findings agree with Hasas et al. (2024), who urged responsible AI development aligned with societal values. The last subtheme in the community theme, scholarly integrity and risk awareness, shows how educators’ failure to teach ethical rules can affect students’ academic and personal lives. As Taylor and Deb (2021) asserted that educators can incorporate comparable AI ethics modules into any course where learners have a basic grasp of AI concepts, this study holds that AI ethics should be part of any curriculum.

The last AT theme in which subthemes emerged from the metaphors and interviews is the division of labor. The metaphors elicited here show how AI ethics can inform order in society. For instance, the metaphor of “an impotent king” brought out the point that AI ethics rules may nominally govern a territory and play a significant role in that area while remaining unenforced (Taebi et al., 2019). The interview findings also highlighted the different players involved in devising rules and regulations on AI ethics. Relatedly, Hasas et al. (2024) suggest that communities or societies should be held accountable for such context-specific measures (Munn, 2022). Therefore, this theme, like the first (rules), underscores the need for local and contextually relevant ethical AI rule-making.

Conclusion, implications and suggestions

The use and integration of AI in educational activities is a complex and multifaceted ethical issue. In particular, the study revealed that the use of open-source AI products in education, as well as their widespread use in every field, should be ethically safeguarded. By surfacing educators’ concerns about AI ethics, this research also revealed how confused they are and how divergent their views and approaches can be. Educators stated that AI has negative consequences for students at the cognitive level due to interactions outside the natural flow of learning. In addition, they expressed concern about the ethical issues surrounding students’ achievements or failures when AI-powered technologies are part of the educational process.

Based on the findings of this study, we recommend the following action plan for education policymakers and educators. First, during the development of such technologies in educational institutions, policymakers should adopt a multidisciplinary approach to ensure that these tools comply with moral and ethical norms; wide participation from various segments of the institution and society can contribute positively to this. Education policymakers should establish guidelines with clear rules or recommendations on how to use AI ethically and effectively in educational institutions. These rules should be developed collaboratively, which includes creating an ethical decision-making framework that reflects the values of society as well as educational objectives. Engaging educators, students, and parents in conversations about the ethics surrounding AI can create an important ecosystem. Second, educators need critical thinking-oriented professional development workshops that prioritize ethical attitudes and behaviors in the educational use of AI. The educators participating in this study made many statements about the ethical use of AI, and professional development workshops in which they can express themselves and engage with current AI ethics will benefit them.

We encourage further research to adopt an interdisciplinary approach to AI ethics, involving ethicists and other professionals such as technologists, pedagogues, and psychologists, to investigate AI ethics alignment in different contexts and fields of inquiry. Such work will help clarify the educational contexts in which AI is employed for teaching. We are also aware of the limitations of this study, including its small sample size and the specific context in which it was conducted. We recommend that future research address these limitations by employing quantitative methods with larger participant pools and by replicating the study in diverse educational contexts.

Appendix A

Interview core questions

  1. Do you use AI tools in your class? How much? Why? Are they helpful?

  2. How do you ensure that the AI tools you use align with ethical considerations in your teaching practices?

  3. Can you share experiences where ethical concerns influenced your choice of AI tools or how you use them in the classroom?

  4. Are there specific ethical challenges or benefits you associate with the integration of AI tools?

  5. How do ethical considerations about AI align with the subject you teach? Any difference between AI ethics in teaching English and other subjects?

  6. Can you give an example where ethical considerations influenced your expected outcomes?

  7. How do your colleagues see the AI ethics? Are they encouraged or discouraged?

  8. How do you ensure that your teaching practices with AI follow ethical guidelines and policies?

  9. Are there specific rules that guide your decisions regarding AI ethics?

  10. How do the values and norms of your teaching community align with ethical considerations about AI?

  11. In what ways does your community collectively address and promote ethical practices in AI use?

  12. From your perspective, how does the ethical alignment of AI impact your students’ learning outcomes?

  13. Can you share a situation where ethical decisions influenced learning outcomes in your teaching?

Appendix B

Theme: Rules

Subtheme: Unforeseen ethical breach
Extract: “but I can say that even most of them [users] actually, although they want to follow the ethics, they don’t know how to use it. They don’t know the ethics, that’s the issue. They just think that they follow” (E2, Interview)

Subtheme: Partiality in information
Extract: “… I had experiences where AI just was giving me wrong information. I had one thing where I was teaching facts and opinions. So the AI said it was the Sahara desert is the hottest desert in the world. I thought it was true. I took it, I went to the classroom and I read it and the students were like, well it’s not true. So that also created some confusion in the classroom” (E4, Interview)

Subtheme: Regulatory void
Extract:
E3: I use AI to give reviews, of course, I go through it. Sometimes there is a problem with irregularity. I try to do away with it, but I do use it and I feel I’m more sort of helpful to students this way.
Int: So do you think it is ethical or not?
E3: I don’t think it’s unethical. OK, it may be considered unethical if the organization has straightforward rules about it, but I don’t think we have (E3, Interview)

Subtheme: Prioritizing individual morality over institutional norms
Extract: “… personal ethics should supersede organizational ethics. But if you don’t have any personal ethics, then organizational ethics have to be followed. Otherwise, I would risk my position, right? So I would hope personal ethics would lead the way” (E10, Interview)

Theme: Tools

Subtheme: Negative impact of AI on cognitive engagement
Extract: “It’s making us more lethargic and lazy. We’ve stopped thinking, We’ve stopped writing … We are writing less. We are thinking less … Apart from speaking, everything is AI generated, so it is making us lazy, which in real life drastically reduces our performance because in real life our text needs to be our text, and our speech needs to be our speech, but without any AI help. So, I think we need to have rules or even some ethical considerations for the percentage of using AI in any assignment or even in a day” (E3, Interview)

Subtheme: Unethical aids for student success
Extract:
E9: If the information is, let’s say trustworthy, then it will impact students positively, I think. Although we made a mistake here by not giving the right to authors or resources themselves… So it depends on the quality of the item you’re using or you’re stealing if we say …
Int: So what I understood right now is that although it is not unethical, it can help a student
E9: Exactly (E9, Interview)

Subtheme: Advocacy for ethical tool development
Extract: “I mean, obviously it would not be in the AI creator’s interest to watermark everything, but if we had, if there was a way to watermark and make sure that it was known that these are AI-generated, that would solve a problem. But the people who write these programs certainly don’t want Watermark” (E10, Interview)

Theme: Community

Subtheme: Community engagement and inclusivity
Extract: “I think it is talked about and they believe it must be encouraged, especially since there are some new rules from the Ministry of Education that schools and you know, educators in general should be following such as rules of ethics. So in order to maintain a very transparent and fair outcome, let’s say yes” (E8, Interview)

Subtheme: Ethical role modeling
Extract: “So if I am the educator, I might be the model for them. Follow these rules. They will unconsciously feel that they need to do the same, OK, by attributing everyone’s or every author the right” (E9, Interview)

Subtheme: Scholarly integrity and risk awareness
Extract: “I mean if we simply encourage the use without telling them that they need to quote or cite, then we’re condoning this copy-paste attitude and that will lead them down the wrong path obviously. But other than that I can’t. The impact would be definitely negative because they put their scholarships in jeopardy and even attendance at the university” (E10, Interview)

Theme: Division of labor

Subtheme: Ethical implementation and oversight
Extract: “If I use it to get a more in-depth look at something then that is ethical … Mostly like the ideas of like citing what you have there, not plagiarizing, saying that for example this idea is from AI, or do not say this is my idea when you get it from AI, these types of things” (E7, Interview)

Subtheme: Ethical guidance and support
Extract: “what I got is that even the rules are not complete now to see if you’re following them or not, or violating them or not” (E6, Interview)

Subtheme: Stakeholder collaboration for ethical AI integration
Extract: “I would say globally, global rules, but we may also have school rules and university rules maybe. AI is a reality and it’s just gonna increase and it’s just gonna be, it’s just gonna grow as a part of our life daily…. So I think … we should try to change and modify our rules, keeping in mind that it is a helpful tool. There should be more sort of guidelines on how to use AI” (E8, Interview)

Availability of data and materials

Data from this study (metaphor responses, interview audio, and their transcriptions) are available and will be shared upon request.


Acknowledgements

We extend our heartfelt appreciation to the teachers who participated in the study despite their demanding schedules; their valuable contributions greatly enriched our study.

Funding

This study was funded by Anadolu University under grant number YTS-2024-2559.

Author information


Contributions

Jaber Kamali: conceptualization (supporting); data curation (lead); formal analysis (lead); methodology (equal); supervision (lead); writing—original draft (equal); writing—review and editing (equal). Muhammet Furkan Alpat: conceptualization (lead); data curation (supporting); formal analysis (supporting); methodology (equal); writing—original draft (equal); writing—review and editing (equal). Aras Bozkurt: conceptualization (supporting); data curation (supporting); formal analysis (supporting); methodology (supporting); writing—original draft (supporting); writing—review and editing (equal).

Corresponding author

Correspondence to Jaber Kamali.

Ethics declarations

Competing interests

We have no conflicts of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kamali, J., Alpat, M.F. & Bozkurt, A. AI ethics as a complex and multifaceted challenge: decoding educators’ AI ethics alignment through the lens of activity theory. Int J Educ Technol High Educ 21, 62 (2024). https://doi.org/10.1186/s41239-024-00496-9



  • DOI: https://doi.org/10.1186/s41239-024-00496-9
