1 Introduction

In this paper, I analyse the relationship between AI, education and democracy from the standpoint of John Dewey’s philosophy of education (1956, 2018). This allows me to clarify how the notions of democracy and democratic education might be understood in the context of AI, develop a set of criteria that schools and education systems need to meet to be considered democratic, and assess whether the use of AI helps them achieve this ideal.

Among the many promises surrounding digital technologies, the “democratisation” of the domain to which they are applied is one of the most common and, arguably, most poorly conceptualised. On top of being largely speculative, the debates surrounding technology and democratisation suffer from a lack of clarity – it is often not evident what is implied by the processes of democratisation, or what exactly is to be democratised – and, as I argue in this paper, there are good reasons to suspect that AI will actually undermine democratic ideals.

With the growing uptake of digital educational technologies and the emerging interest in educational AI prompted by generative models, promises of technologically-induced democratisation have also been extended to education. AI has recently been touted for its potential to democratise education by companies such as DuolingoFootnote 1 and Khan Academy,Footnote 2 academic initiatives like Wharton Interactive,Footnote 3 and education-centric media outlets.Footnote 4 Later in this paper I demonstrate that even if companies do not explicitly refer to democracy and democratisation, democratic promises are often implicit in their marketing, or their claims can be interpreted as corresponding to some notion of democratic education. A discussion of democratic promises has also found its way into academic literature (Adel et al., 2024; Kamalov et al., 2023), even if the authors sometimes adopt a critical stance and express scepticism about AI’s democratic potential (Bulathwela et al., 2021; Kucirkova & Leaton Gray, 2023).

This emerging debate on AI and its (in)ability to democratise education is muddled by the multiplicity of meanings attributed to the notion of technologically-induced democratisation. In some circles democratisation has become a largely empty term, a more positive spin on clichés about technology’s ability to “transform” or “revolutionise” all human activity. However, when we discuss education as being more or less democratic due to a particular development, we need to consider not only democratic values, but also a tapestry of debates concerning the relationship between democracy and education – a tradition which the current discussion has not adequately acknowledged.

What we mean and what we can mean when we express hopes that some new development will render education more democratic will depend on our understanding of democracy – a term that is often underdefined and taken for granted, especially in educational contexts (Feu et al., 2017). Sant (2019) argues that the literature distinguishes as many as eight versions of democratic education (e.g., liberal, deliberative, neoliberal, elitist, etc.) and it may not be possible to analyse specific usages of democracy and democratisation in educational contexts without investigating the speakers’ background political-theoretical assumptions.

Consequently, I explicitly adopt John Dewey’s understanding of democratic education (1956, 2018). Dewey’s prolific and influential work outlining the relationship between democratic principles and educational practices allows me to argue in Section 2 that education that can be considered democratic needs to 1) prepare students to live in a democracy by helping them acquire skills and knowledge necessary to be a member of a democratic society, 2) incorporate democratic practices and values, 3) be democratically governed, and 4) provide free, equal and widespread access to quality education. I also note how each of these different aspects of democratic education is invoked in the debate surrounding educational AI.

In Section 3, I outline selected characteristics of contemporary educational AI, namely its focus on individualisation, mastery and automation. This allows me to examine in Section 4 the extent to which contemporary educational AI meets the four requirements derived from Dewey’s understanding of democratic education (as presented in Section 2). I argue that AI’s emphasis on passive acquisition of knowledge, its individualistic framing of learning, its impact on the role of teachers, and its governance structure are largely incompatible with a Deweyan understanding of democratic education. While I focus primarily on Intelligent Tutoring Systems (ITS), which are the most prominent kind of commercial ed-tech tool today, I also discuss how my analysis relates to other forms of AI developed by commercial actors and the research community. I use such alternative applications to provide examples of educational tools that are more in line with pragmatist democratic ideals.

2 Democratic Education from a Deweyan Perspective

2.1 Dewey, Democracy and Education

Democracy is central to Dewey’s philosophy (Pappas, 2008) as he rejects atomistic views of society as a sum of individuals, and criticises ideas about a conflict between societal and individual needs (Dewey, 2008, pp. 322–328). For Dewey, our dispositions and wellbeing cannot be isolated from engagement with others and our environment, and he argues that individual development is best achieved under democratic conditions (Pappas, 2008, p. 216). However, Dewey adopts a “thick” conception of democracy (Coeckelbergh, 2024) that entails more than a system of government characterised by free and fair elections and proportional representation. In one of his most cited passages, he defines democracy as a “mode of associated living, of conjoint communicated experience” (Dewey, 2018, p. 93). A central aspect of Deweyan democracy is cooperation for the purposes of achieving shared goals and enacting intersubjectively defined values (Dewey, 2008, pp. 348–349; Honneth, 1998), which requires widespread participation in deliberative and communicative processes and social action (Dewey, 1957, p. 294; 2008, p. 346).

It is not surprising that for Dewey education is fundamental to democracy – schools are places where children acquire the knowledge, skills and dispositions necessary to live in a democratic society (Dewey, 1956, 2018; Lind, 2023). For this reason, Biesta (2006, 2015) argues after Dewey that education cannot be reduced to only learning – mastery of the curriculum and increased performance in testing. Dewey sees education as more holistic, focused on the cultivation of individual character and its habituation to cooperative practices and the variety of social roles that children will be asked to adopt in democratic life.

This has practical consequences for Dewey, as in his view education can only be successful in its democratic aim if it is itself social and rooted in experience. He criticises the one-sided, lecture-style practice of teachers “talking at” students and merely asking them to remember the transmitted information (Dewey, 2018, p. 43). He argues that children need to deliberate upon the problems posed to them in the classroom and that such deliberation should occur in an intersubjective manner – exposing children to others as sources of insight and cooperation partners. This not only reflects future situations in which children might be asked to identify and address shared problems together with their peers, but also accustoms them to difference as one of the main principles of democracy. Living together requires us to recognise disparities in skills, resources and circumstances and to adapt our practices to allow everyone to participate equally in democratic action, regardless of their social status. For this reason, educational theorists inspired by Dewey are critical of civic education efforts that merely teach children about democracy and argue that schools should be the very places where the meaning of democratic living is interrogated, shaped and enacted (Biesta & Lawy, 2006; Biesta, 2015). In this sense, one of the tasks of democratic education is to identify and foster conditions in which all the involved parties can participate in intersubjective deliberation, define shared problems, and propose and implement solutions.

Relatedly, Dewey recognises that learning cannot happen in the abstract and should connect to children’s existing experience and the variety of practices that make up our societies. Due to his pragmatic outlook he sees knowledge and dispositions cultivated in schools as tools whose value lies in helping students recognise and deal with issues they (may) encounter in their lives. On the one hand, this leads him to argue that students may learn more willingly and effectively if knowledge is presented in line with its practical relevance (Dewey, 2018, p. 255). This could entail, for example, teaching history by exposing students to textile production, as grasping the difference in effort required to manufacture wool and cotton clothing can help them understand the extent to which the Industrial Revolution affected not only working conditions and practices, but also the geography and politics of countries, as well as family life and social values (Dewey, 1956, pp. 19–29). On the other hand, Dewey recognises the value of this practical mode of learning because it allows children to go beyond learning about the different occupations and tasks necessary to keep society going. Practical engagement with the world cultivates in children the skills, knowledge and dispositions that will be fundamental in their future lives.

In this sense, learning should start from experience, but it is also a source of experience. However, education would not be complete if it only focused on children’s professional learning and merely sought to train the workers of the future. Of course, Dewey recognises that the division of labour is fundamental to democratic societies (Honneth, 1998), but he sees it as only one aspect of the collaborative living characteristic of democracy and places equal emphasis on cooperation, deliberation and public participation. Consequently, if schools are meant to prepare children to live in a democratic society, they should be spaces where children acquire experience in working together with others, negotiating the values and objectives of such collaborative action, and including the widest range of participants in these processes. For this reason, Lind (2023) argues that Deweyan commitments require us to create educational environments in which students can “think together” and implement their ideas pertaining to the material and to education as such – its meaning, purpose and form of organisation.

2.2 Four Ways for Education to be Democratic

Even this brief overview makes it possible to highlight four core features of Dewey’s view of democratic education and each of them is reflected, to a different extent, in the debate surrounding educational AI.Footnote 5 First, education needs to create spaces where children learn about and negotiate the meaning of democracy, which involves deliberation upon shared values, the aims and purposes of social action, and the means through which they should be achieved. If this meaning is invoked in the context of educational AI, authors typically voice concerns that AI may be used by non-democratic regimes in a censoring manner to stop students from learning about democracy (e.g., Pea et al., 2023), or that it may be used to spread propaganda and cultivate a narrow understanding of civic rights and responsibilities (Smuha, 2022). On the other hand, Saltman (2020) and Selwyn (2022) note that AI’s role in shaping future citizens is currently not determined and argue that the technology could both reinforce and undermine democratic values.

Second, education needs to cultivate the skills, knowledge and dispositions necessary to participate in democratic processes, which entails the integration of democratic practices within schools and an emphasis on the communication and cooperation that are at the centre of democratic living. In the context of AI, authors primarily worry that the technology might standardise and entrench dominant learning practices, thus making it more difficult for educators and students to rely on pedagogies that could foster deliberation, engagement and dissent (Bartoletti, 2022; Berendt et al., 2020; Saltman, 2020). On the other hand, companies like Instructure point out that AI can have democratising effects by helping students develop skills (e.g., creativity) they would not otherwise acquire, thus “fostering a more inclusive and collaborative environment where everyone's ideas can flourish.”Footnote 6 Moreover, some (e.g., MagicSchool.ai) claim that the introduction of AI will relieve teachers of administrative tasks, allowing them more time and opportunities to foster relationships and a sense of community. While this promise is not framed explicitly as democratic, the company’s claim that “relationships are at the center of learning”Footnote 7 can be interpreted as such from a Deweyan perspective.

Third, education should ensure that children gain experience in democratic living and thinking together, which implies children’s participation in the decisions surrounding education and its overall democratic governance. This can be enacted at different levels: democratic shaping of educational policy through public debate and electoral processes, the involvement of teachers and parents in the decision-making within individual schools, as well as the participation of students in school governance, for example through the formation of representative student bodies. As such, democratic education can be defined in terms of shared influence over students’, educators’ and parents’ common environment. In reference to AI, the increasing reach of private companies within education is seen as a threat to democracy as it erodes the public’s ability to make decisions about education and allows these private companies to influence or even control teaching practices and educational governance (Berendt et al., 2020; Saltman, 2020; Williamson et al., 2023).

Fourth, education has to extend equally to all members of the democratic community. After all, it would be difficult to conceive of an education system as (fully) democratic if access to learning is gated due to social class, religion or other factors. Concerns about AI leading to unequal access to good education are ubiquitous in academic literature (Holmes et al., 2022; Holstein & Doroudi, 2022; Nguyen et al., 2023; Williamson et al., 2023), and AI’s potential to reduce such inequalities is commonly lauded by the proponents of the technology (Nemorin et al., 2023; Schiff, 2022). Mavrikis et al. (2021) argue that the cost-effectiveness of AI would enable poorer students to afford high-quality tutoring that would otherwise not be available to them, while others claim that personalisation techniques would help reduce attainment gaps for students who might struggle in traditional classrooms due to, e.g., disability or learning difficulties (Baker et al., 2023; Kasneci et al., 2023). Arguably, when ed-tech companies promise that AI will democratise education, they primarily refer to equality and breadth of access to quality learning. Khan (2024), one of the leading proponents of educational AI, Duolingo,Footnote 8 Derek Haoyang Li, founder of Squirrel AI,Footnote 9 and the Bill & Melinda Gates FoundationFootnote 10 have all recently expressed hopes that AI will empower human teachers to provide better instruction to a greater number of students than is possible today. Similarly, Google centres its educational marketing on AI’s potential to make learning more inclusive (e.g., through personalised and adaptive instruction)Footnote 11 and OpenAI highlights testimonies from university lecturers and administrators who claim that AI can help them improve access to education and level the educational playing field.Footnote 12

It is worth highlighting that while many authors discuss AI’s impact on democracy and democratic education by focusing only on one meaning of the term, their analyses often touch upon wider democratic concerns. The paper by Saltman (2020) is a good example as his main arguments pertain to governance – he criticises AI as a tool for the privatisation and neoliberalisation of education. However, this critique allows (or maybe requires) him to consider citizenship education, as well as the kinds of (un)democratic practices enabled by the technology.

2.3 The Advantages of a Deweyan Analysis

My Deweyan perspective allows me to present the four aspects of democratic education not as distinct or isolated but as complementary – in ideal terms, none of them can render education democratic on its own. Of course, in practice, progress on each of the four aspects might move the needle in the right direction given the democratic deficits and challenges inherent to our schools, but we would still be justified in claiming that more needs to be done to achieve the Deweyan ideal. It would be possible to imagine an authoritarian country where all children are able to attend and graduate from good-quality schools, but concerns about the undemocratic nature of education would still be meaningful and warranted. Likewise, schools could succeed in teaching students about the procedural functioning of democracy without incorporating democratic pedagogies, leaving open questions about the validity and desirability of such endeavours. It is also common to see democratic practices and democratic governance implemented only in some schools, typically the more affluent ones, which results in the replication of existing inequalities and democratic deficits.

Moreover, Dewey’s philosophy offers a nuanced understanding of technology and its impact on individuals and society. As discussed by Hickman (2001), Dewey conceptualises technology as the means of shaping new circumstances and transforming our lived environment, while also allowing us to adapt to and respond to these changes. A pragmatist perspective makes it possible to understand and evaluate technologies’ impact on knowledge, dispositions, values, political systems and practices, and in recent years Dewey’s philosophy has enjoyed a resurgence in philosophy of technology (see, e.g., Coeckelbergh, 2024; van de Poel & Kudina, 2022; Wieczorek, 2024). Consequently, a pragmatist analysis of educational AI helps me discuss what dispositions are cultivated by the technology and how its use shapes students’ ability to engage in democratic life. Furthermore, my analysis can contribute to the resurgence of pragmatism in philosophy of technology by connecting it to Dewey’s philosophy of education – an avenue that has not been extensively explored.

3 Characteristics of the Current Educational AI Paradigm

Any discussion of today’s educational AI needs to include a consideration of intelligent tutoring systems (ITS). Although ITS are not the only type of educational AI being developed today, they have become one of the leading examples of the technology in recent years (Cukurova, 2024; Holmes & Tuomi, 2022) and the emergence of generative AI has only increased their prominence. Many companies are currently developing their own AI tutors or adapting general purpose models, such as GPT-4, for educational contexts. Typical ITS (such as Khanmigo by Khan Academy and Socratic by Google) are meant to simulate one-on-one tutoring sessions by recommending learning material, helping students resolve problems, providing advice, and engaging students in conversations about their educational experiences. Proponents of ITS argue that they can improve and democratise education by providing high-quality, personalised one-on-one tutoring to virtually all students. For example, Khan (2024) bases his pitch on Bloom’s (1984) claim that students with human tutors perform “two standard deviations” better than their peers educated through traditional means. He argues that AI-powered ITS can help educators overcome issues of accessibility, as the technology could be scaled at relatively low cost and thus immensely improve the educational outcomes of a wide range of students.

It is, of course, not a given that ITS can guarantee learning gains of the same magnitude as those identified by Bloom, especially as that study has been notoriously difficult to replicate even with human tutors. However, the promises of individual tutoring seem to have captured the imagination of educational AI developers as sentiments similar to Khan’s have been expressed by, e.g., Sam Altman,Footnote 13 and Khan Academy has received significant support from Microsoft. Consequently, the learning model behind ITS has become paradigmatic for today’s educational AI efforts and three of its aspects are particularly relevant for this paper: individualisation, mastery and automation.Footnote 14

Commercial educational AI frames learning as predominantly or even exclusively individualistic. Arguably, this can be seen as an extrapolation of the trends in educational technology dating back to Skinner’s teaching machines (Watters, 2021) and wider developments in education associated with standardised testing, neoliberalism and “learnification” (Biesta, 2006). In this sense, the perceived sophistication of contemporary AI leads the proponents of ITS to believe that they can finally deliver the efficiency of one-on-one learning at scale. At best, they see communal, classroom-based learning as an unfortunate necessity and a poor compromise – after all, there are not enough tutors available for everyone. According to such arguments (e.g., Khan, 2024), all or most of education can be optimised if we approach it as an individual effort and personalise the content, form of delivery and rate of progress to the abilities, interests and circumstances of a particular student. Group instruction is thus framed as inferior because it cannot offer the same level of adaptability and jeopardises individual learning to ensure that everyone in the classroom can progress through the material.

This understanding of the individual–group dynamic is possible because contemporary educational AI focuses predominantly on the mastery aspects of learning. Good education is equated with the acquisition of information and improved performance in standardised testing. Accordingly, the necessity to adapt the material and its presentation to a group of recipients with varied levels of comprehension and ability is treated as an obstacle to effective communication. By contrast, a machine extremely proficient in processing information and teaching in an individual manner could, according to this argument, optimise information flows and ensure that students learn more and do so at a faster pace. As a consequence, the aspects of learning that cannot be easily measured or that are not reducible to mastery of the curriculum (e.g., character formation) fail to enter AI enthusiasts’ field of vision or are treated in an off-hand manner.

Moreover, with the help of educational AI, more and more elements of teachers’ work stand to be automated. This is also a continuation of larger historical trends (Bergviken Rensfeldt & Rahm, 2023), with ed-tech companies presenting many educational practices as either superfluous or inefficiently performed. The literature currently abounds with claims that AI will automate tedious and bureaucratic tasks and thus leave teachers free to spend their time on what they do best – delivering material (e.g., Baker et al., 2023; Kasneci et al., 2023; Schiff, 2021) – and many companies market their tools (including but not limited to ITS) primarily by highlighting their time-saving potential (e.g., Google, MagicSchool, Brisk, Edcafe, PowerSchool). At the same time, as proponents of AI argue that the technology has achieved a super-human level of intelligence (even if only in a narrow range of tasks) or will do so in the near future, AI sometimes functions in the popular imagination as a potential replacement for human teachers (Selwyn, 2019). Regardless of the credibility of such claims, research shows that even the mundane and formulaic aspects of teachers’ jobs can play a significant role in educational practices. For example, Selwyn et al. (2023) demonstrated that an attempt to replace daily roll call with a facial recognition system deprived teachers of opportunities to check on their students and ensure that there was nothing on their minds that would prevent them from fully engaging with the material. Such nuances are lost in the AI-powered push to automate and optimise educational practices, and this can have far-reaching effects on the quality of education, as well as its democratic potential.

4 Is AI-mediated Education Democratic?

It may be the case that the widespread introduction of AI in schools will increase access to education, e.g., by providing children with always available personal tutors, and that this will have a positive impact on students’ mastery of the curriculum. However, such claims will only be verified with time, and as I argue in this section, AI would not fare equally well in fulfilling other conditions of democratic education rooted in Dewey’s philosophy – providing children with opportunities to explore and negotiate the meaning of democracy, acquire dispositions for and experience in democratic living, and practice democratic deliberation and decision-making within school settings.

It is important to note that there are two levels to my analysis and that the notion of democratic education relates differently to each of them: a) the ideal (perhaps unachievable) state described by Dewey towards which schools should strive, and b) education as implemented (imperfectly) in real-world schools around the world. Some of the discussed interventions could bring particular schools (i.e., those that already struggle with access, scarcity of resources or teacher shortages) closer to the Deweyan model. However, I argue below that educational AI, at least in its commercial, ITS-centric form, is incompatible with the Deweyan ideal. Its implementation would pull schools further away from the notion of democratic education I put forward in this paper, or would put a ceiling on their attempts to achieve the values inherent to Dewey’s thought.

4.1 Mastery

First, AI’s focus on mastery of the curriculum might help students learn about democracy, but democratic education cannot be reduced to the passive absorption of wisdom. It is possible that the mastery-based model of learning promoted by ITS and other AI tools would instil in students a range of dispositions and beliefs necessary to engage in democratic living (e.g., by teaching about social action, democratic processes and procedures, etc.). However, the Deweyan model of democratic education is concerned not just with the outcome, but also with the process of learning – the latter requiring students to enact concrete roles and behaviours and test them through experience. Consequently, just as pragmatists would question whether students can be prepared for living in a democracy by merely listening to lectures on democratic virtues and procedures, it is doubtful that the relevant experience can be acquired while sitting in front of a screen and receiving information from a personal tutor. This is in line with insights from constructive alignment theory, which moves away from lecturing by positing that students successfully develop knowledge by participating in activities designed to foster specific competences. In particular, Stamov Roßnagel et al. (2021) suggest that such approaches contribute positively to students’ learning experience and motivation. Indeed, a review of empirical research conducted by Teegelbeckers et al. (2023) demonstrated that instruction helps students acquire knowledge about democracy. However, the review also points out that practical and discussion-based teaching methods such as collaborative group work, practical projects and democratic decision-making can be more suitable for fostering not only knowledge but also a wider range of democratic competences, such as dealing with differences, political engagement, and both the ability and the confidence to participate in political decision-making.

In particular, pragmatist democratic education requires schools to be spaces where the very meaning of democracy is interrogated and enacted. Students should be given the opportunity to debate and co-determine what it means to live and learn with others in a democratic fashion, and the results of such inquiries should be reflected in educational practice. While the promises of personalisation offered by ITS may sound appealing, today’s AI tools mostly vary the rate of progress of particular students, make cosmetic changes to examples and problems in the hope of engaging a student’s interests (e.g., by making a math problem about airplanes instead of trains), or pose leading questions to guide students towards the desired answer. Such techniques rely on inferences based on aggregate data and their outputs may not be granular enough to accommodate individual students or their particular circumstances (Holmes & Tuomi, 2022).
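To make concrete what such surface-level personalisation can amount to, consider the following minimal sketch (written in Python purely for illustration; the problem template, interest tags and mastery threshold are hypothetical and not drawn from any existing ITS). The framing of a problem and its numbers change with the student profile, while the underlying task, the pedagogy and the system’s epistemic authority remain untouched.

```python
# Illustrative sketch only: a toy example of the kind of "surface-level"
# personalisation described above, not code from any actual ITS.
# The problem template, interest tags and mastery threshold are hypothetical.

PROBLEM_TEMPLATE = "A {vehicle} travels {distance} km in {hours} hours. What is its average speed?"

INTEREST_TO_VEHICLE = {"aviation": "plane", "railways": "train", "default": "car"}

def personalise_problem(student_interest: str, mastery_score: float) -> dict:
    """Swap the surface context and adjust difficulty; the underlying task is unchanged."""
    vehicle = INTEREST_TO_VEHICLE.get(student_interest, INTEREST_TO_VEHICLE["default"])
    # "Personalised" difficulty: larger numbers for students with higher estimated mastery.
    distance, hours = (540, 6) if mastery_score > 0.7 else (120, 2)
    return {
        "question": PROBLEM_TEMPLATE.format(vehicle=vehicle, distance=distance, hours=hours),
        "expected_answer": distance / hours,
    }

print(personalise_problem("aviation", mastery_score=0.8))
# {'question': 'A plane travels 540 km in 6 hours. What is its average speed?', 'expected_answer': 90.0}
```

However sophisticated the inference behind such choices may be in real systems, the adaptation operates on the surface of the material rather than on who gets to decide what, how and why is learned.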

On the other hand, democratisation of educational practices entails student agency in shaping classroom dynamics and learning goals so that students can co-determine what, how and why they learn. As contemporary AI models merely repurpose existing information through statistical means, they will always be limited by the data used for training and the purposes for which the model was developed. In this sense, AI may be adept at leading students towards predetermined learning outcomes, but it is not clear to what extent it can accommodate the new and unexpected directions entailed by a truly participatory education (directions that have not been anticipated in its development). Of course, even practice-based learning activities build towards specific learning objectives and it might be impossible to conceptualise teaching and curriculum design that does not incorporate them. However, learning objectives and methods differ in their flexibility and the extent to which they allow student influence. My argument is that the mastery-based methods and objectives embedded in ITS are much less flexible and student-focused than the participatory learning approaches used in today’s classrooms.

In particular, the personalisation offered by contemporary educational AI does not extend to the underlying power relations and pedagogies. When democratic practices are enacted in the classroom (e.g., flipped classrooms, participatory deliberation), the students themselves become a source of knowledge for their peers and teachers – active participants who influence others and are influenced in turn. However, it is inconceivable for an ITS to learn from students in any meaningful sense, to incorporate a new perspective and adapt its beliefs, values and behaviours. At most, an ITS could “learn” by collecting students’ personal information and using it in future interactions, but despite sharing the name, human and machine learning are distinct processes. Gaining experience in making a computer “learn” something is not the same as gaining experience in the deliberation characteristic of democratic living with other humans. Even if data collection entails a two-way flow of information, the ITS always remains in a position of epistemic authority as the entity responsible for delivering instruction. This limits the kinds of social dynamics that students can experiment with and become habituated to, ossifying schools as places where knowledge is passed down hierarchically rather than co-created in a horizontal, participatory manner.

Not all forms of educational AI are susceptible to the above objections. For example, AI-assisted simulations utilising virtual and augmented reality have been used in medical education and training (Holmes & Tuomi, 2022) and similar applications could provide opportunities for democratically-oriented practice-based learning in school settings. General purpose generative AI tools could simulate the outcomes of in-class deliberations over societal problems by generating a short text or image outlining the potential consequences of the solutions suggested by the students, thus allowing them to experiment with different courses of action and perspectives. AI-powered game-based learning could also provide opportunities to experiment with democratic decision-making, for example by facilitating the use of games simulating deliberation in legislative bodies and diverse communities, thus allowing students to gain experience in political participation.
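As an illustration of how such a simulation of deliberation outcomes might be set up, the sketch below composes a prompt asking a generative model to project the consequences of a solution proposed by the class. The prompt wording, the example problem and the generate() placeholder are my own hypothetical choices, not a description of any existing tool.

```python
# Illustrative sketch only: building a prompt that asks a generative model to
# project the consequences of a solution proposed during in-class deliberation.
# The prompt wording and the generate() call are hypothetical placeholders;
# any general-purpose text model the school already uses could stand in for it.

def build_deliberation_prompt(problem: str, proposed_solution: str) -> str:
    """Compose a short prompt asking the model to sketch likely consequences of the class's proposal."""
    return (
        f"A class is deliberating on the following problem: {problem}\n"
        f"The students propose: {proposed_solution}\n"
        "In about 150 words, describe two plausible positive and two plausible negative "
        "consequences of this proposal, so the class can debate and revise it."
    )

prompt = build_deliberation_prompt(
    problem="Litter is accumulating in the park next to the school.",
    proposed_solution="Organise weekly student-led clean-up days and ask the council for more bins.",
)
# The prompt would then be passed to whichever generative model is available, e.g.:
# outcome_text = generate(prompt)  # 'generate' is a stand-in, not a real API call
print(prompt)
```

The point of such a setup is that the model responds to a proposal the students themselves formulated, rather than steering them towards a predetermined answer.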

While I criticise the shallow personalisation offered by ITS, researchers like Wang et al. (2024) have demonstrated how AI tools can provide in-depth annotation of students’ questions and responses in online tutoring sessions. This could help us better understand students’ learning experiences, thus improving the ability of human and AI tutors to scaffold learning, design more appropriate activities or provide engaging feedback. Such techniques could also be relevant for AI learning companions designed to guide students engaging in practice-based learning. In particular, Mavrikis et al. (2022) suggest that AI-sourced feedback can help students engaged in experiential learning (e.g., the aforementioned simulation and game-based learning or Exploratory Learning Environments) and reduce the sense of disorientation associated with the freeform nature of this approach and its relative lack of clearly defined expectations. Similarly, while mastery learning is strongly associated with a testing focus, AI tools such as ITS (and other chatbots), text editors, and learning analytics-powered dashboards could offer automatic formative assessment, continuous feedback during learning activities, or suggestions on how to participate in collaborative activities.

4.2 Individualisation

AI’s emphasis on individual learning also poses a challenge to democratic education. Communication is arguably the most important aspect of Dewey’s vision of democracy, as members of society need to deliberate upon shared problems, share their perspectives with others and co-determine solutions that would accommodate the variety of circumstances, needs and values co-existing within the public (Coeckelbergh, 2024; Lind, 2023). ITS and other individualistic AI tools may be more efficient at transmitting information, but they would reduce the extent to which students engage with their peers. From a Deweyan point of view, schools should equip children with the communicative skills necessary to live in a democratic society, while also providing experience in approaching new situations from an intersubjective perspective and addressing shared problems in a cooperative manner. These objectives can only be achieved in communal spaces and through engagement with others – they cannot be fully simulated by an AI model (as interpersonal deliberation and cooperation entail consideration of concrete perspectives and circumstances), nor can they simply happen alongside individual learning.

Even if AI tools were indeed more efficient at transmitting information, knowledge is inherently social and students need to observe how it is shaped and used by others and to participate in such processes. Some tools (e.g., Khanmigo) propose to account for this by simulating conversations with literary or historical figures (such dialogues can also be initiated with general purpose chatbots like ChatGPT or Gemini). This might make learning more engaging and interactive, but it does not respond to my objection. First, AI models may process information, but it is doubtful whether they know anything, and even if they did, their relationship with knowledge would be different from humans’ – for example, it would not be embodied and situated. As such, while interacting with such simulations children will certainly learn something about the communicative uses of information, but this would still differ from what they would learn by observing how humans co-create and exchange knowledge. Second, children are aware that they are interacting with AI agents rather than real persons and there is growing evidence suggesting that they change their behaviour to accommodate this (see, e.g., Andries & Robertson, 2023). While some communicative skills acquired in such interactions might be transferable to human interaction, the experience the students acquire (and consequently its outcomes) is distinct from what they would develop during human interactions.

Moreover, the individual differences in ability and rate of progress that seem to motivate the turn to one-on-one tutoring in AI development are not an obstacle to learning within the democratic model of education I advocate. Students need to learn about and cope with social differences if they are to live in a democratic society, and they can do so by experiencing how others struggle or glide through the various challenges posed to them in the classroom. This exposes children not only to cognitive inequalities but can also make them aware of how different social roles and backgrounds shape others’ perspectives, interests and abilities. The individualisation of learning entailed by ITS would limit children’s opportunities to recognise the diversity of circumstances and experiences and to accommodate them in their communicative and cooperative practices. The agreeableness and infinite patience of AI tutors might also not be as desirable as some of their proponents claim (Khan, 2024). Disagreements and conflicts are an integral part of democratic life, and children may not learn to address them if they primarily deal with an AI chatbot with guardrails requiring it to remain “nice” and enthusiastically go along with most of the users’ suggestions. By contrast, interactions with peers require students to balance competing interests and desires and to resolve arguments and tensions – skills fundamental for intersubjective deliberation and collaboration.

Non-ITS AI tools might well help in increasing interaction and exposing students to social differences. Holmes and Tuomi (2022) point to learning network orchestrators which, while not very popular, have been used to pair students with a human tutor capable of helping them with an identified problem. Similar algorithms, building on learning analytics data, could be used to pair students who are likely to disagree in peer discussions, thus giving them an opportunity to learn how to resolve differences and collaborate despite differing perspectives (a sketch of such a heuristic follows below). The data generated through AI could also be a topic of discussion in itself. Cukurova (2024) observes that AI-supplied data can help us examine our thought processes, hidden assumptions and values that guide learning. Such data could likewise be presented to students to prompt a debate on how social differences affect individual interests, predispositions and educational outcomes. And if we are intent on using simulations with historical and literary figures, they may be more useful as prompts for group discussion than as a substitute for peer interaction. Students could, for example, debate whether Khanmigo’s rendition of a given figure is convincing and plausible, thus engaging in a wider reflection on their expectations of such simulated/imagined conversations and the purposes they serve.
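Returning to the pairing idea mentioned above, the following minimal sketch illustrates one way such an orchestrator-style heuristic could work. The stance scores, student names and greedy matching rule are hypothetical illustrations, not a description of any existing system; in practice such scores would have to be derived from learning analytics data and handled with care.

```python
# Illustrative sketch only: a hypothetical greedy heuristic for the kind of
# "pairing for productive disagreement" suggested above. The stance scores
# are assumed to come from learning analytics data; here they are invented.
from itertools import combinations

# Hypothetical per-student stance scores on a discussion topic (-1 to 1).
stances = {"Ana": 0.9, "Ben": -0.8, "Cleo": 0.2, "Dev": -0.3, "Ela": 0.7, "Finn": -0.6}

def pair_for_disagreement(stances: dict[str, float]) -> list[tuple[str, str]]:
    """Greedily pair the students whose stances differ most, so each pair is likely to disagree."""
    remaining = dict(stances)
    pairs = []
    while len(remaining) >= 2:
        # Pick the remaining pair with the largest stance gap.
        a, b = max(combinations(remaining, 2), key=lambda p: abs(remaining[p[0]] - remaining[p[1]]))
        pairs.append((a, b))
        del remaining[a], remaining[b]
    return pairs

print(pair_for_disagreement(stances))
# e.g. [('Ana', 'Ben'), ('Ela', 'Finn'), ('Cleo', 'Dev')]
```

The value of such a tool, from a Deweyan perspective, lies not in the matching itself but in the peer deliberation it is meant to occasion.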

4.3 Automation

The increased automation of educational practices poses questions about the role of teachers in AI-mediated learning. Although I am sceptical that AI can fully replace teachers, it is likely that it will transform many of their responsibilities, for example by placing them in a facilitatory role and reducing the extent to which they engage with the material (Schiff, 2021). However, teachers’ jobs cannot be reduced to the transmission of information. It could be argued that the delegation of instruction to ITS and the automation of administrative tasks would allow teachers to adopt more caring attitudes and engage more mindfully with the interpersonal dimension of learning, thus counterbalancing the individualistic tendencies of AI. However, there are reasons to doubt such claims. I have already noted, referring to Selwyn et al. (2023), that administrative tasks often provide teachers with opportunities to be caring and attentive by requiring them to focus on the needs and circumstances of particular students. Similarly, while a study by Guo and Wang (2024) demonstrated that tools like ChatGPT can provide useful writing feedback and do so more efficiently than human teachers, some of the generated feedback was not pertinent, the overall comments were too lengthy and overwhelming, and the model’s lack of knowledge about the students might have reduced the appropriateness of the feedback. Teachers using automatic feedback generation might spend less time writing comments to students, but ensuring that LLM-provided feedback is on point, digestible and appropriate could require a comparable time commitment. Moreover, when AI tools promise to ease teachers’ administrative burdens, can we guarantee that teachers will not spend just as much time reviewing AI-generated reports and lesson plans as they currently spend completing the paperwork themselves? After all, the ease with which such documents can be generated might prompt administrators to require them in even greater numbers.

Moreover, on top of providing instruction, teachers also serve as role models for their students, and their engagement with knowledge and participation in in-class deliberations offer a valuable example of democratic citizenship. The shift in teachers’ epistemic roles and authority associated with the automatising tendencies of contemporary AI might undermine educators’ ability to engage and develop relationships with students, mentor them convincingly, or connect the material to lived experience and the wider social context – all to the detriment of democratic education.

For these reasons, democratic education might be better served by AI tools that seek to augment teachers’ existing capabilities rather than fully automate their tasks, even if, as noted by Cukurova (2024), the hybrid intelligence systems that would most fully embody this purpose are still a thing of the future. Nonetheless, interventions such as that by Wang et al. (2024) can have a positive impact on teachers’ ability to parse and interpret students’ responses in dialogue, while Cukurova (2024) also points to research indicating that multimodal learning analytics dashboards supplying teachers with data on their students’ activity and performance prompt educators to spend their time on, e.g., scaffolding rather than mere monitoring. If democratic classrooms require the involvement of human teachers – capable of maintaining relationships with and between students, connecting the material to lived experience, and serving as examples of active citizens – the use of AI to augment existing teaching practices is a much more promising avenue than their automation through ITS.

4.4 Privatisation

Finally, greater reliance on AI in educational practices would increase the influence technology companies wield over education. Many educational AI tools, and particularly ITS, are today developed either by powerful multinational companies such as Google, Microsoft, Amazon or Meta, or by entities supported or influenced by such companies. Widespread adoption of AI in schools would allow big tech to shape what, how and why students learn, especially if we consider the power these companies already wield over education systems by providing IT solutions (e.g., platforms like Google Classroom and Microsoft Teams). Big tech’s presence enables it not only to influence educational practices and the content of learning, but also to collect enormous quantities of students’ data, which can be used to affect beliefs and behaviour both at the individual level (e.g., through personalised advertising) and at the societal level (e.g., by influencing political preferences).

Importantly for this paper, reliance on AI tools supplied by private companies would limit the public’s ability to shape education. A Deweyan notion of democratic education requires widespread participation in the governance of education, and this is not possible when privately owned corporations unilaterally make decisions about the technologies which form the backbone of our schools. Big tech companies are to a large extent immune from public control as they have demonstrated an extraordinary ability to shape regulatory efforts by mobilising armies of lawyers and lobbyists. And while some entities have introduced promising regulation, such as the European Union’s GDPR, the AI Act or the Digital Services Act, the relative newness of these solutions, their lax enforcement,Footnote 15 and the speed at which new technologies are introduced raise doubts about the extent to which democratic bodies can influence technological developments. Moreover, countries willing to regulate or otherwise curb big tech companies’ power often face risks of losing out on technological investment and need to deal with the stigma of being perceived as “uncompetitive” or “stifling innovation”.

At a more local level, AI also poses challenges for educators and students who might wish to exert some influence over the technology. The opacity of AI tools makes it difficult to understand how and why decisions are made, which limits the ability of users to challenge and change the way AI functions and affects their practices, especially as this requires digital literacy and resources that are not universally possessed. Consequently, teachers might find it challenging to adopt practices or engage with material not supported by a particular iteration of educational AI used in their school. Arguably, their freedom to do so is already restricted by local and national educational policymakers, but in principle such actors are more amenable to public control than privately supplied tools. It is also possible that schools might be locked into particular providers of educational AI as the effort required to train teachers and students to use a new tool, as well as the cost of adopting new technology (especially if it requires replacing the underlying IT infrastructure) might make it impractical to simply choose one that is more suited to the needs and goals of the learning community. While this would significantly reduce the ability of local actors to shape the education system in which they are involved, it is important to emphasise that consumer choice is not synonymous with democratic decision-making (Biesta, 2015). The ability to choose from a range of tools is different from the ability to influence how tools are made and to which purposes they are employed, and researchers have been emphasising the need to widen the community of people who get to shape the development of educational technology (Baker & Hawn, 2022).

Consequently, the increased reliance on privately supplied educational AI would inscribe significant democratic deficits in the governance of education. It would also result in children spending a large portion of their formative years within an environment over which they have little to no influence. Of course, today’s schools are not exactly the laboratories of democracy Dewey envisioned in his philosophical work and there are good reasons for limiting the extent to which children can participate in educational governance. However, if the shape and values of education are not experienced as matters of public concern and a subject of democratic decision-making, this will inhibit children’s ability to grow into active and responsible citizens.

Since the influence of private corporations is one of the obstacles to truly democratic education, it would be desirable for states to engage in efforts to develop educational AI as public infrastructure. After all, education is a matter of public concern and one of the central sectors for any state – vital for its long-term prosperity and development. There is already some recognition that educational AI should be subject to strict regulation and (at least partly) controlled by democratically elected officials. In particular, at the international level the EU’s AI Act prohibits the use of emotion recognition tools in select sectors, including education, and both the EU and UNESCO have engaged in efforts to develop recommendations for the development and implementation of educational AI. While such attempts are welcome, there is no reason why they should stop there (other than lobbying and a general lack of political will). States could invest in the development of public educational AI that would reflect a broader range of educational values, including democratic ones. They could also take greater control over how companies introduce their tools into schools, for example by organising procurement processes that oblige AI tools to meet stringent criteria (ideally co-determined in dialogue with educators and students), rather than leaving the deployment of AI to market processes. However, the democratic shortcomings of educational AI cannot be fully blamed on commercial interests, even if they are exacerbated by current market forces. Reliance on privately supplied educational technologies is particularly problematic in the context of the democratic governance of education, but AI’s emphasis on individualisation, mastery and automation is not directly tied to a business model. It may be the case that such ideas are particularly appealing to for-profit corporations, but they have played a central role in the history of educational technology and it is likely that they will find their way into non-commercial educational AI efforts – especially if for-profit actors dominate the debate on the goals, values and methods of learning.

5 Conclusion

Educational AI, at least in its commercial, ITS-centric form, is not conducive to democratic education in the Deweyan sense. While it is possible it will increase access to education, there are good reasons to believe that by individualising learning, focusing on mastery, automating teachers’ tasks and increasing the influence of private companies, the technology will have a negative impact on the democratic dimension of education. Following Dewey, I noted that students need to acquire skills and dispositions to live in a democratic – communicative and collaborative – fashion, gain experience in intersubjective deliberation and communication, and grow accustomed to systems and environments that are subject to public control. The aforementioned aspects of educational AI are significant obstacles to such goals.

However, our educational technologies do not have to follow the trends I discuss in this paper. On the one hand, the individualistic, mastery-focused style of learning has been at the centre of educational technology efforts since at least the 1960s (Bergviken Rensfeldt & Rahm, 2023; Watters, 2021), which makes it reasonable to expect that the underlying assumptions about education will remain stable even as companies introduce new tools and techniques. On the other hand, an interrogation and critique of the assumptions, values and theories of learning underlying educational technologies can help us imagine alternative applications of AI in education. As I have demonstrated, a pragmatist notion of democratic education can serve as an alternative to the ideas that are currently in vogue among developers of educational AI, and in my analysis I have outlined AI tools that would be more in line with a Deweyan view of education.

In particular, a pragmatist perspective makes clear that AI becomes an obstacle to democratic education when it serves as a replacement for human interaction and experience. Consequently, democratic ideals might be better served if, instead of using AI to automate teachers’ tasks and transmit information in an individualised manner, we identified ways for the technology to increase the number of interactions between students and educators and to allow them to experiment with new directions and approaches to learning. AI does not have to be synonymous with the individualistic, mastery-based learning pervading the ed-tech sector, and if the technology is going to disrupt existing practices of teaching and learning, critical educators could use this opportunity to embed more collaborative and experimental learning in our education systems. After all, even though Dewey is cited by many educators as a major influence, this does not mean that the practices he espoused in his philosophy and tried to implement in his practical work have become dominant in our schools.

At the same time, even if today’s AI is not in line with pragmatist educational ideals, its complete rejection might not be the most prudent course of action. AI and other digital technologies are likely to play an increasing role in our societies and schools would do well to prepare students for such developments. On the one hand, this should include AI literacy classes that would help children understand how AI works, what its limitations are, and how it impacts, e.g., our epistemic and communicative practices. On the other hand, a Deweyan view of education helps us recognise that children need not only knowledge about AI, but also experience in working alongside it and using it to resolve shared problems and achieve intersubjectively defined goals. In this sense, only through critical examination of and engagement with the values and purposes of AI can we reconcile the technology with democracy and democratic education.