21 Answers

Stephen Downes

Knowledge, Learning, Community

Half an Hour, Feb 23, 2024

 

Ben Williamson argues that the 21 arguments he summarizes "demonstrate how AI cannot be considered inevitable, beneficial or transformative in any straightforward way." Of course, nobody is actually proposing this. Similarly, nobody disagrees that AI "requires public deliberation and ongoing oversight."

It would of course be wrong to speculate on any actual intent behind the posting of these 21 arguments, but the suggestion that emerges is that they are meant to tip the scale against the use of AI in education. They are, however, for the most part easily addressed, and that is the purpose of this post.

Williamson's statements are indented and in italics.

 

Argument 1

Definitional obscurity. The term 'artificial intelligence' lacks clarity, mystifies the actual operations of technologies, and implies much more capability and 'magic' than most products warrant. In education it is important to separate different forms of AI that have appeared over the last half-century.

 

Almost any discussion of artificial intelligence will begin by distinguishing between traditional 'expert system' models based on rules and inferences, the oft-referenced Good Old-Fashioned AI (GOFAI), and contemporary approaches, which in turn are divided into machine learning, a toolbox of statistical methods, and 'deep learning', which is based on neural network models of learning.
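To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from Williamson's post or any real product) contrasting a GOFAI-style rule with a tiny learned parameter: in the first, the knowledge is written down explicitly by a human; in the second, the behaviour is estimated from data.

```python
# Illustrative sketch only: a hand-written rule (expert-system style) versus
# a threshold learned from labelled examples (machine-learning style).
# The scenario, names and numbers are invented for illustration.

def rule_based_pass(score: float) -> bool:
    """GOFAI style: the expert's knowledge is encoded as an explicit rule."""
    return score >= 50.0  # a threshold stipulated in advance

def learn_threshold(examples: list[tuple[float, bool]], epochs: int = 200) -> float:
    """ML style: estimate the threshold from (score, passed) examples."""
    threshold, lr = 0.0, 0.5
    for _ in range(epochs):
        for score, passed in examples:
            predicted = score >= threshold
            if predicted and not passed:
                threshold += lr  # fired when it shouldn't have: raise the bar
            elif passed and not predicted:
                threshold -= lr  # missed a pass: lower the bar
    return threshold

if __name__ == "__main__":
    data = [(35.0, False), (42.0, False), (55.0, True), (71.0, True), (48.0, False)]
    print(rule_based_pass(55.0))   # True, by stipulation
    print(learn_threshold(data))   # about 48.5, inferred from the data
```

Deep learning extends the second approach to millions of parameters adjusted in the same general way, which is where the analogy to human learning - and the 'artificial intelligence' label - comes from.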

 

What groups these very different theories is not the claim that any of them achieves actual intelligence, but rather the idea that they are based on methods and principles derived from observations and emulations of actual human intelligence. 'Artificial Intelligence' is a term of art, in other words, not a term of achievement. People who work in AI know this, and they never represent AI as anything else.

 

At the current time, most discussion about AI in education concerns data systems that collect information about students for analysis and prediction, often previously referred to as 'learning analytics' using 'big data'; and 'generative AI' applications like chatbot tutors that are intended to support students' learning through automated dialogue and prompts. 

 

Though I haven't compiled the statistics (as one should), my own impression is that since the release of chatGPT in the fall of 2022 most discussion of AI, even in education, has shifted from learning analytics to content analysis and generation. Even prior to 2022 it was well known that there was a wide range of potential applications in education, of which learning analytics was only one.

 

These technologies have their own histories, contexts of production and modes of operation that should be foregrounded over generalized claims that obscure the actual workings and effects of AI applications, in order for their potential, limitations, and implications for education to be accurately assessed.

 

This argument is based on the idea that "genealogical analysis traces how contemporary practices and institutions emerged out of specific struggles, conflicts, alliances, and exercises of power." No doubt this is an important avenue of study, and probably more important than "claims that obscure the actual workings and effects of AI applications," but the field of study is much broader than these two alternatives, and arguably, what AI does now is probably more important than what it used to do and where it came from.
 

 

Argument 2

Falling for the (critical) hype. Promotion of AI for schools is frequently supported by hype. This takes two forms: first, industry hype is used to attract policy interest and capture the attention of teachers and leaders, positioning AI as a technical solution for complex educational problems. It also serves the purpose of attracting investors' attention as AI requires significant funding. 

This is materially the same as saying that (some) AI is developed and sold by commercial enterprises, and that these enterprises both advertise their product and raise funds to support it. Agreed, it's distasteful. But this model has nothing in particular to do with AI and everything to do with the wider environment of society, media and economy.

 

Second, AI in education can be characterized by 'critical hype'—forms of critique that implicitly accept what the hype says AI can do, and inadvertently boost the credibility of those promoting it. The risk of both forms of hype is schools assume a very powerful technology exists that they must urgently address, while remaining unaware of its very real limitations, instabilities and faults.

Anything can be characterized as something. The question of relevance is whether this characterization is accurate, how prevalent it is, and who is doing it. There is a wealth of literature in both the popular and academic press that very definitely focuses on the limitations of contemporary AI, to the point where national and international policy frameworks are mostly risk-based.

Yes, there is a risk of hype. There's a risk of hype in everything.

 

Argument 3

Unproven benefits. AI in education is characterized by lots of edtech industry sales pitches, but little independent evidence. While AIED researchers suggest some benefits based on small scale studies and meta-analyses, most cannot be generalized, and the majority are based on studies in specific higher education contexts. 

It is worth noting that there is very little in contemporary education that has proven benefits, which is what leads to things like Hattie's effect size list. This, though, is complicated by the fact that there is little agreement in the field as to what constitutes a benefit - is it career preparation, content knowledge, test scores, personal enrichment, socialization, or propagation of the faith?

What matters is whether AI systems are able to do what their developers claim they can do. To a large degree, these claims are substantiated. There are numerous studies over time of AI systems proving their capabilities in games like chess, Go and Jeopardy, writing software, correcting grammar and spelling, translating and transcribing text, and much much more. Are some or any of these things beneficial? That's more of a philosophical question.

Schools remain unprotected against marketing rhetoric from edtech companies, and even big tech companies, who promise significant benefits for schools without supplying evidence that their product 'works' in the claimed ways. They may just exacerbate the worst existing aspects of schooling.

Schools remain unprotected against a lot of things, including legislation that makes it a crime to talk about gender, critical race theory, or evolution. I'm currently more concerned about school texts that get basic facts about biology wrong than I am about schools possibly buying the wrong technology. If there is anything legislators fear, it is that AI might actually get rid of the worst aspects of teaching, said worst aspects having prospered and proliferated under the existing model.

 

Argument 4

Contextlessness. AI applications promoted to schools are routinely considered as if context will not affect their uptake or use.

Again, it is not clear to me that any number of people actually say or believe this.  

 

Like all technologies, social, political and institutional contexts will affect how AI is used (or not) in schools. Different policy contexts will shape AI's use in education systems, often reflecting particular political priorities. How AI is then used in schools, or not, will also be context specific, reflecting institutional factors as mundane as budgetary availability, leadership vision, parental anxiety, and teacher capacity, as well as how schools interpret and enact external policy guidance and demands. AI in schools will not be context-free, but shaped by a variety of national and local factors.

This is not an argument against AI. This is an argument against one-size-fits-all technologies - things like standardized curricula, standard textbooks, standardized exams, and the like. This is an argument against government policies that are undemocratic, inflexible, discriminatory and elitist. It is an argument against the factors that create inequalities in access to education, and for policies that promote fact- and reality-based curricula.

I'm not promising the moon here, but the preponderance of the evidence suggests to me that educational technology in general and AI in particular offer educators and administrators much greater capacity to respond to contextual and even individual factors, while smoothing out the inequalities that so often rob children of a decent chance at an education even before they've left their own community.

 

Argument 5

Guru authority. AI discourse centres AI 'gurus' as experts of education, who emphasize narrow understandings of learning and education. Big names use platforms like TED talks to speculate that AI will boost students' scores on achievement tests through individualized forms of automated instruction. Such claims often neglect critical questions about purposes, values and pedagogical practices of education, or the sociocultural factors that shape achievement in schools, emphasizing instead how engineering expertise can optimize schools for better measurable outcomes. 

This argument, in brief, is that "AI is promoted by gurus, therefore, AI is wrong." Obviously this is a bad argument, easily displayed by the fact that everything in society from fad diets to economic theory is promoted by 'gurus'.

The fact of it is that this argument gets the causal flow reversed. It's not correct to say "big names use platforms like TED talks to promote X." Rather - and an examination of Chris Anderson's TED will prove this - the case is that "platforms like TED promote X by promoting sympathetic voices to guru status."

 

Argument 6

Operational opacity. AI systems are 'black boxes', often unexplainable either for technical or proprietary reasons, uninterpretable to either school staff or students, and hard to challenge or contest when they go wrong. 

This is a much more interesting argument because it addresses the issue of 'explainability' of decisions or recommendations made by deep learning systems. The difficulty is that instead of taking into account a few dozen or even a few hundred factors, as explainable systems do, AI systems take into account tens of thousands of variables. Which were the key variables? Were there any key variables?

This is the case with any complex system. What caused the rain to fall on Smith's farm but not Jones's? At a certain point, so many factors are involved that no actual answer is possible. All that can be spoken of is the overall tendency of a storm system, and how random factors are at play at the storm front. There is a large literature on AI explainability, with a variety of approaches being considered (these in turn are informed by a deep philosophical literature on counterfactuals and possible-world analysis).
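To illustrate why 'the key variable' can be a question without an answer, here is a toy example of my own (not from the post, and deliberately simplified to a linear model; real deep learning systems are nonlinear, which makes attribution harder still). The prediction is a sum of thousands of small contributions, and even the single largest one accounts for almost none of the result.

```python
# Illustrative sketch only: a toy linear "model" with 10,000 inputs, each
# contributing a small amount to the output. The numbers are random, standing
# in for learned weights and one student's features.

import random

random.seed(42)

N = 10_000
weights = [random.gauss(0.0, 1.0) for _ in range(N)]  # stand-in for learned parameters
inputs = [random.gauss(0.0, 1.0) for _ in range(N)]   # stand-in for one input record

contributions = [w * x for w, x in zip(weights, inputs)]
prediction = sum(contributions)

largest = max(contributions, key=abs)
share = abs(largest) / sum(abs(c) for c in contributions)

print(f"prediction: {prediction:.2f}")
print(f"largest single contribution: {largest:.2f} ({share:.2%} of the attribution mass)")
# Even the most influential input accounts for well under one percent of the
# total - there is no single "key variable" to report as the explanation.
```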

At a certain point, a demand for an explanation sometimes feels like people would rather have a wrong explanation than no explanation. But surely this isn't reasonable.

This bureaucratic opacity will limit schools' and students' ability to hold accountable any actors that insert AI into their administrative or pedagogic processes. If AI provides false information based on a large language model produced by a big tech company, and this results in student misunderstanding with high-stakes implications, who is accountable, and how can redress for mistakes or errors be possible?

The opacity of AI is not bureaucratic, it is structural. And it points to a fundamental feature of explanations in general - they are answers to 'why' questions, and specifically, all explanations are of the form "why this instead of that?" As suggested above, the relevance of any given why-question depends very much on what you count as a benefit, which is why the reference to bureaucratic opacity cited above begins, "we have to go all in on what we really believe education should be about." Do we? Really?

Meanwhile, the question of accountability, while important, is a separate issue. The publication of any AI system is followed by vigorous community testing that soon reveals its flaws. The question of accountability depends on what was known, what could have been known, what a reasonable person would have done, and what could have been done otherwise. Accountability, in other words, is a legal issue, not a technical issue.

In general, I think everyone in the field agrees: if the AI cannot be relied upon to produce a reliable result in a case where harm would be caused by an unreliable result, don't use it. Just as: if a car cannot be counted on to stop when you need it to stop, don't use the car.

 

Argument 7

Curriculum misinfo. Generative AI can make up facts, garble information, fail to cite sources or discriminate between authoritative and bad sources, and amplify racial and gender stereotypes. While some edtech companies are seeking to create applications based only on existing educational materials, others warn users to double check responses and sources. The risk is that widespread use of AI will pollute the informational environment of the school, and proffer 'alternative facts' to those contained in official curriculum material and teaching content.

It should be noted at the outset that there is no shortage of garbled information and 'alternative facts' (a phrase actually coined by a high-ranking government official to explain her dissembling) in existing traditional and online content. This points to the danger of training AI on traditional media sources. However, when AI is trained on specific sources the way, say, PDF.ai does, then the reliability is much greater.

Is it perfect? No. But neither is an information system staffed completely by humans. Ultimately the issue will come down to which system does it better. And when it's shown that AI produces better results than human librarians and Google searchers (the way, say, it already performs more consistent and fair grading), the 'curriculum misinfo' shoe will be on the other foot.

 

Argument 8

Knowledge gatekeeping. AI systems are gatekeepers of knowledge that could become powerful determinants of which knowledge students are permitted or prohibited from encountering.

Obviously the fact of gatekeeping is not what is at issue here, as gatekeeping is in wide use society-wide, from content restrictions over what may be shown to minors to laws regulating dangerous and offensive material generally, not to mention subscription-based gatekeeping. This is important to keep in mind.

 

This can happen in two ways: personalized learning systems prescribing (or proscribing) content based on calculations of its appropriateness in terms of students' measurable progress and 'mastery'; or students accessing AI-generated search engine results during inquiry-based lessons, where the model combines sources to produce content that appears to match a student's query. 

There are many more forms of knowledge gatekeeping than the two listed here, even in the domain of education. Every curriculum decision is in one way or another a form of content gatekeeping (though educators, I would imagine, prefer to think of it as the opening of one door rather than the closing of another).

In these ways, commercial tech systems can substitute for social and political institutions in determining which knowledge to hand down to the next generation.

Is the issue here that the system is commercial or that it is non-human? Because these are very different issues.

For myself, I have no problem with depending on non-human systems making content decisions for me, something I experience every time I put my music player on shuffle. But I do have problems with commercial enterprises making these decisions, because they are always working in their own interest, rather than mine.

It's a bit like social networking: it's not that networking online is inherently bad, but when it's combined with commercial incentives you get cesspools like Facebook or Twitter, and this is bad. Same for AI.

 

Argument 9

Irresponsible development. The development of AI in education does not routinely follow 'responsible AI' frameworks. Many AIED researchers have remained complacent about the impacts of the technologies they are developing, emphasizing engineering problems rather than socially, ethically and politically 'responsible' issues.

We've had discussions internally about the recent use of the term 'responsible AI' in place of the term 'ethical AI'. The difference, if I had to characterize it quickly, is that 'responsible AI' promises not to cause harm, while 'ethical AI' includes in addition a commitment to do good.

I don't know what the definition of 'many AIED researchers' is here, or whether we have any actual statistics showing how many researchers are "complacent" about the impact of the technologies they are developing, but if I had to consider the evidence, I would say, based on the fairly constant rumble of labour unrest in the field, that if there is complacency it exists more in the board room and less on the shop floor. As usual.

I think, though, that the argument that 'people should not be complacent about the impacts of what they do' is a rule that can, and should, be applied broadly.

 

Argument 10

Privacy and protection problems. Adding AI to education enhances the risk of privacy violations in several ways. 

I grew up in a small town, where everyone knew your business, so my experience of privacy is probably a bit different from most people raised in an impersonal urban environment where people live in cubicles separated from each other. I think different cultures have different expectations of privacy and protection, and that this is one area that would benefit from a less culturally-specific perspective.

Various analytics systems used in education depend on the continuous collection and monitoring of student data, rendering them as subject of ongoing surveillance and profiling. AI inputs such as student data can risk privacy as data are transported and processed in unknown locations. Data breaches, ransomware and hacks of school systems are also on the rise, raising the risk that as AI systems require increased data collection, student privacy will become even more vulnerable.

Pretty much everything described in the previous paragraph is not AI. Sure, AI can use privacy-breaching technology as input. So can the CIA. So can your father-in-law. The movie Porky's (1981, and for many years Canada's highest grossing box office hit) is predicated on privacy-breaching technology.

Yes, AI benefits from the violation of what many would consider reasonable protection of privacy. So does the insurance industry, which most people are not demanding be shut down. So does Revenue Canada, which (in theory at least) has a mandate to ensure people do not hide their earnings in offshore tax havens. 

Privacy and security are important social issues that require a lot of discussion. AI is just one factor in that discussion. Shutting down AI would resolve nothing.

 

Argument 11

Mental diminishment. Reliance on AI for producing tailored content could lead to a diminishment of students' cognitive processes, problem solving abilities and critical thinking. It could also lead to a further devaluation of the intrinsic value of studying and learning, as AI amplifies instrumentalist processes and extrinsic outcomes such as completing assignments, gaining grades and obtaining credits in the most efficient ways possible—including through adopting automation.

First, this argument is based on a 'could', and has yet to be shown to be the case. Any of a wide range of capacities could be diminished. Additionally, any of a wide range of capacities could be augmented.

Second, it is arguable that the loss of some critical capabilities - problem solving and critical thinking - has already been caused by traditional media such as television, and that concerns about AI doing the same are too little, too late. Similarly, the amplification of instrumentalist processes has already been caused by the existing system of grading and promotion, and concerns about AI once again come well after the fact.

Third, the loss of many capabilities doesn't matter. Most people are not able to build a fire from scratch, despite the absolutely critical role fire plays in modern technology. We have matches and lighters, and no real fear that these will disappear. Similarly, very few people can manage the care and feeding of a horse, despite how important transportation is today. It may be that we will no longer need problem-solving and critical thinking in the future, not if we have machines that do this.

 

Argument 12

Commercialization infrastructuralization. Introducing AI into schools signifies the proliferation of edtech and big tech industry applications into existing infrastructures of public education.

Yes. I remember the same being said when the internet was introduced to schools, and the same being said when schools were connected to the electrical grid. To the extent that these technologies are commercial services, and reflect commercial priorities, this is a problem.

Schools now work with a patchwork of edtech platforms, often interoperable with administrative and pedagogic infrastructures like learning management and student information systems. Many of these platforms now feature AI, in the forms of both student data processing and generative AI applications, and are powered by the underlying facilities provided by big tech operators like AWS, Microsoft, Google and OpenAI. By becoming infrastructural to schools, private tech operators can penetrate more deeply into the everyday routines and practices of public education systems.

The problem - once again, and, it feels like, for the billionth time - is that the system depends on commercial providers, not that it uses a specific type of technology.

In my newsletter today I linked to a system called Oxide that would allow schools (or even individuals) to buy cloud hardware outright and manage it themselves using open source technology. There are, additionally, open source AI systems (and a good robust discussion in the field about what we even mean by open source AI). This is the issue at stake, not the issue of whether schools are using AI.

Honestly, reading about the opposition to AI feels like hating a particular type of mathematics because some corporations might put advertisements in calculators.

 

Argument 13

Value generation. AI aimed at schools is treated by the industry and its investors as a highly valuable market opportunity following the post-Covid slump in technology value. 

No disputing that. Billions have been invested and it already looks like the investors will make a good return. I pay $20 a month for chatGPT 4 because it solves coding problems for me. Value generation doesn't get more basic than that.

The value of AI derives from schools paying for licenses and subscriptions to access AI applications embedded in edtech products (often at a high rate to defray the high costs of AI computing), and the re-use of the data collected from its use for further product refinement or new product development by companies.

This is not a complete sentence, and this reflects some of the confusion. Schools do pay for products, and some of these products have AI embedded in them (everything from cars to security systems to exam proctoring). This is a part - but a small part - of the revenue AI companies have been able to earn over the last few years.

School data is (sometimes illegally) collected and used, first, to improve AI systems through training, and second, to support advertising and marketing functions. I think the second use is a lot more problematic than the first. With the first (as research ethics boards around the world will attest) there's no real issue provided there is transparency and consent. Using student data to manipulate students (among other people) into buying products is problematic.

These are called economic rent and data rent, with schools paying both through their use of AI. As such, AI in schools signifies the enhanced extraction of value from schools.

Yes, schools pay for their use of AI. Part of this may be in the exchange of data for services. That's the same deal they made when they started to use AI. That's the same deal they made when they purchased wall maps (sponsored by Neilson chocolate bars) to display on school walls. Mmmmm. Chocolate.

If you think the price of AI is too high, or inappropriate, you look for a different supplier. AI is not linked intrinsically to any particular funding model.

 

Argument 14

Business fragility. Though AI is promoted as a transformative force for the long term, the business models that support it may be much more fragile than they appear. 

Oh, count on it. 

 

AI companies spend more money to develop and run their models than they make back, even with premium subscriptions, API plug-ins for third parties and enterprise licenses. While investors view AI favourably and are injecting capital into its accelerated development across various sectors, enterprise customers and consumers appear to be losing interest, with long term implications for the viability of many AI applications.

To this point, we don't care. Sometimes investors win, sometimes they lose. 

The risk here is that schools could buy in to AI systems that prove to be highly volatile, technically speaking, and also vulnerable to collapse if the model provider's business value crashes.

This is a risk that accompanies any purchase made by anybody. I bought a Kona bicycle last year, then the company was sold, and they didn't produce any new lines this year. Now I'm wondering whether parts will be available in the future. If the school purchases electric school buses, and we end up standardizing on hydrogen, that's a problem. Old school Betamax machines littered closets for years. We thought WiMax would be a thing, and then it wasn't. 

The question here is whether AI - speaking broadly and generally as a product category - is any more of a risk than any other technology. At this point, some 17 months after the launch of chatGPT, there is probably more risk. That's not a reason to eschew AI entirely, it's an argument to minimize risk (by the same token, schools should not lock in to 20 years of paper supplies).

 

Argument 15

Individualization. AI applications aimed at schools often treat learning as a narrow individual cognitive process that can be modelled by computers.

I'm not sure exactly what the argument here is because the paper is paywalled. But it appears that the suggestion is that human learning is something fundamentally different that cannot be modeled by computers. This is possibly the case - it is, after all, an empirical question - but it is beyond dispute that some aspects of human learning can be modeled by computer.

As is well known, my own argument is that human learning and neural network learning are fundamentally similar.

While much research on AI in education has focused on its use to support collaboration, the dominant industry vision is of personalized and individualized education—a process experienced by an individual interacting with a computer that responds to their data and/or their textual prompts and queries via an interface. 

Quite so. Quite a bit could be said about this. My own work has involved two major distinctions: between personal and personalized learning, and between collaboration and cooperation.  

In a nutshell, while the bulk of argumentation in education traces the opposition between personalized and collaborative learning, which are at odds with each other, there's a genuine alternative: personal and cooperative learning, which can coexist.

The former - personalized and collaborative learning - are favoured by technology producers, because they both fit the model of one-system many-users, which is cost efficient at mass production. AI is touted as a mechanism that can support both - though as centralized systems, these depend on centralized AI.

The latter - which have not enhanced my guru status - are not favoured by technology producers, because they fit the model of many-users many-systems. You can't grab economies of scale, you can't centralize production, you can't own the consumer (or their data). It depends on decentralized AI.

See, the problem isn't whether learning can be modelled by computers. The problem is in thinking of learning as a narrow individual cognitive process. 

In other contexts, students have shown their dissatisfaction with the model of automated individualized instruction by protesting their schools and private technology backers.

 As they should.

 

Argument 16

Replacing labour. For most educators the risk of technological unemployment by AI remains low; precariously employed educators may, however, risk being replaced by cost-saving AI. In a context where many educational institutions are seeking cost savings and efficiencies, AI is likely to be an attractive proposition in strategies to reduce or eliminate the cost of teaching labour.

Higher education in North America (I can't speak so much for other domains) is a blatantly unfair labour environment where a substantial part of the labour is performed by underpaid graduate or post-graduate students. The organization of labour in academia is long overdue for reform. So I'm not even remotely concerned about the disruption of academic labour by AI.

Having said that, the people who are most interested in cost savings and efficiencies are students (and even more: potential students who cannot afford to mortgage their future for a chance at a future). If we can produce something like the same result for less, the overall benefit to society would be substantial. So to the extent that this risk exists, it means AI is worth considering.

After all, what counts as a 'benefit' depends very much on your point of view.

 

Argument 17

Standardized labour. If teachers aren't replaced by automation then their labour will be required to work with AI to ensure its operation. The issue here is that AI and the platforms it is plugged in to will make new demands on teachers' pedagogic professionalism, shaping their practices to ensure the AI operates as intended. 

This argument is a bit like saying that with the arrival of moving pictures, actors will be replaced by projectionists. And those parts of academic labour that can be standardized will indeed be mechanized. AI systems are not production line systems. Sure, there may be some dull jobs (there are always dull jobs). But the labour isn't 'replaced'. It moves.

Teachers' work is already shaped by various forms of task automation and automated decision-making via edtech and school management platforms, in tandem with political demands of measurable performance improvement and accountability. The result of adding further AI to such systems may be increased standardization and intensification of teachers' work as they are expected to perform alongside AI to boost performance towards measurable targets.

The drive toward AI isn't the same as the drive toward standardization. Standardization is an example of a wrong model of pedagogy. But choosing a wrong model of pedagogy - something that has been underway for decades - is not entailed by choosing AI.

 

Argument 18

Automated administrative progressivism. AI reproduces the historical emphasis on efficiency and measurable results/outcomes, so-called administrative progressivism, that has characterized school systems for decades. New forms of automated administrative progressivism will amplify bureaucracy, reduce transparency, and increase the opacity of decision-making in schools by delegating analysis, reporting and decisions to AI.

You can't complain that there's an emphasis on measurable results and outcomes and then, in the next sentence, complain that it reduces transparency. If you don't want to be measured (or at the very least, observed) then you don't want to be accountable. You have to pick one (or some socially acceptable combination of the two).

What AI accomplishes in fact is an end to our reliance on a very small number of easily quantifiable measurements and outcomes, and a capacity to evaluate outcomes and success according to a broad range of both qualitative and quantitative indicators, and to respond to different criteria for evaluation, and different descriptions of benefit and success, using the same data set.

 

Argument 19

Outsourcing responsibility. The introduction of AI into pedagogic or instructional routines represents the offloading of responsible human judgment, framed by educational values and purposes, to calculations performed by computers. 

This appears essentially to be a restatement of the argument from accountability, discussed above.

It's a bit like arguing that responsibility for traffic accidents is offloaded to machines because we're using cars instead of running really fast.

Teachers' pedagogic autonomy and responsibility is therefore compromised by AI, as important decisions about how to teach, what content to teach, and how to adapt to students' various needs are outsourced to efficient technologies that, it is claimed, can take on the roles of planning lessons, preparing materials and marking on behalf of teachers.

Let's accept that this is true, though it presumes that the existing model of one teacher-many students remains intact through the foreseeable future.

It is arguable that we want teachers' pedagogic autonomy and responsibility to be compromised, in some cases, by AI (just as we want them to be compromised by laws governing discipline and punishment, laws governing hate speech, laws governing child abuse, and more). 

Arguing against AI in all cases is a bit like arguing against calculators because students need no longer depend solely on the teacher's word that 645 + 644 = 1389. It's a bit like arguing against the use of the atlas so students no longer depend on the teachers' assertion that Lake Huron is larger than Lake Superior (which actually happened in my own childhood).

AI is a part - only a part, mind you - of a much more integrated and networked learning environment, and that in the main is a good thing.

 

Argument 20

 

Bias and discrimination. In educational data and administrative systems, past data used to make predictions and interventions about present students can amplify historical forms of bias and discrimination

Quite right. 

Problems of bias and discrimination in AI in general could lead to life-changing consequences in a sector like education. Moreover, racial and gender stereotypes are a widespread problem in generative AI applications; some generative AI applications produced by right wing groups can also generate overtly racist content and disinformation narratives, raising the risk of young people accessing political propaganda.

Again, no disagreement.

Bias and discrimination are already widespread problems in society, and the understanding of many AI practitioners is that our systems should be designed to mitigate them.

I haven't heard any serious AI researcher argue that we should ignore the potential for bias and discrimination, though there's no doubt there's an element in society that would prefer bias and discrimination to be amplified, and that makes fun of efforts by AI developers to mitigate their effects.

But blaming AI feels a bit like blaming the megaphone for the Nuremberg rallies. Removing the megaphone does not remove the problem.

 

Argument 21

 

Environmental impact. AI, and particularly generative AI, is highly energy-intensive and poses a threat to environmental sustainability. Visions of millions of students worldwide using AI regularly to support their studies, while schools deploy AI for pedagogic and administrative purposes, is likely to exact a heavy environmental toll. Given today's students will have to live with the consequences of ongoing environmental degradation, with many highly conscious of the dangers of climate change, education systems may wish to reduce rather than increase their use of energy-intensive educational technologies. Rather than rewiring edtech with AI applications, the emphasis should be on 'rewilding edtech' for more sustainable edtech practices.

I've dealt with this argument in other fora, and in brief, the problem here is not AI, it is our use of fossil fuels. If I run an AI system here in Ontario, where 95% of our energy is generated from non-fossil fuel sources, my environmental impact is minimal.

Meanwhile, there is a wide range of AI applications being used (including in my furnace) to minimize the environmental impact of all our other human activities.

 

Conclusion

AI isn't perfect, and pretty much nobody in the field thinks it's perfect. People are well aware of the risks of development and implementation errors.

But as I think I've demonstrated here, most of the arguments against AI offered in Ben Williamson's post have nothing to do with AI. They reflect fears about commercialism, bad pedagogical models, and bias and prejudice in society.

Attaching AI to all the things you don't like in society and arguing against AI on that basis does no service to society; it privileges a distorted view of AI, and minimizes our own role - and the role of government and corporations - in the other problems we face.

It's probably too much to ask that he cease and desist, but I think that a more nuanced - and may I say, informed - view of AI is warranted.

Mentions

- AI in education is a public problem | code acts in education, Feb 23, 2024
- [2301.01602] Unpacking the "Black Box" of AI in Education, Feb 23, 2024
- Hattie effect size list - 256 Influences Related To Achievement, Feb 23, 2024
- Sex and gender in biology textbooks don't mesh with science - Futurity, Feb 23, 2024
- China turns to AI tablets for students after tutoring crackdown - Rest of World, Feb 23, 2024
- How AI Could Save (Not Destroy) Education | Sal Khan | TED - YouTube, Feb 23, 2024
- The Idea TED Didn't Consider Worth Spreading: The Rich Aren't Really Job Creators | Open Culture, Feb 23, 2024
- Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis - The Verge, Feb 23, 2024
- Porky's - Wikipedia, Feb 23, 2024
- The real risk of generative AI is a crisis of knowledge | Wonkhe, Feb 23, 2024
- AI x edtech: What is Emerge looking to invest in? - YouTube, Feb 23, 2024
- Degenerative AI in education | code acts in education, Feb 23, 2024
- Subprime Intelligence, Feb 23, 2024
- The power of edtech investors in education | code acts in education, Feb 23, 2024
- AI hype is fading, according to earnings calls, Feb 23, 2024
- Gab's Racist AI Chatbots Have Been Instructed to Deny the Holocaust | WIRED, Feb 23, 2024
- Nuremberg rallies - Wikipedia, Feb 23, 2024
- 21 Answers, Feb 23, 2024



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 23, 2024 9:08 p.m.

Creative Commons License.
