Katie Davis asks, "When does technology support child development, and when does it not?" I'm going to say first that I think this is a pretty good answer: "I argue that digital experiences that are self-directed and community supported are best for children's healthy development," writes Davis. Now - is this the complex made concrete? Not exactly - though it does give people a tool to create their own concrete response to the question (and that's what you need, because no single concrete answer will ever resolve a complex question). You can read Davis's brand new blog here (I opted not to subscribe to the newsletter, which appears to function more as advertising for her book). Personally, I think that something like a blog should be used to help develop ideas, rather than market them afterward - because it's experiences and community support that create knowledge, not 672 footnotes covering 86 pages (in the field we just call that 'academic cover').
No doubt this one is going to be on slide presentations for years to come. "An academic paper entitled 'Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT' was published this month in an education journal, describing how artificial intelligence (AI) tools 'raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism'. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT." The Guardian article only links to a ResearchGate copy, but the paper can still be found in Innovations in Education and Teaching International. Though the journal itself was "tipped off", "the four academics who peer-reviewed it assumed it was written by these three scholars." And why wouldn't they? From where I sit, this is an experiment with human subjects (the four reviewers) conducted without prior consent, which seems pretty unethical to me. I feel for the reviewers.
Though it will no doubt be credited to some ivy-covered institution, the concept of AI literacy (or as it's called here, 'critical AI') has been bubbling up through the discourse for the last few years as an offshoot of digital literacy. This article takes as its point of departure Anna Ridler's 2018 installation Myriad (Tulips), which it calls "one example of an art practice that self-reflexively uses the tools and techniques of ML, also perfectly encapsulates, indexes, and indeed embodies a critical perspective on AI, one that both informs and is shaped by academic research on the same." The same could probably have been said about many of the exhibits at the Ars Electronica festival I attended in Linz in 2009. At any rate, we're already at the point where we can find an AI literacy literature review, though just from last year; "the ability to understand, use, monitor, and critically reflect on AI applications without necessarily being able to develop AI models themselves is commonly referred to as being 'AI literate'". You can see more at the McGill guide to AI Literacy. Or the World Economic Forum page on it. Or the Birmingham City report on AI literacy in primary education. Or this training course for young people from the Council of Europe.
Do read the whole article, the contents of which I can only hint at here. Alan Levine argues, "Art is the voice of a person and whenever AI art is anything more than aesthetically pleasing it's not because of what the AI did it's because of what a person did." So, like Alan, I am a photographer, and like a million other people, I took a photo of the Taj Mahal (which I consider the most beautiful building in the world). I could have simply purchased a photo, but mine is based on my experience of being there. This is an important point, because while it's true that 'all art is a remix', what Levine reminds us here is that the associations humans make are different from the ones AIs make. That's because human associations, and therefore human remixes, are based on individual experiences. Even if our algorithms are the same as the AIs' (and there are arguably similarities), our data is very different. And this, too, is what makes something aesthetically pleasing to us - not because of how it was created, or even because of who created it, but because of how it speaks to our experiences.
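To make that "same algorithm, different data" point concrete, here is a minimal sketch of my own (not anything from Levine's post): the same association-building code, fed two different sets of 'experiences' of the same subject, produces different associations, and hence different raw material for a remix. The toy corpora and the simple co-occurrence approach are illustrative assumptions only.

```python
from collections import Counter, defaultdict

def associations(corpus):
    """Build word associations from adjacent-word co-occurrence counts.
    The same 'algorithm' for everyone; the output depends entirely on the data."""
    co = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            co[a][b] += 1
            co[b][a] += 1
    return co

# Two 'experiences' of the same subject: identical code, different data.
tourist = ["the taj mahal at dawn", "crowds at the taj mahal gate"]
stock   = ["taj mahal stock photo", "buy taj mahal photo print"]

print(associations(tourist)["mahal"].most_common(2))  # [('taj', 2), ('at', 1)]
print(associations(stock)["mahal"].most_common(2))    # [('taj', 2), ('stock', 1)]
```

Same procedure, different inputs, different associations - Levine's point about experience, reduced to a cartoon.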
Now (at the risk of making this post too long) let me take this a step further. The greatest danger of AI is not that it will replace human authors, or anything like that, but rather that it will reshape human experiences. This happens in one of at least two ways: either it reshapes them in its own image, reducing human experience to the bland and the generic (think 1960s bowdlerized television); or it reshapes them at the hands of some unethical AI manipulator (think recommendation algorithms that take us deeper down the extremist rabbit hole). The human, and indeed ethical, response to AI is to experience the world as fully and completely as possible, and to offer back to AI and other humans the remixes that are based on that experience, in all their unpredictable and chaotic glory.
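The rabbit-hole mechanism is easy to caricature. Here is a toy loop of my own, not any real recommender: if slightly-more-extreme content reliably wins the click, and exposure shifts taste, preferences ratchet in one direction. The 0-1 'extremity' scale, the nudge size, and the taste-update rule are all assumptions made for illustration.

```python
def recommend(pref, pull=0.05):
    """A naive engagement-maximizing recommender: content slightly more
    extreme than the user's current taste tends to get more clicks,
    so it always nudges upward. (Toy model; extremity on a 0-1 scale.)"""
    return min(pref + pull, 1.0)

pref = 0.5                            # the user starts in the middle of the road
for step in range(1, 11):
    shown = recommend(pref)
    pref = 0.8 * pref + 0.2 * shown   # exposure shifts taste toward what was shown
    print(f"step {step:2d}: shown {shown:.2f}, preference {pref:.2f}")
```

Nothing in the loop ever pushes back toward the middle, which is the whole problem.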
The thesis here is that while there was an incredible 50x increase in the productivity of the average manual worker from 1870–1970, this productivity gain has leveled off in the 50 years since then, with dire consequences for our future economic prospects. While I find this article very conservative in its approach (and it reads a lot like much of what we see in the business press), its strength is that it at least tries to consider objections to that account - for example, the great decoupling that took off starting in the Reagan era (he puts it at 1972, which is inaccurate), or the argument against productivity (reminiscent of Kalle Lasn's "economic progress is killing the planet" argument), or the productivity backlash based on opposition to the rise of billionaires.
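For a sense of scale (my arithmetic, not the article's): a 50x gain over the century from 1870 to 1970 works out to roughly 4% compound growth per year, which is why the flattening since then looks so stark by comparison.

$$(1+r)^{100} = 50 \;\Rightarrow\; r = 50^{1/100} - 1 \approx 0.0399 \approx 4\%\ \text{per year}$$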
I think he misses one major consideration - that many of the productivity gains were illusory, created by over-exploiting resources and offloading environmental costs. We also now have massive non-manual sectors (specifically: cultural, health, education, information, and service) that didn't really exist in 1870. And, of course, we can't measure productivity by GDP. "What we really want is a new kind of productivity. We want a kind of productivity that is actually more productive, more inclusive, leaves us time for an uninterrupted personal life, and ultimately feels better — more purpose, more fulfillment, more aliveness, and less hurry." Pro tip: stop reading at 'What You Can Do Now', because what follows is a sales pitch for some online program.
This MOOC covers all applications of analytics in learning, surveys criticisms, describes ethical approaches, and examines the ethics of analytics with a view to recent ethical theory.
This paper presents an overview of connectivism, offering a connectivist account of learning and a detailed analysis of how learning occurs in networks.
Coronavirus / Covid19 quick reference kit, to take your class or conference online cheaply and in a hurry:
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken in three dozen countries on six continents.
The Future of Online Learning 2020, April 28, 2020.
Supporting Open Educational Resources, January 7, 2022.
The Agile Approach to Learning Design, Online Educa Berlin, December 7, 2015.
Your Instant Decentralized Learning Community, April 6, 2021.
The MOOC Ecosystem, Association of Medical Educators of Europe (AMEE) E-Learning Symposium, Glasgow, Scotland, September 6, 2015.