
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

AI ethics as a complex and multifaceted challenge: decoding educators' AI ethics alignment through the lens of activity theory
Jaber Kamali, Muhammet Furkan Alpat, Aras Bozkurt, International Journal of Educational Technology in Higher Education, 2024/12/17



The research here suggests that educators are not aware of the 'ethical rules' governing the use of AI, and might not follow them even if they were. They are, however, mindful of the consequences that might follow from violating them. The authors thus argue for more education about AI ethics, and for more attention in the drafting of these ethics to ensure they are relevant to the people intended to follow them. To me, all this points to the fact that we don't really have an established 'ethics' of AI yet (despite numerous expert groups proclaiming to have found them). I don't think you get a new technology and suddenly have an established body of ethics around it the next day. P.S. the 'lens' of Activity Theory brings nothing to this discussion, in my view, and I see its invocation here as nothing more than an offering of a sacrifice to the machine.

Web: [Direct Link] [This Post][Share]


Handbook of Children and Screens
Dimitri A. Christakis, Lauren Hale, SpringerLink, 2024/12/17


I obviously haven't had time to give this book a thorough reading, but I've been through the recommendations sections in most of the chapters and seen enough of the rest to know that it should be a big deal for educators. For one thing, the digital version (a 644-page PDF) is open access, which is fantastic. I recognize the names of many of the authors, so there's some credibility there. And it's comprehensive: each of the three major sections is divided into (unnumbered) subsections, covering everything from brain development to screens and physical health to parenting in a digital age to educational technology, gaming and gambling in youth. And much more. The recommendations aren't so much controversial as they are perhaps overly cautious, but they're well supported and well argued throughout.

Web: [Direct Link] [This Post][Share]


Hello, Social Web
Ryan Barrett, Anuj Ahooja, A New Social, 2024/12/17



This is an initiative launched by Bridgy's Ryan Barrett and Anuj Ahooja (Bridgy is a service that allows people to connect to one social network from another). They declare an intent to "liberate people's networks from their platforms, enabling The Last Network Effect and leveling the playing field across the open social web." Here's their press release. "For its first year, A New Social's initiatives will be focused on improving Bridgy Fed, user advocacy, and providing services to platforms, people, and social apps." So, it's Bridgy people focusing on Bridgy. More. Via Laurens Hof.


Web: [Direct Link] [This Post][Share]


Big Ideas in Tech for 2025
Andreessen Horowitz, 2024/12/17



Andreessen Horowitz takes a swing at predicting next year's "big ideas" but the outcome is, to my mind, a bit hit and miss, informed as much by what they hope will happen as by what will probably actually happen. Some hits: a resurgence of nuclear power (and especially small modular reactors, though they don't mention them); decentralization of defense technologies; robot data collection; real-time AI; AI-based analysis of qualitative data; proof of personhood; decentralized governance; product defensibility (sorry to say); and AI co-pilots. The rest are basically misses, with a16z dreaming of a world powered by libertarians and crypto-bros.

Web: [Direct Link] [This Post][Share]


Why large language models struggle with long contexts
Timothy B. Lee, Understanding AI, 2024/12/17



If you've done any amount of work with a large language model, you will have encountered this problem: transformer-based models like ChatGPT get less efficient as you give them more and more content to consider, because the cost of attention grows quadratically with the length of the context. In short, "transformers have a scaling problem." This article considers that problem. It describes "FlashAttention," which "calculates attention in a way that minimizes the number of these slow memory operations," and mentions "scaling attention across multiple GPUs." It also considers a Google hybrid between a transformer and a Recurrent Neural Network (RNN), as well as Mamba, another RNN-style project. But it remains an open issue, because "we want models that can handle billions of tokens" without forgetting them.
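
To see why, here is a minimal sketch (mine, not from the article) of naive scaled dot-product attention in Python with NumPy: every token attends to every other token, so the score matrix has n x n entries, and compute and memory grow quadratically with the context length. FlashAttention and the RNN-style alternatives discussed in the article are different ways of avoiding that full matrix.

import numpy as np

def naive_attention(Q, K, V):
    # Q, K, V have shape (n_tokens, d); returns an array of shape (n_tokens, d).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (n, n) score matrix: the quadratic part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((16, 64))
print(naive_attention(Q, K, V).shape)      # (16, 64)

# Doubling the context quadruples the number of attention scores to compute and store.
for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} tokens -> {n * n:>12,} attention scores")

As the article describes it, FlashAttention's trick is to compute the same result without ever materializing that full n x n matrix in slow GPU memory.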

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.