
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

The Google Willow thing
Scott Aaronson, Shtetl-Optimized, 2024/12/11



The news here is Google's fault-tolerance milestone in its Willow quantum computing chip, with the publication stamp (paywalled in Nature) now placed on the arXiv preprint from August. "Scientifically," writes Scott Aaronson, "the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter." But the most interesting bit to me is this: "it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results." Hence, "all validation of Google's new supremacy experiment is indirect."
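For readers wondering what "stays alive for longer" means as the code grows: here is a minimal back-of-the-envelope sketch in Python of the exponential error suppression being described. The numbers are illustrative assumptions, not Google's reported figures; the point is only that the logical error per cycle is expected to shrink roughly as 1/Lambda^((d+1)/2) as the code distance d increases, so the bigger code keeps the logical qubit alive longer.

# Illustrative sketch (assumed numbers, not Google's measured values):
# logical error per cycle in a distance-d surface code falls roughly as
# eps(d) = A / Lambda ** ((d + 1) / 2)

A = 3e-3        # assumed base error rate
LAMBDA = 2.0    # assumed suppression factor per step of two in distance

def logical_error_per_cycle(d, a=A, lam=LAMBDA):
    """Rough logical error per cycle for a distance-d surface code."""
    return a / lam ** ((d + 1) / 2)

for d in (3, 5, 7):
    eps = logical_error_per_cycle(d)
    print(f"distance {d}: ~{eps:.1e} per cycle, mean lifetime ~{1/eps:,.0f} cycles")

Under these assumed values, each step up in distance roughly doubles the expected lifetime, which is the qualitative pattern Aaronson highlights.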

Web: [Direct Link] [This Post][Share]


The Meaning of 2025 | HESA
Alex Usher, HESA, 2024/12/11



There are many places where I disagree with Alex Usher, but I think we're on the same page on this one. First, "we've spent 80 years building a system of higher education that is simply more expensive to run than the public is willing to support." Second, "Think about the consequences of reducing those cross-subsidies within universities at the exact moment when advances in technology are opening up huge potential advances in energy, materials science, and health." The cost of not supporting the system is huge. Usher argues that government is not coming to save the system. Probably true. But I counter with the obvious: industry isn't going to save the system either. And so we agree that the higher education sector "is going to have to work out solutions on its own." I've spent a lifetime working on digital technologies for learning to try to help make that happen. But like a light bulb, the sector is going to have to want to change.

Web: [Direct Link] [This Post][Share]


These technologies are complex…. – Taccle AI
Graham Attwell, Taccle AI, 2024/12/11



I'm sort of over the whole 'education versus AI' debate that Graham Attwell describes here. There are only so many times people like Ben Williamson can make the same point, and AI companies can roll out new models to counter the sceptics only so quickly. I'm especially tired of the debate being framed as 'education versus giant corporate capitalism', partially because education has been part of giant corporate capitalism for as long as I can remember, and partially because AI was developed, in the first place, in educational institutions. None of the us-versus-them debates can be properly applied to either AI or educational institutions. And that's why I'm over it.

Web: [Direct Link] [This Post][Share]


Century-Scale Storage
Maxwell Neely-Cohen, Harvard Law School, 2024/12/11



This is a nice (though long) article by Maxwell Neely-Cohen asking the basic question, "If you had to store something for 100 years, how would you do it?" He runs through all the likely answers, including dispersal and decentralized storage, before reaching the inevitable conclusion that "the success of century-scale storage comes down to the same thing that storage and preservation of any duration does: maintenance." Neely-Cohen also warns that we might be entering a 'dark age' where most of what we produce is lost to the future. "On the internet, Alexandria burns daily." Via Molly White, who gives us a long thread of relevant quotes from the article.

Web: [Direct Link] [This Post][Share]


Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education
Katalin Wargo, Brier Anderson, EDUCAUSE Review, 2024/12/11



According to this article, "Responsible AI integration in higher education requires striking a balance between riding the wave of AI advancements and upholding ethical principles." I don't think the idea of a 'balance' is at all the right way to think of this. Ethics and AI aren't some sort of opposites you have to 'balance'. And 'balance' itself is not a viable ethical principle; I can think of many things I would not like to 'balance' with something else. This higher-level criticism also applies to many of the individual points. For example, the very first suggestion is to "create generative AI training materials to support faculty, staff, and students aimed toward combatting the digital divide." Clearly, an effort to 'balance' is being made here. But the balance here makes no sense; how exactly are these 'generative AI training materials' supposed to 'combat the digital divide'?

Web: [Direct Link] [This Post][Share]


What Happens When SMEs are Building All the Courses?
Mark Oehlert, 2024/12/11



OK, let's leave aside that the prediction (that everyone will be a full stack engineer in 2025) is absurd. Real developers aren't using Vercel or Replit, and there are still serious limitations to what Claude or ChatGPT will code. What about Mark Oehlert's prediction in this short article? "What if the systems get smart enough so that we don't need anyone creating compliance training content anymore? I mean we will but it'll just be Brad in HR bringing up his AI assistant to create the new course on ethics, or sexual harassment, or information awareness." Take it one step further - why do we need Brad in HR to be doing anything? Why can't (potential) students just create their own courses? This will happen. Not by 2025. But soon.

Web: [Direct Link] [This Post][Share]


Beyond Pattern Matching: The Complex Journey Toward World Models, Critical AI Literacy, and Ethical Innovation
Carlo Iacono, Hybrid Horizons, 2024/12/11



This article takes some very different articles and weaves them into one coherent narrative. The fabric stretches a bit thin at points but the article overall succeeds. One is Yann LeCun's argument that after language models we will need 'world models' for reliable AI. "These systems would need to learn through direct interaction with the world, much like humans do." Quite right. Another is Tiera Tanksley's proposal for a critical race algorithmic literacy (CRAL), which leads Carlo Iacono to propose a framework for more ethical AI development combining the two. The article also mentions Steve Solomon on environmental concerns and Hartmut Neven on Google's 'Willow' quantum computing chip, all (I think) from the 2024 Hudson Forum. Image: Insta.

Web: [Direct Link] [This Post][Share]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.