Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

ANZREG 2024

I certainly agree with Scott Leslie's recommendation that this keynote address is worth a read. I gave it a nice slow thorough read while watching the baseball game after making a cycling video. Let me linger on three points:

  • "for Yunkaporta everything is about relationships and flows..." This is presented as traditional knowledge, but it's basically core knowledge for me too.
  • "classification, cataloguing, and ontological mapping... flattens reality into a list of predetermined categories and definitions."
  • "spam... exploit(s) existing aggregations of human attention... So every commercial website is mostly spam."

My response: Knowledge is about interactions and flows, but language distorts knowledge by flattening distinctions and creating focal points that attract power and influence. Artificial intelligence (as we know it today) is based on language - these systems are literally 'large language models'. That's why it appears to reify colonial inequalities, when in fact, if designed correctly, it could do the opposite, by transcending the limitations of language.

Today: 121 Total: 239 Hugh Rundle, 2024/07/03 [Direct Link]
Announcing the Ladybird Browser Initiative

This is how Firefox started, back before Firefox depended on Google for revenue and before Mozilla was buying digital advertising companies. Sure, maybe it'll be nothing. But maybe, it won't. "Unlike traditional business models that rely on monetizing the user, Ladybird is funded entirely by sponsorships and donations from companies and individuals who care about the open web."

Today: 83 Total: 212 Chris Wanstrath, Ladybird, 2024/07/03 [Direct Link]
ChatGPT Now Has PhD-Level Intelligence, and the Poor Personal Choices to Prove It

It's funny because it's true. Via Mignon Fogarty. "Its predecessors already produce hundreds or even thousands of words almost instantaneously. Now GPT-5 brings PhD writing skills to the table, meaning it can generate text at a rate of about ten words per day."

Today: 86 Total: 249 Katie Burgess, McSweeney's Internet Tendency, 2024/07/03 [Direct Link]
Modeling Minds (Human and Artificial)

Benjamin Riley posted this article a couple of months ago and referenced it in a discussion today on the nature of intelligence. The focus here is on higher order skills and the role they play in intelligence (indeed, what we describe as 'intelligence' often refers to these skills directly). The main argument here is that while large language models (LLMs) may surpass humans in specific domains, they cannot transfer that learning into other domains. In fact, there is a domain in AI called transfer learning, which I don't address in my reply to Riley, but I could.

Today: 33 Total: 284 Benjamin Riley, Cognitive Resonance, 2024/07/02 [Direct Link]
Exploring Preference Signals for AI Training - Creative Commons

According to Creative Commons, "through engagement with a wide variety of stakeholders, we heard frustrations with the 'all or nothing' choices they seemed to face with copyright... way of making requests about some uses, not enforceable through the licenses, but an indication of the creators' wishes." In particular, they want to be able to limit the use of their work to train AI. I commented in a meeting today that it was telling that this, of all possible preferences, is the one that surfaced as most significant. I would rank 'use of content to create weapons' or 'use of content to undermine social good' as more significant preferences. I also commented that, without access to open content, AI will be created and trained exclusively by commercial entities with licensing agreements, which will mean there is no possibility of an open artificial intelligence.

Today: 22 Total: 284 Catherine Stihler, Creative Commons, 2024/07/02 [Direct Link]
2025 Open Access Policy

The Bill & Melinda Gates Foundation's Open Access Policy is being updated for next year. It applies to all Gates-funded manuscripts starting in January. It mandates preprints "as a preprint in a preprint server recognized by the foundation or preapproved preprint server." It requires a CC BY 4.0 license on publications, though grantees retain copyright. The foundation will not pay article processing charges (APC) but doesn't rule out grantees paying the costs. And it asserts "underlying data supporting the Funded Manuscripts shall be made accessible immediately and as open as possible upon availability, subject to any applicable ethical, legal, or regulatory requirements or restrictions." The policy is broadly endorsed and (I am told) offered to governments as an example to emulate for their own open access policies. Image: Open Access Network.

Today: 18 Total: 290 Gates Open Access Policy, 2024/07/02 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Jul 03, 2024 06:37 a.m.

Creative Commons License.