[Home] [Top] [Archives] [About] [Options]

OLWeekly

Code Droid Technical Report
Factory, 2024/07/01



Back when I was a program manager I called for a project to explore using AI to study how experts approach tasks in order to understand what skills they require. We called it automated competency development and recognition (ACDR). Here are a couple of papers from my colleagues based on the concept. Automated competency development has advanced in the years since. This product announcement takes the idea a step further, progressing from recognizing the competency and developing it in humans to developing it in artificial intelligences, which then go on to replicate the expert behaviour. Sure, at this stage, maybe it's all smoke and mirrors. But there's nothing inherent in the concept that suggests to me that it would be impossible, especially in narrow domains like software engineering.

Web: [Direct Link] [This Post][Share]


Practical ways to deal with AI in your courses
Elizabeth Wells, University Affairs, 2024/07/01



I don't think this is a great article, but the first section took me back to the days when search engines first became popular. One suggestion was for teachers to have 'Google jockeys' in their classes to look up and report on things that were discussed. Elizabeth Wells suggests in this article the idea of 'AI jockeys'. "Since these people are on their devices anyway, we might as well make them work!" The remaining five suggestions are variations on the themes of 'go live and go offline', taking us back to the days of writing tests on paper. "We used to do this all the time, so it's not as onerous as you think." For people who otherwise don't write anything, requiring paper-based tests is indeed onerous.

Web: [Direct Link] [This Post][Share]


This Is Your Brain. This Is Your Brain on Screens
Jill Barshay, Mind/Shift, 2024/07/01



This article explores studies that "show that students of all ages, from elementary school to college, tend to absorb more when they're reading on paper rather than screens." I think 'absorbing' is an odd way to describe reading. Researchers have been looking at things like blood flow, brain waves and electrical activity, which they then (fancifully) interpret. Here's a link to the meta-analysis. But ultimately, "None of this work settles the debate over reading on screens versus paper. All of them ignore the promise of interactive features, such as glossaries and games, which can swing the advantage to electronic texts."

Web: [Direct Link] [This Post][Share]


Stepping Down as Co-Chair of the National Information Standards Controlled Digital Lending Working Group
Jennie Rose Halperin, Library Futures, 2024/07/01



Publishers are seeing this as their moment to destroy public libraries. The evidence of this is all around, including most recently their efforts to derail the National Information Standards Organization (NISO)'s Interoperable Standards for Controlled Digital Lending (CDL) Working Group. "These (publisher) groups waited until the end of the process to declare that the entire concept of digital lending is unacceptable to them," writes Jennie Rose Halperin. Anyhow, "Whether or not big publishers approve, the standards are drafted and ready for use." This is the point at which legislators, not the courts, should be intervening to protect public libraries and, in passing, the democracies they protect. Via Dan Gillmor.

Web: [Direct Link] [This Post][Share]


Diagramming: history of the visualization of grammar in the 19th century
Victor Mair, Language Log, 2024/06/28



The real gem here is Public Domain Review, a blog that does what its title says it does. This post in Language Log from Victor Mair references a specific article from the Review, American Grammar: Diagraming Sentences in the 19th Century. It's a beauty. The article collects seven works on grammar from the 1800s and reproduces them as "crisply photographed archival works that you can flip through page by page to study at your leisure." And, appealing to my own arcane interests, it gives us a good look at how grammar was diagrammed over time. "It is interesting to observe how the explications and illustrations become increasingly clear and sophisticated through the years and decades." Do read Mair's article as he pulls out some of the best bits, and then (since it's the weekend) linger over these seven treasures of analysis "in an attempt to visualize the complex structure — of seemingly divine origins — at their mother tongue's core."

Web: [Direct Link] [This Post][Share]


The Ultimate Beginner's Guide to Docker
Josep Ferrer, MachineLearningMastery.com, 2024/06/28



Now I already know how to set up and run a Docker container. But a lot of people don't, and this post is mostly a good guide. But it's also an example of how easy it is to confuse your readers. Josep Ferrer writes, "I place the Dockerfile at the same level as my main script and the requirements.txt file." Sounds fine, but up to that point there has been no example of 'requirements.txt'. Going back, you can see where he mentions it: "Even though it is not necessary, it is a good practice to always have a requirements.txt with all the libraries that our script requires. In this case, we would only add the numpy library." So - what does requirements.txt look like? Do you just put in the name of the requirements? Are they separated with spaces, commas or line feeds? Just that one little bit is unclear and requires quite a bit of inference on the part of the reader. It could stop them cold, becoming what I call a 'blocker' in instructional texts. Some other things are also unclear (and the script being run actually fails). See my video where I follow the instructions.
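For the record, the answer is line feeds: requirements.txt is a plain text file listing one package per line, optionally with a version pin. Here is a minimal sketch of the kind of setup Ferrer describes - the layout matches his description, but the script name (main.py) and base image are my own illustration, not his:

    # requirements.txt - one library per line; version pins are optional
    numpy

    # Dockerfile - placed alongside the script (main.py) and requirements.txt
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY main.py .
    CMD ["python", "main.py"]

Build and run with "docker build -t my-script ." followed by "docker run my-script".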

Web: [Direct Link] [This Post][Share]


Authentication vs. authorization: What's the difference?
Matt Kosinski, IBM Blog, 2024/06/28



Here's the answer: "Authentication and authorization are related but distinct processes in an organization's identity and access management (IAM) system. Authentication verifies a user's identity. Authorization gives the user the right level of access to system resources." What follows is a fairly high-level discussion of the concepts, but if the distinction wasn't immediately clear, it would be well worth reading the article to get a firm understanding of the two.
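To make the distinction concrete, here is a minimal sketch of my own (not from the IBM article; the user, role and permission names are invented for illustration): authentication answers "who are you?", authorization answers "what are you allowed to do?"

    // Authentication: verify identity against a credential store.
    // (A real system would hash passwords and issue tokens; this is illustrative only.)
    function authenticate(username, password, userStore) {
      const user = userStore[username];
      if (!user || user.password !== password) return null;  // identity not verified
      return { username, role: user.role };                  // identity established
    }

    // Authorization: given a verified identity, decide what it may do.
    function authorize(user, action) {
      const permissions = { admin: ['read', 'write', 'delete'], viewer: ['read'] };
      return (permissions[user.role] || []).includes(action);
    }

    // Authentication first, then authorization.
    const users = { alice: { password: 's3cret', role: 'viewer' } };
    const alice = authenticate('alice', 's3cret', users);
    if (alice) {
      console.log(authorize(alice, 'read'));    // true - viewers may read
      console.log(authorize(alice, 'delete'));  // false - only admins may delete
    }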

Web: [Direct Link] [This Post][Share]


Digital Literacy OER
Katsusuke Shigeta, Hokkaido University, 2024/06/28



Katsusuke Shigeta reports, "we've teamed up with Adobe to create Open Educational Resources (OER) for teaching digital literacy in higher education." They're available in Japanese and English from either Digital Literacy OER (Hokkaido University) or Adobe Education Exchange. There's a related presentation from a couple of years ago. He also references Adobe Creative Campus, though (in my view) people may be less inclined to get involved with it following recent licensing changes that may allow Adobe to use your content to train AI. Via OE Global Connect.

Web: [Direct Link] [This Post][Share]


What is language for?
Jennifer Michalowski, McGovern Institute, 2024/06/28



This is a bit funny but pretty important. It's funny because the central claim is being used as an argument against the use of large language models (LLMs) for artificial general intelligence. Here's the claim: "Drawing on evidence from neurobiology, cognitive science, and corpus linguistics, researchers make the case that language is a tool for communication, not for thought." I think that's quite right. But while LLMs are limited in this way, neural network-based AI in general is not. And even more importantly, the claim refutes a theory known as the Physical Symbol System hypothesis, which implies that human thinking is a kind of symbol manipulation, and which forms the basis for a lot of the cognitive theory out there today (including, ironically, most critiques of LLMs). This article is a summary of the paper published in Nature (paywalled, but there's a copy posted here). See also this Hacker News discussion. Via Benjamin Riley, in a post that I think contains a number of errors; Riley is referenced in this discussion on the Learning Engineering discussion list.

Web: [Direct Link] [This Post][Share]


Learning analytics methods and tutorials
Mohammed Saqr, Sonsoles López-Pernas, 2024/06/28



I love love love this open online textbook edited by Mohammed Saqr & Sonsoles López-Pernas and featuring a large number of contributors. It's not one of those online 'books' that offers only a superficial look at the subject; read this book closely and you'll learn a lot about learning analytics. It's strong on theory - check out this chapter on data, for example, which builds in examples of datasets - and it's strong on practice - check out this chapter on R (including a code repository and installation instructions: install R and then RStudio Desktop). I recorded myself following the installation instructions (video). Via Sheila MacNeill.

Web: [Direct Link] [This Post][Share]


Developing verb conjugators for Indigenous languages - National Research Council Canada
Anna Kazantseva, et al., National Research Council Canada, 2024/06/27



From the 'what my colleagues are up to' department we have this impressive project developing learning support for complex Indigenous languages in the form of a verb conjugator. "Verb conjugations are one of the most difficult aspects of these languages to learn, yet very important as many sentences consist of a single long verb." The specific tool is called Gramble, developed by Patrick Littell and Darlene Stewart - here is the GitHub open source repository (in plain Javascript with no dependencies - kudos!) and here is a paper describing the software and its development. Here's more on the project.

Web: [Direct Link] [This Post][Share]


How I Produce OLDaily
Stephen Downes, YouTube, 2024/06/27



For those who are curious: 1 hour 37 minute video showing how I produce OLDaily. I show some of the tools I use to produce the daily newsletter and also explain my thinking as I select some articles and write about them.

Web: [Direct Link] [This Post][Share]


Stochastic parrots extended
Mark Liberman, Language Log, 2024/06/27



Mark Liberman links to and briefly discusses a paper from Philip Resnik called Large Language Models are Biased Because They Are Large Language Models, which suggests that large language models (LLMs) are unable to distinguish between statements about what a word means and statements about the thing a word refers to (that is, between facts about meanings and facts about the world). Because of this, Resnik argues, LLMs are inherently biased in a way that can't be fixed. My question is: is it true that "LLMs... have no way to distinguish among distributional patterns that arise from definitions or meaning" versus statistical generalizations? That's not so clear to me. Image: Florian Pestoni.

Web: [Direct Link] [This Post][Share]


What did NRENs ever do for us? The answer might surprise you - Jisc
Heidi Fraser-Krauss, JISC, 2024/06/27



This article briefly outlines what the UK's Janet Network does and provides an overview of national research and education networks (NREN), focusing in particular on Britain's. "An NREN is a key piece of national infrastructure: a secure, resilient, high-speed network infrastructure that connects universities, colleges and research institutions. Across the world, 140 countries have their own NREN dedicated to supporting their research and education communities." I like this idea of NRENs; a networked research infrastructure is essential today. But I'd like to see them more open, so that everyone - not just universities and research organizations - can have access to them. Via GÉANT.

Web: [Direct Link] [This Post][Share]


We asked people about using AI to make the news. They’re anxious and annoyed
Jennifer Orsi, Poynter, 2024/06/27



I commented today on how creating this newsletter is getting harder because of the proliferation of AI-generated content (you're just fooling yourself if you don't think it's already widely used). This article reports on some focus groups' responses to AI-generated news. "News consumers are clear they want disclosure from journalists about how they are using AI — but there is less consensus on what that disclosure should be." If it's AI-written, sure, I'd like to know (a label would be useful). But if AI was used in the research? When I was using Feedly's AI to filter RSS feeds (I have since stopped, because the selection was getting bland), should I have labeled every post? I don't mind the AI if it's producing content worth reading. But most of the content seems intended only for other AIs to read.

Web: [Direct Link] [This Post][Share]


ChatGPT shows hiring bias against people with disabilities
Stefan Milne-U. Washington, Futurity, 2024/06/26



"According to new research," says this article, "ChatGPT shows bias against resumes with credentials that imply a disability." For example, it noted that a candidate with depression had "additional focus on DEI and personal challenges," which "detract from the core technical and research-oriented aspects of the role." This is a problem, obviously. But in assessing issues of this type, two additional questions need to be asked: first, how does the AI performance compare with human performance? After all, it is very likely the AI is drawing on actual human discrimination when it learns how to assess applications. And second, how much easier is it to correct the AI behaviour as compared to the human behaviour? This article doesn't really consider the comparison with humans. But it does show the AI can be corrected. How about the human counterparts?

Web: [Direct Link] [This Post][Share]


Positive Artificial Intelligence in Education (P‐AIED): A Roadmap
Ig Ibert Bittencourt, et al., International Journal of Artificial Intelligence in Education, 2024/06/26



A lot of the discussion has focused on the use of AI to address learning outcomes. This paper, by contrast, looks at "the strengths and the positive aspects of the learning process to promote wellbeing" - in other words, AI-based learning technology that cares. "Despite focusing only on the learning system's inefficiencies and on the hegemony of solutions to tackle the learning gap, we also need to shed light on the strengths and the positive aspects of the learning process to promote wellbeing." Drawing on John Self's writing about the defining characteristics of Intelligent Tutoring Systems, the authors outline how "ITSs care not only about what the student knows and misunderstands but also about what the student feels and how such interaction affects them." I'm sure a few readers are sceptical, but I've never felt a single-minded focus on 'learning outcomes' was ever the intended objective of educational technology, or education generally.

Web: [Direct Link] [This Post][Share]


We Tried to Replace 404 Media With AI
Emanuel Maiberg, 404 Media, 2024/06/25



The title of this article is a bit misleading, as the story describes an experiment where the 404 authors tried to replicate their site using web scraping technology. Basically, the sites either harvest feeds and link back to the source (I made my own sites like that back in the day), copy and reproduce full text, or use AI to rewrite copied text and present it as a new article. That's not the same as having an AI write your news site for you. In my opinion, a confluence of three factors makes these sites possible: first, Google's ad model, which makes such sites profitable; second, the technology, which makes it easier to fool Google's search engine; and third, news sites themselves, which these days rely less and less on original research and reporting.

Web: [Direct Link] [This Post][Share]


Top 10 Emerging Technologies of 2024
Mariette DiChristina, Bernard Meyerson, World Economic Forum, 2024/06/25



Today's new word (for me at least) is "elastocalorics", which refers to types of materials that "emit heat when subjected to mechanical stress and cool down when the stress is relaxed." This is one of the ten new technologies predicted in this report (46 page PDF) from the World Economic Forum. The rest of the ten are various flavours of sensors and AI, proteins and genomics, and carbon capture. I don't mean to sound glib - I mean, there's a fair bit of research behind these ideas - but it just feels to me like there's a disconnect between what we see here and what we need. Via Alan Levine.

Web: [Direct Link] [This Post][Share]


Writing as 'passing'
Helen Beetham, imperfect offerings, 2024/06/25



This article should be read from the bottom up as well as the top down, as the inference can work in either direction. The article begins as a critique of the Turing test (which says essentially that a computer has achieved artificial intelligence if it can fool a human) and Turing-like tests. Beetham offers the observation that Turing tests do as much to make humans appear as computers as they do to make computers appear as humans, since only a text-based interface is used. But she then takes this a step further to suggest that higher learning itself changes the student as part of an identity-building process. Writing for assessment forces a person to interact differently than they would otherwise. As Beetham writes, "I think most students experience academic English as a profoundly 'other' discourse." The idea here, in both parts of the article, is to depict writing as an activity, not a product. As derived from Wittgenstein: "language is not representational form, however complex and inter-related, but action, interaction and expression."

Web: [Direct Link] [This Post][Share]


Developing Policy Guidelines for Artificial Intelligence in Post-secondary Institutions
Mohamed Ally, Sanjaya Mishra, Commonwealth of Learning, 2024/06/25



Via Sanjaya Mishra, here's a new report (52 page PDF) from the Commonwealth of Learning on AI policy. It's so new I can't find it in the CoL repository. Based on a literature review and survey of AI policies, "this report identifies 14 areas that stakeholders in higher education institutions should consider while developing policies for AI." You can find the list on pp. 15-16. The report also has general considerations on setting up the policy and a process for development and implementation.

Web: [Direct Link] [This Post][Share]


What We Do in a Nutshell: GÉANT at the Research Infrastructures Conference
GÉANT CONNECT Online, 2024/06/25



This is an introduction to a video presentation on "Research Infrastructures in a Changing Global, Environmental and Socio-economical Context". Though the purpose is to describe the underlying infrastructure, it also in passing explains why it's a bad idea to force researchers from across the country to work in a single building in Ottawa: "None of us knows where the next super scientist sits, none of us knows where the next dataset is... By making sure all researchers are connected to our network, we ensure that scientists around the globe have the same chance to collaborate." A next-generation research and education agency should be focused on connecting researchers and educators wherever they happen to be, not forcing them to work in a downtown office.

Web: [Direct Link] [This Post][Share]


Pivoting From React to Native DOM APIs: A Real World Example
Richard MacManus, The New Stack, 2024/06/25



When React came on to the scene it was heralded as a way to create instantly responsive web pages. It did this with a virtual document object model (VDOM) that mirrored the content of the web page, ran API operations in the background, and synchronized the result with the actual page display. I found it interesting but hesitated to jump in because it was developed by Facebook. It is currently far and away the most popular framework for interactive sites. This article suggests that the age of React may be ending, not because of any association with Facebook, but because developers are looking for speed and interaction improvements. Via John Allsopp, who notes that "finding developers who know vanilla JavaScript and not just the frameworks was an 'unexpected difficulty'."
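For anyone who has only ever worked inside a framework, here's a small sketch of my own (not from the article) of what the 'native DOM APIs' approach looks like - just the browser's built-in methods, with no virtual DOM or build step; the list and item text are placeholders for illustration:

    // Plain browser JavaScript: create and update page content directly.
    const list = document.createElement('ul');
    document.body.appendChild(list);

    function addItem(text) {
      const li = document.createElement('li');
      li.textContent = text;   // set the visible text
      list.appendChild(li);    // mutate the real DOM - no diffing layer in between
    }

    addItem('First item');
    addItem('Second item');

React, by contrast, has you describe the desired output and lets its virtual DOM work out which of these low-level calls to make.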

Web: [Direct Link] [This Post][Share]


Stephen Downes talks about his experiences with the cloud
Jim Groom, Taylor Jadin, Stephen Downes, 2024/06/24



I'm a bit overdue in posting this - I've been behind in posting archives. This is a conversation I had a couple of weeks ago with Jim Groom and Taylor Jadin of Reclaim Hosting. The purpose was to talk about my experiences moving my website from the cPanel site to Reclaim's cloud hosting, though we covered a wide range of related topics.

Web: [Direct Link] [This Post][Share]


Conversation #9: Add and Subtract – The Ed non-Tech (EnT) Podcast
Matt Stranach, Stephen Downes, The Ed non-Tech (EnT) Podcast, 2024/06/24



This is the recording of a conversation I had with Matt Stranach on a wide range of topics related to educational technology and learning theory. We looked at (to paraphrase) some of my ongoing contributions to the practice, research, and outreach of e-learning, along with the concepts and associated practices going into connectivism.

Web: [Direct Link] [This Post][Share]


Internet Archive forced to remove 500,000 books after publishers’ court win
Ashley Belanger, Ars Technica, 2024/06/24



It has often been said that if somebody tried to create a library today, publishers would stop them. This case is proof of that. The Internet Archive established a virtual library - they would buy a book, then loan it out to one person at a time (I've made use of it and on occasion have had to wait until the copy was available). During the pandemic they loaned out more copies, to make up for those copies locked in inaccessible physical libraries. Publishers sued, fearful that they weren't able to gain maximum profits from people during the worldwide crisis. This is the result of that case. "In an open letter to publishers signed by nearly 19,000 supporters, IA fans begged publishers to reconsider forcing takedowns and quickly restore access to the lost books."

Web: [Direct Link] [This Post][Share]


What’s the Difference Between Mastodon, Bluesky, and Threads?
Rory Mir and Ross Schulman, Electronic Frontier Foundation, 2024/06/24



This is an in-depth look at the three major contenders in the fediverse: the ActivityPub protocol used by Mastodon and others; Bluesky, founded by Twitter founder Jack Dorsey; and Threads, Facebook's alternative. The article looks not only at the technical details but also at how the three networks handle moderation and censorship. It's a good overview written from an open perspective.

Web: [Direct Link] [This Post][Share]


Do smartphones really cause mental illness among adolescents? Ten problems with Jonathan Haidt’s book
Michaela Lebedíková, Michal Tkaczyk, Vojtěch Mýlek, David Smahel, Parenting for a Digital Future, 2024/06/24



This is a textbook case of applying good critical thinking technique to show the flaws in a causal study. The target in question is Jonathan Haidt's The Anxious Generation, which tells us "how the great rewiring of childhood is causing an epidemic of mental illness." Given the recent wave of mobile phone bans in classrooms, a sober rethink is sorely needed. According to the authors, Haidt cherry-picks research, infers cause from correlation, dismisses alternative explanations, generalizes beyond the data, and more. It's as though Haidt's book had been blended with a critical thinking textbook as a case study.

Web: [Direct Link] [This Post][Share]


Whiteboard Advisors Acquires Popular EdTech Newsletter, Investor Database
Julia Pasette, Whiteboard Advisors, 2024/06/24



I've been enjoying Matt Tower's ETCH newsletter for the last year or so (the domain was registered in 2022). It offered a business focus on edtech, tracking mergers and acquisitions and such, things I don't normally cover here. Anyhow, it was acquired by Whiteboard Media group this week. "The EdSheet will join a growing roster of Whiteboard Media publications." I mention it here to reassure readers that there's no possibility of OLDaily being acquired - the data is open, the list of email subscribers is never shared, and most of the readership (via social media, RSS and web) can't even be counted, let alone tracked.

Web: [Direct Link] [This Post][Share]


AI took their jobs. Now they get paid to make it sound human
Thomas Germain, BBC News, 2024/06/24



Interesting look at a copy editor whose job changed from editing the work of 60 human writers to editing the work of the AI that replaced them. "Mostly, it was just about cleaning things up and making the writing sound less awkward, cutting out weirdly formal or over-enthusiastic language," Miller says. "The real problem was it was just so repetitive and boring. It started to feel like I was the robot." This, of course, is just a temporary stage, and the editor was soon replaced by another AI. In this particular case the articles were part of the pink slime internet. But, eventually, they will be good.

Web: [Direct Link] [This Post][Share]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.