Diagramming: history of the visualization of grammar in the 19th century
Victor Mair,
Language Log,
2024/06/28
The real gem here is the Public Domain Review, a blog that does what its title says it does. This post on Language Log from Victor Mair references a specific article from the Review, American Grammar: Diagraming Sentences in the 19th Century. It's a beauty. The article collects seven works on grammar from the 1800s and reproduces them as "crisply photographed archival works that you can flip through page by page to study at your leisure." And, appealing to my own arcane interests, it gives us a good look at how grammar was diagrammed over time. "It is interesting to observe how the explications and illustrations become increasingly clear and sophisticated through the years and decades." Do read Mair's article as he pulls out some of the best bits, and then (since it's the weekend) linger over these seven treasures of analysis "in an attempt to visualize the complex structure — of seemingly divine origins — at their mother tongue's core."
Web: [Direct Link] [This Post][Share]
The Ultimate Beginner's Guide to Docker
Josep Ferrer,
MachineLearningMastery.com,
2024/06/28
Now, I already know how to set up and run a Docker container. But a lot of people don't, and this post is mostly a good guide. But it's also an example of how easy it is to confuse your readers. Josep Ferrer writes, "I place the Dockerfile at the same level as my main script and the requirements.txt file." Sounds fine, but up to that point there has been no example of 'requirements.txt'. Going back, you can see where he mentions it: "Even though it is not necessary, it is a good practice to always have a requirements.txt with all the libraries that our script requires. In this case, we would only add the numpy library." So - what does requirements.txt look like? Do you just put in the names of the requirements? Are they separated with spaces, commas or line feeds? Just that one little bit is unclear and requires quite a bit of inference on the part of the reader. It could stop them cold, becoming what I call a 'blocker' in instructional texts. Some other things are also unclear (and the script being run actually fails). See my video where I follow the instructions.
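For readers stopped cold by that blocker: this isn't from Ferrer's post, but the standard pip format for requirements.txt is plain text with one requirement per line - line feeds, not spaces or commas - and pip reads it with 'pip install -r requirements.txt'. For the numpy-only case in the guide it would be just this:

```
# requirements.txt - one package per line; '#' lines are comments
numpy
# an exact version pin would look like: numpy==1.26.4
```

The version pin shown in the comment is illustrative, not something the guide specifies.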
Web: [Direct Link] [This Post][Share]
Authentication vs. authorization: What's the difference?
Matt Kosinski,
IBM Blog,
2024/06/28
Here's the answer: "Authentication and authorization are related but distinct processes in an organization's identity and access management (IAM) system. Authentication verifies a user's identity. Authorization gives the user the right level of access to system resources." What follows is a fairly high-level discussion of the concepts, but if the distinction wasn't immediately clear, it would be well worth reading the article to get a firm understanding of the two concepts.
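The distinction is easy to see in code. Here's a minimal sketch of my own (the users, passwords and resources are invented, and real systems would use salted, slow password hashes like bcrypt, not a bare SHA-256):

```python
import hashlib

# Identity store: username -> hash of the password (toy example only).
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

# Access policy: username -> set of resources that user may access.
PERMISSIONS = {"alice": {"reports"}}

def authenticate(username, password):
    """Authentication: verify the user is who they claim to be."""
    stored = USERS.get(username)
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and stored == candidate

def authorize(username, resource):
    """Authorization: decide what an authenticated user may access."""
    return resource in PERMISSIONS.get(username, set())
```

Note that the two checks are independent: Alice can prove her identity (authentication succeeds) and still be denied a resource she has no permission for (authorization fails).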
Web: [Direct Link] [This Post][Share]
Digital Literacy OER
Katsusuke Shigeta,
Hokkaido University,
2024/06/28
Katsusuke Shigeta reports, "we've teamed up with Adobe to create Open Educational Resources (OER) for teaching digital literacy in higher education." They're available in Japanese and English from either Digital Literacy OER (Hokkaido University) or Adobe Education Exchange. There's a related presentation from a couple of years ago. He also references Adobe Creative Campus, though (in my view) people may be less inclined to get involved with it following recent licensing changes that may allow them to use your content to train AI. Via OE Global Connect.
Web: [Direct Link] [This Post][Share]
What is language for?
Jennifer Michalowski,
McGovern Institute,
2024/06/28
This is a bit funny, but pretty important. It's funny because the central claim is being used as an argument against the use of large language models (LLM) for artificial general intelligence. Here's the claim: "Drawing on evidence from neurobiology, cognitive science, and corpus linguistics, researchers make the case that language is a tool for communication, not for thought." I think that's quite right. But while LLM are limited in this way, neural network based AI in general is not. And even more importantly, the claim refutes a theory known as the Physical Symbol System hypothesis, which implies that human thinking is a kind of symbol manipulation, and which forms the basis for a lot of the cognitive theory out there today (including, ironically, that of most critics of LLM). This article is a summary of the paper published in Nature (paywalled, but there's a copy posted here). See also this Hacker News discussion. Via Benjamin Riley, in a post that I think contains a number of errors, who is referenced in this discussion on the Learning Engineering discussion list.
Web: [Direct Link] [This Post][Share]
Learning analytics methods and tutorials
Mohammed Saqr, Sonsoles López-Pernas,
2024/06/28
I love love love this open online textbook edited by Mohammed Saqr & Sonsoles López-Pernas and featuring a large number of contributors. It's not one of those online 'books' that offers only a superficial look at the subject; read this book closely and you'll learn a lot about learning analytics. It's strong on theory - check out this chapter on data, for example, which builds in examples of datasets - and it's strong on practice - check out this chapter on R (including a code repository and installation instructions (install R and then RStudio Desktop)). I recorded myself following the installation instructions (video). Via Sheila MacNeill.
Web: [Direct Link] [This Post][Share]
There are many ways to read OLDaily; pick whatever works best for you:
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2024 Stephen Downes Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.