Princeton's endowment returned roughly $3.77 billion last year, while its annual operating budget is $1.86 billion. The math, writes Malcolm Gladwell, is pretty easy. "Princeton could let in every student for free. The university administrators could tell the U.S. government and all of its funding agencies, 'It's cool. We got this.'" They could, but they won't. They continue to fundraise for money they don't need. They continue to charge tuition fees they don't need. (I found this in a LinkedIn post that has since disappeared and will never be found again, because LinkedIn search, and social media search in general, is simply awful.)
This post makes a number of predictions about commercial learning systems for 2023 and the observations seem reasonable to me. One prediction: performance and talent management solutions will evolve into learning systems, focusing on career path development. Another: the end of the LXP, which as Craig Weiss says, was never really an LXP to begin with. "The moment assigned learning appeared, the 'learner-centric' was eliminated," he writes. "What I see is the continuation of vendors pushing the 'LXP' or 'learning experience' narrative, although the key components – the core is falling by the wayside."
Matthias Melcher offers some comments on my recent presentation, most of which I am in agreement with (especially the notions of 'scientific apprenticeships' and 'simple things like annotation and rearranging'). He raises, though, a common concern about the notion of consensus, saying "majority is not a poor surrogate of what the consensus of the wisest big heads would achieve". I offer a response in the comments, noting that the sort of consensus I am talking about refers to agreement on a source-of-truth mechanism (voting, documentation, provenance, whatever), as opposed to making political and economic decisions by agreement.
I said in my presentation the other day that AI is already here. This report (28 page PDF) from Deloitte confirms it. "In fact, more than a quarter of our survey respondents have reached full-scale deployment of five or more types of AI applications within their organization," report the authors. As always, deployment of the technology varies (and though the survey doesn't report it, I would expect educational institutions to be laggards). And, as the report says, deployment is never finished, as the technology, and the use cases, continue to evolve. Finally, as always, "Technology cannot deliver transformative results unless organizations reimagine how work gets done." In the case of AI, things like data fluency and trust in agility are required to make it work. Via Irving Wladawsky-Berger.
Alex Usher offers a good take-down of a recent MacDonald-Laurier Institute paper (54 page PDF) on "viewpoint diversity" at Canadian universities, calling it "Really bad. Icelandic rotting shark bad. Crystal Pepsi bad. Final Season of Game of Thrones bad." He then offers a point-by-point demolition. The sample is biased. The scale is polarized. The survey measures the wrong thing. It has an absurd definition of 'academic freedom'. Etc. My own take on all of this is that 'viewpoint diversity' in Canadian academia (or anywhere else) can be an issue when and only when the demand for 'viewpoint diversity' is applied to corporate boardrooms and financial institutions. That's not going to happen, of course. So why don't we just let people believe what they believe and do the jobs they do?
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2022 Stephen Downes. Contact: email@example.com. This work is licensed under a Creative Commons License.