Open Access to University Press Frontlists: A Call to Action
Curtis Brundy, Laura Hanscom, Barbara Kern, Brigitte Weinsteiger,
The Scholarly Kitchen,
2023/09/26
The book - or, as university presses (UPs) like to call it, the monograph - is the last bastion of closed access publishing. I cannot count how many times I've encountered a potentially useful source, only to find it has been locked out of sight in book form. As the authors of this article note, only a tiny percentage of monographs are published as open access. They call for this to change. "With a knowledge equity mindset, we want to ensure that our collection dollars have the greatest impact possible. Simply put, if we are going to spend funds on monographs, we want to maximize our spending's global impact and equity through OA." Hear hear!
Web: [Direct Link] [This Post]
Metaverse: What happened to Mark Zuckerberg's next big thing?
James Clayton,
BBC,
2023/09/26
The concept of the metaverse, like pretty much everything else in the ed tech space, has been swamped by the all-consuming interest in generative AI. There was a short time Before, though, when Meta, Facebook's parent company, was prepared to invest heavily in the metaverse. This article makes two points: first, that Meta's Mark Zuckerberg asserts that Meta is still committed to this path, and second, that "VR is still fringe. It's not the way most people play games, and it definitely isn't how most people choose to spend their time." I think that there's more to the metaverse than just VR, that Meta is not wrong in sticking to its approach, and that it's not just Meta that's interested in the persistence of data across the real and virtual worlds.
Web: [Direct Link] [This Post]
Getty Images launches an AI-powered image generator
Kyle Wiggers,
TechCrunch,
2023/09/26
What's most important here, I think, is the marketing. "Getty Images, one of the largest suppliers of stock images, editorial photos, videos and music, today announced the launch of a generative AI art tool that it claims is 'commercially safer' than other, rival solutions on the market." It is 'safer' only if you agree that image generation AI systems violate copyright. And the market share of commercial AI such as Getty's depends on successfully making that copyright argument. Now, of course, an argument does not become invalid simply because of the self-serving motivation of those advancing it. But we should not be swayed by marketing campaigns intended to persuade us that these copyright problems actually exist; we should focus our attention instead on the question of whether AI actually reproduces copyrighted material in an illegitimate manner. Because, mostly, it doesn't, at least in my view. See also: Digiday.
Web: [Direct Link] [This Post]
Dethroning the All-powerful AI: Developing Ethics for a Demystified AI
Quito Tsui,
Bot Populi,
2023/09/26
The increasingly complex nature of artificial intelligence (AI) lends to it a sense of mystery and opacity, writes Quito Tsui. "Echoing the genre of myth, emerging technologies and discussions around them are today infused with a sense of incomprehensibility, or a fundamental inability to understand or audit the 'decision-making' of predictive tools, and an inviolable sense that these technologies defy our mortal ethical frameworks." To 'tame' this mystification, Tsui argues that "it is vital to reorient the direction of AI" and "we should be explicit about the direction we want AI to face, the direction in which it should serve. By focusing the gaze of AI on responding to the needs of the global majority, and mobilizing AI for those purposes in a directed manner, we can rein in its mystical status." Interesting argument, useful conclusion.
Web: [Direct Link] [This Post]
Response to Louise Antony
Victor Kumar, Richmond Campbell,
Medium,
2023/09/26
As Justin Weinberg summarizes, "Last month, NDPR published an especially critical and widely circulated review by Louise Antony (UMass) of a new book (A Better Ape: The Evolution of the Moral Mind and How It Made Us Human) in moral philosophy and now the book's authors, Victor Kumar (Boston) and Richmond Campbell (Dalhousie), have published a response." The book, of course, is not openly accessible, but both the review and especially the response are well worth reading. I wouldn't agree with everything Kumar and Campbell argue, but I do agree with this: "Our book is about one central and recurring idea: that morality evolves with knowledge and social structure."
Web: [Direct Link] [This Post]
The What and Why of Scenario-Based Learning
Karan Hotwani,
Upside Learning,
2023/09/26
This bit is right: "Simulations are just a model of the way a part of the world works. They can be in any valid state, and taken to any other valid state." As such, they place learning into a context, which supports meaningful practice. However, the best simulations are not 'stories' and they are not the 'branching scenarios' described in this article. Rather, they are open-ended, and working with such a scenario doesn't involve a series of 'decisions' but rather an adjustment (or non-adjustment) of parameters. Instead of being based on a pre-structured decision tree, they are based on data models. These models may consist of simple rules and connections between different types of data (as in the old Sim City games) or they may be based on dynamic relations that change and shape themselves as they adapt to your input.
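To make the contrast concrete, here is a minimal sketch of my own (not from the article, with invented names and numbers): the branching scenario is a fixed decision tree the designer wrote in advance, while the simulation is a small data model whose rules respond to whatever parameters the learner sets.

```python
# Branching scenario: a pre-structured decision tree. The learner can only
# follow the paths the designer anticipated. (Names here are hypothetical.)
branching = {
    "start": {"ask the patient": "history", "order tests": "results"},
    "history": {"order tests": "results", "prescribe": "outcome_a"},
    "results": {"prescribe": "outcome_b"},
}

# Simulation: a data model of state plus rules. The learner adjusts a
# parameter and the model moves from one valid state to another.
state = {"population": 1000, "tax_rate": 0.10, "happiness": 0.7}

def step(state, tax_rate):
    """One tick of a toy city model: simple rules relate the parameters."""
    new = dict(state, tax_rate=tax_rate)
    new["happiness"] = max(0.0, min(1.0, state["happiness"] + (0.12 - tax_rate)))
    growth = 1 + 0.05 * (new["happiness"] - 0.5)
    new["population"] = int(state["population"] * growth)
    return new

# The learner doesn't pick a branch; they tune a parameter and watch the model respond.
for tick in range(3):
    state = step(state, tax_rate=0.18)
    print(state)
```

The point of the toy model is that nothing constrains the learner to a scripted sequence of 'decisions'; any adjustment of the tax rate produces a legitimate next state.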
Web: [Direct Link] [This Post]
Ethics by Agreement in Multi-agent Software Systems
Vivek Nallur, Rem Collier,
ICSOFT 2019,
2023/09/26
Interesting paper I found while revising my most recent article: "Should we embed machines with ethics that we know to be good (ethics-by-design), or should we repose our faith in an ethics-making method that leads to ethics emerging by agreement in a society of machines?" As the authors suggest, the former sounds like the best option, but in a world with a multiplicity not only of ethical perspectives, but also a multiplicity of interacting AI systems, it's impossible to achieve. So, better to consider how AIs expressing the various perspectives can come to agreement on ethical questions.
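As a rough illustration of what 'ethics by agreement' could look like computationally (a sketch of my own, not the mechanism Nallur and Collier propose, with hypothetical perspectives and thresholds): each agent scores a candidate action from its own ethical standpoint, and the action is adopted only when agreement emerges across agents.

```python
# Toy illustration: agents holding different ethical perspectives each score
# a proposed action; the group accepts it only if all of them endorse it.

def utilitarian(action):
    return action["benefit"] - action["harm"]

def deontological(action):
    return -1.0 if action["violates_duty"] else 1.0

def care_ethics(action):
    return action["relational_support"]

AGENTS = [utilitarian, deontological, care_ethics]

def agreed(action, threshold=0.0):
    """Accept the action only if every agent's evaluation clears the threshold."""
    return all(evaluate(action) > threshold for evaluate in AGENTS)

action = {"benefit": 3.0, "harm": 1.0, "violates_duty": False, "relational_support": 0.5}
print(agreed(action))  # True: all three perspectives endorse this action
```

A unanimity rule is only one possible agreement mechanism, of course; voting, negotiation or bargaining protocols would fit the same frame.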
Web: [Direct Link] [This Post]
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2023 Stephen Downes. Contact: stephen@downes.ca
This work is licensed under a Creative Commons License.