
OLDaily

Smarter Parts Make Collective Systems Too Stubborn
Jordana Cepelewicz, Nautilus, 2019/03/06


According to this article, "In a paper published earlier this month in Science Advances, for instance, a team led by Neil Johnson, now a physicist at George Washington University, demonstrated that a decentralized model performed best under Goldilocks conditions, when its parts were neither too simple nor too capable." The headline of the Nautilus article suggests an explanation for the phenomenon. It's based on the feedback loops between members in the network and (I think) has a lot to do with whether the individuals are pursuing a collective goal.

Web: [Direct Link] [This Post]


How do we know how to act together?
Jonathan Birch, London School of Economics, 2019/03/06


"Think of a simple act of cooperation: two people pick up a sofa together, carry it into a room, and put it down on the floor." This is a remarkable thing, and while Jonathan Birch wonders why humans can do this and apes can't, I'm more interested in wondering how it can be done at all. What's interesting is that "each person knows how to do their own individual part in a way that actively enables the other person to do the above three things." What do we need in order to make this work? Language? No, the cues can be non-verbal. Shared objective? Well, maybe, though the exact objective is constantly shifting and changing (sure, you want the couch in the room, but where exactly in the room?). Birch says, "what I can’t imagine is joint know-how without any understanding of each other’s thoughts." But suppose your couch-carrying partner is a robot without thoughts, but which behaves the same way your partner would? Would that cause you to fail? Probably not.

Web: [Direct Link] [This Post]


CaImAn an open source tool for scalable calcium imaging data analysis
Andrea Giovannucci, et al., eLife, 2019/03/06


During the discussion on connectivism Monday someone asked whether we could obtain data to demonstrate the principles at the neural level alongside the social and conceptual level. The consensus was that this wasn't feasible. But here we have, first of all, data reported at the neural level, so we can see whether a neuron is activated or not, and second, a computer program that can interpret these images to compile data sets without massive human effort. So - one step closer, though it may be some time before we can put the data to the purpose of testing the theory. And (pace Negroponte) a vote of confidence for open source, open data, and cooperation.

Web: [Direct Link] [This Post]


Summarising learning materials using AI - paucity of data, abundance of stuff
Donald Clark, Donald Clark Plan B, 2019/03/06


Could we use artificial intelligence (AI) to create open educational resources (OER)? Ignatia asked and I suggested I didn't really know of any (notwithstanding the utility of AI in automatically generating metadata, as I showed in my OER talk). Donald Clark responded with this post pointing to his Wildfire service, saying "AI, we believe, is far better at precise goals, such as identifying key learning points, creating links to external content, creating podcasts using text to speech and the semantic interpretation of free text input by learners."

Web: [Direct Link] [This Post]


Editors Canada releases new guidelines for the ethical editing of students’ work
Jessica Natale Woollard, University Affairs, 2019/03/06


This article provides some background on the guidelines for editing student work released by Editors Canada. Historically the service was officially frowned upon, yet it was contracted by students who could afford it. The guidelines are intended to reflect that reality. You can read the guidelines yourself - but if you're like me, they seem arbitrary. They're based on the 2016 Professional Editorial Standards, but disallow specific practices. I have to say the logic eludes me. For example, editors can do B1 and B8-9, but not B2-7, which includes making suggestions about positioning graphic materials, recommending deletion of repetitive text, or recommending headings and navigational aids (among other things). Why is all this important? As AI becomes mainstream, it will offer editing assistance to students. Will it be governed by these constraints? Why would it be?

Web: [Direct Link] [This Post]


Keeping CALM: when distributed consistency is easy
Adrian Colyer, The Morning Paper, 2019/03/06


This is a key question: "What is the family of problems that can be consistently computed in a distributed fashion without coordination, and what problems lie outside that family?" Here's the proposed answer: "Consistency as Logical Monotonicity (CALM). A program has a consistent, coordination-free distributed implementation if and only if it is monotonic." By monotonic, we mean this: "once we learn something to be true, no further information can come down the line later on to refute that fact." How do we get monotonicity? Through confluent operations: an operation is confluent "if it produces the same sets of outputs for any non-deterministic ordering and batching of a set of inputs." Give it the same data, however ordered, and it produces the same results. This gives you something to think about. Accounting is confluent; the order of transactions doesn't matter, and the balance is the same in the end. Voting is confluent; you can vote in the morning or the evening, but the final tally is the same. But causation and agency are not confluent: when you're trying to make something happen, order matters.
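A minimal sketch of what confluence looks like in practice (my own illustration in Python, not from the paper): set union is monotonic, so every ordering of the inputs yields the same output, while a balance with an overdraft rule is order-sensitive.

```python
from itertools import permutations

# Monotonic, confluent operation: set union. New inputs only add facts,
# never retract them, so every ordering of the batches yields the same set.
def union_all(batches):
    result = set()
    for batch in batches:
        result |= set(batch)
    return result

# Non-confluent operation: a balance with an overdraft rule. A withdrawal
# processed before a deposit can be rejected, so order changes the outcome.
def apply_with_floor(transactions):
    balance = 0
    for t in transactions:
        if balance + t >= 0:  # reject any transaction that would overdraw
            balance += t
    return balance

batches = [[1, 2], [2, 3], [3, 4]]
union_results = {frozenset(union_all(p)) for p in permutations(batches)}
print(len(union_results))  # 1: one output for every ordering

txs = [100, -50]
balance_results = {apply_with_floor(list(p)) for p in permutations(txs)}
print(sorted(balance_results))  # [50, 100]: order matters
```

Note that plain accounting (summing signed transactions) is confluent, as the post says; it's the overdraft threshold, a non-monotonic check, that breaks confluence and so demands coordination.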

Web: [Direct Link] [This Post]


A beginner's guide to Professor Rose Luckin
edCentral, 2019/03/06


"Luckin is in the process of developing a robot called 'Colin' to take on a teaching assistant role and demonstrate how robots could help teachers in the classroom. Focusing on pupil wellbeing, Colin will collect data which will help identify the areas where individual learners need the most support." She is also one of those educators who blogs only once in a blue moon - her most recent post is from 2017 (it's a pretty good post on implications of AI for education, though). For something more recent, have a look at this JISC coverage of 'the AI revolution is here' and this interview in Sifted that explores her interest in ethical AI and mentions her EDUCATE project, a startup clinic based at the UCL Institute of Education to connect entrepreneurs with educators and with research.

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2019 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.