AInevitability and it's AImpenetrableness


Alan Levine wavers somewhere between AI-tech inevitability and AI-tech doom and gloom. From my perspective, nobody is at either of these two extremes, though critics on one side or the other sometimes write as though their opponents are. As usual, though, Levine takes the discussion in a more useful direction: "as users/subjects of this stuff we have absolutely no comprehensible mental model of what it does... Without any kind of internal intuition, our only source of understanding is our statistically insignificant experiences of typing something into a box turning the crank and seeing what pops out. And then we come to conclusions based on either the crap we get or the stuff that is actually tenable." And that does, I think, characterize a lot of the writing I see out there.

If one were to ask me whether I have a mental model of what's going on inside an AI, I would say that I think I do. Think of a Rorschach test or a word association test. What's happening here is that your brain is being stimulated, and you're responding with the next thing that pops into your mind. Internally, the stimulus activates a part of your neural network, and your response is the word or phrase most closely associated with that activated region. Tweak the parameters a bit and you can get a 'what comes next' sort of response, based on the same principles. That's what's happening, at least as I see it. But with computers, not with your brain.
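To make that picture concrete, here is a minimal sketch of the idea in Python. Everything in it is invented for illustration: a real model learns its embeddings from data, while these tiny hand-written vectors and the associate() helper just stand in for that machinery.

    # A sketch of the "word association" picture of an AI model.
    # The vectors below are hand-made stand-ins for learned embeddings.
    import math
    import random

    embeddings = {
        "dog":    [0.9, 0.1, 0.0],
        "cat":    [0.8, 0.2, 0.1],
        "bone":   [0.7, 0.0, 0.3],
        "piano":  [0.0, 0.9, 0.1],
        "sonata": [0.1, 0.8, 0.2],
    }

    def cosine(a, b):
        # Cosine similarity: how closely two vectors point the same way.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def associate(stimulus, temperature=0.0):
        # The stimulus "activates" a region of the space; the response is
        # the word most similar to it. A nonzero temperature samples more
        # loosely, giving a 'what comes next' flavour instead.
        scores = {w: cosine(embeddings[stimulus], v)
                  for w, v in embeddings.items() if w != stimulus}
        if temperature == 0.0:
            return max(scores, key=scores.get)
        weights = [math.exp(s / temperature) for s in scores.values()]
        return random.choices(list(scores), weights=weights)[0]

    print(associate("dog"))                   # always 'cat'
    print(associate("dog", temperature=0.5))  # usually 'cat', sometimes not

At temperature zero this behaves like a pure association test; raising the temperature loosens it into the 'what comes next' style of response described above.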

Having some sense of what they're doing is really important when dealing with AI engines. As Levine says, "To me we are getting a bit over distracted by the candy sliding out of the bottom of the AI machine and not having any kind of understanding, even schematic, of what goes on behind the front of the machine." There are games we can play (or teach our children to play) along the lines of Sesame Street's "one of these things does not belong" (another typical AI function), where we ask what things we associate with what, think about why we think that this, rather than that, comes next, and only then think in concrete terms about things like input data and similarity algorithms.
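In the same spirit, here is a toy sketch of the "one of these things does not belong" game, done the way a similarity-based engine might: score each item by its average similarity to the others and flag the least similar one. The items and their two-dimensional vectors are, again, invented purely for illustration.

    # A toy "one of these things is not like the others" solver:
    # pick the item whose average similarity to the rest is lowest.
    import math

    items = {
        "apple":  [0.9, 0.1],
        "banana": [0.8, 0.2],
        "cherry": [0.85, 0.15],
        "hammer": [0.1, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def does_not_belong(items):
        # Average each item's similarity to every other item; the
        # outlier is the one with the lowest average.
        def mean_sim(word):
            sims = [cosine(items[word], v)
                    for w, v in items.items() if w != word]
            return sum(sims) / len(sims)
        return min(items, key=mean_sim)

    print(does_not_belong(items))  # 'hammer'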



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 21, 2024 4:13 p.m.

Creative Commons License.
