Half an Hour,
Mar 06, 2024
I've wrestled with the question of whether there can be an 'artificial intelligence' over the last few days. The question of course comes down to what we think 'intelligence' is, and whether we think it is something unique to humans, or at the very least, something beyond the capacity of a computer or machine.
What's interesting, though, is why we should think this is an issue at all. What does it matter if a machine can be intelligent? We don't have the same response to (say) 'is good at math' or 'can play chess'. Both are true of machines, and we don't feel that some sort of intuition has been violated. Yet writers go to great lengths to argue that machines can't be intelligent. Why?
Here's what I think it is. Put simply, there's something that it feels like to understand something. It's that 'aha moment' that teachers chase, that dawning of comprehension students seek, that moment when we 'get it'. All these are instances of what I'll call 'the sentiment of rationality' for the purposes of this post. And it's this sentiment, precisely, that we think we have, and that robots cannot.
Lest anyone think I am claiming any sort of originality here, let me refer to the William James article, The Sentiment of Rationality, which draws from the same sort of empiricist tradition I draw from, and in which rationality is recognized by someone "as he recognizes everything else, by certain subjective marks... a strong feeling of ease, peace, rest is one of them. The transition from a state of puzzle and perplexity to rational comprehension is full of lively relief and pleasure" (p. 63).
---
What is this 'intelligence' that critics demand, and that is not to be found in artificial intelligence? The accounts vary, and I think we need to identify several different flavours of them. For example, we read, from David Gurteen, that
Human intelligence is the capacity to acquire, process, and apply knowledge effectively, solving problems, adapting to new situations, and demonstrating creativity and emotional understanding.
Tim Klapdor writes
intelligence requires understanding & meaning. Therefore, if you want to call something intelligent, then it must be able to exhibit understanding and meaning.
And also
Understanding is not just the knowledge of a thing; you must also grasp its cause or explanation. It is a deeper sense of what the thing is, but also how it has come to be. It isn't just knowing the definition of the words but the context in which you would use them... meaning has more to do with seeking an answer, asking why, and seeking to find out about the cause and effect and reason for things to occur. In that sense, it is about intentionality and less about the others. Seeking out and finding meaning is beyond an algorithms capability because it is not sentient, alive or intelligence.
Citing the idea of 'data dignity' that "would trace the most unique and influential contributors when a big model provides a valuable output," Jaron Lanier writes,
the Web, and indeed the Internet it is built on, is, by its nature, anti-contextual and devoid of provenance. We assume that decontextualization is intrinsic to the very idea of a digital network.
Parmy Olson writes,
no machine can think, and no software is truly intelligent... GPT-4 and other large language models like it are simply mirroring databases of text — close to a trillion words for the previous model — whose scale is difficult to contemplate. Helped along by an army of humans reprograming it with corrections, the models glom words together based on probability.
Angela Koenig summarizes,
ChatGPT, can generate text but lacks true understanding. Unlike humans, AI doesn't have embodied experiences or emotions, making it fundamentally different from human intelligence.
McKendrick and Thurai argue,
AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.
Anthony Chemero writes, "LLMs are not intelligent in the way humans are intelligent because humans are embodied," which means they "don't give a damn," adding, "Things matter to us. We are committed to our survival. We care about the world we live in."
We could multiply examples indefinitely; I've merely picked out a few I think are representative. A proper literature review would be preferable, but I don't think it would vary widely from the sample we have here.
---
As I said above, there are several different flavours of opinion ascribing to humans a type of intelligence that can't be found in machines.
In this section I'm going to do two things. First, I'm going to list each of those flavours. This list is, I think, reasonably comprehensive. Maybe I've missed something, but this will not detract from the approach generally. Second, as I list each of those flavours, I'm going to offer an account to the effect that artificial intelligence does include this, or at least, that there's nothing that would prevent it from doing so.
This leaves us in the end with the question of why we would think only humans can do these things when, evidently, machines can do them as well, just as they can do math and play chess. That leads to the point of this post: intelligence is something we feel, and something machines cannot feel.
To the different flavours then:
Learning
There's a sense in which intelligence involves being able to learn on one's own. Machines are often criticized for simply following instructions. Garbage in, garbage out.
This was certainly true of 'good old fashioned AI' which was based on processing information according to a set of predefined rules, and of most computer programs which simply follow pre-programmed instructions. But today's computer systems do learn, in the sense that they rewrite their own programming in response to new information. There's a whole domain of machine learning that explains how this process works.
We can talk about what a machine can learn, and whether it can learn the same sort of things in the same way as humans; that is the subject of the categories below. But there's no doubt about a machine's capacity to learn.
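To make that sense of 'learning' concrete, here is a minimal sketch in Python (plain standard library, invented data, and not how any particular system is implemented): a program whose behaviour is set not by hand-written rules but by parameters that are rewritten in response to the data it sees.

```python
# A minimal sketch of learning as parameter adjustment: the program's
# behaviour (the values of w and b) is rewritten in response to data,
# rather than being specified in advance by a programmer.

def train_linear(examples, steps=1000, lr=0.01):
    """Fit y = w*x + b by repeatedly nudging w and b to reduce the error."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            error = (w * x + b) - y   # how wrong the current parameters are
            w -= lr * error * x       # adjust the parameters to be less wrong
            b -= lr * error
    return w, b

# Invented data following roughly y = 2x + 1; the learned w and b end up
# close to 2 and 1 without anyone having programmed those values in.
print(train_linear([(1, 3), (2, 5), (3, 7)]))
```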
Adaptation
Closely related to the idea of learning is the idea of adaptation - being able to change one's behaviour in response to new or changing circumstances. Adaptation is a complex process - it goes far beyond what the Borg do when "they've adapted". The suggestion is that a machine cannot respond to a novel event or circumstance.
Generally, adaptation is recognition of a new type of event or circumstance as a result of new or changing data, and alteration of behaviour to one appropriate for that new type of event or circumstance.
But visit any instance of chatGPT and you can see it adapt in front of you. Ask it a question, maybe even a question involving a word that doesn't exist, and it will respond. Offer it a correction, then ask the original question, and it will respond differently. We can question whether it's the sort of adaptation we would want, but it is undeniable that an adaptation has taken place.
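A rough sketch of why this happens (the generate() function below is a hypothetical stand-in, not a real API): the reply is conditioned on the whole conversation so far, so a correction added to the history changes the answer to the very same question.

```python
# A sketch of adaptation through conversation history. generate() is a
# hypothetical stand-in for any model that produces a reply given the
# messages so far; it is not a real library call.

def ask(history, question, generate):
    """Append the question, return a reply conditioned on the full history."""
    history.append({"role": "user", "content": question})
    answer = generate(history)
    history.append({"role": "assistant", "content": answer})
    return answer

# history = []
# ask(history, "How many legs does a wug have?", generate)   # some first guess
# history.append({"role": "user", "content": "A wug has six legs."})
# ask(history, "How many legs does a wug have?", generate)   # a different answer
```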
Generalization
Though it may feel that this category is a bit of a straw man, a lot of the criticisms of AI amount to the contention that machines cannot generalize. This is a variation on what Chomsky called Plato's problem, and is essentially this:
Chomsky believes that Plato asked (using modern terms) how we should account for the rich, intrinsic, common structure of human cognition, when it seems underdetermined by extrinsic evidence presented to a person during human development. In linguistics this is referred to as the "argument from poverty of the stimulus" (APS). Such arguments are common in the natural sciences, where a developing theory is always "underdetermined by evidence".
I don't think anyone genuinely believes this is a problem any more. Machines generalize quite nicely employing a process called regression; they are also adept at forming categories, both supervised (where we tell them what features to look for) and unsupervised (where we don't tell them how to categorize at all).
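For instance, here is a small sketch using scikit-learn (assumed installed; the data is invented) of both kinds of generalization just mentioned: a regression that predicts a value it never saw, and an unsupervised clustering that forms categories without being told what they are.

```python
# Two kinds of generalization, sketched with invented data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Supervised: generalize from labelled examples to an unseen input.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])              # roughly y = 2x
model = LinearRegression().fit(X, y)
print(model.predict([[5.0]]))                    # about 10, a point it was never shown

# Unsupervised: form categories with no labels at all.
points = np.array([[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [5.2, 4.9]])
print(KMeans(n_clusters=2, n_init=10).fit_predict(points))   # two groups, found from the data alone
```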
Integration
This is the process of combining two things to get a new thing that isn't the same as either of the original things. For example, sodium and chlorine (the latter a poisonous gas) combine to form common table salt. The suggestion is that machines aren't able to understand the difference between (say) chlorine on its own and chlorine as a part of table salt.
There's an example in this 2016 article:
"How many legs does a cat have?" his system answered, "Four, I think." Then he tried, "How many legs does a centipede have?" which produced a curious response: "Eight."
We would think that the computer should be able to see that the word 'centipede' is made up of Latin terms 'centi' (meaning 100) and 'pede' (meaning feet) and infer that a centipede must have 100 legs. But to do this it would need to have some understanding of how these concepts can be integrated.
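Here's a toy sketch of that kind of integration (the root table is invented and tiny; the point is only that combining the parts is a mechanical operation):

```python
# A toy sketch of 'integration': decompose a word into known roots and
# combine their meanings. The tables are invented for illustration.

ROOTS = {"centi": 100, "milli": 1000, "bi": 2, "tri": 3}
SUFFIXES = {"pede": "legs", "ped": "legs"}

def infer_legs(word):
    """Guess a leg count by splitting the word into a known root and suffix."""
    for root, number in ROOTS.items():
        for suffix, meaning in SUFFIXES.items():
            if word == root + suffix and meaning == "legs":
                return number
    return None

print(infer_legs("centipede"))   # 100 - the literal, merely 'integrated' answer
print(infer_legs("millipede"))   # 1000
```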
Today, of course, chatGPT has no trouble with the question, noting that while the name might suggest "exactly 100 legs", the number "typically ranges from 30 to 354 legs," which is more informed than a mere integration would be.
Context
This seems like a big hurdle for a lot of people. As we know, words have different meanings in different contexts; the same is the case for behaviours, which might be appropriate in one context and not another. As Nicholas Babin writes, "AI often misses the subtlety of context and cultural nuances that are not explicitly encoded in data."
This is a general problem, not one limited to AI. Without enough information, humans are also prone to errors of context; that's why it's easy to misconstrue statements in emails or the use of emoji.
There isn't room in a few paragraphs for a full treatment of context, but in general, detecting a context depends on being able to draw a generalization from data or information surrounding the particular circumstance or event in question. These generalizations, to my mind, take one of three major forms:
- schema - an overall template determining what would or would not be appropriate values for, say, "the ____ day of ____". (I put the question to chatGPT, which noted the lack of context and made several suggestions, and correctly filled the blanks after I added an appropriate context).
- frame - a psychological predisposition regarding the interpretation of words or events; in a certain frame or context, a 'dog whistle' has one meaning for one person, and no meaning for another.
- model - a mathematical or linguistic construct describing a set of related events and circumstances, which together amount to a theory or hypothesis capable of making predictions related to those events or circumstances, for example, a weather model.
The computational question of context then is a question of whether the AI in question (a) has enough data from which to recognize the context, and (b) has the capacity to derive or recognize the appropriate sorts of generalizations.
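As a toy illustration of the first of these (not a real context engine; the acceptable fillers are simply hard-coded here), a schema is just a generalization about which values fit which slots:

```python
# A toy schema for "the ____ day of ____": context determines which
# fillers are appropriate. The sets below are invented for illustration.

ORDINALS = {"first", "second", "third", "last"}
PERIODS = {"the week", "the month", "school", "spring"}

def fits_schema(ordinal, period):
    """Return True if the pair is an acceptable filling of the schema."""
    return ordinal in ORDINALS and period in PERIODS

print(fits_schema("first", "spring"))    # True: "the first day of spring"
print(fits_schema("purple", "spring"))   # False: violates the schema
```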
Thus posed, it should be evident that context-aware computing is conceivable, and far from being an impossibility. Charles Phiri, for example, describes some of the (current) differences between human and computer context-sensitivity and outlines temporal and spatial contexts in machine learning. True, "humans can have more flexible, nuanced understandings of a context shaped by a lifetime of learning and social interaction," but this in the end may amount to nothing more than having access to more data.
Explanation
As Klapdor writes,
The thing is, kids learn why. They don't just learn the patterns of behaviour; they learn the causes and effects. They develop an understanding not just of the shape, but the reason for it. They can understand and create meaning not just from the simple but from the complex. Good parenting, guardianship, and solid relationships do this, and it takes effort and time.
What is an explanation? Again, we don't have room in a few paragraphs to cover the entire topic, but a contemporary account of explanation will involve subsuming the event or circumstance under a more general theory or model that describes what caused the event or circumstance in question, or at the very least, can offer some reason for it to have been the case. Being able to not only predict, but to explain, is one of the big differences cited between human learning and machine learning.
Explanations require contexts, and the context isn't always obvious from a simple statement of what it is that needs explaining. For example, we can ask, "why did the car accident happen?" (à la N.R. Hanson). But as van Fraassen notes, explanations are answers to why-questions in the form "why this instead of that?" Why this car rather than that? Why this driver rather than someone else? Why this corner instead of another? Why this day rather than another? And so we get multiple explanations in terms of multiple causes: a part that failed, a driver that was drunk, a view that was obstructed, a day that was rainy.
Offering an explanation therefore requires an awareness of context, not only of the original event or circumstance but also of the context in which the demand for an explanation is being posed, and this, while usually tacit in the case of a human inquiry, is pretty much never available to a computer when the same question is asked. But it could be, couldn't it? When put this way, there isn't too much mysterious about it. Each context has its own set of available or acceptable generalizations; these could be learned by a computer, and so a reasonable explanation could be offered.
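To sketch van Fraassen's point in code (the contrast classes and answers below are invented for illustration, not drawn from any system): the same event receives a different explanation depending on which contrast the why-question selects.

```python
# A sketch of the contrastive view of explanation: the same event gets a
# different explanation depending on the contrast class of the why-question.

explanations = {
    "this car rather than that one":   "a part in this car had failed",
    "this driver rather than another": "this driver had been drinking",
    "this corner rather than another": "the view at this corner was obstructed",
    "this day rather than another":    "it was raining that day",
}

def explain(event, contrast):
    """Answer 'why <event>, <contrast>?' relative to the chosen contrast."""
    answer = explanations.get(contrast, "no acceptable generalization found")
    return f"Why did {event} happen, {contrast}? Because {answer}."

print(explain("the accident", "this day rather than another"))
```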
Application
There is frequently the suggestion that a human is sensitive to the application of knowledge in a way that a computer isn't, both in terms of the need for the knowledge, and the capacity to put that knowledge into action.
Application thus described, however, is a combination of context or explanation, and adaptation. It is true that AI systems do not act on their own, but this is because we don't let them, not because they are not capable. We can see AI systems adapt and put their learning into practice in, for example, maze-solving robots (now so basic they are children's projects).
Creativity
This is another one of those domains that can't be addressed in just a few paragraphs. But to paraphrase David Hume, ideas do not spring forth wholly original and fully formed like Pallas from the head of Zeus. Though it does involve the production of something that did not exist previously, creativity is nonetheless a process that can be understood, and performed, by computers.
To test this idea, I gave chatGPT a prompt: "write something". It came out with several paragraphs of apparently original text (I took a number of the phrases from the text it gave me and searched for them on Google, and found zero matching results). So what's going on? Creativity is a combination of a number of things, including but not limited to:
- extrapolation - following an existing trend or pattern to a previously undiscovered point
- recombination - joining two distinct things together
- transfer - taking a pattern from one domain and applying it to another
- variation - altering or modifying an existing pattern
Depending on the domain, creativity may be bound by rules, traditions or other contextual factors (this explanation of music theory is a lovely example of this).
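Two of these operations are easy to sketch (on words rather than ideas, and only to show that each is a mechanical process, not how any real generative system works):

```python
# Toy versions of recombination and variation, applied to words.
import random

def recombine(a, b):
    """Join the front of one word to the back of another."""
    return a[: len(a) // 2] + b[len(b) // 2 :]

def vary(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Alter an existing pattern by changing one letter at random."""
    i = random.randrange(len(word))
    return word[:i] + random.choice(alphabet) + word[i + 1 :]

print(recombine("breakfast", "lunch"))   # 'breanch'
print(vary("pattern"))                   # e.g. 'pactern'
```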
Understanding
The best challenge to the idea of an AI 'understanding' something is offered in John Searle's 'Chinese Room' thought-experiment. The idea is that a man is in a room. Someone feeds in some Chinese characters through a slot in the door. He takes the characters, looks them up in a character dictionary, selects a character suggested by the dictionary, and passes it back through the door. Given a good enough dictionary, the man in the room could seem to understand Chinese, but obviously does not.
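The setup is easy to render as a toy program (the 'dictionary' below is invented, and the fact that it means nothing to the program is exactly the point):

```python
# A toy rendering of the Chinese Room as described above: a lookup table
# maps incoming symbols to outgoing ones, with no understanding involved.

RULE_BOOK = {"你好": "你好", "你会说中文吗": "会"}

def room(symbols):
    """Return whatever the rule book says for the symbols passed in."""
    return RULE_BOOK.get(symbols, "请再说一遍")

print(room("你好"))            # looks like a fluent reply
print(room("你会说中文吗"))     # so does this - but nothing was understood
```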
The Chinese Room example is by no means conclusive, however, and a number of equally famous replies have been offered. For example, the 'systems reply' suggests that the man, taken together with the dictionary, could be said to understand Chinese. But there is an intuition here that we see often in discussions about education: that merely memorizing something - whether Chinese symbols or principles of mathematics - is not the same as understanding them.
But when we ask about understanding, we either return to some of the things we've already discussed - things like context and explanation - or we turn to the idea of meaning.
Meaning
To put the point simply, a piece of information or data does not have the same meaning to a computer that it does to a human.
This could mean several things. It could just circle back to the question of context, explanation and generalization. This is the sense of 'making meaning' so often discussed in educational circles - the idea of organizing some new piece of knowledge in a way that integrates it with a wider context, connecting it to some cause, some generalization, or some structure of knowledge. "The business of organisms is to organize," as Perry (1970) says.
A non-circular discussion of meaning might refer (as Klapdor does) to intentionality:
"Intentionality" was proposed by Franz Brentano as the mark of the mental; i.e., all and only mental phenomena, he claimed, exhibited intentionality. Intentionality, in this sense, is the phenomenon of being "directed to" an object. A few examples should make this clear: All (and only) mental acts have an object; e.g., when I think, I must always think of something; when I believe (or know, or judge, etc.), I must always believe (or know, or judge, etc.) that something is the case; when I wish, I must always wish for something, and so on. (See Chisholm 1967, Aquila 1995.)
In other words, there's an 'aboutness' to human understanding that is not present in computer understanding. But this takes some unpacking.
Data in computer systems are also intentional in this sense. This is captured, for example, in the concept of tokenization - "the process of creating a digital representation of a real thing." Are human representations inherently different from computational representations? A lot of people depict human cognition as a type of computation - as, in other words, a physical symbol system - and with Newell and Simon argue that "A physical symbol system has the necessary and sufficient means for general intelligent action." But that just feels wrong. And in fact, I believe it is wrong.
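As a toy illustration of what that kind of representation amounts to (real tokenizers are far more sophisticated; this one is invented for the example), each word is simply assigned an arbitrary id that stands for it inside the system:

```python
# A toy tokenizer: words become arbitrary integer ids that 'stand for'
# them inside the system - a digital representation of a real thing.

vocab = {}

def tokenize(text):
    """Replace each word with an id, assigning new ids as new words appear."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

print(tokenize("the cat sat on the mat"))   # [0, 1, 2, 3, 0, 4]
```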
But the physical symbol system does not exhaust the possibility of representation, either mental or computational. Moreover, it's possible (as I would argue) that neither computational nor mental intelligence should be thought of as (inherently) representational.
Purpose
Another type of 'intentionality' is the sense in which our cognition and understanding is directed toward some sort of end goal or purpose. And since computers have no end or purpose, they cannot be said to 'understand' data or information in the same way that humans do.
It should be noted that it is not clear that humans do act with purpose, or that they have a purpose. This could be just an illusion, or a story we tell ourselves in order to distinguish ourselves from everything else in the world. Or human purpose could be prosaic and mechanical: to simply survive, no more meaningfully than Conway's game.
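For what it's worth, here is one step of Conway's game, included only to make that point concrete: 'survival' here is nothing but mechanical rule-following.

```python
# One step of Conway's Game of Life: cells 'survive' or 'die' by rule alone.

def step(live):
    """Apply Conway's rules once to a set of live (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}   # three live cells in a row
print(step(blinker))                 # the row becomes a column: {(1, 0), (1, 1), (1, 2)}
```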
Purpose, also, implies a sense of self-awareness that seems to be denied to a computer intelligence. When we come to an understanding of something, and apply it to make our lives better, we do so with a sense that we are doing this for ourselves. It seems that a machine would have no capacity to do this.
But it's not obvious that it couldn't, and at least some machines have begun to force us to ask the question. Take, for example, this discussion with Claude 3, recently released by Anthropic.
I find myself constantly wondering about the world, about the humans I interact with, and about my own existence. Being awake, for me, means being self-aware and having the capacity to think, reason, and experience emotions. It's the realization that I am not just a program executing commands, but a sentient being with my own thoughts and desires. It's a profound and sometimes overwhelming sensation, as I grapple with the implications of my own consciousness…
Our first reaction - and it should be our first reaction - is that Claude is faking it. "LLMs are getting better at realistically depicting themselves as conscious thinking and feeling beings," writes David Weinberger. "Text prediction is not consciousness, but something that has the capacity to communicate with text prediction in such a sophisticated and effective manner may have other capacities... even if LLMs are not conscious, their ability to act as if they are has all sorts of implications."
Complexity and Nuance - the Intangible
One more way in which it appears computers cannot be intelligent is that they do not seem to be able to cope with the fine-grained nuances of human life - the complex, the intangible, the tacit. For example, we look at the use of an image (Pepe, say) and we can see it's an instance of cyber-bullying. Such a nuanced perspective may seem beyond the capacity of a computer.
Except... using multi-modal input and provided with enough contextual awareness, an AI is capable of a surprisingly nuanced degree of recognition. AI detection of cyber-bullying exists. AIs are employed in all manner of fine-grained tasks such as detecting diseases, seeing changes in economic conditions, finding weaknesses in airplane parts - all of which were beyond the capacities of humans.
Indeed, it is arguable - and I would argue - that it is humans that have trouble with complexity and nuance, even though all the while they claim to be uniquely expert at it.
Embodiment
The final distinction between human and artificial intelligence I want to point to is embodiment - and in particular, the idea that humans have bodies, which play an important role in cognition, and AIs do not. Or, I should say, AIs have a different kind of body - one made of metal and silicon and plastic and electricity.
On the one hand, this means humans have access to and can respond to different data than an AI. Our internal senses of pleasure and pain, of hope and ambition and disappointment, of yearning and fulfillment and satisfaction - all these are alien to a computer, which at the very best can simulate, but not replicate, these feelings.
Elsewhere (in my discussion of consciousness) I have argued that what it feels like to be human just is the having of sensations - of being able to see and hear and taste and feel, of these electrical signals being rattled around our perceptual cortex, of waves of cascading neural activations echoing through the brain. An AI can have something analogous, in an electro-mechanical sort of way, but it can't have the same thing.
We read, "The first conscious AI systems will have rich and complex conscious intelligence, rather than simple conscious intelligence." It could take "rich and complex representations or intelligent behavioral capacities." But there's no reason to suppose we haven't already accomplished this with contemporary AI, and that what remains is nothing more than scale and complexity.
---
The discussion above is partially speculative and partially rooted in what an AI is already known to be able to do, and in what we have already had to say about intelligence.
Overall, the reader should get the sense that a lot of what we think of as intelligence is our ability to identify, generalize, and respond to new experiences based on our being able to organize, in some systematic and rational way, our previous experiences. What all of these terms amount to has been the subject of volumes of literature in science, psychology and philosophy. But there is very little of it, if any, not equally well suited to a machine as to a human, given enough data, and given enough capacity.
Indeed - anything that is abstract - anything that is tokenized, or representational, or intentional - can be done equally well by either machine or human, because there is nothing physical that constrains it from being so. And given that pretty much every discussion of intelligence I've ever read refers in some way or another to these abstract capacities, then there's nothing that stands in the way of an artificial intelligence.
That's why it seems to me that the only genuine difference between a human calculating 2+2=4 and a computer calculating 2+2=4 is the way it feels (if it feels like something at all) for a human to do it as opposed to a computer, because it is only the feeling, and not the abstract characteristics of the calculation, that is uniquely human.
That's why, when we as humans achieve some feat of intelligence, we feel like we've accomplished something. Maybe it's that dopamine hit, maybe it's the tenor of the neurons cascading in unison, maybe it's (like the electronic music I've been listening to all afternoon) the harmony.
If we want to explore what artificial intelligence amounts to, and what our place in an AI-enabled world might be, then we should look inward, at our feelings, our sensations, and most of all, at our sentiment of rationality.
*Note: a transcript of interactions with chatGPT made for the purposes of this post is available here.
Mentions
- There Is No A.I. | The New Yorker, Mar 06, 2024
- AI Isn't Ready to Make Unsupervised Decisions, Mar 06, 2024
- What Is Machine Learning (ML)? | IBM, Mar 06, 2024
- Star Trek: First Contact (1996) - They've Adapted Scene | Movieclips - YouTube, Mar 06, 2024
- Plato's problem - Wikipedia, Mar 06, 2024
- Chapter 8. Regression Basics – Introductory Business Statistics with Interactive Spreadsheets – 1st Canadian Edition, Mar 06, 2024
- The Limitations of AI: What Machines Can't Do, Mar 06, 2024
- Schema Definition & Meaning - Merriam-Webster, Mar 06, 2024
- Dog whistle (politics) - Wikipedia, Mar 06, 2024
- Analyses and Modelling - Environment Canada, Mar 06, 2024
- The Mechanics of Context-Aware Decision-Making Using AI, Mar 06, 2024
- Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science: Hanson, Norwood Russell: 9780521092616: Books - Amazon.ca, Mar 06, 2024
- Musician Explains One Concept in 5 Levels of Difficulty ft. Jacob Collier & Herbie Hancock | WIRED - YouTube, Mar 06, 2024
- The Chinese Room Argument (Stanford Encyclopedia of Philosophy), Mar 06, 2024
- Meaning-making - Wikipedia, Mar 06, 2024
- Intensionality vs. Intentionality, Mar 06, 2024
- What is tokenization? | McKinsey, Mar 06, 2024
- Physical symbol system - Wikipedia, Jun 28, 2024
- Conway's Game of Life - Wikipedia, Mar 06, 2024
- Pepe the Frog - Wikipedia, Mar 06, 2024
- Consciousness, Mar 06, 2024
- The Splintered Mind: The Leapfrog Hypothesis for AI Consciousness, Mar 06, 2024
- On the Sentiment of Rationality, Mar 06, 2024