Relevant Similarity

Stephen Downes


Nov 22, 1997

Posted to HotWired 23 Nov 97

This is no place for a treatise; however, I'd like to sketch in this post an alternative to traditional logic which may be worth some thought.

Standard defenses of logic begin the way this thread did: with the postulation of some examples, and an appeal to their self-evidence. And they are self-evident.

True, there is a large gaping hole in traditional logic: it is incomplete. This gap is most obvious in self-referential sentences, and becomes a problem for proof when statements in logic refer to logic itself. As Gödel effectively showed, logic cannot prove that logic is true.

An additional gaping hole in traditional logic is that it is contentless. As a previous poster mentioned, logic, properly so-called, is nothing more than a system for the manipulation of symbols. But logic, by itself, cannot show that any statement is true. You need foundational premises, and on these, logic is silent.

But against these are the following observations: (a) humans reason, and (b) they do it remarkably effectively. So it will not be enough to show that traditional logic fails. We need to show what it is that humans do that works so successfully.

Working toward an alternative, then: let me point to two ways in which I think traditional accounts of logic misrepresent the way humans think:

  1. Logic is sentential, but human reasoning is non-sentential, and
  2. Logic separates form and content, while human reasoning does not separate form and content.
Let's look at each.

The example given in the first post was in syllogistic form. We were presented with two sentences which led deductively to a third sentence. Logical analysis states that this argument is of the form "All S are P, m is an S, thus, m is a P".

Do we reason in this way? It's not very likely. First of all, none of us has the knowledge that "All S are P". Second, when we infer from "m is an S" to "m is a P" we make no explicit reference to the universal (All S are P) at all. But even more to the point, there is no identifiable mental event corresponding to "m is an S" in the first place.

The theory that we have 'sentences in the brain' is long established and has its contemporary defenders, such as Fodor, Chomsky, Pylyshyn, and many others. But the evidence points against this. When we see a dog, we do not create a (possibly infinite) set of sentences like 'm is a dog', 'm has four legs', 'm is chewing on my leg', etc., in our brain. What we do is form an image of a dog. But images are not sentences. They have no sentential properties. And more to the point, they cannot function as a premise in a syllogism.

Which leads to the second problem with traditional logic: the separation of form and content. The traditional theory says that logic is a collection of formal structures (like, say, jello molds) waiting to be filled with content (like, say, jello). Content is supplied by the senses, formed into a set of primitive 'observation sentences', which in turn can be shuffled from form to form to produce new, unexpected observation sentences (called 'predictions').

But again: humans just don't reason this way. Or, more precisely, we don't observe this way. No 'observation sentence' can be formed without the support of a host of theoretical assumptions (see Quine, Word and Object, or even Lakoff, Women, Fire and Dangerous Things).

Consider the sentence 'm is a dog'. In conventional language we might say "This is a dog", "That is a dog", or even "Mutt is a dog". Right off the bat, we need a theory of reference, which tells us what 'm' is. But people do not come with built-in theories of reference.

Even more problematic is 'is a dog'. What does that mean? Well, it means that the item in question belongs to a larger category, 'dogs'. What is it to be a dog? We need to have some prior conception of categorization, in general, and dogs, in particular.

The need for prior categorization has led to a lot of mythology in philosophy - see, for example, Kripke's Naming and Necessity. Unless categories are something we can build on the fly as needed - and that's not possible in traditional theory - we need a complete set of all possible categorizations prior to our forming any observation sentence.

The answer to this dilemma has to be that whatever 'observation statements' are formed in our brains must be formed as a direct consequence of whatever is observed. Or to put it another way: what we form in our brains are not descriptions formed by some sort of cognitive reflection on observational content, but rather, are impressions directly caused by observational content.

These two threads of thought lead naturally to an alternative picture of reasoning which I cast under the heading 'relevant similarity'.

When we observe a phenomenon, as I said, we do not form a sentence 'm is a dog' in our brain. Rather, what happens is that we acquire a non-sentential impression in our brain. More specifically, what happens when we see a dog is that patterns of neurons are activated, one layer after the other, the pattern of activation directly corresponding to the physical stimulation caused by the presence of a dog in our visual field.

Think of this pattern of perception as being similar to a wave washing up on a beach. As the interconnected neurons stimulate each other, this wave of perception washes up against pre-existing patterns, formed by prior perceptions, in our brain. In some cases - where the wave bears no resemblance to the shore - the wave crashes meaninglessly into prior perceptions. But in more interesting cases, the wave is relevantly similar to prior perceptions.

The concept 'dog' in our mind corresponds to no single entity (and specifically, no single 'word' or 'sign'). The concept 'dog' is stored as a pattern of interconnections between neurons. When we perceive a dog, this pattern is reinforced because of its similarity with the current perception.

When we activate the pattern of interconnected neurons corresponding to 'dog', we do not activate it in isolation. This is because the same neurons which compose the concept 'dog' also compose parts of other patterns, for example, 'legs', 'teeth', 'kibble', and 'Snoopy'. If, in the current context, the concept of 'legs' is relevant (that is, if the concept of 'legs' forms a part of our recent perceptions), then the perception of a specific dog, and the activation of the concept 'dog' in our mind, stimulates the activation of 'legs' in our mind. Thus, when we see a dog, if it's relevant, we think of legs.

This is not fantastical if you reflect on how you perceive. Think of Paris. What did you think? Did you form an isolated concept 'Paris' in your mind? Of course not. Your thought of Paris was accompanied by a wide variety (different for each person) of closely associated concepts: 'France', 'Eiffel Tower', 'Seine', 'Diana', etc.

This is how we would reason were we confronted with phenomena we have never seen. According to the 'brain writing' theorists, were an alien Kzin to land on earth, we would have to have all the required sentence structures and categories already formed in our brain in order to draw conclusions. But it seems unreasonable to expect this. Rather, what would happen is something like this: we see what appears to be a very large cat with teeth and claws emerging from the spaceship; this observation is similar to (and therefore activates) previous conceptions of such things as 'tigers' and 'lions', which in turn activate closely associated concepts such as 'danger' and 'flee'. A perfectly appropriate response, completely unjustified by traditional logic.
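For concreteness, here is a minimal toy sketch of this picture, written in Python, with the concept names and bit patterns invented purely for illustration (it is not a model of actual neurons): concepts are stored as binary activation patterns over a shared pool of units, a new perception activates whichever stored pattern it overlaps most, and activation then spreads to other concepts that share active units with the one just activated.

# A toy sketch of 'relevant similarity': concepts as binary activation
# patterns over a small pool of shared units. All names and patterns
# here are invented for illustration.

def overlap(a, b):
    """Count positions where both patterns are active ('1')."""
    return sum(1 for x, y in zip(a, b) if x == "1" and y == "1")

concepts = {
    "dog":   "11101000",
    "legs":  "01100100",
    "teeth": "10100010",
    "Paris": "00010011",
}

perception = "11101001"   # a new wave washing up against the stored patterns

# The stored concept most relevantly similar to the perception is activated.
best = max(concepts, key=lambda name: overlap(concepts[name], perception))
print("activated:", best)            # -> dog

# Activation spreads to concepts that share active units with 'dog'.
associated = [name for name in concepts
              if name != best and overlap(concepts[name], concepts[best]) > 0]
print("also stirred:", associated)   # -> ['legs', 'teeth']

Seeing the 'dog' pattern stirs 'legs' and 'teeth' because they share active units with it, but leaves 'Paris' untouched; nothing sentential is involved at any step.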

Relevant similarity can generate two closely related and fundamental concepts in traditional logic: universality, and categorization. Both are formed by the activation of parts of concepts rather than the whole.

Think of it this way: suppose we saw two phenomena. Their patterns of neural activation might be represented thus:

10111011
10110100

The two patterns have in common the following sequence: 1011xxxx. The latter is a 'category', which is composed of two members.
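As a small illustration of that operation (the function below is my own, not part of the original argument), the 'category' can be computed simply by keeping the positions on which the member patterns agree and marking the rest as unspecified:

# Sketch: a 'category' is the part of the activation pattern that its
# members share; positions where they differ are left unspecified ('x').

def shared_pattern(patterns):
    """Keep a bit where all patterns agree; mark disagreements with 'x'."""
    return "".join(bits[0] if len(set(bits)) == 1 else "x"
                   for bits in zip(*patterns))

print(shared_pattern(["10111011", "10110100"]))              # -> 1011xxxx

# Adding more members leaves fewer fixed positions - a more abstract category.
print(shared_pattern(["10111011", "10100000", "10110100"]))  # -> 101xxxxx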

Universality is obtained in a similar fashion, except that the process of abstraction is taken to much greater lengths.

Traditional logic views categorization and universality in one of two ways:

  1. the extrapolation of a general statement 'All A are B' from a (very large) set of specifics: 'm is an A and m is a B', 'n is an A and n is a B', etc. Unfortunately, this form of reasoning never works.
  2. the postulation of a prior set of 'necessary truths' including a set of natural categories bolstered by a theory of 'essential natures'. But this doesn't work either: given two competing sets of essential natures or necessary truths, we have no means of deciding which to follow.

Relevant similarity can explain the emergence of traditional logic, but is not tied to the assumptions of traditional logic. What emerges from this observation is a set of phenomena with considerable explanatory power of their own:

First, no two persons' concepts of any phenomenon will be an exact match. Your concept of 'Paris' will be different from mine. You think of 'France' while I, because of my experiences there, think of the Seine. This difficulty becomes more pronounced the more abstract we get. Your concept of 'freedom', 'democracy' and even 'natural' may be very different from mine because they are abstractions from very different sets of experiences.

But second, this means that any system of categorization will do, if it meets the need. We already do this a lot. Consider the way we subdivide physical geography. I live in the city of Brandon, in the Brandon school division, in the Westman regional health authority, in the provincial riding of Brandon-East, in the federal riding of Brandon-Souris, in the province of Manitoba, in the country of Canada. None of these categorizations matches, or is even a subset of, the others (except Manitoba, which is contained in Canada). None of them is a priori true. Each is directed toward a particular objective, and is invoked only when it is relevant to do so.

Finally, third, and most importantly, the employment of relevant similarity undermines the traditional concept of 'truth'. This takes a little explaining.

In formal logic, truth is nothing more than a meaningless quality which we attach to particular sentences, and which is preserved through sanctioned manipulations of those sentences (indeed, the basis of traditional logic is 'truth-preservation', and traditional logic is bolstered by a mirror set of axioms, called 'semantics', which reassure us that the manipulations really do preserve truth).

But thought of strictly in the abstract, there is no reason why we should employ a system of logic which preserves truth. For one thing, we could form a mirror-logic which preserves falsity. Alternatively, we could form a completely arbitrary concept, 'boog', which is preserved.

Here is a short intuitive proof of that. Suppose sentences were composed of 1s and 0s (semantics, in fact, does just that - these are known as 'truth conditions'). Then the rule of deduction is that:

An inference preserves truth, if and only if, where in the premise there is a 1, then in the conclusion there is also a 1.
For example, an inference from 1011 to 1111 preserves truth because, in every place in the premise where there is a 1, there is in the conclusion also a 1. But an inference from 1100 to 0001 does not preserve truth, because there is a 1 in the premise (the first and second spots, actually) where there is not a 1 in the conclusion.

But given no other information, there is no reason why we can't play with the rules. Such as:

  • An inference preserves falsity, if and only if, where in the premise there is a 0, then in the conclusion there is also a 0. Or,
  • An inference preserves boog, if and only if, where in the premise there is a sequence '10', there is in the conclusion a sequence '01'.
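To make these three rules concrete, here is a short sketch with sentences represented as bit strings. The reading of 'boog' below (if '10' occurs anywhere in the premise, then '01' must occur somewhere in the conclusion) is just one plausible interpretation of the deliberately arbitrary rule above.

# The three 'preservation' relations, sketched over bit-string sentences.

def preserves_truth(premise, conclusion):
    """Every position that is '1' in the premise is also '1' in the conclusion."""
    return all(c == "1" for p, c in zip(premise, conclusion) if p == "1")

def preserves_falsity(premise, conclusion):
    """Every position that is '0' in the premise is also '0' in the conclusion."""
    return all(c == "0" for p, c in zip(premise, conclusion) if p == "0")

def preserves_boog(premise, conclusion):
    """If the premise contains '10' anywhere, the conclusion must contain '01'."""
    return "01" in conclusion if "10" in premise else True

print(preserves_truth("1011", "1111"))    # True: every 1 in the premise survives
print(preserves_truth("1100", "0001"))    # False: the premise's 1s are lost
print(preserves_falsity("1100", "1110"))  # False: a 0 in the premise becomes a 1
print(preserves_boog("1011", "0101"))     # True: '10' in the premise, '01' in the conclusion

Mechanically, each relation is as respectable as the others; nothing internal to the symbol-shuffling singles out truth-preservation.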

Why don't we use one of the alternative logics? Because we'd die! Only the first form of logic works with our perceptions. If our use of truth-preservation has any foundation, it must be based on the evidence of our senses. That's a pretty weak foundation. If your concept of 'S' differs from my concept of 'S', then there is no means of knowing whether 'S is P' is true.

In relevant similarity, there are two possible approaches to truth:

  1. Truth by agreement - a sentence 'S is P' is true because most of us agree that it is, or
  2. Truth by inner conviction - a sentence 'S is P' is true because it activates a strongly interconnected set of neurons whenever a perception corresponding to the sentence is experienced.

My own preference is for the latter, and this is probably where the concept of truth originates, but I think that an analysis of truth in contemporary society would reveal that it is a combination of the two: some things we believe to be true because society as a whole does, and other things we believe to be true because of an inner conviction.

As I said at the outset of this note, this is not the place for a treatise. However, I hope you (the very patient reader who made it to the end) found some value in it.


