Cover of "The Last Superstition"
The Last Superstition: Great Gobs of Uncertainty

Chapter 6: The lump under the rug

In this section, Feser argues that the existence of the mind is incompatible with materialism. Not only that, but materialist explanations of mind often refer, if only implicitly or subconsciously, to Aristotelian concepts.

But first, he has to dispel a misconception:

to say that something has a final cause or is directed toward a certain end or goal is not necessarily to say that it consciously seeks to realize that goal. […] Thus it is no good to object that mountains or asteroids seem to serve no natural function or purpose, because Aristotelians do not claim that every object in the natural world necessarily serves some function. [pp. 237–238]

As I understand it, this is like saying that a pair of glasses is for improving sight, but of course the glasses themselves can’t possibly be conscious of this.

This is indeed an important point to keep in mind, and it’s a pity that the next sentence is

What they do claim is that everything in the world that serves as an efficient cause also exhibits final causality insofar as it is “directed toward” the production of some determinate range of effects.

Yes, but pretty much everything is the efficient (or proximate) cause of something. The mountains and asteroids that Feser just mentioned are the efficient cause of certain photons being reflected from the sun into my eye. Their gravity also attracts me, though only in hard-to-measure ways. A mountain can affect the weather and climate around it, and depending on its orbit, the asteroid might be on its way to kill all life on Earth. Does this “production of some determinate range of effects” automatically mean that they have final causes? Are these final causes related to what they do as efficient causes? That is, if a star looks beautiful in a telescope, does that mean that it’s for looking beautiful? Or, to come back to an earlier example, would an Aristotelian say that the moon orbits, therefore it’s for orbiting?

If so, then this reflects a childish understanding of the world, one where bees are there to pollinate plants, rain is there to water them, and antelopes are there to feed lions. If not, and if a thing’s final cause can be very different from its efficient cause (e.g., the moon orbits the Earth, and reflects light, but maybe its final cause is something else, like eclipses), then why bring it up?

The Mind as Software

Next, Feser considers the currently-fashionable metaphor of seeing the brain as a computer that processes symbols. Since I criticized him earlier for not understanding software, and for not even considering “Form” as a type of software, I was interested to see what he had to say.

First of all, nothing counts as a “symbol” apart from some mind or group of minds which interprets and uses it as a symbol. […] By themselves they cannot fail to be nothing more than meaningless neural firing patterns (or whatever) until some mind interprets them as symbols standing for such-and-such objects or events. But obviously, until very recently it never so much as occurred to anyone to interpret brain events as symbols, even though (of course) we have been able to think for as long as human beings have existed. [p. 239]

Here, Feser confuses the map with the territory: we can explain the brain at a high level by comparing it to a computer processing symbols. But symbols are only symbols if they’re interpreted as such by a mind. So neural firing patterns aren’t true according-to-Hoyle symbols, therefore checkmate, atheists!

This is like saying that the circadian rhythm is not a clock, because clocks have hands and gears.

Likewise, a little later, he writes:

No physical system can possibly count as running an “algorithm” or “program” apart from some user who assigns a certain meaning to the inputs, outputs, and other states of the system. [p. 240]

Again, Feser is paying too much attention to the niceties and details at the expense of the gist.

Imagine a hypothetical anthill. In the morning, the ants head out from the anthill, roughly at random, dropping pheromones on the ground as they do so. If one of the ants stumbles upon a piece of food, it picks it up and follows its trail back to the anthill. If its left antenna senses pheromone but the right one doesn’t, it turns a bit to the left; if its right antenna senses pheromone but its left one doesn’t, it turns a bit to the right. If both sense pheromone, it continues in a straight line. If we trace the biochemical pathways involved, we might find that the pheromone binds to a receptor protein that then changes shape and affects the strength with which legs on one or the other side of the body push against the ground, which makes the ant turn left or right.

We can imagine similar mechanisms by which other ants, sensing that one trail smells twice as strongly of pheromone (because the first ant traversed it twice), prefer to follow that trail rather than wander at random.
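To make the point concrete, here is a minimal sketch in C (my own illustration; the function names and the reduction to a pair of yes/no antenna readings are invented, and have nothing to do with real ant biochemistry) of the rules just described, written out as explicit code:

    #include <stdio.h>

    typedef enum { TURN_LEFT, TURN_RIGHT, GO_STRAIGHT } Steer;

    /* Steering rule: turn toward whichever antenna smells pheromone. */
    Steer steer(int left_antenna, int right_antenna)
    {
        if (left_antenna && !right_antenna)  return TURN_LEFT;
        if (right_antenna && !left_antenna)  return TURN_RIGHT;
        return GO_STRAIGHT;                  /* both or neither: keep going */
    }

    /* Trail choice: prefer whichever trail smells stronger. */
    int prefer_trail(double scent_a, double scent_b)
    {
        return scent_a >= scent_b ? 0 : 1;   /* index of the chosen trail */
    }

    int main(void)
    {
        /* Left antenna senses pheromone, right one doesn't: the ant turns left. */
        printf("steer: %s\n", steer(1, 0) == TURN_LEFT ? "left" : "not left");

        /* The twice-traversed trail smells twice as strong, so it wins. */
        printf("trail: %d\n", prefer_trail(2.0, 1.0));
        return 0;
    }

Whether or not anyone ever assigns a meaning to these variables, an ant wired this way will retrace its trail and converge on the stronger scent, which is all the foraging “success” depends on.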

These ants, of course, have no real brain to speak of. There’s no question of an ant being able to understand what a symbol is, let alone interpret it, let alone consciously follow an algorithm. All of the above is just fancy chemistry. And so Feser would, no doubt, say that the first ant is not following a “retrace my tracks” algorithm. Nor are the other ants following an algorithm to look for food where some food has already been discovered. Whatever it is that these ants are doing, it’s not an algorithm, because no one is assigning meaning to any part of the system.

But that doesn’t change the fact that the ants are finding food and bringing it back to the anthill. In which case, who cares if it’s a proper algorithm, or just something that looks like one to us humans?

Only what can be at least in principle conscious of following such rules can be said literally to follow an algorithm; everything else can behave only as if it were following one. [p. 241]

Feser then imagines a person who assigns arbitrary meanings to the buttons and display on a calculator (I like to think of a calculator whose buttons have been scrambled, or are labeled in an alien alphabet):

For example, if we took “2” to mean the number three, “+” to mean minus, and “4” to mean twenty-three, we would still get “4” on the screen after punching in “2,” “+,” “2,” and “=,” even though what the symbols “2 + 2 = 4” now mean is that three minus three equals twenty-three. [p. 242]

And likewise, if the pattern of pixels “All men are mortal” were interpreted to mean that it is raining in Cleveland, that would lead to absurd results.

What Feser ignores is that no one would use that calculator, because it doesn’t work. Or, at least, anyone who put three apples in a basket, then ate three of them, and expected to be able to sell 23 apples at market would soon realize that Mother Nature doesn’t care for sophistry.

If we had a calculator whose keycaps had all been switched around, or were labeled in alienese, we could eventually work out which button did what, by using the fact that any number divided by itself is 1, that any number multiplied by zero is zero, and so on. The specific symbols used for these operations, the numerical base the calculator uses, and other details don’t matter so long as the calculator can be used to do arithmetic, any more than a car’s speed changes depending on whether you refer to it in miles per hour, kilometers per hour, knots, or furlongs per fortnight.
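Here is a toy sketch of that decoding process, again entirely my own invention (the alien glyphs, the press() helper, and the restriction to single-digit results are all made up for illustration). It probes the scrambled calculator with identities like x - x = 0 and x / x = 1 and recovers what every key means:

    #include <stdio.h>

    /* The calculator's secret: digit d is printed on, and displayed as, glyphs[d]. */
    static const char glyphs[10] = { 'Q','W','E','R','T','Y','U','I','O','P' };

    static int digit_of(char g)              /* internal to the calculator */
    {
        for (int d = 0; d < 10; d++)
            if (glyphs[d] == g) return d;
        return -1;
    }

    /* Press "a op b =": the calculator computes with real numbers but answers
     * in glyphs. (Results are assumed to stay within 0..9 for this toy.) */
    static char press(char a, char op, char b)
    {
        int x = digit_of(a), y = digit_of(b), r = 0;
        switch (op) {
            case '+': r = x + y; break;
            case '-': r = x - y; break;
            case '/': r = x / y; break;
        }
        return glyphs[r];
    }

    int main(void)
    {
        char decoded[10];                    /* decoded[n] = the glyph meaning n     */
        char any = glyphs[7];                /* grab a key; pretend we can't read it */

        decoded[0] = press(any, '-', any);   /* x - x = 0 for any x                  */
        char nonzero = (any == decoded[0]) ? glyphs[3] : any;  /* a key that isn't 0 */
        decoded[1] = press(nonzero, '/', nonzero);  /* x / x = 1 for any nonzero x   */
        for (int n = 2; n <= 9; n++)         /* 1 + 1 = 2, 2 + 1 = 3, ...            */
            decoded[n] = press(decoded[n - 1], '+', decoded[1]);

        for (int n = 0; n <= 9; n++)
            printf("the key labeled '%c' means %d\n", decoded[n], n);
        return 0;
    }

The arithmetic itself pins down what the keys mean; the labels printed on them are irrelevant, which is why reinterpreting the symbols changes nothing about what the calculator is actually doing.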

Feser also applies his reasoning to Dawkins’s theory of memes:

If the competition between memes for survival is what, unbeknown to us, “really” determines all our thoughts, then we can have no confidence whatsoever that anything we believe, or any argument we ever give in defense of some claim we believe, is true or rationally compelling. For if the meme theory is correct, then our beliefs seem true to us, and our favored arguments seem correct, simply because they were the ones that happened for whatever reason to prevail in the struggle for “memetic” survival, not because they reflect objective reality. [p. 245]

This is reminiscent of Alvin Plantinga’s idea that since natural selection selected our senses for survival rather than for accuracy, then they can’t be trusted. That is, if I see a river in front of me, it’s merely because perceiving the current situation (whatever it might be) as a river helped my ancestors survive, and not necessarily because the current situation includes a river. Feser’s argument is similar, but applied to thoughts instead of senses.


This argument is technically correct, but less interesting than one might think. For one thing, we don’t need to speculate about whether our senses or thought processes are fallible: we know that they are. Every optical illusion tricks us into seeing things that aren’t there, and the psychological literature amply catalogs the ways in which our thoughts fail us (for instance, humans are notoriously bad at estimating probabilities). And for another, the best way to respond correctly to objects in the environment is, to a first approximation, to perceive them accurately.

If I may reuse my earlier illustration, imagine a person who thinks that the word “chair” refers to a yellow tropical fruit, the one that you and I call “banana”, and vice-versa. How long would it take this person to realize that they have a problem? If I invited them into my office and said, “take a chair”, they might look around for a bowl of fruit, but after two or three such instances, they’d probably realize that “chair” doesn’t mean what they think it does. On the other hand, it took me years before I realized that “gregarious” means “friendly” rather than “talkative”.

A clever writer can probably devise a dialog where “chair” can mean either “chair” or “banana”, but it would be difficult to do so, and would probably sound stilted. By comparison, it would be much easier to write a piece that makes sense whether you think that “gregarious” means “friendly” or “talkative”. And likewise, we can imagine an animal whose senses are mis-wired in such a way that it perceives a dangerous predator as a river, and has muscles and nerves mis-wired such that when it thinks it’s walking toward the river, it’s actually running away from the predator. But this is a contrived example, and unlikely in the extreme to be useful in the long run. A far more effective strategy (and one far more likely to evolve) is having some simple rules give the right answer 80% or 90% of the time. That is, to perceive the world accurately enough to survive in most plausible situations.

Feser and Plantinga are committing what’s been called the “any uncertainty implies great gobs of uncertainty” fallacy.


Cover of "The Last Superstition"
The Last Superstition: Material Brains, Immaterial Software

Chapter 5: The Mind-Body Problem

After spending several pages, as is his wont, trashing Locke, Descartes, and other people he doesn’t agree with, Feser tells us why materialist explanations of the mind are doomed: the human mind is all about final causes. We plan, we imagine, we make mental images, and so on. All of these involve “directedness toward” some object or aim, or intentionality. In other words, the mind is obvious proof that final causes exist.

And it should be obvious that it is simply a conceptual impossibility that it should ever be explained in terms of or reduced to anything material […]: material systems, the latter tell us, are utterly devoid of final causality; but the mind is the clearest paradigm of final causality; hence the mind cannot possibly be any kind of material system, including the brain. [p. 194]

There’s that word “obvious” again. Feser really ought to stop using it, since it causes so much trouble. Here, he’s committing the fallacy of composition. In fact, what Feser is saying is listed as an example of the fallacy at logicallyfallacious.com:

Your brain is made of molecules. Molecules do not have consciousness. Therefore, your brain cannot be the source of consciousness.

By coincidence, I recently saw Daniel Dennett present his talk, Consciousness: Whose User Illusion is it? in which he used examples that apply here as well: you can pick up a camcorder at Best Buy, record a video, and burn it to a DVD, but there are no pictures on the DVD. You can look through a microscope, but you won’t see tiny pictures on the disk. You can listen as closely as you like without hearing people talking. The pictures and sounds are not there. And yet the DVD does quite well at recording pictures, sounds, and video for later playback.

So do camcorders have an immaterial component? What about my car radio, which, since it can tune in on a radio signal, has some infinitesimal amount of intentionality; does it have an infinitesimal immaterial mind?

This sort of thing is why I can’t take Feser seriously. It’s one thing to proceed logically from premises that I don’t accept, or to value different things differently and come to opposite conclusions. But Feser commits a lot of elementary logical fallacies (or at least allows them to end up in print), and so he comes across as either a sloppy thinker or a dishonest one; either he can’t see the fallacies that lead to his desired conclusion, or he’s trying to fool people into thinking that his (and, presumably, their) conclusions follow logically from uncontroversial premises.


Cooches Are Filthy and Disgusting. Who Knew?

The BillDo is up in arms again. This time, he’s unhappy about a piece that aired on The Daily Show a few days ago about the right’s War on Women, contrasting it with the War on Christmas™, and specifically about a bit where Jon Stewart suggested that, to prevent unwanted government intrusion into their sex lives, women could protect their vaginas by placing mangers in front of them:

Vagina manger

It’s hard to tell what exactly it is about that image that has given poor Billy the vapors. He’s called it “obscene” and “vulgar”; he’s said that “Stewart not only made a vulgar attack on Christians, he objectified women”; he’s called it an “unprecedented assault on Christian sensibilities” and “anti-Christian and grossly misogynist”; he’s even declared that “What Jon Stewart did ranks with the most vulgar expression of hate speech ever aired on television” and that it is “so indefensible—putting a nativity scene ornament in between the legs of a naked woman—that no one save the maliciously sick would even try to defend it”.

So he’s clearly in a tizzy, but doesn’t say exactly what the problem is, or why this comedy bit should warrant such over-his-usual-over-the-top rhetoric, which means that I need to guess.

Psychologist Jonathan Haidt has been working on a theory of morals, studying what people find to be moral or immoral. This is from a psychological standpoint, not a philosophical or ethical one. That is, he’s not so much interested in figuring out what’s right or wrong as in finding out how people think about right and wrong.

One of his categories is “sanctity/degradation”, which concerns purity and contamination: an action is immoral if it contaminates the purity of the person or community. Thus, for instance, I’m guessing that most people would object if someone brought a dog turd in a clear Ziploc bag onto a subway train, because (in people’s minds, at least) dog turds are filthy and disgusting, and the subway car and its passengers would be in a sense contaminated by its presence.

As far as I can make out, this is the explanation that best fits BillDo’s reaction: he feels that manger scenes are pure and holy, and photoshopping one in proximity to a set of ladyparts contaminates it with, I don’t know, cooter cooties or something. Which leads inexorably to the conclusion that Bill thinks vaginas are filthy. I wonder how Mrs. Catholic League feels about that. Or maybe Bill feels this way because he’s gay. Dunno.

At any rate, this seems like his personal hangup. And maybe, until such time as he can get over it and realize that a vagina is no more dirty than any other body part (especially once it’s been thoroughly washed, ideally by a willing showermate), he should just fuck off.

(Psychological analysis brought to you by the Institute for Advanced Psychological Research and Bajingo Jokes.)

Three Different Things that Look Similar

Here are three statements:

  • St. Anselm says that no one really disbelieves in God.
  • Stephen Hawking says that spacetime is smooth at the Big Bang.
  • PZ Myers says that “The only appropriate responses should involve some form of righteous fury, much butt-kicking, and the public firing and humiliation of some teachers”

All three are of the form “person X says Y“, but they’re really three different types of statement. See if you can figure it out before meeting me after the jump.


The Thing, and the Name of the Thing

Yesterday, during a routine medical examination, I found out that I have a dermatofibroma.

Don’t worry about me. My prognosis is very good. I should still have a few decades left. It means that at some point I got bitten by an insect, a piece of stinger was probably left behind, and scar tissue formed around it.

But if you thought, if only for a moment, that something with a big scary name like “dermatofibroma” must be a big scary thing, well, that’s what I want to talk about.

I’ve mentioned elsewhere that as far as I can tell, the human mind uses the same machinery to deal with abstract notions and patterns as it does with tangible objects like coins and bricks. That’s why we speak of taking responsibility, of giving life, of sharing our troubles, and so forth. (And I bet there’s research to back me up on this.)

A word is the handle we use to grab hold of an idea (see what I did there?), and sometimes we’re not very good at distinguishing between the word and the idea. I know that it’s a relief to go to the doctor with some collection of symptoms and find out that my condition has a name. Even if I don’t know anything about it, at least it’s a name. It’s something to hold on to. Likewise, I remember that back in the 80s, simply coming up with the name “AIDS” seemed to make the phenomenon more tractable than some unnamed disease.

I think a lot of deepities and other facile slogans work because people tend not to distinguish between a thing, and the word for that thing. Philosophers call this a use-mention error. C programmers know that it’s important to distinguish a variable, a pointer to that variable, a pointer to a pointer to the variable, and so forth.[1]
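For the C-minded, here is a tiny snippet (mine, not from any of the texts under discussion) showing the three levels side by side:

    #include <stdio.h>

    int main(void)
    {
        int   thing = 42;         /* the thing itself                  */
        int  *name  = &thing;     /* a handle that refers to the thing */
        int **meta  = &name;      /* a handle to the handle            */

        /* All three routes lead back to the same value... */
        printf("%d %d %d\n", thing, *name, **meta);

        /* ...but the handles are distinct objects with their own addresses. */
        printf("%p %p\n", (void *)name, (void *)meta);
        return 0;
    }

Confuse any two of these levels and the compiler (or reality) will let you know.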

The solution, I’ve found, is to keep a mental model of whatever the discussion is about, kind of like drawing a picture to help you think about a math problem. For instance, if a news report says that “seasonally-adjusted unemployment claims were up 1% in December” and I wonder why the qualifier “seasonally-adjusted” was thrown in there, I can think of department stores hiring lots of people for a few months to handle the Christmas rush.

Richard Feynman describes this process in Surely You’re Joking, Mr. Feynman. In the chapter Would You Solve the Dirac Equation?, he writes:

I can’t understand anything in general unless I’m carrying along in my mind a specific example and watching it go. Some people think in the beginning that I’m kind of slow and I don’t understand the problem, because I ask a lot of these “dumb” questions: “Is a cathode plus or minus? Is an an-ion this way, or that way?”

But later, when the guy’s in the middle of a bunch of equations, he’ll say something and I’ll say, “Wait a minute! There’s an error! That can’t be right!”

The guy looks at his equations, and sure enough, after a while, he finds the mistake and wonders, “How the hell did this guy, who hardly understood at the beginning, find that mistake in the mess of all these equations?”

He thinks I’m following the steps mathematically, but that’s not what I’m doing. I have the specific, physical example of what he’s trying to analyze, and I know from instinct and experience the properties of the thing. So when the equation says it should behave so-and-so, and I know that’s the wrong way around, I jump up and say, “Wait! There’s a mistake!”

This sort of thinking is a way to have the analytical and intuitive parts of your mind working in tandem. If you have an intuitive understanding of the system in question — be it computer code or preparing a Thanksgiving meal for twelve — you can apply that intuition toward understanding how everything is supposed to work. At the same time, your analytical mind can work out the numerical and logical parts. Normally, they should give the same result; if they don’t, then there’s probably an error either in your analysis or in your intuition.

The downside of this approach is that I tend to get very frustrated when I read theologians and philosophers — or at least the sorts of philosophers who give philosophy a bad reputation — because they tend to say things like “a lesser entity can never create something greater than itself” without saying how one can tell whether X is greater or lesser than Y, and without giving me anything to hang my intuition on. And if a discussion goes on for too long without some sort of anchor to reality, it becomes hard to get a reality check to correct any mistakes that may have crept in.

Since I started with jargon, I want to close with it as well. Every profession and field has its jargon, because it allows practitioners to refer precisely to specific concepts in that field. For instance, as a system administrator, I care whether an unresponsive machine is hung, wedged, angry, confused, or dead (or, in extreme cases, simply fucked). These all convey shades of meaning that the user who can’t log in and do her work doesn’t see or care about.

But there’s another, less noble purpose to jargon: showing off one’s erudition. This usage seems to be more prevalent in fields with more, let’s say, bullshit. If you don’t have anything to say, or if what you’re saying is trivial, you can paper over that inconvenient fact with five-dollar words.

In particular, I remember an urban geography text I was assigned in college that had a paragraph that went on about “pendular motion” and “central business districts” and so on. I had to read it four or five times before it finally dawned on me that what it was saying was “people commute between suburbs and downtown”.

If you’re trying to, you know, communicate with your audience, then it behooves you to speak or write in such a way that they’ll understand. That is, you have a mental model of whatever it is you’re talking about; and at the end of your explanation, your audience should have the same model in their minds. Effective communication is a process of copying data structures from one mind to another in the least amount of time.

That geography text seemed like a textbook example (if you’ll pardon the expression) of an author who knew that what he was saying was trivial, and wanted to disguise this fact. I imagined at the time that he wanted geography to be scientific, and was jealous of people in hard sciences, like physicists and astronomers, who can set up experiments and get clear results. A more honest approach, it seems to me, would have been to acknowledge from the start that while making geography scientific is a laudable goal, it is inherently a messy field; there are often many variables involved, and it is difficult to tease out each one’s contribution to the final result. Add to this the fact that it’s difficult or impossible to conduct rigorously controlled experiments (you can’t just build a second Tulsa, but without the oil industry, to see how it differs from the original), and each bit of solid data becomes a hard-won nugget of knowledge.

So yes, say that people commute. Acknowledge that it may seem trivial, but that in a field full of uncertainty, it’s a well-established fact because of X and Y and Z. That’s the more honest approach.


[1]: One of my favorite error messages was in a C compiler that used 16 bits for both integers and pointers. Whenever my code tried to dereference an int or do suspicious arithmetic with a pointer, the compiler would complain of “integer-pointer pun”.

(Update, 11:43: Typo in the Big Scary Word.)

Autistic Artists and Plagiarism

I’ve been having a bit of an argument with someone on another site — a wiki — over his tendency to copy pages from other sites, instead of restating the information in his own words.

Stick around. This isn’t about SIWOTI. I promise I’ll get to the autistic artists soon enough.

I think we all recognize that there’s a difference between copying, and summarizing or paraphrasing. Paraphrasing is a two-step process: first, you read and understand the original text, that is, you convert it into an internal representation in your brain; and then you take that internal representation and turn it back into text. Copying, on the other hand, is a relatively mindless activity: you just take the original string of words and duplicate them.

Paraphrasing takes much more mental activity than copying, and that’s why it’s more respectable: if you can successfully paraphrase an article, that means you’ve managed to understand it, and have also managed to express thoughts in writing.

There are a number of autistic people with exceptional artistic talents: Gilles Tréhin, Stephen Wiltshire, and others. Chuck Close isn’t autistic, but he is face-blind, meaning that he can’t recognize faces. Yet he’s an artist known for his portraits of faces.

Stephen Wiltshire - Royal Albert Hall
Drawing of the Royal Albert Hall by Stephen Wiltshire, made at the age of 9.

What I notice about these artists is that their pictures are realistic. They seem to have an innate grasp of perspective. Windows and such are not evenly spaced on the paper, but become progressively closer together as they recede into the distance. Balconies and buttresses change orientation as they go around a building, and so forth. These are things that pre-Renaissance artists struggled with. (Okay, I’m not talking about Chuck Close so much here. More Wiltshire and Tréhin.)

And this brings us back to copying vs. paraphrasing.

The stereotypical child’s drawing has a house represented as an irregular pentagon, a tilted rectangle for a chimney, some curlicues for smoke coming out of the chimney, and one to four stick figures one and a half to two stories tall, standing on a flat expanse of green. In other words, it looks nothing like a house.

So I suspect that the way normal people draw is comparable to paraphrasing, as described above: when we see a house, or a tree, or a person, we don’t really see the lines, colors, and shapes formed on our retinas. All of that detail is processed, number-crunched, and turned into some internal data structure that represents the subject. For instance, I can instantly recognize my friends and family, even under different lighting conditions, or after the passage of time has altered their features. But I would have much more trouble describing them to you in such a way that you could pick them out of a lineup. I’d have even more trouble drawing a picture of them.

So when ordinary people draw a house or a face, we have trouble converting our abstract internal representation into concrete lines, because we never paid much attention to those lines. That’s one of the things you learn in art class. You have to unlearn the intuitive understanding of what a thing is, and look past it to see what the thing looks like. (This may be related to “first sight” in Terry Pratchett’s A Hat Full of Sky.)

But if someone has a problem recognizing things, if their world is a jumble of lines and colors, that may stand them in good stead in artistic endeavors, in that they’re not distracted by what things are, and can see what things look like. There’s an art class exercise in which you have to copy a picture — say, a portrait — that’s been turned upside-down. That way, the original picture is what it is, but it isn’t a face, and you’re not distracted by its being a face.

Just in case it wasn’t obvious, I’m not a neurologist, psychologist, or even an artist, so I’m not qualified to make pronouncements on this. But it seems like a fairly nifty idea.