Creative Ignorance

The other day, I was in a discussion about whether computers can be creative. Personally, I thought the answer was a big “duh, yes”, if only because programs (even ones I’ve written myself) often do things that surprise me. But at least I managed to shift the conversation toward the question of “what is creativity? How will you recognize it when a computer achieves it?” And along the way, I noticed a couple of things about creativity.

For one thing, the perception of creativity can depend on the audience’s ignorance. Years ago, I wrote a custom email filter for my boss, because none of the commonly-used ones could easily do what he wanted (like filter on the number of people in the “To:” and “Cc:” lines). When I showed it to him, he thought it was the most amazing thing ever, and said we should write a paper about it and send it to a research journal. I told him that it was far too trivial, that I couldn’t in good conscience call it groundbreaking or innovative, and that I’d be embarrassed to submit it anywhere.

In short, my boss thought my code was innovative because he knew far less than I did about the state of mail filters. And to this day, whenever I see a statue or painting or something and think, “Oh, that’s cleverly cool! I never would’ve thought of that”, I immediately have second thoughts along the lines of “Yes, but that’s because you don’t hang out with artists and go to galleries and such. The person who did this probably just took five or six ideas that were floating around the technisphere and tweaked them.”

A lot of the proposed definitions of “creativity” circled around the general idea of “using a tool in a new or unexpected way”. And it occurred to me that you don’t need intelligence to be creative in this way. If you don’t know what a tool is for, you won’t be burdened with preconceived ideas of how you ought to use it. In fact, that’s how natural selection works: it has no intelligence whatsoever, and doesn’t know that wings are “for” protecting eggs, and doesn’t punish those individuals that manage to use them for gliding or flying.

Of course, if you’re an adult human, then you’re intelligent (at least compared to natural selection or a bacterium), so this type of creativity is harder. But you can use first sight.

In Terry Pratchett’s A Hat Full of Sky, Tiffany Aching is said to have “first sight and second thoughts”. First sight is the ability to see what’s actually in front of you, rather than what you think is there.

There’s an old story, which I’m sure you’re all familiar with, about a student who was asked on a test to measure the height of a building with a barometer. Because the problem specified the use of a barometer, clearly the instructor expected students to use the barometer for the thing that barometers are supposed to be used for, namely measuring air pressure.

The reason the student’s smartass answers seem creative (oh, come on, admit it: you thought they were cool the first time you heard the story) is that he ignores the fact that barometers are for measuring air pressure, and sees the barometer’s other properties: it has mass, so it can be swung like a pendulum; it has length, so it can be used to count off units of height; it has value, so it can be offered as a bribe.

Outside of the world of contrived puzzles, first sight can also be useful, because it lets you stop asking “what is this for?” and start asking “what can I do with this?”. That last question, in turn, breaks down into sub-questions like “what tools do I have?”, “what properties do they have?”, and “how does this property help me solve my problem?”

For instance, spreadsheets are nominally for tabulating data, aggregating sums and averages of interesting numbers, and like that. But people have noticed that hey, Excel does arithmetic, so why not use it as a calculator? I’ve also worked with people who noticed that hey, it lays things out in neat columns, so why not use it as a to-do list?

When technology advances, once-expensive tools sometimes become cheap enough to throw at mundane tasks. Car phones have existed for a long time, but if you grew up in the 1960s, you probably decided that they were just fancy toys that rich people used to flaunt their wealth. By the 1990s, though, they had become cheap enough that anyone could have one. So if you were running a business in the 90s and still expecting people to use pay phones to stay in touch with the office while they were traveling, you were going to have your lunch eaten by competitors who had looked at the field as it was, not as you imagined it, and realized that they could just give all their salespeople and field techs cell phones.

On a grander scale, the Internet was originally set up for government researchers to share data, and as a nuclear-war-resistant means of communication for the military. It certainly wasn’t built to help you find friends from High School or coordinate popular uprisings in the Middle East. That part came from people looking at the thing for what it was, and ignoring — or often ignorant of — what it was supposed to be for.

What’s interesting about this, I think, is that you don’t need to be a genius to be creative. In fact, you don’t even need intelligence at all. A lot of creationists look at the complexity of biological systems and can think only in terms of a superior intellect putting the pieces together to achieve a goal.

But if I’m right, then it’s possible to be creative simply by being too stupid to know what’s impossible. Creativity can be what Dennett called a crane, rather than a skyhook.

What Good Is Math?

I think most people, if you asked them, would say that teaching math in school is a good thing. If you ask why, the usual answer is something like: it allows you to figure out how much carpet and wallpaper you need to buy to redecorate your living room, or to determine whether the 12-can pack of ravioli at Costco is cheaper than what you can find at Safeway, and so forth.

But that’s all elementary and High School level stuff: arithmetic, geometry, a dash of algebra. I’ve rarely used trigonometry since college, and I know one person who has used calculus in quilting. I think it’s safe to say that most people never use calculus, differential equations, propositional logic, etc. after college. But I still say there’s value in studying math beyond the practical.

Let me ask a contrasting question: why should kids play sports in school? Only a tiny minority of them will make a living playing or coaching sports, and less than half will even play on the office softball team or the like.

The answer I get most often is that sports teach teamwork: how to subsume your immediate desires for the greater good, make sure you do your part and trust your teammates to do theirs, and communicate effectively to make sure you’re not working at cross-purposes. You learn to win graciously and learn from your failures.

In other words, the point isn’t so much that playing sports develops muscle tone and general fitness; rather, sports are an indirect way of teaching other skills, like cooperation. They may not be taught explicitly, and there are other ways of teaching them, but this works.

The same thing, I think, applies to math. A few years after learning it, you probably won’t be able to prove that there are infinitely many primes, or remember how to integrate a function. But that’s okay, because math teaches skills other than the purely pragmatic.

Proving theorems, for instance, forces you to distinguish between what you think is true, and what you can demonstrate; what looks right, and what is right. Geometry teachers always admonish students not to reason from the diagram because concrete examples are often misleading: just because line AB is perpendicular to line CD in this drawing doesn’t mean that that’s always the case. The point is to figure out what’s universally true, not just what’s true about this particular instance.

These are skills that apply to non-mathematical professions: judges and lawyers often deal with people who have clearly done something wrong, and need to distinguish between “I know that ain’t right” and whether the action in question is legal or not. Police officers likewise need to know the difference between “I know Jimmy’s been selling crack to students” and “I can prove to a jury that Jimmy’s been selling crack”. Programmers need to write code that keeps working in situations they never imagined, because it’s a truism that end users will do weird things with your software that you never dreamed of. Financiers should be able to put together a portfolio that can survive unexpected catastrophic changes in the market. (And as much as it pains me to defend the Bush administration, Donald Rumsfeld was quite right in distinguishing between “known unknowns” and “unknown unknowns”, and trying to plan for both.)

Or take the discussion about defining information in the “I Get Email” thread; specifically, the exchange between Tom and Troublesome Frog on whether all living beings contain information. It seems that Tom doesn’t quite get the difference between ∀x ∀y p (no matter how you define information, all living beings contain information) and ∃x ∀y p (we can come up with a definition of “information” such that all living beings contain information).
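
To see the gap, here’s a toy illustration of my own (not from the original exchange). Let x range over definitions of “information”, let y range over living beings, and let p stand for “y contains information under definition x”:

    ∀x ∀y p — under every definition anyone could propose, every living being contains information.
    ∃x ∀y p — there is at least one definition, perhaps a gerrymandered one, under which every living being contains information.

The second claim is much weaker than the first: if you get to pick the definition after the fact, you can make almost any universal statement come out true.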

Math is very big on abstraction. This starts in algebra, which is all about figuring things out about things that you know you don’t know much about. And it just gets more abstract from there. The first time you demonstrate that there is no solution for some equation, or better yet, that there can be no proof of a given proposition, can be quite a thrill.

And abstract thought is one of those things we humans are good at. It’s what allows us to formulate moral rules that apply to everyone, and see things like “the market” instead of a bunch of people trading stuff. It seems that we ought to learn to do it well.

One of my favorite types of SAT question is the one that presents a math problem, and one of the possible answers is “not enough information to solve the problem”. The lesson here isn’t just “know your limitations”. It also shows that just because something looks solvable doesn’t mean that it is; that just because something is printed in an official-looking book doesn’t make it kosher. And also, the sooner you figure out that a problem has no solution, the less time you’ll waste looking for one.

Finally, one thing that everyone should get out of any math class is that you can figure stuff out on your own, without looking the answers up in the back of the book. Yes, the same is true of science and other classes, but math is one of those branches where you don’t need fancy equipment to work on a problem and figure out the solution.

One problem I see is that a lot of people seem to think that knowledge is something that is handed down from on high, rather than something that can be created. This seems to be at the root of the claim that evolution is just another religion: “I have my priests who tell me that God created humans, and you have your priests who tell you humans evolved. The only difference between creation and evolution is which team you’re on.” There’s no arguing with someone who simply repeats what they were taught; but if you’re in an argument with someone who thinks that mere mortals can work out answers on their own, you might get somewhere. And that’s something we could use more of.

A Musical Interlude

Sorry for not posting anything recently (aside from trying to get answers from Tom). So to tide you over, here’s some music:

Siouxsie and the Banshees improve on an Iggy Pop song, The Passenger:
[youtube http://www.youtube.com/watch?v=4nAON-MwUPY&fs=1&hl=en_US]

Tom Shear (d/b/a Assemblage 23) doing his thing in Anthem. I, for one, love the way the various voices come in and out throughout the song.
[youtube http://www.youtube.com/watch?v=vpuu0-7_R5Q&fs=1&hl=en_US]

Conclusions Arrived at After Watching a Bunch of College Students, Both Theist and Atheist, Discussing the Origin and Nature of Morality

We all want simple rules by which to live. But simple rule sets are simplistic. Failing that, we want a simple set of principles by which to make rules by which to live. But those are usually arbitrary, and often fail to take into account that people want different things for many reasons, some good and some bad.

There aren’t any simple answers. There aren’t even any simple ways to get at the answers.

And as I said earlier, morality, like the value of a dollar, is an evolving emergent phenomenon.

MLK Art Photos

I met up with R. on the Mall today to see the MLK art, an accordion-folded display that showed four pictures of Martin Luther King, each with a quotation.

A PA system at the base played King’s I Have a Dream speech on continuous loop.


Teabagger Rally Photos

Some photos of the Glenn Beck teabagger rally on the Mall today.


The Thing, and the Name of the Thing

Yesterday, during a routine medical examination, I found out that I have a dermatofibroma.

Don’t worry about me. My prognosis is very good. I should still have a few decades left. It means that at some point I got bitten by an insect, but a piece of stinger was probably left behind, and scar tissue formed around it.

But if you thought, if only for a moment, that something with a big scary name like “dermatofibroma” must be a big scary thing, well, that’s what I want to talk about.

I’ve mentioned elsewhere that as far as I can tell, the human mind uses the same machinery to deal with abstract notions and patterns as it does with tangible objects like coins and bricks. That’s why we speak of taking responsibility, of giving life, of sharing our troubles, and so forth. (And I bet there’s research to back me up on this.)

A word is the handle we use to grab hold of an idea (see what I did there?), and sometimes we’re not very good at distinguishing between the word and the idea. I know that it’s a relief to go to the doctor with some collection of symptoms and find out that my condition has a name. Even if I don’t know anything about it, at least it’s a name. It’s something to hold on to. Likewise, I remember that back in the 80s, simply coming up with the name “AIDS” seemed to make the phenomenon more tractable than it had been as a nameless disease.

I think a lot of deepities and other facile slogans work because people tend not to distinguish between a thing, and the word for that thing. Philosophers call this a use-mention error. C programmers know that it’s important to distinguish a variable, a pointer to that variable, a pointer to a pointer to the variable, and so forth¹.
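
For the non-C-programmers, here’s a minimal sketch of that distinction (my own illustration, not anyone else’s code; the variable names are made up):

    #include <stdio.h>

    int main(void)
    {
        int x = 42;       /* the thing itself */
        int *p = &x;      /* the "name" of the thing: its address */
        int **pp = &p;    /* the name of the name of the thing */

        /* All three expressions below refer to the same 42, but x, *p,
         * and **pp sit at different levels of indirection; mixing them
         * up is the programming equivalent of a use-mention error. */
        printf("%d %d %d\n", x, *p, **pp);
        return 0;
    }

In prose, confusing the thing with its name merely muddies an argument; in C, the compiler complains, or the program crashes.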

The solution, I’ve found, is to keep a mental model of whatever the discussion is about, kind of like drawing a picture to help you think about a math problem. For instance, if a news report says that “seasonally-adjusted unemployment claims were up 1% in December” and I wonder why the qualifier “seasonally-adjusted” was thrown in there, I can think of department stores hiring lots of people for a few months to handle the Christmas rush.

Richard Feynman describes this process in Surely You’re Joking, Mr. Feynman. In the chapter Would You Solve the Dirac Equation?, he writes:

I can’t understand anything in general unless I’m carrying along in my mind a specific example and watching it go. Some people think in the beginning that I’m kind of slow and I don’t understand the problem, because I ask a lot of these “dumb” questions: “Is a cathode plus or minus? Is an an-ion this way, or that way?”

But later, when the guy’s in the middle of a bunch of equations, he’ll say something and I’ll say, “Wait a minute! There’s an error! That can’t be right!”

The guy looks at his equations, and sure enough, after a while, he finds the mistake and wonders, “How the hell did this guy, who hardly understood at the beginning, find that mistake in the mess of all these equations?”

He thinks I’m following the steps mathematically, but that’s not what I’m doing. I have the specific, physical example of what he’s trying to analyze, and I know from instinct and experience the properties of the thing. So when the equation says it should behave so-and-so, and I know that’s the wrong way around, I jump up and say, “Wait! There’s a mistake!”

This sort of thinking is a way to have the analytical and intuitive parts of your mind working in tandem. If you have an intuitive understanding of the system in question — be it computer code or preparing a Thanksgiving meal for twelve — you can apply that intuition toward understanding how everything is supposed to work. At the same time, your analytical mind can work out the numerical and logical parts. Normally, they should give the same result; if they don’t, then there’s probably an error either in your analysis or in your intuition.

The downside of this approach is that I tend to get very frustrated when I read theologians and philosophers — or at least the sorts of philosophers who give philosophy a bad reputation — because they tend to say things like “a lesser entity can never create something greater than itself” without saying how one can tell whether X is greater or lesser than Y, and without giving me anything to hang my intuition on. And if a discussion goes on for too long without some sort of anchor to reality, it becomes hard to get a reality check to correct any mistakes that may have crept in.

Since I started with jargon, I want to close with it as well. Every profession and field has its jargon, because it allows practitioners to refer precisely to specific concepts in that field. For instance, as a system administrator, I care whether an unresponsive machine is hung, wedged, angry, confused, or dead (or, in extreme cases, simply fucked). These all convey shades of meaning that the user who can’t log in and do her work doesn’t see or care about.

But there’s another, less noble purpose to jargon: showing off one’s erudition. This usage seems to be more prevalent in fields with more, let’s say, bullshit. If you don’t have anything to say, or if what you’re saying is trivial, you can paper over that inconvenient fact with five-dollar words.

In particular, I remember an urban geography text I was assigned in college that had a paragraph that went on about “pendular motion” and “central business districts” and so on. I had to read it four or five times before it finally dawned on me that what it was saying was “people commute between suburbs and downtown”.

If you’re trying to, you know, communicate with your audience, then it behooves you to speak or write in such a way that they’ll understand. That is, you have a mental model of whatever it is you’re talking about; and at the end of your explanation, your audience should have the same model in their minds. Effective communication is a process of copying data structures from one mind to another in the least amount of time.

That geography text seemed like a textbook example (if you’ll pardon the expression) of an author who knew that what he was saying was trivial, and wanted to disguise this fact. I imagined at the time that he wanted geography to be scientific, and was jealous of people in hard sciences, like physicists and astronomers, who can set up experiments and get clear results. A more honest approach, it seems to me, would have been to acknowledge from the start that while making geography scientific is a laudable goal, it is inherently a messy field; there are often many variables involved, and it is difficult to tease out each one’s contribution to the final result. Add to this the fact that it’s difficult or impossible to conduct rigorously controlled experiments (you can’t just build a second Tulsa, but without the oil industry, to see how it differs from the original), and each bit of solid data becomes a hard-won nugget of knowledge.

So yes, say that people commute. Acknowledge that it may seem trivial, but that in a field full of uncertainty, it’s a well-established fact because of X and Y and Z. That’s the more honest approach.


¹: One of my favorite error messages was in a C compiler that used 16 bits for both integers and pointers. Whenever my code tried to dereference an int or do suspicious arithmetic with a pointer, the compiler would complain of “integer-pointer pun”.

(Update, 11:43: Typo in the Big Scary Word.)

TAM 8 Miscellanea

Some notes I jotted down during talks at TAM 8:

My wife and I have an agreement: if Brian Williams ever becomes single, she gets to leave me and marry him. And if Rachel Maddow ever… um, changes her mind… then I get to marry her.
— Hal Bidlack(?)

I’m a vegetarian zombie. I only eat rotten fruit.
— Joe Nickell

At the Q&As after talks, most people would introduce themselves by giving their name and employer. But one person prefaced his question with:

Hello. My name would waste valuable time, and where I work is embarrassing.

Paul Provenza on George W. Bush:

He’s like a low-rent antichrist: two sixes and a five.

He thinks history will vindicate him. Who does he think he is, The Velvet Underground?

He also mentioned getting into a fight with a network censor who allowed a sketch that made fun of God, but not one about Jesus, because:

You can make fun of God, because he doesn’t exist. But you can’t make fun of Jesus, because he’s God’s son.

I’d brought Richard Dawkins’s The Greatest Show on Earth to read on the plane.* Colour plates 18-19 show a map of the Earth’s tectonic plates, including one labeled “Philippine Plate”.

So of course, given my precedent of having people sign books they didn’t write, I had to get Phil Plait to sign it:


* I didn’t get to read much of it on the way over, though: I didn’t get an assigned seat in advance for the flight from Detroit to Las Vegas, so instead of getting an aisle seat like I’d wanted, I got stuffed next to two young Chinese men in the very last row, by the window, the complete opposite of where I wanted to be.

But during the hustle and bustle of people competing to see how much crap can be shoved into an overhead bin without making the fuselage bulge, a Chinese man asked me if I’d be willing to trade seats with him so that he could sit with his sons. He even apologized that his was an aisle seat instead of the window that I so obviously wanted. I thanked him, and we traded.

When I got to my new seat, there was someone already in it, chatting up the good-looking lady in the middle seat. He went back to his seat in the row behind. A few moments later, a young woman from the row behind came up and swapped places with the lady the guy had been chatting up.

So my new seat neighbor turned out to be a geologist on her way to TAM. I suppose if we weren’t headed for the same convention on skepticism and rational thinking, it would’ve been easy to invoke mystical forces of fate or destiny. But of course that would’ve been silly.

At any rate, she was a better conversationalist than Dawkins’s book, so I didn’t get as much reading done as I’d thought.

Autistic Artists and Plagiarism

I’ve been having a bit of an argument with someone on another site — a wiki — over his tendency to copy pages from other sites, instead of restating the information in his own words.

Stick around. This isn’t about SIWOTI. I promise I’ll get to the autistic artists soon enough.

I think we all recognize that there’s a difference between copying, and summarizing or paraphrasing. Paraphrasing is a two-step process: first, you read and understand the original text, that is, you convert it into an internal representation in your brain; and then you take that internal representation and turn it back into text. Copying, on the other hand, is a relatively mindless activity: you just take the original string of words and duplicate them.

Paraphrasing takes much more mental activity than copying, and that’s why it’s more respectable: if you can successfully paraphrase an article, that means you’ve managed to understand it, and have also managed to express thoughts in writing.

There are a number of autistic people with exceptional artistic talents: Gilles Tréhin, Stephen Wiltshire, and others. Chuck Close isn’t autistic, but he is face-blind, meaning that he can’t recognize faces. Yet he’s an artist known for his portraits of faces.

[Drawing of the Royal Albert Hall by Stephen Wiltshire, made at the age of 9.]

What I notice about these artists is that their pictures are realistic. They seem to have an innate grasp of perspective. Windows and such are not evenly spaced on paper, but become progressively closer together as they recede into the distance. Balconies and buttresses change orientation as they go around a building, and so forth. These are things that pre-Renaissance artists struggled with. (Okay, I’m not talking about Chuck Close so much here. More Wiltshire and Tréhin.)

And this brings us back to copying vs. paraphrasing.

The stereotypical child’s drawing has a house represented as an irregular pentagon, a tilted rectangle for a chimney, some curlicues for smoke coming out of the chimney, and one to four stick figures one and a half to two stories tall, standing on a flat expanse of green. In other words, it looks nothing like a house.

So I suspect that the way normal people draw is comparable to paraphrasing, as described above: when we see a house, or a tree, or a person, we don’t really see the lines, colors, and shapes formed on our retinas. All of that detail is processed, number-crunched, and turned into some internal data structure that represents the subject. For instance, I can instantly recognize my friends and family, even under different lighting conditions, or after the passage of time has altered their features. But I would have much more trouble describing them to you in such a way that you could pick them out of a lineup. I’d have even more trouble drawing a picture of them.

So when ordinary people draw a house or a face, we have trouble converting our abstract internal representation into concrete lines, because we never paid much attention to those lines. That’s one of the things you learn in art class. You have to unlearn the intuitive understanding of what a thing is, and look past it to see what the thing looks like. (This may be related to “first sight” in Terry Pratchett’s A Hat Full of Sky.)

But if someone has a problem recognizing things, if their world is a jumble of lines and colors, that may stand them in good stead in artistic endeavors, in that they’re not distracted by what things are, and can see what things look like. There’s an art class exercise in which you have to copy a picture — say, a portrait — that’s been turned upside-down. That way, the original picture is still what it is, but it no longer reads as a face, and you’re not distracted by its being a face.

Just in case it wasn’t obvious, I’m not a neurologist, psychologist, or even an artist, so I’m not qualified to make pronouncements on this. But it seems like a fairly nifty idea.

Nouns

For all the diversity in human speech, as far as I know, every language has verbs and nouns.

No big surprise there: our world is full of things, like trees and lakes and ostriches and stars, something that nouns are very good at describing. And a lot of these things do things that we care about, like attack or fall or impede, which is where verbs come in.

But nouns refer to a lot of things that aren’t, well, things, like symmetry and justice and heaps and understanding. I can imagine an alien species in which every language uses different parts of speech for things and for collections of things that, as a whole, have a certain property. Call this an assemblage. Thus, to them, “rock” would be a noun, but “heap”, as in “a heap of rocks”, would be an assemblage. “Symmetry”, “pair”, and “order” would also be assemblages, rather than nouns.

They might even go further, and have yet another part of speech to describe motions of things that have certain properties, like “dance” or “following”.

I want to emphasize that this wouldn’t change what the world is like; it would just change the words and sentences they use to describe it. And perhaps say something about the way they think.

To these aliens, a sentence like “time is money” would sound odd, because it would have a grammatical error (assuming that “time” is an assemblage, while “money” is a thing). In fact, we already have something like this in English, which treats nouns about people differently from nouns about things: “Who didn’t finish its dinner?” is bad English (note, too, how this makes the line “It rubs the lotion on its skin” in Silence of the Lambs particularly creepy).

It’s known that our brains are wired to treat people differently from other elements in our environment. See, for instance, the way we’re more prone to see people and faces than inanimate objects in random noise like inkblots, clouds, and wood grain. So it seems reasonable to consider that our brains might have special-purpose modules for nouns and verbs as well.

The obvious explanation is that our distant ancestors, before there was speech, still needed to deal with things and actions to survive. Once language appeared, the brain already had the infrastructure necessary to model things and actions, and manipulate that model, so evolution built on what was available. This can perhaps also be seen in the way that a lot of expressions treat abstractions as though they were things: “weighing the evidence”, “transferring ownership”, and so forth.

I don’t want to read too much into these sorts of things. I note, for instance, that in French, there’s a smaller distinction between nouns about people and nouns about non-people. And in German, the gender of both “Kind” (child) and “Mädchen” (girl) is neuter.

Nonetheless, there does seem to be scientific literature on stroke patients who have trouble naming things, but no trouble naming actions, or vice-versa.[citation needed] And this suggests that the brain has separate modules for dealing with nouns and verbs.

In practice, I think this means that we are predisposed to see the world in terms of nouns and verbs, even when we’re not dealing with concrete things, and this can affect our perceptions. I guess it’s a bit like Neil deGrasse Tyson explaining to people that a hot ball of rock, a huge ball of gas that generates its own heat, and an irregular lump of ice are vastly different things, and so it doesn’t make sense to lump Mercury, Jupiter, and Pluto all under the label of “planet”.

For instance, if we’re thinking about the way languages have migrated through history, it might be tempting to think of one language displacing another, much as putting a finger in a glass displaces water. But of course languages don’t behave the way that solid objects like fingers and water do; multiple languages can coexist, even in the same mind.

I guess what this all boils down to is that there’s a difference between what something is, and what it’s called.

Update, 15:47: Typo.