At my last job, my commute was about an hour each way on a typical day. At times I would amuse myself by trying to figure out how much time I could save if I drove at 70, 75, 80, or 85 miles per hour. Interestingly (or depressingly) enough, it never amounted to more than ten minutes, and that was assuming that I never slowed down, never got stuck behind someone who was only doing 70. In practice, the only times I ever made those 35 miles in less than 50 minutes were when I was returning home after midnight.
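The arithmetic behind that observation is easy to sketch. A minimal illustration using the 35-mile figure and the speeds from the anecdote above:

```python
# How much time does speeding actually save over a 35-mile stretch?
# Distance and speeds are the ones from the anecdote above.
distance_miles = 35

def minutes_at(speed_mph):
    """Minutes needed to cover the distance at a constant speed."""
    return distance_miles / speed_mph * 60

baseline = minutes_at(70)  # a steady 70 mph takes 30 minutes
for speed in (75, 80, 85):
    saved = baseline - minutes_at(speed)
    print(f"At a steady {speed} mph: {saved:.1f} minutes saved vs. 70")
```

Even a steady 85 mph over the entire stretch saves only about five minutes relative to 70, which is consistent with the ten-minute ceiling above.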
It also meant that if I had a 10 o’clock meeting, I had to be on the road by 9:00 at the very latest. It was very odd, the first time I woke up at 8:45 and realized that even in emergency panic mode, with the sort of ruthless optimization that only a life-long geek would concoct, there was no way I could get dressed, clean up enough to pass for presentable, make a cup of coffee so I wouldn’t crash on the highway, and get behind the wheel in less than 20 minutes. It hit me, with a Cold Equations chill, that I was already late, even though the meeting wouldn’t begin for more than an hour.
There’s a saying that “what you don’t know won’t hurt you,” and it’s obvious nonsense: the cancer eating away at your liver, the distracted driver coming around the blind curve, the mercury in your salmon steak can all hurt or kill you, whether you know they’re there or not, whether you believe in them or not.
Science is a method for figuring out what the world is like, arguably the most reliable one ever devised. There continues to be debate as to what does and doesn’t constitute science, but as far as I can tell, the lab coats, equipment, double-blind experiments, methodological naturalism and all the rest are secondary. It really comes down to two questions that scientists must ask:
1) What is the world like?
The fundamental axiom of science is that if you want to know what the world around us is like, there’s no better way of finding out than to go look at it. This stands in contrast to approaches like divine inspiration, pure reasoning, and appeal to ancient authorities. As someone pointed out, “if the bird and the bird book disagree, trust the bird.” It doesn’t matter how many degrees you have, how many awards you’ve received, or how many experts disagree. If Stephen Hawking says that in a given setup, the dial should point one way, but you set up your equipment and the dial points another way, then you’re right and Hawking is wrong.
The second question is social, not methodological:
2) How do I know this isn’t garbage?
This is where the degrees, double-blind experiments, etc. come in. This is also what separates pseudoscientific fakes like Answers in Genesis’s “peer-reviewed” “research” journals from the real thing. Scientists go to a lot of effort to see whether and how they’re wrong.
Peer review and discussion in journals is just “given enough eyeballs, all bugs are shallow” applied to research: journals invite reviewers to comment on papers to give them a chance to look for errors, and the publication gives the entire world a chance to do so as well. Any scientist who expects to publish her results knows that her colleagues will be looking at her work to see if it’s garbage, so she needs to anticipate this by looking for errors herself.
Over the years, scientists and philosophers of science have come up with a whole slew of Ways To Be Wrong, from dirty equipment to self-delusion to signals lost in the noise to the experimenter affecting the experiment. A lot of effort in experimental design consists in seeing how it could fail; a scientist must ask himself, “how can I make sure this experiment tells me what the world is like, regardless of whether or not I like the result?”
By any measure, science has proven an immensely successful way of finding out what the world is like. We have models and theories that allow us to send probes to other planets, figure out which wheat stalks to cross to get a more disease-resistant variety, predict the weather a week from now, build computers and solar panels and better cheese-making vats, and so forth.
These models, theories, and tools allow us to go beyond our five senses, and peer into the future. They also allow us to ask “what if” questions. What if a 20 kg rock hit Oslo going at Mach 20? How much destruction would it cause? What if everyone in the US were inoculated against tuberculosis using a vaccine that kills 0.001% of those who receive it? Would this kill more people than it saved? What if the Caribbean sea were one degree warmer than it is now? How many more Cat 4 hurricanes would we expect to see?
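Two of these “what if” questions can at least be bounded with back-of-the-envelope arithmetic. A rough sketch, assuming Mach 1 is about 343 m/s at sea level and a US population of roughly 330 million (both figures are my assumptions, not from the text; real impact and epidemiological modeling is of course far more involved):

```python
# Back-of-the-envelope estimates for two of the "what if" questions above.
# Assumptions (not from the text): Mach 1 ~ 343 m/s at sea level,
# US population ~ 330 million. Real models are far more involved.

# 1) Kinetic energy of a 20 kg rock moving at Mach 20.
mass_kg = 20
speed_m_s = 20 * 343                    # Mach 20 in meters per second
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
tnt_j_per_ton = 4.184e9                 # energy released by one ton of TNT
print(f"Rock: {kinetic_energy_j:.2e} J, "
      f"~{kinetic_energy_j / tnt_j_per_ton:.2f} tons of TNT equivalent")

# 2) Expected deaths from a vaccine that kills 0.001% of recipients,
#    if everyone in the US received it.
us_population = 330e6
vaccine_deaths = us_population * 0.001 / 100   # 0.001 percent, not 0.001
print(f"Vaccine: ~{vaccine_deaths:.0f} deaths expected")
```

The point of such sketches is only to set the scale of the question; answering it for real (atmospheric entry, herd immunity, and so on) is exactly the kind of work the models and theories above exist to do.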
Richard Feynman, in one of his essays, divided policy questions into scientific and moral questions: the scientific question is, “if I do X, what will happen?” and the moral question is “Do I want that to happen?” A scientist can tell you that if you place an explosive charge on such-and-such support pillar of such-and-such building, it will destroy the building and kill anyone in it. The moral question, “Do I want that building to be destroyed and any inhabitants killed?” depends on the specifics: is the building in the way of something else you want to build? Is it unoccupied? Does it harbor a terrorist cell? Are there any innocent bystanders?
Of course, since science has proven so reliable in answering many “what if?” questions, it is irresponsible to make policy decisions without the best scientific prediction of what they would entail. It would be like sending troops into combat without reconnoitering the terrain first. And, of course, ignoring the scientific evidence because one doesn’t like the conclusions is like ignoring a reconnaissance report because one doesn’t like what it says about enemy troop strength. As Richard Feynman said with regard to the Challenger disaster,
For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.
As technology advances and as the world becomes increasingly interconnected, there are more and more policy issues on which science can shed light. But naturally, people who stand to be adversely affected by policy decisions find it more expedient to shoot the messenger than to face up to the fact that they’re on the wrong side of an argument. Loggers and real estate developers don’t want to know that the forests they’re clearing today will change rain and erosion patterns and hurt people tomorrow. Factory owners don’t want to know that the cheaper or more efficient process they want to use will kill a hundred people through mercury poisoning. Drivers don’t want to think about how much they’re contributing to global warming and the depletion of fossil fuels. Disease-conscious consumers don’t want to think about the resources that went into making the plastic wrapper that their fruit came in.
But nature will not be fooled; closing our eyes will not stop these things from happening. The only responsible course is to use science where applicable to get a good idea of the consequences of our actions.
This is not to say that we should always do what appears best from the scientific model: we as a society seem to have decided that the (foreseeable, measurable) deaths from alcohol are preferable to the lost freedom and deaths that would result from outlawing it. Bulldozing a park to build a school or a factory may be a good trade-off, all things considered. But ignoring or denying scientific evidence is simply irresponsible.