Inside, cells are jam-packed with power plants, assembly lines, recycling units and more. Miniature monorails whisk materials from one part of the cell to another.
Such sophistication has led even the most hard-bitten atheists to remark on the apparent design in living organisms. The late Nobel laureate Francis Crick, co-discoverer of DNA’s structure and an outspoken critic of religion, nonetheless remarked, “Biologists must constantly keep in mind that what they see was not designed but rather evolved.”
Clearly, Crick (and others like him) considers the appearance of design to be strictly an illusion, created by naturalistic evolution. Yet it’s also clear that this impression is so compelling that an atheistic biologist must warn his colleagues against it.
In contrast, ID theorists contend that living organisms appear designed because they are designed. And unlike the design thinkers whom Darwin deposed, they’ve developed rigorous new concepts to test their idea.
Why did Crick say that? Probably (I can’t be sure, since he’s passed on now) because he was tired of people using his words to justify things like IDC. But you just can’t win.
What are these allegedly “rigorous new concepts,” and how can we “test their idea”? This is the question I’ve been asking for a while.
First, let’s note that the author identifies himself as an “early organizer of the intelligent design movement” and has a degree in educational psychology. Judge for yourself his qualifications to speak about the history of creationism, or his potential biases.
This shining star of objective journalism explains that IDC relies on two basic assumptions:
[N]amely, that intelligent agents exist and that their effects are empirically detectable.
Its chief tool is specified complexity. That’s a mouthful, and the math behind it is forbidding, but the basic idea is simple: An object displays specified complexity when it has lots of parts (is complex) arranged in a recognizable, delimited pattern (is specified).
He then explains that an article and a flagellum are both excellent examples of things with specified complexity. The flagellum is such a hackneyed example that it’s been debunked to death. I think the brief statement here is a pretty good illustration of what’s wrong with the idea of “specified complexity.” How do we measure “recognizable” or “delimited”? Complexity may well be measurable; the problem is specifying some level of it that indicates “design.”
Of course, what’s important here is not what we conclude about the flagellum or the cell, but how we study it. Design theorists don’t derive their conclusions from revelation, but by looking for reliable, rigorously defined indicators of design and by ruling out alternative explanations, such as Darwinism.
Calling their work religious is just a cheap way to dodge the issues. The public–and our students–deserve better than that.
Of course, what scientists do is not the Sherlock Holmes thing: “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” You can’t eliminate everything but one. You have to positively engage the nature of “intelligent agents.” You invoke intention specifically because that lets you generate teleology. But there’s no positive evidence of teleology.
Here’s the logic of specified complexity, the product of “the Isaac Newton of Information.” There’s a magic number, a probability of an event above which you can attribute something to “regularity” (which has nothing to do with how much fiber is in your diet). If the probability is a little lower, but not too much, it might be attributed to “chance” (however Dembski defines that). If the probability is low enough, Dembski says it must be “design.” Those scare quotes aren’t meant to be petty. We don’t use those words the way he does. He thinks we can eliminate, in one fell swoop, every hypothesis ever.
That relies on our ability to assess the probability of the event under every possible chance hypothesis, including hypotheses nobody has even formulated yet.
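To make the filter’s logic concrete, here’s a minimal sketch of the decision procedure as described above. The 1e-150 figure is Dembski’s published “universal probability bound”; the regularity cutoff is a hypothetical placeholder, since no principled value is ever given for it, which is itself part of the problem.

```python
# A sketch of the "explanatory filter" decision logic, not anyone's actual code.
UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's design threshold
REGULARITY_CUTOFF = 0.5               # hypothetical; the filter never pins this down

def explanatory_filter(probability, is_specified):
    """Classify an event the way the filter claims to."""
    if probability >= REGULARITY_CUTOFF:
        return "regularity"  # high probability: attribute to law
    if probability > UNIVERSAL_PROBABILITY_BOUND or not is_specified:
        return "chance"      # middling probability, or no specification
    return "design"          # tiny probability plus a specification

print(explanatory_filter(0.9, False))    # regularity
print(explanatory_filter(1e-10, True))   # chance
print(explanatory_filter(1e-200, True))  # design
```

Notice what the function quietly demands as input: a single probability for “the event,” computed against every conceivable chance hypothesis at once. That input is exactly what we never have.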
But forget that. Say it’s true. Consider this point, which I’ve lifted from some writings by Elliott Sober. Under the filter, regularity and chance can only ever be rejected; they can never be positively confirmed. In time, with more data, anything could wind up being reclassified as design, but once something crosses the threshold, it is proven to be design, permanently. That’s a bizarre asymmetry.
But evolution is not a purely random process. It’s an emergent process. It has a level of structure greater than chance, but without the teleology of design. The mind is an emergent process of the brain. Evolution is an emergent process of natural selection. They aren’t chance, they aren’t regularity, and they aren’t design.
What probability separates design from emergent processes? Even if it were coherent to get those probabilities, what level would separate these new categories?