Biology, Mathematics

Irreducible Complexity Revisited

Prof. Dr. Dr. William A. Dembski · January 1, 2004

Michael Behe’s concept of irreducible complexity, and in particular his use of this concept to critique Darwinism, continues to come under heavy fire from the biological community. The problem with Behe, so Darwinists inform us, is that he has created a problem where there is no problem. Far from constituting an obstacle to the Darwinian mechanism of random variation and natural selection, irreducible complexity is thus supposed to be eminently explainable by this same mechanism. But is it really? It’s been eight years since Behe introduced irreducible complexity in Darwin’s Black Box (a book that continues to sell 15,000 copies per year in English alone). I want in this essay to revisit Behe’s concept of irreducible complexity and indicate why the problem he has raised is, if anything, still more vexing for Darwinism than when he first raised it. The first four sections of this essay review and extend material that I’ve treated elsewhere. The last section contains some novel material.

1 The Definition of Irreducible Complexity 

Highly intricate molecular machines play an integral part in the life of the cell and are increasingly attracting the attention of the biological community. All cells use complex molecular machines to process information, convert energy, metabolize nutrients, build proteins, and transport materials across membranes. In February 1998, for instance, the premier biology journal Cell devoted a special issue to “macromolecular machines.” Bruce Alberts, president of the National Academy of Sciences, introduced this issue with an article titled “The Cell as a Collection of Protein Machines.” In it he remarked:

We have always underestimated cells…. The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines…. Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.[1]

Almost six years later (December 2003), BioEssays published its own special issue on “molecular machines.” In the introductory essay to that issue, Adam Wilkins, the editor of BioEssays, remarked:

The articles included in this issue demonstrate some striking parallels between artifactual and biological/molecular machines. In the first place, molecular machines, like man-made machines, perform highly specific functions. Second, the macromolecular machine complexes feature multiple parts that interact in distinct and precise ways, with defined inputs and outputs. Third, many of these machines have parts that can be used in other molecular machines (at least, with slight modification), comparable to the interchangeable parts of artificial machines. Finally, and not least, they have the cardinal attribute of machines: they all convert energy into some form of ‘work’.[2]

Alberts and Wilkins here draw attention to the strong resemblance between molecular machines and machines designed by human engineers. Nevertheless, as neo-Darwinists, they regard the cell’s marvelous complexity as a product of Darwinian evolution and thus as only apparently designed. In the 1990s, however, scientists began to challenge the neo-Darwinian view and argue that such protein machines could only have arisen by means of actual design. For example, in 1996 Lehigh University biochemist Michael Behe published a book titled Darwin’s Black Box. In that book he detailed the failure of neo-Darwinian theory to explain the origin of complex molecular machines in the cell. But he didn’t stop there. He also argued that these molecular machines exhibit actual design. Central to his argument was the idea of irreducible complexity.

A functional system is irreducibly complex if it contains a multipart subsystem (i.e., a set of two or more interrelated parts) that cannot be simplified without destroying the system’s basic function. I refer to this multipart subsystem as the system’s irreducible core.[3] This definition is more subtle than it might first appear, so let’s consider it closely. Irreducibly complex systems belong to the broader class of functionally integrated systems. Functionally integrated systems consist of parts that are tightly adapted to each other and thus render the system’s function highly sensitive to isolated changes of those parts. For an integrated system, a change in one place often shuts down the system entirely or else requires multiple changes elsewhere for the system to continue to function. We can therefore define the core of a functionally integrated system as those parts that are indispensable to the system’s basic function: remove parts of the core, and you can’t recover the system’s basic function from the other remaining parts. To say that a core is irreducible is then to say that no other systems with substantially simpler cores can perform the system’s basic function.

The basic function of a system consists of three things:

  1. What the system does in its natural setting or proper context; this is known as the system’s primary function (also main function).
  2. The minimal level of function needed for the system to perform adequately in its natural setting or proper context; this is known as the system’s minimum function.
  3. The way or manner in which the system performs its primary function; this is known as the system’s mode of function. Because the basic function of a system includes its mode of function, basic function is concerned not just with ends but also with means. Glue and nails, for instance, may perform the same primary function of fastening together pieces of wood and do so equally well in certain contexts, but the way in which they do it is completely different.
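
This definition can also be put in compact notation. The following is one possible formalization, offered here as a convenient reading of the definition above; the predicate F and the symbols are introduced for this sketch and are not part of the original:

```latex
% Let F(X) assert that the set of parts X achieves the basic function:
% the primary function, at or above the minimum function, in the
% required mode of function (some arrangement of X suffices).
\[
  C \subseteq P \text{ is an irreducible core of a system with parts } P
  \;\iff\;
  F(P)
  \;\wedge\;
  \forall p \in C : \neg F\bigl(P \setminus \{p\}\bigr)
  \;\wedge\;
  \neg\,\exists X : \bigl(F(X) \wedge X \text{ is substantially simpler than } C\bigr).
\]
% A system is irreducibly complex when it has such a core C containing
% two or more interrelated parts. "Substantially simpler" is left
% informal here, just as it is in the prose definition.
```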

Figure: Rube Goldberg’s Pencil Sharpener: “Open window (A) and fly kite (B). String (C) lifts small door (D) allowing moths (E) to escape and eat red flannel shirt (F). As weight of shirt becomes less, shoe (G) steps on switch (H) which heats electric iron (I) and burns hole in pants (J). Smoke (K) enters hole in tree (L), smoking out opossum (M) which jumps into basket (N), pulling rope (O) and lifting cage (P), allowing woodpecker (Q) to chew wood from pencil (R), exposing lead. Emergency knife (S) is always handy in case opossum or the woodpecker gets sick and can’t work.” This system is not functionally integrated!

Along the same lines, consider an outboard motor whose basic function is to propel a small fishing boat around a lake by means of a gasoline- or electric-powered engine that turns a propeller. The outboard motor is irreducibly complex and its irreducible core includes, among other things, a propeller, an engine, and a drive shaft connecting engine to propeller. Now, we can imagine simplifying this arrangement by replacing the engine and drive shaft with a rubber band that, when wound up, turns the propeller. But it’s unlikely that the level of performance attainable from such an arrangement will propel a boat around a lake. In other words, minimum function is unlikely to be preserved with the rubber band. Yet even if it were, this new arrangement would not perform the primary function in the same way as the original outboard motor: the original outboard motor depended on the turning of rotors and not the torsion of an elastic medium.

As another example of an irreducibly complex system, consider an old-fashioned pocket watch. The basic function of the watch is to tell time by means of a winding mechanism. Several parts of the watch are indispensable to that basic function, for instance, the spring, the face, and the hour hand. These belong to the irreducible core. But note that other parts of the watch are dispensable, for instance, the crystal, the metal case, and the chain. Because these parts are unnecessary or redundant to the system’s basic function, they do not belong to the irreducible core. Whether other parts of the watch belong to the irreducible core depends on the minimum level of function demanded of the watch. The hour hand by itself is adequate for telling the hour and even certain ranges of minutes. But if it is important to know the exact minute, then the minute hand will also be required and belong to the irreducible core. Notice that many irreducibly complex systems are like the pocket watch in containing parts that are not crucial to the system’s basic function—parts that therefore lie outside the system’s irreducible core.

For an irreducibly complex system, each of the parts of the irreducible core plays an indispensable role in achieving the system’s basic function. Thus, removing parts, even a single part, from the irreducible core results in complete loss of the system’s basic function. Nevertheless, to determine whether a system is irreducibly complex, it is not enough simply to identify those parts whose removal renders the basic function unrecoverable from the remaining parts. To be sure, identifying such indispensable parts is an important step for determining irreducible complexity in practice. But it is not sufficient. Additionally, we need to establish that no simpler system achieves the same basic function. Consider, for instance, a three-legged stool. Suppose the stool’s basic function is to provide a seat by means of a raised platform. In that case each of the legs is indispensable for achieving this basic function (remove any leg and the basic function can’t be recovered among the remaining parts). Nevertheless, because it’s possible for a much simpler system to exhibit this basic function (for example, a solid block), the three-legged stool is not irreducibly complex.

Determining whether a system is irreducibly complex therefore involves two approaches: (1) an empirical analysis of the system, which, by removing parts (individually and in groups) and then rearranging and adapting the remaining parts, determines whether the basic function can be recovered among those remaining parts; (2) a conceptual analysis of the system, and specifically of those parts whose removal renders the basic function unrecoverable, to demonstrate that no system with (substantially) fewer parts exhibits the basic function. Indispensable parts identified in step (1) and then confirmed in step (2) to admit no simplification belong to the irreducible core of an irreducibly complex system. Note that steps (1) and (2) can be employed separately or together and, if together, need not be taken in any particular order. Thus, one might first do a conceptual analysis to determine what parts are required to perform a basic function and then verify empirically which parts are indeed indispensable for the system to achieve its basic function. For instance, for the outboard motor discussed previously, a conceptual analysis reveals that no system performing its basic function can omit a propeller, engine, and drive shaft. In consequence, these parts belong to the irreducible core, a fact that can then be confirmed empirically by removing them and showing the basic function to be unrecoverable among the remaining parts.
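
As a schematic illustration of this two-step procedure, here is a sketch in code. It is an idealization rather than a recipe one could run on a real system: the oracle performs_basic_function (standing in for knockout experiments in step (1) and conceptual analysis in step (2)), the part sets, and the exhaustive search over candidate systems are all hypothetical stand-ins.

```python
from itertools import combinations

def indispensable_parts(parts, performs_basic_function):
    """Step (1), empirical analysis: find the parts whose removal leaves
    the basic function unrecoverable from the remaining parts, however
    those remaining parts are rearranged or adapted. The oracle
    performs_basic_function(X) reports whether some arrangement of the
    part set X achieves the basic function."""
    return {p for p in parts
            if not performs_basic_function(frozenset(parts) - {p})}

def admits_no_simpler_system(core, part_universe, performs_basic_function):
    """Step (2), conceptual analysis: check that no system with
    substantially fewer parts exhibits the basic function. 'Substantially
    simpler' is idealized here as any smaller part set drawn from the
    universe of available parts."""
    for size in range(1, len(core)):
        for candidate in combinations(part_universe, size):
            if performs_basic_function(frozenset(candidate)):
                return False  # a simpler system performs the basic function
    return True

def is_irreducibly_complex(parts, part_universe, performs_basic_function):
    """A system is irreducibly complex if it has a multipart core (two or
    more parts) that admits no substantially simpler replacement."""
    core = indispensable_parts(parts, performs_basic_function)
    return len(core) >= 2 and admits_no_simpler_system(
        core, part_universe, performs_basic_function)
```

On this sketch the three-legged stool fails in the second step: each leg passes indispensable_parts, but a one-part candidate in the part universe (the solid block) performs the basic function, so admits_no_simpler_system returns False.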

Irreducible complexity differs sharply from another form of complexity that may be called cumulative complexity. A system is cumulatively complex if the parts of the system can be arranged sequentially so that the successive removal of parts never leads to the complete loss of function. An example of a cumulatively complex system is a city. It is possible successively to remove people and services from a city until one is down to a tiny village—all without losing the sense of community, which in this case constitutes the city’s basic function. If we now think of the successive removal of citizens and services from a city as running a videotape backwards, then by changing the videotape direction and running it forwards we see the gradual evolution of a city. The gradual buildup of complexity through a Darwinian evolutionary process runs forward what in reverse is the successive removal of components from a cumulatively complex system where at each step in the removal process function is preserved. It follows that the Darwinian selection mechanism can readily account for cumulative complexity.
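
The contrast between the two kinds of complexity can be stated in the same informal notation introduced earlier (again, the notation is mine, offered only as a convenience):

```latex
% Cumulative complexity: the parts admit an ordering along which the
% basic function is preserved at every stage above some minimal seed.
\[
  \exists \text{ an ordering } p_1, p_2, \dots, p_n \text{ of the parts such that }
  F(\{p_1, \dots, p_k\}) \text{ holds for all } k \geq k_0 ,
\]
% where k_0 marks the smallest functional configuration (the "tiny
% village"). Running k upward from k_0 is the videotape played forward:
% function is present at every step, so selection can act at every step.
% An irreducible core admits no such ordering: F first becomes true only
% when the entire core is in place.
```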

But what about irreducible complexity? Can the Darwinian selection mechanism account for irreducible complexity? If selection acts with reference to a goal, then there is no difficulty for selection to produce irreducible complexity. Take the old-fashioned pocket watch considered earlier. Given the goal of constructing a functioning timepiece, one can specify a goal-directed selection process that in turn selects a spring, a face, an hour hand, a minute hand, and all the other indispensable parts required for the pocket watch to keep time, and at the end puts all these parts together to form a functional watch. Similarly, one can imagine an organism forming a new structure over the course of several generations by successively bringing about certain components (perhaps by random variation), setting them aside (by a goal-directed selection process), and then, once all the components are in place, putting them together to form that new structure. Given a prespecified goal, selection has no difficulty producing irreducibly complex systems.

There’s an obvious difficulty extending this line of reasoning to biology, however. The selection operating in biology is Darwinian natural selection, and this form of selection operates without goals, plans, or purposes. Natural selection looks not to the future but only to the present. It asks what will benefit the organism now rather than at some future date or in some future offspring. It is interested only in immediate gratification, not delayed gratification. It is an opportunist rather than a strategist. These characteristics of natural selection at once limit it and account for its appeal among mechanistically inclined biologists who prefer to understand the emergence of biological complexity as the result of undirected material processes and thus apart from design. Yet by making selection an undirected process, Darwin unduly restricted the type of complexity that biological systems could manifest. According to Darwin’s theory, biological systems should readily exhibit cumulative complexity but have a hard time exhibiting irreducible complexity.

Why is that? The problem is that for an irreducibly complex system, its basic function is attained only when all components from the irreducible core are in place simultaneously. It follows that if natural selection is going to select for the function of an irreducibly complex system, it has to produce the irreducible core all at once or not at all. That might not be a problem if the systems in question were simple. But they are not. The irreducibly complex biochemical systems that Michael Behe, for instance, considered in Darwin’s Black Box are protein machines consisting of numerous distinct proteins each of which is indispensable to the machine’s basic function.

Darwinism, committed as it is to a gradual evolutionary process that incrementally builds complexity and function, now faces a dilemma. Darwinian evolution cannot produce an irreducibly complex system exhibiting a given basic function by having natural selection act on and improve simpler precursors that already display that function. The problem is that the function doesn’t exist, and therefore is not selectable by natural selection, until the irreducibly complex system is in place already. It follows that Darwinian evolution can produce an irreducibly complex system that serves a given basic function only by taking already existing systems that serve different functions and redeploying them to form the irreducibly complex system. But, as we shall see later in this essay, there is no evidence that the redeployments required to form such irreducibly complex systems could happen, much less be properly coordinated, by a gradual Darwinian evolutionary process. Instead, the evidence suggests that any such redeployment would require such massive coordination of the redeployed systems as to place the resulting irreducibly complex system beyond the reach of Darwinian evolution. Of course, such massive coordination bespeaks design.

In the Origin of Species, Darwin emphasized that his theory is a gradualistic theory in which complex biological structures (“complex organs,” in Darwin’s terminology) must be capable of being formed by what he called “numerous, successive, slight modifications.”[4] It follows that Darwin’s theory is confirmed to the degree that biologists can lay out detailed, testable Darwinian pathways by which complex biological structures could have been formed by numerous, successive, slight modifications. Alternatively, the theory is disconfirmed to the degree that biologists are not only unable to provide such pathways but also have positive reasons for thinking that such pathways do not exist. The irreducible complexity of protein machines therefore powerfully disconfirms Darwin’s theory. Moreover, because irreducible complexity occurs at the biochemical level, there is no more fundamental level of biological analysis to which the irreducible complexity of protein machines can be referred and at which a Darwinian analysis in terms of natural selection and random variation can still hope for success. Underlying biochemistry is ordinary chemistry and physics, neither of which can explain biological complexity.

One irreducibly complex protein machine that has especially captured the imagination of the biological community is the bacterial flagellum. In public lectures Harvard biologist Howard Berg calls the bacterial flagellum “the most efficient machine in the universe.” The flagellum is an acid-powered rotary motor with a whip-like tail whose rotating motion propels a bacterium through its watery environment. This whip-like tail acts as a propeller. It spins at tens of thousands of rpm and can change direction in a quarter turn. The intricate machinery of the flagellum includes a rotor, a stator, O-rings, bushings, mounting disks, a drive shaft, a propeller, a hook joint for the propeller, and an acid-powered motor. The basic function of the bacterial flagellum is to propel the bacterium through its watery environment by means of a fast-spinning bidirectional whip-like tail (the propeller, also known as a filament).

Note that a whip-like tail with these properties is not a luxury but a necessity if the flagellum is to be of any use as a motility structure for seeking food. In propelling a bacterium through its watery environment, the flagellum must overcome Brownian motion (the random motion of water molecules, which jostles small objects suspended in water). The reason flagella need to rotate bidirectionally is that Brownian motion sets bacteria off their course as they wend their way up a nutrient gradient. Reversing the direction of the rotating tail causes the bacterium to tumble, reset itself, and try again to get to the food it needs. The minimal functional requirement of a flagellum, if it is going to do a bacterium any good at all in propelling it through its watery environment up a nutrient gradient, is that the whip-like tail (or filament) rotate bidirectionally and extremely fast. Flagella of known bacteria spin at rates well above 10,000 rpm (actually, closer to 20,000 rpm and even as high as 100,000 rpm). Anything substantially less than this will prevent a bacterium from overcoming the disorienting effects of Brownian motion and thus prevent it from finding the concentrations of nutrients it needs to survive, reproduce, and flourish.[5]

The flagellum’s intricate machinery requires the coordinated interaction of about thirty proteins and another twenty or so proteins to assist in their assembly. Yet the absence of any one of these proteins would result in the complete loss of motor function.[6] These proteins form the irreducible core of the flagellum. How complex is this core? John Postgate describes some of the complexity:

A typical bacterial flagellum, we now know, is a long, tubular filament of protein. It is indeed loosely coiled, like a pulled-out, left-handed spring, or perhaps a corkscrew, and it terminates, close to the cell wall, as a thickened, flexible zone, called a hook because it is usually bent…. One can imagine a bacterial cell as having a tough outer envelope within which is a softer more flexible one, and inside that the jelly-like protoplasm resides. The flagellum and its hook are attached to the cell just at, or just inside, these skins, and the remarkable feature is the way in which they are anchored. In a bacterium called Bacillus subtilis … the hook extends, as a rod, through the outer wall, and at the end of the rod, separated by its last few nanometers, are two discs. There is one at the very end which seems to be set in the inner membrane, the one which covers the cell’s protoplasm, and the near-terminal disc is set just inside the cell wall. In effect, the long flagellum seems to be held in place by its hook, with two discs acting as a double bolt, or perhaps a bolt and washer….[7]

This quotation merely scratches the surface of the complexities involved with the bacterial flagellum. Here Postgate describes what amounts to a propeller and its attachment to the cell wall. Additionally there needs to be a motor that runs the propeller. This motor needs to be mounted and stabilized. Moreover, it must be capable of bidirectional rotation. The complexities quickly mount, and a conceptual analysis reveals that the bacterial flagellum possesses an extremely complicated irreducible core.

So how did the bacterial flagellum originate? On a Darwinian view, a bacterium with a flagellum evolved via the Darwinian selection mechanism from a bacterium lacking not only a flagellum but also all the genes coding for flagellar proteins (including any genes homologous to the genes for the flagellum). For the Darwinian mechanism to produce a bacterial flagellum, random genetic changes therefore had to bring about the genes that code for flagellar proteins and then selection had to preserve these proteins, gather them to the right location in the bacterium, and then properly assemble them. How plausible is this? The remainder of this essay will argue that such a Darwinian explanation is highly implausible and that intelligent design in fact provides a far more compelling explanation.

2 The Argument from Irreducible Complexity 

In Darwin’s Black Box, Michael Behe introduced the idea of irreducible complexity and then argued that the irreducible complexity of protein machines provides convincing evidence of actual design in biology. Since its publication in 1996, Behe’s book has been widely reviewed, both in the popular press and in scientific journals.[8] It has also been widely discussed over the Internet.[9] By and large critics have conceded that Behe got his scientific facts straight. They have also conceded his claim that detailed neo-Darwinian accounts for how irreducibly complex protein machines could come about are absent from the biological literature. Nonetheless, they have objected to his argument on theoretical and methodological grounds. Behe presents what may be described as an argument from irreducible complexity. This argument purports to establish that irreducibly complex biological systems are beyond the reach of the Darwinian evolutionary mechanism and that only design can properly account for them.

How does the argument from irreducible complexity reach this conclusion? Unfortunately, critics have understood this argument in two ways, neither of which does justice to it. Thus, critics tend to see the argument from irreducible complexity as making either a purely logical or a purely empirical point. The logical point is this: Certain structures are provably inaccessible to the Darwinian mechanism. They have property IC (i.e., irreducible complexity). But certain biological structures also have property IC, so they, too, must be inaccessible to the Darwinian mechanism. The empirical point is this: Certain biological structures are awfully complicated. There is not even a suggestion in the biological literature concerning how the Darwinian mechanism might construct them. So chances are that something beyond natural selection was responsible for their origin. So stated, these are fundamentally different points and require different justifications. If the argument from irreducible complexity makes a purely logical point, then it needs to be rigorous and mathematical in the way that the mathematics underlying the second law of thermodynamics (known as ergodic theory) is used to preclude perpetual motion machines. But if the argument from irreducible complexity makes a purely empirical point, then it appears to be nothing more than an argument from ignorance, merely highlighting that the evolutionary pathways leading to certain biological systems have yet to be adequately explained, a fact that critics readily concede.

According to critics, neither the logical point nor the empirical point nor a combination of the two poses a challenge to evolutionary theory. Let’s consider these in turn. As for the logical point, irreducible complexity clearly cannot close off all possible avenues for Darwinian evolution. Irreducible complexity guarantees that all parts of the system’s irreducible core are indispensable in the sense that if you remove a part from the core, you cannot recover the original basic function of the system from the remaining parts. But that leaves the possibility of removing parts and isolating subsystems that serve some other basic function (a function that could conceivably be subject to selection pressure). Irreducible complexity, treated as a purely logical restriction, therefore leaves a loophole for the Darwinian mechanism. Specifically, it leaves open the possibility that unknown indirect Darwinian pathways could evolve an irreducibly complex system via other systems that exhibit different functions from the system in question.

As for the empirical point, it seems merely to commit the standard fallacy of arguing from ignorance. If certain biological systems are incredibly complicated and we haven’t figured out how they originated, what of it? That doesn’t mean the Darwinian mechanism or some other material mechanism didn’t do it. It may just mean that we haven’t yet figured out how those mechanisms did it. And as for conflating the logical and empirical points, that’s the most disreputable option of all, for it makes proponents of intelligent design guilty of equivocation, using irreducible complexity to score a logical or empirical point as expedience dictates.

This refutation of Behe’s argument is too easy. In fact, the argument from irreducible complexity is more subtle than any of these criticisms suggests. The argument from irreducible complexity is properly conceived as making three key points: a logical, an empirical, and an explanatory point. What’s more, far from canceling each other, these points work together, reinforcing each other. The logical point is this: Certain structures are provably inaccessible to a direct Darwinian pathway. They have property IC (i.e., irreducible complexity). But certain biological structures also have property IC, so they, too, must be inaccessible to a direct Darwinian pathway. This formulation looks similar to the previous logical point, but it differs in one crucial respect. In the previous formulation, inaccessibility was with respect to the Darwinian mechanism taken without restriction and therefore with respect to all Darwinian pathways whatsoever, both direct and indirect. This time around, we consider the Darwinian mechanism only with respect to direct Darwinian pathways.

A direct Darwinian pathway is one in which a system evolves by natural selection incrementally enhancing a given function. As the system evolves, the function does not evolve but stays put. Thus we might imagine that in the evolution of the heart, its function from the start was to pump blood. In that case a direct Darwinian pathway might account for it. On the other hand, we might imagine that in the evolution of the heart its function was initially to make loud thumping sounds to ward off predators, and only later did it take on the function of pumping blood. In that case an indirect Darwinian pathway would be needed to account for it. Here the pathway is indirect because not only does the system evolve but also the system’s function evolves. Now, as a logical point, the argument from irreducible complexity is only concerned with precluding direct Darwinian pathways. This is evident from the definition of irreducible complexity where the irreducible core is defined strictly in relation to a single function, namely, the basic function of the irreducibly complex system (a function that could not exist without all the parts of the irreducible core being in place).
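
Schematically, the distinction can be summarized as follows (my notation: the S_i are successive stages of an evolving system, and F is the function selected for):

```latex
% Direct pathway: the structure evolves, the selected function is fixed.
\[
  S_0 \to S_1 \to \cdots \to S_n
  \quad\text{with}\quad
  F(S_0),\ F(S_1),\ \dots,\ F(S_n) \quad (\text{one fixed } F).
\]
% Indirect pathway: structure and function coevolve.
\[
  S_0 \to S_1 \to \cdots \to S_n
  \quad\text{with}\quad
  F_0(S_0),\ F_1(S_1),\ \dots,\ F_n(S_n),
\]
% where the F_i may differ from stage to stage (thumping sounds at one
% stage, pumping blood at a later one).
```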

In ruling out direct Darwinian pathways to irreducibly complex systems, the argument from irreducible complexity is saying that irreducibly complex biochemical systems are provably inaccessible to direct Darwinian pathways. How can we see that such systems are indeed provably inaccessible to direct Darwinian pathways? Consider what it would mean for an irreducibly complex system to evolve by a direct Darwinian pathway. In that case the system must have originated via the evolution of simpler systems that performed the same basic function. But because the irreducible core of an irreducibly complex system can’t be simplified without destroying the basic function, there can be no evolutionary precursors with simpler cores that perform the same function. It follows that the only way for a direct Darwinian pathway to evolve an irreducibly complex system is to evolve it all at once and thus by some vastly improbable or fortuitous event. Accordingly, to attribute irreducible complexity to a direct Darwinian pathway is like attributing Mount Rushmore to wind and erosion. There’s a sheer possibility that wind and erosion could sculpt Mount Rushmore but not a realistic one.

The proof that irreducibly complex systems are inaccessible to direct Darwinian pathways is probabilistic. The proof, though employing logic and mathematics, therefore does not rule out direct Darwinian pathways as a strict logical impossibility. It’s logically possible for just about anything to arise from just about anything else as a vastly improbable or fortuitous event. For instance, it’s logically possible that a rank chess amateur might stumble upon a series of brilliant moves and thereby defeat the reigning world chess champion in match play. But if that happens, it will be despite the amateur’s limited chess ability and not because of it. Likewise, if a direct Darwinian pathway begets an irreducibly complex biochemical system, then it is despite the intrinsic properties or capacities of the Darwinian mechanism and not because of them. Thus, in saying that irreducibly complex biochemical systems are provably inaccessible to direct Darwinian pathways, design proponents are saying that the Darwinian mechanism has no intrinsic capacity for generating such systems except as vastly improbable or fortuitous events.
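
The flavor of this probabilistic point can be conveyed with a toy calculation. The numbers are purely illustrative assumptions introduced here, not figures from Behe or from this essay:

```latex
% Suppose a direct pathway must supply all k core parts together, and
% each part independently happens to be available with probability p
% in the relevant window (both values invented for illustration):
\[
  \Pr[\text{all } k \text{ core parts at once}] = p^{k},
  \qquad\text{e.g.}\quad
  p = 10^{-2},\ k = 30
  \;\Longrightarrow\;
  p^{k} = 10^{-60}.
\]
% The point is not logical impossibility but exponential decay in k:
% for a multipart core, the "all at once" event is vastly improbable.
```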

At any rate, critics of the argument from irreducible complexity look to save Darwinism not by enlisting direct Darwinian pathways to bring about irreducibly complex systems but by enlisting indirect Darwinian pathways to bring them about. In indirect Darwinian pathways, a system evolves not by preserving and enhancing an existing function but by continually transforming its function. Whereas with direct Darwinian pathways structures evolve but functions stay put, with indirect Darwinian pathways both structures and functions evolve. This interplay of structures and functions evolving jointly is sometimes known as coevolution. How does the argument from irreducible complexity handle indirect Darwinian pathways? Here the point at issue is no longer logical but empirical. The fact is that for irreducibly complex biochemical systems, no indirect Darwinian pathways are known. At best biologists have been able to isolate subsystems of such systems that perform other functions. But any reasonably complicated machine always includes subsystems that perform functions distinct from the original machine. So the mere occurrence or identification of subsystems that could perform some function on their own is no evidence for an indirect Darwinian pathway leading to the system. What’s needed is a seamless Darwinian account, both detailed and testable, of how subsystems undergoing coevolution could gradually transform into an irreducibly complex system. No such accounts are available or forthcoming. Indeed, if such accounts were available, critics of intelligent design would merely need to cite them, and intelligent design would be refuted.

At this point the standard move by critics of intelligent design is to turn the tables and charge that the argument from irreducible complexity is an argument from ignorance. A common way to formulate this criticism is to say, “Absence of evidence is not evidence of absence.” But as with so many overused expressions, this one requires critical scrutiny. Certainly this dictum appropriately characterizes many everyday circumstances. Imagine, for instance, someone feverishly hunting about the house for a missing set of car keys, searching under every object, casing the house, bringing in reinforcements, and then the next morning, when all hope is gone, finding them on top of the car outside. In that case the absence of evidence prior to finding the car keys was not evidence of absence. Yet with the car keys there was independent evidence of their existence in the first place.

But what if we weren’t sure that there even were any car keys? The situation in evolutionary biology is even more extreme than that. One might not be sure that our hypothetical set of car keys exists, but at least one has the reassurance that car keys exist generally. Indirect Darwinian pathways that account for irreducible complexity are more like the leprechauns supposedly hiding in a child’s room. Precisely because the absence of evidence for the existence of leprechauns is complete, it is unreasonable to cite “Absence of evidence is not evidence of absence” as a reason for taking leprechauns seriously. And yet that, essentially, is what evolutionary theory counsels concerning the utterly fruitless search for credible indirect Darwinian pathways that account for irreducible complexity.

If after repeated attempts looking in all the most promising places you don’t find what you expect to find and if you never had any evidence that the thing you were looking for existed in the first place, then you have reason to think that the thing you are looking for doesn’t exist at all. That’s the argument from irreducible complexity’s point about indirect Darwinian pathways. It’s not just that we don’t know of such a pathway for, say, the bacterial flagellum (the irreducibly complex biochemical machine that has become the mascot of the intelligent design community). It’s that we don’t know of such pathways for any such systems. The absence here is pervasive and systemic. That’s why critics of Darwinism like Franklin Harold and James Shapiro (neither of whom is an intelligent design proponent) argue that the as-yet undiscovered indirect Darwinian pathways posited for such systems constitute “wishful speculations.”[10]

To recap, the argument from irreducible complexity makes a logical and an empirical point. The logical point is that irreducible complexity renders biological structures provably inaccessible to direct Darwinian pathways. The empirical point is that the failure of evolutionary biology to discover indirect Darwinian pathways leading to irreducibly complex biological structures is pervasive and systemic and therefore reason to doubt and even reject that indirect Darwinian pathways are the answer to irreducible complexity. The logical and empirical points together constitute a devastating indictment of the Darwinian mechanism, which has routinely been touted as capable of solving all problems of biological complexity once an initial life-form is on the scene. Even so, the logical and empirical points together don’t answer how one gets from Darwinism’s failure in accounting for irreducibly complex systems to the legitimacy of employing design in accounting for them.

This is where the argument from irreducible complexity needs to make a third key point, namely, an explanatory point. Scientific explanations come in many forms and guises, but the one thing they cannot afford to be without is causal adequacy. A scientific explanation needs to call upon causal powers sufficient to explain the effect in question. Otherwise, the effect is unexplained. The effect in question is the irreducible complexity of certain biochemical machines. How did such systems come about? Not by direct Darwinian pathways—irreducible complexity rules them out on logical and mathematical grounds. And not by indirect Darwinian pathways either—the absence of scientific evidence here is as complete as it is for leprechauns. Nor does appealing to unknown material mechanisms help matters, for in that case not only is the absence of evidence complete but also the very theory for which there’s no evidence is absent as well. Thus, when it comes to irreducibly complex biochemical systems, there’s no evidence that material mechanisms are causally adequate to bring them about.

But what about intelligence? Intelligence is well known to produce irreducibly complex systems (e.g., humans regularly produce machines that exhibit irreducible complexity). Intelligence is thus known to be causally adequate to bring about irreducible complexity. The argument from irreducible complexity’s explanatory point, therefore, is that on the basis of causal adequacy, intelligent design is a better scientific explanation than the Darwinian mechanism for the irreducible complexity of biochemical systems. In making its logical and empirical points, the argument from irreducible complexity assumes a negative or critical role, identifying limitations of the Darwinian mechanism. By contrast, in making its explanatory point, the argument from irreducible complexity assumes a positive or constructive role, providing positive grounds for thinking that irreducibly complex biochemical systems are in fact designed.

One question about these points likely remains. The logical point rules out direct Darwinian pathways to irreducible complexity and the empirical point rules out indirect Darwinian pathways to irreducible complexity. But the absence of empirical evidence for direct Darwinian pathways leading to irreducible complexity is as complete as it is for indirect Darwinian pathways. It might seem, then, that the logical point is superfluous inasmuch as the empirical point dispenses with both types of Darwinian pathways. But in fact the logical point strengthens the case against Darwinism in a way that the empirical point cannot. If you look at the best confirmed examples of Darwinian evolution in the biological literature (from Darwin to the present), what you find is natural selection steadily improving a given feature performing a given function in a given way. Indeed, the very notion of “improvement” (which played such a central role in Darwin’s Origin of Species) typically connotes that a given thing is getting better in a given respect.

Improvement in this sense corresponds to a direct Darwinian pathway. By contrast, an indirect Darwinian pathway (where one function gives way to another, so that the earlier function, having disappeared, can no longer be improved), though often inferred by evolutionary biologists from fossil or molecular data, tends to be much more difficult to establish rigorously. The reason is not hard to see: By definition natural selection selects for existing function—in other words, a function that is already in place and helping the organism in some way. On the other hand, natural selection cannot select for future function—functions that are not present and in some way currently helping the organism are invisible to natural selection. Once a novel function comes to exist, the Darwinian mechanism can select for it. But making the transition from old to new functions is not a task to which the Darwinian mechanism is suited. How does one evolve from a system exhibiting an existing selectable function to a new system exhibiting a novel selectable function? Because natural selection only selects for existing function, it is no help here, and all the weight is on random variation to come up with the right and needed modifications during the crucial transition time when functions are changing. (Or, as Darwin put it, “unless profitable variations do occur, natural selection can do nothing.”[11]) Yet the actual evidence that random variation can produce the successive modifications needed to evolve irreducible complexity is nil.

The argument from irreducible complexity, in making the logical point that irreducible complexity rules out direct Darwinian pathways, therefore rules out the form of Darwinian evolution that is best confirmed. Indirect Darwinian pathways, by contrast, are so open ended that there is no way to test them scientifically unless they are carefully specified—and invariably, when it comes to irreducibly complex systems, they are left unspecified, thus rendering them neither falsifiable nor verifiable. In making its logical point, the argument from irreducible complexity therefore takes logic as far as it can go in limiting the Darwinian mechanism and leaves empirical considerations to close off any remaining loopholes. And since logical inferences are inherently stronger than empirical inferences, the argument from irreducible complexity’s refutation of the Darwinian mechanism is as strong and tight as possible. It’s not just that certain biological systems are so complex that we can’t imagine how they evolved by Darwinian pathways. Rather, we can show conclusively that direct Darwinian pathways are causally inadequate to bring them about and that indirect Darwinian pathways, which have always been more difficult to substantiate, are utterly without empirical support in bringing them about. Conversely, we do know what has the causal power to produce irreducible complexity—intelligent design.

3 Scaffolding and Roman Arches 

Having laid out the basic definitions and general logic underlying the argument from irreducible complexity, let’s now consider the two main objections that Darwinists have raised against it. I’ll deal with one objection in this section and the other in the next. These objections attempt to show that an irreducibly complex system could, on closer examination, have been produced by gradual increments apart from design. According to the scaffolding objection, for evolution to produce an irreducibly complex system, first some non-irreducibly complex system needs to arise by mutation and selection incrementally adding components. Then, at some point, a subsystem arises that is able to function autonomously (i.e., without the rest of the system). Since it can function autonomously, the other components are now vestigial and drop away. When all have dropped away, we have a system that is irreducibly complex. In short, what appears to be a qualitative difference is really only the result of a lot of small quantitative changes.

The scaffolding objection thus claims that eliminating functional redundancy is a plausible route to irreducible complexity. If you will, instead of evolution achieving irreducible complexity from the bottom up by gradually adding components to a system, irreducible complexity is supposed to arise from the top down by taking a system and removing redundant components. For instance, there are situations in which, according to Thomas Schneider, “a functional species can survive without a particular genetic control system but … would do better to gain control ab initio.”[12] In such situations, Schneider continues:

Any new function must have this property until the species comes to depend on it, at which point it can become essential if the earlier means of survival is lost by atrophy or no longer available. I call such a situation a “Roman arch” because once such a structure has been constructed on top of scaffolding, the scaffold may be removed, and will disappear from biological systems when it is no longer needed. Roman arches are common in biology, and they are a natural consequence of evolutionary processes.[13]

To build a Roman arch requires a scaffold. So long as the scaffold is in place, pieces of the arch can be shifted in and out of position. But once all the pieces of the arch are in position and the scaffold is removed (i.e., redundancy is eliminated), each of the pieces of the arch becomes indispensable and the arch itself forms an irreducibly complex system. But there are two problems here. First, strictly speaking a Roman arch is not irreducibly complex. Yes, each of the pieces of the arch is indispensable in the sense that if you remove a part, the remaining parts cannot be rearranged to form an arch. But a Roman arch is simplifiable—a single, solid piece of rock can be made into the same shape as the arch, thereby performing the same function as the arch and doing so in essentially the same manner. Even so, one might argue that the failure of a Roman arch to be, strictly speaking, irreducibly complex is not all that serious. A Roman arch, after all, is functionally integrated, and so the question remains whether scaffolds constitute a plausible route to functionally integrated systems generally and thus perhaps to irreducibly complex systems in particular.

Notwithstanding, there is a more serious problem with the scaffolding objection. Consider what it would mean for Darwinian evolution to produce an irreducibly complex system like the bacterial flagellum by means of a scaffold. The Darwinian selection mechanism acts by taking advantage of, or selecting for, an existing function. What’s more, an irreducibly complex system like the bacterial flagellum obviously exhibits a basic function that is selectable. It follows that the bacterial flagellum plus any putative scaffold exhibits that same basic function, though the scaffold, by now being redundant, is destined to be eliminated by natural selection. So let’s ask the following question: In building up to the aggregate system of irreducibly complex system plus scaffold, when did the basic function arise? With a bacterial flagellum plus scaffold, for instance, when did bidirectional rotary motion for propelling the bacterium through its watery environment arise?

Scaffolding does nothing to change the fact that the basic function of an irreducibly complex system arises, by definition, only after all the core components of that system are in place. Given an irreducibly complex system to be explained by scaffolding, the challenge for the Darwinist is to identify a sequence of gradual functional intermediaries leading to it. These need to start from some initial simple system and eventually lead to an irreducibly complex system plus scaffold, whereupon natural selection then discards the scaffold once it becomes redundant. Even though the scaffold can help build the irreducibly complex system, the scaffold is specifically adapted to the basic function of the system it is helping to construct (e.g., the flagellum). What’s more, the only evidence of that basic function is from the irreducibly complex system itself. Thus, for the Darwinian mechanism to produce an irreducibly complex system by means of a scaffold, the system plus scaffold must have served a different function up until all the core components of the final irreducibly complex system became available, snapped into place, and formed a functional system. But in that case the scaffold metaphor becomes inappropriate—a scaffold, after all, is for constructing a structure serving a definite function and not for evolving structures whose functions are likewise evolving. That brings us to the next, and indeed principal, objection that Darwinists have raised against the argument from irreducible complexity.

4 Coevolution and Co-option 

To explain irreducible complexity, Darwinists in the end always fall back on indirect Darwinian pathways. In an indirect Darwinian pathway, not only does a structure evolve but so does its associated function. By contrast, in a direct Darwinian pathway, natural selection enhances or improves a structure that already serves a given function, but the function itself does not change. Since the function of an irreducibly complex system is not attained until all the parts of the irreducible core are in place, a direct Darwinian pathway would therefore have to produce such a system in one fell swoop. But that’s absurd. These systems are incredibly complicated and must, if they are to be produced apart from design, arise by, as Darwin put it, “numerous, successive, slight modifications.”[14] Thus, the only way for Darwinism to explain irreducible complexity is by means of an indirect Darwinian pathway in which structures and functions coevolve.

One way this could happen is for parts previously targeted for other systems to break free and be co-opted into a novel system. It is as though pieces from a car, bicycle, motorboat, and train were suitably recombined to form an airplane. Evolutionary theorists sometimes denote such systems as patchworks or bricolages. Thus any such airplane would be a patchwork or bricolage of preexisting materials originally targeted for different uses. Clearly, there is no logical impossibility that prevents such patchworks from forming irreducibly complex systems. But a patchwork, if sufficiently intricate and elegant, demands a precise causal account of how it arose. The bacterial flagellum, for instance, is an engineering marvel of miniaturization and performance. Simply to call such a system a patchwork of co-opted preexisting materials is therefore hardly illuminating and does nothing to answer how it originated. The problem with trying to explain an irreducibly complex system like the bacterial flagellum as a patchwork is that it requires multiple coordinated co-options. It is not just that one thing evolves for one function, and then, perhaps without any modification at all, gets used for some completely different function (imagine a rock first being used as a paperweight and then being co-opted for use as a doorstop). The problem is that multiple protein parts from different functional systems all have to break free and then all have to coalesce to form a newly integrated system (as with the airplane formed by taking parts from a car, bicycle, motorboat, and train).

Even if all the parts (i.e., proteins) for a bacterial flagellum are in place within a cell but serving other functions, there is no reason to think that those parts can come together spontaneously to form a tightly integrated system like the flagellum. The problem here is that parts performing functions in separate systems are unlikely to be adapted to each other so that they can work together coherently within a single system. Imagine a screw that’s part of one system and a nut that’s part of another system. If these systems originated independently, as they would for separately evolved biological systems, it is unlikely that the screw will be adapted to the nut so that the fit is mechanically useful (i.e., neither too tight, thereby preventing the screw from screwing into the nut at all, nor too loose, thereby preventing the screw from properly meshing with the nut). This problem is magnified in the cell. Take the evolution of the bacterial flagellum. Besides those proteins that go into a flagellum, a cell evolving a flagellum will have many other proteins that play no conceivable role in a flagellum. The majority of proteins in the cell will be of this sort. How then can those, and only those, proteins that go into a functional flagellum be brought together and guided to their proper locations in the cell without interfering cross-reactions from the other proteins? It is like going through a giant grocery store blindfolded, taking items off the racks, and hoping that what ends up in the shopping cart are the precise ingredients for a cake. Such an outcome is highly unlikely. University of Rochester biologist Allen Orr, who is no fan of intelligent design, agrees:

We might think that some of the parts of an irreducibly complex system evolved step by step for some other purpose and were then recruited wholesale to a new function. But this is … unlikely. You may as well hope that half your car’s transmission will suddenly help out in the airbag department. Such things might happen very, very rarely, but they surely do not offer a general solution to irreducible complexity.[15]

The problem with such co-option scenarios is that they require multiple coordinated co-options from multiple functional systems to bring about an irreducibly complex system.
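
The blindfolded-shopping analogy can be given toy numbers. The counts below are hypothetical assumptions of mine (the essay specifies only the roughly fifty flagellar and assembly proteins), and the model deliberately ignores assembly order and localization; it asks only how hard it is to gather the right set of parts:

```python
from math import comb

# Illustrative assumptions: the cell stocks 4,000 distinct protein
# types, and a working flagellum needs one specific set of 30 of them.
protein_types_in_cell = 4000
proteins_needed = 30

# Number of distinct 30-protein draws, and the chance that a single
# blind draw happens to be exactly the required set.
ways = comb(protein_types_in_cell, proteins_needed)
print(f"distinct 30-protein draws: {ways:.2e}")       # roughly 4e+75
print(f"chance of the right draw:  {1 / ways:.2e}")   # roughly 2.5e-76
```

Even on these made-up numbers the chance is astronomically small, and the real problem is harder still, since the parts must also be gathered in the right order at the right locations.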

But what if instead co-option occurred more gradually and incrementally? In the evolution of the bacterial flagellum, imagine natural selection gradually co-opting existing protein parts into a single evolving structure whose function co-evolves with the structure. In that case, an irreducibly complex system might arise by gradually co-opting parts that initially were dispensable but eventually become indispensable (as required of the parts that belong to the core of an irreducibly complex system). Here is how Allen Orr sketches this possibility:

An irreducibly complex system can be built gradually by adding parts that, while initially just advantageous, become—because of later changes—essential [i.e., indispensable]. The logic is very simple. Some part (A) initially does some job (and not very well, perhaps). Another part (B) later gets added because it helps A. This new part isn’t essential, it merely improves things. But later on, A (or something else) may change in such a way that B now becomes indispensable. This process continues as further parts get folded into the system. And at the end of the day, many parts may all be required.[16]

Let’s evaluate this argument. Orr posits a gradual increase in complexity in which novel parts that enhance function are added and subsequently rendered indispensable. But which function (or “job,” as Orr puts it) are we talking about? Obviously, functions along the way must be different from the final function because the final function is exhibited by an irreducibly complex system and hence cannot be exhibited by any system with a substantially simpler irreducible core. But then we run smack into an empirical problem: there is no empirical evidence that irreducibly complex biochemical systems like the bacterial flagellum came about by this method of add a component, make it indispensable, add another component, make it indispensable, etc. Indeed, Orr, along with the rest of the Darwinian community, never offers anything more than highly abstract scenarios for how irreducible complexity might arise. But clearly, something more is required. Minimally what’s required are detailed, testable reconstructions or models that demonstrate how indirect Darwinian pathways might reasonably have produced actual irreducibly complex biochemical machines like the bacterial flagellum. Orr, by contrast, merely gestures at unspecified abstract systems designated schematically by letters like “A” and “B.” Evolutionary biologists have nothing like detailed evolutionary pathways leading to irreducibly complex systems like the bacterial flagellum.

The closest thing that biologists have been able to find as a possible evolutionary precursor to the bacterial flagellum is what’s known as a type III secretion system (TTSS). The TTSS is a type of pump that enables certain pathogenic bacteria to inject virulent proteins into host organisms. One bacterium possessing the TTSS is Yersinia pestis, the organism responsible for the black plague that in the fourteenth century killed a third of the population of Europe. The TTSS was the delivery system by which Yersinia pestis inflicted its massive destruction of human life. Now it turns out that the ten or so proteins that go into the construction of the TTSS are similar (homologous) to proteins found in the bacterial flagellum. What’s more, the TTSS corresponds roughly to the part of the flagellum used in the construction of its filament (i.e., the long whip-like tail). But note, it is not possible simply to substitute the TTSS for the corresponding part of the bacterial flagellum and have a functioning flagellum. Because the proteins in the TTSS are not adapted to the proteins of the bacterial flagellum, the resulting kludge would be nonfunctional.

Despite such difficulties relating the TTSS to the bacterial flagellum, suppose we treat the TTSS as a subsystem of the flagellum. As such, it performs a function distinct from the flagellum. Nevertheless, finding a subsystem of a functional system that performs some other function is hardly an argument that the original system evolved from that subsystem. One might just as well say that because the motor of a motorcycle can by itself function as a heater, the motor evolved into the motorcycle. Perhaps it did, but not without intelligent design. Indeed, multipart, tightly integrated functional systems almost invariably contain multipart subsystems that could serve some different function. At best the TTSS represents one possible step in the indirect Darwinian evolution of the bacterial flagellum. But that still wouldn’t constitute a solution to the evolution of the bacterial flagellum. What’s needed is a complete evolutionary path and not merely a possible oasis along the way. To claim otherwise is like saying we can travel on foot from Los Angeles to Tokyo because we’ve discovered the Hawaiian Islands.

There’s another problem here. The whole point of bringing up the TTSS was to posit it as an evolutionary precursor to the bacterial flagellum. The best current evidence put forward by evolutionary biologists, however, points to the TTSS as evolving from the flagellum and not vice versa.[17] It’s easy to understand intuitively that the TTSS is more likely to have evolved from the bacterial flagellum than vice versa. The bacterial flagellum is a motility structure for propelling a bacterium through its watery environment. Water has been around since the origin of life. Indeed, evolutionary biologists surmise that the bacterial flagellum is 2 to 3 billion years old. But the TTSS is a delivery system for animal and plant pathogens. Its function therefore depends on the existence of multicellular organisms. Accordingly, the TTSS could only have been around since the rise of multicellular organisms, which evolutionary biologists place around 600 million years ago.

It follows that the TTSS does not explain the evolution of the flagellum. At best the bacterial flagellum could explain the evolution of the TTSS. But even that isn’t quite right. The TTSS is, after all, much simpler than the flagellum. The TTSS contains ten or so proteins that are homologous to proteins in the flagellum. The flagellum requires an additional thirty or forty proteins, which are unique. Evolution needs to explain the emergence of complexity from simplicity. But if the TTSS evolved from the flagellum, then all we’ve done is explain the simpler in terms of the more complex. Despite these difficulties, Darwinists continue to posit the TTSS as an evolutionary precursor to the bacterial flagellum.[18] Some of them even go so far as to posit a few intermediate structures by which the TTSS is supposed to have evolved into the bacterial flagellum.[19] But as evolutionary precursors to the bacterial flagellum, such intermediate structures are on even shakier ground than the TTSS. Unlike the TTSS, they exist only in the imaginations of evolutionary biologists. They do not exist in nature or in the laboratory, and evolutionary biologists never define them with enough specificity to be able to recognize them should they ever actually encounter them. In positing such intermediates, Darwinists purport to provide transitional steps that could lead from the TTSS to the bacterial flagellum. Some even claim that in providing such imaginary intermediates they have provided a “detailed, testable, step-by-step” Darwinian account for the formation of the bacterial flagellum.[20] But this is wishful thinking.

One such reconstruction proposes the following transitional steps leading to the bacterial flagellum: (1) Posit a bacterium that possesses “an ancestral TTSS” to start the evolutionary ball rolling. (2) Next, suppose this bacterium evolves a pilus or hair-like filament that extrudes through the TTSS; this pilus will later become the “propeller” that drives the fully evolved flagellum. (3) Next, suppose this pilus experiences “rapid improvements … under selection for increased strength, minimizing breakage, increased speed of assembly, etc.” (4) Next, suppose the pilus, though originally involved in adhesion, evolves motility that initially is quite crude, being nondirectional and simply for “random dispersal.” (5) Next, suppose this “crudely functioning protoflagellum” gets a chemotaxis and switching system tacked on so that motility becomes directional and interactive with the environment. (6) And finally, suppose this entire system gets refined through natural selection, which evolves a hook and additional axial components and thereby forms a modern flagellum.[21]

To justify such a model, Darwinists need to show that each step in it is reasonably likely to follow from the previous one. This requires being able to assess the probability of transitioning from one step to the next. And this in turn presupposes that the biological structures at each step are described in sufficient detail so that it is possible to assess the probabilities of transitioning between steps. Darwinism is a theory about connecting points in biological configuration space. It says that you can connect point A to point B in biological configuration space provided that you can take small enough steps where each step is fitness enhancing (or at least fitness neutral). The steps need to be small because Darwinism is a theory of gradual incremental change where each step along the way is reasonably probable. As Darwin put it in his Origin, for his theory to succeed it must explain biological complexity in terms of “numerous, successive, slight modifications.” Anything else would cause his theory to fall apart on the rocks of improbability.
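To see what Darwin’s requirement amounts to quantitatively, consider a minimal sketch of the arithmetic (the step probabilities below are hypothetical placeholders, not estimates drawn from the biological literature). If, for simplicity, the transitions between steps are treated as independent, the probability of traversing an entire evolutionary path is the product of the per-step transition probabilities, so even a handful of moderately improbable steps drives the overall probability down rapidly:

```python
from math import prod

# Hypothetical transition probabilities for the six steps of the
# flagellum model sketched above -- placeholders, not empirical estimates.
step_probabilities = [0.1, 0.05, 0.1, 0.01, 0.01, 0.1]

# Treating the transitions as independent for simplicity, the probability
# of completing the whole path is the product of the per-step probabilities.
p_path = prod(step_probabilities)
print(f"p(path)    = {p_path:.1e}")     # 5.0e-09

# Darwin's "numerous, successive, slight modifications": many more steps,
# but each one highly probable, fare far better.
p_gradual = prod([0.9] * 50)
print(f"p(gradual) = {p_gradual:.1e}")  # about 5.2e-03
```

The point of the sketch is only that Darwin’s requirement of reasonably probable steps is a quantitative constraint: assessing it presupposes that the steps are specified well enough for their probabilities to be estimated.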

Are the transitions from one step to the next in the preceding model reasonably probable? Does each step in this model constitute, as Darwin required, only a “slight modification”? There’s no way even to begin to answer these questions because the model is not sufficiently detailed. All evolutionary biologists actually have in hand are the modern TTSS, the modern bacterial flagellum, and various biochemical structures in extant organisms that are homologous to structures embedded in the flagellum. Evolutionary biologists have neither the intermediates that this model posits nor the ancestral TTSS that starts this model off. They don’t know what these intermediates look like. They don’t have their precise biochemical specification. They don’t know if the intermediate systems that the model hypothesizes would work. They have no way of determining how easy or hard it is for the Darwinian mechanism to bridge the steps in this model. Evolutionary biologists typically invoke gene duplications and mutations at key points where the Darwinian mechanism is supposed to effect transitions that are reasonably probable. But what gene exactly is being duplicated? And what locus on which gene is being mutated? Evolutionary biologists never say. Indeed, the steps in these models are so unspecific and bereft of detail that such questions are unanswerable. But unless we know detailed answers to such questions, there’s no way to know whether the transitions these models describe are reasonably probable and therefore of the type required by Darwin’s theory. It follows that such models are untestable. To actually test such models requires being able to evaluate the likelihood of transitioning from one step in the model to the next. Yet because the intermediate systems described at the various transitional steps are so lacking in detail (they are hypothetical; they do not, as far as we know, currently exist in nature; they are not available in any laboratory; and researchers for now have no experimental procedures for generating them in the laboratory), the models offer no way to carry out this evaluation.

It’s therefore not surprising that the scientific literature shows a complete absence of detailed, testable, step-by-step proposals for how coevolution and co-option could actually produce irreducibly complex biochemical systems. In place of such proposals, Darwinists simply observe that because subsystems of irreducibly complex systems might be functional, any such functions could be selected by natural selection. And from this unexceptional observation, Darwinists blithely conclude that selection works on those parts and thereby forms irreducibly complex systems.[22] But this conclusion is completely unfounded, as cell biologist Franklin Harold frankly admits: “there are presently no detailed Darwinian accounts of the evolution of any biochemical or cellular system, only a variety of wishful speculations.”[23] Biologist Lynn Margulis is equally forthright: “Like a sugary snack that temporarily satisfies our appetite but deprives us of more nutritious foods, neo-Darwinism sates intellectual curiosity with abstractions bereft of actual details—whether metabolic, biochemical, ecological, or of natural history.”[24]

To sum up, the Darwinian mechanism requires a selectable function if that mechanism is going to work at all. What’s more, functional pieces pulled together from various systems via coevolution and co-option are selectable by the Darwinian mechanism. But what is selectable here is the individual functions of the individual pieces and not the function of the yet-to-be-produced system. The Darwinian mechanism selects for preexisting function. It does not select for future function. Once that function is realized, the Darwinian mechanism can select for it as well. But making the transition from existing function to novel function is the hard part. How does one get from functional pieces that are selectable in terms of their individual functions to a system that makes use of those pieces and exhibits a novel function? In the case of irreducibly complex biochemical machines like the bacterial flagellum, the Darwinian mechanism is no help whatsoever.

5 The Connection with Specified Complexity 

In my books The Design Inference and No Free Lunch, I describe a formal criterion for detecting design, namely, specified complexity.[25] In this essay, we’ve seen that there are no detailed, testable, step-by-step Darwinian accounts for the evolution of any irreducibly complex biochemical machine such as the bacterial flagellum. What’s more, without the bias of speculative Darwinism coloring our conclusions, we are naturally inclined to see such irreducibly complex systems as the products of intelligent design. All our intuitions certainly point in that direction. That’s why Richard Dawkins writes, “Biology is the study of complicated things that give the appearance of having been designed for a purpose.”[26] That’s also why Francis Crick writes, “Biologists must constantly keep in mind that what they see was not designed, but rather evolved.”[27] Yet for Dawkins, Crick, and fellow Darwinists, the appearance of design in biology cannot be trusted. Accordingly, any intuitions that lead us to see actual design in biological systems are in fact leading us astray.

But intuitions need not lead us astray; they can also lead us aright. In fact, they often lead us to truths that might otherwise elude us. How, then, do scientists differentiate between the sound intuitions that lead us aright and the faulty intuitions that lead us astray? The problem for science with intuitions is that they are informal and imprecise. Hence, to determine whether intuitions are leading us astray or aright, scientists attempt to flesh out intuitions with precise formal analyses. Darwinists claim to have done just that. Thus, they purport to have shown where our intuitions about design in biology break down and how the Darwinian selection mechanism can bring about the appearance of design in biology. But Darwinists have demonstrated no such thing. As we’ve seen in the previous sections, Darwin’s theory offers no insight into the emergence of irreducibly complex molecular machines.

It follows that we need once again to take seriously our intuitions that such systems (notably the bacterial flagellum) are in fact designed. The challenge, then, for the design theorist is to provide precise formal analyses showing that our intuitions about design in biology are indeed justified and, specifically, how various biological systems satisfy the formal criterion for detecting design described at the start of this section, namely, the criterion of specified complexity. What, then, does such a formal, design-theoretic analysis of irreducibly complex systems look like? How does it demonstrate that such systems are indeed complex and specified, therefore exhibit specified complexity, and thus are in fact designed? The details here are technical, but the general logic by which design theorists argue that irreducibly complex systems exhibit specified complexity is straightforward: for a given irreducibly complex system and any putative evolutionary precursor, show that the probability of the Darwinian mechanism evolving that precursor into the irreducibly complex system is small. In such analyses, specification is never a problem—in each instance, the irreducibly complex system, any evolutionary precursor, and any intermediate between the precursor and the final irreducibly complex system are always specified in virtue of their biological function. Also, the probabilities here need not be calculated exactly. It’s enough to establish reliable upper bounds on the probabilities and show that they are small. What’s more, if the probability of evolving a precursor into a plausible intermediate is small, then the probability of evolving that precursor through the intermediate into the irreducibly complex system will a fortiori be small.
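The a fortiori claim at the end of the preceding paragraph can be made explicit. In the following sketch (the notation is introduced here purely for illustration), A is the putative evolutionary precursor, I a plausible intermediate, and S the irreducibly complex system:

```latex
% Probability of evolving A into S through the intermediate I:
\Pr(A \to I \to S)
  = \Pr(A \to I)\,\Pr(I \to S \mid A \to I)
  \le \Pr(A \to I)
% since the conditional factor is at most 1. A small upper bound on the
% precursor-to-intermediate transition therefore bounds the probability
% of the entire pathway.
```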

Darwinists object to this approach to establishing the specified complexity of irreducibly complex biochemical systems. They contend that design theorists, in taking this approach, have merely devised a “tornado-in-a-junkyard” straw man. The image of a “tornado in a junkyard” is due to astronomer Fred Hoyle. Hoyle imagined a junkyard with all the pieces for a Boeing 747 strewn in disarray and then a tornado blowing through the junkyard and producing a fully assembled 747 ready to fly.[28] Darwinists object that this image has nothing to do with how Darwinian evolution produces biological complexity. Accordingly, when it comes to the formation of irreducibly complex systems like the bacterial flagellum, all that such arguments are said to show is that these systems could not have formed by purely random assembly. But, Darwinists contend, evolution is not about randomness. Rather, it is about natural selection sifting the effects of randomness.

To be sure, if design theorists were merely arguing that pure randomness cannot bring about irreducibly complex systems, there would be merit to the Darwinists’ tornado-in-a-junkyard objection. But that’s not what design theorists are arguing. The problem with Hoyle’s tornado-in-a-junkyard image is that, from the vantage of probability theory, it made the formation of a fully assembled Boeing 747 from its constituent parts as difficult as possible. But what if the parts were not randomly strewn about in the junkyard? What if, instead, they were arranged in the order in which they needed to be assembled to form a fully functional 747? Furthermore, what if, instead of a tornado, a robot capable of assembling airplane parts were handed the parts in the order of assembly? How much knowledge would need to be programmed into the robot for it to have a reasonable probability of assembling a fully functioning 747? Would it require more knowledge than could reasonably be ascribed to a program simulating Darwinian evolution?

Design theorists, far from trying to make it difficult to evolve irreducibly complex systems like the bacterial flagellum, strive to give the Darwinian selection mechanism every legitimate advantage in evolving such systems. The one advantage that cannot legitimately be given to the Darwinian selection mechanism, however, is prior knowledge of the system whose evolution is in question. That would be endowing the Darwinian mechanism with teleological powers (in this case foresight and planning) that Darwin himself insisted it does not, and indeed cannot, possess if evolutionary theory is effectively to dispense with design. Yet even with the most generous allowance of legitimate advantages, the probabilities computed for the Darwinian mechanism to evolve irreducibly complex biochemical systems like the bacterial flagellum always end up being exceedingly small.[29]

The reason these probabilities always end up being so small is the difficulty of coordinating successive evolutionary changes apart from teleology or goal-directedness. In the Darwinian mechanism, neither selection nor variation operates with reference to future goals (like the goal of evolving a bacterial flagellum from a bacterium lacking this structure). Selection is natural selection, which is solely in the business of conferring immediate benefits on an evolving organism. Likewise, variation is random variation, which is solely in the business of perturbing an evolving organism’s heritable structure without regard for how such perturbations might benefit or harm future generations of the organism. In attempting to coordinate the successive evolutionary changes needed to bring about irreducibly complex biochemical machines, the Darwinian mechanism therefore encounters a number of daunting probabilistic hurdles. These include the following:[30]

  1. Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available?
  2. Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure?
  3. Localization. Even with parts that are available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the “construction site” of the evolving system?
  4. Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the “construction site” of the evolving system?
  5. Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system?
  6. Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system?
  7. Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system?

To see what’s at stake in overcoming these hurdles, imagine you are a contractor who has been hired to build a house. If you are going to be successful at building the house, you will need to overcome each of these hurdles. First, you have to determine that all the items you need to build the house (e.g., bricks, wooden beams, electrical wires, glass panes, and pipes) exist and thus are available for your use. Second, you need to make sure that you can obtain all these items within a reasonable period of time. If, for instance, crucial items are back-ordered for years on end, then you won’t be able to fulfill your contract by completing the house within the appointed time. Thus, the availability of these items needs to be properly synchronized. Third, you need to transport all the items to the construction site. In other words, all the items needed to build the house need to be brought to the location where the house will be built.

Fourth, you need to keep the construction site clear of items that would ruin the house or interfere with its construction. For instance, dumping radioactive waste or laying high-explosive mines on the construction site would effectively prevent a usable house from ever being built there. Less dramatically, if excessive amounts of junk found their way to the site (items that are irrelevant to the construction of the house, such as tin cans, broken toys, and discarded newspapers), it might become so difficult to sort through the clutter and thus to find the items necessary to build the house that the house itself might never get built. Items that find their way to the construction site and hinder the construction of a usable house may thus be described as producing interfering cross-reactions. Fifth, procuring the right sorts of materials required for houses in general is not enough. As a contractor you also need to ensure that they are properly adapted to each other. Yes, you’ll need nuts and bolts, pipes and fittings, electrical cables and conduits. But unless nuts fit properly with bolts, unless fittings are adapted to pipes, and unless electrical cables fit inside conduits, you won’t be able to construct a usable house. To be sure, each part taken by itself can make for a perfectly good building material capable of working successfully in some house or other. But your concern here is not with some house or other but with the house you are actually building. Only if the parts at the construction site are adapted to each other and interface correctly will you be able to build a usable house. In short, as a contractor you need to ensure that the parts you are bringing to the construction site not only are of the type needed to build houses in general but also share interface compatibility so that they can work together effectively.

Sixth, even with all and only the right materials at the construction site, you need to make sure that you put the items together in the correct order. Thus in building the house, you need first to lay the foundation. If you try to erect the walls first and then lay the foundation under the walls, your efforts to build the house will fail. The right materials require the right order of assembly to produce a usable house. Seventh and last, even if you are assembling the right building materials in the right order, the materials need also to be arranged appropriately. That’s why, as a contractor, you hire masons, plumbers, and electricians. You hire these subcontractors not merely to assemble the right building materials in the right order but also to position them in the right way. For instance, it’s all fine and well to take bricks and assemble them in the order required to build a wall. But if the bricks are oriented at strange angles or if the wall is built at a slant so that the slightest nudge will cause it to topple over, then no usable house will result even if the order of assembly is correct. In other words, it’s not enough for the right items to be assembled in the right order; rather, as they are being assembled, they also need to be properly configured.

Now, as a building contractor, you find none of these seven hurdles insurmountable. That’s because, as an intelligent agent, you can coordinate all the tasks needed to clear these hurdles. You have an architectural plan for the house. You know what materials are required to build the house. You know how to procure them. You know how to deliver them to the right location at the right time. You know how to secure the location from vandals, thieves, debris, weather, and anything else that would spoil your construction efforts. You know how to ensure that the building materials are properly adapted to each other so that they work together effectively once put together. You know the order of assembly for putting the building materials together. And, through the skilled laborers you hire (i.e., the subcontractors), you know how to arrange these materials in the right configuration. All this know-how results from intelligence and is the reason you can build a usable house. But the Darwinian mechanism of random variation and natural selection has none of this know-how. All it knows is how to randomly modify things and then preserve those random modifications that happen to be useful at the moment. The Darwinian mechanism is an instant gratification mechanism. If the Darwinian mechanism were a building contractor, it might put up a wall because of its immediate benefit in keeping out intruders from the construction site, even though by building the wall now, no foundation could be laid later and, in consequence, no usable house could ever be built at all. That’s how the Darwinian mechanism works, and that’s why it is so limited. It is a trial-and-error tinkerer for which each act of tinkering needs to maintain or enhance present advantage or select for a newly acquired advantage.

Imagine, therefore, what it would mean for the Darwinian mechanism to clear these seven hurdles in evolving a bacterial flagellum. We start with a bacterium that has no flagellum, no genes coding for proteins in the flagellum, and no genes homologous to genes coding for proteins in the flagellum. Such a bacterium is supposed to evolve, over time, into a bacterium with the full complement of genes needed to put together a fully functioning flagellum. Is the Darwinian mechanism adequate for coordinating all the biochemical events needed to clear these seven hurdles and thereby evolve the bacterial flagellum? To answer yes to this question is to attribute creative powers to the Darwinian mechanism that are implausible in the extreme.

To see this, let’s run through these seven hurdles in turn, at each hurdle assessing its potential challenge to the Darwinian evolution of the bacterial flagellum. Let’s start with availability: can the Darwinian mechanism clear the availability hurdle? To clear this hurdle, the Darwinian mechanism needs to be able to form novel proteins from scratch (the bacterial flagellum, if it evolved at all, evolved from a bacterium without any of the genes, exact or homologous, for the proteins constituting the flagellum). Now it’s certainly true that the Darwinian mechanism is capable of tinkering with existing proteins or recruiting them wholesale for new uses. But there is no evidence that it can produce complex specified proteins from scratch (the problem of specified complexity thus arises not just at the level of irreducibly complex molecular machines but even at the level of the individual proteins that make up these machines and serve as their elemental constituents). Moreover, recent work on the extreme functional sensitivity of proteins provides strong evidence that certain classes of proteins are in principle unevolvable by gradual means (and thus a fortiori by the Darwinian mechanism) because small perturbations of these proteins destroy all conceivable biological function (and not merely existing biological function).[31] Thus, it’s highly implausible that the Darwinian mechanism can generate the novel proteins (as well as the novel genes coding for them) required in the evolution of the bacterial flagellum.

What about the synchronization hurdle? Some hurdles are easier for the Darwinian mechanism to clear than others, and this is perhaps one of them. Natural selection is capable of locking in existing structures that serve some biologically useful purpose. Thus, once available, a biologically useful structure will tend to remain available. What’s more, unlike building contractors, who need to complete projects in narrow windows of time, Darwinian evolution works without immediate deadlines (though note that astrophysics imposes long-term deadlines, as with the Sun turning into a red giant in about 5 billion years, causing it to expand and burn up everything in its path, including the Earth[32]). Thus, the timing with which items become available for systems to evolve tends not to be so critical in biological evolution. The only hitch could be that an item that hitherto has served a biologically useful function and is needed in the future evolution of some irreducibly complex system loses its functional advantage somewhere in the middle of the evolutionary process and thus falls into disuse. If that happens, natural selection will tend to eliminate that item, thereby rendering it unavailable.

The localization hurdle, on the other hand, seems considerably more difficult for the Darwinian mechanism to clear. The problem here is that items originally assigned to certain systems need to be reassigned and recruited for use in a newly emerging system. This newly emerging system starts as an existing system that then gets modified with items previously incorporated in other systems. But how likely is it that these items break free and get positioned at the construction site of an existing system, thereby transforming it into a newly emerging system with a novel or enhanced function? Our best evidence suggests that this repositioning of items previously assigned to different systems is improbable and becomes increasingly improbable as more items need to be repositioned simultaneously at the same location. There are two reasons for this. First, the construction site for a given biochemical system tends to maintain its integrity, incorporating only proteins pertinent to the system and keeping out stray proteins that could be disruptive. Second, proteins don’t just break free of systems to which they are assigned as a matter of course; rather, a complex set of genetic changes is required, such as gene duplications, regulatory changes, and point mutations.

The interfering cross-reaction hurdle intensifies the challenge to the Darwinian mechanism posed by the previous hurdle. If the bacterial flagellum is indeed the result of Darwinian evolution, then evolutionary precursors to the flagellum must have existed along the way. These precursors would have been functional systems in their own right, and in their evolution to the flagellum would have needed to be modified by incorporating items previously assigned to other uses. These items would then need to have been positioned at the construction site of the given precursor. Now, as we just saw with the localization hurdle, there is no reason to think that this is likely. Typically, a construction site for a given biochemical system has an integrity of its own, incorporating only proteins pertinent to the system and keeping out stray proteins that could be disruptive. But suppose the construction site becomes more open to novel proteins (thus lowering the localization hurdle and thereby raising the probability of clearing it). In that case, by welcoming items that could help in the evolution of the bacterial flagellum, the construction site would also welcome items that could hinder its evolution. It follows that to the degree that the localization hurdle is easy to clear, to that degree the interfering cross-reaction hurdle is difficult to clear, and vice versa.

With the interface-compatibility hurdle, we come to the gravest difficulty confronting the Darwinian mechanism. The problem is this. For the Darwinian mechanism to evolve a system, it must redeploy parts previously targeted for other systems. But that’s not all. It also needs to ensure that those redeployed parts mesh or interface properly. If not, the evolving system will cease functioning and thus no longer confer a selectable advantage. The products of Darwinian evolution are, after all, kludges. In other words, they are systems formed by sticking together items previously assigned to different uses. Now, if these items were built according to common standards or conventions, there might be some reason to think that they could work together effectively. But natural selection is incapable of instituting such standards or conventions. Think of cars manufactured by different automobile companies — say, a Chevrolet Impala from the United States and a Honda Accord from Japan. Although these cars will be quite similar and have subsystems and parts that perform identical functions in identical ways, the parts will be incompatible. You can’t, for instance, swap a piston from one car for a piston in the other or, for that matter, swap bolts, nuts, and screws from the two vehicles. That’s because these cars were designed independently according to different standards and conventions. Of course, at the Chevrolet plant that builds the Impala, there will be common standards and conventions ensuring that different parts of the Impala have compatible interfaces. But across automobile manufacturers (e.g., Chevrolet and Honda), there will be no (or very few) common standards and conventions to which the construction of parts must adhere. In fact, common standards and conventions that facilitate the interface compatibility of distinct functional systems point not just to the design of the systems but also to a common design responsible for the common standards and conventions.

But the Darwinian mechanism is incapable of such common design. As an instant gratification mechanism, its only stake is in bringing about structures that constitute an immediate advantage to an evolving organism. It has no stake in ensuring that such structures also adhere to standards and conventions that will allow them to interface effectively with other structures down the line. Thus, suppose the model proposed in section 4 for the evolution of the bacterial flagellum is, at least in broad strokes, accurate (though, as we saw in that section, this model is neither detailed nor testable nor step-by-step). On this model, at a crucial stage in the evolution of the bacterial flagellum, a pilus got redeployed and attached to a TTSS. Yet prior to their juxtaposition, these two systems had evolved independently. Consequently, short of invoking sheer blind luck, there is no reason to think that these systems should work together—any more than there is to think that independently designed cars would have swappable parts. This weakness of Darwinian theory can be tested experimentally: take an arbitrary TTSS and pilus and determine the extent of the genetic modifications needed for the pilus to extrude through the TTSS’s protein delivery system. At present, there is no evidence, whether theoretical or experimental, that the Darwinian mechanism can clear the interface compatibility hurdle.

For the Darwinian mechanism to clear the order-of-assembly hurdle is also a stretch. The Darwinian mechanism works by accretion and modification: it adds novel parts to already functioning systems as well as modifies existing parts in them. In this way, new systems with enhanced or novel functions are formed. Now, consider what happens when novel parts are first added to an already functioning system. In that case, the earlier system becomes a subsystem of a newly formed supersystem. What’s more, the order of assembly of the subsystem will, at least initially (before subsequent modifications), be the same as when the subsystem was a standalone system. In general, however, just because the parts of a subsystem can be put together in a given order doesn’t mean those parts can be put together in the same order once the subsystem is embedded in a supersystem. In fact, in the evolution of systems like the bacterial flagellum, we can expect the order of assembly of parts to undergo substantial permutations (certainly, this is the case with the model for the evolution of the bacterial flagellum discussed in section 4). How, then, does the order of assembly undergo the right permutations? For most biological systems, the order of assembly is entrenched and does not permit substantial deviations. The burden of evidence is therefore on the Darwinist to show that for an evolving system, the Darwinian mechanism coordinates not only the emergence of the right parts but also their assembly in the right order. Darwinists have done nothing like this.

Finally, we consider the configuration hurdle. In the design and construction of human artifacts, this hurdle is one of the more difficult to clear. Nevertheless, in the evolution of irreducibly complex biochemical systems like the bacterial flagellum, this is one of the easier hurdles to clear. That’s because in the actual assembly of the flagellum and systems like it, the biochemical parts do not come together haphazardly. Rather, they self-assemble in the right configuration when chance collisions allow specific, cooperative, local electrostatic interactions to lock the flagellum together, one piece at a time. Thus, in the evolution of the bacterial flagellum, once the interface-compatibility and order-of-assembly hurdles are cleared, so is the configuration hurdle. There’s a general principle here: for self-assembling structures, such as biological systems, configuration is a byproduct of other constraints (like interface compatibility and order of assembly). But note, this is not to say that the configuration of these systems comes for free. Rather, it is to say that the cost of their configuration is included in other costs.

The seven hurdles that I’ve just described should not be construed as merely subjective or purely qualitative challenges to the Darwinian mechanism. It is possible to assess objectively and quantitatively the challenge these hurdles pose to the Darwinian mechanism. Associated with each hurdle is a probability:

  • p_avail The probability that the types of parts needed to evolve a given irreducibly complex biochemical system become available (the availability probability).
  • p_synch The probability that these parts become available at the right time so that they can be incorporated when needed into the evolving system (the synchronization probability).
  • p_local The probability that these parts, given their availability at the right time, can break free of the systems in which they are currently integrated and be localized at the appropriate site for assembly (the localization probability).
  • p_i-c-r The probability that other parts, which would produce interfering cross-reactions and thereby block the formation of the irreducibly complex system in question, get excluded from the site where the system will be assembled (the interfering-cross-reaction probability).
  • p_i-f-c The probability that the parts recruited for inclusion in an evolving system interface compatibly so that they can work together to form a functioning system (the interface-compatibility probability).
  • p_o-o-a The probability that even with the right parts reaching the right place at the right time, and even with full interface compatibility, they will be assembled in the right order to form a functioning system (the order-of-assembly probability).
  • p_config The probability that even with all the right parts being assembled in the right order, they will be arranged in the right way to form a functioning system (the configuration probability).

Note that each of these probabilities is conditional on the preceding ones. Thus, the synchronization probability assesses the probability of synchronization on condition that the needed parts are available. Similarly, the order-of-assembly probability assesses the probability that assembly can be performed in the right order on condition that all the parts are available (availability) at the right time (synchronization) at the right place (localization) without interfering cross-reactions and with full interface compatibility. As a consequence, the probability of an irreducibly complex system arising by Darwinian means cannot exceed the following product (note that because the probabilities are conditional on the preceding ones, in forming this product no unwarranted assumption about probabilistic independence is being slipped in here):

p_avail × p_synch × p_local × p_i-c-r × p_i-f-c × p_o-o-a × p_config.

If we now define p_origin as the probability of an irreducibly complex system originating by Darwinian means (the origination probability), then the following inequality holds (the origination inequality):

p_origin ≤ p_avail × p_synch × p_local × p_i-c-r × p_i-f-c × p_o-o-a × p_config.[33]

The origination inequality has far-reaching implications. Because probabilities are numbers between zero and one, this inequality tells us that if even one of the probabilities to the right of the inequality sign is small, then the origination probability must itself be small (indeed, no bigger than any of the probabilities on the right). It follows that we don’t have to calculate all seven probabilities to the right of the inequality sign to ensure that p_origin is small. It also follows that none of these probabilities needs to be calculated exactly. It is enough to have reliable upper bounds on these probabilities. If any of these upper bounds is small, then so is the associated probability and so is the origination probability. And if the origination probability is small, then the irreducibly complex system in question is both highly improbable and specified (all these irreducibly complex systems are specified in virtue of their biological function). It follows that if the origination probability is small, then the system in question exhibits specified complexity; and since specified complexity is a reliable empirical marker of actual design, it follows that the system itself is designed.
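The logic of the origination inequality is easy to verify numerically. Here is a minimal sketch in Python (the numerical bounds are hypothetical placeholders, not estimates from the biological literature): any factor not yet estimated is conservatively set to 1, and the resulting upper bound on p_origin can never exceed the smallest factor.

```python
from math import prod

# Hypothetical upper bounds on the seven conditional probabilities.
# A value of 1.0 marks a factor that has not yet been estimated --
# the most conservative assumption possible.
bounds = {
    "avail":  1.0,    # not yet estimated
    "synch":  1.0,    # not yet estimated
    "local":  1.0,    # not yet estimated
    "i-c-r":  1.0,    # not yet estimated
    "i-f-c":  1e-20,  # hypothetical small upper bound
    "o-o-a":  1.0,    # not yet estimated
    "config": 1.0,    # not yet estimated
}

# Upper bound on the origination probability: the product of the bounds.
p_origin_bound = prod(bounds.values())
print(f"p_origin <= {p_origin_bound:.1e}")  # 1.0e-20

# Because every factor lies between 0 and 1, the product -- and hence
# p_origin -- can never exceed the smallest factor.
assert p_origin_bound <= min(bounds.values())
```

Note that the single estimated factor already fixes the bound; estimating the remaining six factors could only drive the bound lower.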

It will be helpful here to contrast the origination inequality with the Drake equation, which arises in the search for extraterrestrial intelligence (SETI). In 1961, an astrophysicist named Frank Drake organized the first SETI conference and introduced the now-famous Drake equation:

N = N* × f_p × n_e × f_l × f_i × f_c × f_L.[34]

Here are what the terms of this equation mean:

  • N The number of technologically advanced civilizations in the Milky Way Galaxy capable of communicating with Earth.
  • N* The number of stars in the Milky Way Galaxy.
  • f_p The fraction of stars that have planetary systems.
  • n_e The average number of planets per star capable of supporting life.
  • f_l The fraction of those planets in turn where life evolves.
  • f_i The fraction of those planets in turn where intelligent life evolves.
  • f_c The fraction of those planets in turn with civilizations that invent advanced communications technology.
  • f_L The fraction of a planetary lifetime during which communicating civilizations exist.

The Drake equation gauges how likely the search for extraterrestrial intelligence is to succeed: the bigger N, the more likely SETI researchers are to find signs of intelligence from distant space.
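For comparison, here is the Drake equation as a one-line Python function. The input values are illustrative guesses only (which, as the Crichton passage quoted below complains, is all that is currently available for most of the terms):

```python
def drake_equation(n_stars: float, f_p: float, n_e: float, f_l: float,
                   f_i: float, f_c: float, f_L: float) -> float:
    """N: the number of communicating civilizations in the galaxy."""
    return n_stars * f_p * n_e * f_l * f_i * f_c * f_L

# Illustrative guesses only -- none of these values is empirically
# established, and each could plausibly be off by orders of magnitude.
N = drake_equation(n_stars=2.5e11, f_p=0.5, n_e=2.0,
                   f_l=0.1, f_i=0.01, f_c=0.01, f_L=1e-4)
print(f"N = {N:.0f}")  # 250 with these guesses; other guesses yield
                       # anything from "billions" down to nearly zero
```

Unlike the origination inequality, where a single well-estimated small factor suffices for a conclusion, a useful value of N requires every one of the seven inputs to be estimated.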

As with the origination inequality, the Drake equation is determined by seven terms, namely, the seven factors on the right side of the equation. What’s more, these seven terms, like the seven factors on the right side of the origination inequality, depend on each other successively. For instance, the fraction of planets where intelligent life evolves is defined in terms of the fraction of planets on which life simpliciter evolves.

Despite these interesting parallels between the Drake equation and the origination inequality—not least that both are used for discovering signs of intelligence—there is also a sharp difference. For the Drake equation to convince us that the search for extraterrestrial intelligence is likely to succeed, none of the factors on the right side of that equation can be too small. Only then will SETI researchers stand a reasonable chance of discovering signs of extraterrestrial intelligence. By contrast, with the origination inequality, to guarantee the specified complexity, and therefore design, of an irreducibly complex system, it is enough to show that even one term on the right side of the inequality is sufficiently small. With regard to the practical application of these formulas, this difference makes all the difference in the world.

The problem with the Drake equation is that most of the terms cannot be estimated. As Michael Crichton observed in a widely publicized Caltech lecture,

the only way to work the equation is to fill in with guesses. And guesses—just so we’re clear—are merely expressions of prejudice. Nor can there be “informed guesses.” If you need to state how many planets with life choose to communicate, there is simply no way to make an informed guess. It’s simply prejudice. As a result, the Drake equation can have any value from “billions and billions” to zero. An expression that can mean anything means nothing…. I take the hard view that science involves the creation of testable hypotheses. The Drake equation cannot be tested…. There is not a single shred of evidence for any other life forms, and in forty years of searching, none has been discovered.[35]

Crichton’s point about the Drake equation’s testability is well taken. The Drake equation is testable only if all its terms can be reasonably estimated (which, for now, they cannot). By contrast, the origination inequality becomes testable as soon as even one of its terms can be reasonably estimated. That’s because as soon as even one term on the right side of the origination inequality is small, the origination probability itself must be at least as small.
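This asymmetry can be made concrete with a few lines of Python (all numbers are hypothetical placeholders): if even one Drake term is unknown across a wide range, N inherits that entire range, whereas a single estimated term in the origination inequality already fixes a usable bound.

```python
# Drake: if even one factor is only known to lie in a wide range,
# N inherits that whole range (hypothetical numbers throughout).
partial_product = 2.5e11 * 0.5 * 2.0 * 0.1 * 0.01 * 0.01  # six terms guessed
f_L_range = (1e-10, 1e-2)                                  # seventh unknown
N_range = (partial_product * f_L_range[0], partial_product * f_L_range[1])
print(N_range)  # (0.00025, 25000.0): N is indeterminate over 8 orders
                # of magnitude

# Origination inequality: one estimated term already bounds the result,
# no matter what the other six turn out to be.
p_ifc_bound = 1e-20            # hypothetical upper bound on one factor
p_origin_bound = p_ifc_bound   # the remaining factors can only lower this
```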

Nor is the origination inequality testable only in principle. Take, for instance, the interface-compatibility probability. It is possible to join existing biochemical systems (anything from individual proteins to complex biochemical machines) and determine experimentally the degree to which their interfaces are compatible. It is also possible to take apart existing biochemical systems, perturb them, and then put them back together. To the degree that these systems tolerate perturbation, they are evolvable by Darwinian means. Conversely, to the degree that these systems are sensitive to perturbation, they are unevolvable by Darwinian means. Experiments like this can be conducted on actual biochemical systems. Alternatively, they can be conducted using computer simulations that model biochemical processes. The point is that for the interface-compatibility probability, as for the other probabilities in the origination inequality, there is no inherent obstacle to deriving reliable, experimentally confirmed estimates. Both Darwinists and design theorists have a significant stake in estimating these probabilities, research on which is only now beginning.

The origination inequality has no inherent bias. It does not predetermine whether a given irreducibly complex biochemical system is designed. So long as each of its probabilities is large or remains unestimated, the presumption is against the system exhibiting specified complexity and therefore against it being designed. On the other hand, should any of the probabilities become sufficiently small, then the presumption shifts to the system exhibiting specified complexity and being designed. In this way, the origination inequality makes for a level playing field in deciding between Darwinian and intelligent design theories. Darwinists tacitly consent to the origination inequality whenever they invoke high probability events to support their theory. For instance, in seeking confirmation that antibiotic resistance in bacteria results from the Darwinian mechanism and not intelligent design, Darwinists are happy to note that the probability of the point mutations needed for antibiotic resistance is large.

But having tacitly consented to the origination inequality whenever it confirms Darwinian theory, Darwinists are quick to deny that this inequality can legitimately be employed to disconfirm Darwinian theory. The double standard here goes right back to Darwin himself. In the Origin of Species Darwin issued the following challenge: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.”[36] Darwin is here offering one of those heads-I-win-tails-you-lose challenges. Indeed, his challenge is no challenge at all — it guarantees that Darwinian theory will not, and indeed cannot, be subjected to critical scrutiny. As Robert Koons points out,

How could it be proved that something could not possibly have been formed by a process specified no more fully than as a process of “numerous, successive, slight modifications”? And why should the critic [of Darwin’s theory] have to prove any such thing? The burden is on Darwin and his defenders to demonstrate that at least some complex organs we find in nature really can possibly be formed in this way, that is, by some specific, fully articulated series of slight modifications.[37]

In order even to use the origination inequality, one must first propose specific evolutionary pathways leading to irreducibly complex biochemical systems like the bacterial flagellum. Only with such proposals in hand can one begin to estimate the probabilities that appear in the origination inequality. Moreover, once such proposals are made, they invariably point up the inadequacy of the Darwinian mechanism because the origination probabilities associated with irreducibly complex biochemical systems have, to date, always proven to be small. Design theorists take this as strong confirmation that these systems exhibit specified complexity and are in fact designed. Darwinists, by contrast, take this as simply showing that evolutionary biology has yet to come up with the right evolutionary pathways by which the Darwinian mechanism produced the systems in question.

Who’s right? By now it’s clear that neither party to this controversy is going to give way any time soon. From the vantage of the design theorist, the Darwinist has artificially insulated Darwinian theory and rendered it immune to disconfirmation in principle because the universe of unknown Darwinian pathways can never be exhausted. From the vantage of the Darwinist, on the other hand, nothing less than an in-principle exclusion and exhaustion of all conceivable Darwinian pathways suffices to shift the burden of evidence onto the Darwinist. To an outsider, with no stake in the outcome of this controversy, the asymmetry of these positions will be obvious. Intelligent design allows the evidence of biology both to confirm and to disconfirm it. Darwinism, by contrast, assumes no corresponding burden of evidence—it declares itself the winner against intelligent design by default.

This unwillingness of Darwinism to assume its due evidential burden is unworthy of science. Science, if it is to constitute an unbiased investigation into nature, must give the full range of logically possible explanations a fair chance to succeed. In particular, science may not by arbitrary decree rule out logical possibilities. Evolutionary biology, by unfairly privileging Darwinian explanations, has settled in advance which biological explanations must be true and which must be false, apart from any consideration of empirical evidence. This is not science. This is armchair philosophy. Even if intelligent design is not the correct theory of biological origins, the only way science could discover that is by admitting design as a live possibility rather than by ruling it out in advance. Darwin unfairly stacked the deck in favor of his theory. Nevertheless, elsewhere in the Origin of Species, he wrote: “A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question.”[38] That balance is now shifting away from Darwinism and toward intelligent design.

Reprinted with the permission of the author. See also www.designinference.com and

www.designinference.com/documents/2004.01.Irred_Compl_Revisited.pdf 

———————————————————————

Prof. Dr. Dr. William A. Dembski is Associate Research Professor for the Conceptual Foundations of Science at Baylor University’s Institute for Faith and Learning, Senior Fellow at the Discovery Institute’s Center for Science and Culture, and Executive Director of the International Society for Complexity, Information, and Design (www.iscid.org). He holds the following academic degrees:

B.A. in Psychology (University of Illinois at Chicago)
M.S. in Statistics (University of Illinois at Chicago)
S.M. in Mathematics (University of Chicago)
Ph.D. in Mathematics (University of Chicago)
M.A. in Philosophy (University of Illinois at Chicago)
Ph.D. in Philosophy (University of Illinois at Chicago)
M.Div. in Theology (Princeton Theological Seminary)

Fellowships/Awards:
Nancy Hirshberg Memorial Prize for best undergraduate research paper in psychology at the University of Illinois at Chicago, 1981
National Science Foundation Graduate Fellowship for psychology and mathematics, 1982-1985
McCormick Fellowship (University of Chicago) for mathematics, 1984-1988
National Science Foundation Postdoctoral Fellowship for mathematics, 1988-1991
Northwestern University Postdoctoral Fellowship (Department of Philosophy) for history and philosophy of science, 1992-1993
Pascal Centre Research Fellowship for studies in science and religion, 1992-1995
Notre Dame Postdoctoral Fellowship (Department of Philosophy) for philosophy of religion, 1996-1997
Discovery Institute Fellowship for research in intelligent design, 1996-1999
Templeton Foundation Book Prize ($100,000) for writing book on information theory, 2000-2001

Academic Positions:
Lecturer, University of Chicago, Department of Mathematics (teaching undergraduate mathematics), 1987-1988
Postdoctoral Visiting Fellow, MIT, Department of Mathematics (research in probability theory), 1988
Postdoctoral Visiting Fellow, University of Chicago, James Franck Institute (research in chaos & probability), 1989
Research Associate, Princeton University, Department of Computer Science (research in cryptography & complexity theory), 1990
Postdoctoral Fellow, Northwestern University, Department of Philosophy (teaching philosophy of science + research), 1992-1993
Independent Scholar, Center for Interdisciplinary Studies, Princeton (research in complexity, information, and design), 1993-1996
Postdoctoral Fellow, University of Notre Dame, Department of Philosophy (teaching philosophy of religion + research), 1996-1997
Adjunct Assistant Professor, University of Dallas, Department of Philosophy (teaching introduction to philosophy), 1997-1999
Fellow, Discovery Institute, Center for the Renewal of Science and Culture (research in complexity, information, and design), 1996-present
Associate Research Professor, Institute for Faith and Learning, Baylor University (research in intelligent design), 1999-present

Memberships:
Discovery Institute, senior fellow
Wilberforce Forum, senior fellow
Foundation for Thought and Ethics, academic editor
Origins & Design, associate editor
Princeton Theological Review, editorial board
Torrey Honors Program, Biola University, advisory board
American Scientific Affiliation
Evangelical Philosophical Society
Access Research Network
International Society for Complexity, Information, and Design, executive director

Other Academic Activities:
Endowed Lectures: “Truth in an Age of Uncertainty and Relativism.” Dom. Luke Child’s Lecture, Portsmouth Abbey School, 30 September 1988.
“Science, Theology, and Intelligent Design.” Staley Lectures, Central College, Iowa, 4-5 March 1998.
“Intelligent Design: Bridging Science and Faith.” Staley Lectures, Union University, Tennessee, 28 February – 1 March 2000.
“Intelligent Design.” Staley Lectures, Anderson College, Anderson, South Carolina, 15 & 16 January 2002.
“The Design Revolution.” Norton Lectures, Southern Baptist Theological Seminary, Louisville, Kentucky, 11 & 12 February 2003.
Participant, International Institute of Human Rights in Strasbourg, France, 28 June to 27 July 1990.
Summer research in design, Cambridge University, sponsored by Pascal Centre (Ancaster, Ontario, Canada), 1 July to 4 August 1992.
Participant, The Status of Darwinian Theory and Origin of Life Studies, Pajaro Dunes, California, 22-24 June 1993.
Faculty in theology and science at the C. S. Lewis Summer Institute, Cosmos and Creation. Cambridge University, Queens’ College, 10-23 July 1994.
Canadian lecture tour on intelligent design (Simon Fraser University, University of Calgary, and University of Saskatchewan), sponsored by the New Scholars Society, 4-6 February 1998.
Faculty in theology and science at the C. S. Lewis International Centennial Celebration, Loose in the Fire. Oxford and Cambridge Universities, 19 July to 1 August 1998.
The Nature of Nature, conference at Baylor University, 12-15 April 2000, organized by William A. Dembski and Bruce Gordon.
Seminar Organizer, “Design, Self-Organization, and the Integrity of Creation,” Calvin College Seminar in Christian Scholarship, 19 June – 28 July 2000. Follow-up conference 24-26 May 2001 (speakers included Alvin Plantinga, John Haught, and Del Ratzsch).

Contributor, “Prospects for Post-Darwinian Science,” symposium, New College, Oxford, August 2000. Other contributors included Michael Denton, Peter Saunders, Mae-Wan Ho, David Berlinski, Jonathan Wells, Stephen Meyer, and Simon Conway Morris. 

Participant, Symposium on Design Reasoning, Calvin College, 22-23 May 2001. Other participants were Stephen Meyer, Paul Nelson, Rob Koons, Del Ratzsch, Robin Collins, Tim & Lydia McGrew. Tim McGrew will edit the proceedings for an academic press. 

Presenter on detecting design, John Templeton Oxford Seminars on Science and Christianity, Wycliffe Hall, Oxford University, 23-27 July 2001.
Debate with Massimo Pigliucci, “Is Intelligent Design Smart Enough?” New York Academy of Sciences, 1 November 2001. 

Debate with Michael Shermer, “Does Science Prove God?” Clemson University, 7 November 2001.
Discussion with Stuart Kauffman, “Order for Free vs. No Free Lunch,” Center for Advanced Studies, University of New Mexico, 13 November 2001. 

Program titled “Darwin under the Microscope,” PBS television interview for Uncommon Knowledge with Peter Robinson, opposite Eugenie Scott and Robert Russell, 7 December 2001.
Canadian lecture tour on intelligent design (University of Guelph, University of Toronto, and McMaster University), sponsored by the Canadian Scientific and Christian Affiliation, 6-8 March 2002. 

Debate titled “God or Luck: Creationism vs. Evolution,” with Steven Darwin, professor of botany, Tulane University, New Orleans, 7 October 2002. 

Publications:
Books:
The Design Inference: Eliminating Chance through Small Probabilities. Cambridge: Cambridge University Press, 1998.
Intelligent Design: The Bridge between Science and Theology. Downers Grove, Ill.: InterVarsity Press, 1999. [Award: Christianity Today’s Book of the Year in the category “Christianity and Culture.”]
No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Lanham, Md.: Rowman & Littlefield, 2002.
Edited Collections:
Mere Creation: Science, Faith, and Intelligent Design (proceedings of a conference on design and origins at Biola University, 14-17 November 1996). Downers Grove, Ill.: InterVarsity Press, 1998.
Science and Evidence for Design in the Universe, Proceedings of the Wethersfield Institute, vol. 9 (co-edited with Michael J. Behe and Stephen C. Meyer). San Francisco: Ignatius Press, 2000.
Unapologetic Apologetics: Meeting the Challenges of Theological Studies (co-edited with Jay Wesley Richards; selected papers from the Apologetics Seminar at Princeton Theological Seminary, 1995-1997). Downers Grove, Ill.: InterVarsity Press, 2001.
Signs of Intelligence: Understanding Intelligent Design (co-edited with James Kushiner). Grand Rapids, Mich.: Brazos Press, 2001. 

Articles:
“Uniform Probability.” Journal of Theoretical Probability 3(4), 1990: 611-626.
“Scientopoly: The Game of Scientism.” Epiphany Journal 10(1&2), 1990: 110-120.
“Converting Matter into Mind: Alchemy and the Philosopher’s Stone in Cognitive Science.” Perspectives on Science and Christian Faith 42(4), 1990: 202-226. Abridged version in Epiphany Journal 11(4), 1991: 50-76. My response to subsequent critical comment: “Conflating Matter and Mind” in Perspectives on Science and Christian Faith 43(2), 1991: 107-111.
“Inconvenient Facts: Miracles and the Skeptical Inquirer.” Philosophia Christi (formerly Bulletin of the Evangelical Philosophical Society) 13, 1990: 18-45.
“Randomness by Design.” Noûs 25(1), 1991: 75-106.
“Reviving the Argument from Design: Detecting Design through Small Probabilities.” Proceedings of the 8th Biannual Conference of the Association of Christians in the Mathematical Sciences (at Wheaton College), 29 May – 1 June 1991: 101-145.
“The Incompleteness of Scientific Naturalism.” In Darwinism: Science or Philosophy? edited by Jon Buell and Virginia Hearn (Proceedings of the Darwinism Symposium held at Southern Methodist University, 26-28 March 1992), pp. 79-94. Dallas: Foundation for Thought and Ethics, 1994. 

“On the Very Possibility of Intelligent Design.” In The Creation Hypothesis, edited by J. P. Moreland, pp. 113-138. Downers Grove: InterVarsity Press, 1994.
“What Every Theologian Should Know about Creation, Evolution, and Design.” Princeton Theological Review 2(3), 1995: 15-21. 

“Transcendent Causes and Computational Miracles.” In Interpreting God’s Action in the World (Facets of Faith and Science, volume 4), edited by J. M. van der Meer. Lanham: The Pascal Centre for Advanced Studies in Faith and Science/University Press of America, 1996. 

“The Problem of Error in Scripture.” Princeton Theological Review 3(1) (double issue), 1996: 22-28.
“Teaching Intelligent Design as Religion or Science?” Princeton Theological Review 3(2), 1996: 14-18. 

“Schleiermacher’s Metaphysical Critique of Miracles.” Scottish Journal of Theology 49(4), 1996: 443-465.
“Christology and Human Development.” FOUNDATIONS 5(1), 1997: 11-18. 

“Intelligent Design as a Theory of Information” (revision of 1997 NTSE conference paper). Perspectives on Science and Christian Faith 49(3), 1997: 180-190.
“Fruitful Interchange or Polite Chitchat? The Dialogue between Theology and Science” (co-authored with Stephen C. Meyer). Zygon 33(3), 1998: 415-430. 

“Mere Creation.” In Mere Creation: Science, Faith, and Intelligent Design.
“Redesigning Science.” In Mere Creation: Science, Faith, and Intelligent Design.
“Science and Design.” First Things no. 86, October 1998: 21-27.
“Reinstating Design within Science.” Rhetoric and Public Affairs 1(4), 1998: 503-518. 

“Signs of Intelligence: A Primer on the Discernment of Intelligent Design.” Touchstone 12(4), 1999: 76-84.
“Are We Spiritual Machines?” First Things no. 96, October 1999: 25-31.
“Not Even False? Reassessing the Demise of British Natural Theology.” Philosophia Christi 2nd series, 1(1), 1999: 17-43. 

“Naturalism and Design.” In Naturalism: A Critical Analysis, edited by William Lane Craig and J. P. Moreland (London: Routledge, 2000).
“Conservatives, Darwin & Design: An Exchange” (co-authored with Larry Arnhart and Michael J. Behe). First Things no. 107 (November 2000): 23-31. 

“The Third Mode of Explanation.” In Science and Evidence for Design in the Universe, edited by Michael J. Behe, William A. Dembski, and Stephen C. Meyer (San Francisco: Ignatius, 2000).
“The Mathematics of Detecting Divine Action.” In Mathematics in a Postmodern Age: A Christian Perspective, edited by James Bradley and Russell Howell (Grand Rapids, Mich.: Eerdmans, 2001). 

“The Pragmatic Nature of Mathematical Inquiry.” In Mathematics in a Postmodern Age: A Christian Perspective, edited by James Bradley and Russell Howell (Grand Rapids, Mich.: Eerdmans, 2001).
“Detecting Design by Eliminating Chance: A Response to Robin Collins.” Christian Scholar’s Review 30(3), Spring 2001: 343-357. 

“The Inflation of Probabilistic Resources.” In God and Design: The Teleological Argument and Modern Science, edited by Neil Manson (London: Routledge, forthcoming 2003).
“Can Evolutionary Algorithms Generate Specified Complexity?” In From Complexity to Life, edited by Niels H. Gregersen, foreword by Paul Davies (Oxford: Oxford University Press, 2002). 

“Design and Information.” To appear in Detecting Design in Creation, edited by Stephen C. Meyer, Paul A. Nelson, and John Mark Reynolds.
“Why Natural Selection Can’t Design Anything.” Progress in Complexity, Information, and Design 1(1), 2002: iscid.org/papers/Dembski_WhyNatural_112901.pdf 

“Random Predicate Logic I: A Probabilistic Approach to Vagueness.” Progress in Complexity, Information, and Design 1(2-3), 2002: www.iscid.org/papers/Dembski_RandomPredicate_072402.pdf
“Another Way to Detect Design?” Progress in Complexity, Information, and Design 1(4), 2002: iscid.org/papers/Dembski_DisciplinedScience_102802.pdf
“Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr.” Progress in Complexity, Information, and Design 1(4), 2002: www.iscid.org/papers/Dembski_ResponseToOrr_010703.pdf 

“The Chance of the Gaps.” In God and Design: The Teleological Argument and Modern Science, edited by Neil Manson (London: Routledge, forthcoming 2003). 

Short Contributions:

“Reverse Diffusion-Limited Aggregation.” Journal of Statistical Computation and Simulation 37(3&4), 1990: 231-234.
“The Fallacy of Contextualism.” Themelios 20(3), 1995: 8-11.
“The God of the Gaps.” Princeton Theological Review 2(2), 1995: 13-16.
“The Paradox of Politicizing the Kingdom.” Princeton Theological Review 3(1) (double issue), 1996: 35-37. 

“Alchemy, NK Boolean Style” (review of Stuart Kauffman’s At Home in the Universe). Origins & Design 17(2), 1996: 30-32.
“Intelligent Design: The New Kid on the Block.” The Banner 133(6), 16 March 1998: 14-16. 

“The Intelligent Design Movement.” Cosmic Pursuit 1(2), 1998: 22-26.
“The Bible by Numbers” (review of Jeffrey Satinover’s Cracking the Bible Code). First Things, August/September 1998 (no. 85): 61-64.
“Randomness.” In Routledge Encyclopedia of Philosophy, edited by Edward Craig. London: Routledge, 1998.
“The Last Magic” (review of Mark Steiner’s The Applicability of Mathematics as a Philosophical Problem). Books & Culture, July/August 1999. [Award: Evangelical Press Association, First Place for 1999 in the category “Critical Reviews.”]
“Thinkable and Unthinkable” (review of Paul Davies’s The Fifth Miracle). Books & Culture, September/October 1999: 33-35.
“The Arrow and the Archer: Reintroducing Design into Science.” Science & Spirit 10(4), 1999 (Nov/Dec): 32-34, 42.
“What Can We Reasonably Hope For? – A Millennium Symposium.” First Things no. 99, January 2000: 19-20.
“Because It Works, That’s Why!” (review of Y. M. Guttmann’s The Concept of Probability in Statistical Physics). Books & Culture, March/April 2000: 42-43.
“The Design Argument.” In The History of Science and Religion in the Western Tradition: An Encyclopedia, edited by Gary B. Ferngren (New York: Garland, 2000), 65-67.
“The Limits of Natural Teleology” (review of Robert Wright’s Nonzero: The Logic of Human Destiny). First Things no. 105 (August/September 2000): 46-51.
“Shamelessly Doubting Darwin.” American Outlook (November/December 2000): 22-24.
“Intelligent Design Theory.” In Religion in Geschichte und Gegenwart, 4th edition, edited by Hans Dieter Betz, Don S. Browning, Bernd Janowski, and Eberhard Jüngel. Tübingen: Mohr Siebeck.
“What Have Butterflies Got to Do with Darwin?” Review of Bernard d’Abrera’s Concise Atlas of Butterflies. Progress in Complexity, Information, and Design 1(1), 2002: www.iscid.org/papers/Dembski_BR_Butterflies_122101.pdf
“Detecting Design in the Natural Sciences.” Natural History 111(3), April 2002: 76.
“The Design Argument.” In Science and Religion: A Historical Introduction, edited by Gary B. Ferngren (Baltimore: Johns Hopkins Press, 2002), 335-344.
“How the Monkey Got His Tail.” Books & Culture, November/December 2002: 42 (book review of S. Orzack and E. Sober, Adaptationism and Optimality).
“Detecting Design in the Natural Sciences.” To appear in Russian translation in Poisk. Expanded version of the Natural History article. 

Work in Progress:
Debating Design: From Darwin to DNA, co-edited with Michael Ruse; an edited collection representing Darwinian, self-organizational, theistic evolutionist, and design-theoretic perspectives; book under contract with Cambridge University Press.
The Design Revolution: Making a New Science and Worldview, cultural and public policy implications of intelligent design; book under contract with InterVarsity Press.
Freeing Inquiry from Ideology: A Michael Polanyi Reader, co-edited with Bruce Gordon; an anthology of Michael Polanyi’s writings; book under contract with InterVarsity Press.
Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing, edited collection of essays by intellectuals who doubt Darwinism on scientific and rational grounds; book under contract with Intercollegiate Studies Institute.
The End of Christianity, coauthored with James Parker III, book under contract with Broadman & Holman.
Of Pandas and People: The Intelligent Design of Biological Systems, academic editor for third updated edition, coauthored with Michael Behe, Percival Davis, Dean Kenyon, and Jonathan Wells.
Being as Communion: The Metaphysics of Information, Templeton Book Prize project, proposal submitted to Ashgate publishers for series in science and religion.
The Patristic Understanding of Creation, co-edited with Brian Frederick; anthology of writings from the Church Fathers on creation and design. 

Notes 

[1]Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell 92 (8 February 1998): 291.

[2]Adam Wilkins, “A Special Issue on Molecular Machines,” BioEssays 25(12) (December 2003): 1146.

[3]This definition of irreducible complexity is William Dembski’s refinement and generalization of Michael Behe’s original definition. For Behe’s original definition see Darwin’s Black Box: The Biochemical Challenge to Evolution (New York: Free Press, 1996), 39. For Dembski’s refinement and generalization, see No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (Lanham, Md.: Rowman and Littlefield, 2002), sec. 5.9.

[4]Charles Darwin, On the Origin of Species, facsimile 1st ed. (1859; reprinted Cambridge, Mass.: Harvard University Press, 1964), 189.

[5]See Howard C. Berg, Random Walks in Biology, exp. ed. (Princeton: Princeton University Press, 1993), 134. Berg writes: “E. coli has receptors for oxygen and other electron acceptors, sugars, amino acids, and dipeptides. It monitors the occupancy of these receptors as a function of time. The probability that a cell will run (rotate its flagella counterclockwise) rather than tumble (rotate its flagella clockwise) depends on the time rate of change of receptor occupancy. We know from responses of cells to short pulses of chemicals delivered by micropipettes that this measurement spans some 4 sec. A cell compares the occupancy of a given receptor measured over the past second—the aspartate receptor is the only receptor that has been studied in detail—with that measured over the previous 3 sec and responds to the difference. Now given rotational diffusion, E. coli wanders off course about 60 degrees in 4 sec. If measurements of differences in concentration took much longer than this, they would not be relevant, because cells would change course before the results could be applied. On the other hand, if these measurements were made on a much shorter time scale, their precision would not be adequate. E. coli counts molecules as they diffuse to its receptors, and this takes time. The relative error (the standard deviation divided by the mean) decreases with the square root of the count. Thus in deciding whether life is getting better or worse, E. coli uses as much time as it can, given the limit set by rotational Brownian movement.”
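
Berg’s square-root law can be illustrated with a short simulation. The sketch below is not from Berg’s text: it assumes, as is standard for diffusion-limited counting, that the number of molecules a receptor registers in a fixed measurement window is Poisson distributed, and it checks that the relative error (standard deviation divided by the mean) falls off as one over the square root of the mean count.

import numpy as np

# Illustrative check of Berg's counting argument (assumption: Poisson-
# distributed counts, a standard model for diffusion-limited counting).
# The relative error (std/mean) should scale as 1/sqrt(mean count).
rng = np.random.default_rng(seed=0)

for mean_count in (10, 100, 1_000, 10_000):
    # Simulate many independent measurement windows with this mean count.
    counts = rng.poisson(lam=mean_count, size=100_000)
    relative_error = counts.std() / counts.mean()
    print(f"mean count {mean_count:>6}: relative error {relative_error:.4f}  "
          f"(1/sqrt(N) = {mean_count ** -0.5:.4f})")

Doubling the measurement time doubles the expected count and so shrinks the relative error only by a factor of the square root of two; past roughly 4 seconds, however, rotational diffusion has already turned the cell, so the extra precision would describe a direction the cell no longer faces. This is the trade-off Berg describes.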

[6]Michael Behe, Darwin’s Black Box (New York: Free Press, 1996), 69–73.

[7]John Postgate, The Outer Reaches of Life (Cambridge: Cambridge University Press, 1994), 160.

[8]For reviews in the popular press see James Shreeve, “Design for Living,” New York Times, Book Review Section (4 August 1996): 8; Paul R. Gross, “The Dissent of Man,” Wall Street Journal (30 July 1996): A12; and Boyce Rensberger, “How Science Responds When Creationists Criticize Evolution,” Washington Post (8 January 1997): H01. For reviews in the scientific journals see Jerry A. Coyne, “God in the Details,” Nature 383 (19 September 1996): 227–228; Neil W. Blackstone, “Argumentum Ad Ignorantiam,” Quarterly Review of Biology 72(4) (December 1997): 445–447; and Thomas Cavalier-Smith, “The Blind Biochemist,” Trends in Ecology and Evolution 12 (1997): 162–163.

[9]See John Catalano’s web page titled “Behe’s Empty Box”: www.world-of-dawkins.com/Catalano/box/behe.htm (last accessed October 21, 2003).

[10]Franklin Harold, The Way of the Cell: Molecules, Organisms and the Order of Life (Oxford: Oxford University Press, 2001), 205. James Shapiro, “In the Details … What?” National Review (16 September 1996): 62–65.

[11]Darwin, Origin of Species, 82.

[12]Thomas D. Schneider, “Evolution of Biological Information,” Nucleic Acids Research 28(14) (2000): 2794.

[13]Ibid.

[14]Darwin, Origin of Species, 189.

[15]H. Allen Orr, “Darwin v. Intelligent Design (Again),” Boston Review (December/January 1996-1997): 29.

[16]Ibid., 29.

[17]See L. Nguyen, I. T. Paulsen, J. Tchieu, C. J. Hueck, M. H. Saier Jr., “Phylogenetic Analyses of the Constituents of Type III Protein Secretion Systems,” Journal of Molecular Microbiology and Biotechnology 2(2) (2000):125–44.

[18]Kenneth R. Miller, “The Flagellum Unspun: The Collapse of ‘Irreducible Complexity’,” in W. Dembski and M. Ruse, eds., Debating Design: From Darwin to DNA (Cambridge: Cambridge University Press, forthcoming 2004).

[19]Ian Musgrave, “Evolution of the Bacterial Flagellum,” in M. Young and T. Edis, eds., Why Intelligent Design Fails: A Scientific Critique of the New Creationism (Piscataway, N.J.: Rutgers University Press, forthcoming 2004).

[20]Nicholas Matzke, “Evolution in (Brownian) Space: A Model for the Origin of the Bacterial Flagellum,” published online at www.talkreason.org/articles/flag.pdf (last accessed December 1, 2003).

[21]Ibid.

[22]See Kenneth R. Miller, Finding Darwin’s God (New York: HarperCollins, 1999), ch. 5.

[23]Harold, Way of the Cell, 205.

[24]Lynn Margulis and Dorion Sagan, Acquiring Genomes: A Theory of the Origins of Species (New York: Basic Books, 2002), 103.

[25]William Dembski, The Design Inference: Eliminating Chance through Small Probabilities (Cambridge: Cambridge University Press, 1998); William Dembski, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (Lanham, Md.: Rowman and Littlefield, 2002).

[26]Richard Dawkins, The Blind Watchmaker (New York: Norton, 1987), 1.

[27]Francis Crick, What Mad Pursuit (New York: Basic Books, 1988), 138.

[28]Fred Hoyle, The Intelligent Universe (New York: Holt, Rinehart and Winston, 1984), 19.

[29]See, for instance, Dembski, No Free Lunch, sec. 5.10.

[30]Ibid. See also Angus Menuge, Agents Under Fire: Materialism and the Rationality of Science (Lanham, Md.: Rowman and Littlefield, forthcoming 2004), ch. 4.

[31]Douglas Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors,” Journal of Molecular Biology 301 (2000): 585–595.

[32]Hubert P. Yockey, Information Theory and Molecular Biology (Cambridge: Cambridge University Press, 1992), 220–221.

[33]Note that the origination formula is an inequality rather than an equality because it can be refined with additional terms. For instance, consider the retention probability p_reten, the probability that items available at the right time and in the right place stay at the right place long enough (i.e., are retained) for the bacterial flagellum (or whatever irreducibly complex system is in question) to be properly assembled. The retention probability is conditional on the availability, synchronization, and localization probabilities and could be inserted as a factor after these terms in the origination inequality.
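
To make the bookkeeping in this note explicit, the bound can be written out as a product of the factors it names. This is a sketch of the form only: p_orig is shorthand introduced here for the origination probability, while the remaining symbols follow the note’s own naming. In LaTeX,

p_{\mathrm{orig}} \;\le\; p_{\mathrm{avail}} \times p_{\mathrm{synch}} \times p_{\mathrm{local}} \times p_{\mathrm{reten}}

Since each factor is a probability and hence at most 1, inserting a further factor such as p_reten can only lower the right-hand side; refining the inequality with additional terms therefore tightens rather than loosens the bound.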

[34]Carl Sagan, Cosmos (New York: Random House, 1980), 299.

[35]Michael Crichton, “Aliens Cause Global Warming,” Caltech Michelin Lecture, January 17, 2003, available online at www.crichton-official.com/speeches/speeches_quote04.html (last accessed January 3, 2004).

[36]Darwin, Origin of Species, 189.

[37]Robert Koons, “The Check Is in the Mail: Why Darwinism Fails to Inspire Confidence,” in W. A. Dembski, ed., Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing (Wilmington, Del.: ISI Books, 2004).

[38]Darwin, Origin of Species, 2.
