Reports of the National Center for Science Education | Volume 20 | No. 4 | July-August 2000

Design and its Critics: Yet Another ID Conference

Concordia University in Mequon, Wisconsin, was the site of another "Intelligent Design" conference held on June 22-24, 2000. Under the rubric "Design and Its Critics" (DAIC), the conference brought together the leading lights of the "Intelligent Design" (ID) movement with several critics from a variety of disciplines in the natural sciences, social sciences, and humanities. There was a variety of plenary and concurrent sessions throughout the weekend, so we are able to present only the highlights of the conference.

Thursday, June 22, 2000

The opening debate was on Thursday night. Stephen Meyer and Michael Shermer shared the stage. Meyer's talk was entitled "What do good scientific theories do?" According to Meyer, they explain data in the natural world and make predictions about the natural world — particularly predictions that are useful for future scientific research. Because explanation is not equivalent to prediction, Meyer argued, historical theories can accomplish only the first task; they can retrodict but not predict. ID also accomplishes the first task: it has explanatory power. Meyer's example was the concept of irreducible complexity introduced in Behe's discussion of the bacterial flagellum.

Meyer went on to claim that ID provides a better explanation than evolutionary theory in several instances. First, Meyer argued that ID provides a better explanation of the origin of "information", in particular the origin of DNA, than does evolution. Next he claimed that ID provides a better explanation of the Cambrian Explosion — the sudden appearance of new phyla in the fossil record 570 million years ago — because new organisms require a new information code. According to Meyer, this situation does not fit a "Darwinian" model, because the mere shuffling of genes is not sufficient to produce this variety (though he provided no support for this assertion). In Meyer's view, the shortcomings of evolutionary models confirm ID by default.

Meyer rehearsed the standard mistaken creationist critiques based on biochemical complexities and specificities of modern organisms, but added an interesting — if misconstrued — discussion of the origin of DNA. Since DNA provides the instruction set for proteins, Meyer asked, what is the causal explanation of the DNA code? Citing Stanley Miller's experiments as proof that the prebiotic atmosphere was unsuitable for sustaining life, Meyer concluded that there was no natural prebiotic source of the information encoded in DNA. Any precursor molecules would be subject to interfering cross-reactions, and the limited time and resources combined with the required sequence specificity (for a fully functioning 100-amino-acid protein) would have precluded de novo synthesis. Meyer tried to apply a version of Dembski's "explanatory filter", arguing that the low probability and the complex specification of the DNA molecule require us to conclude that it had been designed.
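For readers unfamiliar with the filter, its published form is a three-step decision procedure: attribute an event to natural law if it is highly probable, to chance if it is not wildly improbable, and to design only if it is both extremely improbable and "specified" (matching an independently given pattern). The sketch below is our own schematic rendering of that logic, not anything Dembski presented; the function signature, the threshold on the law branch, and the inputs are hypothetical illustrations, though the 10^-150 cutoff is the "universal probability bound" that Dembski himself proposes.

```python
# A schematic rendering of Dembski's "explanatory filter" (our sketch, not his).
# The regularity -> chance -> design ordering follows published descriptions of
# the filter; the inputs and the 0.5 threshold are hypothetical illustrations.

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's proposed small-probability cutoff

def explanatory_filter(prob_under_law, prob_by_chance, is_specified):
    """Classify an event as 'regularity', 'chance', or 'design'."""
    if prob_under_law > 0.5:
        return "regularity"  # highly probable under some natural law
    if prob_by_chance > UNIVERSAL_PROBABILITY_BOUND:
        return "chance"      # not improbable enough to eliminate chance
    if is_specified:
        return "design"      # tiny probability plus an independent pattern
    return "chance"          # improbable but unspecified events default to chance
```

Even in this schematic form, the filter's weakness is visible: every probability must be computed against some hypothesis, and Meyer's application assumes the relevant chance hypothesis is de novo random assembly, which is precisely what no evolutionary account of the origin of DNA proposes.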

The main focus of the rest of Meyer's presentation was the supposed evidence for the design of DNA — the information content of living things. Meyer argued that natural selection cannot explain the origin of information, because it presupposes a freely replicating system — one that operates on DNA and protein (of course, natural selection is not concerned with, nor does it try to explain, the origin of "information"). Furthermore, Meyer argued that there are no forces in evolutionary theory to explain the sequential order of DNA, apparently because he believes that, according to evolutionary biologists, nucleotide base organization should be random. Of course, in most organisms many repetitive sequences in DNA, noncoding introns ("nonsense" DNA), and "junk" DNA are not constrained by strict sequential relationships. These random elements constitute a very large fraction of the genome.

Meyer constructed a straw man by focusing on DNA and fully functioning proteins. No working evolutionary scientist believes that life originally appeared fully equipped with the present complex DNA and protein repertoire. One leading theory of early life is the RNA world hypothesis, about which Meyer showed himself to be thoroughly misinformed during the question period, falsely claiming that RNA can neither replicate nor form peptide bonds. The question period ended before he could be pressed further on this topic, but several published papers show that Meyer was attacking straw-man arguments about DNA, RNA, and the origin of information (Zhang and Cech 1997; Wright and Joyce 1997).

Michael Shermer took the stage next. His presentation was more theatrical (complete with a laser pointer that projected the shape of a UFO). He made some very good points, but I do not think that most of the audience was sufficiently engaged by his presentation. Natural selection, Shermer said, preserves gains and eliminates mistakes; "Intelligent Design" assumes that the current function of structures in living things is the same as the original function. He argued that ID is not useful scientifically because it leads to an investigative dead end — the actions of an intelligent designer.

Friday, June 23, 2000

The first plenary session was entitled "Design in the Biological Sciences". Michael Behe spoke first. He read from a prepared text, saying that he has learned that he must be particularly careful in what he says. His main point was that there are irreducibly complex (IC) structures — structures that could not have been produced by numerous successive small changes without loss of function. Natural selection, which Behe restricts to such small, successive changes, would be unable to explain the existence of such structures. His examples of IC structures were the mousetrap — a 5-piece machine that is rendered nonfunctional by the removal of any piece — and the bacterial flagellum — a complicated, molecular "machine" that may be the biochemical equivalent of the mousetrap.

Behe next responded to Ken Miller's Finding Darwin's God. He focused on the lac operon — a genetic sequence in E. coli bacteria that regulates the production of 3 enzymes necessary for the digestion of lactose. If any component of this multipart system is eliminated, argued Behe, the system becomes nonfunctional. Although Kenneth Miller had cited experiments showing that when one of the lac operon genes — the β-galactosidase gene — is knocked out, bacteria can re-acquire this function, Behe disputed this conclusion because, he said, it was necessary to generate an artificial system using "intelligent intervention" that added other components to the system before the function could be restored.

Behe's next example of IC was the blood-clotting cascade. Behe illustrated the complexity of this system and claimed that removal of any of the components is highly deleterious and causes the whole process to collapse. Citing research with transgenic organisms, Behe argued that these systems are irreducibly complex, because they contain many parts that must be well coordinated with one another to function — therefore, they could not have arisen by natural selection working through gradual, Darwinian mechanisms.

Next up was Scott Minnich, whose talk focused on research on the bacterial flagellum. He gave a very nice, purely scientific talk on research in the field, which seemed out of place here because a substantial part of it contradicted the assertion of irreducible complexity. For example, he discussed the virulence plasmid found in the bacterium that causes bubonic plague, which, as it turns out, contains several genes that are highly homologous to those that code for flagellar proteins. In the plasmid, these genes code for proteins that make up structures that drill holes into host cells and inject them with poison. Here we have an example in which one set of genes codes for flagellar proteins, while a homologous subset codes for an entirely different structure (a hole-drilling apparatus). In an IC structure, if a single component is removed, the structure loses its specific function. But complex structures need not lose all physiological function when one component is changed. The exaptation of an existing structure — such as occurs in the protein products of the virulence plasmid — to a structure performing a new function — such as the flagellum — is precisely the sort of change evolution would predict. Minnich's example endangers only the straw-man position that these cellular structures must preserve their existing functions as their protein composition or sequence is modified.

Ken Miller spoke next, presenting a step-by-step, systematic critique of Behe's argument. First he pointed out, in contrast to the assumptions of IC, that no scientist proposes that complex macromolecular systems spontaneously arose in their currently functioning state. Instead, individual components of the larger system probably had other functions and, through gene duplication or other mechanisms, took on new functions. These processes allowed components to acquire new roles and to interact with other molecules, producing intermediates with novel functions.

Miller spent a great deal of time describing how the flagellum might have evolved, providing numerous examples of organisms that illustrate the mechanisms and processes that he proposed. He also gave examples of flagella that have some components missing but still function. The example that made the best impression on the audience was that of eel sperm. The missing components make the flagellum appear nonfunctional, said Miller, but, he reminded the audience, since these sperm are very good at making baby eels, the flagellum clearly must function — despite its having "missing" parts.

Next Miller discussed the Krebs cycle — a series of chemical reactions common to living things that extracts energy from carbohydrate molecules — showing how a variety of organisms use different parts of the cycle for different functions. All the while, Miller reminded the audience that, according to IC, the loss or alteration of one component of an IC system makes the system nonfunctional, whereas complex systems in fact evolve by co-opting pre-existing, functioning components to serve new functions in new ways.

Miller also presented a bibliographic search (on Medline) showing that there have been only 2 articles on IC in the peer-reviewed literature since 1966, neither one of which appeared in a peer-reviewed scientific research journal. Finally, he took on the central "commonsense" analogy of IC — the irreducibly complex mousetrap. He demonstrated fully functional 5-part, 4-part, 3-part, 2-part, and even 1-part mousetraps, concluding by pointing out how, as in biology, a mousetrap that serves one function can be adapted for others (he cited the mousetrap key chain and the mousetrap tie tack).

A question-and-answer period followed the presentations. As might be expected, Behe took exception to many of Miller's criticisms, denying that he had ever said the things for which Miller took him to task. This is a dangerous tactic in the digital age when your opponent is armed with a laptop computer. Miller was able to provide precise quotations and citations from Behe's work to support his claims. Behe was backpedaling throughout the entire session, and not many questions were asked of the speakers.

During this session, I (JO) introduced myself as a population geneticist from Rush Medical Center in Chicago and said that I had 3 related observations that led to a practical question. First, my research focuses on the identification of genes responsible for complex autoimmune diseases. Evolutionary theory provides the basis for the genetic algorithms that I use in my research. Second, 2 weeks earlier I had visited a pharmaceutical company that also uses evolutionary algorithms to aid in the identification of different alleles affecting drug-metabolizing enzymes. Third, I recently met a researcher at Marquette University who uses evolutionary algorithms to aid in the identification of amino-acid residues critical for the function of a very complex protein. My question — an open question to both of the ID proponents — was: As practical people, looking for the fastest, most efficient method to reach our goals, how would Intelligent Design help us in our endeavors? What would ID predict in these different systems?

Behe answered the question by commenting that ID would tell us where to look, and perhaps which systems would be irreducibly complex. I replied that his answer really did not answer my question. In the real world of scientific research, I reiterated, evolutionary theory provides algorithms that suggest how to go about finding what we are looking for; these algorithms are used successfully in many fields — including by pharmaceutical companies that are primarily interested in making money. How would ID provide a superior model for accomplishing these goals? Behe answered by mumbling something about needing to see what algorithms I was using. Then the session was closed.
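To make the question concrete: the evolutionary algorithms at issue implement exactly the logic of variation, selection, and inheritance. Below is a minimal genetic-algorithm sketch; the bit-string encoding and the toy fitness function are hypothetical stand-ins, since real applications such as allele identification or protein-residue analysis use domain-specific encodings and fitness measures.

```python
import random

# Minimal genetic-algorithm sketch. The toy fitness function (count the 1-bits)
# is a hypothetical stand-in for a domain-specific score, such as how well a
# candidate set of alleles explains disease data.

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.01

def fitness(genome):
    return sum(genome)  # toy objective: maximize the number of 1s

def mutate(genome):
    # each bit flips independently with a small probability (mutation)
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point recombination
    return a[:cut] + b[cut:]

# random initial population of candidate solutions
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # selection: fitter half survives
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))       # best score found
```

Every step of this loop is Darwinian: random variation, differential reproduction, inheritance. Nothing in ID suggests how to modify any line of it, which was the point of the question.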

My question sparked discussion afterwards, and I had the opportunity to talk with quite a few different people. The consensus among them (with the exception of one oddball who basically contended that we are all de-evolving into the blackness of Hell) was that my question went to the crux of whether ID has anything useful to offer the scientific community. Scientific theories not only explain and make sense of our observations, but also provide questions and predictions that support useful and productive research.

The claim that ID has only the power to retrodict is an evasive maneuver that may sound good in a sound bite. But a theory that only retrodicts is scientifically worthless and does not merit the title of theory. ID is based entirely on the assumption that when science reaches a stumbling block, the appropriate response is to throw up one's hands, say "I don't understand how this could be put together naturally", and claim that it was intelligently designed. In this way ID is actually more scientifically bankrupt than young-earth creationism, which at least makes testable predictions. ID is invoked only when regular science gets stuck (for the moment).

Saturday, June 24, 2000

The main event of the final day of the conference was the talk of ID's undisputed star, William Dembski. Much of the presentation was devoted to an exposition of Dembski's method for detecting design — "The Design Inference" (TDI). Dembski had prepared enough material for several presentations; because there was far too much for the format and time allowed, he skipped over numerous details and omitted connections among important ideas. The result was a presentation that appeared disorganized and disjointed — the lasting impression is of a series of symbolic statements meant to show the steps in the explanatory filter that Dembski proposes as the basis for TDI. However, the presentation was so abridged that these formulas were neither well explained nor clearly related to TDI. RNCSE readers interested in Dembski's method for detecting design should consult Wesley Elsberry's recent review-essay in RNCSE (Elsberry 1999).

The second major event on Saturday was a panel discussion entitled "Prospects for Design". The participants were Paul Nelson, Edward Davis, Kelly Smith, and Lenny Moss. Nelson told the audience that the real issue is to provide an argument against methodological naturalism (MN), which he called one of the worst philosophies of science. Nelson dismissed MN as absolute rubbish and presented himself as someone interested in getting at the truth about the world. He said that his purpose was to show the limits of MN, not to set out the future direction of ID.

Nelson provided a definition of MN based on the characterization by the National Academy of Sciences (NAS): "The statements of science must invoke only natural things and processes" (NAS 1998: 42). Nelson's question to the audience was, "Should this be so? Should we separate the natural from the supernatural?" Nelson argued that we should discard the supernatural-natural distinction in favor of the intelligent-natural distinction. We should, Nelson said, institute a research program for intelligent causation — but true to his promise, Nelson did not suggest what such a program would entail.

Most of Nelson's presentation was an exploration of how MN supposedly limits our ability to find out what is true. In Nelson's example, a homicide detective faced with a dead body must consider 4 possible explanations in order to determine the real cause of death. Two of these — natural causes and accidents — require no intelligent agent, but the other 2 — suicide and homicide — are the actions of just such an agent. According to Nelson, MN would limit the detective's investigation to death by natural causes or accident, leaving out suicide and homicide. In the real world, Nelson argued, even if death never occurred by suicide or homicide, they would remain causal possibilities — that is, they could occur — and unless the detective considers and rules them out, he cannot be justifiably confident that he has solved the case. Likewise, Nelson argued, we should not exclude intelligent design from the scientific "toolkit".

According to MN, Nelson told the audience, the tools in the scientific toolkit are natural laws (Nelson called them "physical" laws) and chance. Nelson argued that a third tool, intelligent design, belongs in the toolkit of science too. Even if we never need to invoke ID, Nelson told the audience, a naturalistic interpretation of evidence can never be completely justified unless ID is considered and ruled out. Even Darwin lived and worked in an environment with all 3 tools, said Nelson, and it did no harm to his science. Likewise, Nelson assured us, it will do no harm for us to consider ID when the evidence warrants it.

In summary, Nelson argued that science cannot discover what it excludes a priori. If science is a truth-seeking endeavor (as he assumes), then MN belongs on the rubbish heap of history because it limits scientists to a flawed investigative process that fails to include all the explanatory possibilities.

Edward Davis spoke next. He said that he accepts that there is purpose in the universe, although he has concerns about how the issues are framed in the current models of ID. He chose to explore how we understand the meaning of apparent design in Nature through recent research that he has been conducting on the works of Robert Boyle — the 17th-century chemist and natural philosopher best known for his law describing the behavior of gases and for his use of controlled experiments.

Although Boyle argued for "design" in the natural world, Davis pointed out that this design represented neither ongoing tinkering by an intelligent agent nor what passed for the contemporary version of the anthropic principle — a philosophy of science that assumed that Nature was constructed benevolently to promote human well-being. Instead, although Boyle was convinced that experimental science would demonstrate the existence of God, he felt that the route to this demonstration was through an understanding of the mechanics of the way things really worked in the natural world. In Boyle's view, God works through the "mechanisms" that show His presence and actions. Boyle felt that the scientific process is short-circuited by teleological explanations, even if there is an ultimate purpose to the universe. He thus insisted on naturalistic explanations for natural phenomena first and foremost whenever possible.

Although he told the audience that he agreed that evidence of purpose is found in the natural world, Davis argued that it is neither appropriate nor productive to look for it in the same ways and places that one looks for evidence of natural processes. Davis told the audience that he believes "in a God who is sovereign over the laws of Nature". However, he noted, the world is not full of items stamped "Made by God"; God is more subtle than that. So the evidence for God's purposes may not be the same physical evidence that we find in the natural phenomena that scientists study — say, the behavior of gases under pressure or mutation rates.

The most serious problem with ID, Davis told the conference, is that it appears to make the existence of God (the unnamed "intelligent designer") an additional hypothesis to be tested scientifically. However, this runs counter to the central understanding of God in Christian and Jewish traditions. Davis told the audience that the central claim of Christianity, for example, is that we have actually seen God directly, and when we did not like what we saw, we killed him — then he surprised us. Davis said that we need to incorporate the interaction between God and the world into our discourse in this way, not as specific scientific hypotheses about individual events and structures.

The next speaker was Kelly Smith. He presented a "blueprint for respectability" — an outline of how ID could earn itself a place at the scientific table. Smith's remarks are included elsewhere in this issue. In summary, he outlined a program that would turn ID from a fringe idea into a respectable theory in the sciences, with all the benefits that respectability offers — respect, funds, access to classrooms, and a place in mainstream textbooks and journals. This was the route taken by all successful challengers to the scientific status quo. But he doubted that ID proponents would take his advice.

The last speaker in the panel was Lenny Moss, who argued that the key issue under discussion was the nature of Nature. According to Moss, ID assumed a very narrow notion of Nature, defining its position by its opposition to the viewpoints of a few prominent proponents of philosophical naturalism, such as Richard Dawkins and Daniel Dennett. Moss argued that ID, if it is to be successful, needs to define itself in its own terms, not merely in opposition to what are extreme positions even among natural scientists.

Taking Dawkins and Dennett to task is a good tactical approach, Moss told the conference, for it allows the proponents of ID to press the naturalistic explanation and show where it is in trouble — a debunking strategy. However good a tactic it may be to oppose what he called the strict neo-Darwinism of Dawkins and Dennett, Moss said, it is nonetheless a bad strategy: to accept that naturalism is restricted to the premises of neo-Darwinism "sells Nature down the river" by conceding the particular, limited version of naturalism that Dawkins and Dennett espouse. The most fruitful answer to a dogmatic metaphysics (like that of Dawkins or Dennett), said Moss, is not another dogmatism but a pluralistic approach. Reacting against a strict neo-Darwinism with a dogmatic approach — whether ID or some other dogmatism — leads to bad biology. Instead, Moss argued for a broader perspective for both ID and naturalism.

In considering the future prospects for ID, there is, Moss said, good news and bad news. As for the good news, Moss argued that science is at a historic juncture — a new "crisis" in the struggle to resolve our "intuition for life". He traced our understanding of Nature from the 17th century, when science shifted from viewing natural events and organisms as ends unto themselves to viewing them as the outcome of other natural processes and interactions. This change culminated in the 20th century when, Moss argued, we came to understand natural events and organisms as only the outcome of natural processes and their interactions. One aspect of this important historic juncture is the Human Genome Project.

Moss told the conference that there are promissory notes that need to be called in — things that biology has promised and not yet delivered. It is time to move beyond the 17th-century view of matter and the physical world to a new scientific understanding that can do justice to the agency of life. This "new naturalism" is one that would allow a pluralistic view of agency in the emergence and direction of life, and one that may make substantial contributions to our understanding of Nature. In reviving a sort of preformationist, vitalistic approach, ID may figure into Moss's "new naturalism".

The bad news for ID is that it seems to be mired in its opposition to a view of the nature of Nature — espoused by Dawkins and Dennett especially — that is more restrictive than the view held by most scientists. Focusing on refuting this more restricted view threatens to push ID onto a path where it will remain tangential and irrelevant to the questions that active scientists pursue and find meaningful.

Moss's example of the new way for science to proceed is taken from the work of philosopher Immanuel Kant. Kant allowed us to have it both ways, Moss said: we can take it as a given that there is an organization in life while at the same time resisting the temptation to try to explain the purpose or first principle of everything. In this way, the "new naturalism" that Moss proposes does not require, presuppose, or even benefit from atheism. In contrast, many in the ID movement seem to be opposed to evolution because Dawkins and Dennett portray it as essential to supporting atheism.

The Big Tent

Throughout the conference there were numerous roundtable discussions, presented papers, and informal discussions over meals and snacks. It was impossible to cover all of these events, and most were not included in the official record of the conference. The sessions we attended resembled the plenary sessions: Some were thoughtful and well-researched presentations of important questions and theoretical perspectives. Others were little more than standard anti-evolutionary fare, concluding that if evolution could not immediately explain some unusual finding or new discovery, then ID had to be true by default. But it was also clear that there were a number of very different ideas about what precisely intelligent design entailed.

The unspoken position of the ID creationists (IDCs) at the conference seemed to be to accept all criticisms of evolutionary theory as evidence that an intelligent agent of some sort was involved in the history of life and in the patterns of similarity and difference that biologists attribute to evolution. However, one of the hallmarks of most scientific meetings was absent: disagreement among proponents of different explanatory models. Young-earth creationists presented papers in breakout sessions without ever addressing the discrepancies between their models, in which organisms were created recently in their present forms, and the theistic evolution that Behe has claimed to accept, which would allow descent with modification from common ancestors over long periods of time, at least for structures that are not "irreducibly complex" (the verdict Behe pronounced on most of the examples that ID critics used to rebut his model).

DAIC showed the "big tent" strategy in operation. This approach makes IDC more inclusive in order to increase the impact of the assault on evolutionary theory from a broad base of support. This may also be why details were so often missing from the presentations at the plenary sessions. All the anti-evolutionists in attendance may agree that evolution is bad and that apparent design in the universe is caused by an intelligent agent, but they do not agree on the specifics of time, place, frequency, duration, or intensity of this extranatural intervention. The devil, as they say, is in the details.
By Jeff Otto with Andrew Petto
This version might differ slightly from the print publication.