

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
<meta http-equiv="Content-Language" content="en-us">
<meta name="ProgId" content="FrontPage.Editor.Document">
<meta name="GENERATOR" content="Microsoft FrontPage 6.0">
	
<title>Naturalism's Argument from Invincible Ignorance: A Response to Howard Van Till</title>
<base href="http://www.theism.net/">
</head>	

<body MARGINHEIGHT="0" MARGINWIDTH="0" TOPMARGIN="0" RIGHTMARGIN="0" leftmargin="0">

  <table border="0" cellpadding="0" cellspacing="0" width="100%">
	<tr>

		<td height="200%" background="images/bkg.gif" width="150">
		</td>

		<td valign="top">
		<style>
<!--
span.xsmall  { font-size: 6pt; font-family: Arial; color: #008000 }
.smalltext   { font-family: Arial; font-size: 6pt }
-->
</style>
<div align="left">
  <table border="0" cellpadding="0" cellspacing="0" style="border-collapse: collapse" bordercolor="#111111" id="AutoNumber1" bgcolor="#333333" width="100%">
    <tr valign="middle">
    <td align="left">
		<font size="2" face="Bookman Old Style" color="#FFCC00"><b>&nbsp;
        <a style="color: #FFCC00; font-weight: bold" href="../">home</a>&nbsp; |&nbsp;
		<a style="color: #FFCC00; font-weight: bold" href="../articleindex.asp">articles</a>&nbsp; |&nbsp;
		<a style="color: #FFCC00; font-weight: bold" href="../books/">books</a>&nbsp; |&nbsp;
		<a style="color: #FFCC00; font-weight: bold" href="../searchform.htm">search</a>&nbsp; |&nbsp;
		<a style="color: #FFCC00; font-weight: bold" href="mailto:webmaster@theism.net">webmaster</a></b>&nbsp;</font>
	</td>
    <td align="left">
	</td>
    <td>
		<div align="center">
		<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
		<b><font face="Tahoma" color="#FFCC00" size="2">Support Theism.net...</font></b><br>
		<input type="hidden" name="cmd" value="_xclick">
		<input type="hidden" name="business" value="donations@theism.net">
		<input type="hidden" name="item_name" value="Support Theism.net | Rational Theism!">
		<input type="hidden" name="cn" value="Comments for us?">
		<input type="hidden" name="currency_code" value="USD">
		<input type="hidden" name="tax" value="0">
		<input type="image" src="https://www.paypal.com/images/x-click-but04.gif" border="0" name="submit" alt="Make payments with PayPal - it's fast, free and secure!" width="62" height="31">
		</form>
		</div>
</td>
    <td bgcolor="#333333" align="center" valign="middle">
      	<form method="get" action="http://search.atomz.com/search/">
		<input type="hidden" name="sp-k" value=""><input type="hidden" name="sp-f" value="iso-8859-1"><input type="hidden" name="sp-a" value="sp0a018e00">
 		<p align="right">
 		<input size="25" name="sp-q"><br>
      <input type="submit" value="Site search"> </p>
		</form>
    </td>
    </tr>
  </table>
</div>
		<hr>
			<div align="left"><font face="arial, helvetica, tahoma">
			<blockquote>
<div align="center">
  <font size="4"><b>Naturalism's Argument from Invincible Ignorance:<br>
  A Response to Howard Van Till<br>
  </b></font>By William A. Dembski<br>
  <br>
</div>
Howard Van Till's review of my book <i>No Free Lunch</i> is available at the
AAAS Evolution Resources page -- <a href="http://www.aaas.org/spp/dser/evolution/perspectives/default.htm">http://www.aaas.org/spp/dser/evolution/perspectives/default.htm</a>.
The actual review is available as a PDF file at <a href="http://www.aaas.org/spp/dser/evolution/perspectives/vantillecoli.pdf">http://www.aaas.org/spp/dser/evolution/perspectives/vantillecoli.pdf</a>.
I respond to Van Till's review here.<br>
<br>
<br>
Howard Van Till's review of my book <i>No Free Lunch</i> exemplifies perfectly
why theistic evolution remains intelligent design's most implacable foe. Not
only does theistic evolution sign off on the naturalism that pervades so much of
contemporary science, but it justifies that naturalism theologically -- as
though it were unworthy of God to create by any means other than an evolutionary
process that carefully conceals God's tracks.<br>
<br>
Following David Griffin, Van Till distinguishes four types of naturalism. The
bottom line with all these types of naturalism is that science qua science must
treat nature as a causal nexus that is impervious to any empirically discernible
intelligent input from outside of nature. Naturalism, whether of the
metaphysical or merely methodological varieties, treats nature as complete in
terms of the causal principles inherent in it. Intelligent design, by contrast,
questions that completeness, arguing that there can be good reasons for thinking
that events happening in nature nonetheless lie beyond the capacities inherent
in nature.<br>
<br>
There is nothing strange or even counterintuitive about this claim so long as
naturalism has not, as C. S. Lewis warned, worked its way into our bones. Howard
Van Till, before he became steeped in process theology, would probably have
accepted this possibility as well. Is it within the natural capacities of a
corpse to come back to life after all physiological function has stopped for
three days? The Judeo-Christian tradition is very clear that at least when it
comes to salvation history, things happen that lie beyond the capacity of
strictly natural forces. But naturalism in all its guises opposes such
&quot;supernaturalism.&quot;<br>
<br>
Supernaturalism is a problem, but not for the reasons Van Till gives. The
problem with terms like &quot;supernatural&quot; and &quot;supernaturalism&quot;
(and I include here Van Till's variant of &quot;extra-natural assembly&quot;) is
that they tacitly presuppose that nature is the fundamental reality and that
nature is far less problematic conceptually than anything outside or beyond
nature. The &quot;super&quot; in &quot;supernatural&quot; thus has the effect of
a negation.<br>
<br>
But what if nature is itself a negation or reaction against something else? For
the theist (though not for the panentheist of process theology), nature is not a
self-subsisting entity but an entirely free act of God. Nature thus becomes a
derivative aspect of ultimate reality -- an aspect of God’s creation, and not
even the whole of God’s creation at that (theists typically ascribe to God the
creation of an invisible world that is inhabited among other things by angels).
Hence, for the theist attempting to understand nature, God as creator is
fundamental, the creation is derivative, and nature as the physical part of
creation is still further downstream.<br>
<br>
Now, from the vantage of intelligent design, treated strictly as a scientific
inquiry, neither naturalism nor theism has a privileged place. Intelligent
design, as a scientific research program, attempts to determine whether certain
features of the natural world exhibit signs of having been designed by an
intelligence. Whether this intelligence is ET or a telic principle immanent in
nature or a transcendent personal agent are all, at least initially, live
options. The problem with ET, of course, is that it implies a regress -- where
did ET come from? The same question doesn't apply, at least not in the same way,
to telic principles or transcendent personal agents because the terms of the
explanation are different. ET is an embodied intelligence, and that embodiment
itself needs explanation. Telic principles and transcendent agents are
unembodied. That raises its own issues, but they are a different set of issues.<br>
<br>
The key question for intelligent design is whether we can rigorously determine
that an intelligence is responsible for certain features of the natural world
regardless of what form that intelligence takes. This very question, however,
raises the possibility that occurrences in nature divide into those that require
intelligence and those that don't. It's such a division that Howard Van Till
wants at all costs to avoid. Instead of intelligence and nature working in
tandem, Van Till limits intelligence (increasingly a process God) to endowing
nature with purely natural capacities that then are on their own to work
themselves out in natural history. To keep this from degenerating into deism,
Van Till invokes the vocabulary of process theology, which describes God as
guiding or persuading creation. But all such talk is empty. Absolutely anything
that happens in the world is compatible with such divine guidance (the process
God always bows to the freedom of creation; by contrast, within classical
theism, creation always bows to divine freedom).<br>
<br>
Unlike Van Till's process theology, intelligent design is not compatible
with any sort of world. A world in which natural capacities can provide no
empirical evidence of anything other than chance and necessity and additionally
can do all of nature's design work is not a world in which intelligent design
holds. But how can we tell whether natural capacities are able to account for
everything that happens in nature? What evidence might count against natural
capacities being able to account for all natural occurrences? And if intelligent
design can show that natural capacities are in fact limited, does this not only
open the door to supernatural interventions and miracles but indeed necessitate
them?<br>
<br>
Let's consider this last point, because it is one that Van Till thinks is
particularly damning to my project and intelligent design generally. I argue in <i>No
Free Lunch</i> that intelligent design does not require miracles or supernatural
interventions in the classical sense of what I call &quot;counterfactual
substitution.&quot; Although the term counterfactual substitution is recent, the
idea is ancient and was explicitly described in counterfactual terms by the
theologian Schleiermacher. The idea is that natural processes are ready to make
outcome X occur but outcome Y occurs instead. Thus, for instance, with the body
of Jesus dead and buried in a tomb for three days, natural processes are ready
to keep that corpse a corpse (= the outcome X). But instead, that body
resurrects (= the outcome Y).<br>
<br>
Now I claim that intelligent design, in detecting design in nature and in
biological systems in particular, doesn't require counterfactual substitution.
Van Till takes exception and writes: &quot;How could the Intelligent Designer
bring about a <i>naturally impossible outcome </i>by interacting with a
bacterium in the course of time without either a suspension or overriding of
natural laws? Natural laws were set to bring about the outcome, no flagellum.
Instead, a flagellum appeared as the outcome of the Intelligent Designer’s
action. Is that not a miracle, even by Dembski’s own definition? How can this
be anything other than a <i>supernatural intervention</i>?&quot;<br>
<br>
The fault in Van Till's argument centers on an equivocation over what it means
to be a &quot;naturally impossible outcome.&quot; To see what's at stake,
imagine throwing a bunch of Scrabble pieces and seeing them spell Hamlet's
soliloquy. Is this a naturally impossible outcome? It certainly is highly
improbable, and such improbability often leads us to attribute impossibility (a
pragmatic sort of impossibility). But would such a wildly improbable event
require a miracle in the counterfactual-substitution sense of impossibility? Not
at all. Scrabble pieces thrown at random are not, as Van Till might put it,
&quot;set to bring about the outcome, no Hamlet's soliloquy.&quot; Randomness,
by definition, has free access to the entire reference class of possibilities
that is being sampled. Any possibility from the reference class is therefore
fair game for the random process -- in this case, the random throwing of
Scrabble pieces. It's therefore not the case that this random process was set to
bring about &quot;no Hamlet's soliloquy.&quot;<br>
<br>
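The scale of improbability at issue can be made concrete with a back-of-the-envelope sketch (the alphabet size and target phrase below are my own illustrative assumptions, not figures from the book):

```python
# Hypothetical sketch: probability that independent, uniformly random
# letter draws spell one fixed target phrase.  A 27-symbol alphabet
# (26 letters plus a space) is an illustrative simplification.
target = "to be or not to be"
alphabet_size = 27

# Each position is drawn independently and uniformly, so the chance of
# matching the target exactly is (1/27) raised to the phrase length.
p = (1 / alphabet_size) ** len(target)
print(f"chance of spelling the target at random: {p:.2e}")
```

Even this 18-character fragment comes out around 10^-26: nothing forbids the outcome, it is merely overwhelmingly unlikely, which is exactly the distinction between pragmatic improbability and counterfactual impossibility.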
Similar considerations apply to the bacterial flagellum. It's not that nature
was conspiring to prevent the flagellum's emergence and that a designer was
needed to overcome nature's inherent preference for some other outcome (as in
the case of counterfactual substitution). Rather, the problem was that nature
had too many options and without design couldn't sort through all those options.
It's not the case that natural laws are set to bring about the outcome of no
flagellum. The problem is that natural laws are too unspecific to determine any
particular outcome. That's the rub. Natural laws are compatible with the
formation of the flagellum but also compatible with the formation of a plethora
of other molecular assemblages, most of which have no biological significance.<br>
<br>
To return to the Scrabble analogy, there's nothing in the throwing of Scrabble
pieces that prevents them from spelling Hamlet's soliloquy. This is not like
releasing a massive object in a gravitational field which, in the absence of
other forces, must move in a prescribed path. For the object to move in any
other path would thus entail a counterfactual substitution and therefore a
miracle. But with the Scrabble pieces there is no prescribed arrangement that
they must assume. Nature allows them full freedom of arrangement. Yet it's
precisely that freedom that makes nature unable to account for specified
outcomes of small probability. Nature, in this case, rather than being intent on
doing only one thing, is open to doing any number of things. Yet when one of
those things is a highly improbable specified event (be it spelling Hamlet's
soliloquy with Scrabble pieces or forming a bacterial flagellum), design becomes
the required inference. Van Till has therefore missed the point: not
counterfactual substitution (and therefore not miracles) but the incompleteness
of natural processes is what the design inference uncovers.<br>
<br>
I want next to consider Van Till's concern about the applicability of specified
complexity to biology. Van Till writes: &quot;In no case do we know with
certainty <i>all</i> relevant natural ways in which some biotic system may have
historically come to be actualized.&quot; He denotes &quot;all relevant natural
causes&quot; that might be responsible for some biotic system X by capital
&quot;N&quot; and distinguishes this &quot;N&quot; from lower case
&quot;n,&quot; which for him denotes &quot;only those natural causes that are
known to be relevant.&quot; His concern is that we can only calculate
probabilities for X based on n rather than N. Yet to attribute specified
complexity to X, Van Till contends, we would need to calculate the probability
with respect to N and show that it is small enough. He concludes: &quot;The more
we learn about the self-organizational and transformational feats that can be
accomplished by biotic systems, the less likely it will be that the conditions
for complexity ... will be satisfied by any biotic system.&quot;<br>
<br>
This last statement is wishful thinking. There's no reason to think that as our
knowledge of n (i.e., known natural processes relevant to the formation of X)
increases, the probabilities or complexities associated with X become more
manageable and that specified complexity thereby gets refuted or dwindles away.
Within Van Till's notational convention, he is suggesting that as n approximates
N, P(X|n) will continually increase. But that's not how probabilities work. With
increasing knowledge, the probability may stay the same or even decrease. What's
more, for an omniscient being who actually knows N, P(X|N) may be smaller than
we ever imagined.<br>
<br>
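That P(X|n) need not increase as more processes become known can be checked with a toy mixture model (all weights and probabilities below are hypothetical numbers, chosen only to exhibit the possibility):

```python
# Toy sketch: P(X | known processes) under a mixture of candidate
# natural processes.  weights[i] is the chance that process i operates;
# p_x[i] is the chance that process i produces outcome X.
def prob_x(weights, p_x):
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, p_x)) / total

# n: two known processes
p_given_n = prob_x([0.5, 0.5], [1e-6, 1e-8])

# N: a newly discovered third process that almost never yields X --
# enlarging the set of known processes *lowers* the probability here.
p_given_N = prob_x([0.4, 0.4, 0.2], [1e-6, 1e-8, 1e-12])

print(p_given_n, p_given_N)
```

In this sketch p_given_N comes out smaller than p_given_n: learning more about nature's repertoire drove the probability of X down, not up.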
Van Till's mistake here should give us pause. He admits that increasing
knowledge might refute an attribution of specified complexity to some biotic
system X. But if that's a possibility, then certainly it's also a possibility
that increasing knowledge might fail to refute an attribution of specified
complexity and might even lead to increasingly extreme assessments of
complexity. What's more, there's an underlying fact of the matter about what
probabilities inhere in nature, and this fact of the matter might just be that
the complexity/improbability of X is indeed as extreme as it now seems. Why then
does Van Till think it's &quot;less likely&quot; that specified complexity will
be borne out for biotic systems &quot;the more we learn&quot;? The likelihood to
which Van Till is referring here has nothing to do with objective assignments of
probability or complexity to biotic systems. Rather, this likelihood merely
expresses Van Till's personal conviction that naturalistic explanations must
inevitably triumph. Any such likelihood is thus purely subjective and flows from
Van Till's precommitment to naturalism.<br>
<br>
But what about Van Till's worry that increased knowledge might overturn an
attribution of specified complexity? Clearly, increased knowledge need not have
this effect -- increased knowledge of natural processes may merely drive the
probabilities still lower and thus make the complexity even more extreme. Even
so, Van Till finds particularly troubling the mere possibility that new insights
into the natural processes surrounding some biotic system might overturn the
attribution of specified complexity to it. But why should we take Van Till's
worry seriously?<br>
<br>
A little reflection makes clear that Van Till's worry cannot be justified on the
basis of scientific practice. Indeed, to satisfy his worry is to impose
requirements so stringent that they are absent from every other aspect of
science. If standards of scientific justification are set too high, no
interesting scientific work will ever get done. Science therefore balances its
standards of justification with the requirement for self-correction in the light
of further evidence. The possibility of self-correction means that science can,
and indeed must, work with available evidence and on that basis (and that basis
alone) formulate the best explanation of the phenomenon in question. That's why
the &quot;relevant&quot; natural processes for the formation of some biotic
system are those we already know and not those waiting to be discovered. Yes, we
might be wrong in attributing specified complexity to some biotic system
(welcome to science -- all of whose claims are subject to revision in light of
further evidence). But we also might be right. And in the absence of detailed
testable models for how material mechanisms could have formed irreducibly
complex molecular machines like the bacterial flagellum, our best evidence
suggests that it is indeed complex and specified and that we are right in
attributing design.<br>
<br>
To attribute specified complexity to a biotic system is to engage in an <i>eliminative
induction</i>. Eliminative inductions depend on successfully falsifying
competing hypotheses (contrast this with Popper's falsification method, where
hypotheses are corroborated to the degree that they successfully withstand
attempts to falsify them). Now, for many design skeptics, eliminative inductions
are mere arguments from ignorance, that is, arguments for the truth of a
proposition because it has not been shown to be false. In arguments from
ignorance, the lack of evidence for a proposition is used to argue for its
truth. A stereotypical argument from ignorance goes something like &quot;gnomes
exist because you haven't shown me that they don't exist.&quot;<br>
<br>
But that's clearly not what eliminative inductions are doing. Eliminative
inductions argue that competitors to the proposition in question are false.
Provided that proposition together with its competitors form a mutually
exclusive and exhaustive class, eliminating all the competitors entails that the
proposition is true. This is the ideal case, in which eliminative inductions in
fact become deductions. The problem is that in practice we don't have a neat
ordering of competitors that can then all be knocked down with a few
straightforward and judicious blows (like bowling pins). Philosopher of science
John Earman puts it this way (<i>Bayes or Bust</i>, p. 165): &quot;The
eliminative inductivist [seems to be] in a position analogous to that of Zeno's
archer whose arrow can never reach the target, for faced with an infinite number
of hypotheses, he can eliminate one, then two, then three, etc., but no matter
how long he labors, he will never get down to just one. Indeed, it is as if the
arrow never gets half way, or a quarter way, etc. to the target, since however
long the eliminativist labors, he will always be faced with an infinite list [of
remaining hypotheses to eliminate].&quot;<br>
<br>
Earman offers these remarks in a chapter titled &quot;A Plea for Eliminative
Induction.&quot; He himself thinks there is a legitimate and necessary place for
eliminative induction in scientific practice. What, then, does he make of this
criticism? Here is how he handles it (p. 165): &quot;My response on behalf of
the eliminativist has two parts. (1) Elimination need not proceed in such a
plodding fashion, for the alternatives may be so ordered that an infinite number
can be eliminated in one blow. (2) Even if we never get down to a single
hypothesis, progress occurs if we succeed in eliminating finite or infinite
chunks of the possibility space. This presupposes, of course, that we have some
kind of measure, or at least topology, on the space of possibilities.&quot; To
this Earman adds (p. 177) that eliminative inductions are typically <i>local
inductions</i>, in which there is no pretense of considering all logically
possible hypotheses. Rather, there is tacit agreement on the explanatory domain
of the hypotheses as well as on what auxiliary hypotheses may be used in
constructing explanations.<br>
<br>
I want here to focus especially on Earman's idea that elimination can be
progressive. Too often critics of intelligent design charge specified complexity
with underwriting a purely negative form of argumentation. But that charge is
not accurate. The argument for the specified complexity of the bacterial
flagellum, for instance, makes a positive contribution to our understanding of
the limitations that natural mechanisms face in trying to account for it. What
justifies us in attributing specified complexity to the bacterial flagellum? The
bacterial flagellum is irreducibly complex, meaning that all its components are
indispensable for its function as a motility structure. What's more, it is
minimally complex, meaning that any structure performing the bacterial
flagellum's function as a bidirectional motor-driven propeller cannot make do
without certain basic components.<br>
<br>
Design theorists are therefore closing off possible avenues by which such
systems might have evolved naturalistically. In particular, they've shown that
no direct Darwinian pathway exists that incrementally adds these basic
components and therewith evolves a bacterial flagellum. Rather, an indirect
Darwinian pathway would be required, in which precursor systems performing
different functions evolve by changing functions and components over time
(Darwinists refer to this as coevolution and co-optation; Van Till gestures at
such an indirect pathway when he invokes the type III secretory system as an
evolutionary precursor to the flagellum -- more on this later). Plausible as
this sounds to the committed naturalist, there is no evidence for the efficacy
of indirect Darwinian pathways to accomplish irreducible and minimal complexity.
What's more, evidence from engineering strongly suggests that tightly integrated
systems like the bacterial flagellum are not formed by trial and error tinkering
in which form and function coevolve. Rather, such systems are formed by a
unifying conception that combines disparate components into a functional whole
-- in other words, by design.<br>
<br>
In assessing whether the bacterial flagellum exemplifies specified complexity,
the design theorist is tacitly following Earman's guidelines for making an
eliminative induction work. Thus, the design theorist orders the space of
hypotheses that naturalistically account for the bacterial flagellum into those
that look to direct Darwinian pathways and those that look to indirect Darwinian
pathways (cf. Earman's requirement for an ordering or topology of the space of
possible hypotheses). The design theorist also limits the induction to a local
induction, focusing on relevant hypotheses rather than all logically possible
hypotheses. The reference class of relevant hypotheses are those that flow out
of Darwin's theory. Of these, direct Darwinian pathways can be precluded on
account of the flagellum's irreducible and minimal complexity, which entails the
minuscule probabilities required for specified complexity. As for indirect
Darwinian pathways, the causal adequacy of intelligence to produce such complex
systems (which is simply a fact of engineering) as well as the total absence of
causally specific proposals for how they might work in practice eliminates them.
In eliminating indirect Darwinian pathways, design theorists are therefore not
merely eliminating what thus far hasn't worked (coevolution and co-optation)
but also appealing to causal powers (designing intelligences) that are known to
work.<br>
<br>
Is this enough to justify asserting that the bacterial flagellum exhibits
specified complexity? For the diehard naturalist (and I include here
naturalistic theists like Howard Van Till), such an eliminative induction will
never be enough and always constitute an argument from ignorance. But in
refusing to countenance eliminative inductions that establish specified
complexity, naturalists are guilty of their own argument from ignorance.
Fearnside and Holther, in their classic <i>Fallacy -- The Counterfeit of
Argument</i>, call it the argument from &quot;invincible ignorance.&quot;
Alternatively, they refer to it as &quot;apriorism.&quot;<br>
<br>
According to Van Till, design theorists have failed to take into account
indirect Darwinian pathways by which the bacterial flagellum might have evolved
through a series of intermediate systems that changed function and structure
over time in ways that we do not yet understand (hence his appeal to the type
III secretory system). But is it that we do not yet understand the indirect
Darwinian evolution of the bacterial flagellum or that it never happened that
way in the first place? At this point there is simply no evidence for such
indirect Darwinian evolutionary pathways to account for biological systems that
display irreducible and minimal complexity.<br>
<br>
Is this, then, where the debate ends, with design critics like Van Till chiding
design theorists for not working hard enough to discover those (unknown)
indirect Darwinian pathways that lead to the emergence of irreducibly and
minimally complex biological structures like the bacterial flagellum?
Alternatively, does it end with design theorists chiding design critics for
deluding themselves that such indirect Darwinian pathways exist when all the
available evidence suggests that they do not? Although this may seem like an
impasse, it really isn't. Science must form its conclusions on the basis of
available evidence, not on the possibility or promise of future evidence. This
means that eliminative inductions need to be local inductions, based on detailed
testable models and hypotheses that are currently available.<br>
<br>
If evolutionary biologists can discover or construct detailed, testable,
indirect Darwinian pathways that account for the emergence of irreducibly and
minimally complex biological systems like the bacterial flagellum, then more
power to them -- intelligent design will quickly pass into oblivion. But until
that happens, the eliminative induction that attributes specified complexity to
the bacterial flagellum constitutes a legitimate scientific inference. The only
way to deny its legitimacy is by appealing to some form of apriorism. The
apriorism of choice these days is, of course, naturalism. And that apriorism
engenders an argument not just from ignorance but from invincible ignorance. Indeed,
any specified complexity (and therefore design) that might actually be present
in biological systems becomes invisible as soon as one consents to this
apriorism. If biological systems actually are designed, not only won't Van Till
see it but he can't see it. This is invincible ignorance.<br>
<br>
The remainder of Van Till's criticisms of <i>No Free Lunch</i> can be dispatched
more quickly:<br>
<dl>
  <dd>(1) Van Till is concerned that my use of chance encompasses all natural
    processes. But as he knows, I approach natural processes as a mathematician,
    and natural processes are modeled mathematically using stochastic processes.
    At any rate, Van Till's quibble is not with my definition but with the label
    to which I'm assigning the definition.<br>
    <br>
    <dd>(2) Van Till claims that my probabilistic analysis of the bacterial
    flagellum is &quot;radically out of touch with contemporary genetics and
    developmental biology.&quot; I'm not sure what developmental biology has to
    do with it (bacteria don't have embryos that develop into adults). As for
    genetics, he would have preferred to see the probabilistic analysis of the
    flagellum center on the genes that code for its proteins rather than the
    proteins that go into its assembly. But the genes follow the proteins which
    follow the function, and not vice versa, so my analysis is the correct one.
    Even so, since genes map to proteins, the probabilities assigned to the
    flagellum's proteins and assemblage can easily enough be backtracked to the
    genes themselves (this is standard probability theory, in which
    probabilities on the space mapped into backtrack to probabilities on the
    space mapped out of).<br>
    <br>
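The backtracking of probabilities mentioned in (2) is the standard pushforward of a probability distribution through a map; here is a minimal sketch, with hypothetical gene and protein labels of my own devising:

```python
# Minimal sketch: a probability distribution on genes pushed forward
# through the gene -> protein coding map.  The probability of a
# protein-level event equals that of its gene-level preimage.
gene_probs = {"g1": 0.5, "g2": 0.3, "g3": 0.2}   # P on the gene space
codes_for = {"g1": "A", "g2": "A", "g3": "B"}    # hypothetical coding map

protein_probs = {}
for gene, p in gene_probs.items():
    protein = codes_for[gene]
    protein_probs[protein] = protein_probs.get(protein, 0.0) + p

print(protein_probs)
```

Because genes map onto proteins, any probability assignment on one space determines a matching assignment on the other, which is the sense in which the analysis can move between the two.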
    <dd>(3) Van Till is confused about how the detachability condition applies if
    the probabilistic analysis of the flagellum is confined to the genome. As he
    sees it, if the search for a detachable pattern is directed toward the
    base-pair sequence coding for the flagellum, then any such pattern could not
    be detached from the actual occurrence of that sequence. But this is false.
    The pattern is that collection of sequences which codes for a functioning
    bidirectional motor-driven propeller. This is no different from a
    cryptographic scheme in which the plaintext (cf. protein assemblage)
    is detachable only if the ciphertext (cf. base-pair sequence) that maps onto
    it is likewise detachable.<br>
    <br>
  <dd>(4) Van Till seems to think that because the historical pathways by which
    biological systems evolved are almost invariably occluded, this gives
    credence to mechanistic theories of evolution. He writes: &quot;Full causal
    specificity is, of course, the goal of all scientific explanations, but it
    is often very difficult to achieve, especially in the reconstruction of
    life’s formational history. That’s just a fact of life in evolutionary
    biology, as well as in many other areas of science.&quot; To see this
    absence of evidence as providing support for biological evolution itself
    constitutes an argument from ignorance. The only way to test whether
    material mechanisms are capable of driving biological evolution is by
    placing them in competition with something like intelligent design. Van Till's
    naturalism conveniently closes the door to any such competition.<br>
    <br>
  <dd>(5)
    
    Van Till has a problem with my characterization of the bacterial flagellum
    as a discrete combinatorial object. Nonetheless, that's what it is.
    Moreover, the probability I describe for such objects, which decomposes into
    a product of an origination, localization, and configuration probability,
    is in fact the probability of such objects. That decomposition
    holds with perfect generality and does not presuppose any independence or
    equiprobability assumptions. Now, how one assigns those probabilities and
    sorts through the different possible estimates of them is another matter.
    Thus, for Van Till to remark that &quot;no biologist has ever taken the
    bacterial flagellum to be a discrete combinatorial object that
    self-assembled in the manner described by Dembski&quot; is beside the
    point. The bacterial flagellum is indeed a discrete combinatorial object,
    and the self-assembly that I describe is the one we are left with and can
    compute on the basis of what we know. The only reason biologists would
    refuse to countenance my description and probabilistic calculations of
    self-assembly is that they show that only an indirect Darwinian pathway
    could have produced the bacterial flagellum. But precisely because it is
    indirect, there is, at least for now, no causal specificity and no
    probability to be calculated. Design theorists are closing off possible
    mechanistic routes for biological evolution. Van Till's biologists, by
    contrast, handwave at mere conceptual possibilities.<br>
    <br>
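The claim that the origination-localization-configuration decomposition presupposes no independence assumptions is an instance of the chain rule of probability, which holds for any three events. A toy sketch with invented numbers (illustrative values only, not estimates for the flagellum):

```python
# Toy chain-rule sketch (invented numbers, not estimates for the flagellum):
# for any events O (origination), L (localization), and C (configuration),
#     P(O and L and C) = P(O) * P(L | O) * P(C | O and L).
# This identity holds in full generality -- no independence or
# equiprobability assumptions are needed; such assumptions enter only
# when one estimates the individual factors.
p_o = 0.2            # P(O): the right subunits originate
p_l_given_o = 0.1    # P(L | O): they localize together, given origination
p_c_given_ol = 0.05  # P(C | O and L): they configure correctly, given both

p_joint = p_o * p_l_given_o * p_c_given_ol
print(round(p_joint, 12))  # 0.001
```

Disagreements over the estimate thus concern the conditional factors, not the validity of the decomposition itself.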
  <dd>(6)
    
    In line with the previous concern, Van Till offers the type III secretory
    system as a possible precursor to the bacterial flagellum. This ignores that
    the current evidence points to the type III system as evolving from the
    flagellum and not vice versa (cf. Milt Saier's recent work at UCSD). But
    beyond that, finding a component of a functional system that performs some
    other function is hardly an argument for the original system evolving from
    that other system. One might just as well say that, because the motor in a
    motorcycle can be used as a blender, the motor evolved into the
    motorcycle. Perhaps, but not without intelligent design. Even if it could be
    shown that the type III system predated the flagellum (contrary to Milt
    Saier's work), it could at best represent one possible step in the indirect
    Darwinian evolution of the bacterial flagellum. But that still wouldn't
    constitute a solution to the evolution of the bacterial flagellum. What's
    needed is a complete evolutionary path and not merely a possible oasis along
    the way. To claim otherwise is like saying we can travel by foot from Los
    Angeles to Tokyo because we've discovered the Hawaiian Islands. Evolutionary
    biology needs to do better than that.<br>
    <br>
  <dd>(7)
    
    Van Till would have liked more detail showing how the bacterial flagellum
    is specified. Briefly: consider the reference class of possibilities to be
    all molecular assemblages (to keep things manageable let's limit them to a
    billion subunits). Now consider the pattern &quot;bidirectional motor-driven
    propeller.&quot; This is a specification (I leave this as an exercise for the
    reader). Now do a perturbation tolerance and identity analysis as I describe
    it in section 5.10 of <i>No Free Lunch</i>. This restricts both the
    reference class and the specification to the actual flagellum of <i>E. coli</i>.
    Moreover, it allows us to estimate the probabilities for the naturalistic
    formation of the flagellum in line with John Leslie's fly-on-the-wall
    methodology.<br>
    <br>
  <dd>(8)
    
    Finally, Van Till attributes an argument to me that I never made. He writes:
    &quot;If, as Dembski implicitly accepts, forming the majority of the E. coli
    genome -- including the portion dedicated to the actualization of the type
    III secretion apparatus -- did not need the form conferring intervention of
    a designer, then why would intervention be necessary for the small
    additional portion that codes for a flagellum?&quot; I argue that the
    bacterial flagellum is designed because it exhibits specified complexity.
    But such an argument says nothing about the design or absence of it in the
    rest of the bacterium. Design and specified complexity must be established
    on a case-by-case, system-by-system basis. Moreover, the design of one thing
    need not preclude the design of another. I can, for instance, argue that the
    cassette player in my car is designed. But that leaves the design of the
    rest of my car untouched. Thus, when Van Till asks, &quot;Does it not seem
    odd that the flagellar 2% needed supplementary designer-action while the
    other 98% did not?&quot; he is certainly correct that it is odd. But the
    oddness here is of Van Till's own doing, attributing to me a position that I
    don't hold and for which I never argued.</dd>
</dl>
<div align="center">
  +++++<br>
</div>
I close with a quotation from the late philosopher Willard Quine. Quine, though a
naturalist, was not wedded to the methodological and metaphysical naturalism of
Van Till. Quine was a pragmatic naturalist. This pragmatism allowed him to
entertain the following possibility: &quot;If I saw indirect explanatory benefit
in positing sensibilia, possibilia, spirits, a Creator, I would joyfully accord
them scientific status too, on a par with such avowedly scientific posits as
quarks and black holes&quot; (from &quot;Naturalism; or, Living within One's
Means,&quot; <i>Dialectica</i> 1995, vol. 49).<br>
<br>
Quine's pragmatic naturalism is far more intellectually nimble than Van Till's
naturalism, which, as we've seen, is scientifically stultifying and when pushed
to extremes, as Van Till does, commits an argument from invincible ignorance. I
would urge, therefore, that the scientific community take seriously the possibility
raised by Quine of joyfully according intelligent design full scientific status.
At issue is not the endless list of quibbles that Van Till raises, but whether
intelligent design can confer explanatory benefit in understanding biological
systems. That is now happening. To be sure, design theorists still have their
work cut out for them. But it is an intellectual project that is fast gaining momentum
and that promises shortly to displace Van Till's naturalism.<br>
<br>
Van Till's naturalism is not an aid to intellectual clarity but a wet blanket
designed to stifle inquiry. Not only is his naturalistically inspired critique
consistently off the mark, but it makes a virtue of maintaining the status quo.
The problem with wet blankets and the status quo is, of course, that they are
boring. Intelligent design, by contrast, as Karl Giberson and Donald Yerxa point
out in their forthcoming <i>Species of Origins</i> (Rowman &amp; Littlefield,
2002), is setting the agenda for the origins question in biology (and
specifically for the emergence of biological complexity). Scientists therefore
have a choice to make: to consider the possibility of intelligent design as a
live option (if only for pragmatic reasons like Quine's) or to retreat into a
naturalistic apriorism that eternally blinds itself to the very possibility of
design. The choice here is between unfettered inquiry (with all the risks that
entails) and invincible ignorance (with all the security and boredom it
confers). It's clear which option Van Till has chosen.

</blockquote>
		</td>
	</tr>
</table>
</body>


</html>