Transcript: Do Scientists Need Philosophy?

By David Harriman

July 5, 2024

This was a Friday afternoon colloquium lecture at the Johns Hopkins Applied Physics Laboratory. APL is one of the best science institutes in the world, and past speakers in this famous colloquium series have included Nobel Prize winners in physics and chemistry. David Harriman is the first philosopher of science ever invited to present this lecture, which was attended by more than two hundred professional scientists.

Venue: Johns Hopkins Applied Physics Laboratory Colloquium Lecture.

Date: May 13, 2011
 

Lecture Text:

It’s an honor to speak here today, given the prestigious list of past speakers at this colloquium. The fact that my topic—philosophy of science—is a bit unusual for this lecture series makes me even more grateful for the opportunity.

And there’s a sense in which I feel at home here. I began my career in applied physics, which is not the case for most philosophers. I’m more comfortable at APL than I would be at the Harvard philosophy department colloquium—for reasons that will become clearer as we go.

My topic is a question: Do scientists need philosophy? Of course, I mean: Do you need philosophy to do your work? I take it as obvious that some understanding of ethics is required in order to make choices in your personal lives. But does philosophy have a crucial role in scientific research? Can it help you to do your jobs better? Or, to make the point stronger, is it necessary in order for you to do your jobs at all?

From the fact that I went into philosophy and I’m here talking about it, I think you can guess my answer. But it might be fun to start by playing devil’s advocate. So, let’s see how strong a case can be made for the widespread view that philosophy is at best irrelevant to science and at worst actually harmful.

As I see it, there are two arguments for this view: one argument based on the nature of philosophy, and another based on history. Let’s consider them one at a time.

First, it’s often claimed that philosophy is detached from reality by its very nature. Philosophers sit in their ivory towers and concoct theories of knowledge, but they are not in the trenches dealing with the real world and actually discovering knowledge. So, taking advice from a philosopher is like having a tennis coach who has never played tennis. You’re out there battling on the court, trying to win a match, and this guy is sitting in his office sending you text messages about his theory of tennis. Under those circumstances, it’s understandable if you feel like suggesting—perhaps in some anatomical detail—exactly where he might put his theories.

Unfortunately, there is some supporting evidence for this view of philosophy. Contemporary philosophers provide Exhibit A. In many cases, their claims about knowledge are so bizarre that nobody with any common sense could take them seriously. For example, the dominant viewpoint in philosophy of science during the past fifty years is called the “sociology of knowledge” school. According to these philosophers, scientific truth is determined by authority and consensus within a social context. Now, APL is a prestigious science institute with recognized authority in our society, so this view does have its advantages. You don’t have to do all those complicated experiments in order to prove your ideas; instead, you can establish whether an idea is true or false simply by getting together and voting. Imagine how easy it would be to achieve your milestones. Of course, if you vote in favor of an idea, it becomes true only for our society; it’s not true for people in the different cultures of Madagascar or Malibu.

In my judgment, most contemporary philosophy is detached from the real world and the real issues people face in their work and personal lives. But this detachment is not unique to our era; it can be traced back to the beginning of philosophy in ancient Greece. Plato, the father of Western philosophy, split reality into two realms: a higher world of abstract ideas, and a lower world of physical appearances. His view gave rise to many false dichotomies that are still around today; in science, you can see the influence of Plato in the often uneasy relationship between theorists and experimentalists. Some theorists seem to think that they have the high ground, and they look down on the lab workers who design experiments and take measurements; on the other hand, some experimentalists tend to think that theorists merely play around with floating abstractions and indulge ideas that have little connection with the perceivable world. So, Platonism introduces an element of distrust and hostility into a relationship that should be mutually respectful and beneficial.

Of course, philosophers are the ultimate theorists. They have their heads in the upper stratosphere of abstraction. So, they can’t help scientists deal with the real-world problems involved in conducting research. As Richard Feynman once put it, “Philosophy is about as useful to scientists as ornithology is to birds.” Just as a bird doesn’t need an ornithologist to tell him how to fly, a scientist doesn’t need a philosopher to tell him how to do science. The claim here is that philosophers deal with floating abstractions, which have very little to do with the actual practice of science.

Now let’s turn to history and the second argument against philosophy. When scientists have taken philosophy seriously, it is said, the results have not been encouraging. Let’s consider some examples. Kepler, for instance, was influenced by Platonism, which led him to several ideas that turned out to be wrong. Newton was influenced by theological speculations about the alleged connection between God and space, and his idea of absolute space turned out to be wrong. In the early 19th century, there was a generation of German scientists who were strongly influenced by Kant and Hegel, and nearly all their ideas were embarrassingly wrong. In the late 19th century, many physicists and chemists were strongly influenced by the philosophy of positivism, which led them to reject the atomic theory of matter. And so on. It’s not difficult to come up with a dozen more examples like these.

You get the point. Scientists are better off ignoring the philosophers; when they take philosophy seriously, it leads them astray.

Now, since you’re not running for the exits, I assume that you’re either a polite audience or you sense there may be a weakness in these arguments against philosophy. I think there is such a weakness; the arguments fall short of establishing that philosophy is bad for science. Instead, they establish only that bad philosophy is bad for science. In other words, if a scientist accepts a false philosophy and it guides his thinking about science, he will go off-track. This isn’t a surprising conclusion. My response is: “Of course.”

But what if there is a different kind of philosophy, one based on observation and logic—in other words, one that is arrived at by essentially the same method that it prescribes for the special sciences? Could such a philosophy ask important questions and provide answers that are useful and even indispensable to scientists? Let’s sweep aside Plato’s supernatural world of ideas, and let’s sweep aside the modern skeptics who can’t write the words “reality” and “truth” without putting them in scare quotes. Let’s consider what philosophy really is.

And, first, let me clear the way by making one point about what philosophy isn’t. It isn’t a set of “a priori” principles from which one can deduce the nature of the world. Plato, Descartes, Kant, and Hegel all tried to do that. They trespassed into the realm of physical science and tried to dictate what the nature of the physical world must be. They said a lot of silly things, and this contributed to the bad reputation of philosophy among scientists.

And my criticism of such philosophers goes beyond the obvious fact that most of their conclusions were proven wrong. The stronger point is this: Whenever they deduced science from philosophy, all of their conclusions were arbitrary. For example, Descartes deduced the law of inertia from the immutability of God. Now, the law of inertia is true, but how do we know that it’s true? Only because of the work of Galileo and Newton; Descartes contributed nothing. By accident, an arbitrary and invalid argument might result in a correct generalization. But we can know it’s correct only by valid reasoning from observations. So, even when such philosophers get lucky, their ideas are worse than useless, because they corrupt the understanding of proper method.

And philosophy is primarily about method; it’s about the principles that tell us how to discover knowledge. And even a quick look at the history of science shows us that these principles are not obvious. In astronomy, for instance, Ptolemy and Copernicus did not simply disagree in their scientific conclusions about the solar system; they also disagreed in their underlying philosophic ideas about how to develop a theory of the solar system. Ptolemy thought it was best to settle for a mathematical description of the appearances, whereas Copernicus began the transition to focusing on causal explanations. So, what is the goal of science—to describe appearances, or to identify causes? The answer depends on the philosophy you accept.

Similarly, in 17th century physics, Descartes and Newton did not simply disagree in their scientific theories; they strongly disagreed about the basic method of developing such theories. Descartes wanted to deduce physics from axioms, whereas Newton induced his laws from observational evidence. So, what is the essential nature of scientific method—is it primarily deductive, or primarily inductive? And what is the role of experiment in science? The answers depend on your theory of knowledge.

Here’s another example: consider the contrast between Lavoisier, the father of modern chemistry, and the alchemists of the previous era. Lavoisier did not merely reject the scientific conclusions of the alchemists; he rejected their method of concept-formation and he originated a new chemical language, and then he used a quantitative method for establishing causal relationships among his concepts. So how do we form valid concepts, and what is the proper role of mathematics in physical science? Again, your answers to such fundamental questions will depend on the philosophy you accept.

Finally, consider the battle between two late 19th century physicists, Boltzmann and Mach. Boltzmann was a leading advocate of atomic theory and he used that theory to develop the field of statistical thermodynamics. Mach, on the other hand, was a leading advocate of positivism; he thought that physicists should stick to what they can see, and that atomic theory was nothing more than speculative metaphysics. So, what is the relationship between observation and theory, and how is a theory proven, and are there limits to scientific knowledge? Once again, these are philosophic questions.

And such issues have not gone away with time. There is a great deal of controversy in theoretical physics today, and these basic issues of method are at the heart of the controversy. Some physicists say that string theory is a major triumph that has unified quantum mechanics and relativity for the first time. Other physicists argue that string theory is just a mathematical game detached from reality—that it isn’t a theory of everything, but instead a theory of anything. And we’re starting to hear a few similar criticisms of Big Bang cosmology; if the theory is so flexible that it can explain anything, the critics say, then perhaps it actually explains nothing.

How do we decide these issues? How do we know the right method of doing science, and what standards should we use to evaluate scientific ideas? These are some of the questions that I try to answer in my book. My approach is to look closely at what scientists have actually done, and to induce principles of method from cases of successful discovery. Along the way, and particularly toward the end of the book, I also look at cases where scientists have made errors—and I try to identify the departure from proper method in those cases.

In effect, the laboratory of a philosopher is history. Strictly speaking, of course, philosophy is not an experimental science. It deals with the fundamental principles that guide human thought and action, and we’re not allowed to control and manipulate human beings. Fortunately, it isn’t necessary. Different people have adopted different principles to guide their thinking and actions, and we can look at history and see the results. We can abstract what is in common from the positive cases and identify what is different about the negative cases. This is one sense in which philosophy is like the specialized sciences.

Now, I said that different scientists have used different methods. But I’ll start with an example where the same scientist used different methods. It’s the case of Kepler, which I discuss in Chapter 3 of my book. Kepler was a brilliant astronomer, but he suffered from what I would call “multiple philosophy disorder.” He adopted two different and contradictory methods, and sometimes he used one and sometimes the other. So, he provides a fascinating case study; in effect, he conducted an experiment in philosophy of science. And the results of the experiment are clear-cut and dramatic.

First, let’s look at Kepler’s successful method. He used the most accurate astronomical observations available, and searched for a causal theory that would explain them. He rejected epicycles because they are non-causal; they were just a mathematical device in which bodies revolved around vacant points for no reason. He dismissed that approach as ridiculous; instead, he wanted to identify the physical cause of the planetary motions. He knew that the planes of the planetary orbits were inclined with respect to one another by a few degrees. He analyzed the data and showed that the planes intersect at the position of the sun. Since the sun is the only body common to all the orbital planes, he concluded that the sun is the physical cause of the orbits. QED, as they say in mathematics.

After this discovery, Kepler referred all planetary positions to the position of the sun and tried to determine the exact nature of the orbits caused by the sun. You know that this investigation culminated in his three laws of planetary motion. Here, I’ll just briefly make a couple of points about his method.

First, his concern was reality, not appearance. Previous astronomers who limited themselves to appearances had merely tried to fit the angular positions; Kepler, on the other hand, insisted that his models fit both the observed angular positions and calculated distances. If he had accepted the old “describe the appearances” approach, he would never have discovered his laws.

Second, when using this reality-based method, Kepler did not tolerate discrepancies between observation and theory. His best circular model for the orbit of Mars fit the data pretty well by the standards of the time. But it wasn’t good enough for Kepler; the errors were small, but significant. On the basis of these discrepancies, Kepler discarded the 2000-year tradition of using circles and proved that Mars moves in an elliptical orbit. The message here is simple but profound: Observations are the foundation of knowledge, and if the theory doesn’t match—well, the theory has to change. It doesn’t matter whether the mismatch is big or small. In this case, a small mismatch led to a revolution in theoretical astronomy.
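
The size of that “small but significant” mismatch can be illustrated with a short calculation. The sketch below is an assumption-laden simplification, not a reconstruction of Kepler’s actual models: it takes Mars’s modern eccentricity (about 0.093), and it stands in for “the best circular model” with an eccentric circle traversed uniformly about an equant point, which is in the spirit of the circular constructions Kepler was testing. Under those assumptions, the worst disagreement in heliocentric longitude comes out in the neighborhood of the famous eight arcminutes that Kepler refused to explain away.

```python
import math

E_MARS = 0.0934  # modern value of Mars's orbital eccentricity (an assumption of this sketch)

def longitude_ellipse(M, e):
    """Heliocentric longitude, measured from perihelion, on a Kepler ellipse."""
    E = M
    for _ in range(20):                        # Newton's method on Kepler's equation E - e*sin(E) = M
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))

def longitude_equant_circle(M, e):
    """Heliocentric longitude for a unit circle whose center is offset from the sun by e,
    traversed with uniform angular motion about an equant point offset by e on the far side
    (a stand-in for the kind of circular model Kepler was trying to make work)."""
    theta = M + math.asin(e * math.sin(M))     # position angle measured at the circle's center
    return math.atan2(math.sin(theta), math.cos(theta) - e)

def angle_gap(a, b):
    """Absolute difference between two angles, wrapped into [0, pi]."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

worst = max(angle_gap(longitude_ellipse(M, E_MARS), longitude_equant_circle(M, E_MARS))
            for M in (math.radians(d) for d in range(360)))
print(f"worst disagreement: {math.degrees(worst) * 60:.1f} arcminutes")   # on the order of 8'
```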

Now let’s turn to Kepler’s other method. I love Kepler, but I don’t think it’s appropriate to hold back and understate my criticism. So, I’ll try to tell it like it is. Kepler’s other method is based on Platonism, and the results are a scientific disaster.

Kepler started with the intuition that God would have based his design of the solar system on perfect geometrical figures. The Greeks had discovered the five symmetrical, solid figures that can be constructed from identical plane surfaces. In fact, Plato had tried to reduce the entire physical world to geometry and explain it all by means of these figures. Kepler used the same method to explain the solar system. He decided that there must be six planets, because the system was designed so that the five perfect geometrical figures fit between the six orbits. With this scheme, he attempted to explain the distances between the planets. When he worked out the details, the average error was about 20 percent, but this result did not discourage him. Plato had said that the physical world was an imperfect realm, so even perfect ideas can’t be expected to match the data. When Kepler was following the Platonic method, he wasn’t as concerned about the connection between observation and theory.
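
For the curious, the arithmetic behind Kepler’s scheme is easy to redo. The sketch below is a coarse modern check, not Kepler’s own construction: it assumes circular orbits at the planets’ modern mean distances and uses the standard ratio of circumscribed to inscribed sphere for each solid, so it only roughly indicates the scale of the mismatch the scheme produced.

```python
import math

# Ratio of circumscribed to inscribed sphere radius for each Platonic solid (standard geometry).
RATIO = {
    "cube": math.sqrt(3),
    "tetrahedron": 3.0,
    "dodecahedron": math.sqrt(15 - 6 * math.sqrt(5)),   # about 1.258
    "icosahedron": math.sqrt(15 - 6 * math.sqrt(5)),    # the same ratio as its dual
    "octahedron": math.sqrt(3),
}

# Kepler's nesting order, with modern mean distances from the sun in astronomical units.
AU = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000,
      "Mars": 1.524, "Jupiter": 5.203, "Saturn": 9.537}
NESTING = [("Saturn", "cube", "Jupiter"),
           ("Jupiter", "tetrahedron", "Mars"),
           ("Mars", "dodecahedron", "Earth"),
           ("Earth", "icosahedron", "Venus"),
           ("Venus", "octahedron", "Mercury")]

for outer, solid, inner in NESTING:
    predicted = RATIO[solid]             # the distance ratio the nesting scheme requires
    observed = AU[outer] / AU[inner]     # the ratio of the actual mean distances
    off = 100 * abs(observed - predicted) / observed
    print(f"{outer}/{inner} via {solid}: predicted {predicted:.3f}, "
          f"observed {observed:.3f}, off by {off:.0f}%")
```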

Kepler had other ideas based on this method. For example, he related the planetary motions to musical harmonies, and tried to explain the motions as a symphony written by God. And he offered two explanations of the ocean tides: one based on a physical cause—the attraction of the moon—and the other based on a spiritual cause—the alleged breathing of the living Earth. This is what I mean by a “multiple philosophy disorder.”

So how do we evaluate Kepler’s methods? Well, it seems obvious; his first method was a spectacular success and the second method was an equally spectacular failure. But did all scientists get this memo? Did they all learn this great lesson in scientific method? Unfortunately, the answer is no. In the generation after Kepler’s death, Descartes’ approach to science was very influential and it was based on the Platonic method. And what is the situation today? Theorists are trying to explain everything in terms of beautiful geometrical ideas, and observations sometimes seem to be regarded as an annoying afterthought. Here is a quote from Steven Weinberg, one of the most prominent theoretical physicists of the past half century. He’s writing about the method that led to Kepler’s failures:

“Kepler was no fool. The kind of speculative reasoning that he applied to the solar system is very similar to the sort of theorizing that elementary particle physicists do today; we do not associate anything with the Platonic solids, but we do believe for instance in a correspondence between different possible kinds of force and different members of the Cartan catalog of all possible symmetries. Where Kepler went wrong was not in using this sort of guesswork, but in supposing that the planets are important.”

I think it’s interesting, and thought-provoking, that a method that has failed so badly in the past is still given such respect.

Now let’s turn to another fascinating episode in the history of science. I want to talk about Isaac Newton’s first published paper, which presented his theory of colors. This paper, and the controversy it provoked, transcended the field of optics; it was a revolution in scientific method. I’ll start with some background, and then discuss the new principle of method that led to Newton’s discoveries.

Optics was an active field of investigation during the 17th century. The telescope was invented, Snell discovered the law of refraction, scientists began investigating the colors produced by prisms, and they tried to explain the phenomenon of rainbows. Many observations led to the question of how colored light is related to white light, but nobody knew how to answer the question.

Of course, some investigators thought they knew. They started with the assumption that colors were produced by some modification of pure white light. For instance, Descartes claimed that light was a certain type of particle, and white light became colored when the particles started rotating. Robert Hooke, a prominent English physicist, came up with a different theory. He agreed with Descartes that colors were a modification of pure white light, but he said that light was a wave pulse, and that colors were produced when the pulse became asymmetrical.

Now Newton comes on the scene, and he turns the whole process around. He didn’t start with a theory and then try to deduce the facts. He started with the observed facts, and then asked questions. Initially, he didn’t have the idea that colors are the elementary components of white light. But after making some observations with a prism, a question occurred to him: however the colors might be produced, is the emerging beam red on one side and blue on the other because red and blue light are refracted at slightly different angles?

He then thought of an ingenious way of answering the question. Newton took a thread and painted half of its length blue and the other half red. When he laid the thread in a straight line against a dark background and viewed it through a prism, the blue and red halves looked discontinuous, with one appearing above the other. The prism shifted the image of the blue half more than it shifted the red half. From this one experiment, Newton reached a universal truth: Upon refraction, blue light is bent more than red light.
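
Newton’s conclusion can be illustrated with nothing more than Snell’s law. In the sketch below, the refractive indices are modern figures typical of a crown glass, chosen for illustration rather than taken from Newton; the particular numbers are assumptions, but the ordering is the point: blue light is deviated slightly more than red.

```python
import math

# Approximate refractive indices for a typical crown glass (illustrative modern values,
# not Newton's measurements): red light near 656 nm, blue light near 486 nm.
N_RED = 1.514
N_BLUE = 1.522

def refraction_angle(incidence_deg, n):
    """Angle of refraction from Snell's law, sin(i) = n * sin(r), going from air into glass."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

incidence = 40.0
for name, n in [("red", N_RED), ("blue", N_BLUE)]:
    r = refraction_angle(incidence, n)
    print(f"{name}: refracted to {r:.2f} degrees, i.e. bent by {incidence - r:.2f} degrees")
# Blue is bent slightly more than red -- the asymmetry Newton isolated with the painted thread.
```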

As you know, he went on to prove that white light is a mixture of all the colors, which can be separated by refraction and reflection. He published these results in 1672.

The most radical aspect of Newton’s paper did not consist of what he said, but of what he refrained from saying. He presented his conclusions without committing himself to any definite view about the fundamental nature of light and colors. He reasoned as far as the available evidence could take him—and no farther. Many scientists reacted to Newton’s original paper with surprise and confusion because they were accustomed to the Cartesian method of deducing conclusions from so-called “first causes.” But here was a paper about colors in which the author simply ignored the controversy over whether colors were rotating particles or distorted wave pulses or something else.

Newton once said that he “framed no hypotheses,” a statement that became both famous and widely misunderstood. Explaining his terminology, he wrote: “The word ‘hypothesis’ is here used by me to signify only such a proposition as is not a phenomenon nor inferred from any phenomenon, but assumed or supposed—without any experimental proof.” Unfortunately, this did not make his meaning entirely clear. He did not mean to reject all hypotheses that lacked full experimental proof; in actual fact, he used the term “hypothesis” to refer to an arbitrary assertion, in other words, a claim unsupported by any observational evidence.

Newton understood that to accept an arbitrary idea—even as a possibility that merits consideration—undercuts all of one’s knowledge. As he explained in a letter to a colleague:

“If anyone may offer conjectures about the truth of things from the mere possibility of hypotheses, I do not see by what stipulation anything certain can be determined in any science; since one or another set of hypotheses may always be devised which will appear to supply new difficulties. Hence I judged that one should abstain from contemplating hypotheses, as from improper argumentation. . . .”

Wow. Here’s what Newton is saying. Perceptual data provide our only direct contact with reality. An arbitrary idea is detached from such data, so to consider the idea is to leave the realm of reality and enter a fantasy world—and no good science comes from taking such a trip. We can’t even disprove an arbitrary idea, because such ideas can always be shielded by other arbitrary ideas. Once we enter the make-believe world, we are caught in a web of baseless conjectures, and there’s only one way out: to dismiss all such claims as unworthy of attention.

In my judgment, this is the moment in history when scientific method put aside fantasy and reached adulthood. The speculative method that had led to so many errors in the past—in other words, the premature leap to theory in the absence of evidence—was rejected by Newton. He said that scientific discovery is a careful process of inductive logic that leads us from observations to generalizations, and he provided grand-scale examples demonstrating the power of that method. In the century and a half following Newton, this inductive method was dominant and it led to the new sciences of chemistry, geology, and electromagnetism.

Unfortunately, the method was never fully identified in explicit terms, and the commitment to it was undercut by a lot of very bad philosophy, notably the influential writings of Hume and Kant. As a result, many scientists began to drift away from the method that had been advocated by Newton, often without even realizing that they were drifting away. Now we have a strange situation. The description of scientific method that is given in nearly all of our textbooks is called the “hypothetico-deductive” method. In essence, it says that scientists somehow come up with a hypothesis—never mind exactly how—and then they deduce consequences and make observations to test it. If the predictions match the observations, then we say that the hypothesis is confirmed. Of course, we still don’t know whether it’s true, because other hypotheses might lead to similar predictions.

Notice that this sounds a lot like the method that Newton criticized and rejected. So, the man who is arguably the greatest scientist in history tells us that this method is wrong—or, at the very least, he tells us that this description of method omits the most important aspects of the discovery process—and yet this is the view of method that we routinely present to students. Why? Why don’t we have anything better to give them?

I think it’s because the majority of philosophers since the time of Newton have been very skeptical about inductive reasoning. They’ve argued that there’s no logical justification for leaping from a few observed cases to a universal generalization. But if induction is invalid, we certainly don’t want to emphasize it in our description of scientific method; if we do, we’re implying that scientific method is illogical and invalid. In order to avoid that implication, the emphasis is put on the deductive part of the process.

But this approach evades the central issue. Deduction presupposes induction; we must have a generalization before we can apply it. The challenging and interesting aspect of scientific method is: How do we arrive at and prove the generalizations? For the most part, the “hypothetico-deductive” method ignores this question.

But since induction is inescapable, we need to tackle the problem head-on. Let’s start with the philosophers’ description of the process as a giant, illogical leap from a few observed cases to a universal generalization. That does sound like a dubious procedure. How can we possibly justify accepting a conclusion that transcends the evidence in this way? And yet we do accept such conclusions all the time; we couldn’t survive if we didn’t. But if our only defense is that induction is illogical and we do it anyway because it works, we are left unable to distinguish science from pseudo-science, or rationality from irrationality. If we say that even the best thought processes are illogical, that’s a disaster.

So how do we respond to the charge that induction, by its nature, is an illogical leap? The best place to start is with actual examples of scientific induction. Let’s go back to Newton’s experiment with the thread that was painted half blue and half red. Recall that when he viewed it through a prism, the two halves appeared discontinuous—and then analysis showed that the prism shifted the image of the blue part more than the red part. On this basis, Newton reached a generalization: blue light is refracted by glass at a greater angle than red light.

Now, where is the illogical leap here? Oddly enough, there doesn’t seem to be one. The generalization seems to follow with perfect logical necessity from the observations. In fact, under the circumstances, we would be shocked and disappointed in Newton if he didn’t reach the generalization. And yet the generalization goes well beyond the particular observations—it refers to any and all instances of red light and blue light refracting through glass.

The key here is the concepts themselves. When we form a concept, we do it on the basis of essential similarities—and we make a commitment to treat the particular instances of the concept as interchangeable members of a group. If we don’t make that commitment, then we don’t have the concept. Someone who uses the word “red,” but treats every instance of red light as completely different than every other instance, does not have the concept “red.” So, in this sense, inductive generalization is inherent in conceptualization.

The only way to deny the validity of Newton’s generalization is to deny the validity of his concepts. We would have to argue that one or more of his concepts are invalid—the concepts of “red,” “blue,” “glass,” and “refraction.” And by invalid I mean that the concept is not a proper integration of similar particulars, but instead a juxtaposition of essentially different particulars.

In this case, probably the best that the skeptic could do is point out that there are different types of glass, so perhaps the generalization is true only for the type of glass that Newton used. But notice that it is still a generalization, and notice that it’s easy to get a prism made out of a different type of glass and see similar results. So, the skeptic would strike out with this argument.

Now, I’ve deliberately chosen a simple example here involving low-level concepts related in a narrow generalization. Of course, I’m not implying that inductive reasoning is easy—that all we ever have to do is conduct an experiment and out pops the generalization in a straightforward way. On the contrary, as we pursue more advanced knowledge, the inductive process becomes extremely complicated. The reason is that inductive arguments are not self-contained in the way that deductive arguments are. Deduction is easy because it depends only on form, not on content; we can symbolize the argument, and if it has a correct form then it’s deductively valid. Obviously, if a major premise is wrong then the conclusion is usually wrong. But deduction doesn’t deal with this issue; that’s where induction comes in. With induction, we can’t abstract from the content; we can’t symbolize the argument, and we can’t even delimit the number of premises. We have to survey the entire field of relevant knowledge, and make sure that our generalization integrates with all of it. That’s why inductive reasoning is controversial and difficult, and why it’s so prone to error. But we are able to do it—and physical science provides some of the most impressive success stories.
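
To make the point about form concrete, here is a small illustration of the deductive half of the contrast (my example, not the lecture’s): a brute-force check of validity over truth values. The check never asks what P and Q are about, which is exactly the sense in which deduction abstracts from content, and exactly what an inductive argument cannot do.

```python
from itertools import product

def valid(premises, conclusion):
    """A form is deductively valid if every truth-value assignment that makes all the
    premises true also makes the conclusion true -- regardless of what P and Q mean."""
    return all(conclusion(P, Q)
               for P, Q in product([True, False], repeat=2)
               if all(premise(P, Q) for premise in premises))

# Modus ponens: "P implies Q; P; therefore Q" -- valid purely in virtue of its form.
print(valid(premises=[lambda P, Q: (not P) or Q, lambda P, Q: P],
            conclusion=lambda P, Q: Q))          # True

# Affirming the consequent: "P implies Q; Q; therefore P" -- invalid, whatever the content.
print(valid(premises=[lambda P, Q: (not P) or Q, lambda P, Q: Q],
            conclusion=lambda P, Q: P))          # False
```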

Let’s briefly turn now to a complex example, so I can indicate what is involved. The example I have in mind is the 19th century generalization that each chemical element is made of identical atoms, which differ from the atoms of other elements. This is a broad generalization that requires a wide range of advanced knowledge. In my book, I outline the major steps of the inductive argument in about 40 pages—and now I’m going to condense it into a few minutes. But I’m taking advantage of the fact that you know the science, so I can just focus on a few points regarding the proof.

First, this broad generalization about the atomic nature of matter was an integration of many narrower generalizations. In chemistry, there was the law of multiple proportions, the law of combining gas volumes, the law of electrolysis, and key discoveries about allotropes and isomers. In physics, there was the law of heat capacities, the ideal gas law, and then the whole development of the kinetic theory of gases. The atomic theory emerged as the grand-scale, integrating explanation for all of these facts and laws; it brought chemistry and physics together for the first time.
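
As a reminder of how concrete those narrower generalizations were, here is a tiny worked example of the first one on the list, the law of multiple proportions, using rounded modern figures for the two oxides of carbon (my illustration, not part of the lecture).

```python
# Grams of oxygen that combine with 12 g of carbon in the two oxides of carbon
# (rounded modern values, used only to illustrate the law of multiple proportions).
oxygen_with_12g_carbon = {"carbon monoxide": 16.0, "carbon dioxide": 32.0}

ratio = oxygen_with_12g_carbon["carbon dioxide"] / oxygen_with_12g_carbon["carbon monoxide"]
print(f"oxygen mass ratio between the two compounds: {ratio:.0f} : 1")
# A small whole-number ratio is exactly what discrete atoms would produce;
# a continuum of matter could combine in any proportion whatsoever.
```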

Of course, the discovery process was not all smooth sailing. Initially, there were some false premises about the nature of chemical bonding and the nature of gases that slowed progress until those errors were rejected. And specialization was also a problem; chemists who didn’t know much physics, and physicists who didn’t know much chemistry, had difficulty putting all the pieces of the puzzle together.

But by about 1875 the pieces had come together and the atomic theory of matter was proven; reasonable doubts were no longer possible. Now, a skeptic would cringe if he heard me say that. For a skeptic, listening to talk about proof is like listening to fingernails on a chalkboard. But I like to talk about it, and not just to irritate skeptics—that’s only a side benefit. I do it because it’s an important practical issue. We need rational standards for evaluating theories. Without those standards, two mistakes are possible: a scientist may be too quick to accept a false theory, or he may be too slow to accept a true theory. You can find examples of both errors in history—and either way, scientific research has suffered.

Now, if we have good criteria for evaluating theories, then we should be able to apply them to past theories and get sensible results. For instance, when judged in the context of what was known at the time, Ptolemy’s theory of the solar system should fail our standards, and Kepler’s theory should pass; the phlogiston theory of combustion should fail, and the oxygen theory should pass; the oxygen theory of acidity should fail, and the hydrogen theory should pass; and so on. I think the three criteria proposed in my book do give the right results in such cases. Let’s briefly look at them.

Clearly, the first criterion needs to deal with the relationship between observation and theory. In some sense, nearly everybody would agree with this, but the common mistake is to make this criterion too weak. For instance, it’s often said that a good theory correctly predicts some observations and contradicts none. Well, that’s fine as far as it goes, but many false theories can pass that test. We need something stronger and more along the lines of what Newton suggested. Here’s the way I worded it: “Every concept and every generalization within the theory must be derived from observations by a valid method.” Let’s see a false theory pass that test.

Now, my second criterion deals with the internal structure of the theory. Here it is, as stated in my book:

“A proven theory must form an integrated whole. It cannot be a conglomeration of independent parts that are freely adjusted to fit the data. Rather, the various parts of the theory are interconnected and mutually reinforcing, so that the denial of any part leads to contradictions throughout the whole. A theory must have this characteristic in order to be necessitated by the evidence. Otherwise, one could not reach a conclusive evaluation about the relationship between the evidence and the theory; at best, one could evaluate only the relationship between the evidence and parts of the theory. On the other hand, when a theory is an integrated whole, then the evidence for any part is evidence for the whole.”

Maybe this requirement seems ambitious and difficult to satisfy. But, in fact, it’s been satisfied by many theories, including Newtonian mechanics, electromagnetism, atomic theory, the geological theory of plate tectonics, and the biological theory of evolution. You might question whether contemporary theories in particle physics and cosmology can pass this test—but I think that’s a useful question to ask.

My third and last criterion deals with the range of evidence for a theory. A common mistake in defining concepts is to make the definition too broad or too narrow, and a similar issue applies to scientific theories. Consider an example. In the early 19th century, chemists were investigating the relationship between electricity and chemical bonding. Some of these chemists described what we would call “ionic bonding,” but they proposed it as a general theory of chemical bonding. In essence, they studied acids and salts, and then over-generalized. In order to avoid this kind of error, we need to think about the entire range of evidence that spans all the essential categories subsumed by the generalization. Here’s the way I formulate the “range” criterion: “The scope of a proven theory must be determined by the data from which it is induced; that is, the theory must be no broader or narrower than required to integrate the data.”

Okay, I want to give you time for questions, so I’ll wrap up here. I just mentioned the error of over-generalization, so, in conclusion, let’s return to Feynman’s claim that philosophy is as useless to scientists as ornithology is to birds. This is true of some philosophies—Platonism, positivism, and postmodern skepticism, to name three. But it’s a big mistake to equate philosophy with those viewpoints; it’s worse than equating physical science with the Greek theory of elements, Ptolemaic astronomy, and the cold-fusion fiasco. Anyone who disrespected science in that way would deserve to be tarred and feathered, which would be illegal but fun.

Science depends on philosophy, especially on fundamental ideas about how we form concepts and how we induce generalizations. My goal is to offer a fact-based philosophy that deserves the kind of respect that science has already earned, and my hope is that scientists find it helpful.

Thank you.
