
Karl Popper (28 July 1902 – 17 September 1994)

I. Background

Sir Karl Raimund Popper was born in Vienna on 28 July 1902. His rise from a modest background as an assistant cabinet maker and school teacher to one of the most influential theorists and leading philosophers was characteristically Austrian. Popper commanded international audiences, and conversation with him was an intellectual adventure, even if a little rough, animated by a myriad of philosophical problems. His intense desire to tear away at the veneer of falsity in pursuit of the truth led him to contribute to a field of thought encompassing (among others) political theory, quantum mechanics, logic, scientific method and evolutionary theory.

Karl Popper is generally regarded as one of the greatest philosophers of science of the 20th century. He was also a social and political philosopher of considerable stature, a self-professed critical rationalist, a dedicated opponent of all forms of scepticism, conventionalism, and relativism in science and in human affairs generally, a committed advocate and staunch defender of the Open Society, and an implacable critic of totalitarianism in all of its forms. He argued that there are no subject matters but only problems and our desire to solve them. He said that scientific theories cannot be verified but only tentatively refuted, and that the best philosophy is about profound problems, not word meanings. Isaiah Berlin rightly said that Popper produced one of the most devastating refutations of Marxism. Through his ideas Popper promoted a critical ethos, a world in which the give and take of debate is highly esteemed, in the precept that we are all infinitely ignorant, that we differ only in the little bits of knowledge that we do have, and that with some co-operative effort we may get nearer to the truth.

II. The Problem of Induction raised by David Hume

Induction, according to Born, means '. . . no observation or experiment, however extended, can give more than a finite number of repetitions'; therefore, 'the statement of a law - B depends on A - always transcends experience. Yet this kind of statement is made everywhere and all the time, and sometimes from scanty material.' In other words, the logical problem of induction arises from (1) Hume's discovery (so well expressed by Born) that it is impossible to justify a law by observation or experiment, since it 'transcends experience'; (2) the fact that science proposes and uses laws 'everywhere and all the time'. (Like Hume, Born is struck by the 'scanty material', i.e. the few observed instances upon which the law may be based.) To this we have to add (3) the principle of empiricism, which asserts that in science only observation and experiment may decide upon the acceptance or rejection of scientific statements, including laws and theories. These three principles, (1), (2), and (3), appear at first sight to clash; and this apparent clash constitutes the logical problem of induction. Popper has argued (I think successfully) that a scientific idea can never be proven true, because no matter how many observations seem to agree with it, it may still be wrong. On the other hand, a single contrary experiment can prove a theory forever false. (From http://www.jayhanson.us/page126.htm)

The Reformulation of the Problem of Induction
Science does not use induction and induction is in fact a myth. Instead, knowledge is created by conjecture and criticism. The main role of observations and experiments in science, he argued, is in attempts to criticize and refute existing theories. (Wikipedia)

The commonsense problem of induction is based on the bucket theory of the mind: roughly, the assertion that there is nothing in our mind which has not entered through our senses. But we do have expectations and we strongly believe in regularities. How can these have arisen? Answer: through repeated observations. The commonsense view takes for granted that the resulting expectations are justified. Popper has three theses:
1. There is no rationally justifiable method of induction.
2. There is no reliable method of induction.
3. Nevertheless, there is a critical method of science that is rational.

Popper distinguishes Hume's logical problem of induction (whether we are justified in reasoning from repeated instances) from Hume's psychological problem of induction (why do we have expectations in which we have great confidence?). But for Popper, there is no such thing as induction by repetition (simple enumerative induction), as is shown by the fact that it is false that "The sun will rise and set once in 24 hours" (counterexample: the midnight sun at the Earth's poles) and that "All bread nourishes" (counterexample: ergotism in a French village). However, this does not show that simple enumerative induction fails to lead to true, or approximately true, conclusions most of the time. So Popper has not even shown that simple enumerative induction is unreliable, let alone that there is no reliable method of induction. Note: Goodman presents the view that simple enumerative induction is restricted to entrenched terms, like green and blue. This is his solution to the psychological problem of induction. So we do know that simple enumerative induction must be restricted in some way if it is to be reliable. Popper rejects Hume's assumption (that if there is a reliable method of induction then it is simple enumerative induction), so he reformulates the logical problem of induction as: L1: Can an explanatory universal theory be justified by assuming the truth of observation statements? Note: Talk of explanatory theories alludes to the idea of induction as inference to the best explanation.

Hypothetico-deductivism is the view that theories are hypothesized under no constraints. They may arise from a dream or they may arise from inference of some kind. That is a question of psychology; it has nothing to do with the justification of theories, which is based solely on whether what can be deduced from the theory is true or false.

Here he agrees with the answer Hume would give: NO. To understand the basis of Popper's claim, consider a classic example of a universal theory: "All swans are white." This theory is not proven by any number of swans that have been observed to be white, because the claim applies to swans that have not been observed.

Different arguments made by Popper against induction:
a. Popper argues that the method of science is not inductive but deductive.
b. Inductive method: gathering observations and making generalizations from them.
c. There is no such thing as an untheoretical observation; science cannot proceed from pure observation to theory.
d. We cannot prove our theories; at best we can disprove them.
e. There is no clear criterion for when we have a valid induction.
f. The most rational method of pursuing knowledge is the method of conjectures and refutations, that is, trying out hypotheses or theories by subjecting them to crucial tests. This is not induction, but deduction.

Hume's skeptical argument:
1. If an inductive rule is to be justified, it must be justified by either a deductive rule or an inductive rule.
2. It cannot be justified by a deductive rule (or else the principle would not be inductive).
3. It cannot be justified by an inductive rule (that would be circular).
4. But any justification has to be either via a deductive rule or via an inductive rule.
Hence, no inductive rule can be justified.
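The logical asymmetry behind the swan example can be written out as a short first-order sketch (added here for illustration and not part of the original text; S(x) reads "x is a swan" and W(x) reads "x is white"):

\[
\bigl(S(a_1)\wedge W(a_1)\bigr)\wedge\cdots\wedge\bigl(S(a_n)\wedge W(a_n)\bigr) \;\not\vdash\; \forall x\,\bigl(S(x)\rightarrow W(x)\bigr)
\]
\[
\forall x\,\bigl(S(x)\rightarrow W(x)\bigr),\;\; S(b)\wedge\neg W(b) \;\vdash\; \bot
\]

No finite record of white swans entails the universal law, but a single black swan contradicts it outright; this deductive asymmetry is what Popper's reformulation builds on.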

III. His Proposal in Place of Induction

Popper's Deductivism
Scientists do not confirm hypotheses; they may only corroborate or decisively refute them (excerpted from The Logic of Scientific Discovery by Karl Popper). There is no justification for scientific theories. There is only falsification, and deductive logic is good enough for that. In place of justification: FALSIFICATION.

IV. Concept of Verification

Verificationism (also known as the Verifiability Criterion of Meaning or the Verification Principle) is the doctrine that a proposition is only cognitively meaningful if it can be definitively and conclusively determined to be either true or false (i.e. verifiable or falsifiable). It has been hotly disputed amongst Verificationists whether this must be possible in practice or merely in principle. Verificationism is often used to rule out as meaningless much of the traditional debate in areas of Philosophy of Religion, Metaphysics, and Ethics, because many philosophical debates turn on the truth of unverifiable sentences. It is the concept underlying much of the doctrine of Logical Positivism, and is an important idea in Epistemology, Philosophy of Science and Philosophy of Language.

The problem with Verificationism, according to some, is that some statements are universal in the sense that they make claims about a possibly infinite set of objects. Since it is not possible to verify that the statement is true for each of an infinite number of objects, it seems that verification is impossible. To counter this, Karl Popper proposed the concept of Falsificationism, whereby if no cases where the universal claim is false can be found, then the hypothesis is accepted as provisionally true. A. J. Ayer responded to the charge of unverifiability by claiming that, although almost any statement (except a tautology) is unverifiable in the strong sense, there is a weak sense of verifiability in which a proposition is verifiable if it is possible for experience to render it probable. Karl Popper asserted that a hypothesis, proposition or theory is scientific only if it is falsifiable (i.e. it can be shown false by an observation or a physical experiment) rather than verifiable, leading to the concept of Falsificationism. However, he claimed that his demand for falsifiability was not meant as a theory of meaning, but rather as a methodological norm for the sciences.

V. Concept of Falsification

Thus theories can be "refuted" or "falsified" by the well-known valid principle of inference known as modus tollens. In short, observational evidence can never prove that any general theories are true, but it can falsify them. For this reason Popper's model of justification is known as "Falsificationism." Popper's main point is the extremely elementary logical point that if one takes the business of science as deducing observational consequences from statements of laws and theories and initial conditions, no amount of particular positive observational outcomes will ever prove (or verify) the truth of universal hypotheses or laws, for all such attempted inferences commit the well-known fallacy of affirming the consequent. However, even a single negative observational consequence allows us to validly infer that the conjunction of laws and initial conditions from which it is deduced cannot all be true.

Good theories, according to Popper, must be testable, but "testable" means potentially falsifiable, refutable. Therefore, in proposing theories, the more refutable (i.e. the more "testable"), the better. Popper expresses this point by saying that "conjectures" must be "risky" or "bold." Of course it would do little to advance the growth of knowledge repeatedly to propose totally off-the-wall "risky" conjectures just to shoot them down. The occasions when science advances most are when attempts to refute risky conjectures fail, thus corroborating bold guesses, or occasions when safe "modest" conjectures relying mostly on accepted beliefs are, surprisingly, refuted, thus falsifying "established" wisdom.

Criticisms of Popperian Falsificationism
Unfortunately the Popperian falsificationist model of science runs into a big problem with what is known as "holism." This is the argument that what is in fact "tested" by observational evidence (if anything is) are not individual laws or theories, but rather large constellations of belief which include not only the one "theory" that is allegedly undergoing empirical "testing," but also a whole array of "auxiliary hypotheses" the truth of which is more or less tacitly taken for granted. According to the thesis of holism, by suitable modification or buttressing of the proper auxiliary hypotheses, any theory can always be "saved" from potential refutation. Furthermore, it is often claimed that historical research shows that scientists do frequently do precisely this. Hence it would seem that, contra Popper, theories cannot be definitively refuted any more than they can be verified or proved.
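The logic running through this section can be put schematically (a sketch added for clarity, not Popper's own notation): let T be the theory under test, A the auxiliary hypotheses, I the initial conditions, and O the deduced observational consequence.

\[
T \wedge A \wedge I \;\vdash\; O
\]
\[
\text{affirming the consequent (invalid):}\qquad O \;\not\Rightarrow\; T
\]
\[
\text{modus tollens (valid):}\qquad \neg O \;\Rightarrow\; \neg\,(T \wedge A \wedge I)
\]

Note that the valid inference only tells us that at least one conjunct is false; it does not single out T. That gap is exactly what the holist criticism above exploits, since blame can always be shifted onto A or I.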

Popper acknowledges the holist argument, but he claims that it is possible to arrange "crucial experiments" in which two rival hypotheses make incompatible predictions but are made to share all the same "auxiliary" hypotheses or "background knowledge." In such a case, since the two hypotheses share the same background beliefs, the one which makes a prediction that is falsified by observation will be definitively refuted, while the one whose prediction is consistent with observation will be corroborated. Popper holds that the history of science provides examples of just such crucial experiments. Critics of Popper will admit that while this may occasionally occur, one may reasonably expect that there will be other situations where the rival hypotheses are simply too different to share many common background assumptions and no such clear-cut crucial experiment is possible.

Because Popper admits that the observational evidence itself is never fixed and is only adopted as a "convention" for the purpose of testing theories, when faced with a disconfirming observation, it would seem that the scientist has a choice of whether to reject the theoretical hypothesis or the observational evidence. To protect a favored hypothesis from potential refutation, one might propose additional auxiliary hypotheses which have the effect of nullifying the apparently negative observational evidence. Such hypotheses are referred to as "ad hoc" hypotheses. Popper maintains that he can distinguish modifications of hypotheses which are "permissible" from those which are purely ad hoc and, according to his methodology, prohibited. Permissible moves are those which render the whole conjunction of hypotheses and auxiliary assumptions more testable; those which do not lead to new testable consequences but function only to "save" the hypothesis supposedly under test are ad hoc and outlawed by the logic of science. The epistemologist essentially prescribes methodology to the scientist in this respect. Unfortunately, the distinction over which particular moves make the collection of assumptions more or less "testable" turns out to be difficult to draw. To make matters worse, Popper claims his view describes real historical science. Yet real historical science reveals over and over again that knowledge advances by protecting hypotheses from refutation by ad hoc modifications that Popper would forbid. Thus, ironically, just like the positivists whose ahistorical formalism he so disdained, Popper also has to (despite his claims to the contrary) "reconstruct" science to fit his claim of how it ought to have been.

Other problems result from Popper's claim to have "solved Hume's problem." It seems positively perverse to try to deny that the accumulation of observational evidence ever leads to the formation of hypotheses. Yet to the question of where a hypothesis comes from (the question of the context of discovery), Popper replies: from the refutation of a prior hypothesis, not from the collection of observational evidence. Has Popper really found an account of science which eliminates induction? Hilary Putnam presents a nice argument to show why Popper has not really gotten around the problem of induction. The testing of theories is not done from a disinterested motive inspired by an interest in testing and testing alone. Testing theories is a relevant thing to do because we value well-tested theories. Why should we value well-tested theories, if not because we have found that theories which were well tested in the past have continued to bear up well in the present, and we expect they will continue to do so in the future? But to reason in this way is just to make an inductive inference. If we had no reason to believe that well-tested theories could be relied upon in the future, then the motive for all the "testing" would evaporate.

Unfortunately, Popper's response was basically to stonewall all such criticisms and to staunchly never budge. Although Popper exalted the virtues of open-mindedness and searching for evidence against one's views, in fact he held his own views unswervingly, brooking no criticism and demanding absolute allegiance to his views amongst his followers. This ironic disparity between the philosophy he promoted and his own personal response to criticism was so well known as to have become something of a cliché, but Popper did little to dispel it and he persisted in making exorbitant claims on behalf of the virtues of his own philosophy. Many subsequently famous philosophers of science, including Lakatos, Feyerabend, and Laudan, at one time or another studied philosophy of science under Popper, although they broke with the master when their views departed from the orthodox Popperian line.


VI. Concept of Demarcation

The demarcation problem is the philosophical problem of determining what types of hypotheses should be considered scientific and what types should be considered pseudoscientific or non-scientific. It also concerns itself with the ongoing struggle between science and religion, in particular the question of which elements of religious doctrine can and should be subjected to scientific scrutiny. This is one of the central topics of the philosophy of science, and it has never been fully resolved. In general, though, a hypothesis must be falsifiable, parsimonious, consistent, and reproducible to be scientific. Karl Popper, in his essay Science: Conjectures and Refutations (1963), claims that "the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability" (p. 11). While I agree with Popper that testability is a necessary criterion for a theory to be considered scientific, I do not believe that testability is a sufficient criterion. What distinguishes science from other pursuits is how this testability serves the central function of science through positive and negative feedback.

Connection between Demarcation and Induction
a. Previous philosophers tried to characterize good induction in order to say what counts as science.
b. Popper rejects induction: what we actually do is deduction, and it is this use of deduction in the attempt to falsify our theories that marks the line between science and non-science.

Criterion for Demarcation
Popper's demarcation criterion concerns the logical structure of theories. Imre Lakatos described this criterion as a rather stunning one. A theory may be scientific even if there is not a shred of evidence in its favour, and it may be pseudoscientific even if all the available evidence is in its favour. That is, the scientific or non-scientific character of a theory can be determined independently of the facts (Lakatos 1981, 117).

Instead, Lakatos (1970; 1974a; 1974b; 1981) proposed a modification of Popper's criterion that he called sophisticated (methodological) falsificationism. On this view, the demarcation criterion should not be applied to an isolated hypothesis or theory but rather to a whole research program that is characterized by a series of theories successively replacing each other. In his view, a research program is progressive if the new theories make surprising predictions that are confirmed. In contrast, a degenerating research programme is characterized by theories being fabricated only in order to accommodate known facts. Progress in science is only possible if a research program satisfies the minimum requirement that each new theory that is developed in the program has a larger empirical content than its predecessor. If a research program does not satisfy this requirement, then it is pseudoscientific.

According to Paul Thagard, a theory or discipline is pseudoscientific if it satisfies two criteria. One of these is that the theory fails to progress, and the other that the community of practitioners makes little attempt to develop the theory towards solutions of the problems, shows no concern for attempts to evaluate the theory in relation to others, and is selective in considering confirmations and disconfirmations (Thagard 1978, 228). A major difference between his approach and that of Lakatos is that Lakatos would classify a nonprogressive discipline as pseudoscientific even if its practitioners work hard to improve it and turn it into a progressive discipline.

In a somewhat similar vein, Daniel Rothbart (1990) emphasized the distinction between the standards that should be used when testing a theory and those that should be used when determining whether a theory should be tested at all. The latter, the eligibility criteria, include that the theory should encapsulate the explanatory success of its rival, and that it should yield test implications that are inconsistent with those of the rival. According to Rothbart, a theory is unscientific if it is not testworthy in this sense.

George Reisch proposed that demarcation could be based on the requirement that a scientific discipline be adequately integrated into the other sciences. The various scientific disciplines have strong interconnections that are based on methodology, theory, similarity of models, etc. Creationism, for instance, is not scientific because its basic principles and beliefs are incompatible with those that connect and unify the sciences. More generally speaking, says Reisch, an epistemic field is pseudoscientific if it cannot be incorporated into the existing network of established sciences.

Thomas Kuhn is one of many philosophers for whom Popper's view on the demarcation problem was a starting-point for developing their own ideas. Kuhn criticized Popper for characterizing the entire scientific enterprise in terms that apply only to its occasional revolutionary parts (Kuhn 1974, 802). Popper's focus on falsifications of theories led to a concentration on the rather rare instances when a whole theory is at stake. According to Kuhn, the way in which science works on such occasions cannot be used to characterize the entire scientific enterprise. Instead it is in normal science, the science that takes place between the unusual moments of scientific revolutions, that we find the characteristics by which science can be distinguished from other enterprises (Kuhn 1974, 801).

Popper's Path to his Demarcation Criterion (Curd and Cover, pages 1-10)
To understand a philosophical theory, like Popper's demarcation criterion, it is useful to see why simpler alternative proposals do not work.
Proposal 1: Science is distinguished by its empirical method. That is, science is distinguished from pseudoscience by its use of observational data in making predictions.
Objection: Astrology appeals to observation, but is not a science.
Proposal 2: Scientific theories, like Einstein's, are more precise in their predictions than Adler's psychology, or astrology.
Objection: While it is true that pseudosciences do often protect themselves from refutation by making vague or ambiguous predictions, that is not always the case. The 'predictions' of example (b) are precise enough for the purpose, and Einstein's prediction was not exact; it had to allow for many errors of observation.
Proposal 3: Science is explanatory, whereas pseudoscience is not.
Objection: If you buy into the auxiliary assumptions in Adler's psychology, then the theory explains the phenomena perfectly well. It is true that we have little reason to believe that the explanation is correct, but that is a different issue.
Proposal 4: Science is distinguished from pseudoscience by its verifications, or confirmations.
Objection: Popper's objection is that "The world was full of verifications of those theories." I have remarked that this does not ring true in examples (a) and (b). Nevertheless, there seems to be some force behind Popper's point in other examples. For example, Einstein could have pointed to all the verifications of Newton's theory for low velocities and claimed these as verifications for his own theory. Yet he did not. Why not? Because, says Popper, these were not risky predictions. They were not potential falsifiers of Einstein's theory.
Popper's Proposal: Every good scientific theory is a prohibition: it forbids certain things to happen. The criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.
Note: Popper also anticipates a major objection to his criterion: namely, that any scientific theory can be protected from refutation by introducing ad hoc auxiliary assumptions. His reply is that the very use of ad hoc assumptions, in reducing the falsifiability of the theory, also diminishes its scientific status. The problem with Popper's reply is that it is not always, if ever, clear in advance that ad hoc auxiliary assumptions are needed to save the theory. This is essentially Kuhn's point.

Distinction between Science and Pseudo-science Demarcations of science from pseudoscience can be made for both theoretical and practical reasons (Mahner 2007, 516). From a theoretical point of view, the demarcation issue is an illuminating perspective that contributes to the philosophy of science in the same way that the study of fallacies contributes to the study of informal logic and rational argumentation. From a practical point of view, the distinction is important for decision guidance in both private and public life. Since science is our most reliable source of knowledge in a wide variety of areas, we need to distinguish scientific knowledge from its look-alikes. Due to the high status of science in present-day society, attempts to exaggerate the scientific status of various claims, teachings, and products are common enough to make the demarcation issue pressing in many areas.

A central part of Karl Popper's project is figuring out how to draw the line between science and pseudo-science. He could have pitched this as figuring out how to draw the line between science and non-science (which seems like less a term of abuse than pseudo-science). Why set the project up this way? Partly, I think, he wanted to compare science to non-science-that-looks-a-lot-like-science (in other words, pseudo-science) so that he could work out precisely what is missing from the latter. He doesn't think we should dismiss pseudo-science as utterly useless, uninteresting, or false. It's just not science. Of course, Popper wouldn't be going to the trouble of trying to spell out what separates science from non-science if he didn't think there was something special on the science side of the line. He seems committed to the idea that scientific methodology is well-suited, perhaps uniquely so, for building reliable knowledge and for avoiding false beliefs. Indeed, under the assumption that science has this kind of power, one of the problems with pseudo-science is that it gets an unfair credibility boost by so cleverly mimicking the surface appearance of science.

The big difference Popper identifies between science and pseudo-science is a difference in attitude. While a pseudo-science is set up to look for evidence that supports its claims, Popper says, a science is set up to challenge its claims and look for evidence that might prove it false. In other words, pseudo-science seeks confirmations and science seeks falsifications. There is a corresponding difference that Popper sees in the form of the claims made by sciences and pseudo-sciences: scientific claims are falsifiable (that is, they are claims where you could set out what observable outcomes would be impossible if the claim were true), while pseudo-scientific claims fit with any imaginable set of observable outcomes. What this means is that you could do a test that shows a scientific claim to be false, but no conceivable test could show a pseudo-scientific claim to be false. Sciences are testable, pseudo-sciences are not.

So, Popper has this picture of the scientific attitude that involves taking risks: making bold claims, then gathering all the evidence you can think of that might knock them down. If they stand up to your attempts to falsify them, the claims are still in play. But you keep that hard-headed attitude and keep your eyes open for further evidence that could falsify the claims. If you decide not to watch for such evidence, deciding, in effect, that because the claim hasn't been falsified in however many attempts you've made to falsify it, it must be true, you've crossed the line to pseudo-science. This sets up the central asymmetry in Popper's picture of what we can know. We can find evidence to establish with certainty that a claim is false. However, we can never (owing to the problem of induction) find evidence to establish with certainty that a claim is true. So the scientist realizes that her best hypotheses and theories are always tentative (some piece of future evidence could conceivably show them false), while the pseudo-scientist is as sure as can be that her theories have been proven true. (Of course, they haven't been: the problem of induction again.)

So, why does this difference between science and pseudo-science matter? As Popper notes, the difference is not a matter of scientific theories always being true and pseudo-scientific theories always being false. The important difference seems to be in which approach gives better logical justification for knowledge claims. A pseudo-science may make you feel like you've got a good picture of how the world works, but you could well be wrong about it. If a scientific picture of the world is wrong, that hard-headed scientific attitude means the chances are good that we'll find out we're wrong: one of those tests of our hypotheses will turn up the data that falsifies them, and we will switch to a different picture.

Science, as a working method, employs basic principles such as objectivity and accuracy to establish a finding. It often also uses certain admitted assumptions about reality, assumptions that must eventually support themselves and be proven, or the resulting finding fails verification. Pseudoscience, however, uses invented modes of analysis which it pretends or professes meet the requirements of scientific method, but which in fact violate its essential attributes. Many obvious examples of pseudoscience are easy to identify, but the more subtle, and therefore more insidious and convincing, cases require better definitions of the attributes involved.


Pseudoscientific concepts
Examples of pseudoscience concepts, proposed as scientific when they are not scientific, are creation science, intelligent design, orgone energy, cold fusion, N-rays, ch'i, L. Ron Hubbard's engram theory, enneagram, iridology, the Myers-Briggs Type Indicator, New Age psychotherapies (e.g., rebirthing therapy), reflexology, applied kinesiology, astrology, biorhythms, facilitated communication, paranormal plant perception, extrasensory perception (ESP), Velikovsky's ideas, ancient astronauts, craniometry, graphology, metoposcopy, personology, physiognomy, acupuncture, alchemy, cellular memory, Lysenkoism, naturopathy, reiki, Rolfing, therapeutic touch, ayurvedic medicine, and homeopathy.

Robert T. Carroll stated in part: "Pseudoscientists claim to base their theories on empirical evidence, and they may even use some scientific methods, though often their understanding of a controlled experiment is inadequate. Many pseudoscientists relish being able to point out the consistency of their ideas with known facts or with predicted consequences, but they do not recognize that such consistency is not proof of anything. It is a necessary condition but not a sufficient condition that a good scientific theory be consistent with the facts."

In 2006, the US National Science Foundation (NSF) issued an executive summary of a paper on science and engineering which briefly discussed the prevalence of pseudoscience in modern times. It said, "belief in pseudoscience is widespread" and, referencing a Gallup Poll, stated that belief in the 10 commonly believed examples of paranormal phenomena listed in the poll were "pseudoscientific beliefs". The items were: "extrasensory perception (ESP), that houses can be haunted, ghosts, telepathy, clairvoyance, astrology, that people can communicate mentally with someone who has died, witches, reincarnation, and channelling." Such beliefs in pseudoscience reflect a lack of knowledge of how science works. The scientific community may aim to communicate information about science out of concern for the public's susceptibility to unproven claims.

Reasons/Indicators in Classifying Disciplines as Pseudo-science
a. Use of vague, exaggerated or untestable claims
1. Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.
2. Failure to make use of operational definitions (i.e. publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can independently measure or test them).
3. Failure to make reasonable use of the principle of parsimony, i.e. failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible.
4. Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science.
5. Lack of boundary conditions: most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.
6. Lack of effective controls, such as placebo and double-blind, in experimental design.
7. Lack of understanding of basic and established principles of physics and engineering.
b. Over-reliance on confirmation rather than refutation
1. Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment.
2. Assertion of claims that a theory predicts something that it has not been shown to predict. Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience".
3. Assertion that claims which have not been proven false must be true, and vice versa.
4. Over-reliance on testimonial, anecdotal evidence, or personal experience: this evidence may be useful for the context of discovery (i.e. hypothesis generation), but should not be used in the context of justification (e.g. statistical hypothesis testing).
5. Presentation of data that seems to support its claims while suppressing or refusing to consider data that conflict with its claims. This is an example of selection bias, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect.
6. Reversed burden of proof: in science, the burden of proof rests on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g. an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than the claimant.

7. Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to explain negative findings.

c. Lack of openness to testing by other experts
1. Evasion of peer review before publicizing results (called "science by press conference"): some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.
2. Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.
3. Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.

d. Absence of progress
1. Failure to progress towards additional evidence of its claims. Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.
2. Lack of self-correction: scientific research programmes make mistakes, but they tend to eliminate these errors over time. By contrast, ideas may be accused of being pseudoscientific because they have remained unaltered despite contradictory evidence. The work Scientists Confront Velikovsky (1976), Cornell University, also delves into these features in some detail, as does the work of Thomas Kuhn, e.g. The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.
3. Statistical significance of supporting experimental results does not improve over time and is usually close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows the experiments have just been repeated until a success occurs due to chance variations (see the simulation sketch after this list).

e. Personalization of issues
1. Tight social groups and authoritarian personality, suppression of dissent, and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.
2. Assertion of claims of a conspiracy on the part of the scientific community to suppress the results.
3. Attacking the motives or character of anyone who questions the claims.
f. Use of misleading language
1. Creating scientific-sounding terms to add weight to claims and persuade nonexperts to believe statements that may be false or meaningless: for example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
2. Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.
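The simulation sketch referenced in item d.3 above makes the "repeat until it succeeds by chance" pattern concrete. It is an illustrative Python sketch added here, not part of the original text; the sample size, the 5% threshold, and the one-sided z-test are arbitrary assumptions chosen for simplicity.

    # Each "experiment" tests a null effect (true mean is zero), so any
    # "significant" result is a false positive that eventually appears by
    # chance if the experiment is simply repeated often enough.
    import random
    import statistics

    def one_experiment(n=30):
        """One null experiment: n draws from a zero-mean, unit-variance
        distribution, one-sided z-test for 'mean > 0' at the 5% level."""
        samples = [random.gauss(0.0, 1.0) for _ in range(n)]
        z = statistics.mean(samples) * (n ** 0.5)  # population sd is 1 by construction
        return z > 1.645  # 1.645 is the one-sided 5% critical value

    def repetitions_until_chance_success(max_tries=10000):
        """Repeat the experiment until a spurious 'success' appears by chance."""
        for attempt in range(1, max_tries + 1):
            if one_experiment():
                return attempt
        return max_tries

    random.seed(0)
    runs = [repetitions_until_chance_success() for _ in range(200)]
    print("median repetitions before a chance 'success':", statistics.median(runs))

With a 5% false-positive rate the waiting time is roughly geometric, so a spurious success typically arrives within about a dozen repetitions; significance that hovers near the cutoff and never strengthens under continued testing is exactly this pattern.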


VII. Classification of Science Discipline

Pseudoscience and pseudoscientific are unavoidably defamatory words (Laudan 1983, 118; Dolby 1987, 204). It would be as strange for someone to proudly describe her own activities as pseudoscience as to boast that they are bad science. Since the derogatory connotation is an essential characteristic of the word pseudoscience, an attempt to extricate a value-free definition of the term would not be meaningful. An essentially value-laden term has to be defined in value-laden terms. This is often difficult, since the specification of the value component tends to be controversial. This problem is not specific to pseudoscience but follows directly from a parallel but somewhat less conspicuous problem with the concept of science. The common usage of the term science can be described as partly descriptive, partly normative. When an activity is recognized as science, this usually involves an acknowledgement that it has a positive role in our strivings for knowledge. On the other hand, the concept of science has been formed through a historical process, and many contingencies influence what we call and do not call science.

Against this background, in order not to be unduly complex, a definition of science has to go in either of two ways. It can focus on the descriptive contents, and specify how the term is actually used. Alternatively, it can focus on the normative element, and clarify the more fundamental meaning of the term. The latter approach has been the choice of most philosophers writing on the subject, and will be in focus here. It involves, of necessity, some degree of idealization in relation to common usage of the term science. The English word science is primarily used for the natural sciences and other fields of research that are considered to be similar to them. Hence, political economy and sociology are counted as sciences, whereas studies of literature and history are usually not. The corresponding German word, Wissenschaft, has a much broader meaning and includes all the academic specialties, including the humanities. The German term has the advantage of more adequately delimiting the type of systematic knowledge that is at stake in the conflict between science and pseudoscience. The misrepresentations of history presented by Holocaust deniers and other pseudo-historians are very similar in nature to the misrepresentations of natural science promoted by creationists and homeopaths.

More importantly, the natural and social sciences and the humanities are all part of the same human endeavour, namely systematic and critical investigations aimed at acquiring the best possible understanding of the workings of nature, man, and human society. The disciplines that form this community of knowledge disciplines are increasingly interdependent (Hansson 2007). Since the second half of the 20th century, integrative disciplines such as astrophysics, evolutionary biology, biochemistry, ecology, quantum chemistry, the neurosciences, and game theory have developed at dramatic speed and contributed to tying together previously unconnected disciplines. These increased interconnections have also linked the sciences and the humanities closer to each other, as can be seen for instance from how historical knowledge relies increasingly on advanced scientific analysis of archaeological findings. The conflict between science and pseudoscience is best understood with this extended sense of science.
On one side of the conflict we find the community of knowledge disciplines that includes the natural and social sciences and the humanities. On the other side we find a wide variety of movements and doctrines, such as creationism, astrology, homeopathy, and Holocaust denialism that are in conflict with results and methods that are generally accepted in the community of knowledge disciplines. Another way to express this is that the demarcation problem has a deeper concern than that of demarcating the selection of human activities that we have for various reasons chosen to call sciences. The ultimate issue is how to determine which beliefs are epistemically warranted (Fuller 1985, 331).

Examples of Science and Pseudoscience
The key to understanding Popper's demarcation criterion is to compare two examples. The first, Popper thinks, is typical of science, while the second is typical of pseudoscience.

Example (a): Einstein's prediction of the bending of starlight.
For over 200 years prior to Einstein, Newtonian physics had enjoyed a period of unprecedented success in science. Many scientists thought that Newton's theory was the end of science, and many philosophers not only believed that Newton's theory was true, they thought that it was necessarily true. They sought to explain why Newton's theory had to be true. All that began to change with Planck's 1900 introduction of the idea that energy comes in small discrete packages (the quantum hypothesis) and Einstein's discovery of the special theory of relativity in 1905. Einstein's special theory of relativity was a way of reconciling some inconsistencies between the wave theory of light and Newtonian mechanics. Instead of modifying the wave theory, he modified some of the fundamental assumptions used in Newtonian physics (such as the assumptions that simultaneity does not depend on a frame of reference, and that mass does not depend on velocity). However, Einstein's special theory of relativity said nothing about gravity. Einstein's general theory of relativity was his theory of gravitation, which he had published by 1916. Many scientists were impressed by the aesthetic beauty of Einstein's principles, but it was also important that the theory be tested by observation. For most everyday phenomena, in which velocities are far smaller than the speed of light, there is no detectable difference between Einstein's prediction and Newton's prediction. What was needed was a crucial experiment in which Einstein and Newton made different predictions. In 1916, there were successful tests of Einstein's special theory. But crucial tests of the general theory were harder to come by. One such case was provided by the bending of starlight by the gravity of the sun. The period from 1900 to at least 1916 was a period of revolution in physics, and Eddington's confirmation of Einstein's prediction in 1919 helped to complete the change in physics.

"The idea that light should be deflected by passing close to a massive body had been suggested by the British astronomer and geologist John Michell in the 18th century. However, Einstein's general relativity theory predicted twice as much deflection as Newtonian physics. Quick confirmation of Einstein's result came from measuring the direction of a star close to the Sun during an expedition led by the British astronomer Sir Arthur Stanley Eddington to observe the solar eclipse of 1919. Optical determinations of the change of direction of a star are subject to many systematic errors, and far better confirmation of Einstein's general relativity theory has been obtained from measurements of a closely related effect-namely, the increase of the time taken by electromagnetic radiation along a path close to a massive body." (Encyclopedia Britannica)

"The theories involved here were Einstein's general theory of relativity and the Newtonian particle theory of light, which predicted only half the relativistic effect. The conclusion of this exceedingly difficult measurement--that Einstein's theory was followed within the experimental limits of error, which amounted to +/-30 percent--was the signal for worldwide feting of Einstein.
If his theory had not appealed aesthetically to those able to appreciate it and if there had been any passionate adherents to the Newtonian view, the scope for error could well have been made the excuse for a long drawn-out struggle, especially since several repetitions at subsequent eclipses did little to improve the accuracy. In this case, then, the desire to believe was easily satisfied. It is gratifying to note that recent advances in radio astronomy have allowed much greater accuracy to be achieved, and Einstein's prediction is now verified within about 1 percent." (Encyclopedia Britannica)

"According to this theory the deflection, which causes the image of a star to appear slightly too far from the Sun's image, amounts to 1.75 seconds of arc at the limb of the Sun and decreases in proportion to the apparent distance from the centre of the solar disk of the star whose light is deflected. This is twice the amount given by the older Newtonian dynamics if light is assumed to have inertial properties. If light does not have such properties, as is generally accepted now, the Newtonian deflection is zero." (Encyclopedia Britannica)

Example (b): Adler's 'individual psychology'.
Compare the following two (hypothetical) cases of human behavior. E1: A man pushes a child into the water with the intention of drowning it. E2: A man sacrifices his life in an attempt to save the child. Popper claims that Adler's 'individual psychology' can explain both of these behaviors with equal ease. Let T be Adler's theory, and let A be the auxiliary assumption that the man suffered feelings of inferiority (producing the need to prove to himself that he dared to commit some crime); a parallel auxiliary assumption explains the heroic rescue equally well (the need to prove to himself that he dared to save the child). In example (a), Einstein's theory predicts the observational evidence, while in example (b), the theory merely accommodates the evidence. Popper describes the difference by claiming that Einstein's theory is falsifiable, whereas Adler's theory is not.

Remark: Popper also claims that the problem with Adler's theory is that it is too easily verified: "the world was full of verifications of the theory." Adler may have seen it like that, but was he right? My feeling is that mere accommodations do not count as verifications at all. Hence, I think that a verificationist could account for the difference between these two examples as well as, if not better than, a falsificationist.
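For reference, the deflection figures quoted in example (a) correspond to the standard formulas for light grazing the solar limb (added here for illustration, not part of the original text; M_sun is the solar mass and R_sun the solar radius):

\[
\delta_{\mathrm{GR}} = \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx 1.75'' , \qquad
\delta_{\mathrm{Newton}} = \frac{2GM_{\odot}}{c^{2}R_{\odot}} \approx 0.87''
\]

The general-relativistic deflection is exactly twice the value obtained by treating light as Newtonian particles, which is why the 1919 eclipse measurement could serve as a crucial test between the two predictions.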

VIII. Concept of Theory

Karl Popper has been acknowledged as the founder of various views in the fields of biology and ecology for his principal ideas regarding theory and confirmation. Many scientists consider Popper a hero of philosophy. His views have contributed to new ideals and theories, in addition to renovating and bringing a new perspective to others. Popper's view of how theories should be tested, and his claim that confirmation does not exist, have brought him great attention. However, not all have approved of or even tried to comprehend Popper's ideas. Many scientists have criticized his work and do not find a place in science for his ideas, such as falsification (Godfrey-Smith 57). Although his ideas have been widely accepted in many areas by many scientists, Popper's thesis that confirmation is a myth brings many complications into science and suggests that science is a difficult pursuit.

Before one can understand Popper's ideas of confirmation and falsification, one first has to look at what started it all: the problem of demarcation. Popper diligently studied the methods and theories of various scientists. From there he came to the conclusion that, "It was precisely this fact - that they always fitted, that they were always confirmed - which in the eye of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness" (BR 297). Popper notices that these scientists' strongest point in the whole process is their ability to confirm their own theory. He continues to tell scientists, "If a theory takes no risks at all, because it is compatible with every possible observation, then it is not scientific." Throughout his explanation, Popper emphasizes the necessity of having a risky theory. So far, Popper insists that scientists should challenge themselves with risky theories, as well as stating that confirmation is not a correct notion.

In the Philosophy of Science by Yuri Balashov and Alex Rosenberg, Popper is reported to say that previous scientists, instead of accepting their theories to be false, would rather reinterpret the results of the experiment and try to assimilate them to the theory. From there, Popper brought the idea of confirmation being a myth, since, he believed, "It is never possible to confirm a theory, not even slightly, and no matter how many observations the theory predicts successfully... Popper placed great emphasis on the idea that we can never be completely sure that a theory is true" (GS 59). Popper views this kind of science as irrational because of the methods used to verify theories. In addition, he shares his thoughts on identifying the difference between unscientific theories and those that are scientific. That is where Popper provides the problem of demarcation. He stated that scientists should draw a line to determine what makes a theory or a claim scientific.

Seeing that Popper rejects the method of confirmation, he gives scientists an alternative, one that focuses on his beliefs about falsification. In Theory and Reality, Peter Godfrey-Smith states Popper's method: "So Popper had a fairly simple view of how testing in science proceeds. We take a theory that someone has proposed, and we deduce an observational prediction from it. We then check to see if the prediction comes out as the theory says it will. If the prediction fails, then we have refuted, or falsified, the theory. If the prediction does come out as predicted, then all we should say is that we have not yet falsified the theory. For Popper, we cannot conclude that the theory is true, or that it is probably true, or even that it is more likely to be true than it was before the test. The theory might be true, but we can't say more than that" (59-60).

Many scientists thought that Popper's methods were the complete opposite of rational, and that rather than "confirming" the theory in question, he left the theory unresolved, hence complicating the results and the process even more. However, theories that have withstood years or even decades as confirmed can turn out to be in error in decades to come, since technology and new ways of verification are always emerging. Thus, Popper's test of falsification can provide, over and over again, the reassurance that those theories have not yet been falsified and that the results match the predictions. One thing was clear: Popper did not want "To react to the falsification of one conjecture by cooking up a new conjecture that is designed to just avoid the problems revealed by earlier testing, and which goes no further" (GS 61). This would simply overrule everything the previous claim was falsified for, thus making the claim exhausted over time as well as unscientific.


Nevertheless, Popper's take on confirmation was widely rejected, and several thinkers raised objections to his beliefs regarding confirmation. To explain in detail one of the several objections against Popper's view, Peter Godfrey-Smith presents an example. Godfrey-Smith supposes that a bridge is to be built and a well-supported theory needs to be used for the design. He states: "Popper can say why we should prefer to use a theory that has not been falsified over a theory that has been falsified. Theories that have been falsified have been shown to be false (here again I ignore the problem discussed in the previous section). But suppose we have to choose between (1) a theory that has been tested many times and has passed every test, and (2) a brand new theory that has just been conjectured and has never been tested. Neither theory has been falsified. We would ordinarily think that the rational thing to do is to choose the theory that has survived testing."

In this matter, Popper does somewhat agree with those objecting, conceding that his view runs into difficulty when applied to this situation. When analyzing the previous problem, one can see that it looks irrational to choose (2) over (1), since the theory that has proven successful in every test will generate a more positive response. What Popper is trying to demonstrate with his seemingly bizarre choice is that no matter how well proven a theory is, there is no guarantee that it will, once again, be successful if chosen. Even though scientists are taking a huge risk if they choose the theory which has never been tested, how can science evolve if scientists do not take such risks? Throughout the history of science, scientists have constantly taken risks, offering drastically and completely different points of view. Hasn't that been the process all these years?

Popper takes things to a different level when he brings out the concept of corroboration. He proclaims that when faced with an issue like the one previously mentioned, one should select a corroborated theory over those that have not been corroborated. The reason is that the theories that have been tested and have survived through time are corroborated. Peter Godfrey-Smith presents the idea as follows: "An academic transcript says what you have done. It measures your past performances, but it does not contain explicit predictions about what you will do in the future. A letter of recommendation usually says something about what you have done, and it also makes claims about how you are likely to do in the future. Confirmation, as understood by the logical empiricists, is something like a letter of recommendation for a scientific theory. Corroboration, for Popper, is only like an academic transcript. And Popper thought that no good reasons could be given for believing that past performance is a reliable guide to the future."

What one needs to understand from corroboration is that even if a theory has not been tested at all, we shouldn't regard it as a theory that will not bring results. In order for scientists to prove whether a non-tested theory is correct or proper for the situation, one needs to give it an opportunity to prove itself. Otherwise, when will we know if that particular theory was better than the one that has been tested numerous times? However, we continue to choose the one which has been tested, since this is the one that scientists most trust and which has had no problems. Popper's claims, when applied to real-life situations, are not reasonable, since they cannot provide evidence or any kind of connection between the various issues. Thus, a pursuit for further explanation or analysis can be debatable, since many of the claims and beliefs are completely useless when put to the test. Popper's pursuit to find a line of distinction between what is considered science and what is not brought various problems. It is irrational to speculate about or refer to falsification as a divider of theories. Falsification does not provide scientists with assurance that a theory will ever be completely tested. It will only continue to leave us in the same place where we started. So why waste our time trying to falsify theories?

In its basic form, falsifiability is the belief that for any hypothesis to have credence, it must be inherently disprovable before it can become accepted as a scientific hypothesis or theory. For example, if a scientist asks, "Does God exist?", then this can never be science, because it is a theory that cannot be disproved. The idea is that no theory is completely correct, but if it has not been falsified, it can be accepted as truth. For example, Newton's Theory of Gravity was accepted as truth for centuries, because objects do not randomly float away from the earth. It appeared to fit the figures obtained by experimentation and research, but was always subject to testing. However, later research showed that, at quantum levels, Newton's laws break down, and so the theory is no longer accepted as truth. This is not to say that his ideas are now useless, as the principles are still used by NASA to plot the courses of satellites and space probes. Popper saw falsifiability as a black and white definition: if a theory is falsifiable, it is scientific, and if not, then it is unscientific. Whilst most pure sciences do adhere to this strict definition, pseudo-sciences may fall somewhere between the two extremes.

IX.

Concept of Science

Popper saw Einstein's theory of relativity as perfectly exemplifying these three criteria for genuine science. General relativity led to the surprising prediction that light would be bent by the gravitational field of the Sun. It was a great triumph when Arthur Eddington's expeditions verified that light was bent by the amount that Einstein had predicted. For most observers, what mattered was the fit between Einstein's predictions and the evidence, but not for Popper. What mattered to him was that the theory had survived a severe test. The mark of a genuinely scientific theory is falsifiability. Science should make bold conjectures and should try to falsify these conjectures. Kasser goes on to say that though Popper's theory is admirably straightforward, it nevertheless requires some clarification. First, Popper generally writes as if falsifiability and, hence, scientific standing come in degrees. This suggests, however, that pseudosciences differ more in degree than in kind from genuine sciences. Second, Popper's theory is both descriptive and normative, i.e. he claims both that this is what scientists do and that it is what they should do. Third, Popper is not offering a definition of science but only a necessary condition. He is not saying that all falsifiable statements are scientific but only that all scientific statements are falsifiable. Falsifiability is a pretty weak condition. Fourth and last, to call something unscientific is not to call it scientifically worthless.

Conjectures and Refutations. What makes Conjectures and Refutations such an enduring book is that Popper goes on to apply this bold theory of the growth of knowledge to a fascinating range of important problems, including the role of tradition, the origin of the scientific method, the demarcation between science and metaphysics, the body-mind problem, the way we use language, how we understand history, and the dangers of public opinion. Throughout the book, Popper stresses the importance of our ability to learn from our mistakes.

Growth of knowledge. A term coined by Karl Popper in his famous work The Logic of Scientific Discovery to denote what he regarded as the main problem of methodology and the philosophy of science, i.e. to explain and promote the further growth of scientific knowledge. To this purpose, Popper advocated his theory of falsifiability, testability and testing. [The aim of science is] to explain what has so far been taken to be an explicans, such as a law of nature. The task of empirical science constantly renews itself. We may go on forever, proceeding to explanations of a higher and higher universality.


X.

Concept of Corroboration

Popper repudiates induction, rejects the view that it is the characteristic method of scientific investigation and inference, and substitutes falsifiability in its place. It is easy, he argues, to obtain evidence in favour of virtually any theory, and he consequently holds that such corroboration, as he terms it, should count scientifically only if it is the positive result of a genuinely risky prediction, which might conceivably have been false. For Popper, a theory is scientific only if it is refutable by a conceivable event. Every genuine test of a scientific theory, then, is logically an attempt to refute or to falsify it, and one genuine counter-instance falsifies the whole theory. In a critical sense, Popper's theory of demarcation is based upon his perception of the logical asymmetry which holds between verification and falsification: it is logically impossible to conclusively verify a universal proposition by reference to experience (as Hume saw clearly), but a single counter-instance conclusively falsifies the corresponding universal law. In a word, an exception, far from proving a rule, conclusively refutes it. Every genuine scientific theory, then, in Popper's view, is prohibitive, in the sense that it forbids, by implication, particular events or occurrences. As such it can be tested and falsified, but never logically verified. Thus Popper stresses that it should not be inferred from the fact that a theory has withstood the most rigorous testing, for however long a period of time, that it has been verified; rather we should recognise that such a theory has received a high measure of corroboration, and may be provisionally retained as the best available theory until it is finally falsified (if indeed it is ever falsified), and/or is superseded by a better theory. The concept of corroboration was introduced by the philosopher of science Karl Popper in 1934 in The Logic of Scientific Discovery (Logik der Forschung). He introduced it as a neutral term to express the degree to which a hypothesis has withstood severe tests and has thereby proved itself.

In The Logic of Scientific Discovery, Karl Popper states that a theory is supported only for as long as it passes tests. The appraisal which asserts corroboration (the corroborative appraisal) establishes certain basic relations, namely compatibility and incompatibility, and incompatibility is interpreted as falsification of the theory. A theory (scientific or otherwise) is considered refuted if a prediction derived from it, under certain initial conditions, has been shown to be false. This conception grew out of Popper's reaction against what he called inductive logic, which, according to him, was developed so that statements could be assigned not only the two values "true" and "false" but further degrees of probability.

What Popper criticized in inductive logic is the proposal that scientific statements could be accepted as such following an inductive assessment which would establish decisively either their truth or their falsity, or at least a certain degree of mathematical probability. Popper argues, however, that this cannot be done: the general statements of science cannot be verified with certainty; nor can they be refuted decisively, since ad hoc stratagems can always, in his view, save a theory from refutation; nor can they be accepted on the basis of a mathematical probability, which, Popper writes, is always equal to zero against the infinity of cases not yet observed. Popper therefore proposes that "instead of discussing the probability of a hypothesis we should try to assess what tests, what trials, it has withstood; that is to say, we should try to assess how far it has been able to prove its fitness to survive by standing up to tests. In short, we should try to estimate how far it has been corroborated".
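One standard way of spelling out the "probability always equal to zero" claim is the following gloss (my reconstruction, not Popper's own derivation, and it assumes instances that are probabilistically independent with a conforming probability bounded below 1):

% If a strictly universal law must hold in each of infinitely many independent
% instances, and each instance conforms with probability q < 1, then
\[
  P(\text{law}) \;\le\; \lim_{n \to \infty} q^{\,n} \;=\; 0 .
\]
% On this reading, no finite body of favourable evidence can raise the logical
% probability of a universal statement above zero, which is why Popper asks us
% to assess corroboration rather than probability.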

In Popper's philosophy of science there is no definable measure of the extent to which evidence confirms a hypothesis. Instead, hypotheses face the tribunal of experience by surviving efforts to falsify them. The degree of corroboration of a hypothesis by evidence is then a function of the stringency of the test the evidence provides, and hence a measure of the success of the hypothesis in surviving it. Critics have complained that corroboration in this sense is an empty notion, since it provides no reason to trust the hypothesis on any future occasion.


XI.

Concept of Verisimilitude

Popper's verisimilitude is the excess of truth content over falsity content. It is shown that his measures of truth and falsity content are at variance with his respective concepts. It is further shown that both his actual measure of verisimilitude and measures based on measures of truth and falsity content consistent with his definition of the concepts have undesirable properties. Moreover, any measure of verisimilitude based solely on content and truth value does not capture the notion of closeness to the truth. A new concept of verisimilitude is proposed, based on a metric in the space of state descriptions.

The Problem of Progress. Popper (1963) noted that there is no general agreement on the answers to two very basic questions: (A) Can we specify what scientific progress consists of? (B) Can we show that science has actually made progress? One quick answer to (A) is that (1) science aims at true theories, and that progress consists in the fulfilment of this aim. In answer to (B), we should add: (2) science has made progress in meeting this aim. The problem arises when we add a third plausible statement to the first two: (3) scientific theories have all been false. In the history of planetary astronomy, for example, Ptolemy's geocentric theory is false, Copernicus's version of the heliocentric theory is false, Kepler's laws are false, and Newtonian gravitational theory is false. It would be naive to suppose that Einstein's general theory of relativity is true. This conflict is "the problem of progress."

The Problem of Verisimilitude. The famous problem of verisimilitude flows from this (Musgrave, unpublished): Realists . . . seem forced to give up either their belief in progress or their belief in the falsehood of all extant scientific theory. I say "seem forced" because Popper is a realist who wants to give up neither of them. Popper has the radical idea that the conflict between (1), (2), and (3) is only an apparent one, that progress with respect to truth is possible through a succession of falsehoods, because one false theory can be closer to the truth than another. In this way Popper discovered the (two-fold) problem of verisimilitude: (A*) Can we explain how one theory can be closer to the truth, or have greater verisimilitude, than another? (B*) Can we show that scientific change has sometimes led to theories which are closer to the truth than their predecessors? Note: closeness to the truth is not the same as the probability of truth. A simple example shows this: A: The time on this stopped watch is correct to within one minute. B: The time on my watch (which is 2 minutes fast) is accurate to within one minute. Both hypotheses are false, but B is closer to the truth than A. Yet A is more probable than B, because the probability of A being true, though small, is non-zero, whereas the probability of my watch being accurate to within one minute, given what we know about my watch, is zero.
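To make the watch example concrete, the arithmetic can be spelled out as follows (the numbers are simply the natural reading of the example, not a quotation from Popper or Musgrave):

% A: a stopped watch is correct to within one minute.  If the moment of reading
% is taken at random over a day, the stopped watch is within one minute of the
% true time during a two-minute window out of 1440 minutes, so
\[
  P(A) \;=\; \tfrac{2}{1440} \;\approx\; 0.0014 \;>\; 0,
\]
% whereas a watch known to run exactly two minutes fast is never within one
% minute of the true time, so P(B) = 0.  Yet B's error is always exactly two
% minutes, while the stopped watch is typically hours off; B is therefore
% closer to the truth even though it is the less probable hypothesis.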

Popper's Definition of Verisimilitude. So, how should we define verisimilitude? Popper (1963) defined it as follows: DEFINITION: Theory A is closer to the truth than theory B if and only if (i) all the true consequences of B are true consequences of A, (ii) all the false consequences of A are consequences of B, and (iii) either some true consequences of A are not consequences of B or some false consequences of B are not consequences of A. Note: Popper's definition allows that there are false theories A and B such that neither is closer to the truth than the other. A fatal flaw in this definition was detected independently by Tichý (1974) and Miller (1974). They showed that, according to Popper's definition, for any two false theories A and B, neither is closer to the truth than the other. This is a fatal flaw because the philosophical motivation behind Popper's definition was to solve the problem of progress by showing that it is possible for some false theories to be closer to the truth than other false theories.
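The prose definition can be restated in set notation (a standard rendering, not Popper's own symbolism): write Cn(X) for the set of logical consequences of a theory X and T for the set of all true statements, so that the truth content of X is Cn(X) ∩ T and its falsity content is Cn(X) \ T. Then:

% Popper's qualitative definition of "A is closer to the truth than B"
\[
  \bigl(Cn(B) \cap T\bigr) \subseteq \bigl(Cn(A) \cap T\bigr)
  \quad\text{and}\quad
  \bigl(Cn(A) \setminus T\bigr) \subseteq \bigl(Cn(B) \setminus T\bigr),
\]
% with at least one of the two inclusions being proper.  The Tichý and Miller
% result is that when A and B are both false, these conditions can never be
% jointly satisfied with a proper inclusion, so no false theory comes out
% closer to the truth than any other false theory.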

XII.

Concept of Truth

Since we can never know anything for sure, it is simply not worth searching for certainty; but it is well worth searching for truth; and we do this chiefly by searching for mistakes, so that we can correct them

Popper was initially uneasy with the concept of truth, and in his earliest writings he avoided asserting that a theory which is corroborated is true, for clearly if every theory is an open-ended hypothesis, as he maintains, then ipso facto it has to be at least potentially false. For this reason Popper restricted himself to the contention that a theory which is falsified is false and is known to be such, and that a theory which replaces a falsified theory (because it has a higher empirical content than the latter, and explains what has falsified it) is a better theory than its predecessor. However, he came to accept Tarski's reformulation of the correspondence theory of truth, and in Conjectures and Refutations (1963) he integrated the concepts of truth and content to frame the metalogical concept of truthlikeness or verisimilitude. A good scientific theory, Popper thus argued, has a higher level of verisimilitude than its rivals, and he explicated this concept by reference to the logical consequences of theories. A theory's content is the totality of its logical consequences, which can be divided into two classes: there is the truth-content of a theory, which is the class of true propositions which may be derived from it, on the one hand, and the falsity-content of a theory, on the other hand, which is the class of the theory's false consequences (this latter class may of course be empty, and in the case of a theory which is true it is necessarily empty). Popper offered two methods of comparing theories in terms of verisimilitude: the qualitative and quantitative definitions. On the qualitative account, Popper asserted: Assuming that the truth-content and the falsity-content of two theories t1 and t2 are comparable, we can say that t2 is more closely similar to the truth, or corresponds better to the facts, than t1, if and only if either: (a) the truth-content but not the falsity-content of t2 exceeds that of t1, or (b) the falsity-content of t1, but not its truth-content, exceeds that of t2. (Conjectures and Refutations, 233). Here, verisimilitude is defined in terms of subclass relationships: t2 has a higher level of verisimilitude than t1 if and only if their truth- and falsity-contents are comparable through subclass relationships, and either (a) t2's truth-content includes t1's and t2's falsity-content, if it exists, is included in, or is the same as, t1's, or (b) t2's truth-content includes or is the same as t1's and t2's falsity-content, if it exists, is included in t1's. On the quantitative account, verisimilitude is defined by assigning quantities to contents, where the index of the content of a given theory is its logical improbability (given again that content and probability vary inversely). Formally, then, Popper defines the quantitative verisimilitude which a statement a possesses by means of a formula:

Vs(a) = CtT(a) − CtF(a), where Vs(a) represents the verisimilitude of a, CtT(a) is a measure of the truth-content of a, and CtF(a) is a measure of its falsity-content. The utilization of either method of computing verisimilitude shows, Popper held, that even if a theory t2 with a higher content than a rival theory t1 is subsequently falsified, it can still legitimately be regarded as a better theory than t1, where "better" is now understood to mean that t2 is closer to the truth than t1. Thus scientific progress involves, on this view, the abandonment of partially true, but falsified, theories for theories with a higher level of verisimilitude, i.e., which approach more closely to the truth. In this way, verisimilitude allowed Popper to mitigate what many saw as the pessimism of an anti-inductivist philosophy of science which held that most, if not all, scientific theories are false, and that a true theory, even if discovered, could not be known to be such. With the introduction of the new concept, Popper was able to represent this as an essentially optimistic position, in terms of which we can legitimately be said to have reason to believe that science makes progress towards the truth through the falsification and corroboration of theories. Scientific progress, in other words, could now be represented as progress towards the truth, and experimental corroboration could be seen as an indicator of verisimilitude. However, in the 1970s a series of papers published by researchers such as Miller, Tichý, and Grünbaum in particular revealed fundamental defects in Popper's formal definitions of verisimilitude. The significance of this work lay in the fact that verisimilitude matters in Popper's system chiefly because of its application to theories which are known to be false. In this connection, Popper had written: Ultimately, the idea of verisimilitude is most important in cases where we know that we have to work with theories which are at best approximations, that is to say, theories of which we know that they cannot be true. (This is often the case in the social sciences.) In these cases we can still speak of better or worse approximations to the truth (and we therefore do not need to interpret these cases in an instrumentalist sense). (Conjectures and Refutations, 235). For these reasons, the deficiencies discovered by the critics in Popper's formal definitions were seen by many as devastating, precisely because the most significant of them related to the levels of verisimilitude of false theories. In 1974, Miller and Tichý, working independently of each other, demonstrated that the conditions specified by Popper in his accounts of both qualitative and quantitative verisimilitude for comparing the truth- and falsity-contents of theories can be satisfied only when the theories are true. In the crucially important case of false theories, however, Popper's definitions are formally defective. For while Popper had believed that verisimilitude intersected positively with his account of corroboration, in the sense that he viewed an improbable theory which had withstood critical testing as one whose truth-content is great relative to rival theories, while its falsity-content (if it exists) would be relatively low, Miller and Tichý proved, on the contrary, that in the case of a false theory t2 which has excess content over a rival false theory t1, both the truth-content and the falsity-content of t2 will exceed those of t1.
With respect to theories which are false, therefore, Popper's conditions for comparing levels of verisimilitude, whether in quantitative or qualitative terms, can never be met. Commentators on Popper, with few exceptions, had initially attached little importance to his theory of verisimilitude. However, after the failure of Popper's definitions in 1974, some critics came to see it as central to his philosophy of science and consequently held that the whole edifice of the latter had been subverted. For his part, Popper's response was two-fold. In the first place, while acknowledging the deficiencies in his own formal account ("my main mistake was my failure to see at once that if the content of a false statement a exceeds that of a statement b, then the truth-content of a exceeds the truth-content of b, and the same holds of their falsity-contents", Objective Knowledge, 371), Popper argued that "I do think that we should not conclude from the failure of my attempts to solve the problem [of defining verisimilitude] that the problem cannot be solved" (Objective Knowledge, 372), a point of view which was to precipitate more than two decades of important technical research in this field. At another, more fundamental level, he moved the task of formally defining the concept from centre-stage in his philosophy of science by protesting that he had never intended to imply "that degrees of verisimilitude can ever be numerically determined, except in certain limiting cases" (Objective Knowledge, 59), and arguing instead that the chief value of the concept is heuristic and intuitive, so that the absence of an adequate formal definition is not an insuperable impediment to its use in the actual appraisal of theories relative to the problems in which we are interested. The thrust of the latter strategy seems to many to genuinely reflect the significance of the concept of verisimilitude in Popper's system, but it has not satisfied all of his critics.

Corroboration: to give or represent evidence of the truth of something.
Verisimilitude: the appearance of being true or real; something that only seems true (a statement that is not supported by evidence).
Truth: something factual; that which corresponds to reality.

XIII.

The Idea of Conjecture

In All Life is Problem Solving, Popper sought to explain the apparent progress of scientific knowledge, that is, how it is that our understanding of the universe seems to improve over time. This problem arises from his position that the truth content of our theories, even the best of them, cannot be verified by scientific testing, but can only be falsified. Again, in this context the word 'falsified' does not refer to something being 'fake'; rather, it means that something can be (i.e., is capable of being) shown to be false by observation or experiment. Some things simply do not lend themselves to being shown to be false, and therefore are not falsifiable. If so, then how is it that the growth of science appears to result in a growth in knowledge? In Popper's view, the advance of scientific knowledge is an evolutionary process characterized by his formula:

PS1 → TT → EE → PS2

In response to a given problem situation (PS1), a number of competing conjectures, or tentative theories (TT), are systematically subjected to the most rigorous attempts at falsification possible. This process, error elimination (EE), performs a similar function for science that natural selection performs for biological evolution. Theories that better survive the process of refutation are not more true, but rather more "fit", in other words, more applicable to the problem situation at hand (PS1). Consequently, just as a species' biological fitness does not ensure continued survival, neither does rigorous testing protect a scientific theory from refutation in the future. Yet, as it appears that the engine of biological evolution has produced, over time, adaptive traits equipped to deal with more and more complex problems of survival, likewise the evolution of theories through the scientific method may, in Popper's view, reflect a certain type of progress: toward more and more interesting problems (PS2). For Popper, it is in the interplay between the tentative theories (conjectures) and error elimination (refutation) that scientific knowledge advances toward greater and greater problems, in a process very much akin to the interplay between genetic variation and natural selection.
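Purely as an illustrative sketch (the candidate theories and observations below are invented, and this is not Popper's own formalism), the PS1 → TT → EE → PS2 cycle can be mimicked in a few lines of Python: several conjectures are proposed for a problem, each is exposed to the observations, and only those that are not falsified survive to pose the next problem.

# Toy illustration of Popper's schema PS1 -> TT -> EE -> PS2.
# The "theories" and "observations" below are invented solely for illustration.

# PS1: the problem situation -- predict y from x.
observations = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (x, y) pairs

# TT: competing tentative theories (conjectures).
tentative_theories = {
    "y = x + 1": lambda x: x + 1,
    "y = 2 * x": lambda x: 2 * x,
    "y = x ** 2": lambda x: x ** 2,
}

# EE: error elimination -- a single counter-instance falsifies a conjecture.
def falsified(theory, data):
    return any(theory(x) != y for x, y in data)

survivors = {name: theory for name, theory in tentative_theories.items()
             if not falsified(theory, observations)}

# PS2: the survivors are corroborated, not verified; the new problem is to
# devise harsher tests for them.
print("Unfalsified so far:", list(survivors))  # -> ['y = 2 * x']

Passing every test above leaves "y = 2 * x" corroborated rather than proven: a further observation could still refute it, which is precisely the asymmetry the schema is meant to capture.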

XIV.

Hypothetico-deductive method

The hypothetico-deductive model or method, first so named by William Whewell, is a proposed description of scientific method. According to it, scientific inquiry proceeds by formulating a hypothesis in a form that could conceivably be falsified by a test on observable data. A test that could and does run contrary to predictions of the hypothesis is taken as a falsification of the hypothesis. A test that could but does not run contrary to the hypothesis corroborates the theory. It is then proposed to compare the explanatory value of competing hypotheses by testing how stringently they are corroborated by their predictions.

One example of an algorithmic statement of the hypothetico-deductive method is as follows:

1. Use your experience: Consider the problem and try to make sense of it. Gather data and look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture (hypothesis): When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce predictions from the hypothesis: if you assume 2 is true, what consequences follow?
4. Test (or experiment): Look for evidence (observations) that conflict with these predictions in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This logical fallacy is called affirming the consequent.

One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth. Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2. (This is what Einstein meant when he said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong.")
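The logical point behind step 4 can be set out explicitly (a standard schema, not a quotation from the text above): a failed prediction refutes the hypothesis by modus tollens, whereas treating a successful prediction as proof commits the fallacy of affirming the consequent.

% Testing a hypothesis H by way of a deduced prediction P
\[
  \frac{H \to P \qquad \neg P}{\neg H} \quad \text{(modus tollens: valid, so a test can falsify } H\text{)}
\]
\[
  \frac{H \to P \qquad P}{H} \quad \text{(affirming the consequent: invalid, so a passed test cannot prove } H\text{)}
\]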

Additionally, as pointed out by Carl Hempel (1905–1997), this simple view of the scientific method is incomplete; a conjecture can also incorporate probabilities, e.g., "the drug is effective about 70% of the time." Tests, in this case, must be repeated to substantiate the conjecture (in particular, the probabilities). In this and other cases, we can quantify a probability for our confidence in the conjecture itself and then apply a Bayesian analysis, with each experimental result shifting the probability either up or down (a small numerical sketch of such updating is given after the list below). Bayes' Theorem shows that the probability will never reach exactly 0 or 100% (no absolute certainty in either direction), but it can still get very close to either extreme. See also confirmation holism. Qualification of corroborating evidence is sometimes raised as philosophically problematic. The raven paradox is a famous example. The hypothesis that 'all ravens are black' would appear to be corroborated by observations of only black ravens. However, 'all ravens are black' is logically equivalent to 'all non-black things are non-ravens' (this is the contrapositive of the original implication). 'This is a green tree' is an observation of a non-black thing that is a non-raven, and therefore corroborates 'all non-black things are non-ravens'. It appears to follow that the observation 'this is a green tree' is corroborating evidence for the hypothesis 'all ravens are black'. Attempted resolutions may distinguish:

Non-falsifying observations as to strong, moderate, or weak corroborations;
Investigations that do or do not provide a potentially falsifying test of the hypothesis.
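Returning to Hempel's point about probabilistic conjectures, here is a minimal sketch of the Bayesian updating mentioned above; the rival hypotheses, the prior, and the sequence of trial outcomes are invented for illustration and are not taken from the text.

# Minimal sketch of Bayesian updating for a probabilistic conjecture.
# Hypothetical rival conjectures about a drug's success rate.
hypotheses = {"effective about 70% of the time": 0.7,
              "effective about 40% of the time": 0.4}
prior = {name: 0.5 for name in hypotheses}

# Hypothetical outcomes of repeated trials: True = patient improved.
trials = [True, True, False, True, True, True, False, True]

posterior = dict(prior)
for outcome in trials:
    # Likelihood of this outcome under each hypothesis (Bernoulli model).
    likelihood = {name: (rate if outcome else 1 - rate)
                  for name, rate in hypotheses.items()}
    unnormalised = {name: posterior[name] * likelihood[name] for name in hypotheses}
    total = sum(unnormalised.values())
    posterior = {name: value / total for name, value in unnormalised.items()}

# Each result shifts the probabilities up or down, but because both hypotheses
# give every outcome a non-zero likelihood, neither probability ever reaches
# exactly 0 or 1.
print(posterior)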

Corroboration is related to the problem of induction, which arises because a general claim (a hypothesis) cannot be logically deduced from any series of specific observations. That is, any observation can be seen as corroboration of any hypothesis if the hypothesis is sufficiently restricted. The argument has also been taken to show that observations are theory-laden, and thus that it is not possible to make truly independent observations. One response is that a problem may be sufficiently narrowed (or axiomatized) as to take everything except the problem (or axiom) of interest as unproblematic for the purpose at hand. Evidence contrary to a hypothesis is itself philosophically problematic. Such evidence is called a falsification of the hypothesis. However, under the theory of confirmation holism it is always possible to save a given hypothesis from falsification, because any falsifying observation is embedded in a theoretical background, which can be modified in order to save the hypothesis. Popper acknowledged this but maintained that a critical approach respecting methodological rules that avoid such immunizing stratagems is conducive to the progress of science. Despite the philosophical questions raised, the hypothetico-deductive model remains perhaps the best understood account of scientific method.

To get a better understanding of the hypothetico-deductive method, we can examine the following geographic phenomenon. In the brackish tidal marshes of the Pacific Coast of British Columbia and Washington, the plants in these communities arrange themselves spatially in zones defined by elevation. Near the shoreline, plant communities are dominated primarily by a single species known as Scirpus americanus. At higher elevations on the tidal marsh Scirpus americanus disappears and a species called Carex lyngbyei becomes widespread. The following hypothesis has been proposed to explain this phenomenon: the distribution of Scirpus americanus and Carex lyngbyei is controlled by their tolerance of the frequency of tidal flooding. Scirpus americanus is more tolerant of tidal flooding than Carex lyngbyei and as a result it occupies lower elevations on the tidal marsh; however, it cannot survive in the zone occupied by Carex lyngbyei because not enough flooding occurs there. Likewise, Carex lyngbyei is less tolerant of tidal flooding than Scirpus americanus and as a result it occupies higher elevations; it cannot survive in the zone occupied by Scirpus americanus because too much flooding occurs. According to Popper, to test this theory a scientist must now try to prove it false. As discussed above, this can be done in two general ways: 1) predictive analysis; or 2) experimental manipulation. Each of these methods has been applied to this problem and the results are described below.

Predictive Analysis. If the theory is correct, we should find that in any tidal marsh plant community containing Scirpus americanus and Carex lyngbyei, the spatial distribution of the two species is similar. This is indeed the case. However, some causal factor other than flooding frequency might still be responsible for these spatial patterns.
