Abstract
How can philosophy of science be of more practical use? One thing we can do is provide practicable advice about how to determine when one empirical claim is relevant to the truth of another; i.e., about evidential relevance. This matters especially for evidence-based policy, where advice is thin—and misleading—about how to tell what counts as evidence for policy effectiveness. This paper argues that good efficacy results (as in randomized controlled trials), which are all the rage now, are only a very small part of the story. To tell what facts are relevant for judging policy effectiveness, we need to construct causal scenarios about what will happen when the policy is implemented.
Notes
Roush (2005)
I ignore here questions about whether the right kind and quantity of low probability evidence will suffice.
Cartwright (2007) Ch. 3.
There are many of these—checklists running to 40 or more pages.
U.S. Department of Education (2003).
For a formal definition of efficacy and a discussion of it see Cartwright (2009). Note that efficacy is really a three term relation, the efficacy of T for O relative to a given population and set of circumstances.
See Cartwright (1989).
Bohrnstedt and Stecher (eds.) (2002).
I think of this as a difference in the causal laws governing the two: T + specific arrangement K of confounders causes O in Tennessee; T + K causes O’ ≠ O in California.
More cautiously, we learn something about T’s contribution.
I put ‘add’ in scare quotes because in order for the language of capacities and contributions to be appropriate, O must contribute in some systematic way in new situations; ‘addition’—e.g. the vector addition familiar from mechanics—is only one example. (For others, see Cartwright (2007) and Cartwright (1989).)
So far this does not take into account other changes we also make in the situation in implementing T, nor ways in which the causal factors already present in the target may have a different distribution than they had in the experimental population, nor the possibility that a different set of causal laws altogether governs the two populations.
Note though that the backup needed for the inference on any particular occasion is weaker than a capacity claim. For any one inference we need only assume that what T produced in the experimental situation will ‘add’ in the way we suppose when T is present in the new circumstances. But as is usual in science, this weaker conclusion may be deemed implausible without the stronger to back it up. (Compare deflationary accounts of scientific realism. To back up any particular prediction we don’t need to accept the whole paraphernalia of theoretical claims and entities; we need only accept the consequence of those that directly underwrite the specific prediction.)
The second is enormously complicated when it comes to understanding the force of ‘all’ in the last sentence. Does this mean all known facts, or all available facts, or all facts that we happen to have on the table, or all facts that we could get on the table had we world enough and time, or all facts that we could get on the table for some reasonable price, etc? I lay aside this issue for now and focus instead on the simpler, and probably antecedent, problem of understanding what facts are relevant to the truth of a policy hypothesis.
Achinstein (1983).
It will be counted relevant on many more formal accounts as well. I bring this example up here to illustrate the point that relevance often requires assumptions.
Throughout I am using a very general sense of ‘facts’ that includes general facts – like causal laws – as well as singular ones.
References
Achinstein, P. (1983). The nature of explanation. Oxford: Oxford University Press.
Bohrnstedt, G. W., & Stecher, B. M. (Eds.). (2002). What we have learned about class size reduction in California. CA, USA: California Department of Education.
Cartwright, N. (1989). Nature’s capacities and their measurement. Oxford: Oxford University Press.
Cartwright, N. (2007). Hunting causes and using them: Approaches in philosophy and economics. Cambridge: Cambridge University Press.
Cartwright, N. (2009). What is this thing called ‘efficacy’? In C. Mantzavinos (Ed.), Philosophy of the social sciences: Philosophical theory and scientific practice. Cambridge: Cambridge University Press (to appear).
Reiss, J. (2005). Causal instrumental variables and interventions. Philosophy of Science, 72(5), 964–976.
Roush, S. (2005). Tracking truth: Knowledge, evidence, and science. Oxford: Oxford University Press.
U.S. Department of Education Institute of Education Sciences National Center for Education Evaluation and Regional Assistance (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. http://www.ed.gov/rschstat/research/pubs/rigorousevid/rigorousevid.pdf.
Cartwright, N. Evidence-based policy: what’s to be done about relevance?. Philos Stud 143, 127–136 (2009). https://doi.org/10.1007/s11098-008-9311-4