Research interests
- Aviation regulation
- Assessing and regulating risk in complex technical systems
- Risk-based policy-making
- Sociology of knowledge
Publications
Downer, John (2010) 'Trust and Technology: The Social Foundations of Aviation Regulation', British Journal of Sociology, Vol. 61, No. 1, pp. 87-110.
Abstract
This paper looks at the dilemmas posed by 'expertise' in high-technology regulation by examining the US Federal Aviation Administration's (FAA) 'type-certification' process, through which it evaluates new designs of civil aircraft. It observes that the FAA delegates much of this work to the manufacturers themselves, and draws on arguments from the sociology of science and technology to explain why.
It suggests that - contrary to popular portrayal - regulators of high technologies face an inevitable epistemic barrier when making technological assessments, which forces them to delegate technical questions to people with more tacit knowledge, and hence to 'regulate' at a distance by evaluating 'trust' rather than 'technology'. It then unravels some of the implications of this and its relation to our theories of regulation and 'regulatory capture'.
Downer, John (2010) 'Anatomy of a Disaster: Why Some Accidents Are Unavoidable' CARR Discussion Paper 61.
Abstract
This paper looks at the fateful 1988 fuselage failure of Aloha Airlines Flight 243 to suggest and illustrate a new perspective on the sociology of technological failure and the question of whether such failures are potentially avoidable. Drawing on core insights from the sociology of scientific knowledge, it highlights, and then challenges, a fundamental principle underlying our understanding of technological risk: the idea that 'failures' always connote 'errors' and are, in principle, foreseeable.
From here, it suggests a new conceptual tool for Disaster Theory, by proposing a novel category of man-made calamity: what it calls the 'Epistemic Accident'. It concludes by exploring the implications of Epistemic Accidents and sketching their relationship to broader issues concerning technology and society, and social theory's approach to failure.
Downer, John (2009) 'Watching the Watchmaker: On Regulating the Social in Lieu of the Technical' CARR Discussion Paper 54.
Abstract
This paper looks at the problem of expertise in regulation by examining the Federal Aviation Administration's (FAA) 'type-certification' process, through which it evaluates new designs of civil aircraft. It notes that the FAA delegates much of this work to the manufacturers themselves, and draws on arguments from the sociology of science and technology to explain why.
It suggests that - contrary to popular portrayal - regulators of 'high' technologies face an inevitable epistemic barrier when making technological assessments, which forces them to delegate technical questions to people with more tacit knowledge, and hence to 'regulate' at a distance by evaluating 'trust' rather than 'technology'. It then unravels some of the implications of this and its relation to our theories of regulation and 'regulatory capture'.
Downer, John (2009) 'When Failure Is an Option: Redundancy, Reliability, and Risk' CARR Discussion Paper 53.
Abstract
Administering a technologically complex world poses unique challenges, if only because the vital properties of complex technologies are often opaque to administrators. Sound judgments about everything from pacemakers to power-plants require sound assessments of factors like reliability. We can learn much about these assessments by looking at the logic of redundancy. Redundancy is a modern engineering paradigm.
Engineers rely on it heavily when designing complex, safety-critical machines, such as aircraft. More than this, however, redundancy also allows us to 'know' the machines they build: it is a key element of all reliability calculations for ultra-reliable systems. Knowing machines, therefore, means understanding redundancy. This understanding is often flawed, however, leading us to misconstrue the calculations that rely on it. By deconstructing the logic of redundancy, I attempt to illuminate much wider issues about governing technology.
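To see what that multiplication involves, consider a minimal sketch of the textbook calculation (the figures here are illustrative, not drawn from the paper). If a system fails only when all n of its redundant channels fail, and those channel failures are statistically independent, then:

\[ P(\text{system fails}) = \prod_{i=1}^{n} p_i = p^n \quad \text{(for } n \text{ identical channels)} \]

Two channels that each fail with probability p = 10^-3 per flight-hour thus promise a system failure probability of 10^-6, and four promise 10^-12. Everything rests on the independence assumption: a common-cause fault - shared software, shared maintenance, a single burst of debris - defeats every channel at once and voids the multiplication.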
Downer, John (2007) 'When the Chick Hits the Fan: Representativeness and Reproducibility in Technological Testing', Social Studies of Science, Vol. 37, No. 1, pp. 7-26.
Abstract
Before a new turbojet engine design is approved, the Federal Aviation Administration (FAA) must assure itself that, among many other things, the engine can safely ingest birds. It does this by mandating a series of well-defined - if somewhat Pythonesque - 'birdstrike tests' through which the manufacturers can demonstrate the integrity of their engines. In principle, the tests are straightforward: engineers run an engine at high speed, launch birds into it, and watch to see if it explodes. In practice, the tests rest on a complex and contentious logic.
In this paper I explore the debate that surrounds these tests, using it to illustrate the now-familiar idea that technological tests - like scientific experiments - unavoidably contain irreducible ambiguities that require judgments to bridge, and to show that these judgments can have real consequences.
Having established this, I then explore how the FAA reconciles the unavoidable ambiguities with its need to determine, with a high degree of certainty, that the engines will be as safe as Congress requires. I argue that this reconciliation requires a careful balance between the opposing virtues of reproducibility and representativeness - and that this balance differs significantly from that in most scientific experiments, and from the common perception of what it ought to be.
Projects
Book project: "From Black Box to Check-Box: Evaluating Reliability in Civil Aircraft Design."
I am writing a book about the 'type-certification' process, through which the Federal Aviation Administration evaluates and approves (or rejects) designs for new civil aircraft. In this book I draw on recent work in the sociology of technology to ask what it means to say that a future aircraft (or any complex technology) will be reliable to a figure of X [where X is the likelihood of catastrophic failure over a given period]. Where does this number come from? How much faith should publics and policy-makers place in it?
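To make the question concrete, here is the kind of arithmetic such figures typically involve - a minimal sketch with illustrative numbers, not the FAA's actual procedure. Certification guidance is commonly glossed as requiring that each 'catastrophic' failure condition be extremely improbable, on the order of 10^-9 per flight-hour; summing over the m identified conditions then bounds the aircraft-level risk:

\[ P(\text{any catastrophic failure}) \le \sum_{k=1}^{m} P(C_k) \approx m \times 10^{-9} \]

With m ≈ 100 such conditions, this yields an aircraft-level figure on the order of 10^-7 per flight-hour. The book's question, in these terms, is what evidence could ever ground a claim like P(C_k) ≤ 10^-9, given that no test programme can come close to accumulating a billion failure-free flight-hours before a design enters service.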
Related Publications:
Downer, John (2010) 'Trust and Technology: The Social Foundations of Aviation Regulation', British Journal of Sociology, Vol. 61, No. 1, pp. 87-110.
Downer, John (2007) 'When the Chick Hits the Fan: Representativeness and Reproducibility in Technological Testing', Social Studies of Science, Vol. 37, No. 1, pp. 7-26.
Downer, John (2008) 'On Evaluating One's Self: The Implications of Asymmetrical Expertise in Aviation Regulation', Risk & Regulation, October 2008.
Downer, John (2009) 'When Failure Is an Option: Redundancy, Reliability, and Risk', CARR Discussion Paper 53.
Ancillary Project: "What is Risk-Based Policy-Making?"
This project has its roots in a period of academic consultancy, undertaken in late 2007 and early 2008 at the UK Department for Environment, Food and Rural Affairs (Defra). The project looked at the meaning and potential implementation of risk-based policy-making. Together with Henry Rothstein of King's College London, I spent nine months inside the department, interviewing officials and examining how policy decisions and practices could draw more explicitly on different notions of 'risk'.
Related Publications:
Rothstein, Henry & Downer, John (2008) 'Risk in Policy-Making: Managing the Risks of Risk Governance', Report to Defra.
Downer, John & Rothstein, Henry (2008) 'What is Risk-Based Policy-Making? Heather & Grass Burning Reform Case-Study', Report to Defra.
Presentations
Presented a paper entitled 'What Can Go Wrong? Rethinking the Epistemology of Failure' at the University of Edinburgh on 22 June.