
Techné: Research in Philosophy and Technology

Volume 10, Issue 1, Fall 2006
Technology and Normativity

Table of Contents


1. Ibo van de Poel, Peter Kroes, "Introduction: Technology and Normativity"
Part 1: Technology and Normativity
2. Carl Mitcham, "In Qualified Praise of the Leon Kass Council on Bioethics"
This paper argues for the distinctiveness of the President’s Council on Bioethics, as chaired by Leon Kass. The argument proceeds by seeking to place the Council in proper historical and philosophical perspective and considering the implications of some of its work. Sections one and two provide simplified descriptions of the historical background against which the Council emerged and the character of the Council itself, respectively. Section three then considers three basic issues raised by the work of the Council that are of relevance to philosophy and technology as a whole: the role of professionalism, the relation between piecemeal and holistic analyses of technology, and the appeal to human nature as a norm.
3. Lotte Asveld, "Informed Consent in the Fields of Medical and Technological Practice"
Technological developments often bring about new risks. Informed consent (IC) has been proposed as a means to legitimize the imposition of technological risks. This principle was first introduced in medical practice to assure the autonomy of the patient. The introduction of IC in the field of technological practice raises questions about the comparability of the two types of informed consent. To what extent are the possibilities for including laypeople in decisions regarding risks in the technological field similar to giving informed consent in the medical field, and what does this imply for the design and implementation of IC in the technological field? Medical and technological practice are clearly alike in that both fields are characterized by highly specialized, technical knowledge which can be quite inaccessible to the average layperson. However, fundamental differences arise with regard to the aim, knowledge of risks, and exclusiveness of the practices in each field. The differences in aim imply that the necessity of each practice is perceived differently by laypeople, leading them to assess the respective risks differently. The differences in knowledge of risks arise from the variability in the ways a given risk can be described; the definition of risk in medical practice is more homogeneous in this respect than in technological fields. Furthermore, medical practice tends to be more exclusive, leading laypeople immersed in that practice to necessarily embrace most of the fundamental assumptions underlying that practice. These differences result in divergent recommendations for the implementation of informed consent in the technological field: there is a need for a more extensive procedure and for less decisive authority for the individual.
4. Junichi Murata, "From Challenger to Columbia: What Lessons Can We Learn from the Accident Investigation Board for Engineering Ethics?"
One of the most important tasks of engineering ethics is to give engineers the tools required to act ethically and to prevent possible disastrous accidents resulting from engineers’ decisions and actions. The space shuttle Challenger disaster is referred to as a typical case in almost every textbook, one from which engineers can learn important lessons, as it shows impressively how engineers should act as professionals to prevent accidents. The Columbia disaster came seventeen years later, in 2003. According to the report of the Columbia Accident Investigation Board, the main cause of the accident was not individual actions which violated certain safety rules; rather, it was to be found in the history and culture of NASA: a culture that desensitized managers and engineers to potential hazards as they dealt with problems of uncertainty. This view of the disaster is based on Diane Vaughan’s analysis of the Challenger disaster, which highlighted inherent organizational factors and culture within NASA as contributing to that earlier disaster. Based on the insightful analysis of the Columbia report and the work of Diane Vaughan, we search for an alternative view of engineering ethics. We focus on the inherent uncertainty of engineers’ work with respect to hazard precaution. We discuss claims that the concept of professional responsibility, which plays a central role in orthodox engineering ethics, is too narrow, and that we need a broader and more fundamental concept of responsibility: one attributed to every person related to an organization (governments, managers, engineers, etc.), which might be called “civic virtue”. Only on the basis of this broad concept of responsibility as civic virtue can we find a possible way to prevent disasters and reduce the hazards that seem to be an inseparable part of the use of complex technological systems.
5. Sven Ove Hansson, "Safe Design"
Safety is an essential ethical requirement in engineering design. Strategies for safe design are used not only to reduce estimated probabilities of injuries but also to cope with hazards and eventualities that cannot be assigned meaningful probabilities. The notion of safe design has important ethical dimensions, such as that of determining the responsibility that a designer has for future uses (and misuses) of the designed object.
Part 2: Technological Functions and Normativity
6. Marcel Scheele, "Social Norms in Artefact Use: Proper Functions and Action Theory"
The use of artefacts by human agents is subject to human standards or norms of conduct. Many of those norms are provided by the social context in which artefacts are used; others are provided by the proper functions of the artefacts. This article argues for a general framework in which norms provided by proper functions are related to norms provided by the (more general) social context of use. Drawing on Joseph Raz’s concept of “exclusionary reasons”, it is argued that proper functions provide “institutional reasons” for use. Proper use of artefacts (use according to the proper function) is embedded in the normative structures of social institutions. These social normative structures are complementary to traditional norms of practical rationality and are a kind of second-order reason: exclusionary reasons. Institutional reasons are similar to exclusionary reasons up to a certain extent; the most notable difference is that proper functions do not so much exclude other types of use as place that use (and the user) in particular social structures with particular rights and obligations. An institutional reason not only gives a reason for action, it also provides grounds for evaluating actions according to such reasons positively (and others negatively). The upshot of the analysis is that it provides an additional tool for understanding and evaluating the use of artefacts.
7. Françoise Longy, "Function and Probability: The Making of Artefacts"
The existence of dysfunctions precludes the possibility of identifying the function to do F with the capacity to do F. Nevertheless, we continuously infer capacities from functions. For this and other reasons stated in the first part of this article, I propose a new theory of functions (of the etiological sort), applying to organisms as well as to artefacts, in which having some determinate probability P of doing F (i.e. a probabilistic capacity to do F) is a necessary condition for having the function to do F. The main objective of this paper is to justify the legitimacy of this condition when considering artefacts. I begin by distinguishing “perspectival probabilities”, which reflect a pragmatic interest or an arbitrary state of knowledge, from “objective probabilities”, which depend on some objective feature of the envisaged items. I show that objective probabilities are not necessarily based on physical constitution. I then explain why we should distinguish between considering an object as a physical body and considering it as an artefact, and why the probability of dysfunction to be taken into account is one relative to the object as a member of an artefact category. After clarifying how an artefact category can be defined if it is not defined in physical terms, I establish the objectivity of the probability of dysfunction under consideration by showing how it is causally determined by objective factors regulating the production of items of a definite artefact type. I focus on the case of industrially produced artefacts, where the objective factors determining the probability of dysfunction can be seen most clearly.
8. Jeroen De Ridder, "The (Alleged) Inherent Normativity of Technological Explanations"
Technical artifacts have the capacity to fulfill their function in virtue of their physicochemical make-up. An explanation that purports to explicate this relation between artifact function and structure can be called a technological explanation. It might be argued, and Peter Kroes has in fact done so, that there is something peculiar about technological explanations in that they are intrinsically normative in some sense. Since the notion of artifact function is a normative one (if an artifact has a proper function, it ought to behave in specific ways), an explanation of an artifact’s function must inherit this normativity. In this paper I will resist this conclusion by outlining and defending a ‘buck-passing account’ of the normativity of technological explanations. I will first argue that it is important to distinguish properly between (1) a theory of function ascriptions and (2) an explanation of how a function is realized. The task of the former is to spell out the conditions under which one is justified in ascribing a function to an artifact; the latter should show how the physicochemical make-up of an artifact enables it to fulfill its function. Second, I wish to maintain that a good theory of function ascriptions should account for the normativity of these ascriptions. Provided such a function theory can be formulated, as I think it can, a technological explanation may pass the normativity buck to it. Third, to flesh out these abstract claims, I show how a particular function theory, namely the ICE theory of Pieter Vermaas and Wybo Houkes, can be dovetailed smoothly with my own thoughts on technological explanation.
9. Krist Vaesen, "How Norms in Technology Ought to Be Interpreted"
This paper defends the claim that there are, at least, two kinds of normativity in technological practice. The first concerns what engineers ought to do; the second concerns normative statements about artifacts. The claim is controversial, since the standard approach to normativity, namely normative realism, denies artifacts any kind of normativity; according to the normative realist, normativity applies exclusively to human agents. In other words, normative realists hold that only “human agent normativity” is a genuine form of normativity. I will argue that normative realism is mistaken on this point. I will draw mainly on material from Daniel Dennett and Philip Pettit to show that it makes sense to talk about artifactual normativity. I claim that this approach can also make sense of human agent normativity, or more specifically “engineer normativity”. Moreover, it avoids some of the problems formulated by opponents of normative realism. Thus I will develop a strategy which: (i) makes sense of artifactual normativity; and (ii) makes sense of “human agent normativity”, specifically “engineer normativity”.