
Techné: Research in Philosophy and Technology

Conceptual, Empirical, and Ethical Perspectives

Volume 23, Issue 3, 2019
Social Robots, Emotions, and Social Cognition

Table of Contents



1. Johanna Seibt and Raffaele Rodogno: Understanding Emotions and Their Significance through Social Robots, and Vice Versa
2. Kerstin Fischer: Why Collaborative Robots Must Be Social (and even Emotional) Actors
In this article, I address the question of whether robots should be social actors and suggest that we have little choice but to construe collaborative robots as social actors. Social cues, including emotional displays, serve coordination functions in human interaction and therefore have to be used, even by robots, for long-term collaboration to succeed. While robots lack the experiential basis of emotional display, much emotional expression in human interaction is itself part of conventional social practice; if robots are to participate in such social practices, they need to produce such signals as well. I conclude that if we aim to share our social spaces with robots, they had better be social actors, which may even include the display of emotions. This finding is of empirical as well as philosophical relevance because it shifts the ethical discussion away from the question of how social collaborative robots should be toward the question of what kinds of human-robot collaborations we want.
3. Janna van Grunsven and Aimee van Wynsberghe: A Semblance of Aliveness: How the Peculiar Embodiment of Sex Robots Will Matter
While the design of sex robots is still in its early stages, the social implications of the potential proliferation of sex robots into our lives have been heavily debated by activists and scholars from various disciplines. What is missing in the current debate on sex robots and their potential impact on human social relations is a targeted look at the boundedness and bodily expressivity typically characteristic of humans, the role that these dimensions of human embodiment play in enabling reciprocal human interactions, and the manner in which they contrast with sex robot-human interactions. Through a fine-grained discussion of these themes, rooted in fruitful but largely untapped resources from the field of enactive embodied cognition, we explore the unique embodiment of sex robots. We argue that the embodiment of the sex robot is constituted by what we term restricted expressivity and a lack of bodily boundedness, and that this is the locus of negative but also potentially positive implications. We discuss the possible benefits that these two dimensions of embodiment may have for people within a specific demographic, namely some persons on the autism spectrum. Our preliminary conclusion, that the benefits and the downsides of sex robots reside in the same capabilities of the robot (its restricted expressivity and lack of bodily boundedness, as we call them), demands that we take stock of future developments in the design of sex robot embodiment. Given the importance of evidence-based research on sex robots for drawing correlations and making claims, as reinforced by Nature (2017), the analysis is intended to set the stage for future research.
4. Jaana Parviainen, Lina van Aerschot, Tuomo Särkikoski, Satu Pekkarinen, and Helinä Melkas: Motions with Emotions? A Phenomenological Approach to Understanding the Simulated Aliveness of a Robot Body
This article examines how the interactive capabilities of companion robots, particularly their materiality and animate movements, appeal to human users and generate an image of aliveness. Building on Husserl’s phenomenological notion of a ‘double body’ and on theories of emotions as affective responses, we develop a new understanding of robots’ simulated aliveness. Analyzing the empirical findings of a field study on the use of the robot Zora in care homes for older people, we suggest that the aliveness of companion robots results from a combination of four aspects: 1) material ingredients, 2) morphology, 3) animate movements guided by software programs and human operators, as in Wizard-of-Oz settings, and 4) anthropomorphising narratives created by users to support the robot’s performance. We suggest that narratives about affective states attached to the robot, such as sleepiness or fright, trigger users’ empathic feelings of caring and tenderness toward the robot.
5. Felix Tun Han Lo: The Dilemma of Openness in Social Robots
This paper conducts a philosophical inquiry into past empirical research that reveals emotional coupling and category confusion between the human and the social robot. It examines whether emotional coupling and category confusion would increase or diminish the reification of human emotion and the human milieu by asking whether they fulfill the ideal of openness in technology. The major theories of openness, from the respective proposals of open industrial machines by Gérard-Joseph Christian and Karl Marx, to Umberto Eco’s critique of open art and Gilbert Simondon’s philosophy of open technology, agree (i) that openness is the condition for realizing the potentiality to transcend the existing aesthetic, technical, or social structure, and (ii) that the realization of this potentiality would diminish the reification of the human milieu. The therapeutic effect of emotional coupling with social robots seems to fulfill this ideal of open technology, whereas category confusion seems to increase rather than diminish reification. If people confuse the robot with the human, they risk losing sight of the unpredictability of other human beings that is essential to human development. This paper concludes that category confusion can be avoided by building social robots without giving them a human-like appearance.
6. Arto Laitinen, Marketta Niemelä, and Jari Pirhonen: Demands of Dignity in Robotic Care: Recognizing Vulnerability, Agency, and Subjectivity in Robot-based, Robot-assisted, and Teleoperated Elderly Care
Having a sense of dignity is one of the core emotions in human life. Is our dignity, and accordingly also our sense of dignity, under threat in elderly care, especially in robotic care? How can robotic care support or challenge human dignity in elderly care? The answer will depend on whether robot-based, robot-assisted, or teleoperated care is at stake. Further, the demands and the realizations of human dignity have to be distinguished. The demands to respect humans are based on human dignity and the inalienable, high, and equal moral standing that everyone has. For human moral agents, these demands take the form of negative and positive duties. For robots, they arguably take the form of corresponding ought-to-be norms. The realizations of dignity consist in variable responses to these demands, by oneself, by others, and by society at large. This article examines how robot-based, robot-assisted, and teleoperated care can amount to realizations of dignity. The varieties of robotic care can, in different ways, be responsive to the demands of dignity and recognize humans as vulnerable beings with needs, as autonomous agents, and as rational subjects of experience, emotion, and thought.
7. Sven Nyholm and Lily Eva Frank: It Loves Me, It Loves Me Not: Is It Morally Problematic to Design Sex Robots that Appear to Love Their Owners?
Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artefacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.
8. Michał Klincewicz: Robotic Nudges for Moral Improvement through Stoic Practice
This article offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and can therefore help develop the dispositions to, for example, extend concern to others, avoid parochialism, etc.