Human-Robot Relations: Why We Should Worry | Sherry Turkle
In this summa paper, I present the topic of relationships between humans and robots, integrating my Computer Science major, Physics minor, and the liberal arts, and aiming to lay theoretical foundations for human-robot relationships by drawing on insights from the relevant disciplines. The future of human-robot relationships was considered at the Second International Congress on Love and Sex with Robots, held in London.
Contrary to the critics, I believe our popular discourse about robotic relationships has become too dark and dystopian. We overstate the negatives and overlook the ways in which relationships with robots could complement and enhance existing human relationships.
It seems that they really care for each other, but this could be an illusion. She is, after all, programmed to serve his needs. The relationship is an inherently asymmetrical one: he owns and controls her, and she would not survive without his good will. Furthermore, there is a third party lurking in the background: the corporation that created her and profits from her. This is a far cry from the philosophical ideal of love.
Philosophers emphasise the need for mutual commitment in any meaningful relationship. Robots might be able to perform love, saying and doing all the right things, but performance is insufficient.
Furthermore, even if the robot were capable of some genuine mutual commitment, it would have to give this commitment freely. As the British behavioural scientist Dylan Evans argued: 'Although people typically want commitment and fidelity from their partners, they want these things to be the fruit of an ongoing choice …' This seems to scupper any possibility of a meaningful relationship with a robot.
Robots will not choose to love you; they will be programmed to love you, in order to serve the commercial interests of their corporate overlords. This looks like a powerful set of objections to the possibility of robot-human love.
But not all these objections are as persuasive as they first appear. After all, what convinces us that our fellow human beings satisfy the mutuality and free-choice conditions outlined above? The philosopher Michael Hauskeller made this point rather well in Mythologies of Transhumanism: in the end, all we have to go on is how our lovers behave towards us. The same goes for concerns about free choice.
It is, of course, notoriously controversial whether humans have free choice, and not just the illusion of it; but if we need to believe that our lovers freely choose their ongoing commitment to us, then it is hard to know what could ground that belief other than certain behavioural indicators that are suggestive of it, e.g. their apparent willingness to break the commitment when we upset or disappoint them.
There is no reason why such behavioural mimicry needs to be out of bounds for robots. This view, that consistent behavioural evidence is sufficient grounds for judging a relationship meaningful, is known as ethical behaviourism, and it is a bitter pill for some. Even though he expresses the view well, Hauskeller, to take just one example, ultimately disagrees with it when it comes to human-robot relationships.
He argues that the reason why behavioural patterns are enough to convince us that our human partners are in love with us is that we have no reason to doubt the sincerity of those behaviours. The problem with robots is that we do have such reasons: (i) they are programmed to produce those behaviours; and (ii) they serve the interests of their manufacturers. But (i) is difficult to justify in this context. Unless you think that biological tissue is magic, or you are a firm believer in mind-body dualism, there is little reason to think that a robot that is behaviourally and functionally equivalent to a human cannot sustain a meaningful relationship.
There is, after all, every reason to suspect that we are programmed, by evolution and culture, to develop loving attachments to one another.
It might be difficult to reverse-engineer our programming, but this is increasingly true of robots too, particularly when they are programmed with learning rules that help them to develop their own responses to the world.
The second element, (ii), provides more reason to doubt the meaningfulness of robot relationships, but two points arise. First, if the real concern is that the robot serves ulterior motives and might betray you at some later point, then we should remember that relationships with humans are fraught with similar risks.
As the philosopher Alexander Nehamas points out in On Friendship, this fragility and possibility of betrayal is often what makes human relationships so valuable.
Second, if the concern is about ownership and control, then we should remember that ownership and control are socially constructed facts that can be changed if we think it morally appropriate.
Humans once owned and controlled other humans, but we (or at least most of us) eventually saw the moral error in this practice. We might learn to see a similar moral error in owning and controlling robots, particularly if they are behaviourally indistinguishable from human lovers.
The argument above is merely a defence of the philosophical possibility of robot lovers. There are obviously several technical and ethical obstacles that would need to be cleared in order to realise this possibility.
One major ethical obstacle concerns how robots represent, or performatively mimic, human beings. If you look at the current crop of robotic partners, they seem to embody some problematic, gendered assumptions about the nature of love and sexual desire. Azuma Hikari, the holographic partner, represents a sexist ideal of the domestic housewife, and in the world of sex dolls and sexbot prototypes things are even worse. This has a lot of people worried.
For instance, Sinziana Gutiu, a lawyer in Vancouver specialising in cyberliability, is concerned that sexbots convey the image of women as sexual tools. Kathleen Richardson, a professor of ethics and culture of robotics at De Montfort University in Leicester and the co-founder of the Campaign Against Sex Robots, has similar concerns, arguing that sexbots effectively represent women as sexual commodities to be bought and sold.
While both these critics draw a link between such representations and broader social consequences, others, myself included, focus specifically on the representations themselves.
In this sense, the debate plays out much like the long-standing debates about the moral propriety of pornography. Do sexbots necessarily convey or express problematic attitudes toward women or men? To answer that, we need to think about how symbolic practices and artefacts carry meaning in the first place. Their meaning is a function of both their content, i.e. what they resemble (or, more importantly, what they are taken to resemble by others), and the context in which they are created, interpreted and used.
There is a complex interplay between content and context when it comes to meaning. Content that seems offensive and derogatory in one context can be empowering and subversive in another. This has implications for assessing the representational harms of robot lovers because neither their content nor the context in which they are used is fixed or immutable.
It is almost certainly true that the current look and appearance of robot lovers is representationally problematic, particularly in the contexts in which they are produced, promoted and used. But it is possible to change this. Proponents of the feminist porn movement pursue three main strategies to this end, and a similar set of strategies could be followed in the case of sexbots.
We could work to change the representational forms of sexbots so that they include diverse female, male and non-binary body shapes, and so that they follow behavioural scripts (pre-programmed or learned) that do not reinforce negative stereotypes, and perhaps even promote positive ones.
We could also seek to change the processes through which sexbots get created and designed, encouraging a more diverse range of voices in the process.
To this end, we could work to promote women who are already active in sextech.

February 18: People are looking more and more to robotic toys and tools for companionship, and less to other people, said Sherry Turkle, a professor of the social studies of science and technology at MIT. Innovations such as Siri, Apple's iPhone digital assistant, have trained people to rely on machines in new ways, Turkle said, and to envision a future where robots are advanced enough to serve as teachers for the young and caretakers for the old.
But I think that this new normal comes with a price. For the idea of artificial companionship to become our new normal, we have to change ourselves, and in the process we are remaking human values and human connection. Where subjects in her studies in the '80s and '90s used to say that love and friendship are connections that can occur only between humans, people now often say that robots could fill these roles.

(Photo caption: a child interacting with Robovie, a remotely controlled humanoid robot. In the near future, children may view such robots as friends.)
As the American Psychological Association reports, Turkle has studied Paro, a robotic baby seal that has been used as a companion for older adults with dementia or depression. It was widely seen as a great advance, Turkle said, when one grief-stricken woman was able to talk to Paro and be comforted by it. Many experts say that, in the future, robots could be better caretakers for the elderly, because they could be programmed with endless patience and would never be abusive, inept or dishonest.
But Turkle worries about this drive to replace human caretakers with robots.