The Real Experiment of Giving AI Emotions

Household robots and AI assistants illustrate how personality can make technology more approachable, while also amplifying ethical dilemmas.

For example, what happens when someone in the household begins turning to the local AI to gather information on, or keep tabs on, a partner or roommate? AirTags have already been used by abusers to track their partners. The potential for those closest to us to misuse an AI that is omnipresent in our most personal and private moments seems far greater.

And that’s before even getting into the unintended consequences of introducing AI systems with human characteristics, especially empathy. For Jana Schaich Borg, associate research professor in the Social Science Research Institute at Duke University, the unintended consequences are an especially large concern for children growing up with this technology.

Adults might easily make the mistake of thinking an AI has its own consciousness, even though, at least to this point, it absolutely does not. But adults can, for the most part, be easily corrected, and the mistake is unlikely to have long-term effects on their mental health.

That is not necessarily the case, however, for children and even young adults who interact with these people-pleasing systems on a regular basis. “Empathy is just one piece of being human, but it’s one of the things that really makes us feel connected to one another,” said Schaich Borg. “So as soon as an AI comes across as empathetic, you’ve reached a whole new realm.”

“All of this might sound like an episode of Black Mirror, but it isn’t. This is an experiment, and we are running it right now.”
Jana Schaich Borg, Associate Research Professor, Social Science Research Institute

—————