Robots can bring energy and effectiveness to the workplace. They just have to mimic more human behaviors.
Anxieties about whether machines will take our jobs will soon be a thing of the past. Robots are already here, adding new dimensions to the way we live and function, and researchers are exploring how to create intelligent machines that work better with us as opposed to taking our place. Guy Hoffman (@guyhoffman), assistant professor and Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University in Ithaca, New York, is studying how a working robot’s behavior can influence its human colleagues. The robots he designs lean forward to show they are listening to human interlocutors, and when they hear music, they nod in response to the beat. Hoffman’s work indicates that subtle changes in a robot’s actions have a positive effect on the humans around it. MIT Sloan Management Review spoke with him about his research to probe what his findings imply for managing human-robot teams.
MIT Sloan Management Review: Why do robots need to understand human body language and guess our intentions?
Hoffman: Robots have traditionally been designed to carry out preprogrammed behaviors. But increasingly, researchers in my field are thinking about modeling human intentions and taking human needs into account. In the past, a robot would perform a fixed action and the human had to adapt to it, but now we want the robot and the human to adapt mutually to each other. For this to happen, the robot has to solve a lot of really hard problems that for us are almost intuitive, which is to guess what we’re trying to do or what personality type we have or which mood we might be in. When we encounter people at work, we very quickly make judgments about their personalities and change our behavior accordingly. Having a robot able to do this is crucial if it’s to become a similarly good team member.
What part do emotions play in human-robot interactions?
Hoffman: Robots have the capacity to affect our behavior emotionally in that they’re using a physical body, they’re sharing space with us, they’re moving in our surroundings. What I’ve been looking at is how robots use their bodies to express their intentions, to express what they’re trying to do, and therefore affect people emotionally. We’ve found that when a robot uses human body language, it enables the people interacting with it both to be more effective in what they’re doing and to enjoy the interaction and gain psychological benefits from it. I believe that body language and the way that we think with our bodies and through our bodies is the fastest way to our hearts.
So, how would these benefits play out in the workplace?
Hoffman: In one study, I compared the interactions people had with robots that behaved purely as robotic tools against their interactions with robots that were more socially expressive. In the group with the more traditional robots, participants told the machines what to do, and they did it. In the other group, the robots would start moving before they'd been told what to do, and they'd start to help even before they were sure what the person wanted. People who worked with this second group of robots got into a kind of a dance, a back and forth: everybody was moving at the same time and getting things done, even though the robot was taking more chances and would sometimes make mistakes. The results showed that people felt this robot was a better team member and had more commitment to the joint activity. When participants just ordered the robot around, they felt it was lazy; it didn't take the initiative and wasn't a good team member or committed to the team.
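[The tradeoff Hoffman describes — a robot that starts helping before it is certain, at the risk of occasional mistakes — can be framed as an expected-cost decision. The sketch below is a hypothetical illustration, not Hoffman's actual system; the task names, penalties, and timings are invented for the example.]

```python
# Hypothetical sketch: a robot decides whether to act on its best guess
# of the human's next request, or wait to be told. Acting early saves
# time when the guess is right but incurs a penalty when it is wrong.

def expected_cost_act(belief, wrong_penalty, act_time):
    # Robot starts the most likely task now; pays a penalty if wrong.
    p_best = max(belief.values())
    return act_time + (1 - p_best) * wrong_penalty

def expected_cost_wait(wait_time, act_time):
    # Waiting is always correct but serializes the turn-taking.
    return wait_time + act_time

def should_act_early(belief, wrong_penalty=8.0, act_time=2.0, wait_time=3.0):
    return expected_cost_act(belief, wrong_penalty, act_time) < \
           expected_cost_wait(wait_time, act_time)

# A confident belief justifies acting early; a flat belief does not.
confident = {"fetch_wheel": 0.9, "fetch_door": 0.1}
uncertain = {"fetch_wheel": 0.5, "fetch_door": 0.5}
print(should_act_early(confident))  # True
print(should_act_early(uncertain))  # False
```

The point of the sketch is only that "taking more chances" is a tunable threshold: how confident the robot must be before it joins the dance.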
In a later study, we had robots listen to people's stories. In that situation, participants weren't working with the robot but were using it to get something off their chest. We specifically chose stories that were negative or traumatic, and the robot would nod at the right moment or lean forward to show that it was listening and understood. The participants liked this robot more than one that seemed distracted or didn't react at all. They thought it was smarter. Afterward, they even felt more confident about themselves when going into a stressful task. This showed that people can reap psychological benefits when a robot uses its body even in a very, very small way to show empathy.
And a third project was a musical collaboration in which a pianist and a robot played a piece together. In one case, the robot just played; it didn’t exhibit any social expression. In the other, the robot joined the music socially by nodding its head and moving to the beat, looking at the pianist and then back, looking down when it was focused and then up when it was ready for more information. When we asked people to rate the music, they thought it sounded better when the robot used social behaviors than when it acted more mechanically. They thought the musicians were on the same page, as more of a duo than two separate layers. This shows us that body language is not just the icing on the cake but actually changes the taste of the cake. And the same sorts of benefits hold for robots’ cooperation and companionship with humans.
Human behavior is so complex. How do you decide how robots should act?
Hoffman: The way I think about it is very inspired by the arts, from my experience studying theater and playing jazz. Actors have developed tricks for turning what is essentially a very schematic and structured activity into one that appears natural and spontaneous. On stage, good actors look very natural; it looks as though the lights are on in the character's brain. One thing they do is begin a movement before they know where it's going to end. It's called the impulse versus the cue: they begin moving toward speech before their line arrives.
And then there’s improvisation, something I looked at in the musical domain, but I feel like it has its place anywhere. I think robots that could improvise at your fast-food restaurant chain would be more fluent and therefore better robotic team members — which will in turn make them more acceptable to the people working with them.
You describe effective human-robot teams as having what you call “collaborative fluency.” Is that what you’re talking about here?
Hoffman: When I started looking at robots that could anticipate what you wanted to do, I focused on robot-human teams that were building simulated cars together. A surprising finding that emerged was that even though people felt that this sort of robot was much better and smarter at doing a task, it took the team the same amount of time to finish the task. (Though in some of our research, they actually worked more quickly — it depends on the task.)
That’s when I came up with the concept of collaborative fluency. What was different about the interaction was that there was a stronger sense of teamwork, a sense that everybody was doing their part and committed to the same ends.
Think about how you interact with Siri or Google Voice or Alexa or the latest intelligent agent: It’s very much a back and forth, almost like a chess game, with one move following another. “Can I do this?” I get a response. But if you and I are talking about something we’re engaged in, if we’re a team that’s brainstorming about something, that’s not how our conversation goes. You interrupt me, I interrupt you, we build on each other, we complete each other’s sentences.
Collaborative fluency occurs when you have this sense of two or more people just rising together like a great football team or a world-class ballet. It's almost like one mind moving together. It's a very subjective feeling, but we're trying to deconstruct it into a mathematical computational model. I believe this is going to be the difference between robots that are going to be a joy to work with and robots that will be annoying to work with and just make you feel as though you have another job.
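[To make "deconstructing fluency into a computational model" concrete, here is a minimal sketch of the kind of quantity such a model might compute, loosely inspired by fluency measures discussed in the human-robot interaction literature, such as the fraction of a task in which both partners are active at once and the time the human spends idle. The interval-based representation and the specific metrics are assumptions for illustration, not Hoffman's published model.]

```python
# Hypothetical fluency metrics computed from activity logs, where each
# agent's activity is a list of (start, end) time intervals in seconds.

def total_overlap(a_intervals, b_intervals):
    # Sum the time during which both agents are active simultaneously.
    overlap = 0.0
    for a0, a1 in a_intervals:
        for b0, b1 in b_intervals:
            overlap += max(0.0, min(a1, b1) - max(a0, b0))
    return overlap

def fluency_metrics(human, robot, task_duration):
    human_active = sum(t1 - t0 for t0, t1 in human)
    concurrent = total_overlap(human, robot)
    return {
        # Fraction of the task in which human and robot move together.
        "concurrent_activity": concurrent / task_duration,
        # Fraction of the task the human spends waiting.
        "human_idle": 1.0 - human_active / task_duration,
    }

# Example: human works seconds 0-6, robot seconds 4-10, task lasts 10s.
m = fluency_metrics([(0, 6)], [(4, 10)], 10.0)
print(m)  # {'concurrent_activity': 0.2, 'human_idle': 0.4}
```

On such measures, the "dance" Hoffman describes would show up as high concurrent activity and low human idle time, whereas strict turn-taking with an intelligent agent would score near zero concurrency.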
What advantages would this sort of robot hold for businesses?
Hoffman: I would imagine that companies are interested in their employees' well-being. Designing robots that support well-being would probably also have tangible outcomes for retention and turnover.
If we’re building technology that interacts with people, we should think about human values and the well-being of the people working with these robots. In the end, we’re building technology to improve our lives. There’s no point in just making the world incredibly efficient and depressing.
There’s a lot of anxiety about the roles robots will play in our workplaces. Presumably having more agreeable robots will make the shift an easier one?
Hoffman: Right. Obviously, robots are going to replace people in some cases — it would be naïve to think that’s not part of the story — but in many cases, and we’re already seeing this, robots and humans are working together. I was at a Ford automotive plant recently and saw robots and people producing the same cars, and in my view, we will soon see this in a lot of settings. In addition, robots will be able to use data more effectively and make independent decisions so that a lot of the lower-level decision-making can be done autonomously, and only the higher-level decisions transferred to human workers. We can see this already in collaborative surgery, where the robot may stitch up a wound but doesn’t need to be told exactly what stitch to use and how to space it.
We also see more robots coming into retail right now — offering customers promotions, for instance. They're not only going to be facing customers, though; they're also going to be working with human salespeople and human stock workers. I believe that we'll see this in the restaurant business and in fast-food restaurants, too, with robots working alongside human kitchen workers.
In all these places, we want to create a situation that is beneficial to the people working with the robots by designing robots that support their psychological well-being. I believe that the way these robots interact socially and communicate is going to be a key factor to make this more a utopian and less a dystopian future.