I’ve played Dungeons and Dragons once, and really the best part of it (as most newbs say) was designing my character.
The Stomach, as I called him, was an evil half-orc, half-giant dragon shaman who wielded two double-headed battle axes and could vomit cones of acid. You can see how he was meant to “chew” and “digest” his enemies.
It didn’t work out as well as I had hoped.
More importantly, however, my character defined how I was to act during the game. Whenever my group interacted with townspeople, or even each other, my character’s stats, race, and alignment determined whether I thanked an old man for his help or kicked him in the ‘nads.
Robots also have roles to play, and like a D&D character, they’re restricted to the roles assigned to them. Initially, designers simply filled in the details of how they thought a “servant” bot would behave (respond to requests, etc.). But now they can fine-tune these roles with information about how robots are socially received, or rejected.
Though robots may be technically outfitted to complete social tasks, how humans perceive them affects how effective they actually are. A large part of how we feel about a robot comes from our tendency to anthropomorphize. As Christopher Bartneck et al. (2013) wrote in their paper “More Human Than Human: Does The Uncanny Valley Really Matter?”:
Anthropomorphism is a common phenomenon known already in ancient times. It is not a thing of the past, but still has a profound impact on major aspects of our lives and on research in [artificial intelligence] and [human-robot interactions].
Researchers like Bartneck et al. in the field of human-robot interactions (HRI) seek to maximize a robot’s humanness by studying how a robot’s presentation affects human perception and attitude towards it. A robot’s believability relies on how well it imitates human behavior, which encompasses appearance, movements, and ability to communicate emotions or ideas.
Judgement at First Sight
A robot’s physical appearance is the first obvious signal about its purpose.
A Furby, for example, is small and fuzzy, and its general lack of articulated limbs indicates that it’s probably useless. Just by looking at it, you know you’re not going to buy it to lift a disabled person, a job for the muscle-builder-looking RIBA-II.
But technological limits can, unfortunately, shape how humans perceive and feel about robots. Even though humans do like human-like robots, no one has yet built a perfect humanoid (did someone say Cylon?).
When androids pass the first few checkpoints of believability but fail at something deceptively complex, like a fluid nose scratch, researchers have found that positive feelings plunge far more dramatically than when a non-humanoid robot fails. This sudden backlash against an android’s humanity is what’s called the uncanny valley, or, as Bartneck suggests, more of an uncanny cliff.
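The difference between a valley and a cliff is easy to picture with toy affinity curves. These functions are purely illustrative sketches of my own (not from Bartneck’s data): in a valley, affinity dips sharply near high human-likeness but recovers for near-perfect androids; on a cliff, once the dip starts, affinity never climbs back.

```python
def valley_affinity(h):
    """Toy 'uncanny valley' curve (illustrative, not empirical).
    h is human-likeness in [0, 1]; returns a rough affinity score."""
    if h < 0.7:
        return h                       # affinity grows with human-likeness
    if h < 0.85:
        return 0.7 - 4 * (h - 0.7)     # steep dip: the valley floor
    return 0.1 + 6 * (h - 0.85)        # recovery toward full humanness

def cliff_affinity(h):
    """Toy 'uncanny cliff' curve: same rise and dip, but no recovery."""
    if h < 0.7:
        return h
    return max(0.0, 0.7 - 4 * (h - 0.7))

# A near-perfect android scores well under the valley model
# but bottoms out under the cliff model.
print(valley_affinity(0.99), cliff_affinity(0.99))
```

Under the cliff hypothesis, getting an android 99% of the way to human buys you nothing; under the valley hypothesis, it buys you almost everything, which is why the distinction matters to designers.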
And you know what the researchers mean. Look at our best androids so far and say you aren’t just a little wigged out when their movements are wonky.
Robot dogs, however, were not affected by this phenomenon. And to top that off, S. Shyam Sundar et al. (2013) found that people strongly anthropomorphized a tissue box that said, “Bless you” when hearing a sneeze — even though the box clearly lacked all physical trademarks of humanity. Which just goes to show that a McDonald’s uniform or an Italian business suit might affect the way we feel and talk to a certain person, but personality really might take the cake.
The more human a robot seems, the stronger its social presence.
“Social presence occurs when technology users do not notice the para-authenticity of mediated humans and/or the artificiality of simulated nonhuman social actors,” wrote Ki Joon Kim, doctoral candidate in the department of interaction science, Sungkyunkwan University, Korea. “Thus, social presence is particularly important in HRI and areas of AI because the ultimate goal of designing and interacting with social robots is perhaps to provide users with strong feelings of socialness.”
In Kim’s study with Sundar, participants attributed more human qualities to a robot when it offered care to them than when it simply needed to be taken care of. The robot’s two roles — caretaker and care-receiver — determined how it behaved, how it interacted with humans, and ultimately how humans felt about it.
Which brings me back to my fictional D&D character. Had The Stomach been a good half-elf monk with healing powers, I (and my companions) probably would’ve felt differently about him. That’s all it takes to change perception and make The Stomach more endearing, and he doesn’t even have a tangible body, with its buggy movements and expressions, to challenge my anthropomorphic efforts.
That we are engineering an authentic human presence brings us to the old questions laid out by science fiction: if we can synthesize a presence through pleasing aesthetics, fluid gestures, emotional reactions, and an occupational framework, what makes us human?
And, let’s be serious: would I have ever considered The Stomach to be human?