Escaping the uncanny valley is a major challenge when it comes to recreating humans (both digitally and physically). The uncanny valley is a problem for both roboticists and digital artists, because it brushes against the core of what we consider “human.” If you’re not familiar with the term, the uncanny valley refers to robots and CGI models that make us uneasy because they seem almost human.
Notably, the uncanny valley effect doesn’t occur when something is clearly not human (such as a cartoon)—only when it looks almost, but not quite, human. The prevailing theory behind the phenomenon is that our brains categorize the model as human, but as a human with something seriously wrong—like psychopathy or death. Researchers at the University of Cambridge are trying to get past the uncanny valley, so they’re experimenting with a robot that simply mimics the emotions of a real human.
As you can see in the video, they haven’t yet climbed out of the uncanny valley. The robot, called Charles, looks more like a cadaver being manipulated by a puppeteer than a living human. But it could be a step forward in solving two problems: a robot’s ability to sense a human’s emotions, and its ability to display emotions itself.
Charles works by scanning the face of a human subject and using computer vision to track the subject’s facial movements. Everything from the position of the subject’s eyebrows to the shape of their mouth is tracked. A system of servos then moves Charles’ face to match. Charles does this fairly well, but obviously remains in the uncanny valley, which gives the researchers an opportunity to study why and to figure out ways to make robots more lifelike in the future.
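The basic idea of that pipeline — track a set of facial features, then map each one to a servo position — can be sketched in a few lines. To be clear, this is a hypothetical illustration, not the researchers’ actual software: the feature names, servo pulse ranges, and linear mapping are all assumptions, and a real face tracker (omitted here) would supply the normalized feature values.

```python
# Illustrative sketch: map normalized facial-feature values (0.0–1.0),
# as a face tracker might report them, onto servo pulse widths.
# All feature names and pulse ranges below are made-up examples.

SERVO_RANGES = {
    # feature name: (pulse width at 0.0, pulse width at 1.0), microseconds
    "brow_left": (1000, 2000),
    "brow_right": (1000, 2000),
    "jaw_open": (1200, 1800),
    "mouth_width": (1100, 1900),
}

def features_to_servo_pulses(features):
    """Linearly map each normalized feature (0..1) to a servo pulse width."""
    pulses = {}
    for name, value in features.items():
        lo, hi = SERVO_RANGES[name]
        value = min(1.0, max(0.0, value))  # clamp out-of-range tracker noise
        pulses[name] = round(lo + value * (hi - lo))
    return pulses

# Example: slightly raised left brow, half-open jaw.
print(features_to_servo_pulses({"brow_left": 0.6, "jaw_open": 0.5}))
```

A real system would also need to smooth the tracker output over time, since feeding raw frame-by-frame jitter straight to the servos would make the face twitch unnaturally.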