Lifelike animation heralds new era for computer games

'Emily' will set a new precedent for photo-realistic characters in video games and films, says her creator, Image Metrics

Extraordinarily lifelike characters are to begin appearing in films and computer games thanks to a new type of animation technology.

Emily - the woman in the above animation - was produced using a new modelling technology that enables the most minute details of a facial expression to be captured and recreated.

She is considered to be one of the first animations to have overleapt a long-standing barrier known as the 'uncanny valley' - the observation that a computer-generated character appears unsettling, rather than more lifelike, as it approaches (but does not quite reach) human likeness.

Researchers at a Californian company which makes computer-generated imagery for Hollywood films started with a video of an employee talking. They then broke the facial movements down into dozens of smaller movements, each of which was given a 'control system'.

The team at Image Metrics - which produced the animation for the Grand Theft Auto computer game - then recreated the gestures, movement by movement, in a model. The aim was to overcome the traditional difficulties of animating a human face, for instance that the skin looks too shiny, or that the movements are too symmetrical.

"Ninety per cent of the work is convincing people that the eyes are real," Mike Starkenburg, chief operating officer of Image Metrics, said.

"The subtlety of the timing of eye movements is a big one. People also have a natural asymmetry - for instance, in the muscles in the side of their face. Those types of imperfections aren't that significant but they are what makes people look real."

Previous methods for animating faces have involved putting dots on a face and observing the way the dots move, but Image Metrics analyses facial movements at the level of individual pixels in a video, meaning that the subtlest variations - such as the way the skin creases around the eyes - can be tracked.
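To make the contrast concrete, here is a minimal, toy sketch of what tracking motion at the pixel level (rather than at a handful of marker dots) involves: finding where a small patch of pixels from one frame has moved to in the next by minimising the sum of squared differences over a search window. This is an illustration of the general idea only, not Image Metrics' actual method, which is proprietary and far more sophisticated.

```python
import numpy as np

def track_patch(prev, curr, y, x, size=5, search=3):
    """Find where a small pixel patch from `prev` moved to in `curr`
    by minimising the sum of squared differences (SSD) over a small
    search window. A toy version of per-pixel motion tracking."""
    patch = prev[y:y + size, x:x + size]
    best_err, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            # Skip candidate positions that fall outside the frame.
            if yy < 0 or xx < 0 or yy + size > curr.shape[0] or xx + size > curr.shape[1]:
                continue
            cand = curr[yy:yy + size, xx:xx + size]
            err = float(np.sum((patch - cand) ** 2))
            if err < best_err:
                best_err, best_off = err, (dy, dx)
    return best_off

# Synthetic example: a bright 5x5 blob shifts down by 2 pixels and right by 1.
prev = np.zeros((32, 32))
prev[10:15, 10:15] = 1.0
curr = np.zeros((32, 32))
curr[12:17, 11:16] = 1.0
print(track_patch(prev, curr, 10, 10))  # (2, 1)
```

Repeating this for every pixel of a face, frame after frame, is what lets subtleties like skin creasing be recovered - and is why the computation is so demanding.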

"There's always been control systems for different facial movements, but say in the past you had a dial for controlling whether an eye was open or closed, and in one frame you set the eye at 3/4 open, the next 1/2 open etc. This is like achieving that degree of control with much finer movements.

"For instance, you could be controlling the movement in the top 3-4mm of the right side of the smile," Mr Starkenburg said.
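The 'dial' idea in the quote above can be sketched in a few lines of code. The control names and structure here are purely illustrative (nothing in the article specifies Image Metrics' rig), but they show the difference between one coarse dial per feature and many fine-grained dials, each set per frame:

```python
from dataclasses import dataclass, field

@dataclass
class FaceRig:
    """Hypothetical facial rig: each named control is a dial in [0, 1],
    set independently on every frame of the animation."""
    controls: dict = field(default_factory=dict)

    def set(self, name, value):
        # Clamp every dial to the valid [0, 1] range.
        self.controls[name] = max(0.0, min(1.0, value))

rig = FaceRig()
# Old-style coarse control: a single dial for the whole eye,
# e.g. 3/4 open on one frame, 1/2 open on the next.
rig.set("right_eye_open", 0.75)
# Finer modern control: a dial for just a few millimetres
# of one side of the smile.
rig.set("smile_right_upper", 0.4)
print(rig.controls)
```

Multiplying such dials across dozens of tiny facial regions, per frame, is what gives the fine-grained control Starkenburg describes.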

For many years now, animators have come up against a barrier known as the "uncanny valley", which refers to how, as a computer-generated face approaches human likeness, it begins to take on a corpse-like appearance similar to that in some horror films.

As a result, computer game animators have purposely simplified their creations so that the players realise immediately that the figures are not real.

"There came a point where animators were trying to create a face and there was a theory of diminishing returns," said Raja Koduri, chief technology officer for graphics at AMD, the chip-maker.

AMD last week released a new chip with a billion transistors that will be able to show off creations such as Emily by allowing a much greater number of computations per second. "If you're trying to process the graphics in a photo-realistic animation, in real-time, there's a lot of computation involved," said Mr Koduri.

He said that AMD's new chip - the Radeon HD 4870 X2 - was able to perform 2.4 teraflops (trillion floating-point operations per second), meaning it had a capability similar to a computer that - only 12 years ago - would have filled a room. AMD's chip fits inside a standard PC.

But he said that the line between what was real and what was rendered would not be blurred completely until 2020.

There have been several advances in computer-generated imagery (CGI) in recent years. One project at the University of Southern California involves placing an actor inside a giant metallic orb which fires more than 3,000 lights from a range of different angles - and with different degrees of intensity - at the actor while he or she is being filmed performing an action.

The image captured by the camera can then be transported into another piece of film and the lighting effect (on the actor) chosen according to the ambient lighting in the scene.

[via timesonline]
