Meet The First Synthespians


An animation company has finally managed to create a computer-generated human that most viewers can't distinguish from a real person. Forget the clunky CG characters in most animated movies today - in a few years, you could be watching animated films that are almost indistinguishable from "live-action" ones. Click through to watch a slightly disturbing video.

The video actually uses motion capture of a human actor to create a lifelike animation. Image Metrics, whose animations were used in the Grand Theft Auto game, has developed a sure-fire way of making human faces look believable: apparently it's all about the eyes, and the little asymmetries of the human face. There's a lot more fine control over the details in this new system, according to chief operating officer Mike Starkenburg:

There's always been control systems for different facial movements, but say in the past you had a dial for controlling whether an eye was open or closed, and in one frame you set the eye at 3/4 open, the next at 1/2 open, etc. This is like achieving that degree of control with much finer movements... For instance, you could be controlling the movement in the top 3-4mm of the right side of the smile.
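To picture the difference in granularity Starkenburg is describing, here's a rough sketch of a coarse "eye open" dial versus a handful of finer regional controls. All of the control names below are invented for illustration and aren't Image Metrics' actual scheme.

```python
# Toy illustration of coarse vs. fine facial controls; all names invented.

# Old-style coarse control: one dial per broad feature, keyed per frame.
coarse_keys = {
    "right_eye_open": {1: 0.75, 2: 0.50},  # frame 1: 3/4 open, frame 2: 1/2 open
}

# Finer control: many regional dials, so the top few millimetres of one
# side of the smile can move without touching anything else.
fine_keys = {
    "smile_right_upper":  {1: 0.30, 2: 0.35},
    "smile_right_corner": {1: 0.10, 2: 0.12},
    "smile_left_upper":   {1: 0.05, 2: 0.05},
}


def pose_at(keys: dict, frame: int) -> dict:
    """Collect the control values that describe the face pose at one frame."""
    return {control: values[frame] for control, values in keys.items()}


print(pose_at(coarse_keys, 2))  # one blunt value for the whole eye
print(pose_at(fine_keys, 2))    # several small values for one region of the mouth
```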

My question is, how long is it before we're all having video chat with slightly idealized animations of our friends? [London Times via Technovelgy]


DISCUSSION

how about we watch this in HD?

[rcpt.yousendit.com]

and here is an interview with some of the Image Metrics people

[media.fxguide.com]

As someone who is working with Image Metrics right now, I can tell you that their technology is extremely impressive.

The Image Metrics process is pretty simple, and it's all dependent on the rig: if you build a rig that is capable of the appropriate range of expression and capture the performance you want, Image Metrics can generate animation that will drive your rig with a level of detail that is really impressive.
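Roughly speaking (and this is just a toy illustration, not IM's actual pipeline), you can picture the solved performance as per-frame control curves that the client's rig simply plays back:

```python
# Toy illustration only: a 'solved' performance as per-frame control curves
# that get applied to whatever rig the client built. Not IM's real pipeline.

from typing import Dict, List

Frame = Dict[str, float]  # control name -> value for a single frame


def drive_rig(rig: Dict[str, float], performance: List[Frame]) -> None:
    """Play the solved control curves back onto the rig, frame by frame."""
    for frame_number, frame in enumerate(performance, start=1):
        for control, value in frame.items():
            if control in rig:  # the rig defines the reachable range of expression
                rig[control] = value
        # ...evaluate/render the rig here...
        print(frame_number, rig)


# The client's rig exposes whatever controls it was built with...
rig = {"jaw_open": 0.0, "smile_right_upper": 0.0, "brow_left_raise": 0.0}

# ...and the solve is only as useful as the rig it drives.
performance = [
    {"jaw_open": 0.2, "smile_right_upper": 0.1},
    {"jaw_open": 0.4, "smile_right_upper": 0.3, "brow_left_raise": 0.2},
]

drive_rig(rig, performance)
```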

The 'Emily' clip is, from an animation standpoint, just about the best I've seen. Most of the criticisms, however valid, have been about the tracking and shading, and the manner in which the clip is presented puts the viewer in an inappropriate mindset right off the bat.

This demo, at least from my point of view (again, as someone who is working with IM right now), is a very impressive demo of the sort of 'baseline' results that IM can deliver: a digital recreation of a human facial performance. Where it goes from there is up to the client - take the actor's performance and put it on an 'aged' version of her face, or on a different face, etc.

I can't talk about what we're doing, but I can say that the results we are getting are pretty remarkable, and we're doing some fairly extreme stuff.

The real question here is about scale and cost-effectiveness: can an animator, given a rig that is capable of the appropriate range of expression, produce a performance that is as good as or better than what IM has demonstrated with 'Emily'? The answer is, of course, yes. But can it be done in the same amount of time? = $. Can it be revised as quickly? = $. Can the animation be retargeted to another character?
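To give a sense of what retargeting means in practice, here's a toy example of the same solved curves replayed on a different character's rig through a simple name/scale map - purely illustrative, not any studio's actual tool:

```python
# Toy illustration of retargeting: replay the same solved curves on a
# different character's rig through a name/scale map. Purely hypothetical.

performance = [
    {"jaw_open": 0.4, "smile_right_upper": 0.3},
    {"jaw_open": 0.6, "smile_right_upper": 0.5},
]

# Map each source control to the target character's control, with a gain
# to compensate for different facial proportions.
retarget_map = {
    "jaw_open":          ("creature_jaw", 0.8),
    "smile_right_upper": ("creature_sneer_R", 1.2),
}


def retarget(frames, mapping):
    """Rename and rescale each frame's control values for the target rig."""
    result = []
    for frame in frames:
        new_frame = {}
        for control, value in frame.items():
            if control in mapping:
                target_control, gain = mapping[control]
                new_frame[target_control] = min(1.0, value * gain)
        result.append(new_frame)
    return result


print(retarget(performance, retarget_map))
```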

The question 'can something like this ever replace real actors?' is a valid one, and technology like this (and others) will replace actors - not your Harrison Fords, but extras had better find ways to supplement their income. Crowds are already being replaced digitally, and soon CG will be able to replace 'background' actors (the other cops in the police station, etc.). Where this stuff really shines (and where it will be used more and more in the coming years) is in performance augmentation; The Curious Case of Benjamin Button is a good example (they're using Mova [www.mova.com]).

As an in-house test and as a promotional clip, this could have been presented more effectively, but the underlying technology represents the 'cutting edge' of digital performance.