Introduction
To perform motion capture of faces or bodies, you will need at least two cameras trained on the performer from different angles. Since the performer's head or limbs rotate, tracked features may turn out of view of the first two cameras, so you may need additional cameras to capture views from behind the actor.
Tip: if you can get the field of view and accuracy you need with only two cameras, the job will be simpler, because you can use stereo features, which are faster to set up and solve since only two cameras are involved.
The fields of view of the cameras must be large enough to encompass the entire motion the actor will perform, without the cameras themselves moving to follow the performer. (Experts can use SynthEyes for motion capture even when the cameras move, but only with care.)
You will need to perform a calibration process ahead of time, to determine the exact position and orientation of the cameras with respect to one another (assuming they are not moving). We’ll show you one way to achieve this, using some specialized but inexpensive gear.
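To give a feel for what calibration ultimately produces, here is a minimal numpy sketch, not SynthEyes' actual procedure: given the 3D positions of the same calibration markers measured in each camera's coordinate frame, the Kabsch (SVD) method recovers the rigid rotation and translation relating the two cameras. The function name and data layout are illustrative assumptions.

```python
import numpy as np

def relative_pose(points_cam1, points_cam2):
    """Fit the rigid transform (R, t) that maps camera-1 coordinates into
    camera-2 coordinates from matched 3D marker positions (Kabsch method).
    Both inputs are (N, 3) arrays of corresponding points."""
    c1 = points_cam1.mean(axis=0)          # centroids of each point cloud
    c2 = points_cam2.mean(axis=0)
    H = (points_cam1 - c1).T @ (points_cam2 - c2)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c2 - R @ c1
    return R, t
```

In practice the marker positions would come from the calibration shoot; the same math underlies any rigid alignment of two measured point sets.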
Very Important: You'll have to ensure that nobody knocks the cameras out of calibration while you shoot calibration or live-action footage, or between takes.
You'll also need a way to resynchronize the footage from all the cameras in post; we'll describe one way to do that.
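One common resynchronization approach (an illustrative sketch, not necessarily the method this manual describes later) is to cross-correlate the audio tracks of the two recordings, for example around a clap or slate hit, and read the frame offset from the correlation peak:

```python
import numpy as np

def sync_offset(audio_a, audio_b):
    """Estimate how many samples audio_b lags audio_a, using the peak of
    the full cross-correlation of the two (mono) audio tracks."""
    a = audio_a - np.mean(audio_a)      # remove DC bias before correlating
    b = audio_b - np.mean(audio_b)
    corr = np.correlate(b, a, mode="full")
    # Peak index, shifted so lag 0 sits at len(a) - 1, gives the lag of b.
    return int(np.argmax(corr)) - (len(a) - 1)
```

Dividing the sample offset by the audio sample rate, then multiplying by the frame rate, converts it to a frame offset between the two cameras.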
Generally the performer will wear tracker markers, to ensure the best possible and most reliable data capture. The exception is when one of the camera views must also be used as part of the final shot, for example a talking head that will have an extreme helmet added. In that case, markers can be placed where the added effect will hide them; in locations that don't permit markers, either natural facial features can be tracked (HD or film source!) or markers can be used and then removed as an additional effect.
After you solve the calibration and tracking in SynthEyes, you will wind up with a collection of trajectories showing the path through space of each individual feature.
When you do moving-object tracking, the trackers are all rigidly connected to one another, but in motion capture, each tracker follows its own individual path.
You will bring all these individual paths into your animation package and set up a rigging system that makes your character move in response to the tracker paths. That rigging might consist of expressions, Look At controllers, etc.; it's up to you and your animation package.
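As a small illustration of what a Look At controller computes (a generic sketch, independent of any particular animation package), here is the standard construction of a rotation matrix that aims a bone's local Z axis from its pivot toward a tracker position:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Rotation matrix whose columns are the bone's local X, Y, Z axes,
    with Z aimed from `eye` toward `target` and Y kept near `up`."""
    up = np.asarray(up, dtype=float)
    fwd = np.asarray(target, dtype=float) - np.asarray(eye, dtype=float)
    fwd = fwd / np.linalg.norm(fwd)          # local Z: toward the tracker
    right = np.cross(up, fwd)                # local X: perpendicular to both
    right = right / np.linalg.norm(right)
    new_up = np.cross(fwd, right)            # local Y: re-orthogonalized up
    return np.column_stack([right, new_up, fwd])
```

Evaluating this per frame against a tracker's path gives the bone's animated orientation; expression systems in most packages implement essentially this math.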
Alternatively, you can set up a rig in SynthEyes using the GeoH tracking facilities. Attach your motion-capture trackers to the rig, and it will be animated to match up with the trackers in 3D. You can then export the rig in BVH format and import it into character animation software.
©2024 Boris FX, Inc.