Tips & Tricks¶
Optimize your Sensor Input¶
Here are some hints on how to optimize your sensor input.
Try to minimize the number of holes in the depth data. Holes are usually caused by other light sources (such as the sun) or by the actor being too far from the sensor. Sunlight tends to throw off depth sensors based on infrared projection, producing poor results. If you are working close to a window, try closing the blinds.
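As a quick sanity check on your setup, you can measure how many holes a depth frame contains. The sketch below assumes the common convention that unmeasurable pixels are reported as 0; the function name is illustrative, not part of any sensor API.

```python
import numpy as np

def depth_hole_ratio(depth_frame: np.ndarray) -> float:
    """Fraction of invalid (zero) pixels in a depth frame.

    Most depth sensors report unmeasurable pixels as 0, so the
    zero ratio is a quick proxy for how many holes the frame has.
    """
    return float(np.count_nonzero(depth_frame == 0) / depth_frame.size)

# Example: a synthetic 4x4 frame (depth in mm) with 4 missing pixels.
frame = np.array([
    [500, 510,   0, 520],
    [505,   0, 515, 525],
    [  0, 512, 518,   0],
    [508, 514, 522, 530],
], dtype=np.uint16)

print(depth_hole_ratio(frame))  # 4 of 16 pixels are holes -> 0.25
```

If the ratio stays high after closing the blinds, the actor is likely outside the sensor's working range.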
Ideally, the actor should be at about 60 cm from the Kinect, OpenNI1, and OpenNI2 sensors, and at about 30 cm from the Intel RealSense sensor. Moving much closer or further away may increase the number of holes.
The lighting should be even, and the image should not be saturated. The exposure can be adjusted in your sensor settings; see the section General sensor settings.
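A simple way to detect clipped highlights before adjusting exposure is to count near-maximum pixels in a grayscale view of the color stream. The threshold and function name below are assumptions for illustration, not application settings.

```python
import numpy as np

def saturated_fraction(gray: np.ndarray, threshold: int = 250) -> float:
    """Fraction of pixels at or above the saturation threshold
    in an 8-bit grayscale image."""
    return float(np.count_nonzero(gray >= threshold) / gray.size)

# A frame with one blown-out highlight region.
img = np.full((100, 100), 120, dtype=np.uint8)
img[:10, :10] = 255  # saturated patch covering 1% of the image

if saturated_fraction(img) > 0.005:
    print("Reduce exposure: highlights are clipping.")
```

Lowering the exposure until this fraction approaches zero usually gives the even, unsaturated lighting described above.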
The sensors tend to produce misaligned data. You can correct this using the principal point offset (see General sensor settings). Please note that this setting also has a significant impact on training; if you change it, you should scan your expressions again and build a new profile.
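To see why the principal point matters, consider the standard pinhole camera model: the principal point (cx, cy) is where the optical axis meets the image plane, and shifting it shifts every projected pixel by the same amount. The symbols below are textbook notation, not the application's actual calibration parameters.

```python
def project(x: float, y: float, z: float,
            fx: float, fy: float,
            cx: float, cy: float) -> tuple[float, float]:
    """Standard pinhole projection: (fx, fy) are focal lengths in
    pixels, and the principal point (cx, cy) offsets the result
    on the image plane."""
    return fx * x / z + cx, fy * y / z + cy

# With a 640x480 image, the nominal principal point is the center.
u, v = project(0.1, 0.05, 0.6, fx=570.0, fy=570.0, cx=320.0, cy=240.0)
print(u, v)  # (415.0, 287.5)

# A principal point offset of (+4, -2) pixels shifts every projection
# by exactly that amount:
u2, v2 = project(0.1, 0.05, 0.6, fx=570.0, fy=570.0, cx=324.0, cy=238.0)
print(u2 - u, v2 - v)  # (4.0, -2.0)
```

Because the offset moves every reprojected point uniformly, a wrong value systematically misaligns the depth and color data, which is why a changed offset invalidates a previously trained profile.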
Figure: Misaligned data (left) vs. corrected offset (right).
The tracking works best if the actor is facing the camera. Since the chin is often a reliable reference, best results are achieved when the sensor is positioned slightly below the level of the face.
Achieve the Best Tracking Results¶
Make sure that your camera is set up correctly. Check out the tips to set up your camera (see section Optimize your Sensor Input). In particular:
- Lighting: Uniform, diffuse lighting without strong highlights works best. Make sure that no part of the image is saturated.
- Distance: The actor should be at about 30 cm distance (with the Intel RealSense sensor) or 60 cm distance (for other sensors) and facing the sensor. The best results are achieved when the sensor is positioned slightly below the center of the face. But make sure not to get too close to the sensor; getting too close often produces holes in the depth data.
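You can verify the working distance directly from the depth data by checking the median measured depth against the recommended value. This is an illustrative helper, not part of the application; it assumes depth in millimeters with holes reported as 0.

```python
import numpy as np

def actor_distance_ok(depth_frame: np.ndarray,
                      target_mm: float,
                      tolerance_mm: float = 100.0) -> bool:
    """Check whether the median measured depth is near the
    recommended working distance. Holes (zeros) are ignored."""
    valid = depth_frame[depth_frame > 0]
    if valid.size == 0:
        return False
    return abs(float(np.median(valid)) - target_mm) <= tolerance_mm

# RealSense: aim for ~300 mm; other sensors: ~600 mm.
frame = np.full((4, 4), 620, dtype=np.uint16)  # actor at ~62 cm
print(actor_distance_ok(frame, target_mm=600.0))  # True
```

The median is used rather than the mean so that a few stray background pixels do not skew the estimate.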
It is strongly recommended to use the same sensor for training and tracking, because the calibration can vary slightly between them. It is particularly important that the principal point offset setting and the OpenNI2 resolution setting are not changed between training and tracking.
- Occlusions: Actors with long hair should use a head band to keep the hair back. Since the forehead is a relevant cue for tracking, a bonnet can help to keep it clear even for shorter hair.

When the lighting is good, move the Geometry - Texture slider in the Track panel more towards Texture to get a more responsive result with less jitter.
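Conceptually, the Geometry - Texture slider trades off how much the tracker trusts depth geometry versus color texture. The weighting below is a hypothetical sketch of such a blend; the application's actual tracking cost is internal and not documented here.

```python
def tracking_cost(geometry_error: float, texture_error: float,
                  slider: float) -> float:
    """Blend a geometry-fit term and a texture-fit term.

    `slider` in [0, 1]: 0 weights geometry only, 1 texture only,
    mirroring the Geometry - Texture slider in the Track panel.
    (Hypothetical weighting, for illustration.)
    """
    if not 0.0 <= slider <= 1.0:
        raise ValueError("slider must be in [0, 1]")
    return (1.0 - slider) * geometry_error + slider * texture_error

print(tracking_cost(2.0, 0.5, 0.0))  # 2.0: geometry only
print(tracking_cost(2.0, 0.5, 0.8))  # 0.8: mostly texture
```

Under good lighting the texture term is reliable, so weighting it more heavily reduces the jitter that noisy depth geometry would otherwise introduce.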
Refining the track produces significantly more accurate and smoother output. It works best if you manually mark up the features on a few frames.