For accurate facial tracking it is essential to set up the camera carefully, since the accuracy of the reconstructed profile and of the recordings depends on the quality of the input. We recommend always checking the 3D data in the Setup view first. Click the camera symbol labeled Setup.
On the right side you can see the rendered image. If 3D data is shown (as in the figure The Setup module, where we show the surface), you can zoom in and out with the mouse wheel, rotate the view by dragging with the left mouse button, and translate by dragging with the right mouse button.
In the middle column at the top, a list of the installed drivers is displayed. Once a camera is plugged in, the corresponding driver can be activated by ticking its checkbox, and the connected sensors will appear below. You can change the camera and camera driver used by faceshift studio by unchecking and checking the appropriate checkboxes. Note that faceshift studio does not support activating multiple sensors at the same time.
When a connected sensor is selected, different settings will appear below the display selection. We describe each of them in the following sections.
Please refer to Optimize your Sensor Input to learn how to get optimal data from the sensor, or watch the corresponding tutorial video:
Tutorial - Setup View
The following data can be displayed:
- None: This will not show any data
- Depthmap: A grayscale image of the depth
- Surface: The scene as a shaded 3D object
- Textured Surface: The scene as a colored and shaded 3D object
- Point Cloud: Each datapoint from the sensor rendered independently
- Colored Point Cloud: Each datapoint from the sensor rendered independently and colored with the video
- Video: The color stream from the sensor
- Surface in Video: The 3D surface overlaid on top of the color stream
- Infrared: The infrared stream from the sensor (only for the Intel RealSense sensor)
- Surface in Infrared: The 3D surface overlaid on top of the infrared stream (only for the Intel RealSense sensor)
Examples of the display options listed above: Depthmap, Surface, Textured Surface, and Video Image.
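To give an idea of how the Point Cloud display relates to the Depthmap, the following sketch back-projects each depth pixel into a 3D point using the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) below are illustrative placeholder values, not faceshift's actual sensor calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (meters) into an N x 3 point cloud.

    fx, fy, cx, cy are pinhole intrinsics; the defaults here are
    placeholder values for a 640x480 sensor, for illustration only.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Keep one 3D point per valid (non-zero) depth sample
    return np.stack([x, y, z], axis=-1)[z > 0]

depth = np.zeros((480, 640))
depth[240, 320] = 1.0  # a single sample one meter in front of the camera
points = depth_to_point_cloud(depth)  # one point near [0.001, 0.001, 1.0]
```

The colored point cloud display additionally looks up the color stream at each pixel to tint the corresponding point.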
General sensor settings
The following options are common to the Kinect, OpenNI1, OpenNI2 and Intel RealSense sensors.
Enables or disables the streaming of frames from the sensor.
Scales the frames coming from the sensor. Note that while downscaling improves framerate, it reduces the quality of the received images and will therefore degrade tracking quality if the output resolution is too low.
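The framerate/quality trade-off can be sketched as follows. This is a naive decimation for illustration; the actual resampling faceshift uses is not documented here:

```python
import numpy as np

def scale_frame(frame, scale):
    """Downscale a frame by keeping every 1/scale-th pixel (naive
    decimation, for illustration). scale=0.5 keeps a quarter of the
    pixels, roughly quartering the per-frame processing cost."""
    step = int(round(1.0 / scale))
    return frame[::step, ::step]

frame = np.ones((480, 640))
small = scale_frame(frame, 0.5)  # shape (240, 320): a quarter of the data
```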
Allows rotating the image and the depth map. Use this if your sensor is mounted vertically or upside down.
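Conceptually, this rotates every incoming frame in 90-degree steps before tracking, along these lines (a sketch, not faceshift's implementation):

```python
import numpy as np

def rotate_frame(frame, quarter_turns):
    """Rotate a frame counter-clockwise in 90-degree steps, e.g. to
    compensate for a sensor mounted sideways (1 or 3 quarter turns)
    or upside down (2 quarter turns)."""
    return np.rot90(frame, k=quarter_turns)

frame = np.zeros((480, 640))
portrait = rotate_frame(frame, 1)  # shape becomes (640, 480)
```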
Principal Point Offset
Here you can adjust a shift between the color image and the depth image. Use this if the depth data and color data are not aligned correctly. Sharp horizontal and vertical edges in the scene are useful for checking this alignment (see section Color offset).
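The effect of such an offset can be sketched as shifting one image by a few pixels relative to the other. The helper and offsets below are illustrative; in faceshift you adjust the offset interactively while watching a sharp edge:

```python
import numpy as np

def apply_offset(color, dx, dy):
    """Shift the color image by (dx, dy) pixels so it lines up with the
    depth map. Positive dx shifts right, positive dy shifts down.
    np.roll wraps pixels around at the border, which is acceptable for
    a small alignment shift of a few pixels."""
    return np.roll(np.roll(color, dy, axis=0), dx, axis=1)

color = np.arange(9).reshape(3, 3)
shifted = apply_offset(color, 1, 0)  # first row [0 1 2] becomes [2 0 1]
```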
The cutoff function restricts the 3D data to a certain depth range: everything farther away from the sensor than the given cutoff distance is discarded. Note that the cutoff does not change the tracking quality, but it may slightly improve performance by discarding geometry in the background, e.g. walls and objects behind you. If set to 0, the cutoff is disabled.
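In essence, the cutoff masks out depth samples beyond the chosen distance, something like this sketch (values in meters are illustrative):

```python
import numpy as np

def apply_cutoff(depth, cutoff_m):
    """Discard depth samples farther than cutoff_m meters by setting
    them to 0 ("no data"). A cutoff of 0 disables the filter."""
    if cutoff_m == 0:
        return depth
    return np.where(depth > cutoff_m, 0.0, depth)

depth = np.array([0.8, 1.2, 3.5])  # e.g. face, shoulder, wall behind you
near = apply_cutoff(depth, 2.0)    # the 3.5 m wall sample is zeroed out
```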
Reset to default values
Use this button to reset all settings for the selected sensor to their default values.
OpenNI1 and OpenNI2
If checked, the depth and video frames from the sensor are synchronized (captured with the same timestamp). We recommend keeping this checked. Note: in rare situations the data input freezes due to the sensor driver; in this case, briefly uncheck Frame Sync and then check it again.
Adjusts the exposure to the lighting. To set it manually, both Auto Exposure and Auto Whitebalance need to be turned off.
Adjusts the gain. To set it manually, both Auto Exposure and Auto Whitebalance need to be turned off.
If the Exposure and Gain sliders do not work, you may have to upgrade your sensor’s firmware (see Firmware Downloads).
Automatically adjusts the white balance.
This flips both the color image and the depth map left-right.
Changes the quality of the color image. By default, faceshift studio uses High Resolution.
This sensor offers a higher framerate of 60 fps and can also stream infrared (IR) frames. As the IR frames show the scene under constant illumination, provided by the sensor's IR laser, they can serve as useful additional cues for face tracking. Integrating these cues into our tracking is currently under development and not yet available in faceshift studio 2015.1. However, we already enable the streaming and saving of IR frames by default, so that recordings can be re-tracked with IR cues once faceshift studio fully supports them.
For maximum framerate performance with this sensor, we recommend scaling RGB and Depth to 0.5 and changing the frame width and height in "Preferences > Tracking > General" to 512.
The number of frames per second.
The power of the laser which influences the depth data quality.
Controls the quality of the depth frames. For better framerate performance, we recommend setting this option to Coarse: 7 patterns; for better tracking quality, to Finest: 9 patterns.
Controls the smoothness of the depth data.
Depth Confidence Threshold
Sets the confidence the sensor requires for incoming depth data. Discarding low-confidence samples acts like a cutoff on the depth data.
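The effect can be sketched as masking depth samples whose per-pixel confidence falls below the threshold; the arrays and threshold value below are illustrative, not actual sensor output:

```python
import numpy as np

def filter_by_confidence(depth, confidence, threshold):
    """Keep only depth samples whose confidence meets the threshold;
    the rest are set to 0 ("no data"). Raising the threshold discards
    more samples, which behaves much like a depth cutoff."""
    return np.where(confidence >= threshold, depth, 0.0)

depth = np.array([0.9, 1.1, 2.5])       # meters
confidence = np.array([0.9, 0.3, 0.8])  # per-sample confidence, 0..1
kept = filter_by_confidence(depth, confidence, 0.5)  # middle sample dropped
```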
For better performance, we recommend turning off the Auto checkbox next to Color Exposure and setting the exposure and gain as low as your lighting conditions allow. This is especially important in 60 fps mode, where the exposure time must be short enough for the color frames to sustain the framerate.
When turning off the Auto Exposure and Auto Whitebalance settings, the current slider value is not applied immediately. You need to modify the slider value manually again to make sure the setting gets updated.