The tracking mode is used for live tracking. The tracked data can be streamed directly to other programs, and you can record clips that can be refined further and then exported or streamed to a target application.
Tutorial - Tracking Overview
The Tracking module
The window is split up as shown in the screenshot: the main display in the middle shows the data captured by the camera as well as the tracking output computed by faceshift studio.
On the left, you have the controls to start and stop the tracking and recording, and the list of currently open clips along with buttons to open, save, or export them. On the lower left you can see several panels with additional options.
Below the central display, you will find the replay controls for recorded clips.
This video tutorial explains the meaning of various display options:
Tutorial - Tracking Display
This video tutorial explains the meaning of the eyegaze options:
Tutorial - Tracking Eyegaze
In tracking mode, you can start the live tracking by pressing the Run button in the top left corner, which will then change into a Pause button. When you switch to tracking from another mode, faceshift studio will automatically start the live tracking. Some preferences influencing the tracking are explained in the Tracking settings (see section Tracking Preferences).
When the Coefficients checkbox is checked in the “Display” section of the tracking module, you can see how each blendshape (see section Tracked Blendshapes) is activated during live tracking. This can be used to verify that the tracking is accurate; for example, the blendshape JawOpen should be activated when you open your mouth. If the mouth is opened as widely as it was during training, the activation should be full; conversely, a smaller jaw aperture should result in a lower activation value.
Clicking on a blendshape will color it dark green and show its corresponding curve. Hovering the mouse over another blendshape will color it light green and show its curve as well. This gives you a good overview of what is happening during tracking and allows you to analyze its quality.
Blendshape coefficients display
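To build intuition for how an activation value relates to your training, here is a minimal sketch of mapping a raw measurement such as the jaw aperture to a coefficient relative to the range captured during training. The `activation` helper is hypothetical; faceshift studio's actual solver fits all blendshape coefficients jointly against the captured geometry, so this only illustrates the coefficient-level idea.

```python
def activation(value, neutral, trained_max):
    """Map a raw measurement (e.g. jaw aperture) to a blendshape
    coefficient in [0, 1], relative to the range seen during training.

    Hypothetical helper for illustration only -- the real solver fits
    all coefficients jointly against the captured geometry."""
    if trained_max == neutral:
        return 0.0
    t = (value - neutral) / (trained_max - neutral)
    # Bounded tracking clamps the coefficient to [0, 1].
    return max(0.0, min(1.0, t))
```

With this mapping, opening the jaw as widely as during training yields a full activation of 1.0, while half the trained aperture yields roughly 0.5.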
To tell faceshift studio that you are looking straight ahead, click the Orient Head button. This will store the current pose as neutral and align the displayed head accordingly. You can set the neutral head pose at any time, also for recorded clips.
The Neutralize feature uses a range of frames to estimate a new Neutral expression and rebuilds your tracking rig. During live tracking, it will use the next few frames (keep a neutral face!); when called on a recorded clip, it will use the current frame selection (see Timeline). You can use this feature when you realize that making a neutral face still activates some blendshapes that should all be at 0 (see figures). This does not update your training profile, i.e., as soon as you go back to the Training module, the tracking rig will be reset to the profile you have built from your scans.
Also, please note that some eye blendshapes are not neutralized (namely, the ones defining the eye gaze, such as EyeUp and EyeDown). This is why you may still see some activated blendshapes in the coefficient display.
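The idea behind Neutralize can be sketched at the coefficient level as follows. This is an illustrative Python sketch, not faceshift studio's implementation (which rebuilds the whole tracking rig): it estimates each blendshape's residual activation over a range of supposedly neutral frames and subtracts it everywhere, leaving the eye-gaze shapes untouched. The shape names in `GAZE_SHAPES` are made up for the example.

```python
# Illustrative shape names -- not faceshift studio's actual identifiers.
GAZE_SHAPES = {"EyeUp_L", "EyeUp_R", "EyeDown_L", "EyeDown_R"}

def neutralize(frames, skip=GAZE_SHAPES):
    """Estimate each blendshape's residual activation over a range of
    frames that should be neutral, then subtract it from every frame,
    clamping at zero. Eye-gaze shapes are left untouched, mirroring the
    behavior described above. `frames` is a list of
    {shape_name: coefficient} dicts."""
    shapes = frames[0].keys()
    residual = {s: sum(f[s] for f in frames) / len(frames) for s in shapes}
    return [{s: f[s] if s in skip else max(0.0, f[s] - residual[s])
             for s in shapes}
            for f in frames]
```

After running this on a selection where the face should be at rest, spurious activations are pulled towards zero while the gaze coefficients pass through unchanged.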
Have a look at this quick tutorial about how to use the neutralize functionality live or in a clip:
Tutorial - Neutralize
Notice how the blendshape activations are pulled towards zero after the Neutralize button is hit. The few bars that did not change correspond to the unaffected eye blendshapes.
Tracking blendshapes After Neutralize
When you are ready to record a clip, press the red Record button. It is only enabled once the live tracking is running. When you click it, it will light up to indicate that you are recording. You will also notice a red label replacing the replay controls, and that the frame counter and recorded time in the lower right corner of the window start running. Press the button again to stop recording. The new clip will be added to the list on the left.
If the tracking could not keep up with the frame rate and some frames had to be left unprocessed, a dialog will pop up, asking you to retrack the clip. This will simply reprocess the recorded data as if you were tracking it live. You can also retrack a clip at any time by pressing the Retrack button below the list of clips (see section Retrack).
Tutorial - Tracking Recording
To load a new clip, select “Load Clip” in the main file menu. You can load more than one clip at the same time by using the shift or the control key in the file dialog. Dragging and dropping multiple clips from your file manager is also supported.
The list on the left of the tracking window shows the currently open clips. By right clicking, you can save or export them to various formats (see Chapter Interfacing with Other Applications). You can also duplicate or delete them from the list. Double clicking a clip will allow you to rename it.
The buttons below the list can be used to open and save the clips in faceshift studio’s own *.fsp format as well as formats supported by other software such as C3D or BVH. There are also the Retrack and Refine buttons.
Retrack will simply reprocess the recorded data as if you were tracking it live. If the currently loaded actor profile is not the same as the one stored in the clip, faceshift studio will ask you which of the two you want to use. Your selection will then overwrite the previous one within the clip. Holding down the button allows you to retrack all currently loaded clips in sequence; in that case, faceshift studio will not ask when the profiles don't match, and the one stored in each clip will always be used.
Retrack and refine can also be used on a portion of a clip:
Tutorial - Refine/Retrack Ranges
Refine will apply more advanced processing, resulting in a smoother, higher-quality clip; however, the refinement process takes considerably more time. For more information about post processing clips, including manual user input for improving facial feature detections, see section Post Processing a Clip. Holding down the button allows you to choose to refine all loaded clips sequentially.
Tutorial - Tracking Refinement
To play a clip, you can use the controls at the bottom of the window. The buttons in the middle control the replay just like a media player. Besides that, the panel displays the clip name as well as the current frame’s index and timestamp relative to the first frame. You can also use the space bar to toggle the replay. The left/right arrow keys can be used to jump by single frames, or by larger steps if you also hold Control.
It is also possible to browse through the clip by clicking on or dragging the mouse over the timeline (the row of image thumbnails) or the blendshape curves.
By dragging with the right mouse button, you can select a subsequence of the clip and remove the frames outside of it by clicking the Trim button. Using the left/right arrow keys while holding Shift will change the selection. You can also set the start and end frame of the selection by pressing the “S” (start) and “E” (end) keys, or by using the input fields next to the replay controls. See the figure below, where the red and yellow input fields are, respectively, the starting and ending frames of the selection. The green input field corresponds to the current frame.
The same display for the blendshape coefficients is also available for clips (see Blendshape Display). Additionally, you can display the audio of your recorded clip by selecting the “Audio” checkbox in the display options.
Post Processing a Clip
To achieve the best quality of tracking and animation, recorded clips can be processed further after tracking, with a more accurate and thorough procedure, resulting in a smoother, higher-quality clip, at the cost of more processing time. You also have the possibility to improve the automatically detected facial feature locations wherever needed by manually dragging feature points for the mouth, eyes, and eyebrows.
Manual Feature Refinement
After recording (or retracking) a clip, the automatically detected facial features (eyebrows, eyes, and mouth) might not always be perfectly located. In such cases, you can manually click individual feature points and drag them to more satisfactory locations.
Whenever you click a feature point, the whole set of points turns magenta, indicating that this feature was manually marked for the current frame. Use the mouse scroll to zoom in on the video frame. To move the frame around, move the cursor away from any facial features, and drag by holding the left mouse button.
As soon as you start marking feature points, faceshift studio automatically starts adapting to the clip at hand and refines the location of those feature points in the neighboring frames. You should therefore not have to edit every single frame manually. The processing is reflected by the progress bar in the bottom right corner of the application. However, you do not have to wait for the processing to finish before you continue – when you are done annotating the clip, you should run a refinement on the full clip anyway.
Automatic detection (left) and manual annotations (right)
Manual feature refinement
For the eye and mouth regions, you can indicate that the feature should be closed by clicking or hovering over one of the points of the desired feature and then hitting the “C” (close) key. To delete the current annotation, click or hover over the desired feature and press the “R” (reset) key; the feature will then be detected automatically again within a couple of seconds (the points will turn white again). To quickly reach the next or previous manually annotated frame, press the “K” or “J” key, respectively. You can also click the pink markers in the timeline, which indicate all manually annotated frames.
Manually annotated frames are indicated by these clickable pink triangles
Refining a Clip
The Refine button is found below the list of currently loaded clips. When clicked, faceshift studio will apply more advanced processing, taking into account both past and future frames, resulting in a smoother, higher-quality clip, at the price of longer computation time. By holding the button, you have the option to refine all currently loaded clips, one after the other, which is useful for overnight processing. Settings can be found in the Preferences menu, and are explained in the Post Processing Settings (see section Post-Processing Preferences).
Refining a sequence will perform the following steps:
More accurate tracking of the head pose
Redetection of facial features in the video frames, by creating custom models for the clip at hand, taking into account manually marked feature points
More accurate tracking of facial expressions in multiple passes
Performing a retrack on a clip will not affect the manually annotated features.
Since version 1.2, faceshift studio allows you to freely configure the filtering that is applied to your clips as post-processing.
Filters are applied during live tracking, and to the currently selected clip in replay mode. Depending on the filters and the clip's length, you may briefly see a progress bar in the bottom right corner of the application window when you click a different clip, as faceshift studio applies the filters before displaying it.
What is displayed in the main window is the filtered clip. Likewise, when you export the data to one of the various available formats, the filtered clip will be used. However, when you save a clip in faceshift's own *.fsp format, it will store the unfiltered data, along with the filtering settings currently assigned to it.
In the filtering panel of the main window, you can choose a filter preset for the current performance. You can also conveniently switch it on and off by checking and unchecking the check box.
You will notice that the selected filter for live tracking will be reverted to the default whenever you pause and restart the live tracking (you can choose a different default preset for live tracking in the presets dialog as described below). Similarly, the default preset for clips will be applied when you finish recording a clip or when you load a clip that was saved with a version of faceshift studio prior to 1.2.
There is also a Smoothness slider that allows you to control the overall smoothness of the filters contained in the preset: pull it to the left for less, right for more smoothing.
Note that the smoothness parameter as well as the selected preset are saved on a per-clip basis.
The Edit Presets button opens the presets dialog, where you can customize your filters.
Filtering preset editor
Using this dialog, you can add new presets, modify them, and import or export them to a file.
You can add a new, empty preset by hitting New Preset, or you can select a preset from the combo box. You can change the preset name by typing in a new one in the field below the combo box.
When you try to Delete a preset that is still used by some clips, a confirmation dialog will pop up. If you choose to delete it anyway, all clips that use it will be changed to use no preset at all.
You can also Save and Load presets to and from *.fsf files.
By default, faceshift studio will remember a preset when the application is closed so you do not need to save it explicitly. If you do not want it to keep a preset, simply uncheck the “Load on next startup” box.
Clicking the Use for Current button applies this preset to the currently selected clip.
The Use as default checkboxes control which preset will be used during live tracking or when you stop recording a clip, respectively. Each of them can only be checked for a single preset. If you do not want any filtering to be applied to your new clips, make sure that the corresponding checkbox is disabled on all presets (to do that, you can simply check it and uncheck it again on one preset since it cannot be set on multiple presets).
On the right half of the dialog, you can configure the filters that make up the individual presets. They are applied to the tracking result one after another, from top to bottom. The ordering of the filters can be changed using the up/down arrows on the right, and a filter can be removed from the preset by pressing its X button. Most filters come with a smoothness slider that allows you to change how strongly the result will be smoothed; pulling it to the right will produce smoother results.
There are two basic types of filters: Rigid filters affect the alignment of the head, i.e., its position and orientation, while Blendshape filters affect the blendshapes, i.e., the way the face is deformed. Faceshift studio currently offers the following filters:
- Rigid: RMF (rigid motion filter) is a simple, fast filter.
- Rigid/Blendshapes: Gaussian applies Gaussian filtering.
- Rigid/Blendshapes: Bilateral applies Bilateral filtering. It is similar to Gaussian, but applies less smoothing to large, sudden changes.
- Rigid/Blendshapes: CurveSmooth fits a smooth curve through the coefficients. It yields good results, but it is a little more expensive to compute than Gaussian or Bilateral filtering.
- Blendshapes: Project to Bounds is a special filter that does not actually smooth the data. Instead it will fit bounded coefficients to the geometry. Consequently it will not have any effect if your coefficients are already bounded. It is also computationally expensive, making it the slowest filter currently offered by faceshift studio.
- Idle Animation plays a clip whenever the tracking fails, resulting in much more pleasing animations. The idle animation is played as soon as the actor’s face is lost, smoothly blended with the last successfully tracked frames. Add the filter in your current preset and press the Load idle clip button to select your own idle animation FSB clip. Note that you can easily export any faceshift clip (.fsp) to FSB format (.fsb), see section Interfacing with Other Applications.
Idle Animation filter
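To illustrate the difference between the Gaussian and Bilateral filters described above, here is a minimal Python sketch operating on a single 1-D curve of blendshape coefficients. This is a conceptual approximation, not faceshift studio's implementation; the parameter names are made up, and the real filters are configured only through the smoothness sliders.

```python
import math

def gaussian_smooth(curve, sigma=1.0):
    """Gaussian filtering of a 1-D sequence of blendshape coefficients:
    each frame becomes a distance-weighted average of its neighbors."""
    r = max(1, int(3 * sigma))  # truncate the kernel at ~3 sigma
    out = []
    for i in range(len(curve)):
        wsum = vsum = 0.0
        for j in range(max(0, i - r), min(len(curve), i + r + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
            wsum += w
            vsum += w * curve[j]
        out.append(vsum / wsum)
    return out

def bilateral_smooth(curve, sigma=1.0, sigma_v=0.2):
    """Bilateral filtering: like Gaussian, but neighbors whose values
    differ strongly are down-weighted, so large, sudden changes
    (e.g. a fast mouth opening) are smoothed much less."""
    r = max(1, int(3 * sigma))
    out = []
    for i in range(len(curve)):
        wsum = vsum = 0.0
        for j in range(max(0, i - r), min(len(curve), i + r + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
                 * math.exp(-((curve[i] - curve[j]) ** 2) / (2 * sigma_v ** 2)))
            wsum += w
            vsum += w * curve[j]
        out.append(vsum / wsum)
    return out
```

Running both on a step-shaped coefficient curve shows the difference: the Gaussian filter blurs the step across several frames, while the bilateral filter leaves the transition nearly intact.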
When you load a clip that was saved with a filter preset, faceshift studio will look for the saved preset in your list. If it finds a preset that consists of the same set of filters with the same smoothness values, it will use that preset. Otherwise, the preset will be added. Faceshift studio may rename the preset to something like “Loaded Preset 003”, or it might append an asterisk (*) to the preset name. Presets loaded this way have the “Load on next startup” flag disabled by default.
Since version 1.2, faceshift studio allows you to conveniently record multiple takes of a single scene. To do that, hit the Record Takes button. You will be asked for a name of your scene and a folder to save your clips.
As long as the Record Takes option is enabled, faceshift studio will name and number your recorded clips automatically. When you stop recording a clip, it will be saved to the given folder (if you have the “Save Automatically” checkbox enabled) and faceshift will go back to live tracking. You still need to start the next recording by clicking the Record button, though.
To stop treating your clips as takes of the same scene, click the Record Takes button again, but be aware that faceshift will no longer save them automatically when it is off.
The Display settings control what is being shown in the main window.
The first combo box controls how the personalized model and the target character’s head will be moving, according to the actor’s own rigid head motion:
- Fixed keeps the head at the same position, effectively removing the actor’s rigid motion. This view can be useful for examining fine facial deformations.
- Fixed Neck keeps the neck at the same position, but turns the head according to the actor’s movement.
- Fixed Body Rotation will apply translations to the full model, but rotate only the head.
- Floating will make the model move (“float”) through space as the actor moves, applying head rotations to the entire body.
Using the second combo box, you can overlay the personalized actor model or the current target over the video image.
The set of checkboxes allows you to enable or disable various displays:
- Video toggles the display of the image captured by the sensor.
- Features toggles the rendering of the detected eyes, eyebrows, and mouth features over the video.
- Depthmap toggles the display of the depth data captured by the sensor. It is often a good idea to look at the depth data since moving too close to the camera often produces holes in the depth map, causing bad results.
- Infrared toggles the display of the infrared images captured by the sensor. Note that this option is only supported by the Intel RealSense sensor.
- Info toggles the “Tracking OK/failed”, distance, and orientation displays at the top of the window.
- Model toggles the display of the actor’s trained model.
- Wireframe toggles the rendering of the model’s edges, which allows you to see deformations more clearly.
- Coefficients toggles the display of the blendshape coefficients (curves and bar diagram) at the bottom of the window (see section Timeline).
- Audio toggles the display of the audio waveform.
- Timeline toggles the display of the thumbnails and time chart at the bottom of the window.
You can also apply the tracked data to a predefined target character by selecting a target from the Target combo box. Please refer to FBX Target Import (see chapter Interfacing with Other Applications) to learn how to display your own model within faceshift studio.
The Track panel controls some basic settings for the processing. More advanced parameters can be found in the Preferences. Note that if you are working with a recorded clip, these settings will not have any effect unless you retrack or refine the clip.
The Deformation checkbox allows you to turn off the tracking of facial expressions. If it is off, only the head position and orientation will be tracked, but not the blendshapes. If your machine is often struggling to process the data at the camera’s frame rate, you may notice that turning Deformation off makes it easier for the tracking to keep up.
The Bounds combo box allows you to change how the blendshapes are tracked.
- If you use the Bounded setting, the blendshape coefficients will be constrained to be between 0 and 1 (by default). You can manually adjust the bounds on a per-shape basis using the Shapes panel, which also allows you to turn off the tracking of individual shapes (see section Shapes).
- The Positive setting lets the blendshape coefficients be larger than 1, but not smaller than 0.
- The Unbounded setting allows the coefficients to take arbitrary values, resulting in values that may not be practical by themselves. However, this setting allows the scanned geometry to be matched more closely, therefore it is particularly useful (and recommended) if you intend to export the performance as virtual markers.
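The three Bounds settings can be summarized by the sketch below. Note that this is only a conceptual clamp for illustration; faceshift studio's solver enforces the bounds during the fit itself, which generally produces better results than clamping after the fact (the Project to Bounds filter exists precisely to refit, rather than clamp, unbounded coefficients).

```python
def apply_bounds(coeffs, mode="Bounded", lo=0.0, hi=1.0):
    """Constrain a list of solved blendshape coefficients according to
    the Bounds setting. Conceptual sketch only -- the real solver
    enforces the bounds while fitting, not as a post-hoc clamp."""
    if mode == "Unbounded":
        return list(coeffs)          # arbitrary values allowed
    if mode == "Positive":
        return [max(lo, c) for c in coeffs]            # >= 0 only
    if mode == "Bounded":
        return [min(hi, max(lo, c)) for c in coeffs]   # within [0, 1]
    raise ValueError(f"unknown bounds mode: {mode}")
```

For instance, a raw solution of `[-0.2, 0.5, 1.3]` becomes `[0.0, 0.5, 1.0]` under Bounded and `[0.0, 0.5, 1.3]` under Positive, while Unbounded leaves it unchanged.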
The Geometry-Texture slider allows you to control how strongly the data captured by the depth/video sensors influence the tracking results. If you are tracking under bad lighting conditions, we recommend pulling the slider towards Geometry.
The Brow and Mouth Smoothing sliders allow you to change the smoothing applied to the brow and mouth blendshapes, respectively.
In addition to filter presets (see section Filtering), it is possible to use shape presets to influence the tracking result. You can choose a maximum coefficient value for each blendshape used for tracking, depending on the result you would like to achieve. For example, setting a blendshape’s maximum to 0 will disable it completely for tracking.
Note that modifying a preset or selecting a different one will not change existing clips. You have to retrack or refine the clip to see the effect. This applies only to recorded clips; if you are tracking in live mode, you will be able to see changes to the active preset immediately.
Shape preset editor
These settings allow you to control the eye tracking:
- Eye Tracking allows you to disable the eye tracking; when disabled, a “straight ahead” eye gaze is assumed.
- Couple Eyes assumes the same eye gaze for both eyes.
- Couple Eyelids forces the eyelids to behave in the same way; consequently, winking can only be tracked correctly if this is off.
The detection of eye, eyebrow, and mouth features in faceshift studio is based on a detector that was trained from manually annotated data.
The Create button in the Features panel allows you to train a custom detector from all clips that are currently loaded. Your custom detector will then be used for the feature detection on subsequent tracks. You can also Save and Load custom detectors, and you can restore the original, built-in detector using the Reset button.
Note that an additional label in the Replay Controls tells you which custom detector is currently loaded.
This tutorial shows you how to create and use a custom detector:
Tutorial - Custom Detector
Network streaming controls
The Network Streaming checkbox allows you to send the tracked data over the network. You can also use faceshift studio to receive tracking data from a remote instance of the application using the Connect to Server button.
Configuration parameters for the network streaming can be found in the Streaming section of the Preferences (see section Streaming Preferences).
For learning how to use faceshift studio with Unity, Maya, and MotionBuilder, see the documentation at http://doc.faceshift.com/plugins.
To learn how to process the streamed data in your own application, please refer to the Faceshift Networking section.
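As a rough illustration of what such a client looks like, the sketch below connects to a streaming instance and reads length-prefixed data blocks. The default port number and the (id, version, size) header layout shown here are assumptions made for the example; consult the Faceshift Networking section for the authoritative wire format.

```python
import socket
import struct

# Assumed 8-byte block header: 16-bit block id, 16-bit version,
# 32-bit payload size, little-endian. The real wire format is
# specified in the Faceshift Networking documentation.
HEADER = struct.Struct("<HHI")

def parse_header(data):
    """Split a raw block header into (block_id, version, payload_size)."""
    return HEADER.unpack(data)

def read_blocks(host="127.0.0.1", port=33433):
    """Connect to a streaming faceshift studio instance and yield
    (block_id, version, payload) tuples. The port is an assumption --
    use whatever is configured in the Streaming preferences."""
    with socket.create_connection((host, port)) as sock:
        while True:
            header = sock.recv(HEADER.size)
            if len(header) < HEADER.size:
                return  # connection closed
            block_id, version, size = parse_header(header)
            payload = b""
            while len(payload) < size:
                chunk = sock.recv(size - len(payload))
                if not chunk:
                    return
                payload += chunk
            yield block_id, version, payload
```

A consumer would iterate over `read_blocks()` and decode each payload according to its block id, as described in the Faceshift Networking section.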