FaceRig lets anyone with a webcam instantly embody any character they want. The output can be streamed to Skype, Twitch, or any service that uses a webcam, and it can also be recorded as a movie.

FaceRig uses state-of-the-art real-time image-based tracking technology provided by Ulsee and combines it with a cross-platform real-time animation and visualization framework developed by indie-powered Holotech Studios and its partners from the games industry. Extensive research by leading academics and professionals has gone into FaceRig to achieve its current results.

FaceRig's core features are:

  • real-time expression tracking in an input video stream (also with audio analysis);
  • combining the tracked data with additional virtual-puppeteering input (if available);
  • on-the-fly retargeting of the tracked data and application of the animation to a user-selected 3D model, with audio processing (voice alteration);
  • rendering and lighting the animated model in real time against a user-selected background;
  • encoding the output (video and audio) and streaming it from a virtual webcam device (it essentially intercepts webcam input and swaps the images captured by your real webcam with the animated content before sending them on);
  • providing an interface for tuning the parameters of all of the operations above.
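The stages above form a per-frame pipeline: track, retarget, render, then stream through a virtual camera. A minimal sketch of that flow is below; every function name and data shape here is a hypothetical illustration for clarity, not FaceRig's actual API.

```python
# Hypothetical sketch of a FaceRig-style per-frame pipeline.
# None of these names come from FaceRig itself.

def track_expression(frame):
    """Estimate facial-expression parameters from one video frame
    (stand-in for the real-time image-based tracker)."""
    return {"smile": 0.8, "brow_raise": 0.2}  # dummy tracked data

def retarget(tracked, model):
    """Map tracked parameters onto the selected 3D model's rig."""
    return {f"{model}:{name}": value for name, value in tracked.items()}

def render(animation, background):
    """Render the animated model against the chosen background."""
    return f"frame[{background}|{sorted(animation)}]"

def stream_frame(rendered):
    """Hand the rendered frame to a virtual webcam device, which is
    what Skype, Twitch, etc. see instead of the real camera feed."""
    return ("virtual-cam", rendered)

def process(frame, model="raccoon", background="forest"):
    tracked = track_expression(frame)
    animation = retarget(tracked, model)
    rendered = render(animation, background)
    return stream_frame(rendered)

if __name__ == "__main__":
    device, out = process("raw-webcam-frame")
    print(device, out)
```

In the real application each stage runs continuously and in real time; the sketch only shows how one frame would move through the chain.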

On the 7th of July, at 8 AM PST, FaceRig 1.0 officially launched on Steam.

There are two different versions of the program: "basic" FaceRig, with camera input, and IRFaceRig, developed specially for Intel's RealSense 3D camera.

FaceRig was also released on mobile devices, on Android and iOS, in autumn 2016.

FaceRig Studio, the version for commercial use, was released at the end of 2016.

