MASK 3D Render Engine
This live 3D OpenGL renderer is one of the fruits of Televirtual's extended NVIDIA developer relationship. The latest features, including per-pixel lighting and Cg shaders, combine to produce filmic-quality scene and character display whilst harnessing the advancing power of GPU hardware acceleration. A typical benchmark for the renderer is 400,000 photo-textured polygons, mesh-deformed in excess of 30 fps, at resolutions up to and including HDTV, with multiple cameras and lights.
BABEL2LIPS live speech generator
An award-winning natural-language science application from the Acapela Group, this neural-network model can detect over 40 phonemes in a live stream of spoken dialogue, delivering character lip-synch of such quality that deaf viewers can lip-read from its output.
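The phoneme-to-mouth-shape step described above can be sketched as a viseme lookup table: each detected phoneme selects a mouth-shape target for the character rig. The phoneme symbols and viseme names below are illustrative assumptions, not Acapela's actual tables.

```python
# Illustrative phoneme-to-viseme lookup for driving character lip-synch.
# Phoneme symbols and viseme names are assumed for the sketch.
PHONEME_TO_VISEME = {
    "AA": "open",     # as in "father"
    "IY": "spread",   # as in "see"
    "UW": "rounded",  # as in "too"
    "M":  "closed",   # bilabial closure
    "F":  "dental",   # lip-to-teeth contact
    "SIL": "rest",    # silence
}

def visemes_for(phoneme_stream):
    """Map a stream of detected phonemes to mouth-shape targets,
    falling back to 'rest' for phonemes not in the table."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phoneme_stream]
```

In a live system each viseme would blend into the next over a few frames rather than switching instantly; the lookup itself is the core of the mapping.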
IMPERSONATOR live speech generator
Televirtual's own Fourier-analysis-based application is triggered by vowel sounds and can drive less sophisticated characters effectively, whilst imposing only a low computational overhead.
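A vowel-triggered mouth control of this kind can be approximated with a short-time Fourier transform: measure the spectral energy in a low-frequency band where vowel formants sit, and open the mouth when it crosses a threshold. The band limits, frame size and threshold below are illustrative assumptions, not Televirtual's actual parameters.

```python
import cmath
import math

def dft_magnitudes(frame):
    """One-sided, length-normalised DFT magnitude spectrum of a frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def vowel_energy(frame, rate=8000, band=(200, 900)):
    """Energy in an assumed vowel-formant band (Hz)."""
    n = len(frame)
    mags = dft_magnitudes(frame)
    lo = int(band[0] * n / rate)
    hi = int(band[1] * n / rate)
    return sum(m * m for m in mags[lo:hi + 1])

def mouth_open(frame, threshold=0.01):
    """Trigger the character's mouth when vowel-band energy is high."""
    return vowel_energy(frame) > threshold
```

A production implementation would use an FFT and smooth the decision over several frames, but this shows why the approach is computationally cheap.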
RAP STUDIO CONTROL command screen features
These include switch controls based on preview images for up to 10 cameras.
All cameras may have pre-programmable moves incorporating pan, tilt, track and zoom, or any combination thereof. Lighting changes may also be controlled on the fly by faders, or pre-programmed and triggered live. There is also a small bank of live special FX.
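A pre-programmed camera move of the kind described amounts to interpolating the pan, tilt, track and zoom channels between keyframes. A minimal linear-interpolation sketch, with the channel names and keyframe format assumed for illustration:

```python
def camera_state(keyframes, t):
    """Linearly interpolate camera channels (pan, tilt, track, zoom)
    between keyframes. `keyframes` is a time-sorted list of
    (time, {channel: value}) pairs; times outside the range clamp."""
    if t <= keyframes[0][0]:
        return dict(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return dict(keyframes[-1][1])
    for (t0, a), (t1, b) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return {ch: a[ch] + f * (b[ch] - a[ch]) for ch in a}

# Example move: a two-second pan with a simultaneous zoom-in.
move = [(0.0, {"pan": 0.0, "zoom": 1.0}),
        (2.0, {"pan": 90.0, "zoom": 2.0})]
```

Evaluating `camera_state(move, 1.0)` gives the halfway state; a real system would use eased (e.g. cubic) rather than linear interpolation for smoother moves.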
RAP PERFORMANCE RECORDING GRAPH
This feature is used to create a data recording, or recordings, of any chosen scene. To output the best results in terms of lip-synch, gesture and movement, recording input has to achieve an incremental frame rate of 40+ fps. To deliver this, elements of the environment, character details and various lights and shaders are turned off for the data recording, then switched back on during the offline rendering process. Thus simplified, the scene is still of sufficient complexity to allow effective puppeteering. These techniques are of particular importance when the number of characters and the complexity of the environment are beyond the current capabilities of the Live Render Engine.
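The record-time simplification described here (heavy features switched off to hold 40+ fps, then restored for the offline render) can be modelled as two named feature profiles. The feature names below are illustrative assumptions about what such a scene graph might expose:

```python
# Illustrative scene-feature profiles: heavy features are disabled while
# capturing performance data, then re-enabled for the offline render.
ALL_FEATURES = {"per_pixel_lighting", "shaders", "env_detail",
                "extra_lights", "characters", "base_geometry"}

PROFILES = {
    # Keep only what the puppeteer needs to perform against.
    "record": ALL_FEATURES - {"per_pixel_lighting", "shaders",
                              "env_detail", "extra_lights"},
    # Everything back on for final frame production.
    "render": set(ALL_FEATURES),
}

def active_features(mode):
    """Return the scene features enabled for a given pipeline stage."""
    return PROFILES[mode]
```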
RECORDING BY LAYER
A RAP scene may feature multiple characters, but because it may prove impractical to puppeteer their combined choreographed performances in a single take, recording may be staggered so that the performance is built up in layers. Layer 1 might be the scene choreography for three major players, as defined by the spoken dialogue, but without lip-synch. Layer 2 might be the auto-generation of lip-synch for these characters, augmented with facial gestures. Layer 3 could establish the stage choreography of perhaps two additional minor characters, together with their facial gestures. Once all three layers had been approved via a combined real-time playback, a fourth layer would define the cut camera sequence. Once a combined four-layer playback had been approved, the sequence could be consigned to the offline renderer.
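Layered recording can be thought of as stacking independent data tracks that are merged for playback: each layer contributes tracks, and later layers add to (or override) earlier ones. A minimal sketch, with the track names and layer contents assumed for illustration:

```python
def merge_layers(layers):
    """Merge an ordered list of recording layers into one playback take.
    Each layer maps track names to recorded performance data; later
    layers add new tracks and override any they share with earlier ones."""
    take = {}
    for layer in layers:
        take.update(layer)
    return take

# Hypothetical layers matching the workflow described above.
layer1 = {"hero_body": "scene choreography", "rival_body": "scene choreography"}
layer2 = {"hero_lips": "auto lip-synch", "hero_face": "gestures"}
layer4 = {"cameras": "cut sequence"}
```

Because the merge is order-dependent, re-recording a single layer only replaces its own tracks, which is what makes the staggered workflow practical.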
RAP offline renderer
Together with the PERFORMANCE RECORDING GRAPH, this feature is key to RAP's ability to produce first-quality animated sequences for compositing into conventional 3D animated dramas or drama sequences. Once the scene data recording is passed through to the renderer, the same hardware acceleration that powers the MASK live render engine is engaged to deliver unprecedented rates of conventional frame production. Benchmarks vary with file size, the complexity of lights and characters, and image size, but a typical production rendering rate would be about 8-10 times real time for 4 MB images: 8-10 minutes to produce a one-minute sequence of 1,500 bitmaps. All this on a single-processor Pentium 4 PC workstation.
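The quoted figures are mutually consistent: a one-minute sequence at 25 fps is 1,500 frames, and rendering at 8-10 times real time puts the job at 8-10 minutes. A quick check (the frame rate is inferred from the 1,500-frames-per-minute figure):

```python
def offline_render_estimate(duration_s, fps=25, realtime_factor=10):
    """Frame count and render time (in minutes) for a sequence,
    assuming the renderer runs at `realtime_factor` times real time."""
    frames = int(duration_s * fps)
    render_minutes = duration_s * realtime_factor / 60
    return frames, render_minutes
```

For a 60-second sequence this returns 1,500 frames and 8-10 minutes across the stated factor range.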
ANIMATED TEXTURES in scenes
RAP environments or sets can incorporate and display animated textures and live action video, both in real time and via the Offline Renderer.
Auto animation from TTS
For the ultimate in virtual presentation, RAP may offer a complete content service driven by a synthetic speech engine and an associated markup language such as SSML. This means that an entire live production or character presentation may be produced directly from text.
An example is Televirtual's Metman, a cartoon-style weather forecaster, whose speech, movement, expressions, gestures and screen position may all be generated from his associated Nuance Vocalizer™ 4.0 speech engine. The same system can also dictate camera changes, map changes, light effects and screen text overlays.
For non-English-language territories, Acapela speech products may perform the same production roles.
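Driving camera cuts, map changes and overlays from marked-up text can be sketched with standard SSML `<mark>` tags: the speech engine reports each mark as it is reached, and the production system maps the mark names to events. The mark-name convention below (`camera:2`, `map:next`) is an assumption for illustration, not RAP's actual vocabulary.

```python
import xml.etree.ElementTree as ET

def production_events(ssml):
    """Return the production cues carried by SSML <mark> tags,
    in document order (e.g. 'camera:2', 'map:next')."""
    root = ET.fromstring(ssml)
    return [node.get("name") for node in root.iter("mark")]

# A marked-up forecast line of the kind Metman might speak.
ssml = (
    '<speak>Good evening.<mark name="camera:2"/> '
    'Rain is moving in from the west.<mark name="map:next"/></speak>'
)
```

In a live pipeline the TTS engine emits each mark at the moment it is spoken, so the same ordered list becomes a timed event stream.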
HDTV compatible/future proof
RAP-produced scenes may be HDTV compatible at 1920 x 1080p, either in real time or via the Offline Renderer, depending on scene complexity.
Building/designing content from RAP
Currently RAP imports content directly from Maya; where other procedural animation packages (Softimage, 3ds Max, LightWave, etc.) have been used, it is necessary to import and reassemble the assets in Maya to allow export to RAP.
Animation files may be imported via Maya or directly from MotionBuilder as FBX files.