Video post-processing API

The Video post-process shader API allows applications to modify the video pass-through image in real time without added latency. This is achieved by providing the Varjo compositor with a compiled HLSL compute shader as a binary blob, which is then run on each video frame coming from the cameras. The application can provide input data as a constant buffer and textures. The format of the built-in constant buffer is specified in varjo_ShaderInputLayout_VideoPostProcess_V1.

The input texture contains the video pass-through image in the RGB channels and, if chroma keying is enabled, the chroma key mask in the alpha channel. The shader should write its output to the output texture in the same format. Modifying the alpha channel affects blending with VR content and can interfere with chroma keying.
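
As a minimal sketch, the snippet below embeds a simple post-process compute shader as HLSL source and compiles it into the binary blob expected by the compositor using the standard D3DCompile API. The register bindings, thread-group size, entry-point name, and shader model are illustrative assumptions; the actual input conventions are defined by varjo_ShaderInputLayout_VideoPostProcess_V1 and demonstrated in VideoPostProcessExample.

    // Minimal sketch: compile a simple post-process compute shader into a binary blob.
    // The register bindings and thread-group size below are assumptions for illustration only.
    #include <d3dcompiler.h>   // link against d3dcompiler.lib
    #include <wrl/client.h>
    #include <cstring>
    #include <stdexcept>

    static const char* kShaderSource = R"(
        Texture2D<float4>   inputTex  : register(t0);  // RGB = video pass-through, A = optional chroma key mask
        RWTexture2D<float4> outputTex : register(u0);

        [numthreads(8, 8, 1)]
        void main(uint3 id : SV_DispatchThreadID)
        {
            float4 src = inputTex[id.xy];
            // Example effect: grayscale. Leave the alpha channel untouched so that
            // blending with VR content and chroma keying keep working as expected.
            float gray = dot(src.rgb, float3(0.299, 0.587, 0.114));
            outputTex[id.xy] = float4(gray, gray, gray, src.a);
        }
    )";

    Microsoft::WRL::ComPtr<ID3DBlob> compilePostProcessShader()
    {
        Microsoft::WRL::ComPtr<ID3DBlob> blob;
        Microsoft::WRL::ComPtr<ID3DBlob> errors;
        const HRESULT hr = D3DCompile(kShaderSource, std::strlen(kShaderSource), "post_process.hlsl",
                                      nullptr, nullptr, "main", "cs_5_0", 0, 0, &blob, &errors);
        if (FAILED(hr)) {
            throw std::runtime_error(errors ? static_cast<const char*>(errors->GetBufferPointer())
                                            : "shader compilation failed");
        }
        // blob->GetBufferPointer() / blob->GetBufferSize() form the binary blob
        // handed to the compositor when the shader is configured.
        return blob;
    }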

In addition to these built-in inputs, an application can pass its own custom constant buffer data and textures to the shader. These can be either static (written once) or dynamic (updated continuously at runtime). Additional constant buffer and texture data are submitted using varjo_MRSubmitShaderInputs.
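
For example, custom constant buffer data could be laid out on the CPU side as in the sketch below. The 16-byte alignment follows the HLSL constant buffer packing rules; the call to varjo_MRSubmitShaderInputs is left as a comment because its exact parameter list is not reproduced here, so treat the sketch as illustrative only.

    // Sketch of application-defined constant buffer data for the post-process shader.
    // Mirror the HLSL cbuffer layout (16-byte packing) on the CPU side.
    #include <Varjo.h>   // Varjo SDK, for varjo_Session

    struct MyShaderConstants {
        float effectStrength;  // dynamic: updated every frame
        float colorTint[3];    // static: written once at startup
    };
    static_assert(sizeof(MyShaderConstants) % 16 == 0,
                  "constant buffer data should be a multiple of 16 bytes");

    void submitShaderInputs(varjo_Session* session, const MyShaderConstants& constants)
    {
        // Hypothetical call shape - see the Varjo SDK headers for the actual declaration:
        // varjo_MRSubmitShaderInputs(session, &constants, sizeof(constants), /* textures... */);
        (void)session;
        (void)constants;
    }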

Custom input textures are owned by the Varjo compositor, and writing to them requires varjo_MRAcquireShaderTexture and varjo_MRReleaseShaderTexture to lock and unlock the data for writing. Input textures are swapchain buffered, so the client will not get the same memory buffer on every call and cannot assume that previously written data is retained. Native DXGI and OpenGL textures are supported; the calling application selects the graphics API by calling one of the graphics-API-specific configure functions (e.g., varjo_MRD3D11ConfigureShader). Varjo texture formats can be converted to the corresponding native formats using varjo_ToDXGIFormat and varjo_ToGLFormat.
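
On the D3D11 path, the write sequence might look roughly like the sketch below. The acquire and release calls are shown only as comments because their exact signatures are not reproduced here; the native-texture write itself is plain D3D11.

    // Sketch of the lock/write/unlock pattern for a custom input texture (D3D11 path,
    // i.e. after configuring the shader with varjo_MRD3D11ConfigureShader).
    #include <d3d11.h>
    #include <Varjo.h>   // Varjo SDK, for varjo_Session

    void writeCustomInputTexture(varjo_Session* session, ID3D11DeviceContext* context,
                                 ID3D11Texture2D* nativeTexture,  // native handle of the acquired texture
                                 const void* pixels, UINT rowPitchBytes)
    {
        // 1. Lock the compositor-owned texture for writing (signature simplified here):
        //    varjo_MRAcquireShaderTexture(session, ...);

        // 2. The textures are swapchain buffered, so the buffer handed out is not
        //    necessarily the one written last time. Rewrite the whole texture rather
        //    than assuming previous contents are still there.
        context->UpdateSubresource(nativeTexture, 0, nullptr, pixels, rowPitchBytes, 0);

        // 3. Unlock the texture so the compositor can consume it:
        //    varjo_MRReleaseShaderTexture(session, ...);
        (void)session;
    }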

To achieve the lowest possible latency, the Varjo compositor processes the video pass-through image in slices instead of one full frame at a time. For this reason, the built-in input texture does not contain the full frame, only the data that has been received from the camera so far. If the custom post-process shader reads the texture outside the provided rectangle, that data may be undefined. The client can declare a sampling margin, given as a number of scanlines, in the shader parameters structure.
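
If the shader samples neighboring scanlines (for example a vertical blur), that reach should be declared up front. The structure and field names below are hypothetical; the real parameter lives in the shader configuration structures of the Varjo SDK.

    // Hypothetical structure and field names, for illustration only.
    struct PostProcessShaderParams {
        int samplingMargin;  // extra scanlines of valid data to keep around each slice
    };

    void configureSamplingMargin(PostProcessShaderParams& params)
    {
        // A vertical 5x5 blur reads up to 2 rows above and below each pixel; a somewhat
        // larger margin leaves headroom for filters with a wider reach.
        params.samplingMargin = 16;
    }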

Post-process shaders run in the Varjo compositor's mixed-reality pipeline and may therefore affect the compositor's rendering performance. It is the client's responsibility to provide shaders that are light enough not to degrade it. Typical symptoms of performance problems are tearing, jittering, and artefacts in the video pass-through image.

Example

See VideoPostProcessExample in the examples folder for a reference on how to implement video post-processing in your own client application. The example application shows how to provide an HLSL compute shader to the Varjo compositor.