Occlusion mesh and layer extensions

Occlusion mesh can be used to improve performance by avoiding rendering pixels that will not be visible in the HMD due to lens distortion.

Layer extensions are additional data submitted together with a layer to enable advanced features.

Occlusion mesh

Since some display pixels are not visible in the optical path due to lens distortion, an application can stencil out pixels in order to reduce shading workload and improve performance.

A mesh can be obtained by calling varjo_CreateOcclusionMesh() and must be freed with varjo_FreeOcclusionMesh(). This mesh can be used either as a stencil mask or as a depth mask (rendered at the near plane) to reduce pixel fill in areas that are not visible.

When a layer is submitted for which the occlusion mesh was used in rendering, applications should pass varjo_LayerFlag_UsingOcclusionMesh.
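As a sketch of the depth-mask approach: assuming the occlusion mesh provides 2D vertices in [0, 1] viewport coordinates (an assumption for illustration; check the actual mesh structure in the SDK headers), a vertex transform that places the mesh exactly on the near plane could look like this:

```cpp
#include <cassert>

// Hypothetical vertex types for illustration; the real code would use the
// SDK's own mesh and vector types.
struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };

// Map a 2D occlusion-mesh vertex in [0,1] viewport coordinates to a
// clip-space position on the near plane (z = 0 in D3D conventions), so the
// mesh can be rendered first as a depth mask that rejects hidden pixels.
Vec4 toNearPlaneClipSpace(Vec2 v) {
    return {
        v.x * 2.0f - 1.0f,  // [0,1] -> [-1,1]
        1.0f - v.y * 2.0f,  // flip Y: viewport Y grows down, NDC Y grows up
        0.0f,               // near-plane depth
        1.0f                // w = 1, no perspective divide needed
    };
}
```

With depth testing set to pass only fragments behind the mask, the scene pass then skips shading in the occluded regions.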


Layer extensions

Layers can also have one or more extensions attached via extension chaining.

Depth buffer extension

varjo_ViewExtensionDepth allows applications to submit a depth buffer together with the color buffer. Submitting a depth buffer automatically enables 6-DOF (positional) timewarp in Varjo Compositor. As positional timewarp can correct for movement in addition to rotation, the final rendering will appear smoother even with low FPS. In addition to positional timewarp, depth buffer submission also allows enabling depth testing in Varjo Compositor, which is important especially in some mixed reality use cases. If possible, applications should always submit the depth buffer.

Depth buffer (or velocity buffer) also enables Motion prediction.

Each depth buffer needs a separate swap chain, similar to color buffers. The only difference in swap chain creation is the format.

struct varjo_ViewExtensionDepth {
    struct varjo_ViewExtension header;
    double minDepth;
    double maxDepth;
    double nearZ;
    double farZ;
    struct varjo_SwapChainViewport viewport;
};

minDepth and maxDepth define the range of values that the application renders into the submitted depth buffer. Unless a depth viewport transformation is used, minDepth = 0 and maxDepth = 1 should be used regardless of the rendering API. Note that the depth buffer value range is not the same as the clip-space range.

nearZ and farZ describe the near and far planes. If nearZ < farZ, the compositor assumes that the application renders forward depth (a view distance of nearZ corresponds to minDepth in the depth buffer). A reversed Z-buffer can be indicated by swapping the nearZ and farZ values.
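To make the convention concrete, the following sketch recovers view-space distance from a stored depth value, assuming a standard D3D-style perspective projection (an assumption for illustration; the compositor's exact math is internal). The same formula covers both conventions, because reversing the Z-buffer simply swaps the nearZ and farZ arguments:

```cpp
#include <cassert>
#include <cmath>

// Recover view-space distance from a [0,1] depth-buffer value, assuming a
// standard D3D-style perspective projection. nearZ is the distance at depth
// value 0; pass nearZ > farZ for a reversed Z-buffer.
double linearizeDepth(double d, double nearZ, double farZ) {
    return (nearZ * farZ) / (farZ - d * (farZ - nearZ));
}
```

For the forward case with planes at 0.1 m and 300 m, depth 0 maps back to 0.1 m and depth 1 to 300 m; with the arguments reversed, depth 1 maps back to 0.1 m.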

Depth swap chains are created similarly to color swap chains:

varjo_SwapChainConfig2 config2{};
config2.numberOfTextures = 3;
config2.textureArraySize = 1;
config2.textureFormat = varjo_DepthTextureFormat_D32_FLOAT;
config2.textureWidth = getTotalWidth(viewports);
config2.textureHeight = getTotalHeight(viewports);
varjo_SwapChain* swapchain = varjo_D3D11CreateSwapChain(session, renderer.GetD3DDevice(), &config2);
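getTotalWidth and getTotalHeight in the snippet above are application-side helpers, not SDK calls. A minimal sketch, assuming the per-view viewports are packed into a single texture atlas (the Viewport struct here is hypothetical; the real code would use the SDK's viewport type):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical viewport record for illustration.
struct Viewport { int x, y, width, height; };

// The shared texture must cover the right edge of every viewport.
int getTotalWidth(const std::vector<Viewport>& viewports) {
    int w = 0;
    for (const auto& v : viewports) w = std::max(w, v.x + v.width);
    return w;
}

// ...and the bottom edge of every viewport.
int getTotalHeight(const std::vector<Viewport>& viewports) {
    int h = 0;
    for (const auto& v : viewports) h = std::max(h, v.y + v.height);
    return h;
}
```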

All current depth formats are supported by both DirectX and OpenGL, so the format support does not need to be queried separately.

Note: If the application cannot render directly into a swap chain depth texture, the best performance is achieved by creating a varjo_DepthTextureFormat_D32_FLOAT swap chain and copying the depth data with a simple shader. Otherwise, this conversion will be performed by the runtime due to limitations in DirectX 11 resource sharing.

The depth buffer can be submitted by adding a pointer to varjo_ViewExtensionDepth for each varjo_LayerMultiProjView.

It is common practice to set up these structures once outside the rendering loop.

The following example shows how a (forward) depth buffer can be submitted. In this example, the depth buffer contains values from 0.0 to 1.0, and near and far planes are set to 0.1m and 300m, respectively.

std::vector<varjo_ViewExtensionDepth> extDepthViews(varjo_GetViewCount(session));
for (int i = 0; i < varjo_GetViewCount(session); i++) {
    extDepthViews[i].header.type = varjo_ViewExtensionDepthType;
    extDepthViews[i].header.next = nullptr;
    extDepthViews[i].minDepth = 0.0;
    extDepthViews[i].maxDepth = 1.0;
    extDepthViews[i].nearZ = 0.1;
    extDepthViews[i].farZ = 300.0;
}

std::vector<varjo_LayerMultiProjView> multiprojectionViews(varjo_GetViewCount(session));
for (int i = 0; i < varjo_GetViewCount(session); i++) {
    multiprojectionViews[i].extension = &extDepthViews[i].header;
}

Velocity buffer extension

varjo_ViewExtensionVelocity allows applications to submit a velocity buffer in addition to the color and depth buffers. The velocity buffer contains the per-pixel velocities of motion in the content itself, for example from animations and object movement.

The velocity buffer also enables Motion prediction, which smooths animations and object movement, especially in low-FPS scenarios. The buffer improves prediction quality, since motion estimation in the compositor can use the velocities specified by the application instead of estimating them.

Applications should submit the velocity buffer if possible.

The velocity buffer requires a separate swap chain, similar to the color and depth buffers. The extension structure is described below:

struct varjo_ViewExtensionVelocity {
    struct varjo_ViewExtension header;
    double velocityScale;
    varjo_Bool includesHMDMotion;
    struct varjo_SwapChainViewport viewport;
};

velocityScale is a scale multiplier applied to all velocity vectors in the surface, so that after scaling the velocities are expressed in pixels per second.

Velocity can be expressed either with or without head movement; the includesHMDMotion flag indicates which convention the application uses.

Velocities should be encoded as two 16-bit fixed-point values into an RGBA8888 texture as follows: R = high byte of X, G = low byte of X, B = high byte of Y, A = low byte of Y.

Sample code for packing velocities:

uint4 packVelocity(float2 floatingPoint) {
    // PRECISION is the application-chosen fixed-point scale factor.
    int2 fixedPoint = floatingPoint * PRECISION;
    uint2 temp = uint2(fixedPoint.x & 0xFFFF, fixedPoint.y & 0xFFFF);
    return uint4(temp.r >> 8, temp.r & 0xFF, temp.g >> 8, temp.g & 0xFF);
}

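A CPU-side round trip of the same packing scheme can help validate the encoding. The sketch below mirrors the shader logic in C++ for a single component (the PRECISION value of 32 is an arbitrary assumption here; the application chooses its own fixed-point scale and must use the same value in the shader and in the unpacking math):

```cpp
#include <cstdint>
#include <utility>

// Assumed fixed-point scale factor; must match PRECISION in the shader.
constexpr float PRECISION = 32.0f;

// Pack one signed velocity component into (high byte, low byte), mirroring
// the shader's 16-bit two's-complement fixed-point encoding.
std::pair<uint8_t, uint8_t> packComponent(float v) {
    int32_t fixedPoint = static_cast<int32_t>(v * PRECISION);
    uint16_t bits = static_cast<uint16_t>(fixedPoint & 0xFFFF);
    return {static_cast<uint8_t>(bits >> 8), static_cast<uint8_t>(bits & 0xFF)};
}

// Reassemble the 16-bit value, sign-extend it, and convert back to float.
float unpackComponent(uint8_t high, uint8_t low) {
    uint16_t bits = static_cast<uint16_t>((high << 8) | low);
    int16_t fixedPoint = static_cast<int16_t>(bits);  // sign extension
    return fixedPoint / PRECISION;
}
```

Values representable in the chosen fixed-point format survive the round trip exactly, including negative velocities.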
The Benchmark example includes example code showing how to use the velocity buffer in a real application.

Depth range extension

Depth buffer submission allows applications to enable depth testing. However, in some XR use cases the depth testing range needs to be limited. Inside these view-space limits, the compositor performs depth testing against the other layers; outside the limits, the layer is blended with the other layers in layer order.

As an example, the depth testing range can be used to enable hand visibility in virtual scenes. By enabling depth estimation for the video pass-through layer, applications can perform depth tests against this layer. However, as only hands should show up over the VR layer (and not the office room or objects around the user), depth testing needs to be enabled only for the short range near the user.

The extension that allows applications to limit the depth testing range is called varjo_ViewExtensionDepthTestRange.

struct varjo_ViewExtensionDepthTestRange {
    struct varjo_ViewExtension header;
    double nearZ;
    double farZ;
};

nearZ and farZ define the range for which the depth test is active. These values are given in view-space coordinates.

std::vector<varjo_LayerMultiProjView> views(viewCount);
std::vector<varjo_ViewExtensionDepth> depthExt(viewCount);
std::vector<varjo_ViewExtensionDepthTestRange> depthTestRangeExt(viewCount);

for (int i = 0; i < viewCount; i++) {
    depthExt[i].header.type = varjo_ViewExtensionDepthType;
    views[i].extension = &depthExt[i].header;                // Attach depth buffer extension
    depthExt[i].header.next = &depthTestRangeExt[i].header;  // Chain depth test range extension
    depthTestRangeExt[i].header.type = varjo_ViewExtensionDepthTestRangeType;
    depthTestRangeExt[i].header.next = nullptr;
    // Depth test will be enabled in the [0, 1m] range
    depthTestRangeExt[i].nearZ = 0.0;
    depthTestRangeExt[i].farZ = 1.0;
}

The order in which the given extensions are chained is not important. However, the depth test range extension will have no effect if the depth buffer is not provided.
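To illustrate the resulting behavior, the per-pixel compositing decision can be sketched as follows (illustrative pseudologic only, not the compositor's actual implementation):

```cpp
#include <cassert>

// Illustrative only: decide whether a pixel at view-space distance viewZ
// participates in depth testing, given a depth test range in meters.
// Outside the range, the layer falls back to ordinary layer-order blending.
bool depthTestActive(double viewZ, double rangeNearZ, double rangeFarZ) {
    return viewZ >= rangeNearZ && viewZ <= rangeFarZ;
}
```

With the [0, 1m] range configured above, a hand 0.5 m from the user would be depth tested against the VR layer, while a wall 3 m away would simply be covered by it.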