Mixed Reality development
This page is dedicated to development for Varjo XR headsets. It contains all the information and references specifically related to developing mixed reality content.
If you are already familiar with the basics, check the guides for Creating realistic MR and Adjusting camera settings.
Enabling Mixed Reality on Varjo XR devices
To start developing a mixed reality application in your engine, you first need a working Varjo VR implementation. For this, refer to the Varjo SDK instructions. Once that is set up, you can begin with your mixed reality implementation.
The video pass-through image is a layer in your application and is displayed behind the VR layer by default. By using depth estimation and occlusion, you can bring real objects in front of virtual ones. Read more about this in the Depth occlusion section.
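As a rough sketch of what that involves: the compositor's depth estimation can be enabled with varjo_MRSetVideoDepthEstimation, and the VR layer marked for depth testing. The two calls below are from the Varjo native API, but a complete setup also submits the application's depth buffer as a view extension (varjo_ViewExtensionDepth) with the layer, which is omitted here; treat this only as an outline of the Depth occlusion workflow.

// Sketch: let pass-through pixels occlude VR content where they are closer.
// Assumes m_session and a layer set up as in the snippets below; submitting
// the application's depth buffer (varjo_ViewExtensionDepth) is also required.
varjo_MRSetVideoDepthEstimation(m_session, varjo_True);
layer.header.flags |= varjo_LayerFlag_DepthTestingEnabled;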
Varjo Compositor can automatically blend virtual and video pass-through layers and align VR views. To see an image from video pass-through in your application, perform the following steps:
1. Check if MR is available:
// Check whether a mixed-reality-capable device is connected.
varjo_Bool mixedRealityAvailable = varjo_False;
if (varjo_HasProperty(m_session, varjo_PropertyKey_MRAvailable)) {
    mixedRealityAvailable = varjo_GetPropertyBool(m_session, varjo_PropertyKey_MRAvailable);
}
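Property values are cached snapshots; if your application polls availability over time, call varjo_SyncProperties (part of the native property API) before reading them:

// Call before the property reads above to refresh the cached values.
varjo_SyncProperties(m_session);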
2. Start video pass-through:
// Ask the compositor to start rendering the video pass-through image.
varjo_MRSetVideoRender(m_session, varjo_True);
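If you want to verify that the call was accepted (for example, when no MR-capable device is connected), you can check the session's error state afterwards. This defensive check is our addition, not a required step:

// Optional: confirm that video pass-through actually started.
varjo_Error error = varjo_GetError(m_session);
if (error != varjo_NoError) {
    printf("Could not start video pass-through: %s\n", varjo_GetErrorDesc(error));
}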
3. Set the varjo_LayerFlag_BlendMode_AlphaBlend flag on the VR layer:
layer.header.flags |= varjo_LayerFlag_BlendMode_AlphaBlend;
4. Modify the alpha channel of your VR layer according to your needs: wherever alpha is lower than 1.0, the real-world image shows through.
Note: Colors need to be pre-multiplied for blending to work as expected.
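Pre-multiplied means that each color channel has already been multiplied by the alpha value when the layer is submitted. This is normally done in the shader or through the render target's blend state; the CPU-side helper below is only a hypothetical illustration of the arithmetic:

#include <cstddef>
#include <cstdint>

// Illustrative only: pre-multiply an 8-bit RGBA buffer so that RGB = RGB * A.
// In a real application this is typically done on the GPU instead.
void premultiplyAlpha(uint8_t* rgba, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = rgba + i * 4;
        const uint32_t a = p[3];
        p[0] = static_cast<uint8_t>((p[0] * a) / 255);
        p[1] = static_cast<uint8_t>((p[1] * a) / 255);
        p[2] = static_cast<uint8_t>((p[2] * a) / 255);
    }
}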
Video pass-through camera streams
Since the full 12-megapixel resolution is too large to be streamed to the computer at 90 Hz in stereo, we split the stream into two parts:
- The first part is the peripheral stream, which is the full sensor image downscaled to 1152x1152 resolution for XR-3.
- The second part is the focus stream, which is a non-downscaled crop from the full-resolution sensor image of size 512x512 for XR-3.
The crop position can be moved per frame, in real time, to any location in the image; currently it follows the user's eye gaze, which eye tracking provides with sub-degree accuracy. In practice, you will always see the full-resolution image wherever you are looking.
Eye tracking must be enabled for dynamic foveation to work. If eye tracking is disabled, the foveation location will default to the center of the view.
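If your application also consumes gaze data itself, eye tracking is initialized through the native gaze API along these lines (a sketch; eye tracking must additionally be allowed by the user in Varjo Base):

// Sketch: initialize gaze tracking for this session if the user allows it.
if (varjo_IsGazeAllowed(m_session)) {
    varjo_GazeInit(m_session);
}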
Developers have access to the uncompressed, distorted peripheral stream directly from the cameras. In addition to the raw data, distortion parameters are provided so that you can undistort the image for your use case. This stream is intended mainly for computer vision applications.
For instructions on how to get the stream from the API, see the Native examples page.
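To give a flavor of that API, the sketch below enumerates the available data stream configurations and looks for the distorted color stream. The functions named (varjo_GetDataStreamConfigCount, varjo_GetDataStreamConfigs) are from the native data stream API, but the exact varjo_StreamConfig fields may vary between SDK versions, so verify against the Native examples:

#include <vector>

// Sketch: list the available data streams and find the distorted color stream.
std::vector<varjo_StreamConfig> configs(varjo_GetDataStreamConfigCount(m_session));
varjo_GetDataStreamConfigs(m_session, configs.data(), static_cast<int32_t>(configs.size()));
for (const auto& config : configs) {
    if (config.streamType == varjo_StreamType_DistortedColor) {
        printf("Distorted color stream: %dx%d @ %lld fps\n",
            static_cast<int>(config.width), static_cast<int>(config.height),
            static_cast<long long>(config.frameRate));
    }
}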
Handling errors
If the video pass-through cameras are disconnected, you will see an error in Varjo Base and a pink color in place of the real-world view inside the headset. You can replace the default pink color with something else, such as a skybox or an image with a logo. For this, the app can listen to the MR device Connected and Disconnected events and switch behavior.
varjo_Event evt{};
while (varjo_PollEvent(m_session, &evt)) {
    switch (evt.header.type) {
        case varjo_EventType_MRDeviceStatus: {
            switch (evt.data.mrDeviceStatus.status) {
                case varjo_MRDeviceStatus_Connected: {
                    printf("EVENT: Mixed reality device status: %s\n", "Connected");
                    // e.g. resume normal video pass-through rendering
                    break;
                }
                case varjo_MRDeviceStatus_Disconnected: {
                    printf("EVENT: Mixed reality device status: %s\n", "Disconnected");
                    // e.g. switch to a fallback skybox or logo image
                    break;
                }
            }
            break;
        }
        default: break;
    }
}
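Poll these events once per frame. A common pattern for the behavior switch is to store the last known status in a flag and branch in the render loop; m_mrConnected and renderFallbackScene below are hypothetical names for illustration, not part of the Varjo API:

// Hypothetical helper: draws opaque fallback content (skybox, logo image)
// so the user never sees the default pink error color.
void renderFallbackScene() { /* draw your skybox or logo layer here */ }

bool m_mrConnected = true;  // updated from the event loop above

void renderFrame() {
    if (!m_mrConnected) {
        renderFallbackScene();
        return;
    }
    // ... normal mixed reality rendering ...
}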