Mixed Reality development
This page is dedicated to development for XR-1 Developer Edition. It contains all the information and references specifically related to mixed reality content development. If you are planning on developing traditional VR applications for Varjo headsets, refer to the main development page.
On this page, you will find the following topics:
- What is Varjo XR-1 Developer Edition
- Enabling Mixed Reality on XR-1 Developer Edition
- Developing for Unreal and Unity
- Aligning the physical world with the virtual world
- Handling errors
Enabling Mixed Reality on XR-1 Developer Edition
When you decide to develop a mixed reality application with your engine, you first need a working Varjo VR implementation. To set that up, refer to the Varjo SDK instructions.
Once that is in place, you can begin the mixed reality implementation.
The video pass-through image is another layer in your application, just like the VR layer. It is displayed behind the VR layer by default. However, by using depth estimation and occlusion, you can bring real objects in front of virtual objects. Read more about it in the Depth occlusion section.
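As a rough illustration, the sketch below enables depth estimation for the pass-through image. varjo_MRSetVideoDepthEstimation comes from the native MR API; the layer flag named in the comment and the overall submission flow are simplified assumptions, so verify the exact usage in the Depth occlusion section and the SDK headers.

// Enable depth estimation for the video pass-through image. With depth
// estimation on, the compositor can occlude virtual content with
// real-world objects that are closer to the user.
varjo_MRSetVideoDepthEstimation(m_session, varjo_True);

// In addition, submit a depth buffer with your VR layer and enable depth
// testing for it (e.g. with a flag such as varjo_LayerFlag_DepthTesting)
// so the compositor can compare virtual and estimated real-world depth.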
Varjo Compositor automatically blends the virtual and video pass-through layers and aligns the VR views. To see the video pass-through image in your application, perform the following steps:
* Check if mixed reality is available:

// Sync the property cache before reading session properties.
varjo_SyncProperties(m_session);

// Mixed reality capability is exposed as a session property.
varjo_Bool mixedRealityAvailable = varjo_False;
if (varjo_HasProperty(m_session, varjo_PropertyKey_MRAvailable)) {
    mixedRealityAvailable = varjo_GetPropertyBool(m_session, varjo_PropertyKey_MRAvailable);
}
* Start video pass-through:

// Ask the compositor to start rendering the video pass-through layer
// behind your application's VR layer.
varjo_MRSetVideoRender(m_session, varjo_True);
* Modify the alpha channel of your VR layer according to your needs: wherever alpha is lower than 1.0, you can see the real-world image.
Note: The colors need to be pre-multiplied with alpha; otherwise, blending does not work as expected (see the sketch below).
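The following is a minimal, engine-agnostic sketch of what pre-multiplying a color means before it is written to the VR layer. It is not part of the Varjo API, only an illustration of the convention.

// Pre-multiplied alpha: RGB is scaled by alpha before blending.
// With alpha = 0 the pixel is fully transparent and the video
// pass-through image shows through; with alpha = 1 the virtual
// pixel fully covers the real world.
struct Color { float r, g, b, a; };

Color premultiply(const Color& c)
{
    return Color{ c.r * c.a, c.g * c.a, c.b * c.a, c.a };
}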
The full 12-megapixel camera resolution is too large to be streamed to the computer at 90 Hz in stereo, which is why the stream is split into two parts.
The first part is the “peripheral stream,” which is the full sensor image downscaled to 1008×1008 resolution. The second part is the “focus stream,” a non-downscaled 834×520 crop from the full-resolution sensor image. The crop position can be moved in real time, per frame, to any location in the image; currently it follows the user’s gaze, using eye tracking with sub-degree accuracy. In practice, you always see the full-resolution image wherever you are looking. Note that this functionality requires eye tracking to be enabled; otherwise, the foveated video pass-through image is fixed to the center of the view.
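Because the focus stream follows the user’s gaze, make sure gaze tracking is allowed and initialized in your application. The sketch below assumes the native gaze API (varjo_GazeInit) and the varjo_PropertyKey_GazeAllowed property key; verify the exact property keys against your SDK version.

// Gaze tracking must be allowed in Varjo Base and initialized by the
// application before the focus stream can follow the user's gaze.
// (Properties must be synced with varjo_SyncProperties before reading,
// as in the availability check above.)
if (varjo_GetPropertyBool(m_session, varjo_PropertyKey_GazeAllowed)) {
    varjo_GazeInit(m_session);
}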
Developers can get access to the uncompressed, distorted peripheral stream directly from the cameras. In addition to the raw data, distortion parameters are provided so that you can undistort the image for your use case. This stream is intended mainly for computer vision applications.
You can get the stream from the API as explained on the Native examples page.
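For orientation, the sketch below enumerates the available data streams and looks for the distorted color (raw camera) stream. The config struct fields and the stream type name are taken from the native data stream headers and may differ between SDK versions; treat them as assumptions and follow the Native examples page for the exact API.

// Enumerate data stream configurations and find the distorted color
// (raw camera) stream. Starting the stream and receiving frames is done
// with varjo_StartDataStream and a frame callback; see the Native examples.
#include <vector>

std::vector<varjo_StreamConfig> configs(varjo_GetDataStreamConfigCount(m_session));
varjo_GetDataStreamConfigs(m_session, configs.data(), static_cast<int32_t>(configs.size()));

for (const auto& config : configs) {
    if (config.streamType == varjo_StreamType_DistortedColor) {
        // config.streamId identifies the stream to start;
        // config.width / config.height describe the per-camera resolution.
    }
}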
Handling errors
If the video pass-through cameras have been disconnected, you will see an error in Varjo Base and a pink color in the headset where the real-world view was supposed to be. You can replace the default pink color with something else, such as a skybox or an image with a logo. To do so, the application can listen to the MR device connected and disconnected events and switch its behavior accordingly, as shown below.
varjo_Event evt{};
while (varjo_PollEvent(m_session, &evt)) {
    switch (evt.header.type) {
        case varjo_EventType_MRDeviceStatus: {
            // The mixed reality device (the pass-through cameras) was
            // connected or disconnected; switch application behavior here.
            switch (evt.data.mrDeviceStatus.status) {
                case varjo_MRDeviceStatus_Connected: {
                    printf("EVENT: Mixed reality device status: %s\n", "Connected");
                    break;
                }
                case varjo_MRDeviceStatus_Disconnected: {
                    printf("EVENT: Mixed reality device status: %s\n", "Disconnected");
                    break;
                }
            }
        } break;
        default: break;
    }
}
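A typical way to use these events is to toggle a fallback environment in your renderer. The sketch below is application-level code built around the same event loop and assumes a hypothetical setShowFallbackSkybox helper in your own code.

// Inside the event loop above, instead of only printing the status,
// switch the rendered environment:
case varjo_MRDeviceStatus_Connected:
    // Cameras are back: render with a transparent background so the
    // video pass-through shows through again.
    setShowFallbackSkybox(false);
    break;
case varjo_MRDeviceStatus_Disconnected:
    // Cameras are gone: render an opaque skybox or logo image so the
    // user does not see the default pink color.
    setShowFallbackSkybox(true);
    break;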