This section explains how to handle settings related to the video pass-through cameras.
Defining camera render position
The video pass-through cameras are located about 10 cm in front of the user’s eyes, and the distance between the two cameras does not adjust dynamically according to the user’s IPD. The distance between the cameras is fixed at 64 mm; Varjo is working on software compensation for this offset so that the cameras represent the true eyepoint position. As a result, the distance of real objects perceived through the camera view differs from what you see with your own eyes. This affects the appearance of real objects (primarily objects near the user, such as your hands), as well as the alignment and calibration between virtual and real objects (virtual objects are normally projected to the optical plane of the user’s eyes).
There are two options for rendering position when using Real world view with virtual objects. The first option is to render VR objects from the camera position: virtual objects stay correctly anchored in the scene, but because the render position differs from your eye position, objects appear farther away than they actually are. The second option is to render from the eye position: virtual objects are perceived at a more natural distance, but they appear to float slightly in space when you turn your head.
Developers can switch between the two modes using the Varjo API:
To use camera position (default when VST is enabled)
To use physical eye position
You can also perform a smooth transition between the two values. Be sure to experiment with both modes to find the best result.
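As a sketch of such a transition, the offset can be eased from one endpoint to the other over a short duration. The helper below is illustrative only; the `varjo_MRSetVRViewOffset` call mentioned in the comment is the Varjo MR API entry point that applies the value (0.0 for eye position, 1.0 for camera position), and its exact name and range should be verified against your SDK headers.

```cpp
#include <algorithm>

// Illustrative helper: compute the view-offset percentage for a smooth
// transition between eye position (0.0) and camera position (1.0).
// The result would then be applied each frame with the Varjo MR API call:
//   varjo_MRSetVRViewOffset(session, offset);
double viewOffsetAt(double elapsedSeconds, double durationSeconds)
{
    double t = std::clamp(elapsedSeconds / durationSeconds, 0.0, 1.0);
    // Smoothstep easing avoids an abrupt start and stop of the transition.
    return t * t * (3.0 - 2.0 * t);
}
```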
Occluding real world objects
As previously mentioned, by default the video pass-through layer is always rendered behind VR content, regardless of where objects are located in the physical world. That means you won’t see your hands through a virtual object located a few meters away, even if you raise your hands close to your face. XR-1 Developer Edition uses its color and infrared cameras to estimate the depth of real-world objects in real time.
To enable depth estimation, call this:
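The snippet this sentence refers to does not appear in this section. In the Varjo MR API the toggle is `varjo_MRSetVideoDepthEstimation`; a minimal sketch, assuming an initialized session handle named `m_session`:

```cpp
// Enable real-time depth estimation from the color and infrared cameras.
varjo_MRSetVideoDepthEstimation(m_session, varjo_True);
```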
Then submit your application’s depth buffer and enable depth testing by attaching a properly populated varjo_ViewExtensionDepth extension to the layer, and submit the layer with the varjo_LayerFlag_DepthTesting flag enabled. Once this is done, your hands will occlude virtual objects whenever they are closer to you than the objects are.
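Putting the layer side together, this is a sketch of attaching the depth extension and enabling depth testing. Struct and field names follow the Varjo MR API but should be checked against your SDK headers, and `nearClipDistance`/`farClipDistance` are assumed to come from your own projection setup:

```cpp
// Sketch: attach a depth extension to a view and enable depth testing.
varjo_ViewExtensionDepth depthExt{};
depthExt.header.type = varjo_ViewExtensionDepthType;
depthExt.minDepth = 0.0;             // depth range of the submitted buffer
depthExt.maxDepth = 1.0;
depthExt.nearZ = nearClipDistance;   // your projection's near plane
depthExt.farZ = farClipDistance;     // your projection's far plane
layer.views[i].extension = &depthExt.header;

// Enable depth testing for the whole layer when submitting it.
layer.header.flags |= varjo_LayerFlag_DepthTesting;
```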
Varjo is continuously improving its depth estimation software for higher-quality results. In the future, depth maps will become available to developers as streams.
Changing default camera settings
By default, Varjo software adjusts the cameras for optimal brightness and color tone. However, you can override those values. Through the API, you can change exposure time, ISO/gain, white balance, and flicker compensation.
First, lock the camera config:
varjo_Bool wasLocked = varjo_MRLockCameraConfig(m_session);
Only one application can hold the lock at a time, so your application may fail to obtain it if another application is currently holding it. If the configuration can’t be locked, any attempt to change the camera properties will fail.
While you are holding the lock, you can, for example, switch a property’s operating mode to manual and then set its manual value:
varjo_MRSetCameraPropertyMode(m_session, varjo_CameraPropertyType_ExposureTime, varjo_CameraPropertyMode_Manual);
varjo_MRSetCameraPropertyValue(m_session, varjo_CameraPropertyType_ExposureTime, propertyValue);
Finally, unlock the camera config with varjo_MRUnlockCameraConfig(). Alternatively, if you want to prevent other applications from changing the camera properties, you can leave the configuration locked.
You can determine the supported modes and values with:
varjo_MRGetCameraPropertyModeCount()
varjo_MRGetCameraPropertyModes()
varjo_MRGetCameraPropertyValueCount()
varjo_MRGetCameraPropertyValues()
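For example, to discover what the exposure-time property supports, the usual count-then-fill pattern can be used. This is a sketch; the exact calling convention of these functions should be checked against the SDK headers:

```cpp
#include <vector>

// Query supported modes for the exposure-time property.
int32_t modeCount = varjo_MRGetCameraPropertyModeCount(m_session, varjo_CameraPropertyType_ExposureTime);
std::vector<varjo_CameraPropertyMode> modes(modeCount);
varjo_MRGetCameraPropertyModes(m_session, varjo_CameraPropertyType_ExposureTime, modes.data(), modeCount);

// Query the discrete values supported in manual mode.
int32_t valueCount = varjo_MRGetCameraPropertyValueCount(m_session, varjo_CameraPropertyType_ExposureTime);
std::vector<varjo_CameraPropertyValue> values(valueCount);
varjo_MRGetCameraPropertyValues(m_session, varjo_CameraPropertyType_ExposureTime, values.data(), valueCount);
```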
Getting RAW camera images
The API allows your application to obtain raw images from the cameras. This is useful if you want to run, for example, OpenCV operations on the images. Note that the resolution of these images is lower than what you see through the video pass-through view.
Note: Do not draw this image to the display, as doing so will incur additional latency.
See the DataStreamer file in the MRExample for the code example.
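As a rough sketch of what such a subscription can look like (function, struct, and enum names here follow the Varjo data-stream API but should be verified against the MRExample source before use), finding and starting the distorted color camera stream might be written as:

```cpp
#include <vector>

// Sketch: find the distorted (raw) color camera stream and subscribe to it.
int32_t count = varjo_GetDataStreamConfigCount(m_session);
std::vector<varjo_StreamConfig> configs(count);
varjo_GetDataStreamConfigs(m_session, configs.data(), count);

for (const auto& config : configs) {
    if (config.streamType == varjo_StreamType_DistortedColor) {
        // onFrame is your callback; it is invoked for each camera frame.
        varjo_StartDataStream(m_session, config.streamId, onFrame, nullptr);
    }
}
```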