If your project relies on your own engine that does not yet support Varjo headsets, you can add support using the Varjo SDK for Custom Engines. The Varjo SDK documentation is intended to help you prepare your project to work on Varjo headsets.
Table of Contents
- Going from another VR headset to Varjo headset
- Tracking and controllers
- Migrating from OpenVR
- 20/20 Eye tracking
- Mixed Reality
- Error handling
- Motion prediction
Going from another VR headset to Varjo headset
When adapting your product for a Varjo headset, it is important to understand the difference between a traditional VR headset and a Varjo headset. A traditional VR headset renders a single image for each eye. With Varjo devices it is recommended to render two views per eye: a context view that covers the whole field of view and a focus view that covers a smaller area in higher detail. In addition, even better quality and performance can be achieved if the focus view is rendered along the gaze. Details on how to render images are explained in the section Rendering to Varjo headsets.
Tracking and controllers
Currently, Varjo headsets use SteamVR™ Tracking technology. The API used for SteamVR Tracking is called OpenVR.
The OpenVR API can be utilized for controllers and trackers. You will need to initialize the SteamVR system as a background application to access the controllers in your application. An example implementation of hand controller tracking can be found in the Benchmark example. In most cases it is sufficient to get your existing controller implementation working by just changing your application type. Please note that this IVRSystem should be used only for controllers and input, not for rendering or head pose; using it for the latter may cause problems with the Varjo compositor and result in a suboptimal VR experience. You should only use the controller tracking and controller input portions of the OpenVR API.
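The setup above can be sketched as follows. This is a minimal, non-runnable sketch using the OpenVR API; it assumes the OpenVR headers are available and that real code expands the error handling.

```cpp
#include <openvr.h>

// Initialize SteamVR as a background application: this grants access to
// controllers and trackers without taking over rendering or head pose.
vr::EVRInitError initError = vr::VRInitError_None;
vr::IVRSystem* vrSystem = vr::VR_Init(&initError, vr::VRApplication_Background);
if (initError != vr::VRInitError_None) { /* handle the error */ }

// Each frame: read controller poses only. Head pose and rendering must
// come from the Varjo API, not from OpenVR.
vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
vrSystem->GetDeviceToAbsoluteTrackingPose(
    vr::TrackingUniverseStanding, 0.0f, poses, vr::k_unMaxTrackedDeviceCount);

for (uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
    if (vrSystem->GetTrackedDeviceClass(i) == vr::TrackedDeviceClass_Controller
        && poses[i].bPoseIsValid) {
        // poses[i].mDeviceToAbsoluteTracking is the controller transform.
    }
}

vr::VR_Shutdown();
```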
Important: You should always render the image using the view information provided by the Varjo API.
As of Varjo Base 2.4 release, the Varjo API can be used to reposition controllers and other trackables after using the Override origin and direction feature in Varjo Base. This is useful since the override feature applies to the headset only.
The varjo_GetTrackingToLocalTransform function returns a transform matrix that can be used to place trackables in the correct position and orientation. Apply it to, for example, a controller pose by multiplying the controller pose with the matrix.
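The multiplication above can be illustrated in plain C++. This is an illustration only: it assumes that poses and the transform are 4×4 matrices stored as 16 doubles in column-major order, and the helper names are not part of the Varjo API (verify the layout of varjo_Matrix against Varjo_types.h in your SDK version).

```cpp
#include <array>
#include <cstddef>

// 4x4 matrices stored as 16 doubles in column-major order, matching the
// layout assumed here for varjo_Matrix.
using Mat4 = std::array<double, 16>;

// result = a * b in column-major storage:
// result[col][row] = sum over k of a[k][row] * b[col][k]
Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (std::size_t col = 0; col < 4; ++col)
        for (std::size_t row = 0; row < 4; ++row)
            for (std::size_t k = 0; k < 4; ++k)
                r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return r;
}

// Re-base a controller pose into the overridden origin:
//   localPose = trackingToLocal * controllerPose
// where trackingToLocal would come from varjo_GetTrackingToLocalTransform.
Mat4 applyTrackingToLocal(const Mat4& trackingToLocal, const Mat4& controllerPose) {
    return multiply(trackingToLocal, controllerPose);
}
```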
Varjo Base supports tracking plugins, which allow users to override the headset tracking and use other tracking systems in addition to SteamVR. Please see Third-party tracking for more information.
Migrating from OpenVR
OpenVR is an SDK and API developed by Valve. If you have read its SDK documentation, you are most likely familiar with how it works and may have already ported your software to work with it. Varjo headsets do not use OpenVR for headset tracking: you still must use the OpenVR API for controller integration in your engine, but all headset-related functionality needs to be addressed through the Varjo API.
Learn the details about rendering on the Rendering to Varjo Devices page.
20/20 Eye tracking
You can use the full capability of the Varjo headset eye tracking feature in your software. Get more familiar with eye tracking on the Eye tracking page. API endpoints for eye tracking can be found in the API endpoint guide available in Varjo SDK for Custom Engines package under varjo-sdk/docs.
Mixed Reality
Learn the details about mixed reality development on the Mixed reality development page.
Error handling
You can query Varjo errors with the varjo_GetError function. The error code refers to the first call in the frame loop that has failed; subsequent API calls may fail as a cascading result without overriding the first error. Calling varjo_GetError also clears any cascaded errors. Errors are important and must be checked at least once every frame; they can also be used to inform the user about problems.
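As a minimal sketch, a per-frame error check could look like the following. It assumes a valid varjo_Session* named session obtained from varjo_SessionInit; varjo_GetErrorDesc provides a human-readable description of the code.

```cpp
// Check Varjo errors once per frame; varjo_GetError also clears the error state.
varjo_Error error = varjo_GetError(session);
if (error != varjo_NoError) {
    fprintf(stderr, "Varjo error %lld: %s\n",
            (long long)error, varjo_GetErrorDesc(error));
}
```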
The Varjo API is thread-safe. However, graphics APIs put additional constraints on this: for example, Direct3D calls use the immediate context of the provided graphics device and must not overlap with other threads that use the same context.
Varjo uses nanoseconds as its time unit. Absolute times are relative to an epoch that is constant during the execution of the program. The current time can be queried using the varjo_GetCurrentTime function. To query the time for a frame, the varjo_FrameGetDisplayTime function returns the average perceived moment when the image is shown.
Varjo system time can be converted to other time sources. To do so, the Varjo system time and the other time source need to be synchronized: the synchronization procedure measures the offset between the two time sources, and this offset can then be used to convert times. The Gaze Tracking Example contains example code that does exactly this. Be aware that with time sources that are not monotonic, any adjustment such as daylight saving time or NTP synchronization can invalidate the synchronization. Also, if you plan to use synchronized time conversion over a longer period, you will need to periodically resynchronize the times to avoid clock drift errors.
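The synchronization procedure can be illustrated with standard clocks. In this sketch, steady_clock stands in for the Varjo system time (a real integration would call varjo_GetCurrentTime) and system_clock stands in for the other time source; the names measureOffset and varjoToOther are illustrative, not part of the Varjo API.

```cpp
#include <chrono>
#include <cstdint>

// Stand-in for varjo_GetCurrentTime: a monotonic clock in nanoseconds.
int64_t varjoNow() {
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();
}

// The other time source to synchronize with, also in nanoseconds.
int64_t otherNow() {
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
}

// Measure the offset by sampling the Varjo clock around the other clock
// and pairing the other timestamp with the midpoint of the two reads.
int64_t measureOffset() {
    const int64_t v0 = varjoNow();
    const int64_t o  = otherNow();
    const int64_t v1 = varjoNow();
    return o - (v0 + (v1 - v0) / 2);   // other = varjo + offset
}

// Convert a Varjo timestamp to the other time source using the offset.
// Resynchronize periodically to keep clock drift in check.
int64_t varjoToOther(int64_t varjoTime, int64_t offset) {
    return varjoTime + offset;
}
```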
The Varjo API uses events to notify users about changes to the system and user input. Events are delivered in the varjo_Event structure. It can be allocated on the stack, or, if you want it allocated on the heap, you can use the varjo_AllocateEvent helper function. Poll for events by calling varjo_PollEvent in a loop, in your main loop or any other place that gets called frequently. Varjo_events.h contains all available event types.
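A polling loop along these lines could sit in the main loop. This sketch assumes a valid varjo_Session* named session; the event types and data fields shown are examples, so check Varjo_events.h in your SDK version for the exact names.

```cpp
#include <Varjo.h>
#include <Varjo_events.h>

void pollVarjoEvents(varjo_Session* session) {
    varjo_Event evt{};  // stack allocation; varjo_AllocateEvent is the heap alternative
    while (varjo_PollEvent(session, &evt)) {
        switch (evt.header.type) {
            case varjo_EventType_Visibility:
                // evt.data.visibility tells whether the application should render
                break;
            case varjo_EventType_Foreground:
                // the application gained or lost foreground status
                break;
            default:
                break;
        }
    }
}
```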
Properties can be used to query different Varjo system values. Call varjo_SyncProperties when you want to update the properties. Use varjo_HasProperty to check whether a specific property value exists. Each property has its own type; use the corresponding varjo_GetProperty* function to get the actual property value. All available properties are listed in Varjo_types.h. Properties can be used to query user presence and gaze tracking information.
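As a sketch of the pattern above, querying user presence could look like this. It assumes a valid varjo_Session* named session; the property key name is taken to be varjo_PropertyKey_UserPresence, so verify it against Varjo_types.h in your SDK version.

```cpp
#include <Varjo.h>
#include <Varjo_types.h>

void queryPresence(varjo_Session* session) {
    varjo_SyncProperties(session);  // refresh the property snapshot
    if (varjo_HasProperty(session, varjo_PropertyKey_UserPresence)) {
        varjo_Bool present =
            varjo_GetPropertyBool(session, varjo_PropertyKey_UserPresence);
        // 'present' indicates whether a user is currently wearing the headset
        (void)present;
    }
}
```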
Timewarp is a way of making sure that the rotation of the headset in VR and in real life is always in sync. The plane of the image is rotated as the headset rotation changes, regardless of the image frame rate. This ensures that the user will not feel nauseous when looking around while wearing the headset.
If the application submits a depth buffer, an additional positional timewarp is applied over the image. This greatly improves the smoothness of applications where the user is moving, especially when the application is not rendering at the maximum number of frames per second. It may, however, cause some artifacts around the edges of objects.
Motion prediction
Motion prediction is essentially an extension of positional timewarp. Motion prediction works either by estimating RGBA and depth buffers or by using the velocity vectors submitted by the client. In comparison with positional timewarp, motion prediction will reduce artifacts even further in scenes that contain a lot of motion. This is especially useful in applications like simulators where the user experiences movement.