Introduction to Varjo SDK
If your project relies on your own engine that does not yet support Varjo headsets, you can add that support using the Varjo SDK for Custom Engines. This documentation is intended to help you prepare your project to work on Varjo headsets.
Table of Contents
- Going from another VR headset to Varjo headset
- Tracking and controllers
- Migrating from OpenVR
- Rendering
- 20/20 Eye tracking
- Mixed Reality
- Error handling
- Threading
- Timing
- Events
- Properties
- Timewarp
- Motion prediction
Going from another VR headset to Varjo headset
When thinking about adapting your product for a Varjo headset, it is important to understand the difference between a traditional VR headset and a Varjo headset. A traditional VR headset renders a single image for each eye. With Varjo devices it is recommended to render two views per eye: a context view that covers the whole field of view and a focus view that covers a smaller area in higher detail. In addition, even better quality and performance can be achieved if the focus view is rendered along the gaze. Details on how to render images are explained in the section Rendering to Varjo headsets.
Varjo XR-3 and VR-3 have four displays, two for each eye; this is called the Bionic Display. Varjo Aero and XR-4 have two displays, one for each eye. For more detailed technical specifications, see Varjo products technical specifications.
Tracking and controllers
Currently, Varjo headsets use either inside-out tracking or SteamVR™ Tracking, depending on your headset model. Varjo controllers work only with inside-out tracking, while SteamVR-tracked controllers and trackers work only with SteamVR tracking.
The OpenVR API can be used for tracking controllers and trackers. OpenVR is the API for SteamVR, but it can be used for Varjo controllers as well. You will need to initialize the SteamVR system as an "other" application to access the controllers in your application. An example implementation of hand controller tracking can be found in the Benchmark example. In most cases it is sufficient to get your existing controller implementation working by changing the type of your vr::IVRSystem from vr::EVRApplicationType::VRApplication_Scene to vr::EVRApplicationType::VRApplication_Other. Note that vr::EVRApplicationType::VRApplication_Background is not enough, as it does not start the SteamVR runtime if it is not already running.
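A minimal sketch of this initialization and a per-frame controller pose query might look like the following (error handling and shutdown omitted; only standard OpenVR calls are used):

```cpp
#include <openvr.h>
#include <cstdint>

vr::IVRSystem* initControllerTracking() {
    // VRApplication_Other starts the SteamVR runtime if needed, without
    // registering this process as the scene (rendering) application.
    vr::EVRInitError initError = vr::VRInitError_None;
    vr::IVRSystem* system = vr::VR_Init(&initError, vr::VRApplication_Other);
    return (initError == vr::VRInitError_None) ? system : nullptr;
}

void pollControllerPoses(vr::IVRSystem* system) {
    // Query the current pose of every tracked device and filter for controllers.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    system->GetDeviceToAbsoluteTrackingPose(
        vr::TrackingUniverseStanding, 0.0f, poses, vr::k_unMaxTrackedDeviceCount);

    for (uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i) {
        if (system->GetTrackedDeviceClass(i) == vr::TrackedDeviceClass_Controller &&
            poses[i].bPoseIsValid) {
            // poses[i].mDeviceToAbsoluteTracking is a row-major 3x4 pose matrix.
        }
    }
}
```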
Please note that this IVRSystem should be used only for the controller tracking and controller input portions of the OpenVR API, not for rendering or head pose. Using it for rendering or head pose may cause problems with the Varjo compositor and result in a suboptimal VR experience.
Important: You should always render the image using the view information provided by the Varjo API.
Tracking-to-local transform
The Varjo API has to be used to reposition controllers and other trackables after receiving their locations through OpenVR. For example, the Room setup and the Override origin and direction features in Varjo Base add an additional transform that applies only to the headset.
The varjo_GetTrackingToLocalTransform function returns the needed transform matrix, which can be used to place trackables in the correct position and orientation. Apply it by multiplying, for example, a controller pose with the matrix.
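A sketch of applying this transform to an OpenVR controller pose is shown below. The varjo_GetTrackingToLocalTransform signature and the column-major double[16] layout of varjo_Matrix are assumptions based on the Varjo C API conventions; verify them against Varjo.h and Varjo_types.h:

```cpp
#include <Varjo.h>
#include <Varjo_types.h>
#include <openvr.h>

// Convert OpenVR's row-major 3x4 pose into a column-major 4x4 matrix
// (the layout assumed here for varjo_Matrix).
varjo_Matrix toVarjoMatrix(const vr::HmdMatrix34_t& m) {
    varjo_Matrix out{};
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 4; ++col)
            out.value[col * 4 + row] = m.m[row][col];
    out.value[15] = 1.0;  // Bottom row is (0, 0, 0, 1).
    return out;
}

// Multiply two column-major 4x4 matrices: result = a * b.
varjo_Matrix multiply(const varjo_Matrix& a, const varjo_Matrix& b) {
    varjo_Matrix out{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                out.value[col * 4 + row] += a.value[k * 4 + row] * b.value[col * 4 + k];
    return out;
}

varjo_Matrix controllerPoseInLocalSpace(varjo_Session* session,
                                        const vr::HmdMatrix34_t& openvrPose) {
    // Assumed signature; the function is documented to return the
    // tracking-to-local transform matrix.
    varjo_Matrix trackingToLocal = varjo_GetTrackingToLocalTransform(session);
    return multiply(trackingToLocal, toVarjoMatrix(openvrPose));
}
```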
Tracking plugins
Varjo Base supports tracking plugins that allow users to override the headset tracking and use other tracking systems in addition to SteamVR. Please see Third-party tracking for more information.
Migrating from OpenVR
OpenVR is an SDK and API developed by Valve. If you have read its documentation, you are most likely familiar with how it works, and you may already have ported your software to work with it. Varjo headsets do not use OpenVR for headset tracking. You must still use the OpenVR API for controller integration in your engine, but all headset-related functionality needs to be addressed through the Varjo API.
Rendering
Learn the details about rendering on the Rendering to Varjo Devices page.
20/20 Eye tracking
You can use the full capability of the Varjo headset's eye tracking feature in your software. Get more familiar with eye tracking on the Eye tracking page. API endpoints for eye tracking can be found in the API endpoint guide, available in the Varjo SDK for Custom Engines package under varjo-sdk/docs. See also the GazeTrackingExample application, which contains example code demonstrating the use of the Varjo eye tracking API.
Mixed Reality
Learn the details about mixed reality development on the Mixed reality development page.
Error handling
You can query Varjo errors with the varjo_GetError function. The error code refers to the first failure in the frame loop; subsequent API calls may fail as a cascading result without overriding the first error. Calling varjo_GetError also clears the cascading errors. Errors must be checked at least once every frame and can be used to inform the user about what went wrong.
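A minimal per-frame check might look like this; the varjo_GetErrorDesc helper for mapping a code to a readable string is assumed here, so verify it against Varjo.h:

```cpp
#include <cstdio>
#include <Varjo.h>

// Call once per frame, for example at the end of the frame loop.
void checkErrors(varjo_Session* session) {
    varjo_Error error = varjo_GetError(session);  // Also clears the error state.
    if (error != varjo_NoError) {
        // varjo_GetErrorDesc is assumed to return a human-readable description.
        std::fprintf(stderr, "Varjo error %lld: %s\n",
                     static_cast<long long>(error), varjo_GetErrorDesc(error));
    }
}
```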
Threading
Varjo API is thread safe.
However, graphics APIs place additional constraints on this: for example, Direct3D calls use the immediate context of the provided graphics device, and they must not overlap with other threads that use the same context.
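As a hypothetical sketch of one way to satisfy this constraint with Direct3D 11, every use of the shared immediate context could be serialized with a mutex:

```cpp
#include <d3d11.h>
#include <mutex>

// The Varjo API itself is thread safe, but the D3D11 immediate context is not.
// One simple arrangement: serialize all work that touches the immediate
// context of the device shared with the Varjo compositor.
std::mutex g_immediateContextMutex;

void renderWork(ID3D11DeviceContext* immediateContext) {
    std::lock_guard<std::mutex> lock(g_immediateContextMutex);
    // ... issue rendering commands or frame submission calls that use
    // the immediate context here ...
}
```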
Timing
Varjo uses nanoseconds as a time unit. Varjo timestamps are monotonic, independent of the time-of-day clock, and relative to an epoch that is constant during the execution of the program (the Varjo system epoch). The current time can be queried using the varjo_GetCurrentTime function.
To query the time for a frame, use the varjo_FrameGetDisplayTime function, which returns the average perceived moment at which the image is shown.
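For illustration, a sketch of querying both times inside the frame loop (the function names come from this page; verify the exact signatures against Varjo.h):

```cpp
#include <Varjo.h>

// Inside the frame loop, for example after varjo_WaitSync/varjo_BeginFrame:
void queryTimes(varjo_Session* session) {
    // Current time on the monotonic Varjo clock, in nanoseconds.
    varjo_Nanoseconds now = varjo_GetCurrentTime(session);

    // Predicted display time of the frame being rendered; use this for
    // animation and simulation so the rendered state matches the moment
    // the user actually perceives the image.
    varjo_Nanoseconds displayTime = varjo_FrameGetDisplayTime(session);

    (void)now; (void)displayTime;
}
```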
Note that the Varjo system time epoch is not equal to, for example, the Unix Epoch or any other global time epoch. However, Varjo timestamps can be converted to be compatible with timestamps from other time sources. To do so, Varjo system time and the other time source need to be synchronized: the clock synchronization procedure measures the offset between the two time sources, and this offset is then used for the timestamp conversion.
The varjo_ConvertToUnixTime function can be used to convert a Varjo nanosecond timestamp to Unix Time, the real time-of-day clock timestamp expressed as the number of nanoseconds elapsed since the Unix Epoch. The result of this conversion depends on the time-of-day clock adjustment of the operating system (e.g. Windows time synchronization with an NTP server). The conversion is supported for Varjo timestamps that are up to one hour in the past, but not earlier than the time when the Varjo system was started (e.g. the Unix Time conversion can't be used with timestamps recorded before the last computer reboot or restart of the Varjo services). The conversion can also be applied to Varjo timestamps that are in the future relative to the result of varjo_GetCurrentTime; however, such a conversion must be used with consideration of a possible upcoming time-of-day clock adjustment that can't be accounted for at the moment of the conversion. The GazeTrackingExample application contains example code that demonstrates the use of the timestamp conversion with Varjo timestamps.
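A sketch of the conversion, assuming varjo_ConvertToUnixTime takes the session and a Varjo timestamp and returns Unix Time in nanoseconds (verify the signature against Varjo.h):

```cpp
#include <Varjo.h>

// Convert a Varjo timestamp (for example from a gaze sample) to Unix Time
// in nanoseconds since the Unix Epoch.
varjo_Nanoseconds toUnixNanoseconds(varjo_Session* session,
                                    varjo_Nanoseconds varjoTimestamp) {
    // Valid only for timestamps no more than one hour old and not older
    // than the Varjo system start, as described above.
    return varjo_ConvertToUnixTime(session, varjoTimestamp);
}
```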
Events
Varjo API uses events to notify users about changes to the system and user input.
Initialize a varjo_Event structure. It can be allocated on the stack, or, if you want it allocated on the heap, you can use the varjo_AllocateEvent helper function. Poll for events by calling varjo_PollEvent in a loop from your main loop or any other place that gets called frequently. Varjo_events.h contains all available event types.
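A sketch of such a polling loop is shown below; the event type and payload names follow Varjo_events.h, but treat them as assumptions and check the header for the full list:

```cpp
#include <Varjo.h>
#include <Varjo_events.h>

// Drain all pending events once per frame.
void pollEvents(varjo_Session* session) {
    varjo_Event evt{};  // Stack-allocated; varjo_AllocateEvent is the heap alternative.
    while (varjo_PollEvent(session, &evt)) {
        switch (evt.header.type) {
            case varjo_EventType_Visibility:
                // Pause or resume rendering based on evt.data.visibility.visible.
                break;
            case varjo_EventType_Foreground:
                // The application moved to the foreground or background.
                break;
            default:
                break;
        }
    }
}
```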
Properties
Properties can be used to query different Varjo system values.
- Call varjo_SyncProperties when you want to update the properties.
- Use varjo_HasProperty to check whether a specific property value exists. Each property has its own varjo_PropertyKey.
- Use varjo_GetProperty* to get the actual property value.
All available properties are listed in Varjo_types.h. Properties can be used to query user presence and gaze tracking information.
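For example, a sketch of querying user presence; the varjo_PropertyKey_UserPresence key and the varjo_GetPropertyBool accessor follow Varjo_types.h and Varjo.h, but verify them against your SDK version:

```cpp
#include <Varjo.h>
#include <Varjo_types.h>

bool isUserPresent(varjo_Session* session) {
    varjo_SyncProperties(session);  // Refresh the property snapshot.
    if (varjo_HasProperty(session, varjo_PropertyKey_UserPresence)) {
        return varjo_GetPropertyBool(session, varjo_PropertyKey_UserPresence) == varjo_True;
    }
    return false;  // Property not available on this system.
}
```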
Timewarp
Timewarp is a way of making sure that the rotation of the headset in VR and in real life stays in sync. The plane of the image is rotated as the headset rotation changes, regardless of the frame rate of the image. This ensures that the user does not feel nauseous when looking around while wearing the headset.
If the application submits a depth buffer, an additional positional timewarp is applied to the image. This greatly improves the smoothness of applications where the user is moving, especially when the application is not rendering at the maximum frame rate. It may, however, cause some artifacts around the edges of objects.
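As a sketch of how a depth buffer is attached to a projection view through the layer API; the varjo_ViewExtensionDepth structure and its fields follow Varjo_layers.h, but their exact names are assumptions to verify against your SDK:

```cpp
#include <Varjo.h>
#include <Varjo_layers.h>

// Attach a depth view extension to one projection view before submitting
// the frame, enabling positional timewarp for that view.
void attachDepth(varjo_LayerMultiProjView& view,
                 varjo_ViewExtensionDepth& depthExt,
                 const varjo_SwapChainViewport& depthViewport,
                 double nearZ, double farZ) {
    depthExt.header.type = varjo_ViewExtensionType_Depth;
    depthExt.header.next = nullptr;
    depthExt.minDepth = 0.0;            // Depth range written by the application.
    depthExt.maxDepth = 1.0;
    depthExt.nearZ = nearZ;             // Clip planes used by the projection matrix.
    depthExt.farZ = farZ;
    depthExt.viewport = depthViewport;  // Region in the depth swap chain.
    view.extension = &depthExt.header;  // Chain the extension into the view.
}
```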
Motion prediction
Motion prediction is essentially an extension of positional timewarp. It works either by estimating motion from the RGBA and depth buffers or by using velocity vectors submitted by the client. Compared with positional timewarp, motion prediction reduces artifacts even further in scenes that contain a lot of motion. This is especially useful in applications such as simulators, where the user experiences movement.
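Continuing the sketch above, a velocity extension could be chained after the depth extension so the compositor can use application-provided motion vectors; again, the varjo_ViewExtensionVelocity fields are assumptions to verify against Varjo_layers.h:

```cpp
#include <Varjo.h>
#include <Varjo_layers.h>

// Chain a velocity view extension after an already-filled depth extension.
// View extensions form a linked list through their header.next pointers.
void attachVelocity(varjo_ViewExtensionDepth& depthExt,
                    varjo_ViewExtensionVelocity& velocityExt,
                    const varjo_SwapChainViewport& velocityViewport,
                    double velocityScale) {
    velocityExt.header.type = varjo_ViewExtensionType_Velocity;
    velocityExt.header.next = nullptr;
    velocityExt.velocityScale = velocityScale;   // Scale applied to stored vectors.
    velocityExt.includesHMDMotion = varjo_False; // Vectors contain object motion only.
    velocityExt.viewport = velocityViewport;     // Region in the velocity swap chain.
    depthExt.header.next = &velocityExt.header;
}
```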