Eye positions are given in meters in the headset coordinate system in VR space. The origin is at the midpoint of the segment connecting the eye centers, with X pointing right, Y pointing up, and Z pointing forward. Thus the left eye center is always at (-ipd/2, 0, 0) and the right at (ipd/2, 0, 0). Each eye gaze direction is a normalized vector. Simple formulas convert gaze vectors to angles: the horizontal angle is atan(x/z) and the vertical angle is atan(y/z). Given a gaze ray's origin and direction, the gaze position can be projected onto any intersecting plane in VR space.
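The angle formulas and the plane projection can be sketched as follows (a minimal illustration; the function names are not part of any API, and atan2 is used instead of a plain atan(x/z) so the result stays well defined when z approaches zero):

```python
import math

def gaze_angles(direction):
    """Convert a normalized gaze direction (x, y, z) in the headset
    coordinate system (X right, Y up, Z forward) to horizontal and
    vertical angles in degrees."""
    x, y, z = direction
    horizontal = math.degrees(math.atan2(x, z))  # positive = looking right
    vertical = math.degrees(math.atan2(y, z))    # positive = looking up
    return horizontal, vertical

def project_on_plane(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray origin + t*direction with a plane given by a
    point and a normal; returns the hit point, or None if the ray is
    parallel to the plane."""
    denom = sum(n * d for n, d in zip(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None
    t = sum(n * (p - o) for n, p, o in
            zip(plane_normal, plane_point, origin)) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))
```

For example, a straight-ahead gaze from the origin hits a plane 2 m in front of the user at (0, 0, 2).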
How is focus distance calculated?
Focus distance is calculated by finding the shortest segment connecting the two eye rays in 3D and taking its midpoint; focus distance is the distance from the origin to that midpoint. There are extra checks for near-parallel or divergent rays, and the computed distance is always clamped to the range 0.01 m…2.0 m. For parallel or divergent rays the value produced is 2 m, since focus distance cannot be estimated when the user is looking farther than about 2 m ahead: at that range the rays are nearly parallel. Given the individual eye rays, one can experiment with other ways of computing the focus point and focus distance.
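The geometry described above can be sketched with the standard closest-point-between-two-rays construction (a minimal illustration, not the actual implementation; the tolerance and the divergence test are assumptions):

```python
import math

def closest_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t*d1 and
    o2 + s*d2. Returns None for near-parallel rays, or when the closest
    approach lies behind either eye (divergent gaze)."""
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if denom < 1e-9:          # rays are (nearly) parallel
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    if t < 0 or s < 0:        # closest approach behind the eyes: divergent
        return None
    p1 = tuple(o + t * dv for o, dv in zip(o1, d1))
    p2 = tuple(o + s * dv for o, dv in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

def focus_distance(o1, d1, o2, d2):
    """Distance from the origin to the ray midpoint, clamped to
    0.01 m .. 2.0 m; parallel/divergent rays report the 2 m cap."""
    mid = closest_midpoint(o1, d1, o2, d2)
    if mid is None:
        return 2.0
    dist = math.sqrt(sum(c * c for c in mid))
    return min(max(dist, 0.01), 2.0)
```

With a 64 mm IPD and both eyes converging on a point 1 m straight ahead, this yields a focus distance of roughly 1.0 m.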
How is stability calculated?
Stability is calculated from the history of recent combined gaze ray samples. We sum up the gaze ray's angular “travel distance” over a few recent samples and interpolate where this value falls in the range from bad stability (0, large traveled distance) to good stability (1, small traveled distance). The 0 and 1 endpoints are based on empirically determined angle constants and historical samples. Users are free to experiment with their own stability metrics, since they have access to all the gaze ray data needed to compute their own stability values.
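A minimal sketch of such a metric is shown below. The window size and the two angle constants are placeholders chosen for illustration; the real empirically derived constants are not published here:

```python
import math

# Hypothetical endpoints: total angular travel (degrees) over the window
# that maps to perfect (1.0) and to zero (0.0) stability. Illustrative only.
GOOD_TRAVEL_DEG = 0.5
BAD_TRAVEL_DEG = 5.0
WINDOW = 10

def angle_between(d1, d2):
    """Angle in degrees between two normalized direction vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def stability(samples):
    """samples: recent normalized combined gaze directions, oldest first.
    Sums angular travel over the last WINDOW samples and linearly maps it
    to [0, 1]: small travel -> 1 (stable), large travel -> 0 (unstable)."""
    recent = list(samples)[-WINDOW:]
    travel = sum(angle_between(a, b) for a, b in zip(recent, recent[1:]))
    t = (travel - GOOD_TRAVEL_DEG) / (BAD_TRAVEL_DEG - GOOD_TRAVEL_DEG)
    return max(0.0, min(1.0, 1.0 - t))
```

A perfectly still gaze scores 1.0; a gaze jittering several degrees per sample scores 0.0.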
What happens to lost samples?
Some samples may be lost if the corresponding video frames are lost during transfer to the PC (e.g., due to a bad USB connection), or if a major state transition in the gaze system required more processing time than fits in the video frame interval (e.g., when the user is being calibrated and the gaze system performs extra processing for calibration). No frames are extrapolated or hallucinated for missing samples. Users can detect missed samples by checking the frameNumber field for gaps.
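Gap detection on the frameNumber field can be sketched as follows (the dict-based sample shape is illustrative, and the code assumes frameNumber increments by one per captured frame):

```python
def find_dropped_frames(samples):
    """samples: gaze samples in arrival order, each exposing a
    'frameNumber' field. Returns the frame numbers that were never
    delivered, assuming frameNumber increments by 1 per captured frame."""
    dropped = []
    for prev, cur in zip(samples, samples[1:]):
        gap = cur["frameNumber"] - prev["frameNumber"]
        if gap > 1:  # one or more frames missing between these samples
            dropped.extend(range(prev["frameNumber"] + 1, cur["frameNumber"]))
    return dropped
```

For instance, receiving frames 1, 2, 5, 6 reveals that frames 3 and 4 were lost.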