A recent computer vision paper from Stanford University, Universidad de Zaragoza, and the University of California, Berkeley examines saliency in VR. The researchers tracked the gaze of 86 users exploring panoramic VR images, some of which were produced by Chaos Group Labs. The study compares this behavior with how people view the same scenes on a desktop display, and its findings can inform many other areas of VR research.
Image from the CONSTRUCT VR experiment by Kevin Margo, used in “Saliency in VR: How do people explore virtual environments?”
Abstract:
"Understanding how humans explore virtual environments is crucial for many applications, such as developing compression algorithms or designing effective cinematic virtual reality (VR) content, as well as to develop predictive computational models. We have recorded 780 head and gaze trajectories from 86 users exploring omnidirectional stereo panoramas using VR head-mounted displays. By analyzing the interplay between visual stimuli, head orientation, and gaze direction, we demonstrate patterns and biases of how people explore these panoramas and we present first steps toward predicting time-dependent saliency. To compare how visual attention and saliency in VR are different from conventional viewing conditions, we have also recorded users observing the same scenes in a desktop setup. Based on this data, we show how to adapt existing saliency predictors to VR, so that insights and tools developed for predicting saliency in desktop scenarios may directly transfer to these immersive applications."
Credits:
Vincent Sitzmann (1), Ana Serrano (2), Amy Pavel (3), Maneesh Agrawala (1), Diego Gutierrez (2), Gordon Wetzstein (1)
(1) Stanford University, (2) Universidad de Zaragoza, (3) University of California, Berkeley