Publication Date:
2019
Abstract:
Capturing and tracking immersive VR sessions performed through HMDs in public spaces may offer valuable insights into users' propensities and spatial affordances. Large collected records can be exploited to analyze or fine-tune locomotion models for time-constrained experiences. However, transmitting or streaming such data over the web to analysts or professionals in the distance-learning field can be challenging due to network bandwidth constraints, or may involve computationally intensive decoding routines. This work investigates compact encoding models to volumetrically capture user states and propensities during running VR sessions, using image-based encoding approaches. We focus on quantization methods and data layouts that allow immersive sessions to be recorded smoothly, and on how they compare to standard approaches in terms of storage and spatio-temporal accuracy. Qualitative and quantitative results obtained from public exhibits are presented to validate the encoding model.
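The abstract mentions quantization methods that map user states into image-based layouts. As a minimal sketch of the general idea (not the paper's actual encoder), one can normalize each frame's 3D head position against known volume bounds and quantize it to 8-bit values, so one tracked frame becomes one RGB pixel of a session image; all function names and parameters here are hypothetical:

```python
import numpy as np

def quantize_states(positions, bounds_min, bounds_max):
    """Map 3D positions into 8-bit RGB pixels (hypothetical sketch).

    Each (x, y, z) sample is normalized against the tracked volume
    bounds and rounded to [0, 255], so a session of N frames can be
    stored as an N-pixel row of an image.
    """
    positions = np.asarray(positions, dtype=np.float64)
    bounds_min = np.asarray(bounds_min, dtype=np.float64)
    span = np.asarray(bounds_max, dtype=np.float64) - bounds_min
    normalized = (positions - bounds_min) / span  # map into [0, 1]
    return np.clip(np.round(normalized * 255.0), 0, 255).astype(np.uint8)

def dequantize_states(pixels, bounds_min, bounds_max):
    """Recover approximate positions; per-axis error is at most span/510."""
    bounds_min = np.asarray(bounds_min, dtype=np.float64)
    span = np.asarray(bounds_max, dtype=np.float64) - bounds_min
    return np.asarray(pixels, dtype=np.float64) / 255.0 * span + bounds_min
```

With 8 bits per axis, the spatial resolution is fixed by the volume size (e.g. a 10 m span quantizes to steps of about 4 cm), which illustrates the storage/accuracy trade-off the abstract refers to.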
CRIS Type:
01.01 Journal article
Keywords:
Encoding models; Immersive VR; Saliency; Visual analytics
Authors:
Cinque, Luigi; Fanini, Bruno
Link to full record: