
From Gaze to Insight: WorldViz and VIVE Business Bring Eye Tracking into Everyday VR Research

Why Visual Attention Is the Missing Piece


Education | Collaboration | Training/Simulation | Case Study | Articles

5 min read

For decades, researchers, educators, and innovators have wanted to answer one question: “What are people paying attention to?” The only available answers were indirect: post-session interviews, screen recordings, and assumptions about what people should focus on.

That has changed with virtual reality and built-in eye tracking. Eye tracking tells researchers exactly where a user is looking and how long their gaze lingers, and these behavioral cues give insight into cognition, emotion, and intent. It is, essentially, a window into the human mind. WorldViz and HTC VIVE Business have now made that window accessible to anyone working in behavioral research, UX design, training, or academic study.

The Barriers: Why Eye Tracking Wasn’t Always Feasible

Over twenty years ago, eye tracking in VR was the domain of large labs with setups in the range of $60,000–$80,000. Those systems required extensive custom code, integration with fragile headsets, and complex calibration routines. Most researchers needed weeks to set up an experiment and could only collect data from small participant pools because of the cost and time involved.

That’s where WorldViz and VIVE Business made a breakthrough.

By combining WorldViz’s SightLab VR Pro software with VIVE Focus Vision hardware, researchers can design and deploy immersive experiments with built-in gaze tracking, without writing code and often in less than a day, on a headset that costs under $1,500 in most setups.

"One of the challenges with performing research using virtual reality and eye tracking is the complexity of building the initial experiment. With built-in tools specifically for setting up eye tracking experiments in VR, SightLab VR got us up and running and doing our research in very little time with minimal effort!" – Joy Lee PhD, Maastricht University

The Toolkit: SightLab + VIVE Focus Vision

SightLab VR Pro, built by WorldViz, allows researchers to create customized VR experiments and to collect synchronized biometric and behavioral data. The system supports more than 150 devices, including electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), BIOPAC sensors, and VIVE’s native eye-tracking hardware.

Researchers can easily measure:

  • Gaze (direction, fixations, dwell time)
  • Pupil diameter (used as a proxy for cognitive load)
  • Scan paths and time to first fixation
  • Visual attention heatmaps
  • Head movement (6DOF) and input behavior

What makes the system powerful is not just the data it collects, but how it visualizes it. Researchers can replay sessions with eye trails in 3D, generate heatmaps across areas of interest, or compare gaze behavior across cohorts, all without writing a single line of code.
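
To make the metrics listed above concrete, here is a minimal sketch of how dwell time, fixation counts, and time to first fixation could be derived from a stream of raw gaze samples. It is plain Python with an invented sample format and thresholds, not SightLab VR Pro's actual API or log schema:

```python
# Minimal sketch: deriving dwell time, fixation counts, and time to first
# fixation per area of interest (AOI) from raw gaze samples. The sample
# format and thresholds are illustrative, not SightLab VR Pro's log schema.
from collections import defaultdict

# Each sample: (timestamp in seconds, label of the AOI the gaze hit, or None)
gaze_samples = [
    (0.00, None), (0.02, "billboard"), (0.04, "billboard"),
    (0.06, "billboard"), (0.08, None), (0.10, "pedestrian"),
    (0.12, "pedestrian"), (0.14, "billboard"), (0.16, "billboard"),
]

MIN_FIXATION_S = 0.03  # a run on one AOI shorter than this is not a fixation


def summarize_gaze(samples, min_fixation_s=MIN_FIXATION_S):
    dwell = defaultdict(float)    # total seconds of gaze per AOI
    fixations = defaultdict(int)  # number of qualifying fixations per AOI
    first_fix = {}                # time of first qualifying fixation per AOI

    i = 0
    while i < len(samples):
        t0, aoi = samples[i]
        j = i
        # extend the run while consecutive samples stay on the same AOI
        while j + 1 < len(samples) and samples[j + 1][1] == aoi:
            j += 1
        duration = samples[j][0] - t0
        if aoi is not None:
            dwell[aoi] += duration
            if duration >= min_fixation_s:
                fixations[aoi] += 1
                first_fix.setdefault(aoi, t0)
        i = j + 1
    return dict(dwell), dict(fixations), first_fix


dwell, fixations, first_fix = summarize_gaze(gaze_samples)
print("dwell time (s):        ", dwell)       # billboard: ~0.06, pedestrian: ~0.02
print("fixation count:        ", fixations)   # billboard: 1
print("time to first fixation:", first_fix)   # billboard: 0.02 s
```

In practice, SightLab handles this kind of aggregation automatically and feeds it straight into the session replays and heatmaps described above.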

WorldViz eye tracking in automotive setting

“We use SightLab VR in combination with EEG to understand aesthetic experiences in museums. I also used the SightLab VR during a series of workshops and conferences at the Panthéon-Sorbonne University in Paris. VR eye-tracking experiments stimulated strong interest from French museum curators and digital application supervisors, and we are planning to collaborate for future exhibitions.” – Maurizio Forte, PhD, Professor in Duke University’s Department of Classical Studies

Use Cases: Exploring Human Behavior Through VR and Eye Tracking

Maastricht University: Reducing Errors Through Stress-Aware VR Training

Researchers at Maastricht University examined how stress and cognitive overload affect medical trainees during high-pressure emergency room simulations in VR. Using SightLab VR and VIVE Pro Eye headsets, they measured pupil dilation, blink rate, and gaze fixations to assess cognitive load in real time. This data allowed instructors to pinpoint moments of mental strain and provide targeted, in-the-moment interventions. Their findings were used to create adaptive training modules that helped reduce diagnostic errors, improve retention, and lay the groundwork for more responsive and personalized medical education systems. (Source)
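
As an illustration of how pupil dilation becomes a usable load signal, the sketch below flags windows where pupil diameter rises above a per-participant baseline. The window length, threshold, and data are assumptions for illustration only, not the Maastricht team's actual pipeline:

```python
# Hypothetical sketch: flagging windows of elevated cognitive load from
# pupil diameter relative to a per-participant resting baseline. The
# threshold, window length, and data are illustrative, not study values.
from statistics import mean

def load_flags(pupil_mm, baseline_mm, window=60, threshold=0.05):
    """Return start indices of windows whose mean pupil diameter exceeds
    the resting baseline by more than `threshold` (relative change)."""
    flagged = []
    for start in range(0, len(pupil_mm) - window + 1, window):
        window_mean = mean(pupil_mm[start:start + window])
        if (window_mean - baseline_mm) / baseline_mm > threshold:
            flagged.append(start)
    return flagged

baseline = 3.1                                  # mm, measured during a calm rest period
trace = [3.0 + 0.002 * i for i in range(180)]   # a gradually dilating pupil
print(load_flags(trace, baseline))              # -> [120]: load rises in the last window
```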

University of Oklahoma: Neurocognitive Research in Smart Learning

At the Human Factors and Simulation Lab, Dr. Ziho Kang and his team are developing a smart, non-text-based learning platform that combines WorldViz Vizard, VIVE headsets, haptic feedback, and BIOPAC fNIRS devices. The system captures real-time biometric data, including gaze, heart rate, and brain activity, to study how students learn under pressure. Using semantic network tasks, researchers tracked how physiological signals correlate with performance and adaptation. SightLab VR enables synchronized, multimodal data collection, which in turn supports insights into attention, stress response, and personalized learning. The project follows Universal Design for Learning principles and aims to expand through school and community partnerships. (Source)
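
Synchronizing multimodal streams like these usually comes down to putting everything on one timeline. A hypothetical sketch of that idea, using pandas to attach the most recent heart-rate reading to each gaze sample, might look like this (column names, rates, and values are invented, not the lab's schema):

```python
# Hypothetical sketch: aligning a fast eye-tracking stream with a slower
# physiological stream on one timeline. Column names, rates, and values
# are invented; this is not the lab's actual schema or pipeline.
import pandas as pd

# 90 Hz gaze/pupil stream and 1 Hz heart-rate stream, timestamped in seconds
gaze = pd.DataFrame({
    "t": [i / 90 for i in range(270)],
    "pupil_mm": [3.0 + 0.001 * i for i in range(270)],
})
heart = pd.DataFrame({"t": [0.0, 1.0, 2.0], "bpm": [72, 75, 81]})

# For every gaze sample, attach the most recent heart-rate reading
aligned = pd.merge_asof(gaze, heart, on="t", direction="backward")
print(aligned.tail())   # the last rows carry the 2-second heart-rate value (81 bpm)
```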

Michigan State University: Exploring Artificial Social Influence in VR

Researchers at Michigan State University used the WorldViz Vizard platform to study how AI-driven virtual agents can influence human behavior through socially intelligent interactions. Participants engaged with embodied conversational agents (VR-ECAs) in natural, health-related conversations within a controlled VR environment. These AI avatars responded in real time using speech and non-verbal cues, such as eye contact and lip sync, creating a lifelike and persuasive social presence. The study examined how perceived similarity, specifically gender matching between participants and AI agents, affected engagement. Results showed that male participants paid more visual attention to agents of the same gender, while female participants reported higher overall likability regardless of the agent's gender. The findings suggest that tailoring AI agents to user traits such as gender can enhance their persuasive impact.

The project leveraged the Vizard platform's integration with SightLab VR Pro to capture detailed behavioral data, including gaze tracking and real-time interaction metrics. These tools enabled researchers to explore how AI and VR can simulate authentic social dynamics and influence user attitudes and behavior.
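
Comparing gaze behavior between conditions, such as gender-matched versus mismatched agents, ultimately reduces to a between-group comparison on metrics like dwell time. A toy sketch, with made-up numbers that are not data from the MSU study, could look like this:

```python
# Toy sketch: comparing dwell time on the agent between gender-matched and
# mismatched participants. The numbers are invented for illustration and
# are not data from the Michigan State study.
from statistics import mean
from scipy.stats import ttest_ind

matched_dwell_s    = [14.2, 11.8, 16.5, 13.0, 15.1]  # seconds of gaze on the agent
mismatched_dwell_s = [9.7, 12.1, 8.9, 10.4, 11.0]

t_stat, p_value = ttest_ind(matched_dwell_s, mismatched_dwell_s)
print(f"matched mean:    {mean(matched_dwell_s):.1f} s")
print(f"mismatched mean: {mean(mismatched_dwell_s):.1f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```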

This work lays the foundation for future applications of socially responsive AI in VR. Its reach can span fields such as healthcare, education, and marketing, where intelligent agents can adapt their actions to engage and guide users more effectively. (Source)

WorldViz user research

Duke University: Neuroaesthetics and the VR Museum Experience

In partnership with the University of La Sapienza and the National Etruscan Museum of Rome, Duke University researchers used WorldViz SightLab VR and EEG to study how the brain responds to beauty and artistic expression in museum contexts. The project, called the NeuroArtiFact initiative, explored how attention, emotion, and perception shape the aesthetic experience.

While participants viewed ancient artifacts in immersive VR, SightLab captured their eye movements, fixations, and gaze intersections, alongside brainwave activity. Findings revealed that aesthetic judgment activates sensory-motor, emotional, and reward-related areas of the brain. Led by Maurizio Forte, a professor in the Department of Classical Studies, the research has sparked international collaboration, including workshops at the Panthéon-Sorbonne. It has also helped curators design cognitively engaging and emotionally resonant exhibits. (Source)
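
The gaze intersections mentioned here come from casting the eye's ray into the 3D scene and testing which object it hits. VR engines do this with built-in raycasts; the sketch below only illustrates the underlying idea with simple bounding spheres and invented artifact positions:

```python
# Hypothetical sketch: deciding which artifact a gaze ray intersects by
# testing it against bounding spheres. Engines like Vizard or Unity do this
# with built-in raycasts; positions and names here are invented.
import math

def gaze_hit(eye_pos, gaze_dir, objects):
    """Return the label of the nearest object whose bounding sphere the
    gaze ray passes through, or None if nothing is hit."""
    norm = math.sqrt(sum(d * d for d in gaze_dir))
    direction = [d / norm for d in gaze_dir]                  # normalize the gaze ray
    best_dist, best_label = math.inf, None
    for label, center, radius in objects:
        oc = [c - e for c, e in zip(center, eye_pos)]         # eye -> sphere center
        proj = sum(o * d for o, d in zip(oc, direction))      # distance along the ray
        if proj < 0:
            continue                                          # object is behind the viewer
        closest_sq = sum(o * o for o in oc) - proj * proj     # squared distance ray-center
        if closest_sq <= radius * radius and proj < best_dist:
            best_dist, best_label = proj, label
    return best_label

artifacts = [
    ("etruscan_vase", (0.0, 1.5, 3.0), 0.4),   # (label, center xyz in meters, radius)
    ("bronze_mirror", (1.2, 1.4, 2.5), 0.3),
]
print(gaze_hit((0.0, 1.6, 0.0), (0.0, -0.03, 1.0), artifacts))  # -> etruscan_vase
```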

Michigan State University: Message Reception and Retention in VR

In collaboration with multiple research institutions, Michigan State University uses VR and eye tracking to study how people process and retain messages in immersive environments. Built with WorldViz Vizard and SightLab VR Pro, the platform allows researchers to examine the entire causal chain of communication, from exposure to reception to memory, under realistic conditions.

Participants drive along a virtual highway or city while emotional or targeted billboard messages are visible. Eye-tracking data, collected through VR headsets, captures gaze patterns and attention shifts, even under distraction. The studies show that exposure is critical for message impact and that emotional content, attention demands, and environmental context influence retention.
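
One way to connect exposure to retention, as these studies do, is to first classify each participant as exposed or not based on whether their gaze ever dwelled on the billboard, then compare recall between the two groups. The sketch below uses invented numbers purely to illustrate the logic:

```python
# Toy sketch: splitting message recall by whether each participant's gaze
# ever dwelled on the billboard while it was visible ("exposure"). The
# values and threshold are invented; they are not data from these studies.

participants = [
    # (id, seconds of gaze on the billboard, recalled the message?)
    ("p01", 1.8, True),
    ("p02", 0.0, False),
    ("p03", 0.4, True),
    ("p04", 0.0, True),   # recalled without measurable exposure (guessing/noise)
    ("p05", 2.3, True),
]

EXPOSURE_THRESHOLD_S = 0.2   # minimum dwell time to count as "exposed"

exposed = [p for p in participants if p[1] >= EXPOSURE_THRESHOLD_S]
unexposed = [p for p in participants if p[1] < EXPOSURE_THRESHOLD_S]

def recall_rate(group):
    return sum(p[2] for p in group) / len(group) if group else 0.0

print(f"recall | exposed:   {recall_rate(exposed):.0%}")    # 100%
print(f"recall | unexposed: {recall_rate(unexposed):.0%}")  # 50%
```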

The research offers micro-level insight into message effectiveness, with implications for health, political, and commercial communication strategies. (Source)

WorldViz eye tracking for roadside sign research

Research Without the Red Tape

One of the most powerful aspects of this integration is its simplicity. Using SightLab's GUI builder, researchers with no programming experience can do the following (a rough sketch of what such a setup might produce behind the scenes appears after the list):

  • Build virtual environments
  • Create multi-step experiments
  • Assign object triggers and log files
  • Calibrate eye tracking in minutes
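
Behind a GUI builder like this, an experiment typically reduces to a structured definition that the engine executes. The sketch below imagines what such a declarative setup could look like; the keys, file names, and trigger syntax are invented and are not SightLab's actual format:

```python
# Purely illustrative sketch of a declarative experiment definition, i.e.
# the kind of structure a no-code GUI could generate behind the scenes.
# Keys, file names, and trigger syntax are invented, not SightLab's format.

experiment = {
    "environment": "gallery_scene.osgb",
    "calibration": {"points": 5, "validate": True},
    "trials": [
        {
            "name": "baseline",
            "duration_s": 60,
            "areas_of_interest": ["painting_left", "painting_right"],
            "log": ["gaze", "pupil_diameter", "head_6dof"],
        },
        {
            "name": "guided_tour",
            "duration_s": 180,
            "areas_of_interest": ["sculpture", "plaque"],
            "triggers": [
                {"when": "dwell > 2.0 s on plaque", "do": "play_audio(narration.wav)"},
            ],
            "log": ["gaze", "pupil_diameter", "events"],
        },
    ],
}

# A runner would iterate over the trials, apply the triggers, and write one
# log file per trial; the GUI spares researchers from editing this by hand.
for trial in experiment["trials"]:
    print(f"{trial['name']}: {trial['duration_s']} s, logging {trial['log']}")
```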

Studies that used to take months to deploy now take a few days—from IRB approval to the first data pull.

Onboarding is quick, too: undergraduate research assistants can learn the system in under a week. And although SightLab and Vizard run on their own engine, an available plugin option enables data collection, eye-tracking overlays, and visualization within external applications such as Unity and Unreal.

Adaptive Systems and Real-Time Feedback

Beyond static research, WorldViz and VIVE are helping researchers explore real-time, adaptive experiences. SightLab now includes early-stage AI analytics that allow VR simulations to adjust based on user behavior.

For example, if a trainee repeatedly overlooks a key object or area in a training setting, the system can slow down, offer hints, or trigger a verbal coaching module. In one military use case (not publicly disclosed), this form of “live insight” cut critical response errors by over 30%.
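
The gaze-aware escalation described above maps naturally onto a simple watchdog loop: if a key object has not been looked at within a time budget, highlight it, and if it is still missed, trigger coaching. The sketch below is a generic illustration with invented function names and timings, not the undisclosed system:

```python
# Generic sketch of a gaze watchdog for adaptive training: if a key object
# is not looked at within a time budget, highlight it; if it is still
# missed, trigger verbal coaching. Names and timings are invented.
import time

HINT_AFTER_S = 10.0    # highlight the object after this long without a look
COACH_AFTER_S = 20.0   # escalate to a verbal coaching prompt after this long

def watch_key_object(get_current_aoi, highlight, coach, key="defibrillator"):
    """Poll the current gaze target and escalate if `key` is never fixated."""
    start = time.monotonic()
    hinted = False
    while True:
        if get_current_aoi() == key:
            return "noticed"                    # trainee found it; stop escalating
        elapsed = time.monotonic() - start
        if not hinted and elapsed > HINT_AFTER_S:
            highlight(key)                      # e.g., pulse the object's outline
            hinted = True
        if elapsed > COACH_AFTER_S:
            coach(f"Check the {key}.")          # e.g., play a voice prompt
            return "coached"
        time.sleep(0.1)                         # poll at roughly 10 Hz

# Example wiring with dummy callbacks; a real study would read the AOI from
# the eye tracker and route hints through the VR engine.
looks = iter([None] * 30 + ["defibrillator"])
print(watch_key_object(lambda: next(looks, None),
                       highlight=lambda k: print("highlight", k),
                       coach=lambda msg: print("coach:", msg)))
```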

Adaptive interfaces based on user gaze or frustration signals are already being piloted in UX and design testing. Researchers are using VIVE Focus Vision to run untethered, mobile tests in pop-up labs and public venues.

End-User Impact: Data That Drives Change

Research is only valuable if it informs action—and that’s exactly what these teams are doing.

  • At Maastricht University, biometric stress indicators informed adaptive VR training modules that reduced diagnostic errors in emergency medicine.
  • The University of Oklahoma uses real-time gaze and brain activity data to tailor non-text-based learning systems for diverse student needs.
  • Michigan State University’s social influence study revealed how gender-matched AI avatars increase user engagement, guiding future AI agent design in healthcare and education.
  • In another Michigan State project, researchers tracked attention to emotional billboards in VR, providing insights that inform public messaging strategies and ad placement.
  • Duke University’s neuroaesthetics research is helping curators design museum exhibits that promote deeper emotional and cognitive engagement—guided by real eye and brain data.

This isn’t just interesting data: it’s data that shapes decisions, design, and policy.

Global Reach and a Growing Community

WorldViz reports that over 2,000 institutions have adopted its tools across 50+ countries. These include healthcare providers, military simulation units, architecture firms, and behavioral economics labs. HTC VIVE also supports a network of educational and enterprise partners through its VIVE Business division. This division includes grant programs, hackathons, and research showcases.

Their joint impact has helped accelerate immersive science in ways few expected even five years ago.

Looking Ahead: Gaze as Infrastructure

As eye tracking becomes standard in more XR headsets, it will stop being a feature and start becoming infrastructure. Just as clickstream data reshaped web analytics, gaze data will transform spatial computing and adaptive systems.

Future applications already underway include:

  • Live VR coaching systems
  • AI tutors that adjust instructions based on where you’re looking
  • Biometric-informed therapy for PTSD and phobias
  • Gaze-informed UI in surgical robotics

In all these use cases, one thing is clear: immersive research is not a niche. It’s the next standard.

Immersive Insight for the Real World

WorldViz and HTC VIVE have helped tear down the barriers to immersive research. By merging researcher-first software with industry-leading hardware, they’ve opened the door to new possibilities: now anyone who wants to can understand how people think, act, and decide—visually, emotionally, and behaviorally.

Whether you're studying human behavior, designing better products, or teaching future professionals, eye tracking in VR offers a deep and accessible lens.

Take the next step. Learn how VIVE Business and WorldViz can bring immersive insight to your lab, training center, or design team.

Explore more: