EAGER: Viewpoint Tracking via Acceleration Stabilized with Computer Vision

  • Brooks, Jr, Frederick F.P. (PI)

Project Details

Description

This project addresses a problem that has kept virtual reality from widespread use. Some 15 years ago, high-capability graphics cards in PCs began reducing the cost of computing for virtual environments from hundreds of thousands of dollars to (today) hundreds of dollars. Low-cost head-mounted displays have just appeared. A similar advance in viewpoint tracking has not occurred; accurate, low-latency, wide-area viewpoint tracking remains very costly. Virtual reality demands stereo rendering at 60 frames per second per eye and system latencies below 50 ms. This research is developing a novel system that provides accurate, low-latency viewpoint tracking meeting these requirements with consumer-cost components. The research builds on a recently demonstrated proof-of-concept system. A standard RGB-depth camera sits on the user's head, and pose is calculated by matching its images against an environment model. A Kalman filter integrates rotational velocity and linear acceleration from an inexpensive high-speed inertial measurement unit (IMU) to update the pose estimate many times between camera frames. This not only gives low-latency pose readings; it also improves the initial values for the next camera-based calculation. The depth images and reconstruction software are concurrently used to incrementally build and update the depth model of the environment used for camera matching.
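The predict/correct structure described above can be illustrated with a minimal sketch: many IMU-driven prediction steps between camera frames, with each camera-derived pose applied as a measurement update. This is not the project's actual code; the constant-velocity state model, rates, noise values, and names (PoseFilter, predict, correct) are assumptions chosen for clarity.

```python
# Minimal sketch of IMU prediction between camera frames with a
# camera-pose correction (illustrative assumptions throughout).
import numpy as np

IMU_RATE_HZ = 1000      # assumed high-speed IMU rate
CAMERA_RATE_HZ = 30     # assumed RGB-D camera frame rate

class PoseFilter:
    """Kalman filter over 3-D position and velocity; orientation would be
    integrated from gyro rates separately and is omitted for brevity."""

    def __init__(self):
        self.x = np.zeros(6)                 # [position, velocity]
        self.P = np.eye(6) * 1e-2            # state covariance
        self.q_accel = 0.5                   # accel noise (m/s^2), assumed
        self.r_camera = 0.01                 # camera pose noise (m), assumed

    def predict(self, accel_world, dt):
        """IMU step: integrate linear acceleration (assumed already rotated
        into the world frame and gravity-compensated)."""
        F = np.eye(6)
        F[0:3, 3:6] = np.eye(3) * dt
        B = np.vstack([np.eye(3) * 0.5 * dt**2, np.eye(3) * dt])
        self.x = F @ self.x + B @ accel_world
        self.P = F @ self.P @ F.T + B @ B.T * self.q_accel**2

    def correct(self, camera_position):
        """Camera step: position obtained by matching the depth image
        against the environment model is used as the measurement."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = np.eye(3) * self.r_camera**2
        K = self.P @ H.T @ np.linalg.inv(H @ self.P @ H.T + R)
        self.x = self.x + K @ (camera_position - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

# Usage: many IMU predictions per camera frame, one correction per frame.
f = PoseFilter()
for step in range(IMU_RATE_HZ):
    f.predict(accel_world=np.array([0.0, 0.1, 0.0]), dt=1.0 / IMU_RATE_HZ)
    if step % (IMU_RATE_HZ // CAMERA_RATE_HZ) == 0:
        f.correct(camera_position=f.x[:3])   # stand-in for the vision pose
print("latest low-latency position estimate:", f.x[:3])
```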

The current research is demonstrating the system's potential. For the system to work completely successfully, both conceptual and algorithmic advances are needed, and several are in process. IMU calibration is being improved: temperature and dynamic bias must be compensated in the calibration to improve estimation and reduce jitter. The use of multiple cameras to reduce overall noise and handle difficult cases (such as blank walls) is being addressed with new algorithms and evaluated. The merging of new and modeled data is computationally expensive; the feasibility demo uses two GPUs, one for rendering and one for tracking, and ways are being invented to do both on one. Additional future research includes tracking dynamic objects and incorporating object recognition (e.g., a desk or chairs) to improve estimates. Widespread access to virtual reality may well open new, unexpected creative uses of the technology. The research is advancing the proof-of-concept system toward one that can make this exciting leap.
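As a hedged illustration of the kind of IMU calibration mentioned above, the sketch below applies a simple per-axis, temperature-dependent gyro bias correction before integration. The linear bias model, the coefficients, and the function name are assumptions made for this example, not the project's calibration procedure.

```python
# Temperature-dependent gyro bias compensation (illustrative only).
import numpy as np

# Assumed calibration: bias(T) = bias_ref + slope * (T - T_ref), per axis.
T_REF_C = 25.0
GYRO_BIAS_REF = np.array([0.002, -0.001, 0.0005])   # rad/s at T_REF_C (assumed)
GYRO_BIAS_SLOPE = np.array([1e-5, -2e-5, 5e-6])     # rad/s per deg C (assumed)

def compensate_gyro(raw_rate, temperature_c):
    """Subtract the temperature-dependent bias estimate from a raw gyro sample."""
    bias = GYRO_BIAS_REF + GYRO_BIAS_SLOPE * (temperature_c - T_REF_C)
    return raw_rate - bias

# Example: compensate one sample read at 34 deg C.
raw = np.array([0.010, 0.003, -0.002])               # rad/s
print("compensated rate:", compensate_gyro(raw, 34.0))
```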

Status: Finished
Effective start/end date: 15/8/13 – 31/7/14

Funding

  • National Science Foundation: US$75,000.00

ASJC Scopus Subject Areas

  • Computer Vision and Pattern Recognition
  • Computer Science (all)
