Collaborative Research: HCC: Medium: Deep Learning-Based Tracking of Eyes and Lens Shape from Purkinje Images for Holographic Augmented Reality Glasses

  • Fuchs, Henry H. (Principal Investigator)
  • Niethammer, Marc M. (Co-PI)

Project details

Description

This project seeks to develop head-worn Augmented Reality (AR) systems that look and feel like ordinary prescription eyeglasses, can be worn comfortably all day, and offer a field of view as wide as that of today's eyewear. Such future AR glasses will enable vast new capabilities for individuals and groups, integrating computer assistance as 3D enhancements within the user's surroundings. For example, a wearer of such AR glasses will see remote individuals around them as naturally as they now see and interact with nearby real individuals. Virtual personal assistants such as Alexa and Siri may become 3D-embodied within these AR glasses and situationally aware, guiding the wearer around a new airport or coaching the user through customized physical exercise. This project aims to advance two crucial, synergistic parts of such AR glasses: 1) the see-through display itself and 2) the 3D eye-tracking subsystem. The see-through display must be very compact and still offer a wide field of view. To meet these requirements, the project uses true holographic image generation and improves the algorithms that generate these holograms by a) concentrating higher image quality in the direction and at the distance of the user's current gaze, and b) algorithmically steering the 'eye box' (the precise location where the eye must be to observe the image) to the current location of the eye's pupil opening. In current holographic displays, this viewing eye box is typically less than 1 cubic millimeter, far too small for a practical head-worn system. A practical system may therefore need both a precise eye-tracking system that locates the pupil opening and a display system that algorithmically steers the holographic image to be viewable at that precise location. The 3D eye-tracking system also seeks to determine the direction of the user's gaze and the distance of the point of gaze from the eye (whether near or far), so that the display system can optimize the generated holographic image for the precise focus of attention. The proposed AR display can render images at variable focal lengths, so it could serve people with visual accommodation issues, allowing them to participate in AR-supported education and training programs. The device could also find uses in medicine (such as better understanding of the human visual system) and training.
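
As a rough illustration of the eye-box steering idea described above (a sketch, not the project's actual algorithm), consider a Fourier-type holographic near-eye display in which the eye box sits at the Fourier plane of a phase-only spatial light modulator (SLM). Adding a linear phase ramp to the hologram then shifts the eye box laterally by an amount proportional to the ramp's spatial frequency. All parameter values below are assumptions chosen for illustration.

```python
import numpy as np

# --- Minimal sketch: steering a holographic eye box with a phase ramp. ---
# Assumptions (illustrative, not from the award text): a phase-only SLM,
# a Fourier-plane eye box, and a tracked pupil offset (dx, dy) in meters.

wavelength = 532e-9    # green laser wavelength, meters (assumed)
focal_length = 0.025   # eyepiece focal length, meters (assumed)
pitch = 8e-6           # SLM pixel pitch, meters (assumed)
n = 1024               # SLM resolution (n x n)

def steer_hologram(phase, dx, dy):
    """Add a linear phase ramp that shifts the eye box by (dx, dy).

    A lateral shift of dx at the Fourier (pupil) plane corresponds to a
    ramp of spatial frequency fx = dx / (wavelength * focal_length).
    """
    fx = dx / (wavelength * focal_length)
    fy = dy / (wavelength * focal_length)
    y, x = np.mgrid[0:n, 0:n] * pitch            # SLM plane coordinates
    ramp = 2 * np.pi * (fx * x + fy * y)
    return np.mod(phase + ramp, 2 * np.pi)       # wrap phase to [0, 2*pi)

# Example: re-center the eye box on a pupil tracked 0.5 mm right, 0.2 mm up.
# In a real system, the eye tracker would update (dx, dy) every frame.
base_phase = np.random.uniform(0, 2 * np.pi, (n, n))  # placeholder hologram
steered = steer_hologram(base_phase, dx=0.5e-3, dy=0.2e-3)
```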

The two branches of this project, the holographic display and the 3D eye tracker, are closely linked, and each improves the other. The 3D eye tracker utilizes an enriched set of signals and sensors (multiple cameras per eye and a multiplicity of infrared (IR) LEDs), from which the system extracts multiple tracking parameters in real time: the horizontal and vertical gaze angles, the distance accommodation, and the 3D position and size of the pupil's opening. The distance accommodation is extracted by analyzing Purkinje reflections of the IR LEDs from the multiple layers of the eye's cornea and lens. A neural network extracts the aforementioned 3D tracking results from the multiple sensors after being trained on a large body of ground-truth data. The training data is generated from multiple human subjects who are exposed, instantaneously, to known patterns on external displays at a range of distances and angles from the eye. Simultaneously with these instantaneous patterns, the subject is also shown images from the near-eye holographic image generator, whose eye box location and size have been previously optically calibrated. One part of each pattern is shown, instantaneously, on an external display and the other part, at the same instant, on the holographic display. The subject can answer a challenge question correctly only if they have observed both displays simultaneously, which can occur only if the eye is at a precise 3D location and at a precise, known gaze angle. The eye tracker will be further improved by integrating its training and calibration with the high-precision (but very bulky) BinoScopic tracker at UC Berkeley, which tracks using precise maps of the user's retina. The holographic image generator uses the real-time data from the 3D eye tracker to generate holograms whose highest image quality lies at the part of the image that currently falls on the viewer's fovea, and at the distance to which the user is currently accommodated. The image quality is improved by a trained neural network whose inputs are images from a camera placed, during training, at the position of the viewer's eye.
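
To make the tracking pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of regressor described above: it fuses several IR camera views of one eye, each showing Purkinje reflections (P1 from the outer cornea through P4 from the back surface of the lens; P3 and P4 shift as the lens changes shape with accommodation), and outputs the tracking parameters named in the paragraph. The architecture, camera count, and image sizes are assumptions for illustration, not the project's published design.

```python
import torch
import torch.nn as nn

class PurkinjeGazeNet(nn.Module):
    """Illustrative multi-camera regressor (hypothetical architecture).

    Inputs: a stack of grayscale IR camera views of one eye, each showing
    Purkinje reflections of the IR LEDs.
    Outputs: [gaze_yaw, gaze_pitch, accommodation_diopters,
              pupil_x, pupil_y, pupil_z, pupil_diameter]  -> 7 values.
    """

    def __init__(self, num_cameras: int = 3):
        super().__init__()
        # Shared per-camera feature extractor (weights reused across views).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse the per-camera features and regress the 7 tracking parameters.
        self.head = nn.Sequential(
            nn.Linear(64 * num_cameras, 128), nn.ReLU(),
            nn.Linear(128, 7),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_cameras, H, W) grayscale IR images.
        b, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * c, 1, h, w))  # (b*c, 64, 1, 1)
        feats = feats.reshape(b, c * 64)
        return self.head(feats)

# Ground truth would come from the dual-display challenge described above:
# a correct answer implies a known 3D eye position and gaze angle.
model = PurkinjeGazeNet(num_cameras=3)
pred = model(torch.randn(8, 3, 120, 160))  # batch of 8 synthetic IR frames
assert pred.shape == (8, 7)
```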

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Start date/End date: 1/10/21 to 30/9/24

Funding

  • National Science Foundation: USD 974,978.00

ASJC Scopus Subject Areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Computer Science (all)

Fingerprint

Explore the research topics addressed by this project. These labels are generated based on the underlying awards/grants. Together, they form a unique fingerprint.