Excellence in Research: Predicting computations that lead to a stable perception of object lightness under spectral variabilities of the visual scene

  • Singh, Vijay V. (PI)

Project Details

Description

The human visual system allows people to see objects in a scene as unchanging, despite variation in the light reaching the eyes. For example, consider the task of estimating the lightness of an object (i.e., the fraction of light reflected by the object's surface). Properties of the object (e.g., its color) will change the fraction of light it reflects and hence the amount of light available to the eyes. However, the light available to the eyes also depends on several other aspects of the scene, such as the intensity of the light source and how background objects reflect the light. Despite all this variation, the human visual system allows people to stably perceive an object's lightness. This project aims to identify the mental computations that lead to stable perception of object lightness. Lightness perception is just one example of the large variety of scene-invariant perception problems that the human brain solves; thus, an understanding of these mental computations could open the door to a better grasp of other brain functions as well. This interdisciplinary project will establish a color perception lab at a Historically Black College and University (HBCU), which will train students from under-represented communities in valuable job-oriented analytical, computational, and experimental skills that are in high demand in academia and industry.

To identify these computations, the investigator proposes to combine computational methods (mathematical models that mimic the behavior of complex systems) with experiments involving human participants, studying how changes in scene properties affect people's perception of lightness. Human participants will view computer-generated pictures of natural-looking scenes on specially calibrated equipment and will be asked to discriminate between two objects based on their lightness. For instance, a participant may see back-to-back images of the same scene and be asked to indicate in which of the two scenes a target object (e.g., a spherical object) is lighter. The images will be generated using the investigator's custom-built software pipeline; each image will be a two-dimensional rendering of a three-dimensional scene. This software can generate large databases of images with precise control over the properties of the scene, such as the position and color of the objects and the intensity and color of the light sources. During the experiment, the investigator will vary the intensity of the light sources and the color of the background objects and measure human participants' ability to distinguish the lightness of the target objects. A computational model will then be created and compared with the performance of the human observers. The computational learning methods will be designed to perform the same task as the human participants. If the model performs in a similar manner to the humans, this will suggest that the underlying mathematical computations of the model are similar to the mental computations of the human observers. Taken together, the experiments and model will provide a rich description of how human beings perceive the lightness of objects under varying conditions and will provide insights into the possible biological mechanisms involved in stable lightness perception.
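The paragraph above describes a two-alternative forced-choice design in which a model observer performs the same lightness-discrimination task as the human participants. The sketch below illustrates the general idea in miniature, not the project's actual rendering pipeline or learning method: the light reaching the eye is modeled as reflectance times illumination plus noise, the illumination varies randomly from trial to trial, and a trivial observer picks the patch with the larger response. All function names, the noise model, and the parameter ranges are illustrative assumptions.

```python
import random


def render_patch(reflectance, illumination, noise_sd=0.02, rng=random):
    """Simulate the light reaching the eye from a target patch.

    The signal is reflectance * illumination, perturbed by Gaussian
    sensor noise. This stands in for the project's 3-D rendering
    pipeline, which it does not reproduce.
    """
    signal = reflectance * illumination
    return max(0.0, signal + rng.gauss(0.0, noise_sd))


def observer_choice(patch_a, patch_b):
    """A trivial model observer: report the patch with the larger response."""
    return 0 if patch_a > patch_b else 1


def proportion_correct(delta, n_trials=2000, seed=1):
    """Fraction of trials on which the observer correctly picks the
    lighter target, with light-source intensity varied per trial.

    A good lightness-constant observer stays accurate even though the
    raw light from a fixed-reflectance patch changes with illumination.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        illum = rng.uniform(0.8, 1.2)  # per-trial light-source intensity
        a = render_patch(0.5, illum, rng=rng)          # standard target
        b = render_patch(0.5 + delta, illum, rng=rng)  # comparison target
        if observer_choice(a, b) == 1:
            correct += 1
    return correct / n_trials
```

Sweeping `delta` from 0 upward traces out a psychometric function for the model observer; comparing such curves against human discrimination thresholds under the same scene manipulations is the kind of model-versus-human comparison the abstract describes.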

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Effective start/end date: 1/8/21 to 31/7/24

Funding

  • National Science Foundation: US$373,588.00

ASJC Scopus Subject Areas

  • Computational Mathematics
  • Behavioral Neuroscience
  • Cognitive Neuroscience
