Our research group has been active in modeling urban noise. Above are the results from a noise model that we developed at 50 m resolution for Manhattan, New York City, NY. PhD student Eunice Lee did the modeling and field validation.
In our new Vx Lab — a “smart” health and wellness living laboratory (SWELL) — we are using large-screen displays and interactive technologies to let people explore these modeled environments. This enables us to conduct “virtual exposure” studies, in which subjects interact with distant environments while we gauge their reactions to measured and modeled data and relate those reactions to visual cues gathered from street-level 3D imagery such as Google Streetview or Earthmine data.
What types of visual cues? Below are four 3D scenes from different parts of New York City that illustrate the streetscape differences: Central Park, Chinatown, the Financial District, and Harlem. In our lab these scenes are viewed with 3D glasses and can be navigated in 3D.
Comparisons of objective and perceived data are being explored in our Vx Lab. Moreover, these laboratory studies can be extended to real-world perception data sets. As an example, below is a comparison of our modeled noise levels against an indicator of perceived noise: noise complaint data from the New York City Public Health Department.
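As a minimal sketch of how such a modeled-versus-perceived comparison might be run, the snippet below computes a Spearman rank correlation between modeled noise levels and complaint counts. It assumes both have already been aggregated to the same spatial units (e.g., 50 m grid cells); the numbers shown are made up for illustration and are not from the actual Manhattan model or complaint records.

```python
# Hypothetical sketch: correlating modeled noise levels with noise-complaint
# counts aggregated to the same grid cells. All values below are illustrative.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank(values):
    """Ranks (1-based), with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rank(xs), rank(ys))

# Illustrative grid cells: modeled noise level (dBA) and complaint count.
modeled_db = [55.0, 62.5, 70.1, 74.3, 68.0, 58.2]
complaints = [1, 3, 8, 12, 6, 2]

rho = spearman(modeled_db, complaints)
print(f"Spearman rho = {rho:.2f}")
```

A rank-based measure is used here because complaint counts are an indirect, skewed indicator of perceived noise, so a monotonic association is a more defensible claim than a linear one. In a real analysis the aggregation step (geocoding complaints to grid cells, normalizing by population) would dominate the work.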