Our Milky Way Galaxy comprises roughly 200 billion stars. These stars trace Galactic structure and serve as records of the Galaxy's formation history. Studying them, however, is challenging: from images of the sky alone we cannot tell, for example, how old a star is or how far away it is (see SED Fitting). The task is made harder still because the Galaxy is also filled with cosmic dust, which blocks and "reddens" the light from many of these stars. Studying Galactic structure with modern astronomical datasets thus requires simultaneously modeling both the stars and the dust throughout the Galaxy.
I work on creating 3-D dust maps (and star maps) of the Galaxy using data from hundreds of millions of stars and theoretical stellar models.
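The effect of dust described above can be sketched in a few lines. All numbers here (the sightline's color excess, the star's intrinsic magnitudes) are illustrative assumptions, not measured quantities; only the relation between extinction, color excess, and R_V is standard.

```python
# Sketch of how dust extinction alters observed magnitudes.
# Assumed inputs: E(B-V) = 0.5 for this sightline, R_V = 3.1 (a typical
# Milky Way value), and intrinsic magnitudes B0, V0.

def observed_mag(intrinsic_mag, extinction):
    """Dust makes a star appear fainter: magnitudes increase by A."""
    return intrinsic_mag + extinction

R_V = 3.1          # ratio of total-to-selective extinction, A_V / E(B-V)
E_BV = 0.5         # color excess E(B-V) along this sightline (assumed)
A_V = R_V * E_BV   # extinction in the V band
A_B = A_V + E_BV   # extinction in the B band, since E(B-V) = A_B - A_V

B0, V0 = 15.2, 15.0   # intrinsic apparent magnitudes (assumed)
B = observed_mag(B0, A_B)
V = observed_mag(V0, A_V)

# The star is both fainter and redder: its B-V color grows by E(B-V),
# from 0.2 intrinsically to 0.7 as observed.
print(round(B - V, 2))
```

This is the degeneracy at the heart of dust mapping: a star can look faint and red either because it is intrinsically cool or because it sits behind a lot of dust, which is why stars and dust must be modeled jointly.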
Deriving the underlying physical properties of an observed astronomical object requires sophisticated modeling of its spectral energy distribution (SED). When modeling stars and galaxies, astronomers most often fit synthetic model spectra, derived from stellar evolution models, to either an observed spectrum or a collection of broadband photometry. By fitting spectra generated from a variety of underlying physical parameters (age, metallicity, dust, etc.), we can infer constraints on those parameters from the observed data.
Current SED fitting methods have had difficulty keeping pace with the increasing quantity and quality of data from large surveys. I work on bridging this gap by improving the underlying models, tools, and statistical methods used in SED fitting with large datasets.
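The grid-fitting idea above can be sketched as a chi-square comparison between observed fluxes and a grid of model SEDs. The grid entries and flux values below are toy assumptions; a real fit would use synthetic photometry computed from stellar evolution models over many more parameters.

```python
# Minimal sketch of SED fitting: pick the model on a (age, metallicity) grid
# whose broadband fluxes best match the data under a chi-square statistic.
# All fluxes and grid points are illustrative assumptions.
import numpy as np

# Toy model grid: broadband fluxes in three bands per parameter combination
model_grid = {
    ("1 Gyr",  "solar"):     np.array([1.0, 0.8, 0.5]),
    ("5 Gyr",  "solar"):     np.array([0.7, 0.9, 0.8]),
    ("10 Gyr", "sub-solar"): np.array([0.5, 0.7, 1.0]),
}

# Observed fluxes and 1-sigma uncertainties in the same bands (assumed)
obs = np.array([0.72, 0.88, 0.79])
err = np.array([0.05, 0.05, 0.05])

def chi2(model):
    """Sum of squared, uncertainty-weighted residuals."""
    return float(np.sum(((obs - model) / err) ** 2))

best_params = min(model_grid, key=lambda p: chi2(model_grid[p]))
print(best_params)
```

In practice the scaling problem mentioned below comes from exactly this structure: the grid grows combinatorially with each added parameter, and the comparison must be repeated for every one of hundreds of millions of sources.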
Most of science involves using data to test, constrain, and/or rule out models that represent our current understanding of how things work. These models can be complex, depending on many parameters and requiring substantial computational effort to generate (see SED Fitting). Furthermore, the data we compare them to can be noisy, with large uncertainties and unknown systematics. This can make it difficult to compare our models to data within the context of Bayesian inference.
I work on developing, improving, and implementing techniques for performing inference with complex models and large datasets.
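When a posterior has no closed form, a standard workhorse is Markov chain Monte Carlo. Below is a minimal Metropolis-Hastings sketch under toy assumptions (a flat prior, a Gaussian likelihood, and made-up measurements); it is meant only to show the accept/reject mechanic, not any production sampler.

```python
# Minimal Metropolis-Hastings sampler for the mean of noisy measurements.
# The data, noise level, step size, and chain length are all assumed
# for illustration.
import math
import random

random.seed(42)

data = [4.8, 5.1, 5.3, 4.9, 5.0]   # toy measurements (assumed)
sigma = 0.2                        # assumed measurement uncertainty

def log_posterior(mu):
    # Flat prior on mu, so the log posterior is the Gaussian log likelihood
    # (up to an additive constant, which cancels in the acceptance ratio).
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

mu = 0.0            # deliberately poor starting point
samples = []
for _ in range(20000):
    proposal = mu + random.gauss(0.0, 0.5)   # symmetric random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

burned = samples[5000:]            # discard burn-in
posterior_mean = sum(burned) / len(burned)
print(round(posterior_mean, 2))    # should sit near the sample mean, 5.02
```

The difficulty alluded to above is that each `log_posterior` call may itself require generating an expensive model, which is what motivates faster samplers and cheaper model evaluations.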
Before we can estimate many of the intrinsic properties of a galaxy (luminosity, stellar mass, star formation rate, etc.), we first need to know how far away it is. This is most often derived from a galaxy's measured redshift (z), which we can convert to a physical distance given our current understanding of cosmology. While deriving redshifts from spectroscopy is straightforward, obtaining good spectra is expensive and time-consuming. To study the evolution of the hundreds of millions of galaxies collected in modern surveys, astronomers instead rely on "photometric redshifts" (photo-z's) derived solely from multi-band imaging data. Obtaining accurate photometric redshifts is crucial for ongoing and future wide-field surveys (e.g., HSC, DES, KiDS, Euclid, LSST, WFIRST), which depend heavily on them to do science.
I work on developing quick yet robust probabilistic approaches for inferring photo-z's based on pre-existing spectroscopic/photometric datasets.
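The redshift-to-distance conversion mentioned above can be sketched for a flat LCDM cosmology. The cosmological parameters here (H0 = 70 km/s/Mpc, Omega_m = 0.3) are assumed round values for illustration, and the integral is done with a simple trapezoidal rule rather than a dedicated cosmology library.

```python
# Sketch of converting a redshift z into a comoving distance for a flat
# LCDM cosmology. Parameter values are assumed for illustration.
import math

C_KM_S = 299792.458   # speed of light in km/s
H0 = 70.0             # Hubble constant in km/s/Mpc (assumed)
OMEGA_M = 0.3         # matter density (assumed); flatness => Omega_Lambda = 0.7

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat LCDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def comoving_distance(z, n_steps=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), via the trapezoidal rule (Mpc)."""
    dz = z / n_steps
    integral = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n_steps):
        integral += 1.0 / E(i * dz)
    return (C_KM_S / H0) * integral * dz

# A galaxy at z = 1 sits at a comoving distance of roughly 3300 Mpc
# for these assumed parameters.
print(round(comoving_distance(1.0)))
```

The hard part of photo-z work is not this conversion but the previous step: turning a handful of broadband fluxes into a full probability distribution over z, which is where probabilistic methods trained on existing spectroscopic samples come in.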