Much of my work in machine learning and computer vision has targeted autonomous driving.
To navigate an environment safely and autonomously, an agent needs an array of sensors to perceive its surroundings. Each sensor's readings must be fused with the others', and doing so requires accurately knowing the relative pose of each sensor. Critically, as agents move at higher speeds and sensors are placed farther apart, the effects of vibration and calibration misalignment become more pronounced — this is especially true for 3D sensing systems like stereo cameras. Our work produced the first realtime per-frame autocalibration of sensor extrinsics, opening the door to a class of depth estimation that was previously not possible: wide-baseline stereo vision.
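To see why small extrinsic errors matter more at range, consider a simplified planar sketch (the 0.1° yaw error and the ranges below are illustrative assumptions, not values from our system): a fixed angular miscalibration between two sensor frames displaces fused points by an amount that grows linearly with distance.

```python
import math

def transform(point, yaw, translation):
    """Apply a planar rigid-body transform: rotate by yaw about z, then translate."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    return (c * x - s * y + tx, s * x + c * y + ty, z + tz)

# A hypothetical 0.1-degree yaw miscalibration between two sensor frames.
yaw_err = math.radians(0.1)
for rng in (10.0, 100.0):
    true_pt = (rng, 0.0, 0.0)          # a point straight ahead at this range
    miscal = transform(true_pt, yaw_err, (0.0, 0.0, 0.0))
    offset = math.dist(true_pt, miscal)  # displacement of the fused point
    print(f"range {rng:5.0f} m -> fusion error {offset:.3f} m")
```

At ten times the range, the same angular error produces roughly ten times the fusion error, which is why per-frame recalibration becomes essential for fast-moving platforms and widely separated sensors.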
Depth estimation can be performed with a variety of modalities, for example time-of-flight sensing. In my work, I have focused on depth perception from camera images using both monocular and stereo sensing. In the former, I have explored ways to enforce temporally consistent measurements in learned monocular models; in the latter, our work has demonstrated detection of objects the size of road debris at distances of hundreds of meters.
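The value of a wide baseline for long-range detection follows from the standard stereo geometry: depth is Z = f·B/d (focal length f in pixels, baseline B, disparity d), so for a fixed disparity matching error the depth uncertainty grows as Z²/(f·B). A short sketch, with the focal length and subpixel matching error chosen purely for illustration:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, depth_m, disparity_err_px=0.25):
    """First-order depth uncertainty: dZ ~ Z^2 / (f * B) * dd.
    Error grows quadratically with depth, but shrinks as the baseline widens."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_err_px

f = 2000.0  # assumed focal length in pixels
for B in (0.3, 1.5):  # a narrow vs. a wide baseline, in meters
    print(f"B = {B} m: depth error ~{depth_error(f, B, 200.0):.2f} m at 200 m")
```

Widening the baseline from 0.3 m to 1.5 m cuts the depth uncertainty at 200 m by a factor of five under these assumptions, which is what makes debris-scale detection at such distances plausible once the calibration problem is solved.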
Neural methods are still in the early stages of adoption for environmental modeling on Earth and other celestial bodies, but they show great promise.
In this project, we propose highly expressive spatiotemporal neural networks fit to the task of forecasting Arctic sea ice concentration at seasonal lead times. Compared to existing dynamical, statistical, and learned models, the resulting models improve our understanding of which regions of the Arctic most strongly influence overall sea ice health.