The NASA Vision Workbench
Starting in 2006 I spent more and more of my time diving into robot vision and image processing, mostly with an eye towards vision-based control of autonomous aircraft. At about the same time an old friend of mine from MIT, Michael Broxton, was getting settled in the robotics group and was also working on image processing. I'd heard that they were gearing up to consolidate their robot vision software into a common codebase, which they'd begun referring to as a “vision workbench”. I offered a trade: I’d help them develop their library if they’d let me use it for my own projects too.
We’d originally intended to pick an existing C++ image library to use as the foundation, and after evaluating lots of them we settled on VXL as the best candidate. Once we started using it, though, we pretty quickly realized that, like all the other libraries, it had been designed at a time when the C++ language was still immature, and so it was not nearly as powerful as it could have been. Michael and I dove underground for a number of months and emerged with a brand-new image processing system, with a design that we were much happier with. Thus was the NASA Vision Workbench born.
The most distinctive features of the Vision Workbench are its lazy computation model, its highly scalable support for block-level operations on images, and its intuitive end-user syntax. It first started to catch on around IRG after I demonstrated how easy it was for me to write the first multi-band image blending system for the original GigaPan panorama stitching software. The other major success, which continues to be our flagship application, is the Ames Stereo Pipeline, our stereo correlation and 3D reconstruction software suite. Shortly after the core Vision Workbench code solidified, we arranged to release the library as open source under the NASA Open Source Agreement license.
The photo of the rover cameras above, taken by Michael at one of the IRG field tests, has become the de facto logo for the Vision Workbench. The image on the left shows a handful of basic image processing operations applied to a simple source image, a picture taken by my friend Heather of a mural in the Mission in San Francisco.