by Gooly (Li Yang Ku)
Recently, lightweight camera-equipped quadrotors like the AR Drone have become cheap and interesting enough to be used in research, even by students on a tight budget. By combining vision algorithms with a quadrotor you can really stretch your imagination, even to robot fishing.
SLAM (Simultaneous Localization and Mapping) has been a pretty hot topic in recent years. Since a detailed map is often not available to a robot, it would be much preferred if the robot could navigate an unknown environment and build the map itself, like we humans do. Previously, most SLAM research used mobile robots with a laser scanner, which builds a decent map by combining laser scans with dead reckoning. However, a laser scanner weighs quite a lot and is not ideal to mount on a lightweight aerial vehicle. This is where visual SLAM comes in: visual SLAM is the name for all SLAM techniques that rely mainly on visual input, mostly from a single monocular camera.
One of the early works in visual SLAM is MonoSLAM, which was done at Oxford. The video above shows how a 3D map is built by estimating the relative positions of features using only a 2D camera. PTAM further expands this concept and splits the work into two parallel threads, one for tracking the camera pose and one for building the map, to improve performance. The authors provide open-source code on their website, and it has been widely tested. The code was further ported to ROS by ETH Zurich and tested on several UAVs in the European SFly project.
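To make the two-thread idea concrete, here is a minimal sketch (not PTAM's actual code) of how tracking and mapping can be decoupled: a fast tracking loop estimates the camera pose for every frame, while a slower mapping loop refines the map from selected keyframes. The frame source, pose estimation, and map refinement below are stubbed placeholders.

```python
import threading
import queue
import time

keyframes = queue.Queue()      # keyframes handed from tracking to mapping
world_map = {"points": []}     # shared map; mapping refines it in the background
map_lock = threading.Lock()

def track():
    """Fast loop: estimate the camera pose for every incoming frame."""
    for frame_id in range(100):        # stand-in for a real camera stream
        with map_lock:
            n_points = len(world_map["points"])
        pose = (frame_id, n_points)    # placeholder for real pose estimation
        if frame_id % 10 == 0:         # promote some frames to keyframes
            keyframes.put(frame_id)
        time.sleep(0.01)               # ~100 Hz tracking
    keyframes.put(None)                # signal the mapping thread to stop

def build_map():
    """Slow loop: extend and refine the map from keyframes
    (in PTAM this is where bundle adjustment runs)."""
    while True:
        kf = keyframes.get()
        if kf is None:
            break
        time.sleep(0.1)                # stand-in for expensive optimization
        with map_lock:
            world_map["points"].append(kf)

mapper = threading.Thread(target=build_map)
mapper.start()
track()
mapper.join()
print("map built from %d keyframes" % len(world_map["points"]))
```

The key design choice is that the expensive map optimization never blocks the per-frame tracking loop, which is what lets PTAM run in real time on modest hardware.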
ROS also has a quadrotor simulator package where you can test your algorithms before crashing your real quadrotor.
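As a rough illustration, here is a minimal rospy node (a sketch, not tied to any specific simulator package) that commands a simulated quadrotor to climb for a few seconds. The /cmd_vel topic and the geometry_msgs/Twist message are common ROS conventions, but the exact topic name is an assumption and depends on the simulator you pick.

```python
#!/usr/bin/env python
# Sketch: send velocity commands to a simulated quadrotor in ROS.
# Assumes the simulator listens on /cmd_vel (geometry_msgs/Twist),
# a common convention; your simulator's topic name may differ.
import rospy
from geometry_msgs.msg import Twist

def hover_and_climb():
    rospy.init_node("quadrotor_test")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)              # publish at 10 Hz
    cmd = Twist()
    cmd.linear.z = 0.5                 # climb at 0.5 m/s
    start = rospy.Time.now()
    while not rospy.is_shutdown():
        if (rospy.Time.now() - start).to_sec() > 3.0:
            cmd.linear.z = 0.0         # stop climbing after 3 seconds
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    try:
        hover_and_climb()
    except rospy.ROSInterruptException:
        pass
```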
All the publications and websites about quadrotors that you should know are organized here.