Abstract:
Autonomous Drone Racing (ADR) has recently become a moonshot milestone for many roboticists. It is a challenging problem that drives the development of vision-based perception and navigation algorithms capable of fast, agile maneuvers under constrained onboard computing resources and the imperfect sensing of Unmanned Aerial Vehicles (UAVs). Under such constraints, the traditional map-localize-plan navigation pipeline is infeasible. However, mapless navigation algorithms based on machine learning are showing promising results. In this thesis, we present a vision-based navigation approach for quadrotors in an autonomous drone racing setting. We propose using short-trajectory segments as control commands, inferred directly from a deep-learning model and tracked by a high-level controller in a receding-horizon fashion. The direct use of short-trajectory segments eliminates the need for a separate path-planning module, reducing the overall system latency and, as a result, allowing higher flight speeds. Furthermore, short-trajectory segments permit the use of deeper neural network models thanks to their relaxed update rate, in contrast to low-level commands, such as thrust and body angular rates, which require high, fixed update rates. In addition, we train our policy network to predict short-trajectory segments that jointly traverse a racing gate and keep it in the camera's field of view. Keeping the racing gate in the camera's field of view improves the accuracy of future predictions while permitting more robust state estimation. We compare the performance of our proposed system against a state-of-the-art method in simulation. Our system flies at nearly double the average speed, reaching speeds of up to approximately 4 m/s, while achieving a comparable gate-traversal success rate (91% compared with 92% for the baseline).