Drones are being used worldwide to perform many functions - medical deliveries, video recording, and surveying, to name a few. Their growing presence is paralleled by the rapid growth of machine learning (ML). This thesis presents a synthesis of machines and machine learning to create reactive agents in the form of mini-drones. Two prominent companies, DJI and Amazon, are patenting and selling drones that recognize hand gestures, but neither has published the methods behind its intelligent drones. Here, deep learning and convolutional networks are used to train agents that control mini-drones based on hand gestures. This thesis details the process from collecting data to deploying an agent. Real-time gesture recognition agents are deployed with a mini-drone to detect four gestures: a pointer finger pointing up, a pointer finger pointing down, two fingers in a "peace" v-shape sign, and a flat, upward-facing palm. The effects of the environment in which training data is collected, as well as the lighting of the subjects, are investigated to determine which conditions produce an agent that detects the four gestures more reliably. The end product is a gesture-intelligent drone reacting to hand gestures - the foundation for more complex physical-language interaction.
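The abstract describes an agent that maps four recognized gestures to drone behavior. As a minimal sketch of that final dispatch step, the snippet below pairs each gesture label with a drone command; the label strings and command names are illustrative assumptions, since the thesis abstract does not specify its control interface.

```python
# Hypothetical sketch of the gesture-to-command dispatch step.
# Gesture labels and command names are assumptions for illustration;
# a trained CNN classifier would supply the gesture string.

GESTURE_COMMANDS = {
    "point_up": "ascend",      # pointer finger pointing up
    "point_down": "descend",   # pointer finger pointing down
    "peace_sign": "flip",      # two fingers in a "peace" v-shape
    "flat_palm": "hover",      # flat, upward-facing palm
}

def dispatch(gesture: str) -> str:
    """Return the drone command for a recognized gesture, or 'ignore'
    for anything the classifier was not trained on."""
    return GESTURE_COMMANDS.get(gesture, "ignore")
```

Falling back to "ignore" for unrecognized labels keeps the drone inert on low-confidence or unknown classifier output, a common safety choice for physical agents.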
Taner Davis, MS (2018): Real-Time Gesture Recognition with Mini Drones. Master's Thesis, School of Computer Science, University of Oklahoma.
Related publications and presentations
The GIFT (Gesture Images For Training) database is freely available on GitHub.
Created by amcgovern [at] ou.edu.
Last modified December 14, 2018 9:10 PM