Researchers from the University of Zurich have proposed training drones on city streets with neural networks to prepare them for use in urban environments. Because it can draw on data already collected in self-driving car experiments, the method put forward in the paper has advantages over the GPS and obstacle-avoidance systems found in drones currently on the market.

Computer vision is nothing new, but the paper presents a system in which each processed frame produces two outputs: a steering angle and a collision probability. According to the paper, adding the steering angle will allow drones to adapt to tighter, more dynamic environments without suffering from stimulation overload, a concern with current systems.
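The per-frame interface described above can be sketched as follows. This is an illustrative stub only: the real system runs a trained convolutional network, and the function name, argument, and returned values here are hypothetical stand-ins.

```python
def dronet_forward(frame):
    """Map a single camera frame to (steering_angle, collision_probability).

    In the real system `frame` would be a camera image and the values
    would come from a trained network; this stub returns fixed,
    hypothetical numbers purely to illustrate the two-output interface.
    """
    steering_angle = 0.1   # radians; hypothetical network output
    collision_prob = 0.05  # probability in [0, 1]; hypothetical output
    return steering_angle, collision_prob
```

Calling `dronet_forward` once per incoming frame would yield the pair of signals the controller consumes.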

The researchers’ algorithm uses a convolutional neural network which they have dubbed DroNet. The steering-angle and collision-probability outputs are then used to control the drone. The improved response time owes as much to the way the controls are mapped to the outputs as it does to the use of machine learning. For example, speed is controlled by the collision probability, which allows the drone to react intuitively to obstacles at a distance rather than maintaining unnecessarily large buffer zones to prevent crashes.
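A minimal sketch of that control mapping might look like the following. The constant, function names, and the simple linear speed law are assumptions for illustration, not the paper's exact controller (the published system also smooths commands over time).

```python
MAX_SPEED = 3.0  # m/s; hypothetical forward-speed cap

def control_commands(steering_angle, collision_prob):
    """Turn the network's two per-frame outputs into drone commands.

    Forward speed shrinks as collision probability rises, so the drone
    slows early for distant obstacles instead of relying on a fixed
    buffer zone; yaw simply follows the predicted steering angle.
    """
    forward_speed = MAX_SPEED * (1.0 - collision_prob)
    yaw_command = steering_angle
    return forward_speed, yaw_command
```

With this mapping, a collision probability of 0 lets the drone fly at full speed, while a probability of 1 brings it to a stop regardless of the steering prediction.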

Testing was conducted using a Parrot Bebop 2.0, but the team claims that their method generalizes well to other platforms. They have made the code available to the general public. You can read the full research paper here.