AI Fundamentals: The Evolution of AI in Drone Systems – Inside Unmanned Systems


Artificial intelligence and deep machine learning are allowing UAS to scale and intensify their impact across many sectors.
Hamid Benbrahim, PhD
Every now and then, the world experiences significant bursts of innovation thanks to a rare concurrence of favorable conditions in technology, business and society. The explosive growth in innovative applications of Unmanned Aerial Vehicles (UAVs) over the last 10 years can only be matched by that of the World Wide Web and the iPhone.
Remote-controlled flight has existed since the Royal Air Force launched the Queen Bee aircraft in 1935. After that, drones evolved to higher levels of sophistication, but remained, for the most part, limited to military applications. Perhaps the two watershed moments that turned drones into true consumer products happened in 2010, when Parrot flew its AR Drone with an iPhone app at CES, and in 2013, when DJI equipped its Phantom drone with a camera.
Ease of navigation, image capture capabilities and the miniaturization of ARM processors provided fertile ground for AI applications across all sectors. This powerful combination allowed drones to permeate an impressive variety of domains, including ISR (intelligence, surveillance and reconnaissance) and targeting, film, agriculture, logistics, engineering and disaster response. The iPhone, the GoPro camera, GPS and AI were technology kindling. The number of problems waiting for a UAV solution is bounded only by imagination.
Today’s drones come prepackaged with GPS sensing and navigation capabilities, video capture, and command and control applications, as well as several interfaces that allow the implementation of AI and special purpose programs. This combination puts the “system” in UAS.
Computers enabled massive automation and scaling of information processing. AI, on the other hand, moved computation to a much higher level, enabling massive automation of reasoning, or more precisely, solution discovery and parameter optimization. These two operations allowed machines to learn.
An AI thermostat, for instance, can “learn” the optimal temperature setting that minimizes the number of trips a person takes to warm or cool a home or office. To do this, a machine learning algorithm collects many data points such as room temperature and the action the person takes every time she or he sets the thermostat. This gives the algorithm valuable feedback on what the proper setting should be given ambient temperature, time of day, etc. All machine learning algorithms follow the same principle, except they use hundreds or thousands of “thermostats,” each of which learns an obscure parameter inside the algorithm. The accumulation of learning from these units results in machine intelligence such as recognizing a cat in an image, parallel parking a trailer truck, translating text from Mandarin to Arabic with no prior knowledge of vocabulary or grammar, or identifying terrorist cells in a social network.
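The feedback loop described above can be sketched in a few lines of Python. The class, initial setting and learning rate here are hypothetical, chosen only to illustrate how repeated user corrections pull a learned parameter toward the user's preference:

```python
# Hypothetical sketch: an "AI thermostat" that learns a preferred setpoint
# from the corrections a person makes, the feedback loop described above.
class LearningThermostat:
    def __init__(self, initial_setpoint=20.0, learning_rate=0.2):
        self.setpoint = initial_setpoint      # current learned setting (deg C)
        self.learning_rate = learning_rate    # how quickly the setting adapts

    def observe_correction(self, user_setting):
        # Each time the user overrides the thermostat, nudge the learned
        # setpoint toward their choice -- the "valuable feedback" in the text.
        error = user_setting - self.setpoint
        self.setpoint += self.learning_rate * error
        return self.setpoint

t = LearningThermostat()
for user_choice in [22.0, 22.5, 21.5, 22.0]:   # a few days of adjustments
    t.observe_correction(user_choice)
print(round(t.setpoint, 2))   # drifts toward the user's preferred ~22 deg C
```

Each of the hundreds or thousands of parameters inside a learning algorithm is adjusted by essentially this same nudge-toward-the-feedback rule.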
The most successful and practical applications of drones today leverage AI for image recognition and image stitching. While this is far short of the full promise of AI, it is making a tremendous impact, helping to automate and scale a vast array of applications, at minimal cost. Drones are flying inside Europe’s cathedrals and architectural treasures to build comprehensive 3D models. Drones count sheep in Israel, and, surprisingly, stay awake. Civil engineers use drones to constantly scan large structures like bridges, dams and oil rigs to detect structural faults before they grow into serious problems. And, of course, the military has been using drones for decades now to collect field intelligence. Drone imaging is becoming so prevalent that it will be virtually impossible to find a domain that has not been permeated.

Drones that scan the inside of a cathedral, for instance, enjoy many luxuries rarely afforded to drones in other domains. They can easily fly back to charging stations every time they need power. They can use broadband to upload their high-resolution videos to the cloud, to be processed by gigantic server farms. They are protected from wind, rain, theft and attacks, and run little cybersecurity risk. AI cannot do its job without some of these luxuries.
The challenge of operating applications in hostile environments such as war zones, or forest fires, requires a great deal of support infrastructure as well as bigger, more powerful drones. This challenge provides a great opportunity for AI to provide “operational support” functions. Albeit less glamorous than object identification, navigation and context determination, these support functions optimize and preprocess the data to mitigate the lack of readily available powerful computing and high-speed data transfer.
For example, rather than upload high-resolution images to the server, or process them on the drone, AI algorithms sample the images and capture only relevant features, such as edges or the position of each eye relative to the nose. These features can then be uploaded to a server using much lower bandwidth and much less power. Another powerful approach, frequently used in movies, consists of capturing a few high-resolution static images alongside low-resolution video; AI algorithms then combine the two to reconstruct high-resolution video. Once the high-resolution video is on the server, virtually endless computing power can be applied by advanced AI. A drone equipped with FHD video, for instance, captures images of 1920×1080 pixels (about 2 million). Capturing at an eighth of FHD resolution instead produces 240×135 frames (32,400 pixels), which are still recognizable yet require roughly 64 times less power and bandwidth to capture, store and transmit. Many AI applications use only 64×32 images, a reduction of roughly 1,000-fold.
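The pixel arithmetic above is easy to check. This short sketch, using the resolutions given in the text, computes the reduction factors directly:

```python
# Back-of-the-envelope check of the bandwidth/power savings described above.
def reduction_factor(full, reduced):
    """Ratio of pixel counts between a full and a reduced resolution."""
    return (full[0] * full[1]) / (reduced[0] * reduced[1])

fhd = (1920, 1080)                # 2,073,600 pixels, "about 2 million"
eighth = (1920 // 8, 1080 // 8)   # 240 x 135 = 32,400 pixels
tiny = (64, 32)                   # 2,048 pixels

print(eighth)                           # (240, 135)
print(reduction_factor(fhd, eighth))    # 64.0
print(reduction_factor(fhd, tiny))      # 1012.5, i.e. roughly 1,000-fold
```

Halving each dimension quarters the pixel count, so an eighth of the resolution in each dimension yields exactly an 8² = 64-fold reduction.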
Advances in computing and battery technology, as well as the increased availability of bandwidth, will make these support functions less relevant in many applications. However, the drive for miniaturization and the continued expansion of drones into new, often hostile, domains of application will always require additional support functions.

The model of using a handful of drones to capture images or other data and upload them to a server running advanced AI is likely to become outdated very soon. The problem with this model is that it cannot be operationalized: it is neither robust nor scalable. If a drone fails, gets lost or suffers a physical or cyber attack, the whole operation could fail. The cost of manually replacing drones on a battlefield or an offshore oil rig is operationally prohibitive. The model of the future uses hundreds or thousands of drones as a system, rather than as individual drones.
Transforming “individuals” into a system has been fundamental to many areas, including physics, biology, social science and technology. The key distinction is that the system is greater than the sum of its parts. It is a universal property of sustainable systems. The science of managing large numbers of individuals as a system has been applied with great success to marketing, finance, web search, language translation, environment restoration and AI itself. Transforming an army of drones into a system of one is a natural application of this science.
To cite a public example, in 2012, Ars Electronica Futurelab created the first drone light show, in which 49 drones were programmed with flocking and swarming behavior inspired by birds and bees. The result was the emergence of artistic light patterns in the sky. Since then, there have been hundreds of light shows using thousands of drones. Most of these shows rely on highly coordinated drones to render images, such as the man walking in space displayed at the 2020 New Year's celebration in Shanghai.
As impressive as it is, a light show is simply a rendering of 3D images in which each drone occupies the position of a pixel. The technology-associated challenges are not to be underestimated, but this remains in the realm of coordinating individuals rather than managing a system.
AI exemplifies the power of leveraging large numbers of individuals as a system. In 1956, Oliver Selfridge and Marvin Minsky helped found what is known today as AI with the notion of daemons, or background computer programs, working as a system to solve complex problems. Since then, there has been a proliferation of distributed technology systems founded on the principle of agents, or daemons, working together to tackle complex problems at scale. Hadoop, today widely used to mine huge amounts of text data such as web logs, is based on agents that each grab a subset of the data, breaking the task into thousands of subtasks that are executed in parallel. Agent-based modeling is widely used in biology to study, for instance, the sustainability of an ecosystem. In these models, thousands of agents are programmed with certain behaviors (deer eating vegetation at a particular rate, trees growing at their own rates, predators eating deer). These agents then interact randomly with each other and produce many plausible outcomes.
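The Hadoop-style principle described above, each agent grabbing a subset of the data and the partial results being merged into one answer, can be sketched in miniature. The word-count task and the sample log lines here are invented for illustration:

```python
# Minimal sketch of the map/reduce principle behind Hadoop-style processing:
# each "agent" counts words in its own chunk of data, then the partial
# counts are merged into a single result.
from collections import Counter
from functools import reduce

logs = [
    "drone online drone",   # chunk handled by agent 1
    "drone offline",        # chunk handled by agent 2
    "online online",        # chunk handled by agent 3
]

# Map: each agent produces a partial count (in Hadoop these run in parallel
# across thousands of machines; here they run sequentially for clarity).
partial_counts = [Counter(chunk.split()) for chunk in logs]

# Reduce: merge the partial results into one answer.
total = reduce(lambda a, b: a + b, partial_counts)
print(total["drone"], total["online"])   # 3 3
```

The point of the pattern is that the map step scales horizontally: adding agents (or machines, or drones) shrinks each subtask without changing the final reduced answer.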

Similarly, deep learning, which is at the center of many of the most impactful applications of AI in drones, gets its power from neural networks in which hundreds or thousands of very basic mathematical functions cooperate systematically, adapting their parameters as they learn from experience. These neural networks are organized in layers, whereby each layer feeds a summarized version of its inputs to the next, allowing the network to dive deeper and deeper into the features that really matter—hence the name deep learning.
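A minimal forward pass through such a layered network might look like the following sketch. The weights are arbitrary toy values, standing in for the thousands of parameters that training would adjust:

```python
# Sketch of the layered structure described above: each layer is a bank of
# simple units whose outputs feed the next layer. Weights are fixed toy
# values here; training would adjust each one from feedback, like the
# "thermostats" described earlier.
import math

def layer(inputs, weights):
    # Each unit sums its weighted inputs and applies a squashing function.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
            for ws in weights]

x = [0.5, -1.0, 0.25]                                 # raw input features
h = layer(x, [[0.2, -0.4, 0.1], [0.7, 0.0, -0.3]])    # hidden layer: 2 units
y = layer(h, [[1.0, -1.0]])                           # output layer: 1 unit
print(len(h), len(y))   # 2 1
```

Each layer's output is a compressed summary of the one before it, which is exactly the "summarized version of its inputs" the text describes.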
Drones are no different from the nodes in a neural network, albeit much more sophisticated. With today’s technology, small and simple drones can be produced at very low cost. Instead of a handful of powerful drones, with single points of failure, thousands of simpler drones can operate as a system. Systems of drones are much more effective than individuals. A handful of drones can be shot down in a battlefield; a dust of drones, on the other hand, is much harder to defend against.
The operationalization of drones as a multidrone system is key to enabling the next leap. Fortunately, there is a powerful precedent that presages high likelihood of success. In 2006 Amazon launched Amazon Web Services (AWS), which gave birth to cloud computing, transforming IT operations forever and making data centers a thing of the past. At the core of AWS are a number of support processes that marshal thousands of computers on the cloud to operate as one. Before AWS, the vast majority of AI applications were limited to a handful of powerful computers. Now AI, with virtually endless power, is readily available at a staggeringly small fraction of the cost.
Amazon, and many others, are diligently working on the operationalization of drones today. They have a pressing need to use drones in package delivery logistics, at scale. They are tackling a number of technical and regulatory challenges, such as air traffic control. They have the know-how, the tools and, importantly, the imagination to build the support infrastructure that makes drones operate as a system.

Drone light shows are a spearhead driving the systematization of drones today. They are creating the market conditions that drive miniaturization, cost reduction, interoperability and, crucially, the development of support processes. Indeed, companies like Intel, EHang and HighGreat have developed a tremendous arsenal of technologies and capabilities that has allowed them to rapidly scale drone light show production. SPH Engineering, a software engineering firm located in Latvia, offers open source as well as premium software that allows anyone to create their own drone light show.
It is only a matter of time until these light show capabilities are repurposed for other activities. The availability of open source software, along with standard drone interfaces, provides extremely favorable conditions for implementing AI applications. While light shows are concerned with mapping drones to pixels, these more generalized applications will focus on mapping drones to subfunctions. For example, some drones will follow movement in a battlefield while others count soldiers. While many of these applications will address the task at hand, such as mapping a war zone, much of the AI will focus on breaking the task up, distributing it among thousands of drones, and stitching the parts back into a whole.
