Indoor flying robots represent a largely unexplored area of robotics. Several unmanned aerial vehicles exist, but these machines require precise information on their absolute position and can fly only in open skies, far away from any object. Flying within or among buildings requires completely different types of sensors and control strategies, because geo-position information is no longer available in closed and cluttered environments. At the same time, the small space between obstacles calls for extreme miniaturisation and imposes stringent constraints on energetic requirements and mechatronic design.
A small number of scientists and engineers have started to look at flying insects as a source of inspiration for the design of indoor flying robots. But where does one start? Should the robot look like an insect? Is it possible to tackle the problem of perception and control separately from the problem of hardware design? What types of sensors should be used? How do insects translate sensory information into motor commands?
Biological inspiration is a tricky business. The technology, so to speak, used by biological organisms (deformable tissues, muscles, elastic frameworks, pervasive sensory arrays) differs greatly from that of today's robots, which are mostly made of rigid structures, gears and wheels, and comparatively few sensors. Therefore, what seems effective and efficient in biology may turn out to be fragile, difficult to manufacture, and hard to control in a robot. For example, it is still much debated to what extent robots with legged locomotion are better than robots with wheels.
Also, the morphologies, materials, and brains of biological organisms co-evolve to match the environmental challenges at the spatial and temporal scales where those organisms operate. Isolating a specific biological solution and transposing it into a context that does not match the selection criteria for which that solution was evolved may result in sub-optimal solutions. For example, the single-lens camera with small field of view and high resolution that mammalian brains evolved for shape recognition may not be the most efficient solution for a micro-robot whose sole purpose is to rapidly avoid obstacles on its course.
Useful practice of biological inspiration requires a series of careful steps: (a) describing the challenge faced by robots in terms of established engineering design principles; (b) uniquely identifying the biological functionality that is required by the robot; (c) understanding the biological mechanisms responsible for that functionality; (d) extracting the principles of biological design at a level that abstracts away from the technological details; (e) translating those principles into technological developments through standard engineering procedures; and (f) objectively assessing the performance of the robot.
Technology and science will continue to progress, and flying robots will become even smaller and more autonomous in the future.
What’s Wrong with Flying Robots?
Current unmanned aerial vehicles (UAVs) tend to fly far away from obstacles such as the ground, trees, and buildings. This is mainly because aerial platforms face such tight constraints on manoeuvrability and weight that enabling them to actively avoid collisions in cluttered or confined environments is highly challenging. Very often, researchers and developers use GPS (Global Positioning System) as the main source of sensing information to achieve what is commonly known as “waypoint navigation”. By carefully choosing the waypoints in advance, it is easy to make sure that the resulting path will be free of static obstacles. It is indeed striking to see how research in flying robotics has evolved since GPS became available in the mid-1990s. GPS enables a flying robot to be aware of its state with respect to a global inertial coordinate system and, in some respects, to be considered as the end-effector of a robotic arm with a certain workspace in which it can be precisely positioned. Although localisation and obstacle avoidance are two central themes in terrestrial robotics research, they have been somewhat ignored in the aerial robotics community, since it was possible to solve the first effortlessly by using GPS and to ignore the second because the sky is far less obstructed than the Earth's surface.
However, GPS has several limitations when it comes to low-altitude or indoor flight. The signal sent by the satellites may become too weak, be temporarily occluded, or suffer from multiple reflections before reaching the receiver. It is therefore generally acknowledged that GPS is unreliable when flying in urban canyons, under trees, or within buildings. In these situations, the problem of controlling a flying robot becomes very delicate. Some researchers use ground-based beacons or tracking systems to replace the satellites. However, this is not a convenient solution, since such equipment is limited to pre-defined environments. Other researchers are attempting to equip flying robots with the same kind of sensors commonly found on terrestrial mobile robots, i.e. range finders such as sonars or lasers. The problem with this approach is not only that flying systems possess a very limited payload, which is often incompatible with such sensors, but also that they must survey a 3D space, whereas terrestrial robots are generally satisfied with 2D scans of their surroundings. Moreover, because of their higher speed, flying robots require longer sensing ranges, which in turn require heavier sensors. The only known system that has solved the problem of near-obstacle flight using a 3D scanning laser range finder is a 100 kg helicopter equipped with a 3 kg scanning laser range finder.
Even if GPS could provide an accurate signal near obstacles, localisation information per se does not solve the collision avoidance problem. In the absence of continuously updated information about the surrounding obstacles, one would need to embed a very accurate 3D map of the environment in order to achieve collision-free path planning. In addition, environments are generally not completely static, and it is very difficult to incorporate into maps changes such as new buildings, cranes, etc. that could significantly disturb a UAV flying at low altitude. Apart from the problem of constructing such a map, this method would require a significant amount of memory and processing power, which may be well beyond the capability of a small flying system.
In summary, the aerial robotics community has largely been deterred from effectively tackling the collision avoidance problem because GPS has provided an easy way around it. This problem is definitely worth getting back to in order to produce flying robots capable of flying at lower altitudes or even within buildings so as to, e.g., help in search and rescue operations, provide low-altitude imagery for surveillance or mapping, measure environmental data, or act as wireless communication relays. Since the classical approach used in terrestrial robotics, i.e. active distance sensors, tends to be too heavy and power-hungry for flying platforms, what about turning to living systems like flies? Flies are indeed perfectly capable of navigating within cluttered environments while keeping energy consumption and weight at an incredibly low level.
Flying Insects Don’t Use GPS
Engineers have been able to master amazing technologies in order to fly at very high speed, relatively high in the sky. However, biological systems far outperform today’s robots at tasks involving real-time perception in cluttered environments, in particular if we take energy efficiency and size into account. Based on this observation, the present book aims at identifying the biological principles that are amenable to artificial implementation in order to synthesise systems that typically require miniaturisation, energy efficiency, low-power processing and fast sensory-motor mapping.
The notion of a biological principle is taken in a broad sense, ranging from individual biological features, such as the anatomy of perceptive organs, models of information processing, or behaviours, to the evolutionary process at the level of the species. The idea of applying biological principles to flying robots draws on the fields of biorobotics and evolutionary robotics. These research trends have in turn been inspired by the new artificial intelligence (new AI), first advocated by Brooks in the early 1980s (for a review, see Brooks, 1999), and by the seminal contribution of Braitenberg. However, when taking inspiration from biology in order to engineer artificial systems, care must be taken to avoid the pitfall of carrying out biomimicry for its own sake while forgetting the primary goal, i.e. the realisation of functional autonomous robots. For instance, it would make no sense to replace efficiently engineered systems or subsystems with poorly performing bio-inspired solutions for the sole reason that they are bio-inspired. In our approach, biological inspiration will take place at different levels.
The first level concerns the selection of sensory modalities. Flies do not use GPS, but mainly low-resolution, fast, wide field-of-view (FOV) eyes, gyroscopic sensors, and airspeed detectors. Interestingly, these kinds of sensors can be found in very small and low-power packages. Recent developments in MEMS technology allow the measurement of strain, pressure, or inertial forces with ultra-light devices weighing only a few milligrams. Therefore, artificial sensors can easily mimic certain proprioceptive senses in flying insects. Concerning the perception of the surroundings, the only passive sensory modality that can provide useful information is vision. Active range finders such as lasers or sonars have significant drawbacks such as their inherent weight (they require an emitter and a receiver), their need to send energy into the environment, and their inability to cover a wide portion of the surroundings unless they are mounted on a mechanically scanning system. Visual sensors, on the other hand, can be extremely small, do not need to send energy into the environment, and inherently have a larger FOV. It is probable that these same considerations have driven evolution toward the extensive use of vision in flying insects, rather than active range finders, to control their flight, avoid collisions, and navigate in cluttered environments.
The second level of bio-inspiration is related to the control system, in other words, how sensor information is processed and merged in order to provide useful motor commands. At this level, two different approaches will be explored. The first approach consists in copying flying insects in their way of processing information and behaving: controlling attitude (orientation), stabilising their course, maintaining ground clearance, and avoiding collisions. The second approach relies on artificial evolution to automatically synthesise neuromorphic controllers that map sensory signals into motor commands in order to produce a globally efficient behaviour without requiring the designer to divide it into specific sub-behaviours. In both these approaches, vision remains the core sensory modality.
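The second, evolutionary approach can be illustrated with a minimal sketch. All names below are hypothetical, and the toy fitness function (closeness of the controller weights to an arbitrary target mapping) stands in for what would really be measured by letting the evolved controller drive the robot's behaviour:

```python
import random

random.seed(0)  # fixed seed so this illustrative run is reproducible

def evolve_controller(fitness, n_weights=6, pop_size=20, generations=30,
                      mutation_std=0.1):
    """Minimal evolutionary loop: evolve the weights of a sensory-motor
    mapping (e.g. visual inputs -> motor commands) against a fitness
    function, without hand-designing any sub-behaviour."""
    population = [[random.gauss(0, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 4]  # elitist truncation selection
        offspring = [[w + random.gauss(0, mutation_std)
                      for w in random.choice(parents)]
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

# Toy fitness: negated squared distance to a hypothetical target mapping.
target = [0.5, -0.2, 0.1, 0.0, 0.3, -0.4]
best = evolve_controller(lambda w: -sum((a - b) ** 2
                                        for a, b in zip(w, target)))
```

The point of the sketch is only the division of labour: the designer specifies a global fitness measure, while selection and mutation discover the sensory-motor mapping itself.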
However, a significant drawback with vision is the complex relationship existing between the raw signal produced by the photoreceptors and the corresponding 3D layout of the surroundings. The mainstream approach to computer vision, based on a sequence of pre-processing, segmentation, object extraction, and pattern recognition of each single image, is often incompatible with the limited processing power usually present onboard small flying robots. By taking inspiration from flying insects, this book aims at demonstrating how simple visual patterns can be directly linked to motor commands. The underlying idea is very close to the ecological approach to visual perception, first developed by Gibson and further advocated by Duchon et al.:
Ecological psychology (…) views animals and their environments as “inseparable pairs” that should be described at a scale relevant to the animal’s behavior. So, for example, animals perceive the layout of surfaces (not the coordinates of points in space) and what the layout affords for action (not merely its three-dimensional structure). A main tenet of the ecological approach is that the optic array, the pattern of light reflected from these surfaces, provides adequate information for controlling behavior without further inferential processing or model construction. This view is called direct perception: The animal has direct knowledge of, and relationship to its environment as a result of natural laws.
Following this idea, no attempt will be made to explicitly estimate, e.g., the distances separating the flying robot's artificial eye from potential obstacles. Instead, simple biological models will be used to directly link perception to action without going through complex sequences of image processing.
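A minimal sketch of such a direct perception-to-action link, assuming the visual field is split into a left and a right half and the controller simply steers away from the side with the stronger optic flow (the function name and gain are illustrative, not taken from the insect literature):

```python
def steering_from_optic_flow(flow_left, flow_right, gain=0.5):
    """Map lateral optic-flow magnitudes directly to a steering command.

    Nearby obstacles generate stronger optic flow on their side of the
    visual field, so turning away from the side with the larger average
    flow avoids collisions without ever estimating a distance.
    """
    left = sum(abs(f) for f in flow_left) / len(flow_left)
    right = sum(abs(f) for f in flow_right) / len(flow_right)
    # Positive command = turn right, i.e. away from stronger left-side flow.
    return gain * (left - right)

# An obstacle looming on the left yields a positive (rightward) command.
cmd = steering_from_optic_flow([2.0, 2.5, 1.8], [0.4, 0.5, 0.3])
```

Note that balanced flow on both sides yields a zero command, which corresponds to flying along the middle of a corridor; no 3D reconstruction or map is involved at any point.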