Recently, scientists have been studying not only individual robots but also groups of robots, because a swarm of collaborating robots, each observing the problem from a different point of view, can solve more difficult tasks than a single robot. The strength of a swarm lies in the sharing of information: pieces of information pass from one robot to a nearby robot and so on, connecting all the robots of the swarm in a synergistic network. For example, each robot exploring an unknown environment can cover a small part of the search space by itself while sharing information with its neighbours; in this fashion the exploration task is completed in less time than with a single robot.
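As a toy illustration of why pooling information speeds up exploration, the following minimal sketch lets several robots on a grid share a common set of visited cells, so no robot re-explores what another has already covered. The grid size, robot count and simplified movement model are invented for this example and do not correspond to any specific system discussed here:

```python
# Toy sketch of cooperative grid exploration: robots broadcast every
# newly visited cell into a shared map, so the swarm covers the space
# faster than a single robot would. Purely illustrative.

WIDTH, HEIGHT = 10, 10
robots = [(0, 0), (9, 9), (0, 9)]   # hypothetical starting positions
shared_map = set(robots)            # cells any robot has observed

def nearest_unvisited(pos):
    """Closest unexplored cell, by Manhattan distance."""
    frontier = [(x, y) for x in range(WIDTH) for y in range(HEIGHT)
                if (x, y) not in shared_map]
    if not frontier:
        return None
    return min(frontier, key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))

rounds = 0
while len(shared_map) < WIDTH * HEIGHT:
    for i, pos in enumerate(robots):
        target = nearest_unvisited(pos)
        if target is not None:
            robots[i] = target        # simplified move, for brevity
            shared_map.add(target)    # "broadcast" the new observation
    rounds += 1

print(f"explored {WIDTH * HEIGHT} cells in {rounds} rounds with {len(robots)} robots")
```

With three robots the grid is covered in roughly a third of the rounds a single robot would need, which is the point of the paragraph above.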
The basic principle behind this approach to robot coordination is directly inspired by the observation of natural systems. In nature, many animals work together toward a common goal; typical examples can be found in the sea, on the ground and in the air, and more evolved animals collaborate to perform complex social behaviours. Swarm Intelligence (SI) is a research field within Artificial Intelligence that studies the decentralized collective behaviour of entities belonging to both artificial and natural systems, drawing on ideas and theories strongly inspired by biological systems.
The systems commonly used as testbeds consist of a population of entities, called units, agents or particles depending on the research field. Entities can interact with the surrounding environment and with other entities of the population, exchanging information in some fashion. Each entity operates autonomously and in a completely decentralized fashion, pursuing the same target and following the same simple rules; the intelligent behaviour of the group emerges in a self-organized way from the behaviour of each single entity. The biological groups most studied by SI are schools of fish, flocks of birds, swarms of bees, colonies of ants and herds of animals in general, from which scientists have derived applications in mathematics, statistics, immunology, sociology, engineering and many other research fields, including robotics, e.g. multi-robot systems.
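To make the emergence idea concrete, here is a minimal, purely illustrative sketch of identical agents following one simple local rule (cohesion toward nearby neighbours, in the spirit of classic flocking models). The rule, radius and step size are assumptions of this example, not any particular system studied in SI:

```python
import random

# Each agent follows the same local rule using only locally available
# information: drift toward the centre of mass of its neighbours.
# Clustering emerges without any central controller.

NEIGHBOUR_RADIUS = 5.0   # illustrative sensing range
STEP = 0.05              # illustrative gain

def neighbours(i, agents):
    xi, yi = agents[i]
    return [(x, y) for j, (x, y) in enumerate(agents)
            if j != i and (x - xi) ** 2 + (y - yi) ** 2 < NEIGHBOUR_RADIUS ** 2]

def step(agents):
    new = []
    for i, (x, y) in enumerate(agents):
        nbrs = neighbours(i, agents)
        if nbrs:
            # Cohesion rule: move a little toward the local centre of mass.
            cx = sum(n[0] for n in nbrs) / len(nbrs)
            cy = sum(n[1] for n in nbrs) / len(nbrs)
            x += STEP * (cx - x)
            y += STEP * (cy - y)
        new.append((x, y))
    return new

agents = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(30)]
for _ in range(100):
    agents = step(agents)
# After enough steps the population has aggregated into clusters,
# although no agent ever used global information.
```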
Hybrid bio-inspired robotics tries not only to mimic living organisms found in nature, but also to improve on them, making robots adaptable to multiple terrains and environments while remaining mechanically modular and efficient. The development of hybrid robots requires new designs of mechanical elements that take advantage of 3D-printing technologies, combining soft and rigid materials in the same part for multifunctional purposes.
Bio-inspired spherical mobile robots have embedded mechatronic modifications to make them more adaptive to different terrains and environments: they can swim and dive using integrated thrusters, move on snow thanks to a rugged outer shell, and even walk by using the shell as legs, but none has embedded an active exoskeleton. The main reason is the challenge of fitting the mechatronic system inside the constrained space of the robot’s exoskeleton, which integrates a sealed spherical mobile robot as an inner shell together with a sensorized and actuated outer shell, while managing their complex interactions.
The particle robots were inspired by an interesting, if seemingly unremarkable, animal, the biological echinoid (sea urchin), and by spherical mobile robots. The sea urchin consists of a round body covered with long spines, which serve multiple purposes such as protection and moving about in the water. Spherical mobile robots, for their part, embody a special morphology with multiple advantages, such as the protection offered by their outer shell and smooth motion with good power efficiency. Thus, building on the state of the art in spherical mobile robots, soft robotics and highly compressible linear actuators, it is possible to create a new species of robot with novel locomotion methods, able to reconfigure itself to swim, move on snow or sand, pass over obstacles and even jump by contracting or extending its spines.
NASA Wants to Send Shapeshifting Robots to Saturn’s Moon Titan
To explore one of the more distant and fascinating objects in our solar system, Saturn’s moon Titan, NASA engineers have come up with the Shapeshifter. A robot operating semi-autonomously on such alien turf must be able to negotiate a broad range of terrains and environmental conditions, the likes of which may not exist on Earth. Titan’s landscapes include vast plains of dunes, high and steep-walled mountains peppered with deep alpine lakes, complex networks of river-carved canyons, and several wide seas of liquid methane. In some respects, Titan’s physical environment will make it easier for a co-botic, transforming Shapeshifter craft to move about.
This new framework is a robotic platform designed to offer access and mobility across a wide range of terrains. A Shapeshifter can change shape into a flying array, a ball, a torpedo-like structure and more; by morphing among these shapes it can hover and fly over surfaces, move beneath them, swim under liquid, or travel along the ground.
To demonstrate this concept, they built a Shapeshifter mock-up from two separate and complementary assemblies: a pair of flight-capable drones housed within their own halves of a pipe-frame cylinder structure. Combined, the prototype can roll like a barrel to traverse stretches of flat or mounded terrain; separated, one half can ascend skyward on propellers, using the other half as a launch pad. More advanced visions for the Shapeshifter keep the paradigm of smaller robots working together (“co-bots”) that form different configurations, but involve greater numbers of base robot units. Each cobot has a deliberately simple structure, incorporating only a few propellers that act as its actuators, and the cobots combine to shapeshift into different modes of mobility.
“Particle robot” works as a cluster of simple units
Researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots, which they call “particles”, that connect in large groups to move around, transport objects, and complete other tasks. The particles are loosely connected by magnets around their perimeters, and each unit can do only two things: expand and contract. (Each particle is about 6 inches across in its contracted state and about 9 inches when expanded.) That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.
In a Nature paper published in March 2019, the researchers demonstrate a cluster of two dozen real robotic particles and a virtual simulation of up to 100,000 particles moving through obstacles toward a light bulb. They also show that a particle robot can transport objects placed in its midst.
Particle robots can form many configurations, fluidly navigate around obstacles, and squeeze through tight gaps. Notably, none of the particles directly communicates with or relies on any other to function, so particles can be added or removed without affecting the group. In their paper, the researchers show that particle robotic systems can complete tasks even when many units malfunction.
The paper represents a new way to think about robots, which are traditionally designed for one purpose, comprise many complex parts, and stop working when any part malfunctions. Each particle is built from panels connected in a circular formation that can be pulled outward to expand and pushed back to contract, with two small magnets installed in each panel.
The trick was programming the robotic particles to expand and contract in an exact sequence to push and pull the whole group toward a destination light source. To do so, the researchers equipped each particle with an algorithm that analyzes broadcast information about light intensity from every other particle, without any direct particle-to-particle communication. A particle’s sensors detect the intensity of light from a light source; the closer the particle is to the light source, the greater the intensity.
Each particle constantly broadcasts a signal that shares its perceived intensity level with all other particles. Say a particle robotic system measures light intensity on a scale of levels 1 to 10: Particles closest to the light register a level 10 and those furthest will register level 1. The intensity level, in turn, corresponds to a specific time that the particle must expand. Particles experiencing the highest intensity — level 10 — expand first. As those particles contract, the next particles in order, level 9, then expand. That timed expanding and contracting motion happens at each subsequent level.
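The timing rule described above can be sketched in a few lines. The following is a simplified illustration, not the authors’ implementation; the quantization into ten levels follows the example in the text, while the distance-based intensity model and the slot duration are assumptions made for this sketch:

```python
import math

# Sketch of the expansion-wave timing rule: each particle maps its
# sensed light intensity to a level 1..10, and a shared synchronized
# clock divides every cycle into 10 slots. Level-10 particles (closest
# to the light) expand in slot 0, level-9 in slot 1, and so on,
# producing the coordinated expansion-contraction wave.

NUM_LEVELS = 10

def intensity_level(distance_to_light, max_distance):
    """Quantize sensed intensity into levels 1..10 (closer -> higher).
    Hypothetical model: intensity falls off linearly with distance."""
    closeness = 1.0 - min(distance_to_light / max_distance, 1.0)
    return max(1, math.ceil(closeness * NUM_LEVELS))

def expansion_slot(level):
    """Slot within the shared clock cycle in which this particle expands."""
    return NUM_LEVELS - level   # level 10 expands first (slot 0)

def is_expanded(level, t, slot_duration=1.0):
    """True while this particle should be expanded at global time t."""
    slot = int(t / slot_duration) % NUM_LEVELS
    return slot == expansion_slot(level)

# Example: three particles at increasing distance from the light.
distances = [0.5, 5.0, 9.0]
levels = [intensity_level(d, max_distance=10.0) for d in distances]
for t in range(3):
    print(t, [is_expanded(lv, t) for lv in levels])
```

Note that each particle needs only its own level and the shared clock to decide when to expand, which is why no direct particle-to-particle coordination is required.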
“This creates a mechanical expansion-contraction wave, a coordinated pushing and dragging motion, that moves a big cluster toward or away from environmental stimuli,” Li says. The key component, Li adds, is the precise timing from a shared synchronized clock among the particles that enables movement as efficiently as possible: “If you mess up the synchronized clock, the system will work less efficiently.”
Bees’ Movements May Lead To New Swimming, Flying Robots
Walking on Caltech’s campus, engineer Chris Roh happened to see a bee stuck in the water of Millikan Pond. Although it was a common sight, it led Roh and his colleague Mory Gharib to a discovery about the unique way bees navigate the interface between water and air. The incident occurred around noon, so the overhead sun cast the shadows of the bee, and, more importantly, of the waves churned by the flailing bee’s efforts, directly onto the bottom of the pool.
As the bee struggled to make its way to the edge of the pond, Roh noticed that the shadows on the pool’s bottom showed the amplitude of the waves generated by the bee’s wings. Gharib and Roh recreated the conditions of Millikan Pond. They placed water in a pan, allowed it to become perfectly still, and then put bees, one at a time, in the water. As each bee flapped about in the water, filtered light was aimed directly down onto it, to create shadows on the bottom of the pan. A paper describing the results of the NSF-funded research was published in the journal Proceedings of the National Academy of Sciences.
When a bee lands on water, the water sticks to its wings, robbing it of the ability to fly. However, that stickiness allows the bee to drag water, creating waves that propel it forward. “The motion of the bee’s wings creates a wave that its body is able to ride forward,” Gharib says. “It hydrofoils, or surfs, toward safety.” Roh and Gharib are applying their findings to robotics research, developing a small robot that uses a similar motion to navigate the surface of water. The motion could be used to generate robots capable of both flying and swimming.
Military developing small, autonomous robots for use in warfare
On November 12th a video called “Slaughterbots” was uploaded to YouTube. It is the brainchild of Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley, and was paid for by the Future of Life Institute (FLI), a group of concerned scientists and technologists that includes Elon Musk, Stephen Hawking and Martin Rees, Britain’s Astronomer Royal. It is set in a near future in which small drones fitted with face-recognition systems and shaped explosive charges can be programmed to seek out and kill known individuals or classes of individuals (those wearing a particular uniform, for example). In one scene, the drones are shown collaborating to gain entrance to a building: one acts as a petard, blasting through a wall to grant access to the others.
“Slaughterbots” is fiction. The question Dr Russell poses is, “how long will it remain so?” For military laboratories around the planet are busy developing small, autonomous robots for use in warfare, both conventional and unconventional. In America, in particular, a programme called MAST (Micro Autonomous Systems and Technology), which has been run by the US Army Research Laboratory in Maryland, is wrapping up this month after ten successful years. MAST co-ordinated and paid for research by a consortium of established laboratories, notably at the University of Maryland, Texas A&M University and Berkeley (the work at Berkeley is unrelated to Dr Russell’s). Its successor, the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme, which began earlier this year, is now getting into its stride.