
Computers and drones begin to understand human body language through Gesture recognition and control

The way we interact with our machines has changed dramatically in the past decade, evolving from buttons and toggles, keyboards and touch screens to motion sensors and new, more hands-free forms of human-machine interface (HMI) such as speech-recognition interfaces. The ultimate aim is to make interaction with computers as natural as interaction between humans.

 

Over the past few years, gesture recognition has made its debut in the entertainment and gaming markets. Now it is becoming a commonplace technology, enabling humans and machines to interface more easily in the home, in the automobile and at work. Gesture technology can also open up new opportunities for elderly and disabled people.

 

Gestures are visible body actions through which humans express information to others without speaking. In our daily lives we use many hand gestures for communication. Hand gesture recognition is an advanced research field that provides an intelligent method for human-computer interaction (HCI).

 

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative and intuitive ways of communicating with our computers. One can imagine a future era of human-computer interaction built on 3D applications, where the user moves and rotates virtual objects simply by moving and rotating a hand, all without the help of any input device.

Gesture control also has military applications, such as the Anthropomorphic Augmented Robotic Controller, which allows hands-free remote control of military bomb-disarming robots. The system follows the movements of the user’s arm and hand, replacing the panel of buttons and switches that military personnel typically use to control robots in the field. The Mirror Training system, CEO Liz Alessi says, is easier to use than these arrays of buttons and means bombs can be defused more quickly.

 

Researchers have also developed a human gesture application, a Hand Signal Transceiver system for military soldiers, based on hand gestures and intended for squad communication. The current hand-signal system for soldiers has shortcomings: it cannot be used in darkness and can make it hard to stay stealthy. The same approach could also be applied to patient emergencies in hospitals and to security purposes. The transceiver can detect several hand gestures and send out the corresponding information, which is received by another transceiver or a computer and displayed on an LCD screen.

 

Ivan Poupyrev, technical project lead at Google’s Advanced Technology and Projects group, says: “Gesture sensing offers a new opportunity to revolutionize the human-machine interface by enabling mobile and fixed devices with a third dimension of interaction. This will fill the existing gap with a convenient alternative for touch- and voice-controlled interaction.”

 

Gesture controlled Drone

In May 2017, DJI introduced the Spark, its tiniest drone yet and the first that can be controlled by hand gestures, for just $499. The company packed computer vision and object tracking into a pocket-size, 11-ounce aircraft that you can launch from your palm, with no device pairing, remote, or app required. If a user simply frames her face with her fingers, the hovering Spark will snap a 12-megapixel selfie, comparable to a photo taken on an iPhone X. DJI was also the first to include obstacle-avoidance features (in its Phantom 4 model).

 

Gesture control technology

Gestures can originate from any bodily motion or state but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. A person’s movements can be tracked, and the gestures they perform determined, through a variety of tools: wired gloves, depth-aware cameras, stereo cameras, single cameras, controller-based devices, radar and more. Broadly, gyroscopes, accelerometers and combo sensors fall under touch-based gesture recognition, while ultrasonic, infrared 2D-array and camera solutions fall under touchless gesture recognition.

 

Wired gloves

In the 1970s, wired gloves were invented to capture hand gestures and motions. These gloves use tactile switches and optical or resistive sensors to measure the bending of joints, and can report the position and rotation of the hands to the computer using magnetic or inertial tracking devices. Some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, a simulation of the sense of touch.
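As a rough illustration of the resistive-sensor approach, the hedged Python sketch below maps a flex sensor's resistance onto an approximate joint bend angle using a simple two-point calibration. The resistance values and the linear model are illustrative assumptions, not specifications of any actual glove.

```python
# Hypothetical sketch: converting a resistive flex-sensor reading from a wired
# glove into an approximate joint bend angle via two-point calibration.
# The calibration constants below are illustrative, not from any real glove.

FLAT_OHMS = 25_000.0   # assumed resistance with the finger fully straight
BENT_OHMS = 100_000.0  # assumed resistance at a 90-degree bend
BENT_ANGLE = 90.0

def joint_angle(resistance_ohms: float) -> float:
    """Linearly interpolate a bend angle (degrees) from sensor resistance."""
    fraction = (resistance_ohms - FLAT_OHMS) / (BENT_OHMS - FLAT_OHMS)
    return max(0.0, min(BENT_ANGLE, fraction * BENT_ANGLE))

if __name__ == "__main__":
    for r in (25_000, 60_000, 100_000):
        print(f"{r} ohms -> {joint_angle(r):.1f} degrees")
```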

 

Now engineers at the University of California San Diego have developed a prototype of what they call “The Language of Glove,” a Bluetooth-enabled, sensor-packed glove that reads sign-language hand gestures and translates them into text. This isn’t the first device designed to break down this particular language barrier: the 2012 Microsoft Imagine Cup was won by the EnableTalk gloves, which translate gestures into speech, and a London team developed a similar system a few years later called the SignLanguageGlove.

Gest gloves are useful in cases where haptic feedback is important, such as industrial robot control. However, requiring the user to put on a glove is a barrier to mass-market adoption: gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use.

 

Radars: Google’s gesture control technology

Google launched Project Soli as miniature radar hardware that allows gesture control of devices. Soli is a purpose-built interaction sensor that uses radar for motion tracking of the human hand; the sensor tracks sub-millimeter motion at high speed and with great accuracy. The team is creating a ubiquitous gesture interaction language that will allow people to control devices with a simple, universal set of gestures. Google envisions a future in which the human hand becomes a universal input device for interacting with technology.

 

Soli sensor technology works by emitting electromagnetic waves in a broad beam. Objects within the beam scatter this energy, reflecting some portion back towards the radar antenna. Properties of the reflected signal, such as energy, time delay and frequency shift, capture rich information about the object’s characteristics and dynamics, including size, shape, orientation, material, distance and velocity. Soli tracks and recognizes dynamic gestures expressed by fine motions of the fingers and hand. To accomplish this with a single-chip sensor, the team developed a novel radar sensing paradigm with tailored hardware, software and algorithms. The Soli sensor can track sub-millimeter motions at high speed and accuracy, fits onto a chip, can be produced at scale, and can be used inside even small wearable devices.

 

Unlike the cameras used in other motion-sensing technologies, radar has high positional accuracy and is better at picking up slight movements, so it works better in this context than cameras would. If a user touches his or her thumb to the forefinger, Soli reads that as a button being pressed; sliding the forefinger back and forth across the pad of the thumb could operate a slider to adjust volume. “Project Soli is the technical underpinning of human interactions with wearables, mobile devices as well as the Internet of Things,” a Google ATAP (Advanced Technology and Projects) spokesperson said.

 

Unlike traditional radar sensors, Soli does not require large bandwidth and high spatial resolution; in fact, Soli’s spatial resolution is coarser than the scale of most fine finger gestures. Instead, its fundamental sensing principle relies on motion resolution: extracting subtle changes in the received signal over time. By processing these temporal signal variations, Soli can distinguish complex finger movements and deforming hand shapes within its field. The system uses broad-beam radar to measure Doppler images, IQ signals and spectrograms.
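As an illustration of resolving motion from temporal signal variations, the following Python sketch (not Google's actual Soli pipeline) turns a synthetic stream of complex IQ samples into a Doppler spectrogram, mapping frequency shift over time to finger velocity. The 60 GHz carrier follows the figure quoted later in the article; the frame rate, window size and synthetic motion are assumptions.

```python
# Illustrative sketch (not the real Soli pipeline): turning a stream of complex
# IQ radar samples into a Doppler spectrogram, so that the frequency shift over
# time reveals how fast a finger moves toward or away from the antenna.

import numpy as np

CARRIER_HZ = 60e9          # Soli operates in the 60 GHz band (per the article)
FRAME_RATE_HZ = 10_000.0   # article cites up to 10,000 frames per second
C = 3e8                    # speed of light, m/s

def doppler_spectrogram(iq: np.ndarray, window: int = 256, hop: int = 64):
    """Short-time FFT over the slow-time IQ signal -> (velocities, spectrogram)."""
    spectra = []
    for start in range(0, len(iq) - window, hop):
        segment = iq[start:start + window] * np.hanning(window)
        spectra.append(np.abs(np.fft.fftshift(np.fft.fft(segment))))
    doppler_hz = np.fft.fftshift(np.fft.fftfreq(window, d=1.0 / FRAME_RATE_HZ))
    velocities = doppler_hz * C / (2.0 * CARRIER_HZ)  # two-way Doppler shift
    return velocities, np.array(spectra).T

# Example with a synthetic target oscillating like a rubbing fingertip:
t = np.arange(20_000) / FRAME_RATE_HZ
displacement = 0.002 * np.sin(2 * np.pi * 4 * t)       # 2 mm, 4 Hz motion
phase = 4 * np.pi * CARRIER_HZ * displacement / C       # round-trip phase shift
velocities, spec = doppler_spectrogram(np.exp(1j * phase))
print(spec.shape, f"velocity bins span +/-{velocities.max():.2f} m/s")
```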

 

Infineon has developed radar technology for cars and devices, and has reached separate agreements with Google and Imec on applications for the technology. With Google, Infineon has developed gesture control technology that allows users to control their devices with a wave of the arm or with subtler movements.

 

A research team from Scotland has now expanded Soli’s smarts, allowing the radar to identify objects as well as gestures, in a device it calls RadarCat. “We have used the Soli sensor, along with our recognition software, to train and classify different materials and objects, in real time, with very high accuracy […] Our studies include everyday objects and materials, transparent materials and different body parts,” the researchers say.

 

The Soli chip works within the 60 GHz radar spectrum at up to 10,000 frames per second. The final chip will contain everything it needs to be plug and play, including the antennas, and ATAP says the device can be made at scale. Andreas Urschitz, president of Infineon’s power management and multimarket division, says: “Since mankind started using tools 2.4 million years ago, this is the first time in history that tools adapt to their users, rather than the other way round.”

 

Vision Based Gesture Recognition

Using a conventional 2D camera, simple gesture recognition can be implemented with functions provided by commercial or open-source computer vision libraries such as OpenCV. A typical pipeline uses skin-tone detection to find the hand within a constrained area, and then detects the convex hull and convexity defect points of the hand.
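A minimal sketch of that pipeline is shown below, using OpenCV's Python bindings: skin-tone segmentation in a fixed region of interest, followed by the convex hull and convexity defects of the largest contour. The HSV thresholds, region of interest and defect-depth cutoff are rough assumptions that would need tuning for real lighting and skin tones.

```python
# Minimal sketch of the 2D-camera pipeline described above, using OpenCV:
# skin-tone segmentation in a region of interest, then the convex hull and
# convexity defects of the largest contour. Thresholds are rough assumptions.

import cv2
import numpy as np

def count_finger_defects(frame_bgr: np.ndarray) -> int:
    roi = frame_bgr[100:400, 100:400]                      # constrained area
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # crude skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep defects (valleys between extended fingers) approximate finger count.
    return int(np.sum(defects[:, 0, 3] > 10_000))

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("convexity defects:", count_finger_defects(frame))
cap.release()
```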

Gesture recognition has long been researched with 2D vision, but with the advent of 3D sensor technology its applications are now more diverse, spanning a variety of markets. The human eye naturally registers x, y and z coordinates for everything it sees, and the brain interprets those coordinates into a 3D image. In the past, the lack of image-analysis technology prevented electronics from seeing in 3D. Today, there are three common technologies that can acquire 3D images, each with its own strengths and common use cases: stereoscopic vision, structured light patterns and time of flight (TOF). With analysis of the 3D image output from these technologies, gesture recognition becomes a reality.
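For two of these approaches the depth recovery reduces to a simple relation, sketched below: stereoscopic vision estimates depth from the disparity between matched pixels (depth = focal length x baseline / disparity), while time of flight converts the round-trip travel time of emitted light into distance. The focal length, baseline and timing values are illustrative, not taken from any specific sensor.

```python
# Hedged sketch of how two of the 3D-imaging approaches above recover the "z"
# coordinate. The focal length, baseline, and timings are illustrative numbers.

SPEED_OF_LIGHT = 3.0e8  # m/s

def stereo_depth(disparity_px: float, focal_px: float = 700.0,
                 baseline_m: float = 0.06) -> float:
    """Stereoscopic vision: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

def tof_depth(round_trip_seconds: float) -> float:
    """Time of flight: depth = speed_of_light * round-trip time / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(f"stereo: {stereo_depth(35.0):.2f} m")     # 35 px disparity -> ~1.2 m
print(f"tof:    {tof_depth(6.7e-9):.2f} m")      # 6.7 ns round trip -> ~1.0 m
```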

 

The ability to process and display the “z” coordinate is enabling new applications far beyond entertainment and gaming, including manufacturing control, security, interactive digital signage, remote medical care, automotive safety and robotic vision. Vision-based gesture recognition uses a generic camera and/or range camera to capture and interpret hand gestures; it requires more processing power than a wired glove, and there are multiple camera-based methods.

3D Cameras

3D cameras that can perceive depth have become much more widely available and cheaper in recent years. In 2010, Microsoft released the Kinect V1 motion controller, using technology from PrimeSense. It provides strong three-dimensional body and hand motion capture in real time, freeing game players from physical input devices such as keyboards and joysticks, and it supports multiple users in a small-room setting.

Researchers Develop a Low-power Always-on Camera With Gesture Recognition

Researchers at the Georgia Institute of Technology have designed an always-on camera that can watch for specific types of gestures without draining batteries, waking up only when needed. “We wanted to devise a camera that was capturing images all of the time, and then once you have a particular gesture – like you write a Z in the air – it’s going to wake up,” said Arijit Raychowdhury, an associate professor in the School of Electrical and Computer Engineering. “To make that work without affecting the battery life, we wanted it to be so low power that you can power it with harvested ambient energy, such as with a photovoltaic cell.”

 

While reducing a camera’s frame rate helps lower power demands, to achieve the savings needed for this project the researchers programmed the camera to track motion in a more generalized way that still preserves crucial details about what is being tracked. This requires much less power to process than tracking individual pixels throughout the entire field of view.
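The snippet below is a loose analogue of that idea rather than the Georgia Tech design itself: each frame is reduced to a coarse grid of block averages, and only the centroid of inter-frame change is kept, so far fewer values are processed than with per-pixel tracking. The block size and noise threshold are assumptions.

```python
# Illustrative analogue (not the Georgia Tech design) of "generalized" motion
# tracking: instead of comparing every pixel, frames are reduced to a coarse
# grid of block averages and only the motion centroid of the change is kept.

import numpy as np

def block_reduce(frame: np.ndarray, block: int = 16) -> np.ndarray:
    h, w = frame.shape
    return frame[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def motion_centroid(prev: np.ndarray, curr: np.ndarray, block: int = 16):
    """Return the (row, col) centre of coarse inter-frame change, or None."""
    diff = np.abs(block_reduce(curr, block) - block_reduce(prev, block))
    if diff.max() < 10:          # assumed noise threshold
        return None              # nothing moved: stay in the low-power state
    rows, cols = np.nonzero(diff > 10)
    return float(rows.mean()), float(cols.mean())

# A centroid trajectory collected over time could then be matched against a
# wake-up gesture template (such as the "Z in the air" mentioned above).
prev = np.random.randint(0, 255, (240, 320)).astype(float)
curr = prev.copy()
curr[60:120, 100:160] += 80      # synthetic moving patch
print(motion_centroid(prev, curr))
```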

 

Such a low-power camera could be useful in a range of applications, especially for camera systems in remote locations where efficiency is crucial. “If you have a camera in the field, you want them to use as little energy as possible and only record events when necessary,” said Justin Romberg, a professor in Georgia Tech’s School of Electrical and Computer Engineering. Other applications include specialized surveillance, robotics and consumer electronics with hands-free operation, and the researchers are already working on adding wireless functionality to transmit images and data with an antenna.

 

RFID to Track Body Movements for Novel Smart Technologies

Carnegie Mellon University (CMU) researchers have found a new way to use RFID technology to track body movements and detect shape changes, leading to two RFID-based innovations that could enable novel wearable designs, according to a CMU news release. The team devised a new method for tracking the tags that monitors movements and shapes using a single, mobile antenna to observe a tag array, without the prior calibration that is usually needed. RFID systems typically use multiple antennas to track signal backscatter and triangulate the locations of the tags, but that approach would not make sense for this application, the researchers said.

 

Haojian Jin, a Ph.D. student in CMU’s Human-Computer Interaction Institute (HCII), was part of a team that designed two technologies, called RF-Wear and WiSh, which use RFID tags to track body movements in unique ways. “We’re really changing the way people are thinking about RF sensing,” he said.

 

For body-movement tracking, the researchers positioned an array of RFID tags on either side of the knee, elbow or other joints. They can calculate the angle of bend in a joint by tracking the very slight differences in when the backscattered radio signals from each tag reach the antenna, Jin said. “By attaching these paper-like RFID tags to clothing, we were able to demonstrate millimeter accuracy in skeletal tracking,” he said.
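A rough, hypothetical sketch of that principle follows: the phase difference between the backscattered signals of two tags is converted into a path-length difference, which a per-user calibration then maps onto a bend angle. The UHF wavelength, calibration points and linear mapping are all invented for illustration and are not CMU's published method.

```python
# Rough, hypothetical sketch of the idea behind RFID joint tracking: the phase
# difference between backscattered signals from two tags gives a path-length
# difference, which a per-user calibration then maps to a bend angle.
# The wavelength and calibration values below are illustrative only.

import math

WAVELENGTH_M = 0.327          # ~915 MHz UHF RFID carrier (assumed band)

def path_difference_m(phase_diff_rad: float) -> float:
    """Backscatter travels out and back, so phase advances 4*pi per wavelength."""
    return phase_diff_rad * WAVELENGTH_M / (4.0 * math.pi)

def bend_angle_deg(phase_diff_rad: float,
                   straight_m: float = 0.000, bent90_m: float = 0.020) -> float:
    """Map the measured path difference onto a calibrated 0-90 degree range."""
    d = path_difference_m(phase_diff_rad)
    fraction = (d - straight_m) / (bent90_m - straight_m)
    return max(0.0, min(90.0, fraction * 90.0))

# Example: a phase shift of 0.5 rad corresponds to ~13 mm of extra path.
print(f"{path_difference_m(0.5) * 1000:.2f} mm,",
      f"{bend_angle_deg(0.5):.1f} degrees")
```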

 

The clothing they developed with this technology is called RF-Wear, and researchers envision it could be an alternative to systems such as Kinect, which use a camera to track body movements. The drawback to Kinect, however, is that it only works when the person is in the camera’s line of sight.

 

The technology the team developed for monitoring changes in curves or shapes is called WiSh, short for Wireless Shape-aware world. It also uses arrays of RFID tags and a single antenna, and relies on a sophisticated algorithm to interpret the backscattered signals and infer the shape of a surface, the researchers said. “We can turn any soft surface in the environment into a touch screen,” said Jingxian Wang, Jin’s co-researcher and a Ph.D. student at CMU. Indeed, the technology could be integrated into various smart fabrics to track a user’s posture, or even into objects, such as smart carpets or toys, that can detect and respond to user movements, he said.

 

Controller-based gestures

These controllers act as an extension of the body, so that when gestures are performed some of their motion can be conveniently captured by software. Mouse gestures are one example, where the motion of the mouse is correlated with a symbol being drawn by a person’s hand; the Wii Remote, the Myo armband and the mForce Wizard wristband are others, which analyse changes in acceleration over time to represent gestures.

 

Devices such as the LG Electronics Magic Wand, the Loop and the Scoop use Hillcrest Labs’ Freespace technology, which uses MEMS accelerometers, gyroscopes and other sensors to translate gestures into cursor movement. The software also compensates for human tremor and inadvertent movement.
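The hedged sketch below shows the general idea, not Hillcrest Labs' actual Freespace algorithm: gyroscope angular rates are exponentially smoothed to damp hand tremor and then scaled into on-screen cursor deltas. The gain and smoothing factor are arbitrary example values.

```python
# Hedged sketch of gyroscope-to-cursor mapping (not the Freespace algorithm):
# angular rates are low-pass filtered to damp tremor, then scaled into pixels.

from dataclasses import dataclass

@dataclass
class CursorMapper:
    gain_px_per_deg: float = 8.0     # assumed sensitivity
    smoothing: float = 0.2           # exponential smoothing factor, 0..1
    _yaw_rate: float = 0.0
    _pitch_rate: float = 0.0

    def update(self, yaw_dps: float, pitch_dps: float, dt_s: float):
        """Return (dx, dy) cursor motion in pixels for one sensor sample."""
        a = self.smoothing
        self._yaw_rate = a * yaw_dps + (1 - a) * self._yaw_rate
        self._pitch_rate = a * pitch_dps + (1 - a) * self._pitch_rate
        dx = self._yaw_rate * dt_s * self.gain_px_per_deg
        dy = -self._pitch_rate * dt_s * self.gain_px_per_deg
        return dx, dy

mapper = CursorMapper()
# A slow 10 deg/s sweep with a jittery tremor component is smoothed away:
for jitter in (3.0, -3.0, 3.0, -3.0):
    print(mapper.update(10.0 + jitter, 0.0, dt_s=0.01))
```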

 

Myo gesture control wearable

The Myo armband from Thalmic Labs is a gesture control wearable worn on the forearm that uses a series of motion and muscle sensors to track movement in a sophisticated way. The Johns Hopkins Applied Physics Laboratory has used the Myo to let a person control a prosthetic forearm: just by thinking about moving his arm, his arm now moves. Opening his hand, shaking hands, making a fist, picking up objects and rotating his arm are all possible as well.

 

Researchers from Arizona State University have found a way to use the Myo controller to translate sign language. The system, known as Sceptre, uses a pair of the wearables to match gestures and signs against a database and then display them as text on a screen. The team has managed to successfully recognise a series of words and phrases, including ‘headache’, ‘upset stomach’ and ‘all morning’.
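As a simplified, hypothetical illustration of this kind of template matching, the sketch below compares a feature vector summarising EMG/IMU readings against a small database of signs and returns the closest label. The feature values, the sign templates and the distance threshold are invented for the example and do not reflect Sceptre's actual features.

```python
# Simplified, hypothetical illustration of template matching for sign
# recognition: a feature vector summarising EMG/IMU readings is compared
# against a small database and the nearest template's label is displayed.
# The feature values and templates below are invented for the example.

import numpy as np

SIGN_DATABASE = {
    "headache":      np.array([0.9, 0.1, 0.4, 0.7]),
    "upset stomach": np.array([0.2, 0.8, 0.5, 0.1]),
    "all morning":   np.array([0.6, 0.6, 0.1, 0.9]),
}

def recognise(features: np.ndarray, max_distance: float = 0.5) -> str:
    """Return the label of the nearest template, or a fallback if none is close."""
    label, best = min(
        ((name, np.linalg.norm(features - template))
         for name, template in SIGN_DATABASE.items()),
        key=lambda item: item[1],
    )
    return label if best <= max_distance else "<unrecognised>"

print(recognise(np.array([0.85, 0.15, 0.45, 0.65])))   # -> "headache"
```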

Google’s Project Jacquard to bring Touch and Gesture Technology to Clothing

Google’s Advanced Technology and Projects (ATAP) group unveiled Project Jacquard and named the Levi’s® brand its first official partner. Project Jacquard uses conductive yarn to create fabric panels that can be used to interact with a device. To create the yarn, conductive metal alloys are braided together with fabric fibers to make a product that is strong but still feels like ordinary yarn.

 

The goal of Google’s Project Jacquard is to confront the historical limitations of wearable technologies by decoupling the touch interface from the digital device. Jacquard makes garments interactive — simple gestures like tapping or swiping send a wireless signal to the wearer’s mobile device and activate functionality, such as silencing phone calls or sending a text message.

 

Google has announced an update for its Project Jacquard smart jacket that will sound an alarm if you’re in danger of leaving your phone (or jacket) behind. Google added a ‘find your phone’ feature last year, which works a little like a key tracker, letting you use a gesture on the jacket to start your phone ringing at full volume.

 

Market Growth

The global gesture recognition market is likely to reach USD 30.6 billion by 2025, posting a CAGR of 22.2% from 2018 to 2025, according to a report by Grand View Research, Inc. Increasing digitization across industries is benefiting market growth, and ease of adoption, thanks to low technical complexity for end users, is accelerating implementation in the consumer electronics industry; several other industries have also started using the technology. The touchless gesture recognition market, in particular, is driven by factors such as rising hygiene consciousness, government measures for water conservation, low maintenance cost, and the booming hospitality and tourism industry.
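As a quick arithmetic check of those figures, compounding backwards over the seven years from 2018 to 2025 at 22.2% implies a 2018 base of roughly USD 7.5 billion; the short sketch below performs this derived calculation, which is not a figure quoted by the report itself.

```python
# Derived sanity check, not a figure from the report: assuming seven years of
# growth at the stated 22.2% CAGR, back out the implied 2018 market size.

value_2025_bn = 30.6
cagr = 0.222
years = 2025 - 2018

implied_2018_bn = value_2025_bn / (1 + cagr) ** years
print(f"Implied 2018 market size: USD {implied_2018_bn:.1f} billion")  # ~7.5
```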

 

Surging use of consumer electronics and the Internet of Things, along with an increasing need for comfort and convenience in product usage, is boosting the growth of the gesture recognition market. Technological advancements and ease of use are expected to help the market gain momentum over the coming years. Increasing awareness of regulations and driver safety is bolstering demand for gesture recognition systems in the automobile industry, and spiraling customer demand for application-based technologies is further stimulating market growth.

The market is volatile and highly competitive, and is therefore witnessing a high number of mergers and acquisitions; for instance, Intel Corp. acquired Omek Interactive Ltd. Some of the key players in the market are Apple, Intel, Microsoft and Google. Alongside these global giants, local and regional players are also showing tremendous growth and attracting big investors, taking the competition to a whole different level.

 

References and Resources also include:

http://source.colostate.edu/beyond-siri-researchers-are-bridging-human-computer-interaction/

http://spectrum.ieee.org/view-from-the-valley/at-work/start-ups/startups-take-gesture-control-beyond-games-to-robots-and-more

http://www.gizmag.com/gest-gesture-controller-glove/40174/

https://www.wareable.com/wearable-tech/myo-armband-uses-gaming

http://www.embedded-vision.com/platinum-members/texas-instruments/embedded-vision-training/documents/pages/gesture-recognition-enab

http://roboticsandautomationnews.com/2016/05/25/infineon-develops-breakthrough-radar-technology-for-cars-and-devices/4734/

http://www.rh.gatech.edu/news/562011/researchers-develop-low-power-always-camera-gesture-recognition

https://medium.com/iotforall/how-gesture-control-will-transform-our-devices-32d4527a6d25

https://www.designnews.com/materials-assembly/rfid-track-body-movements-novel-smart-technologies/114759887060017?ADTRK=UBM&elq_mid=7185&elq_cid=1082976

https://www.grandviewresearch.com/press-release/global-gesture-recognition-market

 
