
Advances in AI Assistants Enabling the US Army to Develop Cognitive Agents That Help Soldiers Deal with Information Overload

Virtual assistants are the cutting edge of end-user interaction, thanks to an endless set of capabilities across multiple services. Today, we can ask virtual assistants like Amazon Alexa, Apple’s Siri, or Google Now, in natural language, to perform simple tasks such as “What’s the weather?” or “Remind me to take my pills in the morning.”

 

They were supposed to have simplified our lives, but they’ve barely made a dent. They recognize only a narrow range of directives and are easily tripped up by deviations. The next evolution of natural language interaction with virtual assistants is in the form of task automation such as “turn on the air conditioner whenever the temperature rises above 30 degrees Celsius”, or “if there is motion on the security camera after 10pm, call Bob”.
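
As a rough illustration of how such a trigger-action rule works under the hood, the Python sketch below polls a temperature reading and fires an action when a threshold is crossed. The sensor and air-conditioner functions are stand-ins invented for this example, not any vendor’s actual API.

```python
import random
import time

# Hypothetical device interfaces; a real assistant would call vendor APIs here.
def read_temperature_celsius():
    return random.uniform(20.0, 35.0)   # stand-in for a smart thermostat reading

def turn_on_air_conditioner():
    print("AC turned on")               # stand-in for a smart-plug command

# Trigger-action rule: "turn on the air conditioner whenever the temperature rises above 30 degrees Celsius"
def run_rule(cycles=5, poll_seconds=1):
    for _ in range(cycles):
        if read_temperature_celsius() > 30.0:
            turn_on_air_conditioner()
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_rule()
```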

 

But some recent advances are about to expand your digital assistant’s repertoire.

 

The military is also developing AI assistants. The battlefield of the future will be complex, with mountains of data moving rapidly between commanders, operations centers and the joint warfighter. In this multi-faceted environment, Army researchers and their partners are seeking solutions. Drones and sensors are steadily getting better, smaller, cheaper and more numerous, and there is more data by the day. “Humans simply cannot process the amount of information that is potentially available,” said Dr. Jonathan Touryan, an Army neuroscientist. “Yet, humans remain unmatched in their ability to adapt to complex and dynamic situations, such as a battlefield environment.”

 

A decrease in cognitive performance can have a great impact, and armies around the world are recognizing the importance of maximizing Soldiers’ cognitive effectiveness. The military therefore plans to employ AI to aid soldiers. The idea is for the AI, or “intelligent agent” as the Army calls it, to process raw information, leaving the human soldier to do what they are best at: making decisions, especially creative ones.

 

“In theory, intelligent agents will have parallel computational power that is much greater than that of humans,” Touryan, of the Army’s Human Research and Engineering Directorate in Maryland, said in an Army release. “In developing human-agent integration principles, we hope to accentuate the strengths of both while mitigating individual weaknesses.”

 

Recent Advances in AI Assistants

In June 2018, researchers at OpenAI developed a technique that trains an AI on unlabeled text to avoid the expense and time of categorizing and tagging all the data manually. A few months later, a team at Google unveiled a system called BERT that learned how to predict missing words by studying millions of sentences. In a multiple-choice test, it did as well as humans at filling in gaps.
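
For readers who want to see masked-word prediction in action, the snippet below uses the open-source Hugging Face transformers library (our choice for illustration, not something used in the original research) to have a pretrained BERT model fill in a blanked-out word.

```python
# pip install transformers torch
from transformers import pipeline

# Load a pretrained BERT model wrapped in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the most likely tokens for the [MASK] position.
for prediction in unmasker("The weather today is [MASK] and sunny."):
    print(prediction["token_str"], round(prediction["score"], 3))
```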

 

These improvements, coupled with better speech synthesis, are letting us move from giving AI assistants simple commands to having conversations with them. They’ll be able to deal with daily minutiae like taking meeting notes, finding information, or shopping online. MIT Technology Review selected smooth-talking AI assistants as one of its 10 Breakthrough Technologies for 2019.

 

Google Duplex is the advanced AI technology that allows the Google Assistant to pick up your calls and screen for spammers and telemarketers. It can also place extremely realistic-sounding calls on your behalf to schedule restaurant reservations or salon appointments.

 

But while Google slowly rolls out the feature in a limited public launch, Alibaba’s own voice assistant has already been clocking overtime. On December 2 at the 2018 Neural Information Processing Systems conference, one of the largest annual gatherings for AI research, Alibaba demoed the AI customer service agent for its logistics company Cainiao. Jin Rong, the dean of Alibaba’s Machine Intelligence and Technology Lab, said the agent is already servicing millions of customer requests a day.

 

But while AI programs have gotten better at figuring out what you want, they still cannot truly understand a sentence. Their lines are scripted or generated statistically, reflecting how hard it is to imbue machines with genuine language understanding. New techniques that capture semantic relationships between words are making machines better at understanding natural language.
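
A toy sketch of what “capturing semantic relationships between words” means: each word is mapped to a vector, and related words end up close together. The three-dimensional vectors below are made up for illustration; real embeddings are learned from large corpora and have hundreds of dimensions.

```python
import numpy as np

# Made-up 3-dimensional word vectors; real embeddings (word2vec, GloVe, BERT)
# are learned from large corpora and have hundreds of dimensions.
vectors = {
    "soldier":    np.array([0.90, 0.10, 0.30]),
    "warfighter": np.array([0.85, 0.15, 0.35]),
    "banana":     np.array([0.10, 0.90, 0.20]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["soldier"], vectors["warfighter"]))  # high: related words
print(cosine_similarity(vectors["soldier"], vectors["banana"]))      # low: unrelated words
```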

 

Almond by Stanford University

Today’s virtual assistant platforms, such as Amazon’s Alexa and Google Assistant, may be open to third parties, but their proprietary nature means nothing created on one can be accessed by the others. As a result, they connect their users to a linguistic web, not the linguistic web. And the landscape grows more fractured by the day.

 

Almond is an open, crowdsourced and programmable virtual assistant built as part of the Open Mobile Platform project at Stanford. Stanford has since founded the Stanford Open Virtual Assistant Lab, or OVAL, describing it as “a world-wide open-source initiative intended to confront what we believe are the three major challenges facing the future of this technology: avoiding fragmentation of the linguistic web, democratizing the power of natural language interfaces, and putting privacy back in the hands of consumers.”

 

Central to Almond is Thingpedia, an open repository of different services, including Internet of Things (IoT) devices, open Web APIs and social networks, along with their natural language interfaces. Thingpedia, in effect an encyclopedia for the IoT, contains information about each device along with a set of functions that correspond to each device API.

 

Each Thingpedia entry for a function also contains a natural language annotation that captures how humans refer to and interact with the device. Through crowdsourcing, Thingpedia has grown to 50 devices and 187 functions. The 50 devices span a variety of domains: media (newspapers, web comics), social networks (Twitter, Facebook), home automation (light bulbs, thermostats), communication (email, calendar), and more.
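
To make the idea concrete, the sketch below shows what a Thingpedia-style entry might hold: a device, its functions, and the natural-language phrases people use for them. The structure and field names are a simplified assumption for illustration, not the actual Thingpedia schema.

```python
# Hypothetical, simplified sketch of a Thingpedia-style entry; the real
# repository defines its own schema and annotation format.
security_camera_entry = {
    "device": "security-camera",
    "functions": {
        "current_event": {
            "type": "query",                       # can be polled or monitored
            "args": {"has_motion": "bool"},
            "annotations": [                       # how humans refer to the function
                "motion on my security camera",
                "when my camera detects movement",
            ],
        },
    },
}

print(security_camera_entry["functions"]["current_event"]["annotations"][0])
```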

 

Thingpedia means virtual assistants of all kinds can connect their users to the same shared world. It encourages competition by sparing upstart virtual assistant developers the burden of reinventing the wheel (or rather, tens of thousands of wheels) simply to catch up with incumbents. It lets consumers comparison shop without worrying about whether a particular function will be accessible to the assistant that suits them best.

 

Built on top of Thingpedia is the execution system, called Thingsystem, which takes user programs in the form of Trigger-Action programs (also known as If-This-Then-That programs) and maps them to the low-level device implementations in the repository. The intermediate Trigger-Action programs are expressed in a high-level domain-specific language called ThingTalk. ThingTalk can connect devices together by specifying the compositional logic while abstracting away the device implementation and the communication.
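
The sketch below shows, in an assumed Python representation rather than actual ThingTalk syntax, how a Trigger-Action program such as “if there is motion on the security camera after 10pm, call Bob” might be captured and dispatched against device functions registered in a Thingpedia-like repository.

```python
from datetime import datetime

# Assumed, simplified Trigger-Action representation; real Almond programs are
# written in ThingTalk and compiled against Thingpedia function signatures.
program = {
    "trigger": {"device": "security-camera", "function": "current_event",
                "filter": lambda event, now: event["has_motion"] and now.hour >= 22},
    "action":  {"device": "phone", "function": "call", "args": {"contact": "Bob"}},
}

def dispatch(program, event, now=None):
    now = now or datetime.now()
    if program["trigger"]["filter"](event, now):
        # A real execution system would invoke the device API registered in the repository.
        print(f"Executing {program['action']['device']}.{program['action']['function']}"
              f" with {program['action']['args']}")

dispatch(program, {"has_motion": True}, datetime(2019, 1, 1, 23, 15))
```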

 

Today’s virtual assistants are based on neural networks capable of transcribing the human voice and intelligently interpreting the results. The accuracy of such networks requires a significant amount of training data, typically acquired through manual annotation of real data by a large workforce. OVAL is therefore building LUInet, an open-source neural network that provides an alternative to the capabilities at the heart of today’s commercial assistants. It has also developed a tool called Genie that helps domain experts create natural language interfaces for their products at a greatly reduced cost and without in-house machine learning expertise. By empowering independent developers and by collecting their contributions from different domains, LUInet is positioned to surpass even the most advanced proprietary model developed by a single company.

 

Finally, the proprietary nature of today’s virtual assistants means their creators have total control over the data passing through them. That includes personal information, preferences and behavior, as well as hours upon hours of voice recordings. OVAL is changing that with Almond, a complete virtual assistant with a unique focus on privacy and transparency. Not only can it access every function in Thingpedia and interpret complex commands thanks to LUInet, but it was built from the ground up with privacy-preserving measures that let you explicitly control if, when and how data is shared. For example, a user can tell her Almond assistant, running on her own device, that “my father can see motion on my security device, but only if I am not home”. No third party sees any of the shared data.
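
A minimal sketch of how such a user-defined sharing rule could be checked locally, assuming a simple owner-at-home flag; this illustrates the access-control idea only and is not Almond’s actual mechanism.

```python
# Illustrative access-control check for the rule:
# "my father can see motion on my security device, but only if I am not home".
def may_share_motion_event(requester, owner_is_home):
    return requester == "father" and not owner_is_home

print(may_share_motion_event("father", owner_is_home=False))    # True: event is shared
print(may_share_motion_event("father", owner_is_home=True))     # False: nothing leaves the device
print(may_share_motion_event("neighbor", owner_is_home=False))  # False: unauthorized requester
```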

U.S. Army Research Laboratory Advancing Cognitive Assistants

Dr. James Schaffer, U.S. Army Research Laboratory scientist recently won a best paper award at the Association for Computing Machinery’s 26th Conference on User Modeling, Adaptation and Personalization for discovering that most people cannot distinguish between liking a user interface and making good choices. “User experience and choice satisfaction can easily be conflated when good system design creates positive feelings about an experience, artificially leading participants to think good decisions have been made,” Schaffer said. “This can lead to false positive situations, where researchers may assume good decisions are being made due to a system’s appearance or ease of use.”

 

“The current state of the art in recommender systems likely would have led the U.S. Army’s modernization in the wrong direction, and the results from the paper are a warning against any type of subjective evaluation being done at, for instance, military exercises,” Schaffer said. Schaffer’s research helps form the basis for evaluation strategies that can help the Army distinguish between technology that boosts performance and technology that simply has a wow factor. In fact, the research indicates we should often expect the opposite of a wow factor: frustration on the part of decision makers likely means something is actually being accomplished.

 

One recent experiment involved two people—a driver and passenger—travelling together along a busy highway. The passenger, acting as a sort of surrogate AI, talked to the driver in order to test how well a human being can remember and respond to new information while under stress.

 

“What we’re interested in doing is understanding whether we can look at the synchrony between the physiologies—the brain response or the heart rate response—between the driver and passenger, and use that synchrony to predict whether the driver is going to remember the information the passenger is telling them after the drive is over,” Dr. Jean Vettel, an Army neuroscientist, said in an official release. The resulting data could help the Army determine when and how an AI should relay information to a soldier in combat. This man-machine division of labor could become even more important in coming years.
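
As a rough sketch of what measuring “synchrony between the physiologies” can mean computationally, the snippet below correlates two heart-rate traces. The signals are synthetic and the decision threshold is arbitrary, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heart-rate traces (beats per minute) for driver and passenger.
t = np.linspace(0, 60, 600)
driver = 70 + 5 * np.sin(0.2 * t) + rng.normal(0, 1, t.size)
passenger = 70 + 5 * np.sin(0.2 * t + 0.3) + rng.normal(0, 1, t.size)

# Pearson correlation as a simple synchrony measure between the two physiologies.
synchrony = np.corrcoef(driver, passenger)[0, 1]
print(f"synchrony = {synchrony:.2f}")

# Hypothetical use: higher synchrony predicts better recall of what was said.
likely_to_remember = synchrony > 0.5   # arbitrary illustrative threshold
print(likely_to_remember)
```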

 

Traditional intelligence analysis also suffers from several systemic problems: information overload; intelligence sharing difficulties; lack of time, methods, and resources for analytic collaboration with area experts; limited capability to consider multiple hypotheses; socio-cultural and socio-psychological bias informing the analytic process; lack of time and resources for critical analysis and after-action review; “group-think” (a lack of diverse opinions informing the process) and “paralysis by analysis”; loss of analytic expertise due to downsizing and attrition; lack of time and resources needed to train new analysts; and limited availability and use of tools to improve the analytic process, according to Lowenthal (1999) and the National Commission on Terrorist Attacks Upon the United States (2004).

 

The Learning Agents Center of George Mason University and the Center for Strategic Leadership of the US Army War College have carried out joint research aimed at developing a new type of analytic tool that helps alleviate several of the above problems. This tool, called Disciple-LTA (learner, tutor, and assistant), is a personal cognitive assistant that can rapidly acquire expertise in intelligence analysis directly from intelligence analysts, can train new analysts, and can help analysts find solutions to complex problems through mixed-initiative reasoning. It makes possible the synergistic integration of a human’s experience and creativity with an automated agent’s knowledge and speed, and facilitates collaboration with complementary experts and their agents. This new type of intelligent agent, capable of learning, tutoring and decision-making assistance, is intended to act as a career-long aid to intelligence analysts. It will be used during classroom learning, for skills maintenance and growth after classroom learning, and for decision support in the field.

 

Advanced Information Processing (AIP) techniques employed include knowledge-based (rule- and case-based) systems, planning (script-based and plan-goal graphs), fuzzy logic, and (distributed) blackboard systems.

 

Cognition and Neuroergonomics Collaborative Technology Alliance

The U.S. Army Research Laboratory formed an alliance in 2010 with universities and industry to enable “revolutionary advances” in Soldier systems technology by merging neuroscience, psychology, engineering and human factors to deliver those advances. For its main human-AI integration effort, the Army teamed up with private industry and universities in California, Texas, Florida, and New York. The resulting Cognition and Neuroergonomics Collaborative Technology Alliance began in 2010 and is scheduled to continue in its current form until at least 2020.

 

The United States Army Research Laboratory (ARL) issued a Program Announcement (PA) soliciting offers to establish a new Collaborative Technology Alliance (CTA) in the area of Cognition and Neuroergonomics (CaN CTA).

 

The Army envisions that the Alliance will bring together government, industrial and academic institutions to address research and development to enable optimal Soldier-system performance during tactical operations. The objective of the Alliance is the development and demonstration of fundamental translational principles, that is, principles governing the application of neuroscience-based research and theory to complex operational settings.

 

The Alliance is expected to perform enabling research and to transition technology to enhance Soldier-system performance in complex operational settings by optimizing information transfer between the system and the Soldier, identifying mental processes and individual differences that impact mission-relevant decision making, and developing technologies for individualized analyses of neurally-based processing in operational environments.

 

To achieve this objective the Alliance is expected to implement computational modeling and to execute and link neuroscience-based research from multiple levels to produce advances in fundamental science and technology, demonstrate and transition technology, and develop research demonstrators for Warfighter experimentation.

 

AI Can Learn From ARL’s Brain Interface

A joint effort between the U.S. Army Research Laboratory (ARL) and DCS Corporation recently won the Neurally-Augmented Image Labelling Strategies, or NAILS, challenge at an international machine learning research competition in Tokyo, Japan. The goal of NAILS was to incorporate brain activity into machine learning methods that can detect whether an image a person was seeing was relevant to the task at hand, as part of an effort to improve the ability of humans and machines to manage information by working together.

Teams participating in the NAILS challenge developed machine learning methods to detect – through brain activity – whether an image that a person was seeing was a task-relevant image or not. In performing the research, the researchers calibrated an in-house tool called EEGNet, which is a deep convolutional network capable of learning robust representations of specific brain responses using relatively sparse training sets. Using this approach, they trained a unique instantiation of EEGNet for each subject and subsequently obtained the highest classification performance, averaged across all subjects, of the participating teams.
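
For orientation, the sketch below builds a compact convolutional classifier for EEG epochs in PyTorch. It follows the general spirit of EEGNet (temporal filters, then spatial filters across electrodes, then pooling and a small classifier), but the layer sizes and details here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Compact CNN for EEG epochs shaped (batch, 1, channels, time_samples).
# Inspired by the general structure of EEGNet; hyperparameters are illustrative.
class CompactEEGNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=128, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 32), padding=(0, 16), bias=False),    # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),  # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():
            n_features = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CompactEEGNet()
dummy_epoch = torch.randn(4, 1, 64, 128)   # 4 epochs, 64 channels, 128 samples
print(model(dummy_epoch).shape)            # torch.Size([4, 2]): one score per class
```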

 

“EEGNet allows researchers to train models for different neural responses using examples of those responses collected under a wide variety of conditions and from multiple individuals,” Gordon said. “In this way, EEGNet provides both a ‘common framework’ for analyzing disparate data sets as well as a tool for extrapolating results from simplified to more complex domains.”

 

The team participated in conference discussions focused on the state of brain computer interface technology and how it can be leveraged for information retrieval applications; future directions for the NAILS task; assessing models with IR oriented evaluation metrics; and encouraging the development of general BCI algorithms that are not calibrated per-subject or task and hold greater potential for measuring human state in complex, real-world environments.

 

“This work is part of a larger research program at ARL that focuses on understanding the principles that govern the application of neuroscience-based research to complex operational settings,” Lawhern said. “By competing in this competition we were able to showcase our expertise in this area to the broader scientific community. Ultimately, we are interested in using neuroscience-based approaches to develop human-computer interaction technologies that can adapt to the state of the user.”

 

The Army hopes that technology can solve the info-overload problem that technology has created, and free up people to do what people do best: think creatively.


References and Resources also include:

http://apgnews.com/inside-the-innovation/arl-wins-machine-learning-competition/

https://govtribe.com/project/a-cognition-and-neuroergonomics-collaborative-technology-alliance-can-cta-extend-closing-date/activity

https://www.eurekalert.org/pub_releases/2018-09/uarl-dtr090718.php

https://hai.stanford.edu/news/stanford-open-virtual-assistant-lab-oval
