
DARPA’s Communicating with Computers (CwC) developing technology to facilitate human-like communication between warfighters and their unmanned vehicles

The lifelong human imperative to communicate is so strong that people talk not only to other people but also to their pets, their plants and their computers. Straightforward as that may sound, communication involves several coordinated processes. The speaker puts ideas into words, the listener extracts ideas from words and, importantly, both rely on context to narrow down the possible meanings of ambiguous language. All of these processes are challenging for machines.

“Human communication feels so natural that we don’t notice how much mental work it requires,” said Paul Cohen, DARPA program manager. “But try to communicate while you’re doing something else – the high accident rate among people who text while driving says it all – and you’ll quickly realize how demanding it is.”

DARPA launched the Communicating with Computers (CwC) program in 2015 with the aim of turning computers into good communicators. The program seeks to enable symmetric communication between people and computers, in which machines are not merely receivers of instructions but collaborators, able to harness a full range of natural modes including language, gesture and facial or other expressions.

CwC is a basic research effort to explore how to facilitate faster, more seamless and intuitive communication between people and computers, including how computers endowed with visual or other sensory systems might learn to take better advantage of the myriad ways in which humans use contextual knowledge (gestures, facial expressions or other syntactical clues, for example) to enrich communication.


If successful, CwC could advance a number of application areas, most notably robotics and semi-autonomous systems. For example, CwC could allow operators to describe missions and give direction, before and during operations, using natural language.

Conversely, when CwC-enabled robots or semi-autonomous systems encounter unexpected situations that require additional inputs from operators they would be capable of requesting assistance in natural language. Such natural language-based interactions would be far more efficient and flexible than programming or the rigidly preconfigured interfaces currently in use.

With a goal of revolutionizing everyday interactions between humans and computers, Colorado State University researchers are developing new technologies for making computers recognize not just traditional commands but also non-verbal ones, including gestures, body language and facial expressions, according to CSU. The CSU project, “Communication Through Gestures, Expression and Shared Perception,” is led by computer science professor Bruce Draper and funded by a $2.1 million grant from the U.S. Defense Advanced Research Projects Agency (DARPA).

CwC Program

The CwC program is based on four premises:
1) Complex ideas are composed from elementary ones;
2) Most elementary ideas are about the physical world;
3) Language specifies how to compose complex ideas; but
4) Context is often needed to boost the specificity of the complex ideas that can be composed given language.

To illustrate these premises, DARPA offers the phrase “add one more.” The phrase is not meaningless; rather, it has too many possible meanings. Whatever the listener is supposed to “add” might be physical – a scoop of ice cream – or nonphysical – one can add an opinion to a poll or a term to a sum. But the ideas of collections, things and adding a thing to a collection can be considered elementary physical ideas. The phrase “one more” suggests a copy or another instance of a thing, while “add one more” suggests adding a copy or another instance of something to the collection. The fact that this thing is not named (contra “add one more apple”) suggests that the speaker thinks the listener knows what is to be added.
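To make the premises concrete, here is a minimal Python sketch of how a listener might compose the elementary ideas behind “add one more” and use context to recover the unnamed referent. The class and function names are hypothetical illustrations, not part of any CwC system.

```python
# Hypothetical sketch of the CwC premises: complex ideas are composed from
# elementary ones, and context supplies the specificity language leaves out.
from dataclasses import dataclass, field

@dataclass
class Context:
    """What speaker and listener can both see: a collection being built."""
    collection: list = field(default_factory=list)

def resolve_add_one_more(context: Context) -> str:
    """Compose the elementary ideas COLLECTION, THING and ADD for 'add one more'."""
    if not context.collection:
        # The utterance is not meaningless -- it has too many possible
        # meanings -- so without context the listener must ask.
        return "clarify: add one more of what?"
    thing = context.collection[-1]     # the salient, unnamed referent
    context.collection.append(thing)   # elementary idea: ADD(THING, COLLECTION)
    return f"added another {thing}"

ctx = Context(collection=["red block", "red block"])
print(resolve_add_one_more(ctx))        # -> added another red block
print(resolve_add_one_more(Context()))  # -> clarify: add one more of what?
```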

Specific technologies that CwC will develop include: a corpus or library of elementary ideas; algorithms for assembling complex ideas from elementary ones given language and context; and algorithms for figuring out what to do or say during communication.

The CwC program is organized around three use cases of increasing difficulty:

  • Blocks World: In this use case, humans and machines must communicate to build structures with toy blocks. The human or the machine will be given an assignment – a particular structure to build – and will have to communicate with the other to get the job done (a minimal sketch of such an exchange follows this list).
  • Biocuration: This use case involves communication about the biological sciences literature between human biocurators, who read the literature and compile machine-readable records of the contents of papers, and machine biocurators such as those under development in DARPA’s Big Mechanism program.
  • Collaborative Composition: This use case will explore the process by which humans and machines might collaborate toward the assembly of a creative product – in this case, contributing sentences to create stories.
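The Blocks World case can be pictured as a simple exchange loop: the machine compares the structure built so far against its assignment and says whatever moves the build forward. The sketch below is a toy illustration under that assumption; the representation and dialogue policy are invented here, not DARPA’s.

```python
# A toy Blocks World exchange, assuming a tower represented as a bottom-up
# list of block names. The dialogue policy is hypothetical.

def next_utterance(goal: list, built: list) -> str:
    """Decide what the machine should say to move the build forward."""
    for have, want in zip(built, goal):
        if have != want:
            return f"That {have} should be a {want} -- please swap it."
    if len(built) < len(goal):
        return f"Next, place a {goal[len(built)]} on top."
    return "The structure matches the assignment. We're done."

goal = ["red block", "green block", "blue block"]
print(next_utterance(goal, ["red block"]))               # asks for the green block
print(next_utterance(goal, ["red block", "blue block"])) # flags the mismatch
print(next_utterance(goal, goal))                        # confirms completion
```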

To further the goal of developing systems that communicate more like people do, the CwC program will set tasks in which humans and machines must communicate to do a job. One task will involve collaborative storytelling, in which a human and a machine will take turns contributing sentences until they have written a short story. “This is a parlor game for humans, but a tremendous challenge for computers,” said Cohen. “To do it well, the machine must keep track of the ideas in the story, then generate an idea about how to extend the story and express this idea in language.”
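Cohen’s description maps onto a simple turn-taking skeleton: maintain the story so far, and on each machine turn generate a sentence that extends it. The sketch below is a bare-bones illustration; generate_sentence() is a hypothetical stand-in for the hard part (tracking and extending the story’s ideas), which is precisely what CwC set out to address.

```python
# Bare-bones turn-taking loop for collaborative storytelling.

def generate_sentence(story: list) -> str:
    # Placeholder "idea tracking": reuse the most recent entity. A real
    # system would model the story's characters, events and plot.
    if not story:
        return "Once upon a time, a rover woke up alone."
    topic = story[-1].rstrip(".").split()[-1]
    return f"The {topic} turned out to matter more than anyone expected."

# Simulated human contributions for this runnable example.
human_turns = iter([
    "Once upon a time, a rover landed on a red plain.",
    "It found a door half-buried in the sand.",
])

story = []
for turn in range(4):
    # Human and machine alternate, each contributing one sentence.
    sentence = next(human_turns) if turn % 2 == 0 else generate_sentence(story)
    story.append(sentence)

print("\n".join(story))
```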

Another CwC task will be to build computer-based models of the complicated molecular processes that cause cells to become cancerous. Computers are starting to do this already in DARPA’s Big Mechanism program, but they don’t work collaboratively with human biologists—a shortcoming, because while machines read more quickly and widely than humans, they do not read as deeply, and while machines can generate vast numbers of molecular models, humans are better judges of the biological plausibility of those proposed models.

Of course, storytelling and cancer research are just initial challenges to help advance the technology to a point where humans and machines can take best advantage of their complementary capabilities. In the intelligence-gathering domain, for example, machines’ superior ability to collect and store information and humans’ superior ability to develop interpretive narratives from such information would find greater synergy if the people and the machines could communicate better.

“Because humans and machines have different abilities, collaborations between them might be very productive. But today we view computers as tools to be activated by a few clicks or keywords, in large part because we are separated by a language barrier,” Cohen said. “The goal of CwC is to bridge that barrier, and in the process encourage the development of new problem-solving technologies.”


“Ultimately, advances from this program could allow warfighters, analysts, logistics personnel and others in the national security community to take fuller advantage of the enormous opportunities for human-machine collaboration that are emerging today,” said the DARPA director.

DARPA $2.1M grant to CSU researchers to develop gesture technology

A team of Colorado State University researchers recently received a $2.1 million grant from the Defense Advanced Research Projects Agency to develop technology that would enable computers to recognize non-verbal commands such as gestures, body language and facial expressions.

“Current human-computer interfaces are still severely limited,” said CSU professor of computer science Bruce Draper. “First, they provide essentially one-way communication: Users tell the computer what to do. This was fine when computers were crude tools, but more and more, computers are becoming our partners and assistants in complex tasks. Communication with computers needs to become a two-way dialogue.”


Packets of gesture info

The team has proposed creating a library of what are called Elementary Composable Ideas (ECIs). Like little packets of information recognizable to computers, each ECI contains information about a gesture or facial expression, derived from human users, as well as a syntactical element that constrains how the information can be read.
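As a rough illustration of what such a packet might look like, the sketch below models an ECI as a small record pairing recognition data with a syntactic constraint. The field names and structure are guesses for illustration only, not CSU’s actual design.

```python
# A guessed-at shape for an Elementary Composable Idea (ECI) record,
# based on the description above; all field names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ECI:
    name: str        # e.g. "stop", "huh?"
    modality: str    # "gesture", "facial expression", ...
    exemplars: tuple # recorded human demonstrations a recognizer trains on
    syntax: str      # constraint on how the idea composes in a dialogue

stop = ECI(
    name="stop",
    modality="gesture",
    exemplars=("palm-out hand, arm extended",),  # stand-in for Kinect data
    syntax="command: halts the current action",
)
print(f"{stop.name} ({stop.modality}) -> {stop.syntax}")
```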

To achieve this, the researchers have set up a Microsoft Kinect interface. A human subject sits down at a table with blocks, pictures and other stimuli. The researchers try to communicate with and record the person’s natural gestures for concepts like “stop” or “huh?”

“We don’t want to say what gestures you should use,” Draper explained. “We want people to come in and tell us what gestures are natural. Then, we take those gestures and say, ‘OK, if that’s a natural gesture, how do we recognize it in real time, and what are its semantics? What roles does it play in the conversation? When do you use it? When do you not use it?’”
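One generic way to recognize an elicited gesture in real time is to compare an incoming pose feature vector against stored exemplars with a nearest-neighbor rule, falling back to “unknown” (a cue to ask the user) when nothing is close. This baseline sketch is an assumption for illustration, not the CSU team’s method.

```python
# Nearest-neighbor gesture recognition over pose feature vectors.
import math

def distance(a: list, b: list) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(pose: list, exemplars: dict, threshold: float = 1.0) -> str:
    """Return the label of the closest stored exemplar, or 'unknown'."""
    best_label, best_dist = "unknown", threshold
    for label, poses in exemplars.items():
        for ref in poses:
            d = distance(pose, ref)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

# Toy 3-number "features" standing in for Kinect joint measurements.
exemplars = {"stop": [[1.0, 0.2, 0.0]], "huh?": [[0.1, 0.9, 0.5]]}
print(classify([0.95, 0.25, 0.05], exemplars))  # -> stop
print(classify([5.0, 5.0, 5.0], exemplars))     # -> unknown
```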

Their goal: making computers smart enough to reliably recognize non-verbal cues from humans in the most natural, intuitive way possible. According to the project proposal, the work could someday allow people to communicate more easily with computers in noisy settings, or when a person is deaf or hard of hearing, or speaks another language.

References and Resources also include:

https://www.darpa.mil/program/communicating-with-computers

https://www.darpa.mil/news-events/2015-02-20
