
Deep learning neural networks (DNN) Transforming Google Search, object recognition, face detection and autonomous military systems

Machine Learning (ML) is a subfield of Artificial Intelligence that attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. ML algorithms allow computers to extract information and infer patterns from recorded data, learning from previous examples to make good predictions about new ones.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. Over the past decade, DNNs have become the state-of-the-art machine learning algorithms for speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data and deep learning and by the exponential increase in chip processing capability, especially general-purpose GPUs (GPGPUs). Big Data is the term used to describe the exponential growth of data now taking place: an estimated 90% of the data in the world today was created in the last two years alone.

 

In March 2016, Google’s computers roundly beat the world-class Go champion Lee Sedol, marking a milestone in artificial intelligence. The winning program, AlphaGo, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what is known as deep learning, a strategy by which neural networks involving many layers of processing are configured in an automated fashion to solve the problem at hand.

 

Early in 2015, as Bloomberg reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine each second. At one point, Google ran a test pitting its search engineers against RankBrain: both were asked to look at various web pages and predict which would rank highest on a Google search results page. RankBrain was right 80 percent of the time; the engineers were right 70 percent of the time. “Increasingly, we’re discovering that if we can learn things rather than writing code, we can scale these things much better,” Google’s John Giannandrea told a room full of reporters at Google headquarters.

 

 

In an article for the World Economic Forum, Marc Benioff, chairman and CEO of Salesforce, explains that the convergence of big data, machine learning and increased computing power will soon make artificial intelligence “ubiquitous”. “AI follows Albert Einstein’s dictum that genius renders simplicity from complexity,” he writes. “So, as the world itself becomes more complex, AI will become the defining technology of the twenty-first century, just as the microprocessor was in the twentieth century.”

 

The US has launched the Third Offset Strategy; as then-Secretary of Defense Chuck Hagel said, “This new initiative is an ambitious department-wide effort to identify and invest in innovative ways to sustain and advance America’s military dominance for the 21st century.” One of its important initiatives is autonomous “deep learning” machines and systems, which the Pentagon wants to use to improve early warning of events. As an example, Deputy Secretary of Defense Bob Work pointed to the influx of “little green men” from Russia into Ukraine as simply a big data problem that could be crunched to predict what was about to happen.

 

China has overtaken the United States to become the world leader in deep learning research, a branch of artificial intelligence (AI) inspired by the human brain, according to White House reports that aim to help prepare the US for the growing role of AI in society. The National Artificial Intelligence Research and Development Strategic Plan lays out the strategy for AI funding and development in the US. It shows that since mid-2013 China has contributed more journal articles on deep learning research, and has had more studies cited by other researchers, than the US. In 2015, for example, Chinese researchers published around 350 such articles, compared with around 260 in the US. The report says the US will need to step up investment: “Current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth.”

Machine learning is one of the most important technical approaches to AI and the basis of many recent advances and commercial applications of AI. Modern machine learning is a statistical process that starts with a body of data and tries to derive a rule or procedure that explains the data or can predict future data. ML has now become a pervasive technology, underlying many modern applications including internet search, fraud detection, gaming, face detection, image tagging, brain mapping, check processing and computer server health monitoring. There is a wide variety of algorithms and processes for implementing ML systems.

 

Deep learning (DL) algorithms allow high-level abstraction from data, which is helpful for automatic feature extraction and for pattern analysis and classification. “Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT’s Department of Electrical Engineering and Computer Science, whose group developed a new type of deep-learning chip that dramatically speeds up the ability of neural networks to process and identify data.

 

Deep learning networks typically use many layers (sometimes more than 100) and often use a large number of units at each layer, enabling them to recognize extremely complex, precise patterns in data. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem. Although they outperform more conventional algorithms on many visual-processing tasks, such networks require much greater computational resources. Deep learning is useful for many applications, such as object recognition, speech and face detection, recognizing commands spoken into a smartphone, and responding to Internet search queries.
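
To make this layer-by-layer flow concrete, the short Python/NumPy sketch below passes data through a stack of layers, each transforming its input and handing the result to the next. The layer widths, random weights and ReLU activation are illustrative assumptions for the sketch, not a description of any particular production network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer widths: 8 inputs, two hidden layers, 3 outputs.
layer_sizes = [8, 16, 16, 3]

# Random weights and biases stand in for parameters a real network would learn.
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass data through each layer in turn; every layer transforms its
    input and hands the result on to the next layer."""
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, x @ W + b)  # linear step, then ReLU nonlinearity
    return x

output = forward(rng.standard_normal(8))
print(output)  # the final layer's activations encode the network's answer
```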

Deep Neural Networks

The year 2012 was a turning point for machine vision: in that year’s annual ImageNet Large-Scale Visual Recognition Challenge, a team from the University of Toronto in Canada entered an algorithm called SuperVision and won. SuperVision used a deep convolutional neural network to achieve an error rate of only 16.4 percent. The ImageNet Large Scale Visual Recognition Challenge provides a set of photographic images and asks for an accurate description of what is depicted in each image. On a popular image recognition challenge that has a 5 percent human error rate according to one error measure, the best AI result improved from a 26 percent error rate in 2011 to 3.5 percent in 2015.

 

Deep learning, while sounding flashy, is really just a term for certain types of neural networks and related algorithms that consume often very raw input data, processing it through many layers of nonlinear transformations to calculate a target output.

 

Various deep learning architectures such as deep neural networks, convolutional deep neural networks, and deep belief networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, and music/audio signal recognition where they have been shown to produce state-of-the-art results on various tasks.

Artificial neural networks (ANNs)

Artificial neural networks (ANNs) are statistical models directly inspired by, and partially modeled on, biological neural networks. They are capable of modeling and processing nonlinear relationships between inputs and outputs in parallel. The related algorithms are part of the broader field of machine learning and can be used in many applications, as discussed above.

 

Artificial neural networks are characterized by adaptive weights along the paths between neurons, which a learning algorithm tunes from observed data in order to improve the model. In addition to the learning algorithm itself, one must choose an appropriate cost function.

 

The cost function is what is used to learn the optimal solution to the problem being solved. Learning involves determining the best values for all of the tunable model parameters, with the adaptive weights on neuron paths being the primary target, along with algorithm tuning parameters such as the learning rate. It is usually done through optimization techniques such as gradient descent or stochastic gradient descent.
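
As a rough illustration of this idea, the sketch below tunes the weights of a single linear neuron by stochastic gradient descent. The mean-squared-error cost, the toy data and the hand-picked learning rate are all assumptions chosen for brevity; real systems use many variations of this recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: targets generated by a "true" weight vector plus noise.
true_w = np.array([2.0, -3.0, 0.5])
X = rng.standard_normal((200, 3))
y = X @ true_w + 0.1 * rng.standard_normal(200)

w = np.zeros(3)          # adaptive weights to be tuned
learning_rate = 0.05     # an algorithm tuning parameter

for epoch in range(20):
    for i in rng.permutation(len(X)):
        error = X[i] @ w - y[i]
        # Gradient of the squared-error cost for one example, used to
        # nudge the weights downhill on the cost surface.
        w -= learning_rate * error * X[i]

print(w)  # should end up close to true_w
```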

 

Architecturally, an artificial neural network is modeled using layers of artificial neurons: computational units that receive input and apply an activation function, along with a threshold, to determine whether messages are passed along. In a simple model, the first layer is the input layer, followed by one hidden layer and lastly an output layer. Each layer can contain one or more neurons.
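
A single such unit can be sketched in a few lines. In the toy input-hidden-output arrangement below, the sigmoid activation, the 0.5 threshold and the hand-set weights are illustrative choices, not the only ones used in practice.

```python
import numpy as np

def neuron(inputs, weights, bias, threshold=0.5):
    """A single computational unit: weighted sum, sigmoid activation,
    and a threshold deciding whether the message is passed along."""
    activation = 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))
    return activation if activation >= threshold else 0.0

x = np.array([0.9, 0.1])                      # input layer: raw features
h1 = neuron(x, np.array([1.5, -1.0]), 0.0)    # hidden layer, unit 1
h2 = neuron(x, np.array([-1.0, 2.0]), 0.1)    # hidden layer, unit 2
out = neuron(np.array([h1, h2]), np.array([1.0, 1.0]), -0.5)  # output layer
print(out)
```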

MIT study finds that Deep Neural Networks can match primate brain in object recognition

A study by MIT neuroscientists has found that one of the latest generations of these so-called “deep neural networks” matches the primate brain at skills such as recognizing objects, a task the brain performs very accurately and quickly.

 

For vision-based neural networks, scientists have been inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

 

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.
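
One common mechanism for casting location information aside is pooling. The toy sketch below, with made-up numbers, shows how a max-pooling step reports that a strong feature is present somewhere in a region while discarding exactly where it was.

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Downsample a 2D activation map by taking the maximum in each
    size x size block: it keeps "is the feature present?" while
    discarding the feature's precise position within the block."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

# The same strong response (9.0) in slightly different positions...
a = np.array([[9., 0., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
b = np.array([[0., 0., 0., 0.],
              [0., 9., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])

# ...pools to the same summary, illustrating location invariance.
print(max_pool(a))
print(max_pool(b))
```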

 

For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex of macaques, as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation (the population of neurons that respond) for every object that the animals looked at.

 

The researchers could then compare these with the representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of a model is determined by whether it groups similar objects into similar clusters within its representation. The best-performing network, one developed by researchers at New York University, classified objects as well as the macaque brain did.
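
The evaluation idea, checking whether a representation groups images of the same object into similar clusters, can be sketched roughly as follows. The random toy features and the nearest-neighbour scoring rule are assumptions made for illustration and are not the study’s actual protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

def clustering_score(features, labels):
    """Score a representation by how often each image's nearest
    neighbour (by cosine similarity) depicts the same object."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)        # ignore self-similarity
    nearest = sim.argmax(axis=1)
    return (labels[nearest] == labels).mean()

# Toy "representations": each image yields an array of numbers, and
# images of the same object are generated near a shared class center.
labels = np.repeat(np.arange(5), 20)      # 5 objects, 20 images each
centers = rng.standard_normal((5, 64))
features = centers[labels] + 0.3 * rng.standard_normal((100, 64))

print(clustering_score(features, labels))  # near 1.0 for a good representation
```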

 

Military and security requirements

In the past, DOD funded projects to put ANNs in M1A1 Abrams tanks as engine-diagnostic tools. Officials also considered using them as automated target-recognition tools aboard the cancelled Comanche helicopter. The Naval Research Laboratory worked on a multisensor fire-recognition system that uses neural networks embedded in video cameras. There are also reports of military researchers attempting to use an ANN to detect tanks amid foliage.

 

A rapidly increasing volume of intelligence, surveillance, and reconnaissance (ISR) information is available to the Department of Defense (DOD) as a result of the increasing numbers, sophistication, and resolution of ISR resources and capabilities. “The amount of video data produced annually by Unmanned Aerial Vehicles (UAVs) alone is in the petabyte range, and growing rapidly. Full exploitation of this information is a major challenge. Human observation and analysis of ISR assets is essential, but the training of humans is both expensive and time-consuming. Human performance also varies due to individuals’ capabilities and training, fatigue, boredom, and human attentional capacity. One response to this situation is to employ machines …” said DARPA.

 

“Deeply layered methods should create richer representations that may include furry, four-legged mammals at higher levels, resulting in a head start for learning cows and thereby requiring much less labelled data when compared to a shallow method. A Deep Learning system exposed to unlabelled natural images will automatically create high-level concepts of four-legged mammals on its own, even without labels.”

 

Intelligence agencies are also interested in searching videos based on content, such as martyrdom videos of people planning a suicide bombing, or IED-placement videos.

Factors accounting for the success of Deep Neural Networks

Two major factors account for the recent success of this type of neural network. One is a significant leap in available computational processing power. Researchers have been taking advantage of graphics processing units (GPUs), chips designed for high performance in processing the huge amounts of visual content needed for video games.

 

The second factor is that researchers now have access to large datasets with which to “train” the algorithms. These datasets contain millions of images, each annotated by humans with different levels of identification. Researcher Sachin Farfade, for example, trained his face-detection algorithm using a database of 200,000 images featuring faces shown at various angles and orientations, plus 20 million images that did not contain faces. The PIPER person-recognition system, similarly, was evaluated on a dataset of over 60,000 instances of 2,000 individuals collected from public Flickr photo albums, with only about half of the person images containing a frontal face.

Deep Neural Networks require more maturity

Facial recognition use has been increasing rapidly of late, both in commercial products and by law enforcement.

However, studies have also found some weaknesses in DNNs. One study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion as a library).

Another study, by a trio of researchers in the U.S., found that deep neural networks can be tricked into “believing” an image they are analyzing shows something recognizable to humans when in fact it does not. The researchers showed that it is easy to produce images that are completely unrecognizable to humans but that state-of-the-art DNNs classify as recognizable objects with 99.99% confidence (e.g. labeling white-noise static as a lion with near certainty).
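
One widely known recipe for the first kind of failure, the imperceptible perturbation, is the fast gradient sign method. The sketch below applies it to a toy logistic-regression “classifier” standing in for a trained DNN; the toy model, its random weights and the class framing are all assumptions made for illustration, not the setup used in the studies above.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy logistic-regression "classifier" stands in for a trained DNN;
# random weights suffice because only the attack mechanics matter here.
n_pixels = 784
w = rng.standard_normal(n_pixels)

def positive_confidence(x):
    """Probability the toy model assigns to its positive class."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

x = rng.standard_normal(n_pixels)   # in high dimensions the model is
print(positive_confidence(x))       # almost always confident: near 0 or 1

# Fast gradient sign step: nudge every pixel by a small fixed amount
# epsilon in the direction that increases the classification loss. For
# this model that direction is -sign(w) if it currently says "positive"
# and +sign(w) if it says "negative".
epsilon = 0.2
direction = -np.sign(w) if (x @ w) > 0 else np.sign(w)
x_adv = x + epsilon * direction

# No pixel changed by more than 0.2, yet the tiny nudges add up across
# all 784 pixels and the model's answer flips outright.
print(positive_confidence(x_adv))
```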

“Military researchers attempted to use an ANN to detect tanks amid foliage. Scientists fed pictures into a neural network of trees with and without tanks parked beneath them. At first, they had stunning success: the machine had a 100 percent detection rate. But when they tried reproducing the results with new data, the ANN failed,” reported David Perera in Defense Systems.

“The computer hadn’t learned to detect tanks at all. Instead, it had focused on the color of the sky to determine whether tanks were present because the test photos had been taken on different days. In the pictures with the tanks, the sky was cloudy; in the pictures without tanks, the sky was bright blue. The network had learned to recognize the difference in the weather.”

As cognitive psychologist Gary Marcus writes in The New Yorker, the methods that are currently popular “lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.” In other words, they don’t have any common sense.

 

This suggests DNNs are still not ready for use in safety-critical real-world applications such as driverless cars.

 

The article’s sources also include:

http://blogs.scientificamerican.com/sa-visual/unveiling-the-hidden-layers-of-deep-learning/?print=true

https://defensesystems.com/Articles/2008/07/Neural-nets-find-niche.aspx

http://www.wired.com/2016/02/ai-is-changing-the-technology-behind-google-searches/

http://www.theverge.com/2016/2/29/11133682/deep-learning-ai-explained-machine-learning

http://www.innoarchitech.com/machine-learning-an-in-depth-non-technical-guide/

http://www.innoarchitech.com/artificial-intelligence-deep-learning-neural-networks-explained/

https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

https://www.weforum.org/agenda/2016/11/china-is-now-the-world-leader-in-deep-learning-research-and-the-us-is-worried-about-it
