
Embedded AI or Tiny AI, the next revolution in embedded systems and the Military Internet of Things (MIoT)

The general definition of AI is the capability of a computer system to perform tasks that normally require human intelligence, such as visual perception, speech recognition and decision-making. Machine learning (ML) is a subfield of artificial intelligence that attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. One of the most successful machine learning approaches is deep learning (DL), which allows high-level abstraction from data and is therefore helpful for automatic feature extraction and for pattern analysis and classification.
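As a minimal illustration of that definition, the sketch below (using scikit-learn, with toy sensor readings invented for the example) lets a classifier infer a decision rule from labelled data rather than having the rule programmed explicitly.

```python
# Learning from data instead of explicit programming: a decision tree
# infers the boundary between "normal" and "fault" from labelled examples.
# The readings and labels here are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

X = [[20, 0.1], [22, 0.2], [70, 0.9], [75, 0.8]]  # [temperature, vibration]
y = [0, 0, 1, 1]                                   # 0 = normal, 1 = fault

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[68, 0.85]]))  # -> [1]; the rule was learned, not coded
```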

 

Various deep learning architectures such as deep neural networks, convolutional neural networks, and deep belief networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, and music/audio signal recognition, where they have been shown to produce state-of-the-art results on various tasks, many times exceeding human performance.

 

The transformative potential of AI for civilian and military systems and operations was recognized early by many countries and led to national plans for its development. AI has now heralded a new age of warfare, one that is having a multiplier effect across all domains of warfare, including air, ground, sea, space and cyber, and that uses technology that is affordable and widely available. Ultimately, a neural network could contain a brain equivalent to someone who has been trained on all military intelligence, assets, strategies, personnel, and anything else known collectively by military and intelligence agencies. Although we are not at this stage, we are moving in that direction.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. A major factor accounting for their recent success is the significant leap in the availability of computational processing power.

 

When Google’s computers roundly beat the world-class Go champion Lee Sedol, it marked a milestone in artificial intelligence. The winning program, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what’s known as deep learning, a strategy by which neural networks involving many layers of processing are configured in an automated fashion to solve the problem at hand. In addition, the computers Google used to defeat Sedol contained special-purpose hardware: a computer card Google calls its Tensor Processing Unit, which reportedly uses an application-specific integrated circuit (ASIC) to speed up deep-learning calculations.

 

This requirement for enormous computational power has also become a limitation in embedded applications, which employ microcontrollers with relatively modest processing power. For example, AlexNet requires 727 MFLOPs (a FLOP is one floating-point operation) and 235 MB of memory to process a small 227x227px image, while the popular ARM Cortex-A8 in the Google Nexus S delivers about 66 MFLOPS. At that rate you have to wait roughly 11 seconds per inference, which is inadequate for real-time applications that may need an inference from the model within a few tens or hundreds of milliseconds. Embedded hardware is therefore seen as too limited to run the kinds of deep neural network (DNN) algorithms on which such applications rely.
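The arithmetic behind that estimate is easy to check; a back-of-the-envelope sketch using the figures quoted above:

```python
# Rough inference-latency estimate: time ≈ required FLOPs / device FLOPS.
# Both figures are taken from the text above.
alexnet_flops = 727e6          # ~727 MFLOPs per 227x227 image (AlexNet)
cortex_a8_flops_per_s = 66e6   # ~66 MFLOPS sustained on an ARM Cortex-A8

latency_s = alexnet_flops / cortex_a8_flops_per_s
print(f"~{latency_s:.0f} s per inference")  # ~11 s
```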

 

The application of machine learning in embedded systems therefore requires careful thinking about computational resources and memory usage. Artificial intelligence (AI) is now seen as a vital technology for the development of the Internet of Things (IoT) and cyber-physical systems such as robots and autonomous vehicles. AI techniques are also being proposed as a way of managing the immensely complex 5G New Radio protocol: the number of channel parameters that handsets need to analyse to deliver optimum data rates has outpaced the ability of engineers to develop efficient algorithms, and algorithms trained on data obtained during field trials provide a way to balance the trade-offs between different settings more efficiently.

 

Military operations will be significantly affected by the widespread adoption of IoT technologies. Analogous to the IoT, a Military Internet of Things (MIoT) comprising a multitude of platforms, ranging from ships to aircraft to ground vehicles to weapon systems, is expected to be developed. The MIoT offers high potential for the military to achieve significant efficiencies, improve safety and delivery of services, and produce major cost savings.

 

As MIoT devices collect huge amounts of sensor data, large compute and storage resources are required to analyze, store and process that data. The most common compute and storage resources today are cloud based, because the cloud offers massive data handling, scalability, and flexibility. Cloud companies including Dropbox, Microsoft Azure, Amazon AWS, and others offer places to store data safely off-site, accessible from anywhere in the world. Amazon Web Services (AWS) and IBM are among the companies that now offer cloud-based AI services to their customers. AWS provides access to a wide range of hardware platforms suitable for machine learning, including general-purpose server blades, GPU accelerators and FPGAs. DNNs running in the cloud can be built using open-source frameworks such as Caffe and TensorFlow, which are now widely used by AI practitioners.
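As a flavour of what building such a network looks like, here is a minimal sketch of a small image classifier in TensorFlow/Keras, one of the frameworks named above; the layer sizes are illustrative assumptions, not a published architecture.

```python
# A deliberately small convolutional network defined with the Keras API.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),                # tiny grayscale images
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # prints parameter counts per layer
```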

 

While many technology companies have been offering AI-powered software and services to customers around the world, their reliance on centralized cloud services means that devices have to send and receive packets of data at all times, increasing latency and impacting the efficiency of operations. These data center-powered clouds offer virtual big-data processing capabilities that deliver high-performance analytics with ease. However, this may not be sufficient to meet the requirements of many MIoT applications, owing to resource-constrained military networks and issues of latency, reliability and security. Emerging concepts such as fog computing can bring some compute and storage resources to the edge of the network instead of relying on the cloud for everything; both fog and cloud computing may be required for optimal performance of MIoT applications.

 

But thanks to new developments in AI technologies, researchers are now able to shrink AI algorithms and models into much smaller packages without impairing their abilities. For example, in MIoT applications with complex requirements, it may be possible to use a simple AI algorithm in the embedded device to look for outliers in the input data and then request services from the cloud to examine the data in more detail and provide a more accurate answer. Such a split helps maintain real-time performance, limits the amount of data that needs to be transmitted over long distances and ensures continuous operation even in the face of temporary network outages: if a connection is lost, the embedded system can cache the suspicious data until an opportunity arises to have it checked by a cloud service.
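A minimal sketch of that split, assuming a z-score outlier test on the device and treating the cloud transport and link check as stand-ins supplied by the surrounding system:

```python
# Edge/cloud split: cheap outlier screening on the device, detailed analysis
# in the cloud, local caching when the link is down.
from collections import deque

THRESHOLD = 3.0             # z-score beyond which a reading is "suspicious"
cache = deque(maxlen=1000)  # bounded local store for outage periods

def is_outlier(x, mean, std):
    return abs(x - mean) > THRESHOLD * std

def handle_reading(x, mean, std, send_to_cloud, link_up):
    # send_to_cloud and link_up are stand-ins for the system's transport layer
    if not is_outlier(x, mean, std):
        return                       # normal data never leaves the device
    if link_up():
        while cache:                 # flush anything held during an outage
            send_to_cloud(cache.popleft())
        send_to_cloud(x)
    else:
        cache.append(x)              # hold until connectivity returns
```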

 

The evolution of Tiny AI has enabled data processing at the edge, which means that AI-powered chips in smartphones, smart speakers, and other gadgets can analyze and process data without having to interact with a centralized data center. Although DNNs generally require high-performance hardware to run in real time, there are simpler structures, such as adversarial neural networks, that have been successfully implemented on mobile robots based around 32-bit or 64-bit processors such as those found in Raspberry Pi platforms. Another solution is to use special chips called neuromorphic chips. The enhanced power efficiency of neuromorphic hardware allows advanced machine vision, which usually requires a lot of computing power, to be deployed in places where resources and space are limited. Satellites, high-altitude aircraft, air bases reliant on generators, and small drones could all benefit, says AFRL principal electronics engineer Qing Wu. “Air Force mission domains are air, space, and cyberspace. [All are] very sensitive to power constraints,” he says.

 

As well as its computational overhead, a drawback of the DNN is the huge amount of data needed to train it. This is why other algorithms, such as those based on Gaussian processes, are now being investigated by AI researchers. These use probabilistic analysis of data to build models that function in a similar manner to neural networks but use far less training data. In the short term, however, the success of the DNN makes it a key candidate for dealing with complex multi-dimensional inputs such as images, video and streaming samples of audio or process data.
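A minimal Gaussian-process regression sketch (using scikit-learn with a standard RBF kernel; the five training points are invented) shows the appeal: a usable model, complete with an uncertainty estimate, from very little data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Five training points stand in for a small, expensive-to-collect dataset.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.sin(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
mean, std = gp.predict(np.array([[2.5]]), return_std=True)
print(mean, std)  # prediction plus a confidence measure
```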

 

One possible solution can be found in rule-based AI. This leverages the expertise of domain experts rather than direct machine learning by encoding the experts’ knowledge within a rule base. An inference engine analyses the data against the rules and attempts to find the best match for the conditions it encounters. A rule-based system has low computational overhead, but developers will encounter difficulties if the conditions are difficult to express as simple statements or if the relationships between input data and actions are not well understood. The latter situation, which applies to speech and image recognition, is where machine learning has been shown to excel.
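A toy version of such an engine, with invented conditions and actions, shows how little computation is involved: the “inference” is just a scan over encoded expert rules.

```python
# Expert knowledge encoded as (condition, action) rules; the inference
# engine returns the action of the first matching rule. All fields,
# thresholds and actions are illustrative.
RULES = [
    (lambda d: d["temp"] > 90 and d["pressure"] > 8.0, "shutdown"),
    (lambda d: d["temp"] > 90, "throttle"),
    (lambda d: d["pressure"] > 8.0, "vent"),
]

def infer(data, default="continue"):
    for condition, action in RULES:
        if condition(data):
            return action        # first match wins in this simple engine
    return default

print(infer({"temp": 95, "pressure": 7.2}))  # -> "throttle"
```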

 

Among the measures that can be employed for such processor- and memory-constrained systems is weight quantization, that is, converting a continuous range of values into a finite range of discrete values. The model then typically needs to be converted to C code to run on a microcontroller.
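One common route is post-training quantization with TensorFlow Lite, sketched below on a placeholder model; the resulting flatbuffer can then be embedded as a C array (for example with `xxd -i model.tflite`) for an on-device runtime such as TensorFlow Lite for Microcontrollers.

```python
# Post-training weight quantization with the TensorFlow Lite converter.
# The two-layer model is a placeholder, not a recommended network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # flatbuffer ready to embed as a C array
```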

 

Many big semiconductor companies have started incorporating features for embedded AI, such as CMSIS-NN, a library of efficient neural network kernels for Arm Cortex-M CPUs, and compilers that produce highly efficient inference code optimized for the target hardware. IBM has built direct interfaces from its Watson AI platform to boards such as the Raspberry Pi, making it easy to prototype machine-learning applications before committing to a final architecture. Arm provides a similar link to Watson through its Mbed IoT device platform.

 

“Existing services like voice assistants, autocorrect, and digital cameras will get better and faster without having to ping the cloud every time they need access to a deep-learning model. Tiny AI will also make new applications possible, like mobile-based medical-image analysis or self-driving cars with faster reaction times. Finally, localized AI is better for privacy, since your data no longer needs to leave your device to improve a service or a feature,” notes MIT Technology Review.

 

Modern military-grade electronic packaging technology has matured to the point that data center-like capabilities can be shrunk to smaller form factors, including OpenVPX (ANSI/VITA standards), the most widely adopted and supported rugged high-performance open system compute architecture for defense programs.

 

Military platforms require rugged, hardened and reliable embedded computers. One critical use of AI is vehicle autonomy, which requires algorithms to detect objects to avoid (or follow) and to perform actions without human intervention. Autonomous systems on commercial and defense mobile platforms have mission computers controlling their effectors that are required to prove high levels of reliable, deterministic, and safe operation. Safe operation means that both the hardware and software have been shown and documented to be highly deterministic and reliable. Varying degrees of reliability and critical-function execution are validated through Design Assurance Level (DAL) certification, including DO-254 for hardware and DO-178 for software.

 

Modern defense prime contractors are developing the capabilities and technologies required to embed AI processing all the way to the tactical edge. These companies are building in effector determinism, as defined by flight-safety certification, and embedded holistic system-wide security, enabling AI-powered systems to be deployed anywhere.

References and Resources also include:

http://www.newelectronics.co.uk/electronics-technology/ai-options-for-embedded-systems/209022/

http://mil-embedded.com/articles/embedded-ai-for-military-applications/
