
Geospatial intelligence (GEOINT) technology trends

GEOINT is imagery and information that relates human activity to geography. It is typically collected from satellites and aircraft and can illuminate patterns not easily detectable by other means.

 

Geospatial intelligence (GEOINT) is intelligence about human activity on Earth derived from the exploitation and analysis of imagery and geospatial information that describes, assesses, and visually depicts physical features and geographically referenced activities. As defined in US Code, GEOINT consists of imagery, imagery intelligence (IMINT), and geospatial information. It is a broad field that encompasses the intersection of geospatial data with social, political, environmental, and numerous other factors.

 

Traditional geospatial intelligence data sources include imagery and mapping data, whether collected by commercial satellite, government satellite, or aircraft (such as Unmanned Aerial Vehicles [UAVs] or reconnaissance aircraft), or by other means such as maps, commercial databases, census information, GPS waypoints, utility schematics, or any other discrete data that have locations on Earth.

 

There has been exponential growth in the number and type of satellites launched over the past decade. There were 971 remote sensing satellites in orbit as of April 2021, a 42% increase in just three years. The global Satellite Earth Observation (EO) market was valued at $3.6 billion in 2021 and is predicted to reach $7.9 billion by 2030.

Satellite Pros
• Global monitoring, best for macro view
• Large existing body of open source scientific imagery-based data products
• Low to high resolution: from 30 m or coarser down to 10 cm (VLEO)
Satellite Cons
• High CapEx relative to alternative methods
• Expensive for resolutions below 2 m
• Challenging to leverage data without geospatial expertise

 

This rapid growth is being driven by the commoditization of launch services and computing, which has lowered the barriers to entry for new data providers. Today, a SpaceX Falcon 9 Block 5 rocket costs $62 million to launch, at around $2,500 per kg to LEO, while the cost of a Falcon Heavy averages around $1,400 per kg to LEO. SpaceX’s Starship aims to become the world’s largest and only fully reusable launch vehicle, which could further lower launch costs to an estimated $10 to $20 per kg to LEO.

 

The rapid pace of new commercial satellite constellation launches has led to a significant increase in the amount and availability of geospatial imagery. Data volumes will grow exponentially as private-sector GEOINT providers put hundreds of small satellites in place to deliver persistent GEOINT: 24-hour, seven-days-per-week, 365-days-per-year continuous coverage of the Earth. Similarly, as procedures are developed to allow the safe operation of Unmanned Aerial Vehicles (UAVs) in civil airspace, we will see large numbers of UAVs, not only government-operated but inevitably commercial as well. This will enable real-time Earth observation which, when combined with analytics, will provide a tremendous wealth of global information, insight, and intelligence.

 

In the 1990s, a group of engineers at Intrinsic Graphics unknowingly developed the core technology behind Google Earth: 3D graphics libraries for video games. The adoption of cloud, edge computing, AI/ML capabilities, and increasingly powerful geospatial APIs and SDKs is making the benefits of geospatial intelligence more accessible. Developers no longer need to be experts in image capture, data processing, or object detection, and can instead focus on building specialized applications tailored to unique customer needs. The ability to collect, process, and analyze vast amounts of geospatial data is creating powerful new applications that are helping to reshape how entire industries operate and transforming our relationship with our planet.

 

Processing capacity has been another key barrier to utilizing the complex data captured by satellites. In 1999, NVIDIA introduced the first widely available GPU, which included a rendering engine capable of processing 10 million polygons per second; several years later that number had increased to 38 billion per second. This step change in performance was made possible by the architecture of the GPU: while a CPU handles jobs sequentially, a GPU parallelizes jobs across many cores, making it better suited to rendering and other highly parallel workloads. In 2009, researchers discovered the GPU's promise for building and training machine learning applications. This has led to a new paradigm of GPU-accelerated computing that makes large, complex data sets usable in real time.
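
To make the CPU-versus-GPU distinction concrete, here is a minimal sketch that runs the same matrix multiplication with NumPy on the CPU and with CuPy (a NumPy-compatible GPU array library) on the GPU; the array size, timing approach, and availability of a CUDA-capable GPU are illustrative assumptions rather than details from the source.

```python
# Minimal sketch: the same linear-algebra workload on CPU (NumPy) vs GPU (CuPy).
# Assumes the cupy package and a CUDA-capable GPU; the matrix size is illustrative.
import time
import numpy as np

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.time()
np.matmul(a_cpu, a_cpu)                    # runs on the CPU via BLAS
print(f"CPU matmul: {time.time() - t0:.2f} s")

try:
    import cupy as cp
    a_gpu = cp.asarray(a_cpu)              # copy the array to GPU memory
    cp.matmul(a_gpu, a_gpu)                # warm-up kernel launch
    cp.cuda.Stream.null.synchronize()      # wait for the GPU to finish
    t0 = time.time()
    cp.matmul(a_gpu, a_gpu)                # same workload spread across GPU cores
    cp.cuda.Stream.null.synchronize()
    print(f"GPU matmul: {time.time() - t0:.2f} s")
except ImportError:
    print("CuPy not installed; skipping GPU timing")
```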

 

Like GPUs, Application-Specific Integrated Circuits (ASICs) have gained traction in recent years. These chip designs tailor hardware and computing configurations to a specific problem. One well-known example is Google’s Tensor Processing Unit (TPU), an ASIC designed to accelerate machine learning workloads. TPUs speed up linear algebra computation and minimize the time-to-accuracy when training large, complex neural networks, training in hours models that used to take weeks. Advances at the chipset level have made it possible not only to capture more geospatial data, but also to derive insights from it.

 

The changes in processing have also impacted consumer electronics, providing smaller, more powerful smartphones, and ultimately changing the way satellites are built. This opened up the possibility of using commercial off-the-shelf (COTS) components to create standardized satellite buses from a larger pool of technology suppliers, which significantly reduced the time and cost of development. These changes have allowed new satellite companies to experiment with state-of-the-art systems, de-risk technical challenges early, increase iterations with fewer expenditures, and achieve incremental revenues as a constellation is being built.

 

A wave of new remote sensing companies includes Skybox, Planet (NYSE: PL, founded 2010), Spire (NYSE: SPIR, founded 2012), and ICEYE (founded 2014). The next generation of remote sensing companies is providing satellites-as-a-service and developing highly targeted scientific instruments to improve decision-making for commercial, civil, and defense customers.

 

Today, a variety of geospatial platforms capture data at different altitudes, benefiting from low-cost components, commoditized storage and compute, and decades of GIS product development. These platforms rely on a range of technologies, from stratospheric balloons and drones to vehicle-mounted sensors and handheld devices.

 

The reduction in launch costs has led to an increase in satellites in Low Earth Orbit (LEO) as opposed to geostationary orbit (GEO). This is important because unlike GEO satellites, which remain fixed over one point on the Earth, LEO satellites move relative to the Earth’s surface. This creates connectivity challenges, since each LEO satellite requires line of sight to a ground antenna to facilitate space-to-ground communications. As a result, companies have emerged to address these problems by providing a “global infrastructure of antennas, processing equipment, and software for satellite operators”.
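
To illustrate why line of sight to ground antennas is such a constraint, the short calculation below estimates the maximum contact time for a single overhead pass of a LEO satellite over one ground station; the 500 km altitude and 10-degree minimum elevation angle are assumed example values, not figures from the source.

```python
# Rough estimate of maximum ground-station contact time for one LEO pass.
# Assumed example values: 500 km altitude, 10-degree minimum elevation mask.
import math

MU = 398600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0        # mean Earth radius, km

def max_contact_time(altitude_km: float, min_elevation_deg: float) -> float:
    r = R_EARTH + altitude_km
    period = 2 * math.pi * math.sqrt(r**3 / MU)      # orbital period, seconds
    eps = math.radians(min_elevation_deg)
    # Nadir angle at the satellite when it appears at the minimum elevation
    eta = math.asin((R_EARTH / r) * math.cos(eps))
    lam = math.pi / 2 - eps - eta                    # Earth central angle of visibility
    # Time inside the visibility cone for a pass directly overhead,
    # ignoring Earth's rotation.
    return period * lam / math.pi

print(f"{max_contact_time(500, 10) / 60:.1f} minutes per overhead pass")
# Roughly 7 minutes: each antenna sees a given satellite only briefly, which is
# why operators buy time on globally distributed ground-station networks.
```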

 

Processing at the edge is becoming another important tool that can reduce delays, downlink bandwidth, and costs across geospatial platforms. Edge computing (or IoT edge processing) means taking “action on data as near to the source as possible rather than in a central, remote data center, to reduce latency and bandwidth use”. Both hardware and software innovations are needed to unlock edge processing use cases, given the challenges of processing data on device. Ground-Station-as-a-Service and Edge-as-a-Service offerings have fundamentally changed the way geospatial solutions are built and are delivering more timely, actionable data.
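
As a rough illustration of edge processing, the sketch below filters image tiles on the collecting device and passes along only those worth downlinking; the cloud-fraction heuristic, the threshold, and the function names are hypothetical placeholders rather than any specific vendor's API.

```python
# Minimal sketch of edge filtering before downlink: score each captured tile
# on board and transmit only the useful ones, saving bandwidth and latency.
import numpy as np

CLOUD_THRESHOLD = 0.6   # assumed cut-off: skip tiles that are mostly cloud

def estimate_cloud_fraction(tile: np.ndarray) -> float:
    """Crude proxy: fraction of very bright pixels in an 8-bit image tile."""
    return float((tile > 200).mean())

def process_on_edge(tiles):
    """Yield only tiles worth downlinking; cloudy tiles stay on the device."""
    for tile in tiles:
        if estimate_cloud_fraction(tile) < CLOUD_THRESHOLD:
            yield tile   # hand off to the downlink queue
```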

 

The global High Altitude Platforms (HAPs) market is expected to reach $4.3 billion by 2026, an increase from $3.4 billion in 2022 and more than a fourfold increase from $1.0 billion in 2016. This includes both balloons and airships that are able to operate in the stratosphere. These stratospheric platforms have a unique ability to provide persistent coverage over a localized area at a competitive cost compared with existing alternatives (satellites, aircraft, drones). The data acquired from these assets have been used in industries with a large set of fixed real estate assets, such as insurance, transportation, energy, and conservation. Remote-controlled vehicles can also be used for climate science, disaster recovery and response, and military surveillance.

 

Remote-controlled drones have played an important role in lowering the cost and improving the safety of collecting geospatial data in physically dangerous environments. In agriculture, insurance adjusters are using drone-based tools that visualize plant health in real time so that users can “evaluate damage at the field’s edge” and receive quantifiable results.

 

Ground collection is the oldest method of acquiring geospatial data and is still the most widely adopted. Sensors are deployed at fixed locations or mounted on mobile platforms. Fixed sensors tend to be in close proximity to an area of interest and can provide continuous, unattended monitoring at the highest resolutions. Fixed solutions tend to be used in industries like agriculture and mining. The fixed-sensor market is expected to grow moderately at a 4.7% CAGR, from $2.0 billion in 2022 to $2.8 billion by 2028. Mobile sensors include those mounted on vehicles and hand-held devices. The best-known mobile ground sensing project is Google Street View. Since the project started 15 years ago, Google has collected Street View imagery across the globe. The service is now accessible to users in over 100 countries and has accumulated over 220 billion images.

 

Modern smartphones are equipped with a variety of sensors, and as a result every person with a smartphone becomes part of a larger crowdsourcing effort to create a detailed understanding of human activity at both micro and macro levels. Geotagged photos and videos provide a tremendous amount of spatial data that is useful in a variety of applications. Snapchat, for example, allows users to tag photos with friends and visualize where friends are on Snapmap via the mobile app. Geospatial data collected by wearables is being used by healthcare professionals to better understand the relationships between physical activity and health, monitor patient recovery, and send out alerts in emergency situations. The market for mobile sensors is much larger than for fixed ones: the smartphone sensor market alone is forecast to grow at a 17% CAGR from 2022, reaching $379.0 billion by 2030.

 

Geospatial data comes from a variety of devices that speak fundamentally different languages. Historically, making use of this information has required the collaboration of highly trained professionals, including data engineers, subject matter experts, and GIS specialists, to create even basic reports. Constraining matters further, only 5% of data scientists and other engineers are trained in the tools needed to analyze geospatial data. The need for human intervention and specialized knowledge has kept costs high and access low.

 

Data Fusion

The more effectively we can merge different types of spatial data, for example vector and raster, into unified and uniformly accessible data, the more types of spatial relationships we can derive. This is the concept of spatial data fusion: a way to enable new cross-referencing and data visualization that provides a better understanding of a situation or problem than would otherwise be possible. Data fusion is not limited to merging data from different sources; it also covers data from different time periods.
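
A minimal example of vector/raster fusion is sketched below: raster values (for instance an NDVI layer) are sampled at the point locations held in a vector file, using the open-source rasterio and geopandas libraries; the file names and attribute names are hypothetical, and both layers are assumed to be point geometries sharing the same coordinate reference system.

```python
# Minimal sketch of vector/raster fusion: sample a raster layer at the point
# locations stored in a vector file. File paths are hypothetical placeholders.
import geopandas as gpd
import rasterio

points = gpd.read_file("assets.geojson")       # vector layer of asset locations
with rasterio.open("ndvi.tif") as src:          # raster layer (e.g., NDVI)
    coords = [(geom.x, geom.y) for geom in points.geometry]   # assumes Point geometries
    points["ndvi"] = [val[0] for val in src.sample(coords)]   # band-1 value per point

# Each asset now carries a raster-derived attribute, enabling joint queries such
# as "assets whose surrounding vegetation index dropped this month".
print(points[["ndvi"]].describe())
```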

 

With an increase in the variety, capability, and number of sensors, the amount of data (optical, spectral, multispectral, LiDAR, SAR, DEM, etc.) intended to enhance decision-making and human-system performance can instead cause information overload, obscuring the most relevant aspects of a situation and leading to decision fatigue.

 

Data fusion can address this challenge by making sense of disparate data and producing a richer understanding with less noise by:

• Fusing raw sensor data
• Co-registering and overlaying sensor outputs
• Extracting features/key points of interest from each sensor independently and then fusing the feature sets
• Applying independent algorithms to each sensor (e.g., a target detection algorithm on each sensor individually) and then fusing the algorithm (e.g., detection) results
• Implementing change analysis on the same type of spatial data over a period of time (multitemporal data), as in the sketch below

Studies have shown that the combination of radar’s structural data and a multispectral sensor’s reflectance data is highly complementary, improving the accuracy of assessing and monitoring biodiversity at scale.
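
The sketch below illustrates the last approach, multitemporal change analysis, by differencing two co-registered NDVI rasters from different dates; the file names and the 0.2 change threshold are illustrative assumptions.

```python
# Minimal sketch of multitemporal change analysis: difference two co-registered
# NDVI rasters from different dates and flag pixels with large vegetation loss.
import rasterio

with rasterio.open("ndvi_2021.tif") as t0, rasterio.open("ndvi_2022.tif") as t1:
    ndvi_before = t0.read(1).astype("float32")
    ndvi_after = t1.read(1).astype("float32")

change = ndvi_after - ndvi_before
significant_loss = change < -0.2    # e.g., possible clearing or crop damage

print(f"{significant_loss.mean() * 100:.1f}% of pixels show significant NDVI loss")
```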

 

The exponential growth in geospatial data has driven the need for new ways to process, transform, and analyze information, which has resulted in a shift to the “as-a-service” paradigm.

Cloud-as-a-Service

In recent years, cloud computing has become an integral component in the management of geospatial data. According to NSR, a satellite and space market research and consulting firm, 486 PB (petabytes) of raw satellite imagery will need to be downlinked to cloud servers over the next 10 years. One way companies are evolving to meet the deluge of data is by moving processing to the edge. A secondary effect is that Big Tech companies are releasing additional products and features specifically for geospatial use cases.

 

APIs-as-a-Service

Since 2005, geospatial APIs (Application Programming Interfaces) and SDKs (software development kits) have helped developers, geographers, and non-geographers make sense of geospatial data. Developers are now able to use geospatial software libraries and developer tools with modern programming languages, broadening the user base dramatically from geospatial experts to general software engineers. Integrations with modern software architecture are helping to build a future where geospatial data is accessible like any other data. There are four distinct phases to making it useful: ingest, process, store, and analyze. Now that each phase primarily runs in the cloud, the opportunities to refine processes, collaborate more effectively, and automate undifferentiated human interventions have increased significantly.
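
A minimal sketch of that ingest/process/store/analyze flow, using the general-purpose geopandas library, is shown below; the file names and the "port_name" attribute are hypothetical placeholders, not from the source.

```python
# Minimal sketch of the ingest -> process -> store -> analyze flow using
# general-purpose geospatial tooling. File names are hypothetical placeholders.
import geopandas as gpd

# Ingest: read vector data much like any other tabular data
ships = gpd.read_file("ship_detections.geojson")
ports = gpd.read_file("port_zones.geojson")

# Process: reproject to a common CRS and spatially join detections to zones
ships = ships.to_crs(ports.crs)
joined = gpd.sjoin(ships, ports, predicate="within")

# Store: write results back out in a standard format
joined.to_file("ships_in_ports.geojson", driver="GeoJSON")

# Analyze: aggregate like any other DataFrame
# ("port_name" is an assumed attribute column in port_zones.geojson)
print(joined.groupby("port_name").size().sort_values(ascending=False))
```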

 

Algorithms-as-a-Service

The algorithms-as-a-service (AaaS) business model has garnered interest from a broad swath of the technical community due to the promised ease-of-use and ability to commercialize and sell IP in a new way. Traditionally, algorithms were hosted on the backend and then the services from those algorithms were made available to users.

Now there is more interest in directly licensing algorithms to other companies and/or serving algorithms through a marketplace. Algorithms are at a point where they are being exposed for direct use by companies, given the interest in building applications in-house, using other companies’ algorithms natively, and solving both rudimentary and mathematically complex tasks with AI and ML.

 

Platform-as-a-Service

The evolution of AI was built on top of massive training data sets, but the expansion comes with real challenges. Finding, validating, and generating data for machine learning is a complex and often inaccurate task. Gartner estimates that, by 2022, 85% of AI projects will deliver incorrect outcomes due to biased data. Current machine learning researchers face several challenges:

 

• Data is incomplete: AI needs both large and diverse datasets, but real data is often incomplete, excluding infrequent scenarios that are critical for AI performance.
• Data is expensive: It is hard to collect, integrate, store, and maintain.
• Data is biased: Even if data perfectly reflects reality, it can encode biases present in the real world that we would like to remove.
• Data is restricted: Regulation is increasingly limiting data use for AI.

 

Synthetic data is not a new concept and has been used by astrophysicists for decades. Historically, synthetic data was often seen as a lower-quality substitute, useful only when real data was inconvenient to get, expensive, or constrained by regulation. However, recent innovations in synthetic data may help data scientists unlock highly valuable training data at scale without compromising on quality, balance, or accuracy. Maverick’s research shows that synthetic data is on a trajectory to go from an alternative data source to becoming the main one.

A new approach to synthetic data leveraging physics-based simulation models may address previous limitations.

Rendered has partnered with Orbital Insight and UC Berkeley to support the National Geospatial-Intelligence Agency. The project team plans to demonstrate that:

  • Synthetic data can be used with real data to increase the training performance of AI for rare and unusual objects
  • Much less synthetic data can be used to train AI than would be required using real datasets
  • Potentially much less real data may need to be collected to be able to detect rare and unusual objects

 

Space Capital, a venture capital firm focused on the space economy, released “The GEOINT Playbook” in October 2022, a market research report for investors on the emerging economic opportunities in the multi-billion-dollar geospatial intelligence (GEOINT) market.

Key issues covered in the report:

  • A Fundamental Shift in Accessibility: GEOINT is evolving from a field dominated by mapping experts and geospatial consultants into a commercially available toolset that integrates seamlessly into existing workstreams and enables the development of highly specialized end-user applications. This new accessibility is being driven by innovation across three distinct technology layers: (1) Infrastructure, the geospatial sensor platforms that capture data at different altitudes (ex: small satellites, drones); (2) Distribution, the technology used to structure, process, analyze, and disseminate geospatial information (ex: the cloud, AI/ML, APIs and SDKs); and (3) Applications, specialized hardware or software that harnesses geospatial data to address industry and customer-specific needs.
  • Massive Growth Potential: There are currently over 280 types of data in demand across a broad customer base, with only 10-15 available today. In the Applications layer alone, we are witnessing the birth of a new market that could exceed the $36 billion Location-Based Services (i.e., GPS) market. Because GEOINT provides essential information to enterprises and governments in times of uncertainty, it is less prone to revenue declines during risk-off environments in the broader markets, making it countercyclical and resilient to macro market conditions.
  • Future Opportunities: The next evolution of geospatial companies will look and feel more like developer tools and deep tech companies, focused on the power of computing and AI/ML. A seemingly infinite number of venture-scale businesses are now being built in multi-trillion-dollar global industries like Agriculture, Insurance, Climate Markets, and Augmented Reality.

 
