
Human to Machine Interface (HMI) for controlling Autonomous Killer Satellites

The US, China, and Russia have all reportedly developed so-called “inspection satellites” that can maneuver close to other spacecraft in low orbits and examine them for malfunctions. These robots could fix minor maintenance issues, keeping current orbiters in service as they age and sustain damage.


However, this technology is also dual-use: the same satellites designed to repair one’s own satellites can also be used to degrade or destroy an adversary’s satellites. They could be weaponized with lasers or explosives. Satellites are, by their construction, highly vulnerable. As Secretary Wilson said, our SBIRS satellites are “vulnerable” to “kinetic attacks,” including grapple-and-crash maneuvers by robotic arms.


The scope of space warfare has therefore expanded from ground-to-space warfare, such as attacking satellites from Earth using anti-satellite missiles or directed-energy weapons, to space-to-space warfare, such as satellites attacking satellites, and space-to-ground warfare, such as satellites attacking Earth-based targets.


This is leading to a new race among spacefaring nations to develop similar satellites that could function as killer microsatellites. Space warfare has thus gained a new dimension of space-to-space warfare, using in-space robotic satellites to deorbit adversaries’ satellites. Orbital or space-based systems are satellites that can deliver temporary or permanent effects against other spacecraft. These systems could include payloads such as kinetic kill vehicles, radiofrequency jammers, lasers, chemical sprayers, high-power microwaves, and robotic mechanisms. Some of these systems, such as robotic technology for satellite servicing, repair, and debris removal, have peaceful uses but can also be used for military purposes.


AI is the critical technology for developing autonomous satellites. The ongoing wave of AI includes machine learning techniques in which machines derive rules through clustering and classification and use those models to predict and make decisions. But the problem with deep learning is that it is a black box: we do not know the reasoning behind the decisions it makes. This makes such systems hard for people to trust, and it makes humans working closely with robots risky.


Researchers are now developing AI theory and applications that make it possible for machines to explain their decisions and adapt to changing situations. Instead of merely learning from data, intelligent machines will perceive the world on their own and learn and understand it by reasoning. Artificial intelligence systems will then become trustworthy and collaborative partners to humans.


However, there is a requirement to design efficient human-machine interfaces (HMIs) that allow humans to maintain situational awareness of highly autonomous satellites so that they can adapt in the face of unforeseen circumstances.


US Air Force asks industry to develop a human-machine interface for machine autonomy in satellite control

In April 2022, officials of the Space Control Technology Branch of the Air Force Research Laboratory Space Vehicles Directorate at Kirtland Air Force Base, N.M., released a solicitation (FA9453-21-S-0001) for the Space Technology Advanced Research – Fast-tracking Innovative Software and Hardware (STAR-FISH) project.


The solicitation involves Call Four of the STAR-FISH project — Human to Machine Interface for Autonomous Satellite Systems — which seeks to enable seamless and agile human-machine interaction by establishing trust among satellite operators, and to boost satellite autonomy capabilities with advanced human-machine interface technology.


Scenario: The scenario modeled by the existing test bed for this HMI prototype is one in which one or more satellites are set to perform an inspection or docking orbit around another, cooperative satellite. The system is envisioned to have three controllers: a primary automatic or autonomous controller, a backup controller provided through a run time assurance wrapper, and a human operator.


In addition to these automatic controllers, the operator will be able to override them with a scripted command. The run time assurance (RTA) system watches the primary controller (and optionally the scripted controller, when turned on by the operator) and intervenes with backup control when it detects that intervention is necessary to ensure the satellite adheres to safety constraints, including, but not limited to, translational (e.g., collision avoidance) and rotational (e.g., camera pointing restrictions) keep-out zones. The RTA is designed to be minimally invasive to the primary mission, allowing the system to stay on mission safely.
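The RTA pattern described above can be sketched as a simple command filter that passes the primary controller's command through unless a safety constraint would be violated. The sketch below is illustrative only: the function names, the single translational keep-out constraint, and the string commands are assumptions, not details from the solicitation.

```python
import math

# Translational keep-out zone around the target satellite (illustrative value).
KEEP_OUT_RADIUS_M = 50.0

def violates_keep_out(predicted_position, target_position, radius=KEEP_OUT_RADIUS_M):
    """Return True if the predicted position falls inside the keep-out sphere."""
    dx, dy, dz = (p - t for p, t in zip(predicted_position, target_position))
    return math.sqrt(dx * dx + dy * dy + dz * dz) < radius

def rta_filter(primary_command, backup_command, predict, target_position):
    """Minimally invasive RTA step: keep the primary command unless its
    predicted outcome violates the keep-out constraint, in which case
    substitute the backup controller's command."""
    if violates_keep_out(predict(primary_command), target_position):
        return backup_command, "backup engaged: keep-out violation predicted"
    return primary_command, "primary"
```

A fuller implementation would check rotational (camera-pointing) constraints as well and would predict over a trajectory rather than a single point; this sketch only shows the monitor-then-substitute structure.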


This collaboration is expected to enable the U.S. Air and Space forces to optimize machine autonomy and decision support with new features in advanced human-machine interfaces for satellite control.


Researchers are asking industry to submit white papers that outline human-machine interface concepts for autonomous satellite systems, as well as ways to modify existing satellite autonomy and control technologies.


Technical Objectives

The training scenario described above will be used to scope the prototype HMI work for this contract. A relevant HMI could support the autonomous controller and a Course of Action (COA) analysis tool in which operators can flexibly set desired priorities for the COA tool, as well as the thresholds for COA assumptions, based on operator requirements and preferences. The goal of this contract is to create an intuitive HMI that facilitates understanding and projection of the autonomous controller's behavior while maximizing directability and shared awareness between the human operator and the autonomy.
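One way an operator-configurable COA ranking could work is sketched below: COAs whose assumed success probability falls below an operator-set threshold are filtered out, and the remainder are sorted by operator-weighted criteria. All field names, weights, and the threshold are hypothetical illustrations, not requirements from the solicitation.

```python
def rank_coas(coas, weights, min_success_probability=0.8):
    """Rank candidate courses of action (COAs) by operator preferences.

    coas: list of dicts with numeric criteria (e.g. fuel_margin, time_margin,
          p_success), all hypothetical field names.
    weights: operator-set priority weight per criterion.
    min_success_probability: operator-set threshold on a COA assumption.
    """
    # Drop COAs that violate the operator's assumption threshold.
    feasible = [c for c in coas if c["p_success"] >= min_success_probability]

    def score(coa):
        # Weighted sum over the criteria the operator has prioritized.
        return sum(weights.get(k, 0.0) * coa.get(k, 0.0)
                   for k in ("fuel_margin", "time_margin", "p_success"))

    return sorted(feasible, key=score, reverse=True)
```

In an HMI, the weights and threshold would be bound to interface controls, and the ranked list would drive the juxtaposed COA visualizations described in the objectives below.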

This HMI will:

  • Be written in a widely available programming language such as JavaScript, Java, Python, C, or C++.
  • Optimize human decision support through interface visualizations that juxtapose COA options based on established and dynamic operator preferences.
  • Develop methods for testing the effectiveness of the interfaces through SME inputs, human usability assessments, user studies targeting human performance factors, rapid prototyping of novel interface designs, and experimental design.
  • Identify and describe the physical characteristics of the operator interface, the control and design approach, display format symbology, decision-aiding method, and interface instantiation, and evaluate the effectiveness of these technologies for improving human performance.
  • Provide transparency into the autonomous control algorithms, including current state, projected future state, and relevant assumptions regarding appropriate TTPs.
  • Provide inputs to the RTA and visualize the status of those inputs, allowing the operator to turn the RTA system on and off, tune risk, select which protections are enabled, and engage a pre-planned backup maneuver on demand.
  • Display RTA outputs such as the status of failure and interlock conditions, and give insight into which backups it is considering, how long until they engage, and other feedback.
  • Provide inputs to the autonomous controller and visualize the status of those inputs, such as fuel limits per mission, fuel limits per maneuver, time limits, optimal time to complete, illumination requirements, etc.
  • Display autonomous controller outputs such as the status of failure and interlock conditions, and give insight into how it selects its next maneuver.
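The RTA-related inputs and outputs enumerated above could be represented in the HMI's state model roughly as follows. This is a minimal sketch with entirely hypothetical field names; the solicitation does not prescribe any data model.

```python
from dataclasses import dataclass, field

@dataclass
class RtaPanelState:
    """Operator-tunable RTA inputs surfaced by the HMI (hypothetical fields)."""
    rta_enabled: bool = True
    risk_tolerance: float = 0.5          # 0.0 = most conservative
    protections: dict = field(default_factory=lambda: {
        "collision_avoidance": True,     # translational keep-out zone
        "camera_pointing": True,         # rotational keep-out zone
    })
    backup_maneuver_armed: bool = False  # operator-engaged pre-planned backup

@dataclass
class RtaPanelOutputs:
    """RTA outputs the HMI would display back to the operator (hypothetical)."""
    failure_condition: bool = False
    interlock_engaged: bool = False
    candidate_backup: str = ""                       # backup under consideration
    seconds_to_intervention: float = float("inf")    # time until backup engages
```

An analogous pair of structures could hold the autonomous controller's inputs (fuel and time limits, illumination requirements) and outputs (maneuver-selection rationale) listed above.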

