In 2011, Iran claimed to have downed a sophisticated American stealth drone, and unveiled what it alleged was a reverse-engineered copy of the futuristic-looking RQ-170 Sentinel UAV, produced by defense giant Lockheed Martin. The drone was brought down by the Iranian Armed Forces’ electronic warfare unit, which commandeered the aircraft and safely landed it, Iran’s Tasnim News Agency announced.
“Many things have computers inside and those computers are networked to talk to other things. Whenever you have that situation, you have the possibility for remote vulnerabilities where somebody can use the network connection to take over and get the device to do what the attacker wants instead of what the owner wants,” according to Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “A lot of the ways attackers take over programs on machines and the Internet is by exploiting security vulnerabilities, and there are many kinds, such as the buffer overrun vulnerability, to which safe languages are just immune,” notes Andrew Appel, professor of computer science at Princeton University, who is considered an expert in the program verification field.
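As a hedged illustration of the buffer-overrun class Appel mentions (the buffer and function here are invented for the example, not taken from any real system): in a memory-unsafe language, an out-of-bounds write silently corrupts whatever sits next to the buffer in memory, while a safe language checks every access and turns the same mistake into a clean, catchable error.

```python
# In a memory-unsafe language (e.g., C), this write would silently
# corrupt adjacent memory:
#
#     char buf[8];
#     strcpy(buf, attacker_input);   /* no bounds check */
#
# A safe language checks every access against the buffer's bounds.

def copy_into_buffer(buf, data):
    """Copy data into a fixed-size buffer, index by index."""
    for i, byte in enumerate(data):
        buf[i] = byte  # raises IndexError instead of corrupting memory

buffer = [0] * 8
copy_into_buffer(buffer, b"short")          # fits: succeeds

try:
    copy_into_buffer(buffer, b"far too long for this buffer")
except IndexError:
    print("overflow detected and stopped")  # no silent takeover
```

The bounds check is exactly why Appel calls safe languages "just immune" to this class of attack: the defect still exists, but it can no longer hand control to the attacker.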
The Defense Advanced Research Projects Agency is so confident in the hack-proof software it developed for a remote-controlled quadcopter that it invited hackers at the DEF CON cybersecurity convention in August 2021 to try to break in and take it over. None succeeded, according to Ray Richards, program manager of DARPA’s Information Innovation Office. Work on the High-Assurance Cyber Military Systems (HACMS) demonstration concluded in 2017, Richards told Air Force Magazine, but this was the first time DARPA had invited all comers to try to hack it.
DARPA introduced a new unmanned drone with secure software that protects the aircraft’s control and navigation functions from a systems hack. During the summer of 2015, a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. Even though the Red Team was given six weeks with the drone and more access to its computing network than genuine bad actors could ever expect to attain, they failed to crack Little Bird’s defenses. “They were not able to break out and disrupt the operation in any way,” said Fisher.
The software was developed under the Defense Advanced Research Projects Agency’s (DARPA) High-Assurance Cyber Military Systems (HACMS) program, whose goal is to demonstrate the feasibility of using formal verification to improve software security for complex cyber-physical systems. The researchers turned to formal methods, a family of techniques programmers can use to create ultra-secure, ultra-reliable software. Using formal methods, HACMS software mathematically ensures the absence of the software flaws that let hackers break into and take over computer systems. Boeing used HACMS in its Unmanned Little Bird autonomous helicopter project.
The formal methods-built software for the quadcopter turned out to be unhackable, even by the pros at DEF CON, because the architecture rigidly separated the different functions of the mission control system. Even though it was still possible to break into the video camera software, the “pivot” to command and control that hackers so often rely on couldn’t happen. The mini drone runs software that is mathematically proven to be invulnerable to large classes of cyber attacks, said Fisher. The goal of HACMS is to enable high-assurance military systems ranging from unmanned vehicles (e.g., UAVs, UGVs, and UUVs) to weapons systems, satellites, and command and control devices.
Before it is ready for primetime, any piece of software should be able to satisfy at least two criteria:
- Under normal conditions, the software provides the desired features (e.g., you can edit, save, and share files); and
- When errors do occur, the software handles them gracefully (i.e., the program does not crash your whole computer system, leak private data, or give control to someone else).
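The second criterion can be made concrete with a small sketch (the function and inputs here are illustrative, not from any real system): a routine that parses untrusted input should reject bad data explicitly rather than crash or misbehave.

```python
def parse_port(text):
    """Parse a TCP port number, failing gracefully on bad input."""
    try:
        port = int(text)
    except ValueError:
        return None                # malformed input: reject, don't crash
    if not 0 < port < 65536:
        return None                # out of range: reject, don't misbehave
    return port

print(parse_port("8080"))   # normal condition: the feature works -> 8080
print(parse_port("80;rm"))  # hostile input: handled gracefully -> None
```

Satisfying the first criterion is the happy path at the top; satisfying the second is everything else in the function.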
Because most customers use software as it is intended, software programmers devote most of their attention to satisfying the first criterion: ensuring the software works properly under normal conditions. This is relatively easy to evaluate through user feedback, as customers tend to be vocal when a piece of software obviously misbehaves or omits an advertised feature.
The second criterion is much trickier, and it is the bane of the cybersecurity community. Virtually all software code contains defects that can cause the software to fail in some way. Humans write software, and our species is naturally prone to mistakes. Larger and more complex software applications multiply the opportunities for committing and overlooking errors by orders of magnitude. Human minds excel at creating highly capable software (the first criterion), but they are ill-equipped to identify and eliminate software defects. One defect might be harmless, while another might crash the entire program. Others can lead to security failures, which happen when attackers purposefully exploit defects to cause a specific kind of software failure that achieves their own objectives, such as leaking private data or handing them control.
The software industry has coalesced around two methods for reducing the error rate in software code. The first is simply education and knowledge exchange: experts collaborate on a global scale to share information about software vulnerabilities and how to fix them. Yet as computing has matured, this body of knowledge has become overwhelming, and it is extremely challenging to know when to apply lessons learned and how to verify implementation. The second is intensive testing. Yet testing can consume massive resources, and most testing only indicates the presence of defects; it cannot prove their absence. In the context of cybersecurity, where attackers actively hunt for any possible flaw that designers overlooked, these two methods have proved insufficient to secure software.
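The limitation of testing can be shown in miniature (this example is deliberately contrived): a respectable-looking test suite passes even though the function is wrong on one rare input, and only an exhaustive check over the (here deliberately tiny) input space settles the question.

```python
def buggy_abs(x):
    """Intended: return |x|. Contains a defect on one rare input."""
    if x == -1234:
        return x          # the overlooked defect
    return -x if x < 0 else x

# A typical test suite: every test passes, yet the defect remains.
for case in [0, 1, -1, 7, -99, 100000]:
    assert buggy_abs(case) == abs(case)

# Exhaustive checking, feasible only for tiny domains, actually proves
# the presence or absence of defects over that domain.
counterexamples = [x for x in range(-10000, 10001) if buggy_abs(x) != abs(x)]
print(counterexamples)    # -> [-1234]
```

Real programs have input spaces far too large to enumerate, which is why formal methods replace enumeration with mathematical proof.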
Formal methods encompass a group of technologies that aim to manage these problems much more effectively by supplementing human resources with computational resources. In the 1970s, when it became clear that computers would become the foundation of military power, the defense community realized it needed greater assurance that its software was of the highest quality and free of security issues. Early programmers knew they could not rely on human judgment alone to ferret out all possible security vulnerabilities. They needed ways to prove that critical pieces of software would not crash by accident or contain unknown security flaws that attackers could exploit. They wanted to maximize confidence that a specific software application would do only what its authorized users intended, and nothing else.
The pioneers of formal methods adapted mathematical logic to construct abstract representations of software (“I want this program to do X”) and then use advanced mathematical theorems to prove that the software code they wrote would only accomplish X. The term formal methods evolved over time, and today it represents a spectrum of sophistication, from relatively simple instructions for planning a software project to automated programs that function like a super spell-check for code. The method of analysis varies between different formal methods, but it is largely automated and ideally carried out with mathematical precision. Lightweight formal methods are already in widespread use today. Static type theory, for example, has become a standard feature in several programming languages. These methods require no specialized knowledge to use and provide low-cost protection against common software faults.
More sophisticated formal methods can prevent more complex problems. Formal verification is one such example: it enables programmers to prove that their software does not contain certain errors and behaves exactly according to its specification. These more advanced methods tend to require specialized knowledge to apply, but recent advances have been lowering that bar.
For a long time, formal methods were relegated to mission-critical use cases, such as nuclear weaponry or automotive systems, where designers were willing to devote immense time and resources to creating error-free software. More recent developments, including DARPA’s helicopter project, suggest formal methods are poised to remake how we design software and transform cybersecurity. In November 2016, the National Institute of Standards and Technology (NIST) delivered a comprehensive report to the White House recommending ways to achieve a “dramatic” reduction in software vulnerabilities. Devoting six pages to formal methods, the report noted that “formal methods have become mainstream in many behind-the-scenes applications and show significant promise for both building better software and for supporting better testing.”
Some verification tools are commercially viable. For example, CompCert, a formally verified optimizing C compiler developed in France, is being evaluated by Airbus and European certification agencies for use in compiling the fly-by-wire software that flies its jetliners. Formal verification tools “have been shown to be effective at finding defects in safety-critical digital systems including avionics systems,” according to a 2017 report released by the U.S. National Aeronautics and Space Administration (NASA) Langley Research Center. Also, the U.S. Federal Aviation Administration (FAA) has issued guidelines that allow the use of formal methods tools to replace some of the methods it accepts for “certification credit.”
NASA acknowledges this will be a slow process, noting, “There are still many issues that must be addressed before formal verification tools can be injected into the design process for digital avionics systems.” The report points out, “Most developers of avionics systems are unfamiliar with even the basic notions of formal verification, much less which tools are most appropriate for different problem domains,” adding that skills and expertise will be needed to use the tools effectively.
However, this is changing. Communications of the ACM, the journal of the Association for Computing Machinery, reported in July 2021 that Google and Mozilla have shipped formally verified components in their Web browsers, and that Amazon Web Services has employed the technique for key components of its critical cloud services. As software grows ever more complex, testing, traditionally the baseline technique for cybersecurity assurance, becomes more and more time-consuming and less and less effective, Richards said. “The ever-increasing focus on cybersecurity will increase the demand for large-scale application of formal methods,” he said.
DARPA’s HACMS Program Demonstrates Hack-Proof Unmanned Aerial Vehicles and Cars
Galois conducted a successful demonstration for the U.S. Defense Advanced Research Projects Agency’s High-Assurance Cyber Military Systems (HACMS) program.
For the DARPA HACMS program, Galois demonstrated its industry-first ability to:
- Prevent UAV hacking – Galois’ secure UAV software provides an alternative to currently available software that is open to remote takeover and other vulnerabilities. Galois’ software was evaluated by independent, world-class penetration testing teams, which were unable to gain remote access to the vehicle.
- Prevent car hacking – Galois’ technology guarantees protection against the automotive vulnerabilities demonstrated on ’60 Minutes’ that allow attackers to wirelessly take over automobile control systems.
The technology addresses the same security vulnerabilities in many systems, including modern automobiles and the Internet of Things (IoT).
“The message for organizations building connected vehicles, systems and products is that vulnerabilities are not a foregone conclusion if secure and reliable software is designed into their products up front,” said Rob Wiltbank, CEO of Galois. In the HACMS program, Galois is part of a team led by Rockwell Collins that also includes the University of Minnesota, National ICT Australia, and Boeing.
DARPA’s High-Assurance Cyber Military Systems (HACMS)
Embedded systems form a ubiquitous, networked, computing substrate that underlies much of modern technological society. Such systems range from large supervisory control and data acquisition (SCADA) systems that manage physical infrastructure to medical devices such as pacemakers and insulin pumps, to computer peripherals such as printers and routers, to communication devices such as cell phones and radios, to vehicles such as airplanes and satellites.
Such devices have been networked for a variety of reasons, including the ability to conveniently access diagnostic information, perform software updates, provide innovative features, lower costs, and improve ease of use. Researchers and hackers have shown that these kinds of networked embedded systems are vulnerable to remote attack, and such attacks can cause physical damage while hiding the effects from monitors.
The goal of the HACMS program is to create technology for the construction of high-assurance cyber-physical systems, where high assurance is defined to mean functionally correct and satisfying appropriate safety and security properties. Achieving this goal requires a fundamentally different approach from what the software community has taken to date.
Consequently, HACMS will adopt a clean-slate, formal methods-based approach to enable semi-automated code synthesis from executable, formal specifications. In addition to generating code, HACMS seeks a synthesizer capable of producing a machine-checkable proof that the generated code satisfies functional specifications as well as security and safety policies. A key technical challenge is the development of techniques to ensure that such proofs are composable, allowing the construction of high-assurance systems out of high-assurance components.
Key HACMS technologies include interactive software synthesis systems, verification tools such as theorem provers and model checkers, and specification languages. Recent fundamental advances in the formal methods community, including advances in satisfiability (SAT) and satisfiability modulo theories (SMT) solvers, separation logic, theorem provers, model checkers, domain-specific languages and code synthesis engines suggest that this approach is feasible.
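To ground the mention of SAT solvers: a verification condition is often reduced to a Boolean satisfiability question, which a solver then answers. The brute-force sketch below is purely illustrative; production tools such as MiniSat or the SMT solver Z3 use vastly smarter search.

```python
from itertools import product

def solve_sat(clauses, n_vars):
    """Brute-force SAT. Clauses are lists of ints: +i means variable i,
    -i means NOT variable i. Returns a satisfying assignment or None."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return assign
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3): satisfiable
print(solve_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # -> {1: False, 2: True, 3: True}

# x1 AND NOT x1: unsatisfiable, i.e., the queried bad state is unreachable
print(solve_sat([[1], [-1]], 1))                 # -> None
```

In verification, an "unsatisfiable" answer is the good news: no assignment of program states makes the error condition true.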
If successful, HACMS will produce a set of publicly available tools integrated into a high-assurance software workbench, which will be widely distributed for use in both the commercial and defense software sectors. HACMS intends to use these tools to (1) generate open-source, high-assurance operating system and control system components and (2) use these components to construct high-assurance military vehicles. HACMS will likely transition its technology to both the defense and commercial communities.
DARPA said that the software tools in the HACMS program will be adapted for larger platforms, such as Boeing’s Unmanned Little Bird helicopter. In the year since the Little Bird test, DARPA has been applying the tools and techniques from the HACMS project to other areas of military technology.
The technology that repelled the hackers was a style of software programming known as formal verification. Formal verification is the most rigorous way to thwart attacks against the IT systems and applications on which military, government, and commercial organizations rely. The traditional method of checking whether a program works is to test it: coders submit their programs to a wide range of inputs (or unit tests) to ensure they behave as designed. “One flaw anywhere in your software, and that’s the security vulnerability. It’s hard to test every possible path of every possible input,” Parno said.
HACMS Uses Formal Methods
It is highly desirable to be able to prove the security of a system, but to date such proofs have been difficult to produce. Formal methods techniques have been used to provide strong security and reliability guarantees at the operating system and application levels for a secure microkernel (seL4). To form a trustworthy embedded system, seL4 could provide the secure software layers (system and application services) for existing and emerging application domains and devices, for example smartphones, cyber-physical military systems, and medical devices. However, there are questions about whether the approaches used for seL4 can scale to ultra-complex hardware/software systems.
Formal methods are mathematical approaches to software and system development which support the rigorous specification, design and verification of computer systems. Key aspects of this definition are that formal methods are based on rigorous mathematics and can be used to prove properties of computer systems. The techniques can be applied to both software and hardware. Sometimes they are used directly on the code implementing a system; other times they are applied to higher-level models. The proofs are generally machine-checkable, providing a high degree of confidence that they are right. It is important to realize that the results only apply if the assumptions are satisfied and the guarantees may be weaker than needed. The assumptions include the extent to which models accurately and completely capture all the relevant aspects of the underlying system.
The formal methods consist of using formal specification as a way of defining what, exactly, a computer program does and formal verification as the way of proving beyond a doubt that a program’s code perfectly achieves that specification.“You’re writing down a mathematical formula that describes the program’s behavior and using some sort of proof checker that’s going to check the correctness of that statement,” said Bryan Parno, who does research on formal verification and security at Microsoft Research.
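A minimal sketch of the spec-plus-checker workflow Parno describes, with invented names: the specification is written as a mathematical predicate, and a checker confirms the code satisfies it. (Real proof checkers such as Coq or Dafny establish this over unbounded domains by deduction; the bounded enumeration here is only a stand-in.)

```python
# Formal specification: what clamp(x, lo, hi) must guarantee.
def spec_clamp(x, lo, hi, result):
    return lo <= result <= hi and (result == x or result in (lo, hi))

# Implementation: how the result is actually computed.
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Checker: verify the implementation against the specification over
# a bounded domain (a stand-in for a real proof checker).
def check(lo=-20, hi=20):
    return all(
        spec_clamp(x, a, b, clamp(x, a, b))
        for x in range(lo, hi)
        for a in range(lo, hi)
        for b in range(a, hi)   # only well-formed ranges, a <= b
    )

print(check())  # -> True: clamp meets its spec on every checked input
```

The separation matters: the spec says *what* must hold, the code says *how*, and the checker is the independent judge that the two agree.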
“Recently researchers have made big advances in the technology that undergirds formal methods: improvements in proof-assistant programs like Coq and Isabelle that support formal methods; the development of new logical systems (called dependent-type theories) that provide a framework for computers to reason about code; and improvements in what’s called “operational semantics” — in essence, a language that has the right words to express what a program is supposed to do,” writes Kevin Hartnett. “Today formal software verification is being explored in well-funded academic collaborations, the U.S. military and technology companies such as Microsoft and Amazon.”
Verifying all software to the level of full functional correctness is well beyond current capabilities. However, most software does not need to be verified. Researchers concluded that the most important artifacts to verify are those that can be leveraged to build systems we can trust even in the face of attackers. Appropriate candidates include separation kernels, hypervisors, real-time operating systems (RTOSes), compilers, file systems, web browsers, sandboxes, cryptographic algorithms and garbage collectors.
CertiKOS: A breakthrough toward hacker-resistant operating systems
A team of Yale researchers has unveiled CertiKOS, the world’s first operating system that runs on multi-core processors and shields against cyber attacks, a milestone that the scientists say could lead to a new generation of reliable and secure systems software. The CertiKOS verified operating system kernel is a key component of the Defense Advanced Research Projects Agency’s (DARPA) High-Assurance Cyber Military Systems (HACMS) program, which is used to build cyber-physical systems that are provably free from cyber vulnerabilities.
Led by Zhong Shao, professor of computer science at Yale, the researchers developed an operating system that incorporates formal verification to ensure that a program performs precisely as its designers intended — a safeguard that could prevent the hacking of anything from home appliances and Internet of Things (IoT) devices to self-driving cars and digital currency.
Computer scientists have long believed that computers’ operating systems should have at their core a small, trustworthy kernel that facilitates communication between the systems’ software and hardware. But operating systems are complicated, and all it takes is a single weak link in the code — one that is virtually impossible to detect via traditional testing — to leave a system vulnerable to hackers.
“If the code and spec do not match, there is a bug or loophole,’’ explains Shao. “What a certified operating system provides you with is a guarantee there are no such loopholes; under every situation, the code will behave exactly like its specification.”
This guarantees hackers cannot exploit the operating system, he says. In addition to verification, the other factor that sets CertiKOS apart from other OSs is that it has an extensible operating system kernel, meaning it can easily be customized and adapted for many different settings so more features can be added, Shao says.
One of the main breakthroughs of CertiKOS is that it supports concurrency, meaning that it can simultaneously run multiple threads (small sequences of programmed instructions) on multiple central processing unit (CPU) cores. Many in the field have long believed that the complexity of such a system also makes formal verification of functional correctness problematic or prohibitively expensive. This sets CertiKOS apart from other previously verified systems and allows CertiKOS to run on modern multi-core machines. The CertiKOS architecture is also designed to be highly extensible — that is, it can take on new functionalities and be used for different application domains.
In constructing the CertiKOS system, Shao and his team incorporate formal logic and new, layered deductive verification techniques. That is, they carefully untangle the kernel’s interdependent components, organize the code into a large collection of hierarchical modules, and write a mathematical specification for each kernel module’s intended behavior. The use of formal deductive verification to certify the system differs from the conventional method of checking a program’s reliability, in which the code writer tests the program against numerous scenarios.
“A program can be written 99% correctly — that’s why today you don’t see obvious issues — but a hacker can still sneak into a particular set-up where the program will not behave as expected,” Shao said. “The person who wrote the software worked with all good intentions, but couldn’t consider all cases.”
“The HACMS team uses the virtualization capability provided by CertiKOS to separate trusted from untrusted components,” DARPA program manager Ray Richards said. “This is an important ability that allows us to effectively build cyber-resilient systems. In the world where cybersecurity is a growing concern, this resiliency is a powerful attribute that we hope will be widely adopted by system designers.”
Andrew Appel, director of NSF’s DeepSpec consortium and a professor of computer science at Princeton, called CertiKOS “a real breakthrough,” noting that it can serve as a base for building highly secure systems from combinations of verified and untrustworthy components. “But just as important, the modular layered verification methods used in CertiKOS will be applicable not just to operating systems, but to many other kinds of software,” Appel said.
Only in recent years has a system like CertiKOS become possible, since the proofs for a certified kernel are too big for any human to check. Within the last 10 years, however, powerful computer programs known as proof assistants have been developed that can help construct and mechanically check large formal proofs.
“This is amazing progress,” said Greg Morrisett, a leading expert on software security and dean of computing and information sciences at Cornell University. “Ten years ago, no one would predict that we could prove the correctness of a single-threaded kernel, much less a multi-core one. Zhong and his team have really blazed a spectacular trail for the rest of us.”
Galois and Guardtime Federal Awarded $1.8M DARPA Contract to Formally Verify Blockchain-Based Integrity Monitoring System
Galois and Guardtime Federal announced they have jointly been awarded a $1.8 million contract by the Defense Advanced Research Projects Agency (DARPA) to verify the correctness of Guardtime’s Keyless Signature Infrastructure (KSI). The contract will fund a significant effort to advance the state of formal verification tools and blockchain-based integrity monitoring systems. Verifying the correctness of Guardtime’s KSI will demonstrate the scalability and practicality of formal verification methods as a means of establishing trust in critical systems.
Integrity monitoring systems like Guardtime Federal’s KSI detect evidence of advanced persistent threats (APTs) as they work to remain hidden in networks. APTs undermine the security of networks for long periods of time and have been central in many major network breaches. APTs carefully cover their tracks by removing evidence from system log files, adding information to “white-lists” used by security software, and altering network configurations. This project aims to verify the ability of keyless integrity monitoring systems to detect APTs and attest to the ongoing integrity of a system.
“Formal verification has evolved considerably over the past several years, but has only recently matured enough to tackle production-level software,” said Stephen Magill, Research Lead for Software Analysis at Galois. “This collaborative effort seeks to advance our understanding of the role that integrity analysis plays in system security and lead to improvements in formal verification tools and methods that will be applicable to other existing systems.”
“Guardtime Federal sees this formal verification of block chain and keyless infrastructure technology implemented to meet national security challenges as an amazing opportunity for our clients,” said David Hamilton, President of Guardtime Federal. “By subjecting our cyber defense infrastructure to this most sophisticated methodology we will test both typical and exotic boundary conditions enabling further refinements of our defenses for protecting the most precious national security secrets and configurations of operational systems.”
Data breaches cost the economy billions and affect government and private companies alike. One major factor in the severity of a breach is the length of time that the adversary can operate before being detected, which can often be months as they explore a network and identify the most valuable assets and data. Technology such as Guardtime’s KSI can be used to ensure intruders are unable to cover their tracks. Formal verification aims to provide mathematically grounded assurance that the KSI system will not be compromised no matter what the intruder does to subvert it. This provides a much stronger level of assurance than conventional testing, which typically only covers non-malicious or randomly generated data.
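The integrity property at stake can be illustrated in a greatly simplified form (KSI's actual design uses hash trees and distributed, keyless signing; this sketch is not its implementation): in a hash-chained log, each entry commits to its predecessor, so altering any past entry breaks every later link and the tampering becomes detectable.

```python
import hashlib

def link(prev_hash, entry):
    """Chain an entry to its predecessor with SHA-256."""
    return hashlib.sha256((prev_hash + entry).encode()).hexdigest()

def build_log(entries):
    """Return (entries, chain of hashes) for an append-only log."""
    hashes, prev = [], "genesis"
    for e in entries:
        prev = link(prev, e)
        hashes.append(prev)
    return list(entries), hashes

def verify_log(entries, hashes):
    """Recompute the chain; an edited entry breaks all later hashes."""
    prev = "genesis"
    for e, h in zip(entries, hashes):
        prev = link(prev, e)
        if prev != h:
            return False
    return True

entries, hashes = build_log(["login root", "read /etc/shadow", "logout"])
print(verify_log(entries, hashes))       # -> True: log intact

entries[1] = "read /var/log/motd"        # an APT rewriting its tracks
print(verify_log(entries, hashes))       # -> False: tampering detected
```

What formal verification adds on top of this mechanism is a proof that no sequence of intruder actions can forge a chain that verifies, rather than mere confidence from testing a handful of tampering scenarios.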
Crowd Sourced Formal Verification (CSFV) for COTS Software
Unreliable software places huge costs on both the military and the civilian economy. Currently, most Commercial Off-the-Shelf (COTS) software contains about one to five bugs per thousand lines of code. Formal verification of software provides the most confidence that a given piece of software is free of errors that could disrupt military and government operations. Unfortunately, traditional formal verification methods do not scale to the size of software found in modern computer systems. Formal verification also currently requires highly specialized engineers with deep knowledge of software technology and mathematical theorem-proving techniques. These constraints make current formal verification techniques expensive and time-consuming, which in turn make them impractical to apply to COTS software.
DARPA created the Crowd Sourced Formal Verification (CSFV) program to overcome these challenges. CSFV aims to investigate whether large numbers of non-experts can perform formal verification faster and more cost-effectively than conventional processes. The goal is to transform verification into a more accessible task by creating fun, intuitive games that reflect formal verification problems. Playing the games would effectively help software verification tools complete corresponding formal verification proofs.
The program envisions numerous benefits, including:
- Increased frequency and cost-effectiveness of formal verification for more types of common COTS software.
- Greatly expanded audience to participate in formal verification.
- Establishment of a permanent community of game players interested in improving software security.
CSFV’s Verigames web portal (www.verigames.com) offers free online games that translate players’ actions into program annotations that help formal verification. Gameplay generates mathematical proofs that can verify the absence of certain software flaws or bugs in common open source software written in the C and Java programming languages. If gameplay reveals potentially harmful code, DARPA will implement approved notification and mitigation procedures, including notifying the organization responsible for the affected software. Because CSFV verifies open source software that commercial, government and/or Department of Defense systems may use, prompt notification is essential to correct the software rapidly and mitigate risk of functional or security breakdowns.
CSFV games such as StormBound translate players’ actions into program annotations and generate mathematical proofs to help verify the absence of important classes of flaws in software written in the C programming language. We have developed an automated process that creates new puzzles for each verification problem in need of a solution. “With StormBound, we aim to investigate whether large numbers of non-experts playing the game can perform formal verification faster and more cost-effectively than conventional processes,” says Galois.