As military and civilian technological systems, from fighter aircraft to networked household appliances, become ever more dependent on the Internet, they also become more vulnerable to hackers and electronic intruders. Electronic system security has become an increasingly critical concern for the DoD and the broader U.S. population.
Internet security is a branch of computer security that covers not only the Internet itself, often involving browser security and the World Wide Web, but also network security as it applies to other applications and to operating systems as a whole. Its objective is to establish rules and measures to use against attacks over the Internet. The Internet is an insecure channel for exchanging information, which leads to a high risk of intrusion or fraud, such as phishing, online viruses, trojans, and worms. Many methods are used to protect data in transit, including encryption and from-the-ground-up security engineering. The current focus is on prevention as much as on real-time protection against both well-known and new threats.
TCP/IP protocols may be secured with cryptographic methods and security protocols. These include Secure Sockets Layer (SSL), succeeded by Transport Layer Security (TLS), for web traffic; Pretty Good Privacy (PGP) for email; and IPsec for network-layer security. IPsec is designed to protect TCP/IP communication in a secure manner. It is a set of security extensions developed by the Internet Engineering Task Force (IETF) that provides security and authentication at the IP layer by transforming data using encryption. Two main types of transformation form the basis of IPsec: the Authentication Header (AH) and the Encapsulating Security Payload (ESP). These two protocols provide data integrity, data origin authentication, and an anti-replay service. They can be used alone or in combination to provide the desired set of security services for the Internet Protocol (IP) layer.
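To make the ESP framing concrete, here is a minimal Python sketch (not a real IPsec implementation) that unpacks the two cleartext fields every ESP packet carries: the Security Parameters Index (SPI), which the receiver uses to look up the security association, and the sequence number, which drives the anti-replay service. The packet bytes below are hypothetical.

```python
import struct

def parse_esp_header(packet: bytes) -> dict:
    """Parse the cleartext portion of an ESP packet (RFC 4303).

    Only the SPI and sequence number travel in the clear; the
    payload that follows them is encrypted.
    """
    spi, seq = struct.unpack("!II", packet[:8])
    return {"spi": spi, "sequence": seq, "encrypted_payload": packet[8:]}

# Hypothetical packet: SPI 0x1000, sequence number 1, then ciphertext.
pkt = struct.pack("!II", 0x1000, 1) + b"\xde\xad\xbe\xef"
hdr = parse_esp_header(pkt)
print(hex(hdr["spi"]), hdr["sequence"])  # 0x1000 1
```

A receiver would reject a packet whose sequence number falls outside its anti-replay window, which is why the sequence number must be readable before decryption.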
The basic components of the IPsec security architecture are described in terms of the following functionalities: the AH and ESP security protocols; security associations for policy management and traffic processing; manual and automatic key management via the Internet Key Exchange (IKE); and algorithms for authentication and encryption.
The set of security services provided at the IP layer includes access control, data origin authentication, connectionless integrity, protection against replays, and confidentiality. The architecture allows these services to be used independently without affecting other parts of the implementation. IPsec can be deployed in a host or in a security gateway, protecting IP traffic in either environment.
Internet protocols themselves are also set to be upgraded: QUIC, a new transport protocol intended to replace TCP, and HTTP/3, the HTTP version that runs atop QUIC, will start to see wide deployment in 2020. After more than six years of building, reframing, and refinement, HTTP/3 and QUIC are primed to modernize the internet in a number of ways: faster response times, greater accessibility worldwide, and a new standard for built-in encryption, to name a few.
One of the most substantial gains from HTTP/3 is the impact it will have on APIs and the Internet of Things (IoT). Both, more often than not, operate over unpredictable networks: network quality, the quality of the transmission media, and the security underlying it all are highly dynamic, with packet loss and transmission errors being the primary causes of most data failures.
QUIC stands for “Quick UDP Internet Connections” and is Google’s attempt at rewriting the TCP protocol as an improved technology that combines aspects of HTTP/2, TCP, UDP, and TLS (for encryption), among other things. Google wants QUIC to slowly replace both TCP and UDP as the protocol of choice for moving binary data across the Internet, and for good reason: tests have shown that QUIC is both faster and more secure because of its encrypted-by-default design (the current HTTP-over-QUIC protocol draft uses the newly released TLS 1.3 protocol).
QUIC was proposed as a draft standard at the IETF in 2015, and HTTP-over-QUIC, a rewrite of HTTP on top of QUIC instead of TCP, was proposed a year later, in July 2016. Since then, HTTP-over-QUIC support has been added to Chrome 29 and Opera 16, as well as to LiteSpeed web servers. While initially only Google’s servers supported HTTP-over-QUIC connections, this year Facebook also started adopting the technology.
QUIC is also designed to be very fast. By offering 0-RTT and 1-RTT (round-trip time) handshakes, compared with TCP’s 3-way handshake followed by a separate TLS negotiation, QUIC makes connection establishment very fast.
Clients and servers begin their interaction with one another via transport and crypto handshakes. These establish that the two parties are ready to communicate, and set up the ground rules for doing so. TCP and TLS, the prevalent transport and crypto protocols, have to do their handshakes in order, which must occur before any data can be exchanged. This means that with TCP and TLS, an end user spends at least two round trips setting up communication before any web traffic can flow. That’s where QUIC comes in. QUIC collapses the transport and crypto handshakes together. As a result, only one round trip is necessary for setup before traffic can flow.
When re-establishing a connection to a known server, this can be reduced to one round trip with TCP and TLS version 1.3, but that is still a fair bit of time for the web: entire web pages can finish transferring and loading in that amount of time. Under the same conditions, QUIC sends web traffic right away, without waiting for any setup time. This is what QUIC calls Zero Round-Trip Time (0-RTT) connection establishment, and it is expected to be a significant improvement in latency for web pages and apps.
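The handshake accounting in the last two paragraphs can be tallied with a back-of-the-envelope model. Assuming a 50 ms round-trip time to the server (a made-up figure for illustration, not a measurement):

```python
RTT_MS = 50  # assumed round-trip time to the server

tcp_handshake = 1    # the TCP 3-way handshake costs one RTT before data can flow
tls13_handshake = 1  # TLS 1.3 adds one more RTT on a fresh connection

# Fresh TCP + TLS 1.3: the "at least two round trips" case.
fresh_tcp_tls = (tcp_handshake + tls13_handshake) * RTT_MS
# Resuming to a known server: TLS 1.3 0-RTT still pays for the TCP handshake.
resumed_tcp_tls = tcp_handshake * RTT_MS
# QUIC collapses the transport and crypto handshakes into one round trip.
fresh_quic = 1 * RTT_MS
# QUIC 0-RTT resumption: the request rides along with the very first flight.
resumed_quic = 0 * RTT_MS

print(fresh_tcp_tls, resumed_tcp_tls, fresh_quic, resumed_quic)  # 100 50 50 0
```

Even in this toy model, QUIC halves setup latency for fresh connections and eliminates it entirely on resumption.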
QUIC is highly reliable thanks to its support for the additional streams mentioned above, meaning that data transmission is assured with greater speed and accuracy. This reliability, combined with speed, offers superior congestion control and stream retransmission. In fact, the main objection raised against HTTP/3 – that it utilizes UDP, a relatively unreliable transport – is largely negated by these features.
QUIC solves the “parking lot problem” with connection migration, a feature that allows your connection to the server to move with you as you switch networks. A modern mobile device is capable of speaking to multiple networks, but for a variety of reasons it does not quickly detect and switch away from a network of terrible quality, for example from Wi-Fi to cellular. QUIC uses connection identifiers to make this possible: a server hands out these identifiers to a client within a connection. If the client moves to a new network and wishes to continue the connection, it simply uses one of these identifiers in its packets, letting the server know that it wishes to continue communicating, but from a new network.
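A toy sketch of the idea (hypothetical names and addresses, not a real QUIC implementation): the server keys its state on a connection ID rather than on the client’s network address, so a packet arriving with a known ID from a new address simply continues the session instead of resetting it.

```python
class Server:
    """Toy model of QUIC-style connection migration via connection IDs."""

    def __init__(self):
        self.connections = {}  # connection_id -> session state

    def open_connection(self, conn_id, client_addr):
        self.connections[conn_id] = {"addr": client_addr, "bytes": 0}

    def receive(self, conn_id, client_addr, data):
        session = self.connections[conn_id]
        if session["addr"] != client_addr:
            # Same connection ID from a new network: migrate, don't reset.
            session["addr"] = client_addr
        session["bytes"] += len(data)
        return session

server = Server()
server.open_connection(conn_id=7, client_addr="192.0.2.1:443")    # on Wi-Fi
server.receive(7, "192.0.2.1:443", b"hello")
state = server.receive(7, "198.51.100.9:601", b"world")           # now on cellular
print(state)  # {'addr': '198.51.100.9:601', 'bytes': 10}
```

Contrast this with TCP, where a connection is identified by the 4-tuple of addresses and ports, so any change of network address kills the connection outright.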
More secure communication
Current web communication is secured to the extent possible by TLS, but this still leaves a fair amount of metadata visible to third parties. Specifically, all of the TCP header is visible, and this can leak a significant amount of information.
One published study, for example, used information in TCP headers to detect which video on Netflix was being watched. TCP headers can also be, and often are, tampered with by third parties, while the communicating client and server are none the wiser. Encryption and privacy are fundamental to QUIC’s design. All QUIC connections are protected from tampering and disruption, and most of the headers are not even visible to third parties.
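To illustrate how much of a TCP segment travels in the clear, here is a small Python sketch that unpacks the always-visible header fields defined in RFC 793; the port and sequence values below are hypothetical.

```python
import struct

def visible_tcp_fields(segment: bytes) -> dict:
    """Unpack the cleartext fields at the start of a TCP header.

    Everything here - ports, sequence/ack numbers, flags, window -
    is visible to any on-path observer, and can be tampered with.
    """
    src, dst, seq, ack, off_flags, window = struct.unpack("!HHIIHH", segment[:16])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "flags": off_flags & 0x01FF,  # SYN/ACK/FIN/RST etc. bits
        "window": window,
    }

# Hypothetical SYN segment: ports 54321 -> 443, initial sequence 1000.
syn = struct.pack("!HHIIHH", 54321, 443, 1000, 0, (5 << 12) | 0x002, 65535)
fields = visible_tcp_fields(syn)
print(fields["dst_port"], bool(fields["flags"] & 0x002))  # 443 True
```

Sequence numbers and window sizes like these are exactly the kind of metadata traffic-analysis work exploits; in QUIC, the equivalent state is carried inside the encrypted portion of the packet.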
QUIC keeps flexibility for the future in mind, ensures that applications’ connections are confidential, and promises to provide better internet performance globally. These are all built into the protocol’s design. It’s exciting to think that, very soon, QUIC and HTTP/3 will be working quietly behind the scenes to make the internet better for everyone connected to it.
Additionally, the fact that QUIC has been developed for implementation in user space is notable. Unlike protocols built into the OS or firmware, QUIC can iterate quickly and effectively without having to deal with the entrenchment of each protocol version. This is a big deal for such a large protocol, and in many ways it should be considered a core feature in its own right.
HTTP/3 is the next iteration of the oft-used HTTP protocol family. It’s meant to be a replacement of sorts, though just as with HTTP/1, some level of co-existence is expected across the internet for the foreseeable future, given the nature of adopting a new protocol. Unlike its predecessors, which run over TCP, HTTP/3 is built upon QUIC, a Google/IETF hybrid that is foundationally a transport protocol developed on top of UDP. By being built upon UDP, QUIC manages to fix many of the core issues found in HTTP/2 while operating under a new implementation methodology. This adoption of UDP also allows significant increases in speed, not to mention reliability.
HTTP/3 is very similar to HTTP/2, but it offers some significant advancements and changes to the underlying method of utilization. Perhaps the most notable of these changes addresses the fact that the single connection of HTTP/2 becomes a bottleneck in a low-quality network environment: as network quality degrades and packets are dropped, the single connection slows the entire process down, and no additional data can be transferred during retransmission. HTTP/1 clients typically opened six parallel connections, which sidestepped much of this issue, but both protocols were designed for a network, and a time, in which today’s latency, speed, and concurrency demands weren’t yet a reality.
QUIC, and thereby HTTP/3, uses multiplexing to solve this issue. If one packet is lost, the additional connection streams established by HTTP/3 continue to function independently. In other words, if one packet fails, the rest of the connection streams can keep going while the affected stream repairs itself. This substantially reduces congestion, not to mention improves the general reliability of the protocol.
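A deliberately simplified model of the difference (an illustration, not protocol code): treat HTTP/2 as one shared TCP byte stream, where any loss stalls everything until retransmission, and QUIC as per-stream delivery, where only the stream that lost a packet has to wait.

```python
def deliverable_after_loss(streams, lost_stream):
    """Which streams can keep delivering data after a single packet loss?

    Toy model of head-of-line blocking: HTTP/2 shares one ordered TCP
    stream, so a loss anywhere blocks every stream; QUIC retransmits
    per stream, so only the affected stream waits.
    """
    http2 = []  # one ordered byte stream: everything waits on the retransmit
    quic = [s for s in streams if s != lost_stream]  # others keep flowing
    return http2, quic

streams = ["css", "js", "image"]
http2_ok, quic_ok = deliverable_after_loss(streams, lost_stream="image")
print(http2_ok)  # [] - all streams stall behind the lost TCP segment
print(quic_ok)   # ['css', 'js'] - unaffected streams keep flowing
```

This is the head-of-line blocking problem in miniature: the web page’s CSS and JavaScript shouldn’t have to wait on a lost image packet, and under QUIC they don’t.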
An intrinsic issue with HTTP/2 is actually not an issue of HTTP/2 itself – rather, it’s an issue of how vendors have chosen to implement it. Because this protocol is often “baked in” to routers, firewalls, and other network devices (not to mention middleboxes), any deviation from HTTP/2 is often seen as invalid, or worse, an attack. These devices are configured to only accept TCP or UDP between contacted servers and their users within a very strict, narrow definition of what expected traffic should look like – any deviation, such as the new functionality of an updated protocol, is almost instantly rejected because the devices simply don’t want to deal with it.
This issue is known as protocol ossification and is a huge problem in resolving the underlying issues of HTTP/2. New TCP options are either severely limited or outright blocked, so fixing HTTP/2 becomes less an issue of “what do we fix,” and more an issue of “how do we implement the fix.”