
Satellite Internet networks and TCP/IP optimization

Satellite communication offers a number of advantages over traditional terrestrial point-to-point networks. Satellite networks can cover wide geographic areas and can interconnect remote terrestrial networks (“islands”). Where terrestrial networks are damaged, satellite links provide an alternative. Satellites have a natural broadcast capability and thus facilitate multicast communication. Finally, satellite links can provide bandwidth on demand by using Demand Assignment Multiple Access (DAMA) techniques. A satellite network (also called a satcom network) comprises a set of satellite terminals, one or more gateways and a network control centre (NCC), operated by one operator and using a subset of the satellite resources (or capacity).

 

Satellite communications solutions also face significant challenges. Geosynchronous satellites in particular introduce large propagation delays: typically around 125 milliseconds per link (uplink or downlink), so roughly 250 ms for a one-way trip over the satellite and about double that, around 500 ms, for a round trip between client and server. This latency is considerable, and for interactive applications such as chat it can become unacceptable, sometimes causing applications to fail entirely.
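For a sense of scale, the delay follows directly from the geometry: a geostationary satellite sits roughly 35,786 km above the equator, so one leg of the trip at the speed of light takes about 120 ms. The short sketch below (plain Python, treating the slant range as the nominal altitude for simplicity, so real paths at low elevation are slightly longer) works out the figures quoted above.

```python
# Rough GEO latency estimate. Assumes the slant range equals the nominal
# GEO altitude (35,786 km); real paths are a bit longer at low elevations.
C = 299_792.458          # speed of light, km/s
GEO_ALTITUDE_KM = 35_786

one_leg = GEO_ALTITUDE_KM / C    # ground station <-> satellite
single_hop = 2 * one_leg         # up + down = one way across the link
round_trip = 2 * single_hop      # client -> server -> client

print(f"one leg:     {one_leg * 1000:6.1f} ms")      # ~119 ms
print(f"one-way hop: {single_hop * 1000:6.1f} ms")   # ~239 ms
print(f"round trip:  {round_trip * 1000:6.1f} ms")   # ~477 ms
```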

 

Additionally, because their signals must pass through the atmosphere, satellite links are especially prone to packet loss caused by environmental interference. The steadily rising demand from businesses, media companies and government entities has led to satellite bandwidth shortages and higher costs for the capacity that remains available, a situation expected to persist for the foreseeable future.

 

Latency vs. Bandwidth

Latency is the amount of time that it takes for a signal to travel from your computer to a remote server (such as the physical machine where Netflix stores its video) and back. Bandwidth, or download speed, is what most ISPs advertise on their plans. If you have a 100 Mbps internet connection, that means that your download speed is 100 Mbps. Download speed tells you how long it takes for a certain amount of data from the internet to reach your computer. If we think of data like water, your bandwidth is the size of the pipe leading to your house. A wider pipe can carry more water, and fill up a pool faster. If bandwidth is the width of the pipe filling up your pool, latency would be the length of the pipe. If you have a really long pipe, there will be a delay between when you turn the water on and when it actually starts flowing out of the end of the pipe. This usually won’t have a huge impact on how long it takes to fill a pool, but if you were filling a series of smaller buckets and had to turn the water on and off between each one, it would become pretty inconvenient.
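The pipe analogy can be made concrete with a simple model: the time to fetch an object is roughly one round trip of latency plus the object size divided by the bandwidth. The sketch below uses illustrative numbers (not measured values) to show why latency, not bandwidth, dominates for small transfers, while bandwidth dominates for bulk downloads.

```python
# Illustrative transfer-time model: total ≈ RTT + size / bandwidth.
# The link speeds and RTTs are example values only.

def transfer_time(size_bytes: float, bandwidth_bps: float, rtt_s: float) -> float:
    """One request/response exchange: one RTT of latency plus serialization time."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

small_object = 50_000        # 50 KB web object
bulk = 2_000_000_000         # 2 GB download

for name, rtt in [("terrestrial (30 ms RTT)", 0.030), ("GEO satellite (600 ms RTT)", 0.600)]:
    print(name)
    print(f"  50 KB object : {transfer_time(small_object, 100e6, rtt) * 1000:7.1f} ms")
    print(f"  2 GB download: {transfer_time(bulk, 100e6, rtt):7.1f} s")
```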

 

Satellite Internet

The growth in the use of Internet-based applications in recent years has led to telecommunication networks transporting an increasingly large amount of Internet Protocol (IP)-based traffic. Satellite internet is wireless internet beamed down from satellites orbiting the Earth. It’s a lot different from land-based internet services like cable or DSL, which transmit data through wires.

 

Satellite Internet generally relies on three primary components: a satellite – historically in geostationary orbit (GEO) but now increasingly in low Earth orbit (LEO) or medium Earth orbit (MEO) – a number of ground stations known as gateways that relay Internet data to and from the satellite via radio waves (microwave), and further ground stations to serve each subscriber, with a small antenna and transceiver.

 

Other components of a satellite Internet system include a modem at the user end which links the user’s network with the transceiver, and a centralized network operations centre (NOC) for monitoring the entire system.

 

Working in concert with a broadband gateway, the satellite operates a Star network topology where all network communication passes through the network’s hub processor, which is at the centre of the star. With this configuration, the number of ground stations that can be connected to the hub is virtually limitless.

 

HughesNet and Viasat are the two primary residential satellite internet providers in the US. In the near future, Starlink (from SpaceX) and Project Kuiper (from Amazon) will also offer satellite internet service.

 

Space segment

Geostationary satellites attracted interest as a potential means of providing Internet access. A significant enabler of satellite-delivered Internet has been the opening up of the Ka band for satellites. Marketed as the centre of the new broadband satellite networks is a new generation of high-powered GEO satellites positioned 35,786 kilometres (22,236 mi) above the equator and operating in the Ka band (18.3–30 GHz).

 

These new purpose-built satellites are designed and optimized for broadband applications, employing many narrow spot beams, which target a much smaller area than the broad beams used by earlier communication satellites. This spot beam technology allows satellites to reuse assigned bandwidth multiple times which can enable them to achieve much higher overall capacity than conventional broad beam satellites.

 

The spot beams can also increase performance and consequential capacity by focusing more power and increased receiver sensitivity into defined concentrated areas. Spot beams are designated as one of two types: subscriber spot beams, which transmit to and from the subscriber-side terminal, and gateway spot beams, which transmit to/from a service provider ground station. Note that moving off the tight footprint of a spotbeam can degrade performance significantly. Also, spotbeams can make the use of other significant new technologies impossible, including ‘Carrier in Carrier’ modulation.

 

In 2004, with the launch of Anik F2, the first high throughput satellite, a class of next-generation satellites providing improved capacity and bandwidth became operational. More recently, high throughput satellites such as ViaSat’s ViaSat-1 satellite in 2011 and HughesNet’s Jupiter in 2012 have achieved further improvements, elevating downstream data rates from 1–3 Mbit/s up to 12–15 Mbit/s and beyond. Internet access services tied to these satellites are targeted largely to rural residents as an alternative to Internet service via dial-up, ADSL or classic FSSes.

 

In 2013 the first four satellites of the O3b constellation were launched into medium Earth orbit (MEO) to provide internet access to the “other three billion” people without stable internet access at that time. Over the next six years, 16 further satellites joined the constellation, now owned and operated by SES.

 

Since 2014, a rising number of companies have announced work on internet access using satellite constellations in low Earth orbit. SpaceX, OneWeb and Amazon all plan to launch more than 1,000 satellites each. OneWeb alone raised $1.7 billion by February 2017 for the project, and SpaceX raised over $1 billion in the first half of 2019 alone for its service, called Starlink, and expects more than $30 billion in revenue by 2025 from its satellite constellation. Many planned constellations employ laser communication for inter-satellite links to effectively create a space-based internet backbone.

 

In September 2017, SES announced the next generation of O3b satellites and service, named O3b mPOWER. The constellation of 11 MEO satellites will deliver 10 terabits of capacity globally through 30,000 spot beams for broadband internet services. The first three O3b mPOWER satellites are scheduled to launch in Q3 2021. As of 2017, airlines such as Delta and American have been introducing satellite internet as a means of combating limited bandwidth on airplanes and offering passengers usable internet speeds.

 

Gateways

The internet service provider (ISP) gets the internet signal via fiber from a collection of data servers, moves that signal to a central station, or hub, then distributes it to the modems of individual subscribers.

 

Along with dramatic advances in satellite technology over the past decade, ground equipment has similarly evolved, benefiting from higher levels of integration and increasing processing power, expanding both capacity and performance boundaries. The Gateway—or Gateway Earth Station (its full name)—is also referred to as a ground station, teleport or hub. The term is sometimes used to describe just the antenna dish portion, or it can refer to the complete system with all associated components.

 

Access server/gateways manage traffic transported to/from the Internet. In short, the gateway receives radio wave signals from the satellite on the last leg of the return or upstream payload, carrying the request originating from the end-user’s site. The satellite modem at the gateway location demodulates the incoming signal from the outdoor antenna into IP packets and sends the packets to the local network.

 

Once the initial request has been processed by the gateway’s servers, sent to and returned from the Internet, the requested information is sent back as a forward or downstream payload to the end-user via the satellite, which directs the signal to the subscriber terminal. Each Gateway provides the connection to the Internet backbone for the gateway beam(s) it serves. The system of gateways comprising the satellite ground system provides all network services for satellite and corresponding terrestrial connectivity. Each gateway provides a multiservice access network for subscriber terminal connections to the Internet.

 

User terminal

At the far end of the outdoor unit is typically a small (2–3-foot, 60–90 cm diameter), reflective dish-type radio antenna. The VSAT antenna must have an unobstructed view of the sky to allow for proper line-of-sight (LOS) to the satellite. Four physical settings are used to ensure that the antenna is pointed correctly at the satellite: azimuth, elevation, polarization, and skew.
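As a hedged illustration of how the pointing parameters are derived, the sketch below computes the elevation angle toward a GEO satellite from standard spherical-earth geometry, using the station latitude and the longitude difference to the satellite’s sub-satellite point. The coordinates shown are made-up example values, and azimuth and skew would need additional quadrant handling that is not shown here.

```python
import math

# Spherical-earth look-angle sketch (elevation only).
# R_E: mean earth radius, R_S: GEO orbital radius (both km).
R_E, R_S = 6371.0, 42164.0

def geo_elevation_deg(station_lat_deg: float, delta_lon_deg: float) -> float:
    """Elevation toward a GEO satellite given station latitude and the
    longitude difference between station and sub-satellite point."""
    phi = math.radians(station_lat_deg)
    dlon = math.radians(delta_lon_deg)
    cos_gamma = math.cos(phi) * math.cos(dlon)   # central angle to sub-satellite point
    sin_gamma = math.sqrt(1 - cos_gamma ** 2)
    return math.degrees(math.atan2(cos_gamma - R_E / R_S, sin_gamma))

# Example: hypothetical station at 40° N, 10° of longitude away from the satellite
print(f"elevation ≈ {geo_elevation_deg(40.0, 10.0):.1f}°")
```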

 

“TRIA” is an acronym for “transmit-receive integrated assembly”; it is essentially a radio that can send and receive. A modem, in turn, serves as the interface between the radio signals received via the TRIA and your computer or router. The transmit and receive components are typically mounted at the focal point of the antenna, which receives/sends data from/to the satellite. The main parts are:

  • Feed – This assembly is part of the VSAT receive and transmit chain, which consists of several components with different functions, including the feed horn at the front of the unit, which resembles a funnel and has the task of focusing the satellite microwave signals across the surface of the dish reflector. The feed horn both receives signals reflected off the dish’s surface and transmits outbound signals back to the satellite.
  • Block upconverter (BUC) – This unit sits behind the feed horn and may be part of the same unit, but a larger (higher wattage) BUC could be a separate piece attached to the base of the antenna. Its job is to convert the signal from the modem to a higher frequency and amplify it before it is reflected off the dish and towards the satellite.
  • Low-noise block downconverter (LNB) – This is the receiving element of the terminal. The LNB’s job is to amplify the received satellite radio signal bouncing off the dish and filter out the noise, which is any signal not carrying valid information. The LNB passes the amplified, filtered signal to the satellite modem at the user’s location. (A simple sketch of the BUC and LNB frequency conversions follows this list.)
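The BUC and LNB steps above are simple frequency translations: the BUC adds a fixed local-oscillator (LO) frequency to the modem’s intermediate-frequency (IF) output, and the LNB subtracts its LO from the received downlink signal. The sketch below uses illustrative Ka-band LO values, not those of any particular product, just to show the arithmetic.

```python
# Hypothetical Ka-band frequency plan, purely for illustration.
BUC_LO_GHZ = 28.05   # example uplink local oscillator
LNB_LO_GHZ = 18.25   # example downlink local oscillator

def buc_upconvert(if_ghz: float) -> float:
    """Modem IF (L-band) + LO -> transmitted RF."""
    return if_ghz + BUC_LO_GHZ

def lnb_downconvert(rf_ghz: float) -> float:
    """Received RF - LO -> IF delivered to the modem."""
    return rf_ghz - LNB_LO_GHZ

tx_if = 1.25                       # 1250 MHz out of the modem
print(f"uplink RF: {buc_upconvert(tx_if):.2f} GHz")    # ~29.30 GHz
rx_rf = 19.70                      # example downlink carrier
print(f"modem IF:  {lnb_downconvert(rx_rf):.2f} GHz")  # ~1.45 GHz
```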

 

Indoor unit (IDU)

The satellite modem serves as an interface between the outdoor unit and customer-provided equipment (e.g. PC, router) and controls satellite transmission and reception. From the sending device (computer, router, etc.) it receives an input bitstream and converts or modulates it into radio waves, reversing the process for incoming transmissions, which is called demodulation. It provides two types of connectivity:

  • Coaxial cable (COAX) connectivity to the satellite antenna. The cable carrying electromagnetic satellite signals between the modem and the antenna is generally limited to no more than 150 feet in length.
  • Ethernet connectivity to the computer, carrying the customer’s data packets to and from the Internet content servers.

Consumer-grade satellite modems typically employ either the DOCSIS or WiMAX telecommunication standard to communicate with the assigned gateway.

A modem translates data so it can move between your internet-ready device and the satellite dish. You can connect some devices, like a computer, smart TV, or gaming console, directly to your modem using an ethernet cable.

 

Wi-Fi capabilities: However, those cables can get a bit messy, and you’ll still need Wi-Fi capabilities for devices like tablets and smartphones. That’s where a router comes in. It connects to the modem to give it Wi-Fi capabilities. A router broadcasts an internet signal wirelessly, so you can pick it up on your phone, laptop, or other device. HughesNet and Viasat satellite internet modems come with a router built in.

 

 

Network protocols

A protocol is the set of rules and conventions that communicating parties agree to use in their exchanges. Basic protocol functions include segmentation and reassembly, encapsulation, connection control, ordered delivery, flow control, error control, routing and multiplexing. Protocols are needed to enable parties to understand each other and make sense of received information. A protocol stack is a list of protocols (one protocol per layer). A network protocol architecture is a set of layers and protocols.

 

One major trend in any telecommunications network is to move towards IP network technologies. Satellite networks are following the same trend. As with all other communications protocols, TCP/IP is composed of different layers.

 

Application layer

The application layer protocols are designed as functions of user terminals or servers. The classic Internet application layer protocols include HTTP for the Web, FTP for file transfer, SMTP for email, Telnet for remote login, and DNS for domain name services.

Transport layer: TCP and UDP

The transmission control protocol (TCP) and user datagram protocol (UDP) are transport layer protocols of the Internet protocol reference model. They originate at the end-points of bidirectional communication flows, allowing for end-user terminal services and applications to send and receive data across the Internet.

 

TCP is responsible for verifying the correct delivery of data between client and server. Data can be lost in the intermediate network; TCP detects errors or lost data and retransmits until the data is correctly and completely received. TCP therefore provides a reliable service even though the underlying network may be unreliable. Operation of the Internet protocols does not require reliable transmission of packets, but reliable transmission reduces the number of retransmissions and thus improves performance.

 

UDP provides a best-effort service: it does not attempt to recover from errors or packet loss, so it offers only unreliable transport of user data. This can nevertheless be very useful for real-time applications, where retransmitting a packet may cause more problems than losing it.
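To make the contrast concrete, the toy sketch below layers a TCP-style timeout-and-retransmit loop on top of a simulated lossy channel, while the UDP-style sender makes a single attempt and leaves any loss to the application. This is an illustration of the principle only, not of the real TCP state machine.

```python
import random

def lossy_send(packet: str, loss_probability: float = 0.3) -> bool:
    """Simulated unreliable network: returns True if the packet arrives."""
    return random.random() > loss_probability

def send_reliably(packet: str, max_retries: int = 10) -> int:
    """TCP-style behaviour: retransmit on timeout until acknowledged."""
    for attempt in range(1, max_retries + 1):
        if lossy_send(packet):          # delivered, so an ACK comes back
            return attempt
        # timeout expired with no ACK: retransmit
    raise RuntimeError("gave up after repeated losses")

def send_best_effort(packet: str) -> bool:
    """UDP-style behaviour: one attempt, loss is the application's problem."""
    return lossy_send(packet)

random.seed(1)
print("TCP-like:", send_reliably("segment-1"), "attempt(s) until delivered")
print("UDP-like:", "delivered" if send_best_effort("datagram-1") else "lost")
```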

 

The network layer: IP

The IP network layer is based on a datagram approach, providing only best-effort service, i.e. without any guarantee of quality of service (QoS). IP is responsible for moving packets of data from router to router according to a four-byte destination IP address (in IPv4) until the packets reach their destination. Management and assignment of IP addresses is the responsibility of the Internet authorities.

TCP/IP over satellites

The Transmission Control Protocol/Internet Protocol (TCP/IP) suite has been carried transparently over satellite, and TCP/IP implementations have been shown to work well over satellite links.

 

However, the long propagation delay to satellites in geostationary earth orbit (GEO) has imposed limitations on interactive applications and on existing TCP implementations. Work on large windows and selective acknowledgements has been designed to overcome TCP’s problems with paths that exhibit high bandwidth-delay products, such as links over geostationary satellites.

 

Current TCP congestion control algorithms can mistake bursty satellite channel errors (which may result from how data-link layer coding choices perform under poor conditions) for network congestion. This leads to suboptimal use of the available satellite link capacity when recovering from errors: congestion avoidance decreases TCP’s sending rate dramatically, followed by a slow return to the previous transmission rate. The root cause is the lack of explicit congestion notification to distinguish network congestion from link errors, and tweaking congestion control algorithms to improve performance in the satellite environment cannot compensate for this missing information about the real cause of the loss.

 

The main issue affecting the performance of TCP/IP over satellite links is the very large feedback delay compared to terrestrial links. The inherent congestion control mechanism of TCP causes the source data rate to drop rapidly to very low levels after even a few packet losses within a window of data. The increase in data rate is controlled by ACKs received by the source, so a large feedback delay implies a proportional delay before the satellite link is used efficiently again.
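The interaction between loss and feedback delay can be seen in a toy additive-increase/multiplicative-decrease (AIMD) model: after a loss the congestion window is halved and then grows by roughly one segment per RTT, so the time to return to full speed scales with the RTT. The sketch below (simplified congestion avoidance only, with no slow start or fast recovery, and illustrative link parameters) makes that point.

```python
# Simplified AIMD congestion avoidance: cwnd grows by 1 MSS per RTT, halved on loss.
MSS = 1460          # bytes per segment

def rtts_to_recover(link_rate_bps: float, rtt_s: float) -> int:
    """RTTs needed to grow from half the link's window back to the full window."""
    full_window_segments = (link_rate_bps * rtt_s / 8) / MSS
    cwnd = full_window_segments / 2           # window just after a single loss
    rtts = 0
    while cwnd < full_window_segments:
        cwnd += 1                             # +1 segment per RTT
        rtts += 1
    return rtts

for name, rtt in [("terrestrial, 30 ms RTT", 0.030), ("GEO satellite, 600 ms RTT", 0.600)]:
    n = rtts_to_recover(10e6, rtt)            # 10 Mbit/s link
    print(f"{name}: ~{n} RTTs (~{n * rtt:.1f} s) to recover after one loss")
```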

 

Consequently, a number of TCP enhancements (NewReno, SACK) have been proposed that avoid multiple reductions in the source data rate when only a few packets are lost. Enhancements made within the end-to-end TCP protocol are called End System Policies.

 

Performance Enhancement Proxy (PEP) technology

Performance Enhancement Proxy (PEP) technology can mitigate the effects of latency, help fill the link with data and improve network performance. Installing a pair of PEPs at either end of a satellite link can trick each local network into believing the remote, satellite-linked network is right next door. However, not all PEPs are alike, and bundled satellite modem PEPs are constrained in their capabilities and deliver limited results.
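The basic idea behind a split-connection PEP can be pictured as a transparent TCP relay: the proxy terminates the client’s connection locally, so handshakes and ACKs happen at LAN speed, and maintains its own, separately tuned connection across the satellite hop. The minimal sketch below (hypothetical addresses and ports, no SCPS extensions, no error handling) shows only the connection splitting itself.

```python
import socket
import threading

# Minimal split-connection relay sketch. LISTEN_ADDR faces the local LAN;
# REMOTE_ADDR is the far side of the satellite link (example values only).
LISTEN_ADDR = ("0.0.0.0", 8080)
REMOTE_ADDR = ("198.51.100.10", 8080)   # documentation-range address, hypothetical

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes."""
    while data := src.recv(65536):
        dst.sendall(data)
    dst.close()

def serve() -> None:
    with socket.create_server(LISTEN_ADDR) as listener:
        while True:
            local_conn, _ = listener.accept()                     # terminated at LAN speed
            remote_conn = socket.create_connection(REMOTE_ADDR)   # separate satellite leg
            threading.Thread(target=pipe, args=(local_conn, remote_conn), daemon=True).start()
            threading.Thread(target=pipe, args=(remote_conn, local_conn), daemon=True).start()

if __name__ == "__main__":
    serve()
```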

 

Unlike simple PEPs, Expand Accelerators apply a mix of TCP acceleration, link conditioning, compression, and application-specific acceleration techniques to increase the performance of applications despite the degraded conditions. They offer extensive caching, compression, and QoS capabilities to overcome congestion and latency on the WAN to provide the most effective use of the available bandwidth. Expand also offers advanced technologies such as packet fragmentation, to reduce the effect of large file transfers and similar applications (e.g., FTP) on sensitive real-time traffic such as VoIP and server-based computing (Citrix/MS Terminal Services/VDI).

 

TCP is subject to a number of limitations on a WAN that severely affect its performance. The Expand Accelerator’s TCP acceleration (PEP) overcomes these limitations to increase performance for all TCP applications. It is based on the Space Communications Protocol Specifications (SCPS) developed by NASA and the U.S. Air Force. SCPS is a transparent, highly reliable set of TCP extensions that interoperates with other SCPS-based devices. With SCPS, the Accelerator acts as a transparent TCP proxy for all TCP traffic, overcoming latency through a variety of techniques such as enlarging transmission windows for higher throughput, bypassing TCP slow start, and applying advanced congestion-avoidance mechanisms.

 

Window Scaling.

Standard TCP stacks on workstations typically support a maximum transmission window of 64 KB. This means that, at any one time, only 64 KB of data may be transmitted without receiving an acknowledgment (ACK). When transferring data over a long-haul WAN connection, the maximum threshold must be increased to avoid severe under-utilization of the expensive link.
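The 64 KB figure matters because an un-scaled TCP connection can have at most one window of data in flight per round trip, so its throughput is capped at window ÷ RTT regardless of link speed. The short calculation below (assuming an illustrative 600 ms GEO round trip and a 20 Mbit/s carrier) shows the cap and the window size actually needed to fill the pipe, i.e. the bandwidth-delay product.

```python
# Window-limited throughput: at most one window per RTT can be in flight.
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s

def bandwidth_delay_product_bytes(link_bps: float, rtt_s: float) -> float:
    return link_bps * rtt_s / 8

RTT = 0.600                      # assumed GEO round-trip time
LINK = 20e6                      # assumed 20 Mbit/s satellite carrier

cap = max_throughput_bps(64 * 1024, RTT)
bdp = bandwidth_delay_product_bytes(LINK, RTT)

print(f"64 KB window over {RTT * 1000:.0f} ms RTT -> at most {cap / 1e6:.2f} Mbit/s")
print(f"filling a {LINK / 1e6:.0f} Mbit/s link needs ~{bdp / 1024:.0f} KB in flight")
```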

 

In order to keep the pipe full, Expand Networks’ TCP acceleration creates a much larger window to allow the link to be fully utilized. When the window size is enlarged, the destination does not send as many ACK packets, resulting in further improvement in bandwidth utilization. With TCP acceleration, a file transfer that previously used only a fraction of the available bandwidth will now fully utilize the link and complete faster. Sessions scale up faster and fill the pipe immediately, effectively avoiding the “saw-tooth” performance curve of standard TCP sessions.

 

Error Detection and Proactive Resolution.

Standard TCP assumes that all packet loss is caused by network congestion. However, on a dedicated and controlled WAN link, packet loss can also be due to bit errors caused by channel noise or other environmental conditions. The TCP Vegas and TCP Reno congestion avoidance mechanisms were designed to address these network issues and adjust the transmission speed accordingly. The TCP Vegas algorithm emphasizes packet delay, rather than packet loss, as a signal to help determine the rate at which to send packets. TCP Vegas detects congestion at an early stage based on increasing round-trip time (RTT) values of the packets in the connection.

 

TCP Reno, on the other hand, detects congestion only after it has actually happened, via packet drops. By implementing both the TCP Vegas and TCP Reno congestion avoidance mechanisms, the Expand Accelerator’s TCP acceleration feature can take the appropriate corrective measure both when there is network congestion and when there is bit-error packet loss. Expand Accelerators can adapt to any satellite environment, since users can enable either or disable both of these congestion-avoidance mechanisms.
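The difference between the two detection strategies can be sketched with the quantities each algorithm watches: Vegas compares the throughput it expects (cwnd ÷ base RTT) with the throughput it actually measures (cwnd ÷ current RTT) and backs off when the gap grows, while Reno reacts only after a loss. The toy sketch below shows the Vegas-style comparison; the alpha/beta thresholds are the conventional illustrative values, not any vendor’s settings.

```python
# Vegas-style early congestion detection: compare expected vs. actual rate.
ALPHA, BETA = 2, 4   # segments of "extra" data queued in the network (illustrative)

def vegas_decision(cwnd_segments: float, base_rtt_s: float, current_rtt_s: float) -> str:
    expected = cwnd_segments / base_rtt_s        # rate if nothing were queued
    actual = cwnd_segments / current_rtt_s       # rate actually achieved
    diff = (expected - actual) * base_rtt_s      # segments sitting in queues
    if diff < ALPHA:
        return "increase cwnd (path underused)"
    if diff > BETA:
        return "decrease cwnd (queues building: early congestion)"
    return "hold cwnd"

# RTT inflation, not loss, triggers the back-off:
print(vegas_decision(cwnd_segments=40, base_rtt_s=0.560, current_rtt_s=0.565))
print(vegas_decision(cwnd_segments=40, base_rtt_s=0.560, current_rtt_s=0.700))
```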

Dynamic Bandwidth Adjust.

In addition to the TCP Vegas and TCP Reno algorithms discussed above, Expand Accelerators have also implemented the Dynamic Bandwidth Adjust feature. Through a real-time feedback mechanism, the Accelerator can automatically adjust the bandwidth it sends to the WAN when congestion occurs in the network. This feature also provides effective traffic optimization when multiple paths with different bandwidths or delay characteristics exist. This mechanism ensures the most reliable optimization of all IP traffic at all times, and can also help in environments with multiple satellite links and backup links for disaster recovery.

Fast Start.

The proxy operation of the Accelerator means that TCP sessions are terminated locally, so that establishing and tearing down TCP connections takes place at LAN speeds. This increases the link utilization and overcomes TCP’s slow start and congestion avoidance on both sides of the network, resulting in better and more immediate response for users.

 

Application Acceleration

Expand also offers several “plugins” that add advanced caching techniques and packet aggregation capabilities to optimize CIFS and accelerate specific applications such as HTTP and FTP; highly interactive applications such as Citrix, Terminal Services, and Telnet; and virtual desktop solutions using Virtual Desktop Infrastructure (VDI).

 

HTTP and FTP.

Web-enabled applications are characterized by many small objects (logos, graphics, etc.) that require multiple round-trips over the WAN, resulting in poor performance. Using caching techniques, Expand Accelerators serve the objects locally at LAN speeds. Eliminating repetitive content transfers over the WAN speeds delivery and saves bandwidth for both HTTP and FTP. Utilizing local termination of the HTTP session also speeds application response. The Accelerator’s byte-level caching and compression work in combination with its Layer-7 QoS to enhance response times; it seamlessly compresses web services (HTML, xHTML, Javascript, J2EE, JSP, etc.) and works on most MIME-types (user-configurable).
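The caching benefit described here boils down to serving repeated objects locally instead of paying the satellite round trip again. A minimal sketch of that idea follows: an in-memory cache keyed by URL in front of a standard-library fetch. Real accelerators also validate freshness (Cache-Control, TTLs), which is omitted here; the URL is a placeholder example.

```python
import urllib.request

# Toy object cache: the first request pays the WAN round trip, repeats are local.
_cache: dict[str, bytes] = {}

def fetch(url: str) -> bytes:
    if url in _cache:                              # cache hit: served at LAN speed
        return _cache[url]
    with urllib.request.urlopen(url) as resp:      # cache miss: full satellite round trip
        body = resp.read()
    _cache[url] = body                             # real proxies also honour Cache-Control
    return body

# First call goes over the WAN; the second is answered from the local cache.
page = fetch("https://example.com/")
page_again = fetch("https://example.com/")
print(len(page), "bytes, second request served locally:", page is page_again)
```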

Interactive Applications.

Virtual desktop solutions such as Virtual Desktop Infrastructure (VDI) and highly interactive applications, such as Citrix Presentation Server (XenAPP), Terminal Services, and Telnet, are characterized by a request-reply protocol with relatively small packets. Although Citrix ICA and RDP protocols in particular have been optimized to deal with WAN latency, they still pose a challenge to application acceleration solutions that use block caching. As noted above, because blocks are larger than the size of interactions that these applications generate, it is difficult to get a cache hit without imposing additional latency required by the buffering needed to build a sample of sufficient size to generate a hit. However, Expand Accelerator’s byte-level caching and compression has the granularity necessary to optimize highly interactive applications, and is capable of increasing throughput by an average of 300% and peaks of more than 1,000%.

In addition, for interactive applications and solutions such as Citrix, Terminal Services, Telnet, and VDI, Expand offers a plug-in using packet aggregation that optimizes bandwidth utilization further, increasing the number of user sessions by an average of two to three times, with peaks of more than 10 times. It does all this with superior network, server, and user performance, all on the same infrastructure. More sophisticated than either a data reduction or compression technique, this application-specific optimization actually multiplexes sessions together temporarily for transport over the WAN.

 

Caching

Expand Accelerators offer caching at multiple levels (bit, byte, object, and file), so that, unlike other solutions, they can deliver benefits not only for “the usual suspects” (standard TCP-based applications) but also for non-TCP and interactive applications. They also incorporate advanced predictive algorithms to accelerate traffic the first time it is seen. The Accelerator’s integration with Microsoft Domains enables it to offer synchronous file access with SMB Signing, providing full security and protection of data integrity.

 

The Accelerator offers a DNS acceleration service that can overcome the poor performance associated with a centralized DNS server deployment over a satellite connection. The Accelerator provides a transparent DNS caching solution that can process both UDP and TCP DNS requests, answering them locally. This feature does not require any special configuration on PCs or servers. It is simple to deploy and use. By enabling DNS acceleration on the remote Accelerator, the DNS records are cached and available locally across the entire branch office. This, in turn, significantly reduces latency, bandwidth consumption, and DNS server load, and improves the user experience.
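Conceptually, the DNS acceleration described above is a local cache of name-to-address answers with an expiry time, so repeated lookups from the branch never cross the satellite link. A minimal sketch follows, using the standard resolver (`socket.gethostbyname`) as the upstream lookup and a fixed illustrative TTL rather than the TTL carried in the real DNS response.

```python
import socket
import time

# Toy DNS cache: answers are kept locally until their TTL expires.
TTL_SECONDS = 300                      # illustrative fixed TTL
_dns_cache: dict[str, tuple[str, float]] = {}

def resolve(name: str) -> str:
    entry = _dns_cache.get(name)
    if entry and entry[1] > time.time():        # fresh cached answer: no WAN trip
        return entry[0]
    address = socket.gethostbyname(name)        # cache miss: query crosses the satellite
    _dns_cache[name] = (address, time.time() + TTL_SECONDS)
    return address

print(resolve("example.com"))   # first lookup goes upstream
print(resolve("example.com"))   # second lookup answered from the local cache
```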

 

Compression

The Accelerator’s compression option can compress almost any type of traffic across the WAN, saving valuable bandwidth and allowing more applications and users over existing links. Expand Accelerators compress at the byte level as well as the object and file level, and can provide the benefits of compression and caching for all IP traffic, including interactive traffic and real-time transmissions.
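The bandwidth saving from compression is easy to quantify with the standard library: the sketch below compresses a block of repetitive text (typical of HTTP headers, HTML or log traffic) and reports the ratio. Real accelerator appliances compress at the byte-stream level across flows, which this simple per-block example does not attempt.

```python
import zlib

# Per-block compression example; accelerator appliances work on byte streams.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 200

compressed = zlib.compress(payload, 6)
ratio = len(payload) / len(compressed)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes (~{ratio:.0f}x smaller)")
```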

 

Expand’s compression is also flexible, offering both out-of-band transparency and tunneling modes. Expand’s true transparency preserves existing infrastructure investments and maintains full visibility through any management solutions, while tunneling is available for those network configurations where WAN visibility isn’t desired and full packet compression is delivered.

 

Quality of Service

The Accelerator’s advanced QoS and traffic shaping mechanisms and intuitive management interface make it easy to prioritize applications and guarantee bandwidth to the applications critical to a business. The Accelerator goes far beyond simple queuing with dynamic QoS that is tightly integrated with application acceleration and adapts to network conditions to maintain chosen priorities even during periods of high congestion. Not only does the Accelerator QoS share bandwidth not needed by higher-priority applications, its application-aware technology also prevents higher-priority or highly accelerated applications from choking out lower-priority ones.

 

Layer-7 QoS and traffic discovery capabilities can be used to classify, monitor and prioritize network applications according to business objectives. Layer-7 QoS can guarantee optimal application performance regardless of WAN conditions by assigning a priority or bandwidth guarantees and limits for each application with full granularity. The Expand Accelerator QoS not only understands the difference between applications in terms of their importance to the business, it understands the difference between different sites in terms of their application usage. For example, the Accelerator can ensure that an ERP application gets higher priority than a CRM application for manufacturing sites, while the opposite is true for call centers. Furthermore, unlike many competing solutions, the Accelerator provides both outbound and inbound QoS. Even if a remote site does not have an Expand appliance, the datacenter appliance can throttle application behavior at the remote end to prevent link and server overload.
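A simple way to picture QoS scheduling is a queue drained in priority order, so a high-priority application (say, voice or interactive traffic) is always sent before bulk traffic waiting behind it. The sketch below uses a strict-priority queue as the simplest possible illustration; the Accelerator’s actual scheduler is more sophisticated (bandwidth guarantees, dynamic adaptation) than this.

```python
import heapq

# Strict-priority scheduler sketch: lower number = higher priority.
queue: list[tuple[int, int, str]] = []
order = 0

def enqueue(priority: int, packet: str) -> None:
    global order
    heapq.heappush(queue, (priority, order, packet))   # order keeps FIFO within a class
    order += 1

def dequeue() -> str:
    return heapq.heappop(queue)[2]

enqueue(3, "bulk FTP segment")
enqueue(1, "VoIP frame")
enqueue(2, "interactive Citrix update")
enqueue(3, "bulk FTP segment")

while queue:
    print("send:", dequeue())
# VoIP and interactive traffic leave the box before the bulk transfer does.
```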

 

Packet Fragmentation

Even with proper QoS priorities, applications that transfer large amounts of information, such as CIFS, FTP, and backup systems, can effectively starve real-time applications such as VoIP and video over the low-speed links that are generally used for smaller remote sites. The problem is that even though the real-time application may have priority, the bulky nature of large-transfer applications takes too long to clear the link even when queuing and traffic shaping are enabled. The added latency that results can make VoIP, for instance, impossible for many branch offices. The solution applied by the Expand Accelerator is to reduce the size of data packets and intelligently fragment packets depending on the effective link speed and VoIP traffic profiles. Packet fragmentation also stabilizes network jitter and latency, maintaining optimal Accelerator performance for VoIP.
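The benefit of fragmentation on slow links is plain serialization arithmetic: a full 1500-byte frame occupies a 256 kbit/s link for almost 47 ms, so a voice packet queued behind it waits that long, whereas behind a 256-byte fragment it waits only a few milliseconds. The link speed and fragment sizes below are illustrative values.

```python
# Worst-case wait for a VoIP packet queued behind one data frame.
def serialization_delay_ms(frame_bytes: int, link_bps: float) -> float:
    return frame_bytes * 8 / link_bps * 1000

LINK = 256_000   # 256 kbit/s branch-office link (illustrative)

for size in (1500, 512, 256):
    print(f"{size:4d}-byte frame ahead of VoIP -> wait {serialization_delay_ms(size, LINK):5.1f} ms")
```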

 


References and Resources also include:

https://ecfsapi.fcc.gov/file/6520219725.pdf
