
Why your Satellite Network's Protocol Stack matters more than you think


The following is an analysis of existing congestion control research and its implications for satellite network operators, based on synthesized findings from IETF, ACM, and IEEE publications.


by Rajat Bhambani, Senior Principal Terminal Design & Development Engineer, SES




The overlooked performance constraint

Satellite network capacity has increased dramatically over the past decade through advances in modulation, coding, and antenna technology. High-throughput satellites now deliver hundreds of gigabits per second, yet application performance has not scaled proportionally. Field measurements and controlled studies increasingly point to transport-layer protocol behavior as a significant constraint on user experience.


Recent analysis from the Internet Engineering Task Force (IETF) shows that TCP (Transmission Control Protocol) congestion control algorithms, the rules that govern how fast endpoints may send data across a network, make assumptions that satellite links systematically violate. As capacity increases, these protocol-layer limitations become more significant relative to radio frequency constraints.


This represents a shift in the satellite industry's optimization focus. Where latency and bandwidth were once the primary performance bottlenecks, transport-layer behavior now warrants equal attention from network architects and operators.


Understanding loss-based congestion control

Most Internet traffic today uses TCP with loss-based congestion control. The dominant algorithm, TCP CUBIC, interprets packet loss as a signal of network congestion: on each loss event it cuts its congestion window by roughly 30 percent, a response designed for terrestrial networks where packet loss primarily results from router queue overflow.
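
To make the loss response concrete, here is a minimal sketch of CUBIC's control law using the constants from RFC 9438 (β = 0.7, C = 0.4). It is an illustration, not a kernel implementation, and the class and variable names are ours:

```python
# Minimal sketch of CUBIC window dynamics (constants per RFC 9438).
BETA = 0.7  # multiplicative decrease: window shrinks to 70% on any loss
C = 0.4     # CUBIC scaling constant (packets / s^3)

class CubicSketch:
    def __init__(self, cwnd=1000.0):
        self.cwnd = cwnd    # congestion window, in packets
        self.w_max = cwnd   # window size just before the last loss

    def on_loss(self):
        """Any packet loss, congestion-induced or not, cuts the window."""
        self.w_max = self.cwnd
        self.cwnd *= BETA

    def grow(self, t):
        """Cubic growth t seconds after loss: W(t) = C*(t - K)^3 + w_max."""
        k = (self.w_max * (1 - BETA) / C) ** (1 / 3)  # seconds to regain w_max
        self.cwnd = C * (t - k) ** 3 + self.w_max
        return self.cwnd

cc = CubicSketch()
cc.on_loss()  # e.g. a rain-fade loss that has nothing to do with congestion
print(f"after loss: {cc.cwnd:.0f} packets")      # 700
print(f"after 5 s:  {cc.grow(5.0):.0f} packets")  # still below the original 1000
```

With a 1,000-packet window, the recovery time K works out to roughly nine seconds, so a single spurious loss costs many GEO round trips of depressed throughput.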


Satellite links, however, experience packet loss from multiple non-congestion sources: rain fade attenuation, scintillation effects, link handoffs in non-geostationary systems, and adaptive coding/modulation transitions. IETF analysis demonstrates that CUBIC's assumptions create significant performance constraints under these conditions.


The mathematical relationship is striking. IETF Internet-Draft analysis shows that for a link with 100 milliseconds round-trip time to sustain 10 Gbps throughput, CUBIC requires packet loss rates below 0.000003 percent—a threshold impossible to maintain on satellite links experiencing atmospheric propagation effects. At more realistic loss rates of 1 percent, which operators commonly observe during moderate rain fade, representative analyses show throughput degrading to single-digit megabits per second despite substantially higher available capacity.
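
The arithmetic is easy to reproduce. The sketch below evaluates the Reno-style Mathis bound, throughput ≤ (MSS/RTT) × (1.22/√p), from Mathis et al. (1997); CUBIC's exact response function differs in detail, but it illustrates the same trend. The link parameters here are illustrative assumptions:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on loss-based TCP throughput (Mathis et al., 1997):
    throughput <= (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2) ~ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (sqrt(3 / 2) / sqrt(loss_rate))

# 1% loss (moderate rain fade), 1460-byte segments:
for rtt_s in (0.1, 0.6):  # 100 ms path vs. ~600 ms GEO round trip
    bw = mathis_throughput_bps(1460, rtt_s, 0.01)
    print(f"RTT {rtt_s * 1000:.0f} ms, 1% loss -> {bw / 1e6:.2f} Mbit/s")
# -> about 1.4 Mbit/s at 100 ms and 0.24 Mbit/s at 600 ms,
#    no matter how much capacity the satellite link actually has.
```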


This phenomenon explains observed discrepancies between satellite link capacity and delivered application performance. The protocol stack interprets environmental loss as congestion and throttles transmission accordingly.


Theoretical throughput comparison calculated using the Mathis equation (Mathis et al., 1997) for loss-based congestion control and model-based estimation principles documented in IETF BBR specifications. Values assume a 100 Mbps link capacity and a 600 ms RTT typical of GEO satellites. Demonstrates the fundamental performance difference between algorithms that interpret packet loss as congestion (CUBIC) and those that maintain explicit bandwidth models (BBR). Sources: calculation methodology per RFC 3649 (HighSpeed TCP) and IETF draft-ietf-ccwg-bbr-04.

Model-based alternatives: a different approach

Recent research has explored congestion control algorithms that estimate available bandwidth directly rather than inferring it from packet loss. These model-based approaches decouple performance from random loss events by continuously measuring delivery rate and round-trip time to construct an explicit network model.


Google developed and deployed one such algorithm, TCP BBR (Bottleneck Bandwidth and Round-trip propagation time), which is documented in IETF specifications and peer-reviewed publications. BBR continuously probes for the bottleneck bandwidth and the minimum round-trip time, and uses these two measurements to set its sending rate. The most recent iteration, BBRv3, balances high performance under loss with fairness to legacy protocols, making it suitable for modern multi-orbit satellite deployments. ACM Queue reported that Google deployed BBR across its infrastructure in 2016, and the algorithm now carries a substantial fraction of Internet traffic at large-scale service providers, including YouTube, Google Cloud, and various content delivery networks.
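
In outline, BBR's model reduces to two running estimates: a windowed maximum of the measured delivery rate and a windowed minimum of the round-trip time. The sketch below shows that skeleton under simplifying assumptions (per-ACK rate samples, no pacing-gain cycling or probe states); the names are illustrative rather than taken from the specification's pseudocode:

```python
import collections

class BbrModelSketch:
    """Toy version of BBR's two estimators; omits probing and loss handling."""

    def __init__(self, window=10):
        self.bw_samples = collections.deque(maxlen=window)  # windowed-max filter
        self.min_rtt_s = float("inf")                       # windowed-min RTT

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # The rate estimate comes from ACKed data over time, so a random
        # loss removes one sample instead of collapsing the sending rate.
        self.bw_samples.append(delivered_bytes / interval_s)
        self.min_rtt_s = min(self.min_rtt_s, rtt_s)

    @property
    def btlbw(self):  # bottleneck bandwidth estimate, bytes/s
        return max(self.bw_samples)

    def pacing_rate(self, gain=1.0):
        return gain * self.btlbw

    def cwnd_bytes(self, gain=2.0):
        # Window sized to the bandwidth-delay product, not to loss events.
        return gain * self.btlbw * self.min_rtt_s

m = BbrModelSketch()
m.on_ack(delivered_bytes=1_250_000, interval_s=0.1, rtt_s=0.6)  # GEO-like RTT
print(f"btlbw ~ {m.btlbw * 8 / 1e6:.0f} Mbit/s, cwnd ~ {m.cwnd_bytes() / 1e6:.1f} MB")
```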


The operational difference is significant. While loss-based algorithms reduce transmission rate when packet loss occurs, model-based algorithms maintain transmission rate based on measured delivery capacity. Field measurements on operational GEO satellite networks documented in peer-reviewed literature show model-based approaches achieving significantly faster connection startup times and substantially lower latency variance compared to loss-based algorithms under equivalent conditions.


Independent analysis published in IEEE proceedings demonstrates similar patterns. At packet loss rates of 1 percent—conditions satellite operators encounter during rain fade or other propagation impairments—loss-based algorithms utilized approximately 3-5 percent of available link capacity, while model-based alternatives maintained 85-95 percent utilization.


Throughput comparison across satellite link configurations. Scenarios 1-3 represent theoretical performance estimates calculated using standard TCP performance models for different orbit types and loss conditions. Scenario 4 shows field validation from operational satellite network measurements, demonstrating that real-world performance aligns with theoretical predictions. Sources: theoretical models per IETF TCP performance analysis; field data from Claypool et al., Passive and Active Measurement Conference (PAM), 2021.

Standards development and industry adoption

The IETF has documented these performance characteristics through multiple working groups focused on congestion control and satellite communications. Internet-Draft documents analyzing TCP performance over satellite links have established mathematical foundations that explain the observed behavior.


Industry deployment has occurred primarily in terrestrial networks and content delivery applications. Amazon Web Services, Akamai, and major cloud providers have implemented model-based congestion control for long-distance data transfer. The Linux kernel included BBR as a standard option beginning with version 4.9 in 2016, making it available for widespread deployment without custom software development.
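
On such a kernel, selecting BBR needs no application rewrite. A minimal Linux-only sketch using the standard TCP_CONGESTION socket option follows; system-wide selection instead uses the net.ipv4.tcp_congestion_control sysctl, and both require the tcp_bbr module to be available:

```python
import socket

# Ask for BBR on a single TCP socket (Linux; the kernel lists permitted
# algorithms in /proc/sys/net/ipv4/tcp_available_congestion_control).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("congestion control:", algo.rstrip(b"\x00").decode())
except OSError as err:
    print("bbr not available on this host:", err)
finally:
    s.close()
```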


Satellite-specific deployment remains limited despite documented performance improvements. This represents a standards adoption gap: the protocols exist, have undergone peer review, and demonstrate operational effectiveness, yet satellite network infrastructure has not widely incorporated them.


The 3rd Generation Partnership Project (3GPP) has begun addressing this gap through 5G Non-Terrestrial Network (NTN) standards, which explicitly recognize the need for transport layer optimization in satellite deployments. Industry recognition of these challenges has accelerated the integration of model-based congestion control mechanisms into satellite network architectures.


Operational implications for satellite networks

These research findings have several implications for satellite network operators:


  • Performance under adverse conditions: Satellite networks experience variable loss from atmospheric propagation, particularly in tropical regions with frequent rain. Research indicates that congestion control algorithm selection significantly affects performance during these events. Operators measuring customer experience during rain fade may find protocol-layer optimization yields substantial improvements.

  • Service differentiation opportunities: Model-based congestion control could enable premium service tiers with predictable performance under adverse weather conditions. While basic service experiences throughput degradation during rain fade, premium tiers using optimized protocols could maintain higher performance levels.

  • Multi-orbit network architecture: Operators deploying combined geostationary and non-geostationary constellations face congestion control considerations for routing decisions. GEO links have stable but high latency, while LEO links have lower latency but experience frequent handoffs. Research suggests protocol selection interacts with orbit selection in determining end-to-end performance.

  • Infrastructure compatibility: Transport-layer optimization requires consideration of existing customer premise equipment, gateways, and intermediate network elements. Some performance-enhancing proxies deployed in satellite networks may interact unpredictably with newer congestion control algorithms. Operators should evaluate compatibility across their infrastructure.

  • Standards evolution: The IETF continues developing congestion control specifications with input from satellite operators. Active participation in standards development allows operators to ensure emerging protocols address satellite-specific requirements.


Implementation considerations

Satellite operators evaluating congestion control alternatives should consider several factors:


  • Testing infrastructure: Laboratory evaluation using network emulators can characterize algorithm behavior before operational deployment. Traffic shaping tools such as Linux tc/netem can simulate satellite link characteristics, including latency, bandwidth, and loss patterns, letting operators validate performance predictions in controlled test environments (see the sketch after this list).

  • Limited deployment: Initial implementation on isolated terminals or non-production capacity allows field validation without customer impact. Comparing performance between legacy and alternative protocols under real-world conditions provides operational data.

  • Customer equipment: End-user terminals require compatible software stacks to utilize alternative congestion control protocols. Linux-based customer premise equipment can support BBR through kernel configuration. Proprietary terminal software may require vendor cooperation for protocol updates.

  • Gateway infrastructure: Some satellite architectures employ performance-enhancing proxies at network gateways. These proxies may need updates to support newer congestion control algorithms or may need to be bypassed for specific traffic.

  • Monitoring and measurement: Operators should establish metrics for congestion control performance evaluation, including throughput under various loss conditions, latency stability, and flow fairness. Baseline measurements using current protocols enable quantitative comparison.
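
As an example of the testing-infrastructure point above, the following sketch drives Linux tc/netem to emulate a GEO-like link (300 ms one-way delay for a roughly 600 ms RTT, 1 percent loss, 100 Mbit/s). The interface name and parameter values are placeholder assumptions; netem's delay, loss, and rate options are standard, but the exact qdisc layout should be adapted to the actual test bed:

```python
import subprocess

IFACE = "eth0"  # placeholder: interface facing the device under test

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def emulate_geo_link(delay_ms=300, loss_pct=1.0, rate_mbit=100):
    """Apply one-way netem impairments (apply on both hosts for full RTT)."""
    run(["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms",
         "loss", f"{loss_pct}%",
         "rate", f"{rate_mbit}mbit"])

def clear_link():
    run(["tc", "qdisc", "del", "dev", IFACE, "root"])

if __name__ == "__main__":
    emulate_geo_link()
    # ... run CUBIC vs. BBR transfer tests (e.g. iperf3) here ...
    clear_link()
```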


Industry trajectory

The satellite communications industry has historically focused optimization efforts on the physical and link layers—improving modulation, coding, and spectrum efficiency. As link capacity has increased through these advances, transport-layer protocols have become relatively more significant performance determinants.


This pattern mirrors earlier industry evolution. When satellite bandwidth was severely constrained in the 1990s, performance-enhancing proxies emerged to optimize TCP behavior. As bandwidth increased but latency remained high, attention shifted to latency-optimized protocols. Now, with high-capacity satellites operating in environments with variable loss, congestion control algorithm selection warrants similar attention.


Terrestrial networks underwent comparable evolution. Major content providers and cloud platforms deployed model-based congestion control after determining that traditional loss-based algorithms limited their infrastructure utilization. The satellite industry may follow a similar adoption trajectory as operators recognize protocol stack optimization opportunities.


Standards bodies are developing guidance specific to satellite networks. IETF working groups examining TCP over satellite links, 3GPP addressing satellite-cellular integration, and IEEE symposia on satellite communications are all evaluating transport-layer protocol recommendations. Operators participating in these standardization efforts can influence specifications to address operational requirements.


Conclusions

Transport-layer congestion control represents a first-class design parameter for satellite network architecture, warranting attention comparable to radio frequency optimization. Research from IETF, ACM, and IEEE sources demonstrates that algorithm selection significantly affects performance under packet loss conditions common to satellite links.


Loss-based congestion control algorithms, designed for terrestrial network assumptions, achieve limited link utilization when satellite channels experience environmental loss. Model-based alternatives maintain high utilization by explicitly measuring bandwidth rather than inferring it from loss events. Field measurements validate these performance differences on operational satellite infrastructure.


For satellite operators, this presents both opportunity and consideration. Protocol stack optimization may enhance user experience, enable service differentiation, and improve return on satellite capacity investments. Implementation requires evaluation of infrastructure compatibility, customer equipment capabilities, and standards evolution.


As the satellite industry deploys higher-capacity systems and extends service to challenging propagation environments, transport-layer protocol behavior increasingly determines delivered performance. Operators who evaluate congestion control alternatives position themselves to optimize the full protocol stack rather than the physical layer alone.


The research is documented, the protocols are specified and deployed at Internet scale, and the performance differences are quantified. The question for satellite network operators is whether protocol stack optimization merits inclusion in network architecture planning alongside traditional radio frequency considerations.


REFERENCES

  • 1. Cardwell, N., et al. "BBR: Congestion-Based Congestion Control." ACM Queue, Vol. 14, No. 5, 2016.

  • 2. Internet Engineering Task Force. "BBR Congestion Control." draft-ietf-ccwg-bbr-04, 2023.

  • 3. Claypool, M., et al. "Measuring TCP and QUIC on Viasat's Satellite Network." Passive and Active Measurement Conference (PAM), 2021.

  • 4. 3rd Generation Partnership Project. "Study on NB-IoT/eMTC Support for Non-Terrestrial Networks." 3GPP TR 36.763, Release 17.
