
Bandwidth Delay Product Calculator

📅Last updated: December 5, 2025
Reviewed by: LumoCalculator Team

Calculate the bandwidth-delay product (BDP) for your network connection, determine optimal TCP window sizes and socket buffer settings, and understand the maximum amount of data that can be "in flight" for efficient network utilization.

BDP Calculator

Bandwidth-Delay Product Results (example)

Bandwidth-Delay Product (BDP): 610.35 KB (5.00 Mbits)
Bandwidth: 100.00 Mbps
Round Trip Time: 50.00 ms

🔧 TCP Tuning Recommendations
Recommended TCP Window: 610.35 KB
Socket Buffer Size: 1.19 MB
Packets in Flight: ~429
⚠️ Requires TCP window scaling (RFC 7323) for windows > 64 KB
💡 What This Means

With 100.00 Mbps bandwidth and 50.00 ms RTT, the network can hold 610.35 KB of data "in flight" at any time. To fully utilize this connection, TCP windows and buffers should be at least this size.

📋 Optimization Recommendations

Enable TCP window scaling on both endpoints
Optimal TCP window size: 610.35 KB
Maximum segments in flight: ~429

BDP Formula

BDP = Bandwidth × RTT

Bandwidth in bits per second

RTT (Round Trip Time) in seconds

Result in bits (divide by 8 for bytes)
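
The formula is easy to check in a few lines of code. The sketch below is illustrative only: it reproduces the example output shown earlier (100 Mbps, 50 ms RTT), assuming binary kilobytes and megabytes (1 KB = 1024 bytes) and a 1460-byte maximum segment size for the packets-in-flight estimate.

import math

def bdp_bytes(bandwidth_mbps, rtt_ms):
    # BDP = bandwidth (bits/s) x RTT (s), converted to bytes
    return bandwidth_mbps * 1_000_000 * (rtt_ms / 1000) / 8

bdp = bdp_bytes(100, 50)                                  # 100 Mbps, 50 ms RTT
print(f"BDP:               {bdp / 1024:.2f} KB")          # 610.35 KB
print(f"TCP window:        {bdp / 1024:.2f} KB")          # window should be >= BDP
print(f"Socket buffer:     {2 * bdp / 1024**2:.2f} MB")   # ~2x BDP -> 1.19 MB
print(f"Packets in flight: ~{math.ceil(bdp / 1460)}")     # assumed 1460-byte MSS -> ~429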

Common Network Scenarios

Scenario            Bandwidth    RTT
LAN                 1000 Mbps    1 ms
Regional WAN        100 Mbps     30 ms
Cross-Country       1000 Mbps    70 ms
Intercontinental    100 Mbps     200 ms
Satellite           50 Mbps      600 ms
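
Applying the same formula to each row shows how quickly BDP grows with bandwidth and latency. A short illustrative sketch (binary KB, values hard-coded from the table above):

scenarios = [
    ("LAN",              1000, 1),
    ("Regional WAN",      100, 30),
    ("Cross-Country",    1000, 70),
    ("Intercontinental",  100, 200),
    ("Satellite",          50, 600),
]

for name, mbps, rtt_ms in scenarios:
    bdp = mbps * 1_000_000 * (rtt_ms / 1000) / 8   # bytes
    print(f"{name:<18} {bdp / 1024:>8.0f} KB")

# Output ranges from ~122 KB (LAN) up to ~8545 KB (Cross-Country).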

BDP Significance Levels

< 64 KB
Standard TCP works well

Default settings usually sufficient

64 KB - 1 MB
TCP window scaling needed

Enable window scaling for optimal performance

1 MB - 10 MB
Large buffers required

Tune socket buffers, consider parallel streams

> 10 MB
High-performance networking

Specialized tuning, multiple connections may help
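
These thresholds can be expressed directly in code. A minimal sketch, using the byte thresholds listed above:

def tuning_level(bdp_bytes):
    # Thresholds taken from the significance levels above (binary units).
    if bdp_bytes < 64 * 1024:
        return "Standard TCP works well"
    if bdp_bytes < 1024 * 1024:
        return "TCP window scaling needed"
    if bdp_bytes < 10 * 1024 * 1024:
        return "Large buffers required"
    return "High-performance networking"

print(tuning_level(625_000))   # 100 Mbps x 50 ms -> "TCP window scaling needed"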

TCP Tuning Parameters

TCP Window Size

Maximum amount of unacknowledged data

Should be ≥ BDP

Socket Buffer (SO_SNDBUF)

Send buffer size

Typically 2× BDP

Socket Buffer (SO_RCVBUF)

Receive buffer size

Typically 2× BDP

TCP Window Scaling

Allows windows > 64 KB

Enable if BDP > 64 KB

Congestion Control

Algorithm for rate adjustment

BBR or CUBIC for high BDP
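
Applications can also request the SO_SNDBUF/SO_RCVBUF sizes per socket. The sketch below uses Python's standard socket module with an assumed 625,000-byte BDP (100 Mbps, 50 ms); note that the kernel may clamp the requested size to its configured maximums (net.core.rmem_max / wmem_max on Linux).

import socket

BDP = 625_000     # bytes; assumed example value (100 Mbps x 50 ms)
BUF = 2 * BDP     # ~2x BDP, per the table above

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)

# The kernel may adjust (clamp or double) the requested size; read it back to verify.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))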

Applications of BDP

TCP Tuning

Optimize TCP window sizes for maximum throughput

Buffer Sizing

Calculate optimal send/receive buffer sizes

Network Design

Plan buffer requirements for network equipment

Performance Analysis

Understand throughput limitations

WAN Optimization

Configure WAN accelerators and proxies

Cloud Networking

Optimize connections to cloud services

Key Points

TCP window should be ≥ BDP for full throughput
Socket buffers typically need 2× BDP
Window scaling needed for BDP > 64 KB
High RTT has a major impact on throughput
⚠️ Both endpoints need proper configuration
⚠️ Middleboxes may interfere with TCP options

Frequently Asked Questions

What is Bandwidth-Delay Product (BDP)?
The Bandwidth-Delay Product (BDP) is a fundamental concept in networking that represents the maximum amount of data that can be "in flight" (transmitted but not yet acknowledged) on a network connection at any given time.

FORMULA: BDP = Bandwidth × Round Trip Time (RTT).

WHAT IT MEANS: Think of a network connection as a pipe. Bandwidth is the width of the pipe (how much water flows per second). RTT is the length of the pipe (how long it takes for water to travel round trip). BDP is the total volume of water the pipe can hold.

UNITS: BDP is typically measured in bits or bytes. Common result sizes range from kilobytes to megabytes.

EXAMPLE CALCULATION: Bandwidth: 100 Mbps. RTT: 50 ms (0.05 seconds). BDP = 100,000,000 bits/s × 0.05 s = 5,000,000 bits = 625,000 bytes ≈ 610 KB. This means 610 KB of data can be "in the pipe" at once.

WHY IT MATTERS: To fully utilize a network connection, the sender must be able to have at least BDP amount of data outstanding. If TCP windows are smaller than BDP, throughput is limited. Understanding BDP helps optimize network performance.

Why is BDP important for TCP performance?
BDP directly determines how efficiently TCP can utilize available bandwidth. TCP's flow control mechanism depends on understanding this relationship.

TCP WINDOW AND THROUGHPUT: Maximum throughput = Window Size / RTT. If Window < BDP, throughput is limited by the window. If Window ≥ BDP, throughput can reach line rate.

THE PROBLEM WITH SMALL WINDOWS: Default TCP window: 64 KB (without scaling). High-speed networks often have BDP > 64 KB. Result: underutilized bandwidth.

EXAMPLE: 1 Gbps link with 100 ms RTT. BDP = 1,000,000,000 × 0.1 = 100,000,000 bits ≈ 12.5 MB. With a 64 KB window: max throughput = 65,536 bytes / 0.1 s ≈ 5.2 Mbps. That's only 0.5% of the available bandwidth! With a 12.5 MB window: can achieve the full 1 Gbps.

LONG FAT NETWORKS (LFNs): Networks with a high bandwidth × delay product. Common in: WAN connections, satellite links, intercontinental paths, cloud connections. Require special attention to TCP tuning.

SOLUTIONS: TCP Window Scaling (RFC 7323). Increased socket buffers. Modern congestion control (BBR, CUBIC). Parallel TCP connections.
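
The window-limited throughput figures in this answer follow directly from "Maximum throughput = Window Size / RTT". A quick illustrative check in code (decimal megabits assumed):

def max_throughput_mbps(window_bytes, rtt_s):
    # At most one full window can be delivered per round trip.
    return window_bytes * 8 / rtt_s / 1_000_000

print(max_throughput_mbps(65_536, 0.1))       # ~5.2 Mbps with the default 64 KB window
print(max_throughput_mbps(12_500_000, 0.1))   # 1000 Mbps once the window covers the BDP
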
How do I measure RTT for BDP calculation?
Measuring Round Trip Time (RTT) accurately is essential for proper BDP calculation. There are several methods to measure it.

PING COMMAND: The simplest method for basic RTT measurement. Windows/Mac/Linux: ping <destination>. Look at the "time=" value in the results, and use the average of multiple pings for accuracy.

EXAMPLE: C:\> ping google.com. Reply from 142.250.X.X: bytes=32 time=15ms TTL=117.

TRACEROUTE: Shows RTT to each hop along the path and helps identify where latency occurs. Windows: tracert <destination>. Linux/Mac: traceroute <destination>.

NETWORK MONITORING TOOLS: More accurate measurements. Examples: iperf3, netperf, smokeping. Can measure under load conditions.

CONSIDERATIONS: RTT varies with network load. Measure during typical usage times. Multiple samples provide better accuracy. Consider worst-case RTT for conservative tuning.

TYPICAL RTT VALUES: Same datacenter: 0.1-1 ms. Same city: 1-5 ms. Same country: 10-50 ms. Intercontinental: 100-200 ms. Geostationary satellite: 500-700 ms.

APPLICATION-LEVEL RTT: HTTP timing in browser dev tools or application-specific latency; may include processing time, not just network time.
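
If you want to script the measurement, the rough sketch below shells out to ping and averages the reported times. It assumes a Linux/macOS ping (the -c flag and "time=" output format); Windows uses -n and formats its output differently, and example.com is only a placeholder host.

import re
import subprocess

def average_rtt_ms(host, count=5):
    # Assumes Linux/macOS ping output lines such as "... time=15.2 ms".
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    times = [float(t) for t in re.findall(r"time=([\d.]+)", out)]
    return sum(times) / len(times)

print(f"Average RTT: {average_rtt_ms('example.com'):.1f} ms")
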
How do I tune TCP for high BDP networks?
Tuning TCP for high BDP networks involves several system-level adjustments to achieve optimal throughput.

TCP WINDOW SCALING: Enable on both sender and receiver. Allows windows larger than 64 KB. Linux: usually enabled by default (net.ipv4.tcp_window_scaling = 1). Windows: enabled by default in modern versions.

SOCKET BUFFER SIZES: Set to at least BDP, preferably 2× BDP. Linux settings: net.core.rmem_max, net.core.wmem_max (maximum buffer sizes); net.ipv4.tcp_rmem, net.ipv4.tcp_wmem (TCP specific). Example for a 10 MB BDP: sysctl -w net.core.rmem_max=20971520 and sysctl -w net.core.wmem_max=20971520. Windows: via registry or netsh commands.

CONGESTION CONTROL ALGORITHM: Modern algorithms handle high BDP better. Linux options: BBR (Bottleneck Bandwidth and RTT), Google's algorithm; CUBIC, the default in most Linux distros; Reno, traditional and less suitable for high BDP. Set with: sysctl -w net.ipv4.tcp_congestion_control=bbr.

APPLICATION-LEVEL: Use larger application buffers. Consider parallel connections for very high BDP. Use appropriate transfer protocols (GridFTP, etc.).

NETWORK EQUIPMENT: Ensure switches/routers have adequate buffers. Configure QoS appropriately. Consider WAN optimization devices.
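
To turn a BDP figure into concrete settings, the sketch below only prints suggested sysctl commands for the Linux keys mentioned above rather than applying them. The 2× BDP sizing follows the earlier rule of thumb; the min/default values in tcp_rmem/tcp_wmem are illustrative, since only the maximum (third) value is derived from the BDP.

def suggest_sysctl(bdp_bytes):
    buf = 2 * bdp_bytes   # ~2x BDP
    return [
        f"sysctl -w net.core.rmem_max={buf}",
        f"sysctl -w net.core.wmem_max={buf}",
        # tcp_rmem/tcp_wmem take "min default max"; only the max is sized from BDP here.
        f"sysctl -w net.ipv4.tcp_rmem='4096 87380 {buf}'",
        f"sysctl -w net.ipv4.tcp_wmem='4096 65536 {buf}'",
    ]

for cmd in suggest_sysctl(10 * 1024 * 1024):   # the 10 MB BDP example above
    print(cmd)                                 # e.g. sysctl -w net.core.rmem_max=20971520
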
What are common BDP scenarios and their implications?
Different network scenarios have vastly different BDP values, each requiring appropriate tuning.

LOCAL AREA NETWORK (LAN): Example: 1 Gbps, 0.5 ms RTT. BDP = 1,000,000,000 × 0.0005 = 500,000 bits ≈ 61 KB. Implication: standard TCP is usually sufficient; default settings work well.

METROPOLITAN AREA / REGIONAL WAN: Example: 100 Mbps, 20 ms RTT. BDP = 100,000,000 × 0.02 = 2,000,000 bits ≈ 244 KB. Implication: may need window scaling; moderate buffer increases help.

CROSS-COUNTRY: Example: 1 Gbps, 70 ms RTT. BDP = 1,000,000,000 × 0.07 = 70,000,000 bits ≈ 8.3 MB. Implication: definitely needs tuning; large buffers required; modern congestion control beneficial.

INTERCONTINENTAL: Example: 100 Mbps, 200 ms RTT. BDP = 100,000,000 × 0.2 = 20,000,000 bits ≈ 2.4 MB. Implication: significant tuning needed; high latency challenges; consider application-level optimization.

SATELLITE: Example: 50 Mbps, 600 ms RTT. BDP = 50,000,000 × 0.6 = 30,000,000 bits ≈ 3.6 MB. Implication: extreme latency challenges; traditional TCP performs poorly; PEPs (Performance Enhancing Proxies) are often used; may need specialized protocols.

CLOUD CONNECTIONS: Variable based on region. Often 50-150 ms RTT to major cloud providers. Can have very high bandwidth (10+ Gbps). Requires case-by-case analysis.

How does BDP affect file transfer speeds?
BDP directly impacts the maximum achievable file transfer speed over TCP connections.

THE RELATIONSHIP: Without proper tuning: Max Speed = TCP Window / RTT. With proper tuning: Max Speed = Bandwidth (line rate). This is why large file transfers often seem slow on high-latency links.

EXAMPLE SCENARIO: You have a 1 Gbps connection to a cloud server. RTT is 100 ms. File to transfer: 10 GB. With the default 64 KB window: max speed = 65,536 bytes / 0.1 s ≈ 5.2 Mbps, so transfer time = 10 GB / 5.2 Mbps ≈ 4.3 hours. With a proper 12.5 MB window: max speed = 1 Gbps (line rate), so transfer time = 10 GB / 1 Gbps ≈ 80 seconds.

SLOW START CONSIDERATIONS: TCP doesn't immediately use the full window. Slow start gradually increases the sending rate. Short transfers may never reach full speed. BDP tuning helps but doesn't eliminate slow start.

PRACTICAL TIPS: For large files: tune TCP as described, use parallel connections (some tools do this), and consider dedicated transfer tools (rsync, GridFTP). For many small files: connection overhead dominates, so keep connections alive (HTTP keep-alive) and consider batching transfers.

PROTOCOL ALTERNATIVES: UDP-based protocols (QUIC, UDT) don't have the same window limitations, may achieve better throughput on high BDP paths, and are used by some file transfer services.
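
The transfer-time comparison above can be checked in a couple of lines (decimal gigabytes; slow start and protocol overhead ignored):

def transfer_time_s(size_gb, throughput_mbps):
    # size in decimal GB, throughput in Mbps
    return size_gb * 8000 / throughput_mbps

print(transfer_time_s(10, 5.2) / 3600)   # ~4.3 hours with a 64 KB window
print(transfer_time_s(10, 1000))         # ~80 seconds at the 1 Gbps line rate
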
What is TCP Window Scaling and when is it needed?
TCP Window Scaling is an extension that allows TCP to advertise windows larger than 64 KB, essential for high BDP networks.

THE 64 KB LIMIT: The original TCP design used a 16-bit window field. Maximum window = 2^16 - 1 = 65,535 bytes. Sufficient when networks were slower, but not enough for modern high-speed, high-latency paths.

WINDOW SCALING (RFC 7323): Introduced in 1992 (RFC 1323), updated in 2014 (RFC 7323). Adds a scale factor negotiated during the handshake. Scale factor: 0-14 (multiply by 2^scale). Maximum window: 65,535 × 2^14 ≈ 1 GB.

HOW IT WORKS:
1. During the TCP handshake (SYN, SYN-ACK), both sides advertise their scale factor.
2. Actual window = advertised window × 2^scale.
3. Window scaling must be supported and enabled on BOTH endpoints.

WHEN YOU NEED IT: BDP > 64 KB. Most modern WAN connections. Cloud and datacenter traffic. Any high-bandwidth, high-latency path.

CHECKING IF ENABLED: Linux: cat /proc/sys/net/ipv4/tcp_window_scaling. Windows: netsh int tcp show global | find "Window Scaling".

ENABLING IT: Linux: sysctl -w net.ipv4.tcp_window_scaling=1 (usually enabled by default on modern systems). Windows: usually enabled by default.

TROUBLESHOOTING: Some firewalls/middleboxes strip TCP options, which can prevent window scaling from working and results in a 64 KB maximum even when it is enabled. Check with a packet capture (Wireshark).
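
To see which scale factor a given path needs, the sketch below picks the smallest factor such that 65,535 × 2^scale covers the BDP (an illustrative calculation, not something TCP exposes directly):

import math

def required_scale_factor(bdp_bytes):
    # Smallest scale factor (0-14) such that 65535 * 2**scale >= BDP.
    if bdp_bytes <= 65535:
        return 0
    return min(14, math.ceil(math.log2(bdp_bytes / 65535)))

print(required_scale_factor(625_000))      # 100 Mbps x 50 ms  -> 4
print(required_scale_factor(12_500_000))   # 1 Gbps x 100 ms   -> 8
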
How do I use BDP in network equipment configuration?
BDP calculations are crucial for properly configuring network equipment buffers and queues.

ROUTER/SWITCH BUFFER SIZING: Total buffer needed = BDP × number of flows. For congested links, may need 2-4× BDP. Example: 1 Gbps link, 50 ms max RTT to destinations. BDP per flow ≈ 6.25 MB. For 100 concurrent flows: 625 MB of buffering.

QUEUE MANAGEMENT: Modern approaches use smaller buffers. Active Queue Management (AQM): RED, CoDel. Goal: keep latency low while maintaining throughput. Bufferbloat occurs with excessive buffering.

QOS CONFIGURATION: Different queues for different traffic types, each sized appropriately for its BDP. Voice/video: small buffers, low latency. Bulk data: larger buffers, higher throughput okay.

WAN ACCELERATOR CONFIGURATION: Must understand the BDP of accelerated paths. Pre-position data based on BDP analysis. Configure appropriate TCP optimization parameters.

FIREWALL/PROXY CONFIGURATION: Ensure TCP options (window scaling) pass through. Configure appropriate buffer sizes. May need to tune connection tracking tables.

CLOUD/VIRTUAL NETWORK: Virtual switch buffers are often undersized. May need to tune hypervisor settings. Cloud provider limits may apply.

PRACTICAL EXAMPLE: Configuring a router interface for 10 Gbps with a maximum 100 ms RTT. BDP = 10 Gbps × 100 ms = 125 MB per flow. With 1000 active flows: 125 GB of buffer would be needed (impractical). Solution: use AQM and accept some packet loss during congestion; size buffers for the typical case, not the worst case.
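
The buffer arithmetic in this answer, expressed as a sketch. It follows the per-flow heuristic described above; as noted, real deployments typically size buffers far smaller and rely on AQM instead.

def link_buffer_bytes(bandwidth_gbps, rtt_ms, flows):
    # Per-flow BDP multiplied by the number of concurrent flows (heuristic from this answer).
    bdp = bandwidth_gbps * 1e9 * (rtt_ms / 1000) / 8
    return bdp * flows

print(link_buffer_bytes(1, 50, 100) / 1e6)     # ~625 MB for the 1 Gbps / 50 ms example
print(link_buffer_bytes(10, 100, 1000) / 1e9)  # ~125 GB for 10 Gbps / 100 ms (impractical)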