Bandwidth Delay Product Calculator
Estimate the bandwidth-delay product for a network path, then review the in-flight data size, minimum TCP window, and per-direction socket-buffer target for WAN, cloud, intercontinental, or satellite links.
Calculator Inputs
BDP Summary
Bandwidth-delay product: 610.35 KB
- TCP window target: 610.35 KB
- Per-direction buffer: 1.19 MB
- Packets in flight: 429
Current Input Breakdown & Notes
Formula substitution
Current values inserted into the calculation
1. Bandwidth = 100.00 Mbps = 100,000,000 bps
2. RTT = 50.00 ms = 0.050000 s
3. BDP = bandwidth x RTT = 5.00 Mbits = 610.35 KB
4. Window target = 610.35 KB; per-direction buffer start = 1.19 MB
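The substitution above can be reproduced with a short script. The 1460-byte MSS used for the packet count is an assumption (a typical Ethernet TCP payload size), and the calculator's KB/MB figures are 1024-based, which the conversions below follow:

```python
import math

MSS = 1460  # assumed TCP payload bytes per packet (typical Ethernet MSS)

bandwidth_bps = 100e6   # 100.00 Mbps
rtt_s = 0.050           # 50.00 ms

bdp_bits = bandwidth_bps * rtt_s   # 5.00 Mbits
bdp_bytes = bdp_bits / 8           # 625,000 bytes

print(f"BDP = {bdp_bits / 1e6:.2f} Mbits = {bdp_bytes / 1024:.2f} KB")
print(f"Window target = {bdp_bytes / 1024:.2f} KB")
print(f"Per-direction buffer = {2 * bdp_bytes / 1024**2:.2f} MB")
print(f"Packets in flight = {math.ceil(bdp_bytes / MSS)}")
```

Running this reproduces the summary values: 610.35 KB, a 1.19 MB buffer start, and 429 packets in flight.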
Output details
Size conversions and tuning notes
- BDP in bits: 5.00 Mbits
- BDP in KB: 610.35 KB
- BDP in MB: 0.5960 MB
- Path profile: Long-haul path
- Window scaling is required when the target TCP window exceeds 64 KiB.
- Enable TCP window scaling on both endpoints and confirm middleboxes pass the option through.
- Start socket-buffer planning at about 1.19 MB per direction, then confirm the actual OS limits and autotuning behavior.
- A lower-RTT path still benefits from checking the actual route, especially when WAN overlays, VPNs, or cloud hops may add delay later.
- BDP is a planning estimate, not a throughput guarantee: packet loss, congestion control, application behavior, and device limits can still cap real transfers.
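On Linux, the buffer guidance above might translate into sysctl settings along these lines. The values are illustrative for the ~1.19 MB per-direction target in this example, not a recommendation; defaults and autotuning behavior vary by kernel and distribution, so verify against your own system:

```
# /etc/sysctl.d/99-bdp-tuning.conf (illustrative values for a 100 Mbps / 50 ms path)
net.core.rmem_max = 1250000               # ~2 x BDP ceiling for receive buffers
net.core.wmem_max = 1250000               # ~2 x BDP ceiling for send buffers
net.ipv4.tcp_rmem = 4096 131072 1250000   # min / default / max receive buffer
net.ipv4.tcp_wmem = 4096 131072 1250000   # min / default / max send buffer
net.ipv4.tcp_window_scaling = 1           # required once the window exceeds 64 KiB
```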
Use Scenarios
Transfer planning
Check whether a long-haul link is underfilled
Use the calculator before tuning backups, replication, or large downloads so you can see whether the path needs a larger receive window than the default stack is using.
Cloud networking
Compare regional, cross-region, and intercontinental paths
The same 100 Mbps or 1 Gbps link behaves very differently as RTT rises, so BDP is a quick way to compare cloud-region distance against the window and buffer targets you may need.
High-latency paths
Sanity-check satellite or remote links
Satellite and remote-field paths can build large BDP values even at moderate bandwidth, which makes them a good fit for a first-pass window-scaling and buffer review.
Formula Explanation
Step 1
Normalize bandwidth and RTT
Bandwidth in bits/s; RTT in seconds
The formula only works cleanly when both inputs are in consistent base units, so the calculator first converts the selected bandwidth and round-trip time into bits per second and seconds.
Step 2
Compute in-flight data
BDP = bandwidth x RTT
That product estimates how much unacknowledged data a TCP flow can keep in the path at once. The headline result is shown in bytes because that is the unit most operating-system window and buffer settings use.
Step 3
Translate BDP into tuning targets
Minimum TCP window = BDP; per-direction buffer planning often starts near 2 x BDP
The result is a planning target, not a guarantee. It gives you a practical starting point for receive-window sizing and per-direction socket-buffer checks on the current path.
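The three steps can be sketched as one helper function. The unit names and the 2 x BDP buffer multiplier follow the text above; the function itself is an illustration, not the calculator's actual code:

```python
def bdp_targets(bandwidth, bandwidth_unit, rtt_ms):
    """Normalize units, compute BDP, and derive planning targets."""
    # Step 1: normalize to bits per second and seconds
    unit_to_bps = {"Kbps": 1e3, "Mbps": 1e6, "Gbps": 1e9}
    bandwidth_bps = bandwidth * unit_to_bps[bandwidth_unit]
    rtt_s = rtt_ms / 1000.0
    # Step 2: in-flight data, in bytes (BDP = bandwidth x RTT)
    bdp_bytes = bandwidth_bps * rtt_s / 8
    # Step 3: planning targets (buffer start = 2 x BDP, per the text)
    return {
        "bdp_bytes": bdp_bytes,
        "window_bytes": bdp_bytes,
        "buffer_bytes": 2 * bdp_bytes,
    }

targets = bdp_targets(1.00, "Gbps", 70.0)  # the cross-country example below
print(targets)
```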
How to Read the Result
Headline output
BDP shows how much data the path can hold
If the TCP receive window is smaller than this value, the sender cannot keep enough data outstanding to fully occupy the link, even when raw bandwidth looks high on paper.
Window target
TCP window is the minimum planning threshold
Read the window target as the minimum receive-window size that matches the current path. Once it crosses 64 KiB, window scaling becomes part of the tuning conversation.
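RFC 7323 advertises the window as a 16-bit value shifted left by a negotiated scale factor, so the smallest shift that can cover a given window target can be estimated as follows (a sketch; real stacks derive the factor from their configured maximum receive buffer, not from the current BDP):

```python
import math

MAX_UNSCALED_WINDOW = 65535  # 16-bit TCP window field limit

def min_window_scale(window_bytes):
    """Smallest RFC 7323 shift count able to advertise window_bytes."""
    if window_bytes <= MAX_UNSCALED_WINDOW:
        return 0  # the classic window field is enough, no scaling needed
    return math.ceil(math.log2(window_bytes / MAX_UNSCALED_WINDOW))

print(min_window_scale(625_000))    # 100 Mbps / 50 ms example
print(min_window_scale(8_750_000))  # 1 Gbps / 70 ms example
```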
Buffer planning
Buffer and packet counts are secondary checks
Per-direction buffer size and packets in flight help you translate the headline BDP into system settings, packet budgets, and troubleshooting expectations for the same path.
Example Cases
Worked example
Case 1: Regional WAN application path
Inputs
- Bandwidth: 100.00 Mbps
- RTT: 30.00 ms
- Path profile: Regional WAN path
Computed Results
- BDP: 366.21 KB (3.00 Mbits)
- TCP window target: 366.21 KB
- Per-direction buffer: 732.42 KB
- Packets in flight: 257
Interpretation
This is a moderate-latency business path where the BDP is no longer tiny, so a receive window that looked acceptable on a LAN can start limiting real transfer rates.
Decision Hint
If a regional application path still feels slow, compare the actual TCP window against the calculator output before assuming the circuit is undersized.
Worked example
Case 2: Cross-country 1 Gbps transfer
Inputs
- Bandwidth: 1.00 Gbps
- RTT: 70.00 ms
- Path profile: Long-haul path
Computed Results
- BDP: 8.34 MB (70.00 Mbits)
- TCP window target: 8.34 MB
- Per-direction buffer: 16.69 MB
- Packets in flight: 5,994
Interpretation
A fast backbone link with long RTT pushes the in-flight data budget into multi-megabyte territory. The bandwidth alone is high, but the path still needs a much larger receive window.
Decision Hint
This kind of path is where TCP window scaling and buffer limits become an early checkpoint instead of an afterthought.
Worked example
Case 3: Satellite or remote-field link
Inputs
- Bandwidth: 50.00 Mbps
- RTT: 600.00 ms
- Path profile: Satellite / extreme-latency path
Computed Results
- BDP: 3.58 MB (30.00 Mbits)
- TCP window target: 3.58 MB
- Per-direction buffer: 7.15 MB
- Packets in flight: 2,569
Interpretation
Even with modest bandwidth, very high RTT drives a large BDP. That is why remote links can need surprisingly large TCP windows despite not looking fast by raw bandwidth.
Decision Hint
Use satellite-style cases to separate a latency-driven tuning problem from a pure line-rate problem before changing application behavior or queue policies.
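The three worked cases can be cross-checked with the same arithmetic. As before, the 1460-byte MSS for the packet counts is an assumption, and the KB/MB figures are 1024-based to match the tables above:

```python
import math

MSS = 1460  # assumed TCP payload bytes per packet

cases = [
    ("Regional WAN", 100e6, 0.030),
    ("Cross-country 1 Gbps", 1e9, 0.070),
    ("Satellite / remote", 50e6, 0.600),
]

results = {}
for name, bps, rtt_s in cases:
    bdp_bytes = bps * rtt_s / 8  # bandwidth x RTT, in bytes
    results[name] = (bdp_bytes, math.ceil(bdp_bytes / MSS))
    print(f"{name}: BDP {bdp_bytes / 1024:.2f} KB, "
          f"buffer {2 * bdp_bytes / 1024**2:.2f} MB, "
          f"{results[name][1]} packets in flight")
```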
Sources & References
- Oracle Database Net Services - Determining Bandwidth Delay Product. Primary factual reference for the BDP formula, RTT measurement via ping, and the relationship between BDP and send/receive buffer sizing.
- RFC 7323 - TCP Extensions for High Performance. Source for the 16-bit TCP receive-window limit and the window-scaling mechanism that becomes relevant when the calculated window target exceeds the classic 64 KiB range.
- RFC 6349 - Framework for TCP Throughput Testing. Supplementary transport-performance reference for high-BDP tuning context, including the practical relationship between path delay, buffers, and throughput-testing assumptions.