How to Test Bandwidth Between Servers

When a replication job slows down, backups miss their window, or east-west traffic starts dragging, the question is usually not whether the network is up. The real question is how to test bandwidth between servers in a way that reflects actual throughput instead of guesswork.

A simple ping will tell you latency. A traceroute can show the path. Neither tells you how much data two systems can push between each other under load. For that, you need an active throughput test that sends traffic between the servers and measures what the path can sustain.

What bandwidth testing between servers actually measures

In practice, server-to-server bandwidth testing is about throughput, not just the advertised speed of a network interface or uplink. A pair of servers might both have 1 Gbps or 10 Gbps ports and still deliver much less because of routing, packet loss, CPU limits, virtualization overhead, traffic shaping, firewall inspection, or storage contention during a real workload.

That is why the most useful tests generate controlled traffic directly between the two endpoints. You are measuring effective transfer capacity across the actual path, not the theoretical maximum printed on a spec sheet.

This also means results are contextual. A test across the same rack may look perfect, while a test between regions, clouds, or providers may show large differences depending on congestion, peering, or TCP behavior over longer distances.

The best way to test bandwidth between servers

For most administrators and engineers, iPerf3 is the standard answer. It is purpose-built for measuring TCP and UDP throughput between two hosts. One server runs in server mode, the other runs in client mode, and the tool reports transfer rate, retransmissions, jitter, and loss depending on the protocol you test.

If you want a clean answer fast, this is usually the right approach. It is more accurate than copying a file and more informative than relying on interface counters alone.

A browser-based iPerf3 bandwidth testing tool can also be useful when you want quick access without building a full workflow around local tooling. That is especially practical for validation, spot checks, or when you are working across multiple environments and need a fast test from one place.

How to run a basic iPerf3 test

You need reachability between the two servers on the chosen test port; iPerf3 listens on TCP port 5201 by default. Typically, one system starts the iPerf3 server process and listens for incoming test traffic. The other connects as the client and generates the traffic stream.

A standard TCP test is the usual starting point because most application traffic uses TCP. It gives you a realistic baseline for file transfer, replication, API traffic, and other normal workloads. The client output will show the measured throughput over the test interval, along with retransmissions if congestion or packet handling becomes an issue.
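
As a minimal sketch, assuming the receiving server is reachable at 203.0.113.10 (a placeholder address) and the default iPerf3 port is open between the hosts, a basic TCP test looks like this:

    # On the receiving server: start iPerf3 in server mode (listens on TCP 5201 by default)
    iperf3 -s

    # On the sending server: run a 10-second TCP test toward the listener
    iperf3 -c 203.0.113.10 -t 10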

If the result looks lower than expected, do not assume the network is the only bottleneck. A single TCP stream can underperform on high-latency paths, and host-level constraints matter more than many teams expect.

Test both directions

One common mistake is testing only one direction. Server A to Server B may not match Server B to Server A. Routing asymmetry, shaping policies, interface errors, VM host contention, or cloud-side limits can affect one direction differently.

Run the test both ways. If one side performs well and the reverse path does not, that narrows the problem quickly.
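
iPerf3 has a reverse mode that measures the opposite direction without swapping the server and client roles. A sketch, again assuming 203.0.113.10 as the listener:

    # Normal mode: the client sends, the server receives
    iperf3 -c 203.0.113.10 -t 10

    # Reverse mode (-R): the server sends, the client receives
    iperf3 -c 203.0.113.10 -t 10 -R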

Test with multiple parallel streams

A single TCP stream does not always fill the available link, especially on higher-bandwidth or higher-latency paths. Running multiple parallel streams often shows whether the path itself has capacity that one stream cannot fully use.

This matters a lot in WAN, inter-region, and cloud environments. If four or eight streams perform much better than one stream, the network may be fine while TCP windowing or endpoint tuning is holding back the single-stream result.
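
A hedged comparison, using the same placeholder address: run a single-stream test, then repeat it with the -P flag and compare the totals.

    # Single-stream baseline
    iperf3 -c 203.0.113.10 -t 10

    # The same test with 8 parallel streams; a large gap between the two results
    # points at single-stream TCP limits rather than raw path capacity
    iperf3 -c 203.0.113.10 -t 10 -P 8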

Use UDP when you need loss and jitter data

TCP is best for realistic application throughput, but UDP testing has a different purpose. It helps you measure packet loss, jitter, and behavior at specific offered rates.

That makes UDP useful for voice, video, streaming, and real-time service validation. It also helps expose issues that TCP can partially hide through retransmission and congestion control.
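
As an illustrative sketch, still using the placeholder listener address, a UDP test at a fixed offered rate might look like this. Pick a rate that matches the service you are validating rather than the full link speed:

    # UDP test at an offered rate of 100 Mbit/s (-u selects UDP, -b sets the target bitrate)
    # The report includes jitter and the percentage of datagrams lost
    iperf3 -c 203.0.113.10 -u -b 100M -t 10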

What can skew your results

Bandwidth tests are easy to run and easy to misread. If you want results you can trust, look at the path, the hosts, and the timing.

Host resources are the first thing to check. CPU saturation on either server can cap throughput before the link is actually full. This shows up often on virtual machines, smaller cloud instances, older systems, or servers doing other work at the same time. Memory pressure and NIC offload settings can also change the outcome.
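
One way to sanity-check this, assuming a Linux host with the sysstat package installed, is to watch per-CPU utilization on both endpoints while the test runs:

    # Per-CPU utilization, refreshed every second; a single core pinned near 100%
    # during the test suggests the host, not the link, is the bottleneck
    mpstat -P ALL 1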

The network path comes next. Firewalls, IDS or IPS devices, load balancers, VPN tunnels, overlays, and NAT can all affect throughput. If the traffic crosses the public internet, peering and congestion may play a major role. In cloud environments, instance class, placement, and provider egress rules can matter as much as the nominal interface speed.

Timing matters too. A bandwidth test during a backup window may look very different from the same test at midday. If you are validating a complaint, test during the period when the issue is actually happening.

How to interpret the numbers

A lower-than-expected result is not automatically a failure. The useful question is whether the measured throughput is consistent with the path and enough for the workload.

For example, if two servers in the same data center on 10 Gbps interfaces only reach a small fraction of that rate, you likely have a local bottleneck worth investigating. If two servers are in different regions with moderate latency and internet transit in between, lower TCP throughput may be normal unless you tune specifically for that path.

Retransmissions are an important clue in TCP tests. If throughput is low and retransmissions are high, packet loss or congestion is likely involved. If retransmissions are minimal but throughput still stays low, look harder at endpoint configuration, single-stream limitations, and CPU utilization.
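
If you want the retransmission count in a form that is easy to script against, iPerf3 can emit JSON. A sketch, assuming jq is available (field names can vary slightly between iPerf3 versions):

    # Run the test with JSON output (-J) and extract total retransmits on the sending side
    iperf3 -c 203.0.113.10 -t 10 -J | jq '.end.sum_sent.retransmits'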

With UDP, the main signals are achieved bitrate, jitter, and packet loss. High offered rates with rising loss usually mean you are pushing beyond what the path can carry cleanly.

A practical workflow for troubleshooting

If you need a reliable process, start simple. Confirm basic reachability first so you are not blaming bandwidth for a routing or ACL problem. Then run a short TCP test in one direction, repeat it in the reverse direction, and compare. After that, rerun with multiple parallel streams.
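
A quick reachability check before the throughput tests, assuming the same placeholder address and the default iPerf3 port, might look like this (netcat flags vary slightly between implementations):

    # Basic reachability and round-trip latency
    ping -c 4 203.0.113.10

    # Confirm the iPerf3 listener port (5201 by default) is actually open
    nc -zv 203.0.113.10 5201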

If results still do not make sense, inspect server CPU and interface stats during the test. Check for packet drops, errors, and rate limits. If the path includes a firewall, VPN, or cloud security layer, verify that those systems are not inspecting or shaping the test traffic.
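
On Linux, assuming the test traffic leaves through an interface named eth0 (adjust for your environment), the per-interface counters give a quick read on drops and errors during the run:

    # Interface statistics, including RX/TX errors and drops
    ip -s link show eth0

    # NIC-level counters, where the driver exposes them
    ethtool -S eth0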

If the issue affects a latency-sensitive service, add a UDP test. That will tell you more about jitter and packet loss than TCP alone.

When file transfer tests are useful and when they are not

Some teams prefer to copy a large file between servers and watch the transfer rate. That can be a useful sanity check, but it is not a clean bandwidth test. Storage speed, filesystem behavior, encryption overhead, compression, and application-level throttling all get mixed into the result.

That makes file transfers good for validating user experience but weak for isolating pure network throughput. If your goal is to know whether the path itself can carry more traffic, iPerf3 is the better instrument.

Browser-based testing vs command line testing

Command-line tools are still the most flexible option for repeatable diagnostics and scripted workflows. They let you control streams, intervals, protocol choices, and output handling in detail.

But convenience matters in real operations. A browser-based tool can shorten the path from suspicion to answer, especially when you are validating a single route or helping a less command-line-heavy user confirm a problem. For quick checks, that speed is valuable. Ping Tool Net fits that utility model well by putting bandwidth and adjacent diagnostics in one place instead of making you jump across separate tools.

Common mistakes to avoid when you test bandwidth between servers

The biggest mistake is trusting one result. Run multiple tests, change direction, and vary the stream count. Another common problem is testing through a path that is not the one your application actually uses. A third is ignoring the endpoints and focusing only on the network.

Also, be aware of the production impact of the test itself. Saturating a busy link during business hours may create the very problem you are trying to measure. If the environment is sensitive, use shorter tests, controlled windows, and rates that match your diagnostic goal.
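
As one hedged example of a gentler test, iPerf3 can cap the offered rate with -b even for TCP, which lets you probe a busy path without saturating it:

    # A short 5-second test capped at 200 Mbit/s to limit impact on production traffic
    iperf3 -c 203.0.113.10 -t 5 -b 200M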

If you need clean answers, treat bandwidth testing as a controlled measurement, not a quick command you run once and forget. The test is easy. Getting a result you can act on takes a little discipline. That extra care is usually what separates a useful throughput number from noise.
