Performance of high-speed TCP variants


Testbed Configuration


[Figure: dumbbell network testbed]

The figure above depicts the simple dumbbell network emulation testbed used in these experiments.
TCP Source1 and TCP Source2 each send a (high-speed) flow to TCP Sink1 and TCP Sink2 respectively. iperf is used to transmit TCP data and measure throughput; we modified iperf to log TCP-related variables (ssthresh, cwnd, rto, rtt) via the tcp_info struct. Traffic is routed through a Linux machine acting as a router, configured with iproute2's tc. netem is also used on each sender/receiver to add RTT to the path. Note that netem operates only on the outgoing traffic of an interface. Each machine is equipped with a 1Gbps NIC, and traffic from the router to the receiving ends passes through a gigabit Cisco 3750 switch.
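As a rough sketch, the per-host netem delay and the router's bounded queue can be configured along the following lines (interface names, the delay value, and the queue limits are illustrative placeholders, not the exact values used in the experiments; commands require root):

```shell
# On each sender/receiver: add one-way delay with netem.
# netem shapes only egress traffic, so each direction of the
# path needs its own qdisc on the corresponding host.
tc qdisc add dev eth0 root netem delay 50ms limit 1000

# On the router: bound the bottleneck queue (here an assumed
# 250-packet FIFO, sized to roughly 25% of the BDP).
tc qdisc add dev eth1 root pfifo limit 250
```

The netem "limit" parameter is the qdisc's internal packet buffer, which is what the experiment labels below refer to as the netem delay buffer (default 1000 packets).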

Experiment 1 (Router Queue=25%BDP, SACK OFF, tso OFF, netem delay buffer=BDP)

Experiment 2 (Router Queue=25%BDP, SACK ON, tso ON, default netem delay buffer (eq. txqueuelen=1000 packets))

Experiment 3 (Router Queue=25%BDP, SACK ON, tso OFF, default netem delay buffer (eq. txqueuelen=1000 packets))

Experiment 4 (Router Queue=25%BDP, SACK ON, tso OFF, netem delay buffer=BDP)

Experiment 5 (Router Queue=25%BDP, SACK ON, tso OFF, one-side RTT)

Experiment 6 (Router Queue=25%BDP, SACK ON, tso OFF, one-side RTT, measure RTT-Fairness)
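The SACK and TSO settings varied across the experiments above can be toggled with standard Linux tools; a minimal sketch (interface name assumed to be eth0):

```shell
# SACK is a kernel-wide TCP option.
sysctl -w net.ipv4.tcp_sack=1     # SACK ON (0 for OFF)

# TSO (TCP segmentation offload) is a per-NIC feature.
ethtool -K eth0 tso off           # tso OFF (on for ON)
```

Both settings take effect for new connections/transmissions without a reboot, so they can be switched between experiment runs.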



Transmissions over the UKLight 1Gbps optical loopback

Transmissions using IFB and netem with an RTT=177ms (compare with UKLight)

NETEM emulation of high-speed TCP transmissions using IFB and netem (tso OFF)

NETEM emulation of high-speed TCP transmissions using IFB and netem (tso ON)
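The IFB-based setup referenced above works around netem's egress-only limitation: incoming traffic is redirected to an IFB pseudo-device, where a netem qdisc can delay it, so both directions of the RTT can be emulated from one host. A sketch, using the 177ms RTT from the UKLight comparison (interface names are assumptions):

```shell
# Load the IFB module and bring up the pseudo-device.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Redirect all ingress traffic on eth0 to ifb0.
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 action mirred egress redirect dev ifb0

# Apply netem delay on ifb0, i.e. effectively on ingress.
tc qdisc add dev ifb0 root netem delay 177ms
```

Splitting the delay between the egress qdisc and the IFB ingress path is one way to emulate a symmetric RTT rather than a one-sided one.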
