TCP Pacing in Data Center Networks
Monia Ghobadi, Yashar Ganjali
Department of Computer Science, University of Toronto
{monia, yganjali}@cs.toronto.edu
TCP, Oh TCP!
๏ TCP congestion control
๏ Focus on the evolution of cwnd over RTT.
๏ Damages
๏ TCP pacing
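Where the bullets stay abstract, here is a minimal sketch of the core idea (a hedged illustration; names and values are mine, not from the talk): a non-paced sender releases its whole congestion window back-to-back, while a paced sender spreads the same cwnd evenly across the RTT.

# Minimal sketch of paced vs. non-paced send timing (illustrative only).
def send_times_nonpaced(cwnd: int, rtt: float) -> list[float]:
    """Non-paced: the whole window leaves back-to-back at the start of the RTT."""
    return [0.0] * cwnd

def send_times_paced(cwnd: int, rtt: float) -> list[float]:
    """Paced: one packet every rtt / cwnd seconds."""
    gap = rtt / cwnd
    return [i * gap for i in range(cwnd)]

cwnd, rtt = 10, 0.001  # 10 packets, 1 ms RTT (data-center scale; assumed values)
print(send_times_nonpaced(cwnd, rtt))  # ten packets at t = 0
print(send_times_paced(cwnd, rtt))     # one packet every 100 microseconds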
The Tortoise and the Hare: Why Bother Pacing?
๏ Renewed interest in pacing for the data center environment
๏ Small buffer switches
๏ Small round-trip times
๏ Disparity between the total capacity of the network and the capacity of individual queues
๏ Focus on tail latency caused by short-term unfairness in TCP
TCP Pacing’s Potential
๏ Better link utilization on small switch buffers
๏ Better short-term fairness among flows of similar RTTs:
๏ Improves worst-flow latency
๏ Allows slow-start to be circumvented:
๏ Saving many round-trip times (see the arithmetic sketch after this list)
๏ May allow a much larger initial congestion window to be used safely
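A hedged back-of-the-envelope for those saved round trips (values assumed, not from the talk): classic slow-start roughly doubles cwnd once per RTT, so growing from an initial window IW to a target window W costs about log2(W / IW) RTTs, which a safely enlarged initial window would skip.

import math

# Hedged estimate of RTTs spent in slow-start to reach a target window,
# assuming cwnd doubles once per RTT (classic slow-start behavior).
def slow_start_rtts(target_window: int, initial_window: int = 10) -> int:
    return math.ceil(math.log2(target_window / initial_window))

print(slow_start_rtts(640))  # -> 6 RTTs saved if 640 packets could start in flight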
Contributions
๏ Effectiveness of TCP pacing in data centers.
๏ Benefits of using paced TCP diminish as we increase the number of concurrent connections beyond a certain threshold (Point of Inflection).
๏ Inconclusive results in previous works.
๏ Inter-flow bursts.
๏ Test-bed experiments.
Inter-flow Bursts
๏ C: bottleneck link capacity
๏ Bmax: buffer size
๏ N: number of long-lived flows
๏ W: packets sent in every RTT, in a paced or non-paced manner
๏ X: inter-flow burst (see the measurement sketch after this slide)
[Figure: packet arrivals at the bottleneck across successive intervals 0, RTT, 2RTT, 3RTT, contrasting paced and non-paced flows]
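The slides do not spell out how X is computed; a hedged operationalization (my formulation, not necessarily the talk's): for a tagged flow, count each run of consecutive packets from other flows that arrives between two of its own packets.

# Hedged sketch: measuring inter-flow bursts from a packet trace.
def inter_flow_bursts(trace: list[str], flow: str) -> list[int]:
    bursts, run = [], 0
    seen = False  # only count runs between two packets of the tagged flow
    for pkt_flow in trace:
        if pkt_flow == flow:
            if seen:
                bursts.append(run)
            seen, run = True, 0
        elif seen:
            run += 1
    return bursts

# Example: non-paced flows tend to cluster, paced flows interleave.
nonpaced = ["A"] * 4 + ["B"] * 4 + ["A"] * 4   # A A A A B B B B A A A A
paced    = ["A", "B"] * 6                       # A B A B ...
print(inter_flow_bursts(nonpaced, "A"))  # [0, 0, 0, 4, 0, 0, 0]
print(inter_flow_bursts(paced, "A"))     # [1, 1, 1, 1, 1]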
Modeling
[Figure: packet spacing within one RTT, contrasting the best case of non-paced with the worst case of paced transmission]
Experimental Studies
๏ Flows of sizes 1, 2, 3 MB between servers and clients.
๏ Bottleneck BW: 1, 2, 3 Gbps
๏ RTT: 1 to 100 ms
๏ Metrics: bottleneck utilization, drop rate, average and tail FCT
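The slides do not show how pacing was enabled on the senders; as one modern, hedged example (not necessarily the authors' mechanism), Linux lets an application cap a socket's pacing rate with the SO_MAX_PACING_RATE option, enforced by the fq qdisc or, on newer kernels, TCP's internal pacing.

import socket

# Hedged example, not the authors' setup: cap a socket's pacing rate on Linux.
SO_MAX_PACING_RATE = 47  # Linux constant; not exported by Python's socket module

def open_paced_connection(host: str, port: int, rate_bytes_per_sec: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The kernel spaces this connection's packets so it never exceeds the rate.
    s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, rate_bytes_per_sec)
    s.connect((host, port))
    return s

# Usage (hypothetical endpoint): pace at ~125 MB/s, i.e. a 1 Gbps bottleneck.
# sock = open_paced_connection("10.0.0.2", 5001, 125_000_000)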
Base-Case Experiment:
One flow vs. two flows, 64KB of buffering, Utilization/Drop/FCT
[Figure: bottleneck link utilization (Mbps) vs. time (sec), paced vs. non-paced, annotated “No congestion” and “Congestion”, with a 38% utilization gap under congestion]
[Figure: CDFs of flow completion time (sec), paced vs. non-paced, with gaps of 1 RTT and 2 RTTs marked; a second panel extends the FCT axis to several seconds]
Multiple flows: Link Utilization/Drop/Latency
Buffer size 1.7% of BDP, varying number of flows
[Figure: bottleneck link utilization (Mbps), drop rate (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with the Point of Inflection N* marked]
• As the number of concurrent connections increases beyond a certain point, the benefits of pacing diminish.
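A hedged way to read N* off such curves (my operationalization; the talk marks N* on the plots but does not state a procedure): take the smallest flow count at which paced TCP stops improving on non-paced TCP for the metric of interest.

# Hedged sketch: locating the Point of Inflection N* from measured curves.
def point_of_inflection(flows, metric_paced, metric_nonpaced):
    for n, paced, nonpaced in zip(flows, metric_paced, metric_nonpaced):
        if paced >= nonpaced:  # pacing's advantage has vanished
            return n
    return None  # pacing still wins across the measured range

# Illustrative (made-up) average-FCT numbers shaped like the slide's curves:
flows        = [10, 20, 30, 40, 50]
fct_paced    = [0.2, 0.3, 0.5, 0.8, 1.0]
fct_nonpaced = [0.5, 0.6, 0.6, 0.8, 0.9]
print(point_of_inflection(flows, fct_paced, fct_nonpaced))  # -> 40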
Multiple flows: Link Utilization/Drop/Latency
Buffer size 3.4% of BDP, varying number of flows
[Figure: bottleneck link utilization (Mbps), drop rate (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with N* marked]
• Aggarwal et al.: Don’t pace!
• 50 flows, BDP 1250 packets, and buffer size 312 packets.
• N* = 8 flows.
• Kulik et al.: Pace!
• 1 flow, BDP 91 packets, buffer size 10 packets.
• N* = 9 flows.
N* vs. Buffer
[Figure: bottleneck link utilization (Mbps), drop rate (%), average CT (sec), and 99th-percentile CT (sec) vs. buffer size (50 to 250 KB), paced vs. non-paced]
Clustering Effect:
The probability of packets from a flow being followed by packets from other flows
Non-paced: packets of each flow are clustered together.
Paced: packets of different flows are multiplexed. (See the sketch below.)
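A hedged sketch of one way to quantify this from a trace (my formulation): estimate the probability that the packet following a given packet belongs to a different flow; values near 1 indicate interleaving (paced), values near 0 indicate clustering (non-paced).

# Hedged sketch: the probability that a packet is followed by a packet of
# a different flow. Near 0 = clustered (non-paced), near 1 = multiplexed (paced).
def p_followed_by_other_flow(trace: list[str]) -> float:
    transitions = sum(a != b for a, b in zip(trace, trace[1:]))
    return transitions / (len(trace) - 1)

nonpaced = ["A"] * 5 + ["B"] * 5   # each flow's packets clustered together
paced    = ["A", "B"] * 5          # flows' packets interleaved
print(p_followed_by_other_flow(nonpaced))  # ~0.11 (1 transition out of 9)
print(p_followed_by_other_flow(paced))     # 1.0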
Drop Synchronization:
Number of Flows Affected by Drop Event
We use a NetFPGA router to count the number of flows affected by drop events.
[Figure: CDFs of the number of flows affected by a drop event, paced vs. non-paced, for N = 48, 96, and 384 flows]
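A hedged offline equivalent of that counting (the NetFPGA logic itself is not shown in the slides; the clustering window is my assumption): group logged drops that occur within a short window into one drop event, then count the distinct flows hit by each event.

# Hedged sketch: flows affected per drop event, from a sorted drop log.
def flows_per_drop_event(drops, window=0.001):
    """drops: list of (timestamp_sec, flow_id), sorted by timestamp."""
    events, current, last_t = [], set(), None
    for t, flow in drops:
        if last_t is not None and t - last_t > window:
            events.append(len(current))
            current = set()
        current.add(flow)
        last_t = t
    if current:
        events.append(len(current))
    return events

# Example: two bursts of drops; the first hits 3 flows, the second 1.
log = [(0.0000, "A"), (0.0002, "B"), (0.0004, "C"), (0.0100, "A")]
print(flows_per_drop_event(log))  # -> [3, 1]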
Future Trends for Pacing:
Per-egress pacing
[Figure: bottleneck link utilization (Mbps), drop rate (%), average RCT (sec), and 99th-percentile RCT (sec) vs. number of flows (6 to 192), comparing per-flow paced, non-paced, and per-host + per-flow paced, with N* marked]
Conclusions and Future Work
๏ Re-examine TCP pacing’s effectiveness in data centers:
๏ Demonstrate when TCP pacing brings benefits in such environments.
๏ Inter-flow burstiness
๏ Burst-pacing vs. packet-pacing.
๏ Per-egress pacing.
Renewed Interest
Traffic Burstiness Survey
๏ ‘Bursty’ is a word with no agreed meaning. How do you define bursty traffic?
๏ If you are involved with a data center, is your data center traffic bursty?
๏ If yes, do you think it would be useful to suppress the burstiness in your traffic?
๏ If no, are you already suppressing the burstiness? How? Would you anticipate the traffic becoming burstier in the future?
monia@cs.toronto.edu
Base-Case Experiment:
One RPC vs. two RPCs, 64KB of buffering, Latency
Multiple flows: Link Utilization/Drop/Latency
Buffer size: 6% of BDP, varying number of flows
Base-Case Experiment:
One RPC vs. two RPCs, 64KB of buffering, Latency / Queue Occupancy
Functional test
RPC vs. Streaming
[Figure: topology with 10GE end hosts on either side of a 1GE bottleneck, RTT = 10ms; one flow is paced by ACK clocking, the other is bursty]
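For scale, a quick hedged computation of this bottleneck's bandwidth-delay product (the 1500-byte packet size is my assumption, not stated on the slide):

# BDP of the slide's bottleneck: 1 Gbps link, 10 ms RTT.
link_bps = 1_000_000_000
rtt_s = 0.010
bdp_bytes = link_bps / 8 * rtt_s
print(bdp_bytes)         # 1.25 MB must be in flight to fill the pipe
print(bdp_bytes / 1500)  # ~833 full-size packets (1500 B assumed)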
Zooming in more on the paced flow
Multiple flows: Link Utilization/Drop/Latency
Buffer size 6.8% of BDP, varying number of flows
[Figure: bottleneck link utilization (Mbps), drop rate (%), average FCT (sec), and 99th-percentile FCT (sec) vs. number of flows, paced vs. non-paced, with N* marked]