YouTube. It strives to optimize both throughput and latency/RTT by estimating the bottleneck bandwidth and round-trip time to compute a pacing rate. A goal that sets it apart from most traditional TCP variants is to avoid filling up the bottleneck buffer, which would induce bufferbloat.
A description of the algorithm was published in the September/October 2016 issue of ACM Queue. An implementation in the Linux kernel has been proposed as a patch. Dave Taht posted a preliminary evaluation ("a quick look...") on his blog. A good description of BBR's motivation and approach is included in the proposed kernel patch (see below).
In the STARTUP phase, BBR tries to quickly approximate the bottleneck bandwidth. It does so by increasing the sending rate until the estimated bottleneck bandwidth stops growing.
Bottleneck bandwidth is estimated from the amount of data ACKed over a given period, filtered through a "windowed max-filter".
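Such a windowed max-filter can be sketched as follows. This is a simplified, hypothetical version for illustration; the kernel's actual filter (lib/minmax.c) bounds memory by tracking only the three best samples:

```c
/* Sketch of a windowed max-filter for bandwidth samples: report the
 * largest delivery-rate sample observed within the last WINDOW_LEN
 * time units.  Simplified: when the old max ages out of the window,
 * this version restarts from the current sample. */
#include <stdint.h>

#define WINDOW_LEN 10  /* filter window length, illustrative units */

struct max_filter {
    uint64_t best;       /* current max within the window */
    uint64_t best_time;  /* time at which `best` was sampled */
};

/* Feed one sample taken at time `now`; returns the filtered max. */
uint64_t max_filter_update(struct max_filter *f, uint64_t now,
                           uint64_t sample)
{
    if (sample >= f->best || now - f->best_time > WINDOW_LEN) {
        /* New max, or the old max has aged out of the window. */
        f->best = sample;
        f->best_time = now;
    }
    return f->best;
}
```

The windowed min-filter used for RTT (see PROBE_RTT below) is the mirror image, keeping the smallest recent sample instead of the largest.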
In the DRAIN phase, the sending (pacing) rate is reduced to drain the queue that BBR estimates it built up at the bottleneck while probing the bottleneck bandwidth during STARTUP.
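The model's two control outputs, as described in the Queue paper, are a pacing rate (a gain multiplied by the bottleneck-bandwidth estimate) and a congestion-window cap (a gain multiplied by the estimated bandwidth-delay product). A sketch in C, borrowing the kernel implementation's convention of fixed-point gains scaled by 256:

```c
/* Sketch of BBR's two control outputs.  bw is in bytes/sec, min_rtt in
 * microseconds, gains are fixed-point scaled by 256 (gain 1.0 == 256),
 * mirroring the style of the kernel implementation. */
#include <stdint.h>

/* Pacing rate in bytes/sec: gain times the bandwidth estimate. */
uint64_t bbr_pacing_rate(uint64_t bw, uint32_t gain_x256)
{
    return bw * gain_x256 >> 8;
}

/* cwnd in bytes: gain times the estimated bandwidth-delay product. */
uint64_t bbr_cwnd_bytes(uint64_t bw, uint32_t min_rtt_us,
                        uint32_t gain_x256)
{
    uint64_t bdp = bw * min_rtt_us / 1000000;  /* bytes in flight at BDP */
    return bdp * gain_x256 >> 8;
}
```

Each phase then reduces to a choice of gains: a high pacing gain in STARTUP, a gain below 1.0 in DRAIN, and gains cycling around 1.0 in steady state.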
In steady state, BBR paces at the estimated bottleneck bandwidth. Periodically it tries to improve its network model by probing:
PROBE_BW mode: BBR probes for a bandwidth increase at the bottleneck by increasing the pacing rate, then decreasing the rate to remove temporary queuing in case the bottleneck bandwidth hasn't grown.
PROBE_RTT mode: RTT is filtered through a windowed min-filter. When the min-RTT estimate has not been refreshed for a while, the algorithm sharply reduces the amount of data in flight, so that the base RTT can be measured even when queueing ("bufferbloat") is in effect.
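Putting the four modes together, the transitions can be sketched as a small state machine. The condition names here are illustrative, not the kernel's, and the real implementation has more nuance (e.g. it can fall back to STARTUP after PROBE_RTT if the bandwidth estimate never plateaued):

```c
/* Sketch of BBR's mode transitions: STARTUP until the bandwidth estimate
 * plateaus, DRAIN until the startup queue is gone, then PROBE_BW in
 * steady state, interrupted by PROBE_RTT when the min-RTT estimate goes
 * stale.  Condition flags are illustrative inputs, not kernel state. */
enum bbr_mode { BBR_STARTUP, BBR_DRAIN, BBR_PROBE_BW, BBR_PROBE_RTT };

enum bbr_mode bbr_next_mode(enum bbr_mode mode,
                            int bw_plateaued,   /* bw estimate stopped growing */
                            int queue_drained,  /* inflight back down to ~BDP */
                            int min_rtt_stale)  /* min-RTT filter expired */
{
    if (min_rtt_stale)
        return BBR_PROBE_RTT;           /* re-measure the base RTT */
    switch (mode) {
    case BBR_STARTUP:
        return bw_plateaued ? BBR_DRAIN : BBR_STARTUP;
    case BBR_DRAIN:
        return queue_drained ? BBR_PROBE_BW : BBR_DRAIN;
    case BBR_PROBE_RTT:
        return BBR_PROBE_BW;            /* resume cruising after probing */
    default:
        return BBR_PROBE_BW;            /* PROBE_BW is the steady state */
    }
}
```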
A state diagram has been included as a comment in the proposed Linux kernel patch.
Linux kernel TCP implementation
Linux 4.9 and above: An implementation for the Linux kernel was submitted and merged in September 2016. This initial kernel implementation relied on a scheduler capable of pacing, such as the fq scheduler.
Linux 4.13 and above: In May 2017, Éric Dumazet submitted a patch to implement pacing in TCP itself, removing the dependency on the fq scheduler. This makes BBR simpler to enable, and allows its use together with other schedulers (such as the popular fq_codel).
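On such a kernel, enabling BBR system-wide comes down to one sysctl; on 4.9-4.12 the fq qdisc must also be made the default. A typical /etc/sysctl.conf fragment might look like:

```
# Use BBR for new TCP connections
net.ipv4.tcp_congestion_control = bbr

# Only needed on Linux 4.9-4.12, where BBR depends on fq for pacing
net.core.default_qdisc = fq
```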
BBR exports its bandwidth and RTT estimates using the
getsockopt(TCP_CC_INFO) interface, see
struct tcp_bbr_info . User-space applications can use this information as hints for adapting their use of a given connection, e.g. by selecting appropriate audio or video encodings.
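A minimal sketch of reading these estimates, assuming Linux headers that provide union tcp_cc_info; the helper name print_bbr_info is ours, not a kernel or libc API:

```c
/* Sketch: read BBR's bandwidth and min-RTT estimates for a TCP socket
 * via getsockopt(TCP_CC_INFO). */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/inet_diag.h>

#ifndef TCP_CC_INFO
#define TCP_CC_INFO 26
#endif

/* Returns 0 and prints the estimates if the socket's congestion control
 * is BBR; returns -1 otherwise. */
int print_bbr_info(int fd)
{
    union tcp_cc_info cc;
    socklen_t len = sizeof(cc);

    if (getsockopt(fd, IPPROTO_TCP, TCP_CC_INFO, &cc, &len) < 0)
        return -1;
    if (len < sizeof(struct tcp_bbr_info))
        return -1;  /* congestion control on this socket is not BBR */

    /* bbr_bw is reported in bytes per second, split into two 32-bit
     * halves; bbr_min_rtt is the windowed-min RTT in microseconds. */
    unsigned long long bw = ((unsigned long long)cc.bbr.bbr_bw_hi << 32)
                            | cc.bbr.bbr_bw_lo;
    printf("bw estimate: %llu bytes/s, min RTT: %u us\n",
           bw, cc.bbr.bbr_min_rtt);
    return 0;
}
```

The connection's congestion control can be switched to BBR per-socket beforehand with setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, "bbr", 3).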
According to the BBR update from IETF 99 (see reference below), Netflix is working on an implementation of BBR for FreeBSD.
BBR has also been implemented for QUIC, and is in active use on Google's servers.
Mark Claypool has written an implementation of BBR for ns-3, a popular network simulator.
There is another instance of "BBR TCP", namely Bufferbloat Resistant TCP proposed by Michael Fitz Nowlan in his 2014 Ph.D. thesis (Yale library catalog entry, PDF). It shares the basic goals of (Google's) BBR TCP, namely to detect bottleneck rate and to avoid bufferbloat. But it uses a delay-based measurement approach rather than direct measurement of the amount of ACKed data per time, and doesn't seem to make use of pacing. Despite these differences, the link between the two BBR TCPs seems more than just a coincidental acronym collision: Fitz Nowlan is acknowledged in the Queue/CACM papers, and conversely acknowledges several of the "Google" BBR authors in his thesis.
- BBR: Congestion-Based Congestion Control, ACM Queue, Vol. 14 No. 5, September-October 2016 (HTML/PDF)
- tcp_bbr: add BBR congestion control, N. Cardwell, V. Jacobson, Y. Cheng, N. Dukkipati, E. Dumazet, S. Yeganeh, commit 0f8782ea14974ce992618b55f0c041ef43ed0b78, Linus Torvalds's Linux Git tree, September 2016
- draft-cardwell-iccrg-bbr-congestion-control-00, BBR Congestion Control, Neal Cardwell, Yuchung Cheng, Soheil Hassas Yeganeh, Van Jacobson, July 2017
- draft-cheng-iccrg-delivery-rate-estimation-00, Delivery Rate Estimation, Yuchung Cheng, Neal Cardwell, Soheil Hassas Yeganeh, Van Jacobson, July 2017
- A Quick Look at TCP BBR, D. Taht, blog post, 19 September 2016
- bbr-dev discussion group on Google Groups
- Making Linux TCP Fast, Y. Cheng, N. Cardwell, netdevconf 1.2, October 2016 (abstract, video)
- TCP BBR Quick-Start: Building and Running TCP BBR on Google Compute Engine, N. Cardwell, October 2016 (last accessed)
- BBR Congestion Control, N. Cardwell, Y. Cheng, Presentation to ICCRG at IETF 97, Seoul, November 2016 (video, starting ~57'00", slides)
- BBR congestion control, J. Corbet, LWN.net, September 2016
- BBR Congestion Control: An Update, N. Cardwell, Y. Cheng, Presentation to ICCRG at IETF 98, Chicago, March 2017 (video, starting ~1h35'28", slides)
- BBR TCP Opportunities, Matt Mathis, Presentation at the Quilt, 19 October 2016 (PDF slides)
- BBR TCP, Geoff Huston, May 2017, Potaroo blog/RIPELabs
- CS 244 '17: ReBBR: Reproducing BBR Performance In Lossy Networks, Luke Hsiao and Jervis Muindi, June 2017, CS244 blog
- CS 244 '17: Congestion-Based Congestion Control With BBR, Brad Girardeau and Samantha Steele, June 2017, CS244 blog
- BBR Congestion Control: IETF 99 Update, N. Cardwell, Y. Cheng, C. S. Dunn, S. H. Yeganeh, I. Swett, J. Iyengar, V. Vasiliev, V. Jacobson, Presentation to ICCRG at IETF 99, Prague, July 2017 (video, slides)
- How Google is speeding up the Internet, B. Butler, Network World, August 2017