a Congestion Control Algorithm -- which uses various signals (mostly dropped packets) to try to estimate the available bandwidth and avoid network congestion.
After some searching, apparently it means “congestion control algorithm”. It definitely should have been defined in the article, especially since they have a whole section dedicated to explaining what it is.
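For anyone who wants more than the expansion of the acronym: a minimal sketch of what a loss-based CCA does, using the classic AIMD scheme behind TCP Reno. The struct and method names here are illustrative only, not quiche's actual API.

```rust
// Minimal sketch of a loss-based congestion controller (AIMD).
// Names and constants are illustrative, not any real stack's code.
struct Aimd {
    cwnd: f64,     // congestion window, in packets
    ssthresh: f64, // slow-start threshold
}

impl Aimd {
    fn new() -> Self {
        Aimd { cwnd: 10.0, ssthresh: f64::INFINITY }
    }

    // Each ACK grows the window: exponentially in slow start,
    // roughly one packet per RTT in congestion avoidance.
    fn on_ack(&mut self) {
        if self.cwnd < self.ssthresh {
            self.cwnd += 1.0;             // slow start
        } else {
            self.cwnd += 1.0 / self.cwnd; // additive increase
        }
    }

    // A dropped packet is read as a congestion signal:
    // halve the window (multiplicative decrease).
    fn on_loss(&mut self) {
        self.ssthresh = self.cwnd / 2.0;
        self.cwnd = self.ssthresh.max(2.0);
    }
}

fn main() {
    let mut cc = Aimd::new();
    for _ in 0..20 {
        cc.on_ack();
    }
    println!("cwnd after 20 acks: {}", cc.cwnd); // 30: still in slow start
    cc.on_loss();
    println!("cwnd after a loss: {}", cc.cwnd); // 15: halved
}
```

The "estimate available bandwidth" part falls out of this loop: the window keeps probing upward until a loss says it overshot, then backs off.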
I can see why they rewrote QUIC in Rust and for use in userspace, though going the in-house approach would warrant keeping an eye on the relevant kernel commits like a hawk to avoid missing bug fixes like these. These in-house implementations tend to have fewer eyeballs than the kernel.
I found it interesting that Cloudflare is not yet using BBR as the default in quiche. CUBIC's recovery in this day and age, and especially in datacenters with large pipes, seems so slooow to me. Almost two seconds with no loss whatsoever till achieving BDP again and then shooting itself in the foot every time it hits the ceiling. Each one of those losses a retransmission.
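To put numbers on that recovery time: after a loss, CUBIC's window follows the RFC 8312 curve, and the time K it takes just to climb back to the pre-loss window grows with the cube root of that window. A quick sketch (constants are the RFC 8312 defaults; the window size is illustrative):

```rust
// CUBIC window growth after a loss event, per RFC 8312:
//   W(t) = C*(t - K)^3 + W_max,  K = cbrt(W_max * (1 - beta) / C)
// K is how long it takes, with zero further loss, to regain W_max.
fn cubic_window(t: f64, w_max: f64) -> f64 {
    const C: f64 = 0.4;    // RFC 8312 default scaling constant
    const BETA: f64 = 0.7; // multiplicative decrease factor
    let k = (w_max * (1.0 - BETA) / C).cbrt();
    C * (t - k).powi(3) + w_max
}

fn main() {
    let w_max = 100.0; // window (in MSS) at the last loss, illustrative
    let k = (w_max * 0.3_f64 / 0.4).cbrt();
    println!("time to regain W_max: {:.1} s", k); // ~4.2 s for this window
    println!("window right after the loss: {:.0}", cubic_window(0.0, w_max)); // beta * W_max = 70
}
```

The cube-root dependence is why the stall feels worse on large pipes: the bigger the window you had, the longer the plateau before you're back at BDP.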
> though going the in-house approach would warrant keeping an eye on the relevant kernel commits like a hawk to avoid missing bug fixes like these. These in-house implementations tend to have fewer eyeballs than the kernel.
This is somewhat funny to read because this specific issue in CUBIC (sudden CWND jump upon exiting quiescence) was originally discovered in Google's QUIC library and then later reported to the team working on the TCP stack. I know this because I was the one who found that bug back in 2015.
That said, congestion control algorithms are really prone to logic bugs, and very subtle changes in the algorithm can often lead to dramatically different outcomes. Because of that, there's a lot of value in running congestion control code that has been tested on a wide variety of real Internet traffic.
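To illustrate how subtle this class of bug is: a sketch under the assumption that the failure mode is simply not discounting idle time from CUBIC's elapsed-time input, so the time-based window function "keeps growing" while the connection is quiet. This is a toy model of the mechanism, not the actual code path in any real stack.

```rust
// Toy model of the quiescence bug: CUBIC computes its window as a
// function of time since the last loss. If an implementation does not
// discount an idle (app-limited) period, t keeps advancing while no
// packets probe the network, and cwnd jumps on resume.
// Numbers are illustrative only.
fn cubic_window(t: f64, w_max: f64) -> f64 {
    const C: f64 = 0.4;    // RFC 8312 defaults
    const BETA: f64 = 0.7;
    let k = (w_max * (1.0 - BETA) / C).cbrt();
    C * (t - k).powi(3) + w_max
}

fn main() {
    let w_max = 100.0;
    let active = 1.0; // 1 s of activity since the last loss...
    let idle = 10.0;  // ...then 10 s of application quiescence.

    // Buggy: elapsed time includes the idle period.
    let buggy = cubic_window(active + idle, w_max);
    // Fixed: the growth epoch is shifted forward by the idle time,
    // so the curve resumes where it left off.
    let fixed = cubic_window(active, w_max);

    println!("cwnd on resume, buggy: {:.0}", buggy); // ~225: sudden jump
    println!("cwnd on resume, fixed: {:.0}", fixed); // ~87: smooth resume
}
```

One missing "freeze the clock while idle" step, and the window more than doubles in an instant, with no packets having validated that the path can take it.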
> I can see why they rewrote QUIC in Rust and for use in userspace
As far as I know, while they might have done so either way, they did not ("rewrite QUIC [...] for use in userspace"): the Linux kernel implementation only landed in late 2025. Quiche was started ca. 2018 (that's when Cloudflare started beta-deploying QUIC; the first public alpha of quiche was January 2019).
I don't know that there even was an in-kernel implementation of QUIC before msquic.sys, which I believe first shipped in Server 2022, circa mid-2021 (and is used as the implementation backend by MsQuic on Server 2022 and W11).
Looking at the last plot, it seems like the backoff is roughly 1/5 of the total bandwidth and it happens every 50 ms or so. Wouldn't it make sense to reduce the backoff and the growth speed if a backoff occurs repeatedly in rapid succession? We want to maximize the area under the curve (transmitted packets), right?
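A quick back-of-envelope on that area-under-the-curve intuition: for an idealized linear sawtooth that drops by a fraction beta of the link rate and climbs back, the average utilization is 1 - beta/2 regardless of how often the backoff fires, so to first order the 50 ms period costs loss and retransmissions rather than raw goodput. (Idealized model with my own assumed shape, not derived from the article's data.)

```rust
// Idealized sawtooth: rate ramps linearly from (1 - beta) to 1.0
// (as a fraction of link bandwidth), then backs off and repeats.
// The mean of that ramp is 1 - beta/2, independent of the period.
fn sawtooth_utilization(beta: f64) -> f64 {
    1.0 - beta / 2.0
}

fn main() {
    // beta = 0.2 matches the ~1/5 backoff eyeballed from the plot.
    println!("utilization at beta=0.2: {:.0}%", sawtooth_utilization(0.2) * 100.0); // 90%
    println!("utilization at beta=0.1: {:.0}%", sawtooth_utilization(0.1) * 100.0); // 95%
}
```

So shrinking the backoff mostly helps by shaving the lost fraction itself; the frequency of the sawtooth matters for the retransmission count, which is the cost the parent comment is pointing at.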
Also, not a single takeaway about how to prevent that very preventable issue in the first place, as you allude to.
I wonder what happened to the very hardcore engineering that used to happen at Cloudflare and get published. Almost every blog post today seems to expose some weirdness at Cloudflare rather than highlighting excellence in engineering. What changed? This has been shifting slowly over the years; did they change their hiring practices or something?
What is a CCA in this context?