
x/net/quic: add QUIC implementation #58547

Open
neild opened this issue Feb 15, 2023 · 158 comments

@neild
Contributor

neild commented Feb 15, 2023

I propose adding an implementation of the QUIC transport protocol (RFC 9000) in golang.org/x/net/quic. QUIC is the protocol underlying HTTP/3, and QUIC support is a necessary prerequisite for HTTP/3 support.

The proposed API is in https://go.dev/cl/468575. This API does not include support for Early Data, but does not preclude adding that support at a later time.

RFC 9000 does not define a QUIC API, but it does define a set of operations that can be performed on a QUIC connection or stream.

A QUIC connection is shared state between a client and server.

  • open a [client] connection:

    conn, err := quic.Dial(ctx, "udp", "127.0.0.1:8000", &quic.Config{})
  • listen for incoming connections:

    l, err := quic.Listen("udp", "127.0.0.1:8000", &quic.Config{})
    conn, err := l.Accept(ctx)

A QUIC stream is an ordered, reliable byte stream. A connection may have many streams. (A QUIC stream is loosely analogous to a TCP connection.)

  • create streams:

    s, err := conn.NewStream(ctx)
  • accept streams created by the peer:

    s, err := conn.AcceptStream(ctx)
  • read from, write to, and close streams:

    n, err = s.Read(buf)
    n, err = s.Write(buf)
    s.Close()
  • stream operations also have Context-aware variants:

    n, err = s.ReadContext(ctx, buf)
    n, err = s.WriteContext(ctx, buf)
    s.CloseContext(ctx)
  • data written to streams is buffered, and may be explicitly flushed:

    // Sends one datagram, not 100.
    for i := byte(0); i < 100; i++ {
      s.Write([]byte{i})
    }
    // Data will not be sent until a datagram's worth has been accumulated or an explicit flush.
    // No Nagle's algorithm.
    s.Flush()

See https://go.dev/cl/468575 for the detailed API.

@neild neild added the Proposal label Feb 15, 2023
@gopherbot gopherbot added this to the Proposal milestone Feb 15, 2023
@DeedleFake

DeedleFake commented Feb 15, 2023

I generally like the use of context.Context over the Deadline() methods of the regular net package, but I think those should still be available. As proposed, quic.Stream doesn't implement net.Conn, for example, though it would probably make sense for it to. Additionally, it probably makes sense for quic.Conn to implement net.Listener as it can accept streams which would then be net.Conns.

Confusingly, it probably does not make sense for quic.Listener to implement net.Listener, as I don't think it would make sense for quic.Conn to implement net.Conn. It could probably be done, though, for example by making the quic.Conn implementation of net.Conn's methods automatically open a single stream on first use. That behavior might be surprising, though.

Edit: Random thought: Might it make sense to introduce new interfaces for the listener -> conn -> stream pattern that QUIC uses? I don't think any other connection scheme is even considering something similar, so maybe not, but it would allow the net package, for example, to provide adapters for things using that pattern, such as the proposed quic package, to let them function as the standard net interfaces in various different ways.

@marten-seemann
Contributor

Author of quic-go here 👋
quic-go is a QUIC implementation in Go that's been around since before QUIC was even standardized. It also comes with an HTTP/3 implementation.

If there's interest from the Go team's side, I'd be happy to talk about what would be needed to get (parts of?) quic-go merged into the standard library.

@neild
Contributor Author

neild commented Feb 15, 2023

It probably does make sense for quic.Stream to implement net.Conn.

I don't think it's useful for anything in a QUIC implementation to implement net.Listener. A quic.Listener isn't a net.Listener, because it listens for QUIC connections, which are not stream-oriented network connections. A quic.Conn does listen for streams, but while net.Listeners generally accept streams from varying sources, a quic.Conn only accepts streams multiplexed over an existing connection with a single entity. I don't think there are many cases where one would want to use a quic.Conn in a place which operates on a net.Listener.

If someone does need to use a quic.Conn as a net.Listener, writing an adapter will be trivial.
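
For illustration, a minimal sketch of such an adapter, assuming the proposed API and that *quic.Stream grows the net.Conn methods (the names below are illustrative, not part of the proposal; the context, net, and quic imports are assumed):

// quicListener adapts a *quic.Conn to net.Listener by accepting streams.
type quicListener struct {
	conn *quic.Conn
}

func (l quicListener) Accept() (net.Conn, error) {
	s, err := l.conn.AcceptStream(context.Background())
	if err != nil {
		return nil, err
	}
	return s, nil // relies on *quic.Stream satisfying net.Conn
}

func (l quicListener) Close() error   { return l.conn.Close(context.Background()) }
func (l quicListener) Addr() net.Addr { return nil } // streams have no distinct network address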

@komuw
Contributor

komuw commented Feb 16, 2023

Related: #32204 (net/http: support HTTP/3)

@rhysh
Contributor

rhysh commented Feb 16, 2023

Thank you for sharing this!


type Config struct {
	// TLSConfig is the endpoint's TLS configuration.
	// It must be non-nil and include at least one certificate or else set GetCertificate.
	TLSConfig *tls.Config

Does an endpoint that acts as a client need to set a certificate? If so, I'd guess that it's even OK to call quic.Listen without a certificate, if the quic.Listener is only used for Dialing (such as for a high-volume client that doesn't want to use an OS file descriptor for each outbound connection).


func (s *Stream) ReadContext(ctx context.Context, b []byte) (n int, err error) {
func (s *Stream) WriteContext(ctx context.Context, b []byte) (n int, err error) {
func (s *Stream) CloseContext(ctx context.Context) error {

These methods don't allow using normal io packages in a Context-aware way; anything that takes an io.Reader/io.Writer to do buffering, (de)compression, (un)marshaling, etc and wants to also use Context will need a wrapper. And dealing with a Context value in each call looks like it will be expensive (as crypto/tls's implementation of Read vs HandshakeContext saw in #50657).

What are the tradeoffs of this versus a method like func (*Stream) WithContext(context.Context) io.ReadWriteCloser (or func (*Stream) WithContext(context.Context) *ContextStream, to allow adding more methods later)?


This revision of the API doesn't give visibility into flow control. How much is required to build a reliable QPACK implementation? From https://www.rfc-editor.org/rfc/rfc9204.html#section-2.1.3

To avoid these deadlocks, an encoder SHOULD NOT write an instruction unless sufficient stream and connection flow-control credit is available for the entire instruction.


// Close waits for the peer to acknowledge the connection has been closed.
func (c *Conn) Close(ctx context.Context) error {

// Abort closes the connection and returns immediately.
func (c *Conn) Abort(err error) {

Conn.Close and Conn.Abort look similar and not quite orthogonal. I'm not sure if there's a good reason for an app to close a connection with an error and then also wait for the peer to acknowledge, but I don't see how they'd do that with this pair of methods. Maybe call Abort and then Close? Or Abort and Wait (but maybe Abort immediately discards all state)? What's the reason to not have Abort and Close be one method with signature func(context.Context, error) error?


// Dial creates and returns a connection to a network address.
func (l *Listener) Dial(ctx context.Context, network, address string) (*Conn, error) {
	return nil, errors.New("not implemented")
}

Only because you mentioned future possibility of Early Data: Would Dialing with Early Data need a separate quic.Listener method (and package-level function)? Probably one with a signature like this, but which returns a *Conn that is not yet connected, and which needs a subsequent Connect(ctx) method call once the client has created the Early Data streams and filled them with the Early Data. I'm not sure it's important to have a full design now (or even soon), but it wasn't immediately clear to me where the extension point would be.

@mholt

mholt commented Feb 16, 2023

@neild I just want to make sure @marten-seemann's comment above was seen. Given the years of effort that have already been spent implementing and optimizing QUIC in Go, it would probably make sense to take advantage of that rather than start over from scratch, even if the exported APIs are a little different.

Please consider using quic-go as the basis for this effort.

@james-lawrence
Contributor

james-lawrence commented Feb 21, 2023

@mholt quic-go never tried to integrate with the stdlib net packages in a natural way. While I'm sure some of the internal structures might be reusable, the overall library itself wasn't terribly appealing to me personally due to the incompatibilities with the wider ecosystem.

edit: As a result, I strongly recommend against using quic-go as a base because of that decision. quic-go focused on getting QUIC HTTP support available, while Go's stdlib implementation should be focused on compatibility with the wider ecosystem at the transport level. The fundamental driving forces for the API design are very different.

@james-lawrence
Contributor

james-lawrence commented Feb 21, 2023

@paralin I think the adapter code all of them had to implement to handle quic-go speaks for itself. The Go stdlib needs to figure out how to interoperate at the transport level, similar to the packet conns vs. stream conns it already has.

If I have a stream-oriented protocol, I shouldn't care whether I receive a QUIC, TCP, or Unix transport. This is the problem we need to resolve, and one which quic-go explicitly decided to ignore.

@mdlayher
Member

This is an active proposal and no decisions have been made as of yet.

Please hold off on speculation about the implementation, and take conversations about quic-go elsewhere.

@james-lawrence
Contributor

@mdlayher Agreed, but my points about transport-level interop in the stdlib stand. They're important even if we ignore quic-go.

@rsc
Contributor

rsc commented Mar 9, 2023

It is good for proposals to focus on API, but implementation is explicitly on topic at least for large proposals, given that it's one of the sections listed in the design doc template.

As for quic-go, I completely agree that it would be good to take advantage of the expertise that @marten-seemann has built up over his years of development of quic-go. We would certainly welcome his help. At the same time, reusing quic-go directly is probably not the right path forward, for a few reasons:

  • As already noted on this issue, there are API questions about how best to present QUIC, and we may well want an API that is different in important ways from quic-go. In particular the quic-go API is fairly low level compared to what we are contemplating.
  • QUIC having been a moving target, it is almost certain that quic-go contains compatibility code for older versions that is no longer needed. A new implementation can target the RFC and modern implementations only.
  • The implementation strategy and approach (some would say code style but I mean something deeper about the code) differs from that of the standard library in a few key respects. In particular it tends toward much more indirection and mocks, while standard library code tends to be more direct. The strategy matters because the Go team will be maintaining the standard QUIC implementation into the distant future.
  • The quic-go tests depend on test frameworks that we cannot depend on in the standard library, so those would need rewriting.
  • The quic-go code has not been reviewed, so we would still have to do a careful line-by-line review as part of bringing it in. Reviewing and revising 75,000+ lines of code is quite possibly more work than writing 75,000 lines from scratch. And a fresh implementation without the history of keeping up with QUIC during its instability may well end up smaller.

For all these reasons, the path forward is almost certainly not to adopt quic-go directly.

@marten-seemann, as I said before, we certainly appreciate your work implementing QUIC to date as well as the expertise you have amassed, and if you would be interested to share that with us in the development and review of a fresh implementation, you'd certainly be welcome. On the other hand, if you would rather focus on quic-go and not a different implementation, we'd understand that too.

@james-lawrence
Contributor

The main things I'm interested in with a QUIC implementation are exposing ALPN and seamless interop with other standard transports; i.e., I shouldn't have to make extra calls to serve HTTP over a QUIC transport. Just set up the QUIC listener and pass it to http.Serve. When the quic implementation gets to a workable state I'm 100% down to start using it in some of my applications and provide feedback on the API.

@marten-seemann
Contributor

marten-seemann commented Mar 10, 2023

@rsc, thank you for your detailed post. It seems like a decision has already been made, but nevertheless, here are my 2c. Happy to share some of my insights from almost 8 years of developing / maintaining a QUIC stack and from having been a member of the IETF QUIC working group since the very beginning.

Building a performant QUIC stack is an absolutely massive endeavor. Getting the handshake to work is nothing more than a tiny first step. When we started the project, it only took us one or two weeks to download a small file from a quic-go server using Chrome via what was back then called H2/QUIC, and most of that time was spent on implementing the bespoke QUIC Crypto.
Implementing the other mandatory parts of the 4 QUIC RFCs (please don't do it the CloudFlare way of not implementing mandatory parts of the RFC) is an enormous amount of work, including the implementation of flow control (both at the stream and the connection level), a packet scheduler, a congestion controller, and loss detection with its various recovery strategies.

This results in a spec-compliant QUIC stack, but in no way an optimized / performant one. You'd probably want to implement

  • Flow control window auto-tuning: without this, a QUIC transfer will never be able to make use of the BDP of common connections on the internet
  • Packet pacing: Absolutely crucial. Without pacing, sending a full cwnd of packets overflows the queues of routers, leads to (avoidable) packet loss and a subsequent collapse of the cwnd
  • The QUIC ACK-frequency extension. Preliminary measurements show that this extension will allow quic-go to make much better use of the available bandwidth.
  • Optimized syscalls to read and write multiple packets from / to the socket in a single syscall. (By the way, can we get #45886 (net: add UDPMsg, (*UDPConn).ReadUDPMsgs, (*UDPConn).WriteUDPMsgs) some time soon 🙏? This is currently the biggest bottleneck for high-BDP throughput performance in quic-go.)
  • qlog: it will be super hard to do any serious debugging / performance analysis without awesome tools like qvis
  • DPLPMTUD
  • maybe: ECN support, especially as L4S is picking up steam

Maybe not absolutely necessary, but highly desirable:

  • The ability to run multiple QUIC connections (incoming and outgoing) on the same net.PacketConn. This is quite a powerful feature since you only need a single FD for all your connections, and it is one of the main reasons the IPFS project started investing in quic-go.
  • QUIC Unreliable Datagrams
  • WebTransport support. The hooks WebTransport requires are no fun.
  • 0-RTT handshakes (not sure if that's part of the plan here)

Aside from all of these features above, we've spent significant engineering efforts on performance optimizations (e.g. reducing the number of allocs) and DoS defense (you're keeping track of a lot of things, e.g. sent and received frames, sent and received packets, etc., and all of these data structures are potentially attackable). As I see it, there's little point in just providing a spec-compliant QUIC implementation, if it can't (at the very least) compete with TCP's performance. And performance work on quic-go is far from done at this point.


Let me briefly comment on the points you made.

  • As already noted on this issue, there are API questions about how best to present QUIC, and we may well want an API that is different in important ways from quic-go. In particular the quic-go API is fairly low level compared to what we are contemplating.

Not sure in what sense quic-go is too low-level. However, quic-go is still on a v0.x version, and we're happy to consider well-motivated API changes. That statement stands independently of the discussion on this issue, and we're happy to hear proposals on how to make the quic-go API work better (please open an issue in quic-go; happy to discuss there!).

  • QUIC having been a moving target, it is almost certain that quic-go contains compatibility code for older versions that is no longer needed. A new implementation can target the RFC and modern implementations only.

We've removed support for QUIC crypto a long time ago. The only compatibility code that we're still maintaining is for draft-29, which we're planning to remove some time in summer this year. The additional code for draft-29 is pretty limited to begin with (mostly just using different labels in the various HKDF expansions), and with one tiny exception doesn't leak beyond the handshake package.

One thing I'm really happy about is finally cleaning up the API between quic-go and crypto/tls. I can't wait to start using the new API (I already have a branch). The API that my crypto/tls fork has accumulated over the years is indeed suboptimal (to say the least). Cleaning it up was always complicated by the fact that I had to maintain two separate forks (for the most recent 2 Go versions) at the same time.

Other than that, I don't think there's any code around that only exists for historical reasons. While minimizing the LOC was never a design target (code clarity and testability was), I don't think there's a lot you can remove without removing features or sacrificing performance.

  • The quic-go tests depend on test frameworks that we cannot depend on in the standard library, so those would need rewriting.

Indeed. In hindsight, that was a bad decision we made when we started the project. Rewriting the tests would probably take somewhere around two weeks of work. Pretty sure that this would still be orders of magnitude less work than writing an implementation from scratch.

  • The quic-go code has not been reviewed, so we would still have to do a careful line-by-line review as part of bringing it in. Reviewing and revising 75,000+ lines of code is quite possibly more work than writing 75,000 lines from scratch. And a fresh implementation without the history of keeping up with QUIC during its instability may well end up smaller.

All code has been reviewed by a Googler, @lucas-clemente (not on the Go team though). I don't mean to be nit-picky here, but it's just 24,000 LOC if you exclude tests (and 63,000 if you don't) (counted using cloc). The test suite is indeed quite comprehensive, which allowed me to discover a number of problems (including deadlocks) in the QUIC specification itself during the standardization process, as well as countless bugs in our own code.

I'd also like to point out that quic-go is widely used in production, for example by Caddy (using the HTTP/3 implementation it comes with) and accounts for ~80-90% of all connections in the IPFS network (using just the quic package, without HTTP/3). See here for a (very much incomplete) list of other projects that use it.

It's also tested against a long list of other QUIC implementations using the QUIC Interop Runner which we built a few years ago to facilitate automated interop testing in the QUIC working group.

@marten-seemann, as I said before, we certainly appreciate your work implementing QUIC to date as well as the expertise you have amassed, and if you would be interested to share that with us in the development and review of a fresh implementation, you'd certainly be welcome. On the other hand, if you would rather focus on quic-go and not a different implementation, we'd understand that too.

Happy to help, in one way or the other. You know where to find me :)

@neild
Contributor Author

neild commented Mar 10, 2023

@rhysh

Does an endpoint that acts as a client need to set a certificate?

No; fixed the documentation for Config.TLSConfig.


func (s *Stream) ReadContext(ctx context.Context, b []byte) (n int, err error) {
func (s *Stream) WriteContext(ctx context.Context, b []byte) (n int, err error) {
func (s *Stream) CloseContext(ctx context.Context) error {

These methods don't allow using normal io packages in a Context-aware way; anything that takes an io.Reader/io.Writer to do buffering, (de)compression, (un)marshaling, etc and wants to also use Context will need a wrapper. And dealing with a Context value in each call looks like it will be expensive (as crypto/tls's implementation of Read vs HandshakeContext saw in #50657).

There are three types of API for cancellable read/write operations in common use that I know of:

  1. The net.Conn style of a type which implements io.ReadWriter with a separate Set(Read|Write)Deadline.
  2. Functions that accept a context.Context.
  3. A function that curries a context, returning an io.ReadWriter: s.WithContext(ctx).Read(p).

I believe we need to support the first one for compatibility with net.Conn. We should also support context-based cancellation, so that means at least two overlapping APIs.

I don't have a strong opinion about which of the latter two options is best (s.ReadContext(ctx, p) vs s.WithContext(ctx).Read(p)), but ReadContext is a bit less indirect and it's simple to write a context-currying adapter in terms of it.
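
For illustration, the context-currying adapter mentioned above could be as small as this (a sketch against the proposed API; ctxStream and WithContext are illustrative names, and the context, io, and quic imports are assumed):

// ctxStream curries a context over the proposed *Context methods,
// yielding a plain io.ReadWriteCloser.
type ctxStream struct {
	ctx context.Context
	s   *quic.Stream
}

func (c ctxStream) Read(p []byte) (int, error)  { return c.s.ReadContext(c.ctx, p) }
func (c ctxStream) Write(p []byte) (int, error) { return c.s.WriteContext(c.ctx, p) }
func (c ctxStream) Close() error                { return c.s.CloseContext(c.ctx) }

// WithContext returns an io.ReadWriteCloser whose operations are
// interrupted when ctx is canceled.
func WithContext(ctx context.Context, s *quic.Stream) io.ReadWriteCloser {
	return ctxStream{ctx, s}
}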

Another possibility, if #57928 is accepted (acceptance isn't strictly necessary, but it would be needed to implement this efficiently), might be:

// SetCancelContext arranges for operations on the stream to be interrupted if the provided context is canceled.
// After the context is canceled, calls to I/O methods such as Read and Write will return the context error.
// A nil value for ctx means operations will not be interrupted.
func (s *Stream) SetCancelContext(ctx context.Context) {}

This revision of the API doesn't give visibility into flow control. How much is required to build a reliable QPACK implementation?

This is an excellent question. I left flow control out of the initial proposal because I'm not completely satisfied with any of the ideas I've had so far.

My current inclination is to have a per-stream configuration option that makes writes to the stream effectively atomic--a write will block until flow control is available to send the entire write, sending either the entire chunk of data or none of it.

s.SetAtomicWrites()
n, err := s.Write(data)
// If err is nil, n==len(data).
// If err is non-nil, n==0.

I'd be interested to hear other ideas.


Conn.Close and Conn.Abort look similar and not quite orthogonal. I'm not sure if there's a good reason for an app to close a connection with an error and then also wait for the peer to acknowledge, but I don't see how they'd do that with this pair of methods. Maybe call Abort and then Close? Or Abort and Wait (but maybe Abort immediately discards all state)? What's the reason to not have Abort and Close be one method with signature func(context.Context, error) error?

To close a connection with an error and wait for the peer to acknowledge:

c.Abort(ConnClosedError{Code: code})
err := c.Wait(ctx)

I think you might be right that combining Abort and Close into a single func(context.Context, error) error method is better. I'll think about that some more.


Only because you mentioned future possibility of Early Data: Would Dialing with Early Data need a separate quic.Listener method (and package-level function)?

I think we can do Early Data almost entirely within the proposed API.

  • We add a Config.EnableEarlyData option.
  • When EnableEarlyData is on, creating a new client stream doesn't immediately start the handshake if 0-RTT state is available. The user can create new streams and write to them as usual, with data buffered locally. When the early data buffer fills or when the user explicitly flushes the connection, we send the handshake and pending data in 0-RTT packets. If the server rejects early data, we resend the discarded 0-RTT data in 1-RTT.
  • On the server side, when EnableEarlyData is on, accepted client streams may include early data. There will be a method for querying a stream to see if it's receiving early data, and possibly a method the user needs to call to explicitly acknowledge that they're receiving early data before they can read from the stream.

But I haven't tried to implement this, and might be missing something.
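
To make the flow concrete, client-side usage might look something like this. This is purely hypothetical: Config.EnableEarlyData does not exist, and the buffering behavior is only the sketch described above.

// Hypothetical 0-RTT client flow.
cfg := &quic.Config{EnableEarlyData: true} // hypothetical option
conn, err := quic.Dial(ctx, "udp", "127.0.0.1:8000", cfg)
if err != nil {
	// handle error
}
s, err := conn.NewStream(ctx) // with 0-RTT state cached, does not wait for the handshake
if err != nil {
	// handle error
}
s.Write(request) // buffered locally as early data
s.Flush()        // handshake and buffered data are sent together in 0-RTT packets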

@gopherbot

Change https://go.dev/cl/475435 mentions this issue: quic: add various useful common constants and types

@gopherbot

Change https://go.dev/cl/468402 mentions this issue: quic: add internal/quic package

@gopherbot

Change https://go.dev/cl/475437 mentions this issue: quic: basic packet operations

@gopherbot

Change https://go.dev/cl/475436 mentions this issue: quic: packet number encoding/decoding

@gopherbot

Change https://go.dev/cl/475438 mentions this issue: quic: packet protection

@neild
Contributor Author

neild commented Mar 10, 2023

just set up the quic listener and pass it to http.Serve.

To be clear: This will not work. QUIC is not TCP. A quic.Listener listens for QUIC connections, where a QUIC connection can multiplex any number of TCP-like streams. There isn't any concept in QUIC which corresponds well to a net.Listener.

In addition, while it would be possible to run HTTP/1 over QUIC streams, nobody (so far as I know) does this. HTTP/3 uses QUIC as an underlying transport, but HTTP/3 is not just HTTP/1 with TCP swapped out for QUIC.

@rhysh
Contributor

rhysh commented Mar 10, 2023

There are three types of API for cancellable read/write operations in common use that I know of:

  1. The net.Conn style of a type which implements io.ReadWriter with a separate Set(Read|Write)Deadline.
  2. Functions that accept a context.Context.
  3. A function that curries a context, returning an io.ReadWriter: s.WithContext(ctx).Read(p).

I believe we need to support the first one for compatibility with net.Conn. We should also support context-based cancellation, so that means at least two overlapping APIs.

I don't have a strong opinion about which of the latter two options is best (s.ReadContext(ctx, p) vs s.WithContext(ctx).Read(p)), but ReadContext is a bit less indirect and it's simple to write a context-currying adapter in terms of it.

Yes, ReadContext is more direct when it's called directly, and writing a context-currying adapter is simple. But users of io.Copy, io.ReadFull, gzip.NewReader, fmt.Fprintf, etc. who also want to use context will need to write and use that adapter every time, or (I've found) will end up with project-specific implementations like myio.ReadFullWithContext.

The adapter is simple to write in either direction (2 to 3 or 3 to 2). The io.Reader and io.Writer interfaces are extremely widespread, and currying means any interaction with the context (including looking up values) can be amortized across more calls and more bytes.

It seems that API 1 can also be built into API 3, where the value that WithContext returns implements net.Conn, but doesn't allow setting a deadline farther out than the deadline that was attached to the context. Users who don't want to deal with context at all can use the result of s.WithContext(context.Background()).

For what it's worth, I'd also expect multiple calls to WithContext with different arguments to return independent results, to allow independent control of Read versus Write deadlines. That would be hard to express in an API like SetCancelContext.

I agree though that it's not clear which of these APIs is best. Most of all I'd like one that aligns with performance: where it's easy to implement and use in an efficient way and hard to end up with a bunch of great code that can't be made to go fast.


My current inclination is to have a per-stream configuration option that makes writes to the stream effectively atomic--a write will block until flow control is available to send the entire write, sending either the entire chunk of data or none of it.

That sounds pretty simple to use, nice!

If it allows writes that are larger than a single packet, and some pacing is active, and the user cancels the write via SetWriteDeadline or a context—that would leak through the abstraction, which is probably fine but would take some careful doc-writing to explain.

I'd be interested to hear other ideas.

IIUC there are three protocol-level limits that determine whether a particular STREAM frame is allowed on the wire: stream-level flow control, connection-level flow control, and connection-level congestion control (which seems closely related to any pacing in place). It looks like QPACK specifies a need for interacting with the first two. I expect it's unusual to need to interact with the third, but I have an application that takes advantage of visibility into that: it prioritizes which data to send, and sometimes whether to send any data at all, based on how soon it expects the QUIC stack will be able to send it to the peer. (Maybe some of that is better left to an integration with the packet scheduler.)

I wonder if there's room for an API along the lines of https://pkg.go.dev/golang.org/x/time/rate#Limiter.ReserveN that gives an app a higher level of visibility into and control of those windows, through an API that returns a struct with methods that allow inspecting and manipulating (and canceling) the reservation. It's definitely an "other idea"; I don't know whether it's better than your SetAtomicWrites proposal.

// TryReserve reserves up to n bytes of stream- and connection-level flow control, earmarking
// it for use with s. If fewer than n bytes are available, TryReserve claims them all.
func (s *Stream) TryReserve(n int64) *Reservation

type Reservation struct

// Value returns the number of bytes of stream- and connection-level flow control this Reservation holds.
// Writes to the stream will reduce this value.
func (r *Reservation) Value() int64

// Cancel returns the indicated number of bytes of stream- and connection-level flow control to the
// general pool. It panics if the math is wrong (n is negative, or n is larger than the current Value).
func (r *Reservation) Cancel(n int64)

@ghost

ghost commented Mar 16, 2023

While this might be off-topic, as the unreliable datagram extension is not part of the core QUIC RFCs, I would like to express interest in the extension, for implementing HTTP/3 datagrams and CONNECT-UDP.
I think that an API imitating net.PacketConn should be fine.

@neild
Contributor Author

neild commented Mar 17, 2023

Unreliable datagrams (RFC 9221) are something we definitely want to support, although I'd rather defer that to a separate proposal. A net.PacketConn-style API may not suffice, since we may want some mechanism for indicating when a datagram has been acknowledged or declared lost.
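
To illustrate that last point, a datagram API might need a shape more like the following. These signatures are hypothetical, not a proposal:

// SendDatagram sends an unreliable datagram (RFC 9221) and returns a
// channel reporting whether it was acknowledged (true) or declared lost (false).
func (c *Conn) SendDatagram(ctx context.Context, p []byte) (acked <-chan bool, err error)

// ReceiveDatagram blocks until a datagram arrives from the peer.
func (c *Conn) ReceiveDatagram(ctx context.Context) ([]byte, error)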

gopherbot pushed a commit to golang/net that referenced this issue Feb 8, 2024
The ReadContext, WriteContext, and CloseContext Stream methods are
difficult to use in conjunction with functions that operate on an
io.Reader, io.Writer, or io.Closer. For example, it's reasonable
to want to use io.ReadFull with a Stream, but doing so with a context
is cumbersome.

Drop the Stream methods that take a Context in favor of stateful
methods that set the Context to use for read and write operations.
(Close counts as a write operation, since it blocks waiting for
data to be sent.)

Intentionally make Set{Read,Write}Context not concurrency safe,
to allow the race detector to catch misuse. This shouldn't be a
problem for correct programs, since reads and writes are
inherently not concurrency-safe.

For golang/go#58547

Change-Id: I41378eb552d89a720921fc8644d3637c1a545676
Reviewed-on: https://go-review.googlesource.com/c/net/+/550795
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
@gopherbot

Change https://go.dev/cl/564016 mentions this issue: quic: add qlog recovery metrics

@gopherbot

Change https://go.dev/cl/564015 mentions this issue: quic/qlog: don't output empty slog.Attrs

@gopherbot

Change https://go.dev/cl/564017 mentions this issue: quic: log packet_dropped events

gopherbot pushed a commit to golang/net that referenced this issue Feb 14, 2024
For golang/go#58547

Change-Id: I49a27ab82781c817511c6f7da0268529abc3f27f
Reviewed-on: https://go-review.googlesource.com/c/net/+/564015
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
gopherbot pushed a commit to golang/net that referenced this issue Feb 15, 2024
Log events for various congestion control and loss recovery metrics.

For golang/go#58547

Change-Id: Ife3b3897f6ca731049c78b934a7123aa1ed4aee2
Reviewed-on: https://go-review.googlesource.com/c/net/+/564016
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
gopherbot pushed a commit to golang/net that referenced this issue Feb 15, 2024
Log unparsable or otherwise discarded packets.

For golang/go#58547

Change-Id: Ief64174d91c93691bd524515aa6518e487543ced
Reviewed-on: https://go-review.googlesource.com/c/net/+/564017
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
@gopherbot

Change https://go.dev/cl/564477 mentions this issue: quic: add Stream.ReadByte, Stream.WriteByte

@gopherbot

Change https://go.dev/cl/564476 mentions this issue: quic: reduce ack frequency after the first 100 packets

@gopherbot

Change https://go.dev/cl/564475 mentions this issue: quic: add throughput and stream creation benchmarks

gopherbot pushed a commit to golang/net that referenced this issue Feb 15, 2024
For golang/go#58547

Change-Id: Ie62fcf596bf020bda5a167f7a0d3d95bac9e591a
Reviewed-on: https://go-review.googlesource.com/c/net/+/564475
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
@gopherbot

Change https://go.dev/cl/564495 mentions this issue: quic: fast path for stream reads

@gopherbot

Change https://go.dev/cl/564496 mentions this issue: quic: fast path for stream writes

gopherbot pushed a commit to golang/net that referenced this issue Feb 16, 2024
RFC 9000 recommends sending an ack for every second ack-eliciting
packet received. This frequency is high enough to have a noticeable
impact on performance.

Follow the approach used by Google QUICHE: Ack every other packet
for the first 100 packets, and then switch to acking every 10th
packet.

(Various other implementations also use a reduced ack frequency;
see Custura et al., 2022.)

For golang/go#58547

Change-Id: Idc7051cec23c279811030eb555bc49bb888d6795
Reviewed-on: https://go-review.googlesource.com/c/net/+/564476
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Auto-Submit: Damien Neil <dneil@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
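
The policy described in this commit message reduces to something like the following sketch (illustrative only, not the package's actual code):

// shouldAck reports whether the nth ack-eliciting packet received should
// trigger an ACK: every 2nd packet for the first 100 packets, then every 10th.
func shouldAck(n int) bool {
	if n <= 100 {
		return n%2 == 0
	}
	return n%10 == 0
}
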
gopherbot pushed a commit to golang/net that referenced this issue Feb 16, 2024
Currently unoptimized and slow.
Adding along with a benchmark to compare to the fast-path followup.

For golang/go#58547

Change-Id: If02b65e6e7cfc770d3f949e5fb9fbb9d8a765a90
Reviewed-on: https://go-review.googlesource.com/c/net/+/564477
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
@gopherbot

Change https://go.dev/cl/565255 mentions this issue: quic: source address and ECN support in the network layer

gopherbot pushed a commit to golang/net that referenced this issue Feb 21, 2024
Keep a reference to the next chunk of bytes available for reading
in an unsynchronized buffer. Read and ReadByte calls read from this
buffer when possible, avoiding the need to lock the stream.

This change makes it unnecessary to wrap a stream in a *bytes.Buffer
when making small reads, at the expense of making reads
concurrency-unsafe. Since the quic package is a low-level one and
this lets us avoid an extra buffer in the HTTP/3 implementation,
the tradeoff seems worthwhile.

For golang/go#58547

Change-Id: Ib3ca446311974571c2367295b302f36a6349b00d
Reviewed-on: https://go-review.googlesource.com/c/net/+/564495
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
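
As a general illustration of the technique (not the package's actual internals), the fast path keeps an unsynchronized slice of the next readable bytes and falls back to a locking slow path only when that slice runs dry:

type fastReader struct {
	buf    []byte                 // unsynchronized fast-path buffer
	refill func() ([]byte, error) // slow path: lock the stream, return the next chunk
}

func (r *fastReader) ReadByte() (byte, error) {
	if len(r.buf) == 0 {
		b, err := r.refill()
		if err != nil {
			return 0, err
		}
		r.buf = b
	}
	c := r.buf[0]
	r.buf = r.buf[1:]
	return c, nil
}
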
gopherbot pushed a commit to golang/net that referenced this issue Feb 21, 2024
Similar to the fast-path for reads, writes are buffered in an
unsynchronized []byte allowing for lock-free small writes.

For golang/go#58547

Change-Id: I305cb5f91eff662a473f44a4bc051acc7c213e4c
Reviewed-on: https://go-review.googlesource.com/c/net/+/564496
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
@gopherbot

Change https://go.dev/cl/565795 mentions this issue: quic: handle PATH_CHALLENGE and PATH_RESPONSE frames

@gopherbot

Change https://go.dev/cl/565796 mentions this issue: quic: set ServerName in client connection TLSConfig

@gopherbot

Change https://go.dev/cl/565797 mentions this issue: quic: expand package docs, and document Stream

gopherbot pushed a commit to golang/net that referenced this issue Feb 21, 2024
Make the abstraction over UDP connections higher level,
and add support for setting the source address and ECN
bits in sent packets, and receiving the destination
address and ECN bits in received packets.

There is no good way that I can find to identify the
source IP address of packets we send. Look up the
destination IP address of the first packet received on
each connection, and use this as the source address
for all future packets we send. This avoids unexpected
path migration, where the address we send from changes
without our knowing it.

Reject received packets sent from an unexpected peer
address.

In the future, when we support path migration, we will want
to relax these restrictions.

ECN bits may be used to detect network congestion.
We don't make use of them at this time, but this CL adds
the necessary UDP layer support to do so in the future.

This CL also lays the groundwork for using more efficient
platform APIs to send/receive packets in the future.
(sendmmsg/recvmmsg/GSO/GRO)

These features require platform-specific APIs.
Add support for Darwin and Linux to start with,
with a graceful fallback on other OSs.

For golang/go#58547

Change-Id: I1c97cc0d3e52fff18e724feaaac4a50d3df671bc
Reviewed-on: https://go-review.googlesource.com/c/net/+/565255
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
gopherbot pushed a commit to golang/net that referenced this issue Feb 23, 2024
We do not support path migration yet, and will ignore packets
sent from anything other than the peer's original address.
Handle PATH_CHALLENGE frames by sending a PATH_RESPONSE.
Handle PATH_RESPONSE frames by closing the connection
(since we never send a challenge to respond to).

For golang/go#58547

Change-Id: I828b9dcb23e17f5edf3d605b8f04efdafb392807
Reviewed-on: https://go-review.googlesource.com/c/net/+/565795
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
gopherbot pushed a commit to golang/net that referenced this issue Feb 23, 2024
Client connections must set tls.Config.ServerName to authenticate
the identity of the server. (RFC 9001, Section 4.4.)

Previously, we specified a single tls.Config per Endpoint.
Change the Config passed to Listen to only apply to
client connections accepted by the endpoint.
Add a Config parameter to Listener.Dial to allow specifying a
separate config per outbound connection, allowing the user
to set the ServerName field.

When the user does not set ServerName, set it ourselves.

For golang/go#58547

Change-Id: Ie2500ae7c7a85400e6cc1c10cefa2bd4c746e313
Reviewed-on: https://go-review.googlesource.com/c/net/+/565796
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
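
Under the change described above, dialing with a per-connection config would look roughly like this (the Dial signature is assumed from the commit message):

// The Config passed to Dial applies only to this outbound connection,
// so ServerName can differ per connection.
cfg := &quic.Config{
	TLSConfig: &tls.Config{ServerName: "example.com"},
}
conn, err := l.Dial(ctx, "udp", "example.com:443", cfg)
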
gopherbot pushed a commit to golang/net that referenced this issue Feb 23, 2024
For golang/go#58547

Change-Id: Ie5dd0ed383ea7a5b3a45103cb730ff62792f62e1
Reviewed-on: https://go-review.googlesource.com/c/net/+/565797
Reviewed-by: Jonathan Amsterdam <jba@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
@gopherbot

Change https://go.dev/cl/566295 mentions this issue: quic: move package out of internal

@neild
Contributor Author

neild commented Feb 27, 2024

I plan on submitting https://go.dev/cl/566295 shortly, which will move the in-development internal package to golang.org/x/net/quic.

This implementation is still a work in progress, but it's complete enough for people to look it over and kick the tires. You can create QUIC connections and streams, and send and receive data on streams. It has not undergone any form of real-world usage, and there are doubtless many bugs. I would not recommend trying to use this in production yet.

This package is experimental and not yet bound by the Go compatibility promise. The API may change as we gain experience with it.

HTTP/3 is not yet implemented. I intend to do that next.

0-RTT, path migration, and the unreliable datagram extension are also not yet implemented. These should be straightforward to implement, but I don't know when I'll get to them.

A number of performance-critical features are missing, notably GRO, GSO, and PMTUD. Expect poor performance until those are completed. In my tests, performance is currently mostly limited by our ability to pass datagrams through the kernel.

There are some differences in the API compared to the initial proposal in this issue. Consult the package godoc for details. A quick overview of significant differences:

  • An Endpoint listens for QUIC traffic on a network address. (Renamed from Listener.)
  • Stream read/write cancelation has changed substantially:
    • Stream.ReadContext and Stream.WriteContext were a bad idea and have been dropped.
    • Stream.SetReadDeadline and Stream.SetWriteDeadline have also been dropped.
    • Stream.SetReadContext and Stream.SetWriteContext set a context to use for canceling read/write operations on a stream.
  • Read and write operations on Stream are not concurrency-safe: You can make only one read or write at a time, although you can read and write simultaneously. Reads and writes have a fast path which directly accesses the stream's internal buffer, avoiding the need to wrap a stream in a bufio.ReadWriter or equivalent for performance.
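
Putting those differences together, typical stream usage now looks roughly like this (based on the summary above; consult the package godoc for the authoritative API):

s, err := conn.NewStream(ctx)
if err != nil {
	// handle error
}
s.SetReadContext(ctx)  // replaces ReadContext and SetReadDeadline
s.SetWriteContext(ctx) // replaces WriteContext and SetWriteDeadline
if _, err := s.Write(request); err != nil {
	// handle error
}
s.Flush()
b, err := s.ReadByte() // fast path: reads directly from the stream's buffer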

gopherbot pushed a commit to golang/net that referenced this issue Feb 27, 2024
For golang/go#58547

Change-Id: I119d820824f82bfdd236c6826f960d0c934745ca
Reviewed-on: https://go-review.googlesource.com/c/net/+/566295
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Projects
Status: Accepted