
Support for interim responses (1xx) #118

Open
Acconut opened this issue Jul 9, 2024 · 6 comments


@Acconut

Acconut commented Jul 9, 2024

A server can generate one final response and multiple interim responses for a single request. From RFC 9110:

A single request can have multiple associated responses: zero or more "interim" (non-final) responses with status codes in the "informational" (1xx) range, followed by exactly one "final" response with a status code in one of the other ranges

Some informational status codes, like 100 Continue, are typically handled by the HTTP implementation itself, but other interim responses are useful to applications. For example, a server may want to generate a 103 Early Hints interim response to allow the client to preload resources. Alternatively, a server may want to repeatedly generate interim responses for a long-running request to update the client on its processing progress. The client, on the other hand, may be interested in consuming those interim responses.

As far as I understand - and please correct me here if I am wrong - the interface currently does not expose capabilities for clients to receive or for servers to generate interim responses. Would there be interest in adding such features?

@lukewagner
Member

Great question! Does anyone have any links to good examples of how this is exposed in any standard library HTTP interfaces?

@Acconut
Author

Acconut commented Jul 11, 2024

Looking at other APIs is a good idea! I have worked with interim responses in Go and Node.js, so I can share their approaches.

Go

When handling an incoming request on the server side, the handler receives a ResponseWriter value, whose WriteHeader method can be called multiple times to emit interim (1xx) responses and once for the final response (2xx-5xx):

mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
  w.Header().Set("My-Interim-Header", "hello")
  w.WriteHeader(105) // interim response

  w.Header().Del("My-Interim-Header")
  w.Header().Set("My-Final-Header", "hello")
  w.WriteHeader(200) // final response
})

Retrieving interim responses as a client is a bit more tedious. You have to attach a client tracer to the outgoing HTTP request. This tracer allows you to define a callback that will be invoked for every received interim response:

ctx := httptrace.WithClientTrace(context.Background(), &httptrace.ClientTrace{
	// Invoked once for every 1xx response received before the final response.
	Got1xxResponse: func(code int, header textproto.MIMEHeader) error {
		fmt.Printf("Got %d response\n", code)
		return nil
	},
})

req, _ := http.NewRequestWithContext(ctx, "GET", "https://example.com", nil)
res, _ := http.DefaultClient.Do(req)
defer res.Body.Close()

Node.js

For clients in Node.js, receiving interim responses is as easy as listening for the information event on the request object:

const req = http.request(options);

// Attach the listener before ending the request so no interim response is missed.
req.on('information', (info) => {
  console.log(`Got information prior to main response: ${info.statusCode}`);
});

req.end();

For servers in Node.js, the support for generic 1xx responses is not great. Node.js offers dedicated methods for sending specific 1xx responses, such as response.writeContinue() and response.writeEarlyHints().

Other than that, there are no methods for sending generic 1xx responses with custom headers.

That being said, the approaches from Go and Node.js are similar:

  • clients use callbacks/events to receive 1xx responses
  • servers call a method multiple times to send 1xx responses

Do you think these concepts are transferable to wasi-http?

@lukewagner
Member

Thanks so much for the detailed examples in two languages; that helps create a picture of what we'd need at the WASI level. One thing I was wondering about, which your examples help explain, is how to fall back gracefully when the receiver of a response doesn't know or care about interim responses: it sounds like you just get the non-interim response and body stream as normal and silently ignore all the interim responses.

Just to confirm: is it the case that interim responses can only be received before a single final non-interim (non-1xx) response (followed by the body stream)? If so, that suggests to me (perhaps in a 0.3 release, when we're making breaking changes to wasi-http anyway) that handle could return a resource type that represents "the overall sequence of responses to a single request", with methods to either (1) stream the N interim responses before the 1 final non-interim response or (2) skip directly to the final non-interim response. Thus, when the Go/Node.js "information" callback is installed, the impl would use (1); otherwise, it would use (2).
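To make that idea concrete, here is a rough JavaScript mock of such a "sequence of responses" resource. All names here are invented for illustration and are not actual wasi-http API:

```javascript
// Hypothetical mock of a resource representing "the overall sequence of
// responses to a single request": a consumer can either iterate the
// interim responses before the final one, or skip straight to the final.
class FutureResponses {
  constructor(interim, final) {
    this._interim = interim; // array of { statusCode, headers }
    this._final = final;     // { statusCode, headers, body }
  }

  // (1) yield each interim (1xx) response in arrival order
  *interim() {
    yield* this._interim;
  }

  // (2) ignore interim responses and return the final one
  final() {
    return this._final;
  }
}

const seq = new FutureResponses(
  [{ statusCode: 103, headers: { link: '</styles.css>; rel=preload' } }],
  { statusCode: 200, headers: {}, body: 'hello' }
);

for (const info of seq.interim()) {
  console.log(`interim: ${info.statusCode}`); // prints "interim: 103"
}
console.log(`final: ${seq.final().statusCode}`); // prints "final: 200"
```

An implementation backing the Go/Node.js "information" callback would drive path (1); a consumer that never calls interim() effectively takes path (2).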

@acfoltzer
Contributor

Just to confirm: is it the case that interim responses can only be received before a single final non-interim (non-1xx) response (followed by the body stream)?

Yep! (cite)

handle returns a resource type

Do you mean that handle would take a new resource type as an argument? This would be in line with other proposals floating around different ecosystems (here's one from the hyper world that uses a method on the Request argument rather than an entire new argument to avoid a breaking change). If the new resource was only accessible after handle returns (along with the final Response headers) that would be too late for many 1xx use cases.

@lukewagner
Member

Do you mean that handle would take a new resource type as an argument? [...] If the new resource was only accessible after handle returns (along with the final Response headers) that would be too late for many 1xx use cases.

I was thinking that (again, in a 0.3 breaking-change timeframe) the resource type returned by handle would not represent "a response" (which I agree would arrive "too late"), but, rather, would represent "whatever future response(s) I get from this request" and thus you would get it before the first information response (maybe immediately, or maybe after some basic connection setup? I dunno) and given this resource, you could use it to either ask for informational responses or skip them.

Thinking through direct component-to-component composition scenarios, it seems like the ideal here is that we're not bifurcating the types (or the handle function or its interface name), so that if I have a chain of 3 components and the component in the middle has no knowledge or care of informational responses--it's just forwarding things after poking at a header, let's say--it all composes and we don't "shear off" the informational responses. There are probably multiple ways to achieve this, though, and what I'm saying is just a rough sketch.

One requirement that standard libraries have that we don't (yet) is that they have to maintain backwards compatibility with existing users whereas we only need to be able to implement these library interfaces (with library impl code that can know the full 0.3 interface), which is nice.

@acfoltzer
Contributor

Ah, I see, I was still thinking about it in the framework of the current incoming- and outgoing-handler split. That makes sense.
