How to serve HTTPS with the Deno.serve() API
I used to start a dev server with TLS like so
but can't seem to do so with the Deno.serve() API.
It seems to me that the cert and key files in the options object are being ignored and I have no idea why.
Any help is greatly appreciated.
The options are now cert and key, which are the contents of these files. I would change your code to something like:
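For example, something along these lines (a minimal sketch with placeholder file paths; Deno.serve takes the PEM contents, not paths, so read the files first):

```js
// Sketch: Deno.serve with TLS, passing the certificate and key *contents*.
// File paths, port, and the handler body are placeholders.
Deno.serve(
  {
    port: 8443,
    cert: Deno.readTextFileSync("./certificate.pem"),
    key: Deno.readTextFileSync("./certificate.key"),
  },
  (_request) => new Response("Hello over HTTPS"),
);
```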
I had more success streaming using serveTls(handler, {certFile:'/path/to/file', keyFile:'/path/to/file'}).
serveTls will be deprecated in an upcoming release of the standard library. I would strongly recommend moving to Deno.serve as described.
I see, thank you
serveTls uses Deno.listen() and Deno.serveHttp(). The last time I tested, Deno.serve() didn't stream like serveTls does.
Could you clarify what you mean by "doesn't stream like serveTls does"?
They all do streaming. Deno.serve() is optimized for performance and might do buffering to achieve faster overall times.
I'll have to find the tests and posts I made when I compared serveTls to Deno.serve(). I'm about to test Deno.serve() right now using the same handler I pass to serveTls().
Deno.serve() ain't working the same as serveTls when reading and writing streams.
I can post some code here or elsewhere for you to test and verify.
sure, please post a code repro
I don't know what you mean by "not working the same"
Doesn't stream.
This is the same handler I use with serveTls(), where I make sure to add alpnProtocols to Deno.listenTls() in the serveTls source code
https://gist.github.com/guest271314/6b430fe053aaf9352f0910639d890d35.
why are you piping it through a text encoder stream?
This is the client code; the POST request is first, then the GET request. Now when we write to the writable side, e.g., await writer.write('duplex'), the data is transformed and streamed to the client in real time.
Ideally only a single POST request should be made; however, Deno servers don't stream to the client with a POST request alone, as a ServiceWorker on Chromium 117 does: https://plnkr.co/edit/2PQK21kZTZvZ2oVi.
POST

```js
var { readable, writable } = new TransformStream();
var writer = writable.getWriter();
fetch('https://localhost:8443', {
  duplex: 'half',
  method: 'query',
  body: readable.pipeThrough(new TextEncoderStream()),
})
  .then((r) => r.body.pipeThrough(new TextDecoderStream()))
  .then((r) =>
    r.pipeTo(
      new WritableStream({
        write(value) {
          console.log(value);
        },
      }),
    ),
  )
  .then(() => console.log('Done writing stream'))
  .catch(console.dir);
```

GET

```js
fetch('https://localhost:8443?=' + new Date().getTime(), {
  headers: {
    'Content-Type': 'text/plain',
    'Access-Control-Request-Private-Network': true,
  },
})
  .then((r) => r.body.pipeThrough(new TextDecoderStream()))
  .then((r) =>
    r.pipeTo(
      new WritableStream({
        write(value) {
          console.log(value);
        },
      }),
    ),
  )
  .then(() => console.log('Done reading stream'))
  .catch(console.dir);
```
why not just
So I can read text. It shouldn't matter because pipeThrough() returns a ReadableStream. This is how the server should work with a single POST (or QUERY) request https://plnkr.co/edit/2PQK21kZTZvZ2oVi
When you test you will see that Deno.serve() does not stream data back to the client in real-time as serveTls does.
Thus my question posted here earlier where I was referred here from Reddit.
I'm confused about why you specify duplex as "half". Does ResponseInit in Deno even support this option?
I have no idea if Deno supports that option. Deno's documentation is not the greatest. That has no bearing on Deno.serve() not streaming.
I mean, the documentation is pretty good if you look at it. Where did you get this option from?
I don't think the documentation is that good. See https://developer.chrome.com/articles/fetch-streaming-requests/#demo.
Streaming requests with the fetch API (Chrome Developers): Chromium supports upload streaming as of version 105, which means you can start a request before you have the whole body available.
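For reference, the browser-side support check that article describes looks roughly like this (reproduced from memory, so treat it as approximate):

```js
// Roughly the upload-streaming feature detection from the linked article
// (browser-side). A supporting browser reads the duplex option and does not
// stringify the ReadableStream body (which would add a Content-Type header).
const supportsRequestStreams = (() => {
  let duplexAccessed = false;
  const hasContentType = new Request("", {
    body: new ReadableStream(),
    method: "POST",
    get duplex() {
      duplexAccessed = true;
      return "half";
    },
  }).headers.has("Content-Type");
  return duplexAccessed && !hasContentType;
})();
```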
You aren't making a fetch request, this doesn't apply.
I feel like I'm missing something here, this code feels super weird. I can't seem to replicate this with 1-way streaming.
Excuse me? I am making a fetch request client side. If the option is non-applicable in Deno then it doesn't matter.
I can't seem to replicate this with 1-way streaming.
Can't seem to replicate what?
If I stream a response from the server to the client (without going through a second client?), it streams just fine
I feel like there's a subtle bug somewhere in here and I'm struggling to figure it out
The bug is in Deno.serve() which doesn't stream as serveTls() does.
serveTls() alone doesn't stream back to the client with a single POST request - the data is only written on the client when writer.close() is called. Thus I had to make a POST request first to send the ReadableStream, store it on the server, then make a GET request to effectuate real-time bi-directional streaming - which should work with a single POST request, as the plnkr code does.
That is why I asked this question: https://discord.com/channels/684898665143206084/1130151205373481130
I'm sorry, I genuinely have no idea what's wrong here. A smaller reproduction would be great as this has way too much going on to pin down the issue. Hopefully someone else will be willing to help.
That is a small reproduction. Deno.serve() does not serve the stream the same way that serveTls() does.
The process steps:
1. Client side fetch('url', {method:'post', body:ReadableStream}).then((stream) => stream.pipeThrough(...).pipeTo())
2. Server side: read the stream, transform or pass through chunks, and read the transformed stream in real time client side (a minimal handler sketch follows below).
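A minimal sketch of step 2 with Deno.serve (illustrative only; the port and headers are placeholders, and the upper-casing transform just makes the round trip visible):

```js
// Sketch of step 2: return the uploaded stream, transformed, as the
// response body, so chunks can flow back while the upload is still open.
Deno.serve({ port: 8000 }, (request) => {
  if (request.body === null) {
    return new Response("POST a streaming body", { status: 400 });
  }
  return new Response(
    request.body
      .pipeThrough(new TextDecoderStream())
      .pipeThrough(
        new TransformStream({
          transform(chunk, controller) {
            controller.enqueue(chunk.toUpperCase());
          },
        }),
      )
      .pipeThrough(new TextEncoderStream()),
    { headers: { "Access-Control-Allow-Origin": "*" } },
  );
});
```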
@guest271314 You keep saying "Deno.serve() does not serve the stream the same way that serveTls() does", but that doesn't really explain the issue, to be honest. Not serving or streaming in the same way seems expected; it is a different way, so it would make sense for it not to stream in the same way. The question is: does it stream in some other way? Does it stream at all? Can your code be replaced with Deno.serve() and still stream correctly, even if it's done in a "different" way?
The question is does it stream in some other way? Does it stream at all? Can your code be replaced with Deno.serve() and still stream correctly, even if it's done in a "different" way?
No.
I can post the code and you can try for yourself. If I'm missing something in my tests kindly point that out. Note also serveTls() doesn't serve a stream out of the box, either. I had to bundle serveTls and add alpnProtocols: ["h2", "http/1.1"] to Deno.listenTls().
Deno.serve can obviously stream. Re: https://examples.deno.land/http-server-streaming
Did you test that code?
Yes
Not only did I test this code (it works for me), I wrote it
Doesn't work for me here.
How are you making the request?
What version of Deno are you using?
I'm just opening it up in the browser
Latest. On Linux. Chromium 117.
1.35.1?
deno 1.35.1 (release, x86_64-unknown-linux-gnu)
v8 11.6.189.7
typescript 5.1.6
If it works in the browser but not in your request, you're probably making the request wrong
If it doesn't work in the browser, it's probably a Deno bug on linux
It does not work in the browser. No I am not making the request wrong. The code doesn't work.
To clarify, you're running
and then opening http://localhost:8000/ in the browser, and it's not working?
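That example is roughly a handler of this shape (a sketch, not the exact Deno-by-example code):

```js
// Sketch of a Deno.serve handler that streams a new line every second,
// so the browser should render lines as they arrive.
Deno.serve((_request) => {
  let timer;
  const body = new ReadableStream({
    start(controller) {
      timer = setInterval(() => {
        controller.enqueue(
          new TextEncoder().encode(`${new Date().toISOString()}\n`),
        );
      }, 1000);
    },
    cancel() {
      clearInterval(timer);
    },
  });
  return new Response(body, {
    headers: { "content-type": "text/plain; charset=utf-8" },
  });
});
```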
OTOH I can stream - to a GET request only - using a minimally modified version of serveTls().
this is my result (it's adding new lines every second)
./deno run -A test.ts
Wait but does opening the link in the browser work? Not making a fetch request, just opening it in your browser.
Oh, I didn't know I was supposed to click deno deploy. Yes, works there. Not locally when using fetch() in the browser.
if it doesn't work using fetch, it's either an issue with fetch, or an issue with how you're configuring fetch
No, it's an issue with serve() and Deno. Because I can stream using a slightly modified version of serveTls().
However even serveTls() doesn't stream to a POST request. I have to first make a POST request sending a ReadableStream, then make a GET request to read the transformed stream I previously posted.
Can you provide an example of this working in any other language / runtime with a browser fetch?
Sure https://plnkr.co/edit/2PQK21kZTZvZ2oVi. That is the same stream being streamed back from a POST request. That is how streams should be working in Deno.
Sure, but this is using service workers, which is a completely different ballgame
No. It's the Streams Standard implementation that Deno, and others are trying to implement the design of.
Do you have an example of this working in Node that I can take a look at
No. Node.js needs special treatment just to expose Streams Standard. And the example for upload streaming https://glitch.com/~fetch-request-stream uses Express and a GET and POST request, just like I have to use with Deno.
The very best implementation of the Streams Standard and Fetch Standard that folks are trying to emulate is found in Chromium Dev Channel - not Workerd, not WASM Workers Server, not WasmEdge, and certainly not Bun https://github.com/oven-sh/bun/issues/1886.
This is what I am doing in Deno
```js
async function main() {
  // ...
  let readable, writable, writer;
  await serveTls(
    async (request) => {
      console.log(request.method);
      let body = null;
      if (request.method === 'OPTIONS' || request.method === 'HEAD') {
        return new Response(null, responseInit);
      }
      if (request.method === 'GET') {
        return new Response(
          readable.pipeThrough(new TextEncoderStream()),
          responseInit,
        );
      }
      ({ readable, writable } = new TransformStream());
      writer = writable.getWriter();
      let proxyPlaceholderController;
      const proxyPlaceholderStream = new ReadableStream({
        start(_) {
          proxyPlaceholderController = _;
          return _.enqueue('Done Streaming');
        },
      });
      request.body.pipeThrough(new TextDecoderStream()).pipeTo(
        new WritableStream({
          async write(value) {
            console.log(value);
            await writer.write(value.toUpperCase());
          },
          async close() {
            console.log('closed');
            await writer.close();
            proxyPlaceholderController.close();
          },
          async abort(reason) {
            await writer.abort(reason);
          },
        }),
      );
      return new Response(
        proxyPlaceholderStream.pipeThrough(new TextEncoderStream()),
        responseInit,
      );
    },
    {
      certFile: 'certificate.pem',
      keyFile: 'certificate.key',
      signal,
    },
  );
}
```

In the browser:

```js
async function halfDuplexStream() {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();
  const request = fetch('https://localhost:8443', {
    duplex: 'half',
    method: 'POST',
    headers: { 'Content-Type': 'text/plain; charset=UTF-8' },
    body: readable.pipeThrough(new TextEncoderStream()),
  })
    .then((r) => r.body.pipeThrough(new TextDecoderStream()))
    .then((r) =>
      r.pipeTo(
        new WritableStream({
          write(value) {
            console.log(value);
          },
        }),
      ),
    )
    .then(() => 'Done writing stream')
    .catch((e) => e);
  await new Promise((resolve) => setTimeout(resolve, 1000));
  const response = fetch('https://localhost:8443?=' + new Date().getTime(), {
    headers: {
      'Content-Type': 'text/plain',
      'Access-Control-Request-Private-Network': true,
    },
  })
    .then((r) => r.body.pipeThrough(new TextDecoderStream()))
    .then((r) =>
      r.pipeTo(
        new WritableStream({
          write(value) {
            console.log(value);
          },
        }),
      ),
    )
    .then(() => 'Done reading stream')
    .catch((e) => e);
  return { writer, request, response };
}

var { writer, request, response } = await halfDuplexStream();
Promise.allSettled([request, response]).then(console.log, console.error);
// ...
await writer.write('duplex'); // 'DUPLEX'
await writer.write(123); // '123'
await writer.close();
```

Whereas we should be able to do what the Chromium Dev Channel ServiceWorker implementation does with a single POST. This is the slightly modified Deno serveTls module I'm using. I only inserted alpnProtocols: ["h2", "http/1.1"] after ./deno bundle https://deno.land/std@0.194.0/http/server.ts serveTls.js. Looking at the source code, I don't see why serveTls is slated for deprecation when it basically relies on Deno.listenTls().
(Gist: Slightly modified Deno serveTls module)
Not only did I test this code (it works for me), I wrote it
Then can you kindly explain why the code doesn't work in Chromium Dev Channel 117 on Linux? serve() streams only when alpnProtocols: ["h2", "http/1.1"] is added as an option to Deno.listenTls(). Looking at the source code of the serve() function, we can't really pass the alpnProtocols: ["h2", "http/1.1"] option, i.e., Deno.serve({alpnProtocols: ["h2", "http/1.1"]}, handler) doesn't work.
Hmm. ALPN should be automatically set to h2 and http/1.1 (in that order). We also do have quite a bit of testing around streams themselves. Let me see if I can reproduce what you're seeing.
Reading back a bit -- serveTls is based on the old Deno.serveHttp API, which is going to be deprecated soon as the code is very difficult for us to maintain. Deno.serve should be a strict superset of everything that Deno.serveHttp + Deno.listen + Deno.listenTls could handle.
Note that Deno.serve is a completely re-implemented API, so there may be subtle differences to how it handles streams.
Also, the version of Deno.serve we stabilized this last release is different than the version of Deno.serve that was unstable earlier this year. That version also had a few bugs and we retired it in favour of a new implementation living on Hyper 1.0.
Seeing the duplex: "half" option in the example, I recall reading somewhere that half-duplex streaming is not implemented in Deno (https://github.com/whatwg/fetch/issues/1254). Not sure if that is the issue here.
ALPN should be automatically set to h2 and http/1.1 (in that order).
Well, that is definitely not the case. Reading the source code of serveTls and serve I see no difference: both depend on and use Deno.listenTls(). Basically I'm trying to figure out why, when we pass a ReadableStream to Response after a ReadableStream is received in a POST request, the stream is not sent to the client until the uploaded stream is closed. Using a ServiceWorker on Chromium - same V8 JavaScript engine as Deno - we can POST a ReadableStream, then pass a ReadableStream to Response in respondWith(), and the stream will be sent to the client.
Take a look at the time of the last ?stream request. 23 hours of real-time duplex streaming using a ServiceWorker. Where I can POST a ReadableStream and transform the output in the ServiceWorker and write that transformation back to the client. https://plnkr.co/edit/2PQK21kZTZvZ2oVi.
Note that Deno.serve is a completely re-implemented API, so there may be subtle differences to how it handles streams.
It doesn't handle streams. A very simple test case is to try to use serve or serveTls without explicitly setting alpnProtocols: ["h2", "http/1.1"] in the source code. It won't work.
Seeing the duplex: "half" option in the example, I recall reading somewhere that half-duplex stream is not implemented in Deno
Re "Fetch body streams are not full duplex" #1254 -- I think one of you
Re https://github.com/whatwg/fetch/issues/1254, that is possible using a ServiceWorker onfetch handler and respondWith(), so it must be possible to extract the source from ServiceWorker and apply that to Deno source code.
I think lucacasonato mentioned that it is not that Deno can't implement duplex stream, but Deno does not want to deviate from the spec to allow that, hence the issue.
The requirement is possible already using a ServiceWorker onfetch event handler and event.respondWith() - which a whole bunch of folks are trying to mimic in their server design. So the key is using specifications and implementations that already achieve the requirement, thus no deviance in mind or observation.
Are you mixing up Deno.serve with the std serve/serveTls?
Folks deviate from specifications all the time. Chrome does not implement silence per Media Capture and Streams for a MediaStreamTrack of kind audio, even after I notified Web Audio API about the issue. The last time I checked, Blob binary type for WebRTC data channels was not implemented. Chrome authors outright refuse to capture monitor devices on Linux for getUserMedia(), and so forth.
Are you mixing up Deno.serve with the std serve/serveTls?
I don't think so. I tested them all. None of them support streams out of the box.
Are you mixing up Deno.serve with the std serve/serveTls?
I can guarantee Deno.serve supports streams because I wrote it and added tests 😄
Well, that just is not the case. Did you test in browsers? Because, reading the source code, ALPN protocols are not included.
(deno/ext/http/00_serve.js at main · denoland/deno)
listenOpts.alpnProtocols = ["h2", "http/1.1"];
So Deno has duplicate serve functions? Is that Deno.serve()?
We have duplicates because we need to keep the old std serve/serveTls around until the end of the deprecation period. It's not great, but we are trying not to break existing code.
Correct, this is Deno.serve
Yes, I tested Deno.serve() and it doesn't stream.
That's strange, because there are explicit tests for that
I'll try again. I can create gists of all the code I am testing.
Tests where and how? Somebody linked to a test on Deno Deploy that works, yet doesn't work in the browser.
Can you create a minimal repro for something that should stream with Deno.serve that doesn't? I will definitely take a look if we can isolate an issue
(deno/cli/tests/unit/serve_test.ts at main · denoland/deno)
It's entirely possible that we ended up in a mode where chunked encoding isn't enabled, and the browser ends up buffering
I don't know how you tested it. However, when using Deno.serve() from latest Deno it doesn't stream.
It's very simple: Deno server (any one you pick) should work like this https://plnkr.co/edit/2PQK21kZTZvZ2oVi.
Just to clarify -- is the problem that Deno.serve + fetch does not stream the response? I can see that as being potentially broken
This is what I am doing: Fetch supports upload streaming with the current duplex:'half', which means we can POST, or if preferred use QUERY to send a ReadableStream to the server. Once we start reading that ReadableStream in the server we should be able to return a ReadableStream in parallel - that is, we transform the uploaded stream as we read it and send the transformed chunks back to the client at the same time. Deno doesn't send the ReadableStream back to the client until we call WritableStreamDefaultWriter.close() or ReadableStreamDefaultController.close() on the client. OTOH on Chromium 117 Dev Channel we can intercept that fetch() request in onfetch handler, pipe the uploaded body (ReadableStream) through a TransformStream, pass to event.respondWith(), then write and read from the same transformed stream. In Deno right now I have to POST, then store the ReadableStream in the server, then GET to read the previously uploaded stream.
I can test Deno.serve() again, to verify what you guaranteed. The last thing folks should be doing is performing tests in-house and guaranteeing a result. Kurt Gödel proved mathematically, almost 100 years ago in his Incompleteness Theorems, that no system can prove its own axioms within that system.
I believe that what you want to happen is supported in Deno.serve. You can start streaming the POST response back to the client and it should work as you expect.
This is probably not supported in fetch, however
No, you can't. I already notified you of that fact.
It must be supported in fetch because I can do that using a ServiceWorker - using fetch.
Take a look at the plnkr I linked to. I am doing exactly what I described with a single stream. Deno does not support that with serve(), Deno.serve(), nor serveTls().
Unless I am missing some option to pass to make that so.
I should clarify that I don't believe we have the APIs in Deno's implementation of fetch to do what you want there
I'm talking about fetch implemented in Chromium Dev Channel, same V8 JavaScript/WebAssembly engine that Deno depends on.
Again, I think you should go type some words in the linked plnkr https://plnkr.co/edit/2PQK21kZTZvZ2oVi, which demonstrates what I described is possible.
Deno doesn't support service workers, however, which seem pretty fundamental to what you want to do
Deno and others, including Workerd, Wasm Workers Server, et al., are emulating ServiceWorkers in their server design.
We implement workers, which are a different thing unfortunately
The point is fetch() does support what I describe - Deno servers don't.
If I understand correctly, the real thing you want to do here is "invert a request" -- you want to make a POST to a server where the body of the POST is a transformed version of the response of the POST?
fetch + ServiceWorkers does what you want, and unfortunately we only have fetch
What I am saying is your servers don't support sending a stream to the client while the uploaded stream is not closed.
fetch() => POST => ReadableStream => server (start reading uploaded stream) => new Response(ReadableStream()) => read in client.
Deno does not serve the response ReadableStream until the uploaded ReadableStream is closed/bodyUsed.
I'm 99% sure we're doing that right but I'll write a small test to confirm
What do you mean by "right"? It's easy enough to claim your own tests achieve what you expect.
The claim "your servers don't support sending a stream to the client while the uploaded stream is not closed" is pretty straightforward to prove in the negative
Yes. Your servers don't support that. Meaning the uploaded ReadableStream is not closed - still streams to the server, could be for 24 hours. Deno servers will not send the client a ReadableStream until the uploaded (POSTed) ReadableStream closes.
Here's some code for you to use
```js
async function halfDuplexStream() {
const { readable, writable } = new TransformStream();
const writer = writable.getWriter();
const stream = readable.pipeThrough(new TextEncoderStream());
const output = document.querySelector('output');
const input = document.querySelector('input');
input.oninput = async (e) => {
if (
e.data === null &&
e.inputType === 'deleteContentBackward' &&
e.target.value === ''
) {
output.textContent = '';
} else {
await writer.write(e.target.value);
}
};
input.onselect = (e) => {
input.value = '';
};
fetch('./?stream', {
method: 'POST',
headers: { 'Content-Type': 'text/plain' },
body: stream,
duplex: 'half',
})
.then((r) =>
r.body.pipeThrough(new TextDecoderStream()).pipeTo(
new WritableStream({
write(value) {
output.textContent = value;
},
close() {
console.log('Stream closed');
},
abort(reason) {
console.log({ reason });
},
})
)
)
.then(console.log)
.catch(console.warn);
}
```
Now, read that stream in a Deno server, transform the text written client side to uppercase, and send the ReadableStream back - without calling writer.close() client side.
Your server code should look something like
```js
new Response(
e.request.body
.pipeThrough(new TextDecoderStream())
.pipeThrough(
new TransformStream({
transform(value, c) {
c.enqueue(value.toUpperCase());
},
flush() {
console.log('flush');
},
})
)
.pipeThrough(new TextEncoderStream())
)
```
I'll dig into this
So the reason you saw that this only works w/the alpn string is that the streamable POST body (ie: chunked encoding) only works with H2 or QUIC on Chrome.
Right. You guaranteed it worked. Then I think it went to 99%? Deno authors banned me because I notified them that one of the packages they were advertising claimed package.json was required to run Node.js, which is not correct; the node executable can of course be run standalone. You folks obviously didn't test a variety of cases and devices when you tested this.
In any event, we're past that. I'm trying to figure out what needs to be changed in Deno to get this working in Deno.
And by "worked" I am referring to upload streaming (e.g., body:readable) working using Deno.serve(), serve(), serveTls().
Here's a demo showing that it works:
https://gist.github.com/mmastrac/d2065f45a7be70433b69765d37d61704
Run that executable with a key and cert, and you'll see that the stream for client and server are running simultaneously
That is not the test case I provided. You are running that in the terminal, right?
Yes, but you asked me to prove this: "your servers don't support sending a stream to the client while the uploaded stream is not closed", and I showed that it is possible to send a stream to the client while the stream is still uploading to the server
Your example did no such thing that I can see. I provided the source code to use, which I am writing in your code to demonstrate what I am talking about. You are making multiple requests, which I am already doing. This should be one (1) POST request and one (1) response.
Unfortunately now that I've shown this, I have to move on. You can check the bugtracker for info on support for ServiceWorkers and duplex: full on fetch.
You have not shown what I am talking about.
A browser example is not sufficient to build a repro for use in Deno. If you are able to produce a minimal repro for the server test case, I'm happy to take a further look at it. Have a good day!
I already did. You didn't use the code I posted here. Anyway, you have a good day, too.
Unfortunately it looks like Deno.serve() can't send a ReadableStream piped through a TransformStream() to the browser at all.
```js
return new Response(
new ReadableStream({
start(c) {
c.enqueue(new TextEncoder().encode('test'));
c.close();
}
}).pipeThrough(new TextDecoderStream())
.pipeThrough(
new TransformStream({
transform(value, c) {
const transformedValue = value.toUppperCase();
console.log(value);
c.enqueue(value.toUpperCase());
},
flush() {
console.log('flush');
},
})
)
.pipeThrough(new TextEncoderStream())
, {headers:{'Access-Control-Allow-Origin':'*'}})
```
Thank you for the repro -- I think you have a typo in here which was causing it to fail (toUppperCase -> toUpperCase):
The error is quite obscure, so I filed this bug:
https://github.com/denoland/deno/issues/19867 (Error in ReadableStream/TransformStream is swallowed in Deno.serve)
Thanks, fixed it. I had an extra line of backticks
you should also include ```ts instead of just the backticks at the start, it'll give syntax highlighting as well
Unfortunately Deno.serve() still doesn't stream after fixing that.
Since Deno's fetch() supports full duplex streaming out of the box, this https://gist.github.com/guest271314/39a3f93346bbc7dec5ee9fa1ff579256 should work out of the box, but it doesn't.
It looks like serveTls() streams out of the box without modification.
The greater problem is Deno.serve() doesn't stream to even Deno's fetch() implementation.
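A pure-Deno check of that claim might look something like this (an assumed shape, not the linked gist's code): Deno.serve echoes the uploaded stream upper-cased, and Deno's fetch() uploads one chunk per second while logging whatever comes back.

```js
// Assumed-shape repro: with real-time streaming, "received:" lines should
// appear before the writer is closed. Port and chunk contents are placeholders,
// and it is assumed Deno's fetch() accepts a ReadableStream request body here.
Deno.serve({ port: 8000 }, (request) =>
  new Response(
    request.body
      .pipeThrough(new TextDecoderStream())
      .pipeThrough(
        new TransformStream({
          transform(chunk, controller) {
            controller.enqueue(chunk.toUpperCase());
          },
        }),
      )
      .pipeThrough(new TextEncoderStream()),
  ));

const { readable, writable } = new TransformStream();
const writer = writable.getWriter();

fetch("http://localhost:8000/", { method: "POST", body: readable }).then(
  (response) =>
    response.body
      .pipeThrough(new TextDecoderStream())
      .pipeTo(
        new WritableStream({
          write(chunk) {
            console.log("received:", chunk);
          },
        }),
      ),
);

// Write a chunk each second, then close the upload.
for (const word of ["hello", "stream"]) {
  await writer.write(new TextEncoder().encode(word));
  await new Promise((resolve) => setTimeout(resolve, 1000));
}
await writer.close();
```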
I put together a demo of using Deno.listenTls() (without using Deno.serve() or serveTls()) and fetch() to implement full duplex streaming to and from the browser using a Native Messaging host: https://gist.github.com/guest271314/49b0dfc91ac78fcc57c54485327646a8
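For comparison, the Deno.listenTls() approach described above is roughly this shape (a sketch with placeholder file paths and port, not the gist's actual code):

```js
// Sketch of the Deno.listenTls() + Deno.serveHttp() approach: negotiate h2
// via alpnProtocols and answer each request with the uploaded body,
// upper-cased, so chunks flow back while the upload is still open.
const listener = Deno.listenTls({
  port: 8443,
  certFile: "./certificate.pem",
  keyFile: "./certificate.key",
  alpnProtocols: ["h2", "http/1.1"],
});

for await (const conn of listener) {
  (async () => {
    const httpConn = Deno.serveHttp(conn);
    for await (const requestEvent of httpConn) {
      const { request } = requestEvent;
      if (request.body === null) {
        requestEvent.respondWith(new Response("send a streaming POST body"));
        continue;
      }
      requestEvent.respondWith(
        new Response(
          request.body
            .pipeThrough(new TextDecoderStream())
            .pipeThrough(
              new TransformStream({
                transform(chunk, controller) {
                  controller.enqueue(chunk.toUpperCase());
                },
              }),
            )
            .pipeThrough(new TextEncoderStream()),
          { headers: { "Access-Control-Allow-Origin": "*" } },
        ),
      ).catch(() => {});
    }
  })().catch(() => {});
}
```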