corneil

07/10/2022, 5:52 PM
if you use write that returns a Future and call close when Future isDone
janvladimirmostert

07/10/2022, 6:31 PM
I can still reproduce the problem when using a blocking Future
withContext(Dispatchers.IO) {
  channel.write(ByteBuffer.wrap(data)).get()
  channel.close()
}
net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)
There must be a better way to do
close()
that won't cut off the stream while the browser is reading
corneil

07/10/2022, 6:36 PM
Look at a CompletionHandler
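For reference, a minimal sketch of a CompletionHandler-based write-then-close (writeAndClose is a hypothetical helper, not from the StackOverflow example; it also re-issues the write on partial writes, which a single write call doesn't guarantee to avoid):

```kotlin
import java.nio.ByteBuffer
import java.nio.channels.AsynchronousSocketChannel
import java.nio.channels.CompletionHandler

// Hypothetical helper: write the whole buffer, and close the channel
// only once the final write has completed.
fun writeAndClose(channel: AsynchronousSocketChannel, data: ByteArray) {
    val buffer = ByteBuffer.wrap(data)
    channel.write(buffer, buffer, object : CompletionHandler<Int, ByteBuffer> {
        override fun completed(written: Int, attachment: ByteBuffer) {
            if (attachment.hasRemaining()) {
                // Partial write: keep writing until the buffer is drained.
                channel.write(attachment, attachment, this)
            } else {
                channel.close()
            }
        }

        override fun failed(exc: Throwable, attachment: ByteBuffer) {
            channel.close()
        }
    })
}
```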
janvladimirmostert

07/10/2022, 6:39 PM
there's a completion handler in my StackOverflow example, putting close inside the completion handler still causes
net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK)
When you're done writing and the completion handler is called, it doesn't actually mean that the browser received your written data, just that the kernel received it; calling close() too soon wipes the buffer before the browser can read it
corneil

07/10/2022, 6:46 PM
Apologies I didn't look at the stackoverflow link. What is the client side doing?
janvladimirmostert

07/10/2022, 6:52 PM
client side is literally just displaying the GET request, at least that's what I've simplified it down to
if I put a Thread.sleep before the close, even for 1ms, then it doesn't show that error
when combining it with window.fetch, anything above 250kB triggers that error when deployed on AWS behind a load balancer, and the JSON response gets cut off. When doing a raw GET request (easier to test), I need to increase the data size significantly to reproduce the problem
I thought that gzipping might fix it, which it did for the large JSON responses, but then it started showing the exact same errors for small JSON responses retrieved via window.fetch. If that had solved it, it would have been great, but it seems too random to be a reliable workaround, so I need to figure out what the proper way is to close a socket connection on the JVM
corneil

07/10/2022, 6:59 PM
Have you had a look at Netty or Jetty code?
I think with HTTP you shouldn't close the socket on the server; let the browser close the socket. The client can make multiple requests over the same connection, and with HTTP/2 even concurrent requests.
janvladimirmostert

07/10/2022, 7:02 PM
I've had a look at a few frameworks so far and all of them are just using Netty. Netty itself is enormous, so I haven't gotten my head around the closing mechanism yet. I'll definitely check out Jetty too, thanks for the hint!
if you don't close the channel, then the browser hangs until the channel is closed. I've tried adding the Connection: close header in the response, but the browser seems to ignore it. This is an HTTP/1.1 implementation, so the connection has to close; for HTTP/2 I can probably leave the connection open
some smart socket people say I have to shutdownInput, which will trigger a write from the client side which I then read again, but I'm not sure at what point this needs to happen, before or after writing. I've tested so many permutations of this already and it doesn't seem to make a difference
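For what it's worth, the pattern usually described for this half-closes the *write* side with shutdownOutput rather than shutdownInput: send a FIN, keep reading until the peer's EOF, and only then close. A minimal sketch, assuming an AsynchronousSocketChannel and blocking on the returned futures (lingeringClose is a hypothetical helper):

```kotlin
import java.nio.ByteBuffer
import java.nio.channels.AsynchronousSocketChannel

// Hypothetical "lingering close": half-close our write side (sends a FIN),
// then keep reading until the peer closes its side, and only then close.
fun lingeringClose(channel: AsynchronousSocketChannel) {
    channel.shutdownOutput()               // we are done writing
    val drain = ByteBuffer.allocate(8192)
    while (channel.read(drain).get() != -1) {
        drain.clear()                      // discard anything still in flight
    }
    channel.close()
}
```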
Seems in HTTP 1.1 the server doesn’t close the socket. Are you using a Content-Length header in response?
janvladimirmostert

07/10/2022, 7:51 PM
I do have a content-length header
headers.add("Content-Length: ${body.length}")
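One thing worth double-checking here: Content-Length must be the *byte* count of the encoded body, not the String's character count, so `body.length` undercounts whenever the body contains non-ASCII characters — which can itself produce ERR_CONTENT_LENGTH_MISMATCH. A small sketch (`headers` here is just a stand-in list, not the real header object):

```kotlin
val headers = mutableListOf<String>()  // stand-in for the real header list
val body = "größe"                     // 5 chars, but 7 bytes in UTF-8
val bytes = body.toByteArray(Charsets.UTF_8)

// Use the encoded byte count, not body.length (the character count).
headers.add("Content-Length: ${bytes.size}")

check(body.length == 5)
check(bytes.size == 7)
```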
Imagine it's that simple, like just commenting out the close(), haha. Let me read that link, it looks interesting
Read 4.5.3 Persistent connections.
janvladimirmostert

07/10/2022, 8:09 PM
without closing the connection, window.fetch for GET / POST from JavaScript just hangs like this indefinitely
I think I should write the response in two parts, header first and then the body, let me try that
corneil

07/10/2022, 8:15 PM
What is happening in the network tab? Or even when looking at the session with Wireshark? With HTTP/1.1 the client should send a close request. Are you handling those?
janvladimirmostert

07/10/2022, 8:18 PM
in the network tab, the first 6 requests are going through, then anything after that just hangs since the first 6 requests aren't considered done
corneil

07/10/2022, 8:18 PM
Is there a Connection header in request?
janvladimirmostert

07/10/2022, 8:19 PM
there was one, I removed it earlier while trying things out; let me put that back
yep, that fixed it locally! :thank-you: let me try this on AWS again, if it works, then you must collect the bounty on StackOverflow
corneil

07/10/2022, 8:21 PM
Seems like the absence of the header implies a persistent connection. You should check for its presence, and if its value is
close
then close the socket; otherwise leave the socket open.
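That rule can be sketched as a small predicate (shouldClose is a hypothetical helper; HTTP/1.1 defaults to persistent, while HTTP/1.0 defaults to close unless the client sends keep-alive):

```kotlin
// HTTP/1.1: connections are persistent by default; only "Connection: close"
// (or an HTTP/1.0 request without "Connection: keep-alive") means the server
// should close the socket after responding.
fun shouldClose(httpVersion: String, connectionHeader: String?): Boolean =
    when {
        connectionHeader.equals("close", ignoreCase = true) -> true
        httpVersion == "HTTP/1.0" ->
            !connectionHeader.equals("keep-alive", ignoreCase = true)
        else -> false  // HTTP/1.1: keep the connection open
    }
```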
janvladimirmostert

07/10/2022, 8:30 PM
seems like AWS is rewriting the Connection param; let me tinker with the load balancer settings
works perfectly fine on localhost, also via NGINX, but not via the AWS load balancer. I'll need to go read up on what AWS load balancers are doing
corneil

07/10/2022, 9:26 PM
Try TCP load balancer.
What is coming from LoadBalancer?
I'm sure if you honor 1.1 Connection header rules by leaving it open unless the header is close you should be fine.
Don't try sending Connection: close in response.
janvladimirmostert

07/10/2022, 9:44 PM
I'm going to remove Connection: close from my side and then implement persistent connections. I noticed that the browser sends more data over the same connection, so once I've handled the first request on a connection and written a response, I need to keep reading until the connection dies. This is how the PostgreSQL protocol works as well: write, read, write, read, just in reverse for HTTP/1.1
this works locally, one connection that's re-used for everything
supervisorScope {
   launch(Dispatchers.IO) {

      log.warn("New Connection!!")

      // create re-usable buffer
      val buffer = ByteBuffer.allocate(bufferSize.B)

      while (channel.isOpen) {
         val parser = readRequest(channel = channel, buffer = buffer.rewind())
         writeResponse(channel = channel, parser = parser)
      }
   }
}
let's see if this fixes the AWS Load Balancer problem which seems to ignore my response headers