# ktor
d
Isn't that what ContentNegotiation does? For example https://github.com/ktorio/ktor/blob/master/ktor-features/ktor-gson/src/io/ktor/gson/GsonSupport.kt, uses a reader... @enleur
d
The problem I see is that the Render stage of the pipeline, where the actual sending is done, may work with an object, not an OutputStream... https://github.com/ktorio/ktor/blob/master/ktor-server/ktor-server-core/src/io/ktor/features/ContentNegotiation.kt
But I could be wrong... or there might be another pipeline stage to hook onto.
Also, these handlers aren't just for JSON; there seem to be other ContentTypes where it might not be convenient to get an OutputStream?
d
But you're referring to the ktor client @octylFractal, and he's talking about the ktor server feature... unless I'm misunderstanding something?
o
oops, you're right, they're very similar in nature
I would be surprised if the server didn't have a similar capability
d
They have two different interfaces for implementing converters....
o
`OutputStream` is still synchronous, so it will be blocking anyway
o
mhm, `ContentNegotiation` calls a function named `transformDefaultContent` on the result (https://github.com/ktorio/ktor/blob/master/ktor-server/ktor-server-core/src/io/ktor/features/ContentNegotiation.kt#L85), which converts the result to an `OutgoingContent` (https://github.com/ktorio/ktor/blob/master/ktor-server/ktor-server-core/src/io/ktor/http/content/DefaultTransform.kt#L9). So it would be slightly more complex on the server side, since you wouldn't have the nice interface, but it would be easy to change https://github.com/ktorio/ktor/blob/master/ktor-features/ktor-jackson/src/io/ktor/jackson/JacksonConverter.kt to return a new `OutgoingContent` that streams it
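A rough sketch of what a streaming `OutgoingContent` could look like, swapping Jackson for Gson since that was linked earlier. This is unverified against Ktor's API of the time (the `toOutputStream()` bridge has lived under different packages across Ktor versions), and `JsonStreamContent` is a made-up name:

```kotlin
import com.google.gson.Gson
import io.ktor.http.ContentType
import io.ktor.http.content.OutgoingContent
import io.ktor.utils.io.ByteWriteChannel
import io.ktor.utils.io.jvm.javaio.toOutputStream

// Hypothetical: stream the JSON into the response channel instead of
// building the whole String in memory first.
class JsonStreamContent(
    private val gson: Gson,
    private val value: Any
) : OutgoingContent.WriteChannelContent() {

    override val contentType = ContentType.Application.Json

    override suspend fun writeTo(channel: ByteWriteChannel) {
        // The Writer bridge still blocks a thread per write, so this avoids
        // buffering the whole payload, not blocking -- and the response size
        // is no longer known up front.
        channel.toOutputStream().writer(Charsets.UTF_8).use { writer ->
            gson.toJson(value, writer)
        }
    }
}
```

Note that because no `contentLength` is overridden here, the engine can't emit a `Content-Length` header for such a response.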
I don't think the concern here is to make it non-blocking (I don't think you could feasibly make Java libraries non-blocking? maybe I just don't know how to do it), but to prevent buffering the whole result in memory. I honestly don't believe there would be much benefit, and you would lose out on `Content-Length`-related optimizations.
d
The funny thing is that for receiving it's implemented that way, but not for sending... I guess we'd need to look for some benchmarks. When handling a lot of JSON going in and out, these things might make a difference. What do you mean by `Content-Length`-related optimizations?
o
clients can pre-size buffers according to `Content-Length`, which probably doesn't make a huge difference either, but it can be useful to know how large an incoming payload is. Streaming payloads also often switch to HTTP chunked mode, which is slightly more bytes over the wire
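As a toy illustration of that pre-sizing idea (plain Kotlin, not a Ktor API; the function name and default are made up):

```kotlin
// Toy illustration (not Ktor API): a client that sees Content-Length can
// allocate its receive buffer once instead of growing it while reading.
fun receiveBuffer(headers: Map<String, String>, defaultSize: Int = 8192): ByteArray {
    val declared = headers["Content-Length"]?.toIntOrNull()
    // Chunked responses carry no Content-Length, so fall back to a default.
    return ByteArray(declared ?: defaultSize)
}

fun main() {
    println(receiveBuffer(mapOf("Content-Length" to "1024")).size) // 1024
    println(receiveBuffer(emptyMap()).size) // 8192
}
```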
d
Anyways, it might be possible to make #klaxon async, and then we'd gain back that latency to handle more requests with minimal thread blocking. I'm not sure how much gain we're talking about here either, though...
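Short of a truly async parser, one stopgap is to confine the blocking parse to a thread pool meant for blocking work. A sketch assuming kotlinx-coroutines and Klaxon's `Parser` API (`Parser.default()` is the newer Klaxon entry point; older versions used `Parser()` directly):

```kotlin
import com.beust.klaxon.JsonObject
import com.beust.klaxon.Parser
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Sketch: Klaxon's parser blocks its calling thread, so run it on the IO
// dispatcher to keep request-handling threads free. This doesn't make the
// parse itself async; it just moves the blocking somewhere cheaper.
suspend fun parseOffThread(json: String): JsonObject =
    withContext(Dispatchers.IO) {
        Parser.default().parse(StringBuilder(json)) as JsonObject
    }
```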