# ktor
p
Hi there - we just got hit hard by this issue: https://youtrack.jetbrains.com/issue/KTOR-6462/Ktor-clients-and-servers-should-use-Dispatchers.IO.limitedParallelism...-wherever-possible The related change went into 2.3.5 here: https://github.com/ktorio/ktor/pull/3748 Pinning all client connections to the bounded IO dispatcher is an odd approach because it forces customers to architect around it. An alternative would be to make the dispatcher configurable so that customers can control the pool constraints themselves. How can we escalate this?
2
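To make the concern above concrete, here is a minimal Kotlin sketch (not Ktor's actual internals; the limits of 64 and 256 are illustrative) showing how a `Dispatchers.IO.limitedParallelism(...)` view behaves when blocking work lands on it, and how a configurable dispatcher would let the application choose its own constraints:

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    // Dispatchers.IO is bounded: 64 threads by default (or the number of cores,
    // whichever is larger), tunable via the kotlinx.coroutines.io.parallelism
    // system property. A limitedParallelism view narrows it further.
    val engineDispatcher = Dispatchers.IO.limitedParallelism(64)

    // If blocking calls (JDBC, DNS, file I/O, ...) run on the same view, every
    // slot can end up occupied by a sleeping thread and new work queues behind it.
    repeat(64) {
        launch(engineDispatcher) {
            Thread.sleep(500) // simulated blocking I/O holding a thread
        }
    }

    // A configurable dispatcher, as suggested above, would let the application
    // size the pool for its own workload instead of architecting around a fixed one.
    // (Views over Dispatchers.IO may exceed the default 64-thread limit.)
    val appSizedDispatcher = Dispatchers.IO.limitedParallelism(256)
    launch(appSizedDispatcher) {
        println("work scheduled on a pool the application controls")
    }
}
```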
@Jilles van Gurp just saw your thread on this cc @Oliver.O
s
We had good success putting nginx in front of Ktor many years ago. Haven't had an issue since.
o
Good to hear that you had success that way. However, it doesn't necessarily solve all the cases discussed here, and throwing more moving parts at a problem increases complexity; mileage varies. In this case, there is an immediate solution to the problem at hand in Ktor, and as the ticket status shows, it is being worked on right now. The underlying cause, of course, is blocking I/O (including network I/O and all sorts of databases). With a true coroutines-based I/O solution, there would be no need to fire up an excessive number of threads that consume resources but spend most of their time sleeping. Once that option exists, it would be the ideal way to scale up easily and efficiently.
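As a rough illustration of that point (a sketch only; `blockingQuery` and `suspendingQuery` are hypothetical placeholders standing in for a JDBC-style driver and a truly non-blocking one), the difference comes down to whether a thread is held while waiting:

```kotlin
import kotlinx.coroutines.*

// Hypothetical stand-ins: a blocking driver call vs. a suspending, non-blocking one.
fun blockingQuery(): String { Thread.sleep(100); return "row" }
suspend fun suspendingQuery(): String { delay(100); return "row" }

fun main() = runBlocking {
    // Blocking I/O: each in-flight call pins one Dispatchers.IO thread that is
    // mostly asleep, so concurrency is capped by the size of the thread pool.
    val blocking = (1..200).map {
        async(Dispatchers.IO) { blockingQuery() }
    }.awaitAll()

    // Suspending I/O: the coroutine releases its thread while waiting, so the
    // same 200 concurrent calls need only a handful of threads.
    val suspending = (1..200).map {
        async { suspendingQuery() }
    }.awaitAll()

    println("got ${blocking.size} + ${suspending.size} results")
}
```

With the suspending variant, scaling concurrency means launching more coroutines rather than provisioning more threads, which is the efficiency the message above refers to.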
s
Sure, I mentioned it only as a workaround. We found it to be a critical component of running a top-1000 website, since Ktor will never be as hardened as nginx.
o
I appreciate people sharing solutions here; that's how we all benefit from each other. The hard part in selecting one is context: we might take something for granted that simply doesn't apply to others. I don't see evidence supporting the hypothesis that nginx is any more "hardened" than Ktor already is. I have actually seen solid data showing that Ktor is robust at scale (unfortunately, I can't share details for confidentiality reasons).
👍 1