# http4k
m
Is there any way to limit the number of concurrently handled incoming requests with an http4k server? (I am currently using the Netty server.)
s
http4k doesn't handle that aspect of the runtime. You'll have to refer to what Netty (or any other server backend) offers.
A quick Google search suggests that may not be possible in Netty. In Jetty, I remember it could be achieved by configuring its internal executor thread pool.
m
OK. Has anyone done something like that with any of the backends?
s
Yes, in Jetty, that can be achieved with something like:
```kotlin
httpHandler.asServer(Jetty(port, Server(QueuedThreadPool(100))))
```
f
@Mikael Ståldal Have a look at https://www.http4k.org/guide/reference/resilience4j or https://www.http4k.org/guide/reference/failsafe/ . You can set up a Bulkhead, which will control the number of concurrent requests.
s
👆 That’s a good alternative 🙂
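As a sketch of what such a bulkhead does (hypothetical names, plain Kotlin rather than http4k's or resilience4j's actual APIs): a `Semaphore` caps the number of in-flight requests, and anything beyond the cap is rejected immediately instead of queueing.

```kotlin
import java.util.concurrent.Semaphore

// Hypothetical modeling for this sketch: a handler is just a function
// from a request string to a status code (not http4k's HttpHandler).
typealias Handler = (String) -> Int

// Wraps a handler so at most maxConcurrent requests run at once;
// requests arriving while the bulkhead is full get 503 immediately.
fun limitConcurrency(maxConcurrent: Int, next: Handler): Handler {
    val permits = Semaphore(maxConcurrent)
    return { request ->
        if (permits.tryAcquire()) {
            try {
                next(request)      // handle normally inside the bulkhead
            } finally {
                permits.release()  // always free the slot
            }
        } else {
            503                    // Service Unavailable: bulkhead full
        }
    }
}
```

A real resilience4j Bulkhead adds more (wait timeouts, metrics, events), but the rejection semantics are essentially this.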
m
Seems to be possible with Netty if you configure the workerGroup with a custom size: https://github.com/http4k/http4k/blob/master/http4k-server/netty/src/main/kotlin/org/http4k/server/Netty.kt#L38
```kotlin
private val workerGroup = NioEventLoopGroup(4)
```
Candidate for a parameter to Netty, perhaps?
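The effect of a bounded worker pool can be illustrated with plain JDK executors (no Netty involved; `measurePeakConcurrency` is a made-up name for this sketch): work submitted beyond the pool size queues up, so the number of tasks handled concurrently never exceeds the pool size.

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.Executors
import java.util.concurrent.atomic.AtomicInteger

// Illustrative only: a fixed-size pool (like NioEventLoopGroup(4)) caps
// how many tasks run at once; extra submissions wait in the queue.
fun measurePeakConcurrency(poolSize: Int, tasks: Int): Int {
    val pool = Executors.newFixedThreadPool(poolSize)
    val inFlight = AtomicInteger(0)
    val peak = AtomicInteger(0)
    val done = CountDownLatch(tasks)
    repeat(tasks) {
        pool.execute {
            val now = inFlight.incrementAndGet()
            peak.updateAndGet { p -> maxOf(p, now) } // record high-water mark
            Thread.sleep(20)                         // simulate request handling
            inFlight.decrementAndGet()
            done.countDown()
        }
    }
    done.await()
    pool.shutdown()
    return peak.get()
}
```

Note the difference from a bulkhead: a bounded pool queues excess work, while a bulkhead rejects it outright.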
s
We usually don’t parameterise backend server config unless there’s a benefit to supporting it across all of them. There are other things, like request timeouts, that fall into the same category. Another complication is that we don’t want to add features without testing them, and those tend to be way more complicated to test.
m
But you support it for Jetty?
s
Not as part of our APIs. We allow creating a server with a custom Jetty instance.
m
OK, so what about this then:
```kotlin
class Netty(val port: Int = 8000, override val stopMode: StopMode, workerGroup: NioEventLoopGroup = NioEventLoopGroup())
```
s
That should be fine. I wonder if we should parameterise ServerBootstrap instead, but that’s a bigger change, so your suggestion is probably enough for now.
m
(There are also other aspects of the workerGroup you might want to customize, such as a ThreadFactory to give threads nice names.)
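The naming idea can be sketched with a plain `java.util.concurrent.ThreadFactory` (`NamedThreadFactory` is a hypothetical name for this sketch, not an http4k or Netty class): the kind of factory you might hand to an event-loop group if it were parameterised.

```kotlin
import java.util.concurrent.ThreadFactory
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical sketch: a ThreadFactory that gives worker threads
// recognizable names like "netty-worker-1", "netty-worker-2", ...
class NamedThreadFactory(private val prefix: String) : ThreadFactory {
    private val counter = AtomicInteger(1)
    override fun newThread(r: Runnable): Thread =
        Thread(r, "$prefix-${counter.getAndIncrement()}")
}
```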
s
If you want to contribute a PR, we can review and merge it.
m
I'll give it a go.
s
Thank you!
m
So @s4nchez encourages me to make a PR, and then @dave immediately rejects it? https://github.com/http4k/http4k/pull/972#issuecomment-1702871056
d
Lol. I hadn’t seen this thread. Will discuss it between us. 🙃
s
Hi @Mikael Ståldal, sorry about that. I forgot that Dave and I agreed a while ago not to make our server backends more customisable (and Jetty is actually an exceptional case we want to get rid of):
> Each of the server backends implements an interface ServerConfig, which is written with sensible defaults for the server in question, but is also designed to be used as a starting point for tweaking to API user needs. To customize, simply use the relevant ServerConfig class as a starting point and reimplement as required. See the how-to guides for an example of this in use.
The way to go for your use case is to create a custom ServerConfig (e.g. NettyWithLimitedConcurrency), which you can configure with the parameters you need. Again, please accept my apologies for leading you down the wrong path.
m
OK. Maybe document somehow that the Jetty configurability is not an example to be followed?
s
The snippet above comes from the docs already (here). We'll try to find a way to remove the Server parameter from Jetty in the future. Meanwhile, I'll stop recommending its use like I did above, to avoid this kind of confusion.
m
Maybe add a comment in the code for the Jetty server parameter?