# http4k
s
Hello everyone. I'm trying http4k for a project and have a question I couldn't find answered in the docs. 1. How is blocking code handled in http4k? 2. What's the relationship between a request and a thread when I'm using Netty? 3. Does http4k provide a construct for executing asynchronous blocking and non-blocking calls?
f
1. All calls are handled in a blocking fashion (thread-per-request model): https://www.http4k.org/faq/ 2. Requests are processed on a thread from the Netty worker pool (AFAIK) 3. See 1, but there is also the concept of an async HTTP client: https://www.http4k.org/guide/modules/clients/
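Thread-per-request just means each request occupies one worker thread for its whole lifetime, blocking calls included. A stdlib-only sketch of that model (the pool size, request count, and timings here are illustrative, not http4k's actual configuration):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical stand-in for a thread-per-request server: each "request"
// runs start-to-finish on one pooled worker thread, so a blocking call
// (e.g. a JDBC query) simply occupies that worker until it returns.
fun handleAll(requests: Int, workers: Int): Int {
    val pool = Executors.newFixedThreadPool(workers)
    val handled = AtomicInteger(0)
    repeat(requests) {
        pool.submit {
            Thread.sleep(20) // simulate a blocking database call
            handled.incrementAndGet()
        }
    }
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    return handled.get()
}

fun main() {
    // 8 blocking "requests" over 4 workers: all complete, in two waves
    println(handleAll(requests = 8, workers = 4))
}
```

The point of the model: blocking is safe, it just ties up one worker, so throughput is bounded by the worker count divided by the blocking time per request.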
s
I'm not asking about clients, but about how the server maps requests to threads. If it's Netty, shouldn't it not be one thread per request?
d
@sahil Lone you can find the implementation of the Netty backend here - it's not very complicated if you're familiar with the Netty APIs. https://github.com/http4k/http4k/blob/master/http4k-server-netty/src/main/kotlin/org/http4k/server/Netty.kt
s
@dave I can see it doesn't change any of Netty's implementation, so it's not one thread per request. But since http4k doesn't have any blocking-call handler, does that mean my database calls will block the event loop?
d
you're correct for netty - the default implementation is written to use the `workerGroup` `NioEventLoopGroup` (which is set as the childGroup in the bootstrap setup and uses the default number of threads from the factory). The implementations of `ServerConfig` are written so they can be tweaked to the user's needs if required (in this case, just reuse the `Http4kChannelHandler`).
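For reference, when `NioEventLoopGroup` is constructed without an explicit thread count, Netty's `MultithreadEventLoopGroup` derives the default from the `io.netty.eventLoopThreads` system property, falling back to twice the available processors. A stdlib-only reconstruction of that sizing logic (this mirrors Netty's default rather than calling into Netty):

```kotlin
// Approximation of Netty's default event-loop thread count:
// max(1, io.netty.eventLoopThreads or 2 * available processors).
fun defaultWorkerThreads(): Int {
    val fromProperty = System.getProperty("io.netty.eventLoopThreads")?.toIntOrNull()
    return maxOf(1, fromProperty ?: (Runtime.getRuntime().availableProcessors() * 2))
}

fun main() {
    println("Netty-style default worker count: ${defaultWorkerThreads()}")
}
```

So on a 6-core box the default worker pool would be around 12 threads, which is the ceiling on concurrently executing blocking handlers unless you tune the group yourself.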
also, as a rule, we generally tend to use the `Undertow` or `ApacheServer` implementations by default, or `Jetty` if we need websockets
s
Just one last question: if I block all the worker group threads, will I still be able to accept requests on the server but not process them, or will I simply not be able to accept requests at all?
d
I'm definitely not an expert on Netty internals, but the SO_BACKLOG param is defined as: "The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused."
Apart from that limit, things will just block waiting for a free worker. If performance really is a concern, I suggest taking a look at the TechEmpower benchmarks to see which tech best fits your requirements: https://www.techempower.com/benchmarks/#section=data-r17&hw=ph&test=fortune&l=xan9tr-1
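That two-stage behaviour (accepted-but-queued, then refused) can be imitated with a plain `ThreadPoolExecutor`: a bounded queue plays the role of the backlog, so while every worker is blocked new work is still accepted into the queue, and only once the queue is also full are further submissions refused. This is an analogy for illustration, not Netty's actual accept path:

```kotlin
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.CountDownLatch
import java.util.concurrent.RejectedExecutionException
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

// Fixed workers + bounded queue: accepted work queues while all workers
// are blocked; once the queue is also full, submissions are refused.
fun simulate(workers: Int, backlog: Int, submissions: Int): Pair<Int, Int> {
    val release = CountDownLatch(1)
    val pool = ThreadPoolExecutor(
        workers, workers, 0L, TimeUnit.MILLISECONDS,
        ArrayBlockingQueue(backlog)
    )
    var accepted = 0
    var refused = 0
    repeat(submissions) {
        try {
            pool.submit { release.await() } // every task blocks its worker
            accepted++
        } catch (e: RejectedExecutionException) {
            refused++
        }
    }
    release.countDown() // unblock the workers so the pool can drain
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    return accepted to refused
}

fun main() {
    // 2 workers + backlog of 3 absorb 5 tasks; the 6th and 7th are refused
    println(simulate(workers = 2, backlog = 3, submissions = 7)) // (5, 2)
}
```

In other words: yes, with all workers blocked the server keeps accepting up to the backlog, and beyond that connections are refused.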
... although to be honest, for 99% of all apps out there, performance is NOT actually a real concern and just something that people fixate on because they read some crap on the internet about webscale. 🙂 Developer time is expensive - it's generally much cheaper to just rent another server than to have to code against a complicated API.
s
@dave Looks like a good starting point. I will update you once I have something
I'm trying to port from Vert.x to http4k. Yes, performance is an issue for the load we have
d
what kind of load are you expecting?
s
Right now we have 6 six-core servers at ~3 GHz taking a load of 250K req/s. There is a hit in the system to a distributed key/value store, so we are trying to port to something light and get away from threads bound to verticles.
Something like Scala futures and schedulers with light server handling
d
Well, be sure to let us know how you get on with your efforts 🙃