# ktor
h
Hello Ktor Team, I was wondering if you could shed some light on optimisations for the Netty server, more specifically regarding the number of available connections. We are currently running a Netty server application on an EC2 instance (c5.large), and I noticed that whenever the upstream takes a long time to process some of the requests, Ktor seems to run out of connections and our application can't receive more load, which results in 504s at the Load Balancer level. Having a look at this channel, I noticed it was suggested to tweak the following parameters:
```kotlin
/**
 * Size of the queue to store [ApplicationCall] instances that cannot be immediately processed
 */
public var requestQueueLimit: Int = 16

/**
 * Number of concurrently running requests from the same http pipeline
 */
public var runningLimit: Int = 32
```
I would like to know if you have any suggestions on the best approach to test / update these parameters. We are currently running Ktor 2.0.3. Thanks for your help.
Here is an example of the AWS monitoring for our LB
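For context, these two settings live on the Netty engine configuration. A minimal sketch of where they are set, assuming Ktor 2.x and the `embeddedServer(Netty, ...)` entry point (the port and route below are placeholders, not taken from the thread):

```kotlin
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty
import io.ktor.server.response.respondText
import io.ktor.server.routing.get
import io.ktor.server.routing.routing

fun main() {
    embeddedServer(
        Netty,
        port = 8080, // placeholder port
        configure = {
            // Netty engine settings discussed in this thread (defaults shown above)
            requestQueueLimit = 16
            runningLimit = 32
        }
    ) {
        routing {
            get("/") { call.respondText("ok") }
        }
    }.start(wait = true)
}
```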
e
Hey @Helio, thanks for the report. @Rustam Siniukov, could you take a look? It looks similar to the issue we had with depth.
h
No worries, @e5l! Please let me know if you need any extra information and I will be glad to assist. =)
Hello, I was wondering if you have had a chance to look into it yet. 👀
r
Hi! Sorry for the delay.
`requestQueueLimit` is not used in the current setup, so the value there can be ignored.
`runningLimit` should be configured depending on the nature of your server. If you have a large number of relatively small requests, you can increase it to 64 or larger. This limit exists to avoid failing with an OOM when too many requests are happening simultaneously. If you have enough memory, then it should be no problem to make it larger.
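Applied to the configuration quoted earlier, that suggestion would look roughly like the sketch below (assuming the same `embeddedServer(Netty, ...)` setup; 64 is just the value mentioned above, not a universal recommendation):

```kotlin
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    embeddedServer(
        Netty,
        port = 8080, // placeholder
        configure = {
            // More concurrent requests per HTTP pipeline, as suggested above.
            // Only raise this if the heap can absorb the extra in-flight requests.
            runningLimit = 64
        }
    ) {
        // application module goes here
    }.start(wait = true)
}
```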
h
Thanks so much @Rustam Siniukov. Is there any way to predict how much memory it will use if I update that value to 64? I'm actually curious to understand how to manage that metric.
Also, I was wondering if you could explain a little better what changing that value means. For example, if we double it, what does that mean in terms of simultaneous connections to our server?
r
Memory consumption is really hard to predict without knowing your config, which plugins you use, the type of requests, etc.
`runningLimit` controls how many requests can run simultaneously in a single HTTP pipeline. It doesn't affect the number of connections, but how many requests are handled within one connection.
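As a very rough, hypothetical illustration of that point (not a Ktor API, and every number below is made up): the worst case for in-flight requests is roughly the number of open connections times `runningLimit`, so memory pressure scales with that product times the average per-request footprint.

```kotlin
// Back-of-envelope only; measure under real load rather than trusting these numbers.
fun main() {
    val openConnections = 1_000               // hypothetical peak connection count
    val runningLimit = 64                     // proposed per-pipeline limit
    val perRequestFootprintBytes = 64 * 1024L // hypothetical buffers/state per request

    val worstCaseBytes = openConnections * runningLimit * perRequestFootprintBytes
    println("Worst-case extra heap ~ ${worstCaseBytes / (1024 * 1024)} MiB")
}
```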
h
That's absolutely fair regarding the memory... I may need to explore a little bit. I will probably need to do some reading on the HTTP pipelining concept. Thanks once again for your help, really appreciate it.
👍 1