a
Does OkHttp have a way of limiting the maximum number of connections used? I see ConnectionPool allows configuring maxIdleConnections but says nothing of maximum total connections allowed to be created/active. I also am not seeing an equivalent of a connectAcquireTimeout...
I suppose a rough equivalent would be Dispatcher.maxRequests?
j
Yep, Aaron’s right. You can limit connections by limiting how many threads are interacting with OkHttp, either via that dispatcher API or by sizing your own thread pools
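A minimal sketch of the second approach (sizing your own thread pool for blocking calls), assuming OkHttp 4.x; the pool size and URLs are just illustrative:

```kotlin
import java.util.concurrent.Executors
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()

    // At most 4 threads can be inside Call.execute() at once, so at most
    // ~4 connections are actively in use (idle pooled connections aside).
    val pool = Executors.newFixedThreadPool(4)

    val urls = listOf(
        "https://example.com/a",
        "https://example.com/b",
    )

    urls.forEach { url ->
        pool.execute {
            client.newCall(Request.Builder().url(url).build()).execute().use { response ->
                println("$url -> ${response.code}")
            }
        }
    }

    pool.shutdown()
}
```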
a
Thanks Jesse. Is there any timeout control on waiting for a connection (alternatively how long a request is allowed to be queued waiting to execute would be an approximation)? FYI the context for all these questions is that OkHttp is currently the default engine for generated service clients in the AWS SDK for Kotlin. I'm evaluating HTTP capabilities to ensure we have enough control for more niche/nuanced use cases.
j
I think of it like this: if the calling code is already spending a thread to make a blocking call (Call.execute), that runnable has already spent whatever time it needed waiting in a queue to be scheduled
And so OkHttp does not delay synchronous calls ever
If the calling code is not already spending a thread (Call.enqueue), then it is OkHttp’s job to balance latency vs. resource consumption. It implements this via the tuning parameters on Dispatcher
You configure how many concurrent HTTP calls you allow, and also how many concurrent calls to any particular domain name, and OkHttp won’t exceed those limits
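For reference, a minimal sketch of those Dispatcher knobs, assuming OkHttp 4.x (the limits shown are only illustrative):

```kotlin
import okhttp3.Dispatcher
import okhttp3.OkHttpClient

// Illustrative values: cap total concurrent calls at 32 and concurrent calls
// to any single host at 8. Calls beyond these limits wait in the dispatcher's
// queue. These limits apply to enqueue(), not to blocking execute() calls.
val dispatcher = Dispatcher().apply {
    maxRequests = 32
    maxRequestsPerHost = 8
}

val client = OkHttpClient.Builder()
    .dispatcher(dispatcher)
    .build()
```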
Finally, if you want finer control over scheduling, you can have that but you gotta build it on top of OkHttp
For example, if you wanna limit time-in-queue, or you want a custom call priority system, or you want per-domain policies, those are all reasonable requirements!
But then you gotta just make those decisions and then tell OkHttp about em
By not calling enqueue/execute until you want it to do the work
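One way to read this: keep your own queue and decide when to hand calls to OkHttp. A rough sketch of that idea (the class and parameter names are hypothetical, and a time-in-queue limit is used as the example policy):

```kotlin
import java.io.IOException
import java.util.ArrayDeque
import okhttp3.Call
import okhttp3.Callback
import okhttp3.Response

// Hypothetical wrapper: calls sit in our own queue and are only handed to
// OkHttp via enqueue() when a slot frees up. Anything that has waited longer
// than maxQueueMillis is failed instead of executed.
class SimpleCallQueue(private val maxQueueMillis: Long, private val maxInFlight: Int) {
    private class Pending(val call: Call, val callback: Callback, val enqueuedAtMillis: Long)

    private val waiting = ArrayDeque<Pending>()
    private var inFlight = 0

    @Synchronized
    fun submit(call: Call, callback: Callback) {
        waiting.add(Pending(call, callback, System.currentTimeMillis()))
        pump()
    }

    @Synchronized
    private fun onCallFinished() {
        inFlight--
        pump()
    }

    private fun pump() {
        while (inFlight < maxInFlight) {
            val next = waiting.poll() ?: return
            if (System.currentTimeMillis() - next.enqueuedAtMillis > maxQueueMillis) {
                next.callback.onFailure(next.call, IOException("Expired in queue before execution"))
                continue
            }
            inFlight++
            // Wrap the caller's callback so the next queued call starts when this one finishes.
            next.call.enqueue(object : Callback {
                override fun onResponse(call: Call, response: Response) {
                    try {
                        next.callback.onResponse(call, response)
                    } finally {
                        onCallFinished()
                    }
                }

                override fun onFailure(call: Call, e: IOException) {
                    try {
                        next.callback.onFailure(call, e)
                    } finally {
                        onCallFinished()
                    }
                }
            })
        }
    }
}
```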
My top-level advice is if you deliberately manage your app’s own thread pool sizes AND OkHttp’s dispatcher parameters, you’ll get good enough behaviour: it won’t exhaust resources and it’ll also have reasonable latency
The next fancier step is to use EventListener with New Relic or Datadog or SignalFX to get visibility into what’s actually happening in the system. Are calls waiting on connections? Is that appropriate?
(This reminds me that we need an EventListener event for when a call is enqueued!)
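A small sketch of that kind of visibility, assuming OkHttp 4.x: an EventListener that measures how long each call waits between callStart and connectionAcquired (the class name is made up, and wiring into Datadog/New Relic/etc. is left out):

```kotlin
import java.util.concurrent.ConcurrentHashMap
import okhttp3.Call
import okhttp3.Connection
import okhttp3.EventListener
import okhttp3.OkHttpClient

// Sketch: record how long each call spends between callStart and
// connectionAcquired, i.e. roughly "time spent waiting for a connection".
class ConnectionWaitListener : EventListener() {
    private val startNanos = ConcurrentHashMap<Call, Long>()

    override fun callStart(call: Call) {
        startNanos[call] = System.nanoTime()
    }

    override fun connectionAcquired(call: Call, connection: Connection) {
        val started = startNanos.remove(call) ?: return
        val waitedMillis = (System.nanoTime() - started) / 1_000_000
        // Replace with your metrics pipeline (Datadog, New Relic, etc.).
        println("${call.request().url} waited ${waitedMillis}ms for a connection")
    }
}

val client = OkHttpClient.Builder()
    .eventListener(ConnectionWaitListener())
    .build()
```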
And also all of the above would make a nice document on our site, we need to do that too
a
Appreciate your insights. I'm just seeing what's possible right now with OkHttp as the backing engine (it isn't the only one we support but it is currently the default).
> My top-level advice is if you deliberately manage your app’s own thread pool sizes AND OkHttp’s dispatcher parameters, you’ll get good enough behaviour: it won’t exhaust resources and it’ll also have reasonable latency
Reasonable behavior isn't (really) the goal; we're trying to have predictable characteristics (most notably in dealing with failure scenarios and mitigating widespread outages). Some configuration parameters won't map cleanly (or at all) to some underlying engines; I'm just trying to understand what those parameters are right now w.r.t. OkHttp. I may have some additional questions, if that's OK, as I try to map out OkHttp's capabilities.
> And also all of the above would make a nice document on our site, we need to do that too
Agreed. It would also be nice to have a request/response lifecycle diagram (e.g. being able to answer when a call is assigned a connection: is it after it's pulled from the queue? before? etc.).
Reviving this thread... is there a way to actually enforce the maximum connections used by OkHttp using any of the above (custom dispatcher, thread pool, existing tuning parameters, etc.)? From what I can see the answer is no, but I want to make sure I'm not missing something obvious.
j
If you make a blocking call with OkHttp, it’ll always open a connection if it needs one
You can use an interceptor to count connections and stall or fail calls that would exceed a limit
There’s nothing built in, but there’s lots of ways to do it on top
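A rough sketch of that idea, assuming OkHttp 4.x: a Semaphore-gated interceptor that caps in-flight calls as an approximation of max connections. The class name, limit, and timeout below are made up, and note the connection is really held until the response body is closed, which happens after this interceptor returns:

```kotlin
import java.io.IOException
import java.util.concurrent.Semaphore
import java.util.concurrent.TimeUnit
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Rough sketch: cap how many calls are in flight at once, failing calls that
// can't get a permit within a timeout. This approximates "max connections".
class ConnectionLimitInterceptor(
    maxConcurrent: Int,
    private val acquireTimeoutMillis: Long,
) : Interceptor {
    private val permits = Semaphore(maxConcurrent)

    override fun intercept(chain: Interceptor.Chain): Response {
        if (!permits.tryAcquire(acquireTimeoutMillis, TimeUnit.MILLISECONDS)) {
            throw IOException("Timed out waiting for a connection permit")
        }
        try {
            return chain.proceed(chain.request())
        } finally {
            permits.release()
        }
    }
}

val client = OkHttpClient.Builder()
    .addInterceptor(ConnectionLimitInterceptor(maxConcurrent = 16, acquireTimeoutMillis = 5_000L))
    .build()
```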
a
Thanks for the response Jesse. We are using the Kotlin async extension, which calls enqueue. Does an interceptor have enough information to know when a connection is available again? I suppose an event listener/connection listener could signal it 🤔 I get that OkHttp is trying to automatically balance performance and resources, but this seems like such a common use case that I'm surprised there isn't a setting to do it (even if it's not the default). Though, I think I would prefer to be able to just plug in a custom connection pool implementation, and that would solve all this as well (and then some). Is there any reason the ConnectionPool isn't an interface and has to be a concrete implementation that only has a few tuning parameters?
j
If you’re using enqueue then it’s easy. Just tune the okhttp3.Dispatcher object