I'm writing a logger using an `ApolloInterceptor` ...
# apollo-kotlin
e
I'm writing a logger using an `ApolloInterceptor` and there seems to be a lot of overhead for the non-network execution. I have an OkHttp `HttpLoggingInterceptor` installed as well, and it is showing me ~100ms RTT using
```kotlin
val startNs = System.nanoTime()
val response = chain.proceed(request)
val tookMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs)
```
When I do the same thing in the `ApolloInterceptor` it takes ~600ms:
```kotlin
val startNs = System.nanoTime()
chain.proceed(request).onEach { response ->
    val tookMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs)
    ...
}
```
Is that kind of overhead expected?
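For context, a minimal self-contained version of the interceptor being described might look like this. This is a sketch assuming apollo-kotlin 3.x, where `ApolloInterceptor.intercept` returns a `Flow`; the class name and log message are made up:

```kotlin
import com.apollographql.apollo3.api.ApolloRequest
import com.apollographql.apollo3.api.ApolloResponse
import com.apollographql.apollo3.api.Operation
import com.apollographql.apollo3.interceptor.ApolloInterceptor
import com.apollographql.apollo3.interceptor.ApolloInterceptorChain
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.onEach
import java.util.concurrent.TimeUnit

// Logs the time from entering the Apollo chain until each response is emitted.
class TimingInterceptor : ApolloInterceptor {
    override fun <D : Operation.Data> intercept(
        request: ApolloRequest<D>,
        chain: ApolloInterceptorChain
    ): Flow<ApolloResponse<D>> {
        val startNs = System.nanoTime()
        return chain.proceed(request).onEach {
            val tookMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs)
            println("${request.operation.name()} took ${tookMs}ms")
        }
    }
}
```

Note that this measures the whole Apollo chain, not just the HTTP round trip.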
s
Isn't the Apollo chain strictly bigger in this context? The Apollo chain might include the cache and any other Apollo-only things, while the OkHttp interceptor would wrap only the last Apollo interceptor, which does the networking. So for a chain like this (copied from an old discussion we had here for the app I'm working on):
```
0 = {LoggingInterceptor@44117}
1 = {LogoutOnUnauthenticatedInterceptor@44118}
2 = {WatcherInterceptor@44119}
3 = {FetchPolicyRouterInterceptor@44120}
4 = {ApolloCacheInterceptor@44121}
5 = {DefaultRetryOnErrorInterceptorImpl@44122}
6 = {NetworkInterceptor@44123}
```
Your Apollo interceptor would start the timer before 0 starts and stop it after 6 ends, while the OkHttp one would start counting just before 6 starts and stop after 6 ends. That's the first thing that comes to my mind. It's definitely possible that I have a wrong understanding of where the OkHttp request starts and ends, but what do you think, does this explanation make sense here?
m
+1. I'd add that the culprit is probably `ApolloCacheInterceptor`, especially if you're using a SQL cache. Writing a large response to disk can be slow. If you get that kind of result without a cache, that's unexpected.
e
That was my first thought, so I disabled the cache, and I was still getting those results. I'll run another test now just to make sure.
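(For anyone reproducing this test: assuming the apollo-kotlin normalized cache artifact is in use, the cache can also be bypassed per call rather than removed from the client. `MyQuery` here is a hypothetical generated query class:)

```kotlin
import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.FetchPolicy
import com.apollographql.apollo3.cache.normalized.doNotStore
import com.apollographql.apollo3.cache.normalized.fetchPolicy

suspend fun runWithoutCache(apolloClient: ApolloClient) {
    val response = apolloClient.query(MyQuery())
        .fetchPolicy(FetchPolicy.NetworkOnly) // skip cache reads
        .doNotStore(true)                     // skip cache writes too
        .execute()
    println(response.data)
}
```

With both reads and writes disabled, the timing should reflect only the network + parsing path.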
s
In your builder, can you try adding that interceptor last, so it comes after as many of your own interceptors as possible? Afaik you can't control all of them, but you can at least make sure it's added at the bottom of your own interceptors.
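A sketch of that ordering with `ApolloClient.Builder`; the interceptor names here are placeholders for the app's own interceptors and the timing logger:

```kotlin
import com.apollographql.apollo3.ApolloClient

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://example.com/graphql")
    .addInterceptor(AuthInterceptor())   // hypothetical app interceptor
    .addInterceptor(RetryInterceptor())  // hypothetical app interceptor
    .addInterceptor(TimingInterceptor()) // the logger, added last among your own
    .build()
```

App-supplied interceptors run in the order they are added, so the last-added one sits closest to the built-in part of the chain.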
e
I removed all other interceptors in order to test this.
👍 1
m
Wait, your OkHttp interceptor is only measuring the time to receive the response status + headers, not the body.
The `ApolloInterceptor` includes reading + parsing the body.
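To make the OkHttp number comparable, the interceptor has to force a full body read. One way is a sketch like the following, using `peekBody`, which buffers a copy so downstream consumers can still read the body (beware: it holds the whole body in memory):

```kotlin
import okhttp3.Interceptor
import okhttp3.Response
import java.util.concurrent.TimeUnit

class BodyTimingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val startNs = System.nanoTime()
        val response = chain.proceed(chain.request())
        val headersMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs)
        // Pull the entire body off the wire so the timing covers it.
        val bodyBytes = response.peekBody(Long.MAX_VALUE).bytes().size
        val totalMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs)
        println("headers after ${headersMs}ms, $bodyBytes-byte body after ${totalMs}ms")
        return response
    }
}
```

`HttpLoggingInterceptor` at `Level.BODY` does something similar, which is why its body-inclusive timings are the right ones to compare against.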
e
Ah, I might've copied the wrong code; I think they include the time to read the body in the final log.
👍 1
OK they log the time to receive the response status + headers and also log the time to read the body. In my most recent test OkHttp logged 120ms and Apollo logged 486ms.
I figured there would be some overhead for the parsing, etc... but that seemed a bit much.
m
How large is the response?
In general, JSON parsing should be on par with Moshi; the code was taken over from there. One difference is that fragments require "rewinding" the JSON stream and buffering the whole response before it's turned into classes again, but if your queries have no fragments it should be really similar, and the parsing should be amortized during IO.
e
There are no fragments and the payload is pretty small. I have been testing with error responses, so I'll try one that succeeds and see if there's any difference.
m
Another thing that comes to mind is classloading, i.e. the first time a request is executed it might be a bit slower.
Besides that, I'm not sure
e
The success took even longer (779ms), but maybe because there's more deserialization involved? Subsequent calls went quicker (~450ms)
m
As a comparison point, this is what I get with the Confetti conferences data on a Pixel 6a
```
I  --> POST https://confetti-app.dev/graphql (875-byte body)
I  <-- 200 https://confetti-app.dev/graphql (1157ms, 262214-byte body)
I  took 1554
```
So ~400ms of overhead for 262kB (1554ms total minus the 1157ms OkHttp reports).
Subsequent ones are faster:
```
I  --> POST https://confetti-app.dev/graphql (875-byte body)
I  <-- 200 https://confetti-app.dev/graphql (536ms, 262214-byte body)
I  took 663
```
So that's in the same range
w
Eliezer, are you measuring that on an Android device, or a regular JVM? I remember chasing some inefficiencies in the past on an Android device, which vanished when I measured an R8-optimized release build. In any case, you're both sharing numbers, so it'd be good to align on the testing methodology 😄
m
Ah yea good point, I'm in debug mode
Release mode:
```
--> POST https://confetti-app.dev/graphql (875-byte body)
<-- 200 https://confetti-app.dev/graphql (493ms, 262214-byte body)
took 631
--> POST https://confetti-app.dev/graphql (875-byte body)
<-- 200 https://confetti-app.dev/graphql (523ms, 262214-byte body)
took 748
--> POST https://confetti-app.dev/graphql (875-byte body)
<-- 200 https://confetti-app.dev/graphql (488ms, 262214-byte body)
took 611
--> POST https://confetti-app.dev/graphql (875-byte body)
<-- 200 https://confetti-app.dev/graphql (584ms, 262214-byte body)
took 690
```
e
True, I'm testing a debug build on an emulator. I'll try a release build
Timing remained the same on a release build. On a device, both debug and release builds were much slower than on the emulator (probably because of the network), but the ratio remains consistent.
And now the Apollo times are suddenly equal to the OkHttp times. Maybe it just took time for some optimizations to kick in?
m
I guess so...