# apollo-kotlin
e
Hello! I have a question about Sql cache: use a single transaction when using MemoryCache chaining #5840. Is that something that V3 could use as well, or it was only a problem (regression) in the V4 beta? 😄
👍 2
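For context, here's a minimal sketch of the idea behind #5840: merging a whole batch of records inside one SQL transaction instead of paying a commit per record. This is purely illustrative and not the actual apollo-kotlin implementation; it assumes a plain `records(key, record)` SQLite table accessed over JDBC (needs a sqlite-jdbc driver on the classpath).

```kotlin
import java.sql.DriverManager

data class Record(val key: String, val fields: String)

// Merge all records in a single transaction: one commit for the whole
// batch instead of one implicit commit per INSERT.
fun mergeAll(records: List<Record>, dbPath: String) {
    DriverManager.getConnection("jdbc:sqlite:$dbPath").use { conn ->
        conn.autoCommit = false
        conn.prepareStatement(
            "INSERT OR REPLACE INTO records(key, record) VALUES (?, ?)"
        ).use { stmt ->
            for (r in records) {
                stmt.setString(1, r.key)
                stmt.setString(2, r.fields)
                stmt.addBatch()
            }
            stmt.executeBatch()
        }
        conn.commit() // single fsync for the whole batch
    }
}
```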
b
Hi! The issue also exists in v3.
e
Oh 😅 Any chance that the fix will be backported and a patch version will be released? 😄 The truth is that we've been having a hunch that Apollo is a bit slow, but we just assumed that's just normal. For instance, from our performance monitoring traces, it looks like for merging a collection of ~230 records the median is ~0.7s, and the 90th percentile is over 3 seconds. So perhaps it's possible to make it faster? 😄
b
Hmm a release on the 3.x branch wasn't planned but it could be done 🙂 It would be interesting to know if this improves your perf - have you tried the v4 betas at all yet?
e
No, we haven't 😞 We sadly rely on a bunch of Apollo internals for our "repository" abstractions, so upgrading Apollo is always a pain, resulting in a lot of work. But I would be more than glad to share any new performance monitoring metrics if we were to update to a patch version with this fix backported 😄
👍 1
Btw, having a closer look at this, wouldn't it be possible to also optimise `MemoryCache#loadRecords` similarly to `SqlNormalizedCache#loadRecords`, meaning try to load all records from the memory cache first, and only afterwards load the missing ones from the next cache in one go? So it would do one big select query for all the missing keys, instead of many selects, one for each missing key. (Not sure how big of a speed-up this would bring though, but we have traces for `loadRecords` as well, so I could at least measure it in our app 😄)
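To make the suggestion concrete, a rough Kotlin sketch of that chaining idea follows; the interface and names here are made up for illustration and are not the apollo-kotlin API.

```kotlin
data class Record(val key: String, val fields: String)

interface Cache {
    fun loadRecords(keys: Collection<String>): List<Record>
}

class ChainedCache(
    private val memory: MutableMap<String, Record>,
    private val next: Cache,
) : Cache {
    override fun loadRecords(keys: Collection<String>): List<Record> {
        // First serve whatever the in-memory layer already has.
        val fromMemory = keys.mapNotNull { memory[it] }
        val missingKeys = keys - fromMemory.map { it.key }.toSet()
        if (missingKeys.isEmpty()) return fromMemory

        // One batched read for all misses (e.g. a single
        // SELECT ... WHERE key IN (...)) instead of one query per key.
        val fromNext = next.loadRecords(missingKeys)
        fromNext.forEach { memory[it.key] = it } // optionally warm the memory cache
        return fromMemory + fromNext
    }
}
```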
b
good point!
I'll have a look - unless you'd like to give it a go and open a PR?
e
I would like to, but realistically it's unlikely that I'll have the time to do it in the upcoming week. But I can give it a try later, if you don't get to it by then.
b
that's perfectly fine 🙂 I'll have a go at it, and thanks a lot for catching it!
🙏 1
a
Seeing any kind of improvement to v3 here would make my day. Please do link an issue I can subscribe to when you have one!
👍 1
b
@Eduard Boloș PR for the `loadRecords` chaining is here
😍 1
e
That was quick, thanks! Looking forward to seeing the changes in V3 as well 🙂
b
v3.8.4 is now available with the 2 fixes
🙌 1
😍 2
s
Out of interest, has anyone benchmarked any significant performance improvements from either PR?
b
For the 1st issue, we had an example query provided by a user and I could see a significant improvement (~20s down to ~1s). Haven't benchmarked the 2nd PR.
e
Nice! I was just looking at this, we don't have benchmark tests, but we tested manually, and we got mixed results 🤪 Writes are much faster, we went down from 1s to 0.3s, but the reads got slower, from 1s to 1.7s 😮 I guess the reason is that we read everything either from disk or from memory, as we load pretty much all data at once on app startup. So the actual read time stays the same, but now there's the additional overhead from the map operations, if that makes sense.
b
Interesting, I wouldn't expect the map to be significant, whereas executing one select instead of several should be an improvement.
🤔 1
e
Let me test with release builds as well, we tested on debug, and that might influence the results 🤷
👍 1
Ok, good news, perf improves on release builds 🙂 Down from 0.45s to 0.35s. It's not much, but it's honest work 😄
❤️ 1
b
thanks for the followup 🙏 Good to hear
e
Also confirming for merging records in release builds: down from 278ms to 62ms. All tested on a Pixel 4a. Our users with Samsung J6 and A03 Core will be so glad 😄
thank you very much for your work!
b
ahhh that's great to hear. Thank YOU for reporting, and confirming.
👍 1
e
We are going to make an app release soon containing this update, if you are interested I can report back in about a week and let you know what the actual numbers from our performance monitoring tools are.
👀 1
💯 1
b
yes! Very much interested 🙏
👍 1
e
A bit late (release dragged on and then I was on holidays), but I am back with some numbers 🙂 So for our largest query (200+ fields), merging the records showed a whopping 80-85% improvement 🤩 We don't have good tracing for loading records (as we can't differentiate between in-memory-only reads vs reads that hit the disk too), but on aggregate we saw a 30 to 50% improvement there as well. It goes without saying, we are extremely satisfied with the impact of these changes 😄 Thank you again for your great work!
👍 1
b
Wow this is so good to hear! 🎉 Thanks a lot! 🙏
👍 1