# apollo-kotlin
a
Hey, another question related to paginated caching artifacts. I've got my caching logic correct, so here is my workflow:
1. Watch a query with an initial `limit`/`offset`, using the `CacheAndNetwork` query strategy.
2. When the user scrolls / fetch next page is called, we query Apollo (no watch) for the next set of data.
3. On re-entry, correct: we receive an emission for the fully cached dataset (9 items). Strange: then we receive a final emission for the original query (say it's 5 items).
In this situation, would it then be best to just observe the cache only and do fetches at will instead (i.e. run the initial fetch in parallel as a plain query, like we do the next-page query)?
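A minimal sketch of the watch described above, assuming apollo-kotlin 3 package names and a hypothetical generated `CardsQuery(limit, offset)` operation with a `cards` list field (all placeholder names, not from the actual project):

```kotlin
import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.FetchPolicy
import com.apollographql.apollo3.cache.normalized.fetchPolicy
import com.apollographql.apollo3.cache.normalized.watch

// Watch the initial page with CacheAndNetwork and observe the emissions.
suspend fun observeCards(apolloClient: ApolloClient) {
    apolloClient.query(CardsQuery(limit = 5, offset = 0))
        .fetchPolicy(FetchPolicy.CacheAndNetwork)
        .watch()
        .collect { response ->
            // On re-entry the first emission is the fully cached, merged
            // dataset (9 items in the example above); a second emission then
            // arrives for this query's own network response (5 items).
            println("emission: ${response.data?.cards?.size} items")
        }
}
```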
b
Hmm, I wonder why you're seeing this, it's definitely not expected 🤔 Would you be able to share a bit of code, to be sure we're on the same page (no pun intended 🙂), and maybe write a repro test? In any case, yes, observing the cache on one side and updating it (initial or subsequent pages) on the other side is often a nice way to do this.
After all, I think what you're seeing with `CacheAndNetwork` may be expected, depending on when/how it's called. I made a little test here and you can see that you'd get first the value in the cache (without the next page), and then the value from the network (so, only the next page).
a
Our code is deep inside a custom framework we're building, but I can try to explain the concepts here:
1. The query uses `limit`/`offset` with a cursor-type pagination response, where `nextCardIndex` acts like a cursor; we use that as the `offset` for subsequent pages.
2. We wrote a field merger and metadata generator to handle the caching mechanics for our data (as it's not Relay-style).
3. It mostly works as expected, except in this one case.
4. We `watch` the initial query (`CacheAndNetwork`).
5. We fetch subsequent pages using `query()` with `NoCache`.
The `watch` acts as the source of truth for the full set of data.
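A tiny sketch of the cursor handling described in point 1, with hypothetical types (the real framework's shapes will differ): `nextCardIndex` from the last response becomes the `offset` of the next request.

```kotlin
// Hypothetical page shape returned by the query.
data class CardsPage(val cards: List<String>, val nextCardIndex: Int?)

const val PAGE_SIZE = 5

// Arguments (limit, offset) for the next page, or null when there is no more data.
fun nextPageArgs(lastPage: CardsPage): Pair<Int, Int>? =
    lastPage.nextCardIndex?.let { cursor -> PAGE_SIZE to cursor }
```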
b
That sounds good! And by the way thanks a lot for trying this experimental feature and the feedback (and sorry for the lack of documentation)! So yeah maybe it would make sense to change 4. to be the same as 5. then, instead of using CacheAndNetwork?
a
Yeah, so the fix on my end was to use `CacheOnly` on the `watch`, and `NoCache` on the query fetches for each page. The issue with `CacheAndNetwork` is that the last emission will be smaller than what the cache has, so we will always use the network to page in more content even if we already have it in the cache. My assumption was that a watch can ignore parameters with the pagination field policy, so shouldn't the subsequent `CacheAndNetwork` (network return) emission include both network and cache data merged?
b
I think `watch` with `CacheOnly` on one side and querying pages with `NetworkOnly` on the other side is the way to go. About `CacheAndNetwork` returning the cached (merged) value on the second emission: it could be done, but I think there are valid cases where you could actually want only the last page?
a
I think in a normalized query world (where we watch a query), maybe we should only allow cache observing in that case? What could happen in that scenario is you watch a query with parameters, then watch the same query with the next page's parameters: both watches will receive the full list of cached data prior to receiving the network response. Isn't that odd?
Maybe in this case, then, the cache response should only return that page's information and not the full data. Maybe the way I'm merging is wrong (we merge the lists into the same field)?
Maybe instead, we query and observe as normal, meaning the parameters get used (and stored) as expected, but a new kind of `Query` is generated for pagination, one that takes no parameters specifically to designate it as the cache observer.
Then the Apollo client would return all pages that had loaded in that pagination scheme.
b
I bet your merging is probably fine. The way I see it, if you have something like:
• one coroutine that queries the first page (either `CacheAndNetwork` or `NetworkOnly`) and `watch`es it. Your UI observes this.
• another coroutine that is launched every time you reach the end of the list, which queries the next page with `NetworkOnly`. You can ignore the returned data of this one: it's stored in the cache.
Both can be the same query, only the parameters change. Then the first one will be notified with the first page at first, and then with the whole data set (pages merged) when querying next pages. Does that make sense? Maybe I'm missing something 🙂
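A minimal sketch of that two-coroutine shape, again assuming apollo-kotlin 3 package names and the same hypothetical `CardsQuery` as above:

```kotlin
import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.FetchPolicy
import com.apollographql.apollo3.cache.normalized.fetchPolicy
import com.apollographql.apollo3.cache.normalized.watch
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch

class CardsPager(
    private val apolloClient: ApolloClient,
    private val scope: CoroutineScope,
    private val pageSize: Int = 5,
) {
    // Side 1: query the first page and watch it. The UI collects this flow;
    // it re-emits whenever the cached (merged) data changes.
    val cards = apolloClient.query(CardsQuery(limit = pageSize, offset = 0))
        .fetchPolicy(FetchPolicy.CacheAndNetwork)
        .watch()

    // Side 2: called when the list reaches its end. The returned data is
    // ignored; it only needs to land in the normalized cache, where the pages
    // are merged and the watch above gets notified.
    fun loadNextPage(offset: Int) {
        scope.launch {
            apolloClient.query(CardsQuery(limit = pageSize, offset = offset))
                .fetchPolicy(FetchPolicy.NetworkOnly)
                .execute()
        }
    }
}
```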
a
That's how I intended it to be. `CacheAndNetwork` is the one going wrong on the first query (the original issue): it returns the full set of data (expected) on `watch`, but then receives another emission for its network call, without the cached data. That causes the UI to only show the first 5 items (rather than all 15, for example), and by paging down we then always have to re-query more data.
b
💡 All right, getting it now 🙂 Would it make sense to use `CacheOnly` for the first one (so no network call, just observe the cache) and call the other one manually with page 1 when opening the screen?
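Relative to the `CardsPager` sketch above, only two things would change under this suggestion (still assuming the same hypothetical names):

```kotlin
// The watched call never goes to the network; it just observes the cache.
val cards = apolloClient.query(CardsQuery(limit = pageSize, offset = 0))
    .fetchPolicy(FetchPolicy.CacheOnly)
    .watch()

// Page 1 is requested explicitly when the screen opens, exactly like any
// other page (loadNextPage is the NetworkOnly helper from the sketch above).
fun onScreenOpened() = loadNextPage(offset = 0)
```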
a
yeah
That's what I'm doing and it works.
b
Oh great to hear! Do you think this is a good solution in the end? Again, feedback is much appreciated! 🙏
a
I think the solution makes sense: observe the entire cache, then push changes via separate queries, sort of like what the Android team recommends with Architecture Components and Room.