# apollo-kotlin
s
We’ve historically had a place with one ongoing `watch()` on a query, listening to local cache changes to update itself, and then a subscription which, on each new item, did an `apolloStore.readOperation(query)` to get the latest data, appended the new item from the subscription to that list, and then did an `apolloStore.writeOperation(query, newData)` to update the cache, completing the loop so the initial `watch()` would update the original flow. It’s something that has never worked too well and I was looking to improve it: I was struggling to figure out a way to make sure that if I do a `readOperation`, add something to the result, and write it back to the cache, the cache has not updated in the meantime, making me override things and lose items in the process. But then I was thinking there must be a way to avoid all this. Back when this was introduced to our codebase (2019) maybe there wasn’t a better way, but there should be something better I can do 😅 Should I be able to do something like `extend type Message @typePolicy(keyFields: "globalId")` on the return type of the query and the subscription (it’s the same `Message` type) and get my `watch` to simply pick up the cache updates automatically? My query `messages` is `type Query { messages: [Message]! }` and the subscription `message` looks like this: `type Subscription { message: Message! }`. Maybe tl;dr: can my subscription also update the cache automatically, so another watch on a query also gets the new values?
b
Can my subscription also update the cache automatically so another watch on a query also gets the new values
While the generic answer is yes (subscriptions participate in the cache), in your case, since your query returns a list of messages while your sub returns a single message, there's the question of where in the list to put the messages coming from the sub
s
Right, and my `Message` type *does* have a globally unique ID, so I’ve got that at my disposal, I just need to figure out how to hook this together I think 👀
b
I think what you're doing right now is the only way (but adding the `keyFields` if you don't have it yet would make sense)
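For reference, the `keyFields` extension mentioned above would go in an `extra.graphqls` file next to the schema (this is a sketch of the extension from earlier in the thread; `Message` and `globalId` are the type and field discussed here):

```graphql
# extra.graphqls, placed next to the main schema file.
# Keys Message records by their globally unique id, so the same message
# arriving via the query and via the subscription normalizes to one record.
extend type Message @typePolicy(keyFields: "globalId")
```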
s
So setting keyFields + accessing apolloStore is the way to go. Is there a way for me to perform the cache changes in an atomic way, so that if I take the old state, append the new message and write it back in the cache, I don't accidentally override some new entry that was entered between my read and my write? I don't know how likely this is to happen, but I would like to guard against it if possible.
b
There is `ApolloStore.accessCache`, which will execute the given lambda within the store’s lock. You could use that but it feels like a bit of a hack 😅
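To illustrate the atomicity `accessCache` is meant to provide, here is a self-contained toy (no Apollo types involved; `ToyStore`, `access`, and `appendAtomically` are made-up names for illustration): a read-modify-write cycle done entirely under one write lock of a `ReentrantReadWriteLock` cannot interleave with another writer, which is exactly the lost-update risk described above.

```kotlin
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.write

// Toy stand-in for the normalized cache: a root "messages" list of cache keys.
class ToyStore {
    private val lock = ReentrantReadWriteLock()
    private var messages: List<String> = emptyList()

    // Analogous to accessCache: run [block] while holding the write lock, so a
    // read-modify-write cycle cannot interleave with another writer's update.
    fun <T> access(block: (ToyStore) -> T): T = lock.write { block(this) }

    fun read(): List<String> = messages
    fun write(new: List<String>) { messages = new }
}

fun appendAtomically(store: ToyStore, newKey: String) {
    store.access { s ->
        val old = s.read()            // read under the lock
        s.write(listOf(newKey) + old) // prepend and write back, still under the lock
    }
}
```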
s
Ahaa alright, that’s quite interesting! Looked into it a bit now, and yeah, I’m also realizing I need to create a `Record` object to work with this, really uncharted territory for me 😅 I feel like I’d be dropping a layer too low not to worry about making mistakes, but not sure, will have to look more into this tomorrow 😊
Played with it a bit more, got something like this down:
```kotlin
// In ViewModel

messagesQuery().watch().collect { cachedData ->
  // update ui
}
messageSubscription.toFlow().collect {
  // Just take the new single message and try to store it inside the cache
  repository.storeResponseToCache(it)
}

// In repository

fun storeResponseToCache(message: MessageFragment) {
  // From looking into how writing to cache is done
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    val records = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    // Here I am hoping this new entry will be merged with the rest of the messages
    cache.merge(records.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}
```
I am doing the `normalize` approach since I don’t know how to confidently derive the right cache key, and I’m not sure how to get the right `Fragment<D>` parameter that `apolloStore.writeFragment` needs, so this is the closest I’ve gotten to making this work, by looking at what `apolloStore.writeFragment` does. This forces me to go through the chat messages operation in the first place, whereas optimally I’d like to just add another message to the cache by itself. But I feel like I must be missing some step here to make that new message be returned by the query which returns the list of messages. What I am experiencing right now is that after I am done with this, the new cache looks good: after doing `NormalizedCache.prettifyDump` I think I get the right entries in there. But the original query which I am `watch`ing simply returns the new entry and that’s all it returns, a list of 1 item. It doesn’t also re-emit all the previous messages from the cache. I wonder if this has something to do with the `apolloClient.apolloStore.publish(changedKeys)` I do, but I think I should better stop now and look into it again tomorrow 😴
Okay, `NormalizedCache.prettifyDump` coming in clutch. The messages are cached as they should be, but for the query itself it has an entry like this:
```
"messages" : [
  CacheKey(Message:198803482)
]
```
So the older messages are simply not referenced by the query’s entry, even though they exist in the cache itself. I need to make that happen now 😄 This prettifyDump is so so good, I had never peeked inside how the cache actually saves things, but this is finally making me understand at least kinda how all of it works, while I had no idea before
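What the dump shows can be modeled in a few lines (a toy model, not Apollo’s actual types): records live in a flat map, list fields hold references (cache keys) rather than nested objects, and resolving the query only follows the references under `QUERY_ROOT`. A record that exists in the cache but is not referenced from the root list never shows up in the query result, which matches the one-item list described above.

```kotlin
// Toy model of a normalized cache: a flat map of records, where the root
// record's "messages" field stores references (cache keys), not objects.
val cache: Map<String, Map<String, Any>> = mapOf(
    "QUERY_ROOT" to mapOf("messages" to listOf("Message:1")),
    "Message:1" to mapOf("globalId" to "1", "text" to "hi"),
    // This record exists in the cache but is not referenced from QUERY_ROOT...
    "Message:2" to mapOf("globalId" to "2", "text" to "hello"),
)

// ...and resolving the query only follows the references in the root list,
// so Message:2 stays invisible until the "messages" field itself is updated.
@Suppress("UNCHECKED_CAST")
fun resolveMessages(): List<Map<String, Any>> =
    (cache["QUERY_ROOT"]!!["messages"] as List<String>).map { cache.getValue(it) }
```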
b
💯 I find that looking at how things are stored in the cache is the best way to understand how it works
hey, yesterday when I mentioned `ApolloStore.accessCache` I was thinking you could use your existing code (`readOperation`, `writeOperation`) inside it. That's why it felt a bit like a hack, because you wouldn't use the cache lambda parameter. But it may be easier? (What you're doing here, creating records, should work too, but is lower-level.)
s
Yeah, now I am trying to get the new records, get the old records, and basically put them together; I think that should work. I was thinking doing the readOperation and writeOperation inside the lambda would be problematic, as those themselves also seem to try to acquire a lock, and I assumed they’d try to acquire an already-acquired lock and deadlock. I didn’t try it to be honest, but I made that assumption and moved on to a fist fight with records directly 😅
But now that I look at it more closely, it’s a read/write lock, so it should in fact allow multiple readers 🌟 Maybe I should stop going that low level here!
b
that's a good point - the lock is reentrant which I think means this should work, but not 100% sure tbh 🙂
s
Will try and wrap up my current attempt, see if that works. Then I’ll try the hack as you suggested, see if that works for me too. And then decide on which one feels easier to read if I stumble upon this in 6 months, so I don’t pull my hair out not knowing what the hell I was doing here 😅
👍 1
Thanks a ton for the guidance here btw, it’s been super valuable to me, I feel like I’m close to getting this to work thanks to your ideas here 🤗
b
Glad to help!
s
I got something like this and it seems to work exactly as I wish it would:
```kotlin
suspend fun writeNewMessageToApolloCache(message: ChatMessageFragment) {
  /**
   * Using [com.apollographql.apollo3.cache.normalized.ApolloStore.accessCache] here to use the
   * [java.util.concurrent.locks.ReentrantReadWriteLock] which resides inside the
   * [com.apollographql.apollo3.cache.normalized.internal.DefaultApolloStore], to respect the read/write lock as we
   * want to touch the cache internals. This should make it so that we can't make a modification which would override
   * a cache entry which was written in-between us fetching the previous cache and appending our new message to it.
   */
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    /**
     * [normalize] here acts as a way to go from a query response into the map of key to records that we would've
     * gotten back. We construct our own fake `ChatMessagesQuery.Data` object with the [message] fragment to get the
     * exact record we would've gotten if the query came in normally from the backend.
     */
    val records: Map<String, Record> = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    // These were the old cache entries for the MESSAGES_QUERY_NAME query.
    val oldCachedMessageCacheKeys: Set<CacheKey> = cache
      .loadRecord(CacheKey.rootKey().key, CacheHeaders.NONE)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    // This includes the one new message which we want to write to the cache.
    val newCachedMessageCacheKeys: Set<CacheKey> = records
      .get(CacheKey.rootKey().key)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()

    // This includes all the existing messages + the new one. It needs to be *first* to show as the last message in
    // the chat, which is inverted, going from bottom to top.
    val newMessageKeys: Set<CacheKey> = newCachedMessageCacheKeys + oldCachedMessageCacheKeys

    /**
     * We create a new Record for the "QUERY_ROOT" entry in the cache. This will look something like:
     *
     * "QUERY_ROOT" : {
     *   "messages" : [
     *     CacheKey(Message:123123123)
     *     CacheKey(Message:234234234)
     *     CacheKey(Message:345345345)
     *   ]
     * }
     */
    val queryRootRecordWithAllMessages: Record = Record(
      key = CacheKey.rootKey().key,
      fields = mapOf(MESSAGES_QUERY_NAME to newMessageKeys.toList()),
      mutationId = null,
    )

    // We take the original [records] which were going to be written to the cache, and we change what was going to be
    // written to the "QUERY_ROOT" entry by entering our own Record which we've enriched to include all the chat
    // messages.
    // We keep the original [records] since they also include information about where the message itself will be
    // stored, along with the message body and the message header which have their own keys. This means that the
    // final [alteredRecords] map will look something like:
    //
    // mapOf(
    //   "QUERY_ROOT" : {
    //     "messages" : [
    //       CacheKey(Message:123123123) // This is the new entry which we're now writing
    //       CacheKey(Message:234234234) // This and the one below were already in the cache
    //       CacheKey(Message:345345345) // Already was in the cache
    //     ]
    //   },
    //   "Message:123123123" : { // The new message we're storing
    //     "__typename" : Message
    //     "globalId" : 123123123
    //     "id" : free.chat.message
    //     "header" : CacheKey(Message:123123123.header) // reference to the header entry below
    //     "body" : CacheKey(Message:123123123.body) // reference to the body entry below
    //   },
    //   "Message:123123123.header" : { // The header of the new message we're storing
    //     "fromMyself" : true
    //     "statusMessage" : Thank you for your message. We will reply as soon as possible.
    //     "pollingInterval" : 1000
    //     "richTextChatCompatible" : true
    //   },
    //   "Message:123123123.body" : { // The body of the new message we're storing
    //     "__typename" : MessageBodyText
    //     "type" : text
    //     "text" : Hello, I would like some help with this.
    //     "keyboard" : DEFAULT
    //     "placeholder" : Aa
    //   }
    // )
    val alteredRecords: Map<String, Record> = records.toMutableMap().apply {
      put(CacheKey.rootKey().key, queryRootRecordWithAllMessages)
    }

    // This should merge everything together nicely. The new message entries will get stored, the entries for
    // QUERY_ROOT will be retained, and the "messages" one will be updated with the old messages + the new one.
    cache.merge(alteredRecords.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}

const val MESSAGES_QUERY_NAME = "messages"

private inline fun <reified T> Any?.cast() = this as T
```
And without all the comments 😄
```kotlin
suspend fun writeNewMessageToApolloCache(message: ChatMessageFragment) {
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    val records: Map<String, Record> = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    val oldCachedMessageCacheKeys: Set<CacheKey> = cache
      .loadRecord(CacheKey.rootKey().key, CacheHeaders.NONE)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    val newCachedMessageCacheKeys: Set<CacheKey> = records
      .get(CacheKey.rootKey().key)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    val newMessageKeys: Set<CacheKey> = newCachedMessageCacheKeys + oldCachedMessageCacheKeys

    val queryRootRecordWithAllMessages: Record = Record(
      key = CacheKey.rootKey().key,
      fields = mapOf(MESSAGES_QUERY_NAME to newMessageKeys.toList()),
      mutationId = null,
    )
    val alteredRecords: Map<String, Record> = records.toMutableMap().apply {
      put(CacheKey.rootKey().key, queryRootRecordWithAllMessages)
    }
    cache.merge(alteredRecords.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}

const val MESSAGES_QUERY_NAME = "messages"

private inline fun <reified T> Any?.cast() = this as T
```
I felt like over-commenting this because it’s a bit like dark magic to someone who, like me just yesterday, had never looked into the apollo cache and how it stores things 😅
💯 1
b
that looks good!
🙏 1
s
With this working, I do realize how the old `readOperation` approach would look much simpler, but it turns out the lambda for `accessCache` is not inline, and I’d need to be in a suspending context to call those functions. Looking inside the function `override suspend fun <D : Operation.Data> readOperation(` itself, it doesn’t seem to make use of the `suspend` keyword at all, since it also just locks on the ReentrantReadWriteLock and then calls `readDataFromCache`, which is non-suspending. In any case, I felt like trying to make this work would mean launching a new coroutine inside the lambda after I’ve locked, and then I feel like I would deadlock myself somehow
👍 1
Thanks so much again!
b
Sure thing! Thanks for the updates 🙂