Stylianos Gakis
09/11/2023, 4:51 PM
We have a `watch()` on a query, listening on the local cache changes to update itself. And then a subscription which, on each new item, does an `apolloStore.readOperation(query)` to get the latest data, takes that list, appends the new item from the subscription, and then does an `apolloStore.writeOperation(query, newData)` to update the cache, completing the loop where the initial `watch()` updates the original flow.
This has never worked too well and I was looking to improve it. I was struggling to figure out a way to make sure that if I do a `readOperation`, add something to the result, and try to write it back to the cache, the cache has not been updated in the meantime, which would make me override things in the cache, losing items in the process.
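The lost-update race in question can be sketched outside Apollo entirely. This is a toy model (a plain list guarded by a `ReentrantReadWriteLock`, not Apollo's actual types) of the read-modify-write that the `readOperation`/`writeOperation` pair performs:

```kotlin
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.thread
import kotlin.concurrent.write

// Toy stand-in for the cache: many writers each do "read list, append, write back".
val lock = ReentrantReadWriteLock()
var cachedMessages: List<String> = emptyList()

fun appendMessage(id: String) {
  // Without this write lock, two writers can read the same old list and one of
  // the two appended messages is silently lost on write-back.
  lock.write {
    val current = cachedMessages  // read
    cachedMessages = current + id // modify + write back, atomically
  }
}

fun main() {
  (1..100).map { i -> thread { appendMessage("Message:$i") } }.forEach { it.join() }
  println(cachedMessages.size) // 100: no appends lost
}
```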
But then I was thinking there must be a way to avoid doing all this. Back when this was introduced to our codebase (2019) maybe there wasn't a better way, but there should be something better I can do 😅
Should I be able to do something like `extend type Message @typePolicy(keyFields: "globalId")` on the return type of the query and the subscription (it's the same `Message` type) and get my `watch` to simply pick up the cache updates here automatically?
My query `messages` is `type Query { messages: [Message]! }` and the subscription `message` looks like this: `type Subscription { message: Message! }`.
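For context, such a type policy extension lives in the Apollo Kotlin module's client-side schema extension file (conventionally `extra.graphqls`, next to the schema), roughly:

```graphql
# extra.graphqls: client-side extension, not sent to the server
extend type Message @typePolicy(keyFields: "globalId")
```

With that in place, any operation returning a `Message` normalizes it under the cache key `Message:<globalId>`, so the query and the subscription write to the same record.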
Maybe tl;dr: can my subscription also update the cache automatically, so another watch on a query also gets the new values?

bod
09/11/2023, 4:59 PM
> Can my subscription also update the cache automatically so another watch on a query also gets the new values
While the generic answer is yes (subscriptions participate in the cache), in your case, since your query is on a list of messages while your sub is a single message, there's the question of where in the list to put the messages coming from the sub.
Stylianos Gakis
09/11/2023, 5:06 PM
The `Message` type does have a globally unique ID, so I got that at my disposal, I just need to figure out how to hook this together I think 👀

bod
09/11/2023, 5:09 PM
(Adding `keyFields` if you don't have it yet would make sense.)

Stylianos Gakis
09/11/2023, 6:41 PM

bod
09/11/2023, 6:53 PM
There's `ApolloStore.accessCache`, which will execute the given lambda within the store's lock. You could use that, but it feels like a bit of a hack 😅

Stylianos Gakis
09/11/2023, 9:37 PMRecord
object to work with this, really in uncharted territories for me here 😅 I feel like I’d be dropping a layer too low for me to not be worried that I’ll be making mistakes but not sure hmm, will have to look more into this tomorrow 😊Stylianos Gakis
09/11/2023, 10:54 PM// In ViewModel
messagesQuery().watch().collect { cachedData ->
// update ui
}
messageSubscription.toFlow().collect {
// Just take the new single message and try to store it inside the cache
repository.storeResponseToCache(it)
}
// In repository
fun storeResponseToCache(message: MessageFragment) {
  // From looking into how writing to cache is done
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    val records = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    // Here I am hoping this new entry will be merged with the rest of the messages
    cache.merge(records.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}
I am taking the `normalize` approach since I don't know how to confidently derive the right cache key, and I'm not sure how to get the right `Fragment<D>` parameter that `apolloStore.writeFragment` needs. So this is the closest I've gotten to making this work, by looking at what `apolloStore.writeFragment` does. It forces me to go through the chat messages operation in the first place, whereas optimally I'd like to just add another message to the cache by itself. But I feel like I must be missing some step here to make that new message be returned by the query which returns the list of messages.
What I am experiencing right now is that after I am done with this, the new cache looks good: after doing `NormalizedCache.prettifyDump` I think I get the right entries in there. But the original query which I am `watch`ing simply returns the new entry and that's all, a list of 1 item. It doesn't also re-emit all the previous messages from the cache. I wonder if this has something to do with the `apolloClient.apolloStore.publish(changedKeys)` I do, but I think I'd better stop now and look into it again tomorrow 😴

Stylianos Gakis
09/12/2023, 7:11 AM
`NormalizedCache.prettifyDump` coming in clutch. The messages are cached as they should be, but the query itself has an entry like this:
"messages" : [
  CacheKey(Message:198803482)
]
So the messages are simply not referenced by the query response, even though they exist in the cache in general. I need to make that happen now 😄
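A toy model of what that dump means (plain Kotlin maps, not Apollo's actual `Record`/`CacheKey` types): resolving the query starts at `QUERY_ROOT` and follows references, so a record that exists in the cache but is not referenced from the root never appears in the query result:

```kotlin
// Toy normalized cache: records are maps, references are plain strings.
val cache = mapOf(
  "QUERY_ROOT" to mapOf("messages" to listOf("Message:198803482")),
  "Message:198803482" to mapOf("__typename" to "Message", "globalId" to 198803482),
  // Cached but unreachable: nothing references it from QUERY_ROOT.
  "Message:999" to mapOf("__typename" to "Message", "globalId" to 999),
)

// "Resolving" the messages query: read the reference list, then each record.
fun resolveMessages(): List<Map<String, Any>> {
  @Suppress("UNCHECKED_CAST")
  val refs = cache.getValue("QUERY_ROOT").getValue("messages") as List<String>
  return refs.map { cache.getValue(it) }
}

fun main() {
  println(resolveMessages().map { it["globalId"] }) // [198803482]; Message:999 stays invisible
}
```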
This prettifyDump is so, so good. I had never peeked inside how the cache actually saves things, but this is finally making me understand at least kinda how all of it works, while I had no idea before.

bod
09/12/2023, 7:25 AM

bod
09/12/2023, 7:29 AM
With `ApolloStore.accessCache` I was thinking you could use your existing code (`readOperation`, `writeOperation`) inside it. That's why it felt a bit like a hack, because you wouldn't use the cache lambda parameter. But it may be easier? (What you're doing here, creating records, should work too, but is lower-level.)

Stylianos Gakis
09/12/2023, 7:31 AM

Stylianos Gakis
09/12/2023, 7:31 AM

bod
09/12/2023, 7:33 AM

Stylianos Gakis
09/12/2023, 7:37 AM

Stylianos Gakis
09/12/2023, 7:38 AM

bod
09/12/2023, 7:40 AM

Stylianos Gakis
09/12/2023, 12:20 PM
suspend fun writeNewMessageToApolloCache(message: ChatMessageFragment) {
  /**
   * Using [com.apollographql.apollo3.cache.normalized.ApolloStore.accessCache] here to use the
   * [java.util.concurrent.locks.ReentrantReadWriteLock] which resides inside the
   * [com.apollographql.apollo3.cache.normalized.internal.DefaultApolloStore] to respect the read/write lock as we
   * want to touch the cache internals. This should make it so that we can't make a modification which would override
   * a cache entry which was written in-between us fetching the previous cache and appending our new message to it.
   */
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    /**
     * [normalize] here acts as a way to go from a query response into the map of key to records that we would've
     * gotten back. We construct our own fake `ChatMessagesQuery.Data` object with the [message] fragment to get the
     * exact record we would've gotten if the query came in normally from the backend.
     */
    val records: Map<String, Record> = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    // These were the old cache entries for the MESSAGES_QUERY_NAME query.
    val oldCachedMessageCacheKeys: Set<CacheKey> = cache
      .loadRecord(CacheKey.rootKey().key, CacheHeaders.NONE)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    // This includes the one new message which we want to write to the cache.
    val newCachedMessageCacheKeys: Set<CacheKey> = records
      .get(CacheKey.rootKey().key)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    // This includes all the existing messages + the new one. It needs to be *first* to show as the last message in
    // the chat, which is inverted and goes from bottom to top.
    val newMessageKeys: Set<CacheKey> = newCachedMessageCacheKeys + oldCachedMessageCacheKeys
    /**
     * We create a new Record for the "QUERY_ROOT" entry in the cache. This will look something like:
     *
     * "QUERY_ROOT" : {
     *   "messages" : [
     *     CacheKey(Message:123123123)
     *     CacheKey(Message:234234234)
     *     CacheKey(Message:345345345)
     *   ]
     * }
     */
    val queryRootRecordWithAllMessages: Record = Record(
      key = CacheKey.rootKey().key,
      fields = mapOf(MESSAGES_QUERY_NAME to newMessageKeys.toList()),
      mutationId = null,
    )
    // We take the original [records] which was going to be written to the cache, and we change what was going to be
    // written to the "QUERY_ROOT" entry by entering our own Record which we've enriched to include all the chat
    // messages.
    // We keep the original [records] since it also includes information about where the message itself will be
    // stored, along with the message body and the message header which have their own key. This will mean that
    // the final [alteredRecords] map will look something like:
    //
    // mapOf(
    //   "QUERY_ROOT" : {
    //     "messages" : [
    //       CacheKey(Message:123123123) // This is the new entry which we're now writing
    //       CacheKey(Message:234234234) // This and the one below were already in the cache
    //       CacheKey(Message:345345345) // Already was in the cache
    //     ]
    //   },
    //   "Message:123123123" : { // The new message we're storing
    //     "__typename" : Message
    //     "globalId" : 123123123
    //     "id" : free.chat.message
    //     "header" : CacheKey(Message:123123123.header) // reference to the header entry below
    //     "body" : CacheKey(Message:123123123.body) // reference to the body entry below
    //   },
    //   "Message:123123123.header" : { // The header of the new message we're storing
    //     "fromMyself" : true
    //     "statusMessage" : Tack för ditt meddelande. Vi svarar så snart som möjligt.
    //     "pollingInterval" : 1000
    //     "richTextChatCompatible" : true
    //   },
    //   "Message:123123123.body" : { // The body of the new message we're storing
    //     "__typename" : MessageBodyText
    //     "type" : text
    //     "text" : Hello, I would like some help with this.
    //     "keyboard" : DEFAULT
    //     "placeholder" : Aa
    //   }
    // )
    val alteredRecords: Map<String, Record> = records.toMutableMap().apply {
      put(CacheKey.rootKey().key, queryRootRecordWithAllMessages)
    }
    // This should merge everything together nicely. The new message entries will get stored, the entries for
    // QUERY_ROOT will be retained, and the "messages" one will be updated with the old messages + the new one.
    cache.merge(alteredRecords.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}

const val MESSAGES_QUERY_NAME = "messages"
private inline fun <reified T> Any?.cast() = this as T
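One detail worth noting in the snippet above: `newCachedMessageCacheKeys + oldCachedMessageCacheKeys` relies on Kotlin's `Set` union preserving iteration order (the left operand's elements come first, duplicates are dropped), which is what puts the new message at the head of the list. With plain strings standing in for the `CacheKey`s:

```kotlin
fun main() {
  val newKeys = setOf("Message:111")
  val oldKeys = setOf("Message:222", "Message:333")
  // Union keeps insertion order and drops duplicates, so the new key comes
  // first, and re-adding an already-cached key does not duplicate it.
  println((newKeys + oldKeys).toList())           // [Message:111, Message:222, Message:333]
  println((setOf("Message:222") + oldKeys).toList()) // [Message:222, Message:333]
}
```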
Stylianos Gakis
09/12/2023, 12:20 PM
suspend fun writeNewMessageToApolloCache(message: ChatMessageFragment) {
  val changedKeys = apolloClient.apolloStore.accessCache { cache ->
    val records: Map<String, Record> = messagesQuery.normalize(
      data = ChatMessagesQuery.Data(
        listOf(
          ChatMessagesQuery.Message(
            __typename = message.__typename,
            globalId = message.globalId,
            fragments = ChatMessagesQuery.Message.Fragments(message),
          ),
        ),
      ),
      customScalarAdapters = apolloClient.customScalarAdapters,
      cacheKeyGenerator = TypePolicyCacheKeyGenerator,
    )
    val oldCachedMessageCacheKeys: Set<CacheKey> = cache
      .loadRecord(CacheKey.rootKey().key, CacheHeaders.NONE)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    val newCachedMessageCacheKeys: Set<CacheKey> = records
      .get(CacheKey.rootKey().key)
      ?.get(MESSAGES_QUERY_NAME)
      ?.cast<List<CacheKey>>()
      ?.toSet() ?: emptySet()
    val newMessageKeys: Set<CacheKey> = newCachedMessageCacheKeys + oldCachedMessageCacheKeys
    val queryRootRecordWithAllMessages: Record = Record(
      key = CacheKey.rootKey().key,
      fields = mapOf(MESSAGES_QUERY_NAME to newMessageKeys.toList()),
      mutationId = null,
    )
    val alteredRecords: Map<String, Record> = records.toMutableMap().apply {
      put(CacheKey.rootKey().key, queryRootRecordWithAllMessages)
    }
    cache.merge(alteredRecords.values.toList(), CacheHeaders.NONE)
  }
  apolloClient.apolloStore.publish(changedKeys)
}

const val MESSAGES_QUERY_NAME = "messages"
private inline fun <reified T> Any?.cast() = this as T
Stylianos Gakis
09/12/2023, 12:21 PM

bod
09/12/2023, 12:26 PM

Stylianos Gakis
09/12/2023, 12:26 PM
The `readOperation` approach would look much simpler, but it turns out the lambda for `accessCache` is not inline, and I need to be in a suspending context to call those functions.
Looking inside the function `override suspend fun <D : Operation.Data> readOperation(` itself, it doesn't seem to make use of the `suspend` keyword at all, since it also just locks on the ReentrantReadWriteLock and then calls `readDataFromCache`, which is non-suspending.
In any case, I felt like trying to make this work would mean I'd need to launch a new coroutine inside the lambda after I've locked, and then I feel like I would deadlock myself somehow.

Stylianos Gakis
09/12/2023, 12:29 PM

bod
09/12/2023, 12:30 PM