# graphql-kotlin
d
Hey all, I was wondering if anyone has any experience implementing query-scoped, read-only transactions. We have a GraphQL schema where entities are stored in separate tables in Postgres. On the mutation side, the entities are updated together in a database transaction. We want reads to be consistent in the same way across a given read query. Right now we have a race condition that can cause a data consistency issue: one data fetcher reads, an update happens, and then a second data fetcher reads. Technically we could start a transaction in Instrumentation, pin a database connection to the query context, and have all of the underlying services share that transaction. That seems less than ideal and introduces some concurrency concerns, since data fetchers can be async. Does anyone have other suggestions or experience making transaction guarantees across a query?
s
It sounds like you may want to guarantee that certain operations have completed before others, but this is from the perspective of a single client. There are many technical ways this could be solved, but the best one is not to worry about it at all, right? If a client needs one operation to complete before another, they should wait on that result themselves. As a service owner you don’t want to have to deal with pausing certain requests based on some rules; you should just process each request independently.
d
not necessarily guaranteeing order, but data consistency
so, data is updated in a transaction, and there is some inherent coupling between the two entities updated by that mutation. From the read perspective, as the service provider I care less about whether the read returns the old data prior to the mutation or the data after it, but it should be one or the other. Because the read does not happen in a transaction, it can fetch old data for one entity but new data for the second entity.
d
to better visualize it, we are talking about:

```graphql
mutation {
  updateX # X is a complex object with Y and Z
}
```
followed by (fields resolved independently of each other, in parallel):

```graphql
query {
  Y # old data
  Z # new data
}
```
I’d assume that if you want to ensure that reads for all fields in a query happen in the same transaction, then yes, you will have to start it globally -> potentially while creating the context (or, as you suggested, during query instrumentation)
afaik there is no magic bullet here
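A minimal sketch of what that context-pinning could look like (the names here are illustrative, not graphql-kotlin APIs): a per-query holder that lazily opens a single read-only transaction shared by all data fetchers, then closed when the query completes. Because fetchers may run in parallel, the lazy open has to be thread-safe; here it uses compare-and-set so a losing racer closes its extra transaction.

```kotlin
import java.util.concurrent.atomic.AtomicReference

// Hypothetical stand-in for a JDBC connection / read-only transaction handle.
interface ReadTransaction : AutoCloseable {
    fun <T> read(block: () -> T): T
}

// Per-query holder pinned to the GraphQL context. Data fetchers call tx()
// instead of opening their own connection, so every read in the query sees
// the same snapshot.
class QueryTransactionHolder(
    private val open: () -> ReadTransaction
) : AutoCloseable {
    private val ref = AtomicReference<ReadTransaction?>(null)

    // Thread-safe lazy open: fetchers may race here, so the loser of the
    // compare-and-set closes its freshly opened transaction and uses the winner's.
    fun tx(): ReadTransaction {
        ref.get()?.let { return it }
        val fresh = open()
        return if (ref.compareAndSet(null, fresh)) fresh
        else { fresh.close(); ref.get()!! }
    }

    override fun close() { ref.get()?.close() }
}
```

In graphql-kotlin you would create this holder while building the per-request GraphQL context and close it when the execution result completes; against Postgres the `open()` lambda would issue something like `BEGIN ISOLATION LEVEL REPEATABLE READ READ ONLY` so all reads come from one snapshot.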
if you were to structure the query as:

```graphql
query {
  X {   # function
    Y   # property on X
    Z   # property on X
  }
}
```
you could limit the transaction to just X: within it you would fetch both Y and Z and populate them on X before returning the complete object
*within function X you could look at the data fetching environment to determine whether to fetch Y and/or Z, to avoid unnecessary calls
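A sketch of that approach, assuming hypothetical repository and transaction-runner hooks: `selectedFields` stands in for inspecting graphql-java's `DataFetchingEnvironment` selection set, and both reads happen inside one transaction so Y and Z come from the same snapshot.

```kotlin
// X with optional properties so unrequested fields can be left unpopulated.
data class X(val y: String? = null, val z: String? = null)

// fetchY/fetchZ and inReadTx are hypothetical placeholders: inReadTx is
// expected to run its block inside a single read-only database transaction.
class XResolver(
    private val fetchY: () -> String,
    private val fetchZ: () -> String,
    private val inReadTx: (() -> X) -> X
) {
    fun x(selectedFields: Set<String>): X = inReadTx {
        // Both reads execute in the same transaction, so Y and Z are
        // consistent with each other; unrequested fields are skipped.
        X(
            y = if ("Y" in selectedFields) fetchY() else null,
            z = if ("Z" in selectedFields) fetchZ() else null
        )
    }
}
```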
d
ahh right, that’s a good point that we could use the fetching environment to introspect whether to populate a property or not… would you have an issue on the schema side though? A field might be non-nullable from the schema definition perspective, but choosing not to populate that property means it needs to be nullable.
d
yes non-nullable properties will be problematic
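For reference, graphql-kotlin derives GraphQL nullability directly from the Kotlin types, so any property you want to be able to skip has to be declared nullable in Kotlin, which also makes it nullable in the generated schema:

```kotlin
// Kotlin nullability maps straight to schema nullability in graphql-kotlin:
data class X(
    val y: String,  // generated as `y: String!` — must always be populated
    val z: String?  // generated as `z: String` — may be null when not requested
)
```

The alternative is to keep the fields non-nullable and always fetch both inside the transaction, giving up the selection-set optimization.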