# serialization
v
If a field is added in a JSON response, is this considered an API change? Yes 👍 / No 👎 ?
šŸ‘šŸ¾ 1
šŸ‘ 5
šŸ‘Ž 1
e.g. from
```json
{
  "name": "John",
  "age": 30
}
```
to
```json
{
  "name": "John",
  "age": 30,
  "car": null
}
```
k
Yes. It's a backwards-compatible API change.
c
Yes, this would typically constitute a breaking change to the API. Even if the value of `"car"` is `null`, if a consumer has a class which does not have a `car` property and `ignoreUnknownKeys` is `false`, the new JSON response would throw an error.
In terms of semantic versioning, this would be a minor change. An older client cannot be expected to work with the new value, but newer clients should be able to work with older responses if `car` is nullable
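A minimal sketch of that failure mode, assuming kotlinx-serialization-json (plus the compiler plugin) on the classpath; the `Person` class and payload mirror the example above:

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.SerializationException
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

@Serializable
data class Person(val name: String, val age: Int)

fun main() {
    // The server starts returning the new "car" field.
    val newResponse = """{"name": "John", "age": 30, "car": null}"""

    // The default Json instance is strict: the unknown "car" key throws.
    try {
        Json.decodeFromString<Person>(newResponse)
    } catch (e: SerializationException) {
        println("strict default rejected the payload")
    }

    // Opting in to ignoreUnknownKeys makes the addition backwards compatible.
    val lenient = Json { ignoreUnknownKeys = true }
    println(lenient.decodeFromString<Person>(newResponse))
}
```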
k
Yeah, I was also wondering how `ignoreUnknownKeys` would play into this. I think for most other deserializing clients (e.g. the general question) this would be backwards compatible.
e
it depends on the expectations between the endpoints involved, but this is typically considered to be a compatible change.
v
If the `ignoreUnknownKeys` default is false, which will crash the deserialiser on such a change, then how is that typically considered a compatible change?
k
kotlinx.serialization is the first deserialization client I've ever used that has this "strict" mode enabled by default.
For the most part, additions to a payload are considered backwards compatible (aside from here, where the default is to throw on unknown keys)
c
It might not be the "standard" to be so strict, but consider that, unlike most serialization libraries, this one is designed so that you use the same serialization models on both the server and client. Most other languages that can do that are either dynamic and don't really need specialized serialization libraries (JS), or else use different languages/libraries between client and server and can't make any assumptions about the structure of JSON given a class. That said, I've found this library to be a bit difficult to use for parsing arbitrary JSON structures. It definitely works best when you use kotlinx.serialization on both the client and the server. I do usually keep `ignoreUnknownKeys = true` and `isLenient = true` whenever I'm not using the same models on the server, because 3rd-party servers are the wild west and always end up sending stuff in production that I didn't expect.
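Those two flags together look like this; just a config sketch, not tied to any particular project:

```kotlin
import kotlinx.serialization.json.Json

// A Json instance for talking to servers you don't control.
val thirdPartyJson = Json {
    ignoreUnknownKeys = true // new fields in responses won't throw
    isLenient = true         // tolerates quoted numbers, unquoted strings, etc.
}
```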
k
> It might not be the "standard" to be so strict, but consider that, unlike most serialization libraries, this one is designed so that you use the same serialization models on both the server and client.

This doesn't matter when you're deploying applications to an app store where you have no control over when your clients are actually updated, unfortunately.
I'll happily admit that the blanket "backwards compatible" claim is wrong, but I also think the inverse isn't necessarily true either. There's no hard spec here for JSON like there is for gRPC
v
Interesting take Casey, ty. I'm thinking Android too here
c
That's a fair point. I can't tell you how many times my clients have said "can't you just force an update when there's a change to the API?" No. The answer is always no. A non-compatible change to the API will certainly break Android and iOS apps.
k
Yup.
It's unfortunate that the web solved this problem ages ago and greedy companies have regressed us this far.
e
the kotlinx.serialization.json default of `ignoreUnknownKeys = false` confuses me, especially since kotlinx.serialization.protobuf defaults to the equivalent of `true` (as is the expectation for that format)
k
I generally think it's a bad default, too.
e
Even if you intend to share models between server and client, you typically can't guarantee simultaneous deployment of client and server.
v
that's true too
k
The above is very similar to best practice with database migrations. You generally want application code and a migration to be backwards compatible for deploy 1, and then you can remove the cruft in deploy 2 after deploy 1 succeeds (assuming a client/server model for an RDBMS).
c
So, moral of the story when making non-compatible changes to your API:
1) update your client apps with the `car` property in their model
2) deploy to the app stores
3) wait 2 weeks for 80% of your apps to be updated
4) deploy the non-backwards-compatible server changes to production
5) 20% of your users start complaining about crashes
6) customer service tells everyone to just update their apps
7) folks still don't update their apps
k
step 0: set `ignoreUnknownKeys` to true and communicate expectations to your backend devs
c
Or, just make sure `ignoreUnknownKeys` is true the next time you release, and hopefully don't have to worry about this as much
v
7 is my favorite
e
moral of the story is that you need to set up the apps to allow for future changes, or your changes always go to new endpoints that the older clients aren't using
v
Reading this doc, it reads like: We recognise that "...new properties can be added during the API evolution", but we're gonna crash by default anyways
k
I think GraphQL does a lot of things poorly, but one thing that's super nice about it is having a type schema that allows for API evolution and deprecations.
There's a related API compatibility tool here. I haven't used it, but I became curious: https://github.com/IBM/jsonsubschema
v
I think with GraphQL the clients can select (query) which things from the database they want to consume as JSON. It's just another form of change management, in effect similar to versioning
k
There's more to it than that. You can build rich APIs on top of GraphQL. I use it every day.
e
there are a couple of pain points with GraphQL - we've run into https://github.com/graphql/graphql-js/issues/1361 more times than I can count, and https://github.com/graphql/graphql-spec/issues/550 makes it harder to deprecate and migrate. but overall it's a reasonably good story
k
Let me be clear: I hate GraphQL. I just think that having a codified schema with types and deprecations is nice.
I frequently run into the issue that GraphQL cannot express a 64-bit integer
e
ditto JSON, without extensions that don't work on every platform
c
I haven't used GraphQL in a while, but I also found I didn't care for it much. Its goal of being language-agnostic actually made it pretty difficult to map its schema semantics into any language except JS (since it's dynamic). Such is the world we live in. JS is the only supported language for most things, apparently.
k
Yup. JS and its dumb 53-bit (I think?) integers
Apollo GraphQL on Kotlin has come a long way, but it's still not perfect.
e
IEEE-754 binary64 gives you precise integers up to 2^53, yes
there are integers beyond that, you just lose the ability to represent them all precisely 😛
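The 2^53 cliff is easy to demonstrate with a plain `Double` (the same IEEE-754 binary64 as a JS number):

```kotlin
fun main() {
    val maxSafe = 9_007_199_254_740_992L // 2^53

    // 2^53 itself round-trips through Double exactly...
    println(maxSafe.toDouble().toLong() == maxSafe)           // true

    // ...but 2^53 + 1 rounds back down to 2^53 and is lost.
    println((maxSafe + 1).toDouble().toLong() == maxSafe + 1) // false
}
```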
k
Which is great for money
e
for the record, it's not just a JS issue. Lua does the same thing
k
Any language which smooshes the semantics of integer types and floating point types into one thing
c
You probably shouldn't be using floating-point numbers for money at all. A 2-integer system is safer and doesn't lose precision (which is basically what Java's `BigDecimal` does under the hood: an unscaled integer plus a scale)
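Both approaches side by side, for illustration; the `19.99` price is just an example value:

```kotlin
import java.math.BigDecimal

fun main() {
    // BigDecimal is an unscaled integer plus a scale: 1999 * 10^-2.
    val price = BigDecimal("19.99")
    println(price.unscaledValue()) // 1999
    println(price.scale())         // 2

    // Minor-units alternative: keep cents in a Long, format only at the edge.
    val cents = 1999L
    println("\$%d.%02d".format(cents / 100, cents % 100)) // $19.99
}
```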
k
JS just happens to be the one we're all forced to use
Yup, we represent our money as ints
There are also similar concerns for other units of measurement, like `Watt Hours` here
e
at one of my previous companies, all currency was represented in millis as int64
overkill but safe
k
The gRPC money class is overkill, but it's nice they provide it out of the box
v
I would make an integer joke, but they just don't have any point
c
or alternatively, sometimes "stringly-typed" values are actually the better way to go. Don't trust the framework to handle numbers correctly, do it all yourself. Painful, but safe
k
If you can agree on what format to represent your stuff in, maybe. I work for a global org where locale is at the centerpoint of all money representations so I would balk at that idea as dangerous.
e
yeah. our current product has integer IDs on the backend but the clients don't need to know that. as far as the clients are concerned, all IDs are simply opaque strings.
k
e
oh nanos is incredibly overkill lol
in any case I would tend to avoid doing anything currency-related on the client anyway. these things change in the real world (e.g. https://en.wikipedia.org/wiki/Croatian_kuna is no more as of a couple weeks ago) and you can update much more consistently on the backend
k
> centerpoint of all money representations

For example, we recently had a prod bug regarding Eastern Arabic numerals. There's also some weird stuff with Bulgarian, where they culturally don't include a `,` as a grouping separator, but only when dealing with money. If it's normal numbers, they do. So strange.
c
^ yup, I've seen that kind of issue before. In my case, it was a phone set to the Turkish locale inserting a `,` instead of a `.` for the decimal portion of the USD
e
(if that's your issue, you have problems with far more locales than just Turkish)
k
French intensifies
c
Fortunately, my client only has US-based customers dealing in USD, so we could just hardcode the US locale. But don't get me started on how terrible the API was that we had to deal with money representations like that at all…
k
Here's the TL;DR on money formatting:
1. The formatting of the number and the currency symbol relates to the locale the user is viewing that information in, e.g. their device locale.
2. The type of currency a customer/user is interacting with depends on what country/merchant/entity they're interacting with.

You shouldn't hard-code the US locale, because people residing in the US and interacting with US currency can still have their devices set to a different locale (like Turkish)
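A sketch of that split using the JDK's own formatting (exact symbols and separators depend on the JVM's locale data, so no hard-coded expected strings here; the amount is hypothetical):

```kotlin
import java.text.NumberFormat
import java.util.Currency
import java.util.Locale

fun main() {
    val amount = 1234.56 // hypothetical USD transaction amount

    // Currency comes from the transaction; formatting comes from the viewer's locale.
    for (tag in listOf("en-US", "tr-TR", "ar-EG")) {
        val fmt = NumberFormat.getCurrencyInstance(Locale.forLanguageTag(tag))
        fmt.currency = Currency.getInstance("USD")
        println("$tag -> ${fmt.format(amount)}")
    }
}
```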
e
hard-coding currency per region is also not great: see above where HRK disappeared, or the VEF/VES change a few years ago (not that VES is stable either)
k
Yup. Currency should generally be delegated from a server
p
The reason for the default value of `ignoreUnknownKeys` is simply API compatibility. This key was introduced later (per user demand), but allowing the default behavior to change would break the expectations of many existing users. As such, it is stuck at `false` until there is some reason to have an API-incompatible change.
e
wasn't `Json.ignoreUnknownKeys` pre-1.0? there were already a bunch of API-incompatible changes for 1.0