# apollo-kotlin
j
> If two sibling modules use the same schema type and this schema type wasn't generated upstream, each module will generate its own version of the schema type, which could clash. To prevent this, Apollo Kotlin registers a global "check${service}ApolloDuplicates" task that will fail if there are duplicates.
Hello. I don't quite understand this.
• What exactly does "could clash" mean? Our two modules have different package names, so I guess they cannot clash, can they?
• Is the check "preemptive" in the sense that it fails even if they don't clash (because of those package names)?
Thank you 🙂
m
Hi 👋
> Our two modules have different package names, so I guess they cannot clash, can they?
That's the catch. Because there's only one schema, all the "schema" types will use the same package name, which is the package name of the "schema" module.
If you have something like this:
```
:schema
- schema.graphqls
- query1 that uses input type Input1
:feature
- query2 that uses input type Input2
```
Then `Input2` is going to be generated in the "feature" module. But if you move that query to the "schema" module (or just add a new one there), the type is going to be moved to the "schema" module too. Because we don't want that move to break things, we use the same package name for all "schema" types.
If you don't want to be bothered with this, you can set `alwaysGenerateTypesMatching.set(listOf(".*"))` in your schema module, but this might generate a bit more code (or sometimes a lot, depending on your schema).
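For reference, a minimal sketch of what that could look like in the schema module's build script. The service name, package name, and the `generateApolloMetadata` line are assumptions about a typical multi-module setup, not your exact configuration:

```kotlin
// :schema/build.gradle.kts — sketch only, adjust names to your project.
apollo {
  service("service") {
    packageName.set("com.example.schema")

    // Generate every schema type in this module so sibling feature modules
    // never have to generate their own copies (can mean a lot of generated
    // code on a big schema).
    alwaysGenerateTypesMatching.set(listOf(".*"))

    // Publish the generated types as metadata for downstream feature modules.
    generateApolloMetadata.set(true)
  }
}
```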
j
> Which is the package name of the "schema" module
So our feature module setup will not work?
```kotlin
val packagePath = "xxx." + path.replace(":", ".") + ".network"
packageName.set(packagePath)
```
m
It will, but only for operation types:
• schema types all use the same package name
• operation types can use a module-specific package name
We have only one `packageName` option in the Gradle setup for historical reasons, but in a multi-module setup there are in reality 2 `packageName`s (or more if you have more modules).
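To make the split concrete, here is a hedged sketch of how a feature module's configuration plays out under those rules. The package names are made up, and the ".type" suffix for generated schema types is an assumption about the default layout, not something stated above:

```kotlin
// :feature/build.gradle.kts — illustration only.
apollo {
  service("service") {
    // Governs where this module's *operation* types (queries, mutations,
    // fragments) are generated, e.g. xxx.feature.network.Query2Query.
    packageName.set("xxx.feature.network")

    // Shared *schema* types (Input1, Input2, enums, custom scalars) are NOT
    // affected by this setting: they keep the :schema module's package,
    // something like com.example.schema.type.Input2, regardless of which
    // module caused them to be generated.
  }
}
```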
j
Our schema is quite big, so we would rather not use `.*` to generate everything. At the same time there are some reused types, so some manual whitelisting could help, though in the end, having a duplicated schema seems like the easiest solution.
m
You mean duplicate your schema in all modules (and therefore not using apollo metadata)?
I guess that's a solution if the manual whitelisting is adding too much friction
You can follow https://github.com/apollographql/apollo-kotlin/issues/4039 for ideas on how to improve that. But ultimately I think there's no silver bullet if you want to only generate the used types and share them between modules at the same time.
j
We have about 200 modules and multiple GraphQL schemas (don't ask why, I don't like it either), so in this particular first use case, we have a schema shared across just two modules. Maybe other schemas will be shared across more modules (5-10), so we will definitely reconsider this - and maybe for those it will make sense to generate everything in the metadata package.
We wouldn't mind having some types duplicated in feature modules. (this would mean forcing a different package)
m
Yeah, maybe allowing duplicate schema types in different packages is a solution, although I'm not sure of the consequences of that. It "feels" wrong.
j
If I understood it correctly, all this would be fixed by 4039, wouldn't it?
m
4039 will just automate some part of the manual `alwaysGenerateTypesMatching` logic that needs to be done.
It will not change the actual codegen. It will "just" compute the `alwaysGenerateTypesMatching` values for different nodes in your module graph automatically.
j
Oh yes, that's what we want to avoid - having to think about what to make shared. Because the current check doesn't tell us that, right?
m
> Because the current check doesn't tell us that, right?
The current check will tell you something like:
```
type 'SomeInputType' is used in sibling modules :feature1 :feature2, use alwaysGenerateTypesMatching in a parent module
```
4039 is about getting the `alwaysGenerateTypesMatching` values from a file that can be computed automatically by some task that scans the whole module graph (because there's no other way to know where to generate the schema types without scanning the whole module graph, or at least all the .graphql files).
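For concreteness, the manual fix that the check message points at could look roughly like this today. `SomeInputType` is just the name from the example message above, and the service and package names are placeholders:

```kotlin
// Parent (:schema) module build.gradle.kts — sketch of manual whitelisting.
apollo {
  service("service") {
    packageName.set("com.example.schema")

    // Explicitly list the schema types used by several sibling modules so
    // they are generated once here instead of in each feature module.
    alwaysGenerateTypesMatching.set(listOf("SomeInputType"))

    generateApolloMetadata.set(true)
  }
}
```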
j
Thank you for thorough explanation and suggestion. 🙂
m
Sure thing! I'm not sure we landed on a good way forward for you though?
(thinking out loud) Another solution to that problem would be to have a task in the root module that scans all *.graphql files and always generates the schema types in the schema module.
Of course, that's something that's going to run every time a single .graphql file is touched, but in my experience parsing GraphQL is relatively fast compared to compiling Kotlin, so maybe that wouldn't be too bad.
But it definitely introduces coupling
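A very rough sketch of that root-module idea, assuming a naive regex is good enough to pull type names out of operation files (a real implementation would parse the GraphQL documents properly). Everything here, including the task and output file names, is hypothetical:

```kotlin
// Root build.gradle.kts — hypothetical sketch, not an existing Apollo task.
// It scans every *.graphql operation file in the repo and collects the type
// names referenced in variable declarations (e.g. "$input: SomeInputType!"),
// which is roughly the information alwaysGenerateTypesMatching needs.
tasks.register("collectUsedSchemaTypes") {
    val graphqlFiles = fileTree(rootDir) { include("**/src/**/*.graphql") }
    val output = layout.buildDirectory.file("usedSchemaTypes.txt")
    inputs.files(graphqlFiles)
    outputs.file(output)

    doLast {
        // Naive extraction with a regex; scalars like Int/String slip in too,
        // but this is only meant to show the shape of the approach.
        val typeRegex = Regex("""[$]\w+\s*:\s*\[?\s*(\w+)""")
        val typeNames = graphqlFiles.files
            .flatMap { file ->
                typeRegex.findAll(file.readText()).map { it.groupValues[1] }.toList()
            }
            .toSortedSet()

        output.get().asFile.apply {
            parentFile.mkdirs()
            // The :schema module could read this file and feed the names into
            // alwaysGenerateTypesMatching so shared types are generated there.
            writeText(typeNames.joinToString("\n"))
        }
    }
}
```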
> We wouldn't mind having some types duplicated in feature modules
Feel free to open an issue about this and we can experiment
j
I guess we will duplicate for now - those modules tackle (relatively) different stuff -> user account vs credits, so it somehow makes sense not to couple them. Next time I believe the use case will be different and it will be OK to use `.*` to generate everything in the metadata module.
(next time meaning a different GraphQL server/schema of ours)
m
Sounds good 👍 At the end of the day, the schema types are relatively small compared to the operation types, so it's certainly OK to duplicate them.