# apollo-kotlin
s
Been trying to remove all of my queries etc from the big schema module, to hopefully make it a bit less of a compilation bottleneck, since it is I think our slowest module to build, and more or less every feature module depends on it. My hope is that moving those away will allow us to parallelize all of that work, and the schema module will have much less to do. One thing which I think I can't get around is that some fragments that want to be re-used across modules still need to be in that big schema module right? Could I even introduce new modules, which act only as a shell to contain those shared fragments, and have only select feature modules depend on them? Do you feel like that would even be worth the effort? I think folks use Develocity™ to help diagnose if such a change would be a net positive or negative right? I don't have access to that and I don't know how else to measure this reliably, so I thought I might as well just ask here to see if others have looked into this before me.
I have no idea if I am doing this even remotely right. But I tried making a build-scan for before/after moving the operations and it looks like it is ~the same before and after. Only that the apollo module now builds much quicker, but the time is simply offloaded to the others. I don't feel like moving the fragments as well would make a big difference really then. I might try it regardless, but some other time perhaps.
a
No, you can have modules depend on other ones and grab fragments from the dependencies
☝️ 1
We have something like 20+ GraphQL specific modules that share fragments and define operations and no issue there
thank you color 1
The schema module is still a little bit of a compilation bottleneck, as you need to have it generate all the downstream types first
s
Ah that's perfect, thank you for confirming! Have you done this splitting as a measure to improve build perf, or just to limit who can see those fragments? And if you did it for perf reasons, did you see any noticeable difference, or nothing special?
a
It was done to allow teams to reuse fragments and call other operations in different features as necessary. It wasn't explicitly done for performance reasons, but with 10+ teams it helps to have them all separate, from both a performance and a code ownership perspective. Also, each module is GraphQL only, so caching and code generation should be much faster than bundling kapt/ksp/compose etc. in them, since they only need to rebuild if the GraphQL changes or the schema is updated
s
Right, so for any given `:feature-x` you'd also have a `:feature-x-graphql`, which only serves as a shell to hold the `.graphql` files and apply the Apollo plugin etc., right? That way this module also avoids accidentally pulling in ksp or whatever else `:feature-x` might internally use itself. Did I understand you correctly here?
a
Correct!
🌟 1
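(For readers following along: a minimal sketch of what such a GraphQL-only shell module's build script could look like, assuming Apollo Kotlin 4.x with metadata-based multi-module codegen; the module, package, and service names are made up.)

```kotlin
// build.gradle.kts of a hypothetical :feature-x-graphql shell module.
// Only the Kotlin JVM and Apollo plugins are applied; no ksp/compose/etc.
plugins {
    id("org.jetbrains.kotlin.jvm")
    id("com.apollographql.apollo") // Apollo Kotlin 4.x plugin id; version managed elsewhere
}

dependencies {
    // Expose the schema module's generated types and shared fragments to consumers
    api(project(":schema"))
}

apollo {
    // The service name is assumed to match the one declared in :schema
    service("service") {
        packageName.set("com.example.featurex.graphql")
        // Resolve the schema and shared fragments from the :schema module's codegen metadata
        dependsOn(project(":schema"))
    }
}
```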
e
> But I tried making a build-scan for before/after moving the operations and it looks like it is ~the same before and after. Only that the apollo module now builds much quicker, but the time is simply offloaded to the others.
Hmm, I am not sure what you measured in your build scan, but I think that what should be most relevant from a developer experience point of view is incremental compilation time, and less so the total build time. By that I mean which modules are more likely to be changed, and how that affects the compilation time of other modules and of the project overall. To measure this, I use the `gradle-profiler` tool, benchmarking different scenarios, as described here.
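(For illustration, a gradle-profiler scenario file for this kind of measurement could look roughly like the following; scenario names, module paths and file paths are made up. It would be run with something like `gradle-profiler --benchmark --scenario-file build.scenarios`.)

```
// build.scenarios — hypothetical gradle-profiler scenario file.
// Compares an incremental build after touching the schema module
// with one after touching a single feature module, plus a clean baseline.
abi_change_in_schema {
    tasks = [":app:assembleDebug"]
    apply-abi-change-to = "schema/src/main/kotlin/com/example/schema/SomeType.kt"
}
change_in_feature {
    tasks = [":app:assembleDebug"]
    apply-non-abi-change-to = "feature-x/src/main/kotlin/com/example/featurex/FeatureX.kt"
}
clean_build_baseline {
    tasks = [":app:assembleDebug"]
    cleanup-tasks = ["clean"]
    gradle-args = ["--no-build-cache"]
}
```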
s
I looked a bit into gradle-profiler but never really got into it at all. Thanks a lot for this link, it definitely looks like something I would be interested in trying out. And yeah, I did just test clean builds with something like `./gradlew app:assembleDebug --rerun-tasks --scan --no-configuration-cache --no-build-cache` just as a baseline, so that I can get something reliable to test against. It's true that this module won't be touched the majority of the time, so incremental compilation will make things much better anyway in a "real" scenario. Fwiw, I did this migration now and turned that module into a JVM-only module, and looking at my scans the schema module now takes up ~20 seconds as opposed to ~50 seconds. It is no longer a bottleneck most of the time, but now the module with the most gql operations kiiinda is 😅 But that's a much better one to be slow since almost nothing depends on it. It's a fun experience to play around with all of this, so I am trying to make sure I don't make things worse instead of better 😅
👍 2
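(For reference, a schema module slimmed down to a JVM-only, GraphQL-only module could look roughly like this; again assuming Apollo Kotlin 4.x, with made-up names.)

```kotlin
// build.gradle.kts of a hypothetical JVM-only :schema module.
plugins {
    id("org.jetbrains.kotlin.jvm") // plain JVM module, no Android plugin needed
    id("com.apollographql.apollo")
}

dependencies {
    // Version would normally come from a version catalog; shown inline for completeness
    api("com.apollographql.apollo:apollo-api:4.0.0")
}

apollo {
    service("service") {
        packageName.set("com.example.schema")
        // Publish codegen metadata so downstream modules can reuse the schema and shared fragments
        generateApolloMetadata.set(true)
        // Generate types even when they are only referenced by downstream modules
        alwaysGenerateTypesMatching.set(listOf(".*"))
    }
}
```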
e
Yeah, that's a legit fear to have 😄 Ideally, you would have some telemetry for that, so you can monitor changes over time. You don't need Develocity for that: if you already have any kind of observability framework at your company, you can use something like this plugin to push build metrics to it, and then have a dashboard to visualise the aggregates 🙂
👀 2
s
I have been looking for something like this since like forever! This looks super promising, and I see it even has support for attaching build scans! Do you use this yourself too? I would be super curious how exactly you use it. We do have app monitoring in general using Datadog, but nothing for the build itself, so I wonder if there'd be a way for me to hook this up and push some data to Datadog. What do you measure when doing such monitoring? Raw build time can vary a lot depending on whether there was a bad cache miss on some PR for some reason, for example, and that may just be noise in the big picture.
m
There is also https://github.com/cdsap/Talaiot but I have never used it myself. We're using Gradle Enterprise here and I track raw build time. It fluctuates obviously because of the nature of different changes (doc change vs changing the ABI of the parser) but at the end of the day, there are so many factors (build cache, incremental builds, ...) at play here that it's hard to be really more precise
👀 2
s
Thanks for the link Martin! Since apollo-kotlin is OSS and gets Develocity for free because of that, are those dashboards also open? It would probably be useful for me to take a look at how Develocity does this.
m
The trends view is the one that's interesting for long-term investigation of the dev experience
It's hard to correlate it to actual changes though. You have to remember stuff like when you split your modules/updated your Gradle version/etc... and check back a couple of weeks later to see how it goes
Also because we're not such a huge team. I guess if you're the androidx team, you get feedback much faster
👍 1
e
> Do you use this yourself too? I would be super curious how you use it exactly.
Yes, we use the provided `LocalMetricsReporter` to write the metrics to a JSON file, which we then parse and push to Honeycomb as traces, alongside all kinds of metadata. Each Gradle task we "convert" to a span, so we get a nice timeline of the tasks. Each task also has a `state` property that can be one of UP_TO_DATE, IS_FROM_CACHE or EXECUTED. We have some naive logic like `isIncremental = tasks.count { it.state == EXECUTED } < tasks.size / 3`, and we only care about this kind of incremental builds 😄 We only started measuring this recently, and honestly we haven't looked too much into the results yet, so there isn't a lot of interesting information I can share, but the plan is to use this metric as a Mobile DevEx KPI.
thank you color 1
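(To make that heuristic concrete, here's a rough Kotlin sketch of that kind of post-processing. The task model is an assumption for illustration, not the reporter's actual JSON schema.)

```kotlin
// Hypothetical, simplified model of one task entry parsed from the metrics JSON.
data class TaskMetric(val path: String, val state: String)

// Naive heuristic from the thread: a build counts as "incremental" when fewer
// than a third of its tasks actually executed (the rest were UP_TO_DATE or IS_FROM_CACHE).
fun isIncrementalBuild(tasks: List<TaskMetric>): Boolean {
    val executed = tasks.count { it.state == "EXECUTED" }
    return executed < tasks.size / 3
}

fun main() {
    val sample = listOf(
        TaskMetric(":schema:generateApolloSources", "UP_TO_DATE"),
        TaskMetric(":feature-x-graphql:generateApolloSources", "IS_FROM_CACHE"),
        TaskMetric(":feature-x:compileKotlin", "EXECUTED"),
        TaskMetric(":app:assembleDebug", "EXECUTED"),
    )
    // 2 of 4 tasks executed, which is not below 4 / 3 = 1, so this build is not "incremental"
    println(isIncrementalBuild(sample)) // prints: false
}
```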