# kotest

xenomachina

11/30/2021, 6:07 PM
Does 5.0.0 change the order in which tests and/or specs are run? We're seeing some failures after migrating, and it's looking like the issue might be due to this. (Not Kotest's fault, of course. Our tests should be able to run in any order. I'm just curious to know if this is a known change.)

sam

11/30/2021, 6:22 PM
Not consciously. The order of specs has always been the order passed in via Gradle (so that should be consistent).
Tests should be ordered as they are defined in the spec file.
Do you think it's spec order, or test order, that has changed?

xenomachina

11/30/2021, 6:23 PM
I think it's spec order.
One of the issues we had was 3 specs that interact with a rate limiter. With 5.0.0, 2 of them would fail, running into rate limits. It turned out we weren't clearing the rate limiter's state between specs. I wondered why they were passing pre-5.0.0. It turns out that if I run only those 3 specs in 4.x.x, I see the same failure; they only pass if I run the full set (or any individual spec). I'm guessing that those 3 specs were spread out enough that they were able to squeak by the rate limiter.
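The fix on our side was to reset the limiter's shared state before each spec. Roughly this shape, where RateLimiter is a stand-in for our real class:

```kotlin
import io.kotest.core.spec.style.FunSpec

// Stand-in for the real shared rate limiter; only the reset matters here.
object RateLimiter {
    fun reset() { /* clear any accumulated request counts */ }
}

class RateLimitedSpec : FunSpec({
    beforeSpec {
        // Reset shared state so this spec passes no matter which specs ran before it.
        RateLimiter.reset()
    }

    test("stays under the rate limit") {
        // ... exercise the code that hits the limiter ...
    }
})
```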

sam

11/30/2021, 6:40 PM
Do you use parallel execution and/or @Isolate?

xenomachina

11/30/2021, 6:45 PM
We have `override val parallelism = 1` in our ProjectConfig.

sam

11/30/2021, 6:48 PM
that's the default anyway, so everything is sequential

Sebastian Schuberth

11/30/2021, 8:53 PM
FYI, I also saw tests from different specs being executed in parallel (as usual), but in a different order, so a different combination of tests (from different specs) was running in parallel than before. That actually uncovered some race conditions in our tests. So whatever caused the change, it was a good one 😉

sam

11/30/2021, 8:53 PM
It can be useful to enable Random ordering in configuration for this reason. You can choose to randomize tests and randomize specs.
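Something along these lines in the project config (from memory, so double-check the docs for the exact property names in your version):

```kotlin
import io.kotest.core.config.AbstractProjectConfig
import io.kotest.core.spec.SpecExecutionOrder
import io.kotest.core.test.TestCaseOrder

class ProjectConfig : AbstractProjectConfig() {
    // Run spec classes in a random order on each run.
    override val specExecutionOrder = SpecExecutionOrder.Random

    // Run the tests within each spec in a random order too.
    override val testCaseOrder = TestCaseOrder.Random
}
```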
👍🏻 1

xenomachina

11/30/2021, 9:15 PM
Random spec ordering sounds very interesting, but how does one debug a failure that only shows up when the specs are run in a specific order?

sam

11/30/2021, 9:31 PM
I guess you'd have to look at the stack trace and start adding debug code. Better to know than not to know.

xenomachina

11/30/2021, 9:39 PM
I agree in theory, but in practice it can be painful to deal with tests that fail non-deterministically. Also, the way our CI works, it runs tests pre-merge, only merges if they pass, and then runs them again post-merge. Not fun when the post-merge run fails.

sam

11/30/2021, 9:40 PM
If a test fails depending on the order it's executed in (and it's not intended to rely on other tests), then it's a flaky test and should be fixed?

xenomachina

11/30/2021, 9:42 PM
I agree. A failure that only shows up in 1 of every n! test runs is pretty hard to debug, though.
Maybe if there was a way to get the random seed for each run, and then reuse it when debugging failures, that could help. Or maybe even use the date as the random seed, so that it only shuffles once per day?
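Something like this for the once-per-day idea; just a sketch of the concept, not an existing Kotest API:

```kotlin
import java.time.LocalDate
import kotlin.random.Random

// Sketch: derive the shuffle seed from the current date, so the order
// is stable for a whole day but still changes over time.
fun <T> shuffledOncePerDay(specs: List<T>): List<T> {
    val seed = LocalDate.now().toEpochDay()
    return specs.shuffled(Random(seed))
}
```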

sam

11/30/2021, 9:43 PM
Yeah, but it might be obvious too, like you're relying on some state from a previous test that you didn't realize was being changed under you.
That's a good idea - allowing the user to set the random seed.

xenomachina

11/30/2021, 9:47 PM
After spending a few hours debugging this rate limiting issue, I've seen that it isn't always so easy to debug, and that was with the new order being stable. If it were randomizing on me, I think I'd still be trying to figure out what was going on. (That said, if it had been randomized from the start, we wouldn't have ended up with so many hidden dependencies between our tests to begin with.)

sam

11/30/2021, 9:48 PM
It's all just extra tools at the end of the day; hopefully Kotest can continue to improve.
👍 1