# kotest
w
Is there any way to run the tests such that a new spec instance is created for every container that contains tests, but all the tests inside are executed on that one instance? My motivation is that the assertions shouldn’t have any side effects, so it’s a bit wasteful to repeat the entire setup for each test. So I’d like this:
```kotlin
DescribeSpec({
  describe("a") {
    println("a")

    describe("b") {
      println("b")

      it("check 1") { println("1") }
      it("check 2") { println("2") }
    }

    describe("c") {
      println("c")

      it("check 1") { println("1") }
      it("check 2") { println("2") }
    }
  }
})
```
to print
```
a // first instance
b
1
2
a // second instance
c
1
2
```
If I understand correctly, right now my only options are:
- SingleInstance: `ab12c12`
- InstancePerTest: `a ab ab1 ab2 ac1 ac2`
- InstancePerLeaf: `ab1 ab2 ac1 ac2`
- WhatIWant: `ab12 ac12`
s
You want InstancePerRoot or something, basically?
w
Not sure what to call it, really. But yeah, if you define `root` as “a container which has at least one test”, then exactly
s
so InstancePerContainer
I think that would be a very easy change to InstancePerTest actually
w
Tbh I’m not very familiar with the naming, that’s why I went the example route 😄 When I say test it’s the `it() {}` in a describe spec, and `describe`/`context` are containers
So I still don’t want an instance for containers which don’t have any tests (assertions)
s
there's no way to know if something has assertions until it runs, so it would have to go by the "type" of container
`describe`/`context` are `TestType.Container` and `it` is `TestType.Test`
so upon detection of a TestType.Container it can launch a new instance (like instance per test does)
and upon detection of a TestType.Test it would just run in the same instance
w
Is it something I can prototype locally somehow, to see the impact?
s
not without a custom build of kotest
are you prepared to build it locally?
You can modify the instance per test runner to work how you want to test it out.
`InstancePerTestSpecRunner:142`
change
```kotlin
if (isTarget) {
```
to
```kotlin
if (isTarget || nested.type == TestType.Test) {
```
I think that's all that would be required
w
I’ll try that
🤔 with your suggested change and the `InstancePerTest` runner I see
```
a
b
1
2
c
1
2
```
so it seems like it created just one spec instance
s
hmmm
w
Oookay wait, I have some trouble forcing the local dependency 😕
Any tips on how to publish everything to Maven local? Just `publishToMavenLocal` fails at `Task :kotest-tests:kotest-tests-native:generateMetadataFileForIosArm32Publication FAILED`, and it’s difficult to use a different version of just `kotest-framework-engine`
s
I would just delete that module locally
just nuke the entire kotest-tests folder and remove it from settings.gradle
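(For illustration, removing a module generally means deleting its include line in settings.gradle; the module paths below are assumptions, not the actual Kotest settings file.)
```kotlin
// settings.gradle.kts — illustrative sketch only, not the real Kotest settings file
include(":kotest-framework:kotest-framework-engine")   // keep the modules you need
// include(":kotest-tests:kotest-tests-native")        // drop lines like this, then delete the kotest-tests folder
```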
w
I think I managed to publish everything needed, but now when I try to run the test with the modified `InstancePerTest` runner I get a bunch of
```
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message transform method call failed at ./src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 873
```
😬
Update: seems like a bug in Jacoco 😄 I modified the condition to
```kotlin
if (nested.type == TestType.Test) {
   // leaf tests reuse the spec instance that is already running
   run(t, target)
} else if (isTarget) {
   // containers still get a fresh spec instance
   executeInCleanSpec(t).getOrThrow()
} else if (t.description.isOnPath(target.description)) {
   // anything on the path to the target just runs through
   run(t, target)
}
```
and I see `aab12ac12`, which is close enough. Edit: but it also fails some tests with “unfinished coroutines during teardown”, so not quite
c
what about
```kotlin
DescribeSpec({
  describe("a") {
...
    describe("b", isolate = false) {
      println("b")

      it("check 1") { println("1") }
      it("check 2") { println("2") }
    }
...
})
```
w
So, just for context, the reason I asked if it’s easy to prototype is that our test suite is starting to take a long time, and I was looking for some relatively low-hanging fruit to optimize it. So while `isolate = false` makes sense, especially if there are technical difficulties without it, ideally I’d like that isolation behavior to be the default for all tests. That said, I still haven’t determined if it’s really worth it 🙂
c
how long does your suite take?
w
~1 min per module for the biggest modules, and 15 minutes in total; that’s on CI
c
one possible problem with such a lifecycle is that it’s not really obvious. Why can’t you do SingleInstance?
you might want to try my multithreaded test runner that’s optimized for speed: https://github.com/christophsturm/failfast
👀 1
w
> one possible problem with such a lifecycle is that it’s not really obvious
that’s for sure, that’s why I wanted to measure first. Although arguably `InstancePerTest` is not obvious either, at least with specs that have a dedicated assertions block
> why can’t you do SingleInstance?
Isolating tests is important to me; honestly I don’t even know how to write tests with the single instance config, it’d nullify the biggest benefit of the tree-like test structure Kotest offers
s
Or you can become a kotest contributor and help improve the best test framework on the jvm ;) @christophsturm
@wasyl are your tests IO-bound or CPU-bound?
w
CPU I think, we don’t read anything from I/O
But I haven’t profiled yet
s
And you've enabled parallel execution?
w
What is the proper API for that now? `override val parallelism = 4` in the project config class?
s
Yep that works
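(For reference, a minimal sketch of such a project config; the object name is arbitrary and the import path follows Kotest 4.x, so it may differ in other versions.)
```kotlin
import io.kotest.core.config.AbstractProjectConfig

// Kotest picks up an AbstractProjectConfig subclass from the test classpath.
// parallelism = 4 runs specs on 4 threads; tests within a spec stay sequential.
object ProjectConfig : AbstractProjectConfig() {
    override val parallelism = 4
}
```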
w
Or `concurrentTests` / `concurrentSpecs`? Although I already know some of our tests aren’t ready for parallel execution yet due to shared static state in some dependencies. Seems like a noticeable improvement though
But it doesn’t stop me from looking for ways to optimize non-parallel runs anyway 😄 I don’t know how parallel tests will behave on CI either, as there are only 2 cores/threads there
s
Fair
c
> Or you can become a kotest contributor and help improve the best test framework on the jvm ;) @christophsturm
I’m really hoping that failfast will become one of the (two?) best test frameworks on the JVM 🙂 It may look similar to Kotest but I think it’s absolutely not: there are no features to enable, no config options, no different DSLs. It’s for people who mostly care about test speed and think that multithreading is the way to go.
s
And anything with shared state just put @Isolate on the class
And it'll run sequentially
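(For illustration, a sketch of what that looks like; the spec below is made up, and the annotation’s package follows recent Kotest versions, so it may differ in older ones.)
```kotlin
import io.kotest.core.annotation.Isolate
import io.kotest.core.spec.style.DescribeSpec

// Specs marked @Isolate are held back and run sequentially after the
// concurrently executed specs, so shared static state is safe to touch here.
@Isolate
class LegacySingletonSpec : DescribeSpec({
    describe("shared static state") {
        it("can be mutated without racing other specs") { /* ... */ }
    }
})
```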
c
for example I would be really hesitant to add something like @isolate
s
And parallelism = 4 only parallelizes at the spec level. All tests per spec still run sequentially
Why?
w
And if I have `InstancePerLeaf`? Then a spec is a test anyway, so no difference?
s
That just controls the instances used not parallelism.
👍 1
c
how is @Isolate implemented? do you just run the @Isolate tests last, single-threaded? if I needed that I would just put those tests into a separate suite
s
Yes that's exactly what happens
w
I forgot a bit of important context: I did log GC for our test suite, and it looks like each test process triggers a GC every ~second and cleans up roughly 100 MB of young objects at every GC. Admittedly I don’t know if that’s a lot or not, but I wanted to see if I can get this number down by limiting Kotest allocations. Since we have a number of tests with many `it`s at the same level, that seemed like an easy target to just experiment with.
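(For context, a sketch of how GC logging like this can be switched on for the Gradle test JVM; the flag assumes JDK 9+ unified logging and the heap size is just an example value.)
```kotlin
// build.gradle.kts
tasks.withType<Test>().configureEach {
    // log every GC event with timestamps so per-test allocation churn is visible
    jvmArgs("-Xlog:gc*:stdout:time,uptime")
    maxHeapSize = "2g"   // example value, not a recommendation
}
```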
c
another example: in failfast there is only instancePerLeaf as a lifecycle mode. I have now ported all my test suites to failfast and I keep wanting to use different lifecycle modes, but I’m not implementing that yet because I haven’t found a solution that I like. I now think `context("does stuff", isolation = false) {…}` may be a good solution
maybe it’s just very short GCs?
s
@wasyl if you find some excessive GC in Kotest then let me know and we'll try to fix it. Wasn't there some sourceref thing that was slow that you found a while back?
@christophsturm it seems overkill to create a new module to isolate a couple of tests
c
what happens when you give the test suite more RAM? you could probably run the whole suite without doing GC
w
@christophsturm I’m planning to do that too, I’m gathering more data to know how to tune the GC most efficiently 🙂 but with Gradle + parallel tests I’m not sure running the entire suite without GC is possible. @sam I did find the sourceref thingy, and yeah, when I have something solid I’ll definitely let you know
c
is @Isolate mostly to migrate not-so-well-written tests, or do you know of use cases where improving the test is not an option and it just needs @Isolate?
s
If you have shared static state. I use it to test parts of kotest itself.
It's not the best option but sometimes it's the only option
c
what shared state does kotest depend on?
w
If you want a solid example — Android’s `ViewModel`s and basically all of the Jetpack libraries use shared coroutine dispatchers, so replacing them with a test dispatcher to control time can’t be done individually (other than by circumventing the built-in mechanism completely)
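(For illustration, the “custom listener” approach mentioned later in the thread might look roughly like this; the listener name is made up, the API shapes follow Kotest 4.x and the kotlinx-coroutines-test of that era, and `TestCoroutineDispatcher` has since been superseded.)
```kotlin
import io.kotest.core.listeners.TestListener
import io.kotest.core.spec.Spec
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.test.TestCoroutineDispatcher
import kotlinx.coroutines.test.resetMain
import kotlinx.coroutines.test.setMain

// Swaps the global Main dispatcher (the shared state Jetpack ViewModels rely on)
// for a controllable test dispatcher around every spec, then restores it.
object MainDispatcherListener : TestListener {
    override suspend fun beforeSpec(spec: Spec) {
        Dispatchers.setMain(TestCoroutineDispatcher())
    }

    override suspend fun afterSpec(spec: Spec) {
        Dispatchers.resetMain()
    }
}
```
Such a listener would then be registered globally through the project config rather than per spec.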
c
interesting. I use Kotlin only for server-side JVM code. I thought one of the main points of Jetpack / SwiftUI etc. is to make UI code easier to test.
I think it always pays off to try to find a way to test things in parallel, it really improves the whole architecture
do you have an example of a well written jetpack test suite?
w
I don’t, but personal opinion: Google doesn’t pay a lot of attention to testing. Tooling is regularly broken, Robolectric is an abomination, and the APIs aren’t that testing-friendly (with exceptions). The APIs cater to the masses, and let’s be honest, a lot of devs don’t write tests at all, and when they do, they don’t even think about things like parallelism. Case in point about Google’s approach: https://issuetracker.google.com/issues/134473339, open for almost 2 years now with little activity
c
I would love to work on an Android project and try to find a good way to test it.
w
Or this one: https://github.com/android/android-test/issues/759 — new test APIs behave completely differently from the real ones, no activity 🤷 This is for instrumented tests but still representative of Google’s stance
The only good way to test an Android project is to write as much of the codebase as possible without Android dependencies 😄 Then the only things that are broken are the classes that do need Android, the tooling (which regularly breaks with Kotlin-only modules), and instrumented tests, which are insanely painful but a different beast entirely
c
yeah, that’s always the best way to test
s
So you use Kotest for the non-Android bits?
w
But this is not what Google advocates. They advocate for using their libraries and testing using Robolectric. And they write all the Google libraries with that philosophy in mind
s
Is robo their library or third party?
c
a lot of Kotlin libraries that come from Android are also awful.
w
> So you use Kotest for the non-Android bits?
We use it for everything that doesn’t require Robolectric, so some Android components still qualify, albeit with a custom listener to set this global dispatcher state
Robolectric sadly needs its own runner and the Kotest integration is very limited. Instrumented tests are impossible to run with Kotest afaik
s
Yeah, the internals of robo are very JUnit-focused
w
Once this issue is done: https://github.com/junit-team/junit5/issues/201 and Robolectric starts using it (probably 3 more years 😛) it’ll get better; Robolectric might not even need to add its own support
Btw, circling back 😄 I started giving the test process ludicrous amounts of memory, and the new gen easily hits 8 GB; a single GC takes it down to 20 MB (!). I’ll definitely spend some time profiling memory allocations
c
classloader isolation. that looks very interesting
s
8 GB, wow
Yeah find out what's allocating all those objects
c
profiling memory usage per test could also be a nice feature
w