# kotest
n
Hi, I'm trying to speed up my tests by using Kotest's parallelism feature. I distribute database instances using a `LinkedBlockingQueue` with `beforeSpec { queue.take() }` and `afterSpec { queue.add(db) }`. Unfortunately I don't see a big performance improvement. Normally my tests take around 30s; with `parallelism=16` they are only a few seconds faster. My CPU has 32 threads, and the database server in the container also has access to all 32 threads. I tried different combinations of the `parallelism` setting and the number of available databases, but I didn't notice a significant difference. IntelliJ shows that all my 200 tests take 10s, but that time doesn't include the time between tests or the time taken to build the project. What's wrong with my approach?
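A minimal standalone sketch of that hand-off pattern, with the Kotest hooks replaced by plain functions so it runs on its own (`Db`, the pool size, and the sleep are hypothetical stand-ins for the real database instances and test work). It shows the key property: `take()` blocks when the pool is empty, so the pool size caps how many specs can run at once, regardless of the parallelism setting.

```kotlin
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

data class Db(val name: String)

// Hypothetical pool of two database instances.
val pool = LinkedBlockingQueue(listOf(Db("db1"), Db("db2")))

val active = AtomicInteger(0)
val maxActive = AtomicInteger(0)

fun runSpec() {
    val db = pool.take()                          // beforeSpec: blocks until a db is free
    try {
        val now = active.incrementAndGet()
        maxActive.accumulateAndGet(now) { a, b -> maxOf(a, b) }
        Thread.sleep(50)                          // stand-in for the spec's test cases
    } finally {
        active.decrementAndGet()
        pool.add(db)                              // afterSpec: return the db to the pool
    }
}

fun main() {
    val workers = (1..8).map { thread { runSpec() } }
    workers.forEach { it.join() }
    // With 2 dbs, at most 2 specs ever run concurrently, however many threads exist.
    println("max concurrent specs: ${maxActive.get()}")
}
```

So if the queue holds fewer entries than the parallelism level, the queue becomes the effective concurrency limit.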
s
Parallelism will mostly just mean multiple specs in parallel
n
My 200 tests cases are of course divided into multiple specs.
Let's say I have 50 specs and 200 test cases.
s
Try increasing `concurrentSpecs` too
Set both in your `ProjectConfig` class
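For reference, both knobs live in the project-level config. A sketch assuming Kotest 5's `AbstractProjectConfig` (the values are just the ones discussed in this thread, not recommendations):

```kotlin
import io.kotest.core.config.AbstractProjectConfig

// Kotest picks up a class named ProjectConfig from the classpath.
object ProjectConfig : AbstractProjectConfig() {
    override val parallelism = 16      // threads used for test execution
    override val concurrentSpecs = 16  // how many specs may run concurrently
}
```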
n
Setting `concurrentSpecs=16` didn't make a difference, but `concurrentSpecs=1` runs 20s slower.
It seems that collecting results and switching between tests takes 40% of the entire time.
c
Does your queue of db instances hold enough entries to support that number of parallel tests?
n
It does. I tried different configurations: the same number of dbs as `parallelism`, fewer dbs than parallelism, more dbs than parallelism. I don't see a difference. For example, there is no difference between `parallelism=16` with 10 dbs and `parallelism=8` with 8 dbs.
a
IntelliJ might not be the best way to benchmark this. It injects some build config so that it can detect and display the test results. Do you see the same results if you run the tests via the command line?
n
There is no difference in total time when I run it via the command line.
c
I use code like this to measure the level of concurrency that my tests achieve:
```kotlin
import java.lang.management.ManagementFactory

private val operatingSystemMXBean =
    ManagementFactory.getOperatingSystemMXBean() as com.sun.management.OperatingSystemMXBean
private val runtimeMXBean = ManagementFactory.getRuntimeMXBean()

fun printCpuLoad() {
    val uptime = runtimeMXBean.uptime                               // wall-clock ms since JVM start
    val cpuTime = operatingSystemMXBean.processCpuTime / 1_000_000  // ns -> ms
    val percentage = cpuTime * 100 / uptime
    println("average cpu load: $percentage%")
}
```
if you run that at the end of your suite you will see how much load you generated. the optimal percentage would be 100*cpus, but that's more of a theoretical value.
n
@christophsturm do you run it after each spec or after the entire project?
This is my CPU chart when my tests are using `parallelism=1`, and here with `parallelism=16`.
Why is there such a high spike at the end of the tests? I'm not executing anything custom at the end of my tests. Do you have the same behavior in your tests?
c
> do you run it after each spec or after entire project?

this displays a summary of cpu load over the whole test suite, so I run it after the entire project.
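One way to run it exactly once at the end of the suite is a project-level listener. A sketch assuming Kotest's `ProjectListener` interface with its `afterProject` callback (the bean lookup is the same as in the snippet above; registration details depend on your Kotest version):

```kotlin
import io.kotest.core.listeners.ProjectListener
import java.lang.management.ManagementFactory

// Hypothetical listener that prints the suite-wide average cpu load once,
// after all specs have finished. Register it from your ProjectConfig class.
object CpuLoadListener : ProjectListener {
    override suspend fun afterProject() {
        val os = ManagementFactory.getOperatingSystemMXBean()
                as com.sun.management.OperatingSystemMXBean
        val uptime = ManagementFactory.getRuntimeMXBean().uptime  // ms
        val cpuTime = os.processCpuTime / 1_000_000               // ns -> ms
        println("average cpu load: ${cpuTime * 100 / uptime}%")
    }
}
```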
n
> this displays a summary of cpu load over the whole test suite so I run it after the entire project.

I run it after the entire project and it shows around 1800 with `parallelism=16`. With `parallelism=1` the value is similar.
I did it in a ProjectListener
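As a sanity check on that number: with 32 hardware threads the theoretical maximum from the formula above is 3200, so a reading of 1800 means roughly 18 cores busy on average, about 56% of the machine:

```kotlin
fun main() {
    val cpus = 32
    val theoreticalMax = 100 * cpus                  // 3200 for a 32-thread machine
    val observed = 1800
    val coresBusy = observed / 100.0                 // average number of busy cores
    val utilisation = 100.0 * observed / theoreticalMax
    println("$coresBusy cores busy on average, $utilisation% of theoretical max")
}
```

If that value barely changes between `parallelism=1` and `parallelism=16`, the load is probably not coming from the tests themselves.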
c
interesting. your graphs for parallelism=1 and parallelism=16 look very similar too, so maybe the cpu load in your case is caused by something else?
n
There is also a SQL Server container used by the tests. Maybe IntelliJ does this; I will check it.
The same spikes occur when I run it without IntelliJ.