# kotest
a
Does kotest have something to test race conditions and such? I could not find anything.
e
a
I saw that, TY. But I'm looking for something that will expose non-thread-safe code such as:
`val nextValue = sharedMutableInt++`
as opposed to:
`val nextValue = myAtomicInt.incrementAndGet()`
I have my own code that does it, but it is not fully baked
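For context, here is a minimal self-contained sketch (plain Kotlin, no test framework; `hammerCounters` is a made-up name) of the kind of check I mean: hammer both counters from several threads and compare against the expected total.

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

// Hypothetical helper: bumps a plain Int and an AtomicInteger concurrently
// and returns (unsafeTotal, atomicTotal).
fun hammerCounters(threads: Int = 4, perThread: Int = 100_000): Pair<Int, Int> {
    var shared = 0                // `shared++` is read-modify-write, not atomic
    val atomic = AtomicInteger(0) // incrementAndGet() is a single atomic operation
    (1..threads).map {
        thread {
            repeat(perThread) {
                shared++
                atomic.incrementAndGet()
            }
        }
    }.forEach { it.join() }
    return shared to atomic.get()
}

fun main() {
    val expected = 4 * 100_000
    val (unsafe, safe) = hammerCounters()
    println("unsafe=$unsafe atomic=$safe expected=$expected")
    check(safe == expected) { "atomic counter must be exact" }
    // `unsafe` is typically below `expected` due to lost updates, but that is
    // not guaranteed on any single run, so a real test needs repetition.
}
```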
e
Do you have examples from other frameworks or languages which accomplish this?
a
I have some examples in Kotlin, but they are half baked
for instance, the following shows that mocking static methods is not thread safe - it can break other tests running in parallel:
```kotlin
"demo for mockkStatic".config(enabled = true) {
    runInParallel({ runner: ParallelRunner ->
        timedPrint("Before mock on same thread: ${LocalDateTime.now()}")
        runner.await()
        mockkStatic(LocalDateTime::class)
        val localTime = LocalDateTime.of(2022, 4, 27, 12, 34, 56)
        every { LocalDateTime.now(any<Clock>()) } returns localTime
        runner.await()
        timedPrint("After mock on same thread: ${LocalDateTime.now()}")
    },
        { runner: ParallelRunner ->
            timedPrint("Before mock on other thread: ${LocalDateTime.now()}")
            runner.await()
            runner.await()
            timedPrint("After mock on other thread: ${LocalDateTime.now()}")
        }
    )
    /*
    Time: 2023-05-12T13:14:07.815923, Thread: 51, Before mock on other thread: 2023-05-12T13:14:07.737748
    Time: 2023-05-12T13:14:07.816011, Thread: 50, Before mock on same thread: 2023-05-12T13:14:07.737736
    Time: 2022-04-27T12:34:56, Thread: 51, After mock on other thread: 2022-04-27T12:34:56
    Time: 2022-04-27T12:34:56, Thread: 50, After mock on same thread: 2022-04-27T12:34:56
    */
}
```
with the following implementation:
```kotlin
import java.util.concurrent.CyclicBarrier
import java.util.concurrent.TimeUnit
import kotlin.concurrent.thread

class ParallelRunner(
    private val timeoutInMs: Long,
    vararg tasks: (runner: ParallelRunner) -> Unit
) {
    private val tasks = tasks.toList()

    // One barrier party per task: await() releases only when every task has arrived.
    private val barrier = CyclicBarrier(tasks.size)

    constructor(vararg tasks: (runner: ParallelRunner) -> Unit) : this(1000L, *tasks)

    fun run() {
        // Create all threads first, then start them together and wait for all to finish.
        val threads = tasks.map { task ->
            thread(start = false) {
                task(this)
            }
        }
        threads.forEach { it.start() }
        threads.forEach { it.join() }
    }

    // Rendezvous point; times out rather than hanging a broken test forever.
    fun await() = barrier.await(timeoutInMs, TimeUnit.MILLISECONDS)

    companion object {
        fun runInParallel(vararg tasks: (runner: ParallelRunner) -> Unit) {
            ParallelRunner(*tasks).run()
        }
    }
}
```
e
Which parts of this could/should be included as part of some framework for finding race conditions?
a
The `ParallelRunner` class - the demo shows how to use it, and there are several more demos using it. I also have another class for reproducing deadlocks, dirty reads, etc.
e
We already support running tests in parallel though? 🤔
Plus it's easy to do things in parallel with coroutines. I think the framework support should be focused on automatically running different things in parallel, with varying degrees of parallelism, and checking what breaks when run together?
a
My understanding is that currently kotest runs tests in parallel only to complete faster, not to deliberately expose whether the system under test is thread safe. As such, we can get race conditions between two different tests, for example via a shared mock, but we won't reproduce a race condition in the code being tested. Please correct me if I am wrong.
e
You are correct
a
so the code that makes it easier to understand/reproduce race conditions and such is outside the scope for kotest, right? TY!
e
I'm not sure. The best support for handling race conditions should IMO integrate seamlessly and just happen under the hood. So if I write my integration tests, I would like to write them as normal, but perhaps invoking Kotest in a mode which experiments with parallelism etc. to find such issues. Perhaps it doesn't need to be part of Kotest; in a simple approach it could be a Gradle plugin which calls the Kotest test task with different configs to see if it starts to break.
a
TY
c
a
@CLOVIS - this is awesome -TY so much!
c
Also, #lincheck (but it's a bit empty 😕)
๐Ÿ‘ 1
o
I'm using Kotest in highly concurrent scenarios and I'll be giving a talk on "End-to-End Stress Testing with Kotlin Multiplatform" at KotlinConf 2024 (May 22-24). While there is probably no one-size-fits-all approach, I'll be showing some helpers I have created for concurrency testing, one of which is similar to the above barrier but based on coroutines. It's possible and probably better to build such stuff in customized form on top of Kotest than to integrate some functionality into Kotest that's not sufficiently universal and will be hard to maintain in the long run.
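This is not Oliver's actual code, but a one-shot coroutine-based rendezvous in that spirit might look like this (assumes kotlinx-coroutines on the classpath; `SuspendingBarrier` is an invented name):

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlinx.coroutines.*

// Sketch of a one-shot suspension-based barrier: every party suspends in
// await() until the last one arrives. (A cyclic version would need to re-arm.)
class SuspendingBarrier(private val parties: Int) {
    private val arrived = AtomicInteger(0)
    private val gate = CompletableDeferred<Unit>()

    suspend fun await() {
        if (arrived.incrementAndGet() == parties) gate.complete(Unit)
        gate.await()  // suspends the coroutine instead of blocking a thread
    }
}

fun main() = runBlocking {
    val log = java.util.Collections.synchronizedList(mutableListOf<String>())
    val barrier = SuspendingBarrier(2)
    listOf("A", "B").map { name ->
        launch(Dispatchers.Default) {
            log.add("$name before")
            barrier.await()
            log.add("$name after")
        }
    }.joinAll()
    // Both "before" entries must precede any "after" entry.
    check(log.take(2).all { it.endsWith("before") })
    println(log)
}
```

Unlike the `CyclicBarrier` version, this never parks a thread, so it composes with dispatchers that multiplex many coroutines over few threads.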
๐Ÿ‘ 2
e
Looking forward to the talk :) we also use kotlin/kotest for concurrent workloads.
o
Cool! Will you attend in person?
e
Yes. And I believe Sam is coming as well. Should organize some Kotest workshop or something :)
o
Excellent that we all get a chance to meet! 🎉 My talk is scheduled for the last day, so I need to be a bit careful with my voice. 😆 Other than that, let's find out the best meetup opportunities once the final schedule is published.
a
@Oliver.O do you have any plans to expose your helpers as a library? That would be really useful.
o
I'll publish all the code via GitHub on the day of my talk (which will also be available on YouTube, I guess). The code will most probably be for copy&paste reuse, not a library. I expect such code to be customized, e.g. integrations with different logging/telemetry systems, client/server transports (I'm using WebSockets), feature flags and other stuff. And time will tell how universal such functions can become.
๐Ÿ‘ 1
And also for Kotest, I'll be considering some refinements where it makes sense. For example, AFAIK, when starting 50 parallel tests, there is currently no way of knowing the invocation index (0..49) inside a test. In addition, I have refactored Kotest concurrency in a past iteration and will try to build on that, so that concurrent tests can run in a more production-like fashion, with tests hopping across threads by default, because this is what coroutines in production will do. (Currently, each test runs in its own dedicated thread.)
a
There are tons of scenarios where "each test runs in its own dedicated thread" is just fine, as it finds all too many issues in less than a second. Likewise, just two concurrent threads would get the job done for lots of real-life bugs I've seen.
o
Yes, there are those scenarios. And then there are the others: subtle differences in scheduling, the accidental `ThreadLocal` which does not work with a thread-hopping coroutine, ...
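That `ThreadLocal` pitfall is easy to demonstrate (again assuming kotlinx-coroutines); `asContextElement()` is the usual remedy:

```kotlin
import kotlinx.coroutines.*

val requestId = ThreadLocal<String?>()

fun main() = runBlocking {
    // Broken: a plain ThreadLocal set before a suspension point may be gone
    // after it, because the coroutine can resume on a different pool thread.
    withContext(Dispatchers.Default) {
        requestId.set("req-42")
        delay(10)
        println("plain ThreadLocal after delay: ${requestId.get()}")  // may be null
    }
    // Remedy: asContextElement() carries the value across thread hops.
    withContext(Dispatchers.Default + requestId.asContextElement("req-42")) {
        delay(10)
        check(requestId.get() == "req-42")
        println("with asContextElement: ${requestId.get()}")
    }
}
```

The first block is exactly the kind of bug that only surfaces once tests stop pinning each coroutine to a dedicated thread.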
a
Of course! But mud huts before Eiffel towers.