c
Do you often use parameterized tests? Let's compare a few approaches! JUnit5:
@ParameterizedTest
@MethodSource("stringIntAndListProvider")
fun `test with multiple arguments`(str: String, num: Int, list: List<String>) {
    …test code…
}

companion object {
    @JvmStatic
    fun stringIntAndListProvider(): Stream<Arguments> {
        val strings = listOf("apple", "lemon")
        val nums = listOf(1, 2)
        val lists = listOf(listOf("a", "b"), listOf("x", "y"))

        return strings.stream().flatMap { string ->
            nums.stream().flatMap { num ->
                lists.stream().flatMap { list ->
                    Stream.of(arguments(string, num, list))
                }
            }
        }
    }
}
Quite verbose! Let's see if we can do better… Prepared (test framework) + Parameterize (parameterization DSL):
fun SuiteDsl.testWithMultipleArguments() = suite {
    parameterize {
        val string by parameterOf("apple", "lemon")
        val num by parameterOf(1, 2)
        val list by parameterOf(listOf("a", "b"), listOf("x", "y"))

        test("Test $string $num $list") {
            …test code…
        }
    }
}
Kotlin's DSL capabilities really spoil us K
Learn more:
• Writing parameterized tests in Kotlin
• Parameterize: documentation#C06083PAKEK
• Prepared: documentation#C078Z1QRHL3
👍 1
K 2
v
Well, nothing helps if you are spoiled by Spock:
def 'Test #string #num #list'() {
   //…test code…

   where:
      string  | num | list
      'apple' | 1   | ['a', 'b']
      'lemon' | 2   | ['x', 'y']
}
😄
❤️ 2
d
That doesn't have code completion though... and you need to manually format that table (in that case it's simple..., but what about more complex ones)...
And some of those params could be generated by code, not just hard-coded
v
That's nonsense. Why would you have no code completion or need to format manually? IntelliJ has nice support for both.
And Spock of course also supports generating those parameters, with slightly different syntax.
You can hardly find anything that Spock is not superior at. The only exception is that testing Kotlin code is not optimal, as calling Kotlin code from Groovy is not optimal. But that was not the point. The point was just that it is much more convenient and nicer in Spock than in any other test framework I've seen to date.
And even when you use Prepared and Parameterize, you will still be unhappy if you are spoiled by the amazing things you can do with Spock 😉
d
And probably another difference: Spock is probably not KMP... a consideration at least for those developing for platforms other than the JVM.
I used Spock a LONG time ago... but I guess I didn't get used to Groovy too much.
v
Of course it is not KMP, only JVM. Again, I did not say use Spock instead of this here. I just said that the syntax is much nicer and easier to use still. Why are you even arguing? o_O
c
I just don’t see why it needs a specific syntax:
val tests = testsAbout("String#reverse") {
    listOf(Pair("otto", "otto"), Pair("racecar", "racecar")).forEach { (input, output) ->
        it("reverses $input to $output") {
            assert(input.reversed() == output)
        }
    }
}
https://github.com/failgood/failgood/blob/main/docs/parametrized%20tests.md
I wonder what would be the most Kotlin way to write this Spock table, and I would be open to implementing that in failgood
c
@Vampire Does that example do a cartesian product? At first look, I would expect it to only run twice, but I don't know Spock
How do you manage dependent parameters? For example:
parameterize {
    val x by parameterOf(0, 1, -2)
    val y by parameterOf(0, 1, -2, x, x+1, x-1)

    …
}
it doesn't look like you are allowed to refer to other parameters 🤔
a
Whether or not syntax is nicer is subjective. Spock looks absolutely horrible to me. But each to their own :)
👌 1
c
@christophsturm Same question: how do you manage dependent parameters? Also, this looks like it uses destructuring, no? So it doesn't work with more than 5 parameters, and they all have to be the same type? I do find failgood's version less readable, but that may be just me
v
No, that syntax only runs twice. Oh, I missed that your example does a cartesian product, sorry. Then in the next version it would be
def 'Test #string #num #list'() {
   //…test code…

   where:
      string  | _
      'apple' | _
      'lemon' | _
   combined:
      num | _
      1   | _
      2   | _
   combined:
      list       | _
      ['a', 'b'] | _
      ['x', 'y'] | _
}
or
def 'Test #string #num #list'() {
   //…test code…

   where:
      string << ['apple', 'lemon']
   combined:
      num << [1, 2]
   combined:
      list << [['a', 'b'], ['x', 'y']]
}
(just showing the first uglier version because you can actually multiply full tables 🙂)
it doesn't look like you are allowed to refer to other parameters
Yes, you can derive data variables from others in Spock. But anyway, let's not stray off into discussing Spock features, that was not my intention. 😄
c
IMO Parameterize's version is easier to read than that last version, if only because the things are actually declared in the order of reading.
> let's not stray off to discuss Spock features
I don't mind! My goal here is to make Prepared as nice to use as possible. If there are good ideas elsewhere, I definitely do want to take inspiration! (as long as it's constructive, of course)
👌 1
y
The dependent parameters thing is really interesting. I'm assuming right now that you're re-running the block each time or something? Or you dry-run, collect the options, then re-run the block? Sounds like if you wanted dependent parameters you'd need something like for-comprehensions or monads. I've been working on something that could enable that. The only issue is that it has to use reflection to be able to do multi-shot things, which is slow-ish for normal applications, but would definitely be great for tests. I'll try to see if I can make a POC of Parameterize with that, then the syntax can be even better!
c
cc @Ben Woodworth
But yes, that's basically how it works
Initially we weren't sure how to make that fit with a test framework, but I'm happy with the result. It's easy to use and understand, and it doesn't need any magic or whatever; it's just smart usage of `provideDelegate` and sequences
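For anyone curious how the "re-run the block with `provideDelegate`" approach can work, here is a minimal, self-contained sketch. All names and internals here are hypothetical, not Parameterize's actual implementation:

```kotlin
import kotlin.properties.ReadOnlyProperty

// Hypothetical miniature: re-run the block, letting each delegated
// parameter step through its values like an odometer.
class ParamScope {
    private val counters = mutableListOf<Int>()  // current value index per parameter
    private val sizes = mutableListOf<Int>()     // value count per parameter
    private var cursor = 0                       // parameter position within this run

    fun <T> parameterOf(vararg values: T): ReadOnlyProperty<Any?, T> {
        val i = cursor++
        if (i == counters.size) { counters.add(0); sizes.add(values.size) }
        sizes[i] = values.size                   // may change for dependent parameters
        val value = values[counters[i]]
        return ReadOnlyProperty { _, _ -> value }
    }

    // Advance to the next combination; false once every combination is done.
    // Exhausted trailing parameters are dropped, so later (dependent)
    // parameters get re-declared with fresh value sets on the next run.
    fun advance(): Boolean {
        cursor = 0
        while (counters.isNotEmpty()) {
            if (counters.last() + 1 < sizes.last()) {
                counters[counters.lastIndex]++
                return true
            }
            counters.removeAt(counters.lastIndex)
            sizes.removeAt(sizes.lastIndex)
        }
        return false
    }
}

fun parameterize(block: ParamScope.() -> Unit) {
    val scope = ParamScope()
    do { scope.block() } while (scope.advance())
}

fun main() {
    val seen = mutableListOf<String>()
    parameterize {
        val x by parameterOf(0, 1)
        val y by parameterOf(x, x + 10)  // y depends on x
        seen += "x=$x y=$y"
    }
    println(seen)  // [x=0 y=0, x=0 y=10, x=1 y=1, x=1 y=11]
}
```

Because each re-run re-evaluates the block from the top, later parameters can freely depend on earlier ones: their value sets are simply recomputed on every run.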
❤️ 1
b
@Youssef Shoaib [MOD] Sounds like you're exactly where my head's at. I have an issue open speculating about using multi-shot coroutines to implement this, but there might be problems with breaking assumptions that Kotlin makes: https://github.com/BenWoodworth/Parameterize/issues/34
Currently though, `parameterize` is re-running the block, having `provideDelegate` return different values each time, while providing a way to avoid unnecessarily re-running code with a lazy `parameter {}`. And the library itself is designed to work for more than just testing, so it might be worth a look!
j
Really interesting, thanks for sharing @CLOVIS!
y
@Ben Woodworth I'll continue discussing in that issue then to not clutter up this thread, but I have ways to address a lot of the problems you bring up there!
👀 2
l
We found that parameterized tests are harder to debug from the IDE, because if a single parameter set fails, you cannot click on the failing test to open it. You have to dynamically build a good name for the test to be able to find it, but with many parameters it still takes time to find which one was failing. So in complex cases we prefer to have separate tests that call a private test function. Something like:
private fun runTheTest(param1, param2, ...) {}

@Test fun test1() = runTheTest(..., ...)
@Test fun test2() = runTheTest(..., ...)
This way intellij helps you identify the failing case. (this in simple cases, parameterized tests may still be relevant)
c
in the failgood example you can just rerun the test from the test tree view:
l
I meant that in order to fix the test, IntelliJ won't jump to the parameter set that caused the test to fail. At least that's usually the case with test frameworks (tested JUnit and Kotest). Prepared indicates:
> IntelliJ doesn't know which lines are tests or not, so it cannot display the small green triangle to select which tests to execute.
So I guess it is the same (yet I didn't know we could rerun like this 🙏 )
c
for failgood I wrote an IDEA plugin that works pretty well for letting IDEA know which line is a test
K 1
also you can jump to the source of the test from the test result tree, and then you can rerun or debug the test with the parameters that failed also from the test result tree.
c
> At least that's usually the case with test frameworks (tested JUnit and Kotest) […] Prepared
In the case of Kotest and Prepared, parameterized tests are declared in lambdas, so they should appear in the stacktrace similarly to your example with a private function.
o
This looks somewhat similar to what has been proposed for Kotest: https://github.com/kotest/kotest/pull/4258#issuecomment-2286624246
Regarding the Prepared implementation: if you're defining parameters in `suite`, these would be executed even if none of the tests in the suite were active (due to conditions, filtering), right? If so, it would slow down running a subset of tests if parameter generation is expensive. Also, `suite`s would not accept suspending functions, right? All of that might matter or not, depending on the context.
c
> If so, it would slow down running a subset of tests if parameter generation is expensive.
Yes, though I haven't seen a case where parameter generation is expensive. Unlike Kotest, which allows suspension in `context`, Prepared doesn't allow `suspend` in `suite`. Instead, complex operations are encapsulated as values:
val users by parameterOf("abcd", "efgh")
    .prepare { Database.loadUser(it) }
The operation is only executed inside the test proper, so if the test is disabled, it's not executed at all. In general, Prepared isn't aware of tests being enabled or not; it just declares the tests to the underlying runner as-is. That's why the `!` syntax works with Kotest, etc.
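As a rough illustration of that "only executed inside the test proper" behavior, here is a tiny cache-based sketch. The names (`TestEnv`, `cached`, `resolveIn`) are hypothetical, not Prepared's real `TestEnvironment`/`Cache` implementation:

```kotlin
// Hypothetical sketch: a value declared up front, but computed lazily
// and at most once per test environment.
class TestEnv {
    private val cache = mutableMapOf<String, Any?>()

    fun <T> cached(key: String, compute: () -> T): T {
        @Suppress("UNCHECKED_CAST")
        return cache.getOrPut(key) { compute() } as T
    }
}

class Prepared<T>(private val name: String, private val compute: () -> T) {
    // Reading the value inside a test computes it once, then reuses the result.
    fun resolveIn(env: TestEnv): T = env.cached(name, compute)
}

fun main() {
    var computations = 0
    val user = Prepared("user") { computations++; "loaded-user" }

    val env = TestEnv()  // one fresh environment per test
    check(user.resolveIn(env) == "loaded-user")
    check(user.resolveIn(env) == "loaded-user")
    println("computed $computations time(s)")  // computed 1 time(s)
}
```

If a disabled test never reads the value, the computation never runs at all, which is the point of encapsulating expensive work as values rather than running it while declaring the suite.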
> This looks somewhat similar to what has been proposed for Kotest
Not surprising! @Ben Woodworth is the origin of it, here. He later created a library specifically for this, #C06083PAKEK. I had a similar idea in mind for a long time but couldn't get it to work, so Prepared just adds an integration with his library.
o
I like your approach. I've been experimenting with a simplified test framework to explore whether there could be something with Kotest-like power, but fewer moving parts internally, a small API surface and easier extensibility. (So you'd add timeout, retry functionality, etc. the way you need it via plain Kotlin rather than trying to figure out what the framework's hard-wired internals think.) I came up with a prototype: https://github.com/OliverO2/kotlin-test-framework-prototype, which runs tests across platforms in under 900 lines of code. It contains preliminary fixture support, but I think your fixture approach is better and easier to integrate with the JS infra.
c
Oh, that's very interesting 👀
At the very base, the only thing needed to run Prepared tests is the ability to call one of these two functions: https://gitlab.com/opensavvy/groundwork/prepared/-/blob/main/suite/src/commonMain/kotlin/RunTest.kt?ref_type=heads They are responsible for initializing everything else. Actually, Prepared has a very small base: everything is built on top of https://gitlab.com/opensavvy/groundwork/prepared/-/blob/main/suite/src/commonMain/kotlin/TestEnvironment.kt?ref_type=heads
class TestEnvironment internal constructor(
	val testName: String,
	val coroutineScope: TestScope, // from kotlinx.coroutines
) {

	internal val cache = Cache()
	internal val finalizers = Finalizers()
}
`Cache` is used to implement `prepared`, and everything else builds on top of that
Prepared is built to run on top of any other testing framework and pass its configuration along to it; you can see the Kotest integration there: https://gitlab.com/opensavvy/groundwork/prepared/-/blob/main/runners/runner-kotest/src/commonMain/kotlin/PreparedSuite.kt?ref_type=heads
o
Yes, I have briefly looked into that. We could try to plug in my stuff and make Prepared independent of external test runners.
c
Do your suites `suspend`? If so, I'll need the same flattening tricks I use with the Kotest plugin…
o
No, they don't. It just doesn't make much sense anyway, as you don't want to run potentially expensive stuff outside of tests (other than lazy initializations requested by tests).
👍 1
I think it's actually easy to have a small, fast (and maintainable!) multiplatform runner. Platform integration requires some thinking and careful restrictions, in particular because of the limitations baked into the Kotlin Gradle plugin. You have different platform expectations, e.g. top-down and bottom-up (JS) execution. KGP does not fully translate API calls (JS frameworks have things like async `beforeEach`, but KGP swallows that). Also, KGP is opinionated about what it considers legitimate test reporting (e.g. nested suites are possible with TeamCity reports, which I've tried, but not with the crippled TC report variant used inside kotlin-test).
c
Yeah, my initial reason to have non-suspending scopes is that I imagine a far future when I have a lightweight runner that can:
• execute the scopes without executing the tests
• thus accumulate the actual, real list of all test declarations
• capture each of their stacktraces
• report those to the IDE
This way, the IDE would be able to put a green 'run test' button anywhere a test is actually declared, even if it's declared by a library, etc.
but I don't know how to write IDE plugins and I have way too much work, so that idea is sitting there until someone has the time to do it
Would it make sense to describe your project as a "lightweight test runner responsible for communicating the existence of tests to each platform"? If so, Prepared is "a test DSL that builds upon a few primitives, such as 'a test exists', to provide all useful test declaration APIs". Then it still needs an assertion library to make the entire thing into a proper framework, but there are plenty of those (including Kotest's), and I like that there is a very strong distinction of responsibilities
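The "execute the scopes without executing the tests" idea can be sketched in a few lines. The names below (`DiscoveryScope`, `discover`) are hypothetical, not Prepared's or the prototype's actual API; the point is that non-suspending declaration blocks can run eagerly while test bodies are merely recorded:

```kotlin
// Hypothetical sketch: a suite scope that only records test declarations.
// Test bodies are stored but never executed, so discovery stays cheap.
class DiscoveryScope(private val prefix: String = "") {
    val declared = mutableListOf<String>()

    fun test(name: String, body: () -> Unit) {
        declared += prefix + name  // record the declaration, don't run the body
    }

    fun suite(name: String, block: DiscoveryScope.() -> Unit) {
        val child = DiscoveryScope("$prefix$name > ")
        child.block()
        declared += child.declared
    }
}

fun discover(block: DiscoveryScope.() -> Unit): List<String> =
    DiscoveryScope().apply(block).declared

fun main() {
    val names = discover {
        suite("String#reverse") {
            test("reverses otto") { error("never executed") }
            test("reverses racecar") { error("never executed") }
        }
    }
    println(names)  // [String#reverse > reverses otto, String#reverse > reverses racecar]
}
```

A runner (or IDE plugin) could capture a stacktrace inside `test(...)` during such a discovery pass to map each declaration back to its source line, even when tests are declared dynamically or by a library.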
o
Executing scopes without running tests is currently the norm. JUnit Platform and others already require this as "test discovery". However, using that inside an IDE plugin might be too expensive to perform well enough. I guess we'd have to stick with heuristics, like using static analysis (the PSI tree) to guess "OK, that looks like a test, let's put a gutter icon here". Which can be a bit tricky if tests are created very dynamically.

I haven't tried diving into IDE plugin development, and it seems it's hard, mainly due to lack of documentation, and it potentially has to be done twice (Fleet). The same goes for compiler plugins (which I've done, but it's always a bit risky as APIs change without notice). So long-term maintenance is an issue, as people expect almost instant availability of tool updates when new versions of the IDE or Kotlin arrive.

I'd prefer avoiding the term runner. Firstly, I'm not a fan of "manager"-like classes. I'd differentiate between a test framework (which would provide an API to describe tests, and then run them on all platforms) and an assertion library. Secondly, my stuff actually runs tests, and where it can, it does so completely (e.g. on Native and with Wasm/Node), not using any lower-level library or framework. I'm delegating to JUnit Platform on the JVM mainly because that's an interface IntelliJ supports, so we can re-run single test classes from the test reports window. How IntelliJ interprets test reports (even its own log format) is largely undocumented and incomplete. That's where plugins are required for a good DX when running tests. I'm delegating to the KGP/JS test infra mainly for browser support and KGP's expectations. However, the JS callback stuff just does not meet coroutine and other block nesting requirements very well. It would be great if it weren't so complicated and there were just an API calling some `suspend fun runTests()` on each platform, plus a standard reporting format supporting parallel test runs and arbitrary nesting.

So I think a framework is better off if it does not just describe tests but also runs them according to its own needs, not those of other frameworks. Assertions are a different thing, as long as they stick to established protocols. I like Kotest's fluent assertion style more than boilerplate `assert...` invocations. What I don't like as much is that Kotest deviates from standard exception handling, as it requires its matchers to do special stuff when throwing, to support `withClue` for example.

I'd have no problem providing a compiler plugin for test discovery, as that's just a backend IR plugin where things are more stable. But for significant adoption I guess an IDE plugin would be required, and my priorities do not seem to leave sufficient room for that.
c
@Oliver.O I’m trying to build your prototype but it seems to be missing a repo definition:
Plugin [id: 'io.kotest.multiplatform', version: '6.0.0-20240905.065253-61'] was not found in any of the following sources:
c
@christophsturm the snapshot -61 doesn't exist anymore. Kotest snapshots only exist for ~1 week. You can see the versions that exist here: https://s01.oss.sonatype.org/content/repositories/snapshots/io/kotest/multiplatform/io.kotest.multiplatform.gradle.plugin/6.0.0-SNAPSHOT/
c
thanks! I just changed it to -SNAPSHOT