# kotest
d
This is a very nice feature in jqwik: https://jqwik.net/docs/current/user-guide.html#collecting-and-reporting-statistics. Also the way assumptions return an error if too many possibilities were skipped: https://jqwik.net/docs/current/user-guide.html#assumptions. In general, having some stats printed like that would add tons of value to property testing in kotest... it would make it much easier to write tests.
j
Yeah, they have similar stuff in eqc and proper in the Erlang world. I assume jqwik also has the possibility to attach weights to the different actions (i.e. if one action gets executed too infrequently due to preconditions)?
Having only looked at it superficially, jqwik appears pretty full-featured and competent. But I really don't like the annotations. πŸ˜„
d
I'm not really familiar with jqwik, I was just looking through the docs after you mentioned it, and I found some features that would be GREAT in kotest... but it's not just the annotations I don't like... it's just not Kotlinish... and the issue for Kotlin support has been open for a few months with no love...
The fact that they require using their runner for the tests is a big πŸ‘ŽπŸΌ for me. I like Kotest for a bunch of reasons and wouldn't abandon it too quickly
There seems to be classify() and checkCoverage() for assumptions already πŸ€”
But I wonder if it can print that out without the coverage check...
s
What are you looking for in reporting that classify doesn't already do?
d
Say this example from jqwik:
@Property
void simpleStats(@ForAll RoundingMode mode) {
    Statistics.collect(mode);
}
will create output similar to this:

[MyTest:simpleStats] (1000) statistics = 
    FLOOR       (158) : 16 %
    HALF_EVEN   (135) : 14 %
    DOWN        (126) : 13 %
    UP          (120) : 12 %
    HALF_UP     (118) : 12 %
    CEILING     (117) : 12 %
    UNNECESSARY (117) : 12 %
    HALF_DOWN   (109) : 11 %
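For reference, a report like that is easy to reproduce in plain Kotlin. This is a stand-alone sketch, not jqwik's actual implementation; distributionReport and the sample data are hypothetical names, with the samples standing in for values collected during a property run:

```kotlin
import java.math.RoundingMode
import kotlin.math.roundToInt

// Build a jqwik-style distribution report from a list of collected values.
fun <T> distributionReport(samples: List<T>): List<String> {
    val total = samples.size.toDouble()
    return samples.groupingBy { it }.eachCount()
        .entries
        .sortedByDescending { it.value }
        .map { (value, count) ->
            val percent = (count / total * 100.0).roundToInt()
            "$value ($count) : $percent %"
        }
}

fun main() {
    // Hypothetical sample data standing in for generated values.
    val samples = listOf(
        RoundingMode.FLOOR, RoundingMode.FLOOR,
        RoundingMode.UP, RoundingMode.HALF_EVEN
    )
    // Prints one line per value, most frequent first, like the jqwik output above.
    distributionReport(samples).forEach(::println)
}
```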
s
So you're looking for automatic labelling
I assume that only works on a couple of types
d
It gives a nice report, not only when checking coverage... and automatic labelling is a plus...
s
Easy enough to do for numbers and enums
d
That way, when making the tests, one can ensure that the distribution is enough to cover all cases
Automatic labelling would make it even easier...
s
I'm not sure I see the benefit. Is it to check that the platform's random is actually random?
Also kotest has this great feature of edge cases that other prop test libraries don't have
We can add it though
d
> Is it to check that the platform's random is actually random?
No, rather to make sure that filtering and all the other steps produce a reasonable number of variants. The programmer's errors, not the framework's. And edge cases need to be contrived, and when you don't know whether they're already sufficiently covered by the random ones, that's not so easy to do in more complicated setups.
s
Ok
I'll put it in the roadmap
d
Thanks πŸ˜‰, I looked up alternatives, and you're really doing great work with Kotest! I hope all these requests are OK. I guess I could do it on my end using the PropertyContext
I just supposed others could benefit from this... I would make a PR, but I'm not familiar with MPP and how you would like this to be done.
βž• 1
I guess the easiest way to do it would be something like:
fun PropertyContext.printResult() {
    val attempts = attempts().toDouble()

    classifications().forEach { (label, occurrences) ->
        val percent = (occurrences.toDouble() / attempts) * 100.0
        println("$label ($occurrences): ${percent.roundToInt()} %")
    }
}
s
The more requests the better. Keep em coming
πŸ‘πŸΌ 2
d
And then chain it onto the property: checkAll {}.printResult()
And classifying enums:
inline fun <reified T : Enum<T>> PropertyContext.classify(label: T) {
    classify(label.name)
}
... the wonders of kotlin extension functions 😁. I made my own classifiers too for domain specific uses, this might be nice to document!
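As a self-contained illustration of that trick, here's a sketch with a hypothetical LabelContext standing in for kotest's PropertyContext (so it runs without the library); the reified extension just forwards the enum constant's name as the label:

```kotlin
enum class Rounding { FLOOR, CEILING }

// Hypothetical stand-in for kotest's PropertyContext label counting.
class LabelContext {
    val counts = mutableMapOf<String, Int>()
    fun classify(label: String) {
        counts.merge(label, 1, Int::plus)
    }
}

// The reified extension from above, declared against the stand-in context.
inline fun <reified T : Enum<T>> LabelContext.classify(label: T) = classify(label.name)

fun main() {
    val ctx = LabelContext()
    ctx.classify(Rounding.FLOOR)
    ctx.classify(Rounding.FLOOR)
    ctx.classify(Rounding.CEILING)
    println(ctx.counts) // prints {FLOOR=2, CEILING=1}
}
```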
πŸ‘πŸ» 1
d
Very nice πŸ‘ŒπŸΌ!
βž• 1