# kotest
d
I wonder if there are any plans in the near future to allow given/and/when/and/then/and tests at the same nesting level... I think this was discussed before as not being easy, but it keeps eating at me how much nicer the tests could look if it were possible...
s
Example?
d
Like:
```kotlin
given {

}.and {

}.when {

}.and {
...
```
The chaining could give the framework a hint about which level we're at and whether it's a leaf test or a container...
s
And would these nest in the output like currently, or be considered sibling tests?
d
Well, the latter would be better (but a bit hacky...), though the former would still avoid the nested test code, which is already a big improvement... it might not always make sense in the output though...
s
The latter is easy to do without the dot. You could just list tests vertically
d
I wonder how Spock does it? I never used it, but it has it all on one level.
s
Spock is able to use Groovy's dynamic nature for these tricks
d
But that would mess up the class setup (to list them vertically...)
s
But then it means using Groovy 😉
Would it?
Is it much different to what you pasted?
d
Well, what I pasted could potentially save some kind of fixture object that is passed through the chain, which would allow the setup of the test.
Class-level objects could always be used, but they're a bit more dangerous; we'd need to ensure that they're always reset.
s
ah ok
d
Minutest did it that way, but they don't have all the features that kotest has... https://github.com/dmcg/minutest#structure-tests
s
I was thinking something like:
```kotlin
val fixture = ...

given("qweqwe") {
}
when("qeqw") {
}
then("werewr") {
}
```
d
That puts the fixture outside the given... especially when it's immutable, the given block might end up being only half the setup.
There might even be a provided class in the when context for setting a `result`, and a provided one in the given for setting some fixture objects...
```kotlin
given("qweqwe") { // 'this' is a GivenContext with a var for fixtures
    fixture[1] = RepoStub()
    fixture[0] = SutClass(fixture[1])
}
when("qeqw") { // 'this' is a WhenContext with the fixtures in GivenContext and a result var
    result = fixture[0].methodUnderTest()
}
then("werewr") { // 'this' has all the above
    result shouldBe fixture[1].stubValue
}
```
Maybe something like this? `fixture[0]` and `fixture[1]` might not be so nice, though, but there might be better ways.
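For illustration only, something typed could maybe avoid the indices entirely. None of this exists in kotest and all the names are made up; `whenever` stands in for `when` since that's a keyword in Kotlin:
```kotlin
// purely hypothetical sketch, not a kotest API: typed builders that pass a
// fixture and a result down the chain instead of fixture[0]/fixture[1];
// real registration with a test engine is omitted
class GivenBuilder<F>(private val fixture: F) {
    fun <R> whenever(name: String, body: (F) -> R): WhenBuilder<F, R> =
        WhenBuilder(fixture, body(fixture))
}

class WhenBuilder<F, R>(private val fixture: F, private val result: R) {
    fun then(name: String, body: (F, R) -> Unit) = body(fixture, result)
}

fun <F> given(name: String, body: () -> F): GivenBuilder<F> = GivenBuilder(body())

fun main() {
    given("a list with one element") { mutableListOf(1) }
        .whenever("an element is added") { list -> list.add(2) }
        .then("the list has two elements") { list, _ -> check(list.size == 2) }
}
```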
s
We could probably pass it through like you want.
Can you create a ticket and I'll experiment?
d
Sure, thanks!
s
Great thanks
c
can you show some examples of tests that use this syntax and show why it's better?
d
I have tests with complicated situations and interactions (more like legacy code that I need to test and refactor), and having all the setup description on one `given` line or a bunch of nested `and`s gets pretty messy. This proposition wouldn't be a replacement for the current BehaviourSpec, but rather an optional addition to it... @christophsturm
I guess it's less for code built with SRP in mind...
c
ah that's interesting. I'm thinking a lot about what the perfect testing syntax looks like, but I guess I have only been thinking of tests for well-structured code.
s
It gets tricky when a project grows, because everyone has their own preferred syntax and it's a balancing act of keeping everyone happy vs. a kitchen sink of features.
d
Yup, I was hesitant to propose this 😊... but when I found myself thinking about it on a bunch of occasions, I wondered if others might also need such a thing. Legacy spaghetti code can lead to a complex test suite as it is... (until slowly refactored)
But I wouldn't mind if there was a better idea for this... I'm personally conflicted about whether everything should be spelled out in the test descriptions or some things can be left to the test code to show...
c
one problem is that if you have all those features to support your old legacy code, people will not know which features are OK to use for new code
so they will keep writing overly complicated tests even for new features
d
It's not many features... it's just a certain process that too many components were handling differently, and that currently has too many possible states that have to be handled...
s
I'm not sure Dave's proposal is even possible to support, because a test in kotest is a function `TestContext -> Unit`. You need to pass the test context into the "next" builder, but you can only do that when you're inside the lambda, so chaining these calls might not be possible.
But I will explore it.
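Roughly the shape of the problem, as a simplified sketch (this is not the actual engine code):
```kotlin
// simplified sketch, not kotest internals: a test body only receives its
// TestContext when the engine executes it
class TestContext

fun given(name: String, body: suspend TestContext.() -> Unit) {
    // registration time: we only store `body`; no TestContext exists yet
}

// a chained `given { }.and { }` would need `given` to return something that
// can hand the context to the next block, but the context isn't created
// until execution time, after registration has already finished
```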
c
ah sorry, I was not really talking about your proposal, but in general. For me the perfect test runner has few features and just one testing style.
s
Just wait until someone asks for a feature in failfast that you don't need. Do you add it to expand the functionality, or do you say no ๐Ÿ™‚
๐Ÿ˜Š 2
c
nobody uses it anyway so that's not going to happen 🙂
s
well you never know
d
Yeah, I guess kotest already has quite a few things that you don't personally ever use @sam... ๐Ÿ˜…
s
absolutely, if I had my way I'd delete BehaviorSpec, I think it's awful 😂
d
So what do you use?
s
I only ever use FunSpec
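Everything just looks like this (a minimal example):
```kotlin
import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.shouldBe

// a minimal FunSpec test: flat test functions, no nesting required
class MyTests : FunSpec({
    test("1 + 1 should be 2") {
        (1 + 1) shouldBe 2
    }
})
```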
d
Wow, not even FreeSpec?
s
nope
c
I didn't know that you know about failfast, Sam, did you look at it?
s
I did
c
I did not want to advertise it here, and also I want to make sure it works for all my use cases before advertising it more
s
Feel free to advertise it here, I'm not precious about it. If it offers some functionality for people that kotest doesn't, then great, that helps kotlin in general.
c
it sure must look like I have severe NIH syndrome. I started writing an ORM, then I thought the tests didn't run fast enough, so I started writing a test runner. And this weekend I started writing a REST web server because I was frustrated with Ktor, and http4k does not support coroutines.
s
http4k not supporting coroutines is incredibly frustrating. I would use it tomorrow if it had coroutine support, but they seem to think it doesn't matter (if you follow the threads, no pun intended). I think that's a mistake personally.
c
hehe
s
It seems easy to make a test framework, and when it's focused on your own use cases it probably is easy. There are some "bugs" that people bring up time after time that are out of kotest's control. The classic one about parent tests being included in test counts - that's just how Gradle works. And Gradle has bugs in its testing implementation that have remained open for years.
c
in my web server, handlers look like this:
```kotlin
class UserService : RestService {
    suspend fun create(user: User): User {
        delay(1)
        return user.copy(id = "userId")
    }

    suspend fun show(userId: Int): User {
        delay(1)
        return User(id = userId.toString(), name = "User $userId")
    }
}
```
that fits 99% of the code that I write and makes it really easy to test
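for example, a test can call the handler directly, no server needed (a sketch in kotest syntax, reusing the classes above):
```kotlin
import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.shouldBe

// sketch: the handler is a plain suspend function, and kotest test bodies
// support suspension, so the test just calls it directly
class UserServiceTest : FunSpec({
    test("create assigns an id") {
        val created = UserService().create(User(id = "", name = "Jane"))
        created.id shouldBe "userId"
    }
})
```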
in failfast I now support the JUnit engine for IDEA support, but in Gradle I still only run the tests as a main method.
s
The problem with NIH is that it's not so bad doing the simple stuff, but when you need some rock-solid handling of URL encoding, or WebSockets, or whatever, you're really going to spend a lot of time redoing what has been done before.
c
hmm yeah, I did not really write it from scratch; I use Undertow, like I did in my first Kotlin web projects
s
Undertow is good, I built some Scala stuff on it ages ago
c
it seems that it does not have much momentum; all the Red Hat server-side stuff seems to focus on Quarkus now
but it still gets security updates and has no bugs that bother me
so I thought I'd start with something I know
s
what didn't you like about Ktor?
c
error handling. Sometimes things just don't work and the error message is not useful, and trying to debug it is very complicated.
s
yeah for sure the phase / install parts of it are odd
c
it does not feel robust at all; only the happy path is really tested
s
I know we had an issue upgrading from 1.4.2 to 1.5.0 which broke the CORS support
had to roll back
c
when you send an authorization header to it that is misspelled, you get an internal server error with a huge stack trace that says nothing about the problem. Or if you send JSON that does not have the expected format. Those are both use cases that happen really often.
I want small libs with good test coverage
s
tested with failfast of course ๐Ÿ™‚
c
or kotest!
๐Ÿ˜‚ 1
so what did you think of failfast?
s
very nicely implemented
I liked configuring which tests to run programmatically. That sits nicely with my functional programming mindset.
I didn't delve deeply enough to look at the autotest stuff, but I like that idea. SBT in Scala-land does a good job of that and I quite liked it there.
c
oh nice thank you
the autotest stuff does not yet work as well as it should, but I did find out that you can find dependent classes by reading just the beginning of the class file (the constant pool)
I thought that it would be enough to just rerun updated class files, because Kotlin would recompile tests that use a class when the class changes. But Kotlin is too clever for that and recompiles only when the interface changes
s
look at classgraph
c
yeah ClassGraph can do it, and it seems to be really fast.
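something like this, I guess (a sketch assuming ClassGraph's inter-class dependency support; the package filter is made up):
```kotlin
import io.github.classgraph.ClassGraph

// sketch: list every scanned class that depends on the given class
fun dependentsOf(target: String): List<String> =
    ClassGraph()
        .enableInterClassDependencies()
        .acceptPackages("com.example") // hypothetical package filter
        .scan()
        .use { result ->
            result.allClasses
                .filter { c -> c.classDependencies.any { it.name == target } }
                .map { it.name }
        }
```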
s
yeah it can scan quickly
c
I just try to keep the code that runs single-threaded really small and with no dependencies
as soon as I have found a single test to run, it's much more relaxed, but I want all cores working from the beginning
s
when you get it working flawlessly, do a PR to kotest ๐Ÿ™‚
c
where would autotest fit best into the kotest architecture?
s
would have to make a gradle task that spins up and keeps running I guess
that's how SBT does it
c
ah right, so the IDEA plugin and the Gradle plugin could support it
s
yeah
a kotest Gradle plugin that doesn't need JUnit at all
c
I think if it just works in IDEA, and works perfectly there, that would be enough
s
yeah probably
then maybe it could be added to the intellij plugin for kotest
c
I started working on an IDEA plugin for failfast, but there seems to be zero documentation
the best documentation for creating a test runner plugin seems to be the kotest plugin
s
lol
probably is yes
I spent many a night poring over that
You are putting a lot of effort into recreating the entire kotest ecosystem ๐Ÿ™‚
c
it sure looks that way. I just wanted to iterate fast and build exactly the syntax that I wanted, so starting from scratch seemed to be the best solution
s
You will have learned a lot about how junit platform, gradle, etc works while doing that, so feel free to hop in and contribute to kotest in any way you want.
c
I did find a kotest bug that I did not report yet: it seems the pitest plugin has a dependency on pitest, which leads to using an older pitest version, and you cannot override it in Gradle.
s
seems more like a Gradle bug, Gradle should pick whatever version is latest
I have seen this behavior in Gradle quite a bit though - the `implementation` thing doesn't seem super robust
It might be related to being a multiplatform project
c
I think the dependency should just be `compileOnly`
s
I need to move pitest to its own repo soon, so I will do it then, before the 4.5 release
c
lol sorry, I should have just created a ticket.
s
that's fine, took me 2 seconds
I wonder, if you took the KotestEngineLauncher and programmatically registered tests like in failfast, whether it would run the tests fast enough for you
```kotlin
KotestEngineLauncher()
    .withSpec(DummySpec1::class)
    .withSpec(DummySpec2::class)
    .launch()
```
c
I was thinking of a queue-based design where discovery starts putting tests into the queue, and execution starts as soon as there is one test in the queue
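roughly like this (a sketch with coroutine channels; all the names are made up):
```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

// sketch: discovery feeds a channel while workers drain it, so execution
// starts as soon as the first test is discovered
data class DiscoveredTest(val name: String, val body: suspend () -> Unit)

suspend fun runPipelined(discover: suspend (Channel<DiscoveredTest>) -> Unit) = coroutineScope {
    val queue = Channel<DiscoveredTest>(Channel.UNLIMITED)
    launch {
        discover(queue) // discovery keeps adding tests...
        queue.close()
    }
    repeat(Runtime.getRuntime().availableProcessors()) {
        launch(Dispatchers.Default) {
            for (test in queue) test.body() // ...while workers already execute them
        }
    }
}
```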
s
might save you a second, depends how slow discovery is.
c
the motto of failfast is "with enough cores any test suite can run in one second"
s
lol well Kotest has 4000+ tests
good luck running that in < 1 second
c
yeah it's more theoretical, and it only works if you have a lot of short tests.
my ORM test suite breaks that expectation and takes 10 seconds
s
right
there are some tests in kotest that are testing `Thread.sleep`s, so they'll never run fast
c
why do they do that?
coroutines are really great for multi-threaded test running; if a test needs a dependency that takes some time to spin up, I do it on a separate IO scheduler to make the CPU core available to a different test.
s
to test that concurrency works for both suspension and blocking
saying "you should be using coroutines" is not adequate for people who can't / don't use suspension
c
sure
s
so for things like the timeout functionality, you want to test that a timeout works for both blocking and suspension
```kotlin
"Test".config(timeout = 300.milliseconds) {
  Thread.sleep(100)
}
```
c
you will always have a smoke test suite that is slower.
s
```kotlin
"Test 2".config(timeout = 300.milliseconds) {
  Thread.sleep(500) // should fail
}
```
c
hmm, tests like that only work single-threaded, or the first test will also fail if you have too much load
s
yes, so you need to put in a bit of "wiggle" room
that's why I don't do:
```kotlin
"Test".config(timeout = 5.milliseconds) {
  Thread.sleep(4)
}
```