# kotest
b
Are there any reasons why nested JS is just ignored? For example this PR: https://github.com/kotest/kotest/pull/3913, and it's not the first one. To be honest, I think it's a bit impolite towards the community to simply ignore PRs.
l
I'm sorry you feel that way. The team isn't ignoring pull requests; we usually just don't have the capacity to thoroughly check and release the feature.
It's nothing against JS nested tests, and definitely nothing against people sending pull requests to improve the framework.
It's more that the maintainers have day jobs too and can't handle complicated demands quickly.
I suggest you add it as part of the 'required' 6.0.0 changes (https://github.com/kotest/kotest/issues/3918) if you feel this is of uber-importance.
I'm sorry I can't give you more answers. I'm trying to be as transparent as I can. I understand the feeling of having a contribution 'ignored', and I promise there's more to it than that in this case.
🙏 1
o
Actually, there is still progress on this PR with some integration work and verification outstanding: https://kotlinlang.slack.com/archives/CT0G9SD7Z/p1710272074371939
e
Given that @Adam S asked Charles to review the PR, I think it might be ready for review now actually
Personally I don't work much with JS so it feels hard for me to do proper reviews
o
Unfortunately, I did not see the review request you mentioned. As I understand it, this one still needs to be solved (see thread above):
> How can I test if my changes have affected Wasm? When I run `gradle wasmJsTest` it doesn't trigger anything...

The problem here is that testing with `kotest-tests-js` is not integrated with the current local Kotest build but uses a repository version instead. Many folks (me included) would like to see nested tests working on JS, but failed attempts at getting this done properly and lots of discussions spread across PRs and issues suggest that it's not easily done.
a
thanks for the nudge, I've just hacked around with the `kotest-tests-js` subproject and tested the latest version, and it works as expected! (Which is still rough, but it's better than what we have now.)
> Personally I don't work much with JS so it feels hard for me to do proper reviews

Same for me tbh! It really doesn't help that there isn't good test coverage for Kotest JS. Or at least, that's my impression, given how my changes didn't break anything, which I think is unusual!
But yes, it's ready for a review. It needs some tidying, but that's just leftover TODO notes and the like.
o
OK, cool, I'll try to experiment with it. I am aware of how complex the Kotlin test infra behind all this is, so I want to understand how this actually works without top-level Promises on JS and also check edge cases like approaching browser timeout limits. So give me at least a week or so.
🙏 1
a
thanks @Oliver.O!
👍 1
For quicker testing I've committed an independent project, so if you check out the branch you can cd into it to run the tests and/or open it in IntelliJ:
```
cd kotest-tests/kotest-tests-js-standalone
idea .
```
The source code is a copy/paste of the existing kotest-tests-js subproject, so I don't think it should be merged into master.
@benkuly I've updated the PR description with links to previous Slack discussions, so the PR looks less ignored :) Are you able to provide any insight on JS test frameworks? We were wondering in previous threads if there's a more direct way Kotest can integrate with JS, so it's not reliant on a framework.
👍 1
o
@Adam S First experimentation results are available in the PR. Also, I'm investigating a possible solution to the missing Node.js test reporting issue. Solving this would require a different approach to Mocha invocations (which would probably also solve the current timeout problems). I'm about to prepare a separate experimentation project which would allow JS test invocations with a bare bones setup (plain Kotlin, no kotlin-test or Kotest required). Such a setup is probably better suited to understand the effects and limitations of the Kotlin JS test infra. I hope to have it ready sometime tomorrow. So you might want to wait for that before continuing to work on the PR.
a
thanks for the update! I saw your comments and they made sense. The PR is messy, so there's lots to clean up
I was confused by the comment about updating the .api file, though. I did run the apiDump task, but it didn't change any files, because I didn't touch any JVM files.
o
Maybe these were caused by merging the main branch. Also, the problems with timeouts in `kotest-tests-js` existed before your PR's changes. So it was already somewhat messy before. No worries. 😉
🆒 1
a
btw, I was thinking about another approach, and I wonder what you think. Kotest doesn't have to immediately call the JS Framework functions. Kotest could run all of the contexts/tests first, gather the results, and when they're all finished, then Kotest could call all of the describe/it functions to output the results. That way, there's no need for doing any async work in the describe blocks. The downside would be that the test durations don't get reported accurately.
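A minimal sketch of that run-first, report-later idea, assuming Mocha-style globals; the result types and `replay` function are illustrative, not Kotest APIs:
```kotlin
// Execute the whole Kotest test tree first, then replay the collected results
// through Mocha's synchronous describe/it, so no async work happens in describe
// blocks. All names below are hypothetical and only show the shape of the idea.
external fun describe(name: String, fn: () -> Unit)
external fun it(name: String, fn: () -> Unit)

sealed interface ResultNode { val name: String }
class SuiteResult(override val name: String, val children: List<ResultNode>) : ResultNode
class TestResult(override val name: String, val failure: Throwable?) : ResultNode

fun replay(node: ResultNode): Unit = when (node) {
    is SuiteResult -> describe(node.name) { node.children.forEach(::replay) }
    // Rethrowing a recorded failure makes the JS framework report the test as failed.
    is TestResult -> it(node.name) { node.failure?.let { e -> throw e } }
}
```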
o
Not sure. Test progress reporting would be missing. And console output would probably not be associated with the individual tests. Let's just see. That would be one example which my experimentation project would help to evaluate.
👍 1
@Adam S Took a bit longer to fix the missing Node.js test reporting issue. As expected, this required a change to the JS test framework invocations. Also, I did a bit of additional exploration regarding the Kotlin/JS test infra and the underlying JS frameworks. Dealing with the JS layer was quite messy and time-consuming (when `this` is not `this`, and lots of duck typing obscures what actually gets used and how). Anyway, we now have this new PR which contains some reasoning on how we should use the Kotlin test infra APIs, and a bunch of comments pointing to references. Hopefully, it helps to understand things better and build solid solutions on top of that. Next on my list is the little project to explore nested tests directly without Kotest (scheduled for later today). For now, I'd appreciate it if you could look into the PR and tell me what you think.
🥇 2
JS test framework exploration project: https://github.com/OliverO2/kotlin-js-wasm-testing This is completely free of Kotest or kotlin-test dependencies and intended to allow quick testing of custom JS and Wasm/JS test runners.
🧙‍♂️ 1
a
Amazing! Thanks for the updates @Oliver.O. At first glance it looks really good. I'll take a closer look today or tomorrow.
❤️ 1
o
I've just added nested tests to the above exploration project. While I can get some reasonable-looking output, it does not integrate well with the existing Kotlin test infra (see the description in the README). Maybe I have missed something, so if someone would come up with better ideas, great! If not, it seems like supporting nested tests on JS platforms would require a completely new test infra, starting with a Gradle plugin, then a Karma replacement, ...
a
So, just to make sure I've understood it correctly, is this what Kotlin test does?
1. Kotlin injects `main()` and `startUnitTests()` functions, which it then uses to initialize and invoke the tests.
2. Kotlin Test mimics a JS test framework, and prints TeamCity formatted messages to stdout.
3. KGP updates the Gradle test tasks to intercept standard out.
4. 'Instruction' messages (like suite started/finished, test started/finished/failed/skipped) are forwarded to IJ (so IJ can pretty-render the tests).
5. 'Standard output' messages (like when a test has `println("x")`) are forwarded to stdout (so that when running via console or in CI, the test output is visible).

And then the plan is this?
1. Kotest also injects some functions for initializing and invoking tests.
2. Kotest will also log TeamCity messages to stdout.
3. And then KGP can continue to handle the TeamCity messages.

But the trick is that Kotest has to print out exactly the same TeamCity messages, otherwise the KGP test handler breaks.
If so, then here's a dump of the messages that Kotlin Test produces for the same example test
so Kotest just has to mimic those
These are the actual messages. So the problem is that the flowId changes from `karmaTC-197179531948718148` to `n1`.
o
I cannot claim that I've looked at every detail, but my understanding is that for JS and JS/Wasm targets:
- The Kotlin Gradle plugins (different ones for historical reasons, now just the Kotlin Multiplatform plugin) do this:
  - Set up the platform-specific test frameworks (Mocha for JS/browser, JS/Node.js, Wasm/browser).
  - For browsers: set up Karma to start up a test web server, start browser(s) and run the tests inside them.
  - Set up Node.js and browser targets (both main and test targets) to call `main`.
  - Set up Wasm test targets to call `startUnitTests`.
- `kotlin-test` provides the functions `suite` and `test`, as well as `startUnitTests` (Wasm only).
- The Kotlin compiler
  - finds the annotated unit tests and creates calls to the functions `suite` and `test`,
  - injects code into `main` and `startUnitTests` (Wasm only).
- TeamCity messages and other output can, in principle, share a common stdout channel to communicate with IntelliJ IDEA. For browsers, Karma has to make this happen. Mocha, as far as I am aware, has its own test output format. The Kotlin test infra seems to translate everything to TeamCity format (but I haven't looked into the specifics here). In the end, IntelliJ IDEA always understands the TeamCity format, which, in principle, also supports nesting.

A changing `flowId` means that a new test or suite has started, which is entirely OK if I understand the docs right. The problem in the above output is that a suite is started inside a test, which the format seemingly does not allow (tests must be leaves, only suites can nest). Unfortunately, the only way to get async support with Mocha is inside a test, and that's what you see above as the top-level test is used to enter the async world:
```
[org.jetbrains.kotlin.gradle.tasks.testing] [KOTLIN] TCSM: ##teamcity[testStarted name='nestable async (via JS/Mocha/transformed)' captureStandardOutput='true' flowId='karmaTC-197179531948718148']
```
Kotest already has integrated TeamCity reporting, so you could compare how this looks on the JVM. My impression is that the TeamCity format in general allows much more: for example, with parallel tests, several test start messages may appear consecutively.
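For reference, the `suite`/`test` entry points mentioned above form a very small surface. This is a paraphrase of kotlin-test's JS adapter interface (see `kotlin.test.FrameworkAdapter` in the kotlin-test sources for the authoritative version), through which the compiler-generated calls flow:
```kotlin
// Paraphrased from kotlin-test's JS sources; consult the actual
// kotlin.test.FrameworkAdapter declaration before relying on this.
interface FrameworkAdapter {
    // Called for each class/suite; suiteFn registers the contained tests.
    fun suite(name: String, ignored: Boolean, suiteFn: () -> Unit)

    // Called for each test; a testFn returning a Promise is treated as async.
    fun test(name: String, ignored: Boolean, testFn: () -> Any?)
}
```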
a
okay, cool, that makes sense, thanks 👍
I like the idea of just dumping TC messages and letting KGP handle them.
I suspect there's either an intentional feature or a bug in KGP's TC message handling for JS tests. Perhaps parallel test execution was never considered, because of the JS framework limitations.
o
Yes, that's my guess too: some TC stuff is interpreted inside the Kotlin/JS infra and limited to no parallelism and no nesting.
a
nesting is possible with Kotlin Test, but with some customization (see the attached kotlinTestNest.kt.cpp). But of course, no coroutines.
o
And the reporting would not show the suite nesting correctly, right?
a
well... I think it should be rendered in IntelliJ correctly. But I'm not sure why "container" isn't nested, and also there's an IJ bug https://youtrack.jetbrains.com/issue/KTIJ-27307/KJS-Native-IDE-jsNodeTest-in-multiplatform-shows-tests-twice-in-Test-Results
o
I haven't tested yet, but the double-reporting in IJ might be fixed in the current EAP version. And IJ can render nesting correctly for the JVM, right?
a
yeah, that's right
hmmm so... here are some options:
1. hack/fix karma-kotlin-reporter.js so the flowId can be overridden, and we can set a constant one and re-use it.
2. hack/fix TCServiceMessagesClient so it doesn't crash if the TC messages are out of order.
3. maybe there's a TC message that will 'trick' the message client into 'closing' the current flowId, allowing a new one to start?
4. test with KGP 2.0 - maybe the TC messages handler has changed.
I've pushed the demo with the Kotlin Test nested project to https://github.com/aSemy/kotlin-js-wasm-testing - to view the TC messages, run `gradle :with-kotlin-test:jsBrowserTest --debug` and filter for `##teamcity`
hmmm I don't think that TCServiceMessagesClient, or KGP, cares about the flowId. I think it will cache the first flowId it detects, and then re-use it.
o
Strange that there's only one `flowId`. It also looks buggy, with `%s` appearing as `flowId` and `duration` values.
a
mmm indeed
I think the flowId can be any string, so `%s` is a valid ID, but that's probably not intentional. I think it's a bug in the Kotlin JS lib, which uses varargs and string formatting.
o
Yes, that's also my guess.
a
I think the pipeline is kotlin-js-test-runner.js -> main() -> triggers tests -> kotlin-js-test-runner prints TC messages -> KGP intercepts the messages -> KGP converts the messages to IJ log format.
o
Sounds reasonable. But why convert something that IJ can already interpret as is? For me, that entire pipeline looks a bit too messy to touch. And it seems not much has changed for 2.0. I guess they would be open to receiving PRs correcting the situation, but I don't know what the plans are given the ageing stuff below. Karma and Mocha don't seem to add much value these days.
a
I don't think IJ can interpret TC messages; it needs the IJ log format XML
o
Really? Look at my `standaloneJsFlatTestFramework` on Node.js, disabling the transformation in FrameworkAdapter.kt#L28. It just outputs TC messages, as you can see with `gradlew jsNodeDevelopmentRun`. There is no middleman to convert it into anything else. And running it via `gradlew jsNodeTest` makes IJ report just fine.
a
okay, some progress: removing the flowId from the messages that TestReport creates means KGP picks up the 'parent' flowId
Hmmm... if you run `./gradlew jsNodeDevelopmentRun --debug` and search for `<ijLog><event type='onOutput'>`, do you see anything?
o
I'll try. (I hate these debug runs. 🤪)
a
haha yes, me too! But they're not so bad here, there's not that much output
I get an error with `jsNodeDevelopmentRun`...
```
/Users/dev/projects/external/kotlin-js-wasm-testing/src/jsMain/kotlin/KotlinJsTestFramework.js.kt:32
    js("describe(description, function () { this.timeout(0); suiteFn(); })")
        ^
ReferenceError: describe is not defined
```
o
That's when you did not select the standalone framework in Main.kt.
a
got it, thanks!
o
> Hmmm... if you run `./gradlew jsNodeDevelopmentRun --debug` and search for `<ijLog><event type='onOutput'>`, do you see anything?

Could not find `ijLog`. What I'm seeing:
```
2024-04-01T15:55:09.632+0200 [INFO] [org.gradle.process.internal.DefaultExecHandle] Starting process 'command '/home/oliver/.gradle/nodejs/node-v22.0.0-nightly2024010568c8472ed9-linux-x64/bin/node''. Working directory: /home/oliver/Repositories/experimental/Kotlin/kotlin-js-wasm-testing/build/js/packages/kotlin-js-wasm-testing Command: /home/oliver/.gradle/nodejs/node-v22.0.0-nightly2024010568c8472ed9-linux-x64/bin/node --require /home/oliver/Repositories/experimental/Kotlin/kotlin-js-wasm-testing/build/js/node_modules/source-map-support/register.js /home/oliver/Repositories/experimental/Kotlin/kotlin-js-wasm-testing/build/js/packages/kotlin-js-wasm-testing/kotlin/kotlin-js-wasm-testing.js
2024-04-01T15:55:09.632+0200 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Changing state to: STARTING
2024-04-01T15:55:09.644+0200 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Waiting until process started: command '/home/oliver/.gradle/nodejs/node-v22.0.0-nightly2024010568c8472ed9-linux-x64/bin/node'.
2024-04-01T15:55:09.671+0200 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Changing state to: STARTED
2024-04-01T15:55:09.671+0200 [DEBUG] [org.gradle.process.internal.ExecHandleRunner] waiting until streams are handled...
2024-04-01T15:55:09.671+0200 [INFO] [org.gradle.process.internal.DefaultExecHandle] Successfully started process 'command '/home/oliver/.gradle/nodejs/node-v22.0.0-nightly2024010568c8472ed9-linux-x64/bin/node''
2024-04-01T15:55:09.754+0200 [QUIET] [system.out] ##teamcity[testSuiteStarted name='nestable async (via JS/standalone/transformed)' flowId='s1']
2024-04-01T15:55:09.755+0200 [QUIET] [system.out] ##teamcity[testSuiteStarted name='nestable async (via JS/standalone/transformed)' flowId='s2']
2024-04-01T15:55:09.770+0200 [QUIET] [system.out] ##teamcity[testSuiteStarted name='container' flowId='n1']
2024-04-01T15:55:09.771+0200 [QUIET] [system.out] ##teamcity[testStarted name='should pass' captureStandardOutput='true' flowId='n2']
2024-04-01T15:55:10.778+0200 [QUIET] [system.out] ##teamcity[testFinished name='should pass' duration='2' flowId='n2']
2024-04-01T15:55:10.778+0200 [QUIET] [system.out] ##teamcity[testSuiteFinished name='container' flowId='n1']
2024-04-01T15:55:10.778+0200 [QUIET] [system.out] ##teamcity[testStarted name='should fail' captureStandardOutput='true' flowId='n3']
2024-04-01T15:55:12.784+0200 [QUIET] [system.out] ##teamcity[testFailed name='should fail' message='AssertionError: this is a failure' details='(details)' flowId='n3']
2024-04-01T15:55:12.784+0200 [QUIET] [system.out] ##teamcity[testFinished name='should fail' duration='2' flowId='n3']
2024-04-01T15:55:12.785+0200 [QUIET] [system.out] ##teamcity[testSuiteFinished name='nestable async (via JS/standalone/transformed)' flowId='s2']
2024-04-01T15:55:12.785+0200 [QUIET] [system.out] ##teamcity[testSuiteFinished name='nestable async (via JS/standalone/transformed)' flowId='s1']
2024-04-01T15:55:12.796+0200 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Changing state to: SUCCEEDED
2024-04-01T15:55:12.796+0200 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Process 'command '/home/oliver/.gradle/nodejs/node-v22.0.0-nightly2024010568c8472ed9-linux-x64/bin/node'' finished with exit value 0 (state: SUCCEEDED)
2024-04-01T15:55:12.797+0200 [LIFECYCLE] [org.gradle.internal.operations.DefaultBuildOperationRunner] 
2024-04-01T15:55:12.797+0200 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationRunner] Completing Build operation 'Execute exec for :jsNodeDevelopmentRun'
2024-04-01T15:55:12.797+0200 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationRunner] Completing Build operation 'Executing task ':jsNodeDevelopmentRun''
2024-04-01T15:55:12.724+0200 [LIFECYCLE] [class org.gradle.internal.buildevents.TaskExecutionLogger] 
2024-04-01T15:55:12.724+0200 [LIFECYCLE] [class org.gradle.internal.buildevents.TaskExecutionLogger] > Task :jsNodeDevelopmentRun
```
a
ah yeah, and so it doesn't render any tests in IntelliJ
o
Yeah, when running it as a regular main task, it is not considered a test by IJ. But you can run the exact same thing as a test task, and then IJ will switch its reporting. The output of both tasks is identical.
a
ahh okay, cool
o
So maybe you can throw XML at IJ, but you don't need to.
Regarding the adapter transformation, the main difference on Node.js is that with a transformation the TC messages disappear from the output shown by IJ.
a
gotcha, so when I run `gradle :jsNodeTest --debug` then the tests are rendered in IntelliJ, and I see `<ijLog><event type='beforeSuite'>` events in the logs
o
Yep. Same here.
a
so, just thinking out loud:
- KGP registers custom Test tasks for JS tests.
- The Test tasks launch the JS tests with ExecFactory, and intercept the stdout & stderr.
- Test output is redirected to the TCServiceMessagesClient, which then tries to parse TC messages.
- TCServiceMessagesClient tells Gradle about the test events via TestResultProcessor.
- And then some other KGP tool converts the TC messages into equivalent IJ log messages.
o
I haven't really looked that much into the Gradle side of things. Lots of moving parts: some transformation at the JS level (with parts of it differing between pure JS and Wasm/JS), some interpretation/transformation at the Gradle level. I can't say that I have a complete picture here.
a
does println on JS have some weird behaviour with newlines? 😵‍💫 I added an extra newline to the TC message and it's behaving better:
```
println("\n##teamcity[${args}]\n")
```
[screenshot: image.png]
o
Yes that missing newline is a known bug. 🐞
a
okay, so I stripped down the example and created something that appears to work correctly for Node and Wasm (ignoring the duplicated-tests bug) https://github.com/aSemy/kotlin-js-wasm-testing/blob/a5ad41e35679181bfc9880745147cc9535af9ddb/v2/src/jsHostedMain/kotlin/Main.kt
I've pushed a very, very hacky and weak POC for executing the JS browser tests. Basically, a Karma config file loads the compiled JS file containing our main function https://github.com/aSemy/kotlin-js-wasm-testing/blob/bc757ed775cc6987bbaab9ba1f0466f88f914012/v2/karma.config.d/v2.karma.conf.js It does not render prettily, and there's some error that means the whole thing crashes on the first failing test. But I think it's a good demonstration that as long as the main file is loaded (with a Karma plugin?) the TC messages will be caught, handled, and reported!
Weirdly, if I run `gradle :v2:jsBrowserTest --rerun-tasks` in debug mode in IJ then it seems to render the results better
o
(Just repairing the drainage valve of my old automatic espresso machine, so availability is pretty limited. 💦🛠️🙃)
I'll try to pull your latest commits...
Hmm, that Karma hack looks interesting. If we could free ourselves from the Kotlin Gradle infra on JS, we might just use a standard Kotest runner plus the Kotest TC reporter. The existing Kotest Gradle plugin might help to set this up and hook it to the Kotlin Gradle plugin's test tasks (though I haven't looked yet if and how that might be achieved.)
👍 1
a
Yeah, I think it's very possible to unlink ourselves from KGP. However, wouldn't we need to emit IJLog XML? Without KGP the TC messages wouldn't get translated to IJLog.
o
We have seen that IJ can interpret TC directly. Seems like it wouldn’t hurt to emit XML, as otherwise the TC messages appear in the IJ output window. So I’d see XML as a bonus.
a
are you sure that it's IJ interpreting TC messages directly? I think KGP's TCServiceMessagesClient is handling the TC messages and converting them to IJLog XML.
o
Hmm, actually I don’t know. It’s always hard to see which part is responsible if there are multiple layers, and you often can’t rely on what you see. Even calling Gradle directly outside of IJ might kick in some other type/mode of output formatter…
a
A year or so ago, IntelliJ used to force all Gradle tests to re-run, even if they were up-to-date, which was slow and annoying. At the time, the requirement to fix this was to ensure that if someone hit 'run tests', then the previous results would be displayed.

When tests run via IJ, IJ injects a Gradle test event listener (via an init.gradle script) that creates IJLog XML for each event, and logs them to stdout. So I made a proposal https://github.com/aSemy/intellij-gradle-init-plugin (which didn't get picked up) that would cache the IJLog messages to a file, and another task would print them out again if the tests were skipped. So, either the tests logged the messages, or they were loaded from cache. And IJ picked up and rendered the test events, even though a non-Test task was printing them.
So, maybe IJ understands TCSM, but I suspect TCSM is only used for JS tests → KGP JS Test tasks. When running browser tests:
1. KGP adds a Karma runner and Karma test listener.
2. The Karma test listener prints TCSM to the console.
3. The KGP Test task launches Karma, and listens to stdout.
4. KGP internals parse the TCSM messages, and report them to Gradle (via some internal classes).
5. If running in IntelliJ, the injected test listener converts the Gradle test events to IJLog.
So, if we want to re-use the existing KGP tasks, then yes, we should print TCSM. But, if we want to execute our own tests, then we need to figure out test-runner→test-task communication, and then the test-task needs to hook into Gradle test reporting.
o
Interesting stuff, IJ injecting via a Gradle init script. And me trying to stay away from anything DI/magic. 🙃 And I remember sometimes having to force Gradle to actually run tests, maybe due to cache invalidation issues. Oh my!

I wonder if the TC converter inside KGP is sufficiently complete. My philosophy is usually: be lenient on input (accept everything legitimate, and then some) and strict on output. If that is not the case here, the TC converter might take shortcuts and complain about legitimate things (certain types of nesting, concurrent tests).

Freeing ourselves from KGP test tasks would ease some things, but may complicate things for users expecting the KGP test tasks to work. And I wonder how much effort is required to reimplement the essential parts of the existing JS test infra. We'd also never get rid of KGP interactions entirely, as we'd have to consider KGP's JS/Wasm target (and possibly test) configurations, without having a stable API to interface with KGP.
a
Yeah, the TC converter has some idiosyncrasies that we need to be careful of (in particular, making sure that tests and suites are opened/closed in the right order). It looks like it can handle nesting, but I'm not sure about concurrency...
freeing ourselves from KGP could look like this (see the sketch below):
- The Kotest Plugin registers a new JS test task that uses a Kotest launcher.
- The Kotest JS test launcher logs the test events (in TCSM) to a file.
- Kotest adds a custom Karma launcher that just reads the logged TCSM from the file, and prints them to stdout.
- The regular KGP JS test task runs, and uses the Kotest Karma launcher.
- The KGP TCSM handler converts the TCSM to Gradle test events.

well, that's not really 'freeing', but it would mean we wouldn't have to use Gradle internals. We'd just have to rely on KGP's TCSM handler.
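A minimal sketch of the replay step in that list, assuming a hypothetical events file path and that the launcher has already written one TCSM line per test event:
```kotlin
// Hypothetical "Kotest Karma launcher" replay step: re-emit previously recorded
// TCSM lines on stdout so KGP's existing TCSM handler consumes them unchanged.
// The file path and recording format are assumptions, not Kotest APIs.
import java.io.File

fun main() {
    File("build/kotest/js-test-events.tcsm").useLines { lines ->
        lines.filter { it.startsWith("##teamcity[") } // ignore stray output lines
            .forEach(::println)
    }
}
```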
o
Karma always runs stuff in the browser, right? The browser cannot access project files (unless you instruct Karma to serve them). Also, an intermediate file would block any real-time test progress reporting.
Would changing the KGP parts to better accommodate Kotest's needs and then approaching JB with a PR seem feasible?
a
> Also, an intermediate file would block any real-time test progress reporting.
Yeah, it's not a great option :)
I work at JetBrains, in the KGP team (although not with the Kotlin/JS stuff), so I will ask...
o
Cool! Maybe Ilya Goncharov has stuff like this on the radar, as he provided some insights on JS test reporting. I haven't done the necessary homework, and I probably won't be able to do so this month as (among other stuff) I have to prepare material for my KotlinConf talk. So this may be too naive, but: could we possibly get some API to replace Karma/Mocha with a custom test runner (instead of `useKarma`/`useMocha`)? Could we have such a runner provided with just an output stream to feed back test results to Gradle/IJ, with no restrictions on nesting/concurrency by KGP internals, and no or minimal+documented interpretation of results by KGP?
a
Well... replacing the framework is a good idea. I've been looking into that (just for browser testing, since the Node tests seem to work fine as-is). When testing with Karma we can already override the framework manually, via a custom Karma config file. However, since we need a custom framework that supports async describes (replacing Jasmine), we'd also have to create a custom 'test event listener' (which converts the Jasmine test events, like "suite started", "log message", "test skipped", into TCSM). But that's okay, we can create a custom 'test event listener'. But then we need to produce TCSM messages for the TCSM handler in the KGP task, which has strong opinions about the formatting and ordering of messages. That might be difficult to interop with.
So, I've been thinking: let's just make Kotest completely independent.
1. We pick a JS browser test runner, like Karma or Web Test Runner.
2. Create some KotestJsLauncher that will be executed by Gradle inside the existing jsTest task.
3. KotestJsLauncher will discover and execute the Kotest JS tests.
4. The Kotest JS framework will output test/suite started/finished/failed log messages (which could be TCSM, but since KotestJsLauncher will parse them, they can be any format).

However, it gets difficult: KotestJsLauncher needs to report the test events to Gradle, but Gradle does not provide an official way to report test events. The only official way to report tests to Gradle is via a JUnit engine. (KGP uses Gradle internals to report test events, which is a no-no.) So... Kotest would have to create a JVM `@Test` class and add it to the jsTest task classpath. This `@Test` class would then launch KotestJsLauncher. And then Kotest would also need a custom JUnit engine that KotestJsLauncher can report tests to.
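For context, the official hook mentioned above is the JUnit Platform `TestEngine` SPI. A minimal sketch of its shape, with everything Kotest-specific left hypothetical:
```kotlin
// Shape of a custom JUnit Platform TestEngine, the official way to report tests
// to Gradle. The "kotest-js" id and the launcher wiring are hypothetical.
import org.junit.platform.engine.*
import org.junit.platform.engine.support.descriptor.EngineDescriptor

class KotestJsTestEngine : TestEngine {
    override fun getId() = "kotest-js"

    override fun discover(request: EngineDiscoveryRequest, uniqueId: UniqueId): TestDescriptor =
        // Real discovery would add one child descriptor per JS test here.
        EngineDescriptor(uniqueId, "Kotest JS")

    override fun execute(request: ExecutionRequest) {
        val listener = request.engineExecutionListener
        val root = request.rootTestDescriptor
        listener.executionStarted(root)
        // Hypothetical: launch KotestJsLauncher, parse its event stream, register
        // dynamic tests, and forward started/finished/failed events to `listener`.
        listener.executionFinished(root, TestExecutionResult.successful())
    }
}
```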
o
Could we use the existing Kotest Junit5 extension for that?
And even then, there seem to be these annoying limitations baked in:
> If you are running Kotest via Gradle's Junit Platform support, and if you are using a nested spec style, you will notice that only the leaf test name is included in output and test reports. This is a limitation of gradle which is designed around class.method test frameworks.

So it seems like we never really had full nesting in Kotest anyway. And Kotest already knows how to work around that via `displayFullTestPath`.
a
> Could we use the existing Kotest Junit5 extension for that?

Yes, although isn't there an idea for Kotest to remove the JUnit engine?
o
Don't know. Maybe that's about legacy JUnit 4, which was removed once, but came back again?
What other way does Kotest have to communicate JVM test results? I'm always using it with the JUnit 5 runner.
a
ah okay, maybe I'm misremembering!
I'm just looking into how to launch the JS browser tests. I see that Karma is deprecated, and the suggested replacements are Jasmine or Web Test Runner. WTR looks more usable. But they're both JS based. So I was thinking: what if we used Playwright to run the JS browser tests? I think it'd be easier to use, because it's Java based. It would avoid having to set up and interop with an NPM library.
It would work something like this (sketched below):
1. KGP compiles the JS tests to a .js file.
2. The Kotest Gradle Plugin generates an `.html` file that loads the compiled .js file.
3. Kotest GP adds the KotestJUnitEngine to the jsTest task classpath.
4. KotestJUnitEngine opens the html page using Playwright (the browsers could be configured via the Kotest GP DSL).
5. The tests run, and print output (e.g. TCSM) to the browser console log, which gets picked up by KotestJUnitEngine and reported as JUnit tests.
o
Sounds interesting! (Well, for me, anything that does not force me to touch JS sounds attractive.)
a
o
OK, good catch. I don’t know what’s planned there. So what do you think about opening a new thread on #kotest-contributors to discuss this with Sam, and linking back from there to this discussion?
👍 1
a
btw, here's a demo to show that the IJLog XML is what IJ uses to render the test results.
If you run `gradle testOutputLoggingIJ`, then IJ will render the tests, but not if you run `gradle testOutputLoggingTCSM`.
I have a very, very, very rough POC for a custom Kotest JS browser test runner in the v3 subproject https://github.com/aSemy/kotlin-js-wasm-testing/blob/5613b0ea2005cfafec11782b96c142b78391327e/v3/src/jsTest/kotlin/tests.kt Run `gradle :v3:jsTestKotest`, and some of the tests will be shown in IntelliJ (see screenshot). I'm not sure why they don't all render, I guess something isn't quite right with the IJLog XML, but it's fixable.

How it works:
- Tests are defined in src/jsTest using a custom test DSL (suspended nesting is supported; see the sketch below).
- KGP compiles src/jsTest/kotlin to .js.
- A custom Gradle Test task uses a custom JUnit TestEngine.
- The custom JUnit TestEngine uses Ktor to host the .js files, and an index.html.
- Playwright visits the hosted index.html.
- The tests run (which is magic to me, I guess they run automatically because they're in a main function?).
- The custom test DSL logs test/suite started/finished events (to the browser console).
- Playwright has a console listener, which parses the events, and converts them to IJLog XML.
🚀 1
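A minimal sketch of what such a suspending nested DSL could look like, logging events to the browser console; all names and event strings are illustrative, not the actual POC code:
```kotlin
// Because the DSL runs inside a coroutine of our own (not inside Mocha's
// describe()), suites and tests are free to suspend. Event names are assumptions.
class TestScope(private val path: List<String> = emptyList()) {
    suspend fun suite(name: String, block: suspend TestScope.() -> Unit) {
        console.log("SUITE_START ${(path + name).joinToString(" > ")}")
        TestScope(path + name).block()
        console.log("SUITE_END ${(path + name).joinToString(" > ")}")
    }

    suspend fun test(name: String, block: suspend () -> Unit) {
        val fullName = (path + name).joinToString(" > ")
        console.log("TEST_START $fullName")
        try {
            block()
            console.log("TEST_PASS $fullName")
        } catch (e: Throwable) {
            console.log("TEST_FAIL $fullName: ${e.message}")
        }
    }
}
```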
o
Glancing over it (code in browser only), this looks pretty amazing. Seems like only the disabled tests were not rendered. 👀
a
thanks! Yeah, I think there's something wrong with the parent spec.
Also, when a test failed, the TCSM had to output both a test-failed and a test-finished event, but with IJLog only a test-failed event is required. And I think when I was converting from TCSM to IJLog, I didn't fix that.