FYI, next I'm planning to prepare an extended report for failing tests. For each failing case it will contain the general reason (failed during compilation or during Python execution), the details (Kotlin compiler error or Python exception message), and the running time. This will allow us to:
• see if, e.g., lots of tests fail for the same reason. Then we can focus on fixing that, potentially getting many new passing tests with little effort
• troubleshoot long box-test running times: see the distribution of running times and how they correlate with the failure reason (e.g. infinite loop, stack overflow)
It will be a new generated file, checked by CI (I know, one more file to take care of, but we may be able to merge the failed-test reports somehow later).
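The aggregation described above could be a small script over the generated report. A minimal sketch, where the record fields and all test names are hypothetical placeholders, not the actual report format:

```python
import collections

# Hypothetical report entries: reason, details, and running time per
# failing test (field names and test paths are assumptions).
failures = [
    {"test": "box/strings/concat.kt", "reason": "compilation",
     "details": "unresolved reference", "seconds": 0.4},
    {"test": "box/loops/while1.kt", "reason": "python-execution",
     "details": "RecursionError", "seconds": 12.0},
    {"test": "box/loops/while2.kt", "reason": "python-execution",
     "details": "RecursionError", "seconds": 30.0},
]

# Group by failure reason so a single widespread cause stands out:
by_reason = collections.Counter(f["reason"] for f in failures)
print(by_reason.most_common())

# Slowest failures first, to spot likely infinite loops:
slowest = sorted(failures, key=lambda f: f["seconds"], reverse=True)
print([f["test"] for f in slowest])
```

Sorting the reason counts and running times like this directly supports both bullet points: the most common reason to attack first, and the correlation between slowness and failure type.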
08/16/2021, 3:38 PM
Great! I love the powerful tests we have. I totally don't mind if you make them even better, thanks!
Could you please invest some time in updating the READMEs? Maybe list all the needed commands in one place so they're easy to copy-paste? That would not only make development easier for us, but also help answer newcomers' questions (if any).
08/16/2021, 4:30 PM
I currently use the GitHub Actions config file as the source of truth for how to run things. Maybe we should just link to it from the README 🤔
08/16/2021, 5:07 PM
Yep, I use it too. But I have to copy it line by line, and the paths for CI are different, to allow comparisons.
The latter is the worst part. Maybe we can somehow change the CI paths so it overwrites the files in place, and change the diff to compare against git history instead... I can look into it unless you have a better idea.
08/16/2021, 5:54 PM
I'd be more pragmatic for now and just adjust the README 🙂
on the other hand, overwriting files and then asserting that the working tree is clean could work too
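For the record, the "overwrite, then assert the tree is clean" idea can lean on `git diff --exit-code`, which exits non-zero when tracked files differ from HEAD. A minimal sketch, demonstrated in a throwaway repo (the report file name and the setup are assumptions, not our actual CI config):

```python
import os
import subprocess
import tempfile

# Throwaway repo standing in for the project checkout in CI.
repo = tempfile.mkdtemp()

def git(*args):
    # Run a git command inside the throwaway repo, capturing output.
    return subprocess.run(["git", *args], cwd=repo,
                          capture_output=True, text=True)

git("init")
git("config", "user.email", "ci@example.com")
git("config", "user.name", "ci")

# Commit a "generated" report file, as if it were checked in.
report = os.path.join(repo, "failed-tests-report.txt")  # hypothetical name
with open(report, "w") as f:
    f.write("old contents\n")
git("add", ".")
git("commit", "-m", "add report")

# Tree matches HEAD: `git diff --exit-code` exits 0, CI would pass.
clean_rc = git("diff", "--exit-code").returncode

# Regenerating the file with different contents: exit code becomes
# non-zero, so CI would fail until the new report is committed.
with open(report, "w") as f:
    f.write("new contents\n")
dirty_rc = git("diff", "--exit-code").returncode

print(clean_rc, dirty_rc)
```

This way the generator can always write over the checked-in file with the same paths locally and in CI, and the CI step is just one extra `git diff --exit-code` after running it.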