u
I'm looking at `kover` for code coverage, but I'm new to the topic generally. How do you actually plug it into CI/CD? I'm noticing it runs every kind of test, and that takes forever for a normal PR-blocking pipeline. Can I specify what kind of tests I want it to run? Or do you maybe run it in parallel to the regular tests, or only nightly? Or should I just eat it and use it instead of running the tests explicitly?
v
It depends on what you're trying to achieve.

If you want insight into which parts of the code have poor coverage, then you just need an extra job that runs weekly, and you can use the output reports to identify places where coverage should be improved. I would say that's a good and easy starting point.

If you want reasonable insight into the coverage impact of every PR, just running coverage over the whole project and showing a single number is not very useful outside of very small projects. Once you have a reasonably large codebase with hundreds of modules, even a large PR without any coverage won't shift the needle much, so the author won't feel bad about not writing any tests at all. The same applies to a PR that is covered 100%: again, it won't shift the needle, so it's not rewarding for the author either. For that purpose I found it best to present just the coverage of the modules affected by that PR. That way both cases show a much more realistic number for the modules they touch.

If your goal is to enforce some minimal coverage, I'd advise instead educating and motivating people about why writing unit tests is good for them. How the unit tests are written matters a lot too: testing behavior, not implementation; avoiding mocks; testing mainly entry points so each test exercises the whole logic; faking only what's necessary (network/OS layers). That way you have few tests, they don't need to change during refactorings, and they usually bring high coverage without writing low-value tests for every class or function.
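For the weekly-job starting point, here is a minimal single-module sketch of the Gradle side, assuming the Kover Gradle plugin with its 0.8.x DSL (the plugin/Kotlin versions and the 80% threshold are illustrative, not a recommendation from the thread):

```kotlin
// build.gradle.kts — minimal sketch, Kover 0.8.x DSL assumed
plugins {
    kotlin("jvm") version "2.0.0"
    id("org.jetbrains.kotlinx.kover") version "0.8.3"
}

kover {
    reports {
        verify {
            rule {
                // Fail `koverVerify` when total line coverage drops below 80%
                minBound(80)
            }
        }
    }
}
```

A scheduled CI job can then run `./gradlew koverHtmlReport` weekly (or `koverXmlReport` for machine-readable output) and publish the report, while `./gradlew koverVerify` is the task you would wire into a gate if you ever decide to enforce the threshold.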
u
Yes, that's one thing I'm proud of: we have quality tests, not just spamming mock verifies and all the things you mentioned. And I'm new to the topic practically, as I knew it's controversial: every metric that becomes a benchmark will be gamed, and the result will be AI-generated crap written just to satisfy the number, which I don't want, so I avoided it. But we're now at a mature stage and I don't pay for CI/CD time, so why not revisit it?

Playing around with it, I do see value in it: it showed me stuff I missed while testing, and yeah, the overview of which parts are undertested. So yes, I'm torn on whether this should be something that fails the PR. I'm leaning towards no, but then again, our reality is that everything is last minute and people do tend to omit tests when pressured to hit deadlines, and this would be a physical impediment, so it would in a way protect the codebase. But yeah, then we're back to the low-quality-test debate.

Do you know of any other way to enforce testing across an organization other than coverage? I'd assume static analysis that detects whether tests were written is also silly, as not every change changes behavior.
But running it on the diff only is intriguing; I wasn't aware of that option.
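As a rough sketch of that diff-only idea: the script below lists changed files against the main branch and runs Kover only for the affected modules. It assumes a flat layout where each top-level directory with a `build.gradle.kts` is a Gradle module, and that `origin/main` is the base ref; both the mapping and the base ref are the parts you would adapt.

```kotlin
// affected-coverage.main.kts — hypothetical helper, run with `kotlin affected-coverage.main.kts`
import java.io.File

// Files changed relative to the main branch (base ref is an assumption)
fun changedFiles(): List<String> =
    ProcessBuilder("git", "diff", "--name-only", "origin/main...HEAD")
        .start()
        .inputStream.bufferedReader().readLines()

// Map a changed file to its Gradle module: the top-level dir that has a build file
fun moduleOf(path: String): String? {
    val root = path.substringBefore('/')
    return root.takeIf { File("$it/build.gradle.kts").exists() }
}

val modules = changedFiles().mapNotNull(::moduleOf).toSortedSet()

if (modules.isEmpty()) {
    println("No affected modules; skipping coverage.")
} else {
    // Run Kover only for the affected modules, e.g. :feature-a:koverXmlReport
    val tasks = modules.map { ":$it:koverXmlReport" }
    ProcessBuilder(listOf("./gradlew") + tasks)
        .inheritIO()
        .start()
        .waitFor()
}
```

A PR-comment bot that parses the resulting XML reports per module would then give the "realistic number in the modules they affect" described above.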
v
The only enforcement we have is that every entry point must have unit tests. This is enforced by static analysis (in our case Konsist): e.g., if a ViewModel is added in a PR and doesn't have any unit tests, the PR can't be merged, unless that violation is manually suppressed in code. That is allowed; it's OK to add the unit tests in a separate PR, even though doing it in the same PR would be preferred. Of course there are cases where it was suppressed and the second PR with the tests never came, e.g. time for the task ran out or whatever. But now it's visible as a suppressed violation and is tracked against overall technical debt.

For every module a code health score is calculated based on technical debt, and suppressed rules are one type of debt. Violations from all the static analysis tools we use (Detekt, Sonar, Lint, Konsist) are aggregated, and code coverage below 80% is also part of the technical debt calculation. As every module has its owner, they are responsible for the tech debt there. Beyond the simple public visibility of the state of the code you're responsible for, that should be motivation on its own. Once a month, stats are presented on how tech debt is evolving: what issues were resolved, what types of new issues were added, and the top 3 colleagues who contributed most to removing tech debt over the past month.
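The entry-point rule described here might look roughly like this in Konsist. This is a sketch, not the actual rule from the thread; the `ViewModel` naming convention is assumed, and helper names such as `hasTestClasses()` may differ across Konsist versions:

```kotlin
import com.lemonappdev.konsist.api.Konsist
import com.lemonappdev.konsist.api.ext.list.withNameEndingWith
import com.lemonappdev.konsist.api.verify.assertTrue
import org.junit.jupiter.api.Test

class ArchitectureTest {

    @Test
    fun `every ViewModel has a corresponding unit test`() {
        Konsist
            .scopeFromProduction()              // production sources only
            .classes()
            .withNameEndingWith("ViewModel")    // entry points, by naming convention
            .assertTrue { it.hasTestClasses() } // fails the build (and the PR) otherwise
    }
}
```

Since the rule runs as an ordinary test, it fails the PR pipeline like any other test would, and Konsist's suppression mechanism is presumably what produces the "suppressed violation" entries that get tracked as technical debt.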