# server
b
Just to see what's possible, we've been developing a web application basically from scratch, with the only dependencies being the standard libraries. It's developed using strict TDD, and uses the gradle build tool. Check it out here: https://github.com/7ep/r3z Would love to get your feedback.
➕ 1
a
Wow you even wrote the server by hand
r
a pity it's not written in pure multiplatform common Kotlin 😉 🧌
b
True. I can't say for certain how much easier it made it, but the Java standard libraries made it all feel pretty easy. It was shocking really.
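For context, a minimal sketch of what a from-scratch server on top of nothing but the JDK can look like - illustrative only, with invented names, not the actual r3z code:

```kotlin
import java.net.ServerSocket

// Sketch only: a tiny blocking HTTP responder built purely on the JDK.
// Names and structure are invented for illustration; the real r3z server
// is organized differently and does much more.
fun main() {
    ServerSocket(8080).use { server ->
        while (true) {
            server.accept().use { socket ->
                val reader = socket.getInputStream().bufferedReader()
                val requestLine = reader.readLine() ?: ""   // e.g. "GET / HTTP/1.1"
                while (reader.readLine()?.isNotEmpty() == true) {
                    // skip the remaining request headers
                }

                val body = "Hello from a from-scratch server ($requestLine)"
                val response = "HTTP/1.1 200 OK\r\n" +
                        "Content-Type: text/plain\r\n" +
                        "Content-Length: ${body.toByteArray().size}\r\n" +
                        "\r\n" + body
                socket.getOutputStream().write(response.toByteArray())
            }
        }
    }
}
```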
d
A lot of the biggest lies in software are that certain things are much more complicated than they actually are...
๐Ÿ‘ 1
j
I like the concept a lot, but if you are doing TDD you need to keep the build green at all times. I can only see one green build....
โ“ 2
As the builds are never green, and many commits didn't get built, if something broke you'd never know where it got broken.
b
Not sure what you mean by green build here. Generally, each commit passes tests unless we say otherwise in the commit message.
Are you talking about Github actions or sonarcloud analysis?
j
In response to the ? : Looking at https://github.com/7ep/r3z/commits/master all the red X are failed builds.
b
Ah. No. Those are SonarCloud saying it failed.
It's static analysis. Here are the big bugs:
They haven't really updated their tool as well for Kotlin as for Java, IMHO.
j
I have a pretty low opinion of tools like that tbh, unless you take the time to refine the rules to fit your opinions, and always fix issues when they come up. More useful to use the IntelliJ code inspector's green box.
This is a bit of a shame, as showing a green GitHub Actions build for this TDD thing would be a good story.
d
+1000. And if it's failing your green build and just adding noise, strip it out until it's useful.
b
We don't take it too seriously - it's like gnomish engineering (any WoW fans here?). But occasionally, it has a gem. We don't rely on it for our determination of "green" on TDD. For us, it's successful when our real tests pass, and they're our pride.
d
Suggest that you have a clearly denoted green path, with other things relegated to the sidelines (and not going red if you're not going to take action on them).
➕ 1
j
This = red Xs for the commits
c
code metrics and tdd don't go so well together because in tdd you need a failing test to make a change
(and I'm 100% in the tdd and 0% in the code metrics camp)
d
Unless it's "lines of code deleted"
b
Ahh. An interesting aspect of code metrics is: they're only bad when used by the wrong people in the wrong way.
j
Hmm. Not really. You can change anything you like when things are green. That's the "refactor" part.
c
how is the change test driven then?
if i need a refactoring i write a test that needs the refactoring
b
But a refactoring, by definition, should cause no impact on tests.
blob think smart 1
d
Almost all code metrics are really only cared about by people who want to use them in the wrong way.
c
a metric becomes worthless when you optimize for it
@Byron Katz but most of my refactorings are motivated by tests that i want to write or tests that i already wrote and set to pending for now
b
For example we use code coverage tremendously on this project for some specific reasons: Did we miss anything? Is anything dead code? But you're right, most teams use code coverage as a heuristic of quality, which is insane.
j
(Christophe) You're probably not gaining the best design out of TDD-as-design if you have a load of tests written but not yet in play, as then maybe you've taken quite a few steps forward before getting feedback... what happens to those tests? They might now be invalid, or limiting your ability to see new solutions. Anyhow, big conversation for a kotlin thread!
c
i only measure mutation coverage (with pitest.org), and that's really useful to find out what tests are missing, or what code is no longer needed
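To illustrate what mutation coverage catches that line coverage alone doesn't, here is a small hypothetical example (not from r3z):

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Hypothetical function, invented for the example.
fun canApprove(hoursWorked: Int): Boolean = hoursWorked >= 8

class CanApproveTests {
    // This test reaches 100% line coverage of canApprove, yet a mutant that
    // changes ">=" to ">" survives because the boundary value 8 is never
    // exercised. A mutation-testing tool such as pitest reports the surviving
    // mutant; adding assertTrue(canApprove(8)) would kill it.
    @Test
    fun `approval requires a full day`() {
        assertTrue(canApprove(10))
        assertFalse(canApprove(3))
    }
}
```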
b
All our tests are in play. To see them, you need the code on a machine so you can run the tests with "gradlew alltests"
The entire suite of tests - about 400 of them - unit, integration, API, perf, and ui - take about 40 seconds in total.
We are able to fearlessly refactor like mad, because our tests give a genuine safety net. Generally, if the tests pass, you're mostly good to go.
๐Ÿ‘ 1
c
for me 1 test fails:
coverosR3z.timerecording.UITimeEntryTests > timeEntryTests FAILED
    java.util.NoSuchElementException: Collection contains no element matching the predicate.
        at coverosR3z.timerecording.UITimeEntryTests$verifyTheEntry$id$1.invoke(UITimeEntryTests.kt:251)
        at coverosR3z.timerecording.UITimeEntryTests$verifyTheEntry$id$1.invoke(UITimeEntryTests.kt:24)
        at coverosR3z.persistence.types.DataAccess.read(DataAccess.kt:61)
        at coverosR3z.timerecording.UITimeEntryTests.verifyTheEntry(UITimeEntryTests.kt:215)
        at coverosR3z.timerecording.UITimeEntryTests.timeentry - An employee should be able to enter time for a specified date(UITimeEntryTests.kt:78)
        at coverosR3z.timerecording.UITimeEntryTests.timeEntryTests(UITimeEntryTests.kt:35)

3 tests completed, 1 failed, 1 skipped
b
which commit?
c
@James Richardson i never have more than one test written and not implemented. What I am saying is that I refactor in reply to tests that i cannot yet write or to one pending test that i started writing
current master, a6a1cc6ddd1e7d7ad10336fe153401101f5b5864
I just cloned it and ran ./gradlew allTests
b
🤔
c
and if we are talking about tdd perfection i could mention that the assertion failure message is not very useful 🙂
๐Ÿ‘ 1
j
Oh cool, sorry I misinterpreted what you were saying.
Not sure anyone is claiming perfection.. but an interesting journey, sure...
c
but great thread. maybe there should be a #tdd channel here
btw while we are talking about tdd and writing things from scratch: i wrote a test runner from scratch https://github.com/christophsturm/failfast
d
If you want feedback on style, there are a few stylistic things which seem slightly out of place to me - use of custom vals with get() returning a constant value, single-line methods using return that could be expressions, and what look like interfaces with a single implementation (and starting with an I prefix).
b
I would need an example for what you mean by custom val
d
val size: Int get() = map.size
(that actual instance is dynamic in the code, but there are lots that aren't)
Another that is static: override val path: String get() = "homepage"
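For reference, the two forms under discussion side by side (interface and class names are made up for the example):

```kotlin
// Hypothetical names, just to show the two forms being compared.
interface Page {
    val path: String
}

// What IntelliJ generates when it implements the member:
class HomepageVerbose : Page {
    override val path: String
        get() = "homepage"
}

// The terser equivalent for a constant value:
class Homepage : Page {
    override val path = "homepage"
}

// Same idea for single-line methods: an expression body instead of a return.
fun totalHoursVerbose(entries: List<Int>): Int {
    return entries.sum()
}

fun totalHours(entries: List<Int>): Int = entries.sum()
```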
b
Oh that is all just because when you allow intellij to automatically implement necessary methods for an interface, it does it that way, and I didn't see any harm in it.
Is there some drawback?
d
Imho it's just verbose and a bit pointless. 🙃
b
ah ok 🙂
d
I'd also be interested to know which style of TDD you use - I'm assuming Chicago style
b
Chicago? Actually hadn't heard of that one. I've heard of Detroit and London
d
Chicago is what I'd refer to as "uncle bob" tdd
Inside out Vs outside in
b
Oh, looked it up. Same thing. Alright, well, anyway: we do a blend. Big Kent Beck fans, but also Dan North fans. Very cyclical - top, bottom, top, bottom, go at it sideways, step way back, look again. Design is a mess. We don't adhere to any one direction.
You might note a few BDD tags in the code - we wrote our own (still very rudimentary) BDD framework.
d
Any reason for junit 4 instead of 5?
Also - the use of logging is not exactly TDD'd. Mutable vars and statics in logging are generally suspect, and the entire thing could be better replaced with something meaningful like a structured events system, which is testable
b
We intentionally didn't tdd the logs. We also didn't tdd the invariants. I just haven't seen the return on time invested when it comes to stuff like that. For example, we don't tdd this:
val split = pathAndQuery.split("?")
check(split.size in 1..2)
we rely on the invariant. We mostly rely on TDD to provide clarity of thought when considering the innate complexity of program design. We didn't see any need to use the TDD method for designing a log statement. Do you find otherwise?
In my past I have seen people adhere too strictly to TDD, and it didn't have grace. I see TDD and other techniques as awesome in proper proportion, just as a glass of water is fine to drink but not an aquarium. That is to say, I use it very heavily, but not the way it's commonly taught - keeping to it religiously, always and forever, for everything - there must be a balance to everything. I love functional programming, blended with procedural and OOP, and don't strictly stay in one, but instead use what makes sense at the time.
c
still you may want to tdd all the features that you rely on. and if you rely on logging you have to test it
b
A common problem pattern I have seen is when people are overzealous about the way they test. They create too many tests tied to the details of a particular implementation. Then they spend all sorts of time going forward updating tests because of minor changes. That happens a lot with log statements.
Still, it gives me pause to hear your advocacy, since I hadn't considered it in a while.
j
Yeah, lots of people think "oh yeah, I'll log this and that", but actually when you think about it for a bit - the recipients of those log messages are also stakeholders in the system. Developers or people on support (often the same people), oftentimes figuring out what happened by interpreting log lines. An alternative approach is emitting events that conclusively report what happened, by means of an event emitter which is a declared collaborator of your object...
๐Ÿ‘ 1
d
So you used "non-fanatical TDD" (which is a perfectly pragmatic choice, but definitely not strict 😜). This is worth a watch if you haven't seen it (there's also a bit on logs at around 25 minutes in)

https://youtu.be/B48Exq57Zg8

b
In regards to the comment about JUnit 4 vs 5, we considered 5 to suffer from second-system effect. It simply didn't add anything we needed or liked, but did add a lot of extra, needless (from our perspective) stuff. Occasionally we would switch over and start studying the docs for anything useful, but we just didn't find anything, and we'd switch back.