# kotest
w
Starting a thread about memory usage since the previous one is long enough 🙂
The biggest contributors to memory usage seem to be Strings and ArrayLists. The ones I could identify and quickly check were all in the `Description` class, specifically the `chain()` and, I think, `path()` methods. One thing I noticed is that `+ listOf(this)` could be replaced with `+ this` (see the sketch below). Other than that I cached all the methods that seemed to be operating on strings and were cache’able, and it seems I reduced peak usage by ~1GB
GC logs with caching enabled
GC logs without caching
IDEA patch with my changes
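For illustration, a minimal sketch of the `+ listOf(this)` point above (illustrative names only, not Kotest's actual code): `plus(listOf(x))` allocates a throwaway one-element list before concatenating, while the `plus(element)` overload appends directly.

```kotlin
fun main() {
    val parents = listOf("SomeSpec", "outer context")
    val name = "my test"

    // plus(Collection): wraps `name` in a temporary list, then copies both
    val viaWrapper: List<String> = parents + listOf(name)

    // plus(element): copies `parents` and appends `name`, no wrapper list
    val direct: List<String> = parents + name

    check(viaWrapper == direct) // same contents, one fewer allocation per call
}
```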
Sadly, other than that I see most allocations coming from kotlin reflect, and for some reason I don’t see what triggers those 😕 I saw it grabbing the Spec name in one place, but if you have some pointers about reflection API usage on hot paths I’ll have a closer look there
The 1GB saving is not much, and I think more could be done by revisiting some of the logic around test paths, naming, etc., but I’m not familiar enough with the codebase to suggest something. Also it seems like reflection is really heavy in that regard, so perhaps some of the reflection-related stuff can be cached as well
s
Probably lots of optimization could go into that stuff. The `Description` class is getting deprecated anyway in 4.7
kotlin reflect is used for getting spec names (we could easily cache that), for parameters in some data tests, for property tests (do you use prop tests?)
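A sketch of the spec-name caching idea (hypothetical helper, not Kotest's actual code), assuming the reflective lookup is the expensive part and class objects are stable for the run:

```kotlin
import java.util.concurrent.ConcurrentHashMap
import kotlin.reflect.KClass

// Cache the reflective name per KClass so it is computed at most once.
private val specNames = ConcurrentHashMap<KClass<*>, String>()

fun specName(kclass: KClass<*>): String =
    specNames.getOrPut(kclass) { kclass.qualifiedName ?: kclass.java.name }
```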
c
i wonder if https://github.com/Kotlin/kotlinx.collections.immutable could be helpful for cases where you want to have immutability without copying around
w
We don’t have property tests, but I have a hunch that there are some optimisations around `forall` that can be made as well
s
are you using data tests ?
w
tough question 😄 I think we do, but not many. We have more of the older-style `forall(row(…` tests, and not the new API that allows data classes
s
or the new new api that I haven't written docs on yet 😂
(the new new api is just the new api but in a separate package)
@christophsturm that looks nice, but I'm wary about using anything else experimental from JB. Kotest uses the experimental duration class and it was a big mistake.
c
i have had problems with duration too. but i think with 1.5 it's no longer experimental
s
no it's still experimental
c
ok but inline classes are no longer experimental
s
right
w
Yay 😄 Yeah I’m not really up to date with those; I know some people preferred the old approach so they kept using it, plus we have more older tests that we didn’t migrate. Also there were some bugs initially that bit us. Either way we use both, but definitely more of the older API. I know one particular test has ~4 nested levels of the old forall and those tests take super long, even though it’s just 2-3 rows for each forall (so under 100 test cases, not that much). But that’s probably not related to memory
c
so duration will probably not break with every update 🙂
s
duration has broken with every single update so far
this PR is entirely because durations are experimental https://github.com/kotest/kotest/pull/2242/files
@wasyl the big change with 4.5 data testing is that forAlls (called withData) now stack. So `withData { withData { } }` will give you nested tests, which on the old stuff wouldn't
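A sketch of how the stacked API might read, assuming the `io.kotest.datatest.withData` entry point from 4.5 (names and rows here are made up; exact receivers may differ by version):

```kotlin
import io.kotest.core.spec.style.FunSpec
import io.kotest.datatest.withData

class StackedDataTest : FunSpec({
    withData(1, 2, 3) { a ->      // outer rows become containers
        withData(10, 20) { b ->   // inner rows nest beneath: 3 x 2 = 6 leaf tests
            check(a < b)
        }
    }
})
```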
w
yea duration even hit us even though we’re not a library: we can’t update kotlin without updating coroutines because of the duration library, and we’ll have to migrate the super convenient `Int.seconds` etc. APIs to whatever they did instead 😞 not fun
s
Yes, `Int.seconds` was awesome, and now it's `Duration.seconds(int)`, so ugly
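The change being lamented, as of Kotlin 1.5 (where `Duration` was still experimental):

```kotlin
import kotlin.time.Duration
import kotlin.time.ExperimentalTime

@OptIn(ExperimentalTime::class)
fun main() {
    // val t = 10.seconds                 // the pre-1.5 extension, deprecated in 1.5
    val t: Duration = Duration.seconds(10) // the 1.5 replacement factory
    println(t)
}
```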
c
why was that removed? i really like that in ruby on rails.
s
@wasyl are you on 4.5 ?
c
i guess it could still be a separate library
w
Yes, 4.5.0
s
@christophsturm so when are you starting to join the kotest team 😂 😛
@wasyl I've pushed a change to cache the result of the reflection call to get the class name from a KClass. Do you want to try that and see if it's any better, once it's built? https://github.com/kotest/kotest/actions/runs/842287844
👍 1
w
> i guess it could still be a separate library
someone already did that 😄 https://github.com/eygraber/kotlin-duration-extensions
❤️ 1
c
i could contribute to the idea plugin and make it run failfast tests too :))
s
You could just make kotest support what you're missing 🙂
@wasyl I've made an even more aggressive cached build here https://github.com/kotest/kotest/actions/runs/842320540
👌 1
c
the reason why i did not do this earlier and instead started my own test runner was that it was just easier for me to start from scratch than to figure out how kotest works. but you are right, it's definitely a good option for the future. I’m just a bit afraid that kotest is harder to refactor because there are so many features that have to be taken into account.
s
That's what happens when people start using your framework. They have requests and the feature set grows. Kotest tries to be general purpose, but I don't think that precludes being fast. I would say that outside of the test runner classes, it's very straightforward. And most of the complexity comes from trying to be junit friendly. I would like to ignore junit platform as it's not well suited to a modern test framework in my opinion, but of course you can't do that unless you are going to provide a gradle plugin (which we now have) and an intellij plugin (which we've had for ages). And there are still certain things that are junit-first (e.g. the way intellij hooks into gradle). Over the past year or so I've tried to go in the direction of being less a junit platform engine and more a completely standalone framework.
c
right, but i think it's also important to say no to features. but you are right, there is no reason why kotest should be slower than failfast. definitely something to look at, maybe i should try to implement the failfast dsl as a kotest dsl and see if i hit any roadblocks.
s
We do say no to features a lot of the time 🙂 I'm not even sure failfast is noticeably faster, it's just that you're comparing failfast's own test suite with kotest's own test suite. The latter is like 10,000 tests. Certainly I don't think the DSL matters - it's just nested test lambdas at the end of the day, and that's the same for kotest, spek, failfast, whatever.
Also I think your experience implementing an entire test framework means you would understand Kotest very quickly.
c
yes it's definitely possible that kotest is faster.
s
Imagine how good Kotest would be with more people contributing. We're still a small team of 4 regular contributors and a handful of semi regular contributors. I suggested to the current maintainer of Spek that we fold together as he was looking for someone to take over that project (seems like development has slowed but it still has a lot of users) but I don't think he was keen.
c
in failfast i have 2 test executors: one for the initial execution, and then one that uses that map to just execute single tests. it may seem redundant and stupid, but if you really want high core utilization from the start i see no other way
spek looks pretty dead
s
Pretty much yeah
c
and i have a `dependency { }` method that creates or waits for a test dependency, and it does that in a separate coroutine dispatcher to free the cpu core for another test.
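A sketch of the general idea (hypothetical API, not failfast's actual one): the dependency is built lazily on `Dispatchers.IO`, so the thread running tests is not blocked while it is created.

```kotlin
import kotlinx.coroutines.*

class Dependency<T>(scope: CoroutineScope, create: suspend () -> T) {
    // Lazily started on the IO dispatcher; the first get() triggers creation,
    // later callers just await the same result.
    private val deferred = scope.async(Dispatchers.IO, start = CoroutineStart.LAZY) { create() }
    suspend fun get(): T = deferred.await()
}

fun main() = runBlocking {
    val db = Dependency(this) { "db-connection" } // e.g. start a container here
    println(db.get())
}
```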
s
take a look at the concurrency stuff in kotest; specifically, start with SpecLauncher and follow it through. It allows users to customize how concurrency works.
c
ok i will
w
@sam how do I know the snapshot release version for https://github.com/kotest/kotest/actions/runs/842320540?
Pretty much the same memory usage (~7.8G) as the version without caching above, that’s with `4.6.0.230-SNAPSHOT`
s
what about speed ?
snapshot is 4.6.0-buildnumber-SNAPSHOT
w
Ah, I was looking for the buildnumber; it’s there in the action title `master #230`. About speed: I don’t see any significant improvement, both versions finish on average within 1 second of each other
s
I guess reflection is not slow then
w
let me look at memory profiler
Still tons of allocations from kotlin.reflect. Don’t look at sizes, this is live and I don’t know how to show something better, but the proportion is clear
s
How many times does it get called, it's not clear
w
Would that caching I did above make sense as well? You mentioned `Description` will be deprecated; will the logic be moved somewhere or be reworked completely?
s
I'm adding a replacement that gives you more information about the test at runtime (like the tags that were present at runtime, not just compile time). It's early days but it will replace description eventually, although description will hang about for a long while yet for compatibility. https://github.com/kotest/kotest/pull/2239
w
This is somewhere after ~200 or 300 tests (not specs)
s
I've cached various methods in description by putting them in lazy {} blocks, see if this improves things https://github.com/kotest/kotest/actions/runs/842480289
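A sketch of the shape of that change (hypothetical, not the real `Description` class): derived strings move into `lazy {}` so they are computed once on first access instead of rebuilt per call.

```kotlin
class Description(private val parents: List<String>, val name: String) {
    // Computed once, then cached for all later calls.
    val path: String by lazy { (parents + name).joinToString(" -- ") }
}
```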
w
~1GB lower memory footprint, slight speed improvement (~10%, but this is not very scientific), and still kotlin.reflect APIs churning through Strings and ArrayLists in the profiler.
So, a nice win 🙂 I’m gonna try with deeper traces to figure out where the reflect calls are primarily coming from
s
yeah if you can find that out that would be good
do you do a lot of data class comparisons, like `obja shouldBe objb`?
w
I’d guess this is majority of our assertions
s
actual full classes, not fields?
not like `obja.name shouldBe objb.name`
but `obja shouldBe objb`
w
Yes, we have a lot of those
With a bigger stacktrace I see that `TestCase#toString()` calls `SuspendLambda.toString()`, which triggers tons of `kotlin.reflect` stuff
s
wonder why testcase to string is being called
logging prob
w
log("Executing active test $testCase with context $context")
in
TestCaseExecutor:148
s
yep, I'm currently replacing them with functions
w
Yeah, with inline functions that should be practically free unless logging is enabled, I guess?
s
yes
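A minimal sketch of that lazy-logging pattern (hypothetical `Log` object, not Kotest's actual logger): the message is built inside a lambda, so the interpolation, and the `TestCase.toString()` it triggers, only runs when logging is enabled.

```kotlin
object Log {
    var enabled = false

    inline fun log(f: () -> String) = log(null, f)

    inline fun log(t: Throwable?, f: () -> String) {
        if (enabled) {
            println(f()) // the string is only built here
            t?.printStackTrace()
        }
    }
}

fun main() {
    val testCase = "some test"
    Log.log { "Executing active test $testCase" } // no-op while enabled == false
}
```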
w
There’s also `TestCaseExecutor:179` for example; I can grab some more, unless you’re just replacing all logs everywhere 😄
s
the latter 🙂
almost done
wanna review for me real quick ?
👀 1
w
Looks good 🙂
probably `log(stringFn)` could still delegate to `log(null, stringFn)`?
s
addressed your feedback and merged
if you want to try build 232 once it's done
w
Will do!
s
this should be a big improvement 🙂
w
seems like the build failed
s
Yeah I added 233
👍 1
w
30% speed improvement 😮 And very interesting, peak memory during tests execution is down to ~1G
s
instead of 7gb?
w
Yes, but!
There’s still a 4G peak very early in the test process run
s
test discovery maybe ?
w
Yeah most likely
s
if it's the first couple of seconds
how many tests do you have
w
This is the first GC at ~1.6s, memory is slightly over 4GB. After that it doesn’t go over 1.2GB
s
sounds like discovery
w
Yeah. About how many tests: this is just one module, so 180 specs, around 1k `it`s (and the `InstancePerLeaf` strategy)
s
ok so a reasonable number of test lambdas
so reflection usage should be down to about once per class now
w
Sadly startup profiling is broken for me, so I can’t profile that early in the execution process 😞
s
ok, well not much we can do about discovery anyway
and a couple of seconds is good right
w
Yeah it’s ok. The 4GB memory could use an improvement if possible, but without profiling it’s difficult to say
I mean, it’s still a lot of memory even for discovery, esp. since it’s back to 7M afterwards and only rises again during test execution
s
Yes that's fair
it does seem a huge amount of memory
w
Great improvement other than that regardless, many thanks 🙂 I’ll profile some more and let you know the results, but probably in a couple of days
s
alright, I'll take a look at discovery as well
Take the next build, and try running with the sys prop or env value `kotest.framework.discovery.jar.scan.disable` set to true
I think that might speed up discovery hugely
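For example, one way to wire that up in a Gradle Kotlin DSL build (hypothetical snippet; an env variable of the same name should also work per the message above):

```kotlin
// build.gradle.kts
tasks.withType<Test>().configureEach {
    systemProperty("kotest.framework.discovery.jar.scan.disable", "true")
}
```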
w
Will do, what does it do?
s
doesn't scan inside jars on the classpath
you're only interested in compiled test classes right
at least I think that's how it should work
w
Yeah, if the jvm knows which jars are included, that could help
Looking at the commit, how about an optional opt-in "include only packages" option? I see some `rejectPackages`; if there’s an alternative `includePackages` then maybe it would help? But I’m gonna check the newest build first
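A sketch of what an `includePackages` option might map to, assuming the discovery scan is built on ClassGraph (which is where `rejectPackages` comes from); `acceptPackages` is ClassGraph's existing include-side filter:

```kotlin
import io.github.classgraph.ClassGraph

// Limit the classpath scan to the user's test packages up front.
val scanResult = ClassGraph()
    .enableClassInfo()
    .acceptPackages("com.example.tests") // hypothetical opt-in value
    .scan()
```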
s
yep that's a good idea too
w
(I’m also thinking that e.g. kotlin or androidx folks won’t be able to use Kotest, right? 😄)
s
it would still need to un-zip the jars to look for your packages tho
👍 1
whereas this way it will just skip jars completely
and yes, com.sun, kotlin. and android. and java. can't use kotest 😂
w
Not much improvement with this property, sadly 😞 neither speed nor memory. I added the `-Dkotest.framework.discovery.jar.scan.disable=true` property
s
hmmm ok
well assuming it worked, that's a shame
👌 1