# announcements
c
Please add a reaction to vote: 1️⃣ Immutable comparator performs better 2️⃣ Classic mutable comparator performs better 0️⃣ No real difference
2️⃣ 2
0️⃣ 7
1️⃣ 2
k
Why is this a vote? Just measure it.
Either way, reusing variables like you're doing with
result
is completely useless, the optimizer will split the variable up again first thing anyway.
g
Almost sure that immutable will be worse just because it creates 2 lambdas per value, you can try inline of course
k
Ah it's not even inline? I missed that.
g
Agree about result, no real reason to reuse it. I would just do an early return, which would also allow writing it without nesting, mutability and lambdas
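A sketch of the two styles being compared, in Java (the project's actual types aren't shown in this chat, so the `Version` record and field names here are made up for illustration): the "classic" style reuses one mutable `result` variable across fields, while the early-return style avoids both the nesting and the mutation. As noted above, the JIT converts both to SSA form, so the choice is about readability, not speed.

```java
import java.util.Comparator;

public class ComparatorStyles {
    // Hypothetical value type, standing in for whatever the project compares.
    record Version(int major, int minor, int patch) {}

    // "Classic" mutable style: one result variable reused for every field.
    static final Comparator<Version> MUTABLE = (a, b) -> {
        int result = Integer.compare(a.major(), b.major());
        if (result == 0) {
            result = Integer.compare(a.minor(), b.minor());
            if (result == 0) {
                result = Integer.compare(a.patch(), b.patch());
            }
        }
        return result;
    };

    // Early-return style: no reuse, no nesting, no lambdas per field.
    static final Comparator<Version> EARLY_RETURN = (a, b) -> {
        int major = Integer.compare(a.major(), b.major());
        if (major != 0) return major;
        int minor = Integer.compare(a.minor(), b.minor());
        if (minor != 0) return minor;
        return Integer.compare(a.patch(), b.patch());
    };

    public static void main(String[] args) {
        Version x = new Version(1, 2, 3), y = new Version(1, 3, 0);
        // Both comparators must agree: x < y because 2 < 3 on the minor field.
        System.out.println(MUTABLE.compare(x, y) < 0);      // true
        System.out.println(EARLY_RETURN.compare(x, y) < 0); // true
    }
}
```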
c
I was trying to provoke some discussion. Ideally the immutable version should perform close enough to the classic version that there is no real penalty in using the lambdas.
Inline is obviously significant.
g
I just don't see why it should be mutable in your "Mutable" example. Also, as I said, the lambda will significantly increase overhead, but I really don't see a reason to use it. The inline version will help for sure, but the increased bytecode size may be less efficient (though the difference will probably be pretty small). Anyway, if you really want to know the answer you should benchmark it
c
I'm busy doing the benchmarks. The version with the lambdas is about 10% slower, and the non-inlined one 50%
k
Make sure to use JMH and get non-constant inputs from somewhere.
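A plain-Java sketch of the "non-constant inputs" point (in JMH the data would live in a `@State` object filled by a `@Setup` method; the names and sizes here are illustrative, not from the actual benchmarks): if the measured code compares the same two constants, the JIT can fold the whole comparison to a precomputed answer, so the input data should be generated outside the measured region, ideally from a fixed seed so runs stay repeatable.

```java
import java.util.Random;

public class NonConstantInputs {
    // Fixed seed: runs are repeatable, but the values are still
    // opaque to the JIT, so the comparisons can't be constant-folded.
    static String[] generate(int n, long seed) {
        Random rnd = new Random(seed);
        String[] data = new String[n];
        for (int i = 0; i < n; i++) {
            data[i] = Integer.toString(rnd.nextInt(1_000_000));
        }
        return data;
    }

    public static void main(String[] args) {
        String[] data = generate(1_000, 42L);
        // The "measured" loop: every iteration sees different operands.
        long acc = 0;
        for (int i = 1; i < data.length; i++) {
            acc += Integer.signum(data[i - 1].compareTo(data[i]));
        }
        // Consume the accumulator so the loop isn't dead code.
        System.out.println(acc);
    }
}
```

In real JMH code the `generate` call would sit in `@Setup` so input construction is excluded from the measurement.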
c
s
You should warm up the JIT, otherwise your results are skewed. You should probably look into JMH as Karel suggested.
c
There is a set of JMH benchmarks in src/jmh. The runner in src/main was first stab.
j
It would be awesome if you also included a graph with the data 🙂
c
Coming up
s
Sorry, I didn't see them. The benchmark setup looks fine at first glance. I didn't expect a 10 % difference; the resulting bytecode should be more or less the same. At least so I thought
c
GraalVM is once again surprising
I'm also trying to understand the difference because the decompiled code looks good.
k
You need to blackhole the compareTo results, I don't think
.toLong()
is enough.
Also, don't you need a lot more warmup and benchmark iterations for what's basically a one-liner?
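A sketch of the dead-code risk that JMH's `Blackhole.consume` guards against (plain Java here, names illustrative): if a `compareTo` result is simply discarded, the JIT is free to eliminate the call entirely, making the benchmark measure nothing. Folding every result into a value that is eventually printed (or blackholed) keeps the work alive.

```java
public class Sink {
    public static void main(String[] args) {
        String a = "alpha", b = "beta";
        long sink = 0;
        for (int i = 0; i < 1_000; i++) {
            // Discarding the result invites dead-code elimination:
            //   a.compareTo(b);
            // Folding it into a live value does not:
            sink += a.compareTo(b);
        }
        // "alpha".compareTo("beta") is 'a' - 'b' = -1, summed 1000 times.
        System.out.println(sink); // prints -1000
    }
}
```

Inside a JMH `@Benchmark` method the equivalent is `bh.consume(a.compareTo(b))` with a `Blackhole` parameter, or returning the result from the benchmark method so JMH consumes it for you.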
c
The numbers are pretty flat from 2nd warmup call
I'm adding blackHole.consume
k
Yeah, but afaik the most optimized compilation only starts after a couple hundred/thousand invocations of a function in HotSpot.
c
They are called 150 million times per second, so after 10 s the JIT is warmed up. I took warmup from 3 to 15 iterations of 3 seconds and it made no difference
I added blackHole.consume and the results are the same.
k
My bad, I haven't done timed iterations like that. Lucky that the blackhole doesn't change anything, just making sure 🙂
c
Good practice thanks.