# getting-started
m
Using the following code snippet, and some other code to profile it over 1M iterations with random numbers:
fun Pair<Int, Int>.sumIsEven() = (first xor second) and 1 == 0
fun Pair<Int, Int>.sumIsEvenAlt() = (first + second) % 2 == 0
I would assume the former to be faster, as it doesn't require summing and dividing and only does bitwise operations, but I'm getting the following results:
=== Case 1 ===
Average: 42ns
Max: 546_802ns
Total: 42_349_933ns

=== Case 2 ===
Average: 36ns
Max: 52_984ns
Total: 36_907_022ns
What's the reason for this difference?
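The harness that produced these numbers isn't shown in the thread; assuming it timed each call individually and aggregated the results, it would have looked roughly like the sketch below (the loop, list size, and counter are illustrative, not the original code):

import kotlin.random.Random
import kotlin.system.measureNanoTime

// Extension under test, copied from the snippet above.
fun Pair<Int, Int>.sumIsEven() = (first xor second) and 1 == 0

fun main() {
    val inputs = List(1_000_000) { Random.nextInt() to Random.nextInt() }
    var total = 0L
    var max = 0L
    var evens = 0  // keep the result live so the call isn't optimized away
    for (pair in inputs) {
        // Timing every call separately means one GC pause or JIT compilation
        // lands entirely in a single sample and dominates the reported Max.
        val t = measureNanoTime { if (pair.sumIsEven()) evens++ }
        total += t
        if (t > max) max = t
    }
    println("Average: ${total / inputs.size}ns")
    println("Max: ${max}ns")
    println("Total: ${total}ns (evens=$evens)")
}

At roughly 40ns per call, the overhead of System.nanoTime() itself is a significant fraction of every sample, which is part of why a hand-rolled loop like this can rank the two variants either way.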
s
The max value for the first case is 10 times bigger, which looks suspicious. How stable are those results? Are you sure there was no GC, warmup or any other unrelated performance drop during the test?
m
Swapping their order and letting the JVM warm up doesn't seem to make much of a difference, and although there are rare cases where the max for both is in the same order of magnitude, the average of the bit-based one is always slower
p
use JMH for a ‘fair’ benchmark. Also, micro-optimization like this is almost certainly pointless, since you're dealing with boxing overhead that will likely dwarf any theoretical gain in your code (an optimization HotSpot will be applying anyway)
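To make the boxing point concrete: Pair is a generic class, so on the JVM its Int components are stored as boxed java.lang.Integer and constructing one normally allocates. A version that keeps the arguments as primitives avoids that entirely; a sketch (the two-argument form is illustrative and matches the shape of the "not boxed" benchmarks below):

// Boxed: Pair's type parameters erase to Object, so the two Ints are stored as
// java.lang.Integer, and reading first/second unboxes them again on every call.
fun Pair<Int, Int>.sumIsEven() = (first xor second) and 1 == 0

// Unboxed: the same parity check on two plain Int parameters stays entirely in primitives.
fun sumIsEven(first: Int, second: Int) = (first xor second) and 1 == 0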
g
I just benchmarked the operations with kotlinx.benchmark on the JVM. There are four tests: each operation was tested with two Int arguments and with a single Pair<Int, Int> argument. Here is the result:
Mathematical, not boxed:
3.865 ±(99.9%) 0.151 ns/op

Bitwise, not boxed:
3.788 ±(99.9%) 0.016 ns/op

Mathematical, boxed:
4.087 ±(99.9%) 0.182 ns/op

Bitwise, boxed:
4.061 ±(99.9%) 0.012 ns/op
The results show that there is no significant difference; the bitwise alternative is just slightly faster and more stable over time. P.S. The benchmark code:
// Imports assumed for the JVM target, where kotlinx.benchmark runs on top of JMH.
import kotlin.random.Random
import org.openjdk.jmh.annotations.*
import org.openjdk.jmh.infra.Blackhole

// Unboxed variant: the two Ints are plain fields, so no Pair allocation or unboxing is involved.
@State(Scope.Thread)
open class OperationsBenchmark {
    var first: Int = 575757
    var second: Int = 179179

    // Fresh random inputs each iteration, so the JIT cannot constant-fold the check.
    @Setup(Level.Iteration)
    fun prepareValues() {
        first = Random.nextInt()
        second = Random.nextInt()
    }

    @Benchmark
    fun Blackhole.mathematicalIsEven() {
        consume((first + second) % 2 == 0)
    }

    @Benchmark
    fun Blackhole.bitwiseIsEven() {
        consume((first xor second) and 1 == 0)
    }
}

// Boxed variant: the inputs go through a Pair<Int, Int>, as in the original question.
@State(Scope.Thread)
open class BoxedOperationsBenchmark {
    var inputBox: Pair<Int, Int> = Pair(575757, 179179)

    @Setup(Level.Iteration)
    fun prepareValues() {
        inputBox = Pair(Random.nextInt(), Random.nextInt())
    }

    @Benchmark
    fun Blackhole.mathematicalIsEven() {
        consume((inputBox.first + inputBox.second) % 2 == 0)
    }

    @Benchmark
    fun Blackhole.bitwiseIsEven() {
        consume((inputBox.first xor inputBox.second) and 1 == 0)
    }
}
And the benchmark configuration:
warmups = 20
iterations = 10
iterationTime = 3
mode = "avgt"
outputTimeUnit = "ns"
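Those are the kotlinx-benchmark Gradle plugin's configuration options; in the build script they would sit in a block roughly like this sketch (the configuration name "main" is an assumption about the project setup):

// build.gradle.kts -- sketch of the kotlinx-benchmark configuration block.
// The option names follow the plugin's DSL; "main" is an assumed configuration name.
benchmark {
    configurations {
        named("main") {
            warmups = 20           // warmup iterations per benchmark
            iterations = 10        // measurement iterations
            iterationTime = 3      // time per iteration (unit set via iterationTimeUnit)
            mode = "avgt"          // report average time per operation
            outputTimeUnit = "ns"
        }
    }
}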
p
I got results that were a lot closer to what I would expect, using your benchmarks on a JDK 17 build on an M1 MBP