# arrow-contributors
Hello, I reran the benchmarks for the Free module. For these tests I configured JMH in SingleShot mode (i.e. no JVM warmup, a single measurement per run), and compared the Cats and Arrow implementations. Each test computes the Fibonacci sequence in a similar way. For example:
```kotlin
fun trampolineFibonacci(n: Int): TrampolineF<Int> =
    when {
      n < 2 -> Trampoline.done(n)
      else -> {
        TrampolineF.fx {
          val x = Trampoline.defer { trampolineFibonacci(n - 1) }.bind()
          val y = Trampoline.defer { trampolineFibonacci(n - 2) }.bind()
          x + y
        }
      }
    }
```
| Benchmark                                          | Param | Score (ms/op) |
|----------------------------------------------------|-------|---------------|
| arrow.benchmarks.TrampolineBench.eval              | 30    | 7933.298      |
| arrow.benchmarks.TrampolineBench.trampoline        | 30    | 9285.310      |
| cats.benchmarks.TrampolineBench.eval               | 30    | 655.456       |
| cats.benchmarks.TrampolineBench.stdlib (TailCalls) | 30    | 596.109       |
| cats.benchmarks.TrampolineBench.trampoline         | 30    | 1567.035      |

For Cats, the Trampoline / Eval ratio is significant. Overall, the results on the Arrow side do not look very good 🤔 I haven't run any tests without `fx` yet.
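As a point of comparison outside both libraries, the Kotlin standard library ships `DeepRecursiveFunction`, a coroutine-backed trampoline that is roughly the counterpart of the Scala `TailCalls` variant benchmarked above. A minimal sketch of the same Fibonacci computation (this is an illustration, not one of the benchmarked implementations):

```kotlin
// Sketch only: kotlin.DeepRecursiveFunction is a stdlib trampoline built on
// coroutines; it is NOT one of the Arrow/Cats implementations measured above.
val deepFib = DeepRecursiveFunction<Int, Int> { n ->
    if (n < 2) n
    // callRecursive suspends and resumes on the heap instead of the call stack
    else callRecursive(n - 1) + callRecursive(n - 2)
}

fun main() {
    println(deepFib(30)) // Fibonacci(30) = 832040
}
```

It would be interesting to add this as a baseline row, since it shows the overhead of a trampoline with no Free-monad machinery at all.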