# arrow-contributors


08/29/2020, 12:11 PM
Hey, I am currently looking at adding a few benchmarks to various things (for my stm PR first, and later delimited continuations), and I've been wondering if we have a common setup or already existing benchmarks to look at. The benchmarks-fx module seems like a good start, but it does not have any benchmarks as far as I can tell (do we even have some, and if so, where?), so I am not sure if that package is still supposed to be used... Also, does it make sense to automate running benchmarks and show the results in PRs whenever relevant code changes? (It could be a manual task as well, but just having a way to compare performance for a PR and visualize the changes, once we have benchmarks set up, would be nice.) Any ideas? I'd love to do some groundwork here so that adding benchmarks throughout the other arrow repos becomes a more straightforward process...
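To illustrate the kind of before/after comparison being asked for, here is a throwaway micro-benchmark scaffold (all names are made up, and this is not what Arrow uses; a real setup would use JMH with proper warmup/measurement iterations and forked JVMs):

```java
// Hypothetical micro-benchmark scaffold, only to show the shape of an
// optimization comparison; JMH is the right tool for real numbers.
public class MicroBench {
    // Runs the block a few times to warm up the JIT, then reports ns/op.
    static long nanosPerOp(String label, int iterations, Runnable block) {
        for (int i = 0; i < 1_000; i++) block.run(); // crude warmup
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) block.run();
        long perOp = (System.nanoTime() - start) / iterations;
        System.out.println(label + ": " + perOp + " ns/op");
        return perOp;
    }

    public static void main(String[] args) {
        nanosPerOp("sum-1..1000", 10_000, () -> {
            long s = 0;
            for (int i = 1; i <= 1000; i++) s += i;
        });
    }
}
```

Comparing the reported ns/op for the same block before and after a change is exactly the comparison a PR bot would automate.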


08/30/2020, 5:48 PM
Hi Jannis, yes, there is an ongoing effort to include benchmarks on the site that @Israel Pérez González @Rachel were looking into, but there is no common approach to adding benchmarks yet. I’d say for the time being just add them, and when Raquel and Isra get that done they can adjust the modules and automate the PR and site aspects of it.
The Hood plugin is supposed to automate the comparison aspect of it, verifying that thresholds are not crossed across merges to master and failing the build otherwise. I believe it was integrated in helios, but I am not sure about its state in Arrow.


08/30/2020, 6:07 PM
I'll start with stm benchmarks and add them to the benchmarks-fx module for now 👍. I really just want a way to verify optimization ideas and to compare against the Scala implementations (cats-stm, stm4cats, zio, etc.). As far as I am aware, those all use a global lock to protect updates, which I think is a terrible idea for a concurrency abstraction, and I kind of want some proof of that^^ For reference, GHC implements both global and fine-grained locks and chooses at compile time, and I believe the fine-grained one is the default for threaded compilation. (Although it is kind of hard to find references to that...) The benchmark and further work on delimited control have to wait on my side anyway, as I won't be home for about a week, but I do want to finish up the stm PR as much as possible^^
👍 1
But yeah, getting some sort of automated benchmark tool running seems like a good way forward (I had not seen Hood before, looks great^^). I am a bit worried about how long this will take on each run, since the arrow repos are quite large even after the split, but being able to easily verify optimization ideas would still be great.
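The global-lock vs fine-grained-lock contrast above can be made concrete with a minimal sketch (assumed names; this is not Arrow's or GHC's actual STM machinery): each transactional variable carries its own lock, and a commit locks only the variables it touched, in a stable id order to avoid deadlock, instead of one global lock serialising every transaction in the system.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of fine-grained STM locking: disjoint transactions never contend,
// because a commit only takes the locks of the TVars it actually wrote.
public class FineGrainedStm {
    static final AtomicLong IDS = new AtomicLong();

    static final class TVar<A> {
        final long id = IDS.getAndIncrement(); // global order for lock acquisition
        final ReentrantLock lock = new ReentrantLock();
        volatile A value;
        TVar(A initial) { value = initial; }
    }

    // Applies a batch of writes atomically, locking touched vars in id order
    // and releasing in reverse order.
    static <A> void commit(Map<TVar<A>, A> writes) {
        TreeMap<Long, TVar<A>> ordered = new TreeMap<>();
        for (TVar<A> v : writes.keySet()) ordered.put(v.id, v);
        for (TVar<A> v : ordered.values()) v.lock.lock();
        try {
            for (Map.Entry<TVar<A>, A> e : writes.entrySet())
                e.getKey().value = e.getValue();
        } finally {
            for (TVar<A> v : ordered.descendingMap().values()) v.lock.unlock();
        }
    }

    public static void main(String[] args) {
        TVar<Integer> a = new TVar<>(0);
        TVar<Integer> b = new TVar<>(0);
        commit(Map.of(a, 1, b, 2)); // only a and b are locked, nothing else
        System.out.println(a.value + " " + b.value); // prints "1 2"
    }
}
```

With a single global lock, two transactions touching completely unrelated TVars would still serialise against each other, which is exactly the scalability cost the benchmarks would measure.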


08/30/2020, 9:25 PM
Hi! Everything is ready!! Just waiting for benchmark thresholds!! Hood works fine in Arrow and posts the comparison with the master branch as a new comment on PRs (and adds a new comment with every new commit on the same PR).
It was enabled for a few weeks, though it was disabled because of a lack of clear thresholds.
@Jannis when you create the PR with the new benchmark, just uncomment the lines here