# gradle
r
This question might sound strange, but I will ask anyway 😆 Has anyone noticed poor disk I/O on ARM machines (Mac M1 machines with an SSD) vs the Ubuntu distro? I was benchmarking a Gradle build on a self-hosted Mac runner against the existing infra offering (a virtualised Ubuntu/Linux runner) and noticed bad disk I/O when Gradle packs and unpacks cache artifacts, compared to the Ubuntu/AMD runner.
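(For anyone wanting to reproduce this outside Gradle, a minimal Kotlin sketch of a raw disk probe is below. The file size, chunk size, and lack of fsync are all assumptions on my part, so treat the numbers as rough page-cache-assisted throughput rather than a proper benchmark.)

```kotlin
import java.io.File
import kotlin.system.measureTimeMillis

// Hypothetical stand-alone probe: times a sequential write and re-read of
// ~1 GiB in 1 MiB chunks, loosely mimicking what Gradle does when it packs
// and unpacks build-cache artifacts. Buffered I/O, no fsync.
fun main() {
    val file = File.createTempFile("io-probe", ".bin")
    val chunk = ByteArray(1 shl 20)   // 1 MiB buffer
    val totalChunks = 1024            // ~1 GiB total

    val writeMs = measureTimeMillis {
        file.outputStream().buffered().use { out ->
            repeat(totalChunks) { out.write(chunk) }
        }
    }
    val readMs = measureTimeMillis {
        file.inputStream().buffered().use { input ->
            while (input.read(chunk) != -1) { /* drain */ }
        }
    }
    println("write: ${writeMs} ms, read: ${readMs} ms for ${totalChunks} MiB")
    file.delete()
}
```

Running the same probe on both runners separates raw disk behaviour from anything Gradle-specific.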
e
macos and windows have always had more I/O overhead than linux
r
Ahh okay, my assumption was that a macOS M1 self-hosted runner with a local SSD (no virtualisation in play) would, or at least should, perform better than shared NFS storage backing a VM. But I was surprised to see the result.
Initially I thought it could be a difference in security tooling installed on the different VMs / bare metal vs the self-hosted Mac runner.. but no!!
@ephemient i can and will run more benchmarks and look around.. but I was also wondering: have you noticed similar results in the past running Gradle builds with the local cache enabled?
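(For context, the local-cache setup in question is roughly the stock `settings.gradle.kts` configuration sketched below; the directory name is my assumption, and pointing it at a different volume is an easy way to A/B the disk per runner.)

```kotlin
// settings.gradle.kts — minimal sketch of a local build cache setup.
// The directory name is an assumption; relocate it (e.g. onto another
// volume) to compare disk performance between runners.
buildCache {
    local {
        isEnabled = true
        directory = File(rootDir, "gradle-build-cache")
        removeUnusedEntriesAfterDays = 7
    }
}
```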
v
You are running a native ARM Java, right? Not an Intel Java through Rosetta, or however that used to work on the Mac.
r
Yeah, native ARM. The JDK was also installed via actions/setup-java, which installs the ARM build based on the machine arch. I cross-checked: it's native ARM, no Rosetta in play.
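(One quick way to double-check, as a tiny Kotlin sketch: an x86_64 JDK running under Rosetta reports `x86_64`/`amd64` for `os.arch`, while a native build reports `aarch64`.)

```kotlin
// Sanity check that the build JVM itself is native arm64.
fun main() {
    println("os.arch      = ${System.getProperty("os.arch")}")       // expect "aarch64" on a native M1 JDK
    println("java.vm.name = ${System.getProperty("java.vm.name")}")
    println("java.home    = ${System.getProperty("java.home")}")
}
```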
e
rosetta doesn't affect I/O speeds anyway. it translates whole code pages to native at once, and then it's basically a native program
r
This is definitely a bummer: Linux takes around 20m (CPU time) but the Mac M1 takes 2hr Xm (CPU time), depending on the cache-artifact size, while unpacking the artifacts. Which makes me question whether it's worth moving your CI agents to Mac runners, considering the remote cache and local cache play such an important role in Gradle builds.