# datascience
h
Interesting read on Swift for data science: https://www.fast.ai/2019/01/10/swift-numerics/
💾 2
👍 1
a
Thanks for the link. Value types are a great plus for Swift, but the lack of a JIT, Java interop, and tooling is a problem. Julia, while using a similar toolchain, seems to be a much better candidate for scientific applications.
h
Interested in people's views on how Kotlin compares. The author briefly mentions it: "Java: verbose (but getting better, particularly if you use Kotlin), less flexible (due to JVM issues), somewhat slow (but overall a language that has many useful application areas)", but does not consider Kotlin/Native.
a
Kotlin/Native is too immature to compare. The statement about the GC performance deficiency is incorrect (I think in most cases Swift's RC will be slower, yet smoother), but it is a common delusion about the JVM.
h
"somewhat slow" I assume refers to boxing
And as you mentioned, GC
a
All languages have boxing. Java just has automatic boxing, which is not always obvious. It is possible to avoid boxing overhead in Java/Kotlin.
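A minimal Kotlin sketch of what avoiding boxing can look like (the `List<Int>` vs `IntArray` example is illustrative, not taken from kmath): a `List<Int>` stores boxed `java.lang.Integer` objects, while an `IntArray` compiles down to a primitive `int[]`.

```kotlin
fun sumBoxed(values: List<Int>): Long {
    // Each element is unboxed from java.lang.Integer on access.
    var sum = 0L
    for (v in values) sum += v
    return sum
}

fun sumUnboxed(values: IntArray): Long {
    // Operates directly on a primitive int[]; no boxing at all.
    var sum = 0L
    for (v in values) sum += v
    return sum
}

fun main() {
    val n = 1_000_000
    val boxed = List(n) { it }        // backed by boxed Integer objects
    val unboxed = IntArray(n) { it }  // backed by a primitive int[]
    println(sumBoxed(boxed))
    println(sumUnboxed(unboxed))
}
```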
h
Yeah I was looking at how you approach it in kmath the other day
a
GC is not slow! In fact, in most cases it works much faster than RC, especially the modern JVM GC. Mobile developers do not like it because of unexpected GC pauses, but the average performance is very good.
I think we can tweak the Kotlin compiler a little in the future to handle autoboxing better. Also there is Graal. The prototype performs really well in some tests I ran. Not as good as an optimized unboxed variant, but close to it.
g
The throughput of GC should be higher than that of ARC or even manual memory management
"Swift avoids the things that can make a language slow; e.g. it doesn't use garbage collection"
Wow, such a lack of understanding of memory management in general
But in general it would be interesting to see Swift in some close-to-real-life benchmarks
Because everything I saw was impossible to compare with other general-purpose languages. And a lot of such theoretical ranting about performance, which, as I see even on the JVM, just means nothing in practice. Just as a funny example of how hard benchmarking is, see this article: https://shipilev.net/blog/2014/java-scala-divided-we-fail/
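For reference, a benchmark in the spirit that article argues for would typically go through JMH rather than naive timing. A minimal sketch, assuming JMH (org.openjdk.jmh) is on the classpath; the class and method names are illustrative only:

```kotlin
import org.openjdk.jmh.annotations.*
import org.openjdk.jmh.infra.Blackhole
import java.util.concurrent.TimeUnit

// JMH handles warmup, forking and dead-code elimination traps that naive timing misses.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@Fork(2)
open class SumBenchmark {

    private lateinit var boxed: List<Int>
    private lateinit var unboxed: IntArray

    @Setup
    fun setup() {
        boxed = List(1_000_000) { it }
        unboxed = IntArray(1_000_000) { it }
    }

    @Benchmark
    fun sumBoxed(bh: Blackhole) {
        var sum = 0L
        for (v in boxed) sum += v
        bh.consume(sum)   // Blackhole keeps the JIT from eliminating the loop
    }

    @Benchmark
    fun sumUnboxed(bh: Blackhole) {
        var sum = 0L
        for (v in unboxed) sum += v
        bh.consume(sum)
    }
}
```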
a
In my opinion, artificial performance tests are useless, especially in JIT runtimes. You can get slower performance in some specific method but better performance in the complete program because of inlining and memoization. Also one needs to remember that language flexibility and safety are in most cases much more important than performance. One could in theory write super-fast programs in C, but there are very few people who really can.
g
Synthetic micro-benchmarks are not useless, but they are useful only for particular cases and pretty hard to do correctly. A fair comparison of 2 languages is hard; comparing 2 completely different ecosystems is even harder. Drawing conclusions about performance from theory alone is just wrong.
I think your approach of trying to solve some real task using particular libraries (which are sophisticated and featureful enough) is much better.