# kotlin-native
e
It will be in `kotlinx-io`. It is just not ready yet.
j
is there a genre or modality of memory buffer to pursue if i wanted to fix or finish a piece along the way towards my own goals?
i like stl/stdc++, i like bytebuffer, i even manage to agree with netty's improved buffer. if for whatever reason porting say bytebuffer to common is of use as a stopgap, what nuances of coroutines/suspend should i look out for? @elizarov ^^
is https://github.com/Kotlin/kotlinx.serialization/blob/master/runtime/common/src/main/kotlin/kotlinx/io/Buffers.kt in common already? perhaps kotlinx-io could add a mention in the README
e
We plan to port ByteBuffer to common under the common name of `Buffer`, but without position-based APIs (only index-based ones), so that we can use a simple implementation in native and in JS.
The work is in big flux now, because we don’t want to create just any kind of API. It has to be simple and performant at the same time. The goal is to port the kotlinx-serialization JSON parsers to the new kotlinx-io without losing performance, which is right now on par with “native” JVM JSON parsers.
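To make the distinction concrete, here is a minimal sketch of what an index-based buffer could look like, in Java for illustration. The class name and methods are assumptions for this example, not the real kotlinx-io API: the point is simply that every access takes an explicit index, so there is no position/limit/mark state, and a plain backing array works the same way on JVM, JS, and Native.

```java
// Hypothetical sketch of an index-based buffer (names assumed, not the
// actual kotlinx-io design): all access is by explicit index, so the
// buffer itself carries no cursor state like ByteBuffer's position/limit.
public class IndexedBuffer {
    private final byte[] storage;

    public IndexedBuffer(int capacity) {
        storage = new byte[capacity];
    }

    // Absolute read: no hidden position is advanced.
    public byte get(int index) {
        return storage[index];
    }

    // Absolute write: callers track their own cursor if they need one.
    public void put(int index, byte value) {
        storage[index] = value;
    }

    public int capacity() {
        return storage.length;
    }
}
```

Because the buffer holds no mutable cursor, concurrent readers at different offsets need no coordination, and a trivial `byte[]`-backed implementation is portable to every Kotlin target.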
j
https://github.com/jnorthrup/PrematureXorSwap supports the elimination of position-based bloat methods on bytebuffers.
e
It is not just bloat. They are conceptually slow
j
yeah, as the benchmarks back up. i have often used a stack variable to track position with bytebuffers in the past because mark/reset really don't improve anything
the polymorphism of nio buffers was also broken from day 1, i have no idea if the most recent incarnation fixed that
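The stack-variable pattern mentioned above can be sketched like this: a local cursor plus `ByteBuffer`'s absolute getters replaces the buffer's own position/mark/reset state, so the read path has no shared mutable cursor at all. The method name here is made up for the example.

```java
import java.nio.ByteBuffer;

public class AbsoluteReads {
    // Read two ints using a local cursor and absolute getInt(index),
    // instead of relative getInt() driven by the buffer's position.
    static int sumTwoInts(ByteBuffer buf, int start) {
        int pos = start;                  // stack-local cursor
        int a = buf.getInt(pos);
        pos += Integer.BYTES;
        int b = buf.getInt(pos);
        return a + b;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(0, 40).putInt(4, 2);   // absolute puts, position untouched
        System.out.println(sumTwoInts(buf, 0)); // 42
    }
}
```

Since nothing here mutates the buffer, the same `ByteBuffer` can be read from several offsets at once without flip/rewind bookkeeping.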
e
That is something we cannot help with. ByteBuffer is the only efficient mechanism JVM currently offers to transfer data to/from native IO apis
j
a generic parameter would seem like the trick to make the inheritance a little more sensical even if they are still hand-mapped virtuals underneath
Buffer<UByte> is so sorely needed instead of buf.get() & 0xff
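For readers unfamiliar with the `& 0xff` idiom being complained about: Java's `byte` is signed, so every unsigned read from a `ByteBuffer` must manually mask off the sign extension. A typed unsigned buffer would hide exactly this. The helper below is just an illustration of the workaround, not an API from any library.

```java
import java.nio.ByteBuffer;

public class UnsignedReads {
    // The workaround a Buffer<UByte>-style API would make unnecessary:
    // Java bytes are signed, so an unsigned read must mask the result.
    static int readUnsignedByte(ByteBuffer buf, int index) {
        return buf.get(index) & 0xff;  // strip the sign extension
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(1);
        buf.put(0, (byte) 0xfe);       // as a signed byte this is -2
        System.out.println(buf.get(0));               // -2
        System.out.println(readUnsignedByte(buf, 0)); // 254
    }
}
```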
e
If all your stack uses a single kind of bytebuffer, then it is going to be fast in practice.
We don’t plan `Buffer` as a user-facing API. We design it as a lower-level API that a “user” (a person writing an implementation of some network protocol, for example) will never have to work with directly.
Don’t get me wrong. It will be `public`, but low-level.
You will only have to work with `Buffer` if you need to interface with a custom I/O API (write your own socket or file impls) or if you need to write functions to read/write some custom primitive data types that are not supported out of the box (like your custom varint format or something).
j
on the topic of bytebuffer json parsers, do you have a benchmark for the serialization parser you mentioned? a few years back we took gwt autobeans to the limit of single-minded bytebuffer optimization. at the time porting it back to js native arrays was an interesting thing in gwt, but i made a 1-pass, lazy, and lazier edition for the jvm
https://github.com/0xCopy/json-lazy-autobean the stateful parser had a surprisingly good ratio to the stateless forward iterator
https://gist.github.com/jnorthrup/ffeb96d236ed9a157b6dae76b696b577 an interesting reversal of typical bytebuffer performance: the direct bytebuffer is faster than the heap buffer for the first time in my recollection
e
It really depends on what you do with them
j
in this case, waited 5 years between benchmarks