# announcements
t
a.mapIndexed { index, value -> value to myList[index] }.toMap()
?
h
this also creates an extra interim list of pairs
t
yup. probably need to roll your own if those intermediate collections/pairs are going to be an issue
h
it's also identical to
a.zip(b)
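spelled out with example lists (mine, not from the thread), that equivalence is roughly:

```kotlin
val a = listOf(1, 2, 3)
val b = listOf("one", "two", "three")

// zip pairs elements positionally (stopping at the shorter list);
// toMap() then collects the pairs into a LinkedHashMap
val m = a.zip(b).toMap()
println(m) // {1=one, 2=two, 3=three}
```

note it still allocates a Pair per element, which is exactly the interim overhead being discussed.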
t
maybe could do it as a sequence and wrap a call to associate to track the index?
h
sequences inherently have additional object-creation overhead. i'm hoping they get an update that optimizes them to perform on par with collections
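one sequence-based shape (a zip sketch, not the index-tracking idea above, and it still allocates a Pair per element):

```kotlin
val a = listOf(1, 2, 3)
val b = listOf("one", "two", "three")

// lazy pairing: no intermediate List of pairs is materialized, only the
// final map, though the sequence machinery itself allocates wrapper objects
val m = a.asSequence().zip(b.asSequence()).toMap()
println(m) // {1=one, 2=two, 3=three}
```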
👌 1
k
a.withIndex().associateBy({ (_, v) -> v }, { (i, _) -> b[i] })
looks terrible but doesn't have any overhead.
t
I like that:
inline fun <K, V> Iterable<K>.associateByIndex(crossinline valueSelector: (index: Int) -> V): Map<K, V> {
    return withIndex().associateBy({ (_, k) -> k }, { (i, _) -> valueSelector(i) })
}

val a = listOf(1,2,3)
val b = listOf("one", "two", "three")
val c = a.associateByIndex { b[it] }
println(c.toString())
k
This should be called `associateWith`, no? Also, are you sure `crossinline` is necessary? It kind of defeats the purpose of `inline`.
t
it is associating a key with a value via the index, not to the index? and you are right, no need for the `crossinline`.
allocations are roughly `2N + 3`, where N is the size of the `Iterable` receiver, assuming the `valueSelector` doesn't allocate
h
it would be called `associateWithIndexed` according to stdlib naming convention, if including the indices in the params of the transform
a
another possible solution would be:
fun <K, V> Iterable<K>.associateWith(values: Iterable<V>): Map<K, V> {
    return values.iterator().run {
        associateWith { next() }
    }
}
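for reference, that version in use (the example lists are mine):

```kotlin
fun <K, V> Iterable<K>.associateWith(values: Iterable<V>): Map<K, V> {
    return values.iterator().run {
        // next() resolves to the iterator receiver introduced by run { },
        // while associateWith resolves to the stdlib overload on the outer Iterable
        associateWith { next() }
    }
}

val a = listOf(1, 2, 3)
val b = listOf("one", "two", "three")
println(a.associateWith(b)) // {1=one, 2=two, 3=three}
```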
h
I was almost certain `withIndex` had overhead...
t
allocates an iterator, plus an indexed value per item in the iterable. so it is N+1 allocations
a
I guess my solution has no such overhead
k
If allocating an iterator is a problem you're definitely using the wrong language.
😂 3
A JMH benchmark: https://pastebin.com/raw/VViiXKYF
Results:
Benchmark                              Mode  Cnt     Score     Error  Units
MapBenchmarks.iterator                thrpt   10  5095.020 ± 105.089  ops/s
MapBenchmarks.manualAlloc             thrpt   10  4190.481 ± 209.660  ops/s
MapBenchmarks.manual                  thrpt   10  2686.919 ±  60.247  ops/s
MapBenchmarks.mapIndexed              thrpt   10  3820.297 ±  34.474  ops/s
MapBenchmarks.sequenceZip             thrpt   10  2415.201 ±  60.417  ops/s
MapBenchmarks.withIndexAssociate      thrpt   10  2360.287 ±  43.676  ops/s
MapBenchmarks.withIndexAssociateByTo  thrpt   10  2311.393 ±  19.049  ops/s
MapBenchmarks.zip                     thrpt   10  4021.002 ± 127.191  ops/s
I can't really think of a reason why `manualAlloc` is slower than `iterator`, but I've spent too much time on this already 🙂
h
hahahahaha. you're the best, Karel
since it was the best version, mind posting the code for that one?
copy/paste instead of thinking is my specialty!
k
All of the code is on pastebin.
h
oh 😛
somehow missed that. too interested in the benchmarks
t
perhaps `manualAlloc` requires more hash calculations than `iterator`. but yea, can't see it either
k
But they both just call `Map.put` for each element.
h
What prize does @Arkady Bazhanov win?
k
He has to explain why it's faster first!
h
🎁
maybe somehow it's the requesting of the indices.
or probably the memory reallocation involved with the `MutableList`
the iterator approach might set aside the exact block of memory from the outset
🤷‍♂️
k
`for (i in listA.indices)` gets compiled to a java-style `int size = listA.size; for (int i = 0; i < size; i++)` so that should be fine.
And I preallocate the map manually as well, so that should be fine too.
I'll actually profile tomorrow.
I'll actually profile tomorrow.
t
`List(10_000) { Random.nextInt() }` ends up backed by an ArrayList, so those lookups should be O(1) cheap. my thinking is the JVM impl is crazy good with iterators?
h
does `MutableMap` not have to reallocate a new block after reaching a certain size?
t
oh hey what if
well jmh should even it out. but maybe it was more hash collisions in one vs the next?
k
@Hullaballoonatic Yes, but `manualAlloc` preallocates the right size.
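a guess at the shape of `manualAlloc` (the real code is on the pastebin; the name and sizing here are my assumption):

```kotlin
// presized to the element count, so the map shouldn't grow while filling
fun <K, V> manualAlloc(keys: List<K>, values: List<V>): Map<K, V> {
    val map = HashMap<K, V>(keys.size)
    for (i in keys.indices) {
        map[keys[i]] = values[i]
    }
    return map
}

val m = manualAlloc(listOf(1, 2, 3), listOf("a", "b", "c"))
println(m) // {1=a, 2=b, 3=c}
```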
@trevjones I ran everything twice, and got similar results.
h
How does it perform with non-random numbers?
t
I figured as much, but the pseudo-random could throw results in a single run.
k
Non-random numbers:
MapBenchmarks.iterator                thrpt   10  6338.874 ± 1001.794  ops/s
MapBenchmarks.manual                  thrpt   10  6469.501 ±   79.655  ops/s
MapBenchmarks.manualAlloc             thrpt   10  7929.022 ±  107.842  ops/s
MapBenchmarks.mapIndexed              thrpt   10  4913.874 ±   63.817  ops/s
MapBenchmarks.sequenceZip             thrpt   10  5423.528 ±  329.522  ops/s
MapBenchmarks.withIndexAssociate      thrpt   10  5299.546 ±   98.017  ops/s
MapBenchmarks.withIndexAssociateByTo  thrpt   10  4802.971 ±   98.678  ops/s
MapBenchmarks.zip                     thrpt   10  4801.796 ±  337.945  ops/s
Random numbers again:
MapBenchmarks.iterator                thrpt   10  5128.317 ± 151.363  ops/s
MapBenchmarks.manual                  thrpt   10  2673.094 ±  27.443  ops/s
MapBenchmarks.manualAlloc             thrpt   10  5484.122 ± 123.091  ops/s
MapBenchmarks.mapIndexed              thrpt   10  3889.280 ±  38.294  ops/s
MapBenchmarks.sequenceZip             thrpt   10  2499.712 ±  24.050  ops/s
MapBenchmarks.withIndexAssociate      thrpt   10  2383.809 ±  26.758  ops/s
MapBenchmarks.withIndexAssociateByTo  thrpt   10  2353.679 ±  62.273  ops/s
MapBenchmarks.zip                     thrpt   10  4128.566 ± 105.939  ops/s
Now `manualAlloc` is beating everything, as I would expect... Typical benchmarking 🙂
👏 1
i
@karelpeeters the difference between `iterator` and `manualAlloc` can be in the capacity of the map: `manualAlloc` uses the exact collection size, and `associateWith` (in `iterator`) takes into account the default map fill factor of 0.75, so it uses a capacity equal to 4/3 times the size.
We have had a request for such a function and even considered it for 1.3, but haven't found a good name for it, and `associateWith` has already been taken by the overload with a value selector. https://youtrack.jetbrains.com/issue/KT-21587
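the capacity difference described there, in simplified numbers (illustrative only, not the actual stdlib source):

```kotlin
val size = 10_000
val exactCapacity = size                         // what manualAlloc passes to HashMap
val stdlibCapacity = (size / 0.75f).toInt() + 1  // ~4/3 * size, sized so no rehash is needed
println(stdlibCapacity) // 13334
```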
k
Ah right, completely forgot about map fill factors. Thanks!