‼️ *Day 10 Solution Thread* ⁉️
a
Took a less functional approach today
a
Part 2 was a bit of fun. First chance to use Dynamic Programming this year.
```kotlin
fun main() {
    // reading the input is assumed; the original message omitted this line
    val input = java.io.File("day10.txt").readLines()
        .map(String::toInt).sorted()
    val adapters = listOf(0) + input + listOf(input.maxOrNull()!! + 3)
    val differences = adapters.zipWithNext().map { it.second - it.first }.groupBy { it }
    println("Part 1 : " + differences[1]!!.size * differences[3]!!.size)
    println("Part 2 : " + findRoutes(adapters).last())
}

fun findRoutes(adapters: List<Int>): List<Long> {
    // the accumulate setup was missing from the original message; reconstructed here
    return (1..adapters.last()).asSequence().accumulate(listOf(1L)) { acc, i ->
        acc + (i - 3..i - 1).filter { it in adapters }.sumOf { acc[it] }
    }.last()
}

/**
 * Accumulate Function
 *
 * Basically a combination of fold and map which uses the result of the previous element as the entry into the next.
 */
fun <T, R> Sequence<T>.accumulate(initial: R, operation: (acc: R, T) -> R): Sequence<R> = sequence {
    yield(initial)
    var accumulator = initial
    forEach {
        accumulator = operation(accumulator, it)
        yield(accumulator)
    }
}
```
m
e
https://github.com/ephemient/aoc2020/blob/main/kt/src/main/kotlin/io/github/ephemient/aoc2020/Day10.kt abuse of `fold`, using primitive arrays for speed.
m
```kotlin
import java.math.BigInteger
import java.util.TreeSet

object Day10 : AdventOfCode() {
    private val parsed = input.lines().map(String::toInt).toCollection(TreeSet())

    override fun part1() = parsed.fold(IntArray(3) { 0 } to 0) { (array, prev), cur ->
        array[cur - prev - 1]++
        array to cur
    }.let { (array, _) -> array[0] * (array[2] + 1) }

    override fun part2(): Any? {
        val memory = mutableMapOf(parsed.last() to BigInteger.ONE)
        parsed.descendingIterator().forEach { curIndex ->
            val curValue = memory.getValue(curIndex)
            parsed.subSet(curIndex - 3, true, curIndex, false).forEach {
                // the loop body was missing from the original message; this
                // reconstruction pushes the current count down to each predecessor
                memory.merge(it, curValue, BigInteger::add)
            }
        }
        return parsed.subSet(0, false, 3, true).sumOf(memory::getValue)
    }
}
```
reverse iteration with a TreeSet to build up a possibilities list all the way up to the first few members. Technically O(n log n) due to using a TreeSet, but they are easy to use in this instance so I didn't mind.
j
My day 10, interesting to see that a little caching can have such a dramatic effect on runtime in this case. Timings (in microseconds this time!)
```
| Platform | Average (µs) |
| -------- | ------------ |
| GraalVM  | 2483 ± 5719  |
| Node JS  | 6837 ± 8230  |
| Native   | 2432 ± 669   |
```
Ah the solutions with the look-ahead folds are nice, didn't think of that at all
a
I have simplified my solution to Part 2 so that it just uses a standard fold (NB the list of adapters contains the final one to the device)
```kotlin
fun findRoutes2(adapters: List<Int>): Long {
    // the fold setup was missing from the original message; reconstructed here
    return (1..adapters.last()).fold(listOf(1L)) { acc, i ->
        acc + (i - 3..i - 1).filter { it in adapters }.sumOf { acc[it] }
    }.last()
}
```
e
common advice not to append to an immutable list in a loop, that's O(n²) behavior
a
@ephemient Fair call. Can just use a MutableList and set in place.
```kotlin
fun findRoutes(adapters: List<Int>): Long {
    fun <E> MutableList<E>.setInPlace(i: Int, value: E): MutableList<E> {
        this[i] = value
        return this
    }
    val accumulator: MutableList<Long> = (0..adapters.last()).map { if (it == 0) 1L else 0L }.toMutableList()
    // the fold setup was missing from the original message; reconstructed here
    return (1..adapters.last()).fold(accumulator) { acc, i ->
        acc.setInPlace(i, (i - 3..i - 1).filter { it in adapters }.sumOf { acc[it] })
    }.last()
}
```
e
yep, and you can toss out old elements of the list since you don't need them, optimize that by shifting the items down, and optimize it more by using a fixed-size array... and then you have my solution
a
So are you just holding the last 3 elements?
e
using a LongArray(4) because it was slightly more convenient, but effectively yes
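A minimal standalone sketch of that rolling-window idea (the function name and shape are my own, not the actual solution): only the last four path counts are kept, indexed by joltage modulo 4.

```kotlin
// Count arrangements with only a 4-slot rolling window of path counts.
// ways[j mod 4] holds the number of paths reaching joltage j; slots that
// would correspond to joltages below 0 are still zero when they are read.
fun countPaths(adapters: Set<Int>): Long {
    val max = adapters.maxOrNull()!!
    val ways = LongArray(4)
    ways[0] = 1L                       // one way to start at joltage 0
    for (j in 1..max) {
        ways[j.mod(4)] = if (j in adapters)
            ways[(j - 1).mod(4)] + ways[(j - 2).mod(4)] + ways[(j - 3).mod(4)]
        else 0L                        // non-adapter joltages are unreachable
    }
    return ways[max.mod(4)]
}
```

Writing into slot `j mod 4` is safe because the value being overwritten belongs to joltage `j - 4`, which is never needed again.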
t
I'll write this up later, I have a busy day to get to...
```kotlin
class Day10(input: List<Int>) {

    private val adapters: List<Int> = input.plus(0).plus(input.maxOrNull()!! + 3).sorted()

    fun solvePart1(): Int =
        adapters
            .asSequence()
            .zipWithNext()
            .map { it.second - it.first }
            .groupingBy { it }
            .eachCount()
            .run {
                this.getOrDefault(1, 1) * this.getOrDefault(3, 1)
            }

    fun solvePart2(): Long {
        val pathCounts = LongArray(adapters.size) { if (it == 0) 1L else 0L }
        pathCounts.indices.forEach { pathIndex ->
            pathIndex.trailingRange(3).forEach { candidate ->
                if (adapters[candidate] + 3 >= adapters[pathIndex]) {
                    pathCounts[pathIndex] = pathCounts[pathIndex] + pathCounts[candidate]
                }
            }
        }
        return pathCounts.last()
    }

    private fun Int.trailingRange(n: Int): IntRange =
        (this - n).coerceAtLeast(0) until this
}
```
Part 2 figures out which candidates can get us to any given adapter, and adds the number of ways to reach each candidate to that adapter's count. We only have to look back at most three places. I'll probably refactor p2 to not be as clunky, but it works quickly and I can explain it. 🙂
e
Did anyone else do part 2 in reverse? Meaning, calculating the possible "paths" starting from the end? https://github.com/edgars-supe/advent-of-code/blob/master/src/main/kotlin/lv/esupe/aoc/year2020/Day10.kt 0.07 ms, 0.05 ms of which is init.
t
I went forward, but our approaches look similar.
w
so... day 10 part 2: I thought about optimizing by splitting the list of numbers wherever there is a gap of 3 (the only place where I can guarantee that both numbers around the split are mandatory), then computing the combinations on the sublists. That ended up quite fast and gave the right answer, but I am wondering if there is a much faster approach
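That splitting approach can be sketched like this (a hedged, brute-force version of my own, not w's code; it assumes segments between 3-gaps stay small, which the puzzle's diff structure guarantees):

```kotlin
// Cut the sorted chain at every gap of exactly 3 (both endpoints of such a
// gap are mandatory), count each small segment by brute force, and multiply.
fun countBySplitting(sorted: List<Int>): Long {
    val full = listOf(0) + sorted
    val segments = mutableListOf(mutableListOf(full.first()))
    for ((a, b) in full.zipWithNext()) {
        if (b - a == 3) segments += mutableListOf(b) else segments.last() += b
    }
    // brute-force count of valid chains within one segment: first and last
    // elements are mandatory, every subset of the middle is tried
    fun countSegment(seg: List<Int>): Long {
        if (seg.size <= 2) return 1L
        val middle = seg.subList(1, seg.size - 1)
        var count = 0L
        for (mask in 0 until (1 shl middle.size)) {
            val chain = listOf(seg.first()) +
                middle.filterIndexed { i, _ -> mask shr i and 1 == 1 } +
                seg.last()
            if (chain.zipWithNext().all { (a, b) -> b - a <= 3 }) count++
        }
        return count
    }
    return segments.fold(1L) { acc, seg -> acc * countSegment(seg) }
}
```

Each segment is independent because a 3-gap admits exactly one way across it, so the per-segment counts multiply.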
n
Today was one of the most intimidating up front but ended up having one of the shortest solutions for me
```kotlin
fun getData() = (aocDataDir / "day10.txt").readLines().map { it.toLong() }

fun part1() = (sequenceOf(0L) + getData().sorted().asSequence())
    .zipWithNext { a, b -> b - a }
    .groupingBy { it }.eachCount()
    .run {
        // the diffs are Longs, so the keys need Long literals
        getValue(1L) * (getValue(3L) + 1)
    }

fun part2(): Long {
    val data = getData().sorted()
    val result = mutableMapOf(0L to 1L)

    for (e in data) {
        result[e] = (1..3).map { result.getOrDefault(e - it, 0L) }.sum()
    }

    return result.getValue(data.last())
}
```
proper dynamic programming would be faster (a lot faster in a lower level language), but it was slightly more annoying to make all the edge cases etc work out so caching in a mutableMap and looping up seemed like the sweet spot
t
Nice on part 2, that's nice and clean.
n
thanks!
t
I'm going to explain that approach rather than my array approach, I'll give you credit @Nir
k
I arrived at a super complicated solution and I see an almost one-liner here. @Nir could you explain how this works?
b
```kotlin
fun main() {
    val lines = linesFile("data/2020/10_input.txt").map { it.toLong() }.sorted()
    val jolts = lines + listOf(lines.last() + 3L)
    var bottom = 0L
    val chain = listOf(0L) + jolts.mapNotNull {
        // `also` (not `let`) so the value itself is kept after updating bottom
        if (it > bottom && it - bottom <= 3) it.also { bottom = it } else null
    }
    val counts = chain.windowed(2).map { it[1] - it[0] }
    println(counts.count { it == 1L } * counts.count { it == 3L })

    val result = mutableMapOf(0L to 1L)
    println(jolts.map { e -> (1..3).map { result[e - it] ?: 0L }.sum().also { result[e] = it } }.last())
}
```
n
@krotki sure
It's pretty straightforward. The interpretation of the result map is that result[x] is the number of ways to make joltage x
So let's say hypothetically you have an adapter for joltage 20. How many ways can we make joltage 20? Well, before we had joltage 20, we must have had either joltage 17, 18, or 19
so, result[20] = result[17] + result[18] + result[19]
because result only depends on lower entries, if we simply sort the entries and then loop, then we can guarantee that result is already populated with the entries we need, which leads to much simpler code compared to recursion + memoization (IMHO)
let me know if that's not fully clear, or if it feels like I'm miscounting something
The solution is a bit unusual IME because usually when you loop upwards you're doing dynamic programming and use an array; when you use a map to cache results it's typically recursion + memoization. For this particular problem the combination of looping up + map just happened to lead to cleaner code
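A tiny runnable trace of that recurrence, on a hypothetical three-adapter input (not the puzzle data):

```kotlin
// result[x] = number of ways to make joltage x; result[0] = 1 is the base case.
// The input is assumed sorted, so every lower entry exists before it is read.
fun traceWays(adapters: List<Int>): Map<Int, Long> {
    val result = mutableMapOf(0 to 1L)
    for (a in adapters) {
        result[a] = (1..3).sumOf { result.getOrDefault(a - it, 0L) }
    }
    return result
}
// traceWays(listOf(1, 2, 4)) → {0=1, 1=1, 2=2, 4=3}
// e.g. joltage 4 is reachable via 0→1→4, 0→2→4, and 0→1→2→4
```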
k
Thanks! I also added some print lines to your solution and now I get how it works. Thanks again!
n
np 👍
Some people I'm talking to in the C++ slack took advantage of the fact that there are no deltas of 2 jolts in the data to come up with log N solutions lol
b
I have 2 jolts delta I think
oh no you're right
e
yeah it's tribonacci, but ultimately you still need to sort, so that puts a lower bound on the time complexity
n
It's not exactly tribonacci because numbers can be missing, but I know what you mean
But yeah I forgot you need to sort regardless. Still log N for the post sorting part is impressive
I sorted originally like many people, but you can actually get a pure linear solution with simple recursion and caching, I think
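For reference, the run-counting version of the tribonacci idea can be sketched as follows (names are made up, and it assumes every diff is 1 or 3, as in this discussion): arrangements multiply per run of consecutive 1-diffs, and the per-run counts follow the tribonacci recurrence.

```kotlin
// Number of arrangements for a run of k consecutive 1-diffs:
// w(0)=1, w(1)=1, w(2)=2, w(3)=4, then w(k) = w(k-1)+w(k-2)+w(k-3).
fun arrangementsByRuns(sorted: List<Int>): Long {
    val trib = mutableListOf(1L, 1L, 2L, 4L)
    while (trib.size <= sorted.size) trib += trib.takeLast(3).sum()
    var total = 1L
    var run = 0
    for ((a, b) in (listOf(0) + sorted).zipWithNext()) {
        if (b - a == 1) run++ else { total *= trib[run]; run = 0 }
    }
    return total * trib[run]   // flush the final run
}
```

Runs are independent because a 3-diff forces both of its endpoints to be used, which is the same observation as the split-at-3-gaps approach above.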
e
the problem relies on all numbers being in 1..3*n (otherwise there's no solution) so there are non-comparison sorts in O(n)
b
non comparison sorts?
you make a 3n sized array and place things there ?
n
yeah, sure, I agree with that
but most people are just calling the built in sort which is obviously NlogN
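That presence-array idea can be sketched like this (a hypothetical helper, assuming distinct values each at most 3·n, which holds because n adapters starting from 0 with diffs of at most 3 cannot exceed 3n):

```kotlin
// Non-comparison "sort": mark each value in a boolean array of size 3n + 1,
// then walk the array in order. O(n) time, no comparison sort involved.
fun linearSort(values: List<Int>): List<Int> {
    val present = BooleanArray(3 * values.size + 1)
    for (v in values) present[v] = true
    return present.indices.filter { present[it] }
}
```

This is essentially a counting sort specialized to distinct keys.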
e
log n for the post-sorting part is only possible if it's all a run of 1 diffs… with our input in practice, we are limited to runs of 4 1-diffs, so we're limited to O(n/5)=O(n) there still
n
I talked to someone who said they had logN by exploiting the fact that all diffs are 3 or 1
so you can basically break things up into blocks of 3
use matrices
and do it that way
e
yes, but breaking it up is O(n)
n
yeah that's true
ah well
e
fun note: Haskell's IntSet (Set specialized to Int keys) takes advantage of the limited data range and does actually sort in O(n)
n
The recursive caching solution felt pretty elegant as well, though slightly longer to write:
```kotlin
fun MutableMap<Long, Long?>.part2Helper(key: Long): Long {
    get(key)?.let { return it }
    if (key !in this) {
        return 0L
    }
    return (1..3).map { part2Helper(key - it) }.sum().also { set(key, it) }
}

fun part2Fast(): Long {
    val cache = mutableMapOf<Long, Long?>().apply {
        getData().associateTo(this) { it to null }
    }
    cache[0L] = 1L
    return cache.part2Helper(cache.keys.maxOrNull()!!)
}
```
e
(they document the complexity as dependent on the machine word size, but that's a constant)
without recursion you can do the second part in O(1) memory
not that it changes the overall algorithmic complexity, but in practice the heap is much bigger than the stack
n
hmm how would that work
i mean you could just explicitly use a queue/heap obviously
you're already using O(N) heap memory either way
e
yep, using O(N) for the sorted list is unavoidable, so O(1) or O(N) more space doesn't change the complexity. but it's possible to do the second half in O(N) heap O(1) stack which is less limited than recursion (required to handle inputs such as https://redd.it/kaenbn), or O(1) heap O(1) stack