<Advent of Code 2023 day 12> :thread:
# advent-of-code
n
I was gonna knock out Part 1 quick so I can think about Part 2 while I'm sleeping. But looking at the puzzle I think I'm gonna sleep on Part 1 instead...
🤝 1
k
Where is my
@lru_cache
j
Hm, part 1 is possible to solve with brute force; for part 2 I'm struggling with heap overflow on the example input. I'll have to come up with something
If you wanna play by hand, KDE has Picmi as a graphical app to play it.
n
Oh man, just solved part 1!!!
b
brute forced p1, not gonna work on p2...
i've also learned that i'm real bad at dealing with strings in kotlin
d
Lol, yeah... my input had a max of 18
?
spots in a string. Brute force is manageable for 2^18 possibilities, but 2^94 is going to take a while... 😅
m
I brute forced part 1 but so far solutions I tried for part 2 didn't work out.
😢 2
l
I'm wondering if you can extrapolate the combinations from just doubling the sequence
j
@Marcin Wisniowski I feel you, yeah. Might be an idea to take a break and look at it again after a short walk or something like that
@Lukas Lebovitz It's possible for the ones that are determined in the single case at least
d
I’m thinking about recognizing if the smaller pattern string starts/ends with a
.
and then just return a
possibleArrangements^5
for that group… something like that.
d
Yeah, I think you're on to something, @daugian. Can cache those smaller portions, maybe...
m
not the best algorithm, but it works for now, have to go to work, will clean up later
n
the 16384 is 2^2 * (2^3)^4, and the 2500 is 4 * 5^4. So it looks like understanding the grouping of ? from part 1 (and considering the extra ? from part 2) might make it possible to compute the value for part 2
e
You need memoization for part 2. To make it easy, it is helpful to write the brute-force part as a function of the current search state that returns the number of combinations. That way, adding memoization is straightforward.
❤️ 1
👀 1
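For readers following along: a minimal Kotlin sketch of what that advice can look like for day 12 (my own naming and state choice, not anyone's actual code from the thread). The brute-force search becomes a function of (index into the pattern, index of the next group) that returns a count, and getOrPut on a map adds the memoization:

```kotlin
// Hypothetical sketch: count arrangements of a springs pattern like "?###????????"
// against damaged-group sizes like [3, 2, 1], memoized on the search state (i, g).
fun countArrangements(pattern: String, groups: List<Int>): Long {
    val cache = HashMap<Pair<Int, Int>, Long>()

    fun count(i: Int, g: Int): Long = cache.getOrPut(i to g) {
        // All groups placed: valid only if no '#' is left unmatched.
        if (g == groups.size)
            return@getOrPut if ((i until pattern.length).none { pattern[it] == '#' }) 1L else 0L
        if (i >= pattern.length) return@getOrPut 0L
        var total = 0L
        // Option 1: treat pattern[i] as operational ('.').
        if (pattern[i] != '#') total += count(i + 1, g)
        // Option 2: start a damaged run of length groups[g] at i, followed by a separator.
        val len = groups[g]
        val fits = i + len <= pattern.length &&
            (i until i + len).none { pattern[it] == '.' } &&
            (i + len == pattern.length || pattern[i + len] != '#')
        if (fits) total += count(i + len + 1, g + 1)
        total
    }
    return count(0, 0)
}
```

On the puzzle's published sample rows this yields 1 for `???.###` with groups [1,1,3] and 10 for `?###????????` with [3,2,1].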
d
But what to cache? Arrangement splits on
.
and cache the number of those arrangement possibilities?
e
You cache the result for your search state. There are many ways to define a search state. In my code the state is defined by three integers:
• i: the number of chars of the pattern analyzed so far (index of the current char)
• j: the index of the group we are currently building
• u: the number of # already used in the current group
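Translated into code, that three-integer state might look like the following sketch (my own reconstruction for the same day 12 counting problem, not the implementation being described):

```kotlin
// i = chars consumed, j = index of the group being built, u = '#' used in it so far.
fun countByState(pattern: String, groups: List<Int>): Long {
    val cache = HashMap<Triple<Int, Int, Int>, Long>()

    fun go(i: Int, j: Int, u: Int): Long = cache.getOrPut(Triple(i, j, u)) {
        if (i == pattern.length) {
            // Valid iff every group is closed, or we are exactly at the end of the last one.
            return@getOrPut if (j == groups.size && u == 0 ||
                j == groups.size - 1 && u == groups[j]) 1L else 0L
        }
        var total = 0L
        val c = pattern[i]
        // Read this char as '.': legal between groups (u == 0) or right after a full group.
        if (c != '#') {
            if (u == 0) total += go(i + 1, j, 0)
            else if (u == groups[j]) total += go(i + 1, j + 1, 0)
        }
        // Read this char as '#': extend the current group while it still has room.
        if (c != '.' && j < groups.size && u < groups[j]) total += go(i + 1, j, u + 1)
        total
    }
    return go(0, 0, 0)
}
```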
d
Clear! Thanks!
m
@elizarov Given the fact that memoization is a pretty standard programming technique, are there any plans of introducing MemoizedDeepRecursiveFunction ala Arrow?
b
ok got pt 2
wrote a recursive search, then 30+ minutes later i slapped memoize on it and it runs in 200ms
m
hint for anyone wanting to do memoization, this is an easy way:
fun functionToMemoize(arg1: String, arg2: List<Int>): Long {
    val cache = hashMapOf<Pair<String, List<Int>>, Long>()
    fun recursion(arg1: String, arg2: List<Int>): Long = cache.getOrPut(arg1 to arg2) {
        // your algorithm here (recursive calls go through recursion())
        0L
    }
    return recursion(arg1, arg2)
}
e
Memoization is so easy to add to a recursive search. It hardly warrants a separate function. Moreover, I usually don’t encapsulate my search state into a class; I memoize it in arrays. If you give me a memoizing function, I’ll have to define a class as well.
j
Same time for me: https://github.com/JBeet/MyAdventOfCode2023/blob/main/src/aoc12/Day12.kt I used only two variables for the memoization instead of Roman's 3 (only i & j), but that probably made my algorithm a little bit more complex. Maybe I will try the three-variable version as well, to see if it is easier to understand.
I like the three variable version, but decided to use the number of broken items that were not yet used in the current group (and -1 for a closed group). https://github.com/JBeet/MyAdventOfCode2023/blob/main/src/aoc12/Day12.kt
d
My part 2 runs in 1.5sec but I’m fine with that 😅
Probably the regex making it slow…
w
Still at Ahold, Johan? Nice that you also do AoC!
🙌 1
d
No, not at Ahold at the moment.. Will DM you so as not to spam this thread!
j
Is there a stdlib method to add a cache to
DeepRecursiveFunction
? I tried to add extension function
DeepRecursiveFunction.cached()
but the
val block
inside the
DRF
is
internal
so it’s not really possible. Best I can do is
inline fun <T, R> cachedDRF(
    cache: MutableMap<T, R> = mutableMapOf(),
    crossinline block: suspend DeepRecursiveScope<T, R>.(T) -> R
): DeepRecursiveFunction<T, R> =
    DeepRecursiveFunction { value ->
        if (value in cache) cache[value]!! else block(value).also { cache[value] = it }
    }
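For what it's worth, a self-contained sketch of how such a helper can be used (the cachedDRF definition is repeated so the snippet stands alone, and Fibonacci is just a stand-in for any memoized recursion):

```kotlin
// Same shape as the cachedDRF helper above: a DeepRecursiveFunction whose body
// consults a map before running the wrapped block.
inline fun <T, R> cachedDRF(
    cache: MutableMap<T, R> = mutableMapOf(),
    crossinline block: suspend DeepRecursiveScope<T, R>.(T) -> R
): DeepRecursiveFunction<T, R> =
    DeepRecursiveFunction { value ->
        cache[value] ?: block(value).also { cache[value] = it }
    }

// Fibonacci as a stand-in example; without the cache this would never finish for n = 90.
val fib = cachedDRF<Int, Long> { n ->
    if (n <= 1) n.toLong() else callRecursive(n - 1) + callRecursive(n - 2)
}
```

Since callRecursive re-enters the wrapper passed to DeepRecursiveFunction, nested calls also go through the cache.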
m
@Jakub Gwóźdź I just asked Roman, but he seemed adamant in not seeing a need for it. Arrow has MemoizedDeepRecursiveFunction though
e
Btw, I can hardly imagine a case where you need both deep recursion and memoization at the same time.
e
I did manage part 2 with recursion and a couple hacks, no caching
not sure it's the intended solution though, it takes ~15s
could probably just do the obvious parallelization thing…
j
fair enough 🙂
e
https://github.com/ephemient/aoc2023/blob/main/hs/src/Day12.hs without parallelization,
benchmarking Day 12/part 1
time                 5.637 ms   (5.620 ms .. 5.649 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 5.675 ms   (5.662 ms .. 5.688 ms)
std dev              39.45 μs   (33.44 μs .. 48.07 μs)

benchmarking Day 12/part 2
time                 14.59 s    (14.49 s .. 14.70 s)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 14.57 s    (14.55 s .. 14.59 s)
std dev              22.63 ms   (7.623 ms .. 29.77 ms)
variance introduced by outliers: 19% (moderately inflated)
and with parallelization (on a 2-core GitHub Actions runner, so I'm not sure why the speedup could be >2x, but whatever):
benchmarking Day 12/part 1
time                 3.160 ms   (3.142 ms .. 3.177 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 3.157 ms   (3.142 ms .. 3.170 ms)
std dev              44.06 μs   (35.75 μs .. 60.47 μs)

benchmarking Day 10/part 2
time                 6.614 s    (6.484 s .. 6.716 s)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 6.693 s    (6.648 s .. 6.778 s)
std dev              81.98 ms   (2.349 ms .. 98.57 ms)
variance introduced by outliers: 19% (moderately inflated)
now to decide whether I want to port this to Kotlin or change to a better approach first…
w
For memoization you can use something like the following:
object CacheSupport {
    @PublishedApi internal var _cache: MutableMap<*, *>? = null
    inline fun <T, R> withCache(arg: T, crossinline perform: (T) -> R): R {
        @Suppress("UNCHECKED_CAST")
        val existingCache: MutableMap<T, R>? = _cache as? MutableMap<T, R>
        val cache: MutableMap<T, R> = existingCache ?: hashMapOf<T, R>().also { _cache = it }
        try {
            return cache.getOrPut(arg) {
                perform(arg)
            }
        } finally {
            if (existingCache == null) {
                _cache = null
            }
        }
    }
}
Then call your recursive function like this:
fun process(state: State) = CacheSupport.withCache(state) {
   // Do your magic here
}
😎 2
e
yeah I just added caching to my Haskell solution and it's running in about 16ms now so I guess I'll copy that to Kotlin
e
I couldn't solve part 2 with just recursion and some tricks, but a simple cache worked like a charm!
k
For some reason, the
arrangement
function took way too long for me to get right. It still seems to be doing unnecessary things, but it works, so I don't mind too much 😅 Memoization was the easiest part, it turns out https://github.com/kingsleyadio/adventofcode/blob/main/src/com/kingsleyadio/adventofcode/y2023/Day12.kt
m
Another harder one! Took me a while to find the correct place to cache states for part 2. Unrefactored, will refactor later … https://github.com/MikeEnRegalia/advent-of-code/blob/master/src/main/kotlin/aoc2023/day12.kt
b
interesting, i didn't even know about
DeepRecursiveFunction
, that's neat, although i didn't need it for this case
p
@Suppress("FunctionName")
fun <T, R> CachedDeepRecursiveFunction(block: suspend DeepRecursiveScope<T, R>.(T) -> R) =
    mutableMapOf<T, R>().let { cache -> DeepRecursiveFunction { cache.getOrPut(it) { block(it) } } }
This came in useful 😄
b
the recursion depth for my search is bounded by the length of the input string, and the max string for part2 is
100
p
I don't tend to need DRF all the time, but at times I prefer it over normal recursion.
j
Yeah, DRF is not a necessity here, merely a convenience.
b
i tried switching from basic caching in a map to using arrow's
MemoizedDeepRecursiveFunction
, and it didn't work at all. runtime went from 400ms to > 5m (i stopped it)
p
Benchmark                        Mode  Cnt   Score     Error  Units
BenchDay12.deepRecursivePart1    avgt    3   0.816 ±   0.037  ms/op
BenchDay12.deepRecursivePart2    avgt    3  26.353 ±   0.818  ms/op
BenchDay12.normalRecursionPart1  avgt    3   1.097 ±   0.098  ms/op
BenchDay12.normalRecursionPart2  avgt    3  33.699 ±   0.246  ms/op
I get a slight performance increase using my Cached DRF vs caching over a normal recursive function
b
yea cached DRF works fine, it's the MDRF that seems broken, probably a bug?
p
I just swapped out my CDRF for MDRF and it ran fine (Arrow 1.2.1)
b
i'm on 1.2.0, i'll try and update
doesn't look like it fixed it:
DRF took 400.858433ms (1.00x): 18093821750095
MDRF took 17m 23.347227919s (1.00x): 18093821750095
p
usage looks right to me - not sure why the performance is so bad. PuzDSL framework looks familiar too 😄
b
lol yea, i was lookin around for examples of frameworks when i wanted to write one and found that one quite impressive 🙂
p
Benchmark                        Mode  Cnt   Score     Error  Units
BenchDay12.deepRecursivePart1    avgt    3   0.602 ±   0.032  ms/op
BenchDay12.deepRecursivePart2    avgt    3  12.180 ±   4.004  ms/op
BenchDay12.normalRecursionPart1  avgt    3   0.744 ±   0.047  ms/op
BenchDay12.normalRecursionPart2  avgt    3  16.028 ±   2.507  ms/op
optimized it slightly - and I've pushed now: https://github.com/tKe/aoc/blob/main/kt/src/main/kotlin/year2023/Day12.kt
t
What a day for me 😮
• part 1: 20 minutes (and the first time I had a rank below 1000 😊)
• part 2: 15 hours (with a rank of almost 12000)
    ◦ I tried so many different concepts, and even after reading @elizarov's tip about caching I had a lot of trouble (but didn't want to look directly into someone else's code)
    ◦ eventually, I managed to reduce the list of "." symbol lengths (e.g. [1, 0, 4], which caching wouldn't help with) into its size and sum (e.g. {size=3, sum=5}), and caching this did the trick 😮‍💨
e
I changed my implementation to DP. Seems a little faster, and arguably more functional
b
@phldavies i asked in #arrow and they said
> MDRF is using an atomic reference under the hood to manage concurrent changes. This results in it copying maps over and over again instead of using a concurrent hashmap or equivalent
so i'm not sure why it worked for you but not I
p
I didn't notice much of a performance difference when switching - but then I cached per-record/line whereas you share your cache among all records.
obviously with a larger map, the copying for a single insertion becomes much heavier
b
Ah yea, i made the MDRF creation per-call and it went from 17 minutes to 1s, but still a bit slower than just caching
p
Wow, another challenge 😊 https://github.com/PaulWoitaschek/AdventOfCode/blob/280ea93a94803b49633d4b0680902c079e3af3a8/2023/src/main/kotlin/aoc/year2023/Day12.kt What finally brought a big speed up was an early exit
if (recognized.size > damages.size) return@getOrPut 0
recognized.forEachIndexed { index, i ->
  if (damages[index] != i) return@getOrPut 0
}
e
In the stream I was just about 3 hours away from the part 2 solution.. pretty close )
e
oh and I never did post a link to https://github.com/ephemient/aoc2023/blob/main/kt/aoc2023-lib/src/commonMain/kotlin/com/github/ephemient/aoc2023/Day12.kt… you can see recursive+memo in the git history but it's DP now
DP with about as minimal state as I could get it
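A bottom-up version of such a DP might look like the following Kotlin sketch (my own formulation over (groups placed, chars consumed), not ephemient's actual code):

```kotlin
// dp[j][i] = number of ways to consume the first i chars with the first j groups placed.
fun countDP(pattern: String, groups: List<Int>): Long {
    val n = pattern.length
    val dp = Array(groups.size + 1) { LongArray(n + 1) }
    dp[0][0] = 1L
    for (j in 0..groups.size) for (i in 0..n) {
        val ways = dp[j][i]
        if (ways == 0L) continue
        // Step over an operational char ('.', or '?' read as '.').
        if (i < n && pattern[i] != '#') dp[j][i + 1] += ways
        // Place the next damaged group at i, followed by a separator (unless at the end).
        if (j < groups.size) {
            val end = i + groups[j]
            if (end <= n && (i until end).none { pattern[it] == '.' } &&
                (end == n || pattern[end] != '#')
            ) dp[j + 1][if (end == n) n else end + 1] += ways
        }
    }
    return dp[groups.size][n]
}
```

This visits each state once, so it avoids both recursion depth and hash-map overhead compared to the memoized search.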
r
I had started with an iterative approach for part 1, generating all combinations 😬, but couldn't wire my brain to perform memoisation for part 2. Switching to a recursive solution, it worked like a charm https://github.com/ritesh-singh/aoc-2023-in-kotlin/blob/main/src/day12/Day12.kt
n
Verrrry late to the party, which has moved on, but I just completed Part 2. This was unbelievably difficult for me. I started over like three or four different times. Strangely, my last attempt was very similar to my original attempt (which took 12 hours to compute ~500 lines). I just figured out how to make the caching work for it. Debugging the recursive cached algorithm was such a nightmare. https://github.com/nbanman/Play2022/blob/master/src/main/kotlin/org/gristle/adventOfCode/y2023/d12/Y2023D12.kt
👏 2
j
Finally finished part 2 🥳 I'm proud of myself that I kept going and didn't spoil myself with solutions.
🎉 4