# advent-of-code
b
Spoiler alert in that thread. (graphical depiction of part of the data structure)
n
Well, sure, the data structure is a graph, but there are many ways of working with graphs and handling the visitation
b
did you see my solution in the thread?
n
I did
b
(I know my job is dealing with graphs, just from a higher level)
n
It didn't even really occur to me to use naive visitation tbh, you can almost never get away with that in real graph problems
you have to somehow ensure you don't double visit, reuse prior results
b
depends
n
In part1 there wasn't even anything that promised that it was a DAG
b
if you know they are not cyclic, and < 10k entries
I don't even bother
well, I would have found out fast that it was cyclic
n
Sure. I guess I keep expecting AoC to actually require more than just the simplest solution
Yeah, my thinking was like that at some point, but then I realized that by caching results for efficiency it was also guarding against cycles anyway
there's a solution there that's very nice, it looks almost the same as naive visitation but still caches
b
which?
n
I also found it easier/more-intuitive to reverse the graph for part 1
Instead of descending and checking if things contained gold, I reversed the graph and found all of gold's parents
to me that seemed more intuitive but evidently not for most people 🙂
e
I thought that's how you're supposed to do it. Find the bags (b1) that can fit shiny golds, find the bags (b2) that can fit b1, find the bags that can fit b2 and so on.
@Edgars you can do it either way; you can either iterate over all the nodes, start a traversal from each and see how many of them let you end up at gold. Or you can reverse the graph and simply do a traversal from gold
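A minimal sketch of that reverse-graph approach, assuming the rules are already reduced to a map from each bag color to the colors it directly contains (the function and variable names here are illustrative, not from anyone's actual solution):

```kotlin
// Invert the containment edges, then BFS upward from the target color.
// Returns every bag color that can (transitively) contain `target`.
fun bagsThatCanContain(target: String, rules: Map<String, List<String>>): Set<String> {
    // Reverse the graph: inner color -> set of outer colors that hold it
    val parents = mutableMapOf<String, MutableSet<String>>()
    for ((outer, inners) in rules) {
        for (inner in inners) parents.getOrPut(inner) { mutableSetOf() }.add(outer)
    }
    // Breadth-first traversal over the reversed edges
    val seen = mutableSetOf<String>()
    val queue = ArrayDeque(parents[target].orEmpty())
    while (queue.isNotEmpty()) {
        val color = queue.removeFirst()
        if (seen.add(color)) queue.addAll(parents[color].orEmpty())
    }
    return seen
}

fun main() {
    // The AoC 2020 day 7 sample rules, with counts dropped
    val rules = mapOf(
        "light red" to listOf("bright white", "muted yellow"),
        "dark orange" to listOf("bright white", "muted yellow"),
        "bright white" to listOf("shiny gold"),
        "muted yellow" to listOf("shiny gold", "faded blue"),
        "shiny gold" to listOf("dark olive", "vibrant plum"),
        "dark olive" to listOf("faded blue", "dotted black"),
        "vibrant plum" to listOf("faded blue", "dotted black"),
        "faded blue" to emptyList(),
        "dotted black" to emptyList()
    )
    println(bagsThatCanContain("shiny gold", rules).size) // prints 4
}
```

Because the traversal starts from the single "shiny gold" node, it only touches bags that actually matter, instead of starting a search from every node.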
@bjonnh you'll see in Michael's solution he has the fun line:
```kotlin
private fun containsShinyGold(memory: MutableMap<String, Boolean>, currentColor: String): Boolean = memory.getOrPut(currentColor) {
    parsed.getValue(currentColor).any { containsShinyGold(memory, it.second) }
}
```
b
yeah, I also hesitated to invert the graph, but not knowing part 2, I thought it was a risky move
yeah I just saw that
n
this basically looks like naive visitation but with caching, slick IMHO. props to @Michael de Kaste
b
I only optimize like that when things are too slow 😄
e
@Nir I understand, yeah. Just that the latter approach seems much more intuitive to me.
n
So, I've gone through different iterations of making assumptions in part 1 that did or didn't carry over to part 2. And at this point, my basic philosophy is: 1. Write some code that losslessly parses the data into something structured. 2. Solve part 1 using that parser as easily as possible. 3. Solve part 2 using that parser as easily as possible.
I don't write the parser worrying much about the details of how I want to solve part 1
As long as all the information is there you can easily reorganize it or throw some out
So for this day for example I simply parsed the file into List<Rule> where Rule is String + List<ContainedBag>, and ContainedBag is Int + String
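A sketch of a lossless parser along those lines, using the Rule/ContainedBag shapes just described (the regex and the `parseRule` name are my own assumptions, not from the chat):

```kotlin
// Lossless parse of one day-7 rule line into structured data.
data class ContainedBag(val count: Int, val color: String)
data class Rule(val color: String, val contents: List<ContainedBag>)

// Matches e.g. "2 muted yellow bags" -> count=2, color="muted yellow"
private val CONTENT = Regex("""(\d+) (\w+ \w+) bags?""")

fun parseRule(line: String): Rule {
    val (outer, inner) = line.split(" bags contain ")
    val contents = CONTENT.findAll(inner)
        .map { ContainedBag(it.groupValues[1].toInt(), it.groupValues[2]) }
        .toList()
    return Rule(outer, contents)
}

fun main() {
    println(parseRule("light red bags contain 1 bright white bag, 2 muted yellow bags."))
    println(parseRule("faded blue bags contain no other bags."))
}
```

"no other bags." simply produces no regex matches, so it falls out naturally as an empty contents list rather than needing a special case.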
b
same approach for the parsers, the solution I sent here is after cleaning up
n
I don't even worry about the graph structure in the parser. The part1 and part2 solutions then take the structured data and convert it into the most logical data structure, which happen to be complete opposites.
Yeah. I just stopped doing that kind of cleanup. I see some people have a single loop that computes both; I would think about that approach if they gave you both parts at the same time, but they don't, so I think for me this is the best approach.
b
oh it is just for me
n
If you allow part1 to "leak" into the parser, for example, then you won't even bother parsing the numbers for part 1.
b
I decided to do short solutions this year, not yet golfing but not too naive
I also decided to not separate parts so far
n
Yeah all of my solutions are basically at least 3 functions. getData(), part1(), and part2(). part1() and part2() both use the same getData()
b
I also never took a CS class so there are a lot of things I never learned
n
I mean if you know graphs well enough to solve this problem in a short period of time and use graphs at work you probably know more than the average CS student
😄 1
b
well, I use only what I need; I always wanted to widen my palette of algorithms a bit
so AoC is a good thing
m
memoization is so easy in kotlin, definitely felt more comfortable to me than reversing the graph. Thanks for the compliment @Nir
n
yeah, the fact that the getOr* functions in Kotlin take lambdas as their second argument is amazing. I think it's pretty fantastic how many language problems you can adequately solve with sufficiently nice lambdas
the combination of being super compact, `inline`, and receivers, in particular.
A lot of what Kotlin does with lambdas would typically be the province of macros or a special lazy-parameter feature. Dodging all that complexity while still solving those problems is, IMHO, probably the most distinctive thing Kotlin brings to the table overall (not to say other parts of the language are bad by any means, but many of Kotlin's other good features appear in other languages; its use of lambdas for lazy parameters and DSLs is more unique)
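A tiny illustration of that laziness with `getOrPut`: the lambda only runs when the key is absent, so the "expensive" work happens at most once per key (the names and the side-effect counter here are purely illustrative):

```kotlin
val cache = mutableMapOf<String, Int>()
var computed = 0 // counts how many times the lambda actually ran

// The lambda is the lazily-evaluated "default value": it is only
// invoked on a cache miss, then its result is stored in the map.
fun expensiveLength(s: String): Int = cache.getOrPut(s) {
    computed++   // side effect to show when the lambda runs
    s.length     // stands in for real work
}

fun main() {
    expensiveLength("shiny gold")
    expensiveLength("shiny gold") // cache hit: lambda not re-run
    println(computed)             // prints 1
}
```

In a language without lazy arguments or macros, an eagerly evaluated default would be computed on every call; passing a lambda gets the same effect with no extra language machinery.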
b
```
y.day07.Day7Bench.cacheOpt                           sample  123565   0.020 ±  0.001  ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.00            sample           0.019           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.50            sample           0.019           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.90            sample           0.021           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.95            sample           0.026           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.99            sample           0.039           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.999           sample           0.059           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p0.9999          sample           0.616           ms/op
y.day07.Day7Bench.cacheOpt:cacheOpt·p1.00            sample           0.884           ms/op
y.day07.Day7Bench.naive                              sample    3874   1.290 ±  0.006  ms/op
y.day07.Day7Bench.naive:naive·p0.00                  sample           1.219           ms/op
y.day07.Day7Bench.naive:naive·p0.50                  sample           1.262           ms/op
y.day07.Day7Bench.naive:naive·p0.90                  sample           1.380           ms/op
y.day07.Day7Bench.naive:naive·p0.95                  sample           1.446           ms/op
y.day07.Day7Bench.naive:naive·p0.99                  sample           1.931           ms/op
y.day07.Day7Bench.naive:naive·p0.999                 sample           2.262           ms/op
y.day07.Day7Bench.naive:naive·p0.9999                sample           2.359           ms/op
y.day07.Day7Bench.naive:naive·p1.00                  sample           2.359           ms/op
```
the caching version is just ~60 times faster 😄
(of course non linear with data size etc etc)