# advent-of-code
Day 16 solution thread :kotlin-gradient:
ok today was fun. lost half an hour debugging a working solution only because of f... ints. I should've known by now that whenever there's multiplication in a puzzle, the first thing to do is convert to Long
Today was a much better puzzle than yesterday
my own fault for being insane enough to do every single day in 4 different languages, but… re-implementing the parsing 4 times was not fun
ah damn maybe its overflow
sigh stupid overflow
I knew it had to be something dumb because the odds that I made a mistake in adding and propagating the constraints and actually ended up with a unique solution were close to nil; it's inevitable that I get the hard part right and then screw up multiplying 6 numbers 🙂
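For illustration, a minimal sketch of the trap (the six factors here are made up, not the actual puzzle values):

```kotlin
fun main() {
    val factors = listOf(101, 103, 107, 109, 113, 127)

    // Int arithmetic silently wraps around on overflow, so this is wrong:
    val wrapped = factors.reduce { a, b -> a * b }

    // Widening to Long before multiplying keeps the full product:
    val product = factors.fold(1L) { acc, n -> acc * n }

    println(wrapped)  // some wrapped-around Int
    println(product)  // 1741209542339
}
```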
why do we have `sum()` but no `product()`
probably because it's way less useful
You can use a `fold` for a product, that's probably a bit easier to read than `myListOfInts.reduce { a, b -> a * b }`
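Kotlin's stdlib really does have `sum()` but no `product()`; a trivial extension fills the gap (hypothetical name, returning Long to dodge the overflow issue from earlier):

```kotlin
// Hypothetical product() extension, analogous to the stdlib's sum().
// Returns Long so the accumulator can't silently wrap.
fun Iterable<Int>.product(): Long = fold(1L) { acc, n -> acc * n }

fun main() {
    println(listOf(2, 3, 7).product())  // 42
}
```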
Anyway. I loved this one. I'm late today as usual because I had a lot of meetings this morning 🙂 Blog: Code:
i needed the fold because i was going from Int to Long
Ah. I did the conversion a bit upstream when I mapped out my values.
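Both approaches side by side (made-up values; `myListOfInts` is the hypothetical name from the snippet above):

```kotlin
fun main() {
    val myListOfInts = listOf(2_000_000, 3_000, 5)

    // Convert upstream: widen while mapping, then a plain reduce works.
    val upstream = myListOfInts.map { it.toLong() }.reduce { a, b -> a * b }

    // Or keep Ints and widen via a fold with a Long accumulator.
    val folded = myListOfInts.fold(1L) { acc, n -> acc * n }

    println(upstream)  // 30000000000
    println(folded)    // 30000000000
}
```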
yeah, i should probably do that. should really just avoid Int like the plague sadly
I kept it as Int and it gave the wrong answer, and it took me a good 15 minutes of stepping through the debugger and then manually calculating the product before I realized what I had done wrong. 🙂
As soon as I see product of Int I immediately convert to Long now.  Been bitten too many times
yeah as I've mentioned before I think it's a real shame that the "default" Int in Kotlin is 32 bit
the good Java giveth, the good Java taketh away 😉
32-bit Ints are good for the vast majority of applications. Defaulting to 64-bit would be a waste of memory. How often do we use Short or Byte even when we could?
I mean 64 bits is the effective default in languages that are far more memory conscious than Kotlin
It's not really a significant amount of memory in most cases. When you really have a crazy huge array of integers, that would be the time to think about the smallest integer you can use, if it's important
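Back-of-the-envelope numbers for that "crazy huge array" case (primitive JVM arrays, ignoring the small object header):

```kotlin
fun main() {
    val n = 100_000_000  // a hundred million elements
    // IntArray stores 4 bytes per element, LongArray 8.
    println("IntArray:  ~${4L * n / (1 shl 20)} MiB")  // ~381 MiB
    println("LongArray: ~${8L * n / (1 shl 20)} MiB")  // ~762 MiB
}
```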
32 bits is definitely not good for "the vast majority of applications"; most applications i've worked on would hit overflows if they actually used 32 bits everywhere
the thing is that the JVM's instructions are apparently 32 bit by default as well, or something like that, which mostly renders this argument moot
but to the degree that there is a "default" in languages like C++ or Rust, it's 64 bit, so I suspect it's the right choice
Well, let me rephrase... In Rust, C++, Go, and Swift, you'll typically see the "default" as 64 bits on 64 bit system and 32 bits on 32 bit systems
note that on actual memory constrained devices, you have aarch64-ilp32, which is a 64-bit ARM processor intentionally running with 32-bit ints and pointers
I don't know what you work on, but in my experience, 32-bit int has never been an issue in our Java backends or Android clients
I've heard people say this before; all I can say is that your experiences seem to be relatively unique. Most recently designed languages think it's worthwhile to take extra steps to protect against integer overflow, and you'd tend to think that's based on real experience
as well as compilation modes being added in older languages (like C, C++) that trap integer overflow, calls from experts for better native support for integer trapping ( ), etc
But we had this discussion before, Kotlin clearly has this 32 bit integer by default for Java compatibility concerns. If not for that, it's extremely likely that Kotlin would join the vast majority of mainstream languages (listed above, along with others) and default to 64 bit
(on 64 bit platforms, which is the majority at this point)
trapping seems reasonable, providing an optional performance versus safety slider
that is something you can't do in Java, for various reasons which we've discussed before
but 64-bit pointers come with increased icache and dcache pressure, which may not show up in microbenchmarks but does affect whole-system performance, even on 64-bit systems with large memory
pointer-heavy applications like interpreters - and the JVM itself - perform worse with 64-bit pointers, all else being equal
and it's not something you can just flip on and off (well, except for the JVM's compressed OOPs)
it seems like you could use 64 bit integers by default, and still have truncated 32 bit pointers in most places, perhaps
but I'm far from an expert on the JVM. But regardless, the point is still the same, it's because of limitations of the JVM
heck, that's what x32-abi is, and the aarch64-ilp32 I mentioned above. I know the technique dates at least back to the DEC Alpha, which was a 64-bit processor that primarily ran 32-bit applications, with extensions to access more memory if needed
Thinking back to day 16's program, when writing it I should have thought super carefully about all the places where I was using integers only to index, and the places where I was using integers only as arithmetic values. It's not always that easy or obvious, because sometimes you end up wanting to index by these values, for example, and then you wouldn't be allowed to.
But basically it's just a bunch of extra mental baggage that you'd get to avoid
again I want to be clear, i'm sure it's the right choice in Kotlin; the huge advantage of running on the JVM is obviously well worth it. It's just not a decision that makes any sense outside that context (which is why pretty much all languages that compile to native don't make it)
I think if anything it's a tribute to Kotlin that 90% of the small grievances I encounter with the language, the explanation is "because Java"
(because Java/JVM)
I disagree about the native aspect - the move to 64-bit was driven by applications which needed the additional address space, and so they led it with 64-bit pointers. when everybody else followed, that's what they did, but it isn't always suitable (I recall Perl running up to 50% slower on the same processor, at the beginning), and that's why there's efforts like the x32 etc. I mentioned. but the ship has sailed on that, for anybody wanting to take advantage of library compatibility
like how `long` will be stuck at 32 bits on Windows, even 64-bit Windows, for compatibility reasons...
Not really sure what your argument is here, nor how perl (of all languages), is relevant
the performance differences for C++ on 32-bit and 64-bit systems are going to be very small; they each have advantages and disadvantages
we use 64-bit pointers because the first users of 64-bit processors needed them, and applications which came later used them too for compatibility, even at performance detriment
the fact that some language implementations (like Perl) didn't keep up... yes, exactly
well it was also because intel's early 64-bit processors kinda sucked
the argument here isn't really the history of how we arrived to this world...
the point is that at this point, most systems are running 64 bit, and in most cases, you're going to see fairly trivial performance differences in well optimized code
and if you're compiling to native and you have a 64 bit native pointer anyway, there's just no reason at all really to default to 32 bit integers
the performance difference is not too bad for some types of code, but for things like Perl and Python (and JVM, without compressed OOPs) where everything is represented by a pointer, it's not a trivial difference
I'm not saying it's a trivial difference. I'm just saying, these are problems with the implementations of these languages.
if you don't like the word "problems" then just imagine I said "disadvantages"
if your language implementation doesn't have this disadvantage, by using fewer pointers, having better escape analysis, depending more on generating good native code for speedups over the kind of JIT magic of the JVM
then you don't really have any reason to do this
All these examples btw do not compile to native. Of course, when you're running a whole VM and acting on pointers everywhere within a program, it's easy to believe that 32 vs 64 bit is a bigger deal
C++, Rust, Swift, and Go all compile to native, and all make exactly the same choice wrt integers. doesn't seem coincidental.
python just accepts its fate as being super slow and indirection-filled anyway and has an infinite precision bigint type 🙂
I'm not sure what you mean by that - integers, as in the default `int`, is 32-bit
that's inherited from C, yes
but all the standard integer types in the C++ library are 64 bit on a 64 bit system, generally
int is the default integer type in C, not in C++
std::size_t and std::ptrdiff_t are the "default" integer types in C++, they are what the standard library uses everywhere (the same way that Kotlin's standard library uses Int everywhere)
only because they're expected to hold pointer-sized objects, which are (for generally bad reasons) 64-bit
Because they're matching the system on which they are running....
if your system is already natively doing 64 bit operations, then generally you gain zero benefit working with a single 32 bit integer, compared to working with a single 64 bit integer
if you have a huge container of them that's another story
I gave 3 examples of 64-bit processors running with 32-bit pointers??
"generally" 3 obscure examples don't disprove the rule
they had good reasons for existence. alpha died for other reasons, x32-abi and aarch-ilp32 have issues with binary compatibility (of course), they were driven by embedded where that isn't a concern
but we should have gone with that model to start with
not really sure what model you are talking about
32 bit pointers with 64 bit processors? Okay, but we don't
in that world, there would be significant performance advantages perhaps to using 32 bit integers in most places. In the real world, in most situations there isn't.
That's why all these languages make the same choice. Regardless of the reasons for it, I'm glad to live in this world, because it generally means I can use 64-bit integers as my default, even in very, very fast code, and not feel bad about it
in most cases, the JVM is effectively emulating that, by using 32-bit offsets into its heap (compressed OOPs)
and it absolutely does help performance
yes, but the JVM is emulating that overtop of an existing system that is already using 64 bit pointers, so it's not like you're gaining anything per se, just limiting the damage done to perf by the JVM
I think you're making a very simple thing very complicated. The bottom line is that in most native languages targeting x86-64, you get to use a 64-bit integer as your default type, in large part because there's no real penalty for doing so. In JVM/Kotlin, you're forced to use a 32-bit integer as your default because there is a penalty, so you use 32-bit integers, and of course still end up with perf that's not as good as the fastest native languages. That to me is unfortunate.
I view Kotlin only as a "good enough" performance language anyway so I'd happily sacrifice a bit of perf to avoid correctness problems or have tons of Int/Long conversions
but obviously that's just me
interpreters themselves are native applications. and I sense we have a difference of opinion here, but as far as I'm concerned, they're more greatly impacted, and would benefit more from 32-bit pointers than anything else (other than HPC) would lose from it
Err, I don't think the problem is a difference of opinion per se.... you seem again to be talking about this alternate reality where native systems ran with 32 bit pointers to help interpreters, or something like that
I'm just interested in the real world and real platforms
if your main target is x86-64 and similar platforms, it's only issues in your language implementation that make a 32 bit default integer sensible. There's really nothing to argue about here when this is exactly what all native languages are doing, including obviously the fastest languages
for the record - it's not an alternate universe, it's actually available on Linux, and has been for ~10 years
..... what do these obscure links prove?
this project looks like it hasn't been touched in like 6 years
nobody is releasing their software with this
it may not be an alternate universe in the technical sense but it is in the practical sense. anyway
I mean, it's integrated into Linux, GCC, glibc, there's Debian packages... the project itself doesn't need to do anything anymore
the biggest problem with adoption is that we've already got all these long libraries and binaries...
the last gcc it discusses supporting is 4.8
how does it "not need to do anything" anymore
still works on GCC 10 (tested locally)
the x32 project doesn't have any updates because everything it produced is now upstream
it's only a matter of time until some change breaks it....
it doesn't have any updates because practically nobody is using it
Ok done, that was a fast puzzle again
@ephemient If I understand correctly you can tell the JVM to use 32-bit pointers though.
Looks like it is on by default for small heaps
@bjonnh that was part of my point, JVM effectively has 32-bit pointers most of the time (as long as max heap < 32GB) and the same would be useful in almost every other application as well
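A quick sketch to check this on your own JVM (`com.sun.management.HotSpotDiagnosticMXBean` is a real JDK API; whether the flag reads true depends on your `-Xmx`):

```kotlin
import java.lang.management.ManagementFactory
import com.sun.management.HotSpotDiagnosticMXBean

fun main() {
    // Ask HotSpot whether compressed OOPs are in effect for this process.
    val bean = ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean::class.java)
    // Typically "true" with max heap under ~32 GB, "false" above that.
    println(bean.getVMOption("UseCompressedOops").value)
}
```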