# compiler
p
i ran into this type-inference problem, which i’m not sure how to think about. an example:
```kotlin
class Container<T>(
    val func: (T) -> Unit,
    val aT: T
)

sealed class Foo {
    object Bar : Foo()
    object Baz : Foo()
}

fun main() {
    val c = Container(
        { f: Foo -> print("Got $f")},
        Foo.Bar
    )

    c.func(Foo.Baz) // doesn't compile, because c is Container<Foo.Bar>, not Container<Foo>

    val c2 = Container(
        { f: Foo -> print("Got $f")},
        Foo.Bar as Foo
    )

    c2.func(Foo.Baz) // compiles
}
```
so, the compiler infers `c` to be of type `Container<Foo.Bar>`, where I expected to get a `Container<Foo>`. It seems to me that there are two possible choices: either use `Foo` for type parameter `T`, or use the fact that since `f` accepts a `Foo`, it can also accept all `Foo.Bar`s. But what I wanted was in fact the former (the real use case is complicated by me trying to implement a typesafe heterogeneous map of sorts, so I didn't even get a compilation error, just a runtime ClassCastException). It seems to me like the current behaviour might be surprising - is there a reason for it, is it somehow more correct?
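
(Not from the thread: a minimal sketch of the first choice mentioned above, where spelling out the type argument pins `T` to `Foo`.)

```kotlin
fun explicitTypeArgument() {
    // Hypothetical variant of the snippet above: with an explicit type argument,
    // T is fixed to Foo regardless of what inference would otherwise pick.
    val c3 = Container<Foo>(
        { f -> print("Got $f") },
        Foo.Bar
    )
    c3.func(Foo.Baz) // compiles: c3 is Container<Foo>
}
```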
d
cc @Victor Petukhov
e
I suppose type inference unifies the upper bound of `Foo` with the lower bound of `Foo.Bar`, and picks the more specific type?
p
that’s my hypothesis too, but i’m not sure i like that behaviour.
t
It might be related to the contravariant input type of the lambda.
v
@Petter Måhlén the answers above are close to the truth. Usually, we infer a generic type to be the common supertype of the arguments associated with it, like:
```kotlin
fun <K> select(x: K, y: K) = x

fun test(foo: Foo) {
    val x = select(foo, Foo.Bar) // K is inferred as Foo
}
```
In terms of constraints, we add two constraints: `LOWER(Foo.Bar)` and `LOWER(Foo)`. So it's obvious that only `Foo` is suitable here, and at the same time it's the most specific such type. But in your usage, one of the types is considered contravariant, because it appears among the lambda's input types (see the `FunctionN` declarations, like `Function1<in P1, out R>`). That means the corresponding concrete type isn't a lower bound any more, it's an upper one now. So here we have two constraints: `LOWER(Foo.Bar)` and `UPPER(Foo)`. Both types are suitable, but `Foo.Bar` is more specific, so we choose it.
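
(The contravariance described here can be seen directly with plain function types; a small sketch, not from the thread.)

```kotlin
// Function1<in P1, out R> is contravariant in its parameter, so a handler for
// any Foo also works as a handler for just Foo.Bar. That is why T = Foo.Bar
// satisfies both constraints LOWER(Foo.Bar) and UPPER(Foo) above.
val printAnyFoo: (Foo) -> Unit = { f -> print("Got $f") }
val printOnlyBar: (Foo.Bar) -> Unit = printAnyFoo // compiles thanks to contravariance
```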
p
@Victor Petukhov thanks, that explanation makes sense, and I agree it is correct behaviour. It's probably even the way one usually wants type inference to work, in most cases similar to the example I ran into. But having percolated this for a few days now, I still think it's not working in an ideal way in that particular example - it's not great developer ergonomics. I understand that this may be subjective (maybe?), and, more importantly, that it may not be a reasonable scenario to detect in the type inference implementation. Since it'll almost always be caught by the compiler, it's not an issue normally. In my case, I got a runtime exception that was really unclear - a ClassCastException that felt like it couldn't happen. I've found a different solution to that particular problem that works nicely, without API users needing to add casts like `Foo.Bar as Foo`, so no worries. 🙂