# random
j
I have an interesting type inference situation; does anybody know what happens here?
import kotlin.reflect.typeOf

interface Intf
class Clz : Intf

inline fun <reified T> showType(noinline lmb: (T) -> T) {
    println(typeOf<T>())
}

fun main() {
    showType { _: Intf -> Clz() } // 'Clz', why ?!
    showType { _: Intf -> Clz() as Intf } // 'Intf', as expected
}
(IntelliJ IDEA marks the `as Intf` in the second call with a ‘No cast needed’ warning)
s
Well, `T` is inferred from the lambda's signature. In the first example, the lambda has type `(Intf) -> Clz`. We need to infer a value for `T` such that `(Intf) -> Clz` satisfies `(T) -> T`. Function types are contravariant in their input types and covariant in their output types, so for `(Intf) -> Clz` to be a subtype of `(T) -> T`, `Intf` must be a supertype of `T` and `Clz` must be a subtype of `T`. The most specific type satisfying both constraints is `Clz`, and that's what gets inferred.
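To make the variance part concrete, something like this should compile (reusing `Intf` and `Clz` from your snippet; `narrower` and `wider` are just made-up names):
val narrower: (Intf) -> Clz = { _ -> Clz() }
// Parameters are contravariant and return types are covariant,
// so an (Intf) -> Clz can stand in for a (Clz) -> Intf.
val wider: (Clz) -> Intf = narrower // compiles without a cast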
When you add the cast, you're relaxing the lambda's signature to `(Intf) -> Intf`, so other return values would also be allowed.
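If you'd rather avoid the cast, I believe you can also widen `T` by passing the type argument explicitly (quick sketch, untested):
showType<Intf> { _: Intf -> Clz() } // should print 'Intf', no cast and no warning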
I guess it's not immediately obvious, but there are two type inferences going on here: the lambda's own type is determined before `T` is inferred, and I don't think the second inference can feed back into the first one in any way.
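One way to see that ordering: if the lambda already has a declared function type, `T` just follows it (sketch; `widened` is a made-up name):
val widened: (Intf) -> Intf = { _ -> Clz() } // the lambda's type is fixed here
showType(widened)                            // so T is inferred as Intf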
j
I see. I assumed the broader type would get preference, but there's no reason to do so; as long as the input and output types both match, it's valid. Thanks!