# language-proposals
Since it works at compile time it would work for those types for which you declare a `Reified` instance. For example:
```kotlin
package foo

import kotlin.reflect.KClass

interface Reified<A : Any> {
    val selfClass: KClass<A>
}
```
```kotlin
package bar

import foo.Reified
import kotlin.reflect.KClass

object ReifiedString : Reified<String> {
    override val selfClass: KClass<String> = String::class
}
```
```kotlin
import bar.ReifiedString

fooTC<String>() // compiles and returns String::class
fooTC<Int>()    // does not compile because the compiler can't find evidence of `Reified<Int>` imported in scope
```
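(For reference: the `fooTC` function used above is not shown in the thread. Under the proposal it would presumably look something like this — hypothetical KEEP syntax, not compilable in today's Kotlin, and the name `fooTC` is just the one used in the chat.)

```kotlin
// Hypothetical KEEP syntax: `given` asks the compiler to find a
// Reified<A> instance in scope at each concrete call site.
fun <A : Any> fooTC(): KClass<A> given Reified<A> = A.selfClass
```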
So @Marc Knaup would have to declare an object for each type he wants to get type parameters from? Seems kinda boilerplate-heavy.
If he wanted it to be type-safe and compile-time verified, yes. This would not necessarily replace the use case where people want to look at generics at runtime. One could also do this if one wanted to get generic class info, or any other shared behavior, for any type potentially.
In the KEEP example `class Foo<A>` there is just a generic parameter `A: Any?`. There's no way the compiler can know what instance of `Reified` to pass to `Foo` since the compiler cannot know the argument for `A` here. @ilya.gorbunov I guess the various instances of `Reified` would be declared by the compiler automatically should it replace `inline … reified`
@Marc Knaup The compiler knows at concrete call sites, which is where instances are verified. Once you tell the compiler what `A` is, it can look up in the imported scope instances that satisfy the constraint.
@raulraja if you create an instance of `Foo` from Java then the Kotlin compiler cannot know what `A` means in `Foo`'s constructor. Same when you use reflection to create the instance.
No, it can; you just need to add the constraint:
```kotlin
class Foo<A> given Reified<A> {
    val aClass: KClass<A> = A.selfClass
}

import ReifiedString

Foo<Int>()    // does not compile because no `Reified<Int>` is in scope
Foo<String>() // compiles because `Reified<String>` is in scope
```
classes as well as functions can declare dependencies on their generic types to be satisfied at call sites
It also removes the need for runtime-based Dependency Injection, because the compiler can verify your dependency graph at compile time in the same way it can now verify the bounds of your generics. The change is that type class instance verification is resolved by evidence in the current scope at the call sites where things are concrete and not generic.
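The compile-time DI idea could be sketched roughly like this — again hypothetical proposal syntax (not compilable today), with `Repository`, `Service`, and `User` being made-up illustrative names, not anything from the KEEP:

```kotlin
// Hypothetical KEEP syntax. The class declares a dependency on Repository<A>;
// no runtime container is needed.
interface Repository<A> {
    fun save(a: A)
}

class Service<A> given Repository<A> {
    fun persist(a: A) = A.save(a)  // resolved from the instance in scope
}

// At a concrete call site the compiler verifies an instance exists:
// Service<User>()  // compiles only if a Repository<User> is in scope
```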
So it’s just the example being wrong 🙂 And it works as long as object instantiation is restricted to Kotlin code, which enforces the `given` constraint. Why do you import or declare `ReifiedString` all the time? Isn’t the goal of `Reified` to have that one declared implicitly by the compiler? 🙂
`Reified` in this case is just an example of a type class. Type classes are not meant to replace runtime reified generics, though they can give you the same semantics if you know the types you are talking about, because they are verified at compile time.
The `Reified` type class would be user-defined unless the Kotlin team decided to put it in the stdlib. My point was just that if you are trying to get generic info for types you already know, then there is no need for `inline` or `reified` modifiers, should type classes get accepted in the lang.
So basically it’s something in between `inline … reified` and passing the `KClass` manually as a function argument. Because I have to declare a `SomethingReified : Reified<Something>` only once, it’s simpler than passing a `KClass` on every function invocation.
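That "in between" position can be shown in today's Kotlin by passing the instance explicitly — the proposal would simply have the compiler supply this argument. A minimal sketch (the helper `classOf` is a made-up name for illustration):

```kotlin
import kotlin.reflect.KClass

// A type-class-like interface: one instance per type, declared once.
interface Reified<A : Any> {
    val selfClass: KClass<A>
}

object ReifiedString : Reified<String> {
    override val selfClass: KClass<String> = String::class
}

// Today the instance must be passed by hand; under the proposal the
// compiler would resolve it from the imports in scope.
fun <A : Any> classOf(ev: Reified<A>): KClass<A> = ev.selfClass

fun main() {
    println(classOf(ReifiedString))  // prints "class kotlin.String"
}
```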
No, it's a way to couple a behavior which is defined in a type class to a data type without using inheritance. This allows you to write polymorphic code over generics asking the generics to provide evidence of those behaviors.
For example you can define a type class `Async` and provide instances for `CompletableFuture` or whatever else can support its behavior. Then in your code you can code everything to `Async` and apply at the edge the impl you want to use,
but you can't, for example, today with subtyping or inheritance make `CompletableFuture` extend `Async`, because you don't own it. Type classes let you add evidence of behaviors to all types, even those you didn't write yourself.
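The "evidence for types you don't own" point can be emulated in today's Kotlin with an explicitly passed instance — `Joinable`, `CompletableFutureJoinable`, and `awaitWith` are made-up illustrative names, not proposal API:

```kotlin
import java.util.concurrent.CompletableFuture

// A type-class-like interface defined by us.
interface Joinable<F> {
    fun joinNow(f: F): Any?
}

// CompletableFuture cannot be made to implement our interface (we don't own
// it), but we can supply evidence for it after the fact:
object CompletableFutureJoinable : Joinable<CompletableFuture<String>> {
    override fun joinNow(f: CompletableFuture<String>): Any? = f.join()
}

// Generic code asks for the evidence instead of requiring inheritance.
fun <F> awaitWith(f: F, ev: Joinable<F>): Any? = ev.joinNow(f)

fun main() {
    val cf = CompletableFuture.completedFuture("done")
    println(awaitWith(cf, CompletableFutureJoinable))  // prints "done"
}
```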
When you write a library today in Kotlin you have to be concrete as to what it returns in its API but with type classes library authors can let users decide return types
Like that one, there are many other use cases. For example you can write compile-time-verified Json codecs/encoders, because you can define via type classes which types can be transformed to Json and which ones can't, and the compiler will verify that for you and won't let you try to turn a type into Json unless there is evidence that you can do that.
Another use case is DI. You can have a program declared in terms of a type class and inject at compile time in tests an instance that does not hit the DB, whereas in prod you provide one that does.
Yeah, but the example of JSON encoders/decoders isn’t a good one. That’s exactly what I’m working on right now at this very moment 🙂 and I see that type classes likely won’t help me here. They’re good for recursively decoding/encoding JSON by using type classes on the model directly. Encoders/decoders are usually implemented using “codecs” which you register in advance, and these work in a more abstract way.
```kotlin
interface JSONDecoderCodec<out Value : Any, in Context : JSONCoderContext> : JSONCodecProvider<Context> {
    // …
}
```
Working on that one right now.
Json encoders and decoders are trivially implemented in Scala with ADTs and type classes, without runtime reflection.
With type classes you don't need registration in advance, because you can do this:
```kotlin
fun <A> A.json(): String given JsonCodec<A> = A.asJSON()
```
then you can call `.json()` on any value, if there is a `JsonCodec` instance for its type in scope.
AFAIK you can pretty much throw away reflection and generics lookup at runtime when you have type classes, for most use cases.
Yeah I’ve seen these examples in the discussion. That extension-function style is exactly what I do not want - having any kind of JSON encoding/decoding logic being part of the model (be it directly or through type classes). Better is to have a clear abstraction/separation. But that’s just a minor change I guess. Implementing these codecs is more interesting, as they should be able to recursively encode/decode objects without knowing the actual codec being used. The codecs being used don’t even need to be public/known, which is good because implementations should be hidden whenever possible. This is already possible without reflection (except for …) - just not for generic classes. I’d have to spend some time to figure out if type classes give any benefit here.
Extension syntax was just an example; you can encode your constraint in top-level functions like
```kotlin
fun <A> decode(a: A): String given JsonCodec<A> = a.json()
```
The actual codecs don't need to be known either, they can be imported:
```kotlin
import production.runtime.*
```
The import imports the implementation 🙂 But anyway, I currently see no benefit of type classes over traditional registry-style approaches. At compile time the consumer of a JSON encoder/decoder cannot know which codecs have been registered and which ones not (see GSON for example, or MongoDB’s codec registry). So compile-time safety is only guaranteed if the consumer wires all the JSON codecs, isn’t it?
If you look at it from an OOP registration point of view you don't know what is outside of the context, and yes, you have to trust the one registering the codecs. But if it is expressed as type class constraints then you don't have to trust the guy registering those, because the compiler does the work for you. For example, given a codec for `List<A>`, it can't encode it unless there is also a codec for `A`, and the compiler can verify that.
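That `List<A>` example can be shown in today's Kotlin, with the element-codec evidence passed as an explicit parameter (the proposal would have the compiler pass it). `JsonEncode`, `IntJson`, and `listJson` are illustrative names:

```kotlin
// A minimal codec type class.
interface JsonEncode<A> {
    fun encode(a: A): String
}

object IntJson : JsonEncode<Int> {
    override fun encode(a: Int): String = a.toString()
}

// A list encoder exists only if an element encoder exists; the type checker
// enforces exactly the dependency described above.
fun <A> listJson(elem: JsonEncode<A>): JsonEncode<List<A>> =
    object : JsonEncode<List<A>> {
        override fun encode(a: List<A>): String =
            a.joinToString(separator = ",", prefix = "[", postfix = "]") { elem.encode(it) }
    }

fun main() {
    println(listJson(IntJson).encode(listOf(1, 2, 3)))  // prints "[1,2,3]"
}
```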
Of course you can use reflection to not provide codecs at all, but that breaks referential transparency, because now your functions are dependent on a context they are not aware of, and they will pass or fail based on preconditions regardless of their definition. If that is not important for Json encoding, then yes, you can use reflection to generically encode anything, and when something is not registered properly it will fail at runtime.
That may be important or not to the user/lib author but in terms of type safety type classes can do the same and have the compiler verify your dependencies, in this case the codecs.
I don’t see how the compiler could verify that without limiting flexibility or moving more logic from “behind the scenes” (hidden implementation / DI) up to the consumer. Let’s say we get an encoder instance through DI. How can the compiler know which codecs are registered and which are not?
I do see that it’s possible in some cases but they’re quite invasive.
The compiler looks at all generic definitions applied at concrete sites and looks for instances available when they are made concrete. You are doing the same when you register your codecs for runtime reflection at some point.
It's the same principle. Instead of you providing a property to a service registry with mutation, the compiler looks in function and class definitions at what their dependencies are, until someone makes them concrete.
`import PersonCodec` is equivalent to registering it.
Yes, but the registration could be made internally by a completely different module.
No need for the consumer to know.
So the compiler cannot verify.
How does the user encode his own models then? Doesn't he need to provide those?
Unless you fully use runtime reflection without registration, someone always needs to provide the concrete place where they refer to a non-generic value.
He could add codecs for his own models, sure. But he can also encode/decode models from other modules using their codecs which they've registered.
Where does reflection start? 🙂 If it starts with having to get an instance of `KClass` then yes, it’s needed.
So it's the same then: you still need to be concrete at some point. Same with type classes, but with type classes you don't have runtime reflection, just regular method dispatching, which is faster,
because the compiler can see that at the place you use a `Person -> Json` codec you are referring to a `Person`, so it can traverse its dependency graph in the same way it does today with generic bounds and resolve all the constraints. The compiler then injects your codec at the right place and is even able to inline it if it wanted to.
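A rough sketch of the lowering being described — hypothetical, since the KEEP does not specify the exact desugaring, and `JsonCodec`/`PersonCodec` are illustrative names:

```kotlin
// Hypothetical lowering, assuming instance resolution at concrete call sites:
//
//   fun <A> json(a: A): String given JsonCodec<A>
//   json(Person("x"))
//
// could compile as if written with an extra synthetic parameter:
//
//   fun <A> json(a: A, codec: JsonCodec<A>): String
//   json(Person("x"), PersonCodec)  // PersonCodec found via imports in scope
```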
Private/internal codecs registered dynamically in another module?
If you wish to use mutation, yes, you can resolve those before registering, but it doesn't have to be that way, since the compiler can follow the regular imports if they are just declared as constraints via `given`, as in the proposal in the KEEP.
FTR I'm not making this up; it's how it works in other langs. For example in Scala people rarely use Dagger or Json codecs based on reflection, because the compiler can do that with implicits, which is how it implements type classes. The proposal for Kotlin does not include implicits, just type class instance resolution, which is lightweight.
I still think that it won’t be able to work in the same flexible way, but I guess that I’d just have to rework everything currently relying on reflection to use type classes and see if things work as intended or not. It’s difficult to discuss that feature in theory 😮
Right, we are speculating, but I think it's good we talk about these things 🙂. When there is a draft available I will write a small json codec that shows what I mean, then we can compare and further discuss.
👍 1
Type classes would bring a lot of cool stuff to OOP not just FP styles and I think Kotlin could highly benefit from strong type safety in many of these common patterns such as DI, encoding, etc.
Yes, I also love such discussions 🙂 By the time the KEEP is more concrete the library will also be ready and I can try to transform it. Though it's quite difficult to test without a compiler verifying it 😁
😂 1
While you write this lib based on runtime reflection, some of this code may help you get to nested generics and other runtime type nonsense.
Feel free to take anything you want from there; it's a runtime-reflection-based implementation to look up instances of type classes. Your use case is not the same, but it dives deep into the generic types you can obtain with the TypeToken technique.
this function will turn a list of types as represented in the type token into actual concrete types:
That code looks like a lot of work 😮 Thanks for the offer, but for now I’ll avoid the features which need that level of reflection. I’m not a fan of using reflection except for comparing `KClass`es and class-level subclass checks.
If you use TypeToken, or you want access to nested generics via TypeToken, they will show up as unresolved type variables etc., and you need to resolve them before doing anything useful with those. I'm not proud of the code, I want to 🔥 it, therefore the KEEP.
if you find a better way please share it with me and next time we are in a conf I'll buy 🍻 !