# language-proposals
l
Proposal: I know this is unrealistic, but I'd love to start a discussion about how something resembling this could be implemented: record types, which behave like a combination of Rust's structs and Elm's records.

A record type would be an inline type definition that is structurally typed, so two records would have the same type if they share the same field names and types (Elm-style). Because of this, they would be defined via type aliases, and the only way to add methods to them would be extension functions.

The big advantage of this structural typing is that records can be row-polymorphic: you could define a function that takes a record type with, say, only the name and age of a person, because that is all the information it needs. In your main code you work with a lot more information about any person, say their address, family status, etc. You could then give your simple function that big person object, and it would only have access to the person's name and age. This would be hugely beneficial for things like testing, where it could remove a lot of mock objects, and it also has the great benefit of functions really only taking the data they need, which makes their signature more expressive and their implementation safer (they cannot depend on data and information they should not depend on). This concept, for those interested, is implemented in Elm under the name "extensible records", and is generally known as "row polymorphism" (as seen in PureScript, for example).

Another differentiating factor of these record types is that they don't specify mutability or immutability in the record type itself. Say we have a record for a person (I'm just going to use PureScript's record syntax here):
```
typealias Person =
  { name :: String
  , age :: Int
  , mobileNumber :: String
  , landlineNumber :: String
  , gender :: Gender
  }
```
Then this type alias would not contain any information about the mutability of any of the fields. This is where the Rust inspiration comes in: record mutability would be defined on a per-usage basis. If you create a person (simplified here):
```
val person: Person = { name: "Tom", age: 12, gender: Gender.Male } // other fields left out
```
then this object is deeply immutable. This could be enforced by saying that a record can only store other record types or "primitive" types (and maybe a basic set of collections that are guaranteed to be immutable). If you wanted a mutable version of that person, you would do something like this:
```
val person: mut#Person = mut#{ name: "Tom", age: 12, gender: Gender.Male } // other fields left out
```
(This demo syntax is ugly, I know.) This would mean that all fields of that record (and all fields of potential records stored within it) are mutable, and thus provide a setter (callable via normal `=` assignment). Turning a mutable record into an immutable one would need to be an explicit operation, because mutable records would be stored like normal classes (by reference), while immutable records would be stored and passed by value, and thus cloned wherever they are passed. This approach would have a lot of benefits as well:
- Usually you, the caller, know whether you need something to be mutable. That's why there are loads of classes with mutable and immutable variants (`List`/`MutableList`, etc.). This would make it possible to let YOU decide about the mutability your object needs.
- Functions could guarantee purity with regard to the record by not requiring their arguments to be mutable, and thus not being able to mutate them.
- You wouldn't need to create builder classes, because you could just create a mutable version of the record, build your stuff, and then store it as immutable in the end (see the sketch below).
- As a result of guaranteed deep immutability, the compiler could do heavy optimizations that are currently not possible.
- You could still have mutating methods on the records, by defining the corresponding extension functions only for mutable versions.
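As a sketch of that builder point (again in the proposal's made-up syntax; `freeze()` is an invented name for the explicit mutable-to-immutable conversion):

```
// build the record up mutably...
val draft: mut#Person = mut#{ name: "Tom", age: 12, gender: Gender.Male } // other fields left out
draft.mobileNumber = "0151 5550100"
draft.landlineNumber = "030 5550199"

// ...then explicitly store it as a deeply immutable, by-value record
val person: Person = draft.freeze() // hypothetical explicit conversion
```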
Row polymorphism would (using the person example) look something like this:
```
fun addToPhoneBook(info: { name :: String, landlineNumber :: String }) {
  // add the person to a phoneBook
}

// in main
val person: Person = { name: "Tom", age: 12, landlineNumber: "142522152334", gender: Gender.Male } // other fields left out
addToPhoneBook(person)
```
Extension functions would look like this:
```
fun <T: { gender :: Gender }> List<T>.filterIsFemale(): List<T> {
  return this.filter { it.gender == Gender.Female }
}

fun <T: { age :: Int }> T.isMinor() = this.age < 18

fun <T: mut#{ age :: Int }> T.makeAdult() {
  this.age = 18
}
```
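A hypothetical usage of those extensions (still the proposal's made-up syntax, so none of this compiles today) could look like:

```
val people: List<Person> = listOf(
  { name: "Ann", age: 30, gender: Gender.Female },  // other fields left out
  { name: "Tom", age: 12, gender: Gender.Male }
)

val women = people.filterIsFemale()   // ok: Person has a `gender :: Gender` field
val tomIsMinor = people[1].isMinor()  // true: Person has an `age :: Int` field

// would NOT compile: the list holds immutable Person values,
// and makeAdult() is only defined for mut#{ age :: Int }
// people[1].makeAdult()
```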
m
It's been discussed. I think it was in #language-evolution, but I can't find it at the moment. The bottom line is that it's very likely Kotlin won't support it until Java does. There are proposals to add record types to Java. Also, check out Arrow-Meta; it might provide something like this much sooner.
👍 2
l
That would have been my main concern as well... how would this be used with Java interop?
m
Until Java supports it, likely very much like inline class. It would just be the underlying types in Java, or unusable in Java.
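For context, a small illustrative sketch (not from the discussion) of how today's inline/value classes already behave at the Java boundary:

```kotlin
// An inline (value) class mostly compiles down to its underlying type.
@JvmInline
value class Email(val raw: String)

// From Kotlin this takes an Email; in the compiled signature the parameter is a
// plain String and the JVM method name is mangled (send-<hash>), which is why
// such declarations are effectively unusable from Java.
fun send(to: Email) {
    println("Sending to ${to.raw}")
}
```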
l
I think another big problem would be that it would add a whole new kind of type, which could make Kotlin a lot more confusing: old code (code and data types that would be better represented using such a record type) wouldn't or couldn't switch to using records, so there would be a divide in the stdlib...
r
Yeah, I've never seen mixing structural and nominal typing work well. Everyone wants the best of both worlds, but they're just not compatible in practice.
l
TypeScript sort of does it: you have "interfaces" that can represent regular JavaScript objects, and you have regular classes which have a concrete type. The interfaces are structurally typed, but the classes are, afaik, nominally typed.
The big problem here would be that the migration wouldn't really work... it would need to either be a big language change or just be added without being used in the stdlib.
m
I think Arrow-Kt, leveraging Arrow-Meta, is attempting to achieve this. I don't know much more than that, as my FP comprehension is still relatively basic, but that is my interpretation of what I've seen.
l
Arrow attempts a lot, I know, and Arrow also does and achieves a lot... but for such a big language feature, you can't rely on a compiler plugin. While implementing it may be possible (although with questionable syntax, because compiler plugins, afaik, can't just add new arbitrary syntax), this would need wide adoption and understanding to be useful.
r
@Leon K I don't think that is accurate in the short term. Meta has type proofs, backed by inductive proofs, that can derive behaviors and project ad-hoc syntax over product and coproduct types, including data classes, sealed classes, unions that don't box primitives, etc. In the code attached you can see how the behavior `+` is derived automatically for `Row` if there is a Proof of the parts of its product type, in this case two Ints. The compiler inductively derives it and establishes synthetic subtype relationships, replacing impossible casts with the proofs. Proofs are coherent as proposed in KEEP-87, but as you can see you can already abstract over the arity of product types by simply making your behaviors target the Tuple hierarchy or any other hierarchy that is isomorphic to what you are modelling. Proofs can be added in user code as a prelude at any point, and they are discovered coherently based on the extension types that define them, only allowing orphan overrides if they are internal. This is not ready yet, but it will be available in Meta as the user type system DSL to draw relations between types. Some of the supported relations I have identified so far are synthetic subtyping, extensions with intersections, type refinement with predicates, and negation proofs that disallow calls to types in the scopes of other types (similar to how restrict suspension works, but à la carte).
This enables à la carte ad-hoc polymorphism that does not require subtype relationships and inheritance, since proofs can be composed as functions. It also covers the DI use case, since fun, val, object and class are supported for inlined proof values. Essentially anything the compiler understands as a CallableMember can be inserted as a proof in place of a type cast.
What I mean to say is that Meta can implement this without even altering the language grammar or semantics, and it removes the need to extend explicitly in many use cases.
Also, if you take a look at the presentations at KotlinConf and Lambda World, or at the hello-world plugin, you can inspect absolutely any structure of code with Meta and derive from the AST whatever output your program needs. Essentially Meta is uber-powerful in that sense, since it can intercept and transform the AST before analysis as if the user had written the code.
q
@raulraja Does this allow Rust style traits or Scala type classes?
I saw you mention ad-hoc polymorphism, but the website mainly discusses the functional aspects without any mention of the implementation of custom type classes.
r
Yes, this enables protocol extensions like Swift's or type classes like Scala's, but it's all coherent in resolution, as in Haskell.
No implicit scope implosion inside out for resolution
`@Proof(Extension)` is typeclasses, since it projects the API of the type class over the data type as an intersection of their member scopes, so all typeclass functions are available globally over the types they enhance. Coherent ad-hoc polymorphism based on synthetic intersections of the member scopes of the proofs and the data types' member APIs.
Also, resolution is inductive and opt-in.
But type proofs are much more than just typeclasses.
It's à la carte strategies based on synthetic subtypes.
You can make any semantics out of it since you are drawing arbitrary morphisms between the objects that represent the Kotlin types.
It's a system to alter and enhance the Kotlin type system at its core, cooperating with type checking, resolution and codegen.
It's similar to forall predicates in Haskell, and it's higher level than Meta itself, since it allows proofs in the same module where they are applied in user code. This means any library can ship a prelude of proofs to create very terse DSLs. It also implies compile-time DI verification and injection, which is faster than any kapt-based alternative or runtime DI. As happens in other languages like Scala, nobody uses DI containers there, because the compiler is the DI container, and injections aren't limited to constructors or fields but apply to any call site. Injections at call sites are value holes for which the compiler can find the best candidates, helping you not only with DI but also with autocompleting your code with proofs based on shortest paths. This brings more precise types, similar to how the Idris experience feels when the IDE writes your program for you based on the available type morphisms in your proofs.
😍 1
I understand this is an incredible amount of power, but I think this is the way we advance the language, keep it relevant and diverse, and explore what types can bring us to solve problems in a different way than we do today.
🔥 1
All this is also MPP once Kotlin 1.4 lands with the IR backend completed
Proofs are inlined, so I expect this to be very fast at runtime in Kotlin/Native, with an incredible amount of abstraction at little or zero runtime cost.
Over 100 proofs that include inductive resolution can be computed in under 14 ms at compilation init time, and resolution is constant time for non-inductive proofs and almost constant for inductive proofs once cached.
There is no apparent penalty in compilation unless you go crazy with inductive proofs, but even then it's still incremental, since it cooperates with the compiler phases.
A lot of people are commenting on KEEP-87 without much knowledge and making assumptions about how things work, but that's not the case. Type classes and all these type system features are happening in Arrow Meta, and when they are ready the KEEP will be updated. What we built is more powerful and ergonomic than what was proposed, and it requires no changes to Kotlin's grammar, syntax or semantics aside from synthetic subtyping by proofs, which eliminates the need for inheritance.
l
I guess my big problem with Arrow is that it takes the extremely elegant language Kotlin and turns it into a lot more powerful, but very inelegant, functional language. The way you have to work around the syntactic constraints of Kotlin makes it hard for me to actually use Arrow's full potential.
👍 1
☝️ 1
q
How can unions not have boxing? Without unsafe magic I mean.
r
Unions can be represented as inline classes which extend the higher arity using `Nothing` as a bottom type.
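To make that concrete, here is a minimal self-contained sketch of the "Nothing as bottom type" trick in plain Kotlin; the `Union2`/`Union3`/`UnionValue` names are invented for illustration and are not Arrow Meta's actual types:

```kotlin
// Covariant interfaces describe the union's static shape; a lower arity is
// just the higher arity with Nothing plugged into the unused slot.
interface Union3<out A, out B, out C>
interface Union2<out A, out B> : Union3<A, B, Nothing>

// Single runtime carrier: one value class around the raw value. Because every
// type parameter is covariant and Nothing is a subtype of everything,
// UnionValue is assignable to Union2<A, B> for any A and B.
@JvmInline
value class UnionValue(val value: Any?) : Union2<Nothing, Nothing>

fun describe(u: Union2<String, Int>): String =
    when (val v = (u as UnionValue).value) {
        is String -> "a String: $v"
        is Int -> "an Int: $v"
        else -> "unexpected member"
    }

fun main() {
    // A plugin would insert these conversions for you; here we wrap manually.
    println(describe(UnionValue("hello")))
    println(describe(UnionValue(42)))
    // Caveat: passed through the interface type, the value class is still boxed
    // by today's compiler; avoiding that is exactly the part that would need
    // compiler support, which is what the next question is about.
}
```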
q
I'm still unsure of how you can pass these Unions around without boxing