<https://github.com/mipt-npm/kmath/pull/162/files>
# mathematics
i
a
I am not sure. @breandan do you have any arguments in favor of this one?
b
I took a quick look and I think you're on the right track. There are two ideas here. One is laziness, which I have some experience with and can recommend without reservation, since it opens up a lot of opportunities for more flexible rewriting and does not sacrifice much: it can easily be elided at runtime if the user desires. With lazy evaluation, you can encode some common mathematical identities to optimize MSTs during execution, e.g. for numerical stability or efficiency (Theano has a bunch of these optimizations). Lazy evaluation also seems to go hand-in-hand with staging, which we discussed here a while ago: https://github.com/breandan/kotlingrad#multi-stage-programming

The other idea this PR proposes is proper monads, which I never completely implemented in KG, but have been trying to (it seems to have some tradeoffs). You can have lazy evaluation without monads, and monads without lazy evaluation. Basically, my understanding is that Kotlin's type system has some difficulty expressing true monads, since `(A) -> B` is syntactic sugar for `invoke(a: A): B`, and the syntax does not fully compose. As it needs to interface with Java's type system, there are some limitations. I have some intuition, but need more experience before recommending this pattern. I tried to understand some of the tradeoffs in this SO question: https://stackoverflow.com/questions/54799122/type-inference-for-higher-order-functions-with-generic-return-types
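For concreteness, a tiny runnable sketch of that `invoke` sugar (plain Kotlin, nothing KMath-specific):

```kotlin
fun main() {
    // A Kotlin function type compiles to a FunctionN interface with a
    // single `invoke` method, so these two call forms are equivalent.
    val f: (Int) -> Int = { it + 1 }
    val g: Function1<Int, Int> = f // the same object, viewed as the interface

    check(f(2) == 3)        // sugar: f(2)
    check(g.invoke(2) == 3) // desugared: invoke(a: A): B
}
```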
a
It does not actually add laziness. The MST evaluation is always lazy. The only difference is that you consume a lambda instead of a function result, and it is always consumed anyway, unless you want to compose functions, which is what we wanted to avoid in the first place.
b
By "you", do you mean the KMath end user? That is, is the change observable from the user's point of view? Can you give an example of eager vs. lazy consumption? I need to look more closely at the PR.
a
This API is intended as an internal way to compute an expression in any `Algebra`. By laziness I mean that the expression is stored in an MST tree and is computed lazily on call. And yes, you can do preliminary tree optimizations. In order to compute the expression, we need to somehow call algebra methods like `plus` or `exp`. That is done via named function handlers. The only question is whether a handler should take the arguments and return the result immediately (one step), or whether it should return a unary/binary function which can then be called. In most cases the function will be called immediately. @Iaroslav Postovalov, if you have an example of delayed execution, please intervene.
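As a self-contained illustration of the two handler shapes under discussion (using plain `Double` lambdas, not the actual KMath interfaces):

```kotlin
// One-step handler: takes the operands and returns the result immediately.
fun binaryOperationEager(operation: String, left: Double, right: Double): Double =
    when (operation) {
        "+" -> left + right
        "*" -> left * right
        else -> error("Unknown operation: $operation")
    }

// Curried handler: resolves the operation name once and returns a function
// that can then be applied (usually immediately) to the operands.
fun binaryOperationCurried(operation: String): (Double, Double) -> Double =
    when (operation) {
        "+" -> { l, r -> l + r }
        "*" -> { l, r -> l * r }
        else -> error("Unknown operation: $operation")
    }

fun main() {
    check(binaryOperationEager("+", 1.0, 2.0) == 3.0)
    val plus = binaryOperationCurried("+")
    check(plus(1.0, 2.0) == 3.0) // in most cases the result is consumed right away
}
```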
b
I guess let's focus on the binary operator:
```kotlin
override fun binaryOperation(operation: String, left: MST, right: MST): MST.Binary =
        MstAlgebra.binaryOperation(operation, left, right)
```
This data structure instead becomes a binary lambda function:
```kotlin
public override fun binaryOperation(operation: String): (left: MST, right: MST) -> MST.Binary =
        MstAlgebra.binaryOperation(operation)
```
I agree with Iaroslav that this is the mathematically more elegant definition, but I have been struggling to justify exactly what benefit it provides from an implementation perspective, if it is true that `(left: MST, right: MST) -> MST.Binary` is just syntactic sugar for `invoke(left: MST, right: MST): MST.Binary`. I am curious to learn if there is some obvious advantage, because there is something about using monads that feels more "beautiful" than simply constructing a tree, although I am somewhat suspicious of this feeling, because it leads to Haskell land, where `(+) :: Num a => a -> a -> a`.
a
Actually, the question is not about MST, but about regular algebras:
```kotlin
override fun binaryOperation(operation: String, left: Double, right: Double): Double
```
and
```kotlin
override fun binaryOperation(operation: String): (Double, Double) -> Double
```
respectively. The only difference I see is that during implementation you can do something like this:
```kotlin
return when (operation) {
    "+" -> ::plus
}
```
This will be done only once, so I do not see any significant advantages.
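For illustration, a self-contained sketch of that "done only once" point: with the curried form, the `when` dispatch runs once per operation, and the returned function can be reused across calls. The counter is only there to make this observable; none of these names are from KMath.

```kotlin
var lookups = 0

// Curried handler: the string dispatch happens here, once per resolve() call.
fun resolve(operation: String): (Double, Double) -> Double {
    lookups++
    return when (operation) {
        "+" -> Double::plus
        else -> error("Unknown operation: $operation")
    }
}

fun main() {
    val plus = resolve("+")                              // resolved once...
    repeat(1000) { check(plus(1.0, 2.0) == 3.0) }        // ...reused many times
    check(lookups == 1)                                  // no per-call dispatch
}
```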
b
Yeah, there were only a few places I could make use of it internally, so it might benefit someone reading the code. But is
```kotlin
return when (operation) {
    "+" -> ::plus
}
```
that much easier to read than:
```kotlin
return when (operation) {
    "+" -> left(bindings) + right(bindings)
}
```
I don't know. Maybe there is another benefit? Just a small nitpick, but I would personally use an enum instead of a string for `operation`.
a
It was done intentionally, because some algebras have operations not present in the basic algebras, which an enum cannot accommodate. It would be possible to use a hierarchy of sealed classes, though.
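A minimal sketch of such a sealed-class hierarchy (hypothetical names, not the KMath API): each algebra contributes its own sealed family of operations under a common open base, so extension remains possible where a single enum would be closed.

```kotlin
// Open base type, so other algebras can introduce their own operations.
abstract class Operation(val name: String)

// Operations of a basic algebra form one sealed family...
sealed class RingOperation(name: String) : Operation(name) {
    object Plus : RingOperation("+")
    object Times : RingOperation("*")
}

// ...and an extended algebra adds its own sealed family.
sealed class ExponentialOperation(name: String) : Operation(name) {
    object Exp : ExponentialOperation("exp")
}

// A Double algebra handles the operations it knows about.
fun evaluate(op: Operation, left: Double, right: Double): Double = when (op) {
    RingOperation.Plus -> left + right
    RingOperation.Times -> left * right
    else -> error("Unsupported operation: ${op.name}")
}

fun main() {
    check(evaluate(RingOperation.Plus, 1.0, 2.0) == 3.0)
    check(evaluate(RingOperation.Times, 2.0, 3.0) == 6.0)
}
```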
i
@altavir @breandan Actually, the main idea of this patch is not to make KMath look more FP-like, but to get an indy-optimized (invokedynamic) method handle call on the JVM.
But unfortunately, Kotlin doesn't use indy for method reference objects.