# mathematics
b
Interesting, it seems the uses of the term "field" in algebra and physics are related. To check whether a set of matrices forms an algebraic field, it must be closed under (+, -, *), have an additive identity, a multiplicative identity, commutative multiplication, and multiplicative inverses. The term vector/matrix field used in differential geometry and physics is actually a vector/matrix space over a field, i.e. the product of a field and a vector/matrix space, whose elements are vectors/matrices. The type signatures are slightly different.

Algebraic field:
×: 𝔽 × 𝔽 → 𝔽
+: 𝔽 × 𝔽 → 𝔽

Vector field:
+: 𝔽ⁿ × 𝔽ⁿ → 𝔽ⁿ
⋅: 𝔽 × 𝔽ⁿ → 𝔽ⁿ

cc: @altavir @dievsky
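A minimal Kotlin sketch of those two signatures (hypothetical interface names, not KMath's actual API):

```kotlin
// Hypothetical interfaces, only to contrast the two signatures above.
interface Field<T> {
    val zero: T                  // additive identity
    val one: T                   // multiplicative identity
    fun add(a: T, b: T): T       // +: F × F → F
    fun multiply(a: T, b: T): T  // ×: F × F → F
    fun negate(a: T): T          // additive inverse
    fun reciprocal(a: T): T      // multiplicative inverse, defined for a != zero
}

interface VectorSpace<V, S> {
    val scalars: Field<S>        // the underlying scalar field
    fun add(a: V, b: V): V       // +: Fⁿ × Fⁿ → Fⁿ
    fun scale(k: S, v: V): V     // ⋅: F × Fⁿ → Fⁿ
}
```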
a
Not quite. In physics we can have (force) vector fields which are actually linear spaces from the point of view of mathematics (you can sum forces at a point by adding vectors). There could also be scalar fields (like a potential). In most cases addition also works, but multiplication is meaningless (you can multiply two potentials, but the result does not make any sense). There could also be tensor fields with a more complicated meaning. But usually they are still linear spaces, not fields.
Actually, it is one of the motivations to have spaces over not only the real numbers, but whatever you want. We want physical quantities to form at least some kind of algebra.
b
I think the "field" refers to the scalar portion and the "vector/matrix/tensor" refers to the linear part? So the vector field itself is not a field, but the scalar elements of the vector space are
a
We have a physical "space", most times it is 3D, and the space word is quoted since it is a space only in the geometrical sense (yes, you can operate on coordinates, but we do not talk about those operations here). Now in each point of this space we have a mathematical space of some physical quantity like potential, force or velocity. This quantity at this specific point of real space forms a mathematical space of its own. We can sum two forces applied to one point. By linear I mean that we additionally have multiplication by a constant, like C = a*A + b*B. And the multiplication scales all components of A and B proportionally. I can't think of a simple case where you have something like A*B or A/B, so it is a Space usually. This is where mathematics ends, because in physics we seldom can operate on values applied to different Euclidean space points. For example, we can find a resultant force applied to a body even when application points are different.
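A tiny self-contained illustration of that kind of linear combination with plain Kotlin arrays (just an example, not tied to any library):

```kotlin
// Forces at a point: we can add them and scale them by constants,
// C = a*A + b*B, but there is no meaningful product of two forces.
fun combine(a: Double, A: DoubleArray, b: Double, B: DoubleArray): DoubleArray =
    DoubleArray(A.size) { i -> a * A[i] + b * B[i] }

fun main() {
    val gravity = doubleArrayOf(0.0, 0.0, -9.8)
    val thrust = doubleArrayOf(1.0, 0.0, 12.0)
    println(combine(2.0, gravity, 1.0, thrust).contentToString()) // ≈ [1.0, 0.0, -7.6]
}
```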
b
Interesting! In statistics and machine learning we use inner product spaces, which also have linearity, and kernel spaces, which have a proper distance metric. I know that finite fields often come up in cryptography and formal language theory. Maybe there are some other connections...
a
There are. For example, physical methods are used in multidimensional optimization (Hamiltonian Monte-Carlo), and I am sure there are other applications as well. For example, in physics we have a quite robust equilibrium theory for complex systems; probably someone has tried to explore this.
р
I hope that helps to clarify some unfortunate clashes in terminology: https://en.m.wikipedia.org/wiki/Algebra_over_a_field
👍 1
In geometry a vector field on a manifold is an operator on the functions on that manifold which "enables you to take derivatives in the direction of the field".
The set of functions is an algebra over the reals but the set of vector fields is just a vector space.
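For instance, a rough finite-difference sketch of that idea in Kotlin (toy types, not from any real geometry library):

```kotlin
// Toy vector-field-as-operator: a field assigns a direction to every point,
// and acts on a scalar function by taking the derivative in that direction
// (approximated here with a central finite difference).
typealias Point = DoubleArray
typealias ScalarFunction = (Point) -> Double
typealias VectorField = (Point) -> DoubleArray

fun derive(field: VectorField, f: ScalarFunction, h: Double = 1e-6): ScalarFunction = { p ->
    val v = field(p)
    val forward = DoubleArray(p.size) { i -> p[i] + h * v[i] }
    val backward = DoubleArray(p.size) { i -> p[i] - h * v[i] }
    (f(forward) - f(backward)) / (2 * h)
}

fun main() {
    // The rotational field (-y, x) applied to f(x, y) = x * y on R^2
    val rotation: VectorField = { p -> doubleArrayOf(-p[1], p[0]) }
    val f: ScalarFunction = { p -> p[0] * p[1] }
    val df = derive(rotation, f)
    println(df(doubleArrayOf(1.0, 2.0))) // ≈ x² - y² = 1 - 4 = -3
}
```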
Computer science is different to maths here in the sense that we still want to define multiplication & division elementwise. Those are optimised in most low-level implementations and need to be in the higher-level API even if that breaks the mathematical structure.
a
Indeed. And from my engineering point of view those algebras are needed just to propagate operations; we do not use any fundamental properties.
b
I think elementwise operators are compatible with the mathematical definition, you just cannot have `VectorField` be a subtype of `Field`, it must be a tuple of `Vector` and `Field` with its own products (e.g. Hadamard, inner product). Here is one implementation: https://github.com/breandan/kotlingrad/blob/master/core/src/main/kotlin/edu/umontreal/kotlingrad/typelevel/TypeClassing.kt#L235
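Roughly this shape, as a sketch (hypothetical names, not the Kotlingrad code linked above):

```kotlin
// A vector type built over a scalar field (here plain Double), rather than
// being a subtype of Field, with its own products.
class VectorOverDoubles(val elements: DoubleArray) {
    operator fun plus(other: VectorOverDoubles) =
        VectorOverDoubles(DoubleArray(elements.size) { i -> elements[i] + other.elements[i] })

    // scalar action of the field on the vector
    operator fun times(k: Double) =
        VectorOverDoubles(DoubleArray(elements.size) { i -> elements[i] * k })

    // Hadamard (elementwise) product: Vector × Vector → Vector
    fun hadamard(other: VectorOverDoubles) =
        VectorOverDoubles(DoubleArray(elements.size) { i -> elements[i] * other.elements[i] })

    // inner product: Vector × Vector → scalar, landing back in the field
    infix fun dot(other: VectorOverDoubles): Double =
        elements.indices.sumOf { i -> elements[i] * other.elements[i] }
}

fun main() {
    val a = VectorOverDoubles(doubleArrayOf(1.0, 2.0))
    val b = VectorOverDoubles(doubleArrayOf(3.0, 4.0))
    println(a.hadamard(b).elements.contentToString()) // [3.0, 8.0]
    println(a dot b)                                  // 11.0
}
```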
a
KMath initially had a similar design for so-called AlgebraElements, which are structures coupled with an appropriate algebra. But lately I have been removing most of its usages and am going to deprecate it in the future. It is much easier to define the scope than to complicate the model with a difference between structures and elements.
р
Fields in geometry (like vector or tensor fields) must transform consistently with respect to a change of coordinates. Elementwise operations typically break that. And they really have little to do with algebraic fields.
b
@Ролан are you referring to covariance/contravariance? While elementwise transformations may not be semantically valid in certain spaces, it should still be possible to define them algebraically. Maybe not on the `VectorField` interface itself, forcing all inheritors to implement it, but as an extension function. I recently gave a presentation about `vmap`, a pattern which we borrowed in Kotlingrad for working with tensors. You pass in a lambda and it returns a function which accepts a tensor and maps the lambda over that tensor. This can be elementwise, or using some more complicated mapping (e.g. convolution). https://github.com/compcalc/compcalc.github.io/blob/main/public/pytorch/ad_pytorch.pdf I was also reading about RCF, which has some nice computational properties for defining mathematical type systems: https://en.wikipedia.org/wiki/Real_closed_field
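In its simplest form the pattern looks something like this (a minimal sketch over a flat array, not the actual Kotlingrad API):

```kotlin
// vmap sketch: take a lambda on scalars, return a function that maps it over
// a "tensor", here just a flat DoubleArray.
fun vmap(f: (Double) -> Double): (DoubleArray) -> DoubleArray =
    { tensor -> DoubleArray(tensor.size) { i -> f(tensor[i]) } }

fun main() {
    val square = vmap { x -> x * x }
    println(square(doubleArrayOf(1.0, 2.0, 3.0)).contentToString()) // [1.0, 4.0, 9.0]
}
```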
р
Very interesting, thank you very much for this, I will look into your presentation more in detail. In geometry indeed we have the notions of vector fields and their dual counterparts - the differential forms, and you can put that into a contravariant/covariant context if you wish. I think the "tensors" to which you apply vmap are rather points on the manifold and AD plays the role of exterior derivative (which outputs 1-forms, and those are the duals to vector fields). If your elementwise operation is consistent with the change of coordinates then you're all good.
For real closed fields cf. @Aleksei Dievskii, he is the algebraist, I am a geometer ))
a
sorry, I've lost the train of thought here for a little. what are we discussing now, exactly? I remember we started with how KMath's `NDField` wasn't a field in the mathematical sense. it can be more accurately described as an (associative, commutative, unitary) algebra using the Hadamard product.
a
Actually, the discussion diverged a lot from this point. I've designed KMath from my point of view as a physicist. It is closer to geometry than to pure algebra, but still different 🙂
a
(that's if we disregard the difference between real numbers and IEEE 754 doubles.)
a
So we have a lot of different definitions which are not fully compatible. An interesting observation is that we can draw some interesting conclusions from the definitions themselves. And I also found out that I've unconsciously added some physics concepts to the design. For example, remember that I supposed there should be algebras inside algebras, defined by contexts.
Real numbers are another thing. I've deprecated and removed `Real` in favor of working with Kotlin's Double, so we can add a custom algebra to work with real reals.
a
real reals are quite hard to express in traditional computing, sadly. too many of them. :))
a
Indeed. But we can experiment to see if there are some applications for that. This is an additional benefit of not limiting ourselves to only one definition of operations on one type. For Doubles we will still have to define a new type though, since intrinsic operations will always override external ones.
р
@Aleksei Dievskii, btw, I also wanted to point out to you that whatever a given trait for Field has in its interface (e.g. it contains division), this will have to be in whatever version one comes up with for "NDField". Yes, for example, the division won't be defined for instances where the product of the elements is zero, but that shouldn't forbid it from the API. It should be left up to the user to handle a division-by-zero exception in that case.
a
that's because the (algebraic) field properties are not completely expressible as traits. you don't only have to have some specific operations, they also must have specific properties. for example, the multiplication must be associative. also, the division must be inverse to multiplication, so `(a/b) * b = a` for any valid `b != 0`. this doesn't hold for `NDField`'s division: `[1,1]/[1,0] * [1,0] = [1, NaN]`. that should in no way preclude `KMath` from specifying that their `Field` has some different properties.
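Spelled out with plain doubles and elementwise operations (just to illustrate the point above, not any particular API):

```kotlin
// Elementwise (Hadamard-style) division and multiplication on arrays.
fun div(a: DoubleArray, b: DoubleArray) = DoubleArray(a.size) { i -> a[i] / b[i] }
fun mul(a: DoubleArray, b: DoubleArray) = DoubleArray(a.size) { i -> a[i] * b[i] }

fun main() {
    val a = doubleArrayOf(1.0, 1.0)
    val b = doubleArrayOf(1.0, 0.0)
    // In a true field (a / b) * b would give a back; elementwise it does not:
    println(mul(div(a, b), b).contentToString()) // [1.0, NaN]
}
```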
a
It would be nice if you could later summarize the discussion in https://github.com/mipt-npm/kmath/issues/192. It is not simple to do mixins and type intersections in Kotlin, but I will try to account for all comments.
b
At some point, you might have a chat with Valery Isaev, who has an implementation of this on the JVM: https://github.com/JetBrains/arend-lib/blob/master/src/Algebra/Field.ard He gave a presentation about it sometime last year: https://vimeo.com/413726748
a
Arend is a theorem prover. Its aims are quite orthogonal to KMath's aims.
b
Proofs and programs are not so different from each other https://en.wikipedia.org/wiki/Curry–Howard_correspondence
a
They are vastly different. Mathematically correct code is usually not simple, convenient code. To make it pragmatic, you need to cut some corners, make it convenient for end users. But that is a discussion for another topic.
р
@Aleksei Dievskii yes, of course you are right about the concepts, but I was focusing on the API, and I found the name field suggestive. After all, multiplying 0.1 by itself will give 0 after a finite number of iterations, but I don't want to give up calling the doubles a field - they give me all the operations a field has.
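That underflow is easy to see with plain Kotlin doubles (just a quick check, not tied to any API):

```kotlin
// Repeatedly multiplying by 0.1 underflows to exact 0.0 after a few hundred
// steps, which could never happen in a true field.
fun main() {
    var x = 0.1
    var steps = 1
    while (x != 0.0) {
        x *= 0.1
        steps++
    }
    println("reached 0.0 after $steps multiplications") // around 324 for IEEE 754 doubles
}
```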
a
@Ролан there's a difference. with doubles, there is an actual field that they're aiming to imitate. with our "Hadamard algebra", there's no such field. that being said, no one would probably object if you provide a division operation for a ring (or an algebra).
👍 1