Why Anna Karenina applies to linear algebra
If Leo Tolstoy were a mathematician, he might have written:
All zero vectors are alike; each non-zero vector is a vector in its own way
What does this mean?
To start with, what is a vector? In physics, it is just a number paired with a direction; think of velocity, acceleration, or forces. In geometry, it is likewise a quantity with a magnitude (a number) and a direction. In algebra, it is an object that can be added to other similar objects and multiplied by real numbers (scaling it, which changes the magnitude while keeping the direction, or flips it). There are some rules (known as axioms) that set up the game, but they are not important for this article.
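For example, in the plane, addition and scaling work coordinate by coordinate:
\[ 2\begin{pmatrix}1\\3\end{pmatrix} + \begin{pmatrix}-1\\0\end{pmatrix} = \begin{pmatrix}2\\6\end{pmatrix} + \begin{pmatrix}-1\\0\end{pmatrix} = \begin{pmatrix}1\\6\end{pmatrix} \]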
There is a special vector: the null vector. It has a magnitude of 0. It has no particular direction, or it points in all directions at once. Regardless of the dimension of the space you are working in (think 2D or 3D geometry, for example, but this also applies in other dimensions), the 0 vector is the same.
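In coordinates, it is simply made of zeros, however many coordinates the space requires:
\[ \mathbf{0} = \begin{pmatrix}0\\0\end{pmatrix} \text{ in 2D}, \qquad \mathbf{0} = \begin{pmatrix}0\\0\\0\end{pmatrix} \text{ in 3D}. \]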
This is the point of the Anna Karenina quote: all zero vectors are the same. But each non-zero vector lives in its own space, with its own direction and its own size.
And, because of this, the 0 vector is central to the definitions of linear dependence and linear independence. Imagine picking \(n\) vectors in the space and adding them up. The sum may be 0 (for example, in physics, a car drives north, east, south, and then west, covering equal distances, and ends up back where it started), or it may not. We can also scale these vectors up or down before adding them and land on 0 that way. And if we don't land on 0, we can always add one more vector (the opposite of the current sum) to get there.
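For instance, if the car covers the same distance \(d\) in each of the four directions, the four displacement vectors cancel out:
\[ \begin{pmatrix}0\\d\end{pmatrix} + \begin{pmatrix}d\\0\end{pmatrix} + \begin{pmatrix}0\\-d\end{pmatrix} + \begin{pmatrix}-d\\0\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix} \]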
What is more important, though, is trying to reach 0 with as few vectors as possible. Regardless of the dimension we are in, we will find that there is a minimum number of vectors we can use. Below that number, no matter which vectors we start with, there is no way to get to 0, no way to get back to where we started, except by scaling all of these vectors by 0, which is just the trivial \(0 + 0 + \ldots + 0 = 0\). These vectors form what is known as a collection of independent vectors, because there is no way to combine all but one of them so that adding the remaining one lands us on 0.
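Written out, a collection \(\mathbf{v}_1, \ldots, \mathbf{v}_k\) is independent when the only way to combine its members into 0 is the trivial one:
\[ c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k = 0 \quad \text{only when} \quad c_1 = c_2 = \ldots = c_k = 0 \]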
If we then add just one more vector, that vector can be expressed as a sum of scaled versions of the previous ones. So now we have a collection of dependent vectors.
What is interesting is that if we start with one vector in our space, and then keep selecting other random vectors, scaling them and trying to make them sum to 0, we will find that after selecting \(n\) vectors, picking the \((n+1)\)-th one makes the collection dependent, whereas all the previous collections are independent. That \(n\) is the dimension of the space, and the earlier steps in the process define subspaces of our space.
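If you want to play with this, here is a small sketch in Python using NumPy (assuming NumPy is available): it picks random vectors in a 3D space and uses the matrix rank to check when the collection stops being independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # dimension of the space we work in

# Pick random vectors one at a time. The rank of the matrix whose
# columns are the chosen vectors equals the number of independent
# vectors among them, so the collection is independent exactly when
# the rank equals the number of vectors picked so far.
vectors = []
for k in range(1, n + 2):
    vectors.append(rng.standard_normal(n))
    rank = np.linalg.matrix_rank(np.column_stack(vectors))
    print(f"{k} vectors: rank {rank}, independent: {rank == k}")

# Expected: the first 3 collections are independent; the 4th vector
# always makes the collection dependent, since the rank cannot exceed 3.
```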
Consider for example the case of a 2D space, where we first pick the red vector from the image below:
The only way to get this red vector to 0 is to scale it by 0. So, let’s add another vector, the green one. Again, the only way to get these 2 vectors to sum to 0 is to multiply both by 0 and add the 0s.
But if we then pick the black vector, we find that adding two copies of the red vector and two copies of the green one gives exactly the black one:
So, subtracting two red vectors and two green ones from the black one would get us to 0. Or, in math terms,
\[ -2\mathbf{r} -2\mathbf{g} + \mathbf{b} = 0\]
Thus, these 3 vectors are dependent, and the dimension of the 2D space is 2 (which looks like a circular definition right now, sorry).
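If you want to check this numerically, take for instance \(\mathbf{r} = (1, 0)\), \(\mathbf{g} = (0, 1)\) and \(\mathbf{b} = (2, 2)\); these coordinates are just one choice consistent with \(\mathbf{b} = 2\mathbf{r} + 2\mathbf{g}\), not necessarily the ones in the picture.

```python
import numpy as np

# One possible set of coordinates matching the picture described above.
r = np.array([1.0, 0.0])   # red vector
g = np.array([0.0, 1.0])   # green vector
b = 2 * r + 2 * g          # black vector: two reds plus two greens

# -2r - 2g + b should be the zero vector, so the three are dependent.
print(-2 * r - 2 * g + b)  # prints [0. 0.]
```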
Linear algebra is useful in so many areas, so maybe I’ll write more of these articles.