Migrated from http://blogs.msdn.com/b/rezanour

Some math and physics libraries contain separate data types to represent points and vectors. I’ve seen many people become confused by this, since packages more commonly use vectors exclusively for everything. What’s going on here? Why have 2 separate types? Well, the answer is quite simple. Strictly speaking, a point is not a vector and a vector is not a point! We’ll come back later to why many packages, XNA included, use vectors to represent points, and why that can make sense.

*Points*

A point is a *location* in an *n*-dimensional space. It is usually represented in what we call Cartesian coordinates, such as (3, 5, 23). You can think of this as the absolute location of the point. Because locations are specified using real numbers only, we say that the points of an *n*-dimensional space form the set *R^n*, so for 3-dimensional space it would be *R^3*.

Points are quite limiting in the types of mathematical operations they support. For instance, it doesn’t make sense to add 2 locations together. Think about that for a second. What does it mean if I add together the locations of a pencil and a book? It doesn’t really mean anything. You also can’t multiply a location by another location. That just doesn’t make sense.

*Vectors*

A vector is an element of a vector space, and can be thought of as a *relative displacement*. Vectors do *not* have a location, but instead describe how to get from one location to another. Therefore, they are directly related to points in that they describe the displacement required to go from one point (or location) to another. For example, if you have a point *A*, and you add a vector to it, you end up at some new point *B*. Similarly, if we subtract that displacement from *B*, we end up at *A* again. Like points, vectors are also commonly written using Cartesian coordinates. However, it is important to realize that the coordinates, such as (3, 4, 5), do *not* represent a location, but rather how much displacement you have along each axis.
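The point/vector relationship described above can be sketched in a few lines of Python (plain tuples here, with hypothetical helper names; this is an illustration, not any particular library’s API):

```python
# Adding a vector (displacement) to a point (location) yields a new point,
# and subtracting two points recovers the displacement between them.

def add(point, vector):
    """Point + Vector -> Point."""
    return tuple(p + v for p, v in zip(point, vector))

def displacement(b, a):
    """Point B - Point A -> Vector from A to B."""
    return tuple(pb - pa for pb, pa in zip(b, a))

A = (1.0, 2.0, 3.0)        # a location
v = (3.0, 4.0, 5.0)        # a displacement
B = add(A, v)              # the new location, (4.0, 6.0, 8.0)

assert displacement(B, A) == v   # subtracting B - A gives the displacement back
```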

Unlike points, we can add vectors together. For example, when we say something like “move forward 3 feet, then move to your right 3 feet”, that’s 2 displacements concatenated together. Two vectors are considered equal if their *directions* and *magnitudes* (or lengths) are the same. Since they have no location, we can look at 2 equal vectors side by side, something we couldn’t do with points, since any two equal points would be directly on top of each other (see Figure 2).

There are several operations we can do directly on vectors. We can add 2 vectors together, as we saw above. We can subtract 2 vectors, for instance if you move forward 3 units, then back 2 units. We can multiply a vector by a scalar, which just scales the magnitude of the vector. We can *normalize* a vector, which maintains its direction but scales its magnitude to 1. There are also several forms of multiplication we can perform between vectors. We’ll cover all of these in a bit more depth in the next installment of this series.
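As a rough sketch, the operations listed above look like this (again using plain tuples and illustrative function names, not a real library’s API):

```python
import math

def add(u, v):
    """Concatenate two displacements."""
    return tuple(a + b for a, b in zip(u, v))

def scale(u, s):
    """Multiply a vector by a scalar: same direction, scaled magnitude."""
    return tuple(a * s for a in u)

def magnitude(u):
    """Length of the vector."""
    return math.sqrt(sum(a * a for a in u))

def normalize(u):
    """Same direction, magnitude scaled to 1."""
    return scale(u, 1.0 / magnitude(u))

v = (3.0, 4.0, 0.0)
assert magnitude(v) == 5.0                      # the classic 3-4-5 triangle
assert math.isclose(magnitude(normalize(v)), 1.0)
```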

*Implementation of Points as Vectors*

So, now that we know that points are different from vectors, why do many libraries implement points as vectors? Surely, this can’t be, since we just saw that we can add vectors but can’t add points, amongst other differences, right? Well, the answer is somewhat complicated. Technically, both points and vectors can be represented by the same data on the computer, which in the case of 3D is three single- or double-precision floating-point values. However, the operations that are allowed for each type vary. Instead of implementing two nearly identical types, with just a different set of operators on them, most packages opt to leave the interpretation of the data up to the user, and just use a single implementation which can perform all the operations. This can lead to many algorithmic bugs if users aren’t aware of how they’re using them, but it’s a common tradeoff nonetheless.
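For contrast, here is a minimal sketch of the two-type approach, where the type system catches the invalid operations for us (the class and method names are made up for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector3:
    x: float
    y: float
    z: float
    def __add__(self, other):
        # Vector + Vector -> Vector (concatenated displacements).
        if isinstance(other, Vector3):
            return Vector3(self.x + other.x, self.y + other.y, self.z + other.z)
        return NotImplemented

@dataclass(frozen=True)
class Point3:
    x: float
    y: float
    z: float
    def __add__(self, other):
        # Point + Vector -> Point; Point + Point is deliberately undefined.
        if isinstance(other, Vector3):
            return Point3(self.x + other.x, self.y + other.y, self.z + other.z)
        return NotImplemented
    def __sub__(self, other):
        # Point - Point -> Vector (the displacement between the locations).
        if isinstance(other, Point3):
            return Vector3(self.x - other.x, self.y - other.y, self.z - other.z)
        return NotImplemented
```

With these types, `Point3(1, 2, 3) + Point3(4, 5, 6)` raises a `TypeError` at runtime instead of silently producing a meaningless value, at the cost of maintaining two nearly identical classes.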

Some packages use a single 4-element vector to represent both concepts, with a 1 in the last element (usually called w) for points, and a 0 for vectors. This actually helps loosely enforce the rules above, since subtracting two points gives 1 – 1 = 0 in the w field, which makes the result a vector. And adding a vector to a point leaves 1, making the result a point. Adding two points would put 2 in the w field, which is invalid, as expected. However, this scheme uses considerably more memory (33% more in the case of 3D) with little real-world benefit.
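The w-field bookkeeping can be seen with plain component-wise arithmetic on 4-tuples (a sketch of the convention, not any specific library):

```python
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

P = (1.0, 2.0, 3.0, 1.0)   # point:  w = 1
Q = (4.0, 6.0, 8.0, 1.0)   # point:  w = 1
v = (0.0, 0.0, 1.0, 0.0)   # vector: w = 0

assert sub(Q, P)[3] == 0.0   # point - point  -> w = 0: a vector
assert add(P, v)[3] == 1.0   # point + vector -> w = 1: a point
assert add(P, Q)[3] == 2.0   # point + point  -> w = 2: invalid, as expected
```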

So, ultimately, as long as we mentally consider our points as locations and our vectors as displacements in our calculations, then we can at least use the same data structure for both, saving on space and code duplication. We just need to be careful that our algorithms don’t do any mathematically invalid operations on these types, which the compiler won’t catch for us. In order to use a vector as a point, we treat it as the relative displacement *from the origin, 0*. This will give the exact same Cartesian coordinates as the point would have had, now in the form of a vector.
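Concretely, that last rule looks like this (a tiny sketch using tuples):

```python
origin = (0.0, 0.0, 0.0)

# The "point" (3, 5, 23), stored as a vector: the displacement from the origin.
v = (3.0, 5.0, 23.0)

# Adding that displacement to the origin lands on the same Cartesian coordinates.
point = tuple(o + d for o, d in zip(origin, v))
assert point == v
```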