Topic : 3D Matrix Math Demystified
Author : Seumas McNally


The cells of a 3x4 matrix are ordered (in RAM) like this:
           [0, 3, 6, 9 ]
           [1, 4, 7, 10]
           [2, 5, 8, 11]
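

That cell order matches a plain C declaration of four 3-component vectors, which is the indexing the code below assumes (the declaration itself is my sketch, not something from the original listing):


float Matrix[4][3]; //Matrix[0..2] hold the X, Y, and Z Axis Vectors,
                    //Matrix[3] holds the Translation Vector, and RAM
                    //cell n is Matrix[n / 3][n % 3].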



The Translation Vector is merely the amount to shift the point in X, Y, and Z, as seen from the Identity reference frame. It would work the same as if it were outside of the matrix and you simply added it to the point after the matrix transform. If the matrix were representing a jet plane's orientation in the world, the Translation Vector would merely be the coordinates of the center of the jet plane in the world. When matrix multiplication was described earlier as the scaling and adding of vectors, the X, Y, and Z components of the point to be rotated scaled the X, Y, and Z Axis Vectors respectively, but what scales the Translation Vector? If you're working with matrices that are larger in one or both dimensions than your vectors, you usually fill in the extra vector components with 1s. So a 3-component X, Y, Z vector gets an extra imaginary Translation Component added to the end which is always a 1, and that Translation Component scales the Translation Vector of the matrix. Scaling by 1 is easy, since it means no change at all. Here's the code to transform a point through a 3x4 matrix:


NewX = X * Matrix[0][0] + Y * Matrix[1][0] + Z * Matrix[2][0] + Matrix[3][0];
NewY = X * Matrix[0][1] + Y * Matrix[1][1] + Z * Matrix[2][1] + Matrix[3][1];
NewZ = X * Matrix[0][2] + Y * Matrix[1][2] + Z * Matrix[2][2] + Matrix[3][2];
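

As a quick sanity check (the numbers are only an illustration, not from the original), a matrix whose Axis Vectors are the Identity axes and whose Translation Vector is (10, 0, 0) should do nothing but shift the point 10 units along X:


float Matrix[4][3] = {
  {1, 0, 0},  //X Axis Vector
  {0, 1, 0},  //Y Axis Vector
  {0, 0, 1},  //Z Axis Vector
  {10, 0, 0}  //Translation Vector
};
//Feeding the point (1, 2, 3) through the code above gives (11, 2, 3):
//the rotation part leaves it untouched and the translation shifts X.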



Going in Reverse
Multiplying forwards through a matrix is great, but what if you want to multiply "backwards", to take a point that has been transformed through a matrix, and bring it back into the Identity reference frame where it started from? In the general case, this requires calculating the Inverse of the matrix, which is a lot of work for general matrices. However in the case of standard rotation matrices (said to represent an Orthonormal Basis) where the three Axis Vectors are at perfect right angles to each other, the Transpose of the matrix happens to also be the Inverse, and the Transpose is created merely by flipping the matrix about its primary diagonal (which runs from upper left to lower right). Here is a visual representation:


         [0,  0, -1]        T   [ 0, 1,  0]
matrix = [1,  0,  0]  matrix  = [ 0, 0, -1]
         [0, -1,  0]            [-1, 0,  0]
          ^   ^   ^  
          |   |   |-- Z Axis Vector
          |   |------ Y Axis Vector
          |---------- X Axis Vector


Effective cell order change (in RAM):

           [0, 3, 6]            [0, 1, 2]
    normal [1, 4, 7]  transpose [3, 4, 5]
           [2, 5, 8]            [6, 7, 8]



You can either make a transposed version of your matrix and then multiply by that, or you can do the math to directly multiply through the transpose of the matrix without actually changing it. If you go for the latter, it just so happens that instead of scaling and adding, you do Dot Products. The Dot Product of the vectors (X, Y, Z) and (A, B, C) is X*A + Y*B + Z*C. The following code to multiply a point through the Transpose of a matrix simply does three Dot Products: Point Dot X Axis Vector, Point Dot Y Axis Vector, and Point Dot Z Axis Vector.


NewX = X * Matrix[0][0] + Y * Matrix[0][1] + Z * Matrix[0][2];
NewY = X * Matrix[1][0] + Y * Matrix[1][1] + Z * Matrix[1][2];
NewZ = X * Matrix[2][0] + Y * Matrix[2][1] + Z * Matrix[2][2];
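

If you would rather take the first option and build an explicit transposed copy, it is just a matter of swapping the row and column indices; a minimal sketch (the function name is mine, not from the original):


//Flip a 3x3 matrix about its primary diagonal: Transposed[i][j] = Matrix[j][i].
void TransposeMatrix(float Matrix[3][3], float Transposed[3][3]){
  int i, j;
  for(i = 0; i < 3; i++){
    for(j = 0; j < 3; j++){
      Transposed[i][j] = Matrix[j][i];
    }
  }
}


Multiplying a point forwards through the transposed copy then gives exactly the same result as the three Dot Products above.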



Going in reverse works similarly for 3x4 matrices with Translation, except that you have to subtract the Translation Vector from the point before rotating it, to properly reverse the order of operations, since when going forwards, the Translation Vector was added to the point after rotation.


TX = X - Matrix[3][0];
TY = Y - Matrix[3][1];
TZ = Z - Matrix[3][2];
NewX = TX * Matrix[0][0] + TY * Matrix[0][1] + TZ * Matrix[0][2];
NewY = TX * Matrix[1][0] + TY * Matrix[1][1] + TZ * Matrix[1][2];
NewZ = TX * Matrix[2][0] + TY * Matrix[2][1] + TZ * Matrix[2][2];
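

A handy property to check in your own code (this is only a sketch, and the function names are mine): transforming a point forwards through an orthonormal 3x4 matrix and then backwards through the same matrix should hand you back the original point, give or take floating point error.


//Forward: scale the Axis Vectors by the point, then add the Translation Vector.
void TransformPoint(float M[4][3], float In[3], float Out[3]){
  Out[0] = In[0] * M[0][0] + In[1] * M[1][0] + In[2] * M[2][0] + M[3][0];
  Out[1] = In[0] * M[0][1] + In[1] * M[1][1] + In[2] * M[2][1] + M[3][1];
  Out[2] = In[0] * M[0][2] + In[1] * M[1][2] + In[2] * M[2][2] + M[3][2];
}
//Reverse: subtract the Translation Vector, then Dot with each Axis Vector.
void InverseTransformPoint(float M[4][3], float In[3], float Out[3]){
  float TX = In[0] - M[3][0], TY = In[1] - M[3][1], TZ = In[2] - M[3][2];
  Out[0] = TX * M[0][0] + TY * M[0][1] + TZ * M[0][2];
  Out[1] = TX * M[1][0] + TY * M[1][1] + TZ * M[1][2];
  Out[2] = TX * M[2][0] + TY * M[2][1] + TZ * M[2][2];
}
//Calling TransformPoint and then InverseTransformPoint with the same
//orthonormal matrix should recover the original point.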



Multiplying Matrices
Now things start to get really interesting. Multiplying matrices together, or Concatenating them, allows you to combine the actions of multiple separate matrices into a single matrix, such that points multiplied through it will produce the same result as if you had multiplied them through each of the original matrices in turn. Imagine all the computation that can save you when you have a lot of points to transform!

I like to say that you multiply one matrix "through" another matrix, since that's what you're really doing mathematically. The process is astoundingly simple, and all you basically do is take the three Axis Vectors of the first matrix (for 3x3 matrices), and do a normal Vector Times Matrix operation on each of them with the second matrix to produce the three Axis Vectors in the result matrix. The same is true for 3x4 matrices being multiplied through each other, in which case you multiply each of the X, Y, and Z Axis Vectors and the Translation Vector of the first matrix through the second matrix as normal, and you get your resultant 3x4 matrix out the other side. Keep in mind that you can also reverse multiply one matrix through another, by using the Transpose of the second matrix (or the appropriate code to do a transposed multiply directly). For a 3x3 matrix, the code for the normal forward case might look like this:


for(i = 0; i < 3; i++){  //Cycle through each vector of first matrix.
  NewMatrix[i][0] = MatrixA[i][0] * MatrixB[0][0] +
    MatrixA[i][1] * MatrixB[1][0] + MatrixA[i][2] * MatrixB[2][0];
  NewMatrix[i][1] = MatrixA[i][0] * MatrixB[0][1] +
    MatrixA[i][1] * MatrixB[1][1] + MatrixA[i][2] * MatrixB[2][1];
  NewMatrix[i][2] = MatrixA[i][0] * MatrixB[0][2] +
    MatrixA[i][1] * MatrixB[1][2] + MatrixA[i][2] * MatrixB[2][2];
}
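

For 3x4 matrices, the only addition is the Translation Vector of the first matrix, which goes through the second matrix like an ordinary point, picking up the second matrix's translation as well; a sketch along those lines:


for(i = 0; i < 4; i++){  //Rotate all four vectors of the first matrix.
  NewMatrix[i][0] = MatrixA[i][0] * MatrixB[0][0] +
    MatrixA[i][1] * MatrixB[1][0] + MatrixA[i][2] * MatrixB[2][0];
  NewMatrix[i][1] = MatrixA[i][0] * MatrixB[0][1] +
    MatrixA[i][1] * MatrixB[1][1] + MatrixA[i][2] * MatrixB[2][1];
  NewMatrix[i][2] = MatrixA[i][0] * MatrixB[0][2] +
    MatrixA[i][1] * MatrixB[1][2] + MatrixA[i][2] * MatrixB[2][2];
}
NewMatrix[3][0] += MatrixB[3][0];  //Then add the second matrix's
NewMatrix[3][1] += MatrixB[3][1];  //Translation Vector to the rotated
NewMatrix[3][2] += MatrixB[3][2];  //Translation Vector of the first.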



If you have two matrices, A and B, where A pitches forwards a bit, and B rotates to the right a bit, and you multiply A through B to produce matrix C, then multiplying a point through matrix C will produce the same result as if the point was first multiplied through matrix A, and the resulting point was multiplied through matrix B. Note: This is the way I prefer to think of my matrix operations right now, but it is actually the reverse of the normal way. When working with matrix operations in APIs such as OpenGL, matrix concatenations actually work in reverse order. In the above example of concatenating A and B into C, multiplying a point by C would actually be the same as first multiplying the point by B, and then multiplying the result by A. You can think of it as each successively concatenated matrix acting in respect to the local coordinate reference frame built up by the previously concatenated matrices, whereas with my method each successive matrix acts in the identity reference frame, which keeps the same order as if you took the time to rotate the point by each original matrix in turn.
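

As a stand-alone check of that forward ordering (the helper function and the example rotations here are mine, not from the original), multiplying a point through C = A concatenated through B prints the same numbers as multiplying it through A and then through B:


#include <stdio.h>

//Row-vector convention used throughout: New[j] = X*M[0][j] + Y*M[1][j] + Z*M[2][j].
void VecTimesMatrix(float P[3], float M[3][3], float Out[3]){
  int j;
  for(j = 0; j < 3; j++){
    Out[j] = P[0] * M[0][j] + P[1] * M[1][j] + P[2] * M[2][j];
  }
}

int main(void){
  float A[3][3] = {{1,0,0},{0,0,1},{0,-1,0}};   //90 degrees about X (example values).
  float B[3][3] = {{0,0,-1},{0,1,0},{1,0,0}};   //90 degrees about Y (example values).
  float C[3][3], P[3] = {1.0f, 2.0f, 3.0f};
  float ViaC[3], Tmp[3], ViaAB[3];
  int i, j;
  for(i = 0; i < 3; i++){  //C = A multiplied through B, as in the loop above.
    for(j = 0; j < 3; j++){
      C[i][j] = A[i][0] * B[0][j] + A[i][1] * B[1][j] + A[i][2] * B[2][j];
    }
  }
  VecTimesMatrix(P, C, ViaC);      //One transform through C...
  VecTimesMatrix(P, A, Tmp);       //...matches transforming through A
  VecTimesMatrix(Tmp, B, ViaAB);   //and then through B.
  printf("Through C:        %g %g %g\n", ViaC[0], ViaC[1], ViaC[2]);
  printf("Through A then B: %g %g %g\n", ViaAB[0], ViaAB[1], ViaAB[2]);
  return 0;
}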

The End, For Now:
This is by no means a complete treatment of matrix math, but hopefully it will help you to understand the basics better. Once you grasp this much, go dig up a few good books on matrices and learn some of their REAL power, while remembering that at their core, the simpler matrices don't have to be thought of as matrices at all.

Copyright 1998-1999 by Seumas McNally.
No reproduction may be made without the author's written consent.

Courtesy Of Longbow Digital Artists

Reprinted with permission!
