The next matrix operation we will explore is matrix multiplication, one of the most important and useful of the matrix operations. Throughout this section, we will also demonstrate how matrix multiplication relates to linear systems of equations.

First, we provide a formal definition of row and column vectors.

Definition \(\PageIndex{1}\): Row and Column Vectors

Matrices of size \(n \times 1\) or \(1 \times n\) are called **vectors**. If \(X\) is such a matrix, then we write \(x_{i}\) to denote the entry of \(X\) in the \(i^{th}\) row of a column matrix, or the \(i^{th}\) column of a row matrix.

The \(n \times 1\) matrix \[X=\left[ \begin{array}{c} x_{1} \\ \vdots \\ x_{n} \end{array} \right]\] is called a **column vector.** The \(1 \times n\) matrix \[X = \left[ \begin{array}{ccc} x_{1} & \cdots & x_{n} \end{array} \right]\] is called a **row vector**.

We may simply use the term **vector** throughout this text to refer to either a column or row vector. If we do so, the context will make it clear which we are referring to.

In this chapter, we will again use the notion of linear combination of vectors as in Definition [def:linearcombination]. In this context, a linear combination is a sum consisting of vectors multiplied by scalars. For example, \[\left[ \begin{array}{r} 50 \\ 122 \end{array} \right] = 7\left[ \begin{array}{r} 1 \\ 4 \end{array} \right] +8\left[ \begin{array}{r} 2 \\ 5 \end{array} \right] +9\left[ \begin{array}{r} 3 \\ 6 \end{array} \right]\] is a linear combination of three vectors.

It turns out that we can express any system of linear equations as a linear combination of vectors. In fact, the vectors that we will use are just the columns of the corresponding augmented matrix!

Definition \(\PageIndex{2}\): The Vector Form of a System of Linear Equations

Suppose we have a system of equations given by \[\begin{array}{c} a_{11}x_{1}+\cdots +a_{1n}x_{n}=b_{1} \\ \vdots \\ a_{m1}x_{1}+\cdots +a_{mn}x_{n}=b_{m} \end{array}\] We can express this system in **vector form** as follows: \[x_1 \left[ \begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right] + x_2 \left[ \begin{array}{c} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{array} \right] + \cdots + x_n \left[ \begin{array}{c} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{array} \right] = \left[ \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_m \end{array} \right]\]

Notice that each vector used here is one column from the corresponding augmented matrix. There is one vector for each variable in the system, along with the constant vector.

The first important form of matrix multiplication is multiplying a matrix by a vector. Consider the product given by \[\left[ \begin{array}{rrr} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right] \left[ \begin{array}{r} 7 \\ 8 \\ 9 \end{array} \right]\] We will soon see that this equals \[7\left[ \begin{array}{c} 1 \\ 4 \end{array} \right] +8\left[ \begin{array}{c} 2 \\ 5 \end{array} \right] +9\left[ \begin{array}{c} 3 \\ 6 \end{array} \right] =\left[ \begin{array}{c} 50 \\ 122 \end{array} \right]\]

In general terms, \[\begin{aligned} \left[ \begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{array} \right] \left[ \begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array} \right] &= x_{1}\left[ \begin{array}{c} a_{11} \\ a_{21} \end{array} \right] +x_{2}\left[ \begin{array}{c} a_{12} \\ a_{22} \end{array} \right] +x_{3}\left[ \begin{array}{c} a_{13} \\ a_{23} \end{array} \right] \\ &= \left[ \begin{array}{c} a_{11}x_{1}+a_{12}x_{2}+a_{13}x_{3} \\ a_{21}x_{1}+a_{22}x_{2}+a_{23}x_{3} \end{array} \right] \end{aligned}\] Thus you take \(x_{1}\) times the first column, add \(x_{2}\) times the second column, and finally add \(x_{3}\) times the third column. The above sum is a linear combination of the columns of the matrix. When you multiply a matrix on the left by a vector on the right, the numbers making up the vector are just the scalars to be used in the linear combination of the columns as illustrated above.

Here is the formal definition of how to multiply an \(m \times n\) matrix by an \(n \times 1\) column vector.

Definition \(\PageIndex{3}\): Multiplication of Vector by Matrix

Let \(A=\left[ a_{ij} \right]\) be an \(m \times n\) matrix and let \(X\) be an \(n \times 1\) matrix given by \[A=\left[ A_{1} \cdots A_{n} \right],\ X = \left[ \begin{array}{r} x_{1} \\ \vdots \\ x_{n} \end{array} \right]\]

Then the product \(AX\) is the \(m \times 1\) column vector which equals the following linear combination of the columns of \(A\): \[x_{1}A_{1}+x_{2}A_{2}+\cdots +x_{n}A_{n} = \sum_{j=1}^{n}x_{j}A_{j}\]

If we write the columns of \(A\) in terms of their entries, they are of the form \[A_{j} = \left[ \begin{array}{c} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{array} \right]\] Then, we can write the product \(AX\) as \[AX = x_{1}\left[ \begin{array}{c} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{array} \right] + x_{2}\left[ \begin{array}{c} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{array} \right] +\cdots + x_{n}\left[ \begin{array}{c} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{array} \right]\]

Note that multiplication of an \(m \times n\) matrix and an \(n \times 1\) vector produces an \(m \times 1\) vector.
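Although the text carries out these products by hand, Definition \(\PageIndex{3}\) translates directly into a short program. The Python sketch below (the function name and structure are my own, not from the text) forms \(AX\) as a linear combination of the columns of \(A\), with \(A\) stored as a list of rows:

```python
def mat_vec(A, x):
    """Compute AX as x_1*A_1 + ... + x_n*A_n, a linear combination
    of the columns A_j of A (A is stored as a list of rows)."""
    m, n = len(A), len(A[0])
    assert n == len(x), "A must have as many columns as X has entries"
    result = [0] * m
    for j in range(n):          # scale the j-th column of A by x_j
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

# The 2x3 example from the text: AX = 7*[1,4] + 8*[2,5] + 9*[3,6]
print(mat_vec([[1, 2, 3], [4, 5, 6]], [7, 8, 9]))  # [50, 122]
```

Note how the loop never forms any "rows times columns" dot products explicitly; it only scales and adds columns, exactly as in the definition.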

Here is an example.

Example \(\PageIndex{1}\): A Vector Multiplied by a Matrix

Compute the product \(AX\) for \[A = \left[ \begin{array}{rrrr} 1 & 2 & 1 & 3 \\ 0 & 2 & 1 & -2 \\ 2 & 1 & 4 & 1 \end{array} \right],\ X = \left[ \begin{array}{r} 1 \\ 2 \\ 0 \\ 1 \end{array} \right]\]

**Solution**

We will use Definition [def:multiplicationvectormatrix] to compute the product. Therefore, we compute the product \(AX\) as follows. \[\begin{aligned} & 1\left[ \begin{array}{r} 1 \\ 0 \\ 2 \end{array} \right] + 2\left[ \begin{array}{r} 2 \\ 2 \\ 1 \end{array} \right] + 0\left[ \begin{array}{r} 1 \\ 1 \\ 4 \end{array} \right] + 1 \left[ \begin{array}{r} 3 \\ -2 \\ 1 \end{array} \right] \\ &= \left[ \begin{array}{r} 1 \\ 0 \\ 2 \end{array} \right] + \left[ \begin{array}{r} 4 \\ 4 \\ 2 \end{array} \right] + \left[ \begin{array}{r} 0 \\ 0 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 3 \\ -2 \\ 1 \end{array} \right] \\ &= \left[ \begin{array}{r} 8 \\ 2 \\ 5 \end{array} \right]\end{aligned}\]

Using the above operation, we can also write a system of linear equations in **matrix form**. In this form, we express the system as a matrix multiplied by a vector. Consider the following definition.

Definition \(\PageIndex{4}\): The Matrix Form of a System of Linear Equations

Suppose we have a system of equations given by \[\begin{array}{c} a_{11}x_{1}+\cdots +a_{1n}x_{n}=b_{1} \\ a_{21}x_{1}+ \cdots + a_{2n}x_{n} = b_{2} \\ \vdots \\ a_{m1}x_{1}+\cdots +a_{mn}x_{n}=b_{m} \end{array}\] Then we can express this system in **matrix form** as follows. \[\left[ \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \right] \left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right] = \left[ \begin{array}{c} b_{1} \\ b_{2} \\ \vdots \\ b_{m} \end{array} \right]\]

The expression \(AX=B\) is also known as the **Matrix Form** of the corresponding system of linear equations. The matrix \(A\) is simply the coefficient matrix of the system, the vector \(X\) is the column vector constructed from the variables of the system, and finally the vector \(B\) is the column vector constructed from the constants of the system. It is important to note that any system of linear equations can be written in this form.

Notice that if we write a homogeneous system of equations in matrix form, it would have the form \(AX=0\), for the zero vector \(0\).

You can see from this definition that a vector \[X = \left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right]\] will satisfy the equation \(AX=B\) only when the entries \(x_{1}, x_{2}, \cdots, x_{n}\) of the vector \(X\) are solutions to the original system.
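This observation is easy to check numerically: compute \(AX\) and compare with \(B\). A minimal Python sketch (the helper name is mine), using the earlier example where \(A\) has rows \((1,2,3)\) and \((4,5,6)\) and \(B=(50,122)^{T}\):

```python
def is_solution(A, x, b):
    """Return True exactly when the entries of x solve the system AX = B."""
    ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    return ax == b

A, b = [[1, 2, 3], [4, 5, 6]], [50, 122]
print(is_solution(A, [7, 8, 9], b))  # True  -- (7, 8, 9) solves the system
print(is_solution(A, [1, 1, 1], b))  # False -- (1, 1, 1) does not
```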

Now that we have examined how to multiply a matrix by a vector, we wish to consider the case where we multiply two matrices of more general sizes, although these sizes still need to be appropriate as we will see. For example, in Example [exa:vectormultbymatrix], we multiplied a \(3 \times 4\) matrix by a \(4 \times 1\) vector. We want to investigate how to multiply other sizes of matrices.

We have not yet given any conditions on when matrix multiplication is possible! For matrices \(A\) and \(B\), in order to form the product \(AB\), the number of columns of \(A\) must equal the number of rows of \(B.\) Consider a product \(AB\) where \(A\) has size \(m \times n\) and \(B\) has size \(n \times p\). Then, the product in terms of size of matrices is given by \[(m \times \overset{\text{these must match!}}{\widehat{n)\;(n}} \times p)=m \times p\]

Note the two outside numbers give the size of the product. One of the most important rules regarding matrix multiplication is the following. If the two middle numbers don’t match, you can’t multiply the matrices!

When the number of columns of \(A\) equals the number of rows of \(B\), the two matrices are said to be **conformable**, and the product \(AB\) is obtained as follows.

Definition \(\PageIndex{5}\): Multiplication of Two Matrices

Let \(A\) be an \(m \times n\) matrix and let \(B\) be an \(n \times p\) matrix of the form \[B=\left[ B_{1} \cdots B_{p} \right]\] where \(B_{1},...,B_{p}\) are the \(n \times 1\) columns of \(B\). Then the \(m \times p\) matrix \(AB\) is defined as follows: \[AB = A \left[ B_{1} \cdots B_{p} \right] = \left[ (AB)_{1} \cdots (AB)_{p} \right]\] where \((AB)_{k}\) is an \(m \times 1\) matrix or column vector which gives the \(k^{th}\) column of \(AB\).

Consider the following example.

Example \(\PageIndex{2}\): Multiplying Two Matrices

Find \(AB\) if possible. \[A = \left[ \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right],\ B = \left[ \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 3 & 1 \\ -2 & 1 & 1 \end{array} \right]\]

**Solution**

The first thing you need to verify when calculating a product is whether the multiplication is possible. The first matrix has size \(2 \times 3\) and the second matrix has size \(3 \times 3\). The inside numbers are equal, so \(A\) and \(B\) are conformable matrices. According to the above discussion, \(AB\) will be a \(2 \times 3\) matrix. Definition [def:multiplicationoftwomatrices] gives us a way to calculate each column of \(AB\), as follows.

\[\left[ \overset{\text{First column}}{\overbrace{\left[ \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right] \left[ \begin{array}{r} 1 \\ 0 \\ -2 \end{array} \right] }}, \overset{\text{Second column}}{\overbrace{\left[ \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right] \left[ \begin{array}{r} 2 \\ 3 \\ 1 \end{array} \right] }}, \overset{\text{Third column}}{\overbrace{\left[ \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right] \left[ \begin{array}{r} 0 \\ 1 \\ 1 \end{array} \right] }} \right]\] You know how to multiply a matrix times a vector, using Definition [def:multiplicationvectormatrix] for each of the three columns. Thus \[\left[ \begin{array}{rrr} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right] \left[ \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 3 & 1 \\ -2 & 1 & 1 \end{array} \right] = \left[ \begin{array}{rrr} -1 & 9 & 3 \\ -2 & 7 & 3 \end{array} \right]\]
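The column-at-a-time procedure above can be written as a short program. The following Python sketch (names are my own) builds \(AB\) by computing \(A\) times each column of \(B\):

```python
def mat_mul(A, B):
    """Form AB column by column: column k of AB equals A times column k of B."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inside sizes must match"
    cols = []
    for k in range(p):                       # k-th column of B
        Bk = [B[i][k] for i in range(n)]
        cols.append([sum(A[i][j] * Bk[j] for j in range(n)) for i in range(m)])
    # transpose the list of columns back into a list of rows
    return [[cols[k][i] for k in range(p)] for i in range(m)]

A = [[1, 2, 1], [0, 2, 1]]
B = [[1, 2, 0], [0, 3, 1], [-2, 1, 1]]
print(mat_mul(A, B))  # [[-1, 9, 3], [-2, 7, 3]]
```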

Since vectors are simply \(n \times 1\) or \(1 \times m\) matrices, we can also multiply a vector by another vector.

Example \(\PageIndex{3}\): Vector Times Vector Multiplication

Multiply if possible \(\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{rrrr} 1 & 2 & 1 & 0 \end{array} \right].\)

**Solution**

In this case we are multiplying a matrix of size \(3 \times 1\) by a matrix of size \(1 \times 4.\) The inside numbers match so the product is defined. Note that the product will be a matrix of size \(3 \times 4\). Using Definition [def:multiplicationoftwomatrices], we can compute this product as follows: \[\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{rrrr} 1 & 2 & 1 & 0 \end{array} \right] = \left[ \overset{\text{First column}}{\overbrace{\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{r} 1 \end{array} \right] }}, \overset{\text{Second column}}{\overbrace{\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{r} 2 \end{array} \right] }}, \overset{\text{Third column}}{\overbrace{\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{r} 1 \end{array} \right] }}, \overset{\text{Fourth column}}{\overbrace{\left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right] \left[ \begin{array}{r} 0 \end{array} \right]}} \right]\]

You can use Definition [def:multiplicationvectormatrix] to verify that this product is \[\left[ \begin{array}{cccc} 1 & 2 & 1 & 0 \\ 2 & 4 & 2 & 0 \\ 1 & 2 & 1 & 0 \end{array} \right]\]
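For a column times a row, each entry of the product is simply a product of two numbers, so the whole computation is one nested loop. A quick sketch (variable names are mine) checking the result above:

```python
# A 3x1 column times a 1x4 row gives a 3x4 matrix whose (i, j)
# entry is simply x_i * y_j.
col = [1, 2, 1]
row = [1, 2, 1, 0]
product = [[xi * yj for yj in row] for xi in col]
print(product)  # [[1, 2, 1, 0], [2, 4, 2, 0], [1, 2, 1, 0]]
```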

Example \(\PageIndex{4}\): A Multiplication Which is Not Defined

Find \(BA\) if possible. \[B = \left[ \begin{array}{ccc} 1 & 2 & 0 \\ 0 & 3 & 1 \\ -2 & 1 & 1 \end{array} \right],\ A = \left[ \begin{array}{ccc} 1 & 2 & 1 \\ 0 & 2 & 1 \end{array} \right]\]

**Solution**

First check if it is possible. This product is of the form \(\left( 3 \times 3 \right) \left( 2 \times 3 \right).\) The inside numbers do not match and so you can’t do this multiplication.

In this case, we say that the multiplication is not defined. Notice that these are the same matrices which we used in Example [exa:multiplicationoftwomatrices]. In this example, we tried to calculate \(BA\) instead of \(AB\). This demonstrates another property of matrix multiplication. While the product \(AB\) may be defined, we cannot assume that the product \(BA\) will be possible. Therefore, it is important to always check that the product is defined before carrying out any calculations.
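In code, this size check is naturally the first thing a multiplication routine does. A sketch (the function and message are my own) that rejects the non-conformable product \(BA\) from this example:

```python
def check_conformable(A, B):
    """Raise ValueError when the product AB is not defined."""
    if len(A[0]) != len(B):
        raise ValueError(
            f"cannot multiply {len(A)}x{len(A[0])} by {len(B)}x{len(B[0])}")

B = [[1, 2, 0], [0, 3, 1], [-2, 1, 1]]   # 3x3
A = [[1, 2, 1], [0, 2, 1]]               # 2x3
try:
    check_conformable(B, A)              # BA: (3x3)(2x3), inside sizes 3 and 2
except ValueError as e:
    print("BA is not defined:", e)
```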

Earlier, we defined the zero matrix \(0\) to be the matrix (of appropriate size) containing zeros in all entries. Consider the following example for multiplication by the zero matrix.

Example \(\PageIndex{5}\): Multiplication by the Zero Matrix

Compute the product \(A0\) for the matrix \[A= \left[ \begin{array}{rr} 1 & 2 \\ 3 & 4 \end{array} \right]\] and the \(2 \times 2\) zero matrix given by \[0= \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right]\]

**Solution**

In this product, we compute \[\left[ \begin{array}{rr} 1 & 2 \\ 3 & 4 \end{array} \right] \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right] = \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right]\]

Hence, \(A0=0\).

Notice that we could also multiply \(A\) by the \(2 \times 1\) zero vector given by \(\left[ \begin{array}{r} 0 \\ 0 \end{array} \right]\). The result would be the \(2 \times 1\) zero vector. Therefore, it is always the case that \(A0=0\), for an appropriately sized zero matrix or vector.
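A one-liner confirms this behaviour numerically (a sketch; `mat_mul` is my own helper, not from the text):

```python
def mat_mul(A, B):
    """Standard matrix product: rows of A against columns of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
Z = [[0, 0], [0, 0]]
print(mat_mul(A, Z))  # [[0, 0], [0, 0]] -- that is, A0 = 0
```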

## Matrix Multiplication: Product of Two Matrices

Matrix multiplication is the “messy type” because the process is more involved: you will need to follow a certain set of procedures in order to get it right. However, you will realize after going through the procedure and some examples that the steps required are manageable. Don’t worry, I will help you with this!

But first, we need to ensure that the two matrices are “allowed” to be multiplied together. Otherwise, the two matrices are “incompatible,” and we say that the product is undefined.

## Matrix Multiplication: 2×2 by 2×2

Suppose we have a **2×2** matrix A, which has 2 rows and 2 columns:

Suppose we also have a **2×2** matrix B, which has 2 rows and 2 columns:

To multiply matrix A by matrix B, we use the following formula:

This results in a 2×2 matrix.
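The formula referred to above was shown as a figure that is no longer present; reconstructed here, it is the standard row-by-column rule for the 2×2 case:

```latex
\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} \\
a_{21}b_{11}+a_{22}b_{21} & a_{21}b_{12}+a_{22}b_{22}
\end{bmatrix}
\]
```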


## Multiplication of Matrices

Matrix multiplication is an operation performed upon two (or sometimes more) matrices, with the result being another matrix.

This explanation will assume the student is familiar with the basics of matrices, such as matrix notation and vector dot products.

There are certain rules which must be followed in the multiplication process. First, when multiplying any two matrices #A_(rs)# and #B_(tu)#, where #r# and #t# are the number of rows in matrices #A# and #B# respectively and #s# and #u# the number of columns in matrices #A# and #B# respectively, if #s!=t# (that is, the number of columns in #A# does not equal the number of rows in #B#), the matrix multiplication cannot be carried out.

When multiplying two matrices such as this, the resultant matrix #AB# will possess #r# rows and #u# columns; in other words, the same number of rows as the #A# matrix and the same number of columns as the #B# matrix.

Each entry in the #AB# matrix will be calculated via the dot product of a row from the #A# matrix and a column from the #B# matrix. Renaming the #AB# matrix as #C# for ease of use, the value of any individual element #c_ij# can be found by taking the dot product of row #i# from #A# and column #j# from #B# .

There is currently some difficulty in utilizing Socratic's math code to construct a matrix, so different notation must be used temporarily. Consider the 2x3 matrix #A# , such that #a_11 = 1, a_12 = 0, a_13 = 3, a_21 = 0, a_22 = 5, a_23 = -1# , as well as the 3x2 matrix #B# such that #b_11 = 4, b_12 = 5, b_21 = 0, b_22 = -3, b_31 = -4, b_32 = 1# . Then the resultant matrix #AB = C# is a 2x2 matrix, with

#c_11 = (a_11*b_11) + (a_12*b_21) + (a_13*b_31)# ,

#c_12 = (a_11*b_12)+(a_12*b_22)+(a_13*b_32),#

#c_21 = (a_21*b_11) + (a_22*b_21)+(a_23*b_31), #

#c_22 = (a_21*b_12)+(a_22*b_22)+(a_23*b_32)#

Plugging in the respective values, we get #c_11 = -8, c_12 = 8, c_21 = 4, c_22 = -16#
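The four entry formulas above can be checked with a short script (my own sketch, using the same #a_(ij)# and #b_(ij)# values):

```python
# The 2x3 matrix A and 3x2 matrix B from the worked example above.
A = [[1, 0, 3], [0, 5, -1]]
B = [[4, 5], [0, -3], [-4, 1]]

# c_ij is the dot product of row i of A with column j of B.
C = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(C)  # [[-8, 8], [4, -16]]
```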


I think of it as a process that is easier to explain in person, but I'll do my best here.

Let's go through an example:

**Find the first row of the product**

Take the first row of #((1, 2),(3, 4))# , and make it vertical in front of #((3, 5),(7, 11))# . (We'll do the same for the second row in a minute.)

Now multiply times the first column and add to get the first number in the first row of the answer:

#((1 xx 3),(2 xx 7)) = ((3),(14))# now add to get #17#

Next multiply times the second column and add to get the second number in the first row of the answer:

#((1 xx 5),(2 xx 11)) = ((5),(22))# now add to get #27#

The first row of the product is: #((17,27))#

At this point we know the first row of the product.

**Find the second row of the product**

Find the second row of the product by the same process using the second row of #((1, 2),(3, 4))#

Place #((3),(4))# vertically in front of #((3, 5),(7, 11))# to get: #9+28 = 37# and #15+44 = 59#

The second row of the product is: #((37,59))#

**Write the answer**

The product is: #((17, 27),(37, 59))#
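The row-by-row process above can be double-checked with a short script (a sketch of my own, not part of the original answer):

```python
# Row-by-row product of ((1, 2),(3, 4)) and ((3, 5),(7, 11)).
A = [[1, 2], [3, 4]]
B = [[3, 5], [7, 11]]
C = [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]
print(C)  # [[17, 27], [37, 59]]
```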

## 2x2 Matrices Multiplication Formula

### Multiplicative Identity Matrix

The multiplicative identity matrix is a matrix that you can multiply by another matrix and the resultant matrix will equal the original matrix. The multiplicative identity matrix is so important it is usually called the identity matrix, and is usually denoted by a double-lined 1, or an **I**, no matter what size the identity matrix is.

The multiplicative identity matrix obeys the following equation: **IA = AI = A**

The multiplicative identity matrix for a 2x2 matrix is the matrix with 1s on the main diagonal and 0s everywhere else.

### 2x2 Matrices Multiplication Example

The following will show how to multiply two 2x2 matrices:

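The original worked figure is not recoverable, so here is a stand-in sketch with two assumed 2x2 matrices (the values are hypothetical, chosen only for illustration):

```python
# Hypothetical matrices (the original example image is missing).
C = [[1, 2], [3, 4]]
D = [[5, 6], [7, 8]]

# Each entry is a row of C dotted with a column of D.
CD = [[C[0][0]*D[0][0] + C[0][1]*D[1][0], C[0][0]*D[0][1] + C[0][1]*D[1][1]],
      [C[1][0]*D[0][0] + C[1][1]*D[1][0], C[1][0]*D[0][1] + C[1][1]*D[1][1]]]
print(CD)  # [[19, 22], [43, 50]]
```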

### Properties of Matrix Multiplication

1. Matrix multiplication is NOT commutative in general: **AB ≠ BA**

2. Matrix multiplication is associative. It doesn't matter how 3 or more matrices are grouped when being multiplied, as long as the order isn't changed: **A(BC) = (AB)C**

3. Matrix multiplication is distributive over addition, but the order of the multiplication must be maintained: **A(B+C) = AB + AC**, which in general differs from **(B+C)A = BA + CA**

4. If it's a Square Matrix, an identity element exists for matrix multiplication. It is called either E or I: **IA = AI = A**
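These four properties are easy to spot-check numerically. A sketch (sample matrices and helper names are my own):

```python
def mm(A, B):
    """Matrix product: rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B):
    """Entrywise matrix sum."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A, B, C = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, 2]]
I = [[1, 0], [0, 1]]

print(mm(A, B) != mm(B, A))                           # not commutative: True
print(mm(A, mm(B, C)) == mm(mm(A, B), C))             # associative: True
print(mm(A, madd(B, C)) == madd(mm(A, B), mm(A, C)))  # distributive: True
print(mm(I, A) == A == mm(A, I))                      # identity: True
```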

Matrices are widely used in geometry, physics and computer graphics applications: an array of quantities or expressions set out by rows and columns is treated as a single object and manipulated according to fixed rules. In many applications it is necessary to calculate 2x2 matrix products.

For any element $A\in H$, you can multiply the top row by any non-zero number to get an element of $G$. So $|G|=(p-1)|H|$.

Now to count $H$.

- If $a\neq0$, then $d=(1+bc)/a$, so there are how many matrices? How many choices for $a,b,c$?
- If $a=0$, then $c=-b^{-1}$. That gives how many other matrices? How many choices for $b,d$?

As an alternative to Michael's answer (which does give the right hints), you can count the tuples $(a,b,c,d)$ for which $ad-bc=0$:

- If $a=0$ (one case) and $d$ arbitrary ($p$ cases) and $b=0$ (one case), then $c$ is arbitrary ($p$ cases), so this yields $p\cdot p=p^2$ tuples.
- If $a=0$ (one case) and $d$ arbitrary ($p$ cases) and $b\neq 0$ ($p-1$ cases), then $c$ must be $=0$, so this case yields $p\cdot(p-1)=p^2-p$ tuples.
- If $a\neq 0$ ($p-1$ cases) and $d=0$ (one case) and $b=0$ (one case), then $c$ is arbitrary ($p$ cases), so this yields $(p-1)\cdot p=p^2-p$ tuples.
- If $a\neq 0$ ($p-1$ cases) and $d=0$ (one case) and $b\neq 0$ ($p-1$ cases), then $c$ must be $=0$, so this yields $(p-1)\cdot(p-1)=p^2-2p+1$ tuples.
- If $a\neq 0$ ($p-1$ cases) and $d\neq0$ ($p-1$ cases), then $b$ can be arbitrary $\neq 0$ ($p-1$ cases), and $c$ is determined by $a$, $d$ and $b$, so this yields $(p-1)\cdot(p-1)\cdot(p-1)=p^3-3p^2+3p-1$ tuples.

In total, these five cases add up to $p^2+p^2-p+p^2-p+p^2-2p+1+p^3-3p^2+3p-1=p^3+p^2-p$ "disallowed" tuples, so the number of elements of $G$ is $p^4-(p^3+p^2-p)=p^4-p^3-p^2+p=(p^2-1)\cdot(p^2-p)$ (the last term being the standard way of expressing the group's order).
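The count is easy to verify by brute force for a small prime (a sketch of my own, not part of the original answer):

```python
# Brute-force count of invertible 2x2 matrices over Z/pZ for a small
# prime p, checking the formula (p^2 - 1)(p^2 - p) derived above.
from itertools import product

def count_invertible(p):
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

p = 3
print(count_invertible(p))        # 48
print((p * p - 1) * (p * p - p))  # 48
```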

## Multiplication between two matrices

Multiplication between two matrices is feasible if the number of columns of the first matrix is the same as the number of rows of the second matrix. In general, let $A$ be an m*n matrix and $B$ be an n*p matrix. Then the product of the matrices A and B is the matrix C of order m*p. To get the $(i,j)$ element of the matrix C, we take the $i$-th row of A and the $j$-th column of B, multiply them element-wise, and take the sum of all these products: $c_{ij}=\sum_{k=1}^{n} a_{ik}b_{kj}$. The matrix C is the product of A and B.

## Reviews


Reviewed by Tim Brauch, Associate Professor, Manchester University on 6/15/19

Comprehensiveness rating: 4

The author makes clear in the foreword that this text is not a linear algebra text. It avoids much of the theory associated with linear algebra, although the author does touch on theorems as necessary. Avoiding theory but using the term "theorem" might require some discussion in class that is avoided in the textbook. Keeping in mind that this book focuses on computation rather than theory, it covers the main computational aspects of matrix algebra. The section on matrix multiplication has heavy emphasis on square matrices in the examples, though the homework uses non-square matrices. This might need to be supplemented with non-square examples for students to refer to when attempting the homework.

Content Accuracy rating: 4

Content-wise, the book seems to be error free. I did not check solutions to all the examples and problems, but the ones I did check were correct.

Relevance/Longevity rating: 5

Linear algebra and matrix algebra don't really go out of date. The examples are benign enough not to become outdated, though rather uninteresting. At points the author makes an effort to say that the ideas in this book are useful in real life, but the examples are artificial.

There is a quick rush through Reduced Row Echelon Form. After one section, the author assumes the reader is an expert on the topic. I have found this topic can take some students weeks, even months to master. The lack of detail in showing the steps in later sections saves space in the text, but can cause confusion for students. The section on matrix multiplication is a little clunky. The author is trying to avoid the theoretical aspects of a traditional linear algebra course. This leads to questionable notation when introducing matrix multiplication. In essence, the author defines the dot product without using that notation. In the same section, the author multiplies vectors by concatenation (xy means x times y). Because there are multiple ways to multiply vectors, the lack of a sign is ambiguous. However, if vectors are treated as matrices, for which there is a standard matrix multiplication, it would make sense. The author also claims that component-wise matrix multiplication is wrong. While it is not the standard way to multiply matrices, situations arise in which it is the required way. Vector operations are discussed in the chapter on matrix operations. This is the difficulty of the nature of vectors in linear algebra: they are matrices, and they are geometric objects. Discussion of one aspect almost requires discussion of the other, yet there is no "clean" way to do this. The emphasis is first on the geometry, without reinforcing the algebraic ideas from the matrix operations sections, and then it switches to focusing on the algebra and ignoring the geometry (until a later chapter).

Notation, vocabulary, and such seem consistent throughout. In general, theorems are presented without proof, although in a few sections attempts at proofs are given (perhaps even formal proofs without using that language).

Chapters seem to be rather modular, even if sequential. Most sections, though, end with "guiding questions" for the next section (for example, the section on matrix multiplication ends with questions that suggest the matrix inverse will exist, which is explained in the next section). This could cause problems if some sections are skipped, as students are primed for the next section. This does not pose a problem as long as full chapters are used. Each section is appropriate, but leads directly into the next.

Organization/Structure/Flow rating: 4

A common problem with texts in linear algebra, which this book faces, is whether to consider vectors or matrices, or both. This book switches back and forth. While there seems to be no good way to handle this, and this book takes the standard (traditional) approach, switching this way can be confusing for students.

The book is a PDF with bookmarks for chapters and sections. All images are clear and very well done.

Grammatical Errors rating: 3

There are a few minor typos, none that distract from the text (for example, "recieve" instead of "receive"). In other places, spacing is odd.

Cultural Relevance rating: 3

There is a comment in a footnote about girl and boy names, commenting that a boy has a girl name. It is not necessarily offensive, but it adds nothing to the text. Otherwise, the book is fine. The examples could be more multicultural, but they are generally culturally agnostic.

Overall, the book does what it sets out to do. It teaches matrix algebra with minimal theory and emphasis on computation. It is not completely devoid of theory, and enters the world of proof gently.

## What is a matrix?

I love the Matrix movies. But we are not going to talk about them here. Instead, I want to show you that matrices are not some sort of esoteric spell hiding dark secrets that only the geekiest of us can grasp.

Have you ever played one of these card games where you need a scoreboard? If so, I have a piece of good news: you already know what a **matrix** is and you can remember how it works at any time by reverse engineering a scoreboard.

As you can see in the image above, there is one column for each player and a row for each turn. The matrix is filled with values representing a specific player's score at a precise turn. For instance, on the third turn, Daniel had 4 points.

- **Dimension**: a fixed number of columns and a fixed number of rows.
- **Values**: the values held by the matrix must be consistent. If some of the entries hold information about oranges and others about nicotine, your matrix will be of no use.
- **Operations**: a tool-set of mathematical operations such as addition and multiplication.

As with everything in **Mathematics**, **a matrix is an idea translated into a definition and represented with a notation**.

The latter is straightforward: we just wrap box brackets or parentheses around numbers.

If you are not used to **Mathematics notation**, the image above might seem daunting at first look. Let’s break it down together.

The characters with green and red underscores represent the entries of the matrix:

That **notation** comes super handy when we are manipulating huge matrices or when we want to stay general and express something for each matrix with **m** rows and **n** columns. In this case, we would say that the matrix has dimensions **(m, n)**.

The following **matrix** represents the scoreboard we have seen in the previous section:

If another player were to join the game, we would have to add a column for that player to the scoreboard. The same would happen with the **matrix**: we would need to have 5 columns to represent the whole game.

## Matrix Multiplication for the Identity Matrix

Now what about the matrix multiplication property for identity matrices? Well, the property states the following:

(Formula 6: Matrix Multiplication for the Identity Matrix) The property states that I&#8345; X = X I&#8345; = X, where I&#8345; is the n×n identity matrix.

(Equation 12: Matrix multiplication for identity matrix example, pt. 1)

So for the equation X I&#8322; = X, we have:

(Equation 12: Matrix multiplication for identity matrix example, pt. 2)

So the equation does hold. Similarly, for the equation I&#8322; X = X, we have:

(Equation 12: Matrix multiplication for identity matrix example, pt. 3)

Again, the equation holds. So we are done with the question, and both equations hold. This concludes all the properties of matrix multiplication.
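The identity property can be checked with any sample matrix (the values below are assumed for illustration; the original worked equations were images):

```python
# Check X I_2 = I_2 X = X for a sample 2x2 matrix X.
def mm(A, B):
    """Matrix product: rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

I2 = [[1, 0], [0, 1]]
X = [[2, -1], [5, 3]]
print(mm(X, I2) == X)  # True
print(mm(I2, X) == X)  # True
```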