
12.1: From linear systems to matrix equations


We begin this section by reviewing the definition of and notation for matrices. This point of view has a long history of exploration, and numerous computational devices — including several computer programming languages — have been developed and optimized specifically for analyzing matrix equations.

Let \(m, n \in \mathbb{Z}_{+}\) be positive integers, and, as usual, let \(\mathbb{F}\) denote either \(\mathbb{R}\) or \(\mathbb{C}\). Then we begin by defining an \(m \times n\) matrix \(A\) to be a rectangular array of numbers

\[
A = (a_{i j})_{i,j=1}^{m,n} = (A^{(i, j)})_{i,j=1}^{m,n} =
\left.\underbrace{
\begin{bmatrix}
a_{1 1} & \cdots & a_{1 n}\\
\vdots & \ddots & \vdots\\
a_{m 1} & \cdots & a_{m n}
\end{bmatrix}
}_{\textstyle n \mbox{ numbers}}\right\} m \mbox{ numbers}
\]

where each element \(a_{i j} \in \mathbb{F}\) in the array is called an entry of \(A\) (specifically, \(a_{i j}\) is called the \((i, j)\) entry). We say that \(i\) indexes the rows of \(A\) as it ranges over the set \(\{1, \ldots, m\}\) and that \(j\) indexes the columns of \(A\) as it ranges over the set \(\{1, \ldots, n\}\). We also say that the matrix \(A\) has size \(m \times n\) and note that it is a (finite) sequence of doubly-subscripted numbers for which the two subscripts in no way depend upon each other.

Definition A.1.1. Given positive integers \(m, n \in \mathbb{Z}_{+}\), we use \(\mathbb{F}^{m \times n}\) to denote the set of all \(m \times n\) matrices having entries over \(\mathbb{F}\).

Example A.1.2. The matrix \(A = \begin{bmatrix} 1 & 0 & 2 \\ -1 & 3 & i \end{bmatrix} \in \mathbb{C}^{2 \times 3}\), but \(A \notin \mathbb{R}^{2 \times 3}\) since the \((2,3)\) entry of \(A\) is not in \(\mathbb{R}\).
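Nothing in this appendix depends on software, but the indexing conventions map directly onto array libraries. A minimal numpy sketch of Example A.1.2 (note that numpy indexes from 0, so the (2,3) entry lives at [1, 2]):

```python
import numpy as np

# The matrix from Example A.1.2; dtype=complex since the (2,3) entry is i.
A = np.array([[1, 0, 2],
              [-1, 3, 1j]], dtype=complex)

print(A.shape)  # (2, 3): an m x n = 2 x 3 matrix
print(A[1, 2])  # the (2,3) entry, 1j; numpy indexes rows and columns from 0
```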

Given the ubiquity of matrices in both abstract and applied mathematics, a rich vocabulary has been developed for describing various properties and features of matrices. In addition, there is also a rich set of equivalent notations. For the purposes of these notes, we will use the above notation unless the size of the matrix is understood from context or is unimportant. In this case, we will drop much of this notation and denote a matrix simply as

\[ A = (a_{i j}) \mbox{ or } A = (a_{i j})_{m \times n}. \]

To get a sense of the essential vocabulary, suppose that we have an \(m \times n\) matrix \(A = (a_{i j})\) with \(m = n\). Then we call \(A\) a square matrix. The elements \(a_{1 1}, a_{2 2}, \ldots, a_{n n}\) in a square matrix form the main diagonal of \(A\), and the elements \(a_{1 n}, a_{2, n-1}, \ldots, a_{n 1}\) form what is sometimes called the skew main diagonal of \(A\). Entries not on the main diagonal are also often called off-diagonal entries, and a matrix whose off-diagonal entries are all zero is called a diagonal matrix. It is common to call \(a_{1 2}, a_{2 3}, \ldots, a_{n-1, n}\) the superdiagonal of \(A\) and \(a_{2 1}, a_{3 2}, \ldots, a_{n, n-1}\) the subdiagonal of \(A\). The motivation for this terminology should be clear if you create a sample square matrix and trace the entries within these particular subsequences of the matrix.

Square matrices are important because they are fundamental to applications of Linear Algebra. In particular, virtually every use of Linear Algebra either involves square matrices directly or employs them in some indirect manner. In addition, virtually every usage also involves the notion of vector, where here we mean either an \(m \times 1\) matrix (a.k.a. a column vector) or a \(1 \times n\) matrix (a.k.a. a row vector).

Example A.1.3. Suppose that \(A = (a_{i j})\), \(B = (b_{i j})\), \(C = (c_{i j})\), \(D = (d_{i j})\), and \(E = (e_{i j})\) are the following matrices over \(\mathbb{F}\):

\[
A = \begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix}, \quad
B = \begin{bmatrix} 4 & -1 \\ 0 & 2 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 4 & 2 \end{bmatrix}, \quad
D = \begin{bmatrix} 1 & 5 & 2 \\ -1 & 0 & 1 \\ 3 & 2 & 4 \end{bmatrix}, \quad
E = \begin{bmatrix} 6 & 1 & 3 \\ -1 & 1 & 2 \\ 4 & 1 & 3 \end{bmatrix}.
\]

Then we say that \(A\) is a \(3 \times 1\) matrix (a.k.a. a column vector), \(B\) is a \(2 \times 2\) square matrix, \(C\) is a \(1 \times 3\) matrix (a.k.a. a row vector), and both \(D\) and \(E\) are square \(3 \times 3\) matrices. Moreover, only \(B\) is an upper-triangular matrix (as defined below), and none of the matrices in this example are diagonal matrices.

We can discuss individual entries in each matrix. E.g.,

  1. the second row of \(D\) is \(d_{2 1} = -1\), \(d_{2 2} = 0\), and \(d_{2 3} = 1\).
  2. the main diagonal of \(D\) is the sequence \(d_{1 1} = 1, d_{2 2} = 0, d_{3 3} = 4\).
  3. the skew main diagonal of \(D\) is the sequence \(d_{1 3} = 2, d_{2 2} = 0, d_{3 1} = 3\).
  4. the off-diagonal entries of \(D\) are (by row) \(d_{1 2}\), \(d_{1 3}\), \(d_{2 1}\), \(d_{2 3}\), \(d_{3 1}\), and \(d_{3 2}\).
  5. the second column of \(E\) is \(e_{1 2} = e_{2 2} = e_{3 2} = 1\).
  6. the superdiagonal of \(E\) is the sequence \(e_{1 2} = 1, e_{2 3} = 2\).
  7. the subdiagonal of \(E\) is the sequence \(e_{2 1} = -1, e_{3 2} = 1\).
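These subsequences are easy to trace by machine as well; a short numpy sketch reproducing several of the items above (np.diag with an offset extracts super- and subdiagonals, and np.fliplr exposes the skew main diagonal):

```python
import numpy as np

D = np.array([[1, 5, 2], [-1, 0, 1], [3, 2, 4]])
E = np.array([[6, 1, 3], [-1, 1, 2], [4, 1, 3]])

print(D[1, :])                # second row of D: [-1  0  1]
print(np.diag(D))             # main diagonal of D: [1 0 4]
print(np.diag(np.fliplr(D)))  # skew main diagonal of D: [2 0 3]
print(E[:, 1])                # second column of E: [1 1 1]
print(np.diag(E, k=1))        # superdiagonal of E: [1 2]
print(np.diag(E, k=-1))       # subdiagonal of E: [-1 1]
```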

A square matrix \(A = (a_{i j}) \in \mathbb{F}^{n \times n}\) is called upper triangular (resp. lower triangular) if \(a_{i j} = 0\) for each pair of integers \(i, j \in \{1, \ldots, n\}\) such that \(i > j\) (resp. \(i < j\)). In other words, \(A\) is triangular if it has the form

\[
\begin{bmatrix}
a_{1 1} & a_{1 2} & a_{1 3} & \cdots & a_{1 n}\\
0 & a_{2 2} & a_{2 3} & \cdots & a_{2 n}\\
0 & 0 & a_{3 3} & \cdots & a_{3 n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & a_{n n}
\end{bmatrix}
\quad\text{or}\quad
\begin{bmatrix}
a_{1 1} & 0 & 0 & \cdots & 0\\
a_{2 1} & a_{2 2} & 0 & \cdots & 0\\
a_{3 1} & a_{3 2} & a_{3 3} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{n 1} & a_{n 2} & a_{n 3} & \cdots & a_{n n}
\end{bmatrix}.
\]

Note that a diagonal matrix is simultaneously both an upper triangular matrix and a lower triangular matrix.
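The definitions above translate into simple predicates. A minimal numpy sketch, using the standard np.triu/np.tril triangle extractors:

```python
import numpy as np

def is_upper_triangular(A):
    # A equals its own upper triangle iff every entry below the diagonal is 0.
    return np.array_equal(A, np.triu(A))

def is_lower_triangular(A):
    return np.array_equal(A, np.tril(A))

def is_diagonal(A):
    # Diagonal means upper and lower triangular at the same time.
    return is_upper_triangular(A) and is_lower_triangular(A)

B = np.array([[4, -1], [0, 2]])
print(is_upper_triangular(B))  # True, as noted in Example A.1.3
print(is_diagonal(B))          # False: the (1,2) entry is -1, not 0
```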

Two particularly important examples of diagonal matrices are defined as follows: Given any positive integer \(n \in \mathbb{Z}_{+}\), we can construct the identity matrix \(I_{n}\) and the zero matrix \(0_{n \times n}\) by setting

\[
I_{n} =
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & 0\\
0 & 0 & 0 & \cdots & 0 & 1
\end{bmatrix}
\mbox{ and }
0_{n \times n} =
\begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0
\end{bmatrix},
\]

where each of these matrices is understood to be a square matrix of size \(n \times n\). The zero matrix \(0_{m \times n}\) is analogously defined for any \(m, n \in \mathbb{Z}_{+}\) and has size \(m \times n\). I.e.,

\[
0_{m \times n} =
\left.\underbrace{
\begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 0
\end{bmatrix}
}_{\textstyle n \mbox{ columns}}\right\} m \mbox{ rows}
\]
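Both families of matrices are built into most matrix libraries; for instance, in numpy:

```python
import numpy as np

n, m = 4, 3
I_n = np.eye(n)          # the n x n identity matrix I_n
Z_nn = np.zeros((n, n))  # the square zero matrix 0_{n x n}
Z_mn = np.zeros((m, n))  # the rectangular zero matrix 0_{m x n}

print(I_n @ np.arange(n))  # I_n x = x for any x; here [0. 1. 2. 3.]
```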

Let \(m, n \in \mathbb{Z}_{+}\) be positive integers. Then a system of \(m\) linear equations in \(n\) unknowns \(x_{1}, \ldots, x_{n}\) looks like

\begin{equation}
\label{eqn:GenericLinearSystem}
\left.
\begin{aligned}
a_{1 1}x_{1} + a_{1 2}x_{2} + a_{1 3}x_{3} + \cdots + a_{1 n}x_{n} & = b_{1}\\
a_{2 1}x_{1} + a_{2 2}x_{2} + a_{2 3}x_{3} + \cdots + a_{2 n}x_{n} & = b_{2}\\
a_{3 1}x_{1} + a_{3 2}x_{2} + a_{3 3}x_{3} + \cdots + a_{3 n}x_{n} & = b_{3}\\
& \,\vdots\\
a_{m 1}x_{1} + a_{m 2}x_{2} + a_{m 3}x_{3} + \cdots + a_{m n}x_{n} & = b_{m}
\end{aligned}
\right\}, \tag{A.1.1}
\end{equation}

where each \(a_{i j}, b_{i} \in \mathbb{F}\) is a scalar for \(i = 1, 2, \ldots, m\) and \(j = 1, 2, \ldots, n\). In other words, each scalar \(b_{1}, \ldots, b_{m} \in \mathbb{F}\) is being written as a linear combination of the unknowns \(x_{1}, \ldots, x_{n}\) using coefficients from the field \(\mathbb{F}\). To solve System (A.1.1) means to describe the set of all possible values for \(x_{1}, \ldots, x_{n}\) (when thought of as scalars in \(\mathbb{F}\)) such that each of the \(m\) equations in System (A.1.1) is satisfied simultaneously.

Rather than dealing directly with a given linear system, it is often convenient to first encode the system using less cumbersome notation. Specifically, System (A.1.1) can be summarized using exactly three matrices. First, we collect the coefficients from each equation into an \(m \times n\) matrix \(A = (a_{i j}) \in \mathbb{F}^{m \times n}\), which we call the coefficient matrix for the linear system. Similarly, we assemble the unknowns \(x_{1}, x_{2}, \ldots, x_{n}\) into an \(n \times 1\) column vector \(x = (x_{i}) \in \mathbb{F}^{n}\), and the right-hand sides \(b_{1}, b_{2}, \ldots, b_{m}\) of the equations are used to form an \(m \times 1\) column vector \(b = (b_{i}) \in \mathbb{F}^{m}\). In other words,

\[
A =
\begin{bmatrix}
a_{1 1} & a_{1 2} & \cdots & a_{1 n}\\
a_{2 1} & a_{2 2} & \cdots & a_{2 n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m 1} & a_{m 2} & \cdots & a_{m n}
\end{bmatrix},
\quad
x =
\begin{bmatrix}
x_{1}\\ x_{2}\\ \vdots\\ x_{n}
\end{bmatrix},
\quad\text{and}\quad
b =
\begin{bmatrix}
b_{1}\\ b_{2}\\ \vdots\\ b_{m}
\end{bmatrix}.
\]

Then the left-hand side of the \(i^{\text{th}}\) equation in System (A.1.1) can be recovered by taking the dot product (a.k.a. Euclidean inner product) of \(x\) with the \(i^{\text{th}}\) row in \(A\):

\[
\begin{bmatrix}
a_{i 1} & a_{i 2} & \cdots & a_{i n}
\end{bmatrix}
\cdot x
=
\sum_{j = 1}^{n} a_{i j}x_{j}
=
a_{i 1}x_{1} + a_{i 2}x_{2} + a_{i 3}x_{3} + \cdots + a_{i n}x_{n}.
\]

In general, we can extend the dot product between two vectors in order to form the product of any two matrices (as in Section A.2.2). For the purposes of this section, though, it suffices to simply define the product of the matrix \(A \in \mathbb{F}^{m \times n}\) and the vector \(x \in \mathbb{F}^{n}\) to be

\begin{equation}
\label{eqn:MatrixVectorProduct}
Ax =
\begin{bmatrix}
a_{1 1} & a_{1 2} & \cdots & a_{1 n}\\
a_{2 1} & a_{2 2} & \cdots & a_{2 n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m 1} & a_{m 2} & \cdots & a_{m n}
\end{bmatrix}
\begin{bmatrix}
x_{1}\\ x_{2}\\ \vdots\\ x_{n}
\end{bmatrix}
=
\begin{bmatrix}
a_{1 1}x_{1} + a_{1 2}x_{2} + \cdots + a_{1 n}x_{n}\\
a_{2 1}x_{1} + a_{2 2}x_{2} + \cdots + a_{2 n}x_{n}\\
\vdots\\
a_{m 1}x_{1} + a_{m 2}x_{2} + \cdots + a_{m n}x_{n}
\end{bmatrix}. \tag{A.1.2}
\end{equation}

Then, since each entry in the resulting \(m \times 1\) column vector \(Ax \in \mathbb{F}^{m}\) corresponds exactly to the left-hand side of each equation in System (A.1.1), we have effectively encoded System (A.1.1) as the single matrix equation

\begin{equation}
Ax =
\begin{bmatrix}
a_{1 1}x_{1} + a_{1 2}x_{2} + \cdots + a_{1 n}x_{n}\\
a_{2 1}x_{1} + a_{2 2}x_{2} + \cdots + a_{2 n}x_{n}\\
\vdots\\
a_{m 1}x_{1} + a_{m 2}x_{2} + \cdots + a_{m n}x_{n}
\end{bmatrix}
=
\begin{bmatrix}
b_{1}\\ \vdots\\ b_{m}
\end{bmatrix}
= b. \tag{A.1.3}
\end{equation}
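Equation (A.1.2) is exactly a double loop over rows and entries. A minimal sketch in Python, with numpy's built-in product as a cross-check:

```python
import numpy as np

def matvec(A, x):
    """Compute Ax exactly as in Equation (A.1.2): the i-th entry of the
    result is the sum over j of a_{ij} * x_j."""
    m, n = A.shape
    b = np.zeros(m)
    for i in range(m):
        for j in range(n):
            b[i] += A[i, j] * x[j]
    return b

A = np.array([[4.0, -1.0], [0.0, 2.0]])
x = np.array([1.0, 3.0])
print(matvec(A, x))  # [1. 6.]
print(A @ x)         # numpy's built-in product agrees: [1. 6.]
```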

Example A.1.4. The linear system
\[
\left.
\begin{array}{rrrrrrrrrrrr}
x_{1} & + & 6 x_{2} & & & & + & 4 x_{5} & - & 2 x_{6} & = & 14\\
& & & & x_{3} & & + & 3 x_{5} & + & x_{6} & = & -3\\
& & & & & x_{4} & + & 5 x_{5} & + & 2 x_{6} & = & 11
\end{array}
\right\}
\]

has three equations and involves the six variables \(x_{1}, x_{2}, \ldots, x_{6}\). One can check that possible solutions to this system include

\[
\begin{bmatrix}
x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\\ x_{5}\\ x_{6}
\end{bmatrix}
=
\begin{bmatrix}
14\\ 0\\ -3\\ 11\\ 0\\ 0
\end{bmatrix}
\text{ and }
\begin{bmatrix}
x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\\ x_{5}\\ x_{6}
\end{bmatrix}
=
\begin{bmatrix}
6\\ 1\\ -12\\ -5\\ 2\\ 3
\end{bmatrix}.
\]

Note that, in describing these solutions, we have used the six unknowns \(x_{1}, x_{2}, \ldots, x_{6}\) to form the \(6 \times 1\) column vector \(x = (x_{i}) \in \mathbb{F}^{6}\). We can similarly form the coefficient matrix \(A \in \mathbb{F}^{3 \times 6}\) and the \(3 \times 1\) column vector \(b \in \mathbb{F}^{3}\), where

\[
A =
\begin{bmatrix}
1 & 6 & 0 & 0 & 4 & -2\\
0 & 0 & 1 & 0 & 3 & 1\\
0 & 0 & 0 & 1 & 5 & 2
\end{bmatrix}
\text{ and }
b =
\begin{bmatrix}
b_{1}\\ b_{2}\\ b_{3}
\end{bmatrix}
=
\begin{bmatrix}
14\\ -3\\ 11
\end{bmatrix}.
\]

You should check that, given these matrices, each of the solutions given above satisfies Equation (A.1.3).
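That check is quick to mechanize; a small numpy sketch using the matrices above:

```python
import numpy as np

A = np.array([[1, 6, 0, 0, 4, -2],
              [0, 0, 1, 0, 3, 1],
              [0, 0, 0, 1, 5, 2]])
b = np.array([14, -3, 11])

# Both candidate solutions from Example A.1.4 satisfy Ax = b.
for x in (np.array([14, 0, -3, 11, 0, 0]),
          np.array([6, 1, -12, -5, 2, 3])):
    print(np.array_equal(A @ x, b))  # True, twice
```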

We close this section by mentioning another common convention for encoding linear systems. Specifically, rather than attempt to solve System (A.1.1) directly, one can instead look at the equivalent problem of describing all coefficients \(x_{1}, \ldots, x_{n} \in \mathbb{F}\) for which the following vector equation is satisfied:

\begin{equation}
\label{eqn:GenericVectorSystem}
x_{1}
\begin{bmatrix}
a_{1 1}\\ a_{2 1}\\ a_{3 1}\\ \vdots\\ a_{m 1}
\end{bmatrix}
+ x_{2}
\begin{bmatrix}
a_{1 2}\\ a_{2 2}\\ a_{3 2}\\ \vdots\\ a_{m 2}
\end{bmatrix}
+ x_{3}
\begin{bmatrix}
a_{1 3}\\ a_{2 3}\\ a_{3 3}\\ \vdots\\ a_{m 3}
\end{bmatrix}
+ \cdots + x_{n}
\begin{bmatrix}
a_{1 n}\\ a_{2 n}\\ a_{3 n}\\ \vdots\\ a_{m n}
\end{bmatrix}
=
\begin{bmatrix}
b_{1}\\ b_{2}\\ b_{3}\\ \vdots\\ b_{m}
\end{bmatrix}. \tag{A.1.4}
\end{equation}

This approach emphasizes analysis of the so-called column vectors \(A^{(\cdot, j)}\) (\(j = 1, \ldots, n\)) of the coefficient matrix \(A\) in the matrix equation \(Ax = b\). (See Section A.2.1 for more details about Equation (A.1.4).) Conversely, it is also common to directly encounter Equation (A.1.4) when studying certain questions about vectors in \(\mathbb{F}^{n}\).
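Equation (A.1.4) reads \(Ax\) as a linear combination of the columns of \(A\) with coefficients \(x_{1}, \ldots, x_{n}\). A short sketch showing that this column-by-column reading agrees with the matrix-vector product:

```python
import numpy as np

A = np.array([[1, 6, 0, 0, 4, -2],
              [0, 0, 1, 0, 3, 1],
              [0, 0, 0, 1, 5, 2]])
x = np.array([14, 0, -3, 11, 0, 0])

# Sum of x_j times the j-th column of A, as in Equation (A.1.4).
b = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(b)      # [14 -3 11]
print(A @ x)  # the matrix-vector product gives the same vector
```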

It is important to note that System (A.1.1) differs from Equations (A.1.3) and (A.1.4) only in terms of notation. The common aspect of these different representations is that the left-hand side of each equation in System (A.1.1) is a linear sum. Because of this, it is also common to rewrite System (A.1.1) using more compact notation such as

\[
\sum_{k = 1}^{n} a_{1 k}x_{k} = b_{1},\quad
\sum_{k = 1}^{n} a_{2 k}x_{k} = b_{2},\quad
\sum_{k = 1}^{n} a_{3 k}x_{k} = b_{3},\quad
\ldots,\quad
\sum_{k = 1}^{n} a_{m k}x_{k} = b_{m}.
\]


SIAM Journal on Matrix Analysis and Applications

In this paper we develop a new superfast solver for Toeplitz systems of linear equations. A common approach to solving Toeplitz systems is the displacement equation method: using displacement structures, Toeplitz matrices can be transformed into Cauchy-like matrices via the FFT or other trigonometric transformations. These Cauchy-like matrices have a special property: their off-diagonal blocks have small numerical ranks. This low-rank property plays a central role in our superfast Toeplitz solver. It enables us to quickly approximate the Cauchy-like matrices by structured matrices called sequentially semiseparable (SSS) matrices. The major work in constructing these SSS forms can be done as precomputation (independent of the Toeplitz matrix entries), and the SSS representations are compact because of the low-rank property. The SSS Cauchy-like systems can be solved in linear time with linear storage. Excluding precomputation, the main operations are the FFT and the SSS system solve, both of which are very efficient. Our new Toeplitz solver is stable in practice. Numerical examples are presented to illustrate its efficiency and practical stability.
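The SSS machinery itself is well beyond a snippet, but the Toeplitz structure the paper exploits is easy to see: the whole matrix is determined by its first row and first column. For contrast, scipy ships a classical Levinson-type Toeplitz solver (O(n^2), not the superfast solver described above); a sketch:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# A Toeplitz matrix is constant along each diagonal, so it is fully
# specified by its first column c and first row r.
c = np.array([4.0, 1.0, 0.5, 0.25])  # first column
r = np.array([4.0, 2.0, 1.0, 0.5])   # first row
b = np.ones(4)

x = solve_toeplitz((c, r), b)              # Levinson recursion, O(n^2)
print(np.allclose(toeplitz(c, r) @ x, b))  # True
```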






Augmented Matrices of Consistent Linear Systems

So my textbook for Linear Equations has problems referencing augmented matrices, but I can't find where it talks about them. I did find a few examples, but I want to know what an augmented matrix is and why the following are or are not. Wikipedia was not helpful. (Also, this is my first day of class and we only went over the syllabus, but I want to get some stuff under my belt so I can ask my TAs questions in Wednesday's discussion.)

$\begin{bmatrix} 1 & h & 2 \\ -5 & 20 & -12 \end{bmatrix}$ The above matrix is the augmented matrix of a consistent linear system if $h \ne 4$.

$\begin{bmatrix} 1 & 4 & -2 \\ 2 & h & -4 \end{bmatrix}$ The above matrix is the augmented matrix of a consistent linear system.

$\begin{bmatrix} -8 & 24 & h \\ 2 & -6 & 7 \end{bmatrix}$ The above matrix is the augmented matrix of a consistent linear system if $h \ne -28$.

Then there are the problems from the book that I don't have the answers to, and this one builds on augmented matrices:

$\begin{bmatrix} 1 & -3 & 7 & h \\ & 2 & -8 & g \\ -2 & 4 & -6 & k \end{bmatrix}$ Like what? How do I even start? Can I safely assume that $x - 3y + 7z = g$?
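For checking problems like these, the useful criterion (the Rouché–Capelli theorem) is that a system is consistent exactly when its coefficient matrix and its augmented matrix have the same rank. A minimal numpy sketch, using a made-up matrix rather than any of the graded problems above:

```python
import numpy as np

def is_consistent(aug):
    """aug is the augmented matrix [A | b]; the system is consistent
    iff rank(A) == rank([A | b])."""
    A = aug[:, :-1]
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

# Hypothetical example: the second row's coefficients are a multiple of
# the first row's, but the right-hand sides disagree, so no solution.
aug = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 7.0]])
print(is_consistent(aug))  # False
```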


Engaging students: Solving linear systems of equations with matrices

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place.

I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course).

This student submission comes from my former student Andrew Sansom. His topic, from Algebra II: solving linear systems of equations with matrices.

A1. What interesting (i.e., uncontrived) word problems using this topic can your students do now? (You may find resources such as http://www.spacemath.nasa.gov to be very helpful in this regard; feel free to suggest others.)

The Square in Downtown Denton is a popular place to visit and hang out. A new business owner needs to decide on which road to put an advertisement so that the most people will see it as they pass by. He does not have enough resources to count traffic on every block and street, but he knows that he can use algebra to solve for the ones he missed. On the map, a blue box on each counted street records the number of people who walked along it during one hour. Use a system of linear equations to determine how much traffic there is on every street/block of the map.

HINT: Remember that at every intersection, the same number of people have to walk in and walk out each hour, so write an equation for each intersection setting the sum of the people walking in equal to the sum of the people walking out.
HINT: Remember that the same people enter and exit the entire map every hour, so write an equation setting the sum over each street going into the map equal to the sum over each street going out of the map.

1. Build each equation, as suggested by the hints.

2. Rewrite the system of simultaneous linear equations in standard form.

3. Rewrite the system as an augmented matrix

4. Reduce the system to Reduced Row Echelon Form (using a calculator)


5. Use this reduced matrix to find solutions for each variable

This gives us a completed map.


Clearly, the business owner should advertise on Hickory Street between Elm and Locust St (Possibly in front of Beth Marie’s).
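The Denton map itself isn't reproduced here, so as an illustration of steps 1 through 5, here is a sketch on a hypothetical square of four intersections, where 100 people per hour enter at the northwest corner and leave at the southeast corner (sympy's rref plays the role of the calculator in step 4):

```python
from sympy import Matrix

# Unknown hourly counts: x1 (NW->NE), x2 (NE->SE), x3 (NW->SW), x4 (SW->SE).
# One "people in = people out" equation per intersection, in standard form:
#   NW: x1 + x3 = 100    NE: x1 - x2 = 0
#   SW: x3 - x4 = 0      SE: x2 + x4 = 100
aug = Matrix([[1, 0, 1, 0, 100],
              [1, -1, 0, 0, 0],
              [0, 0, 1, -1, 0],
              [0, 1, 0, 1, 100]])

rref, pivots = aug.rref()
print(rref)    # x1 = x2 and x3 = x4 = 100 - x1: one free variable,
print(pivots)  # as expected for a loop of streets
```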

B1. How can this topic be used in your students’ future courses in mathematics or science?

Systems of simultaneous linear equations appear frequently in most problems that involve modelling more than one thing at a time. In high school, the ability to use matrices to solve such systems (especially large ones) simplifies many problems that appear on AP or IB Physics exams. Circuit analysis (including Kirchhoff's and Ohm's laws) frequently amounts to setting up large systems of simultaneous equations similar to the network traffic problem above. Similarly, there are kinematics problems, with multiple forces/torques acting on an object, that naturally lend themselves to large systems of equations.

In chemistry, mixture problems can be solved using systems of equations. If more than one substance is being mixed, then the system can become too large to solve efficiently except by Gaussian elimination and matrix operations. (DeFreese, n.d.)

At the university level, learning to solve systems using matrices prepares the student for Linear Algebra, which is useful in almost every math class taken thereafter.

D4. What are the contributions of various cultures to this topic?

Simultaneous linear equations were featured in ancient China in a text called the Jiuzhang Suanshu, or Nine Chapters on the Mathematical Art, to solve problems involving weights and quantities of grains. The method prescribed involves listing the coefficients of the terms in an array and is exceptionally similar to Gaussian elimination.

Later, in early modern Europe, the methods of elimination were known but not taught in textbooks until Newton published such a text in English in 1720, though he did not use matrices in that text. Gauss provided an even more systematic approach to solving simultaneous linear equations involving least squares by 1794, which was used in 1801 to find Ceres when it was sighted and then lost. During Gauss's lifetime and in the century that followed, Gauss's method of elimination became a standard way of solving large systems for human computers. Furthermore, by adopting brackets, “Gauss relieved computers of the tedium of having to rewrite equations, and in so doing, he enabled them to consider how to best organize their work.” (Grcar J. F., 2011)


Matrix Representation of System of Linear Equations

A system of linear equations is as follows.

\[
\begin{aligned}
a_{11}x_{1} + a_{12}x_{2} + \ldots + a_{1n}x_{n} &= b_{1}\\
a_{21}x_{1} + a_{22}x_{2} + \ldots + a_{2n}x_{n} &= b_{2}\\
&\;\;\vdots\\
a_{m1}x_{1} + a_{m2}x_{2} + \ldots + a_{mn}x_{n} &= b_{m}
\end{aligned}
\]

This system can be represented as the matrix equation \(A \cdot \vec{x} = \vec{b}\), where \(A\) is the coefficient matrix

\[
A = \begin{pmatrix}
a_{11} & \ldots & a_{1n}\\
\vdots & \ddots & \vdots\\
a_{m1} & \cdots & a_{mn}
\end{pmatrix}
\]

and \(\vec{b}\) is the vector containing the right-hand sides of the equations.

If the solution is not unique, linsolve issues a warning, chooses one solution, and returns it.

If the system does not have a solution, linsolve issues a warning and returns X with all elements set to Inf.

Calling linsolve for numeric matrices that are not symbolic objects invokes the MATLAB® linsolve function. This function accepts real arguments only. If your system of equations uses complex numbers, use sym to convert at least one matrix to a symbolic matrix, and then call linsolve.
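For readers working outside MATLAB, sympy's linsolve plays a similar role; it accepts an augmented matrix and returns the empty set (rather than Inf entries) when no solution exists. A sketch:

```python
from sympy import Matrix, linsolve, symbols

x1, x2 = symbols('x1 x2')

# Augmented matrix [A | b] for: x1 + x2 = 3, 2*x1 - x2 = 0
print(linsolve(Matrix([[1, 1, 3], [2, -1, 0]]), x1, x2))  # {(1, 2)}

# An inconsistent system yields the empty set instead of Inf entries.
print(linsolve(Matrix([[1, 1, 1], [1, 1, 2]]), x1, x2))   # EmptySet
```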


Subsection 2.3.1 The Matrix Equation

In this section we introduce a very concise way of writing a system of linear equations: \(Ax = b\). Here \(A\) is a matrix and \(x\) and \(b\) are vectors (generally of different sizes), so first we must explain how to multiply a matrix by a vector.

Remark

In this book, we do not reserve the letters \(m\) and \(n\) for the numbers of rows and columns of a matrix.

Definition

Let \(A\) be a matrix with columns \(v_{1}, v_{2}, \ldots, v_{n}\), and let \(x\) be a vector with entries \(x_{1}, x_{2}, \ldots, x_{n}\). Then the product \(Ax\) is the linear combination

\[ Ax = x_{1}v_{1} + x_{2}v_{2} + \cdots + x_{n}v_{n}. \]

In order for \(Ax\) to make sense, the number of entries of \(x\) has to be the same as the number of columns of \(A\): we are using the entries of \(x\) as the coefficients of the columns of \(A\) in a linear combination. The resulting vector has the same number of entries as the number of rows of \(A\), since each column of \(A\) has that number of entries.

Properties of the Matrix-Vector Product

If \(u\) and \(v\) are vectors and \(c\) is a scalar, then \(A(u + v) = Au + Av\), \(A(cu) = c(Au)\), and \(A0 = 0\).

Definition

A matrix equation is an equation of the form \(Ax = b\), where \(A\) is a matrix, \(b\) is a vector, and \(x\) is a vector whose coefficients are unknown.

In this book we will study two complementary questions about a matrix equation \(Ax = b\):

    1. Given a specific choice of \(b\), what are all of the solutions to \(Ax = b\)?
    2. What are all of the choices of \(b\) for which \(Ax = b\) is consistent?

The first question is more like the questions you might be used to from your earlier courses in algebra, where you have a lot of practice solving equations for a single unknown. The second question is perhaps a new concept for you. The rank theorem in Section 2.9, which is the culmination of this chapter, tells us that the two questions are intimately related.

Matrix Equations and Vector Equations

Consider the vector equation
\[ x_{1}v_{1} + x_{2}v_{2} + \cdots + x_{n}v_{n} = b. \]
This is equivalent to the matrix equation \(Ax = b\), where \(A\) is the matrix whose columns are \(v_{1}, v_{2}, \ldots, v_{n}\). Conversely, the matrix equation \(Ax = b\) is equivalent to the vector equation in which the entries of \(x\) are the coefficients of the columns of \(A\).

Four Ways of Writing a Linear System

We now have four equivalent ways of writing (and thinking about) a system of linear equations: as a system of equations, as an augmented matrix, as a vector equation, and as a matrix equation.

In particular, all four have the same solution set.

We will move back and forth freely between the four ways of writing a linear system, over and over again, for the rest of the book.

Another Way to Compute \(Ax\)

The above definition is a useful way of defining the product of a matrix with a vector when it comes to understanding the relationship between matrix equations and vector equations. Here we give a definition that is better adapted to computations by hand.

Definition

A row vector is a matrix with one row. The product of a row vector of length \(n\) and a column vector of length \(n\) is the scalar
\[
\begin{bmatrix} a_{1} & a_{2} & \cdots & a_{n} \end{bmatrix}
\begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n} \end{bmatrix}
= a_{1}x_{1} + a_{2}x_{2} + \cdots + a_{n}x_{n},
\]
and the \(i\)th entry of \(Ax\) is the product of the \(i\)th row of \(A\) with \(x\).
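The row-vector definition and the columns-times-coefficients definition always produce the same vector; a short numpy sketch contrasting the two:

```python
import numpy as np

A = np.array([[1, 5, 2], [-1, 0, 1], [3, 2, 4]])
x = np.array([1, 2, 3])

# Column definition: Ax is a linear combination of the columns of A.
by_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))

# Row definition: the i-th entry of Ax is (row i of A) dot x.
by_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])

print(by_columns)  # [17  2 19]
print(by_rows)     # the same vector
```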


The system is then called a consistent system with a single solution. To determine the solution of the system we use Cramer's rule.

We calculate $\Delta_{x_1}$, the determinant obtained by replacing the column containing the coefficients of the variable $x_1$ with the column of constant terms:
$$\Delta_{x_1} = \begin{vmatrix} b_{1} & a_{1,2} & a_{1,3} & \cdots & a_{1,n} \\ b_{2} & a_{2,2} & a_{2,3} & \cdots & a_{2,n} \\ b_{3} & a_{3,2} & a_{3,3} & \cdots & a_{3,n} \\ \vdots & \vdots & \vdots & & \vdots \\ b_{n} & a_{n,2} & a_{n,3} & \cdots & a_{n,n} \end{vmatrix}$$

We calculate $\Delta_{x_2}$, the determinant obtained by replacing the column containing the coefficients of the variable $x_2$ with the column of constant terms:
$$\Delta_{x_2} = \begin{vmatrix} a_{1,1} & b_{1} & a_{1,3} & \cdots & a_{1,n} \\ a_{2,1} & b_{2} & a_{2,3} & \cdots & a_{2,n} \\ a_{3,1} & b_{3} & a_{3,3} & \cdots & a_{3,n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n,1} & b_{n} & a_{n,3} & \cdots & a_{n,n} \end{vmatrix}$$

We calculate $\Delta_{x_3}$, the determinant obtained by replacing the column containing the coefficients of the variable $x_3$ with the column of constant terms:
$$\Delta_{x_3} = \begin{vmatrix} a_{1,1} & a_{1,2} & b_{1} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & b_{2} & \cdots & a_{2,n} \\ a_{3,1} & a_{3,2} & b_{3} & \cdots & a_{3,n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{n,1} & a_{n,2} & b_{n} & \cdots & a_{n,n} \end{vmatrix}$$

We keep doing this for the other variables until the last one, and then we write down the solution of the system:
$$x_{i} = \dfrac{\Delta_{x_i}}{\Delta}$$

Example 53
$$\begin{cases} 2x + 3y - 5z = -7 \\ -3x + 2y + z = -9 \\ 4x - y + 2z = 17 \end{cases}$$

The matrix associated to the system is
$$\begin{pmatrix} 2 & 3 & -5 \\ -3 & 2 & 1 \\ 4 & -1 & 2 \end{pmatrix}$$

We calculate the determinant of the matrix and we get $\Delta = 8 - 15 + 12 + 40 + 2 + 18 = 65$.

We calculate $\Delta_{x} = \begin{vmatrix} -7 & 3 & -5 \\ -9 & 2 & 1 \\ 17 & -1 & 2 \end{vmatrix} = -28 - 45 + 51 + 170 - 7 + 54 = 195$.

We calculate $\Delta_{y} = \begin{vmatrix} 2 & -7 & -5 \\ -3 & -9 & 1 \\ 4 & 17 & 2 \end{vmatrix} = -36 + 255 - 28 - 180 - 34 - 42 = -65$.

We calculate $\Delta_{z} = \begin{vmatrix} 2 & 3 & -7 \\ -3 & 2 & -9 \\ 4 & -1 & 17 \end{vmatrix} = 68 - 21 - 108 + 56 - 18 + 153 = 130$.

Hence $x = \dfrac{195}{65} = 3$, $y = \dfrac{-65}{65} = -1$, and $z = \dfrac{130}{65} = 2$.
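The rule is also easy to mechanize; a small numpy sketch (using floating-point determinants, so the results are approximate) that reproduces the numbers of Example 53:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A), where
    A_i is A with column i replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 3.0, -5.0], [-3.0, 2.0, 1.0], [4.0, -1.0, 2.0]])
b = np.array([-7.0, -9.0, 17.0])
print(cramer(A, b))  # approximately [ 3. -1.  2.]
```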

Example 54
$$\begin{cases} 4x + 5y - 2z = 3 \\ -2x + 3y - z = -3 \\ -x - 2y + 3z = -5 \end{cases}$$

The matrix associated to the system is
$$\begin{pmatrix} 4 & 5 & -2 \\ -2 & 3 & -1 \\ -1 & -2 & 3 \end{pmatrix}$$

We calculate the determinant of the matrix and we get $\Delta = 36 - 8 + 5 - 6 - 8 + 30 = 49$.

We calculate $\Delta_{x} = \begin{vmatrix} 3 & 5 & -2 \\ -3 & 3 & -1 \\ -5 & -2 & 3 \end{vmatrix} = 27 - 12 + 25 - 30 - 6 + 45 = 49$.

We calculate $\Delta_{y} = \begin{vmatrix} 4 & 3 & -2 \\ -2 & -3 & -1 \\ -1 & -5 & 3 \end{vmatrix} = -36 - 20 + 3 + 6 - 20 + 18 = -49$.

We calculate $\Delta_{z} = \begin{vmatrix} 4 & 5 & 3 \\ -2 & 3 & -3 \\ -1 & -2 & -5 \end{vmatrix} = -60 + 12 + 15 + 9 - 24 - 50 = -98$.

Hence $x = \dfrac{49}{49} = 1$, $y = \dfrac{-49}{49} = -1$, and $z = \dfrac{-98}{49} = -2$.

If the system is homogeneous, its solution is $(0, 0, 0)$, because in the determinants $\Delta_{x}$, $\Delta_{y}$, and $\Delta_{z}$ there is a column of zeros, so they are all equal to 0.

Example 55
$$\begin{cases} 2x + 3y - 5z = 0 \\ -3x + 2y + z = 0 \\ 4x - y + 2z = 0 \end{cases}$$

The matrix associated to the system is
$$\begin{pmatrix} 2 & 3 & -5 \\ -3 & 2 & 1 \\ 4 & -1 & 2 \end{pmatrix}$$

We calculate the determinant of the matrix and we get $\Delta = 8 - 15 + 12 + 40 + 2 + 18 = 65$, so the only solution is $(0, 0, 0)$.


Solving linear systems

Matrices can be used to describe a linear system of equations as well as to solve it using matrix multiplication. Say we are given a system of n linear equations in n unknowns; in this case, n = 3. Whenever we have missing variables, such as in the last equation, 2x2 - 5x3 = 3, we can artificially introduce x1 into the equation by setting the coefficient of x1 to 0. In other words, 0x1 + 2x2 - 5x3 = 3.

If we organize the coefficients of x1, x2, and x3 into a matrix A, and the constants on the right-hand side of each equation into a column vector b, then, in the language of matrix multiplication, solving for x1, x2, and x3 is the same as finding a vector x = [x1, x2, x3]^T such that Ax = b. We can then use Gaussian elimination on the augmented matrix [A | b].
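The system itself is not shown above, so this sketch assumes a hypothetical completion in which only the last equation, 2x2 - 5x3 = 3, comes from the text (with its x1 coefficient padded to 0 as described):

```python
import numpy as np

# Hypothetical 3 x 3 system; only the last row is taken from the text.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -1.0, 1.0],
              [0.0, 2.0, -5.0]])
b = np.array([6.0, 3.0, 3.0])

x = np.linalg.solve(A, b)     # LAPACK's Gaussian-elimination-based solver
print(x)
print(np.allclose(A @ x, b))  # True
```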

