prs4e.mw

Problem Set 4 

> with(LinearAlgebra): with(plots):
 

3.1 #46 Here we are asked to compute the determinants of A and its inverse for 4x4, 5x5 and 6x6 matrices to make a conjecture about how they are related.  In the event that we have a matrix with a zero determinant we are directed to reduce it to echelon form and discuss what we find. 

 

Maple can populate the matrices for us and quickly compute determinants of matrices and their inverses:  

> A:=RandomMatrix(4);
 

[output (1): A, a random 4 x 4 integer matrix]
 

> Determinant(A); Determinant(MatrixInverse(A));
 

 

64334045
1/64334045   (2)
 

> B:=RandomMatrix(5);
 

[output (3): B, a random 5 x 5 integer matrix]
 

> Determinant(B); Determinant(MatrixInverse(B));
 

 

-2666209314
-1/2666209314   (4)
 

> C:=RandomMatrix(6);
 

[output (5): C, a random 6 x 6 integer matrix]
 

> Determinant(C); Determinant(MatrixInverse(C));
 

 

782188886595
1/782188886595   (6)
 

In each case the determinant of the inverse is the reciprocal of the determinant of the original: det(A^(-1)) = 1/det(A).   

 

Here is a proof: 

A A^(-1) = I 

det(A A^(-1)) = det(I) = 1.  

Since the determinant of a product is the product of the determinants, we have det(A) det(A^(-1)) = 1.   

They are two numbers that multiply to give 1, so neither can be 0, and they must be reciprocals of each other.   
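
As a quick symbolic check, here is a sketch with a general 2 x 2 matrix (the entries a, b, c, d are arbitrary symbols, and G is a name chosen here to avoid clobbering A): 

> G := Matrix([[a, b], [c, d]]):
> simplify(Determinant(G) * Determinant(MatrixInverse(G)));  # returns 1 whenever a*d - b*c <> 0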

 

2.8 #38 with additional instructions  

To find a complete basis for the column space, we set up the system for span, Ax=b, and use Gaussian elimination on the augmented matrix [A|b], just like back in chapter 1: 

 

> s28n38:=Matrix([[5, 3, 2, -6, -8], [4, 1, 3, -8, -7], [5, 1, 4, 5, 19], [-7, -5, -2, 8, 5]]);
 

[output (7): the 4 x 5 matrix s28n38 as entered]
 

Part A: The definition of the nullspace of a matrix is the set of solutions to Ax=0, the homogeneous system. 

 

Part B:  The augmented matrix for the homogeneous equation corresponding to Null A is found by augmenting A with the zero vector:  

> s28n38:=Matrix([[5, 3, 2, -6, -8], [4, 1, 3, -8, -7], [5, 1, 4, 5, 19], [-7, -5, -2, 8, 5]]): Null:=Matrix([s28n38,Vector([0,0,0,0])]);
 

[output (8): the 4 x 6 augmented matrix [s28n38 | 0]]
 

Part C:  

> ReducedRowEchelonForm(Null);
 

[ 1  0   1  0  0  0 ]
[ 0  1  -1  0  0  0 ]
[ 0  0   0  1  0  0 ]
[ 0  0   0  0  1  0 ]   (9)
 

Part D: To solve for Null A, we are looking for the solutions to the equation Ax=0, i.e. the intersection of the row equations:   

There are no inconsistent rows because a homogeneous system can never be inconsistent! 

x3 is free (set x3 = t), because its column has no pivot.  Then we solve for the other variables in terms of this free variable: 

x1 = -x3 = -t 

x2 = x3 = t 

x4 = 0 

x5 = 0 

So the solutions are t [-1, 1, 1, 0, 0].  This is Null A, and a basis for it is {[-1, 1, 1, 0, 0]}.  
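
As a check, Maple's NullSpace command (assuming the LinearAlgebra setup loaded at the top) should return this basis vector up to scaling: 

> NullSpace(s28n38);  # expected: a set containing a multiple of Vector([-1, 1, 1, 0, 0])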

 

Part E: 

Here the nullspace is a line through the origin in R^5, because it has one free variable t.   

 

Part F: 

The column space is the span of the columns, or equivalently, all the vectors b that are consistent in the equation Ax=b.  

 

 

Part G:  

Solve for Col A as follows: reduce A, circle the pivots, and take the pivot columns of A (not of reduced A) as the basis for Col A. Note that linear combinations of these basis vectors span Col A. 

 

Since each column is in R^4, the column space is a subspace of R^4 - so to find a basis, we want vectors that both span the column space and are linearly independent.  The full set of column vectors spans the column space, but by reducing A and selecting the pivot columns, we ensure linear independence: 

> s28n38;
ReducedRowEchelonForm(s28n38);
 

 

[output (10): s28n38 as entered, followed by its reduced row echelon form:]

[ 1  0   1  0  0 ]
[ 0  1  -1  0  0 ]
[ 0  0   0  1  0 ]
[ 0  0   0  0  1 ]   (10)
 

Columns 1, 2, 4 and 5 have pivots (column 3 does not).  Hence a basis can be taken from all but column 3 of the original (NOT the reduced) matrix, i.e. the pivot columns: 

[5, 4, 5, -7], [3, 1, 1, -5], [-6, -8, 5, 8], [-8, -7, 19, 5].  The full column space is obtained by linear combinations: s [5, 4, 5, -7] + t [3, 1, 1, -5] + u [-6, -8, 5, 8] + w [-8, -7, 19, 5]. 

 

We see that the column space is all of R^4 because we have 4 linearly independent basis vectors in R^4.  
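
Maple can corroborate the dimension count; note that ColumnSpace returns a basis in reduced (canonical) form rather than the original pivot columns, so this is only a sketch of a cross-check: 

> ColumnSpace(s28n38);  # returns 4 linearly independent vectors, so dim Col A = 4 and Col A = R^4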

 

Notice that column 3 = column 1 - column 2, so column 3 can't be part of any basis that the first two columns are in, or it would violate the definition of linear independence. 

 

 

Part H:  

Set up and solve the augmented matrix for the system Ax=Vector([b1,b2,b3,b4]) and apply GaussianElimination in Maple.  

 

To look for another representation of Col A we set up the system for span, Ax=b, and use Gaussian elimination on the augmented matrix [A|b], just like back in chapter 1: 

> ColSpaceAug:=Matrix([s28n38,Vector([b1,b2,b3,b4])]);
GaussianElimination(ColSpaceAug);
 

 

[output (11): the 4 x 6 augmented matrix [A | b] and its row echelon form; every row has a pivot in the coefficient columns, so no row of the form [0 0 0 0 0 | combination of b's] appears]
 

Part I: Examine any inconsistent rows (like [0 0 0 0 0 | combination of b's]).   

 

We have a pivot in every row here, so there are no inconsistent rows to set equal to 0; no equation restricts the b's, and the algebraic representation of Col A is all of R^4.  

 

CAUTION:  This original matrix is not a square matrix, so we can't apply the Invertible Matrix Theorem results.  For instance, the original 5 columns are not linearly independent in R^4 [there are too many of them], but they do span R^4. 

 

Part J: Any vector b is in the column space of the matrix, so it is all of R^4, i.e. the entire space. We can see this from Part G, because there are 4 basis vectors (hence 4 free weights), as well as from Part H, where no restrictive equation in R^4 gives us a smaller subspace. 
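
A rank computation confirms this (same worksheet setup assumed): 

> Rank(s28n38);  # returns 4, the dimension of R^4, so the columns span the whole space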

 

5.6  #10 with additional instructions: 

Part A: What is our book’s definition (or the glossary) of an eigenvalue? 

 

An eigenvalue of a matrix A is a scalar lambda such that Ax = lambda x for some nonzero vector x.  Geometrically, lambda is how vectors scale (stretch, shrink, flip, or stay fixed) while staying on the same line through the origin under A as a linear transformation. 

 

Part B: What is our book’s definition (or the glossary) of an eigenvector? 

 

An eigenvector of a matrix A is a nonzero vector x such that Ax = lambda x for some scalar lambda.  Geometrically, x is a vector that stays on the same line through the origin it began on.  

 

Part C: Compute the eigenvectors and eigenvalues: 

 

> A:=Matrix([[3/10, 4/10], [-3/10, 11/10]]);
Eigenvectors(A);
 

 

[output (12): A as entered; eigenvalues 1/2 and 9/10, with corresponding eigenvectors [2, 1] and [2/3, 1]]
 

Part D: Since the 2 eigenvectors are not multiples of each other (one is on the line y = (1/2)x, and the other on y = (3/2)x), we know that the vectors span all of R^2 and the eigenvector decomposition exists.  If we had three or more vectors we would need to check Ax=b for consistency or use the Invertible Matrix Theorem for square matrices, but with just 2 different vectors we can quickly see whether they span a line or the entire R^2 plane.  
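
We can verify the eigenpairs read off of output (12) directly; in each pair the products A . x and lambda * x should agree (a quick check): 

> A . Vector([2, 1]), (1/2) * Vector([2, 1]);      # both are [1, 1/2]
> A . Vector([2/3, 1]), (9/10) * Vector([2/3, 1]); # both are [3/5, 9/10]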

 

Part E: 

x_k = a_1 (1/2)^k [2, 1] + a_2 (9/10)^k [2/3, 1] 
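
As a sanity check of the decomposition, here is a sketch with a hypothetical starting vector x0 chosen so that a_1 = a_2 = 1; iterating the matrix should match the closed form exactly: 

> x0 := Vector([2, 1]) + Vector([2/3, 1]):
> MatrixPower(A, 10) . x0;                                   # ten steps of the system
> (1/2)^10 * Vector([2, 1]) + (9/10)^10 * Vector([2/3, 1]);  # closed form; same vector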

Part F:  Both populations die off for any starting position, because (1/2)^k and (9/10)^k both go to 0 in the limit. 

Part G:  For most starting positions, the system will die off asymptotically along the eigenvector corresponding to the larger eigenvalue 9/10, i.e. along the line y = (3/2)x spanned by the [2/3, 1] eigenvector.  So the limiting ratio is 2 x-population to 3 y-population.   

 

Part H: For most starting positions, we can find the long-term die-off rate by examining the larger eigenvalue, which is 9/10. 

This is 0.1 under 1, so the die-off rate is 10% each year (in the limit, the population is 90% of what it was the previous year).   

This holds as long as a_2 is not 0 in our decomposition.   

 

Parts I and J: Roughly sketch by hand a trajectory plot with starting populations in the first quadrant that are not on either eigenvector, and include both eigenvectors in the sketch. 

 

Here is a plot that shows the system dying off asymptotic to y=3/2 x: 

[2-D plot: a trajectory decaying to the origin, asymptotic to the line y = (3/2)x]

Problem 4: Rotation matrices in R^2  

Recall that the general rotation matrix which rotates vectors in the counterclockwise direction by angle theta is given by M:=Matrix([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]);  

 

Part A:   A linear transformation T from R^n to R^m assigns to each vector x in R^n a vector T(x) in R^m and satisfies the properties of addition and scalar multiplication.   

T(x + y) = T(x) + T(y) 

T(cx) = cT(x) 

Such linear transformations have matrix representations where T(x) = Ax.  
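
Both properties can be checked symbolically for the rotation matrix; this is a sketch where u1, u2, v1, v2 and c are arbitrary symbols: 

> M := Matrix([[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]):
> u := Vector([u1, u2]): v := Vector([v1, v2]):
> map(simplify, M . (u + v) - (M . u + M . v));  # zero vector: T(u + v) = T(u) + T(v)
> map(simplify, M . (c * u) - c * (M . u));      # zero vector: T(c u) = c T(u)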

 

Part B: Apply the Eigenvalues(M); command.  

> M := Matrix([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]);
Eigenvalues(M);
 

 

[output (13): M as entered; eigenvalues cos(theta) + sqrt(cos(theta)^2 - 1) and cos(theta) - sqrt(cos(theta)^2 - 1)]
 

Part C: Notice that there are real eigenvalues for certain values of theta only. What are these values of theta and what eigenvalues do they produce? (Recall that I = the square root of negative one does not exist as a real number and that cos(theta) is less than or equal to 1 always.)  

Notice that cos(theta)^2 - 1 is less than 0 for many values of theta, producing complex (non-real) eigenvalues.  For real eigenvalues, we must have cos(theta)^2 > 1 or cos(theta)^2 = 1.    

 

Notice that cos(theta)^2 > 1 has NO solutions, since |cos(theta)| is at most 1, and so cos(theta)^2 is at most 1.  

 

Hence our only solutions for real eigenvalues occur when cos(theta)^2 = 1.  This happens when cos(theta) = 1 or cos(theta) = -1.  

 

I.e. theta is 0 or Pi (a 2Pi rotation is equivalent to a 0 rotation...).  
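
Maple's solve command can confirm the principal solutions (a quick check; the other solutions are integer multiples of Pi): 

> solve(cos(theta)^2 = 1, theta);  # returns 0, Pi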

 


Part D: For each such value of θ that gives real-eigenvalues, what eigenvalues do they produce? Don’t forget to look for other possible eigenvalues! 

 

For theta = Pi and odd integer multiples of Pi, the eigenvalue is -1, as rotation by 180 degrees flips vectors: Mx = -x. 

 

For theta = 0 and all integer multiples of 2Pi, the eigenvalue is 1, as rotating all the way around keeps vectors the same: Mx = x. 

 

Part E: For each θ that gives a real eigenvalue, find a basis for the corresponding eigenspace.  

  

For theta = Pi and odd integer multiples of Pi:  

> Eigenvectors(Matrix([[cos(Pi),-sin(Pi)],[sin(Pi),cos(Pi)]]));
 

 

[output (14): eigenvalue -1 with multiplicity 2; eigenvectors [0, 1] and [1, 0]]
 

The basis is [0, 1] and [1, 0]. 

 

We get an eigenvalue of -1 and two eigenvectors.  This makes sense, since a Pi rotation about the origin sends [1,0] to [-1,0], and sends [0,1] to [0,-1].  In fact, it sends every vector v to -v.  So, the eigenspace is the entire R^2 (which has [0,1] and [1,0] as a basis).  

 

For theta = 0 and all integer multiples of 2Pi:  

> Eigenvectors(Matrix([[cos(0),-sin(0)],[sin(0),cos(0)]]));
 

 

[output (15): eigenvalue 1 with multiplicity 2; eigenvectors [0, 1] and [1, 0]]
 

The basis is [0, 1] and [1, 0]. 

We get an eigenvalue of 1 and two eigenvectors.  This makes sense since it sends every vector v to v.  So, the eigenspace is the entire R^2 (which has [0,1] and [1,0] as a basis).   

 

Part F:   Use only a geometric explanation to explain why most rotation matrices have no real eigenvalues or eigenvectors (i.e. they do not scale vectors along the same line through the origin), and include how the rotation angle connects: how do rotations act on vectors physically, and how does this relate to the original line the vector was on for most theta? 

 

We see that rotation by Pi/6 has no real eigenvalues or eigenvectors, since this rotation moves every vector off of its line through the origin (the definition of an eigenvector is that the matrix scales it along the same line through the origin, as Ax = lambda x, where lambda x lies on the same line as x).  The same is true of most rotation matrices, since Mx will be at angle theta with respect to x, which changes the slope and takes it off the original line.  

[2-D plot: a vector and its image under rotation by Pi/6, lying on different lines through the origin]
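
Here is a sketch that could reproduce such a picture (assuming the plots package loaded at the top); it draws [1, 0] and its image under a Pi/6 rotation, which lie on different lines through the origin: 

> Mr := Matrix([[cos(Pi/6), -sin(Pi/6)], [sin(Pi/6), cos(Pi/6)]]):
> v := Vector([1, 0]): w := Mr . v:
> display(arrow(v, color = blue), arrow(w, color = red));  # w is off the line spanned by v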
 

The only exceptions are when the rotation takes the vectors back to the line they started on; there are 2 cases here.  One is rotation by 0, 2Pi, or any even multiple of Pi, which fixes every vector: 

M = Matrix([[1, 0], [0, 1]])
 

[2-D plot: every vector fixed by the identity rotation]
 

and the second is rotation by Pi or odd multiples of Pi, which sends every vector to its opposite (still on the same line). 

M = Matrix([[-1, 0], [0, -1]])
 

[2-D plot: every vector sent to its opposite along the same line]