Projects: Linear Algebra
Role on Project: Instructor, Subject Matter Expert
Position Title: Professor, Mathematics
Department: Department of Mathematics
Institution: University of Toronto
Digital Object Types: Video Links
Lecture 24: Introduction to Diagonalization (Nicholson Section 3.3/Section 5.5) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 50:11
Description: Note: in this lecture I use the language of “linear independence”. You may not have seen this yet. For two vectors, it means that they’re not parallel. Hold onto that concept and you’ll learn more about linear independence later on. Started by reviewing the definition of eigenvalues and eigenvectors.
3:50 --- Reviewed procedure for finding eigenvalues and eigenvectors of A.
8:00 --- Given a 3x3 matrix w/ eigenvalues 1,2,2 what are the benefits? (Only need to solve two linear systems when hunting eigenvectors.) What are the risks? (You might not be able to find two fundamentally different eigenvectors when you’re looking for eigenvectors w/ eigenvalue 2.)
11:00 --- Considered a 3x3 matrix A and built a matrix P out of three eigenvectors of A. Computed AP, using block multiplication. The result is that AP is a matrix where each column is a multiple of the corresponding column of P. What is the multiple? The eigenvalue. The result is that AP = P diag(lambda1, lambda2, lambda3) where lambda1, lambda2, and lambda3 are eigenvalues of A. Make sure you understand this portion of the lecture very well --- all of diagonalization is built on AP = P diag( … ).
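As a quick check on this identity, here is a minimal Python/numpy sketch (the 3x3 matrix is an arbitrary diagonalizable example, not the one from the lecture):

    # Sketch: check A P = P diag(lambda1, lambda2, lambda3) for a
    # diagonalizable 3x3 matrix (this example matrix is assumed).
    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])
    lambdas, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
    D = np.diag(lambdas)
    print(np.allclose(A @ P, P @ D))  # True: column i of AP is lambda_i times column i of P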
20:00 --- Introduced the concept of diagonalization. It's critical that the matrix P be invertible.
23:10 --- What happens if I change the order of the columns in P? The new matrix will still be invertible. What happens to AP? What’s the result on the diagonal matrix that I ultimately get?
27:00 --- What happens if I replace one of the columns of P with a nonzero multiple of that column?
32:20 --- What happens if I replace one of the columns of P with a nonzero multiple of one of the other columns?
36:30 --- Defined what it means for a square matrix to be diagonalizable. Note: my definition (that A = P diag(…) inv(P) ) is different from the one in Nicholson (that inv(P) A P is a diagonal matrix) but the two definitions are equivalent if you left- and right-multiply by the appropriate matrices.
37:40 --- Stated a theorem that if A is diagonalizable then the columns of P are eigenvectors of A and the diagonal entries of the diagonal matrix are eigenvalues of A.
42:45 --- Given a specific 3x3 matrix, is it diagonalizable? Did the long division on the characteristic polynomial, in case you want to see that. If you don’t want to spend so much ink/lead then learn how to do synthetic division; it’s shorthand for long division.
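If you want to experiment with diagonalizability questions like this one, here is a minimal sympy sketch (the matrix below is an assumed example with repeated eigenvalue 2, not the matrix from the lecture):

    # Sketch: a 3x3 matrix with eigenvalues 2, 2, 1 that is NOT diagonalizable,
    # because lambda = 2 has only one independent eigenvector.
    from sympy import Matrix

    A = Matrix([[2, 1, 0],
                [0, 2, 0],
                [0, 0, 1]])
    print(A.charpoly().as_expr())   # characteristic polynomial of A
    print(A.is_diagonalizable())    # False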
Lecture 25: Introduction to Systems of Linear ODEs (Nicholson Section 3.5) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 49:38
Description: The lecture starts with a crash course on ordinary differential equations. If you don’t understand a single ODE then you’ve got no chance of understanding a system of ODEs… That said, if you’re short on time and want to skip the “why bother?” aspect, go directly to 38:00.
0:50 --- Started with a simple ODE problem: modelling a tank of liquid with an inflow and an outflow.
7:15 --- What is a differential equation? What is a solution? What does it mean for a function to satisfy a differential equation?
11:20 --- How to find a differential equation that models the saltwater tank?
17:40 --- Presented the solution of the initial value problem. Answered the questions about the solution.
20:30 --- Now need to address the question about what’s the right flow rate to use. This involves going back to the original modelling equation and introducing a parameter for the flow rate and finding a solution that depends on both the parameter and on time. (Before the flow rate was just a number and the solution only depended on time.)
24:45 --- Introduced the “smell the coffee and wake up” problem. Rooms in an apartment are linked to one another by air vents. Presented the problem and the questions one has about the problem.
33:45 --- How to write the pair of ODEs that model the case with only two rooms.
38:00 --- wrote the system of two ODEs as an ODE for a vector in R^2; a 2x2 matrix A is involved.  Gave a bird’s eye recipe for the approach.
39:50 --- The eigenvectors and eigenvalues of A.
40:40 --- wrote down the general solution (didn’t explain where it came from).
41:30 --- What we need to do to satisfy the initial condition on the ODE. Note: something that sometimes confuses students is what happens when there’s a zero eigenvalue.  In this case, exp(0 t) = 1, and so your solution involves a vector that doesn’t change in time; it just sits there. In this case, the solution has a vector 50 [1;1] in addition to a vector that does depend on time 50 exp(-2/500 t) [1;-1]. Note that the vectors [1;1] and exp(-2/500 t)[1;-1] both being multiplied by the same number, 50, has to do with the initial data. Different initial data could lead to their being multiplied by different numbers.
43:45 --- Analyzed the solutions to make sure that they make sense --- Do you get what you expect as time goes to infinity?
45:00 --- presented the solutions using MATLAB. This allowed me to play with the more general problem, including varying the number of rooms, the flow rate, etc. You see that if there are enough rooms then people in the far rooms won’t smell enough coffee to be woken by it. Here are some hand-written notes on the coffee problem, as well as a MATLAB script you can experiment with.
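For reference, here is a minimal numpy sketch of the eigenvalue recipe for the two-room case (the matrix A = (1/500)*[[-1,1],[1,-1]] and the initial data [100; 0] are assumptions, chosen to match the eigenvalues 0 and -2/500 and the coefficients 50, 50 quoted above):

    # Sketch: solve x' = A x by eigen-decomposition for the assumed two-room data.
    import numpy as np

    A = np.array([[-1.0, 1.0], [1.0, -1.0]]) / 500.0
    x0 = np.array([100.0, 0.0])        # assumed initial concentrations
    lambdas, V = np.linalg.eig(A)      # eigenvalues 0 and -2/500
    c = np.linalg.solve(V, x0)         # coefficients chosen to satisfy x(0) = x0

    def x(t):
        # general solution: sum over i of c_i * exp(lambda_i * t) * (column i of V)
        return V @ (c * np.exp(lambdas * t))

    print(x(0.0))    # [100, 0]
    print(x(1e6))    # -> [50, 50]: only the lambda = 0 mode survives as t grows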
Lecture 26: Systems of Linear ODEs --- where do the solutions come from? (Nicholson Section 3.5) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 44:11
Description: Started by reminding students of what the system of ODEs is.
2:00 --- Last class I presented the general solution as a linear combination of "things". First, I demonstrate that each of these "things" solves the system of ODEs. Translation: I demonstrate that if (lambda, v) is an eigenvalue-eigenvector pair of A then x(t) = exp(lambda t) v is a solution of the system of ODEs. Note: I blithely differentiate vectors but you’re probably not so happy doing that --- you’re used to differentiating single functions. You need to sit down, write the time-dependent vectors in terms of their components, and convince yourself that differentiating the vector is the same as differentiating each component, so that the things I do so quickly (like d/dt of exp(lambda t) v equals lambda exp(lambda t) v) are justified.
9:30 --- I demonstrate that if x1(t) and x2(t) are two solutions of the system of ODEs and c1 and c2 are constants then c1 x1(t) + c2 x2(t) is also a solution of the system of ODEs.
17:30 --- Given k eigenvalue-eigenvector pairs, I can write a solution of the system of ODEs that involves k coefficients c1, c2, … ck.
18:20 --- Choose the coefficients using the initial data.
21:00 --- If we have a problem in R^n, what happens if k doesn’t equal n? Do we need k to equal n? Answer: if I’m going to be able to solve every possible set of initial conditions, I’m going to need n linearly independent eigenvectors in R^n. (You don’t know what linear independence means yet --- sorry! Come back to this in a few weeks. In the meantime, it would have sufficed to say “If A is a diagonalizable matrix then…”)
25:45 --- Presented a different argument for where the general solution came from. This argument relies on diagonalizing A.  We reduce a problem that we don’t know how to solve to a problem that we do know how to solve.
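If the differentiation of vector-valued functions bothers you (see the note at 2:00), here is a minimal numpy sketch that checks, via a finite difference, that x(t) = exp(lambda t) v really satisfies x' = A x for an eigenpair (lambda, v); the matrix is an arbitrary example:

    # Sketch: numerically verify that x(t) = exp(lambda*t)*v solves x' = A x.
    import numpy as np

    A = np.array([[-1.0, 1.0], [1.0, -1.0]])
    lambdas, V = np.linalg.eig(A)
    lam, v = lambdas[0], V[:, 0]            # one eigenvalue-eigenvector pair

    x = lambda t: np.exp(lam * t) * v
    t, h = 1.0, 1e-6
    dxdt = (x(t + h) - x(t - h)) / (2 * h)  # centered-difference approximation of x'(t)
    print(np.allclose(dxdt, A @ x(t)))      # True, up to finite-difference error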
Lecture 27: Introduction to linear combinations (Nicholson Section 5.1) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 22:18 
Description: Introduced set notation (relatively) slowly and carefully. Introduced R^2, Z^2.
5:15 --- Did an example where a set is graphically presented as a collection of vectors in R^2.
11:08 --- Scalar multiplication of vectors, introduced linear combinations.
17:50 --- Introduced the word “basis” but didn’t carefully define it.
18:30 --- Introduced “the standard basis for R^2”, R^3, and R^4.
Lecture 28: Subsets, Subspaces, Linear Combinations, Span (Nicholson Section 5.1) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 46:27  
Description: Introduced the language “subspace” and “span”.
1:45 --- Introduced 6 subsets of R^2 using set notation.
4:48 --- Presented each subset as a collection of vectors in R^2. Note: I was not as careful as I could have been --- when presenting one of the quadrants (Q2, for example) I simply shaded in a region of the plane as if it were a collection of points. Really, I should have drawn one position vector for each point in that region, filling up the quadrant with infinitely many vectors of varying lengths and angles (but with the angles always in a certain range, determined by the subset Q2).  
21:30 --- Defined what it means for a subset of R^n to be a subspace of R^n.
22:00 --- Went through the previously introduced 6 subsets and identified which were subspaces.
24:50 --- Introduced a subset S of R^4; is it a subspace? Proved that it’s a subspace of R^4.
33:44 --- Given a set of vectors in R^3 --- if one lets S be the set of all linear combinations of these vectors, what would this set look like in R^3?  
35:20 --- another subset of R^3. Is it a subspace?
37:27 --- a subset of R^2. Is it a subspace?
42:45 --- another subset of R^2. Is it a subspace?
43:35 --- Given {v1, v2, …, vk}, a set of k vectors in R^n, introduced Span(v1, v2, …, vk) and stated that it’s a subspace of R^n.
Lecture 29: Linear Dependence, Linear Independence (Nicholson Section 5.2) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 58:21
Description: 1:30 --- introduced a set which is the set of all linear combinations of two specific vectors in R^2.  Discussed the set, examples of vectors in the set, etc.
10:00 --- Theorem: the set of all linear combinations of two (fixed) vectors in R^n is a subspace of R^n. Proved the theorem.
19:00 --- Introduced Span(v1, v2, …, vk) and stated that it’s a subspace of R^n.
22:00 --- gave an example of a subset of R^2 which is closed under scalar multiplication but not under vector addition. Gave an example of a subset of R^2 which is closed under vector addition but not under scalar multiplication.
24:40 --- example in which I show that [7;1;13] is not in Span([2;1;4],[-1;3;1],[3;5;7]). Used high school methods to solve the problem. NOTE: there’s a mistake at 30:00. The second equation should be 6 t1 + 10 t3 = 20, not 2 t1 + 4 t3 = 6!!  When plugging t1 into the third equation, one gets -2 t3 + 83/7 = 13, which has the solution t3 = -4/7. This then determines t1 = 30/7 and t2 = -1/7 and, in fact, we have that [7;1;13] is in Span([2;1;4],[-1;3;1],[3;5;7]) because (30/7)* [2;1;4]+(-1/7)*[-1;3;1]+(-4/7)*[3;5;7] = [7;1;13]. What went wrong? I made a mistake when copying from my notes. My notes had the example “[7;1;13] is not in Span([2;1;4],[-1;3;-1],[3;5;7]).” Note that the third component of the second vector in the span is -1, not +1, as written on the blackboard. If you correct that mistake then you’ll get that the rest of the approach works and that you end up with the impossibility of 44/7=6.
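You can double-check the corrected arithmetic above with exact rational arithmetic; here is a minimal sympy sketch, using the vectors as written on the blackboard (second vector [-1;3;1]):

    # Sketch: solve t1*v1 + t2*v2 + t3*v3 = [7;1;13] exactly.
    from sympy import Matrix

    M = Matrix([[2, -1, 3],
                [1,  3, 5],
                [4,  1, 7]])   # columns are the spanning vectors v1, v2, v3
    b = Matrix([7, 1, 13])
    print(M.LUsolve(b))        # [30/7, -1/7, -4/7], so b IS in the span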
33:30 --- example in which I show that [7;0;13] is in Span([2;1;4],[-1;3;-1],[3;5;7]). I simply wrote down the solution without showing how to find it. Students were then asked to demonstrate that [7;0;13] is in Span([2;1;4],[-1;3;-1]) and [7;0;13] is in Span([2;1;4],[3;5;7]) and [7;0;13] is in Span([-1;3;-1],[3;5;7]).
40:00 --- proved that Span([2;1;4],[-1;3;-1],[3;5;7]) equals Span([2;1;4],[-1;3;-1]).
43:30 --- defined what it means for a set of vectors to be linearly independent.
45:00 --- defined what it means for a set of vectors to be linearly dependent. Gave an example of a linearly dependent set of four vectors in R^3.
48:06 --- Showed that {[2;1;2],[-2;2;1],[1;2;-2]} is linearly independent.
Lecture 2: Linear Systems: Solutions, Elementary Operations (Nicholson, Section 1.1) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 42:50
Description: Gave the definition of “a linear equation in n variables”.
6:00 --- Defined what it means for a vector to be a solution of a linear equation.
10:15 --- Note that if you write an equation like x1 - 2 x2 + x3 = 1 then [2;1;1] is a solution in R^3. And [2;1;1;29] is a solution in R^4. And [2;1;1;3;-8] is a solution in R^5. An equation doesn’t determine the R^n that a solution lives in. Certainly, the equation won’t have solutions in R^1 (what would you plug in for x2 and x3?) or solutions in R^2 (what would you plug in for x3?) but it’s perfectly reasonable to consider that equation in R^27 if needed --- it depends on what physical problem the equation is coming from.
10:25 --- defined a “system of linear equations”. 
10:40 --- Stated that a linear system either has no solutions, has exactly one solution, or has infinitely many solutions.  (The proof will come later!)  Presented a linear system that has no solutions.  Presented a linear system that has exactly one solution.  Presented a linear system that has infinitely many solutions.
16:00 --- Presented a system of 3 linear equations in 3 unknowns and found the solution using a sequence of elementary operations.  (swapping two equations, multiplying an equation by a nonzero number, adding a multiple of one equation to another equation.)  Key in all of this is that the elementary operations don’t change the solution set.  That is, if you have S = {all solutions of the original system} and T = {all solutions of the system you get after applying one elementary operation} then S = T.   If you can prove that S = T for each of the three elementary operations then you know that the solutions of the original system are the same as the solutions of a later (easier to solve) linear system because you know that the solutions remain unchanged after each step on the way.  (The worry is, of course, that you might lose solutions or gain solutions by doing these elementary operations.  Certainly, if you multiplied one of the equations by zero you’d be at high risk of creating a new system that has more solutions than the previous system.) 
29:10 --- presented a second 3x3 system and demonstrated that there were infinitely many solutions. 
31:20 --- presented a third 3x3 system and demonstrated that there were no solutions. 
33:50 --- gave an argument based on a specific system of 2 linear equations in 2 unknowns showing how it is that the elementary operations don’t change the set of solutions. 
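To see the S = T claim in action, here is a minimal numpy sketch (the 2x2 system is an arbitrary example): applying one elementary operation --- adding 3 times equation 1 to equation 2 --- leaves the solution unchanged.

    # Sketch: an elementary operation does not change the solution set.
    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, -1.0]])
    b = np.array([5.0, 1.0])
    E = np.array([[1.0, 0.0], [3.0, 1.0]])  # adds 3*(equation 1) to (equation 2)
    print(np.linalg.solve(A, b))            # [1, 2]
    print(np.linalg.solve(E @ A, E @ b))    # [1, 2] again: same solution set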
Lecture 30: More on spanning and linear independence (Nicholson Section 5.2) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 50:02
Description: Started by reminding students of definition of span and linear independence.
4:48 --- is [3;-1;2;1] in Span([1;1;0;1],[2;0;0;2],[0;2;-1;1])?
14:10 --- Does Span([1;1;2],[1;-1;-1],[2;1;1]) equal R^3? Can an arbitrary vector [a;b;c] be written as a linear combination of the three vectors?
25:45 --- Is {[1;1;2],[1;-1;-1],[2;1;1]} a basis for R^3?
29:40 --- Is a set of 4 vectors in R^3 a linearly independent set? Asked about a specific example, but it should be clear to you that whenever you have 4 or more vectors in R^3 the set will be linearly dependent. This is because when you set up the system of linear equations needed to address the question you’ll have more unknowns than you have equations. This means that either there’s no solution or there are infinitely many solutions. (Having exactly one solution is not an option.) And having no solution isn’t an option (because you already know that the zero vector is a solution), so it follows that there are infinitely many solutions.
34:09 --- stated this as a general theorem.
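Here is a minimal numpy sketch of the counting argument (the four vectors are an arbitrary example):

    # Sketch: 4 vectors in R^3 give a 3x4 matrix, whose rank is at most 3 < 4,
    # so the columns must be linearly dependent.
    import numpy as np

    vecs = np.column_stack([[1, 0, 2], [0, 1, 1], [1, 1, 0], [2, 1, 1]])
    print(vecs.shape)                   # (3, 4): more unknowns than equations
    print(np.linalg.matrix_rank(vecs))  # 3, which is less than 4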
36:40 --- Considered a specific example of 3 vectors in R^3. Are they linearly independent or not? It turned out they aren’t.
41:00 --- Considered the same set of 3 vectors --- do they span R^3? No. What’s interesting is --- in the process of figuring out that the answer is “no” you find the scalar equation of the plane that the three vectors do span.
46:45 --- Given 3 vectors in R^3, do they form a basis for R^3? (Answer: maybe yes, maybe no --- it depends on the specific vectors.) What about 2 vectors? (Answer: never! They can’t span R^3!) What about 4 vectors? (Answer: never! They will always be linearly dependent!)
Lecture 31: Bases, finding bases for R^n (Nicholson, Section 5.2) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 1:01:52
Description: Started with a review of the definitions of linear independence and basis. 
1:50 --- Need to consider the one bizarre subspace of R^n. The subset S = {0} is a subspace of R^n. But it has no basis because no set of vectors in S will be a linearly independent set.  This discussion is the reason why theorems in the book (like Theorem 5.2.6) assume that the subspace in question is not {0}.
5:30 --- Is a given set of 4 vectors in R^3 a basis for R^3?  No: demonstrated that the vectors are linearly dependent. 
12:00 --- Discussed theorem that says that if you have k vectors in R^n and you build an n x k matrix A by putting the vectors into the columns of A then: the k vectors are linearly independent if and only if rank(A) = k.  Discussed the implications of the theorem. 
17:20 --- Given any m x n matrix A, if you find its RREF then rank(A) = number of leading 1s in RREF and so rank(A) ≤ number of columns of A.  But we also know rank(A) = number of nonzero rows of the RREF and so rank(A) ≤ number of rows of A.  So rank(A) ≤ min{m,n}.  This is a super-important fact that we use all the time. 
23:15 --- Two vectors in R^3.  Can they be a basis for R^3?  No: demonstrated that they cannot span R^3. 
34:00 --- Discussed theorem that says: given k vectors in R^n, if k < n then the vectors cannot span R^n. 
37:00 --- Is a given set of 3 vectors in R^3 a basis for R^3? Yes: it turns out to be a basis. Given n vectors in R^n, figuring out whether or not they form a basis takes work --- there’s no fast answer.
42:34 --- I wrote "k vectors in R^n form a basis for R^n if and only if the rank of the coefficient matrix equals k." This is ridiculously wrong. If I have two vectors {<1,0,0>,<0,1,0>} in R^3 then clearly they aren't a basis for R^3. But for this example, k=2 and the rank of the coefficient matrix is 2. So by the (wrong) theorem I wrote up, those two vectors form a basis for R^3.
43:00 --- Stated and discussed theorem that says that k vectors will span R^n if rank(A)=n where A is the n x k matrix whose columns are the vectors in question. 
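Putting 42:34 and 43:00 together, here is a minimal numpy sketch of the corrected test (the helper function is illustrative, not from the lecture): k vectors form a basis for R^n exactly when k = n AND the rank of the n x k matrix is n.

    # Sketch: corrected basis test. The 42:34 counterexample fails because k != n.
    import numpy as np

    def is_basis_for_Rn(vectors, n):
        A = np.column_stack(vectors)   # n x k matrix with the vectors as columns
        return len(vectors) == n and np.linalg.matrix_rank(A) == n

    print(is_basis_for_Rn([[1, 0, 0], [0, 1, 0]], 3))             # False: k = 2, rank = 2
    print(is_basis_for_Rn([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 3))  # True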
46:45 --- Up to this point in the lecture, all I was working on was whether or not a set of vectors was a basis for R^n.  At this point, I turned to a proper subspace, S, of R^4 and sought a basis for the subspace S.  (“Proper subspace of R^4” means a subspace which is smaller than R^4.) 
52:25 --- corrected mistake from last class. 
55:25 --- In general, given k vectors in R^4, can they be a basis for R^4?  Discussed this for k<4, k=4, and k>4.  Make sure that you understand this argument and that it has nothing to do with R^4 --- if the vectors were in R^n you’d be looking at three cases k<n, k=n, and k>n.
Lecture 32: Subspaces Related to Linear Transformations (Nicholson Section 5.4) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 23:20
Description: Introduced subspaces that are related to a linear transformation L from R^n to R^m and subspaces that are relevant to an mxn matrix A. Null(L) is a subspace of R^n, Range(L) is a subspace of R^m, Null(A) = “Solution Space of Ax=0” is a subspace of R^n, Col(A) = “span of all columns of A” is a subspace of R^m, Row(A) = “span of all rows of A” is a subspace of R^n.
5:00 --- if [L] is the standard matrix for a linear transformation L, is there any connection between the two subspaces Null(L) and Range(L) and the three subspaces Null([L]), Col([L]), and Row([L])? Answer: Null(L) = Null([L]) and Range(L) = Col([L]). The previous textbook introduced “standard matrix” early on, which is why I refer to it in these lectures; Nicholson only introduces it in Chapter 9. So you may not know this language. Here’s what “standard matrix” means: Nicholson refers to “the matrix of a linear transformation” at the bottom of page 106. This is the “standard matrix”; he just doesn’t call it that until page 497 (he’s trying to avoid confusing you too early, I assume).
7:15 --- Proved Range(L) is a subspace of R^m.
19:45 --- Did example for L from R^3 to R^3 where L projects a vector onto the vector [1;2;3]. Found Range(L) and Null(L) using geometric arguments.
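Here is a minimal sympy sketch of that example (projection onto v = [1;2;3], as in the lecture; the formula P = v v^T / (v · v) for the standard matrix of this projection is the standard one):

    # Sketch: Range and Null of projection onto v = [1; 2; 3].
    from sympy import Matrix

    v = Matrix([1, 2, 3])
    P = (v * v.T) / v.dot(v)   # standard matrix of the projection onto v
    print(P.columnspace())     # one basis vector, a multiple of v: Range(L) = span{v}
    print(P.nullspace())       # two basis vectors: Null(L) is the plane x + 2y + 3z = 0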
Lecture 33: Introduction to Solution space, Null space, Solution set (Nicholson Section 5.4) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 48:37
Description: Lecture starts at 1:26. Given a matrix A, defined the “solution space of Ax=0” and, given a linear transformation L from R^n to R^m, defined Null(L).  
6:20 --- proved that Null(L) is a subspace of R^n.
16:00 --- example of L:R^3 → R^3 where L corresponds to projection onto a specific vector. Found Null(L) by geometric/intuitive arguments. Verified the intuition by representing the linear transformation as a matrix transformation and finding the corresponding solution space. Found a basis for Null(L). Found the dimension of Null(L).
31:45 --- example of L: R^3 → R^3 where L corresponds to projection onto a specific plane. Found Null(L) by geometric/intuitive arguments. Verified the intuition by representing the linear transformation as a matrix transformation and finding the corresponding solution space. Found a basis for Null(L). Found the dimension of Null(L).
41:45 --- Given an mxn matrix A, defined the solution set of the system Ax=b. Found solution set for a specific 2x4 matrix A. This is an important example because A has a column of zeros and students often get confused by such matrices! Wrote the solution set as a specific solution of Ax=b plus a linear combination of solutions to the homogeneous problem Ax=0.
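Here is a minimal sympy sketch in the same spirit (the 2x4 matrix with a column of zeros is an assumed example, not the one from the lecture):

    # Sketch: the solution set of Ax = b is a particular solution plus Null(A).
    from sympy import Matrix

    A = Matrix([[1, 0, 2, 1],
                [0, 0, 1, 1]])             # note the column of zeros
    b = Matrix([3, 1])
    sol, params = A.gauss_jordan_solve(b)  # general solution, with free parameters
    print(sol)
    print(A.nullspace())                   # basis for the solution space of Ax = 0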
Lecture 34: Introduction to range of a linear transformation, column space of a matrix (Nicholson Section 5.4) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 50:55
Description: L is a linear transformation from R^n to R^m and A is an mxn matrix. Started with a review of Null(L) and the solution space of Ax=0; these are subspaces of R^n.
2:00 --- introduced the range of a linear transformation and the column space of a matrix. These are subspaces of R^m.
5:30 --- Found Range(L) where L is the linear transformation corresponding to projection onto a specific vector. Once we found Range(L), we found a basis for it.
21:30 --- Found Range(L) where L is the linear transformation corresponding to projection onto a specific plane. Once we found Range(L), we found a basis for it.
37:55 --- Defined the column space of a matrix: Col(A).
39:10 --- given a specific matrix, A, found a basis for Col(A).
Lecture 35: Null(L), Range(L), the rank theorem, Row(A) (Nicholson Section 5.4) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 50:55
Description: Started with a review of the previous lecture: Null(L) and Range(L) for a pair of geometric transformations from R^3 to R^3. For both examples, dim(Null(L)) + dim(Range(L)) = 3. This will be true in general --- if L goes from R^n to R^m we’ll have dim(Null(L)) + dim(Range(L)) = n.
7:55 --- given a linear transformation, L: R^n → R^m, I presented an algorithm for finding a basis for Null(L).
15:45 --- given a linear transformation, L: R^n → R^m, I presented an algorithm for finding a basis for Range(L).
22:40 --- stated the rank theorem dim(Null(L))+dim(Range(L)) = n and explained why it’s true.
25:00 --- defined the “nullity” of a linear transformation/the “nullity” of a matrix.
28:40 --- Defined the row space of A: Row(A). It’s a subspace of R^n.
31:40 --- Stated the theorem that if A and B are row equivalent then Row(A)=Row(B). This means that when you do elementary row operations on a matrix, the row space of the final matrix is the same as the row space of the initial matrix is the same as the row space of each intermediary matrix.  
32:45 --- Applied the theorem to a classic exam question.
43:30 --- Discussed a classic mistake that confused/under-rested students make when asked for a basis for Col(A).
45:10 --- Did an example of finding a basis for the solution space S = {x such that Ax=0}.
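Here is a minimal sympy sketch of the standard recipes (the matrix is an arbitrary example); note how it avoids the classic mistake from 43:30 by taking the pivot columns of A itself, not the columns of the RREF:

    # Sketch: bases for Row(A), Col(A), and Null(A) from the RREF of A.
    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 3],
                [3, 6, 4]])
    R, pivots = A.rref()
    print([R.row(i) for i in range(len(pivots))])  # nonzero rows of RREF: basis for Row(A)
    print([A.col(j) for j in pivots])              # pivot columns of A itself: basis for Col(A)
    print(A.nullspace())                           # basis for the solution space of Ax = 0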
Lecture 36: Introduction to Sets of Orthogonal Vectors (Nicholson Section 5.3/Section 8.1) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 53:20
Description: Defined what it means for a set of vectors to be orthogonal. Demonstrated that if P is a matrix whose columns are mutually orthogonal then P^T P = diagonal matrix.
4:30 --- Started with a discussion of the change-of-basis material that wasn’t covered and of what students aren’t responsible for.
5:45 --- recalled the definition of an orthogonal set of vectors. Discussed specific examples.
9:28 --- Why do we care? A) If a set is orthogonal it’s easy to check if it’s linearly independent. B) If a set’s orthogonal it’s easy to figure out whether a specific vector is in the span of the set. C) If a matrix is symmetric then it’s diagonalizable and you can find an orthogonal basis of eigenvectors.
12:30 --- Proved that an orthogonal set of nonzero vectors is linearly independent.
14:45 --- Misspoke and said “orthogonal” instead of “linearly independent”.
21:20 --- If a vector v is in the span of a set of orthogonal vectors, here’s a fast way of finding the linear combination that equals v.
26:30 --- Stated a theorem which is “Given an orthogonal set of vectors, if v is in the span of the orthogonal set then you can immediately write down a linear combination that equals v.”
28:30 --- What goes wrong if v isn’t in the span of the orthogonal set of vectors.
31:45 --- What goes wrong if the set of vectors isn’t orthogonal. Gave an algebraic explanation and a geometric explanation of what goes wrong.
39:45 --- Given an orthogonal set, what’s a fast way of checking if a vector is in their span?
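Here is a minimal numpy sketch of these shortcuts, reusing the orthogonal set {[2;1;2],[-2;2;1],[1;2;-2]} from Lecture 29 (the test vector v is an arbitrary example):

    # Sketch: for an orthogonal set, U^T U is diagonal, and the coefficient of
    # u_i in the expansion of v is (v . u_i) / (u_i . u_i) --- no linear system needed.
    import numpy as np

    U = np.column_stack([[2, 1, 2], [-2, 2, 1], [1, 2, -2]]).astype(float)
    print(U.T @ U)                         # diagonal matrix: the columns are orthogonal

    v = np.array([3.0, 0.0, 6.0])
    c = (U.T @ v) / np.sum(U * U, axis=0)  # fast coefficients: c_i = (v . u_i)/(u_i . u_i)
    print(np.allclose(U @ c, v))           # True: v is in the span (here the span is R^3)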
45:45 --- Review of wonderful properties of orthogonal sets of vectors. What’s the cost of this “free lunch”?
48:15 --- Defined what it means for a vector to be orthogonal to a subspace S of R^n. Gave a geometric example of S and vectors that are orthogonal to S.
Lecture 37: More on diagonalization (Nicholson Section 5.5) | Link Linear Algebra Lecture Videos
Alternate Video Access via MyMedia | Video Duration: 48:40
Description: Started by reviewing the definition of what it means for a square matrix to be diagonalizable.
1:20 --- An nxn matrix A is diagonalizable if and only if you can find n linearly independent eigenvectors of A. Note: this lecture uses the language of linear independence, but Nicholson doesn’t get into that until Chapter 6. And so, in section 5.5 the book refers to “basic eigenvectors” and rather than asking that you have a full set of linearly independent eigenvectors, it asks that you have as many “basic eigenvectors” as the algebraic multiplicity of the eigenvalue. It’s the same thing.
5:30 --- Defined what it means for two square matrices to be “similar”. Reviewed a 3x3 example.
13:19 --- Stated theorem “Any set of eigenvectors corresponding to distinct eigenvalues is a linearly independent set”.
14:55 --- Compared the eigenvalues of A to the eigenvalues of the diagonal matrix. (They’re the same.) Compared the trace of A to the trace of the diagonal matrix. (They’re the same.) Compared the determinant of A to the determinant of the diagonal matrix. (They’re the same.)
18:40 --- returned to another prior 3x3 example.
25:45 --- Did a new 3x3 example. This one is super-important: it’s a “nearly diagonal” matrix and it’s not diagonalizable.
34:14 --- Stated theorem that if A and B are similar matrices then they have the same eigenvalues, same determinant, same trace, and same rank. Note: this material isn’t in this section of Nicholson’s book. I included it because it’s important and beautiful and the proofs require you to understand some important things. But feel free to skip directly to 44:15.
36:00 --- I proved that the determinants are the same.
39:30 --- I proved that the eigenvalues are the same.
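Here is a minimal numpy sketch of the theorem from 34:14 (a random similarity transform, purely illustrative):

    # Sketch: B = P^{-1} A P has the same eigenvalues, determinant, and trace as A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    P = rng.standard_normal((3, 3))  # a random P is almost surely invertible
    B = np.linalg.inv(P) @ A @ P
    print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                      np.sort_complex(np.linalg.eigvals(B))))  # same eigenvalues
    print(np.isclose(np.linalg.det(A), np.linalg.det(B)))      # same determinant
    print(np.isclose(np.trace(A), np.trace(B)))                # same trace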
44:15 --- Did another 3x3 example. It’s not diagonalizable. But it’s nearly diagonalizable --- this is the Jordan Canonical Form theorem. It’s not in the course but it’s super-important and you’ll likely use it before you graduate.