Instructor: Robert Ellis | Office: E1 Bldg, Rm 105C | Email:
Lectures: TR 11:25am-12:40pm | Perlstein Hall 109
Office Hours: MW walk-in, TR by appointment | Office phone: 567-5336
Appointments and emailed questions are welcome.
Textbook: Anton, Elementary Linear Algebra, 9th edition, Wiley
Test Dates
Quiz 1: Thursday, October 7 (quiz, key)
Exam 1 (through Sect. 4.1): Thursday, October 14 (see Blackboard -> Course Documents for key)
Quiz 2: Tuesday, November 16 (quiz, key)
Exam 2 (Chapter 4 and 5.1-5.5): Tuesday, November 23 (see Blackboard -> Course Documents for key)
Final Exam: Tuesday, December 7, 10:30am-12:30pm

Sample cover sheet for homework done in groups of 2 or 3
First day handout (pdf) (course contract & exam schedule)
Tools
Online practice: Calculus on the Web (click the "Linear Algebra" book icon)
Student companion site for the textbook
Linear Algebra Toolkit for practice, checking homework, etc.
Due Date | Required Reading & Suggested problems | Turn-In Assignment*
T 12/7 | Final Exam, 10:30am-12:30pm, Perlstein 109 | Chapter 7 practice for the Final Exam (material similar to this will appear on the final exam): Sect. 7.1: 1ab, 2ab, 3ab, 4ab, 5ab, 6ab, 7a, 8a, 9a, 10, 11, 12; Sect. 7.2: 1, 2, 8, 10, 11, 13, 16, 19
R 12/2 | Read: Sect. 7.1-7.2. Recommended practice in Calculus on the Web (click the buttons according to the path below and work problems online; warning: this list is not exhaustive, so study your notes/text/HWs too):
(a) 21.Linear Algebra -> 3.Eigenvalues -> 1.Eigenvalues and Eigenvectors -> Modules 1-3: Finding eigenvalues, Bases for eigenspaces, Eigenvectors in the plane |
Optional HW 13 (HW 13 will be graded for feedback, but the grade will not count; turn in any or all problems; pick it up in office hours 11:30am-1:30pm M 12/6): Sect. 6.1: 15a, 17, 19; Sect. 6.2: 6a, 13b, 15ab, 18c; Sect. 6.3: 8, 9, 17a, 18, 20, 24abe; Sect. 6.5: 6, 8; Sect. 6.6: 1
T 11/30 | Read: Sect. 6.3-6.5 and pp. 347-349. Recommended practice in Calculus on the Web (click the buttons according to the path below and work problems online; warning: this list is not exhaustive, so study your notes/text/HWs too):
(a) Inner Product Spaces: 21.Linear Algebra -> 4.Inner Products -> Sections 1-3: Inner Product and Length, Orthogonal Vectors, Orthogonal Spaces |
T 11/23 | Prepare: Exam 2. Recommended practice in Calculus on the Web (click the buttons according to the path below and work problems online; warning: this list is not exhaustive, so study your notes/text/HWs too):
(a) Linear combinations: 21.Linear Algebra > 2.Spaces and Transformations > 1.Linear Combinations
(b) Linear independence: 21.Linear Algebra > 2.Spaces and Transformations > 2.Linear Independence
(c) Matrix transformations: 21.Linear Algebra > 2.Spaces and Transformations > 3.Matrix Transformations
(d) Row, column, null spaces: 21.Linear Algebra > 2.Spaces and Transformations > 4.Subspaces |
Exam 2: Chapter 4, 5.1-5.5
R 11/18 | Read: Through Sect. 6.4 |
T 11/16 | Read: Through Sect. 6.2. Quiz 2: Sect. 5.1-5.5 | HW 12. Sect. 5.5: 10c; Sect. 5.6: 8d-g, 12, 15; Sect. 6.1: 2cd, 4, 6, 8b
R 11/11 | Read: Through Sect. 6.1 |
T 11/9 | Read: Through Sect. 5.6 | HW 11. Sect. 5.4: 14, 23; Sect. 5.5: 4, 6c, 12a, 15
R 11/4 | Read: Through Sect. 5.5 |
T 11/2 | | Warning: You must read and understand the examples in Section 5.3, as the lecture focus was very different. HW 10. Sect. 5.3: 4, 8, 12, 15; Sect. 5.4: 6, 10, 22
R 10/28 | | HW 9. Sect. 4.4: 8, 13a; Sect. 5.1: 2, 12, 20 (justify); Sect. 5.2: 2a-c, 7 (do 1 row reduction for a-d), 13, 17
T 10/26 | Read: Through p. 256 |
T 10/19 | Read: Through p. 243 | HW 8. Sect. 4.3: 8ab, 18b
R 10/14 | Prepare: Exam 1 | Exam 1 (through Sect. 4.1)
T 10/12 | Read: Through Sect. 5.2 | HW 7. Sect. 4.2: 8, 11, 14, 18a, 26, 28; Sect. 4.3: 14a, 16c
R 10/7 | Read: Through Sect. 4.4 | Quiz 1. Chapters 1 & 2, 11:25a-11:40a
T 10/5 | Read: Through Sect. 4.3. These problems are deferred until next time: Sect. 4.2: 4d, 6d, 8, 11, 14 | HW 6. Sect. 3.4: 4a, 10a, 20, 30; Sect. 3.5: 8, 16, 40a; Sect. 4.1: 16, 26
R 9/30 | Read: Through p. 186 of Sect. 4.2 |
T 9/28 | Read: Through Sect. 3.5 | HW 5. Sect. 2.3: 3, 9, 13, 15b (remember to show work), 16; Sect. 2.4: 17a; Sect. 3.1: 8, 20; Sect. 3.2: 7, 9a, 12; Sect. 3.3: 4a, 5a, 17a, 21
R 9/23 | Read: Sect. 2.4 through Sect. 3.2 (spend only a little time on 2.4) |
T 9/21 | Read: Through Sect. 2.3 | HW 4. Sect. 1.7: 9, 13, 14a, 22a, 24a; Sect. 2.1: 12, 22, 23, 30, 31ab; Sect. 2.2: 5, 13, 16 (just 5, not 4-7, but make it different from your previous work for 5)
R 9/16 | Read: Through Sect. 2.2 |
T 9/14 | Read: Through Sect. 2.1 | HW 3. Sect. 1.5: 5ab, 7a, 8d, 12, 17; Sect. 1.6: 2, 12ab, 20a, 21, 25
R 9/9 | Read: Through page 90 |
T 9/7 | Read: Through page 73 | HW 2. Sect. 1.3: 6d, 7bc, 11, 12b, 18a; Sect. 1.4: 6a, 8, 10b, 12, 21a, 23, 29ab
R 9/2 | Read: Through page 59 |
T 8/31 | Read: Through page 42 | HW 1. Sect. 1.1: 8
R 8/26 | Read: Through Section 1.3 |
*Warning: Students with non-sanctioned editions of the
textbook are responsible for working the problems as listed in the edition
officially used in this course. The 9th edition with applications seems to be the same, but beware. The 10th
edition has different problems, and who knows about the international editions.
Class Activities and extra resources
T 11/23 | Exam 2, Sect. 4.1-5.5
R 11/18 | 6.2: Cauchy-Schwarz inequality for inner product spaces; distance, angle (via the inner product), and the Pythagorean Theorem in inner product spaces; orthogonal complements of subspaces of an inner product space come in pairs; the nullspace and row space of a matrix are orthogonal complements with respect to the Euclidean inner product
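That last fact lends itself to a quick numerical spot-check. This is not one of the course tools; it is a small NumPy/SciPy sketch with an example matrix of my own choosing.

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: the nullspace and row space of A are orthogonal complements under
# the Euclidean inner product. A is an arbitrary example matrix.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

N = null_space(A)              # columns form an orthonormal basis of the nullspace of A
# Each row of A (these span the row space) is orthogonal to each nullspace basis
# vector, so every entry of A @ N should be numerically zero.
print(np.allclose(A @ N, 0))   # True
```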
T 11/16 | 6.1: Norm and distance defined from an inner product; the matrix inner product; an integral inner product; approximation of a continuous function f(x) from C[-pi,pi] by inner product projections onto the functions 1, sin(x), and cos(x); Thm. 6.1.1: properties of inner products (read, and think about dot products)
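As a rough supplement (not part of the course materials), here is a numerical sketch of that projection idea, approximating the integral inner product by a Riemann sum; the test function f(x) = x^2 is my own choice, not necessarily the one used in class.

```python
import numpy as np

# Integral inner product <f,g> = integral of f(x)g(x) over [-pi, pi],
# approximated by a Riemann sum on a fine grid.
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

def inner(f, g):
    return np.sum(f(x) * g(x)) * dx

f = lambda t: t**2                                     # function to approximate (my choice)
basis = [lambda t: np.ones_like(t), np.sin, np.cos]    # the functions 1, sin(x), cos(x)

# 1, sin(x), cos(x) are mutually orthogonal on [-pi, pi], so the projection
# coefficient for each basis function g is <f, g> / <g, g>.
coeffs = [inner(f, g) / inner(g, g) for g in basis]
print(coeffs)   # approximately [pi^2/3, 0, -4]
```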
R 11/11 | 5.6: (various theorems about the row/column/null spaces of A) Thm. 5.6.7: when Ax=b is consistent, the number of free parameters in the solution is the number of columns of A minus rank(A); Thm. 5.6.8: characterization of when Ax=0 has only the trivial solution; 6.1: definition of an inner product from four axioms; inner products from weighted dot products; average of test scores as an inner product
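Thm. 5.6.7 can be checked numerically; the sketch below uses NumPy with an example system of my own, not one from the text.

```python
import numpy as np

# Check of Thm. 5.6.7: for a consistent system Ax = b,
# #free parameters = #columns of A - rank(A).
A = np.array([[1., 2., 1., 0.],
              [0., 1., 1., 1.],
              [1., 3., 2., 1.]])     # third row = first + second, so rank(A) = 2
b = A @ np.array([1., 1., 1., 1.])   # b is built from A, so Ax = b is consistent

rank = np.linalg.matrix_rank(A)
free_parameters = A.shape[1] - rank
print(rank, free_parameters)         # 2 2
```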
T 11/9 | 5.5: Row-equivalent matrices have linearly independent sets of columns in the same column positions, and bases in the same column positions; for a matrix R in row-echelon form, the rows with leading 1s form a basis for the row space and the columns with leading 1s form a basis for the column space; 5.6: matrix rank and nullity; four key vector spaces: row space, column space, nullspace, and nullspace of the transpose; rank + nullity = #columns
R 11/4 | 5.4: Every basis for a finite-dimensional vector space has the same size; Plus/Minus Theorem: add a vector to a linearly independent set and keep independence, or remove a vector from a linearly dependent set and keep the same span; a spanning set of vectors can be reduced (if necessary) to a basis; a linearly independent set of vectors that does not span can be enlarged to a basis; a subspace has dimension at most that of the parent finite-dimensional vector space, with equal dimension iff the subspace equals the parent space; 5.5: definitions of the row space, column space, and nullspace of a matrix; Ax=b is consistent iff b is in the column space of A; when Ax=b is consistent, every solution x can be expressed as a particular solution plus a linear combination of basis vectors of the nullspace of A; elementary row operations change neither the nullspace nor the row space of a matrix A
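The consistency criterion (b lies in the column space of A) can be illustrated with a small rank computation; the matrix and right-hand sides below are my own examples, not course material.

```python
import numpy as np

# Ax = b is solvable exactly when appending b as an extra column does not
# increase the rank, i.e. b is a linear combination of the columns of A.
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])                 # column space is the line spanned by (1, 2, 3)
b_in  = np.array([[2.], [4.], [6.]])     # on that line: consistent
b_out = np.array([[1.], [0.], [0.]])     # off that line: inconsistent

def consistent(A, b):
    return np.linalg.matrix_rank(np.hstack([A, b])) == np.linalg.matrix_rank(A)

print(consistent(A, b_in), consistent(A, b_out))   # True False
```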
T 11/2 | 5.4: Transforming between coordinates relative to the standard basis and an alternate basis via matrix multiplication; finite-dimensional vector spaces and dimension; examples of bases for finite- and infinite-dimensional vector spaces; all bases for a finite-dimensional vector space have the same number of vectors; computing a basis for the solution set of a homogeneous system of linear equations
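A minimal sketch of the coordinate conversion, assuming an arbitrary alternate basis of R^2 of my own choosing:

```python
import numpy as np

# Alternate basis vectors (1,0) and (1,1) stored as the columns of P.
P = np.array([[1., 1.],
              [0., 1.]])

v = np.array([3., 2.])         # coordinates relative to the standard basis

c = np.linalg.solve(P, v)      # coordinates relative to the alternate basis: solve P c = v
print(c)                       # [1. 2.], since v = 1*(1,0) + 2*(1,1)
print(P @ c)                   # back to standard coordinates: [3. 2.]
```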
R 10/28 | 5.3: Geometric interpretation of linear dependence in R^2 and R^3; more than n vectors in R^n are linearly dependent; 5.4: plotting points in R^2 via the standard basis and an alternate (skewed) basis; definition of basis (spans the space, linearly independent); matrix invertibility criterion for a basis in R^n; interpretation of the linear transformation constructed using a basis of R^n; existence and uniqueness of the expression of every vector as a linear combination of the basis; converting coordinate vectors between bases in R^n via matrix multiplication
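The invertibility criterion for a basis can be spot-checked with a determinant; the three vectors below are an arbitrary example, not from the text.

```python
import numpy as np

# n vectors form a basis of R^n exactly when the matrix having them as columns
# is invertible, i.e. has nonzero determinant.
v1, v2, v3 = [1., 0., 1.], [0., 1., 1.], [1., 1., 0.]
M = np.column_stack([v1, v2, v3])

print(np.linalg.det(M))   # about -2 (nonzero), so {v1, v2, v3} is a basis of R^3
```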
T 10/26 | 5.2: Vectors that span P_n; vectors that may or may not span R^3, R^4; a condition for two sets of vectors to span the same space; 5.3: definition of linear dependence/independence of a set of vectors; use of Gaussian elimination to detect linear dependence/independence and to find a largest subset of linearly independent vectors that spans the same space
R 10/21 | 5.2: Subspaces of vector spaces; the subspace test; examples of subspaces; the set of solutions of a homogeneous linear system is a subspace; linear combinations of vectors; the span of a set of vectors is a subspace
T 10/19 | 5.1: Axiomatic definition of real vector spaces; important examples of vector spaces; some properties deduced from the vector space axioms
R 10/14 | Exam 1, Sect. 1.1-4.1
T 10/12 | 4.3: Linearity properties and their use in characterizing linear transformations; formula for the standard matrix of a linear transformation; projection onto a line is a linear transformation; eigenvalues and eigenvectors of a linear transformation derive from the associated standard matrix
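For the projection-onto-a-line example, here is a small sketch of its standard matrix, P = u u^T / (u . u); the direction vector u is my own choice.

```python
import numpy as np

# Standard matrix of orthogonal projection onto the line through the origin
# spanned by u.
u = np.array([[2.], [1.]])
P = (u @ u.T) / (u.T @ u)

print(P)                          # the 2x2 standard matrix of the projection
print(np.allclose(P @ u, u))      # True: projecting u onto its own line returns u
print(np.allclose(P @ P, P))      # True: projections are idempotent (P^2 = P)
```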
R 10/7 | Quiz 1, Chapt. 1-2; 4.2: composition of linear transformations; 4.3: one-to-one linear operators on R^n are invertible
T 10/5 | 4.1: Cauchy-Schwarz inequality used to prove the n-dimensional triangle inequality; n-dimensional version of the Pythagorean Theorem; dot product written as matrix multiplication; 4.2: coordinate functions R^n -> R; linear transformations; standard matrix of a linear transformation; two methods to compute the standard matrix; projections, reflections, and rotations as linear operators; inverse pairs of linear transformations
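Two of these facts, the dot product as a matrix product and the Cauchy-Schwarz inequality, are easy to spot-check numerically; the vectors below are my own examples.

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([4., -1., 2.])

# The dot product u . v equals the 1x1 matrix product u^T v.
as_matrix_product = (u.reshape(1, -1) @ v.reshape(-1, 1))[0, 0]
print(as_matrix_product, np.dot(u, v))                              # both 8.0

# Cauchy-Schwarz inequality: |u . v| <= ||u|| ||v||.
print(abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v))   # True
```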
R 9/30 | 3.5: Intersection of 3 planes via Gaussian elimination (Wolfram Demonstration); distance between a point and a plane, or between two planes; point-normal form, general form, and vector form of a plane; general form and vector form of a line; parametric equations for a line; 4.1: Euclidean n-space, generalized from 2- and 3-space; Cauchy-Schwarz inequality
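A worked instance of the point-plane distance formula; the plane and point are my own examples, not ones used in lecture.

```python
import numpy as np

# Distance from the point (x0, y0, z0) to the plane ax + by + cz + d = 0:
#   |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)
a, b, c, d = 2., -1., 2., -4.     # plane 2x - y + 2z - 4 = 0
x0, y0, z0 = 1., 1., 1.           # point (1, 1, 1)

dist = abs(a*x0 + b*y0 + c*z0 + d) / np.sqrt(a**2 + b**2 + c**2)
print(dist)                       # |2 - 1 + 2 - 4| / 3 = 1/3
```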
T 9/28 | 3.3: Vector projections; distance between a point and a line; 3.4: cross product in determinant and component form; area of a parallelogram from a cross product; scalar triple product and volume of a parallelepiped. Wolfram Demonstrations (require Mathematica or Mathematica Player): Vector projection, Cross product, Distance between a point and a line
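A short numerical sketch of the area and volume formulas from 3.4, with vectors of my own choosing:

```python
import numpy as np

u = np.array([1., 2., 0.])
v = np.array([0., 1., 3.])
w = np.array([2., 0., 1.])

# Area of the parallelogram determined by u and v is ||u x v||.
area = np.linalg.norm(np.cross(u, v))

# Volume of the parallelepiped determined by u, v, w is |u . (v x w)|,
# which equals the absolute determinant of the matrix with u, v, w as rows.
volume = abs(np.dot(u, np.cross(v, w)))
print(area)                                               # area of the parallelogram
print(volume, abs(np.linalg.det(np.array([u, v, w]))))    # both 13.0
```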
R 9/23 | 2.4: Combinatorial definition of the determinant, based on permutations and inversions; 3.1: introduction to vectors; 3.2: vector norms; 3.3: vector dot product in geometric and component form; normal vector to a line
T 9/21 | 2.3: Properties of the determinant, including det(AB) = det(A)det(B); A is invertible iff det(A) is not zero; intro to eigenvalues and eigenvectors
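A numerical spot-check of det(AB) = det(A)det(B) and the invertibility criterion, using random matrices of my own (not course material):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Multiplicativity of the determinant.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))   # True

# det(A) is nonzero here, so A has an inverse and A A^{-1} = I.
print(np.linalg.det(A) != 0 and np.allclose(A @ np.linalg.inv(A), np.eye(4)))  # True
```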
R 9/16 | 2.1: Determinants by cofactor expansion; matrix inverse via the adjoint; Cramer's Rule; determinants of triangular matrices; 2.2: determinants of elementary matrices; determinants by row reduction
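A minimal sketch of Cramer's Rule on a 2x2 system of my own choosing:

```python
import numpy as np

# Cramer's Rule for Ax = b: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by b.
A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                 # replace column i of A by b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)

print(x)                         # [0.8 1.4]
print(np.linalg.solve(A, b))     # same answer from direct solving
```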
T 9/14 | 1.7: Diagonal, triangular, and symmetric matrices and their properties; 2.1: 2x2 and 3x3 determinants
R 9/9 | 1.6: Number of solutions of Ax=b, including when A is invertible; easier detection of invertible matrices; extension of the equivalent properties of an invertible matrix in Thm. 1.6.4
T 9/7 | 1.4: Matrix polynomials; properties of the matrix transpose; 1.5: elementary matrices; equivalent properties of an invertible matrix; a method for computing A^{-1} using Gauss-Jordan elimination
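The Gauss-Jordan method for A^{-1} can be reproduced symbolically; the sketch below uses SymPy's rref on an invertible example matrix of my own.

```python
import sympy as sp

# Compute A^{-1} by Gauss-Jordan elimination: row-reduce [A | I] to [I | A^{-1}].
A = sp.Matrix([[2, 1],
               [5, 3]])
augmented = A.row_join(sp.eye(2))      # the matrix [A | I]

reduced, _ = augmented.rref()          # reduced row-echelon form
A_inv = reduced[:, 2:]                 # right half is A^{-1}
print(A_inv)                           # Matrix([[3, -1], [-5, 2]])
print(A * A_inv == sp.eye(2))          # True
```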
R 9/2 | Matrix trace; properties of matrix arithmetic (but neither commutativity of multiplication nor multiplicative cancellation); the zero matrix and identity matrix; definition and uniqueness of the matrix inverse when it exists; 2x2 matrix inverses; matrix powers; proofs, including by induction and by sequences of equations
T 8/31 | Homogeneous systems have infinitely many solutions when there are more variables than equations, and in other cases may or may not; various matrix definitions and operations, including addition, subtraction, dot product, and multiplication; encoding a system as Ax=b; matrix multiplication as linear combinations of columns, or of rows; transpose and trace
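The column-combination view of matrix multiplication is easy to verify on a small example (my own, not from class):

```python
import numpy as np

# Ax is the linear combination of the columns of A with the entries of x as weights.
A = np.array([[1., 2.],
              [3., 4.]])
x = np.array([5., 6.])

product      = A @ x
column_combo = x[0] * A[:, 0] + x[1] * A[:, 1]
print(product, column_combo)                 # both [17. 39.]
print(np.allclose(product, column_combo))    # True
```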
R 8/26 | Example with free and leading variables; variable order affects the solution parameterization; properties of row-echelon and reduced row-echelon form; Gaussian elimination and Gauss-Jordan elimination algorithms; example algorithm instance using the applet below; homogeneous systems are consistent and have at least the trivial solution
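A small sketch of Gauss-Jordan elimination and the leading/free variable distinction, on an example system of my own rather than the applet's:

```python
import sympy as sp

# Reduced row-echelon form of an augmented matrix [A | b].
M = sp.Matrix([[1, 2, -1, 3],
               [2, 4,  1, 9]])

reduced, pivot_cols = M.rref()
print(reduced)       # Matrix([[1, 2, 0, 4], [0, 0, 1, 1]])
print(pivot_cols)    # (0, 2): x1 and x3 are leading variables, x2 is free
```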
T 8/24 | Linear equations and systems of linear equations; consistent and inconsistent systems; three cases for the solution set; augmented matrix encoding; elementary row operations; introduction to row-echelon form, reduced row-echelon form, Gaussian elimination, and Gauss-Jordan elimination
page maintained by Robert Ellis / http://math.iit.edu/~rellis/