Item Barcode | Call Number | Material Type | Item Category |
---|---|---|---|
30000010265717 | QA184.2 G63 2010 | Open Access Book | Book |
Summary
Linear algebra forms the basis for much of modern mathematics: theoretical, applied, and computational. Finite-Dimensional Linear Algebra provides a solid foundation for the study of advanced mathematics and discusses applications of linear algebra to such diverse areas as combinatorics, differential equations, optimization, and approximation.
The author begins with an overview of the essential themes of the book: linear equations, best approximation, and diagonalization. He then takes students through an axiomatic development of vector spaces, linear operators, eigenvalues, norms, and inner products. In addition to discussing the special properties of symmetric matrices, he covers the Jordan canonical form, an important theoretical tool, and the singular value decomposition, a powerful tool for computation. The final chapters present introductions to numerical linear algebra and analysis in vector spaces, including a brief introduction to functional analysis (infinite-dimensional linear algebra).
Drawing on material from the author's own course, this textbook gives students a strong theoretical understanding of linear algebra. It offers many illustrations of how linear algebra is used throughout mathematics.
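The three opening themes named above (linear equations, best approximation, and diagonalization) can be illustrated with a minimal NumPy sketch. This example is not from the book; the matrices and vectors are made up for illustration.

```python
import numpy as np

# 1. Linear equations: solve Ax = b for a square, invertible A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# 2. Best approximation: least-squares solution of an overdetermined
#    system M c ≈ y (more equations than unknowns).
M = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])
c, *_ = np.linalg.lstsq(M, y, rcond=None)

# 3. Diagonalization: for symmetric A, A = P D P^T with P orthogonal.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)
assert np.allclose(P @ D @ P.T, A)
```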
Author Notes
Mark S. Gockenbach is a professor and chair of the Department of Mathematical Sciences at Michigan Technological University.
Table of Contents
Preface | p. xv |
About the author | p. xxi |
1 Some problems posed on vector spaces | p. 1 |
1.1 Linear equations | p. 1 |
1.1.1 Systems of linear algebraic equations | p. 1 |
1.1.2 Linear ordinary differential equations | p. 4 |
1.1.3 Some interpretation: The structure of the solution set to a linear equation | p. 5 |
1.1.4 Finite fields and applications in discrete mathematics | p. 7 |
1.2 Best approximation | p. 8 |
1.2.1 Overdetermined linear systems | p. 8 |
1.2.2 Best approximation by a polynomial | p. 11 |
1.3 Diagonalization | p. 13 |
1.4 Summary | p. 17 |
2 Fields and vector spaces | p. 19 |
2.1 Fields | p. 19 |
2.1.1 Definition and examples | p. 19 |
2.1.2 Basic properties of fields | p. 21 |
2.2 Vector spaces | p. 29 |
2.2.1 Examples of vector spaces | p. 31 |
2.3 Subspaces | p. 38 |
2.4 Linear combinations and spanning sets | p. 43 |
2.5 Linear independence | p. 50 |
2.6 Basis and dimension | p. 57 |
2.7 Properties of bases | p. 66 |
2.8 Polynomial interpolation and the Lagrange basis | p. 73 |
2.8.1 Secret sharing | p. 77 |
2.9 Continuous piecewise polynomial functions | p. 82 |
2.9.1 Continuous piecewise linear functions | p. 84 |
2.9.2 Continuous piecewise quadratic functions | p. 87 |
2.9.3 Error in polynomial interpolation | p. 90 |
3 Linear operators | p. 93 |
3.1 Linear operators | p. 93 |
3.1.1 Matrix operators | p. 95 |
3.2 More properties of linear operators | p. 101 |
3.2.1 Vector spaces of operators | p. 101 |
3.2.2 The matrix of a linear operator on Euclidean spaces | p. 101 |
3.2.3 Derivative and differential operators | p. 103 |
3.2.4 Representing spanning sets and bases using matrices | p. 103 |
3.2.5 The transpose of a matrix | p. 104 |
3.3 Isomorphic vector spaces | p. 107 |
3.3.1 Injective and surjective functions; inverses | p. 108 |
3.3.2 The matrix of a linear operator on general vector spaces | p. 111 |
3.4 Linear operator equations | p. 116 |
3.4.1 Homogeneous linear equations | p. 117 |
3.4.2 Inhomogeneous linear equations | p. 118 |
3.4.3 General solutions | p. 120 |
3.5 Existence and uniqueness of solutions | p. 124 |
3.5.1 The kernel of a linear operator and injectivity | p. 124 |
3.5.2 The rank of a linear operator and surjectivity | p. 126 |
3.5.3 Existence and uniqueness | p. 128 |
3.6 The fundamental theorem; inverse operators | p. 131 |
3.6.1 The inverse of a linear operator | p. 133 |
3.6.2 The inverse of a matrix | p. 134 |
3.7 Gaussian elimination | p. 142 |
3.7.1 Computing A⁻¹ | p. 148 |
3.7.2 Fields other than R | p. 149 |
3.8 Newton's method | p. 153 |
3.9 Linear ordinary differential equations | p. 158 |
3.9.1 The dimension of ker(L) | p. 158 |
3.9.2 Finding a basis for ker(L) | p. 161 |
3.9.2.1 The easy case: Distinct real roots | p. 162 |
3.9.2.2 The case of repeated real roots | p. 162 |
3.9.2.3 The case of complex roots | p. 163 |
3.9.3 The Wronskian test for linear independence | p. 163 |
3.9.4 The Vandermonde matrix | p. 166 |
3.10 Graph theory | p. 168 |
3.10.1 The incidence matrix of a graph | p. 168 |
3.10.2 Walks and matrix multiplication | p. 169 |
3.10.3 Graph isomorphisms | p. 171 |
3.11 Coding theory | p. 175 |
3.11.1 Generator matrices; encoding and decoding | p. 177 |
3.11.2 Error correction | p. 179 |
3.11.3 The probability of errors | p. 181 |
3.12 Linear programming | p. 183 |
3.12.1 Specification of linear programming problems | p. 184 |
3.12.2 Basic theory | p. 186 |
3.12.3 The simplex method | p. 191 |
3.12.3.1 Finding an initial BFS | p. 196 |
3.12.3.2 Unbounded LPs | p. 199 |
3.12.3.3 Degeneracy and cycling | p. 200 |
3.12.4 Variations on the standard LPs | p. 202 |
4 Determinants and eigenvalues | p. 205 |
4.1 The determinant function | p. 206 |
4.1.1 Permutations | p. 210 |
4.1.2 The complete expansion of the determinant | p. 212 |
4.2 Further properties of the determinant function | p. 217 |
4.3 Practical computation of det(A) | p. 221 |
4.3.1 A recursive formula for det(A) | p. 224 |
4.3.2 Cramer's rule | p. 226 |
4.4 A note about polynomials | p. 230 |
4.5 Eigenvalues and the characteristic polynomial | p. 232 |
4.5.1 Eigenvalues of a real matrix | p. 235 |
4.6 Diagonalization | p. 241 |
4.7 Eigenvalues of linear operators | p. 251 |
4.8 Systems of linear ODEs | p. 257 |
4.8.1 Complex eigenvalues | p. 259 |
4.8.2 Solving the initial value problem | p. 260 |
4.8.3 Linear systems in matrix form | p. 261 |
4.9 Integer programming | p. 265 |
4.9.1 Totally unimodular matrices | p. 265 |
4.9.2 Transportation problems | p. 268 |
5 The Jordan canonical form | p. 273 |
5.1 Invariant subspaces | p. 273 |
5.1.1 Direct sums | p. 276 |
5.1.2 Eigenspaces and generalized eigenspaces | p. 277 |
5.2 Generalized eigenspaces | p. 283 |
5.2.1 Appendix: Beyond generalized eigenspaces | p. 290 |
5.2.2 The Cayley-Hamilton theorem | p. 294 |
5.3 Nilpotent operators | p. 300 |
5.4 The Jordan canonical form of a matrix | p. 309 |
5.5 The matrix exponential | p. 318 |
5.5.1 Definition of the matrix exponential | p. 319 |
5.5.2 Computing the matrix exponential | p. 319 |
5.6 Graphs and eigenvalues | p. 325 |
5.6.1 Cospectral graphs | p. 325 |
5.6.2 Bipartite graphs and eigenvalues | p. 326 |
5.6.3 Regular graphs | p. 328 |
5.6.4 Distinct eigenvalues of a graph | p. 330 |
6 Orthogonality and best approximation | p. 333 |
6.1 Norms and inner products | p. 333 |
6.1.1 Examples of norms and inner products | p. 337 |
6.2 The adjoint of a linear operator | p. 342 |
6.2.1 The adjoint of a linear operator | p. 343 |
6.3 Orthogonal vectors and bases | p. 350 |
6.3.1 Orthogonal bases | p. 351 |
6.4 The projection theorem | p. 357 |
6.4.1 Overdetermined linear systems | p. 361 |
6.5 The Gram-Schmidt process | p. 368 |
6.5.1 Least-squares polynomial approximation | p. 371 |
6.6 Orthogonal complements | p. 377 |
6.6.1 The fundamental theorem of linear algebra revisited | p. 381 |
6.7 Complex inner product spaces | p. 386 |
6.7.1 Examples of complex inner product spaces | p. 388 |
6.7.2 Orthogonality in complex inner product spaces | p. 389 |
6.7.3 The adjoint of a linear operator | p. 390 |
6.8 More on polynomial approximation | p. 394 |
6.8.1 A weighted L² inner product | p. 397 |
6.9 The energy inner product and Galerkin's method | p. 401 |
6.9.1 Piecewise polynomials | p. 404 |
6.9.2 Continuous piecewise quadratic functions | p. 407 |
6.9.3 Higher degree finite element spaces | p. 409 |
6.10 Gaussian quadrature | p. 411 |
6.10.1 The trapezoidal rule and Simpson's rule | p. 412 |
6.10.2 Gaussian quadrature | p. 413 |
6.10.3 Orthogonal polynomials | p. 415 |
6.10.4 Weighted Gaussian quadrature | p. 419 |
6.11 The Helmholtz decomposition | p. 420 |
6.11.1 The divergence theorem | p. 421 |
6.11.2 Stokes's theorem | p. 422 |
6.11.3 The Helmholtz decomposition | p. 423 |
7 The spectral theory of symmetric matrices | p. 425 |
7.1 The spectral theorem for symmetric matrices | p. 425 |
7.1.1 Symmetric positive definite matrices | p. 428 |
7.1.2 Hermitian matrices | p. 430 |
7.2 The spectral theorem for normal matrices | p. 434 |
7.2.1 Outer products and the spectral decomposition | p. 437 |
7.3 Optimization and the Hessian matrix | p. 440 |
7.3.1 Background | p. 440 |
7.3.2 Optimization of quadratic functions | p. 441 |
7.3.3 Taylor's theorem | p. 443 |
7.3.4 First- and second-order optimality conditions | p. 444 |
7.3.5 Local quadratic approximations | p. 446 |
7.4 Lagrange multipliers | p. 448 |
7.5 Spectral methods for differential equations | p. 453 |
7.5.1 Eigenpairs of the differential operator | p. 454 |
7.5.2 Solving the BVP using eigenfunctions | p. 456 |
8 The singular value decomposition | p. 463 |
8.1 Introduction to the SVD | p. 463 |
8.1.1 The SVD for singular matrices | p. 467 |
8.2 The SVD for general matrices | p. 470 |
8.3 Solving least-squares problems using the SVD | p. 476 |
8.4 The SVD and linear inverse problems | p. 483 |
8.4.1 Resolving inverse problems through regularization | p. 489 |
8.4.2 The truncated SVD method | p. 489 |
8.4.3 Tikhonov regularization | p. 490 |
8.5 The Smith normal form of a matrix | p. 494 |
8.5.1 An algorithm to compute the Smith normal form | p. 495 |
8.5.2 Applications of the Smith normal form | p. 501 |
9 Matrix factorizations and numerical linear algebra | p. 507 |
9.1 The LU factorization | p. 507 |
9.1.1 Operation counts | p. 512 |
9.1.2 Solving Ax = b using the LU factorization | p. 514 |
9.2 Partial pivoting | p. 516 |
9.2.1 Finite-precision arithmetic | p. 517 |
9.2.2 Examples of errors in Gaussian elimination | p. 518 |
9.2.3 Partial pivoting | p. 519 |
9.2.4 The PLU factorization | p. 522 |
9.3 The Cholesky factorization | p. 524 |
9.4 Matrix norms | p. 530 |
9.4.1 Examples of induced matrix norms | p. 534 |
9.5 The sensitivity of linear systems to errors | p. 537 |
9.6 Numerical stability | p. 542 |
9.6.1 Backward error analysis | p. 543 |
9.6.2 Analysis of Gaussian elimination with partial pivoting | p. 545 |
9.7 The sensitivity of the least-squares problem | p. 548 |
9.8 The QR factorization | p. 554 |
9.8.1 Solving the least-squares problem | p. 556 |
9.8.2 Computing the QR factorization | p. 556 |
9.8.3 Backward stability of the Householder QR algorithm | p. 561 |
9.8.4 Solving a linear system | p. 562 |
9.9 Eigenvalues and simultaneous iteration | p. 564 |
9.9.1 Reduction to triangular form | p. 564 |
9.9.2 The power method | p. 566 |
9.9.3 Simultaneous iteration | p. 567 |
9.10 The QR algorithm | p. 572 |
9.10.1 A practical QR algorithm | p. 573 |
9.10.1.1 Reduction to upper Hessenberg form | p. 574 |
9.10.1.2 The explicitly shifted QR algorithm | p. 576 |
9.10.1.3 The implicitly shifted QR algorithm | p. 579 |
10 Analysis in vector spaces | p. 581 |
10.1 Analysis in Rⁿ | p. 581 |
10.1.1 Convergence and continuity in Rⁿ | p. 582 |
10.1.2 Compactness | p. 584 |
10.1.3 Completeness of Rⁿ | p. 586 |
10.1.4 Equivalence of norms on Rⁿ | p. 586 |
10.2 Infinite-dimensional vector spaces | p. 590 |
10.2.1 Banach and Hilbert spaces | p. 592 |
10.3 Functional analysis | p. 596 |
10.3.1 The dual of a Hilbert space | p. 600 |
10.4 Weak convergence | p. 605 |
10.4.1 Convexity | p. 611 |
A The Euclidean algorithm | p. 617 |
A.0.1 Computing multiplicative inverses in Zₚ | p. 618 |
A.0.2 Related results | p. 619 |
B Permutations | p. 621 |
C Polynomials | p. 625 |
C.1 Rings of Polynomials | p. 625 |
C.2 Polynomial functions | p. 630 |
C.2.1 Factorization of polynomials | p. 632 |
D Summary of analysis in R | p. 633 |
D.0.1 Convergence | p. 633 |
D.0.2 Completeness of R | p. 634 |
D.0.3 Open and closed sets | p. 635 |
D.0.4 Continuous functions | p. 636 |
Bibliography | p. 637 |
Index | p. 641 |