Matrices
Start with the simplest version: this lesson is about matrices. If you can explain the core idea to a friend using everyday language, examples, and one clear reason why it matters, you have moved from memorising to understanding.
Matrices are the ultimate organizational tool in mathematics—think of them as tables of numbers arranged in rows and columns that can be manipulated systematically to solve complex problems. A matrix is a rectangular array of numbers enclosed in brackets, like a spreadsheet of data. Just as a single number transforms another through multiplication, matrices transform entire collections of numbers simultaneously. This chapter explores matrix operations, structure, and how they elegantly solve systems of linear equations that would be tedious to solve by hand.
What is a Matrix?
A matrix A is an m × n array of numbers arranged in m rows and n columns. Each number is called an element or entry. An element in row i and column j is denoted aᵢⱼ.
Example: The 2 × 3 matrix
[ 1 2 3 ]
[ 4 5 6 ]
has a₁₁ = 1, a₁₂ = 2, a₂₃ = 6, and so on.
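The same 2 × 3 matrix can be sketched in NumPy. One caveat worth flagging: NumPy indexes from 0, while the textbook notation aᵢⱼ indexes from 1, so the entry aᵢⱼ lives at `A[i-1, j-1]`.

```python
import numpy as np

# The 2 x 3 matrix from the example above
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)   # (2, 3): m = 2 rows, n = 3 columns

# NumPy indexes from 0, so the textbook entry a_ij is A[i-1, j-1]
print(A[0, 0])   # a11 = 1
print(A[0, 1])   # a12 = 2
print(A[1, 2])   # a23 = 6
```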
Real-world context: Imagine a store inventory system where rows represent products (wheat, rice, flour) and columns represent months (Jan, Feb, Mar). The matrix stores all quantities in an organized format. Operations on this matrix let you track inventory changes, calculate totals, or compare months.
Building from Class 11
While Class 11 introduced basic algebraic structures and systems of equations, matrices formalize these systems. Instead of writing:
- a₁x + b₁y = c₁
- a₂x + b₂y = c₂
We write the compact matrix form:
[ a₁ b₁ ] [ x ]   [ c₁ ]
[ a₂ b₂ ] [ y ] = [ c₂ ]
This is cleaner, enables computer algorithms, and extends naturally to hundreds of unknowns.
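A quick sketch makes the compactness concrete. The coefficients below are made up for illustration (the system 2x + 3y = 8, x − y = −1, whose solution is x = 1, y = 2): multiplying the coefficient matrix by the solution vector reproduces both right-hand sides at once.

```python
import numpy as np

# Hypothetical system: 2x + 3y = 8 and x - y = -1
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
c = np.array([8.0, -1.0])

# For the known solution x = 1, y = 2, the single product A @ [x, y]
# evaluates both left-hand sides simultaneously
xy = np.array([1.0, 2.0])
print(A @ xy)                     # [ 8. -1.]
print(np.allclose(A @ xy, c))     # True: the matrix form encodes the system
```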
Types of Matrices
- Row matrix: 1 × n (single row)
- Column matrix: m × 1 (single column)
- Square matrix: n × n (equal rows and columns; enables determinants from chapter-04-determinants)
- Diagonal matrix: Non-zero elements only on the main diagonal
- Identity matrix I: Diagonal matrix with all 1s on diagonal; I·A = A (multiplicative identity, like multiplying by 1)
- Null matrix: All zero elements
- Transpose Aᵀ: Swap rows and columns; (Aᵀ)ᵀ = A
- Symmetric matrix: A = Aᵀ (reads the same after flipping)
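Several of these definitions can be checked directly in a short NumPy sketch (the matrices below are arbitrary examples chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Identity matrix: acts like multiplying by 1
I = np.eye(2)
print(np.array_equal(I @ A, A))   # True: I·A = A

# Transpose: swap rows and columns; (A^T)^T = A
print(A.T)                        # [[1 3], [2 4]]
print(np.array_equal(A.T.T, A))   # True

# Symmetric matrix: equal to its own transpose
S = np.array([[1, 7],
              [7, 5]])
print(np.array_equal(S, S.T))     # True: S is symmetric
print(np.array_equal(A, A.T))     # False: A is not
```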
Matrix Operations
Addition and Subtraction
Only possible for matrices of the same dimensions. Add element-wise: (A + B)ᵢⱼ = aᵢⱼ + bᵢⱼ
Scalar Multiplication
Multiply every element by a constant: (kA)ᵢⱼ = k·aᵢⱼ
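Both element-wise rules can be seen in one short sketch (example matrices chosen arbitrarily):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

print(A + B)   # element-wise sum:        [[11 22], [33 44]]
print(B - A)   # element-wise difference: [[ 9 18], [27 36]]
print(3 * A)   # scalar multiple, k = 3:  [[ 3  6], [ 9 12]]
```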
Matrix Multiplication
For A (m × n) and B (n × p), the product AB is m × p. The element (AB)ᵢⱼ equals the dot product of row i of A with column j of B:
- (AB)ᵢⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + ... + aᵢₙbₙⱼ
- Important: AB ≠ BA in general (multiplication is not commutative)
- This non-commutativity reflects real processes: putting on socks then shoes differs from putting on shoes then socks
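Non-commutativity is easy to witness with a small example. Here B is the 2 × 2 swap matrix: multiplying by it on the right swaps the columns of A, while multiplying on the left swaps the rows, so the two products differ.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])   # swap matrix

print(A @ B)   # [[2 1], [4 3]] — columns of A swapped
print(B @ A)   # [[3 4], [1 2]] — rows of A swapped
print(np.array_equal(A @ B, B @ A))   # False: AB != BA in general
```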
Inverse of a Matrix
For a square matrix A, the inverse A⁻¹ satisfies AA⁻¹ = A⁻¹A = I. Not all matrices have inverses (only non-singular ones, connected to chapter-04-determinants). Finding A⁻¹ is crucial for solving Ax = b by computing x = A⁻¹b.
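The defining property AA⁻¹ = A⁻¹A = I can be verified numerically. The example matrix is chosen so that det A = 2·3 − 1·5 = 1, guaranteeing it is non-singular and hence invertible.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # det A = 2*3 - 1*5 = 1, so A is invertible

A_inv = np.linalg.inv(A)
print(A_inv)                 # [[ 3. -1.], [-5.  2.]]

# Check the defining property: A A^-1 = A^-1 A = I
print(np.allclose(A @ A_inv, np.eye(2)))   # True
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```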
Solving Systems Using Matrices
The system Ax = b can be solved in three steps:
- Compute the determinant (det A ≠ 0 guarantees a unique solution)
- Find A⁻¹ using chapter-04-determinants methods
- Multiply both sides by A⁻¹: x = A⁻¹b
This extends to systems with 10, 100, or 1000 unknowns—impractical by hand, but elegant with matrix methods.
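The three steps above can be sketched for a hypothetical 3 × 3 system (coefficients invented so that the exact solution is x = 1, y = 2, z = 3). In practice `np.linalg.solve` is preferred over explicitly forming A⁻¹, as it is faster and numerically more stable, but both follow the same logic.

```python
import numpy as np

# Hypothetical system Ax = b with known solution [1, 2, 3]
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
b = np.array([1.0, 13.0, 4.0])

# Step 1: det A != 0 means a unique solution exists
print(np.isclose(np.linalg.det(A), 0.0))   # False: system is solvable

# Steps 2-3: x = A^-1 b; np.linalg.solve performs this more stably
x = np.linalg.solve(A, b)
print(x)                                   # approximately [1. 2. 3.]
print(np.allclose(A @ x, b))               # True: solution checks out
```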
Connections to Other Topics
- chapter-04-determinants: Determines if matrices are invertible and solves systems
- chapter-10-vector-algebra: Vectors are column matrices; linear transformations use matrix multiplication
- chapter-06-application-of-derivatives: Jacobian matrices in multivariable calculus
- Class 11 Linear Systems: More efficient representation and solution
Socratic Questions
- Why does matrix multiplication require the number of columns in the first matrix to equal the number of rows in the second? What does this constraint represent geometrically?
- You know that in regular algebra, if ab = ac and a ≠ 0, then b = c. Why can't we apply the same reasoning to matrices? That is, why doesn't AB = AC and A ≠ 0 imply B = C?
- What is the geometric meaning of multiplying a vector by a matrix A? How does the transpose Aᵀ relate to the original transformation?
- For what types of matrices does AB = BA (commutativity)? Can you think of practical scenarios where two operations are commutative versus non-commutative?
- A matrix A is invertible if and only if det(A) ≠ 0. Why would having determinant zero make a matrix non-invertible? What does this reveal about the system Ax = b when det(A) = 0?
