Types of Matrices
- Square Matrix: A matrix with the **same number of rows and columns** (n × n).
  Example: A = [[1, 2], [3, 4]] (2×2 matrix)
- Rectangular Matrix: A matrix with **different numbers of rows and columns**.
  Example: B = [[1, 2, 3], [4, 5, 6]] (2×3 matrix)
- Diagonal Matrix: A square matrix where **all non-diagonal elements are zero**. Diagonal elements may or may not be zero.
  Example: D = [[5, 0], [0, 3]]
- Identity Matrix (I): A diagonal matrix where all diagonal elements are **1**, and all others are **0**. It acts as the multiplicative identity in matrix algebra.
  Example: I = [[1, 0], [0, 1]]
- Zero Matrix (O): A matrix where **all elements are zero**. It acts as the additive identity.
  Example: O = [[0, 0], [0, 0]]
- Symmetric Matrix: A square matrix that is **equal to its transpose**. That is, A = Aᵗ.
  Example: S = [[1, 2], [2, 4]] (Sᵗ = S)
- Skew-Symmetric Matrix: A square matrix that satisfies **A = -Aᵗ**. All diagonal elements must be zero.
  Example: K = [[0, -3], [3, 0]] (Kᵗ = -K)
- Hermitian Matrix: A complex square matrix that is equal to its **conjugate transpose**: A = Aᴴ (where Aᴴ = A̅ᵗ).
  Example: H = [[2, 2 - i], [2 + i, 3]]
- Skew-Hermitian Matrix: A complex square matrix that satisfies **A = -Aᴴ**.
  Example: SH = [[0, 1 + i], [-1 + i, 0]]
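These defining conditions are easy to verify numerically. A minimal NumPy check using the symmetric, skew-symmetric, and Hermitian examples above (`1j` is Python's imaginary unit):

```python
import numpy as np

S = np.array([[1, 2], [2, 4]])            # symmetric example
K = np.array([[0, -3], [3, 0]])           # skew-symmetric example
H = np.array([[2, 2 - 1j], [2 + 1j, 3]])  # Hermitian example

print(np.array_equal(S, S.T))         # True: S equals its transpose
print(np.array_equal(K, -K.T))        # True: K equals minus its transpose
print(np.array_equal(H, H.conj().T))  # True: H equals its conjugate transpose
```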
Matrix Properties
- Trace:
  The trace of a matrix is the sum of the entries on its main diagonal (from top-left to bottom-right). It is defined only for square matrices.
  Formula: Trace(A) = a₁₁ + a₂₂ + a₃₃ + ... + aₙₙ
  Example:
  A = [ [1, 2], [3, 4] ]
  Main diagonal = 1 and 4 → Trace(A) = 1 + 4 = 5
  💡 Tip: The trace is useful in physics and statistics, for example in quantum mechanics and matrix equations.
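The worked example can be checked with NumPy's built-in trace function:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
print(np.trace(A))  # sum of the main diagonal: 1 + 4 = 5
```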
- Determinant:
  The determinant is a single number that captures key properties of a square matrix, such as whether it is invertible.
  Key points:
  - If det(A) ≠ 0 → the matrix is invertible (non-singular)
  - If det(A) = 0 → the matrix is not invertible (singular)
  - Used in solving linear systems, area/volume calculations, and linear transformations
  Formula for a 2×2 matrix:
  A = [ [a, b], [c, d] ] → det(A) = (a × d) − (b × c)
  Example:
  A = [ [1, 2], [3, 4] ] → det(A) = (1 × 4) − (2 × 3) = 4 − 6 = −2
  💡 Tip: A determinant of 0 means the matrix squashes space, like collapsing a 3D shape onto a flat 2D surface.
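The 2×2 formula can be cross-checked numerically; note that `np.linalg.det` returns a float, so a tiny rounding error is expected:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
d = np.linalg.det(A)  # computed via factorization, so expect tiny float error
print(round(d, 6))    # (1*4) - (2*3) = -2.0
```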
Matrix Operations
- Addition / Subtraction:
  Add or subtract corresponding elements of two matrices. This is defined only if they have the same dimensions.
  Example:
  A = [ [1, 2], [3, 4] ]
  B = [ [5, 6], [7, 8] ]
  A + B = [ [1+5, 2+6], [3+7, 4+8] ] = [ [6, 8], [10, 12] ]
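Element-wise addition in NumPy mirrors the definition directly:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A + B)  # corresponding entries are added: [[6, 8], [10, 12]]
```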
- Multiplication:
  Multiply matrices using the row-by-column rule: each entry of the product is the dot product of a row of the first matrix with a column of the second. This is defined only when the number of columns of the first matrix equals the number of rows of the second.
  Example:
  A = [ [1, 2], [3, 4] ]
  B = [ [5, 6], [7, 8] ]
  A × B = [ [1×5 + 2×7, 1×6 + 2×8], [3×5 + 4×7, 3×6 + 4×8] ] = [ [19, 22], [43, 50] ]
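In NumPy the `@` operator performs this row-by-column multiplication (`*` would instead multiply element-wise):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A @ B)  # [[19, 22], [43, 50]], matching the worked example
```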
- Scalar Multiplication:
  Multiply every element of a matrix by a single number (scalar).
  Example:
  k = 3, A = [ [2, 4], [6, 8] ]
  k × A = [ [6, 12], [18, 24] ]
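In NumPy, multiplying an array by a scalar broadcasts over every entry:

```python
import numpy as np

A = np.array([[2, 4], [6, 8]])
print(3 * A)  # every entry is tripled: [[6, 12], [18, 24]]
```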
- Transpose (Aᵗ):
  Swap the rows and columns of the matrix: the (i, j) entry of Aᵗ is the (j, i) entry of A.
  Example:
  A = [ [1, 2], [3, 4] ]
  Aᵗ = [ [1, 3], [2, 4] ]
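NumPy exposes the transpose as the `.T` attribute:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
print(A.T)  # rows become columns: [[1, 3], [2, 4]]
```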
- Conjugate Transpose (Aᴴ):
  Also called the Hermitian transpose: transpose the matrix and take the complex conjugate of every entry (i becomes −i).
  Example:
  A = [ [1 + 2i, 3 − i], [4i, 5] ]
  Aᴴ = [ [1 − 2i, −4i], [3 + i, 5] ]
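In NumPy this is a transpose combined with `conj()` (the two steps can be applied in either order):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j], [4j, 5]])
print(A.conj().T)  # transpose, then negate each imaginary part
```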
- Inverse (A⁻¹):
  The matrix that "undoes" the original: A × A⁻¹ = A⁻¹ × A = I. It exists only if the determinant is non-zero.
  Formula:
  For A = [ [a, b], [c, d] ], A⁻¹ = (1 / det(A)) × [ [d, −b], [−c, a] ]
  Example:
  A = [ [4, 7], [2, 6] ] → det(A) = (4×6) − (7×2) = 10
  A⁻¹ = (1/10) × [ [6, −7], [−2, 4] ] = [ [0.6, −0.7], [−0.2, 0.4] ]
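The computed inverse can be verified by multiplying back to the identity, using `np.allclose` because of floating-point rounding:

```python
import numpy as np

A = np.array([[4, 7], [2, 6]])
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, [[0.6, -0.7], [-0.2, 0.4]]))  # True
print(np.allclose(A @ A_inv, np.eye(2)))               # True: A "undone" back to I
```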
Special Matrices
- Projection Matrix:
  A matrix P is called a projection matrix if it satisfies P × P = P (i.e., P² = P).
  This means that applying the projection twice is the same as applying it once. It is used to "project" a vector onto a line or plane.
  Example use: In linear regression, a projection matrix produces the fitted values by projecting the observed data onto the space spanned by the predictors.
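A concrete projection matrix can be built from any direction vector via P = v vᵗ / (vᵗ v); the vector v = [1, 1] below is an illustrative choice, not one fixed by the text:

```python
import numpy as np

v = np.array([[1.0], [1.0]])
P = (v @ v.T) / (v.T @ v)     # projection onto the line spanned by v
print(np.allclose(P @ P, P))  # True: projecting twice equals projecting once
```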
- Orthogonal Matrix:
  A matrix is orthogonal if its transpose equals its inverse. That is:
  Aᵗ × A = A × Aᵗ = I (the identity matrix)
  Equivalently, its rows and columns are orthonormal: mutually perpendicular, each with length 1.
  Why it's useful: Orthogonal matrices preserve lengths and angles, which makes them valuable in computer graphics, 3D transformations, and image processing.
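A 2D rotation matrix is a standard example of an orthogonal matrix; the angle below is an arbitrary illustrative choice:

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Qᵗ acts as Q⁻¹
```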
- Idempotent Matrix:
  A matrix A is called idempotent if A × A = A (i.e., A² = A).
  This means that multiplying the matrix by itself doesn't change it; every projection matrix is idempotent by definition.
  Idempotent matrices often appear in statistics, especially in projection operations and variance calculations.
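Idempotence is easy to verify directly; the matrix below is an illustrative choice:

```python
import numpy as np

A = np.array([[1, 1], [0, 0]])
print(np.array_equal(A @ A, A))  # True: A squared equals A
```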
- Partitioned Matrix:
  Also known as a block matrix: a matrix divided into smaller rectangular sections called "blocks".
  These blocks can be treated as single elements, which simplifies operations on large matrices.
  Example use: Partitioned matrices are often used to solve complex systems by dividing them into manageable parts.
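NumPy's `np.block` assembles a matrix from blocks, which makes the block structure explicit; the 2×2 blocks below are illustrative:

```python
import numpy as np

A = np.eye(2)          # top-left block
B = np.zeros((2, 2))   # top-right block
C = np.ones((2, 2))    # bottom-left block
D = 2 * np.eye(2)      # bottom-right block
M = np.block([[A, B],
              [C, D]])  # 4x4 matrix built from 2x2 blocks
print(M.shape)  # (4, 4)
```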
Applications:
- Orthogonal Matrices: Common in 3D transformations, computer graphics, and robotics where maintaining shape and size is crucial.
- Projection Matrices: Used in linear regression, machine learning, optimization, and signal processing to project data into a subspace.
- Idempotent Matrices: Appear in statistical models and data fitting, especially in least squares solutions.
- Partitioned Matrices: Helpful in matrix algebra, simplifying large matrix problems in control systems and engineering.