Mathematics XII · 13 Chapters · 230+ formulas
Mathematics Formula Reference
Every formula from all 13 chapters — organised by topic for quick revision.
1. Relations and Functions
2. Inverse Trigonometric Functions
3. Matrices
4. Determinants
5. Continuity and Differentiability
6. Application of Derivatives
7. Integrals
8. Application of Integrals
9. Differential Equations
10. Vector Algebra
11. Three Dimensional Geometry
12. Linear Programming
13. Probability
⚡ Critical Formulas — Memorise These First
Sum/Difference Rule
$(u \pm v)' = u' \pm v'$
Product Rule (Leibniz Rule)
$(uv)' = u'v + uv'$
Rate of change using Chain Rule
$\frac{dy}{dx} = \frac{dy/dt}{dx/dt}, \; \frac{dx}{dt} \neq 0$
Rate of change of area of circle
$\frac{dA}{dr} = 2\pi r$
Power Rule
$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \; n \neq -1$
Cosine integral
$\int \cos x\,dx = \sin x + C$
Variable Separable Form
$\frac{dy}{dx} = h(y) \cdot g(x)$
Separated Form
$\frac{1}{h(y)}\,dy = g(x)\,dx$
01
Relations and Functions
3 marks
Empty Relation
$R = \phi \subset A \times A$
No element is related to any element
Universal Relation
$R = A \times A$
Every element is related to every element
One-one test
$f(x_1) = f(x_2) \Rightarrow x_1 = x_2, \; \forall \; x_1, x_2 \in X$
Equivalent to: x1 ≠ x2 ⇒ f(x1) ≠ f(x2)
Onto condition
$\forall \; y \in Y, \; \exists \; x \in X \text{ such that } f(x) = y$
Equivalently, f is onto if and only if Range of f = Y (codomain)
Composition of functions
$g \circ f(x) = g(f(x)), \; \forall \; x \in A$
If f: A → B and g: B → C, then gof: A → C
Inverse function condition
$g \circ f = I_X \text{ and } f \circ g = I_Y$
f is invertible ⟺ f is one-one and onto (bijective)
Inverse verification
$f^{-1}(y) = x \iff f(x) = y$
f⁻¹ o f = I_X and f o f⁻¹ = I_Y
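The inverse-function conditions above can be spot-checked numerically. A minimal sketch, assuming the illustrative choice f(x) = 2x + 3 with inverse g(y) = (y − 3)/2 (these functions are examples, not from the chapter):

```python
# Check g o f = I_X and f o g = I_Y for the sample pair
# f(x) = 2x + 3 and g(y) = (y - 3) / 2.

def f(x):
    return 2 * x + 3

def g(y):
    return (y - 3) / 2

for x in [-2, 0, 1.5, 7]:
    assert g(f(x)) == x      # g o f = I_X
    assert f(g(x)) == x      # f o g = I_Y
```

Because f is one-one and onto from R to R, both compositions return the input unchanged, which is exactly the invertibility criterion.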
02
Inverse Trigonometric Functions
3 marks
Domain and Range of sin⁻¹
$\sin^{-1} : [-1, 1] \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$
Principal value branch
Domain and Range of cos⁻¹
$\cos^{-1} : [-1, 1] \to [0, \pi]$
Principal value branch
Domain and Range of cosec⁻¹
$\csc^{-1} : \mathbb{R} - (-1, 1) \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] - \{0\}$
Principal value branch. Domain is |x| ≥ 1, i.e., x ≤ −1 or x ≥ 1
Domain and Range of sec⁻¹
$\sec^{-1} : \mathbb{R} - (-1, 1) \to [0, \pi] - \left\{\frac{\pi}{2}\right\}$
Principal value branch. Domain is |x| ≥ 1, i.e., x ≤ −1 or x ≥ 1
Domain and Range of tan⁻¹
$\tan^{-1} : \mathbb{R} \to \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$
Principal value branch
Domain and Range of cot⁻¹
$\cot^{-1} : \mathbb{R} \to (0, \pi)$
Principal value branch
Sine inverse-forward composition
$\sin(\sin^{-1} x) = x, \; x \in [-1, 1]$
Composition of function with its inverse
Sine forward-inverse composition
$\sin^{-1}(\sin x) = x, \; x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$
Composition of inverse with function
Cancellation property (sin)
$\sin(\sin^{-1} x) = x, \; x \in [-1,1] \text{ and } \sin^{-1}(\sin x) = x, \; x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$
Similar results hold for other trigonometric functions for suitable values of domain
Double angle formula for sin⁻¹
$\sin^{-1}(2x\sqrt{1 - x^2}) = 2\sin^{-1} x, \; -\frac{1}{\sqrt{2}} \leq x \leq \frac{1}{\sqrt{2}}$
Derived by substituting x = sin θ
Double angle formula for cos⁻¹
$\sin^{-1}(2x\sqrt{1 - x^2}) = 2\cos^{-1} x, \; \frac{1}{\sqrt{2}} \leq x \leq 1$
Derived by substituting x = cos θ
Simplification of cot⁻¹(1/√(x² − 1))
$\cot^{-1}\!\left(\frac{1}{\sqrt{x^2 - 1}}\right) = \sec^{-1} x, \; x > 1$
Derived by substituting x = sec θ
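The principal-branch restriction matters in practice: sin⁻¹(sin x) equals x only inside [−π/2, π/2]. A quick numerical check with Python's `math` module (sample points chosen for illustration):

```python
import math

# Inside the principal branch [-pi/2, pi/2], asin undoes sin exactly.
x_inside = 0.7
assert math.isclose(math.asin(math.sin(x_inside)), x_inside)

# Outside the branch, asin returns the principal value pi - x, not x.
x_outside = 2.5
assert not math.isclose(math.asin(math.sin(x_outside)), x_outside)
assert math.isclose(math.asin(math.sin(x_outside)), math.pi - x_outside)

# Double-angle identity: sin^-1(2x sqrt(1 - x^2)) = 2 sin^-1 x for |x| <= 1/sqrt(2)
x = 0.5
lhs = math.asin(2 * x * math.sqrt(1 - x * x))
assert math.isclose(lhs, 2 * math.asin(x))
```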
03
Matrices
5 marks
General m × n matrix
$A = [a_{ij}]_{m \times n}, \; 1 \leq i \leq m, \; 1 \leq j \leq n$
The i-th row consists of elements aᵢ₁, aᵢ₂, ..., aᵢₙ and the j-th column consists of elements a₁ⱼ, a₂ⱼ, ..., aₘⱼ
Matrix addition
$A + B = [a_{ij} + b_{ij}]_{m \times n}$
Both matrices must be of the same order
Scalar multiplication
$kA = [k \cdot a_{ij}]_{m \times n}$
The (i,j)-th element of kA is k · aᵢⱼ
Matrix multiplication element
$c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \cdots + a_{in}b_{nk} = \sum_{j=1}^{n} a_{ij} b_{jk}$
Number of columns of A must equal number of rows of B
Commutative law of addition
$A + B = B + A$
For matrices of the same order
Associative law of addition
$(A + B) + C = A + (B + C)$
For matrices of the same order
Additive identity
$A + O = O + A = A$
O is the zero matrix of the same order as A
Additive inverse
$A + (-A) = (-A) + A = O$
−A = [−aᵢⱼ], the matrix of the same order with every entry of A negated
Scalar distributive over matrix addition
$k(A + B) = kA + kB$
A, B are matrices of same order, k is a scalar
Scalar sum distributive
$(k + l)A = kA + lA$
k and l are scalars
Associative law of multiplication
$(AB)C = A(BC)$
Whenever both sides of the equality are defined
Distributive law (left)
$A(B + C) = AB + AC$
Whenever both sides are defined
Distributive law (right)
$(A + B)C = AC + BC$
Whenever both sides are defined
Multiplicative identity
$IA = AI = A$
I is the identity matrix of appropriate order
Transpose of transpose
$(A^{T})^{T} = A$
Taking transpose twice gives back the original matrix
Transpose of scalar multiple
$(kA)^{T} = kA^{T}$
Where k is any constant
Transpose of sum
$(A + B)^{T} = A^{T} + B^{T}$
For matrices A and B of suitable orders
Transpose of product
$(AB)^{T} = B^{T}A^{T}$
The order reverses when taking the transpose of a product
Symmetric part of a matrix
$\frac{1}{2}(A + A') \text{ is symmetric}$
For any square matrix A with real number entries
Skew symmetric part of a matrix
$\frac{1}{2}(A - A') \text{ is skew-symmetric}$
For any square matrix A with real number entries
Decomposition of a square matrix
$A = \frac{1}{2}(A + A') + \frac{1}{2}(A - A')$
Any square matrix can be expressed as the sum of a symmetric and a skew symmetric matrix
Inverse of a product
$(AB)^{-1} = B^{-1} A^{-1}$
If A and B are invertible matrices of the same order
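The symmetric/skew-symmetric decomposition can be verified directly. A pure-Python sketch (the sample matrix is arbitrary, chosen for illustration):

```python
# Verify A = 1/2(A + A') + 1/2(A - A') for a sample 3x3 matrix.

def transpose(M):
    return [list(row) for row in zip(*M)]

def add(M, N, sign=1):
    return [[m + sign * n for m, n in zip(r1, r2)] for r1, r2 in zip(M, N)]

def scale(k, M):
    return [[k * x for x in row] for row in M]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
P = scale(0.5, add(A, transpose(A)))        # symmetric part
Q = scale(0.5, add(A, transpose(A), -1))    # skew-symmetric part

assert P == transpose(P)                    # P' = P
assert Q == scale(-1, transpose(Q))         # Q' = -Q
assert add(P, Q) == A                       # the parts sum back to A
```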
04
Determinants
5 marks
Area of a triangle
$\Delta = \frac{1}{2} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$
Area of the triangle with vertices (x₁, y₁), (x₂, y₂), (x₃, y₃); take the absolute value of the determinant, since area is always positive
Collinearity condition
$\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = 0$
Three points are collinear exactly when the triangle they form has zero area
Minor and cofactor
$M_{ij} = \text{minor of } a_{ij}, \quad A_{ij} = (-1)^{i+j} \cdot M_{ij}$
The minor of an element of a determinant of order n (n ≥ 2) is a determinant of order n − 1
Expansion along a row
$\Delta = a_{i1}A_{i1} + a_{i2}A_{i2} + a_{i3}A_{i3}$
Sum of the elements of any row multiplied by their own cofactors
Expansion along a column
$\Delta = a_{1j}A_{1j} + a_{2j}A_{2j} + a_{3j}A_{3j}$
Sum of the elements of any column multiplied by their own cofactors
Alien cofactor expansion
$a_{i1}A_{j1} + a_{i2}A_{j2} + a_{i3}A_{j3} = 0, \; i \neq j$
Elements of one row multiplied by the cofactors of a different row give zero
Adjoint of a 3×3 matrix
$\text{adj } A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}^{T} = \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix}$
Transpose of the matrix of cofactors of A = [aᵢⱼ], where Aᵢⱼ is the cofactor of aᵢⱼ
Adjoint of a 2×2 matrix
$\text{adj}\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$
Interchange the diagonal elements and change the sign of the off-diagonal elements
Fundamental adjoint identity
$A(\text{adj } A) = (\text{adj } A)A = |A| \cdot I$
Determinant of the adjoint
$|\text{adj}(A)| = |A|^{n-1}$
For a square matrix A of order n
Inverse via adjoint
$A^{-1} = \frac{1}{|A|} \cdot \text{adj}(A), \; |A| \neq 0$
A is invertible (non-singular) exactly when |A| ≠ 0
Determinant of a product
$|AB| = |A| \cdot |B|$
Definition of inverse
$AB = BA = I \Rightarrow B = A^{-1}$
If AB = BA = I, then B is called the inverse of A
Inverse relations
$A^{-1} = B, \; B^{-1} = A, \; (A^{-1})^{-1} = A$
Matrix form of a linear system
$AX = B, \; A = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}, \; X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \; B = \begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}$
For the system a₁x + b₁y + c₁z = d₁, a₂x + b₂y + c₂z = d₂, a₃x + b₃y + c₃z = d₃
Unique solution
$|A| \neq 0 \Rightarrow X = A^{-1}B$
The system is consistent with a unique solution
Inconsistent case
$|A| = 0 \text{ and } (\text{adj } A)B \neq O$
The system is inconsistent (no solution)
Boundary case
$|A| = 0 \text{ and } (\text{adj } A)B = O$
The system may be consistent (infinitely many solutions) or inconsistent
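The adjoint identities can be checked with exact rational arithmetic. A sketch for a 2×2 example (the matrix entries are arbitrary sample values):

```python
from fractions import Fraction as F

# Verify A (adj A) = |A| I and A^-1 = adj(A) / |A| for a sample 2x2 matrix.
A = [[F(4), F(7)], [F(2), F(6)]]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # |A| = 10
adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]      # swap diagonal, negate off-diagonal

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
assert matmul(A, adjA) == [[detA * e for e in row] for row in I]   # A(adj A) = |A| I

Ainv = [[e / detA for e in row] for row in adjA]
assert matmul(A, Ainv) == I                                        # A A^-1 = I
```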
05
Continuity and Differentiability
8 marks
Sum/Difference Rule
$(u \pm v)' = u' \pm v'$
Product Rule (Leibniz Rule)
$(uv)' = u'v + uv'$
Derivative of a product of two functions
Quotient Rule
$\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}, \; v \neq 0$
Chain Rule (two functions)
$\frac{df}{dx} = \frac{dv}{dt} \cdot \frac{dt}{dx}$
Chain Rule (three functions)
$\frac{df}{dx} = \frac{dw}{ds} \cdot \frac{ds}{dt} \cdot \frac{dt}{dx}$
Derivative of sin^(-1) x
$\frac{d}{dx}(\sin^{-1} x) = \frac{1}{\sqrt{1 - x^2}}$
Derivative of cos^(-1) x
$\frac{d}{dx}(\cos^{-1} x) = \frac{-1}{\sqrt{1 - x^2}}$
Derivative of tan^(-1) x
$\frac{d}{dx}(\tan^{-1} x) = \frac{1}{1 + x^2}$
Derivative of e^x
$\frac{d}{dx}(e^x) = e^x$
Derivative of log x (natural log)
$\frac{d}{dx}(\log x) = \frac{1}{x}$
Derivative of a^x
$\frac{d}{dx}(a^x) = a^x \log a$
Change of base formula
$\log_a p = \frac{\log_b p}{\log_b a}$
Log of product
$\log_b(pq) = \log_b p + \log_b q$
Log of power
$\log_b(p^n) = n \log_b p$
Log of quotient
$\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y$
Logarithmic differentiation formula
$\frac{dy}{dx} = y\left[v(x) \cdot \frac{u'(x)}{u(x)} + v'(x) \cdot \log u(x)\right]$
For y = [u(x)]^{v(x)}; take logarithms of both sides and differentiate
Parametric differentiation
$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{g'(t)}{f'(t)}, \; f'(t) \neq 0$
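The derivative formulas in this chapter are easy to spot-check against a central finite difference. A sketch (the sample points and step size are illustrative):

```python
import math

# Central-difference approximation of a derivative.
def numderiv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 0.4
assert math.isclose(numderiv(math.atan, x), 1 / (1 + x * x), rel_tol=1e-6)
assert math.isclose(numderiv(math.asin, x), 1 / math.sqrt(1 - x * x), rel_tol=1e-6)

# Logarithmic differentiation for y = x^x (u = v = x): y' = y (1 + log x)
y = lambda t: t ** t
assert math.isclose(numderiv(y, 1.5), y(1.5) * (1 + math.log(1.5)), rel_tol=1e-6)
```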
06
Application of Derivatives
8 marks
Rate of change using Chain Rule
$\frac{dy}{dx} = \frac{dy/dt}{dx/dt}, \; \frac{dx}{dt} \neq 0$
Rate of change of area of circle
$\frac{dA}{dr} = 2\pi r$
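A typical related-rates use of these two formulas: if the radius of a circle grows at a known rate, the chain rule gives the rate of growth of its area. A sketch with illustrative numbers:

```python
import math

# If dr/dt = 0.5 (say, cm/s) when r = 3, then by the chain rule
# dA/dt = (dA/dr)(dr/dt) = 2*pi*r * dr/dt.
r, dr_dt = 3.0, 0.5
dA_dt = 2 * math.pi * r * dr_dt
assert math.isclose(dA_dt, 3 * math.pi)

# Cross-check against a finite difference of A(t) = pi (r + dr/dt * t)^2 at t = 0.
h = 1e-6
A = lambda t: math.pi * (r + dr_dt * t) ** 2
assert math.isclose((A(h) - A(-h)) / (2 * h), dA_dt, rel_tol=1e-6)
```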
07
Integrals
8 marks
Power Rule
$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \; n \neq -1$
In particular, ∫ dx = x + C
Cosine integral
$\int \cos x\,dx = \sin x + C$
Sine integral
$\int \sin x\,dx = -\cos x + C$
Secant squared integral
$\int \sec^2 x\,dx = \tan x + C$
Cosecant squared integral
$\int \csc^2 x\,dx = -\cot x + C$
Secant-tangent integral
$\int \sec x \tan x\,dx = \sec x + C$
Cosecant-cotangent integral
$\int \csc x \cot x\,dx = -\csc x + C$
Inverse sine integral
$\int \frac{dx}{\sqrt{1 - x^2}} = \sin^{-1} x + C$
Negative inverse cosine integral
$\int \frac{dx}{\sqrt{1 - x^2}} = -\cos^{-1} x + C$
Inverse tangent integral
$\int \frac{dx}{1 + x^2} = \tan^{-1} x + C$
Exponential integral
$\int e^x\,dx = e^x + C$
Logarithmic integral
$\int \frac{1}{x}\,dx = \log |x| + C$
General exponential integral
$\int a^x\,dx = \frac{a^x}{\log a} + C$
Standard integral: x² − a²
$\int \frac{dx}{x^2 - a^2} = \frac{1}{2a} \log \left|\frac{x-a}{x+a}\right| + C$
Standard integral: a² − x²
$\int \frac{dx}{a^2 - x^2} = \frac{1}{2a} \log \left|\frac{a+x}{a-x}\right| + C$
Standard integral: x² + a²
$\int \frac{dx}{x^2 + a^2} = \frac{1}{a} \tan^{-1}\!\left(\frac{x}{a}\right) + C$
Standard integral: √(x² − a²)
$\int \frac{dx}{\sqrt{x^2 - a^2}} = \log \left|x + \sqrt{x^2 - a^2}\right| + C$
Standard integral: √(a² − x²)
$\int \frac{dx}{\sqrt{a^2 - x^2}} = \sin^{-1}\!\left(\frac{x}{a}\right) + C$
Standard integral: √(x² + a²)
$\int \frac{dx}{\sqrt{x^2 + a^2}} = \log \left|x + \sqrt{x^2 + a^2}\right| + C$
Distinct linear factors
$\frac{px+q}{(x-a)(x-b)} = \frac{A}{x-a} + \frac{B}{x-b}, \; a \neq b$
Two distinct linear factors in the denominator
Repeated linear factor
$\frac{px+q}{(x-a)^2} = \frac{A}{x-a} + \frac{B}{(x-a)^2}$
Same linear factor repeated twice
Three distinct linear factors
$\frac{px^2+qx+r}{(x-a)(x-b)(x-c)} = \frac{A}{x-a} + \frac{B}{x-b} + \frac{C}{x-c}$
Three distinct linear factors in the denominator
Repeated and distinct linear factors
$\frac{px^2+qx+r}{(x-a)^2(x-b)} = \frac{A}{x-a} + \frac{B}{(x-a)^2} + \frac{C}{x-b}$
One repeated and one distinct linear factor
Linear and irreducible quadratic factors
$\frac{px^2+qx+r}{(x-a)(x^2+bx+c)} = \frac{A}{x-a} + \frac{Bx+C}{x^2+bx+c}$
Where x² + bx + c cannot be factorised further
Integration by Parts formula
$\int f(x)g(x)\,dx = f(x)\!\int g(x)\,dx - \int \left[f'(x)\!\int g(x)\,dx\right]dx$
Special exponential formula
$\int e^x [f(x) + f'(x)]\,dx = e^x f(x) + C$
Property P0
$\int_a^b f(x)\,dx = \int_a^b f(t)\,dt$
The variable of integration is a dummy variable.
Property P1
$\int_a^b f(x)\,dx = -\int_b^a f(x)\,dx$
Interchanging limits changes the sign.
Property P2
$\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$
Splitting the interval at an intermediate point c.
Property P3
$\int_a^b f(x)\,dx = \int_a^b f(a + b - x)\,dx$
Substitution x → a + b - x.
Property P4
$\int_0^a f(x)\,dx = \int_0^a f(a - x)\,dx$
Particular case of P3 with lower limit 0.
Property P5
$\int_0^{2a} f(x)\,dx = \int_0^a f(x)\,dx + \int_0^a f(2a - x)\,dx$
Splitting [0, 2a] using substitution.
Property P6
$\int_0^{2a} f(x)\,dx = \begin{cases} 2\int_0^a f(x)\,dx & \text{if } f(2a-x) = f(x) \\ 0 & \text{if } f(2a-x) = -f(x) \end{cases}$
Useful for symmetric/antisymmetric functions about x = a.
Property P7
$\int_{-a}^{a} f(x)\,dx = \begin{cases} 2\int_0^a f(x)\,dx & \text{if } f(-x) = f(x) \\ 0 & \text{if } f(-x) = -f(x) \end{cases}$
Even and odd function properties for symmetric intervals.
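Property P7 is easy to confirm numerically with a crude quadrature rule. A sketch using a midpoint sum (the test functions x² and x³ are illustrative):

```python
import math

# Midpoint-rule approximation of a definite integral.
def integral(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Even function: the integral over [-a, a] is twice the integral over [0, a].
even = lambda x: x * x
assert math.isclose(integral(even, -2, 2), 2 * integral(even, 0, 2), rel_tol=1e-4)

# Odd function: the integral over [-a, a] vanishes.
odd = lambda x: x ** 3
assert abs(integral(odd, -2, 2)) < 1e-6
```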
08
Application of Integrals
5 marks
Area using vertical strips
$A = \int_a^b y\,dx = \int_a^b f(x)\,dx$
Area bounded by curve y = f(x), x-axis, and lines x = a, x = b
Area using horizontal strips
$A = \int_c^d x\,dy = \int_c^d g(y)\,dy$
Area bounded by curve x = g(y), y-axis, and lines y = c, y = d
Area when curve is below x-axis
$A = \left|\int_a^b f(x)\,dx\right|$
If f(x) < 0 from x = a to x = b, the area is the absolute value of the integral.
Area when curve crosses x-axis
$A = |A_{1}| + A_{2}$
When part of the curve is above and part below x-axis, take absolute value of negative area and add to positive area.
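The crossing-the-axis rule is illustrated well by y = sin x on [0, 2π]: the plain integral is 0, but the bounded area is |A₁| + A₂ = 2 + 2 = 4. A numerical sketch (midpoint-rule quadrature, illustrative only):

```python
import math

def integral(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Signed integral over the full interval cancels to zero...
assert abs(integral(math.sin, 0, 2 * math.pi)) < 1e-6

# ...but splitting at x = pi and taking absolute values gives the true area.
area = abs(integral(math.sin, 0, math.pi)) + abs(integral(math.sin, math.pi, 2 * math.pi))
assert math.isclose(area, 4.0, rel_tol=1e-4)
```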
09
Differential Equations
5 marks
Variable Separable Form
$\frac{dy}{dx} = h(y) \cdot g(x)$
Standard form for variable separable equations
Separated Form
$\frac{1}{h(y)}\,dy = g(x)\,dx$
After separating the variables
General Solution
$\int \frac{1}{h(y)}\,dy = \int g(x)\,dx + C$
Integrate both sides; the solution has the form H(y) = G(x) + C, where H and G are antiderivatives of 1/h(y) and g(x)
Substitution for dy/dx form
$y = vx, \; \frac{dy}{dx} = v + x\frac{dv}{dx}$
Used when dy/dx = g(y/x)
Substitution for dx/dy form
$x = vy, \; \frac{dx}{dy} = v + y\frac{dv}{dy}$
Used when dx/dy = h(x/y)
Reduced form
$x\frac{dv}{dx} = g(v) - v, \; \frac{dv}{g(v) - v} = \frac{dx}{x}$
After substitution, separate the variables in v and x
General Solution
$\int \frac{dv}{g(v) - v} = \int \frac{1}{x}\,dx + C$
Integrate and replace v by y/x to get the solution
Standard Form (Type 1)
$\frac{dy}{dx} + Py = Q$
P, Q are constants or functions of x only
Integrating Factor (Type 1)
$\text{I.F.} = e^{\int P\,dx}$
Integrating factor for dy/dx + Py = Q
General Solution (Type 1)
$y \cdot (\text{I.F.}) = \int (Q \times \text{I.F.})\,dx + C$
$y \cdot e^{\int P\,dx} = \int \left(Q \cdot e^{\int P\,dx}\right) dx + C$
Standard Form (Type 2)
$\frac{dx}{dy} + P_1 x = Q_1$
P₁, Q₁ are constants or functions of y only
Integrating Factor (Type 2)
$\text{I.F.} = e^{\int P_1\,dy}$
Integrating factor for dx/dy + P₁x = Q₁
General Solution (Type 2)
$x \cdot (\text{I.F.}) = \int (Q_1 \times \text{I.F.})\,dy + C$
$x \cdot e^{\int P_1\,dy} = \int \left(Q_1 \cdot e^{\int P_1\,dy}\right) dy + C$
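A worked Type 1 instance: for dy/dx + y = x (so P = 1, Q = x), the integrating factor is e^x, and y·e^x = ∫x e^x dx + C gives y = x − 1 + C e^{−x}. A sketch verifying this solution numerically (the sample value of C and the test points are arbitrary):

```python
import math

# General solution of dy/dx + y = x obtained via the integrating factor e^x.
C = 2.0
y = lambda x: x - 1 + C * math.exp(-x)

# Check dy/dx + y = x at a few points using a central difference.
h = 1e-6
for x in [0.0, 0.5, 1.3]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert math.isclose(dydx + y(x), x, abs_tol=1e-6)
```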
10
Vector Algebra
5 marks
Magnitude of position vector
$|\vec{OP}| = \sqrt{x^2 + y^2 + z^2}$
Direction cosines
$\cos\alpha = \frac{x}{r}, \; \cos\beta = \frac{y}{r}, \; \cos\gamma = \frac{z}{r}$
Direction cosine identity
$l^{2} + m^{2} + n^{2} = 1$
Triangle law
$\vec{AC} = \vec{AB} + \vec{BC}$
Vector difference
$\vec{a} - \vec{b} = \vec{AB} + \vec{BC'} \text{ where } \vec{BC'} = -\vec{BC}$
Sides of triangle sum to zero
$\vec{AB} + \vec{BC} + \vec{CA} = \vec{AA} = \vec{0}$
Scalar multiplication magnitude
$|\lambda \vec{a}| = |\lambda| \cdot |\vec{a}|$
Negative of a vector
$\vec{a} + (-\vec{a}) = (-\vec{a}) + \vec{a} = \vec{0}$
Unit vector
$\hat{a} = \frac{1}{|\vec{a}|} \cdot \vec{a}, \; \vec{a} \neq \vec{0}$
For any scalar k
$k \cdot \vec{0} = \vec{0}$
Position vector in component form
$\vec{OP} = x\hat{i} + y\hat{j} + z\hat{k}$
Magnitude from components
$|\vec{r}| = |x\hat{i} + y\hat{j} + z\hat{k}| = \sqrt{x^2 + y^2 + z^2}$
Sum of vectors in component form
$\vec{a} + \vec{b} = (a_1+b_1)\hat{i} + (a_2+b_2)\hat{j} + (a_3+b_3)\hat{k}$
Difference of vectors in component form
$\vec{a} - \vec{b} = (a_1-b_1)\hat{i} + (a_2-b_2)\hat{j} + (a_3-b_3)\hat{k}$
Equality of vectors
$\vec{a} = \vec{b} \iff a_1 = b_1, \; a_2 = b_2, \; a_3 = b_3$
Scalar multiplication in component form
$\lambda\vec{a} = (\lambda a_1)\hat{i} + (\lambda a_2)\hat{j} + (\lambda a_3)\hat{k}$
Distributive law 1
$k\vec{a} + m\vec{a} = (k + m)\vec{a}$
Distributive law 2
$k(m\vec{a}) = (km)\vec{a}$
Distributive law 3
$k(\vec{a} + \vec{b}) = k\vec{a} + k\vec{b}$
Collinearity condition
$\vec{b} = \lambda\vec{a} \iff \frac{b_1}{a_1} = \frac{b_2}{a_2} = \frac{b_3}{a_3} = \lambda$
The ratio form assumes the components of a are nonzero
Vector joining two points
$\vec{P_1P_2} = (x_2-x_1)\hat{i} + (y_2-y_1)\hat{j} + (z_2-z_1)\hat{k}$
Distance between two points
$|\vec{P_1P_2}| = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}$
Section formula (internal)
$\vec{OR} = \frac{m\vec{b} + n\vec{a}}{m + n}$
Section formula (external)
$\vec{OR} = \frac{m\vec{b} - n\vec{a}}{m - n}$
Midpoint formula
$\vec{OR} = \frac{\vec{a} + \vec{b}}{2}$
Scalar (dot) product definition
$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}|\cos\theta$
Dot product of perpendicular vectors
$\vec{a} \cdot \vec{b} = 0 \iff \vec{a} \perp \vec{b}$
Dot product when theta = 0
$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}|; \; \vec{a} \cdot \vec{a} = |\vec{a}|^2$
Dot product when theta = pi
$\vec{a} \cdot \vec{b} = -|\vec{a}||\vec{b}|$
Dot products of unit vectors
$\hat{i} \cdot \hat{i} = \hat{j} \cdot \hat{j} = \hat{k} \cdot \hat{k} = 1; \; \hat{i} \cdot \hat{j} = \hat{j} \cdot \hat{k} = \hat{k} \cdot \hat{i} = 0$
Angle between vectors using dot product
$\cos\theta = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}, \; \theta = \cos^{-1}\!\left(\frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}\right)$
Commutative property of dot product
$\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}$
Distributive property of dot product
$\vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}$
Scalar factor in dot product
$(\lambda\vec{a}) \cdot \vec{b} = \lambda(\vec{a} \cdot \vec{b}) = \vec{a} \cdot (\lambda\vec{b})$
Dot product in component form
$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3$
Projection of a on b
$\text{proj}_{\vec{b}}\vec{a} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}$
Projection of a on b (vector form)
$\vec{a} \cdot \frac{\vec{b}}{|\vec{b}|} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}$
Direction cosines from dot product
$\cos\alpha = \frac{a_1}{|\vec{a}|}, \; \cos\beta = \frac{a_2}{|\vec{a}|}, \; \cos\gamma = \frac{a_3}{|\vec{a}|}$
Unit vector in terms of direction cosines
$\hat{a} = \cos\alpha\,\hat{i} + \cos\beta\,\hat{j} + \cos\gamma\,\hat{k}$
Vector (cross) product definition
$\vec{a} \times \vec{b} = |\vec{a}||\vec{b}|\sin\theta \; \hat{n}$
Cross product of parallel/collinear vectors
$\vec{a} \times \vec{b} = \vec{0} \iff \vec{a} \parallel \vec{b}$
Cross product of self
$\vec{a} \times \vec{a} = \vec{0}$
Cross product when theta = pi/2
$|\vec{a} \times \vec{b}| = |\vec{a}||\vec{b}|$
Cross products of unit vectors
$\hat{i} \times \hat{i} = \hat{j} \times \hat{j} = \hat{k} \times \hat{k} = \vec{0}; \; \hat{i} \times \hat{j} = \hat{k}, \; \hat{j} \times \hat{k} = \hat{i}, \; \hat{k} \times \hat{i} = \hat{j}$
Reverse cross products of unit vectors
$\hat{j} \times \hat{i} = -\hat{k}, \; \hat{k} \times \hat{j} = -\hat{i}, \; \hat{i} \times \hat{k} = -\hat{j}$
Angle from cross product
$\sin\theta = \frac{|\vec{a} \times \vec{b}|}{|\vec{a}||\vec{b}|}$
Anti-commutative property
$\vec{a} \times \vec{b} = -(\vec{b} \times \vec{a})$
Area of triangle
$\text{Area}_{\triangle} = \frac{1}{2}|\vec{a} \times \vec{b}|$
Area of parallelogram
$\text{Area}_{\square} = |\vec{a} \times \vec{b}|$
Distributive property of cross product
$\vec{a} \times (\vec{b} + \vec{c}) = \vec{a} \times \vec{b} + \vec{a} \times \vec{c}$
Scalar factor in cross product
$\lambda(\vec{a} \times \vec{b}) = (\lambda\vec{a}) \times \vec{b} = \vec{a} \times (\lambda\vec{b})$
Cross product in component form (determinant)
$\vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$
Cauchy-Schwarz Inequality
$|\vec{a} \cdot \vec{b}| \leq |\vec{a}||\vec{b}|$
Triangle Inequality
$|\vec{a} + \vec{b}| \leq |\vec{a}| + |\vec{b}|$
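Several of the dot- and cross-product identities above can be checked together on a pair of sample vectors (the components are arbitrary, chosen for illustration):

```python
import math

a = (1.0, 2.0, 2.0)
b = (3.0, 0.0, 4.0)

dot = sum(x * y for x, y in zip(a, b))                                   # a . b
mag = lambda v: math.sqrt(sum(x * x for x in v))
cross = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

# cos and sin of the included angle satisfy cos^2 + sin^2 = 1
cos_t = dot / (mag(a) * mag(b))
sin_t = mag(cross) / (mag(a) * mag(b))
assert math.isclose(cos_t**2 + sin_t**2, 1.0)

# a x b is perpendicular to both a and b
assert math.isclose(sum(x * c for x, c in zip(a, cross)), 0.0, abs_tol=1e-12)
assert math.isclose(sum(y * c for y, c in zip(b, cross)), 0.0, abs_tol=1e-12)

# Cauchy-Schwarz and triangle inequalities
assert abs(dot) <= mag(a) * mag(b)
ab = tuple(x + y for x, y in zip(a, b))
assert mag(ab) <= mag(a) + mag(b)
```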
11
Three Dimensional Geometry
5 marks
Direction cosines identity
$l^{2} + m^{2} + n^{2} = 1$
Sum of squares of direction cosines equals 1
Relation between direction ratios and direction cosines
$\frac{l}{a} = \frac{m}{b} = \frac{n}{c} = \pm\frac{1}{\sqrt{a^2 + b^2 + c^2}}$
Connecting direction ratios (a,b,c) to direction cosines (l,m,n)
Direction cosines from direction ratios
$l = \pm\frac{a}{\sqrt{a^2+b^2+c^2}}, \; m = \pm\frac{b}{\sqrt{a^2+b^2+c^2}}, \; n = \pm\frac{c}{\sqrt{a^2+b^2+c^2}}$
Direction cosines computed from direction ratios
Direction cosines of line joining two points
$l = \frac{x_2-x_1}{PQ}, \; m = \frac{y_2-y_1}{PQ}, \; n = \frac{z_2-z_1}{PQ}$
Direction cosines of line segment joining P(x1,y1,z1) and Q(x2,y2,z2)
Direction ratios of line joining two points
$x_{2}-x_{1}, \; y_{2}-y_{1}, \; z_{2}-z_{1}$
Direction ratios of line segment from P(x1,y1,z1) to Q(x2,y2,z2)
Vector equation of a line (point + direction)
$\vec{r} = \vec{a} + \lambda\vec{b}$
Line through point with position vector a, parallel to vector b. λ is a real parameter.
Cartesian equation of a line (point + direction ratios)
$\frac{x - x_1}{a} = \frac{y - y_1}{b} = \frac{z - z_1}{c}$
Line through (x1,y1,z1) with direction ratios a, b, c
Cartesian equation using direction cosines
$\frac{x - x_1}{l} = \frac{y - y_1}{m} = \frac{z - z_1}{n}$
Line through (x1,y1,z1) with direction cosines l, m, n
Parametric equations of a line
$x = x_1 + \lambda a, \; y = y_1 + \lambda b, \; z = z_1 + \lambda c$
Parametric form of line through (x1,y1,z1) with direction ratios a, b, c
Vector equation of a line through two points
$\vec{r} = \vec{a} + \lambda(\vec{b} - \vec{a})$
Line through two points with position vectors a and b
Angle between two lines (direction ratios)
$\cos\theta = \frac{|a_1 a_2 + b_1 b_2 + c_1 c_2|}{\sqrt{a_1^2+b_1^2+c_1^2} \cdot \sqrt{a_2^2+b_2^2+c_2^2}}$
Angle between lines with direction ratios (a1,b1,c1) and (a2,b2,c2)
Angle between two lines (direction cosines)
$\cos\theta = |l_1 l_2 + m_1 m_2 + n_1 n_2|$
Angle between lines with direction cosines (l1,m1,n1) and (l2,m2,n2), since l²+m²+n²=1
sin(theta) using direction ratios
$\sin\theta = \frac{\sqrt{(a_1 b_2-a_2 b_1)^2 + (b_1 c_2-b_2 c_1)^2 + (c_1 a_2-c_2 a_1)^2}}{\sqrt{a_1^2+b_1^2+c_1^2} \cdot \sqrt{a_2^2+b_2^2+c_2^2}}$
Sine of angle between two lines
sin(theta) using direction cosines
$\sin\theta = \sqrt{(l_1 m_2-l_2 m_1)^2 + (m_1 n_2-m_2 n_1)^2 + (n_1 l_2-n_2 l_1)^2}$
Sine of angle between two lines using direction cosines
Angle between lines in vector form
$\cos\theta = \frac{|\vec{b_1} \cdot \vec{b_2}|}{|\vec{b_1}| \cdot |\vec{b_2}|}$
For lines r = a1 + λ*b1 and r = a2 + μ*b2
Condition for perpendicular lines (direction ratios)
$a_{1}a_{2} + b_{1}b_{2} + c_{1}c_{2} = 0$
Two lines are perpendicular when θ = 90 deg
Condition for parallel lines (direction ratios)
$\frac{a_1}{a_2} = \frac{b_1}{b_2} = \frac{c_1}{c_2}$
Two lines are parallel when θ = 0
Shortest distance between skew lines (vector form)
$d = \frac{|(\vec{b_1} \times \vec{b_2}) \cdot (\vec{a_2} - \vec{a_1})|}{|\vec{b_1} \times \vec{b_2}|}$
For lines r = a1 + λ*b1 and r = a2 + μ*b2
Shortest distance between skew lines (Cartesian form)
$d = \frac{\left|\begin{vmatrix} x_2-x_1 & y_2-y_1 & z_2-z_1 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix}\right|}{\sqrt{(b_1 c_2-b_2 c_1)^2 + (c_1 a_2-c_2 a_1)^2 + (a_1 b_2-a_2 b_1)^2}}$
For lines (x-x1)/a1 = (y-y1)/b1 = (z-z1)/c1 and (x-x2)/a2 = (y-y2)/b2 = (z-z2)/c2; take the absolute value of the determinant, since distance is non-negative
Distance between parallel lines
$d = \frac{|\vec{b} \times (\vec{a_2} - \vec{a_1})|}{|\vec{b}|}$
For parallel lines r = a1 + λ*b and r = a2 + μ*b
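The skew-line distance formula can be exercised on a concrete pair of lines (the lines below are sample data, not from the text):

```python
import math

# Shortest distance between the skew lines
#   r = (1,1,0) + lambda(2,-1,1)   and   r = (2,1,-1) + mu(3,-5,2)
# via d = |(b1 x b2) . (a2 - a1)| / |b1 x b2|.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a1, b1 = (1, 1, 0), (2, -1, 1)
a2, b2 = (2, 1, -1), (3, -5, 2)

n = cross(b1, b2)                               # (3, -1, -7)
d = abs(dot(n, tuple(p - q for p, q in zip(a2, a1)))) / math.sqrt(dot(n, n))
assert math.isclose(d, 10 / math.sqrt(59))
```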
12
Linear Programming
5 marks
General Objective Function
$Z = ax + by$
a, b are constants; x, y are decision variables; Z is to be maximised or minimised
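In the standard corner-point treatment of this chapter's topic, the optimum of Z = ax + by over a bounded feasible region occurs at a vertex, so it suffices to evaluate Z at each corner. A minimal sketch with hypothetical vertices and coefficients:

```python
# Maximise Z = 3x + 2y over an assumed feasible region with these vertices.
a, b = 3, 2
corners = [(0, 0), (4, 0), (3, 3), (0, 5)]      # hypothetical corner points
Z = {p: a * p[0] + b * p[1] for p in corners}

best = max(Z, key=Z.get)
assert best == (3, 3) and Z[best] == 15         # maximum of Z at vertex (3, 3)
```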
13
Probability
8 marks
Conditional Probability
$P(E|F) = \frac{P(E \cap F)}{P(F)}, \; P(F) \neq 0$
Also written as P(E|F) = n(E ∩ F) / n(F) for equally likely outcomes
Complement Rule
$P(A') = 1 - P(A)$
Probability of event not occurring
Addition Theorem
$P(A \cup B) = P(A) + P(B) - P(A \cap B)$
For any two events A and B
Mutually Exclusive Events
$P(A \cup B) = P(A) + P(B)$
When A ∩ B = ϕ (no common outcomes)
Property 1: P(S|F)
$P(S|F) = P(F|F) = 1$
The conditional probability of the sample space S given F is 1
Property 2: Addition rule for conditional probability
$P((A \cup B)|F) = P(A|F) + P(B|F) - P((A \cap B)|F)$
For disjoint events A and B: P((A ∪ B)|F) = P(A|F) + P(B|F)
Property 3: Complement rule
$P(E'|F) = 1 - P(E|F)$
Follows from P(S|F) = 1 and E, E' being disjoint with E ∪ E' = S
Multiplication Rule (two events)
$P(E \cap F) = P(E) \cdot P(F|E) = P(F) \cdot P(E|F)$
Provided P(E) ≠ 0 and P(F) ≠ 0
Multiplication Rule (three events)
$P(E \cap F \cap G) = P(E) \cdot P(F|E) \cdot P(G|E \cap F)$
Can be extended to four or more events similarly
Test for Independence
$P(E \cap F) = P(E) \cdot P(F)$
If this holds, E and F are independent events
Probability of at least one of two independent events
$P(A \cup B) = 1 - P(A') \cdot P(B')$
For independent events A and B
Three Independent Events
$P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C)$
Extends to n mutually independent events
Theorem of Total Probability
$P(A) = \sum_{j=1}^{n} P(E_j) P(A|E_j)$
Where {E₁, E₂, ..., Eₙ} is a partition of S and each Eᵢ has nonzero probability
Bayes' Theorem (Simple Form)
$P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}$
Gives posterior probability of A given B has occurred
Bayes' Theorem (General Form)
$P(E_i|A) = \frac{P(E_i) P(A|E_i)}{\sum_{j=1}^{n} P(E_j) P(A|E_j)}$
For partition {E₁, E₂, …, Eₙ} of S. Also called the formula for the probability of 'causes'.
Mean (Expected Value)
$E(X) = \mu = \sum_{i=1}^{n} x_i p_i$
xᵢ are values of X and pᵢ are corresponding probabilities
Variance
$\text{Var}(X) = E(X^2) - [E(X)]^2 = \sum x_i^2 p_i - \left(\sum x_i p_i\right)^2$
Also written as σ²
Variance (Alternative)
$\text{Var}(X) = \sum_{i=1}^{n} (x_i - \mu)^2 \cdot p_i$
Direct formula using deviations from the mean
Standard Deviation
$\sigma = \sqrt{\text{Var}(X)}$
Non-negative square root of the variance
Binomial Probability
$P(X = r) = \binom{n}{r} p^r q^{n-r}, \; q = 1 - p$
r = 0, 1, 2, ..., n. Here n = number of trials, p = probability of success, q = probability of failure
Mean of Binomial Distribution
$E(X) = np$
n = number of trials, p = probability of success
Variance of Binomial Distribution
$\text{Var}(X) = npq = np(1 - p)$
Always less than or equal to the mean np since q ≤ 1
Standard Deviation of Binomial Distribution
$\sigma = \sqrt{npq}$
Where q = 1 − p
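Bayes' theorem and the binomial mean/variance formulas can both be confirmed with exact rational arithmetic. A sketch (the two-event partition and its probabilities are hypothetical sample numbers):

```python
import math
from fractions import Fraction as F

# Bayes' theorem with a partition {E1, E2}:
# P(E1) = P(E2) = 1/2, P(A|E1) = 3/5, P(A|E2) = 1/5.
pE = [F(1, 2), F(1, 2)]
pA_given_E = [F(3, 5), F(1, 5)]
pA = sum(p * q for p, q in zip(pE, pA_given_E))      # total probability = 2/5
post = pE[0] * pA_given_E[0] / pA                     # P(E1|A)
assert post == F(3, 4)

# Binomial distribution: mean np and variance npq from first principles.
n, p = 10, F(1, 4)
probs = [math.comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)]
mean = sum(r * pr for r, pr in zip(range(n + 1), probs))
var = sum(r * r * pr for r, pr in zip(range(n + 1), probs)) - mean**2
assert mean == n * p and var == n * p * (1 - p)
```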