← Back to Hub

Mathematics Chapters

Chapter summaries, key formulas, important theorems, exam tips and exercise overviews for all 13 chapters — 952 NCERT questions covered.

Chapter 1
Relations and Functions
Types of Relations · Types of Functions · Composition of Functions and Invertible Function
3 marks

🎯 Must-Read — Key concepts for this chapter

  1. A relation R from A to B is a subset of A x B
  2. If (a, b) ∈ R, we write a R b
  3. Functions are a special kind of relation
  4. A relation on a set A is a subset of A x A
  5. Empty relation: R = φ; Universal relation: R = A x A

📖 Chapter Summary

  • Relation: A relation R from a set A to a set B is an arbitrary subset of A x B. If (a, b) ∈ R, we say that a is related to b under the relation R, written as a R b.
  • Empty Relation: A relation R on a set A is called an empty relation if no element of A is related to any element of A, i.e., R = φ ⊂ A x A.
  • Universal Relation: A relation R on a set A is called a universal relation if each element of A is related to every element of A, i.e., R = A x A.
  • Trivial Relations: Both the empty relation and the universal relation are sometimes called trivial relations.
  • Reflexive Relation: A relation R on a set A is called reflexive if (a, a) ∈ R for every a ∈ A.
  • Symmetric Relation: A relation R on a set A is called symmetric if (a1, a2) ∈ R implies (a2, a1) ∈ R, for all a1, a2 ∈ A.
  • Transitive Relation: A relation R on a set A is called transitive if (a1, a2) ∈ R and (a2, a3) ∈ R imply (a1, a3) ∈ R, for all a1, a2, a3 ∈ A.
  • Equivalence Relation: A relation R on a set A is said to be an equivalence relation if R is reflexive, symmetric and transitive.
  • Equivalence Class: Given an equivalence relation R on a set X, the equivalence class [a] containing a ∈ X is the subset of X containing all elements b related to a. The equivalence classes form a partition of X into mutually disjoint subsets.
  • One-one (Injective) Function: A function f: X → Y is one-one (or injective) if the images of distinct elements of X under f are distinct, i.e., for every x1, x2 ∈ X, f(x1) = f(x2) implies x1 = x2. Otherwise, f is called many-one.
  • Onto (Surjective) Function: A function f: X → Y is onto (or surjective) if every element of Y is the image of some element of X under f, i.e., for every y ∈ Y, there exists an element x ∈ X such that f(x) = y.
  • Bijective Function: A function f: X → Y is bijective if f is both one-one and onto.
  • Composition of Functions: Let f: A → B and g: B → C be two functions. The composition of f and g, denoted by gof, is the function gof: A → C given by gof(x) = g(f(x)), for all x ∈ A.
  • Invertible Function: A function f: X → Y is invertible if there exists a function g: Y → X such that gof = I_X and fog = I_Y. The function g is called the inverse of f and is denoted by f⁻¹.
  • Identity Function: The identity function I_X: X → X is defined as I_X(x) = x for all x ∈ X.
📐 Key Formulas
  • Empty Relation $R = \phi \subset A \times A$ No element is related to any element
  • Universal Relation $R = A \times A$ Every element is related to every element
  • One-one test $f(x_1) = f(x_2) \Rightarrow x_1 = x_2, \; \forall \; x_1, x_2 \in X$ Equivalent to: x1 ≠ x2 ⇒ f(x1) ≠ f(x2)
  • Onto condition $\forall \; y \in Y, \; \exists \; x \in X \text{ such that } f(x) = y$ Equivalently, f is onto if and only if Range of f = Y (codomain)
  • Composition of functions $g \circ f(x) = g(f(x)), \; \forall \; x \in A$ If f: A → B and g: B → C, then gof: A → C
  • Inverse function condition $g \circ f = I_X \text{ and } f \circ g = I_Y$ f is invertible ⟺ f is one-one and onto (bijective)
  • Inverse verification $f^{-1}(y) = x \iff f(x) = y$ f⁻¹ o f = I_X and f o f⁻¹ = I_Y
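The invertibility condition gof = I_X and fog = I_Y can be checked directly on a small finite set; the sets X, Y and the map f below are illustrative choices, not from the chapter:

```python
# Sketch: verifying invertibility via g∘f = I_X and f∘g = I_Y on a finite set.
X = {1, 2, 3}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'b', 3: 'c'}          # a bijection X -> Y
g = {y: x for x, y in f.items()}      # candidate inverse Y -> X

gof_is_identity = all(g[f[x]] == x for x in X)   # g∘f = I_X
fog_is_identity = all(f[g[y]] == y for y in Y)   # f∘g = I_Y
print(gof_is_identity and fog_is_identity)       # True: f is invertible with inverse g
```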
📜 Important Theorems
  • 📌
    Equivalence Classes Partition Theorem: Given an arbitrary equivalence relation R on an arbitrary set X, R divides X into mutually disjoint subsets Aᵢ called partitions or subdivisions of X satisfying: (i) all elements of Aᵢ are related to each other, for all i, (ii) no element of Aᵢ is related to any element of Aⱼ, i ≠ j, (iii) union of Aⱼ = X and Aᵢ intersect Aⱼ = φ, i ≠ j. The subsets Aᵢ are called equivalence classes.
  • 📌
    Finite Set Bijection Property: For an arbitrary finite set X, a one-one function f: X → X is necessarily onto and an onto map f: X → X is necessarily one-one. This is a characteristic difference between a finite and an infinite set.
  • 📌
    Invertibility and Bijectivity: A function f is invertible if and only if f is one-one and onto (bijective).

💡 Quick Tips & Memory Aids

  • Reflexive: (a, a) ∈ R for all a ∈ A
  • Symmetric: (a, b) ∈ R implies (b, a) ∈ R
  • Transitive: (a, b) ∈ R and (b, c) ∈ R implies (a, c) ∈ R
  • Equivalence relation must be reflexive, symmetric and transitive
  • An equivalence relation partitions a set into disjoint equivalence classes
  • If R1 and R2 are equivalence relations on a set A, then R1 intersect R2 is also an equivalence relation
  • f is one-one (injective) if distinct elements have distinct images
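The reflexive, symmetric, and transitive properties can be tested mechanically for a finite relation; the set A and relation R below are illustrative examples, not from the text:

```python
# Sketch: testing reflexive/symmetric/transitive for a relation on a small set.
A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # relates 1 and 2, plus all (a, a)

reflexive  = all((a, a) in R for a in A)
symmetric  = all((b, a) in R for (a, b) in R)
transitive = all((a, d) in R
                 for (a, b) in R for (c, d) in R if b == c)
is_equivalence = reflexive and symmetric and transitive
print(reflexive, symmetric, transitive, is_equivalence)   # all True here
```

Here R is an equivalence relation; its classes are {1, 2} and {3}, a partition of A.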

📝 Exercise Overview

  • 35 total questions across 3 exercises
  • 24 long-answer questions (proofs, show-that, derivations)
  • 5 short-answer questions
  • 6 multiple-choice questions
Chapter 2
Inverse Trigonometric Functions
Basic Concepts · Properties of Inverse Trigonometric Functions
3 marks

🎯 Must-Read — Key concepts for this chapter

  1. If f : X → Y such that f(x) = y is one-one and onto, then we can define a unique function g : Y → X such that g(y) = x, where x ∈ X and y = f(x), y ∈ Y
  2. The function g is called the inverse of f and is denoted by f⁻¹
  3. Domain of g = Range of f, and Range of g = Domain of f
  4. g⁻¹ = (f⁻¹)⁻¹ = f
  5. (f⁻¹ ∘ f)(x) = f⁻¹(f(x)) = f⁻¹(y) = x

📖 Chapter Summary

  • Principal Value Branch: The branch of an inverse trigonometric function with a specific restricted range that is conventionally chosen as the standard. For sin⁻¹, the principal value branch has range [−π/2, π/2].
  • Principal Value: The value of an inverse trigonometric function which lies in the range of its principal value branch is called the principal value of that inverse trigonometric function.
  • sin⁻¹ (arc sine function): The inverse of the sine function with domain [−1, 1] and range [−π/2, π/2]. If sin y = x, then y = sin⁻¹ x.
  • cos⁻¹ (arc cosine function): The inverse of the cosine function with domain [−1, 1] and range [0, π]. If cos y = x, then y = cos⁻¹ x.
  • cosec⁻¹ (arc cosecant function): The inverse of the cosecant function with domain R − (−1, 1) and range [−π/2, π/2] − {0}.
  • sec⁻¹ (arc secant function): The inverse of the secant function with domain R − (−1, 1) and range [0, π] − {π/2}.
  • tan⁻¹ (arc tangent function): The inverse of the tangent function with domain R and range (−π/2, π/2).
  • cot⁻¹ (arc cotangent function): The inverse of the cotangent function with domain R and range (0, π).
  • The domains and ranges (principal value branches) of the inverse trigonometric functions are: sin⁻¹: [−1, 1] → [−π/2, π/2] ┃ cos⁻¹: [−1, 1] → [0, π] ┃ cosec⁻¹: R − (−1, 1) → [−π/2, π/2] − {0} ┃ sec⁻¹: R − (−1, 1) → [0, π] − {π/2} ┃ tan⁻¹: R → (−π/2, π/2) ┃ cot⁻¹: R → (0, π)
  • sin⁻¹ x should not be confused with (sin x)⁻¹. In fact (sin x)⁻¹ = 1/sin x, and similarly for other trigonometric functions.
  • For suitable values of domain: y = sin⁻¹ x implies x = sin y, sin(sin⁻¹ x) = x, and sin⁻¹(sin x) = x.
📐 Key Formulas
  • Domain and Range of sin⁻¹ $\sin^{-1} : [-1, 1] \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ Principal value branch
  • Domain and Range of cos⁻¹ $\cos^{-1} : [-1, 1] \to [0, \pi]$ Principal value branch
  • Domain and Range of cosec⁻¹ $\csc^{-1} : \mathbb{R} - (-1, 1) \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] - \{0\}$ Principal value branch. Domain is |x| ≥ 1, i.e., x ≤ −1 or x ≥ 1
  • Domain and Range of sec⁻¹ $\sec^{-1} : \mathbb{R} - (-1, 1) \to [0, \pi] - \left\{\frac{\pi}{2}\right\}$ Principal value branch. Domain is |x| ≥ 1, i.e., x ≤ −1 or x ≥ 1
  • Domain and Range of tan⁻¹ $\tan^{-1} : \mathbb{R} \to \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ Principal value branch
  • Domain and Range of cot⁻¹ $\cot^{-1} : \mathbb{R} \to (0, \pi)$ Principal value branch
  • Sine inverse-forward composition $\sin(\sin^{-1} x) = x, \; x \in [-1, 1]$ Composition of function with its inverse
  • Sine forward-inverse composition $\sin^{-1}(\sin x) = x, \; x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ Composition of inverse with function
  • Cancellation property (sin) $\sin(\sin^{-1} x) = x, \; x \in [-1,1] \text{ and } \sin^{-1}(\sin x) = x, \; x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ Similar results hold for other trigonometric functions for suitable values of domain
  • Double angle formula for sin⁻¹ $\sin^{-1}(2x\sqrt{1 - x^2}) = 2\sin^{-1} x, \; -\frac{1}{\sqrt{2}} \leq x \leq \frac{1}{\sqrt{2}}$ Derived by substituting x = sin θ
  • Double angle formula for cos⁻¹ $\sin^{-1}(2x\sqrt{1 - x^2}) = 2\cos^{-1} x, \; \frac{1}{\sqrt{2}} \leq x \leq 1$ Derived by substituting x = cos θ
  • Simplification of cot⁻¹(1/√(x² − 1)) $\cot^{-1}\!\left(\frac{1}{\sqrt{x^2 - 1}}\right) = \sec^{-1} x, \; x > 1$ Derived by substituting x = sec θ
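A quick sketch of why the principal value branch matters: Python's `math.asin` returns values in [−π/2, π/2], so sin⁻¹(sin x) = x holds only inside that branch. The test values are arbitrary:

```python
import math

# Sketch: math.asin returns the principal value in [-pi/2, pi/2],
# so asin(sin x) recovers x only when x lies inside that branch.
x_inside  = 0.5     # lies in [-pi/2, pi/2]
x_outside = 2.5     # lies outside the principal branch

print(math.asin(math.sin(x_inside)))    # ≈ 0.5 (x recovered)
print(math.asin(math.sin(x_outside)))   # ≈ pi - 2.5 ≈ 0.6416, NOT 2.5
```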

💡 Quick Tips & Memory Aids

  • (f ∘ f⁻¹)(y) = f(f⁻¹(y)) = f(x) = y
  • sin⁻¹ x should not be confused with (sin x)⁻¹. In fact (sin x)⁻¹ = 1/sin x, and similarly for other trigonometric functions
  • Whenever no branch of an inverse trigonometric function is mentioned, we mean the principal value branch of that function
  • The graph of y = sin⁻¹ x can be obtained from the graph of y = sin x by interchanging x and y axes
  • The graph of an inverse function is a mirror image (reflection) of the corresponding graph of the original function along the line y = x
  • Sine function restricted to [−π/2, π/2] is one-one and onto with range [−1, 1]
  • Cosine function restricted to [0, π] is one-one and onto with range [−1, 1]

📝 Exercise Overview

  • 43 total questions across 3 exercises
  • 16 long-answer questions (proofs, show-that, derivations)
  • 20 short-answer questions
  • 7 multiple-choice questions
Chapter 3
Matrices
Matrix · Types of Matrices · Operations on Matrices · Transpose of a Matrix · Symmetric and Skew Symmetric Matrices · Invertible Matrices
5 marks

🎯 Must-Read — Key concepts for this chapter

  1. Matrix notation simplifies work to a great extent compared with other straightforward methods
  2. Matrices represent coefficients in systems of linear equations
  3. Matrix notation is used in electronic spreadsheet programs
  4. We follow the notation A = [aᵢⱼ]ₘ ₓ ₙ to indicate that A is a matrix of order m x n
  5. We consider only matrices whose elements are real numbers or functions taking real values

📖 Chapter Summary

  • Matrix: An ordered rectangular array of numbers or functions. The numbers or functions are called the elements or the entries of the matrix.
  • Order of a matrix: A matrix having m rows and n columns is called a matrix of order m x n (read as an m by n matrix). The number of elements in an m x n matrix is mn.
  • Element aᵢⱼ: An element lying in the i-th row and j-th column of a matrix. Also called the (i, j)-th element of the matrix.
  • Column matrix: A matrix is said to be a column matrix if it has only one column. In general, A = [aᵢⱼ]ₘ ₓ ₁ is a column matrix of order m x 1.
  • Row matrix: A matrix is said to be a row matrix if it has only one row. In general, B = [bᵢⱼ]₁ ₓ ₙ is a row matrix of order 1 x n.
  • Square matrix: A matrix in which the number of rows equals the number of columns. An m x n matrix is a square matrix if m = n, and is known as a square matrix of order n.
  • Diagonal elements: If A = [aᵢⱼ] is a square matrix of order n, then the elements a₁₁, a₂₂, ..., aₙₙ are said to constitute the diagonal of the matrix A.
  • Diagonal matrix: A square matrix B = [bᵢⱼ]ₘ ₓ ₘ is said to be a diagonal matrix if all its non-diagonal elements are zero, that is bᵢⱼ = 0 when i ≠ j.
  • Scalar matrix: A diagonal matrix is said to be a scalar matrix if its diagonal elements are equal, that is B = [bᵢⱼ]ₙ ₓ ₙ is a scalar matrix if bᵢⱼ = 0 when i ≠ j, and bᵢⱼ = k when i = j, for some constant k.
  • Identity matrix: A square matrix in which the diagonal elements are all 1 and the rest are all zero. The square matrix A = [aᵢⱼ]ₙ ₓ ₙ is an identity matrix if aᵢⱼ = 1 when i = j, and aᵢⱼ = 0 when i ≠ j. Denoted by Iₙ or simply I.
  • Zero matrix: A matrix is said to be a zero matrix or null matrix if all its elements are zero. Denoted by O. Its order will be clear from the context.
  • Equality of matrices: Two matrices A = [aᵢⱼ] and B = [bᵢⱼ] are equal if (i) they are of the same order, and (ii) each element of A equals the corresponding element of B, that is aᵢⱼ = bᵢⱼ for all i and j. Written as A = B.
  • Addition of matrices: If A = [aᵢⱼ] and B = [bᵢⱼ] are two matrices of the same order m x n, then their sum A + B is defined as the matrix C = [cᵢⱼ]ₘ ₓ ₙ, where cᵢⱼ = aᵢⱼ + bᵢⱼ for all possible values of i and j.
  • Scalar multiplication: If A = [aᵢⱼ]ₘ ₓ ₙ is a matrix and k is a scalar, then kA is the matrix obtained by multiplying each element of A by k. In other words, kA = k[aᵢⱼ]ₘ ₓ ₙ = [k(aᵢⱼ)]ₘ ₓ ₙ.
  • Negative of a matrix: The negative of a matrix A is denoted by -A. We define -A = (-1)A.
📐 Key Formulas
  • General m x n matrix $A = [a_{ij}]_{m \times n}, \; 1 \leq i \leq m, \; 1 \leq j \leq n$ The i-th row consists of elements aᵢ₁, aᵢ₂, ..., aᵢₙ and the j-th column consists of elements a₁ⱼ, a₂ⱼ, ..., aₘⱼ
  • Matrix addition $A + B = [a_{ij} + b_{ij}]_{m \times n}$ Both matrices must be of the same order
  • Scalar multiplication $kA = [k \cdot a_{ij}]_{m \times n}$ The (i,j)-th element of kA is k · aᵢⱼ
  • Matrix multiplication element $c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \cdots + a_{in}b_{nk} = \sum_{j=1}^{n} a_{ij} b_{jk}$ Number of columns of A must equal number of rows of B
  • Commutative law of addition $A + B = B + A$ For matrices of the same order
  • Associative law of addition $(A + B) + C = A + (B + C)$ For matrices of the same order
  • Additive identity $A + O = O + A = A$ O is the zero matrix of the same order as A
  • Additive inverse $A + (-A) = (-A) + A = O$ -A = [-aᵢⱼ]ₘ ₓ ₙ
  • Scalar distributive over matrix addition $k(A + B) = kA + kB$ A, B are matrices of same order, k is a scalar
  • Scalar sum distributive $(k + l)A = kA + lA$ k and l are scalars
  • Associative law of multiplication $(AB)C = A(BC)$ Whenever both sides of the equality are defined
  • Distributive law (left) $A(B + C) = AB + AC$ Whenever both sides are defined
  • Distributive law (right) $(A + B)C = AC + BC$ Whenever both sides are defined
  • Multiplicative identity $IA = AI = A$ I is the identity matrix of appropriate order
  • Transpose of transpose $(A^{T})^{T} = A$ Taking transpose twice gives back the original matrix
  • Transpose of scalar multiple $(kA)^{T} = kA^{T}$ Where k is any constant
  • Transpose of sum $(A + B)^{T} = A^{T} + B^{T}$ For matrices A and B of suitable orders
  • Transpose of product $(AB)^{T} = B^{T}A^{T}$ The order reverses when taking transpose of a product
  • Symmetric part of a matrix $\frac{1}{2}(A + A') \text{ is symmetric}$ For any square matrix A with real number entries
  • Skew symmetric part of a matrix $\frac{1}{2}(A - A') \text{ is skew-symmetric}$ For any square matrix A with real number entries
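The decomposition A = ½(A + A') + ½(A − A') can be sketched with plain nested lists; the 2x2 matrix and the helper names are arbitrary illustrative choices:

```python
# Sketch: splitting a square matrix into symmetric + skew-symmetric parts.
def transpose(M):
    return [list(row) for row in zip(*M)]

def add(M, N, sign=1):
    n = len(M)
    return [[M[i][j] + sign * N[i][j] for j in range(n)] for i in range(n)]

def scale(k, M):
    return [[k * x for x in row] for row in M]

A = [[1.0, 2.0], [5.0, 3.0]]              # arbitrary example matrix
P = scale(0.5, add(A, transpose(A)))       # symmetric part, 1/2 (A + A')
Q = scale(0.5, add(A, transpose(A), -1))   # skew-symmetric part, 1/2 (A - A')

assert P == transpose(P)                   # P' = P
assert Q == scale(-1, transpose(Q))        # Q' = -Q
assert add(P, Q) == A                      # A = P + Q, as Theorem 2 states
print(P, Q)
```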
📜 Important Theorems
  • 📌
    Theorem 1: For any square matrix A with real number entries, A + A' is a symmetric matrix and A - A' is a skew symmetric matrix.
  • 📌
    Theorem 2: Any square matrix can be expressed as the sum of a symmetric and a skew symmetric matrix. Specifically, A = (1/2)(A + A') + (1/2)(A - A').
  • 📌
    Theorem 3 (Uniqueness of inverse): Inverse of a square matrix, if it exists, is unique.
  • 📌
    Theorem 4 (Inverse of a product): If A and B are invertible matrices of the same order, then (AB)⁻¹ = B⁻¹ A⁻¹.

💡 Quick Tips & Memory Aids

  • Matrices can represent vertices of geometric figures in a plane
  • A scalar matrix is an identity matrix when k = 1
  • Every identity matrix is a scalar matrix, but not every scalar matrix is an identity matrix
  • For equality of matrices, both order and all corresponding elements must match
  • A + B is defined only when A and B are of the same order
  • Addition of matrices is a binary operation on the set of matrices of the same order
  • Matrix multiplication is NOT commutative in general: AB ≠ BA
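The non-commutativity of matrix multiplication is easy to demonstrate; the two 2x2 matrices below are arbitrary illustrative choices:

```python
# Sketch: AB != BA in general, shown with two small matrices.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]      # swaps coordinates
print(matmul(A, B))       # [[2, 1], [4, 3]] (columns of A swapped)
print(matmul(B, A))       # [[3, 4], [1, 2]] (rows of A swapped)
```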

📝 Exercise Overview

  • 56 total questions across 5 exercises
  • 20 long-answer questions (proofs, show-that, derivations)
  • 25 short-answer questions
  • 11 multiple-choice questions
Chapter 4
Determinants
Determinant · Area of a Triangle · Minors and Cofactors · Adjoint and Inverse of a Matrix · Applications of Determinants and Matrices
5 marks

🎯 Must-Read — Key concepts for this chapter

  1. Determinants have wide applications in Engineering, Science, Economics, Social Science, etc.
  2. In this chapter, determinants up to order three with real entries only are studied
  3. Topics covered: properties of determinants, minors, cofactors, applications in finding the area of a triangle, adjoint and inverse of a square matrix, consistency and inconsistency of systems of linear equations, and solution of systems using the inverse of a matrix
  4. For matrix A, |A| is read as determinant of A and not modulus of A
  5. Only square matrices have determinants

📖 Chapter Summary

  • Minor: The minor of an element aᵢⱼ of a determinant is the determinant obtained by deleting the i-th row and j-th column in which the element aᵢⱼ lies. The minor of aᵢⱼ is denoted by Mᵢⱼ.
  • Cofactor: The cofactor of an element aᵢⱼ, denoted by Aᵢⱼ, is defined by Aᵢⱼ = (-1)^(i+j) · Mᵢⱼ, where Mᵢⱼ is the minor of aᵢⱼ.
  • Adjoint of a matrix: The adjoint of a square matrix A = [aᵢⱼ]_(n x n) is defined as the transpose of the matrix [Aᵢⱼ]_(n x n), where Aᵢⱼ is the cofactor of the element aᵢⱼ. It is denoted by adj A.
  • Singular matrix: A square matrix A is said to be singular if |A| = 0.
  • Non-singular matrix: A square matrix A is said to be non-singular if |A| ≠ 0.
  • Consistent system: A system of equations is said to be consistent if a solution (one or more) exists.
  • Inconsistent system: A system of equations is said to be inconsistent if no solution exists.
  • Determinant of a matrix A = [a₁₁] of order 1: |a₁₁| = a₁₁
  • Determinant of a 2x2 matrix A = [a11 a12 ┃ a21 a22]: |A| = a11·a22 - a12·a21
  • Determinant of a 3x3 matrix (expanding along R1): |A| = a1|b2 c2 ┃ b3 c3| - b1|a2 c2 ┃ a3 c3| + c1|a2 b2 ┃ a3 b3|
  • For any square matrix A, the determinant |A| satisfies certain properties
  • Area of a triangle with vertices (x1, y1), (x2, y2), (x3, y3): Δ = (1/2)|x1 y1 1 ┃ x2 y2 1 ┃ x3 y3 1|
  • Value of a determinant = sum of the products of the elements of a row (or column) with their corresponding cofactors. E.g., |A| = a11·A11 + a12·A12 + a13·A13
📐 Key Formulas
  • $\Delta = \frac{1}{2} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$
  • $\Delta = \frac{1}{2} |\text{determinant value}|$
  • $\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = 0 \; (\text{collinear})$
  • $M_{ij} = \text{minor of } a_{ij}$
  • $A_{ij} = (-1)^{i+j} \cdot M_{ij}$
  • Minor of an element of a determinant of order n (n ≥ 2) is a determinant of order n − 1
  • $\Delta = a_{i1}A_{i1} + a_{i2}A_{i2} + a_{i3}A_{i3}$
  • $\Delta = a_{1j}A_{1j} + a_{2j}A_{2j} + a_{3j}A_{3j}$
  • $a_{i1}A_{j1} + a_{i2}A_{j2} + a_{i3}A_{j3} = 0, \; i \neq j$
  • For A = [a11 a12 a13; a21 a22 a23; a31 a32 a33], adj A = transpose of [A11 A12 A13; A21 A22 A23; A31 A32 A33] = [A11 A21 A31; A12 A22 A32; A13 A23 A33]
  • For a 2x2 matrix A = [a11 a12; a21 a22], adj A = [a22 -a12; -a21 a11] (interchange the diagonal elements, change the sign of the off-diagonal elements)
  • $A(\text{adj } A) = (\text{adj } A)A = |A| \cdot I$
  • $|\text{adj}(A)| = |A|^{n-1}$
  • $A^{-1} = \frac{1}{|A|} \cdot \text{adj}(A), \; |A| \neq 0$
  • $|AB| = |A| \cdot |B|$
  • If AB = BA = I, then B is called the inverse of A, i.e., B = A⁻¹
  • $A^{-1} = B, \; B^{-1} = A, \; (A^{-1})^{-1} = A$
  • For the system a1x + b1y + c1z = d1, a2x + b2y + c2z = d2, a3x + b3y + c3z = d3: matrix form AX = B, where A = [a1 b1 c1; a2 b2 c2; a3 b3 c3], X = [x; y; z], B = [d1; d2; d3]
  • $|A| \neq 0 \Rightarrow X = A^{-1}B$
  • If |A| = 0 and (adj A)B ≠ O, the system is inconsistent (no solution)
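The cofactor and adjoint machinery can be sketched for a 3x3 matrix; the matrix and helper names below are arbitrary illustrative choices, not from the text:

```python
# Sketch: cofactor expansion, adjoint, and the identity A(adj A) = |A| I,
# following A_ij = (-1)^(i+j) M_ij.
def minor(A, i, j):
    M = [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]   # 2x2 determinant

def det3(A):
    # expansion along the first row
    return sum((-1) ** j * A[0][j] * minor(A, 0, j) for j in range(3))

def adj(A):
    C = [[(-1) ** (i + j) * minor(A, i, j) for j in range(3)] for i in range(3)]
    return [list(row) for row in zip(*C)]          # transpose of cofactor matrix

A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
d = det3(A)
B = adj(A)
prod = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(d)       # 7
print(prod)    # [[7, 0, 0], [0, 7, 0], [0, 0, 7]] = |A| * I
```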
📜 Important Theorems
  • 📌
    Theorem 1: If A is any given square matrix of order n, then A(adj A) = (adj A)A = |A|·I, where I is the identity matrix of order n.
  • 📌
    Theorem 2: If A and B are nonsingular matrices of the same order, then AB and BA are also nonsingular matrices of the same order.
  • 📌
    Theorem 3: The determinant of the product of matrices is equal to product of their respective determinants, that is, |AB| = |A| · |B|, where A and B are square matrices of the same order.
  • 📌
    Theorem 4: A square matrix A is invertible if and only if A is nonsingular matrix.

💡 Quick Tips & Memory Aids

  • Always take absolute value since area is a positive quantity
  • If area is given, use both positive and negative values of the determinant
  • Collinearity condition: area = 0 implies three points are collinear
  • Δ = sum of the products of the elements of any row (or column) with their corresponding cofactors
  • If the elements of a row (or column) are multiplied with the cofactors of any other row (or column), the sum is zero
  • For example, a11·A21 + a12·A22 + a13·A23 = 0 (R1 elements with R2 cofactors gives zero since it creates a determinant with two identical rows)
  • A(adj A) = (adj A)A = |A|I
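The area and collinearity formulas above translate directly into a short check; the vertices are illustrative:

```python
# Sketch: triangle area via the 3x3 determinant, and the collinearity test.
def tri_det(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # expansion of |x1 y1 1; x2 y2 1; x3 y3 1| along the first row
    return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

def area(p1, p2, p3):
    return abs(tri_det(p1, p2, p3)) / 2    # area is always taken positive

print(area((0, 0), (4, 0), (0, 3)))        # right triangle with legs 4 and 3: 6.0
print(tri_det((0, 0), (1, 1), (2, 2)))     # 0 => the three points are collinear
```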

📝 Exercise Overview

  • 61 total questions across 6 exercises
  • 7 multiple-choice questions
Chapter 5
Continuity and Differentiability
Continuity · Algebra of Continuous Functions · Differentiability · Derivatives of Composite Functions (Chain Rule) · Derivatives of Implicit Functions · Derivatives of Inverse Trigonometric Functions · Exponential and Logarithmic Functions · Logarithmic Differentiation · Derivatives of Functions in Parametric Forms · Second Order Derivative · Mean Value Theorem
8 marks

🎯 Must-Read — Key concepts for this chapter

  1. Previously learnt to differentiate polynomial and trigonometric functions
  2. This chapter connects continuity and differentiability
  3. Powerful techniques of differentiation are developed
  4. f is continuous at x = c if: (1) f(c) is defined, (2) lim(x->c) f(x) exists, (3) lim(x->c) f(x) = f(c)
  5. If f is not continuous at c, then c is called a point of discontinuity of f

📖 Chapter Summary

  • Continuity at a point: Suppose f is a real function on a subset of the real numbers and let c be a point in the domain of f. Then f is continuous at c if lim(x->c) f(x) = f(c)
  • Continuous function: A real function f is said to be continuous if it is continuous at every point in the domain of f
  • Derivative of f at c: The derivative of f at c is defined by lim(h->0) [f(c+h) - f(c)]/h, provided this limit exists. Denoted by f'(c) or (d/dx)(f(x))|_c
  • Derivative function: f'(x) = lim(h->0) [f(x+h) - f(x)]/h, wherever the limit exists. Also denoted by (d/dx)(f(x)) or dy/dx or y'
  • Differentiable in an interval: A function is differentiable in [a, b] if it is differentiable at every point of [a, b]. At the endpoints a and b, we use the right hand and left hand derivatives respectively. Differentiable in (a, b) means differentiable at every point of (a, b)
  • Explicit function: When y = f(x) expresses y directly in terms of x
  • Implicit function: When the relationship between x and y is given by an equation like x + sin(xy) - y = 0, where y cannot easily be expressed as a function of x
  • Exponential function: The exponential function with positive base b > 1 is the function y = f(x) = bˣ
  • Logarithmic function: Let b > 1 be a real number. Then the logarithm of a to base b is x if bˣ = a. Written as log_b a = x if bˣ = a
  • Second order derivative: If y = f(x), then dy/dx = f'(x). If f'(x) is differentiable, then d/dx(dy/dx) = d²y/dx² is the second order derivative. Denoted by f''(x), D²y, y'', or y₂
  • A real valued function is continuous at a point in its domain if the limit of the function at that point equals the value of the function at that point. A function is continuous if it is continuous on the whole of its domain.
  • The sum, difference, product and quotient of continuous functions are continuous: if f and g are continuous functions, then (f ± g)(x) = f(x) ± g(x) is continuous, (f · g)(x) = f(x) · g(x) is continuous, and (f/g)(x) = f(x)/g(x) (wherever g(x) ≠ 0) is continuous.
  • Every differentiable function is continuous, but the converse is not true.
  • Chain rule is the rule to differentiate composites of functions. If f = v o u, t = u(x) and both dt/dx and dv/dt exist, then df/dx = (dv/dt)(dt/dx).
  • Some standard derivatives (in appropriate domains): d/dx(sin⁻¹ x) = 1/√(1 - x²), d/dx(cos⁻¹ x) = -1/√(1 - x²), d/dx(tan⁻¹ x) = 1/(1 + x²).
📐 Key Formulas
  • Sum/Difference Rule $(u \pm v)' = u' \pm v'$
  • Product Rule (Leibnitz Rule) $(uv)' = u'v + uv'$ Derivative of a product of two functions
  • Quotient Rule $\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}, \; v \neq 0$
  • Chain Rule (two functions) $\frac{df}{dx} = \frac{dv}{dt} \cdot \frac{dt}{dx}$
  • Chain Rule (three functions) $\frac{df}{dx} = \frac{dw}{ds} \cdot \frac{ds}{dt} \cdot \frac{dt}{dx}$
  • Derivative of sin^(-1) x $\frac{d}{dx}(\sin^{-1} x) = \frac{1}{\sqrt{1 - x^2}}$
  • Derivative of cos^(-1) x $\frac{d}{dx}(\cos^{-1} x) = \frac{-1}{\sqrt{1 - x^2}}$
  • Derivative of tan^(-1) x $\frac{d}{dx}(\tan^{-1} x) = \frac{1}{1 + x^2}$
  • Derivative of e^x $\frac{d}{dx}(e^x) = e^x$
  • Derivative of log x (natural log) $\frac{d}{dx}(\log x) = \frac{1}{x}$
  • Derivative of a^x $\frac{d}{dx}(a^x) = a^x \log a$
  • Change of base formula $\log_a p = \frac{\log_b p}{\log_b a}$
  • Log of product $\log_b(pq) = \log_b p + \log_b q$
  • Log of power $\log_b(p^n) = n \log_b p$
  • Log of quotient $\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y$
  • Logarithmic differentiation formula $\frac{dy}{dx} = y\left[v(x) \cdot \frac{u'(x)}{u(x)} + v'(x) \cdot \log u(x)\right]$
  • Parametric differentiation $\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{g'(t)}{f'(t)}, \; f'(t) \neq 0$
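As a sanity check on the standard derivative d/dx(sin⁻¹ x) = 1/√(1 − x²), a central difference can be compared against the closed form; the test point and step size below are arbitrary choices:

```python
import math

# Sketch: numerical check of d/dx(asin x) = 1/sqrt(1 - x^2) via central difference.
def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
numeric  = central_diff(math.asin, x)
analytic = 1 / math.sqrt(1 - x * x)
print(numeric, analytic)     # the two values agree to many decimal places
```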
📜 Important Theorems
  • 📌
    Theorem 1: Let f and g be two real functions continuous at a real number c. Then: (1) f + g is continuous at x = c, (2) f - g is continuous at x = c, (3) f · g is continuous at x = c, (4) f/g is continuous at x = c (provided g(c) ≠ 0)
  • 📌
    Theorem 2 (Composition): Suppose f and g are real valued functions such that (f o g) is defined at c. If g is continuous at c and if f is continuous at g(c), then (f o g) is continuous at c
  • 📌
    Theorem 3: If a function f is differentiable at a point c, then it is also continuous at that point.
  • 📌
    Corollary 1: Every differentiable function is continuous. The converse is NOT true: f(x) = |x| is continuous at x = 0 but not differentiable at x = 0.
  • 📌
    Theorem 4 (Chain Rule): Let f be a real valued function which is a composite of two functions u and v; i.e., f = v o u. Suppose t = u(x) and if both dt/dx and dv/dt exist, then df/dx = (dv/dt) . (dt/dx)
  • 📌
    Theorem 5*: (1) The derivative of eˣ w.r.t. x is eˣ; i.e., d/dx(eˣ) = eˣ. (2) The derivative of log x w.r.t. x is 1/x; i.e., d/dx(log x) = 1/x. (*Please see supplementary material on Page 222)
  • 📌
    Rolle's Theorem: If f: [a, b] → R is continuous on [a, b], differentiable on (a, b), and f(a) = f(b), then there exists some c ∈ (a, b) such that f'(c) = 0
  • 📌
    Mean Value Theorem (Lagrange's MVT): If f: [a, b] → R is continuous on [a, b] and differentiable on (a, b), then there exists some c ∈ (a, b) such that f'(c) = [f(b) - f(a)]/(b - a)
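The |x| counterexample from Corollary 1 (continuous at 0 but not differentiable there) can be seen numerically: the two one-sided difference quotients settle on different values. The step size is an arbitrary choice:

```python
# Sketch: f(x) = |x| at x = 0 — one-sided difference quotients disagree,
# so f'(0) does not exist even though f is continuous at 0.
f = abs
h = 1e-8
right = (f(0 + h) - f(0)) / h      # quotient from the right -> +1
left  = (f(0 - h) - f(0)) / (-h)   # quotient from the left  -> -1
print(right, left)                 # 1.0 -1.0: no single limit, not differentiable
```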

💡 Quick Tips & Memory Aids

  • Every constant function f(x) = k is continuous at every real number
  • The identity function f(x) = x is continuous at every real number
  • Every polynomial function is continuous
  • f(x) = |x| is a continuous function
  • f(x) = 1/x is continuous at every point of its domain (x ≠ 0)
  • The greatest integer function f(x) = [x] is discontinuous at every integer
  • If g is continuous and λ is a real number, then (λ · g)(x) = λ · g(x) is also continuous

📝 Exercise Overview

  • 137 total questions across 8 exercises
Chapter 6
Application of Derivatives
Rate of Change of Quantities · Increasing and Decreasing Functions · Maxima and Minima · Maximum and Minimum Values of a Function in a Closed Interval
8 marks

🎯 Must-Read — Key concepts for this chapter

  1. Chapter 5 covered finding derivatives; this chapter covers applications of those derivatives
  2. Applications include: (i) rate of change, (ii) tangent and normal equations, (iii) turning points for maxima/minima, (iv) increasing/decreasing intervals, (v) approximations
  3. dy/dx is positive if y increases as x increases
  4. dy/dx is negative if y decreases as x increases
  5. The rate of change of y with respect to x can be calculated using the rates of change of y and x both with respect to t (Chain Rule)

📖 Chapter Summary

  • Rate of Change: If y = f(x), then dy/dx (or f'(x)) represents the rate of change of y with respect to x, and (dy/dx) at x = x₀ represents the rate of change of y with respect to x at x = x₀.
  • Marginal Cost: The instantaneous rate of change of total cost with respect to output. If C(x) is the total cost of producing x units, then Marginal Cost (MC) = dC/dx.
  • Marginal Revenue: The rate of change of total revenue with respect to the number of units sold. If R(x) is the total revenue from selling x units, then Marginal Revenue (MR) = dR/dx.
  • Increasing function: A function f is increasing on an interval I if x₁ < x₂ in I implies f(x₁) ≤ f(x₂) for all x₁, x₂ ∈ I.
  • Decreasing function: A function f is decreasing on an interval I if x₁ < x₂ in I implies f(x₁) ≥ f(x₂) for all x₁, x₂ ∈ I.
  • Constant function: A function f is constant on an interval I if f(x) = c for all x ∈ I, where c is a constant.
  • Strictly increasing function: A function f is strictly increasing on an interval I if x₁ < x₂ in I implies f(x₁) < f(x₂) for all x₁, x₂ ∈ I.
  • Strictly decreasing function: A function f is strictly decreasing on an interval I if x₁ < x₂ in I implies f(x₁) > f(x₂) for all x₁, x₂ ∈ I.
  • Increasing/Decreasing at a point: Let x₀ be a point in the domain of f. Then f is said to be increasing (decreasing) at x₀ if there exists an open interval I containing x₀ such that f is increasing (decreasing) in I.
  • Maximum value: f is said to have a maximum value in an interval I if there exists a point c ∈ I such that f(c) > f(x) for all x ∈ I. The number f(c) is called the maximum value and c is called the point of maximum value.
  • Minimum value: f is said to have a minimum value in an interval I if there exists a point c ∈ I such that f(c) < f(x) for all x ∈ I. The number f(c) is called the minimum value and c is called the point of minimum value.
  • Extreme value: f is said to have an extreme value in I if there exists a point c ∈ I such that f(c) is either a maximum or a minimum value of f in I. The number f(c) is called an extreme value and c is called an extreme point.
  • Local maxima: c is a point of local maxima if there is an h > 0 such that f(c) ≥ f(x) for all x ∈ (c-h, c+h), x ≠ c. The value f(c) is called the local maximum value.
  • Local minima: c is a point of local minima if there is an h > 0 such that f(c) ≤ f(x) for all x ∈ (c-h, c+h). The value f(c) is called the local minimum value.
  • Critical point: A point c in the domain of a function f at which either f'(c) = 0 or f is not differentiable is called a critical point of f.
📐 Key Formulas
  • Rate of change using Chain Rule $\frac{dy}{dx} = \frac{dy/dt}{dx/dt}, \; \frac{dx}{dt} \neq 0$
  • Rate of change of area of circle $\frac{dA}{dr} = 2\pi r$
📜 Important Theorems
  • 📌
    First Derivative Test for Increasing/Decreasing: Let f be continuous on [a, b] and differentiable on (a, b). Then: (a) f is increasing on [a, b] if f'(x) ≥ 0 for each x ∈ (a, b) ┃ (b) f is decreasing on [a, b] if f'(x) ≤ 0 for each x ∈ (a, b) ┃ (c) f is a constant function on [a, b] if f'(x) = 0 for each x ∈ (a, b).
  • 📌
    Necessary condition for local extrema: Let f be a function defined on an open interval I. Suppose c ∈ I. If f has a local maxima or a local minima at x = c, then either f'(c) = 0 or f is not differentiable at c.
  • 📌
    First Derivative Test: Let f be a function defined on an open interval I. Let f be continuous at a critical point c ∈ I. Then: (i) If f'(x) changes sign from positive to negative as x increases through c (f'(x) > 0 to left, f'(x) < 0 to right), then c is a point of local maxima. (ii) If f'(x) changes sign from negative to positive as x increases through c (f'(x) < 0 to left, f'(x) > 0 to right), then c is a point of local minima. (iii) If f'(x) does not change sign as x increases through c, then c is neither a point of local maxima nor local minima (point of inflection).
  • 📌
    Second Derivative Test: Let f be a function defined on an interval I and c ∈ I. Let f be twice differentiable at c. Then: (i) x = c is a point of local maxima if f'(c) = 0 and f''(c) < 0. The value f(c) is local maximum value. (ii) x = c is a point of local minima if f'(c) = 0 and f''(c) > 0. The value f(c) is local minimum value. (iii) The test fails if f'(c) = 0 and f''(c) = 0. In this case, go back to the first derivative test.
  • 📌
    Existence of absolute extrema on a closed interval: Let f be a continuous function on an interval I = [a, b]. Then f has the absolute maximum value and attains it at least once in I. Also, f has the absolute minimum value and attains it at least once in I.
  • 📌
    Condition for absolute extrema at interior points: Let f be a differentiable function on a closed interval I and let c be any interior point of I. Then: (i) f'(c) = 0 if f attains its absolute maximum value at c. (ii) f'(c) = 0 if f attains its absolute minimum value at c.
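The Second Derivative Test above can be illustrated numerically. A minimal Python sketch follows; the function f(x) = x³ − 3x and all helper names are illustrative choices, not from the text.

```python
# Illustrative check of the Second Derivative Test on f(x) = x^3 - 3x
# (this example function is our own choice, not from the chapter).
def f(x):
    return x**3 - 3*x

def f1(x):          # f'(x) = 3x^2 - 3
    return 3*x**2 - 3

def f2(x):          # f''(x) = 6x
    return 6*x

critical_points = [-1.0, 1.0]       # roots of f'(x) = 0

for c in critical_points:
    assert abs(f1(c)) < 1e-12       # f'(c) = 0 at a critical point
    if f2(c) < 0:
        kind = "local maximum"      # case (i) of the test
    elif f2(c) > 0:
        kind = "local minimum"      # case (ii) of the test
    else:
        kind = "test fails; use the first derivative test"  # case (iii)
    print(f"x = {c}: f''(c) = {f2(c)}, {kind}, f(c) = {f(c)}")
```

Running this classifies x = −1 as a point of local maxima and x = 1 as a point of local minima, matching cases (i) and (ii) of the test.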

💡 Quick Tips & Memory Aids

  • For a cube with side x: V = x³, S = 6x²; if dV/dt = 9 cm³/s and x = 10 cm, then dS/dt = 3.6 cm²/s
  • For a circle with radius r: A = πr²; dA/dt = 2πr(dr/dt)
  • If f'(x) > 0 for x in an interval (excluding endpoints) and f is continuous, then f is increasing
  • If f'(x) < 0 for x in an interval (excluding endpoints) and f is continuous, then f is decreasing
  • To find intervals: set f'(x) = 0, find critical points, test the sign of f'(x) in each interval
  • f(x) = x³ - 3x² + 4x is increasing on R since f'(x) = 3(x-1)² + 1 > 0 always
  • cos x is decreasing on (0, π) and increasing on (π, 2π)
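The interval procedure in the tips above (find critical points, then test the sign of f' between them) can be sketched as follows; the quadratic f(x) = x² − 4x + 6 is an illustrative choice.

```python
# Sketch of the sign-testing procedure for f(x) = x^2 - 4x + 6
# (an illustrative example; the interval endpoints below stand in for ±infinity).
def fprime(x):
    return 2*x - 4          # f'(x) for f(x) = x^2 - 4x + 6

critical_points = [2.0]     # solutions of f'(x) = 0

# Test the sign of f' at one point inside each interval between critical points.
intervals = [(-10.0, 2.0), (2.0, 10.0)]
for a, b in intervals:
    mid = (a + b) / 2
    behaviour = "increasing" if fprime(mid) > 0 else "decreasing"
    print(f"on ({a}, {b}): f'({mid}) = {fprime(mid)} -> {behaviour}")
```

For this f the sketch reports decreasing to the left of x = 2 and increasing to the right, consistent with the first derivative test for intervals.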

📝 Exercise Overview

  • 82 total questions across 4 exercises
Chapter 7
Integrals
Integration as an Inverse Process of Differentiation · Methods of Integration · Integrals of Some Particular Functions · Integration by Partial Fractions · Integration by Parts · Definite Integral · Fundamental Theorem of Calculus · Evaluation of Definite Integrals by Substitution · Some Properties of Definite Integrals
8marks

🎯 Must-Read — Key concepts for this chapter

  1. Integration is the inverse process of differentiation.
  2. Integral calculus was developed to solve problems of finding functions from derivatives and finding areas under curves.
  3. d/dx[∫ f(x) dx] = f(x) and ∫ f'(x) dx = f(x) + C: differentiation and integration are inverses of each other.
  4. Two indefinite integrals with the same derivative lead to the same family of curves and so they are equivalent.
  5. ∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx (Property III)

📖 Chapter Summary

ConceptKey Fact
Anti derivative (Primitive)A function F is called an anti derivative of f if F'(x) = f(x) for all x in the domain.
Indefinite IntegralThe symbol ∫ f(x) dx represents the entire class of anti derivatives, read as the indefinite integral of f with respect to x.
IntegrandThe function f(x) in the expression ∫ f(x) dx.
Variable of integrationThe variable x in the expression ∫ f(x) dx.
Constant of IntegrationAny real number C, considered as a constant function, that appears in the general anti derivative F(x) + C.
Proper rational functionP(x)/Q(x) where degree of P(x) is less than degree of Q(x).
Improper rational functionP(x)/Q(x) where degree of P(x) is greater than or equal to degree of Q(x).
Definite IntegralIf f has an anti derivative F on [a, b], then ∫ from a to b of f(x) dx = F(b) - F(a).
Lower limitThe value a in ∫ from a to b of f(x) dx.
Upper limitThe value b in ∫ from a to b of f(x) dx.
Area functionA(x) = ∫ from a to x of f(x) dx, representing the area under the curve from a to x.
Integrationthe inverse process of differentiation. Let (d/dx)F(x) = f(x). Then ∫ f(x) dx = F(x) + C. These are called indefinite integrals or general integrals. C is called constant of integration. All these integrals differ by a constant.
Some properties of indefinite integrals: (1) ∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx. (2) For any real number k, ∫ k f(x) dx = k ∫ f(x) dx. More generally, ∫ [k1 f1(x) + k2 f2(x) + ... + kn fn(x)] dx = k1 ∫ f1(x) dx + ... + kn ∫ fn(x) dx.
Some standard integrals: (i) ∫ xⁿ dx = xⁿ⁺¹/(n+1) + C, n ≠ -1. (ii) ∫ cos x dx = sin x + C. (iii) ∫ sin x dx = -cos x + C. (iv) ∫ sec² x dx = tan x + C. (v) ∫ cosec² x dx = -cot x + C. (vi) ∫ sec x tan x dx = sec x + C. (vii) ∫ cosec x cot x dx = -cosec x + C. (viii) ∫ dx/√(1-x²) = sin⁻¹ x + C. (ix) ∫ dx/√(1-x²) = -cos⁻¹ x + C. (x) ∫ dx/(1+x²) = tan⁻¹ x + C. (xi) ∫ dx/(1+x²) = -cot⁻¹ x + C. (xii) ∫ eˣ dx = eˣ + C. (xiii) ∫ aˣ dx = aˣ/log a + C. (xiv) ∫ (1/x) dx = log |x| + C.
Integration by partial fractions: P(x)/Q(x) where P, Q are polynomials and Q(x) ≠ 0. If degree of P ≥ degree of Q, divide to get T(x) + P1(x)/Q(x). Decompose into partial fractions of five standard types based on the nature of factors in the denominator.
📐 Key Formulas
  • Power Rule $\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \; n \neq -1$ Particularly, ∫ dx = x + C
  • Cosine integral $\int \cos x\,dx = \sin x + C$
  • Sine integral $\int \sin x\,dx = -\cos x + C$
  • Secant squared integral $\int \sec^2 x\,dx = \tan x + C$
  • Cosecant squared integral $\int \csc^2 x\,dx = -\cot x + C$
  • Secant-tangent integral $\int \sec x \tan x\,dx = \sec x + C$
  • Cosecant-cotangent integral $\int \csc x \cot x\,dx = -\csc x + C$
  • Inverse sine integral $\int \frac{dx}{\sqrt{1 - x^2}} = \sin^{-1} x + C$
  • Negative inverse cosine integral $\int \frac{dx}{\sqrt{1 - x^2}} = -\cos^{-1} x + C$
  • Inverse tangent integral $\int \frac{dx}{1 + x^2} = \tan^{-1} x + C$
  • Exponential integral $\int e^x\,dx = e^x + C$
  • Logarithmic integral $\int \frac{1}{x}\,dx = \log |x| + C$
  • General exponential integral $\int a^x\,dx = \frac{a^x}{\log a} + C$
  • $\int \frac{dx}{x^2 - a^2} = \frac{1}{2a} \log \left|\frac{x-a}{x+a}\right| + C$
  • $\int \frac{dx}{a^2 - x^2} = \frac{1}{2a} \log \left|\frac{a+x}{a-x}\right| + C$
  • $\int \frac{dx}{x^2 + a^2} = \frac{1}{a} \tan^{-1}\!\left(\frac{x}{a}\right) + C$
  • $\int \frac{dx}{\sqrt{x^2 - a^2}} = \log \left|x + \sqrt{x^2 - a^2}\right| + C$
  • $\int \frac{dx}{\sqrt{a^2 - x^2}} = \sin^{-1}\!\left(\frac{x}{a}\right) + C$
  • $\int \frac{dx}{\sqrt{x^2 + a^2}} = \log \left|x + \sqrt{x^2 + a^2}\right| + C$
  • Distinct linear factors $\frac{px+q}{(x-a)(x-b)} = \frac{A}{x-a} + \frac{B}{x-b}, \; a \neq b$ Two distinct linear factors in the denominator
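The distinct-linear-factor decomposition above can be computed directly: substituting x = a and x = b into px + q = A(x − b) + B(x − a) isolates A and B. A minimal sketch, with an illustrative example and our own helper name:

```python
# Decompose (px+q)/((x-a)(x-b)) into A/(x-a) + B/(x-b), a != b.
# Substituting x = a kills the B term; x = b kills the A term.
def partial_fractions(p, q, a, b):
    A = (p*a + q) / (a - b)     # from p*a + q = A*(a - b)
    B = (p*b + q) / (b - a)     # from p*b + q = B*(b - a)
    return A, B

# Example: (3x+5)/((x-1)(x-2)) = A/(x-1) + B/(x-2)
A, B = partial_fractions(3, 5, 1, 2)
print(A, B)     # A = -8.0, B = 11.0

# Verify the identity at an arbitrary test point:
x = 5.0
lhs = (3*x + 5) / ((x - 1)*(x - 2))
rhs = A/(x - 1) + B/(x - 2)
assert abs(lhs - rhs) < 1e-12
```

The same substitution trick underlies the "equate numerators, then substitute suitable values of x" tip later in this chapter.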
📜 Important Theorems
  • 📌
    First Fundamental Theorem of Integral Calculus: Let f be a continuous function on the closed interval [a, b] and let A(x) be the area function. Then A'(x) = f(x), for all x ∈ [a, b].
  • 📌
    Second Fundamental Theorem of Integral Calculus: Let f be continuous function defined on the closed interval [a, b] and F be an anti derivative of f. Then ∫ from a to b of f(x) dx = [F(x)] from a to b = F(b) - F(a).

💡 Quick Tips & Memory Aids

  • For any real number k, ∫ k f(x) dx = k ∫ f(x) dx (Property IV)
  • ∫ [k1·f1(x) + k2·f2(x) + ... + kn·fn(x)] dx = k1·∫ f1(x) dx + k2·∫ f2(x) dx + ... + kn·∫ fn(x) dx (Property V: generalization)
  • (7) To find ∫ dx/(ax² + bx + c), write ax² + bx + c = a[(x + b/2a)² + (c/a - b²/4a²)] and reduce to standard form.
  • (8) To find ∫ dx/√(ax² + bx + c), proceed similarly as in (7).
  • (9) To find ∫ (px+q)/(ax²+bx+c) dx, express px + q = A·d/dx(ax²+bx+c) + B = A(2ax+b) + B, then solve for A, B.
  • (10) To find ∫ (px+q)/√(ax²+bx+c) dx, proceed as in (9) and reduce to standard forms.
  • To find constants A, B, C, equate numerators after multiplying both sides by the denominator, then compare coefficients or substitute suitable values of x.
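The Second Fundamental Theorem can be checked numerically: for f(x) = 1/(1 + x²) on [0, 1], an antiderivative is F(x) = tan⁻¹ x, so the definite integral is F(1) − F(0) = π/4. The midpoint Riemann sum below is an independent approximation of the same integral (the example and step count are illustrative choices).

```python
import math

# f(x) = 1/(1+x^2); standard integral (x) gives antiderivative F(x) = arctan(x).
def f(x):
    return 1 / (1 + x*x)

# Midpoint Riemann sum over [0, 1] as an independent approximation.
n = 100000
h = 1.0 / n
riemann = sum(f((i + 0.5) * h) for i in range(n)) * h

# Second Fundamental Theorem: integral from 0 to 1 equals F(1) - F(0).
ftc_value = math.atan(1) - math.atan(0)

print(riemann, ftc_value, math.pi / 4)
assert abs(riemann - ftc_value) < 1e-6
```

Both numbers agree with π/4 ≈ 0.7853981..., illustrating that the antiderivative route and the limit-of-a-sum route give the same value.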

📝 Exercise Overview

  • 261 total questions across 11 exercises
Chapter 8
Application of Integrals
Area under Simple Curves
5marks

🎯 Must-Read — Key concepts for this chapter

  1. Application of definite integrals to find the area under curves is a specific use of integration as the limit of a sum.
  2. We also find the area bounded by these curves.
  3. If the curve lies below the x-axis, the definite integral gives a negative value; take the absolute value for the area.
  4. If the curve crosses the x-axis, split the integral at the crossing points and take absolute values of the negative parts.
  5. The choice between vertical strips (integrating with respect to x) and horizontal strips (integrating with respect to y) depends on the curve and which is more convenient.

📖 Chapter Summary

ConceptKey Fact
Elementary area (vertical strip)dA = y dx = f(x) dx, a thin vertical strip of height y and width dx under the curve y = f(x).
Elementary area (horizontal strip)dA = x dy = g(y) dy, a thin horizontal strip of width x and height dy beside the curve x = g(y).
The area of the region bounded by the curve y = f(x), x-axis and the lines x = a and x = b (b > a)given by the formula: Area = ∫ from a to b of y dx = ∫ from a to b of f(x) dx.
The area of the region bounded by the curve x = φ(y), y-axis and the lines y = c, y = dgiven by the formula: Area = ∫ from c to d of x dy = ∫ from c to d of φ(y) dy.
📐 Key Formulas
  • Area using vertical strips $A = \int_a^b y\,dx = \int_a^b f(x)\,dx$ Area bounded by curve y = f(x), x-axis, and lines x = a, x = b
  • Area using horizontal strips $A = \int_c^d x\,dy = \int_c^d g(y)\,dy$ Area bounded by curve x = g(y), y-axis, and lines y = c, y = d
  • Area when curve is below x-axis $A = \left|\int_a^b f(x)\,dx\right|$ If f(x) < 0 from x = a to x = b, the area is the absolute value of the integral.
  • Area when curve crosses x-axis $A = |A_{1}| + A_{2}$ When part of the curve is above and part below x-axis, take absolute value of negative area and add to positive area.
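The crossing-curve formula above can be illustrated with y = sin x on [0, 2π], a standard example: the signed integral over the whole interval is 0 because the positive and negative parts cancel, but the geometric area is |A₁| + A₂ = 2 + 2 = 4. Using F(x) = −cos x as the antiderivative:

```python
import math

# Signed definite integral of sin x from a to b via F(x) = -cos x.
def signed_integral(a, b):
    return (-math.cos(b)) - (-math.cos(a))      # F(b) - F(a)

# Over [0, 2*pi] the positive hump and the negative hump cancel.
whole = signed_integral(0, 2*math.pi)           # signed value, ~0

# Split at the crossing point x = pi and take absolute values.
area = abs(signed_integral(0, math.pi)) + abs(signed_integral(math.pi, 2*math.pi))

print(whole, area)      # signed integral ~0, geometric area 4
```

This is exactly why the chapter insists on splitting the integral at the points where the curve crosses the x-axis.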

📝 Exercise Overview

  • 9 total questions across 2 exercises
  • 4 multiple-choice questions
Chapter 9
Differential Equations
Basic Concepts · Order of a Differential Equation · Degree of a Differential Equation · General and Particular Solutions of a Differential Equation · Methods of Solving First Order, First Degree Differential Equations · Differential Equations with Variables Separable · Homogeneous Differential Equations · Linear Differential Equations
5marks

🎯 Must-Read — Key concepts for this chapter

  1. An equation involving derivative(s) of the dependent variable with respect to independent variable(s) is called a differential equation.
  2. x(dy/dx) + y = 0 is a differential equation because it involves a derivative of y with respect to x.
  3. An ordinary differential equation involves derivatives with respect to only one independent variable.
  4. 2(d²y/dx²) + (dy/dx)³ = 0 is an ordinary differential equation.
  5. Partial differential equations involve derivatives with respect to more than one independent variable (not covered in this chapter).

📖 Chapter Summary

ConceptKey Fact
Order of a Differential EquationThe order of the highest order derivative of the dependent variable with respect to the independent variable involved in the given differential equation.
Degree of a Differential EquationWhen a differential equation is a polynomial equation in derivatives, the degree is the highest power (positive integral index) of the highest order derivative involved in the given differential equation.
General SolutionThe solution which contains arbitrary constants is called the general solution (primitive) of the differential equation. It contains as many arbitrary constants as the order of the equation.
Particular SolutionThe solution free from arbitrary constants, i.e., the solution obtained from the general solution by giving particular values to the arbitrary constants, is called a particular solution of the differential equation.
Variable Separable MethodA method of solving first order differential equations where dy/dx = F(x,y) can be written as dy/dx = g(x)·h(y), allowing separation of variables so that all terms involving y are on one side and all terms involving x are on the other side.
Homogeneous FunctionA function F(x, y) is said to be a homogeneous function of degree n if F(λx, λy) = λⁿF(x, y) for any nonzero constant λ.
Homogeneous Differential EquationA differential equation of the form dy/dx = F(x, y) is said to be homogeneous if F(x, y) is a homogeneous function of degree zero.
First Order Linear Differential EquationA differential equation of the form dy/dx + Py = Q, where P and Q are constants or functions of x only.
Integrating Factor (I.F.)The function g(x) = e^(∫P dx) which, when multiplied to both sides of the linear differential equation dy/dx + Py = Q, makes the LHS the derivative of y·e^(∫P dx). It is denoted as I.F.
An equation involving derivatives of the dependent variable with respect to independent variable (variables)known as a differential equation.
Order of a differential equationthe order of the highest order derivative occurring in the differential equation.
Degree of a differential equationdefined if it is a polynomial equation in its derivatives.
Degree (when defined) of a differential equationthe highest power (positive integer only) of the highest order derivative in it.
A function which satisfies the given differential equationcalled its solution. The solution which contains as many arbitrary constants as the order of the differential equation is called a general solution and the solution free from arbitrary constants is called a particular solution.
Variable separable methodused to solve an equation in which the variables can be separated completely, i.e. terms containing y should remain with dy and terms containing x should remain with dx.
📐 Key Formulas
  • Variable Separable Form $\frac{dy}{dx} = h(y) \cdot g(x)$ Standard form for variable separable equations
  • Separated Form $\frac{1}{h(y)}\,dy = g(x)\,dx$ After separating the variables
  • General Solution $\int \frac{1}{h(y)}\,dy = \int g(x)\,dx + C$ Integrate both sides to get the solution; H(y) = G(x) + C
  • Substitution for dy/dx form $y = vx, \; \frac{dy}{dx} = v + x\frac{dv}{dx}$ Used when dy/dx = g(y/x)
  • Substitution for dx/dy form $x = vy, \; \frac{dx}{dy} = v + y\frac{dv}{dy}$ Used when dx/dy = h(x/y)
  • Reduced form $x\frac{dv}{dx} = g(v) - v, \; \frac{dv}{g(v) - v} = \frac{dx}{x}$ After substitution, separate variables in v and x
  • General Solution $\int \frac{dv}{g(v) - v} = \int \frac{1}{x}\,dx + C$ Integrate and replace v by y/x to get the solution
  • Standard Form (Type 1) $\frac{dy}{dx} + Py = Q$ P, Q are constants or functions of x only
  • Integrating Factor (Type 1) $\text{I.F.} = e^{\int P\,dx}$ Integrating factor for dy/dx + Py = Q
  • General Solution (Type 1) $y \cdot (\text{I.F.}) = \int (Q \times \text{I.F.})\,dx + C$ y · e^(∫P dx) = ∫(Q · e^(∫P dx)) dx + C
  • Standard Form (Type 2) $\frac{dx}{dy} + P_1 x = Q_1$ P₁, Q₁ are constants or functions of y only
  • Integrating Factor (Type 2) $\text{I.F.} = e^{\int P_1\,dy}$ Integrating factor for dx/dy + P₁x = Q₁
  • General Solution (Type 2) $x \cdot (\text{I.F.}) = \int (Q_1 \times \text{I.F.})\,dy + C$ x · e^(∫P₁ dy) = ∫(Q₁ · e^(∫P₁ dy)) dy + C
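As a worked check of the Type 1 method above, take dy/dx + y = x (our own example, with P = 1 and Q = x). The integrating factor is eˣ, and y·eˣ = ∫x eˣ dx = (x − 1)eˣ + C, giving the general solution y = x − 1 + Ce⁻ˣ. The sketch below verifies that this family satisfies the equation for several values of C:

```python
import math

# General solution of dy/dx + y = x obtained via the integrating factor e^x:
# y = x - 1 + C * e^(-x).
def y(x, C):
    return x - 1 + C * math.exp(-x)

def dydx(x, C):                 # derivative of the solution above
    return 1 - C * math.exp(-x)

# The solution must satisfy dy/dx + y = x for every C (arbitrary constant)
# and every x, since it is a general solution.
for C in (-2.0, 0.0, 3.5):
    for x in (0.0, 1.0, 2.5):
        assert abs(dydx(x, C) + y(x, C) - x) < 1e-12
print("general solution verified for sample C and x")
```

Note the single arbitrary constant C, matching the order (one) of the equation, as the summary states.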

💡 Quick Tips & Memory Aids

  • Notation: dy/dx = y', d²y/dx² = y'', d³y/dx³ = y'''
  • For higher order derivatives: yₙ denotes dⁿy/dxⁿ (nth order derivative)
  • dy/dx = eˣ has order 1 (highest derivative is first order)
  • d²y/dx² + y = 0 has order 2 (highest derivative is second order)
  • (d³y/dx³) + x²(d²y/dx²)³ = 0 has order 3 (highest derivative is third order)
  • Order is always a positive integer.
  • d³y/dx³ + 2(d²y/dx²)² - dy/dx + y = 0: order 3, degree 1 (polynomial in y''', y'', y'; highest power of y''' is 1)

📝 Exercise Overview

  • 98 total questions across 6 exercises
Chapter 10
Vector Algebra
Some Basic Concepts · Types of Vectors · Addition of Vectors · Multiplication of a Vector by a Scalar · Product of Two Vectors
5marks

🎯 Must-Read — Key concepts for this chapter

  1. Scalars are real numbers representing magnitude only
  2. Vectors have both magnitude and direction
  3. This chapter covers basic concepts, operations on vectors, and their algebraic and geometric properties
  4. Since the length is never negative, the notation |a| < 0 has no meaning
  5. l² + m² + n² = 1 but a² + b² + c² ≠ 1 in general

📖 Chapter Summary

ConceptKey Fact
VectorA quantity that has magnitude as well as direction is called a vector. A directed line segment is a vector.
Position VectorThe vector OP having O (origin) as initial point and P as terminal point is called the position vector of point P with respect to origin O.
Direction CosinesThe cosine values of the direction angles α, β, γ that a vector makes with the positive x, y, z axes, denoted by l, m, n respectively.
Direction RatiosNumbers proportional to the direction cosines of a vector, denoted as a, b, c.
Zero VectorA vector whose initial and terminal points coincide, denoted as 0⃗. It has zero magnitude and cannot be assigned a definite direction (or may be regarded as having any direction). Vectors AA, BB represent the zero vector.
Unit VectorA vector whose magnitude is unity (1 unit). The unit vector in the direction of a given vector a is denoted by â.
Coinitial VectorsTwo or more vectors having the same initial point are called coinitial vectors.
Collinear VectorsTwo or more vectors are said to be collinear if they are parallel to the same line, irrespective of their magnitudes and directions.
Equal VectorsTwo vectors a and b are said to be equal if they have the same magnitude and direction regardless of the positions of their initial points, written as a = b.
Negative of a VectorA vector whose magnitude is the same as that of a given vector but direction is opposite to that of it. Vector BA is negative of vector AB, written as BA = -AB.
Free VectorsVectors that may be subject to parallel displacement without changing magnitude and direction. Throughout this chapter, we deal with free vectors only.
Triangle Law of Vector AdditionIf two vectors a and b are positioned so that the initial point of one coincides with the terminal point of the other, then the sum (resultant) a + b is represented by the third side of the triangle formed. In triangle ABC: AC = AB + BC.
Parallelogram Law of Vector AdditionIf two vectors a and b are represented in magnitude and direction by the two adjacent sides of a parallelogram, then their sum a + b is represented in magnitude and direction by the diagonal of the parallelogram through their common point.
Scalar MultiplicationThe product of vector a by scalar λ, denoted λa, is a vector collinear to a. The vector λa has direction same as (or opposite to) a according as λ is positive (or negative). Its magnitude is |λ| times the magnitude of a: |λa| = |λ||a|.
Component FormAny vector r = xî + yĵ + zk̂ is said to be in component form. Here x, y, z are scalar components and xî, yĵ, zk̂ are vector components along the respective axes. Also called rectangular components.
📐 Key Formulas
  • Magnitude of position vector $|\vec{OP}| = \sqrt{x^2 + y^2 + z^2}$
  • Direction cosines $\cos\alpha = \frac{x}{r}, \; \cos\beta = \frac{y}{r}, \; \cos\gamma = \frac{z}{r}$
  • Direction cosine identity $l^{2} + m^{2} + n^{2} = 1$
  • Triangle law $\vec{AC} = \vec{AB} + \vec{BC}$
  • Vector difference $\vec{a} - \vec{b} = \vec{AB} + \vec{BC'} \text{ where } \vec{BC'} = -\vec{BC}$
  • Sides of triangle sum to zero $\vec{AB} + \vec{BC} + \vec{CA} = \vec{AA} = \vec{0}$
  • Scalar multiplication magnitude $|\lambda \vec{a}| = |\lambda| \cdot |\vec{a}|$
  • Negative of a vector $\vec{a} + (-\vec{a}) = (-\vec{a}) + \vec{a} = \vec{0}$
  • Unit vector $\hat{a} = \frac{1}{|\vec{a}|} \cdot \vec{a}, \; \vec{a} \neq \vec{0}$
  • For any scalar k $k \cdot \vec{0} = \vec{0}$
  • Position vector in component form $\vec{OP} = x\hat{i} + y\hat{j} + z\hat{k}$
  • Magnitude from components $|\vec{r}| = |x\hat{i} + y\hat{j} + z\hat{k}| = \sqrt{x^2 + y^2 + z^2}$
  • Sum of vectors in component form $\vec{a} + \vec{b} = (a_1+b_1)\hat{i} + (a_2+b_2)\hat{j} + (a_3+b_3)\hat{k}$
  • Difference of vectors in component form $\vec{a} - \vec{b} = (a_1-b_1)\hat{i} + (a_2-b_2)\hat{j} + (a_3-b_3)\hat{k}$
  • Equality of vectors $\vec{a} = \vec{b} \iff a_1 = b_1, \; a_2 = b_2, \; a_3 = b_3$
  • Scalar multiplication in component form $\lambda\vec{a} = (\lambda a_1)\hat{i} + (\lambda a_2)\hat{j} + (\lambda a_3)\hat{k}$
  • Distributive law 1 $k\vec{a} + m\vec{a} = (k + m)\vec{a}$
  • Distributive law 2 $k(m\vec{a}) = (km)\vec{a}$
  • Distributive law 3 $k(\vec{a} + \vec{b}) = k\vec{a} + k\vec{b}$
  • Collinearity condition $\vec{b} = \lambda\vec{a} \iff \frac{b_1}{a_1} = \frac{b_2}{a_2} = \frac{b_3}{a_3} = \lambda$
📜 Important Theorems
  • 📌
    Commutative Property of Vector Addition: For any two vectors a and b: a + b = b + a
  • 📌
    Associative Property of Vector Addition: For any three vectors a, b, c: (a + b) + c = a + (b + c)
  • 📌
    Cauchy-Schwarz Inequality: For any two vectors a and b: |a · b| ≤ |a||b|
  • 📌
    Triangle Inequality: For any two vectors a and b: |a + b| ≤ |a| + |b|
  • 📌
    Collinearity from Triangle Inequality: If |a + b| = |a| + |b|, then |AC| = |AB| + |BC|, showing that A, B, C are collinear.

💡 Quick Tips & Memory Aids

  • Coordinates of point P(x,y,z) can be expressed as (lr, mr, nr)
  • Zero vector has zero magnitude and indeterminate direction
  • Unit vector â has |â| = 1
  • Coinitial vectors share the same starting point
  • Collinear vectors are parallel to the same line
  • Equal vectors have same magnitude AND same direction
  • Negative vector has same magnitude but opposite direction
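The component-form rules from this chapter (magnitude, unit vector, scalar multiplication, triangle inequality) can be sketched with 3D vectors as plain tuples; all helper names below are our own.

```python
import math

# 3D vectors as (x, y, z) tuples.
def magnitude(v):
    x, y, z = v
    return math.sqrt(x*x + y*y + z*z)       # |r| = sqrt(x^2 + y^2 + z^2)

def add(u, v):                              # componentwise sum
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):                            # scalar multiplication
    return tuple(k * a for a in v)

def unit(v):                                # a_hat = a / |a|, a != 0
    return scale(1 / magnitude(v), v)

a = (1.0, 2.0, 2.0)
b = (3.0, -4.0, 0.0)
print(magnitude(a))                 # sqrt(1 + 4 + 4) = 3.0
print(magnitude(unit(a)))           # 1 up to rounding, for any nonzero vector

# Triangle inequality: |a + b| <= |a| + |b|
assert magnitude(add(a, b)) <= magnitude(a) + magnitude(b) + 1e-12
# |lambda * a| = |lambda| * |a|
assert abs(magnitude(scale(-2.0, a)) - 2.0 * magnitude(a)) < 1e-12
```

Equality in the triangle inequality would mean the two vectors point the same way, which is the collinearity observation stated in the theorems above.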

📝 Exercise Overview

  • 73 total questions across 5 exercises
Chapter 11
Three Dimensional Geometry
Direction Cosines and Direction Ratios of a Line · Equation of a Line in Space · Angle between Two Lines · Shortest Distance between Two Lines
5marks

🎯 Must-Read — Key concepts for this chapter

  1. In Class XI, Analytical Geometry in two dimensions and the introduction to three dimensional geometry used Cartesian methods only
  2. This chapter uses vector algebra for 3D geometry
  3. For any line, if a, b, c are direction ratios, then ka, kb, kc (k ≠ 0) are also direction ratios
  4. Any two sets of direction ratios of a line are proportional
  5. There are infinitely many sets of direction ratios for any line

📖 Chapter Summary

ConceptKey Fact
Direction cosinesIf a directed line L passes through the origin making angles α, β, γ with x, y, z axes respectively, then cos(α), cos(β), cos(γ) are called direction cosines, denoted by l, m, n
Direction ratiosAny three numbers a, b, c which are proportional to the direction cosines l, m, n of a line. If l, m, n are direction cosines, then a = λl, b = λm, c = λn for any nonzero λ ∈ R
Skew linesLines in space which are neither intersecting nor parallel. Such pairs of lines are non-coplanar.
Shortest distance between two linesThe line segment joining a point on one line to a point on the other line such that its length is the smallest possible.
Direction cosines of a line are the cosines of the angles made by the line with the positive directions of the coordinate axes.
If l, m, n are the direction cosines of a line, then l² + m² + n² = 1.
Direction cosines of a line joining two points P(x1,y1,z1) and Q(x2,y2,z2) are (x2-x1)/PQ, (y2-y1)/PQ, (z2-z1)/PQ where PQ = √((x2-x1)² + (y2-y1)² + (z2-z1)²).
Direction ratios of a line are the numbers which are proportional to the direction cosines of a line.
If l, m, n are the direction cosines and a, b, c are the direction ratios of a line then l = a/√(a²+b²+c²), m = b/√(a²+b²+c²), n = c/√(a²+b²+c²).
Skew lines are lines in space which are neither parallel nor intersecting. They lie in different planes.
Angle between skew linesthe angle between two intersecting lines drawn from any point (preferably through the origin) parallel to each of the skew lines.
If l1, m1, n1 and l2, m2, n2 are the direction cosines of two lines and θ is the acute angle between them, then cos(θ) = |l1·l2 + m1·m2 + n1·n2|.
If a1, b1, c1 and a2, b2, c2 are the direction ratios of two lines and θ is the acute angle between them, then cos(θ) = |a1·a2 + b1·b2 + c1·c2| / (√(a1²+b1²+c1²) · √(a2²+b2²+c2²)).
Vector equation of a line through a point with position vector a and parallel to vector b is r = a + λb.
Equation of a line through point (x1,y1,z1) having direction cosines l, m, n is (x-x1)/l = (y-y1)/m = (z-z1)/n.
📐 Key Formulas
  • Direction cosines identity $l^{2} + m^{2} + n^{2} = 1$ Sum of squares of direction cosines equals 1
  • Relation between direction ratios and direction cosines $\frac{l}{a} = \frac{m}{b} = \frac{n}{c} = \pm\frac{1}{\sqrt{a^2 + b^2 + c^2}}$ Connecting direction ratios (a,b,c) to direction cosines (l,m,n)
  • Direction cosines from direction ratios $l = \pm\frac{a}{\sqrt{a^2+b^2+c^2}}, \; m = \pm\frac{b}{\sqrt{a^2+b^2+c^2}}, \; n = \pm\frac{c}{\sqrt{a^2+b^2+c^2}}$ Direction cosines computed from direction ratios
  • Direction cosines of line joining two points $l = \frac{x_2-x_1}{PQ}, \; m = \frac{y_2-y_1}{PQ}, \; n = \frac{z_2-z_1}{PQ}$ Direction cosines of line segment joining P(x1,y1,z1) and Q(x2,y2,z2)
  • Direction ratios of line joining two points $x_{2}-x_{1}, \; y_{2}-y_{1}, \; z_{2}-z_{1}$ Direction ratios of line segment from P(x1,y1,z1) to Q(x2,y2,z2)
  • Vector equation of a line (point + direction) $\vec{r} = \vec{a} + \lambda\vec{b}$ Line through point with position vector a, parallel to vector b. λ is a real parameter.
  • Cartesian equation of a line (point + direction ratios) $\frac{x - x_1}{a} = \frac{y - y_1}{b} = \frac{z - z_1}{c}$ Line through (x1,y1,z1) with direction ratios a, b, c
  • Cartesian equation using direction cosines $\frac{x - x_1}{l} = \frac{y - y_1}{m} = \frac{z - z_1}{n}$ Line through (x1,y1,z1) with direction cosines l, m, n
  • Parametric equations of a line $x = x_1 + \lambda a, \; y = y_1 + \lambda b, \; z = z_1 + \lambda c$ Parametric form of line through (x1,y1,z1) with direction ratios a, b, c
  • Vector equation of a line through two points $\vec{r} = \vec{a} + \lambda(\vec{b} - \vec{a})$ Line through two points with position vectors a and b
  • Angle between two lines (direction ratios) $\cos\theta = \frac{|a_1 a_2 + b_1 b_2 + c_1 c_2|}{\sqrt{a_1^2+b_1^2+c_1^2} \cdot \sqrt{a_2^2+b_2^2+c_2^2}}$ Angle between lines with direction ratios (a1,b1,c1) and (a2,b2,c2)
  • Angle between two lines (direction cosines) $\cos\theta = |l_1 l_2 + m_1 m_2 + n_1 n_2|$ Angle between lines with direction cosines (l1,m1,n1) and (l2,m2,n2), since l²+m²+n²=1
  • sin(theta) using direction ratios $\sin\theta = \frac{\sqrt{(a_1 b_2-a_2 b_1)^2 + (b_1 c_2-b_2 c_1)^2 + (c_1 a_2-c_2 a_1)^2}}{\sqrt{a_1^2+b_1^2+c_1^2} \cdot \sqrt{a_2^2+b_2^2+c_2^2}}$ Sine of angle between two lines
  • sin(theta) using direction cosines $\sin\theta = \sqrt{(l_1 m_2-l_2 m_1)^2 + (m_1 n_2-m_2 n_1)^2 + (n_1 l_2-n_2 l_1)^2}$ Sine of angle between two lines using direction cosines
  • Angle between lines in vector form $\cos\theta = \frac{|\vec{b_1} \cdot \vec{b_2}|}{|\vec{b_1}| \cdot |\vec{b_2}|}$ For lines r = a1 + λ*b1 and r = a2 + μ*b2
  • Condition for perpendicular lines (direction ratios) $a_{1}a_{2} + b_{1}b_{2} + c_{1}c_{2} = 0$ Two lines are perpendicular when θ = 90 deg
  • Condition for parallel lines (direction ratios) $\frac{a_1}{a_2} = \frac{b_1}{b_2} = \frac{c_1}{c_2}$ Two lines are parallel when θ = 0
  • Shortest distance between skew lines (vector form) $d = \frac{|(\vec{b_1} \times \vec{b_2}) \cdot (\vec{a_2} - \vec{a_1})|}{|\vec{b_1} \times \vec{b_2}|}$ For lines r = a1 + λ*b1 and r = a2 + μ*b2
  • Shortest distance between skew lines (Cartesian form) $d = \frac{\left|\begin{vmatrix} x_2-x_1 & y_2-y_1 & z_2-z_1 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix}\right|}{\sqrt{(b_1 c_2-b_2 c_1)^2 + (c_1 a_2-c_2 a_1)^2 + (a_1 b_2-a_2 b_1)^2}}$ For lines (x-x1)/a1 = (y-y1)/b1 = (z-z1)/c1 and (x-x2)/a2 = (y-y2)/b2 = (z-z2)/c2
  • Distance between parallel lines $d = \frac{|\vec{b} \times (\vec{a_2} - \vec{a_1})|}{|\vec{b}|}$ For parallel lines r = a1 + λ*b and r = a2 + μ*b

💡 Quick Tips & Memory Aids

  • Two parallel lines have the same set of direction cosines
  • Direction cosines of x-axis: 1, 0, 0; y-axis: 0, 1, 0; z-axis: 0, 0, 1
  • If b = aî + bĵ + ck̂, then a, b, c are direction ratios of the line, and conversely
  • b should not be confused with |b|
  • Derivation of Cartesian form from vector form: eliminate λ from parametric equations
  • If lines do not pass through origin, take parallel lines through origin to find angle
  • The angle θ is always taken as the acute angle between the lines
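The vector-form skew-line distance formula, d = |(b₁ × b₂) · (a₂ − a₁)| / |b₁ × b₂|, can be sketched directly from components. The two lines below are our own worked example: r = (1,1,0) + λ(2,−1,1) and r = (2,1,−1) + μ(3,−5,2), for which b₁ × b₂ = (3,−1,−7) and d = 10/√59.

```python
import math

# Cross and dot products on (x, y, z) tuples.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

# d = |(b1 x b2) . (a2 - a1)| / |b1 x b2| for r = a1 + lam*b1, r = a2 + mu*b2.
def shortest_distance(a1, b1, a2, b2):
    n = cross(b1, b2)                           # perpendicular to both lines
    diff = tuple(p - q for p, q in zip(a2, a1)) # a2 - a1
    return abs(dot(n, diff)) / math.sqrt(dot(n, n))

d = shortest_distance((1, 1, 0), (2, -1, 1), (2, 1, -1), (3, -5, 2))
print(d)        # 10/sqrt(59), approximately 1.302
```

If the lines were parallel, b₁ × b₂ would be the zero vector and the parallel-lines formula above would apply instead.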

📝 Exercise Overview

  • 25 total questions across 3 exercises
Chapter 12
Linear Programming
Linear Programming Problem and its Mathematical Formulation · Mathematical Formulation of the Problem · Graphical Method of Solving Linear Programming Problems · Corner Point Method
5marks

🎯 Must-Read — Key concepts for this chapter

  1. In earlier classes, systems of linear equations and linear inequalities in two variables were studied
  2. Many applications in mathematics involve systems of inequalities/equations
  3. This chapter applies systems of linear inequalities/equations to solve real life problems
  4. Example: A furniture dealer wanting to maximise profit from buying tables and chairs with investment and storage constraints
  5. If the feasible region is unbounded, then a maximum or a minimum value of the objective function may not exist. However, if it exists, it must occur at a corner point of R (by Theorem 1).

📖 Chapter Summary

ConceptKey Fact
Optimisation ProblemsProblems which seek to maximise (or minimise) profit (or cost) form a general class of problems called optimisation problems. An optimisation problem may involve finding maximum profit, minimum cost, or minimum use of resources etc.
Linear Programming ProblemA special but very important class of optimisation problems is linear programming problem. Linear programming problems are of much interest because of their wide applicability in industry, commerce, management science etc.
Linear Programming Problem (LPP)A Linear Programming Problem is one that is concerned with finding the optimal value (maximum or minimum value) of a linear function (called objective function) of several variables (say x and y), subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities (called linear constraints). The term 'linear' implies that all the mathematical relations used in the problem are linear relations while the term 'programming' refers to the method of determining a particular programme or plan of action.
Objective FunctionA linear function Z = ax + by, where a, b are constants, which has to be maximised or minimised is called a linear objective function. Variables x and y are called decision variables.
ConstraintsThe linear inequalities or equations or restrictions on the variables of a linear programming problem are called constraints. The conditions x ≥ 0, y ≥ 0 are called non-negative restrictions.
Optimisation ProblemA problem which seeks to maximise or minimise a linear function (say of two variables x and y) subject to certain constraints as determined by a set of linear inequalities is called an optimisation problem. Linear programming problems are special type of optimisation problems.
Feasible RegionThe common region determined by all the constraints including non-negative constraints x ≥ 0, y ≥ 0 of a linear programming problem is called the feasible region (or solution region) for the problem. The region other than feasible region is called an infeasible region.
Feasible SolutionsPoints within and on the boundary of the feasible region represent feasible solutions of the constraints. Every point within and on the boundary of the feasible region represents a feasible solution to the problem.
Infeasible SolutionAny point outside the feasible region is called an infeasible solution.
Optimal (Feasible) SolutionAny point ∈ the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an optimal solution.
Bounded Feasible RegionA feasible region of a system of linear inequalities is said to be bounded if it can be enclosed within a circle.
Unbounded Feasible RegionA feasible region is called unbounded if it cannot be enclosed within a circle, meaning the feasible region does extend indefinitely ∈ any direction.
Corner PointA corner point of a feasible region is a point ∈ the region which is the intersection of two boundary lines.
Decision VariablesThe variables x and y ∈ the objective function Z = ax + by are called decision variables. They are sometimes also simply called variables and are non-negative.
A linear programming problemone that is concerned with finding the optimal value (maximum or minimum) of a linear function of several variables (called objective function) subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities (called linear constraints). Variables are sometimes called decision variables and are non-negative.
📐 Key Formulas
  • General Objective Function $Z = ax + by$ a, b are constants; x, y are decision variables; Z is to be maximised or minimised
📜 Important Theorems
  • 📌
    Theorem 1 (Fundamental Theorem of Linear Programming): Let R be the feasible region (convex polygon) for a linear programming problem and let Z = ax + by be the objective function. When Z has an optimal value (maximum or minimum), where the variables x and y are subject to constraints described by linear inequalities, this optimal value must occur at a corner point (vertex) of the feasible region.
  • 📌
    Theorem 2: Let R be the feasible region for a linear programming problem, and let Z = ax + by be the objective function. If R is bounded, then the objective function Z has both a maximum and a minimum value on R and each of these occurs at a corner point (vertex) of R.

💡 Quick Tips & Memory Aids

  • The feasible region is always a convex region.
  • The maximum (or minimum) solution of the objective function occurs at the vertex (corner) of the feasible region.
  • If two corner points produce the same maximum (or minimum) value of the objective function, then every point on the line segment joining these points will also give the same maximum (or minimum) value.
  • A problem may have multiple optimal solutions when two corner points give the same optimal value.
  • If there is no point satisfying all the constraints simultaneously, the problem has no feasible region and hence no feasible solution.
  • Step 1: Identify the decision variables (e.g., number of tables x and chairs y)
  • Step 2: Write non-negative constraints (x ≥ 0, y ≥ 0)
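The corner-point method implied by Theorems 1 and 2 can be sketched in code: list every intersection of two constraint boundary lines, keep the feasible ones, and evaluate Z at each corner. This is a minimal illustration, not NCERT's worked solution; the constraint numbers (a dealer-style problem with one budget line and one storage line) and all helper names are assumed.

```python
from itertools import combinations

# Hypothetical LPP (numbers assumed for illustration):
#   maximise Z = 250x + 75y  subject to
#   x + y <= 60 (storage), 250x + 50y <= 5000 (investment), x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 60), (250, 50, 5000), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner candidate: intersection of the boundary lines of two constraints."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundary lines never meet
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(pt):
    """A point is feasible if it satisfies every constraint."""
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Corner points = feasible intersections of pairs of boundary lines.
corners = {p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) and feasible(p)}

Z = lambda p: 250 * p[0] + 75 * p[1]  # objective function
best = max(corners, key=Z)
print(best, Z(best))  # (10.0, 50.0) 6250.0
```

For the assumed data the maximum of Z = 250x + 75y over the feasible region occurs at the corner (10, 50), exactly as Theorem 1 predicts for a bounded region.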

📝 Exercise Overview

  • 10 total questions across 1 exercise
Chapter 13
Probability
Conditional Probability · Properties of Conditional Probability · Multiplication Theorem on Probability · Independent Events · Bayes' Theorem · Random Variables and Probability Distribution · Bernoulli Trials and Binomial Distribution
8 marks

🎯 Must-Read — Key concepts for this chapter

  1. In earlier classes, probability was studied as a measure of uncertainty of events in a random experiment
  2. The axiomatic theory and the classical theory of probability are equivalent for equally likely outcomes
  3. Throughout this chapter, experiments have equally likely outcomes unless stated otherwise
  4. When event F is known to have occurred, the sample space reduces from S to F
  5. The elements of F favourable to event E are the common elements of E and F, i.e., E ∩ F

📖 Chapter Summary

Concept: Key Fact
Conditional Probability: If E and F are two events associated with the same sample space of a random experiment, the conditional probability of the event E given that F has occurred, i.e., P(E|F), is given by P(E|F) = P(E ∩ F) / P(F), provided P(F) ≠ 0.
Independent Events (Definition 2): Two events E and F are said to be independent if P(F|E) = P(F) provided P(E) ≠ 0, and P(E|F) = P(E) provided P(F) ≠ 0. Thus, we need P(E) ≠ 0 and P(F) ≠ 0.
Independent Events (Definition 3): Let E and F be two events associated with the same random experiment; then E and F are said to be independent if P(E ∩ F) = P(E) · P(F).
Mutually Independent Events (three events): Three events A, B and C are said to be mutually independent if: P(A ∩ B) = P(A) P(B), P(A ∩ C) = P(A) P(C), P(B ∩ C) = P(B) P(C), and P(A ∩ B ∩ C) = P(A) P(B) P(C). All four conditions must hold.
Partition of a Sample Space: A set of events E₁, E₂, ..., Eₙ is said to represent a partition of the sample space S if: (a) Eᵢ ∩ Eⱼ = φ for i ≠ j (pairwise disjoint), (b) E₁ ∪ E₂ ∪ ... ∪ Eₙ = S (exhaustive), and (c) P(Eᵢ) > 0 for all i = 1, 2, ..., n (nonzero probabilities).
Hypotheses: When Bayes' theorem is applied, the events E₁, E₂, ..., Eₙ are called hypotheses.
Prior Probability: The probability P(Eᵢ) is called the a priori probability of the hypothesis Eᵢ.
Posterior Probability: The conditional probability P(Eᵢ|A) is called the a posteriori probability of the hypothesis Eᵢ. It gives the probability of a particular 'cause' Eᵢ given that event A has occurred.
Random Variable: A random variable is a real valued function whose domain is the sample space of a random experiment. It assigns a real number to each outcome of the experiment.
Probability Distribution: The system of values of a random variable X together with the corresponding probabilities is called the probability distribution of X: X takes values x₁, x₂, ..., xₙ with probabilities p₁, p₂, ..., pₙ, where pᵢ ≥ 0 and ∑ pᵢ = 1.
Mean (Expected Value): The mean or expectation of a random variable X is defined as E(X) = μ = ∑ xᵢ pᵢ (summed over i = 1 to n), where xᵢ are the values of X and pᵢ are the corresponding probabilities.
Variance: The variance of a random variable X is defined as Var(X) = σ² = E(X²) − [E(X)]² = ∑ xᵢ² pᵢ − (∑ xᵢ pᵢ)².
Standard Deviation: The standard deviation of a random variable X is σ = √(Var(X)).
Bernoulli Trials: Trials of a random experiment are called Bernoulli trials if: (i) there are a finite number of trials, (ii) the trials are independent of each other, (iii) each trial has exactly two outcomes: success or failure, and (iv) the probability of success (p) remains the same in each trial.
Binomial Distribution: A random variable X taking values 0, 1, 2, ..., n is said to have a binomial distribution with parameters n and p if its probability distribution is given by P(X = r) = ⁿCᵣ pʳ qⁿ⁻ʳ, where q = 1 − p and r = 0, 1, 2, ..., n.
📐 Key Formulas
  • Conditional Probability $P(E|F) = \frac{P(E \cap F)}{P(F)}, \; P(F) \neq 0$ Also written as P(E|F) = n(E ∩ F) / n(F) for equally likely outcomes
  • Complement Rule $P(A') = 1 - P(A)$ Probability of event not occurring
  • Addition Theorem $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ For any two events A and B
  • Mutually Exclusive Events $P(A \cup B) = P(A) + P(B)$ When A ∩ B = ϕ (no common outcomes)
  • Property 1: P(S|F) $P(S|F) = P(F|F) = 1$ The conditional probability of the sample space S given F is 1
  • Property 2: Addition rule for conditional probability $P((A \cup B)|F) = P(A|F) + P(B|F) - P((A \cap B)|F)$ For disjoint events A and B: P((A ∪ B)|F) = P(A|F) + P(B|F)
  • Property 3: Complement rule $P(E'|F) = 1 - P(E|F)$ Follows from P(S|F) = 1 and E, E' being disjoint with E ∪ E' = S
  • Multiplication Rule (two events) $P(E \cap F) = P(E) \cdot P(F|E) = P(F) \cdot P(E|F)$ Provided P(E) ≠ 0 and P(F) ≠ 0
  • Multiplication Rule (three events) $P(E \cap F \cap G) = P(E) \cdot P(F|E) \cdot P(G|E \cap F)$ Can be extended to four or more events similarly
  • Test for Independence $P(E \cap F) = P(E) \cdot P(F)$ If this holds, E and F are independent events
  • Probability of at least one of two independent events $P(A \cup B) = 1 - P(A') \cdot P(B')$ For independent events A and B
  • Three Independent Events $P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C)$ Extends to n mutually independent events
  • Theorem of Total Probability $P(A) = \sum_{j=1}^{n} P(E_j) P(A|E_j)$ Where {E₁, E₂, ..., Eₙ} is a partition of S and each Eᵢ has nonzero probability
  • Bayes' Theorem (Simple Form) $P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}$ Gives posterior probability of A given B has occurred
  • Bayes' Theorem (General Form) $P(E_i|A) = \frac{P(E_i) P(A|E_i)}{\sum_{j=1}^{n} P(E_j) P(A|E_j)}$ For partition {E₁, E₂, …, Eₙ} of S. Also called the formula for the probability of 'causes'.
  • Mean (Expected Value) $E(X) = \mu = \sum_{i=1}^{n} x_i p_i$ xᵢ are values of X and pᵢ are corresponding probabilities
  • Variance $\text{Var}(X) = E(X^2) - [E(X)]^2 = \sum x_i^2 p_i - \left(\sum x_i p_i\right)^2$ Also written as σ²
  • Variance (Alternative) $\text{Var}(X) = \sum_{i=1}^{n} (x_i - \mu)^2 \cdot p_i$ Direct formula using deviations from the mean
  • Standard Deviation $\sigma = \sqrt{\text{Var}(X)}$ Non-negative square root of the variance
  • Binomial Probability $P(X = r) = \binom{n}{r} p^r q^{n-r}, \; q = 1 - p$ r = 0, 1, 2, ..., n. Here n = number of trials, p = probability of success, q = probability of failure
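The theorem of total probability and Bayes' theorem above translate directly into a few lines of arithmetic. The machine percentages and defect rates below are assumed purely for illustration.

```python
# Total probability and Bayes' theorem for a partition {E1, E2, E3} of S.
# Hypothetical numbers: three machines produce 50%, 30%, 20% of the items,
# with defect rates 1%, 5%, 7% respectively. A = "item is defective".
prior = [0.5, 0.3, 0.2]          # P(E_j)
likelihood = [0.01, 0.05, 0.07]  # P(A | E_j)

# Theorem of total probability: P(A) = sum_j P(E_j) P(A|E_j)
p_A = sum(p * l for p, l in zip(prior, likelihood))

# Bayes' theorem: P(E_i | A) = P(E_i) P(A|E_i) / P(A)
posterior = [p * l / p_A for p, l in zip(prior, likelihood)]

print(p_A)        # ≈ 0.034
print(posterior)  # posterior probabilities of the three 'causes'; they sum to 1
```

Note how the machine producing only 30% of the items becomes the most probable cause of a defect, because its defect rate dominates the likelihood term.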
📜 Important Theorems
  • 📌
    Multiplication Theorem of Probability: For any two events E and F: P(E ∩ F) = P(E) · P(F|E) = P(F) · P(E|F), provided P(E) ≠ 0 and P(F) ≠ 0.
  • 📌
    Independence of Complements: If E and F are independent events, then (a) E and F' are independent, (b) E' and F are independent, (c) E' and F' are independent.
  • 📌
    Theorem of Total Probability: Let {E₁, E₂, ..., Eₙ} be a partition of the sample space S, and suppose that each of E₁, E₂, ..., Eₙ has nonzero probability of occurrence. Let A be any event associated with S; then P(A) = P(E₁) P(A|E₁) + P(E₂) P(A|E₂) + ... + P(Eₙ) P(A|Eₙ) = ∑ P(Eⱼ) P(A|Eⱼ), summed over j = 1 to n.
  • 📌
    Bayes' Theorem: If E₁, E₂, ..., Eₙ are n non-empty events which constitute a partition of sample space S, i.e., E₁, E₂, ..., Eₙ are pairwise disjoint and E₁ ∪ E₂ ∪ ... ∪ Eₙ = S, and A is any event of nonzero probability, then P(Eᵢ|A) = P(Eᵢ) P(A|Eᵢ) / ∑ P(Eⱼ) P(A|Eⱼ), where the sum runs over j = 1 to n, for any i = 1, 2, 3, ..., n.
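The binomial probabilities, together with the chapter's general mean and variance formulas, can be checked numerically. The parameters n = 10 and p = 0.3 are an arbitrary illustrative choice.

```python
from math import comb

# Binomial distribution B(n, p): P(X = r) = C(n, r) p^r q^(n-r), q = 1 - p.
# Illustrative parameters (assumed): n = 10 trials, probability of success p = 0.3.
n, p = 10, 0.3
q = 1 - p
pmf = [comb(n, r) * p**r * q**(n - r) for r in range(n + 1)]

# A valid probability distribution: each p_i >= 0 and the p_i sum to 1.
total = sum(pmf)

# Mean and variance via the chapter's general formulas:
mean = sum(r * pr for r, pr in enumerate(pmf))               # E(X) = sum x_i p_i
var = sum(r**2 * pr for r, pr in enumerate(pmf)) - mean**2   # Var = E(X^2) - mu^2

print(total, mean, var)  # ≈ 1, ≈ 3 (= np), ≈ 2.1 (= npq)
```

The numbers confirm the well-known shortcuts for a binomial variable, mean = np and variance = npq, using only the general ∑ xᵢ pᵢ formulas from this chapter.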

💡 Quick Tips & Memory Aids

  • Conditional probability is valid only when P(F) ≠ 0, i.e., F ≠ φ
  • 0 ≤ P(E|F) ≤ 1
  • P(S|F) = P(F|F) = 1
  • For disjoint events A and B: P((A ∪ B)|F) = P(A|F) + P(B|F)
  • P(E'|F) = 1 - P(E|F)
  • E ∩ F (also written as EF) denotes the simultaneous occurrence of events E and F
  • P(E ∩ F) = P(E) · P(F|E), where P(F|E) is the conditional probability of F given E
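The counting form of conditional probability, P(E|F) = n(E ∩ F)/n(F) for equally likely outcomes, and the multiplication rule can be verified on a small sample space; the two-dice events chosen below are illustrative.

```python
from itertools import product

# Two dice are thrown: 36 equally likely outcomes.
# E = "sum of the dice is 8", F = "first die shows 3".
S = list(product(range(1, 7), repeat=2))
E = {s for s in S if s[0] + s[1] == 8}
F = {s for s in S if s[0] == 3}

# Conditional probability by counting: P(E|F) = n(E ∩ F) / n(F)
p_E_given_F = len(E & F) / len(F)  # only (3, 5) lies in E ∩ F, so 1/6

# Multiplication rule check: P(E ∩ F) = P(F) · P(E|F)
assert abs(len(E & F) / len(S) - (len(F) / len(S)) * p_E_given_F) < 1e-12

print(p_E_given_F)
```

Knowing that F has occurred shrinks the sample space from the 36 points of S to the 6 points of F, which is exactly the "reduced sample space" idea from the must-read list above.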

📝 Exercise Overview

  • 62 total questions across 4 exercises
  • 23 long-answer questions (proofs, show-that, derivations)
  • 30 short-answer questions