Chapter summaries, key formulas, important theorems, exam tips and exercise overviews for all 13 chapters — 952 NCERT questions covered.
| Concept | Key Fact |
|---|---|
| Relation | A relation R from a set A to a set B is an arbitrary subset of A x B. If (a, b) ∈ R, we say that a is related to b under the relation R, written as a R b. |
| Empty Relation | A relation R in a set A is called empty relation, if no element of A is related to any element of A, i.e., R = φ ⊂ A x A. |
| Universal Relation | A relation R in a set A is called universal relation, if each element of A is related to every element of A, i.e., R = A x A. |
| Trivial Relations | Both the empty relation and the universal relation are sometimes called trivial relations. |
| Reflexive Relation | A relation R in a set A is called reflexive, if (a, a) ∈ R, for every a ∈ A. |
| Symmetric Relation | A relation R in a set A is called symmetric, if (a1, a2) ∈ R implies that (a2, a1) ∈ R, for all a1, a2 ∈ A. |
| Transitive Relation | A relation R in a set A is called transitive, if (a1, a2) ∈ R and (a2, a3) ∈ R implies that (a1, a3) ∈ R, for all a1, a2, a3 ∈ A. |
| Equivalence Relation | A relation R in a set A is said to be an equivalence relation if R is reflexive, symmetric and transitive. |
| Equivalence Class | Given an equivalence relation R in a set X, the equivalence class [a] containing a ∈ X is the subset of X containing all elements b related to a. The equivalence classes form a partition of X into mutually disjoint subsets. |
| One-one (Injective) Function | A function f: X → Y is defined to be one-one (or injective), if the images of distinct elements of X under f are distinct, i.e., for every x1, x2 ∈ X, f(x1) = f(x2) implies x1 = x2. Otherwise, f is called many-one. |
| Onto (Surjective) Function | A function f: X → Y is said to be onto (or surjective), if every element of Y is the image of some element of X under f, i.e., for every y ∈ Y, there exists an element x ∈ X such that f(x) = y. |
| Bijective Function | A function f: X → Y is said to be one-one and onto (or bijective), if f is both one-one and onto. |
| Composition of Functions | Let f: A → B and g: B → C be two functions. Then the composition of f and g, denoted by gof, is defined as the function gof: A → C given by gof(x) = g(f(x)), for all x ∈ A. |
| Invertible Function | A function f: X → Y is defined to be invertible, if there exists a function g: Y → X such that gof = I_X and fog = I_Y. The function g is called the inverse of f and is denoted by f⁻¹. |
| Identity Function | The identity function I_X: X → X is defined as I_X(x) = x for all x ∈ X. |
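The definitions above can be checked mechanically on a small finite set. The helpers below are a minimal sketch (not from the text), using "congruence modulo 2" on A = {1, 2, 3} as an example equivalence relation:

```python
def is_reflexive(R, A):
    # (a, a) ∈ R for every a ∈ A
    return all((a, a) in R for a in A)

def is_symmetric(R):
    # (a, b) ∈ R implies (b, a) ∈ R
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    # (a, b) ∈ R and (b, c) ∈ R implies (a, c) ∈ R
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def equivalence_class(a, R, A):
    # [a] = set of all b in A related to a
    return {b for b in A if (a, b) in R}

A = {1, 2, 3}
# relation "a − b is divisible by 2" on A
R = {(a, b) for a in A for b in A if (a - b) % 2 == 0}
```

The two classes {1, 3} and {2} are disjoint and cover A, illustrating the partition property stated in the table.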
| Concept | Key Fact |
|---|---|
| Principal Value Branch | The branch of an inverse trigonometric function with a specific restricted range that is conventionally chosen as the standard. For sin⁻¹, the principal value branch has range [−π/2, π/2]. |
| Principal Value | The value of an inverse trigonometric function which lies in the range of the principal branch is called the principal value of that inverse trigonometric function. |
| sin⁻¹ (arc sine function) | The inverse of sine function with domain [−1, 1] and range [−π/2, π/2]. If sin y = x, then y = sin⁻¹ x. |
| cos⁻¹ (arc cosine function) | The inverse of cosine function with domain [−1, 1] and range [0, π]. If cos y = x, then y = cos⁻¹ x. |
| cosec⁻¹ (arc cosecant function) | The inverse of cosecant function with domain R − (−1, 1) and range [−π/2, π/2] − {0}. |
| sec⁻¹ (arc secant function) | The inverse of secant function with domain R − (−1, 1) and range [0, π] − {π/2}. |
| tan⁻¹ (arc tangent function) | The inverse of tangent function with domain R and range (−π/2, π/2). |
| cot⁻¹ (arc cotangent function) | The inverse of cotangent function with domain R and range (0, π). |
| Domains and ranges (principal value branches) | sin⁻¹: [−1, 1] → [−π/2, π/2]; cos⁻¹: [−1, 1] → [0, π]; cosec⁻¹: R − (−1, 1) → [−π/2, π/2] − {0}; sec⁻¹: R − (−1, 1) → [0, π] − {π/2}; tan⁻¹: R → (−π/2, π/2); cot⁻¹: R → (0, π). |
| Notation caution | sin⁻¹ x should not be confused with (sin x)⁻¹. In fact (sin x)⁻¹ = 1/sin x, and similarly for other trigonometric functions. |
| Principal value | The value of an inverse trigonometric function which lies in the range of its principal value branch is called the principal value of that inverse trigonometric function. |
| Inverse relationship | For suitable values of domain: y = sin⁻¹ x implies x = sin y. |
| sin(sin⁻¹ x) = x | Holds for suitable values of domain. |
| sin⁻¹(sin x) = x | Holds for suitable values of domain. |
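Python's `math.asin`, `math.acos` and `math.atan` return exactly the principal values described above, so the branch behaviour in the last three rows can be verified directly (a small illustration, not part of the chapter):

```python
import math

# principal values: sin⁻¹(1/2) = π/6, cos⁻¹(−1) = π
p1 = math.asin(0.5)
p2 = math.acos(-1.0)

# sin(sin⁻¹ x) = x holds for every x in [−1, 1] ...
roundtrip = math.sin(math.asin(0.3))

# ... but sin⁻¹(sin x) = x only when x already lies in [−π/2, π/2];
# for x = 2 (outside the branch) the principal value is π − 2 instead
branch = math.asin(math.sin(2.0))
```

The last line is exactly why "for suitable values of domain" is attached to sin⁻¹(sin x) = x in the table.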
| Concept | Key Fact |
|---|---|
| Matrix | An ordered rectangular array of numbers or functions. The numbers or functions are called the elements or the entries of the matrix. |
| Order of a matrix | A matrix having m rows and n columns is called a matrix of order m x n (read as an m by n matrix). The number of elements in an m x n matrix is mn. |
| Element a_ij | An element lying in the i-th row and j-th column of a matrix. Also called the (i, j)-th element of the matrix. |
| Column matrix | A matrix is said to be a column matrix if it has only one column. In general, A = [aᵢⱼ]ₘ ₓ ₁ is a column matrix of order m x 1. |
| Row matrix | A matrix is said to be a row matrix if it has only one row. In general, B = [bᵢⱼ]₁ ₓ ₙ is a row matrix of order 1 x n. |
| Square matrix | A matrix in which the number of rows is equal to the number of columns. An m x n matrix is a square matrix if m = n, and is known as a square matrix of order n. |
| Diagonal elements | If A = [aᵢⱼ] is a square matrix of order n, then elements a₁₁, a₂₂, ..., aₙₙ are said to constitute the diagonal of the matrix A. |
| Diagonal matrix | A square matrix B = [bᵢⱼ]ₘ ₓ ₘ is said to be a diagonal matrix if all its non-diagonal elements are zero, that is bᵢⱼ = 0 when i ≠ j. |
| Scalar matrix | A diagonal matrix is said to be a scalar matrix if its diagonal elements are equal, that is B = [bᵢⱼ]ₙ ₓ ₙ is a scalar matrix if bᵢⱼ = 0 when i ≠ j, and bᵢⱼ = k when i = j, for some constant k. |
| Identity matrix | A square matrix in which the diagonal elements are all 1 and the rest are all zero is called an identity matrix. The square matrix A = [aᵢⱼ]ₙ ₓ ₙ is an identity matrix if aᵢⱼ = 1 when i = j, and aᵢⱼ = 0 when i ≠ j. Denoted by Iₙ or simply I. |
| Zero matrix | A matrix is said to be zero matrix or null matrix if all its elements are zero. Denoted by O. Its order will be clear from the context. |
| Equality of matrices | Two matrices A = [aᵢⱼ] and B = [bᵢⱼ] are said to be equal if (i) they are of the same order, and (ii) each element of A is equal to the corresponding element of B, that is aᵢⱼ = bᵢⱼ for all i and j. Written as A = B. |
| Addition of matrices | If A = [aᵢⱼ] and B = [bᵢⱼ] are two matrices of the same order m x n, then their sum A + B is defined as a matrix C = [cᵢⱼ]ₘ ₓ ₙ, where cᵢⱼ = aᵢⱼ + bᵢⱼ for all possible values of i and j. |
| Scalar multiplication | If A = [aᵢⱼ]ₘ ₓ ₙ is a matrix and k is a scalar, then kA is another matrix obtained by multiplying each element of A by the scalar k. In other words, kA = k[aᵢⱼ]ₘ ₓ ₙ = [k(aᵢⱼ)]ₘ ₓ ₙ. |
| Negative of a matrix | The negative of a matrix is denoted by -A. We define -A = (-1)A. |
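The entry-wise definitions of addition, scalar multiplication and the negative of a matrix translate directly into code. These helper functions are an assumption for illustration (nested lists standing in for an m x n matrix), not from the text:

```python
def mat_add(A, B):
    # defined only for matrices of the same order: c_ij = a_ij + b_ij
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    # kA multiplies every element of A by the scalar k
    return [[k * a for a in row] for row in A]

def negative(A):
    # -A is defined as (-1)A
    return scalar_mul(-1, A)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
```

Note that A + (−A) gives the zero matrix O of the same order, matching the table's definitions.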
| Concept | Key Fact |
|---|---|
| Minor | Minor of an element aᵢⱼ of a determinant is the determinant obtained by deleting the ith row and jth column in which element aᵢⱼ lies. Minor of element aᵢⱼ is denoted by Mᵢⱼ. |
| Cofactor | Cofactor of an element aᵢⱼ, denoted by Aᵢⱼ, is defined by Aᵢⱼ = (−1)^(i+j) · Mᵢⱼ, where Mᵢⱼ is the minor of aᵢⱼ. |
| Adjoint of a matrix | The adjoint of a square matrix A = [aᵢⱼ]_(n x n) is defined as the transpose of the matrix [Aᵢⱼ]_(n x n), where Aᵢⱼ is the cofactor of element aᵢⱼ. It is denoted by adj A. |
| Singular matrix | A square matrix A is said to be singular if |A| = 0. |
| Non-singular matrix | A square matrix A is said to be non-singular if |A| ≠ 0. |
| Consistent system | A system of equations is said to be consistent if its solution (one or more) exists. |
| Inconsistent system | A system of equations is said to be inconsistent if its solution does not exist. |
| Determinant of order 1 | For a matrix A = [a₁₁] of order 1, |A| = a₁₁. |
| Determinant of order 2 | For A = [a₁₁ a₁₂; a₂₁ a₂₂], |A| = a₁₁·a₂₂ − a₁₂·a₂₁. |
| Determinant of order 3 | Expanding along R1 with rows (a₁ b₁ c₁), (a₂ b₂ c₂), (a₃ b₃ c₃): |A| = a₁(b₂c₃ − c₂b₃) − b₁(a₂c₃ − c₂a₃) + c₁(a₂b₃ − b₂a₃). |
| Properties of determinants | For any square matrix A: |A′| = |A|; interchanging any two rows (or columns) changes the sign of |A|; if any two rows (or columns) are identical, |A| = 0. |
| Area of a triangle | With vertices (x₁, y₁), (x₂, y₂), (x₃, y₃): Δ = (1/2)·|x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1|, taken with absolute value since area is always positive. |
| Minor of element aᵢⱼ | The determinant obtained by deleting the ith row and jth column, denoted Mᵢⱼ. |
| Cofactor of aᵢⱼ | Aᵢⱼ = (−1)^(i+j) · Mᵢⱼ. |
| Expansion by cofactors | The value of a determinant equals the sum of the products of the elements of any one row (or column) with their corresponding cofactors, e.g., |A| = a₁₁·A₁₁ + a₁₂·A₁₂ + a₁₃·A₁₃. |
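Cofactor expansion along the first row, and the triangle-area formula, can be sketched as follows (assumed helper functions, written for 3x3 matrices as nested lists):

```python
def det2(m):
    # |a b; c d| = ad − bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    # delete row i and column j of a 3x3 matrix
    return [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def cofactor(m, i, j):
    # A_ij = (−1)^(i+j) · M_ij
    return (-1) ** (i + j) * det2(minor(m, i, j))

def det3(m):
    # expansion along the first row: Σ a_1j · A_1j
    return sum(m[0][j] * cofactor(m, 0, j) for j in range(3))

def triangle_area(p1, p2, p3):
    # Δ = (1/2)·|det| of the matrix with rows (x_i, y_i, 1)
    rows = [[p[0], p[1], 1] for p in (p1, p2, p3)]
    return abs(det3(rows)) / 2
```

A matrix with det3 equal to zero is singular in the sense defined above; the classic example [[1,2,3],[4,5,6],[7,8,9]] is one such.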
| Concept | Key Fact |
|---|---|
| Continuity at a point | Suppose f is a real function on a subset of the real numbers and let c be a point in the domain of f. Then f is continuous at c if lim(x->c) f(x) = f(c). |
| Continuous function | A real function f is said to be continuous if it is continuous at every point in the domain of f. |
| Derivative of f at c | The derivative of f at c is defined by lim(h->0) [f(c+h) - f(c)]/h, provided this limit exists. Denoted by f'(c) or (d/dx)(f(x))|_c |
| Derivative function | f'(x) = lim(h->0) [f(x+h) - f(x)]/h, wherever the limit exists. Also denoted by f'(x) or (d/dx)(f(x)) or dy/dx or y' |
| Differentiable in an interval | A function is differentiable in [a, b] if it is differentiable at every point of [a, b]. At endpoints, we use left/right hand derivatives respectively. Differentiable in (a, b) means differentiable at every point of (a, b). |
| Explicit function | When y = f(x) expresses y directly in terms of x. |
| Implicit function | When the relationship between x and y is given by an equation like x + sin(xy) - y = 0, where y cannot easily be expressed as a function of x |
| Exponential function | The exponential function with positive base b > 1 is the function y = f(x) = bˣ |
| Logarithmic function | Let b > 1 be a real number. Then we say logarithm of a to base b is x if bˣ = a. Written as log_b a = x if bˣ = a |
| Second order derivative | If y = f(x), then dy/dx = f'(x). If f'(x) is differentiable, then d/dx(dy/dx) = d²y/dx² is the second order derivative. Denoted by f''(x), D²y, y'', or y₂ |
| Continuity (summary) | A real valued function is continuous at a point in its domain if the limit of the function at that point equals the value of the function at that point. A function is continuous if it is continuous on the whole of its domain. |
| Algebra of continuous functions | Sum, difference, product and quotient of continuous functions are continuous: if f and g are continuous functions, then (f ± g)(x) = f(x) ± g(x) is continuous, (f · g)(x) = f(x) · g(x) is continuous, and (f/g)(x) = f(x)/g(x) (wherever g(x) ≠ 0) is continuous. |
| Differentiability and continuity | Every differentiable function is continuous, but the converse is not true. |
| Chain rule | The rule to differentiate composites of functions: if f = v o u, t = u(x), and both dt/dx and dv/dt exist, then df/dx = (dv/dt)(dt/dx). |
| Standard derivatives | In appropriate domains: d/dx(sin⁻¹ x) = 1/√(1 − x²), d/dx(cos⁻¹ x) = −1/√(1 − x²), d/dx(tan⁻¹ x) = 1/(1 + x²). |
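The limit definition f′(c) = lim [f(c+h) − f(c)]/h can be checked numerically. The sketch below uses a central difference quotient purely as an approximation of that limit (the helper is an assumption, not chapter material), and verifies both a standard derivative and a chain-rule case:

```python
import math

def derivative(f, x, h=1e-6):
    # central difference quotient approximating the limit definition
    return (f(x + h) - f(x - h)) / (2 * h)

# standard derivative: d/dx(tan⁻¹ x) at x = 1 should be 1/(1+1²) = 0.5
approx_atan = derivative(math.atan, 1.0)

# chain rule: f = sin o u with u(x) = x², so df/dx = cos(x²) · 2x
def f(x):
    return math.sin(x * x)

approx_chain = derivative(f, 0.7)
```

Both approximations agree with the closed-form derivatives to well within the step-size error of the difference quotient.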
| Concept | Key Fact |
|---|---|
| Rate of Change | If y = f(x), then dy/dx (or f'(x)) represents the rate of change of y with respect to x, and (dy/dx) at x=x₀ represents the rate of change of y with respect to x at x = x₀. |
| Marginal Cost | The instantaneous rate of change of total cost with respect to output. If C(x) is the total cost for x units, then Marginal Cost (MC) = dC/dx. |
| Marginal Revenue | The rate of change of total revenue with respect to the number of units sold. If R(x) is total revenue for x units, then Marginal Revenue (MR) = dR/dx. |
| Increasing function | A function f is increasing on interval I if x₁ < x₂ in I implies f(x₁) ≤ f(x₂) for all x₁, x₂ ∈ I. |
| Decreasing function | A function f is decreasing on interval I if x₁ < x₂ in I implies f(x₁) ≥ f(x₂) for all x₁, x₂ ∈ I. |
| Constant function | A function f is constant on interval I if f(x) = c for all x ∈ I, where c is a constant. |
| Strictly increasing function | A function f is strictly increasing on interval I if x₁ < x₂ in I implies f(x₁) < f(x₂) for all x₁, x₂ ∈ I. |
| Strictly decreasing function | A function f is strictly decreasing on interval I if x₁ < x₂ in I implies f(x₁) > f(x₂) for all x₁, x₂ ∈ I. |
| Increasing/Decreasing at a point | Let x₀ be a point in the domain of f. Then f is said to be increasing (decreasing) at x₀ if there exists an open interval I containing x₀ such that f is increasing (decreasing) in I. |
| Maximum value | f is said to have a maximum value in interval I if there exists a point c ∈ I such that f(c) > f(x) for all x ∈ I. The number f(c) is called the maximum value and c is called the point of maximum value. |
| Minimum value | f is said to have a minimum value in interval I if there exists a point c ∈ I such that f(c) < f(x) for all x ∈ I. The number f(c) is called the minimum value and c is called the point of minimum value. |
| Extreme value | f is said to have an extreme value in I if there exists a point c ∈ I such that f(c) is either a maximum or minimum value of f in I. The number f(c) is called an extreme value and c is called an extreme point. |
| Local maxima | c is a point of local maxima if there is an h > 0 such that f(c) ≥ f(x) for all x ∈ (c-h, c+h), x ≠ c. The value f(c) is called the local maximum value. |
| Local minima | c is a point of local minima if there is an h > 0 such that f(c) ≤ f(x) for all x ∈ (c-h, c+h). The value f(c) is called the local minimum value. |
| Critical point | A point c in the domain of a function f at which either f'(c) = 0 or f is not differentiable is called a critical point of f. |
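A short worked illustration of marginal cost and critical points, with a made-up cost function and f(x) = x³ − 3x as examples (neither function is from the table itself):

```python
# hypothetical total cost C(x) = 0.005x³ − 0.02x² + 30x for x units
def C(x):
    return 0.005 * x**3 - 0.02 * x**2 + 30 * x

# marginal cost is the derivative of total cost: MC = dC/dx
def MC(x):
    return 0.015 * x**2 - 0.04 * x + 30

# critical points of f(x) = x³ − 3x: f'(x) = 3x² − 3 = 0 at x = ±1;
# f' changes sign from + to − across x = −1 (a point of local maxima)
# and from − to + across x = 1 (a point of local minima)
def fprime(x):
    return 3 * x**2 - 3
```

MC(3) evaluates to 30.015, the instantaneous rate of change of cost at an output of 3 units, and the sign changes of f′ around ±1 match the increasing/decreasing definitions above.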
| Concept | Key Fact |
|---|---|
| Anti derivative (Primitive) | A function F is called an anti derivative of f if F'(x) = f(x) for all x in the domain. |
| Indefinite Integral | The symbol ∫ f(x) dx represents the entire class of anti derivatives, read as the indefinite integral of f with respect to x. |
| Integrand | f(x) in the expression ∫ f(x) dx. |
| Variable of integration | x in the expression ∫ f(x) dx. |
| Constant of Integration | Any real number C, considered as a constant function, that appears in the general anti derivative F(x) + C. |
| Proper rational function | P(x)/Q(x) where degree of P(x) is less than degree of Q(x). |
| Improper rational function | P(x)/Q(x) where degree of P(x) is greater than or equal to degree of Q(x). |
| Definite Integral | If f has an anti derivative F on [a, b], then ∫ from a to b of f(x) dx = F(b) - F(a). |
| Lower limit | The value a in ∫ from a to b of f(x) dx. |
| Upper limit | The value b in ∫ from a to b of f(x) dx. |
| Area function | A(x) = ∫ from a to x of f(x) dx, representing the area under the curve from a to x. |
| Integration | The inverse process of differentiation: if (d/dx)F(x) = f(x), then ∫ f(x) dx = F(x) + C. These are called indefinite integrals or general integrals, C is called the constant of integration, and all these integrals differ by a constant. |
| Properties of indefinite integrals | (1) ∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx. (2) For any real number k, ∫ k f(x) dx = k ∫ f(x) dx. More generally, ∫ [k1 f1(x) + k2 f2(x) + ... + kn fn(x)] dx = k1 ∫ f1(x) dx + ... + kn ∫ fn(x) dx. |
| Standard integrals | (i) ∫ xⁿ dx = x^(n+1)/(n+1) + C, n ≠ −1. (ii) ∫ cos x dx = sin x + C. (iii) ∫ sin x dx = −cos x + C. (iv) ∫ sec² x dx = tan x + C. (v) ∫ cosec² x dx = −cot x + C. (vi) ∫ sec x tan x dx = sec x + C. (vii) ∫ cosec x cot x dx = −cosec x + C. (viii) ∫ dx/√(1−x²) = sin⁻¹ x + C. (ix) ∫ dx/√(1−x²) = −cos⁻¹ x + C. (x) ∫ dx/(1+x²) = tan⁻¹ x + C. (xi) ∫ dx/(1+x²) = −cot⁻¹ x + C. (xii) ∫ eˣ dx = eˣ + C. (xiii) ∫ aˣ dx = aˣ/log a + C. (xiv) ∫ (1/x) dx = log |x| + C. |
| Integration by partial fractions | For P(x)/Q(x) where P, Q are polynomials and Q(x) ≠ 0: if degree of P ≥ degree of Q, divide to get T(x) + P1(x)/Q(x), then decompose into partial fractions of five standard types based on the nature of the factors in the denominator. |
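The fundamental relation ∫ₐᵇ f(x) dx = F(b) − F(a) can be cross-checked against a direct Riemann-style sum. The trapezoid helper below is an assumption used only for the check, not part of the chapter:

```python
import math

def trapezoid(f, a, b, n=10_000):
    # trapezoidal approximation of the definite integral of f over [a, b]
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

# ∫ from 0 to π/2 of cos x dx = sin(π/2) − sin(0) = 1  (standard integral ii)
area_cos = trapezoid(math.cos, 0.0, math.pi / 2)

# ∫ from 1 to 2 of (1/x) dx = log|2| − log|1| = log 2  (standard integral xiv)
area_inv = trapezoid(lambda x: 1 / x, 1.0, 2.0)
```

Both numerical areas match F(b) − F(a) computed from the anti derivatives listed in the table.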
| Concept | Key Fact |
|---|---|
| Elementary area (vertical strip) | dA = y dx = f(x) dx, a thin vertical strip of height y and width dx under the curve y = f(x). |
| Elementary area (horizontal strip) | dA = x dy = g(y) dy, a thin horizontal strip of width x and height dy beside the curve x = g(y). |
| Area under y = f(x) | The area of the region bounded by the curve y = f(x), the x-axis and the lines x = a and x = b (b > a) is given by: Area = ∫ from a to b of y dx = ∫ from a to b of f(x) dx. |
| Area beside x = φ(y) | The area of the region bounded by the curve x = φ(y), the y-axis and the lines y = c, y = d is given by: Area = ∫ from c to d of x dy = ∫ from c to d of φ(y) dy. |
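The "elementary strip" picture translates directly into a sum of strip areas f(x)·dx. The midpoint-sum helper below is an assumption used to illustrate the formula, not NCERT code:

```python
def area_under(f, a, b, n=10_000):
    # approximate Area = ∫ from a to b of f(x) dx as a sum of n vertical
    # strips of width h = (b − a)/n, each of height f at the strip midpoint
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# area bounded by y = x², the x-axis, x = 0 and x = 3 is ∫ from 0 to 3
# of x² dx = 3³/3 = 9
area = area_under(lambda x: x * x, 0.0, 3.0)
```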
| Concept | Key Fact |
|---|---|
| Order of a Differential Equation | The order of the highest order derivative of the dependent variable with respect to the independent variable involved in the given differential equation. |
| Degree of a Differential Equation | When a differential equation is a polynomial equation in derivatives, the degree is the highest power (positive integral index) of the highest order derivative involved in the given differential equation. |
| General Solution | The solution which contains arbitrary constants is called the general solution (primitive) of the differential equation. It contains as many arbitrary constants as the order of the equation. |
| Particular Solution | The solution free from arbitrary constants, i.e., the solution obtained from the general solution by giving particular values to the arbitrary constants, is called a particular solution of the differential equation. |
| Variable Separable Method | A method of solving first order differential equations where dy/dx = F(x,y) can be written as dy/dx = g(x)·h(y), allowing separation of variables so that all terms involving y are on one side and all terms involving x are on the other side. |
| Homogeneous Function | A function F(x, y) is said to be a homogeneous function of degree n if F(λx, λy) = λⁿF(x, y) for any nonzero constant λ. |
| Homogeneous Differential Equation | A differential equation of the form dy/dx = F(x, y) is said to be homogeneous if F(x, y) is a homogeneous function of degree zero. |
| First Order Linear Differential Equation | A differential equation of the form dy/dx + Py = Q, where P and Q are constants or functions of x only. |
| Integrating Factor (I.F.) | The function g(x) = e^(∫P dx) which, when multiplied to both sides of the linear differential equation dy/dx + Py = Q, makes the LHS the derivative of y·e^(∫P dx). It is denoted as I.F. |
| Differential equation (summary) | An equation involving derivatives of the dependent variable with respect to the independent variable (or variables) is known as a differential equation. |
| Order (summary) | The order of a differential equation is the order of the highest order derivative occurring in the differential equation. |
| Degree (summary) | The degree of a differential equation is defined only if it is a polynomial equation in its derivatives; it is then the highest power (positive integer only) of the highest order derivative in it. |
| Solution (summary) | A function which satisfies the given differential equation is called its solution. The solution which contains as many arbitrary constants as the order of the differential equation is called a general solution, and the solution free from arbitrary constants is called a particular solution. |
| Variable separable method (summary) | Used to solve an equation in which the variables can be separated completely, i.e. terms containing y should remain with dy and terms containing x should remain with dx. |
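As a concrete separable example (not from the table): dy/dx = x·y separates as dy/y = x dx, giving log|y| = x²/2 + C, so y = A·e^(x²/2); the check below verifies numerically that the particular solution with A = 1 satisfies the equation:

```python
import math

def y(x):
    # particular solution y = e^(x²/2) of dy/dx = x·y
    return math.exp(x * x / 2)

def dydx(x, h=1e-6):
    # central difference approximation of the derivative of y
    return (y(x + h) - y(x - h)) / (2 * h)

# residual of the equation dy/dx − x·y at several sample points
residuals = [abs(dydx(x) - x * y(x)) for x in (0.0, 0.5, 1.0, 1.5)]
```

A residual near zero at every sample point is exactly what "a function which satisfies the given differential equation" means in the table.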
| Concept | Key Fact |
|---|---|
| Vector | A quantity that has magnitude as well as direction is called a vector. A directed line segment is a vector. |
| Position Vector | The vector OP having O (origin) as initial point and P as terminal point is called the position vector of point P with respect to origin O. |
| Direction Cosines | The cosine values of the direction angles α, β, γ that a vector makes with the positive x, y, z axes, denoted by l, m, n respectively. |
| Direction Ratios | Numbers proportional to the direction cosines of a vector, denoted as a, b, c. |
| Zero Vector | A vector whose initial and terminal points coincide, denoted as 0⃗. It has zero magnitude and cannot be assigned a definite direction (or may be regarded as having any direction). Vectors AA, BB represent the zero vector. |
| Unit Vector | A vector whose magnitude is unity (1 unit). The unit vector in the direction of a given vector a is denoted by â. |
| Coinitial Vectors | Two or more vectors having the same initial point are called coinitial vectors. |
| Collinear Vectors | Two or more vectors are said to be collinear if they are parallel to the same line, irrespective of their magnitudes and directions. |
| Equal Vectors | Two vectors a and b are said to be equal if they have the same magnitude and direction regardless of the positions of their initial points, written as a = b. |
| Negative of a Vector | A vector whose magnitude is the same as that of a given vector but direction is opposite to that of it. Vector BA is negative of vector AB, written as BA = -AB. |
| Free Vectors | Vectors that may be subject to parallel displacement without changing magnitude and direction. Throughout this chapter, we deal with free vectors only. |
| Triangle Law of Vector Addition | If two vectors a and b are positioned so that the initial point of one coincides with the terminal point of the other, then the sum (resultant) a + b is represented by the third side of the triangle formed. In triangle ABC: AC = AB + BC. |
| Parallelogram Law of Vector Addition | If two vectors a and b are represented by the two adjacent sides of a parallelogram in magnitude and direction, then their sum a + b is represented in magnitude and direction by the diagonal of the parallelogram through their common point. |
| Scalar Multiplication | The product of vector a by scalar λ, denoted λa, is a vector collinear to a. The vector λa has direction same as (or opposite to) a according as λ is positive (or negative). Its magnitude is |λ| times the magnitude of a: |λa| = |λ||a|. |
| Component Form | Any vector r = xî + yĵ + zk̂ is said to be in component form. Here x, y, z are scalar components and xî, yĵ, zk̂ are vector components along the respective axes. Also called rectangular components. |
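In component form these operations are coordinate-wise. The small helpers below (an assumption, with 3-tuples standing in for xî + yĵ + zk̂) illustrate the triangle law, scalar multiplication and unit vectors:

```python
def add(u, v):
    # component-wise sum of two vectors
    return tuple(a + b for a, b in zip(u, v))

def scale(lam, u):
    # λu is collinear with u; |λu| = |λ||u|
    return tuple(lam * a for a in u)

def magnitude(u):
    # |u| = √(x² + y² + z²)
    return sum(a * a for a in u) ** 0.5

def unit(u):
    # â = a / |a|, defined for nonzero vectors
    m = magnitude(u)
    return tuple(a / m for a in u)

# triangle law: with AB and BC placed head to tail, AC = AB + BC
AB, BC = (1.0, 2.0, 2.0), (2.0, 1.0, -2.0)
AC = add(AB, BC)
```

scale(−1, AB) gives the negative of AB, matching the row "Negative of a Vector" above.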
| Concept | Key Fact |
|---|---|
| Direction cosines | If a directed line L passes through the origin making angles α, β, γ with x, y, z axes respectively, then cos(α), cos(β), cos(γ) are called direction cosines, denoted by l, m, n |
| Direction ratios | Any three numbers a, b, c which are proportional to the direction cosines l, m, n of a line. If l, m, n are direction cosines, then a = λl, b = λm, c = λn for any nonzero λ ∈ R. |
| Skew lines | Lines in space which are neither intersecting nor parallel. Such pairs of lines are non-coplanar. |
| Shortest distance between two lines | The join of a point on one line with a point on the other line such that the length of the segment so obtained is the smallest. |
| Direction cosines (summary) | Direction cosines of a line are the cosines of the angles made by the line with the positive directions of the coordinate axes. |
| Identity for direction cosines | If l, m, n are the direction cosines of a line, then l² + m² + n² = 1. |
| Direction cosines from two points | For the line joining P(x1, y1, z1) and Q(x2, y2, z2): (x2 − x1)/PQ, (y2 − y1)/PQ, (z2 − z1)/PQ, where PQ = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²). |
| Direction ratios (summary) | Direction ratios of a line are numbers which are proportional to the direction cosines of the line. |
| Cosines from ratios | If l, m, n are the direction cosines and a, b, c the direction ratios of a line, then l = a/√(a²+b²+c²), m = b/√(a²+b²+c²), n = c/√(a²+b²+c²). |
| Skew lines (summary) | Skew lines are lines in space which are neither parallel nor intersecting; they lie in different planes. |
| Angle between skew lines | The angle between two intersecting lines drawn from any point (preferably through the origin) parallel to each of the skew lines. |
| Angle from direction cosines | If l1, m1, n1 and l2, m2, n2 are the direction cosines of two lines and θ is the acute angle between them, then cos(θ) = |l1·l2 + m1·m2 + n1·n2|. |
| Angle from direction ratios | If a1, b1, c1 and a2, b2, c2 are the direction ratios of two lines and θ is the acute angle between them, then cos(θ) = |a1·a2 + b1·b2 + c1·c2| / (√(a1²+b1²+c1²) · √(a2²+b2²+c2²)). |
| Vector equation of a line | A line through a point with position vector a and parallel to vector b has equation r = a + λb. |
| Cartesian equation of a line | A line through the point (x1, y1, z1) having direction cosines l, m, n has equations (x − x1)/l = (y − y1)/m = (z − z1)/n. |
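The cosines-from-ratios formula and the angle formula can be sketched as below (assumed helper names, not from the text):

```python
import math

def direction_cosines(a, b, c):
    # l, m, n from direction ratios a, b, c: divide by √(a² + b² + c²)
    n = math.sqrt(a * a + b * b + c * c)
    return (a / n, b / n, c / n)

def angle_between(r1, r2):
    # acute angle θ between lines with direction ratios r1, r2:
    # cos θ = |a1a2 + b1b2 + c1c2| / (√(Σa1²) · √(Σa2²))
    dot = abs(sum(p * q for p, q in zip(r1, r2)))
    n1 = math.sqrt(sum(p * p for p in r1))
    n2 = math.sqrt(sum(q * q for q in r2))
    return math.acos(dot / (n1 * n2))

l, m, n = direction_cosines(1, 2, 2)          # expect (1/3, 2/3, 2/3)
theta = angle_between((1, 0, 0), (1, 1, 0))   # expect π/4
```

The computed cosines automatically satisfy the identity l² + m² + n² = 1 stated above.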
| Concept | Key Fact |
|---|---|
| Optimisation Problems | Problems which seek to maximise (or minimise) profit (or cost) form a general class of problems called optimisation problems. An optimisation problem may involve finding maximum profit, minimum cost, or minimum use of resources etc. |
| Linear Programming Problem | A special but very important class of optimisation problems is the linear programming problem. Linear programming problems are of much interest because of their wide applicability in industry, commerce, management science etc. |
| Linear Programming Problem (LPP) | A Linear Programming Problem is one that is concerned with finding the optimal value (maximum or minimum value) of a linear function (called objective function) of several variables (say x and y), subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities (called linear constraints). The term 'linear' implies that all the mathematical relations used in the problem are linear relations while the term 'programming' refers to the method of determining a particular programme or plan of action. |
| Objective Function | A linear function Z = ax + by, where a, b are constants, which has to be maximised or minimised is called a linear objective function. Variables x and y are called decision variables. |
| Constraints | The linear inequalities or equations or restrictions on the variables of a linear programming problem are called constraints. The conditions x ≥ 0, y ≥ 0 are called non-negative restrictions. |
| Optimisation Problem | A problem which seeks to maximise or minimise a linear function (say of two variables x and y) subject to certain constraints as determined by a set of linear inequalities is called an optimisation problem. Linear programming problems are special type of optimisation problems. |
| Feasible Region | The common region determined by all the constraints including non-negative constraints x ≥ 0, y ≥ 0 of a linear programming problem is called the feasible region (or solution region) for the problem. The region other than feasible region is called an infeasible region. |
| Feasible Solutions | Points within and on the boundary of the feasible region represent feasible solutions of the constraints. Every point within and on the boundary of the feasible region represents a feasible solution to the problem. |
| Infeasible Solution | Any point outside the feasible region is called an infeasible solution. |
| Optimal (Feasible) Solution | Any point in the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an optimal solution. |
| Bounded Feasible Region | A feasible region of a system of linear inequalities is said to be bounded if it can be enclosed within a circle. |
| Unbounded Feasible Region | A feasible region is called unbounded if it cannot be enclosed within a circle, meaning the feasible region extends indefinitely in some direction. |
| Corner Point | A corner point of a feasible region is a point in the region which is the intersection of two boundary lines. |
| Decision Variables | The variables x and y in the objective function Z = ax + by are called decision variables. They are sometimes also simply called variables and are non-negative. |
| Linear programming problem (summary) | A linear programming problem is one that is concerned with finding the optimal value (maximum or minimum) of a linear function of several variables (called objective function) subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities (called linear constraints). Variables are sometimes called decision variables and are non-negative. |
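For a bounded feasible region, the optimum of a linear objective is attained at a corner point, so solving an LPP reduces to comparing Z at the corners. The region and objective below are made up purely for illustration:

```python
def optimise(corners, Z, maximise=True):
    # corner point method: for a bounded feasible region, compare the
    # objective Z at each corner point and pick the best one
    pick = max if maximise else min
    return pick(corners, key=lambda p: Z(*p))

# hypothetical bounded feasible region with known corner points,
# objective Z = 3x + 4y to be maximised
corners = [(0, 0), (4, 0), (2, 3), (0, 4)]
Z = lambda x, y: 3 * x + 4 * y

best = optimise(corners, Z)             # Z values: 0, 12, 18, 16
worst = optimise(corners, Z, maximise=False)
```

Here (2, 3) is the optimal solution with Z = 18; every other corner gives a smaller value, consistent with the corner-point reasoning above.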
| Concept | Key Fact |
|---|---|
| Conditional Probability | If E and F are two events associated with the same sample space of a random experiment, the conditional probability of the event E given that F has occurred, i.e., P(E|F), is given by P(E|F) = P(E ∩ F) / P(F), provided P(F) ≠ 0. |
| Independent Events (Definition 2) | Two events E and F are said to be independent if P(F|E) = P(F) provided P(E) ≠ 0, and P(E|F) = P(E) provided P(F) ≠ 0. Thus, we need P(E) ≠ 0 and P(F) ≠ 0. |
| Independent Events (Definition 3) | Let E and F be two events associated with the same random experiment, then E and F are said to be independent if P(E ∩ F) = P(E) . P(F). |
| Mutually Independent Events (three events) | Three events A, B and C are said to be mutually independent if: P(A ∩ B) = P(A) P(B), P(A ∩ C) = P(A) P(C), P(B ∩ C) = P(B) P(C), and P(A ∩ B ∩ C) = P(A) P(B) P(C). All four conditions must hold. |
| Partition of a Sample Space | A set of events E₁, E₂, ..., Eₙ is said to represent a partition of the sample space S if: (a) Eᵢ ∩ Eⱼ = φ for i ≠ j (pairwise disjoint), (b) E₁ ∪ E₂ ∪ ... ∪ Eₙ = S (exhaustive), and (c) P(Eᵢ) > 0 for all i = 1, 2, ..., n (nonzero probabilities). |
| Hypotheses | When Bayes' theorem is applied, the events E₁, E₂, ..., Eₙ are called hypotheses. |
| Prior Probability | The probability P(Eᵢ) is called the a priori probability of the hypothesis Eᵢ. |
| Posterior Probability | The conditional probability P(Eᵢ|A) is called the a posteriori probability of the hypothesis Eᵢ. It gives the probability of a particular 'cause' Eᵢ given that event A has occurred. |
| Random Variable | A random variable is a real valued function whose domain is the sample space of a random experiment. It assigns a real number to each outcome of the experiment. |
| Probability Distribution | The system of values of a random variable X together with the corresponding probabilities is called the probability distribution of X: X takes values x₁, x₂, ..., xₙ with probabilities p₁, p₂, ..., pₙ, where pᵢ ≥ 0 and p₁ + p₂ + ... + pₙ = 1. |
| Mean (Expected Value) | The mean or expectation of a random variable X is defined as E(X) = μ = ∑ᵢ₌₁ⁿ xᵢ pᵢ, where xᵢ are the values of X and pᵢ are the corresponding probabilities. |
| Variance | The variance of a random variable X is defined as Var(X) = σ² = E(X²) - [E(X)]² = ∑ xᵢ² pᵢ - (∑ xᵢ pᵢ)². |
| Standard Deviation | The standard deviation of a random variable X is σ = √(Var(X)). |
| Bernoulli Trials | Trials of a random experiment are called Bernoulli trials if: (i) there are a finite number of trials, (ii) the trials are independent of each other, (iii) each trial has exactly two outcomes: success or failure, and (iv) the probability of success (p) remains the same in each trial. |
| Binomial Distribution | A random variable X taking values 0, 1, 2, ..., n is said to have a binomial distribution with parameters n and p if its probability distribution is given by P(X = r) = nCr pʳ qⁿ⁻ʳ, where q = 1 − p and r = 0, 1, 2, ..., n. |
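The binomial formula and the mean/variance definitions above can be checked by direct summation. The sketch below also uses the known facts E(X) = np and Var(X) = npq for a binomial variable (standard results, though not stated in the table) as the reference values:

```python
from math import comb

def binom_pmf(n, p, r):
    # P(X = r) = nCr · p^r · q^(n−r) with q = 1 − p
    q = 1 - p
    return comb(n, r) * p**r * q**(n - r)

n, p = 10, 0.3
pmf = [binom_pmf(n, p, r) for r in range(n + 1)]

# mean E(X) = Σ xᵢ pᵢ and variance Var(X) = E(X²) − [E(X)]²,
# computed exactly as the table defines them
mean = sum(r * pr for r, pr in enumerate(pmf))
ex2 = sum(r * r * pr for r, pr in enumerate(pmf))
variance = ex2 - mean**2
```

The probabilities sum to 1 (a valid probability distribution), and the summed mean and variance agree with np = 3 and npq = 2.1.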