Examples of augmented matrices in the following topics:
-
- Augmented matrix: an augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of the given matrices.
- A triangular matrix is one that is either lower triangular or upper triangular.
- A matrix that is both upper and lower triangular is a diagonal matrix.
- Use elementary row operations on the augmented matrix $[A|b]$ to transform $A$ into upper triangular form.
- Use elementary row operations to put a matrix into a simplified (row echelon) form.
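- For illustration (this small system is not from the text), the system $x + 2y = 5$, $3x + 4y = 6$ has coefficient matrix $A$, constant vector $b$, and augmented matrix $[A|b]$:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 6 \end{bmatrix}, \qquad [A|b] = \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array}\right]$$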
-
- Row multiplication (scale): Multiply a row of a matrix by a nonzero constant.
- Row addition (pivot): Add to one row of a matrix some multiple of another row.
- Row switching (swap): Interchange two rows of a matrix.
- Since the rows of the augmented matrix correspond to the coefficients and constants of a linear system, the three row operations preserve the solution set of that system.
- Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.
- There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss-Jordan elimination.
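- Below is a minimal sketch of Gaussian elimination applied to an augmented matrix $[A|b]$; the function name and the example system are illustrative, not taken from the text.

```python
def gaussian_elimination(aug):
    """Reduce an augmented matrix (list of rows, each row of length n+1)
    to upper triangular form, then back-substitute to get the solution."""
    n = len(aug)
    for i in range(n):
        # Row switching: move the row with the largest pivot candidate into place.
        pivot_row = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[pivot_row] = aug[pivot_row], aug[i]
        for r in range(i + 1, n):
            factor = aug[r][i] / aug[i][i]
            # Row addition: subtract a multiple of row i to put a zero below the pivot.
            aug[r] = [aug[r][c] - factor * aug[i][c] for c in range(n + 1)]
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (aug[i][n] - sum(aug[i][c] * x[c] for c in range(i + 1, n))) / aug[i][i]
    return x

# Solve x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 4.5
print(gaussian_elimination([[1.0, 2.0, 5.0], [3.0, 4.0, 6.0]]))
```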
-
- Gauss–Jordan elimination is a variation of Gaussian elimination, which places zeros below each pivot in the matrix, starting with the top row and working downwards.
- Both Gauss–Jordan and Gaussian elimination have time complexity of order $O(n^3)$ for an $n \times n$ full-rank matrix (using big O notation), but the order of magnitude of the number of arithmetic operations (there are roughly the same number of additions and multiplications/divisions) used in solving an $n \times n$ matrix by Gauss–Jordan elimination is $n^3$, whereas that for Gaussian elimination is $\frac{2n^3}{3}$.
- The steps of Gauss–Jordan elimination are very similar to those of Gaussian elimination, the main difference being that the augmented matrix is reduced to diagonal form rather than just upper triangular form.
- Use elementary row operations on the augmented matrix $[A|b]$ to transform $A$ into diagonal form.
- A matrix is in reduced row echelon form (also called row canonical form) if it is the result of a Gauss–Jordan elimination.
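- As a quick sketch (assuming SymPy is available; the example matrix is illustrative), `Matrix.rref()` returns the reduced row echelon form produced by Gauss–Jordan elimination, together with the pivot columns:

```python
from sympy import Matrix

# Augmented matrix for x + 2y = 5, 3x + 4y = 6.
aug = Matrix([[1, 2, 5],
              [3, 4, 6]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)    # Matrix([[1, 0, -4], [0, 1, 9/2]])
print(pivot_columns)  # (0, 1)
```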
-
- The matrix has a long history of application in solving linear equations.
- A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix or $m$-by-$n$ matrix, while $m$ and $n$ are called its dimensions.
- A matrix which has the same number of rows and columns is called a square matrix.
- In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.
- Each element of a matrix is often denoted by a variable with two subscripts.
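- For example, a generic $2 \times 3$ matrix is written with doubly subscripted entries $a_{ij}$, where the first subscript $i$ is the row index and the second subscript $j$ is the column index:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$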
-
- The identity matrix $[I]$ is defined so that $[A][I]=[I][A]=[A]$, i.e. it is the matrix version of multiplying a number by one.
- The matrix that has this property is referred to as the identity matrix.
- The identity matrix, designated as $[I]$, is defined by the property $[A][I]=[I][A]=[A]$.
- What matrix has this property?
- For a $3 \times 3$ matrix, the identity matrix is a $3 \times 3$ matrix with diagonal $1$s and the rest equal to $0$: $I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
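- A minimal sketch (using NumPy, which is an assumption, not part of the text) verifying the defining property $[A][I]=[I][A]=[A]$ for an arbitrary $3 \times 3$ matrix:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])   # any 3x3 matrix
I = np.eye(3)                     # 3x3 identity matrix

print(np.allclose(A @ I, A))      # True
print(np.allclose(I @ A, A))      # True
```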
-
- When multiplying matrices, the elements of the rows in the first matrix are multiplied with corresponding columns in the second matrix.
- If $A$ is an $n \times m$ matrix and $B$ is an $m \times p$ matrix, the product $AB$ is an $n \times p$ matrix; it is defined only when the number of columns of $A$ equals the number of rows of $B$ (both equal to $m$ here).
- Scalar multiplication simply multiplies a single value by every element of a matrix, whereas matrix multiplication forms each entry of the product by multiplying the elements of a row of the first matrix by the corresponding elements of a column of the second matrix and summing the results.
- Each entry of the resultant matrix is computed one at a time.
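- A minimal sketch of this entry-by-entry computation (the function name and example matrices are illustrative): entry $(i, j)$ of $AB$ is the sum of products of row $i$ of $A$ with column $j$ of $B$.

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(p):      # columns of B
            for k in range(m):  # shared dimension
                C[i][j] += A[i][k] * B[k][j]
    return C

# A 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
print(matmul([[1, 2, 3], [4, 5, 6]],
             [[7, 8], [9, 10], [11, 12]]))  # [[58.0, 64.0], [139.0, 154.0]]
```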
-
- It is possible to solve such a system using the elimination or substitution method, but it is also possible to do it with a matrix operation.
- Solving a system of linear equations using the inverse of a matrix requires the definition of two new matrices: $X$ is the matrix representing the variables of the system, and $B$ is the matrix representing the constants.
- Using matrix multiplication, we may define a system of equations with the same number of equations as variables as $AX = B$.
- To solve a system of linear equations using an inverse matrix, let $A$ be the coefficient matrix, let $X$ be the variable matrix, and let $B$ be the constant matrix.
- If the coefficient matrix is not invertible, the system could be inconsistent and have no solution, or be dependent and have infinitely many solutions.
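- As a sketch of the reasoning: writing the system as $AX = B$ and multiplying both sides on the left by $A^{-1}$ (which exists only when $\det A \neq 0$) gives

$$A^{-1}AX = A^{-1}B \quad\Longrightarrow\quad IX = A^{-1}B \quad\Longrightarrow\quad X = A^{-1}B$$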
-
- It can be proven that any square matrix has a unique inverse if its determinant is nonzero.
- The determinant of a matrix $[A]$ is denoted $\det(A)$, $\det\ A$, or $\left | A \right |$.
- In the case where the matrix entries are written out in full, the determinant is denoted by surrounding the matrix entries by vertical bars instead of the brackets or parentheses of the matrix.
- In linear algebra, the determinant is a value associated with a square matrix.
- For a $2 \times 2$ matrix $\begin{bmatrix} a & b\\ c & d \end{bmatrix}$, the determinant is $\begin{vmatrix} a & b\\ c & d \end{vmatrix} = ad - bc$.
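- For instance (an illustrative example, not from the text):

$$\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = (1)(4) - (2)(3) = -2$$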
-
- A system of equations can be readily solved using the concepts of the inverse matrix and matrix multiplication.
- This can be done by hand, finding the inverse matrix of $[A]$, then performing the appropriate matrix multiplication with $[B]$.
- Using the matrix function on the calculator, first enter both matrices.
- Then calculate $[A^{-1}][B]$, that is, the inverse of matrix $[A]$, multiplied by matrix $[B]$.
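- A minimal sketch mirroring these calculator steps in NumPy (the library and the example matrices are assumptions, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # coefficient matrix [A]
B = np.array([[5.0],
              [6.0]])        # constant matrix [B]

X = np.linalg.inv(A) @ B     # same as calculating [A^-1][B] on the calculator
print(X)                     # [[-4. ], [ 4.5]]

# In practice, np.linalg.solve(A, B) is preferred: it avoids forming A^-1 explicitly.
print(np.linalg.solve(A, B))
```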
-
- The cofactor of the $(i,j)$ entry of a matrix $A$, also known as the $(i,j)$ cofactor of $A$, is the signed minor of that entry.
- The cofactor of the $a_{ij}$ entry of a matrix is defined as $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor obtained by deleting row $i$ and column $j$.
- In linear algebra, a minor of a matrix $A$ is the determinant of some smaller square matrix, cut down from $A$ by removing one or more of its rows or columns.
- The determinant of any square matrix can be found using its signed minors.
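- A minimal sketch (the function name and example matrices are illustrative) of computing a determinant by cofactor (Laplace) expansion along the first row, using the signed minors described above:

```python
def det(A):
    """det(A) = sum_j a_{0j} * (-1)^j * M_{0j}, where M_{0j} is the minor
    obtained by deleting row 0 and column j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0 and column j
        total += ((-1) ** j) * A[0][j] * det(minor)       # signed minor = cofactor
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 1], [1, 3, 4], [0, 5, 6]]))  # 1
```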