2.9 Square matrices and determinants

We now consider square matrices, i.e. \(N\times N\) matrices \(\tilde A=\left(a_{jk}\right)\), \(j,k=1,\ldots,N\).

Definition 25 \begin{eqnarray*} \tilde A&=&\left(\begin{array}{cccc}a_{11}&a_{12}&\cdots&a_{1N}\\a_{21}&a_{22}&\cdots&a_{2N}\\\vdots&\vdots&\ddots&\vdots\\a_{N1}&a_{N2}&\cdots&a_{NN}\end{array}\right)\\ \\ \det{\left(\tilde A\right)}&=&\left|\begin{array}{ccc}a_{11}&\cdots&a_{1N}\\\vdots&\ddots&\vdots\\a_{N1}&\cdots&a_{NN}\end{array}\right|=\sum_{P(N)}\left(-1\right)^{j\left(P\right)}a_{1,j_1}a_{2,j_2}\cdots a_{N,j_N}\\\\ & & \mbox{where the sum runs over all permutations $P(N)$ of the numbers $1,\ldots,N$}\\&&\mbox{and $j(P)$ is the number of pairwise transpositions that transform $(1,\ldots,N)$ into $(j_1,\ldots,j_N)$}\end{eqnarray*}

\(\Rightarrow\) the definition is not practical for calculating \(\det(\tilde A)\), since the sum contains \(N!\) terms. Therefore one calculates via successive expansion in sub-determinants (Laplace rule), starting from \(N=1:\; \det(a)=a, \quad a\in\mathbb{R}\).

As we will see, the determinant is the (only) totally antisymmetric multilinear operation acting on the components of a matrix. Analogous to the Kronecker symbol, a notation using the totally antisymmetric function \(\epsilon_{i,j,\ldots,k}\) (Levi-Civita symbol) is sometimes helpful for the formal calculation of a determinant:

\[\det{\left(\tilde A\right)} = \sum_{P(N)}\left(-1\right)^{j\left(P\right)}a_{1,j_1}a_{2,j_2}\cdots a_{N,j_N} = \sum_{j_1, j_2, ..., j_N = 1}^N \epsilon_{j_1,j_2,...,j_N} a_{1,j_1}a_{2,j_2}\cdots a_{N,j_N}\]

\(\epsilon_{j_1,j_2,...,j_N}\) is zero if any two indices are equal, it satisfies \(\epsilon_{1,2,...,N} = 1\), and it changes its sign under each interchange of two indices. (Hint: these are exactly the properties of the quantum numbers of fermions according to the Pauli principle; the determinant or \(\epsilon_{i,j,...,k}\) is therefore often used to construct many-particle states in quantum mechanics.)
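As a sketch, the permutation-sum definition above can be implemented directly in Python (the function names `sign` and `det_permutation_sum` are chosen here for illustration). The parity of the number of inversions used below equals the parity of the number of pairwise swaps \(j(P)\):

```python
from itertools import permutations
from math import prod

def sign(perm):
    # (-1)**j(P): parity of the permutation, counted via inversions,
    # which has the same parity as the number of pairwise swaps j(P)
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_permutation_sum(A):
    # sum over all permutations P(N) of (-1)**j(P) * a_{1,j_1} ... a_{N,j_N}
    N = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(N))
               for p in permutations(range(N)))
```

As the text notes, this sum has \(N!\) terms, so it is only practical for small \(N\).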
The geometrical interpretation of a determinant in 3D will be given in section 2.15, and a more general interpretation in section 2.16.
Calculation by Laplace rule:

\[\det\left(\tilde A\right)=\sum_{j=1}^N a_{jk}A_{jk}=\sum_{j=1}^N a_{kj}A_{kj},\quad\mbox{for }N>1\]

i.e. expansion along the \(k^{\mathrm{th}}\) column (first sum) or the \(k^{\mathrm{th}}\) row (second sum), where \(k\in\{1,\ldots,N\}\) is arbitrary and \(A_{jk}\) is the cofactor of \(a_{jk}\) in \(\tilde A\):

\[A_{jk}=\left(-1\right)^{j+k}\left|\begin{array}{ccccccc}a_{11}&a_{12}&\cdots&a_{1,k-1}&a_{1,k+1}&\cdots&a_{1N}\\ a_{21}&a_{22}&\cdots&a_{2,k-1}&a_{2,k+1}&\cdots&a_{2N}\\ \vdots\\ a_{j-1,1}&a_{j-1,2}&\cdots&a_{j-1,k-1}&a_{j-1,k+1}&\cdots&a_{j-1,N}\\ a_{j+1,1}&a_{j+1,2}&\cdots&a_{j+1,k-1}&a_{j+1,k+1}&\cdots&a_{j+1,N}\\ \vdots\\ a_{N1}&a_{N2}&\cdots&a_{N,k-1}&a_{N,k+1}&\cdots&a_{NN} \end{array}\right|\hat=\begin{array}{l}\mbox{the sub-determinant of $\tilde A$ where}\\\mbox{the j$^{th}$ row and k$^{th}$ column are}\\\mbox{erased, with sign $(-1)^{j+k}$}\end{array}\]
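The Laplace rule translates directly into a recursive routine. A minimal Python sketch, expanding along the first row (the function name `det_laplace` is illustrative):

```python
def det_laplace(A):
    """Determinant via cofactor expansion along the first row."""
    N = len(A)
    if N == 1:
        return A[0][0]  # base case: det(a) = a
    total = 0
    for k in range(N):
        # minor: matrix A with the 1st row and (k+1)-th column erased
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        # cofactor sign (-1)**(j+k) with j = 1 and column k+1 in 1-based indices
        total += (-1) ** k * A[0][k] * det_laplace(minor)
    return total
```

This recursion still visits \(N!\) products in the worst case; in practice large determinants are computed via row reduction instead.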

Examples:

  1. \[\left|\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22}\end{array}\right|=a_{11}a_{22}-a_{21}a_{12}\quad\text{expansion along the 1$^{\mbox{st}}$ column}\]
  2. \begin{eqnarray*}\left|\begin{array}{ccc} a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right|&=&a_{11}\left|\begin{array}{cc}a_{22}&a_{23}\\a_{32}&a_{33}\end{array}\right|-a_{12}\left|\begin{array}{cc}a_{21}&a_{23}\\a_{31}&a_{33}\end{array}\right|+a_{13}\left|\begin{array}{cc}a_{21}&a_{22}\\a_{31}&a_{32}\end{array}\right|\\ &=&a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}\end{eqnarray*}

    \(\Rightarrow\) calculation of larger determinants still difficult!
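For concreteness, a small numerical example (the numbers are chosen here for illustration), expanded along the first row:

\[\left|\begin{array}{ccc}2&0&1\\1&3&2\\0&1&1\end{array}\right| = 2\left(3\cdot1-2\cdot1\right) - 0\left(1\cdot1-2\cdot0\right) + 1\left(1\cdot1-3\cdot0\right) = 2 - 0 + 1 = 3\]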

Calculation rules for determinants:

  1. Determinants are antisymmetric under the exchange of two rows or columns,
    i.e. \(\quad \left|\begin{array}{ccc}\vec{a}&\vec{b}&\cdots\end{array}\right|=-\left|\begin{array}{ccc}\vec{b}&\vec{a}&\cdots\end{array}\right|\)

  2. Determinants vanish if two vectors are identical,
    i.e. \(\quad \left|\begin{array}{ccc}\vec{a}&\vec{a}&\cdots\end{array}\right|=-\left|\begin{array}{ccc}\vec{a}&\vec{a}&\cdots\end{array}\right|=0\)

  3. Determinants are additive in each row/column,
    i.e. \(\quad \left|\begin{array}{cc}\vec{a}+\vec{b}&\cdots\end{array}\right|=\left|\begin{array}{cc}\vec{a}&\cdots\end{array}\right|+\left|\begin{array}{cc}\vec{b}&\cdots\end{array}\right|\)

  4. Determinants are homogeneous in each row/column (together with rule 3: multilinear),
    i.e. \(\quad \left|\begin{array}{cc}\alpha \vec{a}&\cdots\end{array}\right|=\alpha \left|\begin{array}{cc}\vec{a}&\cdots\end{array}\right|\)

  5. Adding a linear combination of other rows/columns does not change the determinant,
    i.e. \(\quad \left|\begin{array}{ccc}\left(\vec{a}+\beta \vec{b}\right)&\vec{b}&\cdots\end{array}\right|=\left|\begin{array}{ccc}\vec{a}&\vec{b}&\cdots\end{array}\right|+\beta \left|\begin{array}{ccc}\vec{b}&\vec{b}&\cdots\end{array}\right|=\left|\begin{array}{ccc}\vec{a}&\vec{b}&\cdots\end{array}\right|\)

  6. Subtracting projections of (row/column) vectors therefore does not change the determinant either. Hence the determinant equals the product of the lengths of the resulting set of orthogonal vectors, i.e. the volume spanned by the set of vectors. If this volume is not zero, the set of vectors is linearly independent.
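The rules above can be verified numerically. A small Python check using an explicit \(3\times3\) determinant over column vectors (the helper name `det3` and the sample vectors are chosen here for illustration):

```python
def det3(a, b, c):
    # determinant of the 3x3 matrix with columns a, b, c
    # (cofactor expansion along the first row)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - b[0] * (a[1] * c[2] - a[2] * c[1])
          + c[0] * (a[1] * b[2] - a[2] * b[1]))

a, b, c = (1, 2, 0), (0, 1, 3), (2, 1, 1)
beta = 7
a_shifted = tuple(x + beta * y for x, y in zip(a, b))

swap_flips_sign = det3(a, b, c) == -det3(b, a, c)                    # rule 1
equal_cols_zero = det3(a, a, c) == 0                                 # rule 2
scaling = det3(tuple(5 * x for x in a), b, c) == 5 * det3(a, b, c)   # rule 4
shear_invariant = det3(a_shifted, b, c) == det3(a, b, c)             # rule 5
```

All four checks hold for any choice of integer vectors, since the identities are exact in integer arithmetic.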

2.9.1 Examples for the calculation rules for determinants



© J. Carstensen (Math for MS)