A short introduction to Matrix exponentials

We will discuss the definition and basic properties of the matrix exponential.

Introduction

A common application of matrix exponentials is the study of linear differential systems, for example,\[ \frac{\partial u}{\partial t} = Au,\tag{1}\]where \(u(x,t): \mathbb{R}\times (0,\infty )\to \mathbb{R}^n\) is a vector-valued function and \(A \in \mathbb{R}^{n\times n}\). Its solution can be expressed as\[ u(x,t) = e^{At}u(x,0).\tag{2}\]A simplified model is the scalar differential equation\[ u'=au,\]with general solution \(u(t)=e^{at}u(0)\).

Definition and examples

It is known that for \(t\in\mathbb{R}\),\[ e^t = 1+t+\frac{1}{2}t^2+\ldots +\frac{1}{k!}t^k+\ldots\]Similarly, the matrix exponential of \(A \in \mathbb{R}^{n\times n}\) is defined to be\[ e^{At} = I+At+\frac{1}{2}(At)^2+\ldots +\frac{1}{k!}(At)^k+\ldots, \tag{3}\]where \(I\) is the \(n\times n\) identity matrix.
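As a concrete illustration of definition (3), here is a minimal Python sketch that simply truncates the series after a fixed number of terms; the function name expm_series and the default of 30 terms are illustrative choices, not part of the original note.

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{At} by truncating the series (3) after `terms` terms."""
    n = A.shape[0]
    result = np.eye(n)   # k = 0 term: the identity matrix
    term = np.eye(n)     # running value of (At)^k / k!
    for k in range(1, terms):
        term = term @ (A * t) / k   # (At)^k/k! = [(At)^{k-1}/(k-1)!] * (At)/k
        result += term
    return result
```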

Example 1. Let \(A=\begin{bmatrix} 1 &0\\ 0 & 1\end{bmatrix}\). We calculate\[ e^{At} = \sum_{k=0}^{\infty}\frac{1}{k!}\begin{bmatrix} 1 &0\\ 0 & 1\end{bmatrix}^k t^k =\begin{bmatrix} \sum_{k=0}^{\infty}\frac{1}{k!}t^k &0\\ 0 & \sum_{k=0}^{\infty}\frac{1}{k!}t^k\end{bmatrix} = \begin{bmatrix} e^t &0\\ 0 & e^t\end{bmatrix} = e^t \begin{bmatrix} 1 &0\\ 0 & 1\end{bmatrix}.\]Once we have \(e^{At}\), the solution to (1) is explicitly obtained.

Example 2. Let \(A=\begin{bmatrix} 1 &1 \\ 0 & 1\end{bmatrix}\). By induction,\[ A^k = \begin{bmatrix} 1 & k \\ 0 & 1\end{bmatrix}.\]Hence,\[ e^{At} = \sum_{k=0}^{\infty}\frac{1}{k!}\begin{bmatrix} 1 & k\\ 0 & 1\end{bmatrix} t^k =\begin{bmatrix} \sum_{k=0}^{\infty}\frac{1}{k!}t^k & \sum_{k=0}^{\infty}\frac{k}{k!}t^k\\ 0 & \sum_{k=0}^{\infty}\frac{1}{k!}t^k\end{bmatrix} = \begin{bmatrix} e^t & te^t\\ 0 & e^t\end{bmatrix} = e^t \begin{bmatrix} 1 & t\\ 0 & 1\end{bmatrix}.\]One can check that \(u(x,t)=e^{At}u(x,0)=e^t \begin{bmatrix} 1 & t\\ 0 & 1\end{bmatrix}u(x,0)\) indeed satisfies the differential equation (1):\[ \frac{du}{dt} = e^t \begin{bmatrix} 1 & 1+t\\ 0 & 1\end{bmatrix}u(x,0) = \begin{bmatrix} 1 & 1\\ 0 & 1\end{bmatrix}e^t \begin{bmatrix} 1 & t\\ 0 & 1\end{bmatrix}u(x,0) = A u.\]
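The closed forms in Examples 1 and 2 can also be checked numerically, for instance against scipy.linalg.expm; the test value t = 0.7 below is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm

t = 0.7  # arbitrary test value

# Example 1: A = I, so e^{At} = e^t I.
A1 = np.eye(2)
assert np.allclose(expm(A1 * t), np.exp(t) * np.eye(2))

# Example 2: closed form e^{At} = e^t [[1, t], [0, 1]].
A2 = np.array([[1.0, 1.0], [0.0, 1.0]])
closed_form = np.exp(t) * np.array([[1.0, t], [0.0, 1.0]])
assert np.allclose(expm(A2 * t), closed_form)
```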

Properties

From the definition of the matrix exponential, one easily obtains the following.

Lemma. For a diagonal matrix \(D=\mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})\), it holds that\[ e^{Dt} = \mathrm{diag}(e^{a_{11}t}, e^{a_{22}t},\ldots ,e^{a_{nn}t}).\]

The calculation of matrix exponentials can be done by matrix similarity. Let \(S\) be invertible such that\[ A = S \Lambda S^{-1},\tag{4}\]where \(\Lambda\) is a diagonal matrix; we then say \(A\) is similar to \(\Lambda\). Since similar matrices have the same eigenvalues, we may view \(\Lambda\) as consisting of the eigenvalues of \(A\) whenever \(A\) is diagonalizable. Using (4) in the definition (3) yields\[ e^{At} = \sum_{k=0}^{\infty}\frac{1}{k!}(At)^k = \sum_{k=0}^{\infty}\frac{t^k}{k!}S \Lambda^k S^{-1} = Se^{\Lambda t} S^{-1}.\tag{5}\]
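A short sketch of formula (5) in Python, assuming \(A\) is diagonalizable so that numpy.linalg.eig supplies the factorization (4); the helper name expm_via_similarity and the test matrix are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def expm_via_similarity(A, t):
    """Compute e^{At} as S e^{Lambda t} S^{-1}, assuming A is diagonalizable."""
    eigvals, S = np.linalg.eig(A)                 # A = S Lambda S^{-1}
    exp_Lambda_t = np.diag(np.exp(eigvals * t))   # Lemma: exponentiate the diagonal entries
    return S @ exp_Lambda_t @ np.linalg.inv(S)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # a diagonalizable example (eigenvalues -1, -2)
t = 0.5
assert np.allclose(expm_via_similarity(A, t), expm(A * t))
```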

Theorem. For every matrix \(A \in \mathbb{C}^{n\times n}\), one can find an invertible matrix \(S\) and an upper triangular matrix \(\Lambda\) such that (4) holds.

Discussion 1. When \(A\) is diagonalizable, say \(\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots , \lambda_n)\), then\[ u(x,t)=e^{At}u(x,0) = Se^{\Lambda t} S^{-1} u(x,0) = \sum_{j=1}^{n} e^{\lambda_j t} s_j c_j, \tag{6a}\]where \(s_j\) is the \(j\)-th column of \(S\) and \(c_j\) is the \(j\)-th element of \(S^{-1} u(x,0)\). This process can be summarized as follows.

  1. Diagonalization. Given \(A=S \Lambda S^{-1}\), define \(v=S^{-1}u\) and rewrite (1) as \(\frac{\partial v}{\partial t} =\Lambda v\), which has solution\[ v = e^{\Lambda t}v(x,0). \tag{6b}\]
  2. Back-transformation. \(u = S v\), which is nothing but (6a); see the sketch after this list.
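A minimal sketch of this two-step procedure, again under the assumption that \(A\) is diagonalizable; the function name solve_linear_system and the sample data are hypothetical.

```python
import numpy as np

def solve_linear_system(A, u0, t):
    """Solve du/dt = A u, u(0) = u0, by diagonalizing, evolving, and back-transforming."""
    eigvals, S = np.linalg.eig(A)     # Step 1: A = S Lambda S^{-1}
    v0 = np.linalg.solve(S, u0)       # v(0) = S^{-1} u(0)
    v_t = np.exp(eigvals * t) * v0    # (6b): each component evolves independently
    return S @ v_t                    # Step 2: back-transform u = S v

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
u0 = np.array([1.0, 0.0])
print(solve_linear_system(A, u0, 1.0))
```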

Discussion 2. What would happen if \(A\) were not diagonalizable? This is not very common in the study of differential equations, because it would lead to an ill-posed system, see [2]. In this case, consider for example \(\Lambda = \begin{bmatrix} \lambda & c \\ 0 & \lambda \end{bmatrix}\). Then,\[ \Lambda = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} + \begin{bmatrix} 0 & c \\ 0 & 0 \end{bmatrix}=\lambda I+c N, \qquad N = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},\]and, since \(N^2=0\), \(\Lambda^k = \lambda^k I + k \lambda^{k-1}cN \). Hence,\[ e^{\Lambda t} = \sum_{k=0}^{\infty} \frac{1}{k!}(\lambda^k I + k \lambda^{k-1}cN) t^k = \begin{bmatrix} e^{\lambda t} & ct e^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}.\]
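The closed form for this non-diagonalizable block can again be checked against scipy.linalg.expm; the values of \(\lambda\), \(c\), and \(t\) below are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

lam, c, t = 2.0, 3.0, 0.4                    # arbitrary illustrative values
Lambda = np.array([[lam, c], [0.0, lam]])    # non-diagonalizable (Jordan-type) block

# Closed form derived above: e^{Lambda t} = e^{lam t} [[1, c t], [0, 1]]
closed_form = np.exp(lam * t) * np.array([[1.0, c * t], [0.0, 1.0]])
assert np.allclose(expm(Lambda * t), closed_form)
```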

References

[1] Introduction to Matrix Exponentials. Math 252 — Fall 2002.
[2] H.-O. Kreiss and J. Lorenz, Initial-Boundary Value Problems and the Navier-Stokes Equations. Society for Industrial and Applied Mathematics, 2004. doi: 10.1137/1.9780898719130.
