
Well-posedness of IVPs

This is a study note on initial value problems (IVPs), following Kreiss and Lorenz [3].

Introduction

We consider the well-posedness of initial value problems (IVPs) of the form u_t = P(\partial /\partial x)u = \sum_{|\nu|\leq m} A_\nu\frac{\partial ^{|\nu|}}{\partial x_1^{\nu_1}\cdots \partial x_d^{\nu_d}} u,\tag{1}\\ u(x,0) = f(x), where x \in D := \mathbb{R}^d and u(x,t): D \times [0,\infty ) \to \mathbb{C}^n is the unknown; the coefficients A_\nu are constant n \times n matrices. The initial function f(x) is assumed smooth with compact support in D. We write \|\cdot \|=\|\cdot \|_{L^2(D)} for short.
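For instance, the one-dimensional heat equation u_t = u_{xx} has the form (1) with d = n = 1 and m = 2: the only nonzero coefficient is the one with \nu = (2), namely A_{(2)} = 1, so that P(\partial/\partial x) = \partial^2/\partial x^2.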

First we should clarify what well-posedness of the IVP (1) means. Roughly speaking, it requires that the solution u depend continuously on the initial data u(x,0); that is, there exist constants C>0,\alpha>0 such that for any t>0 \|u(x,t)\| \leq C e^{\alpha t} \|u(x,0)\|.\tag{2}

Solution of the IVP

The Fourier transform is a powerful tool for solving PDEs; see [1, 3] for an introduction. Writing \hat{f}(\omega)= [\mathcal{F}f](\omega), the transform pair is \hat{f}(\omega) = \frac{1}{(2\pi)^{d/2}} \int_{D} e^{-i \omega \cdot x}f(x)\,dx,\\ f(x) = \frac{1}{(2\pi)^{d/2}} \int_{D} e^{i \omega \cdot x}\hat{f}(\omega)\,d \omega.
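With this symmetric normalization, Parseval's identity holds: \|f\| = \|\hat{f}\|. We use it repeatedly below.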

Suppose the initial value is a single Fourier mode, denoted by e^{i \omega \cdot x}\hat{f}(\omega). Then the solution u to (1) can be represented by u(x,t) = e^{i \omega \cdot x}e^{P(i \omega)t}\hat{f}(\omega), where P(i\omega) is the symbol of the differential operator P(\partial /\partial x), obtained by replacing each \partial/\partial x_k with i\omega_k, and e^{P(i \omega)t} is its matrix exponential [2]. The principle of superposition allows us to obtain the exact solution to (1): u(x,t) = \frac{1}{(2\pi)^{d/2}} \int_{D} e^{i \omega \cdot x}e^{P(i \omega)t}\hat{f}(\omega) d \omega. \tag{3}
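As a quick numerical illustration of the solution operator in (3) (a minimal sketch, not taken from [3]): take the hypothetical 1-D system u_t = A u_x with A = \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}, whose symbol is P(i\omega) = i\omega A, and evaluate e^{P(i\omega)t} with SciPy's matrix exponential. Since i\omega A is skew-Hermitian, the exponential is unitary and its 2-norm is exactly 1 for every \omega, so the bound (4) in the next section holds with C = 1, \alpha = 0.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Hypothetical example system u_t = A u_x (a 1-D wave system);
# its symbol is P(i*omega) = i*omega*A.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

t = 2.0
for omega in (0.5, 3.0, 50.0):
    E = expm(1j * omega * t * A)  # solution operator e^{P(i omega) t}
    print(f"omega = {omega:5.1f}   2-norm of e^(P(i omega) t) = {np.linalg.norm(E, 2):.6f}")
# Each norm prints 1.000000: i*omega*A is skew-Hermitian, so its
# exponential is unitary and the bound (4) holds with C = 1, alpha = 0.
```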

Well-posedness

If there exist constants C>0,\alpha>0 such that |e^{P(i \omega)t}| \leq C e^{\alpha t}, \quad \forall \omega \in D, t>0, \tag{4} then Parseval's identity gives \|u(x,t)\|=\|e^{P(i \omega)t}\hat{f}(\omega)\| \leq C e^{\alpha t} \|\hat{f}(\omega)\| = C e^{\alpha t} \|f(x)\|.\tag{5} The converse is also true: (4) is a necessary and sufficient condition for the stability estimate (5). For necessity, we must show that \|u(x,t)\|\leq C e^{\alpha t} \|f(x)\| implies (4). We give the full proof below.
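Before turning to the proof, here is a worked instance of (4). For the heat equation u_t = u_{xx} we have P(i\omega) = (i\omega)^2 = -\omega^2, so |e^{P(i\omega)t}| = e^{-\omega^2 t} \leq 1 and (4) holds with C = 1, \alpha = 0. For the backward heat equation u_t = -u_{xx}, instead, |e^{P(i\omega)t}| = e^{\omega^2 t} is unbounded in \omega for each fixed t>0, so (4) fails and the IVP is ill-posed.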

Proof.
To prove (4) is to bound the matrix operator e^{P(i \omega)t} from above. Recall the operator norm of a matrix T, |T| = \sup_{v \neq 0} \frac{|Tv|}{|v|}. A general proof would work with this definition directly; alternatively, to show |T| \leq \gamma it suffices to find, for each \epsilon_j>0, a vector v_j \neq 0 such that (1-\epsilon_j) |T| |v_j| \leq |Tv_j| \leq \gamma |v_j|; letting \epsilon_j \to 0 then yields |T|\leq \gamma.

Another ingredient is continuity: the map \omega \mapsto e^{P(i \omega)t} is continuous on D. Fix \omega_0 \in D; we need to show |e^{P(i \omega_0)t}| \leq C e^{\alpha t}, where C,\alpha are independent of \omega_0. Using (5) we aim for the chain \begin{split} C e^{\alpha t} \|f_j(x)\| &\geq \|u_j\| \quad (\text{from }(5))\\ &= \|e^{P(i \omega)t}\hat{f}_j\|\quad (\text{Parseval})\\ &\geq (1-\epsilon_j)\, |e^{P(i \omega_0)t}|\, \|\hat{f}_j\|\quad (\text{justified below}). \end{split}

To obtain the last step, let \omega_0 be fixed and choose v \in \mathbb{C}^n with |v|=1 attaining the operator norm, |e^{P(i \omega_0)t}| = |e^{P(i \omega_0)t}v|. Set \hat{f}_j(\omega)=v for |\omega-\omega_0|<\delta_j and zero otherwise. By continuity, for any \epsilon_j>0 there exists \delta_j>0 such that \|e^{P(i \omega)t}\hat{f}_j\|\geq (1-\epsilon_j) \|e^{P(i \omega_0)t}\hat{f}_j(\omega)\|=(1-\epsilon_j)\, |e^{P(i \omega_0)t}|\, \|\hat{f}_j(\omega)\|. Since \|f_j\| = \|\hat{f}_j\| by Parseval, the chain yields |e^{P(i \omega_0)t}| \leq C e^{\alpha t}/(1-\epsilon_j), and letting \epsilon_j \to 0 gives (4).
End of proof.
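The norm-attaining vector v used in the proof can be computed explicitly: the 2-norm of a matrix is its largest singular value, attained at the corresponding right singular vector. A minimal numpy sketch (the matrix T below is an arbitrary stand-in for e^{P(i\omega_0)t}):

```python
import numpy as np

rng = np.random.default_rng(0)
# An arbitrary stand-in for the matrix e^{P(i omega_0) t}:
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# |T| is the largest singular value, attained at the top right
# singular vector v (a unit vector).
U, s, Vh = np.linalg.svd(T)
v = Vh[0].conj()               # top right singular vector, |v| = 1
print(np.linalg.norm(T, 2))    # operator norm |T| = s[0]
print(np.linalg.norm(T @ v))   # |Tv| equals |T|, since |v| = 1
```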

Generalization to L^2 space

In the last section we took the initial data f to be smooth with compact support; however, the theory extends to all f \in L^2(D) by a density argument.

Let f_j(x)\to f(x) in the L^2-norm. From the estimate (5), the corresponding solutions u_j(x,t) satisfy \|u_i-u_j\| \leq C e^{\alpha t} \|f_i-f_j\| \to 0, so \{u_j\} is a Cauchy sequence, and by the completeness of L^2(D) there exists u \in L^2(D) such that \|u_j-u\|\to 0. Such a u(x, t) is called the generalized solution of the IVP (1).

The following example, taken from [3, Section 2.2.5], illustrates the process.

Example 1. Consider the one-dimensional equation u_t+u_x=0 with initial value u(x,0)=f(x), where f(x) = 1 for -1< x< 1 and f(x)=0 otherwise. We construct a sequence \left\{f_j \right\} that approximates f by f_j(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{i \omega x}\hat{f}_j(\omega)d \omega, where \hat{f}_j=\hat{f} for -j<\omega< j and zero otherwise. The solution with initial data f_j is then u_j(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty }^{\infty} e^{i \omega x}e^{-i \omega t}\hat{f}_j(\omega)d \omega =f_j(x-t). Taking the limit, we obtain u=f(x-t), which is not smooth (not even continuous) but still belongs to L^2(\mathbb{R}).
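A minimal numerical sketch of Example 1 (my own illustration, not from [3]): for this box function the transform is \hat{f}(\omega) = \sqrt{2/\pi}\,\sin \omega/\omega, so f_j and u_j(x,t) = f_j(x-t) can be evaluated by simple trapezoidal quadrature, and the L^2 error against the exact solution f(x-t) shrinks as j grows.

```python
import numpy as np

def fhat(w):
    # Transform of the box f = 1 on (-1, 1): fhat(w) = sqrt(2/pi) * sin(w)/w.
    return np.sqrt(2.0 / np.pi) * np.sinc(w / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)

def f_j(x, j, n=4001):
    # Truncated inverse transform f_j(x) = (1/sqrt(2 pi)) * int_{-j}^{j} e^{iwx} fhat(w) dw,
    # evaluated with the trapezoidal rule.
    w = np.linspace(-j, j, n)
    integrand = np.exp(1j * np.outer(x, w)) * fhat(w)
    dw = w[1] - w[0]
    vals = np.sum((integrand[:, 1:] + integrand[:, :-1]) * dw / 2.0, axis=1)
    return vals.real / np.sqrt(2.0 * np.pi)

x = np.linspace(-4.0, 4.0, 801)
t = 1.5
f_shift = ((x - t > -1.0) & (x - t < 1.0)).astype(float)  # exact solution f(x - t)
for j in (4, 16, 64):
    uj = f_j(x - t, j)                                    # u_j(x, t) = f_j(x - t)
    err = np.sqrt(np.sum((uj - f_shift) ** 2) * (x[1] - x[0]))
    print(f"j = {j:3d}   L2 error ~ {err:.4f}")
# The error decreases as j grows, though slowly, because of the
# jump discontinuity of f (Gibbs phenomenon).
```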

References

[1] Fourier transform and Laplace transform.
[2] A short introduction to matrix exponentials. Numanal.com.
[3] H.-O. Kreiss and J. Lorenz, Initial-Boundary Value Problems and the Navier-Stokes Equations. Society for Industrial and Applied Mathematics, 2004. doi: 10.1137/1.9780898719130.
