Analysis of Coupled Reaction-Diffusion Equations for
RNA Interactions
Maryann E. Hohn∗
Bo Li†
Weihua Yang‡
November 11, 2014
Abstract
We consider a system of coupled reaction-diffusion equations that models the interaction between multiple types of chemical species, particularly the interaction between
one messenger RNA and different types of non-coding microRNAs in biological cells.
We construct various modeling systems with different levels of complexity for the reaction, nonlinear diffusion, and coupled reaction and diffusion of the RNA interactions,
respectively, with the most complex one being the full coupled reaction-diffusion equations. The simplest system consists of ordinary differential equations (ODE) modeling
the chemical reaction. We present a derivation of this system using the chemical master equation and the mean-field approximation, and prove the existence, uniqueness,
and linear stability of equilibrium solution of the ODE system. Next, we consider a
single, nonlinear diffusion equation for one species that results from the slow diffusion
of the others. Using variational techniques, we prove the existence and uniqueness of
solution to a boundary-value problem of this nonlinear diffusion equation. Finally, we
consider the full system of reaction-diffusion equations, both steady-state and time-dependent. We use the monotone method to iteratively construct upper and lower
solutions and show that their respective limits are solutions to the reaction-diffusion
system. For the time-dependent system of reaction-diffusion equations, we obtain
the existence and uniqueness of global solutions. We also obtain some asymptotic
properties of such solutions.
Key words: RNA, gene expression, reaction-diffusion systems, well-posedness, variational methods, monotone methods, maximum principle.
AMS subject classifications: 34D20, 35J20, 35J25, 35J57, 35J60, 35K57, 92D25.
∗
Department of Mathematics, University of Connecticut, Storrs, 196 Auditorium Road, Unit 3009, Storrs,
CT 06269-3009, USA. Email: [email protected]
†
Department of Mathematics and Center for Theoretical Biological Physics, University of California, San
Diego, 9500 Gilman Drive, Mail code: 0112, La Jolla, CA 92093-0112, USA. Email: [email protected]
‡
Department of Mathematics and Institute of Mathematics and Physics, Beijing University of Technology,
No. 100, Pingleyuan, Chaoyang District, Beijing, P. R. China, 100124, and Department of Mathematics,
University of California, San Diego, 9500 Gilman Drive, Mail code: 0112, La Jolla, CA 92093-0112, USA.
[email protected]
1 Introduction
Let Ω be a bounded domain in R3 with a smooth boundary ∂Ω. Let N ≥ 1 be an integer. Let
Di (i = 1, . . . , N ), D, βi (i = 1, . . . , N ), β, and ki (i = 1, . . . , N ) be positive numbers. Let
αi (i = 1, . . . , N ) and α be nonnegative functions on Ω × (0, ∞). We consider the following
system of coupled reaction-diffusion equations:
∂ui/∂t = Di ∆ui − βi ui − ki ui v + αi in Ω × (0, ∞), i = 1, . . . , N, (1.1)
∂v/∂t = D∆v − βv − ∑_{i=1}^{N} ki ui v + α in Ω × (0, ∞), (1.2)
together with the boundary and initial conditions
∂ui/∂n = ∂v/∂n = 0 on ∂Ω × (0, ∞), i = 1, . . . , N, (1.3)
ui(·, 0) = ui0 and v(·, 0) = v0 in Ω, i = 1, . . . , N, (1.4)
where ∂/∂n denotes the normal derivative along the exterior unit normal n at the boundary
∂Ω, and all ui 0 (i = 1, . . . , N ) and v0 are nonnegative functions on Ω.
The reaction-diffusion system (1.1)–(1.4) is a biophysical model of the interaction between different types of Ribonucleic acid (RNA) molecules, a class of biological molecules
that are crucial in the coding and decoding, regulation, and expression of genes [23].
Small, non-coding RNAs (sRNA) regulate developmental events such as cell growth and
tissue differentiation through binding and reacting with messenger RNA (mRNA) in a
cell. Different sRNA species may competitively bind to different mRNA targets to regulate genes [4,6,12,13,16,21]. Recent experiments suggest that the concentration of mRNA
and different sRNA in cells and across tissue is linked to the expression of a gene [22]. One of the main, long-term goals of our study of the reaction-diffusion system (1.1)–(1.4) is therefore to provide some insight into how different RNA concentrations can contribute to turning genes “on” or “off” across various length scales, and eventually to gene expression.
In Eqs. (1.1) and (1.2), the function ui = ui (x, t) for each i (1 ≤ i ≤ N ) represents
the local concentration of the ith sRNA species at x ∈ Ω and time t. We assume a total
of N sRNA species. The function v = v(x, t) represents the local concentration of the
mRNA species at x ∈ Ω and time t. For each i (1 ≤ i ≤ N ), Di is the diffusion coefficient
and βi is the self-degradation rate of the ith sRNA species. Similarly, D is the diffusion
coefficient and β is the self-degradation rate of mRNA. For each i (1 ≤ i ≤ N ), ki is the
rate of reaction between the ith sRNA and mRNA. We neglect the interactions among different sRNA species as they can be effectively described through their diffusion and self-degradation coefficients. The reaction terms ui v (i = 1, . . . , N ) result from the mean-field
approximation. The nonnegative functions αi = αi (x, t) (i = 1, . . . , N ) and α = α(x, t)
(x ∈ Ω, t > 0) are the production rates of the corresponding RNA species, and are termed
transcription profiles. Notice that we set the linear size of the region Ω to be of tissue length
to account for the RNA interaction across different cells [22].
The reaction-diffusion system model (1.1)–(1.4) was first proposed for the special case
N = 1 and one space dimension in [14]; cf. also [12, 15, 19]. The full model with N (≥ 2)
sRNA species was proposed in [7].
An interesting feature of the reaction-diffusion system (1.1)–(1.4), first discovered in [14],
is that the increase in the diffusivity (within certain range) of an sRNA species sharpens
the concentration profile of mRNA. Figure 1 depicts numerically computed steady-state
solutions to the system (1.1)–(1.3) in one space dimension with N = 1, Ω = (0, 1), D = 0,
a few selected values of D1 , β1 = β = 0.01, k1 = 1, and
α1 = 0.1 + 0.1 tanh(5x − 2.5), (1.5)
α = 0.1 + 0.1 tanh(2.5 − 5x). (1.6)
One can see that as the diffusion constant D1 of the sRNA increases, the profile of the
steady-state concentration v = v(x) of the mRNA sharpens.
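This sharpening is straightforward to reproduce numerically. Below is a minimal sketch (our own scheme; the paper does not state its numerical method, so the finite-difference grid, the fixed-point iteration, and the tolerances are assumptions): with D = 0 the steady-state mRNA equation reduces to v = α/(β + k1 u1), and u1 solves D1 u1″ − β1 u1 − k1 u1 v + α1 = 0 with no-flux boundary conditions.

```python
import numpy as np

def steady_state(D1, beta1=0.01, beta=0.01, k1=1.0, n=101, max_iter=2000, tol=1e-10):
    """Steady state of the N = 1 system on (0, 1) with D = 0 (no mRNA diffusion)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    alpha1 = 0.1 + 0.1 * np.tanh(5 * x - 2.5)   # sRNA transcription profile (1.5)
    alpha = 0.1 + 0.1 * np.tanh(2.5 - 5 * x)    # mRNA transcription profile (1.6)
    u = alpha1 / beta1                          # initial guess for u1
    v = alpha / (beta + k1 * u)                 # D = 0 gives v = alpha/(beta + k1 u1)
    lap = D1 / h**2
    for _ in range(max_iter):
        # Solve the linear problem D1 u'' - (beta1 + k1 v) u = -alpha1 with
        # u'(0) = u'(1) = 0, using ghost-point rows for the Neumann conditions.
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = lap
            A[i, i] = -2 * lap - (beta1 + k1 * v[i])
        A[0, 0] = -2 * lap - (beta1 + k1 * v[0]);  A[0, 1] = 2 * lap
        A[-1, -1] = -2 * lap - (beta1 + k1 * v[-1]);  A[-1, -2] = 2 * lap
        u_new = np.linalg.solve(A, -alpha1)
        v = alpha / (beta + k1 * u_new)         # update v from the new u1
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, u, v
```

Running `steady_state` for the four values of D1 used in Figure 1 and plotting the returned v reproduces the sharpening of the mRNA profile.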
[Figure 1 about here: profiles of the mRNA concentration v(x) on 0 ≤ x ≤ 1 for D1 = 0.00001, 0.0001, 0.0005, and 0.001.]
Figure 1: Numerical solutions to the steady-state equations with the boundary conditions
(1.1)–(1.3) in one space dimension with N = 1, Ω = (0, 1), D = 0, β1 = β = 0.01, k1 = 1,
and α1 and α given in (1.5) and (1.6), respectively. The numerically computed, steady-state concentration of mRNA v = v(x) (0 < x < 1) sharpens as the diffusion constant D1 of the sRNA increases from 0.00001 to 0.0001, 0.0005, and 0.001.
As one of a series of studies on the reaction-diffusion system modeling, analysis, and computation of the RNA interactions, the present work focuses on: (1) the construction
of various modeling systems with different levels of complexity for the reaction, nonlinear
diffusion, and coupled reaction and diffusion, respectively, with the most complex one being
the full reaction-diffusion system (1.1)–(1.4); and (2) the mathematical justification for each
of the models, proving the well-posedness of the corresponding differential equations. To explain where the reaction terms (i.e., the product terms ui v in (1.1) and (1.2)) come from, however, we shall first present a brief derivation of the corresponding reaction system
(i.e., no diffusion) for the case N = 1 using a chemical master equation and the mean-field
approximation [19].
We shall consider our different modeling systems in four cases.
Case 1. We consider the following system of ordinary different equations (ODE) for the
concentrations ui = ui (t) ≥ 0 (i = 1, . . . , N ) and v = v(t) ≥ 0:
dui/dt = −βi ui − ki ui v + αi, i = 1, . . . , N, (1.7)
dv/dt = −βv − ∑_{i=1}^{N} ki ui v + α, (1.8)
where all αi (i = 1, . . . , N ) and α are nonnegative numbers. We shall prove the existence,
uniqueness, and linear stability of the steady-state solution to this ODE system; cf. Theorem 3.1.
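Before proving the theorem, one can already observe the stable equilibrium numerically. The sketch below (the parameter values and the choice N = 2 are illustrative assumptions, not taken from the paper) integrates (1.7) and (1.8) with the classical fourth-order Runge–Kutta method until the right-hand side vanishes.

```python
import numpy as np

# Illustrative positive parameters for N = 2 species (assumed values)
beta_i = np.array([1.0, 0.5]); k_i = np.array([1.0, 2.0])
alpha_i = np.array([2.0, 1.0]); beta, alpha = 1.0, 3.0

def rhs(y):
    """Right-hand side of (1.7)-(1.8); y = (u1, ..., uN, v)."""
    u, v = y[:-1], y[-1]
    du = -beta_i * u - k_i * u * v + alpha_i
    dv = -beta * v - np.sum(k_i * u) * v + alpha
    return np.append(du, dv)

def rk4(y, dt, steps):
    """Classical fourth-order Runge-Kutta time stepping."""
    for _ in range(steps):
        s1 = rhs(y); s2 = rhs(y + 0.5 * dt * s1)
        s3 = rhs(y + 0.5 * dt * s2); s4 = rhs(y + dt * s3)
        y = y + (dt / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)
    return y

y_end = rk4(np.zeros(3), dt=0.01, steps=10000)  # integrate to t = 100
```

Starting from zero concentrations, the trajectory settles at a componentwise positive state at which the right-hand side vanishes, consistent with the existence and linear stability asserted in Theorem 3.1.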
Case 2. We consider the situation where all the diffusion coefficients Di (i = 1, . . . , N )
are much smaller than the diffusion coefficient D. That is, we consider the approximation
Di = 0 (i = 1, . . . , N ). Assume all α and αi (i = 1, . . . , N ) are independent of time t. The
steady-state solution of ui in Eq. (1.1) with Di = 0 leads to ui = αi /(βi +ki v) (i = 1, . . . , N ).
These expressions, coupled with the v-equation (1.2), imply that the steady-state solution
v ≥ 0 should satisfy the following nonlinear diffusion equation and boundary condition:
D∆v − βv − ∑_{i=1}^{N} ki αi v/(βi + ki v) + α = 0 in Ω, (1.9)
∂v/∂n = 0 on ∂Ω. (1.10)
The single, nonlinear equation (1.9) is the Euler–Lagrange equation of some energy functional. We shall use the direct method in the calculus of variations to prove the existence
and uniqueness of the nonnegative solution to the boundary-value problem (1.9) and (1.10);
cf. Theorem 4.1.
Case 3. We consider the following steady-state system corresponding to (1.1)–(1.4) for
the concentrations ui ≥ 0 (i = 1, . . . , N ) and v ≥ 0:
Di ∆ui − βi ui − ki ui v + αi = 0 in Ω, i = 1, . . . , N, (1.11)
D∆v − βv − ∑_{i=1}^{N} ki ui v + α = 0 in Ω, (1.12)
∂ui/∂n = ∂v/∂n = 0 on ∂Ω, i = 1, . . . , N. (1.13)
Here again we assume that α and αi (i = 1, . . . , N ) are independent of time t. We shall use
the monotone method [20] to prove the existence of a solution to this system of reaction-diffusion equations; cf. Theorem 5.1. The monotone method amounts to constructing sequences of upper and lower solutions, extracting convergent subsequences, and proving that the limits are the desired solutions.
Case 4. This is the full reaction-diffusion system (1.1)–(1.4). We shall prove the existence
and uniqueness of global solution to this system; cf. Theorem 6.1. To do so, we first
consider local solutions, i.e., solutions defined on a finite time interval. Again, we use the
monotone method to construct iteratively upper and lower solutions and show their limits
are the desired solutions. Unlike in the case of steady-state solutions, we are not able to
use high solution regularity, as that would require compatibility conditions. Rather, we use
an integral representation of solution to our initial-boundary-value problem. We then use
the Maximum Principle for systems of linear parabolic equations to obtain the existence
and uniqueness of global solution. We also study some additional properties such as the
asymptotic behavior of solutions to the full system.
While our underlying reaction-diffusion system has been proposed to model RNA interactions in molecular biology, its basic mathematical properties are similar to those of reaction-diffusion systems modeling other physical and biological processes. Our preliminary analysis presented here therefore shares some common features with the general study of reaction-diffusion systems; cf., e.g., [10, 17] and the references therein. Our continuing mathematical
effort in understanding the reaction and diffusion of RNA is to analyze the qualitative properties of solutions to the corresponding equations, in particular, the asymptotic behavior of
such solutions as certain parameters become very small or large.
The rest of this paper is organized as follows: In Section 2, we present a brief derivation
of the reaction system (1.7) and (1.8) for the case N = 1 using a chemical master equation
and the mean-field approximation. In Section 3, we consider the system of ODE (1.7)
and (1.8) and prove the existence, uniqueness, and linear stability of the steady-state solution. In Section 4, we prove the existence and uniqueness of the solution to the boundary-value problem of the single nonlinear diffusion equation (1.9) and (1.10) for the concentration v of mRNA. In Section 5, we prove the existence of a steady-state solution to the system of reaction-diffusion equations (1.11)–(1.13). In Section 6, we prove the existence and uniqueness of global solution to the full system of time-dependent reaction-diffusion equations (1.1)–(1.4). Finally, in Section 7, we prove some asymptotic properties of solutions to the full system of time-dependent reaction-diffusion equations.
2 Derivation of the Reaction System
We give a brief derivation of the reaction system (1.7) and (1.8), and make a remark on
how the full reaction-diffusion system (1.1)–(1.4) is formulated.
For simplicity, we shall consider two chemical species: mRNA and one sRNA. Figure 2
describes sRNA-mediated gene silencing within the cell and depicts the different rates at which the mRNA and sRNA populations may change at time t. In the figure, αs and αm
describe the sRNA and mRNA production rates, βs and βm describe the sRNA and mRNA
independent degradation rates, and γ describes the coupled degradation rate at time t.
Notice in the rate diagram that the mRNA and sRNA binding process is irreversible. The
numerical value of each of these rates can be determined via experimental data [15].
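The kinetic scheme of Figure 2 can also be simulated exactly with Gillespie's stochastic simulation algorithm. The sketch below (the rate values and random seed are illustrative assumptions, not experimental values) generates one trajectory of the pair process (Mt, St):

```python
import numpy as np

def gillespie(alpha_m, alpha_s, beta_m, beta_s, gamma, t_end, seed=0):
    """One exact trajectory of (Mt, St) for the kinetic scheme of Figure 2."""
    rng = np.random.default_rng(seed)
    t, m, s = 0.0, 0, 0
    times, states = [0.0], [(0, 0)]
    while t < t_end:
        # propensities: mRNA production, sRNA production, independent
        # degradations, and coupled degradation at rate gamma*m*s
        rates = np.array([alpha_m, alpha_s, beta_m * m, beta_s * s, gamma * m * s])
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # exponential waiting time
        jump = rng.choice(5, p=rates / total)   # which reaction fires
        if jump == 0: m += 1
        elif jump == 1: s += 1
        elif jump == 2: m -= 1
        elif jump == 3: s -= 1
        else: m -= 1; s -= 1
        times.append(t); states.append((m, s))
    return np.array(times), np.array(states)
```

For large copy numbers, the time average of Mt along a long trajectory is close to the mean-field equilibrium derived below in this section.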
We denote by Mt and St the numbers of mRNA and sRNA, respectively, in a given cell
at time t, and consider the two continuous-time processes (Mt )t≥0 and (St )t≥0 . We assume
[Figure 2 about here.]
Figure 2: Kinetic scheme of the interaction of sRNA and mRNA in a cell. Here, αs and
αm represent the production rates of sRNA and mRNA, respectively; βs and βm represent
the independent degradation rates of sRNA and mRNA, respectively; and γ represents the
coupled degradation rate of sRNA and mRNA.
that (Mt , St )t≥0 is a stationary continuous-time Markov chain with state space S with the
following ordering:
S = {(0, 0), . . . , (m − 1, s − 1), (m − 1, s), (m, s − 1), (m, s), (m, s + 1), . . .} .
We assume that the total numbers of mRNA and sRNA are finite, and hence the state
space is finite. For any given state (m, s) ∈ S, we denote by Pm,s (t) the probability that the
system is in this state, i.e., Pm,s (t) = P (Mt = m, St = s). For convenience, we extend S to
include integer pairs (m, s) for m < 0 or s < 0 and set Pm,s (t) = 0 if m < 0 or s < 0. Note
that
∑_{(m,s)∈S} Pm,s(t) = 1 (2.1)
for any t ≥ 0. Note also that the averages ⟨Mt⟩, ⟨St⟩, and ⟨Mt St⟩ are defined by
⟨Mt⟩ = ∑_{(m,s)∈S} m Pm,s(t), ⟨St⟩ = ∑_{(m,s)∈S} s Pm,s(t), and ⟨Mt St⟩ = ∑_{(m,s)∈S} ms Pm,s(t). (2.2)
The following master equation describes the reactions defined in Figure 2:
Ṗm,s (t) = αm Pm−1,s (t) + αs Pm,s−1 (t) + βs (s + 1)Pm,s+1 (t) + βm (m + 1)Pm+1,s (t)
+ γ(m + 1)(s + 1)Pm+1,s+1 (t) − (αm + αs + βm m + βs s + γms)Pm,s (t),
where a dot denotes the time derivative. Using this and (2.2), we obtain by a series of
calculations that
(d/dt)⟨Mt⟩ = ∑_{(m,s)∈S} m Ṗm,s(t) = αm Am + αs As + βm Bm + βs Bs + γC, (2.3)
where
Am = ∑_{(m,s)∈S} m Pm−1,s(t) − ∑_{(m,s)∈S} m Pm,s(t),
As = ∑_{(m,s)∈S} m Pm,s−1(t) − ∑_{(m,s)∈S} m Pm,s(t),
Bm = ∑_{(m,s)∈S} m(m + 1) Pm+1,s(t) − ∑_{(m,s)∈S} m² Pm,s(t),
Bs = ∑_{(m,s)∈S} m(s + 1) Pm,s+1(t) − ∑_{(m,s)∈S} ms Pm,s(t),
C = ∑_{(m,s)∈S} m(m + 1)(s + 1) Pm+1,s+1(t) − ∑_{(m,s)∈S} m²s Pm,s(t).
By our convention that Pm,s(t) = 0 if m < 0 or s < 0, we have by the change of index m − 1 → m and (2.1) that
Am = ∑_{(m+1,s)∈S} (m + 1) Pm,s(t) − ∑_{(m,s)∈S} m Pm,s(t)
= ∑_{(m,s)∈S} (m + 1) Pm,s(t) − ∑_{(m,s)∈S} m Pm,s(t)
= ∑_{(m,s)∈S} Pm,s(t)
= 1.
Similarly, by changing the index s − 1 → s, we have
As = ∑_{(m,s+1)∈S} m Pm,s(t) − ∑_{(m,s)∈S} m Pm,s(t)
= ∑_{(m,s)∈S} m Pm,s(t) − ∑_{(m,s)∈S} m Pm,s(t)
= 0.
Changing m + 1 → m, we obtain by (2.2) that
Bm = ∑_{(m−1,s)∈S} (m − 1)m Pm,s(t) − ∑_{(m,s)∈S} m² Pm,s(t)
= ∑_{(m,s)∈S} (m − 1)m Pm,s(t) − ∑_{(m,s)∈S} m² Pm,s(t)
= −∑_{(m,s)∈S} m Pm,s(t)
= −⟨Mt⟩.
Changing s + 1 → s and noting ms = 0 when s = 0, we have
Bs = ∑_{(m,s−1)∈S} ms Pm,s(t) − ∑_{(m,s)∈S} ms Pm,s(t)
= ∑_{(m,s)∈S} ms Pm,s(t) − ∑_{(m,s)∈S} ms Pm,s(t)
= 0.
Finally, changing m + 1 → m and s + 1 → s, we obtain by (2.2) that
C = ∑_{(m−1,s−1)∈S} (m − 1)ms Pm,s(t) − ∑_{(m,s)∈S} m²s Pm,s(t)
= ∑_{(m,s)∈S} (m − 1)ms Pm,s(t) − ∑_{(m,s)∈S} m²s Pm,s(t)
= −∑_{(m,s)∈S} ms Pm,s(t)
= −⟨Mt St⟩.
Inserting all Am, As, Bm, Bs, and C into (2.3), we obtain
(d/dt)⟨Mt⟩ = αm − βm⟨Mt⟩ − γ⟨Mt St⟩. (2.4)
Similarly, we have
(d/dt)⟨St⟩ = αs − βs⟨St⟩ − γ⟨Mt St⟩. (2.5)
We now make the mean-field assumption: ⟨Mt St⟩ = ⟨Mt⟩⟨St⟩. If we denote by v(t) = ⟨Mt⟩ and u1(t) = ⟨St⟩ the spatially homogeneous concentrations of mRNA and sRNA, respectively, then we obtain (1.7) (with N = 1) from (2.5) and (1.8) from (2.4), with α1 = αs, β1 = βs, k1 = γ, α = αm, and β = βm.
We remark that, based on Fick’s law, the spatial diffusion of the underlying sRNA and mRNA molecules can be described by Di ∆ui (i = 1, . . . , N ) and D∆v, with all Di and D the
diffusion constants, respectively [1, 11, 17]. Here we have neglected any possible and more
complicated processes such as cross diffusion and anomalous diffusion. Combining these
terms with the reaction system (1.7) and (1.8), we obtain the full reaction-diffusion system
(1.1)–(1.4) as our mathematical model for the RNA interaction.
3 Reaction System: Steady-State Solution and Its Linear Stability
Theorem 3.1. Assume all βi, ki, and αi (i = 1, . . . , N ), and β and α are positive numbers. The system of ODE
dui/dt = −βi ui − ki ui v + αi, i = 1, . . . , N, (3.1)
dv/dt = −βv − ∑_{i=1}^{N} ki ui v + α (3.2)
has a unique equilibrium solution (u10, . . . , uN0, v0) ∈ R^{N+1} with all ui0 > 0 (i = 1, . . . , N ) and v0 > 0. Moreover, it is linearly stable.
Proof. If (u1, . . . , uN, v) ∈ R^{N+1} is an equilibrium solution to (3.1) and (3.2) with all ui > 0 (i = 1, . . . , N ) and v > 0, then S := ∑_{i=1}^{N} ki ui should satisfy
S = ∑_{i=1}^{N} ki αi/[βi + ki α(β + S)^{−1}], (3.3)
and the solution should be given by
ui = αi/[βi + ki α(β + S)^{−1}] (i = 1, . . . , N ) and v = α/(β + S). (3.4)
Thus the key here is to prove that there is a unique solution S > 0 to (3.3).
Define g : [0, ∞) → R by
g(s) = s − ∑_{i=1}^{N} ki αi/[βi + ki α(β + s)^{−1}]. (3.5)
Clearly, g is smooth in [0, ∞), g(0) < 0, and g(+∞) = +∞. Thus F := {s ≥ 0 : g(s) ≥ 0} is nonempty, closed, and bounded below. Let s0 = min F. Then s0 > 0 and g(s0) = 0. Moreover, g′(s0) ≥ 0. By direct calculations, we have
g″(s) = 2α ∑_{i=1}^{N} ki² αi βi/[βi(β + s) + ki α]³ > 0 for all s > 0.
Thus g′(s) > g′(s0) ≥ 0 for s > s0. Hence g(s) > g(s0) = 0 for s > s0. Therefore s0 is the unique solution to g = 0 on [0, ∞).
Set now
ui0 = αi/[βi + ki α(β + s0)^{−1}], i = 1, . . . , N, (3.6)
v0 = α/(β + s0). (3.7)
Clearly, all ui0 > 0 (i = 1, . . . , N ) and v0 > 0. Note that s0 = ∑_{i=1}^{N} ki ui0, since g(s0) = 0. Thus (3.7) implies βv0 + ∑_{i=1}^{N} ki ui0 v0 = α, and (3.6) together with (3.7) further implies βi ui0 + ki ui0 v0 = αi (i = 1, . . . , N ). Therefore (u10, . . . , uN0, v0) is an equilibrium solution to (3.1) and (3.2).
Assume both (u10, . . . , uN0, v0) and (ū10, . . . , ūN0, v̄0) are equilibrium solutions to (3.1) and (3.2) with all ui0 > 0 and ūi0 > 0 (i = 1, . . . , N ), v0 > 0, and v̄0 > 0. Then, by (3.3) and (3.5), S0 := ∑_{i=1}^{N} ki ui0 > 0 and S̄0 := ∑_{i=1}^{N} ki ūi0 > 0 satisfy g(S0) = 0 and g(S̄0) = 0, respectively. By the uniqueness of the solution s0 of g = 0, we then have S̄0 = S0 = s0. It then follows from (3.6) and (3.7) that ui0 = ūi0 (i = 1, . . . , N ) and v0 = v̄0. Therefore the equilibrium solution is unique.
The linearized system for (U1, . . . , UN, V) around the equilibrium solution (u10, . . . , uN0, v0) is given by
dUi/dt = −(βi + ki v0) Ui − ki ui0 V, i = 1, . . . , N,
dV/dt = −∑_{i=1}^{N} ki v0 Ui − (β + ∑_{i=1}^{N} ki ui0) V.
Let w = (U1, . . . , UN, V)^T, where the superscript T denotes the transpose. This system is then dw/dt = Mw, where
M =
[ −(β1 + k1 v0)   0               · · ·   0               −k1 u10 ]
[ 0               −(β2 + k2 v0)   · · ·   0               −k2 u20 ]
[ ⋮               ⋮               ⋱       ⋮               ⋮ ]
[ 0               0               · · ·   −(βN + kN v0)   −kN uN0 ]
[ −k1 v0          −k2 v0          · · ·   −kN v0          −(β + ∑_{i=1}^{N} ki ui0) ]
It is easy to see that M is strictly column diagonally dominant with negative diagonal entries. Geršgorin’s Theorem (with columns replacing rows) [8] then implies that the real part of any eigenvalue of M is negative. This leads to the desired linear stability.
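The proof is constructive, and each step can be checked numerically: find s0 by bisection, build the equilibrium via (3.6) and (3.7), and inspect the eigenvalues of M. A sketch (the random positive parameters and bisection depth are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
beta_i = rng.uniform(0.5, 2.0, N)
k_i = rng.uniform(0.5, 2.0, N)
alpha_i = rng.uniform(0.5, 2.0, N)
beta, alpha = 1.0, 1.0

def g(s):
    """The function (3.5), whose unique nonnegative root is s0."""
    return s - np.sum(k_i * alpha_i / (beta_i + k_i * alpha / (beta + s)))

# g(0) < 0 and g(+inf) = +inf with a unique root, so bisection applies
lo, hi = 0.0, 1.0
while g(hi) < 0.0:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
s0 = 0.5 * (lo + hi)

u0 = alpha_i / (beta_i + k_i * alpha / (beta + s0))   # equilibrium (3.6)
v0 = alpha / (beta + s0)                              # equilibrium (3.7)

# linearization matrix M from the proof
Mlin = np.zeros((N + 1, N + 1))
Mlin[:N, :N] = np.diag(-(beta_i + k_i * v0))
Mlin[:N, N] = -k_i * u0
Mlin[N, :N] = -k_i * v0
Mlin[N, N] = -(beta + np.sum(k_i * u0))
```

The eigenvalues of `Mlin` all have negative real parts, as the column-wise Geršgorin argument guarantees.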
4 A Single Nonlinear Diffusion Equation: Existence and Uniqueness of Solution
Theorem 4.1. Assume Ω is a bounded domain in R3 with a Lipschitz-continuous boundary
∂Ω. Assume D, β, and all βi and ki (i = 1, . . . , N ) are positive numbers. Assume αi ∈ L2 (Ω)
with αi ≥ 0 a.e. in Ω (i = 1, . . . , N ) and α ∈ L2 (Ω) with α ≥ 0 a.e. in Ω. Then there exists
a unique weak solution v ∈ H 1 (Ω) with v ≥ 0 a.e. in Ω to the boundary-value problem
D∆v − βv − ∑_{i=1}^{N} ki αi v/(βi + ki v) + α = 0 in Ω, (4.1)
∂v/∂n = 0 on ∂Ω. (4.2)
The same statement is true if the Neumann boundary condition (4.2) is replaced by the
Dirichlet boundary condition v = v0 on ∂Ω for some v0 ∈ H 1 (Ω) with v0 ≥ 0 on ∂Ω.
Proof. We prove the case with the Neumann boundary condition (4.2), as the Dirichlet boundary condition can be treated similarly. We define J : H¹(Ω) → R ∪ {±∞} by
J[u] = ∫_Ω { (D/2)|∇u|² + (β/2)u² + ∑_{i=1}^{N} (αi βi/ki)[(ki/βi)u − ln(1 + (ki/βi)u)] − αu } dx,
where ln s = −∞ for s ≤ 0. Define g(s) = s − ln(1 + s) for s ∈ R. It is easy to see that
g = +∞ on (−∞, −1], and g is strictly convex and attains its unique minimum at 0 with
g(0) = 0 on (−1, ∞). Thus, since the term in the summation in J is (αi βi /ki )g(ki u/βi ) ≥ 0,
there exist constants C1 , C2 ∈ R with C1 > 0 such that
J[u] ≥ C1 ‖u‖²_{H¹(Ω)} + C2 for all u ∈ H¹(Ω). (4.3)
Denote θ = inf_{u∈H¹(Ω)} J[u]. Clearly, θ is finite. Standard arguments [2, 5, 9] with an energy-minimizing sequence, using Fatou’s lemma to treat the lower-order terms in J, lead to the existence of u ∈ H¹(Ω) such that J[u] = θ.
Now, we prove that |u| is also a minimizer of J on H 1 (Ω). In fact, we prove more
generally that if w ∈ H 1 (Ω) then J[|w|] ≤ J[w]. (If u = v0 on ∂Ω, then |u| = |v0 | = v0
on ∂Ω, since v0 ≥ 0 on ∂Ω.) First, |w|2 = w2 and |∇|w| | = |∇w| a.e. in Ω. Since α is
nonnegative in Ω, we also have −α|w| ≤ −αw a.e. in Ω. Consider h(s) = g(|s|) − g(s)
with again g(s) = s − ln(1 + s). If s ≤ −1 then h(s) = −∞. For s ∈ (−1, 0), we have
h(s) = −2s + ln(1 + s) − ln(1 − s) and h′ (s) = 2s2 /(1 − s2 ) > 0. Thus h(s) < h(0) = 0. If
s ≥ 0 then h(s) = 0. Hence h(s) ≤ 0 for all s ∈ R. Consequently, by the definition of J and
the fact that all α ≥ 0 and αi ≥ 0 (i = 1, . . . , N ) a.e. in Ω, we have J[|w|] ≤ J[w].
Denote now v = |u|. Then v is also a minimizer of J on H¹(Ω) and v ≥ 0 in Ω. Let η ∈ H¹(Ω) ∩ L∞(Ω) and fix i (1 ≤ i ≤ N ). It follows from the Mean-Value Theorem and the Lebesgue Dominated Convergence Theorem that
(d/dt)|_{t=0} ∫_Ω (αi βi/ki) ln(1 + (ki/βi)(v + tη)) dx = ∫_Ω αi βi η/(βi + ki v) dx.
Since v minimizes J over H¹(Ω), we have (d/dt)|_{t=0} J[v + tη] = 0 for all η ∈ H¹(Ω) ∩ L∞(Ω). Standard calculations then imply that
∫_Ω [ D∇v · ∇η + η( βv + ∑_{i=1}^{N} (αi − αi βi/(βi + ki v)) − α ) ] dx = 0 for all η ∈ H¹(Ω) ∩ L∞(Ω).
Notice that 0 ≤ αi βi/(βi + ki v) ≤ αi a.e. in Ω for all i = 1, . . . , N. Therefore, since H¹(Ω) ∩ L∞(Ω) is dense in H¹(Ω), we can replace η ∈ H¹(Ω) ∩ L∞(Ω) by η ∈ H¹(Ω). Consequently, the minimizer v ∈ H¹(Ω) is a weak solution to (4.1) and (4.2).
If v̂ ∈ H¹(Ω) is also a nonnegative weak solution to (4.1) and (4.2), then w = v − v̂ is a weak solution to D∆w − bw = 0 in Ω and ∂w/∂n = 0 on ∂Ω, where b = β + ∑_{i=1}^{N} βi ki αi/[(βi + ki v)(βi + ki v̂)] is in H¹(Ω) and b ≥ β > 0 in Ω. Therefore w = 0 a.e. in Ω. Hence the solution is unique.
We remark that the regularity of the solution v to the boundary-value problem (4.1)
and (4.2) depends on the smoothness of the domain Ω and that of the variable coefficients
αi (i = 1, . . . , N ) and the source function α. Since the solution v is nonnegative and the
nonlinear term of v is bounded, the regularity of v is in fact similar to that of the solution
to a linear elliptic problem. For instance, if ∂Ω is of the class C^k and all α, αi ∈ W^{k,p}(Ω) (i = 1, . . . , N ) for some nonnegative integer k and p ∈ [2, ∞), then v ∈ W^{k+2,p}(Ω). If ∂Ω is of the class C^{2,γ} and all α, αi ∈ C^{0,γ}(Ω) (i = 1, . . . , N ) for some γ ∈ (0, 1), then v ∈ C^{2,γ}(Ω).
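In one space dimension, the boundary-value problem (4.1)-(4.2) can be checked with a lagged-coefficient iteration: freeze v in the denominators, solve the resulting linear Neumann problem, and repeat. The scheme, grid, and data below are our own choices (we reuse the transcription profiles (1.5) and (1.6) with N = 1; nothing here is prescribed by the paper):

```python
import numpy as np

def solve_nonlinear_diffusion(D=1.0, beta=0.01, beta1=0.01, k1=1.0,
                              n=201, max_iter=800, tol=1e-12):
    """Lagged-coefficient iteration for (4.1)-(4.2) on Omega = (0, 1), N = 1."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    alpha1 = 0.1 + 0.1 * np.tanh(5 * x - 2.5)   # profile (1.5)
    alpha = 0.1 + 0.1 * np.tanh(2.5 - 5 * x)    # profile (1.6)
    v = np.zeros(n)
    lap = D / h**2
    for _ in range(max_iter):
        # coefficient b(x) = beta + k1*alpha1/(beta1 + k1*v) with v lagged
        b = beta + k1 * alpha1 / (beta1 + k1 * v)
        A = np.zeros((n, n))
        for j in range(1, n - 1):
            A[j, j - 1] = A[j, j + 1] = -lap
            A[j, j] = 2 * lap + b[j]
        A[0, 0] = 2 * lap + b[0]; A[0, 1] = -2 * lap     # ghost-point Neumann rows
        A[-1, -1] = 2 * lap + b[-1]; A[-1, -2] = -2 * lap
        v_new = np.linalg.solve(A, alpha)
        if np.max(np.abs(v_new - v)) < tol:
            return x, v_new
        v = v_new
    return x, v
```

Because the discrete operator is an M-matrix and the right-hand side α is nonnegative, every iterate is nonnegative, mirroring the sign of the weak solution in Theorem 4.1.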
5 Reaction-Diffusion System: Existence of Steady-State Solution
Theorem 5.1. Let Ω be a bounded domain in R3 . Assume all Di , D, βi , β, and ki (i =
1, . . . , N ) are positive constants. Assume all αi and α (i = 1, . . . , N ) are nonnegative
functions on Ω.
(1) Assume the boundary ∂Ω of Ω is in the class C^{2,µ} for some µ ∈ (0, 1). Assume also αi, α ∈ C^{0,µ}(Ω) (i = 1, . . . , N ). There exist u1, . . . , uN, v ∈ C^{2,µ}(Ω) with ui ≥ 0 (i = 1, . . . , N ) and v ≥ 0 in Ω such that (u1, . . . , uN, v) is a solution to the boundary-value problem
Di ∆ui − βi ui − ki ui v + αi = 0 in Ω, i = 1, . . . , N, (5.1)
D∆v − βv − ∑_{i=1}^{N} ki ui v + α = 0 in Ω, (5.2)
∂ui/∂n = ∂v/∂n = 0 on ∂Ω, i = 1, . . . , N. (5.3)
(2) Assume the boundary ∂Ω of Ω is in the class C² and all αi, α ∈ L²(Ω) (i = 1, . . . , N ). There exist u1, . . . , uN, v ∈ H²(Ω) with ui ≥ 0 (i = 1, . . . , N ) and v ≥ 0 in Ω such that (u1, . . . , uN, v) is a solution to the system of boundary-value problems (5.1)–(5.3).
We remark that we do not know if the solution is unique. Our numerical calculations
indicate that the solution may not be unique. If αi (i = 1, . . . , N ) satisfy some additional assumptions, then we may have uniqueness of the solution; see [18] (Theorem 6.2, Chapter 8).
Proof. (1) We divide our proof into five steps.
Step 1. Construction of upper solutions and lower solutions. We define ūi^(0), v̄^(0), ui^(0), v^(0) (i = 1, . . . , N ) to be constant functions such that:
ūi^(0) ≥ ‖αi‖_{L∞(Ω)}/βi, v̄^(0) ≥ ‖α‖_{L∞(Ω)}/β, ui^(0) = 0, v^(0) = 0 in Ω, i = 1, . . . , N. (5.4)
It is clear that
Di ∆ūi^(0) − βi ūi^(0) − ki ūi^(0) v^(0) + αi ≤ 0 in Ω, i = 1, . . . , N, (5.5)
D∆v^(0) − βv^(0) − ∑_{i=1}^{N} ki ūi^(0) v^(0) + α ≥ 0 in Ω, (5.6)
∂ūi^(0)/∂n = ∂v^(0)/∂n = 0 on ∂Ω, i = 1, . . . , N, (5.7)
Di ∆ui^(0) − βi ui^(0) − ki ui^(0) v̄^(0) + αi ≥ 0 in Ω, i = 1, . . . , N, (5.8)
D∆v̄^(0) − βv̄^(0) − ∑_{i=1}^{N} ki ui^(0) v̄^(0) + α ≤ 0 in Ω, (5.9)
∂ui^(0)/∂n = ∂v̄^(0)/∂n = 0 on ∂Ω, i = 1, . . . , N. (5.10)
Step 2. Iteration. Let
c = max{ β + ∑_{i=1}^{N} ki ūi^(0), β1 + k1 v̄^(0), . . . , βN + kN v̄^(0) }. (5.11)
Define iteratively the functions ūi^(k), v^(k), ui^(k), v̄^(k) (i = 1, . . . , N ) for k = 1, 2, . . . by
−Di ∆ūi^(k) + cūi^(k) = cūi^(k−1) − βi ūi^(k−1) − ki ūi^(k−1) v^(k−1) + αi in Ω, i = 1, . . . , N, (5.12)
−D∆v^(k) + cv^(k) = cv^(k−1) − βv^(k−1) − ∑_{i=1}^{N} ki ūi^(k−1) v^(k−1) + α in Ω, (5.13)
∂ūi^(k)/∂n = ∂v^(k)/∂n = 0 on ∂Ω, i = 1, . . . , N, (5.14)
−Di ∆ui^(k) + cui^(k) = cui^(k−1) − βi ui^(k−1) − ki ui^(k−1) v̄^(k−1) + αi in Ω, i = 1, . . . , N, (5.15)
−D∆v̄^(k) + cv̄^(k) = cv̄^(k−1) − βv̄^(k−1) − ∑_{i=1}^{N} ki ui^(k−1) v̄^(k−1) + α in Ω, (5.16)
∂ui^(k)/∂n = ∂v̄^(k)/∂n = 0 on ∂Ω, i = 1, . . . , N. (5.17)
We recall that, for any constants D̂ > 0 and ĉ > 0, and any q ∈ C^{0,µ}(Ω), the standard theory of elliptic boundary-value problems guarantees the existence and uniqueness of a solution w ∈ C^{2,µ}(Ω) to the boundary-value problem [5]
−D̂∆w + ĉw = q in Ω,
∂w/∂n = 0 on ∂Ω.
Moreover, there exists a constant C > 0, independent of q, such that
‖w‖_{C^{2,µ}(Ω)} ≤ C‖q‖_{C^{0,µ}(Ω)}. (5.18)
It therefore follows from (5.4) and a simple induction argument that there are unique solutions to the above boundary-value problems (5.12)–(5.17), defining our functions ūi^(k), v^(k), ui^(k), v̄^(k), i = 1, . . . , N and k = 1, 2, . . . , all in C^{2,µ}(Ω).
Step 3. Comparison. We now prove for any k ≥ 1 that
0 ≤ ui^(0) ≤ ui^(k) ≤ ui^(k+1) ≤ ūi^(k+1) ≤ ūi^(k) ≤ ūi^(0) in Ω, i = 1, . . . , N, (5.19)
0 ≤ v^(0) ≤ v^(k) ≤ v^(k+1) ≤ v̄^(k+1) ≤ v̄^(k) ≤ v̄^(0) in Ω. (5.20)
It follows from (5.4), (5.5), (5.12) with k = 1, (5.11), (5.7), and (5.14) with k = 1 that
−Di ∆(ūi^(0) − ūi^(1)) + c(ūi^(0) − ūi^(1)) ≥ 0 in Ω, i = 1, . . . , N,
∂(ūi^(0) − ūi^(1))/∂n = 0 on ∂Ω, i = 1, . . . , N.
The Maximum Principle [5] implies then ūi^(0) ≥ ūi^(1) in Ω (i = 1, . . . , N ). Similarly, we have ui^(0) ≤ ui^(1), v̄^(0) ≥ v̄^(1), and v^(0) ≤ v^(1) in Ω for all i = 1, . . . , N.
By (5.4), ui^(0) ≥ 0 (i = 1, . . . , N ) and v^(0) ≥ 0. Next, by (5.12) with k = 1, (5.15) with k = 1, (5.11), and (5.4), we obtain
−Di ∆(ūi^(1) − ui^(1)) + c(ūi^(1) − ui^(1))
= c(ūi^(0) − ui^(0)) − βi(ūi^(0) − ui^(0)) − ki(ūi^(0) v^(0) − ui^(0) v̄^(0))
= (c − βi − ki v^(0))(ūi^(0) − ui^(0)) + ki ui^(0)(v̄^(0) − v^(0))
≥ 0 in Ω, i = 1, . . . , N.
We also have by (5.14) and (5.17) with k = 1 that ∂(ūi^(1) − ui^(1))/∂n = 0 on ∂Ω (i = 1, . . . , N ). The Maximum Principle then implies ūi^(1) ≥ ui^(1) in Ω for all i = 1, . . . , N. Similarly, we have v̄^(1) ≥ v^(1) in Ω. We thus have proved
0 ≤ ui^(0) ≤ ui^(1) ≤ ūi^(1) ≤ ūi^(0) in Ω, i = 1, . . . , N,
0 ≤ v^(0) ≤ v^(1) ≤ v̄^(1) ≤ v̄^(0) in Ω.
Assume now k ≥ 2 and
0 ≤ ui^(0) ≤ · · · ≤ ui^(k−1) ≤ ūi^(k−1) ≤ · · · ≤ ūi^(0) in Ω, i = 1, . . . , N, (5.21)
0 ≤ v^(0) ≤ · · · ≤ v^(k−1) ≤ v̄^(k−1) ≤ · · · ≤ v̄^(0) in Ω. (5.22)
We prove
ui^(k−1) ≤ ui^(k) ≤ ūi^(k) ≤ ūi^(k−1) in Ω, i = 1, . . . , N, (5.23)
v^(k−1) ≤ v^(k) ≤ v̄^(k) ≤ v̄^(k−1) in Ω. (5.24)
By (5.12) with k − 1 replacing k, (5.12), (5.21), (5.22), and (5.11), we obtain
−Di ∆(ūi^(k−1) − ūi^(k)) + c(ūi^(k−1) − ūi^(k))
= (c − βi − ki v^(k−2))(ūi^(k−2) − ūi^(k−1)) + ki ūi^(k−1)(v^(k−1) − v^(k−2))
≥ (c − βi − ki v̄^(0))(ūi^(k−2) − ūi^(k−1))
≥ 0 in Ω, i = 1, . . . , N.
We also have by the boundary conditions (5.14) that ∂(ūi^(k−1) − ūi^(k))/∂n = 0 on ∂Ω (i = 1, . . . , N ). The Maximum Principle now implies ūi^(k−1) ≥ ūi^(k) in Ω for all i = 1, . . . , N. Similarly, we have ui^(k−1) ≤ ui^(k) in Ω (i = 1, . . . , N ). By (5.16), (5.21), (5.22), and (5.11), we have
−D∆(v̄^(k−1) − v̄^(k)) + c(v̄^(k−1) − v̄^(k))
= (c − β − ∑_{i=1}^{N} ki ui^(k−2))(v̄^(k−2) − v̄^(k−1)) + ∑_{i=1}^{N} ki v̄^(k−1)(ui^(k−1) − ui^(k−2))
≥ (c − β − ∑_{i=1}^{N} ki ūi^(0))(v̄^(k−2) − v̄^(k−1))
≥ 0 in Ω.
Since ∂v̄^(k−1)/∂n = ∂v̄^(k)/∂n = 0 on ∂Ω, we thus have v̄^(k−1) ≥ v̄^(k) in Ω. Similarly, v^(k−1) ≤ v^(k) in Ω.
By (5.12), (5.15), (5.21), (5.22), and (5.11), we have
−Di ∆(ūi^(k) − ui^(k)) + c(ūi^(k) − ui^(k))
= (c − βi − ki v^(k−1))(ūi^(k−1) − ui^(k−1)) + ki ui^(k−1)(v̄^(k−1) − v^(k−1))
≥ (c − βi − ki v̄^(0))(ūi^(k−1) − ui^(k−1))
≥ 0 in Ω, i = 1, . . . , N.
These and the corresponding boundary conditions for ūi^(k) and ui^(k) lead to ui^(k) ≤ ūi^(k) in Ω for all i = 1, . . . , N. By (5.13), (5.16), (5.21), (5.22), and (5.11), we have
−D∆(v̄^(k) − v^(k)) + c(v̄^(k) − v^(k))
= (c − β − ∑_{i=1}^{N} ki ui^(k−1))(v̄^(k−1) − v^(k−1)) + ∑_{i=1}^{N} ki v^(k−1)(ūi^(k−1) − ui^(k−1))
≥ (c − β − ∑_{i=1}^{N} ki ūi^(0))(v̄^(k−1) − v^(k−1))
≥ 0 in Ω.
This, together with the fact that ∂(v̄^(k) − v^(k))/∂n = 0 on ∂Ω, implies v^(k) ≤ v̄^(k) in Ω. We have proved (5.23) and (5.24). By induction, we have proved (5.19) and (5.20).
Step 4. Regularity and boundedness. From the iteration (5.12)–(5.17), we obtain by (5.19) and (5.20) uniformly bounded sequences of nonnegative C^{2,µ}(Ω)-functions {ūi^(k)}, {ui^(k)}, {v̄^(k)}, and {v^(k)}. By standard Hölder estimates for elliptic problems, we conclude that all the sequences {‖ūi^(k)‖_{C^{2,µ}(Ω)}} (1 ≤ i ≤ N ), {‖ui^(k)‖_{C^{2,µ}(Ω)}} (1 ≤ i ≤ N ), {‖v̄^(k)‖_{C^{2,µ}(Ω)}}, and {‖v^(k)‖_{C^{2,µ}(Ω)}} are bounded.
Step 5. Convergence to solution. From Step 4, the sequences {ūi^(k)} and {ui^(k)} (1 ≤ i ≤ N ), {v̄^(k)}, and {v^(k)} are bounded in C²(Ω) and pointwise monotonic on Ω. Therefore, they converge pointwise to some functions ūi and ui (1 ≤ i ≤ N ), v̄, and v on Ω, respectively. By the Arzelà–Ascoli Theorem, there exist C²(Ω)-convergent subsequences {ūi^(kj)}_{j=1}^∞ and {ui^(kj)}_{j=1}^∞ (1 ≤ i ≤ N ), {v̄^(kj)}_{j=1}^∞, and {v^(kj)}_{j=1}^∞ of {ūi^(k)}, {ui^(k)}, {v̄^(k)}, and {v^(k)}, respectively. Clearly, these subsequences converge in C²(Ω) to ūi (1 ≤ i ≤ N ), ui (1 ≤ i ≤ N ), v̄, and v, respectively. Note that each of the subsequences {ūi^(kj−1)} and {ui^(kj−1)} (1 ≤ i ≤ N ), {v̄^(kj−1)}, and {v^(kj−1)} also converges pointwise to its respective limit. Now replacing k in (5.12)–(5.17) by kj and sending j → ∞, we conclude that (ū1, . . . , ūN, v) and (u1, . . . , uN, v̄) are C²(Ω)-solutions of the system (5.1)–(5.3).
(2) This part can be proved similarly. The sequences of functions $\{\bar u_i^{(k)}\}$ and $\{\underline u_i^{(k)}\}$ ($1\le i\le N$), $\{\bar v^{(k)}\}$, and $\{\underline v^{(k)}\}$ can be defined as weak solutions of the corresponding boundary-value problems. Moreover, by using the estimate $\|w\|_{H^2(\Omega)}\le C\|q\|_{L^2(\Omega)}$, which replaces (5.18), we have that all the sequences are bounded in $H^2(\Omega)$. The monotonicity and pointwise boundedness imply that these sequences converge, respectively, to some functions $\bar u_i$ and $\underline u_i$ ($1\le i\le N$), $\bar v$, and $\underline v$ on $\bar\Omega$, all being nonnegative. Instead of using the Arzelà–Ascoli Theorem, we use the fact that $H^2(\Omega)$ is a Hilbert space together with the Sobolev compact embedding theorem to extract subsequences $\{\bar u_i^{(k_j)}\}_{j=1}^\infty$ and $\{\underline u_i^{(k_j)}\}_{j=1}^\infty$ ($1\le i\le N$), $\{\bar v^{(k_j)}\}_{j=1}^\infty$, and $\{\underline v^{(k_j)}\}_{j=1}^\infty$ that converge weakly in $H^2(\Omega)$ and strongly in $H^1(\Omega)$ to the pointwise limiting functions $\bar u_i$ and $\underline u_i$ ($1\le i\le N$), $\bar v$, and $\underline v$, respectively. Finally, we use the weak forms of the equations to pass to the limit. For instance, we have for any $\phi\in H^1(\Omega)$ and all $j=1,2,\dots$ that
$$\int_\Omega\bigl[D_i\nabla\bar u_i^{(k_j)}\cdot\nabla\phi+c\bar u_i^{(k_j)}\phi\bigr]\,dx
=\int_\Omega\bigl[c\bar u_i^{(k_j-1)}-\beta_i\bar u_i^{(k_j-1)}-k_i\bar u_i^{(k_j-1)}\underline v^{(k_j-1)}+\alpha_i\bigr]\phi\,dx,\quad i=1,\dots,N.$$
Sending $j\to\infty$, we have by the $H^1(\Omega)$-convergence of $\{\bar u_i^{(k_j)}\}_{j=1}^\infty$ and the pointwise convergence of $\{\bar u_i^{(k)}\}_{k=1}^\infty$ and $\{\underline v^{(k)}\}_{k=1}^\infty$ that
$$\int_\Omega[D_i\nabla\bar u_i\cdot\nabla\phi+c\bar u_i\phi]\,dx=\int_\Omega[c\bar u_i-\beta_i\bar u_i-k_i\bar u_i\underline v+\alpha_i]\phi\,dx,\quad i=1,\dots,N.$$
Therefore, $(\bar u_1,\dots,\bar u_N,\underline v)$ and $(\underline u_1,\dots,\underline u_N,\bar v)$ are the weak (and nonnegative) solutions in $H^2(\Omega)$ of the system (5.1)–(5.3).
6 Reaction-Diffusion System: Existence and Uniqueness of Global Solution to Time-Dependent Problem
Our goal in this section is to prove the existence and uniqueness of global solution for the
reaction-diffusion system (1.1)–(1.4). We shall first prove the existence and uniqueness of
local solution to this system. Our proof of the existence of a local solution is similar to that
of Theorem 5.1 on the existence of steady-state solution but involves the representation
formula and regularity of solutions to linear parabolic equations. The uniqueness of a local
solution is obtained by the Maximum Principle for systems of linear parabolic equations.
For any set Ω ⊆ R3 and any T > 0, we denote ΩT = Ω × (0, T ]. If Ω ⊆ R3 is open,
we also denote by C12 (ΩT ) the class of functions u : ΩT → R of (x, t) that are continuously
differentiable in t and twice continuously differentiable in x = (x1 , x2 , x3 ) on Ω.
Theorem 6.1 (Existence and uniqueness of local solution). Let Ω be a bounded domain in
R3 with a C 2 -boundary ∂Ω. Let Di , βi , and ki (i = 1, . . . , N ), D and β, and T be all positive
numbers. Let αi ∈ C 1 (ΩT ) (i = 1, . . . , N ) and α ∈ C 1 (ΩT ) be all nonnegative functions on
ΩT . Assume ui 0 ∈ C 1 (Ω) (i = 1, . . . , N ) and v0 ∈ C 1 (Ω) are all nonnegative functions on
Ω. Then there exist unique ui ∈ C(ΩT ) ∩ C12 (ΩT ) (i = 1, . . . , N ) and v ∈ C(ΩT ) ∩ C12 (ΩT )
such that all $u_i\ge 0$ ($i=1,\dots,N$) and $v\ge 0$ in $\bar\Omega_T$ and
$$\frac{\partial u_i}{\partial t}=D_i\Delta u_i-\beta_i u_i-k_i u_i v+\alpha_i \quad\text{in }\Omega_T,\ i=1,\dots,N, \tag{6.1}$$
$$\frac{\partial v}{\partial t}=D\Delta v-\beta v-\sum_{i=1}^{N}k_i u_i v+\alpha \quad\text{in }\Omega_T, \tag{6.2}$$
$$\frac{\partial u_i}{\partial n}=\frac{\partial v}{\partial n}=0 \quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N, \tag{6.3}$$
$$u_i(\cdot,0)=u_{i0}\ \text{and}\ v(\cdot,0)=v_0 \quad\text{in }\Omega,\ i=1,\dots,N. \tag{6.4}$$
Proof. We first prove the existence of solution in four steps.
Step 1. Construction of upper solutions and lower solutions. We choose the constant functions $\bar u_i^{(0)}$, $\bar v^{(0)}$, $\underline u_i^{(0)}$, and $\underline v^{(0)}$ ($i=1,\dots,N$) on $\Omega_T$ by
$$\bar u_i^{(0)}\ge\frac{\|\alpha_i\|_{L^\infty(\Omega_T)}}{\beta_i}+\|u_{i0}\|_{L^\infty(\Omega)},\qquad
\bar v^{(0)}\ge\frac{\|\alpha\|_{L^\infty(\Omega_T)}}{\beta}+\|v_0\|_{L^\infty(\Omega)},\qquad
\underline u_i^{(0)}=0,\qquad \underline v^{(0)}=0.$$
It is clear that
$$\frac{\partial\bar u_i^{(0)}}{\partial t}-D_i\Delta\bar u_i^{(0)}+\beta_i\bar u_i^{(0)}+k_i\bar u_i^{(0)}\underline v^{(0)}-\alpha_i\ge 0\quad\text{in }\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial\underline v^{(0)}}{\partial t}-D\Delta\underline v^{(0)}+\beta\underline v^{(0)}+\sum_{i=1}^{N}k_i\bar u_i^{(0)}\underline v^{(0)}-\alpha\le 0\quad\text{in }\Omega_T,$$
$$\frac{\partial\bar u_i^{(0)}}{\partial n}=0\ \text{and}\ \frac{\partial\underline v^{(0)}}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$\bar u_i^{(0)}(\cdot,0)\ge u_{i0}\ \text{and}\ \underline v^{(0)}(\cdot,0)\le v_0\quad\text{in }\Omega,\ i=1,\dots,N,$$
$$\frac{\partial\underline u_i^{(0)}}{\partial t}-D_i\Delta\underline u_i^{(0)}+\beta_i\underline u_i^{(0)}+k_i\underline u_i^{(0)}\bar v^{(0)}-\alpha_i\le 0\quad\text{in }\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial\bar v^{(0)}}{\partial t}-D\Delta\bar v^{(0)}+\beta\bar v^{(0)}+\sum_{i=1}^{N}k_i\underline u_i^{(0)}\bar v^{(0)}-\alpha\ge 0\quad\text{in }\Omega_T,$$
$$\frac{\partial\underline u_i^{(0)}}{\partial n}=0\ \text{and}\ \frac{\partial\bar v^{(0)}}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$\underline u_i^{(0)}(\cdot,0)\le u_{i0}\ \text{and}\ \bar v^{(0)}(\cdot,0)\ge v_0\quad\text{in }\Omega,\ i=1,\dots,N.$$
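For instance, since $\bar u_i^{(0)}$ is a constant and $\underline v^{(0)}=0$, the first of the above differential inequalities reduces to a pointwise bound that follows directly from the choice of $\bar u_i^{(0)}$:

```latex
\frac{\partial \bar u_i^{(0)}}{\partial t}
 - D_i\Delta \bar u_i^{(0)}
 + \beta_i \bar u_i^{(0)} + k_i \bar u_i^{(0)} \underline v^{(0)} - \alpha_i
 = \beta_i \bar u_i^{(0)} - \alpha_i
 \ge \|\alpha_i\|_{L^\infty(\Omega_T)} + \beta_i \|u_{i0}\|_{L^\infty(\Omega)} - \alpha_i
 \ge 0 \quad \text{in } \Omega_T .
```

The remaining inequalities are verified in the same elementary way.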
Step 2. Iteration. Let
$$c=\max\Bigl\{\beta+\sum_{i=1}^{N}k_i\bar u_i^{(0)},\ \beta_1+k_1\bar v^{(0)},\ \dots,\ \beta_N+k_N\bar v^{(0)}\Bigr\}. \tag{6.5}$$
Define iteratively the functions $\bar u_i^{(k)}$, $\underline v^{(k)}$, $\underline u_i^{(k)}$, and $\bar v^{(k)}$ ($i=1,\dots,N$) on $\Omega_T$ for $k=1,2,\dots$ by
$$\frac{\partial\bar u_i^{(k)}}{\partial t}-D_i\Delta\bar u_i^{(k)}+c\bar u_i^{(k)}
=c\bar u_i^{(k-1)}-\beta_i\bar u_i^{(k-1)}-k_i\bar u_i^{(k-1)}\underline v^{(k-1)}+\alpha_i
\quad\text{in }\Omega_T,\ i=1,\dots,N, \tag{6.6}$$
$$\frac{\partial\underline v^{(k)}}{\partial t}-D\Delta\underline v^{(k)}+c\underline v^{(k)}
=c\underline v^{(k-1)}-\beta\underline v^{(k-1)}-\sum_{i=1}^{N}k_i\bar u_i^{(k-1)}\underline v^{(k-1)}+\alpha
\quad\text{in }\Omega_T, \tag{6.7}$$
$$\frac{\partial\bar u_i^{(k)}}{\partial n}=\frac{\partial\underline v^{(k)}}{\partial n}=0
\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N, \tag{6.8}$$
$$\bar u_i^{(k)}(\cdot,0)=u_{i0}\ \text{and}\ \underline v^{(k)}(\cdot,0)=v_0\quad\text{in }\Omega,\ i=1,\dots,N, \tag{6.9}$$
$$\frac{\partial\underline u_i^{(k)}}{\partial t}-D_i\Delta\underline u_i^{(k)}+c\underline u_i^{(k)}
=c\underline u_i^{(k-1)}-\beta_i\underline u_i^{(k-1)}-k_i\underline u_i^{(k-1)}\bar v^{(k-1)}+\alpha_i
\quad\text{in }\Omega_T,\ i=1,\dots,N, \tag{6.10}$$
$$\frac{\partial\bar v^{(k)}}{\partial t}-D\Delta\bar v^{(k)}+c\bar v^{(k)}
=c\bar v^{(k-1)}-\beta\bar v^{(k-1)}-\sum_{i=1}^{N}k_i\underline u_i^{(k-1)}\bar v^{(k-1)}+\alpha
\quad\text{in }\Omega_T, \tag{6.11}$$
$$\frac{\partial\underline u_i^{(k)}}{\partial n}=\frac{\partial\bar v^{(k)}}{\partial n}=0
\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N, \tag{6.12}$$
$$\underline u_i^{(k)}(\cdot,0)=u_{i0}\ \text{and}\ \bar v^{(k)}(\cdot,0)=v_0\quad\text{in }\Omega. \tag{6.13}$$
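To make the scheme concrete, here is a minimal numerical sketch of the iteration (6.6)–(6.13) in an assumed special case: $N=1$ with spatially homogeneous data, so the diffusion terms drop out and each step is a linear ODE in $t$, discretized here by backward Euler. All parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monotone iteration (6.6)-(6.13), space-homogeneous case with N = 1, so each
# step solves w' + c*w = f(t) by backward Euler.  Parameters are illustrative.
b1, b, k1 = 0.5, 0.4, 1.0          # beta_1, beta, k_1
a1, a = 1.0, 0.8                   # alpha_1, alpha (constant sources)
u0, v0 = 0.2, 0.1                  # nonnegative initial data
T, M = 10.0, 2000
dt = T / M

ubar0 = a1 / b1 + u0               # constant upper solution for u, cf. Step 1
vbar0 = a / b + v0                 # constant upper solution for v
c = max(b + k1 * ubar0, b1 + k1 * vbar0)   # the constant c of (6.5)

def step(w0, f):
    """Backward Euler for w' + c*w = f(t), w(0) = w0, on the time grid."""
    w = np.empty(M + 1)
    w[0] = w0
    for n in range(M):
        w[n + 1] = (w[n] + dt * f[n + 1]) / (1.0 + c * dt)
    return w

ub = np.full(M + 1, ubar0)         # upper iterate for u, k = 0
vb = np.full(M + 1, vbar0)         # upper iterate for v, k = 0
ul = np.zeros(M + 1)               # lower iterate for u, k = 0
vl = np.zeros(M + 1)               # lower iterate for v, k = 0

for _ in range(30):
    ub_new = step(u0, c * ub - b1 * ub - k1 * ub * vl + a1)   # (6.6)
    vl_new = step(v0, c * vl - b * vl - k1 * ub * vl + a)     # (6.7)
    ul_new = step(u0, c * ul - b1 * ul - k1 * ul * vb + a1)   # (6.10)
    vb_new = step(v0, c * vb - b * vb - k1 * ul * vb + a)     # (6.11)
    # discrete analogue of the orderings (6.20)-(6.21)
    assert np.all(ul - 1e-9 <= ul_new) and np.all(ub_new <= ub + 1e-9)
    assert np.all(ul_new <= ub_new + 1e-9) and np.all(vl_new <= vb_new + 1e-9)
    ub, vl, ul, vb = ub_new, vl_new, ul_new, vb_new

print(abs(ub[-1] - ul[-1]), abs(vb[-1] - vl[-1]))  # upper/lower gaps
```

The assertions check, at every sweep, that the lower iterates increase, the upper iterates decrease, and the lower iterates stay below the upper ones, mirroring Step 3 of the proof.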
The theory for initial-boundary-value problems of linear parabolic equations (cf. Theorem 2 in Chapter 5 of [3], or Theorem 1.2 in Chapter 2 of [18]) guarantees the existence of solutions $\bar u_i^{(1)}$, $\underline v^{(1)}$, $\underline u_i^{(1)}$, and $\bar v^{(1)}$ ($i=1,\dots,N$) for $k=1$ that are all in $C(\bar\Omega_T)\cap C_1^2(\Omega_T)$ and are all Hölder continuous in $x$ uniformly in $\Omega_T$. Suppose $\bar u_i^{(k)}$, $\underline v^{(k)}$, $\underline u_i^{(k)}$, and $\bar v^{(k)}$ ($i=1,\dots,N$) for some $k\ge 1$ exist, and are all in $C(\bar\Omega_T)\cap C_1^2(\Omega_T)$ and Hölder continuous in $x$ uniformly in $\Omega_T$. Then the same theory implies that the solutions $\bar u_i^{(k+1)}$, $\underline v^{(k+1)}$, $\underline u_i^{(k+1)}$, and $\bar v^{(k+1)}$ ($i=1,\dots,N$) all exist, are in $C(\bar\Omega_T)\cap C_1^2(\Omega_T)$, and are Hölder continuous in $x$ uniformly in $\Omega_T$. By induction, we have for all $k=1,2,\dots$ the existence of the solutions $\bar u_i^{(k)}$, $\underline v^{(k)}$, $\underline u_i^{(k)}$, and $\bar v^{(k)}$ ($i=1,\dots,N$) in $C(\bar\Omega_T)\cap C_1^2(\Omega_T)$ that are Hölder continuous in $x$ uniformly in $\Omega_T$.
In fact, there is a representation formula for our solutions. Suppose $q\in C(\bar\Omega_T)$ is Hölder continuous in $x$ uniformly on $\Omega_T$ and suppose $g\in C^1(\bar\Omega)$. Let $u\in C(\bar\Omega_T)\cap C_1^2(\Omega_T)$ satisfy
$$\frac{\partial u}{\partial t}-D\Delta u+cu=q\quad\text{in }\Omega_T, \tag{6.14}$$
$$u(\cdot,0)=g(\cdot)\quad\text{in }\Omega, \tag{6.15}$$
$$\frac{\partial u}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T]. \tag{6.16}$$
Then we can extend $g$ to a $C^1$-function on a neighborhood of $\bar\Omega$, as the boundary $\partial\Omega$ is of class $C^2$. Hence we have the following representation of the solution to the initial-boundary-value problem (6.14)–(6.16) (cf. Section 3 of Chapter 5 of [3] and Theorem 8.3.2 of [18]):
$$u(x,t)=\int_\Omega\Gamma(x,t;\xi,0)g(\xi)\,d\xi+\int_0^t\!\!\int_\Omega\Gamma(x,t;\xi,\tau)q(\xi,\tau)\,d\xi\,d\tau
+\int_0^t\!\!\int_{\partial\Omega}\Gamma(x,t;\xi,\tau)\Psi(\xi,\tau)\,dS_\xi\,d\tau\quad\forall(x,t)\in\Omega\times(0,T], \tag{6.17}$$
where
$$\Gamma(x,t;\xi,\tau)=[4\pi D(t-\tau)]^{-3/2}\,e^{-c(t-\tau)-|x-\xi|^2/[4D(t-\tau)]}, \tag{6.18}$$
$$\Psi(x,t)=2F(x,t)+2\sum_{j=1}^{\infty}\int_0^t\!\!\int_{\partial\Omega}M_j(x,t;\xi,\tau)F(\xi,\tau)\,dS_\xi\,d\tau, \tag{6.19}$$
$$F(x,t)=\int_\Omega\frac{\partial\Gamma(x,t;\xi,0)}{\partial n(x,t)}g(\xi)\,d\xi+\int_0^t\!\!\int_\Omega\frac{\partial\Gamma(x,t;\xi,\tau)}{\partial n(x,t)}q(\xi,\tau)\,d\xi\,d\tau,$$
$$M_1(x,t;\xi,\tau)=2\,\frac{\partial\Gamma(x,t;\xi,\tau)}{\partial n(x,t)},\qquad
M_{j+1}(x,t;\xi,\tau)=\int_0^t\!\!\int_{\partial\Omega}M_1(x,t;y,\sigma)M_j(y,\sigma;\xi,\tau)\,dS_y\,d\sigma.$$
Here the infinite series converges and the function $F(x,t)$ is bounded.
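As a sanity check on (6.18) (a sketch with assumed constants $D$, $c$, and evaluation points, independent of the proof), one can verify by central finite differences that $\Gamma$ satisfies the constant-coefficient equation $\partial_t\Gamma = D\Delta\Gamma - c\Gamma$ away from its singularity at $(\xi,\tau)$:

```python
import math

# Check numerically that G(x,t) = [4*pi*D*t]^(-3/2) * exp(-c*t - |x-xi|^2/(4*D*t))
# solves dG/dt = D*Laplacian(G) - c*G for t > 0; D, c, xi, x0, t0 are assumed values.
D, c = 0.7, 1.3
xi = (0.1, -0.2, 0.3)               # source point (tau = 0)

def G(x, t):
    r2 = sum((a - s) ** 2 for a, s in zip(x, xi))
    return (4 * math.pi * D * t) ** -1.5 * math.exp(-c * t - r2 / (4 * D * t))

x0, t0, h = (0.5, 0.4, -0.1), 0.8, 1e-3
dGdt = (G(x0, t0 + h) - G(x0, t0 - h)) / (2 * h)          # central difference in t
lap = 0.0
for d in range(3):                                         # central Laplacian in x
    xp = tuple(v + (h if j == d else 0.0) for j, v in enumerate(x0))
    xm = tuple(v - (h if j == d else 0.0) for j, v in enumerate(x0))
    lap += (G(xp, t0) - 2 * G(x0, t0) + G(xm, t0)) / h ** 2
residual = dGdt - D * lap + c * G(x0, t0)
print(residual)                     # O(h^2) discretization error, essentially zero
assert abs(residual) < 1e-4
```

The residual is nonzero only because of the $O(h^2)$ truncation error of the finite differences.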
Step 3. Comparison. Notice by (6.5) that
$$c-\beta-\sum_{i=1}^{N}k_i\bar u_i^{(0)}\ge 0\quad\text{and}\quad c-\beta_i-k_i\bar v^{(0)}\ge 0\quad\text{in }\Omega_T.$$
By the Maximum Principle for parabolic equations [2, 3, 9, 18] and using arguments similar to those for the steady-state solutions (cf. Step 3 in the proof of Theorem 5.1 in Section 5), we then have from the iteration (6.6)–(6.13) in Step 2 that
$$0=\underline u_i^{(0)}\le\underline u_i^{(k)}\le\underline u_i^{(k+1)}\le\bar u_i^{(k+1)}\le\bar u_i^{(k)}\le\bar u_i^{(0)}\quad\text{in }\Omega_T, \tag{6.20}$$
$$0=\underline v^{(0)}\le\underline v^{(k)}\le\underline v^{(k+1)}\le\bar v^{(k+1)}\le\bar v^{(k)}\le\bar v^{(0)}\quad\text{in }\Omega_T. \tag{6.21}$$
Step 4. Convergence to solution. By the monotonicity (6.20) and (6.21), we have the pointwise limits
$$\bar u_i=\lim_{k\to\infty}\bar u_i^{(k)},\quad \bar v=\lim_{k\to\infty}\bar v^{(k)},\quad
\underline u_i=\lim_{k\to\infty}\underline u_i^{(k)},\quad \underline v=\lim_{k\to\infty}\underline v^{(k)}
\quad\text{in }\bar\Omega_T,\ i=1,\dots,N.$$
These limits are nonnegative bounded measurable functions. In particular,
$$\bar u_i(x,0)=\underline u_i(x,0)=u_{i0}(x)\ (i=1,\dots,N)\quad\text{and}\quad
\bar v(x,0)=\underline v(x,0)=v_0(x)\quad\forall x\in\bar\Omega.$$
Let us now fix $i$ ($1\le i\le N$) and set for each integer $k\ge 1$
$$q_i^{(k)}=c\bar u_i^{(k-1)}-\beta_i\bar u_i^{(k-1)}-k_i\bar u_i^{(k-1)}\underline v^{(k-1)}+\alpha_i\quad\text{in }\Omega_T,\qquad
q_i=c\bar u_i-\beta_i\bar u_i-k_i\bar u_i\underline v+\alpha_i\quad\text{in }\Omega_T. \tag{6.22}$$
Note that each $q_i^{(k)}\in C(\bar\Omega_T)$ is Hölder continuous in $x$ uniformly in $\Omega_T$. Moreover,
$$\lim_{k\to\infty}q_i^{(k)}=q_i\quad\text{in }\bar\Omega_T. \tag{6.23}$$
It follows from (6.6), (6.8), and (6.9) that
$$\frac{\partial\bar u_i^{(k)}}{\partial t}-D_i\Delta\bar u_i^{(k)}+c\bar u_i^{(k)}=q_i^{(k)}\quad\text{in }\Omega_T, \tag{6.24}$$
$$\bar u_i^{(k)}(\cdot,0)=u_{i0}(\cdot)\quad\text{in }\Omega, \tag{6.25}$$
$$\frac{\partial\bar u_i^{(k)}}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T]. \tag{6.26}$$
Therefore, by the representation (6.17) and (6.19) for the solution to (6.14)–(6.16), which are now replaced by (6.24)–(6.26), we have
$$\bar u_i^{(k)}(x,t)=\int_\Omega\Gamma(x,t;\xi,0)u_{i0}(\xi)\,d\xi
+\int_0^t\!\!\int_\Omega\Gamma(x,t;\xi,\tau)q_i^{(k)}(\xi,\tau)\,d\xi\,d\tau
+\int_0^t\!\!\int_{\partial\Omega}\Gamma(x,t;\xi,\tau)\Psi_i^{(k)}(\xi,\tau)\,dS_\xi\,d\tau
\quad\forall(x,t)\in\Omega\times(0,T],$$
$$\Psi_i^{(k)}(x,t)=2F_i^{(k)}(x,t)+2\sum_{j=1}^{\infty}\int_0^t\!\!\int_{\partial\Omega}M_j(x,t;\xi,\tau)F_i^{(k)}(\xi,\tau)\,dS_\xi\,d\tau,$$
$$F_i^{(k)}(x,t)=\int_\Omega\frac{\partial\Gamma(x,t;\xi,0)}{\partial n(x,t)}u_{i0}(\xi)\,d\xi
+\int_0^t\!\!\int_\Omega\frac{\partial\Gamma(x,t;\xi,\tau)}{\partial n(x,t)}q_i^{(k)}(\xi,\tau)\,d\xi\,d\tau,$$
where $\Gamma$ is given in (6.18) with $D$ replaced by $D_i$.
Since the sequence $\{q_i^{(k)}\}_{k=1}^\infty$ is uniformly bounded in $\Omega_T$ and converges (cf. (6.23)), the sequence $\{F_i^{(k)}\}_{k=1}^\infty$ is uniformly bounded on $\partial\Omega\times(0,T]$ and converges. Further, the series in the expression of $\Psi_i^{(k)}$ converges absolutely. Therefore, the sequence $\{\Psi_i^{(k)}\}_{k=1}^\infty$ is also uniformly bounded on $\partial\Omega\times(0,T]$ and converges. Let the limit be $\Psi_i=\Psi_i(x,t)$. Taking the limit as $k\to\infty$ and using the Lebesgue Dominated Convergence Theorem, we obtain by (6.23) that
$$\bar u_i(x,t)=\int_\Omega\Gamma(x,t;\xi,0)u_{i0}(\xi)\,d\xi
+\int_0^t\!\!\int_\Omega\Gamma(x,t;\xi,\tau)q_i(\xi,\tau)\,d\xi\,d\tau
+\int_0^t\!\!\int_{\partial\Omega}\Gamma(x,t;\xi,\tau)\Psi_i(\xi,\tau)\,dS_\xi\,d\tau
\quad\forall(x,t)\in\Omega\times(0,T], \tag{6.27}$$
$$\Psi_i(x,t)=2F_i(x,t)+2\sum_{j=1}^{\infty}\int_0^t\!\!\int_{\partial\Omega}M_j(x,t;\xi,\tau)F_i(\xi,\tau)\,dS_\xi\,d\tau, \tag{6.28}$$
$$F_i(x,t)=\int_\Omega\frac{\partial\Gamma(x,t;\xi,0)}{\partial n(x,t)}u_{i0}(\xi)\,d\xi
+\int_0^t\!\!\int_\Omega\frac{\partial\Gamma(x,t;\xi,\tau)}{\partial n(x,t)}q_i(\xi,\tau)\,d\xi\,d\tau,$$
where $\Gamma$ is given in (6.18) with $D$ replaced by $D_i$.
Since ui 0 ∈ C 1 (Ω), the first term in (6.27) is a function of (x, t) in C 2 (ΩT ). Since qi is
bounded, the second term in (6.27) is also a continuous function of (x, t) on ΩT . By (6.28),
the function Ψi (x, t) is continuous on ∂Ω × [0, T ]. Thus the third term in (6.27) and hence
ūi = ūi (x, t) is a continuous function in ΩT . Similarly, v = v(x, t) is a continuous function
in $\bar\Omega_T$. Therefore, $q_i=q_i(x,t)$ as defined in (6.22) is continuous in $\bar\Omega_T$. Repeating the same argument with (6.27) and (6.28), we find that $\bar u_i$ is in fact Hölder continuous in $x$ uniformly in $\Omega_T$. Similarly, $\underline v$ is Hölder continuous in $x$ uniformly in $\Omega_T$. Finally, $q_i$ is Hölder continuous in $x$ uniformly in $\Omega_T$. Therefore, we have the interior regularity of all $\bar u_i$ ($i=1,\dots,N$) and $\underline v$; cf. Theorem 2 in Section 5.3 of [3]. Now (6.22), (6.27), and (6.28) imply that $\bar u_i$ and $\underline v$ solve (6.1) with $u_i$ and $v$ replaced by $\bar u_i$ and $\underline v$, respectively. The existence of solutions to the other equations can be obtained similarly.
We now prove the uniqueness in three steps.
Step 1. We prove that solutions (ū1 , . . . , ūN , v) and (u1 , . . . , uN , v̄) obtained above satisfy
ui = ūi (i = 1, . . . , N ) and v = v̄ in ΩT .
In fact, setting $w_i=\bar u_i-\underline u_i$ ($i=1,\dots,N$) and $w=\bar v-\underline v$, we have
$$\frac{\partial w_i}{\partial t}=D_i\Delta w_i-\beta_i w_i-k_i\underline v\,w_i+k_i\underline u_i\,w\quad\text{in }\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial w}{\partial t}=D\Delta w-\beta w-\sum_{i=1}^{N}k_i\underline u_i\,w+\sum_{i=1}^{N}k_i w_i\,\underline v\quad\text{in }\Omega_T,$$
$$\frac{\partial w_i}{\partial n}=\frac{\partial w}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$w_i(\cdot,0)=0\ \text{and}\ w(\cdot,0)=0\quad\text{in }\Omega,\ i=1,\dots,N.$$
The uniqueness of solution to linear systems of parabolic equations then leads to $w_i=0$ ($i=1,\dots,N$) and $w=0$.
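The reaction terms in the equations for $w_i$ and $w$ come from the elementary regrouping of the bilinear terms:

```latex
\bar u_i \underline v - \underline u_i \bar v
 = \underline v\,(\bar u_i - \underline u_i) - \underline u_i(\bar v - \underline v)
 = \underline v\, w_i - \underline u_i\, w ,
```

so that $-k_i(\bar u_i\underline v-\underline u_i\bar v)=-k_i\underline v\,w_i+k_i\underline u_i\,w$; summing the same identity over $i$ (with the opposite sign) gives the stated equation for $w$.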
Step 2. We prove the following: if $(u_1^*,\dots,u_N^*,v^*)$ is any other solution to (6.1)–(6.4) such that $\underline u_i^{(0)}\le u_i^*\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v^*\le\bar v^{(0)}$ in $\Omega_T$, then $\underline u_i=u_i^*=\bar u_i$ ($i=1,\dots,N$) and $\underline v=v^*=\bar v$ in $\Omega_T$.

In fact, let us replace $(\bar u_1^{(0)},\dots,\bar u_N^{(0)},\underline v^{(0)})$ by $(u_1^*,\dots,u_N^*,v^*)$ and keep $(\underline u_1^{(0)},\dots,\underline u_N^{(0)},\bar v^{(0)})$ unchanged in the iteration in Step 2. Then $(\bar u_1^{(k)},\dots,\bar u_N^{(k)},\underline v^{(k)})=(u_1^*,\dots,u_N^*,v^*)$ ($k=0,1,\dots$), and the sequence $(\underline u_1^{(k)},\dots,\underline u_N^{(k)},\bar v^{(k)})$ ($k=0,1,\dots$) remains unchanged and converges to the solution $(\underline u_1,\dots,\underline u_N,\bar v)$. Therefore, by the iteration we get $\underline u_i\le u_i^*$ ($i=1,\dots,N$) and $v^*\le\bar v$ in $\Omega_T$. A similar argument leads to $u_i^*\le\bar u_i$ ($i=1,\dots,N$) and $\underline v\le v^*$ in $\Omega_T$. These, together with the result proved in Step 1, imply $\underline u_i=u_i^*=\bar u_i$ ($i=1,\dots,N$) and $\underline v=v^*=\bar v$ in $\Omega_T$.
Step 3. If we have two nonnegative solutions defined on $\bar\Omega_T$, then we can choose all $\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\bar v^{(0)}$ (in the iteration in Step 2 of the existence proof) large enough to bound these solutions from above in $\Omega_T$. Then both of these solutions must coincide with the one constructed by the iterative upper and lower solutions. The uniqueness is therefore proved.
Theorem 6.2 (Existence and uniqueness of global solution). Let Ω be a bounded domain
in R3 with a C 2 -boundary ∂Ω. Let Di (i = 1, . . . , N ), D, βi (i = 1, . . . , N ), β, and ki
(i = 1, . . . , N ) be all positive numbers. Let αi ∈ C 1 (Ω × [0, ∞)) (i = 1, . . . , N ) and α ∈
C 1 (Ω × [0, ∞)) be all nonnegative functions on Ω × [0, ∞). Assume ui 0 ∈ C 1 (Ω) (i =
1, . . . , N ) and v0 ∈ C 1 (Ω) are all nonnegative functions on Ω. Then there exists a unique
nonnegative solution (u1 , . . . , uN , v) to the system (1.1)–(1.4) with all ui (i = 1, . . . , N ) and
v being continuous on Ω × [0, ∞) and continuously differentiable in t ∈ (0, ∞) and twice
continuously differentiable in x ∈ Ω.
Proof. Let Tm = m (m = 1, 2, . . . ). Then for each Tm , the system has a unique solution
defined on ΩTm . By the uniqueness of local solution, the solution corresponding to Tm and
that to Tn are identical on [0, Tm ] if m ≤ n. Therefore, on each finite interval of time, all
the solutions are the same as long as they are defined on that interval. Hence we have the
existence of a global solution. It is unique since the local solution is unique.
7 Reaction-Diffusion System: Asymptotic Behavior
We now assume that all αi (i = 1, . . . , N ) and α are independent of time t and consider
the initial-boundary-value problem of the full, time-dependent system of reaction-diffusion
equations (1.1)–(1.4). Given the initial data ui0 (i = 1, . . . , N ) and v0 , the system has a
unique global solution ui = ui (x, t) (i = 1, . . . , N ) and v = v(x, t) by Theorem 6.2. We ask
if the limit of the solution as t → ∞ exists, and if so, if the limit is a steady-state solution.
We first state the following result and omit its proof as it is similar to that for the special
case for two equations; cf. Corollary 8.3.1 in [18]:
Proposition 7.1. Let Ω, T , and Di (i = 1, . . . , N ) be all the same as in Theorem 6.1.
Let ai,j ∈ C 1 (ΩT ) (i, j = 1, . . . , N ) be such that ai,j ≥ 0 in ΩT if i 6= j. Suppose wi ∈
C(ΩT ) ∩ C12 (ΩT ) (i = 1, . . . , N ) satisfy
$$\frac{\partial w_i}{\partial t}-D_i\Delta w_i\ge\sum_{j=1}^{N}a_{i,j}(x,t)w_j\quad\text{in }\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial w_i}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$w_i(\cdot,0)\ge 0\quad\text{in }\Omega,\ i=1,\dots,N.$$
Then $w_i\ge 0$ in $\Omega_T$ ($i=1,\dots,N$).
The following theorem indicates in particular that if the initial values are large constant functions, then the global solutions to the time-dependent problem are monotonic in time $t$ and their limits as $t\to\infty$ are steady-state solutions.
Theorem 7.1. Let Ω be a bounded domain in R3 with its boundary ∂Ω in the class C 2,µ
for some µ ∈ (0, 1). Let Di , βi , and ki (i = 1, . . . , N ), and D and β be all the same as
in Theorem 6.1. Let αi ∈ C 1 (Ω) (i = 1, . . . , N ) and α ∈ C 1 (Ω) be all nonnegative on Ω.
Let $(\bar U_1,\dots,\bar U_N,\underline V)$ be the nonnegative solution to the system (1.1)–(1.4) with the initial values $\bar U_i(\cdot,0)=\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline V(\cdot,0)=\underline v^{(0)}$ all being constant functions. Let $(\underline U_1,\dots,\underline U_N,\bar V)$ be the nonnegative solution to the system (1.1)–(1.4) with the initial values $\underline U_i(\cdot,0)=\underline u_i^{(0)}$ ($i=1,\dots,N$) and $\bar V(\cdot,0)=\bar v^{(0)}$ all being constant functions. Assume $\bar u_i^{(0)}\ge\|\alpha_i\|_{L^\infty(\Omega)}/\beta_i$, $\bar v^{(0)}\ge\|\alpha\|_{L^\infty(\Omega)}/\beta$, and $\underline u_i^{(0)}=\underline v^{(0)}=0$ ($i=1,\dots,N$). Then the following hold true:

(1) $\bar U_i\ge\underline U_i$ ($i=1,\dots,N$) and $\bar V\ge\underline V$ in $\bar\Omega\times[0,\infty)$.

(2) $\bar U_i$ ($i=1,\dots,N$) and $\bar V$ are monotonically nonincreasing in $t$, and $\underline U_i$ ($i=1,\dots,N$) and $\underline V$ are monotonically nondecreasing in $t$.

(3) Let
$$(\bar U_{1,s},\dots,\bar U_{N,s},\underline V_s)=\lim_{t\to\infty}(\bar U_1,\dots,\bar U_N,\underline V),\qquad
(\underline U_{1,s},\dots,\underline U_{N,s},\bar V_s)=\lim_{t\to\infty}(\underline U_1,\dots,\underline U_N,\bar V).$$
Then $\bar U_{i,s}\ge\underline U_{i,s}$ ($i=1,\dots,N$) and $\bar V_s\ge\underline V_s$ on $\bar\Omega$. Moreover, all $\bar U_{i,s}$, $\underline U_{i,s}$, $\bar V_s$, and $\underline V_s$ are in $C^{2,\mu}(\bar\Omega)$, and $(\bar U_{1,s},\dots,\bar U_{N,s},\underline V_s)$ and $(\underline U_{1,s},\dots,\underline U_{N,s},\bar V_s)$ are solutions of the time-independent system (5.1)–(5.3).

(4) If $(u_{1,s}^*,\dots,u_{N,s}^*,v_s^*)$ is any solution to (5.1)–(5.3) such that $\underline u_i^{(0)}\le u_{i,s}^*\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_s^*\le\bar v^{(0)}$ on $\bar\Omega$, then $\underline U_{i,s}\le u_{i,s}^*\le\bar U_{i,s}$ ($i=1,\dots,N$) and $\underline V_s\le v_s^*\le\bar V_s$ on $\bar\Omega$.
Proof. (1) Let $T>0$. Let $W_i=\bar U_i-\underline U_i$ ($i=1,\dots,N$) and $W=\bar V-\underline V$. We then have
$$\frac{\partial W_i}{\partial t}=D_i\Delta W_i-\beta_i W_i-k_i\underline V\,W_i+k_i\underline U_i\,W\quad\text{in }\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial W}{\partial t}=D\Delta W-\beta W-\sum_{i=1}^{N}k_i\underline U_i\,W+\sum_{i=1}^{N}k_i W_i\,\underline V\quad\text{in }\Omega_T,$$
$$\frac{\partial W_i}{\partial n}=\frac{\partial W}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$W_i(\cdot,0)=\bar u_i^{(0)}-\underline u_i^{(0)}\ge 0\ \text{and}\ W(\cdot,0)=\bar v^{(0)}-\underline v^{(0)}\ge 0\quad\text{in }\Omega,\ i=1,\dots,N.$$
Therefore, we get by Proposition 7.1 that Wi ≥ 0 (i = 1, . . . , N ) and W ≥ 0 in ΩT . Since
T > 0 is arbitrary, we obtain the desired inequality on Ω × [0, ∞).
(2) Let $T>0$ and $\delta>0$. Set $\bar W_i(x,t)=\bar U_i(x,t)-\bar U_i(x,t+\delta)$ ($i=1,\dots,N$) and $\underline W(x,t)=\underline V(x,t+\delta)-\underline V(x,t)$. Then
$$\frac{\partial\bar W_i(x,t)}{\partial t}=D_i\Delta\bar W_i(x,t)-\beta_i\bar W_i(x,t)-k_i\underline V(x,t+\delta)\bar W_i(x,t)+k_i\bar U_i(x,t)\underline W(x,t)\quad\forall(x,t)\in\Omega_T,\ i=1,\dots,N,$$
$$\frac{\partial\underline W(x,t)}{\partial t}=D\Delta\underline W(x,t)-\beta\underline W(x,t)-\sum_{i=1}^{N}k_i\bar U_i(x,t+\delta)\underline W(x,t)+\sum_{i=1}^{N}k_i\bar W_i(x,t)\underline V(x,t)\quad\forall(x,t)\in\Omega_T,$$
$$\frac{\partial\bar W_i}{\partial n}=\frac{\partial\underline W}{\partial n}=0\quad\text{on }\partial\Omega\times(0,T],\ i=1,\dots,N,$$
$$\bar W_i(\cdot,0)=\bar u_i^{(0)}-\bar U_i(\cdot,\delta)\ge 0\ \text{and}\ \underline W(\cdot,0)=\underline V(\cdot,\delta)-\underline v^{(0)}\ge 0\quad\text{in }\Omega,\ i=1,\dots,N.$$
Again by Proposition 7.1 we get $\bar W_i\ge 0$ ($i=1,\dots,N$) and $\underline W\ge 0$. Hence $\bar U_i$ ($i=1,\dots,N$) are monotonically nonincreasing in $t$ and $\underline V$ is monotonically nondecreasing in $t$. Similarly, $\underline U_i$ ($i=1,\dots,N$) are monotonically nondecreasing in $t$ and $\bar V$ is monotonically nonincreasing in $t$.
(3) By Part (1) we have U i,s ≥ U i,s (i = 1, . . . , N ) and V s ≥ V s on Ω. The claim that
(U 1,s , . . . , U N,s , V s ) and (U 1,s , . . . , U N,s , V s ) are solutions of the time-independent system
(5.1)–(5.3) can be proved similarly to the proof of Theorem 10.4.3 in [18] and that of
Theorem 3.6 in [20].
(4) This part can be proved by the same argument as in Step 2 of the proof of uniqueness of solution of Theorem 6.1.
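The monotonicity in Theorem 7.1 can also be observed numerically in an assumed special case: $N=1$ with spatially homogeneous data, so diffusion drops out and the system reduces to ODEs integrated here by explicit Euler. The parameter values are illustrative assumptions.

```python
import numpy as np

# Space-homogeneous sketch of Theorem 7.1: the solution started from the
# constant data (ubar0, 0) is nonincreasing in u and nondecreasing in v, the
# one started from (0, vbar0) behaves oppositely, and the two sandwich each
# other.  All parameter values are illustrative assumptions.
b1, b, k1, a1, a = 0.5, 0.4, 1.0, 1.0, 0.8

def solve(u, v, T=20.0, M=20000):
    """Explicit Euler for u' = -b1*u - k1*u*v + a1, v' = -b*v - k1*u*v + a."""
    dt = T / M
    U, V = [u], [v]
    for _ in range(M):
        u, v = u + dt * (-b1 * u - k1 * u * v + a1), v + dt * (-b * v - k1 * u * v + a)
        U.append(u)
        V.append(v)
    return np.array(U), np.array(V)

ubar0, vbar0 = a1 / b1, a / b       # the constant bounds of Theorem 7.1
Ub, Vl = solve(ubar0, 0.0)          # the pair with large u and zero v initially
Ul, Vb = solve(0.0, vbar0)          # the pair with zero u and large v initially

assert np.all(np.diff(Ub) <= 1e-9) and np.all(np.diff(Vl) >= -1e-9)
assert np.all(np.diff(Ul) >= -1e-9) and np.all(np.diff(Vb) <= 1e-9)
assert np.all(Ul <= Ub + 1e-9) and np.all(Vl <= Vb + 1e-9)
print(Ub[-1], Ul[-1], Vb[-1], Vl[-1])  # both pairs approach the same steady state
```

The assertions mirror parts (1) and (2) of the theorem: one pair is monotone downward, the other upward, and the two pairs sandwich each other for all time.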
Theorem 7.2. Let $\Omega$, $D_i$, $D$, $\beta_i$, $\beta$, $k_i$, $\alpha_i$, $\alpha$, $\bar u_i^{(0)}$, $\underline u_i^{(0)}$, $\bar v^{(0)}$, $\underline v^{(0)}$, $\bar U_i$, $\underline U_i$, $\bar V$, and $\underline V$ ($i=1,\dots,N$) be all the same as in Theorem 7.1. Let $u_{i0}\in C^1(\bar\Omega)$ ($i=1,\dots,N$) and $v_0\in C^1(\bar\Omega)$ be such that $\underline u_i^{(0)}\le u_{i0}\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_0\le\bar v^{(0)}$ in $\bar\Omega$. Let $(u_1,\dots,u_N,v)$ be the unique nonnegative global solution to the time-dependent problem (1.1)–(1.4) with the initial data $(u_{10},\dots,u_{N0},v_0)$. Then $\bar U_i\ge u_i\ge\underline U_i$ ($i=1,\dots,N$) and $\bar V\ge v\ge\underline V$ in $\bar\Omega\times[0,\infty)$.
Proof. This is similar to the proof of Part (1) of Theorem 7.1.
The following two corollaries relate the uniqueness of steady-state solution to the asymptotic behavior of solution to the time-dependent problem.
Corollary 7.1. With the assumptions of Theorem 7.2, the following hold true:

(1) $\bar U_{i,s}=\underline U_{i,s}$ ($i=1,\dots,N$) and $\bar V_s=\underline V_s$ in $\bar\Omega$ if and only if the steady-state solution $(u_{1,s},\dots,u_{N,s},v_s)\in(C^2(\bar\Omega))^{N+1}$ satisfying $\underline u_i^{(0)}\le u_{i,s}\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_s\le\bar v^{(0)}$ in $\bar\Omega$ is unique.

(2) If the steady-state solution $(u_{1,s},\dots,u_{N,s},v_s)\in(C^2(\bar\Omega))^{N+1}$ that satisfies $\underline u_i^{(0)}\le u_{i,s}\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_s\le\bar v^{(0)}$ in $\bar\Omega$ is unique, then for any initial data $(u_{10},\dots,u_{N0},v_0)\in(C^1(\bar\Omega))^{N+1}$ with $\underline u_i^{(0)}\le u_{i0}\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_0\le\bar v^{(0)}$ in $\bar\Omega$, the corresponding nonnegative solution $(u_1,\dots,u_N,v)$ of the time-dependent problem (1.1)–(1.4) converges to $(u_{1,s},\dots,u_{N,s},v_s)$ as $t\to\infty$.

(3) If the nonnegative steady-state solution in $(C^2(\bar\Omega))^{N+1}$ is unique, then the nonnegative global solution to the time-dependent problem with any nonnegative initial data in $(C^1(\bar\Omega))^{N+1}$ converges to this steady-state solution as $t\to\infty$.
Proof. (1) This follows immediately from Theorem 7.2.
(2) This follows from Theorem 7.1 and Theorem 7.2.
(3) Choose $\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\bar v^{(0)}$ all large enough and apply Part (2).
Corollary 7.2. With the same assumptions as in Corollary 7.1, if $\bar U_{i,s}\ne\underline U_{i,s}$ for some $i$ with $1\le i\le N$ or $\bar V_s\ne\underline V_s$, then: (1) for any initial data $(u_{10},\dots,u_{N0},v_0)\in(C^1(\bar\Omega))^{N+1}$ with $\underline u_i^{(0)}\le u_{i0}\le\underline U_{i,s}$ ($i=1,\dots,N$) and $\bar V_s\le v_0\le\bar v^{(0)}$ in $\bar\Omega$, the corresponding nonnegative global solution $(u_1,\dots,u_N,v)$ of the time-dependent problem (1.1)–(1.4) converges to $(\underline U_{1,s},\dots,\underline U_{N,s},\bar V_s)$ as $t\to\infty$; and (2) for any initial data $(u_{10},\dots,u_{N0},v_0)\in(C^1(\bar\Omega))^{N+1}$ with $\bar U_{i,s}\le u_{i0}\le\bar u_i^{(0)}$ ($i=1,\dots,N$) and $\underline v^{(0)}\le v_0\le\underline V_s$ in $\bar\Omega$, the corresponding nonnegative global solution $(u_1,\dots,u_N,v)$ of the time-dependent problem (1.1)–(1.4) converges to $(\bar U_{1,s},\dots,\bar U_{N,s},\underline V_s)$ as $t\to\infty$.
Proof. This is similar to the proof of Theorem 7.1.
Acknowledgment. This work was supported by the San Diego Fellowship (M.E.H.), the US National Science Foundation (NSF) through grants DMS-0811259 and DMS-1319731 (B.L.), the Center for Theoretical Biological Physics through NSF grant PHY-0822283 (B.L.), the National Institutes of Health (NIH) through grant R01GM096188 (B.L.), and the Beijing Teachers Training Center for Higher Education (W.Y.). The authors thank Professor
Herbert Levine for introducing to them the mean-field model for RNA interactions, and
Professor Daomin Cao and Professor Yuan Lou for helpful discussions. They also thank the
anonymous referee for many helpful suggestions.
References
[1] H. C. Berg. Random Walks in Biology. Princeton University Press, 1993.
[2] L. C. Evans. Partial Differential Equations, volume 19 of Graduate Studies in Mathematics. Amer. Math. Soc., 2nd edition, 2010.
[3] A. Friedman. Partial Differential Equations of Parabolic Type. Prentice-Hall, Inc.,
1964.
[4] F. S. Fröhlich. Activation of gene expression by small RNA. Current Opin. Microbiology, 12:674–682, 2009.
[5] D. Gilbarg and N. S. Trudinger. Elliptic Partial Differential Equations of Second Order.
Springer–Verlag, 2nd edition, 1998.
[6] L. He and G. J. Hannon. MicroRNAs: small RNAs with a big role in gene regulation.
Nature Rev. Genetics, 5(7):522–531, 2004.
[7] M. E. Hohn. Diffusion Equations Models and Numerical Simulations of Gene Expression. PhD thesis, University of California, San Diego, 2013.
[8] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[9] J. Jost. Partial Differential Equations. Springer, 2002.
[10] K.-Y. Lam and W.-M. Ni. Uniqueness and complete dynamics in heterogeneous
competition-diffusion systems. SIAM J. Applied Math., 72:1695–1712, 2012.
[11] J. Lapham, J. P. Rife, P. B. Moore, and D. M. Crothers. Measurement of diffusion
constants for nucleic acids by NMR. J. Biomol. NMR, 10:255–262, 1997.
[12] E. Levine and T. Hwa. Small RNAs establish gene expression thresholds. Current
Opin. Microbiology, 11(6):574–579, 2008.
[13] E. Levine, E. Ben Jacob, and H. Levine. Target-specific and global effectors in gene
regulation by microRNA. Biophys. J., 93(11):L52–L54, 2007.
[14] E. Levine, P. McHale, and H. Levine. Small regulatory RNAs may sharpen spatial
expression patterns. PLoS Comput. Biology, 3:e233, 2007.
[15] E. Levine, Z. Zhang, T. Kuhlman, and T. Hwa. Quantitative characteristics of gene
regulation by small RNA. PLoS Biology, 5(9):1998–2010, 2007.
[16] A. Loinger, Y. Hemla, I. Simon, H. Margalit, and O. Biham. Competition between
small RNAs: A quantitative view. Biophys. J, 102(8):1712–1721, 2012.
[17] W.-M. Ni. The Mathematics of Diffusion, volume 82. SIAM, 2011.
[18] C. V. Pao. Nonlinear Parabolic and Elliptic Equations. Plenum Press, New York and
London, 1992.
[19] T. Platini, T. Jia, and R. V. Kulkarni. Regulation by small RNAs via coupled degradation: Mean-field and variational approaches. Phys. Rev. E, 84:021928, 2011.
[20] D. H. Sattinger. Monotone methods in nonlinear elliptic and parabolic boundary value
problems. Indiana Univ. Math. J., 21(11):979–1000, 1972.
[21] G. Stefani and F. J. Slack. Small non-coding RNAs in animal development. Nature
Rev. Molecular Cell Biology, 9(3):219–230, 2008.
[22] H. Valadi, K. Ekström, A. Bossios, M. Sjöstrand, J. J. Lee, and J. O. Lötvall. Exosome-mediated transfer of mRNAs and microRNAs is a novel mechanism of genetic exchange
between cells. Nature Cell Biology, 9(6):654–659, 2007.
[23] J. D. Watson, T. A. Baker, S. P. Bell, A. Gann, M. Levine, and R. Losick. Molecular
Biology of the Gene. Benjamin Cummings, 7th edition, 2013.