
Analysis of a spatially inhomogeneous stochastic partial differential equation epidemic model

Published online by Cambridge University Press:  16 July 2020

Dang H. Nguyen*
Affiliation:
University of Alabama
Nhu N. Nguyen*
Affiliation:
Wayne State University
George Yin*
Affiliation:
Wayne State University
*Postal address: University of Alabama, Tuscaloosa, AL 35487, USA.
**Postal address: Department of Mathematics, Wayne State University, Detroit, MI 48202, USA.
***Postal address: Department of Mathematics, Wayne State University, Detroit, MI 48202, USA.

Abstract

This work proposes and analyzes a family of spatially inhomogeneous epidemic models. This is our first effort to use stochastic partial differential equations (SPDEs) to model epidemic dynamics with spatial variations and environmental noise. After setting up the problem, the existence and uniqueness of solutions of the underlying SPDEs are examined. Then, definitions of permanence and extinction are given, and certain sufficient conditions are provided for permanence and extinction. Our hope is that this paper will open up windows for investigation of epidemic models from a new angle.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

This work studies stochastic epidemic models in which spatial inhomogeneity is allowed, with the hope of opening up a new angle for investigating a large class of epidemic processes. In place of the usual formulation based on stochastic differential equations considered in the literature, we propose a new class of models built on stochastic partial differential equations. This greatly enriches the class of systems and offers opportunities both mathematically and practically, while also posing greater challenges.

Epidemic models (compartment models) in which the density functions are spatially homogeneous were introduced in 1927 by Kermack and McKendrick [Reference Kermack and McKendrick24, Reference Kermack and McKendrick25]. The main idea is to partition the population into susceptible, infected, and recovered (SIR) classes. The dynamics of these classes are given by a system of deterministic differential equations. One classical model takes the form

\begin{equation*} \left\{\begin{array}{ll}{\rm d} S(t)=\bigg[\Lambda-\mu_{\rm S}S(t)-\dfrac{\alpha S(t)I(t)}{S(t)+I(t)}\bigg]{\rm d} t, & t\geq 0, \\[9pt] {\rm d} I(t)=\bigg[-(\mu_{\rm I}+r)I(t)+\dfrac{\alpha S(t)I(t)}{S(t)+I(t)}\bigg]{\rm d} t, & t\geq 0, \\[9pt] {\rm d} R(t)=\big[-\mu_{\rm R} R(t)+r I(t)\big]{\rm d} t, & t \geq 0, \\[2pt] S(0)=S_0\geq 0,\quad I(0)=I_0\geq 0,\quad R(0)=R_0\geq 0, \end{array}\right.\end{equation*}

where S(t), I(t), and R(t) are the densities of the susceptible, infected, and recovered populations, respectively. In the above, $\Lambda$ is the recruitment rate of the population; $\mu_{\rm S}$, $\mu_{\rm I}$, and $\mu_{\rm R}$ are the death rates of susceptible, infected, and recovered individuals, respectively; $\alpha$ is the infection rate; and r is the recovery rate. Since the dynamics of the recovered individuals have no effect on the disease transmission dynamics, following the usual practice, the recovered individuals are removed from the formulation henceforth to simplify the study. SIR models are known to be useful and suited to such diseases as rubella, whooping cough, measles, smallpox, etc. It has also been recognized that random effects are unavoidable and that a population is often subject to random disturbances. Thus, much effort has also been devoted to the investigation of stochastic epidemic models. One popular approach is to add stochastic noise perturbations to the above deterministic models. In recent years, resurgent attention has been devoted to analyzing and designing controls of infectious diseases for host populations; see [Reference Allen, Bolker, Lou and Nevai1, Reference Ball and Sirl4, Reference Britton and Lindholm6, Reference Dieu, Du and Nhu14, Reference Du and Nhu17, Reference Gathy and Lefevre20, Reference Hieu, Du, Auger and Nguyen23, Reference Kortchemski26, Reference Wang and Zhao36, Reference Wilkinson, Ball and Sharkey37] and references therein.
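As a quick numerical illustration (not part of the paper's analysis), the classical SIR system above can be integrated with a forward Euler scheme; all parameter values below are illustrative assumptions.

```python
# Forward-Euler sketch of the classical SIR system above.
# Parameter values are illustrative, not taken from the paper.

def simulate_sir(Lam=1.0, mu_S=0.2, mu_I=0.2, mu_R=0.2,
                 alpha=0.8, r=0.3, S0=5.0, I0=1.0, R0=0.0,
                 dt=0.01, T=200.0):
    S, I, R = S0, I0, R0
    for _ in range(int(T / dt)):
        # Standard (frequency-dependent) incidence alpha*S*I/(S+I),
        # with the convention that it vanishes when S + I = 0.
        inc = alpha * S * I / (S + I) if S + I > 0 else 0.0
        dS = dt * (Lam - mu_S * S - inc)
        dI = dt * (-(mu_I + r) * I + inc)
        dR = dt * (-mu_R * R + r * I)
        S, I, R = S + dS, I + dI, R + dR
    return S, I, R
```

With these rates, $\alpha > \mu_{\rm I}+r$ and the trajectory settles near the endemic state $(S,I,R)=(2,\,1.2,\,1.8)$ of this parameter set; lowering $\alpha$ below $\mu_{\rm I}+r$ instead drives I toward 0, illustrating the threshold behavior discussed below.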

For the deterministic models, studying the systems from a dynamic system point of view, certain threshold-type results have been found. In accordance with the threshold, the population tends to the disease-free equilibrium or approaches an endemic equilibrium under certain conditions. It has taken a long time to find the critical threshold value for the corresponding stochastic systems. A characterization of systems using critical thresholds was reported very recently in [Reference Dieu, Nguyen, Du and Yin15, Reference Du and Nhu18, Reference Hening and Nguyen21, Reference Hening, Nguyen and Yin22], in which sufficient and almost necessary conditions were obtained using the idea of a Lyapunov exponent, so that the asymptotic behavior of the system has been completely classified. Such ideas can also be found in [Reference Du, Nguyen and Yin16, Reference Nguyen and Yin28] for related problems.

From another angle, it has been widely recognized that there should be spatial dependence in the model to better reflect the spatial variations. In the spatially inhomogeneous case, the epidemic reaction–diffusion system takes the form

\begin{equation*}\begin{cases}\displaystyle \frac{\partial}{\partial t}S(t,x)=k_1\Delta S(t,x) +\Lambda(x)-\mu_1(x)S(t,x)-\dfrac{\alpha(x) S(t,x)I(t,x)}{S(t,x)+I(t,x)}\quad\text{in }\mathbb{R}^+\times\mathcal{O},\\[11pt]\displaystyle \frac{\partial}{\partial t}I(t,x)=k_2\Delta I(t,x)-\mu_2(x) I(t,x) + \dfrac{\alpha(x) S(t,x)I(t,x)}{S(t,x)+I(t,x)} \quad\text{in }\mathbb{R}^+\times\mathcal{O},\\[11pt] \partial_{\nu}S(t,x)=\partial_{\nu}I(t,x)=0\quad\quad\quad\quad\;\text{in }\mathbb{R}^+\times\partial\mathcal{O},\\[4pt] S(x,0)=S_0(x),I(x,0)=I_0(x)\quad\;\;\text{in }\mathcal{O},\end{cases}\end{equation*}

where $\Delta$ is the Laplacian with respect to the spatial variable, $\mathcal{O}$ is a bounded domain in $\mathbb{R}^l$ ( $l\geq 1$ ) with a $C^2$ boundary, $\partial_{\nu}S$ denotes the directional derivative in the direction of the outward normal $\nu$ on $\partial \mathcal{O}$, and $k_1$ and $k_2$ are positive constants representing the diffusion rates of the susceptible and infected population densities, respectively. In addition, $\Lambda(x),\mu_1(x),\mu_2(x),\alpha(x)\in C^2(\mathcal{O})$ are non-negative functions. Recently, epidemic reaction–diffusion models have been studied in [Reference Allen, Bolker, Lou and Nevai2, Reference Ducrot and Giletti19, Reference Peng32, Reference Peng and Liu33, Reference Zhang, Wang and Zhao40] and the references therein. In [Reference Wang and Zhao36], some results were given for a general epidemic model with reaction–diffusion in terms of basic reproduction numbers. The above models are all noise free. However, random perturbations from the environment are inevitable. Therefore, a more suitable description calls for stochastic diffusive epidemic models. Taking this into consideration, we propose a spatially non-homogeneous model using a system of stochastic partial differential equations given by

(1.1) \begin{equation}\begin{cases}{\rm d} S(t,x)=\bigg[k_1\Delta S(t,x) +\Lambda(x)-\mu_1(x)S(t,x)-\dfrac{\alpha(x) S(t,x)I(t,x)}{S(t,x)+I(t,x)}\bigg]{\rm d} t \\[5pt] \qquad\qquad\qquad\qquad\quad +\, S(t,x){\rm d} W_1(t,x)\quad\text{in }\mathbb{R}^+\times\mathcal{O}, \\[5pt]{\rm d} I(t,x)=\bigg[k_2\Delta I(t,x)\,-\mu_2(x) I(t,x)+\dfrac{\alpha(x) S(t,x)I(t,x)}{S(t,x)+I(t,x)}\bigg]{\rm d} t\\[5pt] \qquad\qquad\qquad\qquad\quad+\,I(t,x){\rm d} W_2(t,x)\quad\text{in }\mathbb{R}^+\times\mathcal{O}, \\[5pt]\partial_{\nu}S(t,x)=\partial_{\nu}I(t,x)=0\quad\quad\quad\quad\quad\quad\quad\text{in }\mathbb{R}^+\times\partial\mathcal{O},\\[5pt]S(x,0)=S_0(x),I(x,0)=I_0(x)\quad\quad\quad\quad\ \ \text{in }\mathcal{O},\end{cases}\end{equation}

where $W_1(t,x)$ and $W_2(t,x)$ are $L^2(\mathcal{O},\mathbb{R})$ -valued Wiener processes that represent noise in both time and space. We refer readers to [Reference Da Prato and Zabczyk12] for more details on the $L^2(\mathcal{O},\mathbb{R})$ -valued Wiener process.
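For intuition only, (1.1) can be discretized on $\mathcal{O}=(0,1)$ by an explicit Euler–Maruyama scheme: a second-order finite difference with mirrored endpoints for the Neumann Laplacian, and truncated noise increments $\sum_k \sqrt{a_k}\,\Delta B_k\, e_k$ with the cosine basis $e_0=1$, $e_k(x)=\sqrt 2\cos (k\pi x)$. All coefficients, the spectrum $a_k=(k+1)^{-2}$, and the grid sizes are illustrative assumptions, not taken from the paper; a rigorous treatment is exactly what the following sections develop.

```python
# One explicit Euler-Maruyama step for (1.1) on O = (0, 1) with a uniform
# grid and reflecting (Neumann) boundaries.  Coefficients are spatially
# constant here for simplicity; everything numeric is an illustrative choice.

import math, random

def em_step(S, I, dt, dx, rng, k1=0.01, k2=0.01,
            Lam=1.0, mu1=0.2, mu2=0.5, alpha=0.8, n_modes=10):
    n = len(S)
    xs = [j * dx for j in range(n)]

    def dW():
        # Truncated increment of W_i: sum_k sqrt(a_k) * dB_k * e_k(x).
        bumps = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_modes)]
        def val(x):
            out = 0.0
            for k in range(n_modes):
                a_k = 1.0 / (k + 1) ** 2          # summable, as in (2.1)
                e_k = 1.0 if k == 0 else math.sqrt(2) * math.cos(k * math.pi * x)
                out += math.sqrt(a_k) * bumps[k] * e_k
            return out
        return [val(x) for x in xs]

    dW1, dW2 = dW(), dW()

    def lap(u, j):
        # Mirrored ghost points implement the zero-flux (Neumann) condition.
        left = u[j - 1] if j > 0 else u[1]
        right = u[j + 1] if j < n - 1 else u[n - 2]
        return (left - 2 * u[j] + right) / dx ** 2

    Sn, In = S[:], I[:]
    for j in range(n):
        tot = S[j] + I[j]
        inc = alpha * S[j] * I[j] / tot if tot > 0 else 0.0
        Sn[j] = S[j] + dt * (k1 * lap(S, j) + Lam - mu1 * S[j] - inc) \
                + S[j] * dW1[j]
        In[j] = I[j] + dt * (k2 * lap(I, j) - mu2 * I[j] + inc) \
                + I[j] * dW2[j]
    return Sn, In
```

Note how the multiplicative structure $S\,{\rm d} W_1$, $I\,{\rm d} W_2$ makes the noise vanish where the density vanishes, which is consistent with the positivity proved in Section 3.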

Because this is our first work in this direction, we have to settle a number of issues. First, we establish the existence and uniqueness of solutions, in the mild sense, of the stochastic partial differential equations. Moreover, we examine aspects of the long-term behavior of the solutions. These are the main objectives of the current work.

The rest of the paper is arranged as follows. Section 2 gives some preliminary results and also formulates the problem that we wish to study. Section 3 establishes the existence and uniqueness of the solution of the stochastic partial differential equations. Section 4 provides sufficient conditions for extinction and permanence, while Section 5 provides an example. Finally, Section 6 concludes the paper with some further remarks.

2. Preliminaries and formulation

Let $\mathcal{O}$ be a bounded domain in $\mathbb{R}^l$ (with $l\geq 1$ ) having a $C^2$ boundary, and let $H\,:\!=L^2(\mathcal{O};\,\mathbb{R})$ be the separable Hilbert space, endowed with the scalar product

\begin{equation*}\langle u, v \rangle_H\,:\!=\int_{\mathcal{O}} u(x) v(x) \, {\rm d} x \end{equation*}

and the corresponding norm $\left\vert{u}\right\vert_H = \sqrt {\langle u, u \rangle_H}$ . We will say $ u\geq 0$ if $ u(x)\geq 0$ almost everywhere in $\mathcal{O}$ . Moreover, we denote by $L^2(\mathcal{O},\mathbb{R}^2)$ the space of all functions $u(x)=(u_1(x),u_2(x))$ where $ u_1,u_2\in L^2(\mathcal{O},\mathbb{R})$ , on which the inner product is defined as

\begin{align*}\langle u,v \rangle_{L^2(\mathcal{O},\mathbb{R}^2)}&\,:\!=\int_\mathcal{O} \big\langle u(x),v(x) \big\rangle_{\mathbb{R}^2} \, {\rm d} x=\int_{\mathcal{O}}(u_1(x)v_1(x)+u_2(x)v_2(x)) \, {\rm d} x\\&\,=\langle u_1,v_1 \rangle_{L^2(\mathcal{O},\mathbb{R})}+\langle u_2,v_2 \rangle_{L^2(\mathcal{O},\mathbb{R})},\end{align*}

for all $u,v\in L^2(\mathcal{O},\mathbb{R}^2)$ . Note that $L^2(\mathcal{O},\mathbb{R}^2)$ is a separable Hilbert space. In what follows, we use u to denote a function that is either real valued or $\mathbb{R}^2$ -valued (as will be clear from the context). Denote by E the Banach space $C(\mathcal{ \overline{O}};\,\mathbb{R})$ endowed with the sup-norm

\begin{equation*}\left\vert{u}\right\vert_E\,:\!=\sup_{x\in \mathcal{\overline{O}}}\left\vert{u(x)}\right\vert.\end{equation*}

Let $(\Omega, \mathcal{F},\{\mathcal{F}_t\}_{t\geq 0},\mathbb{P})$ be a complete probability space, and let $L^{\;p}(\Omega;\,C([0,t],C(\mathcal{\overline{O}},\mathbb{R}^2)))$ be the space of all predictable $C(\mathcal{\overline{O}},\mathbb{R}^2)$ -valued processes u in $C([0,t],C(\mathcal{\overline{O}},\mathbb{R}^2))$, $\mathbb{P}$ -almost surely (a.s.), with the norm $\left\vert{\cdot}\right\vert_{L_{t,p}}$ defined as follows:

\begin{equation*}\left\vert{u}\right\vert^p_{L_{t,p}}\,:\!=\mathbb{E} \sup_{s\in [0,t]}\left\vert{u(s)}\right\vert^p_{C(\mathcal{\overline{O}},\mathbb{R}^2)},\end{equation*}

where

\begin{equation*}\left\vert{u}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}=\Bigg(\sum_{i=1}^2\sup_{x\in \mathcal{\overline{O}}}\left\vert{u_i(x)}\right\vert^2\Bigg)^\frac 12\qquad\text{if}\ u=(u_1,u_2)\in C(\mathcal{\overline{O}},\mathbb{R}^2).\end{equation*}

For $\varepsilon>0,p\geq 1$ , denote by $W^{\varepsilon,p}(\mathcal{O},\mathbb{R}^2)$ the Sobolev–Slobodeckij space (the Sobolev space with non-integer exponent) endowed with the norm

\begin{equation*}\left\vert{u}\right\vert_{\varepsilon,p}\,:\!=\left\vert{u}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}+\sum_{i=1}^2\int_{\mathcal{O}\times\mathcal{O}}\dfrac{\left\vert{u_i(x)-u_i(y)}\right\vert^p}{\left\vert{x-y}\right\vert^{\varepsilon p+l}} \, {\rm d} x \, {\rm d} y.\end{equation*}

Assume that $B_{k,1}(t)$ and $B_{k,2}(t)$ with $k=1,2,\dots$ are independent $\{\mathcal{F}_t\}_{t\geq 0}$ -adapted one-dimensional Wiener processes. Now, fix an orthonormal basis $\{e_k\}_{k=1}^{\infty}$ in H and assume that this sequence is uniformly bounded in $L^{\infty}(\mathcal{O},\mathbb{R})$ , i.e.

\begin{equation*}C_0\,:\!=\sup_{k\in\mathbb{N}}\left\vert{e_k}\right\vert_{L^{\infty}(\mathcal{O},\mathbb{R})}=\sup_{k\in\mathbb{N}}\mathop{\text{ess\,sup}}\limits_{x\in\mathcal{O}} \left\vert{e_k(x)}\right\vert<\infty.\end{equation*}

We define the infinite-dimensional Wiener processes $W_i(t)$ , which are driving noise in (1.1), as follows:

\begin{equation*}\displaystyle W_i(t)=\sum_{k=1}^{\infty}\sqrt {a_{k,i}}B_{k,i}(t)e_k,\qquad i=1,2,\end{equation*}

where $\{a_{k,i}\}_{k=1}^{\infty}$ are sequences of non-negative real numbers satisfying

(2.1) \begin{equation}a_i\,:\!=\sum_{k=1}^{\infty}a_{k,i}<\infty,\qquad i=1,2.\end{equation}
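Condition (2.1) makes each $W_i$ a genuine $H$-valued Wiener process with trace-class covariance; in particular, orthonormality of $\{e_k\}$ gives $\mathbb{E}\left\vert{W_i(t)}\right\vert_H^2=t\sum_{k}a_{k,i}$. A Monte Carlo sanity check of this trace identity, under the illustrative choice $a_{k,i}=2^{-k}$ truncated at 20 modes:

```python
# Check E|W_i(t)|_H^2 = t * sum_k a_k for the truncated noise expansion.
# By orthonormality of {e_k}, |W_i(t)|_H^2 = sum_k a_k B_k(t)^2, so the
# spatial basis never needs to be evaluated.  a_k = 2^{-k} is illustrative.

import math, random

def sample_W_normsq(t, n_modes, rng):
    return sum(2.0 ** (-k) * rng.gauss(0.0, math.sqrt(t)) ** 2
               for k in range(1, n_modes + 1))

rng = random.Random(0)
n_samples = 5000
mean_normsq = sum(sample_W_normsq(2.0, 20, rng)
                  for _ in range(n_samples)) / n_samples
# mean_normsq should be close to t * sum_k a_k = 2.0 * (1 - 2**-20) ~ 2.0.
```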

Let $A_1$ and $A_2$ be Neumann realizations of $k_1\Delta$ and $k_2\Delta$ in H, respectively, i.e.

\begin{equation*}D(A_i)=\big\{u\in H \mid \Delta u\in H\;\text{and}\;\partial_\nu u=0\;\text{on}\;\partial \mathcal{O}\big\},\end{equation*}
\begin{equation*}A_i u=k_i\Delta u,\qquad u\in D(A_i),\end{equation*}

where the Laplace operator in the above definition is understood in the distribution sense. Then, $A_1$ and $A_2$ are infinitesimal generators of analytic semigroups ${\rm e}^{tA_1}$ and ${\rm e}^{tA_2}$ with corresponding Neumann heat kernels denoted by $p_\mathcal{O}^{N,1}(t,x,y)$ , $p_\mathcal{O}^{N,2}(t,x,y)$ , i.e.

\begin{equation*}({\rm e}^{tA_i}u)(x)=\int_\mathcal{O} p_\mathcal{O}^{N,i}(t,x,y)u(y) \, {\rm d} y, \qquad i=1,2,\end{equation*}

respectively. In addition, if we denote by $A\,:\!=(A_1,A_2)$ the operator defined in $L^2(\mathcal{O},\mathbb{R}^2)$ by $Au\,:\!=(A_1u_1,A_2u_2)$ for $u=(u_1,u_2)\in L^2(\mathcal{O},\mathbb{R}^2)$, then it generates an analytic semigroup ${\rm e}^{tA}$ with ${\rm e}^{tA}u=({\rm e}^{tA_1}u_1,{\rm e}^{tA_2}u_2)$. In [Reference Davies13, Theorem 1.4.1], it is proved that the space $L^1(\mathcal{O},\mathbb{R}^2)\cap L^\infty(\mathcal{O},\mathbb{R}^2)$ is invariant under ${\rm e}^{tA}$, so that ${\rm e}^{tA}$ may be extended to a non-negative one-parameter semigroup ${\rm e}^{tA(p)}$ on $L^{\;p}(\mathcal{O},\mathbb{R}^2)$, for all $1\leq p\leq\infty$. All these semigroups are strongly continuous and consistent in the sense that ${\rm e}^{tA(p)}u={\rm e}^{tA(q)}u$ for any $u\in L^{\;p}(\mathcal{O},\mathbb{R}^2)\cap L^q(\mathcal{O},\mathbb{R}^2)$ (see [Reference Cerrai8]). So, we will suppress the superscript p and denote them by ${\rm e}^{tA}$ whenever there is no confusion. Moreover, if we consider the part $A_i^E$ of $A_i$ in the space of continuous functions E, it generates an analytic semigroup (see [Reference Arendt3, Chapter 2]), which has no dense domain in general. However, since $\mathcal{O}$ is assumed to have a $C^2$ boundary and Neumann boundary conditions are imposed, $A_i^E$ has a dense domain in E (see [Reference Da Prato and Zabczyk12, Appendix A.5.2]), and hence this analytic semigroup is strongly continuous. Finally, we recall some well-known properties of the operators $A_i$ and the analytic semigroups ${\rm e}^{tA_i}$ for $i=1,2$ as follows (for further details, we refer the reader to [Reference Arendt3, Reference Davies13, Reference Ouhabaz31] and the references therein):

  1. For all $u\in H$ , $\int_ 0^t {\rm e}^{sA_i}u \, {\rm d} s\in D(A_i)$ and $A_i(\int_0^t {\rm e}^{sA_i}u \, {\rm d} s)={\rm e}^{tA_i}u-u$ .

  2. By Green’s identity, it can be proved that $A_i$ is symmetric and, in fact, self-adjoint in H, and that, for all $u\in D(A_i)$, $\int_\mathcal{O} (A_iu)(x) \, {\rm d} x=0$.

  3. For any $t>0$ , $x,y\in\mathcal{O}$ ,

    \begin{equation*}0\leq p_\mathcal{O}^{N,i}(t,x,y)\leq c_1 (t\wedge 1)^{-\frac l2}\exp\bigg\{-c_2\frac{\left\vert{x-y}\right\vert^2}{t}\bigg\}\end{equation*}
    for some positive constants $c_1$ and $c_2$ which depend on $\mathcal{O}$ but are independent of x, y, and t.
  4. The semigroup ${\rm e}^{tA}$ satisfies

    (2.2) \begin{equation}\begin{aligned}\left\vert{{\rm e}^{tA}u}\right\vert_{L^\infty(\mathcal{O},\mathbb{R}^2)}\leq c\left\vert{u}\right\vert_{L^\infty(\mathcal{O},\mathbb{R}^2)}\quad\text{and}\quad\left\vert{{\rm e}^{tA}u}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}\leq c\left\vert{u}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}\end{aligned}\end{equation}
    for some constant c which depends on $\mathcal{O}$ but is independent of u and t.
  5. For any $t,\varepsilon>0$ , $p\geq 1$ , the semigroup ${\rm e}^{tA}$ maps $L^{\;p}(\mathcal{O},\mathbb{R}^2)$ into $W^{\varepsilon,p}(\mathcal{O},\mathbb{R}^2)$ and, $\text{for all}\ u\in L^{\;p}(\mathcal{O},\mathbb{R}^2)$ ,

    (2.3) \begin{equation}\left\vert{{\rm e}^{tA}u}\right\vert_{\varepsilon,p}\leq c(t\wedge 1)^{-\varepsilon/2}\left\vert{u}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)} \end{equation}
    for some constant c independent of u and t.
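As a concrete instance of these properties (an illustration, not part of the argument): on $\mathcal{O}=(0,1)$ the Neumann eigenfunctions of $k_i\Delta$ are $e_0=1$ and $e_k(x)=\sqrt 2\cos (k\pi x)$ with eigenvalues $-k_i(k\pi)^2$, so ${\rm e}^{tA_i}$ acts diagonally on Fourier–cosine coefficients. In particular, since $\int_\mathcal{O} e_k=0$ for $k\geq 1$, the semigroup preserves the integral (the zeroth coefficient), in line with property 2.

```python
# Neumann heat semigroup on O = (0, 1) in the cosine eigenbasis:
# e^{tA} u = sum_k exp(-k1 * (k*pi)^2 * t) <u, e_k> e_k.
# The zeroth coefficient (the integral of u, since |O| = 1) is preserved.

import math

def heat_semigroup(u_coeffs, t, k1=1.0):
    # u_coeffs[k] = <u, e_k>; returns the coefficients of e^{tA} u.
    return [c * math.exp(-k1 * (k * math.pi) ** 2 * t)
            for k, c in enumerate(u_coeffs)]
```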

Now, we rewrite (1.1) as a stochastic differential equation in an infinite-dimensional space:

(2.4) \begin{equation}\begin{cases}{\rm d} S(t)=\bigg[A_1 S(t) +\Lambda-\mu_1S(t)-\dfrac{\alpha S(t)I(t)}{S(t)+I(t)}\bigg] \, {\rm d} t + S(t) \, {\rm d} W_1(t),\\[0.3cm]{\rm d} I(t)=\bigg[A_2 I(t)-\mu_2I(t) + \dfrac{\alpha S(t)I(t)}{S(t)+I(t)}\bigg] \, {\rm d} t+I(t) \, {\rm d} W_2(t),\\S(0)=S_0,I(0)=I_0.\end{cases}\end{equation}

As usual, we say that $(S(t),I(t))$ is a mild solution to (2.4) if

(2.5) \begin{equation}\begin{cases}\displaystyle S(t)={\rm e}^{tA_1}S_0+\int_0^t {\rm e}^{(t-s)A_1}\bigg(\Lambda-\mu_1S(s)-\dfrac{\alpha S(s)I(s)}{S(s)+I(s)}\bigg) \, {\rm d} s+W_S(t),\\[0.3cm]\displaystyle I(t)={\rm e}^{tA_2}I_0+\int_0^t {\rm e}^{(t-s)A_2}\bigg(-\mu_2 I(s)+\dfrac{\alpha S(s)I(s)}{S(s)+I(s)}\bigg) \, {\rm d} s+W_I(t),\end{cases}\end{equation}

where

\begin{equation*}W_S(t)=\int_0^t {\rm e}^{(t-s)A_1}S(s) \, {\rm d} W_1(s)\quad\text{and}\quad W_I(t)=\int_0^t {\rm e}^{(t-s)A_2}I(s) \, {\rm d} W_2(s),\end{equation*}

or, in vector form,

(2.6) \begin{equation}\displaystyle Z(t)={\rm e}^{tA}Z_0+\int_0^t {\rm e}^{(t-s)A}F(Z(s)) \, {\rm d} s+\int_0^t {\rm e}^{(t-s)A}Z(s) \, {\rm d} W(s),\end{equation}

where $ Z=(S,I)$ , $F(Z)=(F_1(Z),F_2(Z))\,:\!=\bigg(\Lambda-\mu_1S-\dfrac{\alpha SI}{S+I},-\mu_2I+\dfrac{\alpha SI}{S+I}\bigg)$ , and

\begin{equation*}{\rm e}^{(t-s)A}Z(s) \, {\rm d} W(s)\,:\!=({\rm e}^{(t-s)A_1}S(s) \, {\rm d} W_1(s)\;,\; {\rm e}^{(t-s)A_2}I(s) \, {\rm d} W_2(s)).\end{equation*}

Because we are modeling SIR epidemic systems, we are only interested in the positive ( $\geq 0$ ) solutions. Therefore, we define a positive mild solution of (2.4) as a mild solution S(t, x), I(t, x) such that $S(t,x),I(t,x)\geq 0$ for almost every $x\in\mathcal{O}$ and all $t\geq 0$. Moreover, for the term $\dfrac{si}{s+i}$ to be well defined, we adopt the convention that it equals 0 whenever $s=0$ or $i=0$.

Remark 2.1. The integrals on the right-hand side of (2.5) are understood as Bochner integrals (in the Banach space H), while $W_S(t)$ and $W_I(t)$ are stochastic integrals (stochastic convolutions). The S(s) (resp. I(s)) in the stochastic integrals is understood as a multiplication operator, i.e.

\begin{equation*}S(s)(u)=S(s)u \qquad \text{for all}\ u\in H.\end{equation*}

The stochastic integral $\int_0^t {\rm e}^{(t-s)A_i}U(s) \, {\rm d} W_i(s)$ (see [Reference Da Prato and Zabczyk12, Chapter 4] for more details on stochastic integrals) is well defined if the process U(s) satisfies

\begin{equation*}\int_0^t \sum_{k=1}^{\infty}a_{k,i}\big\vert{{\rm e}^{(t-s)A_i} U(s) e_k}\big\vert^2_H \, {\rm d} s<\infty.\end{equation*}

Finally, to simplify the notation, we do not write vectors in column form; the calculations involving vectors are, however, understood in the usual sense.

To investigate epidemic models, an important question is whether the infected individuals will die out in the long term; that is, whether there is extinction or permanence. Since the mild solution is used, let us introduce the definitions in the weak sense as follows.

Definition 2.1. A population with density u(t, x) is said to be extinct in the mean if

\begin{equation*}\limsup_{t\to\infty}\dfrac 1t\int_0^t\mathbb{E}\int_\mathcal{O} u(s,x) \, {\rm d} x \, {\rm d} s=0,\end{equation*}

and is said to be permanent in the mean if there exists a positive number $R_I$ , independent of the initial conditions of the population, such that

\begin{equation*}\liminf_{t\to\infty}\dfrac 1t\int_0^t\bigg(\mathbb{E}\int_\mathcal{O} (u^2(s,x)\wedge 1) \, {\rm d} x\bigg)^{\frac 12} \, {\rm d} s\geq R_I.\end{equation*}
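To make the definitions concrete, consider toy densities that are constant in space on $\mathcal{O}=(0,1)$ with $\left\vert{\mathcal{O}}\right\vert=1$ and deterministic, so the permanence integrand $\big(\mathbb{E}\int_\mathcal{O} u^2\wedge 1\big)^{1/2}$ reduces to $u\wedge 1$: a density decaying like ${\rm e}^{-t}$ is extinct in the mean, while one oscillating around a positive level is permanent in the mean. A small numerical sketch (the example densities are ours, purely for illustration):

```python
# Time averages (1/t) * int_0^t f(s) ds via the trapezoidal rule, used to
# illustrate "extinct in the mean" vs. "permanent in the mean" for simple
# deterministic, spatially constant toy densities.

import math

def time_average(f, t, n=20000):
    h = t / n
    total = 0.5 * (f(0.0) + f(t))
    for j in range(1, n):
        total += f(j * h)
    return total * h / t
```

For $u(t)={\rm e}^{-t}$ the average tends to 0 like $1/t$, whereas for $u(t)=1+\sin t$ the average of $u\wedge 1$ stays above a positive constant.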

Remark 2.2. It is well known that it is fairly difficult to confirm the existence of strong solutions for stochastic partial differential equations (or even weak solutions); see [Reference Da Prato and Zabczyk12, Section 6.1]. As an alternative, we shall use the notion of mild solutions. Hence, the convergence in our situation is in the weak sense. Note, however, that in the deterministic case, [Reference Allen, Bolker, Lou and Nevai2, Reference Ducrot and Giletti19, Reference Peng32, Reference Peng and Liu33, Reference Zhang, Wang and Zhao40] obtained strong solutions of deterministic reaction–diffusion epidemic models and the convergence was taken in $L^{\infty}$, E, or Sobolev spaces. In what follows, for convenience, we often suppress the phrase ‘in the mean’ when we refer to extinction and permanence, because we are mainly working with mild solutions.

3. Existence and uniqueness of a positive mild solution

In this section we shall prove the existence and uniqueness of a positive mild solution of the system as well as its continuous dependence on the initial conditions. In what follows, without loss of generality, we assume $\left\vert{\mathcal{O}}\right\vert=1$, where $\left\vert{\mathcal{O}}\right\vert$ is the volume of the bounded domain $\mathcal{O}$ in $\mathbb{R}^l$, and, for simplicity, that the initial values are non-random.

Theorem 3.1. For any initial data $0\leq S_0,I_0 \in E$ , there exists a unique positive mild solution $(S(t),I(t))$ of (2.4) that belongs to $L^{\;p}(\Omega;\, C([0,T],C(\mathcal{\overline{O}},\mathbb{R}^2)))$ for any $T>0$ and $p \geq 1$ . Moreover, this solution depends continuously on the initial data.

3.1. Proof of Theorem 3.1

In this proof, the letter c denotes a positive constant whose value may change from one occurrence to another. We write the dependence of the constant on parameters explicitly when it is essential. First, we rewrite the coefficients by defining f and $f^*$ as follows:

\begin{equation*}f(x,s,i)=\bigg(\Lambda(x)-\mu_1(x)s-\dfrac{\alpha(x) si}{s+i},-\mu_2(x)i+\dfrac{\alpha(x) si}{s+i}\bigg),\qquad x\in\mathcal{O},\ (s,i)\in \mathbb{R}^2,\end{equation*}

and

\begin{equation*}f^*(x,s,i)=f(x,s\vee 0,i\vee 0).\end{equation*}

Writing $z=(s,i)$ and recalling the convention that the term $\dfrac {si}{s+i}$ equals 0 whenever $s=0$ or $i=0$, it is easy to see that $f^*(x,\cdot,\cdot):\,\mathbb{R}^2 \mapsto\mathbb{R}^2$ is Lipschitz continuous, uniformly in $x\in\mathcal{O}$, so that the composition operator $F^*(z)$ associated with $f^*$, i.e.

\begin{equation*}F^*(z)(x)=(F^*_1(z)(x),F_{2}^*(z)(x))\,:\!=f^*(x,z(x)),\qquad x\in \mathcal{O},\end{equation*}

is Lipschitz continuous in both $L^2(\mathcal{O}, \mathbb{R}^2)$ and $C(\mathcal{\overline{O}},\mathbb{R}^2)$ . Now, we consider the following problem:

(3.1) \begin{equation}{\rm d} Z^*(t)=\big[AZ^*(t)+F^*(Z^*(t))\big] \, {\rm d} t+(Z^*(t)\vee 0) \, {\rm d} W(t), \quad Z^*(0)=Z_0=(S_0,I_0),\end{equation}

where $Z^*(t)=(S^*(t),I^*(t))$ and $Z^*(t)\vee 0$ is defined by

\begin{equation*}(Z^*(t)\vee 0)(x)=(S^*(t,x)\vee 0,I^*(t,x)\vee 0).\end{equation*}
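The truncated nonlinearity is easy to write out explicitly. The sketch below uses spatially constant, illustrative coefficients (the paper's $\Lambda,\mu_1,\mu_2,\alpha$ may depend on x); on the non-negative quadrant each partial derivative of $si/(s+i)$ is bounded by 1 and the truncation $s\vee 0$, $i\vee 0$ is 1-Lipschitz, which is the source of the global Lipschitz property claimed above.

```python
# The truncated reaction term f*(s, i) = f(s v 0, i v 0), with the
# convention s*i/(s+i) = 0 when s = 0 or i = 0.  Coefficients are
# spatially constant and illustrative.

def incidence(s, i):
    return s * i / (s + i) if s > 0 and i > 0 else 0.0

def f_star(s, i, Lam=1.0, mu1=0.2, mu2=0.5, alpha=0.8):
    s, i = max(s, 0.0), max(i, 0.0)
    inc = alpha * incidence(s, i)
    return (Lam - mu1 * s - inc, -mu2 * i + inc)
```

A spot check of the Lipschitz bound over a few point pairs (with these coefficients the constant works out to be at most about 2) is in the test below.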

For any

\begin{equation*}u(t,x)=(u_1(t,x),u_2(t,x))\in L^{\;p}(\Omega;\, C([0,T],C(\mathcal{\overline{O}},\mathbb{R}^2))),\end{equation*}

consider the mapping

\begin{equation*}\gamma(u)(t)\,:\!={\rm e}^{tA}Z_0+\int_0^t {\rm e}^{(t-s)A}F^*(u(s)) \, {\rm d} s+\varphi(u)(t),\end{equation*}

where

\begin{align*}\varphi(u)(t)&\,:\!=\int_0^t {\rm e}^{(t-s)A}(u(s)\vee 0) \, {\rm d} W(s)\\&\,:\!=\bigg(\int_0^t {\rm e}^{(t-s)A_1}(u_1(s)\vee 0) \, {\rm d} W_1(s),\int_0^t {\rm e}^{(t-s)A_2}(u_2(s)\vee 0) \, {\rm d} W_2(s)\bigg).\end{align*}

We will prove that $\gamma$ is a contraction mapping in $L^{\;p}(\Omega;\, C([0,T_0],C(\mathcal{\overline{O}},\mathbb{R}^2)))$ for some $T_0>0$ and any $p\geq p_0$ for some $p_0$ .
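The mechanism behind this fixed-point strategy can be seen in a scalar caricature (noise omitted, so this is only a heuristic for the deterministic part): with a stable "semigroup" ${\rm e}^{ta}$, $a<0$, and a globally Lipschitz nonlinearity F, the analogue of $\gamma$ contracts on a short interval and Picard iterates converge. All numerical choices below are illustrative.

```python
# Scalar caricature of the map gamma (noise omitted):
#   u_{n+1}(t) = e^{t a} z0 + int_0^t e^{(t-s) a} F(u_n(s)) ds,
# discretized on a uniform grid with a left Riemann sum.

import math

def gamma_map(u, z0, a, F, T, n):
    # u: list of n+1 values of the current iterate at times j*T/n.
    h = T / n
    out = []
    for j in range(j * 0 + n + 1) if False else range(n + 1):
        pass
    out = []
    for j in range(n + 1):
        t = j * h
        integral = sum(math.exp((t - k * h) * a) * F(u[k]) * h
                       for k in range(j))
        out.append(math.exp(t * a) * z0 + integral)
    return out

a, z0, T, n = -1.0, 1.0, 0.5, 200
F = lambda z: math.sin(z)          # globally Lipschitz, |F'| <= 1
u = [z0] * (n + 1)
for _ in range(30):
    u = gamma_map(u, z0, a, F, T, n)
```

Since the Lipschitz constant of F times the interval length T is below 1, successive iterates contract, mirroring the choice of a small $T_0$ in the proof below.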

Lemma 3.1. There exists $p_0$ such that, for any $p\geq p_0$ ,

  1. the mapping $\varphi$ maps $L^{\;p}(\Omega;\, C([0,t],C(\mathcal{\overline{O}},\mathbb{R}^2)))$ into itself,

  2. for any $u=(u_1,u_2),v=(v_1,v_2)\in L^{\;p}(\Omega;\, C([0,t],C(\mathcal{\overline{O}},\mathbb{R}^2))),$

    (3.2) \begin{equation}\left\vert{\varphi (u)-\varphi (v)}\right\vert_{L_{t,p}}\leq c_p(t)\left\vert{u-v}\right\vert_{L_{t,p}}\!,\end{equation}
    where $c_p(t)$ is some constant satisfying $c_p(t)\downarrow 0$ as $t\downarrow 0$ .

Proof. Let $p_0$ be sufficiently large to ensure that, for any $p\geq p_0$ , we can choose simultaneously $\beta,\varepsilon>0$ such that

\begin{equation*}\frac 1p<\beta<\frac 12\quad\text{and}\quad \frac lp<\varepsilon<2\bigg(\beta-\frac 1p\bigg).\end{equation*}
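For instance, taking $\beta$ at the midpoint of $(1/p,\,1/2)$ gives $2(\beta-1/p)=1/2-1/p$, so the window for $\varepsilon$ is non-empty precisely when $l/p<1/2-1/p$, i.e. $p>2(l+1)$. This is one concrete (and by no means unique) way to see how large $p_0$ must be; a sketch:

```python
# Feasibility of the constraints 1/p < beta < 1/2 and
# l/p < eps < 2*(beta - 1/p), with beta taken at the midpoint of (1/p, 1/2).
# With this particular beta, the eps-window is non-empty iff p > 2*(l + 1).

def admissible_choice(p, l):
    beta = 0.5 * (1.0 / p + 0.5)
    lo, hi = l / p, 2.0 * (beta - 1.0 / p)
    if lo >= hi:
        return None
    eps = 0.5 * (lo + hi)          # midpoint of the eps-window
    return beta, eps
```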

Now, for any fixed $p\geq p_0$ , let $\beta,\varepsilon$ be chosen as above. By using a factorization argument (see, e.g., [Reference Da Prato and Zabczyk12, Theorem 8.3]), we have

\begin{equation*}\varphi(u)(t)-\varphi(v)(t)=\dfrac{\sin \pi \beta}{\pi}\int_0^t (t-s)^{\beta-1}{\rm e}^{(t-s)A}Y_\beta(u,v)(s) \, {\rm d} s,\end{equation*}

where

\begin{equation*}Y_\beta(u,v)(s)=\int_0^s (s-r)^{-\beta}{\rm e}^{(s-r)A}(u(r)\vee 0-v(r)\vee 0) \, {\rm d} W(r).\end{equation*}

If

\begin{equation*}\int_0^t \big\vert{Y_\beta(u,v)(s)}\big\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}^p{\rm d} s<\infty \hbox{ a.s.},\end{equation*}

then it is easily seen from the properties in (2.3) of the semigroup ${\rm e}^{tA}$ and Hölder’s inequality that

(3.3) \begin{equation}\begin{aligned}&\left\vert{\varphi(u)(t)-\varphi(v)(t)}\right\vert_{\varepsilon,p}\\&\quad \leq c_\beta\int_0^t (t-s)^{\beta-1}((t-s)\wedge 1)^{-\varepsilon/2}\left\vert{Y_\beta(u,v)(s)}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)} \, {\rm d} s\\&\quad \leq c_{\beta,p}(t)\bigg(\int_0^t ((t-s)\wedge 1)^{\frac p{p-1}(\beta-\varepsilon/2-1)} \, {\rm d} s\bigg)^{\frac{p-1}p}\bigg(\int_0^t \left\vert{Y_\beta(u,v)(s)}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}^p \, {\rm d} s\bigg)^{\frac 1p}\\&\quad\leq c_{\beta,p}(t)\bigg(\int_0^t \left\vert{Y_\beta(u,v)(s)}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}^p \, {\rm d} s\bigg)^{\frac 1p} \quad \text{a.s.},\end{aligned}\end{equation}

where $c_{\beta,p}(t)$ is some positive constant satisfying $c_{\beta,p}(t)\downarrow 0$ as $t\downarrow 0$. Rewrite $Y_\beta(u,v)(s)=(Y_{1\beta}(u,v)(s),Y_{2\beta}(u,v)(s))$, where

\begin{equation*}Y_{i\beta}(u,v)(s)\,:\!=\int_0^s (s-r)^{-\beta}{\rm e}^{(s-r)A_i}(u_i(r)\vee 0-v_i(r)\vee 0) \, {\rm d} W_i(r),\qquad i=1,2.\end{equation*}

Applying the Burkholder inequality, we obtain that, for all $s\in [0,t]$ and almost every $x\in\mathcal{O}$,

\begin{equation*}\mathbb{E} \left\vert{Y_{i\beta}(u,v)(s,x)}\right\vert^p\leq c_p\mathbb{E}\bigg[\int_0^s(s-r)^{-2\beta}\sum_{k=1}^{\infty}a_{k,i}\left\vert{M_i(s,r,k,x)}\right\vert^2 \, {\rm d} r\bigg]^{\frac p2},\end{equation*}

where

\begin{equation*}M_i(s,r,k)={\rm e}^{(s-r)A_i}(u_i(r)\vee 0-v_i(r)\vee 0)e_k.\end{equation*}

In the above, we used the notations

\begin{equation*}Y_{i\beta}(u,v)(s,x)\,:\!=Y_{i\beta}(u,v)(s)(x),\quad M_i(s,r,k,x)\,:\!=M_i(s,r,k)(x),\qquad i=1,2.\end{equation*}

As a consequence,

(3.4) \begin{equation}\begin{aligned}\mathbb{E} \int_0^t &\left\vert{Y_\beta(u,v)(s)}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}^p \, {\rm d} s\\&\leq c_p(t)\mathbb{E}\int_0^t\int_\mathcal{O} \Big(\left\vert{Y_{1\beta}(u,v)(s,x)}\right\vert^p+\left\vert{Y_{2\beta}(u,v)(s,x)}\right\vert^p\Big) \, {\rm d} x \, {\rm d} s\\&\leq c_p(t) \int_0^t\mathbb{E}\bigg(\int_0^s (s-r)^{-2\beta}(a_1+a_2)\sup_{k\in\mathbb{N}}\left\vert{M(s,r,k)}\right\vert_{L^{\infty}(\mathcal{O},\mathbb{R}^2)}^2 \, {\rm d} r\bigg)^\frac p2 \, {\rm d} s,\end{aligned}\end{equation}

where $M(s,r,k)\,:\!=(M_1(s,r,k),M_2(s,r,k))$ and $a_1,a_2$ are defined in (2.1). Moreover, from the uniform boundedness of $\{e_k\}_{k=1}^\infty$ and (2.2), we have

(3.5) \begin{equation}\sup_{k\in\mathbb{N}}\left\vert{M(s,r,k)}\right\vert_{L^\infty(\mathcal{O},\mathbb{R}^2)}\leq c\left\vert{u(r)-v(r)}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)} \end{equation}

for some constant c independent of s, r, u, and v. Combining (3.4) and (3.5) implies that

(3.6) \begin{equation}\begin{aligned}\mathbb{E} \int_0^t &\left\vert{Y_\beta(u,v)(s)}\right\vert_{L^{\;p}(\mathcal{O},\mathbb{R}^2)}^p \, {\rm d} s\\&\leq c_p(t)\int_0^t\mathbb{E}\sup_{r\in [0,s]}\left\vert{u(r)-v(r)}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p\bigg(\int_0^s (s-r)^{-2\beta} \, {\rm d} r\bigg)^\frac p2 \, {\rm d} s\\&\leq c_{\beta,p}(t)\int_0^t\mathbb{E}\sup_{r\in [0,s]}\left\vert{u(r)-v(r)}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p \, {\rm d} s\leq c_{\beta,p}(t)\left\vert{u-v}\right\vert^p_{L_{t,p}}<\infty,\end{aligned}\end{equation}

where $c_{\beta,p}(t)$ is some positive constant satisfying $c_{\beta,p}(t)\downarrow 0$ as $t\downarrow 0$ . Therefore, the inequality (3.3) holds and, as a consequence, $\varphi(u)(t)-\varphi(v)(t)\in W^{\varepsilon,p}(\mathcal{O},\mathbb{R}^2)$ . Since $\varepsilon >l/p$ , the Sobolev embedding theorem implies that $\varphi(u)(t)-\varphi(v)(t)\in C(\mathcal{\overline{O}},\mathbb{R}^2).$ Finally, (3.3) and (3.6) imply that

\begin{equation*}\left\vert{\varphi(u)-\varphi(v)}\right\vert_{L_{t,p}}\leq c_p(t)\left\vert{u-v}\right\vert_{L_{t,p}}\end{equation*}

for some constant $c_p(t)$ satisfying $c_p(t)\downarrow 0$ as $t\downarrow 0.$ The lemma is proved.

Therefore, for $p\geq p_0$ , with sufficiently large $p_0$ , $\gamma$ maps $L^{\;p}(\Omega;\, C([0,t],C(\mathcal{\overline{O}},\mathbb{R}^2)))$ into itself. Moreover, by using (2.2) and the Lipschitz continuity of $F^*$ , we have

(3.7) \begin{equation}\begin{aligned}\int_0^t&\left\vert{{\rm e}^{(t-s)A}\big[F^*(u(s))-F^*(v(s))\big]}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p \, {\rm d} s\leq c\int_0^t \left\vert{(u(s)-v(s))}\right\vert^p_{C(\mathcal{\overline{O}},\mathbb{R}^2)} \, {\rm d} s\\&\leq c\int_0^t \sup_{r\in[0,s]}\left\vert{(u(r)-v(r))}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p \, {\rm d} s\leq ct\sup_{s\in [0,t]}\left\vert{u(s)-v(s)}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p.\end{aligned}\end{equation}

Hence, (3.2) and (3.7) imply that

\begin{equation*}\left\vert{\gamma (u)-\gamma (v)}\right\vert_{L_{t,p}}\leq c_p(t)\left\vert{u-v}\right\vert_{L_{t,p}}\!,\end{equation*}

where $c_p(t)$ is some constant depending on p and t satisfying $c_p(t)\downarrow 0$ as $t\downarrow 0$. Therefore, for some sufficiently small $T_0$, $\gamma$ is a contraction mapping in $L^{\;p}(\Omega;\, C([0,T_0],C(\mathcal{\overline{O}},\mathbb{R}^2)))$. By a fixed point argument we can conclude that (3.1) admits a unique mild solution in $L^{\;p}(\Omega;\, C([0,T_0],C(\mathcal{\overline{O}},\mathbb{R}^2)))$. Thus, by repeating the above argument on each finite time interval $[kT_0,(k+1)T_0]$, for any $T>0$ and $p\geq p_0$, (3.1) admits a unique mild solution $Z^*(t)=(S^*(t),I^*(t))$ in $L^{\;p}(\Omega;\, C([0,T],C(\mathcal{\overline{O}},\mathbb{R}^2)))$. We proceed to prove the positivity of $S^*(t),I^*(t)$.

Lemma 3.2. Let $(S^*(t),I^*(t))$ be the unique mild solution of (3.1). Then, for all $t\in [0,T]$ , $S^*(t),I^*(t)\geq 0$ a.s.

Proof. Equivalently, $(S^*(t),I^*(t))$ is the mild solution of the equation

(3.8) \begin{equation}\begin{cases}{\rm d} S^*(t)=\big[A_1 S^*(t) + F_{1}(S^*(t)\vee 0,I^*(t)\vee 0)\big]{\rm d} t + (S^*(t)\vee 0){\rm d} W_1(t),\\{\rm d} I^*(t)=\big[A_2 I^*(t) + F_{2}(S^*(t)\vee 0,I^*(t)\vee 0)\big]{\rm d} t+(I^*(t)\vee 0){\rm d} W_2(t),\\S^*(0)=S_0,I^*(0)=I_0.\end{cases}\end{equation}

For $i=1,2$ , let $\lambda_i\in \rho(A_i)$ , where $\rho(A_i)$ is the resolvent set of $A_i$ , and set $R_i(\lambda_i)\,:\!=\lambda_i R_i(\lambda_i,A_i)$ , with $R_i(\lambda_i,A_i)$ being the resolvent of $A_i$ . For each small $\varepsilon>0$ and $\lambda=(\lambda_1,\lambda_2)\in \rho(A_1)\times\rho(A_2)$ , by [Reference Liu27, Proposition 1.3.6], there exists a unique strong solution $S_{\lambda,\varepsilon}(t,x),I_{\lambda,\varepsilon}(t,x)$ of the equation

\begin{equation*} \begin{cases}\displaystyle{\rm d} S_{\lambda,\varepsilon}(t)=\big[A_1S_{\lambda,\varepsilon}(t)+R_1(\lambda_1)F_{1}(\varepsilon\Phi(\varepsilon^{-1} S_{\lambda,\varepsilon}(t)),\varepsilon\Phi(\varepsilon^{-1} I_{\lambda,\varepsilon}(t)))\big]{\rm d} t\\\quad\quad\quad\quad\quad\quad+R_1(\lambda_1)\varepsilon\Phi(\varepsilon^{-1} S_{\lambda,\varepsilon}(t)){\rm d} W_1(t),\\\displaystyle dI_{\lambda,\varepsilon}(t)=\big[A_2 I_{\lambda,\varepsilon}(t)+ R_2(\lambda_2)F_{2}(\varepsilon\Phi(\varepsilon^{-1} S_{\lambda,\varepsilon}(t)),\varepsilon\Phi(\varepsilon^{-1} I_{\lambda,\varepsilon}(t)))\big]{\rm d} t\\ \quad\quad\quad\quad\quad\quad+R_2(\lambda_2)\varepsilon\Phi(\varepsilon^{-1} I_{\lambda,\varepsilon}(t)){\rm d} W_2(t),\\S_{\lambda,\varepsilon}(0)=R_1(\lambda_1)S_0,\quad I_{\lambda,\varepsilon}(0)=R_2(\lambda_2)I_0,\end{cases}\end{equation*}

where

\begin{equation*}\Phi(\xi)=\begin{cases}0 \qquad\text{if}\;\xi\leq 0,\\3\xi^5-8\xi^4+6\xi^3 \qquad\text{if}\;0<\xi < 1,\\\xi \qquad\text{if}\;\xi\geq1,\end{cases}\end{equation*}

satisfying

\begin{equation*}\begin{cases}\Phi\in C^2(\mathbb{R}),\\\varepsilon\Phi(\varepsilon^{-1}\xi)\to \xi\vee 0\quad \text{as}\;\varepsilon\to 0.\end{cases}\end{equation*}
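These properties can be verified directly. Writing $\phi(\xi)=3\xi^5-8\xi^4+6\xi^3$ for the middle piece, we have

\begin{equation*}\phi'(\xi)=15\xi^4-32\xi^3+18\xi^2,\qquad \phi''(\xi)=60\xi^3-96\xi^2+36\xi,\end{equation*}

so that $\phi(0)=\phi'(0)=\phi''(0)=0$ , while $\phi(1)=\phi'(1)=1$ and $\phi''(1)=0$ ; that is, the three pieces of $\Phi$ match up to second order at $\xi=0$ and $\xi=1$ , and $\Phi\in C^2(\mathbb{R})$ . Moreover, since $0\leq \phi\leq 1$ on [0,1], the function $\varepsilon\Phi(\varepsilon^{-1}\xi)$ vanishes for $\xi\leq 0$ , equals $\xi$ for $\xi\geq \varepsilon$ , and lies in $[0,\varepsilon]$ for $0<\xi<\varepsilon$ , which gives the convergence $\varepsilon\Phi(\varepsilon^{-1}\xi)\to\xi\vee 0$ as $\varepsilon\to 0$ .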

Combined with the convergence property in [Reference Liu27, Proposition 1.3.6], we obtain that $(S_{\lambda(k),\varepsilon}(t)$ , $I_{\lambda(k),\varepsilon}(t)) \to (S^{*}(t),I^{*}(t))$ in $L^{\;p}(\Omega;\, C([0,T],L^2(\mathcal{O},\mathbb{R}^2)))$ as $k\to\infty$ and $\varepsilon\to 0$ , for some sequence $\{\lambda(k)\}_{k=1}^{\infty}\subset \rho(A_1)\times\rho(A_2)$ .

Now, let

\begin{equation*}g(\xi)=\begin{cases}\xi^2-\dfrac 16\quad\quad\;\;\;\ \text{if}\;\xi\leq -1,\\-\dfrac {\xi^4}2-\dfrac{4\xi^3}{3}\quad\text{if}\;-1<\xi < 0,\\0 \quad\quad\quad\quad\quad\;\ \text{if}\;\xi\geq 0.\end{cases}\end{equation*}
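The relevant properties of g follow from a piecewise computation:

\begin{equation*}g'(\xi)=\begin{cases}2\xi &\text{if}\;\xi\leq -1,\\-2\xi^2(\xi+2) &\text{if}\;-1<\xi<0,\\0 &\text{if}\;\xi\geq 0,\end{cases}\qquad g''(\xi)=\begin{cases}2 &\text{if}\;\xi\leq -1,\\-2\xi(3\xi+4) &\text{if}\;-1<\xi<0,\\0 &\text{if}\;\xi\geq 0,\end{cases}\end{equation*}

and the values $g(-1)=\frac 56$ , $g'(-1)=-2$ , $g''(-1)=2$ , and $g(0)=g'(0)=g''(0)=0$ match at the junction points, so $g\in C^2(\mathbb{R})$ . On $(-1,0)$ we have $\xi+2>0$ and $3\xi+4>0$ , so $g'\leq 0$ and $g''\geq 0$ on all of $\mathbb{R}$ ; moreover, $g(\xi)=-\xi^3\big(\frac{\xi}2+\frac 43\big)>0$ on $(-1,0)$ and $g(\xi)=\xi^2-\frac 16\geq \frac 56$ for $\xi\leq -1$ , so $g>0$ on $(\!-\infty,0)$ .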

Then $g'(\xi)\leq 0$ and $g''(\xi)\geq 0$ for all $\xi$ . We now compute

\begin{equation*}d_t\bigg(\int_\mathcal{O} g(I_{\lambda,\varepsilon}(t,x))\,{\rm d} x\bigg).\end{equation*}

Since $g'(\xi)\Phi(\xi)=g''(\xi)\Phi(\xi)=0\;\text{for all}\ \xi$ , by Itô’s Lemma [Reference Curtain and Falez10, Theorem 3.8], we get

\begin{equation*}\begin{aligned}\int_\mathcal{O} g(I_{\lambda,\varepsilon}(t,x))\,{\rm d} x&=k_2\int_0^t\int_\mathcal{O} g'(I_{\lambda,\varepsilon}(s,x))\Delta I_{\lambda,\varepsilon}(s,x)\,{\rm d} x\,{\rm d} s\\&=-k_2\int_0^t \int_\mathcal{O} g''(I_{\lambda,\varepsilon}(s,x))\left\vert{\nabla I_{\lambda,\varepsilon}(s,x)}\right\vert^2\,{\rm d} x\,{\rm d} s\\&\leq 0.\end{aligned}\end{equation*}

Since $g(\xi)>0$ for all $\xi<0$ , we conclude that, for all $\lambda \in \rho(A_1)\times \rho(A_2)$ , $\varepsilon> 0$ , $ I_{\lambda,\varepsilon}(t,x)\geq 0$ for all $t\in [0,T]$ , almost everywhere in $\mathcal{O}$ . Similarly, we have

\begin{equation*}\begin{aligned}\int_\mathcal{O} g(S_{\lambda,\varepsilon}(t,x))\,{\rm d} x&=\int_0^t\int_\mathcal{O} g'(S_{\lambda,\varepsilon}(s,x))(k_1\Delta S_{\lambda,\varepsilon}(s,x)+(R_1(\lambda_1)\Lambda)(x))\,{\rm d} x\,{\rm d} s\\&=-k_1\int_0^t \int_\mathcal{O} g''(S_{\lambda,\varepsilon}(s,x))\left\vert{\nabla S_{\lambda,\varepsilon}(s,x)}\right\vert^2\,{\rm d} x\,{\rm d} s\\&\quad+\int_0^t\int_\mathcal{O} g'(S_{\lambda,\varepsilon}(s,x))(R_1(\lambda_1)\Lambda)(x)\,{\rm d} x\,{\rm d} s\\&\leq 0,\end{aligned}\end{equation*}

where the last inequality above follows from the fact that

\begin{equation*}R_1(\lambda_1,A_1)=\int_0^\infty{\rm e}^{-\lambda_1 t}{\rm e}^{tA_1}\,{\rm d} t\end{equation*}

preserves positivity. Again, since $g(\xi)>0$ for all $\xi<0$ , we obtain the positivity of $S_{\lambda,\varepsilon}(t,x)$ . Hence, $S^{*}(t,x),I^{*}(t,x)\geq 0$ almost everywhere in $\mathcal{O}$ for all $t\in [0,T]$ , a.s.

Returning to the proof of the theorem, since $(S^*(t),I^*(t))$ is a unique mild solution of (3.8) and is positive, it is a mild solution of (2.4). Therefore, (2.4) admits a unique positive mild solution $(S(t),I(t))$ .

Now we prove the second part. For convenience, we use subscripts to indicate the dependence of the solution on the initial value. Let $Z_{z_0}(t),Z_{z^{\prime}_0}(t)$ be the positive mild solutions of (2.6) with the initial conditions $Z(0)=z_0$ and $Z(0)=z^{\prime}_0$ , respectively. That means,

\begin{equation*}Z_{z_0}(t)={\rm e}^{tA}z_0+\int_0^t {\rm e}^{(t-s)A}F^*(Z_{z_0}(s))\,{\rm d} s+\int_0^t {\rm e}^{(t-s)A} Z_{z_0}(s)\,{\rm d} W(s)\end{equation*}

and

\begin{equation*}Z_{z^{\prime}_0}(t)={\rm e}^{tA}z^{\prime}_0+\int_0^t {\rm e}^{(t-s)A}F^*(Z_{z^{\prime}_0}(s))\,{\rm d} s+\int_0^t {\rm e}^{(t-s)A}Z_{z^{\prime}_0}(s){\rm d} W(s).\end{equation*}

This implies that

\begin{equation*}\begin{aligned}Z_{z_0}(t)-Z_{z^{\prime}_0}(t)&={\rm e}^{tA}(z_0-z^{\prime}_0)+\int_0^t {\rm e}^{(t-s)A}(F^*(Z_{z_0}(s))-F^*(Z_{z^{\prime}_0}(s)))\,{\rm d} s\\&\quad+\int_0^t {\rm e}^{(t-s)A}(Z_{z_0}(s)-Z_{z^{\prime}_0}(s))\,{\rm d} W(s).\end{aligned}\end{equation*}

Since (3.3) and (3.6) hold, we can obtain that

(3.9) \begin{equation}\begin{aligned}\mathbb{E} \sup_{s\in[0,t]}&\left\vert{\int_0^s {\rm e}^{(s-r)A}(Z_{z_0}(r)-Z_{z^{\prime}_0}(r))\,{\rm d} W(r)}\right\vert^p_{C(\mathcal{\overline{O}},\mathbb{R}^2)}\\&\leq c_p(t)\int_0^t \mathbb{E}\sup_{r\in [0,s]}\left\vert{Z_{z_0}(r)-Z_{z^{\prime}_0}(r)}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p{\rm d} s\\&\leq c_p(t)\int_0^t \left\vert{Z_{z_0}-Z_{z^{\prime}_0}}\right\vert_{L_{s,p}}^p{\rm d} s .\end{aligned}\end{equation}

Therefore, by virtue of (3.7) and (3.9), we obtain

\begin{equation*}\left\vert{Z_{z_0}-Z_{z^{\prime}_0}}\right\vert_{L_{t,p}}^p\leq c_p\left\vert{z_0-z^{\prime}_0}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p+c_p(t)\int_0^t \left\vert{Z_{z_0}-Z_{z^{\prime}_0}}\right\vert_{L_{s,p}}^p{\rm d} s.\end{equation*}

Hence, Gronwall’s inequality yields

\begin{equation*}\left\vert{Z_{z_0}-Z_{z^{\prime}_0}}\right\vert_{L_{T,p}}^p\leq c_p(T)\left\vert{z_0-z^{\prime}_0}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p.\end{equation*}
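Here Gronwall’s inequality is used in its integral form: if a nonnegative function f satisfies $f(t)\leq a+c\int_0^t f(s)\,{\rm d} s$ on [0,T], then $f(t)\leq a\,{\rm e}^{ct}$ on [0,T]. In the present setting it is applied with

\begin{equation*}f(t)\,:\!=\left\vert{Z_{z_0}-Z_{z^{\prime}_0}}\right\vert_{L_{t,p}}^p,\qquad a=c_p\left\vert{z_0-z^{\prime}_0}\right\vert_{C(\mathcal{\overline{O}},\mathbb{R}^2)}^p,\qquad c=\sup_{s\in[0,T]}c_p(s),\end{equation*}

which gives the stated bound with a constant of the form $c_p(T)=c_p\,{\rm e}^{cT}$ .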

Therefore, the continuous dependence of the solution on the initial values is proved.

4. Long-time behavior

This section investigates the properties of the positive mild solution $(S(t),I(t))$ of system (2.4) as $t\to\infty$ . In particular, we provide sufficient conditions for permanence and extinction. For each function $u\in E$ , denote

\begin{equation*}u_*=\inf_{x\in\mathcal{\overline{O}}}u(x).\end{equation*}

Define the number

\begin{equation*}\widehat R=\int_\mathcal{O}\alpha(x)\,{\rm d} x-\int_\mathcal{O}\mu_2(x)\,{\rm d} x-\dfrac {a_2}2.\end{equation*}

Theorem 4.1. If $\Lambda_*>0$ and $\widehat R>0$ , then the infected class is permanent in the sense that, for any initial values $0\leq S_0,I_0\in E$ satisfying

\begin{equation*}\int_\mathcal{O} -\ln I_0(x)\,{\rm d} x<\infty,\end{equation*}

we have

\begin{equation*}\liminf_{t\to\infty} \dfrac1t\int_0^t\left(\mathbb{E}\int_\mathcal{O} (I^2(s,x)\wedge 1) \,{\rm d} x\right)^{\frac12}{\rm d} s\geq R_I \end{equation*}

for some $R_I>0$ independent of initial values.

Proof. To obtain the long-time properties of $(S(t),I(t))$ , one of the tools we use is Itô’s formula. Unfortunately, in general Itô’s formula is not valid for mild solutions. Hence, our idea is to approximate the solution by a sequence of strong solutions of systems driven by finite-dimensional noise. First, we assume that $S_0,I_0\in D(A_i^E)$ , where $D(A_i^E)$ is the domain of $A_i^E$ , the part of $A_i$ in E. For each fixed $n\in \mathbb{N}$ , let $\overline S_n(t,x),\overline I_{n}(t,x)$ be the strong solution (see [Reference Da Prato and Zabczyk12] for more details about strong, weak, and mild solutions) of the following equations:

(4.1) \begin{equation}\begin{cases}{\rm d} \overline S_n(t,x)=\bigg[A_1\overline S_n(t,x)+\Lambda(x)-\mu_1(x)\overline S_n(t,x)-\dfrac{\alpha(x) \overline S_n(t,x)\overline I_n(t,x)}{ \overline S_n(t,x)+\overline I_n(t,x)}\bigg]{\rm d} t \\ \qquad\qquad\quad + \displaystyle \sum_{k=1}^n \sqrt{a_{k,1}}e_k(x)\overline S_n(t,x){\rm d} B_{k,1}(t),\\{\rm d} \overline I_n(t,x)=\bigg[A_2 \overline I_n(t,x)-\mu_2(x) \overline I_n(t,x) + \dfrac{\alpha(x) \overline S_n(t,x)\overline I_n(t,x)}{ \overline S_n(t,x)+\overline I_n(t,x)}\bigg]{\rm d} t\\\qquad\qquad\quad +\displaystyle \sum_{k=1}^n \sqrt{a_{k,2}}e_k(x)\overline I_n(t,x){\rm d} B_{k,2}(t),\\\overline S_n(0,x)=S_0(x),\quad \overline I_n(0,x)=I_0(x).\end{cases}\end{equation}

The existence and uniqueness of a strong solution of (4.1) follow from the results in [Reference Da Prato and Tubaro11] or [12, Section 7.4]. To see that the conditions in these references are satisfied, we note that the semigroups ${\rm e}^{tA_1}$ and ${\rm e}^{tA_2}$ (as well as their restrictions to E) are analytic (see [Reference Arendt3, Chapter 2]) and strongly continuous (see [Reference Da Prato and Zabczyk12, Appendix A.5.2]). Moreover, from the characterizations of fractional powers of elliptic operators in [Reference Yagi38, Chapter 16] or [Reference Da Prato and Zabczyk12, Appendix A], it is easy to confirm that the coefficients in (4.1) satisfy condition (e) in Hypothesis 2 of [Reference Da Prato and Tubaro11]. A detailed argument can also be found in [Reference Nguyen and Yin29, Reference Nguyen and Yin30].
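Although the analysis here is entirely theoretical, the finite-dimensional approximation (4.1) also suggests a natural simulation scheme. The following is a minimal, purely illustrative Euler–Maruyama sketch on the one-dimensional domain $\mathcal{O}=(0,1)$ with zero-flux boundary conditions; every concrete parameter value, the cosine noise basis, and the clipping to the positive cone are assumptions made only to obtain a runnable example, not part of the paper's construction.

```python
import numpy as np

# Illustrative (not from the paper): explicit Euler-Maruyama discretization of the
# truncated system (4.1) on O = (0, 1) with Neumann (zero-flux) boundary conditions.
# All parameter choices below (Lambda, mu_i, alpha, k_i, a_{k,i}, the cosine basis
# e_k) are hypothetical, chosen only so that the sketch runs stably.
rng = np.random.default_rng(0)

M, n_modes = 100, 5                  # spatial grid points, truncated noise modes n
x = (np.arange(M) + 0.5) / M         # cell centers of O = (0, 1)
dx, dt, T = 1.0 / M, 1e-4, 0.1
k1 = k2 = 0.01                       # diffusion coefficients
Lam = 1.0 + 0.2 * np.cos(np.pi * x)  # recruitment Lambda(x)
mu1, mu2, alpha = 0.3, 0.4, 0.5      # spatially constant rates (a simplification)
a = 0.05 / (1 + np.arange(n_modes)) ** 2            # noise intensities a_{k,i}
e = np.array([np.sqrt(2) * np.cos(k * np.pi * x) if k else np.ones(M)
              for k in range(n_modes)])             # Neumann eigenfunctions e_k

def lap(u):
    """Discrete Neumann Laplacian: ghost cells mirror the boundary values."""
    up = np.concatenate(([u[0]], u, [u[-1]]))
    return (up[2:] - 2 * up[1:-1] + up[:-2]) / dx ** 2

S = np.ones(M)
I = 0.1 * np.ones(M)
for _ in range(int(T / dt)):
    inc = alpha * S * I / (S + I + 1e-12)           # incidence alpha S I / (S + I)
    dB = rng.normal(0.0, np.sqrt(dt), (2, n_modes)) # Brownian increments B_{k,i}
    noise_S = S * (np.sqrt(a) @ (e * dB[0][:, None]))
    noise_I = I * (np.sqrt(a) @ (e * dB[1][:, None]))
    S = S + dt * (k1 * lap(S) + Lam - mu1 * S - inc) + noise_S
    I = I + dt * (k2 * lap(I) - mu2 * I + inc) + noise_I
    S, I = np.maximum(S, 0.0), np.maximum(I, 0.0)   # keep the scheme in the positive cone

total_I = (I * dx).sum()             # discrete analogue of \int_O I(t, x) dx
```

The explicit clipping step is a numerical device: the theorem guarantees non-negativity of the exact mild solution, but an Euler step can overshoot zero.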

In addition, from the continuous dependence on parameter $\xi$ of the fixed points of the family of uniform contraction mappings $T(\xi)$ , by a similar ‘parameter-dependent contraction mapping’ argument, it is easy to obtain (see [Reference Da Prato and Zabczyk12] or [Reference Nguyen and Yin30, Proposition 4.2]) that, for any fixed t,

\begin{equation*}\lim_{n\to\infty}\mathbb{E}\left\vert{S(t)-\overline S_n(t)}\right\vert_H^2= 0\end{equation*}

and

\begin{equation*}\lim_{n\to\infty}\mathbb{E}\left\vert{I(t)-\overline I_n(t)}\right\vert_H^2= 0.\end{equation*}

To proceed, we state and prove the following auxiliary lemmas.

Lemma 4.1. Let $\mu_*\,:\!=\inf_{x\in\mathcal{\overline{O}}}\min\{\mu_1(x),\mu_2(x)\}$ . If $\mu_*>0$ then

\begin{equation*}\mathbb{E}\int_\mathcal{O}(S(t,x)+ I(t,x))\,{\rm d} x\leq {\rm e}^{-\mu_* t} \int_\mathcal{O}(S_0(x)+I_0(x))\,{\rm d} x+ \dfrac{ |\Lambda|_E}{\mu_*}.\end{equation*}

Proof. In view of Itô’s formula ([Reference Curtain and Falez10, Theorem 3.8]), we can obtain

\begin{equation*}\begin{aligned}\mathbb{E}\, {\rm e}^{\mu_*t}\int_\mathcal{O}(\overline S_n(t,x)+ \overline I_n(t,x))\,{\rm d} x&\leq \int_\mathcal{O}(S_0(x)+I_0(x))\,{\rm d} x+\mathbb{E}\int_0^t {\rm e}^{\mu_*s}\int_\mathcal{O}\Lambda(x)\,{\rm d} x \,{\rm d} s\\&\leq \int_\mathcal{O}(S_0(x)+I_0(x))\,{\rm d} x+\dfrac{ |\Lambda|_E}{\mu_*}{\rm e}^{\mu_* t}.\end{aligned}\end{equation*}

Letting $n\to\infty$ , we obtain the desired result.

Now we are in a position to estimate $\mathbb{E}( \int_\mathcal{O} \frac 1{\overline S_n^p(t ,x)}\,{\rm d} x)$ by the following lemma.

Lemma 4.2. For any $p>0$ , if $\int_\mathcal{O}\frac 1{S_0^p(x)}\,{\rm d} x<\infty$ , there exists $\widetilde K_p>0$ , which is independent of n and the initial conditions, such that

\begin{equation*}\begin{aligned}\mathbb{E}&\int_\mathcal{O} \dfrac 1{\overline S_n^p(t ,x)}\,{\rm d} x\leq {\rm e}^{-t}\int_\mathcal{O}\frac 1{S_0^p(x)}\,{\rm d} x+\widetilde K_p.\end{aligned}\end{equation*}

Proof. For any $0<\varepsilon<\frac{p\Lambda_*}{2}$ , using Itô’s Lemma ([Reference Curtain and Falez10, Theorem 3.8]) and by direct calculations, we have

(4.2) \begin{equation}\begin{split}&\hspace*{-6pt} {\rm e}^{t }\!\!\int_\mathcal{O} \dfrac 1{(\overline S_n(t ,x)+\varepsilon)^p}\,{\rm d} x \\[6pt] &\hspace*{-6pt} \ =\int_\mathcal{O}\dfrac 1{(S_0(x)+\varepsilon)^p}\,{\rm d} x+\int_0^{t } {\rm e}^s\int_\mathcal{O} \dfrac 1{(\overline S_n(s,x)+\varepsilon)^p}\,{\rm d} x\,{\rm d} s +\int_0^{t } {\rm e}^s\int_\mathcal{O}\dfrac {-p}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\\[6pt]&\hspace*{-6pt}\qquad \times \bigg(k_1\Delta \overline S_n(s,x)+\Lambda(x)-\mu_1(x)\overline S_n(s,x)-\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\[6pt]&\hspace*{-6pt}\qquad+\dfrac 12 \int_0^{t } {\rm e}^s\sum_{k=1}^n\int_\mathcal{O} \dfrac{p(p+1)a_{k,1}e^2_k(x)\overline S_n^2(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+2}}\,{\rm d} x\,{\rm d} s\\[6pt]&\hspace*{-6pt}\qquad +\sum_{k=1}^n\int_0^{t } {\rm e}^s\bigg[\sqrt{a_{k,1}}\int_\mathcal{O} \dfrac{-pe_k(x)\overline S_n(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\,{\rm d} x\bigg]\,{\rm d} B_{k,1}(s)\\[6pt]&\hspace*{-6pt}\leq \int_\mathcal{O}\dfrac 1{(S_0(x)+\varepsilon)^p}\,{\rm d} x+\int_0^{t } {\rm e}^s\int_\mathcal{O}\dfrac{-pk_1\Delta\overline S_n(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\,{\rm d} x\,{\rm d} s\\[6pt]&\hspace*{-6pt}\qquad+\int_0^{t } {\rm e}^s\int_\mathcal{O} \dfrac{p}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\bigg(-\Lambda (x)+\frac{\varepsilon}p+\bigg(\left\vert{\mu_1}\right\vert_E+\left\vert{\alpha}\right\vert_E+\dfrac 1p+\dfrac{p+1}{2}a_1C_0^2\bigg)\\[6pt]&\hspace*{-6pt}\qquad \times \overline S_n(s,x)\bigg)\,{\rm d} x\,{\rm d} s+\sum_{k=1}^n\int_0^{t } {\rm e}^s\bigg[\sqrt{a_{k,1}}\int_\mathcal{O} \dfrac{-pe_k(x)\overline S_n(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\,{\rm d} x\bigg]\,{\rm d} B_{k,1}(s)\\[6pt]&\hspace*{-6pt}\leq \int_\mathcal{O}\dfrac 1{(S_0(x)+\varepsilon)^p}\,{\rm d} x+\int_0^{t } \dfrac{pK_p^{p+1}2^p}{\Lambda_*^p}{\rm e}^s\,{\rm d} s\\[6pt]&\hspace*{-6pt}\qquad 
+\sum_{k=1}^n\int_0^{t } {\rm e}^s\bigg[\sqrt{a_{k,1}}\int_\mathcal{O} \dfrac{-pe_k(x)\overline S_n(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\,{\rm d} x\bigg]\,{\rm d} B_{k,1}(s),\end{split}\end{equation}

where $K_p=\left\vert{\mu_1}\right\vert_E+\left\vert{\alpha}\right\vert_E+\frac 1p+\frac{p+1}{2}a_1C_0^2$ . In the above, we used the facts that

\begin{equation*}\int_\mathcal{O}\dfrac{-pk_1\Delta\overline S_n(s,x)}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\,{\rm d} x=-p(p+1)k_1\int_\mathcal{O}\dfrac{\left\vert{\nabla \overline S_n(s,x)}\right\vert^2}{(\overline S_n(s,x)+\varepsilon)^{p+2}}\,{\rm d} x\leq 0 \ \hbox{ a.s.}\end{equation*}

and

\begin{equation*}\begin{aligned}\int_\mathcal{O} &\dfrac{p}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\bigg(-\Lambda (x)+\dfrac{\varepsilon}p+\bigg(\left\vert{\mu_1}\right\vert_E+\left\vert{\alpha}\right\vert_E+\dfrac 1p+\dfrac{p+1}{2}a_1C_0^2\bigg)\overline S_n(s,x)\bigg)\,{\rm d} x\\&\leq\int_\mathcal{O}\dfrac{p}{(\overline S_n(s,x)+\varepsilon)^{p+1}}\bigg(-\dfrac{\Lambda_*}2+K_p\overline S_n(s,x)\bigg)\boldsymbol{1}_{\big\{\overline S_n(s,x)\geq \frac{\Lambda_*}{2K_p}\big\}}\,{\rm d} x\\&\leq \dfrac{pK_p^{p+1}2^p}{\Lambda_*^p}\ \text{a.s.\;}\end{aligned}\end{equation*}

Hence, (4.2) implies that, for all $t\geq 0$ and $n\in \mathbb{N}$ ,

(4.3) \begin{equation}\begin{aligned}\mathbb{E} \int_\mathcal{O} \dfrac 1{(\overline S_n(t ,x)+\varepsilon)^p}\,{\rm d} x\leq {\rm e}^{-t}\int_\mathcal{O}\dfrac 1{(S_0(x)+\varepsilon)^p}\,{\rm d} x+ {\rm e}^{-t }\int_0^{t }\dfrac{pK_p^{p+1}2^p}{\Lambda_*^p}{\rm e}^s\,{\rm d} s.\end{aligned}\end{equation}

Letting $\varepsilon\to 0$ , we have, from the monotone convergence theorem,

\begin{equation*} \begin{aligned}\mathbb{E} \int_\mathcal{O} \dfrac 1{\overline S_n^p(t ,x)}\,{\rm d} x\leq {\rm e}^{-t}\int_\mathcal{O}\frac 1{S_0^p(x)}\,{\rm d} x+ {\rm e}^{-t }\int_0^{t }\dfrac{pK_p^{p+1}2^p}{\Lambda_*^p}{\rm e}^s\,{\rm d} s.\end{aligned}\end{equation*}

The proof of the lemma is completed.

Noting that our initial conditions are not assumed to satisfy $\int_\mathcal{O} \frac 1{S^2_0(x)}\,{\rm d} x<\infty$ , we prove in the following lemma that, after some finite time, the reciprocals of the solutions belong to $L^2(\mathcal{O},\mathbb{R})$ .

Lemma 4.3. For any $n\in\mathbb{N}$ ,

\begin{equation*}\begin{aligned}\mathbb{E} &\int_\mathcal{O} \dfrac 1{\overline S_n^2(4 ,x)}\,{\rm d} x\leq \ell_0,\end{aligned}\end{equation*}

where $\ell_0$ depends only on the initial conditions (independent of n).

Proof. Using

\begin{equation*}\begin{aligned}\mathbb{E}\int_\mathcal{O} \overline S_n(t,x)\,{\rm d} x&=\int_\mathcal{O} S_0(x)\,{\rm d} x +\int_0^t \mathbb{E}\int_\mathcal{O} \bigg(k_1\Delta \overline S_n(s,x)+\Lambda(x)\\&\quad\quad\quad-\mu_1(x)\overline S_n(s,x)-\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\&\leq \int_\mathcal{O} S_0(x)\,{\rm d} x +t\left\vert{\Lambda}\right\vert_E \end{aligned}\end{equation*}

and $s^q\leq s+1$ for all $s>0$ and $q \in [0,1]$ , it is easy to show that there exists $\ell_{1}>0$ such that

(4.4) \begin{equation}\mathbb{E} \int_\mathcal{O} \overline S_n^q(t ,x)\,{\rm d} x\leq \ell_{1} \qquad \text{for any } t\in[0,1], q\in[0,1],\end{equation}

where $\ell_1$ is independent of n. For any $\varepsilon>0$ , using Itô’s Lemma ([Reference Curtain and Falez10, Theorem 3.8]) again, we have

(4.5) \begin{equation}\begin{aligned}&\!\!\!\mathbb{E} \int_\mathcal{O} (\overline S_n(1 ,x)+\varepsilon)^{\frac12}\,{\rm d} x \\ & \ =\int_\mathcal{O}(S_0(x)+\varepsilon)^{\frac12}{\rm d} x+\int_0^{1}\mathbb{E} \int_\mathcal{O}\dfrac {1}{2(\overline S_n(s,x)+\varepsilon)^{\frac12}}\bigg(k_1\Delta \overline S_n(s,x)\\&\qquad+\Lambda(x)-\mu_1(x)\overline S_n(s,x)-\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\&\qquad-\dfrac 18 \int_0^{1} \mathbb{E}\sum_{k=1}^n\int_\mathcal{O} \dfrac{a_{k,1}e^2_k(x)\overline S_n^2(s,x)}{(\overline S_n(s,x)+\varepsilon)^{\frac32}}\,{\rm d} x\,{\rm d} s \\&\geq \frac12\int_0^1\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(s,x)+\varepsilon)^{\frac12}}\,{\rm d} x\,{\rm d} s-N_1 \int_0^1\left(\mathbb{E}\int_\mathcal{O} \overline S_n^{\frac12}(s ,x)\,{\rm d} x\right){\rm d} s,\end{aligned}\end{equation}

where

\begin{equation*}N_1=\dfrac{\left\vert{\mu_1}\right\vert_E+\left\vert{\alpha}\right\vert_E+\frac {a_1C_0^2}4}{2}.\end{equation*}

In view of (4.4) and (4.5), we have

\begin{equation*}\frac12\int_0^1\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(s,x)+\varepsilon)^{\frac12}}\,{\rm d} x\,{\rm d} s\leq (1+N_1)\ell_{1}+\sqrt{\varepsilon} \qquad\text{for all}\ \varepsilon>0,\end{equation*}

which implies that

\begin{equation*}\int_0^1\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(s ,x))^{\frac12}}\,{\rm d} x\,{\rm d} s\leq 2(1+N_1)\ell_{1} ,\end{equation*}

and hence there exists $t_1=t_1(n)\in[0,1]$ such that

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(t_1,x))^{\frac12}}\,{\rm d} x\leq \dfrac{2(1+N_1)\ell_{1}}{\Lambda_*}.\end{equation*}

Applying Lemma 4.2 and the Markov property of $(\overline S_n, \overline I_n)$ , we have

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(t ,x)+\varepsilon)^{\frac12}}\,{\rm d} x\leq\ell_{2} \qquad \text{for all}\ t\in[1,2],\end{equation*}

for some $\ell_{2}$ independent of n. We again have

(4.6) \begin{equation}\begin{aligned}&\!\!\!\mathbb{E} \int_\mathcal{O} (\overline S_n(2 ,x)+\varepsilon)^{-\frac12}{\rm d} x \\[6pt] & \ =\mathbb{E}\int_\mathcal{O}(\overline S_n(1,x)+\varepsilon)^{-\frac12}{\rm d} x -\int_1^{2}\mathbb{E} \int_\mathcal{O}\dfrac {1}{2(\overline S_n(s,x)+\varepsilon)^{\frac32}}\bigg(k_1\Delta \overline S_n(s,x)\\[6pt]&\qquad+\Lambda(x)-\mu_1(x)\overline S_n(s,x)-\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\[6pt]&\qquad+\dfrac 38 \int_1^{2 } \mathbb{E}\sum_{k=1}^n\int_\mathcal{O} \dfrac{a_{k,1}e^2_k(x)\overline S_n^2(s,x)}{(\overline S_n(s,x)+\varepsilon)^{\frac52}}\,{\rm d} x\,{\rm d} s\\[6pt]&\leq -\frac12\int_1^2\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(s ,x)+\varepsilon)^{\frac32}}\,{\rm d} x\,{\rm d} s+\mathbb{E}\int_\mathcal{O}(\overline S_n(1,x)+\varepsilon)^{-\frac12}{\rm d} x\\[6pt]&\qquad +N_2 \int_1^2\left(\mathbb{E}\int_\mathcal{O} \overline S_n^{-\frac12}(s ,x)\,{\rm d} x\right){\rm d} s,\end{aligned}\end{equation}

where

\begin{equation*}N_2=\dfrac{\left\vert{\mu_1}\right\vert_E+\left\vert{\alpha}\right\vert_E+\frac {3a_1C_0^2}4}{2}.\end{equation*}

Thus,

\begin{equation*}\int_1^2\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(s ,x)+\varepsilon)^{\frac32}}\,{\rm d} x\,{\rm d} s\leq \ell_{3} \end{equation*}

for some $\ell_{3}$ depending only on the initial conditions. Letting $\varepsilon\to0$ , we obtain that, for some $t_2=t_2(n)\in[1,2]$ ,

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{\overline S_n^{\frac32}(t_2,x)}\,{\rm d} x\leq \ell_{3},\end{equation*}

which together with Lemma 4.2 implies that

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{\overline S_n^{\frac32}(t ,x)}\,{\rm d} x\leq \ell_{4}\qquad \text{for all}\ t\in[2,3],\end{equation*}

where $\ell_4$ is some constant independent of n. Using the same process we can obtain that there exist $t_3=t_3(n)\in [0,4]$ and $\ell_5$ such that

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac{\Lambda(x)}{(\overline S_n(t_3 ,x))^{\frac52}}\,{\rm d} x\leq \ell_{5}.\end{equation*}

Therefore, there exist a time $t_4=t_4(n)\in [0,4]$ and a constant $\ell_6$ satisfying

\begin{equation*}\mathbb{E} \int_\mathcal{O} \dfrac 1{(\overline S_n(t_4,x))^2}\,{\rm d} x<\ell_6.\end{equation*}

The lemma is proved by applying Lemma 4.2.

In view of Lemmas 4.2 and 4.3, we have

(4.7) \begin{equation}\begin{aligned}\mathbb{E} \int_\mathcal{O} \dfrac 1{\overline S_n^2(t ,x)}\,{\rm d} x\leq {\rm e}^{-t}\ell_0+ \widetilde K_2\qquad \text{for all}\ n\in\mathbb{N}, t\geq 4.\end{aligned}\end{equation}

Note that both $\ell_0$ and $\widetilde K_2$ are independent of n; $\ell_0$ may depend on the initial data, whereas $\widetilde K_2$ does not. By Itô’s Lemma ([Reference Curtain and Falez10, Theorem 3.8]) again, and with calculations similar to those used to obtain (4.3), we have

\begin{equation*}\begin{aligned}\mathbb{E}\! \int_\mathcal{O}\overline I_n(t,x)\,{\rm d} x& \geq\mathbb{E} \int_\mathcal{O}\ln (\overline I_n(t,x)+\varepsilon)\,{\rm d} x \\&=\int_\mathcal{O}\ln (I_0(x)+\varepsilon)\,{\rm d} x+\int_0^t \mathbb{E}\int_\mathcal{O}\dfrac 1{\overline I_n(s,x)+\varepsilon}\bigg(k_2\Delta \overline I_n(s,x)\\&\quad\quad\quad-\mu_2(x)\overline I_n(s,x)+\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\&\quad\quad\quad-\dfrac 12\int_0^t\mathbb{E}\sum_{k=1}^n \int_\mathcal{O}\dfrac{a_{k,2}\overline I_n^2(s,x)e_k^2(x)}{(\overline I_n(s,x)+\varepsilon)^2}\,{\rm d} x\,{\rm d} s\\&\geq \int_\mathcal{O}\!\!\ln (I_0(x)+\varepsilon){\rm d} x-\bigg(\dfrac {a_2}2+\left\vert{\mu_2}\right\vert_E\bigg)t \ \text{for all}\ n\in\mathbb{N}, t>0, 0<\varepsilon<1.\end{aligned}\end{equation*}

As a consequence,

\begin{equation*} \mathbb{E}\! \int_\mathcal{O}\!\overline I_n(t,x)\,{\rm d} x\geq\mathbb{E}\!\int_\mathcal{O}\! \ln \overline I_n(t,x)\,{\rm d} x\geq\!\int_\mathcal{O}\!\ln I_0(x)\,{\rm d} x-\bigg(\dfrac{a_2}2+\left\vert{\mu_2}\right\vert_E\bigg)t>-\infty\ \text{for all}\ t>0.\end{equation*}

That means,

(4.8) \begin{equation}\mathbb{P}\big\{\overline I_n(t,x)>0\;\text{almost everywhere in}\;\mathcal{O}\big\}=1 \qquad\text{for all}\ n\in\mathbb{N}, t>0.\end{equation}

On the other hand, Itô’s Lemma and direct calculations imply that

\begin{equation*}\begin{aligned}0&\geq \mathbb{E}\int_\mathcal{O}\ln \dfrac{\overline I_n(t,x)+\varepsilon}{1+\overline I_n(t,x)}\,{\rm d} x\geq \int_\mathcal{O} \ln\dfrac{I_0(x)+\varepsilon}{1+I_0(x)}\,{\rm d} x+\widehat Rt\\[4pt]&\quad\quad-\int_0^{t}\mathbb{E}\int_\mathcal{O} \bigg(\dfrac{\alpha(x) \overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}+\dfrac{\alpha(x) \overline S_n(s,x)\overline I_n(s,x)}{(\overline S_n(s,x)+\overline I_n(s,x))(\overline I_n(s,x)+1)}\bigg)\,{\rm d} x\,{\rm d} s\\[4pt]&\quad\quad-\int_0^t\mathbb{E}\int_\mathcal{O} \dfrac{\alpha(x)\varepsilon}{\overline I_n(s,x)+\varepsilon}\,{\rm d} x\,{\rm d} s \qquad \text{for all}\ t>0,n\in \mathbb{N}, 0<\varepsilon<1.\end{aligned}\end{equation*}

Thus, for all $t> 0$ , $n\in\mathbb{N}$ , $0<\varepsilon<1$ ,

(4.9) \begin{equation}\begin{aligned}\int_0^t \mathbb{E}\int_\mathcal{O} &\bigg(\dfrac{\alpha(x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}+\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{(\overline S_n(s,x)+\overline I_n(s,x))(\overline I_n(s,x)+1)}\bigg)\,{\rm d} x\,{\rm d} s\\[4pt]& \geq \mathbb{E}\int_\mathcal{O} \ln\dfrac{I_0(x)+\varepsilon}{1+I_0(x)}\,{\rm d} x+\widehat Rt-\left\vert{\alpha}\right\vert_E\int_0^t \mathbb{E}\int_\mathcal{O} \dfrac{\varepsilon}{\overline I_n(s,x)+\varepsilon}\,{\rm d} x\,{\rm d} s.\end{aligned}\end{equation}

Let $\varepsilon\to 0$ ; using (4.8) and (4.9), we have

(4.10) \begin{equation}\begin{aligned}\int_0^t \mathbb{E}\int_\mathcal{O} &\bigg(\dfrac{\alpha(x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}+\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{(\overline S_n(s,x)+\overline I_n(s,x))(\overline I_n(s,x)+1)}\bigg)\,{\rm d} x\,{\rm d} s\\[4pt]\geq& \int_\mathcal{O}\ln \dfrac{I_0(x)}{1+I_0(x)}\,{\rm d} x+\widehat Rt\qquad\text{for all}\ t>0,n\in\mathbb{N}.\end{aligned}\end{equation}

We have the following estimates:

\begin{equation*}\begin{aligned}\left\vert{\alpha}\right\vert_E\left(\mathbb{E}\int_\mathcal{O} \dfrac{\overline I_n^2(s,x)}{(1+\overline I_n(s,x))^2}\,{\rm d} x\right)^{\frac12}&\geq \mathbb{E}\int_\mathcal{O}\dfrac{\alpha(x)\overline I_n(s,x)}{1+\overline I_n(s,x)}\,{\rm d} x \\[4pt]&\geq\mathbb{E} \int_\mathcal{O} \dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{(\overline S_n(s,x)+\overline I_n(s,x))(\overline I_n(s,x)+1)}\,{\rm d} x \end{aligned}\end{equation*}

and

\begin{equation*}\begin{aligned}\left\vert{\alpha}\right\vert_E&\left(\mathbb{E}\int_\mathcal{O} \dfrac{\overline I_n^2(s,x)}{(1+\overline I_n(s,x))^2}\,{\rm d} x\right)^{\frac12}\left(\mathbb{E} \int_\mathcal{O}\left(\dfrac 1{\overline S_n(s,x)}+1\right)^2{\rm d} x\right)^{\frac12}\\[4pt]&\geq \mathbb{E}\int_\mathcal{O}\dfrac{\alpha(x)\overline I_n(s,x)}{1+\overline I_n(s,x)}\left(\dfrac1{\overline S_n(s,x)}+1\right){\rm d} x \geq \mathbb{E}\int_\mathcal{O}\dfrac{\alpha(x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}\,{\rm d} x,\end{aligned}\end{equation*}

since

\begin{equation*}\frac{1+I}{S+I}=\frac{1}{S+I}+\frac{I}{S+I}\leq \frac1S+1.\end{equation*}

Therefore, after some basic estimates, we can get from (4.10) that

\begin{equation*}\begin{aligned}\int_4^t \left\vert{\alpha}\right\vert_E&\left(\mathbb{E}\int_\mathcal{O} \dfrac{\overline I_n^2(s,x)}{(1+\overline I_n(s,x))^2}\,{\rm d} x\right)^{\frac12}\left(1+\left(\mathbb{E} \int_\mathcal{O}\left(\dfrac 1{\overline S_n(s,x)}+1\right)^2{\rm d} x\right)^{\frac12}\right){\rm d} s\\[3pt]\geq&\int_4^t \mathbb{E}\int_\mathcal{O} \bigg(\dfrac{\alpha(x)\overline I_n(s,x)}{\overline S_n(s,x)+\overline I_n(s,x)}+\dfrac{\alpha(x)\overline S_n(s,x)\overline I_n(s,x)}{(\overline S_n(s,x)+\overline I_n(s,x))(\overline I_n(s,x)+1)}\bigg)\,{\rm d} x\,{\rm d} s \\[3pt]\geq&\int_\mathcal{O}\ln \dfrac{I_0(x)}{1+I_0(x)}\,{\rm d} x+\widehat R t- 8\left\vert{\alpha}\right\vert_E,\end{aligned}\end{equation*}

which together with (4.7) leads to

\begin{equation*}\begin{aligned}\int_4^t \left\vert{\alpha}\right\vert_E&\left(\mathbb{E}\int_\mathcal{O} \dfrac{\overline I_n^2(s,x)}{(1+\overline I_n(s,x))^2}\,{\rm d} x\right)^{\frac12}\left(2\sqrt{{\rm e}^{-s}\ell_0}+2\widetilde K_2^{\frac 12}+3\right) {\rm d} s \\\geq& \int_\mathcal{O}\ln \dfrac{I_0(x)}{1+I_0(x)}\,{\rm d} x-8\left\vert{\alpha}\right\vert_E+\widehat R t.\end{aligned}\end{equation*}

Letting $n\to\infty$ yields

(4.11) \begin{equation}\begin{aligned}\int_4^t \left\vert{\alpha}\right\vert_E&\left(\mathbb{E}\int_\mathcal{O} \dfrac{I^2(s,x)}{(1+I(s,x))^2}\,{\rm d} x\right)^{\frac12}\left(2\sqrt{{\rm e}^{-s}\ell_0}+2\widetilde K_2^{\frac 12}+3\right) {\rm d} s \\\geq& \int_\mathcal{O}\ln \dfrac{I_0(x)}{1+I_0(x)}\,{\rm d} x-8\left\vert{\alpha}\right\vert_E+\widehat R t,\end{aligned}\end{equation}

from which it readily follows that

\begin{equation*}\begin{aligned}\liminf_{t\to\infty}\dfrac1t\int_0^t \left(\mathbb{E}\int_\mathcal{O} \dfrac{I^2(s,x)}{(1+I(s,x))^2}\,{\rm d} x\right)^{\frac12}{\rm d} s\geq \dfrac{\widehat R}{\left\vert{\alpha}\right\vert_E(2\widetilde K_2^{\frac 12}+3)}.\end{aligned}\end{equation*}

As a consequence, since $I^2/(1+I)^2\leq I^2\wedge 1$ ,

\begin{equation*}\liminf_{t\to\infty}\dfrac1t\int_0^t\left(\mathbb{E}\int_\mathcal{O} (I^2(s,x)\wedge 1) \,{\rm d} x\right)^{\frac12}{\rm d} s\geq R_I>0,\end{equation*}

where $R_I$ is independent of the initial data. The proof of the theorem is completed by using the density of $D(A_i^E)$ in E and the continuous dependence of the solution on the initial data. In more detail, since the constants $\widetilde K_2$ and $\widehat R$ are independent of the initial data, the estimates (4.7) and (4.11) still hold for the solution starting from arbitrary initial values $S_0,I_0\in E$ with $\int_\mathcal{O} -\ln I_0(x)\,{\rm d} x<\infty$ .

Theorem 4.2. For any non-negative initial data $S_0,I_0\in E$ , if

\begin{equation*} (\mu_2-\alpha)_*= \inf_{x\in\mathcal{\overline{O}}}(\mu_2(x)-\alpha(x))>0,\end{equation*}

then the infected class goes extinct at an exponential rate.

Proof. First, we define the linear operator $J\,:\,H \to\mathbb{R}$ by setting, for all $u\in H$ ,

\begin{equation*}Ju\,:\!=\int_\mathcal{O} u(x)\,{\rm d} x.\end{equation*}

By the properties of ${\rm e}^{tA_i}$ , $J({\rm e}^{tA_i}u-u)=0$ , that is, $Ju=J{\rm e}^{tA_i}u$ , for all $u\in H$ and $i=1,2$ .

Now, as in the definition of the mild solution, we have

\begin{equation*}I(t)={\rm e}^{tA_2}I_0+\int_0^t{\rm e}^{(t-s)A_2}\bigg(-\mu_2I(s)+\dfrac{\alpha S(s)I(s)}{S(s)+I(s)}\bigg)\,{\rm d} s+\int_0^t {\rm e}^{(t-s)A_2}I(s)\,{\rm d} W_2(s).\end{equation*}

Hence, applying the operator J to both sides, using the properties of operator J and stochastic convolution (see [12, Proposition 4.15]), we obtain

\begin{equation*}\begin{aligned}\int_\mathcal{O}I(t,x)\,{\rm d} x=&\int_\mathcal{O}I_0(x)\,{\rm d} x+\int_0^t \int_\mathcal{O}\bigg(-\mu_2(x) I(s,x)+\dfrac{\alpha(x) S(s,x)I(s,x)}{S(s,x)+I(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\\&\quad+\int_0^t J({\rm e}^{(t-s)A_2}I(s))\,{\rm d} W_2(s),\end{aligned}\end{equation*}

where $J({\rm e}^{(t-s)A_2}I(s))$ in the stochastic integral is understood as the process taking values in the space of linear operators from H to $\mathbb{R}$ defined by

\begin{equation*}J({\rm e}^{(t-s)A_2}I(s))u\,:\!=\int_\mathcal{O} ({\rm e}^{(t-s)A_2}I(s)u)(x)\,{\rm d} x\qquad\text{for all}\ u\in H.\end{equation*}

From (2.1), it is easy to see that these integrals are well defined. Taking the expectation on both sides and using the properties of the stochastic integral [10, Proposition 2.9], we obtain

\begin{equation*}\begin{aligned}\mathbb{E} \int_\mathcal{O}I(t,x)\,{\rm d} x&=\int_\mathcal{O}I_0(x)\,{\rm d} x+\mathbb{E}\int_0^t\int_\mathcal{O}\bigg(-\mu_2(x) I(s,x)+\dfrac{\alpha(x) S(s,x)I(s,x)}{S(s,x)+I(s,x)}\bigg)\,{\rm d} x\,{\rm d} s\end{aligned}\end{equation*}

As a consequence,

\begin{equation*}\begin{aligned}\mathbb{E}\! \int_\mathcal{O}I(t,x)\,{\rm d} x-\mathbb{E}\! \int_\mathcal{O} I(s,x)\,{\rm d} x&=\int_s^t\!\mathbb{E}\!\int_\mathcal{O}\bigg(-\mu_2(x) I(r,x)+\dfrac{\alpha(x) S(r,x)I(r,x)}{S(r,x)+I(r,x)}\bigg)\,{\rm d} x\,{\rm d} r\\&\leq -(\mu_2-\alpha)_*\int_s^t\mathbb{E}\int_\mathcal{O} I(r,x)\,{\rm d} x\,{\rm d} r .\end{aligned}\end{equation*}

Hence, we can obtain the following estimate for the upper Dini derivative:

\begin{equation*}\dfrac{{\rm d}}{{\rm d} t^+}\mathbb{E}\int_\mathcal{O}I(t,x)\,{\rm d} x\leq -(\mu_2-\alpha)_* \mathbb{E}\int_\mathcal{O}I(t,x)\,{\rm d} x\qquad\text{for all}\ t\geq 0.\end{equation*}

Since $(\mu_2-\alpha)_*>0$ , the differential inequality yields $\mathbb{E}\int_\mathcal{O}I(t,x)\,{\rm d} x\leq {\rm e}^{-(\mu_2-\alpha)_*t}\int_\mathcal{O}I_0(x)\,{\rm d} x$ , so $\mathbb{E}\int_\mathcal{O}I(t,x)\,{\rm d} x$ converges to 0 at an exponential rate as $t\to \infty$ . Hence, the infected class goes extinct.

Theorem 4.3. Suppose that $W_2(t)$ is a space-independent Brownian motion with covariance $a_2 t$ . For any non-negative initial data $S_0,I_0\in E$ , if

\begin{equation*} (\mu_2-\alpha)_*+\frac{a_2}2\,:\!=\inf_{x\in\mathcal{\overline{O}}}(\mu_2(x)-\alpha(x))+\frac{a_2}2>0,\end{equation*}

then, for any $p>0$ small enough that

\begin{equation*}R_p\,:\!=(\mu_2-\alpha)_*+\frac{(1-p)a_2}2>0,\end{equation*}

we have

\begin{equation*}\limsup_{t\to\infty}\dfrac{\ln \mathbb{E}\left(\int_\mathcal{O} I(t,x)\,{\rm d} x\right)^p}t\leq -pR_p<0.\end{equation*}

Proof. Since $W_2(t)$ is a space-independent Brownian motion, as in the arguments in the proof of Theorem 4.1, the mild solution I(t) is also a strong solution if $I_0\in D(A_i^E)$ . Hence, for an initial value in $D(A_i^E)$ , we have

\begin{equation*}\int_\mathcal{O}\! I(t,x)\,{\rm d} x = \int_\mathcal{O}\! I_0(x)\,{\rm d} x + \int_0^t\!\int_\mathcal{O}\bigg(-\mu_2(x) I(s,x)+\dfrac{\alpha(x) S(s,x)I(s,x)}{S(s,x)+I(s,x)}\bigg)\,{\rm d} x\,{\rm d} s + \int_0^t\!\bigg(\int_\mathcal{O}\! I(s,x)\,{\rm d} x\bigg)\,{\rm d} W_2(s) .\end{equation*}

By Itô’s formula, we obtain that

\begin{equation*}\begin{aligned}\bigg(&\int_\mathcal{O} I(t,x)\,{\rm d} x\bigg)^p-\bigg(\int_\mathcal{O} I(s,x)\,{\rm d} x\bigg)^p\\&= \int_s^t\bigg[ p\bigg(\int_\mathcal{O} I(r,x)\,{\rm d} x\bigg)^{p-1}\int_\mathcal{O}\bigg(-\mu_2(x) I(r,x)+\dfrac{\alpha(x) S(r,x)I(r,x)}{S(r,x)+I(r,x)}\bigg)\,{\rm d} x\bigg]\,{\rm d} r\\& \quad - \int_s^t p(1-p)\frac{a_2}2 \left(\int_\mathcal{O}I(r,x)\,{\rm d} x\right)^p{\rm d} r + \int_s^t p\left(\int_\mathcal{O}I(r,x)\,{\rm d} x\right)^p{\rm d} W_2(r)\\&\leq -pR_p \int_s^t \left(\int_\mathcal{O}I(r,x)\,{\rm d} x\right)^p{\rm d} r + \int_s^t p\left(\int_\mathcal{O}I(r,x)\,{\rm d} x\right)^p{\rm d} W_2(r).\end{aligned}\end{equation*}

Since $\mathbb{E} \left(\int_\mathcal{O}I(t,x)\,{\rm d} x\right)^p<\infty$ , the stochastic integral above is a martingale with mean zero, so taking expectations gives

\begin{equation*}\mathbb{E} \left(\int_\mathcal{O} I(t,x)\,{\rm d} x\right)^p\leq \mathbb{E}\left(\int_\mathcal{O} I(s,x)\,{\rm d} x\right)^p-pR_p \int_s^t \mathbb{E}\left(\int_\mathcal{O}I(r,x)\,{\rm d} x\right)^p{\rm d} r ,\end{equation*}

which easily leads to

\begin{equation*}\dfrac{{\rm d}}{{\rm d} t^+} \mathbb{E} \left(\int_\mathcal{O} I(t,x)\,{\rm d} x\right)^p\leq -pR_p \mathbb{E} \left(\int_\mathcal{O} I(t,x)\,{\rm d} x\right)^p.\end{equation*}

An application of the differential inequality shows that

(4.12) \begin{equation}\mathbb{E} \left(\int_\mathcal{O} I(t,x)\,{\rm d} x\right)^p \leq {\rm e}^{-pR_pt} \left(\int_\mathcal{O} I(0,x)\,{\rm d} x\right)^p \end{equation}

for any $t\geq 0$ and initial values in $D(A_i^E)$ . Since $D(A_i^E)$ is dense in E, (4.12) holds for each fixed t and any initial values in E, from which the desired result follows.
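The decay bound (4.12) can be checked numerically on a worst-case scalar proxy: bounding $S/(S+I)$ by 1 turns the total infected mass into a geometric Brownian motion, whose pth moment can be sampled exactly and compared with ${\rm e}^{-pR_pt}$. The following Monte Carlo sketch does this; all parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values: (mu2 - alpha)_* = 1 and noise covariance a2 = 1.
mu2_minus_alpha, a2 = 1.0, 1.0
p, t, y0 = 0.5, 1.0, 1.0
Rp = mu2_minus_alpha + (1.0 - p) * a2 / 2.0    # R_p as in Theorem 4.3

# Worst-case scalar proxy for Y(t) = \int_O I(t,x) dx: bounding S/(S+I) by 1
# gives the geometric Brownian motion dY = -(mu2 - alpha) Y dt + Y dW_2, with
# <W_2>(t) = a2 t, which can be sampled exactly (no time discretization).
z = rng.normal(size=400_000)
y = y0 * np.exp((-mu2_minus_alpha - a2 / 2.0) * t + np.sqrt(a2 * t) * z)

moment = np.mean(y**p)                  # Monte Carlo estimate of E[Y(t)^p]
bound = y0**p * np.exp(-p * Rp * t)     # right-hand side of (4.12)
print(moment, bound)                    # for this proxy they agree up to MC error
```

For the proxy the pth moment equals the bound exactly, which also makes visible why small $p$ helps: the Itô correction $-p(1-p)a_2/2$ is negative for $0<p<1$, so the noise contributes to the decay of the pth moment.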

5. An example

In this section, to demonstrate our results we consider an example where the noise processes in (1.1) are standard Brownian motions and the recruitment, death, infection, and recovery rates are independent of the space variable. Precisely, we consider

(5.1) \begin{equation}\begin{cases}{\rm d} S(t,x)=\bigg[k_1\Delta S(t,x)+\Lambda-\mu_1S(t,x)- \dfrac{\alpha S(t,x)I(t,x)}{S(t,x)+ I(t,x)}\bigg]{\rm d} t \\ \qquad\qquad +\, \sigma_1S(t,x){\rm d} B_1(t)\quad \text{in } \mathbb{R}^+\times\mathcal{O}, \\[1ex]{\rm d} I(t,x)=\bigg[k_2\Delta I(t,x)-\mu_2I(t,x) +\dfrac{\alpha S(t,x)I(t,x)}{S(t,x)+ I(t,x)}\bigg]{\rm d} t \\\qquad\qquad +\,\sigma_2 I(t,x){\rm d} B_2(t)\quad\text{in } \mathbb{R}^+\times\mathcal{O},\\[1ex]\partial_{\nu}S(t,x)=\partial_{\nu}I(t,x)=0\qquad\qquad\qquad\quad\ \text{in } \mathbb{R}^+\times\partial\mathcal{O},\\S(0,x)=S_0(x),\ I(0,x)=I_0(x)\qquad\qquad\ \ \text{in } \mathcal{\overline{O}},\end{cases}\end{equation}

where $\Lambda$ , $\mu_1$ , $\mu_2$ , and $\alpha$ are positive constants, and $B_1(t)$ , $B_2(t)$ are independent standard Brownian motions. By the results above, for any initial values $0\leq S_0,I_0\in E$ , (5.1) has a unique positive mild solution $S(t,x),I(t,x)\geq 0$ . Moreover, the long-time behavior of the system is described in the following theorem.

Theorem 5.1. Let S(t, x),I(t, x) be the positive mild solution (in fact, also a strong solution) of (5.1).

  1. (i) For any non-negative initial values $S_0,I_0\in E$ , if $\alpha<\mu_2+\dfrac {\sigma_2^2}2$ , then the infected class goes extinct.

  2. (ii) For initial values $0\leq S_0,I_0\in E$ satisfying

    \begin{equation*}\int_\mathcal{O}-\ln I_0(x)\,{\rm d} x<\infty,\end{equation*}
    if $\alpha>\mu_2+\dfrac {\sigma_2^2}2$ , then the infected class is permanent.

Remark 5.1. By Theorem 5.1, the sufficient condition for permanence is almost a necessary condition. This is similar to the result for the SIS reaction–diffusion epidemic model shown in [Reference Peng and Liu33, Theorem 1.2].
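The extinction threshold of Theorem 5.1(i) can be illustrated numerically by discretizing (5.1) with finite differences in space and an Euler–Maruyama step in time. The sketch below is a minimal one-dimensional simulation on $\mathcal{O}=(0,1)$ with hypothetical parameter values in the extinction regime $\alpha<\mu_2+\sigma_2^2/2$; it tracks the total infected mass $\int_\mathcal{O} I(t,x)\,{\rm d} x$, which should decay sharply.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values, chosen only for illustration and lying in the
# extinction regime of Theorem 5.1(i): alpha < mu2 + sigma2^2 / 2.
k1, k2 = 0.1, 0.1
Lam, mu1, mu2, alpha = 1.0, 1.0, 1.0, 0.5
sigma1, sigma2 = 0.5, 0.5

nx = 50                      # spatial grid points on O = (0, 1)
dx = 1.0 / nx
dt, nt = 1.0e-3, 20_000      # integrate up to t = 20

S = np.ones(nx)
I = np.full(nx, 0.5)

def lap(u):
    # Discrete Laplacian with homogeneous Neumann (zero-flux) boundary conditions.
    up = np.concatenate(([u[0]], u, [u[-1]]))   # reflecting ghost cells
    return (up[:-2] - 2.0 * u + up[2:]) / dx**2

mass = [I.sum() * dx]        # total infected mass, approximating \int_O I dx
for _ in range(nt):
    dB1 = rng.normal(0.0, np.sqrt(dt))   # space-independent Brownian increments
    dB2 = rng.normal(0.0, np.sqrt(dt))
    inc = alpha * S * I / (S + I + 1e-12)        # infection term, guarded at 0/0
    S_new = S + (k1 * lap(S) + Lam - mu1 * S - inc) * dt + sigma1 * S * dB1
    I_new = I + (k2 * lap(I) - mu2 * I + inc) * dt + sigma2 * I * dB2
    S = np.maximum(S_new, 0.0)           # guard against discretization overshoot
    I = np.maximum(I_new, 0.0)
    mass.append(I.sum() * dx)

print(mass[0], mass[-1])
```

With these values the pathwise decay exponent $\alpha-\mu_2-\sigma_2^2/2=-0.625$ is strongly negative, so the final infected mass is many orders of magnitude below the initial one; the time step satisfies $k_i\,{\rm d} t/{\rm d} x^2=0.25$, within the usual explicit-scheme stability limit.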

6. Concluding remarks

This is possibly one of the first papers on spatially inhomogeneous stochastic partial differential equation epidemic models, and we hope that our effort will provide some insights for subsequent study and investigation. For possible future study, we mention the following topics.

  • First, there is growing interest in using the so-called regime-switching stochastic models in various applications; see [Reference Yin and Zhu39] for the treatment of switching diffusion models, in which both continuous dynamics and discrete events coexist. Such switching diffusion models have gained popularity with applications ranging from networked control systems to financial engineering. For instance, in a financial market model, one may use a random switching process to model the mode of the market (bull or bear). Such a random switching process can be built into the stochastic partial differential equation models considered here. The switching is used to reflect different random environments that are not covered by the stochastic partial differential equation part of the model.

  • Second, instead of systems driven by Brownian motions, we may consider systems driven by Lévy processes; some recent work can be seen in [Reference Bao, Yin and Yuan5]. One could work with stochastic partial differential equation models driven by Lévy processes. Recent work on switching jump diffusions [Reference Chen, Chen, Tran and Yin9] may also be adopted in stochastic partial differential equation models.

  • Finally, in terms of mathematical development, various estimates of the long-time properties were given in the average norm, although the solution lives in the better space E. Future work will aim to obtain stochastic regularity of the solution by using the methods in [Reference Brzeźniak7, Reference Veraar and Weis34, Reference van Neerven, Veraar and Weis35] so that it is possible to provide estimates in the sup-norm ( $\left\vert{\cdot}\right\vert_E$ ). Nevertheless, some mathematical details need to be carefully worked out. The result, in turn, will be of interest to people working with real data. Some other properties, such as the strict positivity of the solutions and sharper conditions for extinction and permanence, are worthy of consideration.

Acknowledgements

We are grateful to the editors and reviewer for their evaluation. Our special thanks go to the reviewer for detailed comments and suggestions on an earlier version of the manuscript, which have much improved the paper. The research of D. Nguyen was supported in part by the National Science Foundation under grant DMS-1853467. The research of N. Nguyen and G. Yin was supported in part by the Army Research Office under grant W911NF-19-1-0176.

References

Allen, L. J. S., Bolker, B. M., Lou, Y. and Nevai, A. L. (2007). Asymptotic profiles of the steady states for an SIS epidemic patch model. SIAM J. Appl. Math. 67, 1283–1309.
Allen, L. J. S., Bolker, B. M., Lou, Y. and Nevai, A. L. (2008). Asymptotic profiles of the steady states for an SIS epidemic reaction–diffusion model. Discrete Cont. Dyn. Syst. 21, 1–20.
Arendt, W. (2004). Semigroups and evolution equations: Functional calculus, regularity and kernel estimates. In Handbook of Differential Equations: Evolutionary Differential Equations, eds C. M. Dafermos and E. Feireisl, North Holland, Amsterdam.
Ball, F. and Sirl, D. (2012). An SIR epidemic model on a population with random network and household structure, and several types of individuals. Adv. Appl. Prob. 44, 63–86.
Bao, J., Yin, G. and Yuan, C. (2017). Two-time-scale stochastic partial differential equations driven by alpha-stable noises: Averaging principles. Bernoulli 23, 645–669.
Britton, T. and Lindholm, M. (2009). The early stage behaviour of a stochastic SIR epidemic with term-time forcing. J. Appl. Prob. 46, 975–992.
Brzeźniak, Z. (1995). Stochastic partial differential equations in M-type 2 Banach spaces. Potential Anal. 4, 1–45.
Cerrai, S. (2001). Second Order PDE's in Finite and Infinite Dimension: A Probabilistic Approach (Lect. Notes Math. Ser. 1762). Springer, Berlin.
Chen, X., Chen, Z.-Q., Tran, K. and Yin, G. (2019). Properties of switching jump diffusions: Maximum principles and Harnack inequalities. Bernoulli 25, 1045–1075.
Curtain, R. F. and Falez, P. L. (1970). Itô's Lemma in infinite dimensions. J. Math. Anal. Appl. 31, 434–448.
Da Prato, G. and Tubaro, L. (1985). Some results on semilinear stochastic differential equations in Hilbert spaces. Stochastics 15, 271–281.
Da Prato, G. and Zabczyk, J. (1992). Stochastic Equations in Infinite Dimensions. Cambridge University Press.
Davies, E. B. (1989). Heat Kernels and Spectral Theory (Camb. Univ. Tracts Math. 92). Cambridge University Press.
Dieu, N. T., Du, N. H. and Nhu, N. N. (2019). Conditions for permanence and ergodicity of certain SIR epidemic models. Acta Appl. Math. 160, 81–99.
Dieu, N. T., Nguyen, D. H., Du, N. H. and Yin, G. (2016). Classification of asymptotic behavior in a stochastic SIR model. SIAM J. Appl. Dynam. Sys. 15, 1062–1084.
Du, N. H., Nguyen, D. H. and Yin, G. (2016). Conditions for permanence and ergodicity of certain stochastic predator–prey models. J. Appl. Prob. 53, 187–202.
Du, N. H. and Nhu, N. N. (2017). Permanence and extinction of certain stochastic SIR models perturbed by a complex type of noises. Appl. Math. Lett. 64, 223–230.
Du, N. H. and Nhu, N. N. (2018). Permanence and extinction for the stochastic SIR epidemic model. Submitted.
Ducrot, A. and Giletti, T. (2014). Convergence to a pulsating traveling wave for an epidemic reaction–diffusion system with non-diffusive susceptible population. J. Math. Biol. 69, 533–552.
Gathy, M. and Lefevre, C. (2009). From damage models to SIR epidemics and cascading failures. Adv. Appl. Prob. 41, 247–269.
Hening, A. and Nguyen, D. H. (2018). Stochastic Lotka–Volterra food chains. J. Math. Biol. 77, 135–163.
Hening, A., Nguyen, D. H. and Yin, G. (2018). Stochastic population growth in spatially heterogeneous environments: The density-dependent case. J. Math. Biol. 76, 697–754.
Hieu, N. T., Du, N. H., Auger, P. and Nguyen, D. H. (2015). Dynamical behavior of a stochastic SIRS epidemic model. Math. Model. Nat. Phenom. 10, 56–73.
Kermack, W. O. and McKendrick, A. G. (1927). Contributions to the mathematical theory of epidemics (part I). Proc. R. Soc. London A 115, 700–721.
Kermack, W. O. and McKendrick, A. G. (1932). Contributions to the mathematical theory of epidemics (part II). Proc. R. Soc. London A 138, 55–83.
Kortchemski, I. (2015). A predator–prey SIR type dynamics on large complete graphs with three phase transitions. Stoch. Process. Appl. 125, 886–917.
Liu, K. (2005). Stability of Infinite-Dimensional Stochastic Differential Equations with Applications. Chapman and Hall/CRC, New York.
Nguyen, D. H. and Yin, G. (2017). Coexistence and exclusion of stochastic competitive Lotka–Volterra models. J. Differential Equat. 262, 1192–1225.
Nguyen, N. N. and Yin, G. (2019). Stochastic partial differential equation SIS epidemic models: Modeling and analysis. Commun. Stoch. Anal. 13, 8.
Nguyen, N. N. and Yin, G. (2020). Stochastic partial differential equation models for spatially dependent predator–prey equations. Discrete Cont. Dyn. Syst. Ser. B 25, 117–139.
Ouhabaz, E. M. (2004). Analysis of Heat Equations on Domains (London Math. Soc. Monographs 31). Princeton University Press.
Peng, R. (2009). Asymptotic profiles of the positive steady state for an SIS epidemic reaction–diffusion model I. J. Differential Equat. 247, 1096–1119.
Peng, R. and Liu, S. (2009). Global stability of the steady states of an SIS epidemic reaction–diffusion model. Nonlin. Anal. 71, 239–247.
van Neerven, J. M. A. M., Veraar, M. C. and Weis, L. (2008). Stochastic evolution equations in UMD Banach spaces. J. Funct. Anal. 255, 940–993.
van Neerven, J. M. A. M., Veraar, M. and Weis, L. (2012). Stochastic maximal $L^p$-regularity. Ann. Prob. 40, 788–812.
Wang, W. and Zhao, X. Q. (2012). Basic reproduction numbers for reaction–diffusion epidemic models. SIAM J. Appl. Dyn. Syst. 11, 1652–1673.
Wilkinson, R. R., Ball, F. G. and Sharkey, K. J. (2016). The deterministic Kermack–McKendrick model bounds the general stochastic epidemic. J. Appl. Prob. 53, 1031–1040.
Yagi, A. (2010). Abstract Parabolic Evolution Equations and their Applications. Springer, Berlin.
Yin, G. and Zhu, C. (2010). Hybrid Switching Diffusions: Properties and Applications. Springer, New York.
Zhang, L., Wang, Z. C. and Zhao, X. Q. (2015). Threshold dynamics of a time periodic reaction–diffusion epidemic model with latent period. J. Differential Equat. 258, 3011–3036.