1. Introduction
In the seminal paper [Reference Turing26], Turing proposed the concept of diffusion-driven instability (DDI), which may explain the spontaneous formation of patterns in developmental biology. Here, DDI refers to the instability of a spatially homogeneous state caused by the interaction of two chemical substances with different diffusion rates. Since then, Turing's notion has become a paradigm for pattern generation and has inspired a variety of theoretical models, but its biological verification has remained elusive [Reference Akam1, Reference Gierer and Meinhardt10].
However, not all patterns arise from DDI. Some models couple a reaction–diffusion equation with an ordinary differential equation. A typical example involves macroalgae and herbivores: the macroalgae are sessile, whereas the herbivores move through the environment they inhabit. Patterns therefore occur not only in classical reaction–diffusion systems in which all species diffuse [Reference Guo and Wang11, Reference Guo, You and Ahmed Abbakar12, Reference You and Guo14, Reference Song, Jiang, Liu and Yuan24] but also in degenerate systems in which some species do not diffuse. The latter are modelled by reaction–diffusion–ODE systems; see [Reference Aronson, Tesei and Weinberger2, Reference Le, Tsujikawa and Yagi17, Reference Sherrat, Maini, Jager and Muller25]. A model consisting of free receptors, bound receptors and ligands was proposed by Sherrat et al. [Reference Sherrat, Maini, Jager and Muller25], describing the coupling of cell-localized processes with cell-to-cell communication via diffusion in a cell assembly. Free and bound receptors are located on the cell surface and therefore do not diffuse. Ligands diffuse and act by binding to receptors, thereby triggering an intracellular response that leads to cell differentiation. Their model has a built-in spatial heterogeneity that triggers patterning. Marciniak–Czochra [Reference Marciniak–Czochra19, Reference Marciniak–Czochra20] later extended their model and demonstrated that nonlinear interactions of hysteresis type can result in the spontaneous emergence of patterns, without the need for spatial heterogeneity. For a detailed mathematical analysis, we refer to [Reference Harting, Marciniak–Czochra and Takagi15, Reference Kothe, Marciniak–Czochra and Takagi16, Reference Li, Marciniak–Czochra, Takagi and Wu18, Reference Marciniak–Czochra, Takagi and Nakayama21].
To analyse the contribution of the non-diffusing component to pattern formation, we concentrate on the following system
 \begin{equation} \left \{\begin{array}{lll} u_t=r\left (1-\dfrac {u}{K}\right )u-\dfrac {cuv}{m+bu}, & t\gt 0,& x\in \overline \Omega, \\[8pt] v_t=d_2\Delta v-av+\dfrac {\beta cuv}{m+bu},& t\gt 0,& x\in \Omega, \\ \partial _\tau v =0, & t\gt 0,& x\in \partial \Omega, \\ u(x,0)=u_0(x)\ge 0, \not \equiv 0 \quad v(x,0)=v_0(x)\ge 0,\not \equiv 0 & x\in {{\Omega }}, \end{array} \right . \end{equation}
where $u$ and $v$ represent the population densities of the prey and the predator, respectively; $d_2$ represents the predator diffusion rate; $\Omega$ is a bounded domain in the Euclidean space $R^N$ with smooth boundary, denoted as $\partial \Omega$; $\Delta$ is the Laplace operator in $R^N$; $\tau$ is the unit outer normal vector on $\partial \Omega$. The parameters $r$, $m$, $c$, $b$, $a$, $\beta$, $K$ are positive constants.
The case is interesting because a scalar reaction–diffusion equation typically cannot produce stable spatially heterogeneous patterns [Reference Casten and Holland5]. While problem (1.1) does not exhibit stable Turing-type patterns, DDI can still occur for suitable choices of the parameters. Interestingly, under certain conditions, the stationary problem associated with equation (1.1) can be reduced to a boundary value problem for a single reaction–diffusion equation with a discontinuous nonlinearity, which leads to the emergence of positive solutions with jump discontinuities. There is a considerable body of work on such problems; see, for example, [Reference Cygan, Marciniak–Czochra, Karch and Suzuki6, Reference Takagi and Zhang27, Reference Zhang and Yang32]. Hence, the focus of this paper is the following stationary problem associated with equation (1.1):
 \begin{equation} \left \{\begin{array}{lll} r\left (1-\dfrac {u}{K}\right )u-\dfrac {cuv}{m+bu}=0,& x\in \overline \Omega, \\[8pt] d_2\Delta v-av+\dfrac {\beta cuv}{m+bu}=0,& x\in \Omega, \\ \partial _\tau v =0,& x\in \partial \Omega .\\ \end{array} \right . \end{equation}
For convenience, we let
 \begin{equation*}f_1(u,v)=r\left (1-\dfrac {u}{K}\right )u-\dfrac {cuv}{m+bu},\qquad f_2(u,v)=-av+\dfrac {\beta cuv}{m+bu}.\end{equation*}
The main findings of this work can be summarized as follows. First, we carefully select the coefficients $a$, $c$, $m$, $b$ and $\beta$ so that the kinetic system of problem (1.1) (i.e., the system without diffusion) possesses only one positive equilibrium $(u^*_2,v^*_2)$, located on the right branch, as depicted in Figure 1(b). Then, by a variational approach to bifurcation, we show the existence of regular stationary solutions of problem (1.2). Next, by transforming the problem into a boundary value problem for a single equation in $v(x)$, we establish the existence of a discontinuous solution $(u(x),v(x))$ of problem (1.2) using the generalized mountain pass lemma (Theorem 4.1). This solution is characterized by a jump discontinuity in $u(x)$ and in $\Delta v(x)$. The main novelty of our work stems from the presence of a discontinuous nonlinearity in the reduced problem for $v(x)$, which invalidates the classical mountain pass lemma of Ambrosetti and Rabinowitz [Reference Ambrosetti and Rabinowitz3]. Fortunately, Chang [Reference Chang7] extended the theory to partial differential equations containing discontinuous nonlinearities, and this approach turns out to be well suited to our problem.

Figure 1. Nullclines for $f_1(u,v)=f_2(u,v)=0$. The blue curve represents the solution of $f_1(u,v)=0$, while the red curve represents the solution of $f_2(u,v)=0$. In $(a)$, we select $a=0.4$, $b=1$, $m=0.3$, $K=0.8$, $c=1$, $\beta =1.4$, $r=0.3$. In $(b)$, we select $a=0.4$, $b=1$, $m=0.3$, $K=0.8$, $c=1$, $\beta =0.6$, $r=0.3$.
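For orientation, a quick arithmetic check with these parameter values (using the formulas $u^*=am/(\beta c-ab)$ for the positive equilibrium and $u_M=(Kb-m)/(2b)$ for the maximum point of the prey nullcline, established in Propositions 2.1 and 2.2 below) gives
 \begin{equation*}(a)\,:\ u^*=\dfrac {am}{\beta c-ab}=\dfrac {0.12}{1.0}=0.12\lt u_M=\dfrac {Kb-m}{2b}=0.25,\qquad (b)\,:\ u_M=0.25\lt u^*=\dfrac {0.12}{0.2}=0.6\lt K=0.8.\end{equation*}
Hence, in panel (a) the positive equilibrium lies on the branch $u=h_1(v)$ (case (ii) of Proposition 2.2), whereas in panel (b) it lies on the right branch $u=h_2(v)$ (case (i)), which is the situation described above.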
We analyse problem (1.2) in the one-dimensional domain $[0,1]$ in order to understand the structure of pattern formation. Under certain conditions on the coefficients, the equation $f_1(u,v)=0$ with $u\geq 0$ has three distinct branches, which can be represented as $u = h_0(v)\, :\!\equiv 0$, $u = h_1(v)$ and $u = h_2(v)$, with $h_0(v) \lt h_1(v) \lt h_2(v)$ (see Figure 1). To begin, we select a positive constant $\gamma \in (0, v^*_2)$ and use the functions $u = h_0(v)$ and $u = h_2(v)$ in the following manner: $u = h_0(v)$ for $v\lt \gamma$ and $u = h_2(v)$ for $v\gt \gamma$. Problem (1.2) is thereby transformed into a boundary value problem for $v(x)$ with a discontinuous nonlinearity.
Next, by considering all values of the diffusion coefficient $d_2$, we construct monotone solutions of this equation, which are then used to construct symmetric solutions by reflecting the monotone solutions, as described in Theorem 5.1. To prove Theorem 5.1, we employ the shooting method used in the work of Takagi and Zhang [Reference Takagi and Zhang28]. Furthermore, by restricting $\gamma$ to a smaller interval within $(0,v^*_2)$, we can establish the uniqueness of solutions for any given $d_2$; this is accomplished by another form of the shooting method [Reference Mimura, Tabata and Hosono22], as demonstrated in Theorems 5.2 and 5.3. Moreover, the mode of $(u(x),v(x))$ refers to the number of points at which $v''(x)$ is discontinuous: an $n$-mode solution $v_n(x)$ ($n \geq 2$) has exactly $n$ points of discontinuity of $v''_n(x)$, whereas a one-mode solution $v_1(x)$ is either monotone increasing or monotone decreasing on $[0,1]$.
Finally, with the aid of bifurcation theory [Reference Crandall and Rabinowitz8], we construct nonconstant continuous steady states close to $(u^*_3, v^*_3)$ in the one-dimensional spatial domain $[0,1]$ and investigate their instability.
The remainder of the paper is organized as follows. In Section 2, we present preliminary results on the nonlinear functions $f_1$ and $f_2$ that will be used in the subsequent sections. In Section 3, we construct regular stationary solutions of problem (1.2) using bifurcation theory. In Section 4, we prove the existence of discontinuous stationary solutions of problem (1.2). In Section 5, we not only construct steady states with jump discontinuities but also explore various types of such states, which can exhibit monotone or symmetric behaviour; in addition, we verify the uniqueness of these steady states under certain additional conditions. In Section 6, we investigate the stability of the stationary solutions.
2. Preliminaries
We shall discuss certain properties of the functions $f_1$ and $f_2$ that will be applied in this paper.
Proposition 2.1. 
If $Kb\gt m$ and $u\geq 0$ hold, then $f_1(u,v)=0$ has three distinct branches: $u=h_0(v)=0$ for $v\in (\!-\infty, +\infty )$, $u=h_1(v)$ for $v\in (rm/c,v_M)$ and $u=h_2(v)$ for $v\in (\!-\infty, v_M)$, where
 \begin{equation*}u_M=\dfrac {Kb-m}{2b},\quad v_M=\dfrac {r\left (1-\dfrac {u_M}{K}\right )(m+bu_M)}{c}\gt 0.\end{equation*}
Proof. If $f_1(u,v)=0$, then
 \begin{equation*}u=0 \quad \mbox {or} \quad r\left (1-\dfrac {u}{K}\right )-\dfrac {cv}{m+bu}=0.\end{equation*}
It is easy to see that $v=r\left (1-\dfrac {u}{K}\right )(m+bu)/c$ has a maximum point $(u_M,v_M)$, where
 \begin{equation} u_M=\dfrac {Kb-m}{2b},\quad v_M=\dfrac {r\left (1-\dfrac {u_M}{K}\right )(m+bu_M)}{c}. \end{equation}
When $Kb\gt m$, we have $u_M\gt 0$, which shows that
 \begin{equation} v=p(u)=r\left (1-\dfrac {u}{K}\right )(m+bu)/c \end{equation}
is monotone increasing on $(\!-\infty, u_M)$ and monotone decreasing on $(u_M,+\infty )$. As a result, for $v\in (p(0),v_M)=(rm/c,v_M)$, $u = h_{1}(v)$ is monotone increasing with respect to $v$, and for $v\in (\!-\infty, v_M)$, $u=h_2(v)$ is monotone decreasing with respect to $v$. Direct calculation gives $u_M\lt K$. From the expression of $v_M$ in (2.1), we easily deduce that $v_M\gt 0$.
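For the reader's convenience, we note that the two nonzero branches can also be written explicitly: solving $r\left (1-\dfrac {u}{K}\right )(m+bu)=cv$ as a quadratic equation in $u$ gives, for $v\le v_M$,
 \begin{equation*}h_{1,2}(v)=\dfrac {1}{2}\left [\left (K-\dfrac {m}{b}\right )\mp \sqrt {\left (K+\dfrac {m}{b}\right )^2-\dfrac {4Kcv}{rb}}\,\right ],\qquad v_M=\dfrac {r(Kb+m)^2}{4Kbc},\end{equation*}
where the minus sign corresponds to $h_1$, the plus sign to $h_2$, and the closed form of $v_M$ is equivalent to (2.1). In particular, $h_2(0)=K$ and $h_1(rm/c)=0$, in agreement with Proposition 2.1.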
Proposition 2.2. 
Assume that $Kb\gt m$ and $\beta c\gt ab$ hold. Then,

(i) $(u^*_2,v^*_2)$ is a positive solution of $f_1(u,v)=f_2(u,v)=0$ for $u_M\lt am/(\beta c-ab)\lt K$ and it is on the branch $u=h_2(v)$, where $u^*_2=am/(\beta c-ab)$ and $v^*_2=r\left (1-\dfrac {u^*_2}{K}\right )(m+bu^*_2)/c$;

(ii) $(u^*_3,v^*_3)$ is a positive solution of $f_1(u,v)=f_2(u,v)=0$ for $0\lt am/(\beta c-ab)\lt u_M$ and it is on the branch $u=h_1(v)$, where $u^*_3=am/(\beta c-ab)$ and $v^*_3=r\left (1-\dfrac {u^*_3}{K}\right )(m+bu^*_3)/c$.
Proof. We omit the details because the proof is elementary.
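Since the proof is omitted, we sketch the short computation behind it. For $u,v\gt 0$, the equation $f_2(u,v)=0$ is equivalent to $\beta cu/(m+bu)=a$, that is,
 \begin{equation*}u=\dfrac {am}{\beta c-ab},\end{equation*}
which is positive precisely when $\beta c\gt ab$. Substituting this value into the nullcline $v=r\left (1-\dfrac {u}{K}\right )(m+bu)/c$ of $f_1$ gives the stated expressions for $v^*_2$ and $v^*_3$, and the position of $am/(\beta c-ab)$ relative to $u_M$ and $K$ determines whether the equilibrium lies on the branch $u=h_2(v)$ (case (i)) or on $u=h_1(v)$ (case (ii)).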
Proposition 2.3. Assume that the hypotheses of Proposition 2.2 (i) hold. Then we have the following results.

(i) $f_2(h_0(v),v)\lt 0$ for $v\in (0,+\infty )$ and $f_2(h_2(v),v)\gt 0$ for $v\in (0,v^*_2)$.

(ii) $\frac {d}{dv}f_2(h_0(v),v)\lt 0$ for $v\in (\!-\infty, +\infty )$ and there is a constant $\widetilde {d}\in (0,v^*_2)$ such that $\frac {d}{dv}f_2(h_2(v),v)\lt 0$ for $v\in (\widetilde {d},v^*_2]$.
Proof. (i) Obviously,
 \begin{equation*}f_2(h_0(v),v)=-av\lt 0 \quad \mbox {for}\quad v\in (0,+\infty ).\end{equation*}
Then, we find that $u=h_2(v)\gt u^*_2$ for $v\in (\!-\infty, v^*_2)$ and $\beta cu/(m+bu)$ is monotone increasing with respect to $u$. So
 \begin{equation*}\dfrac {\beta ch_2(v)}{m+bh_2(v)}-a\gt \dfrac {\beta cu^*_2}{m+bu^*_2}-a=0.\end{equation*}
This shows that $f_2(h_2(v),v)\gt 0$ for $v\in (0,v^*_2)$.
(ii) By direct calculations, we get
 \begin{equation*}\frac {d}{dv}f_2(h_0(v),v)=-a\lt 0 \quad \mbox {for}\quad v\in (\!-\infty, +\infty ),\end{equation*}
 \begin{equation*}\frac {d}{dv}f_2(h_2(v),v)=\left (\dfrac {\beta ch_2(v)}{m+bh_2(v)}-a\right )+v\dfrac {\beta cmh^{'}_2(v)}{(m+bh_2(v))^2}.\end{equation*}
Recall that $h^{'}_2(v)\lt 0$ for $v\in (\!-\infty, v_M)$ from the proof of Proposition 2.1. Combining this with $-a+\beta ch_2(v^*_2)/(m+bh_2(v^*_2))=0,$ we get $\frac {d}{dv}f_2(h_2(v),v)|_{v=v^*_2}\lt 0.$ By continuity, there exists a constant $\widetilde {d}\in (0,v^*_2)$ such that $\frac {d}{dv}f_2(h_2(v),v)\lt 0$ for $v\in (\widetilde {d},v^*_2]$.
3. Existence of regular stationary solutions
In this section, we mainly show the existence of regular stationary solutions of problem (1.2). We first review the results of [Reference Cygan, Marciniak–Czochra, Karch and Suzuki4], which are already formulated in a form suitable for systems coupling PDEs and ODEs. Thus, we consider a solution $(u,v)=(u(x),v(x))$ to the boundary value problem
 \begin{equation} \left \{\begin{array}{lll} \qquad \quad \;\;\; f(u,v)=0,& x\in \overline \Omega, \\ d_2\Delta _\tau v+g(u,v)=0,& x\in \Omega, \\ \end{array} \right . \end{equation}
with arbitrary $C^2$-functions $f$ and $g$, a constant $d_2 \gt 0$, and an open bounded domain $\Omega \subseteq R^N$ with $C^2$-boundary. Here, $\Delta _\tau$ denotes the Laplacian subject to Neumann boundary conditions.
Definition 1. 
([Reference Cygan, Marciniak–Czochra, Karch and Suzuki4]) A solution $(u,v)=(u(x),v(x))$ of problem (3.1) is called weak if

(i) $u$ is measurable,

(ii) $v\in W^{1,2}(\Omega ),$

(iii) $g(u,v)\in (W^{1,2}(\Omega ))^*$ $($the dual of the space $W^{1,2}(\Omega ))$,

(iv) the equation $f(u(x),v(x))=0$ is satisfied for almost all $x\in \Omega, $

(v) the equality
 \begin{equation*}-d_2\int _\Omega \nabla v(x)\cdot \nabla \zeta (x) dx+\int _\Omega g(u(x),v(x))\zeta (x)dx=0\end{equation*}
holds for all test functions $\zeta \in W^{1,2}(\Omega ).$
Definition 2. 
([Reference Cygan, Marciniak–Czochra, Karch and Suzuki4]) The weak solution of problem (3.1) in the sense of Definition 1 is called a regular solution if there is a $C^2$-function $\theta\, :\, {R}\rightarrow {R}$ such that $u(x)=\theta (v(x))$ for all $x\in \Omega .$
Remark 1. It is easy to find that every regular solution of problem (3.1) satisfies
 \begin{equation*}f(u(x),v(x))=f(\theta (v(x)),v(x))=0\quad \mbox {for all}\quad x\in \Omega, \end{equation*}
where $v=v(x)$ is a solution of the elliptic Neumann problem
 \begin{equation} d_2\Delta _\tau v+P(v)=0 \quad \mbox {for}\quad x\in \Omega \end{equation}
with $P(v)=g(\theta (v),v).$
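To illustrate this reduction in the setting of the present paper (anticipating the form used in Section 4), a regular solution of problem (1.2) taking values on the branch $u=h_2(v)$ corresponds to $\theta =h_2$ and
 \begin{equation*}P(v)=f_2(h_2(v),v)=-av+\dfrac {\beta ch_2(v)v}{m+bh_2(v)},\end{equation*}
while the branch $u=h_0(v)\equiv 0$ gives $P(v)=f_2(0,v)=-av$; in both cases $v$ solves the scalar Neumann problem (3.2) with the corresponding nonlinearity.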
Proposition 3.1. 
Assume that $N\le 6$. Let $y\in C^2_b(R)$ satisfy $y(0)=y'(0)=0$. There is a sequence of numbers $d_s\rightarrow d_2$ and a sequence of non-constant functions $v_s\in W^{1,2}(\Omega )$ such that $||v_s||_{W^{1,2}}\rightarrow 0$, which satisfy the boundary value problem
 \begin{equation} d_s\Delta _\tau v_s+(\lambda _k+d_2-d_s)v_s+y(v_s)=0\quad \mbox {for}\quad x\in \Omega . \end{equation}
Proof. We prove this proposition using the Rabinowitz bifurcation theorem for variational equations [Reference Rabinowitz23]. To this end, assume that
(i) $M$ is a real Hilbert space,

(ii) $X\in C^2(M,R)$ with $X'(u)=Lu+Z(u)$,

(iii) $L$ is linear and $Z(u)=o(||u||)$ at $u=0,$

(iv) $\lambda$ is an isolated eigenvalue of $L$ of finite multiplicity.

If these assumptions hold, then by [Reference Rabinowitz23], $(\lambda, 0)\in R\times M$ is a bifurcation point of
 \begin{equation} A(\mu, v)\equiv Lv+Z(v)-\mu v=0. \end{equation}
Thus, every neighbourhood of $(\lambda, 0)$ contains a solution $(\mu, v)$ of equation (3.4) with $||v||\neq 0$. We apply the usual Sobolev space $M=W^{1,2}(\Omega )$ with the equivalent scalar product
 \begin{equation*}\left \langle u,v\right \rangle _{W^{1,2}(\Omega )}=\int _\Omega \nabla u\cdot \nabla v dx+\int _\Omega uv dx\end{equation*}
and the functional
 \begin{equation*}X(v)=\dfrac {\lambda _k+d_2}{2}\int _\Omega v^2dx+\int _\Omega S(v)dx\end{equation*}
with $S(v)=\int _{0}^{v}y(s)ds$. It is easy to find that $X\in C(W^{1,2}(\Omega ),R)$ and $X(v)$ is differentiable in the Fréchet sense for each $v\in W^{1,2}(\Omega )$. By simple calculation, we have
 \begin{equation*}DX(v)\zeta =(\lambda _k+d_2)\int _\Omega v\zeta dx+\int _\Omega y(v)\zeta dx\end{equation*}
with $DX\in C(W^{1,2}(\Omega )$, $\mbox {Lin}(W^{1,2}(\Omega ),R)).$ The second Fréchet derivative at the point $v\in W^{1,2}(\Omega )$ is represented by the bilinear form
 \begin{equation*}\left \langle D^2X(v)\zeta, \kappa \right \rangle =(\lambda _k+d_2)\int _\Omega \zeta \kappa dx+\int _\Omega y'(v)\zeta \kappa dx.\end{equation*}
Next, we prove that $D^2X(v)\in C\left (W^{1,2}(\Omega ),\mbox {Lin}(W^{1,2}(\Omega ),\mbox {Lin}(W^{1,2}(\Omega ),R))\right )$. For $v_n\rightarrow v$ in $W^{1,2}(\Omega )$ and $\zeta, \kappa \in W^{1,2}(\Omega )$, we estimate
 \begin{align*}|\left \langle (D^2X(v_n)-D^2X(v))\zeta, \kappa \right \rangle |&\le \int _\Omega |y'(v_n)-y'(v)||\zeta ||\kappa |dx \\&\le ||y''||_\infty \int _\Omega |v_n-v||\zeta ||\kappa |dx\\&\le ||y''||_\infty ||v_n-v||_3||\zeta ||_3||\kappa ||_3\\&\le ||y''||_\infty ||v_n-v||_{W^{1,2}}||\zeta ||_{W^{1,2}}||\kappa ||_{W^{1,2}}.\end{align*}
The last inequality comes from the Sobolev embedding assuming $N\le 6$.
In particular, for each test function $\zeta \in W^{1,2}(\Omega )$ we have
 \begin{equation*}X'(v)(\zeta )=(\lambda _k+d_2)\int _\Omega v\zeta dx+\int _\Omega y(v)\zeta dx\equiv Lv(\zeta )+Z(v)(\zeta ).\end{equation*}
Therefore, we find that $Z(v)=o(||v||_{W^{1,2}})$ as $||v||_{W^{1,2}}\rightarrow 0$ by the assumptions on $y=y(v)$. It is easy to see that $\mu =1$ is an isolated eigenvalue of the operator $L$ with finite multiplicity. Indeed, the eigenvalue equation at $\mu =1$ reads
 \begin{equation*}Lv(\zeta )=\left \langle v,\zeta \right \rangle _{W^{1,2}(\Omega )}\quad \mbox {for all}\quad \zeta \in W^{1,2}(\Omega ),\end{equation*}
that is,
 \begin{equation*}(\lambda _k+d_2)\int _\Omega v\zeta dx=\int _\Omega \nabla v\cdot \nabla \zeta dx+\int _\Omega v\zeta dx\quad \mbox {for all}\quad \zeta \in W^{1,2}(\Omega ),\end{equation*}
which reduces to an eigenvalue problem for $\Delta _\tau$; here the property that $\lambda _k$ is an isolated eigenvalue of finite multiplicity is applied. So, by the Rabinowitz theorem [Reference Rabinowitz23], $(1,0)$ is a bifurcation point of (3.4), which means that there is a sequence of numbers $d_s\rightarrow d_2$ and nonzero $\left \{ {v_s}\right \}\subset W^{1,2}(\Omega )$ with $||v_s||_{W^{1,2}}\rightarrow 0$, satisfying
 \begin{equation*}Lv_s(\zeta )+Z(v_s)(\zeta )-d_s\left \langle v_s,\zeta \right \rangle _{1,2}=0\quad \mbox {for all}\quad \zeta \in W^{1,2}(\Omega ),\end{equation*}
which is equivalent to the equation satisfied by the weak solutions $v_s\in W^{1,2}(\Omega )$ of problem (3.3):
 \begin{equation*}-d_s\int _\Omega \nabla v_s\cdot \nabla \zeta dx+(\lambda _k+d_2-d_s)\int _\Omega v_s\zeta dx+\int _\Omega y(v_s)\zeta dx=0\end{equation*}
for all $\zeta \in W^{1,2}(\Omega )$.
Proposition 3.2. 
Assume that $N\le 6$. Suppose that $(\overline u,\overline v)\in R^2$ is a constant solution of problem (3.1) such that $f(\overline u,\overline v)=0$ and $g(\overline u,\overline v)=0$. We use the following notation
 \begin{equation} a_0=f_{u}(\overline u,\overline v),\quad b_0=f_{v}(\overline u,\overline v),\quad c_0=g_{u}(\overline u,\overline v),\quad d_0=g_{v}(\overline u,\overline v) \end{equation}
and assume that
 \begin{equation} a_0\neq 0 \quad \mbox {and}\quad \dfrac {1}{a_0}\mbox {det}{\left (\begin{array}{cc} a_0 & b_0\\ c_0 & d_0 \end{array}\right )}=d_2\lambda _k\gt 0, \end{equation}
for some eigenvalue $\lambda _k$ of $-\Delta _\tau$. Then, there is a sequence of real numbers $d_s \rightarrow d_2$ such that the following perturbed problem
 \begin{equation} \left \{\begin{array}{ll} f(u,v)=0,& x\in \overline \Omega, \\ d_s\Delta _\tau v+(d_2-d_s)(v-\overline v)+g(u,v)=0,& x\in \Omega, \\ \end{array} \right . \end{equation}
has a non-constant regular solution.
Proof. We construct non-constant solutions to the reaction–diffusion–ODE system by means of Proposition 3.1. In the following, we denote by $B_\rho (\overline v)$ the open ball of radius $\rho \gt 0$ centred at $\overline v$. First, we show that only a finite number of the functions $v_s$ from Proposition 3.1 can be constant. Indeed, if there were a constant subsequence $\left \{ {v_{l_n}}\right \}$ satisfying equation (3.3) with $v_{l_n}\rightarrow 0$, then we would obtain $y'(0)=-\lambda _k$, which is obviously a contradiction.
Next, since $f_{u}(\overline u,\overline v)\neq 0$, there exist $\rho \gt 0$ and a function $\theta \in C^2(B_\rho (\overline v))$ such that $\theta (\overline v)=\overline u$ and $f(\theta (V),V)=0$ for all $V\in B_\rho (\overline v)$. Then, for all $V\in B_\rho (\overline v)$, we prove that $P(V)\equiv g(\theta (V),V)$ satisfies $P(\overline v)=0$ and $P'(\overline v)=d_2\lambda _k\gt 0.$ It is easy to find that $P(\overline v)=g(\theta (\overline v),\overline v)=g(\overline u,\overline v)=0$. In addition, differentiating the function $P(V)=g(\theta (V),V)$ gives
 \begin{equation} P'(V)=g_{u}(\theta (V),V)\theta '(V)+g_{v}(\theta (V),V). \end{equation}
On the other hand, we differentiate the equation $f(\theta (V),V)=0$ to obtain $f_{u}(\theta (V),V)\theta '(V)+f_{v}(\theta (V),V)=0$, or, equivalently,
 \begin{equation} \theta '(V)=-f_{u}^{-1}(\theta (V),V)f_{v}(\theta (V),V). \end{equation}
In the end, choosing $V=\overline v$, substituting equation (3.9) into equation (3.8) and using (3.5), we have
 \begin{equation*}P'(\overline v)=-g_{u}(\theta (\overline v),\overline v)f_{u}^{-1}(\theta (\overline v),\overline v)f_{v}(\theta (\overline v),\overline v)+g_{v}(\theta (\overline v),\overline v) =-c_0a_0^{-1}b_0+d_0.\end{equation*}
Notice that
 \begin{equation} \begin{array}{l} -c_0a_0^{-1}b_0+d_0=\mbox {det}{\left (\begin{array}{cc} 1 & 0\\ 0 & -c_0a_0^{-1}b_0+d_0 \end{array}\right )}=\dfrac {1}{a_0}\mbox {det}{\left (\begin{array}{cc} a_0 & 0\\ c_0 & -c_0a_0^{-1}b_0+d_0 \end{array}\right )}\\\\ \qquad \qquad \qquad \;\;=\dfrac {1}{a_0}\mbox {det}\left ({\left (\begin{array}{cc} a_0 & b_0\\ c_0 & d_0 \end{array}\right )}{\left (\begin{array}{cc} 1 & -a^{-1}_0b_0\\ 0 & 1 \end{array}\right )}\right )=\dfrac {1}{a_0}\mbox {det}{\left (\begin{array}{cc} a_0 & b_0\\ c_0 & d_0 \end{array}\right )}.\end{array} \end{equation}
So, $P'(\overline v)=d_2\lambda _k\gt 0.$
Next, $\overline P$ represents an arbitrary extension of the function $P$ to the whole line $R$ that satisfies
 \begin{equation} \overline P\in C^2_b(R)\quad \mbox {and}\quad \overline P(V)=P(V)\quad \mbox {for all}\quad V\in B_\rho (\overline v). \end{equation}
By Proposition 3.1, we have a sequence $d_s\rightarrow d_2$ such that
 \begin{equation} d_s\Delta _\tau v_s+(d_2-d_s)(v_s-\overline v)+\overline P(v_s)=0\quad \mbox {for}\quad x\in \Omega \end{equation}
has a non-constant solution $v_s\in W^{1,2}(\Omega )$. Indeed, it is sufficient to search for these solutions in the form $v_s=\overline v+m_s$, where $m_s$ satisfies
 \begin{equation} d_s\Delta _\tau m_s+(\overline P'(\overline v)+d_2-d_s)m_s+ y(m_s)=0\quad \mbox {in}\quad \Omega \end{equation}
with $\overline P'(\overline v)=d_2\lambda _k$ and $y(m_s)=\overline P(\overline v+m_s)-\overline P'(\overline v)m_s$, which satisfies $y\in C^2_b(R)$, $y(0)=0$ and $y'(0)=0$.
Next, we can find the solutions $m_s$ of problem (3.13) by Proposition 3.1. Therefore, according to standard elliptic theory, we have $||m_s||_{W^{2,2}(\Omega )}\rightarrow 0$. By bootstrap arguments using the elliptic $L_p$ estimates and the Sobolev embedding theorem, we know that $||m_s||_{W^{2,q}(\Omega )}\rightarrow 0$ for $q\gt \frac {N}{2}$ and hence $||m_s||_{L^\infty (\Omega )}\rightarrow 0$. In particular, by (3.11), if $||m_s||_\infty \le \rho$, we get $\overline P(v_s)=\overline P(\overline v+m_s)=P(\overline v+m_s)=P(v_s)$. So, $v_s=\overline v+m_s$ is a nontrivial solution of problem (3.12), in which $\overline P(v_s)$ may be replaced by $P(v_s)$. In the end, we define $u_s=\theta (v_s)$ to get a nontrivial regular solution of problem (3.7).
Now we apply the previous results to the specific reaction–diffusion–ODE model (1.2). So, (1.2) may be rewritten as
 \begin{equation} \left \{\begin{array}{lll} r\left (1-\dfrac {u}{K}\right )u-\dfrac {cuv}{m+bu}=0,& x\in \overline \Omega, \\ d_2\Delta _\tau v-av+\dfrac {\beta cuv}{m+bu}=0,& x\in \Omega .\\ \end{array} \right . \end{equation}
By simple calculation, it is easy to see that problem (3.14) has the trivial equilibrium $(\overline u_1,\overline v_1)=(0,0)$, the semi-trivial equilibrium $(\overline u_2,\overline v_2)=(K,0)$ and, if $0\lt \dfrac {am}{\beta c-ab}$, the positive equilibrium $(\overline u_3,\overline v_3)=(u^*,v^*)$, where
 \begin{equation*}u^*=\dfrac {am}{\beta c-ab},\qquad v^*=\dfrac {\beta mr[K(\beta c-ab)-am]}{K(\beta c-ab)^2}=\dfrac {\beta ru^*(K-u^*)}{aK}.\end{equation*}
We always assume that $a\lt \dfrac {\beta cK}{m+bK}$ and $u^*\lt K$.
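Note, in passing, that under $\beta c\gt ab$ these two requirements are equivalent: indeed,
 \begin{equation*}u^*=\dfrac {am}{\beta c-ab}\lt K\ \Longleftrightarrow \ am\lt K(\beta c-ab)\ \Longleftrightarrow \ a(m+bK)\lt \beta cK\ \Longleftrightarrow \ a\lt \dfrac {\beta cK}{m+bK},\end{equation*}
and either of them guarantees that $v^*\gt 0$.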
Theorem 3.3. 
Assume that $2\overline u_3\gt K$, where $a$, $b$, $c$, $r$, $m$, $\beta$, $K$ are positive constants. Then, for a discrete sequence of diffusion coefficients $d_2\gt 0$, problem (3.14) has a regular stationary solution.
Proof. We consider the solution $(\overline u_3,\overline v_3)$ of problem (3.14) and use Proposition 3.2 with the constant stationary solution $(\overline u,\overline v)=(\overline u_3,\overline v_3)$. Since $2\overline u_3\gt K$, we have $1-\dfrac {2\overline u_3}{K}\lt 0$ and hence $a_0\lt 0$. By simple calculation, we can find that
 \begin{equation*}\mbox {det}{\left (\begin{array}{cc} a_0 & b_0\\ c_0 & d_0 \end{array}\right )}= \left |\begin{array}{cc} r-\dfrac {2r\overline u_3}{K}-\dfrac {cm\overline v_3}{(m+b\overline u_3)^2} & -\dfrac {c\overline u_3}{m+b\overline u_3}\\[8pt] \dfrac {\beta cm\overline v_3}{(m+b\overline u_3)^2} & 0 \end{array}\right |=\dfrac {\beta cm\overline v_3}{(m+b\overline u_3)^2}\cdot \dfrac {c\overline u_3}{m+b\overline u_3}\gt 0. \end{equation*}
As a result, for some eigenvalue $\lambda _k\gt 0$, we may select $d_2\gt 0$ to satisfy equation (3.6).
4. Existence of steady states with jump discontinuities
In this section, we prove the existence of non-constant solutions of problem (1.2) by applying a generalized mountain pass lemma due to Chang [Reference Chang7, Reference Zhang31]. According to the first equation of (1.2), we obtain $u=h_0(v)$, $u=h_1(v)$ and $u=h_2(v)$. Applying the functions $u=h_0(v)$ and $u=h_2(v)$, we get the following boundary value problem for $v$ alone:
 \begin{equation} \left \{\begin{array}{l} d_2\Delta v+f_2^\gamma (v)=0, \ x\in \Omega, \\ \partial _\tau v =0, \ x\in \partial \Omega, \\ \end{array} \right . \end{equation}
where
 \begin{equation} f_2^\gamma (v)= \left \{\begin{array}{l} f_2(h_0(v),v)=-av,\ v\lt \gamma, \\ f_2(h_2(v),v)=-av+\dfrac {\beta ch_2(v)v}{m+bh_2(v)},\ \gamma \lt v\leq v_M, \end{array} \right . \end{equation}
and $\gamma \in (\xi, v^*_2)$. Here $\xi =p(0)=rm/c$, where $p$ has been defined in (2.2).
Theorem 4.1. 
Assume that the hypotheses of Proposition 2.2 (i) hold. Then problem (4.1) has at least one classical nontrivial solution $v(x)$ such that $0\leq v(x)\leq v^*_2.$ In particular, $v(x)$ must cross $\gamma$.
Remark 2. By a classical nontrivial solution we mean a solution $v(x)$ of (4.1) such that $v(x)\not \equiv 0$, $v(x)\not \equiv v^*_2$, $v(x)\in C^1({\bar {\Omega }})$ and $\Delta v(x)$ has a jump discontinuity on $\bar {\Omega }$.
To prove Theorem 4.1, we first extend $f_2^\gamma (v)$ to $\widetilde f_2^\gamma (v)$ as follows:
 \begin{equation} \widetilde f_2^\gamma (v)= \left \{\begin{array}{lll} f_2^\gamma (v) & \text{for}\ v\leq v_M,\\ -a(v-v_M)+v_M\left (\!-a+\dfrac {\beta ch_2(v_M)}{m+bh_2(v_M)}\right ) & \text{for}\ v\gt v_M,\\ \end{array} \right . \end{equation}
and consider
 \begin{equation} \left \{\begin{array}{lll} d_2\Delta v+\widetilde f_2^\gamma (v)=0,& x\in \Omega, \\ \partial _\tau v =0,& x\in \partial \Omega .\\ \end{array} \right . \end{equation}
Since $\widetilde f_2^\gamma (v)$ is discontinuous at $v=\gamma$, we search for solutions of (4.4) in $W^{1,2}(\Omega )$. The energy functional $J_{d_2}(v)$ associated with (4.4) is given by
 \begin{equation} J_{d_2}(v)=\dfrac {d_2}{2}\int _\Omega |\nabla v|^2dx-\int _\Omega G^\gamma (v)dx, \end{equation}
where $G^\gamma (v)=\int _{0}^{v} \widetilde f_2^\gamma (s) ds.$ Moreover, we endow $W^{1,2}(\Omega )$ with the norm
 \begin{equation*}||v||=\left (\int _\Omega |\nabla v|^2dx+\int _\Omega v^2dx\right )^{\frac {1}{2}}.\end{equation*}
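It may be convenient to record $G^\gamma$ explicitly on the range relevant below (a direct integration of (4.3)):
 \begin{equation*}G^\gamma (v)=-\dfrac {a}{2}v^2\quad \mbox {for}\ v\leq \gamma, \qquad G^\gamma (v)=-\dfrac {a}{2}v^2+\int _{\gamma }^{v}\dfrac {\beta ch_2(s)s}{m+bh_2(s)}ds\quad \mbox {for}\ \gamma \lt v\leq v_M.\end{equation*}
In particular, $G^\gamma$ is continuous, while its derivative $\widetilde f_2^\gamma$ jumps upward by $\beta ch_2(\gamma )\gamma /(m+bh_2(\gamma ))\gt 0$ at $v=\gamma$.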
Note that $v=0$ and $v=v^*_2$ are two constant solutions of (4.4). As a result, the proof will be divided into two cases: Case 1, $J_{d_2}(0)\leq J_{d_2}(v^*_2)$, and Case 2, $J_{d_2}(0)\gt J_{d_2}(v^*_2)$.
Firstly, we consider Case 1. Let $s=v-v^*_2$ and $Q_{d_2}(s)=J_{d_2}(s+v^*_2)-J_{d_2}(v^*_2).$ Then
 \begin{equation} Q_{d_2}(s)=\dfrac {d_2}{2}\int _\Omega |\nabla s|^2dx-\int _\Omega (G^\gamma (s+v^*_2)-G^\gamma (v^*_2))dx. \end{equation}
Here, we rephrase the definitions of the generalized gradient, the Palais–Smale condition (henceforth denoted by (P.S.)) and the generalized mountain pass lemma in the context of our problem.
Definition 3. 
If $Q\,:\, W^{1,2}(\Omega )\rightarrow R$ is a locally Lipschitz continuous function, then for each $\phi \in W^{1,2}(\Omega )$, we can define the generalized directional derivative $Q^o(s;\,\phi )$ in the direction $\phi$ by
 \begin{equation*}Q^o(s;\,\phi )={\mathop {\overline \lim }\limits _{u\to 0,r\downarrow 0}} \frac {Q(s+u+r\phi )-Q(s+u)}{r},\end{equation*}
and the generalized gradient of $Q(s)$ at $s$, denoted by $\partial Q(s)$, is defined to be the subdifferential of the function $Q^o(s;\,\phi )$ at $0$. That is, $\psi \in \partial Q(s)\subset (W^{1,2}(\Omega ))^*$ if and only if $\left \langle {\psi, \phi } \right \rangle \leq Q^o(s;\,\phi )$ for all $\phi \in W^{1,2}(\Omega )$, where $(W^{1,2}(\Omega ))^*$ is the dual space of $W^{1,2}(\Omega )$.
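As an informal one-dimensional illustration of this notion (in the spirit of Chang [Reference Chang7]), consider the primitive $F(t)=\int _{0}^{t}f(s)ds$ of a bounded function $f$ that is continuous except for a jump at $t=\gamma$. Then the generalized gradient of $F$ at $\gamma$ is the whole interval between the one-sided limits,
 \begin{equation*}\partial F(\gamma )=\left [\min \{f(\gamma ^-),f(\gamma ^+)\},\ \max \{f(\gamma ^-),f(\gamma ^+)\}\right ],\end{equation*}
which indicates how the discontinuity of $\widetilde f_2^\gamma$ at $v=\gamma$ enters the condition $0\in \partial Q_{d_2}(s)$ appearing in Remark 3 below.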
Definition 4. 
We say that a locally Lipschitz continuous function $Q$ satisfies the Palais–Smale condition (P.S.) if any sequence $\left \{ {s_n}\right \}\subset W^{1,2}(\Omega )$ for which $\left \{{Q(s_n)}\right \}$ is bounded and $\lambda (s_n)=\mathop {\min }\limits _{\psi \in \partial Q(s_n)}||\psi ||_ {(W^{1,2}(\Omega ))^*}\rightarrow 0$ possesses a convergent subsequence.
Theorem 4.2. 
([Reference Chang7]) Let $Q(s)$ be a locally Lipschitz continuous function on $W^{1,2}(\Omega )$ which satisfies (P.S.) and assume that

(i) $Q(0)=0$ and there exist positive constants $\rho$, $\alpha$ such that $Q\gt 0$ in $B_{\rho }\backslash \{0\}$ and $Q\gt \alpha$ on $\partial B_{\rho }$;

(ii) there is an $e\in W^{1,2}(\Omega )$, $e\ne 0$, such that $Q(e)\le 0$.

Then $Q(s)$ has a critical point. Here, $B_{\rho }=\left \{ {s\in W^{1,2}(\Omega )\,|\ ||s||_{W^{1,2}(\Omega )}\le \rho }\right \}$.
Next, we prove that $Q_{d_2}(s)$ satisfies all the assumptions of Theorem 4.2.
Remark 3. If $0\in \partial Q_{d_2}(s)$, then $s\in W^{1,2}(\Omega )$ is a critical point of $Q_{d_2}$, as stated in [Reference Chang7]. Once we obtain a critical point $s$ of $Q_{d_2}(s)$, then $v=s+v^*_2$ is a critical point of $J_{d_2}(v)$.
Proposition 4.3. 
Assume that the hypotheses of Proposition 2.2 (i) hold. Then $Q_{d_2}(s)$ is a locally Lipschitz continuous function on $W^{1,2}(\Omega )$.
Proof. According to the definition of $\widetilde f_2^\gamma (v)$ in (4.3), we rewrite equation (4.6) as
 \begin{equation} Q_{d_2}(s)=R^*-\int _\Omega \int _{0}^{s} (h^\gamma (w+v^*_2)-h^\gamma (v^*_2))dwdx, \end{equation}
where $R^*=\dfrac {d_2}{2}\int _\Omega |\nabla s|^2dx+\dfrac {a}{2}\int _\Omega s^2dx$. Clearly, $R^*$ is $C^1$ on $W^{1,2}(\Omega )$ and hence locally Lipschitz continuous. On the other hand, we know that $h^\gamma (v)=0$ if $v\lt \gamma$, $h^\gamma (v)={\beta ch_2(v)}v/({m+bh_2(v)})$ if $\gamma \lt v\leq v_M$ and $h^\gamma (v)={\beta ch_2(v_M)}v_M/({m+bh_2(v_M)})$ if $v\gt v_M$. Thus, there exists a constant $a_1$ such that $|h^\gamma (v)-h^\gamma (v^*_2)|\lt a_1$ for all $v\in R$, which means that $|h^\gamma (s+v^*_2)-h^\gamma (v^*_2)|\lt a_1$ for all $s\in R$. Let $H(s)=\int _{0}^{s} (h^\gamma (w+v^*_2)-h^\gamma (v^*_2))dw$ and $B(s)=\int _\Omega H(s)dx$. Then
 \begin{equation*}|H(s_1)-H(s_2)|\le \left |\int _{s_2}^{s_1}|h^\gamma (w+v^*_2)-h^\gamma (v^*_2)|\,dw\right |\lt a_1|s_1-s_2|,\end{equation*}
so that
 \begin{align} |B({s_1})-B({s_2})|&\lt {a_1}{\int }_\Omega |{s_1}-{s_2}|dx\le {a_1}\,\mbox {mes}{(\Omega )^{1/2}}||{s_1} - {s_2}||_{{L^2}(\Omega )} \nonumber\\ &\le {a_1}\,\mbox {mes}{(\Omega )^{1/2}}||{s_1}-{s_2}||_{W^{1,2}(\Omega )}. \end{align}
Therefore, $B(s)$ is a locally Lipschitz continuous function on $L^2(\Omega )$ and on $W^{1,2}(\Omega )$. From this, it can be concluded that $Q_{d_2}(s)$ is locally Lipschitz continuous on $W^{1,2}(\Omega )$.
Proposition 4.4. 
Assume that Proposition 2.2 (i) holds. Let $\left \{ {s_n}\right \}\subset W^{1,2}(\Omega )$ be a sequence such that $\left \{{Q_{d_2}(s_n)}\right \}$ is bounded and $\lambda (s_n)=\mathop {\min }\limits _{\psi \in \partial Q_{d_2}(s_n)}||\psi ||_ {(W^{1,2}(\Omega ))^*}\rightarrow 0$ as $n\rightarrow \infty .$ Then $\left \{ {s_n}\right \}$ possesses a convergent subsequence.
Proof. By Proposition 4.3, we know
 \begin{align*} Q_{d_2}(s_n)&=\dfrac {d_2}{2}\int _\Omega |\nabla s_n|^2dx+\dfrac {a}{2}\int _\Omega s_n^2dx-\int _\Omega H(s_n)dx\\ &\geq \dfrac {1}{2}\mbox {min}\left \{ d_2,a\right \}||s_n||_{W^{1,2}(\Omega )}^2-a_1\mbox {mes}{(\Omega )^{1/2}}||s_n||_{W^{1,2}(\Omega )}.\end{align*}
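To make the boundedness step below explicit: since $\left \{{Q_{d_2}(s_n)}\right \}$ is bounded, say $|Q_{d_2}(s_n)|\le C$ for all $n$, the above estimate is a quadratic inequality in $t_n=||s_n||_{W^{1,2}(\Omega )}$, namely $\dfrac {1}{2}\mbox {min}\left \{ d_2,a\right \}t_n^2-a_1\mbox {mes}(\Omega )^{1/2}t_n\le C$, and therefore
 \begin{equation*}t_n\le \dfrac {a_1\mbox {mes}(\Omega )^{1/2}+\sqrt {a_1^2\mbox {mes}(\Omega )+2C\,\mbox {min}\left \{ d_2,a\right \}}}{\mbox {min}\left \{ d_2,a\right \}}.\end{equation*}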
Thus, $\left \{ {s_n}\right \}$ is bounded in $W^{1,2}(\Omega )$ and there is a weakly convergent subsequence $\left \{ {s_{n_i}}\right \}$ with limit $s_0$ in $W^{1,2}(\Omega ).$ Since the embedding $W^{1,2}(\Omega )\hookrightarrow L^2(\Omega )$ is compact, $\left \{ {s_{n_i}}\right \}$ is strongly convergent in $L^2(\Omega )$. Recalling Proposition 4.3, both $B(s)$ and $Q_{d_2}(s)$ are locally Lipschitz continuous. So, according to Definition 3, the generalized gradients of $B(s)$ and $Q_{d_2}(s)$ with respect to $s$ exist, and they are denoted by $\partial B(s)$ and $\partial Q_{d_2}(s)$, respectively. Note that
 \begin{equation} \partial Q_{d_2}(s_n)\subset \left \{ Ls_n\right \}-\partial B(s_n), \end{equation}
where $L$ is the elliptic differential operator defined by $Ls=-d_2\Delta s+as.$ We have applied Propositions (4) and (3) of [Reference Chang7] to prove (4.9). Since $\lambda (s_n)\rightarrow 0$ as $n\rightarrow \infty, $ there is a sequence $\rho _{n_i}\in \partial B(s_{n_i})$ such that
 \begin{equation*}Ls_{n_i}-\rho _{n_i}\rightarrow 0 \quad \mbox {in}\quad {(W^{1,2}(\Omega ))^*}.\end{equation*}
Because $\rho _{n_i}\in \partial B(s_{n_i})$, the sequence $\left \{ {\rho _{n_i}}\right \}$ is bounded in $L^2(\Omega )$; this follows from the local Lipschitz continuity of $B$ on $L^2(\Omega )$ established in (4.8). Therefore, there is a subsequence $\left \{ {n'_i}\right \}$ of $\left \{ {{n_i}}\right \}$ such that $\left \{ {{\rho _{n'_i}}}\right \}$ is weakly convergent to some $\rho _0$ in $L^2(\Omega )$, and hence strongly convergent in $(W^{1,2}(\Omega ))^*$. So,
 \begin{equation*}Ls_{n'_i}\rightarrow \rho _0 \quad \mbox {in}\quad {(W^{1,2}(\Omega ))^*},\end{equation*}
which shows that $s_{n'_i}\rightarrow L^{-1}\rho _0$ in $W^{1,2}(\Omega ).$
Proposition 4.5. Assume that Proposition 2.2 (i) holds. Then the following hold:
(i) $Q_{d_2}(0)=0$ and there are constants $\rho \gt 0$, $\alpha \gt 0$ such that $Q_{d_2}\geq \alpha$ if $||s||_{W^{1,2}(\Omega )}=\rho$;
(ii) there is an $e\in W^{1,2}(\Omega )$ with $||e||_{W^{1,2}(\Omega )}\gt \rho$ such that $Q_{d_2}(e)\le 0.$
Proof. (i) By (4.7), we know $Q_{d_2}(0)=0.$ Moreover, we can rewrite (4.7) as follows:
 \begin{align} Q_{d_2}(s)&=\dfrac {d_2}{2}{\int }_\Omega |\nabla s|^2dx+\dfrac {1}{2}{\int }_\Omega (a-(h^\gamma )_v(v^*_2))s^2dx\nonumber \\ &\quad-{\int }_\Omega {\int }_{0}^{s} (h^\gamma (w+v^*_2)-h^\gamma (v^*_2)-(h^\gamma )_v(v^*_2)w)dwdx. \end{align}
From the definitions of $f_2(h_2(v),v)$ in (4.2) and $h^\gamma (v)$ after (4.7), we know $f_2(h_2(v),v)=-av+h^\gamma (v),$ which implies $\partial _v f_{2}(h_2(v),v)=-a+(h^\gamma )_v(v).$ By the proof of Proposition 2.3 (ii), we see $\partial _v f_{2}(h_2(v^*_2),v^*_2)\lt 0,$ so there is a constant $a_2\gt 0$ such that $a-(h^\gamma )_v(v^*_2)\gt a_2$ for all $x\in \overline {\Omega }$.
 Let $\Phi (s)=h^\gamma (s+v^*_2)-h^\gamma (v^*_2)-(h^\gamma )_v(v^*_2)s$ and $\Psi (s)=\int _{0}^{s}\Phi (w)dw.$ It is easy to see that $\Phi (s)=o(|s|)$ at $s=0$ uniformly in $x\in \overline {\Omega }$. Thus, for any $\iota \gt 0$, there exists a $\delta \gt 0$ such that $|\Psi (s)|\le \iota s^2$ if $|s|\le \delta$. In addition, recalling from the proof of Proposition 4.3 that $|h^\gamma (w+v^*_2)-h^\gamma (v^*_2)|\lt a_1$, we conclude that for every $\varepsilon \in (1,(N+2)/(N-2))$ there exists a constant $a_3\gt 0$ such that $|h^\gamma (w+v^*_2)-h^\gamma (v^*_2)|\lt a_1+a_3|w|^\varepsilon$ for all $w\in R$. This means that there exists a constant $a_4\gt 0$ such that $|H(s)|=|\int _{0}^{s} (h^\gamma (w+v^*_2)-h^\gamma (v^*_2))dw|\le a_4|s|^{\varepsilon +1}$ for $|s|\gt \delta$. Thanks to the Sobolev embedding theorem, we get
 \begin{align*}Q_{d_2}(s)&=\dfrac {d_2}{2}\int _{|s|\gt \delta } |\nabla s|^2dx+\dfrac {a}{2}\int _{|s|\gt \delta } s^2dx-\int _{|s|\gt \delta } H(s)dx\\ &\quad +\dfrac {d_2}{2}\int _{|s|\le \delta } |\nabla s|^2dx+\dfrac {1}{2}\int _{|s|\le \delta } (a-(h^\gamma )_v(v^*_2))s^2dx-\int _{|s|\le \delta }\Psi (s)dx\\ &\geq \dfrac {d_2}{2}\int _{|s|\gt \delta } |\nabla s|^2dx+\dfrac {a}{2}\int _{|s|\gt \delta } s^2dx-c_5||s||^{\varepsilon +1}_{W^{1,2}(\Omega )}\\ &\quad +\dfrac {d_2}{2}\int _{|s|\le \delta } |\nabla s|^2dx+\left (\dfrac {a_2}{2}-\iota \right )\int _{|s|\le \delta } s^2dx\\ &\geq a_6||s||^2_{W^{1,2}(\Omega )}-a_5||s||^{\varepsilon +1}_{W^{1,2}(\Omega )}=(a_6-a_5||s||^{\varepsilon -1}_{W^{1,2}(\Omega )})||s||^2_{W^{1,2} (\Omega )}\end{align*}
with some positive constants $a_5$ and $a_6$. Therefore, taking $\rho =(a_6/2a_5)^{1/(\varepsilon -1)}$, so that $a_6-a_5\rho ^{\varepsilon -1}=a_6/2$, we see that $Q_{d_2}(s)=0$ for $||s||_{W^{1,2}(\Omega )}\le \rho$ if and only if $s=0$, and that $Q_{d_2}(s)\geq (a_6/2)\rho ^2=\alpha$ for $||s||_{W^{1,2}(\Omega )}=\rho$.
 (ii) By selecting $e=-v^*_2$, we derive $Q_{d_2}(e)=J_{d_2}(0)-J_{d_2}(v^*_2)\le 0$ from Case 1.
 Using Theorem 4.2, we find that $Q_{d_2}$ has a critical point $s(x)$. Then $v(x)=s(x)+v^*_2$ is a critical point of $J_{d_2}$. Now, we examine Case 2. Similarly to Propositions 4.3 and 4.4, we can show that $Q_{d_2}(v)$ is also locally Lipschitz continuous on $W^{1,2}(\Omega )$ and satisfies (P.S.). Hence, it is also crucial to establish the following proposition.
Proposition 4.6. Assume that Proposition 2.2 (i) holds. Then the following hold:
(i) $Q_{d_2}(0)=0$ and there are constants $\rho _1\gt 0$, $\alpha _1\gt 0$ such that $Q_{d_2}\geq \alpha _1$ if $||v||_{W^{1,2}(\Omega )}=\rho _1$;
(ii) there is an $e_1\in W^{1,2}(\Omega )$ with $||e_1||_{W^{1,2}(\Omega )}\gt \rho _1$ such that $Q_{d_2}(e_1)\le 0.$
Proof. (i) Obviously, we know $Q_{d_2}(0)=0$ by (4.7). Similarly to (4.10), (4.5) can be rewritten as follows:
 \begin{equation} Q_{d_2}(v)=\dfrac {d_2}{2}\int _\Omega |\nabla v|^2dx+\dfrac {a}{2}\int _\Omega v^2dx-\int _\Omega \int _{0}^{v} h^\gamma (w)dw dx. \end{equation}
Since $h^\gamma (v)=0$ if $v\lt \gamma$, we know that there exists a $\delta _1\in (0,\gamma )$ such that $\int _{0}^{v} h^\gamma (w)dw=0$ for $|v|\le \delta _1$. Then, by the definition of $h^\gamma (v)$, we obtain $| h^\gamma (w)|\lt a_7$ with a constant $a_7\gt 0$, which implies that for every $\varepsilon \in (1,(N+2)/(N-2))$ there exists a constant $a_8\gt 0$ such that $|h^\gamma (w)|\lt a_7+a_8|w|^\varepsilon$ for all $w\in R$. This shows that there is a constant $a_9\gt 0$ such that $|\int _{0}^{v} h^\gamma (w)dw|\le a_9|v|^{\varepsilon +1}$ for $|v|\gt \delta _1$. Then, repeating the proof of Proposition 4.5, we can show that there exist positive constants $\rho _1$, $\alpha _1$ such that $Q_{d_2}\geq \alpha _1$ if $||v||_{W^{1,2}(\Omega )}=\rho _1$.
 (ii) Let $e_1=v^*_2$. Then it is easy to see that $Q_{d_2}(e_1)=J_{d_2}(v^*_2)-J_{d_2}(0)\lt 0$ according to Case 2.
 Using Theorem 4.2 again, we can show that $Q_{d_2}$ has a critical point $v(x)$.
Proof of Theorem 4.1. A critical point $v(x)$ of $Q_{d_2}$ has been found. In fact, $v(x)$ is a weak solution of (4.4). We can obtain that $v(x)$ is a classical solution of (4.4) by the elliptic regularity theorem [Reference Uhlenbeck29].
 Now, we demonstrate that $0\le v(x)\le v^*_2$. Because the proofs of $0\le v(x)$ and of $v(x)\le v^*_2$ are similar, we only show that $v(x)\le v^*_2$. Suppose, to the contrary, that $v(x_1)=\mathop {\max }\limits _{x\in \overline {\Omega }}v(x)\gt v^*_2$. If $x_1\in \Omega$, then $d_2\Delta v|_{x=x_1}\le 0$, which means that $\widetilde f_2^\gamma (v(x_1))\geq 0$. But by the definition of $\widetilde f_2^\gamma (v)$, we find $\widetilde f_2^\gamma (v(x_1))\lt 0$, which is a contradiction. If $x_1\in \partial \Omega$, there is a ball $B_R(r_0)\subset \Omega$ centred at $r_0\in \Omega$ of radius $R$ which satisfies $\partial \Omega \cap \overline B_R(r_0)=\left \{ {x_1}\right \}$ and $v(x)\lt v(x_1)$ in $B_R(r_0)$. Since $v(x_1)\gt v^*_2$, it follows from continuity that $v(x)\gt v^*_2$ in $B_R(r_0)$, provided $r_0$ is sufficiently close to $x_1$ and $R$ is small enough. So, $\widetilde f_2^\gamma (v(x))\lt 0$ for $x\in B_R(r_0)$, which shows that $d_2\Delta v\gt 0$ in $B_R(r_0)$. In addition, $v(x)\lt v(x_1)$ in $B_R(r_0)$. By the Hopf boundary point lemma (see, e.g., Chapters 8 and 9 of [Reference Gilbarg and Trudinger13]), we obtain $\partial _\tau v(x_1)\gt 0$, which contradicts the boundary condition $\partial _\tau v=0$. Thus, $v(x)\le v^*_2$ on $\overline {\Omega }$. Since $f_2^\gamma (v)=\widetilde f_2^\gamma (v)$ for all $v\leq v_M$, any classical solution $v(x)$ of (4.4) is also a classical solution of (4.1).
 Finally, we demonstrate that $v(x)$ crosses $\gamma$. Otherwise, we may suppose that $\gamma \lt v(x)\le v^*_2$ for all $x\in \overline {\Omega }$. According to Proposition 2.3 (i), we find that $f_2^\gamma (v)=f_2(h_2(v),v)\geq 0$, so $\int _\Omega f_2(h_2(v),v)dx\geq 0$. In addition, integrating the first equation of (4.1) over $\Omega$, we get $\int _\Omega f_2^\gamma (v)dx=\int _\Omega f_2(h_2(v),v)dx=0$. This only holds if $v(x)\equiv v^*_2$, which is a contradiction. Similarly, we can show that $0\le v(x)\lt \gamma$ for all $x\in \overline {\Omega }$ is also impossible.
Remark 4. Suppose $v(x)$ is a solution of (4.1) and define
 \begin{equation*}u(x)= \left \{\begin{array}{lll} 0, & v(x)\lt \gamma, \\ h_2(v(x)), & v(x)\gt \gamma .\\ \end{array} \right .\end{equation*}
Then $(u(x),v(x))$ forms a stationary solution of problem (1.2).
5. Monotone and symmetric solutions
 In this section, we focus on the construction of monotone and symmetric solutions of (4.1) in the one-dimensional spatial domain $[0,1]$, following the method in [Reference Zhang31]. In this setting, (4.1) can be expressed as
 \begin{equation} \left \{\begin{array}{lll} d_2 v''+f_2^\gamma (v)=0,& x\in (0,1),\\ v'(0)=v'(1)=0.\\ \end{array} \right . \end{equation}
 First, recall from the introduction that a solution $V_{n,+}(x)$ $(n\geq 2)$ of (5.1) is called an $n$-mode solution if the number of points of discontinuity of $V''_{n,+}(x)$ is $n$.
 To begin with, we fix $\gamma \in (\xi, v^*_2)$ and follow the method in [Reference Takagi and Zhang28] to construct monotone increasing and symmetric solutions of (5.1) for each $d_2\gt 0$, where $\xi =p(0)$ has been defined in (2.2).
Theorem 5.1. 
Assume that Proposition 2.2 (i) holds and $d_2\gt 0$. For every $\gamma \in (\xi, v^*_2)$, problem (5.1) has a monotone increasing solution $V_{1,+}(x;\,d_2)$. Furthermore, equation (5.1) has an $n$-mode symmetric solution $V_{n,+}(x;\,\overline d_2)$ with $\overline d_2=n^2d_2$ for each integer $n\geq 2$.
Proof. We divide the proof into two steps.
Step 1. We consider the following two auxiliary problems:
 \begin{equation} \left \{\begin{array}{lll} d_2 v''-av=0,& x\in (0,n_0),\\ v'(0)=0,\qquad v(n_0)=\gamma, \\ \end{array} \right . \end{equation}
 \begin{align} \left \{\begin{array}{lll} d_2 v''+f_2(h_2(v),v)=0,& x\in (n_0,1),\\ v(n_0)=\gamma, \qquad v'(1)=0,\\ \end{array} \right . \\[8pt]\nonumber\end{align}
where $n_0\in (0,1)$. It is easy to check that, for every $d_2\gt 0$, problem (5.2) has a unique monotone increasing solution $W_0(x;\,n_0,d_2)$, given by
 \begin{equation*}W_0(x;\,n_0,d_2)=\dfrac {\gamma }{\mbox {cosh}\sqrt {a/d_2}n_0}\mbox {cosh}\sqrt {a/d_2}x.\end{equation*}
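Indeed, a direct substitution confirms this:
 \begin{equation*}d_2W_0''(x;\,n_0,d_2)=d_2\cdot \dfrac {a}{d_2}W_0(x;\,n_0,d_2)=aW_0(x;\,n_0,d_2),\qquad W_0'(0;\,n_0,d_2)=0,\qquad W_0(n_0;\,n_0,d_2)=\gamma,\end{equation*}
and $W_0'(x;\,n_0,d_2)=\gamma \sqrt {a/d_2}\,\mbox {sinh}(\sqrt {a/d_2}x)/\mbox {cosh}(\sqrt {a/d_2}n_0)\gt 0$ for $x\gt 0$, so $W_0$ is monotone increasing; evaluating this derivative at $x=n_0$ gives the first formula in (5.8) below.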
Next, we prove that problem (5.3) has a unique monotone increasing solution $W_1(x;\,n_0,d_2)$ for every $d_2\gt 0$. Since $f_2(h_2(v^*_2),v^*_2)=0$, $f_2(h_2(v),v)\gt 0$ for all $v\in (0,v^*_2)$ and $\partial _v f_{2}(h_2(v^*_2),v^*_2)\lt 0$ by Proposition 2.3, there exist $0\lt n_1\lt n_2$ such that $-n_1(v-v^*_2)\lt f_2(h_2(v),v)\lt -n_2(v-v^*_2)$ for $v\lt v^*_2$. In order to find a solution of (5.3), we study, for $i=1,2$, the following problems:
 \begin{equation} \left \{\begin{array}{lll} d_2 v''-n_i(v-v^*_2)=0, & x\in (n_0,1),\\ v(n_0)=\gamma, \quad v'(1)=0.\\ \end{array} \right . \end{equation}
Let $Y_i(x;\,n_0,d_2)$ be the respective solutions of problems (5.4) for $i=1,2.$ Then it is easy to find that $Y_1(x;\,n_0,d_2)$ is a lower solution of (5.3) and $Y_2(x;\,n_0,d_2)$ is an upper solution of (5.3). For simplicity, we let $G_{n_0}(x;\,a)=\mbox {cosh}(a(1-x))/\mbox {cosh}(a(1-n_0)).$ Simple calculations yield that, for $i=1,2$, $Y_i(x;\,n_0,d_2)=(\gamma -v^*_2)G_{n_0}(x;\,\sqrt {n_i/d_2})+v^*_2.$ We claim that $Y_1(x;\,n_0,d_2)\lt Y_2(x;\,n_0,d_2)$. To see this, let $Y(x;\,n_0,d_2)=Y_1(x;\,n_0,d_2)-Y_2(x;\,n_0,d_2)$. Then $Y(x;\,n_0,d_2)$ satisfies
 \begin{equation*}\left \{\begin{array}{lll} d_2 Y''=n_1(Y_1-v^*_2)-n_2(Y_2-v^*_2)\gt n_1Y, & x\in (n_0,1),\\ Y(n_0;\,n_0,d_2)=0,\quad Y'(1;\,n_0,d_2)=0.\\ \end{array} \right .\end{equation*}
If $Y(x_M;\,n_0,d_2)=\mathop {\max }\limits _{x\in [n_0,1]}Y(x;\,n_0,d_2)\gt 0$ at some $x_M\in (n_0,1),$ then
 \begin{equation*}0\geq d_2 Y''(x_M;\,n_0,d_2)\gt n_1Y(x_M;\,n_0,d_2)\gt 0.\end{equation*}
This is a contradiction. So, $Y_1(x;\,n_0,d_2)\leq Y_2(x;\,n_0,d_2).$ We can therefore verify that (5.3) has a solution $W_1(x;\,n_0,d_2)$ by the upper and lower solution method.
 Next, we show the uniqueness of the solution of (5.3). By the comparison method, we can guarantee the existence of a maximal solution $W_M(x)$ and a minimal solution $W_m(x)$ such that
 \begin{equation*}Y_1(x)\lt W_m(x)\lt W_M(x)\lt Y_2(x).\end{equation*}
Let $\widetilde f_2(v)=f_2(h_2(v),v)/v$; then $\widetilde f_2(v)$ is strictly decreasing in $v$. Since $W_M(x)$ and $W_m(x)$ satisfy (5.3), we see
 \begin{equation} d_2 W_M''+\widetilde f_2(W_M)W_M=0, \qquad x\in (n_0,1), \end{equation}
and
 \begin{equation} d_2 W_m''+\widetilde f_2(W_m)W_m=0, \qquad x\in (n_0,1). \end{equation}
Multiplying (5.5) by $W_m$, multiplying (5.6) by $W_M$, subtracting and integrating over $(n_0,1)$, we obtain
 \begin{align} 0&={\int }_{n_0}^{1}((d_2 W_M''+\widetilde f_2(W_M)W_M)W_m-(d_2 W_m''+\widetilde f_2(W_m)W_m)W_M)dx\nonumber\\&=d_2( W_M'(x)W_m(x)-W_m'(x)W_M(x))\Bigm |^1_{n_0}+{\int }_{n_0}^{1}(\widetilde f_2(W_M)-\widetilde f_2(W_m))W_MW_m dx\nonumber\\&=d_2\gamma (W_m'(n_0)-W_M'(n_0))+{\int }_{n_0}^{1}(\widetilde f_2(W_M)-\widetilde f_2(W_m))W_MW_m dx. \end{align}
Since $W_M(x)\gt W_m(x)$ in $(n_0,1]$ and $\widetilde f_2(v)$ is strictly decreasing in $v$, we have
 \begin{equation*}W_m'(n_0)-W_M'(n_0)\le 0,\quad \widetilde f_2(W_M)-\widetilde f_2(W_m) \le 0, \quad n_0 \le x \le 1.\end{equation*}
Since $W_MW_m\gt 0$, equation (5.7) forces $W_M\equiv W_m$, which proves the uniqueness of the solution of (5.3). Moreover, combining $d_2 W_1''+f_2(h_2(W_1),W_1)=0$ with $f_2(h_2(W_1),W_1)\gt 0$ and $W_1'(1;\,n_0,d_2)=0$, we conclude that $W_1(x;\,n_0,d_2)$ is monotone increasing in $x$.
Step 2. By simple calculation, we get
 \begin{equation} W_0'(n_0;\,n_0,d_2)=\gamma \sqrt {a/d_2}\,\mbox {tanh}(\sqrt {a/d_2}n_0),\qquad W_1'(n_0;\,n_0,d_2)=\dfrac {1}{d_2}{\int }_{n_0}^{1}f_2(h_2(W_1),W_1)dx. \end{equation}
Let $\rho _0$ be a sufficiently small positive number. If $n_0=1-\rho _0$, then $W_0'(1-\rho _0;\,1-\rho _0,d_2)\gt W_1'(1-\rho _0;\,1-\rho _0,d_2)$ by (5.8), since $f_2(h_2(v),v)$ is bounded for all $v\in [0,v^*_2]$. In addition, if $n_0=\rho _0$, then $W_0'(\rho _0;\,\rho _0,d_2)\lt W_1'(\rho _0;\,\rho _0,d_2)$ for sufficiently small $\rho _0\gt 0$. Set $\Theta (n_0,d_2)=W_0'(n_0;\,n_0,d_2)-W_1'(n_0;\,n_0,d_2)$. Thus, we obtain
 \begin{equation} \Theta (1-\rho _0,d_2)\gt 0,\quad \Theta (\rho _0,d_2)\lt 0. \end{equation}
It is easy to see that $\Theta (n_0,d_2)$ is continuous with respect to $n_0$ for all $d_2\gt 0$. Combining this with (5.9), there exists an $n_0^*$ such that $\Theta (n_0^*,d_2)=0$ and
 \begin{equation*}V_{1,+}(x;\,d_2)= \left \{\begin{array}{lll} W_0(x;\,n_0^*,d_2) & \mbox {for} \quad x\in [0,n_0^*],\\ W_1(x;\,n_0^*,d_2) & \mbox {for} \quad x\in [n_0^*,1],\\ \end{array} \right .\end{equation*}
is a monotone increasing solution of (5.1) for all $d_2\gt 0$.
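The matching argument in Step 2 is constructive and can be mimicked numerically: solve the boundary value problem (5.3) for $W_1$, evaluate $\Theta (n_0,d_2)$ using the explicit formula (5.8) for $W_0'$, and locate the sign change guaranteed by (5.9). The following SciPy sketch illustrates this; the reaction term f2 and the parameter values are illustrative placeholders only (the actual $f_2(h_2(v),v)$ is determined by the branch $h_2(v)$ introduced earlier), so it is a numerical illustration of the proof, not part of it.

import numpy as np
from scipy.integrate import solve_bvp
from scipy.optimize import brentq

# Illustrative parameter values (assumptions, not taken from the paper).
a, d2, gamma, v_star = 1.0, 0.01, 0.6, 1.0

def f2(v):
    # Placeholder for f_2(h_2(v), v): positive on (0, v_star), zero at v_star.
    return v * (v_star - v)

def W1_prime(n0):
    # Solve (5.3): d2*v'' + f2(v) = 0 on (n0, 1), v(n0) = gamma, v'(1) = 0,
    # and return v'(n0).
    def rhs(x, y):
        return np.vstack([y[1], -f2(y[0]) / d2])
    def bc(ya, yb):
        return np.array([ya[0] - gamma, yb[1]])
    x = np.linspace(n0, 1.0, 200)
    y0 = np.vstack([np.full_like(x, gamma), np.zeros_like(x)])
    return solve_bvp(rhs, bc, x, y0).sol(n0)[1]

def Theta(n0):
    # Theta(n0, d2) = W_0'(n0; n0, d2) - W_1'(n0; n0, d2), using (5.8) for W_0'.
    return gamma * np.sqrt(a / d2) * np.tanh(np.sqrt(a / d2) * n0) - W1_prime(n0)

rho0 = 0.01
n0_star = brentq(Theta, rho0, 1.0 - rho0)  # sign change as in (5.9)
print("matching point n0* =", n0_star)

With the matching point $n_0^*$ in hand, $V_{1,+}$ is obtained by gluing $W_0(\cdot \,;\,n_0^*,d_2)$ and $W_1(\cdot \,;\,n_0^*,d_2)$ exactly as in the display above.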
 Next, using $V_{1,+}(x;\,d_2)$ and its reflection, we construct symmetric solutions of (5.1) starting from the monotone increasing solution $V_{1,+}(x;\,d_2)$. For each $n\geq 2$, define a function $V_{n,+}(x;\,\overline d_2)$ on $0\le x\le 1$ by
 \begin{equation*}V_{n,+}(x;\,\overline d_2)= \left \{\begin{array}{lll} V_{1,+}(nx-2j;\,\overline d_2) & \mbox {for} \quad x\in [2j/n,(2j+1)/n],\\ V_{1,+}(2(j+1)-nx;\,\overline d_2) & \mbox {for} \quad x\in [(2j+1)/n,2(j+1)/n],\\ \end{array} \right .\end{equation*}
where $j=0,1,2,\cdots, [n/2]$ and $\overline d_2=n^2d_2$. Then $V_{n,+}(x;\,\overline d_2)$ is a symmetric solution of (5.1).
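Indeed, on $[2j/n,(2j+1)/n]$ the chain rule gives, with $\overline d_2=n^2d_2$,
 \begin{equation*}d_2\,\dfrac {d^2}{dx^2}V_{1,+}(nx-2j;\,\overline d_2)=d_2n^2V_{1,+}''(nx-2j;\,\overline d_2)=\overline d_2V_{1,+}''(nx-2j;\,\overline d_2)=-f_2^\gamma (V_{n,+}(x;\,\overline d_2)),\end{equation*}
and similarly on the reflected pieces; moreover, the pieces match in $C^1$ at the junction points and satisfy $V_{n,+}'(0;\,\overline d_2)=V_{n,+}'(1;\,\overline d_2)=0$, because $V_{1,+}'(0;\,\overline d_2)=V_{1,+}'(1;\,\overline d_2)=0$. This completes the proof.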
 Next, by using the shooting approach developed in the work of Mimura, Tabata and Hosono [Reference Mimura, Tabata and Hosono22], we further fix $\gamma \in (d^*,v^*_2)$ and demonstrate the existence and uniqueness of monotone increasing and symmetric solutions of problem (5.1). We consider the following two initial value problems:
 \begin{equation} \left \{\begin{array}{l} d_2 v''-av=0,\ x\gt 0,\\ v'(0)=0,\ v(0)=b^*_0,\\ \end{array} \right . \end{equation}
and
 \begin{equation} \left \{\begin{array}{l} d_2 v''+f_2(h_2(v),v)=0,\ x\gt 0,\\ v'(0)=0,\ v(0)=b^*_1,\\ \end{array} \right . \end{equation}
where $d^*\lt b^*_0\lt \gamma \lt b^*_1\lt v^*_2.$
 Let $V_0(x;\,b^*_0)$ and $V_1(x;\,b^*_1)$ be the unique solutions of (5.10) and (5.11), respectively. We can see that $V_0(x;\,b^*_0)$ is monotonically increasing and $V_1(x;\,b^*_1)$ is monotonically decreasing (see Proposition 5.4). For $j=0,1$, let $x=l_j$ be the unique solution of $V_j(x;\,b^*_j)=\gamma$ and define $\psi _j(l_j)=\dfrac {\partial V_j}{\partial x}(l_j;\,b^*_j(l_j)).$ Next we set
 \begin{equation} \begin{array}{l} \overline l_0=\mathop {\lim \sup }\limits _{b^*_0\rightarrow d^*}l_0(b^*_0),\quad \overline l_1=\mathop {\lim \sup }\limits _{b^*_1\rightarrow v^*_2}l_1(b^*_1),\quad \overline \psi _0=\mathop {\lim }\limits _{l_0\rightarrow \overline l_0}\psi _0(l_0),\quad \overline \psi _1=\mathop {\lim }\limits _{l_1\rightarrow \overline l_1}\psi _1(l_1),\\\\ z_0(\alpha _0)=\psi _0^{-1}(\alpha _0),\quad z_1(\alpha _1)=\psi _1^{-1}(\alpha _1),\quad \overline z_0=\mathop {\lim }\limits _{\alpha _0\rightarrow \overline \alpha }z_0(\alpha _0),\quad \overline z_1=\mathop {\lim }\limits _{\alpha _1\rightarrow -\overline \alpha }z_1(\alpha _1), \end{array} \end{equation}
where $\alpha _j=\psi _j(l_j)$ and $\overline \alpha =\mbox {min}\left \{ {{\overline \psi _0,-\overline \psi _1}}\right \}$. Here, we have used the facts that the inverse $b^*_j(l_j)$ of $l_j(b^*_j)$ and the inverse $\psi ^{-1}_j(\alpha _j)$ of $\psi _j$ $(j= 0,1)$ indeed exist; these facts will be established in Proposition 5.5. The following two theorems are the main results of this section.
Theorem 5.2. 
Assume that Proposition 2.2 (i) holds, let $\gamma \in (d^*,v^*_2)$ and suppose that $1\le \overline z_0+\overline z_1$ and $d_2\gt 0.$ Then problem (5.1) has a unique monotone increasing solution $T_{1,+}(x)$ and, for every integer $n\geq 2$, a unique $n$-mode symmetric solution $T_{n,+}(x)$.
Theorem 5.3. 
Assume that Proposition 2.2 (i) holds, let $\gamma \in (d^*,v^*_2)$ and suppose that $1\gt \overline z_0+\overline z_1$ and $d_2\gt 0.$ Then problem (5.1) has a unique $n$-mode symmetric solution $\widetilde B_{n,+}(x)$ for every integer $n\geq N_0$, where $N_0$ is the smallest positive integer greater than $1/(\overline z_0+\overline z_1)$.
We start our discussion with the following propositions.
Proposition 5.4. 
Assume that Proposition 2.2 (i) holds. For all $b^*_0$ and $b^*_1$ with $d^*\lt b^*_0\lt \gamma \lt b^*_1\lt v^*_2$, problem (5.10) has a unique positive and strictly increasing solution $V_0(x;\,b^*_0)$ defined for $x\gt 0$, and problem (5.11) has a unique positive and strictly decreasing solution $V_1(x;\,b^*_1)$ for $0\le x\lt x_{b^*_1}$, where $x_{b^*_1}$ is the solution of $V_1(x;\,b^*_1)=0.$
Proof. Since $f_2(h_L(V_0),V_0)=f_2(0,V_0)=-aV_0,$ we know $d_2 V_0''=aV_0.$ By a simple calculation,
 \begin{equation} V_0(x;\,b^*_0)=b^*_0\mbox {cosh}\left (\sqrt {a/d_2}x\right )\quad \mbox {and}\quad V_0'(x;\,b^*_0)=b^*_0\sqrt {a/d_2}\mbox {sinh}\left (\sqrt {a/d_2}x\right ). \end{equation}
So, $V_0(x;\,b^*_0)$ is positive and strictly increasing for all $x\gt 0.$ By Proposition 2.3 (i), we have $d_2 V_1''=-f_2(h_2(V_1),V_1)\lt 0.$ Thus, $V_1'(x;\,b^*_1)$ is strictly decreasing with respect to $x$. Since $V_1'(0;\,b^*_1)=0,$ we obtain $V_1'(x;\,b^*_1)\lt 0$ for $0\lt x\lt x_{b^*_1}.$ These results show that $V_1(x;\,b^*_1)\gt 0$ is strictly monotone decreasing on $[0,x_{b^*_1})$ and $V_1(x_{b^*_1};\,b^*_1)=0.$
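Since $V_0$ is explicit, the quantities $l_0$ and $\psi _0$ introduced before (5.12) can be written in closed form, which makes the monotonicity statements of the next proposition transparent for the $j=0$ branch: from $V_0(l_0;\,b^*_0)=\gamma$ and (5.13),
 \begin{equation*}l_0(b^*_0)=\sqrt {d_2/a}\,\mbox {arccosh}(\gamma /b^*_0)\qquad \mbox {and}\qquad \psi _0(l_0)=V_0'(l_0;\,b^*_0(l_0))=\sqrt {a/d_2}\sqrt {\gamma ^2-(b^*_0)^2},\end{equation*}
so $l_0$ is strictly decreasing in $b^*_0$ with $l_0(b^*_0)\rightarrow 0$ as $b^*_0\rightarrow \gamma$, and $\psi _0(l_0)$ is strictly increasing in $l_0$ with $\psi _0(l_0)\rightarrow 0$ as $l_0\rightarrow 0$.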
Proposition 5.5. 
Assume that Proposition 2.2 (i) holds. For $\gamma \in (d^*,v^*_2)$, we have the following conclusions:
(i) $\dfrac {\partial {l_0}}{\partial b^*_0}\lt 0$, $\mathop {\lim }\limits _{b^*_0\rightarrow \gamma }l_0(b^*_0)=0$ and $\dfrac {\partial {l_1}}{\partial b^*_1}\gt 0$, $\mathop {\lim }\limits _{b^*_1\rightarrow \gamma }l_1(b^*_1)=0$;
(ii) $\dfrac {\partial {\psi _0(l_0)}}{\partial l_0}\gt 0$, $\mathop {\lim }\limits _{l_0\rightarrow 0}\psi _0(l_0)=0$ and $\dfrac {\partial {\psi _1(l_1)}}{\partial l_1}\lt 0$, $\mathop {\lim }\limits _{l_1\rightarrow 0}\psi _1(l_1)=0$,
where, for $j=0,1$, $l_j$ and $\psi _j(l_j)$ are defined before (5.12).
Proof. Since the assertions for $l_0$ and $\psi _0(l_0)$ may be treated similarly, it is sufficient to verify the statements for $l_1$ and $\psi _1(l_1)$.
 (i) Recall that $V_1(l_1(b^*_1);\,b^*_1)=\gamma$. Differentiating both sides with respect to $b^*_1$ gives
 \begin{equation*}\dfrac {\partial {V_1}}{\partial x}\dfrac {dl_1}{db^*_1}+\dfrac {\partial {V_1}}{\partial b^*_1}=0,\end{equation*}
so
 \begin{equation*}\dfrac {dl_1}{db^*_1}=-{\dfrac {\partial {V_1}}{\partial b^*_1}}\bigg /{\dfrac {\partial {V_1}}{\partial x}}.\end{equation*}
Moreover, let $\xi _1(x;\,b^*_1)=\dfrac {\partial {V_1}}{\partial x}(x;\,b^*_1)$ and $\eta _1(x;\,b^*_1)=\dfrac {\partial {V_1}}{\partial b^*_1}(x;\,b^*_1)$. We have $\xi _1(x;\,b^*_1)\lt 0$ for all $x\in (0,l_1)$ by the proof of Proposition 5.4. It is easy to find that $\eta _1(x;\,b^*_1)$ is a solution of
 \begin{equation*}\left \{\begin{array}{lll} d_2\eta _1''+\dfrac {d}{dv}f_2(h_2(V_1),V_1)\eta _1=0 & \mbox {for} \quad x\in [0,1],\\ \eta _1'(0;\,b^*_1)=0, \quad \eta _1(0;\,b^*_1)=1.\\ \end{array} \right .\end{equation*}
We find that if $0\le x\le l_1$, then $V_1(x;\,b^*_1)\in [\gamma, b^*_1]\subset (d^*,v^*_2].$ By Proposition 2.3 (ii), we have $\dfrac {d}{dv}f_2(h_2(V_1),V_1)\lt 0$ for $V_1(x;\,b^*_1)\in [\gamma, b^*_1]\subset (d^*,v^*_2].$ So,
 \begin{equation*}\eta _1''(0;\,b^*_1)=-\dfrac {1}{d_2}\dfrac {d}{dv}f_2(h_2(b^*_1),b^*_1)\eta _1(0;\,b^*_1)\gt 0.\end{equation*}
Expanding $\eta _1(x;\,b^*_1)$ near $x=0$, we get
 \begin{equation*}\eta _1(x;\,b^*_1)=\eta _1(0;\,b^*_1)+\eta _1'(0;\,b^*_1)x+\dfrac {1}{2}\eta _1''(0;\,b^*_1)x^2+\cdots \gt 0.\end{equation*}
Thus, we have $d_2\eta _1''=-\dfrac {d}{dV_1}f_2(h_2(V_1),V_1)\eta _1\gt 0$ as long as $\eta _1(x;\,b^*_1)\gt 0$. Then $\eta _1'(x;\,b^*_1)$ is strictly increasing with respect to $x$. Since $\eta _1'(0;\,b^*_1)=0,$ we know $\eta _1'(x;\,b^*_1)\gt 0$ for $x\in (0,l_1]$. Combining this with $\eta _1(0;\,b^*_1)=1$, we get $\eta _1(x;\,b^*_1)\geq 1$ for all $x\in [0,l_1]$. So,
 \begin{equation*}\dfrac {dl_1}{db^*_1}=-{\dfrac {\partial {V_1}}{\partial b^*_1}}\bigg /{\dfrac {\partial {V_1}}{\partial x}}=-\dfrac {\eta _1(l_1(b^*_1);\,b^*_1)}{\xi _1(l_1(b^*_1);\,b^*_1)}\gt 0,\end{equation*}
and $l_1$ is strictly increasing in $b^*_1$. Next, we show that $\mathop {\lim }\limits _{b^*_1\rightarrow \gamma }l_1(b^*_1)=0$. Note that $e^*\,:\!=f_2(h_2(\gamma ),\gamma )/d_2\gt 0$ and that $V_1(x;\,b^*_1)$ satisfies
 \begin{equation} V_1(x;\,b^*_1)=b^*_1-\dfrac {e^*}{2}x^2+O(x^3) \quad \mbox {as}\quad x\rightarrow 0. \end{equation}
Therefore, by a simple calculation, we have $l_1(b^*_1)=\sqrt {2(b^*_1-\gamma )/e^*}(1+o(1))$ as $b^*_1\rightarrow \gamma$, which shows that $\mathop {\lim }\limits _{b^*_1\rightarrow \gamma }l_1(b^*_1)=0$. As a result, the inverse of $l_1(b^*_1)$ exists and is denoted by $b^*_1(l_1)$.
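To spell out the calculation behind the asymptotic formula for $l_1(b^*_1)$ used above: evaluating (5.14) at $x=l_1(b^*_1)$ and using $V_1(l_1(b^*_1);\,b^*_1)=\gamma$ gives
 \begin{equation*}\gamma =b^*_1-\dfrac {e^*}{2}l_1^2+O(l_1^3),\qquad \mbox {hence}\qquad l_1^2=\dfrac {2(b^*_1-\gamma )}{e^*}(1+o(1))\quad \mbox {as}\quad b^*_1\rightarrow \gamma .\end{equation*}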
 (ii) Since $\psi _1(l_1)=\dfrac {\partial V_1}{\partial x}(l_1;\,b^*_1(l_1))=\xi _1(l_1;\,b^*_1(l_1)),$ using part (i) we get
 \begin{align*}\dfrac {d\psi _1(l_1)}{dl_1}&=\dfrac {\partial \xi _1}{\partial x}+\dfrac {\partial \xi _1}{\partial b^*_1}\dfrac {db^*_1}{dl_1}=\dfrac {\partial \xi _1}{\partial x}-\dfrac {\partial \xi _1}{\partial b^*_1}\dfrac {\xi _1(l_1;\,b^*_1(l_1))}{\eta _1(l_1;\,b^*_1(l_1))}\\ &=\dfrac {1}{\eta _1(l_1;\,b^*_1(l_1))}\left (\dfrac {\partial \xi _1}{\partial x}\eta _1-\dfrac {\partial \eta _1}{\partial x}\xi _1\right )(l_1;\,b^*_1(l_1))\\ &=\dfrac {1}{\eta _1(l_1;\,b^*_1(l_1))}\left (\dfrac {\partial \xi _1}{\partial x}\eta _1-\dfrac {\partial \eta _1}{\partial x}\xi _1\right )(0;\,b^*_1(l_1))\\ &=-\dfrac {f_2(h_2(b^*_1),b^*_1)}{d_2\eta _1(l_1;\,b^*_1(l_1))}\lt 0.\end{align*}
Here we have used that $\dfrac {\partial \xi _1}{\partial x}\eta _1-\dfrac {\partial \eta _1}{\partial x}\xi _1$ is a constant, since $\left (\dfrac {\partial \xi _1}{\partial x}\eta _1-\dfrac {\partial \eta _1}{\partial x}\xi _1\right )'=\xi _1''\eta _1-\eta _1''\xi _1=0$. Moreover, by (5.14), we have $V_1^{'}(x;\,b^*_1)=-e^*x+O(x^2)$ as $x\rightarrow 0$, so
 \begin{equation*}\psi _1(l_1)=-\sqrt {2e^*(b^*_1-\gamma )}(1+o(1))\quad \mbox {as}\quad l_1\rightarrow 0.\end{equation*}
Hence, $\mathop {\lim }\limits _{l_1\rightarrow 0}\psi _1(l_1)=-\mathop {\lim }\limits _{b^*_1\rightarrow \gamma }\sqrt {2e^*(b^*_1-\gamma )}=0.$ As a result, $\psi _1(l_1)$ is strictly decreasing with respect to $l_1$ and the inverse $\psi ^{-1}_1(\alpha _1)$ of $\psi _1$ indeed exists.
Proposition 5.6. 
For $\alpha \in [0,\overline \alpha ]$, $z_0(\alpha )+z_1(\!-\alpha )$ is a strictly increasing function of class $C^1$ such that
 \begin{equation*}z_0(0)+z_1(0)=0\quad \mbox {and}\quad z_0(\overline \alpha )+z_1(\!-\overline \alpha )=\overline z_0+\overline z_1.\end{equation*}
Proof. By the implicit function theorem and Proposition 5.5, we know that $z_0(\alpha )$ and $z_1(\!-\alpha )$ are strictly increasing functions of class $C^1$. Then, it follows from Proposition 5.5 (ii) that $\mathop {\lim }\limits _{l_0\rightarrow 0}\psi _0(l_0)=0$ and $\mathop {\lim }\limits _{l_1\rightarrow 0}\psi _1(l_1)=0$. Thus, $\mathop {\lim }\limits _{\alpha \rightarrow 0}z_0(\alpha )=0$ and $\mathop {\lim }\limits _{\alpha \rightarrow 0}z_1(\!-\alpha )=0$, which show that $z_0(0)+z_1(0)=0$. Since $\overline z_0=\mathop {\lim }\limits _{\alpha _0\rightarrow \overline \alpha }z_0(\alpha _0)$ and $\overline z_1=\mathop {\lim }\limits _{\alpha _1\rightarrow -\overline \alpha }z_1(\alpha _1)$, we have $z_0(\overline \alpha )+z_1(\!-\overline \alpha )=\overline z_0+\overline z_1.$
Proof of Theorem 5.2. Since $1\le \overline z_0+\overline z_1$ and $z_0(\alpha )+z_1(\!-\alpha )$ is increasing in $\alpha$ by Proposition 5.6, there is a unique $\alpha ^*\in (0,\overline \alpha ]$ that satisfies $z_0(\alpha ^*)+z_1(\!-\alpha ^*)=1.$ Let $z_0(\alpha ^*)=\Upsilon ^*$ for such $\alpha ^*$. Then the definitions of $z_0$ and $z_1$ yield $\psi _0(\Upsilon ^*)=\alpha ^*$ and $\psi _1(1-\Upsilon ^*)=-\alpha ^*$. So, $\psi _0(\Upsilon ^*)+\psi _1(1-\Upsilon ^*)=0$. Define
 \begin{equation*}T_{1,+}(x)=\left \{\begin{array}{lll} V_0(x;\,a^*_0(\Upsilon ^*)) & \mbox {for} \ x\in [0,\Upsilon ^*],\\ V_1(1-x;\,a^*_1(1-\Upsilon ^*)) & \mbox {for} \ x\in [\Upsilon ^*,1],\\ \end{array} \right .\end{equation*}
where $a^*_0(\Upsilon ^*)=a^*_0(z_0(\alpha ^*))$ and $a^*_1(1-\Upsilon ^*)=a^*_1(z_1(\!-\alpha ^*))$.
 Then $T_{1,+}(x)$ is the unique increasing solution of (5.1). Next, for each integer $n\geq 2,$ we demonstrate the existence of $T_{n,+}(x)$. Let $\overline \alpha _n=\alpha _0=-\alpha _1$ be such that $z_0(\overline \alpha _n)+z_1(\!-\overline \alpha _n)=\dfrac {1}{n}$. Since $n\geq 2$, we know $\dfrac {1}{n}\lt 1\le \overline z_0+\overline z_1$. As a result, we can construct a unique monotone increasing solution $\overline T_{1,+}(x)$ on $[0,{1}/{n}]$, where $\overline T_{1,+}(x)=V_0(x;\,\overline b^*_0)$ with $\overline b^*_0=b^*_0(z_0(\overline \alpha _n))$ for $x\in [0,z_0(\overline \alpha _n)]$ and $\overline T_{1,+}(x)=V_1(({1}/{n})-x;\,\overline b^*_1)$ with $\overline b^*_1=b^*_1(z_1(\!-\overline \alpha _n))$ for $x\in [z_0(\overline \alpha _n),{1}/{n}]$. Now define
 \begin{equation*}T_{n,+}(x)=\left \{\begin{array}{lll} \overline T_{1,+}(x-(2j/n)) & \mbox {for} \ x\in [2j/n,(2j+1)/n],\\ \overline T_{1,+}([2(j+1)/n]-x) & \mbox {for} \ x\in [(2j+1)/n,2(j+1)/n],\\ \end{array} \right .\end{equation*}
where $j=0,1,2,\cdots, [n/2]$. Then $T_{n,+}(x)$ is an $n$-mode symmetric solution of (5.1). This completes the proof.
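To make the reflection construction in the proof concrete, the following Python sketch (our own illustration, not part of the original argument; the building block below is a hypothetical monotone profile standing in for $\overline T_{1,+}$) extends a monotone increasing function on $[0,1/n]$ to an $n$-mode symmetric function on $[0,1]$ by alternating translation and reflection.
\begin{verbatim}
import numpy as np

def n_mode_extension(block, n, x):
    """Extend a monotone 'block' defined on [0, 1/n] to [0, 1] by
    alternating translation and reflection, as in the proof of Theorem 5.2."""
    h = 1.0 / n
    k = np.minimum(np.floor(x / h).astype(int), n - 1)  # subinterval index of each x
    s = x - k * h                                        # local coordinate in [0, h]
    # even subintervals copy the block, odd subintervals reflect it
    return np.where(k % 2 == 0, block(s), block(h - s))

# hypothetical monotone building block on [0, 1/n] (a stand-in for the profile
# \overline T_{1,+} constructed above); any increasing profile would do
n = 4
block = lambda s: np.sin(0.5 * np.pi * n * s)   # increases from 0 to 1 on [0, 1/n]
x = np.linspace(0.0, 1.0, 401)
T_n = n_mode_extension(block, n, x)             # n-mode symmetric profile on [0, 1]
\end{verbatim}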
Proof of Theorem 5.3. By Proposition 5.6, $z_0(\alpha )+z_1(\!-\alpha )$ is a strictly increasing function for $\alpha \in [0,\overline \alpha ]$, whose range is $[0,\overline z_0+\overline z_1]\subset [0,1]$. Let $N_0$ be the smallest positive integer greater than $1/(\overline z_0+\overline z_1)$; then for each $n\geq N_0$ there is a unique $\alpha ^*_n$ such that $z_0(\alpha ^*_n)+z_1(\!-\alpha ^*_n)={1}/{n}$. Similar to the proof of Theorem 5.2, we can construct a unique monotone increasing solution $\widetilde B_{1,+}(x)$ on $[0,z_0(\alpha ^*_n)+z_1(\!-\alpha ^*_n)]$, where
 \begin{equation*}\widetilde B_{1,+}(x)=\left \{\begin{array}{lll} V_0(x;\,b^*_0) & \mbox {for} \ x\in [0,z_0(\alpha ^*_n)],\\ V_1(E-x;\,b^*_1) & \mbox {for} \ x\in [z_0(\alpha ^*_n),E],\\ \end{array} \right .\end{equation*}
with $b^*_0=b^*_0(z_0(\alpha ^*_n))$, $b^*_1=b^*_1(z_1(\!-\alpha ^*_n))$ and $E=z_0(\alpha ^*_n)+z_1(\!-\alpha ^*_n)$. Then we extend $\widetilde B_{1,+}(x)$ to the interval $[0,2E]$ by
 \begin{equation*}\widetilde B_{2,+}(x)=\left \{\begin{array}{lll} \widetilde B_{1,+}(x) & \mbox {for} \ x\in [0,E],\\ \widetilde B_{1,+}(2E-x) & \mbox {for} \ x\in [E,2E].\\ \end{array} \right .\end{equation*}
We continue this reflection process until $x$ reaches $x=1$. Then $\widetilde B_{n,+}(x)$ is an $n$-mode symmetric solution of (5.1).
6. Existence and stability of bifurcation solutions
 Firstly, in order to study the stability of the constant equilibrium solutions of system (1.1) on the one-dimensional domain [0, 1], we analyse the spectrum of the linearized operator through the method in [Reference Zhang31]. Let $(\widetilde {u},\widetilde {v})$ be any constant solution of system (1.2) and set
 \begin{equation*}F(u,v)=(f_1(u,v),d_2v''+f_2(u,v)).\end{equation*}
Then the Fréchet derivative of $F$ with respect to $(u,v)$ at $(\widetilde {u},\widetilde {v})$ is expressed as follows
 \begin{equation*}L={\left (\begin{array}{cc} f_{11} & f_{12} \\ f_{21} & d_2\frac {d^2}{dx^2}+f_{22} \end{array}\right )},\end{equation*}
where $f_{11}=f_{1u}(\widetilde {u},\widetilde {v})$, $f_{12}=f_{1v}(\widetilde {u},\widetilde {v})$, $f_{21}=f_{2u}(\widetilde {u},\widetilde {v})$ and $f_{22}=f_{2v}(\widetilde {u},\widetilde {v})$. Suppose that $\lambda$ is an eigenvalue of $L$. Then $\lambda$ satisfies the characteristic equation
 \begin{equation} \lambda ^2-(f_{11}+f_{22}-d_2L_j)\lambda +f_{11}f_{22}-f_{12}f_{21}-d_2f_{11}L_j=0 \end{equation}
for some $j\ge 0$, where $L_j=(\pi j)^2$, $j=0,1,2,\cdots,$ are the eigenvalues of $-\frac {d^2}{dx^2}$ subject to Neumann boundary conditions. In addition, $\mbox {cos}(\pi jx)$ is an eigenfunction corresponding to $L_j$ and $\left \{ {\mbox {cos}(\pi jx)}\right \}_{j=0}^\infty$ forms a basis of $L^2(0,1)$.
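As a numerical illustration of this spectral reduction (a sketch of ours; the Jacobian entries below are made-up numbers rather than values computed from the model), one can test the linear stability of a constant state by examining the roots of (6.1) mode by mode.
\begin{verbatim}
import numpy as np

def unstable_modes(f11, f12, f21, f22, d2, j_max=50):
    """Modes j for which the quadratic (6.1) has a root with positive real part."""
    unstable = []
    for j in range(j_max + 1):
        Lj = (np.pi * j) ** 2
        trace = f11 + f22 - d2 * Lj
        det = f11 * f22 - f12 * f21 - d2 * f11 * Lj
        roots = np.roots([1.0, -trace, det])   # lambda^2 - trace*lambda + det = 0
        if np.any(roots.real > 0):
            unstable.append(j)
    return unstable

# hypothetical Jacobian entries at a constant steady state
print(unstable_modes(f11=0.4, f12=-1.0, f21=0.8, f22=0.0, d2=0.05))
\end{verbatim}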
Theorem 6.1. For $d_2\gt 0$, the following assertions are true.

(i) The trivial solution $(\widetilde {u},\widetilde {v})=(0, 0)$ and the semi-trivial solution $(\widetilde {u},\widetilde {v})=(K, 0)$ are unstable.

(ii) The positive solution $(\widetilde {u},\widetilde {v})=(u^*_2,v^*_2)$ is locally asymptotically stable under the assumption of Proposition 2.2 (i).

(iii) The positive solution $(\widetilde {u},\widetilde {v})=(u^*_3,v^*_3)$ is locally asymptotically stable under the assumption of Proposition 2.2 (ii).

Proof. The proof is straightforward, so we omit the details.
 Then, we consider $d_2$ as a bifurcation parameter and study the bifurcation problem near the constant steady state $(u^*_3,v^*_3)$ for the boundary value problem
 \begin{equation} \left \{\begin{array}{lll} r\left (1-\dfrac {u}{K}\right )u-\dfrac {cuv}{m+bu}=0,& x\in [0,1],\\ d_2 v''-av+\dfrac {\beta cuv}{m+bu}=0,& x\in (0,1),\\ v'(0) =0,\quad v'(1)=0.\\ \end{array} \right . \end{equation}
Let
 \begin{equation*}\widetilde X=\left \{ {(u,v)\,|\,u\in C^0([0,1]),v\in C^2([0,1]),v'(0)=v'(1)=0}\right \},\end{equation*}
 \begin{equation*}\widetilde Y=C^0([0,1]) \times C^0([0,1]).\end{equation*}
Then the Fréchet derivative $\widetilde {L}$ of $F$ with respect to $(u,v)$ at $(u^*_3,v^*_3)$ can be written as follows
 \begin{equation*}\widetilde {L}={\left (\begin{array}{cc} \widetilde {f_{11}} & \widetilde {f_{12}} \\ \widetilde {f_{21}} & d_2\frac {d^2}{dx^2}+\widetilde {f_{22}} \end{array}\right )},\end{equation*}
where
 \begin{equation*}\widetilde {f_{11}}=\dfrac {r(K-2u^*_3)}{K}-\dfrac {mr(K-u^*_3)}{K(m+bu^*_3)}\gt 0,\quad \widetilde {f_{12}}=-\dfrac {cu^*_3}{m+bu^*_3}\lt 0,\end{equation*}
 \begin{equation*}\widetilde {f_{21}}=\dfrac {\beta cmv^*_3}{(m+bu^*_3)^2}\gt 0,\quad \widetilde {f_{22}}=0.\end{equation*}
Therefore, the characteristic equation (6.1) is transformed into
 \begin{equation} \lambda ^2-(\widetilde {f_{11}}-d_2L_j)\lambda -\widetilde {f_{12}}\widetilde {f_{21}}-d_2\widetilde {f_{11}}L_j=0. \end{equation}
Let $\widetilde {d_2}=-{\widetilde {f_{12}}\widetilde {f_{21}}}/({\widetilde {f_{11}}L_j}).$ Then $\widetilde {d_2}\gt 0$ for every $j\ge 1.$ If we assume that $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\neq 0,$ then at $d_2=\widetilde {d_2}$ we have, for every $j\ge 1,$ $\widetilde {f_{11}}-d_2L_j=(\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}})/\widetilde {f_{11}}\neq 0$. So, zero is a simple eigenvalue of (6.3). Therefore, the following conclusions can be drawn.
Theorem 6.2. Assume that Proposition 2.2 (ii) holds. If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\neq 0,$ then $(\widetilde {d_2},(u^*_3,v^*_3))$ is a bifurcation point of $F=0$. Furthermore, there is a $\delta _0\gt 0$ such that (6.2) admits a one-parameter family of non-constant solutions $\left \{ {(\widetilde {d_2}+d_2(l),(u(l),v(l))),|l|\lt \delta _0}\right \}$ of the form $u(l)=u^*_3+l\phi _j+o(l)$, $v(l)=v^*_3+l\psi _j+o(l)$, where $\phi _j=\mbox {cos} (\pi jx)$, $\psi _j=-(\widetilde {f_{11}}\phi _j)/\widetilde {f_{12}}$ and $d_2(0)=0$. In particular, there exist no solutions other than $\left \{ {(\widetilde {d_2}+d_2(l),(u(l),v(l))),|l|\lt \delta _0}\right \}\cup \left \{ {(d_2,(u^*_3,v^*_3)),|d_2-\widetilde {d_2}|\lt \delta _0}\right \}$ in a small neighbourhood of $(\widetilde {d_2},(u^*_3,v^*_3))$ in ${R}\times \widetilde X$.
Proof. Assume that $\widetilde {\Phi }=(\widetilde {\phi },\widetilde {\psi })\in \mbox {ker}\,\widetilde {L}$ and write $\widetilde {\phi }=\sum _{ j}\widetilde {c_j}{\phi _j}$, $\widetilde {\psi }=\sum _{ j}\widetilde {d_j}{\phi _j}$. Then $\sum _{ j=0}^\infty D_j{\left (\begin{array}{c} \widetilde {c_j} \\ \widetilde {d_j} \end{array}\right )}\phi _j=0$, where
 \begin{equation*}D_j={\left (\begin{array}{cc} \widetilde {f_{11}} & \widetilde {f_{12}} \\ \widetilde {f_{21}} & -d_2L_j \end{array}\right )}.\end{equation*}
Obviously, $\mbox {det}\,D_j=0\Leftrightarrow d_2=\widetilde {d_2}$. Letting $d_2=\widetilde {d_2}$, we get
 \begin{equation*}\mbox {ker}\,\widetilde {L}=\mbox {span}\{ {\Phi _0}\},\quad \Phi _0= {\left (\begin{array}{c} \phi _0 \\ \psi _0 \end{array}\right )}={\left (\begin{array}{c} \mbox {cos} (\pi jx) \\ -\frac {\widetilde {f_{11}}}{\widetilde {f_{12}}}\mbox {cos} (\pi jx) \end{array}\right )}.\end{equation*}
Similarly, it is simple to calculate an eigenvector $\Phi _0^*$ of $\widetilde {L}^*$ associated with $0$, which has the following form:
 \begin{equation*}\mbox {ker}\,\widetilde {L}^*=\mbox {span}\{ {\Phi _0^*}\},\quad \Phi _0^*= {\left (\begin{array}{c} \phi _0^* \\ \psi _0^* \end{array}\right )}={\left (\begin{array}{c} \mbox {cos} (\pi jx) \\ -\frac {\widetilde {f_{11}}}{\widetilde {f_{21}}}\mbox {cos} (\pi jx) \end{array}\right )},\end{equation*}
where $\widetilde {L}^*$ is the adjoint operator of $\widetilde {L}$, given by
 \begin{equation*}\widetilde {L}^*={\left (\begin{array}{cc} \widetilde {f_{11}} & \widetilde {f_{21}} \\ \widetilde {f_{12}} & d_2\frac {d^2}{dx^2} \end{array}\right )}.\end{equation*}
Because $\mbox {range}\,\widetilde {L}=(\mbox {ker}\,\widetilde {L}^*)^\bot$, we have $\mbox {codim}\,\mbox {range}\,\widetilde {L}=\mbox {dim}\,\mbox {ker}\,\widetilde {L}^*=1.$ Finally, since
 \begin{equation*}\hat {L}=\dfrac {\partial \widetilde {L}}{\partial d_2}={\left (\begin{array}{cc} 0 & 0 \\ 0 & \frac {d^2}{dx^2} \end{array}\right )}\end{equation*}
and $\hat {L}\Phi _0\notin \mbox {range}\,\widetilde {L}$, the conditions required for the standard bifurcation theorem from a simple eigenvalue [Reference Crandall and Rabinowitz8] are satisfied.
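Before turning to stability, the bifurcation points found in Theorem 6.2 are easy to tabulate. The sketch below (ours; the entries $\widetilde {f_{11}}$, $\widetilde {f_{12}}$, $\widetilde {f_{21}}$ are hypothetical numbers, not values derived from the model parameters) lists the candidate values $\widetilde {d_2}(j)=-\widetilde {f_{12}}\widetilde {f_{21}}/(\widetilde {f_{11}}L_j)$ together with the non-degeneracy quantity $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}$.
\begin{verbatim}
import numpy as np

def bifurcation_values(f11, f12, f21, j_max=5):
    """Candidate bifurcation values d2~(j) = -f12*f21/(f11*Lj), j = 1..j_max."""
    values = {j: -f12 * f21 / (f11 * (np.pi * j) ** 2) for j in range(1, j_max + 1)}
    nondegeneracy = f11 ** 2 + f12 * f21   # must be nonzero for a simple eigenvalue
    return values, nondegeneracy

# hypothetical linearization entries (f11 > 0, f12 < 0, f21 > 0, as in the text)
vals, nd = bifurcation_values(f11=0.4, f12=-1.0, f21=0.8)
print(vals)   # a positive sequence, decreasing in j
print(nd)     # here 0.4**2 + (-1.0)*0.8 = -0.64 != 0
\end{verbatim}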
 Next, we study the stability of the bifurcation solutions. Let $L(l)$ denote the linearized operator $\partial _UF(\widetilde {d_2}+d_2(l), \overline {U}+l\Phi _0+o(l))$, where $U=(u,v)$, $\overline {U}=(u^*_3,v^*_3)$ and $(\widetilde {d_2}+d_2(l), \overline {U}+l\Phi _0+o(l))$ is a bifurcation solution obtained in Theorem 6.2.
Definition 5. Let $B(X,Y)$ denote the set of bounded linear maps of $X$ into $Y$ and let $T,K\in B(X,Y)$. Then $\mu \in R$ is a $K$-simple eigenvalue of $T$ if

(i) $\mbox {dim}\, N(T-\mu K)=\mbox {codim}\, R(T-\mu K)=1$ and, if $N(T-\mu K)=\mbox {span}\left \{x_0\right \}$,

(ii) $Kx_0\notin R(T-\mu K)$.
Proposition 6.3. For $d_2=\widetilde {d_2}$, $0$ is an $i$-simple eigenvalue of $\widetilde {L}$, where $i$ is the inclusion mapping $\widetilde X\rightarrow \widetilde Y$.
Proof. According to the proof of Theorem 6.2, we have $\mbox {dim}\,\mbox {ker}\,\widetilde {L}=\mbox {codim}\,\mbox {range}\,\widetilde {L}=1$. Moreover, $i\Phi _0\notin \mbox {range}\,\widetilde {L}$, where $\Phi _0$ satisfies $\mbox {ker}\,\widetilde {L}=\mbox {span}\{ {\Phi _0}\}$. Thus, it is clear that $\widetilde {L}$ possesses $0$ as an $i$-simple eigenvalue according to the definition of a $K$-simple eigenvalue presented in [Reference Wu30].
 We have identified an $i$-simple eigenvalue $\lambda _j(d_2)$ of $\widetilde {L}$ near $d_2=\widetilde {d_2}$, as well as an $i$-simple eigenvalue $\lambda (l)$ of $L(l)$ when $|l|$ is small enough. By making use of the well-known theorem of Crandall and Rabinowitz [Reference Crandall and Rabinowitz9], we obtain
 \begin{equation*}\lim \limits _{l\to 0,\lambda (l)\neq 0}-\dfrac {ld_2'(l)\lambda _j'(\widetilde {d_2})}{\lambda (l)}=1.\end{equation*}
Proposition 6.4. If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$, then $\lambda (l)$ and $-ld_2'(l)$ have the same sign. If instead $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\gt 0$, then $\lambda (l)$ and $ld_2'(l)$ have the same sign.
Proof. From (6.3) we get $(\lambda _j(d_2))^2-(\widetilde {f_{11}}-d_2L_j)\lambda _j(d_2)-\widetilde {f_{12}}\widetilde {f_{21}}-d_2\widetilde {f_{11}}L_j=0$. Taking the derivative of both sides with respect to $d_2$, we get $2\lambda _j(d_2)\lambda _j'(d_2)+L_j\lambda _j(d_2)-(\widetilde {f_{11}}-d_2L_j)\lambda _j'(d_2)-\widetilde {f_{11}}L_j=0.$ So, $\lambda _j(\widetilde {d_2})=0$ shows
 \begin{equation*}\lambda _j'(\widetilde {d_2})=\dfrac {-L_j\widetilde {f_{11}}}{\widetilde {f_{11}}-L_j\widetilde {d_2}}=\dfrac {-\widetilde {f_{11}}^2L_j}{\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}}.\end{equation*}
Thus, if $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$, then $\lambda _j'(\widetilde {d_2})\gt 0$, which shows that $\lambda (l)$ and $-ld_2'(l)$ have the same sign. If instead $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\gt 0$, then $\lambda _j'(\widetilde {d_2})\lt 0$, which implies that $\lambda (l)$ and $ld_2'(l)$ have the same sign.
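This computation can be double-checked symbolically. The sympy sketch below (ours, assuming sympy is available) performs the implicit differentiation of (6.3) at $\lambda =0$, $d_2=\widetilde {d_2}$ and compares the result with the closed-form expression for $\lambda _j'(\widetilde {d_2})$ obtained above.
\begin{verbatim}
import sympy as sp

f11, f12, f21, Lj, d2 = sp.symbols('f11 f12 f21 Lj d2', real=True)
lam = sp.Function('lam')(d2)            # the eigenvalue branch lambda_j(d2)

# characteristic polynomial (6.3)
poly = lam**2 - (f11 - d2*Lj)*lam - f12*f21 - d2*f11*Lj

# implicit differentiation with respect to d2
lam_p = sp.Symbol('lam_prime')
dpoly = sp.diff(poly, d2).subs(sp.Derivative(lam, d2), lam_p)

# evaluate at the bifurcation point: lambda = 0, d2 = -f12*f21/(f11*Lj)
d2_tilde = -f12*f21/(f11*Lj)
eq = dpoly.subs(lam, 0).subs(d2, d2_tilde)
lam_prime = sp.solve(sp.Eq(eq, 0), lam_p)[0]

# compare with the formula in the proof of Proposition 6.4; expected output: 0
print(sp.simplify(lam_prime + f11**2*Lj/(f11**2 + f12*f21)))
\end{verbatim}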
 Then we analyse the sign of $\lambda (l)$. Because $(u^*_3,v^*_3)$ is on the branch $u=h_1(v)$, we study the following boundary value problem
 \begin{equation} \left \{\begin{array}{lll} d_2 v''+d(v)=0,& x\in (0,1),\\ v'(0)=0,\qquad v'(1)=0,\\ \end{array} \right . \end{equation}
where $d(v)=f_2(h_1(v),v)$.
Theorem 6.5. Let $C=d'(v^*_3)$, $D=d''(v^*_3)$, $E=d'''(v^*_3)$ and define $N=3CE-5D^2$; then we have the following conclusions.

(i) If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$ and $N\gt 0$, then $\lambda (l)\lt 0$ for $0\lt l\ll 1.$

(ii) If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$ and $N\lt 0$, then $\lambda (l)\gt 0$ for $0\lt l\ll 1.$

(iii) If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\gt 0$ and $N\gt 0$, then $\lambda (l)\gt 0$ for $0\lt l\ll 1.$

(iv) If $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\gt 0$ and $N\lt 0$, then $\lambda (l)\lt 0$ for $0\lt l\ll 1.$
Proof. We only prove assertion (i) here; the other assertions can be handled in a similar manner. By Proposition 6.4, it is sufficient to compute $d_2'(l)$ to reveal the sign of $\lambda (l)$. We expand $v(x,l)$ and $d_2(l)$ in $l$ to obtain
 \begin{align*}v(x,l)&=v^*_3+l\widetilde n_1(x)+l^2\widetilde n_2(x)+l^3\widetilde n_3(x)+\cdots, \\ d_2(l)&=\widetilde {d_2}+l\widetilde k_1+l^2\widetilde k_2+l^3\widetilde k_3+\cdots .\end{align*}
Due to $d(v^*_3)=f_2(h_1(v^*_3),v^*_3)=0$, we obtain
 \begin{equation*}d(v)=C(v-v^*_3)+\dfrac {1}{2}D(v-v^*_3)^2+\dfrac {1}{6}E(v-v^*_3)^3+\cdots, \end{equation*}
where
 \begin{equation*}C=\dfrac {\beta cmv^*_3}{(m+bh_1(v^*_3))^2}h_1'(v^*_3),\end{equation*}
 \begin{equation*}D=\dfrac {-2\beta cmbv^*_3}{(m+bh_1(v^*_3))^3}(h'_1(v^*_3))^2+\dfrac {2\beta cm}{(m+bh_1(v^*_3))^2}h'_1(v^*_3)+\dfrac {\beta cmv^*_3}{(m+bh_1(v^*_3))^2}h''_1(v^*_3),\end{equation*}
 \begin{align*}E=&\dfrac {6\beta cmb^2v^*_3}{(m+bh_1(v^*_3))^4}(h'_1(v^*_3))^3- \dfrac {6\beta cmb}{(m+bh_1(v^*_3))^3}(h'_1(v^*_3))^2-\dfrac {6\beta cmbv^*_3}{(m+bh_1(v^*_3))^3}h'_1(v^*_3)h''_1(v^*_3)\\ &+\dfrac {3\beta cm}{(m+bh_1(v^*_3))^2}h''_1(v^*_3)+\dfrac {\beta cmv^*_3}{(m+bh_1(v^*_3))^2}h'''_1(v^*_3).\end{align*}
In addition, it is easy to find that $f_1(h_1(v^*_3),v^*_3)=0$, so we get $h_1'(v^*_3)=-\widetilde {f_{12}}/\widetilde {f_{11}}.$ Consequently, based on the previous definitions of $\widetilde {f_{21}}$ and $\widetilde {d_2}$, we know
 \begin{equation*}C=\dfrac {\beta cmv^*_3}{(m+bh_1(v^*_3))^2}h_1'(v^*_3)=\dfrac {\beta cmv^*_3}{(m+bh_1(v^*_3))^2}\left (\dfrac {-\widetilde {f_{12}}}{\widetilde {f_{11}}}\right )=-\dfrac {\widetilde {d_2}\widetilde {f_{11}}L_j}{\widetilde {f_{12}}}\left (\dfrac {-\widetilde {f_{12}}}{\widetilde {f_{11}}}\right )=\widetilde {d_2}\pi ^2j^2.\end{equation*}
By substituting these expansions into (6.4) and setting the coefficient of each power of $l$ to zero, we obtain the following sequence of equations:
 \begin{align} \widetilde {d_2}\widetilde n''_1&+C\widetilde n_1=0, \end{align}
 \begin{align} \widetilde {d_2}\widetilde n''_2+\widetilde k_1\widetilde n''_1&+\dfrac {1}{2}D\widetilde n_1^2+C\widetilde n_2=0, \end{align}
 \begin{align} \widetilde {d_2}\widetilde n''_3+\widetilde k_1\widetilde n''_2+\widetilde k_2\widetilde n''_1&+C\widetilde n_3+D\widetilde n_1\widetilde n_2+\dfrac {1}{6}E\widetilde n_1^3=0. \end{align}
By (6.5), we know that $\widetilde {d_2}\frac {d^2}{dx^2}+C$ has $0$ as an eigenvalue. So, equation (6.6) can be solved only if
 \begin{equation} \widetilde k_1{\int }_{0}^{1}\widetilde n''_1\widetilde n_1\mbox {d}x+\dfrac {1}{2}D{\int }_{0}^{1}\widetilde n^3_1\mbox {d}x=0. \end{equation}
Equation (6.5) is satisfied by $\widetilde n_1(x)=\mbox {cos}(\pi jx)$, and substituting this into (6.8), a simple calculation shows that $\widetilde k_1=0$. Therefore, (6.6) is rewritten as
 \begin{equation*}\widetilde {d_2}\widetilde n''_2+\dfrac {1}{2}D\widetilde n_1^2+C\widetilde n_2=0.\end{equation*}
Since $\widetilde n_1^2=(1+\mbox {cos}(2\pi jx))/2$, we know
 \begin{equation*}\widetilde {d_2}\widetilde n''_2+C\widetilde n_2=-\dfrac {1}{2}D\,\dfrac {1+\mbox {cos}(2\pi jx)}{2},\end{equation*}
and a direct calculation gives
 \begin{equation*}\widetilde n_2(x)=-\dfrac {D}{4C}-\dfrac {D}{4(C-4\widetilde {d_2}\pi ^2j^2)}\mbox {cos}(2\pi jx)=-\dfrac {D}{4C}+\dfrac {D}{12C}\mbox {cos}(2\pi jx).\end{equation*}
Now we consider equation (6.7). Since $\widetilde k_1=0$, it follows that (6.7) has a solution if and only if
 \begin{equation} \widetilde k_2{\int }_{0}^{1}\widetilde n''_1\widetilde n_1\mbox {d}x+D{\int }_{0}^{1}\widetilde n_1^2\widetilde n_2\mbox {d}x+\dfrac {1}{6}E{\int }_{0}^{1}\widetilde n^4_1\mbox {d}x=0. \end{equation}
It can be directly calculated that
 \begin{equation*}{\int }_{0}^{1}\widetilde n''_1\widetilde n_1\mbox {d}x=-\dfrac {1}{2}L_j,\quad {\int }_{0}^{1}\widetilde n^2_1\widetilde n_2\mbox {d}x=-\dfrac {5D}{48C},\quad {\int }_{0}^{1}\widetilde n^4_1\mbox {d}x=\dfrac {3}{8}.\end{equation*}
So, (6.9) becomes
 \begin{equation*}-\dfrac {1}{2}\widetilde k_2L_j-\dfrac {5D^2}{48C}+\dfrac {E}{16}=0;\quad \mbox {so that}\quad \widetilde k_2=\dfrac {1}{24CL_j}(3CE-5D^2).\end{equation*}
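These integrals are easy to double-check symbolically. The sympy sketch below (ours, with an arbitrary fixed mode number $j$) verifies the three values together with the resulting expression for $\widetilde k_2$.
\begin{verbatim}
import sympy as sp

x, C, D, E, k2 = sp.symbols('x C D E k2', real=True)
j = 3                          # any fixed mode number j >= 1 gives the same values
Lj = (sp.pi * j) ** 2

n1 = sp.cos(sp.pi * j * x)
n2 = -D/(4*C) + D/(12*C) * sp.cos(2*sp.pi*j*x)

I1 = sp.integrate(sp.diff(n1, x, 2) * n1, (x, 0, 1))   # expected: -Lj/2
I2 = sp.integrate(n1**2 * n2, (x, 0, 1))               # expected: -5*D/(48*C)
I3 = sp.integrate(n1**4, (x, 0, 1))                    # expected: 3/8
print(sp.simplify(I1 + Lj/2), sp.simplify(I2 + 5*D/(48*C)), I3 - sp.Rational(3, 8))

# solvability condition (6.9): k2*I1 + D*I2 + (1/6)*E*I3 = 0
k2_val = sp.solve(sp.Eq(k2*I1 + D*I2 + E*I3/6, 0), k2)[0]
print(sp.simplify(k2_val - (3*C*E - 5*D**2)/(24*C*Lj)))  # expected output: 0
\end{verbatim}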
Since $\widetilde k_1=0$ and $l{d_2}'(l)=l(\widetilde k_1+2l\widetilde k_2+O(l^2)),$ we have $l{d_2}'(l)=2l^2\widetilde k_2+O(l^3)$. Thus, for $|l|$ sufficiently small, the sign of $l{d_2}'(l)$ is the same as that of $\widetilde k_2$. By Proposition 6.4, this implies that the sign of $-\lambda (l)$ is the same as that of $\widetilde k_2$ if $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$. So, if $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}\lt 0$ and $N\gt 0$, then $\widetilde k_2\gt 0$, and hence $\lambda (l)\lt 0.$
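To read off which case of Theorem 6.5 a given parameter set falls into, the small helper below (our own sketch with placeholder inputs, where the argument $A$ stands for $\widetilde {f_{11}}^2+\widetilde {f_{12}}\widetilde {f_{21}}$) encodes the four cases.
\begin{verbatim}
def sign_of_lambda(A, C, D, E):
    """Sign of lambda(l) for 0 < l << 1 according to Theorem 6.5, where
    A = f11~^2 + f12~*f21~ and C, D, E are d'(v3*), d''(v3*), d'''(v3*)."""
    N = 3.0 * C * E - 5.0 * D ** 2
    if A == 0.0 or N == 0.0:
        return "degenerate: Theorem 6.5 does not apply"
    # cases (i) and (iv): A and N of opposite sign  =>  lambda(l) < 0
    # cases (ii) and (iii): A and N of the same sign =>  lambda(l) > 0
    return "lambda(l) < 0" if A * N < 0.0 else "lambda(l) > 0"

# hypothetical values, not computed from the model:
# A < 0 and N = 3*1.2*2.0 - 5*0.25 = 5.95 > 0, i.e. case (i)
print(sign_of_lambda(A=-0.3, C=1.2, D=0.5, E=2.0))
\end{verbatim}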
7. Conclusions
In this paper, we examine a mechanism of pattern formation in a predator–prey model with Holling-II functional response. The model consists of a single reaction–diffusion equation coupled with an ordinary differential equation. The contribution of this paper is threefold.
7.1. Existence of non-constant regular solution
 We prove the existence of regular stationary solutions of system (3.14), using the method in [Reference Cygan, Marciniak–Czochra, Karch and Suzuki4]. If the internal equilibrium $\overline u_3$ of system (3.14) satisfies $2\overline u_3\gt K$, where $K$ is the carrying capacity, if the other parameters are non-negative, and if the diffusion coefficient $d_2$ of the predator is positive, then system (3.14) produces a non-constant regular solution (Theorem 3.3).
7.2. Existence and uniqueness of steady states with jump discontinuity
 We apply various approaches to demonstrate the existence of steady states with jump discontinuities and investigate their characteristics, mainly on a one-dimensional spatial domain (see Theorem 4.1 for domains of higher dimension and Theorems 5.1–5.3 for the one-dimensional domain). These results show the existence of discontinuous steady-state solutions $(u(x), v(x))$ of system (1.1), where $u(x)$ displays a jump discontinuity while $v(x)$ is either monotonic or symmetric, depending on a fixed parameter $\gamma$. Furthermore, it is observed that by selecting a smaller range for $\gamma$, the solution becomes unique. This uniqueness stems from the fact that $f_2(h_2(v),v)$ is strictly decreasing with respect to $v$ within this interval. It should be emphasized that these phenomena differ significantly from those observed in systems in which both species diffuse or neither species diffuses.
7.3. Existence and stability of bifurcation solutions
 In Section 6, we focus on the bifurcating solutions of system (1.1). It has been observed that stable patterns emerge near the constant equilibrium state in partial differential equation (PDE) systems with the diffusion-driven instability (DDI) property. The system (1.1) analysed in this paper also exhibits DDI (Theorem 6.2), but all Turing-type patterns are unstable (Theorem 6.5). This is significantly different from the classical diffusive model.
Funding statement
The work was partially supported by the National Natural Science Foundation of China (61872227, 12126420) and the Cultivation Project Funds for Beijing University of Civil Engineering and Architecture (No. X24007).
Authors' contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Competing interest
None.