Sbornik: Mathematics, 2024, Volume 215, Issue 10, Pages 1321–1350
DOI: https://doi.org/10.4213/sm10081e
(Mi sm10081)
 

Some functionals for random walks and critical branching processes in an extremely unfavourable random environment

V. A. Vatutin$^{a}$, C. Dong$^{b}$, E. E. Dyakonova$^{a}$

$^{a}$ Steklov Mathematical Institute of Russian Academy of Sciences, Moscow, Russia
$^{b}$ Xidian University, Xi'an, P. R. China
Abstract: Let $\{S_{n},\,n\geq 0\}$ be a random walk whose increment distribution belongs without centering to the domain of attraction of an $\alpha$-stable law, that is, there are scaling constants $a_{n}$ such that the sequence $S_{n}/a_{n}$, $n=1,2,\dots$, converges weakly, as $n\to\infty$, to a random variable having an $\alpha$-stable distribution. Let $S_{0}=0$,
$$ L_{n}:=\min (S_{1},\dots,S_{n})\quad\text{and}\quad\tau_{n}:=\min \{ 0\leq k\leq n\colon S_{k}=\min (0,L_{n})\}. $$
Assuming that $S_{n}\leq h(n)$, where $h(n)$ is $o(a_{n})$ as $n\to\infty$ and the limit $\lim_{n\to\infty}h(n)\in [-\infty,+\infty]$ exists, we prove several limit theorems describing the asymptotic behaviour of the functionals
$$ \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leq h(n)], \qquad \lambda>0, $$
as $n\to\infty$. The results obtained are applied to study the survival probability of a critical branching process evolving in an extremely unfavourable random environment.
Bibliography: 15 titles.
Keywords: stable random walks, branching processes, survival probability, extreme random environment.
Funding: Ministry of Science and Higher Education of the Russian Federation (grant no. 075-15-2022-265); Ministry of Science and Technology (MOST) of China (grant no. G2022174007L).
The work of E. E. Dyakonova and V. A. Vatutin was performed at the Steklov International Mathematical Center and supported by the Ministry of Science and Higher Education of the Russian Federation (agreement no. 075-15-2022-265). The research of C. Dong and V. A. Vatutin was also supported by the Ministry of Science and Technology of PRC (project no. G2022174007L).
Received: 13.02.2024 and 01.07.2024
Document Type: Article
MSC: Primary 60G50; Secondary 60J80, 60K37
Language: English
Original paper language: Russian

§ 1. Introduction and main results

We consider the asymptotic behaviour of some functionals specified on the trajectories of a random walk

$$ \begin{equation*} S_{0}=0, \quad S_{n}=X_{1}+\dots+X_{n}, \qquad n\geqslant 1, \end{equation*} \notag $$
with independent identically distributed increments $X_{i}$, $i=1,2,\dots$ . To describe the conditions we impose on the increments, let
$$ \begin{equation*} \mathcal{A}:=\{\alpha \in (0,2)\setminus \{1\},\,|\beta|<1\}\cup \{\alpha =1,\,\beta=0\}\cup \{\alpha=2,\,\beta=0\} \end{equation*} \notag $$
be a subset of $\mathbb{R}^{2}$. For $(\alpha,\beta)\in \mathcal{A}$ and a random variable $X$ we write $X\in \mathcal{D}(\alpha,\beta)$ if the distribution of $X$ belongs to the domain of attraction of a stable law with density $g_{\alpha,\beta}(x)$, $x\in (-\infty,+\infty)$, and characteristic function
$$ \begin{equation*} G_{\alpha,\beta}(w)=\int_{-\infty}^{+\infty}e^{iwx}g_{\alpha,\beta}(x)\,dx =\exp \biggl\{ -c|w|^{\alpha}\biggl(1-i\beta \frac{w}{|w|}\tan\frac{\pi \alpha}{2}\biggr) \biggr\}, \qquad c>0, \end{equation*} \notag $$
and, in addition, $\mathbf{E}X=0$ if this moment exists. This implies, in particular, that there is an increasing sequence of positive numbers
$$ \begin{equation*} a_{n}=n^{1/\alpha}\ell(n) \end{equation*} \notag $$
with a slowly varying sequence $\ell(1),\ell(2),\dots$, such that, as $n\to\infty$
$$ \begin{equation*} \biggl\{ \frac{S_{[ nt]}}{a_{n}},\,t\geqslant 0\biggr\} \quad \Longrightarrow\quad \mathcal{Y}=\{ Y_{t},\,t\geqslant 0\}, \end{equation*} \notag $$
where
$$ \begin{equation*} \mathbf{E}e^{iwY_{t}}=G_{\alpha,\beta}(wt^{1/\alpha}), \qquad t\geqslant 0, \end{equation*} \notag $$
and the symbol $\Longrightarrow$ denotes weak convergence in the space $D[0,\infty)$ of càdlàg functions endowed with the Skorokhod topology. Observe that if $X_{n}\overset{d}{=}X\in \mathcal{D}(\alpha,\beta)$ for all $n\in \mathbb{N}:=\{1,2,\dots\}$, then
$$ \begin{equation*} \lim_{n\to \infty}\mathbf{P}(S_{n}>0)=:\rho =\mathbf{P}(Y_{1}>0) \in (0,1). \end{equation*} \notag $$
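As a purely numerical illustration of this setting (not part of the paper's arguments), one can take standard Cauchy increments, which correspond to $\alpha=1$, $\beta=0$ with $a_{n}=n$, and estimate $\mathbf{P}(S_{n}>0)$ by simulation; by symmetry the limit $\rho$ equals $1/2$ in this case. The sample sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_rho(n=1000, trials=5000):
    """Estimate P(S_n > 0) for a random walk with standard Cauchy increments
    (alpha = 1, beta = 0, a_n = n); the limit rho should be close to 1/2."""
    s_n = rng.standard_cauchy(size=(trials, n)).sum(axis=1)
    return float((s_n > 0).mean())

if __name__ == "__main__":
    print("estimated rho:", estimate_rho())  # expected to be near 0.5
```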

We now list our main restrictions on the properties of the random walk.

Condition A1. The random variables $X_{n}$, $n\in \mathbb{N}$, are independent copies of a random variable $X\in \mathcal{D}(\alpha,\beta)$. In addition, the distribution of $X$ is non-lattice.

Some of our statements need a stronger assumption.

Condition A2. The law of $X$ under $\mathbf{P}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$, and there exists $n\in\mathbb{N}$ such that the density $f_{n}(x):=\mathbf{P}(S_{n}\in dx)/dx$ of $S_{n}$ is bounded.

We set

$$ \begin{equation} \begin{gathered} \, M_{n}:=\max (S_{1},\dots,S_{n}), \qquad L_{n}:=\min (S_{1},\dots,S_{n}), \notag \\ \tau_{n} :=\min \{ 0\leqslant k\leqslant n\colon S_{k}=\min (0,L_{n})\}, \notag \\ b_{n}:=\frac{1}{a_{n}n}=\frac{1}{n^{1/\alpha +1}\ell(n)}, \end{gathered} \end{equation} \tag{1.1} $$
and introduce a renewal function:
$$ \begin{equation*} U(x):= \begin{cases} 0, &x<0, \\ \displaystyle 1+\sum_{n=1}^{\infty}\mathbf{P}(S_{n}\geqslant-x,\, M_{n}<0) \\ \displaystyle \qquad\ \ =1+\sum_{n=1}^{\infty}\mathbf{P}(S_{n}\geqslant -x,\, \tau_{n}=n), &x\geqslant 0, \end{cases} \end{equation*} \notag $$
where the last equality follows from the duality principle for random walks (see, for example, [7], Ch. XII, § 2).
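To make the path functionals and the renewal function $U$ concrete, the following sketch (an illustration only; standard normal increments stand in for a walk satisfying Condition A1 with $\alpha=2$, $\beta=0$) computes $M_{n}$, $L_{n}$ and $\tau_{n}$ from a simulated trajectory and estimates $U(x)$ by Monte Carlo, truncating the defining series at a finite horizon. The horizon and the number of trials are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def path_functionals(increments):
    """Return (M_n, L_n, tau_n) for the walk S_k = X_1 + ... + X_k with S_0 = 0."""
    s = np.concatenate(([0.0], np.cumsum(increments)))    # S_0, S_1, ..., S_n
    m_n, l_n = s[1:].max(), s[1:].min()
    tau_n = int(np.argmax(s == min(0.0, l_n)))            # first k with S_k = min(0, L_n)
    return m_n, l_n, tau_n

def estimate_U(x, horizon=200, trials=5000):
    """Monte Carlo estimate of U(x) = 1 + sum_{n>=1} P(S_n >= -x, M_n < 0), x >= 0,
    with the series truncated at n = horizon."""
    total = 1.0
    for _ in range(trials):
        s = np.cumsum(rng.standard_normal(horizon))
        running_max = np.maximum.accumulate(s)
        total += np.sum((s >= -x) & (running_max < 0)) / trials
    return total

if __name__ == "__main__":
    print(path_functionals(rng.standard_normal(50)))
    print("U(1) estimate:", estimate_U(1.0))
```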

We use, along with the function $U$, two more renewal functions:

$$ \begin{equation} \begin{gathered} \, V_0(x):= \begin{cases} 0, &x>0, \\ \displaystyle 1+\sum_{n=1}^{\infty}\mathbf{P}(S_{n}\leqslant -x,\, L_{n}\geqslant 0),&x\leqslant 0, \end{cases} \\ V(x):= \begin{cases} 0,&x\geqslant 0, \\ \displaystyle 1+\sum_{n=1}^{\infty}\mathbf{P}( S_{n}<-x,\, L_{n}\geqslant 0), & x<0. \end{cases} \end{gathered} \end{equation} \tag{1.2} $$
Observe that
$$ \begin{equation} V_{0}(0)=1+\sum_{n=1}^{\infty}\mathbf{P}(S_{n}=0,\, L_{n}\geqslant 0) =\frac{1}{1-\zeta}, \end{equation} \tag{1.3} $$
where
$$ \begin{equation*} \begin{aligned} \, \zeta &=\mathbf{P}(S_{1}=0) +\sum_{n=2}^{\infty}\mathbf{P}(S_{1}>0,\dots,S_{n-1}>0,\, S_{n}=0) \\ &=\mathbf{P}(S_{1}=0) +\sum_{n=2}^{\infty}\mathbf{P}(S_{1}<0,\dots,S_{n-1}<0,\, S_{n}=0) \in (0,1). \end{aligned} \end{equation*} \notag $$
Here, to justify the transition from the first to the second line it is necessary to use the fact that
$$ \begin{equation*} \{ S_{n}-S_{n-k},\,k=0,1,\dots,n\} \overset{d}{=}\{S_{k},\,k=0,1,\dots,n\}, \end{equation*} \notag $$
which follows from the duality principle.
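The renewal functions in (1.2) admit the same kind of truncated Monte Carlo approximation; the short sketch below (again with illustrative Gaussian increments, for which the distinction between $V$ and $V_{0}$ disappears because $\mathbf{P}(S_{n}=0)=0$) estimates $V(x)$ for $x<0$.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_V(x, horizon=200, trials=5000):
    """Monte Carlo estimate of V(x) = 1 + sum_{n>=1} P(S_n < -x, L_n >= 0), x < 0,
    with the series truncated at n = horizon."""
    total = 1.0
    for _ in range(trials):
        s = np.cumsum(rng.standard_normal(horizon))
        running_min = np.minimum.accumulate(s)
        total += np.sum((s < -x) & (running_min >= 0)) / trials
    return total

if __name__ == "__main__":
    print("V(-1) estimate:", estimate_V(-1.0))
```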

One can check also that if Condition A2 is valid, then

$$ \begin{equation*} U(0+)=V(-0)=1 \end{equation*} \notag $$
and $V(x)=V_{0}(x)$ for all $x\in(-\infty,+\infty)$.

In what follows we will consider the random walks starting at time $n=0$ from an arbitrary point $x\in \mathbb{R}$ and denote the corresponding probabilities and expectations by $\mathbf{P}_{x}(\,\cdot\,)$ and $\mathbf{E}_{x}[\,\cdot\,]$. We will also write $\mathbf{P}$ and $\mathbf{E}$ instead of $\mathbf{P}_{0}$ and $\mathbf{E}_{0}$, respectively.

Now we formulate the main results of this paper dealing with the properties of random walks.

Theorem 1. Let Condition A1 be valid. If $\varphi(n)$, $n\in \mathbb{N}$, is a positive deterministic function such that $\varphi(n)\to+\infty$ as $n\to\infty$ and $\varphi(n)=o(a_{n})$, then for any $\lambda >0$

$$ \begin{equation*} \begin{aligned} \, \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \varphi(n)] &\sim\lambda \, \mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)\int_{0}^{\infty }e^{-\lambda z}U(z)\,dz \\ &\sim \lambda g_{\alpha,\beta}(0)b_{n}\int_{0}^{\varphi (n)}V(-w)\,dw\int_{0}^{\infty}e^{-\lambda z}U(z)\,dz \end{aligned} \end{equation*} \notag $$
as $n\to\infty$.
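The left-hand side of Theorem 1 can itself be approximated by direct simulation; the sketch below does this for illustrative Gaussian increments, $\lambda=1$ and the arbitrary choice $\varphi(n)=n^{1/4}$. It only illustrates the functional being studied; checking the stated asymptotics would in addition require the constant $g_{\alpha,\beta}(0)$ and the functions $U$ and $V$.

```python
import numpy as np

rng = np.random.default_rng(3)

def functional_lhs(n=200, lam=1.0, trials=50000):
    """Monte Carlo estimate of E[exp(lam * S_{tau_n}); S_n <= phi(n)]
    with phi(n) = n**0.25 and standard normal increments (illustrative only)."""
    phi_n = n ** 0.25
    acc = 0.0
    for _ in range(trials):
        s = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n))))
        if s[-1] <= phi_n:                        # restrict to the event {S_n <= phi(n)}
            tau_n = int(np.argmax(s == min(0.0, s[1:].min())))
            acc += np.exp(lam * s[tau_n])         # e^{lam S_{tau_n}} <= 1 since S_{tau_n} <= 0
    return acc / trials

if __name__ == "__main__":
    print(functional_lhs())
```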

In the following three statements we analyze the case when $S_{n}$ is forced to tend to $-\infty$, that is, we consider events of the form $\{S_{n}\leqslant \psi(n)\}$ with $\psi(n)\to-\infty$ as $n\to\infty$.

Theorem 2. Let Condition A1 be valid. If $\psi(n)$, $n\in \mathbb{N}$, is a deterministic function such that $\psi(n)\to-\infty$ as $n\to \infty$ and $\psi(n)=o(a_{n})$, then for any $\lambda >0$ and $x\leqslant 0$

$$ \begin{equation} \mathbf{E}_{x}[ e^{\lambda S_{n}};\, S_{n}\leqslant \psi(n),\, M_{n}<0] \sim g_{\alpha,\beta}(0)b_{n}V(x)U(-\psi(n)) \lambda ^{-1}e^{\lambda \psi(n)} \end{equation} \tag{1.4} $$
as $n\to\infty$.

Using the duality principle for random walks and setting $x=0$ in (1.4) we immediately obtain the following result.

Corollary 1. If Condition A1 is valid and $\psi(n)$, $n\in \mathbb{N}$, is a deterministic function such that $\psi(n)\to-\infty$ as $n\to\infty$ and $\psi(n)=o(a_{n})$, then for any $\lambda >0$

$$ \begin{equation} \mathbf{E}[ e^{\lambda S_{n}};\, S_{n}\leqslant \psi(n),\, \tau_{n}=n] \sim g_{\alpha,\beta}(0)b_{n}U(-\psi(n)) \lambda ^{-1}e^{\lambda\psi(n)} \end{equation} \tag{1.5} $$
as $n\to\infty$.

The next statement is a natural complement to Corollary 1.

Theorem 3. Let Condition A1 be valid. If $\psi(n)$, $n\in \mathbb{N}$, is a deterministic function such that $\psi(n)\to-\infty$ as $n\to \infty$ and $\psi(n)=o(a_{n})$, then for any $\lambda >0$

$$ \begin{equation*} \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \psi(n)] \sim g_{\alpha,\beta}(0)b_{n}U(-\psi(n)) e^{\lambda \psi (n)}\int_{0}^{\infty}e^{-\lambda z}V_{0}(-z)\,dz. \end{equation*} \notag $$

We now consider the case $S_{n}\leqslant K$ for some fixed $K$.

Theorem 4. Let Conditions A1 and A2 be valid. Then for any fixed $K$ and $\lambda >0$

$$ \begin{equation*} \begin{aligned} \, \lim_{n\to \infty}\frac{\mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant K]}{b_{n}} &=g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, U(-dx)\int_{0}^{K-x}V(-w)\,dw \\ &\qquad+g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}U(-x)V_{0}(K-x)\,dx. \end{aligned} \end{equation*} \notag $$

We note that problems related to the ones under consideration here were investigated by several authors.

Thus, Hirano [8] analyzed the asymptotic behaviour as $n\to\infty$ of the functional

$$ \begin{equation*} \mathbf{E}_{x}[ e^{\lambda S_{n}};\, M_{n}\leqslant K]=e^{\lambda x}\mathbf{E}[ e^{\lambda S_{n}};\, M_{n}\leqslant K-x], \end{equation*} \notag $$
assuming that $\mathbf{E}X^{2}<\infty$ and the pair $(K,x)$ is fixed. Hirano’s results were generalized in [2] to the case $X\in \mathcal{D}(\alpha,\beta)$ by showing that, as $n\to\infty$, for $\lambda >0$ and fixed $x\leqslant 0$,
$$ \begin{equation} \mathbf{E}_{x}[ e^{\lambda S_{n}};\, M_{n}<0] \sim g_{\alpha,\beta}(0)b_{n}V(x)\int_{0}^{\infty}e^{-\lambda w}U(w)\,dw, \end{equation} \tag{1.6} $$
while for fixed $x\geqslant 0$
$$ \begin{equation*} \mathbf{E}_{x}[ e^{-\lambda S_{n}};\, L_{n}\geqslant 0] \sim g_{\alpha,\beta}(0)b_{n}U(x)\int_{0}^{\infty}e^{-\lambda w}V(-w)\,dw. \end{equation*} \notag $$
It follows from (1.6) and the continuity theorem for Laplace transforms that for fixed $y\leqslant 0$
$$ \begin{equation*} \mathbf{E}[ e^{\lambda S_{n}};\, S_{n}<y,\, M_{n}<0] \sim g_{\alpha,\beta}(0)b_{n}\int_{-y}^{\infty}e^{-\lambda w}U(w)\,dw \end{equation*} \notag $$
as $n\to\infty$, which hints at the form of the asymptotic behaviour of the left-hand side in (1.5).

The structure of the remaining sections of this paper is as follows. In § 2 we formulate a number of known results for random walks conditioned to stay nonnegative or negative. In § 3 we prove Theorem 1. Section 4 is devoted to the proof of Theorem 2. Section 5 contains the proof of Theorem 3. The proof of Theorem 4 is given in § 6. In § 7 we introduce measures $\mathbf{P}_{x}^{+}$ and $\mathbf{P}_{x}^{-}$ generated, respectively, by random walks conditioned to stay nonnegative or negative and use Theorem 4 to investigate the survival probability of a critical branching process evolving in an extremely unfavourable random environment.

In what follows we denote by $C,C_{1},C_{2},\dots$, some positive constants that can be different in different formulae or even within one and the same formula.

§ 2. Auxiliary results

We now formulate a number of statements that show the importance of the functions $U,V_{0}$ and $V$.

We recall that a positive sequence $\{c_{n},\,n\in \mathbb{N}\}$ (or a real function $c(x)$, $x\geqslant x_{0}$) is said to be regularly varying at infinity with index $\gamma \in \mathbb{R}$, denoted by $c_{n}\in R_{\gamma}$ or $c(x)\in R_{\gamma}$, if $c_{n}= n^{\gamma}l(n)$ ($c(x)= x^{\gamma}l(x)$), where $l(x)$ is a slowly varying function, that is, a positive real function with the property that $l(tx)/l(x)\to1$ as $x\to\infty$ for any fixed $t>0$.

It is known (see, for instance, [10] and [11]) that if Condition A1 is valid, then

$$ \begin{equation} \mathbf{P}(M_{n}<0) \in R_{-\rho}, \qquad U(x)\in R_{\alpha \rho}, \end{equation} \tag{2.1} $$
and
$$ \begin{equation} \mathbf{P}(L_{n}\geqslant 0) \in R_{-(1-\rho)}, \qquad V(-x)\in R_{\alpha (1-\rho)}. \end{equation} \tag{2.2} $$

Some basic inequalities used below in our proofs are contained in the following lemma.

Lemma 1 (see [2], Proposition 2.3, and [12], Lemma 2). If Condition A1 is valid, then there is a positive constant $C$ such that, for all $n$ and $x,y\geqslant 0$,

$$ \begin{equation} \mathbf{P}_{x}(0\leqslant S_{n}<y,\, L_{n}\geqslant 0)\leqslant Cb_{n}U(x)\int_{0}^{y}V(-w)\,dw \end{equation} \tag{2.3} $$
and, for $x,z\leqslant 0$,
$$ \begin{equation} \mathbf{P}_{x}(z\leqslant S_{n}<z+1,\, M_{n}<0) \leqslant Cb_{n}V(x)U(-z), \end{equation} \tag{2.4} $$
and therefore
$$ \begin{equation*} \mathbf{P}_{x}(z\leqslant S_{n}<0,\, M_{n}<0)\leqslant Cb_{n}V(x)\int_{z}^{0}U(-w)\,dw. \end{equation*} \notag $$

We need some equivalence relations established by Doney [6] and rewritten below in our notation.

Lemma 2 ([6], Proposition 18). Suppose that $X\in\mathcal{D}(\alpha,\beta)$ and the distribution of $X$ is non-lattice. Then, for each fixed $\Delta >0$

$$ \begin{equation} \mathbf{P}_{z}(S_{n}\in [ y,y+\Delta),\, L_{n}\geqslant 0) \sim g_{\alpha,\beta}(0)b_{n}U(z)\int_{y}^{y+\Delta}V(-w)\,dw \end{equation} \tag{2.5} $$
and
$$ \begin{equation} \mathbf{P}_{-z}(S_{n}\in [ -y,-y+\Delta),\, M_{n}<0) \sim g_{\alpha,\beta}(0)b_{n}V(-z)\int_{y-\Delta}^{y}U(w)\,dw \end{equation} \tag{2.6} $$
uniformly in the nonnegative $z$ and $y$ such that $\max (z,y)\in [0,\delta_{n}a_{n}]$, where $\delta_{n}\to0$ as $n\to\infty$.

Integrating (2.5) over $y\in (0,x)$ leads to the following important conclusion for $z=0$ (see also Theorem 4 in [15]).

Corollary 2. Under the assumptions of Lemma 2

$$ \begin{equation*} \mathbf{P}(S_{n}\leqslant x,\, L_{n}\geqslant 0)\sim g_{\alpha,\beta }(0)b_{n}\int_{0}^{x}V(-w)\,dw \end{equation*} \notag $$
uniformly in $x\in (0,\delta_{n}a_{n}]$, where $\delta_{n}\to0$ as $n\to\infty$.

Combining (2.1) and (2.6) we arrive at the following statement.

Corollary 3. Suppose that $X\in \mathcal{D}(\alpha,\beta)$ and the distribution of $X$ is non-lattice. Then for each fixed $\Delta >0$

$$ \begin{equation*} \mathbf{P}_{-x}(S_{n}\in [ -y,-y+\Delta),\, M_{n}<0) \sim g_{\alpha,\beta}(0)b_{n}V(-x)U(y)\Delta \end{equation*} \notag $$
uniformly in $x\in [0,\delta_{n}a_{n}]$ and $y\in [T_{n},\delta_{n}a_{n}]$, where $T_{n}\to\infty$ and $\delta_{n}\to0$ as $n\to\infty$ in such a way that $T_{n}<\delta_{n}a_{n}$.

We also need the following simple observation, which we refer to several times in what follows.

Lemma 3. Let $g(x)$, $x\geqslant 0$, be a positive nondecreasing function such that ${g(2x_{0})\geqslant 1}$ for some $x_{0}>0$. Then for any $x\geqslant x_{0}$ and $y\geqslant 0$

$$ \begin{equation*} g(x+y)\leqslant g(2x)(1+g(2y)). \end{equation*} \notag $$
If, in addition,
$$ \begin{equation*} \limsup_{x\to\infty}\frac{g(2x)}{g(x)}<\infty, \end{equation*} \notag $$
then there is a constant $C\in (0,\infty)$ such that
$$ \begin{equation} g(x+y)\leqslant Cg(x)(1+g(2y)) \end{equation} \tag{2.7} $$
for all $y\geqslant 0$ and all sufficiently large $x$.
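Lemma 3 is elementary; as a quick sanity check (not needed for the proofs), one can verify the first inequality numerically for a concrete nondecreasing function, say $g(x)=\sqrt{x+1}$ with $x_{0}=1$, for which $g(2x_{0})=\sqrt{3}\geqslant 1$. The test grid below is an arbitrary choice.

```python
import numpy as np

def g(x):
    """A concrete positive nondecreasing function with g(2 * x0) >= 1 for x0 = 1."""
    return np.sqrt(x + 1.0)

def check_lemma3(x0=1.0):
    """Check g(x + y) <= g(2x) * (1 + g(2y)) on a grid of points x >= x0, y >= 0."""
    x, y = np.meshgrid(np.linspace(x0, 100.0, 400), np.linspace(0.0, 100.0, 400))
    return bool(np.all(g(x + y) <= g(2 * x) * (1 + g(2 * y))))

if __name__ == "__main__":
    print("inequality holds on the grid:", check_lemma3())  # expected: True
```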

§ 3. Proof of Theorem 1

We fix a positive integer $J<n$ and write

$$ \begin{equation} \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \varphi(n)] =\sum_{j=0}^{J}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] +R(J+1,n), \end{equation} \tag{3.1} $$
where
$$ \begin{equation*} R(J+1,n):=\sum_{j=J+1}^{n}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] . \end{equation*} \notag $$
It follows from (2.4) and (2.3) that for all positive integers $n$ and $k$
$$ \begin{equation} \begin{split} &\mathbf{P}(S_{n}\in [ -k,-k+1),\, M_{n}<0) \\ &\qquad=\mathbf{P}(S_{n}\in [ -k,-k+1),\, \tau_{n}=n)\leqslant Cb_{n}U(k) \end{split} \end{equation} \tag{3.2} $$
and
$$ \begin{equation} \mathbf{P}(0\leqslant S_{n}<y,\, L_{n}\geqslant 0) \leqslant Cb_{n}\int_{0}^{y}V(-w)\,dw. \end{equation} \tag{3.3} $$
Let $\{S_{n}',\,n\geqslant 0\}$ be an independent probabilistic copy of the random walk $\{S_{n},n\geqslant 0\}$, and let $L_{n}'=\min\{S_{1}',\dots,S_{n}'\}$. Using (3.2) and (3.3) we obtain
$$ \begin{equation} \begin{aligned} \, &\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] \notag \\ &\qquad =\mathbf{E}[ e^{\lambda S_{j}}\, \mathbf{P}(S_{n-j}'\leqslant \varphi(n)-S_{j},\, L_{n-j}'\geqslant 0\mid S_{j}); \, \tau_{j}=j] \notag \\ &\qquad \leqslant \sum_{k=1}^{\infty}e^{\lambda (-k+1)}\, \mathbf{P}(S_{j}\in [ -k,-k+1),\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant \varphi(n)+k,\, L_{n-j}\geqslant 0) \notag \\ &\qquad \leqslant Cb_{j}\sum_{k=1}^{\infty}e^{\lambda (-k+1)}U(k)\, \mathbf{P}(S_{n-j}\leqslant \varphi(n)+k,\, L_{n-j}\geqslant 0) \notag \\ &\qquad \leqslant Cb_{j}b_{n-j}\sum_{k=1}^{\infty}e^{-\lambda k}U(k)\int_{0}^{\varphi(n)+k}V(-z)\,dz. \end{aligned} \end{equation} \tag{3.4} $$
Note that in view of (1.1) and the properties of regularly varying functions there exists a constant $C\in (0,\infty)$ such that $b_{j}\leqslant Cb_{n}$ for all $n$ and $j\in [n/2,n]$.

Since $U(x)$ and $V(-x)$ are renewal functions, there exists a constant $C\in (0,\infty)$ such that for all $x\in \mathbb{R}$

$$ \begin{equation} U(x)+V(-x)\leqslant C(|x|+1) \quad\text{and}\quad U(x)V(-2x)\leqslant C(|x|+1)^2. \end{equation} \tag{3.5} $$
Therefore,
$$ \begin{equation*} \sum_{k=1}^{\infty}e^{-\lambda k}U(k)(1+V(-2k))<\infty \end{equation*} \notag $$
for all $\lambda >0$. Using this estimate and inequality (2.7) for $g(x)=V(-x)$, we conclude that
$$ \begin{equation} \begin{aligned} \, &\sum_{j=[ n/2] +1}^{n}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] \notag \\ &\qquad \leqslant C\sum_{k=1}^{\infty}e^{-\lambda k}U(k)\sum_{j=[ n/2]+1}^{n}b_{j}\, \mathbf{P}(S_{n-j}\leqslant \varphi(n)+k,\, L_{n-j}\geqslant 0) \notag \\ &\qquad \leqslant C_{1}b_{n}\sum_{k=1}^{\infty}e^{-\lambda k}U(k)\sum_{j=[n/2] +1}^{n}\mathbf{P}(S_{n-j}\leqslant \varphi(n)+k,\, L_{n-j}\geqslant 0) \notag \\ &\qquad \leqslant C_{1}b_{n}\sum_{k=1}^{\infty}e^{-\lambda k}U(k)V(-\varphi(n)-k) \notag \\ &\qquad \leqslant C_{2}b_{n}V(-\varphi(n))\biggl(\sum_{k=1}^{\infty}e^{-\lambda k}U(k)(1+V(-2k))\biggr) \notag \\ &\qquad \leqslant C_{3}b_{n}V(-\varphi(n)). \end{aligned} \end{equation} \tag{3.6} $$

We know by (2.2) that $V(-w)\in R_{\alpha (1-\rho)}$. Consequently,

$$ \begin{equation} \int_{0}^{y}V(-w)\,dw\sim \frac{yV(-y)}{\alpha (1-\rho)+1}\in R_{\alpha(1-\rho)+1} \end{equation} \tag{3.7} $$
as $y\to\infty$ (see [7], Ch. VIII, § 9, Theorem 1). This fact, in combination with (3.6) and Corollary 2, shows that
$$ \begin{equation} \begin{aligned} \, &\sum_{j=[ n/2] +1}^{n}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] \leqslant Cb_{n}V(-\varphi(n)) \notag \\ &\qquad =o\biggl(b_{n}\int_{0}^{\varphi(n)}V(-w)\,dw\biggr) =o(\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)) \end{aligned} \end{equation} \tag{3.8} $$
as $n\to\infty$, and that there exists a positive constant $C$ such that
$$ \begin{equation*} \int_{0}^{2y}V(-w)\,dw\leqslant C\int_{0}^{y}V(-w)\,dw \end{equation*} \notag $$
for all sufficiently large $y$. Combining this estimate with (3.4) we conclude that
$$ \begin{equation} \begin{aligned} \, &\sum_{J+1\leqslant j\leqslant n/2}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] \notag \\ &\qquad \leqslant C\sum_{J\leqslant j\leqslant n/2}b_{j}b_{n-j}\sum_{k=1}^{\infty}e^{-\lambda k}U(k)\int_{0}^{\varphi(n)+k}V(-w)\,dw \notag \\ &\qquad \leqslant C_{1}b_{n}\int_{0}^{2\varphi(n)}V(-w)\,dw\sum_{J\leqslant j\leqslant n/2}b_{j}\biggl(\sum_{k=1}^{\infty}e^{-\lambda k}U(k)\bigl( 1+\int_{0}^{2k}V(-w)\,dw\bigr)\biggr) \notag \\ &\qquad \leqslant C_{2}\sum_{j\geqslant J}b_{j}\, b_{n}\int_{0}^{\varphi(n)}V(-w)\,dw=\varepsilon_{J}\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0), \end{aligned} \end{equation} \tag{3.9} $$
where $\varepsilon_{J}\to0$ as $J\to\infty$ in view of (1.1) and Corollary 2. Estimates (3.8) and (3.9) imply that
$$ \begin{equation} \lim_{J\to \infty}\lim_{n\to \infty }\frac{R(J+1,n)}{\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)}\leqslant C\lim_{J\to \infty}\varepsilon_{J}=0. \end{equation} \tag{3.10} $$

We now fix $j\in [0,J]$ and write the decomposition

$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j] \\ &\qquad=\mathbf{E}\bigl[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j,\, S_{j}>-\sqrt{\varphi(n)}\, \bigr] \\ &\qquad\qquad +\mathbf{E}\bigl[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j,\, S_{j}\leqslant -\sqrt{\varphi(n)}\, \bigr] . \end{aligned} \end{equation*} \notag $$
According to Lemma 5 in [13], for each fixed $j$
$$ \begin{equation*} \mathbf{E}\bigl[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j,\, S_{j}\leqslant -\sqrt{\varphi(n)}\, \bigr]=o(\mathbf{P}(S_{n-j}\leqslant\varphi(n),\, L_{n-j}\geqslant 0)) \end{equation*} \notag $$
as $n\to\infty$. It is clear that
$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}\bigl[ e^{\lambda S_{j}};\, S_{n}\leqslant \varphi(n),\, \tau_{n}=j,\, S_{j}>-\sqrt{\varphi(n)}\, \bigr] \\ &\qquad =\int_{-\sqrt{\varphi(n)}}^{0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant \varphi(n)-x,\, L_{n-j}\geqslant 0). \end{aligned} \end{equation*} \notag $$
For $x\in [-\sqrt{\varphi(n)},0]$ and fixed $j$ we have
$$ \begin{equation*} 0\leqslant \varphi(n)-x\leqslant \varphi(n)+\sqrt{\varphi(n)}=o(a_{n})=o(a_{n-j}) \end{equation*} \notag $$
as $n\to\infty$. This estimate and Corollary 2 imply that
$$ \begin{equation*} \mathbf{P}(S_{n-j}\leqslant \varphi(n)-x,\, L_{n-j}\geqslant 0)\sim g_{\alpha,\beta }(0)b_{n-j}\int_{0}^{\varphi(n)-x}V(-w)\,dw \end{equation*} \notag $$
as $n\to\infty$ uniformly in $x\in (-\sqrt{\varphi(n)},0]$. We know that $b_{n}\in R_{-(1+\alpha^{-1})}$. Therefore, $b_{n-j}\sim b_{n}$ as $n\to\infty$ for each fixed $j$. Moreover,
$$ \begin{equation*} \lim_{n\to \infty}\sup_{x\in (-\sqrt{\varphi(n)},0]} \left(\frac{\displaystyle\int_{0}^{\varphi(n)-x}V(-w)\,dw} {\displaystyle\int_{0}^{\varphi(n)}V(-w)\,dw}-1\right)=0, \end{equation*} \notag $$
according to (3.7). Hence we conclude that for each fixed $j$
$$ \begin{equation*} \begin{aligned} \, \mathbf{P}(S_{n-j}\leqslant \varphi(n)-x,\, L_{n-j}\geqslant 0) &\sim g_{\alpha,\beta}(0)b_{n}\int_{0}^{\varphi(n)}V(-w)\,dw \\ &\sim \mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$ uniformly in $x\in (-\sqrt{\varphi(n)},0]$. Thus,
$$ \begin{equation*} \begin{aligned} \, &\int_{-\sqrt{\varphi(n)}}^{0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant \varphi(n)-x,\, L_{n-j}\geqslant 0) \\ &\qquad \sim \mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)\int_{-\infty}^{0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$. Using (3.10) and summing the estimates above over $j$ from $0$ to $\infty$ we obtain from (3.1) that
$$ \begin{equation*} \begin{aligned} \, \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \varphi(n)] &\sim\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)\int_{-\infty}^{0}e^{\lambda x}\sum_{j=0}^{\infty}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \\ &=\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0) \biggl(1+\int_{-\infty}^{0}e^{\lambda x}U(-dx)\biggr) \\ &=\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)\lambda \int_{0}^{\infty }e^{-\lambda x}U(x)\,dx \end{aligned} \end{equation*} \notag $$
as $n\to\infty$, where we used the equality $U(0+)=1$ at the last step.

We complete the proof of Theorem 1 by combining the equivalence relation obtained with Corollary 2.

§ 4. Proof of Theorem 2

The starting point of our arguments is the representation

$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}_{x}[ e^{\lambda S_{n}};\, S_{n}\leqslant \psi(n),\, M_{n}<0] =\int_{-\infty}^{\psi(n)}e^{\lambda w}\, \mathbf{P}_{x}(S_{n}\in dw,M_{n}<0) \\ &\qquad=e^{\lambda \psi(n)}\int_{0}^{\infty }e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0) . \end{aligned} \end{equation*} \notag $$

Introducing for $h\in (0,1]$ and $N\in \mathbb{N}$ the notation

$$ \begin{equation*} T_{1}(n,N,h):= \sum_{k=0}^{[ N/h]}e^{-\lambda kh}\, \mathbf{P}_{x}(S_{n}\in [ \psi(n)-(k+1) h,\, \psi (n)-kh),\, M_{n}<0) \end{equation*} \notag $$
and
$$ \begin{equation*} T_{2}(n,N):= \sum_{r=N}^{\infty}e^{-\lambda r}\, \mathbf{P}_{x}( S_{n}\in [ \psi(n)-(r+1),\, \psi(n)-r),\, M_{n}<0), \end{equation*} \notag $$
we have
$$ \begin{equation*} \begin{aligned} \, e^{-\lambda h}T_{1}(n,N,h) &\leqslant \int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0) \\ &\leqslant T_{1}(n,N,h)+T_{2}(n,N). \end{aligned} \end{equation*} \notag $$

Using (2.4), Lemma 3 and the first inequality in (3.5), we conclude that there exists a positive integer $N_{0}$ such that the estimates

$$ \begin{equation*} \begin{aligned} \, T_{2}(n,N) &\leqslant Cb_{n}V(x)\sum_{r=N}^{\infty}e^{-\lambda r}U(r+1-\psi(n)) \\ &\leqslant Cb_{n}V(x)U(-\psi(n))\biggl(\sum_{r=N}^{\infty}e^{-\lambda r}(1+U(2r+2))\biggr) \\ &=Cb_{n}V(x)U(-\psi(n))e^{-\lambda N}\biggl(\sum_{r=0}^{\infty }e^{-\lambda r}(1+U(2r+2N+2))\biggr) \\ &\leqslant 2Cb_{n}V(x)U(-\psi(n))e^{-\lambda N}\sum_{r=0}^{\infty}e^{-\lambda r}(r+N+2) \\ &\leqslant C_{1}b_{n}V(x)U(-\psi(n))Ne^{-\lambda N} \end{aligned} \end{equation*} \notag $$
are valid for all $N\geqslant N_{0}$.

Further, if Condition A1 is satisfied, then, according to Corollary 3,

$$ \begin{equation} \begin{aligned} \, &\mathbf{P}_{x}(S_{n}\in [ \psi(n)-(k+1) h,\, \psi(n)-kh),\, M_{n}<0) \notag \\ &\qquad \sim g_{\alpha,\beta}(0)b_{n}V(x)hU((k+1)h-\psi(n)) \end{aligned} \end{equation} \tag{4.1} $$
as $n\to\infty$ uniformly in negative $x=o(a_{n})$ and $\psi(n)-(k+1)h=o(a_{n})$, where $(k+1)h\leqslant 2N$. Moreover,
$$ \begin{equation*} \frac{U(z-\psi(n))}{U(-\psi(n))}\to 1 \end{equation*} \notag $$
as $n\to\infty$ uniformly in $z=o(\psi(n))$. As a result,
$$ \begin{equation*} \begin{aligned} \, T_{1}(n,N,h) &=(1+o(1))g_{\alpha,\beta}(0)b_{n}V(x)h\sum_{k=0}^{[ N/h]}e^{-\lambda kh}U(kh-\psi(n)) \\ &=(1+o(1))g_{\alpha,\beta}(0)b_{n}V(x)U(-\psi(n)) h\sum_{k=0}^{[ N/h]}e^{-\lambda kh} \\ &\leqslant (1+\varepsilon)g_{\alpha,\beta}(0)b_{n}V(x)U(-\psi (n)) h(1-e^{-\lambda h})^{-1} \end{aligned} \end{equation*} \notag $$
for any $\varepsilon >0$ and sufficiently large $n\geqslant n_{0}(\varepsilon)$. Thus,
$$ \begin{equation*} \limsup_{n\to \infty}\frac{\displaystyle\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0)}{b_{n}U(-\psi(n))V(x)}\leqslant (1+\varepsilon)g_{\alpha,\beta}(0)h(1-e^{-\lambda h})^{-1}. \end{equation*} \notag $$
Since $\varepsilon >0$ and $h>0$ can be selected arbitrarily small, we conclude that
$$ \begin{equation*} \limsup_{n\to \infty}\frac{\displaystyle\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0)}{b_{n}U(-\psi(n))V(x)}\leqslant \frac{g_{\alpha,\beta}(0)}{\lambda}. \end{equation*} \notag $$
In a similar way we obtain
$$ \begin{equation*} \begin{aligned} \, e^{-\lambda h}T_{1}(n,N,h) &\geqslant (1-\varepsilon)g_{\alpha,\beta}(0)b_{n}V(x)U(-\psi(n)) h \\ &\qquad \times \biggl((1-e^{-\lambda h})^{-1}-\sum_{r=N}^{\infty}e^{-\lambda r}\biggr) \end{aligned} \end{equation*} \notag $$
for any $\varepsilon >0$ and sufficiently large $n$, which implies that
$$ \begin{equation*} \begin{aligned} \, &\liminf_{n\to \infty} \frac{\displaystyle\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0)}{b_{n}U(-\psi(n))V(x)} \\ &\qquad \geqslant \liminf_{N\to \infty}\liminf_{h\downarrow 0}\lim_{\varepsilon \downarrow 0}\liminf_{n\to \infty }\frac{e^{-\lambda h}T_{1}(n,N,h)}{b_{n}U(-\psi(n))V(x)}\geqslant \frac{g_{\alpha ,\beta}(0)}{\lambda}. \end{aligned} \end{equation*} \notag $$
It follows that
$$ \begin{equation*} \lim_{n\to \infty}\frac{\displaystyle\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}_{x}(S_{n}\in \psi(n)-dy,\, M_{n}<0)}{b_{n}U(-\psi (n))V(x)}=\frac{g_{\alpha,\beta}(0)}{\lambda}, \end{equation*} \notag $$
as required.

§ 5. Proof of Theorem 3

We write

$$ \begin{equation*} \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \psi(n)] =R(0,n-J)+\sum_{j=n-J+1}^{n}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \psi (n),\tau_{n}=j], \end{equation*} \notag $$
where
$$ \begin{equation*} R(0,n-J):=\sum_{j=0}^{n-J}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \psi(n),\, \tau_{n}=j] . \end{equation*} \notag $$
Clearly, for each $j\in [0,n]$
$$ \begin{equation} \begin{aligned} \, &\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \psi(n),\, \tau_{n}=j] \notag \\ &\qquad =\int_{-\infty}^{\psi(n)}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant \psi(n)-x,\, L_{n-j}\geqslant 0) \notag \\ &\qquad =e^{\lambda \psi(n)}T_{3}(n,j), \end{aligned} \end{equation} \tag{5.1} $$
where
$$ \begin{equation*} T_{3}(n,j):=\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}(S_{j}\in \psi (n)-dy,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant y,\, L_{n-j}\geqslant0). \end{equation*} \notag $$
In view of (2.2) there is a constant $C\in (0,\infty)$ such that
$$ \begin{equation} e^{-\lambda y}\int_{0}^{y}V(-w)\,dw\leqslant Ce^{-\lambda y/2} \end{equation} \tag{5.2} $$
for all $y\geqslant 0$. Using now the inequalities (2.3), (2.4) and (2.1) it is not difficult to check the validity of the following chain of inequalities:
$$ \begin{equation} \begin{aligned} \, T_{3}(n,j) &\leqslant Cb_{n-j}\int_{0}^{\infty}e^{-\lambda y}\, \mathbf{P}( S_{j}\in \psi(n)-dy,\, \tau_{j}=j) \int_{0}^{y}V(-w)\,dw \notag \\ &\leqslant Cb_{n-j}\sum_{k=0}^{\infty}e^{-\lambda k}\, \mathbf{P}(S_{j}\,{\in}\, [ \psi(n)\,{-}\,k\,{-}\,1,\, \psi(n)\,{-}\,k),\, \tau_{j}\,{=}\,j) \int_{0}^{k+1}V(-w)\,dw \notag \\ &\leqslant C_{1}b_{n-j}b_{j}\sum_{k=0}^{\infty}e^{-\lambda k/2}U(k+1-\psi(n)) \notag \\ &\leqslant C_{1}b_{n-j}b_{j}U(-2\psi(n))\sum_{k=0}^{\infty}e^{-\lambda k/2}(U(2k+2)+1) \notag \\ &\leqslant C_{2}b_{n-j}b_{j}U(-\psi(n)). \end{aligned} \end{equation} \tag{5.3} $$
In view of (1.1) $b_{n-j}\leqslant Cb_{n}$ for all $j\leqslant n/2$. Therefore,
$$ \begin{equation*} \begin{aligned} \, &\sum_{j=J}^{n-J}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \psi(n),\, \tau _{n}=j] \leqslant C_{2}U(-\psi(n))e^{\lambda \psi (n)}\sum_{j=J}^{n-J}b_{n-j}b_{j} \\ &\qquad \leqslant C_{3}b_{n}U(-\psi(n))e^{\lambda \psi (n)}\sum_{j=J}^{n/2}b_{j}\leqslant \varepsilon_{J}b_{n}U(-\psi(n))e^{\lambda \psi(n)}, \end{aligned} \end{equation*} \notag $$
where $\varepsilon_{J}=C_{3}\sum_{j=J}^{\infty}b_{j}\to0$ as $J\to \infty$. Further, for any fixed $j\in [0,J]$ we have
$$ \begin{equation*} \begin{aligned} \, T_{3}(n,j) &\leqslant Cb_{n-j}\sum_{k=0}^{\infty}e^{-\lambda k/2}\, \mathbf{P}(S_{j}\in [ \psi(n)-k-1,\, \psi(n)-k),\, \tau_{j}=j) \\ &\leqslant C_{1}b_{n}\, \mathbf{P}(S_{j}\leqslant \psi(n))\sum_{k=0}^{\infty}e^{-\lambda k/2}=o(b_{n}) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$. Thus,
$$ \begin{equation*} \limsup_{J\to \infty}\limsup_{n\to \infty }\frac{R(0,n-J)}{b_{n}U(-\psi(n)) e^{\lambda \psi(n)}}=0. \end{equation*} \notag $$
Consider now $j=n-t$ for $t\in [0,J]$. In this case, for any $N\in \mathbb{N}$ and $h>0$ we have
$$ \begin{equation*} T_{3}(n,j)\leqslant T_{4}(n,N,h,j)+T_{5}(n,N,j), \end{equation*} \notag $$
where
$$ \begin{equation*} \begin{aligned} \, T_{4}(n,N,h,j) &:=\sum_{k=0}^{[ N/h]}e^{-\lambda kh}\, \mathbf{P}(S_{j}\in [ \psi(n)-(k+1)h,\, \psi(n)-kh),\, \tau_{j}=j) \\ &\qquad\times\mathbf{P}(S_{t}\leqslant (k+1)h,\, L_{t}\geqslant 0) \end{aligned} \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, T_{5}(n,N,j) &:=\sum_{r=N}^{\infty}e^{-\lambda r}\, \mathbf{P}(S_{j}\in [ \psi(n)-r-1,\, \psi(n)-r),\, \tau_{j}=j) \\ &\qquad\times\mathbf{P}(S_{t}\leqslant r+1,\, L_{t}\geqslant 0) . \end{aligned} \end{equation*} \notag $$
Using (2.4) for $x=0$ we conclude that
$$ \begin{equation} \begin{aligned} \, T_{5}(n,N,j) &\leqslant Cb_{j}\sum_{r=N}^{\infty}e^{-\lambda r}U(r+1-\psi (n))\, \mathbf{P}(S_{t}\leqslant r+1,\, L_{t}\geqslant 0) \notag \\ &\leqslant Cb_{j}U(-2\psi(n))\sum_{r=N}^{\infty}e^{-\lambda r}( U(2r+2)+1) \notag \\ &\leqslant C_{1}b_{j}U(-\psi(n))e^{-\lambda N}\sum_{r=0}^{\infty}e^{-\lambda r}(U(2r+2N+2)+1) \notag \\ &\leqslant C_{2}b_{j}U(-\psi(n))e^{-\lambda N}\sum_{r=0}^{\infty}e^{-\lambda r}(r+N+2) \notag \\ &\leqslant C_{3}b_{j}U(-\psi(n))Ne^{-\lambda N}. \end{aligned} \end{equation} \tag{5.4} $$

Further, if $0\leqslant n-j\leqslant J$ and $n\to\infty$, then by (2.6) and the duality principle for random walks

$$ \begin{equation*} \begin{aligned} \, &\mathbf{P}(S_{j}\in [ \psi(n)-(k+1) h,\, \psi(n)-kh),\, \tau_{j}=j) \\ &\qquad =\mathbf{P}(S_{j}\in [ \psi(n)-(k+1)h,\, \psi(n)-kh),\, M_{j}<0) \\ &\qquad \sim g_{\alpha,\beta}(0)b_{j}\int_{kh-\psi(n)}^{( k+1) h-\psi(n)}U(z)\,dz\sim g_{\alpha,\beta}(0)b_{j}hU(-\psi(n)) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$, uniformly in $0\leqslant (k+1)h\leqslant N$. Using these estimates we see that for each fixed $h>0$
$$ \begin{equation} T_{4}(n,N,h,j)\sim g_{\alpha,\beta}(0)b_{j}U(-\psi(n))h\sum_{k=0}^{[ N/h]}e^{-\lambda kh}\,\mathbf{P}(S_{t}\leqslant (k+1)h,\,L_{t}\geqslant 0) . \end{equation} \tag{5.5} $$
Some obvious estimates lead to the chains of inequalities
$$ \begin{equation} \begin{aligned} \, &h\sum_{k=0}^{[ N/h]}e^{-\lambda kh}\, \mathbf{P}(S_{t}\leqslant (k+1) h,\, L_{t}\geqslant 0) \notag \\ &\qquad\leqslant e^{2\lambda h}\sum_{k=0}^{[ N/h] }\int_{(k+1) h}^{(k+2) h}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz \notag \\ &\qquad \leqslant e^{2\lambda h}\int_{0}^{\infty}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz \end{aligned} \end{equation} \tag{5.6} $$
and
$$ \begin{equation} \begin{aligned} \, &h\sum_{k=0}^{[ N/h]}e^{-\lambda kh}\, \mathbf{P}(S_{t}\leqslant (k+1) h,\, L_{t}\geqslant 0) \notag \\ &\qquad \geqslant e^{-2\lambda h}\sum_{k=0}^{[ N/h]}he^{-\lambda (k-1)h}\, \mathbf{P}(S_{t}\leqslant kh,\, L_{t}\geqslant 0) \notag \\ &\qquad \geqslant e^{-2\lambda h}\int_{0}^{N-1}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz. \end{aligned} \end{equation} \tag{5.7} $$

Combining (5.5)–(5.7) we see that for $j=n-t\in (n-J,n]$

$$ \begin{equation*} \lim_{h\downarrow 0}\lim_{N\to \infty}\lim_{n\to \infty }\frac{T_{4}(n,N,h,j)}{g_{\alpha,\beta}(0)b_{j}U(-\psi (n))}=\int_{0}^{\infty}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz. \end{equation*} \notag $$
Hence, by the equivalence $b_{j}\sim b_{n}$ for $j\in (n-J,n]$, taking (5.4) into account we deduce that
$$ \begin{equation} \limsup_{n\to \infty}\frac{T_{3}(n,j)}{g_{\alpha,\beta }(0)b_{n}U(-\psi(n))}\leqslant \int_{0}^{\infty}e^{-\lambda z}\mathbf{P}( S_{t}\leqslant z,L_{t}\geqslant 0) \,dz. \end{equation} \tag{5.8} $$
To obtain a similar estimate from below we observe that
$$ \begin{equation*} \begin{aligned} \, &T_{3}(n,j)\geqslant T_{6}(n,N,h,j) \\ &:=\sum_{k=1}^{[ N/h]}e^{-\lambda (k+1)h}\, \mathbf{P}( S_{j}\in [ \psi(n)-(k+1) h,\, \psi(n)-kh),\, \tau_{j}=j) \, \mathbf{P}(S_{t}\,{\leqslant}\, kh,\, L_{t}\,{\geqslant}\, 0) \\ &\sim g_{\alpha,\beta}(0)b_{j}U(-\psi(n))h\sum_{k=1}^{[ N/h] }e^{-\lambda (k+1)h}\, \mathbf{P}(S_{t}\leqslant kh,\, L_{t}\geqslant 0) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$ and
$$ \begin{equation*} \begin{aligned} \, &h\sum_{k=1}^{[ N/h]}e^{-\lambda (k+1)h}\, \mathbf{P}( S_{t}\leqslant kh,\, L_{t}\geqslant 0) \\ &\qquad \geqslant e^{-2\lambda h}\sum_{k=1}^{[ N/h]}he^{-\lambda (k-1)h}\, \mathbf{P}(S_{t}\leqslant kh,\, L_{t}\geqslant 0) \\ &\qquad \geqslant e^{-2\lambda h}\int_{0}^{N-1}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0)\, dz. \end{aligned} \end{equation*} \notag $$
Hence it follows that
$$ \begin{equation*} \begin{aligned} \, &\liminf_{n\to \infty}\frac{T_{3}(n,j)}{g_{\alpha,\beta }(0)b_{j}U(-\psi(n))} \geqslant \lim_{h\downarrow 0}\lim_{N\to \infty }\lim_{n\to \infty}\frac{T_{6}(n,N,h,j)}{g_{\alpha,\beta }(0)b_{j}U(-\psi(n))} \\ &\qquad =\int_{0}^{\infty}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz. \end{aligned} \end{equation*} \notag $$
Thus, for each $t=n-j\in [0,J]$
$$ \begin{equation*} \lim_{n\to \infty}\frac{T_{3}(n,j)}{g_{\alpha,\beta}(0)b_{n}U(-\psi(n))} =\int_{0}^{\infty}e^{-\lambda z}\, \mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0) \,dz. \end{equation*} \notag $$
Using now (5.1) we conclude that for each fixed $J$
$$ \begin{equation*} \begin{aligned} \, &\sum_{j=n-J+1}^{n}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant \psi(n),\, \tau_{n}=j] \\ &\qquad \sim g_{\alpha,\beta}(0)b_{n}U(-\psi(n))e^{\lambda \psi (n)}\int_{0}^{\infty}e^{-\lambda z}\sum_{t=0}^{J}\mathbf{P}(S_{t}\leqslant z,\, L_{t}\geqslant 0)\,dz \end{aligned} \end{equation*} \notag $$
as $n\to \infty $. Letting $J$ tend to infinity we see that
$$ \begin{equation*} \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant \psi(n)] \sim g_{\alpha,\beta}(0)b_{n}U(-\psi(n)) e^{\lambda \psi (n)}\int_{0}^{\infty}e^{-\lambda z}V_{0}(-z)\,dz, \end{equation*} \notag $$
as required.

§ 6. Proof of Theorem 4

We begin with the decomposition

$$ \begin{equation*} \mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant K] =R(0,J)+R(J+1,n-J)+R(n-J+1,n), \end{equation*} \notag $$
where
$$ \begin{equation*} R(N_{1},N_{2}):=\sum_{j=N_{1}}^{N_{2}}\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\, \tau_{n}=j] . \end{equation*} \notag $$
We show that the term $R(J+1,n-J)$ is negligibly small with respect to the other terms in the expectation we are interested in. By Lemma 1 and estimate (5.2)
$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\, \tau_{n}=j] \\ &\qquad =\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0) \\ &\qquad =e^{\lambda K}\int_{K\vee 0}^{\infty}e^{-\lambda y}\, \mathbf{P}(S_{j}\in K-dy,\, M_{j}<0) \, \mathbf{P}(S_{n-j}\leqslant y,\, L_{n-j}\geqslant 0) \\ &\qquad \leqslant Cb_{n-j}\int_{K\vee 0}^{\infty}e^{-\lambda y}\, \mathbf{P}(S_{j}\in K-dy,\, M_{j}<0) \int_{0}^{y}V(-w)\,dw \\ &\qquad \leqslant Cb_{n-j}\sum_{k\geqslant K\vee 0}e^{-\lambda k}\, \mathbf{P}(S_{j}\in [ K-k-1,\, K-k),\, M_{j}<0) \int_{0}^{k+1}V(-w)\,dw \\ &\qquad \leqslant C_{1}b_{n-j}b_{j}\sum_{k\geqslant K\vee 0}e^{-\lambda k/2}U(k+1-K) \\ &\qquad \leqslant C_{1}b_{n-j}b_{j}U((-2K)\vee 0)\sum_{k\geqslant K\vee 0}e^{-\lambda k/2}(U(2k+2)+1) \leqslant C_{2}b_{n-j}b_{j}. \end{aligned} \end{equation*} \notag $$
Thus,
$$ \begin{equation*} R(J+1,n-J)\leqslant C\sum_{j=J+1}^{n-J}b_{n-j}b_{j}\leqslant C_{1}b_{n}\sum_{j=J+1}^{n/2}b_{j}\leqslant \varepsilon_{J}b_{n}, \end{equation*} \notag $$
where $\varepsilon_{J}=C_{1}\sum_{j=J+1}^{\infty}b_{j}\to0$ as $J\to\infty $.

Further, for fixed $j\in [0,J]$ we have

$$ \begin{equation*} \begin{aligned} \, &\frac{\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\, \tau_{n}=j] }{b_{n}} \\ &\qquad=\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \frac{\mathbf{P}(S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0)}{b_{n}}. \end{aligned} \end{equation*} \notag $$
Note that for any fixed $x\leqslant K\wedge 0$
$$ \begin{equation*} \mathbf{P}(S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0)\sim g_{\alpha,\beta}(0)b_{n-j}\int_{0}^{K-x}V(-w)\,dw \end{equation*} \notag $$
as $n\to\infty$, and by (2.3) there is a constant $C\in (0,\infty)$ such that
$$ \begin{equation*} \frac{\mathbf{P}(0\leqslant S_{n}<K-x,\, L_{n}\geqslant 0)}{b_{n}}\leqslant C\int_{0}^{K-x}V(-w)\,dw \end{equation*} \notag $$
for all $n$ and all $x\leqslant K\wedge 0$. These relations, in combination with (3.7), show that for any $\lambda >0$
$$ \begin{equation*} \begin{aligned} \, &\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \int_{0}^{K-x}V(-w)\,dw \\ &\qquad \leqslant \int_{-\infty}^{0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx) \int_{0}^{K-x}V(-w)\,dw \\ &\qquad \leqslant C_{1}\int_{-\infty}^{0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx) (|x|^{2}+|K|^{2}+1) <\infty . \end{aligned} \end{equation*} \notag $$
Using Corollary 2 and applying the dominated convergence theorem we conclude that, for each $j\in [0,J]$
$$ \begin{equation*} \begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\,\tau_{n}=j]}{b_{n}} \\ &\qquad =g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \int_{0}^{K-x}V(-w)\,dw. \end{aligned} \end{equation*} \notag $$
As a result,
$$ \begin{equation} \begin{aligned} \, &\lim_{J\to \infty}\lim_{n\to \infty}\frac{R(0,J)}{b_{n}} \notag \\ &\qquad=g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}\sum_{j=0}^{\infty}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \int_{0}^{K-x}V(-w)\,dw \notag \\ &\qquad =g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, U(-dx)\int_{0}^{K-x}V(-w)\,dw<\infty. \end{aligned} \end{equation} \tag{6.1} $$

To evaluate $R(n-J+1,n)$, for $j=n-t\in [n-J+1,n]$ we write the representation

$$ \begin{equation*} \begin{aligned} \, &\frac{\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\, \tau_{n}=j] }{b_{n}} \\ &\qquad=\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, \frac{\mathbf{P}( S_{j}\in dx,\, \tau_{j}=j)}{b_{n}}\, \mathbf{P}(S_{t}\leqslant K-x,\, L_{t}\geqslant 0) \end{aligned} \end{equation*} \notag $$
and observe that by (2.4) and the duality principle for random walks
$$ \begin{equation} \begin{aligned} \, &\int_{-\infty}^{-H}e^{\lambda x}\, \frac{\mathbf{P}(S_{j}\in dx,\, \tau _{j}=j)}{b_{n}}\, \mathbf{P}(S_{t}\leqslant K-x,\, L_{t}\geqslant 0) \notag \\ &\qquad \leqslant \mathbf{P}(L_{t}\geqslant 0) \int_{-\infty }^{-H}e^{\lambda x}\, \frac{\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) }{b_{n}} \notag \\ &\qquad \leqslant \mathbf{P}(L_{t}\geqslant 0) \sum_{k=H}^{\infty}e^{-\lambda k}\, \frac{\mathbf{P}(S_{j}\in [ -k-1,-k),\, \tau_{j}=j)}{b_{n}} \notag \\ &\qquad \leqslant C_{1}\, \mathbf{P}(L_{t}\geqslant 0) \sum_{k=H}^{\infty}e^{-\lambda k}U(k+1), \end{aligned} \end{equation} \tag{6.2} $$
for any $H>0$, and therefore the right-hand side of this relation vanishes as $H\to\infty$. Thus, we are left with the integral
$$ \begin{equation*} \int_{-H}^{K\wedge 0}e^{\lambda x}\, \frac{\mathbf{P}(S_{j}\in dx,\, \tau _{j}=j)}{b_{n}}\, \mathbf{P}(S_{t}\leqslant K-x,\, L_{t}\geqslant 0) . \end{equation*} \notag $$

To estimate this term we use Theorem 5.1 in [3] (rewritten in our notation) according to which

$$ \begin{equation*} \frac{\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j)}{b_{n}dx}\sim g_{\alpha,\beta}(0)U(-x) \end{equation*} \notag $$
as $n\to\infty$, uniformly in $x<0$ such that $ x=o(a_{n})$, if Condition A2 is valid. Thus,
$$ \begin{equation} \begin{aligned} \, &\lim_{n\to \infty}\int_{-H}^{K\wedge 0}e^{\lambda x}\, \frac{\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j)}{b_{n}}\, \mathbf{P}(S_{t}\leqslant K-x,\, L_{t}\geqslant 0) \notag \\ &\qquad =g_{\alpha,\beta}(0)\int_{-H}^{K\wedge 0}e^{\lambda x}U(-x)\, \mathbf{P}(S_{t}\leqslant K-x,\,L_{t}\geqslant 0)\, dx. \end{aligned} \end{equation} \tag{6.3} $$
Combining (6.2) and (6.3) we deduce that for fixed $t=n-j\in [0,J]$
$$ \begin{equation*} \begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbf{E}[ e^{\lambda S_{j}};\, S_{n}\leqslant K,\, \tau_{n}=j]}{b_{n}} \\ &\qquad=g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}U(-x)\, \mathbf{P}(S_{t}\leqslant K-x,\, L_{t}\geqslant 0) \,dx. \end{aligned} \end{equation*} \notag $$
Summing with respect to $t$ from $0$ to $\infty$ we obtain
$$ \begin{equation} \begin{aligned} \, &\lim_{J\to \infty}\lim_{n\to \infty}\frac{R(n-J+1,n)}{b_{n}} \nonumber \\ &\qquad=g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}U(-x)V_{0}(K-x)\,dx<\infty. \end{aligned} \end{equation} \tag{6.4} $$
Thus,
$$ \begin{equation*} \begin{aligned} \, &\lim_{n\to \infty}\frac{\mathbf{E}[ e^{\lambda S_{\tau_{n}}};\, S_{n}\leqslant K]}{b_{n}} =g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}\, U(-dx)\int_{0}^{K-x}V(-w)\,dw \\ &\qquad\qquad+g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}e^{\lambda x}U(-x)V_{0}(K-x)\,dx, \end{aligned} \end{equation*} \notag $$
as required.

§ 7. Survival probability for branching processes evolving in extremely unfavourable random environment

In this section we apply the results obtained for random walks to study the asymptotic behaviour of the survival probability of a critical branching process evolving in an unfavourable random environment. For a formal description of the problems we are planning to consider we denote by $\mathfrak{F}=\{\mathfrak{f}\}$ the space of all probability measures on $\mathbb{N}_{0}:=\{0,1,2,\dots\}$. For notational reasons, we identify a measure $\mathfrak{f}=\{\mathfrak{f}(\{0\}),\mathfrak{f}(\{1\}),\dots\}\in \mathfrak{F}$ with the corresponding probability generating function

$$ \begin{equation*} f(s)=\sum_{k=0}^{\infty}\mathfrak{f}(\{ k\})s^{k}, \qquad s\in [0,1], \end{equation*} \notag $$
and make no distinction between $\mathfrak{f}$ and $f$. Equipped with the metric of total variation, $\mathfrak{F}=\{\mathfrak{f}\}=\{f\}$ becomes a Polish space. Let
$$ \begin{equation*} F(s)=\sum_{j=0}^{\infty}F(\{ j\}) s^{j}, \qquad s\in[ 0,1], \end{equation*} \notag $$
be a random variable taking values in $\mathfrak{F}$, and let
$$ \begin{equation*} F_{n}(s)=\sum_{j=0}^{\infty}F_{n}(\{ j\}) s^{j}, \qquad s\in [ 0,1], \quad n\in \mathbb{N}, \end{equation*} \notag $$
be a sequence of independent probabilistic copies of the random variable $F$. The infinite sequence $\mathcal{E}=\{F_{n},\,n\in \mathbb{N}\}$ is called a random environment.

A sequence of nonnegative random variables $\mathcal{Z}=\{Z_{n},\, n\in \mathbb{N}_{0}\}$ specified on a probability space $(\Omega, \mathcal{F},\mathbf{P})$ is called a branching process in a random environment (BPRE) if $Z_{0}$ is independent of $\mathcal{E}$ and, given $\mathcal{E}$, the process $\mathcal{Z}$ is a Markov chain with

$$ \begin{equation*} \mathcal{L}(Z_{n}\mid Z_{n-1}=z_{n-1},\, \mathcal{E}=(f_{1},f_{2},\dots)) =\mathcal{L}(\xi_{n1}+\dots +\xi_{nz_{n-1}}) \end{equation*} \notag $$
for all $n\in \mathbb{N}$, $z_{n-1}\in \mathbb{N}_{0}$ and $f_{1},f_{2},\ldots\in \mathfrak{F}$, where $\xi_{n1},\xi_{n2},\dots$ is a sequence of independent identically distributed random variables with distribution $f_{n}$. Thus, $Z_{n-1}$ is the $(n-1)$st generation size of the population of the branching process and $f_{n}$ is the offspring distribution of an individual at generation $n-1$.

The sequence

$$ \begin{equation*} S_{0}=0, \qquad S_{n}=X_{1}+\dots+X_{n}, \quad n\geqslant 1, \end{equation*} \notag $$
where $X_{i}=\log F_{i}'(1)$, $i=1,2,\dots$, is called the associated random walk for the process $\mathcal{Z}$.
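As an illustration of these definitions (a simulation sketch only, not part of the paper), one can take geometric offspring laws with a uniformly distributed parameter; the associated increment $X_{k}=\log F_{k}'(1)$ is then bounded, symmetric and absolutely continuous, which corresponds to the case $\alpha=2$, $\beta=0$. The parameter range, the level $K$ and the sample sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_bpre(n=50, z0=1, lo=0.3, hi=0.7):
    """Simulate one trajectory (Z_n, S_n) of a BPRE with geometric offspring laws.

    In generation k every individual reproduces according to
    P(j offspring) = (1 - theta_k) * theta_k**j, theta_k ~ Uniform(lo, hi),
    so F_k'(1) = theta_k / (1 - theta_k) and X_k = log F_k'(1).
    """
    z, s = z0, 0.0
    for _ in range(n):
        theta = rng.uniform(lo, hi)
        s += np.log(theta / (1.0 - theta))        # increment of the associated walk
        if z > 0:
            # the sum of z iid geometric offspring numbers is negative binomial
            z = int(rng.negative_binomial(z, 1.0 - theta))
    return z, s

def estimate_joint_survival(n=50, K=0.0, trials=20000):
    """Monte Carlo estimate of P(Z_n > 0, S_n <= K), the quantity studied in Theorem 5 below."""
    hits = 0
    for _ in range(trials):
        z_n, s_n = simulate_bpre(n)
        if z_n > 0 and s_n <= K:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print("P(Z_n > 0, S_n <= K) estimate:", estimate_joint_survival())
```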

We assume below that $Z_{0}=1$ and impose the following restrictions on the properties of the BPRE.

Condition B1. The step of the associated random walk satisfies Conditions A1 and A2.

According to the classification of BPREs (see, for instance, [1] and [9]), Condition B1 means that we consider the critical BPREs.

Our second assumption on the environment concerns reproduction laws of particles. Set

$$ \begin{equation*} \gamma (b):=\frac{\sum_{k=b}^{\infty}k^{2}F(\{ k\}) }{\bigl(\sum_{i=0}^{\infty}iF(\{ i\})\bigr)^{2}}. \end{equation*} \notag $$

Condition B2. There exist $\varepsilon >0$ and $b\in\mathbb{N}$ such that

$$ \begin{equation*} \mathbf{E}[(\log ^{+}\gamma (b))^{\alpha +\varepsilon}]<\infty, \end{equation*} \notag $$
where $\log ^{+}x=\log (x\vee 1)$.
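For a concrete offspring law the quantity $\gamma(b)$ is easy to evaluate; for instance, for the geometric law $F(\{k\})=(1-\theta)\theta^{k}$ one has $\gamma(0)=(1+\theta)/\theta$, so if $\theta$ is bounded away from $0$ (as in the illustrative environment above), then $\gamma(b)$ is bounded and Condition B2 holds trivially. The sketch below evaluates $\gamma(b)$ by truncating the series; the truncation level is arbitrary.

```python
import numpy as np

def gamma_b(theta, b, kmax=10_000):
    """gamma(b) = sum_{k>=b} k^2 F({k}) / (sum_i i F({i}))^2 for the geometric law
    F({k}) = (1 - theta) * theta**k, with the numerator truncated at kmax."""
    k = np.arange(b, kmax + 1)
    tail = np.sum(k ** 2 * (1 - theta) * theta ** k)
    mean = theta / (1 - theta)                    # sum_i i F({i}), in closed form
    return tail / mean ** 2

if __name__ == "__main__":
    theta = 0.5
    print(gamma_b(theta, 0), (1 + theta) / theta)  # both should equal 3.0
```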

It is known (see [1], Theorem 1.1 and Corollary 1.2) that if Conditions B1 and B2 are valid, then there exist a number $\theta \in(0,\infty)$ and a sequence $l(1),l(2),\dots$ varying slowly at infinity such that, as $n\to\infty$,

$$ \begin{equation} \mathbf{P}(Z_{n}>0) \sim\theta \, \mathbf{P}(L_n\geqslant 0)\sim \theta n^{-(1-\rho)}l(n), \end{equation} \tag{7.1} $$
and for any $x\geqslant 0$
$$ \begin{equation} \begin{aligned} \, \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant xa_{n}) &=\mathbf{P}(S_{n}\leqslant xa_{n}\mid Z_{n}>0) \, \mathbf{P}(Z_{n}>0) \notag \\ &\sim \mathbf{P}(Y_{1}^{+}\leqslant x)\, \mathbf{P}(Z_{n}>0), \end{aligned} \end{equation} \tag{7.2} $$
where $\mathcal{Y}^{+}=\{Y_{t}^{+},\,0\leqslant t\leqslant 1\}$ denotes the meander of the strictly $\alpha$-stable process $\mathcal{Y}$, that is, the process $\mathcal{Y}$ conditioned to stay positive on the half-open interval $(0,1]$ (see [4] and [5]).

Thus, if a BPRE is critical, then, given $Z_{n}>0$, the value $S_{n}$ of the associated random walk, which ensures the survival of the population up to the distant time $n$, grows like $a_{n}$ times a positive random factor. Since $\mathbf{P}(Y_{1}^{+}\leqslant 0)=0$, it follows from (7.2) that if $\varphi(n)$ satisfies the restriction

$$ \begin{equation} \limsup_{n\to \infty}\frac{\varphi(n)}{a_{n}}\leqslant 0, \end{equation} \tag{7.3} $$
then
$$ \begin{equation*} \begin{aligned} \, \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant \varphi(n)) &=\mathbf{P}(S_{n}\leqslant \varphi(n)\mid Z_{n}>0) \mathbf{P}(Z_{n}>0) \\ &=o(\mathbf{P}(Z_{n}>0)) \end{aligned} \end{equation*} \notag $$
as $n\to\infty$.

It is natural to consider an environment meeting condition (7.3) as unfavourable for the development of the critical BPRE.

An important case of an unfavourable random environment was considered in [12] and [14], where it was shown, in particular, that if $\varphi(n)\to\infty$ as $n\to\infty$ in such a way that $\varphi(n)=o(a_{n})$, then

$$ \begin{equation*} \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant \varphi(n)) \sim C b_{n}\int_{0}^{\varphi(n)}V(-w)\,dw, \qquad C \in (0,\infty). \end{equation*} \notag $$
The authors of [12] also investigated the conditional distribution of the number of particles in the process at time $m\leqslant n$ given the event $\{Z_{n}>0,\,S_{n}\leqslant \varphi(n)\}$.

In this paper we complement the results of [12] by imposing even more stringent restrictions on the environment. Namely, we assume that at the time $n$ of observation the environment meets the assumption $S_{n}\leqslant K$ for a fixed constant $K$ and call such an environment extremely unfavourable.

Our main result is as follows.

Theorem 5. Let Conditions B1 and B2 be valid. Then for any fixed $K$

$$ \begin{equation*} \lim_{n\to \infty}\frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K)}{b_{n}}=G_{\mathrm{left}}(K)+G_{\mathrm{right}}(K), \end{equation*} \notag $$
where the constants $G_{\mathrm{left}}(K)\in (0,\infty)$ and $G_{\mathrm{right}}(K)\in(0,\infty)$ are specified below by formulae (7.9) and (7.14), respectively.

To prove the theorem we introduce two new probability measures $\mathbf{P}^{+}$ and $\mathbf{P}^{-}$ by using the identities

$$ \begin{equation} \begin{aligned} \, \mathbf{E}[U(x+X);\, X+x\geqslant 0]&=U(x), \qquad x\geqslant 0 , \\ \mathbf{E}[V(x+X);\, X+x<0]&=V(x), \qquad x\leqslant 0, \end{aligned} \end{equation} \tag{7.4} $$
which are valid for any oscillating random walk (see [9], § 4.4.3). The construction procedure of these measures is standard and is explained for $\mathbf{P}^{+}$ and $\mathbf{P}^{-}$ in detail in [1] and [2] (see also [9], § 5.2). We recall here only some basic definitions related to this construction.

Let $\mathcal{F}_{n}$, $n\geqslant 0$, be the $\sigma $-field of events generated by the random variables $F_{1},F_{2},\dots,F_{n}$ and $Z_{0},Z_{1},\dots,Z_{n}$. These $\sigma $-fields form a filtration $\mathcal{F}$. We assume that the random walk $\mathcal{S}=\{S_{n},\,n\geqslant0\}$ with the initial value $S_{0}=x$, $x\in \mathbb{R}$, is adapted to the filtration $\mathcal{F}$ and construct for $x\geqslant 0$ probability measures $\mathbf{P}_{x}^{+}$ and expectations $\mathbf{E}_{x}^{+}$ as follows. For every sequence $T_{0},T_{1},\dots$ of random variables with values in some space $\mathcal{T}$ and adapted to $\mathcal{F}$ and for any bounded and measurable function $g\colon\mathcal{T}^{n+1}\to\mathbb{R}$, $n\in \mathbb{N}_{0}$, we set

$$ \begin{equation*} \mathbf{E}_{x}^{+}[g(T_{0},\dots,T_{n})]:= \frac{1}{U(x)}\, \mathbf{E}_{x}[g(T_{0},\dots,T_{n})U(S_{n});\, L_{n}\geqslant 0]. \end{equation*} \notag $$
Similarly, for $x<0$, $V$ gives rise to probability measures $\mathbf{P}_{x}^{-}$ and expectations $\mathbf{E}_{x}^{-}$ characterized for each $n\in \mathbb{N}_{0}$ by the equation
$$ \begin{equation*} \mathbf{E}_{x}^{-}[g(T_{0},\dots,T_{n})]:= \frac{1}{V(x)}\, \mathbf{E}_{x}[g(T_{0},\dots,T_{n})V(S_{n});\, M_{n}<0]. \end{equation*} \notag $$

By virtue of (7.4) these definitions are consistent with respect to $n$.

For the convenience of the reader we present the following two lemmas, which provide the major steps in the proof of Theorem 5.

Lemma 4 ([12], Lemma 4). Assume Condition B1. Let $H_{1},H_{2},\dots$, be a uniformly bounded sequence of real-valued random variables adapted to some filtration $\widetilde{\mathcal{F}}=\{\widetilde{\mathcal{F}}_{k},\,k\in \mathbb{N}\}$, which converges $\mathbf{P}^{+}$-almost surely to a random variable $H_{\infty}$. Suppose that $\varphi(n)$, $n\in\mathbb{N}$, is a real-valued function such that $\inf_{n\in \mathbb{N}}\varphi(n)\geqslant C>0$ and $\varphi(n)=o(a_{n})$ as $n\to\infty$. Then

$$ \begin{equation*} \lim_{n\to \infty}\frac{\mathbf{E}[ H_{n};\, S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0]}{\mathbf{P}(S_{n}\leqslant \varphi(n),\, L_{n}\geqslant 0)}=\mathbf{E}_{0}^{+}[ H_{\infty}] . \end{equation*} \notag $$

The statement of the following lemma uses the first $n$ elements $F_{1},\dots,F_{n}$ of the random environment $\mathcal{E}=\{F_{k},\,k\in \mathbb{N}\}$.

Lemma 5. Let $0<\delta <1$. Let

$$ \begin{equation} \widehat{W}_{n}=g_{n}(F_{1},\dots,F_{\lfloor \delta n\rfloor },Z_{0},Z_{1},\dots,Z_{\lfloor \delta n\rfloor}), \qquad n\geqslant 1, \end{equation} \tag{7.5} $$
be random variables with values in a Euclidean (or a Polish) space $\mathcal{W}$ such that
$$ \begin{equation} \widehat{W}_{n}\to \widehat{W}_{\infty} \quad \mathbf{P}_{x}^{+}\textit{-a.s.} \end{equation} \tag{7.6} $$
for some $\mathcal{W}$-valued random variable $\widehat{W}_{\infty}$ and for all $x\geqslant 0$. Also let $B_{n}=h_{n}(F_{1},\dots,F_{\lfloor\delta n\rfloor})$, $n\geqslant 1$, be random variables with values in a Euclidean (or a Polish) space $\mathcal{B}$ such that
$$ \begin{equation*} B_{n}\to B_{\infty} \quad \mathbf{P}^{-}\textit{-a.s.} \end{equation*} \notag $$
for some $\mathcal{B}$-valued random variable $B_{\infty}$. Set
$$ \begin{equation*} \widetilde{B}_{n}:=h_{n}(F_{n},\dots,F_{n-\lfloor \delta n\rfloor +1}). \end{equation*} \notag $$
Then for any bounded continuous function $\Psi\colon\mathcal{W}\times \mathcal{B}\times \mathbb{R}\to\mathbb{R}$ and for $\lambda >0$
$$ \begin{equation*} \begin{aligned} \, & \frac{\mathbf{E}[\Psi (\widehat{W}_{n}, \widetilde{B}_{n},S_{n})e^{\lambda S_{n}};\, \tau_{n}=n]}{\mathbf{E}[e^{\lambda S_{n}};\, \tau_{n}=n]} \\ &\qquad \to K_{1}\iiint \Psi (w,b,-z)\, \mathbf{P}_{z}^{+}(\widehat{W}_{\infty}\in dw) \, \mathbf{P}^{-}(B_{\infty}\in db)U(z)e^{-\lambda z}\,dz \end{aligned} \end{equation*} \notag $$
as $n\to\infty$, where
$$ \begin{equation*} K_{1}^{-1}=K_{1\lambda}^{-1}=\int_{0}^{\infty}e^{-\lambda z}U(z)\,dz. \end{equation*} \notag $$

Proof. First of all, note that the statement of the lemma coincides almost literally with the statement of Theorem 2.8 in [2]. The only difference is that, instead of the sequence $\{\widehat{W}_{n},\,n\geqslant 1\}$, the paper [2] deals with the sequence
$$ \begin{equation*} W_{n}=g_{n}(F_{1},\dots,F_{\lfloor \delta n\rfloor}), \qquad n\geqslant 1, \end{equation*} \notag $$
meeting the condition
$$ \begin{equation*} W_n\to W_{\infty} \quad \mathbf{P}_{x}^{+}\text{-a.s.} \end{equation*} \notag $$
for some $\mathcal{W}$-valued random variable $W_{\infty}$ and all $x\geqslant 0$. In particular, it was shown in [2] that for all continuous bounded functions $\varphi_{1}(u)$, $\varphi_{2}(b)$ and $\varphi_{3}(z)$
$$ \begin{equation*} \begin{aligned} \, &\frac{\mathbf{E}[\varphi_1(W_n)\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau _{n}=n]}{\mathbf{E}[e^{\lambda S_{n}};\, \tau_{n}=n]} \\ &\qquad \to K_{1}\iiint \varphi_1(u)\varphi_2(b)\varphi_3(-z)\, \mathbf{P}_{z}^{+}(W_{\infty}\in du) \, \mathbf{P}^{-}(B_{\infty}\in db) U(z)e^{-\lambda z}\,dz \end{aligned} \end{equation*} \notag $$
as $n\to\infty$. Now let $\widehat{\varphi}_{1}(w)$, $w\in (-\infty,+\infty)$, be a continuous function such that $|\widehat{\varphi}_{1}(w)|\leqslant C$ for all $w\in (-\infty,+\infty)$, and let
$$ \begin{equation*} W_{n}=W_n(F_{1},\dots,F_{\lfloor \delta n\rfloor})=\mathbf{E}[ \widehat{\varphi}_1(\widehat{W}_n)\mid F_{1},\dots,F_{\lfloor \delta n\rfloor}]. \end{equation*} \notag $$
It follows from (7.6) and the dominated convergence theorem that
$$ \begin{equation*} W_{n}\to W_{\infty}=\mathbf{E}[ \widehat{\varphi}_1(\widehat{W}_{\infty})\mid \mathcal{E}]\quad \mathbf{P}_{x}^{+}\text{-a.s.} \end{equation*} \notag $$

We introduce the function

$$ \begin{equation*} \varphi_{1}(u)= \begin{cases} 0 & \text{for } u<-2C, \\ -2C-u & \text{for } -2C\leqslant u<-C, \\ u & \text{for } -C\leqslant u\leqslant C, \\ 2C-u & \text{for } C<u<2C, \\ 0 &\text{for } 2C\leqslant u. \end{cases} \end{equation*} \notag $$
By the definition of $\varphi_{1}$ and since $|W_{\infty}|\leqslant C$, we have
$$ \begin{equation} \begin{aligned} \, \notag &\int_{-\infty}^{+\infty} \varphi_1(u)\, \mathbf{P}_{z}^{+}(W_{\infty}\in du) =\int_{-C}^C u\, \mathbf{P}_{z}^{+}(W_{\infty}\in du)=\mathbf{E}^+_z[W_{\infty}] \\ &\qquad=\mathbf{E}_z^+\bigl[\mathbf{E}[\widehat{\varphi}_1(\widehat{W}_{\infty}) \mid \mathcal{E}]\bigr]=\mathbf{E}_z^+[\widehat{\varphi}_1(\widehat{W}_{\infty})] =\int_{-\infty}^{+\infty} \widehat{\varphi}_1(w)\mathbf{P}_{z}^{+}( \widehat{W}_{\infty}\in dw). \end{aligned} \end{equation} \tag{7.7} $$

Since the random variables $\widetilde{B}_n$, $S_n$ and $\tau_n$ depend only on the state of the environment and not on the sequence $Z_0,Z_1,\dots,Z_n$, it follows that

$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}[\widehat{\varphi}_1(\widehat{W}_n)\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n] \\ &\qquad=\mathbf{E}[\varphi_1(W_n)\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n]. \end{aligned} \end{equation*} \notag $$
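In more detail, the factors $\varphi_2(\widetilde{B}_n)$, $\varphi_3(S_n)$, $e^{\lambda S_{n}}$ and the indicator of $\{\tau_{n}=n\}$ are measurable with respect to the environment, and, given the environment, the conditional law of $Z_{0},\dots,Z_{\lfloor \delta n\rfloor}$ depends only on $F_{1},\dots,F_{\lfloor \delta n\rfloor}$; moreover, $|W_{n}|\leqslant C$, so that $\varphi_{1}(W_{n})=W_{n}$. Hence the equality above is just the tower property:
$$ \begin{equation*} \begin{aligned} \, &\mathbf{E}[\widehat{\varphi}_1(\widehat{W}_n)\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n] \\ &\qquad=\mathbf{E}\bigl[\mathbf{E}[ \widehat{\varphi}_1(\widehat{W}_n)\mid F_{1},\dots,F_{\lfloor \delta n\rfloor}] \,\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n\bigr] \\ &\qquad=\mathbf{E}[W_n\varphi_2(\widetilde{B}_n)\varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n]. \end{aligned} \end{equation*} \notag $$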
This equality and (7.7) show that for any continuous bounded functions $\widehat{\varphi}_{1}(w)$, $\varphi_{2}(b)$ and $\varphi_{3}(z)$
$$ \begin{equation*} \begin{aligned} \, &\frac{\mathbf{E}[\widehat{\varphi}_1(\widehat{W}_n)\varphi_2(\widetilde{B}_n) \varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n]}{\mathbf{E}[e^{\lambda S_{n}};\, \tau_{n}=n]} \\ &\qquad=\frac{\mathbf{E}[\varphi_1(W_n)\varphi_2(\widetilde{B}_n) \varphi_3(S_n)e^{\lambda S_{n}};\, \tau_{n}=n]}{\mathbf{E}[e^{\lambda S_{n}};\, \tau_{n}=n]} \\ &\qquad\to K_{1}\iiint \varphi_1(u)\varphi_2(b)\varphi_3(-z)\, \mathbf{P}_{z}^{+}( W_{\infty}\in du)\, \mathbf{P}^{-}(B_{\infty}\in db)U(z)e^{-\lambda z}\,dz \\ &\qquad= K_{1}\iiint \widehat{\varphi}_1(w)\varphi_2(b)\varphi_3(-z)\, \mathbf{P}_{z}^{+}( \widehat{W}_{\infty}\in dw) \, \mathbf{P}^{-}(B_{\infty}\in db) U(z)e^{-\lambda z}\,dz \end{aligned} \end{equation*} \notag $$
as $n\to\infty$.

To complete the proof of the lemma it suffices to note that, on a compact set, each bounded continuous function $\Psi(w,b,z)$ of three variables can be approximated as accurately as needed by linear combinations of products of functions of the form $\widehat{\varphi}_{1}(w)\varphi_{2}(b)\varphi_{3}(z)$. It remains to observe that the elements of the sequences $\{\widehat{W}_{n},\,n\geqslant 1\}$ and $\{\widetilde{B}_{n},\,n\geqslant 1\}$ and the random variables $\widehat{W}_{\infty}$ and $B_{\infty}$ are bounded, while the integral $\displaystyle\int_J^{\infty}U(z)e^{-\lambda z}\,dz$ can be made arbitrarily small by the choice of the parameter $J$.

Lemma 5 is proved.

Proof of Theorem 5. First observe that
$$ \begin{equation*} \begin{aligned} \, \mathbf{P}(Z_{n}>0\mid \mathcal{E}) &\leqslant \min_{0\leqslant k\leqslant n}\mathbf{P}(Z_{k}>0\mid \mathcal{E}) \\ &\leqslant \min_{0\leqslant k\leqslant n}\mathbf{E}[ Z_{k}\mid \mathcal{E}] =\min_{0\leqslant k\leqslant n}e^{S_{k}}=e^{S_{\tau_{n}}}. \end{aligned} \end{equation*} \notag $$
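Here the second inequality is the first-moment (Markov) bound, and the final equality is the standard identity for branching processes in random environment started from $Z_{0}=1$, assuming, as the display above indicates, that the increments of the associated random walk are $X_{i}=\log F_{i}'(1)$:
$$ \begin{equation*} \mathbf{E}[Z_{k}\mid \mathcal{E}]=\prod_{i=1}^{k}F_{i}'(1)=e^{X_{1}+\dots +X_{k}}=e^{S_{k}}, \qquad 0\leqslant k\leqslant n. \end{equation*} \notag $$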
This estimate, in combination with the arguments used in the proof of Theorem 4, implies that
$$ \begin{equation} \begin{aligned} \, &\limsup_{J\to \infty}\limsup_{n\to \infty}\frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}\in [J+1,n-J])}{b_{n}} \notag \\ &\qquad=\limsup_{J\to \infty}\limsup_{n\to \infty}\frac{\mathbf{E}[ \mathbf{P}(Z_{n}>0\mid \mathcal{E});\, S_{n}\leqslant K,\, \tau_{n}\in [ J+1,n-J] ]}{b_{n}} \notag \\ &\qquad \leqslant \limsup_{J\to \infty}\limsup_{n\to \infty} \frac{\mathbf{E}[ e^{S_{\tau_{n}}};\, S_{n}\leqslant K,\, \tau_{n}\in [J+1,n-J] ]}{b_{n}}=0. \end{aligned} \end{equation} \tag{7.8} $$
Thus, it remains to analyse the case when
$$ \begin{equation*} \tau_{n}\in [ 0,J]\cup [ n-J+1,n]. \end{equation*} \notag $$
We fix sufficiently large positive integers $J$ and $N>|K|$ and, for $j\in [0,J]$, consider the chain of estimates
$$ \begin{equation*} \begin{aligned} \, &\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j, \, S_{j}<-N) \\ &\qquad \leqslant \mathbf{E}[ e^{S_{j}};\, S_{n}\leqslant K, \, \tau_{n}=j,\, S_{j}<-N] \\ &\qquad =\int_{-\infty}^{-N}e^{x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant K-x, \, L_{n-j}\geqslant 0) \\ &\qquad \leqslant \int_{-\infty}^{-N}e^{x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant -2x, \, L_{n-j}\geqslant 0) . \end{aligned} \end{equation*} \notag $$
In view of (3.3) and (3.5), for any $\varepsilon >0$ there is a sufficiently large $N=N(\varepsilon)$ such that
$$ \begin{equation*} \begin{aligned} \, &\int_{-\infty}^{-N}e^{x} \, \mathbf{P}(S_{j}\in dx,\tau_{j}=j) \, \mathbf{P}(S_{n-j}\leqslant -2x,\, L_{n-j}\geqslant 0) \\ &\qquad \leqslant Cb_{n-j}\int_{-\infty}^{-N}e^{x}\, \mathbf{P}(S_{j}\in dx,\, \tau_{j}=j) \int_{0}^{-2x}V(-w)\,dw \\ &\qquad \leqslant C_{1}b_{n-j}\int_{-\infty}^{-N}e^{x} (|x|^{2}+1) \, \mathbf{P}(S_{j}\in dx) \leqslant \varepsilon b_{n-j} \end{aligned} \end{equation*} \notag $$
for all $n-j\geqslant 0$. Thus,
$$ \begin{equation*} \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j,\, S_{j}<-N) \leqslant \varepsilon b_{n-j} \end{equation*} \notag $$
for all sufficiently large $N$.

Further, for any $Q\in \mathbb{N}$ and $N>K$, from (3.3) and (2.2) we obtain

$$ \begin{equation*} \begin{aligned} \, &\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j,\, S_{j}\geqslant -N,\, Z_{j}>Q) \\ &\qquad \leqslant \mathbf{P}(S_{n}\leqslant K,\, \tau_{n}=j,\, S_{j}\geqslant -N,\, Z_{j}>Q) \\ &\qquad =\int_{-N}^{K\wedge 0}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j,\, Z_{j}>Q) \, \mathbf{P}(S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0) \\ &\qquad \leqslant \int_{-N}^{0}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j,\, Z_{j}>Q) \, \mathbf{P}(S_{n-j}\leqslant 2N,\, L_{n-j}\geqslant 0) \\ &\qquad \leqslant \mathbf{P}(\tau_{j}=j,\, Z_{j}>Q) \, \mathbf{P}(S_{n-j}\leqslant 2N,\, L_{n-j}\geqslant 0) \\ &\qquad \leqslant Cb_{n-j}\, \mathbf{P}(\tau_{j}=j,\, Z_{j}>Q) \int_{0}^{2N}V(-w)\,dw \\ &\qquad \leqslant C_{1}b_{n-j}\, \mathbf{P}(\tau_{j}=j,\, Z_{j}>Q) N^{2}. \end{aligned} \end{equation*} \notag $$
Clearly,
$$ \begin{equation*} \lim_{Q\to \infty}\mathbf{P}(\tau_{j}=j,\, Z_{j}>Q) \leqslant \lim_{Q\to \infty}Q^{-1}\, \mathbf{E}[e^{S_{j}};\, \tau_{j}=j]=0. \end{equation*} \notag $$
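Indeed, since the event $\{\tau_{j}=j\}$ is determined by the environment, Markov's inequality and the equality $\mathbf{E}[Z_{j}\mid \mathcal{E}]=e^{S_{j}}$ used at the beginning of the proof give
$$ \begin{equation*} \mathbf{P}(\tau_{j}=j,\, Z_{j}>Q)\leqslant \frac{1}{Q}\,\mathbf{E}[Z_{j};\, \tau_{j}=j] =\frac{1}{Q}\,\mathbf{E}\bigl[\mathbf{E}[Z_{j}\mid \mathcal{E}];\, \tau_{j}=j\bigr] =\frac{1}{Q}\,\mathbf{E}[e^{S_{j}};\, \tau_{j}=j]. \end{equation*} \notag $$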
Therefore, we can select a sufficiently large $Q$ such that
$$ \begin{equation*} \mathbf{P}(\tau_{j}=j,\, Z_{j}>Q) N^{2}\leqslant \varepsilon, \end{equation*} \notag $$
and obtain the estimate
$$ \begin{equation*} \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j,\, Z_{j}>Q) \leqslant C_{1}\varepsilon b_{n-j}. \end{equation*} \notag $$

Now consider the probability

$$ \begin{equation*} \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j,\, S_{j}\geqslant -N,\, Z_{j}\leqslant Q). \end{equation*} \notag $$
We introduce the event
$$ \begin{equation*} A_{\mathrm{u.s}}:=\{ Z_{n}>0\text{ for all }n\geqslant 0\} \end{equation*} \notag $$
and for $0\leqslant s\leqslant 1$ define the iterations
$$ \begin{equation*} F_{k,n}(s):=F_{k+1}(F_{k+2}(\dots F_{n}(s)\dots))\quad \text{if}\ 0\leqslant k<n\ \text{and}\ F_{n,n}(s):=s. \end{equation*} \notag $$
Since the limit
$$ \begin{equation*} \lim_{n\to \infty}\mathbf{P}(Z_{n}>0\mid \mathcal{E},\, Z_{0}=q)=\lim_{n\to \infty}(1-F_{0,n}^{q}(0))=:1-F_{0,\infty}^{q}(0) \end{equation*} \notag $$
exists $\mathbf{P}^{+}$-almost surely by the monotonicity of the extinction probability of branching processes, it follows from Lemma 4 that
$$ \begin{equation*} \mathbf{E}[ 1-F_{0,n}^{q}(0)\mid S_{n}\leqslant y_{n},\, L_{n}\geqslant 0] \to \mathbf{E}^{+}[ 1-F_{0,\infty}^{q}(0)] =\mathbf{P}_{q}^{+}(A_{\mathrm{u.s}}) \end{equation*} \notag $$
as $n\to\infty$ if $y_{n}=o(a_{n})$. In addition, $\mathbf{P}_{q}^{+}(A_{\mathrm{u.s}})>0$ for any $q=1,2,\dots$, according to Proposition 3.1 in [1].
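The monotonicity mentioned above is elementary: each $F_{k}$ is a probability generating function and therefore nondecreasing on $[0,1]$, and $F_{n+1}(0)\geqslant 0$, so that
$$ \begin{equation*} F_{0,n+1}(0)=F_{1}(F_{2}(\dots F_{n}(F_{n+1}(0))\dots)) \geqslant F_{1}(F_{2}(\dots F_{n}(0)\dots))=F_{0,n}(0); \end{equation*} \notag $$
thus the sequence $F_{0,n}(0)$, $n\geqslant 1$, is nondecreasing and bounded by $1$, and the limit above exists.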

Now Corollary 2 shows that for fixed $q\leqslant Q$ and $x\in [-N,K]$

$$ \begin{equation*} \begin{aligned} \, & \mathbf{P}(Z_{n-j}>0,\, S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0,\, Z_{0}=q) \\ &\qquad =\mathbf{E}[ 1-F_{0,n-j}^{q}(0);\, S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0] \\ &\qquad \sim \mathbf{P}_{q}^{+}(A_{\mathrm{u.s}}) \, \mathbf{P}(S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0) \\ &\qquad \sim g_{\alpha,\beta}(0)\mathbf{P}_{q}^{+}(A_{\mathrm{u.s}})b_{n-j}\int_{0}^{K-x}V(-w)\,dw \end{aligned} \end{equation*} \notag $$
as $n-j\to\infty $. Hence, using (3.3) once again and applying the dominated convergence theorem we conclude that
$$ \begin{equation*} \begin{aligned} \, &\frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j,\, S_{j}\geqslant -N,\, Z_{j}=q)}{b_{n-j}} \\ &\quad =\int_{-N}^{K\wedge 0}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j,\, Z_{j}=q) \, \frac{\mathbf{E}[ 1-F_{0,n-j}^{q}(0);\, S_{n-j}\leqslant K-x,\, L_{n-j}\geqslant 0]}{b_{n-j}} \\ &\quad \sim g_{\alpha,\beta}(0)\, \mathbf{P}_{q}^{+}(A_{\mathrm{u.s}}) \int_{-N}^{K\wedge 0}\mathbf{P}(S_{j}\in dx,\, \tau_{j}=j,\, Z_{j}=q) \int_{0}^{K-x}V(-w)\,dw \end{aligned} \end{equation*} \notag $$
as $n-j\to\infty$.

Combining all these estimates, letting $Q$ and $N$ tend to infinity and using (1.1), we see that for any fixed $j\in [0,J]$

$$ \begin{equation*} \begin{aligned} \, m_{j} &:=\lim_{n\to \infty}\frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j)}{b_{n}} \\ &=g_{\alpha,\beta}(0)\int_{-\infty}^{K\wedge 0}\mathbf{E}\bigl[ \mathbf{P}_{Z_{j}}^{+}(A_{\mathrm{u.s}});\, S_{j}\in dx,\, \tau_{j}=j\bigr] \int_{0}^{K-x}V(-w)\,dw . \end{aligned} \end{equation*} \notag $$
Thus,
$$ \begin{equation} G_{\mathrm{left}}(K):=\lim_{J\to \infty}\lim_{n\to \infty} \frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}\in [ 0,J])}{b_{n}}=\sum_{j=0}^{\infty}m_{j}. \end{equation} \tag{7.9} $$

Clearly,

$$ \begin{equation*} \begin{aligned} \, G_{\mathrm{left}}(K) &\geqslant m_{1} \\ &\geqslant g_{\alpha,\beta}(0)\, \mathbf{P}^{+}(A_{\mathrm{u.s}}\mid Z_{0}=1)\int_{-\infty}^{K\wedge 0}\mathbf{P}(S_{1}\in dx) \int_{0}^{K-x}V(-w)\,dw >0. \end{aligned} \end{equation*} \notag $$
The inequality $G_{\mathrm{left}}(K)<\infty$ follows from (6.1) and the estimate
$$ \begin{equation*} \begin{aligned} \, \mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}\in [ 0,J]) &=\mathbf{E}[ 1-F_{0,n}(0);\, S_{n}\leqslant K,\, \tau_{n}\in [ 0,J]] \\ &\leqslant \mathbf{E}[ e^{S_{\tau_{n}}};\, S_{n}\leqslant K,\, \tau_{n}\in [0,J]]=R(0,J). \end{aligned} \end{equation*} \notag $$

Now assume that $n-j=t\in [0,J]$. Using the independence of the tuples $F_{1},\dots,F_{j}$ and $F_{j+1},\dots,F_{n}$ we write

$$ \begin{equation*} \begin{aligned} \, &\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}=j)=\mathbf{E}[1-F_{0,n}(0);\, S_{n}\leqslant K,\, \tau_{n}=j] \\ &\quad =\mathbf{E}[ 1-F_{0,j}(F_{j,n}(0));\, S_{j}\leqslant K-(S_{n}-S_{j}),\, \tau_{n}=j] \\ &\quad =\int_{K}^{\infty}\int_{0}^{1}\mathbf{P}(F_{0,t}(0)\in ds,\, S_{t}\in dx,\, L_{t}\geqslant 0) \, \mathbf{E}[1-F_{0,j}(s);\, S_{j}\leqslant K-x,\, \tau_{j}=j] . \end{aligned} \end{equation*} \notag $$
Our aim is to investigate the asymptotic behaviour of the quantity
$$ \begin{equation*} \mathbf{E}[ 1-F_{0,j}(s);\, S_{j}\leqslant K-x,\, \tau_{j}=j] \end{equation*} \notag $$
as $j\to\infty$ for fixed $s\in [0,1]$ and $x\geqslant K$.

Clearly,

$$ \begin{equation*} \begin{aligned} \, \mathbf{E}[ 1-s^{Z_{j}}\mid \mathcal{E}] &=\mathbf{E}\bigl[ \mathbf{E}[ 1-s^{Z_{j}}\mid Z_{[j/2]},\,\mathcal{E}] \bigm|\mathcal{E}\bigr]=\mathbf{E}\bigl[ 1-(F_{[ j/2],j}(s)) ^{Z_{[j/2]}}\bigm|\mathcal{E}\bigr] \\ &=\mathbf{E}\bigl[ 1-(F_{[ j/2],j}(s))^{\exp\{ S_{[ j/2]}-S_{j}\} Z_{[j/2]} \exp \{ -S_{[j/2]}\} \exp \{ S_{j}\}}\bigm| \mathcal{E}\bigr]. \end{aligned} \end{equation*} \notag $$
For $w\geqslant 0$, $0\leqslant b\leqslant 1$ and $z\in \mathbb{R}$, setting $0^{0}=1$ we define the function
$$ \begin{equation*} \Psi_{K-x}(w,b,z)=\bigl(1-b^{w\exp \{ z\}}\bigr)e^{-z}I\{ z\leqslant K-x\}, \end{equation*} \notag $$
and extend $\Psi_{K-x}$ to the remaining values of $w$, $b$ and $z$ as a bounded smooth function; in doing so, points of discontinuity at $(0,0,z)$ and $(w,b,K-x)$ are unavoidable. Our aim is to apply Lemma 5 to $\Psi_{K-x}$; however, that lemma applies only to bounded continuous functions. We show that in the case of BPREs this difficulty can be bypassed.

With this notation in hand, we write

$$ \begin{equation} \begin{aligned} \, &\mathbf{E}[ 1-F_{0,j}(s);\, S_{j}\leqslant K-x,\, \tau_{j}=j] \nonumber \\ &\qquad=\mathbf{E}\bigl[ \Psi_{K-x}(\widehat{W}_{j},\widetilde{B}_{j}(s),S_{j})e^{S_{j}};\, \tau_{j}=j\bigr], \end{aligned} \end{equation} \tag{7.10} $$
where
$$ \begin{equation*} \widehat{W}_{j}:=Z_{[j/2]}e^{-S_{[ j/2]}} \quad\text{and}\quad \widetilde{B}_{j}(s):=(F_{[ j/2],j}(s)) ^{\exp \{ S_{[ j/2]}-S_{j}\}}. \end{equation*} \notag $$
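To verify (7.10) note that the exponents telescope:
$$ \begin{equation*} \bigl(\widetilde{B}_{j}(s)\bigr)^{\widehat{W}_{j}e^{S_{j}}} =(F_{[ j/2],j}(s))^{\exp \{ S_{[ j/2]}-S_{j}\}\, Z_{[j/2]}e^{-S_{[ j/2]}}\, e^{S_{j}}} =(F_{[ j/2],j}(s))^{Z_{[j/2]}}, \end{equation*} \notag $$
so that $\Psi_{K-x}(\widehat{W}_{j},\widetilde{B}_{j}(s),S_{j})\,e^{S_{j}} =\bigl(1-(F_{[ j/2],j}(s))^{Z_{[j/2]}}\bigr)I\{S_{j}\leqslant K-x\}$; taking expectations and using the conditional expectation identity displayed above (the events $\{S_{j}\leqslant K-x\}$ and $\{\tau_{j}=j\}$ are determined by the environment) yields (7.10).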

Since the sequence $\{\widehat{W}_{j},\,j\geqslant 1\}$ is a nonnegative martingale we have

$$ \begin{equation*} \widehat{W}_{j}=Z_{[j/2]}e^{-S_{[ j/2]}}\to \widehat{W}_{\infty} \quad \mathbf{P}_{z}^{+}\text{-a.s.} \end{equation*} \notag $$
as $j\to\infty $, where $\widehat{W}_{\infty}$ is a nonnegative random variable. Moreover, it follows from Proposition 3.1 in [1] that
$$ \begin{equation} \mathbf{P}_{z}^{+}(\widehat{W}_{\infty}>0) >0 \end{equation} \tag{7.11} $$
for any $z\geqslant 0$. (Estimate (7.11) was proved in [1] for $\mathbf{P}_{0}^{+}$ only. To see that the inequality is valid for all $z>0$ it suffices to note that the initial value of the associated random walk has no influence on the reproduction laws and to repeat verbatim the corresponding arguments from [1], using $S_{\ast}+z$ instead of $S_{\ast}$.)

Further, according to Lemma 3.2 in [2] the sequence

$$ \begin{equation*} B_{j}(s):=(F_{[ j/2],0}(s))^{\exp \{ -S_{[j/2]}\}}, \qquad j=1,2,\dots, \end{equation*} \notag $$
is nondecreasing and $B_{j}(s)\geqslant s^{\exp \{-S_{0}\}}$. Thus,
$$ \begin{equation*} B_{j}(s)\to B_{\infty}(s)\geqslant s^{\exp \{ -S_{0}\}}=s \quad \mathbf{P}^{-}\text{-a.s.} \end{equation*} \notag $$
as $j\to\infty$. In view of the equality
$$ \begin{equation*} \mathbf{P}(\widetilde{B}_{j}(s)\geqslant s)=\mathbf{P}(B_{j}(s)\geqslant s)=1 \end{equation*} \notag $$
the second argument of $\Psi_{K-x}(\widehat{W}_{j},\widetilde{B}_{j}(s),S_{j})$ is bounded away from zero with probability $1$.

Thus, aiming to apply Lemma 5, we can restrict $\Psi_{K-x}$ to the domain $\{(w,b,z)\colon b>0,\, z\neq K-x\}$, where it is continuous. Finally, the remaining discontinuity at $z=K-x$ has probability $0$ with respect to the measure

$$ \begin{equation*} \mu (dz) :=K_{1}e^{-z}U(z)\, I\{ z\geqslant 0\}\, dz, \end{equation*} \notag $$
where $\displaystyle K_{1}^{-1}:=\int_{0}^{\infty}e^{-z}U(z)\,dz$.

As a result, we can apply Lemma 5 to conclude that, as $j\to\infty$,

$$ \begin{equation} \frac{\mathbf{E}[ \Psi_{K-x}(\widehat{W}_{j},\widetilde{B}_{j}(s),S_{j})e^{S_{j}};\, \tau_{j}=j]}{\mathbf{E}[ e^{S_{j}};\, \tau_{j}=j]}\to h(s,K-x), \end{equation} \tag{7.12} $$
where
$$ \begin{equation*} \begin{aligned} \, &h(s,K-x) \\ &\qquad:=K_{1}\iiint \Psi_{K-x}(w,b,-z)\, \mathbf{P}_{z}^{+}(\widehat{W}_{\infty}\in dw) \, \mathbf{P}^{-}(B_{\infty}(s)\in db)\, e^{-z}U(z)\,dz \end{aligned} \end{equation*} \notag $$
is a bounded function.

It was shown in the proof of Lemma 3.4 in [2] that $B_{\infty}(s)<1$ $\mathbf{P}^{-}$-almost surely under Conditions B1 and B2. Altogether, this implies that the right-hand side of (7.12) is positive. Now, using the inequality

$$ \begin{equation*} \mathbf{E}[ 1-F_{0,j}(s);\, S_{j}\leqslant K-x,\, \tau_{j}=j] \leqslant \mathbf{E}[ e^{S_{j}};\, \tau_{j}=j] \end{equation*} \notag $$
and the dominated convergence theorem we conclude that
$$ \begin{equation} \begin{aligned} \, &\lim_{j\to \infty}\int_{K}^{\infty}\int_{0}^{1}\mathbf{P}(F_{0,t}(0)\in ds,\, S_{t}\in dx,\, L_{t}\geqslant 0)\, \frac{\mathbf{E}[1-F_{0,j}(s);\, S_{j}\leqslant K-x,\, \tau_{j}=j]}{\mathbf{E}[e^{S_{j}};\, \tau_{j}=j]} \notag \\ &\qquad =\int_{K}^{\infty}\int_{0}^{1}\mathbf{P}(F_{0,t}(0)\in ds,\, S_{t}\in dx,\, L_{t}\geqslant 0) h(s,K-x). \end{aligned} \end{equation} \tag{7.13} $$
According to (1.6),
$$ \begin{equation*} \mathbf{E}[ e^{S_{j}};\, \tau_{j}=j]=\mathbf{E}[e^{S_{j}};\, M_{j}<0] \sim g_{\alpha,\beta}(0)b_{j}\int_{0}^{\infty}e^{-w}U(w)\,dw \end{equation*} \notag $$
as $j\to\infty $. Using this fact and summing (7.13) over $t$ from $0$ to $\infty$ we obtain
$$ \begin{equation} \begin{aligned} \, &G_{\mathrm{right}}(K):=\lim_{J\to \infty}\lim_{n\to \infty}\frac{\mathbf{P}(Z_{n}>0;\, S_{n}\leqslant K,\, \tau_{n}\in [ n-J+1,n])}{b_{n}} \notag \\ &\qquad =g_{\alpha,\beta}(0)\int_{0}^{\infty}e^{-w}U(w)\,dw \notag \\ &\qquad\qquad \times \int_{K}^{\infty}\int_{0}^{1}\sum_{t=0}^{\infty} \mathbf{P}(F_{0,t}(0)\in ds,\, S_{t}\in dx,\, L_{t}\geqslant 0) h(s,K-x)>0. \end{aligned} \end{equation} \tag{7.14} $$
The inequality $G_{\mathrm{right}}(K)<\infty$ follows from (6.4) and the estimate
$$ \begin{equation*} \begin{aligned} \, &\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K,\, \tau_{n}\in [ n-J+1,n]) \\ &\qquad =\mathbf{E}[ 1-F_{0,n}(0);\, S_{n}\leqslant K,\, \tau_{n}\in [n-J+1,n]] \\ &\qquad \leqslant \mathbf{E}[ e^{S_{\tau_{n}}};\, S_{n}\leqslant K,\, \tau_{n}\in [ n-J+1,n]]=R(n-J+1,n). \end{aligned} \end{equation*} \notag $$

Combining (7.8) with (7.9) and (7.14) we obtain

$$ \begin{equation*} \lim_{n\to \infty}\frac{\mathbf{P}(Z_{n}>0,\, S_{n}\leqslant K)}{b_{n}} =G_{\mathrm{left}}(K)+G_{\mathrm{right}}(K), \end{equation*} \notag $$
as required.

Acknowledgement

The authors are grateful to the reviewer, whose constructive comments allowed us to improve the presentation of the results of the paper.


Bibliography

1. V. I. Afanasyev, J. Geiger, G. Kersting and V. A. Vatutin, “Criticality for branching processes in random environment”, Ann. Probab., 33:2 (2005), 645–673
2. V. I. Afanasyev, Ch. Böinghoff, G. Kersting and V. A. Vatutin, “Limit theorems for weakly subcritical branching processes in random environment”, J. Theoret. Probab., 25:3 (2012), 703–732
3. F. Caravenna and L. Chaumont, “An invariance principle for random walk bridges conditioned to stay positive”, Electron. J. Probab., 18 (2013), 60, 32 pp.
4. R. A. Doney, “Conditional limit theorems for asymptotically stable random walks”, Z. Wahrsch. Verw. Gebiete, 70:3 (1985), 351–360
5. R. Durrett, “Conditioned limit theorems for some null recurrent Markov processes”, Ann. Probab., 6:5 (1978), 798–828
6. R. A. Doney, “Local behaviour of first passage probabilities”, Probab. Theory Related Fields, 152:3–4 (2012), 559–588
7. W. Feller, An introduction to probability theory and its applications, v. 2, John Wiley & Sons, Inc., New York–London–Sydney, 1966, xviii+636 pp.
8. K. Hirano, “Determination of the limiting coefficient for exponential functionals of random walks with positive drift”, J. Math. Sci. Univ. Tokyo, 5:2 (1998), 299–332
9. G. Kersting and V. Vatutin, Discrete time branching processes in random environment, Math. Stat. Ser., John Wiley & Sons, London; ISTE, Hoboken, NJ, 2017, xiv+286 pp.
10. B. A. Rogozin, “The distribution of the first ladder moment and height and fluctuation of a random walk”, Theory Probab. Appl., 16:4 (1971), 575–595
11. Ya. G. Sinai, “On the distribution of the first positive sum for a sequence of independent random variables”, Theory Probab. Appl., 2:1 (1957), 122–129
12. V. A. Vatutin and E. E. Dyakonova, “Critical branching processes evolving in an unfavorable random environment”, Discrete Math. Appl., 34:3 (2024), 175–186
13. V. A. Vatutin and E. E. Dyakonova, “Population size of a critical branching process evolving in an unfavorable environment”, Theory Probab. Appl., 68:3 (2023), 411–430
14. V. A. Vatutin, C. Dong and E. E. Dyakonova, “Random walks conditioned to stay nonnegative and branching processes in an unfavourable environment”, Sb. Math., 214:11 (2023), 1501–1533
15. V. A. Vatutin and V. Wachtel, “Local probabilities for random walks conditioned to stay positive”, Probab. Theory Related Fields, 143:1–2 (2009), 177–217
