1.4. NOTES AND COMMENTS 23

where

I(λ) = λ log λ − λ + 1 for λ ≥ 0, and I(λ) = +∞ for λ < 0.
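The rate function above has the form of the Legendre transform of the log-moment generating function of a unit-rate Poisson process; assuming that setting (an assumption, since the statement of Exercise 1.4.2 is not reproduced here), a short check:

```latex
% For a unit-rate Poisson process, E e^{\theta N_t} = e^{t(e^{\theta}-1)},
% so the limiting log-moment generating function is \Lambda(\theta) = e^{\theta} - 1.
% Its Legendre transform, for \lambda \ge 0, is
I(\lambda) = \sup_{\theta \in \mathbb{R}} \bigl\{ \theta \lambda - (e^{\theta} - 1) \bigr\}
           = \lambda \log \lambda - \lambda + 1,
% the supremum being attained at \theta = \log \lambda
% (with the convention 0 \log 0 = 0 at \lambda = 0).
```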

In particular, compute the limit

lim_{t→∞} (1/t) log P{N_t/t ≥ λ}

for every λ > 0.
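Granting the LDP with the rate function I above, the limit reduces to minimizing I over [λ, ∞); a sketch, under the same Poisson assumption as before:

```latex
% I is convex, vanishes at \lambda = 1, and is increasing on [1,\infty), so
% \inf_{\mu \ge \lambda} I(\mu) = I(\lambda) for \lambda \ge 1 and = 0 for 0 < \lambda \le 1.
\lim_{t\to\infty} \frac{1}{t} \log P\{N_t/t \ge \lambda\}
  = -\inf_{\mu \ge \lambda} I(\mu)
  = \begin{cases} -(\lambda \log \lambda - \lambda + 1), & \lambda \ge 1,\\
                  0, & 0 < \lambda \le 1. \end{cases}
```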

Comment. Exercise 1.4.2 serves as a counterexample to an attempt to weaken the conditions assumed in Theorem 1.2.1 and Theorem 1.2.4. In this example, the LDP given by (1.2.1)-(1.2.2) and the LDP defined by (1.2.3) have different rate functions, and neither of them is strictly increasing on R+.

Exercise 1.4.3. Prove the upper bound of Varadhan's integral lemma given in part (2) of Theorem 1.1.6.

Section 1.2.

Much of the material in this section appears in recent research papers rather than in standard textbooks. Theorem 1.2.2 was essentially obtained in the paper by Bass, Chen and Rosen ([8]); Theorem 1.2.7 appeared in Chen ([27], [28]). Theorem 1.2.8 is due to König and Mörters ([114]). A weaker version of Theorem 1.2.8 is also established in [114]. We put it into the following exercise.

Exercise 1.4.4. Let Y ≥ 0 be a random variable such that

lim sup_{m→∞} (1/m) log [ (m!)^{−γ} E Y^m ] ≤ κ

for some γ > 0 and κ ∈ R. Prove directly that

lim sup_{t→∞} t^{−1/γ} log P{Y ≥ t} ≤ −γ e^{−κ/γ}.

Hint: You may use the Chebyshev inequality and the Stirling formula.
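One way the hint can be carried out (a sketch only, with the optimization over m done heuristically):

```latex
% Chebyshev: for every integer m \ge 1 and t > 0, the hypothesis gives
P\{Y \ge t\} \le t^{-m}\, E Y^m \le (m!)^{\gamma}\, e^{(\kappa + o(1)) m}\, t^{-m}.
% Stirling: \log m! = m \log m - m + O(\log m), hence
\log P\{Y \ge t\} \le \gamma m \log m - \gamma m + (\kappa + o(1)) m - m \log t.
% Optimizing the right-hand side over m gives m \approx t^{1/\gamma} e^{-\kappa/\gamma},
% for which the bound becomes -\gamma m = -\gamma e^{-\kappa/\gamma}\, t^{1/\gamma}\,(1 + o(1)),
% i.e.\ \limsup_{t\to\infty} t^{-1/\gamma} \log P\{Y \ge t\} \le -\gamma e^{-\kappa/\gamma}.
```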

We now consider the asymptotic behavior of the probability P{Y ≤ ε} as ε → 0+.

The study in this direction is often referred to as the small ball probability problem, since very often Y = ||X|| for some random variable X taking values in a Banach space. We refer to the survey paper by Li and Shao ([133]) for an overview of this area. Generally speaking, the large deviation principle and small ball probabilities deal with drastically different situations: one concerns the probability that the random variables take large values, the other the probability that they take small values. In the following exercise, we seek a small ball version of the Gärtner-Ellis theorem.