Paul Lévy’s construction of Brownian motion

Last updated: 2026-04-12

In 1923, Norbert Wiener demonstrated the existence of standard Brownian motion in his paper "Differential Space". Subsequently, Paul Lévy gave a beautiful proof of this fact using dyadic numbers and properties of Gaussian variables. This article elaborates on Lévy's proof, supporting it with visual explanations of the key concepts.
Definition (Brownian Motion)
A real-valued stochastic process $\{B(t) : t \geq 0\}$ with $B(0) = 0$ is called a standard Brownian motion if the following properties hold:
  • the process has independent increments, i.e., for all times $0 \leq t_1 < t_2 < \cdots < t_n$, the increments $B(t_n) - B(t_{n-1}), B(t_{n-1}) - B(t_{n-2}), \ldots, B(t_2) - B(t_1)$ are independent random variables,
  • for all $t \geq 0$ and $h > 0$, the increment $B(t + h) - B(t)$ is normally distributed with expectation zero and variance $h$,
  • almost surely, the function $t \mapsto B(t)$ is continuous.
Theorem (Wiener 1923)
Standard Brownian motion exists.

Proof of Wiener's Theorem

The following proof follows [MörtersPeres], pages 9–12.

Proof. We construct Brownian motion as a uniform limit of continuous functions; this ensures that it automatically has continuous paths.

Building Brownian Motion on the Dyadic Set

The idea is to construct Brownian motion on the set of dyadic numbers
\begin{equation*}\mathcal{D} = \bigcup_{n \in \mathbb{N}} \mathcal{D}_n \quad \textrm{and} \quad \mathcal{D}_n = \left\{ \frac{k}{2^n} : 0 \leq k \leq 2^n \right\},\end{equation*}
to interpolate the values on $\mathcal{D}$ linearly, and to check that the uniform limit of these continuous functions exists and is a Brownian motion.
To do this, let $(\Omega, \mathcal{A}, P)$ be a probability space on which a collection $\{Z_t : t \in \mathcal{D}\}$ of independent, standard normally distributed random variables can be defined.
Let $B(0) := 0$ and $B(1) := Z_1$. For each $n \in \mathbb{N}$ we define the random variables $B(d)$, $d \in \mathcal{D}_n$, such that
  • for all $r < s < t$ in $\mathcal{D}_n$, the random variable $B(t) - B(s)$ is normally distributed with mean zero and variance $t - s$, and is independent of $B(s) - B(r)$,
  • the vectors $(B(d) : d \in \mathcal{D}_n)$ and $(Z_t : t \in \mathcal{D} \setminus \mathcal{D}_n)$ are independent.
Note that we have already done this for $n = 0$, $\mathcal{D}_0 = \{0, 1\}$. Proceeding inductively, we may assume that we have succeeded in doing it for some $n - 1$. We then define $B(d)$ for $d \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}$ by
\begin{equation*}B(d) = \frac{B(d - 2^{-n}) + B(d + 2^{-n})}{2} + \frac{Z_d}{2^{(n+1)/2}}.\end{equation*}
Since $d \pm 2^{-n} \in \mathcal{D}_{n-1}$, the first summand is the linear interpolation of the values of $B$ at the neighbouring points of $d$ in $\mathcal{D}_{n-1}$. In particular, $(B(d) : d \in \mathcal{D}_n)$ depends only on $(Z_t : t \in \mathcal{D}_n)$, hence is independent of $\{Z_t : t \in \mathcal{D} \setminus \mathcal{D}_n\}$, and the second property is fulfilled.
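As an illustration, the inductive step can be run numerically. The following sketch (pure Python; the function name `refine` and the list representation of the grid values are choices made here, not part of the proof) computes the values of $B$ on $\mathcal{D}_n$ from those on $\mathcal{D}_{n-1}$:

```python
import random

def refine(B_prev, n, rng):
    """One induction step: from values on D_{n-1} to values on D_n.

    B_prev[k] holds B(k / 2**(n-1)).  Each new midpoint d in D_n
    that is not in D_{n-1} receives the average of its two
    neighbours plus an independent Z_d / 2**((n+1)/2).
    """
    B_new = []
    for left, right in zip(B_prev, B_prev[1:]):
        B_new.append(left)                # old dyadic point keeps its value
        B_new.append((left + right) / 2 + rng.gauss(0, 1) / 2 ** ((n + 1) / 2))
    B_new.append(B_prev[-1])              # right endpoint B(1)
    return B_new

rng = random.Random(0)
B = [0.0, rng.gauss(0, 1)]                # B(0) = 0, B(1) = Z_1
for n in range(1, 11):                    # values on D_10: 2**10 + 1 points
    B = refine(B, n, rng)
```

Plotting `B` against the grid `[k / 2**10 for k in range(len(B))]` shows a path that roughens with each refinement level.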
Now we need to verify that the successive increments $B(d) - B(d - 2^{-n})$ and $B(d + 2^{-n}) - B(d)$ for $d \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}$ are independent.
\begin{align*}B(d) - B(d - 2^{-n}) &= \frac{B(d + 2^{-n}) - B(d - 2^{-n})}{2} + \frac{Z_d}{2^{(n+1)/2}}, \\ B(d + 2^{-n}) - B(d) &= \frac{B(d + 2^{-n}) - B(d - 2^{-n})}{2} - \frac{Z_d}{2^{(n+1)/2}}.\end{align*}
By our induction assumption, $\frac{1}{2}[B(d+2^{-n}) - B(d-2^{-n})] \sim \mathcal{N}(0, 2^{-n-1})$; since it depends only on $(Z_t : t \in \mathcal{D}_{n-1})$, it is independent of $Z_d/2^{(n+1)/2}$, which is also $\mathcal{N}(0, 2^{-n-1})$-distributed. Hence their sum $B(d) - B(d - 2^{-n})$ and their difference $B(d + 2^{-n}) - B(d)$ are independent and normally distributed with mean zero and variance $2^{-n}$, because the sum and difference of two independent Gaussians with equal variance are independent. The same holds for every $d \in \mathcal{D}_n$. Any two increments over disjoint intervals are thus independent, since each can be written as a sum of independent increments over intervals of the form $(k 2^{-n}, (k+1) 2^{-n})$.
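A quick Monte Carlo sanity check of this sum/difference argument (an illustration under the stated distributions, not part of the proof; the sample size and seed are arbitrary):

```python
import random

# With n = 3: X = (B(d + 2^-n) - B(d - 2^-n)) / 2 and Y = Z_d / 2^((n+1)/2)
# are independent N(0, 2^(-n-1)) variables, so the sum X + Y and the
# difference X - Y should be (empirically) uncorrelated, each with
# variance 2^-n.
rng = random.Random(42)
n, N = 3, 200_000
sigma = 2 ** (-(n + 1) / 2)                 # standard deviation sqrt(2^(-n-1))
X = [rng.gauss(0, sigma) for _ in range(N)]
Y = [rng.gauss(0, sigma) for _ in range(N)]
left = [x + y for x, y in zip(X, Y)]        # plays B(d) - B(d - 2^-n)
right = [x - y for x, y in zip(X, Y)]       # plays B(d + 2^-n) - B(d)
mean_l, mean_r = sum(left) / N, sum(right) / N
var_left = sum((v - mean_l) ** 2 for v in left) / N        # near 2^-n = 0.125
cov = sum((a - mean_l) * (b - mean_r)
          for a, b in zip(left, right)) / N                # near 0
```

For Gaussian vectors, zero covariance is equivalent to independence, so this empirical check matches the claim.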
Formally, we can define the functions
\begin{equation*}F_0(t) = \begin{cases}Z_1 & \text{for } t = 1, \\ 0 & \text{for } t = 0, \\ \text{linear in between},\end{cases}\end{equation*}
and, for each $n \geq 1$,
\begin{equation*}F_n(t) = \begin{cases}2^{-(n+1)/2} Z_t & \text{for } t \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}, \\ 0 & \text{for } t \in \mathcal{D}_{n-1}, \\ \text{linear between consecutive points in } \mathcal{D}_n.\end{cases}\end{equation*}
We can then define the partial sums, which for a given $m$ incorporate all induction steps up to $m$:
\begin{equation*}B_t^m = \sum_{i=0}^{m} F_i(t).\end{equation*}

Uniform convergence of $B_t^m$

We need to prove that, with probability one, $(B_t^m)_{m \in \mathbb{N}}$ converges uniformly on $[0, 1]$. To do this, we start by estimating the successive differences:
\begin{equation*}\sup_{t \in [0,1]} | B_t^{m} - B_t^{m-1} | = \sup_{t \in [0,1]} | F_m(t) | = \| F_m \|_\infty \leq 2^{-(m+1)/2} \max \{ |Z_d| : d \in \mathcal{D}_m \}.\end{equation*}
On the other hand, we can bound $\mathbb{P}(|Z_d| \geq \lambda)$ using the following tail estimate for a centered normal distribution with variance $\sigma^2$:
\begin{equation*}\mathbb{P}(|Z_d| \geq \lambda) \leq \sqrt{\frac{2}{\pi}} \frac{\sigma}{\lambda} \exp\left(-\frac{\lambda^2}{2\sigma^2}\right),\end{equation*}
so that, with $\sigma = 1$ and a union bound over the $2^m + 1 \leq 2^{m+1}$ points of $\mathcal{D}_m$,
\begin{equation*}\mathbb{P} \left( 2^{-(m+1)/2} \max \{ |Z_d| : d \in \mathcal{D}_m \} > \frac{1}{m^2}\right) \leq 2^{m+1} \, \mathbb{P} \left( |Z_1| > \frac{2^{(m+1)/2}}{m^2} \right) \leq \sqrt{\frac{2}{\pi}} \, m^2 \, 2^{(m+1)/2} \exp\left(-\frac{2^{m}}{m^4}\right).\end{equation*}
The right-hand side is summable in $m$ (the doubly exponential factor dominates; apply, for example, the ratio test). By the Borel–Cantelli lemma, almost surely there exists $n_0 \in \mathbb{N}$ such that for all $m \geq n_0$
\begin{equation*}\sup_{t \in [0,1]} | B_t^{m} - B_t^{m-1} | \leq \frac{1}{m^2},\end{equation*}
and the uniform convergence follows from:
\begin{equation*}\sup_{t \in [0,1]} | B_t^{m+n} - B_t^{m-1} | \leq \sum^n_{i=0} \sup_{t \in [0,1]} | B_t^{m+i} - B_t^{m+i-1} | \leq \sum^n_{i=0} \frac{1}{(m+i)^2}.\end{equation*}
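Numerically, one can watch how fast the tail-bound terms die off (constant factors omitted; the helper `b` below is purely illustrative):

```python
import math

# Up to constants, the m-th term of the tail bound behaves like
#   b(m) = m^2 * 2^((m+1)/2) * exp(-2^m / m^4).
# The doubly exponential factor exp(-2^m / m^4) eventually crushes the
# geometric growth of 2^(m/2), so the series sum over m converges.
def b(m):
    return m ** 2 * 2 ** ((m + 1) / 2) * math.exp(-(2 ** m) / m ** 4)

partial_sum = sum(b(m) for m in range(1, 100))
# the terms grow at first, then collapse: b(30) is already negligible
late_term = b(30)
```

The terms peak around $m \approx 16$ and then fall off super-exponentially, which is exactly why Borel–Cantelli applies.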
Finally, the series
\begin{equation*}B(t) = \sum_{n=0}^{\infty} F_n(t)\end{equation*}
converges uniformly on $[0,1]$ almost surely. We denote the continuous limit by $\{B(t) : t \in [0,1]\}$.

Continuous extension of $B_t$ to the whole of $[0,1]$

It remains to check that the increments of this process have the right finite-dimensional distributions. This follows directly from the properties of $B$ on the dense set $\mathcal{D} \subset [0,1]$ and the continuity of the paths. Indeed, suppose that $t_1 < t_2 < \ldots < t_n$ are in $[0,1]$. We find $t_{k_i} \in \mathcal{D}$ such that $\lim_{k \to \infty} t_{k_i} = t_i$ and infer from the continuity of $B$ that, for $1 \leq i \leq n-1$,
\begin{equation*}B(t_{i+1}) - B(t_i) = \lim_{k \to \infty} (B(t_{k_{i+1}}) - B(t_{k_i})).\end{equation*}
As $\lim_{k \to \infty} \mathbb{E}[B(t_{k_{i+1}}) - B(t_{k_i})] = 0$ and
\begin{equation*}\lim_{k \to \infty} \text{Cov}(B(t_{k_{i+1}}) - B(t_{k_i}), B(t_{k_j}) - B(t_{k_{j-1}})) = \lim_{k \to \infty} 1_{\{j=i+1\}}(t_{k_{i+1}} - t_{k_i}) = 1_{\{j=i+1\}}(t_{i+1} - t_i),\end{equation*}
the increments $B(t_{i+1}) - B(t_i)$ are independent Gaussian random variables with mean $0$ and variance $t_{i+1} - t_i$, as required (a limit in distribution of Gaussian vectors is again Gaussian, with the limiting means and covariances).
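These limiting moments can also be checked empirically by simulating many paths via the dyadic refinement and estimating the variance and covariance of two successive increments (the helper `unit_bm`, the sample size, and the seed are illustrative choices, not part of the proof):

```python
import random

def unit_bm(levels, rng):
    """Sample B on the dyadic grid of step 2**-levels by Levy's refinement."""
    B = [0.0, rng.gauss(0, 1)]
    for n in range(1, levels + 1):
        new = []
        for left, right in zip(B, B[1:]):
            new.append(left)
            new.append((left + right) / 2 + rng.gauss(0, 1) / 2 ** ((n + 1) / 2))
        new.append(B[-1])
        B = new
    return B

rng = random.Random(7)
levels, N = 5, 20_000
q = 2 ** levels                                  # grid index of t = 1
inc1, inc2 = [], []
for _ in range(N):
    B = unit_bm(levels, rng)
    inc1.append(B[q // 4] - B[0])                # B(1/4) - B(0)
    inc2.append(B[q // 2] - B[q // 4])           # B(1/2) - B(1/4)
m1, m2 = sum(inc1) / N, sum(inc2) / N
var1 = sum((x - m1) ** 2 for x in inc1) / N      # should be near 1/4
cov12 = sum((x - m1) * (y - m2)
            for x, y in zip(inc1, inc2)) / N     # should be near 0
```

The estimates land close to the theoretical values $\operatorname{Var} = 1/4$ and $\operatorname{Cov} = 0$ for these disjoint increments.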

Extension of $B_t$ to $[0, \infty)$

We have thus constructed a continuous process $B: [0,1] \rightarrow \mathbb{R}$ with the same finite-dimensional distributions as Brownian motion. Take a sequence $B_0, B_1, \ldots$ of independent $C[0,1]$-valued random variables with the distribution of this process, and define $\{B(t) : t \geq 0\}$ by gluing together the parts, more precisely by
\begin{equation*}B(t) = B_{\lfloor t \rfloor}(t - \lfloor t \rfloor) + \sum_{\ell=0}^{\lfloor t \rfloor - 1} B_{\ell}(1), \quad \text{for all } t \geq 0.\end{equation*}
This defines a continuous random function $B: [0, \infty) \rightarrow \mathbb{R}$, and it follows easily from what we have shown so far that it is a standard Brownian motion. $\Box$
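The gluing formula can be sketched at dyadic grid points as follows (a minimal illustration; `unit_bm` samples a copy of $B$ on the dyadic grid of $[0,1]$ as in the construction, and `glue` implements the displayed formula):

```python
import random

def unit_bm(levels, rng):
    """Sample one copy of B on the dyadic grid of step 2**-levels."""
    B = [0.0, rng.gauss(0, 1)]
    for n in range(1, levels + 1):
        new = []
        for left, right in zip(B, B[1:]):
            new.append(left)
            new.append((left + right) / 2 + rng.gauss(0, 1) / 2 ** ((n + 1) / 2))
        new.append(B[-1])
        B = new
    return B

def glue(pieces):
    """B(t) = B_{floor(t)}(t - floor(t)) + sum of earlier endpoints B_l(1)."""
    path, offset = [], 0.0
    for piece in pieces:
        path.extend(offset + x for x in piece[:-1])  # shift by accumulated B_l(1)
        offset += piece[-1]
    path.append(offset)                              # final endpoint
    return path

rng = random.Random(1)
pieces = [unit_bm(8, rng) for _ in range(3)]  # three independent copies on [0,1]
B = glue(pieces)                              # one continuous path on [0, 3]
```

Because each shifted piece starts exactly where the previous one ended, the glued path is continuous at the integer times, matching the construction.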

References

[MörtersPeres] P. Mörters and Y. Peres, Brownian Motion, Cambridge University Press, 2010.