
I’m reading Steven E. Shreve’s “Stochastic Calculus for Finance II”, and I find that I don’t really understand the concept of a “filtration”.

Yes, the definition of a filtration is straightforward: it is a family of $\sigma$-algebras, increasing in time. However, when it comes to the Martingale Representation Theorem and Girsanov’s Theorem below, I am lost on the difference between a filtration that is generated by the Brownian motion and one that is not.

First, **Theorem 5.3.1 (Martingale representation, one dimension)**:

Let $W(t)$, $0 \leq t \leq T$, be a Brownian motion on a probability space $(\Omega,\mathscr F, \mathbb P)$, and let $\mathscr F(t)$, $0 \leq t \leq T$, be the filtration generated by this Brownian motion. Let $M(t)$, $0 \leq t \leq T$, be a martingale with respect to this filtration (i.e., for every $t$, $M(t)$ is $\mathscr F(t)$-measurable and for $0 \leq s \leq t \leq T$, $\mathbb E [M(t) | \mathscr F(s)] = M(s)$).

Then there is an adapted process $\Gamma(u)$, $0 \leq u \leq T$, such that


$$M(t) = M(0) + \int_0^t \Gamma(u)\, dW(u), \quad 0 \leq t \leq T. \tag{5.3.1}$$

Then Shreve says:

*The assumption that the filtration in Theorem 5.3.1 is the one generated
by the Brownian motion is more restrictive than the assumption of Girsanov’s
Theorem, Theorem 5.2.3, in which the filtration can be larger than the one
generated by the Brownian motion*.

*If we include this extra restriction in Girsanov’s Theorem, then we obtain the following corollary. The first paragraph of this corollary is just a repeat of Girsanov’s Theorem; the second part contains the new assertion.*

(The bold part below, “the filtration generated by this Brownian motion”, is what I highlighted; it is the difference compared with the original Girsanov Theorem 5.2.3.)

**Corollary 5.3.2**. Let $W(t)$, $0 \leq t \leq T$, be a Brownian motion on a probability space $(\Omega,\mathscr F, \mathbb P)$, and let $\mathscr F(t)$, $0 \leq t \leq T$, be **the filtration generated by this Brownian motion**. Let $\Theta(t)$, $0 \leq t \leq T$, be an adapted process; define

$$Z(t) = \exp\left\{ -\int_0^t \Theta(u)\, dW(u) - \frac{1}{2} \int_0^t \Theta^2(u)\, du \right\},$$

$\widetilde W(t) = W(t)+ \int_0^t \Theta(u)\, du$, and assume that $\mathbb E \int_0^T \Theta^2(u)\, du < \infty$. Set $Z = Z(T)$. Then $\mathbb E Z = 1$, and under the probability measure $\widetilde{\mathbb P}$ given by

$$\widetilde{\mathbb P}(A) = \int_A Z(\omega)\, d\mathbb P(\omega), \quad \forall A \in \mathscr F, \tag{5.2.1}$$

the process $\widetilde W(t)$, $0 \leq t \leq T$, is a Brownian motion.

Now let $\widetilde M(t)$, $0 \leq t \leq T$, be a martingale under $\widetilde{\mathbb P}$. Then there is an adapted process $\widetilde \Gamma(u)$, $0 \leq u \leq T$, such that

$$\widetilde M(t) = \widetilde M(0) + \int_0^t \widetilde \Gamma(u)\, d\widetilde W(u), \quad 0 \leq t \leq T. \tag{5.3.2}$$

Shreve says: “*Corollary 5.3.2 is not a trivial consequence of the Martingale Representation Theorem, Theorem 5.3.1, with $\widetilde W(t)$ replacing $W(t)$, because the filtration $\mathscr F(t)$ in this corollary is generated by the process $W(t)$, not the $\widetilde{\mathbb P}$-Brownian motion $\widetilde W(t)$.*”

My problem is that I cannot visualize why this difference matters. If $\widetilde{\mathbb P}$ is defined in terms of $\mathbb P$, how different can the two situations be?

Is there any example that could explain why Shreve “makes a big fuss” here?
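To get a numerical feel for the corollary's setup, here is a minimal Monte Carlo sketch of the measure change (5.2.1), assuming numpy and a constant $\Theta(u) = \theta$ (my simplification, so the stochastic integral in $Z$ collapses to $\theta W_T$): it checks that $\mathbb E Z = 1$, that $\widetilde W_T$ has drift $\theta T$ under $\mathbb P$, and that reweighting by $Z$ removes the drift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption for illustration: constant Theta(u) = theta on [0, T].
T, theta, n_paths = 1.0, 0.7, 200_000

# Under P: W_T ~ N(0, T).
W_T = rng.normal(0.0, np.sqrt(T), n_paths)

# Density Z = exp(-theta*W_T - theta^2*T/2); with constant theta the
# stochastic integral in (the formula for) Z(t) reduces to theta*W_T.
Z = np.exp(-theta * W_T - 0.5 * theta**2 * T)

# Shifted process at time T: tilde_W_T = W_T + theta*T.
tilde_W_T = W_T + theta * T

print(Z.mean())                # E[Z]: should be close to 1
print(tilde_W_T.mean())        # drifts by theta*T = 0.7 under P
print((Z * tilde_W_T).mean())  # mean under tilde-P: should be close to 0
```

The last line is the change-of-measure trick in miniature: expectations under $\widetilde{\mathbb P}$ are $\mathbb P$-expectations weighted by $Z$, and under that weighting $\widetilde W_T$ is centered, consistent with $\widetilde W$ being a $\widetilde{\mathbb P}$-Brownian motion.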


You have $$\widetilde{W}_t=W_t+\int_0^t\Theta(u)\,du,$$ which in general is not a Brownian motion under $\mathbb P$, because it has a drift component.

But 5.3.1 states

$$M_t=M_0+\int_0^t \Gamma(u)\,dW_u,\tag{5.3.1}$$

which holds only for a Brownian motion $W$ (and a martingale $M_t$).

So one cannot trivially replace $W_t$ by $W_t+\int_0^t\Theta(u)\,du=\widetilde{W}_t$ in (5.3.2) as well by setting

$$\widetilde M_t = \widetilde M_0 + \int_0^t \widetilde \Gamma(u)\, d\widetilde W_u,\tag{5.3.2}$$

because $\widetilde W_t$ is in general not a Brownian motion under $\mathbb P$.

Equation (5.3.2) holds only under the special change of measure defined by $$Z_t = \exp\left\{ -\int_0^t \Theta(u)\, dW(u) - \frac{1}{2} \int_0^t \Theta^2(u)\, du \right\}.$$

Then $\widetilde M$ is a martingale under $\widetilde{\mathbb P}$, and $\widetilde W$ becomes a Brownian motion (the proof is not trivial).

But even so, the filtrations generated by $W_t$ and by $\widetilde{W}_t=W_t+\int_0^t\Theta(u)\,du$ are in general not the same.
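The drift point above can be checked numerically. A minimal sketch, assuming numpy and a constant $\Theta(u) = \theta$ (my simplifying assumption): under $\mathbb P$, sample paths of $\widetilde W$ drift by $\theta T$ while $W$ does not, yet the quadratic variation of both processes is $\approx T$, since adding an absolutely continuous drift does not change it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumption for illustration: constant Theta(u) = theta.
T, n_steps, n_paths = 1.0, 400, 10_000
dt = T / n_steps
theta = 0.5

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = dW.cumsum(axis=1)                                 # Brownian paths under P
tilde_W = W + theta * dt * np.arange(1, n_steps + 1)  # W_t + int_0^t theta du

mean_W = W[:, -1].mean()            # near 0
mean_tilde = tilde_W[:, -1].mean()  # near theta*T: the drift is visible

# Quadratic variation is unchanged by the drift: both sums are close to T.
qv_W = (np.diff(W, axis=1, prepend=0.0) ** 2).sum(axis=1).mean()
qv_tilde = (np.diff(tilde_W, axis=1, prepend=0.0) ** 2).sum(axis=1).mean()
print(mean_W, mean_tilde, qv_W, qv_tilde)
```

So under $\mathbb P$, $\widetilde W$ looks like a Brownian motion in its quadratic variation but fails the zero-drift (martingale) property, which is exactly why (5.3.1) cannot be applied to it directly.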

First of all, a filtration $( \mathscr{F}_t )_{t \geq 0 }$ is a family of $\sigma$-algebras, usually indexed by time $t$, that is increasing: for every $t > 0$, $\mathscr{F}_t$ is a $\sigma$-algebra, and $\mathscr{F}_s \subseteq \mathscr{F}_t$ for all $0\leq s \leq t$. The canonical example is the filtration generated by a process, say a Brownian motion $W$: the filtration $( \mathscr{F}_t )_{t \geq 0 }$ is such that $\mathscr{F}_0$ is the smallest $\sigma$-algebra with respect to which $W_0$ is measurable (that is, if $W_0$ is $\mathscr{G}$-measurable, then $\mathscr{F}_0 \subseteq \mathscr{G}$), and $\mathscr{F}_t$ is the smallest $\sigma$-algebra with respect to which all increments of $W$ up to time $t$ are measurable.

Having said that, the interpretation of a filtration is as a flow of information: as time progresses, you know at least as much as before. In particular, filtrations are needed to define the concept of a martingale.

We say $(M_t)_{t\geq0}$ is a martingale if the conditional expectation of the process at time $t$, given the information up to time $s \leq t$, is the process at time $s$; that is, the best we can say about the process at time $t$ with the information up to time $s < t$ is its value at time $s$. But which information, and how to define it formally? This is where the concept of a filtration comes into play. On a filtered probability space (a probability space equipped with a filtration) $( \Omega, \mathscr{F}, \mathbb{P}, (\mathscr{F}_t )_{t \geq 0 } )$, a process $( M_t )_{ t \geq 0 }$ adapted to the filtration $( \mathscr{F}_t )_{t\geq 0}$ is a martingale if each $M_t$ is integrable ($M_t \in L^1(\mathbb{P})$) and $$ \mathbb{E} \left[ M_t \mid \mathscr{F}_s \right] = M_s \quad \text{ for every } s \leq t. $$
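The defining identity $\mathbb{E}[M_t \mid \mathscr{F}_s] = M_s$ can be checked empirically on a toy example. A minimal sketch, assuming numpy: for a simple symmetric random walk $S_n$ (a discrete-time martingale with respect to its own filtration), conditioning on $\mathscr{F}_s$ amounts to grouping sample paths by their history up to time $s$; since the walk is Markov, grouping by the value $S_s$ suffices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple symmetric random walk: S_n = X_1 + ... + X_n with X_i = +/-1,
# a discrete-time martingale w.r.t. its own filtration.
n_paths, s, t = 200_000, 5, 10
steps = rng.choice([-1, 1], size=(n_paths, t))
S = steps.cumsum(axis=1)

# E[S_t | F_s] should equal S_s: group paths by the value of S_s and
# compare the conditional average of S_t with that value.
for k in (-3, -1, 1, 3):
    mask = S[:, s - 1] == k
    print(k, S[mask, t - 1].mean())  # conditional mean close to k
```

Each conditional average lands near $k$ itself: knowing the information at time $s$, the best prediction of the walk at time $t$ is its current value, which is the martingale property in action.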

In particular, a process might be a martingale with respect to one filtration and not with respect to another. Also, the filtration generated by a Brownian motion $W$ and the filtration generated by $$X_t = W_t + \int_0^t \theta_s\, ds$$ are not necessarily the same. I will give a famous example below.

I think the point Shreve wants to make is that the process $\widetilde M$ is a martingale on the filtered probability space $( \Omega, \mathscr{F},\widetilde{ \mathbb{P} }, (\mathscr{F}_t )_{t \geq 0 } )$ (hence with respect to the filtration generated by $W$), and yet it still admits a stochastic representation as in Theorem 5.3.1, but with $\widetilde W$. As you mentioned, if the filtrations generated by $W$ and $\widetilde W$ were the same, then the result would be trivial; but in general they are not the same, as the following example shows.

This is a famous example due to Itô, and it can be found in the second section of the last chapter of Protter’s book on stochastic integration. Consider a Brownian motion $W$ with its filtration $(\mathscr{F}_t)_{t\geq 0}$. Now, for every $t>0$, let $\mathscr{H}_t$ be the smallest $\sigma$-algebra such that $\mathscr{F}_t \subseteq \mathscr{H}_t$ and $W_1$ is $\mathscr{H}_t$-measurable. It is easy to see that $(\mathscr{H}_t)_{t \geq 0 }$ is a filtration different from $( \mathscr{F}_t )_{ t \geq 0 }$ (since $W$ is not a martingale with respect to $\mathscr{H}$). What Itô showed is that the process $$ \beta_t = W_t - \int_0^t \frac{W_1 - W_s}{1-s}\,ds, \quad t \in [0,1], $$ is a martingale with respect to the filtration $\mathscr{H}$, and in fact (by Lévy’s theorem) a Brownian motion. So there you have two quite different filtrations for processes related by a simple drift addition.
