Between Bayesian and measure-theoretic approaches

I was wondering how a Bayesian statistician would approach the problem of defining a probability density function for a random variable.

In a measure-theoretic sense, if the distribution of the random variable is absolutely continuous w.r.t. the Lebesgue measure, we have the very convenient Radon-Nikodym theorem.
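For reference, the statement I have in mind: if the law $P_X$ of $X$ satisfies $P_X \ll \lambda$, with $\lambda$ the Lebesgue measure, then there is a measurable $f \geq 0$ such that
\begin{align}
P(X \in A) = \int_A f \, d\lambda \quad \text{for every Borel set } A,
\end{align}
and this $f = dP_X/d\lambda$ is the density.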

Jaynes, in "The Logic of Science", derives the density function by considering $P(X \leq a)$ and $P(X > a)$ and applying the basic rules of probability. He then argues that these probabilities should be given by a function $G$ of the variable $a$, and later derives that:
\begin{align}
P(a<X<b) = G(b) - G(a)
\end{align}

From there he says that we consider $G$ monotonic increasing and differentiable, so, using the fundamental theorem of calculus, he shows that:
\begin{align}
P(a<X<b) &= G(b) - G(a) \\
&= \int_a^b g(x)\,dx
\end{align}

He calls $g$ the density function.
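To make this concrete (my own illustration, not from Jaynes): for an exponential variable with rate $\lambda > 0$,
\begin{align}
G(x) = 1 - e^{-\lambda x} \ (x \geq 0), \qquad g(x) = G'(x) = \lambda e^{-\lambda x},
\end{align}
and indeed $P(a<X<b) = G(b) - G(a) = \int_a^b \lambda e^{-\lambda x}\,dx = e^{-\lambda a} - e^{-\lambda b}$.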
This seems more restrictive than the measure-theoretic approach (absolute continuity vs. differentiability; see the example after the questions below), so my questions are:

1) How does a Bayesian statistician treat this case?
2) Or is the measure-theoretic approach simply broader?
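To illustrate the gap I have in mind: the uniform distribution on $[0,1]$ has CDF
\begin{align}
G(x) = \begin{cases} 0 & x < 0, \\ x & 0 \leq x \leq 1, \\ 1 & x > 1, \end{cases}
\end{align}
which is absolutely continuous but not differentiable at $x = 0$ and $x = 1$. Radon-Nikodym still gives a density (defined almost everywhere), whereas Jaynes's argument as stated assumes differentiability.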

Also, if you have any other derivation from a Bayesian point of view, I would be glad to hear/read about it.

One last question:

3) Can measure theory with the Lebesgue integral be used in conjunction with Bayesian statistics? Measure theory requires a lot of setup (sample space, measures, measurable spaces, etc.), so it is intimidating to start with a plausibility function that respects Cox's axioms and make it fit the measure-theoretic framework. For example, it would be convenient to define:
\begin{align}
P(a<X<b) = \int_a^b g(x)\,dx
\end{align}

independently of whether $X$ is discrete or continuous.
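A sketch of what I mean, assuming $g$ is the Radon-Nikodym derivative of the law of $X$ with respect to a reference measure $\mu$:
\begin{align}
P(a<X<b) = \int_{(a,b)} g \, d\mu =
\begin{cases}
\int_a^b g(x)\,dx & \text{if } \mu \text{ is Lebesgue measure}, \\
\sum_{a<x<b} g(x) & \text{if } \mu \text{ is counting measure},
\end{cases}
\end{align}
so the same formula would cover both cases.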

Thanks for any input!
