Using martingale methods, we provide bounds for the entropy of a probability measure on Rd with the right-hand side given in a certain integral form. As a corollary, in the one-dimensional case, we obtain a weighted log-Sobolev inequality.

A probability measure μ on $\mathbb{R}^d$ is said to satisfy the log-Sobolev inequality if for every smooth compactly supported function $f:\mathbb{R}^d\to\mathbb{R}$, the entropy of $f^2$, which by definition equals
\[ \operatorname{Ent}_\mu f^2=\int_{\mathbb{R}^d}f^2\log f^2\,d\mu-\Big(\int_{\mathbb{R}^d}f^2\,d\mu\Big)\log\Big(\int_{\mathbb{R}^d}f^2\,d\mu\Big), \]
admits the bound
\[ \operatorname{Ent}_\mu f^2\le 2c\int_{\mathbb{R}^d}\|\nabla f\|^2\,d\mu \tag{1} \]
with some constant c. The least possible constant c such that (1) holds for every compactly supported smooth f is called the log-Sobolev constant of the measure μ; the multiplier 2 in (1) is chosen so that the log-Sobolev constant of the standard Gaussian measure on $\mathbb{R}^d$ equals 1.
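As a quick numerical illustration (not part of the argument), the one-dimensional Gaussian case of (1) with c=1 can be checked by quadrature; the test function f(x) = 1 + 0.5 sin x is an arbitrary choice made here for the sketch.

```python
import numpy as np

# Sanity check: for the standard Gaussian measure on R the log-Sobolev
# constant equals 1, i.e.  Ent_mu f^2 <= 2 * E_mu |f'|^2.
# We verify this by quadrature for one smooth test function (an assumption
# for illustration): f(x) = 1 + 0.5*sin(x).
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * dx   # Gaussian weights

f = 1.0 + 0.5 * np.sin(x)
df = 0.5 * np.cos(x)

g = f**2
m = np.sum(w * g)                                  # E[f^2]
ent = np.sum(w * g * np.log(g)) - m * np.log(m)    # Ent_mu f^2
rhs = 2.0 * np.sum(w * df**2)                      # 2 E |f'|^2

assert ent >= 0.0
assert ent <= rhs
```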

The weighted log-Sobolev inequality has the form
\[ \operatorname{Ent}_\mu f^2\le 2\int_{\mathbb{R}^d}\|W\nabla f\|^2\,d\mu, \tag{2} \]
where the function W, taking values in $\mathbb{R}^{d\times d}$, has the meaning of a weight. Clearly, one can consider (1) as a particular case of (2) with constant weight W equal to c times the identity matrix. The problem of giving explicit conditions on μ that ensure the log-Sobolev inequality or its modifications is studied intensively in the literature, in particular because of the numerous connections between these inequalities and measure concentration, semigroup properties, and so on (see, e.g., [8]). Motivated by this general problem, in this paper we propose an approach that is based mainly on martingale methods and provides explicit bounds for the entropy with the right-hand side given in a certain integral form.

Our approach is motivated by the well-known fact that, on the path space of a Brownian motion, the log-Sobolev inequality has a simple proof based on fine martingale properties of that space (cf. [1, 6]). We observe that a part of this proof is, to a large extent, insensitive to the structure of the probability space; we formulate the respective martingale bound for the entropy in Section 1.1. To apply this general bound on a probability space of the form $(\mathbb{R}^d,\mu)$, one needs a proper martingale structure on it. In Section 2, we introduce such a structure in terms of a trimming filtration, defined by a family of trimmed regions in $\mathbb{R}^d$. This leads to an integral bound for the entropy on $(\mathbb{R}^d,\mu)$. In Section 3, we show how this bound can be used to obtain a weighted log-Sobolev inequality; this is done in the one-dimensional case d=1, although we expect that similar arguments should be effective in the multidimensional case as well; this is a subject of our further research.

A martingale bound for the entropy

Let (Ω,F,P) be a probability space with filtration F={Ft,t∈[0,1]}, which is right-continuous and complete, that is, every Ft contains all P-null sets from F. Let {Mt,t∈[0,1]} be a nonnegative square-integrable martingale w.r.t. F on this space, with càdlàg trajectories. We will use the following standard facts and notation (see [4]).

The martingale M admits a unique decomposition M=Mc+Md, where Mc is a continuous martingale and Md is a purely discontinuous martingale (see [4], Definition 9.20). Denote by ⟨Mc⟩ the quadratic variation of Mc, by
\[ [M]_t=\langle M^c\rangle_t+\sum_{s\le t}(M_s-M_{s-})^2 \]
the optional quadratic variation of M, and by ⟨M⟩ the predictable quadratic variation of M, that is, the projection of [M] onto the set of F-predictable processes. Alternatively, ⟨M⟩ is identified as the F-predictable process that appears in the Doob–Meyer decomposition of M2; that is, the F-predictable nondecreasing process A such that A0=0 and M2−A is a martingale.

For a nonnegative r.v. ξ, define its entropy by Entξ=Eξlogξ−Eξlog(Eξ) with the convention 0log0=0.

Theorem 1. Let the σ-algebra F0 be degenerate. Then for any nonnegative square-integrable martingale {Mt, t∈[0,1]} with càdlàg trajectories,
\[ \operatorname{Ent}M_1\le \mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d\langle M\rangle_t. \]

Proof. Consider first the case where
\[ c_1\le M_t\le c_2,\quad t\in[0,1], \tag{3} \]
with some positive constants c1,c2. Consider a smooth function Φ, bounded with all its derivatives, such that
\[ \Phi(x)=x\log x,\quad x\in[c_1,c_2]. \]
Then by the Itô formula (see [4], Theorem 12.19),
\[ \Phi(M_1)-\Phi(M_0)=\int_0^1\Phi'(M_{t-})\,dM_t+\frac12\int_0^1\Phi''(M_{t-})\,d\langle M^c\rangle_t+\sum_{0<t\le 1}\big[\Phi(M_t)-\Phi(M_{t-})-\Phi'(M_{t-})(M_t-M_{t-})\big]. \]
Clearly,
\[ \mathbf{E}\int_0^1\Phi'(M_{t-})\,dM_t=0. \]
Because F0 is assumed to be degenerate, M0=E[M1|F0]=EM1 a.s., and hence
\[ \operatorname{Ent}M_1=\mathbf{E}\big(\Phi(M_1)-\Phi(M_0)\big)=\frac12\mathbf{E}\int_0^1\Phi''(M_{t-})\,d\langle M^c\rangle_t+\mathbf{E}\sum_{0<t\le 1}\big[\Phi(M_t)-\Phi(M_{t-})-\Phi'(M_{t-})(M_t-M_{t-})\big]. \]
For x∈[c1,c2], we have Φ′(x)=1+logx and Φ″(x)=1/x. Observe that for any x,δ such that x,x+δ∈[c1,c2],
\[ \Phi(x+\delta)-\Phi(x)-\Phi'(x)\delta=(x+\delta)\log(x+\delta)-x\log x-\delta(1+\log x)=(x+\delta)\log\Big(1+\frac{\delta}{x}\Big)-\delta\le(x+\delta)\frac{\delta}{x}-\delta=\frac{\delta^2}{x}. \]
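The elementary inequality above, which only uses log(1+u) ≤ u, can be sanity-checked numerically (an illustration with randomly sampled x > 0 and x + δ > 0):

```python
import numpy as np

# Check of the elementary inequality used in the proof:
#   (x+d)*log(1 + d/x) - d <= d^2 / x   for x > 0, x + d > 0,
# which follows from log(1+u) <= u with u = d/x.
rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, 10000)
d = rng.uniform(-0.9, 1.0, 10000) * x          # ensures x + d > 0

lhs = (x + d) * np.log1p(d / x) - d
assert np.all(lhs <= d**2 / x + 1e-12)
```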
Then
\[ \operatorname{Ent}M_1\le\frac12\mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d\langle M^c\rangle_t+\mathbf{E}\sum_{0<t\le 1}\frac{(M_t-M_{t-})^2}{M_{t-}}\le\mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d[M]_t. \]
Because the process Mt−,t∈[0,1], is F-predictable, we have
\[ \mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d[M]_t=\mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d\langle M\rangle_t, \]
which completes the proof of the required bound under assumption (3).

The upper bound in this assumption can be removed using the following standard localization procedure. For N≥1, define
\[ \tau_N=\inf\{t\in[0,1]:M_t\ge N\} \]
with the convention inf∅=1. Then, repeating the above argument, we get
\[ \operatorname{Ent}M_{\tau_N}\le\mathbf{E}\int_0^{\tau_N}\frac{1}{M_{t-}}\,d\langle M\rangle_t\le\mathbf{E}\int_0^1\frac{1}{M_{t-}}\,d\langle M\rangle_t. \]
We have $M_{\tau_N}\to M_1$ as N→∞ a.s. On the other hand, $\mathbf{E}M_{\tau_N}^2\le\mathbf{E}M_1^2$, and
\[ x\log x=o\big(x^2\big),\quad x\to+\infty. \]
Hence, the family $\{M_{\tau_N}\log M_{\tau_N},\,N\ge 1\}$ is uniformly integrable, and
\[ \operatorname{Ent}M_{\tau_N}\to\operatorname{Ent}M_1,\quad N\to\infty. \]
Passing to the limit as N→∞, we obtain the required statement under the assumption $M_t\ge c_1>0$. Taking $M_t+(1/n)$ instead of $M_t$ and then passing to the limit as n→∞, we complete the proof of the theorem. □
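For intuition, the bound of Theorem 1 can be checked by full enumeration in a toy discrete-time setting (our own illustrative choice, not taken from the paper): for a positive martingale generated by two fair coin flips, d⟨M⟩/M_{t−} reduces to conditional squared jumps divided by M_{k−1}.

```python
import itertools, math

# Toy check of Theorem 1: M_0, M_1, M_2 is the Doob martingale of a positive
# terminal value g determined by two fair coin flips; the theorem's bound reads
#   Ent(M_2) <= E sum_k (M_k - M_{k-1})^2 / M_{k-1}.
g = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 0.5, (1, 1): 4.0}  # terminal values

def cond_mean(prefix):
    vals = [v for k, v in g.items() if k[:len(prefix)] == prefix]
    return sum(vals) / len(vals)

Eg = cond_mean(())                      # M_0 = E g (F_0 is degenerate)
ent = 0.0
bound = 0.0
for omega in itertools.product((0, 1), repeat=2):
    p = 0.25
    m0, m1, m2 = Eg, cond_mean(omega[:1]), g[omega]
    ent += p * m2 * math.log(m2)
    bound += p * ((m1 - m0)**2 / m0 + (m2 - m1)**2 / m1)
ent -= Eg * math.log(Eg)

assert ent >= 0.0
assert ent <= bound
```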

We further give two examples where the martingale bound for the entropy obtained above is applied. In these examples, it is more convenient to let t vary in [0,∞) instead of [0,1]; the respective version of Theorem 1 can be proved by literally the same argument.

Example 1 (Log-Sobolev inequality on a Brownian path space; [1, 6]).

Let Bt,t≥0, be a Wiener process on (Ω,F,P) such that F=σ(B). Let {Ft} be the natural filtration for B. Then for every ζ∈L2(Ω,P), the following martingale representation is available:
\[ \zeta=\mathbf{E}\zeta+\int_0^\infty\eta_s\,dB_s \tag{4} \]
with the Itô integral of a (unique) square-integrable {Ft}-adapted process {ηt} on the right-hand side (cf. [3]). Take ξ∈L4(Ω,P) and put ζ=ξ2 and
\[ M_t=\mathbf{E}[\zeta|\mathcal{F}_t]=\mathbf{E}\zeta+\int_0^t\eta_s\,dB_s,\quad t\ge 0. \]
Then the calculation from the proof of Theorem 1 gives the bound
\[ \operatorname{Ent}\xi^2\le\frac12\mathbf{E}\int_0^\infty\frac{1}{M_{t-}}\,d\langle M^c\rangle_t=\frac12\mathbf{E}\int_0^\infty\frac{\eta_t^2}{M_t}\,dt=\frac12\mathbf{E}\int_0^\infty\frac{\eta_t^2}{\mathbf{E}[\xi^2|\mathcal{F}_t]}\,dt. \]
Note the extra factor 1/2, which appears because the martingale M is continuous.

Next, recall the Ocone representation [10] for the process {ηt}, which is valid if ζ possesses the Malliavin derivative Dζ={Dtζ, t≥0}:
\[ \eta_t=\mathbf{E}[D_t\zeta|\mathcal{F}_t],\quad t\ge 0. \tag{5} \]
We omit the details concerning the Malliavin calculus, referring the reader, if necessary, to [9]. Because the Malliavin derivative possesses the chain rule, we have
\[ \eta_t^2=4\big(\mathbf{E}[\xi D_t\xi|\mathcal{F}_t]\big)^2\le 4\,\mathbf{E}[\xi^2|\mathcal{F}_t]\,\mathbf{E}\big[(D_t\xi)^2\big|\mathcal{F}_t\big], \]
and consequently the following log-Sobolev-type inequality holds:
\[ \operatorname{Ent}\xi^2\le 2\mathbf{E}\int_0^\infty\mathbf{E}\big[(D_t\xi)^2\big|\mathcal{F}_t\big]\,dt=2\mathbf{E}\|D\xi\|_H^2, \tag{7} \]
where Dξ is considered as a random element of H=L2(0,∞). By a proper approximation procedure, one can show that (7) holds for every ξ∈L2(Ω,P) that has a Malliavin derivative Dξ∈L2(Ω,P,H).
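The Clark–Ocone representation (4), (5) underlying this example can be illustrated by Monte Carlo for one concrete functional (our assumption for the sketch): for ζ = B_1², one has D_t ζ = 2B_1, hence η_t = E[2B_1|F_t] = 2B_t and ζ = 1 + ∫₀¹ 2B_t dB_t.

```python
import numpy as np

# Monte Carlo illustration of the Clark-Ocone representation for zeta = B_1^2:
#   B_1^2 = E[B_1^2] + int_0^1 2*B_t dB_t = 1 + 2 int_0^1 B_t dB_t.
# We discretize the Ito integral with the left endpoint and check the residual.
rng = np.random.default_rng(1)
n_paths, n_steps = 500, 2000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # B_{t-} values

zeta = B[:, -1]**2
ito_integral = np.sum(2.0 * B_left * dB, axis=1)
residual = zeta - (1.0 + ito_integral)

# The discretization error vanishes as n_steps grows; check it is small.
assert np.mean(residual**2) < 0.01
```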

The previous example is classical and well known. The next one apparently is new, which is a bit surprising because its main ingredients (the Malliavin calculus on the Poisson space and the respective analogue of the Clark–Ocone representation (4), (5)) are well known (cf. [2, 5]).

Example 2 (Log-Sobolev inequality on the Poisson path space).

Let Nt, t≥0, be a Poisson process with intensity λ, and F=σ(N). Denote by τk, k≥1, the moments of consecutive jumps of the process N, and by Ft=σ(Ns, s≤t), t≥0, the natural filtration of N. For any variable of the form
\[ \xi=F(\tau_1,\dots,\tau_n) \]
with some n≥1 and some compactly supported F∈C1(Rn), define the random element Dξ in H=L2(0,∞) by
\[ D\xi=-\sum_{k=1}^nF'_k(\tau_1,\dots,\tau_n)\,\mathbf{1}_{[0,\tau_k]}. \]
Denote by the same symbol D the closure of D, considered as an unbounded operator L2(Ω,P)→L2(Ω,P,H). Then the following analogue of the Clark–Ocone representation (4), (5) is available ([5]): for every ζ that possesses the stochastic derivative Dζ, the following martingale representation holds:
\[ \zeta=\mathbf{E}\zeta+\frac{1}{\lambda}\int_0^\infty\eta_s\,d\tilde N_s, \]
where $\tilde N_t=N_t-\lambda t$ denotes the compensated Poisson process corresponding to N, and {ηt} is the projection in L2(Ω,P,H) of Dζ onto the subspace generated by the {Ft}-predictable processes.

Proceeding in the same way as in the previous example, we obtain the following log-Sobolev-type inequality on the Poisson path space:
\[ \operatorname{Ent}\xi^2\le\frac{4}{\lambda^2}\mathbf{E}\|D\xi\|_H^2. \]
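For the simplest one-jump functional, this inequality can be checked by quadrature (an illustration under our own assumptions): for ξ = f(τ₁) with τ₁ ~ Exp(λ), the definition above gives Dξ = −f′(τ₁)1_[0,τ₁], so ‖Dξ‖²_H = f′(τ₁)² τ₁; we take λ = 1 and f(t) = t e^{−t}.

```python
import numpy as np

# Check of  Ent xi^2 <= (4/lambda^2) E||D xi||_H^2  for xi = f(tau_1),
# tau_1 ~ Exp(lambda), where ||D xi||_H^2 = f'(tau_1)^2 * tau_1.
# Test function f(t) = t*exp(-t) and lambda = 1 are illustrative assumptions.
lam = 1.0
t = np.linspace(1e-8, 60.0, 600001)
dt = t[1] - t[0]
w = lam * np.exp(-lam * t) * dt          # density of tau_1 times dt

f = t * np.exp(-t)
df = (1.0 - t) * np.exp(-t)

g = f**2
m = np.sum(w * g)
ent = np.sum(w * g * np.log(g)) - m * np.log(m)
rhs = (4.0 / lam**2) * np.sum(w * df**2 * t)

assert 0.0 <= ent <= rhs
```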

Trimmed regions on $\mathbb{R}^d$ and associated integral bounds for the entropy

Let μ be a probability measure on Rd with Borel σ-algebra B(Rd). Our further aim is to apply the general martingale bound from Theorem 1 in the particular setting (Ω,F,P)=(Rd,B(Rd),μ). To this end, we first construct a filtration {Ft,t∈[0,1]}.

In what follows, we denote Nμ={A∈F:μ(A)=0} (the class of μ-null Borel sets).

Fix a family {Dt, t∈[0,1]} of closed subsets of $\mathbb{R}^d$ such that:

(i) Ds⊂Dt, s≤t;

(ii) D0∈Nμ, μ(Dt)<1 for t<1, and D1=$\mathbb{R}^d$;

(iii) for every t>0,
\[ D_t\setminus\Big(\bigcup_{s<t}D_s\Big)\in N_\mu, \]
and for every t<1,
\[ D_t=\bigcap_{s>t}D_s. \]

We call the sets Dt, t∈[0,1], trimmed regions, following the terminology frequently used in multivariate analysis (cf. [7]). Given the family {Dt}, we define the respective trimmed filtration {Ft} by the following convention. Denote Qt=Rd∖Dt. Then, by definition, a set A∈F belongs to Ft if either A∩Qt∈Nμ or Qt∖A∈Nμ.

By the construction, F={Ft} is complete. It is also clear that, by property (ii) of the family {Dt}, the σ-algebra F0 is degenerate and, by property (iii), the filtration F is continuous. Hence, we can apply Theorem 1.

Fix a Borel-measurable function g:Rd→R+ that is square-integrable w.r.t. μ. Consider it as a random variable on (Ω,F,P)=(Rd,B(Rd),μ) and define
gt=E[g|Ft],t∈[0,1].
Since the σ-algebra Ft possesses an explicit description, we can calculate every gt directly; namely, for t>0 and μ-a.a. x, we have
\[ g_t(x)=\begin{cases}g(x),&x\in D_t,\\ G_t,&x\in Q_t,\end{cases} \tag{8} \]
where we denote
\[ G_t=\frac{1}{\mu(Q_t)}\int_{Q_t}g(y)\,\mu(dy). \tag{9} \]
Note that μ(Qt)>0 for t<1 and the function G:[0,1)→R+ is continuous. In what follows, we consider the modification of the process {gt} defined by (8) for every x∈Rd. Its trajectories can be described as follows. Denote
\[ \tau(x)=\inf\{t:x\in D_t\}; \tag{10} \]
then by property (iii) of the family {Dt} we have τ(x)=min{t:x∈Dt}, and by property (ii) we have τ(x)<1 for all x∈Rd, with τ(x)=0 ⇔ x∈D0. Then, for a fixed x∈Rd, we have
\[ g_t(x)=g(x)\mathbf{1}_{t\ge\tau(x)}+G_t\mathbf{1}_{t<\tau(x)},\quad t\in[0,1], \]
which is a càdlàg function because G is continuous on [0,1).
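A concrete trimming filtration makes this construction tangible; the following sketch uses our own illustrative choices (Lebesgue measure on [0,1], nested intervals D_t = [1/2 − t/2, 1/2 + t/2], so τ(x) = |2x−1| and μ(Q_t) = 1 − t) and checks two martingale identities for the process g_t.

```python
import numpy as np

# Build g_t(x) = g(x) on D_t and g_t(x) = G_t on Q_t for a concrete trimming
# family on ([0,1], Lebesgue), and check: E[g_t] = E[g], and the average of
# g_t over Q_s equals G_s for s < t (tower property on the trimmed sigma-algebra).
x = np.linspace(0.0, 1.0, 400001)
dx = x[1] - x[0]
g = 1.0 + x**2                      # an arbitrary positive test function

def G(t):
    q = np.abs(2 * x - 1) > t       # indicator of Q_t
    return np.sum(g[q]) * dx / max(np.sum(q) * dx, 1e-12)

def g_t(t):
    q = np.abs(2 * x - 1) > t
    return np.where(q, G(t), g)

Eg = np.sum(g) * dx
for s, t in [(0.1, 0.5), (0.3, 0.9)]:
    gt = g_t(t)
    assert abs(np.sum(gt) * dx - Eg) < 1e-3           # E[g_t] = E[g]
    qs = np.abs(2 * x - 1) > s
    avg_on_Qs = np.sum(gt[qs]) * dx / (np.sum(qs) * dx)
    assert abs(avg_on_Qs - G(s)) < 1e-3               # E[g_t | F_s] = G_s on Q_s
```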

Theorem 2. Let g:Rd→R+ be a Borel-measurable function, square-integrable w.r.t. μ, and let {Dt} be a family of trimmed regions satisfying (i)–(iii). Then
\[ \operatorname{Ent}_\mu g\le\int_{\mathbb{R}^d}\frac{\big(g(x)-G_{\tau(x)}\big)^2}{G_{\tau(x)}}\,\mu(dx), \tag{11} \]
where the functions G and τ are defined by (9) and (10), respectively.

Proof. We have already verified the assumptions of Theorem 1: the filtration {Ft} is complete and right continuous, and the square-integrable martingale {gt} has càdlàg trajectories. Because g1=g a.s. and F0 is degenerate, by Theorem 1 we have the bound
\[ \operatorname{Ent}_\mu g\le\mathbf{E}\int_0^1\frac{1}{g_{t-}}\,d\langle g\rangle_t. \]
Hence, we only have to identify the integral on the right-hand side of this bound. Namely, our aim is to prove that
\[ \mathbf{E}\int_0^1\frac{1}{g_{t-}}\,d\langle g\rangle_t=\int_{\mathbb{R}^d}\frac{\big(g(x)-G_{\tau(x)}\big)^2}{G_{\tau(x)}}\,\mu(dx). \]
First, we observe the following.

Lemma 1. Let 0<s<t<1, and let α be a bounded Fs-measurable random variable. Then
\[ \mathbf{E}\big[\alpha\big(\langle g\rangle_t-\langle g\rangle_s\big)\big]=\int_{D_t\setminus D_s}\alpha(x)\big(g(x)-G_{\tau(x)}\big)^2\,\mu(dx). \]

Proof. By the definition of ⟨g⟩,
\[ \mathbf{E}\big[\alpha\big(\langle g\rangle_t-\langle g\rangle_s\big)\big]=\mathbf{E}\big[\alpha\big(g_t^2-g_s^2\big)\big]=\mathbf{E}\big[\alpha\big(\mathbf{E}\big(g_t^2\big|\mathcal{F}_s\big)-g_s^2\big)\big]. \]
We have
\[ g_s^2(x)=\begin{cases}g^2(x),&x\in D_s,\\ G_s^2,&x\in Q_s,\end{cases}\qquad g_t^2(x)=\begin{cases}g^2(x),&x\in D_t,\\ G_t^2,&x\in Q_t,\end{cases} \]
and applying formula (8) with g=g_t^2 and t=s, we get
\[ \mathbf{E}\big(g_t^2\big|\mathcal{F}_s\big)(x)-g_s^2(x)=\begin{cases}0,&x\in D_s,\\[1mm] \dfrac{H_{t,s}}{\mu(Q_s)},&x\in Q_s,\end{cases}\qquad H_{t,s}=\int_{D_t\setminus D_s}\big(g^2(x)-G_s^2\big)\,\mu(dx)+\int_{Q_t}\big(G_t^2-G_s^2\big)\,\mu(dx). \]
Because α is Fs-measurable, it is μ-a.s. constant on Qs. Denote this constant by A; then the previous calculation gives
\[ \mathbf{E}\big[\alpha\big(\langle g\rangle_t-\langle g\rangle_s\big)\big]=AH_{t,s}. \]
Write Ht,s in the form
\[ H_{t,s}=\int_{D_t\setminus D_s}g^2(x)\,\mu(dx)+\mu(Q_t)G_t^2-\mu(Q_s)G_s^2. \]
Denote
\[ \mu_t=\mu(Q_t),\qquad I_t=\int_{Q_t}g\,d\mu; \]
then
\[ \mu(Q_t)G_t^2=\mu_tG_t^2=\frac{I_t^2}{\mu_t}. \]
Observe that the functions μt, t∈[0,1], and It, t∈[0,1], are continuous functions of bounded variation, and μt>0 for t<1. Then
\[ \mu(Q_t)G_t^2-\mu(Q_s)G_s^2=\int_s^td\Big(\frac{I_v^2}{\mu_v}\Big)=\int_s^t\Big(-\frac{I_v^2}{\mu_v^2}\,d\mu_v+\frac{2I_v}{\mu_v}\,dI_v\Big)=\int_s^t\big(-G_v^2\,d\mu_v+2G_v\,dI_v\big). \]
It is easy to show that
\[ -\int_s^tG_v^2\,d\mu_v=\int_{D_t\setminus D_s}G_{\tau(x)}^2\,\mu(dx). \tag{12} \]
Indeed, because G is continuous on [0,1), the left-hand side integral can be approximated by the integral sum
\[ \sum_{k=1}^mG_{v_k}^2\big(\mu_{v_{k-1}}-\mu_{v_k}\big), \]
where s=v0<⋯<vm=t is a partition of [s,t]. This sum equals
\[ \sum_{k=1}^mG_{v_k}^2\,\mu\big(D_{v_k}\setminus D_{v_{k-1}}\big). \]
For x∈Dvk∖Dvk−1, we have τ(x)∈[vk−1,vk]. Hence, this sum equals
\[ \sum_{k=1}^m\int_{D_{v_k}\setminus D_{v_{k-1}}}G_{\tau(x)}^2\,\mu(dx)=\int_{D_t\setminus D_s}G_{\tau(x)}^2\,\mu(dx) \]
up to a residual term that is dominated by
\[ \sup_{u,v\in[s,t],\ |u-v|\le\max_k(v_k-v_{k-1})}\big|G_u^2-G_v^2\big| \]
and tends to zero as the mesh of the partition tends to zero. This proves (12). Similarly, we can show that
\[ \int_s^tG_v\,dI_v=-\int_{D_t\setminus D_s}G_{\tau(x)}g(x)\,\mu(dx). \]
We can summarize this calculation as follows:
\[ \mathbf{E}\big[\alpha\big(\langle g\rangle_t-\langle g\rangle_s\big)\big]=A\int_{D_t\setminus D_s}\big(g(x)-G_{\tau(x)}\big)^2\,\mu(dx). \]
Because α(x)=A for μ-a.a. x∉Ds, this completes the proof. □

Let us continue with the proof of (11). Assume first that g≥c with some c>0. Then gt≥c, and consequently the process 1/gt− is left-continuous and bounded. In addition, the function Gt=It/μt is bounded on every segment [0,T]⊂[0,1).

Fix T<1 and take a sequence {λn} of dyadic partitions of [0,T],
\[ \lambda_n=\big\{t_k^n,\ k=0,\dots,2^n\big\},\qquad t_k^n=\frac{Tk}{2^n}, \]
and define
\[ g_t^n=g_0\mathbf{1}_{t=0}+\sum_{k=1}^{2^n}g_{t_{k-1}^n}\mathbf{1}_{t\in(t_{k-1}^n,t_k^n]}. \]
For every fixed t>0, the value $g_t^n$ equals $g_{t_n}$ for some (dyadic) point tn<t, and tn→t− as n→∞. Hence,
\[ \frac{1}{g_t^n}\to\frac{1}{g_{t-}},\quad n\to\infty, \]
pointwise. In addition, because of the additional assumption g≥c, this sequence is bounded by 1/c. Hence, by the dominated convergence theorem,
\[ \mathbf{E}\int_0^T\frac{1}{g_{t-}}\,d\langle g\rangle_t=\lim_{n\to\infty}\mathbf{E}\sum_{k=1}^{2^n}\frac{1}{g_{t_{k-1}^n}}\big(\langle g\rangle_{t_k^n}-\langle g\rangle_{t_{k-1}^n}\big); \]
here we take into account that the point t=0 in the left-hand side integral is negligible because gt→Eg as t→0+ in L2, and consequently ⟨g⟩t→0 as t→0+ in L1. By Lemma 1,
\[ \mathbf{E}\sum_{k=1}^{2^n}\frac{1}{g_{t_{k-1}^n}}\big(\langle g\rangle_{t_k^n}-\langle g\rangle_{t_{k-1}^n}\big)=\mathbf{E}\sum_{k=1}^{2^n}\int_{D_{t_k^n}\setminus D_{t_{k-1}^n}}\frac{\big(g(x)-G_{\tau(x)}\big)^2}{G_{t_{k-1}^n}}\,\mu(dx); \]
recall that $g_{t_{k-1}^n}(x)=G_{t_{k-1}^n}$ for $x\notin D_{t_{k-1}^n}$. Next, for $x\in D_{t_k^n}\setminus D_{t_{k-1}^n}$, we have $|\tau(x)-t_{k-1}^n|\le 2^{-n}$. Because Gt, t∈[0,T], is uniformly continuous and separated from zero, and Gτ(x), x∈DT, is bounded, we obtain that
\[ \mathbf{E}\int_0^T\frac{1}{g_{t-}}\,d\langle g\rangle_t=\lim_{n\to\infty}\mathbf{E}\sum_{k=1}^{2^n}\int_{D_{t_k^n}\setminus D_{t_{k-1}^n}}\frac{\big(g(x)-G_{\tau(x)}\big)^2}{G_{t_{k-1}^n}}\,\mu(dx)=\int_{D_T}\frac{\big(g(x)-G_{\tau(x)}\big)^2}{G_{\tau(x)}}\,\mu(dx). \]
Taking T→1− and applying the monotone convergence theorem to both sides of the above identity, we get (11).

To remove the additional assumption g≥c, consider the family gtn=gt+1/n. Then ⟨gn⟩=⟨g⟩, gn(x)−Gτ(x)n=g(x)−Gτ(x), gt−n=gt−+(1/n),Gτ(x)n=Gτ(x)+(1/n). Hence, we can write (11) for gn, apply the monotone convergence theorem to both sides of this identity, and get (11) for g. □
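The integral bound (11) of Theorem 2 can be verified numerically in the concrete one-dimensional setting sketched earlier (our own illustrative assumptions: Lebesgue measure on [0,1], quantile-type trimming D_t = [1/2 − t/2, 1/2 + t/2], test function g(x) = 1 + x²):

```python
import numpy as np

# Check of Theorem 2:  Ent_mu g <= int (g - G_tau)^2 / G_tau dmu
# on ([0,1], Lebesgue) with tau(x) = |2x-1| and mu(Q_t) = 1 - t.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
g = 1.0 + x**2
tau = np.abs(2 * x - 1)

def G(t):                           # G_t = (1/mu(Q_t)) * int_{Q_t} g dmu
    a, b = (1 - t) / 2, (1 + t) / 2
    I = (a + a**3 / 3) + (1 - b + (1 - b**3) / 3)   # int of 1+x^2 over Q_t
    return I / (1 - t)

G_tau = G(np.clip(tau, 0.0, 1.0 - 1e-9))

Eg = np.sum(g) * dx
ent = np.sum(g * np.log(g)) * dx - Eg * np.log(Eg)
bound = np.sum((g - G_tau)**2 / G_tau) * dx

assert 0.0 <= ent <= bound
```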

One corollary: a weighted log-Sobolev inequality on $\mathbb{R}$

In this section, we show how the integral bound for the entropy established in Theorem 2 can be used to obtain weighted log-Sobolev inequalities. Consider a continuous probability measure μ on (R,B(R)) and denote by pμ the density of its absolutely continuous part. Fix a family of segments Dt=[at,bt], t∈[0,1), where a0=b0, the function a· is continuous and decreases to −∞ as t→1−, and the function b· is continuous and increases to +∞ as t→1−. Then the family
\[ D_t=[a_t,b_t],\quad t\in[0,1),\qquad D_1=\mathbb{R}, \]
satisfies the assumptions imposed above. Hence, Theorem 2 is applicable.

We call a function f:R→R symmetric w.r.t. the family {Dt} if
\[ f(a_t)=f(b_t),\quad t\in[0,1). \]
In the following proposition, we apply Theorem 2 to g=f2, where f is smooth and symmetric.

Proposition 1. Let f:R→R be a smooth function that is symmetric w.r.t. the family {Dt}. Then
\[ \operatorname{Ent}_\mu f^2\le 4\int_{\mathbb{R}}W(x)\big(f'(x)\big)^2\,\mu(dx), \]
where
\[ W(x)=V^2(x)\log\Big(\frac{1}{\mu_{\tau(x)}}\Big),\qquad V(x)=\begin{cases}\dfrac{\mu((-\infty,x))}{p_\mu(x)},&x\le a_0,\\[2mm] \dfrac{\mu((x,\infty))}{p_\mu(x)},&x>a_0.\end{cases} \]

Proof. Write
\[ g(x)-G_{\tau(x)}=\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\big(g(x)-g(y)\big)\,\mu(dy)=-\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\int_x^yg'(z)\,dz\,\mu(dy). \]
Let us analyze the expression on the right-hand side. Observe that Qτ(x) is now the union of the two intervals (−∞,aτ(x)) and (bτ(x),+∞). Denote
\[ Q_t^+=(b_t,\infty),\qquad Q_t^-=(-\infty,a_t),\qquad \mu_t^\pm=\mu\big(Q_t^\pm\big). \]
The point x equals either aτ(x) or bτ(x); hence, because g=f2 is symmetric,
\[ g(x)=g\big(a_{\tau(x)}\big)=g\big(b_{\tau(x)}\big). \]
Then we have
\[ \int_x^yg'(z)\,dz=\begin{cases}\displaystyle\int_{b_{\tau(x)}}^yg'(z)\,dz,&y\in Q_{\tau(x)}^+,\\[2mm] \displaystyle\int_{a_{\tau(x)}}^yg'(z)\,dz,&y\in Q_{\tau(x)}^-.\end{cases} \]
Consequently,
\[ \big|g(x)-G_{\tau(x)}\big|\le\frac{1}{\mu_{\tau(x)}}\Bigg[\int_{Q_{\tau(x)}^+}\int_{Q_{\tau(x)}^+,\ \tau(z)\le\tau(y)}\big|g'(z)\big|\,dz\,\mu(dy)+\int_{Q_{\tau(x)}^-}\int_{Q_{\tau(x)}^-,\ \tau(z)\le\tau(y)}\big|g'(z)\big|\,dz\,\mu(dy)\Bigg]. \]
Using Fubini’s theorem, we get
\[ \big|g(x)-G_{\tau(x)}\big|\le\frac{1}{\mu_{\tau(x)}}\Bigg[\int_{Q_{\tau(x)}^+}\mu_{\tau(z)}^+\big|g'(z)\big|\,dz+\int_{Q_{\tau(x)}^-}\mu_{\tau(z)}^-\big|g'(z)\big|\,dz\Bigg]\le\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}V(z)\big|g'(z)\big|\,\mu(dz). \]
Because g=f2 and hence g′=2ff′, by the Cauchy inequality we then have
\[ \big(g(x)-G_{\tau(x)}\big)^2\le 4\Bigg(\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\big(V(z)f'(z)\big)^2\,\mu(dz)\Bigg)\Bigg(\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\big(f(z)\big)^2\,\mu(dz)\Bigg)=4\Bigg(\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\big(V(z)f'(z)\big)^2\,\mu(dz)\Bigg)G_{\tau(x)}. \]
Observe that
\[ z\in Q_{\tau(x)}\ \Leftrightarrow\ \tau(z)>\tau(x)\ \Leftrightarrow\ x\in D_{\tau(z)}\setminus\big\{a_{\tau(z)},b_{\tau(z)}\big\}. \]
Hence, by Theorem 2 and Fubini’s theorem we have
\[ \operatorname{Ent}_\mu\big(f^2\big)\le 4\int_{\mathbb{R}}\Bigg(\frac{1}{\mu_{\tau(x)}}\int_{Q_{\tau(x)}}\big(V(z)f'(z)\big)^2\,\mu(dz)\Bigg)\mu(dx)=4\int_{\mathbb{R}}\Bigg(\int_{D_{\tau(z)}}\frac{\mu(dx)}{\mu_{\tau(x)}}\Bigg)\big(V(z)f'(z)\big)^2\,\mu(dz). \]
Similarly to the proof of (12), we can show that
\[ \int_{D_t}\frac{\mu(dx)}{\mu_{\tau(x)}}=-\log\mu_s\Big|_{s=0}^{s=t}=\log\Big(\frac{1}{\mu_t}\Big); \]
the last identity holds because μ0=1. This completes the proof. □
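The logarithmic identity used at the end of this proof can be checked numerically in the concrete setting used throughout our sketches (Lebesgue measure on [0,1] with D_t = [1/2 − t/2, 1/2 + t/2], so τ(x) = |2x−1| and μ_t = 1 − t; these choices are illustrative assumptions):

```python
import numpy as np

# Check of  int_{D_t} mu(dx)/mu_{tau(x)} = log(1/mu_t)
# on ([0,1], Lebesgue) with tau(x) = |2x-1| and mu_t = 1 - t.
x = np.linspace(0.0, 1.0, 2000001)
dx = x[1] - x[0]
tau = np.abs(2 * x - 1)

for t in (0.3, 0.6, 0.9):
    in_Dt = tau <= t
    lhs = np.sum(dx / (1.0 - tau[in_Dt]))
    assert abs(lhs - np.log(1.0 / (1.0 - t))) < 1e-3
```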

Next, we develop a symmetrization procedure in order to remove the restriction that f be symmetric. For any x≠a0, one endpoint of the segment Dτ(x) equals x; denote by s(x) the other endpoint. Denote also s(a0)=a0. Define the σ-algebra Fˆ of symmetric sets A∈F, that is, sets such that x∈A ⇔ s(x)∈A. For a function f∈L2(R,μ), consider its L2-symmetrization
\[ \hat f=\big(\mathbf{E}_\mu\big[f^2\big|\hat{\mathcal{F}}\big]\big)^{1/2}. \]
It can be seen easily that there exists a measurable function p:R→[0,1] such that, for μ-a.a. x∈R,
\[ \big(\hat f\big)^2(x)=p(x)f^2(x)+\big(1-p(x)\big)f^2\big(s(x)\big)=\mathbf{E}_{\nu_x}f^2, \]
where we denote
\[ \nu_x=p(x)\delta_x+\big(1-p(x)\big)\delta_{s(x)},\quad x\in\mathbb{R}. \]

We have
\[ \mathbf{E}_\mu f^2=\mathbf{E}_\mu\big(\hat f\big)^2 \]
and, consequently,
\[ \operatorname{Ent}_\mu f^2-\operatorname{Ent}_\mu\big(\hat f\big)^2=\mathbf{E}_\mu f^2\log f^2-\mathbf{E}_\mu\big(\hat f\big)^2\log\big(\hat f\big)^2=\mathbf{E}_\mu\Big(\mathbf{E}_\mu\big[f^2\log f^2-\big(\hat f\big)^2\log\big(\hat f\big)^2\big|\hat{\mathcal{F}}\big]\Big)=\int_{\mathbb{R}}\big(\operatorname{Ent}_{\nu_x}f^2\big)\,\mu(dx). \]

It is well known (cf. [8]) that for a Bernoulli measure ν=pδ1+qδ−1 (p+q=1), the following discrete analogue of the log-Sobolev inequality holds:
\[ \operatorname{Ent}_\nu f^2\le C_p(Df)^2,\qquad C_p=\begin{cases}\dfrac{pq(\log p-\log q)}{p-q},&p\ne q,\\[2mm] \dfrac12,&p=q,\end{cases} \]
where we denote Df=f(1)−f(−1). This yields the bound
\[ \operatorname{Ent}_\mu f^2-\operatorname{Ent}_\mu\big(\hat f\big)^2\le\int_{\mathbb{R}}C_{p(x)}\big(f(x)-f(s(x))\big)^2\,\mu(dx)=\int_{\mathbb{R}}C_{p(x)}\Bigg(\int_{D_{\tau(x)}}f'(z)\,dz\Bigg)^2\,\mu(dx). \]
By the Cauchy inequality,
\[ \Bigg(\int_{D_{\tau(x)}}f'(z)\,dz\Bigg)^2\le\Bigg(\int_{D_{\tau(x)}}\big(f'(z)\big)^2\frac{\mu_{\tau(z)}^{3/2}}{p_\mu^2(z)}\,\mu(dz)\Bigg)\Bigg(\int_{D_{\tau(x)}}\frac{\mu(dz)}{\mu_{\tau(z)}^{3/2}}\Bigg), \]
and, similarly to the proof of (12), we can show that
\[ \int_{D_{\tau(x)}}\frac{\mu(dz)}{\mu_{\tau(z)}^{3/2}}=2\big(\mu_{\tau(x)}^{-1/2}-1\big)<2\mu_{\tau(x)}^{-1/2}. \]
This yields the following bound for the difference Entμf2−Entμ(fˆ)2, formulated in terms of f′:
\[ \operatorname{Ent}_\mu f^2-\operatorname{Ent}_\mu\big(\hat f\big)^2\le 2\int_{\mathbb{R}}\big(f'(z)\big)^2U(z)\,\mu(dz),\qquad U(z)=\frac{\mu_{\tau(z)}^{3/2}}{p_\mu^2(z)}\int_{Q_{\tau(z)}}C_{p(x)}\frac{\mu(dx)}{\mu_{\tau(x)}^{1/2}}. \]
Note that Cp≤1 for any p∈[0,1]; hence, we have
\[ U(z)\le 2\Big(\frac{\mu_{\tau(z)}}{p_\mu(z)}\Big)^2. \]
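The two-point inequality and the constant C_p used above can be verified numerically (an illustrative random search over p and positive values f(1), f(−1)):

```python
import numpy as np

# Check of the two-point log-Sobolev inequality:
# for nu = p*delta_1 + q*delta_{-1},  Ent_nu f^2 <= C_p (f(1) - f(-1))^2,
# with C_p = p*q*(log p - log q)/(p - q) (and C_p = 1/2 for p = q).
rng = np.random.default_rng(2)

def C(p):
    q = 1.0 - p
    return 0.5 if abs(p - q) < 1e-12 else p * q * (np.log(p) - np.log(q)) / (p - q)

for _ in range(1000):
    p = rng.uniform(0.01, 0.99)
    a, b = rng.uniform(0.1, 5.0, 2)        # f(1), f(-1) > 0
    q = 1.0 - p
    m = p * a**2 + q * b**2
    ent = p * a**2 * np.log(a**2) + q * b**2 * np.log(b**2) - m * np.log(m)
    assert ent <= C(p) * (a - b)**2 + 1e-9
```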

Assuming that the bound from Proposition 1 is applicable to fˆ (which is yet to be verified because fˆ may fail to be smooth), we obtain the following inequality, valid without the symmetry assumption on f:
\[ \operatorname{Ent}_\mu f^2\le\int_{\mathbb{R}}\Big(4W(x)\big(\big(\hat f\big)'(x)\big)^2+2U(x)\big(f'(x)\big)^2\Big)\,\mu(dx). \tag{13} \]
The right-hand side of this inequality contains the derivative of fˆ and hence depends on the choice of the family of trimmed regions {Dt}. We further give a particular corollary, which appears when {Dt} is the family of quantile trimmed regions. In what follows, we assume that μ possesses a positive distribution density pμ and choose {Dt=[at,bt]} in the following way. Denote qv=Fμ−1(v), the quantile of μ of level v, and put
\[ a_t=q_{1/2-t/2},\qquad b_t=q_{1/2+t/2},\quad t\in[0,1). \]
In particular, a0=b0=m, the median of μ. Denote also Fˆμ=min(Fμ,1−Fμ); observe that now we have
\[ \hat F_\mu(x)=\frac12\mu_{\tau(x)}. \tag{14} \]

Corollary 1. Let μ be a probability measure on R with positive distribution density pμ. Then, for any absolutely continuous f, we have
\[ \operatorname{Ent}_\mu f^2\le\int_{\mathbb{R}}K(x)\big(f'(x)\big)^2\,\mu(dx),\qquad K(x)=8\Big(\frac{\hat F_\mu(x)}{p_\mu(x)}\Big)^2\Big(\log\frac{1}{2\hat F_\mu(x)}+1\Big). \]

Proof. First, observe that the L2-symmetrization of a function f now has the form
\[ \hat f(x)=\Big(\frac12\big(f^2(x)+f^2\big(s(x)\big)\big)\Big)^{1/2}. \]
This identity is evident for functions f of the form $\mathbf{1}_{(-\infty,F_\mu^{-1}(v))}$, v∈(0,1/2], and $\mathbf{1}_{[F_\mu^{-1}(v),\infty)}$, v∈[1/2,1), and then easily extends to general f.

Next, observe that
\[ s(x)=F_\mu^{-1}\big(1-F_\mu(x)\big), \tag{15} \]
and because Fμ is absolutely continuous and strictly increasing, s(x) is absolutely continuous as well. Then fˆ is absolutely continuous with
\[ \big(\hat f\big)'(x)=\frac{f(x)f'(x)+f\big(s(x)\big)f'\big(s(x)\big)s'(x)}{\sqrt{2\big(f^2(x)+f^2\big(s(x)\big)\big)}}; \]
here and below the derivatives are well defined for a.a. x. Using a standard localization/approximation procedure, one can show that Proposition 1 in fact applies to any absolutely continuous function. Hence, it is applicable to fˆ, and (13) holds.

We have
\[ \big(\big(\hat f\big)'(x)\big)^2\le\frac{\big(f(x)f'(x)\big)^2+\big(f\big(s(x)\big)f'\big(s(x)\big)s'(x)\big)^2}{f^2(x)+f^2\big(s(x)\big)}\le\big(f'(x)\big)^2+\big(f'\big(s(x)\big)s'(x)\big)^2. \]
The function W(x) in (13) can now be rewritten as
\[ W(x)=\Big(\frac{\hat F_\mu(x)}{p_\mu(x)}\Big)^2\log\frac{1}{2\hat F_\mu(x)}; \]
hence,
\[ \int_{\mathbb{R}}W(x)\big(\big(\hat f\big)'(x)\big)^2\,\mu(dx)\le\int_{\mathbb{R}}W(x)\big(f'(x)\big)^2\,\mu(dx)+\int_{\mathbb{R}}W(x)\big(f'\big(s(x)\big)\big)^2\big(s'(x)\big)^2\,\mu(dx). \]
Let us analyze the second integral on the right-hand side. By (15),
\[ s'(x)=-\frac{p_\mu(x)}{p_\mu\big(s(x)\big)}; \]
hence,
\[ \int_{\mathbb{R}}W(x)\big(f'\big(s(x)\big)\big)^2\big(s'(x)\big)^2\,\mu(dx)=\int_{\mathbb{R}}\big(f'\big(s(x)\big)\big)^2\Big(\frac{\hat F_\mu(x)}{p_\mu\big(s(x)\big)}\Big)^2\log\frac{1}{2\hat F_\mu(x)}\,p_\mu(x)\,dx. \]
Changing the variable y=s(x), so that x=s(y), F̂μ(x)=F̂μ(y), and pμ(x)dx=pμ(y)dy, we finally get
\[ \int_{\mathbb{R}}\big(f'\big(s(x)\big)\big)^2\Big(\frac{\hat F_\mu(x)}{p_\mu\big(s(x)\big)}\Big)^2\log\frac{1}{2\hat F_\mu(x)}\,p_\mu(x)\,dx=\int_{\mathbb{R}}\big(f'(y)\big)^2\Big(\frac{\hat F_\mu(y)}{p_\mu(y)}\Big)^2\log\frac{1}{2\hat F_\mu(y)}\,p_\mu(y)\,dy=\int_{\mathbb{R}}W(y)\big(f'(y)\big)^2\,\mu(dy), \]
and therefore
\[ \int_{\mathbb{R}}W(x)\big(\big(\hat f\big)'(x)\big)^2\,\mu(dx)\le 2\int_{\mathbb{R}}W(x)\big(f'(x)\big)^2\,\mu(dx). \]

On the other hand, by identity (14) we now have Cp(x)=1/2, and the function U(x) in (13) can be rewritten as
\[ U(x)=\Big(\frac{\mu_{\tau(x)}}{p_\mu(x)}\Big)^2=4\Big(\frac{\hat F_\mu(x)}{p_\mu(x)}\Big)^2, \]
which completes the proof of the statement. □
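The weighted inequality of Corollary 1 can be verified numerically for the standard Gaussian measure; the CDF is computed here by cumulative summation, and the test function f(x) = 1 + 0.5 sin x is an illustrative assumption.

```python
import numpy as np

# Check of Corollary 1 for the standard Gaussian: with Fhat = min(F, 1 - F),
#   Ent_mu f^2 <= int K (f')^2 dmu,
#   K = 8*(Fhat/p)^2*(log(1/(2*Fhat)) + 1).
x = np.linspace(-8.0, 8.0, 400001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
F = np.cumsum(p) * dx                   # numeric CDF
Fhat = np.clip(np.minimum(F, 1.0 - F), 1e-300, 0.5)
K = 8.0 * (Fhat / p)**2 * (np.log(1.0 / (2.0 * Fhat)) + 1.0)

f = 1.0 + 0.5 * np.sin(x)
df = 0.5 * np.cos(x)

g = f**2
m = np.sum(p * g) * dx
ent = np.sum(p * g * np.log(g)) * dx - m * np.log(m)
rhs = np.sum(p * K * df**2) * dx

assert 0.0 <= ent <= rhs
```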

References

[1] Capitaine, M., Hsu, E., Ledoux, M.: Martingale representation and a simple proof of logarithmic Sobolev inequalities on path spaces.
[2] Carlen, E., Pardoux, E.: Differential calculus and integration by parts on Poisson space.
[3] Clark, J.M.C.: The representation of functionals of Brownian motion by stochastic integrals.
[4] Elliott, R.J.:
[5] Elliott, R.J., Tsoi, A.H.: Integration by parts for Poisson processes.
[6] Gong, F.-Zh., Ma, Zh.-M.: Martingale representation and log-Sobolev inequality on loop space.
[7] Koshevoy, G.A., Mosler, K.: Zonoid trimming for multivariate distributions.
[8] Ledoux, M.: Concentration of measure and logarithmic Sobolev inequalities.
[9] Nualart, D.: Analysis in Wiener space and anticipating stochastic calculus. Lect. Notes Math., vol. 1690, pp. 123–227. Springer, Berlin, Heidelberg (1998). MR1668111. doi:10.1007/BFb0092538
[10] Ocone, D.: Malliavin's calculus and stochastic integral representation of functionals of diffusion processes.