Coco notes — Generalizations of Fourier analysis

When I first learned about Fourier series and integrals, I hated the subject
because it seemed like a collection of ad-hoc definitions, formally related
but very different. To get a wider view of the subject, it helped me to
realize that Fourier series and integrals are a particular case of not one,
but many different constructions. Thus, they can be generalized in widely
different directions, leading to differently flavored views of the original
theory.
$\newcommand{\R}{\mathbf{R}}$
$\newcommand{\Z}{\mathbf{Z}}$
$\newcommand{\Q}{\mathbf{Q}}$
$\newcommand{\C}{\mathbf{C}}$
$\newcommand{\U}{\mathbf{U}}$
$\newcommand{\T}{\mathbf{T}}$
$\newcommand{\ud}{\mathrm{d}}$
- First you have the four classical cases that you may learn in school:
  Fourier series of periodic functions, Fourier transforms of integrable
  functions, the discrete-time Fourier transform, and the discrete Fourier
  transform. These four classical cases are fundamental and you should learn
  their definitions and properties by heart.
- Then, you learn sampling theory and you see that some of the classical
  cases can be obtained from the others. For example, the discrete Fourier
  transform can be considered a particular case of the Fourier series of a
  periodic function. These relationships can be neatly organized in the
  so-called Fourier-Poisson cube.
- Later, you learn distribution theory, which provides a common framework
  for signals and their samples using Dirac combs. Thus each of the four
  classical cases arises as a particular case of the Fourier transform of
  tempered distributions on the real line.
- A very different generalization is given by Pontryagin duality. It starts
  by realizing that the domain of definition of each classical case always
  has the structure of a commutative group ($\R$, $\Z$, $S^1$ or $\Z/N\Z$).
  Pontryagin duality then provides a general construction for Fourier
  analysis on commutative groups, and the four classical cases are
  particular instances of it.
- By relaxing the condition of commutativity, you get non-commutative
  harmonic analysis. The case of a compact non-commutative group is
  described completely by the Peter-Weyl theorem, and the general
  non-compact non-commutative case is a huge problem in representation
  theory, of which much is known, especially when the group has some extra
  structure (semisimple, solvable).
- The next step is harmonic analysis on homogeneous spaces. It turns out
  that the group structure is not essential, and you can do almost
  everything just by having a group acting on your space, which need not
  itself be a group. For example, the sphere $S^2$ is not a group, but the
  group of $3D$ rotations acts on it, and this leads to spherical harmonics.
- Finally, there is spectral geometry, also called the spectral analysis of
  the Laplace-Beltrami operator. If your domain is just a potato (a compact
  Riemannian manifold), there is no group whatsoever acting on it, but you
  still have a Laplace-Beltrami operator; it has a discrete spectrum, and
  you can do the analogue of Fourier series on it. A large part of the
  classical results on Fourier series extends to this case, except
  everything related to convolution, which is defined fundamentally using
  the group structure.

So, what happens when you ask a mathematician "what is Fourier analysis?"
If they are a real analyst, they will say that Fourier analysis is a set of
examples in the study of tempered distributions.
If they are an algebraist, they will say that Fourier analysis is a very
particular case of one-dimensional representation theory.
If they are a geometer, they will say that Fourier analysis is a particular
case of spectral geometry for trivial flat manifolds.
Finally, if you ask a complex analyst, they will say that Fourier series are
just Taylor series evaluated on the unit circle.
And they will all be right.
1. The four classical cases
The classical cases of Fourier analysis are used to express an arbitrary
function $f(x)$ as a linear combination of sinusoidal functions of the form
$x\mapsto e^{i\xi x}$. There are four cases, depending on the domain where
$x$ belongs.
1.1. Fourier series
Any periodic function
$$
f:S^1\to\R
$$
can be expressed as a countable linear combination of sinusoidal waves. This
is called the Fourier series of $f$
$$
f(\theta) = \sum_{n\in\Z} a_n e^{in\theta}
$$
and the coefficients $a_n$ are computed as integrals of $f$
$$
a_n = \frac{1}{2\pi}\int_{S^1} f(\theta)\, e^{-in\theta}\,
\mathrm{d}\theta
$$
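As a quick numerical illustration of these two formulas, here is a minimal
sketch (assuming Python with numpy is available; the sawtooth example and the
sizes are arbitrary choices, not part of these notes). It approximates the
coefficients $a_n$ of $f(\theta)=\theta$ by a Riemann sum and compares a
partial sum of the series with $f$:
```python
import numpy as np

# Sawtooth f(theta) = theta on [0, 2*pi); integration by parts gives
# a_0 = pi and a_n = i/n for n != 0.
theta = np.linspace(0, 2 * np.pi, 8192, endpoint=False)
f = theta

def coefficient(n):
    # a_n = (1/2pi) * integral of f(theta) e^{-in theta} dtheta, as a Riemann sum
    return np.mean(f * np.exp(-1j * n * theta))

ns = np.arange(-50, 51)
a = np.array([coefficient(n) for n in ns])
print(np.allclose(a[ns == 7], 1j / 7, atol=1e-3))    # matches the exact value i/7

# Partial sum of the Fourier series; away from the jump at 0 it approaches f
partial = sum(an * np.exp(1j * n * theta) for n, an in zip(ns, a))
middle = (theta > 1) & (theta < 5)
print(np.max(np.abs(partial.real[middle] - f[middle])))  # small truncation error
```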
1.2. Fourier transform
An arbitrary (integrable) function
$$
f:\R\to\R
$$
can be expressed as a linear combination of sinusoidal waves. The
coefficients of this linear combination are called the Fourier integral
of $f$, also known as the Fourier transform or the characteristic function
of $f$, depending on the context. Thus, $f$ is represented as
$$
f(x) = \int_\R a(\xi)\, e^{i\xi x}\, \mathrm{d}\xi
$$
This is exactly analogous to the Fourier series above, but now the coefficients
$a$ of the linear combination are indexed by a continuous index
$\xi\in\mathbf{R}$ instead of a discrete index $n\in\mathbf{Z}$. The values
of $a(\xi)$ can be recovered by integrating the function $f$ again:
$$
a(\xi) = \frac{1}{2\pi}\int_\R f(x)\, e^{-i\xi x}\, \mathrm{d}x
$$
Notice that, even if their formulas look quite similar, the Fourier series
is not a particular case of the Fourier transform. For example, a periodic
function is never integrable over the real line unless it is identically
zero; thus, you cannot compute the Fourier transform of a periodic function.
1.3. Discrete Fourier transform
In the finite case, you can express any vector
\begin{equation}
(f_1, f_2, \ldots, f_N)
\end{equation}
as a linear combination of "oscillating" vectors:
\begin{equation}
f_k = \sum_l a_l\, e^{\frac{2\pi}{N}ikl}
\end{equation}
This is called the discrete Fourier transform.
The coefficients $a_l$ can be recovered by inverting the matrix $M_{kl} =
e^{\frac{2\pi}{N}ikl}$, which is unitary up to the factor $\sqrt{N}$. Thus
$$
a_l = \frac{1}{N}\sum_k f_k\, e^{-\frac{2\pi}{N}ikl}
$$
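These formulas are easy to check directly. A minimal numerical sketch
(assuming Python with numpy; the size $N=8$ and the random vector are
arbitrary choices) builds the matrix $M_{kl}=e^{\frac{2\pi}{N}ikl}$, checks
that $M/\sqrt{N}$ is unitary, and recovers the coefficients:
```python
import numpy as np

N = 8
k = np.arange(N)
M = np.exp(2j * np.pi * np.outer(k, k) / N)   # M[k, l] = e^{2 pi i k l / N}

# M / sqrt(N) is unitary, hence M^{-1} = M^* / N
print(np.allclose(M.conj().T @ M, N * np.eye(N)))

f = np.random.randn(N) + 1j * np.random.randn(N)
a = M.conj().T @ f / N            # a_l = (1/N) sum_k f_k e^{-2 pi i k l / N}
print(np.allclose(M @ a, f))      # synthesis recovers f exactly
print(np.allclose(a, np.fft.fft(f) / N))   # same as numpy's FFT, up to the 1/N factor
```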
1.4. Discrete-time Fourier transform
Finally, if you have a doubly-infinite sequence
\begin{equation}
\ldots,f_{-2},f_{-1},f_0,f_1,f_2,\ldots
\end{equation}
you can express it as a linear combination (integral) of sinusoidal functions
sampled at the integers, which is quite a thing:
\begin{equation}
f_n = \int_{S^1} a(\theta)\, e^{in\theta}\,
\mathrm{d}\theta
\end{equation}
The coefficients $a(\theta)$ of this infinite linear combination can be
recovered as a linear combination of all the values of $f$:
\begin{equation}
a(\theta) = \frac{1}{2\pi}\sum_n f_n\, e^{-in\theta}
\end{equation}
Notice that these two formulas are exactly the same as those for Fourier
series, but with the roles of $a$ and $f$ reversed.
This is an important symmetry.
2. Pontryagin duality
Pontryagin duality extracts the essence of the definitions of Fourier
series, Fourier integrals and discrete Fourier transforms. The main idea is
that we have a spatial domain $G$ and a frequency domain $G^*$.
Then, any function defined on the spatial domain
\begin{equation}
f:G\to\mathbf{R}
\end{equation}
can be expressed as a linear combination of certain functions $E$, indexed by
the frequencies
\begin{equation}
f(x) = \int_{G^*} a(\xi)\, E(x,\xi)\, \mathrm{d}\xi
\end{equation}
Here the coefficients $a$ depend on the function $f$, but the functions $E$
depend only on the group $G$; they are called the characters of $G$.
The coefficients $a$ can be found by computing integrals over the spatial
domain:
\begin{equation}
a(\xi) = \int_G f(x)\, \overline{E(x,\xi)}\, \mathrm{d}x
\end{equation}
where the bar denotes complex conjugation.
Notice that these formulas include Fourier series, Fourier integrals, the
DFT and the DTFT as particular cases, according to the following table.

|      | domain $G$ | frequency $G^*$ | analysis $\displaystyle\widehat{f}(\xi)=\int_G f(x)\,\overline{E(\xi,x)}\,\ud x$ | synthesis $\displaystyle f(x)=\int_{G^*} \widehat{f}(\xi)\,E(\xi,x)\,\ud\xi$ |
|------|------------|-----------------|----------|-----------|
| FS   | $S^1$  | $\Z$   | $\displaystyle f_n = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)e^{-in\theta}\,\ud\theta$ | $\displaystyle f(\theta)=\sum_{n\in\mathbf{Z}} f_n\, e^{in\theta}$ |
| FT   | $\R$   | $\R$   | $\displaystyle\widehat{f}(\xi)=\frac{1}{\sqrt{2\pi}}\int_\R f(x)e^{-i\xi x}\,\ud x$ | $\displaystyle f(x)=\frac{1}{\sqrt{2\pi}}\int_{\mathbf{R}} \widehat{f}(\xi)e^{i\xi x}\,\ud\xi$ |
| DFT  | $\Z_N$ | $\Z_N$ | $\displaystyle\widehat{f}_k=\frac{1}{N}\sum_{n=0}^{N-1}f_n\,e^{-2\pi ikn/N}$ | $\displaystyle f_n=\sum_{k=0}^{N-1}\widehat{f}_k\,e^{2\pi ikn/N}$ |
| DTFT | $\Z$   | $S^1$  | $\displaystyle \widehat{f}(\theta)=\sum_{n\in\mathbf{Z}} f_n e^{-in\theta}$ | $\displaystyle f_n = \frac{1}{2\pi}\int_0^{2\pi}\widehat{f}(\theta)e^{in\theta}\,\ud\theta$ |

2.1. Locally compact abelian groups
A topological group is a group together with a topology compatible with the
group operation. A morphism between two topological groups is a mapping
which is at the same time continuous and a group morphism.
Here we are interested in locally compact abelian groups (LCAG). We
will denote the group operation by $x+y$, and the inverse of a group element
$x$ by $-x$.
The canonical example of an LCAG is $\R^n$ with the usual topology and
the operation of vector addition. Another example of an LCAG is the
multiplicative group $\U$ of complex numbers of norm 1, which
topologically coincides with the unit circle $S^1$, and is isomorphic
to the additive group of real numbers modulo $2\pi$, called the
one-dimensional torus $\T=\R/2\pi\Z$. Other examples are any
finite abelian group with the discrete topology, or $\Z$, the
additive group of integers with the discrete topology.
The group $\U$ is very important in the following discussion. It may
be denoted multiplicatively (by considering its elements as complex numbers),
or additively (by considering its elements as angles). Both notations are
used henceforth, and they are linked by the relation
\[
e^{i\alpha}e^{i\beta}
=
e^{i(\alpha+\beta)}
\]
2.2. Characters and the dual group
Let $G$ be an LCAG. A character of $G$ is a morphism from $G$ to
$\U$. The set $G'$ of all characters of $G$ is a group (with the
operation of pointwise sum of mappings) and also a topological space (with the
topology of compact convergence). It turns out that this group is locally
compact, thus it is an LCAG. It is called the dual group of $G$. There is a
canonical morphism between $G$ and its bidual, and it is easy to see that this
morphism is injective. The Pontryagin duality theorem states that $G$ is
isomorphic to its bidual. Another result states that $G$ is compact if and
only if its dual is discrete.
For example, the dual group of $\R^n$ is itself. The integers
$\Z$ and the unit circle $\U$ are dual to each other.
The dual of any finite abelian group is isomorphic (though non-canonically) to
itself.
The action of a character $\xi\in G'$ on a group element $x\in G$ is
denoted by $E(\xi,x)$ or even $e^{i\xi x}$. In the latter case, the complex
conjugate of $e^{i\xi x}$ is denoted by $e^{-i\xi x}$. The exponential
notation is justified by the following properties, which follow from the
definitions:
- $E(\xi,x)$ is a unit complex number, thus it has the form
  $e^{i\theta}$ for some real number $\theta$
- $E(\xi,x+y) = E(\xi,x)E(\xi,y)$, by the definition of character
- $E(\xi + \eta,x) = E(\xi,x)E(\eta,x)$, by the definition of the dual
  group
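For the finite group $G=\Z_N$ everything can be made concrete: the characters
are the maps $x\mapsto e^{2\pi i\xi x/N}$ with $\xi\in\Z_N$. A minimal
numerical sketch (assuming numpy; $N$ and the chosen elements are arbitrary)
checks the three properties above:
```python
import numpy as np

N = 12

def E(xi, x):
    # character of Z_N with index xi, evaluated at the group element x
    return np.exp(2j * np.pi * xi * x / N)

xi, eta, x, y = 5, 7, 3, 10   # arbitrary elements of Z_12

print(np.isclose(abs(E(xi, x)), 1.0))                          # unit complex number
print(np.isclose(E(xi, (x + y) % N), E(xi, x) * E(xi, y)))     # character of a sum
print(np.isclose(E((xi + eta) % N, x), E(xi, x) * E(eta, x)))  # sum in the dual group
```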
2.3. Haar measures
Let $G$ be an LCAG. A non-vanishing measure on $G$ which is invariant under
translations is called a Haar measure. Haar's theorem states that there is a
single Haar measure modulo multiplication by positive constants. Another
result states that $G$ is compact if and only if its total Haar measure (any
one of them) is finite.
For example, the Lebesgue measure on $\R^n$ is a Haar measure. The
counting measure on a discrete group is a Haar measure.
Given $G$, we fix a single Haar measure and we can talk about the spaces
$L^p(G)$. The elements of this space are complex-valued functions such that
the $p$-th power of their modulus has a finite integral with respect to the
Haar measure. Notice that the set $L^p(G)$ does not depend on the particular
normalization factor chosen in the definition of the Haar measure.
2.4. Fourier transform
Now we can define a general notion of Fourier transform, for functions
belonging to the space $L^1(G)$. The Fourier
transform of a function
\begin{equation}
f:G\to\mathbf{C}
\end{equation}
is a function
\begin{equation}
\hat f:G'\to\mathbf{C}
\end{equation}
defined by
\begin{equation}
\hat f(\xi) = \int_G f(x)\, e^{-i\xi x}\,\mathrm{d}x
\end{equation}
Here $e^{-i\xi x}$ denotes the conjugate of the complex number
$e^{i\xi x}=E(\xi,x)$. The inverse transform of a function defined on $G'$ is
defined similarly, but without the conjugate:
\begin{equation}
\check f(x) = \int_{G'} f(\xi)\, e^{i\xi x}\,\mathrm{d}\xi
\end{equation}
Note that these definitions require choosing Haar measures on $G$
and $G'$ (this amounts to fixing two arbitrary constants).
2.5. Harmonic analysis on locally compact abelian groups
So far we have just given definitions: LCAG, characters, dual group, Haar
measure, and Fourier transform. Now it is time to recover the main results
of harmonic analysis.
The first result is the Fourier inversion theorem for $L^1(G)$, which
states that the inverse transform is actually the inverse, for a suitable
choice of scaling of the Haar measures on $G$ and $G'$. Such a pair of
measures is called harmonized, or dual to each other. In the following,
when we state a result involving integrals on $G$ and $G'$ we will always
assume that the Haar measures are harmonized.
The second result is the energy conservation theorem for $L^2(G)$,
which states that, when $f$ and $\hat f$ are square-integrable, we have
\begin{equation}
\|f\|_{L^2(G)}
=
\|\hat f\|_{L^2(G')}
\end{equation}
Particular cases of this theorem are the formulas of Parseval, Plancherel,
etc.
The energy conservation theorem is needed to extend the definition of the
Fourier transform to $L^2(G)$ by continuity.
The third result is the convolution theorem. First notice that the
group structure allows us to define the convolution of any two functions in
$L^1(G)$:
\begin{equation}
[f*g](x) = \int_G f(y)\,g(x-y)\,\mathrm{d}y
\end{equation}
Now, the convolution theorem says that the Fourier transform takes
convolution to pointwise multiplication
\begin{equation}
\widehat{f*g} = \hat f\, \hat g
\end{equation}
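For the finite group $\Z_N$, with the counting measure as Haar measure, the
convolution theorem can be verified directly. A minimal sketch assuming numpy
(here the analysis formula carries no $1/N$ factor, which matches numpy's fft
convention and keeps the identity $\widehat{f*g}=\hat f\,\hat g$ exact):
```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution on Z_N, using the counting measure as Haar measure
conv = np.array([sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)])

# numpy's fft computes hat_f(xi) = sum_x f(x) e^{-2 pi i xi x / N}
print(np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g)))
```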
There’s a lengthy checklist of outcomes, that may be discovered elsewhere. Allow us to point out
a final one. The twin group $G’$ is itself a LCAG, so it has a Fourier
rework in its personal proper. This mapping is the $L^2$ adjoint of the inverse
Fourier rework outlined from $G$.
Lastly, discover that within the case of finite teams all these outcomes are
trivial and so they quantity to elementary linear algebra. Within the steady case
they aren’t trivial, primarily as a result of we do not have an id aspect for
the convolution (e.g., the dirac delta operate), and to show the outcomes
one has to resort to successive approximations of the id.
The sequence of proofs sometimes begins by the convolution theorem,
which is used to show the conservation of power for capabilities that
belong to $L^1cap L^2$, then to increase by density the definition of
the Fourier rework to $L^2$ and at last to show the inversion
theorem. Besides the definition of the Haar measure and the
approximation of the id, that are explicit development, the
remainder of the proofs are similar to the corresponding proofs for the
case of Fourier transforms on the actual line. You simply need to examine
that each one the steps on the proof make sense in a bunch.
3. Sampling theory
Pontryagin duality gives a unified treatment of the four classical
cases of Fourier analysis: you are always doing exactly the same
thing, but in different groups. However, it does not say anything
about the direct relationships between them. For example, a Fourier
series where all but a finite number of the coefficients are zero can
be represented as a vector of length $N$. Does it have any
relationship with the discrete Fourier transform on $\Z_N$? The
answer is yes, and it is the main result of sampling theory.
Let us start with precisely this case. Suppose that we have a
periodic function $f(\theta)$ whose Fourier series is finite (this is
called a trigonometric polynomial). For
example,
$$
f(\theta)=\sum_{n=0}^{N-1} f_n\, e^{in\theta}
$$
Now, we can do three different things with this object. One, we can express
the coefficients $f_n$ as integrals of $f$:
$$
f_n = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)e^{-in\theta}\,\ud\theta
$$
Two, we can consider the vector of
coefficients $(f_0,\ldots,f_{N-1})$ and compute its inverse DFT
$$
\check{f}_k = \sum_{n=0}^{N-1} f_n\,e^{2\pi i nk/N}
$$
And three, just for fun, we can evaluate the function $f$
at $N$ points evenly spaced along its period
$$
f\left(\frac{2\pi\cdot 0}{N}\right),\
f\left(\frac{2\pi\cdot 1}{N}\right),\
f\left(\frac{2\pi\cdot 2}{N}\right),\
\ldots,\
f\left(\frac{2\pi(N-1)}{N}\right)
$$
These three operations are, a priori, unrelated. At least,
Pontryagin duality does not say anything about them: you are
working with different groups $S^1$ and $\Z_N$ that have nothing to
do with each other.
However, a number of very funny coincidences can be observed:
- The $k$-th sample $f\left(\frac{2\pi k}{N}\right)$ equals
  $$
  \sum_{n=0}^{N-1}f_n\,e^{2\pi ikn/N}
  $$
  which is exactly $\check{f}_k$.
- Thus, the vector of samples of the polynomial $f$ is the
  IDFT of the vector of coefficients.
- Correspondingly, the vector of $N$ coefficients of the
  polynomial $f$ is the DFT of the vector of $N$ uniform samples
  of $f$ between $0$ and $2\pi$.
- In other words, the whole Fourier series of $f$ can be
  obtained by evaluating the function $f$ at $N$ points.
- If you approximate the integral that computes $f_n$
  from $f$ by a sum of $N$ step functions obtained by
  sampling $f$, the computation is exact.

All these results lie at the core of sampling theory.
They provide a beautiful, analog interpretation of the definition of
the discrete Fourier transform. In fact, regardless of the
definition using group characters, we could have defined the discrete
Fourier transform using these results (property 3 above)!
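These coincidences are easy to verify numerically. A minimal sketch (assuming
numpy; the degree and the random coefficients are arbitrary), keeping in mind
that numpy's `ifft` includes a $1/N$ factor that the formula for $\check f_k$
above does not:
```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
coeffs = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # f_0, ..., f_{N-1}

def f(theta):
    # trigonometric polynomial f(theta) = sum_n f_n e^{i n theta}
    return sum(c * np.exp(1j * n * theta) for n, c in enumerate(coeffs))

samples = f(2 * np.pi * np.arange(N) / N)       # f(2 pi k / N), k = 0, ..., N-1

print(np.allclose(samples, N * np.fft.ifft(coeffs)))   # samples = IDFT of coefficients
print(np.allclose(np.fft.fft(samples) / N, coeffs))    # coefficients = DFT of samples
```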
The sampling theorem takes many different forms, but it always
amounts to a conservation of information, or conservation of degrees of
freedom. Thus, the properties above can be rephrased as
- Evaluating a trigonometric polynomial of $N$ coefficients
  at $N$ points is a linear map $\C^N\to\C^N$
- This linear map is invertible if and only if the points are distinct
  (thus, the function can be recovered exactly from $N$ of its samples)
- If the points are uniformly distributed, this linear map is
  the discrete Fourier transform

The second statement is often called the sampling theorem. The
condition that recovering a polynomial of $N$ coefficients
requires $N$ samples is called the Nyquist condition. Since
it is natural to consider trigonometric polynomials of the form
$$
P(\theta)=\sum_{n=-N/2}^{N/2} p_n\,e^{in\theta}
$$
the Nyquist condition is often stated as: the sampling rate must
be at least double the maximal frequency.
We have thus related Fourier series with the $N$-dimensional DFT, via
the operation of sampling at $N$ points. The reasoning is finite and
mostly trivial. There are many more correspondences between the
four classical cases. For example, Shannon-Whittaker interpolation
relates the Fourier transform with the discrete-time Fourier
transform: if the support of $\hat f$ lies inside the
interval $[-\pi,\pi]$, then $f$ can be recovered exactly from the
values $f(\Z)$. A different construction relates Fourier transforms
and Fourier series: if we have a rapidly decreasing function $f(x)$,
we can build a $2\pi$-periodic function by folding it:
$$
\tilde f(\theta)=\sum_{n\in\Z} f(\theta+2\pi n)
$$
and the Fourier series of $\tilde f$ and the Fourier transform
of $f$ are closely related.
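The Shannon-Whittaker statement can also be illustrated numerically: a
function whose transform is supported in $[-\pi,\pi]$ is recovered from its
integer samples by sinc interpolation, $f(x)=\sum_{n\in\Z} f(n)\,\mathrm{sinc}(x-n)$.
A rough sketch assuming numpy (the band-limited test function and the
truncation of the sum are arbitrary choices, so the agreement is only
approximate):
```python
import numpy as np

# A band-limited test function: its Fourier transform is supported inside [-pi, pi]
def f(x):
    return np.sinc(0.9 * x) + 0.5 * np.sinc(0.7 * (x - 2.0))

n = np.arange(-200, 201)        # integer samples f(n), truncated to a finite window
samples = f(n)

def reconstruct(x):
    # Shannon-Whittaker interpolation: sum_n f(n) sinc(x - n)
    return np.sum(samples * np.sinc(x[:, None] - n[None, :]), axis=1)

x = np.linspace(-5, 5, 101)
print(np.max(np.abs(reconstruct(x) - f(x))))   # small, limited only by the truncation
```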
All these relationships between the four classical cases are neatly
encoded in the Fourier-Poisson cube, which is an awesome commutative
diagram.
Let us define the folding operation.
Suppose that $\varphi$ is a rapidly decreasing smooth function.
We take a period $P>0$ and define the function
$$
\varphi_P(x):=\sum_{k\in\Z}\varphi(x+kP)
$$
Since $\varphi$ is rapidly decreasing, this series converges
pointwise, and $\varphi_P$ is a smooth, $P$-periodic
function. This folding operation transforms functions defined on $\R$ into
functions defined on $\R/P\Z$.
Since $\varphi$ is rapidly decreasing, we can compute its Fourier
transform
$$
\widehat\varphi(y)=\frac{1}{\sqrt{2\pi}}\int_\R \varphi(x)e^{-ixy}\,\ud x
$$
The Poisson summation formula relates the Fourier transform
of $\varphi$ with the Fourier series of $\varphi_P$. Let us derive
it. Since $\varphi_P$ is $P$-periodic, we can compute its Fourier
series
$$
\varphi_P(x)=\sum_{n\in\Z}c_n\exp\frac{2\pi inx}{P}
$$
which converges pointwise for any $x$ since $\varphi_P$ is smooth.
The Fourier coefficients $c_n$ are
$$
c_n = \frac{1}{P}\int_0^P\varphi_P(x)\exp\frac{-2\pi inx}{P}\,\ud x
$$
By expanding the definition of $\varphi_P$:
$$
c_n = \frac{1}{P}\int_0^P\left(\sum_{k\in\Z}\varphi(x+kP)\right)\exp\frac{-2\pi inx}{P}\,\ud x
$$
and now, since $\varphi$ is rapidly decreasing, we can interchange
the sum and the integral (for example, if $\varphi$ is compactly
supported, the sum is finite):
$$
c_n = \frac{1}{P}\sum_{k\in\Z}\int_0^P\varphi(x+kP)\exp\frac{-2\pi inx}{P}\,\ud x
$$
Now, by the change of variable $y=x+kP$,
$$
c_n =
\frac{1}{P}\sum_{k\in\Z}\int_{kP}^{(k+1)P}\varphi(y)\exp\frac{-2\pi
iny}{P}\,\ud y
$$
where we have used the fact that $\exp$ is $2\pi i$-periodic to
simplify $e^{2\pi ink}=1$. This sum of integrals over contiguous
intervals is just an integral over $\R$, thus
$$
c_n =
\frac{1}{P}\int_\R\varphi(y)e^{-iy\left(\frac{2\pi n}{P}\right)}\,\ud y
$$
where we recognize the Fourier transform of $\varphi$, thus
$$
c_n =\frac{\sqrt{2\pi}}{P}\widehat\varphi\left(
\frac{2\pi n}{P}
\right).
$$
Substituting the coefficients $c_n$ into the Fourier series
of $\varphi_P$ we obtain the identity
$$
\sum_{k\in\Z}\varphi(x+kP)
=
\frac{\sqrt{2\pi}}{P}
\sum_{n\in\Z}
\widehat\varphi\left(
\frac{2\pi n}{P}
\right)
\exp\frac{2\pi inx}{P}
$$
which can be evaluated at $x=0$ to give the Poisson summation formula
$$
\sum_{k\in\Z}\varphi(kP)
=
\frac{\sqrt{2\pi}}{P}
\sum_{n\in\Z}
\widehat\varphi\left(
\frac{2\pi n}{P}
\right).
$$
The particular case $P=\sqrt{2\pi}$ is gorgeous:
$$
\sum_{k\in\Z}\varphi\left(k\sqrt{2\pi}\right)
=
\sum_{n\in\Z}\widehat\varphi\left(n\sqrt{2\pi}\right)
$$
As we will see below, the language of distribution theory allows us to
express this formula
as $$\Xi_{\sqrt{2\pi}}=\widehat\Xi_{\sqrt{2\pi}}$$ (the Fourier transform of a
Dirac comb is a Dirac comb).
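The Poisson summation formula is easy to test numerically with a Gaussian,
for which both sides converge extremely fast. A minimal sketch assuming numpy;
with the $\frac{1}{\sqrt{2\pi}}$ convention used above, $\varphi(x)=e^{-x^2/2}$
satisfies $\widehat\varphi=\varphi$:
```python
import numpy as np

phi = lambda x: np.exp(-x**2 / 2)   # equal to its own Fourier transform
phi_hat = phi

P = 1.7                             # an arbitrary period
k = np.arange(-50, 51)              # more than enough terms: Gaussian tails are tiny

lhs = np.sum(phi(k * P))                                              # samples of phi
rhs = (np.sqrt(2 * np.pi) / P) * np.sum(phi_hat(2 * np.pi * k / P))   # samples of phi_hat
print(np.isclose(lhs, rhs))         # Poisson summation formula holds
```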
4. Distributions
Notice that most of sampling theory can be done without recourse to
distributions. Indeed, Shannon, Nyquist, Whittaker, Borel, all
stated and proved their results well before the invention of
distributions. Nowadays, distribution theory provides a satisfying
framework to state all the classical sampling results in a unified
form. It is difficult to judge which method is simpler, because the
classical sampling results all have elementary proofs, whereas the
detailed definition of tempered distributions is somewhat involved.
It is better to be familiar with both possibilities.
In classical sampling theory, you sample a continuous
function $f:\R\to\C$ by evaluating it at a discrete set of points,
for example $\Z$, thus obtaining a sequence of
values $\ldots,f(-2),f(-1),f(0),f(1),f(2),\ldots$, which can be
interpreted as a function $\tilde f:\Z\to\C$. Thus, the sampling
operation is a mapping between very different spaces: from the
continuous functions defined over $\R$ into the functions defined
over $\Z$.
When you perform sampling using distributions, you sample a
smooth function $f$ by multiplying it by a Dirac comb. Thus,
the sampling operation is a linear mapping between subspaces of the same
space: tempered distributions.
4.1. Distributions: overview
Distributions are an extension of functions, just like the real
numbers $\R$ are an extension of the rationals $\Q$. Most of the
operations that can be done with $\Q$ can be done with $\R$, and then
some more. However, there is a price to pay: there are some
operations that only make sense on the smaller set. For example,
the "denominator" function on $\Q$ cannot be extended
meaningfully to $\R$, the elements of $\R$ cannot be enumerated like
those of $\Q$, etc. Still, if you want to work with limits, the
space $\Q$ is mostly useless and you need $\R$.
There are a few spaces of distributions. The three most famous are
- $\mathcal{D}'$, the space of all distributions
- $\mathcal{S}'$, the space of tempered distributions
- $\mathcal{E}'$, the space of compactly supported distributions

Each of these spaces is a huge generalization of an already very large
space of functions:
- $\mathcal{D}'$ contains all functions of $L^1_{loc}$
- $\mathcal{S}'$ contains all functions of $L^1_{loc}$ that
  are slowly growing (bounded, or going to infinity at a polynomial
  rate)
- $\mathcal{E}'$ contains all compactly supported integrable
  functions

Here $L^1_{loc}$ denotes the set of locally integrable functions,
that is, complex-valued functions such that $\int_K|f| <+\infty$ for
any compact set $K$.
These are the properties that we gain with respect to the original
spaces:
- Most operations on functions extend naturally to
  distributions: sums, products by scalars, products by a smooth function,
  affine changes of variable
- Every distribution is infinitely differentiable, and the derivative
  belongs to the same space
- Every distribution is locally integrable
- The Fourier transform is an isometry on the space of
  tempered distributions
- There is a very easy-to-use definition of the limit of
  distributions

And these are the prices to pay in exchange:
- You cannot evaluate a distribution at a point
- You cannot multiply two distributions
- There is no way to define a norm on the vector space of
  distributions
4.2. Distributions: definition
There are several, rather different, definitions of distribution.
The most practical definition today seems to be as the topological
duals of spaces of test functions:
- $\mathcal{D}$, the space of all $\mathcal{C}^\infty$
  functions of compact support
- $\mathcal{S}$, the space of all rapidly
  decreasing $\mathcal{C}^\infty$ functions
- $\mathcal{E}$, the space of all $\mathcal{C}^\infty$
  functions

Notice that $\mathcal{D}$ and $\mathcal{E}$ make sense for functions
defined over an arbitrary open set, but $\mathcal{S}$ only makes
sense on the whole real line.
The only problem with this is that the topologies on these spaces of
test functions are not trivial to construct. For example, there is
no natural way to define useful norms on these spaces. Thus,
topologies must be built using families of seminorms, or by
other means (in the case of $\mathcal{D}$). This is out of the scope
of this document, but it is a standard construction that can easily be
found elsewhere (e.g., Gasquet-Witomski).
The basic topological property that we need is the definition of the
limit of a sequence of distributions. We say that a
sequence $T_n$ of distributions converges to a distribution $T$ when
$$
T_n(\varphi)\to T(\varphi)\qquad\textrm{for any test function } \varphi
$$
Thus, the limit of distributions is reduced to the limit of scalars.
A sequence of distributions is convergent if and only if it is
"pointwise" convergent. This is much simpler than the case of
functions, where there are several different and incompatible notions
of convergence.
A distribution is, by definition, a continuous linear map on the space of
test functions. The following notations are common for the result of
applying a distribution $T$ to a test function $\varphi$:
$$
T(\varphi)
\quad=\quad
\left<T,\varphi\right>
\quad=\quad
\int T\varphi
\quad=\quad
\int T(x)\varphi(x)\,\ud x
$$
The last notation is particularly insidious, because for a generic
distribution $T(x)$ does not make sense. However, it is a justified abuse of
notation due to the following lemma:
Lemma. Let $f$ be a locally integrable function (slowly
growing, or compactly supported). Then the linear map
$$
T_f : \varphi\mapsto\int f(x)\varphi(x)\,\ud x
$$
is well-defined and continuous on $\mathcal{D}$ (or $\mathcal{S}$,
or $\mathcal{E}$, respectively). Thus it is a distribution.
The lemma says that any such function can be interpreted as a
distribution.
This is very important, because all the following
definitions on the space of distributions are crafted so that, when
applied to a function, they have the expected effect.
For example, the derivative of a distribution $T$ is defined by
$$
\left<T',\varphi\right>
:=
\left<T,-\varphi'\right>
$$
Two observations: (1) this definition makes sense, because $\varphi$
is always a $\mathcal{C}^\infty$ function, and so is $-\varphi'$.
And (2) this definition extends the notion of derivative when $T$
corresponds to a differentiable function $f$. We write
$$
T_{f'}= {T_f}'
$$
to indicate that the proposed definition is compatible with the
corresponding construction for functions.
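The compatibility $T_{f'}=(T_f)'$ is just integration by parts, and it can be
sanity-checked numerically for a concrete pair. A minimal sketch assuming
numpy, with $f(x)=\sin x$ and a Gaussian test function (which is negligible at
the ends of the integration interval):
```python
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.sin(x)
f_prime = np.cos(x)
phi = np.exp(-x**2)                  # rapidly decreasing test function
phi_prime = -2 * x * np.exp(-x**2)

pair = lambda T, p: np.sum(T * p) * dx   # <T, phi> = integral of T * phi (Riemann sum)

# <T_{f'}, phi>  versus  <(T_f)', phi> := <T_f, -phi'>
print(np.isclose(pair(f_prime, phi), pair(f, -phi_prime)))
```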
A similar trick is used to extend the shift $\tau_a$, scaling $\zeta_a$ and
symmetry $\sigma$ of functions (where $a>0$):
\begin{eqnarray*}
\tau_a f(x) &:=& f(x-a)\\
\zeta_a f(x) &:=& f(x/a)\\
\sigma f(x) &:=& f(-x)
\end{eqnarray*}
to the case of distributions:
\begin{eqnarray*}
\left<\tau_a T,\varphi\right> &:=& \left<T,\tau_{-a}\varphi\right>\\
\left<\zeta_a T,\varphi\right> &:=& \left<T,a\,\zeta_{a^{-1}}\varphi\right>\\
\left<\sigma T,\varphi\right> &:=& \left<T,\sigma\varphi\right>
\end{eqnarray*}
and the compatibility can be checked by a simple change of
variable.
After ordinary functions, the most important example of a distribution
is the Dirac delta, defined by $\delta(\varphi):=\varphi(0)$.
In the usual notation we write
$$
\int\delta(x)\varphi(x)\,\ud x = \varphi(0)
$$
because this form is very amenable to changes of variable.
An equivalent definition is $\delta(x)=H'(x)$ where $H$ is the
indicator function of the positive numbers. This makes sense because $H$
is locally integrable, and its derivative is well-defined in the
sense of distributions. The Dirac delta belongs to all three
spaces $\mathcal{D}'$, $\mathcal{S}'$ and $\mathcal{E}'$.
Using Diracs, we can define many other distributions, by applying
shifts, derivatives, and vector space operations. For example, the
Dirac comb is defined as
$$
\Xi(x)=\sum_{n\in\Z}\delta(x-n)
$$
where the infinite series is to be interpreted as a limit. This is
well-defined in $\mathcal{D}'$ (where the sum is finite due to the
compact support of the test function)
and in $\mathcal{S}'$ (where the series is trivially convergent due to the
rapid decrease of the test function) but not in $\mathcal{E}'$ (where
the series is not necessarily convergent for arbitrary test
functions, for example $\varphi=1\in\mathcal{E}$).
We can do other crazy things, like $\sum_{n\ge 0}\delta^{(n)}(x-n)$,
which is also well defined when applied to a test function. But we
cannot do everything. For example $\sum_{n\ge 0}\delta^{(n)}(x)$ is
not well defined, because there is no guarantee that the sum of
all the derivatives of a test function at the same point converges.
4.3. Fourier transform of distributions
How do we define the Fourier transform of a distribution?
We need to find a definition that extends the definition that we
already have for functions, thus $\widehat{T_f}=T_{\widehat{f}}$.
It is easy to check that the definition
$$
\left<\widehat{T},\varphi\right>
:=
\left<T,\widehat{\varphi}\right>
$$
does the trick, because it corresponds to the Plancherel theorem when $T$
is a locally integrable function.
However, notice that this definition does not make sense
in $\mathcal{D}'$: if $\varphi\in\mathcal{D}$, then it has compact
support, so its Fourier transform does not,
thus $\widehat{\varphi}\notin\mathcal{D}$.
The space $\mathcal{S}$, called the Schwartz space, has the beautiful
property of being invariant under the Fourier transform. Indeed, the Fourier
transform, with appropriate normalization constants, is an $L^2$
isometry on $\mathcal{S}$. Thus, tempered distributions are the
natural space in which to perform Fourier transforms.
Now we can compute the Fourier transform, in the sense of
distributions, of many functions! For example, what is the Fourier
transform of the function $f(x)=1$? This function is a tempered
distribution, so it must have a Fourier transform, mustn't it?
Indeed it does, and it can be easily found from the definitions:
$$
\left<\widehat{1},\varphi\right>
=
\left<1,\widehat{\varphi}\right>
=
\int\widehat{\varphi}(x)\,\ud x
=
\sqrt{2\pi}\,\varphi(0)
$$
So, the Fourier transform of a constant is a Dirac!
By combining this result with derivatives we can compute the
Fourier transform of polynomials. For example $f(x)=x^2$ has the
property that $f''$ is constant, thus $\widehat{f''}$ is a Dirac, and
then $\widehat{f}$ is (up to a constant) the second derivative of a Dirac.
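One way to "see" the identity $\widehat{1}=\sqrt{2\pi}\,\delta$ is to
transform wider and wider Gaussians, which tend to the constant $1$: their
transforms become taller and narrower while keeping total mass $\sqrt{2\pi}$.
A rough numerical sketch assuming numpy and the $\frac{1}{\sqrt{2\pi}}$
convention (the widths are arbitrary):
```python
import numpy as np

# g_sigma(x) = exp(-x^2 / (2 sigma^2)) tends to the constant 1 as sigma grows;
# its Fourier transform (1/sqrt(2 pi) convention) is sigma * exp(-sigma^2 xi^2 / 2).
xi = np.linspace(-10, 10, 2_000_001)
dxi = xi[1] - xi[0]
for sigma in [1.0, 10.0, 100.0]:
    g_hat = sigma * np.exp(-sigma**2 * xi**2 / 2)
    peak = g_hat.max()                  # grows like sigma
    mass = np.sum(g_hat) * dxi          # stays close to sqrt(2 pi)
    print(sigma, peak, mass / np.sqrt(2 * np.pi))
```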
4.4. Sampling with Diracs
Can we compute the Fourier transform of $f(x)=e^x$? No, because it
is not a slowly growing function, and it does not correspond to any
tempered distribution.
However, the function $f(x)=e^{ix}$ is certainly slowly growing (it is
bounded), so it has a Fourier transform as a tempered distribution,
which (ignoring normalization constants) is $\widehat{f}(\xi)=\delta(\xi-1)$.
Using trigonometric identities, we find the Fourier transforms of $\sin$
and $\cos$, which are also sums of Diracs:
\begin{eqnarray*}
\widehat{\cos}(\xi) &=&\frac{\delta(\xi-1)+\delta(\xi+1)}{2}\\
\widehat{\sin}(\xi) &=&\frac{\delta(\xi-1)-\delta(\xi+1)}{2i}
\end{eqnarray*}
And, as we would say in Catalan, the mother of the
eggs ("la mare dels ous", or in French "où il gît le
lièvre"; I do not know a similarly colorful expression in English): the
Fourier transform of a Dirac comb is another Dirac comb.
I do not see
how to prove this by combining the identities above, but it has
a simple proof by expressing the Dirac comb as the derivative of a
sawtooth function and applying it to a test function, as done in the
previous section.
5. Spectral geometry
Spectral theory provides a brutal generalization of a large part of
Fourier analysis. We do away with the group structure (and thus with
the possibility of having convolutions, which are based on the action
of the group). In exchange, we need to work inside a compact domain
endowed with a Riemannian metric, for example a compact sub-manifold
of Euclidean space. The canonical example is $S^1$, which in
the classical case leads to Fourier series. Here, we recover all
the results of Fourier series (except those related to periodic
convolution) for functions defined on our manifold.
Let $M$ be a compact Riemannian manifold (with or without boundary), and
let $\Delta$ be its Laplace-Beltrami operator, defined
as $\Delta=*d*d$, where $d$ is the exterior derivative (which is independent
of the metric) and $*$ is the Hodge duality between $p$-forms
and $(n-p)$-forms, with $n=\dim M$ (which is defined using the metric).
The following are standard results in differential geometry (see e.g.
Warner's book, chapter
6: https://link.springer.com/content/pdf/10.1007%2F978-1-4757-1799-0_6.pdf)
- (1) There is a sequence of $\mathcal{C}^\infty(M)$
  functions $\varphi_n$ and positive
  numbers $\lambda_n\to\infty$ such that
  $$\Delta\varphi_n=-\lambda_n\varphi_n$$
- (2) The functions $\varphi_n$, suitably normalized, form an
  orthonormal basis of $L^2(M)$.

These results generalize Fourier series to an arbitrary smooth compact manifold $M$.
Any square-integrable function $f:M\to\R$ is written uniquely as
$$f(x)=\sum_nf_n\varphi_n(x)$$ and the coefficients $f_n$ are computed by
$$f_n=\int_Mf\varphi_n.$$ Some particular cases are the usual Fourier and
sine bases (but not the cosine basis), Bessel functions for the disk, and
spherical harmonics for the surface of a sphere.

|          | $M$ | $\varphi_n$ | $\lambda_n$ |
|----------|-----|-------------|-------------|
| interval | $[0,2\pi]$ | $\sin\left(\frac{nx}{2}\right)$ | $n^2/4$ |
| circle   | $S^1$ | $\sin(n\theta),\cos(n\theta)$ | $n^2$ |
| square   | $[0,2\pi]^2$ | $\sin\left(\frac{nx}{2}\right)\sin\left(\frac{my}{2}\right)$ | $\frac{n^2+m^2}{4}$ |
| torus    | $(S^1)^2$ | $\sin(nx)\sin(my),\ldots$ | $n^2+m^2$ |
| disk     | $\lvert r\rvert\le1$ | $\sin,\cos(n\theta)\,J_n(\rho_{m,n}r)$ | $\rho_{m,n}^2$, with $\rho_{m,n}$ the roots of $J_n$ |
| sphere   | $S^2$ | $Y^m_l(\theta,\varphi)$ | $l^2+l$ |

The eigenfunctions $\varphi_n$ are called the vibration modes of $M$, and the
eigenvalues $\lambda_n$ are called the (squared) fundamental frequencies of $M$.
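The first row of the table can be reproduced with a finite-difference
Laplacian: discretize $[0,2\pi]$ with Dirichlet boundary conditions and
diagonalize; the eigenvalues approach $n^2/4$ and the eigenvectors approach
$\sin(nx/2)$. A minimal sketch assuming numpy (the grid size is arbitrary):
```python
import numpy as np

N = 500                                     # interior grid points of (0, 2*pi)
h = 2 * np.pi / (N + 1)
x = h * np.arange(1, N + 1)

# (-Delta) discretized by second differences, with Dirichlet boundary conditions
L = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

eigvals, eigvecs = np.linalg.eigh(L)        # ascending order
print(eigvals[:4])                          # approximately 1/4, 1, 9/4, 4 (= n^2/4)

v = eigvecs[:, 0]                           # first vibration mode
v = v / np.max(np.abs(v))
print(np.max(np.abs(np.abs(v) - np.sin(x / 2))))   # close to sin(x/2)
```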
Several geometric properties of $M$ can be interpreted in terms of the
Laplace-Beltrami spectrum. For example, if $M$ has $k$ connected components,
the first $k$ eigenfunctions will be supported successively on each connected
component. On a connected manifold $M$, the first vibration mode can be
taken to be positive, $\varphi_1\ge0$, thus all the other modes have
non-constant signs (because they are orthogonal to $\varphi_1$). In
particular, the sign of $\varphi_2$ cuts $M$ in two parts in an optimal way;
it is the Cheeger cut of $M$, minimizing the perimeter/area ratio of the cut.
The zeros of $\varphi_n$ are called the nodal curves (or nodal sets) of $M$,
also known as the Chladni patterns. If $M$ is a subdomain of the plane, these
patterns can be found by cutting an object in the shape of $M$, pouring a
layer of sand over it, and letting it vibrate under high-volume sound waves at
different frequencies. For most frequencies, the sand will not form any
particular pattern, but when the frequency coincides with
a $\sqrt{\lambda_n}$, the sand will accumulate over the set $[\varphi_n=0]$,
which is the set of points of the surface that do not move when the surface
vibrates at this frequency. In the typical case, the number of connected
components of $[\varphi_n>0]$ grows linearly with $n$, thus the
functions $\varphi_n$ become more oscillating (less regular) as $n$ grows.
Often, symmetries of $M$ show up as multiplicities of eigenvalues.
The Laplace-Beltrami spectrum $\{\lambda_1,\lambda_2,\lambda_3,\ldots\}$ is
closely related, but not identical, to the geodesic length spectrum, which
measures the sequence of lengths of all closed geodesics of $M$. The grand
old man of this theory is Yves Colin de Verdière, a student of Marcel Berger.
Geometry is not usually a spectral invariant, but non-isometric manifolds
with the same spectrum are difficult to come by. The first pair of distinct
but isospectral manifolds was found in 1964 by John Milnor, in dimension 16.
The first example in dimension 2 was found in 1992 by Gordon, Webb and
Wolpert, and it answered negatively the famous question of Mark Kac, "Can you
hear the shape of a drum?".
As of 2018, we have many ways to construct discrete and continuous families of
isospectral manifolds in dimensions two and above.