oversampling
Penio Penev (penev@venezia.rockefeller.edu)
Mon, 5 Jan 1998 22:52:53 -0500
On Sun, 4 Jan 1998, KC5TJA wrote:
> hmmm...what's the difference between spatial and time domains?
Sampling in the amplitude domain means that one is recording the value of
the amplitude on the condition that the time is equal to something
predetermined (a threshold).
Sampling in the time domain means that one is recording the value of
the time on the condition that the amplitude is equal to something
predetermined (a threshold).
Sampling in another domain means that one is recording the value of some
by-product of the signal on the condition that some other by-product of
the signal is at some predetermined threshold.
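A toy Python sketch, just to make the two recipes concrete (the function
names and the grid-based crossing detector are my own illustration,
nothing standard):

  import numpy as np

  def sample_amplitude(f, times):
      # Amplitude-domain sampling: record the value of f at
      # predetermined times.
      return [f(t) for t in times]

  def sample_time(f, level, t_grid):
      # Time-domain sampling: record the times at which f crosses a
      # predetermined level (found here by a sign change on a fine grid).
      vals = np.array([f(t) - level for t in t_grid])
      idx = np.where(np.diff(np.sign(vals)) != 0)[0]
      return [t_grid[i] for i in idx]

  f = lambda t: np.sin(2 * np.pi * t)
  print(sample_amplitude(f, [0.0, 0.25, 0.5, 0.75]))       # values at fixed times
  print(sample_time(f, 0.0, np.linspace(0.0, 1.0, 1000)))  # times at a fixed level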
[A philosophical aside: The nervous system communicates with "spikes" --
all-or-nothing objects that have an associated time value, but nothing
else. I.e., it samples the time. The only unsolved problem in
neuroscience is exactly which by-product of the signal at the receptors
is being thresholded.]
> > The trick is to know how one is sampling and to calculate the
> > transformation matrix correctly. Of course, one has to do the error
>
> OK, please give a more complete example of this. I'm sorta getting what
> you're talking about, and I'm sorta not.
One example is the sin(x)/x reconstruction function that is used to
recover the signal from the amplitude sampling.
Another example is the Discrete Fourier Transform, which converts between
the amplitude-sampling and the frequency-sampling representations.
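A toy sketch of the first one, under the usual assumptions (band-limited
signal, uniform sampling well above the Nyquist rate; the names are
mine):

  import numpy as np

  def sinc_reconstruct(samples, fs, t):
      # Whittaker-Shannon: x(t) = sum_n x[n] * sinc(fs*t - n),
      # where sinc(u) = sin(pi*u)/(pi*u).
      n = np.arange(len(samples))
      return np.sum(samples * np.sinc(fs * t - n))

  fs = 8.0                                   # samples per second
  ts = np.arange(32) / fs                    # the sampling instants
  samples = np.sin(2 * np.pi * 1.0 * ts)     # a 1 Hz tone, well below fs/2
  print(sinc_reconstruct(samples, fs, 0.3))  # close to sin(2*pi*0.3)

(The second one is just np.fft.fft(samples) applied to the same amplitude
samples.)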
> What I'm getting confused about is how you are able to sample the
> instantaneous amplitude of a signal using nothing but zero crossings... :)
What follows is a very good example and I'll try to take it one step
further.
> > One can improve the quality of the reconstruction if one has more samples
> > -- in that case the problem is called "least square fitting," or "what is
> > the signal in the target subspace, closest to the measured one, which is
> > in a bigger space?"
>
> Yes, but you still need more resolution for amplitude. Let's say you have
> a 2-bit A/D converter, and you sample a sine wave:
>
> 1 2 3 2 1 0 1 ...
>
> Looks like a triangle wave to me. But that's due to the lack of time
> points. Now let's double the sampling frequency:
>
> 1 2 2 3 3 3 3 2 2 1 1 0 0 0 0 1 1 2 2 ...
>
> OK, now it's looking more like a sine wave. BUT, we reach a point of
> diminishing returns here.
No, doubling the number of samples gives you roughly one more bit of
effective amplitude resolution.
> If I were to double the sampling frequency
> again, I'd get no more information out of it... :(
Nope. Let's see.
You considered f(t)= A*sin(w*t), represented by the sequence:
S1: 1 2 2 3 3 3 3 2 2 1 1 0 0 0 0 1 1 2 2
We know the "spectral part" -- w; we need to estimate A. Suppose we start
changing A, say increasing it, and ask what happens to the observed
sequence. Well, for a while it stays the same, then changes -- the 2 -> 3
transition happens earlier, because now f(t) increases faster. And we have
the sequence:
S2: 1 2 3 3 3 3 3 3 2 1 0 0 0 ...
There is a minimum A at which this happens, say A_min(S2), which is also
A_max(S1). As we decrease A, the transition happens later:
S0: 1 2 2 2 3 3 2 2 2 1 ...
This is A_min(S1) = A_max(S0).
So, by observing S1, we have bounded the value of A :
A_min(S1) <= A <= A_max(S1) .
This is equivalent to sampling A at the given resolution. The negative
log of the size of the uncertainty interval is proportional to the number
of bits of effective resolution in A!
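Here is a small numerical sketch of that bounding step. Everything in it
-- the offset, the phase, the search range, the function names -- is my
own assumption, so the exact codes will not match S1, but the logic is
the same: sweep A and keep the values that reproduce the observed
sequence.

  import numpy as np

  def quantize(x, levels=4):
      # An idealized 2-bit A/D: map [0, levels) onto the codes 0..levels-1.
      return np.clip(np.floor(x), 0, levels - 1).astype(int)

  def amplitude_bounds(observed, w, ts, offset=2.0, a_grid=None):
      # Keep every candidate amplitude that reproduces the observed code
      # sequence exactly; the survivors form the interval [A_min, A_max].
      if a_grid is None:
          a_grid = np.linspace(0.0, 3.0, 30001)
      ok = [a for a in a_grid
            if np.array_equal(quantize(offset + a * np.sin(w * ts)),
                              observed)]
      return min(ok), max(ok)

  w = 2 * np.pi / 16.0       # roughly 16 samples per period, as in S1
  ts = np.arange(19.0)       # 19 sampling instants
  true_A = 1.7               # the amplitude we pretend not to know
  observed = quantize(2.0 + true_A * np.sin(w * ts))
  print(amplitude_bounds(observed, w, ts))   # a narrow interval around 1.7

The width of the printed interval, on a log2 scale, is exactly the
"effective resolution in A" mentioned above.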
What happens when we observe S1 and now we decide to increase the temporal
sampling frequency?
S1: 1 2 2 3 3 3 3 2 2 1 1 0 0 0 0 1 1 2 2
S1': 1x2x2x3x3x3x3x2x2x1x1x0x0x0x0x1x1x2x2
OK, some of the x-es we can fill in in advance:
S1': 1x222x3333333x222x111x0000000x111x222
But note that some of the observations depend on A: if A is big, they get
the bigger value; if A is small, the smaller. Let's say the threshold
value is A_max(S10) = A_min(S11):
S10: 11222333333332222111110000000 ...
S11: 12222333333333222211100000000 ...
So, by observing either S10 or S11, we have a smaller bounding interval
for A, which effectively increases the _amplitude_ resolution -- if we
are lucky, by a whole bit.
And so on.
The whole story can be made rigorous, also for signals with a richer
spectrum. But as long as the spectrum is bounded and known in advance,
sampling the timing of level-crossings _can_ substitute for sampling
amplitudes at regular times.
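Continuing the toy sketch from above (same hypothetical quantize and
amplitude_bounds helpers, same assumed offset and phase), one can watch
the bound on A tighten as the time grid gets finer:

  for oversample in (1, 2, 4, 8):
      ts = np.arange(19 * oversample) / oversample   # finer grid, same w
      observed = quantize(2.0 + true_A * np.sin(w * ts))
      lo, hi = amplitude_bounds(observed, w, ts)
      print(oversample, round(lo, 4), round(hi, 4))  # the interval shrinks

Each doubling narrows the interval further -- when we are lucky, by about
a factor of two, i.e. the extra bit of amplitude resolution described
above.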
> > With a 1-bit A/D one is measuring the zero-crossings of some function.
>
> OK...I'm thinking of a different KIND of 1-bit D/A converter... I'm
> thinking of the converter where the output is either all-on or all-off,
> and you use an integrator, followed by a LPF, to achieve the output
> signal.
We are talking about two different things here: A/D versus D/A. For D/A,
I have in mind exactly what you have in mind.
> Upon a zero crossing, what happens? Does the state of the output
> latch toggle, or does it pulse for a clock period?
I was talking A/D.
> > So, by noting the exact time of the zero crossing, we have actually
> > constrained the point through with the waveform passes extremely well.
>
> Right, but you're now completely clueless of the instantaneous amplitudes
> between two zero-crossing events. :)
But, remember, you've sampled the Fourier expansion coefficients well
enough, so that you can reconstruct the amplitudes between the
zero-crossings.
See above how we bounded A well enough by level-crossings. Now we can use
what we know: f(t) = A*sin(w*t) .
> > In other words, there aren't _that many_ possible waveforms in the
> > _band-passed subspace_ that conform to those requirements.
> ^^^^^^^^^^^^^^^^^^^^^^
>
> That may well be true, but you're still loosing amplitude information,
> even for those waveforms that ARE still in the bandpass region.
Remember, we bounded the _amplitude_ of the Fourier coefficients really
well. So, we are not losing any amplitude information. There are not
_that_ many waveforms left in the subspace.
> > As I said, one need a helluva DSP to sort things out :-)
>
> Hmmm...perhaps not. Could wavelet analysis have application in this
> picture? (I'm interested in wavelets, though I've only played with Haar
> wavelets.)
_Any_ complete basis of the subspace will do. The only question is which
one is convenient and how the error-propagation analysis plays out.
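For what it's worth, a minimal least-squares sketch of that remark, under
assumptions of my own (a five-dimensional trigonometric subspace, and a
record of the times t_k at which the signal crossed the known quantizer
levels L_k):

  import numpy as np

  def fit_in_subspace(crossing_times, crossing_levels, w):
      # Assumed basis of the subspace: 1, cos(wt), sin(wt), cos(2wt), sin(2wt).
      # Each recorded crossing contributes one linear equation f(t_k) = L_k.
      t = np.asarray(crossing_times, dtype=float)
      B = np.column_stack([np.ones_like(t),
                           np.cos(w * t), np.sin(w * t),
                           np.cos(2 * w * t), np.sin(2 * w * t)])
      levels = np.asarray(crossing_levels, dtype=float)
      coef, *_ = np.linalg.lstsq(B, levels, rcond=None)
      return coef          # evaluate B(t) @ coef to reconstruct in between

A Haar-wavelet basis, a Fourier basis, or anything else complete in the
same subspace plays the same role; only the conditioning of B (the error
propagation) changes.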
[A philosophical aside: A neuron fires spikes about 100 times a second,
with a reproducible temporal resolution of about a millisecond. People
put this at about 3--10 bits/spike. For about 10G neurons in the brain,
this means about 1 Tbyte/s _processed_.
At about 100 Watts heat dissipation, 15 cu. inch volume, and without the
need for a noisy fan, this is not a bad deal :-)
Add end-to-end encryption to that and you can see what a system this is.
Go, MISC, Go!]
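(The arithmetic, taking the upper end of the bits-per-spike estimate:

  neurons = 10e9        # "about 10G neurons"
  spikes  = 100.0       # spikes per second per neuron
  bits    = 10.0        # upper end of the 3--10 bits/spike range
  print(neurons * spikes * bits / 8 / 1e12, "TB/s")   # about 1.25 TB/s

which is indeed about 1 Tbyte/s.)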
--
Penio Penev <Penev@pisa.Rockefeller.edu> 1-212-327-7423