Sampling Theorem and reconstruction
I do not understand a concept about the Nyquist-Shannon sampling theorem.

It says that it is possible to perfectly recover the original analog signal from its samples if and only if the sampling frequency is higher than twice the maximum frequency of the initial signal.

I can understand it if I think about what happens in the frequency domain, where sampling produces replicas of the initial spectrum, so a low-pass reconstruction filter can remove the replicas and keep the original spectrum.

But in the time domain, sampling simply means extracting values of the original signal at instants separated by the sampling period T.

Once I have extracted these values, I have lost all the information about the points between two consecutive sampling instants. How can the reconstruction device perfectly recover the original signal? It does not know how to connect the sampled points (they can be connected by infinitely many mathematical curves, and all the information within each interval T is lost). For example, it could connect them as in figure 1 (the correct original signal), or as in figure 2.

figure 1

figure 2

This makes me think that a very high sampling frequency is surely a good thing, since the points are very close together, but that there is no frequency which, once exceeded, allows a 100% perfect reconstruction, since sampling implies losing information.

Tags: analog, signal, signal-processing, sampling, signal-theory

asked 4 hours ago by Kinka-Byo
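A quick numerical illustration of the frequency-domain picture described above (a minimal NumPy sketch; the tone frequency and rates are arbitrary choices): ideal impulse-train sampling of a 200 Hz tone at 1 kHz produces spectral replicas around every multiple of the sampling frequency.

```python
import numpy as np

# Dense time grid standing in for "continuous" time (all values arbitrary).
fs_dense = 8000.0                          # resolution of the dense grid, Hz
N        = 8000                            # 1 s of data -> exactly 1 Hz FFT bins
t        = np.arange(N) / fs_dense
x        = np.cos(2 * np.pi * 200.0 * t)   # 200 Hz tone

# Ideal (impulse-train) sampling at fs = 1000 Hz: keep every 8th point, zero the rest.
M          = 8
fs         = fs_dense / M                  # effective sampling frequency = 1000 Hz
train      = np.zeros(N)
train[::M] = 1.0
xs         = x * train

f  = np.fft.rfftfreq(N, d=1 / fs_dense)
Xs = np.abs(np.fft.rfft(xs))

# The sampled signal's spectrum has the original 200 Hz line plus replicas at
# k*fs +/- 200 Hz: 800, 1200, 1800, 2200, ... Hz. A low-pass filter with cutoff
# fs/2 keeps only the 200 Hz line, i.e. the original signal.
print(f[Xs > 0.5 * Xs.max()])
```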
Comments:

– Hearth (3 hours ago): Your bottom signal has some much higher frequency components than the other ones here.

– Dan Mills (3 hours ago): There is exactly ONE curve that passes through all those points AND is band-limited to strictly less than Fs/2.

– Neil_UK (3 hours ago): It's important to note that unique reconstruction is only possible if the original signal is strictly band-limited. Or, to put it another way, given the samples, the assumption of strict band-limiting allows a single signal to be reconstructed. To the extent that the band-limited assumption is untrue, the reconstructed signal will not match the original; this is called aliasing.

– Kevin White (2 hours ago): Also note that in practice a higher sampling frequency may be needed to provide acceptable reconstruction, since perfect band-limiting is not practical. For example, audio CDs use 44.1 kHz sampling to provide 0-20 kHz output. Oscilloscopes generally use 5-10 times the minimum required sampling frequency to provide acceptable waveform integrity, as a sharp cutoff filter would tend to create waveform artifacts such as ringing.
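To make Dan Mills' and Neil_UK's comments concrete: without the band-limit assumption, the samples alone cannot distinguish a tone at f0 from its alias at fs - f0, which is exactly the "figure 1 versus figure 2" ambiguity in the question. A minimal NumPy sketch (frequencies chosen arbitrarily for illustration):

```python
import numpy as np

fs = 1000.0                        # sampling frequency, Hz (arbitrary)
f0 = 100.0                         # in-band tone, below fs/2
f1 = fs - f0                       # its alias, above fs/2

n = np.arange(32)
t = n / fs                         # sampling instants

x0 = np.cos(2 * np.pi * f0 * t)    # the band-limited candidate ("figure 1")
x1 = np.cos(2 * np.pi * f1 * t)    # a faster curve through the same points ("figure 2")

# The two sets of samples agree to within floating-point rounding, so only the
# band-limit assumption lets the reconstructor pick x0 rather than x1.
print(np.max(np.abs(x0 - x1)))
```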
2 Answers
You can think of any perfectly bandlimited signal as the superposition of a set of $\frac{\sin(t)}{t} = \mathrm{sinc}(t)$ curves, with their peaks positioned uniformly along the time axis. Their spacing is $\frac{1}{2\,BW}$.

sinc(x) also happens to be the time-domain response of a perfect low-pass filter, and it explains how the continuous-time reconstruction (interpolation) is accomplished from a series of discrete samples.

When we uniformly sample a signal, each sample is a direct measurement of the amplitude of one of those sinc() waves. This works because the sinc() function has the property that it is zero at every sampling point except at its own peak. In other words, when you take a measurement, you're not getting any "interference" from any of the other sinc() functions. Therefore, the set of N discrete measurements contains all of the information in the continuous-time signal represented by that collection of sinc() waves.

Now, it gets even weirder than what TimWescott was alluding to: the samples do not even have to be uniformly spaced in time! It turns out that ANY N unique samples taken within a window of time (with certain limitations) of a perfectly bandlimited signal can be used to reconstruct that signal. It takes a lot more math to do it, though!

With nonuniform sampling, you are no longer getting a clean measurement of just one of the sinc() amplitudes. Instead, you're getting a mix of many, if not all of them. However, since you know exactly where you are on each one (obviously, each sample must be time-stamped), it is possible to solve the large system of linear equations to find the actual amplitudes and therefore reconstruct the original signal. Of course, this process is very sensitive to small perturbations (noise and math errors, for example), and I'm hand-waving away some details about constraints on the set of samples, but the general principle holds.

answered 3 hours ago by Dave Tweed
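A minimal NumPy sketch of this sinc-superposition picture (the bandwidth and test tones are arbitrary choices): sample a bandlimited signal at the Nyquist rate, rebuild the continuous-time waveform by summing one shifted sinc per sample, and compare it with the original on a dense time grid.

```python
import numpy as np

B  = 100.0                      # signal bandwidth, Hz (arbitrary choice)
fs = 2 * B                      # Nyquist-rate sampling
T  = 1 / fs                     # sample spacing

def x(t):
    # Bandlimited test signal: three tones, all strictly below B.
    return (1.0 * np.sin(2 * np.pi * 23.0 * t)
            + 0.5 * np.cos(2 * np.pi * 61.0 * t + 0.3)
            + 0.2 * np.sin(2 * np.pi * 97.0 * t))

n  = np.arange(-2000, 2000)     # sample indices (wide window to limit truncation error)
xn = x(n * T)                   # the discrete samples

# Reconstruction: x_hat(t) = sum_n x[n] * sinc((t - n*T) / T)
t_dense = np.linspace(-0.02, 0.02, 1601)
x_hat = (xn[:, None] * np.sinc((t_dense[None, :] - n[:, None] * T) / T)).sum(axis=0)

# The residual comes from truncating the (ideally infinite) sum of sincs to a
# finite window; it shrinks as the window grows.
print("max reconstruction error:", np.max(np.abs(x_hat - x(t_dense))))
```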
If the signal is perfectly bandlimited, then there is no additional information to be gotten out of it by sampling faster than twice the bandwidth. So perfect reconstruction must be possible. It's as @DanMills said: there's one and only one curve that will pass through the sampled points and be correct, and that's the curve you'd get from a perfect reconstruction filter.

(Note that it gets weirder: at least in theory, if the bandwidth is $B$, then you don't need to sample $x(t)$ at $2B$. You can sample $x(t)$ and $\frac{dx(t)}{dt}$ simultaneously at $B$, or sample out to the third derivative (i.e., collect four samples) at $\frac{B}{2}$, or commit various other crimes against the signal before sampling an $N$-wide vector at $\frac{2B}{N}$. Most such schemes (definitely the derivative ones I mention) would be horribly impractical, but in theory they work, and you do occasionally stumble across schemes that are actually used in reality.)

answered 3 hours ago by TimWescott
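The derivative-sampling aside can also be checked numerically. Below is a rough sketch under simplifying assumptions (a finite sinc basis stands in for the infinite-length theorem, and all values are arbitrary): build a signal bandlimited to B from K sinc basis functions, take samples of x(t) and x'(t) at rate B (half the usual 2B), and solve the resulting K-by-K linear system for the basis coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 1.0                          # bandwidth, Hz (arbitrary)
T = 1 / (2 * B)                  # Nyquist spacing of the sinc basis
K = 16                           # number of basis functions (finite stand-in)
c_true = rng.standard_normal(K)  # true coefficients: x(t) = sum_k c_k * sinc((t - k*T)/T)

def dsinc(u):
    """Derivative of np.sinc(u) = sin(pi*u)/(pi*u) with respect to u."""
    u = np.asarray(u, dtype=float)
    small = np.abs(u) < 1e-8
    den = np.where(small, 1.0, np.pi * u**2)
    return np.where(small, 0.0, (np.pi * u * np.cos(np.pi * u) - np.sin(np.pi * u)) / den)

# Sample x and x' at rate B, i.e. every 2T: K/2 instants give K equations for K unknowns.
t_s = (2 * np.arange(K // 2) + 0.25) * T      # small offset just to avoid special alignment

U     = (t_s[:, None] - np.arange(K)[None, :] * T) / T
A_val = np.sinc(U)              # rows for x(t_s)  = A_val @ c
A_der = dsinc(U) / T            # rows for x'(t_s) = A_der @ c  (chain rule gives the 1/T)
A     = np.vstack([A_val, A_der])
b     = A @ c_true              # what an ideal sampler plus differentiator would measure

c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
# Close to zero when A is nonsingular, i.e. the half-rate value+derivative
# samples determine the signal.
print("max coefficient error:", np.max(np.abs(c_hat - c_true)))
```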