How to resample a nonperiodic signal with scipy? - scipy

The documentation for scipy.signal.resample says that it assumes the function is periodic. Is interp1d the equivalent function for nonperiodic signals? If not, what is?
I've spent a lot of time searching for this and reading through other posts, but they all seem to assume you have a signal processing background; I'm from more of an algorithms background. I kind of understand what "Fourier methods" are, but shouldn't there be some kind of Fourier-based resampler for finite-length signals? (Perhaps using a DFT, etc.)
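For a finite-length, nonperiodic signal, plain interpolation is often the pragmatic answer to this question. A minimal sketch using scipy.interpolate.interp1d (the grid sizes and the quadratic test signal here are invented purely for illustration):

```python
import numpy as np
from scipy.interpolate import interp1d

# Original nonperiodic signal: 10 samples of t^2 on [0, 1]
t_old = np.linspace(0.0, 1.0, 10)
x_old = t_old ** 2

# Resample onto 25 points by cubic spline interpolation;
# unlike scipy.signal.resample, nothing here assumes periodicity
t_new = np.linspace(0.0, 1.0, 25)
x_new = interp1d(t_old, x_old, kind="cubic")(t_new)

print(x_new.shape)  # (25,)
```

Interpolation is not band-limited resampling, so it behaves differently from the Fourier method near sharp features, but it never wraps the endpoints around.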

Related

Windowing signals in MATLAB

I'm working with some accelerometer data, and it has been suggested that I do some windowing to isolate different events in the signal. Windowing seems poorly documented in MATLAB, and I was hoping for some simple examples (or suggested reading and links) of windowing being implemented. I was also wondering why one would window at all instead of just breaking the data into sections and analysing the individual frames. Thanks.
An example of a test or event is shown below:
My initial data looked like this: shown above is a single spike, expanded.
Also, can someone suggest how I would window the first plot using MATLAB?
Windowing is more in the realms of signal processing theory than programming, however it is very important when understanding the output of an FFT, so probably worth explaining in a little more detail.
Essentially, when you truncate a signal (for example, to process it in blocks), you alter the frequency domain in a rather surprising way: you end up convolving (i.e. smearing) all frequency terms with a "window" function. If you do nothing other than truncate, that function has a sin(Nx)/sin(x) shape (the Dirichlet kernel, the transform of a rectangular window). This spreads the frequency content of the original signal over the entire spectrum, and if there is a dominant component, everything else gets buried by it. The shorter the blocks, the worse the effect, as the window gets fatter in the frequency domain.
Windowing with a shaped window, such as a Hamming, Hann, or Blackman window, alters the frequency-domain response, making the smearing more localised around the original signal. The resulting spectrum is much clearer as a result.
To analyse a block of data x, what you should do is:
transform = fft(x .* hanning(length(x)));
The result is complex; you can display its magnitude in dB with plot(20*log10(abs(transform)))
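The same smearing effect can be demonstrated outside MATLAB. This numpy sketch (sample rate, tone frequency, and the 50-bin "far away" margin are all invented for illustration) compares how far a tone that falls between FFT bins leaks with a rectangular window versus a Hann window:

```python
import numpy as np

fs = 1000.0                        # sample rate, Hz (made-up value)
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 123.4 * t)  # tone that does not land on an FFT bin

# Rectangular (plain truncation) vs. Hann-windowed spectrum
spec_rect = np.abs(np.fft.rfft(x))
spec_hann = np.abs(np.fft.rfft(x * np.hanning(len(x))))

# Worst-case leakage far away (>= 50 bins) from the tone's bin,
# relative to each spectrum's own peak
bin_tone = int(round(123.4 * len(x) / fs))
far = np.r_[0:bin_tone - 50, bin_tone + 50:len(spec_rect)]
leak_rect = spec_rect[far].max() / spec_rect.max()
leak_hann = spec_hann[far].max() / spec_hann.max()
print(leak_rect, leak_hann)  # Hann leakage is orders of magnitude lower
```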
For a mathematical analysis see https://cnx.org/contents/4jyGq_c3#6/Spectrum-Analysis-Using-the-Di
If you want a practical hands-on experience of what windowing does, try https://cnx.org/contents/CJ3fYEow#2/Spectrum-Analyzer-MATLAB-Exerc

When(why) to use step/pulse/ramp functions in simulink?

Hello guys, I'd like to know the answer to the question in the title.
For example, if I have a physical system described by differential equation(s), how do I know whether I should use a step, pulse, or ramp generator?
What exactly does each one do?
Thank you for your answers.
They are mostly remnants of classical control theory. The main reason they are so popular is their simple Laplace transforms: 1, 1/s, and 1/s^2. You can multiply these with the plant's transfer function to get the Laplace transform of the output.
Back in the day, all you had were partial fraction expansion and Laplace transform tables to get an idea of what the response would look like. Today you can simulate basically whatever input you like, so strictly speaking they are not really needed, which is the answer to your question.
But since people used these signals so often, they have spotted certain properties. For example, the step response is good for assessing the transients and the steady-state tracking error, the ramp response is good for assessing (reference) following error (which motivates double integrators), and so on. Hence, some consider these signals the characteristic test inputs, though that is far from the whole truth. In particular, keep in mind that just because these responses look OK, the system is not necessarily stable.
However, keep in mind that these are fairly primitive ways of assessing a system. Currently they are taught because they are good for setting homework and making people acquainted with Simulink, etc.
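To make the step/ramp distinction concrete outside Simulink, here is a hedged sketch with scipy.signal using a hypothetical first-order plant G(s) = 1/(s + 1); the plant, time grid, and signals are invented for illustration:

```python
import numpy as np
from scipy import signal

# Hypothetical first-order plant G(s) = 1/(s + 1)
sys = signal.lti([1.0], [1.0, 1.0])

t = np.linspace(0, 10, 500)

# Step response: settles to the DC gain, here 1.0
t_step, y_step = signal.step(sys, T=t)

# Ramp response via lsim with u(t) = t: a plain first-order system
# follows the ramp with a constant steady-state lag of 1.0
t_ramp, y_ramp, _ = signal.lsim(sys, U=t, T=t)

print(y_step[-1])          # ~1.0 (steady-state step value)
print(t[-1] - y_ramp[-1])  # ~1.0 (steady-state ramp-following error)
```

This is exactly the kind of property the answer above mentions: the same system that tracks a step perfectly shows a persistent error against a ramp.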
They are used to determine system characteristics. If you are studying a system of differential equations, you would want to know different characteristics from the system's response to these kinds of inputs, since these inputs are the fundamental ones. For example, a system whose output blows up for a pulse input is unstable, and you would not want such a system in real life (except in rare situations). It's too difficult to explain it all in one answer; you should start with this wiki page.

Why isn't there a simple function to reduce background noise of an audio signal in Matlab?

Is this because it's a complex problem? I mean, too wide a problem for a simple/generic solution to exist?
I ask because almost every piece of signal-processing software (Avisoft, GoldWave, Audacity…) has a function that reduces the background noise of a signal, usually FFT-based. But I can't find an already-implemented function in Matlab that does the same. Is the right way to implement it manually, then?
Thanks.
The common audio noise reduction approaches built into things like Audacity are based on spectral subtraction, which estimates the level of steady background noise in the Fourier-transform magnitude domain, then removes that much energy from every frame, leaving energy only where the signal "pokes above" this noise floor.
You can find many implementations of spectral subtraction for Matlab; this one is highly rated on Matlab File Exchange:
http://www.mathworks.com/matlabcentral/fileexchange/7675-boll-spectral-subtraction
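To make the idea concrete, here is a deliberately simplified magnitude spectral-subtraction sketch in Python. This is not Boll's actual algorithm: it uses non-overlapping frames, no windowing, and it assumes you have a noise-only excerpt to estimate the noise floor from. All the signals and parameters below are invented for the demo:

```python
import numpy as np

def spectral_subtract(x, noise, frame=256):
    """Very simplified magnitude spectral subtraction (illustrative only)."""
    # Average noise magnitude spectrum, estimated from noise-only frames
    n_frames = noise[: len(noise) // frame * frame].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(n_frames, axis=1)).mean(axis=0)

    out = np.zeros(len(x) // frame * frame)
    for i in range(0, len(out), frame):
        spec = np.fft.rfft(x[i : i + frame])
        # Subtract the noise floor from the magnitude, flooring at zero
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        # Keep the noisy phase; rebuild the frame from the cleaned magnitude
        out[i : i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
    return out

# Tiny demo: a tone buried in white noise, with a noise-only recording
rng = np.random.default_rng(0)
t = np.arange(4096)
clean = np.sin(2 * np.pi * 0.05 * t)
noise_rec = 0.3 * rng.standard_normal(4096)   # noise-only excerpt
noisy = clean + 0.3 * rng.standard_normal(4096)
cleaned = spectral_subtract(noisy, noise_rec)
```

Real implementations (like the Boll one linked above) add overlapping windowed frames, over-subtraction factors, and smoothing to avoid the "musical noise" artifacts this naive version produces.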
The question is, what kind of noise reduction are you looking for? There is no one solution that fits all needs. Here are a few approaches:
Low-pass filtering the signal reduces noise but also removes the high-frequency components of the signal. For some applications this is perfectly acceptable. There are lots of low-pass filter functions and Matlab helps you apply plenty of them. Some knowledge of how digital filters work is required. I'm not going into it here; if you want more details consider asking a more focused question.
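For instance, here is a low-pass sketch in Python/scipy for concreteness (the sample rate, cutoff, and test tones are arbitrary illustration values; the Matlab workflow with butter/filtfilt is analogous):

```python
import numpy as np
from scipy import signal

fs = 8000.0  # assumed sample rate, Hz

# 4th-order Butterworth low-pass with a 1 kHz cutoff
b, a = signal.butter(4, 1000.0, btype="low", fs=fs)

t = np.arange(8000) / fs
# A 200 Hz "signal" plus a 3 kHz "noise" component
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# filtfilt runs the filter forward and backward: zero phase distortion
y = signal.filtfilt(b, a, x)
```

After filtering, the 3 kHz component is strongly attenuated while the 200 Hz component passes essentially unchanged.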
An approach suitable for many situations is using a noise gate: simply attenuate the signal whenever its RMS level goes below a certain threshold, for instance. In other words, this kills quiet parts of the audio dead. You'll retain the noise in the more active parts of the signal, though, and if you have a lot of dynamics in the actual signal you'll get rid of some signal, too. This tends to work well for, say, slightly noisy speech samples, but not so well for very noisy recordings of classical music. I don't know whether Matlab has a function for this.
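A minimal hard gate along these lines is only a few lines of code. This Python sketch (frame size and threshold are arbitrary, and there is no attack/release smoothing, which a real gate would need) shows the idea:

```python
import numpy as np

def noise_gate(x, frame=512, threshold=0.05):
    """Hypothetical minimal noise gate: zero out every frame whose
    RMS level is below `threshold` (hard gate, no smoothing)."""
    y = x.copy()
    for i in range(0, len(x), frame):
        chunk = x[i : i + frame]
        if np.sqrt(np.mean(chunk ** 2)) < threshold:
            y[i : i + frame] = 0.0
    return y

# Demo: quiet hiss with one louder "note" in the middle
rng = np.random.default_rng(1)
sig = 0.01 * rng.standard_normal(4096)                         # background hiss
sig[1024:2048] += np.sin(2 * np.pi * 0.02 * np.arange(1024))   # the "note"
gated = noise_gate(sig)
```

The quiet sections come out silenced while the loud section is kept, hiss and all, which is exactly the limitation described above.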
Some approaches involve making a "fingerprint" of the noise and then removing that throughout the signal. It tends to make the result sound strange, though, and in any case this is probably sufficiently complex and domain-specific that it belongs in an audio-specific tool and not in a rather general math/DSP system.
Reducing noise requires making some assumptions about the type of noise and the type of signal, and how they are different. Audio processors typically assume (correctly or incorrectly) something like that the audio is speech or music, and that the noise is typical recording session background hiss, A/C power hum, or vinyl record pops.
Matlab is for general use (microwave radio, data comm, subsonic earthquakes, heartbeats, etc.), and thus can make no such assumptions.
Matlab is not exactly an audio processor. You have to implement your own filter, and you will have to design it correctly, according to what you want.

How can i find a sound intensity by using Matlab?

I'm looking for some functions in MATLAB to find out some parameters of a sound, such as intensity, density, frequency, time, and spectral identity.
I know how to use 'audiorecorder' as a function to record sampled voice, and also 'getaudio' to plot it. But I need to obtain the parameters of a sampled recorded voice that I mentioned above. I'd be so thankful if anyone could help me.
This is a very vague question; you may want to narrow it down (at first) and add as much contextual detail as you can. It will certainly attract a lot more answers (also, as mentioned by Ion, you could post it at http://dsp.stackexchange.com).
Sound intensity: microphones usually measure pressure, but you can derive the intensity from that quite easily (see this question). Your main problem is that microphones are usually not calibrated, which means you cannot associate an amplitude with a pressure. You can get sound density from sound intensity.
Frequency: you can get the spectrum of your sound by using the Fast Fourier Transform (see the Matlab function fft).
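As a concrete example of the FFT route (shown in Python/numpy here rather than Matlab; the sample rate and 440 Hz test tone are invented), finding the dominant frequency of a recording is a few lines:

```python
import numpy as np

fs = 44100.0  # assumed sample rate, Hz
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 440.0 * t)  # stand-in for a recorded signal

# Magnitude spectrum and the frequency axis that goes with it
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
dominant = freqs[np.argmax(spec)]
print(dominant)  # close to 440 Hz, within one FFT bin (~10.8 Hz here)
```

The resolution is fs/N per bin, so longer recordings (or interpolation around the peak) give a finer frequency estimate.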
As for spectral or time identity, I believe these are psychoacoustics notions, which is not really my area of expertise.
I'm no expert but I have played with Matlab a little in the past.
One function I remember is wavread() (since replaced by audioread in newer Matlab versions) to read a sound file into Matlab, which if executed in the form [Y, FS, NBITS] = wavread('audio.wav') would return something like:
AUDIO.WAV:
Fs = 100 kHz
Bits per sample = 10
Size = 100000
(numbers from the top of my head)
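For comparison, a rough scipy counterpart of that call is scipy.io.wavfile.read. The sketch below writes its own small WAV first so it is self-contained; the file name, sample rate, and tone are all made up:

```python
import numpy as np
from scipy.io import wavfile

# Write a small 16-bit test file so the example is self-contained
fs = 8000
data = (0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs) * 32767).astype(np.int16)
wavfile.write("audio_demo.wav", fs, data)

# Roughly equivalent to Matlab's [Y, FS] = audioread('audio_demo.wav'),
# except the samples come back as raw int16 rather than normalized floats
rate, y = wavfile.read("audio_demo.wav")
print(rate, y.shape, y.dtype)  # 8000 (8000,) int16
```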
Now about the other things you ask, I'm not really sure. You can expect a better answer from somebody else. I think this question should be moved to Signal Processing SE btw.

Peak detection in Performous code

I was looking to implement voice pitch detection on iPhone using the HPS (Harmonic Product Spectrum) method, but the detected tones are not very accurate. Performous does a decent job of pitch detection.
I looked through the code but did not fully get the theory behind the calculations.
They use an FFT and find the peaks, but the part where they use the phase of the FFT output got me confused. I figure they use some heuristics for voice frequencies.
So, could anyone please explain the algorithm used in Performous to detect pitch?
Performous extracts pitch from the microphone, and the code is open source. Here is a description of what the algorithm does, from the guy who coded it (Tronic on irc.freenode.net#performous).
PCM input (with buffering)
FFT (1024 samples at a time, remove 200 samples from front of the buffer afterwards)
Reassignment method (against the previous FFT that was 200 samples earlier)
Filtering of peaks (this part could be done much better or even left out)
Combining peaks into sets of harmonics (we call the combination a tone)
Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
Pick the best vocal tone (frequency limits, weighting, could use the harmonic array also but I don't think we do)
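For reference, here is a minimal sketch of the HPS method the question itself mentions, in Python. Note this is not what the steps above describe (Performous uses FFT phase reassignment, not plain HPS), and the harmonic amplitudes, sample rate, and 50 Hz floor below are invented:

```python
import numpy as np

def hps_pitch(x, fs, n_harmonics=4):
    """Minimal Harmonic Product Spectrum pitch estimate (illustrative)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    hps = spec.copy()
    for h in range(2, n_harmonics + 1):
        # Multiply by the spectrum downsampled by h: for the true
        # fundamental, all harmonics line up and reinforce the product
        hps[: len(spec) // h] *= spec[::h][: len(spec) // h]
    # Skip the lowest bins to avoid DC / very-low-frequency artifacts
    lo = int(50 * len(x) / fs)  # ignore everything below ~50 Hz
    peak = lo + np.argmax(hps[lo : len(spec) // n_harmonics])
    return peak * fs / len(x)

# Demo: a synthetic "voice" with harmonics at 220, 440, 660, 880 Hz
fs = 8000
t = np.arange(4096) / fs
x = sum(a * np.sin(2 * np.pi * 220 * (k + 1) * t)
        for k, a in enumerate([1.0, 0.6, 0.4, 0.2]))
print(hps_pitch(x, fs))  # close to 220 Hz
```

Plain HPS like this is prone to octave errors on real voices, which is presumably part of why Performous layers peak filtering and temporal tone tracking on top of its frequency estimates.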
I still wasn't able to figure it out and implement it from this information. If anyone manages it, please post your results here and comment on this response so that SO notifies me.
The task would be to create a minimal C++ wrapper around this code.