I have two wave files: one is the normal version and the other is a distorted version. In the distorted version I hear a long beep-like sound. Here are the frequency-domain plots of the normal and distorted sounds; the first is the normal one, the second the distorted one. Notice the scales.
How do I go about this?
It's a bit hard to tell without using a marker or zooming in, but it seems you have a sinusoid inserted into your signal, which would explain the continuous beep you hear and the delta-like spike you have in the spectrum. Try to locate the noise frequency using the marker and filter it out using the filter design tool (type "fdatool" on the command line). I would go for a notch filter at the frequency of the noise, and if that doesn't work, a high-order (~1000) high-pass FIR filter.
Good luck
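By way of illustration, here is a minimal MATLAB sketch of that idea: locate the strongest spectral peak (assuming the beep really is the dominant component) and notch it out with a simple pole/zero notch, so fdatool isn't even required. The file and variable names are just placeholders.

[x, Fs] = wavread('distorted.wav');   % placeholder file name; use audioread on newer releases
x = x(:,1);                           % keep one channel for simplicity

N = length(x);
X = fft(x);
f = (0:N-1) * Fs / N;                 % frequency axis of the FFT bins

[~, k] = max(abs(X(2:floor(N/2))));   % strongest non-DC peak in the first half
f0 = f(k + 1);                        % estimated beep frequency in Hz

% Second-order IIR notch by direct pole/zero placement (no iirnotch required).
r  = 0.98;                            % pole radius: closer to 1 gives a narrower notch
w0 = 2*pi*f0/Fs;
b  = [1, -2*cos(w0), 1];
a  = [1, -2*r*cos(w0), r^2];
y  = filter(b, a, x);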
Since you have the signal in the frequency domain, you can also remove the noise there (with a simple threshold), then take the inverse Fourier transform to get the noise-free signal back in the time domain.
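As a rough sketch of that frequency-domain approach (the threshold below is only a guess and will also clip any legitimately strong components, so tune it against your own spectrum; x and Fs are placeholders for the distorted signal and its sample rate):

X   = fft(x);                   % x: the distorted signal
mag = abs(X);
thresh = 10 * median(mag);      % crude threshold; adjust by inspecting the plot
X(mag > thresh) = 0;            % zeroing by magnitude hits both a bin and its mirror
y = real(ifft(X));              % back to the time domain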
I'm hoping someone will be able to tell me why no filtering is helping in my application.
I have a MEMS microphone monitoring the pressure of a small chamber, which has a membrane stretched over the far end. This device is placed on a human muscle and when I flex said muscle the membrane is disturbed, producing a pressure difference in the chamber, which the microphone picks up. Therefore, by flexing a muscle I can see nice spikes of activity. However, this method is very susceptible to noise, both motion artefacts and other undesirable artefacts.
The muscle activity I'm interested in is above 10 Hz and below 100 Hz, so I'm trying to band-pass (or at the very least, high-pass) out the noise. If I tap the device, or if I have the device on my upper forearm and tap my wrist, I understand that this should be very low-frequency noise, somewhere in the region of 1-2 Hz, but I can't get rid of it!
I'm using MATLAB to process the data. Generally I sample this microphone at 1 kHz, but I currently have it hooked up to a DAQ at a 5 kHz sampling rate. I desperately want to get rid of this low-frequency noise, but nothing I try seems to make any difference; it's very hard to see what the filter is doing at all. It's definitely attenuating the signal, but not getting rid of the noise I want it to. I don't expect perfect results, but certainly better than what I'm seeing.
I've used lots of methods to create filters in MATLAB (manually and with fdatool), along with different types of filters (Butterworth, Chebyshev, elliptic), none of which helped. I'm worried that my desired frequency of 10 Hz is perhaps too close to the noise I'm trying to filter out, so the filter can't attenuate the noise enough.
Any ideas, code samples, or recommendations would be very helpful.
Tapping or percussive sounds are broad spectrum, producing frequency content well above the repeat rate of 1 Hz or so. So any linear band pass or high pass filter will not be able to completely remove this broad spectrum noise.
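As noted, no linear filter will remove broadband tap noise, but if you still want the 10-100 Hz band-pass, here is a sketch of a numerically safer design at Fs = 5 kHz. At such low normalized frequencies the transfer-function coefficients from butter can be ill-conditioned, so designing in zero-pole-gain form and filtering with second-order sections may help (variable names are placeholders):

Fs = 5000;
[z, p, k] = butter(4, [10 100] / (Fs/2), 'bandpass');
[sos, g]  = zp2sos(z, p, k);      % second-order sections are better conditioned
y = g * sosfilt(sos, x);          % x: raw microphone samples
% fvtool(sos)                     % inspect the response before trusting the output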
My project:
I'm developing a slot car with a 3-axis accelerometer and gyroscope, trying to estimate the car's pose (x, y, z, yaw, pitch), but I have a big problem with vibration noise (while the car is running, the gears induce vibration and the track makes it worse): the noise takes values between ±4 g (where g = 9.81 m/s^2) on the accelerometers, for example.
I know (because I can observe it) that the noise is correlated across all of my sensors.
In my first attempt, I tried to work it out with a Kalman filter, but it didn't work because the values of my state vector were extremely noisy.
EDIT2: In my second attempt I tried a low-pass filter before the Kalman filter, but it only slowed down my system and didn't remove the low-frequency components of the noise. At this point I realized the noise might be composed of both low- and high-frequency components.
I was learning about adaptive filters (LMS and RLS), but I realized I don't have a reference noise signal, and if I use one accelerometer axis to filter another axis' signal I don't get absolute values, so it doesn't work.
EDIT: I'm having problems trying to find some example code for adaptive filters. If anyone knows about something similar, I will be very thankful.
Here is my question:
Does anyone know about a filter or have any idea about how I could fix it and filter my signals correctly?
Thank you so much in advance,
XNor
PS: I apologize for any mistakes; English is not my mother tongue.
The first thing I would do is run a DFT on the sensor signal and see whether there actually are both high- and low-frequency components in your accelerometer signals.
From the DFT you should be able to determine an appropriate cutoff frequency for your low-pass/band-pass filter.
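Something like this minimal sketch would do for that inspection (az is one accelerometer axis and Fs its sampling rate; both names are placeholders):

N   = length(az);
AZ  = fft(az - mean(az));                    % remove the DC/gravity offset first
f   = (0:N-1) * Fs / N;
mag = abs(AZ) / N;
plot(f(1:floor(N/2)), mag(1:floor(N/2)));    % one-sided magnitude spectrum
xlabel('Frequency (Hz)'); ylabel('|AZ|');
% Look at where the vibration energy sits relative to the car dynamics you care
% about; that suggests the cutoff for a low-pass or band-stop filter.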
If you have a constant component on the Z axis, there is a chance that you haven't filtered out gravity. Note that if there is significant pitch or roll, this constant can show up on your X and Y axes as well.
Generally, pose estimation with an accelerometer is not a good idea, as you need to integrate the acceleration signals twice to get a pose. If the signal is noisy you are going to be in trouble after only a couple of seconds unless the noise is distributed 100% evenly between + and -.
Even if we assume there is no noise coming from your gears, the conversion accuracy of the accelerometer alone might start to corrupt your pose after a couple of minutes.
I would definitely use a second sensor, e.g. a compass/encoder, in combination with your mathematical model, and combine all your sensor data in a Kalman filter (sensor fusion).
You might also be able to derive a black-box model of your noise by assuming it is correlated with your motor's RPM (Box-Jenkins/ARMA/ARIMA).
I had similar problems with noise at both low and high frequencies, and I managed to remove it fairly well, without removing good signal too, by using a universal microphone shock mount. It does a good job with a gyroscope too, especially if you find one that fits it (or you can put the sensor in a small case and then mount that).
It basically uses elastic strings to remove shocks and vibration.
Have you tried a simple low-pass filter on the data? I'd guess that the vibration frequency is much higher than the frequencies in normal car acceleration data. At least in normal driving. Crashes might be another story...
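For example, a minimal low-pass sketch along those lines (the cutoff is only a guess; pick it from the spectrum, and note that filtfilt is zero-phase but offline only, so use filter for real-time processing):

fc = 20;                              % assumed cutoff in Hz
[b, a]  = butter(2, fc / (Fs/2));     % Fs: accelerometer sampling rate
ax_filt = filtfilt(b, a, ax);         % ax: one raw accelerometer axis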
So basically, my problem is that I have a speech signal in .wav format that is corrupted by a harmonic noise source at some frequency. My goal is to identify the frequency at which this noise occurs, and use a notch filter to remove said noise. So far, I have read the speech signal into matlab using:
[data, Fs] = wavread('signal.wav');
My question is how can I identify the frequency at which the harmonic noise is occurring, and once I've done that, how can I go about implementing a notch filter at that frequency?
NOTE: I do not have access to the iirnotch() command or fdesign.notch() due to the version of MATLAB I am currently using (2010).
The general procedure would be to analyse the spectrum, identify the frequency in question, then design a filter around that frequency. For most real applications it's all a bit woolly: the frequencies move around and there's no easy way to distinguish noise from signal, so you have to use clever techniques and a bit of guesswork. However, if you know you have a single-tone corruption then, yes, an FFT and a notch filter will probably do the trick.
You can analyse the signal with fft and design a filter with, among others, fir1, which I believe is part of the signal processing toolbox. If you don't have the signal processing toolbox you can do it 'by hand', as in transform to the frequency domain, remove the frequency(ies) you don't want (by zeroing the relevant elements of the frequency vector) and transform back to time domain. There's a tutorial on exactly that here.
The fft and fir1 functions are well documented: search the MathWorks site for code examples to get you up and running.
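A rough sketch of the whole procedure, reusing the data and Fs already read in the question (the notch width and filter order are placeholders to tune; a narrow stop-band needs a high FIR order):

N = length(data);
X = abs(fft(data));
f = (0:N-1) * Fs / N;

[~, k] = max(X(2:floor(N/2)));        % strongest non-DC component
f0 = f(k + 1);                        % suspected noise frequency in Hz

bw = 50;                                                    % stop-band width in Hz
b  = fir1(1000, [f0 - bw/2, f0 + bw/2] / (Fs/2), 'stop');   % linear-phase band-stop FIR
y  = filter(b, 1, data);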
To add to/amend xenoclast's answer, filtering in the frequency domain may or may not work for you. There are many thorny issues with filtering in the frequency domain, some of which are covered here: http://blog.bjornroche.com/2012/08/why-eq-is-done-in-time-domain.html
One additional issue is that if you try to process your entire file at once, the "width" or Q of the filters will depend on the length of your file. This might work out for you, or it might not. If you have many files of different lengths, don't expect similar results this way.
To design your own IIR notch filter, you could use the RBJ audio filter cookbook. If you need help, I wrote up a tutorial here:
http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
My tutorial uses a bell/peaking filter, but it's easy to follow that and then replace it with a notch filter from RBJ.
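For reference, a minimal MATLAB sketch of an RBJ cookbook notch biquad, using only plain arithmetic and filter (f0 and Q are placeholders; the coefficient formulas follow the cookbook, not the tutorial's exact code):

f0 = 1000;                        % notch centre frequency in Hz (identify it first)
Q  = 30;                          % larger Q gives a narrower notch
w0    = 2*pi*f0/Fs;               % Fs from the wavread call in the question
alpha = sin(w0) / (2*Q);

b = [1, -2*cos(w0), 1];
a = [1 + alpha, -2*cos(w0), 1 - alpha];
b = b / a(1);  a = a / a(1);      % normalise so a(1) == 1

y = filter(b, a, data);           % data from the wavread call in the question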
One final note: assuming this is actually an audio signal in your .wav file, you can also use your ears to find and fix the problem frequencies:
Open the file in an audio editing program that lets you adjust filter settings in real time (I am not sure if Audacity lets you do this, but probably).
Use a "boost" or "parametric" filter set to a high gain and sweep the frequency setting until you hear the noise accentuated the most.
Replace the boost filter with a notch filter at the same frequency. You may need to tweak the width to trade off noise elimination against signal preservation.
Repeat as needed (due to the many harmonics).
Save the resulting file.
Of course, some audio editing apps have built-in harmonic noise reduction features that work especially well for 50/60 Hz noise.
I have FFT outputs that look like this:
At 523 Hz is the maximum value. However, being a messy FFT, there are lots of little peaks right near the large peaks, and they're irrelevant, whereas the peaks shown aren't. Are there any algorithms I can use to extract the maxima of this FFT that matter, i.e., that aren't just random peaks cropping up near the "real" peaks? Perhaps there is some sort of filter I can apply to this FFT output?
EDIT: The context of this is that I am trying to take one-hit sound samples (like someone pressing a key on a piano) and extract the loudest partials. In the image below, the peaks above 2000 Hz are important, because they are discrete partials of the given sound (which happens to be a sort of bell). However, the peaks that are scattered about right near 523 seem to be just artifacts, and I want to ignore them.
If the peak is broad, it could indicate that the peak frequency is modulated (AM, FM or both), or is actually a composite of several spectral peaks, themselves each potentially modulated.
For instance, a piano note may be the result of the hammer hitting up to three strings that are all tuned just a tiny fraction differently, and they can all modulate as they exchange energy through the piano frame. Guitar strings can change frequency as the pluck-shape distortion smooths out and decays. Bells change shape after they are hit, which can modulate their spectrum. Etc.
If the sound itself is "messy" then you need a good definition of what you mean by the "real" peak, before applying any sort of smoothing or side-band rejection filter. e.g. All that "messiness" may be part of what makes a bell sound like a real bell instead of an electronic sinewave generator.
Try convolving your FFT (treating it as a signal) with a rectangular pulse ( pulse = ones(1,20)/20; ). This might get rid of some of them. Your maxima will be shifted 10 frequency bins to the right; take that into account. You would basically be integrating your signal. Similar techniques are used in the Pan-Tompkins algorithm for heart-beat identification.
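A small sketch of that idea (x is the time-domain signal; if you already have the FFT magnitude, start from that instead):

mag   = abs(fft(x(:)));
pulse = ones(20, 1) / 20;               % 20-bin rectangular (moving-average) window
sm    = conv(mag, pulse);               % plain conv: peaks shift ~10 bins to the right
sm2   = conv(mag, pulse, 'same');       % alternative: same length, roughly centred
[pk, bin] = max(sm2(1:floor(end/2)));   % strongest smoothed peak in the first half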
I worked on a similar problem once and chose to use Savitzky-Golay filters to smooth the spectrum data. I could pick out some significant peaks, and it didn't mess too much with the overall spectrum.
But I did have the problem hotpaw2 is alerting you to: I lost important characteristics along with the loss of "messiness", so I really recommend you listen to him. But if you think that won't be a problem for you, I think Savitzky-Golay can help.
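A minimal sketch of that smoothing (sgolayfilt is in the Signal Processing Toolbox; the polynomial order and frame length are guesses to tune for your data):

mag_smooth = sgolayfilt(abs(fft(x(:))), 3, 41);   % cubic fit over 41-bin frames
[pk, bin]  = max(mag_smooth(1:floor(end/2)));     % strongest smoothed peak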
There are non-FFT methods for creating a frequency-domain representation of time-domain data which are better for noisy data sets, like maximum-entropy (max-ent) reconstruction.
For noisy time-series data, a max-ent reconstruction will be able to distinguish true peaks from noise very effectively (without adding any artifacts or other modifications to suppress noise).
Max-ent works by "guessing" a frequency-domain spectrum, doing an inverse transform, and comparing the result with the actual time-series data, iteratively. The final output of max-ent is a frequency-domain spectrum (like the one you show above).
There are implementations in Java, I believe, for 1-D spectra, but I have never used one.
I'm trying to write a simple tuner (no, not to make yet another tuner app), and am looking at the AurioTouch sample source (has anyone tried to comment this code??).
My worry is that aurioTouch doesn't seem to actually work very well when looking at the frequency-domain graph. I play a single note on an instrument and I don't see a nicely ordered, small set of frequencies with one strong peak at the appropriate frequency of the note.
Has anyone used aurioTouch enough to know whether the underlying code is functional or whether it is just a crude sample?
Other options I have are to use FFTW or KISS FFT. Anyone have any experience with those?
Thanks.
You're expecting the wrong thing!!
Not the library's fault
Whether the library produces it properly or not, you're looking for a pattern that rarely actually exists in real-life sounds. Only a perfect sine wave, electronically generated, will produce even a somewhat discrete-looking 'spike' in the frequency graph. If you don't believe it, try firing up a 'spectrum analyzer' visualization in Winamp or Media Player. It's not so much the PC's fault.
Real sound waves are complicated animals
Picture a sawtooth or square wave in your mind's eye. Those sharp turnarounds - corners or points on the wave - look like tons of higher harmonics to the FFT, or even to a true Fourier transform. And if you've ever seen a real 'square wave/sawtooth' on a scope, or even a 'sine wave' produced by an instrument that is supposed to produce a sine wave, take a look at all the sharp nooks and crannies in just ONE note (if you don't have a scope, just zoom way in on the wave in Audacity - the more you zoom, the higher the notes you're looking at). Yep, those deviations all count as frequencies.
It's hard to tell the difference between one note and a whole orchestra sometimes in a spectrum analysis.
But I hear single notes!
So how does the ear do it? It considers the entire waveform. Then your lower brain lies to your upper brain about what the input is: one note, not a mess of overtones.
You can't do it as fully, but you can approximate it via 'training.'
Approximation: building some smarts
PLAY the note on the instrument and 'save' the frequency graph. Do this for notes in several frequency ranges, or better yet all notes.
Then interpolate the notes to fill in gaps (by 1/2 or 1/4 steps) by multiplying the saved graphs for that instrument by 2^(1/12) (or 2^(1/24) for 1/4 steps, etc.).
Figure out how to store them in a quickly-searchable data structure like a BST or trie. Only it would have to return a 'how close is this' score. It would have to identify the match via proportions of frequencies as well, in case it came in different volumes.
Using the smarts
Next time you're looking for a note from that instrument, just take the 'heard' frequency graph and find it in that data structure. You can record several instruments that make different waveforms and search for them too. If there are background sounds or multiple notes, take the closest match. Then if you want to identify other notes, 'subtract' the found frequency pattern from the sampled one, and lather, rinse, repeat.
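A brute-force sketch of that matching step (simple cosine similarity over stored magnitude spectra instead of a BST/trie; all names here are illustrative, not from any library):

% templates: an N-by-K matrix whose columns are saved magnitude spectra
% heard:     the new magnitude spectrum to identify
heard = heard(:) / norm(heard);                 % normalise so volume doesn't matter
best_score = -Inf; best_k = 0;
for k = 1:size(templates, 2)
    t = templates(:, k) / norm(templates(:, k));
    score = heard' * t;                         % cosine similarity, in [0, 1] here
    if score > best_score
        best_score = score;
        best_k = k;
    end
end
% best_k indexes the closest stored note; subtract its spectrum from heard and
% repeat to look for additional notes, as described above.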
It won't work with your voice...
If you ever tried to tune yourself by singing into a guitar tuner, you'll know that tuners aren't that clever. Of course some instruments (voice especially) really float around the pitch and generate an ever-evolving waveform (even without somebody singing).
What are you trying to accomplish?
You would not have to get quite this fancy for a 'simple' tuner app, but if you're not making just another tuner app, then I'm guessing you actually want to identify notes (e.g., maybe you want to autogenerate MIDI files from songs on the radio ;-)
Good luck. I hope you find a library that does all this junk instead of having to roll your own.
Edit 2017
Note this webpage: http://www.feilding.net/sfuad/musi3012-01/html/lectures/015_instruments_II.htm
Well down the page, there are spectrum analyses of various organ pipes. There are many, many overtones. These are possible to detect - with enough work - if you 'train' your app with them first (just like telling a kid, 'this is what a clarinet sounds like...')
aurioTouch looks weird because the frequency axis is on a linear scale. It's very difficult to interpret FFT output when the x-axis is anything other than a logarithmic scale (traditionally log2).
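To illustrate the point in MATLAB terms (x and Fs are placeholders), the same spectrum is far easier to read on a logarithmic frequency axis:

N   = length(x);
f   = (0:N-1) * Fs / N;
mag = abs(fft(x)) / N;
semilogx(f(2:floor(N/2)), 20*log10(mag(2:floor(N/2))));   % skip the 0 Hz bin
xlabel('Frequency (Hz, log scale)'); ylabel('Magnitude (dB)');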
If you can't use aurioTouch's integer-FFT, check out my library:
http://github.com/alexbw/iPhoneFFT
It uses double-precision, has support for multiple window types, and implements Welch's method (which should give you more stable spectra when viewed over time).
@zaph, the FFT does compute a true Discrete Fourier Transform. It is simply an efficient algorithm that exploits the structure of the DFT by splitting it into smaller transforms; it is not an approximation.
FFTs use frequency bins, and the bin width depends on the FFT parameters (the sample rate divided by the FFT length). To find a frequency you will need to record the signal sampled at a rate at least twice the highest frequency present, then find the time between the cycles. If it is not a pure tone this will of course be harder.
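A quick sketch of that bin arithmetic (x and Fs are placeholders): the bin width is Fs/N, so a peak in 1-indexed bin k sits at roughly (k-1)*Fs/N Hz.

N = length(x);
X = abs(fft(x));
[~, k] = max(X(2:floor(N/2)));     % skip the DC bin
f_peak = k * Fs / N;               % the slice already offsets the index by one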
I am using the Ooura FFT to compute the FFT of accelerometer data. I do not always obtain the correct spectrum. For some reason, the Ooura FFT produces completely wrong results, with spectral magnitudes on the order of 10^200 across all frequencies.