I want to use data from a magnetometer to gain information about the motion of a metal object near it. After recording the data, I need to remove noise from the data before using it. What is a good method to remove noise? I read about filters in Matlab here but cannot decide which one to use. How can I decide which filter to use?
Edit:
The metal object moves at a steady rate and I want to find out the angle of its motion. I am adding a graph from my sample data which I want to filter. Sample Magnetometer data
I guess you're able to record the noise. If you can do that, you can also use some adaptive filtering.
From MathWorks' Overview of Adaptive Filters and Applications:
Block Diagram That Defines the Inputs and Output of a Generic RLS Adaptive Filter
You can use the recorded noise as the desired signal; your error signal should then sit around 0 when there is no motion nearby, and take on some filtered value when motion appears.
You can find an example of adaptive filtering on the MathWorks website:
Consider a pilot in an airplane. When the pilot speaks into a microphone, the engine noise in the cockpit combines with the voice signal. This additional noise makes the resultant signal heard by passengers of low quality. The goal is to obtain a signal that contains the pilot's voice, but not the engine noise. You can cancel the noise with an adaptive filter if you obtain a sample of the engine noise and apply it as the input to the adaptive filter.
Read more about adaptive filtering:
Overview: http://www.mathworks.com/help/dsp/ug/overview-of-adaptive-filters-and-applications.html
NN adaptive filters: http://www.mathworks.com/help/nnet/ug/adaptive-neural-network-filters.html
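For the magnetometer case, the idea above can be sketched with a small LMS noise canceller. This is a NumPy sketch rather than Matlab code (in Matlab you could reach for `dsp.LMSFilter` from the DSP System Toolbox); the filter order and step size are illustrative assumptions, not tuned values:

```python
import numpy as np

def lms_cancel(noise_ref, measured, order=16, mu=0.01):
    """Adaptive noise cancellation with LMS.
    noise_ref: reference recording of the noise alone.
    measured:  sensor signal = motion signal + correlated noise.
    Returns the error signal, which approximates the motion signal."""
    w = np.zeros(order)              # adaptive filter weights
    buf = np.zeros(order)            # most recent reference samples
    err = np.zeros(len(measured))
    for n in range(len(measured)):
        buf = np.roll(buf, 1)
        buf[0] = noise_ref[n]
        y = w @ buf                  # filter's estimate of the noise
        err[n] = measured[n] - y     # noise removed -> signal estimate
        w += 2 * mu * err[n] * buf   # LMS weight update
    return err
```

With no motion present, the error output converges toward zero, which matches the behaviour described above.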
Related
I need to implement an LMS-based adaptive audio-cancellation algorithm on the Simulink Desktop Real-Time toolbox.
The physical system is composed of a microphone recording a noise source and another microphone recording the residual noise after the control process (antinoise being injected by a speaker controlled by Simulink).
For the (adaptive) LMS algorithm to work properly I need to be able to work on a sample-by-sample basis, that is at each sampled time instant I need to update the adaptive filter using the synchronised current sample value of both microphones. I realise some delay is inevitable but I was wondering whether it's possible on Simulink Desktop Real-Time to reduce the buffer size of the inputs to one sample and thus work on a sample-by-sample basis.
Thanks for your help in advance.
You can always implement the filter on a sample-by-sample basis.
But you still need a history of input values to perform the actual LMS calculation on; on a sample-by-sample basis this just means using a simple FIFO buffer.
If you have access to the DSP Toolbox then there is already an LMS Filter block that will do this for you.
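As a sketch of the FIFO idea, here is a per-sample LMS update in Python/NumPy (the order and step size are arbitrary illustrative choices). Each call to `step()` consumes one synchronised pair of samples, as you would inside the real-time loop:

```python
import numpy as np
from collections import deque

class SampleLMS:
    """LMS adaptive filter updated one synchronised sample pair at a time."""
    def __init__(self, order=32, mu=0.005):
        self.w = np.zeros(order)
        self.mu = mu
        self.buf = deque([0.0] * order, maxlen=order)  # FIFO of reference samples

    def step(self, x, d):
        """x: reference (noise) sample, d: primary (mic) sample.
        Returns the error sample, i.e. the cleaned output."""
        self.buf.appendleft(x)           # newest sample at the front
        u = np.asarray(self.buf)
        y = self.w @ u                   # filter's estimate of the noise in d
        e = d - y
        self.w += 2 * self.mu * e * u    # LMS weight update
        return e
```

The DSP Toolbox LMS Filter block mentioned above does essentially this internally, with the buffering handled for you.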
My colleague and I are developing a sound and speech processing module on an Analog Devices DSP. Because of the proximity of our single microphone and speaker, we have been experiencing some significant echo. We want to implement an NLMS-based algorithm to reduce this echo.
I first wanted to implement and test the algorithm in Matlab, but I am still having some issues. I think I might have a theoretical issue in my algorithm: I have a hard time understanding what the "desired signal" would be, since I don't have access to an uncorrupted signal.
Here is an overview of my naive way to implement this in Matlab.
Simulink diagram here
Link to Simulink code (.slx)
Right now the code won't compile because of an "algebraic loop error" in Simulink, but I have a feeling there is more to this problem.
Any help would be appreciated.
The model you have is not fully correct. For acoustic echo cancellation you are using the adaptive filter to model the room: you identify the room characteristics with the adaptive filter. Once you do this, you can use the adaptive filter to estimate the part of the far-end signal from the loudspeaker that leaks back into the microphone, and subtract that from the microphone signal to remove the echo.
For your adaptive filter, the input should be the far-end signal, i.e. the signal going to the loudspeaker in the room. Your desired signal is the signal coming out of the microphone in the room. The microphone signal contains the voices of the people in the room plus a portion of the sound from the loudspeaker, which is the echo.
Sound from far end ----------> | In                Out | --> (you can ignore this)
                               |    Adaptive Filter    |
Sound from local microphone -> | Desired         Error | --> echo-free signal
In this model, the Error output of the adaptive filter is your desired echo-free signal. This is because the error is computed by subtracting the adaptive filter's output from the desired signal, which is exactly what removes the echo.
To simulate this system in Simulink you need a filter to represent the room. You can use an ordinary FIR filter for this; you should be able to find room impulse responses online. These are usually long (~1000 taps), slowly decaying impulse responses. Your audio source can represent the signal from the loudspeaker. Feed the same audio signal into this room-response filter and you will get your desired signal. Feeding both into the adaptive filter will make the adaptive filter adapt to the room-response filter.
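A minimal offline sketch of this simulation in Python/NumPy (an NLMS update is used here, matching the question; the short toy room response and all constants are assumptions, not real room data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 64
far_end = rng.standard_normal(N)                              # signal sent to the loudspeaker
room = rng.standard_normal(L) * np.exp(-np.arange(L) / 12.0)  # toy decaying room impulse response
mic = np.convolve(far_end, room, mode="full")[:N]             # echo picked up by the microphone

# NLMS: input = far-end signal, desired = microphone signal
w = np.zeros(L)
buf = np.zeros(L)
e = np.zeros(N)
mu, eps = 0.5, 1e-8
for n in range(N):
    buf = np.roll(buf, 1)
    buf[0] = far_end[n]
    y = w @ buf                          # adaptive filter's echo estimate
    e[n] = mic[n] - y                    # error = echo-free output
    w += mu * e[n] * buf / (buf @ buf + eps)   # normalised LMS update
```

After convergence the adapted weights approximate the room response, and the error goes to zero because there is no near-end talker in this test.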
Is this because it's a complex problem? I mean, is it too broad for a simple/generic solution to exist?
Because almost every piece of signal-processing software (Avisoft, GoldWave, Audacity…) has a function that reduces the background noise of a signal, usually based on the FFT. But I can't find an already-implemented function in Matlab that does the same. Is the right way to do it manually, then?
Thanks.
The common audio noise reduction approaches built-in to things like Audacity are based around spectral subtraction, which estimates the level of steady background noise in the Fourier transform magnitude domain, then removes that much energy from every frame, leaving energy only where the signal "pokes above" this noise floor.
You can find many implementations of spectral subtraction for Matlab; this one is highly rated on Matlab File Exchange:
http://www.mathworks.com/matlabcentral/fileexchange/7675-boll-spectral-subtraction
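For reference, the core of Boll-style spectral subtraction fits in a few lines. This is a simplified NumPy sketch (the frame sizes and the assumption that the leading frames are noise-only are illustrative; the File Exchange implementation above is more complete):

```python
import numpy as np

def spectral_subtract(x, noise_frames=10, frame=256, hop=128):
    """Magnitude spectral subtraction, assuming the first `noise_frames`
    frames of x contain noise only."""
    win = np.hanning(frame)
    # estimate the noise magnitude spectrum from the leading frames
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(x[i * hop:i * hop + frame] * win))
         for i in range(noise_frames)], axis=0)
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame + 1, hop):
        X = np.fft.rfft(x[start:start + frame] * win)
        mag = np.maximum(np.abs(X) - noise_mag, 0.0)   # subtract the noise floor
        # keep the noisy phase, overlap-add the reconstruction
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(X)), frame)
    return out
```

Only energy that "pokes above" the estimated noise floor survives, which is exactly the behaviour described above.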
The question is, what kind of noise reduction are you looking for? There is no one solution that fits all needs. Here are a few approaches:
Low-pass filtering the signal reduces noise but also removes the high-frequency components of the signal. For some applications this is perfectly acceptable. There are lots of low-pass filter functions and Matlab helps you apply plenty of them. Some knowledge of how digital filters work is required. I'm not going into it here; if you want more details consider asking a more focused question.
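For example, a low-pass filter applied with zero phase (so the filtered data isn't time-shifted) might look like this. The sketch uses Python/SciPy; the Matlab equivalents are `butter` and `filtfilt`. The sample rate, cutoff, and signal are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                    # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)              # slow 5 Hz "signal"
x = clean + 0.3 * rng.standard_normal(len(t))  # plus broadband noise

b, a = butter(4, 50 / (fs / 2))   # 4th-order low-pass Butterworth, 50 Hz cutoff
y = filtfilt(b, a, x)             # zero-phase filtering avoids a time shift
```

The choice of cutoff is the whole game: everything above it, signal included, is attenuated.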
An approach suitable for many situations is using a noise gate: simply attenuate the signal whenever its RMS level goes below a certain threshold, for instance. In other words, this kills quiet parts of the audio dead. You'll retain the noise in the more active parts of the signal, though, and if you have a lot of dynamics in the actual signal you'll get rid of some signal, too. This tends to work well for, say, slightly noisy speech samples, but not so well for very noisy recordings of classical music. I don't know whether Matlab has a function for this.
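A noise gate is simple enough to write yourself. Here is a minimal sketch (the threshold and window length are arbitrary choices you would tune to your material):

```python
import numpy as np

def noise_gate(x, fs, threshold=0.02, win_ms=20):
    """Zero out windows whose RMS level falls below the threshold."""
    n = int(fs * win_ms / 1000)
    out = x.copy()
    for start in range(0, len(x), n):
        seg = x[start:start + n]
        if np.sqrt(np.mean(seg ** 2)) < threshold:
            out[start:start + n] = 0.0
    return out
```

Real implementations usually add attack/release smoothing so the gate doesn't click on and off; this hard version shows the core idea.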
Some approaches involve making a "fingerprint" of the noise and then removing that throughout the signal. It tends to make the result sound strange, though, and in any case this is probably sufficiently complex and domain-specific that it belongs in an audio-specific tool and not in a rather general math/DSP system.
Reducing noise requires making some assumptions about the type of noise and the type of signal, and how they are different. Audio processors typically assume (correctly or incorrectly) something like that the audio is speech or music, and that the noise is typical recording session background hiss, A/C power hum, or vinyl record pops.
Matlab is for general use (microwave radio, data comm, subsonic earthquakes, heartbeats, etc.), and thus can make no such assumptions.
Matlab is not exactly an audio processor; you have to implement your own filter, and you will have to design it correctly, according to what you want.
So basically, my problem is that I have a speech signal in .wav format that is corrupted by a harmonic noise source at some frequency. My goal is to identify the frequency at which this noise occurs, and use a notch filter to remove said noise. So far, I have read the speech signal into matlab using:
[data, Fs] = wavread('signal.wav');
My question is how can I identify the frequency at which the harmonic noise is occurring, and once I've done that, how can I go about implementing a notch filter at that frequency?
NOTE: I do not have access to the iirnotch() command or fdesign.notch() due to the version of MATLAB I am currently using (2010).
The general procedure would be to analyse the spectrum, identify the frequency in question, then design a filter around that frequency. For most real applications it's all a bit woolly: the frequencies move around and there's no easy way to distinguish noise from signal, so you have to use clever techniques and a bit of guesswork. However, if you know you have a single-frequency (tonal) corruption then, yes, an FFT and a notch filter will probably do the trick.
You can analyse the signal with fft and design a filter with, among others, fir1, which I believe is part of the signal processing toolbox. If you don't have the signal processing toolbox you can do it 'by hand', as in transform to the frequency domain, remove the frequency(ies) you don't want (by zeroing the relevant elements of the frequency vector) and transform back to time domain. There's a tutorial on exactly that here.
The fft and fir1 functions are well documented: search the Mathworks site to get code examples to get you up and running.
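The "by hand" route looks like this. A NumPy sketch (in Matlab, `fft`/`ifft` work the same way); the 440 Hz "speech" and 1 kHz corruption are made-up stand-ins for illustration:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1, 1 / fs)
speech = np.sin(2 * np.pi * 440 * t)          # stand-in for the speech content
hum = 2.0 * np.sin(2 * np.pi * 1000 * t)      # strong harmonic corruption
x = speech + hum

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_noise = freqs[np.argmax(np.abs(X))]         # dominant peak = noise frequency here
X[np.abs(freqs - f_noise) < 5] = 0            # zero the bins within 5 Hz of it
y = np.fft.irfft(X, len(x))                   # back to the time domain
```

Note that zeroing bins like this only works cleanly when you process the whole signal at once; for real speech, a proper notch filter (see the answer below) behaves better.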
To add to/amend xenoclast's answer, filtering in the frequency domain may or may not work for you. There are many thorny issues with filtering in the frequency domain, some of which are covered here: http://blog.bjornroche.com/2012/08/why-eq-is-done-in-time-domain.html
One additional issue is that if you try to process your entire file at once, the "width" or Q of the filters will depend on the length of your file. This might work out for you, or it might not. If you have many files of different lengths, don't expect similar results this way.
To design your own IIR notch filter, you could use the RBJ audio filter cookbook. If you need help, I wrote up a tutorial here:
http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
My tutorial uses a bell/peaking filter, but it's easy to follow it and then replace that with a notch filter from RBJ.
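For reference, the RBJ cookbook notch coefficients are short enough to quote as code. This NumPy sketch follows the cookbook formulas; the 1 kHz notch frequency and Q here are illustrative assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def rbj_notch(f0, fs, Q=10.0):
    """Biquad notch coefficients from the RBJ audio EQ cookbook."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]          # normalise so a[0] == 1

fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 440 * t)
b, a = rbj_notch(1000, fs)             # notch out the 1 kHz tone
y = lfilter(b, a, x)                   # 440 Hz content passes almost unchanged
```

Q controls the trade-off mentioned above: higher Q means a narrower notch, removing less of the surrounding signal.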
One final note: assuming this is actually an audio signal in your .wav file, you can also use your ears to find and fix the problem frequencies:
Open the file in an audio editing program that lets you adjust filter settings in real-time (I'm not sure whether Audacity lets you do this, but it probably does).
Use a "boost" or "parametric" filter set to a high gain and sweep the frequency setting until you hear the noise accentuated the most.
Replace the boost filter with a notch filter at the same frequency. You may need to tweak the width to trade off noise elimination vs. signal preservation.
Repeat as needed (due to the many harmonics).
Save the resulting file.
Of course, some audio editing apps have built-in harmonic noise reduction features that work especially well for 50/60 Hz noise.
I was looking to implement voice pitch detection on the iPhone using the HPS method, but the detected tones are not very accurate. Performous does a decent job of pitch detection.
I looked through the code but I did not fully get the theory behind the calculations.
They use an FFT and find the peaks, but the part where they use the phase of the FFT output got me confused. I figure they use some heuristics for voice frequencies.
So, could anyone please explain the algorithm used in Performous to detect pitch?
Performous extracts pitch from the microphone, and the code is open source. Here is a description of what the algorithm does, from the guy that coded it (Tronic on irc.freenode.net#performous).
PCM input (with buffering)
FFT (1024 samples at a time, remove 200 samples from front of the buffer afterwards)
Reassignment method (against the previous FFT that was 200 samples earlier)
Filtering of peaks (this part could be done much better or even left out)
Combining peaks into sets of harmonics (we call the combination a tone)
Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
Pick the best vocal tone (frequency limits, weighting, could use the harmonic array also but I don't think we do)
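Note that this is not what Performous does (it uses the reassignment method, per the steps above), but since the question mentions HPS, here is a minimal Harmonic Product Spectrum sketch for comparison; the harmonic count and the 50 Hz lower bound are assumptions:

```python
import numpy as np

def hps_pitch(x, fs, harmonics=4):
    """Harmonic Product Spectrum: downsampled copies of the magnitude
    spectrum are multiplied together so the harmonics reinforce the
    fundamental, whose bin then dominates."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    hps = X.copy()
    for h in range(2, harmonics + 1):
        dec = X[::h]                      # spectrum decimated by factor h
        hps[:len(dec)] *= dec
    lo = int(50 * len(x) / fs)            # assume the pitch is above 50 Hz
    peak = lo + np.argmax(hps[lo:len(X) // harmonics])
    return peak * fs / len(x)             # convert bin index to Hz
```

HPS is much cruder than reassignment (bin-resolution accuracy, octave errors on weak fundamentals), which may explain the inaccuracy the question describes.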
I still wasn't able to figure it out and implement it from this information. If anyone manages it, please post your results here and comment on this response so that SO notifies me.
The task would be to create a minimal C++ wrapper around this code.