Is 'denoisingNetwork' in 'denoiseImage' specific to one noise? - matlab

What kind of noise-removal training does the function 'denoisingNetwork' (used as part of 'denoiseImage') have? Is it specific to a particular kind of noise and noise level, or is it a generalized network that gives an average output image?

It works only for Gaussian noise, but at almost every noise level.
It might remove some other kinds of noise too, but that's not guaranteed.
However, if you look at the Matlab documentation, it says the function uses a pre-trained model called "DnCNN".
So I think it could be useful to read the related paper:
link to paper
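For reference, here is a minimal usage sketch, assuming the relevant toolboxes are installed (the image file and noise variance are just example values):

% Load the pre-trained DnCNN model and denoise a Gaussian-corrupted image.
net = denoisingNetwork('DnCNN');
I = imread('cameraman.tif');
noisy = imnoise(I, 'gaussian', 0, 0.01);   % zero-mean Gaussian noise, variance 0.01
denoised = denoiseImage(noisy, net);
imshowpair(noisy, denoised, 'montage');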

Related

How can the ideal low-pass filter from the frequency domain be applied?

I have an image to which I add Gaussian noise. I need to use the ideal low-pass filter to remove the noise, but I cannot really find any examples in the official Matlab documentation. There are examples, but not with images, and I cannot really grasp the concept behind this filter. So could somebody explain how the ideal low-pass filter can be used to remove noise?
img = imread('eight.tif');   % 'image' shadows a built-in function, so use 'img'
imshow(img);
noisyImage = imnoise(img, 'gaussian', 0.02);
imshow(noisyImage);
If you know the standard deviation of the noise, it's good to use a Gaussian filter with that specific standard deviation. In most cases, though, it's better to use a bilateral filter (imbilatfilt), which is a Gaussian filter with some extra machinery that preserves edges.
If you don't know what your noise is, it's best to use a Wiener filter ([J,noise_out] = wiener2(I,[m n])). This adaptive filter estimates the local statistics around each pixel and smooths more where they are consistent with noise. In other words, it estimates the noise in the image and filters that specific noise for you. noise_out is the estimate of the additive noise power, and m,n are the sizes of the filter's kernel (I suggest something like 5×5 or 7×7).
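For example, a minimal sketch of the Wiener option on the noisy image above:

% Adaptive Wiener filtering with a 5-by-5 neighborhood.
[restored, noise_out] = wiener2(noisyImage, [5 5]);
imshow(restored);
disp(noise_out);   % estimated additive noise power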
Of course, there are other filtering methods, including hand-made ones, but those need more effort and lots of trial and error.
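Since the question asked specifically about the ideal low-pass filter, here is a hand-rolled sketch of it in the frequency domain (the cutoff radius D0 is an arbitrary value to tune; be aware that the hard cutoff of an ideal filter causes ringing artifacts in the output):

% Ideal low-pass filtering via the 2-D FFT.
img = im2double(imread('eight.tif'));
noisyImage = imnoise(img, 'gaussian', 0.02);
F = fftshift(fft2(noisyImage));                  % centered 2-D spectrum
[rows, cols] = size(noisyImage);
[u, v] = meshgrid(1:cols, 1:rows);
D = hypot(u - (floor(cols/2) + 1), v - (floor(rows/2) + 1));  % distance from spectrum center
D0 = 40;                                         % cutoff radius in pixels (tune)
H = double(D <= D0);                             % 1 inside the circle, 0 outside
filtered = real(ifft2(ifftshift(F .* H)));       % back to the spatial domain
imshow(filtered);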

PCA on SIFT descriptors and Fisher Vectors

I was reading this particular paper http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf and I find the Fisher Vector with GMM vocabulary approach very interesting and I would like to test it myself.
However, it is totally unclear (to me) how they apply PCA dimensionality reduction to the data. I mean, do they compute the feature space and, once it is computed, perform PCA on it? Or do they perform PCA on every image right after SIFT is calculated and only then create the feature space?
Is this supposed to be done for both the training and test sets? To me the answer is 'obviously yes', but it is not stated clearly.
I was thinking of creating the feature space from the training set and then running PCA on it. Then I could use the PCA coefficients from the training set to reduce each image's SIFT descriptors before they are encoded into a Fisher Vector for later classification, whether it is a test or a train image.
EDIT 1:
Simplistic example:
[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);
and then (for both test and train images)
reduced_test_img = test_img * coef; (coef already has 80 columns here, so the product is the 80-dimensional representation directly)
What do you think? Cheers
It looks to me like they do SIFT first and then PCA. The article states in section 2.1: "The local descriptors are fixed in all experiments to be SIFT descriptors..."
Also, from the introduction: "the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification ... Recently several authors have focused on improving the second component." So it looks to me like the dimensionality reduction occurs after SIFT, and the paper is simply discussing a few different methods of doing this and the performance of each.
I would also guess (as you did) that you would have to run it on both sets of images. Otherwise you would be using two different metrics to classify the images, which really is like comparing apples to oranges. Comparing a reduced-dimensional representation to the full one (even for the same exact image) will show some variation. In fact, that is the whole premise of PCA: you give up some smaller features (usually) for computational efficiency. The real question with PCA, or any dimensionality-reduction algorithm, is how much information you can give up and still reliably classify/segment different data sets.
And as a last point, you have to treat both sets of images the same way, because your end goal is to use the Fisher Vector for classification, whether an image is test or training. Now imagine you decided training images don't get PCA and test images do. If I then give you some image X, what would you do with it? How could you treat one set of images differently from another BEFORE you've classified them? Using the same technique on both sets means you'd process my image X first and then decide where to put it.
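To make the shared-transformation point concrete, here is a hedged sketch building on your snippet (one detail worth adding: pca centers the data internally, so the training mean must be subtracted from test descriptors before projecting them onto the training basis):

% Feat_Space': training SIFT descriptors, one per row (N x 128), as in the question.
mu = mean(Feat_Space', 1);                                  % training mean (1 x 128)
[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);
% Project ANY image's descriptors (train or test) with the SAME training basis:
reduced_test_img = bsxfun(@minus, test_img, mu) * coef;     % rows are now 80-D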
Anyway, I hope that helped and wasn't too rant-like. Good luck :-)

Why isn't there a simple function to reduce background noise of an audio signal in Matlab?

Is this because it's a complex problem? I mean, is the problem too broad for a simple/generic solution to exist?
Almost every piece of signal-processing software (Avisoft, GoldWave, Audacity…) has a function that reduces the background noise of a signal, usually based on the FFT. But I can't find an already-implemented function in Matlab that does the same. Is the right way, then, to implement it manually?
Thanks.
The common audio noise-reduction approaches built into things like Audacity are based on spectral subtraction, which estimates the level of steady background noise in the Fourier-transform magnitude domain, then removes that much energy from every frame, leaving energy only where the signal "pokes above" this noise floor.
You can find many implementations of spectral subtraction for Matlab; this one is highly rated on Matlab File Exchange:
http://www.mathworks.com/matlabcentral/fileexchange/7675-boll-spectral-subtraction
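If you want to see the mechanics, here is a rough hand-rolled sketch, assuming the first ten frames of the file contain noise only (a real implementation such as the one above handles musical-noise artifacts and edge cases far more carefully):

% Basic magnitude-domain spectral subtraction with 50% overlap-add.
[x, fs] = audioread('noisy.wav');
x = x(:, 1);                                     % one channel
N = 512; hop = N/2;
win = 0.5 - 0.5*cos(2*pi*(0:N-1)'/N);            % Hann window
% Estimate the noise magnitude spectrum from the first 10 frames:
noiseMag = zeros(N, 1);
for k = 1:10
    noiseMag = noiseMag + abs(fft(x((k-1)*hop + (1:N)') .* win)) / 10;
end
% Subtract that floor from every frame, keep the noisy phase, overlap-add:
y = zeros(size(x));
for k = 0:floor((numel(x) - N) / hop)
    idx = k*hop + (1:N)';
    F = fft(x(idx) .* win);
    mag = max(abs(F) - noiseMag, 0);             % floor the magnitude at zero
    F = mag .* exp(1i * angle(F));               % reuse the noisy phase
    y(idx) = y(idx) + real(ifft(F)) .* win;
end
audiowrite('cleaned.wav', y / max(abs(y)), fs);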
The question is, what kind of noise reduction are you looking for? There is no one solution that fits all needs. Here are a few approaches:
Low-pass filtering the signal reduces noise but also removes the high-frequency components of the signal. For some applications this is perfectly acceptable. There are lots of low-pass filter functions and Matlab helps you apply plenty of them. Some knowledge of how digital filters work is required. I'm not going into it here; if you want more details consider asking a more focused question.
An approach suitable for many situations is using a noise gate: simply attenuate the signal whenever its RMS level goes below a certain threshold, for instance. In other words, this kills quiet parts of the audio dead. You'll retain the noise in the more active parts of the signal, though, and if you have a lot of dynamics in the actual signal you'll get rid of some signal, too. This tends to work well for, say, slightly noisy speech samples, but not so well for very noisy recordings of classical music. I don't know whether Matlab has a built-in function for this, but a bare-bones gate is easy to hand-roll (see the sketch after this list).
Some approaches involve making a "fingerprint" of the noise and then removing that throughout the signal. It tends to make the result sound strange, though, and in any case this is probably sufficiently complex and domain-specific that it belongs in an audio-specific tool and not in a rather general math/DSP system.
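Here is the promised noise-gate sketch for the second approach (frame length and threshold are arbitrary values to tune by ear):

% Frame-based noise gate: silence frames whose RMS falls below a threshold.
[x, fs] = audioread('speech.wav');
x = x(:, 1);                                 % one channel
frameLen = round(0.02 * fs);                 % 20 ms frames
threshold = 0.02;                            % RMS threshold
y = x;
for k = 1:frameLen:numel(x) - frameLen + 1
    idx = k:k + frameLen - 1;
    if sqrt(mean(x(idx).^2)) < threshold
        y(idx) = 0;                          % gate closed: kill the quiet frame
    end
end
audiowrite('gated.wav', y, fs);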
Reducing noise requires making some assumptions about the type of noise and the type of signal, and how they differ. Audio processors typically assume (correctly or incorrectly) that the audio is speech or music, and that the noise is typical recording-session background hiss, A/C power hum, or vinyl-record pops.
Matlab is for general use (microwave radio, data comm, subsonic earthquakes, heartbeats, etc.), and thus can make no such assumptions.
Matlab is not exactly an audio processor. You have to implement your own filter, and you will have to design it correctly, according to what you want.

Fourier spectral analysis with Support Vector Machines

I did some reading this afternoon about SVMs, and I have the impression that they look very promising.
I am currently working on a problem where I'm looking for a pattern in the Fourier spectrum. What I mean is that I have been looking at spectra for days, hoping to find some repeating patterns. I found some criteria that match a certain pattern, but with the next sample the whole pattern can look slightly different. So there is always a slight deviation, which makes it hard to describe; or, put another way, I might be overlooking something. But I can clearly say which samples belong in the training data.
I was hoping to use an SVM for this: train it, then predict the classification. That means that if I have another, new set of data, it would tell me whether it matches the training data or falls into the "other" group, which could be anything (no need to know what).
Is that something an SVM is able to do, or am I completely off? I couldn't find any good examples of input data to see if my problem is something I could feed to an SVM.
Currently using Matlab.
There has actually been a ton of research done on this particular topic, especially with the Wavelet Transform. Google "Wavelet Transform and SVM" and you will find a number of papers. From there, you can easily adjust your model from the wavelet to the FFT spectrum.
I don't have experience with SVM, but I do have experience with related techniques, and here's what I can say:
In all likelihood, you can't simply go from a spectrum to SVM to decision. You need to determine what it is about the spectra that distinguishes your various inputs. For example, if it's the way the data changes over time or the relationship between the high and low frequencies that makes the inputs different, you need to encode that as a single parameter. E.g., you could make a parameter that's the ratio of some of your higher frequencies to some of your lower frequencies. You may also want to use parameters like the frequency centroid and zero-crossing rate, which are simpler than the full spectrum but may still carry useful information (these are used in audio and speech; I'm not sure whether they apply to whatever you are looking at). Once you have these derived parameters, feed them to the SVM analysis, which will do the sorting.
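To make that concrete, here is a hedged sketch of the feature-extraction idea (the 1 kHz split point is arbitrary, and fitcsvm is from the Statistics and Machine Learning Toolbox; save the function in its own file, e.g. spectrumFeatures.m):

function feats = spectrumFeatures(x, fs)
% Reduce one signal to a few scalar descriptors of its spectrum.
x = x(:);
N = numel(x);
mag = abs(fft(x));
mag = mag(1:floor(N/2));                       % positive frequencies only
f = (0:floor(N/2) - 1)' * fs / N;
centroid = sum(f .* mag) / sum(mag);           % frequency (spectral) centroid
hiLoRatio = sum(mag(f > 1000)) / sum(mag(f <= 1000));   % high/low energy ratio
zcr = mean(abs(diff(sign(x)))) / 2;            % rough zero-crossing rate
feats = [centroid, hiLoRatio, zcr];
end

Training and prediction then look something like:

% X: one feature row per training signal; y: the class labels.
model = fitcsvm(X, y);
label = predict(model, spectrumFeatures(newSignal, fs));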
Other techniques you might want to examine (which also have the same requirements) include HMM (Hidden Markov Models), K-Means, and Logistic Regression.

Notch filters and harmonic noise in matlab

So basically, my problem is that I have a speech signal in .wav format that is corrupted by a harmonic noise source at some frequency. My goal is to identify the frequency at which this noise occurs, and use a notch filter to remove said noise. So far, I have read the speech signal into matlab using:
[data, Fs] = wavread('signal.wav');
My question is how can I identify the frequency at which the harmonic noise is occurring, and once I've done that, how can I go about implementing a notch filter at that frequency?
NOTE: I do not have access to the iirnotch() command or fdesign.notch() due to the version of MATLAB I am currently using (2010).
The general procedure would be to analyse the spectrum to identify the frequency in question, then design a filter around that frequency. For most real applications it's all a bit woolly: the frequencies move around and there's no easy way to distinguish noise from signal, so you have to use clever techniques and a bit of guesswork. However, if you know you have a single-tone corruption then, yes, an FFT and a notch filter will probably do the trick.
You can analyse the signal with fft and design a filter with, among others, fir1, which I believe is part of the signal processing toolbox. If you don't have the signal processing toolbox you can do it 'by hand', as in transform to the frequency domain, remove the frequency(ies) you don't want (by zeroing the relevant elements of the frequency vector) and transform back to time domain. There's a tutorial on exactly that here.
The fft and fir1 functions are well documented: search the Mathworks site to get code examples to get you up and running.
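A minimal sketch of that procedure (the filter order and stop-band width are arbitrary values to tune):

% Find the offending frequency with the FFT, then notch it out with fir1.
[data, Fs] = wavread('signal.wav');            % audioread in newer versions
x = data(:, 1);                                % one channel
N = numel(x);
mag = abs(fft(x));
f = (0:N-1)' * Fs / N;
[~, k] = max(mag(2:floor(N/2)));               % skip DC, search up to Nyquist
f0 = f(k + 1);                                 % frequency of the noise peak
bw = 20;                                       % half-width of the stop band in Hz
b = fir1(512, [f0 - bw, f0 + bw] / (Fs/2), 'stop');   % band-stop FIR design
clean = filter(b, 1, x);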
To add to/amend xenoclast's answer, filtering in the frequency domain may or may not work for you. There are many thorny issues with filtering in the frequency domain, some of which are covered here: http://blog.bjornroche.com/2012/08/why-eq-is-done-in-time-domain.html
One additional issue is that if you try to process your entire file at once, the "width" or Q of the filters will depend on the length of your file. This might work out for you, or it might not. If you have many files of different lengths, don't expect similar results this way.
To design your own IIR notch filter, you could use the RBJ audio filter cookbook. If you need help, I wrote up a tutorial here:
http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
My tutorial uses a bell/peaking filter, but it's easy to follow it and then replace it with a notch filter from RBJ.
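For reference, the RBJ notch comes out to a single biquad; a sketch following the cookbook formulas, assuming f0 is the noise frequency found earlier and Fs the sample rate:

% RBJ cookbook notch filter (higher Q = narrower notch).
Q = 30;
w0 = 2*pi*f0/Fs;
alpha = sin(w0) / (2*Q);
b = [1, -2*cos(w0), 1];                        % numerator coefficients
a = [1 + alpha, -2*cos(w0), 1 - alpha];        % denominator coefficients
clean = filter(b / a(1), a / a(1), x);         % normalized and applied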
One final note: assuming this is actually an audio signal in your .wav file, you can also use your ears to find and fix the problem frequencies:
Open the file in an audio editing program that lets you adjust filter settings in real time (I am not sure if Audacity lets you do this, but it probably does).
Use a "boost" or "parametric" filter set to a high gain and sweep the frequency setting until you hear the noise accentuated the most.
Replace the boost filter with a notch filter at the same frequency. You may need to tweak the width to trade off noise elimination vs. signal preservation.
Repeat as needed (for each of the harmonics).
Save the resulting file.
Of course, some audio editing apps have built-in harmonic noise reduction features that work especially well for 50/60 Hz noise.