Statistical Distributions - MATLAB

I work on image processing, and I have been working with ultrasound images to help in the diagnosis of a specific cardiac disease. I am applying statistical methods to characterize the speckle in the images, using statistical distributions such as Gamma, Rayleigh, Nakagami, etc.
I am very much interested in finding MATLAB codes for the following distributions for parameter estimation:
- K
- Homodyned K
- Generalized Gamma
- Generalized Gamma Mixture Model
Your help is appreciated,
V
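The Statistics and Machine Learning Toolbox has no built-in fitters for the K or homodyned-K distributions, but a generalized gamma can be fit by maximum likelihood with mle and a custom pdf handle. A minimal sketch, assuming your speckle amplitudes are in a vector x (the parameter names a, b, c and the Stacy-style parameterization below are my choice, not from the post):

```matlab
% Generalized gamma pdf (Stacy parameterization):
% f(x) = (c/b) * (x/b)^(a*c-1) * exp(-(x/b)^c) / gamma(a)
% a: shape, b: scale, c: power. With c = 1 this reduces to Gamma(a, b).
ggpdf = @(x, a, b, c) (c ./ b) .* (x ./ b).^(a .* c - 1) ...
                      .* exp(-(x ./ b).^c) ./ gamma(a);

x = gamrnd(2, 1.5, 1000, 1);            % synthetic stand-in for speckle data

phat = mle(x, 'pdf', ggpdf, ...
           'Start',      [1 mean(x) 1], ... % crude initial guess
           'LowerBound', [eps eps eps]);    % all parameters positive
% phat(1), phat(2), phat(3) are the estimated a, b, c
```

For a generalized gamma mixture model you would typically wrap a pdf like this in an EM loop, since fitgmdist only handles Gaussian components.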

Related

Support Vector Machine parameters in MATLAB

I am working on an artificial-intelligence problem and I am following the instructions from this example:
Matlab Deep Learning Example
There, they use a support vector machine to classify:
classifier = fitcecoc(trainingFeatures, trainingLabels, ...
'Learners', 'Linear', 'Coding', 'onevsall', 'ObservationsIn', 'columns');
I tried this example with my own data set and it achieved an accuracy of 89.5%.
It works pretty well, but now I would like to train my own SVM with my own settings instead of the default ones.
I read in the documentation that fitcecoc uses an SVM with a linear kernel by default; now I would like to try different kernels, for instance Gaussian and polynomial.
I know from the Coursera machine learning course that an SVM has a regularization parameter (Andrew Ng refers to it as C) and that each kernel also has its own parameter. I also found info about the kernel parameters at this MathWorks URL:
Kernel parameters...
According to that link, the Gaussian kernel has a parameter SIGMA, and the polynomial kernel has a parameter P, which is the order of the polynomial function.
So I wrote this code:
Oursvm = templateSVM('KernelFunction','polynomial');
classifier = fitcecoc(trainingFeatures, trainingLabels,'Learners',...
Oursvm,'Coding', 'onevsall', 'ObservationsIn', 'columns');
Now I would like to change the P parameter. In the templateSVM documentation I found that I can set it like this:
Oursvm = templateSVM('KernelFunction','polynomial','PolynomialOrder',9);
Template SVM
The default value is 3, but no matter which number I use for PolynomialOrder, the accuracy is always the same: 3.2258 for p = 1, p = 2, or even p = 9.
Isn't that weird? What am I missing?
Also, how can I set the SIGMA parameter for the Gaussian kernel? Training with the default configuration gives very low accuracy, and the templateSVM documentation doesn't clearly specify how to set this parameter.
How can I set the C parameter of my SVM?
Finally, I have read that you need at least 10 times as many training samples as dimensions of the input data. How is it possible that the deep learning example uses only 201 samples (67 for each class, three classes total) if the dimension of the input data is 4096?
Andrew Ng describes this trade-off in the week 7 "Kernels II" video:
Large C - lower bias, high variance (prone to overfitting)
Small C - higher bias, low variance (prone to underfitting)
For the Gaussian kernel's sigma it is the opposite:
Large sigma - higher bias, low variance (prone to underfitting)
Small sigma - lower bias, high variance (prone to overfitting)
So try tuning one parameter at a time. And, like Andrew, I don't see much reason to use polynomial kernels; linear and Gaussian are usually enough, and the choice between them depends on the number of examples and features. Good luck.
For the last question: with a low number of training examples and that many features, you should try the linear kernel.
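In templateSVM the C parameter goes by the name 'BoxConstraint', and the Gaussian kernel's sigma by 'KernelScale' (MATLAB divides the predictors by KernelScale, so it plays the role of sigma in the RBF kernel). A sketch using the variable names from the question (the specific values 10 and 2 are arbitrary illustrations):

```matlab
% C is 'BoxConstraint'; the Gaussian kernel width is 'KernelScale'.
t = templateSVM('KernelFunction', 'gaussian', ...
                'KernelScale',    2, ...   % sigma-like width parameter
                'BoxConstraint',  10);     % the C parameter

classifier = fitcecoc(trainingFeatures, trainingLabels, ...
                      'Learners', t, 'Coding', 'onevsall', ...
                      'ObservationsIn', 'columns');
```

To compare kernels and parameters fairly, estimate accuracy with cross-validation (e.g. crossval on the model followed by kfoldLoss) rather than on the training data; identical accuracies across settings often mean the evaluation, not the model, is the problem.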

Discriminant analysis method to classify data

My aim is to classify the data into two sections - upper and lower - by finding the mid line of the peaks.
I would like to apply machine learning methods- i.e. Discriminant analysis.
Could you let me know how to do that in MATLAB?
It seems that what you are looking for is a GMM (Gaussian mixture model). With K = 2 (number of mixture components) and dimension equal to 1, this is a simple, fast method that gives you a direct solution. Given the fitted components, it is easy to find the split point between them analytically (roughly a weighted average of the two means, with weights related to the standard deviations).
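A minimal sketch of this with fitgmdist, on synthetic bimodal data standing in for your measurements (the threshold used here is the crude midpoint of the two means; for unequal variances or weights you would solve for where the component posteriors cross instead):

```matlab
% Fit a 2-component 1-D Gaussian mixture and split the data at a
% threshold between the two component means.
data = [randn(200,1) + 1; randn(200,1) + 6];  % synthetic bimodal data

gm = fitgmdist(data, 2);        % K = 2 mixture components

% Crude split point: midpoint of the two fitted means.
threshold = mean(gm.mu);

upper = data(data >  threshold);
lower = data(data <= threshold);
```

You can also call cluster(gm, data) to assign each point to its most likely component directly.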

Hyper-parameters of Gaussian Processes for Regression

I know a Gaussian process regression model is mainly specified by its covariance function, and the free hyper-parameters act as the 'weights' of the model. But could anyone explain what the two hyper-parameters (length-scale & amplitude) in the covariance function represent (since they are not 'real' parameters)? I'm a little confused about the 'actual' meaning of these two parameters.
Thank you for your help in advance. :)
First off, I would like to point out that there is an infinite number of kernels that could be used in a Gaussian process. One of the most common, however, is the RBF kernel (also referred to as the squared exponential, the exponentiated quadratic, etc.). This kernel is of the following form:

k(x, x') = sigma^2 * exp( -(x - x')^2 / (2 * l^2) )

The above equation is of course for the simple 1-D case. Here l is the length scale and sigma^2 is the variance parameter (note they go under different names depending on the source). Effectively, the length scale controls how quickly the similarity between two points falls off with the distance between x and x', and therefore how wiggly the fitted function is. The variance parameter controls the overall amplitude (vertical scale) of the function. These are related but not the same.
The Kernel Cookbook gives a nice little description and compares RBF kernels to other commonly used kernels.
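To build intuition for the two hyper-parameters, you can construct the RBF kernel matrix yourself and draw samples from the GP prior; this is a sketch, not part of the original answer:

```matlab
% RBF kernel for 1-D inputs: sigma^2 * exp(-(x - x')^2 / (2*l^2)).
rbf = @(x1, x2, l, sigma) sigma^2 * exp(-(x1 - x2.').^2 / (2 * l^2));

x = linspace(0, 10, 200).';
K = rbf(x, x, 1.0, 1.0);       % length scale l = 1, amplitude sigma = 1

% Sample 3 functions from the prior. Small l -> wiggly draws,
% large l -> smooth draws; larger sigma stretches them vertically.
L = chol(K + 1e-8 * eye(numel(x)), 'lower');   % jitter for stability
f = L * randn(numel(x), 3);
plot(x, f);
```

Re-running with, say, l = 0.2 versus l = 3 makes the "wiggliness" interpretation of the length scale immediately visible.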

Beginners issue in polynomial curve fitting [Part 1]

I have just started learning modeling techniques based on regression models and have been going through the MATLAB Curve Fitting Toolbox and SO. I have some fundamental doubts and am unable to proceed further. I have a single vector of k = 100 data points that I want to fit to an AR model, an MA model, and an ARMA model successively, to see which is better suited. Starting with an AR(p) model of the form y(k+1) = a*y(k) + b*y(k-1), the command
coeff = polyfit(x, y, d)
will fit a polynomial of degree d (say d = 1) with p coefficients indicating the order of the model (AR(p)). But I just have one set of data, which is the recording of the angular moment. So what will go as the first parameter (x) of the function signature, i.e. what will be x and y? Then, what if the linear models are not good enough and I have to select nonlinear models? Can somebody please guide me with code snippets through the steps of fitting, checking for overfitting, residual calculation, etc.?
x is likely to be k (the index of y). In full (reshaping y to a row so it matches x):
c = polyfit(1:length(y), y(:).', d);
MATLAB has a Curve Fitting Toolbox. You could use its GUI to try different nonlinear fits and get some intuition.
If you want the steps, there's a great Coursera Machine Learning course. The beginning of that course covers linear regression, and I recommend you spend at least a few hours on it.
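Note that polyfit fits a polynomial in the index, which is a trend model rather than an AR model. The AR(2) form y(k+1) = a*y(k) + b*y(k-1) from the question is a linear regression on lagged values and can be sketched with the backslash operator (variable names and the synthetic series are illustrative):

```matlab
% Least-squares fit of a, b in y(k+1) = a*y(k) + b*y(k-1),
% plus residuals for model checking.
y = filter(1, [1 -0.6 -0.3], randn(100, 1));  % synthetic AR(2) series

Y = y(3:end);                  % targets: y(k+1)
X = [y(2:end-1), y(1:end-2)];  % regressors: [y(k), y(k-1)]

theta = X \ Y;                 % theta(1) ~ a, theta(2) ~ b
resid = Y - X * theta;         % inspect residuals for leftover structure
```

Plotting resid (and its autocorrelation) is the usual first check: structure left in the residuals suggests the model order is too low, while a near-perfect in-sample fit with a high order hints at overfitting.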

wavelet denoising routine using the wden functions in matlab

I was reading a report today that looked at measuring the heat storage of a lake from temperature measurements. To reduce the impact of temperature fluctuations that can confound estimates of short-term changes in heat storage, a wavelet de-noising routine was used (Daubechies 4 wavelet, single rescaling, minimax thresholds, using the wden function in the Wavelet Toolbox), with 2 levels of wavelet filtering applied. This technique results in smoother temporal variations in water temperature, while preserving patterns of diurnal heat gain and loss.
From this description, consider that my temperature measurements are similar to
load sumsin;
s = sumsin;
plot(s);
How would I apply the techniques described using the wden function in MATLAB?
Apologies for the vagueness of this post, but seeing as I am clueless about how to complete this task, I would be very grateful for some advice.
I assume you're talking about de-noising by thresholding the detail coefficients of the wavelet transform. wden does do this. You haven't specified, however, whether it is hard or soft thresholding.
Rather than reproduce MATLAB's help here,
help wden
will give you what you need on how to use the function. Given the information you've provided, and assuming soft thresholding is appropriate (as it is with most methods except Donoho's VisuShrink, referred to by wden as 'sqtwolog'):
[s_denoised, ~, ~] = wden(s, 'minimaxi', 's', 'sln', 2, 'db4');
should give you what you want. This also assumes you're not interested in the decomposed wavelet tree.
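Putting it together with the sumsin example from the question (a minimal sketch; the added noise is just to make the effect of de-noising visible):

```matlab
% Denoise the sumsin demo signal with wden and compare before/after.
load sumsin;                               % ships with the Wavelet Toolbox
s = sumsin + 0.2 * randn(size(sumsin));    % add noise so denoising shows

% minimax threshold, soft ('s'), single rescaling ('sln'),
% 2 decomposition levels, Daubechies 4 wavelet.
s_denoised = wden(s, 'minimaxi', 's', 'sln', 2, 'db4');

subplot(2,1,1); plot(s);          title('noisy signal');
subplot(2,1,2); plot(s_denoised); title('denoised (db4, 2 levels)');
```

For real temperature data you would substitute your measurement vector for s and possibly increase the number of levels to match the time scales you want to smooth out.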