Gaussian mixture model probability in MATLAB

I have data of dimension 50x100000 (100000 feature vectors, each of dimension 50).
I would like to fit a gaussian mixture model using this data. I used the following code.
obj = gmdistribution.fit(X',3);
What I need is: when I am given new data Y, I should be able to get the likelihood $p(Y|\theta)$, where $\theta$ are the Gaussian mixture model parameters.
I used the following code to get the probability values.
P = pdf(obj,X');
But I am getting very low values, all close to 0. Why is this happening? How can I get appropriate probability values?

In one dimension, the maximum value of the pdf of a unit-variance Gaussian distribution is 1/sqrt(2*pi). So in 50 dimensions, the maximum value is 1/sqrt(2*pi)^50, which is around 1e-20. So the values of the pdf are all going to be of that order of magnitude, or smaller.
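To see the scale involved, here is a quick check (a sketch; it assumes unit variances, since the exact peak depends on the fitted covariances):

% Peak density of a d-dimensional unit-variance Gaussian is (2*pi)^(-d/2).
d = 50;
peak = (2*pi)^(-d/2)       % about 1.1e-20
% Comparing log densities avoids most underflow issues:
logP = log(pdf(obj, X'));  % can still hit -Inf where pdf underflows to 0

If the raw densities underflow to exactly 0, comparing log densities (or a log-sum-exp over the component log pdfs) is the usual workaround.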

Related

Is there a way to get the probability from the probability density in multivariate kernel estimation?

I have a question about multivariate kernel density in Matlab; this is my first time using it.
I have 3-dimensional sample data (x, y, z axes) and want to find the probability of being in a certain volume using kernel density estimation. So I used the mvksdensity function in Matlab and got the probability density (estimated function values) at the points I chose.
What I originally wanted to do was (if I could find the function) to compute a triple integral of the multivariate density over a given volume. But mvksdensity only returns the density estimates and does not return the function itself. I thought there would be an easy way to compute the probability from the density, but I'm stuck. Does anyone have any useful information on this? Thanks in advance.
I thought about fitdist function to find the distribution, but it only works for univariate kernel distribution.
I also tried mvncdf, which returns the cdf of a multivariate normal distribution, after setting the mean and std from each row of the sample data. But then I would have to calculate the probability of the given volume under a normal distribution centered at every data point and add them up, which would be inefficient for a large amount of data, and I don't know whether it is a correct approach.
I can suggest the following Monte-Carlo approach. You find a master volume that contains the entire mass of the estimated probability density. This should be as small as possible for the sake of efficiency. Then you generate a large number of test points in the master volume, either on a grid or randomly according to a uniform distribution. The probability content of a specific volume V can be estimated by the sum of the density values of the test points in V over the sum of the density values of all test points. I am afraid, however, that in 3D you would need at least 1E6 test points, probably more. If you give me access to your sample, I would be pleased to try out my suggestion. It should also be fairly easy to work out an estimate of the standard error of the estimated probability content of V.
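A minimal sketch of this approach, assuming the sample is an n-by-3 matrix sample and V is an axis-aligned box given by illustrative 1-by-3 vectors vlo and vhi (mvksdensity needs a bandwidth; Silverman's rule of thumb is assumed here):

% Monte-Carlo estimate of the probability content of the box V = [vlo, vhi].
rng(0);                                        % reproducible test points
nTest = 1e6;
lo = min(sample) - 0.5*range(sample);          % master volume enclosing the mass
hi = max(sample) + 0.5*range(sample);
pts = lo + rand(nTest,3).*(hi - lo);           % uniform points in the master volume
[n, d] = size(sample);
bw = std(sample) * (4/((d+2)*n))^(1/(d+4));    % Silverman's rule of thumb
f = mvksdensity(sample, pts, 'Bandwidth', bw); % density at each test point
inV = all(pts >= vlo & pts <= vhi, 2);         % test points that fall inside V
probV = sum(f(inV)) / sum(f)                   % estimated probability content of V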

Fit MRI data to a noncentral chi distribution

I'm working on Magnetic Resonance Imaging data in Matlab R2020a. In particular, I have to characterize the background noise of the image, and I know it has a noncentral chi distribution. Now I'm trying the mle method, without results:
[phat,pci] = mle(data,'pdf',@(data,v,d) ncx2pdf(data,v,d),'start',[1 1]);
data is a 1024-by-1 vector, v is the dof of the distribution, and d is the noncentrality parameter (the two parameters that I have to find).
The problem lies in the fact that the distribution strongly depends on the value of the mean, and the order of magnitude of my data is 10^-6.
Histogram of the data: (figure omitted)
Does anyone know a method to fit the data to a noncentral chi distribution? I already tried 0-1 and 0-255 normalization, but they produce unreliable mean values. Any suggestions are welcome.
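One possible direction, as a hedged sketch (not from the original thread): since ncx2pdf expects values on the scale of the degrees of freedom, data of order 10^-6 sit far in the left tail of any plausible fit. Fitting an explicit scale parameter s alongside v and d sidesteps the normalization problem:

% Sketch: fit a scaled noncentral chi-square, X = s*Z with Z ~ ncx2(v,d),
% so the pdf of X is ncx2pdf(x/s,v,d)/s (change-of-variables Jacobian).
% Starting the scale near the data mean addresses the 1e-6 magnitude issue.
scaledPdf = @(x,v,d,s) ncx2pdf(x./s, v, d) ./ s;
start = [1, 1, mean(data)];                    % rough starting values
[phat, pci] = mle(data, 'pdf', scaledPdf, ...
                  'Start', start, 'LowerBound', [eps eps eps]);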

Transforming draws in Matlab from Gaussian mixture to uniform

Consider the following draws for a 2x1 vector in Matlab with a probability distribution that is a mixture of two Gaussian components.
P=10^3; %number draws
v=1;
%First component
mu_a = [0,0.5];
sigma_a = [v,0;0,v];
%Second component
mu_b = [0,8.2];
sigma_b = [v,0;0,v];
%Combine
MU = [mu_a;mu_b];
SIGMA = cat(3,sigma_a,sigma_b);
w = ones(1,2)/2; %equal weight 0.5
obj = gmdistribution(MU,SIGMA,w);
%Draws
RV_temp = random(obj,P);%Px2
% Transform each component of RV_temp into a uniform in [0,1] by estimating the cdf.
RV1=ksdensity(RV_temp(:,1), RV_temp(:,1),'function', 'cdf');
RV2=ksdensity(RV_temp(:,2), RV_temp(:,2),'function', 'cdf');
Now, if we check whether RV1 and RV2 are uniformly distributed on [0,1] by doing
ecdf(RV1)
ecdf(RV2)
we can see that RV1 is uniformly distributed on [0,1] (the empirical cdf is close to the 45 degree line) while RV2 is not.
I don't understand why. It seems that the more distant mu_a(2) and mu_b(2) are, the worse the job done by ksdensity for a reasonable number of draws. Why?
When you have a mixture of N(0.5,v) and N(8.2,v), the range of the generated data is larger than if the means were closer together, as with N(0,v) and N(0,v) in the other dimension. You then ask ksdensity to approximate a function using P points spread over this range.
As with standard linear interpolation, the denser the points, the better the approximation of the function inside the range; the same applies here. Thus in the N(0.5,v) and N(8.2,v) dimension, where the points are sparser, the approximation is worse than in the N(0,v) and N(0,v) dimension, where the points are denser.
As a small side note, is there any reason you do not apply ksdensity directly to the bivariate data? Also, I cannot reproduce your comment where you say that 5e2 points are also good. Final comment: 1e3 is typically preferred over 10^3.
I think this is simply about the number of samples you're using. For the first component, the means of the two Gaussians are relatively close, hence a thousand samples are enough to obtain a cdf really close to the U[0,1] cdf. On the second component, though, the difference is larger and you need more samples. With 100000 samples the empirical cdf was close to the uniform cdf; with 1000 it was clearly farther from it (plots omitted). Try increasing the number of samples to a million and check whether the result again gets closer.
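Since the mixture parameters are known in this example, there is also an estimation-free alternative (a sketch using the variables from the question; by the probability integral transform, the exact marginal cdf yields exactly uniform draws):

% Exact transform for column 2: its marginal is an equal-weight mixture of
% N(0.5,v) and N(8.2,v), so the marginal cdf is available in closed form.
F2 = @(x) 0.5*normcdf(x, 0.5, sqrt(v)) + 0.5*normcdf(x, 8.2, sqrt(v));
RV2_exact = F2(RV_temp(:,2));
ecdf(RV2_exact)   % on the 45-degree line up to Monte-Carlo noise, even for P = 1e3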

Simple binary logistic regression using MATLAB

I'm working on doing a logistic regression using MATLAB for a simple classification problem. My covariate is one continuous variable ranging between 0 and 1, while my categorical response is a binary variable of 0 (incorrect) or 1 (correct).
I'm looking to run a logistic regression to establish a predictor that would output the probability of some input observation (e.g. the continuous variable as described above) being correct or incorrect. Although this is a fairly simple scenario, I'm having some trouble running this in MATLAB.
My approach is as follows: I have one column vector X that contains the values of the continuous variable, and another equally-sized column vector Y that contains the known classification of each value of X (e.g. 0 or 1). I'm using the following code:
[b,dev,stats] = glmfit(X,Y,'binomial','link','logit');
However, this gives me nonsensical results with a p = 1.000, coefficients (b) that are extremely high (-650.5, 1320.1), and associated standard error values on the order of 1e6.
I then tried using an additional parameter to specify the size of my binomial sample:
glm = GeneralizedLinearModel.fit(X,Y,'distr','binomial','BinomialSize',size(Y,1));
This gave me results that were more in line with what I expected. I created an array for the fitting (X_fit = 0:0.01:1), extracted the coefficients, and used glmval to create estimates (Y_fit = glmval(b,X_fit,'logit');). When I overlaid the plots of the original data and the model using figure, plot(X,Y,'o',X_fit,Y_fit,'-'), the resulting plot of the model essentially looked like the lower quarter of the 'S'-shaped curve that is typical of logistic regression plots.
My questions are as follows:
1) Why did my use of glmfit give strange results?
2) How should I go about addressing my initial question: given some input value, what's the probability that its classification is correct?
3) How do I get confidence intervals for my model parameters? glmval should be able to input the stats output from glmfit, but my use of glmfit is not giving correct results.
Any comments and input would be very useful, thanks!
UPDATE (3/18/14)
I found that mnrval seems to give reasonable results. I can use [b_fit,dev,stats] = mnrfit(X,Y+1); where Y+1 simply makes my binary classifier into a nominal one.
I can loop through [pihat,lower,upper] = mnrval(b_fit,loopVal(ii),stats); to get various pihat probability values, where loopVal = linspace(0,1) or some appropriate input range and ii = 1:length(loopVal).
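As a sketch using the names above: mnrval is vectorized, so the loop can be replaced by a single call over the whole grid.

% Vectorized alternative to the loop: evaluate pihat on a grid in one call.
xGrid = linspace(0, 1)';                    % evaluation grid
[pihat, dlow, dhi] = mnrval(b_fit, xGrid, stats);
plot(X, Y, 'o', xGrid, pihat(:,2), '-')     % column 2 = P(Y = 1), i.e. "correct"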
The stats parameter has a great correlation coefficient (0.9973), but the p-values for b_fit are 0.0847 and 0.0845, which I'm not quite sure how to interpret. Any thoughts? Also, why would mnrfit work over glmfit in my example? I should note that the p-values for the coefficients when using GeneralizedLinearModel.fit were both p << 0.001, and the coefficient estimates were quite different as well.
Finally, how does one interpret the dev output from the mnrfit function? The MATLAB documentation states that it is "the deviance of the fit at the solution vector. The deviance is a generalization of the residual sum of squares." Is this useful as a stand-alone value, or is it only meaningful in comparison with dev values from other models?
It sounds like your data may be linearly separable. In short, since your input data are one-dimensional, that means there is some value xDiv such that all values of x < xDiv belong to one class (say y = 0) and all values of x > xDiv belong to the other class (y = 1).
If your data were two-dimensional this means you could draw a line through your two-dimensional space X such that all instances of a particular class are on one side of the line.
This is bad news for logistic regression (LR) as LR isn't really meant to deal with problems where the data are linearly separable.
Logistic regression is trying to fit a function of the following form:

$y = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}$

This will only return values of y = 0 or y = 1 when the expression within the exponential in the denominator is at negative infinity or infinity.
Now, because your data is linearly separable, and Matlab's LR function attempts to find a maximum likelihood fit for the data, you will get extreme weight values.
This isn't necessarily a solution, but try flipping the label on just one of your data points (so for some index t where y(t) == 0, set y(t) = 1). This will cause your data to no longer be linearly separable, and the learned weight values will be dragged dramatically closer to zero.
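A sketch of that experiment on synthetic, perfectly separable data (variable names are illustrative):

% Perfectly separable one-dimensional data: glmfit's weights blow up.
X = rand(100, 1);
Y = double(X > 0.5);                                 % separable labels
b_sep = glmfit(X, Y, 'binomial', 'link', 'logit');   % extreme coefficients plus
                                                     % an iteration-limit warning
Y(find(Y == 0, 1)) = 1;                              % flip a single label
b_flip = glmfit(X, Y, 'binomial', 'link', 'logit');  % much smaller weights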

Why Kernel smoothing function, ksdensity, in MATLAB, results in values greater than one?

I have a set of samples, S, and I want to find its PDF. The problem is when I use ksdensity I get values greater than one!
[f,xi] = ksdensity(S)
In array f, most of the values are greater than one! Would you please tell me what the problem can be? Thanks for your help.
For example:
S=normrnd(0.3035, 0.0314,1,1000);
ksdensity(S)
ksdensity, as the name says, estimates a probability density function over a continuous variable. Probability densities can be larger than 1, they can actually have arbitrary values from zero upwards. The constraint on probabilities is that their sum over an exhaustive range of possibilities has to be 1. For probability densities, the constraint is that the integral over the whole range of values is 1.
A crude approximation of an integral of the pdf estimated by ksdensity can be obtained in Matlab like this:
sum(f) * min(diff(xi))
assuming that the values in xi are equally spaced. The value of this expression should be approximately 1.
If in your application you believe this approximation is not close enough to 1, you might want to specify the grid of estimation points (second parameter pts) such that the spacing is finer or the range is wider than the one automatically generated by ksdensity.
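For the concrete example above, a quick check (a sketch; trapz gives a slightly better integral approximation than the rectangle rule):

% The density peaks far above 1, yet it integrates to about 1.
S = normrnd(0.3035, 0.0314, 1, 1000);
[f, xi] = ksdensity(S);
max(f)          % roughly 12, near the true peak 1/(0.0314*sqrt(2*pi)) ~ 12.7
trapz(xi, f)    % approximately 1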