Fisher information matrix - matlab

Can the Fisher information matrix be calculated for any matrix? I am working in the image processing field on face identification. How can I calculate the Fisher information matrix for my input image (which is indeed a matrix of pixels)?

You can use the empirical Fisher information, but you would first need to specify a parametric likelihood for your data: the Fisher information is a property of a likelihood, not of an arbitrary matrix. Once you know the form of your likelihood, you can evaluate the Hessian of the log-likelihood at your parameter values. Intuitively, if the Hessian, i.e. the curvature of the log-likelihood, is high, you are more certain about the parameter estimates. To compute the empirical Fisher information matrix you then take the empirical average of the observed information over your data.
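As a sketch of this recipe (in Python/NumPy for concreteness, with a simple univariate Gaussian standing in for whatever likelihood you choose for your data), the empirical Fisher information can be formed as the average outer product of per-sample score vectors:

```python
import numpy as np

# Illustrative example: empirical Fisher information for a univariate
# Gaussian N(mu, s2), estimated as the average outer product of per-sample
# score vectors (gradients of the log-likelihood).
rng = np.random.default_rng(0)
mu, s2 = 0.0, 1.0
x = rng.normal(mu, np.sqrt(s2), size=100_000)

# Per-sample score with respect to (mu, s2):
#   d/dmu log p = (x - mu) / s2
#   d/ds2 log p = -1/(2 s2) + (x - mu)^2 / (2 s2^2)
score = np.stack([(x - mu) / s2,
                  -0.5 / s2 + (x - mu) ** 2 / (2 * s2 ** 2)], axis=1)

# Empirical Fisher information: average of score outer products.
fisher = score.T @ score / len(x)
# For this model the theoretical value is diag(1/s2, 1/(2*s2^2)).
```

For an image, the same construction applies once you decide on a parametric model for the pixel values; the hard part is choosing that model, not the averaging.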

Related

How can I validate my estimated covariance matrix?

I have two covariance matrices of size 6x6: one is supposed to be the true covariance and the other is the maximum likelihood estimate of my covariance. Is there any way I could validate my estimated covariance?
I don't know exactly how you determined your covariance matrix, but generally a good first step is to check the confidence intervals of your estimators.
Heuristically speaking, a wide confidence interval suggests that your estimator has a lot of uncertainty.
Take a look at the MATLAB function corrcoef, which also returns lower and upper bounds for the estimated correlation coefficients;
cf. https://uk.mathworks.com/help/matlab/ref/corrcoef.html#bunkanr .
Using this function on your data may give you a good starting point. If you use your own function to compute the ML estimates, you will have to add the confidence intervals yourself.
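To illustrate the confidence-interval idea behind corrcoef's bounds (sketched here in Python/NumPy rather than MATLAB, using the standard Fisher z-transform approximation for a sample correlation from n paired observations):

```python
import numpy as np

# Sketch: approximate 95% confidence interval for a sample correlation r,
# via the Fisher z-transform. The names and data below are illustrative.
def corr_ci(r, n, z_crit=1.96):
    z = np.arctanh(r)                 # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)         # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)   # back-transform to correlation scale

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)   # true correlation is 0.6
r = np.corrcoef(x, y)[0, 1]
lo, hi = corr_ci(r, n=500)
```

A wide interval [lo, hi] is the quantitative version of "your estimator has a lot of uncertainty"; for a full 6x6 covariance you would inspect the interval for each entry.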

How to calculate correlation coefficient and P-value between three dimensional matrix

I have two gridded matrices with latitude, longitude, and time dimensions (180x360x12). I have calculated the correlation coefficient between the two matrices following this: http://in.mathworks.com/matlabcentral/answers/15884-correlation-for-multi-dimensional-arrays
Now I want to find the p-value for each grid cell at the 0.05 level. Then I want to split the correlation values in the matrix into three groups: positively significant (p < 0.05), positively insignificant (p > 0.05), and negatively significant (p < 0.05). Can anyone help me in this regard?
If you use the scipy pearsonr function to calculate your correlations, it will give you the p-values as well.
If corr holds the correlations and p the p-values, then retrieving the significant values is as simple as:
significant_correlations = corr[p < 0.05]
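Sketched out for a small grid (4x5 cells with 12 time steps standing in for the 180x360x12 arrays, with random data for illustration), that looks like:

```python
import numpy as np
from scipy.stats import pearsonr

# Per-grid-cell correlation and p-value for two (lat, lon, time) arrays.
rng = np.random.default_rng(2)
a = rng.normal(size=(4, 5, 12))
b = rng.normal(size=(4, 5, 12))

corr = np.empty(a.shape[:2])
p = np.empty(a.shape[:2])
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        corr[i, j], p[i, j] = pearsonr(a[i, j, :], b[i, j, :])

# The three groups described in the question, as boolean masks:
pos_sig   = (corr > 0) & (p < 0.05)
pos_insig = (corr > 0) & (p >= 0.05)
neg_sig   = (corr < 0) & (p < 0.05)
```

With only 12 time steps per cell, bear in mind that the per-cell p-values are based on very few samples.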

Matlab: Determinant of VarianceCovariance matrix

When solving the log-likelihood expression for autoregressive models, I came across the variance-covariance matrix Gamma given on slide 9 of the Parameter estimation of time series tutorial. Now, in order to use
fminsearch
to maximize the likelihood, I need to write out the expression in which the variance-covariance matrix appears. Can somebody please show with an example how I can implement (determinant of Gamma)^(-1/2)? Any example other than an autoregressive model will also do.
How about sqrt(det(Gamma)) for the square-root determinant (and hence 1/sqrt(det(Gamma)) for det(Gamma)^(-1/2)) and inv(Gamma) for the inverse?
But if you do not want to implement it yourself, you can look at yulewalkerarestimator.
UPD: for estimation of the autocovariance matrix, use xcov.
Also, this topic is explained in a bit more detail here.
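One practical caveat with det(Gamma)^(-1/2) is that the raw determinant can underflow or overflow for larger matrices, so likelihood code usually works with the log-determinant via a Cholesky factor instead. A sketch of that (in Python/NumPy for illustration; the same idea carries over to MATLAB's chol and fminsearch):

```python
import numpy as np

# Sketch: evaluate the Gaussian log-likelihood term
#   -0.5*log(det(Gamma)) - 0.5 * x' inv(Gamma) x  (+ constant)
# without forming det(Gamma) directly. Using Gamma = L @ L.T (Cholesky),
# log det(Gamma) = 2 * sum(log(diag(L))), which is numerically stable.
def gauss_loglik(x, Gamma):
    L = np.linalg.cholesky(Gamma)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    z = np.linalg.solve(L, x)          # so that z @ z = x' inv(Gamma) x
    n = len(x)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + z @ z)

# Quick check against the direct (less stable) formula on a small example.
Gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
x = np.array([0.3, -0.7])
direct = -0.5 * (2 * np.log(2 * np.pi)
                 + np.log(np.linalg.det(Gamma))
                 + x @ np.linalg.solve(Gamma, x))
```

A function like this (negated) is what you would hand to a Nelder-Mead optimizer such as fminsearch.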

Creating a 1D Second derivative of gaussian Window

In MATLAB I need to generate the second derivative of a Gaussian window to apply to a vector representing the height of a curve. I need the second derivative in order to determine the locations of the inflection points and maxima along the curve. The vector representing the curve may be quite noisy, hence the use of the Gaussian window.
What is the best way to generate this window?
Is it best to use the gausswin function to generate the gaussian window then take the second derivative of that?
Or to generate the window manually using the equation for the second derivative of the gaussian?
Or is it even best to apply the Gaussian window to the data and then take the second derivative of it all? (I know the last two are mathematically the same; however, with discrete data points I do not know which will be more accurate.)
The maximum length of the height vector is going to be around 100-200 elements.
Thanks
Chris
I would create a linear filter composed of the weights generated by the second derivative of a Gaussian function and convolve this with your vector.
The weights of a second derivative of a Gaussian are given by:
G''(t) = C * ((t - tau)^2 / sigma^4 - 1 / sigma^2) * exp(-(t - tau)^2 / (2 * sigma^2))
Where:
tau is the time shift for the filter. If you are generating weights for a discrete filter of length T with an odd number of samples, set tau to zero and let t vary over [-T/2, T/2].
sigma sets the scale of your operator. Set sigma to a value of around T/6; if filter length is a concern, T/4 can be used instead.
C is the normalising factor. This can be derived algebraically, but in practice I always compute it numerically after calculating the filter weights. For unity gain when smoothing periodic signals, I set C = 1 / sum(G'').
Regarding your comment on the equivalence of smoothing first and taking a derivative later, I would say it is more involved than that: which derivative operator would you use in the second step? A simple central difference would not yield the same results.
You can also get an approximately equivalent response by filtering the data with two Gaussians of different scales and taking the point-wise difference of the two resulting vectors. See Difference of Gaussians for that approach.
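Putting the recipe above together (sketched in Python/NumPy for illustration; the filter lengths, sigma choice, and test signal are just examples, and the weights are left unnormalised, i.e. C = 1):

```python
import numpy as np

# Sample the second derivative of a Gaussian on [-T/2, T/2] with
# sigma ~ T/6 (tau = 0), then convolve the filter with the signal.
T = 21                                   # odd filter length
sigma = T / 6.0
t = np.arange(T) - T // 2                # t in [-10, 10]
w = (t**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-t**2 / (2 * sigma**2))

# Apply to a noisy curve; mode='same' keeps the output length equal
# to the input length.
rng = np.random.default_rng(3)
height = np.sin(np.linspace(0, 2 * np.pi, 150)) + 0.05 * rng.normal(size=150)
d2 = np.convolve(height, w, mode='same')
```

Note that the weights are symmetric and sum approximately to zero, as a second-derivative operator should (a constant input gives roughly zero response); the small residual comes from truncating the Gaussian's tails.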

scaling when sampling from multivariate gaussian

I have a data matrix A (with dependencies between columns) from which I estimate the covariance matrix S. I now want to use this covariance matrix to simulate a new matrix A_sim. Since I assume that the underlying data generator of A was Gaussian, I can simply sample from a Gaussian specified by S. I do that in MATLAB as follows:
A_sim = randn(size(A))*chol(S);
However, the values in A_sim are much larger than in A. If I scale S down by a factor of 100, A_sim looks much better. I am now looking for a way to determine this scaling factor in a principled way. Can anyone give advice or suggest literature that might be helpful?
MATLAB has the function mvnrnd, which generates multivariate normal random variables for you.
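For what it's worth, the Cholesky construction in the question is itself correct: MATLAB's chol(S) returns an upper-triangular R with R'*R = S, so the rows of randn(size(A))*chol(S) do have covariance S. A sketch in Python/NumPy (illustrative names and numbers) that demonstrates both routes and checks the scale of the output, which suggests that a large mismatch points to S itself being mis-estimated rather than to the sampling step:

```python
import numpy as np

# Draw samples whose covariance matches a given S, then verify the
# empirical covariance comes out on the same scale as S.
rng = np.random.default_rng(4)
S = np.array([[4.0, 1.2], [1.2, 2.0]])   # example covariance matrix
n = 200_000

# Either use the built-in multivariate normal (NumPy's mvnrnd analogue) ...
A_sim = rng.multivariate_normal(mean=np.zeros(2), cov=S, size=n)
# ... or the Cholesky construction: rows z @ L.T with S = L @ L.T.
L = np.linalg.cholesky(S)
A_sim2 = rng.normal(size=(n, 2)) @ L.T

S_hat = np.cov(A_sim, rowvar=False)
S_hat2 = np.cov(A_sim2, rowvar=False)
```

If the analogous check in MATLAB (cov(A_sim) vs. S) passes but A_sim still dwarfs A, it would be worth re-examining how S was estimated from A in the first place, e.g. whether the columns were centred.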