Do data in LoLiMoT sub-spaces have a linear or a Gaussian distribution? - neural-network

I'm confused: does LoLiMoT approximate data by a sum of linear models or by a sum of Gaussian models? I see here on page 184 that LoLiMoT is a model that divides the input space into linear sub-spaces, yet the structure of LoLiMoT is a sum of weighted Gaussian models. What I actually mean is: does the data in every sub-space have a linear distribution or a Gaussian distribution?
Thanks
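For reference, a sketch of the standard LoLiMoT output (following Nelles, Nonlinear System Identification; the notation here is assumed, not taken from the linked page):
$$\hat{y}(u) = \sum_{i=1}^{M} \left(w_{i,0} + w_{i,1} u_1 + \dots + w_{i,p} u_p\right)\, \Phi_i(u)$$
where the validity functions \Phi_i(u) are normalized Gaussians. In this form the Gaussians only decide where each local linear (affine) model is trusted; they are a weighting scheme, not a distributional assumption about the data in each sub-space.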

Related

How to transform simulated innovations to financial returns with GJR GARCH model?

I am investigating the time varying dependence between financial return series using copula theory.
For each marginal time series I have fitted a GJR GARCH model with t-distributed innovations and extracted the standardized residuals.
These residuals I use as input for my copula models.
Now, with the estimated copula models I have simulated 1000 standardized residuals for each point t.
I have already transformed these simulated uniformly distributed residuals back to their t-distribution.
I am wondering how I can transform these simulated t-distributed residuals back to returns with the GJR GARCH model for each point in time.
Thank you in advance!
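For what it's worth, a minimal sketch of the reconstruction step, assuming a GJR-GARCH(1,1) marginal with constant conditional mean; mu, omega, alpha, gam, beta, r and z_sim are assumed names, not taken from the question:
% Rebuild one simulated return path from t-distributed standardized
% residuals z_sim, using the fitted GJR-GARCH(1,1) variance recursion
%   h_t = omega + (alpha + gam*(eps_{t-1} < 0))*eps_{t-1}^2 + beta*h_{t-1}.
% r is the original return series; mu, omega, alpha, gam, beta come from
% the fitted marginal model.
h = var(r);                          % initialize the variance recursion
r_sim = zeros(size(z_sim));
for t = 1:numel(z_sim)
    eps_t = sqrt(h) * z_sim(t);      % shock = conditional std * std. residual
    r_sim(t) = mu + eps_t;           % return = conditional mean + shock
    h = omega + (alpha + gam * (eps_t < 0)) * eps_t^2 + beta * h;
end
The key identity is r_t = mu + sqrt(h_t) * z_t; repeat the loop per simulated draw if you have 1000 residuals per point in time.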

Fit MRI data to a noncentral chi distribution

I'm working on Magnetic Resonance Imaging data, in MATLAB R2020a. In particular, I have to characterize the background noise of the image, and I know it has a noncentral chi distribution. Now, I'm trying the mle method without results:
[phat,pci] = mle(data, 'pdf', @(data,v,d) ncx2pdf(data,v,d), 'start', [1 1]);
data is a 1024x1 vector, v is the number of degrees of freedom of the distribution and d is the noncentrality parameter (the two parameters that I have to find).
The problem lies in the fact that the distribution strongly depends on the value of the mean, and the order of magnitude of my data is 10^-6.
[histogram of the data]
Does anyone know a method to fit the data to a noncentral chi distribution? I already tried the 0-1 and 0-255 normalizations, but they produce unreliable mean values. Any suggestions are welcome.
[attached: data.mat]
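One possible direction (a sketch, not a verified fix): fit an explicit scale parameter s jointly with v and d, using the change-of-variables density, so the 10^-6 magnitude of the data is absorbed by s rather than distorting the fit:
% If Y = s*X with X ~ ncx2(v, d), then f_Y(y) = ncx2pdf(y/s, v, d)/s,
% so a scale parameter can be estimated alongside v and d.
scaledpdf = @(x, v, d, s) ncx2pdf(x ./ s, v, d) ./ s;
start = [1, 1, mean(data)];          % rough starting point for [v, d, s]
[phat, pci] = mle(data, 'pdf', scaledpdf, 'start', start, ...
                  'LowerBound', [0, 0, 0]);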

How can I extract features from a set of matrices and vectors to be used in machine learning in MATLAB

I have a task where I need to train a machine learning model to predict a set of outputs from multiple inputs. My inputs are 1000 iterations of a set of 3x1 vectors, a set of 3x3 covariance matrices and a set of scalars, while my output is just a set of scalars. I cannot use the Regression Learner app because these inputs need to have the same dimensions. Any idea on how to unify them?
One possible way to solve this is to flatten the covariance matrix into a vector. Once you have done that, you can construct a 1000xN matrix, where 1000 refers to the number of samples in your dataset and N is the number of features. For example, if your features consist of a 3x1 vector, a 3x3 covariance matrix and, let's say, 5 other scalars, N would be 3+3*3+5=17. You then use this matrix to train an arbitrary model such as a linear regressor, or more advanced models like a tree.
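A minimal sketch of this construction, with assumed variable names (vecs as 3x1000 holding the 3x1 vectors, covs as 3x3x1000 holding the covariance matrices, scalars as 1000x5):
% Build one row of 17 features per sample: 3 vector entries,
% 9 flattened covariance entries, 5 scalars.
n = 1000;
X = zeros(n, 3 + 9 + 5);
for i = 1:n
    X(i, :) = [vecs(:, i)', reshape(covs(:, :, i), 1, []), scalars(i, :)];
end
% X can now be passed to e.g. fitlm or fitrtree together with the outputs y.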
When training machine learning models it is important to understand your data and exploit its structure to help the learning algorithms. For example, we could use the fact that a covariance matrix is symmetric and positive semi-definite and thus lives in a closed convex cone. Symmetry implies that it lives in a subspace of the set of all 3x3 matrices: the space of 3x3 symmetric matrices has dimension only 6. You can use that knowledge to reduce redundancy in your data.
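A sketch of exploiting that symmetry, reusing the assumed names from above: keep only the 6 unique entries of each covariance matrix, shrinking the feature count from 3+9+5=17 to 3+6+5=14.
% Logical indexing with the upper triangle (including the diagonal)
% extracts the 6 unique entries of each symmetric 3x3 matrix.
mask = triu(true(3));
covFeat = zeros(n, 6);
for i = 1:n
    Ci = covs(:, :, i);
    covFeat(i, :) = Ci(mask)';
end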

Fisher information matrix

Can the Fisher information matrix be calculated for any matrix? I am working in image processing, on face identification. How can I calculate the Fisher information matrix for my input image (which is indeed a matrix of pixels)?
You can use the empirical Fisher information; however, you would need to specify a parametric likelihood for your data. Given that you know the form of your likelihood, you can evaluate the Hessian at your parameter values. Intuitively, if the curvature of the log-likelihood is high, you are more certain about the parameter estimates. To compute the Fisher information matrix you would then take the empirical average of the observed information matrix.
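A minimal sketch of that recipe, assuming (purely for illustration) a univariate Gaussian likelihood for the pixel values; img is a hypothetical variable holding your image matrix:
% Empirical Fisher information for theta = [mu; sigma2] under a Gaussian
% likelihood: average the outer products of the per-pixel score vectors.
x      = double(img(:));                 % flatten the image into a sample vector
mu     = mean(x);
sigma2 = var(x);
s_mu   = (x - mu) ./ sigma2;                        % d/dmu     log N(x; mu, sigma2)
s_s2   = ((x - mu).^2 - sigma2) ./ (2 * sigma2^2);  % d/dsigma2 log N(x; mu, sigma2)
scores = [s_mu, s_s2];                   % one 1x2 score per pixel
I_hat  = (scores' * scores) / numel(x);  % 2x2 empirical Fisher information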

Scaling when sampling from a multivariate Gaussian

I have a data matrix A (with dependencies between the columns) from which I estimate the covariance matrix S. I now want to use this covariance matrix to simulate a new matrix A_sim. Since I assume that the underlying data generator of A was Gaussian, I can simply sample from a Gaussian specified by S. I do that in MATLAB as follows:
A_sim = randn(size(A)) * chol(S);   % rows ~ N(0, S), since R'*R = S for R = chol(S)
However, the values in A_sim are way larger than in A. If I scale down S by a factor of 100, A_sim looks much better. I am now looking for a way to determine this scaling factor in a principled way. Can anyone give advice or suggest literature that might be helpful?
MATLAB has the function mvnrnd, which generates multivariate normal random vectors for you.
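A minimal sketch of that route, estimating mean and covariance directly from A:
% cov() centers the data internally; if S was instead computed without
% subtracting the column means (e.g. A'*A/n), its entries are inflated by
% the mean terms, which is one possible cause of the oversized A_sim.
mu    = mean(A);                     % 1 x p vector of column means
S     = cov(A);                      % centered covariance estimate
A_sim = mvnrnd(mu, S, size(A, 1));   % same number of rows as A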