General Linear Model on MRI data - neural-network

General Linear Model analysis is usually done on fMRI data. I have applied the same analysis to MRI data and found the clusters which are linearly related to the columns of behavioral scores (the design matrix). I wanted to know whether this analysis will give me correct results or not. Please let me know if anyone has an idea about it; I can share more information if required.
I am doing this clustering so that I can find the interesting regions in the brain MRI, create a mask from them, and then pass it to a CNN for better classification results.
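For clarity, the per-voxel fit I have in mind is roughly the following (a simplified sketch; Y and X are placeholder names for my vectorised MRI intensities and the design matrix of behavioral scores, and the threshold is arbitrary):
% Y : nSubjects-by-nVoxels matrix of MRI intensities (placeholder name)
% X : nSubjects-by-nRegressors design matrix of behavioral scores (placeholder name)
nVox  = size(Y, 2);
tvals = zeros(nVox, size(X, 2));
for v = 1:nVox
    [~, ~, stats] = glmfit(X, Y(:, v));   % ordinary least-squares fit per voxel
    tvals(v, :)   = stats.t(2:end)';      % drop the intercept's t-statistic
end
mask = any(abs(tvals) > 3, 2);            % crude threshold; a proper multiple-comparison correction is still needed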

Related

niftynet multi-class 3D segmentation with dense vnet

Neural network newbie here. I've been testing Niftynet and achieved decent single-class 3D segmentation predictions on my own MRI data set with dense_vnet. However, I ran out of luck when I tried to add a second label. The network seems to spot the correct organs but can't get rid of additional artifacts, as if it cannot get out of a local minimum or doesn't have enough degrees of freedom. This is one of the better-looking prediction slices, which does show some correct labels but also additional noise.
Why would a single-class segmentation work better than a multi-class segmentation? Is it even reasonable to expect good multi-class 3D segmentation results out of DenseVnet? If yes, is there a specific approach to improve the results?
P.S.
Niftynet's site refers to stackoverflow for general questions.
Apparently, DenseVnet does handle multi-class segmentation okay. They have provided a ready model with a Dice loss extension. It worked with my MRI data without any pre-processing even though it's been designed for CT images and Hounsfield units.
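For anyone else wondering what the Dice loss is built around: the per-class Dice score is just 2*|A∩B| / (|A| + |B|). A rough MATLAB-style sanity check on a prediction (pred and truth are integer label volumes of the same size; the variable names and label list are mine):
labels = [1 2];                                         % e.g. the two organ classes
d = zeros(numel(labels), 1);
for k = 1:numel(labels)
    p = (pred  == labels(k));
    t = (truth == labels(k));
    d(k) = 2 * nnz(p & t) / (nnz(p) + nnz(t) + eps);    % eps avoids 0/0 when a class is absent
end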

Non-image data with cnn [Matlab Specific]

I am trying to use a cnn to build a classifier for my data.
The training set is comprised of 2D numerical matrices which are not image data.
It seems that Matlab's cnns only work with image inputs:
https://uk.mathworks.com/help/nnet/ref/imageinputlayer-class.html
Does anyone have experience with cnns and non-image data using Matlab's deep learning toolbox?
Thank you.
Well, first I would like to understand why you want to use a CNN with non-image data. CNNs are especially good because they take into account information in the neighborhood. Unless your data has some kind of regional pattern (like pixels that come together to form a pattern, or sentences where word order is relevant), a CNN would not be the best approach to handle it.
That being said, if you still want to use one, you could convert the matrices to images. I'm not sure that would help, though.
Function to convert: mat2gray
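If you do go that route, a rough sketch of feeding such matrices into the toolbox might look like this (assuming all matrices have the same h-by-w size and you have N of them with labels Y; the variable names are mine and this is untested):
% Xcell : 1-by-N cell array of h-by-w numeric matrices (placeholder name)
% Y     : N-by-1 categorical vector of class labels (placeholder name)
h = size(Xcell{1}, 1);  w = size(Xcell{1}, 2);  N = numel(Xcell);
X4d = zeros(h, w, 1, N);
for i = 1:N
    X4d(:, :, 1, i) = mat2gray(Xcell{i});   % rescale each matrix to [0, 1] like a grayscale image
end
layers = [ ...
    imageInputLayer([h w 1])
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(numel(categories(Y)))
    softmaxLayer
    classificationLayer];
opts = trainingOptions('sgdm', 'MaxEpochs', 10);
net  = trainNetwork(X4d, Y, layers, opts);
Whether the learned filters mean anything depends entirely on whether neighbouring entries in your matrices are actually related, which is the point above.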

PCA on SIFT descriptors and Fisher Vectors

I was reading this particular paper http://www.robots.ox.ac.uk/~vgg/publications/2011/Chatfield11/chatfield11.pdf and I find the Fisher Vector with GMM vocabulary approach very interesting, so I would like to test it myself.
However, it is totally unclear (to me) how they apply PCA dimensionality reduction to the data. I mean, do they calculate the feature space and, once it is calculated, perform PCA on it? Or do they perform PCA on every image right after SIFT is calculated and then create the feature space?
Is this supposed to be done for both the training and test sets? To me the answer is an obvious 'yes', however it is not stated clearly.
I was thinking of creating the feature space from the training set and then running PCA on it. Then I could use the PCA coefficients from the training set to reduce each image's SIFT descriptors before they are encoded into a Fisher Vector for later classification, whether it is a test or a train image.
EDIT 1:
Simplistic example:
[coef, reduced_feat_space] = pca(Feat_Space', 'NumComponents', 80);
and then (for both test and train images)
reduced_test_img = test_img * coef;   % and then choose the first 80 dimensions of reduced_test_img
What do you think? Cheers
It looks to me like they do SIFT first and then PCA. The article states in section 2.1: "The local descriptors are fixed in all experiments to be SIFT descriptors..."
It also says in the introduction: "the following three steps: (i) extraction of local image features (e.g., SIFT descriptors), (ii) encoding of the local features in an image descriptor (e.g., a histogram of the quantized local features), and (iii) classification ... Recently several authors have focused on improving the second component". So it looks to me like the dimensionality reduction occurs after SIFT, and the paper is simply discussing a few different methods of doing this and the performance of each.
I would also guess (as you did) that you would have to run it on both sets of images. Otherwise you would be using two different metrics to classify the images; it really is like comparing apples to oranges. Comparing a reduced-dimensional representation to the full one (even for the exact same image) will show some variation. In fact, that is the whole premise of PCA: you are giving up some smaller features (usually) for computational efficiency. The real question with PCA or any dimensionality reduction algorithm is how much information you can give up and still reliably classify/segment different data sets.
And as a last point, you have to treat both sets of images the same way, because your end goal is to use the Fisher feature vector for classification regardless of whether an image is a test or a training image. Now imagine you decided that training images don't get PCA and test images do. If I give you some image X, what would you do with it? How could you treat one set of images differently from another before you've classified them? Using the same technique on both sets means you'd process my image X and then decide where to put it.
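In code, the idea is simply to learn the projection (and the mean, since pca centres the data internally) on the training descriptors and reuse it for everything. Roughly, using your variable names plus a mean term of my own:
% Feat_Space : d-by-M matrix of training SIFT descriptors (one descriptor per column, as in your snippet)
[coef, ~] = pca(Feat_Space', 'NumComponents', 80);   % learned on the training set only
mu = mean(Feat_Space', 1);                           % keep the training mean as well
reduced_train = (Feat_Space' - mu) * coef;           % M-by-80, used to fit the GMM and encode Fisher Vectors
reduced_test  = (test_img    - mu) * coef;           % exactly the same transform applied to test descriptors
(Note that with 'NumComponents', 80 the projection already gives you 80 dimensions, so there is nothing left to choose afterwards.)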
Anyway, I hope that helped and wasn't too rant-like. Good luck :-)

Spatio-temporal wavelet analysis

I am quite new to wavelet analysis as well as Stack Overflow and would like some help. I am performing a spatio-temporal analysis of rainfall data.
With PCA, I can reduce the dimension of the rainfall data to a few leading modes, yielding EOFs (which explain spatial variability) and principal components (explaining temporal variability).
I would like to perform a similar analysis with wavelets using the Matlab Wavelet Toolbox. As of now, I am able to decompose the data in 2D (spatial decomposition) but unable to take into account the temporal variability in the data.
My first course of action has been to first compress the data with PCA and then perform wavelet decomposition of the leading modes in both the spatial (EOF) and temporal (PC) domains.
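For concreteness, the pipeline I have in mind looks roughly like this (R is my time-by-gridpoint rainfall matrix; the number of modes, the choice of wavelets and the grid-size names nLat/nLon are placeholders):
% R : nTime-by-nGrid rainfall (anomaly) matrix, placeholder name
[EOFs, PCs] = pca(R, 'NumComponents', 5);      % EOFs: nGrid-by-5 spatial patterns, PCs: nTime-by-5 time series
wtPC = cell(5, 1);
for k = 1:5
    wtPC{k} = cwt(PCs(:, k));                  % temporal side: continuous wavelet transform of each leading PC
end
eofMap = reshape(EOFs(:, 1), nLat, nLon);      % map the first EOF back onto the lon-lat grid
[C, S]  = wavedec2(eofMap, 3, 'db4');          % spatial side: 3-level 2-D decomposition with Daubechies-4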
I am wondering if this is the right way to perform such an analysis and would like suggestions as to how to proceed.
Thanks a lot.

Convert SURFpoints object MATLAB

Is there any way I can convert the SURFPoints object generated by Matlab into a matrix with x and y positions, for feeding into a neural network?
I am pretty much a complete beginner, but from what I can tell, and from looking at the documentation, I couldn't see a way to get SURFPoints into a neural network.
Many thanks,
Hugh
SURFPoints has a field, Location, which is an n-by-2 matrix containing the (x, y) coordinates of each SURF point detected in the image.
Note, however, that SURF points have other attributes besides their location (such as scale and orientation). If you only take the (x, y) locations into account, you are throwing away a lot of information.
Also, it's unclear how you would feed this information into a neural network. A neural network, like many other machine learning models, expects a fixed-length feature vector per entity. If your task is something like image classification, you'll have to come up with some way to convert the list of SURF points into a feature vector that captures the properties you want your classifier to care about. Depending on your application, a neural network may or may not be the best way to go. In the context of computer vision and image processing, neural networks these days are more commonly used for unsupervised feature discovery (see "deep learning"). For supervised learning tasks, other models such as boosted decision trees and SVMs give better theoretical guarantees and have fared much better in practice.
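For the mechanical part of the question, getting the locations (and the full 64-dimensional descriptors) out of the object looks like this (I is a grayscale image; whether this makes a sensible network input is the bigger issue above):
points = detectSURFFeatures(I);                             % SURFPoints object
xy     = points.Location;                                   % n-by-2 matrix of (x, y) coordinates
[descriptors, validPoints] = extractFeatures(I, points);    % n-by-64 SURF descriptors
strongest = points.selectStrongest(50);                     % one way to get a fixed-size set: keep the 50 strongest points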