PCA using princomp in MATLAB (for face recognition)

I'm trying to do dimensionality reduction using MATLAB's princomp, but I'm not sure I'm doing it right.
Here is my code, just for testing; I'm not sure I'm doing the projection right:
A = rand(4,3)
AMean = mean(A)
[n m] = size(A)
Ac = (A - repmat(AMean,[n 1]))
pc = princomp(A)
k = 2; %Number of first principal components
A_pca = Ac * pc(1:k,:)' %Not sure I'm doing projection right
reconstructedA = A_pca * pc(1:k,:)
error = reconstructedA- Ac
And here is my code for face recognition using the ORL dataset:
%load orl_data 400x768 double matrix (400 images 768 features)
%make labels
orl_label = [];
for i = 1:40
orl_label = [orl_label;ones(10,1)*i];
end
n = size(orl_data,1);
k = randperm(n);
s = round(0.25*n); %Take 25% for train
%Raw pixels
%Split on test and train sets
data_tr = orl_data(k(1:s),:);
label_tr = orl_label(k(1:s),:);
data_te = orl_data(k(s+1:end),:);
label_te = orl_label(k(s+1:end),:);
tic
[nn_ind, estimated_label] = EuclDistClassifier(data_tr,label_tr,data_te);
toc
rate = sum(estimated_label == label_te)/size(label_te,1)
%Using PCA
tic
pc = princomp(data_tr);
toc
mean_face = mean(data_tr);
pc_n = 100;
f_pc = pc(1:pc_n,:)';
data_pca_tr = (data_tr - repmat(mean_face, [s,1])) * f_pc;
data_pca_te = (data_te - repmat(mean_face, [n-s,1])) * f_pc;
tic
[nn_ind, estimated_label] = EuclDistClassifier(data_pca_tr,label_tr,data_pca_te);
toc
rate = sum(estimated_label == label_te)/size(label_te,1)
If I choose enough principal components, the two approaches give equal recognition rates; if I use too few, the rate using PCA is poorer.
Here are some questions:
Is the princomp function the best way to calculate the first k principal components in MATLAB?
Do PCA-projected features give any extra accuracy over raw features, or just a smaller feature vector size (which is faster to compare)?
How can I automatically choose the minimum k (number of principal components) that gives the same accuracy as the raw feature vector?
What if I have a very big set of samples: can I use only a subset of them with comparable accuracy? Or can I compute PCA on one set and later "add" another set (I don't want to recompute PCA for set1+set2, but somehow iteratively add the information from set2 to the existing PCA of set1)?
I also tried the GPU version simply using gpuArray:
%Test using GPU
tic
A_cpu = rand(30000,32*24);
A = gpuArray(A_cpu);
AMean = mean(A);
[n m] = size(A)
pc = princomp(A);
k = 100;
A_pca = (A - repmat(AMean,[n 1])) * pc(1:k,:)';
A_pca_cpu = gather(A_pca);
toc
clear;
tic
A = rand(30000,32*24);
AMean = mean(A);
[n m] = size(A)
pc = princomp(A);
k = 100;
A_pca = (A - repmat(AMean,[n 1])) * pc(1:k,:)';
toc
clear;
It works faster, but it doesn't seem suitable for big matrices. Maybe I'm wrong?
If I use a big matrix, it gives me:
Error using gpuArray Out of memory on device.

"Is princomp function the best way to calculate first k principal components using MATLAB?"
It's computing a full SVD, so it will be slow on large datasets. You can speed this up significantly by specifying the number of dimensions you need at the start and computing a partial SVD. The MATLAB function for a partial SVD is svds.
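For example, here's a minimal sketch of keeping only the first k components with svds (X here stands for an assumed samples-by-features data matrix, not a variable from your code):
k = 100; %Number of components to keep
Xc = bsxfun(@minus, X, mean(X,1)); %Center each feature
[U, S, V] = svds(Xc, k); %Partial SVD: only the k largest singular triplets
coeff = V; %Principal component coefficients (features-by-k)
score = U * S; %Projected data, same as Xc * V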
If svds isn't fast enough for you, there's a more modern implementation here:
http://cims.nyu.edu/~tygert/software.html (matlab version: http://code.google.com/p/framelet-mri/source/browse/pca.m )
(cf. the paper describing the algorithm: http://cims.nyu.edu/~tygert/blanczos.pdf )
You can control the error of your approximation by increasing the number of singular vectors computed; there are precise bounds on that in the linked paper. Here's an example:
>> A = rand(40,30); %random rank-30 matrix
>> [U,S,V] = pca(A,2); %compute a rank-2 approximation to A
>> norm(A-U*S*V',2)/norm(A,2) %relative error
ans =
0.1636
>> [U,S,V] = pca(A,25); %compute a rank-25 approximation to A
>> norm(A-U*S*V',2)/norm(A,2) %relative error
ans =
0.0410
When you have large data and a sparse matrix, computing a full SVD is often impossible, since the factors will never be sparse. In this case you must compute a partial SVD to fit within memory. Example:
>> A = sprandn(5000,5000,10000);
>> tic;[U,S,V]=pca(A,2);toc;
no pivots
Elapsed time is 124.282113 seconds.
>> tic;[U,S,V]=svd(A);toc;
??? Error using ==> svd
Use svds for sparse singular values and vectors.
>> tic;[U,S,V]=princomp(A);toc;
??? Error using ==> svd
Use svds for sparse singular values and vectors.
Error in ==> princomp at 86
[U,sigma,coeff] = svd(x0,econFlag); % put in 1/sqrt(n-1) later
>> tic;pc=princomp(A);toc;
??? Error using ==> eig
Use eigs for sparse eigenvalues and vectors.
Error in ==> princomp at 69
[coeff,~] = eig(x0'*x0);


Understanding PCA in MATLAB

What is the difference between the following two functions?
prepTransform.m
function [mu trmx] = prepTransform(tvec, comp_count)
% Computes transformation matrix to PCA space
% tvec - training set (one row represents one sample)
% comp_count - count of principal components in the final space
% mu - mean value of the training set
% trmx - transformation matrix to comp_count-dimensional PCA space
% this is memory-hungry version
% commented out is the version proper for Win32 environment
tic;
mu = mean(tvec);
cmx = cov(tvec);
%cmx = zeros(size(tvec,2));
%f1 = zeros(size(tvec,1), 1);
%f2 = zeros(size(tvec,1), 1);
%for i=1:size(tvec,2)
% f1(:,1) = tvec(:,i) - repmat(mu(i), size(tvec,1), 1);
% cmx(i, i) = f1' * f1;
% for j=i+1:size(tvec,2)
% f2(:,1) = tvec(:,j) - repmat(mu(j), size(tvec,1), 1);
% cmx(i, j) = f1' * f2;
% cmx(j, i) = cmx(i, j);
% end
%end
%cmx = cmx / (size(tvec,1)-1);
toc
[evec eval] = eig(cmx);
eval = sum(eval);
[eval evid] = sort(eval, 'descend');
evec = evec(:, evid(1:size(eval,2)));
% save 'nist_mu.mat' mu
% save 'nist_cov.mat' evec
trmx = evec(:, 1:comp_count);
pcaTransform.m
function [pcaSet] = pcaTransform(tvec, mu, trmx)
% tvec - matrix containing vectors to be transformed
% mu - mean value of the training set
% trmx - pca transformation matrix
% pcaSet - output set transformed to PCA space
pcaSet = tvec - repmat(mu, size(tvec,1), 1);
%pcaSet = zeros(size(tvec));
%for i=1:size(tvec,1)
% pcaSet(i,:) = tvec(i,:) - mu;
%end
pcaSet = pcaSet * trmx;
Which one is actually doing PCA?
If one is doing PCA, what is the other one doing?
The first function, prepTransform, is actually doing the PCA on your training data, where you determine the new axes to represent your data in a lower dimensional space. It finds the eigenvectors of the covariance matrix of your data and then orders the eigenvectors so that the eigenvector with the largest eigenvalue appears in the first column of the eigenvector matrix evec and the eigenvector with the smallest eigenvalue appears in the last column. What's important with this function is that you can define how many dimensions you want to reduce the data down to by keeping the first N columns of evec, which will allow you to reduce your data down to N dimensions. Discarding the other columns and keeping only the first N is what is set as trmx in the code. The variable N is defined by the comp_count variable in the prepTransform function.
The second function, pcaTransform, finally transforms data that is defined within the same domain as your training data, but not necessarily the training data itself (it could be, if you wish), onto the lower dimensional space defined by the eigenvectors of the covariance matrix. To finally perform the reduction of dimensions, or dimensionality reduction as it is popularly known, you simply take your data, subtract the mean of each feature from it, and multiply by the matrix trmx. Note that prepTransform outputting the mean of each feature in the vector mu is important in order to mean-subtract your data when you finally call pcaTransform.
How to use these functions
To use these functions effectively, first determine the trmx matrix, which contains the principal components of your data, by defining how many dimensions you want to reduce your data down to, along with the mean of each feature stored in mu:
N = 2; % Reduce down to two dimensions for example
[mu, trmx] = prepTransform(tvec, N);
Next, you can finally perform the dimensionality reduction on data that is defined within the same domain as tvec (or even tvec itself if you wish, but it doesn't have to be) by:
pcaSet = pcaTransform(tvec, mu, trmx);
In terms of vocabulary, pcaSet contains what are known as the principal scores of your data, which is the term used for the transformation of your data to the lower dimensional space.
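As a quick sanity check (a sketch reusing the names above), you can map the scores back to the original space and inspect the reconstruction error, which shrinks as comp_count grows:
reconstructed = pcaSet * trmx' + repmat(mu, size(pcaSet,1), 1); % back-projection plus the mean
reconError = norm(tvec - reconstructed, 'fro') % small when comp_count is close to the full dimension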
If I can recommend something...
Finding PCA through the eigenvector approach is known to be numerically unstable. I highly recommend you use the Singular Value Decomposition via svd on the covariance matrix, where the V matrix of the result already gives you the eigenvectors, sorted, that correspond to your principal components:
mu = mean(tvec, 1);
[~,~,V] = svd(cov(tvec));
Then perform the transformation by taking the mean subtracted data per feature and multiplying by the V matrix, once you subset and grab the first N columns of V:
N = 2;
X = bsxfun(@minus, tvec, mu);
pcaSet = X*V(:, 1:N);
X is the mean-subtracted data, which performs the same thing as pcaSet = tvec - repmat(mu, size(tvec,1), 1);, but without explicitly replicating the mean vector over each training example; bsxfun does that for you internally. As of MATLAB R2016b, this repeating can be done without the explicit call to bsxfun, thanks to implicit expansion:
X = tvec - mu;
Further Reading
If you fully want to understand the code that was written and the theory behind what it's doing, I recommend the following two Stack Overflow posts that I have written that talk about the topic:
What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?
How to use eigenvectors obtained through PCA to reproject my data?
The first post brings the code you presented into light, which performs PCA using the eigenvector approach. The second post touches on how you'd do it using the SVD towards the end of the answer. This answer I've written here is a mix of the two posts above.

Loopless Gaussian mixture model in Matlab

I have several Gaussian distributions and I want to draw different values from all of them at the same time. Since this is basically what a GMM does, I have looked into MATLAB's GMM implementation (gmrnd) and I have seen that it performs a simple loop over all the components.
I would like to implement it in a faster way, but the problem is that 3d matrices are involved. A simple version (with a loop) would be:
n = 10; % number of Gaussians
d = 2; % dimension of each Gaussian
mu = rand(d,n); % init some means
U = rand(d,d,n); % init some covariances with their Cholesky decomposition (Cov = U'*U)
I = repmat(triu(true(d,d)),1,1,n);
U(~I) = 0;
r = randn(d,n); % random values for drawing samples
samples = zeros(d,n);
for i = 1 : n
samples(:,i) = U(:,:,i)' * r(:,i) + mu(:,i);
end
Is it possible to speed it up? I do not know how to deal with the 3d covariance matrices (without using cellfun, which is much slower).
A few improvements (hopefully they are improvements) could be suggested here.
PART #1: You can replace the following piece of code -
I = repmat(triu(true(d,d)),[1,1,n]);
U(~I) = 0;
with a bsxfun(@times,..) one-liner -
U = bsxfun(@times,triu(true(d,d)),U)
PART #2: You can kill the loopy portion of the code again with bsxfun(@times,..) like so -
samples = squeeze(sum(bsxfun(@times,U,permute(r,[1 3 2])),1)) + mu
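If you want to verify the one-liner, here's a quick sketch that checks it against the loop from the question (same U, r and mu):
samples_loop = zeros(d,n);
for i = 1 : n
samples_loop(:,i) = U(:,:,i)' * r(:,i) + mu(:,i);
end
max(abs(samples(:) - samples_loop(:))) % ~0 up to floating point error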
I'm not fully convinced this is faster, but it gets rid of the loop. It would be interesting to see benchmarking results if you can do that. I also think this makes the code rather ugly and a bit hard to deduce what's going on, but I'll let you decide between readability and performance.
Anyway, I decided to define a big n*d dimensional Gaussian where each block d of variates are independent of each other (as in the original). This allows defining the covariance as a block diagonal matrix, for which I use blkdiag. From there, it is a matter of applying bsxfun to remove the need for looping.
Using the same random seed, I can recover the same samples as your code:
%// sampling with block diagonal covariance matrix
rng(1) %// set random seed
Ub = mat2cell(U, d, d, ones(n,1)); %// 1-by-1-by-10 cell of 2-by-2 matrices
C = blkdiag(Ub{:});
Ns = 1; %// number of samples
joint_samples = bsxfun(@plus, C'*randn(d*n, Ns), mu(:));
new_samples = reshape(joint_samples, [d n]); %// or [d n Ns] if Ns > 1
%//Compare to original
rng(1) %// set same seed for repeatability
r = randn(d,n); % random values for drawing samples
samples = zeros(d,n);
for i = 1 : n
samples(:,i) = U(:,:,i)' * r(:,i) + mu(:,i);
end
isequal(samples, new_samples) %// true

Matlab - FFT of Gaussian - Equivalency

simple problem:
I plot out a 2D Gaussian function with a certain resolution in Matlab and test with sigma = 1.0. I want to compare it to the result of FFT(Gaussian), which should be another Gaussian with a variance of 1./sigma. Since I am testing with sigma = 1.0, I would think I should get two equivalent 2D kernels.
i.e.
g1FFT = buildKernel(rows, cols, mu, sigma) % uses normpdf over arbitrary resolution (rows, cols, 3) with the peak in the center
buildKernel:
function result = buildKernel(rows, cols, mu, sigma)
result = zeros(rows, cols, 3);
center_w = floor(cols / 2);
center_h = floor(rows / 2);
for i = 1:rows
for j = 1:cols
distance = sqrt((center_w - j).^2 + (center_h - i).^2);
g_val = normpdf(distance, mu, sigma);
result(i, j, :) = g_val;
end
end
% normalize so that kernel sums to 1
sumKernel = sum(result(:));
result = result ./ sumKernel;
end
I am testing with mu = 0.0 (always) and sigma = 1.0, and comparing against the result of FFT(Gaussian), which should be another Gaussian with a variance of 1./sigma.
i.e.
g1FFT = circshift(g1FFT, [rows/2, cols/2, 0]); % fft2 expects center to be in corners
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]); % shift back to center, for comparison's sake
Since I am testing with sigma = 1.0, I would think that I should get two equivalent, 2D kernels, because if sigma = 1.0, then 1.0/sigma = 1.0. So, g1FFT would equal freq_G1.
However, I do not. They have different magnitudes, even after normalization. Is there something I am missing?
To keep things simple, I will first cover the case for one-dimensional signals. Similar observations can be made for multi-dimensional cases.
The Fourier Transform of a continuous time Gaussian signal is itself a Gaussian function, as indicated in standard transform tables. One can note that the wider the Gaussian in the time domain, the narrower the transformed Gaussian in the frequency domain, and that for mu=0 and sigma=1/sqrt(2*pi) (which corresponds to a=1/(2*sigma^2)=pi in such a transform table), the Fourier Transform of the continuous time function
g(t) = exp(-pi*t^2)
would be the similar function (where only a change of variables occurred):
G(f) = exp(-pi*f^2)
That's all good, but this is for a continuous time signal, and we are really interested in discrete time signals.
Unfortunately, and as also indicated on wikipedia, the Discrete Fourier Transform of a kernel obtained by sampling the continuous time Gaussian function is not itself a sampled Gaussian function.
Fortunately, this relationship is still often approximately true (without going into too much detail, it requires the time-domain kernel to be wide enough, but not too wide, such that the frequency-domain approximation is also wide enough for the relationship to be approximately true for the inverse transform as well). In this case, the Discrete Fourier Transform of the periodic extension (with period N) of the discrete time signal
g[n] = exp(-pi*n^2/N)
where mu=0 and sigma=sqrt(N/(2*pi)), could be approximated by the similar function (up to a scaling factor and a change of variables):
G[k] = sqrt(N)*exp(-pi*k^2/N)
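As a rough numerical check of this approximation (a sketch; N here is arbitrary):
N = 256;
n = [0:N/2-1, -N/2:-1]'; % sample indices wrapped the way fft expects
g = exp(-pi*n.^2/N); % sampled Gaussian with sigma = sqrt(N/(2*pi))
G = real(fft(g)); % imaginary part is negligible since g is symmetric
G_approx = sqrt(N)*exp(-pi*n.^2/N); % predicted scaled Gaussian
max(abs(G - G_approx)) % tiny compared to sqrt(N)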
You could then modify buildKernel to support different standard deviations, sqrt(rows/(2*pi)) and sqrt(cols/(2*pi)), along the rows and columns respectively:
function result = buildKernel(rows, cols, mu, sigma)
if (length(mu)>1)
mu_h = mu(1);
mu_w = mu(2);
else
mu_h = mu;
mu_w = mu;
end
if (length(sigma)>1)
sigma_h = sigma(1);
sigma_w = sigma(2);
else
sigma_h = sigma;
sigma_w = sigma;
end
center_w = mu_w + floor(cols / 2);
center_h = mu_h + floor(rows / 2);
r = transpose(normpdf([0:rows-1],center_h,sigma_h));
c = normpdf([0:cols-1],center_w,sigma_w);
result = repmat(r * c, [1 1 3]);
% normalize so that kernel sums to 1
sumKernel = sum(result(:));
result = result ./ sumKernel;
end
which you could use to get a kernel whose FFT is a scaled version of itself. In other words a kernel obtained using
g1FFTin = buildKernel(rows, cols, mu, [sqrt(rows/2/pi) sqrt(cols/2/pi)]);
would be such that freq_G1 (as computed in your posted code) is nearly equal to g1FFTin * sqrt(rows*cols).
Finally given that your intention is really only to test that the kernel's FFT is also (approximately) Gaussian, you may wish to compare the FFT of a more arbitrary kernel with standard deviation sigma against another appropriately scaled Gaussian kernel computed directly in the frequency domain. In other words, assuming a spatial domain kernel obtained with:
g1FFTin = buildKernel(rows, cols, mu, sigma);
with corresponding frequency-domain representation obtained with:
g1FFT = circshift(g1FFTin, [rows/2, cols/2, 0]);
freq_G1 = fft2(g1FFT);
freq_G1 = circshift(freq_G1, [-rows/2, -cols/2, 0]);
Then freq_G1 can be compared against another appropriately scaled Gaussian kernel computed directly in the frequency domain:
freq_G1_approx = (rows*cols/(2*pi*sigma^2))*buildKernel(rows, cols, mu, [rows cols]/(2*pi*sigma));
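To quantify how close the two are (a sketch assuming even rows and cols and the revised buildKernel above; freq_G1 is taken as nearly real since the kernel is symmetric):
relErr = norm(real(freq_G1(:)) - freq_G1_approx(:)) / norm(freq_G1_approx(:)) % small if the approximation holds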

Gaussian random function

Using normrnd, I would like to create a normal distribution with mean and sigma values expressed as 1x45 vectors varying over 1:45, and plot this simulated PDF against the ideal values.
Whenever I create a normrnd like the one expressed below,
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
I am obtaining the following error,
Size information is inconsistent.
The reason for creating this PDF is to compute the chemical kinetics of a tracer with a variable Gaussian noise model. Basically, I have the ideal characteristics of a tracer, and now I would like to add Gaussian noise and understand how the chemical kinetics of the tracer vary with changing noise.
There are different computational models for understanding the chemical kinetics of a tracer; one is the three-compartmental model, and others are shape analysis and the constrained shape analysis model.
I currently have the ideal curve for each respective model; now I would like to add noise to these models and understand how each particular model behaves with varying noise.
This is why I would like to create a variable noise model with normrnd, add this model to the ideal characteristics, and compute Noise (Sigma) vs Error. This analysis will give me an approximate estimate of how different models behave with varying noise and which model is suitable for estimating the chemical kinetics of the tracer.
function [c_t,c_t_noise] =Noise_ConstrainedK2(t,a1,a2,a3,b1,b2,b3,td,tmax,k1,k2,k3)
K_1 = (k1*k2)/(k2+k3);
K_2 = (k1*k3)/(k2+k3);
%DV_free= k1/(k2+k3);
c_t = zeros(size(t));
ind = (t > td) & (t < tmax);
c_t(ind)= conv(((t(ind) - td) ./ (tmax - td) * (a1 + a2 + a3)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
ind = (t >= tmax);
c_t(ind)=conv((a1 * exp(-b1 * (t(ind) - tmax))+ a2 * exp(-b2 * (t(ind) - tmax))) + a3 * exp(-b3 * (t(ind) - tmax)),(K_1*exp(-(k2+k3)*t(ind)+K_2)),'same');
meanAndVar = (rand(45,2)-0.5)*2;
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
idx = mod(ii,size(meanAndVar,1))+1;
randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
c_t_noise = c_t + randSamples(ii);
end
scatter(1:numPoints,randSamples)
dg = [0 0.5 0];
plot(t,c_t,'r');
hold on;
plot(t,c_t_noise,'Color',dg);
hold off;
axis([0 50 0 1900]);
xlabel('Time[mins]');
ylabel('concentration [Mbq]');
title('My signal');
%plot(t,c_tnp);
end
The output characteristics from the above function are shown in a plot (not reproduced here); I could not visualize any noise in it.
The only thing remotely close to what you want can be done as follows, but it will involve looping, because you cannot request 500 data points from only 45 different means and variances without assuming that sets can be revisited.
This is my interpretation of what you want, though I am still not entirely sure.
Random Gaussian Function Selection
meanAndVar = rand(45,2);
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
randMeanVarIdx = randi([1,size(meanAndVar,1)]);
randSamples(ii) = normrnd(meanAndVar(randMeanVarIdx,1),meanAndVar(randMeanVarIdx,2));
end
scatter(1:numPoints,randSamples)
The above code generates a random 2-D matrix of mean and variance (1st col = mean, 2nd col = variance). We then preallocate some space.
Inside the loop we choose a random set of mean and variance to use (uniformly), plug that mean and variance into the random Gaussian value function, and store the result.
The vector randSamples will then contain a list of random values generated by a random set of Gaussian functions chosen in a uniformly random manner.
Sequential Function Selection
If you do not want to randomly select which function to use and just want to go sequentially through the sets, loop using the modulus to get the index of which set of values to use.
meanAndVar = (rand(45,2)-0.5)*2; % zero shift and make bounds [-1,1]
numPoints = 500;
randSamples = zeros(1,numPoints);
for ii = 1:numPoints
idx = mod(ii,size(meanAndVar,1))+1;
randSamples(ii) = normrnd(meanAndVar(idx,1),meanAndVar(idx,2));
end
scatter(1:numPoints,randSamples)
The problem with this statement
Gaussian = normrnd([1 45],[1 45],[1 500],length(c_t));
is that you supply two mu values and two sigma values, and ask for a matrix of size [1 500] x length(c_t). You need to pass the size in a uniform way, so either
Gaussian = normrnd(mu, sigma,[500 length(c_t)]);
or
Gaussian = normrnd(mu, sigma, 500, length(c_t));
Then you should make sure that the size of the mu/sigma vectors matches the size of the matrix you ask for. So if you want a 500 x length(c_t) matrix as output, you need to pass 500 x length(c_t) (mu, sigma) pairs. If you only want to vary one of mu or sigma, you can pass a single value for the other parameter.
To get N values from a normal distribution with fixed mean and steadily increasing sigma you can do
noise = @(mu, s0, s1, n) normrnd(mu, s0:(s1-s0)/(n-1):s1, 1, n)
where s0 is the lowest sigma value and s1 is the largest sigma value. To get 10 values drawn from distributions with mu=0 and sigma increasing from 1 to 5 you can do
noise(0,1,5,10)
If you want to introduce some randomness in the increase of sigma you can do
noise_rand = @(mu, s0, s1, n) normrnd(mu, (s0:(s1-s0)/(n-1):s1) .* rand(1,n), 1, n)

MATLAB: Correlation with a seed region

By default, all built-in functions for computing correlation or covariance return a matrix. I am trying to write an efficient function that will compute the correlation between a seed region and various other regions, but I do not need the correlations between the other regions. I assume that computing the full correlation matrix would therefore be inefficient.
I could instead compute the correlation matrix between each region and the seed region, choose one of the off-diagonal entries and store it, but I feel like looping in this situation is also inefficient.
To be more concrete, each point in my 3-dimensional space has a time dimension. I am attempting to compute the mean correlation between a given point and all points in space within a given radius. I want to repeat this procedure hundreds of thousands of times, for many different radius lengths, and so on, so I would like for this to be as efficient as possible.
So, what is the best way to compute the correlation between a single vector and several others, without computing correlations that I will just ignore?
Thank you,
Chris
EDIT: Here is my code now...
function [corrMap] = TIME_meanCorrMap(A,radius)
% Even though the variable is "radius", we work with cubes for simplicity...
% So, the radius is the distance (in voxels) from the center of the cube to an edge.
denom = ((radius*2+1)^3)-1; % number of neighbors in the cube
dim = size(A);
corrMap = zeros(dim(1:3));
for x = radius+1:dim(1)-radius
    rx = x-radius : x+radius;
    for y = radius+1:dim(2)-radius
        ry = y-radius : y+radius;
        for z = radius+1:dim(3)-radius
            rz = z-radius : z+radius;
            corrCoefs = zeros(1,denom);
            seed = squeeze(A(x,y,z,:)); % seed timecourse as a column vector
            i = 0;
            for xx = rx
                for yy = ry
                    for zz = rz
                        if ~all([x y z] == [xx yy zz])
                            i = i + 1;
                            temp = corrcoef(seed, squeeze(A(xx,yy,zz,:)));
                            corrCoefs(i) = temp(1,2);
                        end
                    end
                end
            end
            corrMap(x,y,z) = mean(corrCoefs);
        end
    end
end
EDIT: Here are some more times to supplement the accepted answer.
Using bsxfun() to do normalization, and matrix multiplication to compute correlations:
tic; for i=1:10000
x=rand(100);
xz = bsxfun(@rdivide,bsxfun(@minus,x,mean(x)),std(x));
cc = xz(:,2:end)' * xz(:,1) ./ 99;
end; toc
Elapsed time is 6.928251 seconds.
Using zscore() to normalize, matrix multiplication to compute correlations:
tic; for i=1:10000
x=rand(100);
xz = zscore(x);
cc = xz(:,2:end)' * xz(:,1) ./ 99;
end; toc
Elapsed time is 7.040677 seconds.
Using bsxfun() to normalize, and corr() to compute correlations.
tic; for i=1:10000
x=rand(100);
xz = bsxfun(@rdivide,bsxfun(@minus,x,mean(x)),std(x));
cc = corr(x(:,1),x(:,2:end));
end; toc
Elapsed time is 11.385707 seconds.
It is certainly possible to improve upon the for loop that you are currently employing. The correlation computations can be parallelized using matrix multiplications if you have sufficient RAM. However, it will require you to unwrap your 4-dimensional data matrix A into a different shape. Most likely you are dealing with 3-dimensional voxelwise fMRI data, in which case you'll have to reshape from an [x y z time] matrix to an [index time] matrix, as sketched below. Once you have your seed timecourse [Time by 1] and your target timecourses [Time by NumTargets] ready, you can perform some much more efficient computations.
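For reference, here is a hedged sketch of that reshaping step (seedIdx, targetIdx and the seed coordinates xs, ys, zs are illustrative placeholders):
dim = size(A); % A is [x y z time]
M = reshape(A, prod(dim(1:3)), dim(4)); % one row per voxel: [index time]
seedIdx = sub2ind(dim(1:3), xs, ys, zs); % linear index of the seed voxel
seed = M(seedIdx, :)'; % [Time by 1]
targets = M(targetIdx, :)'; % [Time by NumTargets], targetIdx holds linear indices of target voxels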
A quick way to efficiently compute the desired correlation is using the corr function in MATLAB. This function will accept 2 matrix arguments and it will quite efficiently compute all pairwise correlations between the columns of argument 1 and the columns of argument 2, e.g.
T = 200; %time samples
N = 20; %number of other voxels
seed = randn(T,1); %data from seed voxel
targets = randn(T,N); %data from target voxels
%here is the for loop method
tic
for n = 1:N
tmp = corrcoef(seed, targets(:,n));
tmpcc = tmp(1,2);
end
looptime = toc;
%here is the parallel method
tic
cc = corr(seed, targets);
matrixtime = toc;
On my machine, the parallel operation in corr is faster than the loop method by a factor proportional to T*N.
It is possible to go a little faster than the corr function if you are willing to perform the underlying matrix operations yourself, and in any case it is worth knowing what they are. The correlation between two vectors is basically a normalized dot product, so using the conventions above you can compute the correlations in the following way:
zseed = zscore(seed); %normalize the seed timecourse by z-scoring
ztargets= zscore(targets); %normalize the target timecourses by z-scoring
ztargets = ztargets'; %flip columns and rows for convenience
cc2 = ztargets*zseed./(T-1); %compute many dot products with one matrix multiplication
The code above is basically what the corr function does, which is why it is much faster than the loop. Note that most of the operation time is spent in the zscore operations, and you can improve on the performance of the corr function if you compute the z-scores more efficiently with the bsxfun command. For now, I hope this gives you some direction on how to compute a correlation between a seed timecourse and many target timecourses without having to loop through and compute each one separately.
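For completeness, here is a sketch of that bsxfun-based z-scoring, using the same seed/targets conventions as above:
zseed2 = bsxfun(@rdivide, bsxfun(@minus, seed, mean(seed)), std(seed)); % manual z-score of the seed
ztargets2 = bsxfun(@rdivide, bsxfun(@minus, targets, mean(targets)), std(targets)); % per-column z-score of the targets
cc3 = ztargets2' * zseed2 ./ (T-1); % [N by 1], matches corr(seed, targets)'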