Sample-by-sample cross-correlation (xcorr) in MATLAB

I am using the xcorr function to identify the similarity of signals. The following is the code:
r1 = max(abs(xcorr(S1, shat1, 'coeff')));
r2 = max(abs(xcorr(S1, shat2, 'coeff')));
if r1 > r2
    dn = shat2;
else
    dn = shat1;
end
It works perfectly. The problem is that the signals have 40,000 samples each, so in practice I get a lot of delay. I would like to feed batches of samples (say, 250 samples) into xcorr to get rid of the delay, but how do I do that? I know I have to use a for loop, but I am finding it difficult to write. Can someone suggest how? I tried something like this:
for i = 1:250:40000
    r1 = max(abs(xcorr(S1(:,i), shat1(:,i), 'coeff')));
but I am totally lost. Could someone suggest something, please?

If I understand you correctly, you want to cross-correlate blocks of 250 samples, one after the other. Adapting from your attempt, try:
r1 = zeros(1, 40000/250);   % one correlation value per block
for i = 1:250:40000
    r1((i-1)/250 + 1) = max(abs(xcorr(S1(i:i+249), shat1(i:i+249), 'coeff')));
end
As a side note, do you know anything about the maximum lag between your signals? If you can safely assume that the temporal shift between your signals is below 250 (which the idea of splitting into intervals suggests), you could save computation time by modifying your original code to use maxlag, an optional parameter of xcorr:
maxlag = 250;   % or some other reasonable value, maybe even 100? 50?
r1 = max(abs(xcorr(S1, shat1, maxlag, 'coeff')));
r2 = max(abs(xcorr(S1, shat2, maxlag, 'coeff')));
...
I haven't tested how fast that would be, but my guess is you might be able to avoid your loop altogether with this...
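If you do end up needing a per-block decision between the two candidate signals, the two ideas combine naturally. A rough sketch (the block size of 250 and the idea of assembling dn block by block are my assumptions, not something from the original post):
blk = 250;
dn = zeros(size(S1));
for i = 1:blk:numel(S1)          % assumes numel(S1) is a multiple of blk
    idx = i:i+blk-1;
    r1 = max(abs(xcorr(S1(idx), shat1(idx), 'coeff')));
    r2 = max(abs(xcorr(S1(idx), shat2(idx), 'coeff')));
    if r1 > r2                   % same comparison as the original code
        dn(idx) = shat2(idx);
    else
        dn(idx) = shat1(idx);
    end
end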

Related

Avoiding for loops with cells and matrices involved

I am trying to avoid for loops, and I have been reading through all the old posts about it, but I am not able to solve my problem. I am new to MATLAB, so apologies for my ignorance.
The thing is that I have a 300x2 cell array, and each cell contains a 128x128x256 matrix. Each one is an image with 128x128 pixels and 256 channels per pixel. The first column of the 300x2 cell array holds my parallel intensity values and the second one my perpendicular intensity values.
What I want to do is take every pixel of every image (for each component) and sum the intensity values channel by channel.
The code I have is the following:
Image_par_channels = zeros(128,128,256);
Image_per_channels = zeros(128,128,256);
Image_tot_channels = zeros(128,128,256);
for a = 1:128
    for b = 1:128
        for j = 1:256
            for i = 1:numfiles
                Image_par_channels(a,b,j) = Image_par_channels(a,b,j) + Image_cell_par_per{i,1}(a,b,j);
                Image_per_channels(a,b,j) = Image_per_channels(a,b,j) + Image_cell_par_per{i,2}(a,b,j);
            end
            Image_tot_channels(a,b,j) = Image_par_channels(a,b,j) + 2*G*Image_per_channels(a,b,j);
        end
    end
end
I think I could speed it up by using (:,:,j) instead of indexing a and b explicitly, but that still leaves a for loop. I have been trying to use cellfun without any success, due to my lack of expertise. Could you please give me a hand?
I would really appreciate it.
Many thanks and have a nice day!
Y
I believe you could do something like
Image_par_channels = zeros(128,128,256);
Image_per_channels = zeros(128,128,256);
Image_tot_channels = zeros(128,128,256);
for i = 1:numfiles
    Image_par_channels = Image_par_channels + Image_cell_par_per{i,1};
    Image_per_channels = Image_per_channels + Image_cell_par_per{i,2};
end
Image_tot_channels = Image_par_channels + 2*G*Image_per_channels;
I haven't worked with MATLAB in a long time, but I seem to recall you can do something like this. G is a constant.
EDIT:
Removed the +=. Incremental assignment is not an operator available in MATLAB. You should also note that Image_tot_channels can be built directly in the loop if you don't need the other two variables later.
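Since the question mentioned cellfun: a fully vectorized alternative is to concatenate each column of the cell array along a fourth dimension and sum over it. This is only a sketch, assuming numfiles = 300 and that every cell holds a 128x128x256 numeric array:
Image_par_channels = sum(cat(4, Image_cell_par_per{:,1}), 4);   % sum over all parallel images
Image_per_channels = sum(cat(4, Image_cell_par_per{:,2}), 4);   % sum over all perpendicular images
Image_tot_channels = Image_par_channels + 2*G*Image_per_channels;
Note that cat(4, ...) temporarily builds a 128x128x256x300 array (roughly 10 GB in double precision), so this trades memory for speed.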

vectorizing "for" loop with bidirectionally related variables

Last week I asked the following:
https://stackoverflow.com/questions/32658199/vectorizing-gibbs-sampler-in-matlab
Perhaps it was not clear there what I want to do, so this might be clearer. I would like to vectorize a for loop in MATLAB where some variables inside the loop are bidirectionally related. Here is an example:
A = 2;
B = 3;
for i = 1:10000
    A = 3*B;
    B = exp(A*(-1/2));
end
Thank you once again for your time.
A quick Excel calculation indicates that this converges quickly to 0.483908 (after far fewer than 10000 loops), so one way of speeding it up would be to check for convergence. If A and B always start at 2 and 3 respectively, you could just replace the loop with this value.
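For illustration, a sketch of that convergence check in MATLAB (the tolerance value is my own choice):
A = 2;
B = 3;
tol = 1e-12;
for i = 1:10000
    Bold = B;
    A = 3*B;
    B = exp(A*(-1/2));
    if abs(B - Bold) < tol   % stop once the fixed point is reached
        break
    end
end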
Alternatively, using some series analysis you might be able to come up with an analytical expression for B when i is large - although with the nested exponents deriving this is a bit beyond my own abilities!
Edit
A bit of googling reveals this. Wikipedia states that for a tetration of x to infinity (i.e. x^x^x^x^x...), the limit y satisfies y = x^y. In your case, for example, 0.483908 = (e^(-3/2))^0.483908, so 0.483908 is a solution. Not sure how you would exploit this, though.
Wikipedia also gives a convergence condition which might be of use to you: x must lie between e^(-e) and e^(1/e).
Final Edit (?)
It turns out you need Lambert's W function to solve equations of the form y = x^y. There seems to be no such function in base MATLAB (the Symbolic Math Toolbox does provide lambertw), but there seems to be something on the File Exchange - see here and here.
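If all you need is the numeric value, base MATLAB's fzero can also find the fixed point directly, without the File Exchange code. A sketch:
x = exp(-3/2);
y = fzero(@(y) x^y - y, 0.5);   % solves y = x^y; returns approximately 0.483908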

Resample factors are too large

I have a large vector of recorded data which I need to resample. The problem I encounter is that when using resample, I get the following error:
??? Error using ==> upfirdn at 82
The product of the downsample factor Q and the upsample factor P must be less than 2^31.
Now, I understand why this is happening - my two sampling rates are very close together, so the integer factors need to be quite large (something like 73999/74000). Unfortunately this means that MATLAB can't create the appropriate filter. I also tried resampling up first, with the intention of then resampling down, but there is not enough memory to do this for even 1 million samples of data (mine is 93M samples).
What other methods could I use to properly resample this data?
An interpolated polyphase FIR filter can be used to compute just the new set of sample points, without an upsampling+downsampling process.
But if performance is completely unimportant, here's a quick-and-dirty windowed-sinc interpolator in Basic.
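As a rough illustration of that idea - computing only the new sample points, here with plain linear interpolation rather than a polyphase FIR, so the quality is lower - a sketch (the rates and the signal vector x are illustrative):
fsOld = 74000;                           % original sampling rate
fsNew = 73999;                           % desired sampling rate
tOld = (0:numel(x)-1) / fsOld;           % original sample times
tNew = 0 : 1/fsNew : tOld(end);          % new sample times
y = interp1(tOld, x, tNew, 'linear');    % evaluate only at the new points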
Here's my code, I hope it helps:
function resig = resamplee(sig, upsample, downsample)
% Note: as used below, 'upsample' is the target output length and
% 'downsample' is the current signal length.
if upsample*downsample < 2^31
    resig = resample(sig, upsample, downsample);
else
    % Split the signal in half and resample each half recursively,
    % so the factor product stays below 2^31.
    sig1half = sig(1:floor(length(sig)/2));
    sig2half = sig(floor(length(sig)/2)+1:end);   % +1 avoids duplicating the middle sample
    resig1half = resamplee(sig1half, floor(upsample/2), length(sig1half));
    resig2half = resamplee(sig2half, upsample-floor(upsample/2), length(sig2half));
    resig = [resig1half; resig2half];
end
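For example, to take a 93-million-sample column vector x from 74000 Hz down to 73999 Hz, the call would be, if I read the function correctly (the variable names here are mine):
fsOld = 74000;
fsNew = 73999;
targetLen = round(length(x) * fsNew / fsOld);   % desired output length
y = resamplee(x, targetLen, length(x));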

Non-vectorized to vectorized, MATLAB

I have two issues. Please see the following code:
it = 0:0.01:360;
jt = 0:0.01:270;
LaserS = zeros(size(it,2)*size(jt,2), 2);
p = 1;
for m = it
    for n = jt
        LaserS(p,:) = [m, n];
        p = p + 1;
    end
end
It is very slow and also takes a lot of memory (about 7.7765e+009 bytes), so I can't run it. How can I improve it and solve the memory issue?
I'm using win7 64 with 8Gb RAM.
What are you trying to do? reshape should solve your problem.
JT = reshape(repmat(jt,[1,numel(it)]), 1, numel(jt)*numel(it));   % jt cycling
IT = reshape(repmat(it,[numel(jt),1]), 1, numel(jt)*numel(it));   % it in blocks of numel(jt)
LaserS = [IT.', JT.'];   % it in column 1, jt in column 2, matching your loop's row order
Note that in your loop version it is the pre-allocation of LaserS that saves you the memory hit of growing the array; beyond that, there is no memory optimization here.
You can't reduce memory usage unless you use fewer values. Can it and jt step by 0.1 instead of 0.01?
Here's a way to build your result matrix without a loop:
LaserS = [kron(it.', ones(length(jt),1)), repmat(jt.', length(it), 1)];
Here kron repeats each it value in a block of length(jt) rows, and repmat cycles jt, matching the row order of your loop.
This code seems to be doing "nothing" in the sense that, after it runs, you end up with the matrix LaserS with a shape of 972000000 x 2. If you really need all these values loaded in memory at the same time, that is the size, and there is not much you can do about it.
My first approach - although whether it applies cannot be inferred directly from the code you posted - would be to generate the matrix data "on the fly" while you perform further processing over LaserS, if that still achieves the overall goal of your program; see the sketch below.
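For instance, a sketch of that "on the fly" idea - processing one block of rows at a time instead of materializing all 972e6 of them (processBlock is a hypothetical stand-in for your downstream processing):
it = 0:0.01:360;
jt = 0:0.01:270;
for m = it
    block = [repmat(m, numel(jt), 1), jt(:)];   % one it-block of LaserS rows
    processBlock(block);                        % hypothetical downstream step
end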
Hope this helps!
This should do it:
nI = numel(it);
nJ = numel(jt);
it = it(:);
jt = jt(:);
it = repmat(it.', nJ, 1);   % nJ-by-nI, each column one it value
it = it(:);                 % it values in blocks of nJ
jt = repmat(jt, nI, 1);     % jt cycling through all the blocks
LaserS = [it, jt];
In addition to the nice solutions presented here already: if you want to reduce memory, there's no reason to use double. You can use single and halve the memory required. Or you can encode the 0.01 step size as a unit step (that is, it = uint16(0:1:36000)), thus storing the numbers as uint16 integers; this will use only one quarter of the memory. Etc.
meshgrid will do this cleanly as well, perhaps with a reshape if you insist on having it in an (n*m)-by-2 matrix. But why do you want that? It seems like you're actually after something else, and it's possible that bsxfun(@f, it, jt') (for a suitable function f) would do what you want.
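A sketch of the meshgrid route, for completeness (it reproduces the same row order as the nested loop):
it = 0:0.01:360;
jt = 0:0.01:270;
[ITg, JTg] = meshgrid(it, jt);   % ITg varies across columns, JTg down rows
LaserS = [ITg(:), JTg(:)];       % column-major (:) gives it in blocks, jt cycling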

MATLAB runs out of memory during program execution

I have been happily using MATLAB to solve some Project Euler problems. Yesterday, I wrote some code to solve one of them (problem 14). When I write code containing long loops, I always test it first with short loops. If it runs fine and does what it's supposed to do, I assume this will also be the case when the loop is longer.
This assumption turned out to be wrong. While executing the code below, MATLAB ran out of memory somewhere around the 75000th iteration.
c = 1;
e = 1000000;
for s = c:e
    n = s;
    t = 1;
    while n > 1
        a(s,t) = n;
        if mod(n,2) == 0
            n = n/2;
        else
            n = 3*n + 1;
        end
        a(s,t+1) = n;
        t = t + 1;
    end
end
What can I do to prevent this from happening? Do I need to clear variables or free up memory somewhere in the process? Will saving the resulting matrix a to the hard drive help?
Here is the solution, staying as close as possible to your code (the main difference being that you only need a 1D array):
c = 1;
e = 1000000;
a = zeros(e,1);
for s = c:e
    n = s;
    t = 1;
    while n > 1
        if mod(n,2) == 0
            n = n/2;
        else
            n = 3*n + 1;
        end
        t = t + 1;
    end
    a(s) = t;
end
[f, g] = max(a);
This takes a few seconds (note the preallocation), and the result g unlocks the Euler 14 door.
Simply put, there's not enough memory to hold the matrix a.
Why are you making a two-dimensional matrix here anyway? You're storing information that you can compute just as fast as looking it up.
There's a much better thing to memoize here.
EDIT: Looking again, you're not even using the stuff you put in that matrix! Why are you bothering to create it?
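To make the memoization suggestion concrete, here is a sketch: cache the chain length of every seed below the current one, and stop each walk as soon as it drops under its starting value (the array size and indexing scheme are my own choices):
e = 1000000;
len = zeros(e, 1);     % len(n) = Collatz chain length of n
len(1) = 1;
for s = 2:e
    n = s;
    t = 0;
    while n >= s       % walk until we reach a value whose length is cached
        if mod(n, 2) == 0
            n = n/2;
        else
            n = 3*n + 1;
        end
        t = t + 1;
    end
    len(s) = t + len(n);
end
[longest, seed] = max(len);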
The code appears to be storing every sequence in a different row of a matrix. The number of columns of that matrix will be equal to the length of the longest sequence found so far. This means that a sequence of two numbers will be padded with a bunch of trailing zeros.
I am sure you can see how this is incredibly inefficient. That may be the point of the exercise, or it will become one for you in this implementation.
Better is to keep a variable like "seed of longest solution found", storing the seed of the longest solution, and a "length of longest solution found" variable storing its length. As you try each new seed, if it wins the title of longest, update those variables; see the sketch below.
This will keep only what you need in memory.
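A minimal sketch of that bookkeeping, assuming a helper chainLength(s) that returns the sequence length for a seed (a hypothetical name, not a built-in):
bestSeed = 0;
bestLen = 0;
for s = 1:1000000
    t = chainLength(s);   % hypothetical helper: length of the sequence starting at s
    if t > bestLen        % new record: remember seed and length
        bestLen = t;
        bestSeed = s;
    end
end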
Short answer: use a 2D sparse matrix instead.
Long answer: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/sparse.html