I was wondering how I could achieve the following: I have a scalar variable called TimeSteps. TimeSteps increments according to the length of a vector (nBins_max), which will normally be fixed (though I may change it occasionally).
To declare 5 full system rotations I would use:
TimeSteps = 5*nBins_max;
I would like to retrieve data for each rotation of my system. In pseudo-code I'm looking to achieve something like the following:
where TimeSteps = each multiple of nBins_max
retrieve data
end
I could set this manually at each number of timesteps I'm interested in; however, due to the number of rollers in some of my systems this could prove tedious and potentially error-prone! Also, TimeSteps varies considerably in its range, sometimes 1*nBins_max, sometimes 1000*nBins_max, perhaps more than this!
Any pointers or general help is appreciated!
Thanks for reading
Richard
The modulus is zero at each multiple of nBins_max:
where mod(TimeSteps, nBins_max)==0
retrieve data
end
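In concrete MATLAB that condition would sit inside the simulation loop; a minimal sketch, using the names from the question and a hypothetical retrieve_data function standing in for whatever the data retrieval actually is:

```matlab
nRotations = 5;                          % however many full rotations you want
for TimeSteps = 1:nRotations*nBins_max
    if mod(TimeSteps, nBins_max) == 0
        retrieve_data();                 % hypothetical: grab data for this completed rotation
    end
end
```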
Hope that helps?
I've been using MATLAB as a statistics tool. I like how much I can customise and code myself.
I was delighted to find that it's fairly straightforward to do a weighted linear regression in MATLAB. As a slightly silly example, I can load the "carbig" data file and compare horsepower vs mileage for US cars to that of cars from other countries, but decide I only trust 8-cylinder cars.
load carbig
w = (Cylinders==8) + 0.5*(Cylinders~=8); % 1 if 8 cylinders, 0.5 otherwise
o = zeros(length(org),1);                % preallocate
for i = 1:length(org)
    o(i,1) = strcmp(org(i,:), org(1,:)); % strcmp only compares one row at a time
end
x1=Horsepower(o==1)
x2=Horsepower(o==0)
y1=MPG(o==1)
y2=MPG(o==0)
w1=w(o==1)
w2=w(o==0)
lm1=fitlm(x1,y1,'Weights',w1)
lm2=fitlm(x2,y2,'Weights',w2)
This way, data from 8-cylinder cars will count as one data point, and data from 3-, 4-, 5-, and 6-cylinder cars will count as half a data point.
The problem is, the obvious way to compare the two regressions is to use ANCOVA, which MATLAB has a function for:
aoctool(Horsepower,MPG,o)
This function compares linear regressions on the two groups, but I haven't found an obvious way to include weights.
I suspect I can have a closer look at what the ANCOVA does and include the weights manually. Any easier solution?
I figured that if I give the "trusted" measurements weight 2 and the "untrusted" measurements weight 1, for regression purposes that's the same as having one extra identical measurement for each trusted one. Setting the weights to 1 and 0.5 should do the same thing. I can do this with a script.
That also inflates the degrees of freedom quite a bit, so I manually set the degrees of freedom to sum(w) - rank instead of n - rank.
x=[];
y=[];
g=[];
w=(Cylinders==8)+0.5*(Cylinders~=8);
df=sum(w)
for i=1:length(w)
while w(i)>0
x=[x;Horsepower(i)];
y=[y;MPG(i)];
g=[g;o(i)];
w(i)=w(i)-0.5;
end
end
I then copied the aoctool.m file (edit aoctool) and inserted the value of df somewhere in the new file. It isn't elegant, but it seems to work.
edit aoctool.m
%(insert new df somewhere. Save as aoctool2.m)
aoctool2(x,y,g)
I am trying to avoid for loops, and I have been reading through all the old posts about this, but I am not able to solve my problem. I am new to MATLAB, so apologies for my ignorance.
The thing is that I have a 300x2 cell array, and in each cell I have a 128x128x256 matrix. Each one is an image with 128x128 pixels and 256 channels per pixel. In the first column of the 300x2 cell I have my parallel intensity values and in the second one my perpendicular intensity values.
What I want to do is to take every pixel of every image (for each component) and sum the intensity values channel by channel.
The code I have is the following:
Image_par_channels=zeros(128,128,256);
Image_per_channels=zeros(128,128,256);
Image_tot_channels=zeros(128,128,256);
for a=1:128
    for b=1:128
        for j=1:256
            for i=1:numfiles
                Image_par_channels(a,b,j)=Image_par_channels(a,b,j)+Image_cell_par_per{i,1}(a,b,j);
                Image_per_channels(a,b,j)=Image_per_channels(a,b,j)+Image_cell_par_per{i,2}(a,b,j);
            end
            Image_tot_channels(a,b,j)=Image_par_channels(a,b,j)+2*G*Image_per_channels(a,b,j);
        end
    end
end
I think I could speed it up by indexing (:,:,j) instead of specifying a and b, but that still leaves a for loop. I am trying to use cellfun without any success due to my lack of expertise. Could you please give me a hand?
I would really appreciate it.
Many thanks and have a nice day!
Y
I believe you could do something like
Image_par_channels=zeros(128,128,256);
Image_per_channels=zeros(128,128,256);
Image_tot_channels=zeros(128,128,256);
for i=1:numfiles
    Image_par_channels = Image_par_channels + Image_cell_par_per{i,1};
    Image_per_channels = Image_per_channels + Image_cell_par_per{i,2};
end
Image_tot_channels = Image_par_channels + 2*G*Image_per_channels;
I haven't worked with MATLAB in a long time, but I seem to recall you can do something like this. G is a constant.
EDIT:
Removed the +=. Incremental assignment is not an operator available in MATLAB. You should also note that Image_tot_channels can be built directly in the loop, if you don't need the other two variables later.
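If memory allows holding a second copy of the data, the remaining loop over files can also be removed by concatenating the 300 matrices along a fourth dimension and summing over it; a sketch, using the variable names from the question:

```matlab
% Stack each column of the cell array into a 128x128x256x300 array, then
% collapse the 4th (file) dimension in one vectorized sum.
Image_par_channels = sum(cat(4, Image_cell_par_per{:,1}), 4);
Image_per_channels = sum(cat(4, Image_cell_par_per{:,2}), 4);
Image_tot_channels = Image_par_channels + 2*G*Image_per_channels;
```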
I have a large vector of recorded data which I need to resample. The problem I encounter is that when using resample, I get the following error:
??? Error using ==> upfirdn at 82 The product of the downsample factor
Q and the upsample factor P must be less than 2^31.
Now, I understand why this is happening - my two sampling rates are very close together, so the integer factors need to be quite large (something like 73999/74000). Unfortunately this means that the appropriate filter can't be created by MATLAB. I also tried resampling just up, with the intention of then resampling down, but there is not enough memory to do this to even 1 million samples of data (mine is 93M).
What other methods could I use to properly resample this data?
An interpolated polyphase FIR filter can be used to interpolate just the new set of sample points without using an upsampling+downsampling process.
But if performance is completely unimportant, here's a Quick and Dirty windowed-Sinc interpolator in Basic.
here's my code, I hope it helps :
function resig = resamplee(sig, upsample, downsample)
if upsample*downsample < 2^31
    resig = resample(sig, upsample, downsample);
else
    % Split the signal in two and recurse so the P*Q product stays below 2^31
    sig1half = sig(1:floor(length(sig)/2));
    sig2half = sig(floor(length(sig)/2)+1:end); % +1 so the middle sample isn't duplicated
    resig1half = resamplee(sig1half, floor(upsample/2), length(sig1half));
    resig2half = resamplee(sig2half, upsample-floor(upsample/2), length(sig2half));
    resig = [resig1half; resig2half];
end
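When the rate change really is a near-unity ratio like 73999/74000, another pragmatic option is plain interpolation onto the new time grid with interp1; since the output rate is only fractionally below the input rate, skipping resample's anti-alias filtering is usually harmless. A sketch, where the sample rates and the vector name sig are assumptions for illustration:

```matlab
fs_in  = 74000;                               % assumed original sample rate
fs_out = 73999;                               % assumed target sample rate
t_in   = (0:numel(sig)-1).'/fs_in;            % original time grid
t_out  = (0:1/fs_out:t_in(end)).';            % new time grid at the target rate
resig  = interp1(t_in, sig, t_out, 'spline'); % 'pchip' or 'linear' also work
```

This avoids building any large intermediate upsampled signal, so it stays within memory even for 93M samples.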
I'm interested in understanding the variety of zeros that a given function produces, with the ultimate goal of identifying what frequencies are passed in high/low-pass filters. My idea is that finding the lowest-value zero of a filter will identify the passband for an LPF specifically. I'm attempting to use the [hz,hp,ht] = zplane(z,p) function to do so.
The description for that function reads "returns vectors of handles to the zero lines, hz". Could someone help me understand what a vector of handles is, and what I do with one to be able to find the various zeros?
For example, a simple 5-point running average filter:
runavh = (1/5) * ones(1,5);
Using zplane(runavh) gives an acceptable pole/zero plot, but running the [hz,hp,ht] = zplane(z,p) function results in hz=175.1075. I don't know what this number represents or how to use it.
Many thanks.
Using the get command, you can find out things about the data.
For example, type G=get(hz) to get a list of properties of the zero lines. Then the XData is given by G.XData, i.e. X=G.XData.
Alternatively, you can pull out only the data you want:
X=get(hz,'XData')
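Since the ultimate goal is the zeros themselves rather than the plot, note that for an FIR filter they can also be computed directly from the coefficient vector with roots (the handle returned by zplane is only a graphics object):

```matlab
% Zeros of the 5-point running-average filter: the roots of its
% numerator polynomial, i.e. the complex 5th roots of unity except z = 1.
runavh = (1/5) * ones(1,5);
z = roots(runavh)
```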
Hope that helps.
I have two lists of timestamps and I'm trying to create a map between them that uses the imu_ts as the true time and tries to find the nearest vicon_ts value to it. The output is a 3xd matrix where the first row is the imu_ts index, the third row is the unix time at that index, and the second row is the index of the closest vicon_ts value above the timestamp in the same column.
Here's my code so far and it works, but it's really slow. I'm not sure how to vectorize it.
function tmap = sync_times(imu_ts, vicon_ts)
tstart = max(vicon_ts(1), imu_ts(1));
tstop = min(vicon_ts(end), imu_ts(end));
%trim imu data to the overlapping time range
tmap(1,:) = find(imu_ts >= tstart & imu_ts <= tstop);
tmap(3,:) = imu_ts(tmap(1,:)); %use imu_ts as ground truth
%find nearest indices in the vicon data and map
vic_t = 1;
for i = 1:size(tmap,2)
    while(vicon_ts(vic_t) < tmap(3,i))
        vic_t = vic_t + 1;
    end
    tmap(2,i) = vic_t;
end
The timestamps are already sorted in ascending order, so this is essentially an O(n) operation but because it's looped it runs slowly. Any vectorized ways to do the same thing?
Edit
It appears to be running faster than I expected or first measured, so this is no longer a critical issue. But I would be interested to see if there are any good solutions to this problem.
Have a look at knnsearch in MATLAB. Use cityblock distance, and add the constraint that the matched point in vicon_ts should not fall below its neighbour in imu_ts; if it does, take the next index. This is required because cityblock takes absolute distance, so the nearest neighbour can land on either side of the timestamp. Another option (and preferred) is to write your own custom distance function.
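A sketch of that idea (requires the Statistics Toolbox; in one dimension cityblock and euclidean distances coincide, so the distance choice is cosmetic here):

```matlab
% Nearest vicon index for each imu timestamp, either side of it.
idx = knnsearch(vicon_ts(:), tmap(3,:).', 'Distance', 'cityblock');
% Bump any match that fell below its imu timestamp to the next index,
% so every result is the first vicon timestamp at or above the target.
low = vicon_ts(idx) < tmap(3,:).';
idx(low) = idx(low) + 1;
tmap(2,:) = idx;
```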
I believe that your current method is sound, and I would not try and vectorize any further. Vectorization can actually be harmful when you are trying to optimize some inner loops, especially when you know more about the context of your data (e.g. it is sorted) than the Mathworks engineers can know.
Things that I typically look for when I need to optimize a piece of code like this are:
All arrays are pre-allocated (this is the biggest driver of performance)
Fast inner loops use simple code (MATLAB does pretty effective JIT on basic commands, but must interpret others.)
Take advantage of any special features of your data, e.g. use algorithms appropriate for sorted input and early-exit conditions in loops.
You're already doing all this. I recommend no change.
A good start might be to get rid of the while; try something like:
for i = 1:size(tmap,2)
    C = vicon_ts - tmap(3,i); % signed distance from each vicon timestamp
    C(C < 0) = Inf;           % discard timestamps below the target
    [~, tmap(2,i)] = min(C);  % first vicon index at or above the target
end
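Because both lists are sorted, the loop can also be eliminated entirely with a binned lookup; a sketch using histc (discretize is the newer equivalent), which assumes every value of tmap(3,:) lies within the span of vicon_ts — something the tstart/tstop trim already guarantees:

```matlab
% bin(k) satisfies vicon_ts(bin(k)) <= tmap(3,k) < vicon_ts(bin(k)+1)
[~, bin] = histc(tmap(3,:), vicon_ts);
% Bump to the next index whenever the binned timestamp is strictly below
% the target, giving the first vicon index at or above each imu time.
tmap(2,:) = bin + (reshape(vicon_ts(bin), 1, []) < tmap(3,:));
```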