Matlab: Calculate vector lengths between different points on a rectangle

I am interested in creating a Matlab script that will consider a rectangle and allow me to calculate the length of vectors between one point and several other points placed along the perimeter of the rectangle. For example, calculating the length of all the vectors indicated in red on the image below, using point 9 as the origin.
This will need to include the ability to specify the location of each point and should be adaptable to rectangles with different dimensions. I would like to be able to calculate the vector lengths using any of the specified points as the origin. For example from point 1 to all other points on the perimeter.
I realize this is a potentially time-consuming task, so any help would be greatly appreciated, as I am a novice with Matlab. Look forward to seeing some ideas! Cheers.

Building on top of @lhcgeneva's post, I would avoid using loops altogether and use bsxfun instead. The code by @lhcgeneva can be greatly simplified to:
xList = [1, 2, 3, 4, 5];
yList = [5, 4, 2, 2, 1];
rootPoint = 3; %The point you want as your 'base'
Distance = sqrt(sum(bsxfun(@minus, [xList; yList].', [xList(rootPoint) yList(rootPoint)]).^2, 2));
Note that there is no need to define the anonymous function d, and there is also no need for a loop. In MATLAB, you are always encouraged to vectorize your code. Vectorization means using functions that accept an entire array or matrix as input and operate on each entry individually, returning an array or matrix of the same size with the function applied to every element. This has been shown to be much faster than looping through each element and applying the function one at a time, mostly due to function-call overhead: it is more efficient to call the function once rather than once per element.
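As a minimal sketch of the difference (the variable names here are just for illustration):
%// Element-at-a-time loop versus the vectorized equivalent
x = 1:5;
y = zeros(size(x));
for k = 1:numel(x)
    y(k) = x(k)^2;  %// one function application per element
end
y = x.^2;           %// one call that operates on the whole array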
Now, the above code is quite a handful to absorb, but still pretty easy to understand once you get the hang of it. bsxfun stands for Binary Singleton eXpansion FUNction. Inside the call, we invoke the minus function between a single point in your rectangle, found at index rootPoint, and all of the other co-ordinates in the rectangle. We place the co-ordinates into a 2D matrix where the first column denotes the x co-ordinates and the second column denotes the y co-ordinates. bsxfun then duplicates the point located at rootPoint so that it is the same size as this 2D matrix, and performs an element-by-element subtraction between the duplicated matrix and the original 2D matrix that you created.
This performs the first part of the Euclidean distance, where you subtract the corresponding dimensions. The result is an output 2D matrix where the first column is the subtraction of the x components and the second column is the subtraction of the y components. We then square each value in the matrix, sum over the columns, and take the square root, thus completing the Euclidean distance operation. @lhcgeneva has put you on the right track: the distance between the point you are looking at and the other points in the rectangle is the Euclidean distance.
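As an aside, on newer MATLAB releases you can skip bsxfun entirely: implicit expansion (R2016b and later) broadcasts the subtraction for you, and vecnorm (R2017b and later) computes the row-wise norm. A minimal sketch of the same computation under those assumptions:
P = [xList; yList].';                          %// points as rows
Distance = vecnorm(P - P(rootPoint, :), 2, 2); %// Euclidean distance per row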
Now, if you want to plot the lines from one point to another like you have in your image, you actually don't need to calculate the lengths at all. You just need to know where the points along the rectangle are located, show the image, then use plot to draw lines from each point in the rectangle to the source point. This looks very much like an IC pin layout diagram, so I'm going to use one that I found on the Internet:
Let's use pin #3 as the source. I've also gone through the image and pin-pointed the location of the middle of each pin:
points = [49 84; 49 133; 49 178; 49 229; 49 277; 49 325; 49 372; 205 374; 205 325; 205 276; 205 228; 205 181; 205 131; 205 87];
The first column is the x or column co-ordinate while the second column is the y or row co-ordinate of where the centre is for each pin in this image. Now, all you have to do is show this image, use hold on to make sure that you can place multiple lines on the plot without it erasing, and plot lines from the source point to each point in the matrix:
im = imread('http://www.infraredremote.com/images/14-pin-IC.jpg');
imshow(im);
hold on;
points = [49 84; 49 133; 49 178; 49 229; 49 277; 49 325; 49 372; 205 374; 205 325; 205 276; 205 228; 205 181; 205 131; 205 87];
rootPoint = 3;
for idx = 1 : size(points, 1)
    plot([points(rootPoint, 1) points(idx, 1)], [points(rootPoint, 2) points(idx, 2)], 'r', 'LineWidth', 5);
end
The above code loads the image directly from the Internet. We then show the image with imshow, and use hold on like we talked about before. Next, we choose our root point, which is pin 3, then we loop over all of the points and draw a line from the root point to each pin. We make the line red, as well as making it 5 pixels thick. In this case, we do use a loop over the points to keep things easy. We could vectorize the plotting, but it would become a bit sophisticated given your knowledge of MATLAB so far; a sketch follows for the curious.
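Here is that vectorized plotting sketch: plot accepts matrices and draws one line per column, so we can stack the source point on top of every destination point:
%// Row 1 holds the source point, row 2 the destinations; one line per column
X = [repmat(points(rootPoint, 1), 1, size(points, 1)); points(:, 1).'];
Y = [repmat(points(rootPoint, 2), 1, size(points, 1)); points(:, 2).'];
plot(X, Y, 'r', 'LineWidth', 5);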
In any case, this is what I get:
Edit
In your comments, you said you wanted to display the distances from the root point to each point in your rectangle. You can do this with a loop. Unfortunately, when it comes to printing, there isn't an easy way to do it with vectorization, but looping just to print out statements takes very little time, so we shouldn't worry about vectorization here.
As such, you can do something like this:
%// Define points along rectangle and root point
points = [49 84; 49 133; 49 178; 49 229; 49 277; 49 325; 49 372; 205 374; 205 325; 205 276; 205 228; 205 181; 205 131; 205 87];
rootPoint = 3;
%// Find distances
Distance = sqrt(sum(bsxfun(@minus, points, points(rootPoint,:)).^2, 2));
for idx = 1 : numel(Distance)
    fprintf('Distance between reference point %d and point %d is %f\n', ...
        rootPoint, idx, Distance(idx));
end
Note that I had to modify the code slightly with respect to the distances. The core algorithm is still the same, but because our points are now in a 2D array, I had to access them in a slightly different way. Specifically, I no longer need to construct the 2D matrix inside bsxfun, as it was created already, and I can easily extract the root point by taking all of the columns for the single row indexed by rootPoint. Next, we loop over each distance from the root point to each point in the rectangle and simply print those out. This is the output I get:
Distance between reference point 3 and point 1 is 94.000000
Distance between reference point 3 and point 2 is 45.000000
Distance between reference point 3 and point 3 is 0.000000
Distance between reference point 3 and point 4 is 51.000000
Distance between reference point 3 and point 5 is 99.000000
Distance between reference point 3 and point 6 is 147.000000
Distance between reference point 3 and point 7 is 194.000000
Distance between reference point 3 and point 8 is 250.503493
Distance between reference point 3 and point 9 is 214.347848
Distance between reference point 3 and point 10 is 184.228119
Distance between reference point 3 and point 11 is 163.816971
Distance between reference point 3 and point 12 is 156.028843
Distance between reference point 3 and point 13 is 162.926364
Distance between reference point 3 and point 14 is 180.601772
This looks about right, and certainly makes sense as the distance between point 3 and itself (3rd row of the print-out) is 0.

Related

Pick specific timepoints from a timeseries

I have a 164 x 246 matrix called M. M is data for time series containing 246 time points of 164 brain regions. I want to work on only specific blocks of the time series, not the whole thing. To do so, I created a vector called onsets containing the time onset of each block.
onsets = [7;37;82;112;145;175;190;220];
In this example, there are 8 blocks total (though this number can vary), each blocks containing 9 time points. So for instance, the first block would contain time point 7, 8, 9,..., 15; the second block would contain time point 37, 38, 39,..., 45. I would like to extract the time points for these 8 blocks from M and concatenate 8 these blocks. Thus, the output should be a 164 x 72 matrix (i.e., 164 regions, 8 blocks x 9 time points/per block).
This seems like a very simple indexing problem, but I'm struggling to do this efficiently. I've tried indexing each block in M (for instance, vertcat(M(onsets(1,1):onsets(1,1)+8,:));) and then using vertcat, but this seems very clumsy. Can anyone help?
Try this:
% create sample data
M = rand(164,246);
% create index vector
idx = false(1,size(M,2));
onsets = [7;37;82;112;145;175;190;220];
for i=1:numel(onsets)
    idx(onsets(i):onsets(i)+8) = true;
end
% create output matrix
MM = M(:,idx);
You seem to have switched the dimensions somehow, i.e. you try to operate on the rows of M whilst according to your description you need to operate on the columns. Hope this helps.
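For what it's worth, here is a fully vectorized sketch that avoids the loop, assuming MATLAB R2016b or later (implicit expansion):
% each row of (onsets + (0:8)) holds the 9 column indices of one block
cols = reshape((onsets + (0:8)).', 1, []); % all block columns, in block order
MM = M(:, cols);                           % 164 x 72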

Writing (and using) principal component analysis in matlab

I (hope to) obtain a matrix with data on different characteristics on rat calls (in the ultrasound). Variables include starting frequency, ending frequency, duration etc etc. The observations will include all the rat calls in my audio recording.
I want to use PCA to analyze my data, hopefully decorrelating any principal components that are not important to the structure of these calls and how they work, allowing me to group the calls up.
My problem is that while I have a basic understanding of how PCA works, I don't have an understanding of the finer points including how to implement this in Matlab.
I know I should standardise my data. All methods I have seen involve mean-adjusting by subtracting the mean. However, some also divide by the standard deviation, or divide the transpose of the mean-adjusted data by the square root of N-1 (N being the number of observations).
I know that with the standardised data you can then find the covariance matrix and extract the eigenvalues and eigenvectors, for example using eig(cov(...)). Some others use svd(...) instead. I still don't understand what this is and why it is important.
I know there are different ways to implement PCA, but I don't like how I get different results for all of them.
There is even a pca(...) command also.
While reconstructing the data, some people multiply the mean-adjusted data with the principal component data, while others do the same but with the transpose of the principal component data.
I just want to be able to analyse my data by plotting graphs of the principal components, and of the data (with the most insignificant principal components removed). I want to know the variances of these eigenvectors and how much of the total variance of the data they represent. I want to be able to fully exploit all the information PCA can give me.
Can anyone help?
=========================================================
This code seems to work based on pg 20 of http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf
X = [105 103 103 66; 245 227 242 267; 685 803 750 586;...
147 160 122 93; 193 235 184 209; 156 175 147 139;...
720 874 566 1033; 253 265 171 143; 488 570 418 355;...
198 203 220 187; 360 365 337 334; 1102 1137 957 674;...
1472 1582 1462 1494; 57 73 53 47; 1374 1256 1572 1506;...
375 475 458 135; 54 64 62 41];
[M,N] = size(X);           % M variables (rows), N observations (columns)
mn = mean(X,2);            % mean of each variable
data = X - repmat(mn,1,N); % mean-adjust the data
Y = data' / sqrt(N-1);     % scale so that Y'*Y is the covariance matrix
[~,S,PC] = svd(Y);         % singular value decomposition
S = diag(S);               % extract the singular values
V = S .* S;                % variances of the principal components
signals = PC' * data;      % project the data onto the principal components
%plotting single PC1 on its own
figure;
plot(signals(1,:),zeros(1,size(signals,2)),'b.','markersize',15)
xlabel('PC1')
title('plotting single PC1 on its own')
%plotting PC1 against PC2
figure;
plot(signals(1,:),signals(2,:),'b.','markersize',15)
xlabel('PC1'),ylabel('PC2')
title('plotting PC1 against PC2')
figure;
plot(PC(:,1),PC(:,2),'m.','markersize',15)
xlabel('effect(PC1)'),ylabel('effect(PC2)')
But where is the standard deviation? How is the result different to
B=zscore(X);
[PC, D] = eig(cov(B));
D = diag(D);
cumsum(flipud(D)) / sum(D)
PC*B %(note how this says PC whereas above it says PC')
If the principal components are represented as columns, then I can remove the most insignificant eigenvectors by finding the smallest eigenvalue and setting its corresponding eigenvector column to a column of zeros.
How can either of the methods above be applied by using the pca(...) command and achieve THE SAME result? Can anyone help explain this to me (and ideally show me how all of these can achieve the same results)?
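For reference, a hedged sketch of how the built-in pca command relates to the two manual approaches above (assuming observations in rows, so the matrix is transposed relative to the code above; pca mean-centres by default but does not divide by the standard deviation):
[coeff, score, latent] = pca(X');       % like the svd approach: coeff = PCs as columns,
                                        % score = projected data, latent = variances
explained = 100 * latent / sum(latent); % percent of total variance per component
[coeffZ, scoreZ, latentZ] = pca(zscore(X')); % like the z-scored eig(cov(...)) variant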

Evaluate a 3D spline in Matlab

I've been struggling to find a solution to this problem for the last week and still have no result.
I have a 3D spline in Matlab, necessarily defined (I can't change the representation) with the spap2 command, and I need to evaluate the spline given two coordinates (say x and y). I tried to use the fnval command with different syntaxes, but with no success.
Example: I'd like to get the z at x=26, y=120 with the spline defined with
x=[13 56 90 67 89 43];
y=[112 156 136 144 144 128];
z=[63 95 48 78 77 15];
sp = spap2(4,4,1:length(x),[x; y; z]);
Could anyone help me?
Thank you very much!
A spline is an approximation. It does not need to go through the coordinates (x=26,y=120) at all. There is no immediate definition of what z value would be reasonable for these (x, y) values.
Your independent values (the spline parameter) are 1:length(x), and the dependent output is [x; y; z]: the spline is a parametric curve in 3D, not a surface z(x, y).
For example, fnval(sp, 1.5) gives a reasonable output.
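If you still want an approximate z for a given (x, y), one hedged approach is to evaluate the curve densely over its parameter range and pick the parameter whose (x, y) is closest to the query point; the result is only meaningful if the curve actually passes near that point:
t = linspace(1, length(x), 1000); % parameter range used when building sp
vals = fnval(sp, t);              % 3-by-1000: rows are x, y and z
[~, k] = min(hypot(vals(1,:) - 26, vals(2,:) - 120));
z_est = vals(3, k)                % z at the closest point on the curve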

How to visualize binary data?

I have a dataset 6x1000 of binary data (6 data points, 1000 boolean dimensions).
I perform cluster analysis on it
[idx, ctrs] = kmeans(x, 3, 'distance', 'hamming');
And I get the three clusters. How can I visualize my result?
I have 6 rows of data, each having 1000 attributes; 3 of them should be alike or similar in some way. Applying clustering will reveal the clusters. Since I know the number of clusters, I only need to find similar rows. The Hamming distance tells us the similarity between rows, and the result is correct that there are 3 clusters.
[EDIT: for any reasonable data, kmeans will always find the asked-for number of clusters]
I want to take that knowledge and make it easily observable and understandable without having to write huge explanations.
Matlab's example is not suitable since it deals with numerical 2D data, while my question concerns n-dimensional categorical data.
The dataset is here http://pastebin.com/cEWJfrAR
[EDIT1: how to check if clusters are significant?]
For more information please visit the following link:
https://chat.stackoverflow.com/rooms/32090/discussion-between-oleg-komarov-and-justcurious
If the question is not clear, ask about anything you are missing.
For representing the differences between high-dimensional vectors or clusters, I have used Matlab's dendrogram function. For instance, after loading your dataset into the matrix x I ran the following code:
l = linkage(x, 'average');
dendrogram(l);
and got the following plot:
The height of the bar that connects two groups of nodes represents the average distance between members of those two groups. In this case it looks like (5 and 6), (1 and 2), and (3 and 4) are clustered.
If you would rather use the Hamming distance than the Euclidean distance (which linkage uses by default), then you can just do
l = linkage(x, 'average', {'hamming'});
although it makes little difference to the plot.
You can start by visualizing your data with a 'barcode' plot and then labeling rows with the cluster group they belong:
% Create figure
figure('pos',[100,300,640,150])
% Calculate patch xy coordinates
[r,c] = find(A);
Y = bsxfun(@minus,r,[.5,-.5,-.5, .5])';
X = bsxfun(@minus,c,[.5, .5,-.5,-.5])';
% plot patch
patch(X,Y,ones(size(X)),'EdgeColor','none','FaceColor','k');
% Set axis prop
set(gca,'pos',[0.05,0.05,.9,.9],'ylim',[0.5 6.5],'xlim',[0.5 1000.5],'xtick',[],'ytick',1:6,'ydir','reverse')
% Cluster
c = kmeans(A,3,'distance','hamming');
% Add lateral labeling of the clusters
nc = numel(c);
h = text(repmat(1010,nc,1),1:nc,reshape(sprintf('%3d',c),3,numel(c))');
cmap = hsv(max(c));
set(h,{'Background'},num2cell(cmap(c,:),2))
Definition
The Hamming distance between binary strings a and b is equal to the number of ones (population count) in a XOR b (see Hamming distance).
Solution
Since you have six data strings, you could create a 6-by-6 matrix filled with the Hamming distances. The matrix would be symmetric (the distance from a to b is the same as the distance from b to a) and the diagonal is 0 (the distance from a to itself is zero).
For example, the Hamming distance between your first and second string is:
hamming_dist12 = sum(xor(x(1,:),x(2,:)));
Loop that and fill your matrix:
hamming_dist = zeros(6);
for i=1:6
    for j=1:6
        hamming_dist(i,j) = sum(xor(x(i,:),x(j,:)));
    end
end
(And yes, this code is redundant given the symmetry and zero diagonal, but the computation is minimal and optimizing it is not worth the effort.)
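If you have the Statistics Toolbox, a one-liner sketch gives the same matrix; note that pdist's 'hamming' option returns the fraction of differing bits, so scale by the string length to get counts:
hamming_dist = squareform(pdist(x, 'hamming')) * size(x, 2);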
Print your matrix as a spreadsheet in text format, and let the reader find which data string is similar to which.
This does not use your "kmeans" approach, but your added description regarding the problem helped shaping this out-of-the-box answer. I hope it helps.
Results
0 182 481 495 490 500
182 0 479 489 492 488
481 479 0 180 497 517
495 489 180 0 503 515
490 492 497 503 0 174
500 488 517 515 174 0
Edit 1:
How to read the table? The table is a simple distance table. Each row and each column represent a series of data (herein a binary string). The value at the intersection of row 1 and column 2 is the Hamming distance between string 1 and string 2, which is 182. The distance between string 1 and 2 is the same as between string 2 and 1, this is why the matrix is symmetric.
Data analysis
Three clusters can readily be identified: 1-2, 3-4 and 5-6, whose Hamming distance are, respectively, 182, 180, and 174.
Within a cluster, the data has ~18% dissimilarity. By contrast, data not part of a cluster has ~50% dissimilarity (which is random given binary data).
Presentation
I recommend Kohonen network or similar technique to present your data in, say, 2 dimensions. In general this area is called Dimensionality reduction.
You can also go a simpler way, e.g. Principal Component Analysis, but there's no guarantee you can effectively remove 998 dimensions :P
scikit-learn is a good Python package to get you started; similar ones exist in Matlab, Java, etc. I can assure you it's rather easy to implement some of these algorithms yourself.
Concerns
I have a concern over your data set, though. 6 data points is really a small number. Moreover, your attributes seem boolean at first glance; if that's the case, Manhattan distance is what you should use. I think (someone correct me if I'm wrong) Hamming distance only makes sense if your attributes are somehow related, e.g. if the attributes are actually a 1000-bit-long binary string rather than 1000 independent 1-bit attributes.
Moreover, with 6 data points you have only 2^6 = 64 distinct column patterns, which means 936 out of the 1000 attributes you have are either truly redundant or indistinguishable from redundant.
K-means almost always finds as many clusters as you ask for. To test the significance of your clusters, run K-means several times with different initial conditions and check whether you get the same clusters. If you get different clusters every time, or even from time to time, you cannot really trust your result.
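A minimal sketch of that stability check, using the 'Replicates' option to rerun kmeans from different random initializations and keep the best run:
% rerun kmeans 20 times; idx holds the labels of the best run found
idx = kmeans(x, 3, 'Distance', 'hamming', 'Replicates', 20);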
I used a barcode-type visualization for my data. The code which was posted here earlier by Oleg was too heavy for my solution (image files were over 500 kB), so I used image() to make the figures:
function barcode(A)
    B = (A+1)*2;  % map 0 -> 2 (white) and 1 -> 4 (black) in the flag colormap
    image(B);
    colormap flag;
    set(gca,'Ydir','Normal')
    axis([0 size(B,2) 0 size(B,1)]);
    ax = gca;
    ax.TickDir = 'out';
end

Calculating chi-squared from Poisson-distributed data

So I have a table of values
v:             0   1   2   3   4   5   6   7   8   9
# times obs.:  5  19  23  21  14  12   3   2   1   0
I am supposed to calculate chi-squared assuming the data fits a Poisson distribution with mean u = 3.
I have to group all values >= 6 into one bin.
I am unsure of how to plot the Poisson distribution, and most of all how to control what goes into which bin, if that makes sense.
I have plotted a histogram using histc before..but it was with random numbers that I normalized. The amount in each bin was set for me.
I am super new...sorry if this question sucks.
You use bar to plot a bar graph in matlab.
So this is what you do:
v=0:9;
f=[5 19 23 21 14 12 3 2 1 0];
fc = f(v < 6);              % copy elements where v < 6 into a new array
fc(end+1) = sum(f(v >= 6)); % append the sum of elements where v >= 6 to that array
figure
bar(v(v<=6), fc);
That should do the trick...
Now, you didn't actually ask about the chi-squared calculation. I would urge you not to put all values of v >= 6 into one bin for that calculation, as it will give you a really bad result.
There is another technique: if you use the hist function, you can choose the bin centres, and Matlab will automatically put values that exceed the limits into the last bin. So if your observations were in the array Obs, you can do what was asked with:
h = hist(Obs, 0:6);
figure
bar(0:6, h)
The advantage is that you have the array h available (frequencies) for other calculations.
If you do instead
hist(Obs, 0:6)
Matlab will plot the graph for you in a single statement (but you don't have the values...)
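Finally, since the question mentioned the chi-squared value itself, here is a minimal sketch of the grouped calculation, assuming mean u = 3 and the Statistics Toolbox (for poisspdf and poisscdf):
f = [5 19 23 21 14 12 3 2 1 0];             % observed counts for v = 0..9
obs = [f(1:6), sum(f(7:end))];              % group v >= 6 into one bin
N = sum(f);                                 % total number of observations
p = [poisspdf(0:5, 3), 1 - poisscdf(5, 3)]; % bin probabilities under Poisson(3)
expected = N * p;                           % expected counts per bin
chi2stat = sum((obs - expected).^2 ./ expected) % chi-squared statistic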