How to define matrix values as an index in MATLAB?

I have a 1788x3 double matrix.
My goal is to use the first and second column values as coordinates and build a 256x256 matrix from the third column; missing entries should be zero.
Here is part of my matrix. For example, in the 256x256 matrix the value at coordinates (161,37) should be 0.347365914411139:
161 37 0.347365914411139
162 38 0.414350944291199
160 38 -0.904597803215328
165 35 -0.853613950415835
163 38 -0.926329070526244
166 35 -1.37361928823183
168 37 0.661707825299905
Looking forward to your answers.
Regards,

The easiest, but not necessarily most efficient, way to do this would be to use a loop, i.e.
% m is your 1788x3 data
x = sparse(256,256); %// or x = zeros(256); use either of these
for nn = 1:size(m,1)
    x(m(nn,1),m(nn,2)) = m(nn,3);
end
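For reference, the same matrix can be built without a loop, since both sparse and accumarray accept the coordinate and value vectors directly. A minimal vectorized sketch, assuming m again holds the 1788x3 data:
x = sparse(m(:,1), m(:,2), m(:,3), 256, 256); %// sparse result
x = accumarray(m(:,1:2), m(:,3), [256 256]);  %// full (dense) result, zeros elsewhere
Note that if the same coordinate pair appears more than once in m, these vectorized forms sum the duplicates, whereas the loop above keeps only the last value.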

Related

How to sample a plot in Matlab?

The plot in MATLAB looks like this:
The code to generate this is very simple:
y = [0 18 450];
x = [0 5.3 6.575];
plot(x,y);
How could I know the values of 119 equally spaced discrete points on this plot?
In simple MATLAB plots, the points are connected by linear interpolation: a straight line is drawn between each consecutive pair of points. You can't read intermediate values off the graph itself other than the points you used to plot it (at least not easily...).
If you do, however, want 119 points at equally spaced intervals that would theoretically be obtained from the above set of 3 points, you can use the interp1 function to do so:
y = [0 18 450];
x = [0 5.3 6.575];
yy = interp1(x, y, linspace(min(x),max(x),119), 'linear');
interp1 performs linear interpolation (note the 'linear' flag at the end) given a set of key points defined by x and y, and a set of query x values at which to interpolate between those key points; the interpolated y values are stored in yy. linspace in this case generates a linearly increasing array of 119 points from the smallest value in x to the largest value in x.
Here's a running example with your data:
>> format compact;
>> y = [0 18 450];
>> x = [0 5.3 6.575];
>> yy = interp1(x, y, linspace(min(x),max(x),119), 'linear');
>> yy
yy =
Columns 1 through 8
0 0.1892 0.3785 0.5677 0.7570 0.9462 1.1354 1.3247
Columns 9 through 16
1.5139 1.7031 1.8924 2.0816 2.2709 2.4601 2.6493 2.8386
Columns 17 through 24
3.0278 3.2171 3.4063 3.5955 3.7848 3.9740 4.1633 4.3525
Columns 25 through 32
4.5417 4.7310 4.9202 5.1094 5.2987 5.4879 5.6772 5.8664
Columns 33 through 40
6.0556 6.2449 6.4341 6.6234 6.8126 7.0018 7.1911 7.3803
Columns 41 through 48
7.5696 7.7588 7.9480 8.1373 8.3265 8.5157 8.7050 8.8942
Columns 49 through 56
9.0835 9.2727 9.4619 9.6512 9.8404 10.0297 10.2189 10.4081
Columns 57 through 64
10.5974 10.7866 10.9759 11.1651 11.3543 11.5436 11.7328 11.9220
Columns 65 through 72
12.1113 12.3005 12.4898 12.6790 12.8682 13.0575 13.2467 13.4360
Columns 73 through 80
13.6252 13.8144 14.0037 14.1929 14.3822 14.5714 14.7606 14.9499
Columns 81 through 88
15.1391 15.3283 15.5176 15.7068 15.8961 16.0853 16.2745 16.4638
Columns 89 through 96
16.6530 16.8423 17.0315 17.2207 17.4100 17.5992 17.7885 17.9777
Columns 97 through 104
34.6540 53.5334 72.4128 91.2921 110.1715 129.0508 147.9302 166.8096
Columns 105 through 112
185.6889 204.5683 223.4477 242.3270 261.2064 280.0857 298.9651 317.8445
Columns 113 through 119
336.7238 355.6032 374.4826 393.3619 412.2413 431.1206 450.0000
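As a side note (not part of the original answer), if you also want the x locations of the 119 samples, for example to overlay them on the original plot, keep the linspace output in its own variable:
xx = linspace(min(x), max(x), 119); % the 119 sample locations
yy = interp1(x, y, xx, 'linear');   % interpolated values at those locations
plot(x, y, 'o-', xx, yy, 'r.');     % original line plus the sampled points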

Element-by-element max values in multidimensional matrix

I have a few multidimensional matrices of dimensions mxnxt, where each element in mxn is an individual sensor input, and t is time. What I want to do is analyse only the peak values for each element in mxn over t, so I would end up with a single 2D matrix of mxn containing only max values.
I know there are ways to get a single overall max value, but is there a way to combine this with element-by-element operations like bsxfun so that it examines each individual element over t?
I'd be grateful for any help you can give because I'm really stuck at the moment. Thanks in advance!
Is this what you want?
out = max(A,[],3); %// checking maximum values in 3rd dimension
Example:
A = randi(50,3,3,3); %// Random 3x3x3 dim matrix
out = max(A,[],3);
Results:
A(:,:,1) =
35 5 8
38 12 42
23 46 27
A(:,:,2) =
50 6 39
4 49 41
23 1 44
A(:,:,3) =
5 41 10
20 22 14
13 46 8
>> out
out =
50 41 39
38 49 42
23 46 44
You can call max() with the matrix and specify the dimension (see the documentation) along which the operation is calculated, e.g.
M = max(A,[],3)
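If you also need to know at which index along t each peak occurs, max can return that as a second output:
[out, idx] = max(A,[],3); %// idx(i,j) is the position along the 3rd dimension
                          %// where the maximum of A(i,j,:) occurs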

Contour plot coloured by clustering of points matlab

I have two vectors containing paired values:
size(X) = 1e4 x 1; size(Y) = 1e4 x 1
Is it possible to plot a contour plot of some sort, making the contours by the highest density of points? I.e. highest clustering = red, with a colour gradient elsewhere?
If you need more clarification please ask.
Regards,
EXAMPLE DATA:
X=[53 58 62 56 72 63 65 57 52 56 52 70 54 54 59 58 71 66 55 56];
Y=[40 33 35 37 33 36 32 36 35 33 41 35 37 31 40 41 34 33 34 37 ];
scatter(X,Y,'ro');
Thank you for everyone's help. I also remembered we can use hist3:
x={0:0.38/4:0.38}; %# How many bins in x direction
y={0:0.65/7:0.65}; %# How many bins in y direction
ncount=hist3([X Y],'Edges',[x y]);
pcolor(ncount./sum(sum(ncount)));
colorbar
Anyone know why edges in hist3 have to be cells?
This is basically a question about estimating the probability density function generating your data and then visualizing it in a good and meaningful way, I'd say. To that end, I would recommend using a smoother estimate than the histogram, for instance Parzen windowing (a generalization of the histogram method).
In my code below, I have used your example dataset and estimated the probability density on a grid set up from the range of your data. There are three variables you need to adjust for your original data: Border, Sigma and stepSize.
Border = 5;
Sigma = 5;
stepSize = 1;
X=[53 58 62 56 72 63 65 57 52 56 52 70 54 54 59 58 71 66 55 56];
Y=[40 33 35 37 33 36 32 36 35 33 41 35 37 31 40 41 34 33 34 37 ];
D = [X' Y'];
N = length(X);
Xrange = [min(X)-Border max(X)+Border];
Yrange = [min(Y)-Border max(Y)+Border];
%Setup coordinate grid
[XX YY] = meshgrid(Xrange(1):stepSize:Xrange(2), Yrange(1):stepSize:Yrange(2));
YY = flipud(YY);
%Parzen parameters and function handle
pf1 = @(C1,C2) (1/N)*(1/((2*pi)*Sigma^2)).*...
    exp(-( (C1(1)-C2(1))^2+ (C1(2)-C2(2))^2)/(2*Sigma^2));
PPDF1 = zeros(size(XX));
%Populate coordinate surface
[R C] = size(PPDF1);
NN = length(D);
for c=1:C
    for r=1:R
        for d=1:N
            PPDF1(r,c) = PPDF1(r,c) + ...
                pf1([XX(1,c) YY(r,1)],[D(d,1) D(d,2)]);
        end
    end
end
%Normalize data
m1 = max(PPDF1(:));
PPDF1 = PPDF1 / m1;
%Set up visualization
set(0,'defaulttextinterpreter','latex','DefaultAxesFontSize',20)
fig = figure(1);clf
stem3(D(:,1),D(:,2),zeros(N,1),'b.');
hold on;
%Add PDF estimates to figure
s1 = surfc(XX,YY,PPDF1);shading interp;alpha(s1,'color');
sub1=gca;
view(2)
axis([Xrange(1) Xrange(2) Yrange(1) Yrange(2)])
Note, this visualization is actually 3-dimensional:
See this 4 minute video on the mathworks site:
http://blogs.mathworks.com/videos/2010/01/22/advanced-making-a-2d-or-3d-histogram-to-visualize-data-density/
I believe this should provide very close to exactly the functionality you require.
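As a side note (not from the original answer), the triple loop above can be slow on larger grids or datasets. The same estimate can be computed in a vectorized way; a sketch reusing the variables XX, YY, D, N and Sigma defined above, written with bsxfun so it also runs on older MATLAB releases:
gridPts = [XX(:) YY(:)];                            %one row per grid node
d2 = bsxfun(@minus, gridPts(:,1), D(:,1)').^2 + ... %squared x-distances ...
     bsxfun(@minus, gridPts(:,2), D(:,2)').^2;      %... plus squared y-distances
PPDF1 = reshape(sum(exp(-d2/(2*Sigma^2)), 2), size(XX)) / (N*(2*pi)*Sigma^2);
PPDF1 = PPDF1 / max(PPDF1(:));                      %same normalization as above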
I would divide the area the plot covers into a grid and then count the number of points in each square of the grid. Here's an example of how that could be done.
% Get random data with high density
X=randn(1e4,1);
Y=randn(1e4,1);
Xmin=min(X);
Xmax=max(X);
Ymin=min(Y);
Ymax=max(Y);
% guess of grid size, could be divided into nx and ny
n=floor((length(X))^0.25);
% Create x and y-axis
x=linspace(Xmin,Xmax,n);
y=linspace(Ymin,Ymax,n);
dx=x(2)-x(1);
dy=y(2)-y(1);
griddata=zeros(n); % note: this name shadows MATLAB's built-in griddata function
for i=1:length(X)
    % Calculate which bin the point is positioned in
    indexX=floor((X(i)-Xmin)/dx)+1;
    indexY=floor((Y(i)-Ymin)/dy)+1;
    griddata(indexX,indexY)=griddata(indexX,indexY)+1;
end
contourf(x,y,griddata') % transpose so rows correspond to y and columns to x
Edit: The video in the answer by Marm0t uses the same technique but probably explains it in a better way.
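For what it's worth (not part of the original answer), the counting loop can also be replaced by a single accumarray call, reusing the same variables defined above:
indexX = floor((X-Xmin)/dx)+1;                    % bin index of every point along x
indexY = floor((Y-Ymin)/dy)+1;                    % bin index of every point along y
griddata = accumarray([indexX indexY], 1, [n n]); % count points per grid cell
contourf(x,y,griddata')                           % transpose as above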

Random sampling from gridded data: How to implement this in Matlab?

I have a 200x200 grid of data points. I want to randomly pick 15 grid points and replace the values at those points with values selected from the known distribution shown below. All 15 grid points are assigned random values from the given distribution.
The given distribution is:
Given Distribution
314.52
1232.8
559.93
1541.4
264.2
1170.5
500.97
551.83
842.16
357.3
751.34
583.64
782.54
537.28
210.58
805.27
402.29
872.77
507.83
1595.1
The given distribution is made up of 20 values, which are part of those gridded data points. These 20 grid points are fixed, i.e. they must not be among the 15 randomly picked points. The coordinates of these 20 fixed points, which should not be part of the random picking, are:
x 27 180 154 183 124 146 16 184 138 122 192 39 194 129 115 33 47 65 1 93
y 182 81 52 24 168 11 90 153 133 79 183 25 63 107 161 14 65 2 124 79
Can someone help with how to implement this problem in Matlab?
Building off of my answer to your simpler question, here is a solution for how you can choose 15 random integer points (i.e. subscripted indices into your 200-by-200 matrix) and assign random values drawn from your set of values given above:
mat = [...]; %# Your 200-by-200 matrix
x = [...]; %# Your 20 x coordinates given above
y = [...]; %# Your 20 y coordinates given above
data = [...]; %# Your 20 data values given above
fixedPoints = [x(:) y(:)]; %# Your 20 points in one 20-by-2 matrix
randomPoints = randi(200,[15 2]);                       %# A 15-by-2 matrix of random integers
isRepeated = ismember(randomPoints,fixedPoints,'rows'); %# Find repeated sets of coordinates
while any(isRepeated)
    randomPoints(isRepeated,:) = randi(200,[sum(isRepeated) 2]); %# Create new coordinates
    isRepeated(isRepeated) = ismember(randomPoints(isRepeated,:),...
                                      fixedPoints,'rows');       %# Check the new coordinates
end
newValueIndex = randi(20,[1 15]);                                     %# Select 15 random indices into data
linearIndex = sub2ind([200 200],randomPoints(:,1),randomPoints(:,2)); %# Get a linear index into mat
mat(linearIndex) = data(newValueIndex);                               %# Update the 15 points
In the above code I'm assuming that the x coordinates correspond to row indices and the y coordinates correspond to column indices into mat. If it's actually the other way around, swap the second and third inputs to the function SUB2IND.
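Alternatively (a sketch, not part of the original answer), you can avoid the retry loop entirely by drawing 15 distinct linear indices from the set of cells that excludes the 20 fixed points:
allowed = setdiff(1:200*200, sub2ind([200 200], x, y)); %# linear indices not occupied by the fixed points
picked = allowed(randperm(numel(allowed), 15));         %# 15 distinct random cells
mat(picked) = data(randi(20, [1 15]));                  %# assign random values from data
This also guarantees that the 15 chosen points are distinct from one another, which the rejection loop above does not enforce.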
I think yoda already gave the basic idea. Call randi twice to get the grid coordinate to replace, and then replace it with the appropriate value. Do that 15 times.

How to compare the pairs of coordinates most efficiently without using nested loops in Matlab?

If I have 20 pairs of coordinates, whose x and y values are, say:
x y
27 182
180 81
154 52
183 24
124 168
146 11
16 90
184 153
138 133
122 79
192 183
39 25
194 63
129 107
115 161
33 14
47 65
65 2
1 124
93 79
Now if I randomly generate 15 pairs of coordinates (x,y) and want to compare them with the 20 pairs of coordinates given above, how can I do that most efficiently without nested loops?
If you're trying to see if any of your 15 randomly generated coordinate pairs are equal to any of your 20 original coordinate pairs, an easy solution is to use the function ISMEMBER like so:
oldPts = [...]; %# A 20-by-2 matrix with x values in column 1
%# and y values in column 2
newPts = randi(200,[15 2]); %# Create a 15-by-2 matrix of random
%# values from 1 to 200
isRepeated = ismember(newPts,oldPts,'rows');
And isRepeated will be a 15-by-1 logical array with ones where a row of newPts exists in oldPts and zeroes otherwise.
If your coordinates are (1) actually integers and (2) their span is reasonable (otherwise use a sparse matrix), you can utilize a simple truth table, like so:
x_0= [27 180 ...
y_0= [182 81 ...
s= [200 200]; %# span of coordinates
T= false(s);
T(sub2ind(s, x_0, y_0))= true;
%# now obtain some other coordinates
x_1= [...
y_1= [...
%# and common coordinates of (x_0, y_0) and (x_1, y_1) are just
T(sub2ind(s, x_1, y_1))
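To make the idea concrete, here is a tiny self-contained example (using the first three coordinate pairs from the question plus one made-up query point):
s = [200 200];
T = false(s);
T(sub2ind(s, [27 180 154], [182 81 52])) = true; %# mark three of the original points
T(sub2ind(s, [27 99 154], [182 5 52]))           %# returns logical [1 0 1]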
If your original twenty points aren't going to change, you'd get better efficiency if you sorted them O(n log n); then you could see if each random point was in the list with a O(log n) search.
If your "original" points list changes (insertions / deletions), you could get equivalent performance with a binary tree.
BUT: If the number of points you're working with is really as low as in your question, your double loop might just be the fastest method! Algorithms with low Big-O curves will be faster as the amount of data gets really big, but it's often at the cost of a one-time slowdown (in your case, the sort) - and with only 15x20 data points... There won't be a human-perceptible difference; you might see one if you're timing it on your system clock. Or you might not.
Hope this helps!