I am constructing an adjacency list based on the intensity differences of pixels in an image.
The Matlab code snippet is as follows:
m=1;
len = size(cur_label, 1);
for j=1:len
for k=1:len
if(k~=j) % avoiding diagonal elements
intensity_diff = abs(indx_intensity(j)-indx_intensity(k)); %intensity difference of two pixels.
if intensity_diff<=10 % difference thresholded by 10
adj_list(m, 1) = j; % storing the vertices of the edge
adj_list(m, 2) = k;
m = m+1;
end
end
end
end
y = sparse(adj_list(:,1),adj_list(:,2),1); % creating a sparse matrix from the adjacency list
How can I avoid these nasty nested for loops? If the image is big, this becomes disastrously slow. If anyone has a solution, it would be a great help to me.
Regards
Ratna
I am assuming the input indx_intensity is a 1D array here (with len = numel(indx_intensity), as in your code). With that assumption, here's a vectorized approach using broadcasting with bsxfun:
%// Threshold parameter
thresh = 10;
%// Get elementwise absolute differences between elements in indx_intensity
diffs = abs(bsxfun(@minus, indx_intensity(:), indx_intensity(:).'));
%// Threshold the differences against the threshold, thus giving us a
%// 2D square matrix. Then, set the diagonal elements to zero to exclude them.
mask = diffs <= thresh;
mask(1:len+1:end) = 0;
%// Get the indices of the TRUE elements in the valid mask as final output.
[R,C] = find(mask);
adj_list_out = [C R];
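If the sparse matrix y from your last line is the end goal, you can also build it straight from the mask (or the index pairs) and skip adj_list entirely; a small sketch using the variables above (mask is symmetric here, so the orientation does not matter):
y = sparse(double(mask));      % len-by-len sparse adjacency matrix with ones where the threshold holds
% or, from the index pairs, fixing the size explicitly:
y = sparse(R, C, 1, len, len);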
Related
Is there a Matlab command for generating a random n-by-n matrix, with elements taken from the interval [0,1], such that x% of the off-diagonal entries are 0, and then additionally setting each diagonal element to the sum of every element in its respective column, in order to create a diagonally dominant dense/sparse matrix? This may be easy enough to write code for, but I was wondering if there is already a built-in function with this capability.
EDIT:
I am new to Matlab/programming, so this was easier said than done. I'm having trouble making the matrix with the percentage ignoring the diagonal. It's an n x n matrix, so there are $n^2$ entries with n of them on the diagonal; I want the percentage of zeros to be taken over the $n^2 - n$ off-diagonal elements. I cannot implement this correctly, and I do not know how to initialize my M (see below) accordingly.
% Enter percentage as a decimal
function [M] = DiagDomSparse(n,x)
M = rand(n);
disp("Original matrix");
disp(M);
x = sum(M);
for i=1:n
for j=1:n
if(i == j)
M(i,j) = x(i);
end
end
end
disp(M);
Here is one approach that you could use. I'm sure you will get some other answers now with a more clever approach, but I like to keep things simple and understandable.
What I'm doing below is creating the data to be put in the off-diagonal elements first. I create an empty matrix and copy this data into the off-diagonal elements using linear indexing. Now I can compute the sum of columns and write those into the diagonal elements using linear indexing again. Because the matrix was initialized to zero, the diagonal elements are still zero when I compute the sum of columns, so they don't interfere.
n = 5;
x = 0.3; % fraction of zeros in off-diagonal
k = round(n*(n-1)*x); % number of zeros in off-diagonal
data = randn(n*(n-1)-k,1); % random numbers, pick your distribution here!
data = [data;zeros(k,1)]; % the k zeros
data = data(randperm(length(data))); % shuffle
diag_index = 1:n+1:n*n; % linear index to all diagonal elements
offd_index = setdiff(1:n*n,diag_index); % linear index to all other elements
M = zeros(n,n);
M(offd_index) = data; % set off-diagonal elements to data
M(diag_index) = sum(M,1); % set diagonal elements to sum of columns
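As a quick usage check (hypothetical, with n and x as above), the fraction of zeros among the off-diagonal entries should come out at roughly x:
frac_zeros = nnz(M(offd_index) == 0) / (n*(n-1)); % fraction of zero off-diagonal entries
disp(frac_zeros);                                 % should be close to x (exactly k/(n*(n-1)))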
To refer to the diagonal you want eye(n,'logical'). Here is a solution:
n=5;
M = rand(n);
disp("Original matrix");
disp(M);
x = sum(M);
for i=1:n
for j=1:n
if(i == j)
M(i,j) = x(i);
end
end
end
disp('loop solution:')
disp(M);
M(eye(n,'logical'))=x;
disp('eye solution:')
disp(M);
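If eye(n,'logical') is not available in your release, linear indexing does the same job; a sketch of the equivalent assignment:
M(1:n+1:end) = x;   % linear indices 1, n+2, 2n+3, ... address the diagonal elements
disp('linear indexing solution:')
disp(M);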
I have a constant 2D double matrix mat1. I also have a 2D cell array mat2 where every cell contains a 2D or 3D double matrix. These double matrices have the same number of rows and columns as mat1. I need to dot multiply (.*) mat1 with every slice of each double matrix within mat2. The result needs to be another cell array results with the same size as mat2, whereby the contained double matrices must equal the double matrices of mat2 in terms of size.
Here's my code to generate mat1 and mat2 for illustration purposes. I am struggling at the point where the multiplication should take place.
rowCells = 5;
colCells = 3;
rowTimeSeries = 300;
colTimeSeries = 5;
slices = [1;10];
% Create 2D double matrix
mat1 = rand(rowTimeSeries, colTimeSeries);
% Create 2D cell matrix comprising 2D and/or 3D double matrices
mat2 = cell(rowCells,colCells);
for c = 1:colCells
for r = 1:rowCells
slice = randsample(slices, 1, true);
mat2{r,c} = rand(rowTimeSeries, colTimeSeries, slice);
end
end
% Multiply (.*) mat1 with mat2 (every slice)
results = cell(rowCells,colCells);
for c = 1:colCells
for r = 1:rowCells
results{r,c} = ... % I am struggling here!!!
end
end
You could use bsxfun to remove the need for your custom function multiply2D3D; it works in a similar way! Updated code:
results = cell(rowCells,colCells);
for c = 1:colCells
for r = 1:rowCells
results{r,c} = bsxfun(@times, mat1, mat2{r,c});
end
end
This will work for 2D and 3D matrices where the number of rows and cols is the same in each of your "slices", so it should work in your case.
You also don't need to loop over the rows and the columns of your cell array separately. This loop has the same number of iterations, but it is one loop not two, so the code is a little more streamlined:
results = cell(size(mat2));
for n = 1:numel(mat2) % Loop over every element of mat2. numel(mat2) = rowCells*colCells
results{n} = bsxfun(@times, mat1, mat2{n});
end
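On newer Matlab releases (R2016b or later) implicit expansion makes bsxfun unnecessary, so cellfun can do the whole job in one statement; a sketch under that assumption:
results = cellfun(@(x) mat1 .* x, mat2, 'UniformOutput', false); % mat1 expands along the 3rd dimension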
I had almost the exact same answer as Wolfie but he beat me to it.
Anyway, here is a one-liner that I think is slightly nicer:
nR = rowCells; % Number of Rows
nC = colCells; % Number of Cols
results = arrayfun(@(I) bsxfun(@times, mat1, mat2{I}), reshape(1:nR*nC,[],nC), 'un',0);
This uses arrayfun to perform the loop indexing and bsxfun for the multiplications.
A few advantages
1) Setting 'UniformOutput' ('un') to 0 in arrayfun makes it return a cell array, so the results variable is also a cell array and doesn't need to be initialised (in contrast to using loops).
2) The dimensions of the index array determine the dimensions of results at the output, so they can match whatever you like.
3) The single line can be used directly as an input argument to a function.
Disadvantage
1) Can run slower than using for loops as Wolfie pointed out in the comments.
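As a quick sanity check of the one-liner (hypothetical usage with the variables from the question), every cell of results should match the size of the corresponding cell of mat2:
sizes_match = cellfun(@(a,b) isequal(size(a), size(b)), results, mat2);
disp(all(sizes_match(:))); % should display 1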
One solution I came up with is to outsource the multiplication of a 2D matrix with a 3D matrix into a function. However, I am curious to know whether this is the most efficient way to solve this problem.
rowCells = 5;
colCells = 3;
rowTimeSeries = 300;
colTimeSeries = 5;
slices = [1;10];
% Create 2D double matrix
mat1 = rand(rowTimeSeries, colTimeSeries);
% Create 2D cell matrix comprising 2D and/or 3D double matrices
mat2 = cell(rowCells,colCells);
for c = 1:colCells
for r = 1:rowCells
slice = randsample(slices, 1, true);
mat2{r,c} = rand(rowTimeSeries, colTimeSeries, slice);
end
end
% Multiply (.*) mat1 with mat2 (every slice)
results = cell(rowCells,colCells);
for c = 1:colCells
for r = 1:rowCells
results{r,c} = multiply2D3D(mat1, mat2{r,c});
end
end
function vout = multiply2D3D(mat2D, mat3D)
%MULTIPLY2D3D multiplies a 2D double matrix with every slice of a 3D
% double matrix.
%
% INPUTs:
% mat2D:
% 2D double matrix
%
% mat3D:
% 3D double matrix where the third dimension is equal to or greater than 1.
%
% OUTPUT:
% vout:
% 3D double matrix with the same size as mat3D. Every slice in vout
% is the result of a multiplication of mat2D with every individual slice
% of mat3D.
[rows, cols, slices] = size(mat3D);
vout = zeros(rows, cols, slices);
for s = 1 : slices
vout(:,:,s) = mat2D .* mat3D(:,:,s);
end
end
I need to plot an NxN matrix M, mostly full of zeros, but only show the entries where M(x,y) is different from 0.
t_max = 10; % set the maximum number of iterations
n = 10; % dimension n*n
d = 1; % the probability of changing place
x = randi([1 n]); % random row
y = randi([1 n]); % random column
grid = zeros(n); % set up an empty n*n grid
grid(x,y) = 1; % put an agent in a random place
for t=1:t_max
newgrid = randomwalk1(grid,d); % call the function random walk for one agent
end
I tried image(M), but it's not giving satisfying results since I also need to keep track of the elements that are different from 0; hold on doesn't work in this case.
You are looking for the spy() function. Just type spy(M) and see what happens.
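A hedged sketch of how that could look inside your loop (randomwalk1 and feeding the grid back into itself are assumptions about your setup):
for t = 1:t_max
    grid = randomwalk1(grid, d);    % advance the walk one step (assumed to return the new grid)
    spy(grid);                      % plots only the nonzero entries, with their row/column positions
    title(sprintf('t = %d', t));
    drawnow;                        % refresh the figure on every iteration
end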
We are working on a project and trying to get some results with KPCA.
We have a dataset (handwritten digits) and have taken the first 200 digits of each number, so our complete traindata matrix is 2000x784 (784 being the dimensions).
When we do KPCA we get a matrix with the new low-dimensional dataset, e.g. 2000x100. However, we don't understand the result. Shouldn't we get other matrices, such as we do when we do SVD for PCA? The code we use for KPCA is the following:
function data_out = kernelpca(data_in,num_dim)
%% Checking to ensure output dimensions are lesser than input dimension.
if num_dim > size(data_in,1)
fprintf('\nDimensions of output data has to be lesser than the dimensions of input data\n');
fprintf('Closing program\n');
return
end
%% Using the Gaussian Kernel to construct the Kernel K
% K(x,y) = exp(-||x-y||^2/(sigma)^2)
% K is a symmetric Kernel
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
for col = 1:row
temp = sum(((data_in(:,row) - data_in(:,col)).^2));
K(row,col) = exp(-temp); % sigma = 1
end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
K(row,row) = K(row,row)/2;
end
% We know that for PCA the data has to be centered. Even if the input data
% set 'X', let's say, is centered, there is no guarantee that the data, when
% mapped into the feature space [phi(x)], is also centered. Since we never
% actually work in the feature space, we cannot center the data there. To
% include this correction, a pseudo-centering is done using the Kernel.
one_mat = ones(size(K));
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
clear K
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*K*alpha = K*alpha
% Thus lamda's has to be normalized by the number of points
opts.issym=1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval ~= 0;
eig_val = eig_val./size(data_in,2);
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicated dot product
for col = 1:size(eigvec,2)
eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(eig_val,'descend');
eigvec = eigvec(:,index);
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
data_out(count,:) = eigvec(:,count)'*K_center';
end
We have read lots of papers but still cannot get the hang of KPCA's logic!
Any help would be appreciated!
PCA Algorithm:
Take your data samples $x_1, \dots, x_N$.
Compute the mean $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$.
Compute the covariance $C = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$.
Solve the eigenvalue problem $C v = \lambda v$, where $C$ is the covariance matrix, $v$ are the eigenvectors of the covariance matrix, and $\lambda$ are the eigenvalues of the covariance matrix.
With the first n eigenvectors you reduce the dimensionality of your data to n dimensions. You can use this code for the PCA; it has an integrated example and it is simple.
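For comparison, here is a minimal linear-PCA sketch in Matlab, assuming X is an N-by-d matrix with samples in rows and n is the target dimensionality:
Xc = bsxfun(@minus, X, mean(X, 1));    % center the data
C = (Xc' * Xc) / (size(X, 1) - 1);     % d-by-d covariance matrix
[V, L] = eig(C);                       % eigenvectors V, eigenvalues on the diagonal of L
[~, idx] = sort(diag(L), 'descend');   % order by decreasing eigenvalue
W = V(:, idx(1:n));                    % first n principal directions
X_reduced = Xc * W;                    % N-by-n low-dimensional representation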
KPCA Algorithm:
We choose a kernel function; in your code this is specified by:
K(x,y) = exp(-||x-y||^2/(sigma)^2)
in order to represent your data in a high-dimensional space, hoping that in this space your data will be better represented for further purposes such as classification or clustering, whereas this task could be harder to solve in the initial feature space. This trick is also known as the "kernel trick".
[Step 1] Construct the Gram matrix
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
for col = 1:row
temp = sum(((data_in(:,row) - data_in(:,col)).^2));
K(row,col) = exp(-temp); % sigma = 1
end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
K(row,row) = K(row,row)/2;
end
Here, because the Gram matrix is symmetric, only half of the values are computed, and the final result is obtained by adding the Gram matrix computed so far to its transpose. Finally, the diagonal elements are divided by 2, since they have been added to themselves, as the comments mention.
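For what it's worth, the same Gram matrix can be built without the double loop. A sketch with sigma = 1, assuming (as in the code above) that each column of data_in is one sample:
sq = sum(data_in.^2, 1);                              % squared norm of every column (sample)
D2 = bsxfun(@plus, sq', sq) - 2*(data_in' * data_in); % pairwise squared Euclidean distances
D2 = max(D2, 0);                                      % guard against small negative round-off
K = exp(-D2);                                         % Gaussian kernel with sigma = 1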
[Step 2] Normalize the kernel matrix
This is done by this part of your code:
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
As the comments mention, a pseudo-centering procedure must be done. For an idea of the proof, see here.
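For reference, in the standard formulation (Schölkopf's kernel PCA) the centering matrix has entries 1/N, where N is the number of samples; a minimal sketch of that version, assuming K is the N-by-N Gram matrix from Step 1:
N = size(K, 1);                                    % number of samples
one_n = ones(N) / N;                               % centering matrix with entries 1/N
K_center = K - one_n*K - K*one_n + one_n*K*one_n;  % centered kernel matrix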
[Step 3] Solve the eigenvalue problem
This part of the code is responsible for this task.
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*K*alpha = K*alpha
% Thus lamda's has to be normalized by the number of points
opts.issym=1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval ~= 0;
eig_val = eig_val./size(data_in,2);
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicated dot product
for col = 1:size(eigvec,2)
eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(eig_val,'descend');
eigvec = eigvec(:,index);
[Step 4] Change the representation of each data point
This part of the code is responsible for this task.
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
data_out(count,:) = eigvec(:,count)'*K_center';
end
See the details here.
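The loop in this step can also be collapsed into a single matrix product; an equivalent sketch with the same variables:
data_out = eigvec(:, 1:num_dim)' * K_center';   % num_dim-by-N projected data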
PS: I encourage you to use the code written by this author, as it contains intuitive examples.
Suppose I have D, an X-by-Y-by-Z data matrix. I also have M, an X-by-Y "masking" matrix. My goal is to set the elements (Xi,Yi,:) in D to NaN when (Xi,Yi) in M is false.
Is there any way to avoid doing this in a loop? I tried using ind2sub, but that fails:
M = logical(round(rand(3,3))); % mask
D = randn(3,3,2); % data
% try getting x,y pairs of elements to be masked
[x,y] = ind2sub(size(M),find(M == 0));
D_masked = D;
D_masked(x,y,:) = NaN; % does not work!
% do it the old-fashioned way
D_masked = D;
for iX = 1:size(M,1)
for iY = 1:size(M,2)
if ~M(iX,iY), D_masked(iX,iY,:) = NaN; end
end
end
I suspect I'm missing something obvious here. (:
You can do this by replicating your logical mask M across the third dimension using REPMAT so that it is the same size as D. Then, index away:
D_masked = D;
D_masked(repmat(~M,[1 1 size(D,3)])) = NaN;
If replicating the mask matrix is undesirable, there is another alternative. You can first find a set of linear indices for where M equals 0, then replicate that set size(D,3) times, then shift each set of indices by a multiple of numel(M) so it indexes a different part of D in the third dimension. I'll illustrate this here using BSXFUN:
D_masked = D;
index = bsxfun(@plus,find(~M),(0:(size(D,3)-1)).*numel(M));
D_masked(index) = NaN;
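A quick check of either version (hypothetical usage with the M and D from the question): every masked position should now be NaN across all slices, and nothing else should be:
check = squeeze(all(isnan(D_masked), 3));   % true where the whole third dimension is NaN
disp(isequal(check, ~M));                   % should display 1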
Reshape is basically free, so you can use it here for an efficient solution, reducing the whole thing to a 2D problem.
sz=size(D);
D=reshape(D,[],sz(3)); %reshape to 2d
D(~M(:),:) = nan; %set the rows where the mask is false to NaN
D=reshape(D,sz); %reshape back to 3d
My Matlab is a bit rusty but I think logical indexing should work:
D_masked = D;
D_masked(~M) = NaN; % note: a 2D logical mask only indexes the first slice of D
(which probably can be combined into one statement with a conditional expression on the rhs...)