I want to create a matrix (n by n, n being odd) in MATLAB that has its central element fixed, and its surrounding elements increasing/decreasing by some constant value. For example:

-0.2 -0.2 -0.2 -0.2 -0.2
-0.2 -0.1 -0.1 -0.1 -0.2
-0.2 -0.1    0 -0.1 -0.2
-0.2 -0.1 -0.1 -0.1 -0.2
-0.2 -0.2 -0.2 -0.2 -0.2

where my center element is 0 and each surrounding ring decrements by 0.1. I am pretty much blank about where to start. Your time and help are highly appreciated.
This alternative seems a bit faster than the for loop.
n = 7; % size
vector = -abs((1-n)/2:(n-1)/2)/10; % entries in middle row/column
x = min(vector,vector.') % final result: x(i,j) = min(vector(i),vector(j)), via implicit expansion
% works only for odd n, as your requirement states
n = 5; % matrix size
r = (n-1)/2; % surrounding rows
x = zeros(n); % array initialization
c = r-1:-1:0; % distance of each ring from the border, innermost ring first
% assigning values
for i = 1:r
x([1+c(i), end-c(i)], :) = -i/10;
x(:,[1+c(i), end-c(i)]) = -i/10;
end
x % final matrix
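The same idea generalizes to any center value and step; a minimal sketch (my addition; v0 and d are assumed parameters, not from the question):

n = 7; v0 = 0; d = -0.1; % size, center value, signed step per ring
c = (n+1)/2; % index of the central element
dist = max(abs((1:n)-c), abs((1:n).'-c)); % Chebyshev distance to the center (implicit expansion)
x = v0 + d*dist % rings of constant value around the center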
I have a matrix (image) of size m x n that I am trying to rearrange. I want to build a feature matrix of size (m x n) x 9, where each row holds a matrix element together with its 8-neighborhood elements. My attempt was to loop sequentially through each element of the original matrix and extract the neighborhood values, but this is too computationally heavy and takes too long because the matrix is very large. Is there any way to do this more efficiently?
Attempt
M_feat = nan(size(img,1)*size(img,2), 9);
temp = zeros(size(img)+2);
temp(2:end-1,2:end-1) = double(img);
for i = 2:size(img,1)+1
for j = 2:size(img,2)+1
neighbors = temp(i-1:i+1, j-1:j+1); % read 3-by-3 mini-matrix
neighbors = reshape(neighbors, 1, []);
M_feat((i-2)*size(img,2) + (j-1),:) = neighbors; % store row-wise (size(img,2) entries per image row)
end
end
Got it!
new_img = zeros(size(img)+2);
new_img(2:end-1,2:end-1) = double(img);
% Image boundary coordinates without first/last row/column
inner_coord = [2 2; size(new_img,1)-1 size(new_img,2)-1];
% 9x2 matrix with one row per relative shift needed to reach a neighbor
[d2, d1] = meshgrid(-1:1, -1:1);
d = [d1(:) d2(:)];
% Cell array to store all 9 shifted images
temp = cell(1, size(d,1));
for i = 1:size(d,1)
% x-indices of the submatrix when shifted by d(i,1)
coord_x = (inner_coord(1,1):inner_coord(2,1)) + d(i,1);
% y-indices of the submatrix when shifted by d(i,2)
coord_y = (inner_coord(1,2):inner_coord(2,2)) + d(i,2);
% image matrix resulting from shift by d(i,)
temp{i} = reshape(new_img(coord_x, coord_y), 1, []);
end
% Stack all 9 shifted images (as row vectors) and transpose, so each row holds one pixel's 9 neighbors
M_feat = vertcat(temp{:}).';
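If the Image Processing Toolbox is available, the same feature matrix can be built in two lines with im2col; this is an alternative sketch of mine, not part of the original solution:

padded = padarray(double(img), [1 1], 0); % zero-pad the border, as above
M_feat = im2col(padded, [3 3], 'sliding').'; % one row per 3x3 neighborhood, pixels in column-major order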
Is there a MATLAB command for generating a random n by n matrix with elements in the interval [0,1], where x% of the off-diagonal entries are 0, and then setting each diagonal element to the sum of every element in its column, in order to create a diagonally dominant dense/sparse matrix? This may be easy enough to write code for, but I was wondering whether there is already a built-in function with this capability.
EDIT:
I am new to MATLAB/programming, so this was easier said than done. I'm having trouble making the matrix with the percentage of zeros taken only over the off-diagonal. It's an n x n matrix, so there are $n^2$ entries, n of them on the diagonal; I want the percentage of zeros to be taken from the $n^2 - n$ off-diagonal elements. I cannot implement this correctly, and I do not know how to initialize my M (see below) so that it corresponds.
% Enter percentage as a decimal
function [M] = DiagDomSparse(n,x)
M = rand(n);
disp("Original matrix");
disp(M);
x = sum(M); % column sums (note: this overwrites the percentage argument x)
for i=1:n
for j=1:n
if(i == j)
M(i,j) = x(i);
end
end
end
disp(M);
Here is one approach that you could use. I'm sure you will get some other answers now with a more clever approach, but I like to keep things simple and understandable.
What I'm doing below is creating the data to be put in the off-diagonal elements first. I create an empty matrix and copy this data into the off-diagonal elements using linear indexing. Now I can compute the sum of columns and write those into the diagonal elements using linear indexing again. Because the matrix was initialized to zero, the diagonal elements are still zero when I compute the sum of columns, so they don't interfere.
n = 5;
x = 0.3; % fraction of zeros in off-diagonal
k = round(n*(n-1)*x); % number of zeros in off-diagonal
data = randn(n*(n-1)-k,1); % random numbers, pick your distribution here!
data = [data;zeros(k,1)]; % the k zeros
data = data(randperm(length(data))); % shuffle
diag_index = 1:n+1:n*n; % linear index to all diagonal elements
offd_index = setdiff(1:n*n,diag_index); % linear index to all other elements
M = zeros(n,n);
M(offd_index) = data; % set off-diagonal elements to data
M(diag_index) = sum(M,1); % set diagonal elements to sum of columns
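Two optional additions (my sketch, not part of the original answer): the question asked for entries in [0,1], so rand would replace randn in the data line; with nonnegative entries each diagonal element equals the sum of the absolute off-diagonal entries of its column, so the matrix is (weakly) diagonally dominant by construction. You can also check the zero fraction and store the result as sparse:

frac_zero = 1 - nnz(M(offd_index))/(n*(n-1)) % observed fraction of off-diagonal zeros, approximately x
S = sparse(M); % sparse storage pays off when x is large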
To refer to the diagonal you want eye(n,'logical'). Here is a solution:
n=5;
M = rand(n);
disp("Original matrix");
disp(M);
x = sum(M);
for i=1:n
for j=1:n
if(i == j)
M(i,j) = x(i);
end
end
end
disp('loop solution:')
disp(M);
M(eye(n,'logical'))=x;
disp('eye solution:')
disp(M);
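An equivalent one-liner indexes the diagonal linearly, as in the previous answer; the stride n+1 walks down the main diagonal:

M(1:n+1:end) = x; % same effect as the eye(n,'logical') mask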
I have two lists of functions, for instance log(n*x), n=1:2017, and cos(m*x), m=1:6. I want to construct the matrix product of these vectors and then integrate each element of the matrix between 10 and 20.
I have read this post:
Matrix of symbolic functions
but I think that it is not useful for this problem.
I'm trying to do this using a loop, but I cannot get it to work.
Thanks in advance for reading it.
You can solve this problem by assigning the appropriate vectors to n and m as follows:
n = (1:2017)'; % column vector
m = 1:6; % row vector
syms x;
l = log(n*x); % column vector of logs
c = cos(m*x); % row vector of cos
product = l*c; % matrix product
i = int(product, x, 10, 20); % integral from 10 to 20
iDouble = double(i); % convert the result to double
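As a quick sanity check (my addition; n0 and m0 are arbitrarily chosen indices), a single entry of the result can be compared with the corresponding scalar integral:

n0 = 3; m0 = 2; % arbitrary entry to verify
direct = double(int(log(n0*x)*cos(m0*x), x, 10, 20));
disp(direct - iDouble(n0, m0)) % should be ~0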
I am constructing an adjacency list based on the intensity differences of the pixels in an image.
The code snippet in Matlab is as follows:
m=1;
len = size(cur_label, 1);
for j=1:len
for k=1:len
if(k~=j) % avoiding diagonal elements
intensity_diff = abs(indx_intensity(j)-indx_intensity(k)); % intensity difference of two pixels
if intensity_diff<=10 % difference thresholded by 10
adj_list(m, 1) = j; % storing the vertices of the edge
adj_list(m, 2) = k;
m = m+1;
end
end
end
end
y = sparse(adj_list(:,1),adj_list(:,2),1); % creating a sparse matrix from the adjacency list
How can I avoid these nasty nested for loops? If the image is big, this becomes disastrously slow. If anyone has a solution, it would be a great help to me.
Regards,
Ratna
I am assuming the input indx_intensity is a 1D array here. With that assumption, here's a vectorized approach with broadcasting/bsxfun:
% Threshold parameter
thresh = 10;
% Get elementwise absolute differences between elements in indx_intensity
diffs = abs(bsxfun(@minus, indx_intensity(:), indx_intensity(:).'));
% Threshold the differences against thresh, giving a 2D square logical
% matrix. Then set the diagonal elements to zero to avoid self-edges.
mask = diffs <= thresh;
mask(1:len+1:end) = 0;
% Get the indices of the TRUE elements in the mask as the final output.
[R,C] = find(mask);
adj_list_out = [C R];
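On R2016b or newer, implicit expansion makes bsxfun unnecessary; here is a minimal sketch of the same idea:

v = indx_intensity(:); % force a column vector
mask = abs(v - v.') <= thresh; % pairwise differences via implicit expansion
mask(1:numel(v)+1:end) = false; % drop the diagonal (self-edges)
[R, C] = find(mask);
adj_list_out = [C R];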
We are working on a project and trying to get some results with KPCA.
We have a dataset (handwritten digits) and have taken the first 200 digits of each class, so our complete traindata matrix is 2000x784 (784 being the dimensions).
When we do KPCA we get a matrix with the new low-dimensional dataset, e.g. 2000x100. However, we don't understand the result. Shouldn't we get other matrices, as we do when we use SVD for PCA? The code we use for KPCA is the following:
function data_out = kernelpca(data_in,num_dim)
%% Checking to ensure output dimensions are lesser than input dimension.
if num_dim > size(data_in,1)
fprintf('\nDimensions of output data must be less than the dimensions of input data\n');
fprintf('Closing program\n');
return
end
%% Using the Gaussian Kernel to construct the Kernel K
% K(x,y) = exp(-||x-y||^2 / sigma^2)
% K is a symmetric Kernel
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
for col = 1:row
temp = sum(((data_in(:,row) - data_in(:,col)).^2));
K(row,col) = exp(-temp); % sigma = 1
end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
K(row,row) = K(row,row)/2;
end
% We know that for PCA the data has to be centered. Even if the input data
% set 'X', let's say, is centered, there is no guarantee that the data mapped
% into the feature space [phi(x)] is also centered. Since we never actually
% work in the feature space we cannot center the data there. To include this
% correction a pseudo-centering is done using the Kernel.
one_mat = ones(size(K))/size(K,1); % constant matrix with entries 1/N, needed for correct centering
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
clear K
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*alpha = K*alpha
% Thus the lamda's have to be normalized by the number of points
opts.issym=1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center, neigs, 'lm', opts);
eig_val = eigval./size(data_in,2); % eigenvalues normalized by the number of points
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
eigvec(:,col) = eigvec(:,col)./sqrt(eig_val(col,col));
end
[~, index] = sort(diag(eig_val),'descend');
eigvec = eigvec(:,index);
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
data_out(count,:) = eigvec(:,count)'*K_center';
end
We have read lots of papers but still cannot get the hang of KPCA's logic!
Any help would be appreciated!
PCA Algorithm:
Given data samples $x_i$, $i = 1, \dots, M$:
Compute the mean: $\mu = \frac{1}{M}\sum_{i=1}^{M} x_i$
Compute the covariance: $C = \frac{1}{M}\sum_{i=1}^{M}(x_i - \mu)(x_i - \mu)^T$
Solve the eigenvalue problem: $Cu = \lambda u$
$C$: covariance matrix. $u$: eigenvectors of the covariance matrix. $\lambda$: eigenvalues of the covariance matrix.
With the first n eigenvectors you reduce the dimensionality of your data to n dimensions. You can use this code for the PCA; it has an integrated example and it is simple.
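A minimal MATLAB sketch of these steps (my addition; it assumes the samples are the columns of X):

X = randn(784, 2000); % toy data: dimensions x samples
mu = mean(X, 2); % mean over the samples
Xc = X - mu; % center the data (implicit expansion, R2016b+)
C = (Xc*Xc')/size(X, 2); % covariance matrix
[U, D] = eig(C); % eigenvectors and eigenvalues of C
[~, idx] = sort(diag(D), 'descend'); % order by decreasing eigenvalue
U = U(:, idx);
Y = U(:, 1:100)'*Xc; % data reduced to 100 dimensions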
KPCA Algorithm:
We choose a kernel function; in your code this is specified by:
K(x,y) = exp(-||x-y||^2 / sigma^2)
in order to represent your data in a high-dimensional space, hoping that there the data will be better represented for further purposes like classification or clustering, whereas this task could be harder to solve in the initial feature space. This is also known as the "kernel trick".
[Step 1] Construct the Gram matrix
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
for col = 1:row
temp = sum(((data_in(:,row) - data_in(:,col)).^2));
K(row,col) = exp(-temp); % sigma = 1
end
end
K = K + K';
% Dividing the diagonal element by 2 since it has been added to itself
for row = 1:size(data_in,2)
K(row,row) = K(row,row)/2;
end
Because the Gram matrix is symmetric, only half of the values are computed, and the final result is obtained by adding the matrix computed so far to its transpose. Finally, the diagonal is divided by 2, since it was added to itself, as the comments mention.
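For reference, the same Gaussian Gram matrix can be built without loops (a sketch of mine; sigma = 1, samples as the columns of data_in, implicit expansion requires R2016b+):

sq = sum(data_in.^2, 1); % squared norms of the columns, 1 x N
D2 = max(sq + sq.' - 2*(data_in.'*data_in), 0); % pairwise squared distances, clamped at 0
K = exp(-D2); % Gaussian kernel with sigma = 1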
[Step 2] Center the kernel matrix
This is done by this part of your code:
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
As the comments mention, a pseudo-centering procedure must be done. For an idea about the proof, look here.
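Written out explicitly (my sketch; the key point is that one_mat must contain the value 1/N, not 1, for the centering to be correct):

N = size(K, 1); % number of samples
one_mat = ones(N)/N; % constant matrix with entries 1/N
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat; % centered Gram matrix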
[Step 3] Solve the eigenvalue problem
This part of the code is responsible for that task:
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lamda*alpha = K*alpha
% Thus the lamda's have to be normalized by the number of points
opts.issym=1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center, neigs, 'lm', opts);
eig_val = eigval./size(data_in,2); % eigenvalues normalized by the number of points
% Again 1 = lamda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
eigvec(:,col) = eigvec(:,col)./sqrt(eig_val(col,col));
end
[~, index] = sort(diag(eig_val),'descend');
eigvec = eigvec(:,index);
[Step 4] Change the representation of each data point
This part of the code is responsible for that task:
%% Projecting the data in lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
data_out(count,:) = eigvec(:,count)'*K_center';
end
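Since K_center is symmetric, this loop is equivalent to a single matrix product (my addition):

data_out = eigvec(:, 1:num_dim)' * K_center; % one row per retained component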
See the details here.
PS: I encourage you to use the code written by this author; it contains intuitive examples.