I wrote naive gauss elimination without pivoting:
function [x] = NaiveGaussianElimination(A, b)
N = length(b);
x = zeros(N,1);
mulDivOp = 0;   % running count of multiplications/divisions
subAddOp = 0;   % running count of additions/subtractions
% forward elimination
for column = 1:(N-1)
    for row = (column+1):N
        mul = A(row,column)/A(column,column);
        A(row,:) = A(row,:) - mul*A(column,:);
        b(row) = b(row) - mul*b(column);
        mulDivOp = mulDivOp + N - column + 2;
        subAddOp = subAddOp + N - column + 1;
    end
end
% back substitution
for row = N:-1:1
    x(row) = b(row);
    for i = (row+1):N
        x(row) = x(row) - A(row,i)*x(i);
    end
    x(row) = x(row)/A(row,row);
    mulDivOp = mulDivOp + N - row + 1;
    subAddOp = subAddOp + N - row;
end
x = x';
mulDivOp  % display the operation counts
subAddOp
end
but I am curious whether I can reduce the number of multiplications/divisions and additions/subtractions when I know in advance which elements of the matrix are 0:
For N = 10:
A =
96 118 0 0 0 0 0 0 0 63
154 -31 -258 0 0 0 0 0 0 0
0 -168 257 -216 0 0 0 0 0 0
0 0 202 24 308 0 0 0 0 0
0 0 0 -262 -36 -244 0 0 0 0
0 0 0 0 287 -308 171 0 0 0
0 0 0 0 0 197 229 -258 0 0
0 0 0 0 0 0 -62 -149 186 0
0 0 0 0 0 0 0 -43 255 -198
-147 0 0 0 0 0 0 0 -147 -220
(the non-zero values come from randi). In general, the non-zero elements are a_{1,N}, a_{N,1} and a_{i,j} with abs(i-j) <= 1.
Probably not. There are nice algorithms for reducing tridiagonal matrices (which these aren't, but they are close) to diagonal matrices. Indeed, this is one way in which the SVD of a matrix is produced, using orthogonal similarity transformations, not Gaussian elimination.
The problem is that when you use Gaussian elimination to remove the nonzero entries in the first column, you introduce additional nonzero entries in other columns: eliminating a_{N,1} fills in the last row, the corner entry a_{1,N} likewise causes fill-in down the last column, and that fill-in spreads as the elimination proceeds. The further you proceed, the more you destroy the structure of the matrix. It may be that Gaussian elimination is simply the wrong approach for the problem you are trying to solve, at least if you are trying to exploit the structure of the matrix.
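If the goal is simply to solve A*x = b while taking advantage of the known zero pattern, one practical option (a minimal sketch, not part of the original answer) is to hand the matrix to MATLAB's sparse backslash, which performs a sparse LU with a fill-reducing ordering automatically. A and b below are stand-ins built with the same band-plus-corners structure as in the question:
N = 10;
A = diag(randi([-300 300], N, 1)) ...        % main diagonal
  + diag(randi([-300 300], N-1, 1),  1) ...  % superdiagonal
  + diag(randi([-300 300], N-1, 1), -1);     % subdiagonal
A(1,N) = randi([-300 300]);                  % corner entries a_{1,N} and a_{N,1}
A(N,1) = randi([-300 300]);
b = randi([-300 300], N, 1);
x = sparse(A) \ b;                           % sparse solver exploits the structure
This does not contradict the point about fill-in above; it simply delegates the structure-exploiting bookkeeping to the sparse solver instead of hand-counting operations.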
Related
I am trying to generate a rectangular matrix with 1s on the diagonal above the main diagonal and -1s on the main diagonal. I used "eye" which does not create the diagonal above the main.
Please find my attempt to this below.
N = 5
M1 = -eye([N-1 N])
M2 = eye([N N-1])'
M = M1+M2
I am unable to resolve this issue on my own. Any help or links to relevant documentation would be greatly appreciated.
I don't know of any prebuilt function, but you can easily make such a matrix yourself:
N=5;
M=7;
main_diag=-eye(N,M);    % renamed from "diag" to avoid shadowing the built-in diag function
upper_diag=horzcat(zeros(N,1),eye(N,M-1))
final=main_diag+upper_diag
using the identity matrix and some concatenation to shift the diagonal around. This example produces a rectangular (5-by-7) matrix; the same approach works if you want a square one.
The result looks like:
final =
-1 1 0 0 0 0 0
0 -1 1 0 0 0 0
0 0 -1 1 0 0 0
0 0 0 -1 1 0 0
0 0 0 0 -1 1 0
Just create eye and diag matrices as per normal, add them together, then chop away the rows you do not need:
nCol = 7;
nRow = 5;
M = -eye(nCol) + diag(ones(nCol - 1, 1), 1);
M = M(1:nRow, 1:nCol)
produces
M =
-1 1 0 0 0 0 0
0 -1 1 0 0 0 0
0 0 -1 1 0 0 0
0 0 0 -1 1 0 0
0 0 0 0 -1 1 0
The four-input version of spdiags does just that, producing a sparse matrix. You may then need to convert it to full.
M = 5; %// number of rows
N = 7; %// number of columns
d = [0 1]; %// specify main diagonal and the one above
v = [-1 1]; %// values in those diagonals
result = full(spdiags(ones(M,1)*v, d, M, N));
This gives
result =
-1 1 0 0 0 0 0
0 -1 1 0 0 0 0
0 0 -1 1 0 0 0
0 0 0 -1 1 0 0
0 0 0 0 -1 1 0
I need a MATLAB script that returns the n nodes of maximum degree in a graph.
For example:
N = maxnodes(Graph,n)
Graph is a matrix
n is the number of nodes that we need
N is a vector that contains the n nodes.
Here is my source code (script), but it doesn't work well.
M = [0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0;
1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0;
0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0;
0 0 1 1 0 1 0 0 0 0 0 1 0 0 0 0;
1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0;
1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0;
0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0;
0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1;
0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0;
0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0;
0 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0;
0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0;
0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 1;
0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0;];
n = 5; % The number of nodes that we want
G=[]; % I'll store here the n nodes of maximum degree
for i=1:size(M)
    G1(1,i)=sum(M(i,:)); % I'm storing each node with its degree in G1
    G1(2,i)=i;
    C(1,i)=G1(1,i);      % I store only the degrees of the nodes
end
C1 = sort(C,'descend'); % I sort the node degrees in descending order
for i=1:n % We want to take only the n nodes that we need and save them in C2
    C2(1,i) = C1(1,i);
end
C2; % This vector stores the n largest degrees that I need.
% My actual problem is here. How could I find the node that corresponds to each degree?
% I tried to do it with the following loop:
for j=1:n
    for i=1:size(M)
        if C2(1,j) == G1(1,i)
            G2(1,j)=G1(2,i);
        end
    end
end % But this loop doesn't store the nodes correctly in G2, because it repeats nodes.
G2
You have shown absolutely no effort, so you really shouldn't be getting help from anyone... but I love graph problems, so I'll throw you a bone.
I'm going to assume that Graph is an adjacency matrix, where each element (i,j) in the matrix corresponds to an edge connected between the two nodes i and j. I also assume that the input is an undirected graph. If you examine the nature of the adjacency matrix (the Wikipedia article on adjacency matrices has a great example), it's not hard to see that the degree of a node i is simply the sum over all of the columns of row i in the adjacency matrix. Recall that the degree is defined as the total number of edges connected to a particular node. As such, all you have to do is sum over all of the columns for each row, determine which rows have the largest degree in your graph, and return the nodes that have this largest degree, up to n of them.
However, we will put in a safeguard: if n is larger than the number of nodes having this maximum degree, we cap it so that we only return that many nodes rather than n.
Therefore:
function [N] = maxnodes(Graph, n)
%// Find degrees of each node
degs = sum(Graph, 2);
%// Find those nodes that have the largest degree
locs = find(degs == max(degs));
%// If n is larger than the total number of nodes
%// having this maximum degree, then cap it
if n > numel(locs)
    n = numel(locs);
end
%// Return those nodes that have this maximum degree
N = locs(1:n);
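For reference, a quick check against the adjacency matrix M from the question (my own verification, not part of the original answer): the maximum degree is 4 and only nodes 5 and 15 attain it, so even with n = 5 this capped version returns just those two nodes.
N = maxnodes(M, 5)   % returns [5; 15], since only two nodes reach the maximum degree of 4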
Here is a script that works well and solves my problem. That said, I would still have liked to see my source code above debugged.
function N = maxnodes(M,n)
nb1_rows = sum(M,2);                 % degree of each node (row sums)
[nbs,is] = sort(nb1_rows,'descend'); % sort nodes by degree, largest first
N = transpose(is(1:n));              % keep the n highest-degree nodes
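Usage is the same as before; note that this version always returns exactly n nodes, filling the remaining slots with the next-highest degrees (the example below is my own illustration, not part of the original post):
N = maxnodes(M, 5)   % nodes 5 and 15 (degree 4) come first; the other three slots
                     % are degree-3 nodes, in whatever order sort breaks the ties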
I have a matrix m = zeros(1000, 1000). Within this matrix I want to draw an estimate of the line which passes through 2 points from my matrix. Let's say x = [122 455]; and y = [500 500];.
How can I do this in Matlab? Are there any predefined functions to do this? I am using Matlab 2012b.
I'll denote the two endpoints as p1 and p2 because I'm planning to use x and y for something else. I'm also assuming that the first coordinate of p1 and p2 is x and the second is y. So here's a rather simple way to do it:
Obtain the equation of the line y = ax + b. In MATLAB, this can be done by:
x = p1(1):p2(1);
dx = p2(1) - p1(1);
dy = p2(2) - p1(2);
y = round((x - p1(1)) * dy / dx + p1(2));
Convert the values of x and y to indices of elements in the matrix, and set those elements to 1.
idx = sub2ind(size(m), y, x);
m(idx) = 1;
Example
Here's an example for a small 10-by-10 matrix:
%// This is our initial conditon
m = zeros(10);
p1 = [1, 4];
p2 = [5, 7];
%// Ensure the new x-dimension has the largest displacement
[max_delta, ix] = max(abs(p2 - p1));
iy = length(p1) - ix + 1;
%// Draw a line from p1 to p2 on matrix m
x = p1(ix):p2(ix);
y = round((x - p1(ix)) * (p2(iy) - p1(iy)) / (p2(ix) - p1(ix)) + p1(iy));
m(sub2ind(size(m), y, x)) = 1;
m = shiftdim(m, ix > iy); %// Transpose result if necessary
The result is:
m =
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Update: I have patched this algorithm to work when dy > dx by treating the dimension with the largest displacement as if it were the x-dimension, and then transposing the result if necessary.
Neither of the provided answers works for displacements in y greater than in x (dy > dx).
As pointed out, Bresenham's line algorithm is exactly meant for that.
The MATLAB file provided here works similarly to the examples in the other answers but covers all the use cases.
To relate to the previously provided example, the script can be used like this:
% initial conditions
m = zeros(10);
p1 = [1, 4];
p2 = [5, 10]; % note dy > dx
% use file provided on file exchange
[x y] = bresenham(p1(1),p1(2),p2(1),p2(2));
% replace entries in matrix m
m(sub2ind(size(m), y, x)) = 1;
result looks like this:
m =
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
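If you prefer not to depend on the File Exchange submission, a minimal integer-only Bresenham sketch (my own illustration with the same four-scalar-input interface; the File Exchange internals may differ) could look like this:
function [x, y] = bresenham_sketch(x1, y1, x2, y2)
% Integer Bresenham line from (x1,y1) to (x2,y2); returns all visited points.
dx = abs(x2 - x1); sx = sign(x2 - x1);
dy = abs(y2 - y1); sy = sign(y2 - y1);
err = dx - dy;
x = x1; y = y1;                         % start point
while x(end) ~= x2 || y(end) ~= y2
    e2 = 2*err;
    cx = x(end); cy = y(end);
    if e2 > -dy, err = err - dy; cx = cx + sx; end
    if e2 <  dx, err = err + dx; cy = cy + sy; end
    x(end+1) = cx; y(end+1) = cy;       % append the next pixel on the line
end
end
Called as [x, y] = bresenham_sketch(1, 4, 5, 10), it visits the same seven points shown in the matrix above.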
For me (MATLAB R2013b) the following line did not work when p1(1) > p2(1) (the colon operator cannot count backwards):
x = p1(1):p2(1);
E.G.:
1:10
1 2 3 4 5 6 7 8 9 10
10:1
Empty matrix: 1-by-0
But it worked when I used linspace instead:
x = linspace(p1(1), p2(1), abs(p2(1)-p1(1))+1);
I am writing an algorithm in MATLAB for bicubic interpolation of a surface Psi(x,y). I have a bug in the code and cannot seem to track it down. I am trying a test case with Psi = X^2 - 0.25 so that it's easier to track down the bug. It seems as if my interpolation has an offset. My comments are included in my code. Any help would be appreciated.
Plot of Psi=X^2 in blue and interpolation in red
Contour lines of Psi are plotted, and the red dot is the point I am computing the interpolation about. The thick red line is the interpolation, which is offset quite a bit from the red dot.
function main()
epsilon=0.000001;
xMin=-1+epsilon;
xMax= 1+epsilon;
yMin=-1+epsilon;
yMax= 1+epsilon;
dx=0.1; Nx=ceil((xMax-xMin)/dx)+1;
dy=0.1; Ny=ceil((yMax-yMin)/dy)+1;
x=xMin:dx:xMax; x=x(1:Nx);
y=yMin:dy:yMax; y=y(1:Ny);
[XPolInX,XPolInY]=GetGhostMatricies(Nx,Ny); %Linear extrapolation matrix
[D0x,D0y]=GetDiffMatricies(Nx,Ny,dx,dy); %derivative matricies: D0x is central differencing in x
[X,Y]=meshgrid(x,y);
Psi=X.^2-0.25; %Note that my algorithm is being written for a Psi that may not have an analytic representation. This Psi is only a test case.
psi=zeros(Nx+2,Ny+2); %linearly extrapolate psi (for solving differential equation not shown here)
psi(2:(Nx+1),2:(Ny+1))=Psi';
psi=(XPolInY*(XPolInX*psi)')';
%compute derivatives of psi
psi_x =D0x*psi; psi_x =(XPolInY*(XPolInX*psi_x)')';
psi_y =(D0y*psi')'; psi_y =(XPolInY*(XPolInX*psi_y)')';
psi_xy=D0x*psi_y; psi_xy=(XPolInY*(XPolInX*psi_xy)')';
% i have verified that my derivatives are computed correctly
biCubInv=GetBiCubicInverse(dx,dy);
i=5; %lets compute the bicubic interpolation at this x(i), y(j)
j=1;
psiVoxel=[psi( i,j),psi( i+1,j),psi( i,j+1),psi( i+1,j+1),...
psi_x( i,j),psi_x( i+1,j),psi_x( i,j+1),psi_x( i+1,j+1),...
psi_y( i,j),psi_y( i+1,j),psi_y( i,j+1),psi_y( i+1,j+1),...
psi_xy(i,j),psi_xy(i+1,j),psi_xy(i,j+1),psi_xy(i+1,j+1)]';
a=biCubInv*psiVoxel; %a=[a00 a01 ... a33]; polynomial coefficients; 1st index is power of (x-xi), 2nd index is power of (y-yj)
xi=x(5); yj=y(1);
clear x y
x=(xi-.2):.01:(xi+.2); %this is a local region about the point we are interpolating
y=(yj-.2):.01:(yj+.2);
[dX,dY]=meshgrid(x,y);
Psi=dX.^2-0.25;
figure(2) %just plotting the 0 level contour of Psi here
plot(xi,yj,'.r','MarkerSize',20)
hold on
contour(x,y,Psi,[0 0],'r','LineWidth',2)
set(gca,'FontSize',14)
axis([x(1) x(end) y(1) y(end)])
grid on
set(gca,'xtick',(xi-.2):.1:(xi+.2));
set(gca,'ytick',(yj-.2):.1:(yj+.2));
xlabel('x')
ylabel('y')
[dX dY]=meshgrid(x-xi,y-yj);
%P is my interpolating polynomial
P = a(1) + a(5) *dY + a(9) *dY.^2 + a(13) *dY.^3 ...
+ a(2)*dX + a(6)*dX .*dY + a(10)*dX .*dY.^2 + a(14)*dX .*dY.^3 ...
+ a(3)*dX.^2 + a(7)*dX.^2.*dY + a(11)*dX.^2.*dY.^2 + a(15)*dX.^2.*dY.^3 ...
+ a(4)*dX.^3 + a(8)*dX.^3.*dY + a(12)*dX.^3.*dY.^2 + a(16)*dX.^3.*dY.^3 ;
[c h]=contour(x,y,P)
clabel(c,h)
figure(3)
plot(x,x.^2-.25) %this is the exact function
hold on
plot(x,P(1,:),'-r*')
%See there is some offset here
end
%-------------------------------------------------------------------------
function [XPolInX,XPolInY]=GetGhostMatricies(Nx,Ny)
XPolInX=diag(ones(1,Nx+2),0);
XPolInY=diag(ones(1,Ny+2),0);
XPolInX(1,1) =0; XPolInX(1,2) =2; XPolInX(1,3) =-1;
XPolInY(1,1) =0; XPolInY(1,2) =2; XPolInY(1,3) =-1;
XPolInX(Nx+2,Nx+2)=0; XPolInX(Nx+2,Nx+1)=2; XPolInX(Nx+2,Nx)=-1;
XPolInY(Ny+2,Ny+2)=0; XPolInY(Ny+2,Ny+1)=2; XPolInY(Ny+2,Ny)=-1;
fprintf('Done GetGhostMatricies\n')
end
%-------------------------------------------------------------------------
function [D0x,D0y]=GetDiffMatricies(Nx,Ny,dx,dy)
D0x=diag(ones(1,Nx-1),1)-diag(ones(1,Nx-1),-1);
D0y=diag(ones(1,Ny-1),1)-diag(ones(1,Ny-1),-1);
D0x(1,1)=-3; D0x(1,2)=4; D0x(1,3)=-1;
D0y(1,1)=-3; D0y(1,2)=4; D0y(1,3)=-1;
D0x(Nx,Nx)=3; D0x(Nx,Nx-1)=-4; D0x(Nx,Nx-2)=1;
D0y(Ny,Ny)=3; D0y(Ny,Ny-1)=-4; D0y(Ny,Ny-2)=1;
%pad with ghost cells which are simply zeros
tmp=D0x; D0x=zeros(Nx+2,Nx+2); D0x(2:(Nx+1),2:(Nx+1))=tmp; tmp=0;
tmp=D0y; D0y=zeros(Ny+2,Ny+2); D0y(2:(Ny+1),2:(Ny+1))=tmp; tmp=0;
%scale appropriately by dx & dy
D0x=D0x/(2*dx);
D0y=D0y/(2*dy);
end
%-------------------------------------------------------------------------
function biCubInv=GetBiCubicInverse(dx,dy)
%p(x,y)=a00+a01(x-xi)+a02(x-xi)^2+...+a33(x-xi)^3(y-yj)^3
%biCubic*a=[psi(i,j) psi(i+1,j) psi(i,j+1) psi(i+1,j+1) psi_x(i,j) ... psi_y(i,j) ... psi_xy(i,j) ... psi_xy(i+1,j+1)]
%here, psi_x is the x derivative of psi
%I verified that this matrix is correct by setting dx=dy=1 and comparing to the inverse here http://en.wikipedia.org/wiki/Bicubic_interpolation
biCubic=[
%00 10 20 30 01 11 21 31 02 12 22 32 03 13 23 33
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
1 dx dx^2 dx^3 0 0 0 0 0 0 0 0 0 0 0 0;
1 0 0 0 dy 0 0 0 dy^2 0 0 0 dy^3 0 0 0;
1 dx dx^2 dx^3 dy dx*dy dx^2*dy dx^3*dy dy^2 dx*dy^2 dx^2*dy^2 dx^3*dy^2 dy^3 dx*dy^3 dx^2*dy^3 dx^3*dy^3;
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 1 2*dx 3*dx^2 0 0 0 0 0 0 0 0 0 0 0 0;
0 1 0 0 0 dy 0 0 0 dy^2 0 0 0 dy^3 0 0;
0 1 2*dx 3*dx^2 0 dy 2*dx*dy 3*dx^2*dy 0 dy^2 2*dx*dy^2 3*dx^2*dy^2 0 dy^3 2*dx*dy^3 3*dx^2*dy^3;
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 1 dx dx^2 dx^3 0 0 0 0 0 0 0 0;
0 0 0 0 1 0 0 0 2*dy 0 0 0 3*dy^2 0 0 0;
0 0 0 0 1 dx dx^2 dx^3 2*dy 2*dx*dy 2*dx^2*dy 2*dx^3*dy 3*dy^2 3*dx*dy^2 3*dx^2*dy^2 3*dx^3*dy^2;
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 1 2*dx 3*dx^2 0 0 0 0 0 0 0 0;
0 0 0 0 0 1 0 0 0 2*dy 0 0 0 3*dy^2 0 0;
0 0 0 0 0 1 2*dx 3*dx^2 0 2*dy 4*dx*dy 6*dx^2*dy 0 3*dy^2 6*dx*dy^2 9*dx^2*dy^2];
biCubInv=inv(biCubic);
end
%-------------------------------------------------------------------------
I found my error. I pad my matrices with ghost cells, but I forgot that index i in Psi without ghost cells becomes i+1 in psi with ghost cells. Hence, I should be evaluating my interpolating polynomial P at xi=x(6); yj=y(2), not xi=x(5); yj=y(1).
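As a general debugging aid (my own suggestion, not part of the fix above), a hand-rolled bicubic interpolant can be sanity-checked against MATLAB's built-in interp2 on the same test surface; the query points below are arbitrary values inside the grid:
[X, Y] = meshgrid(-1:0.1:1, -1:0.1:1);
Psi = X.^2 - 0.25;                          % same test surface as above
xq = -0.55:0.01:-0.35;                      % arbitrary query points inside the grid
yq = -0.35*ones(size(xq));
Pq = interp2(X, Y, Psi, xq, yq, 'cubic');   % built-in bicubic interpolation
plot(xq, xq.^2 - 0.25, 'b', xq, Pq, '-r*')  % the red curve should track the exact parabola with no offset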
Can somebody explain how to use this function in MATLAB:
"sequentialfs"
It looks straightforward, but I do not know how we can design a function handle for it.
Any clue?
Here's a simpler example than the one in the documentation.
First let's create a very simple dataset. We have some class labels y. 500 are from class 0, and 500 are from class 1, and they are randomly ordered.
>> y = [zeros(500,1); ones(500,1)];
>> y = y(randperm(1000));
And we have 100 variables x that we want to use to predict y. 99 of them are just random noise, but one of them is highly correlated with the class label.
>> x = rand(1000,99);
>> x(:,100) = y + rand(1000,1)*0.1;
Now let's say we want to classify the points using linear discriminant analysis. If we were to do this directly without applying any feature selection, we would first split the data up into a training set and a test set:
>> xtrain = x(1:700, :); xtest = x(701:end, :);
>> ytrain = y(1:700); ytest = y(701:end);
Then we would classify them:
>> ypred = classify(xtest, xtrain, ytrain);
And finally we would measure the error rate of the prediction:
>> sum(ytest ~= ypred)
ans =
0
and in this case we get perfect classification.
To make a function handle to be used with sequentialfs, just put these pieces together:
>> f = @(xtrain, ytrain, xtest, ytest) sum(ytest ~= classify(xtest, xtrain, ytrain));
And pass all of them together into sequentialfs:
>> fs = sequentialfs(f,x,y)
fs =
Columns 1 through 16
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 17 through 32
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 33 through 48
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 49 through 64
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 65 through 80
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 81 through 96
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Columns 97 through 100
0 0 0 1
The final 1 in the output indicates that variable 100 is, as expected, the best predictor of y among the variables in x.
The example in the documentation for sequentialfs is a little more complex, mostly because the predicted class labels are strings rather than numerical values as above, so ~strcmp is used to calculate the error rate rather than ~=. In addition it makes use of cross-validation to estimate the error rate, rather than direct evaluation as above.
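For completeness, here is a hedged sketch of that cross-validated variant, reusing f, x and y from above ('cv' and 'options' are standard sequentialfs name-value parameters):
>> c = cvpartition(y, 'KFold', 10);   % 10-fold cross-validation partition
>> opts = statset('Display', 'iter'); % print progress of the selection
>> fs_cv = sequentialfs(f, x, y, 'cv', c, 'options', opts);
>> find(fs_cv)                        % indices of the selected variables (for this synthetic data, expect column 100)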