I have a problem multiplying a vector by the inverse of a matrix in MATLAB. The code I am using is the following:
% Final Time
T = 0.1;
% Number of grid cells
N=20;
%N=40;
L=20;
% Delta x
dx=1/N
% define cell centers
%x = 0+dx*0.5:dx:1-0.5*dx;
x = linspace(-L/2, L/2, N)';
%define number of time steps
NTime = 100; %NB! Stability condition: if NTime were 50 you would get a completely wrong answer because lambda > 0.5
%NTime = 30;
%NTime = 10;
%NTime = 20;
%NTime = 4*21;
%NTime = 4*19;
% Time step dt
dt = T/NTime
% Define a vector that is useful for handling the different cells
J = 1:N; % number the cells of the domain
J1 = 2:N-1; % the interior cells
J2 = 1:N-1; % numbering of the cell interfaces
%define vector for initial data
u0 = zeros(1,N);
L = x<0.5;
u0(L) = 0;
u0(~L) = 1;
plot(x,u0,'-r')
grid on
hold on
% define vector for solution
u = zeros(1,N);
u_old = zeros(1,N);
% useful quantity for the discrete scheme
r = dt/dx^2
mu = dt/dx;
% calculate the numerical solution u by going through a loop of NTime number
% of time steps
A=zeros(N,N);
alpha(1)=A(1,1);
d(1)=alpha(1);
b(1)=0;
c(1)=b(1);
gamma(1,2)=A(1,2);
% initial state
u_old = u0;
pause
for j = 2:NTime
A(j,j)=1+2*r;
A(j,j-1)=-(1/dx^2);
A(j,j+1)=-(1/dx^2);
u=u_old./A;
% plotting
plot(x,u,'-')
xlabel('X')
ylabel('P(X)')
hold on
grid on
% update "u_old" before you move forward to the next time level
u_old = u;
pause
end
hold off
The error message I get is:
Matrix dimensions must agree.
Error in Implicit_new (line 72)
u=u_old./A;
My question is therefore: how can I perform u = u_old*A^(-1) in MATLAB?
As knedlsepp said, v./A is elementwise division, which is not what you wanted. You can use either
v/A, provided that v is a row vector and its length is equal to the number of columns in A. The result is a row vector.
A\v, provided that v is a column vector and its length is equal to the number of rows in A. The result is a column vector.
The results differ only in shape: v/A is the transpose of A'\v'.
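Applied to your loop, that would be, for example (a minimal sketch, assuming A has been assembled as a full N-by-N matrix and u_old is a 1-by-N row vector):
u = u_old / A;       % row-vector form, equivalent to u_old*inv(A) but without forming the inverse
% or, equivalently, with backslash and transposes:
u = (A \ u_old')';   % solves A*z = u_old' for z, then transposes back to a row vector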
I have a stepped shaft as per the attached image. The following information is available as input parameters:
Young's modulus 123e3 N/mm^2.
Cross-sectional area 300 mm^2 over a length of 400 mm.
Cross-sectional area 400 mm^2 over a length of 250 mm.
An axial force of 200 kN acts on the shaft; the load is located 200 mm from the end of the shaft that has the 300 mm^2 cross-section.
I need help to perform finite element analysis of this in MATLAB.
Please help me write MATLAB code for it.
%% Clearing workspace
clc
clear
close all
%% Element specifications
ne = 3; % Number of elements
nne = 2; % Number of nodes per element
nn = ne*(nne - 1) + 1; % total number of nodes
ndof = 1; % Number of degrees of freedom per node
sg = nn*ndof; % size of global stiffness matrix
se = nne*ndof; % size of elemental stiffness matrix
KG = zeros(sg,sg); % Global stiffness matrix
Ke = zeros(se,se); % Elemental Stiffness Matrix
Fe = zeros(se,1); % Elemental Force Vector
FG = zeros(sg,1); % Global Force Vector
%% Geometrical parameters
E = 123e3*ones(1,ne); % Young's Modulus in N/mm^2
P = 200e3; % Force in N
F = P;
A = ones(1,ne) ; % Area of cross-section
A(1)=300; % Area of cross-section of 1st element in mm^2
A(2)=300; % Area of cross-section of 2nd element in mm^2
A(3)=400; % Area of cross-section of 3rd element in mm^2
L = ones(1,ne); % Length of elements in mm
L(1)=200; % Length of 1st element in mm
L(2)=200; % Length of 2nd element in mm
L(3)=250; % Length of 3rd element in mm
%% Assembly of Global Stiffness Matrix
for i = 1:ne
Ke = (A(i)*E(i)/L(i))*[1 -1;-1 1]; % Element Stiffness Matrix
for j = 1:se
for k = 1:se
KG(i + j - 1, i + k - 1) = KG(i + j - 1, i + k - 1) + Ke(j,k);
end
end
end
%% Concentrated Load Vector at end
FG(2,1) = F; % Defining location of concentrated load
%% Application of boundary conditions
KGS = KG;
cdof = [1 4]; % specify fixed degree of freedom number
Lcdof = length(cdof);
for a = 1:Lcdof
KGS(cdof(a),:) = 0;
KGS(:,cdof(a)) = 1;
FG(cdof(a),1) = 0;
end
FGL = length(FG);
for b = 1:FGL
if(b > length(FG))
elseif(FG(b)<0)
FG(b) = [];
end
end
%% Solving for displacement
U = linsolve(KGS,FG)
U1=KGS\FG
%% Calculation of Reaction Forces
FR = KG*U1
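If it is useful, here is a short post-processing sketch (my own addition, not part of the requested solver) that recovers the axial stress in each element from the nodal displacements U1 computed above; it assumes element i connects nodes i and i+1, as in the assembly loop:
%% Post-processing (sketch): axial stress in each element
stress = zeros(1,ne);
for i = 1:ne
    % bar element: strain = (u2 - u1)/L, stress = E*strain
    stress(i) = E(i)*(U1(i+1) - U1(i))/L(i); % N/mm^2
end
stress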
I'm generating 3d fractal noise in MATLAB using a variety of methods. It's working relatively well, but I'm having an issue where I see vertical striping artifacts in my noise. This happens regardless of what data type or resolution I use.
Edit: I figured it out. The solution is posted as an answer below. Thanks everyone for your thoughts and guidance!
expo = 2^6;
dims = [expo,expo,expo];
beta = -4.5;
render = randnd(beta, dims); % Create volumetric fractal
render = render - min(render); % Set floor to zero
render = render ./ max(render); % Set ceiling to one
%render = imbinarize(render); % BW Threshold option
render = render .* 255; % For greyscale
slicer = 1; % Turn on image slicer/saver
i = 0; % Page counter
format = '.png';
imagename = '___testDump/slice';
imshow(render(:,:,1),[0 255]); %Single test image
if slicer == 1
for c = 1:length(render)
i = i+1;
pagenumber = num2str(i);
filename = [imagename, pagenumber, format];
imwrite(uint8(render(:,:,i)),filename)
end
end
function X = randnd(beta,varargin)
seed = 999;
rng(seed); % Set seed
%% X = randnd(beta,varargin)
% Based on similar functions by Jon Yearsley and Hristo Zhivomirov
% Written by Marcin Konowalczyk
% Timmel Group @ Oxford University
%% Parse the input
narginchk(0,Inf); nargoutchk(0,1);
if nargin < 2 || isempty(beta); beta = 0; end % Default to white noise
assert(isnumeric(beta) && isequal(size(beta),[1 1]),'''beta'' must be a number');
assert(-6 <= beta && beta <= 6,'''beta'' out of range'); % Put on reasonable bounds
%% Generate N-dimensional white noise with 'randn'
X = randn(varargin{:});
if isempty(X); return; end; % Usually happens when size vector contains zeros
% Squeeze prevents an error if X has more than one leading singleton dimension
% This is a slight deviation from the pure functionality of 'randn'
X = squeeze(X);
% Return if white noise is requested
if beta == 0; return; end;
%% Generate corresponding N-dimensional matrix of multipliers
N = size(X);
% Create matrix of multipliers (M) of X in the frequency domain
M = [];
for j = 1:length(N)
n = N(j);
if (rem(n,2)~=0) % if n is odd
% Nyquist frequency bin does not show up in odd-numbered fft
k = ifftshift(-(n-1)/2:(n-1)/2);
else
k = ifftshift(-n/2:n/2-1);
end
% Spectral multipliers
m = (k.^2)';
if isempty(M);
M = m;
else
% Create the permutation vector
M_perm = circshift(1:length(size(M))+1,[0 1]);
% Permute a singleton dimension to the beginning of M
M = permute(M,M_perm);
% Add m along the first dimension of M
M = bsxfun(@plus,M,m);
end
end
% Reverse M to match X (since new dimensions were being added from the left)
M = permute(M,length(size(M)):-1:1);
assert(isequal(size(M),size(X)),'Bad programming error'); % This should never occur
% Shape the amplitude multipliers by beta/4 which corresponds to shaping the power by beta
M = M.^(beta/4);
% Set the DC component to zero
M(1,1) = 0;
%% Multiply X by M in frequency domain
Xstd = std(X(:));
Xmean = mean(X(:));
X = real(ifftn(fftn(X).*M));
% Force zero mean unity standard deviation
X = X - mean(X(:));
X = X./std(X(:));
% Restore the standard deviation and mean from before the spectral shaping.
% This ensures the random sample from randn is truly random. After all, if
% the mean was always exactly zero it would not be all that random.
X = X + Xmean;
X = X.*Xstd;
end
Here is my solution:
My min/max normalization code (the render - min(render) and render ./ max(render) lines) was bad. I wanted to divide all values in the matrix by the single largest value in the matrix so that all values would fall between 0 and 1. Because I used max() improperly, I was dividing each column by that column's maximum value instead; thus the vertical stripes.
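A quick illustration of the column-wise behaviour (a minimal standalone example, not part of my render code):
A = [1 8; 4 2];
max(A)            % returns [4 8], the maximum of each column
max(A,[],'all')   % returns 8, the single largest element of the whole matrix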
In the end this is what my code looks like. X is the 3 dimensional matrix:
minVal = min(X,[],'all'); % Get the lowest value in the entire matrix
X = X - minVal; % Set min value to zero
maxVal = max(X,[],'all'); % Get the highest value in the entire matrix
X = X ./ maxVal; % Set max value to one
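On MATLAB releases that predate the 'all' option of min/max, the same whole-matrix normalization can be written with linear indexing:
X = X - min(X(:));   % shift the minimum of the entire matrix to zero
X = X ./ max(X(:));  % scale the maximum of the entire matrix to one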
The formula for the discrete double Fourier series that I'm attempting to code in MATLAB is:
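Roughly, in terms of the variables used in my code, the series has the form
y(x) = A_0 + \sum_{k=1}^{N}\sum_{n=1}^{N}\left[ A_{kn}\cos\!\left(\tfrac{2\pi k v_s x}{\lambda}\right)\cos\!\left(\tfrac{2\pi n x}{\tau}\right) + B_{kn}\sin\!\left(\tfrac{2\pi k v_s x}{\lambda}\right)\sin\!\left(\tfrac{2\pi n x}{\tau}\right)\right]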
The coefficient in front of the trigonometric sum (the Fourier amplitude) is what I'm trying to extract from the fitting of the data through the double Fourier series seen above. With my current code the original function is not reconstructed, so my coefficients cannot be correct. I'm not certain if this is of any significance or insight, but the second term for the A coefficients (Akn(1)) is 13 orders of magnitude larger than any other coefficient.
Any suggestions, modifications, or comments about my program would be greatly appreciated.
%data = csvread('digitized_plot_data.csv',1);
%xdata = data(:,1);
%ydata = data(:,2);
%x0 = xdata(1);
lambda = 20; %km
tau = 20; %s
vs = 7.6; %km/s (velocity of CHAMP satellite)
L = 4; %S
% Number of terms to use:
N = 100;
% set up matrices:
M = zeros(length(xdata),1+2*N);
M(:,1) = 1;
for k=1:N
for n=1:N %error using *, inner matrix dimensions must agree...
M(:,2*n) = cos(2*pi/lambda*k*vs*xdata).*cos(2*pi/tau*n*xdata);
M(:,2*n+1) = sin(2*pi/lambda*k*vs*xdata).*sin(2*pi/tau*n*xdata);
end
end
C = M\ydata;
%least squares coefficients:
A0 = C(1);
Akn = C(2:2:end);
Bkn = C(3:2:end);
% reconstruct original function values (verification check):
y = A0;
for k=1:length(Akn)
y = y + Akn(k)*cos(2*pi/lambda*k*vs*xdata).*cos(2*pi/tau*n*xdata) + Bkn(k)*sin(2*pi/lambda*k*vs*xdata).*sin(2*pi/tau*n*xdata);
end
% plotting
hold on
plot(xdata,ydata,'ko')
plot(xdata,y,'b--')
legend('Data','Least Squares','location','northeast')
xlabel('Centered Time Event [s]'); ylabel('J[\muA/m^2]'); title('Single FAC Event (50 Hz)')
I have a matrix X with tens of rows and thousands of columns; all elements are categorical, and each column is re-organized into an index column. For example, the i-th column X(:,i) = [-1,-1,0,2,1,2]' is converted to X2(:,i) = ic from [x,ia,ic] = unique(X(:,i)), for convenient use of the function accumarray. I randomly selected a submatrix from the matrix and counted the occurrences of each unique value in every column of the submatrix. I performed this procedure 10,000 times. I know several methods for counting values in a column; the fastest way I have found so far is shown below:
mx = max(X);
for iter = 1:numperm
for j = 1:ny
ky = yrand(:,iter)==uy(j);
% select submatrix from X where all rows correspond to rows in y that y equals to uy(j)
Xk = X(ky,:);
% specify the sites where to put the number of each unique value
mxj = mx*(j-1);
mxi = mxj+1;
mxk = max(Xk)+mxj;
% iteration to count number of unique values in each column of the submatrix
for i = 1:c
pxs(mxi(i):mxk(i),i) = accumarray(Xk(:,i),1);
end
end
end
This is a way to perform a random permutation test to calculate the information gain between a data matrix X of size n by c and a categorical variable y, under which y is randomly permuted. In the code above, all randomly permuted copies of y are stored in the matrix yrand, and the number of permutations is numperm. The unique values of y are stored in uy and their count in ny. In each iteration of 1:numperm, the submatrix Xk is selected according to each unique value of y, and the occurrences of every value in each column of this submatrix are counted and stored in the matrix pxs.
The most time-consuming part of the above code is the loop over i = 1:c for large c.
Is it possible to apply accumarray in a matrix manner to avoid the for loop? How else can I improve the above code?
-------
As requested, a simplified test function containing the code above is provided below:
%% test
function test(x,y)
[r,c] = size(x);
x2 = x;
numperm = 1000;
% convert the original matrix to index matrix for suitable and fast use of accumarray function
for i = 1:c
[~,~,ic] = unique(x(:,i));
x2(:,i) = ic;
end
% get 'numperm' rand permutations of y
yrand(r, numperm) = 0;
for i = 1:numperm
yrand(:,i) = y(randperm(r));
end
% get statistic of y
uy = unique(y);
nuy = numel(uy);
% main iterations
mx = max(x2);
pxs(max(mx),c) = 0;
for iter = 1:numperm
for j = 1:nuy
ky = yrand(:,iter)==uy(j);
xk = x2(ky,:);
mxj = mx*(j-1);
mxk = max(xk)+mxj;
mxi = mxj+1;
for i = 1:c
pxs(mxi(i):mxk(i),i) = accumarray(xk(:,i),1);
end
end
end
And a test data
x = round(randn(60,3000));
y = [ones(30,1);ones(30,1)*-1];
Test the function
tic; test(x,y); toc
returns Elapsed time is 15.391628 seconds. on my computer. The test function uses 1000 permutations, so if I perform 10,000 permutations and do some additional computations (negligible compared with the code above), a runtime of more than 150 s is expected. I wonder whether the code can be improved. Intuitively, performing accumarray in a matrix manner should save a lot of time. Can I do that?
The approach suggested by @rahnema1 has significantly improved the calculation, so I post my answer here, as also requested by @Dev-iL.
%% test
function test(x,y)
[r,c] = size(x);
x2 = x;
numperm = 1000;
% convert the original matrix to index matrix for suitable and fast use of accumarray function
for i = 1:c
[~,~,ic] = unique(x(:,i));
x2(:,i) = ic;
end
% get 'numperm' rand permutations of y
yrand(r, numperm) = 0;
for i = 1:numperm
yrand(:,i) = y(randperm(r));
end
% get statistic of y
uy = unique(y);
nuy = numel(uy);
% main iterations
mx = max(max(x2));
% preallocation
pxs(mx*nuy,c) = 0;
% set the edges of the bin for function histc
binrg = (1:mx)';
% preallocation of the range of matrix into which the results will be stored
mxr = mx*(0:nuy);
for iter = 1:numperm
yt = yrand(:,iter);
for j = 1:nuy
pxs(mxr(j)+1:mxr(j+1),:) = histc(x2(yt==uy(j),:),binrg); % per-value counts for every column of the submatrix
end
end
Test results:
>> x = round(randn(60,3000));
>> y = [ones(30,1);ones(30,1)*-1];
>> tic; test(x,y); toc
Elapsed time is 15.632962 seconds.
>> tic; test(x,y); toc % using the way suggested by rahnema1, i.e., revised function posted above
Elapsed time is 2.900463 seconds.
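For reference, a fully vectorized accumarray version of the per-column counting is also possible (a sketch under the same index-coding assumption, not benchmarked against the histc version above):
% xk: one index-coded submatrix (entries are positive integers), as in the original code
xk = randi(5, 30, 3000);                        % hypothetical example data
[rk, ck] = size(xk);
subs = [xk(:), repelem((1:ck)', rk)];           % (value, column) subscript pairs in column-major order
counts = accumarray(subs, 1, [max(xk(:)), ck]); % counts(v,i) = number of times value v occurs in column i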
I am coding a Gaussian Process regression algorithm. Here is the code:
% Data generating function
fh = @(x)(2*cos(2*pi*x/10).*x);
% range
x = -5:0.01:5;
N = length(x);
% Sampled data points from the generating function
M = 50;
selection = boolean(zeros(N,1));
j = randsample(N, M);
% mark them
selection(j) = 1;
Xa = x(j);
% compute the function and extract mean
f = fh(Xa) - mean(fh(Xa));
sigma2 = 1;
% computing the interpolation using all x's
% It is expected that for points used to build the GP cov. matrix, the
% uncertainty is reduced...
K = squareform(pdist(x'));
K = exp(-(0.5*K.^2)/sigma2);
% upper left corner of K
Kaa = K(selection,selection);
% lower right corner of K
Kbb = K(~selection,~selection);
% upper right corner of K
Kab = K(selection,~selection);
% mean of posterior
m = Kab'*inv(Kaa+0.001*eye(M))*f';
% cov. matrix of posterior
D = Kbb - Kab'*inv(Kaa + 0.001*eye(M))*Kab;
% sampling M functions from the GP
[A,B,C] = svd(Kaa);
F0 = A*sqrt(B)*randn(M,M);
% mean from GP using sampled points
F0m = mean(F0,2);
F0d = std(F0,0,2);
%%
% put together data and estimation
F = zeros(N,1);
S = zeros(N,1);
F(selection) = f' + F0m;
S(selection) = F0d;
% sampling M function from posterior
[A,B,C] = svd(D);
a = A*sqrt(B)*randn(N-M,M);
% mean from posterior GPs
Fm = m + mean(a,2);
Fmd = std(a,0,2);
F(~selection) = Fm;
S(~selection) = Fmd;
%%
figure;
% show what we got...
plot(x, F, ':r', x, F-2*S, ':b', x, F+2*S, ':b'), grid on;
hold on;
% show points we got
plot(Xa, f, 'Ok');
% show the whole curve
plot(x, fh(x)-mean(fh(x)), 'k');
grid on;
I expect to get a figure where the uncertainty at unknown data points is large and around the sampled data points small. Instead I get an odd figure, and even odder is that the uncertainty around the sampled data points is bigger than for the rest. Can someone explain what I am doing wrong? Thanks!
There are a few things wrong with your code. Here are the most important points:
The major mistake that makes everything go wrong is the indexing of f. You are defining Xa = x(j), but you should actually do Xa = x(selection), so that the indexing is consistent with the indexing you use on the kernel matrix K.
Subtracting the sample mean f = fh(Xa) - mean(fh(Xa)) does not serve any purpose, and makes the circles in your plot sit off the actual function. (If you choose to subtract something, it should be a fixed number or function, not something that depends on the randomly sampled observations.)
You should compute the posterior mean and variance directly from m and D; no need to sample from the posterior and then obtain sample estimates for those.
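For reference, the posterior mean and covariance that the modified code below computes (with a small noise term sigma_noise added to the diagonal for numerical stability) are the standard GP regression expressions
m = K_{ab}^\top (K_{aa} + \sigma_{noise} I)^{-1} f, \qquad D = K_{bb} - K_{ab}^\top (K_{aa} + \sigma_{noise} I)^{-1} K_{ab}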
Here is a modified version of the script with the above points fixed.
%% Init
% Data generating function
fh = @(x)(2*cos(2*pi*x/10).*x);
% range
x = -5:0.01:5;
N = length(x);
% Sampled data points from the generating function
M = 5;
selection = boolean(zeros(N,1));
j = randsample(N, M);
% mark them
selection(j) = 1;
Xa = x(selection);
%% GP computations
% compute the function and extract mean
f = fh(Xa);
sigma2 = 2;
sigma_noise = 0.01;
var_kernel = 10;
% computing the interpolation using all x's
% It is expected that for points used to build the GP cov. matrix, the
% uncertainty is reduced...
K = squareform(pdist(x'));
K = var_kernel*exp(-(0.5*K.^2)/sigma2);
% upper left corner of K
Kaa = K(selection,selection);
% lower right corner of K
Kbb = K(~selection,~selection);
% upper right corner of K
Kab = K(selection,~selection);
% mean of posterior
m = Kab'/(Kaa + sigma_noise*eye(M))*f';
% cov. matrix of posterior
D = Kbb - Kab'/(Kaa + sigma_noise*eye(M))*Kab;
%% Plot
figure;
grid on;
hold on;
% GP estimates
plot(x(~selection), m);
plot(x(~selection), m + 2*sqrt(diag(D)), 'g-');
plot(x(~selection), m - 2*sqrt(diag(D)), 'g-');
% Observations
plot(Xa, f, 'Ok');
% True function
plot(x, fh(x), 'k');
A resulting plot from this with 5 randomly chosen observations shows the true function in black, the posterior mean in blue, and the confidence intervals in green.