Currently, I have used ode45 with a function that outputs a matrix (C) that is 3945x9. These are supposed to be 3x3 matrices, so I did C = reshape(C.',3,3,[]). Now I have a 3x3x3945 array. What I want to do is find the error of each 3x3 matrix, which is computed as C*C.' - eye(3). However, I do not know how to do this with all my data at once. It works for a single slice, e.g. C(:,:,1)*C(:,:,1).' - eye(3), but not for C(:,:,:)*C(:,:,:).' - eye(3).
Use the nD matrix multiply routine pagemtimes. E.g.,
pagemtimes(C,'none',C,'transpose') - eye(3)
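For example, a minimal sketch, assuming C is the 3x3x3945 array from the reshape above (pagemtimes requires R2020b or newer):
E = pagemtimes(C,'none',C,'transpose') - eye(3);   % 3x3x3945 array of per-slice errors
errPerSlice = squeeze(max(abs(E), [], [1 2]));     % largest absolute error in each slice, 3945x1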
Because it is possible to create a (non-constant) diagonal matrix in Matlab, e.g. A = diag([1;2;3]), I wonder if there is an easy way to create a non-constant tridiagonal matrix, since the gallery('tridiag',...) command only works with constant tridiagonal matrices.
If I understood your question correctly, then you can create random tridiagonal matrices using the line of code below:
n = 10;
p = 3;
% banded Toeplitz matrix with random entries, scaled column-wise by a random diagonal
T = toeplitz([1 randn(1,n-p-1) zeros(1,p)], [1 randn(1,n-p-1) zeros(1,p)]) * diag(randn(1,n))
Note, you can also change the 1 in the toeplitz call, or you can remove it altogether, but then you get a warning.
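An alternative sketch, building directly on the diag example from the question (the vector names main, sub, and sup are just illustrative):
n = 10;
main = randn(n,1);     % main diagonal
sub  = randn(n-1,1);   % subdiagonal
sup  = randn(n-1,1);   % superdiagonal
T2 = diag(main) + diag(sub,-1) + diag(sup,1)   % non-constant tridiagonal matrix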
In my current analysis, I am trying to multiply a matrix (flm) of dimension nxm by the inverse of an nxmxp matrix, and then multiply that result by the inverse of the matrix (flm).
I was trying using the following code:
flm = repmat(Data.fm.flm(chan,:), [1 1 morder]); % chan is a 1-by-3 vector
A = (flm(:,:,:)/A_inv(:,:,:))/flm(:,:,:);
However, due to the mismatch in dimensions, I am getting the following error message:
Error using ==> mrdivide
Inputs must be 2-D, or at least one
input must be scalar.
To compute elementwise RDIVIDE, use
RDIVIDE (./) instead.
I have no idea how to proceed without using a for loop, so does anyone have any suggestions?
I think you are looking for a way to conveniently multiply matrices when one has higher dimensionality than the other. In that case you can use bsxfun to automatically 'expand' the smaller matrix.
x = rand(3,4);
y = rand(3,4,5);
bsxfun(@times, x, y)
It is quite simple, and very efficient.
Make sure to check out doc bsxfun for more examples.
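If the goal is really the per-slice matrix division from the question rather than an elementwise product, a hedged sketch would be the following, assuming a release that provides pagemrdivide (R2022a or newer) and that each slice of flm and A_inv is square and conformable:
A = pagemrdivide(pagemrdivide(flm, A_inv), flm);   % (flm/A_inv)/flm, applied slice by slice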
This is the link that explains how to solve inverse kinematics using ANFIS:
http://www.mathworks.com/help/fuzzy/examples/modeling-inverse-kinematics-in-a-robotic-arm.html
But the example is only for a 2-DOF robot. How do I make the data set if the robot uses 4 motors?
I ask because there is always an error that says "Error using meshgrid. Too many input arguments." when running this code:
a= 0:(1*pi/180):(180*pi/180);
b= 0:(1*pi/180):(180*pi/180);
c= 0:(1*pi/180):(180*pi/180);
d= (25*180/pi):(1*pi/180):(180*pi/180);
[THETA1, THETA2, THETA3, THETA4] = meshgrid(a, b, c, d);
Any suggestion will be appreciated.
Thanks!
meshgrid is specifically for 2D or 3D data. For arbitrary n-dimensional data, the appropriately-named ndgrid is the guy you want.
Note that meshgrid is intended for working intuitively with Cartesian X,Y{,Z} data, so it swaps the first two dimensions in the shape of its output to reflect X,Y order rather than row,column order. ndgrid, being more general, just gives you standard multidimensional array order.
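A minimal sketch of the fix, reusing the ranges a, b, c and d from the question:
[THETA1, THETA2, THETA3, THETA4] = ndgrid(a, b, c, d);   % four 4-D grids, one per joint angle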
I have two matrices X and Y, both of size mxn. I want to create a new matrix O of size mxm such that each (i,j)th entry in this new matrix is computed by applying a function to the ith and jth rows of X and Y respectively. In my case m = 10000 and n = 500. I tried using a loop, but it takes forever. Is there an efficient way to do it?
I am targeting two functions: the dot product, dot(row_i, row_j), and exp(-1*norm(row_i - row_j)). But I was wondering if there is a general way so that I can plug in any function.
Solution #1
For the first case, it looks like you can simply use matrix multiplication after transposing Y -
X*Y'
If you are dealing with complex numbers -
conj(X*ctranspose(Y))
Solution #2
For the second case, you need to do a little more work. You need to use bsxfun with permute to re-arrange dimensions, employ the raw form of the norm calculation, and finally squeeze to get a 2D array output -
squeeze(exp(-1*sqrt(sum(bsxfun(@minus,X,permute(Y,[3 2 1])).^2,2))))
If you would like to avoid squeeze, you can use two permutes -
exp(-1*sqrt(sum(bsxfun(@minus,permute(X,[1 3 2]),permute(Y,[3 1 2])).^2,3)))
I would also advise you to look into this problem - Efficiently compute pairwise squared Euclidean distance in Matlab.
In conclusion, there isn't a single most efficient approach that works for every function of the ith and jth rows of X and Y. If you are still hell bent on that, you can use anonymous function handles with bsxfun, but I am afraid it won't be the most efficient technique.
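For completeness, here is a straightforward (but loop-based, and therefore slow for m = 10000) sketch of the plug-in-any-function pattern; f is a hypothetical handle taking one row of X and one row of Y:
f = @(xi, yj) exp(-norm(xi - yj));   % swap in any row-pair function here
m = size(X, 1);
O = zeros(m, m);
for i = 1:m
    for j = 1:m
        O(i, j) = f(X(i, :), Y(j, :));   % O(i,j) = f(ith row of X, jth row of Y)
    end
end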
For the second part, you could also use pdist2:
result = exp(-pdist2(X,Y));
I'm using the following code to get a partial correlation matrix (original code from http://www.fmrib.ox.ac.uk/analysis/netsim/):
ic=-inv(cov(ts1)); % raw negative inverse covariance matrix
r=(ic ./ repmat(sqrt(diag(ic)),1,Nnodes)) ./ repmat(sqrt(diag(ic))',Nnodes,1); % use diagonal to get normalised coefficients
r=r+eye(Nnodes); % remove diagonal
My original matrix (ts1) is brain activity over a time course (X variable) in multiple voxels, i.e. 3x3 volumetric pixels (Y variable).
The problem is that I have more dependent variables (y, the voxels) than independent variables (x, the time course).
I get the following Warning-
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 4.998365e-022.
Any thoughts on how to fix the code so I'll get the partial correlation between all of the voxels?
The warning is from Matlab having a problem inverting the covariance matrix.
One solution might be to try pinv()
http://www.mathworks.com/help/techdoc/ref/pinv.html
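A minimal sketch of that suggestion, keeping the rest of the code from the question unchanged:
ic = -pinv(cov(ts1));   % pseudo-inverse instead of inv() for the rank-deficient covariance
% then proceed with the normalisation and diagonal lines above as before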