Matlab: (.)^*T operation for complex numbers

The answer to the question asked here, Why is complex conjugate transpose the default in Matlab, says that for complex numbers the ' symbol (the same one used to transpose real numbers) denotes the conjugate transpose. Mathematically, the transpose operation used for real-valued matrices is denoted (.)^T. For complex-valued matrices the equivalent operation is (.)^H: first take the complex conjugate of every element, then take the transpose.
I want to implement the operation (.)^{*T} = (.)^H for complex numbers, and I have used the apostrophe ' for this. Please correct me where I am wrong.
I wanted to confirm in Matlab whether my implementation of the concept is correct. For example, for a real-valued vector A_r, I want to multiply it by its transpose: multiply_r = A_r*A_r'
Replicating the same for a complex-valued vector A_c, this operation becomes multiply_c = A_c*A_c'
A_r =[1,2,3]; %real valued vector
B_r = A_r'; %transpose of real valued vector
multiply_r =A_r*B_r;
A_c = [1 + sqrt(-1)*1, 2+sqrt(-1)*2, 3+sqrt(-1)*3]; %complex valued vector
B_c = A_c'; %conjugate transpose (.)^H of complex valued vector
multiply_c = A_c*B_c;
Is this okay?
UPDATE: I am trying to take the normal (non-conjugate) transpose of the complex-valued array below, so that it is arranged in 3 rows and 1 column instead of 1 row and 3 columns. Using the operator .' I am getting weird values because the array has grown in size! What is the proper way?
h = [ -5.1053 + 3.6797i 1.3327 + 5.7339i 4.1302 -10.7521i].'
h =
  -5.1053 + 3.6797i
   1.3327 + 5.7339i
   4.1302 + 0.0000i
   0.0000 -10.7521i

As you noted, Matlab has both matrix "transpose" ((.)^T) and "conjugate transpose" ((.)^H) defined.
For the plain (non-conjugate) transpose you have transpose, which can be written as the operator .' (note the '.' before the '):
aT = transpose(a);
isequal( aT, a.' ); % transpose() and .' are the same
For the complex conjugate transpose you have ctranspose, which can be written as the operator ' (note there is no . before the '):
aH = ctranspose(a);
isequal( aH, a' ); % ctranspose and ' are the same
You can verify using conj:
isequal( a', conj(a).' );
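To confirm the original question: A_c*A_c' is indeed the (.)^H product and yields a real scalar. A quick concrete check, using the same values as A_c in the question:
a  = [1 + 1i, 2 + 2i, 3 + 3i];  % same values as A_c above
aT = a.';                       % 3x1, imaginary parts unchanged
aH = a';                        % 3x1, imaginary parts negated
isequal(aH, conj(aT))           % true
a*a'                            % 28, i.e. the sum of |a_k|^2, a real scalar
a*a.'                           % 0 + 28i, NOT the squared norm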

Related

Complex vector transpose returns result with wrong signs: MATLAB [duplicate]

In matlab I have a matrix As
As = zeros(m, n);
Next I assign values to As and transpose the specific columns:
for idx = 1:n
    % Assign value to As, then assign to 'a' and 's'
    a = As(:, idx)';
    s = As(:, idx);
end
Then s is a column vector like:
s = [0.1 - 0.2i
0.3 + 0.4i]
But elements in a have the flipped signs:
a = [0.1 + 0.2i, 0.3 - 0.4i]
This confuses me: the transpose of s should be a row vector (no problem there), with the signs in the same order, -, +, like
a = [0.1 - 0.2i, 0.3 + 0.4i]
Can anyone tell me what the problem is?
The prime operator ' in matlab is actually an alias for ctranspose, which not only converts rows to columns (or columns to rows) of ordinary matrices or vectors, but also calculates the complex conjugate, i.e. it changes the sign of the imaginary part.
The non-conjugate transpose operator .' performs a transpose without conjugation; that is, it doesn't change the imaginary parts of the elements.
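So to get the row vector without the sign change, use the .' operator inside the loop, e.g. a = As(:, idx).'. A quick illustration with the values from the question:
s = [0.1 - 0.2i; 0.3 + 0.4i];
s'    % ctranspose:      [0.1 + 0.2i, 0.3 - 0.4i]  (what the code produced)
s.'   % plain transpose: [0.1 - 0.2i, 0.3 + 0.4i]  (what was expected)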

Plotting equally spaced points for a graph on MATLAB

The function I need to plot is y = exp(-0.3*t)*(2*cos(2*t) + 4*sin(2*t)) for the range of values of t between 0 and 2*pi.
I entered the following commands on MATLAB:
>> t=linspace(0,2*pi,101);
>> y=exp(-0.3*t)*(2*cos(2*t) + 4*sin(2*t));
And I come up with the following error:
Error using *
Inner matrix dimensions must agree.
I don't know why. Can someone point out why and suggest the correct command line argument?
Thanks!
Your issue is in this term:
exp(-0.3*t) * (2*cos(2*t) + 4*sin(2*t));
You are multiplying two vectors. You want element-wise operations, i.e. each element of exp(-0.3*t) times the corresponding element of (2*cos(2*t) + 4*sin(2*t)), rather than the matrix product of the two (which fails here because the inner dimensions of two 1-by-101 row vectors do not agree).
To achieve what you want, simply add a dot . before the multiplication *, like so
y = exp(-0.3*t) .* (2*cos(2*t) + 4*sin(2*t));
See this documentation for array vs. element-wise operations: http://uk.mathworks.com/help/matlab/matlab_prog/array-vs-matrix-operations.html
The "*" operator is a matrix-multiplication operator, like https://en.wikipedia.org/wiki/Matrix_multiplication
You need to use an ".*" operator which is a per-element operator. You must use it to match elements from one vector or matrix to the elements from the other vector or matrix one-to-one.
So you have to do
y=exp(-0.3*t).*(2*cos(2*t) + 4*sin(2*t));
Note that ".*" is not needed when multiplying by constant, because the effect is the same for matrix and per-element operation

MATLAB function that gives all the positive integers in a column vector

I need to create a function that has an input argument n, an integer with n > 1, and an output argument v, which is a column vector of length n containing all the positive integers smaller than or equal to n, arranged in such a way that no element of the vector equals its own index.
I know how to define the function
This is what I tried so far but it doesn't work
function[v]=int_col(n)
[1,n] = size(n);
k=1:n;
v=n(1:n);
v=k'
end
Let's take a look at what you have:
[1,n] = size(n);
This line doesn't make a lot of sense: n is an integer, which means that size(n) will give you [1,1]; you don't need that. (Also, an expression like [1,n] can't appear on the left-hand side of an assignment.) Drop that line; it's useless.
k=1:n;
That line is pretty good, k is now a row vector of size n containing the integers from 1 to n.
v=n(1:n);
Doesn't make sense. n isn't a vector (or rather, it's a 1x1 matrix); either way, indexing into it (that's what the parentheses do) doesn't do anything useful here. Drop that line too.
v=k'
That's also a nice line. It makes a column vector v out of your row vector k. The only thing that this doesn't satisfy is the "arranged in such a way that no element of the vector equals its own index" part, since right now every element equals its own index. So now you need to find a way to either shift those elements or shuffle them around in some way that satisfies this condition and you'd be done.
Let's give a working solution. You should really look into it and see how this thing works. It's important to solve the problem in smaller steps and to know what the code is doing.
function [v] = int_col(n)
    if n <= 1
        error('argument must be >1')
    end
    v = 1:n;             % generate a row vector of 1 to n
    v = v';              % make it a column vector
    v = circshift(v,1);  % shift all elements by 1
end
This is the result:
>> int_col(5)
ans =
5
1
2
3
4
Instead of using circshift you can do the following as well:
v = [v(end);v(1:end-1)];
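If you want to convince yourself the result meets all the requirements, here is a quick sanity check (these assert calls are only for illustration, not part of the assignment):
n = 5;
v = int_col(n);
assert(iscolumn(v) && numel(v) == n)   % column vector of length n
assert(isequal(sort(v), (1:n)'))       % contains exactly the integers 1..n
assert(all(v ~= (1:n)'))               % no element equals its own index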

Vectorize call to function of two vectors (treat matrix as array of vector)

I wish to compute the cumulative cosine distance between sets of vectors.
The natural representation of a set of vectors is a matrix...but how do I vectorize the following?
function d = cosdist(P1,P2)
    ds = zeros(size(P1,2),1);
    for k = 1:size(P1,2)
        % dot product of the k-th columns divided by the product of their norms
        ds(k) = transpose(P1(:,k))*P2(:,k)/(norm(P1(:,k))*norm(P2(:,k)));
    end
    d = prod(ds);
end
I can of course write
fz = @(v1,v2) transpose(v1)*v2/(norm(v1)*norm(v2));
ds = cellfun(fz,P1,P2);
...so long as I recast my matrices as cell arrays of vectors. Is there a better / entirely numeric way?
Also, will cellfun, arrayfun, etc. take advantage of vector instructions and/or multithreading?
Note (probably superfluous in present company): for column vectors, v1'*v2 == dot(v1,v2), and the former is significantly faster in Matlab.
Since P1 and P2 are of the same size, you can do element-wise operations here. v1'*v2 equals sum(v1.*v2), by the way.
d = prod(sum(P1.*P2,1)./sqrt(sum(P1.^2,1) .* sum(P2.^2,1)));
@Jonas had the right idea, but the normalizing denominator might be incorrect. Try this instead:
%# matrix of column vectors
P1 = rand(5,8);
P2 = rand(5,8);
d = prod( sum(P1.*P2,1) ./ sqrt(sum(P1.^2,1).*sum(P2.^2,1)) );
You can compare this against the results returned by PDIST2 function:
%# pdist2 with 'cosine' returns one minus the cosine similarity for all pairs of vectors
d2 = prod( 1-diag(pdist2(P1',P2','cosine')) );
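Using the same random P1 and P2 as above, you can also check the vectorized expression against the original loop (this assumes the cosdist function from the question is on the path):
d_loop = cosdist(P1, P2)
d_vec  = prod( sum(P1.*P2,1) ./ sqrt(sum(P1.^2,1).*sum(P2.^2,1)) )
%# the two values agree up to floating-point rounding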

How can I apply a function to every row/column of a matrix in MATLAB?

You can apply a function to every item in a vector by saying, for example, v + 1, or you can use the function arrayfun. How can I do it for every row/column of a matrix without using a for loop?
Many built-in operations like sum and prod are already able to operate across rows or columns, so you may be able to refactor the function you are applying to take advantage of this.
If that's not a viable option, one way to do it is to collect the rows or columns into cells using mat2cell or num2cell, then use cellfun to operate on the resulting cell array.
As an example, let's say you want to sum the columns of a matrix M. You can do this simply using sum:
M = magic(10); %# A 10-by-10 matrix
columnSums = sum(M, 1); %# A 1-by-10 vector of sums for each column
And here is how you would do this using the more complicated num2cell/cellfun option:
M = magic(10); %# A 10-by-10 matrix
C = num2cell(M, 1); %# Collect the columns into cells
columnSums = cellfun(@sum, C); %# A 1-by-10 vector of sums for each cell
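The same pattern works for any function, not just built-ins like sum. For example, a small sketch computing the range (max minus min) of each column:
M = magic(10);                                          %# A 10-by-10 matrix
C = num2cell(M, 1);                                     %# Collect the columns into cells
columnRanges = cellfun(@(col) max(col) - min(col), C);  %# A 1-by-10 vector of ranges
%# If the applied function returns a non-scalar, add 'UniformOutput', false.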
You may want the more obscure Matlab function bsxfun. From the Matlab documentation, bsxfun "applies the element-by-element binary operation specified by the function handle fun to arrays A and B, with singleton expansion enabled."
@gnovice stated above that sum and other basic functions already operate on the first non-singleton dimension (i.e., rows if there's more than one row, columns if there's only one row, or higher dimensions if the lower dimensions all have size==1). However, bsxfun works for any function, including (and especially) user-defined functions.
For example, let's say you have a matrix A and a row vector B. E.g., let's say:
A = [1 2 3;
4 5 6;
7 8 9]
B = [0 1 2]
You want a function power_by_col which returns a matrix C containing all the elements of A raised to the power of the corresponding column of B.
From the above example, C is a 3x3 matrix:
C = [1^0 2^1 3^2;
4^0 5^1 6^2;
7^0 8^1 9^2]
i.e.,
C = [1 2 9;
1 5 36;
1 8 81]
You could do this the brute force way using repmat:
C = A.^repmat(B, size(A, 1), 1)
Or you could do this the classy way using bsxfun, which internally takes care of the repmat step:
C = bsxfun(@(x,y) x.^y, A, B)
So bsxfun saves you some steps (you don't need to explicitly calculate the dimensions of A). However, in some informal tests of mine, it turns out that repmat is roughly twice as fast if the function to be applied (like my power function, above) is simple. So you'll need to choose whether you want simplicity or speed.
I can't comment on how efficient this is, but here's a solution:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :))
applyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1))'
% Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @sum;
applyToRows(myFunc, myMx)
Building on Alex's answer, here is a more generic function:
applyToGivenRow = @(func, matrix) @(row) func(matrix(row, :));
newApplyToRows = @(func, matrix) arrayfun(applyToGivenRow(func, matrix), 1:size(matrix,1), 'UniformOutput', false)';
takeAll = @(x) reshape([x{:}], size(x{1},2), size(x,1))';
genericApplyToRows = @(func, matrix) takeAll(newApplyToRows(func, matrix));
Here is a comparison between the two functions:
>> % Example
myMx = [1 2 3; 4 5 6; 7 8 9];
myFunc = @(x) [mean(x), std(x), sum(x), length(x)];
>> genericApplyToRows(myFunc, myMx)
ans =
2 1 6 3
5 1 15 3
8 1 24 3
>> applyToRows(myFunc, myMx)
??? Error using ==> arrayfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
Error in ==> @(func,matrix)arrayfun(applyToGivenRow(func,matrix),1:size(matrix,1))'
For completeness/interest I'd like to add that matlab does have a function that allows you to operate on data per-row rather than per-element. It is called rowfun (http://www.mathworks.se/help/matlab/ref/rowfun.html), but the only "problem" is that it operates on tables (http://www.mathworks.se/help/matlab/ref/table.html) rather than matrices.
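For reference, a minimal sketch of that rowfun route, wrapping a plain numeric matrix in a table first (the variable names are just placeholders):
M = magic(4);
T = array2table(M);                             % one table variable per column of M
S = rowfun(@(varargin) sum([varargin{:}]), T);  % apply a function to each row
rowSums = S{:, 1};                              % extract the result as a numeric column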
Adding to the evolving nature of the answer to this question, starting with r2016b, MATLAB will implicitly expand singleton dimensions, removing the need for bsxfun in many cases.
From the r2016b release notes:
Implicit Expansion: Apply element-wise operations and functions to arrays with automatic expansion of dimensions of length 1
Implicit expansion is a generalization of scalar expansion. With scalar expansion, a scalar expands to be the same size as another array to facilitate element-wise operations. With implicit expansion, the element-wise operators and functions listed here can implicitly expand their inputs to be the same size, as long as the arrays have compatible sizes. Two arrays have compatible sizes if, for every dimension, the dimension sizes of the inputs are either the same or one of them is 1. See Compatible Array Sizes for Basic Operations and Array vs. Matrix Operations for more information.
Element-wise arithmetic operators — +, -, .*, .^, ./, .\
Relational operators — <, <=, >, >=, ==, ~=
Logical operators — &, |, xor
Bit-wise functions — bitand, bitor, bitxor
Elementary math functions — max, min, mod, rem, hypot, atan2, atan2d
For example, you can calculate the mean of each column in a matrix A, and then subtract the vector of mean values from each column with A - mean(A).
Previously, this functionality was available via the bsxfun function. It is now recommended that you replace most uses of bsxfun with direct calls to the functions and operators that support implicit expansion. Compared to using bsxfun, implicit expansion offers faster speed, better memory usage, and improved readability of code.
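For instance (R2016b or later), the bsxfun power example from earlier in this thread can be written with implicit expansion directly:
A = [1 2 3; 4 5 6; 7 8 9];
B = [0 1 2];
C = A.^B;                  % same result as bsxfun(@(x,y) x.^y, A, B)
A_centered = A - mean(A);  % previously bsxfun(@minus, A, mean(A))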
None of the above answers worked "out of the box" for me; however, the following function, obtained by combining the ideas of the other answers, works:
apply_func_2_cols = @(f,M) cell2mat(cellfun(f, num2cell(M,1), 'UniformOutput', 0));
It takes a function f and applies it to every column of the matrix M.
So for example:
f = @(v) [0 1;1 0]*v + [0 0.1]';
apply_func_2_cols(f,[0 0 1 1;0 1 0 1])
ans =
0.00000 1.00000 0.00000 1.00000
0.10000 0.10000 1.10000 1.10000
With recent versions of Matlab, you can use the Table data structure to your advantage. There's even a 'rowfun' operation but I found it easier just to do this:
a = magic(6);
incrementRow = cell2mat(cellfun(@(x) x+1, table2cell(table(a)), 'UniformOutput', 0))
or here's an older one I had that doesn't require tables, for older Matlab versions.
dataBinner = cell2mat(arrayfun(@(x) Binner(a(x,:),2)', 1:size(a,1), 'UniformOutput', 0)')
The accepted answer seems to be to convert to cells first and then use cellfun to operate over all of the cells. I do not know the specific application, but in general I would think using bsxfun to operate over the matrix would be more efficient. Basically bsxfun applies an operation element-by-element across two arrays. So if you wanted to multiply each item in an n x 1 vector by each item in an m x 1 vector to get an n x m array, you could use:
vec1 = [ stuff ]; % n x 1 vector
vec2 = [ stuff ]; % m x 1 vector
result = bsxfun(@times, vec1, vec2.');
This will give you matrix called result wherein the (i, j) entry will be the ith element of vec1 multiplied by the jth element of vec2.
You can use bsxfun for all sorts of built-in functions, and you can declare your own. The documentation has a list of many built-in functions, but basically you can name any function that accepts two arrays (vector or matrix) as arguments and get it to work.
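A small concrete instance of the outer-product example above (arbitrary values):
vec1 = [1; 2; 3];                       % n x 1 (n = 3)
vec2 = [10; 20];                        % m x 1 (m = 2)
result = bsxfun(@times, vec1, vec2.');  % 3 x 2, result(i,j) = vec1(i)*vec2(j)
% result =
%     10    20
%     20    40
%     30    60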
I like splitapply, which allows a function to be applied to the columns of A using splitapply(fun,A,1:size(A,2)).
For example
A = magic(5);
B = splitapply(@(x) x+1, A, 1:size(A,2));
C = splitapply(@std, A, 1:size(A,2));
To apply the function to the rows, you could use
splitapply(fun, A', 1:size(A,1))';
(My source for this solution is here.)
Stumbled upon this question/answer while seeking how to compute the row sums of a matrix.
I would just like to add that Matlab's SUM function actually supports summing along a given dimension of a standard two-dimensional matrix.
So to calculate the column sums do:
colsum = sum(M) % or sum(M, 1)
and for the row sums, simply do
rowsum = sum(M, 2)
My bet is that this is faster than both programming a for loop and converting to cells :)
All this can be found in the matlab help for SUM.
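For instance, with a magic square (every row and column sums to 15):
M = magic(3);       % [8 1 6; 3 5 7; 4 9 2]
colsum = sum(M)     % [15 15 15]
rowsum = sum(M, 2)  % [15; 15; 15]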
If you know the length of your rows, you can do something like this:
a=rand(9,3);
b=rand(9,3);
arrayfun(@(x1,x2,y1,y2,z1,z2) line([x1,x2],[y1,y2],[z1,z2]), a(:,1),b(:,1),a(:,2),b(:,2),a(:,3),b(:,3))