IBM ESSL: DFT - Real to complex & Complex to real - Final array bigger than initial one - double

I have a real 2D double-precision array. I want to perform an FFT on it, some operations on the result, and an inverse FFT. I am using the IBM ESSL library on Blue Gene/Q.
The function DRCFT2 performs the real-to-complex transform (http://www-01.ibm.com/support/knowledgecenter/SSFHY8_5.3.0/com.ibm.cluster.essl.v5r3.essl100.doc/am5gr_hsrcft2.htm?lang=en). The function DCRFT2 performs the complex-to-real transform (http://www-01.ibm.com/support/knowledgecenter/SSFHY8_5.3.0/com.ibm.cluster.essl.v5r3.essl100.doc/am5gr_hscrft2.htm?lang=en).
The initial real array has size (nx,nz). After DRCFT2, the complex array has size (nx/2+1,nz). After DCRFT2, the final real array has size (nx+2,nz).
The initial and final real arrays have different sizes; how can I compare them?
PS: If I put the first real array into a complex one and perform complex-to-complex DFTs (DCFT2), the final result has the same size as the first array and I can compare them. Is there any way to do something similar with DRCFT2 and DCRFT2?

According to the DCRFT2 documentation you link to:
x
is the array X, containing n2 columns of data to be transformed. Due to complex conjugate symmetry, the input consists of only the first ((n1)/2)+1 rows of the array
[...]
On Return
y
[...]
is the array Y, containing n1 rows and n2 columns of results of the real discrete Fourier transform of X.
Here n1 = nx and n2 = nz. In other words, if you pass a complex array of size (nx/2+1,nz) as the input argument to DCRFT2, you should get a real array of size (nx,nz) as output, so you can readily compare your initial and final real arrays. (The (nx+2,nz) you observed is presumably just the declared leading dimension of the storage array, since the nx/2+1 complex values occupy 2*(nx/2+1) = nx+2 doubles; the logical result is still only nx rows, and those are what you compare.)

Related

How to perform operations along a certain dimension of an array?

I have a 3D array containing five 3-by-4 slices, defined as follows:
rng(3372061);
M = randi(100,3,4,5);
I'd like to collect some statistics about the array:
The maximum value in every column.
The mean value in every row.
The standard deviation within each slice.
This is quite straightforward using loops,
sz = size(M);
colMax = zeros(1,4,5);
rowMean = zeros(3,1,5);
sliceSTD = zeros(1,1,5);
for indS = 1:sz(3)
    sl = M(:,:,indS);
    sliceSTD(indS) = std(sl(1:sz(1)*sz(2)));
    for indC = 1:sz(1)
        rowMean(indC,1,indS) = mean(sl(indC,:));
    end
    for indR = 1:sz(2)
        colMax(1,indR,indS) = max(sl(:,indR));
    end
end
But I'm not sure that this is the best way to approach the problem.
A common pattern I noticed in the documentation of max, mean and std is that they allow you to specify an additional dim input. For instance, in max:
M = max(A,[],dim) returns the largest elements along dimension dim. For example, if A is a matrix, then max(A,[],2) is a column vector containing the maximum value of each row.
How can I use this syntax to simplify my code?
Many functions in MATLAB allow you to specify a "dimension to operate over" when it matters for the result of the computation (several common examples are: min, max, sum, prod, mean, std, size, median, prctile, bounds); this is especially important for multidimensional inputs. When the dim input is not specified, MATLAB chooses the dimension on its own, as explained in the documentation; for example, in max:
If A is a vector, then max(A) returns the maximum of A.
If A is a matrix, then max(A) is a row vector containing the maximum value of each column.
If A is a multidimensional array, then max(A) operates along the first array dimension whose size does not equal 1, treating the elements as vectors. The size of this dimension becomes 1 while the sizes of all other dimensions remain the same. If A is an empty array whose first dimension has zero length, then max(A) returns an empty array with the same size as A.
Then, using the ...,dim) syntax we can rewrite the code as follows:
rng(3372061);
M = randi(100,3,4,5);
colMax = max(M,[],1);
rowMean = mean(M,2);
sliceSTD = std(reshape(M,1,[],5),0,2); % we use `reshape` to turn each slice into a vector
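To confirm that the two approaches agree, you can check the vectorized results against the loop versions from the question (assuming the loop outputs were first saved under different names; colMaxLoop, rowMeanLoop and sliceSTDLoop are my names, not from the original code):
isequal(colMax, colMaxLoop)                % expected: true
max(abs(rowMean(:) - rowMeanLoop(:)))      % expected: 0 (up to round-off)
max(abs(sliceSTD(:) - sliceSTDLoop(:)))    % expected: 0 (up to round-off)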
This has several advantages:
The code is easier to understand.
The code is potentially more robust, being able to handle inputs beyond those it was initially designed for.
The code is likely faster.
In conclusion: it is always a good idea to read the documentation of functions you're using, and experiment with different syntaxes, so as not to miss similar opportunities to make your code more succinct.

MATLAB spending an incredible amount of time writing a relatively small matrix

I have a small MATLAB script (included below) for handling data read from a CSV file with two columns and hundreds of thousands of rows. Each entry is a natural number, with zeros occurring only in the second column. This code is taking a truly incredible amount of time (hours) to run what should be achievable in at most a few seconds. The profiler identifies that approximately 100% of the run time is spent writing a matrix of zeros, whose size varies depending on input, but in all uses is smaller than 1000-by-1000.
The code is as follows:
function [data] = DataHandler(D)
n = size(D,1);
s = max(D,1);
data = zeros(s,s);
for i = 1:n
    data(D(i,1),D(i,2)+1) = data(D(i,1),D(i,2)+1) + 1;
end
It's the data = zeros(s,s); line that takes around 100% of the runtime. I can make the code run quickly by just changing out the s's in this line for 1000, which is a sufficient upper bound to ensure it won't run into errors for any of the data I'm looking at.
Obviously there are better ways to do this, but since I just bashed the code together to quickly format some data, I wasn't too concerned. As I said, I fixed it by just replacing s with 1000 for my purposes, but I'm perplexed as to why writing that matrix would bog MATLAB down for several hours. The new code runs instantaneously.
I'd be very interested if anyone has seen this kind of behaviour before, or knows why this would be happening. It's a little disconcerting, and it would be good to be able to be confident that I can initialize matrices freely without killing MATLAB.
Your call to zeros is incorrect. Looking at your code, D is an n-by-2 array, but your call s = max(D,1) actually generates another n-by-2 array. Consulting the documentation for max, this is what happens when you call max the way you did:
C = max(A,B) returns an array the same size as A and B with the largest elements taken from A or B. Either the dimensions of A and B are the same, or one can be a scalar.
Therefore, because you used max(D,1), you are essentially comparing every value in D with the value 1, so what you get back is effectively just a copy of D. Using this as input to zeros has rather undefined behaviour: each dimension argument to zeros must be a scalar, yet your call produces a matrix. What appears to happen is that for each row of s, a temporary zeros matrix of that size is allocated and then tossed; only the dimensions given by the last row of s are kept. Because your matrix D is very large, this is probably why the profiler hangs here at 100% utilization.
What I believe you intended should have been:
s = max(D(:));
This finds the overall maximum of the matrix D by unrolling D into a single vector and finding the overall maximum. If you do this, your code should run faster.
As a side note, this post may interest you:
Faster way to initialize arrays via empty matrix multiplication? (Matlab)
It was shown in that post that zeros(n,n) is in fact slow and that there are several neat tricks for initializing an array of zeros. One way is empty-matrix multiplication:
data = zeros(n,0)*zeros(0,n);
One of my personal favourites is that if you assume that data was not declared / initialized, you can do:
data(n,n) = 0;
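If you want to see the difference on your own machine, the first two variants are easy to time with timeit, which ships with MATLAB R2013b and later (exact numbers vary by version and hardware):
n = 1000;
t1 = timeit(@() zeros(n,n));              % direct allocation
t2 = timeit(@() zeros(n,0)*zeros(0,n));   % empty-matrix multiplication
% The growth trick (data(n,n) = 0) assigns into a variable, so timing it
% requires wrapping it in a small helper function.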
If I can also comment: that for loop is quite inefficient. What you are doing is calculating a 2D histogram / accumulation of data. You can replace the for loop with a much more efficient accumarray call, which also avoids allocating an array of zeros; accumarray does that under the hood for you.
As such, your code would basically become this:
function [data] = DataHandler(D)
data = accumarray([D(:,1) D(:,2)+1], 1);
accumarray in this case takes each pair of row and column coordinates, stored in D(i,1) and D(i,2)+1 for i = 1, 2, ..., size(D,1), groups together all pairs that share the same coordinates, and adds up the occurrences in each group. The output at each 2D bin is therefore the total tally for the corresponding row and column coordinate.
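For instance, with a hypothetical five-row D (my numbers, not the asker's data):
D = [1 0; 1 0; 2 1; 3 0; 2 1];
data = accumarray([D(:,1) D(:,2)+1], 1)
% data is 3-by-2 with data(1,1) = 2 (two rows equal to [1 0]),
% data(2,2) = 2 (two rows equal to [2 1]), data(3,1) = 1, zeros elsewhere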

Accumulating votes in MATLAB

First, a little background to my problem:
I am building an object recognition system using a geometric hashing technique. My hash table is indexed by the affine coordinates of points in a model determined by a basis triplet (allowing an affine-invariant representation of any learned object). Each hash table entry is a structure:
entry = struct('ModelName', modelName, 'BasisTriplet', [a; b; c]);
Now, an arbitrary basis triplet is extracted from image points, then the affine coordinates of all other points are calculated relative to this basis and used as indices into the hash table. For each entry that exists in the corresponding hash bin, a vote is cast for the model name and basis triplet.
After checking all points, the models and their corresponding basis triplets with a sufficiently high number of votes are taken as candidates for an object and a further verification step is performed.
I am unsure, however, what is the most efficient method of casting these votes. Currently I am using a dynamic cell array: each time a new model and basis triplet pair is voted for, an additional row is added to the array; otherwise the vote count of the existing candidate is incremented.
for i = 1:length(keylist)
    % Where keylist is an array of indices to the relevant keys to look up
    % and xkeys is the n-by-2 array of all of the keys in the hash table
    % Obtain this hash bin
    bin = hashTable(xkeys(keylist(i), 1), xkeys(keylist(i), 2));
    % Vote for every entry in the bin
    for entry = 1:length(bin)
        % Find the index of this model/basis in the voting accumulator
        indAcc = find( strcmp(bin(entry).ModelName, v_models(:, 1)) & ...
                       myIsEqual(v_basisTriplets, bin(entry).BasisTriplet) );
        if isempty(indAcc)
            % If no entry exists yet, add a new one
            v_models = [v_models; {bin(entry).ModelName, 1}];
            v_basisTriplets = cat(3, v_basisTriplets, bin(entry).BasisTriplet);
        else
            % Otherwise increment the count
            v_models{indAcc, 2} = v_models{indAcc, 2} + 1;
        end
    end
end
There is a separate 3D array (v_basisTriplets) in which the 2D basis arrays are concatenated and indexed along the 3rd dimension. I did have these basis triplets in the cell array as well, but I had difficulty searching the cell array for a 2D array. The myIsEqual function just steps through the third dimension and checks whether the 2D array at each index is equal, returning a 1D vector of which arrays matched, for use in find.
function ind = myIsEqual(vec3D, A)
ind = zeros(size(vec3D, 3), 1);
for i = 1:size(vec3D, 3)
    ind(i) = isequal(vec3D(:, :, i), A);
end
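(As an aside, the loop in myIsEqual can be collapsed into a single vectorized comparison; this sketch assumes every slice of vec3D has the same size as A, which isequal itself does not require:)
% Compare A against every slice at once, then require all elements of a
% slice to match; squeeze turns the 1-by-1-by-k result into a k-by-1 vector
ind = squeeze(all(all(bsxfun(@eq, vec3D, A), 1), 2));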
This is most certainly not the most efficient way. Immediately I can see that it would be more efficient to initialize the arrays that store the votes beforehand. However, is there a better way in general of going about this? I need to find the most efficient and elegant way of voting, as there are usually hundreds of points to check and time is valuable.
Thanks
If you are only considering time efficiency, consider using a 4-D matrix.
The dimensions would be:
Model
coordinateA
coordinateB
coordinateC
Depending on the ratio between this matrix size and the number of points that you check, consider using a sparse matrix.
Note that, especially if you can't use a sparse array, this method can be rather memory-inefficient and may therefore be infeasible. A sketch of both the dense and sparse variants follows.
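A minimal sketch of this idea, assuming the models and quantized basis points have already been mapped to small integer IDs (all names and sizes below are mine, not the question's):
% Hypothetical sizes: nModels models, each basis point quantized to one
% of nBins IDs
nModels = 20;  nBins = 50;
votes = zeros(nModels, nBins, nBins, nBins, 'uint32');   % dense 4-D tally
% Casting one vote for a hypothetical (model, basis-point) combination:
modelIdx = 3;  aIdx = 10;  bIdx = 20;  cIdx = 30;
votes(modelIdx, aIdx, bIdx, cIdx) = votes(modelIdx, aIdx, bIdx, cIdx) + 1;
% MATLAB sparse arrays are 2-D only, so a memory-leaner variant flattens
% the four subscripts into one linear index into a sparse column vector:
lin = sub2ind([nModels nBins nBins nBins], modelIdx, aIdx, bIdx, cIdx);
sparseVotes = sparse(nModels*nBins^3, 1);
sparseVotes(lin) = sparseVotes(lin) + 1;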

How can I convert double values into integers for indices to create a sparse matrix in MATLAB?

I am using MATLAB to load a text file that I want to make a sparse matrix out of. The columns in the text file hold the row and column indices, and are of type double. I assumed they needed to be integers to be usable as row and column indices, so I tried using uint8, int32 and int64 to convert them before building the sparse matrix, like so:
??? Undefined function or method 'sparse' for input
arguments of type 'int64'.
Error in ==> make_network at 5
graph =sparse(int64(listedges(:,1)),int64(listedges(:,2)),ones(size(listedges,1),1));
How can I convert the text file entries loaded as double so as to be used by the sparse function?
There is no need for any conversion; keep the indices as doubles:
r = round(listedges);
graph = sparse(r(:, 1), r(:, 2), ones(size(listedges, 1), 1));
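For example, with a made-up three-edge list (the values are doubles, but integer-valued):
listedges = [1 2; 2 3; 1 3];
r = round(listedges);
graph = sparse(r(:, 1), r(:, 2), ones(size(listedges, 1), 1));
full(graph)   % 3-by-3 matrix with ones at (1,2), (2,3) and (1,3)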
There are two reasons why one might want to convert to int:
The first: because you have data type restrictions.
The second: your inputs may contain fractions and are unfit to be used as integers.
If you want to convert because of the first reason, then there's no need to: MATLAB works with the double type by default and happily treats doubles as integers where needed (for example, when used as indices).
However, if you want to convert to integers because of the second reason (the numbers may be fractional), then you should use round(), ceil() or floor() - whichever suits your purpose best.
There is another very good reason (and really the primary one) why one may want to convert indices of any structure (array, matrix, etc.) to int.
If you have ever programmed in a language other than MATLAB, you will be familiar with wanting to save memory, especially with large structures. Being able to address elements in such structures with indices smaller than double is key.
One major issue with MATLAB is the inability to control the sizes of multidimensional structures this finely. There are sparse matrix solutions, but those are not adequate for many cases. Cell arrays will preserve the data types upon access, but the storage for every element in a cell array is extremely wasteful (113 bytes for a single uint8 encapsulated in a cell).

What's an appropriate data structure for a matrix with random variable entries?

I'm currently working in an area related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this, let me say I have the following matrix:
[a b; c d]
I want to find a data structure that will allow for a, b, c, d to either be real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2 but let d be a normally distributed random variable with mean 0 and standard deviation 1.
The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF and then spit out an actual matrix.
I have several ideas to do this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do this. In this application, it's important that the structure is as "lean" as possible since I will be working with very very large matrices and memory will be an issue.
EDIT #1:
Thank you all for the feedback. I have decided to use a cell structure and store the random variables as function handles. For large-scale applications, I have also decided to store the locations of the random variables, to save time during the "evaluation" part.
One solution is to create your matrix initially as a cell array containing both numeric values and function handles to functions designed to generate a value for that entry. For your example, you could do the following:
generatorMatrix = {1 -1; 2 @randn};
Then you could create a function that takes a matrix of the above form, evaluates the cells containing function handles, then combines the results with the numeric cell entries to create a numeric matrix to use for further calculations:
function numMatrix = create_matrix(generatorMatrix)
    index = cellfun(@(c) isa(c,'function_handle'),...  % Find function handles
                    generatorMatrix);
    generatorMatrix(index) = cellfun(@feval,...         % Evaluate functions
                                     generatorMatrix(index),...
                                     'UniformOutput',false);
    numMatrix = cell2mat(generatorMatrix);  % Change from cell to numeric matrix
end
Some additional things you can do would be to use anonymous functions to do more complicated things with built-in functions or create cell entries of varying size. This is illustrated by the following sample matrix, which can be used to create a matrix with the first row containing a 5 followed by 9 ones and the other 9 rows containing a 1 followed by 9 numbers drawn from a uniform distribution between 5 and 10:
generatorMatrix = {5 ones(1,9); ones(9,1) @() 5*rand(9)+5};
And each time this matrix is passed to create_matrix it will create a new 10-by-10 matrix where the 9-by-9 submatrix will contain a different set of random values.
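For instance, with the first generatorMatrix above, repeated calls keep the fixed entries and redraw the random one (the sample value shown is of course arbitrary):
A = create_matrix({1 -1; 2 @randn})   % e.g. [1 -1; 2 0.5377]
B = create_matrix({1 -1; 2 @randn})   % same fixed entries, a fresh draw at (2,2)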
An alternative solution...
If your matrix can be easily broken into blocks of submatrices (as in the second example above) then using a cell array to store numeric values and function handles may be your best option.
However, if the random values are single elements scattered sparsely throughout the entire matrix, then a variation similar to what user57368 suggested may work better. You could store your matrix data in three parts: a numeric matrix with placeholders (such as NaN) where the randomly-generated values will go, an index vector containing linear indices of the positions of the randomly-generated values, and a cell array of the same length as the index vector containing function handles for the functions to be used to generate the random values. To make things easier, you can even store these three pieces of data in a structure.
As an example, the following defines a 3-by-3 matrix with 3 random values stored in indices 2, 4, and 9 and drawn respectively from a normal distribution, a uniform distribution from 5 to 10, and an exponential distribution:
matData = struct('numMatrix',[1 nan 3; nan 2 4; 0 5 nan],...
                 'randIndex',[2 4 9],...
                 'randFcns',{{@randn , @() 5*rand+5 , @() -log(rand)/2}});
And you can define a new create_matrix function to easily create a matrix from this data:
function numMatrix = create_matrix(matData)
    numMatrix = matData.numMatrix;
    numMatrix(matData.randIndex) = cellfun(@feval, matData.randFcns);
end
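As with the cell-array version, each call realizes a new numeric matrix:
M1 = create_matrix(matData);   % 3-by-3 numeric; the NaNs are replaced by draws
M2 = create_matrix(matData);   % same fixed entries, fresh random values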
If you were using NumPy, then masked arrays would be the obvious place to start, but I don't know of any equivalent in MATLAB. Cell arrays might not be compact enough, and if you did use a cell array, then you would have to come up with an efficient way to find the non-real entries and replace them with a sample from the right distribution.
Try using a regular or sparse matrix to hold the real values, and leave it at zero wherever you want a random variable. Then, alongside that, store a sparse matrix of the same shape whose non-zero entries correspond to the random variables in your matrix. If you want, the value of the entry in the second matrix can indicate which distribution to use (i.e., 1 for uniform, 2 for normal, etc.).
Whenever you want to get a purely real matrix to work with, you iterate over the non-zero values in the second matrix to convert them to samples, and then add that matrix to your first.
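A minimal sketch of this two-matrix scheme (the values and distribution codes are my own illustration, not the answerer's):
% Real values, with 0 acting as a placeholder wherever a random variable
% goes; vals(3,1) is a genuine zero, which is fine since dists is zero there
vals  = sparse([1 0 3; 0 2 4; 0 5 0]);
% Distribution codes at the random positions: 1 = uniform(0,1), 2 = normal
dists = sparse([0 1 0; 2 0 0; 0 0 1]);
[i, j, d] = find(dists);           % positions and codes of the random entries
samples = zeros(size(d));
samples(d == 1) = rand(nnz(d == 1), 1);
samples(d == 2) = randn(nnz(d == 2), 1);
% Realized matrix: the fixed values plus the freshly drawn samples
realized = full(vals) + full(sparse(i, j, samples, size(vals,1), size(vals,2)));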