In KDB, is there an equivalent of numpy's array.shape? - kdb

I'm trying to work with matrices in KDB, and am frequently having to query their dimensions.
Currently I'm doing count and count flip, but this is verbose and repetitive. Is there a more elegant way to query the dimensions of an n-D matrix?

Assuming that we are dealing with a well-formed matrix, a function that achieves your objective is:
shape:{:(count x;count x[0]);};
If you use it very often, you can save it in the startup file q.q in the q directory, so it is loaded on launch and readily available.
Clearly, flipping the entire matrix is more expensive in terms of time:
t:(100;100)#til 10000
q)\t:1000000 {:(count x; count flip x);}[t]
33808
q)\t:1000000 {:(count x;count x[0]);}[t]
282
Having said that, the flip method guarantees that the matrix is well formed; a malformed matrix will not be caught by the proposed method:
q)t2:((2;3;4);(2;3))
q){:((#)x;(#)x[0]);}[t2]
2 3
q){:(count x; count flip x);}[t2]
'length
[1] {:(count x; count flip x);}

Related

(MATLAB matrix operation) Is it possible to get a group of values from a matrix without a loop?

I'm currently working on implementing a gradient check function, which requires getting values at certain indices from the result matrix. Could someone tell me how to get a group of values from the matrix?
To be specific, for a result matrix res of size M x N, I need to get the elements res(3,1), res(4,2), res(1,3), res(2,4)...
In my case, M is the dimension and N is the batch size, and there's a label array of size 1 x batch_size, [3 4 1 2...]. So the desired values are res(label(:),1:batch_size). Since I'm trying to practice vectorized programming, it's better not to use a loop. Could someone tell me how to get a group of values without an iteration?
Cheers.
--------------------------UPDATE----------------------------------------------
The only idea I found is to first build a 'mask matrix' and then do element-wise multiplication with the original result matrix (technically called the 'Hadamard product', see Wikipedia). After that, just extract the non-zero elements and sum them up. The MATLAB code should look like:
temp=Mask.*res;
desired_res=temp(temp~=0); %Note: the temp(temp~=0) extract non-zero elements in a 'column' fashion: it searches temp matrix column by column then put the non-zero number into container 'desired_res'.
In my case, what I want to do next is simply sum(desired_res), so I don't need to consider the order of the non-zero elements in 'desired_res'.
Based on the idea above, creating the mask matrix is the key step. There are two methods to do this job.
The code is shown below. In my case, the accumarray function adds '1' at certain locations (which are stored in the matrix 'subs') and '0' everywhere else. This gives you a mask matrix of size [row column]. The usage of full(sparse()) is similar. I made some comparisons of the two methods (repeated around 10 times); it turns out full(sparse()) is faster, and their time costs are on the order of 10^-4 s. A small difference, but in a large-scale experiment it matters. One benefit of using accumarray is that it lets you define the output size, while full(sparse()) does not: full(sparse(i,j,1)) creates a matrix of size [max(i), max(j)]. In my case this is sufficient for my requirement, and I only know a few of their uses. If you find out more, please share with us. Thanks.
Detailed descriptions of those two functions can be found in MATLAB's official documentation: accumarray and full, sparse.
% assume we have a label vector
test_labels=ones(10000,1);
% method one, accumarray(subs,1,[row column])
tic
subs=zeros(10000,2);
subs(:,1)=test_labels;
subs(:,2)=1:10000;
k1=accumarray(subs,1,[10, 10000]);
t1=toc % to compare with method two to check which one is faster
%method two: full(sparse(),1)
tic
k2=full(sparse(test_labels,1:10000,1));
t2=toc
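Putting the two pieces together, here is a minimal end-to-end sketch of the mask idea described above (res here is just random dummy data of a matching 10 x 10000 size, assumed purely for illustration):
% apply the mask built above (k1, from accumarray) to a dummy result matrix
res = rand(10, 10000);            % placeholder data, same size as the mask k1
temp = k1 .* res;                 % Hadamard product with the mask
desired_res = temp(temp ~= 0);    % extract non-zero elements (column-wise order)
total = sum(desired_res)          % the sum the gradient check ultimately needs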

MATLAB spending an incredible amount of time writing a relatively small matrix

I have a small MATLAB script (included below) for handling data read from a CSV file with two columns and hundreds of thousands of rows. Each entry is a natural number, with zeros only occurring in the second column. This code is taking a truly incredible amount of time (hours) to run what should be achievable in at most a few seconds. The profiler identifies that approximately 100% of the run time is spent writing a matrix of zeros, whose size varies depending on input, but in all cases is smaller than 1000x1000.
The code is as follows:
function [data] = DataHandler(D)
n = size(D,1);
s = max(D,1);
data = zeros(s,s);
for i = 1:n
data(D(i,1),D(i,2)+1) = data(D(i,1),D(i,2)+1) + 1;
end
It's the data = zeros(s,s); line that takes around 100% of the runtime. I can make the code run quickly by just changing out the s's in this line for 1000, which is a sufficient upper bound to ensure it won't run into errors for any of the data I'm looking at.
Obviously there are better ways to do this, but since I just bashed the code together to quickly format some data, I wasn't too concerned. As I said, I fixed it by just replacing s with 1000 for my purposes, but I'm perplexed as to why writing that matrix would bog MATLAB down for several hours. The new code runs instantaneously.
I'd be very interested if anyone has seen this kind of behaviour before, or knows why this would be happening. It's a little disconcerting, and it would be good to be confident that I can initialize matrices freely without killing MATLAB.
Your call to zeros is incorrect. Looking at your code, D appears to be an n x 2 array. However, your call s = max(D,1) actually generates another n x 2 array. Consulting the documentation for max, this is what happens when you call max the way you did:
C = max(A,B) returns an array the same size as A and B with the largest elements taken from A or B. Either the dimensions of A and B are the same, or one can be a scalar.
Therefore, because you used max(D,1), you are essentially comparing every value in D with the value 1, so what you actually get is just a copy of D in the end. Using this as input to zeros has rather undefined behaviour. What will actually happen is that for each row of s, it will allocate a temporary zeros matrix of that size and toss the temporary result; only the dimensions given by the last row of s are what get recorded. Because you have a very large matrix D, this is probably why the profiler hangs here at 100% utilization. Each parameter to zeros must be a scalar, yet your call that produces s yields a matrix.
What I believe you intended should have been:
s = max(D(:));
This finds the overall maximum of the matrix D by unrolling D into a single vector and finding the overall maximum. If you do this, your code should run faster.
As a side note, this post may interest you:
Faster way to initialize arrays via empty matrix multiplication? (Matlab)
It was shown in this post that doing zeros(n,n) is in fact slow and there are several neat tricks for initializing an array of zeros. One way is to accomplish this by empty matrix multiplication:
data = zeros(n,0)*zeros(0,n);
One of my personal favourites is that if you assume that data was not declared / initialized, you can do:
data(n,n) = 0;
If I can also comment, that for loop is quite inefficient. What you are doing is calculating a 2D histogram / accumulation of data. You can replace that for loop with a more efficient accumarray call. This also avoids allocating an array of zeros yourself; accumarray will do that under the hood for you.
As such, your code would basically become this:
function [data] = DataHandler(D)
data = accumarray([D(:,1) D(:,2)+1], 1);
accumarray in this case takes all pairs of row and column coordinates, stored in D(i,1) and D(i,2)+1 for i = 1, 2, ..., size(D,1), and places everything that shares the same row and column coordinates into a separate 2D bin. It then adds up all of the occurrences, so the output at each 2D bin gives you the total tally of values that mapped to that row and column coordinate.
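As a quick sanity check of that description, here is a hypothetical toy D (not the OP's data, just three rows made up for illustration) and the tally accumarray produces:
% toy example: each row of D is a coordinate pair, binned at (D(i,1), D(i,2)+1)
D = [1 0; 1 0; 2 3];
data = accumarray([D(:,1) D(:,2)+1], 1)
% data =
%      2     0     0     0
%      0     0     0     1
% bin (1,1) was hit by the two [1 0] rows and bin (2,4) by the single [2 3] row.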

matlab percentage change between cells

I'm a newbie to MATLAB and just stumped on how to do a simple task that can easily be performed in Excel. I'm simply trying to get the percent change between cells in a matrix. I would like to create a for loop for this task. The data is set up in the following format:
DAY1 DAY2 DAY3...DAY 100
SUBJECT RESULTS
I could only manage to get the percent change between two data points. How would I do it across multiple days and multiple subjects? And please provide an explanation.
Thanks a bunch
For example, for day 1: subject1 (result = 1), subject2 (result = 4), subject3 (result = 5); day 2: subject1 (result = 2), subject2 (result = 8), subject3 (result = 10); day 3: subject1 (result = 1), subject2 (result = 4), subject3 (result = 5).
I want the percent change, so the output will be: day 2: subject1 (result = 100%), subject2 (result = 100%), subject3 (result = 100%); day 3: subject1 (result = 50%), subject2 (result = 50%), subject3 (result = 50%).
updated:
Hi, thanks for responding guys. Sorry for the confusion. zebediah49 is pretty close to what I'm looking for. My data is, for example, a 10 x 10 double. I merely want to get the percentage change from column to column, for example the percentage change across rows 1 through 10 on all columns (columns 2:10). I would like the code to work for any matrix dimensions (e.g., a 1000 x 1000 double). zebediah49, could you explain the code you posted? Thanks.
updated2:
zebediah49,
(data(1:end,100)- data(1:end,99))./data(1:end,99)
output=[data(:,2:end)-data(:,1:end-1)]./data(:,1:end-1)*100;
Observing the code above, how would I go about modifying it so that column 100 is used as the index against all of the other columns (1-99)? If I change the code to the following:
(data(1:end,100)- data(1:end,:))./data(1:end,:)
MATLAB fails because the matrix dimensions don't agree. How would I go about implementing that?
UPDATE 3
zebediah49,
Worked perfectly!!! Originally I created a new variable for the index and used repmat to match the matrices, which was not a good idea; it took forever to replicate when dealing with large matrices.
Thanks for your contribution once again.
Thanks Chris for your contribution too!!! I was looking more for how to address and manipulate arrays within a matrix.
It's matlab; you don't actually want a loop.
output=input(2:end,:)./input(1:end-1,:)*100;
will probably do roughly what you want. Since you didn't give anything about your matlab structure, you may have to change index order, etc. in order to make it work.
If it's not obvious, that line defines output as a matrix consisting of the input matrix, divided by the input matrix shifted right by one element. The ./ operator is important, because it means that you will divide each element by its corresponding one, as opposed to doing matrix division.
EDIT: further explanation was requested:
I assumed you wanted % change of the form 1->1->2->3->1 to be 100%, 200%, 150%, 33%.
The other form can be obtained by subtracting 100%.
input(2:end,:) will grab a sub-matrix, where the first row is cut off. (I put the time along the first dimension... if you want it the other way it would be input(:,2:end).)
MATLAB is 1-indexed, and lets you use the special value end to refer to the last element.
Thus, end-1 is the second-last.
The point here is that element (i) of this matrix is element (i+1) of the original.
input(1:end-1,:), like the above, will also grab a sub-matrix, except that it's missing the last row.
I then divide element (i+1) by element (i). Because of how I picked out the sub-matrices, they now line up.
As a semi-graphical demonstration, using my above numbers:
input: [1 1 2 3 1]
input(2:end): [1 2 3 1]
input(1:end-1): [1 1 2 3]
When I do the division, it's first/first, second/second, etc.
input(2:end)./input(1:end-1):
[1 2 3 1 ]
./ [1 1 2 3 ]
---------------------
== [1.0 2.0 1.5 0.3]
The extra index, set to (:), means that the procedure is carried out across all of the other dimension.
EDIT2: Revised question: How do I exclude a row, and keep it as an index.
You say you tried something to the effect of (data(1:end,100)- data(1:end,:))./data(1:end,:). MATLAB will not like this, because the element-by-element operators need both operands to be the same size. If you wanted it to only work on the 100th column, setting the second index to be 100 instead of : would do that.
I would, instead, suggest setting the first to be the index, and the rest to be data.
Thus, the data is processed by cutting off the first:
output=[data(2:end,2:end)-data(2:end,1:end-1)]./data(2:end,1:end-1)*100;
Note that a bare (:) is shorthand for (1:end), but MATLAB has no shorthand for a range like 2:end; it has to be written out in full.
However, you will probably still want the indices back, in which case you will need to put that index row back on:
output=[data(1,1:end-1); (data(2:end,2:end)-data(2:end,1:end-1))./data(2:end,1:end-1)*100];
This is probably not how you should be doing it though-- keep data in one matrix, and time or whatever else in a separate array. That makes it much easier to do stuff like this to data, without having to worry about excluding time. It's especially nice when graphing.
Oh, and one more thing:
(data(:,2:end)-data(:,1:end-1))./data(:,1:end-1)*100;
is identically equivalent to
data(:,2:end)./data(:,1:end-1)*100-100;
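As a quick numerical check of that equivalence (using the 1 1 2 3 1 series from above, laid out here as a row vector purely for illustration):
data = [1 1 2 3 1];                                    % example series from above
a = (data(:,2:end)-data(:,1:end-1))./data(:,1:end-1)*100
b = data(:,2:end)./data(:,1:end-1)*100-100
% both a and b come out as: 0   100    50   -66.667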
Assuming zebediah49 guessed right in the comment above and you want
1 4 5
2 8 10
1 4 5
to turn into
1 1 1
-.5 -.5 -.5
then try this:
data = [1,4,5; 2,8,10; 1,4,5];
changes_absolute = diff(data);
changes_absolute./data(1:end-1,:)
ans =
1.0000 1.0000 1.0000
-0.5000 -0.5000 -0.5000
You don't need the intermediate variable; you can directly write diff(data)./data(1:end-1,:). I just thought the above might be easier to read. Getting from that result to percentage numbers is left as an exercise for the reader. :-)
Oh, and if you really want 50%, not -50%, just use abs around the final line.

For loop inside another for loop to make new set of vectors

I would like to use a for loop within a for loop (I think) to produce a number of vectors which I can then use separately with polyfit.
I have a 768x768 matrix and I have split this into 768 separate cell vectors. However, I want to split each 1x768 vector into sections of 16 points, i.e. 48 new vectors which are 16 values in length. I then want to do some curve fitting with this information.
I want to name each of the 48 vectors something different, and I want to do this for each of the 768 columns. I can easily do either separately, but I was hoping there was a way to combine them. I tried to do this as a for statement within a for statement, but it doesn't work. I wondered if anyone could give me some hints on how to produce what I want. I have attached the code.
Qnew is my 768*768 matrix with all the points.
N1=768;
x=cell(N,1);
for ii=1:N1;
x{ii}=Qnew(1:N1,ii);
end
for iii = 1:768;
x2{iii}=x{iii};
for iv = 1:39
N2=20;
x3{iii}=x2{iii}(1,(1+N2*iv:N2+N2*iv));
%Gx{iv}=(x3{iv});
end
end
Use a normal 2D matrix for your inner split. Why? It's easy to reshape, and many of the fitting operations you'll likely use will operate on columns of a matrix already.
for ii=1:N1
x{ii} = reshape(Qnew(:, ii), 16, 48);
end
Now x{ii} is a 2D matrix, size 16x48. If you want to address the jj'th split window separately, you can say x{ii}(:, jj). But often you won't have to. If, for example, you want the mean of each window, you can just say mean(x{ii}), which will take the mean of each column, and give you a 48-element row vector back out.
Extra reference for the unasked question: If you ever want overlapping windows of a vector instead of abutting, see buffer in the signal processing toolbox.
Editing my answer:
Going one step further, a 3D matrix is probably the best representation for equal-sized vectors. Remembering that reshape() reads out columnwise, and fills the new matrix columnwise, this can be done with a single reshape:
x = reshape(Qnew, 16, 48, N1);
x is now a 16x48x768 3D array, and the jj'th window of the ii'th vector is now x(:, jj, ii).
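Since the end goal is curve fitting, here is a minimal sketch of fitting one window of one vector with polyfit (the quadratic degree and the 1:16 x-axis are assumptions made purely for illustration):
x = reshape(Qnew, 16, 48, N1);    % 16x48x768, as above
ii = 1; jj = 1;                   % pick the first vector and its first window
t = (1:16)';                      % assumed x-axis: sample positions within the window
y = x(:, jj, ii);                 % the 16-point window
p = polyfit(t, y, 2);             % fit a quadratic; the degree is an assumption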

matlab: out of memory when concatenating sparse matrix with a vector

I create a 3560 x 3560 sparse matrix, A. I then create two 1 X 3560 vectors, S and T.
When I run the following code (which concatenates S and T as rows in A and afterwards also as columns in A)
A=[A;S;T];
S=[S 0 0];
T=[T 0 0];
A=[A, S', T'];
The last line produces an out of memory error.
I guess I am running out of memory since I have other variables stored, but it seems odd to me that adding two 3560 vectors would be the point in which I am exactly hitting my limit, so I think (or more accurately, wishfully think) that somehow the concatenations aren't done in a smart way...
Am I right or is there no hope (except for optimizing other pieces in my code)?
EDIT:
At the request of yoda, I am posting the full code.
Basically what it does is get a N X N matrix of edge weights between the nodes of a graph, and adds two vectors that will act as a source and sink in a max flow computation.
nbr_sim(nbr_sim<0.8)=0;
A=sparse(size(nbr_sim,1)+2,size(nbr_sim,2)+2);
nelements=size(nbr_sim,1);
A(nbr_sim>0)=nbr_sim(nbr_sim>0);
clear nbr_sim;
S=abs([1 0 0]*n);
T=abs([0 1 0]*n);
A(1:nelements,end-1)=S';
A(1:nelements,end)=T';
A(end-1,1:nelements)=S;
A(end,1:nelements)=T;
EDIT:
As you say you have used considerable resources before this operation, it is entirely likely that you are close to the tipping point, when MATLAB gives you an out of memory error.
Remember that when you grow matrices on the fly either by concatenating or by indexing out of range, MATLAB creates a copy of the matrix in memory. So you're not just using up resources for that extra row, but for a copy of that entire matrix!
Here's an example on my machine where I try to grow a vector that's large enough to tip it over the memory limit.
clear
a=rand(2*10^9+1,1); %#create a large array
whos a
Name Size Bytes Class Attributes
a 2000000001x1 16000000008 double
%#Now repeat the same, but by growing the array by one element
clear
a=rand(2*10^9,1);
a=[a;0];
??? Error using ==> vertcat
Out of memory. Type HELP MEMORY for your options.
So you see that although MATLAB can create a matrix with 2*10^9+1 elements in one go, when you try to create an array of the same size by appending a single element to a 2*10^9 element vector, it runs out of memory.
If S and T are column vectors as you say, then A=[A;S;T] should give you an error:
??? Error using ==> vertcat
CAT arguments dimensions are not consistent.
So you must be doing something else. Concatenating will not change the sparseness of the matrix, i.e., it won't switch from sparse to full.
A=sprand(3560,3560,0.01); %#test matrices
S=rand(3560,1);
T=rand(3560,1);
B=[A,S,T]; %#join the columns
issparse(B)
ans =
1
Moreover, a 3560x3560 matrix of doubles is only ~97 MB, which shouldn't give you an "out of memory" error...
When dealing with large matrices:
For a full matrix, you'd better preallocate memory to avoid memory copies while extending (see why).
The sparse case is more complicated, and can be even less efficient than extending a full matrix, because the elements are stored in a compressed manner. Setting an "inner" entry may cause large memory rewrites (have a look here).
So you'd better assemble all the entries in advance and create the matrix with a single sparse() call, rather than calling sparse() and then padding the data; a sketch of this is below.
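A minimal sketch of that approach for the graph in the question (variable names follow the question's code; it assumes nbr_sim is the n x n weight matrix and S, T are 1 x n source/sink weight vectors):
nbr_sim(nbr_sim < 0.8) = 0;            % threshold, as in the question
n = size(nbr_sim, 1);
[i, j, v] = find(nbr_sim);             % triplets of the surviving edge weights
src = n + 1; snk = n + 2;              % indices of the two extra nodes
I = [i; (1:n)'; (1:n)'; src*ones(n,1); snk*ones(n,1)];
J = [j; src*ones(n,1); snk*ones(n,1); (1:n)'; (1:n)'];
V = [v; S(:); T(:); S(:); T(:)];
A = sparse(I, J, V, n+2, n+2);         % one allocation, no incremental padding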