space complexity of multi-dimensional hash

I would like to store an input of tab-separated values where, say, C1, C2, C3 and C4 represent the columns of the data and there are N rows. That way I could do lookups in the hash to see whether some given values for C1, C2, C3, C4 exist. Someone suggested to me that, in the worst case, the space complexity of this is N^4. I would like help formulating a clear explanation of why that is not true.

The other person is imagining storing the full N x N x N x N grid of all possible (C1, C2, C3, C4) combinations, which would indeed take N^4 points.
But you only have N rows, so you're just storing N keys in a hash. And a hash with N data points typically takes O(N) space. (Technically it takes the size of the hash table plus the space for the data, but the hash table is usually dynamically sized to stay within the same order of magnitude as the data set.)
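A minimal sketch in Python of the lookup structure described above (the file name and the simple tab-splitting are assumptions):

seen = set()
with open("data.tsv") as f:                  # hypothetical input file
    for line in f:
        c1, c2, c3, c4 = line.rstrip("\n").split("\t")
        seen.add((c1, c2, c3, c4))           # one tuple key per row

def exists(c1, c2, c3, c4):
    return (c1, c2, c3, c4) in seen          # expected O(1) lookup

Each row contributes exactly one tuple key, so memory grows linearly with N regardless of how many columns make up the key.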

Related

Spark: Searching for matches in large DataSets

I need to count how many values of one of the columns of df1 are present in one of the columns of df2. (I just need the number of matched values)
I wouldn't be asking this question if efficiency wasn't such a big concern:
df1 contains 100,000,000+ records
df2 contains 1,000,000,000+ records
Just an off-the-top-of-my-head idea for the case where intersection won't cut it:
For the datatype that is contained in the columns, find two hash functions h1, h2 such that
h1 produces hashes roughly uniformly between 0 and N
h2 produces hashes roughly uniformly between 0 and M
such that M * N is approximately 1B, e.g. M = 10k, N = 100k,
then:
map each entry x from the column from df1 to (h1(x), x)
map each entry x from the column from df2 to (h1(x), x)
group both by h1 into buckets with xs
join on h1 (that's gonna be the nasty shuffle)
then locally, for each pair of buckets (b1, b2) that came from df1 and df2 and had the same h1 hash code, do essentially the same:
compute h2 for all bs from b1 and from b2,
group by the hash code h2
Compare the small sub-sub-buckets that remain by converting everything toSet and computing the intersection directly.
Everything that remains after intersection is present in both df1 and df2, so compute size and sum the results across all partitions.
The idea is to pick N small enough that a bucket of roughly M entries still fits comfortably on a single node, while at the same time preventing the whole application from dying on the first shuffle, which would otherwise have to send every key to every node just to find out where everything is. For example, using SHA-256 as the "hash code" for h1 wouldn't help much, because the keys would be essentially unique; you might as well take the original data and shuffle that directly. If you instead restrict N to some reasonably small number, e.g. 10k, you get a rough approximation of where everything lives, so you can regroup the buckets and start the second stage with h2.
Essentially it's just a rough guess; I haven't tested it. It could well be that the built-in intersection is smarter than anything I could possibly come up with.
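For what it's worth, here is a rough sketch of that two-level bucketing written against the PySpark RDD API. The column names, bucket counts and hash functions are illustrative, it counts distinct matched values, and it is untested in the same spirit as the idea above:

import hashlib

def _stable_hash(x, salt):
    # deterministic across worker processes (unlike Python's salted built-in hash() for strings)
    return int.from_bytes(hashlib.sha256(f"{salt}:{x}".encode()).digest()[:8], "big")

def h1(x, n=100_000):
    return _stable_hash(x, "h1") % n          # coarse bucket id, 0..n-1

def h2(x, m=10_000):
    return _stable_hash(x, "h2") % m          # finer split inside each coarse bucket

def count_matches(df1, df2, col1, col2):
    r1 = df1.select(col1).rdd.map(lambda row: (h1(row[0]), row[0]))
    r2 = df2.select(col2).rdd.map(lambda row: (h1(row[0]), row[0]))

    # group both sides by h1, then join the buckets (this is the nasty shuffle)
    joined = r1.groupByKey().join(r2.groupByKey())

    def local_intersect(buckets):
        xs, ys = buckets
        from collections import defaultdict
        d1, d2 = defaultdict(set), defaultdict(set)
        for x in xs:                          # second-level split by h2
            d1[h2(x)].add(x)
        for y in ys:
            d2[h2(y)].add(y)
        # direct set intersection on the small sub-sub-buckets
        return sum(len(d1[k] & d2[k]) for k in d1.keys() & d2.keys())

    return joined.mapValues(local_intersect).values().sum()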

Matlab : Conceptual difficulty in How to create multiple hash tables in Locality sensitive Hashing

The key idea of locality-sensitive hashing (LSH) is that nearby points v are more likely
to be mapped to the same bucket, while points far from each other are more likely to be mapped to different buckets. Using random projection, if the database contains N samples each of high dimension d, then the theory says we must create k randomly generated hash functions, where k is the targeted reduced dimension, and combine them as g(v) = (h_1(v), h_2(v), ..., h_k(v)). So any vector point v is mapped to a k-dimensional vector by the g-function. The hash code is then this vector of reduced length/dimension k and is regarded as a bucket. Now, to increase the probability of collision, the theory says that we should have L such g-functions g_1, g_2, ..., g_L chosen at random. This is the part that I do not understand.
Question : How to create multiple hash tables? How many buckets are contained in a hash table?
I am following the code given in the paper Sparse Projections for High-Dimensional Binary Codes by Yan Xia et al. (Link to Code)
In the file Coding.m
dim = size(X_train, 2);
R = randn(dim, bit);
% coding
B_query = (X_query*R >= 0);
B_base = (X_base*R >=0);
X_query is the set of query data, each of dimension d, and there are 1000 query samples; R is the random projection and bit is the target reduced dimensionality. The outputs B_query and B_base are binary strings of length k (0/1 values), one per sample.
Does this create multiple hash tables, i.e. is N the number of hash tables? I am confused as to how. A detailed explanation will be very helpful.
How to create multiple hash tables?
LSH creates a hash table using (amplified) hash functions built by concatenation:
g(p) = [h_1(p), h_2(p), ..., h_k(p)], where each h_i is drawn at random from the hash family H
Each g() is a hash function and corresponds to one hash table. So we map the data via g() into that hash table, and with high probability the close points will fall into the same bucket while the non-close points fall into different buckets.
We do that L times, thus creating L hash tables. Note that every g() should (most likely) be different from the other g() hash functions.
Note: a larger k means a larger gap between P1 and P2, while a small P1 means a larger L is needed to find the neighbors. A practical choice is L = 5 (or 6). Here P1 and P2 are the usual LSH collision probabilities: P1 is the probability that two nearby points fall into the same bucket, and P2 the probability that two distant points do.
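To make the construction concrete, here is a minimal NumPy sketch (not the code from the paper; all names and parameter values are illustrative) that builds L hash tables, each keyed by its own k-bit random-projection code:

import numpy as np
from collections import defaultdict

def build_tables(X, k=16, L=5, seed=0):
    # X is an N x d data matrix; one random projection R per hash table
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    projections = [rng.standard_normal((d, k)) for _ in range(L)]
    tables = []
    for R in projections:
        codes = (X @ R >= 0)                  # N x k binary codes, i.e. g_j(v) for every point
        table = defaultdict(list)
        for i, code in enumerate(codes):
            table[code.tobytes()].append(i)   # the k-bit code is the bucket key
        tables.append(table)
    return projections, tables

def query(q, projections, tables):
    # candidates = union of the matching bucket from each of the L tables
    candidates = set()
    for R, table in zip(projections, tables):
        code = (q @ R >= 0).tobytes()
        candidates.update(table.get(code, []))
    return candidates

Each table uses its own projection matrix R, which is the sense in which the g() functions differ from one another.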
How many buckets are contained in a hash table?
Wish I knew! That's a difficult question; how about sqrt(N), where N is the number of points in the dataset? Check this: Number of buckets in LSH
The code of Yan Xia
I am not familiar with that code, but from what you said, I believe the query data are 1000 in number simply because you wish to pose 1000 queries.
k is the length of the binary strings, because we hash each query to see into which bucket of a hash table it is mapped. The points inside that bucket are the potential (approximate) nearest neighbors.

MATLAB spending an incredible amount of time writing a relatively small matrix

I have a small MATLAB script (included below) for handling data read from a CSV file with two columns and hundreds of thousands of rows. Each entry is a natural number, with zeros only occurring in the second column. This code is taking a truly incredible amount of time (hours) to run something that should be achievable in at most a few seconds. The profiler identifies that approximately 100% of the run time is spent writing a matrix of zeros, whose size varies depending on the input but in all cases is smaller than 1000x1000.
The code is as follows
function [data] = DataHandler(D)
    n = size(D,1);
    s = max(D,1);
    data = zeros(s,s);
    for i = 1:n
        data(D(i,1),D(i,2)+1) = data(D(i,1),D(i,2)+1) + 1;
    end
It's the data = zeros(s,s); line that takes around 100% of the runtime. I can make the code run quickly by just changing out the s's in this line for 1000, which is a sufficient upper bound to ensure it won't run into errors for any of the data I'm looking at.
Obviously there are better ways to do this, but since I just bashed the code together to quickly format some data I wasn't too concerned. As I said, I fixed it by just replacing s with 1000 for my purposes, but I'm perplexed as to why writing that matrix would bog MATLAB down for several hours. The new code runs instantaneously.
I'd be very interested if anyone has seen this kind of behaviour before, or knows why this would be happening. It's a little disconcerting, and it would be good to be able to be confident that I can initialize matrices freely without killing MATLAB.
Your call to zeros is incorrect. Looking at your code, D is an n x 2 array, but your call s = max(D,1) generates another n x 2 array rather than the scalar you presumably wanted. Consulting the documentation for max, this is what happens when you call max the way you did:
C = max(A,B) returns an array the same size as A and B with the largest elements taken from A or B. Either the dimensions of A and B are the same, or one can be a scalar.
Therefore, because you used max(D,1), you are comparing every value in D with the value 1, so what you actually get back is just a copy of D. Using this as input to zeros has rather undefined behaviour: what appears to happen is that for each row of s a temporary zeros matrix of that size is allocated and then tossed, and only the dimensions from the last row of s end up being used. Because D is a very large matrix, this is probably why the profiler hangs here at 100% utilization. Each size parameter to zeros must be a scalar, yet your expression for s produces a matrix.
What I believe you intended should have been:
s = max(D(:));
This finds the overall maximum of the matrix D by unrolling D into a single vector and finding the overall maximum. If you do this, your code should run faster.
As a side note, this post may interest you:
Faster way to initialize arrays via empty matrix multiplication? (Matlab)
It was shown in that post that zeros(n,n) is in fact slow and that there are several neat tricks for initializing an array of zeros. One way is to accomplish this with empty matrix multiplication:
data = zeros(n,0)*zeros(0,n);
One of my personal favourites is that if you assume that data was not declared / initialized, you can do:
data(n,n) = 0;
If I can also comment: that for loop is quite inefficient. What you are doing is calculating a 2D histogram / accumulation of the data. You can replace the for loop with a more efficient accumarray call. This also avoids explicitly allocating an array of zeros; accumarray does that under the hood for you.
As such, your code would basically become this:
function [data] = DataHandler(D)
data = accumarray([D(:,1) D(:,2)+1], 1);
In this case, accumarray takes every pair of row and column coordinates, stored in D(i,1) and D(i,2) + 1 for i = 1, 2, ..., size(D,1), and places all pairs that share the same row and column coordinates into the same 2D bin. It then adds up the occurrences in each bin, so the output at a given 2D bin is the total tally of rows of D that mapped to that row and column coordinate.
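For comparison only, the same 2D accumulation sketched in Python/NumPy (assuming D is an n x 2 integer array, as in the MATLAB code):

import numpy as np

def data_handler(D):
    rows = D[:, 0]                            # natural numbers, 1-based like the MATLAB code
    cols = D[:, 1] + 1                        # shift so zeros land in the first column
    data = np.zeros((rows.max(), cols.max()), dtype=int)
    np.add.at(data, (rows - 1, cols - 1), 1)  # accumulate a count per (row, col) bin
    return data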

How to make volume range calculation more RAM friendly?

I am trying to find the mean, median and percentile ranges of price movements for a given volume to be filled, using trade data. The code is attached below. The problem is that the code gives me a wsfull error when I run it on ~80k records. I am using a 4 GB Linux box. At the moment I can only run it for ~30k records, and even then q uses more than 70% of my RAM.
Is there any way to make it more memory friendly?
rangeForVol : {[symIn; vol; dt]
data: select from table where sym=symIn, date=dt;
data: update cumVol: sums quantity, cVol: sums quantity from data;
data: update cumVolTgt: cumVol + vol from data;
data: update pxLst: price[where each ((cumVol>=/:cVol) and (cumVol<=/:cumVolTgt))=1] from data;
.Q.gc[];
data: update minPx: min each pxLst, maxPx: max each pxLst from data;
data: update range: maxPx - minPx from data;
data
};
select count i by floor range%0.5 from rangeForVol[`ABC; 2500; 2012.06.04]
The code you quote above almost certainly does not do what you were trying to achieve.
The columns cumVol and cVol are identical (both contain a running total of that day's volume). Later you calculate cumVol>=/:cVol. The /: adverb means that, for every element of cVol, you compare it against the entire vector cumVol, so the result is an n x n boolean matrix rather than a vector. (With = instead of >= this would be the identity matrix, plus some extra 1b for any non-distinct values, as in the example below.)
q)(til 4)=\:til 4
1000b
0100b
0010b
0001b
It seems you wanted to perform an element-wise comparison between the two vectors (though comparing a vector to itself doesn't make much sense either); if you want to do that explicitly, each-both (=') would be the correct adverb. However, in q the comparison operators implicitly apply item-wise to two vectors of the same length (or to a vector and a scalar, as in the each-left example above), making any adverb unnecessary.
The fact that you are creating two n x n matrices, when you probably intended length-n vectors, is almost certainly why you are running out of memory.

how to pick a modulo for integer or string hash?

Typically, we hash by computing an integer from the key (an integer or a string) according to some rule, then returning hash(key) % m as the index into the hash table. But how do we choose the modulus m? Is there any convention to follow?
There are two possible conventions. One is to use a prime number, which yields good performance with quadratic probing.
The other is to use a power of two, since n mod m where m = 2^k is a fast operation: it is a bitwise AND with m-1. Of course, the modulus must equal the size of the hash table, and powers of two mean your hash table must double in size whenever it becomes overcrowded. This gives you amortized O(1) insertion in much the same way a dynamic array does.
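As a quick sanity check of that equivalence (illustrative values only, in Python):

m = 1 << 16                                   # table size 2^16
for h in (0, 1, 12345, 2**31 - 1, 987654321):
    assert h % m == (h & (m - 1))             # the bitwise AND gives the same bucket as the modulo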
Since [val modulo m] is used as an index into the table, m is the number of slots in that table. Are you free to choose that? Then use a big enough prime number. If you need to resize the table, you can either choose a bigger prime number, or (if you choose to double the table when resizing) you'd better make sure that your hash function has enough entropy in the lower bits.
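To illustrate the point about low-bit entropy (hypothetical hash values): keys whose hashes differ only in the high bits all collide when the table size is a power of two, while a prime modulus spreads them out:

m = 8                                         # power-of-two table size
hashes = [0x10, 0x20, 0x30, 0x40]             # differ only in the high bits
print([h % m for h in hashes])                # [0, 0, 0, 0] -> everything lands in bucket 0
print([h % 7 for h in hashes])                # [2, 4, 6, 1] -> a prime modulus spreads them out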