collection.mutable.OpenHashMap vs collection.mutable.HashMap - scala

For put and get operations, OpenHashMap outperforms HashMap by about 5 times: https://gist.github.com/1423303
Are there any cases when HashMap should be preferred over OpenHashMap?

Your code exactly matches one of the use cases for OpenHashMap. Your code:
println ("scala OpenHashMap: " + time (warmup) {
val m = new scala.collection.mutable.OpenHashMap[Int,Int];
var i = 0;
var start = System.currentTimeMillis();
while(i<100000) { m.put(i,i);i=i+1;};
})
The explanation for OpenHashMap (scaladoc):
A mutable hash map based on an open hashing scheme. The precise scheme
is undefined, but it should make a reasonable effort to ensure that an
insert with consecutive hash codes is not unnecessarily penalised. In
particular, mappings of consecutive integer keys should work without
significant performance loss.
My emphasis, which explains your findings. When should you use OpenHashMap rather than HashMap? See the Wikipedia entry on hash tables. From there:
Chained hash tables with linked lists are popular because they require
only basic data structures with simple algorithms, and can use simple
hash functions that are unsuitable for other methods.
The cost of a table operation is that of scanning the entries of the
selected bucket for the desired key. If the distribution of keys is
sufficiently uniform, the average cost of a lookup depends only on the
average number of keys per bucket—that is, on the load factor.
Chained hash tables remain effective even when the number of table
entries n is much higher than the number of slots. Their performance
degrades more gracefully (linearly) with the load factor. For example,
a chained hash table with 1000 slots and 10,000 stored keys (load
factor 10) is five to ten times slower than a 10,000-slot table (load
factor 1); but still 1000 times faster than a plain sequential list,
and possibly even faster than a balanced search tree.
For separate-chaining, the worst-case scenario is when all entries
were inserted into the same bucket, in which case the hash table is
ineffective and the cost is that of searching the bucket data
structure. If the latter is a linear list, the lookup procedure may
have to scan all its entries; so the worst-case cost is proportional
to the number n of entries in the table.
This is a generic explanation. As ever with these things, your performance will vary depending upon the use case; if you care about it, you need to measure it.
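If you do care and want to measure it, a minimal, self-contained sketch along the lines of the gist could look like this (assuming a Scala version in which collection.mutable.OpenHashMap is still available; the time/warmup helpers from the gist are replaced by a bare System.currentTimeMillis measurement, so treat the numbers as rough):

object MapBenchmarkSketch {
  // Very rough timing helper; a real benchmark would warm up and repeat the runs.
  def timed(label: String)(body: => Unit): Unit = {
    val start = System.currentTimeMillis()
    body
    println(s"$label: ${System.currentTimeMillis() - start} ms")
  }

  def main(args: Array[String]): Unit = {
    timed("mutable.HashMap put") {
      val m = new scala.collection.mutable.HashMap[Int, Int]
      var i = 0
      while (i < 100000) { m.put(i, i); i += 1 }
    }
    timed("mutable.OpenHashMap put") {
      val m = new scala.collection.mutable.OpenHashMap[Int, Int]
      var i = 0
      while (i < 100000) { m.put(i, i); i += 1 }
    }
  }
}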

Related

Hash Table Confusion - How much space is needed for a Hash Table with a good (e.g. cryptographic) Hash Function?

I am learning about Hash Tables, Hash Maps etc. I have just implemented a Hash Table in C, with operations: insert(HTable, key), delete(HTable, key), initialize(HTable) and search(HTable, key).
I would like to ask something. Since in a (proper) hash table the computed hash indexes could be very large, doesn't this mean that the space consumed will be something like INT_MAX (which is still O(n), of course), or more? I mean, given the input element that we want to store in the hash table (i.e. insert into it), the insert() function would call the hash function, which would then compute the hashed index for the element to go to. Thus it would use the hash function to find this index.
When we use the hash function on the element, the hashed index could become very large. With a proper, for example cryptographic, hash function, this index could become huge (they use prime numbers with 300 digits, as in Diffie-Hellman public key cryptography, etc.), right? I know that with normal hash functions (such as the trivial ones beginners use when learning) we apply a mod operation so that the element fits within the hash table's bounds, but in doing so, don't we limit the hash function's potential?
So to uniquely map an element into the hash table, we must use a HUGE hash table. How are these cryptographic hash tables implemented? They must be completely secure, right? Even the Stack Overflow tag for cryptographic hash functions says that it is extremely unlikely to find two inputs that map to the same element (so the possibility of collisions is tiny). Wouldn't this, though, require a HUGE array to be stored in memory (or on disk)? The memory consumption would therefore be huge.
Of course, the time complexity is not a problem: we just take the start address of the hash table/array, add the index to it, and go to that place in memory to get the value (the O(1) search principle of a hash table).
Am I wrong somewhere? Is there something I'm missing? I hope I made myself clear. So, to conclude, I would like confirmation on this: does a good hash function require a huge array (hash table), and therefore a very large amount of memory, to be properly implemented? Is so much space justified, or is there something I don't quite get? Thanks.
In general, cryptographic hash values are not used for hash tables. Instead, a fast hash is used, and only as many bits of that hash value as are needed for the size of the table. If multiple key values map to the same index, the values are stored in a separate structure, possibly with additional information to choose between them.
It is not required that the hash output be unique; the hash function output would be too large, and the table required would certainly not fit in memory. Besides that, cryptographic hashes are generally quite slow.
Cryptographic hash functions are usually built from operations also used in symmetric block ciphers, which means mixing and bitwise operators applied over a large number of rounds. Modular arithmetic, as used for e.g. RSA, is commonly not used.
All in all, the main thing is that the index generated doesn't need to be unique. Usually, if one hash leads to multiple values, they are stored in a list or set where the keys can be compared by value.
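To make that concrete, here is a minimal sketch in Scala of the scheme described above: a fast, non-cryptographic hash (the JVM's hashCode), only enough low-order bits of it kept to index a small table, and colliding entries kept in a per-bucket list where keys are compared by value. TinyChainedMap and its parameters are made up for illustration, not taken from any library.

class TinyChainedMap[K, V](tableBits: Int = 4) {
  private val slots   = 1 << tableBits                       // 16 buckets
  private val mask    = slots - 1
  private val buckets = Array.fill(slots)(List.empty[(K, V)])

  // Keep only the low-order bits of the (fast, non-cryptographic) hash.
  private def index(key: K): Int = key.hashCode() & mask

  def put(key: K, value: V): Unit = {
    val i = index(key)
    buckets(i) = (key, value) :: buckets(i).filterNot(_._1 == key)
  }

  // Colliding entries share a bucket; comparing keys by value disambiguates them.
  def get(key: K): Option[V] =
    buckets(index(key)).collectFirst { case (k, v) if k == key => v }
}

// Usage: val m = new TinyChainedMap[String, Int](); m.put("answer", 42); m.get("answer") // Some(42)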

PostgreSQL using UUID vs Text as primary key

Our current PostgreSQL database is using GUIDs as primary keys and storing them as a text field.
My initial reaction to this is that trying to perform any kind of minimal Cartesian join would be a nightmare of indexing, trying to find all the matching records. However, perhaps my limited understanding of database indexing is wrong here.
I'm thinking that we should be using uuid, as these are stored as a binary representation of the GUID, whereas text is not, and the amount of indexing that you get on a text column is minimal.
It would be a significant project to change these, and I'm wondering if it would be worth it?
When dealing with UUID numbers store them as data type uuid. Always. There is simply no good reason to even consider text as alternative. Input and output is done via text representation by default anyway. The cast is very cheap.
The data type text requires more space in RAM and on disk, is slower to process, and is more error-prone. @khampson's answer provides most of the rationale. Oddly, he doesn't seem to arrive at the same conclusion.
This has all been asked and answered and discussed before. Related questions on dba.SE with detailed explanation:
Would index lookup be noticeably faster with char vs varchar when all values are 36 chars
What is the optimal data type for an MD5 field?
bigint?
Maybe you don't need UUIDs (GUIDs) at all. Consider bigint instead. It only occupies 8 bytes and is faster in every respect. Its range is often underestimated:
-9223372036854775808 to +9223372036854775807
That's 9.2 millions of millions of millions positive numbers. IOW, nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six something billion.
If you burn 1 million IDs per second (which is an insanely high number) you can keep doing so for 292471 years. And then another 292471 years for negative numbers. "Tens or hundreds of millions" is not even close.
UUID is really just for distributed systems and other special cases.
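As a quick back-of-envelope check of the figures above (a throwaway sketch, assuming 365-day years):

object BigintHeadroom {
  def main(args: Array[String]): Unit = {
    val max            = Long.MaxValue           // 9223372036854775807, the top of bigint's range
    val idsPerSecond   = 1000000L                // the "insanely high" rate quoted above
    val secondsPerYear = 365L * 24 * 60 * 60     // 31,536,000
    println(max / idsPerSecond / secondsPerYear) // 292471 years of positive IDs
  }
}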
As @Kevin mentioned, the only way to know for sure with your exact data would be to compare and contrast both methods, but from what you've described, I don't see why this would be different from any other case where a string was either the primary key in a table or part of a unique index.
What can be said up front is that your indexes will probably be larger, since they have to store larger string values, and in theory the comparisons for the index will take a bit longer, but I wouldn't advocate premature optimization if doing so would be painful.
In my experience, I have seen very good performance on a unique index using md5sums on a table with billions of rows. I have found that it tends to be other factors about a query that result in performance issues. For example, when you end up needing to query over a very large swath of the table, say hundreds of thousands of rows, a sequential scan ends up being the better choice, so that's what the query planner chooses, and it can take much longer.
There are other mitigating strategies for that type of situation, such as chunking the query and then UNIONing the results (e.g. a manual simulation of the sort of thing that would be done in Hive or Impala in the Hadoop sphere).
Re: your concern about indexing of text, while I'm sure there are some cases where a dataset produces a key distribution such that it performs terribly, GUIDs, much like md5sums, sha1's, etc. should index quite well in general and not require sequential scans (unless, as I mentioned above, you query a huge swath of the table).
One of the big factors about how an index would perform is how many unique values there are. For that reason, a boolean index on a table with a large number of rows isn't likely to help, since it basically is going to end up having a huge number of row collisions for any of the values (true, false, and potentially NULL) in the index. A GUID index, on the other hand, is likely to have a huge number of values with no collision (in theory definitionally, since they are GUIDs).
Edit in response to comment from OP:
So are you saying that a uuid GUID is the same thing as a text GUID as far as the indexing goes? Our entire table structure is using text fields with a GUID-like string, but I'm not sure Postgres recognizes it as a GUID. Just a string that happens to be unique.
Not literally the same, no. However, I am saying that they should have very similar performance for this particular case, and I don't see why optimizing up front is worth doing, especially given that you say to do so would be a very involved task.
You can always change things later if, in your specific environment, you run into performance problems. However, as I mentioned earlier, I think if you hit that scenario, there are other things that would likely yield better performance than changing the PK data types.
A UUID is a 128-bit data type (so, 16 bytes), whereas text has 1 or 4 bytes of overhead plus the actual length of the string. For a GUID, that would mean a minimum of 33 bytes, but could vary significantly depending on the encoding used.
So, with that in mind, certainly indexes of text-based UUIDs will be larger since the values are larger, and comparing two strings versus two numerical values is in theory less efficient, but it's not something that's likely to make a huge difference in this case, at least not in usual cases.
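For illustration, the raw sizes are easy to check on the JVM (a hypothetical snippet, not tied to Postgres itself; the UUID value printed is random):

object UuidSizes {
  def main(args: Array[String]): Unit = {
    val u = java.util.UUID.randomUUID()
    println(u)                        // canonical text form: 32 hex digits plus 4 dashes
    println(u.toString.length)        // 36 characters as text
    println(2 * java.lang.Long.BYTES) // 16 bytes as a binary uuid (two 64-bit halves)
  }
}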
I would not optimize up front when doing so would be a significant cost and is likely never to be needed. That bridge can be crossed if that time does come (although I would pursue other query optimizations first, as I mentioned above).
Regarding whether Postgres knows the string is a GUID, it definitely does not by default. As far as it's concerned, it's just a unique string. But that should be fine for most cases, e.g. matching rows and such. If you find yourself needing some behavior that specifically requires a GUID (for example, some non-equality based comparisons where a GUID comparison may differ from a purely lexical one), then you can always cast the string to a UUID, and Postgres will treat the value as such during that query.
e.g. for a text column foo, you can do foo::uuid to cast it to a uuid.
There's also a module available for generating uuids, uuid-ossp.

HBase row key design for monotonically increasing keys

I have an HBase table where I'm writing row keys like:
<prefix>~1
<prefix>~2
<prefix>~3
...
<prefix>~9
<prefix>~10
A scan in the HBase shell gives this output:
<prefix>~1
<prefix>~10
<prefix>~2
<prefix>~3
...
<prefix>~9
How should a row key be designed so that the row with key <prefix>~10 comes last? I'm looking for some recommended ways or the ways that are more popular for designing HBase row keys.
How should a row key be designed so that the row with key <prefix>~10 comes last?
You see the scan output in this way because rowkeys in HBase are kept sorted lexicographically, irrespective of the insertion order. This means that they are sorted based on their string representations. Remember that rowkeys in HBase are treated as an array of bytes with a string representation. The lowest-order rowkey appears first in a table. That's why <prefix>~10 appears before <prefix>~2, and so on. See the section "Rows" on this page to learn more about this.
When you left-pad the integers with zeros, their natural ordering is kept intact when sorting lexicographically, and that's why the scan order matches the order in which you inserted the data. To do that, you can design your rowkeys as suggested by @shutty.
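A tiny illustration of both points, sketched in Scala since any client language shows the same lexicographic effect ("p" stands in for the question's <prefix>):

object RowKeyOrder {
  def main(args: Array[String]): Unit = {
    val raw    = Seq("p~1", "p~2", "p~3", "p~9", "p~10")
    val padded = Seq("p~0001", "p~0002", "p~0003", "p~0009", "p~0010")
    println(raw.sorted)    // List(p~1, p~10, p~2, p~3, p~9) -- the order the scan shows
    println(padded.sorted) // List(p~0001, p~0002, p~0003, p~0009, p~0010) -- numeric order preserved
  }
}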
I'm looking for some recommended ways or the ways that are more popular for designing HBase row keys.
There are some general guidelines to follow in order to devise a good design:
Keep the rowkey as small as possible.
Avoid using monotonically increasing rowkeys, such as timestamps. This is a poor schema design and leads to RegionServer hotspotting. If you can't avoid that, use some technique, such as hashing or salting, to avoid hotspotting (see the sketch after this answer).
Avoid using Strings as rowkeys if possible. The string representation of a number takes more bytes than its integer or long representation. For example: a long is 8 bytes, and you can store an unsigned number up to 18,446,744,073,709,551,615 in those eight bytes. If you stored this number as a String, presuming a byte per character, you would need nearly 3x the bytes.
Use some mechanism, like hashing, in order to get a uniform distribution of rows in case your regions are not evenly loaded. You could also create pre-split tables to achieve this.
See this link for more on rowkey design.
HTH
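As a concrete illustration of the salting and fixed-width guidelines above, a rowkey builder might look roughly like the sketch below; the bucket count, key layout, and names are assumptions made for illustration, not HBase requirements.

object SaltedRowKeys {
  val SaltBuckets = 8  // assumed to match the number of pre-split regions

  // A small salt derived from the logical key spreads sequential writes across
  // regions; zero-padding keeps lexicographic order equal to numeric order.
  def rowKey(prefix: String, n: Long): String = {
    val salt = Math.floorMod((prefix + n).hashCode, SaltBuckets)
    f"$salt%02d~$prefix~$n%010d"  // e.g. "03~sensor42~0000000010" (salt value will vary)
  }

  def main(args: Array[String]): Unit =
    (1L to 12L).map(rowKey("sensor42", _)).sorted.foreach(println)
}

The trade-off to keep in mind: salting spreads writes across regions, but a scan for one logical prefix now has to visit every salt bucket.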
HBase stores rowkeys in lexicographical order, so you can try to use this schema with fixed-length rowkeys:
<prefix>~0001
<prefix>~0002
<prefix>~0003
...
<prefix>~0009
<prefix>~0010
Keep in mind that you also should use random prefixes to avoid region hot-spotting (when a single region accepts most of the writes, while the other regions are idle).
Monotonically increasing keys aren't a good schema for HBase.
You can read more here:
http://hbase.apache.org/book/rowkey.design.html
There is also a link there to OpenTSDB, which solves this problem.
Fixed-length keys are really recommended if possible. Bytes.toBytes(Long value) can be used to get a byte array from a counter. It will sort well for positive longs less than Long.MAX_VALUE.
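A sketch of why that works, assuming Java 9+ for Arrays.compareUnsigned (which mimics the unsigned byte comparison HBase applies to rowkeys); ByteBuffer's default big-endian encoding is used here as a stand-in for Bytes.toBytes:

import java.nio.ByteBuffer
import java.util.Arrays

object LongKeyOrder {
  def toBytes(v: Long): Array[Byte] = ByteBuffer.allocate(8).putLong(v).array() // big-endian

  def main(args: Array[String]): Unit = {
    val keys = Seq(100L, 2L, 10L, 1L).map(toBytes)
    // HBase compares rowkeys as unsigned bytes; compareUnsigned does the same.
    val sorted = keys.sortWith((a, b) => Arrays.compareUnsigned(a, b) < 0)
    sorted.foreach(k => print(ByteBuffer.wrap(k).getLong + " ")) // 1 2 10 100
    // A negative long starts with byte 0xFF and would sort after all positives,
    // which is why the ordering only holds for non-negative values.
  }
}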

Find global subscript midpoint

In Caché ObjectScript (InterSystems' dialect of MUMPS), is there a way to efficiently skip to the approximate midpoint, or to evenly spaced points, in the key range of a global subscript, where "evenly" is based on the number of records?
I want to divide up the subscript key range into approximately equal chunks and then process each chunk in parallel.
Knowing that the keys in a global are arranged in a binary tree of some kind, this should be a simple operation for the underlying data storage engine but I'm not sure if there is an interface to do this.
I can do it by scanning the global's whole keyspace but that would defeat the purpose of trying to run the operation in parallel. A sequential scan takes hours on this global. I need the keyspace divided up BEFORE I begin scanning.
I want each thread to work on an approximately equal-sized contiguous chunk of the keyspace, scanned individually; the problem is calculating what key range to give each thread.
You can use the second parameter, "direction" (1 or -1), of the functions $order or $query.
For my particular need, I found that the application I'm using has what I would call an index global: another global, maintained by the app with different keys, that links back to the main table. I can scan that in a fraction of the time and break up the keyset from there.
If someone comes up with a way to do what I want given only the main global, I'll change the accepted answer to that.

Optimizing word count

(This is rather hypothetical in nature as of right now, so I don't have too many details to offer.)
I have a flat file of random (English) words, one on each line. I need to write an efficient program to count the number of occurrences of each word. The file is big (perhaps about 1 GB), but I have plenty of RAM for everything. It's stored on permanent media, so read speeds are slow, and I need to just read through it once, linearly.
My two off-the-top-of-my-head ideas were to use a hash with words => no. of occurrences, or a trie with the no. of occurrences at the end node. I have enough RAM for a hash array, but I'm thinking that a trie would have as fast or faster lookups.
What approach would be best?
I think a trie with the count as the leaves could be faster.
Any decent hash table implementation will require reading the word fully, processing it using a hash function, and finally, a look-up in the table.
A trie can be implemented such that the search occurs as you are reading the word. This way, rather than doing a full look-up of the word, you could often find yourself skipping characters once you've established the unique word prefix.
For example, if you've read the characters: "torto", a trie would know that the only possible word that starts this way is tortoise.
If you can perform this inline search on a word faster than the hashing algorithm can hash it, you should be able to be faster overall.
However, this is total overkill. I rambled on since you said it was purely hypothetical; I figured you'd like a hypothetical type of answer. Go with the most maintainable solution that performs the task in a reasonable amount of time. Micro-optimizations typically waste more time in man-hours than they save in CPU-hours.
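For what it's worth, here is a minimal counting-trie sketch in Scala, untuned and purely to illustrate the structure described above; "words.txt" is the hypothetical one-word-per-line input from the question.

import scala.collection.mutable

final class CountingTrie {
  private class Node {
    val children = mutable.Map.empty[Char, Node]
    var count = 0                                    // number of words ending at this node
  }
  private val root = new Node

  def add(word: String): Unit = {
    var node = root
    for (c <- word) node = node.children.getOrElseUpdate(c, new Node)
    node.count += 1
  }

  def count(word: String): Int =
    word.foldLeft(Option(root))((n, c) => n.flatMap(_.children.get(c))).fold(0)(_.count)
}

object TrieWordCount {
  def main(args: Array[String]): Unit = {
    val trie = new CountingTrie
    for (line <- scala.io.Source.fromFile("words.txt").getLines())
      trie.add(line.trim.toLowerCase)
    println(trie.count("tortoise"))
  }
}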
I'd use a Dictionary object where the key is the word converted to lower case and the value is the count. If the dictionary doesn't contain the word, add it with a value of 1. If it does contain the word, increment the value.
Given slow reading, it's probably not going to make any noticeable difference. The overall time will be completely dominated by the time to read the data anyway, so that's what you should work at optimizing. For the algorithm (mostly data structure, really) in memory, just use whatever happens to be most convenient in the language you find most comfortable.
A hash table is (if done right, and you said you had lots of RAM) O(1) to count a particular word, while a trie is going to be O(n) where n is the length of the word.
With a sufficiently large hash space, you'll get much better performance from a hash table than from a trie.
I think that a trie is overkill for your use case. A hash of word => # of occurrences is exactly what I would use. Even using a slow interpreted language like Perl, you can munge a 1GB file this way in just a few minutes. (I've done this before.)
I have enough RAM for a hash array, but I'm thinking that a trie would have as fast or faster lookups.
How many times will this code be run? If you're just doing it once, I'd say optimize for your time rather than your CPU's time, and just do whatever's fastest to implement (within reason). If you have a standard library function that implements a key-value interface, just use that.
If you're doing it many times, then grab a subset (or several subsets) of the data file, and benchmark your options. Without knowing more about your data set, it'd be dubious to recommend one over another.
Use Python!
Add each word to a set data type as you go line by line, before asking whether it is in the hash table. Once you see a word that is already in the set, add it to a dictionary with a value of 2, since you have already seen it once before.
This takes some of the memory and computation away from asking the dictionary every single time, and instead handles uniquely occurring words better: at the end of the run, just dump all the words that are not in the dictionary out of the set, each with a count of 1 (i.e. take the difference between the set and the dictionary's keys).
To a large extent, it depends on what you want you want to do with the data once you've captured it. See Why Use a Hash Table over a Trie (Prefix Tree)?
A simple Python script:
import collections

# Count occurrences of each word (one word per line).
counts = collections.defaultdict(int)
with open('words.txt') as f:
    for line in f:
        counts[line.strip()] += 1

print("\n".join("%s: %d" % (word, count) for word, count in counts.items()))