I'm new to hashing and here's my question:
Can you insert in a DELETED slot of the hash table?
Yes, you can insert into a deleted slot. But...
First, you should know that there are two kinds of deletion: soft deletion and hard deletion. In a soft delete you just flip a flag and mark the slot as "deleted"; in a hard delete you empty the slot.
Let me explain why we need soft deletion. Suppose you're using a hash table with linear probing and your hash function maps three input values to the same slot. With linear probing you place these three elements by advancing linearly through the table until you find an empty slot. If you now hard-delete one of them, you break the table: a later lookup stops as soon as its probe sequence hits the newly emptied slot, so a value stored further along becomes unreachable.
On the other hand, if you have a perfect hash function you are fine with hard deletion. A perfect hash function maps every input value to a unique slot, so no probing scheme is needed and a hard delete doesn't break your table.
Now, coming back to your question: when you reuse a deleted slot, you should also figure out how to avoid duplicate insertions, as in the sketch below.
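Here is a minimal Java sketch of linear probing with soft deletion (tombstones); the class and method names are made up for illustration. Insertion remembers the first tombstone it passed, but only reuses it after probing far enough to rule out a duplicate:

public class LinearProbingTable {
    private static final Object TOMBSTONE = new Object(); // marks a soft-deleted slot
    private final Object[] keys;

    public LinearProbingTable(int capacity) {
        keys = new Object[capacity];
    }

    private int slot(Object key, int i) {
        return ((key.hashCode() & 0x7fffffff) + i) % keys.length; // linear probe step i
    }

    public boolean insert(Object key) {
        int firstDeleted = -1;
        for (int i = 0; i < keys.length; i++) {
            int s = slot(key, i);
            if (keys[s] == null) {
                // Empty slot ends the probe: the key is definitely not present.
                // Reuse an earlier tombstone if we saw one, otherwise this slot.
                keys[firstDeleted >= 0 ? firstDeleted : s] = key;
                return true;
            }
            if (keys[s] == TOMBSTONE) {
                if (firstDeleted < 0) firstDeleted = s; // remember it, but keep probing
            } else if (keys[s].equals(key)) {
                return false; // duplicate: already present, do not insert again
            }
        }
        if (firstDeleted >= 0) { keys[firstDeleted] = key; return true; }
        return false; // table full
    }

    public boolean contains(Object key) {
        for (int i = 0; i < keys.length; i++) {
            int s = slot(key, i);
            if (keys[s] == null) return false; // a truly empty slot ends the probe
            if (keys[s] != TOMBSTONE && keys[s].equals(key)) return true;
        }
        return false;
    }

    public boolean delete(Object key) {
        for (int i = 0; i < keys.length; i++) {
            int s = slot(key, i);
            if (keys[s] == null) return false;
            if (keys[s] != TOMBSTONE && keys[s].equals(key)) {
                keys[s] = TOMBSTONE; // soft delete: leave a marker so probing continues
                return true;
            }
        }
        return false;
    }
}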
I have a table which uses the Snowflake HASH function to store values in some columns.
Is there any way to reverse the encryption from the hash function and get the original values from the table?
As per the documentation, the function is "not a cryptographic hash function" and will always return the same result for the same input expression.
Example:
select hash(1) always returns -4730168494964875235
select hash('a') always returns -947125324004678632
select hash('1234') always returns -4035663806895772878
I was wondering if there is any way to reverse the hashing and get the original input expression from the hashed values.
I think these disclaimers are there to prevent potential legal disputes:
Cryptographic hash functions have a few properties which this function does not, for example:
The cryptographic hashing of a value cannot be inverted to find the original value.
It's not possible to reverse a hash value in general. If you consider that even a very long text is represented in a 64-bit value, it's obvious that the original data is not preserved. On the other hand, if you use a brute-force technique, you may find an actual value producing the hash, and that can be counted as reversing the hash value.
For example, if you store the hash values for all the numbers between 0 and 5000 in a table, then when I come to you with the hash value '-7875472545445966613', you can look it up in your table and say it belongs to the number 1000.
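To illustrate that lookup-table idea, here is a small Java sketch. Note that hashFn below is just a placeholder 64-bit mixer, not Snowflake's actual HASH algorithm; the point is only that a small, enumerable input space can be "reversed" by precomputation:

import java.util.HashMap;
import java.util.Map;

public class HashLookup {
    // Placeholder hash: an illustrative 64-bit mix, NOT Snowflake's HASH.
    static long hashFn(long x) {
        x ^= x >>> 33;
        x *= 0xff51afd7ed558ccdL;
        x ^= x >>> 33;
        return x;
    }

    public static void main(String[] args) {
        Map<Long, Long> inverse = new HashMap<>();
        for (long i = 0; i <= 5000; i++) {
            inverse.put(hashFn(i), i); // hash -> original value
        }
        long needle = hashFn(1000);
        // "Reversing" only works because the input space was small enough to enumerate.
        System.out.println(inverse.get(needle)); // prints 1000
    }
}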
Consider an initially empty hash table of size M and hash function h(x) = x mod M. In the worst case, what is the time complexity (in Big-Oh notation) to insert n keys into the table if separate chaining is used to resolve collisions (without rehashing)? Suppose that each entry (bucket) of the table stores an unordered linked list. When adding a new element to an unordered linked list, such an element is inserted at the beginning of the list.
In the absence of collisions, inserting a key into a hash table/map is O(1), since looking up the bucket is a constant-time operation. I would not expect this to change in the case of collisions, assuming that collisions are resolved using a linked list and that the new element is inserted at the head of the list. The reason is that adding a new element to the head of a linked list is also O(1). So, under these assumptions, a single insertion is O(1), and therefore inserting n keys is O(n).
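A minimal sketch of that setup in Java (names are illustrative): each bucket is a singly linked list and a new element is pushed onto the head, so one insert is constant time even if every key collides:

public class ChainedTable {
    private static class Node {
        final int key;
        final Node next;
        Node(int key, Node next) { this.key = key; this.next = next; }
    }

    private final Node[] buckets;

    public ChainedTable(int m) {
        buckets = new Node[m];
    }

    public void insert(int key) {
        int b = Math.floorMod(key, buckets.length); // h(x) = x mod M
        buckets[b] = new Node(key, buckets[b]);     // head insertion: O(1), no duplicate check
    }
}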
I implemented a hashmap based on cuckoo hashing.
My hash functions take values of any length and return keys of type long. To match the keys to my array size n, I do key % n.
I'm thinking about following scenario:
Insert value A with key A.key into location A.key % n
Find value B with key A.key
So in this example I get the entry for value A, and it is not detected that value B hasn't even been inserted. This happens when my hash function returns the same key for two different values. Collisions with different keys but the same location are no problem.
What is the best way to detect those collisions?
Do I have to check every time I insert or search an item if the original values are equal?
As with most hashing schemes, in cuckoo hashing the hash code tells you where to look in the table for the element in question, but the expectation is that you store both the key and the value in the table, so that before returning the stored value you first check the key stored at that slot against the key you're looking for. That way, if you get the same hash code for two objects, you can determine which object was actually stored at that slot.
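A rough Java sketch of the lookup side of a two-table cuckoo map (the insertion/kick-out procedure is omitted; class, field and hash-mixing choices are made up for the example). The important part is that the stored key is compared against the requested key before the value is returned:

public class CuckooMap<K, V> {
    private static class Entry<K, V> {
        final K key; final V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private final Entry<K, V>[] table1;
    private final Entry<K, V>[] table2;

    @SuppressWarnings("unchecked")
    public CuckooMap(int n) {
        table1 = new Entry[n];
        table2 = new Entry[n];
    }

    private int h1(K key) { return Math.floorMod(key.hashCode(), table1.length); }
    private int h2(K key) { return Math.floorMod(key.hashCode() * 0x9E3779B9, table2.length); }

    public V get(K key) {
        Entry<K, V> e = table1[h1(key)];
        if (e != null && e.key.equals(key)) return e.value; // key check, not just slot check
        e = table2[h2(key)];
        if (e != null && e.key.equals(key)) return e.value;
        return null; // same slot but a different key => not the element we wanted
    }
}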
I am using the h2 database to store data.
Each record has to be unique in the database (unique in the sense that the combination of timestamp, name, message,.. doesn't appear twice in the table). Therefore one column in the table is the hash of the data in the record. To speed up checking whether the record already exists, I created an index on the hash column. Indeed, searching for a record with a given hash is very fast.
But here is the problem: while inserting 10k records is fast enough in the beginning (about a second), it gets awfully slow once there are already a million records in the database (about a minute). This is probably because the new hashes need to be integrated into the existing index B-tree.
Is there any way to speed this up or is there a better way to ensure uniqueness of data records in the table?
Edit: To be more concrete:
Let's say my records are transactions which have the following fields:
timestamp, type, sender, recipient, amount, message
A transaction should only appear once in the table, so before inserting a new transaction I have to check whether it is already there. Since the SHA-256 hash of all the fields is (practically) unique, my idea was to add a column 'hash' to the table to store the hash of the fields. Before inserting a new record I calculate the hash of the fields and query the table for that hash.
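For illustration, a minimal Java sketch of the kind of per-record SHA-256 computation described above (the field values and the separator byte are just examples, not taken from the question's actual schema):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class RecordHash {
    // Digest all fields of one record into a single hex-encoded SHA-256 string.
    static String hashOf(String... fields) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (String f : fields) {
            md.update(f.getBytes(StandardCharsets.UTF_8));
            md.update((byte) 0x1F); // separator so ("ab","c") differs from ("a","bc")
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(hashOf("2016-01-01T12:00:00", "transfer", "alice", "bob", "42.00", "hi"));
    }
}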
An index has its own overhead. If you have a table that will see lots of insertions, I would suggest avoiding an index on the hash column, as maintaining it adds overhead to every insert.
May I know what you mean by "one column in the table is the hash of the data in the record"?
You can create a unique key constraint (here it would be a composite key over the mentioned columns). Let me know the requirements; maybe we can give you a better, simpler way of doing it :)
Danyal
Man, querying the existing records, checking them for duplicates and then inserting the new row is probably not a good way to do this :). As you move ahead, the overhead will keep growing as the number of records increases.
Create a unique key constraint (check http://www.h2database.com/html/grammar.html) on the combination of these fields; you don't need to compute the hash yourself, the database will handle the hashing. Just try to add the duplicate record: you will get an exception, catch it and report the error as a duplicate insertion.
Once you create the unique index, it won't allow you to insert any duplicate records. It is pretty secure and safe.
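A minimal sketch of that suggestion against an in-memory H2 database (requires the H2 jar on the classpath; the table and column names are made up for the example). The second, identical insert violates the unique constraint and is caught:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class UniqueTransactionDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE tx(ts TIMESTAMP, tx_type VARCHAR, sender VARCHAR, " +
                           "recipient VARCHAR, amount DECIMAL(20,2), message VARCHAR)");
                // Composite unique constraint over all fields: no hash column needed.
                st.execute("ALTER TABLE tx ADD CONSTRAINT tx_unique " +
                           "UNIQUE(ts, tx_type, sender, recipient, amount, message)");
            }
            String insert = "INSERT INTO tx VALUES(?, ?, ?, ?, ?, ?)";
            try (PreparedStatement ps = con.prepareStatement(insert)) {
                for (int attempt = 0; attempt < 2; attempt++) { // the second attempt is a duplicate
                    ps.setTimestamp(1, java.sql.Timestamp.valueOf("2016-01-01 12:00:00"));
                    ps.setString(2, "transfer");
                    ps.setString(3, "alice");
                    ps.setString(4, "bob");
                    ps.setBigDecimal(5, new java.math.BigDecimal("42.00"));
                    ps.setString(6, "hi");
                    try {
                        ps.executeUpdate();
                        System.out.println("inserted");
                    } catch (SQLException e) {
                        System.out.println("duplicate rejected: " + e.getMessage());
                    }
                }
            }
        }
    }
}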
Indexing randomly distributed data is bad for performance. Once there are more entries in the index than fit in the cache, updating the index gets very slow, especially on a hard disk, because seeks on a hard disk are very slow. This, in combination with the random distribution of the data, leads to very bad performance. With solid-state disks it's a bit better, because random-access reads are faster there.
I have a Cassandra ColumnFamily (0.6.4) that will have new entries from users. I'd like to query Cassandra for those new entries so that I can process that data in another system.
My sense was that I could use a TimeUUIDType as the key for my entry, and then query on a KeyRange that starts either with "" as the startKey, or whatever the lastStartKey was. Is this the correct method?
How does get_range_slice actually create a range? Doesn't it have to know the data type of the key? There's no declaration of the data type of the key anywhere. In the storage_conf.xml file, you declare the type of the columns, but not of the keys. Is the key assumed to be of the same type as the columns? Or does it do some magic sniffing to guess?
I've also seen reference implementations where people store TimeUUIDType in columns. However, this seems to have scale issues as this particular key would then become "hot" since every change would have to update it.
Any pointers in this case would be appreciated.
When sorting data, only the column keys are important. The data stored is of no consequence, and neither is the auto-generated timestamp. The CompareWith attribute is what matters here. If you set CompareWith to UTF8Type then the keys are interpreted as UTF-8 strings; if you set CompareWith to TimeUUIDType then the keys are automatically interpreted as timestamps. You do not have to specify the data type. Look at the SlicePredicate and SliceRange definitions on this page: http://wiki.apache.org/cassandra/API This is a good place to start. Also, you might find this article useful: http://www.sodeso.nl/?p=80 In the third part or so he talks about slice ranging his queries and so on.
Doug,
Writing to a single column family can sometimes create a hot spot if you are using an Order-Preserving Partitioner, but not if you are using the default Random Partitioner (unless a subset of users create vastly more data than all other users!).
If you sorted your rows by time (using an Order-Preserving Partitioner) then you are probably even more likely to create hotspots, since you will be adding rows sequentially and a single node will be responsible for each range of the keyspace.
Columns and Keys can be of any type, since the row key is just the first column.
Virtually, the cluster is a circular hash key ring, and keys get hashed by the partitioner to get distributed around the cluster.
Beware of using dates as row keys, however, since even the randomization of the default RandomPartitioner is limited and you could end up cluttering your data.
What's more, if that date changes, you would have to delete the previous row, since you can only do inserts in C*.
Here is what we know:
A slice range is a range of columns in a row with a start value and an end value; it is used mostly for wide rows, since columns are ordered. Column names declared in the CF are indexed, however, so they can be retrieved by specifying their names.
A key slice is a key associated with the sliced column range as returned by Cassandra.
The equivalent of a where clause uses secondary indexes, you may use inequality operators there, however there must be at least ONE equals clause in your statement (also see https://issues.apache.org/jira/browse/CASSANDRA-1599).
Using a key range is ineffective with a Random Partitioner, as the MD5 hash of your key doesn't preserve lexical ordering.
What you want to use is a Column Family based index using a Wide Row:
CompositeType(TimeUUID | UserID)
In order for this not to become hot, add a meaningful first key (a "shard key") that splits the data across nodes, such as the user type or the region.
Having more data than necessary in Cassandra is not a problem, it's how it is designed, so what you must ask yourself is "what do I need to query" and then design a Column Family for it rather than trying to fit everything in one CF like you'd do in an RDBMS.