Seed for hash-table non-cryptographic hash functions

If one sets the hash table seed to a random number during resize or table creation, will that prevent DDoS attacks on such a hash table, or will an attacker who knows the hash algorithm still easily get around the seed? What if the algorithm uses the Pearson hash function with randomly generated tables, unknown to the attacker? Does such a table-based hash still need a seed, or is it safe enough on its own?
Context: I want to use an on-disk hash table for a key-value database for my toy web server, where the keys may depend on user input.

There exist several approaches to protect your hash subsystem from "adverse selection" attacks. The most popular of them is called universal hashing, where the hash function, or some parameter of it, is selected at random at initialization time.
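For illustration only (this is not the approach used below), here is a minimal sketch of the textbook Carter-Wegman universal family for integer keys, h(x) = ((a*x + b) mod p) mod m, where a and b are drawn at random when the table is created. The rand()-based helper only stands in for a proper random source, and unsigned __int128 is a GCC/Clang extension:
#include <stdint.h>
#include <stdlib.h>

#define P 2305843009213693951ULL   /* the Mersenne prime 2^61 - 1 */

typedef struct { uint64_t a, b, m; } uhash;

/* crude 64-bit random helper for this sketch; use a real RNG (e.g. /dev/urandom) in practice */
static uint64_t rand64(void) {
    return ((uint64_t)rand() << 42) ^ ((uint64_t)rand() << 21) ^ (uint64_t)rand();
}

/* Draw the hash-function parameters at table-creation time. */
uhash uhash_new(uint64_t num_buckets) {
    uhash h;
    h.a = 1 + rand64() % (P - 1);   /* 1 <= a < p */
    h.b = rand64() % P;             /* 0 <= b < p */
    h.m = num_buckets;
    return h;
}

/* h(x) = ((a*x + b) mod p) mod m */
uint64_t uhash_apply(const uhash *h, uint32_t key) {
    unsigned __int128 t = (unsigned __int128)h->a * key + h->b;
    return (uint64_t)(t % P) % h->m;
}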
In my own approach, I always use the same hash function, but each character is added to the result with a non-linear mixing step that depends on a random array of uint32_t[256]. The array is created during system initialization; in my code this happens at each start, by reading /dev/urandom. See my implementation in the open-source emerSSL program. You're welcome to borrow the entire hash-table implementation, or just the hash function.
Currently, the hash function in the referenced source computes two independent hashes for a double-hashing search algorithm.
Here is a "reduced" form of the hash function from that source, to demonstrate the idea of non-linear mixing with an S-block array:
#include <stdint.h>

uint32_t S_block[0x100]; // Substitution block, filled with random contents at startup

#define NLF(h, c) (S_block[(unsigned char)((c) + (h))] ^ (c))
#define ROL(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

uint32_t hash(const char *key) {
    uint32_t h = 0x1F351F35; // Barker code * 2
    char c;
    for (int i = 0; (c = key[i]) != 0; i++) {
        h = ROL(h, 5);
        h += NLF(h, c);
    }
    return h;
}
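For completeness, here is a minimal sketch (the function name is made up, and error handling is kept to a bare minimum) of how S_block could be filled from /dev/urandom at each start, as described above:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

extern uint32_t S_block[0x100];

/* Fill the substitution block with fresh random words at program start. */
void init_S_block(void) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(S_block, sizeof S_block[0], 0x100, f) != 0x100) {
        perror("reading /dev/urandom");
        exit(EXIT_FAILURE);
    }
    fclose(f);
}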

Related

How can I make a good hash function without unsigned integers?

I'm looking for a simple hash function that doesn't rely on integer overflow, and doesn't rely on unsigned integers.
The problem is that I have to create the hash function in Blueprint for Unreal Engine (which only has signed 32-bit integers, with undefined overflow behavior) and in PHP5, in a version that uses 64-bit signed integers.
So when I use the 'common' simple hash functions, they don't give the same result on both platforms, because they all rely on the bit-overflow behavior of unsigned integers.
The only thing that is really important is that it has good 'randomness'. Does anyone know something simple that would accomplish this?
It's meant for a very basic signing system for sending messages to a server. It doesn't need to be top security... it's for storing high scores of a simple game on a server. The idea is that I would generate several hash integers from the message (using different 'start numbers') and append them to form a hash signature. I just need to make sure that if people sniff the network messages sent to the server, they cannot easily send faked messages. They would need to provide the correct hash signature with their message, which they shouldn't be able to do unless they know the hash function being used. Of course, if they reverse engineer the game they can still 'hack' it, but I wouldn't know how to counter that...
I have no access to existing hash functions in the unreal engine blueprint system.
The first thing I would try would be to simulate the behavior of unsigned integers using signed integers, by explicitly applying the modulo operator whenever the accumulated hash-value gets large enough that it might risk overflowing.
Example code in C (apologies for the poor hash function, but the same technique should be applicable to any hash function, at least in principle):
#include <stdio.h>
#include <string.h>

int hashFunction(const char * buf, int numBytes)
{
   const int multiplier = 33;
   const int maxAllowedValue = 2147483647-255;  // INT_MAX minus the largest byte value we add below (assuming 32-bit ints)
   const int maxPreMultValue = maxAllowedValue/multiplier;
   int hash = 536870912; // arbitrary starting number
   for (int i=0; i<numBytes; i++)
   {
      hash = hash % maxPreMultValue;  // make sure hash cannot overflow in the next operation!
      hash = (hash*multiplier)+(unsigned char)buf[i];  // cast so the added byte is never negative
   }
   return hash;
}

int main(int argc, char ** argv)
{
   while(1)
   {
      printf("Enter a string to hash:\n");
      char buf[1024];
      if (fgets(buf, sizeof(buf), stdin) == NULL) break;  // stop on EOF
      printf("Hash code for that string is: %i\n", hashFunction(buf, (int)strlen(buf)));
   }
   return 0;
}

Does 'mixing' the result of a 32-bit hash to create a 64-bit hash have any value?

For example, if you're programming in Java, and you want to create a 64-bit hash function for an arbitrary object, does it make sense to apply something like murmurHash3's 'finalizer' to the result of Object.hashCode()?
Specifically, is the following hash function
long Mix(int i)
{
    long result = i;
    return result ^ (result << 32) ^ (result << 33); // Or some 'better' way of mixing up the bits of i.
}

long Hash(Object o)
{
    return Mix(o.hashCode());
}
better than simply doing
long Hash(Object o)
{
    return o.hashCode();
}
(I'm well aware that the second one gives you nothing over a 32-bit hash)
The hash is going to be used to implement (recursive) hash-join, and the buckets are going to be determined by doing hash % prime. A concern is that it's going to be hard to make a good sequence of independent hash functions for the 'recursive' part if we only have 32-bits to start out with.
I'm thinking the answer is 'no', and that you really need to start out with a 64-bit hash which was computed directly from the value of the object.
I guess a side question is whether you actually need a 64-bit hash in the first place for the purposes of hash-join.
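For reference, the 64-bit finalizer from MurmurHash3 (commonly called fmix64), which the question alludes to, looks like this in C; whether feeding it a value that carries only 32 bits of entropy actually buys anything is exactly what is being asked:
#include <stdint.h>

/* MurmurHash3's 64-bit finalizer ("fmix64"): xor-shifts and multiplies to spread bits. */
uint64_t fmix64(uint64_t k) {
    k ^= k >> 33;
    k *= 0xff51afd7ed558ccdULL;
    k ^= k >> 33;
    k *= 0xc4ceb9fe1a85ec53ULL;
    k ^= k >> 33;
    return k;
}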

kdb c++ interface: create byte list from std::string

The following is very slow for long strings:
std::string s = "long string";
K klist = DBVec::CreateList(KG, s.length());
for (int i = 0; i < s.length(); i++)
{
    kG(klist)[i] = s.c_str()[i];
}
It works acceptably fast (<100ms) for strings up to 100k, but slows to a crawl (tens of minutes, possibly hours) for strings of a few million characters. I don't see anything other than kG that can create nonlinearity. I don't see any reason for accessor function kG to be non-constant time, but there is just nothing else in this loop. Unfortunately I don't know how kG works due to lack of documentation.
Question: given a blob of binary data as std::string, what's the efficient way to construct a byte list?
kG is a macro defined in k.h which expands to ((x)->G0), i.e. follow the G0 pointer of the K object
http://kx.com/q/d/a/c.htm#Strings documents kp, which creates a K string object directly from a string, so presumably you could do K klist = kp(s.c_str()), which is probably faster
This works:
memcpy(kG(klist), s.c_str(), s.length());
I still wonder why that loop is not O(N), though.
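Note that kp takes a NUL-terminated string, so for a binary blob with embedded NUL bytes the memcpy approach is the safer route. A minimal sketch in plain C (the wrapper name is made up; it assumes the standard k.h API, where ktn() allocates a list of a given type and length and kG() yields the byte pointer):
#include <string.h>
#include "k.h"   /* kdb+ C API */

/* Illustrative helper: build a KG (byte) list from an arbitrary binary buffer in one bulk copy. */
K make_byte_list(const char *data, size_t len)
{
    K klist = ktn(KG, (J)len);       /* allocate a byte list of length len */
    memcpy(kG(klist), data, len);    /* copy the whole payload at once, instead of per-byte */
    return klist;
}
From the C++ code above this would be called as make_byte_list(s.data(), s.length()).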

Why do I need to add the original salt to each hash iteration of a password?

I understand that it is important to hash passwords over multiple iterations to make things harder for an attacker. I have read numerous times that when processing these iterations, it is critical to hash not just the result of the previous iteration, but also to append the original salt each time. In other words:
I should not do this:
var hash = sha512(salt + password);
for (i = 0; i < 1000; i++) {
    hash = sha512(hash);
}
And instead, I need to do this:
var hash = sha512(salt + password);
for (i = 0; i < 1000; i++) {
    hash = sha512(salt + hash);
}
My question is about the math here. Why does my bad example above make things easier for an attacker? I've heard that it increases the likelihood of collisions, but I don't understand why.
It is not that you simply need to do "hash = sha512(salt + hash)"; it's more complex than that. An HMAC is a better way of mixing in your salt (and PBKDF2 is based on HMAC; see below for more detail on PBKDF2). There's a good discussion at "When is it safe to use a broken hash function?" for those details.
You are correct in that you need to have multiple iterations of a hash function for security.
However, don't roll your own. See How to securely hash passwords?, and note that PBKDF2, BCrypt, and Scrypt are all means of doing so.
PBKDF2, also known as PKCS #5 v2.0 and RFC 2898, is in fact reasonably close to what you're doing (multiple iterations of a normal hash function), particularly in the form of PBKDF2-HMAC-SHA-512. In particular, section 5.2 of the RFC specifies:
For each block of the derived key apply the function F defined
below to the password P, the salt S, the iteration count c, and
the block index to compute the block:
T_1 = F (P, S, c, 1) ,
T_2 = F (P, S, c, 2) ,
...
T_l = F (P, S, c, l) ,
where the function F is defined as the exclusive-or sum of the
first c iterates of the underlying pseudorandom function PRF
applied to the password P and the concatenation of the salt S
and the block index i:
F (P, S, c, i) = U_1 \xor U_2 \xor ... \xor U_c
where
U_1 = PRF (P, S || INT (i)) ,
U_2 = PRF (P, U_1) ,
...
U_c = PRF (P, U_{c-1}) .
Here, INT (i) is a four-octet encoding of the integer i, most
significant octet first.
P.S. SHA-512 was a good choice of hash primitive. SHA-512 (and SHA-384) are also superior to MD5, SHA-1, and even SHA-224 and SHA-256, because SHA-384 and up use 64-bit operations, where current GPUs (early 2014) have less of an advantage over current CPUs than they do with 32-bit operations, thus reducing the edge attackers have in offline attacks.
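As a concrete illustration of the "don't roll your own" advice, calling an existing PBKDF2-HMAC-SHA-512 implementation, here OpenSSL's PKCS5_PBKDF2_HMAC, looks roughly like this (the password, salt, and iteration count are placeholder values):
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    const char *password = "correct horse battery staple";
    const unsigned char salt[] = "demo-salt";      /* use a random, per-user salt in practice */
    unsigned char derived[64];                     /* 512-bit derived key */

    /* PBKDF2-HMAC-SHA-512; the salt is folded into every iteration by the construction above */
    if (!PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                           salt, (int)(sizeof(salt) - 1),
                           100000, EVP_sha512(),
                           (int)sizeof(derived), derived))
        return 1;

    for (size_t i = 0; i < sizeof(derived); i++)
        printf("%02x", derived[i]);
    printf("\n");
    return 0;
}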

Purpose of avalanching

I was researching different hash functions and came across SuperFastHash. This hashing function used a technique called "avalanching" which was defined like this:
/* Force "avalanching" of final 127 bits */
hash ^= hash << 3;
hash += hash >> 5;
hash ^= hash << 4;
hash += hash >> 17;
hash ^= hash << 25;
hash += hash >> 6;
What is the purpose of avalanching? Why are these specific shift amounts used (3, 5, 4, ...)?
Avalanching is just a term for the "diffusion" of small changes in the input across the final result. For cryptographic hashes, where non-reversibility is crucial, having similar inputs produce very different results is a desirable property, because it stops an attacker from approximating a hash from a nearby known one.
See more about this at http://en.wikipedia.org/wiki/Avalanche_effect
I can't say why those particular shift amounts were chosen, but the code combines the value with shifted copies of itself using addition and XOR to increase diffusion; other values would probably perform similarly, though confirming that would need deeper analysis.
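To make the effect concrete, here is a small test harness (it uses the GCC/Clang built-in __builtin_popcount) that runs the finalizer quoted in the question on two values differing in exactly one bit and counts how many output bits change; with good avalanching the count should hover around 16 for a 32-bit value:
#include <stdint.h>
#include <stdio.h>

/* The avalanching steps quoted in the question, applied to a 32-bit state. */
static uint32_t avalanche(uint32_t hash) {
    hash ^= hash << 3;
    hash += hash >> 5;
    hash ^= hash << 4;
    hash += hash >> 17;
    hash ^= hash << 25;
    hash += hash >> 6;
    return hash;
}

int main(void) {
    uint32_t x = 0x12345678u;
    for (int bit = 0; bit < 32; bit++) {
        uint32_t y = x ^ (1u << bit);               /* flip exactly one input bit */
        uint32_t diff = avalanche(x) ^ avalanche(y);
        printf("flip bit %2d -> %2d output bits change\n",
               bit, __builtin_popcount(diff));
    }
    return 0;
}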