The Dart implementation behind String.hashCode, for use in another language - flutter

I mistakenly depended on String.hashCode's value in a Flutter app and stored it server-side. Now I want to be able to calculate the hash code of a string in Node.js and get the same result as Dart...
How can I achieve this? I couldn't find the actual implementation of String.hashCode in Dart.

The VM implementation of String.hashCode can be found in the SDK repo on GitHub (https://github.com/dart-lang/sdk):
Example:
StringHasher in runtime/vm/object.cc
CombineHashes/FinalizeHash in runtime/vm/hash.h

Here is a simple Swift implementation that should return the same hashCode value for a plain ASCII String in iOS and Flutter (note that it filters out non-ASCII scalars):
extension String {
    // Mirrors Dart's String.hashCode: Jenkins one-at-a-time, truncated to 30 bits
    func hashCode() -> UInt32 {
        var hash: UInt32 = 0
        for i in unicodeScalars.filter({ $0.isASCII }).map({ $0.value }) {
            // Combine step (CombineHashes in runtime/vm/hash.h)
            hash = hash &+ i
            hash = hash &+ (hash << 10)
            hash = hash ^ (hash >> 6)
        }
        // Finalize step (FinalizeHash in runtime/vm/hash.h)
        hash = hash &+ (hash << 3)
        hash = hash ^ (hash >> 11)
        hash = hash &+ (hash << 15)
        // Dart keeps only the low 30 bits and never returns 0
        hash = hash & ((1 << 30) &- 1)
        return (hash == 0) ? 1 : hash
    }
}
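
For the Node.js side of the question, a direct port of the same combine/finalize steps might look like the following TypeScript sketch. It hashes UTF-16 code units (charCodeAt), which should line up with what the VM's StringHasher consumes, and uses >>> 0 to emulate 32-bit unsigned wraparound. Treat it as a sketch to verify against your Dart build: String.hashCode is an unspecified internal detail and can change between SDK versions.

function dartStringHashCode(s: string): number {
    let hash = 0;
    for (let i = 0; i < s.length; i++) {
        // Combine step over each UTF-16 code unit; >>> 0 wraps to uint32
        hash = (hash + s.charCodeAt(i)) >>> 0;
        hash = (hash + ((hash << 10) >>> 0)) >>> 0;
        hash = (hash ^ (hash >>> 6)) >>> 0;
    }
    // Finalize step
    hash = (hash + ((hash << 3) >>> 0)) >>> 0;
    hash = (hash ^ (hash >>> 11)) >>> 0;
    hash = (hash + ((hash << 15) >>> 0)) >>> 0;
    // Dart truncates to 30 bits and maps 0 to 1
    hash &= (1 << 30) - 1;
    return hash === 0 ? 1 : hash;
}

console.log(dartStringHashCode("hello")); // compare with "hello".hashCode in Dart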

Related

How to perform bitwise operations with Dart List<Int8List>?

I am using Dart's Int8List in place of Java's byte type. I want to do bitwise operations with Int8List, but bitwise operations aren't defined in the Int8List class. How do I solve this?
My block of code, for reference:
List<Int8List> myList;
int offset = 0;

int read16Bit() {
  int x = ((myList[offset] & 0xFE) << 8) |
      (myList[offset + 1] & 0xFE);
  offset += 2;
  return x;
}
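
For comparison, here is what a working version of that read looks like in TypeScript over Node's signed byte type, Int8Array (the helper name is mine, for illustration). Two things worth noticing that apply to the Dart code as well: the bitwise operators work on the individual int you get by indexing all the way down to one element, not on the list type itself, and recovering all eight bits of a signed byte needs a 0xFF mask, whereas 0xFE drops the lowest bit:

function read16Bit(bytes: Int8Array, offset: number): number {
    // Mask each signed byte with 0xFF to get its unsigned 0..255 value,
    // then combine the two bytes big-endian into one 16-bit integer.
    return ((bytes[offset] & 0xff) << 8) | (bytes[offset + 1] & 0xff);
}

const buf = new Int8Array([0x12, 0x34, -1, -2]);
console.log(read16Bit(buf, 0).toString(16)); // "1234"
console.log(read16Bit(buf, 2).toString(16)); // "fffe" (-1, -2 as unsigned bytes)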

Unsigned right shift operator '>>>' in Swift

How would you implement the equivalent to Java's unsigned right shift operator in Swift?
According to Java's documentation, the unsigned right shift operator ">>>" shifts a zero into the leftmost position, while the leftmost position after ">>" depends on sign extension.
So, for instance,
long s1 = (-7L >>> 16); // result is 281474976710655L
long s2 = (-7L >> 16); // result is -1
In order to implement this in Swift, I would take all the bits except the sign bit by doing something like,
let lsb = Int64.max + negativeNumber + 1
Notice that the number has to be negative! If you overflow the shift operator, the app crashes with EXC_BAD_INSTRUCTION, which is not very nice...
Also, I'm using Int64 on purpose. Because there's no bigger datatype, doing something like (1 << 63) would overflow the Int64 and also crash. So instead of doing ((1 << 63) - 1 + negativeNumber) in a bigger datatype, I wrote it as Int64.max + negativeNumber + 1.
Then, I shift that positive number right with the normal logical shift, and OR the sign bit back into the first position after the sign:
let shifted = (lsb >> bits) | 0x4000000000000000
However, that doesn't give me the expected result,
((Int64.max - 7 + 1) >> 16) | 0x4000000000000000 // = 4611826755915743231
Not sure what I'm doing wrong...
Also, would it be possible to name this operator '>>>' and extend Int64?
Edit:
Adding the solution from OOper (below) here:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
I was implementing the Java Random class in Swift, which also involves truncating 64-bit ints to 32 bits. Thanks to OOper I just realized I can use the truncatingBitPattern initializer to avoid overflow exceptions. The function 'next' as described here becomes this in Swift:
var seed: Int64 = 0

private func next(_ bits: Int32) -> Int32 {
    seed = (seed &* 0x5DEECE66D &+ 0xB) & ((1 << 48) - 1)
    let shifted: Int64 = seed >>> (48 - Int64(bits))
    return Int32(truncatingBitPattern: shifted)
}
One sure way to do it is to use the unsigned shift operation of an unsigned integer type:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
print(-7 >>> 16) //->281474976710655
(Using -7 for testing with a bit count of 16 does not seem to be a good example; it loses all of its significant bits with a 16-bit right shift.)
If you want to do it your way, the bitwise-ORed missing sign bit cannot be the constant 0x4000000000000000. It needs to be 0x8000_0000_0000_0000 (a constant that overflows Swift's Int64) when the bit count == 0, and it needs to be logically shifted by the same bit count.
So, you need to write something like this:
infix operator >>>> : BitwiseShiftPrecedence
func >>>> (lhs: Int64, rhs: Int64) -> Int64 {
    if lhs >= 0 {
        return lhs >> rhs
    } else {
        return (Int64.max + lhs + 1) >> rhs | (1 << (63 - rhs))
    }
}
print(-7 >>>> 16) //->281474976710655
It seems far easier to work with unsigned integer types when you need an unsigned shift operation.
Swift has unsigned integer types, so there is no need for a separate unsigned right shift operator. That's a choice in Java that followed from the decision not to have unsigned types.
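
As a side note for the Node.js angle elsewhere on this page: JavaScript/TypeScript has a built-in >>>, but it only operates on 32 bits, so a 64-bit equivalent of Java's -7L >>> 16 needs BigInt plus the same reinterpret-as-unsigned trick used in the Swift answer above. A minimal sketch:

function urshift64(x: bigint, n: bigint): bigint {
    // Reinterpret as unsigned 64-bit, shift, then reinterpret back as signed.
    return BigInt.asIntN(64, BigInt.asUintN(64, x) >> n);
}

console.log(-7 >>> 16);           // 65535 -- the 32-bit built-in
console.log(urshift64(-7n, 16n)); // 281474976710655n -- matches Java's -7L >>> 16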

Why can't I get a negative number using bit manipulation in Swift?

This is a LeetCode question. I wrote four answers to different versions of that question. When I tried to use bit manipulation, I got a wrong result. Since no one on LeetCode could answer my question, and I can't find any Swift documentation about this, I thought I would try asking here.
The question is to get the majority element (appearing more than n/2 times) in a given array. The following code works in other languages like Java, so I think this might be a general question about Swift.
func majorityElement(nums: [Int]) -> Int {
    var bit = Array(count: 32, repeatedValue: 0)
    for num in nums {
        for i in 0..<32 {
            if (num >> (31 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<32 {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (31 - i))
    }
    return ret
}
When the input is [-2147483648], the output is 2147483648. But in Java, the same code successfully outputs the right negative number.
The Swift doc says:
Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Well, the maximum is 2,147,483,647, and my output, 2147483648, is 1 larger than that. When I ran pow(2.0, 31.0) in a playground, it showed 2147483648. I'm confused. What's wrong with my code, or what did I miss about Swift's Int?
A Java int is a 32-bit integer. The Swift Int is 32-bit or 64-bit depending on the platform; in particular, it is 64-bit on all OS X platforms where Swift is available.
Your code handles only the lower 32 bits of the given integers, so that
-2147483648 = 0xffffffff80000000
becomes
2147483648 = 0x0000000080000000
To solve the problem, you can either change the function to take 32-bit integers as arguments:
func majorityElement(nums: [Int32]) -> Int32 { ... }
or make it work with arbitrarily sized integers by computing the actual size and using that instead of the constant 32:
func majorityElement(nums: [Int]) -> Int {
    let numBits = sizeof(Int) * 8
    var bit = Array(count: numBits, repeatedValue: 0)
    for num in nums {
        for i in 0..<numBits {
            if (num >> (numBits - 1 - i) & 1) == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<numBits {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (numBits - 1 - i))
    }
    return ret
}
A more Swifty way would be to use map() and reduce():

func majorityElement(nums: [Int]) -> Int {
    let numBits = sizeof(Int) * 8
    let bitCounts = (0 ..< numBits).map { i in
        nums.reduce(0) { $0 + ($1 >> i) & 1 }
    }
    let major = (0 ..< numBits).reduce(0) {
        $0 | (bitCounts[$1] > nums.count / 2 ? 1 << $1 : 0)
    }
    return major
}
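
Incidentally, the 32-bit wraparound the asker expected from Java is exactly what JavaScript's bitwise operators provide, since they are defined on 32-bit integers. A quick TypeScript port of the same bit-counting idea (my own sketch, for illustration), where OR-ing in 1 << 31 lands on the sign bit and yields the negative result:

function majorityElement(nums: number[]): number {
    let ret = 0;
    for (let i = 0; i < 32; i++) {
        // Count how many numbers have bit i set
        let count = 0;
        for (const num of nums) {
            count += (num >> i) & 1;
        }
        // OR the majority bit back in; 1 << 31 wraps to the sign bit
        if (count > nums.length / 2) ret |= 1 << i;
    }
    return ret;
}

console.log(majorityElement([-2147483648])); // -2147483648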

How does the Jenkins one-at-a-time hash have fixed-length output when it doesn't use a mod system?

I understand how algorithms like SHA and MD5 always have fixed-size output for a string of any length, as they use a mod system. How does the Jenkins hash achieve this using just bitwise operators? Any help appreciated (links, in-depth explanations).
uint32_t jenkins_one_at_a_time_hash(char *key, size_t len)
{
    uint32_t hash, i;
    for (hash = i = 0; i < len; ++i)
    {
        hash += key[i];
        hash += (hash << 10);
        hash ^= (hash >> 6);
    }
    hash += (hash << 3);
    hash ^= (hash >> 11);
    hash += (hash << 15);
    return hash;
}
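
The mod is still there, just implicit: hash is a fixed-width uint32_t, so in C every addition and shift is automatically truncated modulo 2^32. In a language without fixed-width integers you have to apply that truncation yourself, which makes it visible. A TypeScript sketch of the same loop, where >>> 0 performs the mod-2^32 step explicitly:

function jenkinsOneAtATime(key: Uint8Array): number {
    let hash = 0;
    for (let i = 0; i < key.length; i++) {
        hash = (hash + key[i]) >>> 0;       // >>> 0 truncates to 32 bits,
        hash = (hash + (hash << 10)) >>> 0; // doing explicitly what
        hash = (hash ^ (hash >>> 6)) >>> 0; // uint32_t does implicitly
    }
    hash = (hash + (hash << 3)) >>> 0;
    hash = (hash ^ (hash >>> 11)) >>> 0;
    hash = (hash + (hash << 15)) >>> 0;
    return hash;
}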

What type of hash is this?

I am not very good at this and have searched high and low trying to determine whether this is a good or bad hash. I hope one of you can enlighten me with some information about it.
unsigned int hash(const unsigned char msg[], size_t len)
{
    unsigned int hash = 0xDECAFBAD;
    for (size_t i = 0; i < len; i++)
    {
        hash = ((hash << 5) ^ (hash >> 27)) ^ msg[i];
    }
    return hash & 0x7FFFFFFF;
}
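
For what it's worth: on a 32-bit unsigned value, (hash << 5) ^ (hash >> 27) is a 5-bit left rotation (the two shifted ranges don't overlap, so XOR acts like OR here), which makes this a simple rotate-and-XOR hash, and the final AND just clears the sign bit. A TypeScript rendering of the same loop, with >>> 0 standing in for 32-bit unsigned arithmetic:

function rotateXorHash(msg: Uint8Array): number {
    let hash = 0xdecafbad;
    for (const byte of msg) {
        // (hash << 5) ^ (hash >>> 27) rotates the 32-bit value left by 5
        hash = (((hash << 5) ^ (hash >>> 27)) ^ byte) >>> 0;
    }
    return hash & 0x7fffffff; // clear the sign bit, as in the original
}

console.log(rotateXorHash(new Uint8Array([1, 2, 3])).toString(16));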