I am trying to generate hashes for a blockchain project. While looking for a crypto library I stumbled across LibTomCrypt and chose it because it was easy to install. But now I have a problem: when I create the hashes (I'm using SHA3-512, but the bug is present with every other SHA hashing algorithm too), it sometimes outputs the correct hash but truncated.
(screenshot: example of a truncated hash)
This is the code for the hashing function:
string hashSHA3_512(const std::string& input) {
    // Initial buffer for the raw digest
    unsigned char* hashResult = new unsigned char[sha3_512_desc.hashsize];
    // Initialize a state variable for the hash
    hash_state md;
    sha3_512_init(&md);
    // Process the text - remember you can call process() multiple times
    sha3_process(&md, (const unsigned char*) input.c_str(), input.size());
    // Finish the hash calculation
    sha3_done(&md, hashResult);
    // Convert to string
    string stringifiedHash(reinterpret_cast<char*>(hashResult));
    // Return the result
    return stringToHex(stringifiedHash);
}
And here is the code for the stringToHex function, even though I already checked that the truncation problem appears before this function is called:
string stringToHex(const std::string& input)
{
static const char hex_digits[] = "0123456789abcdef";
std::string output;
output.reserve(input.length() * 2);
for (unsigned char c : input)
{
output.push_back(hex_digits[c >> 4]);
output.push_back(hex_digits[c & 15]);
}
return output;
}
If someone has knowledge of this library, or of this problem in general and possible fixes, please explain; I've been stuck on this for three days.
UPDATE
I figured out the program is truncating the hashes when it encounters two consecutive zeros in hex, i.e. eight zero bits (a single null byte), but I still don't understand why. If you do, please let me, and hopefully other people with the same problem, know.
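For anyone hitting the same thing: the truncation is almost certainly happening at string stringifiedHash(reinterpret_cast<char*>(hashResult)); — that std::string constructor treats the digest as a C string and stops at the first '\0' byte, and raw digests regularly contain null bytes. A minimal sketch of a fix (the function name is illustrative; it assumes the same LibTomCrypt calls as above) is to hex-encode the digest bytes directly, never passing them through a null-terminated string:

#include <string>
#include <tomcrypt.h>

// Sketch: hex-encode the raw digest bytes directly, so an embedded '\0'
// byte can never truncate the result.
std::string hashSHA3_512_fixed(const std::string& input) {
    unsigned char hashResult[64];  // SHA3-512 digest is 64 bytes
    hash_state md;
    sha3_512_init(&md);
    sha3_process(&md, reinterpret_cast<const unsigned char*>(input.data()),
                 static_cast<unsigned long>(input.size()));
    sha3_done(&md, hashResult);

    static const char hex_digits[] = "0123456789abcdef";
    std::string output;
    output.reserve(sizeof(hashResult) * 2);
    for (unsigned char c : hashResult) {
        output.push_back(hex_digits[c >> 4]);
        output.push_back(hex_digits[c & 15]);
    }
    return output;
}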
I understand that LEB128 decoders need to know whether an encoded number is signed or unsigned, but the encoder seems to work identically either way (though Wikipedia uses distinct functions for encoding signed and unsigned numbers).
If positive numbers are encoded the same way in Signed and Unsigned LEB128 (only the range changes), and negative numbers only occur in Signed LEB128, it seems more sensible to create a single function that encodes any integer (using the two's complement when the argument is negative).
I implemented a function that works the way I described, and it seems to work fine.
This is not an implementation detail (unless I've misunderstood something). Any function that can encode Signed LEB128 makes any function that encodes Unsigned LEB128 completely redundant, so there would never be a good reason to create both.
I used JavaScript, but the actual implementation is not important. Is there ever a reason to have a Signed LEB128 encoder and an Unsigned one?
const toLEB128 = function * (arg) {

    /* This generator takes any BigInt, LEB128 encodes it, and
    yields the result, one byte at a time (little-endian). */

    const digits = arg.toString(2).length;
    const length = digits + (7 - digits % 7);
    const sevens = new RegExp(".{1,7}", "g");
    const number = BigInt.asUintN(length, arg);
    const padded = "000000" + number.toString(2);
    const string = padded.slice(padded.length % 7);

    const eights = string.match(sevens).map(function(string, index) {

        /* This callback takes each string of seven digits and its
        index (big-endian), prepends the correct continuation digit,
        converts the 8-bit result to a BigInt, then returns it. */

        return BigInt("0b" + Boolean(index) * 1 + string);
    });

    while (eights.length) yield eights.pop();
};
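For comparison, here is a rough C++ sketch of the same single-encoder idea (the function name and the choice of int64_t are illustrative, not from the question): one routine emits two's-complement seven-bit groups, so negative values need no separate code path.

#include <cstdint>
#include <vector>

// One encoder for any integer. Relies on arithmetic right shift of a signed
// integer, which C++20 guarantees (two's complement) and which mainstream
// compilers do on earlier standards anyway.
std::vector<uint8_t> toLEB128(int64_t value) {
    std::vector<uint8_t> out;
    while (true) {
        uint8_t byte = value & 0x7F;   // take the low seven bits
        value >>= 7;                   // arithmetic shift preserves the sign
        // Stop once the remaining bits are pure sign extension and the sign
        // bit of the byte just taken already agrees with them.
        bool done = (value == 0 && !(byte & 0x40)) ||
                    (value == -1 && (byte & 0x40));
        if (!done) byte |= 0x80;       // set the continuation bit
        out.push_back(byte);
        if (done) return out;
    }
}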
I'm looking for a simple hash function that doesn't rely on integer overflow, and doesn't rely on unsigned integers.
The problem is that I have to create the hash function in Blueprint in Unreal Engine (which only has signed 32-bit integers, with undefined overflow behavior) and in PHP5, in a version that uses 64-bit signed integers.
So when I use the 'common' simple hash functions, they don't give the same result on both platforms because they all rely on bit-overflowing behavior of unsigned integers.
The only thing that is really important is that it has good 'randomness'. Does anyone know something simple that would accomplish this?
It's meant for a very basic signing system for sending messages to a server. It doesn't need to be top security... it's for storing high scores of a simple game on a server. The idea is that I would generate several hash-integers from the message (using different 'start numbers') and append them to make a hash-signature. I just need to make sure that if people sniff the network messages sent to the server, they cannot easily send faked messages. They would need to provide the correct hash-signature with their message, which they shouldn't be able to do unless they know the hash function being used. Of course, if they reverse engineer the game they can still 'hack' it, but I wouldn't know how to counter that...
I have no access to existing hash functions in the unreal engine blueprint system.
The first thing I would try would be to simulate the behavior of unsigned integers using signed integers, by explicitly applying the modulo operator whenever the accumulated hash-value gets large enough that it might risk overflowing.
Example code in C (apologies for the poor hash function, but the same technique should be applicable to any hash function, at least in principle):
#include <stdio.h>
#include <string.h>

int hashFunction(const char * buf, int numBytes)
{
    const int multiplier = 33;
    const int maxAllowedValue = 2147483647-255;  // i.e. 2^31-256, assuming 32-bit ints here
    const int maxPreMultValue = maxAllowedValue/multiplier;
    int hash = 536870912; // arbitrary starting number
    for (int i=0; i<numBytes; i++)
    {
        hash = hash % maxPreMultValue; // make sure hash cannot overflow in the next operation!
        hash = (hash*multiplier)+(unsigned char)buf[i]; // force the byte into 0..255 so platforms where char is signed agree
    }
    return hash;
}

int main(int argc, char ** argv)
{
    while(1)
    {
        printf("Enter a string to hash:\n");
        char buf[1024];
        if (fgets(buf, sizeof(buf), stdin) == NULL) return 0; // stop cleanly on EOF
        printf("Hash code for that string is: %i\n", hashFunction(buf, (int)strlen(buf)));
    }
}
I am a complete beginner with the D language.
How do I get, as a uint (an unsigned 32-bit integer) in the D language, some hash of a string?
I need a quick and dirty hash code (I don't care much about the "randomness" or the "lack of collision", I care slightly more about performance).
import std.digest.crc;

uint string_hash(string s) {
    return crc32Of(s);
}
is not good...
(using gdc-5 on Linux/x86-64 with phobos-2)
While Adam's answer does exactly what you're looking for, you can also use a union to do the casting.
This is a pretty useful trick, so I may as well put it here:
/**
 * Returns a crc32Of hash of a string
 * Uses a union to store the ubyte[]
 * And then simply reads that memory as a uint
 */
uint string_hash(string s){
    import std.digest.crc;

    union hashUnion{
        ubyte[4] hashArray;
        uint hashNumber;
    }

    hashUnion x;
    x.hashArray = crc32Of(s); // stores the result of crc32Of into the array.
    return x.hashNumber;      // reads the exact same memory as the hashArray,
                              // but reads it as a uint.
}
A really quick thing could just be this:
uint string_hash(string s) {
    import std.digest.crc;
    auto r = crc32Of(s);
    return *(cast(uint*) r.ptr);
}
Since crc32Of returns a ubyte[4] instead of the uint you want, a conversion is necessary, but since ubyte[4] and uint are the same thing to the machine, we can just do a reinterpret cast with the pointer trick shown above to convert between the types for free at runtime.
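For what it's worth, the same byte-reinterpretation written in C++ would usually go through memcpy to sidestep strict-aliasing rules (the helper below is just an illustrative sketch); either way, the resulting integer depends on the machine's byte order, which is acceptable for a quick and dirty hash:

#include <cstdint>
#include <cstring>

// Reinterpret four raw digest bytes as a 32-bit integer. memcpy avoids
// strict-aliasing problems; the value still depends on byte order.
uint32_t bytesToUint(const unsigned char bytes[4]) {
    uint32_t value;
    std::memcpy(&value, bytes, sizeof(value));
    return value;
}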
I apologize for asking somewhat of a programming question, but I want to be sure I'm properly using this library cryptographically.
I have managed to implement ed25519-donna except for hashing the data for a signature.
As far as I can tell, this is the function that hashes data:
void ed25519_hash(uint8_t *hash, const uint8_t *in, size_t inlen);
but I can't figure out what *hash is. I'm fairly certain that *in and inlen are the data to be hashed and its length.
Is it something specific to SHA512?
How can one hash with ed25519-donna?
Program hangs
I've compiled with ed25519-donna-master/ed25519.o and the OpenSSL flags -lssl -lcrypto. The key generation, signing, and verification functions work as expected.
It's running without error, but the application hangs on these lines, and the cores are not running at 100%, so I don't think it's busy processing:
extern "C"
{
#include "ed25519-donna-master/ed25519.h"
#include "ed25519-donna-master/ed25519-hash.h"
}
#include <openssl/rand.h>
unsigned char* hash;
const unsigned char* in = convertStringToUnsignedCharStar( myString );
std::cout << in << std::endl;
std::cout << "this is the last portion output and 'in' outputs correctly" << std::endl;
ed25519_hash(hash, in, sizeof(in) );
std::cout << hash << std::endl;
std::cout << "this is never output" << std::endl;
How can this code be modified so that ed25519_hash can function? It behaves the same way regardless of whether hash and in are unsigned char* or uint8_t*.
For uint8_t*, I used this code:
uint8_t* hash;
const uint8_t* in = reinterpret_cast<const uint8_t*>(myString.c_str());
“…but I can't figure out what *hash is.”
That uint8_t *hash is the buffer (unsigned char*) that will contain the resulting hash after you call the function.
So, you're looking at a function that expects 3 parameters (also known as arguments):
a uint8_t * buffer to hold the resulting hash,
the input data to be hashed,
the length of the input data to be hashed.
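Connecting that to the code in the question: the hang is most likely the uninitialized hash pointer (writing 64 bytes through it is undefined behavior), and sizeof(in) yields the size of a pointer, not the input length. A minimal corrected call could look like this sketch (assuming the ed25519-donna headers from the question are included and that myString is a std::string; the function name is illustrative):

#include <cstdint>
#include <string>

// Sketch only: the caller supplies a real 64-byte buffer, because
// ed25519-donna's internal hash produces a 512-bit digest.
void hashExample(const std::string& myString) {
    uint8_t hash[64];  // an actual buffer, not an uninitialized pointer
    const uint8_t* in = reinterpret_cast<const uint8_t*>(myString.c_str());
    ed25519_hash(hash, in, myString.size());  // pass the real length, not sizeof(a pointer)
}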
“Is it something specific to SHA512?”
Nope, it's regular C source. But I think you’re a bit confused by the documentation. It states…
If you are not compiling against OpenSSL, you will need a hash function.
…
To use a custom hash function, use -DED25519_CUSTOMHASH
when compiling ed25519.c and put your custom hash implementation
in ed25519-hash-custom.h. The hash must have a 512bit digest and
implement
…
void ed25519_hash(uint8_t *hash, const uint8_t *in, size_t inlen);
So, unless you are compiling without OpenSSL and implementing your own hash function, you won't need this function. Looking at your code, you are compiling against OpenSSL, which means you're playing with the wrong function.
“How can one hash with ed25519-donna?”
By using the provided functionality the library offers.
Your question makes me wonder if you scrolled down to the “Usage” part of the readme, because it completely answers your question and tells you what functions to use.
For your convenience, let me point you to the part of the documentation you need to follow and where you find the functions you need to hash, sign, verify etc. using ed25519-donna:
To use the code, link against ed25519.o -mbits and:
#include "ed25519.h"
Add -lssl -lcrypto when using OpenSSL (Some systems don't
need -lcrypto? It might be trial and error).
To generate a private key, simply generate 32 bytes from a secure cryptographic source:
ed25519_secret_key sk;
randombytes(sk, sizeof(ed25519_secret_key));
To generate a public key:
ed25519_public_key pk;
ed25519_publickey(sk, pk);
To sign a message:
ed25519_signature sig;
ed25519_sign(message, message_len, sk, pk, sig);
To verify a signature:
int valid = ed25519_sign_open(message, message_len, pk, sig) == 0;
To batch verify signatures:
const unsigned char *mp[num] = {message1, message2..}
size_t ml[num] = {message_len1, message_len2..}
const unsigned char *pkp[num] = {pk1, pk2..}
const unsigned char *sigp[num] = {signature1, signature2..}
int valid[num]
/* valid[i] will be set to 1 if the individual signature was valid, 0 otherwise */
int all_valid = ed25519_sign_open_batch(mp, ml, pkp, sigp, num, valid) == 0;
…
As you see, it's all in there… just follow the documentation.
The following is very slow for long strings:
std::string s = "long string";
K klist = DBVec::CreateList(KG, s.length());
for (int i = 0; i < s.length(); i++)
{
    kG(klist)[i] = s.c_str()[i];
}
It works acceptably fast (<100ms) for strings up to 100k, but slows to a crawl (tens of minutes, possibly hours) for strings of a few million characters. I don't see anything other than kG that can create nonlinearity. I don't see any reason for accessor function kG to be non-constant time, but there is just nothing else in this loop. Unfortunately I don't know how kG works due to lack of documentation.
Question: given a blob of binary data as std::string, what's the efficient way to construct a byte list?
kG is a macro defined in k.h which expands to ((x)->G0), i.e. it follows the G0 pointer of the K object.
http://kx.com/q/d/a/c.htm#Strings documents kp, which creates a K string object directly from a string, so presumably you could do K klist = kp(s.c_str()), which is probably faster.
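One caveat with kp: it assumes a null-terminated string, so for a blob of binary data that may contain embedded '\0' bytes, the length-taking variant kpn (also declared in k.h) is the safer call. A sketch:

// kpn takes an explicit length, so embedded null bytes in the blob survive;
// kp would stop copying at the first '\0'.
K klist = kpn(const_cast<char*>(s.data()), s.length());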
This works:
memcpy(kG(klist), s.c_str(), s.length());
I still wonder why that loop is not O(N).