Possible Duplicate:
Converting string to integer C
I want to convert a number received on the command line in argv[1] into an "int". I know argv stores strings, but how can I get an integer value from one of them? Suppose argv[1] is "10".
You can use the atoi() function from the C standard library:
http://en.cppreference.com/w/cpp/string/byte/atoi
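For example, a minimal sketch (note that atoi() offers no error reporting; it simply returns 0 when the input cannot be parsed):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;
    int value = atoi(argv[1]);  /* "10" becomes the int 10 */
    printf("%d\n", value);
    return 0;
}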
The C standard library also includes functions that convert strings to actual numerical values of specific types. Here are a few examples:
long some_long = strtol(argv[1], NULL, 10);
long long some_longlong = strtoll(argv[1], NULL, 10);
unsigned long some_unsigned_long = strtoul(argv[1], NULL, 10);
unsigned long long some_unsigned_longlong = strtoull(argv[1], NULL, 10);
There are also functions for conversion to floating-point values:
double some_double = strtod(argv[1], NULL);
float some_float = strtof(argv[1], NULL);
long double some_long_double = strtold(argv[1], NULL);
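Unlike atoi(), all of these can report bad input. A minimal sketch of parsing argv[1] with strtol() and error checking (the usage message is illustrative):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <number>\n", argv[0]);
        return 1;
    }
    char *end;
    errno = 0;
    long value = strtol(argv[1], &end, 10);
    if (end == argv[1] || *end != '\0') {
        fprintf(stderr, "not a number: %s\n", argv[1]);
        return 1;
    }
    if (errno == ERANGE) {
        fprintf(stderr, "out of range: %s\n", argv[1]);
        return 1;
    }
    printf("parsed: %ld\n", value);
    return 0;
}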
For further reference, read here, and see the strtoul.html and strtod.html pages on the same site.
Related
I am trying to generate hashes for a blockchain project. While looking for a crypto library I stumbled across tomcrypt and chose it since it was easy to install, but now I have a problem: when I create hashes (I'm using SHA3_512, but the bug is present with every other SHA hashing algorithm) it sometimes outputs the correct hash, but truncated.
[Screenshot: example of a truncated hash]
This is the code for the hashing function:
string hashSHA3_512(const std::string& input) {
    // Initial
    unsigned char* hashResult = new unsigned char[sha3_512_desc.hashsize];
    // Initialize a state variable for the hash
    hash_state md;
    sha3_512_init(&md);
    // Process the text - remember you can call process() multiple times
    sha3_process(&md, (const unsigned char*) input.c_str(), input.size());
    // Finish the hash calculation
    sha3_done(&md, hashResult);
    // Convert to string
    string stringifiedHash(reinterpret_cast<char*>(hashResult));
    // Return the result
    return stringToHex(stringifiedHash);
}
And here is the code for the toHex function, although I already checked that the truncation problem appears before this function is called:
string stringToHex(const std::string& input)
{
    static const char hex_digits[] = "0123456789abcdef";
    std::string output;
    output.reserve(input.length() * 2);
    for (unsigned char c : input)
    {
        output.push_back(hex_digits[c >> 4]);
        output.push_back(hex_digits[c & 15]);
    }
    return output;
}
If someone has knowledge of this library, or of this problem in general and possible fixes, please explain; I've been stuck on this for three days.
UPDATE
I figured out that the program truncates the hashes when it encounters two consecutive zeros in hex, i.e. eight zero bits (a single zero byte), but I still don't understand why. If you do, please let me, and hopefully other people with the same problem, know.
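For anyone hitting the same symptom: the zero-byte observation in the update points at the std::string constructor in hashSHA3_512 above, not at the library. std::string(reinterpret_cast<char*>(hashResult)) treats the raw binary hash as a NUL-terminated C string, so it stops at the first zero byte. A sketch of the fixed function, reusing the same libtomcrypt calls and the question's own stringToHex helper:

string hashSHA3_512(const std::string& input) {
    unsigned char hashResult[64];  // SHA3-512 digest is 64 bytes (sha3_512_desc.hashsize)
    hash_state md;
    sha3_512_init(&md);
    sha3_process(&md, (const unsigned char*) input.c_str(), input.size());
    sha3_done(&md, hashResult);
    // Pass the length explicitly so embedded zero bytes
    // do not truncate the string
    string stringifiedHash(reinterpret_cast<char*>(hashResult), sizeof hashResult);
    return stringToHex(stringifiedHash);
}

Using a fixed-size local buffer also avoids the leak in the original, where the new[] allocation was never deleted.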
I understand that LEB128 decoders need to know whether an encoded number is signed or unsigned, but the encoder seems to work identically either way (though Wikipedia uses distinct functions for encoding signed and unsigned numbers).
If positive numbers are encoded the same way in Signed and Unsigned LEB128 (only the range changes), and negative numbers only occur in Signed LEB128, it seems more sensible to create a single function that encodes any integer (using two's complement when the argument is negative).
I implemented a function that works the way I described, and it seems to work fine.
This is not an implementation detail (unless I've misunderstood something). Any function that can encode Signed LEB128 makes any function that encodes Unsigned LEB128 completely redundant, so there would never be a good reason to create both.
I used JavaScript, but the actual implementation is not important. Is there ever a reason to have a Signed LEB128 encoder and an Unsigned one?
const toLEB128 = function * (arg) {

    /* This generator takes any BigInt, LEB128 encodes it, and
    yields the result, one byte at a time (little-endian). */

    const digits = arg.toString(2).length;
    const length = digits + (7 - digits % 7);
    const sevens = new RegExp(".{1,7}", "g");
    const number = BigInt.asUintN(length, arg);
    const padded = "000000" + number.toString(2);
    const string = padded.slice(padded.length % 7);

    const eights = string.match(sevens).map(function(string, index) {

        /* This callback takes each string of seven digits and its
        index (big-endian), prepends the correct continuation digit,
        converts the 8-bit result to a BigInt, then returns it. */

        return BigInt("0b" + Boolean(index) * 1 + string);
    });

    while (eights.length) yield eights.pop();
};
I have ported Java code to C#.
Could you please explain why I get a compile-time error on the following line (I use VS 2008):
private long l = 0xffffffffffffffffL; // 16 'f's here
Cannot convert source type ulong to target type long
I need the same value here as in the original Java code.
Java doesn't mind if a constant overflows in this particular situation - the value you've given is actually -1.
The simplest way of achieving the same effect in C# is:
private long l = -1;
If you want to retain the 16 fs you could use:
private long l = unchecked((long) 0xffffffffffffffffUL);
If you actually want the maximum value for a signed long, you should use:
// Java
private long l = Long.MAX_VALUE;
// C#
private long l = long.MaxValue;
Assuming you aren't worried about negative values, you could try using an unsigned long:
private ulong l = 0xffffffffffffffffL;
In Java the actual value of l would be -1, because it would overflow the 2^63 - 1 maximum value, so you could just replace your constant with -1.
0xffffffffffffffff is larger than a signed long can represent.
You can insert a cast:
private long l = unchecked( (long)0xffffffffffffffffL);
Since C# uses two's complement, 0xffffffffffffffff represents -1:
private long l = -1;
Or declare the variable as unsigned, which is probably the cleanest choice if you want to represent bit patterns:
private ulong l = 0xffffffffffffffffL;
private ulong l = ulong.MaxValue;
The maximum value of a signed long is:
private long l = 0x7fffffffffffffffL;
But that's better written as long.MaxValue.
You could do this:
private long l = long.MaxValue;
... but as mdm pointed out, you probably actually want a ulong.
private ulong l = ulong.MaxValue;
How to convert a char array to long in Objective-C
unsigned char composite[4];
composite[0] = spIndex;
composite[1] = minor;
composite[2] = shortss[0];
composite[3] = shortss[1];
I need to convert this to a long int. Can anyone please help?
If you are looking at converting what is essentially already a binary number, then a simple type cast would suffice, but you would need to reverse the indexes to get the same result as you would in Java: long value = *((long*)composite);
You might also consider this if you have many such scenarios:
union {
    unsigned char asChar[4];
    long asLong;
} value;

value.asLong = 0;    // clear all bytes first; long may be wider than 4 bytes
value.asChar[3] = 1;
value.asChar[2] = 9;
value.asChar[1] = 0;
value.asChar[0] = 10;

// Outputs 17367050 (on a little-endian machine)
NSLog(@"Value as long %ld", value.asLong);
I have an HTTP connector in my iPhone project, and queries must have a parameter set from the username using the Fowler–Noll–Vo (FNV) hash.
I have a working Java implementation at this time; this is the code:
long fnv_prime = 0x811C9DC5;
long hash = 0;

for (int i = 0; i < str.length(); i++)
{
    hash *= fnv_prime;
    hash ^= str.charAt(i);
}
Now on the iPhone side, I did this :
int64_t fnv_prime = 0x811C9DC5;
int64_t hash = 0;

for (int i = 0; i < [myString length]; i++)
{
    hash *= fnv_prime;
    hash ^= [myString characterAtIndex:i];
}
This script doesn't give me the same result as the Java one.
In the first loop iterations, I get this:
hash = 0
hash = 100 (first letter is "d")
hash = 1865261300 (for hash = 100 and fnv_prime = -2128831035 like in Java)
Does someone see something I'm missing?
Thanks in advance for the help!
In Java, this line:
long fnv_prime = 0x811C9DC5;
will give fnv_prime the numerical value -2128831035, because the constant is interpreted as an int, which is a 32-bit signed value in Java. That value is then sign-extended when written into a long.
Conversely, in the Objective-C code:
int64_t fnv_prime = 0x811C9DC5;
the 0x811C9DC5 is interpreted as an unsigned int constant (because it does not fit in a signed 32-bit int), with numerical value 2166136261. That value is then written into fnv_prime, and there is no sign to extend since, as far as the C compiler is concerned, the value is positive.
Thus you end up with distinct values for fnv_prime, which explains your distinct results.
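A quick way to see the two values side by side on the C side; this sketch forces the Java-like sign extension with an explicit 32-bit cast (strictly speaking that conversion is implementation-defined, but it gives -2128831035 on two's-complement machines):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t a = 0x811C9DC5;            /* unsigned 32-bit constant: 2166136261 */
    int64_t b = (int32_t) 0x811C9DC5;  /* sign-extended, like Java: -2128831035 */
    printf("%" PRId64 " %" PRId64 "\n", a, b);
    return 0;
}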
This can be corrected in Java by adding an "L" suffix, like this:
long fnv_prime = 0x811C9DC5L;
which forces the Java compiler to interpret the constant as a long, with the same numerical value as what you get with the Objective-C code.
Incidentally, 0x811C9DC5 is not an FNV prime (it is not even prime); it is the 32-bit FNV "offset basis". You will get incorrect hash values if you use this value (and more hash collisions). The correct value for the 32-bit FNV prime is 0x1000193. See http://www.isthe.com/chongo/tech/comp/fnv/index.html
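For reference, a minimal 32-bit FNV-1 sketch in C, using the offset basis (2166136261) and prime (16777619) from the page linked above; FNV-1a is the same loop with the multiply and XOR steps swapped:

#include <stddef.h>
#include <stdint.h>

uint32_t fnv1_32(const unsigned char *data, size_t len)
{
    uint32_t hash = 2166136261u;      /* 0x811C9DC5, the FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        hash *= 16777619u;            /* 0x01000193, the 32-bit FNV prime */
        hash ^= data[i];
    }
    return hash;
}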
It is a difference in sign extension when assigning the 32-bit value 0x811C9DC5 to a 64-bit variable.
Are the characters in Java and Objective-C the same? NSString will give you unichars.