Why does "UInt64(1 << 63)" crash? - swift

println(UInt8(1 << 7)) // OK
println(UInt16(1 << 15)) // OK
println(UInt32(1 << 31)) // OK
println(UInt64(1 << 63)) // Crash
I would like to understand why this happens for UInt64 only. Thanks!
Edit:
To make matters more confusing, the following all work:
println(1 << UInt8(7))
println(1 << UInt16(15))
println(1 << UInt32(31))
println(1 << UInt64(63))
My guess is that an intermediate result produced by computing 1 << 63 is too large.

Try println(UInt64(1) << UInt64(63)).
The type inferrer didn't do its job well here: it decided that 1 << 63 is a UInt32 and used this overload: func <<(lhs: UInt32, rhs: UInt32) -> UInt32
println(1 << UInt64(63)) works because UInt64(63) is a UInt64, so the integer literal 1 is also inferred to be a UInt64; the operation therefore produces a UInt64 and is not out of bounds.
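For illustration, a minimal sketch (using println to match the Swift version in the question): any form that pins the whole expression to UInt64 avoids the overflowing intermediate result.
let a: UInt64 = 1 << 63          // the annotation makes both literals UInt64
let b = UInt64(1) << UInt64(63)  // both operands are explicitly UInt64
println(a)  // 9223372036854775808
println(b)  // 9223372036854775808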

Related

How To Create and Verify Blind RSA Signatures With Crypto++?

I've read through the whitepapers and specifications on blind signatures that I've been able to find, including the Wikipedia entries, but these tend to focus on the mathematical theory behind them.
Is there a concise practical implementation of RSA blind signatures within c++ using the Crypto++ library?
Yes. The Crypto++ wiki has a section on blind signatures for RSA at Raw RSA | RSA Blind Signature. Below is the code taken from the wiki.
Crypto++ lacks blind signature classes. The method below follows the basic algorithm as detailed at Blind Signatures. However, it differs from Wikipedia by applying the s(s'(x)) = x cross-check. The cross-check was present in Chaum's original paper, but it is missing from the wiki article. A second difference from Chaum's paper and the Wikipedia article is that the code below uses H(m) rather than m; that follows Rabin (1979).
As far as we know there is no standard covering the signature scheme. The lack of standardization will surely cause interop problems. For example, the code below uses SHA256 to hash the message to be signed, while RSA Blind Signature Scheme for golang uses full domain hashing. Also see Is there a standard padding/format for RSA Blind Signatures? on Crypto.SE.
You may want to apply a padding function first per Usability of padding scheme in blinded RSA signature? or RSA blind signatures in practice.
#include "cryptlib.h"
#include "integer.h"
#include "nbtheory.h"
#include "osrng.h"
#include "rsa.h"
#include "sha.h"
using namespace CryptoPP;
#include <iostream>
#include <stdexcept>
using std::cout;
using std::endl;
using std::runtime_error;
int main(int argc, char* argv[])
{
    // Bob's artificially small key pair
    AutoSeededRandomPool prng;
    RSA::PrivateKey privKey;
    privKey.GenerateRandomWithKeySize(prng, 64);
    RSA::PublicKey pubKey(privKey);

    // Convenience
    const Integer& n = pubKey.GetModulus();
    const Integer& e = pubKey.GetPublicExponent();
    const Integer& d = privKey.GetPrivateExponent();

    // Print params
    cout << "Pub mod: " << std::hex << pubKey.GetModulus() << endl;
    cout << "Pub exp: " << std::hex << e << endl;
    cout << "Priv mod: " << std::hex << privKey.GetModulus() << endl;
    cout << "Priv exp: " << std::hex << d << endl;

    // For sizing the hashed message buffer. This should be SHA256 size.
    const size_t SIG_SIZE = UnsignedMin(SHA256::BLOCKSIZE, n.ByteCount());

    // Scratch
    SecByteBlock buff1, buff2, buff3;

    // Alice's original message to be signed by Bob
    SecByteBlock orig((const byte*)"secret", 6);
    Integer m(orig.data(), orig.size());
    cout << "Message: " << std::hex << m << endl;

    // Hash message per Rabin (1979)
    buff1.resize(SIG_SIZE);
    SHA256 hash1;
    hash1.CalculateTruncatedDigest(buff1, buff1.size(), orig, orig.size());

    // H(m) as Integer
    Integer hm(buff1.data(), buff1.size());
    cout << "H(m): " << std::hex << hm << endl;

    // Alice's blinding: pick a random r coprime to n
    Integer r;
    do {
        r.Randomize(prng, Integer::One(), n - Integer::One());
    } while (!RelativelyPrime(r, n));

    // Blinding factor b = r^e mod n
    Integer b = a_exp_b_mod_c(r, e, n);
    cout << "Random: " << std::hex << b << endl;

    // Alice's blinded message
    Integer mm = a_times_b_mod_c(hm, b, n);
    cout << "Blind msg: " << std::hex << mm << endl;

    // Bob signs the blinded message
    Integer ss = privKey.CalculateInverse(prng, mm);
    cout << "Blind sign: " << ss << endl;

    // Alice checks s(s'(x)) = x. This is from Chaum's paper
    Integer c = pubKey.ApplyFunction(ss);
    cout << "Check sign: " << c << endl;
    if (c != mm)
        throw runtime_error("Alice cross-check failed");

    // Alice removes the blinding
    Integer s = a_times_b_mod_c(ss, r.InverseMod(n), n);
    cout << "Unblind sign: " << s << endl;

    // Eve verifies
    Integer v = pubKey.ApplyFunction(s);
    cout << "Verify: " << std::hex << v << endl;

    // Convert to a string
    size_t req = v.MinEncodedSize();
    buff2.resize(req);
    v.Encode(&buff2[0], buff2.size());

    // Hash message per Rabin (1979)
    buff3.resize(SIG_SIZE);
    SHA256 hash2;
    hash2.CalculateTruncatedDigest(buff3, buff3.size(), orig, orig.size());

    // Constant time compare
    bool equal = buff2.size() == buff3.size() && VerifyBufsEqual(
        buff2.data(), buff3.data(), buff3.size());

    if (!equal)
        throw runtime_error("Eve verification failed");

    cout << "Verified signature" << endl;
    return 0;
}
Here is the result of building and running the program:
$ g++ blind.cxx ./libcryptopp.a -o blind.exe
$ ./blind.exe
Pub mod: b55dc5e79993680fh
Pub exp: 11h
Priv mod: b55dc5e79993680fh
Priv exp: 1b4fc70ff2e97f1h
Message: 736563726574h
H(m): 2bb80d537b1da3e3h
Random: 72dd6819f0fc5e5fh
Blind msg: 27a2e2e5e6f4fbf
Blind sign: 84e7039495bf0570h
Check sign: 27a2e2e5e6f4fbfh
Unblind sign: 61054203e843f380h
Verify: 2bb80d537b1da3e3h
Verified signature

Bitwise and arithmetic operations in swift

Honestly speaking, porting to Swift 3 (from Obj-C) is going hard. This is the easiest but the most Swift-specific question.
public func readByte() -> UInt8
{
    // ...
}

public func readShortInteger() -> Int16
{
    return (self.readByte() << 8) + self.readByte();
}
I get this error message from the compiler: "Binary operator + cannot be applied to two UInt8 operands."
What is wrong?
ps. What a shame ;)
readByte returns a UInt8, so:
1. You cannot shift a UInt8 left by 8 bits; you would lose all of its bits.
2. The type of the expression is UInt8, which cannot hold the Int16 value it is computing.
3. The type of the expression is UInt8, which is not the annotated return type Int16.
func readShortInteger() -> Int16
{
    let highByte = self.readByte()
    let lowByte = self.readByte()
    return Int16(highByte) << 8 | Int16(lowByte)
}
While Swift has a strict left-to-right evaluation order for operands, I refactored the code to make it explicit which byte is read first and which is read second.
Also, an OR operator is more self-documenting and better conveys the intent here.
Apple has some great Swift documentation on this, here:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html
let shiftBits: UInt8 = 4 // 00000100 in binary
shiftBits << 1 // 00001000
shiftBits << 2 // 00010000
shiftBits << 5 // 10000000
shiftBits << 6 // 00000000
shiftBits >> 2 // 00000001

Why does gtkmm row get_value not work?

With this code:
size = 100;
uint64_t work;
row.get_value(3, work);
cout << "value was " << work << endl;
work += size;
cout << "value set to " << work << endl;
row.set_value(3, work);
row.get_value(3, work);
cout << "value now " << work << endl;
I expect this output:
value was 0
value set to 100
value now 100
but I get:
value was 0
value set to 100
value now 0
The updated value, 100, does display correctly in the tree view widget, I just cannot read it with get_value. What am I doing wrong?
Turns out the problem was the uint64_t: row[3] was defined (in Glade) as a guint, and the work variable must match that type exactly or get_value will not work.
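A minimal sketch of the corrected read-modify-write, assuming column 3 of the model really is a guint (the column index and the surrounding TreeModel setup are taken from the question):
guint work = 0;           // the variable type must match the model column exactly
row.get_value(3, work);   // reads the current value
work += size;
row.set_value(3, work);
row.get_value(3, work);   // now reads back the updated value
cout << "value now " << work << endl;  // prints 100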

Does Rust have a way to convert several bytes to a number? [duplicate]

This question already has answers here:
Converting number primitives (i32, f64, etc) to byte representations
(5 answers)
Closed 6 years ago.
And convert a number to a byte array?
I'd like to avoid using transmute, but it's most important to reach maximum performance.
A u32 being 4 bytes, you may be able to use std::mem::transmute to interpret a [u8; 4] as a u32. However:
beware of alignment
beware of endianness
A no-dependency solution is simply to perform the maths, following in Rob Pike's steps:
fn as_u32_be(array: &[u8; 4]) -> u32 {
    ((array[0] as u32) << 24) +
    ((array[1] as u32) << 16) +
    ((array[2] as u32) <<  8) +
    ((array[3] as u32) <<  0)
}

fn as_u32_le(array: &[u8; 4]) -> u32 {
    ((array[0] as u32) <<  0) +
    ((array[1] as u32) <<  8) +
    ((array[2] as u32) << 16) +
    ((array[3] as u32) << 24)
}
It compiles down to reasonably efficient code.
If dependencies are an option though, using the byteorder crate is just simpler.
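For example, a minimal sketch using byteorder (assuming the crate has been added to Cargo.toml):
use byteorder::{BigEndian, ByteOrder, LittleEndian};

fn main() {
    let bytes = [0x12u8, 0x34, 0x56, 0x78];
    // bytes -> number, in either byte order
    let be = BigEndian::read_u32(&bytes);    // 0x12345678
    let le = LittleEndian::read_u32(&bytes); // 0x78563412
    // number -> bytes
    let mut buf = [0u8; 4];
    BigEndian::write_u32(&mut buf, be);
    assert_eq!(buf, bytes);
    println!("{:x} {:x}", be, le);
}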
There is T::from_str_radix to convert from a string (you can choose the base and T can be any integer type).
To convert an integer to a String you can use format!:
format!("{:x}", 42) == "2a"
format!("{:X}", 42) == "2A"
To reinterpret an integer as bytes, just use the byteorder crate.
Old answer, I don't advise this any more:
If you want to convert between u32 and [u8; 4] (for example) you can use transmute, it’s what it is for.
Note also that Rust has to_be and to_le functions to deal with endianness:
unsafe { std::mem::transmute::<u32, [u8; 4]>(42u32.to_le()) } == [42, 0, 0, 0]
unsafe { std::mem::transmute::<u32, [u8; 4]>(42u32.to_be()) } == [0, 0, 0, 42]
unsafe { std::mem::transmute::<[u8; 4], u32>([0, 0, 0, 42]) }.to_le() == 0x2a000000
unsafe { std::mem::transmute::<[u8; 4], u32>([0, 0, 0, 42]) }.to_be() == 0x0000002a

How to use a struct passed by value in Red/System from a DLL

I have some C code that looks like this:
extern "C" __declspec(dllexport) inline sfVector2f __cdecl sf_vector_create(
float x, float y
) {
std::cout << "x: " << x << " y: " << y << std::endl;
sfVector2f vec = {x,y}; // just a struct of two floats
return vec;
}
extern "C" __declspec(dllexport) inline void __cdecl test(
sfSprite* sprite, sfVector2f wtf
) {
std::cout << wtf.x << " " << wtf.y << std::endl;
sfSprite_setPosition(sprite, wtf);
}
I invoke it from reds like this:
vec: sf-vector-create as float32! 100.0 as float32! 100.0
test mario-sprite vec
When I invoke this in reds, I get garbled results... why?
The C code is returning the vec struct on the stack instead of returning a struct pointer, so in R/S, I guess you get back only the first entry of the struct. R/S does not yet support passing structs by value, but you can retrieve the rest of the values by some clever use of the system/stack/* accessors to get a pointer to the beginning of the struct.
Something like this should work:
sf-vector-create as float32! 100.0 as float32! 100.0
p: as byte-ptr! system/stack/top
vec: as vector! p - size? vector!
(Answer from #DocKimbel)
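As an alternative (not part of the answer above, just a sketch): if you control the C side, you can sidestep the by-value return by exposing an out-parameter wrapper, which Red/System can call with an ordinary struct pointer. The wrapper name sf_vector_create_out is made up for illustration:
extern "C" __declspec(dllexport) void __cdecl sf_vector_create_out(
    float x, float y, sfVector2f* out
) {
    // fill the caller-provided struct instead of returning one by value
    out->x = x;
    out->y = y;
}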