How do I convert a string to hex in Rust?

I want to convert a string of characters (a SHA256 hash) to hex in Rust:
extern crate crypto;
extern crate rustc_serialize;
use rustc_serialize::hex::ToHex;
use crypto::digest::Digest;
use crypto::sha2::Sha256;
fn gen_sha256(hashme: &str) -> String {
    let mut sh = Sha256::new();
    sh.input_str(hashme);
    sh.result_str()
}

fn main() {
    let hash = gen_sha256("example");
    hash.to_hex()
}
The compiler says:
error[E0599]: no method named `to_hex` found for type `std::string::String` in the current scope
--> src/main.rs:18:10
|
18 | hash.to_hex()
| ^^^^^^
I can see this is true; it looks like it's only implemented for [u8].
What am I to do? Is there no method implemented to convert from a string to hex in Rust?
My Cargo.toml dependencies:
[dependencies]
rust-crypto = "0.2.36"
rustc-serialize = "0.3.24"
Edit: I just realized the string is already in hex format from the rust-crypto library. D'oh.

I will go out on a limb here, and suggest that the solution is for hash to be of type Vec<u8>.
The issue is that while you can indeed convert a String to a &[u8] using as_bytes and then use to_hex, you first need to have a valid String object to start with.
While any String object can be converted to a &[u8], the reverse is not true. A String object is solely meant to hold a valid UTF-8 encoded Unicode string: not all byte patterns qualify.
Therefore, it is incorrect for gen_sha256 to produce a String. A more correct return type would be Vec<u8>, which can indeed hold any byte pattern. And from then on, invoking to_hex is easy enough:
hash.as_slice().to_hex()

It appears the source for ToHex has the solution I'm looking for. It contains a test:
#[test]
pub fn test_to_hex() {
    assert_eq!("foobar".as_bytes().to_hex(), "666f6f626172");
}
My revised code is:
let hash = gen_sha256("example");
hash.as_bytes().to_hex()
This appears to work. I will take some time before I accept this solution if anyone has an alternative answer.

A hexadecimal representation can be generated with a function like this:
pub fn hex_push(buf: &mut String, blob: &[u8]) {
    // Map a nibble (0-15) to its ASCII hex digit (uppercase for A-F).
    fn hex_from_digit(num: u8) -> char {
        if num < 10 {
            (b'0' + num) as char
        } else {
            (b'A' + num - 10) as char
        }
    }
    for ch in blob {
        // Emit the high nibble, then the low nibble.
        buf.push(hex_from_digit(ch / 16));
        buf.push(hex_from_digit(ch % 16));
    }
}
This is a tad more efficient than the generic radix formatting implemented currently in the language.
Here's a benchmark:
test bench_specialized_hex_push ... bench: 12 ns/iter (+/- 0) = 250 MB/s
test bench_specialized_fomat ... bench: 42 ns/iter (+/- 12) = 71 MB/s
test bench_specialized_format ... bench: 47 ns/iter (+/- 2) = 63 MB/s
test bench_specialized_hex_string ... bench: 76 ns/iter (+/- 9) = 39 MB/s
test bench_to_hex ... bench: 82 ns/iter (+/- 12) = 36 MB/s
test bench_format ... bench: 97 ns/iter (+/- 8) = 30 MB/s

Thanks to the user jey in the ##rust IRC channel on Freenode. You can just use the hex representation that fmt provides:
>> let mut s = String::new();
>> use std::fmt::Write as FmtWrite; // renaming import to avoid collision
>> for b in "hello world".as_bytes() { write!(s, "{:02x}", b); }
()
>> s
"68656c6c6f20776f726c64"
>>
or a bit silly one,
>> "hello world".as_bytes().iter().map(|x| format!("{:02x}", x)).collect::<String>()
"68656c6c6f20776f726c64"

Using the hex crate, it is very easy:
use hex;
println!("{}", hex::encode(String("some str")));

How do I convert a string to an integer in JavaScript?
The simplest way would be to use the native Number function:
var x = Number("1000")
If that doesn't work for you, then there are the parseInt, unary plus, parseFloat with floor, and Math.round methods.
parseInt()
var x = parseInt("1000", 10); // You want to use radix 10
// so you get a decimal number even with a leading 0 and an old browser (IE8, Firefox 20, Chrome 22 and older)
Unary plus
If your string is already in the form of an integer:
var x = +"1000";
floor()
If your string is or might be a float and you want an integer:
var x = Math.floor("1000.01"); // floor() automatically converts string to number
Or, if you're going to be using Math.floor several times:
var floor = Math.floor;
var x = floor("1000.01");
parseFloat()
If you're the type who forgets to put the radix in when you call parseInt, you can use parseFloat and round it however you like. Here I use floor.
var floor = Math.floor;
var x = floor(parseFloat("1000.01"));
round()
Interestingly, Math.round (like Math.floor) will do a string to number conversion, so if you want the number rounded (or if you have an integer in the string), this is a great way, maybe my favorite:
var round = Math.round;
var x = round("1000"); // Equivalent to round("1000", 0)
Try the parseInt function:
var number = parseInt("10");
But there is a problem. If you try to convert "010" using the parseInt function, it may be detected as an octal number and return the number 8. So, you need to specify a radix (from 2 to 36); in this case, base 10.
parseInt(string, radix)
Example:
var result = parseInt("010", 10) == 10; // Returns true
var result = parseInt("010") == 10; // Returns false
Note that parseInt ignores bad data after parsing anything valid.
This GUID will parse as 51:
var result = parseInt('51e3daf6-b521-446a-9f5b-a1bb4d8bac36', 10) == 51; // Returns true
There are two main ways to convert a string to a number in JavaScript. One way is to parse it and the other way is to change its type to a Number. All of the tricks in the other answers (e.g., unary plus) involve implicitly coercing the type of the string to a number. You can also do the same thing explicitly with the Number function.
Parsing
var parsed = parseInt("97", 10);
parseInt and parseFloat are the two functions used for parsing strings to numbers. Parsing stops silently if it hits a character it doesn't recognise, which can be useful for parsing strings like "92px", but it's also somewhat dangerous, since it won't give you any kind of error on bad input; instead you'll get back NaN unless the string starts with a number. Whitespace at the beginning of the string is ignored. Here's an example of it doing something different to what you want, and giving no indication that anything went wrong:
var widgetsSold = parseInt("97,800", 10); // widgetsSold is now 97
It's good practice to always specify the radix as the second argument. In older browsers, if the string started with a 0, it would be interpreted as octal if the radix wasn't specified which took a lot of people by surprise. The behaviour for hexadecimal is triggered by having the string start with 0x if no radix is specified, e.g., 0xff. The standard actually changed with ECMAScript 5, so modern browsers no longer trigger octal when there's a leading 0 if no radix has been specified. parseInt understands radixes up to base 36, in which case both upper and lower case letters are treated as equivalent.
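A few concrete cases (easy to verify in a console):
parseInt("0xff")   // 255 (hexadecimal is auto-detected from the 0x prefix)
parseInt("ff", 16) // 255
parseInt("08", 10) // 8 (an explicit radix avoids the legacy octal trap)
parseInt("z", 36)  // 35 (same as parseInt("Z", 36) -- case-insensitive)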
Changing the Type of a String to a Number
All of the other tricks mentioned above that don't use parseInt involve implicitly coercing the string into a number. I prefer to do this explicitly,
var cast = Number("97");
This has different behavior to the parse methods (although it still ignores whitespace). It's more strict: if it doesn't understand the whole of the string, then it returns NaN, so you can't use it for strings like "97px". Since you want a primitive number rather than a Number wrapper object, make sure you don't put new in front of the Number function.
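For instance:
Number("97px")   // NaN -- the whole string must be numeric
Number("  97  ") // 97 -- leading/trailing whitespace is still ignored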
Obviously, converting to a Number gives you a value that might be a float rather than an integer, so if you want an integer, you need to modify it. There are a few ways of doing this:
var rounded = Math.floor(Number("97.654")); // other options are Math.ceil, Math.round
var fixed = Number("97.654").toFixed(0); // rounded rather than truncated
var bitwised = Number("97.654")|0; // do not use for large numbers
Any bitwise operator (here I've done a bitwise or, but you could also do double negation as in an earlier answer or a bit shift) will convert the value to a 32-bit integer, and most of them will convert to a signed integer. Note that this will not do what you want for large integers. If the integer cannot be represented in 32 bits, it will wrap.
~~"3000000000.654" === -1294967296
// This is the same as
Number("3000000000.654")|0
"3000000000.654" >>> 0 === 3000000000 // unsigned right shift gives you an extra bit
"300000000000.654" >>> 0 === 3647256576 // but still fails with larger numbers
To work correctly with larger numbers, you should use the rounding methods
Math.floor("3000000000.654") === 3000000000
// This is the same as
Math.floor(Number("3000000000.654"))
Bear in mind that coercion understands exponential notation and Infinity, so 2e2 is 200 rather than NaN, while the parse methods don't.
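For example:
Number("2e2")            // 200
parseInt("2e2", 10)      // 2 (parsing stops at the "e")
Number("Infinity")       // Infinity
parseInt("Infinity", 10) // NaN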
Custom
It's unlikely that either of these methods do exactly what you want. For example, usually I would want an error thrown if parsing fails, and I don't need support for Infinity, exponentials or leading whitespace. Depending on your use case, sometimes it makes sense to write a custom conversion function.
Always check that the output of Number or one of the parse methods is the sort of number you expect. You will almost certainly want to use isNaN to make sure the number is not NaN (usually the only way you find out that the parse failed).
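For example, here is a minimal sketch of such a custom function (strictParseInt is just an illustrative name): it accepts only a plain base-10 integer and throws instead of silently returning a partial result:
// Sketch: accept an optional sign followed by digits -- no whitespace,
// no decimal point, no exponent.
function strictParseInt(value) {
    if (!/^[-+]?\d+$/.test(value)) {
        throw new Error("Not a valid integer: " + value);
    }
    return Number(value);
}

strictParseInt("97");   // 97
strictParseInt("97px"); // throws an Error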
parseInt() and + are different
parseInt("10.3456") // returns 10
+"10.3456" // returns 10.3456
Fastest
var x = "1000"*1;
Test
Here is a little comparison of speed (macOS only)... :)
For Chrome, 'plus' and 'mul' are fastest (>700,000,000 op/sec) and 'Math.floor' is slowest. For Firefox, 'plus' is slowest (!) and 'mul' is fastest (>900,000,000 op/sec). In Safari, 'parseInt' is fastest and 'number' is slowest (but the results are quite similar, >13,000,000 <31,000,000). So Safari is more than 10x slower than the other browsers at casting a string to an int. So the winner is 'mul' :)
You can run it on your browser by this link
https://jsperf.com/js-cast-str-to-number/1
I also tested var x = ~~"1000";. On Chrome and Safari, it is a little bit slower than var x = "1000"*1 (<1%), and on Firefox it is a little bit faster (<1%).
I use this way of converting string to number:
var str = "25"; // String
var number = str*1; // Number
So, when multiplying by 1, the value does not change, but JavaScript automatically returns a number.
But as it is shown below, this should be used if you are sure that the str is a number (or can be represented as a number), otherwise it will return NaN - not a number.
You can create a simple function to use, e.g.,
function toNumber(str) {
    return str * 1;
}
Try parseInt.
var number = parseInt("10", 10); //number will have value of 10.
I love this trick:
~~"2.123"; //2
~~"5"; //5
The double bitwise negation drops off anything after the decimal point AND converts the value to number format. I've been told it's slightly faster than calling functions and whatnot, but I'm not entirely convinced.
Another method I just saw here (a question about the JavaScript >>> operator, which is a zero-fill right shift) shows that shifting a number by 0 with this operator converts the number to a uint32, which is nice if you also want it unsigned. Again, this converts to an unsigned integer, which can lead to strange behavior if you use a signed number.
"-2.123" >>> 0; // 4294967294
"2.123" >>> 0; // 2
"-5" >>> 0; // 4294967291
"5" >>> 0; // 5
In JavaScript, you can do the following:
ParseInt
parseInt("10.5") // Returns 10
Multiplying with 1
var s = "10";
s = s*1; // Returns 10
Using the unary operator (+)
var s = "10";
s = +s; // Returns 10
Using a bitwise operator
(Note: It starts to break after 2140000000. Example: ~~"2150000000" = -2144967296)
var s = "10.5";
s = ~~s; // Returns 10
Using Math.floor() or Math.ceil()
var s = "10";
s = Math.floor(s) || Math.ceil(s); // Returns 10
Please see the below example. It will help answer your question.
Example              Result
parseInt("4")        4
parseInt("5aaa")     5
parseInt("4.33333")  4
parseInt("aaa")      NaN (means "Not a Number")
The parseInt function returns only the integer portion it finds; it does not return the rest of the string.
Beware if you use parseInt to convert a float in scientific notation!
For example:
parseInt("5.6e-14")
will result in
5
instead of
0
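By contrast, parseFloat understands the exponent:
parseFloat("5.6e-14") // 5.6e-14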
Also as a side note: MooTools has the function toInt(), which can be used on any native string (or float (or integer)).
"2".toInt() // 2
"2px".toInt() // 2
(2).toInt() // 2
We can use +(stringOfNumber) instead of using parseInt(stringOfNumber).
Example: +("21") returns int of 21, like the parseInt("21").
We can use this unary "+" operator for parsing float too...
To convert a String into an Integer, I recommend using parseFloat and not parseInt. Here's why:
Using parseFloat:
parseFloat('2.34cms') //Output: 2.34
parseFloat('12.5') //Output: 12.5
parseFloat('012.3') //Output: 12.3
Using parseInt:
parseInt('2.34cms') //Output: 2
parseInt('12.5') //Output: 12
parseInt('012.3') //Output: 12
As you may have noticed, parseInt discards the values after the decimal point, whereas parseFloat lets you work with floating-point numbers and is hence more suitable if you want to retain the values after the decimal point. Use parseInt if and only if you are sure that you want the integer value.
There are many ways in JavaScript to convert a string to a number value... All are simple and handy. Choose the one that works for you:
var num = Number("999.5"); //999.5
var num = parseInt("999.5", 10); //999
var num = parseFloat("999.5"); //999.5
var num = +"999.5"; //999.5
Also, any Math operation converts them to number, for example...
var num = "999.5" / 1; //999.5
var num = "999.5" * 1; //999.5
var num = "999.5" - 1 + 1; //999.5
var num = "999.5" - 0; //999.5
var num = Math.floor("999.5"); //999
var num = ~~"999.5"; //999
My preferred way is using the + sign, which is the elegant way to convert a string to a number in JavaScript.
Try str - 0 to convert string to number.
> str = '0'
> str - 0
0
> str = '123'
> str - 0
123
> str = '-12'
> str - 0
-12
> str = 'asdf'
> str - 0
NaN
> str = '12.34'
> str - 0
12.34
Here are two links to compare the performance of several ways to convert string to int
https://jsperf.com/number-vs-parseint-vs-plus
http://phrogz.net/js/string_to_number.html
Here is the easiest solution
let myNumber = "123" | 0;
An even easier solution
let myNumber = +"123";
In my opinion, no answer covers all edge cases, as parsing a float should result in an error.
function parseInteger(value) {
    if (value === '') return NaN;
    const number = Number(value);
    return Number.isInteger(number) ? number : NaN;
}
parseInteger("4") // 4
parseInteger("5aaa") // NaN
parseInteger("4.33333") // NaN
parseInteger("aaa"); // NaN
The easiest way would be to use + like this
const strTen = "10"
const numTen = +strTen // string to number conversion
console.log(typeof strTen) // string
console.log(typeof numTen) // number
I actually needed to "save" a string as an integer, for a binding between C and JavaScript, so I convert the string into an integer value:
/*
Examples:
    int2str( str2int("test") ) == "test" // true
    int2str( str2int("t€st") ) // "t¬st", because "€".charCodeAt(0) is 8364, which will be AND'ed with 0xff
Limitations:
    maximum 4 characters, so it fits into an integer
*/
function str2int(the_str) {
    var ret = 0;
    var len = the_str.length;
    if (len >= 1) ret += (the_str.charCodeAt(0) & 0xff) << 0;
    if (len >= 2) ret += (the_str.charCodeAt(1) & 0xff) << 8;
    if (len >= 3) ret += (the_str.charCodeAt(2) & 0xff) << 16;
    if (len >= 4) ret += (the_str.charCodeAt(3) & 0xff) << 24;
    return ret;
}

function int2str(the_int) {
    var tmp = [
        (the_int & 0x000000ff) >> 0,
        (the_int & 0x0000ff00) >> 8,
        (the_int & 0x00ff0000) >> 16,
        (the_int & 0xff000000) >>> 24 // >>> avoids a negative result when the top byte is >= 0x80
    ];
    var ret = "";
    for (var i = 0; i < 4; i++) {
        if (tmp[i] == 0)
            break;
        ret += String.fromCharCode(tmp[i]);
    }
    return ret;
}
String to Number in JavaScript:
Unary + (most recommended)
+numStr is easy to use and has better performance compared with the others
Supports both integers and decimals
console.log(+'123.45') // => 123.45
Some other options:
Parsing Strings:
parseInt(numStr) for integers
parseFloat(numStr) for both integers and decimals
console.log(parseInt('123.456')) // => 123
console.log(parseFloat('123')) // => 123
JavaScript Functions
Math functions like round(numStr), floor(numStr), ceil(numStr) for integers
Number(numStr) for both integers and decimals
console.log(Math.floor('123')) // => 123
console.log(Math.round('123.456')) // => 123
console.log(Math.ceil('123.454')) // => 124
console.log(Number('123.123')) // => 123.123
Arithmetic Operators
The unary plus +numStr, and the binary forms numStr-0, 1*numStr, numStr*1, and numStr/1
All support both integers and decimals
Be cautious about numStr+0. It returns a string.
console.log(+'123') // => 123
console.log('002'-0) // => 2
console.log(1*'5') // => 5
console.log('7.7'*1) // => 7.7
console.log(3.3/1) // =>3.3
console.log('123.123'+0, typeof ('123.123' + 0)) // => 123.1230 string
Bitwise Operators
Double tilde ~~numStr, or left shift by 0: numStr<<0
Supports only integers, not decimals
console.log(~~'123') // => 123
console.log('0123'<<0) // => 123
console.log(~~'123.123') // => 123
console.log('123.123'<<0) // => 123
function parseIntSmarter(str) {
    // parseInt is bad because it returns 22 for "22thisendsintext".
    // Number() returns NaN if the string ends in non-numbers, but it returns 0 for empty or whitespace strings.
    return isNaN(Number(str)) ? NaN : parseInt(str, 10);
}
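A few illustrative calls:
parseIntSmarter("22")               // 22
parseIntSmarter("22thisendsintext") // NaN
parseIntSmarter("22.9")             // 22
parseIntSmarter("")                 // NaN (parseInt("", 10) is NaN)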
You can use the unary plus.
For example:
var personAge = '24';
var personAge1 = (+personAge)
Then you can check the new variable's type with typeof personAge1, which is number.
Summing the multiplication of digits with their respective power of ten:
i.e.: 123 = 100 + 20 + 3 = 1*100 + 2*10 + 3*1 = 1*(10^2) + 2*(10^1) + 3*(10^0)
function atoi(array) {
    // Use exp as (length - i); the other option would be
    // to reverse the array.
    // Multiply a[i] * 10^(exp) and sum.
    let sum = 0;
    for (let i = 0; i < array.length; i++) {
        let exp = array.length - (i + 1);
        let value = array[i] * Math.pow(10, exp);
        sum += value;
    }
    return sum;
}
The safest way to ensure you get a valid integer:
let integer = (parseInt(value, 10) || 0);
Examples:
// Example 1 - Invalid value:
let value = null;
let integer = (parseInt(value, 10) || 0);
// => integer = 0
// Example 2 - Valid value:
let value = "1230.42";
let integer = (parseInt(value, 10) || 0);
// => integer = 1230
// Example 3 - Invalid value:
let value = () => { return 412 };
let integer = (parseInt(value, 10) || 0);
// => integer = 0
Another option is to double XOR the value with itself:
var i = 12.34;
console.log('i = ' + i);
console.log('i ⊕ i ⊕ i = ' + (i ^ i ^ i));
This will output:
i = 12.34
i ⊕ i ⊕ i = 12
I only added one plus (+) before the string, and that was the solution!
+"052254" // 52254
Number()
Number(" 200.12 ") // Returns 200.12
Number("200.12") // Returns 200.12
Number("200") // Returns 200
parseInt()
parseInt(" 200.12 ") // Returns 200
parseInt("200.12") // Returns 200
parseInt("200") // Returns 200
parseInt("Text information") // Returns NaN
parseFloat()
It will return the first number:
parseFloat("200 400") // Returns 200
parseFloat("200") // Returns 200
parseFloat("Text information") // Returns NaN
parseFloat("200.10") // Returns 200.1
Math.floor()
Rounds a number down to the nearest integer:
Math.floor(" 200.12 ") // Returns 200
Math.floor("200.12") // Returns 200
Math.floor("200") // Returns 200
function doSth() {
    var a = document.getElementById('input').value;
    document.getElementById('number').innerHTML = toNumber(a) + 1;
}
function toNumber(str) {
    return +str;
}
<input id="input" type="text">
<input onclick="doSth()" type="submit">
<span id="number"></span>
This (probably) isn't the best solution for parsing an integer, but if you need to "extract" one, for example:
"1a2b3c" === 123
"198some text2hello world!30" === 198230
// ...
this would work (only for integers):
var str = '3a9b0c3d2e9f8g'

function extractInteger(str) {
    var result = 0;
    var factor = 1;
    for (var i = str.length; i > 0; i--) {
        if (!isNaN(str[i - 1])) {
            result += parseInt(str[i - 1]) * factor;
            factor *= 10;
        }
    }
    return result;
}

console.log(extractInteger(str))
Of course, this would also work for parsing an integer, but would be slower than other methods.
You could also parse integers with this method and return NaN if the string isn't a number, but I don't see why you'd want to, since this relies on parseInt internally and parseInt is probably faster.
var str = '3a9b0c3d2e9f8g'

function extractInteger(str) {
    var result = 0;
    var factor = 1;
    for (var i = str.length; i > 0; i--) {
        if (isNaN(str[i - 1])) return NaN;
        result += parseInt(str[i - 1]) * factor;
        factor *= 10;
    }
    return result;
}

console.log(extractInteger(str))

Swift sign extension with variable number of bits

I need to sign-extend an 8-bit value to 12 bits. In C, I can do it this way. I read Apple's BinaryInteger protocol documentation, but it didn't explain sign-extending to a variable number of bits (and I'm also pretty new at Swift). How can I do this in Swift, assuming val is UInt8 and numbits is 12?
#define MASKBITS(numbits) ((1 << numbits) - 1)
#define SIGNEXTEND_TO_16(val, numbits) \
    ( \
        (int16_t)((val & MASKBITS(numbits)) | ( \
            (val & (1 << (numbits-1))) ? ~MASKBITS(numbits) : 0) \
    ))
You can use Int8(bitPattern:) to convert the given unsigned value to a signed value with the same binary representation, then sign-extend by converting to Int16, make it unsigned again, and finally truncate to the given number of bits:
func signExtend(val: UInt8, numBits: Int) -> UInt16 {
    // Sign extend to unsigned 16-bit:
    var extended = UInt16(bitPattern: Int16(Int8(bitPattern: val)))
    // Truncate to given number of bits:
    if numBits < 16 {
        extended = extended & ((1 << numBits) - 1)
    }
    return extended
}
Example:
for i in 1...16 {
    let x = signExtend(val: 200, numBits: i)
    print(String(format: "%2d %04X", i, x))
}
Output:
1 0000
2 0000
3 0000
4 0008
5 0008
6 0008
7 0048
8 00C8
9 01C8
10 03C8
11 07C8
12 0FC8
13 1FC8
14 3FC8
15 7FC8
16 FFC8
I had the same question in the context of bitstream parsing. I needed code to parse n-bit two's complement values into Int32. Here is my solution, which works without any conditionals:
extension UInt32 {
    func signExtension(n: Int) -> Int32 {
        let signed = Int32(bitPattern: self << (32 - n))
        let result = signed >> (32 - n)
        return result
    }
}
And a unit test function showing how to use that code:
func testSignExtension_N_2_3() {
    let unsignedValue: UInt32 = 0b110
    let signedValue: Int32 = unsignedValue.signExtension(n: 3)
    XCTAssertEqual(signedValue, -2)
}

Apple Watch crashes on bit operations with error code Thread1: exc_breakpoint(code=exc_arm_breakpoint,subcode=0xe7ffdefe)

I am hashing user data on an Apple Watch using SHA1 and when running the SHA1Bytes function, I get the following error:
Thread1: exc_breakpoint(code=exc_arm_breakpoint,subcode=0xe7ffdefe).
This specific line gives me the error:
j = ( UInt32((msg[i]<<24) | (msg[i+1]<<16) | (msg[i+2]<<8) | msg[i+3]) )
This is the piece of code from which the above line is extracted:
class func SHA1Bytes(msg: [Int]) -> String {
    func rotateLeft(number: UInt32, rotateBy: UInt32) -> UInt32 {
        return ((number << rotateBy) | (number >> (32 - rotateBy)))
    }
    func cvt_hex(value: UInt32) -> String {
        var str = ""
        for i: UInt32 in stride(from: 7, through: 0, by: -1) {
            let v: UInt32 = (value >> (i*4)) & 0x0f
            str += String(v, radix: 16, uppercase: false)
        }
        return str
    }
    var W = [UInt32](repeating: 0, count: 80)
    var H0 = UInt32("67452301", radix: 16)!
    var H1 = UInt32("EFCDAB89", radix: 16)!
    var H2 = UInt32("98BADCFE", radix: 16)!
    var H3 = UInt32("10325476", radix: 16)!
    var H4 = UInt32("C3D2E1F0", radix: 16)!
    var wordArray = [UInt32]()
    for k in stride(from: 0, to: msg.count-3, by: 4) {
        let j = ( UInt32((msg[k]<<24) | (msg[k+1]<<16) | (msg[k+2]<<8) | msg[k+3]) )
        wordArray.append(j)
    }
    ...
    return encoded.uppercased()
}
The exact same code runs perfectly in an iOS playground, but crashes when running on a first-generation Apple Watch. I have checked that the input array exists, that I am accessing existing elements of it, and that the result of j should not overflow.
The code fails with the following variable values:
j = (UInt32) 2308511418, k = (Int) 48, msg = ([Int]) 56 values
and these are the values of msg:
[47] Int 217
[48] Int 137
[49] Int 153
[50] Int 22
[51] Int 186
[52] Int 163
[53] Int 41
[54] Int 208
[55] Int 104
The first generation Apple Watch is a 32-bit device and has different overflow limits compared to a 64-bit device.
On a 32-bit device, the Int type is 32-bit and you risk shifting into the sign bit.
Try the following in a playground:
UInt32(Int64(153) << 24) // Equivalent to your code
UInt32(Int32(153) << 24) // Simulates a 32-bit device and crashes.
UInt32(153) << 24 // A possible solution
I managed to figure out that the simulator does not crash even if a UInt32 overflows; however, the 32-bit Apple Watch does crash in this case.
The solution was to use an overflow operator, which only exists for addition, subtraction and multiplication. Hence, I changed the bitwise left shift to a multiplication by 2^(number of bits to shift), converting each byte to UInt32 first so the multiplication cannot trap:
let j = (UInt32(msg[k]) &* (1 << 24)) | (UInt32(msg[k+1]) &* (1 << 16)) | (UInt32(msg[k+2]) &* (1 << 8)) | UInt32(msg[k+3])

Bitwise and arithmetic operations in swift

Honestly speaking, porting to Swift 3 (from Objective-C) is going hard. Here is the easiest but the swiftiest question.
public func readByte() -> UInt8
{
    // ...
}

public func readShortInteger() -> Int16
{
    return (self.readByte() << 8) + self.readByte();
}
I am getting this error message from the compiler: "Binary operator + cannot be applied to two UInt8 operands."
What is wrong?
ps. What a shame ;)
readByte returns a UInt8, so:
You cannot shift a UInt8 left by 8 bits; you'll lose all its bits.
The type of the expression is UInt8, which cannot fit the Int16 value it is computing.
The type of the expression is UInt8, which is not the annotated return type Int16.
func readShortInteger() -> Int16
{
    let highByte = self.readByte()
    let lowByte = self.readByte()
    return Int16(highByte) << 8 | Int16(lowByte)
}
While Swift has a strict left-to-right evaluation order for operands, I refactored the code to make it explicit which byte is read first and which is read second.
Also, an OR operator is more self-documenting and semantic here.
Apple has some great Swift documentation on this, here:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html
let shiftBits: UInt8 = 4 // 00000100 in binary
shiftBits << 1 // 00001000
shiftBits << 2 // 00010000
shiftBits << 5 // 10000000
shiftBits << 6 // 00000000
shiftBits >> 2 // 00000001

Convert half precision float (bytes) to float in Swift

I would like to be able to read half-precision floats from a binary file and convert them to a Float in Swift. I've looked at several conversions from other languages such as Java and C#, but I have not been able to get the correct value corresponding to the half-float. If anyone could help me with an implementation I would appreciate it; a conversion from Float to half-float would also be extremely helpful. Here's an implementation I attempted to convert from this Java implementation.
static func toFloat(value: UInt16) -> Float {
    let value = Int32(value)
    var mantissa = Int32(value) & 0x03ff
    var exp: Int32 = Int32(value) & 0x7c00
    if exp == 0x7c00 {
        exp = 0x3fc00
    } else if exp != 0 {
        exp += 0x1c000
        if mantissa == 0 && exp > 0x1c400 {
            return Float((value & 0x8000) << 16 | exp << 13 | 0x3ff)
        }
    } else if mantissa != 0 {
        exp = 0x1c400
        repeat {
            mantissa << 1
            exp -= 0x400
        } while ((mantissa & 0x400) == 0)
        mantissa &= 0x3ff
    }
    return Float((value & 0x80000) << 16 | (exp | mantissa) << 13)
}
If you have an array of half-precision data, you can convert all of it to float at once using vImageConvert_Planar16FtoPlanarF, which is provided by Accelerate.framework:
import Accelerate
let n = 2
var input: [UInt16] = [ 0x3c00, 0xbc00 ]
var output = [Float](count: n, repeatedValue: 0)
var src = vImage_Buffer(data:&input, height:1, width:UInt(n), rowBytes:2*n)
var dst = vImage_Buffer(data:&output, height:1, width:UInt(n), rowBytes:4*n)
vImageConvert_Planar16FtoPlanarF(&src, &dst, 0)
// output now contains [1.0, -1.0]
You can also use this method to convert individual values, but it's fairly heavyweight if that's all that you're doing; on the other hand it's extremely efficient if you have large buffers of values to convert.
If you need to convert isolated values, you might put something like the following C function in your bridging header and use it from Swift:
#include <stdint.h>
static inline float loadFromF16(const uint16_t *pointer) { return *(const __fp16 *)pointer; }
This will use hardware conversion instructions when you're compiling for targets that have them (armv7s, arm64, x86_64h), and call a reasonably good software conversion routine when compiling for targets that don't have hardware support.
Addendum: going the other way
You can convert float to half-precision in pretty much the same way:
static inline void storeAsF16(float value, uint16_t *pointer) { *(__fp16 *)pointer = value; }
Or use the function vImageConvert_PlanarFtoPlanar16F.