"Implicit Conversion Loses Integer Precision" error - iphone

I have this code and get the following error: Implicit Conversion Loses Integer Precision
size_t BitArray::wordsForBits(size_t bits) {
    int arraySize = (bits + bitsPerWord_ - 1) >> logBits_;
    return arraySize;
}
How can I resolve this?

Related

How to convert a binary number into a decimal fraction in dart?

Hi, I have been wondering if there is a way to convert binary numbers into decimal fractions.
I know how to change base, for example with this code:
String binary = "11110010";
//I'd like to change this line so it produces a decimal value
String denary = int.parse(binary, radix: 2).toRadixString(10);
If anyone is still wondering how to convert decimal to binary and the inverse:
print(55.toRadixString(2)); // Outputs 110111
print(int.parse("110111", radix: 2)); // Outputs 55
#include <iostream>
using namespace std;

// Converts a binary number written out in decimal digits (e.g. 10101001)
// to its decimal value, peeling off one binary digit at a time.
int binaryToDecimal(int n)
{
    int dec_value = 0;
    int base = 1; // base value, i.e. 2^0
    int temp = n;
    while (temp != 0) {
        int last_digit = temp % 10;
        temp = temp / 10;
        dec_value += last_digit * base;
        base = base * 2;
    }
    return dec_value;
}

int main()
{
    int num = 10101001;
    cout << binaryToDecimal(num) << endl; // prints 169
}
This is my C++ solution, but you can implement the same approach in any language.
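Neither snippet above handles the fractional part the question actually asks about (a binary string with digits after the point). A rough sketch of the idea, shown here in Swift purely as an illustration with a made-up helper name: split the string at the point, parse the integer part with radix 2, and weight the i-th fractional bit by 2^-(i+1).

import Foundation

// Hypothetical helper: "1111.0010" (binary) -> 15.125 (decimal)
func binaryFractionToDecimal(_ s: String) -> Double {
    let parts = s.split(separator: ".")
    // Integer part: ordinary radix-2 parse
    var result = Double(Int(parts.first ?? "0", radix: 2) ?? 0)
    // Fractional part: the i-th bit after the point is worth 2^-(i+1)
    if parts.count > 1 {
        for (i, bit) in parts[1].enumerated() where bit == "1" {
            result += pow(2.0, -Double(i + 1))
        }
    }
    return result
}

print(binaryFractionToDecimal("1111.0010")) // 15.125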

How to convert a fractional decimal to binary fraction in Swift 3(4)?

I am looking for a way to convert a fractional decimal to a binary fraction in Swift 3. Unfortunately, Xcode shows these error messages:
"Cannot assign value of type 'Double' to type 'Int'" // line 8
"Binary operator '*' cannot be applied to operands of type 'Double' // line 9
and
'Int'" and "Cannot assign value of type 'Int' to type 'Double'" // line 11
Code itself:
var fraDecimal: Double, fraBinary: Double, bFractional: Double = 0.0, dFractional: Double = 0.6817, fraFactor: Double = 0.1
var dIntegral: Int, bIntegral: Int = 0
var intFactor: Int = 1, remainder: Int, temp: Int, i: Int
var signs: Int = 6
while (signs > 0 && dFractional > 0){
    dFractional = dFractional * 2
    temp = dFractional
    bFractional = bFractional + fraFactor * temp
    if(temp == 1) {
        dFractional = Int(dFractional) - temp
    }
    fraFactor=fraFactor/10
    signs-=1
}
Maybe there's some kind of workaround?
Thank you for your attention to my request
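A minimal sketch of one possible fix (assuming the goal is the first six binary digits of 0.6817, packed into a Double that reads like the binary expansion): make every mix of Double and Int explicit, which is what all three errors are complaining about.

var dFractional = 0.6817   // decimal fraction to convert
var bFractional = 0.0      // accumulates the binary digits, written in decimal positions
var fraFactor = 0.1
var signs = 6              // number of binary digits wanted

while signs > 0 && dFractional > 0 {
    dFractional *= 2
    let bit = Int(dFractional)              // explicit Double -> Int (0 or 1)
    bFractional += fraFactor * Double(bit)  // explicit Int -> Double
    if bit == 1 {
        dFractional -= Double(bit)          // drop the integer part again
    }
    fraFactor /= 10
    signs -= 1
}
print(bFractional) // roughly 0.101011, the first six binary digits of 0.6817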

Unsigned right shift operator '>>>' in Swift

How would you implement the equivalent to Java's unsigned right shift operator in Swift?
According to Java's documentation, the unsigned right shift operator ">>>" shifts a zero into the leftmost position, while the leftmost position after ">>" depends on sign extension.
So, for instance,
long s1 = (-7L >>> 16); // result is 281474976710655L
long s2 = (-7L >> 16); // result is -1
In order to implement this in Swift, I would take all the bits except the sign bit by doing something like,
let lsb = Int64.max + negativeNumber + 1
Notice that the number has to be negative! If you overflow the shift operator, the app crashes with EXC_BAD_INSTRUCTION, which is not very nice...
Also, I'm using Int64 on purpose: because there's no bigger datatype, something like (1 << 63) would overflow the Int64 and also crash. So instead of computing ((1 << 63) - 1 + negativeNumber) in a bigger datatype, I wrote it as Int64.max + negativeNumber + 1.
Then, shift that positive number with the normal shift, and OR the sign bit back into the first position to the right of the sign.
let shifted = (lsb >> bits) | 0x4000000000000000
However, that doesn't give me the expected result,
((Int64.max - 7 + 1) >> 16) | 0x4000000000000000 // = 4611826755915743231
Not sure what I'm doing wrong...
Also, would it be possible to name this operator '>>>' and extend Int64?
Edit:
Adding here the solution from OOper below,
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
I was implementing the Java Random class in Swift, which also involves truncating 64-bit ints into 32-bit. Thanks to OOper I just realized I can use the truncatingBitPattern initializer to avoid overflow exceptions. The function 'next' as described here becomes this in Swift,
var seed: Int64 = 0
private func next(_ bits: Int32) -> Int32 {
    seed = (seed &* 0x5DEECE66D &+ 0xB) & ((1 << 48) - 1)
    let shifted: Int64 = seed >>> (48 - Int64(bits))
    return Int32(truncatingBitPattern: shifted)
}
One sure way to do it is to use the unsigned shift operation of an unsigned integer type:
infix operator >>> : BitwiseShiftPrecedence
func >>> (lhs: Int64, rhs: Int64) -> Int64 {
    return Int64(bitPattern: UInt64(bitPattern: lhs) >> UInt64(rhs))
}
print(-7 >>> 16) //->281474976710655
(Using -7 for testing with bit count 16 does not seem to be a good example; it loses all significant bits with a 16-bit right shift.)
If you want to do it in your way, the bitwise-ORed missing sign bit cannot be the constant 0x4000000000000000. It needs to be 0x8000_0000_0000_0000 (this constant overflows Swift's Int64) when the bit count is 0, and it needs to be logically shifted by the same number of bits.
So, you need to write something like this:
infix operator >>>> : BitwiseShiftPrecedence
func >>>> (lhs: Int64, rhs: Int64) -> Int64 {
    if lhs >= 0 {
        return lhs >> rhs
    } else {
        return (Int64.max + lhs + 1) >> rhs | (1 << (63 - rhs))
    }
}
print(-7 >>>> 16) //->281474976710655
It seems far easier to work with unsigned integer types when you need an unsigned shift operation.
Swift has unsigned integer types, so there is no need for a separate unsigned right shift operator. That's a choice in Java, which followed from the decision not to have unsigned types.
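As a small illustration of that point (just a sketch, not from the answers above): if you reinterpret the value as UInt64 first, the ordinary >> is already a logical, zero-filling shift.

let u = UInt64(bitPattern: -7)        // reinterpret the Int64 bit pattern as unsigned
let shifted = u >> 16                 // plain >> on UInt64 shifts zeros in from the left
print(shifted)                        // 281474976710655
print(Int64(bitPattern: shifted))     // same value, converted back to a signed type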

What is the 0xFFFFFFFF doing in this example?

I understand that arc4random returns an unsigned integer up to (2^32)-1. In this scenario it always gives a number between 0 and 1.
var x:UInt32 = (arc4random() / 0xFFFFFFFF)
How does the division by 0xFFFFFFFF cause the number to be between 0 - 1?
As you've stated,
arc4random returns an unsigned integer up to (2^32)-1
0xFFFFFFFF is equal to (2^32)-1, which is the largest value arc4random() can return, so the ratio (arc4random() / 0xFFFFFFFF) can never exceed 1. But note that this is integer division between two UInt32 values: the result is 0 for every return value except 0xFFFFFFFF itself, which gives exactly 1.
To get a value that actually varies between 0 and 1, the result must be a floating-point value:
import Foundation
(1..<10).forEach { _ in
    let x: Double = (Double(arc4random()) / 0xFFFFFFFF)
    print(x)
}
/*
0.909680047749933
0.539794033984606
0.049406117305487
0.644912529188421
0.00758233550181201
0.0036165844657497
0.504160538898818
0.879743074271768
0.980051155663107
*/

Convert half precision float (bytes) to float in Swift

I would like to be able to read in half floats from a binary file and convert them to a float in Swift. I've looked at several conversions from other languages such as Java and C#; however, I have not been able to get the correct value corresponding to the half float. If anyone could help me with an implementation I would appreciate it. A conversion from Float to Half Float would also be extremely helpful. Here's an implementation I attempted to convert from this Java implementation.
static func toFloat(value: UInt16) -> Float {
    let value = Int32(value)
    var mantissa = Int32(value) & 0x03ff
    var exp: Int32 = Int32(value) & 0x7c00
    if(exp == 0x7c00) {
        exp = 0x3fc00
    } else if exp != 0 {
        exp += 0x1c000
        if(mantissa == 0 && exp > 0x1c400) {
            return Float((value & 0x8000) << 16 | exp << 13 | 0x3ff)
        }
    } else if mantissa != 0 {
        exp = 0x1c400
        repeat {
            mantissa << 1
            exp -= 0x400
        } while ((mantissa & 0x400) == 0)
        mantissa &= 0x3ff
    }
    return Float((value & 0x80000) << 16 | (exp | mantissa) << 13)
}
If you have an array of half-precision data, you can convert all of it to float at once using vImageConvert_Planar16FtoPlanarF, which is provided by Accelerate.framework:
import Accelerate
let n = 2
var input: [UInt16] = [ 0x3c00, 0xbc00 ]
var output = [Float](count: n, repeatedValue: 0)
var src = vImage_Buffer(data:&input, height:1, width:UInt(n), rowBytes:2*n)
var dst = vImage_Buffer(data:&output, height:1, width:UInt(n), rowBytes:4*n)
vImageConvert_Planar16FtoPlanarF(&src, &dst, 0)
// output now contains [1.0, -1.0]
You can also use this method to convert individual values, but it's fairly heavyweight if that's all that you're doing; on the other hand it's extremely efficient if you have large buffers of values to convert.
If you need to convert isolated values, you might put something like the following C function in your bridging header and use it from Swift:
#include <stdint.h>
static inline float loadFromF16(const uint16_t *pointer) { return *(const __fp16 *)pointer; }
This will use hardware conversion instructions when you're compiling for targets that have them (armv7s, arm64, x86_64h), and call a reasonably good software conversion routine when compiling for targets that don't have hardware support.
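For what it's worth, calling that helper from Swift then looks something like this (assuming loadFromF16 is exposed through the bridging header as above):

var half: UInt16 = 0x3c00          // 1.0 in IEEE 754 half precision
let single = loadFromF16(&half)    // Swift passes &half as the const uint16_t * parameter
print(single)                      // 1.0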
addendum: going the other way
You can convert float to half-precision in pretty much the same way:
static inline void storeAsF16(float value, uint16_t *pointer) { *(__fp16 *)pointer = value; }
Or use the function vImageConvert_PlanarFtoPlanar16F.
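A sketch of that reverse direction, mirroring the buffer setup from the earlier example (it uses the same older-Swift array initializer as above; the exact syntax will vary with the Swift version):

import Accelerate

let count = 2
var floats: [Float] = [ 1.0, -1.0 ]
var halves = [UInt16](count: count, repeatedValue: 0)
var src = vImage_Buffer(data: &floats, height: 1, width: UInt(count), rowBytes: 4*count)
var dst = vImage_Buffer(data: &halves, height: 1, width: UInt(count), rowBytes: 2*count)
vImageConvert_PlanarFtoPlanar16F(&src, &dst, 0)
// halves now contains [0x3c00, 0xbc00]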