I'm trying to assign -5 to signedInt but getting an error:
Integer literal '133' overflows when stored into 'Int8'
let signedInt: Int8 = 0b10000101
print(signedInt)
print(String(signedInt, radix: 2))
Your value is not -5, but -123.
You can't get there with a direct assignment because the literal 0b10000101 is interpreted as 133, which overflows Int8 (whose range is -128...127).
To assign a negative value, use Int8(bitPattern:) to reinterpret the bits of a UInt8 as an Int8:
let signedInt = Int8(bitPattern: 0b10000101)
print(signedInt)
-123
-5 is 0b11111011, which is the 2's complement of 0b00000101.
To form the 2's complement, start with the binary pattern for 5:
0b00000101
invert all of the bits:
0b11111010
and add 1:
0b11111011
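These steps can be checked in Swift with the bitwise NOT operator and overflow addition (a quick sketch):

```swift
let five: UInt8 = 0b00000101
let inverted = ~five               // 0b11111010 (all bits flipped)
let negated = inverted &+ 1        // 0b11111011 (add 1, ignoring overflow)

print(String(negated, radix: 2))   // 11111011
print(Int8(bitPattern: negated))   // -5
```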
You can use UInt8(bitPattern:) to find the representation of the number:
let signedInt: Int8 = -5
print(String(UInt8(bitPattern: signedInt), radix: 2))
11111011
I am converting a value from String to Double.
let value: String = "8"
var size: Double
size = Double(value)!   // Double(_: String) returns an optional, so it must be unwrapped
print(size)             // 8.0
The result should be 8 unless the string value is "8.0".
Double only stores a numeric magnitude. "8" and "8.0" have the same magnitude, so they get represented by the same Double value. Whether you show trailing 0s or not is a consequence of how you choose to format and present your values.
print does it one way for debugging, but NumberFormatter gives you more control to format numbers for real, non-debugging purposes.
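For example, a NumberFormatter can be told to always print one fractional digit, so both "8" and "8.0" come out as 8.0 (a sketch; the locale and digit settings are just one possible choice):

```swift
import Foundation

let size = Double("8")!   // the same Double value as Double("8.0")!

let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")  // force "." as the separator
formatter.minimumFractionDigits = 1                   // always show one decimal
formatter.maximumFractionDigits = 1
print(formatter.string(from: NSNumber(value: size))!) // 8.0
```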
I found that making the operands Decimal can reduce numeric error in Swift:
0.1 + 0.2 = 0.30000000000000004
Decimal(0.1) + Decimal(0.2) = 0.3
So I tried to make a function that evaluates an expression String in Decimal, like this:
func calculate(expression: String) -> Decimal {
    let expression = NSExpression(format: expression)
    let value = expression.expressionValue(with: nil, context: nil) as? Decimal
    return value ?? 0.0
}
But value keeps getting nil and the function always returns 0.0. Can I get any help on this?
Thanks
The Decimal type does not reduce numeric error. It just computes values in decimal. That can increase or decrease error, depending on the calculation. 1/10 happens to be bad in binary for the same reason that 1/3 is bad in decimal. Your code doesn't actually compute anything in Decimal. It's just trying to convert the final Double value to Decimal at the end, which introduces binary-to-decimal rounding (making it less accurate).
That said, expressionValue(with:context:) returns an NSNumber. You can't convert that to Decimal with as?. You need to use .decimalValue:
let number = expression.expressionValue(with: nil, context: nil) as? NSNumber
let value = number?.decimalValue
This will compute the value in Double and then round it to a Decimal.
But if you want to do calculations in Decimal, I don't believe that NSExpression can do that.
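If you want genuine Decimal arithmetic, one option is to skip NSExpression and build the values with Decimal(string:), which parses the digits exactly instead of going through a Double literal (a sketch):

```swift
import Foundation

// Decimal(string:) is exact; Decimal(0.1) would inherit the binary
// rounding error already baked into the Double literal 0.1.
let a = Decimal(string: "0.1")!
let b = Decimal(string: "0.2")!
print(a + b)   // 0.3
```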
I'm reading accelerometer data from an iBeacon that appears in the following string format:
x hex string value: "0160"
y hex string value: "ff14"
z hex string value: "0114"
I'm expecting to see these values as Double values ranging from 0g to 1g. How would you convert these hex strings into Doubles in Swift?
Get the integer value from the hex string with Int(_:radix:):
let string = "ff14"
let hexValue = Int(string, radix: 16)!
and divide by 65535 (the maximum 16-bit value) to get values between 0.0 and 1.0:
let result = Double(hexValue) / 65535
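Combining both steps into a small helper (a sketch; axisValue is a made-up name, and it assumes the 16-bit value should be read as unsigned):

```swift
// Convert a 16-bit hex string such as "ff14" into a value in 0.0...1.0.
// Returns nil if the string is not valid hexadecimal.
func axisValue(fromHex string: String) -> Double? {
    guard let hexValue = UInt16(string, radix: 16) else { return nil }
    return Double(hexValue) / 65535
}

print(axisValue(fromHex: "0160")!)   // ≈ 0.0054
print(axisValue(fromHex: "ff14")!)   // ≈ 0.9964
```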
I'm trying to print the result of a division, for example:
let division = (4/6)
print(division)
In this case the print out is 0.
How can I print the numeric value of the division without losing precision? I mean without converting the output to a string.
You are performing integer division. You need to perform floating point division.
In your code, division is an Int value. 4 / 6 is zero in integer division.
You need:
let division = 4.0/6.0
print(division)
Or, if the operands are Int variables, convert them before dividing:
func divide(_ no1: Int, _ no2: Int) -> Double {
    let ans = Double(no1) / Double(no2)
    return ans
}
If you want the correct value, try:
let division = Float(v1) / Float(v2)
print(division)
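Note that it is the operands, not the result, that must be converted; converting after the division is too late (the variable names here are just illustrative):

```swift
let a = 4
let b = 6

let tooLate = Double(a / b)         // the integer division already truncated
print(tooLate)                      // 0.0

let correct = Double(a) / Double(b) // floating point division
print(correct)                      // ≈ 0.6667
```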
What are the differences between the data types Int & UInt8 in Swift?
Looks like UInt8 is used for binary data; I need to convert UInt8 to Int. Is this possible?
The U in UInt stands for unsigned integer.
It is not just used for binary data. UInt is used for non-negative numbers only, like the natural numbers.
I recommend learning how negative numbers are represented by a computer.
Int8 is an Integer type which can store positive and negative values.
UInt8 is an unsigned integer which can store only positive values.
You can convert between the two, but the value has to fit: converting Int8 to UInt8 requires a non-negative value, and converting UInt8 to Int8 requires a value no greater than 127.
UInt8 is always 8 bits wide, while the width of Int is platform-dependent, defined by the compiler:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/TheBasics.html
Int could be 32 or 64 bits
Updated for Swift:
Type     Range                                          Bytes per Element
UInt8    0 to 255                                       1
Int      -9223372036854775808 to 9223372036854775807    8 (on 64-bit platforms)
If you want to find the max and min range of Int or UInt8:
let maxIntValue = Int.max
let maxUInt8Value = UInt8.max
let minIntValue = Int.min
let minUInt8Value = UInt8.min
If you want to convert an arbitrary UInt to Int, use the simple code below:
func convertToInt(unsigned: UInt) -> Int {
    // Values up to Int.max convert directly; larger values wrap
    // around into the negative range, preserving the bit pattern.
    let signed = (unsigned <= UInt(Int.max)) ?
        Int(unsigned) :
        Int(unsigned - UInt(Int.max) - 1) + Int.min
    return signed
}
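For the original UInt8-to-Int question, none of that wrap-around logic is needed: Int is at least 32 bits wide, so every UInt8 value fits and the plain initializer always succeeds (a sketch):

```swift
let byte: UInt8 = 200

// Widening conversion: every UInt8 value (0...255) fits in Int.
let value = Int(byte)
print(value)                        // 200

// Narrowing the other way can fail; UInt8(exactly:) returns nil
// instead of trapping when the value is out of range.
print(UInt8(exactly: 300) as Any)   // nil
print(UInt8(exactly: 200) as Any)   // Optional(200)
```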