How to read a BLE Characteristic Float in Swift

I am trying to connect to a Bluetooth LE / Bluetooth Smart / BLE health device's Health Thermometer Service (0x1809), as officially described here: https://developer.bluetooth.org/gatt/services/Pages/ServiceViewer.aspx?u=org.bluetooth.service.health_thermometer.xml. Specifically, I'm requesting notifications from the Temperature Measurement characteristic (0x2A1C), described here: https://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicViewer.aspx?u=org.bluetooth.characteristic.temperature_measurement.xml.
I have a decent Swift 2 background, but I've never worked this closely with NSData, bytes, or bitwise operators, and I'm completely new to little endian vs. big endian, so this is pretty new for me and I could use some help. The characteristic has flag logic built in that determines what data you will receive. So far I have always received data in the order Flags, Temperature Measurement Value, Time Stamp, but I keep reading flags of "010", which makes me think I'm reading the flags incorrectly. In truth, I think I'm bringing in everything shy of the timestamp incorrectly. I'm including the data I'm seeing in the code comments.
I've tried multiple ways of obtaining this binary data. The flags are a single byte read with bit operators. The temperature measurement itself is a float, which took me some time to realize is not a Swift Float but rather an ISO/IEEE Standard "IEEE-11073 32-bit FLOAT", which the BLE format-types page says has "NO EXPONENT VALUE": https://www.bluetooth.com/specifications/assigned-numbers/format-types. I don't even know what that means. Here is my code from the didUpdateValueForCharacteristic() function, where you can see the multiple attempts I commented out as I tried new ones:
// Parse Characteristic Response
let stream = NSInputStream( data: characteristic.value! )
stream.open() // IMPORTANT
// Retrieve Flags
var readBuffer = Array<UInt8>( count: 1, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
var flags = String( readBuffer[ 0 ], radix: 2 )
flags = String( count: 8 - flags.characters.count, repeatedValue: Character( "0" ) ) + flags
flags = String( flags.characters.reverse() )
print( "FLAGS: \( flags )" )
// Example data:
// ["01000000"]
//
// This appears to be wrong. I should be getting "10000000" according to spec
// Bluetooth FLOAT-TYPE is defined in ISO/IEEE Std. 11073
// FLOATs are 32 bit
// Format [8bit exponent][24bit mantissa]
/* Attempt 1 - Read in a Float - Doesn't work since it's an IEEE Float
readBuffer = Array<UInt8>( count: 4, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
var tempData = UnsafePointer<Float>( readBuffer ).memory
// Attempt 2 - Inverted bytes- Doesn't work since it's wrong and it's an IEEE Float
let readBuffer2 = [ readBuffer[ 3 ], readBuffer[ 2 ], readBuffer[ 1 ], readBuffer[ 0 ] ]
var tempValue = UnsafePointer<Float>( readBuffer2 ).memory
print( "TEMP: \( tempValue )" )
// Attempt 3 - Doesn't work for 1 or 2 since it's an IEEE Float
var f:Float = 0.0
memccpy(&f, readBuffer, 4, 4)
print( "TEMP: \( f )" )
var f2:Float = 0.0
memccpy(&f2, readBuffer2, 4, 4)
print( "TEMP: \( f2 )" )
// Attempt 4 - Trying to Read an Exponent and a Mantissa - Didn't work
readBuffer = Array<UInt8>( count: 1, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let exponent = UnsafePointer<Int8>( readBuffer ).memory
readBuffer = Array<UInt8>( count: 3, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let mantissa = UnsafePointer<Int16>( readBuffer ).memory
let temp = NSDecimalNumber( mantissa: mantissa, exponent: exponent, isNegative: false )
print( "TEMP: \( temp )" )
// Attempt 5 - Invert bytes - Doesn't work
readBuffer = Array<UInt8>( count: 4, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let exponentBuffer = [ readBuffer[ 3 ] ]
let mantissaBuffer = [ readBuffer[ 2 ], readBuffer[ 1 ], readBuffer[ 0 ] ]
let exponent = UnsafePointer<Int16>( exponentBuffer ).memory
let mantissa = UnsafePointer<UInt64>( mantissaBuffer ).memory
let temp = NSDecimalNumber( mantissa: mantissa, exponent: exponent, isNegative: false )
print( "TEMP: \( temp )" )
// Attempt 6 - Tried a bitstream frontwards and backwards - Doesn't work
readBuffer = Array<UInt8>( count: 4, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
var bitBuffer: [String] = Array<String>( count:4, repeatedValue: "" )
for var i = 0; i < bitBuffer.count; i++ {
bitBuffer[ i ] = String( readBuffer[ i ], radix: 2 )
bitBuffer[ i ] = String( count: 8 - bitBuffer[ i ].characters.count, repeatedValue: Character( "0" ) ) + bitBuffer[ i ]
//bitBuffer[ i ] = String( bitBuffer[ i ].characters.reverse() )
}
print( "TEMP: \( bitBuffer )" )
// Attempt 7 - More like the Obj. C code - Doesn't work
readBuffer = Array<UInt8>( count: 4, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let value = UnsafePointer<UInt32>( readBuffer ).memory
let tempData = CFSwapInt32LittleToHost( value )
let exponent = tempData >> 24
let mantissa = tempData & 0x00FFFFFF
if ( tempData == 0x007FFFFF ) {
print(" *** INVALID *** ")
return
}
let tempValue = Double( mantissa ) * pow( 10.0, Double( exponent ) )
print( "TEMP: \( tempValue )" )
// Attempt 8 - Saw that BLE spec says "NO Exponent" - Doesn't work
readBuffer = Array<UInt8>( count: 1, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
readBuffer = Array<UInt8>( count: 3, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let tempValue = UnsafePointer<Float>( readBuffer ).memory
print( "TEMP: \( tempValue )" )
// Example data:
// ["00110110", "00000001", "00000000", "11111111"]
//
// Only the first byte appears to ever change.
*/
// Timestamp - Year - works
readBuffer = Array<UInt8>( count: 2, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let year = UnsafePointer<UInt16>( readBuffer ).memory
// Timestamp Remainder - works
readBuffer = Array<UInt8>( count: 5, repeatedValue: 0 )
stream.read( &readBuffer, maxLength: readBuffer.count )
let month = readBuffer[ 0 ]
let day = readBuffer[ 1 ]
let hour = readBuffer[ 2 ]
let minute = readBuffer[ 3 ]
let second = readBuffer[ 4 ]
print( "TIMESTAMP: \( month )/\( day )/\( year ) \( hour ):\( minute ):\( second )" )
I've found this example in Objective-C, which I don't know (https://github.com/AngelSensor/angel-sdk/blob/b7459d9c86c6a5c72d8e58b696345b642286b876/iOS/SDK/Services/HealthThermometer/ANHTTemperatureMeasurmentCharacteristic.m), and I've tried to work from it, but it's not clear to me exactly what is going on:
// flags
uint8_t flags = dataPointer[0];
dataPointer++;
// temperature
uint32_t tempData = (uint32_t)CFSwapInt32LittleToHost(*(uint32_t *)dataPointer);
dataPointer += 4;
int8_t exponent = (int8_t)(tempData >> 24);
int32_t mantissa = (int32_t)(tempData & 0x00FFFFFF);
if (tempData == 0x007FFFFF) {
return;
}
float tempValue = (float)(mantissa*pow(10, exponent));
If someone could help me out with how to pull the flags and thermometer measurements from this BLE Characteristic, I would be very grateful. Thanks.
I was asked to give sample data. Here it is (12 bytes total):
["00000010", "00110011", "00000001", "00000000", "11111111", "11100000", "00000111", "00000100", "00001111", "00000001", "00000101", "00101100"]
-OR-
<025e0100 ffe00704 0f11150f>

It can be a bit tricky to get your head around sometimes, but here's my simple implementation; I hope it helps you out.
private func parseThermometerReading(withData someData : NSData?) {
var pointer = UnsafeMutablePointer<UInt8>(someData!.bytes)
let flagsValue = Int(pointer.memory) //The first byte holds the flags
let temperatureUnitisCelsius = (flagsValue & 0x01) == 0
let timeStampPresent = (flagsValue & 0x02) > 0
let temperatureTypePresent = ((flagsValue & 0x04) >> 2) > 0
pointer = pointer.successor() //Jump over the flags byte (the pointer is to 1-byte values, so successor advances by 8 bits); you could also use pointer = pointer.advancedBy(1), which is the same
let measurementValue : Float32 = self.parseFloat32(withPointer: pointer) //the parseFloat32 method is where the IEEE float conversion magic happens
pointer = pointer.advancedBy(4) //Skip the 4 bytes (32 bits) of the measurement FLOAT we just parsed
var timeStamp : NSDate?
if timeStampPresent {
//Parse timestamp
//The parseDate method converts the 7-byte timestamp to an NSDate object; see its implementation below for details
timeStamp = self.parseDate(withPointer: pointer)
pointer = pointer.advancedBy(7) //Skip over 7 bytes of timestamp
}
var temperatureType : Int = -1 //Some unknown value
if temperatureTypePresent {
//Parse measurement Type
temperatureType = Int(pointer.memory)
}
}
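You would call this from the didUpdateValueForCharacteristic delegate callback the question mentions; something like this (Swift 2 era signature, illustrative only):
func peripheral(peripheral: CBPeripheral, didUpdateValueForCharacteristic characteristic: CBCharacteristic, error: NSError?) {
    self.parseThermometerReading(withData: characteristic.value)
}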
Now to the little method that converts the bytes to an IEEE-11073 FLOAT:
internal func parseFloat32(withPointer aPointer : UnsafeMutablePointer<UInt8>) -> Float32 {
// aPointer points to 8-bit values; we need to read a full 32-bit value from it
let rawValue = UnsafeMutablePointer<UInt32>(aPointer).memory //Reinterpret the pointer as UInt32 and read 32 bits
let tempData = Int(CFSwapInt32LittleToHost(rawValue)) //We need to convert from BLE Little endian to match the current host's endianness
// The 32 bit value consists of a 8 bit exponent and a 24 bit mantissa
var mantissa : Int32 = Int32(tempData & 0x00FFFFFF) //We get the mantissa using bit masking (basically we mask out first 8 bits)
//unsafeBitCast reinterprets the UInt8 bit pattern as a signed Int8 (Int8(bitPattern:) would do the same); Obj-C code doesn't need this because it has signed int8_t
let exponent = unsafeBitCast(UInt8(tempData >> 24), Int8.self)
//The exponent is the top 8 bits, obtained by shifting right by 24
var output : Float32 = 0
//Here we do some checks for specific cases of Negative infinity/infinity, Reserved MDER values, etc..
if mantissa >= Int32(FIRST_RESERVED_VALUE.rawValue) && mantissa <= Int32(ReservedFloatValues.MDER_NEGATIVE_INFINITY.rawValue) {
output = Float32(RESERVED_FLOAT_VALUES[Int(mantissa - Int32(FIRST_RESERVED_VALUE.rawValue))]) //RESERVED_FLOAT_VALUES, FIRST_RESERVED_VALUE and ReservedFloatValues are MDER special-value constants defined elsewhere in the project
}else{
//This is not a special reserved value, do the normal mathematical calculation to get the float value using mantissa and exponent.
if mantissa >= 0x800000 {
mantissa = -((0xFFFFFF + 1) - mantissa)
}
let magnitude = pow(10.0, Double(exponent))
output = Float32(mantissa) * Float32(magnitude)
}
return output
}
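For anyone on a newer Swift, the same steps (little-endian reassembly, top byte as a signed base-10 exponent, low 24 bits as a two's-complement mantissa) can be written without pointer casts. A minimal sketch, with the function name my own:
import Foundation

func ieee11073Float(from bytes: [UInt8]) -> Float? {
    guard bytes.count >= 4 else { return nil }
    // Reassemble little-endian: byte 0 is the least significant byte
    let raw = UInt32(bytes[0]) | (UInt32(bytes[1]) << 8) |
              (UInt32(bytes[2]) << 16) | (UInt32(bytes[3]) << 24)
    let exponent = Int8(bitPattern: UInt8(raw >> 24))     // top byte: signed base-10 exponent
    var mantissa = Int32(raw & 0x00FFFFFF)                // low 24 bits
    if mantissa == 0x007FFFFF { return nil }              // NaN / "value not available"
    if mantissa >= 0x00800000 { mantissa -= 0x01000000 }  // sign-extend 24-bit two's complement
    return Float(mantissa) * pow(10, Float(exponent))
}

ieee11073Float(from: [0x5e, 0x01, 0x00, 0xff])   // Optional(35.0): the temperature from the question's sample packet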
And here is how the date is parsed into an NSDate object
internal func parseDate(withPointer aPointer : UnsafeMutablePointer<UInt8>) -> NSDate {
var bytePointer = aPointer //The given Unsigned Int8 pointer
var wordPointer = UnsafeMutablePointer<UInt16>(bytePointer) //We also hold a UInt16 pointer for the year, this is optional really, just easier to read
var year = Int(CFSwapInt16LittleToHost(wordPointer.memory)) //This gives us the year
bytePointer = bytePointer.advancedBy(2) //Skip 2 bytes (year)
//bytePointer = wordPointer.successor() //Or you can do this using the word Pointer instead (successor will make it jump 2 bytes)
//The rest here is self explanatory
var month = Int(bytePointer.memory)
bytePointer = bytePointer.successor()
var day = Int(bytePointer.memory)
bytePointer = bytePointer.successor()
var hours = Int(bytePointer.memory)
bytePointer = bytePointer.successor()
var minutes = Int(bytePointer.memory)
bytePointer = bytePointer.successor()
var seconds = Int(bytePointer.memory)
//Timestamp components parsed, create NSDate object
var calendar = NSCalendar.currentCalendar()
var dateComponents = calendar.components([.Year, .Month, .Day, .Hour, .Minute, .Second], fromDate: NSDate())
dateComponents.year = year
dateComponents.month = month
dateComponents.day = day
dateComponents.hour = hours
dateComponents.minute = minutes
dateComponents.second = seconds
return calendar.dateFromComponents(dateComponents)!
}
These are pretty much all the tricks you'll need for any other BLE characteristic that uses the FLOAT type.
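As a sanity check, applying these steps by hand to the hex form of the sample packet in the question, <025e0100 ffe00704 0f11150f>: the flags byte 0x02 means Celsius, time stamp present, no temperature type; the FLOAT bytes 5e 01 00 ff give mantissa 0x00015E = 350 and exponent 0xFF = -1, so 350 × 10^-1 = 35.0 °C; and the time stamp bytes give year 0x07E0 = 2016, month 4, day 15, 17:21:15.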

I've done some stuff similar to what you're doing... I'm not sure if this is still relevant to you, but let's dig in... Maybe my code can give you some insight:
First, get the NSData to an array of UInt8:
let arr = Array(UnsafeBufferPointer(start: UnsafePointer<UInt8>(data.bytes), count: data.length))
The spec that we are following says that the first 3 positions in this array represent the mantissa and the last one the exponent (in the range -128...127):
let exponentRaw = arr[3]
var exponent = Int16(exponentRaw)
if exponentRaw > 0x7F {
exponent = Int16(exponentRaw) - 0x100
}
let mantissa = sumBits(Array(arr[0...2]))
let magnitude = pow(10.0, Float32(exponent))
let value = Float32(mantissa) * magnitude
... auxiliary function:
func sumBits(arr: [UInt8]) -> UInt64 {
var sum : UInt64 = 0
for (idx, val) in arr.enumerate() {
sum += UInt64(val) << ( 8 * UInt64(idx) )
}
return sum
}
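For example, with the four temperature bytes from the binary sample in the question (assuming arr holds just those four bytes, i.e. indices 1 through 4 of the 12-byte value):
let arr: [UInt8] = [0x33, 0x01, 0x00, 0xff]
// exponentRaw = 0xFF -> exponent = -1, mantissa = 0x000133 = 307, value = 30.7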

Related

How to convert from 8-byte hex to real

I'm currently working on a file converter. I've never done anything using binary file reading before. There are many converters available for this file type (GDSII to text), but none in Swift that I can find.
I've gotten all the other data types working (2byte int, 4byte int), but I'm really struggling with the real data type.
From a spec document :
http://www.cnf.cornell.edu/cnf_spie9.html
Real numbers are not represented in IEEE format. A floating point number is made up of three parts: the sign, the exponent, and the mantissa. The value of the number is defined to be (mantissa) × 16^(exponent). If "S" is the sign bit, "E" is exponent bits, and "M" are mantissa bits then an 8-byte real number has the format
SEEEEEEE MMMMMMMM MMMMMMMM MMMMMMMM
MMMMMMMM MMMMMMMM MMMMMMMM MMMMMMMM
The exponent is in "excess 64" notation; that is, the 7-bit field shows a number that is 64 greater than the actual exponent. The mantissa is always a positive fraction greater than or equal to 1/16 and less than 1. For an 8-byte real, the mantissa is in bits 8 to 63. The decimal point of the binary mantissa is just to the left of bit 8. Bit 8 represents the value 1/2, bit 9 represents 1/4, and so on.
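Worked example, applying those rules by hand (my own example, not from the linked page): the 8 bytes 41 10 00 00 00 00 00 00 have sign bit 0, exponent field 0x41 = 65, so a true exponent of 65 - 64 = 1, and a mantissa whose only set bit is bit 11, worth 1/16; the value is therefore (1/16) × 16^1 = 1.0.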
I've tried implementing something similar to what I've seen in Python or Perl, but each language has features that Swift doesn't have, and the type conversions get very confusing.
This is one method I tried, based on Perl. It doesn't seem to get the right value. Bitwise math is new to me.
var sgn = 1.0
let andSgn = 0x8000000000000000 & bytes8_test
if( andSgn > 0) { sgn = -1.0 }
// var sgn = -1 if 0x8000000000000000 & num else 1
let manta = bytes8_test & 0x00ffffffffffffff
let exp = (bytes8_test >> 56) & 0x7f
let powBase = sgn * Double(manta)
let expPow = (4.0 * (Double(exp) - 64.0) - 56.0)
var testReal = pow( powBase , expPow )
Another I tried:
let bitArrayDecode = decodeBitArray(bitArray: bitArray)
let valueArray = calcValueOfArray(bitArray: bitArrayDecode)
var exponent:Int16
//calculate exponent
if(negative){
exponent = valueArray - 192
} else {
exponent = valueArray - 64
}
//calculate mantissa
var mantissa = 0.0
//sgn = -1 if 0x8000000000000000 & num else 1
//mant = num & 0x00ffffffffffffff
//exp = (num >> 56) & 0x7f
//return math.ldexp(sgn * mant, 4 * (exp - 64) - 56)
for index in 0...7 {
//let mantaByte = bytes8_1st[index]
//mantissa += Double(mantaByte) / pow(256.0, Double(index))
let bit = pow(2.0, Double(7-index))
let scaleBit = pow(2.0, Double( index ))
var mantab = (8.0 * Double( bytes8_1st[1] & UInt8(bit)))/(bit*scaleBit)
mantissa = mantissa + mantab
mantab = (8.0 * Double( bytes8_1st[2] & UInt8(bit)))/(256.0 * bit * scaleBit)
mantissa = mantissa + mantab
mantab = (8.0 * Double( bytes8_1st[3] & UInt8(bit)))/(256.0 * bit * scaleBit)
mantissa = mantissa + mantab
}
let real = mantissa * pow(16.0, Double(exponent))
UPDATE:
The following part seems to work for the exponent. It returns -9 for the data set I'm working with, which is what I expect.
var exp = Int16((bytes8 >> 56) & 0x7f)
exp = exp - 65 //change from excess 64
print(exp)
var sgnVal = 0x8000000000000000 & bytes8
var sgn = 1.0
if(sgnVal == 1){
sgn = -1.0
}
For the mantissa, though, I can't get the calculation correct somehow.
The data set:
3d 68 db 8b ac 71 0c b4
38 6d f3 7f 67 5e f6 ec
I think it should return 1e-9 for exponent and 0.0001
The closest I've gotten is real Double 0.0000000000034907316148746757
var bytes7 = Array<UInt8>()
for (index, by) in data.enumerated(){
if(index < 4) {
bytes7.append(by[0])
bytes7.append(by[1])
}
}
for index in 0...7 {
mantissa += Double(bytes7[index]) / (pow(256.0, Double(index) + 1.0 ))
}
var real = mantissa * pow(16.0, Double(exp));
print(mantissa)
END OF UPDATE.
That also doesn't seem to produce the correct values; it was based on a C file.
If anyone can help me out with an English explanation of what the spec means, or any pointers on what to do, I would really appreciate it.
Thanks!
According to the doc, this code returns the 8-byte Real data as Double.
extension Data {
func readUInt64BE(_ offset: Int) -> UInt64 {
var value: UInt64 = 0
_ = Swift.withUnsafeMutableBytes(of: &value) {bytes in
copyBytes(to: bytes, from: offset..<offset+8)
}
return value.bigEndian
}
func readReal64(_ offset: Int) -> Double {
let bitPattern = readUInt64BE(offset)
let sign: FloatingPointSign = (bitPattern & 0x80000000_00000000) != 0 ? .minus: .plus
let exponent = (Int((bitPattern >> 56) & 0x00000000_0000007F)-64) * 4 - 56
let significand = Double(bitPattern & 0x00FFFFFF_FFFFFFFF)
let result = Double(sign: sign, exponent: exponent, significand: significand)
return result
}
}
Usage:
//Two 8-byte Real data taken from the example in the doc
let data = Data([
//1.0000000000000E-03
0x3e, 0x41, 0x89, 0x37, 0x4b, 0xc6, 0xa7, 0xef,
//1.0000000000000E-09
0x39, 0x44, 0xb8, 0x2f, 0xa0, 0x9b, 0x5a, 0x54,
])
let real1 = data.readReal64(0)
let real2 = data.readReal64(8)
print(real1, real2) //->0.001 1e-09
Another example from "UPDATE":
//0.0001 in "UPDATE"
let data = Data([0x3d, 0x68, 0xdb, 0x8b, 0xac, 0x71, 0x0c, 0xb4, 0x38, 0x6d, 0xf3, 0x7f, 0x67, 0x5e, 0xf6, 0xec])
let real = data.readReal64(0)
print(real) //->0.0001
Please remember that Double has only a 52-bit significand (mantissa), so this code loses some significant bits of the original 8-byte Real. I'm not sure whether that will be an issue for you or not.

Splitting a UInt16 into 2 UInt8 bytes and getting the hex string of both in Swift

I need 16383 to be converted to 7F7F, but I can only get it converted to 3fff or 77377.
I can convert 8192 to the hexadecimal string 4000, which is essentially the same thing.
If I use let firstHexa = String(format:"%02X", a), it stops at 3fff hexadecimal for the first number and 2000 hexadecimal for the second number. Here is my code:
public func intToHexString(_ int: Int16) -> String {
var encodedHexa: String = ""
if int >= -8192 && int <= 8191 {
let int16 = int + 8192
//convert to two unsigned Int8 bytes
let a = UInt8(int16 >> 8 & 0x00ff)
let b = UInt8(int16 & 0x00ff)
//convert the 2 bytes to hexadecimals
let first1Hexa = String(a, radix: 8 )
let second2Hexa = String(b, radix: 8)
let firstHexa = String(format:"%02X", a)
let secondHexa = String(format:"%02X", b)
//combine the 2 hex strings into 1 string with 4 characters, adding a leading 0 if only 1 character
if firstHexa.count == 1 {
let appendedFHexa = "0" + firstHexa
encodedHexa = appendedFHexa + secondHexa
} else if secondHexa.count == 1 {
let appendedSHexa = "0" + secondHexa
encodedHexa = firstHexa + appendedSHexa
} else {
encodedHexa = firstHexa + secondHexa
}
}
return encodedHexa
}
Please help ma'ams and sirs! Thanks.
From your test cases, it seems like your values are 7 bits per byte.
You want 8192 to convert to 4000.
You want 16383 to convert to 7F7F.
Note that:
(0x7f << 7) + 0x7f == 16383
Given that:
let a = UInt8((int16 >> 7) & 0x7f)
let b = UInt8(int16 & 0x7f)
let result = String(format: "%02X%02X", a , b)
This gives:
"4000" for 8192
"7F7F" for 16383
To reverse the process:
let str = "7F7F"
let value = Int(str, radix: 16)!
let result = ((value >> 8) & 0x7f) << 7 + (value & 0x7f)
print(result) // 16383
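Putting the pieces together as a small helper, including the +8192 offset from the question's own code (a sketch; the 7-bits-per-byte packing and the offset are assumptions about your protocol):
import Foundation

func encode7F7F(_ value: Int16) -> String {
    let offset = Int(value) + 8192           // shift -8192...8191 into 0...16383
    let hi = UInt8((offset >> 7) & 0x7F)     // upper 7 bits
    let lo = UInt8(offset & 0x7F)            // lower 7 bits
    return String(format: "%02X%02X", hi, lo)
}

encode7F7F(0)      // "4000"
encode7F7F(8191)   // "7F7F"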

How in Swift to convert an Int16 to two UInt8 bytes

I have some binary data that encodes a two byte value as a signed integer.
bytes[1] = 255 // 0xFF
bytes[2] = 241 // 0xF1
Decoding
This is fairly easy - I can extract an Int16 value from these bytes with:
Int16(bytes[1]) << 8 | Int16(bytes[2])
Encoding
This is where I'm running into issues. Most of my data spec calls for UInt and that is easy, but I'm having trouble extracting the two bytes that make up an Int16.
let nv : Int16 = -15
UInt8(nv >> 8) // fail
UInt8(nv) // fail
Question
How would I extract the two bytes that make up an Int16 value?
You should work with unsigned integers:
let bytes: [UInt8] = [255, 251]
let uInt16Value = UInt16(bytes[0]) << 8 | UInt16(bytes[1])
let uInt8Value0 = UInt8(uInt16Value >> 8)
let uInt8Value1 = UInt8(uInt16Value & 0x00ff)
If you want to convert a UInt16 to the bit-equivalent Int16, you can do it with a specific initializer:
let int16Value: Int16 = -15
let uInt16Value = UInt16(bitPattern: int16Value)
And vice versa:
let uInt16Value: UInt16 = 65000
let int16Value = Int16(bitPattern: uInt16Value)
In your case:
let nv: Int16 = -15
let uNv = UInt16(bitPattern: nv)
UInt8(uNv >> 8)
UInt8(uNv & 0x00ff)
You could use the init(truncatingBitPattern:) initializer:
let nv: Int16 = -15
UInt8(truncatingBitPattern: nv >> 8) // -> 255
UInt8(truncatingBitPattern: nv) // -> 241
I would just do this:
let a = UInt8(nv >> 8 & 0x00ff) // 255
let b = UInt8(nv & 0x00ff) // 241
extension Int16 {
var twoBytes : [UInt8] {
let unsignedSelf = UInt16(bitPattern: self)
return [UInt8(truncatingIfNeeded: unsignedSelf >> 8),
UInt8(truncatingIfNeeded: unsignedSelf)]
}
}
var test : Int16 = -15
test.twoBytes // [255, 241]
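A quick round-trip check, combining the extension above with the decoding expression from the question (indices 0-based here because twoBytes returns a fresh array):
let v: Int16 = -15
let bytes = v.twoBytes                                   // [255, 241]
let decoded = Int16(bytes[0]) << 8 | Int16(bytes[1])     // -15 again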

Generate a random number with a certain number of digits

Hi,
I have a very basic question:
How can I create a random number with 20 digits, no floats, no negatives (basically an Int), in Swift?
Thanks for all answers XD
Step 1
First of all we need an extension of Int to generate a random number in a range.
extension Int {
init(_ range: Range<Int> ) {
let delta = range.startIndex < 0 ? abs(range.startIndex) : 0
let min = UInt32(range.startIndex + delta)
let max = UInt32(range.endIndex + delta)
self.init(Int(min + arc4random_uniform(max - min)) - delta)
}
}
This can be used this way:
Int(0...9) // 4 or 1 or 1...
Int(10...99) // 90 or 33 or 11
Int(100...999) // 200 or 333 or 893
Step 2
Now we need a function that receives the number of digits requested, calculates the range of the random number, and finally invokes the new initializer of Int.
func random(digits:Int) -> Int {
let min = Int(pow(Double(10), Double(digits-1)))
let max = Int(pow(Double(10), Double(digits))) - 1
return Int(min...max)
}
Test
random(1) // 8
random(2) // 12
random(3) // 829
random(4) // 2374
Swift 5: Simple Solution
func random(digits:Int) -> String {
var number = String()
for _ in 1...digits {
number += "\(Int.random(in: 1...9))"
}
return number
}
print(random(digits: 1)) //3
print(random(digits: 2)) //59
print(random(digits: 3)) //926
Note: it returns the value as a String; if you need an Int value you can do it like this:
let number = Int(random(digits: 1)) ?? 0
Here is some pseudocode that should do what you want.
generateRandomNumber(20)
func generateRandomNumber(int numDigits){
var place = 1
var finalNumber = 0;
for(int i = 0; i < numDigits; i++){
place *= 10
var randomNumber = arc4random_uniform(10)
finalNumber += randomNumber * place
}
return finalNumber
}
It's pretty simple. You generate 20 random digits and multiply each by the respective tens, hundreds, thousands... place that it should occupy. This way you guarantee a number of the correct length while randomly generating the digit used in each place.
Update
As said in the comments, you will most likely get an overflow with a number this long, so you'll have to be creative about how you store the number (String, etc.), but I merely wanted to show a simple way to generate a number with a guaranteed digit length. Also, given the current code there is a small chance your leading digit could be 0, so you should protect against that as well.
You can create a string of digits and then convert it to the required number.
func generateRandomDigits(_ digitNumber: Int) -> String {
var number = ""
for i in 0..<digitNumber {
var randomNumber = arc4random_uniform(10)
while randomNumber == 0 && i == 0 {
randomNumber = arc4random_uniform(10)
}
number += "\(randomNumber)"
}
return number
}
print(Int(generateRandomDigits(3)))
For 20 digits you can use Double instead of Int.
Here are 18 decimal digits in a UInt64:
(Swift 3)
let sz: UInt32 = 1000000000
let ms: UInt64 = UInt64(arc4random_uniform(sz))
let ls: UInt64 = UInt64(arc4random_uniform(sz))
let digits: UInt64 = ms * UInt64(sz) + ls
print(String(format:"18 digits: %018llu", digits)) // Print with leading 0s.
16 decimal digits with leading digit 1..9 in a UInt64:
let sz: UInt64 = 100000000
let ld: UInt64 = UInt64(arc4random_uniform(9)+1)
let ms: UInt64 = UInt64(arc4random_uniform(UInt32(sz/10)))
let ls: UInt64 = UInt64(arc4random_uniform(UInt32(sz)))
let digits: UInt64 = ld * (sz*sz/10) + (ms * sz) + ls
print(String(format:"16 digits: %llu", digits))
Swift 3
appzyourlifz's answer updated to Swift 3
Step 1:
extension Int {
init(_ range: Range<Int> ) {
let delta = range.lowerBound < 0 ? abs(range.lowerBound) : 0
let min = UInt32(range.lowerBound + delta)
let max = UInt32(range.upperBound + delta)
self.init(Int(min + arc4random_uniform(max - min)) - delta)
}
}
Step 2:
func randomNumberWith(digits:Int) -> Int {
let min = Int(pow(Double(10), Double(digits-1)))
let max = Int(pow(Double(10), Double(digits))) - 1
return Int(Range(uncheckedBounds: (min, max + 1)))
}
Usage:
randomNumberWith(digits:4) // 2271
randomNumberWith(digits:8) // 65273410
Swift 4 version of Unome's answer, plus:
Guarding against overflow and 0-digit numbers
Adding support for Linux devices, because the "arc4random*" functions don't exist there
On a Linux device, don't forget to do
#if os(Linux)
srandom(UInt32(time(nil)))
#endif
only once before calling random.
/// This function generates a random number of type Int with the given number of digits
///
/// - Parameter digit: the number of digits
/// - Returns: the randomly generated number, or nil if the parameter is out of range
func randomNumber(with digit: Int) -> Int? {
guard 0 < digit, digit < 20 else { // a 0-digit number doesn't exist and a 20-digit Int is too big
return nil
}
/// The final randomly generated Int
var finalNumber : Int = 0;
for i in 1...digit {
/// The newly generated digit value which will be added to the final number
var randomOperator : Int = 0
repeat {
#if os(Linux)
randomOperator = Int(random() % 9) * Int(pow(10, Double(i - 1)))
#else
randomOperator = Int(arc4random_uniform(9)) * Int(pow(10, Double(i - 1)))
#endif
} while Double(randomOperator + finalNumber) > Double(Int.max) // Make sure we don't overflow Int.max
finalNumber += randomOperator
}
return finalNumber
}
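If you really need all 20 digits, note that not every 20-digit value fits even in a UInt64, so a String is the safer representation. A minimal sketch using the Swift 4.2+ random API (the function name is just illustrative):
func randomDigitsString(count: Int) -> String {
    precondition(count > 0, "need at least one digit")
    var digits = String(Int.random(in: 1...9))        // non-zero leading digit
    for _ in 1..<count {
        digits += String(Int.random(in: 0...9))       // remaining digits may include 0
    }
    return digits
}

randomDigitsString(count: 20)   // e.g. "52093184760023157948"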

Swift - Turn Int to binary representations

I receive an Int from my server which I'd like to explode into an array of bit masks. So for example, if my server gives me the number 3, we get two values, a binary 1 and a binary 2.
How do I do this in Swift?
You could use:
let number = 3
//radix: 2 is binary, if you wanted hex you could do radix: 16
let str = String(number, radix: 2)
println(str)
prints "11"
let number = 79
//radix: 2 is binary, if you wanted hex you could do radix: 16
let str = String(number, radix: 16)
println(str)
prints "4f"
I am not aware of any nice built-in way, but you could use this:
var i = 3
let a = 0..<8
var b = a.map { Int(i & (1 << $0)) }
// b = [1, 2, 0, 0, 0, 0, 0, 0]
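If you only want the non-zero masks, which is what the question asks for, filtering that result is enough:
let masks = b.filter { $0 != 0 }   // [1, 2]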
Here is a straightforward implementation:
func intToMasks(var n: Int) -> [Int] {
var masks = [Int]()
var mask = 1
while n > 0 {
if n & mask > 0 {
masks.append(mask)
n -= mask
}
mask <<= 1
}
return masks
}
println(intToMasks(3)) // prints "[1,2]"
println(intToMasks(1000)) // prints "[8,32,64,128,256,512]"
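In newer Swift (4 or later) the same decomposition can be written more compactly over the bit positions; a sketch, equivalent to the above for non-negative values:
let n = 1000
let masks = (0..<Int.bitWidth).map { 1 << $0 }.filter { n & $0 != 0 }
// [8, 32, 64, 128, 256, 512]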
public extension UnsignedInteger {
/// The digits that make up this number.
/// - Parameter radix: The base the result will use.
func digits(radix: Self = 10) -> [Self] {
sequence(state: self) { quotient in
guard quotient > 0
else { return nil }
let division = quotient.quotientAndRemainder(dividingBy: radix)
quotient = division.quotient
return division.remainder
}
.reversed()
}
}
let digits = (6 as UInt).digits(radix: 0b10) // [1, 1, 0]
digits.reversed().enumerated().map { $1 << $0 } // [0, 2, 4]
Reverse the result too, if you need it.