How to generate a random number in Swift? - swift

I realize the Swift book provided an implementation of a random number generator. Is the best practice to copy and paste this implementation? Or is there a library that does this that we can use now?

Swift 4.2+
Swift 4.2, shipped with Xcode 10, introduces new, easy-to-use random APIs for many data types.
You simply call the random(in:) method on numeric types, passing the range you want (and random() on Bool).
let randomInt = Int.random(in: 0..<6)
let randomDouble = Double.random(in: 2.71828...3.14159)
let randomBool = Bool.random()
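Swift 4.2 also lets you supply your own RandomNumberGenerator via random(in:using:), which is handy for reproducible sequences in tests. Below is a minimal sketch of a seedable generator; the SplitMix64 name and constants are my own choice, not part of the standard library.
struct SplitMix64: RandomNumberGenerator {
    private var state: UInt64
    init(seed: UInt64) { state = seed }
    mutating func next() -> UInt64 {
        // Standard SplitMix64 step: advance the state, then mix the bits.
        state &+= 0x9E37_79B9_7F4A_7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58_476D_1CE4_E5B9
        z = (z ^ (z >> 27)) &* 0x94D0_49BB_1331_11EB
        return z ^ (z >> 31)
    }
}
var rng = SplitMix64(seed: 42)
let reproducibleRoll = Int.random(in: 1...6, using: &rng)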

Use arc4random_uniform(n) for a random integer between 0 and n-1.
let diceRoll = Int(arc4random_uniform(6) + 1)
Cast the result to Int so you don't have to explicitly type your vars as UInt32 (which seems un-Swifty).

Edit: Updated for Swift 3.0
arc4random works well in Swift, but the base functions are limited to 32-bit integer types (Int is 64-bit on iPhone 5S and modern Macs). Here's a generic function for a random number of a type expressible by an integer literal:
public func arc4random<T: ExpressibleByIntegerLiteral>(_ type: T.Type) -> T {
var r: T = 0
arc4random_buf(&r, MemoryLayout<T>.size)
return r
}
We can use this new generic function to extend UInt64, adding boundary arguments and mitigating modulo bias. (This is lifted straight from arc4random.c)
public extension UInt64 {
public static func random(lower: UInt64 = min, upper: UInt64 = max) -> UInt64 {
var m: UInt64
let u = upper - lower
var r = arc4random(UInt64.self)
if u > UInt64(Int64.max) {
m = 1 + ~u
} else {
m = ((max - (u * 2)) + 1) % u
}
while r < m {
r = arc4random(UInt64.self)
}
return (r % u) + lower
}
}
With that we can extend Int64 for the same arguments, dealing with overflow:
public extension Int64 {
public static func random(lower: Int64 = min, upper: Int64 = max) -> Int64 {
let (s, overflow) = Int64.subtractWithOverflow(upper, lower)
let u = overflow ? UInt64.max - UInt64(~s) : UInt64(s)
let r = UInt64.random(upper: u)
if r > UInt64(Int64.max) {
return Int64(r - (UInt64(~lower) + 1))
} else {
return Int64(r) + lower
}
}
}
To complete the family...
private let _wordSize = __WORDSIZE
public extension UInt32 {
public static func random(lower: UInt32 = min, upper: UInt32 = max) -> UInt32 {
return arc4random_uniform(upper - lower) + lower
}
}
public extension Int32 {
public static func random(lower: Int32 = min, upper: Int32 = max) -> Int32 {
let r = arc4random_uniform(UInt32(Int64(upper) - Int64(lower)))
return Int32(Int64(r) + Int64(lower))
}
}
public extension UInt {
public static func random(lower: UInt = min, upper: UInt = max) -> UInt {
switch (_wordSize) {
case 32: return UInt(UInt32.random(lower: UInt32(lower), upper: UInt32(upper)))
case 64: return UInt(UInt64.random(lower: UInt64(lower), upper: UInt64(upper)))
default: return lower
}
}
}
public extension Int {
public static func random(lower: Int = min, upper: Int = max) -> Int {
switch (_wordSize) {
case 32: return Int(Int32.random(lower: Int32(lower), upper: Int32(upper)))
case 64: return Int(Int64.random(lower: Int64(lower), upper: Int64(upper)))
default: return lower
}
}
}
After all that, we can finally do something like this:
let diceRoll = UInt64.random(lower: 1, upper: 7)

Edit for Swift 4.2
Starting in Swift 4.2, instead of using the imported C function arc4random_uniform(), you can now use Swift’s own native functions.
// Generates integers starting with 0 up to, and including, 10
Int.random(in: 0 ... 10)
You can use random(in:) to get random values for other primitive types as well, such as Int, Double and Float; use random() for Bool.
Swift versions < 4.2
This method will generate a random Int value between the given minimum and maximum
func randomInt(min: Int, max: Int) -> Int {
return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
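For example, a die roll with this helper (just a usage sketch):
let roll = randomInt(min: 1, max: 6) // 1...6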

I used this code (note that random() is the C library function, so it should be seeded with srandom() if you don't want the same sequence every run):
let k = random() % 10

As of iOS 9, you can use the new GameplayKit classes to generate random numbers in a variety of ways.
You have four source types to choose from: a general random source (unnamed, down to the system to choose what it does), linear congruential, ARC4 and Mersenne Twister. These can generate random ints, floats and bools.
At the simplest level, you can generate a random number from the system's built-in random source like this:
GKRandomSource.sharedRandom().nextInt()
That generates a number between -2,147,483,648 and 2,147,483,647. If you want a number between 0 and an upper bound (exclusive) you'd use this:
GKRandomSource.sharedRandom().nextIntWithUpperBound(6)
GameplayKit has some convenience constructors built in to work with dice. For example, you can roll a six-sided die like this:
let d6 = GKRandomDistribution.d6()
d6.nextInt()
Plus you can shape the random distribution by using things like GKShuffledDistribution. That takes a little more explaining, but if you're interested you can read my tutorial on GameplayKit random numbers.
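To give a rough idea of GKShuffledDistribution (a hedged sketch, not taken from the tutorial): it deals values out roughly like a shuffled deck, so you avoid long runs of the same number.
import GameplayKit

let fairD6 = GKShuffledDistribution(lowestValue: 1, highestValue: 6)
let sixRolls = (0..<6).map { _ in fairD6.nextInt() } // values are dealt out like a shuffled deck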

You can do it the same way that you would in C:
let randomNumber = arc4random()
randomNumber is inferred to be of type UInt32 (a 32-bit unsigned integer)

Use arc4random_uniform()
Usage:
arc4random_uniform(someNumber: UInt32) -> UInt32
This gives you random integers in the range 0 to someNumber - 1.
The maximum value for UInt32 is 4,294,967,295 (that is, 2^32 - 1).
Examples:
Coin flip
let flip = arc4random_uniform(2) // 0 or 1
Dice roll
let roll = arc4random_uniform(6) + 1 // 1...6
Random day in October
let day = arc4random_uniform(31) + 1 // 1...31
Random year in the 1990s
let year = 1990 + arc4random_uniform(10)
General form:
let number = min + arc4random_uniform(max - min + 1)
where number, max, and min are UInt32.
What about...
arc4random()
You can also get a random number by using arc4random(), which produces a UInt32 between 0 and 2^32-1. Thus to get a random number between 0 and x-1, you can divide it by x and take the remainder. Or in other words, use the Remainder Operator (%):
let number = arc4random() % 5 // 0...4
However, this produces a slight modulo bias (some results come up more often than others), so arc4random_uniform() is recommended instead.
Converting to and from Int
Normally it would be fine to do something like this in order to convert back and forth between Int and UInt32:
let number: Int = 10
let random = Int(arc4random_uniform(UInt32(number)))
The problem, though, is that Int has a range of -2,147,483,648...2,147,483,647 on 32 bit systems and a range of -9,223,372,036,854,775,808...9,223,372,036,854,775,807 on 64 bit systems. Compare this to the UInt32 range of 0...4,294,967,295. The U of UInt32 means unsigned.
Consider the following errors:
UInt32(-1) // negative numbers cause integer overflow error
UInt32(4294967296) // numbers greater than 4,294,967,295 cause integer overflow error
So you just need to be sure that your input parameters are within the UInt32 range and that you don't need an output that is outside of that range either.
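If you'd rather not trap on bad input, a small checked wrapper is one option (a sketch, using only the arc4random_uniform call discussed above):
func safeRandom(upTo count: Int) -> Int? {
    // Reject counts that can't be represented as a positive UInt32.
    guard count > 0, let bound = UInt32(exactly: count) else { return nil }
    return Int(arc4random_uniform(bound))
}
safeRandom(upTo: 6)  // Optional(0...5)
safeRandom(upTo: -1) // nil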

Example for random number in between 10 (0-9);
import UIKit
let randomNumber = Int(arc4random_uniform(10))
Very easy code - simple and short.

I've been able to just use rand() to get a random CInt. You can make it an Int by using something like this:
let myVar: Int = Int(rand())
You can use your favourite C random function, and just convert to value to Int if needed.

@jstn's answer is good, but a bit verbose. Swift is known as a protocol-oriented language, so we can achieve the same result without having to implement boilerplate code for every type in the integer family, by adding a default implementation in a protocol extension.
public extension ExpressibleByIntegerLiteral {
public static func arc4random() -> Self {
var r: Self = 0
arc4random_buf(&r, MemoryLayout<Self>.size)
return r
}
}
Now we can do:
let i = Int.arc4random()
let j = UInt32.arc4random()
and all other integer types work as well.

Updated: June 09, 2022.
Swift 5.7
Let's assume we have an array:
let numbers: [Int] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
For iOS and macOS you can use the system-wide random source from the GameplayKit framework. Here you can find the GKRandomSource class with its sharedRandom() class method:
import GameplayKit
private func randomNumberGenerator() -> Int {
let rand = GKRandomSource.sharedRandom().nextInt(upperBound: numbers.count)
return numbers[rand]
}
randomNumberGenerator()
Also you can use a randomElement() method that returns a random element of a collection:
let randomNumber = numbers.randomElement()!
print(randomNumber)
Or use arc4random_uniform(). Note that this function returns UInt32, which is why the result is wrapped in Int here.
let generator = Int(arc4random_uniform(11))
print(generator)
And, of course, we can use a makeIterator() method that returns an iterator over the elements of the collection.
let iterator: Int = (1...10).makeIterator().shuffled().first!
print(iterator)
The final example you see here returns a random value within the specified range with a help of static func random(in range: ClosedRange<Int>) -> Int.
let randomizer = Int.random(in: 1...10)
print(randomizer)
Pseudo-random Double number generator drand48() returns a value between 0.0 and 1.0.
import Foundation
let randomInt = Int(drand48() * 10)
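drand48() is deterministic unless seeded, so a common pattern (a small sketch) is to seed it once from the clock before use:
import Foundation

srand48(Int(Date().timeIntervalSince1970)) // seed once
let randomFraction = drand48()             // 0.0 ..< 1.0
let randomDigit = Int(drand48() * 10)      // 0 ... 9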

In Swift 4.2 you can generate random numbers by calling the random() method on whatever numeric type you want, providing the range you want to work with. For example, this generates a random number in the range 1 through 9, inclusive on both sides
let randInt = Int.random(in: 1..<10)
Also with other types
let randFloat = Float.random(in: 1..<20)
let randDouble = Double.random(in: 1...30)
let randCGFloat = CGFloat.random(in: 1...40)

Since Swift 4.2
There is a new set of APIs:
let randomIntFrom0To10 = Int.random(in: 0 ..< 10)
let randomDouble = Double.random(in: 1 ... 10)
All numeric types now have the random(in:) method that takes a range.
It returns a number uniformly distributed in that range.
TL;DR
Well, what is wrong with the "good" old way?
You have to use imported C APIs (They are different between platforms).
And moreover...
What if I told you that the random is not that random?
If you use arc4random() with the remainder operator, as in arc4random() % aNumber, the result is not uniformly distributed between 0 and aNumber. There is a problem called modulo bias.
Modulo bias
Normally, the function generates a random number between 0 and MAX (depending on the type, etc.). To make a quick, easy example, let's say the max number is 7 and you care about a random number in the range 0 ..< 3 (or the interval [0, 3) if you prefer that).
The probabilities for individual numbers are:
0: 3/8 = 37.5%
1: 3/8 = 37.5%
2: 2/8 = 25%
In other words, you are more likely to end up with 0 or 1 than 2.
Of course, bear in mind that this is extremely simplified and in reality the MAX number is much higher, making the bias less pronounced.
This problem is addressed by SE-0202 - Random unification in Swift 4.2
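You can see the bias from the example above with a tiny enumeration: an 8-value source (0...7) reduced with % 3 lands on 0 and 1 more often than on 2.
var counts = [0, 0, 0]
for value in 0...7 { counts[value % 3] += 1 }
print(counts) // [3, 3, 2] -> probabilities 3/8, 3/8, 2/8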

Here is a library that does the job well
https://github.com/thellimist/SwiftRandom
public extension Int {
/// SwiftRandom extension
public static func random(lower: Int = 0, _ upper: Int = 100) -> Int {
return lower + Int(arc4random_uniform(UInt32(upper - lower + 1)))
}
}
public extension Double {
/// SwiftRandom extension
public static func random(lower: Double = 0, _ upper: Double = 100) -> Double {
return (Double(arc4random()) / 0xFFFFFFFF) * (upper - lower) + lower
}
}
public extension Float {
/// SwiftRandom extension
public static func random(lower: Float = 0, _ upper: Float = 100) -> Float {
return (Float(arc4random()) / 0xFFFFFFFF) * (upper - lower) + lower
}
}
public extension CGFloat {
/// SwiftRandom extension
public static func random(lower: CGFloat = 0, _ upper: CGFloat = 1) -> CGFloat {
return CGFloat(Float(arc4random()) / Float(UINT32_MAX)) * (upper - lower) + lower
}
}
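Usage, following the signatures above (the first argument keeps its lower: label, the second is unlabeled):
let i = Int.random(lower: 5, 10)   // 5 ... 10
let d = Double.random(lower: 0, 1) // 0.0 ... 1.0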

let MAX : UInt32 = 9
let MIN : UInt32 = 1
func randomNumber()
{
var random_number = Int(arc4random_uniform(MAX) + MIN)
print ("random = ", random_number);
}

I would like to add to the existing answers that the random number generator example in the Swift book is a linear congruential generator (LCG); it is severely limited and shouldn't be used except for the most trivial examples, where the quality of randomness doesn't matter at all. An LCG should never be used for cryptographic purposes.
arc4random() is much better and can be used for most purposes, but again should not be used for cryptographic purposes.
If you want something that is guaranteed to be cryptographically secure, use SecRandomCopyBytes(). Note that if you build a random number generator into something, someone else might end up (mis)using it for cryptographic purposes (such as password, key or salt generation); in that case you should consider using SecRandomCopyBytes() anyway, even if your need doesn't quite require it.
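For reference, a minimal SecRandomCopyBytes() sketch that fills eight bytes from the system CSPRNG and assembles them into a UInt64 (my own wrapper, not part of the answer above):
import Security

var bytes = [UInt8](repeating: 0, count: 8)
let status = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)
if status == errSecSuccess {
    let value = bytes.reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    print(value)
}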

Swift 4.2
Say goodbye to the imported C function arc4random_uniform() from Foundation.
// 1
let digit = Int.random(in: 0..<10)
// 2
if let anotherDigit = (0..<10).randomElement() {
print(anotherDigit)
} else {
print("Empty range.")
}
// 3
let double = Double.random(in: 0..<1)
let float = Float.random(in: 0..<1)
let cgFloat = CGFloat.random(in: 0..<1)
let bool = Bool.random()
You use random(in:) to generate random digits from ranges.
randomElement() returns nil if the range is empty, so you unwrap the returned Int? with if let.
You use random(in:) to generate a random Double, Float or CGFloat and random() to return a random Bool.
See the official documentation for more.

var randomNumber = Int(arc4random_uniform(UInt32(5)))
Here 5 ensures that the random number is generated from zero to four. You can set the value accordingly.

arc4random_uniform() is unavailable in some versions of Xcode (in 7.1 it runs but doesn't autocomplete for me). You can do this instead.
To generate a random number from 0-5.
First
import GameplayKit
Then
let diceRoll = GKRandomSource.sharedRandom().nextIntWithUpperBound(6)

The following code will produce a secure random number between 0 and 255:
extension UInt8 {
public static var random: UInt8 {
var number: UInt8 = 0
_ = SecRandomCopyBytes(kSecRandomDefault, 1, &number)
return number
}
}
You call it like this:
print(UInt8.random)
For bigger numbers it becomes more complicated.
This is the best I could come up with:
extension UInt16 {
public static var random: UInt16 {
let count = Int(UInt8.random % 2) + 1
var numbers = [UInt8](repeating: 0, count: 2)
_ = SecRandomCopyBytes(kSecRandomDefault, count, &numbers)
return numbers.reversed().reduce(0) { $0 << 8 + UInt16($1) }
}
}
extension UInt32 {
public static var random: UInt32 {
let count = Int(UInt8.random % 4) + 1
var numbers = [UInt8](repeating: 0, count: 4)
_ = SecRandomCopyBytes(kSecRandomDefault, count, &numbers)
return numbers.reversed().reduce(0) { $0 << 8 + UInt32($1) }
}
}
These methods use an extra random number to determine how many UInt8s are going to be used to create the random number. The last line converts the [UInt8] to UInt16 or UInt32.
I don't know if the last two still count as truly random, but you can tweak them to your liking :)
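The same idea can be written once for any unsigned fixed-width integer instead of one extension per type. This is just a sketch built on the answer above, and it always fills the full byte width rather than a random count:
extension FixedWidthInteger where Self: UnsignedInteger {
    static var secureRandom: Self {
        var bytes = [UInt8](repeating: 0, count: MemoryLayout<Self>.size)
        _ = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)
        // Assemble the bytes, most significant first.
        return bytes.reduce(0) { ($0 << 8) | Self($1) }
    }
}
let r16 = UInt16.secureRandom
let r64 = UInt64.secureRandom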

Swift 4.2
Swift 4.2 has included a native and fairly full-featured random number API in the standard library. (Swift Evolution proposal SE-0202)
let intBetween0to9 = Int.random(in: 0...9)
let doubleBetween0to1 = Double.random(in: 0...1)
All numeric types have the static random(in:) method, which takes a range and returns a random number within it.

Xcode 14, Swift 5
public extension Array where Element == Int {
static func generateNonRepeatedRandom(size: Int) -> [Int] {
guard size > 0 else {
return [Int]()
}
return Array(0..<size).shuffled()
}
}
How to use:
let array = Array.generateNonRepeatedRandom(size: 15)
print(array)
Output: the numbers 0..<15 in shuffled order.

You can use GeneratorOf like this:
var fibs = ArraySlice([1, 1])
var fibGenerator = GeneratorOf{
_ -> Int? in
fibs.append(fibs.reduce(0, combine:+))
return fibs.removeAtIndex(0)
}
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())

I use this code to pick a random element:
//
// FactModel.swift
// Collection
//
// Created by Ahmadreza Shamimi on 6/11/16.
// Copyright © 2016 Ahmadreza Shamimi. All rights reserved.
//
import GameplayKit
struct FactModel {
let fun = ["I love swift","My name is Ahmadreza","I love coding" ,"I love PHP","My name is ALireza","I love Coding too"]
func getRandomNumber() -> String {
let randomNumber = GKRandomSource.sharedRandom().nextIntWithUpperBound(fun.count)
return fun[randomNumber]
}
}
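For example (just a usage sketch):
let model = FactModel()
print(model.getRandomNumber()) // prints one of the six strings at random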

Details
Xcode 9.1, Swift 4
Math oriented solution (1)
import Foundation
class Random {
subscript<T>(_ min: T, _ max: T) -> T where T : BinaryInteger {
get {
return rand(min-1, max+1)
}
}
}
let rand = Random()
func rand<T>(_ min: T, _ max: T) -> T where T : BinaryInteger {
let _min = min + 1
let difference = max - _min
return T(arc4random_uniform(UInt32(difference))) + _min
}
Usage of solution (1)
let x = rand(-5, 5) // x = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
let x = rand[0, 10] // x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Programmers oriented solution (2)
Do not forget to add Math oriented solution (1) code here
import Foundation
extension CountableRange where Bound : BinaryInteger {
var random: Bound {
return rand(lowerBound-1, upperBound)
}
}
extension CountableClosedRange where Bound : BinaryInteger {
var random: Bound {
return rand[lowerBound, upperBound]
}
}
Usage of solution (2)
let x = (-8..<2).random // x = [-8, -7, -6, -5, -4, -3, -2, -1, 0, 1]
let x = (0..<10).random // x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
let x = (-10 ... -2).random // x = [-10, -9, -8, -7, -6, -5, -4, -3, -2]
Full Sample
Do not forget to add solution (1) and solution (2) codes here
private func generateRandNums(closure:()->(Int)) {
var allNums = Set<Int>()
for _ in 0..<100 {
allNums.insert(closure())
}
print(allNums.sorted{ $0 < $1 })
}
generateRandNums {
(-8..<2).random
}
generateRandNums {
(0..<10).random
}
generateRandNums {
(-10 ... -2).random
}
generateRandNums {
rand(-5, 5)
}
generateRandNums {
rand[0, 10]
}
Sample result: each call prints the sorted set of values produced by 100 draws from the corresponding range.

Related

Split fractional and integral parts of a Double [duplicate]

I'm trying to separate the decimal and integer parts of a double in Swift. I've tried a number of approaches, but they all run into the same issue...
let x:Double = 1234.5678
let n1:Double = x % 1.0 // n1 = 0.567800000000034
let n2:Double = x - 1234.0 // same result
let n3:Double = modf(x, &integer) // same result
Is there a way to get 0.5678 instead of 0.567800000000034 without converting to the number to a string?
You can use truncatingRemainder and 1 as the divider.
Returns the remainder of this value divided by the given value using truncating division.
Apple doc
Example:
let myDouble1: Double = 12.25
let myDouble2: Double = 12.5
let myDouble3: Double = 12.75
let remainder1 = myDouble1.truncatingRemainder(dividingBy: 1)
let remainder2 = myDouble2.truncatingRemainder(dividingBy: 1)
let remainder3 = myDouble3.truncatingRemainder(dividingBy: 1)
remainder1 -> 0.25
remainder2 -> 0.5
remainder3 -> 0.75
Same approach as Alessandro Ornano's, implemented as instance properties in a FloatingPoint protocol extension:
Xcode 11 • Swift 5.1
import Foundation
extension FloatingPoint {
var whole: Self { modf(self).0 }
var fraction: Self { modf(self).1 }
}
1.2.whole // 1
1.2.fraction // 0.2
If you need the fraction digits and want to preserve their precision, you need to use the Swift Decimal type and initialize it with a String:
extension Decimal {
func rounded(_ roundingMode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
var result = Decimal()
var number = self
NSDecimalRound(&result, &number, 0, roundingMode)
return result
}
var whole: Decimal { rounded(sign == .minus ? .up : .down) }
var fraction: Decimal { self - whole }
}
let decimal = Decimal(string: "1234.99999999")! // 1234.99999999
let fractional = decimal.fraction // 0.99999999
let whole = decimal.whole // 1234
let sum = whole + fractional // 1234.99999999
let negativeDecimal = Decimal(string: "-1234.99999999")! // -1234.99999999
let negativefractional = negativeDecimal.fraction // -0.99999999
let negativeWhole = negativeDecimal.whole // -1234
let negativeSum = negativeWhole + negativefractional // -1234.99999999
Swift 2:
You can use:
modf(x).1
or
x % floor(abs(x))
Without converting it to a string, you can round up to a number of decimal places like this:
let x:Double = 1234.5678
let numberOfPlaces:Double = 4.0
let powerOfTen:Double = pow(10.0, numberOfPlaces)
let targetedDecimalPlaces:Double = round((x % 1.0) * powerOfTen) / powerOfTen
Your output would be
0.5678
Swift 5.1
let x:Double = 1234.5678
let decimalPart:Double = x.truncatingRemainder(dividingBy: 1) //0.5678
let integerPart:Double = x.rounded(.towardZero) //1234
Both of these methods return Double value.
If you want the integer part as an Int, you can just use
Int(x)
Use Float, since it displays fewer precision digits than Double:
let x:Double = 1234.5678
let n1:Float = Float(x % 1) // n1 = 0.5678
There’s a function in C’s math library, and many programming languages, Swift included, give you access to it. It’s called modf, and in Swift, it works like this
// modf returns a 2-element tuple,
// with the whole number part in the first element,
// and the fraction part in the second element
let splitPi = modf(3.141592)
splitPi.0 // 3.0
splitPi.1 // 0.141592
You can create an extension like below,
extension Double {
func getWholeNumber() -> Double {
return modf(self).0
}
func getFractionNumber() -> Double {
return modf(self).1
}
}
You can get the Integer part like this:
let d: Double = 1.23456e12
let intparttruncated = trunc(d)
let intpartroundlower = Int(d)
The trunc() function truncates the part after the decimal point, and converting with Int() also truncates toward zero (for example, Int(-1.7) is -1), so the two agree for positive and negative numbers alike. If you subtract the truncated part from d, you get the fractional part.
func frac (_ v: Double) -> Double
{
return (v - trunc(v))
}
You can get Mantissa and Exponent of a Double value like this:
let d: Double = 1.23456e78
let exponent = trunc(log(d) / log(10.0))
let mantissa = d / pow(10, trunc(log(d) / log(10.0)))
Your result will be 78 for the exponent and 1.23456 for the Mantissa.
Hope this helps you.
It's impossible to create a solution that works exactly for all Doubles. And if the other answers ever worked, which I doubt, they don't anymore:
let _5678 = 1234.5678.description.drop { $0 != "." } .description // ".5678"
Double(_5678) // 0.5678
let _567 = 1234.567.description.drop { $0 != "." } .description // ".567"
Double(_567) // 0.5669999999999999
extension Double {
/// Gets the decimal (fractional) part from a double.
var decimal: Double {
Double("0." + String(string.split(separator: ".").last ?? "0")) ?? 0.0
}
var string: String {
String(self)
}
}
This appears to solve the Double precision issues.
Usage:
print(34.46979988898988.decimal) // outputs 0.46979988898988
print(34.46.decimal) // outputs 0.46

How do I convert a bitmask Int into a set of Ints?

I want a function that takes in a bitmask Int, and returns its masked values as a set of Ints. Something like this:
func split(bitmask: Int) -> Set<Int> {
// Do magic
}
such that
split(bitmask: 0b01001110) == [0b1000000, 0b1000, 0b100, 0b10]
One solution is to check each bit and add the corresponding mask if the bit is set.
func split(bitmask: Int) -> Set<Int> {
var results = Set<Int>()
// Change 31 to 63 or some other appropriate number based on how big your numbers can be
for shift in 0...31 {
let mask = 1 << shift
if bitmask & mask != 0 {
results.insert(mask)
}
}
return results
}
print(split(bitmask: 0b01001110))
For the binary number 0b01001110 the results will be:
[64, 2, 4, 8]
which are the decimal equivalent of the results in your question.
For the hex number 0x01001110 (which is 1000000000001000100010000 in binary) the results will be:
[16, 256, 4096, 16777216]
Here's another solution that doesn't need to know the size of the value and it's slightly more efficient for smaller numbers:
func split(bitmask: Int) -> Set<Int> {
var results = Set<Int>()
var value = bitmask
var mask = 1
while value > 0 {
if value % 2 == 1 {
results.insert(mask)
}
value /= 2
mask = mask &* 2
}
return results
}
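Another sketch, in case it's useful: you can visit only the set bits by repeatedly isolating and clearing the lowest one, using trailingZeroBitCount and the classic v & (v - 1) trick. This helper is my own addition, not part of the answers above.
func splitSetBits(_ bitmask: Int) -> Set<Int> {
    var results = Set<Int>()
    var v = bitmask
    while v != 0 {
        results.insert(1 << v.trailingZeroBitCount) // isolate the lowest set bit
        v &= v &- 1                                 // clear it (wrapping subtraction avoids a trap on Int.min)
    }
    return results
}
print(splitSetBits(0b01001110)) // [2, 4, 8, 64] in some order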
Note that the most common use cases for bit masks include packing a collection of specific, meaningful Boolean flags into a single word-sized value, and performing tests against those flags. Swift provides facilities for this in the OptionSet type.
struct Bits: OptionSet {
let rawValue: UInt // unsigned is usually best for bitfield math
init(rawValue: UInt) { self.rawValue = rawValue }
static let one = Bits(rawValue: 0b1)
static let two = Bits(rawValue: 0b10)
static let four = Bits(rawValue: 0b100)
static let eight = Bits(rawValue: 0b1000)
}
let someBits = Bits(rawValue: 13)
// the following all return true:
someBits.contains(.four)
someBits.isDisjoint(with: .two)
someBits == [.one, .four, .eight]
someBits == [.four, .four, .eight, .one] // set algebra: order/duplicates moot
someBits == Bits(rawValue: 0b1101)
(In real-world use, of course, you'd give each of the "element" values in your OptionSet type some value that's meaningful to your use case.)
An OptionSet is actually a single value (that supports set algebra in terms of itself, instead of in terms of an element type), so it's not a collection — that is, it doesn't provide a way to enumerate its elements. But if the way you intend to use a bitmask only requires setting and testing specific flags (or combinations of flags), maybe you don't need a way to enumerate elements.
And if you do need to enumerate elements, but also want all the set algebra features of OptionSet, you can combine OptionSet with bit-splitting math such as that found in @rmaddy's answer:
extension OptionSet where RawValue == UInt { // try being more generic?
var discreteElements: [Self] {
var result = [Self]()
var bitmask = self.rawValue
var element = RawValue(1)
while bitmask > 0 && element < ~RawValue.allZeros {
if bitmask & 0b1 == 1 {
result.append(Self(rawValue: element))
}
bitmask >>= 1
element <<= 1
}
return result
}
}
someBits.discreteElements.map({$0.rawValue}) // => [1, 4, 8]
Here's my "1 line" version:
let values = Set(Array(String(0x01001110, radix: 2).characters).reversed().enumerated().map { (offset, element) -> Int in
Int(String(element))! << offset
}.filter { $0 != 0 })
Not super efficient, but fun!
Edit: wrapped in a split function...
func split(bitmask: Int) -> Set<Int> {
return Set(Array(String(bitmask, radix: 2).characters).reversed().enumerated().map { (offset, element) -> Int in
Int(String(element))! << offset
}.filter { $0 != 0 })
}
Edit: a bit shorter
let values = Set(String(0x01001110, radix: 2).utf8.reversed().enumerated().map { (offset, element) -> Int in
Int(element-48) << offset
}.filter { $0 != 0 })

Generating random numbers with Swift

I need to generate a random number.
It appears the arc4random function no longer exists as well as the arc4random_uniform function.
The options I have are arc4random_stir(), arc4random_buf(UnsafeMutablePointer<Void>, Int), and arc4random_addrandom(UnsafeMutablePointer<UInt8>, Int32).
I can't find any docs on the functions and no comments in the header files give hints.
let randomIntFrom0To10 = Int.random(in: 0..<10)
let randomFloat = Float.random(in: 0..<1)
// if you want to get a random element in an array
let greetings = ["hey", "hi", "hello", "hola"]
greetings.randomElement()
You could try as well:
let diceRoll = Int(arc4random_uniform(UInt32(6)))
I had to add "UInt32" to make it work.
Just call this function with the minimum and maximum of your range and you will get a random number.
e.g. randomNumber(MIN: 0, MAX: 10) will give you a number between 0 and 9.
func randomNumber(MIN: Int, MAX: Int)-> Int{
return Int(arc4random_uniform(UInt32(MAX-MIN)) + UInt32(MIN));
}
Note: the output is always an integer.
After some investigation I wrote this:
import Foundation
struct Math {
private static var seeded = false
static func randomFractional() -> CGFloat {
if !Math.seeded {
let time = Int(NSDate().timeIntervalSinceReferenceDate)
srand48(time)
Math.seeded = true
}
return CGFloat(drand48())
}
}
Now you can just call Math.randomFractional() to get random numbers in [0, 1) without having to remember to seed first. Hope this helps someone :o)
Update with Swift 4.2:
let randomInt = Int.random(in: 1..<5)
let randomFloat = Float.random(in: 1..<10)
let randomDouble = Double.random(in: 1...100)
let randomCGFloat = CGFloat.random(in: 1...1000)
Another option is to use the xorshift128plus algorithm:
func xorshift128plus(seed0 : UInt64, _ seed1 : UInt64) -> () -> UInt64 {
var state0 : UInt64 = seed0
var state1 : UInt64 = seed1
if state0 == 0 && state1 == 0 {
state0 = 1 // both state variables cannot be 0
}
func rand() -> UInt64 {
var s1 : UInt64 = state0
let s0 : UInt64 = state1
state0 = s0
s1 ^= s1 << 23
s1 ^= s1 >> 17
s1 ^= s0
s1 ^= s0 >> 26
state1 = s1
return UInt64.addWithOverflow(state0, state1).0
}
return rand
}
This algorithm has a period of 2^128 - 1 and passes all the tests of the BigCrush test suite. Note that while this is a high-quality pseudo-random number generator with a long period, it is not a cryptographically secure random number generator.
You could seed it from the current time or any other random source of entropy. For example, if you had a function called urand64() that read a UInt64 from /dev/urandom, you could use it like this:
let rand = xorshift128plus(urand64(), urand64())
for _ in 1...10 {
print(rand())
}
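The answer leaves urand64() undefined; one possible sketch (my own, reading from /dev/urandom via Foundation) looks like this:
import Foundation

func urand64() -> UInt64 {
    guard let handle = FileHandle(forReadingAtPath: "/dev/urandom") else { return 0 }
    defer { handle.closeFile() }
    // Read 8 bytes and pack them into a UInt64.
    return handle.readData(ofLength: 8).reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
}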
let MAX : UInt32 = 9
let MIN : UInt32 = 1
func randomNumber()
{
var random_number = Int(arc4random_uniform(MAX) + MIN)
print ("random = ", random_number);
}
In Swift 3:
It will generate a random number from 0 up to, but not including, limit.
let limit : UInt32 = 6
print("Random Number : \(arc4random_uniform(limit))")
My implementation as an Int extension. Will generate random numbers in range from..<to
public extension Int {
static func random(from: Int, to: Int) -> Int {
guard to > from else {
assertionFailure("Can not generate negative random numbers")
return 0
}
return Int(arc4random_uniform(UInt32(to - from)) + UInt32(from))
}
}
This is how I get a random number between 2 int's!
func randomNumber(MIN: Int, MAX: Int)-> Int{
var list : [Int] = []
for i in MIN...MAX {
list.append(i)
}
return list[Int(arc4random_uniform(UInt32(list.count)))]
}
usage:
print("My Random Number is: \(randomNumber(MIN:-10,MAX:10))")
Another option is to use GKMersenneTwisterRandomSource from GameplayKit. The docs say:
A deterministic pseudo-random source that generates random numbers based on a mersenne twister algorithm. This is a deterministic random source suitable for creating reliable gameplay mechanics. It is slightly slower than an Arc4 source, but more random, in that it has a longer period until repeating sequences. While deterministic, this is not a cryptographic random source. It is however suitable for obfuscation of gameplay data.
import GameplayKit
let minValue = 0
let maxValue = 100
var randomDistribution: GKRandomDistribution?
let randomSource = GKMersenneTwisterRandomSource()
randomDistribution = GKRandomDistribution(randomSource: randomSource, lowestValue: minValue, highestValue: maxValue)
let number = randomDistribution?.nextInt() ?? 0
print(number)
Example taken from Apple's sample code: https://github.com/carekit-apple/CareKit/blob/master/CareKitPrototypingTool/OCKPrototyper/CareKitPatient/RandomNumberGeneratorHelper.swift
I'm late to the party 🤩🎉
Using a function that allows you to change the size of the array and the range selection on the fly is the most versatile method. You can also use map, so it's very concise. I use it in all of my performance testing/benchmarking.
elements is the number of items in the array, and only numbers from 0...max are included.
func randArr(_ elements: Int, _ max: Int) -> [Int] {
return (0..<elements).map{ _ in Int.random(in: 0...max) }
}
Code Sense / Placeholders look like this.
randArr(elements: Int, max: Int)
10 elements in my array ranging from 0 to 1000.
randArr(10, 1000) // [554, 8, 54, 87, 10, 33, 349, 888, 2, 77]
You can use this to pick from a specific set of values:
let die = [1, 2, 3, 4, 5, 6]
let firstRoll = die[Int(arc4random_uniform(UInt32(die.count)))]
let secondRoll = die[Int(arc4random_uniform(UInt32(die.count)))]
Let's code it in Swift, for a random number or a random string :)
let quotes: NSArray = ["R", "A", "N", "D", "O", "M"]
let randomNumber = arc4random_uniform(UInt32(quotes.count))
let quoteString = quotes[Int(randomNumber)]
print(quoteString)
It will print a random element.
Don't forget that some numbers will repeat, so you need to do something like the following. My totalQuestions was 47.
func getRandomNumbers(totalQuestions:Int) -> NSMutableArray
{
var arrayOfRandomQuestions: [Int] = []
print("arraySizeRequired = 40")
print("totalQuestions = \(totalQuestions)")
//This will output a 40 random numbers between 0 and totalQuestions (47)
while arrayOfRandomQuestions.count < 40
{
let limit: UInt32 = UInt32(totalQuestions)
let theRandomNumber = (Int(arc4random_uniform(limit)))
if arrayOfRandomQuestions.contains(theRandomNumber)
{
print("ping")
}
else
{
//item not found
arrayOfRandomQuestions.append(theRandomNumber)
}
}
print("Random Number set = \(arrayOfRandomQuestions)")
print("arrayOutputCount = \(arrayOfRandomQuestions.count)")
return NSMutableArray(array: arrayOfRandomQuestions) // [Int] bridges to NSArray, not NSMutableArray, so build one explicitly
}
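A shorter alternative sketch on Swift 4.2 and later: shuffle the index range once and take the first 40, which guarantees there are no repeats.
let totalQuestions = 47
let randomQuestions = Array((0..<totalQuestions).shuffled().prefix(40))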
Look, I had the same problem, but I inserted the function as a global variable, like this:
var RNumber = Int(arc4random_uniform(9)+1)
func GetCase(){
your code
}
Obviously this is not efficient, so I just copied and pasted the code into the function so it would be reusable; Xcode then suggested making the var a constant, so my code became:
func GetCase() {
let RNumber = Int(arc4random_uniform(9)+1)
if categoria == 1 {
}
}
Well, that's only part of my code, so Xcode warned me about something immutable and uninitialized, but it built the app anyway and the warning simply disappeared.
Hope it helps.

Rounding a double value to x number of decimal places in swift

Can anyone tell me how to round a double value to x number of decimal places in Swift?
I have:
var totalWorkTimeInHours = (totalWorkTime/60/60)
With totalWorkTime being an NSTimeInterval (double) in second.
totalWorkTimeInHours will give me the hours, but it gives me the amount of time in such a long precise number e.g. 1.543240952039......
How do I round this down to, say, 1.543 when I print totalWorkTimeInHours?
You can use Swift's round function to accomplish this.
To round a Double with 3 digits precision, first multiply it by 1000, round it and divide the rounded result by 1000:
let x = 1.23556789
let y = Double(round(1000 * x) / 1000)
print(y) /// 1.236
Unlike any kind of printf(...) or String(format: ...) solutions, the result of this operation is still of type Double.
EDIT:
Regarding the comments that it sometimes does not work, please read this: What Every Programmer Should Know About Floating-Point Arithmetic
Extension for Swift 2
A more general solution is the following extension, which works with Swift 2 & iOS 9:
extension Double {
/// Rounds the double to decimal places value
func roundToPlaces(places:Int) -> Double {
let divisor = pow(10.0, Double(places))
return round(self * divisor) / divisor
}
}
Extension for Swift 3
In Swift 3 round is replaced by rounded:
extension Double {
/// Rounds the double to decimal places value
func rounded(toPlaces places:Int) -> Double {
let divisor = pow(10.0, Double(places))
return (self * divisor).rounded() / divisor
}
}
Example which returns Double rounded to 4 decimal places:
let x = Double(0.123456789).roundToPlaces(4) // x becomes 0.1235 under Swift 2
let x = Double(0.123456789).rounded(toPlaces: 4) // Swift 3 version
How do I round this down to, say, 1.543 when I print totalWorkTimeInHours?
To round totalWorkTimeInHours to 3 digits for printing, use the String constructor which takes a format string:
print(String(format: "%.3f", totalWorkTimeInHours))
With Swift 5, according to your needs, you can choose one of the 9 following styles in order to have a rounded result from a Double.
#1. Using FloatingPoint rounded() method
In the simplest case, you may use the Double rounded() method.
let roundedValue1 = (0.6844 * 1000).rounded() / 1000
let roundedValue2 = (0.6849 * 1000).rounded() / 1000
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#2. Using FloatingPoint rounded(_:) method
let roundedValue1 = (0.6844 * 1000).rounded(.toNearestOrEven) / 1000
let roundedValue2 = (0.6849 * 1000).rounded(.toNearestOrEven) / 1000
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#3. Using Darwin round function
Foundation offers a round function via Darwin.
import Foundation
let roundedValue1 = round(0.6844 * 1000) / 1000
let roundedValue2 = round(0.6849 * 1000) / 1000
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#4. Using a Double extension custom method built with Darwin round and pow functions
If you want to repeat the previous operation many times, refactoring your code can be a good idea.
import Foundation
extension Double {
func roundToDecimal(_ fractionDigits: Int) -> Double {
let multiplier = pow(10, Double(fractionDigits))
return Darwin.round(self * multiplier) / multiplier
}
}
let roundedValue1 = 0.6844.roundToDecimal(3)
let roundedValue2 = 0.6849.roundToDecimal(3)
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#5. Using NSDecimalNumber rounding(accordingToBehavior:) method
If needed, NSDecimalNumber offers a verbose but powerful solution for rounding decimal numbers.
import Foundation
let scale: Int16 = 3
let behavior = NSDecimalNumberHandler(roundingMode: .plain, scale: scale, raiseOnExactness: false, raiseOnOverflow: false, raiseOnUnderflow: false, raiseOnDivideByZero: true)
let roundedValue1 = NSDecimalNumber(value: 0.6844).rounding(accordingToBehavior: behavior)
let roundedValue2 = NSDecimalNumber(value: 0.6849).rounding(accordingToBehavior: behavior)
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#6. Using NSDecimalRound(_:_:_:_:) function
import Foundation
let scale = 3
var value1 = Decimal(0.6844)
var value2 = Decimal(0.6849)
var roundedValue1 = Decimal()
var roundedValue2 = Decimal()
NSDecimalRound(&roundedValue1, &value1, scale, NSDecimalNumber.RoundingMode.plain)
NSDecimalRound(&roundedValue2, &value2, scale, NSDecimalNumber.RoundingMode.plain)
print(roundedValue1) // returns 0.684
print(roundedValue2) // returns 0.685
#7. Using NSString init(format:arguments:) initializer
If you want to return a NSString from your rounding operation, using NSString initializer is a simple but efficient solution.
import Foundation
let roundedValue1 = NSString(format: "%.3f", 0.6844)
let roundedValue2 = NSString(format: "%.3f", 0.6849)
print(roundedValue1) // prints 0.684
print(roundedValue2) // prints 0.685
#8. Using String init(format:_:) initializer
Swift’s String type is bridged with Foundation’s NSString class. Therefore, you can use the following code in order to return a String from your rounding operation:
import Foundation
let roundedValue1 = String(format: "%.3f", 0.6844)
let roundedValue2 = String(format: "%.3f", 0.6849)
print(roundedValue1) // prints 0.684
print(roundedValue2) // prints 0.685
#9. Using NumberFormatter
If you expect to get a String? from your rounding operation, NumberFormatter offers a highly customizable solution.
import Foundation
let formatter = NumberFormatter()
formatter.numberStyle = NumberFormatter.Style.decimal
formatter.roundingMode = NumberFormatter.RoundingMode.halfUp
formatter.maximumFractionDigits = 3
let roundedValue1 = formatter.string(from: 0.6844)
let roundedValue2 = formatter.string(from: 0.6849)
print(String(describing: roundedValue1)) // prints Optional("0.684")
print(String(describing: roundedValue2)) // prints Optional("0.685")
In Swift 5.5 and Xcode 13.2:
let pi: Double = 3.14159265358979
String(format:"%.2f", pi)
P.S. It has stayed the same since Swift 2.0 and Xcode 7.2.
This is fully worked code for Swift 3.0/4.0/5.0, Xcode 9.0 GM/9.2 and above:
let doubleValue : Double = 123.32565254455
self.lblValue.text = String(format:"%.f", doubleValue)
print(self.lblValue.text)
output - 123
let doubleValue : Double = 123.32565254455
self.lblValue_1.text = String(format:"%.1f", doubleValue)
print(self.lblValue_1.text)
output - 123.3
let doubleValue : Double = 123.32565254455
self.lblValue_2.text = String(format:"%.2f", doubleValue)
print(self.lblValue_2.text)
output - 123.33
let doubleValue : Double = 123.32565254455
self.lblValue_3.text = String(format:"%.3f", doubleValue)
print(self.lblValue_3.text)
output - 123.326
Building on Yogi's answer, here's a Swift function that does the job:
func roundToPlaces(value:Double, places:Int) -> Double {
let divisor = pow(10.0, Double(places))
return round(value * divisor) / divisor
}
In Swift 3.0 and Xcode 8.0:
extension Double {
func roundTo(places: Int) -> Double {
let divisor = pow(10.0, Double(places))
return (self * divisor).rounded() / divisor
}
}
Use this extension like so:
let doubleValue = 3.567
let roundedValue = doubleValue.roundTo(places: 2)
print(roundedValue) // prints 3.57
Swift 4, Xcode 10
yourLabel.text = String(format:"%.2f", yourDecimalValue)
The code for specific digits after decimals is:
var a = 1.543240952039
var roundedString = String(format: "%.3f", a)
Here the %.3f tells Swift to round the number to 3 decimal places. If you want the result back as a Double, you can use this code:
// String to Double
var roundedDouble = Double(String(format: "%.3f", a))
Use the built-in Darwin library that comes with Foundation.
SWIFT 3
extension Double {
func round(to places: Int) -> Double {
let divisor = pow(10.0, Double(places))
return Darwin.round(self * divisor) / divisor
}
}
Usage:
let number:Double = 12.987654321
print(number.round(to: 3))
Outputs: 12.988
If you want to round Double values, you might want to use Swift's Decimal so you don't introduce any of the errors that can crop up when trying to do math with these rounded values. If you use Decimal, it can accurately represent decimal values of that rounded floating point value.
So you can do:
extension Double {
/// Convert `Double` to `Decimal`, rounding it to `scale` decimal places.
///
/// - Parameters:
/// - scale: How many decimal places to round to. Defaults to `0`.
/// - mode: The preferred rounding mode. Defaults to `.plain`.
/// - Returns: The rounded `Decimal` value.
func roundedDecimal(to scale: Int = 0, mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
var decimalValue = Decimal(self)
var result = Decimal()
NSDecimalRound(&result, &decimalValue, scale, mode)
return result
}
}
Then, you can get the rounded Decimal value like so:
let foo = 427.3000000002
let value = foo.roundedDecimal(to: 2) // results in 427.30
And if you want to display it with a specified number of decimal places (as well as localize the string for the user's current locale), you can use a NumberFormatter:
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
if let string = formatter.string(for: value) {
print(string)
}
A handy way can be the use of extension of type Double
extension Double {
var roundTo2f: Double {return Double(round(100 *self)/100) }
var roundTo3f: Double {return Double(round(1000*self)/1000) }
}
Usage:
let regularPie: Double = 3.14159
var smallerPie: Double = regularPie.roundTo3f // results 3.142
var smallestPie: Double = regularPie.roundTo2f // results 3.14
This is a sort of a long workaround, which may come in handy if your needs are a little more complex. You can use a number formatter in Swift.
let numberFormatter: NSNumberFormatter = {
let nf = NSNumberFormatter()
nf.numberStyle = .DecimalStyle
nf.minimumFractionDigits = 0
nf.maximumFractionDigits = 1
return nf
}()
Suppose your variable you want to print is
var printVar = 3.567
This will make sure it is returned in the desired format:
numberFormatter.stringFromNumber(printVar)
The result here will thus be "3.6" (rounded). While this is not the most economic solution, I give it because the OP mentioned printing (in which case a String is not undesirable), and because this class allows for multiple parameters to be set.
Either:
Using String(format:):
Typecast Double to String with %.3f format specifier and then back to Double
Double(String(format: "%.3f", 10.123546789))!
Or extend Double to handle N-Decimal places:
extension Double {
func rounded(toDecimalPlaces n: Int) -> Double {
return Double(String(format: "%.\(n)f", self))!
}
}
By calculation
multiply with 10^3, round it and then divide by 10^3...
(1000 * 10.123546789).rounded()/1000
Or extend Double to handle N-Decimal places:
extension Double {
func rounded(toDecimalPlaces n: Int) -> Double {
let multiplier = pow(10, Double(n))
return (multiplier * self).rounded()/multiplier
}
}
I would use
print(String(format: "%.3f", totalWorkTimeInHours))
and change .3f to however many decimal places you need.
This is a more flexible algorithm for rounding to N significant digits.
Swift 3 solution
extension Double {
// Rounds the double to 'places' significant digits
func roundTo(places:Int) -> Double {
guard self != 0.0 else {
return 0
}
let divisor = pow(10.0, Double(places) - ceil(log10(fabs(self))))
return (self * divisor).rounded() / divisor
}
}
// Double(0.123456789).roundTo(places: 2) = 0.12
// Double(1.23456789).roundTo(places: 2) = 1.2
// Double(1234.56789).roundTo(places: 2) = 1200
The best way to format a double property is to use the predefined Apple methods:
mutating func round(_ rule: FloatingPointRoundingRule)
FloatingPointRoundingRule is an enum with the following possibilities.
Enumeration Cases:
case awayFromZero
Round to the closest allowed value whose magnitude is greater than or equal to that of the source.
case down
Round to the closest allowed value that is less than or equal to the source.
case toNearestOrAwayFromZero
Round to the closest allowed value; if two values are equally close, the one with greater magnitude is chosen.
case toNearestOrEven
Round to the closest allowed value; if two values are equally close, the even one is chosen.
case towardZero
Round to the closest allowed value whose magnitude is less than or equal to that of the source.
case up
Round to the closest allowed value that is greater than or equal to the source.
var aNumber : Double = 5.2
aNumber.rounded(.up) // 6.0
Round a double value to x decimal places (i.e. the number of digits after the decimal point):
var x = 1.5657676754
var y = (x*10000).rounded()/10000
print(y) // 1.5658
var x = 1.5657676754
var y = (x*100).rounded()/100
print(y) // 1.57
var x = 1.5657676754
var y = (x*10).rounded()/10
print(y) // 1.6
For ease of use, I created an extension:
extension Double {
var threeDigits: Double {
return (self * 1000).rounded(.toNearestOrEven) / 1000
}
var twoDigits: Double {
return (self * 100).rounded(.toNearestOrEven) / 100
}
var oneDigit: Double {
return (self * 10).rounded(.toNearestOrEven) / 10
}
}
var myDouble = 0.12345
print(myDouble.threeDigits)
print(myDouble.twoDigits)
print(myDouble.oneDigit)
The print results are:
0.123
0.12
0.1
Thanks for the inspiration of other answers!
Not Swift but I'm sure you get the idea.
pow10np = pow(10,num_places);
val = round(val*pow10np) / pow10np;
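Translated to Swift (a small sketch, assuming num_places is the number of decimal places):
import Foundation

let numPlaces = 3
let pow10np = pow(10.0, Double(numPlaces))
let value = 1.543240952039
let rounded = (value * pow10np).rounded() / pow10np // 1.543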
Swift 5
Using the String method:
var yourDouble = 3.12345
// to round this to 2 decimal places I could turn it into a string
let roundingString = String(format: "%.2f", yourDouble)
let roundedDouble = Double(roundingString) // and then back to a Double
// result is 3.12
But it's more accepted to use an extension:
extension Double {
func round(to decimalPlaces: Int) -> Double {
let precisionNumber = pow(10,Double(decimalPlaces))
var n = self // self is a current value of the Double that you will round
n = n * precisionNumber
n.round()
n = n / precisionNumber
return n
}
}
and then you can use:
yourDouble.round(to:2)
This seems to work in Swift 5.
Quite surprised there isn't a standard function for this already.
//Truncation of Double to n-decimal places with rounding
extension Double {
func truncate(to places: Int) -> Double {
return Double(Int((pow(10, Double(places)) * self).rounded())) / pow(10, Double(places))
}
}
To avoid Float imperfections use Decimal
extension Float {
func rounded(rule: NSDecimalNumber.RoundingMode, scale: Int) -> Float {
var result: Decimal = 0
var decimalSelf = NSNumber(value: self).decimalValue
NSDecimalRound(&result, &decimalSelf, scale, rule)
return (result as NSNumber).floatValue
}
}
ex.
1075.58 rounds to 1075.57 when using Float with scale: 2 and .down
1075.58 rounds to 1075.58 when using Decimal with scale: 2 and .down
var n = 123.111222333
n = Double(Int(n * 10.0)) / 10.0
Result: n = 123.1
Change 10.0 (1 decimal place) to 100.0 (2 decimal places), 1000.0 (3 decimal places), and so on, depending on how many digits you want after the decimal point.
This solution worked for me with Xcode 13.3.1 & Swift 5:
extension Double {
func rounded(decimalPoint: Int) -> Double {
let power = pow(10, Double(decimalPoint))
return (self * power).rounded() / power
}
}
Test:
print(-87.7183123123.rounded(decimalPoint: 3))
print(-87.7188123123.rounded(decimalPoint: 3))
print(-87.7128123123.rounded(decimalPoint: 3))
Result:
-87.718
-87.719
-87.713
I found this while wondering whether it is possible to correct a user's input. That is, if they enter three decimals instead of two for a dollar amount, say 1.111 instead of 1.11, can you fix it by rounding? The answer, for many reasons, is no! With money, anything off by even 0.001 would eventually cause problems in a real checkbook.
Here is a function that checks the user's input for too many digits after the period, but which will allow 1., 1.1 and 1.11.
It is assumed that the value has already been checked for successful conversion from a String to a Double.
//func need to be where transactionAmount.text is in scope
func checkDoublesForOnlyTwoDecimalsOrLess()->Bool{
var theTransactionCharacterMinusThree: Character = "A"
var theTransactionCharacterMinusTwo: Character = "A"
var theTransactionCharacterMinusOne: Character = "A"
var result = false
var periodCharacter:Character = "."
var myCopyString = transactionAmount.text!
if myCopyString.containsString(".") {
if( myCopyString.characters.count >= 3){
theTransactionCharacterMinusThree = myCopyString[myCopyString.endIndex.advancedBy(-3)]
}
if( myCopyString.characters.count >= 2){
theTransactionCharacterMinusTwo = myCopyString[myCopyString.endIndex.advancedBy(-2)]
}
if( myCopyString.characters.count > 1){
theTransactionCharacterMinusOne = myCopyString[myCopyString.endIndex.advancedBy(-1)]
}
if theTransactionCharacterMinusThree == periodCharacter {
result = true
}
if theTransactionCharacterMinusTwo == periodCharacter {
result = true
}
if theTransactionCharacterMinusOne == periodCharacter {
result = true
}
}else {
//if there is no period and it is a valid double it is good
result = true
}
return result
}
You can add this extension :
extension Double {
var clean: String {
return self.truncatingRemainder(dividingBy: 1) == 0 ? String(format: "%.0f", self) : String(format: "%.2f", self)
}
}
and call it like this :
let ex: Double = 10.123546789
print(ex.clean) // 10.12
Here's one for SwiftUI if you need a Text element with the number value.
struct RoundedDigitText : View {
let digits : Int
let number : Double
var body : some View {
Text(String(format: "%.\(digits)f", number))
}
}
//find the distance between two points
let coordinateSource = CLLocation(latitude: 30.7717625, longitude:76.5741449 )
let coordinateDestination = CLLocation(latitude: 29.9810859, longitude: 76.5663599)
let distanceInMeters = coordinateSource.distance(from: coordinateDestination)
let valueInKms = distanceInMeters/1000
let preciseValueUptoThreeDigit = Double(round(1000*valueInKms)/1000)
self.lblTotalDistance.text = "Distance is : \(preciseValueUptoThreeDigit) kms"

How do you cast a UInt64 to an Int64?

Trying to call dispatch_time in Swift is doing my head in, here's why:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC), dispatch_get_main_queue(), {
doSomething()
})
Results in the error: "Could not find an overload for '*' that accepts the supplied arguments".
NSEC_PER_SEC is an UInt64 so time for some experiments:
let x:UInt64 = 1000
let m:Int64 = 10 * x
Results in the same error as above
let x:UInt64 = 1000
let m:Int64 = 10 * (Int64) x
Results in "Consecutive statements on a line must be separated by ';'"
let x:UInt64 = 1000
let m:Int64 = 10 * ((Int64) x)
Results in "Expected ',' separator"
let x:UInt64 = 1000
let m:Int64 = (Int64)10 * (Int64) x
Results in "Consecutive statements on a line must be separated by ';'"
Etc. etc.
Damn you Swift compiler, I give up. How do I cast a UInt64 to Int64, and/or how do you use dispatch_time in swift?
You can "cast" between different integer types by initializing a new integer with the type you want:
let uint:UInt64 = 1234
let int:Int64 = Int64(uint)
It's probably not an issue in your particular case, but it's worth noting that different integer types have different ranges, and you can end up with out of range crashes if you try to convert between integers of different types:
let bigUInt:UInt64 = UInt64(Int64.max) - 1 // 9,223,372,036,854,775,806
let bigInt:Int64 = Int64(bigUInt) // no problem
let biggerUInt:UInt64 = UInt64(Int64.max) + 1 // 9,223,372,036,854,775,808
let biggerInt:Int64 = Int64(biggerUInt) // crash!
Each integer type has .max and .min class properties that you can use for checking ranges:
if (biggerUInt <= UInt64(Int64.max)) {
let biggerInt:Int64 = Int64(biggerUInt) // safe!
}
To construct an Int64 using the bits of a UInt64, use the init seen here: https://developer.apple.com/reference/swift/int64/1538466-init
let myInt64 = Int64(bitPattern: myUInt64)
Try this:
let x:UInt64 = 1000 // 1,000
let m:Int64 = 10 * Int64(x) // 10,000
or even :
let x:UInt64 = 1000 // 1,000
let m = 10 * Int64(x) // 10,000
let n = Int64(10 * x) // 10,000
let y = Int64(x) // 1,000, as Int64 (as per #Bill's question)
It's not so much casting as initialising with a separate type...
Casting a UInt64 to an Int64 is not safe since a UInt64 can have a number which is greater than Int64.max, which will result in an overflow.
Here's a snippet for converting a UInt64 to Int64 and vice-versa:
// Extension for 64-bit integer signed <-> unsigned conversion
extension Int64 {
var unsigned: UInt64 {
let valuePointer = UnsafeMutablePointer<Int64>.allocate(capacity: 1)
defer {
valuePointer.deallocate(capacity: 1)
}
valuePointer.pointee = self
return valuePointer.withMemoryRebound(to: UInt64.self, capacity: 1) { $0.pointee }
}
}
extension UInt64 {
var signed: Int64 {
let valuePointer = UnsafeMutablePointer<UInt64>.allocate(capacity: 1)
defer {
valuePointer.deallocate(capacity: 1)
}
valuePointer.pointee = self
return valuePointer.withMemoryRebound(to: Int64.self, capacity: 1) { $0.pointee }
}
}
This simply interprets the binary data of the UInt64 as an Int64, i.e. numbers greater than Int64.max will become negative because of the sign bit at the most significant bit of the 64-bit integer.
If you just want positive integers, just get the absolute value.
EDIT: Depending on behavior, you can either get the absolute value, or:
if currentValue < 0 {
return Int64.max + currentValue + 1
} else {
return currentValue
}
The latter option is similar to stripping the sign bit. Ex:
// Using an 8-bit integer for simplicity
// currentValue
0b1111_1111 // If this is interpreted as Int8, this is -1.
// Strip sign bit
0b0111_1111 // As Int8, this is 127. To get this we can add Int8.max
// Int8.max + currentValue + 1
127 + (-1) + 1 = 127
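Note that the standard library provides the same bit-pattern reinterpretation directly, which avoids the pointer dance above (a short sketch):
let u: UInt64 = .max
let s = Int64(bitPattern: u)     // -1
let back = UInt64(bitPattern: s) // 18446744073709551615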
A better solution for converting (shown here in C):
UInt64 Int64_2_UInt64(Int64 Value)
{
return (((UInt64)((UInt32)((UInt64)Value >> 32))) << 32)
| (UInt64)((UInt32)((UInt64)Value & 0x0ffffffff));
}
Int64 UInt64_2_Int64(UInt64 Value)
{
return (Int64)((((Int64)(UInt32)((UInt64)Value >> 32)) << 32)
| (Int64)((UInt32)((UInt64)Value & 0x0ffffffff)));
}
A simple solution for Swift 3 is the built-in numericCast function, which handles the conversion for you (and traps if the value doesn't fit in the destination type):
var a:UInt64 = 1234567890
var b:Int64 = numericCast(a)
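If you would rather fail gracefully than trap when the value doesn't fit, the failable exactly: initializer is another option (a small sketch):
let big: UInt64 = UInt64(Int64.max) + 1
let fits = Int64(exactly: UInt64(1234)) // Optional(1234)
let tooBig = Int64(exactly: big)        // nil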