swift range greater than lower bound - swift

I need to implement an experience filter like this:
0 to 2 years
2+ to 4 years
How do I express this with Swift ranges?
The problem is that I can't express "greater than 2, up to 4" years, while I can express less than an upper bound, e.g. like this
let underTen = 0.0..<10.0
I need something like this (greater than lower bound)
let uptoTwo = 0.0...2.0
let twoPlus = 2.0>..4.0 // compiler error
Currently I am doing
let twoPlus = 2.1...4.0
But this is not perfect.

nextUp from the FloatingPoint protocol
You can make use of the nextUp property of Double, as blueprinted in the FloatingPoint protocol to which Double conforms
nextUp
The least representable value that compares greater than this value.
For any finite value x, x.nextUp is greater than x. ...
I.e.:
let uptoTwo = 0.0...2.0
let twoPlus = 2.0.nextUp...4.0
The property ulp, also blueprinted in the FloatingPoint protocol, has been mentioned in the comments to your question. For most numbers, this is the difference between self and the next greater representable number:
ulp
The unit in the last place of self.
This is the unit of the least significant digit in the significand of
self. For most numbers x, this is the difference between x and
the next greater (in magnitude) representable number. ...
nextUp does, in essence, return the value of self plus ulp. So for your example above, the following is equivalent (although, imo, nextUp should be preferred in this use case).
let uptoTwo = 0.0...2.0
let twoPlus = (2.0+2.0.ulp)...4.0
You might also want to consider replacing the lower bound literal in twoPlus with the upperBound property of the preceding uptoTwo range:
let uptoTwo = 0.0...2.0 // [0, 2] closed-closed
let twoPlus = uptoTwo.upperBound.nextUp...4.0 // (2, 4] open-closed
if uptoTwo.overlaps(twoPlus) {
print("the two ranges overlap ...")
}
else {
print("ranges are non-overlapping, ok!")
}
// ranges are non-overlapping, ok!
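If the goal is the experience filter from the question, the two ranges can then be applied directly with contains(_:); a minimal sketch with assumed sample data:
let experiences = [0.0, 1.5, 2.0, 2.5, 4.0]
let zeroToTwo = experiences.filter { uptoTwo.contains($0) } // [0.0, 1.5, 2.0]
let aboveTwo = experiences.filter { twoPlus.contains($0) }  // [2.5, 4.0]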

Rather than create a new type of range you can instead create a method that will identify values above the lower bound:
extension ClosedRange {
    func containsAboveLowerBound(value: Bound) -> Bool {
        if value > self.lowerBound {
            return self.contains(value)
        }
        else {
            return false
        }
    }
}
Using it like so:
let r = 2.0...3.0
r.containsAboveLowerBound(value: 2.0) // false
r.containsAboveLowerBound(value: 2.01) // true

If your actual purpose is to use ranges for filtering, how about making them as closures?
let underTen = {0.0 <= $0 && $0 < 10.0}
let upToTwo = {0.0 <= $0 && $0 <= 2.0}
let twoPlus = {2.0 < $0 && $0 <= 4.0}
You can use such filtering closures like this:
class Client: CustomStringConvertible {
    var experience: Double
    init(experience: Double) { self.experience = experience }
    var description: String { return "Client(\(experience))" }
}
let clients = [Client(experience: 1.0),Client(experience: 2.0),Client(experience: 3.0)]
let filteredUnderTen = clients.filter {underTen($0.experience)}
print(filteredUnderTen) //->[Client(1.0), Client(2.0), Client(3.0)]
let filteredUpToTwo = clients.filter {upToTwo($0.experience)}
print(filteredUpToTwo) //->[Client(1.0), Client(2.0)]
let filteredTwoPlus = clients.filter {twoPlus($0.experience)}
print(filteredTwoPlus) //->[Client(3.0)]

I think this does it:
extension ClosedRange where Bound == Int {
    func containsExclusiveOfBounds(_ bound: Bound) -> Bool {
        return contains(bound) && bound != lowerBound && bound != upperBound
    }
}
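A quick usage sketch (sample range and values assumed):
let intRange = 2...4
intRange.containsExclusiveOfBounds(2) // false
intRange.containsExclusiveOfBounds(3) // true
intRange.containsExclusiveOfBounds(4) // false
intRange.containsExclusiveOfBounds(9) // false, outside the range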

Related

Split fractional and integral parts of a Double [duplicate]

I'm trying to separate the decimal and integer parts of a double in swift. I've tried a number of approaches but they all run into the same issue...
let x:Double = 1234.5678
let n1:Double = x % 1.0 // n1 = 0.567800000000034
let n2:Double = x - 1234.0 // same result
let n3:Double = modf(x, &integer) // same result
Is there a way to get 0.5678 instead of 0.567800000000034 without converting to the number to a string?
You can use truncatingRemainder(dividingBy:) with 1 as the divisor.
Returns the remainder of this value divided by the given value using truncating division.
Apple doc
Example:
let myDouble1: Double = 12.25
let myDouble2: Double = 12.5
let myDouble3: Double = 12.75
let remainder1 = myDouble1.truncatingRemainder(dividingBy: 1)
let remainder2 = myDouble2.truncatingRemainder(dividingBy: 1)
let remainder3 = myDouble3.truncatingRemainder(dividingBy: 1)
remainder1 -> 0.25
remainder2 -> 0.5
remainder3 -> 0.75
Same approach as Alessandro Ornano's, implemented as instance properties of the FloatingPoint protocol:
Xcode 11 • Swift 5.1
import Foundation
extension FloatingPoint {
    var whole: Self { modf(self).0 }
    var fraction: Self { modf(self).1 }
}
1.2.whole // 1
1.2.fraction // 0.2
If you need the fraction digits and want to preserve their precision, you need to use the Swift Decimal type and initialize it with a String:
extension Decimal {
    func rounded(_ roundingMode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var result = Decimal()
        var number = self
        NSDecimalRound(&result, &number, 0, roundingMode)
        return result
    }
    var whole: Decimal { rounded(sign == .minus ? .up : .down) }
    var fraction: Decimal { self - whole }
}
let decimal = Decimal(string: "1234.99999999")! // 1234.99999999
let fractional = decimal.fraction // 0.99999999
let whole = decimal.whole // 1234
let sum = whole + fractional // 1234.99999999
let negativeDecimal = Decimal(string: "-1234.99999999")! // -1234.99999999
let negativefractional = negativeDecimal.fraction // -0.99999999
let negativeWhole = negativeDecimal.whole // -1234
let negativeSum = negativeWhole + negativefractional // -1234.99999999
Swift 2:
You can use:
modf(x).1
or
x % floor(abs(x))
Without converting it to a string, you can round it to a number of decimal places like this:
let x:Double = 1234.5678
let numberOfPlaces:Double = 4.0
let powerOfTen:Double = pow(10.0, numberOfPlaces)
let targetedDecimalPlaces:Double = round((x % 1.0) * powerOfTen) / powerOfTen
Your output would be
0.5678
Swift 5.1
let x:Double = 1234.5678
let decimalPart:Double = x.truncatingRemainder(dividingBy: 1) //0.5678
let integerPart:Double = x.rounded(.towardZero) //1234
Both of these methods return a Double value.
If you want the integer part as an Int, you can just use
Int(x)
Use Float, since it has fewer precision digits than Double:
let x: Double = 1234.5678
let n1: Float = Float(x.truncatingRemainder(dividingBy: 1)) // n1 = 0.5678
There’s a function in C’s math library, and many programming languages, Swift included, give you access to it. It’s called modf, and in Swift, it works like this
// modf returns a 2-element tuple,
// with the whole number part in the first element,
// and the fraction part in the second element
let splitPi = modf(3.141592)
splitPi.0 // 3.0
splitPi.1 // 0.141592
You can create an extension like below,
extension Double {
    func getWholeNumber() -> Double {
        return modf(self).0
    }
    func getFractionNumber() -> Double {
        return modf(self).1
    }
}
You can get the Integer part like this:
let d: Double = 1.23456e12
let intparttruncated = trunc(d)
let intpartroundlower = Int(d)
The trunc() function truncates the part after the decimal point, and the Int(_:) initializer also rounds toward zero, so both give the same result for positive and negative numbers (trunc returns a Double, Int(_:) an Int). If you subtract the truncated part from d, you get the fractional part.
func frac(_ v: Double) -> Double {
    return v - trunc(v)
}
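As a quick check on a negative value (sample number assumed), both drop the fractional part toward zero:
let negative = -3.75
trunc(negative) // -3.0
Int(negative)   // -3
frac(negative)  // -0.75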
You can get Mantissa and Exponent of a Double value like this:
let d: Double = 1.23456e78
let exponent = trunc(log(d) / log(10.0))
let mantissa = d / pow(10, trunc(log(d) / log(10.0)))
Your result will be 78 for the exponent and 1.23456 for the Mantissa.
Hope this helps you.
It's impossible to create a solution that will work for all Doubles, and if the other answers ever worked (which I doubt), they don't anymore.
let _5678 = 1234.5678.description.drop { $0 != "." } .description // ".5678"
Double(_5678) // 0.5678
let _567 = 1234.567.description.drop { $0 != "." } .description // ".567"
Double(_567) // 0.5669999999999999
extension Double {
    /// Gets the decimal value from a double.
    var decimal: Double {
        Double("0." + String(string.split(separator: ".").last ?? "0")) ?? 0.0
    }
    var string: String {
        String(self)
    }
}
This appears to solve the Double precision issues.
Usage:
print(34.46979988898988.decimal) // outputs 0.46979988898988
print(34.46.decimal) // outputs 0.46

How does one iterate over an Array of Ranges?

Swift 3's compiler will not let me compile the following:
let a = 0
let b = 10
var arr = [ClosedRange<Int>]()
let myRange: ClosedRange = a...b
arr.append(myRange)
for each in arr {
for every in each {
print(every)
}
}
...due to ClosedRange<Int> not conforming to the Sequence protocol. In the past, a simple extension to the class like so would have been enough:
extension ClosedRange<Int>: Sequence {}
...but now the compiler asks that the extension be declared with a where clause, which makes me think that I'm going about this all wrong. What am I missing?
The problem is not that you have an array of ranges, but that
ClosedRange in Swift 3 represents
An interval over a comparable type, from a lower bound up to, and including, an upper bound.
For example, a closed range can be used with Double
let r: ClosedRange<Double> = 1.1...2.2
where enumerating all possible values does not make much sense.
What you need is CountableClosedRange which is
A closed range that forms a collection of consecutive values.
and in particular is a collection and can be iterated over:
let a = 0
let b = 10
var arr = [CountableClosedRange<Int>]()
let myRange: CountableClosedRange = a...b
arr.append(myRange)
for each in arr {
for every in each {
print(every)
}
}
You can just write
let myRange = a...b
since by default, the ... operator produces a CountableClosedRange
if its operands are Strideable.
Similarly there is Range and CountableRange for half-open ranges.
For more information, see Range Types in SE-0065 A New Model for Collections and Indices.
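As an aside, since each countable range is itself a collection, an array of them can also be flattened with joined() instead of nested loops; a minimal sketch:
let ranges = [0...3, 10...12]
for value in ranges.joined() {
    print(value) // 0 1 2 3 10 11 12
}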
let a = 0
let b = 10
var arr = [ClosedRange<Int>]()
let myRange: ClosedRange = a...b
arr.append(myRange)
for each in arr {
for every in [Int](each.lowerBound...each.upperBound) {
print(every)
}
}
and remove that extension.

Idiomatic way to model opcodes with embedded data in Swift

Swift enums seem like a great fit for modeling byte opcodes in Swift 3. For example:
enum MathOpCode: UInt8 {
    case Add = 0
    case Subtract = 1
    case Multiply = 2
    case Divide = 3
}
let a = 42
let b = 13
let someByte: UInt8 = 2
if let opcode = MathOpCode(rawValue: someByte) {
    switch opcode {
    case .Add:
        return a + b
    case .Subtract:
        return a - b
    case .Multiply:
        return a * b
    case .Divide:
        return a / b
    }
}
This can be really expressive for writing binary protocols. The enum nicely captures the logical opcodes, and the switches read nicely. Where it breaks down for me is where opcodes include small amounts of data. In other words, let's say I add an opcode called AddSmallConstant which is meant to represent all opcodes matching 0b01nnnnnn, where the top two bits must be 01 and the bottom 6 bits are an embedded constant ranging 0-63. I could add 64 cases to my enum...
enum MathOpCode: UInt8 {
    ...
    case AddConstant0 = 0b01000000
    case AddConstant1 = 0b01000001
    ...
    case AddConstant63 = 0b01111111
}
This doesn't really scale well. And to get the embedded value, I have to use rawValue and masking operations anyway. And I can't have a switch statement that looks like
case MathOpCode.AddConstant0...MathOpCode.AddConstant63
to match the whole range, because enum cases can't be turned into ranges. The alternate is to use rawValue all over the place:
switch opCode.rawValue {
case MathOpCode.Add.rawValue:
...
case MathOpCode.Subtract.rawValue:
...
case (MathOpCode.AddConstant0.rawValue)...(MathOpCode.AddConstant63.rawValue):
...
}
Now the enum just seems like baggage; it would be better to just define a bunch of constant lets at the top of my file. Am I missing a better pattern I can use in Swift to express these types of relationships and patterns?
As you know, Swift enums have a fancy feature called associated values, but unfortunately you cannot control the raw bit representation of an enum with associated values.
If you want to control the raw bit representation of your MathOpCode type, you may need to create your own RawRepresentable type.
For example, you can write something like this:
struct MathOpCode: RawRepresentable {
    var rawValue: UInt8
    init(rawValue: UInt8) { self.rawValue = rawValue }

    static let add = MathOpCode(rawValue: 0)
    static let subtract = MathOpCode(rawValue: 1)
    static let multiply = MathOpCode(rawValue: 2)
    static let divide = MathOpCode(rawValue: 3)
    static let addConstant = MathOpCode(rawValue: 0b0100_0000)

    static func add(constant value: UInt8) -> MathOpCode {
        guard value < 64 else { fatalError("constant out of bounds") }
        return MathOpCode(rawValue: self.addConstant.rawValue + value)
    }

    var isAddConstant: Bool { return self.rawValue & 0b1100_0000 == MathOpCode.addConstant.rawValue }
    var constant: UInt8 { return self.rawValue & 0b0011_1111 }
}
// Prepare the `matches` operator for `switch`.
func ~= (lhs: MathOpCode, rhs: MathOpCode) -> Bool {
    return lhs == MathOpCode.addConstant && rhs.isAddConstant
        || lhs == rhs
}
You can use it as:
let opCode = MathOpCode.add(constant: 22)
switch opCode {
case MathOpCode.add:
    print("add")
case MathOpCode.subtract:
    print("subtract")
case MathOpCode.multiply:
    print("multiply")
case MathOpCode.divide:
    print("divide")
case MathOpCode.addConstant:
    print("addConstant", opCode.constant)
default:
    print("invalid opCode")
}
//-> addConstant 22

Index and Iterate over CollectionType in swift

I have code which is basically like this:
func arrayHalvesEqual(data: [UInt8]) -> Bool {
    let midPoint = data.count / 2
    for i in 0..<midPoint {
        let b = data[i]
        let b2 = data[i + midPoint]
        if b != b2 {
            return false
        }
    }
    return true
}
This works fine, but sometimes I want to pass in Arrays, and other times ArraySlice. I thought I'd change it to use generics and the CollectionType protocol, which converts as follows:
func arrayHalvesEqual<ByteArray : CollectionType where ByteArray.Generator.Element == UInt8>(data: ByteArray) -> Bool {
    let midPoint = data.count / 2
    for i in 0..<midPoint {
        let b = data[i]
        let b2 = data[i + midPoint]
        if b != b2 {
            return false
        }
    }
    return true
}
However, I get the following compiler error:
error: binary operator '..<' cannot be applied to operands of type 'Int' and 'ByteArray.Index.Distance'
for i in 0..<midPoint {
I can switch the for loop to for i in data.indices which makes that compile, but then I can no longer divide it by 2 to get the midPoint, as data.indices returns the abstract CollectionType.Index whereas / 2 is an Int.
Is it possible to do something like this in Swift? Can I bridge between an abstract protocol Index type and some real type I can do maths on?
P.S: I've seen and found other examples for iterating over the whole collection by using indices and enumerate, but I explicitly only want to iterate over half the collection which requires some sort of division by 2
Thanks
You can restrict the method to collections which are indexed
by Int:
func arrayHalvesEqual<ByteArray : CollectionType where ByteArray.Index == Int, ByteArray.Generator.Element == UInt8>
(data:ByteArray) -> Bool { ... }
This covers both Array and ArraySlice.
And if you use indices.startIndex instead of 0 as initial index
then it suffices to restrict the index type to IntegerType.
Also the data type UInt8 can be replaced by a generic Equatable,
and the entire method shortened to
func arrayHalvesEqual<ByteArray : CollectionType where ByteArray.Index : IntegerType, ByteArray.SubSequence.Generator.Element : Equatable>
    (data: ByteArray) -> Bool {
    let midPoint = (data.indices.endIndex - data.indices.startIndex) / 2
    let firstHalf = data[data.indices.startIndex ..< midPoint]
    let secondHalf = data[midPoint ..< data.indices.endIndex]
    return !zip(firstHalf, secondHalf).contains { $0 != $1 }
}

How to generate a random number in Swift?

I realize the Swift book provided an implementation of a random number generator. Is the best practice to copy and paste this implementation? Or is there a library that does this that we can use now?
Swift 4.2+
Swift 4.2 shipped with Xcode 10 introduces new easy-to-use random functions for many data types.
You simply call the random(in:) method on numeric types (and random() on Bool).
let randomInt = Int.random(in: 0..<6)
let randomDouble = Double.random(in: 2.71828...3.14159)
let randomBool = Bool.random()
Use arc4random_uniform(n) for a random integer between 0 and n-1.
let diceRoll = Int(arc4random_uniform(6) + 1)
Cast the result to Int so you don't have to explicitly type your vars as UInt32 (which seems un-Swifty).
Edit: Updated for Swift 3.0
arc4random works well in Swift, but the base functions are limited to 32-bit integer types (Int is 64-bit on iPhone 5S and modern Macs). Here's a generic function for a random number of a type expressible by an integer literal:
public func arc4random<T: ExpressibleByIntegerLiteral>(_ type: T.Type) -> T {
    var r: T = 0
    arc4random_buf(&r, MemoryLayout<T>.size)
    return r
}
We can use this new generic function to extend UInt64, adding boundary arguments and mitigating modulo bias. (This is lifted straight from arc4random.c)
public extension UInt64 {
    public static func random(lower: UInt64 = min, upper: UInt64 = max) -> UInt64 {
        var m: UInt64
        let u = upper - lower
        var r = arc4random(UInt64.self)
        if u > UInt64(Int64.max) {
            m = 1 + ~u
        } else {
            m = ((max - (u * 2)) + 1) % u
        }
        while r < m {
            r = arc4random(UInt64.self)
        }
        return (r % u) + lower
    }
}
With that we can extend Int64 for the same arguments, dealing with overflow:
public extension Int64 {
    public static func random(lower: Int64 = min, upper: Int64 = max) -> Int64 {
        let (s, overflow) = Int64.subtractWithOverflow(upper, lower)
        let u = overflow ? UInt64.max - UInt64(~s) : UInt64(s)
        let r = UInt64.random(upper: u)
        if r > UInt64(Int64.max) {
            return Int64(r - (UInt64(~lower) + 1))
        } else {
            return Int64(r) + lower
        }
    }
}
To complete the family...
private let _wordSize = __WORDSIZE
public extension UInt32 {
    public static func random(lower: UInt32 = min, upper: UInt32 = max) -> UInt32 {
        return arc4random_uniform(upper - lower) + lower
    }
}
public extension Int32 {
    public static func random(lower: Int32 = min, upper: Int32 = max) -> Int32 {
        let r = arc4random_uniform(UInt32(Int64(upper) - Int64(lower)))
        return Int32(Int64(r) + Int64(lower))
    }
}
public extension UInt {
    public static func random(lower: UInt = min, upper: UInt = max) -> UInt {
        switch (_wordSize) {
        case 32: return UInt(UInt32.random(lower: UInt32(lower), upper: UInt32(upper)))
        case 64: return UInt(UInt64.random(lower: UInt64(lower), upper: UInt64(upper)))
        default: return lower
        }
    }
}
public extension Int {
    public static func random(lower: Int = min, upper: Int = max) -> Int {
        switch (_wordSize) {
        case 32: return Int(Int32.random(lower: Int32(lower), upper: Int32(upper)))
        case 64: return Int(Int64.random(lower: Int64(lower), upper: Int64(upper)))
        default: return lower
        }
    }
}
After all that, we can finally do something like this:
let diceRoll = UInt64.random(lower: 1, upper: 7)
Edit for Swift 4.2
Starting in Swift 4.2, instead of using the imported C function arc4random_uniform(), you can now use Swift’s own native functions.
// Generates integers starting with 0 up to, and including, 10
Int.random(in: 0 ... 10)
You can use random(in:) to get random values for other primitive values as well; such as Int, Double, Float and even Bool.
Swift versions < 4.2
This method will generate a random Int value between the given minimum and maximum
func randomInt(min: Int, max: Int) -> Int {
    return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
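For example, a dice roll using this helper (sample bounds assumed):
let roll = randomInt(min: 1, max: 6) // 1...6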
I used this code:
var k: Int = random() % 10;
As of iOS 9, you can use the new GameplayKit classes to generate random numbers in a variety of ways.
You have four source types to choose from: a general random source (unnamed, down to the system to choose what it does), linear congruential, ARC4 and Mersenne Twister. These can generate random ints, floats and bools.
At the simplest level, you can generate a random number from the system's built-in random source like this:
GKRandomSource.sharedRandom().nextInt()
That generates a number between -2,147,483,648 and 2,147,483,647. If you want a number between 0 and an upper bound (exclusive) you'd use this:
GKRandomSource.sharedRandom().nextIntWithUpperBound(6)
GameplayKit has some convenience constructors built in to work with dice. For example, you can roll a six-sided die like this:
let d6 = GKRandomDistribution.d6()
d6.nextInt()
Plus you can shape the random distribution by using things like GKShuffledDistribution. That takes a little more explaining, but if you're interested you can read my tutorial on GameplayKit random numbers.
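As a rough sketch of what that can look like (assumed values; not from the linked tutorial), a shuffled distribution spreads its results evenly and tends to avoid repeating a value until the other values in the range have appeared:
import GameplayKit

// A "fair-feeling" d6: over a run of calls, repeats are spread out rather than clustered.
let shuffled = GKShuffledDistribution(lowestValue: 1, highestValue: 6)
for _ in 0..<6 {
    print(shuffled.nextInt()) // values in 1...6
}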
You can do it the same way that you would in C:
let randomNumber = arc4random()
randomNumber is inferred to be of type UInt32 (a 32-bit unsigned integer)
Use arc4random_uniform()
Usage:
arc4random_uniform(someNumber: UInt32) -> UInt32
This gives you random integers in the range 0 to someNumber - 1.
The maximum value for UInt32 is 4,294,967,295 (that is, 2^32 - 1).
Examples:
Coin flip
let flip = arc4random_uniform(2) // 0 or 1
Dice roll
let roll = arc4random_uniform(6) + 1 // 1...6
Random day in October
let day = arc4random_uniform(31) + 1 // 1...31
Random year in the 1990s
let year = 1990 + arc4random_uniform(10)
General form:
let number = min + arc4random_uniform(max - min + 1)
where number, max, and min are UInt32.
What about...
arc4random()
You can also get a random number by using arc4random(), which produces a UInt32 between 0 and 2^32-1. Thus to get a random number between 0 and x-1, you can divide it by x and take the remainder. Or in other words, use the Remainder Operator (%):
let number = arc4random() % 5 // 0...4
However, this produces the slight modulo bias (see also here and here), so that is why arc4random_uniform() is recommended.
Converting to and from Int
Normally it would be fine to do something like this in order to convert back and forth between Int and UInt32:
let number: Int = 10
let random = Int(arc4random_uniform(UInt32(number)))
The problem, though, is that Int has a range of -2,147,483,648...2,147,483,647 on 32 bit systems and a range of -9,223,372,036,854,775,808...9,223,372,036,854,775,807 on 64 bit systems. Compare this to the UInt32 range of 0...4,294,967,295. The U of UInt32 means unsigned.
Consider the following errors:
UInt32(-1) // negative numbers cause integer overflow error
UInt32(4294967296) // numbers greater than 4,294,967,295 cause integer overflow error
So you just need to be sure that your input parameters are within the UInt32 range and that you don't need an output that is outside of that range either.
Example for a random number in the range 0-9:
import UIKit
let randomNumber = Int(arc4random_uniform(10))
Very easy code - simple and short.
I've been able to just use rand() to get a random CInt. You can make it an Int by using something like this:
let myVar: Int = Int(rand())
You can use your favourite C random function, and just convert to value to Int if needed.
@jstn's answer is good, but a bit verbose. Swift is known as a protocol-oriented language, so we can achieve the same result without having to implement boilerplate code for every type in the integer family, by adding a default implementation in a protocol extension.
public extension ExpressibleByIntegerLiteral {
    public static func arc4random() -> Self {
        var r: Self = 0
        arc4random_buf(&r, MemoryLayout<Self>.size)
        return r
    }
}
Now we can do:
let i = Int.arc4random()
let j = UInt32.arc4random()
and all other integer types work as well.
Updated: June 09, 2022.
Swift 5.7
Let's assume we have an array:
let numbers: [Int] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
For iOS and macOS you can use the system-wide random source from the GameKit framework. There you'll find the GKRandomSource class with its sharedRandom() class method:
import GameKit
private func randomNumberGenerator() -> Int {
    let rand = GKRandomSource.sharedRandom().nextInt(upperBound: numbers.count)
    return numbers[rand]
}
randomNumberGenerator()
Also you can use a randomElement() method that returns a random element of a collection:
let randomNumber = numbers.randomElement()!
print(randomNumber)
Or use arc4random_uniform(). Pay attention that this method returns UInt32 type.
let generator = Int(arc4random_uniform(11))
print(generator)
And, of course, we can use a makeIterator() method that returns an iterator over the elements of the collection.
let iterator: Int = (1...10).makeIterator().shuffled().first!
print(iterator)
The final example here returns a random value within the specified range with the help of static func random(in range: ClosedRange<Int>) -> Int.
let randomizer = Int.random(in: 1...10)
print(randomizer)
Pseudo-random Double number generator drand48() returns a value between 0.0 and 1.0.
import Foundation
let randomInt = Int(drand48() * 10)
In Swift 4.2 you can generate random numbers by calling the random() method on whatever numeric type you want, providing the range you want to work with. For example, this generates a random number in the range 1 through 9, inclusive on both sides
let randInt = Int.random(in: 1..<10)
Also with other types
let randFloat = Float.random(in: 1..<20)
let randDouble = Double.random(in: 1...30)
let randCGFloat = CGFloat.random(in: 1...40)
Since Swift 4.2
There is a new set of APIs:
let randomIntFrom0To10 = Int.random(in: 0 ..< 10)
let randomDouble = Double.random(in: 1 ... 10)
All numeric types now have the random(in:) method that takes range.
It returns a number uniformly distributed in that range.
TL;DR
Well, what is wrong with the "good" old way?
You have to use imported C APIs (They are different between platforms).
And moreover...
What if I told you that the random is not that random?
If you use arc4random() to calculate the remainder, as in arc4random() % aNumber, the result is not uniformly distributed between 0 and aNumber - 1. There is a problem called modulo bias.
Modulo bias
Normally, the function generates a random number between 0 and MAX (which depends on the type etc.). To make a quick, easy example, let's say the max number is 7 and you care about a random number in the range 0 ..< 3 (or the interval [0, 3) if you prefer that).
The probabilities for individual numbers are:
0: 3/8 = 37.5%
1: 3/8 = 37.5%
2: 2/8 = 25%
In other words, you are more likely to end up with 0 or 1 than 2.
Of course, bear in mind that this is extremely simplified, and the real MAX number is much higher, making it more "fair".
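A tiny sketch of the effect, counting remainders for the simplified 8-value generator above:
var counts = [0, 0, 0]
for raw in 0...7 { // pretend these 8 outputs are equally likely
    counts[raw % 3] += 1
}
print(counts) // [3, 3, 2] -> 0 and 1 come up more often than 2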
This problem is addressed by SE-0202 - Random unification in Swift 4.2
Here is a library that does the job well
https://github.com/thellimist/SwiftRandom
public extension Int {
    /// SwiftRandom extension
    public static func random(lower: Int = 0, _ upper: Int = 100) -> Int {
        return lower + Int(arc4random_uniform(UInt32(upper - lower + 1)))
    }
}
public extension Double {
    /// SwiftRandom extension
    public static func random(lower: Double = 0, _ upper: Double = 100) -> Double {
        return (Double(arc4random()) / 0xFFFFFFFF) * (upper - lower) + lower
    }
}
public extension Float {
    /// SwiftRandom extension
    public static func random(lower: Float = 0, _ upper: Float = 100) -> Float {
        return (Float(arc4random()) / 0xFFFFFFFF) * (upper - lower) + lower
    }
}
public extension CGFloat {
    /// SwiftRandom extension
    public static func random(lower: CGFloat = 0, _ upper: CGFloat = 1) -> CGFloat {
        return CGFloat(Float(arc4random()) / Float(UINT32_MAX)) * (upper - lower) + lower
    }
}
let MAX: UInt32 = 9
let MIN: UInt32 = 1
func randomNumber() {
    var random_number = Int(arc4random_uniform(MAX) + MIN)
    print("random = ", random_number)
}
I would like to add to the existing answers that the random number generator example in the Swift book is a linear congruential generator (LCG); it is a severely limited one and shouldn't be used except for the most trivial examples, where the quality of randomness doesn't matter at all. And an LCG should never be used for cryptographic purposes.
arc4random() is much better and can be used for most purposes, but again should not be used for cryptographic purposes.
If you want something that is guaranteed to be cryptographically secure, use SecRandomCopyBytes(). Note that if you build a random number generator into something, someone else might end up (mis)using it for cryptographic purposes (such as password, key or salt generation); in that case you should consider using SecRandomCopyBytes() anyway, even if your need doesn't quite require that.
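A minimal sketch of calling it (assuming an 8-byte value is wanted; SecRandomCopyBytes returns errSecSuccess on success):
import Security

var value: UInt64 = 0
let status = withUnsafeMutableBytes(of: &value) { buffer in
    SecRandomCopyBytes(kSecRandomDefault, buffer.count, buffer.baseAddress!)
}
if status == errSecSuccess {
    print(value) // a cryptographically secure random UInt64
}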
Swift 4.2
Bye-bye to importing Foundation for the C function arc4random_uniform()
// 1
let digit = Int.random(in: 0..<10)
// 2
if let anotherDigit = (0..<10).randomElement() {
    print(anotherDigit)
} else {
    print("Empty range.")
}
// 3
let double = Double.random(in: 0..<1)
let float = Float.random(in: 0..<1)
let cgFloat = CGFloat.random(in: 0..<1)
let bool = Bool.random()
You use random(in:) to generate random digits from ranges.
randomElement() returns nil if the range is empty, so you unwrap the returned Int? with if let.
You use random(in:) to generate a random Double, Float or CGFloat and random() to return a random Bool.
More # Official
var randomNumber = Int(arc4random_uniform(UInt32(5)))
Here 5 will make sure that the random number is generated through zero to four. You can set the value accordingly.
Without arc4random_uniform(), in some versions of Xcode (in 7.1 it runs but doesn't autocomplete for me), you can do this instead.
To generate a random number from 0-5.
First
import GameplayKit
Then
let diceRoll = GKRandomSource.sharedRandom().nextIntWithUpperBound(6)
The following code will produce a secure random number between 0 and 255:
extension UInt8 {
    public static var random: UInt8 {
        var number: UInt8 = 0
        _ = SecRandomCopyBytes(kSecRandomDefault, 1, &number)
        return number
    }
}
You call it like this:
print(UInt8.random)
For bigger numbers it becomes more complicated.
This is the best I could come up with:
extension UInt16 {
    public static var random: UInt16 {
        let count = Int(UInt8.random % 2) + 1
        var numbers = [UInt8](repeating: 0, count: 2)
        _ = SecRandomCopyBytes(kSecRandomDefault, count, &numbers)
        return numbers.reversed().reduce(0) { $0 << 8 + UInt16($1) }
    }
}
extension UInt32 {
    public static var random: UInt32 {
        let count = Int(UInt8.random % 4) + 1
        var numbers = [UInt8](repeating: 0, count: 4)
        _ = SecRandomCopyBytes(kSecRandomDefault, count, &numbers)
        return numbers.reversed().reduce(0) { $0 << 8 + UInt32($1) }
    }
}
These methods use an extra random number to determine how many UInt8s are going to be used to create the random number. The last line converts the [UInt8] to UInt16 or UInt32.
I don't know if the last two still count as truly random, but you can tweak it to your likings :)
Swift 4.2
Swift 4.2 has included a native and fairly full-featured random number API in the standard library. (Swift Evolution proposal SE-0202)
let intBetween0to9 = Int.random(in: 0...9)
let doubleBetween0to1 = Double.random(in: 0...1)
All number types have the static random(in:) method, which takes a range and returns a random number in that range.
Xcode 14, Swift 5
public extension Array where Element == Int {
    static func generateNonRepeatedRandom(size: Int) -> [Int] {
        guard size > 0 else {
            return [Int]()
        }
        return Array(0..<size).shuffled()
    }
}
How to use:
let array = Array.generateNonRepeatedRandom(size: 15)
print(array)
You can use GeneratorOf like this:
var fibs = ArraySlice([1, 1])
var fibGenerator = GeneratorOf { _ -> Int? in
    fibs.append(fibs.reduce(0, combine: +))
    return fibs.removeAtIndex(0)
}
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
println(fibGenerator.next())
I use this code to generate a random number:
//
// FactModel.swift
// Collection
//
// Created by Ahmadreza Shamimi on 6/11/16.
// Copyright © 2016 Ahmadreza Shamimi. All rights reserved.
//
import GameKit
struct FactModel {
    let fun = ["I love swift", "My name is Ahmadreza", "I love coding", "I love PHP", "My name is ALireza", "I love Coding too"]
    func getRandomNumber() -> String {
        let randomNumber = GKRandomSource.sharedRandom().nextIntWithUpperBound(fun.count)
        return fun[randomNumber]
    }
}
Details
Xcode 9.1, Swift 4
Math oriented solution (1)
import Foundation
class Random {
    subscript<T>(_ min: T, _ max: T) -> T where T: BinaryInteger {
        get {
            return rand(min - 1, max + 1)
        }
    }
}
let rand = Random()
func rand<T>(_ min: T, _ max: T) -> T where T: BinaryInteger {
    let _min = min + 1
    let difference = max - _min
    return T(arc4random_uniform(UInt32(difference))) + _min
}
Usage of solution (1)
let x = rand(-5, 5) // x = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
let x = rand[0, 10] // x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Programmers oriented solution (2)
Do not forget to add Math oriented solution (1) code here
import Foundation
extension CountableRange where Bound: BinaryInteger {
    var random: Bound {
        return rand(lowerBound - 1, upperBound)
    }
}
extension CountableClosedRange where Bound: BinaryInteger {
    var random: Bound {
        return rand[lowerBound, upperBound]
    }
}
Usage of solution (2)
let x = (-8..<2).random // x = [-8, -7, -6, -5, -4, -3, -2, -1, 0, 1]
let x = (0..<10).random // x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
let x = (-10 ... -2).random // x = [-10, -9, -8, -7, -6, -5, -4, -3, -2]
Full Sample
Do not forget to add solution (1) and solution (2) codes here
private func generateRandNums(closure: () -> (Int)) {
    var allNums = Set<Int>()
    for _ in 0..<100 {
        allNums.insert(closure())
    }
    print(allNums.sorted { $0 < $1 })
}
generateRandNums {
(-8..<2).random
}
generateRandNums {
(0..<10).random
}
generateRandNums {
(-10 ... -2).random
}
generateRandNums {
rand(-5, 5)
}
generateRandNums {
rand[0, 10]
}
Sample result