How to use lazy variables in Swift - swift

I understand how lazy variables work, but I don't know how to implement one properly.
Below is just an example:
func calculation() -> (firstValue: Float, secondValue: Float) {
    var a: Float = 2
    var b: Float = 2
    for i in 1...1000 {
        a = a + Float(i)
        b = b + Float(i)
        print(i)
    }
    return (a, b)
}

let m = calculation().firstValue
let n = calculation().secondValue
When I run it, the calculation function runs twice (1,000 iterations + 1,000 iterations): the first time to get the value of m, and the second time to get the value of n.
What do I have to do to force the calculation function to run only once, storing the values of m and n without repeating the whole process?

The lazy keyword is used on class/struct/enum member variables.
For your example, you can implement it like this:
class Solution {
    lazy var tuple: (firstValue: Float, secondValue: Float) = {
        var a: Float = 2
        var b: Float = 2
        for i in 1...1000 {
            a = a + Float(i)
            b = b + Float(i)
            print(i)
        }
        return (a, b)
    }()
}

let s = Solution()
let m = s.tuple.firstValue
let n = s.tuple.secondValue
The tuple variable stores the value returned by the closure, which runs only once.
You can also use a variable to record the return value of the function:
let pair = calculation()
let m = pair.firstValue
let n = pair.secondValue

lazy is the keyword to delay a calculation until its result is first used, so it is not what you need here.
You can write something like this:
let values = calculation()
let m: Float = values.firstValue
let n: Float = values.secondValue
Or this:
let (m, n) = calculation()
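As a footnote, here is a minimal sketch (the Cache class and its property are invented for illustration) showing what lazy actually buys you, namely that the initializer closure runs only on first access:

class Cache {
    lazy var expensive: Int = {
        print("computing…")              // appears only once
        return (1...1000).reduce(0, +)
    }()
}

let cache = Cache()
print(cache.expensive)   // prints "computing…", then 500500
print(cache.expensive)   // prints 500500 only; the closure is not re-run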

Related

Swift function that accepts Numeric types and converts them to Doubles

I'm creating a custom exponent operator, and I tried using a generic type that conforms to the Numeric protocol. However, it doesn't work: Numeric seems to be very restrictive about what it supports.
precedencegroup Exponentiative {
    associativity: left
    higherThan: MultiplicationPrecedence
}

infix operator ** : Exponentiative

public func ** <N: Numeric>(base: N, power: N) -> N {
    return N.self( pow(Double(base), Double(power)) )
}
Instead I have to do this:
public func ** <N: BinaryInteger>(base: N, power: N) -> N {
    return N.self( pow(Double(base), Double(power)) )
}

public func ** <N: BinaryFloatingPoint>(base: N, power: N) -> N {
    return N.self( pow(Double(base), Double(power)) )
}
This is fine for short functions, but one could imagine having to write a much longer function than this.
How do I create a generic type that can accept any BinaryInteger or BinaryFloatingPoint? Is there any way to combine these functions into just one? This code duplication seems unnecessary.
This post is NOT a duplicate of
Is there a way to convert any generic Numeric into a Double?
The first answer, when applied to my problem, would make the function even more redundant:
func f<T: Numeric>(_ i: T) {
    var d: Double
    switch i {
    case let ii as Int:
        d = Double(ii)
    case let ii as Int8:
        d = Double(ii)
    case let ii as UInt8:
        d = Double(ii)
    case let ii as Int16:
        d = Double(ii)
    case let ii as UInt16:
        d = Double(ii)
    case let ii as Int32:
        d = Double(ii)
    case let ii as UInt32:
        d = Double(ii)
    case let ii as Int64:
        d = Double(ii)
    case let ii as UInt64:
        d = Double(ii)
    case let ii as Double:
        d = ii
    case let ii as Float32:
        d = Double(ii)
    case let ii as Float64:
        d = Double(ii)
    case let ii as Float:
        d = Double(ii)
    default:
        fatalError("oops")
    }
    print(d)
}
Also, this switch statement would have to be applied to each argument passed to the function, which means the switch statement would have to be extracted into its own function.
The second answer in the above post is basically the same as the functions in my original post, so it does not answer my question either.
My post has absolutely NOTHING to do with this post:
Making my function calculate average of array Swift.
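One possible way to collapse the two overloads into one is a small bridging protocol. This is a sketch only, not an official solution: the DoubleConvertible name is invented here, and the approach inherits Double's precision limits for large 64-bit integers.

import Foundation

// Both integer and floating-point types can adopt this protocol,
// so a single generic operator suffices.
protocol DoubleConvertible {
    init(_ value: Double)
    var doubleValue: Double { get }
}

extension Int: DoubleConvertible {
    var doubleValue: Double { return Double(self) }
}
extension Float: DoubleConvertible {
    var doubleValue: Double { return Double(self) }
}
extension Double: DoubleConvertible {
    var doubleValue: Double { return self }
}

precedencegroup Exponentiative {
    associativity: left
    higherThan: MultiplicationPrecedence
}
infix operator ** : Exponentiative

func ** <N: DoubleConvertible>(base: N, power: N) -> N {
    return N(pow(base.doubleValue, power.doubleValue))
}

print(2 ** 10)      // 1024
print(2.5 ** 2.0)   // 6.25

Each conforming type only has to say how to get into and out of Double; the operator body is written once.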

Where do I put a return in a function?

My code is returning 2 errors, both 'unresolved identifier'. I looked up what that means, and most answers say that I need to declare the constant first, but I have already done that.
I'm very new to coding, and every time I encountered this problem before, it was because I forgot to declare the constant or variable and I would catch my mistake, but I'm stumped on this one.
var counter = 2
func fibonacci(_ x: Int) -> Int {
    var a = 1
    var b = 1
    if counter < x {
        let sum = a + b
        a = b
        b = sum
        counter += 1
    }
    print(sum)
    return sum
}
fibonacci(5)
I bet you'll want to define counter within the scope of your function (passing it as a parameter might not make sense) and then define sum outside the scope of the if statement like this:
func fibonacci(_ x: Int) -> Int {
    var a = 1
    var b = 1
    var sum = 0
    var counter = 0
    if counter < x {
        sum = a + b
        a = b
        b = sum
        counter += 1
    }
    print(sum)
    return sum
}
You've declared the variable sum inside the if block but are using it outside of it.
Return b instead of sum at the end of the function. Note that your if block will be executed only once; you should use a while loop:
var counter = 2
func fibonacci(_ x: Int) -> Int {
    var a = 1
    var b = 1
    while counter < x {
        let sum = a + b
        a = b
        b = sum
        counter += 1
    }
    print(b)
    return b
}
print(fibonacci(5))
You can simplify the swapping using a tuple
var counter = 2
func fibonacci(_ x: Int) -> Int {
    var a = 1
    var b = 1
    while counter < x {
        (a, b) = (b, a + b)
        counter += 1
    }
    print(b)
    return b
}
print(fibonacci(5))
var counter = 2
func fibonacci(_ x: Int) -> Int {
    var a = 1
    var b = 1
    var sum = 0           // declare sum here if you want to return it
    // var counter = 0    // alternatively, declare counter locally
    if counter < x {
        sum = a + b
        a = b
        b = sum
        counter += 1
    }
    return sum            // sum can be returned now that it is declared in function scope
}
var x = fibonacci(5)      // the return value is stored here for further computations
print(x)                  // the returned value can be printed this way, or directly: print(fibonacci(5))
If you feel like learning more, please go through the following link: https://docs.swift.org/swift-book/LanguageGuide/Functions.html
The code above is not a solution to the Fibonacci series as such, but a fix for your errors; the series can be computed in many ways. As your question is only about the errors, the program above resolves them. If you feel like studying the algorithms, please check out this link: https://www.codewithc.com/fibonacci-series-algorithm-flowchart/
I hope the answer helps!
The 'unresolved identifier' errors occur because neither counter nor sum is defined where it is used.
Just define them, e.g.:
var sum = 0
func fibonacci(counter: Int, x: Int) -> Int {
    var a = 1
    var b = 1
    if counter < x {
        sum = a + b
        a = b
        b = sum
        return fibonacci(counter: counter + 1, x: x)
    }
    return sum
}
fibonacci(counter: 0, x: 5)
Note: the fibonacci function still does not work properly, but the identified errors are resolved.
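For comparison, here is a self-contained iterative variant with no global state, a sketch in the spirit of the while-loop answers above, using the convention fib(1) = fib(2) = 1:

func fibonacci(_ x: Int) -> Int {
    guard x > 2 else { return 1 }   // fib(1) and fib(2) are both 1
    var (a, b) = (1, 1)
    for _ in 3...x {
        (a, b) = (b, a + b)         // slide the window forward
    }
    return b
}

print(fibonacci(5))    // 5
print(fibonacci(10))   // 55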

How to convert float to fraction in Swift [duplicate]

I am building a calculator and want it to automatically convert every decimal into a fraction. So if the user calculates an expression for which the answer is "0.333333...", it would return "1/3". For "0.25" it would return "1/4". Using the GCD, as found here (Decimal to fraction conversion), I have figured out how to convert any rational, terminating decimal into a fraction, but this does not work on any decimal that repeats (like .333333).
Every other function for this on Stack Overflow is in Objective-C. But I need a function in my Swift app! So a translated version of this (https://stackoverflow.com/a/13430237/5700898) would be nice!
Any ideas or solutions on how to convert a rational or repeating/irrational decimal to a fraction (i.e. convert "0.1764705882..." to 3/17) would be great!
If you want to display the results of calculations as rational numbers, then the only 100% correct solution is to use rational arithmetic throughout all calculations, i.e. all intermediate values are stored as a pair of integers (numerator, denominator), and all additions, multiplications, divisions, etc. are done using the rules for rational numbers.
As soon as a result is assigned to a binary floating point number such as Double, information is lost. For example,

let x: Double = 7/10

stores in x an approximation of 0.7, because that number cannot be represented exactly as a Double. From

print(String(format: "%a", x)) // 0x1.6666666666666p-1

one can see that x holds the value

0x16666666666666 * 2^(-53) = 6305039478318694 / 9007199254740992
                           ≈ 0.69999999999999995559107901499373838305

So a correct representation of x as a rational number would be 6305039478318694 / 9007199254740992, but that is of course not what you expect. What you expect is 7/10, but there is another problem:

let x: Double = 69999999999999996/100000000000000000

assigns exactly the same value to x; it is indistinguishable from 0.7 within the precision of a Double.
So should x be displayed as 7/10 or as 69999999999999996/100000000000000000?
As said above, using rational arithmetic would be the perfect solution. If that is not viable, then you can convert the Double back to a rational number with a given precision. (The following is taken from Algorithm for LCM of doubles in Swift.)
Continued fractions are an efficient method to create a (finite or infinite) sequence of fractions hn/kn that are arbitrarily good approximations to a given real number x. Here is a possible implementation in Swift:
typealias Rational = (num: Int, den: Int)

func rationalApproximationOf(x0: Double, withPrecision eps: Double = 1.0E-6) -> Rational {
    var x = x0
    var a = floor(x)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)
    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = floor(x)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return (h, k)
}
Examples:
rationalApproximationOf(0.333333) // (1, 3)
rationalApproximationOf(0.25) // (1, 4)
rationalApproximationOf(0.1764705882) // (3, 17)
The default precision is 1.0E-6, but you can adjust that to your needs:
rationalApproximationOf(0.142857) // (1, 7)
rationalApproximationOf(0.142857, withPrecision: 1.0E-10) // (142857, 1000000)
rationalApproximationOf(M_PI) // (355, 113)
rationalApproximationOf(M_PI, withPrecision: 1.0E-7) // (103993, 33102)
rationalApproximationOf(M_PI, withPrecision: 1.0E-10) // (312689, 99532)
Swift 3 version:
typealias Rational = (num: Int, den: Int)

func rationalApproximation(of x0: Double, withPrecision eps: Double = 1.0E-6) -> Rational {
    var x = x0
    var a = x.rounded(.down)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)
    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = x.rounded(.down)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return (h, k)
}
Examples:
rationalApproximation(of: 0.333333) // (1, 3)
rationalApproximation(of: 0.142857, withPrecision: 1.0E-10) // (142857, 1000000)
Or – as suggested by @brandonscript – with a struct Rational and an initializer:
struct Rational {
    let numerator: Int
    let denominator: Int

    init(numerator: Int, denominator: Int) {
        self.numerator = numerator
        self.denominator = denominator
    }

    init(approximating x0: Double, withPrecision eps: Double = 1.0E-6) {
        var x = x0
        var a = x.rounded(.down)
        var (h1, k1, h, k) = (1, 0, Int(a), 1)
        while x - a > eps * Double(k) * Double(k) {
            x = 1.0/(x - a)
            a = x.rounded(.down)
            (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
        }
        self.init(numerator: h, denominator: k)
    }
}
Example usage:
print(Rational(approximating: 0.333333))
// Rational(numerator: 1, denominator: 3)
print(Rational(approximating: .pi, withPrecision: 1.0E-7))
// Rational(numerator: 103993, denominator: 33102)
So a little late here, but I had a similar problem and ended up building Swift FractionFormatter. This works because most of the numbers you care about are part of the set of vulgar, or common, fractions, and those are easy to validate for a proper transformation. The rest may or may not round, but you get very close on any reasonable fraction your user might generate. It is designed to be a drop-in replacement for NumberFormatter.
As Martin R said, the only way to have (99.99%) exact calculations is to calculate everything with rational numbers, from beginning to end.
The reason behind the creation of this class was also the fact that I needed very accurate calculations, and that was not possible with the Swift-provided types, so I created my own.
Here is the code; I'll explain it below.
import Foundation

class Rational {
    var alpha = 0
    var beta = 0

    init(_ a: Int, _ b: Int) {
        if (a > 0 && b > 0) || (a < 0 && b < 0) {
            simplifier(a, b, "+")
        }
        else {
            simplifier(a, b, "-")
        }
    }

    init(_ double: Double, accuracy: Int = -1) {
        exponent(double, accuracy)
    }

    func exponent(_ double: Double, _ accuracy: Int) {
        // Converts a double to a rational number whose denominator is a power of 10.
        var exp = 1
        var double = double
        if accuracy != -1 {
            double = Double(NSString(format: "%.\(accuracy)f" as NSString, double) as String)!
        }
        while (double * Double(exp)).remainder(dividingBy: 1) != 0 {
            exp *= 10
        }
        if double > 0 {
            simplifier(Int(double * Double(exp)), exp, "+")
        }
        else {
            simplifier(Int(double * Double(exp)), exp, "-")
        }
    }

    func gcd(_ alpha: Int, _ beta: Int) -> Int {
        // Calculates the greatest common divisor by collecting common factors.
        var multi = 1
        var a = Swift.min(alpha, beta)
        var b = Swift.max(alpha, beta)
        guard a >= 2 else { return 1 }   // guards against an empty range below
        for idx in 2...a {
            while a % idx == 0 && b % idx == 0 {
                a = a / idx
                b = b / idx
                multi *= idx
            }
        }
        return multi
    }

    func simplifier(_ alpha: Int, _ beta: Int, _ posOrNeg: String) {
        // Simplifies numerator and denominator (alpha and beta) so they are coprime.
        let alpha = alpha > 0 ? alpha : -alpha
        let beta = beta > 0 ? beta : -beta
        let greatestCommonDivisor = gcd(alpha, beta)
        self.alpha = posOrNeg == "+" ? alpha / greatestCommonDivisor : -alpha / greatestCommonDivisor
        self.beta = beta / greatestCommonDivisor
    }
}
typealias Rnl = Rational

func * (a: Rational, b: Rational) -> Rational {
    let aa = a.alpha * b.alpha
    let bb = a.beta * b.beta
    return Rational(aa, bb)
}

func / (a: Rational, b: Rational) -> Rational {
    let aa = a.alpha * b.beta
    let bb = a.beta * b.alpha
    return Rational(aa, bb)
}

func + (a: Rational, b: Rational) -> Rational {
    let aa = a.alpha * b.beta + a.beta * b.alpha
    let bb = a.beta * b.beta
    return Rational(aa, bb)
}

func - (a: Rational, b: Rational) -> Rational {
    let aa = a.alpha * b.beta - a.beta * b.alpha
    let bb = a.beta * b.beta
    return Rational(aa, bb)
}

extension Rational {
    func value() -> Double {
        return Double(self.alpha) / Double(self.beta)
    }
}

extension Rational {
    func rnlValue() -> String {
        if self.beta == 1 {
            return "\(self.alpha)"
        }
        else if self.alpha == 0 {
            return "0"
        }
        else {
            return "\(self.alpha) / \(self.beta)"
        }
    }
}
// examples:
let first = Rnl(120, 45)
let second = Rnl(36, 88)
let third = Rnl(2.33435, accuracy: 2)
let forth = Rnl(2.33435)

print(first.alpha, first.beta, first.value(), first.rnlValue())
// prints: 8 3 2.6666666666666665 8 / 3
print((first * second).rnlValue())    // prints: 12 / 11
print((first + second).rnlValue())    // prints: 203 / 66
print(third.value(), forth.value())   // prints: 2.33 2.33435
First of all, we have the class itself. In the Rational class, alpha is the numerator and beta is the denominator, and the class can be initialised in two ways.
The first way is initialising the class using two integers, the first of which is the numerator and the second the denominator. The class takes those two integers and reduces them to the smallest numbers possible, e.g. it reduces (10, 5) to (2, 1), or, as another example, (144, 60) to (12, 5). This way, the simplest numbers are always stored.
This is done by the gcd (greatest common divisor) and simplifier functions, which are not hard to understand from the code.
The only catch is that the class faces some issues with negative numbers, so it always records whether the final rational number is negative or positive, and if it is negative it makes the numerator negative.
The second way to initialise the class is with a double and an optional parameter called accuracy. The class takes the double and the accuracy (how many digits after the decimal point you need), and converts the double to numerator/denominator form in which the denominator is a power of 10, e.g. 2.334 becomes 2334/1000 and 342.57 becomes 34257/100. It then simplifies the rational number using the same method explained for the first initialiser.
After the class definition there is the type alias Rnl, which you can of course change as you wish.
Then there are 4 functions for the 4 main operations of arithmetic (*, /, + and -), defined so that you can easily, for example, multiply two numbers of type Rational.
After that, there are 2 extensions to the Rational type: the first (value) gives you the double value of a Rational number, and the second (rnlValue) gives you the Rational number in the form of a human-readable string: "numerator / denominator".
At last, you can see some examples of how all this works.
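To illustrate the accuracy point made above, here is a small usage sketch (the expected output assumes the class exactly as written):

let sum = Rnl(1, 10) + Rnl(2, 10)
print(sum.rnlValue())   // 3 / 10 (exact)
print(sum.value())      // 0.3
print(0.1 + 0.2)        // 0.30000000000000004 (the usual Double rounding error)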

Generating random Int array of random count without repeated vars [duplicate]

This question already has answers here: Get random elements from array in Swift (6 answers). Closed 5 years ago.
The code first generates a random number between 0-8, assigning it to var n. Then a second random-number-generator func is looped n times to generate n ints between 0 and 10, each with a different probability of occurring, which are ultimately put into an array. What I want is for none of those 10 possible numbers to repeat, so once one is chosen it can no longer be chosen by the other n-1 runs of the func. I'm thinking a repeat-while loop, an if statement, or something involving an index, but I don't know exactly how, nor within which brackets. Thanks for any help! Some whisper this is the most challenging and intelligence-demanding coding conundrum on earth. Challenge accepted?
import UIKit

let n = Int(arc4random_uniform(8))
var a: Double = 0.2
var b: Double = 0.3
var c: Double = 0.2
var d: Double = 0.3
var e: Double = 0.2
var f: Double = 0.1
var g: Double = 0.2
var h: Double = 0.4
var i: Double = 0.2
var j: Double = 0.2
var k: [Int] = []

for _ in 0...n {
    func randomNumber(probabilities: [Double]) -> Int {
        let sum = probabilities.reduce(0, +)
        let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
        var accum = 0.0
        for (i, p) in probabilities.enumerated() {
            accum += p
            if rnd < accum {
                return i
            }
        }
        return (probabilities.count - 1)
    }
    k.append(randomNumber(probabilities: [a, b, c, d, e, f, g, h, i, j]))
}
print(k)
Pseudocode:
1) Generate a number between 1 and 8: n
2) Take an empty array: arr[]
3) Loop from 0 to n:
   1) Generate a random number: temp
   2) Check whether temp is already in arr; if it is, generate another
   3) When you get a number that is not in arr, insert it
This is Python code:

import random

n = random.randint(1, 8)
arr = []
print(n)
for each in range(n):
    temp = random.randint(1, 10)
    while temp in arr:
        temp = random.randint(1, 10)
    print(temp)
    arr.append(temp)
print(arr)
Swift version of Ankush's answer:

let n = Int(arc4random_uniform(8)) + 1              // 1...8, like randint(1, 8)
var arr: [Int] = []
for _ in 0 ..< n {                                  // n draws, like range(n)
    var temp = Int(arc4random_uniform(10)) + 1      // 1...10, like randint(1, 10)
    while arr.contains(temp) {
        temp = Int(arc4random_uniform(10)) + 1
    }
    print(temp)
    arr.append(temp)
}
print(arr)
Hope this helps!
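Both answers above drop the per-number probabilities from the original question. If you need to keep the weights and still avoid repeats, one option (a sketch, with an invented function name) is to zero out each chosen weight so that index cannot be drawn again:

import Foundation

func weightedSampleWithoutReplacement(_ probabilities: [Double], count: Int) -> [Int] {
    var weights = probabilities
    var picked: [Int] = []
    for _ in 0 ..< Swift.min(count, weights.count) {
        let sum = weights.reduce(0, +)
        guard sum > 0 else { break }                 // nothing left to draw
        let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
        var accum = 0.0
        // Fall back to the last index that still has nonzero weight.
        var choice = weights.lastIndex(where: { $0 > 0 }) ?? 0
        for (i, p) in weights.enumerated() {
            accum += p
            if rnd < accum { choice = i; break }
        }
        picked.append(choice)
        weights[choice] = 0                          // never pick this index again
    }
    return picked
}

let n = Int(arc4random_uniform(8)) + 1
let probabilities = [0.2, 0.3, 0.2, 0.3, 0.2, 0.1, 0.2, 0.4, 0.2, 0.2]
print(weightedSampleWithoutReplacement(probabilities, count: n))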

x = y = 1 in Scala?

While going through the book Scala for the Impatient, I came across this question:

Come up with one situation where the assignment x = y = 1 is valid in Scala. (Hint: Pick a suitable type for x.)

I am not sure what exactly the author means by this question. The assignment doesn't return a value, so something like var x = y = 1 should give x the Unit value (). Can somebody point out what I might be missing here?
Thanks
In fact, x is Unit in this case:
var y = 2
var x = y = 1
can be read as:
var y = 2
var x = (y = 1)
and finally:
var x: Unit = ()
You can get to the point of being able to type x = y = 1 in the REPL shell with no error thus:

var x: Unit = {}
var y = 0
x = y = 1
Here's another lesser-known case, where the setter method returns its argument. Note that the type of x is actually Int here:

object AssignY {
  private var _y: Int = _
  def y = _y
  def y_=(i: Int) = { _y = i; i }
}

import AssignY._
var x = y = 1
(This feature is used in the XScalaWT library, and was discussed in that question.)
BTW, if assigning the same value to both variables is required, then use:

scala> var x@y = 1
x: Int = 1
y: Int = 1
It's valid, but not sensible, since it makes for confusion:

scala> var x = y = 1
x: Unit = ()

scala> y
res60: Int = 1

scala> var x@y = 1
x: Int = 1
y: Int = 1
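For comparison, Swift behaves similarly: assignment is an expression of type Void (a deliberate choice, to prevent accidents like if x = y), so the analogous declaration type-checks there too. A quick sketch:

var y = 2
var x: Void = (y = 1)   // the assignment expression evaluates to ()
print(x, y)             // prints: () 1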