I am making a function that calculates the factorial in Swift, like this:
func factorial(factorialNumber: UInt64) -> UInt64 {
    if factorialNumber == 0 {
        return 1
    } else {
        return factorialNumber * factorial(factorialNumber: factorialNumber - 1)
    }
}
let x = factorial(factorialNumber: 20)
This function only works up to 20. I think factorial(21) is bigger than UINT64_MAX.
So how can I calculate 21! (21 factorial) in Swift?
func factorial(_ n: Int) -> Double {
    return (1...n).map(Double.init).reduce(1.0, *)
}
1. (1...n): We create a range of all the numbers that are involved in the operation (i.e. 1, 2, 3, ...).
2. map(Double.init): We convert each Int to a Double, because Doubles can represent much bigger numbers than Ints (https://en.wikipedia.org/wiki/Double-precision_floating-point_format). So we now have an array of all the numbers involved in the operation as Doubles (i.e. [1.0, 2.0, 3.0, ...]).
3. reduce(1.0, *): We multiply 1.0 by the first element of the array (1.0 * 1.0 = 1.0), then that result by the next element (1.0 * 2.0 = 2.0), then that result by the next one (2.0 * 3.0 = 6.0), and so on.
Step 2 avoids the overflow issue.
Step 3 saves us from explicitly defining a variable to keep track of the partial results.
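For example, a quick check of the function above (Double carries only about 15–16 significant decimal digits, so treat very large results as approximate):

let x = factorial(21)   // ≈ 5.109094217170944e19, i.e. about 51,090,942,171,709,440,000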
An unsigned 64-bit integer has a maximum value of 18,446,744,073,709,551,615, while 21! = 51,090,942,171,709,440,000. For a case like this you need a Big Integer type. I found a question about Big Integers in Swift; there is a link to a Big Integer library in it:
BigInteger equivalent in Swift?
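If you would rather not add a dependency, here is a minimal sketch of the big-integer idea (not the linked library's API; the helper name factorialDigits is made up for illustration). It computes the factorial exactly by doing schoolbook multiplication on an array of decimal digits:

func factorialDigits(_ n: Int) -> String {
    var digits = [1]   // least significant digit first; covers 0! = 1! = 1
    for factor in stride(from: 2, through: n, by: 1) {
        var carry = 0
        for i in digits.indices {
            let value = digits[i] * factor + carry
            digits[i] = value % 10
            carry = value / 10
        }
        while carry > 0 {          // append any remaining carry digits
            digits.append(carry % 10)
            carry /= 10
        }
    }
    return digits.reversed().map { String($0) }.joined()
}

print(factorialDigits(21))   // 51090942171709440000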
Did you think about using a Double, or perhaps NSDecimalNumber?
Also, calling the same function recursively is bad for performance.
How about using a loop:
import Foundation

let number = 21   // the value whose factorial we want
var product = NSDecimalNumber(value: number)
for i in (1..<number).reversed() {
    product = product.multiplying(by: NSDecimalNumber(value: i))
}
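Since NSDecimalNumber keeps up to 38 significant decimal digits, 21! (which has 20 digits) comes out exact here, unlike with Double:

print(product)   // 51090942171709440000 when number is 21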
Here's a function that accepts any type conforming to the Numeric protocol, which all of the built-in number types do.
func factorial<N: Numeric>(_ x: N) -> N {
    x == 0 ? 1 : x * factorial(x - 1)
}
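Note that the generic version only lets you choose the numeric type; it does not remove the overflow problem by itself. A small usage sketch:

let d = factorial(21.0)            // Double, ≈ 5.109094217170944e19
// let u: UInt64 = factorial(21)   // would still trap at runtime: 21! overflows UInt64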
First we declare a temp variable of type Double so it can hold a number of this size.
Then we create a function that takes a parameter of type Double.
Then we check: if the number equals 0 we return, or rather do nothing; this if condition is what breaks the recursion. Finally we return temp, which holds the factorial of the given number.
var temp: Double = 1.0

func factorial(x: Double) -> Double {
    if x == 0 {
        // base case: do nothing, just fall through to return temp
    } else {
        _ = factorial(x: x - 1)   // recurse first, then multiply on the way back up
        temp *= x
    }
    return temp
}

factorial(x: 21.0)
// note: temp is global state, so reset it to 1.0 before computing another factorial
I make a function that calculates the factorial like this:
func factorialNumber(number: Int) -> Int {
    var x = 1
    for i in 1...number {
        x *= i
    }
    return x
}
print(factorialNumber(number: 5))
If you are willing to give up precision you can use a Double to roughly calculate factorials up to 170:
func factorial(_ n: Int) -> Double {
    if n == 0 {
        return 1
    }
    var a: Double = 1
    for i in 1...n {
        a *= Double(i)
    }
    return a
}
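A rough check of the 170 limit mentioned above (the numeric value is approximate):

let stillFinite = factorial(170)   // ≈ 7.26e306, still representable
let overflowed = factorial(171)    // exceeds Double.greatestFiniteMagnitude, so this is +infinity

stillFinite.isFinite     // true
overflowed.isInfinite    // true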
If not, use a big integer library.
func factorial(_ num: Int) -> Int {
    if num == 0 || num == 1 {
        return 1
    } else {
        return num * factorial(num - 1)
    }
}
Using recursion to solve this problem:
func factorial(_ n: UInt) -> UInt {
    return n < 2 ? 1 : n * factorial(n - 1)
}
func factorial(a: Int) -> Int {
    return a == 1 ? a : a * factorial(a: a - 1)
}
print(factorial(a: 5))
print(factorial(a: 9))
I hope you can check this for me. When I use 5 as x, it should give me -0.17749282815107623, but it returns -0.2792375. I can't see where I've made the mistake.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}

func power(val: Float, power: Int) -> Float {
    var c: Float = 1
    for _ in 1...power {
        c *= val
    }
    return c
}

func bessel(x: Float) -> Float {
    var j0: Float = 0
    var counter = 1
    var lastDetermVal: Float = 1
    for eNumber in evenNumbers {
        print(lastDetermVal)
        if counter == 1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 += (power(val: x, power: eNumber)) / lastDetermVal
            counter = -1
        } else if counter == -1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 -= (power(val: x, power: eNumber)) / lastDetermVal
            counter = 1
        }
    }
    return 1 - j0
}
bessel(x: 5)
Your mistake seems to be that you didn't have enough even numbers.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
After the above is run, evenNumbers will be populated with [2,4,6,8,10]. But to evaluate 10 terms, you need even numbers up to 18 or 20, depending on whether you count 1 as a "term". Therefore, you should loop up to 18 or 20:
var evenNumbers = [Int]()
for i in 2...18 { // I think the 1 at the beginning should count as a "term"
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
Alternatively, you can create this array like this:
let evenNumbers = (1..<10).map { $0 * 2 }
This means "for each number between 1 (inclusive) and 10 (exclusive), multiply each by 2".
Now your solution will give you an answer of -0.1776034.
Here's my (rather slow) solution:
import Foundation // for pow

func productOfFirstNEvenNumbers(_ n: Int) -> Float {
    if n == 0 {
        return 1
    }
    let firstNEvenNumbers = (1...n).map { Float($0) * 2.0 }
    // ".reduce(1.0, *)" means "multiply everything"
    return firstNEvenNumbers.reduce(1.0, *)
}
func nthTerm(_ n: Int, x: Float) -> Float {
    let numerator = pow(x, Float(n) * 2)
    // yes, this does recalculate the product of even numbers every time...
    let product = productOfFirstNEvenNumbers(n)
    let denominator = product * product
    return numerator / denominator * pow(-1, Float(n))
}

func bessel10Terms(x: Float) -> Float {
    // for each number n in the range 0..<10, get the nth term, add them together
    (0..<10).map { nthTerm($0, x: x) }.reduce(0, +)
}
print(bessel10Terms(x: 5))
Your code is a bit unreadable; however, I have written a simple solution, so try to compare your intermediate results:
var terms: [Float] = []
let x: Float = 5
for index in 0 ..< 10 {
    guard index > 0 else {
        terms.append(1)
        continue
    }
    // calculate only the multiplier for the previous term
    // - (minus) to change the sign
    // x * x to multiply the numerator
    // (Float(index * 2) * Float(index * 2)) to multiply the denominator
    let termFactor = -(x * x) / (Float(index * 2) * Float(index * 2))
    terms.append(terms[index - 1] * termFactor)
}
print(terms)

// sum the terms
let result = terms.reduce(0, +)
print(result)
One of the errors I see is that you are actually calculating only 5 terms, not 10 (you iterate from 1 to 10, but use only the even numbers).
I have two values in bytes in two different variables. I want to perform a certain action whenever the values are nearly equal to each other.
Is there any method in Swift with which I can perform an action when variable values are nearly equal?
If so, please recommend some code, a tutorial, or an article to achieve this.
I am new to Swift, so please avoid downvoting.
let string1 = "Hello World"
let string2 = "Hello"
let byteArrayOfString1: [UInt8] = string1.utf8.map{UInt8($0)} //Converting HELLO WORLD into Byte Type Array
let byteArrayOfString2: [UInt8] = string2.utf8.map{UInt8($0)} //Converting HELLO into Byte Type Array
if byteArrayOfString1 == byteArrayOfString2 {
print("Match")
}else {
print("Not Match")
}
For more help, visit https://medium.com/#gorjanshukov/working-with-bytes-in-ios-swift-4-de316a389a0c
I don't think there is such a method that compares approximate values, but if you describe what exactly you want to do, we can find a better alternative solution.
Here is the Solution:
func nearlyEqual(a: Float, b: Float, epsilon: Float) -> Bool {
    let absA = abs(a)
    let absB = abs(b)
    let diff = abs(a - b)

    if a == b {
        return true
    } else if a == 0 || b == 0 || absA + absB < Float.leastNormalMagnitude {
        // a or b is zero, or both are extremely close to it;
        // relative error is less meaningful here.
        // Use the smallest positive *normal* float (leastNormalMagnitude),
        // not leastNonzeroMagnitude, which is the tiny subnormal minimum.
        return diff < epsilon * Float.leastNormalMagnitude
    } else {
        return diff / (absA + absB) < epsilon
    }
}
Then you can use it like this:
print(nearlyEqual(a: 1.2, b: 1.4, epsilon: 0.2))
This will return true.
This is a LeetCode question. I wrote 4 answers to different versions of that question. When I tried to use bit manipulation, I got an error. Since no one on LeetCode could answer my question and I can't find any Swift documentation about this, I thought I would try asking here.
The question is to get the majority element (> n/2) in a given array. The following code works in other languages like Java, so I think this might be a general Swift question.
func majorityElement(nums: [Int]) -> Int {
    var bit = Array(repeating: 0, count: 32)
    for num in nums {
        for i in 0..<32 {
            if (num >> (31 - i)) & 1 == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<32 {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (31 - i))
    }
    return ret
}
When the input is [-2147483648], the output is 2147483648. But in Java, it can successfully output the right negative number.
The Swift documentation says:
Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
Well, the maximum is 2,147,483,647, and my output is 1 larger than that number. When I ran pow(2.0, 31.0) in a playground, it showed 2147483648. I'm confused. What's wrong with my code, or what did I miss about Swift's Int?
A Java int is a 32-bit integer. The Swift Int is 32-bit or 64-bit depending on the platform; in particular, it is 64-bit on all OS X platforms where Swift is available.
Your code handles only the lower 32 bits of the given integers, so that
-2147483648 = 0xffffffff80000000
becomes
2147483648 = 0x0000000080000000
To solve the problem, you can either change the function to take 32-bit integers as arguments:
func majorityElement(nums: [Int32]) -> Int32 { ... }
or make it work with arbitrarily sized integers by computing the actual size and using that instead of the constant 32:
func majorityElement(nums: [Int]) -> Int {
    let numBits = MemoryLayout<Int>.size * 8
    var bit = Array(repeating: 0, count: numBits)
    for num in nums {
        for i in 0..<numBits {
            if (num >> (numBits - 1 - i)) & 1 == 1 {
                bit[i] += 1
            }
        }
    }
    var ret = 0
    for i in 0..<numBits {
        bit[i] = bit[i] > nums.count / 2 ? 1 : 0
        ret += bit[i] * (1 << (numBits - 1 - i))
    }
    return ret
}
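As a quick sanity check of the size-aware version above (the expected value follows from counting the two's-complement bits on a 64-bit platform):

print(majorityElement(nums: [-2147483648]))   // -2147483648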
A more Swifty way would be to use map() and reduce()
func majorityElement(nums: [Int]) -> Int {
    let numBits = MemoryLayout<Int>.size * 8
    let bitCounts = (0 ..< numBits).map { i in
        nums.reduce(0) { $0 + ($1 >> i) & 1 }
    }
    let major = (0 ..< numBits).reduce(0) {
        $0 | (bitCounts[$1] > nums.count / 2 ? 1 << $1 : 0)
    }
    return major
}
I am trying to generate random floats between 1 and 100, but I keep getting errors every time. Currently I am trying:
func returnDbl() -> Double {
    var randNum = Double(Float(arc4random(101) % 5))
    return randNum
}
print(returnDbl())
but to no avail. Would someone point me in the right direction?
arc4random_uniform(n) is zero-based and returns values between 0 and n-1, so pass 100 as the upper bound and add 1.
arc4random_uniform is easier to use than arc4random with a modulo; it returns a UInt32, which has to be converted to Float or Double.
import Foundation

func randomFloat() -> Float {
    return Float(arc4random_uniform(100) + 1)
}
or Double
func randomDouble() -> Double {
    return Double(arc4random_uniform(100) + 1)
}
or generic
func returnFloatingPoint<T: FloatingPoint>() -> T {
    return T(arc4random_uniform(100) + 1)
}
let float: Float = returnFloatingPoint()
let double: Double = returnFloatingPoint()
Edit
To return a non-integral Double between 1.000000 and 99.99999 with arc4random_uniform() use
func returnDouble() -> Double {
    return Double(arc4random_uniform(UInt32.max)) / 0x100000000 * 99.0 + 1.0
}
0x100000000 is UInt32.max + 1
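A small usage note: dividing by 0x100000000 maps the random value into [0, 1), so the result always lands in the half-open range [1, 100). The printed value differs on every run:

let sample = returnDouble()
print(sample)                              // some value in [1.0, 100.0)
print(sample >= 1.0 && sample < 100.0)     // true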
let a = 1 + drand48() * 99
drand48 is a C function that returns a double in the range [0, 1). You can call it directly from Swift. Multiplying by 99 gives you a double in the range [0, 99). Add one to get into the range [1, 100).
As drand48 returns a double, the Swift type will be Double.
As per the comment, drand48 will by default return the same sequence of numbers upon every launch. You can avoid that by seeding it. E.g.
srand48(Int(arc4random()))   // seed once, e.g. at launch; srand48 is the simpler C seeding call
func returnDbl() -> Double {
    let randNum = Double(arc4random() % 100 + 1)   // 1...100 (the modulo introduces a slight bias)
    return randNum
}
Ok, thank you everybody for all of your help. The setups you showed me helped me figure out how it should look; my end result is
func returnDbl() -> Double {
    let randNum = Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max) * 99.0 + 1.0
    return randNum
}
print(returnDbl())
It returns floating-point values between 1 and 100.
If I have a function like:
func evaluateGraph(sender: GraphView, atX: Double) -> Double? {
    return function?(atX)
}
Where function is a variable declared earlier that holds a mathematical expression (like x^2). How can I find the inverse of the univariate function in Swift at a point (atX)?
Assuming that you just want to know the inverse function in your GraphView (which is hopefully not infinite) you can use something like this:
import Foundation // for pow

// higher precision -> better accuracy; start...end: interval; f: function
func getZero(precision: Int, start: Double, end: Double, f: (Double) -> Double) -> Double? {
    var start = start
    var end = end
    let fS = f(start)
    let fE = f(end)
    let isStartNegative = fS.sign == .minus
    if isStartNegative == (fE.sign == .minus) { return nil }
    let fMin = min(fS, fE)
    let fMax = max(fS, fE)
    let doublePrecision = pow(10, -Double(precision))
    while end - start > doublePrecision {
        let mid = (start + end) / 2
        let fMid = f(mid)
        if fMid < fMin || fMax < fMid {
            return nil
        }
        if (fMid > 0) == isStartNegative {
            end = mid
        } else {
            start = mid
        }
    }
    return (start + end) / 2
}

// same as above, but returns an array of points
func getZerosInRange(precision: Int, start: Double, end: Double, f: (Double) -> Double) -> [Double] {
    // accuracy/step count in the interval; "antiproportional" performance!!!!
    let stepCount = 100.0
    let by = (end - start) / stepCount
    var zeros = [Double]()
    for x in stride(from: start, to: end, by: by) {
        if let xZero = getZero(precision: precision, start: x, end: x + by, f: f) {
            zeros.append(xZero)
        }
    }
    return zeros
}

// returns a function; all returned values are elements of the interval start...end
func inverse(precision: Int, start: Double, end: Double, f: @escaping (Double) -> Double) -> (Double) -> [Double] {
    return { x in
        getZerosInRange(precision: precision, start: start, end: end) { f($0) - x }
    }
}

let f = { (x: Double) in x * x }

// you would pass the min and max Y values of the GraphView
// type: (Double) -> [Double]
let inverseF = inverse(precision: 10, start: -10, end: 10, f: f)
inverseF(4) // outputs [-1.999999999953436, 2.000000000046564]
Interestingly, this code runs in a playground in about 0.5 seconds, which I didn't expect.
You could create an inverse function that does the opposite.
So if f(x) = y, the inverse f' gives f'(y) = x.
So if your defined function squares the input, then the inverse returns the square root, and so on.
You might run into trouble with something like that, though, since f'(1) = 1 and -1 in the case where f(x) returns the square.
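For example, a hand-written inverse for the squaring case (the names f and fInverse here are just for illustration); note that the square root gives back only the non-negative root, which is exactly the ambiguity mentioned above:

func f(_ x: Double) -> Double { x * x }
func fInverse(_ y: Double) -> Double { y.squareRoot() }   // non-negative root only

fInverse(f(3))    // 3.0
fInverse(f(-3))   // 3.0, not -3.0, because squaring is not one-to-one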
Short of actually writing the inverse method, which is the only way to truly determine which input gave a provided output, the best we can do is write our program to make guesses until it's within a certain accuracy.
So, for example, let's say we have the square function:
func square(input: Double) -> Double {
    return input * input
}
Now, if we don't want to write the inverse of this function (which actually has two inputs for any given output), then we have to write a function to guess.
func inverseFunction(output: Double, function: (Double) -> Double) -> Double
This takes a double representing the output, and a function (the one that generated the output), and returns its best-guess at the answer.
So, how do we guess?
Well, the same way we do in pre-calculus and the early parts of any calculus 1 class. Pick a starting number, run it through the function, compare the result to the output we're looking for. Record the delta. Pick a second number, run it through the function, compare the result to the output we're looking for. Record the delta, compare it to the first delta.
Continually do this until you have minimized the delta to an acceptable accuracy level (0.1 delta okay? 0.01? 0.0001?). The smaller the delta, the longer it takes to calculate. But it's going to take a long time no matter what.
As for the guessing algorithm? That's a math question that I'm not capable of answering. I wouldn't even know where to begin with that.
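A minimal sketch of that guess-and-refine idea, assuming the function is monotonic on an interval [lo, hi] that brackets the answer (the names inverseByBisection and tolerance are illustrative, not part of any API):

// Bisection: repeatedly halve an interval that brackets the sought input,
// keeping the half whose endpoints still straddle the target output.
func inverseByBisection(of f: (Double) -> Double,
                        output: Double,
                        lo: Double,
                        hi: Double,
                        tolerance: Double = 1e-6) -> Double {
    var lo = lo
    var hi = hi
    while hi - lo > tolerance {
        let mid = (lo + hi) / 2
        if (f(mid) - output).sign == (f(lo) - output).sign {
            lo = mid   // mid is on the same side as lo, so the answer lies above mid
        } else {
            hi = mid   // otherwise the answer lies below mid
        }
    }
    return (lo + hi) / 2
}

inverseByBisection(of: square, output: 4, lo: 0, hi: 10)   // ≈ 2.0, using the square function above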
In the end, your best bet is to simply write inverse functions for any function you'd want to inverse.