How to get the calculation steps for factorials? (Swift)

I know how to get the factorial answer with recursion:
func factorial(_ n: Int) -> Int {
    if n == 0 { return 1 }
    else { return n * factorial(n - 1) }
}
If I passed factorial(5), the method would return 120, but is it possible to get the calculation itself?
What I mean is: if I had a function called breakDownFactorial and I called breakDownFactorial(5), it would return 5 * 4 * 3 * 2 * 1; if I called breakDownFactorial(4), it would return 4 * 3 * 2 * 1, and so on.

Well, I'm not entirely sure what you mean by "get the calculation", but to get a string representation of the calculation, you can do something like this:
func breakDownFactorial(_ n: Int) -> String {
    return (1...n).reversed().map(String.init).joined(separator: " * ")
}
breakDownFactorial(5) // 5 * 4 * 3 * 2 * 1
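One caveat (my addition, not part of the original answer): (1...n) traps at runtime when n is 0, because 1...0 is an invalid range. A guarded variant that keeps the usual convention that 0! is 1 might look like this:
func breakDownFactorial(_ n: Int) -> String {
    guard n >= 1 else { return "1" } // 0! is 1 by convention; forming 1...0 would crash
    return (1...n).reversed().map(String.init).joined(separator: " * ")
}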

With small modifications, factorial can be converted to breakDownFactorial, which returns the string description of the factorial calculation:
func breakDownFactorial(_ n: Int) -> String {
    if n == 0 { return "1" }
    else {
        return "\(n) * " + breakDownFactorial(n - 1)
    }
}
breakDownFactorial(10) // "10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1 * 1"
The extra "* 1" accurately reflects how the original factorial function works. If you wish to eliminate it, change the recursive base case to:
if n <= 1 { return "1" }
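For reference, the adjusted function then becomes (a minimal restatement of the answer's code with the new base case):
func breakDownFactorial(_ n: Int) -> String {
    if n <= 1 { return "1" } // stop at 1, so there is no trailing extra "* 1"
    else {
        return "\(n) * " + breakDownFactorial(n - 1)
    }
}
breakDownFactorial(10) // "10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1"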

Related

How can I write the code for the Bessel function with 10 terms in Swift?

I hope you can check this. When I use 5 as x, it should show me -0.17749282815107623, but it returns -0.2792375. I couldn't figure out where I have been making the mistake.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}

func power(val: Float, power: Int) -> Float {
    var c: Float = 1
    for _ in 1...power {
        c *= val
    }
    return c
}

func bessel(x: Float) -> Float {
    var j0: Float = 0
    var counter = 1
    var lastDetermVal: Float = 1
    for eNumber in evenNumbers {
        print(lastDetermVal)
        if counter == 1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 += power(val: x, power: eNumber) / lastDetermVal
            counter = -1
        } else if counter == -1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 -= power(val: x, power: eNumber) / lastDetermVal
            counter = 1
        }
    }
    return 1 - j0
}
bessel(x: 5)
Your mistake seems to be that you didn't have enough even numbers.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
After the above is run, evenNumbers will be populated with [2,4,6,8,10]. But to evaluate 10 terms, you need even numbers up to 18 or 20, depending on whether you count 1 as a "term". Therefore, you should loop up to 18 or 20:
var evenNumbers = [Int]()
for i in 2...18 { // I think the 1 at the beginning should count as a "term"
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
Alternatively, you can create this array like this:
let evenNumbers = (1..<10).map { $0 * 2 }
This means "for each number from 1 (inclusive) up to 10 (exclusive), multiply it by 2".
Now your solution will give you an answer of -0.1776034.
Here's my (rather slow) solution:
import Foundation // for pow

func productOfFirstNEvenNumbers(_ n: Int) -> Float {
    if n == 0 {
        return 1
    }
    let firstNEvenNumbers = (1...n).map { Float($0) * 2.0 }
    // ".reduce(1.0, *)" means "multiply everything"
    return firstNEvenNumbers.reduce(1.0, *)
}

func nthTerm(_ n: Int, x: Float) -> Float {
    let numerator = pow(x, Float(n) * 2)
    // yes, this does recalculate the product of even numbers every time...
    let product = productOfFirstNEvenNumbers(n)
    let denominator = product * product
    return numerator / denominator * pow(-1, Float(n))
}

func bessel10Terms(x: Float) -> Float {
    // for each number n in the range 0..<10, get the nth term, add them together
    return (0..<10).map { nthTerm($0, x: x) }.reduce(0, +)
}

print(bessel10Terms(x: 5))
Your code is a bit hard to read; however, I have written a simple solution, so try to compare your intermediate results:
var terms: [Float] = []
let x: Float = 5
for index in 0 ..< 10 {
    guard index > 0 else {
        terms.append(1)
        continue
    }
    // calculate only the multiplier for the previous term:
    // - (minus) to change the sign,
    // x * x to multiply the numerator,
    // (Float(index * 2) * Float(index * 2)) to multiply the denominator
    let termFactor = -(x * x) / (Float(index * 2) * Float(index * 2))
    terms.append(terms[index - 1] * termFactor)
}
print(terms)

// sum the terms
let result = terms.reduce(0, +)
print(result)
One of the errors I see is that you are actually calculating only 5 terms, not 10 (you iterate from 2 to 10, but take only the even numbers).
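For context (this formula is my addition; none of the answers state it explicitly), all of the solutions above implement the standard series expansion of the Bessel function of the first kind of order zero:

J_0(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k!)^2} \left(\frac{x}{2}\right)^{2k} = 1 - \frac{x^2}{2^2} + \frac{x^4}{2^2 \cdot 4^2} - \frac{x^6}{2^2 \cdot 4^2 \cdot 6^2} + \cdots

The denominators are exactly the squared products of the first k even numbers, which is what productOfFirstNEvenNumbers (and the asker's running lastDetermVal) computes.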

Resolving the highest product in an array (Swift)

I have a task to get the highest product I can from three integers in an array of Ints.
Given an array of integers, find the highest product you can get from three of the integers. If the array size is less than 3, please print out -1.
Here is what I tried, which returned 20:
func adjacentElementsProduct(inputArray: [Int]) -> Int {
    let sorted = inputArray.sorted()
    let count = sorted.count
    if count > 1 {
        return max(sorted[0] * sorted[1] * sorted[2],
                   sorted[count - 1] * sorted[count - 2] * sorted[count - 3])
    } else {
        return sorted.first ?? 0
    }
}
adjacentElementsProduct(inputArray: [3, 1, 2, 5, 4])
It however fails for this test case:
adjacentElementsProduct(inputArray: [1, 10, -5, 1, -100])
It returns 500, but 5000 is expected.
func adjacentElementsProduct(inputArray: [Int]) -> Int {
    let sorted = inputArray.sorted()
    let count = sorted.count
    guard count >= 3 else {
        return -1
    }
    return max(
        sorted[count - 1] * sorted[count - 2] * sorted[count - 3],
        sorted[count - 1] * sorted[0] * sorted[1]
    )
}
How does it work?
There are three cases to consider. The highest product is either:
1. the product of the 3 highest positive numbers,
2. the product of the highest positive number multiplied by the product of the two lowest negative numbers (the negatives cancel out), or
3. the product of the 3 highest negative numbers (those closest to zero), if there are only negative numbers.
Cases 1 and 3 are covered by sorted[count - 1] * sorted[count - 2] * sorted[count - 3]. Case 2 is covered by sorted[count - 1] * sorted[0] * sorted[1].
func adjacentElementsProduct(inputArray: [Int]) -> Int {
    let sorted = inputArray.sorted()
    let count = sorted.count
    if count < 3 {
        return -1
    } else {
        return max(sorted[0] * sorted[1] * sorted[count - 1],
                   sorted[count - 1] * sorted[count - 2] * sorted[count - 3])
    }
}
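A quick check of either implementation against the test cases from the question (my addition):
adjacentElementsProduct(inputArray: [3, 1, 2, 5, 4])      // 60, i.e. 3 * 4 * 5
adjacentElementsProduct(inputArray: [1, 10, -5, 1, -100]) // 5000, i.e. 10 * -5 * -100
adjacentElementsProduct(inputArray: [1, 2])               // -1, fewer than 3 elements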

How does a recursive function return the result in Scala?

I am currently learning Scala and I am stuck on the following thing:
I have this algorithm, which finds the factorial of a number in a recursive way:
def fact(n: Int): Int = {
  if (n == 1) 1
  else n * fact(n - 1)
}
println(fact(5))
My question is: what does the line if (n == 1) 1 do exactly? Does it mean that the function should return 1, or that n should become 1? I don't understand how this function returns 120, which is the result. Could someone help me understand? I appreciate any help you can provide.
Uhm, this is a very broad question.
Since you are asking for a basic understanding of the operators of the language, I will try to explain it all to you, but I would recommend that you take a formal introduction to programming course.
In Scala everything is an expression. Thus, the function itself is an expression that evaluates to the assigned block.
In this case the block is just an if / else expression, which takes a predicate to decide which of the two branches to evaluate. Here, n == 1 checks whether n is equal to 1; if that is true, the expression evaluates to 1, and if not, it evaluates to n * fact(n - 1).
Thus, if we execute the algorithm by ourselves using "equational reasoning", we can understand how it works.
fact(3) = if (3 == 1) 1 else 3 * fact(3 - 1) // replace n in the block.
fact(3) = 3 * fact(2) // reduce the if and the subtraction.
fact(3) = 3 * (if (2 == 1) 1 else 2 * fact(2 - 1)) // expand the fact definition.
fact(3) = 3 * (2 * fact(1)) // reduce the if and the subtraction.
fact(3) = 3 * (2 * (if (1 == 1) 1 else 1 * fact(1 - 1))) // expand the fact definition.
fact(3) = 3 * (2 * (1)) // reduce the if.
fact(3) = 6 // reduce the multiplications.
Let's make this method more C-oriented.
Maybe now it's more clear that there are two branches:
1. When n equals 1, which stops the recursion.
2. Otherwise, multiply the current value of n by the result of calling the fact method with n - 1, which eventually becomes 1 and stops the recursion.
def fact(n: Int): Int = {
  if (n == 1) {
    return 1;
  } else {
    return n * fact(n - 1);
  }
}
The semicolons are redundant, and the return keyword is not recommended/necessary.
You can read about it here
So you are left with:
def fact(n: Int): Int = {
  if (n == 1) {
    1
  } else {
    n * fact(n - 1)
  }
}
Which is basically the same as:
def fact(n: Int): Int = {
  if (n == 1) 1
  else n * fact(n - 1)
}

Recursive function in Xcode Playground (Swift)

I'm learning recursive functions in Swift, and I did the following:
func recursive(i: Int) -> Int {
    if i == 1 {
        return 1
    } else if i >= 2 {
        return recursive(i: i - 1) + 1
    }
    return 0
}
I couldn't figure out why the function above is not working. I've tested it by doing print(recursive(i: 10)), which gives me an output of 10. I expected the output to be 1. Can anyone help me with this? Thank you in advance.
I'm using Playgrounds in Xcode 8.3.
When you do this:
recursive(i: i - 1) + 1
… then you are in effect decrementing i and then incrementing it again. That cancels out and you arrive at i again.
Let's write down what calculation would be done for i = 3:
recursive(3) = recursive(2) + 1 = (recursive(1) + 1) + 1 = (1 + 1) + 1 = 3
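To watch the recursion unwind step by step, one could add a print at each level (an illustrative variant of the question's code, not part of the original answer):
func recursive(i: Int) -> Int {
    if i == 1 {
        return 1
    } else if i >= 2 {
        let result = recursive(i: i - 1) + 1
        // show how each decrement is cancelled by the + 1 on the way back up
        print("recursive(\(i)) = recursive(\(i - 1)) + 1 = \(result)")
        return result
    }
    return 0
}

recursive(i: 3)
// recursive(2) = recursive(1) + 1 = 2
// recursive(3) = recursive(2) + 1 = 3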
This is a perfect example of printing numbers without using any loop; recursive functions are very useful for handling such cases.
func printCount(count: inout Int, limit: Int) {
    print(count, terminator: " ")
    count += 1
    if count > limit {
        return
    }
    printCount(count: &count, limit: limit)
}

var count = 11
let limit = 20
printCount(count: &count, limit: limit)
Output: 11 12 13 14 15 16 17 18 19 20

Swift 3 - Convert Integer to Character

I am trying to get something to look like this:
1 2 3 4 5 6 7 8 9 10
A * * * * * * * * * *
B * * * * * * * * * *
....until J
How do I convert a row's value to a character?
Also, I was thinking of using col and adding it to 64 for the ASCII value, or adding it to 41 for the Unicode character. I really don't care which way.
Would anyone know how I could do this?
for row in 0..<board.count {
    for col in 0..<board[row].count {
        if row == 0 && col != 0 {
            board[row][col] = ROW AS CHARACTER
        } else if col == 0 && row != 0 {
            board[row][col] = A - J
        } else if col != 0 && row != 0 {
            board[row][col] = "*"
        }
    }
}
You can extend Int to return the desired associated character as follows:
extension Int {
    var associatedCharacter: Character? {
        guard 1...10 ~= self, let unicodeScalar = UnicodeScalar(UInt32(64 + self)) else { return nil }
        return Character(unicodeScalar)
    }
}

1.associatedCharacter  // A
2.associatedCharacter  // B
3.associatedCharacter  // C
10.associatedCharacter // J
11.associatedCharacter // nil
(Un)fortunately, Swift tries very hard to separate the concept of a “character”, what you read on screen, and the underlying ASCII representation, for reasons which become very important when you deal with complex texts. (What letter comes after “a”? Depends what language you speak.)
Most likely, what you want is to increment the UnicodeScalar value which for “regular” text corresponds to the ASCII encoding.
let first = ("A" as UnicodeScalar).value
board[row][col] = String(Character(UnicodeScalar(first + UInt32(row))!))
The extra steps and optional unwrapping are Swift's way of reminding you of all the things that could go wrong when you "increment" a letter.
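Putting the pieces together, here is a minimal sketch of how the labeled board might be built (my own illustration, assuming an 11×11 [[String]] grid so that row 0 and column 0 can hold the labels):
var board = Array(repeating: Array(repeating: "*", count: 11), count: 11)
let firstLetter = ("A" as UnicodeScalar).value

for row in 0..<board.count {
    for col in 0..<board[row].count {
        if row == 0 && col == 0 {
            board[row][col] = " " // empty corner cell
        } else if row == 0 {
            board[row][col] = String(col) // header row: 1 through 10
        } else if col == 0 {
            // label column: A (scalar 65) through J (scalar 74)
            board[row][col] = String(Character(UnicodeScalar(firstLetter + UInt32(row - 1))!))
        }
    }
}

for line in board {
    print(line.joined(separator: " "))
}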