Why does this multiple recursion fail over a certain number of recursions? - swift

This code adds up all the integers from 0 to number; however, it crashes with Segmentation fault: 11 (or a bad memory access) at 104829 or larger. Why?
import Foundation

func sigma(_ m: Int64) -> Int64 {
    if m <= 0 {
        return 0
    } else {
        return m + sigma(m - 1)
    }
}

let number: Int64 = 104829
let answer = sigma(number)
NB: sigma(104828) = 5494507206
Running in Terminal on macOS 10.11 on a Core 2 Duo MacBook Pro with 8 GB RAM (in case that's relevant!)

You're getting a stack overflow. You can get/set the stack size of your current process using getrlimit(2)/setrlimit(2). Here's an example usage:
import Darwin // Unnecessary if you already have Foundation imported

func getStackByteLimit() -> rlimit? {
    var limits = rlimit()
    guard getrlimit(RLIMIT_STACK, &limits) != -1 else {
        perror("Error with getrlimit")
        return nil
    }
    return limits
}

func setStackLimit(bytes: UInt64) -> Bool {
    guard let max = getStackByteLimit()?.rlim_max else { return false }
    var limits = rlimit(rlim_cur: bytes, rlim_max: max)
    guard setrlimit(RLIMIT_STACK, &limits) != -1 else {
        perror("Error with setrlimit")
        return false
    }
    return true
}
By default, it's 8,388,608 bytes (2,048 pages of 4,096 bytes each).
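For example, a sketch using the helpers above (the 64 MB figure is arbitrary; note that the main thread's stack is typically sized at launch, so a raised limit may only help threads created afterwards):

// Hypothetical usage: try to raise the soft limit to 64 MB.
if setStackLimit(bytes: 64 * 1024 * 1024) {
    print("New soft stack limit:", getStackByteLimit()?.rlim_cur ?? 0)
}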
Yours is a textbook example of an algorithm that cannot be tail-call optimized. The result of the recursive call isn't returned directly; it's used as an operand for an addition. Because of this, the compiler can't eliminate stack frames during the recursion: each frame must stay around to keep track of the addition that still needs to be done. The algorithm can be improved by using an accumulator parameter:
func sigma(_ m: Int64, acc: Int64 = 0) -> Int64 {
    if m <= 0 {
        return acc
    } else {
        return sigma(m - 1, acc: acc + m)
    }
}
In this code, the result of the recursive call is returned directly, so the compiler can emit code that removes the intermediate stack frames. This should prevent the stack overflow, though note that Swift doesn't guarantee tail-call elimination, so in practice you may need an optimized (-O) build for it to kick in.
But really, you can just do this in constant time, without any recursive nonsense :p
func sum(from start: Int64 = 0, to end: Int64) -> Int64 {
    let count = end - start + 1
    return (start * count + end * count) / 2
}
print(sum(to: 50))
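As a sanity check against the recursive version, using the figure quoted in the question:

print(sum(to: 104828)) // 5494507206, matches sigma(104828)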

Related

Default integer result to avoid dividing by zero in SwiftUI

How do I give a computed variable a default result of 0 when the calculation divides by 0?
I am a beginner in SwiftUI and am trying to create a simple quiz app. In the app, questions are counted by category, such as "hsa" below (hsaCount.correct and hsaCount.incorrect), and eventually a score is calculated (scoreHsa).
The code below works fine, except when no "hsa" question is attempted: triedHsa is then 0, which raises "Thread 1: Fatal error: Division by zero".
I am trying to figure out how to make scoreHsa default to 0 when triedHsa is 0.
I tried using guard and if/else statements but failed, likely due to my inexperience.
let hsaCount: (correct: Int, incorrect: Int)
var triedHsa: Int { hsaCount.correct + hsaCount.incorrect }
var scoreHsa: Int { (hsaCount.correct * 100) / triedHsa }
Use a ternary expression:
var scoreHsa: Int {
    triedHsa == 0 ? 0 : (hsaCount.correct * 100) / triedHsa
}
Or, you can use a guard statement:
var scoreHsa: Int {
    guard triedHsa != 0 else {
        return 0
    }
    return (hsaCount.correct * 100) / triedHsa
}
Swift is missing a throwing function for this; you need to make one yourself.
(option+/ makes ÷)
var scoreHsa: Int { (try? hsaCount.correct * 100 ÷ triedHsa) ?? 0 }

infix operator ÷: MultiplicationPrecedence

public extension BinaryInteger {
    /// - Throws: `DivisionByZeroError`
    static func ÷ (numerator: @autoclosure () -> Self, denominator: Self) throws -> Self {
        guard denominator != 0
        else { throw DivisionByZeroError() }
        return numerator() / denominator
    }
}

public struct DivisionByZeroError: Error { }
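If you'd rather surface the error than default to 0, a quick sketch using the operator defined above:

do {
    _ = try 100 ÷ 0
} catch {
    print(error) // DivisionByZeroError()
}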

Swift: find out how many digits an integer has

I don't have the right English or math vocabulary to really explain what I want to do, but I'll try to explain. Basically I want to figure out "how big" an integer is, how many decimal positions it has. For example 1234 is "a thousand" number, and 2,987,123 is "a million" number.
I can do something like this, but that is rather silly :)
extension Int {
    func size() -> Int {
        switch self {
        case 0...9:
            return 1
        case 10...99:
            return 10
        case 100...999:
            return 100
        case 1000...9999:
            return 1000
        case 10000...99999:
            return 10000
        case 100000...999999:
            return 100000
        case 1000000...9999999:
            return 1000000
        default:
            return 0 // where do we stop?
        }
    }
}
A solution using logarithms:
Note: this solution has limitations, because a Double cannot exactly represent the log10 of every large Int. It starts failing around 15 digits, for Ints very close to the next power of 10 (e.g. 999999999999999). This is the problem:
log10(Double(999999999999999)) == log10(Double(1000000000000000))
import Foundation

extension Int {
    var size: Int {
        self == 0 ? 1 : Int(pow(10.0, floor(log10(abs(Double(self))))))
    }
}
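For example, 999999999999999 (15 nines) lands on the wrong power of ten:

999999999999999.size // 1000000000000000, but the correct size is 100000000000000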
A solution using Strings:
It avoids any mathematical representation errors by working entirely with Int and String.
extension Int {
    var size: Int {
        Int("1" + repeatElement("0", count: String(self.magnitude).count - 1))!
    }
}
A generic version for any FixedWidthInteger:
In collaboration with @LeoDabus, I present the generic version for any integer type:
extension FixedWidthInteger {
    var size: Self {
        Self("1" + repeatElement("0", count: String(self.magnitude).count - 1))!
    }
}
Examples:
Int8.max.size // 100
Int16.max.size // 10000
Int32.max.size // 1000000000
Int.max.size // 1000000000000000000
UInt.max.size // 10000000000000000000
I came up with this:
extension Int {
    func size() -> Int {
        var size = 1
        var modifyingNumber = self
        // `>= 10` so that exact powers of ten (10, 100, ...) are handled correctly.
        while modifyingNumber >= 10 {
            modifyingNumber = modifyingNumber / 10
            size = size * 10
        }
        return size
    }
}
Works, but it's rather imperative.
Is this too silly?
extension Int {
    var size: Int {
        String(self).count
    }
}
My reasoning is that a "digit" really is a manner of writing, so converting to String answers the real question. Converting the size back to a number (i.e. the corresponding power of ten) would then of course lead right back to the logarithm answer. :)
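One caveat: String(self) includes the minus sign for negative values, so it over-counts by one. A small variant using magnitude counts only the digits (digitCount is a hypothetical name, not from the answer above):

extension Int {
    // Counts decimal digits only, ignoring any "-" sign.
    var digitCount: Int {
        String(self.magnitude).count
    }
}

(-1234).digitCount // 4, whereas String(-1234).count == 5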

Converting a C-style for loop that uses division for the step to Swift 3

I have this loop, decrementing an integer by division, in Swift 2.
for var i = 128; i >= 1; i = i/2 {
    // do something
}
The C-style for loop is deprecated, so how can I convert this to Swift 3.0?
Quite general loops with a non-constant stride can be realized with sequence:
for i in sequence(first: 128, next: { $0 >= 2 ? $0/2 : nil }) {
    print(i)
}
Advantages: the loop variable i is a constant, and its scope is restricted to the loop body.
Possible disadvantages: the terminating condition must be adapted (here: $0 >= 2 instead of i >= 1), and the loop is always executed at least once, for the first value.
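For example, with a first value that already fails the original i >= 1 test, the body still runs once:

for i in sequence(first: 0, next: { $0 >= 2 ? $0/2 : nil }) {
    print(i) // prints 0 once; the C-style loop would not have executed at all
}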
One could also write a wrapper which resembles the C-style for loop more closely and does not have the listed disadvantages (inspired by Erica Sadun: Stateful loops and sequences):
public func sequence<T>(first: T, while condition: @escaping (T) -> Bool, next: @escaping (T) -> T) -> UnfoldSequence<T, T> {
    let nextState = { (state: inout T) -> T? in
        guard condition(state) else { return nil }
        defer { state = next(state) }
        return state
    }
    return sequence(state: first, next: nextState)
}
and then use it as
for i in sequence(first: 128, while: { $0 >= 1 }, next: { $0 / 2 }) {
    print(i)
}
MartinR's solution is very generic and useful and should be part of your toolbox.
Another approach is to rephrase what you want: the powers of two from 7 down to 0.
for i in (0...7).reversed().map({ 1 << $0 }) {
    print(i)
}
I suggest using a while loop for this scenario:
var i = 128
while i >= 1 {
    // Do your stuff
    i = i / 2
}

How to replace a complicated C-style for loop in Swift 2.2

For an unsigned integer type library that I've developed, I have a specialized C-style for loop used for calculating the significant bits in a stored numeric value. I have been struggling for some time with how to convert this into a Swift 2.2+ style for loop. Here's the code in question:
/// Counts up the significant bits in stored data.
public var significantBits: UInt128 {
    // Will turn into final result.
    var significantBitCount: UInt128 = 0
    // The bits to crawl in loop.
    var bitsToWalk: UInt64 = 0
    if self.value.upperBits > 0 {
        bitsToWalk = self.value.upperBits
        // When upperBits > 0, lowerBits are all significant.
        significantBitCount += 64
    } else if self.value.lowerBits > 0 {
        bitsToWalk = self.value.lowerBits
    }
    if bitsToWalk > 0 {
        // Walk significant bits by shifting right until all bits are equal to 0.
        for var bitsLeft = bitsToWalk; bitsLeft > 0; bitsLeft >>= 1 {
            significantBitCount += 1
        }
    }
    return significantBitCount
}
I'm sure there are multiple ways to handle this, including some more verbose ones, but I'm interested in finding a succinct approach that I can reapply to similar circumstances. I find that I very rarely use C-style for loops, but when I do, it's for odd scenarios like this one where they're the most succinct way to handle the problem.
The simplest solution is to just use a while loop:
Replace this code:
if bitsToWalk > 0 {
    // Walk significant bits by shifting right until all bits are equal to 0.
    for var bitsLeft = bitsToWalk; bitsLeft > 0; bitsLeft >>= 1 {
        significantBitCount += 1
    }
}
With the following while loop:
while bitsToWalk > 0 {
    significantBitCount += 1
    bitsToWalk >>= 1
}
One option is to use the built-in processor functions:
Put:
#import <x86intrin.h>
into your Obj-C bridging header, and then in Swift:
let number: UInt64 = 111
let leadingZeros = _lzcnt_u64(number) // 57 leading zero bits
print(64 - leadingZeros)              // 7 significant bits
(Of course, you have to be on the correct architecture; this intrinsic is defined only on x86, so this solution isn't exactly portable.)
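As an aside (not part of the original answer, which targets Swift 2.2): later Swift versions expose the same facility portably on FixedWidthInteger, with no intrinsics required:

// leadingZeroBitCount is available on all FixedWidthInteger types since Swift 4.
let number: UInt64 = 111
let significantBits = 64 - number.leadingZeroBitCount // 7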
This function should calculate the number of significant bits in a UInt64 value:
import Foundation

func significantBits(n: UInt64) -> Int {
    return Int(ceil(log2(Double(n))))
}
let n: UInt64 = 0xFFFFFFFFFFFFFFFF // 64 significant bits
let m: UInt64 = 0b11011 // 5 significant bits
print("n has \(significantBits(n)) significant bits.")
print("m has \(significantBits(m)) significant bits.")
and outputs:
n has 64 significant bits.
m has 5 significant bits.
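One caveat worth adding: ceil(log2(_:)) undercounts exact powers of two (and log2(0) is -infinity, which traps when converted to Int), so check it against your expected inputs:

significantBits(16) // 4, but 0b10000 has 5 significant bits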
You could probably replace your code with something like:
private func calcSigBits(n: UInt64) -> Int {
    return Int(ceil(log2(Double(n))))
}

public var significantBits: Int {
    if self.value.upperBits > 0 {
        return calcSigBits(self.value.upperBits) + 64
    } else {
        return calcSigBits(self.value.lowerBits)
    }
}
If you don't want to use log2, you can use the loop from nhgrif's answer, but it's still good to factor this out, since it's a conceptually separate operation, and it makes your own code much simpler. You could even add it as an extension to UInt64:
extension UInt64 {
    public var significantBits: Int {
        var sb = 0
        var value = self
        while value > 0 {
            sb += 1
            value >>= 1
        }
        return sb
    }
}

// Rest of your class definition...
public var significantBits: Int {
    if self.value.upperBits > 0 {
        return self.value.upperBits.significantBits + 64
    } else {
        return self.value.lowerBits.significantBits
    }
}
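A quick check of the extension in isolation:

UInt64(0b11011).significantBits // 5
UInt64(0).significantBits       // 0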

A concise way to not execute a loop now that C-Style for loops are going to be removed from Swift 3?

Imagine we have this code which works perfectly for n >= 0.
func fibonacci(n: Int) -> Int {
    var memo = [0, 1]
    for var i = 2; i <= n; i++ {
        memo.append(memo[i-1] + memo[i-2])
    }
    return memo[n]
}
If I remove the C-style for loop due to upcoming changes to Swift 3.0, I get something like this:
func fibonacci(n: Int) -> Int {
    var memo = [0, 1]
    for i in 2...n {
        memo.append(memo[i-1] + memo[i-2])
    }
    return memo[n]
}
While this works fine for n >= 2, it fails for the numbers 0 and 1 with this error message:
fatal error: Can't form Range with end < start
What's the most concise way to fix this code so it works properly for 0 and 1?
(Note: It's okay, and even desirable, for negative numbers to crash the app.)
Note: I realize I could add a guard statement:
guard n >= 2 else { return memo[n] }
... but I'm hoping there is a better way to fix just the faulty part of the code (2...n).
For example, if there was a concise way to create a range that returns zero elements if end < start, that would be a more ideal solution.
To do this in a way that works for n < 2, you can use stride(from:through:by:), which simply yields an empty sequence when the end lies below the start:
let startIndex = 2
let endIndex = n

for i in stride(from: startIndex, through: endIndex, by: 1) {
    memo.append(memo[i-1] + memo[i-2])
}
You can easily create a valid range with the max() function:
for i in 2 ..< max(2, n+1) {
    memo.append(memo[i-1] + memo[i-2])
}
This evaluates to the empty range 2 ..< 2 if n < 2. It is important to use the ..< operator, which excludes the upper bound, because 2 ... 1 is not a valid range.
But in this function I would simply treat the special cases first:
func fibonacci(n: Int) -> Int {
    // Let it crash if n < 0:
    precondition(n >= 0, "n must not be negative")
    // Handle n = 0, 1:
    if n <= 1 {
        return n
    }
    // Handle n >= 2:
    var memo = [0, 1]
    for i in 2...n {
        memo.append(memo[i-1] + memo[i-2])
    }
    return memo[n]
}
(Note that your memo array is reset to the initial value [0, 1] on each function call, so the values are not really "memoized". Without memoization you don't need an array at all; it would suffice to keep the last two numbers to compute the next.)
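A minimal sketch of that last remark (not from the original answers), keeping just the last two values instead of an array:

func fibonacci(n: Int) -> Int {
    precondition(n >= 0, "n must not be negative")
    var (a, b) = (0, 1)
    for _ in 0..<n {
        (a, b) = (b, a + b) // advance one step: a = fib(i), b = fib(i+1)
    }
    return a
}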
As it turns out, the variable i will always be equal to the count of the memoizing array, so you can just use that as your loop condition:
func fibonacci(n: Int) -> Int {
    var memo = [0, 1]
    while n >= memo.count {
        memo.append(memo[memo.count - 1] + memo[memo.count - 2])
    }
    return memo[n]
}
Alternatively, you could express the loop as a recursive function:
func fibonacci(n: Int) -> Int {
    var memo = [0, 1]
    func rec(i: Int) -> Int {
        if i >= memo.count {
            memo.append(rec(i-2) + rec(i-1))
        }
        return memo[i]
    }
    return rec(n)
}
Really, though, if is the best solution here. Ranges don't allow the end to be smaller than the beginning by design. The extra early-return line in:
func fibonacci(n: Int) -> Int {
    if n < 2 { return n }
    var memo = [0, 1]
    for i in 2...n {
        memo.append(memo[i-1] + memo[i-2])
    }
    return memo[n]
}
is readable and understandable. (To my eye, the code above is better than the for ;; version.)
@Marc's answer is great: https://stackoverflow.com/a/34324032/1032900
But the stride syntax is too long for frequent use, so I made it a little more pleasant for the common i++ case...
extension Strideable {
    @warn_unused_result
    public func stride(to end: Self) -> StrideTo<Self> {
        return stride(to: end, by: 1)
    }

    @warn_unused_result
    public func stride(thru end: Self) -> StrideThrough<Self> {
        return stride(through: end, by: 1)
    }
}
So use it like this:
for i in startPos.stride(to: endPos) {
    print("pos at: \(i)")
}
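or, with the inclusive variant defined above:

for i in startPos.stride(thru: endPos) {
    print("pos at: \(i)")
}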