How do I "tune my program" to get relative errors down to E-10? - swift

The problem
I wrote a recursive program to calculate the first 26 values of the spherical Bessel functions for a given input x. The program seems to start accumulating relatively large errors at j_8(x). Here is the output of some code that computed the relative errors against known high-precision spherical Bessel function values for the first 25 SBFs (l=0 to l=24):
5.27753443514687e-16
9.21595688945492e-16
3.02006896635059e-15
7.62720601251909e-14
4.57824286846449e-12
4.42939097671948e-10
6.23673401270476e-08
1.20281992792456e-05
0.00304187901758528
0.976196008573827
387.486227014647
186360.170424407
106776056.913571
71856241567.0684
56117385601216.0
5.03357079061797e+16
5.13915094291702e+19
5.92532834322625e+22
7.6613338261931e+25
1.10398479254694e+29
1.76304629085447e+32
3.10469930465156e+35
6.00134333269024e+38
1.26807670217943e+42
2.91783063679757e+45
As you can see, after j_8(x) the error starts to grow rapidly. The problem simply states that I should "tune" my program to get the error down to E-10, but how do I do this? I'm completely lost...
The code:
import Foundation

// Upward recurrence: j_l(x) = ((2l - 1)/x) * j_(l-1)(x) - j_(l-2)(x)
func j_up(x_val: Double, l_val: Double) -> Double {
    var j_up_val: Double = 0.0
    if l_val == 0 {
        j_up_val = sin(x_val)/x_val
    } else if l_val == 1 {
        j_up_val = (sin(x_val)/pow(x_val, 2.0)) - (cos(x_val)/x_val)
    } else if l_val == 2 {
        j_up_val = ((2*(l_val-1)+1)/x_val) * ((sin(x_val)/pow(x_val, 2.0)) - cos(x_val)/x_val) - sin(x_val)/x_val
    } else {
        let l2: Double = l_val - 1
        let l3: Double = l_val - 2
        // recurse on the parameter x_val rather than any global
        j_up_val = ((2*(l_val-1)+1)/x_val)*j_up(x_val: x_val, l_val: l2) - j_up(x_val: x_val, l_val: l3)
    }
    return j_up_val
}

let x: Double = 1
print("x:", x)
for i in 0...25 {
    let i_double = Double(i)
    print("l:", i)
    print(j_up(x_val: x, l_val: i_double))
    print(" \n")
}
print("**********************")

Related

How can I write the code of Bessel function with 10 term in swift?

I hope you guys can check this. When I use 5 as x it should show me -0.17749282815107623, but it returns -0.2792375. I can't figure out where I've made the mistake.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}

func power(val: Float, power: Int) -> Float {
    var c: Float = 1
    for _ in 1...power {
        c *= val
    }
    return c
}

func bessel(x: Float) -> Float {
    var j0: Float = 0
    var counter = 1
    var lastDetermVal: Float = 1
    for eNumber in evenNumbers {
        print(lastDetermVal)
        if counter == 1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 += (power(val: x, power: eNumber))/lastDetermVal
            counter = -1
        } else if counter == -1 {
            lastDetermVal *= power(val: Float(eNumber), power: 2)
            j0 -= (power(val: x, power: eNumber))/lastDetermVal
            counter = 1
        }
    }
    return 1 - j0
}

bessel(x: 5)
Your mistake seems to be that you didn't have enough even numbers.
var evenNumbers = [Int]()
for i in 2...10 {
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
After the above is run, evenNumbers will be populated with [2,4,6,8,10]. But to evaluate 10 terms, you need even numbers up to 18 or 20, depending on whether you count 1 as a "term". Therefore, you should loop up to 18 or 20:
var evenNumbers = [Int]()
for i in 2...18 { // I think the 1 at the beginning should count as a "term"
    if i % 2 == 0 {
        evenNumbers.append(i)
    }
}
Alternatively, you can create this array like this:
let evenNumbers = (1..<10).map { $0 * 2 }
This means "for each number between 1 (inclusive) and 10 (exclusive), multiply each by 2".
Now your solution will give you an answer of -0.1776034.
Here's my (rather slow) solution:
func productOfFirstNEvenNumbers(_ n: Int) -> Float {
    if n == 0 {
        return 1
    }
    let firstNEvenNumbers = (1...n).map { Float($0) * 2.0 }
    // ".reduce(1.0, *)" means "multiply everything"
    return firstNEvenNumbers.reduce(1.0, *)
}

func nthTerm(_ n: Int, x: Float) -> Float {
    let numerator = pow(x, Float(n) * 2)
    // yes, this does recalculate the product of even numbers every time...
    let product = productOfFirstNEvenNumbers(n)
    let denominator = product * product
    return numerator / denominator * pow(-1, Float(n))
}

func bessel10Terms(x: Float) -> Float {
    // for each number n in the range 0..<10, get the nth term, add them together
    (0..<10).map { nthTerm($0, x: x) }.reduce(0, +)
}
print(bessel10Terms(x: 5))
Your code is a bit unreadable; however, I have written a simple solution, so try comparing your intermediate results:
var terms: [Float] = []
let x: Float = 5
for index in 0 ..< 10 {
    guard index > 0 else {
        terms.append(1)
        continue
    }
    // calculate only the multiplier for the previous term:
    // - (minus) to change the sign,
    // x * x to multiply the numerator,
    // Float(index * 2) * Float(index * 2) to multiply the denominator
    let termFactor = -(x * x) / (Float(index * 2) * Float(index * 2))
    terms.append(terms[index - 1] * termFactor)
}
print(terms)

// sum the terms
let result = terms.reduce(0, +)
print(result)
One of the errors I see is that you are actually calculating only 5 terms, not 10 (you iterate from 2 to 10, but keep only the even numbers).
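To see the term count concretely (a quick illustrative check, not from the original answer):
// Filtering 2...10 for even numbers yields only five values,
// hence only five series terms instead of ten:
let evens = (2...10).filter { $0 % 2 == 0 }
print(evens)        // [2, 4, 6, 8, 10]
print(evens.count)  // 5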

Generating a simple algebraic expression in swift

I'm looking to create a function that returns a solve-for-x math equation that can be performed in one's head (clearly that's a bit subjective, but I'm not sure how else to phrase it).
Example problem: (x - 15)/10 = 6
Note: Only 1 x in the equation
I want to use the operations +, -, *, /, sqrt (Only applied to X -> sqrt(x))
I know that let mathExpression = NSExpression(format: question) converts strings into math expressions, but I'm not sure how to go about solving for x.
I previously asked Generating random doable math problems swift for problems that don't involve solving for x, but I'm not sure how to convert that answer into one that does.
Edit: Goal is to generate an equation and have the user solve for the variable.
Since all you want is a string representing an equation and a value for x, you don't need to do any solving. Just start with x and transform it until you have a nice equation. Here's a sample: (copy and paste it into a Playground to try it out)
import UIKit

enum Operation: String {
    case addition = "+"
    case subtraction = "-"
    case multiplication = "*"
    case division = "/"

    static func all() -> [Operation] {
        return [.addition, .subtraction, .multiplication, .division]
    }

    static func random() -> Operation {
        let all = Operation.all()
        let selection = Int(arc4random_uniform(UInt32(all.count)))
        return all[selection]
    }
}

func addNewTerm(formula: String, result: Int) -> (formula: String, result: Int) {
    // choose a random number and operation
    let operation = Operation.random()
    let number = chooseRandomNumberFor(operation: operation, on: result)
    // apply to the left side
    let newFormula = applyTermTo(formula: formula, number: number, operation: operation)
    // apply to the right side
    let newResult = applyTermTo(result: result, number: number, operation: operation)
    return (newFormula, newResult)
}

func applyTermTo(formula: String, number: Int, operation: Operation) -> String {
    return "\(formula) \(operation.rawValue) \(number)"
}

func applyTermTo(result: Int, number: Int, operation: Operation) -> Int {
    switch operation {
    case .addition: return result + number
    case .subtraction: return result - number
    case .multiplication: return result * number
    case .division: return result / number
    }
}

func chooseRandomNumberFor(operation: Operation, on number: Int) -> Int {
    switch operation {
    case .addition, .subtraction, .multiplication:
        return Int(arc4random_uniform(10) + 1)
    case .division:
        // add code here to find integer factors
        return 1
    }
}

func generateFormula(_ numTerms: Int = 1) -> (String, Int) {
    let x = Int(arc4random_uniform(10))
    var leftSide = "x"
    var result = x
    for i in 1...numTerms {
        (leftSide, result) = addNewTerm(formula: leftSide, result: result)
        if i < numTerms {
            leftSide = "(" + leftSide + ")"
        }
    }
    let formula = "\(leftSide) = \(result)"
    return (formula, x)
}

func printFormula(_ numTerms: Int = 1) {
    let (formula, x) = generateFormula(numTerms)
    print(formula, " x = ", x)
}

for _ in 1...30 {
    printFormula(Int(arc4random_uniform(3)) + 1)
}
There are some things missing. The sqrt() function will have to be implemented separately. And for division to be useful, you'll have to add in a system to find factors (since you presumably want the results to be integers). Depending on what sort of output you want, there's a lot more work to do, but this should get you started.
Here's sample output:
(x + 10) - 5 = 11 x = 6
((x + 6) + 6) - 1 = 20 x = 9
x - 2 = 5 x = 7
((x + 3) * 5) - 6 = 39 x = 6
(x / 1) + 6 = 11 x = 5
(x * 6) * 3 = 54 x = 3
x * 9 = 54 x = 6
((x / 1) - 6) + 8 = 11 x = 9
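For the division case flagged above, one option is to pick a random factor of the running result so quotients stay whole. The randomFactor helper below is an illustrative addition, not part of the original answer:
import Foundation

// Pick a random factor of the current result so that dividing by it
// keeps the running result an integer.
func randomFactor(of number: Int) -> Int {
    let n = abs(number)
    guard n > 1 else { return 1 }
    let factors = (1...n).filter { n % $0 == 0 }
    return factors[Int(arc4random_uniform(UInt32(factors.count)))]
}
In chooseRandomNumberFor(operation:on:), the .division case could then return randomFactor(of: number) instead of 1.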
Okay, let's assume from you saying "Note: Only 1 x in the equation" that what you want is a linear equation of the form 0 = β1*x + β0, where β1 and β0 are the slope and intercept coefficients, respectively.
The inverse of (or solution to) any linear equation is given by x = -β0/β1. So what you really need to do is generate random integers β0 and β1 to create your equation. But since it should be “solvable” in someone’s head, you probably want β0 to be divisible by β1, and furthermore, for β1 and β0/β1 to be less than or equal to 12, since this is the upper limit of the commonly known multiplication tables. In this case, just generate a random integer β1 ≤ 12, and β0 equal to β1 times some integer n, 0 ≤ n ≤ 12.
If you want to allow simple fractional solutions like 2/3, just multiply the denominator and the numerator into β0 and β1, respectively, taking care to prevent the numerator or denominator from getting too large (12 is again a good limit).
Since you probably want to make the right-hand side non-zero, just generate a third random integer y between -12 and 12 and present the equation as y = β1*x + (β0 + y); subtracting y from both sides recovers 0 = β1*x + β0, so the solution is unchanged.
Since you mentioned √ could occur over the x variable only, that is pretty easy to add; the solution (to 0 = β1*sqrt(x) + β0) is just x = (β0/β1)**2.
Here is some very simple (and very problematic) code for generating random integers to get you started:
import func Glibc.srand
import func Glibc.rand
import func Glibc.time
srand(UInt32(time(nil)))
print(rand() % 12)
There are a great many answers on this website that deal with better ways to generate random integers.
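Putting the scheme above together, here is a minimal sketch; makeLinearProblem and its internals are illustrative names, and Int.random(in:) requires Swift 4.2 or later:
import Foundation

// Choose a slope β1 within the times tables, make the intercept β0 a
// multiple of β1 so the answer is a whole number, then offset both sides
// by y so the right-hand side is non-zero.
func makeLinearProblem() -> (equation: String, solution: Int) {
    let beta1 = Int.random(in: 1...12)      // slope
    let solution = Int.random(in: 0...12)   // the x the user should find
    let beta0 = -beta1 * solution           // so that -β0/β1 == solution
    let y = Int.random(in: -12...12)        // offset applied to both sides
    return ("\(beta1)x + \(beta0 + y) = \(y)", solution)
}

let (equation, x) = makeLinearProblem()
print(equation, "  x =", x)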

Is there an Inverse Error Function available in Swift's Foundation import?

I'm using the erf function in Swift, like this:
import Foundation
erf(2)
Is there an inverse error function as well?
There is no such function in the standard library, but here is an implementation of it:
https://github.com/antelopeusersgroup/antelope_contrib/blob/master/lib/location/libgenloc/erfinv.c
HTH.
Swift does not appear to have it. This has an algorithm that could be translated into Swift:
Need code for Inverse Error Function
Also, Ch. 6.2.2 of Numerical Recipes, 3rd ed., has an algorithm:
https://e-maxx.ru/bookz/files/numerical_recipes.pdf
The solution that OP had posted on GitHub:
func erfinv(y: Double) -> Double {
    let center = 0.7
    let a = [ 0.886226899, -1.645349621,  0.914624893, -0.140543331]
    let b = [-2.118377725,  1.442710462, -0.329097515,  0.012229801]
    let c = [-1.970840454, -1.624906493,  3.429567803,  1.641345311]
    let d = [ 3.543889200,  1.637067800]
    if abs(y) <= center {
        let z = pow(y, 2)
        let num = (((a[3]*z + a[2])*z + a[1])*z) + a[0]
        let den = ((((b[3]*z + b[2])*z + b[1])*z + b[0])*z + 1.0)
        var x = y*num/den
        // two Newton-Raphson refinement steps
        x = x - (erf(x) - y)/(2.0/sqrt(.pi)*exp(-x*x))
        x = x - (erf(x) - y)/(2.0/sqrt(.pi)*exp(-x*x))
        return x
    } else if abs(y) > center && abs(y) < 1.0 {
        let z = pow(-log((1.0 - abs(y))/2), 0.5)
        let num = ((c[3]*z + c[2])*z + c[1])*z + c[0]
        let den = (d[1]*z + d[0])*z + 1
        // should use the sign function instead of pow(pow(y,2),0.5)
        var x = y/pow(pow(y, 2), 0.5)*num/den
        x = x - (erf(x) - y)/(2.0/sqrt(.pi)*exp(-x*x))
        x = x - (erf(x) - y)/(2.0/sqrt(.pi)*exp(-x*x))
        return x
    } else if abs(y) == 1 {
        return y * Double(Int.max)
    } else {
        return .nan
    }
}
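A quick round-trip sanity check of the implementation above, assuming Foundation's erf:
import Foundation

print(erfinv(y: erf(1.0)))  // ≈ 1.0
print(erf(erfinv(y: 0.5)))  // ≈ 0.5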

IBM Swift Sandbox cannot resolve call to CoreFoundation function

I'm mucking around with the beta of the IBM Swift Sandbox; does anyone know why I'm getting the following error for the code below?
LLVM ERROR: Program used external function 'CFAbsoluteTimeGetCurrent' which could not be resolved!
// A demonstration of both iterative and recursive algorithms for computing the Fibonacci numbers.
import CoreFoundation

// A recursive algorithm to compute the Fibonacci numbers.
func fibRec(n: Int) -> Double {
    return (Double)(n < 3 ? 1 : fibRec(n - 1) + fibRec(n - 2))
}

// An iterative algorithm to compute the Fibonacci numbers.
func fibIter(n: Int) -> Double {
    var f2 = 0.0
    var f1 = 1.0
    var f0 = 1.0
    for _ in 0 ..< n {
        f2 = f1 + f0
        f0 = f1
        f1 = f2
    }
    return f0
}

// Initialise array to hold algorithm execution times.
var fibTimes = [Double]()

// i is the ith Fibonacci number to be computed.
for i in 120..<129 {
    var fibNum = 0.0
    var fibSum = 0.0
    // j is the number of times to compute F(i) to obtain an average.
    for j in 0..<5 {
        // Set start time.
        let startTime = CFAbsoluteTimeGetCurrent()
        // Uses the recursive algorithm.
        // fibNum = fibRec(i)
        // Uses the iterative algorithm.
        fibNum = fibIter(i)
        fibTimes.insert(CFAbsoluteTimeGetCurrent() - startTime, atIndex: j)
    }
    // Compute the average execution time.
    for p in fibTimes {
        fibSum += p
    }
    fibSum = fibSum / 5
    print("Fibonacci number \(i) is: \(fibNum)")
    print("Execution time: \(fibSum) seconds")
}
You just have to import Foundation as well.
I am not sure why this is needed (I hope someone who does know can shed some light on this), but it works.
To clarify, you have to import them both.
import Foundation
import CoreFoundation
Alternatively, Glibc has clock() which returns an Int instead. For example:
import Glibc

struct StartTimer {
    let start = clock()
    var elapsed = -1
    mutating func stop() { elapsed = clock() - start }
}

var timer = StartTimer()
/* do your work */
timer.stop()
print("Elapsed Time: \(timer.elapsed)")

Swift automatic function inversion

If I have a function like:
func evaluateGraph(sender: GraphView, atX: Double) -> Double? {
    return function?(atX)
}
Where function is a variable declared earlier that holds a mathematical expression (like x^2). How can I find the inverse of this univariate function in Swift at a point (atX)?
Assuming that you just want to know the inverse function in your GraphView (which is hopefully not infinite) you can use something like this:
import Foundation

// higher precision -> better accuracy; start...end: interval, f: function
func getZero(precision: Int, start: Double, end: Double, f: (Double) -> Double) -> Double? {
    var start = start
    var end = end
    let fS = f(start)
    let fE = f(end)
    let isStartNegative = fS.sign == .minus
    if isStartNegative == (fE.sign == .minus) { return nil }
    let fMin = min(fS, fE)
    let fMax = max(fS, fE)
    let doublePrecision = pow(10, -Double(precision))
    while end - start > doublePrecision {
        let mid = (start + end) / 2
        let fMid = f(mid)
        if fMid < fMin || fMax < fMid {
            return nil
        }
        if (fMid > 0) == isStartNegative {
            end = mid
        } else {
            start = mid
        }
    }
    return (start + end) / 2
}

// same as above, but it returns an array of zeros
func getZerosInRange(precision: Int, start: Double, end: Double, f: (Double) -> Double) -> [Double] {
    // step count across the interval; more steps catch more zeros but cost performance
    let stepCount = 100.0
    let by = (end - start) / stepCount
    var zeros = [Double]()
    for x in stride(from: start, to: end, by: by) {
        if let xZero = getZero(precision: precision, start: x, end: x + by, f: f) {
            zeros.append(xZero)
        }
    }
    return zeros
}

// returns the inverse as a function; all returned values are elements of the interval start...end
func inverse(precision: Int, start: Double, end: Double, f: @escaping (Double) -> Double) -> (Double) -> [Double] {
    return { x in
        getZerosInRange(precision: precision, start: start, end: end) { f($0) - x }
    }
}

let f = { (x: Double) in x * x }
// you would pass the min and max Y values of the GraphView
// type: (Double) -> [Double]
let inverseF = inverse(precision: 10, start: -10, end: 10, f: f)
inverseF(4) // outputs [-1.999999999953436, 2.000000000046564]
Interestingly, this code runs in a playground in about 0.5 seconds, which I didn't expect.
You could create an inverse function that does the opposite. So if f(x) = y, the inverse f⁻¹ gives f⁻¹(y) = x.
So if your defined function squares the input, then the inverse returns the square root, and so on.
You might run into trouble with something like that, though: f⁻¹(1) is both 1 and -1 in the case where f(x) squares its input.
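To illustrate that pitfall with a hypothetical f (not from the question):
// If f squares its input, the natural inverse is the square root,
// but the sign information is lost: both 1 and -1 square to 1.
func f(_ x: Double) -> Double { x * x }
func fInverse(_ y: Double) -> Double { y.squareRoot() }

print(fInverse(f(-1)))  // prints 1.0, not -1.0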
Short of actually writing the inverse method, the only way to infer what input produced a given output is to have the program make guesses until it's within a certain accuracy.
So, for example, let's say we have the square function:
func square(input: Double) -> Double {
    return input * input
}
Now, if we don't want to write the inverse of this function (which actually has two inputs for any given output), then we have to write a function to guess.
func inverseFunction(output: Double, function: (Double)->Double) -> Double
This takes a double representing the output, and a function (the one that generated the output), and returns its best-guess at the answer.
So, how do we guess?
Well, the same way we do in pre-calculus and the early parts of any calculus 1 class. Pick a starting number, run it through the function, compare the result to the output we're looking for. Record the delta. Pick a second number, run it through the function, compare the result to the output we're looking for. Record the delta, compare it to the first delta.
Continually do this until you have minimized the delta to an acceptable accuracy level (0.1 delta okay? 0.01? 0.0001?). The smaller the delta, the longer it takes to calculate. But it's going to take a long time no matter what.
As for the guessing algorithm? That's a math question that I'm not capable of answering. I wouldn't even know where to begin with that.
In the end, your best bet is to simply write inverse functions for any function you'd want to inverse.
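That said, here is a minimal sketch of the guess-and-check approach described above, assuming the function is monotonic on the search interval; the helper name guessInput is illustrative:
// Bisect the interval, comparing each guess's output against the target,
// until the delta is within tolerance.
func guessInput(for output: Double,
                in range: ClosedRange<Double>,
                tolerance: Double = 1e-6,
                function: (Double) -> Double) -> Double {
    var low = range.lowerBound
    var high = range.upperBound
    let increasing = function(low) < function(high)
    var guess = (low + high) / 2
    while abs(function(guess) - output) > tolerance && high - low > tolerance {
        if (function(guess) < output) == increasing {
            low = guess   // the answer lies above the current guess
        } else {
            high = guess  // the answer lies below the current guess
        }
        guess = (low + high) / 2
    }
    return guess
}

// e.g. inverting the square function on [0, 10]: which input gives 9?
print(guessInput(for: 9, in: 0...10) { $0 * $0 })  // ≈ 3.0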