Number Operations and Return Types - swift

I am confused by what is returned when performing number operations in Swift between various types. Consider the following:
var castedFoo = Float(7.0/5.0) // returns 1.39999997...
var specifiedTypeFoo:Float = 7/5.0 //returns 1.39999997...
var foo = (7/5.0) //returns 1.4
What separates the first two from the last one? They are all returning floats, so why is the value from the last one rounded? I understand that the first is casted and the second explicitly specified to be a Float, but the last one also returns a Float value. So what makes the difference here?

According to Swift documentation,
Unless otherwise specified, the default type of a floating-point literal is the Swift standard library type Double, which represents a 64-bit floating-point number.
In other words, the literal 5.0 is of type Double.
Your first two examples set the type of the result to Float; your last example keeps the result a Double, because both literals in 7/5.0 are inferred as Double, so the division is performed as Double division. Because of that difference, the last result has higher precision.
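One way to see this in a playground is to check the inferred types directly; a small sketch using type(of:):
let castedFoo = Float(7.0/5.0)       // division done in Double, then converted to Float
let specifiedTypeFoo: Float = 7/5.0  // both literals inferred as Float
let foo = 7/5.0                      // both literals inferred as Double

print(type(of: castedFoo))           // Float
print(type(of: specifiedTypeFoo))    // Float
print(type(of: foo))                 // Double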

Related

Swift: Cannot convert value of type 'Range<Int>' to specified type 'Int'

I was trying to implement a small loop that prints the square of each number in a range.
It should be the equivalent of this Python script:
for i in range(n):
    print(i*i)
In Swift I tried
First attempt:
let numbers = [1..<10]
for i in numbers {
    print(i*i)
}
and
Second attempt:
let numbers = [1..<10]
for i in numbers {
    var j: Int = i
    print(j*j)
}
but then the compiler says
Cannot convert value of type 'Range<Int>' to specified type 'Int'
I understand from my Python experience that this is due to the different types in Swift. So my questions are:
How can I fix this (i.e. implement the same thing I did in Python)?
What are the problems with my first and second attempts?
Why are there so many types of <Int> in Swift?
Thanks in advance!
Your code doesn't compile because you have used [] around the range, which creates an array. [1..<10] is an array of ranges. The for loop is then iterating over that array, which has only one element - the range 1..<10.
This is why i is of type Range<Int>. It is the range, not the numbers in the range.
Just remove the [] and both of your attempts will work. You can iterate over ranges directly (in fact, over anything that conforms to the Sequence protocol), not just arrays. You can even write the range inline with the loop:
for i in 0..<10 {
    print(i * i)
}
Why are there so many types of <Int> in Swift?
You are looking at this the wrong way. The words Range and ClosedRange in the types Range<Int> and ClosedRange<Int> are not words that modify Int, as if they were different "flavours" of Int. It's the opposite: Range<Bound> and ClosedRange<Bound> are generic types, and Range<Int> can be considered the specific kind of Range that has Int as its bounds. You can also have Range<Float> or Range<UInt8>, for example.
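For example, the same generic types work with different Bound types (a small illustrative sketch):
let intRange: Range<Int> = 0..<10          // Range with Int bounds
let closedInts: ClosedRange<Int> = 0...9   // ClosedRange with Int bounds
let floatRange: Range<Float> = 0.0..<1.0   // the same generic Range, just with Float bounds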

Cannot convert value of type 'Int' to expected argument type 'Double'

So I am following this course called "Code With Chris - 14 Day Beginner Challenge (SwiftUI)" (yes, I am a beginner), and after each lesson there is a challenge. I have almost completed the challenge, but I couldn't figure out why it wouldn't work, so I checked the Dropbox of the completed challenge and I had everything pretty much the same. I have found a solution similar to the source, but I still don't understand why my first version (first picture) won't work. I copied everything identically from the source code and it won't work. Is there a possibility that it is the fault of the creators of the source code, instead of mine?
My expected result is for the "Int" to work just like the "Double" did. The number of people is 5, so I don't see why it wouldn't.
My actual result is an error.
My goal is to complete this challenge:
We’re going to be trying out some math operations in a Swift Playground.
Open Xcode and create a new playground
(File Menu->New->Playground).
From the list of Playground templates, just select “Blank”
Challenge 1
Declare a struct called TaxCalculator
Declare a property inside called tax and set it to a decimal value representing the amount of sales tax where you live
Declare a method inside called totalWithTax that accepts a Double as an input parameter and returns a Double value.
Inside that method, write the code to return a Double value representing the input number with tax included
Challenge 2
Declare a struct called BillSplitter
Declare a method inside called splitBy that:
has an input parameter of type Double representing a subtotal
has an input parameter of type Int representing the number of people
returns a Double value
Inside that method, use an instance of TaxCalculator (from challenge 1 above) to calculate the total with tax and then split the bill by the number of people passed into the method.
Return the amount that each person has to pay.
Challenge 3
Create an instance of BillSplitter
Use the instance to print out the amount that each person pays (Assuming 5 people with a bill of $120)
The Code of the course I am using:
https://www.dropbox.com/sh/7aopencivoiegz4/AADbxSj83wt6mPNNgYcARFAsa/Lesson%2009?dl=0&file_subpath=%2FL9+Challenge+Solution.playground%2FContents.swift&preview=L9+Challenge+Solution.zip&subfolder_nav_tracking=1
[image of the code with an error]
[image of the code without an error]
//https://learn.codewithchris.com/courses/take/start/texts/18867185-lesson-9-challenge
//Challenge1
struct TaxCalculator {
    var tax = 0.15
    func totalWithTax(_ subtotal: Double) -> Double {
        return subtotal * (1 + tax)
    }
}

//Challenge2
struct BillSplitter {
    func splitBy(subtotal: Double, numPeople: Int /* here is the problem */) -> Double {
        let taxCalc = TaxCalculator()
        let totalWithTax = taxCalc.totalWithTax(subtotal)
        return totalWithTax/numPeople
    }
}

let Split = BillSplitter()
print(Split.splitBy(subtotal: 120, numPeople: 5))
totalWithTax is a Double. numPeople is an Int.
You need to convert numPeople to a Double too.
return totalWithTax / Double(numPeople)
Operators like / don't work with mismatching types.
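Applied to the code above, the whole method might look like this (a sketch reusing the struct and names from the question):
struct BillSplitter {
    func splitBy(subtotal: Double, numPeople: Int) -> Double {
        let taxCalc = TaxCalculator()
        let totalWithTax = taxCalc.totalWithTax(subtotal)
        // convert the Int so both operands of / are Doubles
        return totalWithTax / Double(numPeople)
    }
}

let split = BillSplitter()
print(split.splitBy(subtotal: 120, numPeople: 5)) // 27.6, given the 0.15 tax above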
Swift is a bit of a pain with scalar types. Most C family languages will quietly "promote" scalar types to other types as long as there is no loss of data.
byte->int->long int->float->double all happen silently.
In C, this code just works:
int a = 2;
double b = 2.5;
double c = a * b;
The value a gets promoted to a double, and the result is that c contains the double value 5.0.
Not so with Swift.
In Swift, you have to explicitly cast a to a double. It won't let you multiply an Int and a Double unless you explicitly cast the Int to a Double, as aheze said in their answer:
return totalWithTax / Double(numPeople)
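The Swift equivalent of the C snippet above needs that conversion spelled out; a minimal sketch:
let a = 2               // Int
let b = 2.5             // Double
// let c = a * b        // does not compile: '*' cannot mix an Int and a Double
let c = Double(a) * b   // 5.0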

How can I declare and initialize a constant bigger than UInt64 in Swift?

I'd like to know how I can declare and initialize a constant bigger than UInt64 in Swift. Swift's type inference doesn't seem to be able to handle the number below. How should I solve this issue?
let number = 11111111222222233333333344444445555555987654321 // Error: overflow
print(number, type(of: number))
Decimal is the numeric type capable of holding the largest value in Swift. However, you can't write a Decimal literal, since integer literals are inferred as Int and floating-point literals are inferred as Double, so you need to initialise the Decimal from a String literal.
let number = Decimal(string: "321321321155564654646546546546554653334334")!
From the documentation of NSDecimalNumber (whose Swift version is Decimal and hence their numeric range is equivalent):
An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.
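As an illustration of that mantissa x 10^exponent form, Decimal also has an initializer that takes those parts directly (a small sketch, assuming Foundation is imported):
import Foundation

// significand 12345 with exponent -2 represents 12345 x 10^-2
let d = Decimal(sign: .plus, exponent: -2, significand: Decimal(12345))
print(d) // 123.45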
If you need to be able to represent arbitrary-length numbers in Swift, you need to use a 3rd party library (or create one yourself), there's no built-in type that could handle this in Swift.

Issue with Double datatype in Scala

I'm new to Scala and am trying to write a small library in Scala that checks whether the Double value being passed in has a certain precision and scale. What I noticed was that if the value being passed is 1.00001 then I get that value in my called function, but if the value being passed is 0.00001 then I get it as 1.0E-5. Is there any way to preserve the number in Scala?
def checkPrecisionAndScaleFormat(precision: Int, scale: Int)(valueToCheck: Double): Boolean = {
  val value = BigDecimal(valueToCheck)
  value.precision <= precision && value.scale <= scale
}
What I noticed was that if the value being passed is 1.00001 then I get the value as that in my called function, but if the value being passed is 0.00001 then I get the value as 1.0E-5
From your phrasing, it seems like you see 1.00001 and 1.0E-5 when debugging (either by printing or in the debugger). It's important to understand that
this doesn't correspond to any internal difference, it's just how Double.toString is defined in Java.
when you do something like val x = 1.00001, the value isn't exactly 1.00001 but the closest number representable as a Double: 1.000010000000000065512040237081237137317657470703125. The easiest way to see the exact value is actually looking at BigDecimal.exact(valueToCheck).
The only way to preserve the number is not to work with Double to begin with. If it's passed as a string, create the BigDecimal from the string. If it's the result of some calculations as a double, consider doing them on BigDecimals instead. But string representation of a Double simply doesn't carry the information you want.
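For what it's worth, Swift's Double behaves the same way; the printed text is only a rendering of the stored binary value (a small sketch in Swift, the language of the rest of this thread):
import Foundation

let x = 1.00001
let y = 0.00001
print(x)                            // 1.00001
print(y)                            // 1e-05 (same kind of value, different textual rendering)
print(Decimal(string: "0.00001")!)  // 0.00001 (parsed from a String, so the decimal form is kept)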

Implicit type of constant in swift tutorial

While working through an example from the tutorial, I ran into an issue in the constants and variables topic. I would appreciate it if someone could explain this example to me.
When you don't specify a type, a floating point number literal will be inferred to be of type Double.
Double, as its name suggests, has double the precision of Float. So when you do:
let a = 64.1
The actual value in memory is something like 64.09999999999999432. Since Double shows only 16 significant digits, it shows 64.09999999999999, rounding off the trailing digits.
Why does let b: Float = 64.1 show the correct number?
When you specify the type as Float, the precision decreases. Float shows only 8 significant digits. That's 64.099999, but there's a "9" straight after that, so it rounds up to 64.1.
This has nothing to do with explicitly stating the variable type. Try specifying it to be a Double:
let b: Double = 64.1
It will still show 64.09999999999999.
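A quick way to see what is actually stored is to print both constants with more digits than the default rendering shows; a small sketch, assuming Foundation is available (as it is in a playground):
import Foundation

let a = 64.1        // inferred as Double
let b: Float = 64.1

print(String(format: "%.17f", a))         // 64.09999999999999432
print(String(format: "%.17f", Double(b))) // 64.09999847412109375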