What are the implications of using an int where a double is required? Does Dart let you declare a double without the .0 and handle it as a double without any kind of runtime cast?
Things that make me worry about this:
I don't see any linter warning saying that this will cast the int to a double.
It compiles fine.
The height field is a double but it accepts an int.
Take a look at the examples:
// int version
SizedBox(height: 10)
// or
final double a = 2;

// double version
SizedBox(height: 10.0)
// or
final double a = 2.0;
If you look at the SizedBox implementation, you'll see that the height field has the type double.
To understand what happens in the code from your example, you have to understand how type inference works. In your first code example (the int version), even though you wrote 10 and not 10.0, the Dart compiler infers the value as a double because the height field is of that type. You did not explicitly specify that the value passed as a parameter is an int, so it is seen as a double. If you pass a value that is explicitly typed as int, you'll get a compile-time error saying that the argument type 'int' can't be assigned to the parameter type 'double'.
So, to conclude: in both of your examples Dart infers the type as double because you don't explicitly say that the value is an integer.
You can read more about Dart type inference here: https://dart.dev/guides/language/type-system#type-inference
There is no automatic runtime conversion between int and double. double a = 2; is simply syntactic sugar for double a = 2.0;. The rewrite happens at compile time, which is why it works only for integer literals.
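Swift handles numeric literals in much the same way, which is worth seeing before the related Swift question below; a minimal sketch:

let a: Double = 2     // OK: the integer literal 2 is typed as the Double 2.0 at compile time
let n = 2             // n is inferred as Int and now has a fixed type
// let b: Double = n  // error: cannot convert value of type 'Int' to specified type 'Double'
let b = Double(n)     // a variable needs an explicit conversion

Only the literal form gets the compile-time treatment; once a value has the type Int, no implicit conversion happens.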
Related
I tried adding an Int and a Float literal in Swift and it compiled without any error:
var sum = 4 + 5.0 // sum is assigned with value 9.0 and type Double
But, when I tried to do the same with Int and Float variables, I got a compile-time error and I had to type-cast any one operand to the other one's type for it to work:
var i: Int = 4
var f:Float = 5.0
var sum = i + f // Binary operator '+' cannot be applied to operands of type 'Int' and 'Float'
Why is this happening? Is it related to type safety in any way?
If you want Double result:
let i: Int = 4
let f: Float = 5.0
let sum = Double(i) + Double(f)
print("This is the sum:", sum)
If you want Int result:
let i: Int = 4
let f: Float = 5.0
let sum = i + Int(f)
print("This is the sum:", sum)
In the case of var sum = 4 + 5.0, the compiler automatically treats the literal 4 as a Double, since that is what is required to perform the operation.
The same happens if you write var x: Float = 4. The literal 4 is automatically treated as a Float.
In the second case, since you have explicitly defined the types of the variables, the compiler does not have the freedom to change them as required.
For a solution, look at @Fabio's answer above.
The documentation on Swift.org says:
Type inference is particularly useful when you declare a constant or variable with an initial value. This is often done by assigning a literal value (or literal) to the constant or variable at the point that you declare it. (A literal value is a value that appears directly in your source code, such as 42 and 3.14159 in the examples below.)
For example, if you assign a literal value of 42 to a new constant without saying what type it is, Swift infers that you want the constant to be an Int, because you have initialized it with a number that looks like an integer:

let meaningOfLife = 42 // meaningOfLife is inferred to be of type Int

Likewise, if you don’t specify a type for a floating-point literal, Swift infers that you want to create a Double:

let pi = 3.14159 // pi is inferred to be of type Double

Swift always chooses Double (rather than Float) when inferring the type of floating-point numbers.

If you combine integer and floating-point literals in an expression, a type of Double will be inferred from the context:

let anotherPi = 3 + 0.14159 // anotherPi is also inferred to be of type Double

The literal value of 3 has no explicit type in and of itself, and so an appropriate output type of Double is inferred from the presence of a floating-point literal as part of the addition.
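A compact playground snippet demonstrating the quoted rule, plus the variable case it does not cover (the Double(three) fix is the same conversion the answers above use):

let anotherPi = 3 + 0.14159           // OK: the literal 3 adapts, anotherPi is a Double
let three = 3                         // three is inferred as Int and now has a fixed type
// let badPi = three + 0.14159       // error: binary operator '+' cannot be applied to operands of type 'Int' and 'Double'
let goodPi = Double(three) + 0.14159  // explicit conversion restores a common type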
So I am following this course called "Code With Chris - 14 Day Beginner Challenge (SwiftUI)" (yes, I am a beginner), and after each lesson there is a challenge. I have almost completed the challenge but couldn't figure out why it wouldn't work, so I checked the Dropbox of the completed challenge, and I had everything pretty much the same. I have found a solution similar to the source, but I still don't understand why my first version (first picture) won't work. I copied everything identically from the source code and it won't work. Could it be the fault of the creators of the source code, instead of mine?
My expected result is for the "Int" to work just like the "Double" did. The number of people is 5, so I don't see why it wouldn't.
My actual result is an error.
My goal is to complete this challenge:
We’re going to be trying out some math operations in a Swift Playground.
Open Xcode and create a new playground
(File Menu->New->Playground).
From the list of Playground templates, just select “Blank”
Challenge 1
Declare a struct called TaxCalculator
Declare a property inside called tax and set it to a decimal value representing the amount of sales tax where you live
Declare a method inside called totalWithTax that accepts a Double as an input parameter and returns a Double value.
Inside that method, write the code to return a Double value representing the input number with tax included
Challenge 2
Declare a struct called BillSplitter
Declare a method inside called splitBy that:
has an input parameter of type Double representing a subtotal
has an input parameter of type Int representing the number of people
returns a Double value
Inside that method, use an instance of TaxCalculator (from challenge 1 above) to calculate the total with tax and then split the bill by the number of people passed into the method.
Return the amount that each person has to pay.
Challenge 3
Create an instance of BillSplitter
Use the instance to print out the amount that each person pays (Assuming 5 people with a bill of $120)
The Code of the course I am using:
https://www.dropbox.com/sh/7aopencivoiegz4/AADbxSj83wt6mPNNgYcARFAsa/Lesson%2009?dl=0&file_subpath=%2FL9+Challenge+Solution.playground%2FContents.swift&preview=L9+Challenge+Solution.zip&subfolder_nav_tracking=1
[image: the code with an error]
[image: the code without an error]
//https://learn.codewithchris.com/courses/take/start/texts/18867185-lesson-9-challenge

//Challenge 1
struct TaxCalculator {
    var tax = 0.15
    func totalWithTax(_ subtotal: Double) -> Double {
        return subtotal * (1 + tax)
    }
}

//Challenge 2
struct BillSplitter {
    // here is the problem: numPeople is an Int
    func splitBy(subtotal: Double, numPeople: Int) -> Double {
        let taxCalc = TaxCalculator()
        let totalWithTax = taxCalc.totalWithTax(subtotal)
        return totalWithTax / numPeople
    }
}

let splitter = BillSplitter()
print(splitter.splitBy(subtotal: 120, numPeople: 5))
totalWithTax is a Double. numPeople is an Int.
You need to convert numPeople to a Double too.
return totalWithTax / Double(numPeople)
Operators like / don't work with mismatched operand types.
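Applied to the question's code, a minimal corrected sketch of splitBy (only the return line changes, per the answer above):

struct BillSplitter {
    func splitBy(subtotal: Double, numPeople: Int) -> Double {
        let taxCalc = TaxCalculator()  // TaxCalculator from Challenge 1 above
        let totalWithTax = taxCalc.totalWithTax(subtotal)
        // Convert the Int to a Double so both operands of / have the same type
        return totalWithTax / Double(numPeople)
    }
}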
Swift is a bit of a pain with scalar types. Most C family languages will quietly "promote" scalar types to other types as long as there is no loss of data.
byte->int->long int->float->double all happen silently.
In C, this code just works:
int a = 2;
double b = 2.5;
double c = a * b;
The value a gets promoted to a double, and the result is that c contains the double value 5.0.
Not so with Swift.
In Swift, you have to explicitly cast a to a double. It won't let you multiply an Int and a Double unless you explicitly cast the Int to a Double, as aheze said in their answer:
return totalWithTax / Double(numPeople)
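For contrast with the C snippet above, a sketch of the same computation in Swift, showing the explicit conversion it requires:

let a: Int = 2
let b: Double = 2.5
// let c = a * b      // error: binary operator '*' cannot be applied to operands of type 'Int' and 'Double'
let c = Double(a) * b // c contains the double value 5.0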
I'm new to Scala and am trying to come up with a library in Scala that checks whether the double value being passed in has a certain precision and scale. What I noticed is that if the value being passed is 1.00001 then I see that value in my called function, but if the value being passed is 0.00001 then I see 1.0E-5. Is there any way to preserve the number in Scala?
def checkPrecisionAndScaleFormat(precision: Int, scale: Int)(valueToCheck: Double): Boolean = {
  val value = BigDecimal(valueToCheck)
  value.precision <= precision && value.scale <= scale
}
What I noticed was that if the value being passed is 1.00001 then I get the value as that in my called function, but if the value being passed is 0.00001 then I get the value as 1.0E-5
From your phrasing, it seems like you see 1.00001 and 1.0E-5 when debugging (either by printing or in the debugger). It's important to understand two things. First, this doesn't correspond to any internal difference; it's just how Double.toString is defined in Java. Second, when you do something like val x = 1.00001, the value isn't exactly 1.00001 but the closest number representable as a Double: 1.000010000000000065512040237081237137317657470703125. The easiest way to see the exact value is actually to look at BigDecimal.exact(valueToCheck).
The only way to preserve the number is not to work with Double to begin with. If it's passed in as a string, create the BigDecimal from the string. If it's the result of some calculation on doubles, consider doing the calculation on BigDecimals instead. The string representation of a Double simply doesn't carry the information you want.
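The pitfall is not specific to Scala. For illustration only, the analogous behavior in Swift, using Foundation's Decimal in place of Scala's BigDecimal (an assumed analogue, not the asker's code):

import Foundation

let fromDouble = 0.00001
print(fromDouble)             // prints 1e-05: the decimal spelling is already gone
if let fromString = Decimal(string: "0.00001") {
    print(fromString)         // prints 0.00001: built from the string, the representation survives
}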
I've been looking at various posts on this topic but still have no luck. Is there a simple way to do the division/conversion when dividing a Double (or Float) by an Int? Here is a simple example in a playground returning the error "Double is not convertible to UInt8".
var score: Double = 3.00
var length: Int = 2 // taken from an array length; it is not a decimal or float
var result: Double = (score / length)
Cast the Int to a Double with var result: Double = (score / Double(length)).
What this does is, before computing the division, create a new Double value from the Int inside the parentheses, hence the constructor-like syntax.
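Putting it together, a minimal playground sketch of that fix:

let score: Double = 3.00
let length: Int = 2
let result: Double = score / Double(length)
print(result) // 1.5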
You cannot combine or use different variable types together.
You need to convert them all to the same type to be able to divide them.
The easiest way I see to make that happen would be to make the Int a Double.
For an integer literal you can do that quite simply by adding ".0" to the end; for a variable like length, use the Double(...) conversion shown above.
Also, FYI: Floats are pretty rarely used, so unless you're using them for something specific, it's also just more fluid to use the more common Double.
I tried the code from The Swift Programming Language in a playground and got the following error: "NSNumber is not a subtype of Float". I just modified it slightly by making x and y of type Float in struct Point. What am I missing?
If I added the Float type to centerX and centerY, I got the error: Could not find an overload for '/' that accepts the supplied arguments.
The error message is completely unrelated to the actual error... The actual error is that you cannot convert a Double to a Float.
In Size, width and height are Double (the default type of a float literal), but in Point, x and y are Float. They are different types and you can't mix them without an explicit conversion.
There are a number of ways to fix it. You can change them all to either Double or Float, e.g.:
class Point
{
    var x: Double = 0.0
    var y: Double = 0.0
}
or you can convert them to the correct type by doing Float(centerX).
PS: Can you post the code as text next time, so I can change it without retyping it?
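For reference, a self-contained sketch with everything kept as Double so the arithmetic type-checks. The Rect/center shape is reconstructed from the book example the question refers to, so treat the exact members as assumptions:

struct Point {
    var x = 0.0, y = 0.0   // float literals, inferred as Double
}
struct Size {
    var width = 0.0, height = 0.0
}
struct Rect {
    var origin = Point()
    var size = Size()
    var center: Point {
        // x/y and width/height are all Double, so / and + have matching operand types
        let centerX = origin.x + (size.width / 2)
        let centerY = origin.y + (size.height / 2)
        return Point(x: centerX, y: centerY)
    }
}

let rect = Rect(origin: Point(), size: Size(width: 10.0, height: 5.0))
print(rect.center) // Point(x: 5.0, y: 2.5)

With Point's x and y changed to Float instead, the size.width / 2 expressions would mix Float and Double and fail to compile, which is the error in the question.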