Swift type casting and parentheses

Both of these conversions work:

Edit: (As Nate Cook points out, this is not real type casting; in Swift, type casting is done with the as keyword. With the following calls I'm initializing an Int64 with a Float parameter.)
anInt = Int64(aFloat)
anInt = (Int64)(aFloat)
First
var anInt: Int64 = 0
var aFloat: Float = 11.5
anInt = Int64(aFloat)
print(anInt) // this prints 11
Second
var anInt: Int64 = 0
var aFloat: Float = 11.5
anInt = (Int64)(aFloat)
print(anInt) // this prints 11
In the second example the main difference is the parentheses around the type Int64, but I can't find any information about this syntax in the docs.
The statement Int64(aFloat) is a typical initializer call that creates an Int64, passing a Float as the initialization parameter. Is this correct?
What is the meaning of the parentheses in (Int64)(aFloat)? Are they just for readability, or is there another meaning?
Thanks

It looks like you can add an arbitrary number of parentheses (e.g. (((Int64)))). The main practical reason for the parentheses is to let a cast participate in a larger expression, as in (object as SomeClass).method()
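For instance, a minimal sketch of where those parentheses earn their keep (the Animal and Dog types are my own illustration, not from the question):

class Animal {}
class Dog: Animal {
    func bark() { print("Woof") }
}

let pet: Animal = Dog()

// The parentheses group the cast, so bark() is called on the
// downcast result rather than being parsed as part of the cast.
(pet as! Dog).bark() // prints "Woof"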

See the duplicate question, but the short answer is that (Int) declares a tuple containing a single Int, which is, per the language specification, semantically identical to a plain Int.
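A quick sketch of that equivalence:

let a: Int = 5
let b: (Int) = 5     // a "one-element tuple" type, identical to Int
let c: ((Int)) = 5   // extra parentheses change nothing

print(a + b + c)     // 15 — all three are plain Ints to the compiler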

Related

Why is Swift inferring double values when using literals?

I understand that Swift is very strict about types and does not implicitly cast one type to another.
Why does this not generate an error, and why is the output an array of doubles?
let myDoubles = [Double.pi, 42]
But this does?
let fortyTwoInt = 42
let myDoubles = [Double.pi, fortyTwoInt]
Why does it implicitly cast 42 to 42.0 in the first example? And if it is not casting, what else is happening?
Simpler example:
let someFloat = 2.0 + 2
versus
let twoInt = 2
let someFloat = 2.0 + twoInt
Again, the latter one does not work.
"Cast" is not really the right word here. It interprets the character sequence "4" and "2" as an integer literal. Double conforms to ExpressibleByIntegerLiteral, so the Double can be constructed from it.
fortyTwoInt is not an integer literal. It's an Int. And Swift will not automatically convert between Int and Double.
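A small sketch of the distinction the answer is drawing, reusing the question's names:

// The literal 42 has no type of its own; in a Double context it is
// built via Double's ExpressibleByIntegerLiteral conformance.
let d: Double = 42                      // 42.0
let myDoubles = [Double.pi, 42]         // inferred as [Double]

// An Int variable is already typed, so the conversion must be explicit.
let fortyTwoInt = 42
let alsoDoubles = [Double.pi, Double(fortyTwoInt)]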

Swift type inference and basic addition

Pretty new to Swift and learning about data types.
let partNumber = 3.2
let wholeNumber = 2
partNumber + wholeNumber //Binary operator '+' cannot be applied to operands of type 'Double' and 'Int'
3.2 + 2 // outputs 5.2
I understand that partNumber is a Double and wholeNumber is an Int. What I don't understand is why the playground errors out when I attempt to add the two constants together. To add to the confusion, the addition works when not assigned as a constant.
The + operator does not support adding a Double and an Int together in this way.
If you change your code so that wholeNumber is a Double, it will work:
let partNumber = 3.2
let wholeNumber: Double = 2
let result = partNumber + wholeNumber
This is all covered in the Swift book under Numeric Type Conversion.
Some relevant quotes from the subsection titled "Integer and Floating-Point Conversion":
Conversions between integer and floating-point numeric types must be made explicit
This is followed by an example similar to your code. Your code needs an explicit conversion:
let partNumber = 3.2
let wholeNumber = 2
partNumber + Double(wholeNumber)
and:
The rules for combining numeric constants and variables are different from the rules for numeric literals. The literal value 3 can be added directly to the literal value 0.14159, because number literals don’t have an explicit type in and of themselves. Their type is inferred only at the point that they’re evaluated by the compiler.
Which covers the second part of your question.
To add to the confusion, the addition works when not assigned as a constant.
That doesn't "add to the confusion" at all. It's the answer. There is implicit coercion between numeric types for literals (what you call a "constant") but not for variables. It's as simple as that.
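To spell out that contrast, a minimal sketch:

let fromLiterals = 3.2 + 2               // fine: both operands are literals,
                                         // so 2 is inferred as Double

let wholeNumber = 2                      // wholeNumber is now a typed Int...
// let broken = 3.2 + wholeNumber        // ...so this line would not compile
let fixed = 3.2 + Double(wholeNumber)    // explicit conversion restores it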

Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'

When I do the following:
let gapDuration = Float(self.MONTHS) * Float(self.duration) * self.gapMonthly;
I get the error:
Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
But when I do:
let gapDuration = 12 * Float(self.duration) * self.gapMonthly;
Everything is working fine.
I have no Idea what this error is telling me.
self.gapMonthly is of type Float! and self.duration and self.MONTHS are of type Int!
I would consider this a bug (at the very least, the error is misleading), and appears to be present when attempting to use a binary operator on 3 or more expressions that evaluate to a given type, where one or more of those expressions is an implicitly unwrapped optional of that type.
This simply stretches the type-checker too far, as it has to consider all possibilities of treating the IUO as a strong optional (as due to SE-0054 the compiler will treat an IUO as a strong optional if it can be type-checked as one), along with attempting to find the correct overloads for the operators.
At first glance, it appears to be similar to the issue shown in How can I concatenate multiple optional strings in swift 3.0? – however, that bug was fixed in Swift 3.1, while this one is still present.
A minimal example that reproduces the same issue would be:
let a: Float! = 0
// error: Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
let b = a * a * a
and is present for binary operators other than *:
// error: Binary operator '+' cannot be applied to operands of type 'Float' and 'Float!'
let b = a + a + a
It is also reproducible when explicitly annotating b as a Float:
let b: Float = a * a * a // doesn't compile
and when mixing in Float expressions, as long as at least one Float! expression remains:
let a: Float! = 0
let b: Int = 0
let c: Int = 0
let d: Float = a * Float(b) * Float(c) // doesn't compile
A simple fix for this would be to explicitly force unwrap the implicitly unwrapped optional(s) in the expression:
let d = a! * Float(b) * Float(c) // compiles
This relieves the pressure on the type-checker, as now all the expressions evaluate to Float, so overload resolution is much simpler.
Although of course, it goes without saying that this will crash if a is nil. In general, you should try to avoid using implicitly unwrapped optionals and instead prefer strong optionals – and, as @vadian says, always use non-optionals in cases where the value being nil doesn't make sense.
If you need to use an optional and aren't 100% sure that it contains a value, you should safely unwrap it before doing the arithmetic. One way of doing this would be to use Optional's map(_:) method in order to propagate the optionality:
let a: Float! = 0
let b: Int = 0
let c: Int = 0
// the (a as Float?) cast is necessary if 'a' is an IUO,
// but not necessary for a strong optional.
let d = (a as Float?).map { $0 * Float(b) * Float(c) }
If a is non-nil, d will be initialized to the result of the unwrapped value of a multiplied by Float(b) and Float(c). If however a is nil, d will be initialized to nil.
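If map(_:) reads as opaque, the same logic spelled with optional binding (reusing a, b and c from the snippet above) would be:

// d is Float? either way: nil when a is nil, the product otherwise.
var d: Float?
if let unwrapped = a {
    d = unwrapped * Float(b) * Float(c)
}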

Swift float multiplication error

This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types and/or when the expected type is different from the result of the expression.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the 2 literal, although usually read as an integer, is automatically inferred to be a Float by the compiler, so in that case an explicit conversion is not required.
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.
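Putting the fix together, a self-contained sketch (getElement's body and the columnCount value are placeholders of my own, and I assume columnCount is an Int so the range loop compiles):

import Foundation

// Placeholders standing in for the asker's properties and method.
let columnCount = 3
func getElement(row: Int, column: Int) -> Float {
    return Float(row + column)
}

for j in 0...columnCount {
    let element: Float = getElement(row: 1, column: j)
    // Converting j makes every operand a Float, so '*' resolves cleanly.
    let multiplier = powf(-1, Float(j) + 2) * element
    print(multiplier)
}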

iPhone SDK - How to detect variable type (float or double)?

How do I detect whether a variable is float, double, int, etc.?
Thanks.
Objective-C is not like PHP or other interpreted languages where the 'type' of a variable can change according to how you use it. All variables are set to a fixed type when they are declared and this cannot be changed. Since the type is defined at compile time, there is no need to query the type of a variable at run-time.
For example:
float var1; // var1 is a float and can't be any other type
int var2; // var2 is an int and can't be any other type
NSString* var3; // var3 is a pointer to an NSString object and can't be any other type
The type is specified before the variable name, also in functions:
- (void)initWithValue:(float)param1 andName:(NSString*)param2
{
    // param1 is a float
    // param2 is a pointer to an NSString object
}
So as you can see, the type is fixed when the variable is declared (also you will notice that all variables must be declared, i.e. you cannot just suddenly start using a new variable name unless you've declared it first).
In a compiled C-based language (outside of debug mode with symbols), you can't actually "detect" a variable's type unless you already know it, or maybe guess a type and get lucky.
So normally, you know and declare the type before any variable reference.
Without type information, the best you can do might be to dereference a pointer to random unknown bits/bytes in memory, and hopefully not crash on an illegal memory reference.
Added comment:
If you know the type is a legal Objective C object, then you might be able to query the runtime for additional information about the class, etc. But not for ints, doubles, etc.
Use sizeof. For a double it will be 8; for a float, 4.
double x = 3.1415;
float y = 3.1415f;
printf("size of x is %zu\n", sizeof(x)); // sizeof yields a size_t, so use %zu
printf("size of y is %zu\n", sizeof(y));