This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types, and/or when the expected type differs from the result of the expression.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the 2 literal, although usually considered an integer, is automatically inferred to be of type Float by the compiler, so in that case an explicit conversion is not required.
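For completeness, a minimal runnable sketch of the fix in context (getElement and columnCount here are hypothetical stand-ins for the question's matrix API; note that for j to be an Int, the range bounds must be Ints too):

import Foundation

// Hypothetical stand-ins for the question's matrix API.
let columnCount = 2
func getElement(row: Int, column: Int) -> Float {
    return Float(row + column)
}

for j in 0...columnCount {
    let element: Float = getElement(row: 1, column: j)
    // Convert j explicitly; the literal 2 is inferred as Float.
    let multiplier = powf(-1, Float(j) + 2) * element
    print(multiplier)
}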
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.
object LPrimeFactor {
  def main(arg: Array[String]): Unit = {
    start(13195)
    start(600851475143)
  }
  def start(until: Long) {
    var all_prime_fac: Array[Int] = Array()
    var i = 2
with this error:
(compile:compileIncremental) Compilation failed
integer number too large
Even though I changed the argument type to Long, it's still not fixed.
Pass the argument as a Long (notice the L at the end of the number):
start(600851475143L)
// ^
To create a Long literal you must add L to the end of it.
start(600851475143L)
Remember that for literal values without an explicit type suffix, the compiler infers a type for you: a numeric literal such as 600851475143 is treated as type Int, which is 32 bits wide in two's complement representation:
MIN_VALUE = -2147483648 (-2^31)
MAX_VALUE = 2147483647 (2^31 - 1)
So add the right suffix to the literal value, as in 600851475143L.
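As a quick sketch of the difference (the LongLiteralDemo object is illustrative only):

object LongLiteralDemo {
  def main(args: Array[String]): Unit = {
    println(Int.MaxValue)        // 2147483647, the largest 32-bit Int
    // val n = 600851475143      // would not compile: integer number too large
    val n: Long = 600851475143L  // the L suffix makes the literal a Long
    println(n)
  }
}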
I am very new to Julia, so maybe this is a stupid question. I have the following code:
a = [1.0, 2.0];
b = [2.2, 3.1];
Int(a.>b)
It gives me an error:
MethodError: Cannot `convert` an object of type BitArray{1} to an object of type Int64
This may have arisen from a call to the constructor Int64(...),
since type constructors fall back to convert methods.
Stacktrace:
[1] Int64(::BitArray{1}) at ./sysimg.jl:77
[2] include_string(::String, ::String) at ./loading.jl:522
The command 1(a.>b) works well.
Could you explain to me why my implicit conversion did not work?
a.>b is of type BitArray{1}. With Int(a.>b) you are trying to convert an array, namely a BitArray, to a single integer, which doesn't make sense.
Instead you probably want to convert the elements of the array to integers:
julia> a = [1.0, 2.0];
julia> b = [2.2, 3.1];
julia> Int.(a.>b)
2-element Array{Int64,1}:
0
0
Note the dot in Int.(a.>b) which broadcasts the conversion to every element.
The reason 1(a.>b) works is that it is translated to 1*(a.>b). This is a multiplication of a number and an array, which is an element-wise operation.
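A short sketch of that juxtaposition at work, using the same arrays (results shown in comments):

a = [1.0, 2.0];
b = [2.2, 3.1];

# Juxtaposing a numeric literal with a parenthesized expression is multiplication:
1(a .> b) == 1 * (a .> b)  # true

# Multiplying the BitArray element-wise by the Int 1 yields an Array{Int64,1},
# which is why 1(a.>b) behaves like a conversion:
1 * (a .> b)               # [0, 0]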
When I do the following:
let gapDuration = Float(self.MONTHS) * Float(self.duration) * self.gapMonthly;
I get the error:
Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
But when I do:
let gapDuration = 12 * Float(self.duration) * self.gapMonthly;
Everything is working fine.
I have no idea what this error is telling me.
self.gapMonthly is of type Float! and self.duration and self.MONTHS are of type Int!
I would consider this a bug (at the very least, the error is misleading), and appears to be present when attempting to use a binary operator on 3 or more expressions that evaluate to a given type, where one or more of those expressions is an implicitly unwrapped optional of that type.
This simply stretches the type-checker too far, as it has to consider all possibilities of treating the IUO as a strong optional (as due to SE-0054 the compiler will treat an IUO as a strong optional if it can be type-checked as one), along with attempting to find the correct overloads for the operators.
At first glance, it appears to be similar to the issue shown in How can I concatenate multiple optional strings in swift 3.0? – however, that bug was fixed in Swift 3.1, whereas this one is still present.
A minimal example that reproduces the same issue would be:
let a: Float! = 0
// error: Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
let b = a * a * a
and is present for other binary operators other than *:
// error: Binary operator '+' cannot be applied to operands of type 'Float' and 'Float!'
let b = a + a + a
It is also still reproducible when mixing in Float expressions (as long as at least one Float! expression remains), as well as when explicitly annotating b as a Float:
let b: Float = a * a * a // doesn't compile
let a: Float! = 0
let b: Int = 0
let c: Int = 0
let d: Float = a * Float(b) * Float(c) // doesn't compile
A simple fix for this would be to explicitly force unwrap the implicitly unwrapped optional(s) in the expression:
let d = a! * Float(b) * Float(c) // compiles
This relieves the pressure on the type-checker, as now all the expressions evaluate to Float, so overload resolution is much simpler.
Although of course, it goes without saying that this will crash if a is nil. In general, you should try to avoid using implicitly unwrapped optionals, and instead prefer to use strong optionals – and, as @vadian says, always use non-optionals in cases where the value being nil doesn't make sense.
If you need to use an optional and aren't 100% sure that it contains a value, you should safely unwrap it before doing the arithmetic. One way of doing this would be to use Optional's map(_:) method in order to propagate the optionality:
let a: Float! = 0
let b: Int = 0
let c: Int = 0
// the (a as Float?) cast is necessary if 'a' is an IUO,
// but not necessary for a strong optional.
let d = (a as Float?).map { $0 * Float(b) * Float(c) }
If a is non-nil, d will be initialized to the result of the unwrapped value of a multiplied by Float(b) and Float(c). If however a is nil, d will be initialized to nil.
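If map(_:) reads as too clever, plain optional binding achieves the same thing – a sketch using the same hypothetical variables:

let a: Float! = 0
let b: Int = 0
let c: Int = 0

var d: Float? = nil
if let unwrappedA = a {
    // All operands are now non-optional Floats,
    // so overload resolution is straightforward.
    d = unwrappedA * Float(b) * Float(c)
}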
Both of these type casts work.
Edit
(As Nate Cook wrote, this is not real type casting; in Swift, type casting is done with the as keyword. With the following call I'm initializing an Int64 with a Float parameter.)
anInt = Int64(aFloat)
anInt = (Int64)(aFloat)
First
var anInt : Int64 = 0
var aFloat : Float = 11.5
anInt = Int64(aFloat)
println(anInt) // this prints 11
Second
var anInt : Int64 = 0
var aFloat : Float = 11.5
anInt = (Int64)(aFloat)
println(anInt) // this prints 11
In the second example the main difference is that there are parenthesis around the type Int64, but I don't find any information about this syntax in the docs.
The statement Int64(aFloat) is a typical initializer call that creates an Int64, passing a Float as the initialization parameter. Is this correct?
What is the meaning of the parenthesis in (Int64)(aFloat)? Is for better readability or there is another meaning?
Thanks
It looks like you can add an arbitrary number of parentheses (e.g. (((Int64)))). The main reason for the parentheses is to make a cast like (object as SomeClass).method().
See the duplicate question, but the short answer is that (Int) declares a tuple containing a single Int, which is semantically identical, per the language specification, to a single Int.
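For contrast, a small sketch of the grouping that parentheses are actually useful for – a cast followed by a member access (the Animal/Dog types are made up for illustration):

class Animal {}
class Dog: Animal {
    func bark() { print("Woof") }
}

let pet: Animal = Dog()
// The parentheses group the cast so that bark() is called
// on the result of the cast (a Dog), not on the Animal.
(pet as! Dog).bark() // prints "Woof"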
Here I tried to pass values to GLKVector2Add, but I get the error "Expected expression" on this line:
GLKVector2 self.position = GLKVector2Add({-200.695, 271},{-803.695, 0}); //Error - Expected expression
Try creating "GLKVector2" variables, setting them and then passing those as arguments in your call to "GLKVector2Add". It may be that the compiler simply doesn't know what to do with "{-200.695,271}" (a mix of float and integer numbers).
You must add them using GLKVector2Make.
So, your code will be:
GLKVector2 position = GLKVector2Add(
GLKVector2Make(-200.695, 271),
GLKVector2Make(-803.695, 0));