Cannot `convert` an object of type BitArray{1} to an object of type Int64 (Julia, type-conversion)

I am very new to Julia, so maybe this is a stupid question. I have the following code:
a = [1.0, 2.0];
b = [2.2, 3.1];
Int(a.>b)
It gives me an error:
MethodError: Cannot `convert` an object of type BitArray{1} to an object of type Int64
This may have arisen from a call to the constructor Int64(...),
since type constructors fall back to convert methods.
Stacktrace:
[1] Int64(::BitArray{1}) at ./sysimg.jl:77
[2] include_string(::String, ::String) at ./loading.jl:522
The command 1(a.>b) works well.
Could you explain why my implicit conversion did not work?

a.>b is of type BitArray{1}. With Int(a.>b) you are trying to convert an array, namely a BitArray, to a single integer, which doesn't make sense.
Instead you probably want to convert the elements of the array to integers:
julia> a = [1.0, 2.0];
julia> b = [2.2, 3.1];
julia> Int.(a.>b)
2-element Array{Int64,1}:
0
0
Note the dot in Int.(a.>b) which broadcasts the conversion to every element.
The reason 1(a.>b) works is that it is translated to 1*(a.>b). This is a multiplication of a number and an array, which is an element-wise operation.

Related

Why does this convert Int to Double in Swift playground not work?

let f = 802
var df = Double(f / 4)
print(df)
result is 200.0
I expected 200.5
Your expression creates a Double from the division of the integers 802 by 4, which is 200.
If you want a floating-point division you have to do the conversion before the division. Or, simpler still, skip the conversion and declare f as a Double:
let f = 802.0
let df = f / 4 // 200.5
It's good practice anyway to write numeric literals with the intended type. I would even write
let df = f / 4.0
The benefit is that the compiler complains if the types don't match.
This is a common bug and easy mistake to make. When you want a floating point result, you need to ensure that the operands of your arithmetic expressions are floating point numbers instead of integers. This will produce the expected result of 200.5:
var df = Double(f) / 4.0
(Edit: if your variable f really is going to be a hard-coded constant 802, I actually recommend vadian's solution of declaring f itself as Double rather than Int.)
A more detailed explanation:
Looking at the order of operations of var df = Double(f / 4):
The innermost expression is f / 4. This is evaluated first. f and 4 are both integers, so this is calculated using integer division which rounds down, so 802/4 => 200.
Then the result 200 is used in the Double() conversion, thus the result of 200.0. Finally, the result is assigned to the newly-declared variable df, which Swift infers to have the type Double based on the expression to the right of the equals sign.
Compare this to var df = Double(f) / 4.0: the Double(f) is evaluated first, converting the integer 802 to a double value 802.0. Now the division is performed, and since both operands of the division sign are floating point, floating-point division is performed and you get the result 802.0 / 4.0 => 200.5. This result is a Double value, so the variable df is declared to be a Double and assigned the value 200.5.
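To see the difference concretely, here is a minimal sketch (the variable names are just for illustration):
let f = 802
// Integer division happens first, then the result is converted:
let truncated = Double(f / 4)     // f / 4 == 200 (Int), so truncated == 200.0
// Conversion happens first, then floating-point division:
let exact = Double(f) / 4.0       // 802.0 / 4.0 == 200.5
print(truncated, exact)           // prints "200.0 200.5"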
Some other approaches that don't work:
var df = f / 4: f and 4 are both integers, integer division is performed automatically, and df is now a variable of type Int with value 200
var df: Double = f / 4: trying to explicitly declare df as Double produces a compiler error. The right side of the equals sign is still an integer division, and Swift won't automatically convert from Int to Double; it wants you to decide explicitly how to convert
var df = f / 4.0: in some languages, this type of expression would automatically convert f to a Double and thus perform floating-point division like you want. But again Swift will not convert automatically and wants you to be explicit… this leads to my recommended solution of Double(f) / 4.0 (see the sketch below)
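For reference, a quick sketch of how those three attempts behave (the exact compiler error wording may vary between Swift versions):
let f = 802
let a = f / 4                  // compiles: integer division, a is an Int with value 200
// let b: Double = f / 4      // error: the right-hand side is still an Int, no implicit conversion
// let c = f / 4.0            // error: Swift won't implicitly convert the Int f to Double
let d = Double(f) / 4.0        // compiles: floating-point division, d == 200.5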
In your example you are dividing integers and then casting the result to Double.
Fix:
let f = 802
var df = Double(f) / 4
print(df)

spsolve overloading and rowvec type conversion consistency

With the following declarations:
uvec basis;
rowvec c;
sp_mat B;
The expression c(basis) seems to return an
arma::subview_elem1<double, arma::Mat<unsigned int> > and the following call appears to work:
vec pi_B = spsolve(trans(B), c(basis), "superlu");
How does spsolve resolve this input?
Also vec pi_B = spsolve(trans(B), trans(c(basis)), "superlu"); throws a dimensional mismatch error but the following runs:
rowvec d;
vec pi_B2 = spsolve(trans(B), trans(d), "superlu");
According to the documentation, c(basis) is a non-contiguous submatrix, where basis specifies which elements in c to use.
In this case c is "... interpreted as one long vector, with column-by-column ordering of the elements", and "... the aggregate set of the specified elements is treated as a column vector", which means that c(basis) produces a column vector. That is why spsolve accepts c(basis) directly, and why trans(c(basis)) yields a row vector and causes the dimension mismatch, while trans(d) on the rowvec d yields a column vector and works.

Ignoring an output parameter from vDSP

When using vDSP to perform some speedy calculations, I often don't care about one of the output parameters. Let's say I'm finding the index of an array's maximum value:
var m: Float = 0
var i: vDSP_Length = 0
vDSP_maxvi(&array,
           1,
           &m,
           &i,
           vDSP_Length(array.count))
Ideally, I'd like to get rid of m altogether so that vDSP_maxvi fills i only. Something like:
var i: vDSP_Length = 0
vDSP_maxvi(&array,
           1,
           nil,
           &i,
           vDSP_Length(array.count))
But of course this doesn't work ("nil is not compatible with expected argument type 'UnsafeMutablePointer<Float>'"). Is there some sort of argument I can send to these kinds of methods that says "ignore this parameter"? Thanks for reading.
Except for documented cases where a null argument is accepted, you must pass a valid address. There is no argument value that tells vDSP to ignore the argument.
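A common workaround is simply to pass a throwaway variable for the output you don't care about, for example (a minimal sketch; the array contents here are made up):
import Accelerate

var array: [Float] = [3, 1, 4, 1, 5]
var unusedMax: Float = 0            // required by the API, but never read afterwards
var i: vDSP_Length = 0
vDSP_maxvi(&array, 1, &unusedMax, &i, vDSP_Length(array.count))
print(i)                            // 4, the index of the maximum value 5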

Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'

When I do the following:
let gapDuration = Float(self.MONTHS) * Float(self.duration) * self.gapMonthly;
I get the error:
Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
But when I do:
let gapDuration = 12 * Float(self.duration) * self.gapMonthly;
Everything is working fine.
I have no idea what this error is telling me.
self.gapMonthly is of type Float! and self.duration and self.MONTHS are of type Int!
I would consider this a bug (at the very least, the error is misleading); it appears to be present when attempting to use a binary operator on 3 or more expressions that evaluate to a given type, where one or more of those expressions is an implicitly unwrapped optional of that type.
This simply stretches the type-checker too far, as it has to consider all possibilities of treating the IUO as a strong optional (as due to SE-0054 the compiler will treat an IUO as a strong optional if it can be type-checked as one), along with attempting to find the correct overloads for the operators.
At first glance, it appears to be similar to the issue shown in How can I concatenate multiple optional strings in swift 3.0? – however, that bug was fixed in Swift 3.1, whereas this one is still present.
A minimal example that reproduces the same issue would be:
let a: Float! = 0
// error: Binary operator '*' cannot be applied to operands of type 'Float' and 'Float!'
let b = a * a * a
and is present for binary operators other than *:
// error: Binary operator '+' cannot be applied to operands of type 'Float' and 'Float!'
let b = a + a + a
It is also still reproducible when mixing in Float expressions (as long as at least one Float! expression remains), as well as when explicitly annotating b as a Float:
let b: Float = a * a * a // doesn't compile
let a: Float! = 0
let b: Int = 0
let c: Int = 0
let d: Float = a * Float(b) * Float(c) // doesn't compile
A simple fix for this would be to explicitly force unwrap the implicitly unwrapped optional(s) in the expression:
let d = a! * Float(b) * Float(c) // compiles
This relieves the pressure on the type-checker, as now all the expressions evaluate to Float, so overload resolution is much simpler.
Although of course, it goes without saying that this will crash if a is nil. In general, you should try to avoid using implicitly unwrapped optionals and instead prefer strong optionals – and, as vadian says, always use non-optionals in cases where a nil value doesn't make sense.
If you need to use an optional and aren't 100% sure that it contains a value, you should safely unwrap it before doing the arithmetic. One way of doing this would be to use Optional's map(_:) method in order to propagate the optionality:
let a: Float! = 0
let b: Int = 0
let c: Int = 0
// the (a as Float?) cast is necessary if 'a' is an IUO,
// but not necessary for a strong optional.
let d = (a as Float?).map { $0 * Float(b) * Float(c) }
If a is non-nil, d will be initialized to the result of the unwrapped value of a multiplied by Float(b) and Float(c). If however a is nil, d will be initialized to nil.
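Alternatively, if you'd rather unwrap up front than propagate the optionality, you can use if let (or guard let) with the same a, b and c as above – a sketch:
// Treat the IUO as a strong optional and unwrap it safely.
if let value = a as Float? {
    let result = value * Float(b) * Float(c)
    print(result)
} else {
    print("a was nil")
}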

Swift float multiplication error

This code fails:
let element: Float = self.getElement(row: 1, column: j)
let multiplier = powf(-1, j+2)*element
with this error:
Playground execution failed: :140:51: error: cannot invoke '*' with an argument list of type '(Float, Float)'
let multiplier = powf(-1, j+2)*element
Bear in mind that this occurs in this block:
for j in 0...self.columnCount {
where columnCount is a Float. Also, the first line does execute and so the getElement method indeed returns a Float.
I am completely puzzled by this as I see no reason why it shouldn't work.
There is no implicit numeric conversion in Swift, so you have to convert explicitly when dealing with different types and/or when the expected type differs from the result of the expression.
In your case, j is an Int whereas powf expects a Float, so it must be converted as follows:
let multiplier = powf(-1, Float(j)+2)*element
Note that the 2 literal, although usually considered an integer, is automatically inferred to be a Float by the compiler, so in that case an explicit conversion is not required.
I ended up solving this by using Float(j) instead of j when calling powf(). Evidently, j cannot be implicitly converted to a Float.
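For completeness, a small standalone sketch of the fix in context (the values and the Int loop bound here are placeholders, not the original code):
import Foundation

let element: Float = 2.5
let columnCount = 3                    // assuming an Int bound for the loop
for j in 0...columnCount {
    // j is an Int, so convert it explicitly; powf expects Float arguments.
    let multiplier = powf(-1, Float(j) + 2) * element
    print(multiplier)
}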