OpenCL select() function with doubles

I'm porting some complex engineering code to OpenCL and have run into a problem with the select() ternary function with doubles. I'm only using scalars for now, so I could use the plain C ternary operator ?:, but I plan to move to vector types soon.
My problem is that select() with doubles requires a long as the comparison argument, but the scalar relational functions (e.g., isgreater) only return int for doubles. The prototypes for these functions are:
int isgreater (double a, double b);
longn isgreater (doublen a, doublen b);
double select (double a, double b, long cmp);
doublen select (doublen a, doublen b, longn cmp);
I can get the code to compile/run in scalar mode only if I cast the result of isgreater() to long, since select() requires the element types to be the same size.
double hi = ...;
double lo = ...;
double res = select (lo, hi, (long)isgreater(T, T_cutoff));
Otherwise, I get a compiler error since select is ambiguous. There seems to be a mismatch in the specification regarding the relational mask types for scalar and vector doubles.
Q1: Is this an oversight in the specification or a bug in the implementations? Both the Intel and AMD OpenCL compilers fail for builds on the CPU, so I'm guessing it's the former.
Q2: OpenCL scalar relational functions return 0/1, while vector relational functions return 0/-1 (that is, all bits set). The (int)->(long) conversion appears to be consistent with this requirement, but (int)->(ulong) would not be, right? Is the (int)->(long) conversion costly?
Q3: When (if) I switch to vector doubles, will the compiler toss out the now-unnecessary explicit conversion? I want to retain both scalar and vector types so I can target CUDA GPUs and SIMD devices (MIC, CPUs) without having to maintain two massive code sets.
Thanks for any advice here.

Q1:
I'd say that not implicitly converting the result of isgreater into long is an oversight in the specification.
In the single-element case, select should work exactly like the ternary operator; that's also the reason isgreater returns 1 in the scalar case. Basically, isgreater should behave exactly like > does with scalar operands.
In the vectorized case, select looks at the MSB of each element, which is why isgreater returns -1 (all bits set to 1, so the MSB is naturally 1 too).
Q2: The int-to-long conversion shouldn't be costly at all; at most it requires one additional instruction.
Q3:
It does not.
This issue annoyingly prevents one from writing code that vectorizes from 1 to n elements; it requires special handling for the scalar case.
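For illustration, here is a minimal sketch of both cases (the kernel and the values are made up for this example): the scalar path needs the explicit cast, while the vector path compiles as-is.
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void select_demo(__global double *out)
{
    double lo = 0.0, hi = 1.0, T = 2.0, T_cutoff = 1.5;
    // Scalar: isgreater returns int, so it must be cast to long for select.
    out[0] = select(lo, hi, (long)isgreater(T, T_cutoff));
    double4 lo4 = (double4)(0.0), hi4 = (double4)(1.0);
    double4 T4  = (double4)(2.0), cut4 = (double4)(1.5);
    // Vector: isgreater already returns long4, so no cast is needed.
    vstore4(select(lo4, hi4, isgreater(T4, cut4)), 0, out + 1);
}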


Error "Colon operands must be in the range of the data type" makes no sense

In the code below, the image index range only works with int16, and I can't find a proper explanation for this case. In the same code, you can see that if I change the data type to int8 for the same value, the error appears.
K>> t2 = int8(t)
t2 =
int8
45
K>> I2 = flt_rot(t:end,:);
K>> I2 = flt_rot(t2:end,:);
Error using :
Colon operands must be in the range of the data type.
Why did this happen?
To understand this problem, first a little background:
MATLAB has a rather unique behavior with respect to numeric values of different types. Originally, MATLAB only used double-precision floating-point values (double). At some point it became possible to store arrays of other types, but it was not possible to do much with them. It was not until MATLAB 7.0 or so that arithmetic with non-doubles became possible (I'm a bit hazy on exactly when that was introduced). And these operations are still a bit "awkward", if you will.
For one, MATLAB does not allow operations with mixed types:
>> int8(4)+int16(6)
Error using +
Integers can only be combined with integers of the same class, or scalar doubles.
Note the error message: "scalar doubles". The one exception to the no-mixed-types rule is that any operation is possible if one of the operands is a scalar double.
Another thing to note is that any operation with a non-double type and a double type results in values of the non-double type:
>> int8(4)+6
ans =
int8
10
The colon operator (:) is no exception:
>> int8(4):6
ans =
1×3 int8 row vector
4 5 6
Finally, the last thing to know to understand this problem is that end is a function that returns a double scalar value (yes, it really is a function, albeit a very special one, see help end).
If you have an array flt_rot that is 200×300, end in the first index returns 200. That is, flt_rot(t2:end,:) is the same as flt_rot(t2:200,:). Since t2 is an int8:
>> t2=int8(45);
>> t2:200
Error using :
Colon operands must be in the range of the data type.
The solution to your problem is to not use numeric types other than double for anything except large data sets where the amount of memory used matters. For indexing, an integer type is not going to give you any speedup over doubles, but it will give you lots of other problems. There is a reason that the default type in MATLAB is double.
This will work:
I2 = flt_rot(double(t2):end,:);

Scala constraint based types and literals

I was thinking whether it would be possible in Scala to define a type like NegativeNumber. This type would represent a negative number and it would be checked by the compiler similarly to Ints, Strings etc.
val x: NegativeNumber = -34
val y: NegativeNumber = 34 // should not compile
Likewise:
val s: ContainsHello = "hello world"
val s: ContainsHello = "foo bar" // this should not compile either
I could use these types just like other types, eg:
def myFunc(x: ContainsHello): Unit = println(s"$x contains hello")
These constrained types could be backed by ordinary types (Int, String).
Is it possible to implement these types (maybe with macros)?
How about custom literals?
val neg = -34n //neg is of type NegativeNumber because of the suffix
val pos = 34n // compile error
Unfortunately, no, this isn't something you could easily check at compile time. Well - at least not if you aren't restricting the operations on your type. If your goal is simply to check that a number literal is negative, you could easily write a macro that checks this property. However, I do not see any benefit in proving that a negative literal is indeed negative.
The problem isn't a limitation of Scala - which has a very strong type system - but the fact that (in a reasonably complex program) you can't statically know every possible state. You can, however, try to overapproximate the set of all possible states.
Let us consider the example of introducing a type NegativeNumber that only ever represents a negative number. For simplicity, we define only one operation: plus.
Say you would only allow addition of NegativeNumbers; then the type system could be used to guarantee that each NegativeNumber is indeed a negative number. But this seems really restrictive, so a useful example would certainly allow us to add at least a NegativeNumber and a general Int.
What if you had an expression val z: NegativeNumber = plus(x, y) where you don't know the values of x and y statically (maybe they are returned by a function)? How do you know (statically) that z is indeed a negative number?
One approach to the problem would be Abstract Interpretation, which must be run on a representation of your program (source code, abstract syntax tree, ...).
For example, you could define a Lattice on the numbers with the following elements:
Top: all numbers
+: all positive numbers
0: the number 0
-: all negative numbers
Bottom: not a number - only introduced so that each pair of elements has a greatest lower bound
with the ordering Top > (+, 0, -) > Bottom.
Then you'd need to define semantics for your operations. Taking the commutative method plus from our example:
plus(Bottom, something) is always Bottom, as you cannot calculate something using invalid numbers
plus(Top, x), x != Bottom is always Top, because adding an arbitrary number to any number is always an arbitrary number
plus(+, +) is +, because adding two positive numbers will always yield a positive number
plus(-, -) is -, because adding two negative numbers will always yield a negative number
plus(0, x), x != Bottom is x, because 0 is the identity of the addition.
The problem is that plus(-, +) will be Top, because you don't know whether the result is a positive or negative number.
So to be statically safe, you'd have to take the conservative approach and disallow such an operation.
There are more sophisticated numerical domains but ultimately, they all suffer from the same problem: They represent an overapproximation to the actual program state.
I'd say the problem is similar to integer overflow/underflow: Generally, you don't know statically whether an operation exhibits an overflow - you only know this at runtime.
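To make this concrete, here is a minimal Scala sketch of the sign lattice and the plus semantics described above (the identifiers Sign, Pos, Neg, etc. are made up for this example):
sealed trait Sign
case object Top    extends Sign // all numbers
case object Pos    extends Sign // all positive numbers
case object Zero   extends Sign // the number 0
case object Neg    extends Sign // all negative numbers
case object Bottom extends Sign // not a number

def plus(a: Sign, b: Sign): Sign = (a, b) match {
  case (Bottom, _) | (_, Bottom) => Bottom // invalid stays invalid
  case (Top, _) | (_, Top)       => Top    // arbitrary + anything = arbitrary
  case (Zero, x)                 => x      // 0 is the identity of addition
  case (x, Zero)                 => x
  case (Pos, Pos)                => Pos    // positive + positive = positive
  case (Neg, Neg)                => Neg    // negative + negative = negative
  case (Pos, Neg) | (Neg, Pos)   => Top    // sign unknown: overapproximate
}
With these semantics plus(Neg, Neg) stays Neg, but plus(Pos, Neg) can only be answered with Top, which is exactly the case the conservative approach has to reject.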
It could be possible if SIP-23 were implemented, using implicit parameters as a form of refinement types. However, it would be of questionable value, as the Scala compiler and type system are not really well equipped for proving interesting things about, for example, integers. For that it would be much nicer to use a language with dependent types (Idris etc.) or refinement types checked by an SMT solver (LiquidHaskell etc.).

Precise division of doubles representing integers exactly (when they are divisible)

Given that 8-byte doubles can represent all 4-byte ints exactly, I'm wondering whether dividing a double A storing an int by a double B storing an int (such that the integer B divides A) will always give exactly the double corresponding to their integer quotient. So, if B and C are integers, and B*C fits within a 32-bit int, is it guaranteed that
int B,C = whatever s.t. B*C does not overflow 32-bit int
double(B*C)/double(C) == double((B*C)/C) ?
Does the IEEE754 standard guarantee this?
In my testing, it seems to work for all examples I've tried. In Python:
>>> (321312321.0*3434343.0)/321312321.0 == 3434343.0
True
The reason for asking is that Matlab makes it hard to work with ints, so I often just use the default doubles for integer calculations. When I know that the integers are exactly divisible, and if the answer to the present question is yes, I can avoid casts to ints, idivide(..), etc., which are less readable.
Luis Mendo's comment does answer this question, but to specifically address the use in Matlab there are some handy utilities described here. You can use eps(numberOfInterest) to find the distance to the next largest double-precision floating point number. For example:
eps(1) = 2^(-52)
eps(2^52) = 1
This practically guarantees that mathematical operations with integers held in a double will be exact provided their magnitude doesn't exceed 2^53 (beyond which the spacing between consecutive doubles exceeds 1), which is quite a bit larger than what a 32-bit int type can hold.
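For reference, IEEE 754 requires division to be correctly rounded, so whenever the exact quotient is representable (as any integer of magnitude up to 2^53 is), the computed result is exact. A quick MATLAB check, using arbitrary values:
>> A = 321312321*3434343;   % exact, since the product is well below 2^53
>> A/321312321 == 3434343
ans =
     1
>> eps(2^52)                % spacing between doubles reaches 1 here
ans =
     1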

Why asterisk is overloaded for types?

I still don't understand the motivation.
Why did they make two different operators (* and *.) for multiplication of integers and floats respectively, as if they were afraid of overloading, but at the same time use * to denote the Cartesian product of types?
type a = int * int ;;
Why did they suddenly become so brave? Why not write
type a = int *.. int ;;
or something?
Is there some relation that makes the Cartesian product closer to integer product and further from float product?
It's not overloading: on the right-hand side of type t = you are defining another kind of concept entirely; you are defining a type, not a value.
In ML-like languages you can see two distinct languages:
The language of types, which allows you to define types (a specification of the structure of your values).
The language of values, which allows you to define values (actual values corresponding to a type; functions are also values). That's what gets evaluated.
Since the domains of the two languages are entirely separate, there is no theoretical problem or ambiguity in reusing a similar syntactic construct in each language, and hence this has absolutely nothing to do with overloading.
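A small OCaml example makes the separation visible (the identifiers here are made up for this example):
(* Type language: '*' builds a product (tuple) type. *)
type point = int * int

(* Value language: '*' multiplies ints, '*.' multiplies floats. *)
let area ((w, h) : point) : int = w * h
let half (x : float) : float = x *. 0.5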
In mathematics, the Cartesian product is written with the multiplication sign, so it is logical to write it the same way in OCaml.

Fixed point arithmetic

I'm currently using Microchip's Fixed Point Library, but I think this applies to most fixed-point libraries. It supports Q15 and Q15.16 types, 16-bit and 32-bit data respectively.
One thing I noticed is that it does not include add, subtract, multiply or divide functions.
How am I supposed to do these? Is it as simple as just adding/subtracting/multiplying/dividing them together using integer math? I can see addition and subtraction working, but multiplying or dividing wouldn't take care of the fractional part...?
The Microchip library includes functions for adding and subtracting that deal with underflow/overflow (_Q15add and _Q15sub).
Multiplication can be implemented as an assembly function (I think the code is good - this is from memory).
C calling prototype is:
extern _Q15 Q15mpy(_Q15 a, _Q15 b);
The routine (placed in a .s source file in your project) is:
.global _Q15mpy
_Q15mpy:
mul.ss w0, w1, w2 ; signed multiply of the parameters, 32-bit result in w2:w3
sl w2, w2 ; place the most significant bit of w2 in the carry flag
rlc w3, w0 ; rotate w3 left through carry; result in w0
return ; return value in W0
.end
Remember to include libq.h
This routine does a left shift of one bit rather than a right shift of 15 bits on the result. There are no overflow concerns because Q15 numbers always have a magnitude <= 1.
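For reference, here is a portable C sketch of the same multiply, plus the matching divide (the function names are illustrative, not part of the Microchip library; the right shift of a negative product assumes the usual arithmetic-shift behavior):
#include <stdint.h>

typedef int16_t q15_t;   /* Q15: value = raw / 2^15, so magnitude <= 1 */

/* Multiply: the 32-bit product carries 30 fractional bits, so shift
   right by 15 to get back to a Q15 result. */
static q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * b) >> 15);
}

/* Divide: scale the dividend up by 2^15 first so the quotient keeps its
   fractional part; the caller must ensure |a| < |b| (and b != 0) so the
   result fits in Q15. */
static q15_t q15_div(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * 32768) / b);
}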
It turns out that all the basic arithmetic functions can be performed with the native operators, due to how the numbers are represented. For example, divide uses the / operator and multiply the * operator, and these compile to simple 32-bit divides and multiplies.