How to check that Theano is only using FP32? - neural-network

In my .theanorc file I have set the parameter...
[global]
floatX = float32
However, when I run Keras with the Theano backend and call model.predict, the NumPy dtype of the returned array is always float64, not float32. I am not sure if this is a problem or if Keras/Theano converts to FP32 before executing on the GPU. Is there a way to check? I would also like Theano to raise an error or warning if I try to use FP64 on the GPU.

To check the value of floatX you can simply run
import theano
print(theano.config.floatX)
If that prints 'float32', Theano will print a warning when you try to use float64 as input for GPU computations. This can be suppressed, though, via the allow_input_downcast keyword argument, so make sure you don't set that keyword in theano.function when compiling the graph.
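As a quick end-to-end check, you can compile a tiny function and feed it float64 data; with floatX = float32 and no downcasting allowed, Theano rejects the input rather than silently converting it. A minimal sketch (the warn_float64 line assumes a Theano version that has that config flag):

import numpy as np
import theano
import theano.tensor as T

print(theano.config.floatX)             # expect: float32
# theano.config.warn_float64 = 'raise'  # optional, if your version supports it

x = T.matrix('x')                       # dtype follows floatX -> float32
f = theano.function([x], x * 2)         # note: no allow_input_downcast

f(np.ones((2, 2), dtype='float32'))     # fine
f(np.ones((2, 2), dtype='float64'))     # raises a TypeError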

Related

Numba cannot resolve function (np.digitize)

I get an error from Numba complaining that it can't resolve a function.
The minimal code to reproduce the error is
import numba
import numpy as np

@numba.jit("float64(float64)", nopython=True)
def get_cut_tight_nom(eta):
    binning_abseta = np.array([0., 10., 20.])
    return np.digitize(eta, binning_abseta)
I don't understand the error message
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function digitize at 0x7fdb8c11dee0>) found for signature:
>>> digitize(float64, array(float64, 1d, C))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload of function 'digitize': File: numba/np/arraymath.py: Line 3939.
With argument(s): '(float64, array(float64, 1d, C))':
No match.
During: resolving callee type: Function(<function digitize at 0x7fdb8c11dee0>)
During: typing of call at /tmp/ipykernel_309793/3917220133.py (8)
It seems it wants to resolve digitize(float64, array(float64, 1d, C)), yet no candidate with that signature matches?
It's indeed due to the signatures. Plain NumPy's np.digitize returns int64 (scalar or array), not the float64 you've specified as your function's return type.
The Numba implementation additionally seems to require both arguments to be arrays, which you'll also have to make explicit in the signature.
So this for example works for me:
#numba.jit("int64[:](float64[:])", nopython=True)
def get_cut_tight_nom(eta):
binning_abseta = np.array([0., 10., 20.])
return np.digitize(eta, binning_abseta)
Resulting in the expected integer bin indices. For example, with a made-up input array:
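import numpy as np

eta = np.array([-1.0, 5.0, 15.0, 25.0])
print(get_cut_tight_nom(eta))
# [0 1 2 3] -- the bin index of each value relative to [0., 10., 20.]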
But do you really need the signature in this case? Numba is able to figure it out by itself as well:
@numba.njit
def get_cut_tight_nom(eta):
    ...
A signature can still add something if, for example, you want to explicitly cast float32 inputs to float64.
You can also inspect which signatures Numba comes up with by running the function with differently typed inputs. Calling it once with float32 and once with float64 input, then looking at its signatures attribute, shows both compiled specializations; that can help highlight where issues like this arise.
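A minimal sketch of that inspection (assuming Numba's digitize overload accepts both float dtypes):

import numba
import numpy as np

@numba.njit
def get_cut_tight_nom(eta):
    binning_abseta = np.array([0., 10., 20.])
    return np.digitize(eta, binning_abseta)

get_cut_tight_nom(np.zeros(4, dtype=np.float32))
get_cut_tight_nom(np.zeros(4, dtype=np.float64))
print(get_cut_tight_nom.signatures)
# one entry per compiled specialization, e.g.
# [(array(float32, 1d, C),), (array(float64, 1d, C),)]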

How to return a list for eager compilation?

Numba allows eager compilation if you provide the function signature. But I cannot find any information about the list type. My test code is:
import numba as nb
import numpy as np

# @nb.jit(nb.ListType())
def Cal():
    return [1, np.zeros(shape=[5, 5]), 3]

a = Cal()
What's the function signature for Cal?
In addition, what if there are multiple outputs? How do I provide the function signature then? For example:
def TwoOutput():
    return 1, 2
Any suggestion is appreciated.
You cannot return the list [1, np.zeros(shape=[5, 5]), 3] from a Numba jitted function. In fact, if you try, Numba will throw an error saying that the "compilation is falling back to object mode WITH looplifting enabled because Function "Cal" failed type inference", because the output type is not well defined. The items of a list need to all have the same type and must not be generic objects; such a list is a typed list. That is not the case here: this is a reflected list, and reflected lists are not supported by Numba (anymore). What makes Numba fast is type inference, which lets it generate fast native code; with dynamic Python objects there is no way to generate fast code (due to heavy overheads like type checking, reference counting, allocations, etc.).
Note that you can return a tuple if the output always has the same small number of items, known at compile time. Also note that Numba can automatically infer the output type, so you can use @nb.jit(()) here to enable eager compilation of the target function.
Put shortly, Numba is not meant to support or speed up this use case. Note that Cython can, and is slightly faster. Avoid reflected lists (and dynamic objects in general) if you want fast code.
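A minimal sketch of both points (UniTuple is Numba's type for a fixed-length homogeneous tuple; treat the exact signature spelling as an assumption to verify against your Numba version):

import numba as nb
import numpy as np

# A heterogeneous but fixed-length tuple is fine, unlike a reflected list:
@nb.njit
def cal():
    return 1, np.zeros((5, 5)), 3

# Eager compilation with an explicit signature: two int64 outputs, no arguments.
@nb.jit(nb.types.UniTuple(nb.int64, 2)(), nopython=True)
def two_output():
    return 1, 2

a, b, c = cal()
print(two_output())  # (1, 2)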

Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe' in Ode solver

I have a function that needs to be simulated with SciPy's odeint solver. I am providing the initial values as a NumPy array that contains 13 parameters of different sizes; all are integer values. But I am getting the error "Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'".
If I try to use dtype=float or astype(float), I get a new error: "setting an array element with a sequence."
Can someone help me find a way to fix this problem? I would highly appreciate a solution that shows the code as well.
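A likely cause, sketched below as an assumption about the setup: packing 13 pieces of different lengths into one np.array yields an object-dtype (ragged) array, which odeint cannot safely cast to float64, while dtype=float on ragged input triggers the "sequence" error. Flattening everything into a single 1-D float array avoids both:

import numpy as np
from scipy.integrate import odeint

a = np.zeros(3)
b = np.zeros(5)

# Ragged pieces packed together give dtype('O')...
y0_bad = np.array([a, b, 1], dtype=object)  # -> "Cannot cast ... 'safe'"

# ...so flatten them into one 1-D float array instead:
y0 = np.hstack([a, b, 1.0])                 # shape (9,), dtype float64

def rhs(y, t):
    return -y                               # toy right-hand side

sol = odeint(rhs, y0, np.linspace(0.0, 1.0, 11))
print(sol.shape)                            # (11, 9)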

Power operator in Chisel

I am trying to find an equivalent of the Verilog power operator ** in Chisel. I went through the Chisel cheat sheet and tutorial but did not find what I was looking for. After going through designs written in Chisel, I found that the log2xx functions are a popular choice, while a power operator is never used. Of course I can always use the shift operator to get powers of 2, but I was hoping there is a general power operator in Chisel. I tried to use Scala's math functions to do the job but got a compilation error.
Since you are trying to calculate a bitwidth, which is computed at elaboration time (i.e. when Scala is elaborating the hardware graph), you can use plain Scala functions. Scala only provides a power function for Doubles, but that works just fine for this case. Try math.pow(base, exp).toInt; note that base and exp can both be Ints and Scala will automatically convert them to Doubles for the call. You simply need to convert the resulting Double back to an Int for use as a bitwidth.
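A minimal sketch in Scala/Chisel (module and port names are illustrative, not from the question):

import chisel3._

class PowWidthExample(base: Int, exp: Int) extends Module {
  // Elaboration-time (Scala) computation, not hardware:
  val width = math.pow(base, exp).toInt  // e.g. base = 2, exp = 8 -> 256
  val io = IO(new Bundle {
    val out = Output(UInt(width.W))
  })
  io.out := 0.U                          // placeholder logic
}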

Matlab error "Find requires variable sizing"

[~,col] = find(ocpRefPt(2,:)>x1 & ocpRefPt(2,:)<x2 & ocpRefPt(1,:)>y1 & ocpRefPt(1,:)<y2);
Above is the line where the compilation fails; it is inside a loop.
x1, x2, y1, y2 are scalars (natural numbers).
ocpRefPt is a 2x16 matrix
Error: FIND requires variable sizing
What does this mean? How can I overcome this error?
So it seems that you are trying to compile with emlmex to make embedded code. The error is saying that the size of the output of find is not known, and apparently the compiler requires fixed-size outputs. See this newsgroup post for one explanation.
This method of compilation seems to be obsolete; use MATLAB Coder (the codegen command) instead:
emlmex Generate a C-MEX file from MATLAB code.
emlmex [-options] fun1 [fun2 ...]
This function is obsolete. For general purpose acceleration
and code generation use CODEGEN.