Using numba in a method to randomize matrices

I'm still not very familiar with numba, and my problem is that I have the piece of code below that I use to randomize the edges of graphs.
This code simply swaps some edges in a connectivity matrix, given the number of desired swaps and a seed for the random number generator.
My problem is that when I try to use numba to speed it up, I cannot get it to run. The error it returns is also pasted below.
@nb.jit(nopython=True)
def _randomize_adjacency_wei(A, n_swaps, seed):
    np.random.seed(seed)
    # Number of nodes
    n_nodes = A.shape[0]
    # Copy the adjacency matrix
    Arnd = A.copy()
    # Choose the edges that will be swapped
    edges = np.random.choice(n_nodes, size=(4, n_swaps), replace=True).T
    for it in range(n_swaps):
        i, j, k, l = edges[it, :]
        if len(np.unique([i, j, k, l])) < 4:
            continue
        else:
            # Old values of the weights
            w_ij, w_il, w_kj, w_kl = Arnd[i, j], Arnd[i, l], Arnd[k, j], Arnd[k, l]
            # Swapping edges
            Arnd[i, j] = Arnd[j, i] = w_il
            Arnd[k, l] = Arnd[l, k] = w_kj
            Arnd[i, l] = Arnd[l, i] = w_ij
            Arnd[k, j] = Arnd[j, k] = w_kl
    return Arnd
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function unique at 0x7f1a1c03b0d0>) found for signature:
>>> unique(list(int64)<iv=None>)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'np_unique': File: numba/np/arrayobj.py: Line 1915.
With argument(s): '(list(int64)<iv=None>)':
Rejected as the implementation raised a specific error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Unknown attribute 'ravel' of type list(int64)<iv=None>
File "../../../home/vinicius/anaconda3/lib/python3.8/site-packages/numba/np/arrayobj.py", line 1918:
def np_unique_impl(a):
b = np.sort(a.ravel())
^
During: typing of get attribute at /home/vinicius/anaconda3/lib/python3.8/site-packages/numba/np/arrayobj.py (1918)
File "../../../home/vinicius/anaconda3/lib/python3.8/site-packages/numba/np/arrayobj.py", line 1918:
def np_unique_impl(a):
b = np.sort(a.ravel())
^
raised from /home/vinicius/anaconda3/lib/python3.8/site-packages/numba/core/typeinfer.py:1071
During: resolving callee type: Function(<function unique at 0x7f1a1c03b0d0>)
During: typing of call at <ipython-input-165-90ffd30fe0e8> (19)
File "<ipython-input-165-90ffd30fe0e8>", line 19:
def _randomize_adjacency_wei(A, n_swaps, seed):
<source elided>
i,j,k,l = edges[it,:]
if len(np.unique([i,j,k,l]))<4:
^
Thanks in advance,
Vinicius

According to the comments, you are passing a list to np.unique(), which is not supported by Numba.
Modify the code this way:
i, j, k, l = e = edges[it, :]
if len(np.unique(e)) < 4:
    ...
The following example doesn't produce any errors:
>>> A = np.random.randint(0, 5, (8,8))
>>> r = _randomize_adjacency_wei(A, 4, 33)
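For completeness, here is a sketch of the whole function with that fix applied (assuming import numba as nb and import numpy as np; otherwise unchanged from the question):
@nb.jit(nopython=True)
def _randomize_adjacency_wei(A, n_swaps, seed):
    np.random.seed(seed)
    n_nodes = A.shape[0]
    Arnd = A.copy()
    edges = np.random.choice(n_nodes, size=(4, n_swaps), replace=True).T
    for it in range(n_swaps):
        # Keep the row as an array, so np.unique sees an array
        # rather than an unsupported Python list
        i, j, k, l = e = edges[it, :]
        if len(np.unique(e)) < 4:
            continue
        w_ij, w_il, w_kj, w_kl = Arnd[i, j], Arnd[i, l], Arnd[k, j], Arnd[k, l]
        Arnd[i, j] = Arnd[j, i] = w_il
        Arnd[k, l] = Arnd[l, k] = w_kj
        Arnd[i, l] = Arnd[l, i] = w_ij
        Arnd[k, j] = Arnd[j, k] = w_kl
    return Arnd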


Lsode throwing INTDY-- T (=R1) ILLEGAL and invalid input detected

I have my function:
function [result] = my_func(x, y)
  result = y^2*(1 - 3*x) - 3*y;
endfunction
Also, my vector with Ts, my function handle, and my initial value x_0:
load file_with_ts
# Add my limits as I also want to calculate those
# (all values in file_with_ts are within those limits.)
t_points = [-1, file_with_ts, 2]
myfunc = str2func("my_func")
x_0 = 0.9142
I am trying to execute the following line:
lsode_d1 = lsode(myfunc, x_0, t_points)
And expecting a result, but getting the following error:
INTDY-- T (=R1) ILLEGAL
In above message, R1 = 0.7987082301475D+00
T NOT IN INTERVAL TCUR - HU (= R1) TO TCUR (=R2)
In above, R1 = 0.8091168896311D+00 R2 = 0.8280400838323D+00
LSODE-- TROUBLE FROM INTDY. ITASK = I1, TOUT = R1
In above message, I1 = 1
In above message, R1 = 0.7987082301475D+00
error: lsode: invalid input detected (see printed message)
error: called from
main at line 20 column 10
Also, the variable sizes are:
x_0 -> 1x1
t_points -> 1x153
myfunc -> 1x1
I tried transposing the t_points vector.
I tried using @my_func instead of the str2func function.
I tried adding multiple variables as the starting point (instead of x_0 I entered [x_0; x_1]).
I tried changing my function header from my_func(x, y) to my_func(y, x).
I read the documentation and confirmed that my_func allows x to be a vector and returns a vector (whenever x is a vector).
EDIT: t_points is the following 1x153 vector (with -1 and 2 added to the beginning and the end respectively):
-4.9451e-01 -4.9139e-01 -4.7649e-01 -4.8026e-01 -4.6177e-01 -4.5412e-01 -4.4789e-01 -4.2746e-01
-4.1859e-01 -4.0983e-01 -4.0667e-01 -3.8436e-01 -3.7825e-01 -3.7150e-01 -3.5989e-01 -3.5131e-01
-3.4875e-01 -3.3143e-01 -3.2416e-01 -3.1490e-01 -3.0578e-01 -2.9267e-01 -2.9001e-01 -2.6518e-01
-2.5740e-01 -2.5010e-01 -2.4017e-01 -2.3399e-01 -2.1491e-01 -2.1067e-01 -2.0357e-01 -1.8324e-01
-1.8112e-01 -1.7295e-01 -1.6147e-01 -1.5424e-01 -1.4560e-01 -1.1737e-01 -1.1172e-01 -1.0846e-01
-1.0629e-01 -9.4327e-02 -8.0883e-02 -6.6043e-02 -6.6660e-02 -6.1649e-02 -4.7245e-02 -2.8332e-02
-1.8043e-02 -7.7416e-03 -6.5142e-04  1.0918e-02  1.7619e-02  3.4310e-02  3.3192e-02  5.2275e-02
 5.5756e-02  6.8326e-02  8.2764e-02  9.5195e-02  9.4412e-02  1.1630e-01  1.2330e-01  1.2966e-01
 1.3902e-01  1.4891e-01  1.5848e-01  1.7012e-01  1.8026e-01  1.9413e-01  2.0763e-01  2.1233e-01
 2.1895e-01  2.3313e-01  2.4092e-01  2.4485e-01  2.6475e-01  2.7154e-01  2.8068e-01  2.9258e-01
 3.0131e-01  3.0529e-01  3.1919e-01  3.2927e-01  3.3734e-01  3.5841e-01  3.5562e-01  3.6758e-01
 3.7644e-01  3.8413e-01  3.9904e-01  4.0863e-01  4.2765e-01  4.2875e-01  4.3468e-01  4.5802e-01
 4.6617e-01  4.6885e-01  4.7247e-01  4.8778e-01  4.9922e-01  5.1138e-01  5.1869e-01  5.3222e-01
 5.4196e-01  5.4375e-01  5.5526e-01  5.6629e-01  5.7746e-01  5.8840e-01  6.0006e-01  5.9485e-01
 6.1771e-01  6.3621e-01  6.3467e-01  6.5467e-01  6.6175e-01  6.6985e-01  6.8091e-01  6.8217e-01
 6.9958e-01  7.1802e-01  7.2049e-01  7.3021e-01  7.3633e-01  7.4985e-01  7.6116e-01  7.7213e-01
 7.7814e-01  7.8882e-01  8.1012e-01  7.9871e-01  8.3115e-01  8.3169e-01  8.4500e-01  8.4168e-01
 8.5705e-01  8.6861e-01  8.8211e-01  8.8165e-01  9.0236e-01  9.0394e-01  9.2033e-01  9.3326e-01
 9.4164e-01  9.5541e-01  9.6503e-01  9.6675e-01  9.8129e-01  9.8528e-01  9.9339e-01
Credits to Lutz Lehmann and PierU.
The problem lay in the array t_points not being monotonic. Adding a sort(t_points) before doing any calculations fixed the error.
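For reference, a minimal sketch of that fix in the question's own setup (same file_with_ts, my_func, and x_0 as above):
load file_with_ts
t_points = sort([-1, file_with_ts, 2]);  # lsode needs monotonically ordered output times
x_0 = 0.9142;
lsode_d1 = lsode(@my_func, x_0, t_points);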

how to allocate arrays in numba

I am trying to create an array inside a numba jit function. I tried these two options and both return errors:
@njit
def monte_carlo_ou(T: int, nsamples: int = 100, dt: float = 0.1, sigma_g: float = 1.0):
    particles = np.zeros(T, nsamples, dtype=float)
    velocity_particles = np.zeros(nsamples, dtype=float)
    return particles

monte_carlo_ou(100, 50, dt=0.1, sigma_g=1.0)
TypingError: Cannot determine Numba type of <class 'numba.core.ir.UndefinedType'>
@njit
def monte_carlo_ou(T: int, nsamples: int = 100, dt: float = 0.1, sigma_g: float = 1.0):
    particles = numba.float32[:, :]
    velocity_particles = numba.float32[:]
    return particles

monte_carlo_ou(100, 50, dt=0.1, sigma_g=1.0)
Error:
TypingError: No implementation of function Function(<built-in function getitem>) found for signature:
getitem(class(float32), UniTuple(slice<a:b> x 2))
There are 22 candidate implementations:
- Of which 22 did not match due to:
Overload of function 'getitem': File: <numerous>: Line N/A.
With argument(s): '(class(float32), UniTuple(slice<a:b> x 2))':
No match.
During: typing of intrinsic-call at /var/folders/43/m_k444pn2z7c6ygxj0y3_c4r0000gn/T/ipykernel_58943/4011707724.py (6)
During: typing of static-get-item at /var/folders/43/m_k444pn2z7c6ygxj0y3_c4r0000gn/T/ipykernel_58943/4011707724.py (6)
Any advice on how to get started? All tutorials seem to be very limited.
The problem is your first in-function line: the shape in np.zeros should be one tuple argument, not multiple scalar arguments.
particles = np.zeros(shape=(T, nsamples), dtype=float) worked for me.
shape= is not necessary but is explicit about how you should provide the shape of the array.
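Put together, a minimal sketch of the corrected function (assuming from numba import njit; np.float64 is used explicitly, since Numba is strict about dtype arguments):
import numpy as np
from numba import njit

@njit
def monte_carlo_ou(T: int, nsamples: int = 100, dt: float = 0.1, sigma_g: float = 1.0):
    # The shape is a single tuple argument; dtype is a concrete NumPy dtype
    particles = np.zeros((T, nsamples), dtype=np.float64)
    velocity_particles = np.zeros(nsamples, dtype=np.float64)
    return particles

particles = monte_carlo_ou(100, 50, dt=0.1, sigma_g=1.0)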

Talos --> TypeError: __init__() got an unexpected keyword argument 'grid_downsample'

I am trying to run a hyperparameter optimization with Talos. As I have a lot of parameters to test, I want to use a 'grid_downsample' argument that will select 30% of all possible hyperparameter combinations. However, when I run my code I get: TypeError: __init__() got an unexpected keyword argument 'grid_downsample'
I tested the code below without the 'grid_downsample' option and with fewer hyperparameters.
# load data
data = pd.read_csv('data.txt', sep="\t", encoding="latin1")

# split into input (X) and output (y) variables
Y = np.array(data['Y'])
data_bis = data.drop(['Y'], axis=1)
X = np.array(data_bis)

p = {'activation': ['relu'],
     'optimizer': ['Nadam'],
     'first_hidden_layer': [12],
     'second_hidden_layer': [12],
     'batch_size': [20],
     'epochs': [10, 20],
     'dropout_rate': [0.0, 0.2]}

def dnn_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    # input layer
    model.add(Dense(params['first_hidden_layer'], input_shape=(1024,)))
    model.add(Dropout(params['dropout_rate']))
    model.add(Activation(params['activation']))
    # hidden layer 2
    model.add(Dense(params['second_hidden_layer']))
    model.add(Dropout(params['dropout_rate']))
    model.add(Activation(params['activation']))
    # output layer with one node
    model.add(Dense(1))
    model.add(Activation(params['activation']))
    # compile model
    model.compile(loss='binary_crossentropy', optimizer=params['optimizer'], metrics=['accuracy'])
    out = model.fit(x_train, y_train,
                    batch_size=params['batch_size'],
                    epochs=params['epochs'],
                    validation_data=[x_val, y_val],
                    verbose=0)
    return out, model

scan_object = ta.Scan(X, Y, model=dnn_model, params=p, experiment_name="test")
reporting = ta.Reporting(scan_object)
report = reporting.data
report.to_csv('./Random_search/dnn/report_talos.txt', sep='\t')
This code works well. If I change the scan_object at the end to scan_object = ta.Scan(X, Y, model=dnn_model, grid_downsample=0.3, params=p, experiment_name="test"), it gives me the error TypeError: __init__() got an unexpected keyword argument 'grid_downsample', while I was expecting the same result format as a normal grid search but with fewer combinations. What am I missing? Did the name of the argument change? I'm using Talos 0.6.3 in a conda environment.
Thank you!
Might be too late for you now, but they've switched it to fraction_limit. It would give this for you:
scan_object = ta.Scan(X, Y, model=dnn_model, params=p, experiment_name="test", fraction_limit=0.1)
Sadly, the documentation isn't well updated.
Check out their examples on GitHub:
https://github.com/autonomio/talos/blob/master/examples/Hyperparameter%20Optimization%20with%20Keras%20for%20the%20Iris%20Prediction.ipynb

Tensorflow: Cannot interpret feed_dict key as Tensor

I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a relu unit. I am also processing the input data in batches of 128.
The inputs are images of size 28 * 28. In the following code, I get the error on the line
_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
Here is the code I have written
# Initialize
batch_size = 128
layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001

# Create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2

# Initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
w = {
    'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
    'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
    'b1': tf.Variable(tf.zeros([hidden_layer1])),
    'b2': tf.Variable(tf.zeros([num_labels]))
}
init = tf.initialize_all_variables()

train_prediction = tf.nn.softmax(model)
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)

model = create_model(x, w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)
    for epoch in range(num_steps):
        loss = 0
        for i in range(total_batch):
            batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size, :]
            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            loss = loss + c
        loss = loss / total_batch
        if epoch % 500 == 0:
            print("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset})
    print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
This worked for me. I faced this problem on a production server, while on my PC it was running fine. After predicting my data I called K.clear_session(), then loaded the model again:
from keras import backend as K

# Before prediction
K.clear_session()

# After prediction
K.clear_session()
Variable x is not in the same graph as model; try to define all of these in the same graph scope. For example:
# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholders
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
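As a self-contained illustration of the same pattern (hypothetical shapes and names, TF 1.x API as in the question):
import numpy as np
import tensorflow as tf

graph1 = tf.Graph()
with graph1.as_default():
    # Everything (placeholders, variables, ops) is defined in graph1
    x = tf.placeholder(tf.float32, shape=(None, 4))
    w = tf.Variable(tf.random_normal([4, 2]))
    logits = tf.matmul(x, w)
    init = tf.global_variables_initializer()

with tf.Session(graph=graph1) as sess:
    sess.run(init)
    # x belongs to graph1, so it is a valid feed_dict key here
    out = sess.run(logits, feed_dict={x: np.ones((3, 4), dtype=np.float32)})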
If you use a Django server, just run it with --nothreading, for example:
python manage.py runserver --nothreading
I had the same issue with Flask. Adding the --without-threads flag to flask run or threaded=False to app.run() fixed it.
In my case, I was calling the CNN multiple times in a loop; I fixed my problem by doing the following:
# Declare this as global:
global graph
graph = tf.get_default_graph()

# Then, just before you call your model, use this
with graph.as_default():
    # call your models here
Note: In my case too, the app ran fine for the first time and then gave the error above. Using the above fix solved the problem.
Hope that helps.
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise in case you run a session outside of the scope of its with statement. Consider:
with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)

sess.run(logits, feed_dict=feed_dict)
If logits and feed_dict are defined properly, the first sess.run command will execute normally, but the second will raise the mentioned error.
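Assuming the same names, the fix is simply to keep every run inside the with block:
with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)
    sess.run(logits, feed_dict=feed_dict)  # still inside the block, so the session is still open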
You can also experience this while working on notebooks hosted on online learning platforms like Coursera. Implementing the following code at the topmost block of the notebook file could help get past the issue:
from keras import backend as K
K.clear_session()
Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. To fix the issue I just set the API function definitions to async, similar to this:
@app.post('/endpoint_name')
async def endpoint_function():
    # Do stuff here, including possibly (re)loading the model

Jython 2.5 isdigit

I am trying to add an isdigit() check to the program so that I can verify that what the user enters is valid. This is what I have so far. But when I run it and enter a character, say "f", it crashes and gives me the error posted below the code. Any ideas?
def mirrorHorizontal(source):
    # Ask the user for an input
    userMirrorPoint = requestString("Enter a mirror point from 0 to halfway through the picture.")
    while (int(userMirrorPoint) < 0 or int(userMirrorPoint) > (int(getHeight(source) - 1)//2)) or not(userMirrorPoint.isdigit()):
        userMirrorPoint = requestString("Enter a mirror point from 0 to halfway through the picture.")
    height = getHeight(source)
    mirrorPoint = int(userMirrorPoint)
    for x in range(0, getWidth(source)):
        for y in range(0, mirrorPoint):
            topPixel = getPixel(source, x, y)
            bottomPixel = getPixel(source, x, height-y-1)
            color = getColor(topPixel)
            setColor(bottomPixel, color)
The error was: f
Inappropriate argument value (of correct type).
An error occurred attempting to pass an argument to a function.
Please check line 182 of /Volumes/FLASHDRIVE2/College/Spring 16'/Programs - CPS 201/PA5Sikorski.py
isdigit() itself behaves properly in the 2.7.0 Jython version I have locally:
>>> '1'.isdigit()
True
>>> ''.isdigit()
False
>>> 'A'.isdigit()
False
>>> 'A2'.isdigit()
False
>>> '2'.isdigit()
True
>>> '22321'.isdigit()
True
Try breaking your big expression up, as typecasting to integers will throw errors for non-numeric strings. This is true across Python versions.
>>> int('b')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: 'b'
>>> int('2')
2
You probably want to be careful about the order of the parts of that long expression (this or that or ...). Breaking it up would also make it more readable.
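For example, a minimal sketch of the input loop (reusing the question's requestString and getHeight helpers; the function name here is hypothetical) that checks isdigit() first, so int() is only ever applied to a numeric string:
def getMirrorPoint(source):  # hypothetical helper, not from the original program
    limit = (getHeight(source) - 1) // 2
    while True:
        reply = requestString("Enter a mirror point from 0 to halfway through the picture.")
        # isdigit() is checked first, so the int() casts below can never fail
        if reply.isdigit() and 0 <= int(reply) <= limit:
            return int(reply)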