I have my function:
function [result] = my_func(x,y)
    result = y^2*(1-3*x)-3*y;
endfunction
I also have a vector of t values, a function handle, and my initial value x_0:
load file_with_ts
# Add my limits as I also want to calculate those
# (all values in file_with_ts are within those limits.)
t_points = [-1, file_with_ts, 2]
myfunc = str2func("my_func")
x_0 = 0.9142
I am trying to execute the following line:
lsode_d1 = lsode(myfunc, x_0, t_points)
I expect a result, but instead I get the following error:
INTDY-- T (=R1) ILLEGAL
In above message, R1 = 0.7987082301475D+00
T NOT IN INTERVAL TCUR - HU (= R1) TO TCUR (=R2)
In above, R1 = 0.8091168896311D+00 R2 = 0.8280400838323D+00
LSODE-- TROUBLE FROM INTDY. ITASK = I1, TOUT = R1
In above message, I1 = 1
In above message, R1 = 0.7987082301475D+00
error: lsode: invalid input detected (see printed message)
error: called from
main at line 20 column 10
Also, the variable sizes are:
x_0 -> 1x1
t_points -> 1x153
myfunc -> 1x1
I have tried the following:
- transposing the t_points vector
- using @my_func directly instead of the str2func call
- passing multiple initial values (i.e. [x_0; x_1] instead of x_0)
- changing my function header from my_func(x, y) to my_func(y, x)
- reading the documentation and confirming that my_func accepts a vector x and returns a vector in that case
EDIT: t_points is the following vector, which becomes a 1x153 matrix once -1 and 2 are added at the beginning and the end respectively:
-4.9451e-01
-4.9139e-01
-4.7649e-01
-4.8026e-01
-4.6177e-01
-4.5412e-01
-4.4789e-01
-4.2746e-01
-4.1859e-01
-4.0983e-01
-4.0667e-01
-3.8436e-01
-3.7825e-01
-3.7150e-01
-3.5989e-01
-3.5131e-01
-3.4875e-01
-3.3143e-01
-3.2416e-01
-3.1490e-01
-3.0578e-01
-2.9267e-01
-2.9001e-01
-2.6518e-01
-2.5740e-01
-2.5010e-01
-2.4017e-01
-2.3399e-01
-2.1491e-01
-2.1067e-01
-2.0357e-01
-1.8324e-01
-1.8112e-01
-1.7295e-01
-1.6147e-01
-1.5424e-01
-1.4560e-01
-1.1737e-01
-1.1172e-01
-1.0846e-01
-1.0629e-01
-9.4327e-02
-8.0883e-02
-6.6043e-02
-6.6660e-02
-6.1649e-02
-4.7245e-02
-2.8332e-02
-1.8043e-02
-7.7416e-03
-6.5142e-04
1.0918e-02
1.7619e-02
3.4310e-02
3.3192e-02
5.2275e-02
5.5756e-02
6.8326e-02
8.2764e-02
9.5195e-02
9.4412e-02
1.1630e-01
1.2330e-01
1.2966e-01
1.3902e-01
1.4891e-01
1.5848e-01
1.7012e-01
1.8026e-01
1.9413e-01
2.0763e-01
2.1233e-01
2.1895e-01
2.3313e-01
2.4092e-01
2.4485e-01
2.6475e-01
2.7154e-01
2.8068e-01
2.9258e-01
3.0131e-01
3.0529e-01
3.1919e-01
3.2927e-01
3.3734e-01
3.5841e-01
3.5562e-01
3.6758e-01
3.7644e-01
3.8413e-01
3.9904e-01
4.0863e-01
4.2765e-01
4.2875e-01
4.3468e-01
4.5802e-01
4.6617e-01
4.6885e-01
4.7247e-01
4.8778e-01
4.9922e-01
5.1138e-01
5.1869e-01
5.3222e-01
5.4196e-01
5.4375e-01
5.5526e-01
5.6629e-01
5.7746e-01
5.8840e-01
6.0006e-01
5.9485e-01
6.1771e-01
6.3621e-01
6.3467e-01
6.5467e-01
6.6175e-01
6.6985e-01
6.8091e-01
6.8217e-01
6.9958e-01
7.1802e-01
7.2049e-01
7.3021e-01
7.3633e-01
7.4985e-01
7.6116e-01
7.7213e-01
7.7814e-01
7.8882e-01
8.1012e-01
7.9871e-01
8.3115e-01
8.3169e-01
8.4500e-01
8.4168e-01
8.5705e-01
8.6861e-01
8.8211e-01
8.8165e-01
9.0236e-01
9.0394e-01
9.2033e-01
9.3326e-01
9.4164e-01
9.5541e-01
9.6503e-01
9.6675e-01
9.8129e-01
9.8528e-01
9.9339e-01
Credits to Lutz Lehmann and PierU.
The problem lay in the array t_points not being monotonic: lsode requires the output times to be in increasing order, and several values in the file are out of order (for example 3.4310e-02 is followed by 3.3192e-02). Adding a sort(t_points) before doing any calculations fixed the error.
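The fix is a one-line change (a minimal sketch in Octave, using the variables from the question; note that sort returns a sorted copy, so it must be assigned back):
t_points = sort([-1, file_with_ts, 2]);  # lsode needs monotonically increasing output times
lsode_d1 = lsode(myfunc, x_0, t_points)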
I have a series of data, for example:
0.767838478
0.702426493
0.733858228
0.703275979
0.651456058
0.62427187
0.742353261
0.646359026
0.695630431
0.659101665
0.598786652
0.592840135
0.59199059
which I know fits best to an equation of the form:
y = a*e^(b*x) + c
How can I fit the custom function to this data?
A similar question has already been asked on the LibreOffice forum without a proper answer. I would appreciate your help with this. I would prefer answers that apply to any custom function rather than workarounds for this specific case.
There are multiple possible solutions for this, but one approach would be the following:
For determining the a and b in the trend-line function y = a*e^(b*x) there are solutions using native Calc functions (LINEST, EXP, LN).
So we can take y = a*e^(b*x) + c as y - c = a*e^(b*x); if we know c, then ln(y - c) = ln(a) + b*x is linear in x, so a and b can be obtained with the native functions as above. How do we get c? One approach is described in Exponential Curve Fitting, where approximations of b, then a, and then c are made.
I have translated the main part of the Delphi code from Exponential Curve Fitting (source listing) into StarBasic for Calc. The fine-tuning of c is not translated yet; that remains a to-do for you as professional and enthusiast programmers.
Example:
Data:
x y
0 0.767838478
1 0.702426493
2 0.733858228
3 0.703275979
4 0.651456058
5 0.62427187
6 0.742353261
7 0.646359026
8 0.695630431
9 0.659101665
10 0.598786652
11 0.592840135
12 0.59199059
Formulas:
B17: =EXP(INDEX(LINEST(LN($B$2:$B$14),$A$2:$A$14),1,2))
C17: =INDEX(LINEST(LN($B$2:$B$14),$A$2:$A$14),1,1)
y = a*e^(b*x) is also the function used for the chart's trend line calculation.
B19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,1)
C19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,2)
D19: =INDEX(TRENDEXPPLUSC($B$2:$B$14,$A$2:$A$14),1,3)
Code:
function trendExpPlusC(rangey as variant, rangex as variant) as variant
    'get values from ranges
    redim x(ubound(rangex)-1) as double
    redim y(ubound(rangex)-1) as double
    for i = lbound(x) to ubound(x)
        x(i) = rangex(i+1,1)
        y(i) = rangey(i+1,1)
    next
    'make helper arrays
    redim dx(ubound(x)-1) as double
    redim dy(ubound(x)-1) as double
    redim dxyx(ubound(x)-1) as double
    redim dxyy(ubound(x)-1) as double
    for i = lbound(x) to ubound(x)-1
        dx(i) = x(i+1) - x(i)
        dy(i) = y(i+1) - y(i)
        dxyx(i) = (x(i+1) + x(i))/2
        dxyy(i) = dy(i) / dx(i)
    next
    'approximate b
    s = 0
    errcnt = 0
    for i = lbound(dxyx) to ubound(dxyx)-1
        on error goto errorhandler
        s = s + log(abs(dxyy(i+1) / dxyy(i))) / (dxyx(i+1) - dxyx(i))
        on error goto 0
    next
    b = s / (ubound(dxyx) - errcnt)
    'approximate a
    s = 0
    errcnt = 0
    for i = lbound(dx) to ubound(dx)
        on error goto errorhandler
        s = s + dy(i) / (exp(b * x(i+1)) - exp(b * x(i)))
        on error goto 0
    next
    a = s / (ubound(dx) + 1 - errcnt)
    'approximate c
    s = 0
    errcnt = 0
    for i = lbound(x) to ubound(x)
        on error goto errorhandler
        s = s + y(i) - a * exp(b * x(i))
        on error goto 0
    next
    c = s / (ubound(x) + 1 - errcnt)
    'make y for (y - c) = a*e^(b*x)
    for i = lbound(x) to ubound(x)
        y(i) = log(abs(y(i) - c))
    next
    'get a and b from LINEST for (y - c) = a*e^(b*x)
    oFunctionAccess = createUnoService( "com.sun.star.sheet.FunctionAccess" )
    args = array(array(y), array(x))
    ab = oFunctionAccess.CallFunction("LINEST", args)
    if a < 0 then a = -exp(ab(0)(1)) else a = exp(ab(0)(1))
    b = ab(0)(0)
    trendExpPlusC = array(a, b, c)
    exit function
errorhandler:
    errcnt = errcnt + 1
    resume next
end function
The formula y = b*e^(a*x) is the exponential regression equation for LibreOffice chart trend lines.
LibreOffice exports all settings
All of LibreOffice's settings live in the LibreOffice profile folder:
C:\Users\<the user name entered when installing the operating system>\AppData\Roaming\LibreOffice
(AppData is hidden; enable showing hidden items in File Manager to see it.)
Back up the LibreOffice folder; when reinstalling, put the LibreOffice folder back in its original place.
Note:
1. If the installation is a preview edition, the folder is named LibreOfficeDev instead, because the preview edition is called LibreOfficeDev.
2. The formal edition can be installed alongside the preview edition; if both are installed, a LibreOffice folder and a LibreOfficeDev folder will both be present.
3. To clear all settings, just delete the LibreOffice folder; when the program is next opened, a new LibreOffice folder will be created.
LibreOffice exports a single toolbar I made
Common path:
C:\Users\<user name>\AppData\Roaming\LibreOffice\4\user\config\soffice.cfg\modules\
Append the branch path of the individual application below.
Branch paths:
\modules\StartModule\toolbar\ - the "Start" toolbar I made is placed here.
\modules\swriter\toolbar\ - the "writer" toolbar I made is placed here.
\modules\scalc\toolbar\ - the "calc" toolbar I made is placed here.
\modules\simpress\toolbar\ - the "impress" toolbar I made is placed here.
\modules\sdraw\toolbar\ - the "draw" toolbar I made is placed here.
\modules\smath\toolbar\ - the "math" toolbar I made is placed here.
\modules\dbapp\toolbar\ - the "base" toolbar I made is placed here.
Back up the file; when reinstalling, put the file back in its original place.
Note:
Toolbars I made myself get an automatically numbered default file name, so open the file to find out which toolbar it contains.
The file-name prefix "custom_toolbar_" cannot be changed (changing it causes an error); the part after it can be changed. For example:
custom_toolbar_c01611ed.xml → custom_toolbar_AAA.xml.
A finished toolbar file can be copied to other places for reuse. For example, a toolbar made in "writer" can be copied to "calc" and used there.
LibreOffice self-made symbol toolbar
Step 1 - Enable macro recording: Tools\Options\Advanced\Enable macro recording (tick); the "Record Macro" option then appears under "Tools\Macros".
Step 2 - Record the macro: Tools\Macros\Record Macro, then perform the action (click "Ω" to enter a symbol: select the symbol, then Insert), then Stop Recording. The recorded macro is stored in "Module1" under the name Main; rename Main and save (a sketch of what the recorder typically produces is shown after these steps).
Step 3 - Add a new toolbar: Tools\Customize\Toolbar, Add, enter a name (example: symbol), OK; the new toolbar appears at the top left.
Step 4 - Add the macro to the new toolbar: Tools\Customize\Toolbar\Category\Macros\My Macros\Standard\Module1\Main, click "Main", Add item, Modify, Rename (it can be named with the symbol), OK, OK.
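For orientation, a minimal StarBasic sketch of what a recorded symbol-insert macro typically looks like (the sub name is arbitrary, and the exact dispatch arguments depend on your LibreOffice version, so treat this as an illustration rather than the recorder's exact output):
sub InsertOmega
    'the recorder drives the UI through the dispatcher service
    dim document as object
    dim dispatcher as object
    document = ThisComponent.CurrentController.Frame
    dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
    'argument: the symbol text to insert
    dim args1(0) as new com.sun.star.beans.PropertyValue
    args1(0).Name = "Symbols"
    args1(0).Value = "Ω"
    dispatcher.executeDispatch(document, ".uno:InsertSymbol", "", 0, args1())
end sub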
I am new to PyMC3. Maybe this is a naive question, but I searched a lot and didn't find any explanation of this issue.
Basically, I want to do a linear regression in PyMC3, but training is very slow and the model's performance on the training set is very poor as well. Below is my code:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pymc3 as pm
from theano import shared

X_tr = np.array([ 13.99802212, 13.8512075 , 13.9531636 , 13.97432944,
13.89211468, 13.91357953, 13.95987483, 13.86476587,
13.9501789 , 13.92698143, 13.9653932 , 14.06663115,
13.91697969, 13.99629862, 14.01392784, 13.96495713,
13.98697998, 13.97516973, 14.01048397, 14.05918188,
14.08342002, 13.89350606, 13.81768849, 13.94942447,
13.90465027, 13.93969029, 14.18771189, 14.08631113,
14.03718829, 14.01836206, 14.06758363, 14.05243539,
13.96287123, 13.93011351, 14.01616973, 14.01923812,
13.97424024, 13.9587175 , 13.85669845, 13.97778302,
14.04192138, 13.93775494, 13.86693585, 13.79985956,
13.82679677, 14.06474544, 13.90821822, 13.71648423,
13.78899668, 13.76857337, 13.87201756, 13.86152949,
13.80447525, 13.99609891, 14.0210165 , 13.986906 ,
13.97479211, 14.04562055, 14.03293095, 14.15178043,
14.32413197, 14.2330354 , 13.99247751, 13.92962912,
13.95394525, 13.87888254, 13.82743111, 14.10724699,
14.23638905, 14.15731881, 14.13239278, 14.13386722,
13.91442452, 14.01056255, 14.19378649, 14.22233852,
14.30405399, 14.25880108, 14.23985258, 14.21184303,
14.4443183 , 14.55710331, 14.42102092, 14.29047616,
14.43712609, 14.58666212])
y_tr = np.array([ 13.704, 13.763, 13.654, 13.677, 13.66 , 13.735, 13.845,
13.747, 13.747, 13.606, 13.819, 13.867, 13.817, 13.68 ,
13.823, 13.779, 13.814, 13.936, 13.956, 13.912, 13.982,
13.979, 13.919, 13.944, 14.094, 13.983, 13.887, 13.902,
13.899, 13.881, 13.784, 13.909, 13.99 , 14.06 , 13.834,
13.778, 13.703, 13.965, 14.02 , 13.992, 13.927, 14.009,
13.988, 14.022, 13.754, 13.837, 13.91 , 13.907, 13.867,
14.014, 13.952, 13.796, 13.92 , 14.051, 13.773, 13.837,
13.745, 14.034, 13.923, 14.041, 14.077, 14.125, 13.989,
14.174, 13.967, 13.952, 14.024, 14.171, 14.175, 14.091,
14.267, 14.22 , 14.071, 14.112, 14.174, 14.289, 14.146,
14.356, 14.5 , 14.265, 14.259, 14.406, 14.463, 14.473,
14.413, 14.507])
sns.regplot(x=X_tr, y=y_tr.flatten());
Here I use PYMC3 to train the model:
shA_X = shared(X_tr)
with pm.Model() as linear_model:
alpha = pm.Normal("alpha", mu=14,sd=100)
betas = pm.Normal("betas", mu=0, sd=100, shape=1)
sigma = pm.HalfCauchy('sigma', beta=10, testval=1.)
mu = alpha + betas * shA_X
forecast = pm.Normal("forecast", mu=mu, sd=sigma, observed=y_tr)
step = pm.NUTS()
trace=pm.sample(3000, tune=1000)
and then check the performance:
ppc_w = pm.sample_ppc(trace, 1000, linear_model,
progressbar=False)
plt.plot(ppc_w['forecast'].mean(axis=0),'r')
plt.plot(y_tr, color='k')
Why is the prediction on the training set so poor?
Any suggestions and ideas are appreciated.
This model is doing a fine job - I think the confusion is over how to handle the PyMC3 objects (thank you for the easy-to-work-with example, though!). In general, PyMC3 will be used to quantify the uncertainty in your model.
For example, trace['betas'].mean() is around 0.83 (this will depend on your random seed), while the least squares estimate that, for example, sklearn gives will be 0.826. Similarly, trace['alpha'].mean() gives 2.34, while the "true" value is 2.38.
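For reference, a minimal sketch of that least-squares comparison (assuming scikit-learn is available, with X_tr and y_tr as defined in the question):
from sklearn.linear_model import LinearRegression

# ordinary least squares fit, for comparison with the posterior means
lr = LinearRegression().fit(X_tr.reshape(-1, 1), y_tr)
print(lr.coef_[0], lr.intercept_)  # compare with trace['betas'].mean() and trace['alpha'].mean()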
You can also use the trace to plot many different plausible draws for your line of best fit:
for draw in trace[::100]:
    pred = draw['betas'] * X_tr + draw['alpha']
    plt.plot(X_tr, pred, '--', alpha=0.2, color='grey')
plt.plot(X_tr, y_tr, 'o');
Note that these are drawn from the distribution of "best fits" for your data. You also used sigma to model the noise, and you can plot this value as well:
for draw in trace[::100]:
    pred = draw['betas'] * X_tr + draw['alpha']
    plt.plot(X_tr, pred, '-', alpha=0.2, color='grey')
    plt.plot(X_tr, pred + draw['sigma'], '-', alpha=0.05, color='red')
    plt.plot(X_tr, pred - draw['sigma'], '-', alpha=0.05, color='red')
plt.plot(X_tr, y_tr, 'o');
Using sample_ppc draws observed values from your posterior distribution, so each row of ppc_w['forecast'] is a reasonable way for the data to be generated "next time". You might use that object this way:
ppc_w = pm.sample_ppc(trace, 1000, linear_model,
                      progressbar=False)

for draw in ppc_w['forecast'][::5]:
    sns.regplot(X_tr, draw, scatter_kws={'alpha': 0.005, 'color': 'r'}, fit_reg=False)
sns.regplot(X_tr, y_tr, color='k', fit_reg=False);
I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a relu unit. I am also processing the input data in batches of 128.
The inputs are images of size 28 * 28. In the following code I get the error in line
_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
Here is the code I have written:
#Initialize
batch_size = 128
layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001

#Create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2

#Initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
w = {
    'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
    'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
    'b1': tf.Variable(tf.zeros([hidden_layer1])),
    'b2': tf.Variable(tf.zeros([num_labels]))
}

init = tf.initialize_all_variables()
train_prediction = tf.nn.softmax(model)
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
model = create_model(x, w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

#Process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)
    for epoch in range(num_steps):
        loss = 0
        for i in range(total_batch):
            batch_x, batch_y = train_dataset[epoch * batch_size:(epoch+1) * batch_size, :], train_labels[epoch * batch_size:(epoch+1) * batch_size, :]
            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            loss = loss + c
        loss = loss / total_batch
        if epoch % 500 == 0:
            print("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset})
    print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
This worked for me. I faced this problem on a production server, while on my PC it was running fine: after predicting my data I inserted a K.clear_session() call, and then I loaded the model again.

from keras import backend as K

#Before prediction
K.clear_session()

#After prediction
K.clear_session()
Variable x is not in the same graph as model; try to define all of these in the same graph scope. For example,
# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholder
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create_model(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
If you use the Django development server, just run it with --nothreading, for example:
python manage.py runserver --nothreading
I had the same issue with Flask. Adding the --without-threads flag to flask run, or threaded=False to app.run(), fixed it.
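A minimal sketch of the app.run() variant (assuming a standard Flask app object; the route is hypothetical):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok'

if __name__ == '__main__':
    # threaded=False keeps all requests on one thread,
    # so the TensorFlow graph is always used from the thread that built it
    app.run(threaded=False)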
In my case, I was calling the CNN multiple times in a loop; I fixed my problem by doing the following:
# Declare this as global:
global graph
graph = tf.get_default_graph()

# Then just before you call your model, use this:
with graph.as_default():
    # call your model here
Note: In my case too, the app ran fine the first time and then gave the error above. Using the above fix solved the problem.
Hope that helps.
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise if you run a session outside the scope of its with statement. Consider:

with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)

sess.run(logits, feed_dict=feed_dict)

If logits and feed_dict are defined properly, the first sess.run command will execute normally, but the second will raise the mentioned error.
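A minimal sketch of the fix, using the same names: keep every run call inside the session's with block.
with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)
    # still inside the with block, so the session and its graph are live
    sess.run(logits, feed_dict=feed_dict)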
You can also experience this while working on notebooks hosted on online learning platforms like Coursera, in which case the following code could help get past the issue.
Put this in the topmost block of the notebook file:
from keras import backend as K
K.clear_session()
Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. To fix the issue I just set the API function definitions to async, similar to this:

@app.post('/endpoint_name')
async def endpoint_function():
    # Do stuff here, including possibly (re)loading the model
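For context, a minimal self-contained sketch of that pattern (the endpoint name, model path, and input handling are hypothetical; assumes FastAPI with a Keras model saved to disk):
from fastapi import FastAPI
from tensorflow import keras

app = FastAPI()

@app.post('/predict')
async def predict(payload: dict):
    # (Re)loading inside the async endpoint keeps the model on the serving thread
    model = keras.models.load_model('model.h5')
    features = [payload['features']]  # input preprocessing omitted
    prediction = model.predict(features)
    return {'prediction': prediction.tolist()}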
I'm trying to speed up my network implemented in torch7 but I get an error when I try to use nn.DataParallelTable.
This is what I'm trying to do:
m1, m2 = createModel(8,48), createModel(8,48)
-- 8 = number of GPUs, 48 = hidden units in the last layer
m2:share(m1,'weight', 'bias') ----THE ERROR IS HERE
prl = nn.ParallelTable()
prl:add(m1)
prl:add(m2)
prl:cuda()
mlp = nn.Sequential()
mlp:add(prl)
mlp:cuda()
crit = nn.CosineEmbeddingCriterion():cuda()
Where the functions are:
function createModel(nGPU,bot)
    local features = nn.Concat(2)

    local fb1 = nn.Sequential() -- branch 1
    fb1:add(nn.SpatialConvolution(1,48,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(48,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(128,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,192,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialConvolution(192,128,3,3,1,1,1,1))
    fb1:add(nn.ReLU(true))
    fb1:add(nn.SpatialMaxPooling(2,2,2,2))
    view = 12

    local fb2 = fb1:clone() -- branch 2
    for k,v in ipairs(fb2:findModules('nn.SpatialConvolution')) do
        v:reset() -- reset branch 2's weights
    end

    features:add(fb1) features:add(fb2) features:cuda()

    --------------the error is at this line-----------
    features = makeDataParallel(features, nGPU)

    local classifier = nn.Sequential()
    classifier:add(nn.View(256*view*view))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(256*view*view, 4096))
    classifier:add(nn.Dropout(0.5))
    classifier:add(nn.Linear(4096, 4096))
    classifier:add(nn.Tanh())
    classifier:add(nn.Linear(4096, bot))
    classifier:add(nn.Tanh())
    classifier:cuda()

    local model = nn.Sequential():add(features):add(classifier)
    return model
end
and the other one is:
function makeDataParallel(model, nGPU)
    if nGPU > 1 then
        print('converting module to nn.DataParallelTable')
        assert(nGPU <= cutorch.getDeviceCount(), 'number of GPUs less than nGPU specified')
        local model_single = model
        model = nn.DataParallelTable(1)
        for i=1, nGPU do
            cutorch.setDevice(i)
            model:add(model_single:clone():cuda(), i)
        end
    end
    cutorch.setDevice(1)
    return model
end
The error I get is:
[C]: in function 'error'
...a/torch/install/share/lua/5.1/cunn/DataParallelTable.lua:337: in function 'share'
/home/andrea/torch/install/share/lua/5.1/nn/Container.lua:97: in function 'share'
main.lua:123: in main chunk
[C]: at 0x00406670
Do you know where the error might be? Sorry, I'm kind of new at this and I cannot find a way to figure it out; I am probably getting the net structure wrong. Thanks in advance.