I am working on a robotic arm.
M106 turns the fan on
M17 enables the steppers
M18 disables the steppers
G1 X... Y... Z... moves to the given coordinates
The port is correct, and the terminal prints the "hello" and "hi" messages.
However, the robotic arm is not moving, and I have no clue why this is happening.
Is there some problem with my code?
import serial
import struct
def gcode_encode(gcode):
    gcode += '\r\n'
    return struct.pack(f'<{len(gcode)}s', gcode.encode(encoding='utf-8'))
print("hello")
# ser = serial.Serial('COM7', 9600, timeout=0, parity=serial.PARITY_EVEN, rtscts=1)
ser = serial.Serial()
ser.port = 'COM7'
ser.baudrate = 9600
ser.timeout = 0
ser.open()
g = gcode_encode('M106')
ser.write(b'g')
g = gcode_encode('M17')
ser.write(b'g')
g = gcode_encode('M18')
ser.write(b'g')
g = gcode_encode('G1 X0 Y120 Z120')
ser.write(b'g')
g = gcode_encode('G1 X50 Y120 Z60')
ser.write(b'g')
ser.close()
print("hi")
You are writing only the literal character 'g' to the port, not the contents of the variable g. To send the encoded bytes, pass the variable itself: ser.write(g). The same applies to f'<{len(gcode)}s': the characters in single or double quotes are not a command here, just a string used as a struct format. You also don't need to pack the string at all; encoding it is enough.
Also add some pauses between commands using time.sleep().
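A minimal corrected sketch of the write loop, assuming the controller accepts plain \r\n-terminated G-code at 9600 baud (the 0.5 s pause is a guess; adjust it for your firmware). Note also that the original sequence sends M18 (steppers off) before the G1 moves; here M18 comes last so the motors stay enabled while the arm travels:

import time
import serial

def gcode_encode(gcode):
    # Appending the terminator and encoding is all that's needed; no struct.pack.
    return (gcode + '\r\n').encode('utf-8')

ser = serial.Serial('COM7', 9600, timeout=1)
for cmd in ('M106', 'M17', 'G1 X0 Y120 Z120', 'G1 X50 Y120 Z60', 'M18'):
    ser.write(gcode_encode(cmd))  # write the encoded bytes, not the literal b'g'
    time.sleep(0.5)               # give the controller time to act on each command
ser.close()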
Related
I recently connected a reactor control tower through a serial 'COMMS' port to my computer (serial to USB). It seems to create a connection on the COM4 port (as indicated in the 'Devices and Printers' section of the control panel). However, it always gives me the following message when I try to 'fwrite(s)' or 'fscanf(s)':
Warning: Unsuccessful read: The specified amount of data was not returned within the Timeout period.
% My COM4
s = serial('COM4');
s.BaudRate = 9600;
s.DataBits = 8;
s.Parity = 'none';
s.StopBits = 1;
s.FlowControl = 'none';
s.Terminator = ';';
s.ByteOrder = 'LittleEndian';
s.ReadAsyncMode = 'manual';
% Building the write message.
devID = '02';        % device ID
cmd = 'S';           % command read or write; S for write
readM = cell(961,3); % read at most 961-by-3 values, filling a 961-by-3 matrix in column order
strF = '11';         % pH parameter
strP = '15';         % pH set point
val = '006.8';       % pH set value
msg_ = strcat('!', devID, cmd, strF, strP, val); % build the message string
chksum = dec2hex(mod(sum(msg_), 256));           % checksum, converted to hex
msg = strcat(msg_, ':', char(chksum), ';');
fopen(s);              % connect s to the device; enables reads and writes
fwrite(s, uint8(msg)); % write the binary data, converted to 8-bit unsigned integers (uint8), to the instrument connected to s
reply = fscanf(s);     % read ASCII data from the device connected to the serial port object; for binary data use fread
fclose(s);             % disconnect s from the device
This leads me to believe that the device is connected but is not sending or receiving information, and I am unsure how to configure this, or even how to check whether any communication is occurring between the tower and my computer.
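One way to check whether the device responds at all is to bypass fscanf's wait for the ';' terminator and read raw bytes instead — a minimal diagnostic sketch using the same serial object s and message msg as above (the 1 s pause is an assumption; adjust it for your device):

fopen(s);
fwrite(s, uint8(msg));
pause(1);                             % give the device time to answer
if s.BytesAvailable > 0
    raw = fread(s, s.BytesAvailable); % read whatever arrived, terminator or not
    disp(char(raw'));
else
    disp('No response at all: check wiring, baud rate, and the expected terminator.');
end
fclose(s);

If this prints nothing, the problem is on the wire (cable, baud rate, device protocol) rather than in the read call itself.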
I have some code (code we saw in a class) for a recurrent neural network that reads a given text and tries to produce its own text similar to the example. The code is written in Python and uses the PyTorch library. I wanted to modify it to see whether I could increase its speed by using the GPU instead of the CPU, and I ran some tests on Google Colaboratory. The GPU version of the code runs fine but is about three times slower than the CPU version. I do not know the details of GPU architecture, so I cannot really understand why it is slower. I know that GPUs can do more arithmetic operations per cycle but have more limited memory, so I am curious whether I am having a memory issue. I also tried using CUDA with a generative adversarial network, and in that case it was almost ten times faster. Any tips on this would be welcome.
The code (CUDA version) is below. I am new at this stuff, so sorry if some of the terminology is not correct.
The architecture is input -> encoder -> recurrent network -> decoder -> output.
import torch
import time
import numpy as np
from torch.autograd import Variable
import matplotlib.pyplot as plt
from google.colab import files

# uploading text to Google Colab
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))

# data preprocessing
with open('text.txt', 'r') as file:
    # with open closes the file after we are done with it
    rawtxt = file.read()
rawtxt = rawtxt.lower()

# a function that assigns a number to each unique character in the text
def create_map(rawtxt):
    letters = list(set(rawtxt))
    lettermap = dict(enumerate(letters))  # gives each letter in the list a number
    return lettermap

num_to_let = create_map(rawtxt)
# inverse of num_to_let
let_to_num = dict(zip(num_to_let.values(), num_to_let.keys()))
print(num_to_let)

# turns a text of characters into text of numbers using the mapping
# given by the input mapdict
def maparray(txt, mapdict):
    txt = list(txt)
    for k, letter in enumerate(txt):
        txt[k] = mapdict[letter]
    txt = np.array(txt)
    return txt

X = maparray(rawtxt, let_to_num)  # the data text in numeric format
Y = np.roll(X, -1, axis=0)        # shifted data text in numeric format
X = torch.LongTensor(X)
Y = torch.LongTensor(Y)
# up to here we are done with data preprocessing

# return a random batch for training
# this reads a random piece inside the data text
# with size chunk_size
def random_chunk(chunk_size):
    k = np.random.randint(0, len(X) - chunk_size)
    return X[k:k + chunk_size], Y[k:k + chunk_size]

nchars = len(num_to_let)

# define the recurrent neural network class
class rnn(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.encoder = torch.nn.Embedding(input_size, hidden_size)
        self.rnn = torch.nn.RNN(hidden_size, hidden_size, n_layers, batch_first=True)
        self.decoder = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden):
        x = self.encoder(x.view(1, -1))
        output, hidden = self.rnn(x.view(1, 1, -1), hidden)
        output = self.decoder(output.view(1, -1))
        return output, hidden

    def init_hidden(self):
        return Variable(torch.zeros(self.n_layers, 1, self.hidden_size)).cuda()

# hyper-params
lr = 0.009
no_epochs = 50
chunk_size = 150

myrnn = rnn(nchars, 150, nchars, 1)
myrnn.cuda()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(myrnn.parameters(), lr=lr)

t0 = time.time()
for epoch in range(no_epochs):
    totcost = 0
    generated = ''
    for _ in range(len(X) // chunk_size):
        h = myrnn.init_hidden()
        cost = 0
        x, y = random_chunk(chunk_size)
        x, y = Variable(x).cuda(), Variable(y).cuda()
        for i in range(chunk_size):
            out, h = myrnn.forward(x[i], h)
            _, outl = out.data.max(1)
            letter = num_to_let[outl[0]]
            generated += letter
            cost += criterion(out, y[i])
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()
        totcost += cost
    totcost /= len(X) // chunk_size
    print('Epoch', epoch, 'Avg cost/chunk: ', totcost)
    print(generated[0:750], '\n\n\n')
t1 = time.time()
total = t1 - t0
print('total', total)
# note: the encoder embeds each character into a vector of fixed size
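One thing worth testing (my assumption, not part of the original code): the inner loop calls the network once per character, which launches many tiny CUDA kernels, and their launch overhead can easily dominate on a GPU. A sketch that feeds the whole chunk to torch.nn.RNN in one call, which is where GPUs usually pay off (assumes a recent PyTorch where Tensors carry autograd, so no Variable wrapping is needed):

# Sketch: process one whole chunk per forward call instead of one character at a time.
x, y = random_chunk(chunk_size)
x = x.view(1, -1).cuda()           # shape (batch=1, seq_len=chunk_size)
y = y.cuda()
h = torch.zeros(1, 1, 150).cuda()  # (n_layers, batch, hidden_size)
emb = myrnn.encoder(x)             # (1, chunk_size, hidden_size)
out, h = myrnn.rnn(emb, h)         # a single launch over the whole sequence
logits = myrnn.decoder(out.view(chunk_size, -1))
cost = criterion(logits, y)        # CrossEntropyLoss over all chunk positions at once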
I am trying to build a neural network model with one hidden layer (1024 nodes). The hidden layer is nothing but a ReLU unit. I am also processing the input data in batches of 128.
The inputs are images of size 28 * 28. In the following code I get the error at the line
_, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
Here is the code I have written:
import tensorflow as tf

# Initialize
batch_size = 128
layer1_input = 28 * 28
hidden_layer1 = 1024
num_labels = 10
num_steps = 3001

# Create neural network model
def create_model(inp, w, b):
    layer1 = tf.add(tf.matmul(inp, w['w1']), b['b1'])
    layer1 = tf.nn.relu(layer1)
    layer2 = tf.matmul(layer1, w['w2']) + b['b2']
    return layer2

# Initialize variables
x = tf.placeholder(tf.float32, shape=(batch_size, layer1_input))
y = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
w = {
    'w1': tf.Variable(tf.random_normal([layer1_input, hidden_layer1])),
    'w2': tf.Variable(tf.random_normal([hidden_layer1, num_labels]))
}
b = {
    'b1': tf.Variable(tf.zeros([hidden_layer1])),
    'b2': tf.Variable(tf.zeros([num_labels]))
}
init = tf.initialize_all_variables()

model = create_model(x, w, b)
train_prediction = tf.nn.softmax(model)
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(model, y))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Process
with tf.Session(graph=graph1) as sess:
    tf.initialize_all_variables().run()
    total_batch = int(train_dataset.shape[0] / batch_size)
    for epoch in range(num_steps):
        avg_cost = 0
        for i in range(total_batch):
            batch_x = train_dataset[epoch * batch_size:(epoch + 1) * batch_size, :]
            batch_y = train_labels[epoch * batch_size:(epoch + 1) * batch_size, :]
            _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
            avg_cost = avg_cost + c
        avg_cost = avg_cost / total_batch
        if epoch % 500 == 0:
            print("Epoch :", epoch, ". cost = {:.9f}".format(avg_cost))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            valid_prediction = tf.run(tf_valid_dataset, {x: tf_valid_dataset})
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
            test_prediction = tf.run(tf_test_dataset, {x: tf_test_dataset})
            print("TEST accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
This worked for me. I faced this problem on a production server, but on my PC it was running fine. I call K.clear_session() before and after predicting my data, then load the model again:

from keras import backend as K

# Before prediction
K.clear_session()

# After prediction
K.clear_session()
Variable x is not in the same graph as model; try to define all of these in the same graph scope. For example:

# define a graph
graph1 = tf.Graph()
with graph1.as_default():
    # placeholders
    x = tf.placeholder(...)
    y = tf.placeholder(...)
    # create model
    model = create_model(x, w, b)

with tf.Session(graph=graph1) as sess:
    # initialize all the variables
    sess.run(init)
    # then feed_dict
    # ......
If you use a Django server, just run it with --nothreading, for example:
python manage.py runserver --nothreading
I had the same issue with Flask. Adding the --without-threads flag to flask run, or threaded=False to app.run(), fixed it.
In my case, I was calling the CNN multiple times in a loop; I fixed my problem by doing the following:

# Declare this as global:
global graph
graph = tf.get_default_graph()

# Then just before you call in your model, use this:
with graph.as_default():
    # call your models here

Note: in my case too, the app ran fine the first time and then gave the error above. Using the above fix solved the problem.
Hope that helps.
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise if you run a session outside the scope of its with statement. Consider:

with tf.Session() as sess:
    sess.run(logits, feed_dict=feed_dict)

sess.run(logits, feed_dict=feed_dict)

If logits and feed_dict are defined properly, the first sess.run call will execute normally, but the second will raise the mentioned error.
You can also experience this while working on notebooks hosted on online learning platforms like Coursera. Implementing the following code in the topmost block of the notebook file could help get past the issue:

from keras import backend as K
K.clear_session()
Similar to @javan-peymanfard and @hmadali-shafiee, I ran into this issue when loading the model in an API. I was using FastAPI with uvicorn. To fix the issue I just set the API function definitions to async, similar to this:

@app.post('/endpoint_name')
async def endpoint_function():
    # Do stuff here, including possibly (re)loading the model
I have a dataset that consists of IP addresses, packets transferred, etc. Octave is converting the IP addresses into float values. How do I import the IP addresses as they are, and what data type should be used (strings?)?
How do you import them in Octave? GNU Octave has plenty of functions to load/save data. Which function is easiest/best for you depends on how your IP addresses (IPv4 or IPv6?) are stored in your file.
For example, suppose you have a file called "ips.txt":
192.168.10.4
8.8.8.8
14.32.244.8
You could use this to get a cell:
octave:1> f = fopen("ips.txt", "r");
octave:2> l = textscan(f, "%s");
octave:3> fclose(f);
octave:4>
octave:4> l{1}
ans =
{
[1,1] = 192.168.10.4
[2,1] = 8.8.8.8
[3,1] = 14.32.244.8
}
octave:5>
But perhaps char(fread(..)) or fgetl may be better for you, depending on what you want to do later with the imported IPs...
Addition:
Because you commented that your IP addresses are inside a list of floats and not in a fixed scheme (a fixed scheme would be, for example, "the wanted IPs are at the beginning of the line" or "in the 4th column", or something like that which could be processed with awk), I also add a possibility with regexp.
I created this file ips.txt:
f = fopen("ips.txt", "r");
x = char(fread(f));
fclose(f);
[S, E, TE, M, T, NM, SP] = regexp (x', '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}');
which gives you for M:
M =
{
[1,1] = 192.168.10.4
[1,2] = 8.8.8.8
[1,3] = 14.32.244.8
[1,4] = 127.0.0.1
}
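If you later need the addresses in numeric form without the float mangling, one possibility (my sketch, not part of the original answer) is to split each string into its four octets:

% Convert the cell array of IP strings in M into a numeric n-by-4 matrix,
% one octet per column, so no precision is lost.
ips = M;
octets = zeros(numel(ips), 4);
for k = 1:numel(ips)
    octets(k, :) = sscanf(ips{k}, '%d.%d.%d.%d')';
end
disp(octets)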
-- Andy
After importing this data file from Matlab with scipy.io.loadmat, things appeared to work fine until we tried to calculate the condition number of one of the matrices within.
Here's the minimum amount of code that reproduces it for us:

import scipy.io
import numpy

stuff = scipy.io.loadmat("dati-esercizio1.mat")
numpy.linalg.cond(stuff["A"])
Here's the extended stacktrace courtesy of iPython:
In [3]: numpy.linalg.cond(A)
---------------------------------------------------------------------------
LapackError Traceback (most recent call last)
/snip/<ipython-input-3-15d9ef00a605> in <module>()
----> 1 numpy.linalg.cond(A)
/snip/python2.7/site-packages/numpy/linalg/linalg.py in cond(x, p)
1409 x = asarray(x) # in case we have a matrix
1410 if p is None:
-> 1411 s = svd(x,compute_uv=False)
1412 return s[0]/s[-1]
1413 else:
/snip/python2.7/site-packages/numpy/linalg/linalg.py in svd(a, full_matrices, compute_uv)
1313 work = zeros((lwork,), t)
1314 results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt,
-> 1315 work, -1, iwork, 0)
1316 lwork = int(work[0])
1317 work = zeros((lwork,), t)
LapackError: Parameter a has non-native byte order in lapack_lite.dgesdd
All the obvious ideas (like flattening and reshaping the matrix, or recreating the matrix from scratch by reassigning it element by element) failed. How should I massage the data, then, to make it more agreeable to numpy?
It's a bug, fixed some time ago: https://github.com/numpy/numpy/pull/235
Workaround:
np.linalg.cond(stuff['A'].newbyteorder('='))
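To check whether your own array is affected before applying the workaround, you can inspect the dtype's byte-order flag — a small sketch, assuming the same stuff['A'] matrix as above:

import numpy as np

A = stuff['A']
print(A.dtype.byteorder)  # '=', '<' or '>'; lapack_lite only accepts native order

# newbyteorder('=') relabels the dtype as native without touching the bytes:
print(np.linalg.cond(A.newbyteorder('=')))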
This works for me:
In [33]: stuff = loadmat('dati-esercizio1.mat')
In [34]: a = stuff['A']
In [35]: try: np.linalg.cond(a)
....: except: print "Fail!"
Fail!
In [36]: b = np.array(a, dtype='>d')
In [37]: np.linalg.cond(b)
Out[37]: 62493201976.673141
In [38]: np.all(a == b) # Verify they hold the same data.
Out[38]: True
Apparently it's something wrong with the byte order (endianness?) of each number in the resulting ndarray, and not just with the ndarray object itself.
Something like this, but more elegant, should do the trick:

n, m = A.shape
B = numpy.empty_like(A)
for i in xrange(n):
    for j in xrange(m):
        B[i, j] = float(A[i, j])  # float() yields a native-byte-order value
del A
A = B
print numpy.linalg.cond(A) # 62493210091.354507

(For some reason an in-place replacement still gives that error, so there's something wrong with the byte order of the whole object, too.)
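A more concise equivalent (my suggestion, not from the original answers) lets numpy do the element-wise conversion in one call:

B = A.astype(numpy.float64)  # astype copies into a fresh, native-byte-order array
print numpy.linalg.cond(B)   # should match the loop-based result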