I want to build a function that sends a request from Modbus to serial in hex. I more or less have a working function, but I have two issues.
Issue 1
[b'\x06', b'\x1c', b'\x00!', b'\r', b'\x1e', b'\x1d\xd3', b'\r', b'\n', b'\x1e', b'\x1d']
I can't remove the b'\r', b'\n' parts using the .split('\r\n') method, since the list elements are bytes, not strings.
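A minimal sketch of one way around this, since the frame is a list of bytes objects rather than a string: filter the unwanted elements by value instead of splitting (the cleaned name is just illustrative):

data = [b'\x06', b'\x1c', b'\x00!', b'\r', b'\x1e', b'\x1d\xd3', b'\r', b'\n', b'\x1e', b'\x1d']
# keep every element that is not a bare CR or LF
cleaned = [b for b in data if b not in (b'\r', b'\n')]
print(cleaned)  # [b'\x06', b'\x1c', b'\x00!', b'\x1e', b'\x1d\xd3', b'\x1e', b'\x1d']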
Issue 2
When getting a value from holding register 40 (which holds 33) and using the .to_bytes() method, I keep getting b'\x00!', b'\r', and I'm expecting b'\x21'.
r = client.read_holding_registers(40)
re = r.registers[0]
req = re.to_bytes(2, 'big')
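For what it's worth, b'\x00!' is not a wrong value: it is the two-byte sequence b'\x00\x21', with Python displaying the 0x21 byte as its ASCII character '!'. A one-byte width gives the expected single byte, as this standalone snippet shows:

value = 33
print(value.to_bytes(2, 'big'))  # b'\x00!' -> two bytes, 0x00 0x21
print(value.to_bytes(1, 'big'))  # b'!'     -> a single 0x21 byte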
My functions to generate the request and send it through pyserial:
def scanned_code():
    code = client.read_holding_registers(0)
    # code2 = client.re
    r = code.registers[0]
    return r

def send_request(data):
    """Takes input from create_request() and sends data to the serial port."""
    try:
        for i in range(len(data)):
            serial_client.write(data[i])
            # serial_client.writelines(data[i])
    except Exception:
        print('could not send the packet <<<--------------------')
def create_request(job):
    """Request type is 33, looks for job:
    [06]
    [1c]
    req=33[0d][0a]
    job=30925[0d][0a][1e]
    [1d]
    """
    r = client.read_holding_registers(40)
    re = r.registers[0]
    req = re.to_bytes(2, 'big')
    num = job.to_bytes(2, 'big')
    data = [
        b'\x06',
        b'\x1C',
        req,
        b'\x0D',
        b'\x1E',
        num,
        b'\x0D',
        b'\x0A',
        b'\x1E',
        b'\x1D'
    ]
    print(data)
    # return the assembled frame so send_request() has something to send
    return data
while True:
    # verify order_trigger() is True.
    while order_trigger() != False:
        print('inside while loop')
        # set flag coil back to 0
        reset_trigger()
        # get job no.
        job = scanned_code()
        # check for job no. different than 0
        if job != 0:
            print(scanned_code())
            send_request(create_request(job))
            # send job request to host to get job data
            # send_request()
            # if TRUE send job request by serial to DVI client
            # get job request data
            # translate job request data to modbus
            # send data to plc
        else:
            print('no scanned code')
            break
        time.sleep(INTERNAL_SLEEP_TIME)
    print('outside loop')
    time.sleep(EXTERNAL_SLEEP_TIME)
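As a side note, a minimal sketch of sending the whole frame in one call instead of byte by byte (assuming serial_client is the open pyserial port and data is the list built by create_request()):

frame = b''.join(data)      # concatenate the list into a single bytes object
serial_client.write(frame)  # pyserial accepts one bytes object per write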
As an additional question, is this the proper way of doing things?
I am trying to send and receive pickled versions of a random value generated by the producer. I am using the multiprocess (not '-ing') and ForkingPickler modules to pickle and queue the generated value. However, upon running the sample program below, I get the error below. The basis for using ForkingPickler is to pickle socket objects in the future; I am now testing with a sample version. Is this a feasible way to go about pickling socket objects?
rv = reduce(self.proto)
TypeError: cannot pickle 'memoryview' object
# imports as implied by the question's description (multiprocess, not multiprocessing)
from random import random
from time import sleep
from multiprocess import Process, JoinableQueue
from multiprocess.reduction import ForkingPickler

def producer(queue):
    print('Producer: Running', flush=True)
    # generate work
    for i in range(10):
        # generate a value
        value = random()
        # block
        sleep(value)
        # add to the queue
        fork_value = ForkingPickler.dumps(value)
        queue.put(fork_value)
    # all done
    queue.put(None)
    print(f'Queue Size Consumer: {queue.qsize()}', flush=True)
    print('Producer: Done', flush=True)

# consume work
def consumer(queue):
    print('Consumer: Running', flush=True)
    # consume work
    while True:
        print(f'Queue Size Consumer: {queue.qsize()}', flush=True)
        # get a unit of work
        fork_value = queue.get()
        item = ForkingPickler.loads(fork_value)
        # check for stop
        if item is None:
            break
        # report
        print(f'>got {item}', flush=True)
    # all done
    print('Consumer: Done', flush=True)

# entry point
if __name__ == '__main__':
    # create the shared queue
    queue = JoinableQueue()
    # start the consumer
    consumer_process = Process(target=consumer, args=(queue,))
    consumer_process.start()
    # start the producer
    producer_process = Process(target=producer, args=(queue,))
    producer_process.start()
    # wait for all processes to finish
    consumer_process.join()
    producer_process.join()
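For context on the error itself: in CPython's multiprocessing.reduction, ForkingPickler.dumps() returns buf.getbuffer(), i.e. a memoryview, and queue.put() then pickles that object again when feeding it through the queue, which is what raises TypeError: cannot pickle 'memoryview' object. A minimal sketch of one workaround, assuming the rest of the program stays as above, is to copy the buffer into bytes before enqueuing:

# in producer(): materialize the memoryview returned by dumps() as bytes
fork_value = bytes(ForkingPickler.dumps(value))
queue.put(fork_value)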
I have a program which connects to a bunch of hosts and checks if they are "socket reflectors". Basically, it scans a bunch of IPs and does this:
Connect, then check: if there is data, is that data the same as what I am sending? Yes, return true; no, return false. No data, return false.
For some reason, asyncio is not reliably closing TCP connections after they time out. I attribute this to the fact that a lot of these hosts I am connecting to are god knows what, and maybe just buggy servers. Be that as it may, there must be a way to force a timeout? When I run this, it hangs after a while. Out of 12,978 hosts, about 12,768 of them complete. Then I end up with a bunch of open ESTABLISHED connections! Why does this happen?
I need it to close the connection if nothing happens during the given timeout period.
async def tcp_echo_client(message, host_port, loop, connection_timeout=10):
    """
    Asyncio TCP echo client
    :param message: data to send
    :param host_port: host and port to connect to
    :param loop: asyncio loop
    """
    host_port_ = host_port.split(':')
    try:
        host = host_port_[0]
        port = host_port_[1]
    except IndexError:
        pass
    else:
        fut = asyncio.open_connection(host, port, loop=loop)
        try:
            reader, writer = await asyncio.wait_for(fut, timeout=connection_timeout)
        except asyncio.TimeoutError:
            print('[t] Connection Timeout')
            return 1
        except Exception:
            return 1
        else:
            if args.verbosity >= 1:
                print('[~] Send: %r' % message)
            writer.write(message.encode())
            await writer.drain()
            data = await reader.read(1024)
            await asyncio.sleep(1)
            if data:
                if args.verbosity >= 1:
                    print(f'[~] Host: {host} Received: %r' % data.decode())
                if data.decode() == message:
                    honeypots.append(host_port)
                    writer.close()
                    return 0
                else:
                    filtered_list.append(host_port)
                    print(f'[~] Received: {data.decode()}')
                    writer.close()
                    return 1
            else:
                filtered_list.append(host_port)
                writer.close()
                if args.verbosity > 1:
                    print(f'[~] No data received for {host}')
                return 1
What am I doing wrong?
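One observation, offered as a sketch rather than a verified fix: connection_timeout above only bounds open_connection; the reader.read(1024) that follows has no timeout at all, so a peer that accepts the connection but never sends anything (and never closes) can hold the coroutine, and the socket, open indefinitely. The read can be bounded the same way the connect is:

# bound the read with the same timeout as the connect
try:
    data = await asyncio.wait_for(reader.read(1024), timeout=connection_timeout)
except asyncio.TimeoutError:
    writer.close()
    return 1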
Our Airflow implementation sends out HTTP requests to get services to do tasks. We want those services to let Airflow know when they complete their task, so we are sending a callback URL to the service, which they will call when their task is complete. I can't seem to find a callback sensor, however. How do people handle this normally?
There is no such thing as a callback or webhook sensor in Airflow. The sensor definition follows, as taken from the documentation:
Sensors are a certain type of operator that will keep running until a certain criterion is met. Examples include a specific file landing in HDFS or S3, a partition appearing in Hive, or a specific time of the day. Sensors are derived from BaseSensorOperator and run a poke method at a specified poke_interval until it returns True.
This means that a sensor is an operator that performs polling behavior on external systems. In that sense, your external services should have a way of keeping state for each executed task - either internally or externally - so that a polling sensor can check on that state.
This way you can use, for example, the airflow.operators.HttpSensor, which polls an HTTP endpoint until a condition is met. Or, even better, write your own custom sensor that gives you the opportunity to do more complex processing and keep state.
Otherwise, if the service outputs data to a storage system, you can use a sensor that polls a database, for example. I believe you get the idea.
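For illustration, a minimal sketch of that polling approach with the built-in HttpSensor (Airflow 1.x import path; the connection id, endpoint, and response shape here are all hypothetical):

from airflow.operators.sensors import HttpSensor

wait_for_service = HttpSensor(
    task_id='wait_for_service',
    http_conn_id='my_service',     # hypothetical Airflow connection
    endpoint='tasks/1234/status',  # hypothetical status endpoint
    response_check=lambda response: response.json().get('state') == 'done',
    poke_interval=60,
    dag=dag)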
I'm attaching a custom operator example that I've written for integrating with the Apache Livy API. The sensor does two things: a) submits a Spark job through the REST API and b) waits for the job to be completed.
The operator extends the SimpleHttpOperator and at the same time implements the HttpSensor, thus combining both functionalities.
# Airflow 1.x import paths, as assumed from the original snippet
import json
from datetime import datetime
from time import sleep

from airflow.exceptions import AirflowException, AirflowSensorTimeout
from airflow.hooks.http_hook import HttpHook
from airflow.operators.http_operator import SimpleHttpOperator
from airflow.utils.decorators import apply_defaults

class LivyBatchOperator(SimpleHttpOperator):
    """
    Submits a new Spark batch job through
    the Apache Livy REST API.
    """
    template_fields = ('args',)
    ui_color = '#f4a460'

    @apply_defaults
    def __init__(self,
                 name,
                 className,
                 file,
                 executorMemory='1g',
                 driverMemory='512m',
                 driverCores=1,
                 executorCores=1,
                 numExecutors=1,
                 args=[],
                 conf={},
                 timeout=120,
                 http_conn_id='apache_livy',
                 *arguments, **kwargs):
        """
        If xcom_push is True, response of an HTTP request will also
        be pushed to an XCom.
        """
        super(LivyBatchOperator, self).__init__(
            endpoint='batches', *arguments, **kwargs)
        self.http_conn_id = http_conn_id
        self.method = 'POST'
        self.endpoint = 'batches'
        self.name = name
        self.className = className
        self.file = file
        self.executorMemory = executorMemory
        self.driverMemory = driverMemory
        self.driverCores = driverCores
        self.executorCores = executorCores
        self.numExecutors = numExecutors
        self.args = args
        self.conf = conf
        self.timeout = timeout
        self.poke_interval = 10

    def execute(self, context):
        """
        Executes the task
        """
        payload = {
            "name": self.name,
            "className": self.className,
            "executorMemory": self.executorMemory,
            "driverMemory": self.driverMemory,
            "driverCores": self.driverCores,
            "executorCores": self.executorCores,
            "numExecutors": self.numExecutors,
            "file": self.file,
            "args": self.args,
            "conf": self.conf
        }
        self.log.info(payload)
        headers = {
            'X-Requested-By': 'airflow',
            'Content-Type': 'application/json'
        }
        http = HttpHook(self.method, http_conn_id=self.http_conn_id)
        self.log.info("Submitting batch through Apache Livy API")
        response = http.run(self.endpoint,
                            json.dumps(payload),
                            headers,
                            self.extra_options)
        # parse the JSON response
        obj = json.loads(response.content)
        # get the new batch Id
        self.batch_id = obj['id']
        self.log.info('Batch successfully submitted with Id %s', self.batch_id)
        # start polling the batch status
        started_at = datetime.utcnow()
        while not self.poke(context):
            if (datetime.utcnow() - started_at).total_seconds() > self.timeout:
                raise AirflowSensorTimeout('Snap. Time is OUT.')
            sleep(self.poke_interval)
        self.log.info("Batch %s has finished", self.batch_id)

    def poke(self, context):
        '''
        Checks the batch status; returns True once the
        batch has completed successfully.
        '''
        http = HttpHook(method='GET', http_conn_id=self.http_conn_id)
        self.log.info("Calling Apache Livy API to get batch status")
        # call the API endpoint
        endpoint = 'batches/' + str(self.batch_id)
        response = http.run(endpoint)
        # parse the JSON response
        obj = json.loads(response.content)
        # get the current state of the batch
        state = obj['state']
        # check the batch state
        if (state == 'starting') or (state == 'running'):
            # if state is 'starting' or 'running'
            # signal a new polling cycle
            self.log.info('Batch %s has not finished yet (%s)',
                          self.batch_id, state)
            return False
        elif state == 'success':
            # if state is 'success' exit
            return True
        else:
            # for all other states, raise an exception
            # and terminate the task
            raise AirflowException(
                'Batch ' + str(self.batch_id) + ' failed (' + state + ')')
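For completeness, a minimal sketch of wiring the operator into a DAG; the dag object and every value below are hypothetical:

livy_batch = LivyBatchOperator(
    task_id='spark_batch',
    name='my_batch',
    className='com.example.Main',    # hypothetical Spark main class
    file='hdfs:///jobs/my-job.jar',  # hypothetical application jar
    timeout=600,
    dag=dag)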
Hope this will help you a bit.
I met an error when running the code at the bottom. It's like a simple FTP.
I use Python 2.6.6 and CentOS release 6.8.
On most Linux servers it gets the right results, like this (I'm very sorry, I have just signed up and couldn't …):
Client:
[root@Test ftp]# python client.py
path:put|/home/aaa.txt
Server:
[root@Test ftp]# python server.py
connected...
pre_data:put|aaa.txt|4
cmd: put
file_name: aaa.txt
file_size: 4
upload succeeded.
But I get errors on some servers (such as my own VM on my PC). I have done lots of tests (Python 2.6/2.7, CentOS 6.5/6.7) and found the error is not caused by them. Here is the error information:
[root@Lewis-VM ftp]# python server.py
connected...
pre_data:put|aaa.txt|7sdfsdf ### Wrong result here: "sdfsdf" is the content of /home/aaa.txt and shouldn't have been received into 'file_size'; this causes the ValueError below
cmd: put
file_name: aaa.txt
file_size: 7sdfsdf
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 10699)
Traceback (most recent call last):
File "/usr/lib64/python2.6/SocketServer.py", line 570, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib64/python2.6/SocketServer.py", line 332, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib64/python2.6/SocketServer.py", line 627, in __init__
self.handle()
File "server.py", line 30, in handle
if int(file_size)>recv_size:
ValueError: invalid literal for int() with base 10: '7sdfsdf\n'
What's more, I found that if I insert a time.sleep(1) between sk.send(cmd+"|"+file_name+'|'+str(file_size)) and sk.send(data) in client.py, the error disappears. As I said, I did tests on different systems and Python versions, and the error is not caused by them. So I guess it is caused by some system config? I have checked socket.send() and socket.recv() on python.org but found nothing helpful. Could somebody explain why this happens?
The code is here:
#!/usr/bin/env python
#coding:utf-8
################
#This is server#
################
import SocketServer
import os

class MyServer(SocketServer.BaseRequestHandler):
    def handle(self):
        base_path = '/home/ftp/file'
        conn = self.request
        print 'connected...'
        while True:
            ##### receive pre_data: we should get data like 'put|/home/aaa|7'
            pre_data = conn.recv(1024)
            print 'pre_data:' + pre_data
            cmd,file_name,file_size = pre_data.split('|')
            print 'cmd: ' + cmd
            print 'file_name: ' + file_name
            print 'file_size: ' + file_size
            recv_size = 0
            file_dir = os.path.join(base_path,file_name)
            f = file(file_dir,'wb')
            Flag = True
            #### receive 1024 bytes each time
            while Flag:
                if int(file_size)>recv_size:
                    data = conn.recv(1024)
                    recv_size+=len(data)
                else:
                    recv_size = 0
                    Flag = False
                    continue
                f.write(data)
            print 'upload succeeded.'
            f.close()

instance = SocketServer.ThreadingTCPServer(('127.0.0.1',9999),MyServer)
instance.serve_forever()
#!/usr/bin/env python
#coding:utf-8
################
#This is client#
################
import socket
import sys
import os

ip_port = ('127.0.0.1',9999)
sk = socket.socket()
sk.connect(ip_port)
while True:
    input = raw_input('path:')
    ##### we should input like 'put|/home/aaa.txt'
    cmd,path = input.split('|')
    file_name = os.path.basename(path)
    file_size = os.stat(path).st_size
    sk.send(cmd+"|"+file_name+'|'+str(file_size))
    send_size = 0
    f = file(path,'rb')
    Flag = True
    ##### read 1024 bytes and send it to server each time
    while Flag:
        if send_size + 1024 > file_size:
            data = f.read(file_size-send_size)
            Flag = False
        else:
            data = f.read(1024)
            send_size += 1024
        sk.send(data)
    f.close()
sk.close()
TCP is a stream of data. That is the problem: TCP does not need to preserve message boundaries. So when a client calls something like
connection.send("0123456789")
connection.send("ABCDEFGHIJ")
then a naive server like
while True:
    data = conn.recv(1024)
    print data + "_"
may print any of:
0123456789_ABCDEFGHIJ_
0123456789ABCDEFGHIJ_
0_1_2_3_4_5_6_7_8_9_A_B_C_D_E_F_G_H_I_J_
The server has no way to recognize how many sends the client called, because the TCP stack on the client side just inserted the data into a stream, and the server must be able to process the data even when it arrives in a different number of buffers than the client used.
Your server must contain logic to separate the header and the data. All application protocols based on TCP use some mechanism to identify application-level boundaries. For example, HTTP separates headers and body with an empty line and announces the body length in a separate header.
Your program works correctly when the server receives the header with the command, name, and size in a separate buffer, but it fails when the client is fast enough to push the data into the stream quickly and the server reads the header and the data in one chunk.
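To make that concrete, here is a minimal sketch of one common framing scheme: a fixed-size length prefix in front of every message (the function names are illustrative, not from the question's code):

import struct

def send_msg(sock, data):
    # prefix each message with its length as a 4-byte big-endian integer
    sock.sendall(struct.pack('>I', len(data)) + data)

def recv_exactly(sock, n):
    # loop until exactly n bytes have been read from the stream
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    # read the length prefix first, then exactly that many payload bytes
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)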
Trying to create a web front end for a Python 3 backed application. The application will require bi-directional streaming, which sounded like a good opportunity to look into websockets.
My first inclination was to use something already existing, and the example applications from mod-pywebsocket have proved valuable. Unfortunately their API doesn't appear to lend itself easily to extension, and it is Python 2.
Looking around the blogosphere, many people have written their own websocket servers for earlier versions of the websocket protocol; most don't implement the security key hash, so they don't work.
Reading RFC 6455 I decided to take a stab at it myself and came up with the following:
#!/usr/bin/env python3
"""
A partial implementation of RFC 6455
http://tools.ietf.org/pdf/rfc6455.pdf
Brian Thorne 2012
"""

import socket
import threading
import time
import base64
import hashlib

def calculate_websocket_hash(key):
    magic_websocket_string = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
    result_string = key + magic_websocket_string
    sha1_digest = hashlib.sha1(result_string).digest()
    response_data = base64.encodestring(sha1_digest)
    response_string = response_data.decode('utf8')
    return response_string

def is_bit_set(int_type, offset):
    mask = 1 << offset
    return not 0 == (int_type & mask)

def set_bit(int_type, offset):
    return int_type | (1 << offset)

def bytes_to_int(data):
    # note big-endian is the standard network byte order
    return int.from_bytes(data, byteorder='big')

def pack(data):
    """pack bytes for sending to client"""
    frame_head = bytearray(2)
    # set final fragment
    frame_head[0] = set_bit(frame_head[0], 7)
    # set opcode 1 = text
    frame_head[0] = set_bit(frame_head[0], 0)
    # payload length
    assert len(data) < 126, "haven't implemented that yet"
    frame_head[1] = len(data)
    # add data
    frame = frame_head + data.encode('utf-8')
    print(list(hex(b) for b in frame))
    return frame

def receive(s):
    """receive data from client"""
    # read the first two bytes
    frame_head = s.recv(2)
    # very first bit indicates if this is the final fragment
    print("final fragment: ", is_bit_set(frame_head[0], 7))
    # bits 4-7 are the opcode (0x01 -> text)
    print("opcode: ", frame_head[0] & 0x0f)
    # mask bit, from client will ALWAYS be 1
    assert is_bit_set(frame_head[1], 7)
    # length of payload
    # 7 bits, or 7 bits + 16 bits, or 7 bits + 64 bits
    payload_length = frame_head[1] & 0x7F
    if payload_length == 126:
        raw = s.recv(2)
        payload_length = bytes_to_int(raw)
    elif payload_length == 127:
        raw = s.recv(8)
        payload_length = bytes_to_int(raw)
    print('Payload is {} bytes'.format(payload_length))
    """masking key
    All frames sent from the client to the server are masked by a
    32-bit nonce value that is contained within the frame
    """
    masking_key = s.recv(4)
    print("mask: ", masking_key, bytes_to_int(masking_key))
    # finally get the payload data:
    masked_data_in = s.recv(payload_length)
    data = bytearray(payload_length)
    # The ith byte is the XOR of byte i of the data with
    # masking_key[i % 4]
    for i, b in enumerate(masked_data_in):
        data[i] = b ^ masking_key[i % 4]
    return data

def handle(s):
    client_request = s.recv(4096)
    # get to the key
    for line in client_request.splitlines():
        if b'Sec-WebSocket-Key:' in line:
            key = line.split(b': ')[1]
            break
    response_string = calculate_websocket_hash(key)
    header = '''HTTP/1.1 101 Switching Protocols\r
Upgrade: websocket\r
Connection: Upgrade\r
Sec-WebSocket-Accept: {}\r
\r
'''.format(response_string)
    s.send(header.encode())
    # this works
    print(receive(s))
    # this doesn't
    s.send(pack('Hello'))
    s.close()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 9876))
s.listen(1)
while True:
    t, _ = s.accept()
    threading.Thread(target=handle, args=(t,)).start()
Using this basic test page (which works with mod-pywebsocket):
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Web Socket Example</title>
    <meta charset="UTF-8">
  </head>
  <body>
    <div id="serveroutput"></div>
    <form id="form">
      <input type="text" value="Hello World!" id="msg" />
      <input type="submit" value="Send" onclick="sendMsg()" />
    </form>
    <script>
      var form = document.getElementById('form');
      var msg = document.getElementById('msg');
      var output = document.getElementById('serveroutput');
      var s = new WebSocket("ws://"+window.location.hostname+":9876");
      s.onopen = function(e) {
        console.log("opened");
        out('Connected.');
      }
      s.onclose = function(e) {
        console.log("closed");
        out('Connection closed.');
      }
      s.onmessage = function(e) {
        console.log("got: " + e.data);
        out(e.data);
      }
      form.onsubmit = function(e) {
        e.preventDefault();
        msg.value = '';
        window.scrollTop = window.scrollHeight;
      }
      function sendMsg() {
        s.send(msg.value);
      }
      function out(text) {
        var el = document.createElement('p');
        el.innerHTML = text;
        output.appendChild(el);
      }
      msg.focus();
    </script>
  </body>
</html>
This receives data and demasks it correctly, but I can't get the transmit path to work.
As a test to write "Hello" to the socket, the program above calculates the bytes to be written to the socket as:
['0x81', '0x5', '0x48', '0x65', '0x6c', '0x6c', '0x6f']
These match the hex values given in section 5.7 of the RFC. Unfortunately, the frame never shows up in Chrome's Developer Tools.
Any idea what I'm missing? Or a currently working Python3 websocket example?
When I try talking to your python code from Safari 6.0.1 on Lion I get
Unexpected LF in Value at ...
in the Javascript console. I also get an IndexError exception from the Python code.
When I talk to your python code from Chrome Version 24.0.1290.1 dev on Lion I don't get any Javascript errors. In your Javascript the onopen() and onclose() methods are called, but not onmessage(). The python code doesn't throw any exceptions and appears to have received the message and sent its response, i.e. exactly the behavior you're seeing.
Since Safari didn't like the trailing LF in your header, I tried removing it, i.e.
header = '''HTTP/1.1 101 Switching Protocols\r
Upgrade: websocket\r
Connection: Upgrade\r
Sec-WebSocket-Accept: {}\r
'''.format(response_string)
When I make this change Chrome is able to see your response message i.e
got: Hello
shows up in the javascript console.
Safari still doesn't work. Now it raises a different issue when I attempt to send a message.
websocket.html:36 INVALID_STATE_ERR: DOM Exception 11: An attempt was made to use an object that is not, or is no longer, usable.
None of the javascript websocket event handlers ever fire and I'm still seeing the IndexError exception from python.
In conclusion: your Python code wasn't working with Chrome because of an extra LF in your header response. There's still something else going on, because the code that works with Chrome doesn't work with Safari.
Update
I've worked out the underlying issue and now have the example working in Safari and Chrome.
base64.encodestring() always adds a trailing \n to its return value. This is the source of the LF that Safari was complaining about.
Calling .strip() on the return value of calculate_websocket_hash and using your original header template works correctly on Safari and Chrome.
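Concretely, a minimal sketch of that fix applied to the original function:

def calculate_websocket_hash(key):
    magic_websocket_string = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
    sha1_digest = hashlib.sha1(key + magic_websocket_string).digest()
    # base64.encodestring() appends a trailing '\n'; strip it off
    return base64.encodestring(sha1_digest).decode('utf8').strip()

(On Python 3.9+ base64.encodestring() no longer exists; base64.b64encode() returns the same digest without the trailing newline.)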