Raspberry Pi Pico + LoRa Module Peer-to-Peer Send Message Problem - MicroPython

I have two Raspberry Pi Picos, each with an SX126X LoRa module attached. I found a great library and have the ping-pong example working with no problem.
I want to send a JSON string from one module to the other. sx.send(b'Ping') sends the string b'Ping' to the remote device.
Yes, including the b and the single quotes. If I remove the b it fails.
What I want to do is save a JSON string to a variable and then send the variable in the sx.send() command. It seems the "b" is somehow required, but I can't figure out how to swap the literal 'Ping' for a variable.
I had a stab at it, but MicroPython is not really my thing (yet). If anyone has some ideas I could try, I'd appreciate it.
Let me know if you require extra details.
Thanks
David
UPDATE
Here is an extract from main.py
while True:
    sx.send(b'Ping')
    time.sleep(10)
SX1262.py
def send(self, data):
    if not self.blocking:
        return self._startTransmit(data)
    else:
        return self._transmit(data)

def _startTransmit(self, data):
    if isinstance(data, bytes) or isinstance(data, bytearray):
        pass
    else:
        return 0, ERR_INVALID_PACKET_TYPE

The b in sx.send(b'Ping') means that you are sending literal bytes, as opposed to a string. The b prefix is Python notation, not part of the data; what is actually sent over the air is just Ping. When the other device receives it, it is stored as bytes, and when displayed:
>>> packet = b'Ping'
>>> packet
b'Ping'
>>> len(packet)
4
As you can see, there are only 4 bytes in packet; the b prefix and the quotes are not part of the transmitted data.
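So to send a variable instead of the literal, you only have to hand send() bytes rather than a str. A minimal sketch (the variable name is just for illustration):
msg = "Ping"            # an ordinary string stored in a variable
sx.send(msg.encode())   # encode() turns the string into bytes, which send() accepts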
To send a JSON string you can do something like this:
>>> import json
>>> p = '{"name": "Bob", "languages": ["Python", "Java"]}'
>>> j = json.loads(p)
>>> sx.send(json.dumps(j).encode())
json.dumps(j) turns the object j back into a JSON string, and encode() turns that string into bytes, which is exactly what send() expects. (In this example p is already a JSON string, so loads() followed by dumps() is just a round trip; in practice you would build a dict and json.dumps() it directly.)
On the other side, on device 2, you can json.loads() the received bytes (decoding them to a str first if your json module requires it) to get the object back.
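A minimal sketch of the receiving side (packet here just stands for whatever your library's receive call hands you; the exact call depends on the library):
import json
packet = b'{"name": "Bob", "languages": ["Python", "Java"]}'   # bytes received over LoRa
j = json.loads(packet.decode())                                # back to a Python dict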
Getting the values back to variables is easy:
>>> j['name']
'Bob'
>>> j['languages']
['Python', 'Java']

Related

How to access particular registers using PyModbus RTU?

I am new to Python and Modbus, and I have spent a vast amount of time researching, gathering information, and experimenting before asking what may be an easy problem to solve. If anyone could point me in the right direction I would be truly grateful.
Essentially I am attempting to read a register of a device, using the vendor's Modbus map provided to me... I can establish a connection (I think), but I am having issues reading the register I want.
from pymodbus.client.sync import ModbusSerialClient

# Connection to device
client = ModbusSerialClient(
    port="COM7",
    startbit=1,
    databits=8,
    parity="N",
    stopbits=2,
    errorcheck="crc",
    baudrate=38400,
    method="RTU",
    timeout=3,
)
if client.connect():  # Connection to slave device
    print("Connection Successful")
    register = client.read_coils(54, 2)
    print(register)
    client.close()
else:
    print("Failed to connect to Modbus device")
And this result is received.
Connection Successful
Modbus Error: [Input/Output] Modbus Error: [Invalid Message] No response received, expected at least 2 bytes (0 received)
The register address = 54, words = 1 and data type = INT16.
I am probably going about this all wrong, however, a push in the right direction would be appreciated.
So with a little more research I was able to access the data required.
from pymodbus.client.sync import ModbusSerialClient

client = ModbusSerialClient(
    port="COM7",
    startbit=1,
    databits=8,
    parity="N",
    stopbits=2,
    errorcheck="crc",
    baudrate=38400,
    method="RTU",
    timeout=3,
)

if client.connect():  # Trying to connect to the Modbus slave
    print("Connection Successful")
    # Read holding register
    res = client.read_holding_registers(address=53, count=1, unit=1)
    # Where "address" is the register address
    # Where "count" is the number of registers to read
    # Where "unit" is the slave address, found in vendor documentation
Output: res contains the response with the holding register value.
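To actually pull the numeric value out of the response (and to catch Modbus exception responses), something along these lines should work with pymodbus; treat it as a sketch rather than the poster's verified code:
if res.isError():
    print("Modbus error:", res)
else:
    value = res.registers[0]           # the single INT16 register that was requested
    print("Register 54 value:", value)
client.close()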

Where is the payload structure of a netlink message defined for a NETLINK_XFRM socket?

I am running the strongSwan daemon to perform IKEv2 messaging.
I wrote some Python code to be notified every time an xfrm change happens.
The socket is created like so:
my_socket = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, socket.NETLINK_XFRM)
I receive and decode the nlmsghdr structure defined in ./uapi/linux/netlink.h like so:
while True:
    data = my_socket.recv(65535)
    msg_len, msg_type, flags, seq, pid = struct.unpack("=LHHLL", data[:16])
    print msg_type
This works fine; I get the message type every time a new SA is made, updated, or deleted.
Now I attempt to decode the payload of this message, but I cannot locate the structure in the kernel sources to decode it with.
There is a file called uapi/linux/xfrm.h, but I am not sure if this file contains the payload structure.
Can someone share where the payload structure is defined for xfrm netlink messages?
uapi/linux/xfrm.h is indeed the file you need. struct xfrm_usersa_info is the struct you're looking for.
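As a rough sketch of where that struct sits, building on the unpacking code in the question (for a single message per recv, the payload starts right after the 16-byte nlmsghdr; for SA messages it is a struct xfrm_usersa_info, optionally followed by netlink attributes, laid out as in uapi/linux/xfrm.h):
data = my_socket.recv(65535)
msg_len, msg_type, flags, seq, pid = struct.unpack("=LHHLL", data[:16])
payload = data[16:msg_len]   # raw bytes of struct xfrm_usersa_info (plus any attributes)
# unpack the payload field by field following the layout in uapi/linux/xfrm.h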

Parsing ByteString from Socket fails

We are writing a message broker in Haskell (HMB). Therefore messages have to be parsed (Data.Binary) after they are received from the socket (Network.Socket). We've been testing on loopback (localhost) so far, for producing and parsing messages, and this worked quite well. If we benchmark by producing messages from another machine, we face problems: suddenly the parser does not have enough bytes to parse.
The first 4 bytes of each message define the length of the message and thus describe the message to be parsed. As hinted above, we do parsing with Data.Binary - so this is lazy. For testing purposes we switched parsing of the first 4 bytes to strict by using the cereal library. This showed the same problem. We then tried to parse the requests completely with cereal only, and the problem remains.
In the code you'll see that we do threading. However, we also tried without a channel (single-threaded setup), but this didn't solve the problem either.
Here is a part of the code (Thread1) where the received bytes are written to a channel to be further consumed/parsed. (As mentioned, nothing changes if we omit channeling and directly parse input):
runConnection :: (Socket, SockAddr) -> RequestChan -> Bool -> IO()
runConnection conn chan False = return ()
runConnection conn chan True = do
  r <- recvFromSock conn
  case (r) of
    Left e -> do
      handleSocketError conn e
      runConnection conn chan False
    Right input -> do
      threadDelay 5000 -- THIS FIXES THE PROBLEM!?
      writeToReqChan conn chan input
      runConnection conn chan True
Here is the part (Thread2) where the input is being parsed:
runApiHandler :: RequestChan -> ResponseChan -> IO()
runApiHandler rqChan rsChan = do
  (conn, req) <- readChan rqChan
  case readRequest req of -- readRequest IS THE PARSER
    Left (bs, bo, e)   -> handleHandlerError conn $ ParseRequestError e
    Right (bs, bo, rm) -> do
      res <- handleRequest rm
      case res of
        Left e   -> handleHandlerError conn e
        Right bs -> writeToResChan conn rsChan bs
  runApiHandler rqChan rsChan
Now I have figured out that if the parsing is delayed a bit (see threadDelay in the first code block), everything works fine. Which basically means the parser doesn't wait for the bytes received from the socket.
Why is that? Why does the parser not wait for the socket to have enough bytes? Is there a general mistake in our setup?
I would bet that the problem has nothing to do with the parser but is instead due to the blocking semantics of UNIX sockets.
While a loopback interface will likely pass the packet directly from the sender to the receiver, an Ethernet interface may need to break up the packet to fit in the Maximum Transmission Unit (MTU) of the link. This is known as packet fragmentation.
The len argument to the recv system call is merely the upper bound on the received length (e.g. the size of the target buffer); the call may produce less data than you ask for. To quote the manpage,
If no messages are available at the socket, the receive calls wait for a
message to arrive, unless the socket is nonblocking (see fcntl(2)), in which
case the value -1 is returned and the external variable errno is set to
EAGAIN or EWOULDBLOCK. The receive calls normally return any data
available, up to the requested amount, rather than waiting for receipt of
the full amount requested.
For this reason, you may need multiple recv calls to retrieve the entire packet. Your example works if you delay the recv as the operating system can reassemble the original packet since all fragments have arrived by the time it is requested.
As meiersi pointed out, a variety of streaming I/O libraries have been developed in the Haskell world to solve this problem, among others; these include pipes, conduit, and io-streams. Depending upon your goals, this may be a natural way to handle this issue.
You might want to try the socket support in conduit-extra combined with binary-conduit to properly handle the parsing of the chunked streaming, which happens due to the reasons pointed out by bgamari.
First of all, consider yourself lucky to observe this. On many platforms perhaps only one out of a thousand packets exhibits this behaviour, causing a lot of such (sorry) bad networking code to fail rarely and randomly.
The problem is that you start processing before the data is ready. Instead of the threadDelay (which introduces a permanent delay and might not be long enough in all cases), the solution is to make sure you have at least one item/message/packet to process before you start processing it. Your protocol, where the first 32-bit word contains the length, is perfect for this. Read data until you have at least 4 bytes (the length). Then read data until you have the required number of bytes. If any call to recvFromSock returns less than the required number, call it again to get more. Remember to also handle the case of 0 bytes: it means the other party closed the connection.
I have implemented this for a similar protocol (SMPP, packets also starts with the length) and it works perfectly.
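To make that read-exactly-N idea concrete, here is a minimal sketch in Python (only to illustrate the framing logic, assuming a big-endian 32-bit length prefix; the same structure applies to recvFromSock in the Haskell code above):
import struct

def recv_exact(sock, n):
    # Keep calling recv until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:                        # 0 bytes: the other party closed the connection
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_message(sock):
    length = struct.unpack(">I", recv_exact(sock, 4))[0]   # first 4 bytes: message length
    return recv_exact(sock, length)                        # then read the full message body

Only when recv_message returns do you hand the complete bytes to the parser.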

How to calculate the offset value for the Jetty 8 web socket sendMessage method

I have a project where a proxy sends Base64-encoded messages to the server. The server then decodes the messages into a byte array and sends it to the client. The Jetty 8 WebSocket.Connection sendMessage(data, offset, length) method expects an offset.
Question: how does one determine this offset when decoding from Base64?
Also is it okay to assume that the length parameter is the converted byte array's length?
def onMessage(message: String) {
  println("From client: " + message)
  val decoded = Base64.decodeBase64(message)
  println("Decoded and sent to the client: " + decoded)
  serverSocket.connection.sendMessage(decoded, offset???, decoded.length)
}
tl;dr: It's an offset into decoded, supporting the case that only a part of an array is to be sent. Here, this parameter probably should be 0.
The link you've put into your message points to API v7, not 8.
Grep coding for org.eclipse.jetty.websocket.connection, I've followed this one in jetty-websocket. Then you can find types implementing that method -- e.g. WebSocketConnectionD00. You see your data mysteriously disappearing into the addFrame method of another interface, WebSocketGenerator. Now here's finally the real meat.
This is pretty low-level here, you can find the data being put into yet another abstraction:
private Buffer _buffer;
...
_buffer.put(content, offset + (payload - remaining), chunk);
One more down, and here's the info. Wait... no. Either grepcode is showing wrong data here or the devs copy/pasted the comments from the void put(byte b) to the two methods below, just adapting the comment on the returned value.
One more down, and you finally see what's happening:
System.arraycopy(b, offset, dst_array, index, length);
, where b is the decoded byte[]. Unfortunately, using grepcode, one cannot dive into that implementation.
Note that I don't use Jetty. Just wanted to look into some other code...

How much data to receive from server in SSL handshake before calling InitializeSecurityContext?

In our Windows C++ application I am using InitializeSecurityContext() on the client side to open an SChannel connection to a server which is running the stunnel SSL proxy. My code now works, but only with a hack I would like to eliminate.
I started with this sample code: http://msdn.microsoft.com/en-us/library/aa380536%28v=VS.85%29.aspx
In the sample code, look at SendMsg and ReceiveMsg. The first 4 bytes of any message sent or received indicates the message length. This is fine for the sample, where the server portion of the sample conforms to the same convention.
stunnel does not seem to use this convention. When the client is receiving data during the handshake, how does it know when to stop receiving and make another call to InitializeSecurityContext()?
This is how I structured my code, based on what I could glean from the documentation:
1. Call InitializeSecurityContext, which returns an output buffer
2. Send the output buffer to the server
3. Receive the response from the server
4. Call InitializeSecurityContext(server_response), which returns an output buffer
5. If SEC_E_INCOMPLETE_MESSAGE, go back to step 3; if SEC_I_CONTINUE_NEEDED, go back to step 2
I expected InitializeSecurityContext in step 4 to return SEC_E_INCOMPLETE_MESSAGE if not enough data was read from the server in step 3. Instead, I get SEC_I_CONTINUE_NEEDED but an empty output buffer. I have experimented with a few ways to handle this case (e.g. go back to step 3), but none seemed to work and more importantly, I do not see this behavior documented.
In step 3 if I add a loop that receives data until a timeout expires, everything works fine in my test environment. But there must be a more reliable way.
What is the right way to know how much data to receive in step 3?
SChannel is different from the Negotiate security package. You need to receive at least 5 bytes, which is the SSL/TLS record header size:
struct {
    ContentType type;
    ProtocolVersion version;
    uint16 length;
    opaque fragment[TLSPlaintext.length];
} TLSPlaintext;
ContentType is 1 byte, ProtocolVersion is 2 bytes, and then you have a 2-byte record length. Once you read those 5 bytes, SChannel will return SEC_E_INCOMPLETE_MESSAGE and will tell you exactly how many more bytes to expect:
SEC_E_INCOMPLETE_MESSAGE
Data for the whole message was not read from the wire.
When this value is returned, the pInput buffer contains a SecBuffer structure with a BufferType member of SECBUFFER_MISSING. The cbBuffer member of SecBuffer contains a value that indicates the number of additional bytes that the function must read from the client before this function succeeds.
Once you get this output, you know exactly how much to read from the network.
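As an aside, the record header itself already encodes how much more there is to read, so you can sanity-check what SChannel reports. A quick illustration in Python (purely to show the framing; the real code of course stays in C++ and relies on SECBUFFER_MISSING):
import struct

def tls_record_body_length(header5):
    # First 5 bytes of a TLS record: 1-byte content type, 2-byte protocol version,
    # and a 2-byte big-endian length covering the remainder of the record.
    content_type, version, length = struct.unpack(">BHH", header5)
    return length   # how many more bytes belong to this record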
I found the problem.
I found this sample:
http://www.codeproject.com/KB/IP/sslsocket.aspx
I was missing the handling of SECBUFFER_EXTRA (line 987 SslSocket.cpp)
The SChannel SSP returns SEC_E_INCOMPLETE_MESSAGE from both InitializeSecurityContext and DecryptMessage when not enough data is read.
A SECBUFFER_MISSING message type is returned from DecryptMessage with a cbBuffer value of the amount of desired bytes.
But in practice, I did not use the "missing data" value. The documentation indicates the value is not guaranteed to be correct, and is only a hint developers can use to reduce calls.
From the InitializeSecurityContext MSDN doc:
While this number is not always accurate, using it can help improve performance by avoiding multiple calls to this function.
So I unconditionally read more data into the same buffer whenever SEC_E_INCOMPLETE_MESSAGE was returned, reading multiple bytes at a time from the socket.
Some extra input buffer management was required to append more read data and keep the lengths right. DecryptMessage will modify the input buffers' cbBuffer properties when it fails, which surprised me.
Printing out the buffers and return result after calling InitializeSecurityContext shows the following:
read socket:bytes(5).
InitializeSecurityContext:result(80090318). // SEC_E_INCOMPLETE_MESSAGE
inBuffers[0]:type(2),bytes(5).
inBuffers[1]:type(0),bytes(0). // no indication of missing data
outBuffer[0]:type(2),bytes(0).
read socket:bytes(74).
InitializeSecurityContext:result(00090312). // SEC_I_CONTINUE_NEEDED
inBuffers[0]:type(2),bytes(79). // notice 74 + 5 from before
inBuffers[1]:type(0),bytes(0).
outBuffer[0]:type(2),bytes(0).
And for the DecryptMessage function, input is always in dataBuf[0], with the rest zeroed.
read socket:bytes(5).
DecryptMessage:len 5, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 8 // notice input buffer modified
DecryptMessage:dataBuf[1].BufferType 4, 8
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(8).
DecryptMessage:len 13, bytes(17030201). // SEC_E_INCOMPLETE_MESSAGE
DecryptMessage:dataBuf[0].BufferType 4, 256
DecryptMessage:dataBuf[1].BufferType 4, 256
DecryptMessage:dataBuf[2].BufferType 0, 0
DecryptMessage:dataBuf[3].BufferType 0, 0
read socket:bytes(256).
DecryptMessage:len 269, bytes(17030201). // SEC_E_OK
We can see my TLS server peer is sending the TLS record header (5 bytes) in one packet, then 8 more bytes of the Application Data record, then the rest of the Application Data payload in a third packet.
You must read some arbitrary amount the first time, and when you receive SEC_E_INCOMPLETE_MESSAGE, you must look in the pInput SecBufferDesc for a SECBUFFER_MISSING and read its cbBuffer to find out how many bytes you are missing.
This problem was doing my head in today, as I was attempting to modify my handshake myself, and having the same problem the other commenters were having, i.e. not finding a SECBUFFER_MISSING. I do not want to interpret the TLS packet myself, and I do not want to unconditionally read some unspecified number of bytes. I found the solution to that, so I'm going to address their comments, too.
The confusion here is because the API is confusing. Ordinarily, to read the output of InitializeSecurityContext, you look at the content of the pOutput parameter (as defined in the signature). It's that SecBufferDesc that contains the SECBUFFER_TOKEN etc to pass to AcceptSecurityContext.
However, in the case where InitializeSecurityContext returns SEC_E_INCOMPLETE_MESSAGE, the SECBUFFER_MISSING is returned in the pInput SecBufferDesc, in place of the SECBUFFER_ALERT SecBuffer that was passed in.
The documentation does say this, but not in a way that clearly contrasts this case against the SEC_I_CONTINUE_NEEDED and SEC_E_OK cases.
This answer also applies to AcceptSecurityContext.
From MSDN, I'd presume SEC_E_INCOMPLETE_MESSAGE is returned when not enough data has been received from the server at the moment. Instead, SEC_I_CONTINUE_NEEDED is returned with InBuffers[1] indicating the amount of unread data (note that some data has been processed and must be skipped) and OutBuffers containing nothing.
So the algorithm is:
If SEC_I_CONTINUE_NEEDED returned, check type of InBuffers[1]
If it is SECBUFFER_EXTRA, handle it (move InBuffers[1].cbBuffer bytes to the beginning of input buffer) and jump to next recv & InitializeSecurityContext iteration
If OutBuffers is not empty, send its contents to server