I am sending a UDP packet containing double-precision data from Simulink to a Raspberry Pi. I want to decode the UDP packet with the following code:
import struct
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("",5002))
data, addr = sock.recvfrom(1024)
struct.unpack('d', data)
print data
and I receive the following error:
struct.error: unpack requires a string argument of length 8
thanks in advance for any help!
You have to check whether you are sending big-endian or little-endian data.
You can add '<' for little-endian and '>' for big-endian data.
Depending on how many variables you are receiving, the number of 'd's should increase as well.
Let's say you are receiving two doubles:
struct.unpack('<' + 'd' * 2, data)
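For reference, here is a minimal sketch of the full receive-and-decode loop under those assumptions (port 5002 from the question, little-endian doubles from Simulink); the format string is built from the packet length, so it works for any number of doubles per packet:

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5002))

while True:
    data, addr = sock.recvfrom(1024)
    # Pass the received bytes themselves, not the string literal 'data'.
    # '<' = little endian, one 'd' (8 bytes) per double in the packet.
    values = struct.unpack('<' + 'd' * (len(data) // 8), data)
    print values  # Python 2 print, matching the question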
I'm using this code to listen to a port:
int start() {
    ushort port = 61888;
    listener = new TcpSocket();
    assert(listener.isAlive);
    listener.blocking = false;
    listener.bind(new InternetAddress(port));
    listener.listen(10);
    writefln("Listening on port %d.", port);

    enum MAX_CONNECTIONS = 60;
    auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
    Socket[] reads;

    while (true)
    {
        socketSet.add(listener);

        foreach (sock; reads)
            socketSet.add(sock);

        Socket.select(socketSet, null, null);
    }

    return 0;
}
As far as I know, sockets hand over the bytes as they are. I want to find a way to convert these bytes (which are essentially SQL requests) to strings. How can I do so, given that the input is UTF-8, a variable-width encoding?
You seem to have a few questions here.
How do I get chars from bytes?
Cast them with cast(char[]) st. This aliases the bytes, giving you a slice of the exact same data, and doesn't require a new allocation. You are not yet assuming that the bytes are valid UTF-8, but autodecoding or other parts of your program might complain if they aren't. You can run them through std.utf.validate if you want.
Or do basically the same thing with std.string.assumeUTF(st), which at least also asserts on invalid UTF, though only in debug builds.
How do I get a string from char[]?
You can unsafely alias the char[] with std.exception.assumeUnique(st), or you can allocate an immutable copy with st.idup or std.utf.toUTF8(st).
What if my fixed buffer of bytes contains invalid UTF-8 -- because it got cut off?
If that's a risk, you can use the low-level std.utf tools (decodeFront and catching UTFException is one way) to peel off the valid UTF-8 and then check whether you have remaining bytes, or to check that the end of the input is valid UTF-8.
How do I know if I've gotten a complete SQL statement with my fixed buffer socket I/O?
Instead of just passing the raw SQL statement over the line, you can define a network protocol that includes information like the statement size, or that has 'end of statement' markers that you can scan for.
I've a cheat sheet for string type conversions, which links to a file of more elaborate unittests.
Try this:
import std.exception: assumeUnique;
string s = assumeUnique(cast(char[]) ubyteArray);
I have working C and Python3 code, based on simple examples from the internet, where I can correctly send data from my Raspberry Pi 4 to an Atmel SAM-E70 dev kit board. I have a logic analyzer connected to look at the data being sent, and for every write_i2c_block_data() call from my Python3 code, smbus2 sends the 7-bit address, followed by 0x00, followed by the byte stream that I want to send. My C code, sending the same byte streams, doesn't have the 0x00 between the address and the data. Finally, sending the byte stream using i2ctransfer() from the shell also works as expected: no extra byte.
Hypothetically, it could be that the smbus2 package is trying to use a 10-bit address, but I cannot find any documentation supporting this supposition. In fact, what I've found indicates that the I2C bus configuration is performed via config file(s), which leads me to believe that the language used to communicate on the I2C bus shouldn't matter - it would have the same configuration.
Has anyone else encountered this?
I noticed the same thing using the smbus2 write_i2c_block_data() function; that 0x00 is the command/register byte that the block-write call sends right after the address.
To send the byte stream without it, use the i2c_rdwr() function.
Example:
from smbus2 import SMBus, i2c_msg

bus = SMBus(1)
address = 4
data = 'Some message'.encode()
msg = i2c_msg.write(address, data)
bus.i2c_rdwr(msg)
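If you also need to read data back without that extra byte, i2c_rdwr() can carry several messages in one transaction. Here is a minimal sketch; the slave address and the 8-byte read length are just placeholders for this example, so adjust them to your device:

from smbus2 import SMBus, i2c_msg

bus = SMBus(1)
address = 4

write = i2c_msg.write(address, 'Some message'.encode())
read = i2c_msg.read(address, 8)  # read 8 bytes back; length is device-specific
bus.i2c_rdwr(write, read)        # both messages in a single I2C transaction

print(list(read))                # the raw bytes returned by the slave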
I'm reading in a stream of data, 64 bytes to be exact. I want to read 16 bits starting at the 480th bit of the incoming data. Unfortunately, I do not know what the incoming data type is - it's a bunch of random characters/boxes. Reading it in as an unsigned short (v), I get the number I am looking for, which for this example is 13.
my $satt_id = unpack("x60v1", $msgdata); #$satt_id == 13
This results in $satt_id == 13, which is 00000000 00001101.
If I pull the data as 16 bits (b or B), the string does not reflect the value of 13, but rather is byte-swapped or reversed.
my $satt_idb = unpack("x60b16", $msgdata); #satt_idb == "10110000 00000000"
my $satt_idB = unpack("x60B16", $msgdata); #satt_idB == "00001101 00000000"
Why is this occurring? I want to alter the data and resend out the message, which would be relatively easy if all of the message elements were the same size (16 bits, just pack back as it was unpacked), but some are 6, 4, 2, and 1 bits. Should I just use little-endian b and then reverse? After altering the data reverse it back to original order and then pack it back as b?
Completely separate and not Perl-related, but this haunted me in a different utility. I just conceded by swapping the values in the Enum designation. It worked, but wasn't very viable once the number of bits got higher than 4 (16 different values).
Thanks!
EDIT: I'm guessing this is just related to binary notation? Apparently starts from the right? So $satt_idb is correct, if you read right to left. So to make it more user friendly, just reverse, alter, then reverse again and repack?
EDIT2: Basically I'm trying to make a user-friendly method of editing messages coming through a data stream. As I mentioned in the comments, if I want to edit a single bit from 0 to 1 (which in the message represents something as true/false), I don't want the user to have to worry about editing the octet of data received, just select from a dropdown of true/false.
If it works with v, it means the data is in little-endian byte order (least significant byte first), which means
0b0000000000001101
is stored as
0b00001101 0b00000000
which is what you got.
Should I just use little-endian b and then reverse?
No. You are likely doing something incorrect if you are converting the numbers to a text representation (binary).
If you did somehow want the binary representation of the number, you could use
sprintf("%16b", $num)
I would like to read a block of data from my Arduino Mega (and also from an Arduino Micro in another project) with my Raspberry Pi via I2C. The code has to be in Perl because it's sort of a plug-in for my Home-Automation-Server.
I'm using the Device::SMBus interface and the connection works; I'm able to write and read single bytes. I can even use writeBlockData with register address 0x00. I randomly discovered that this address works.
But when I try readBlockData, no register address seems to work.
Does anyone know the correct register address, or is that not even the problem that is causing the errors?
Thanks in advance
First off, which register(s) do you want to read? Here's an example using my RPi::I2C software (it should be exceptionally similar with the distribution you're using), along with a sketch that has a bunch of pseudo-registers configured for reading/writing.
First, the Perl code. It reads two bytes (the output of an analogRead() of pin A0 which is set up as register 80), then bit-shifts the two bytes into a 16-bit integer to get the full 0-1023 value of the pin:
use warnings;
use strict;
use RPi::I2C;
my $arduino_addr = 0x04;
my $arduino = RPi::I2C->new($arduino_addr);
my @bytes = $arduino->read_block(2, 80);
my $a0_value = ($bytes[0] << 8) | $bytes[1];
print "$a0_value\n";
Here's a full-blown Arduino sketch you can review that sets up a half dozen or so pseudo-registers, and when each register is specified, the Arduino writes or reads the appropriate data. If no register is specified, it operates on 0x00 register.
The I2C slave code on the Arduino always handles the onReceive() call before the onRequest() call (when using Wire), so I set up a global variable reg to hold the register value. I populate it in the onReceive() interrupt and then use it in the onRequest() call to send back the data at the pseudo-register you've specified.
The sketch itself doesn't really do anything useful, I just presented it as an example. It's actually part of my automated unit test platform for my RPi::WiringPi distribution.
I'm collecting 802.11 packets using scapy on Ubuntu 16.04 (4.4 kernel). The RadioTap headers for my packets have the following present flags:
present=TSFT+Flags+Rate+Channel+dBm_AntSignal+b14+b29+Ext
Given the description of RadioTap, I would expect Channel to start on the 10th byte following the header and preceding fields (8 for TSFT + 1 each for Flags and Rate). Channel has an alignment of 2, so there is no need for padding. Yet this is what is in the undecoded portion of the packet:
notdecoded=' \x08\x00\x00\x00\x00\x00\x00f\xc0 \x02\x00\x00\x00\x00\x10\x02l\t\xa0\x00\xa9\x00\x00\x00\xa9\x00'
In this case the channel number actually appears at bytes 18-19 ('l\t' = 2412 MHz), and I'm not sure exactly which byte contains the dBm signal strength.
Does anyone have an idea as to what I'm missing?
Found the answer after digging into the spec a bit deeper:
Scapy doesn't parse extended present bitmaps, as signalled by bit 31 (the Ext bit), though it did tell me about them by stating +Ext above. Those extra bitmap words are left stuffed on the front of the 'notdecoded' section of the packet. I think scapy should, at minimum, remove those extended bitmaps from notdecoded to avoid future confusion.
In this particular case there are two extra 32-bit extended bitmap words, accounting for the extra 8 bytes.
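For what it's worth, here is a minimal sketch (using Python's struct module) of how the leftover bytes line up once those two extra present words are skipped; the sample bytes and offsets are taken from the capture above:

import struct

# The leftover RadioTap bytes exactly as scapy reported them
notdecoded = b' \x08\x00\x00\x00\x00\x00\x00f\xc0 \x02\x00\x00\x00\x00\x10\x02l\t\xa0\x00\xa9\x00\x00\x00\xa9\x00'

offset = 8  # skip the two extra 32-bit present words
tsft, = struct.unpack_from('<Q', notdecoded, offset)              # TSFT: u64, 8-byte aligned
offset += 8
flags, rate = struct.unpack_from('<BB', notdecoded, offset)       # Flags and Rate: one byte each
offset += 2
freq, chan_flags = struct.unpack_from('<HH', notdecoded, offset)  # Channel: u16 MHz + u16 flags
offset += 4
dbm, = struct.unpack_from('<b', notdecoded, offset)               # dBm_AntSignal: signed 8-bit

print(freq, dbm)  # 2412 MHz, -87 dBm for this capture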
If someone wants to write up an answer with more detail, I'll accept it; otherwise I will clean this answer up and accept it in perpetuity.