Sending float from PIC to Raspberry Pi

I'm trying to send a float value from a PIC16F877A (on an EasyPIC4 development board) to a Raspberry Pi via UART.
The mikroC code:
AValue = 4.88 * ADC_Read(2);
ptr = (unsigned char *)&AValue;
for (i = 0; i < sizeof(AValue); i++)
    UART1_Write(*(ptr + i));  // send the four float bytes, low byte first
UART1_Write(0x0a);
Delay_ms(100);
The Python code on the Raspberry Pi:
import serial, time, struct
from pprint import pprint

ser = serial.Serial("/dev/ttyAMA0", 9600)
ser.write(raw_input("enter char: "))
while True:
    count = 0
    AValue = []
    for ch in ser.read():
        if ch == "\n":
            AValue = []
            time.sleep(0.1)
            while count < 4:
                for ch in ser.read():
                    AValue.append(ch)
                    count += 1
            flt = struct.unpack("<f", "".join(AValue))
            pprint(flt)
Output in the Python shell on the Raspberry Pi: the value changes as I move the pot around, as you can see, but only the zero value is correct. Actually, not even zero, since the first value should be 1 * 4.88:
(0.0,)
(-1.1472824864025526e-35,)
(-3.2123910193243324e-35,)
(-4.405564851100735e-35,)
(-3.0045950051756514e-32,)

Be careful: Microchip uses a custom float format.
Here you can see both representations of the number 4997.12:
0F6h, 028h, 09Ch, 045h ; IEEE Real = 4997.12
0F6h, 028h, 01Ch, 08Bh ; Microchip Real = 4997.12
For more information, read: http://ww1.microchip.com/downloads/en/AppNotes/00575.pdf
EDIT: added x86 asm conversion routines...
procedure AnsiSingleToMFormat(var Data: single);
asm
  mov cx, [eax + 2]
  rcl cl, 1
  rcl ch, 1
  rcr cl, 1
  mov [eax + 2], cx
end;

procedure MFormatToAnsiSingle(var Data: single);
asm
  mov cx, [eax + 2]
  rcl cl, 1
  rcr ch, 1
  rcr cl, 1
  mov [eax + 2], cx
end;
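The difference is confined to the top word: IEEE keeps the sign bit in front of the exponent, while the Microchip format keeps the exponent in the top byte with the sign bit just below it. On the Raspberry Pi side the conversion can be done in Python instead; this is a minimal sketch (Python 3, function name made up for illustration), assuming the PIC sends the four raw bytes of AValue least significant byte first:

import struct

def mchp_float_to_ieee(raw):
    # raw: the 4 bytes as sent by the PIC, LSB first, in Microchip format
    u = int.from_bytes(raw, "little")
    exp = (u >> 24) & 0xFF    # Microchip: exponent occupies the top byte
    sign = (u >> 23) & 0x1    # sign bit sits just below it
    mant = u & 0x7FFFFF       # the 23-bit mantissa is identical in both formats
    ieee = (sign << 31) | (exp << 23) | mant
    return struct.unpack("<f", struct.pack("<I", ieee))[0]

print(mchp_float_to_ieee(bytes([0xF6, 0x28, 0x1C, 0x8B])))  # 4997.12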

Related

How to interact with the Linux PCA953X.c driver? How does one utilize this driver?

I have a PCA9535 GPIO Expander board connected through I²C to my Raspberry Pi. I am able to set the GPIO expander output pins (to high) using the i2cset command:
sudo i2cset -y 1 0x20 0x02 0xFF   # bus 1, device 0x20 (GPIO expander), register 0x02
I came across a kernel driver for the PCA953X, loaded the kernel module gpio-pca953x.ko, and modified /boot/config.txt to include the device tree overlay for the expander. When I run i2cdetect -y 1 (for the i2c-1 bus), I can see "UU" at the 0x20 slave address, which confirms that the driver is managing it.
If I wanted to do something comparable to what I did with the i2cset command programmatically, what API / interface should I use such that I make use of the PCA953X driver that's installed? As you can see from the code below, nothing there suggests that I would be utilizing PCA953X:
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int file;
int adapter_nr = 1;
char filename[20];

snprintf(filename, 19, "/dev/i2c-%d", adapter_nr);
file = open(filename, O_RDWR);
if (file < 0) {
    exit(1);
}

// Writes 0xFF to register 0x02
uint8_t cmd[2] = {0x02, 0xFF};
struct i2c_msg msg = {
    .addr = 0x20,
    .flags = 0,
    .len = sizeof(cmd)/sizeof(uint8_t),
    .buf = cmd
};
struct i2c_rdwr_ioctl_data tx = {
    .msgs = &msg,
    .nmsgs = 1
};
ioctl(file, I2C_RDWR, &tx);
Is there a programming route that involves utilizing the PCA953X driver? What does having that module actually do?
Thanks to @TomV for pointing it out in the comments: the PCA953X driver provides a new character device in the form of gpiochipN. In my case, it was gpiochip2. I had not noticed this extra gpiochip in /dev because I was not looking for it. Since this is a character device, I was able to use ioctl to interact with the lines:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/gpio.h>

int fd, ret;

fd = open("/dev/gpiochip2", O_RDONLY);
if (fd < 0) {
    printf("Unable to open: %s", strerror(errno));
    return;
}

struct gpiohandle_request req;
req.lineoffsets[0] = 0;
req.lineoffsets[1] = 1;
req.lineoffsets[2] = 2;
req.lineoffsets[3] = 3;
req.lineoffsets[4] = 4;
req.lineoffsets[5] = 5;
req.lineoffsets[6] = 6;
req.lineoffsets[7] = 7;
req.lines = 8;
req.flags = GPIOHANDLE_REQUEST_OUTPUT;

ret = ioctl(fd, GPIO_GET_LINEHANDLE_IOCTL, &req);
if (-1 == ret) {
    printf("Failed to get line handle: %s\n", strerror(errno));  // errno, not ret
    close(fd);
    return;
}

// Sets all 8 lines to high (equivalent to setting register 0x3 to 0b11111111 or 0xFF)
struct gpiohandle_data data;
data.values[0] = 1;
data.values[1] = 1;
data.values[2] = 1;
data.values[3] = 1;
data.values[4] = 1;
data.values[5] = 1;
data.values[6] = 1;
data.values[7] = 1;

ret = ioctl(req.fd, GPIOHANDLE_SET_LINE_VALUES_IOCTL, &data);
if (-1 == ret) {
    printf("Failed to set line value\n");
} else {
    close(req.fd);
}
close(fd);
The above code interacts with the gpiochip2 character device, which is made possible by having the PCA953X installed. Without it, I would have to read and write to the registers through the i2c bus as was posted in my original question.
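For completeness, the same thing can be done from Python without hand-rolling the ioctl calls. This is a rough sketch assuming the libgpiod v1 Python bindings (the python3-libgpiod package) are installed and the expander still appears as gpiochip2:

import gpiod

# Open the character device that the PCA953X driver created
chip = gpiod.Chip("gpiochip2")
lines = chip.get_lines(list(range(8)))
lines.request(consumer="demo", type=gpiod.LINE_REQ_DIR_OUT)
lines.set_values([1] * 8)  # drive all 8 expander lines high
lines.release()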

Unpacking a vector into an array of a certain bit width

Suppose I have a vector of bits. I'd like to convert it into an array of n bit values, where n is a variable (not a parameter). Can I achieve this using the streaming operators? I tried this (right now I'm just trying a value of 3, but eventually '3' should be variable):
module tb;
  bit [51:0] vector = 'b111_110_101_100_011_010_001_000;
  byte vector_byte[];

  initial begin
    $displayb(vector);
    vector_byte = {<<3{vector}};
    foreach (vector_byte[i])
      $display("%0d = %0b", i, vector_byte[i]);
  end
endmodule
What I was expecting was:
vector_byte = '{'b000, 'b001, 'b010 ... 'b111};
However, the output I got was:
# vsim -voptargs=+acc=npr
# run -all
# 00000000000000000000000000000000111110101100011010001000
# 0 = 101
# 1 = 111001
# 2 = 1110111
# 3 = 0
# 4 = 0
# 5 = 0
# 6 = 0
# exit
Am I just using the streaming operators wrong?
The streaming operators only work with contiguous streams. You need 5'b00000 inserted into each byte.
module tb;
  bit [51:0] vector = 'b111_110_101_100_011_010_001_000;
  int W = 3;
  byte vector_byte[];

  initial begin
    vector_byte = new[$bits(vector)/W];
    $displayb(vector);
    foreach (vector_byte[i]) begin
      vector_byte[i] = vector[i*W +: 8] & ((1 << W) - 1); // mask works for W in range 1-8
      $display("%0d = %0b", i, vector_byte[i]);
    end
  end
endmodule
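Since W is a plain variable, the same mask-and-shift idea ports directly to software. A quick Python sketch for cross-checking the expected field values (not part of the testbench):

vector = 0b111_110_101_100_011_010_001_000
W = 3  # field width
fields = [(vector >> (i * W)) & ((1 << W) - 1) for i in range(8)]
for i, f in enumerate(fields):
    print(i, "=", bin(f))  # 0 = 0b0, 1 = 0b1, ..., 7 = 0b111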

CRC-32 algorithm from HDL to software

I implemented a Galois linear-feedback shift register in Verilog (and also in MATLAB, mainly to emulate the HDL design). It's been working great, and as of now I use MATLAB to calculate CRC-32 fields and then include them in my HDL simulations to verify a data packet has arrived correctly (padding data with CRC-32), which produces good results.
The thing is, I want to be able to calculate in software the CRC-32 I've implemented, because I'll be using a Raspberry Pi to input data through GPIO in my FPGA, and I haven't been able to do so. I've tried an online calculator, using the same parameters, but never got it to yield the same result.
This is the MATLAB code I use to calculate my CRC-32:
N = 74*16;
data = [round(rand(1,N)) zeros(1,32)];
lfsr = ones(1,32);
next_lfsr = zeros(1,32);
for i = 1:length(data)
    next_lfsr(1) = lfsr(2);
    next_lfsr(2) = lfsr(3);
    next_lfsr(3) = lfsr(4);
    next_lfsr(4) = lfsr(5);
    next_lfsr(5) = lfsr(6);
    next_lfsr(6) = xor(lfsr(7),lfsr(1));
    next_lfsr(7) = lfsr(8);
    next_lfsr(8) = lfsr(9);
    next_lfsr(9) = xor(lfsr(10),lfsr(1));
    next_lfsr(10) = xor(lfsr(11),lfsr(1));
    next_lfsr(11) = lfsr(12);
    next_lfsr(12) = lfsr(13);
    next_lfsr(13) = lfsr(14);
    next_lfsr(14) = lfsr(15);
    next_lfsr(15) = lfsr(16);
    next_lfsr(16) = xor(lfsr(17),lfsr(1));
    next_lfsr(17) = lfsr(18);
    next_lfsr(18) = lfsr(19);
    next_lfsr(19) = lfsr(20);
    next_lfsr(20) = xor(lfsr(21),lfsr(1));
    next_lfsr(21) = xor(lfsr(22),lfsr(1));
    next_lfsr(22) = xor(lfsr(23),lfsr(1));
    next_lfsr(23) = lfsr(24);
    next_lfsr(24) = xor(lfsr(25),lfsr(1));
    next_lfsr(25) = xor(lfsr(26),lfsr(1));
    next_lfsr(26) = lfsr(27);
    next_lfsr(27) = xor(lfsr(28),lfsr(1));
    next_lfsr(28) = xor(lfsr(29),lfsr(1));
    next_lfsr(29) = lfsr(30);
    next_lfsr(30) = xor(lfsr(31),lfsr(1));
    next_lfsr(31) = xor(lfsr(32),lfsr(1));
    next_lfsr(32) = xor(data(i),lfsr(1));  % was data2(i): data2 is undefined
    lfsr = next_lfsr;
end
crc32 = lfsr;
Note that I pad with 32 zeroes to calculate the CRC-32 in the first place (whatever's left in the LFSR at the end is my CRC-32), and if I repeat the process with the zeroes replaced by this CRC-32, my LFSR ends up all zeroes, which means the verification passed.
The polynomial I'm using is the standard one for CRC-32: 04C11DB7. Note also that the order seems reversed, but that's just because the register is mirrored to have the input at the MSB. Using this representation or a mirrored one gives the same result for the same input; only the result will also be mirrored.
Any ideas would be of great help.
Thanks in advance
Your CRC is not a CRC. The last 32 bits fed in don't actually participate in the calculation, other than being exclusive-or'ed into the result. That is, if you replace the last 32 bits of data with zeros, do your calculation, and then exclusive-or the last 32 bits of data with the resulting "crc32", then you will get the same result.
So you will never get it to match another CRC calculation, since it isn't a CRC.
This code in C replicates your function, where the data bits come from the series of n bytes at p, least significant bit first, and the result is a 32-bit value:
unsigned long notacrc(void const *p, unsigned n) {
    unsigned char const *dat = p;
    unsigned long reg = 0xffffffff;
    while (n) {
        for (unsigned k = 0; k < 8; k++)
            reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
        reg ^= (unsigned long)*dat++ << 24;
        n--;
    }
    return reg;
}
You can immediately see that the last byte of data is simply exclusive-or'ed with the final register value. Less obvious is that the last four bytes are just exclusive-or'ed. This exactly equivalent version makes that evident:
unsigned long notacrc_xor(void const *p, unsigned n) {
    unsigned char const *dat = p;
    // initial register values
    unsigned long const init[] = {
        0xffffffff, 0x2dfd1072, 0xbe26ed00, 0x00be26ed, 0xdebb20e3};
    unsigned xor = n > 3 ? 4 : n;       // number of bytes merely xor'ed
    unsigned long reg = init[xor];
    while (n > xor) {
        reg ^= *dat++;
        for (unsigned k = 0; k < 8; k++)
            reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
        n--;
    }
    switch (n) {
        case 4:
            reg ^= *dat++;                        // fall through
        case 3:
            reg ^= (unsigned long)*dat++ << 8;    // fall through
        case 2:
            reg ^= (unsigned long)*dat++ << 16;   // fall through
        case 1:
            reg ^= (unsigned long)*dat++ << 24;
    }
    return reg;
}
There you can see that the last four bytes of the message, or all of the message if it is three or fewer bytes, is exclusive-or'ed with the final register value at the end.
An actual CRC must use all of the input data bits in determining when to exclusive-or the polynomial with the register. The inner part of that last function is what a CRC implementation looks like (though more efficient versions make use of pre-computed tables to process a byte or more at a time). Here is a function that computes an actual CRC:
unsigned long crc32_jam(void const *p, unsigned n) {
    unsigned char const *dat = p;
    unsigned long reg = 0xffffffff;
    while (n) {
        reg ^= *dat++;
        for (unsigned k = 0; k < 8; k++)
            reg = reg & 1 ? (reg >> 1) ^ 0xedb88320 : reg >> 1;
        n--;
    }
    return reg;
}
That one is called crc32_jam because it implements a particular CRC called "JAMCRC". That CRC is the closest to what you attempted to implement.
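Since the receiving end here is a Raspberry Pi, it may help to note that JAMCRC is simply the standard CRC-32 without the final bit inversion, so a Python sketch using only the standard library can verify what crc32_jam produces (0x340bc6d9 is the documented JAMCRC check value for "123456789"):

import zlib

data = b"123456789"
crc = zlib.crc32(data)       # standard CRC-32: 0xcbf43926
jamcrc = ~crc & 0xFFFFFFFF   # JAMCRC: 0x340bc6d9
print(hex(crc), hex(jamcrc))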
If you want to use a real CRC, you will need to update your Verilog implementation.

What does this line of code do? Const uint32_t goodguys = 0x1 << 0

Can someone tell me what is being done here:
Const uint32_t goodguys = 0x1 << 0
I'm assuming it is c++ and it is assigning a tag to a group but I have never seen this done. I am a self taught objective c guy and this just looks very foreign to me.
Well, if there are more lines that look like this that follow the one that you posted, then they could be bitmasks.
For example, if you have the following:
const uint32_t bit_0 = 0x1 << 0;
const uint32_t bit_1 = 0x1 << 1;
const uint32_t bit_2 = 0x1 << 2;
...
then you could use the bitwise & operator with bit_0, bit_1, bit_2, ... and another number in order to see which bits in that other number are turned on.
const uint32_t num = 5;
...
bool bit_0_on = (num & bit_0) != 0;
bool bit_1_on = (num & bit_1) != 0;
bool bit_2_on = (num & bit_2) != 0;
...
So your 0x1 is simply a way to designate that goodguys is a bitmask, because the hexadecimal 0x designator shows that the author of the code is thinking specifically about bits, instead of decimal digits. And then the << 0 is used to change exactly what the bitmask is masking (you just change the 0 to a 1, 2, etc.).
Although base 10 is a normal way to write numbers in a program, sometimes you want to express the number in octal base or hex base. To write numbers in octal, precede the value with a 0. Thus, 023, really means 19 in base 10. To write numbers in hex, precede the value with a 0x or 0X. Thus, 0x23, really means 35 in base 10.
So
goodguys = 0x1;
really means the same as
goodguys = 1;
The bitwise shift operators shift their first operand left (<<) or right (>>) by the number of positions the second operand specifies. Look at the following two statements
goodguys = 0x1;
goodguys << 2;
The first statement is the same as goodguys = 1;
The second statement says that we should shift the bits to the left by 2 positions (on its own this expression discards its result; you would write goodguys = goodguys << 2 to keep it). So we end up with the binary value 100, that is
goodguys = 0b100
which is the same as goodguys = 4;
Now you can express the two statements
goodguys = 0x1;
goodguys << 2;
as a single statement
goodguys = 0x1 << 2;
which is similar to what you have. But if you are unfamiliar with hex notation and bitwise shift operators it will look intimidating.
When const is used with a variable, it uses the following syntax:
const type variable-name = value;
In this case, the const modifier allows you to assign an initial value to a variable that cannot later be changed by the program. For instance
const int POWER_UPS = 4;
will assign 4 to variable POWER_UPS. But if you later try to overwrite this value like
POWER_UPS = 8;
you will get a compilation error.
Finally, uint32_t means a 32-bit unsigned integer type. You use it when you want to make sure your variable is exactly 32 bits long, and nothing else.
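The arithmetic is easy to check interactively in any language; in Python, for instance (the names other than goodguys are made up for illustration):

goodguys = 0x1 << 0   # 0b001 == 1
badguys  = 0x1 << 1   # 0b010 == 2
neutral  = 0x1 << 2   # 0b100 == 4

team = goodguys | neutral     # combine flags
print(bool(team & goodguys))  # True:  the goodguys bit is set
print(bool(team & badguys))   # False: the badguys bit is not set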

How to write a unicode symbol in lua

How can I write a Unicode symbol in Lua? For example, I have to write the symbol with code point 9658, but when I write
string.char(9658)
I get an error. So how is it possible to write such a symbol?
Lua does not look inside strings. So, you can just write
mychar = "►"
(added in 2015)
Lua 5.3 introduced support for UTF-8 escape sequences:
The UTF-8 encoding of a Unicode character can be inserted in a literal string with the escape sequence \u{XXX} (note the mandatory enclosing brackets), where XXX is a sequence of one or more hexadecimal digits representing the character code point.
You can also use utf8.char(9658).
Here is an encoder for Lua that takes a Unicode code point and produces a UTF-8 string for the corresponding character:
do
  local bytemarkers = { {0x7FF,192}, {0xFFFF,224}, {0x1FFFFF,240} }
  function utf8(decimal)
    if decimal<128 then return string.char(decimal) end
    local charbytes = {}
    for bytes,vals in ipairs(bytemarkers) do
      if decimal<=vals[1] then
        for b=bytes+1,2,-1 do
          local mod = decimal%64
          decimal = (decimal-mod)/64
          charbytes[b] = string.char(128+mod)
        end
        charbytes[1] = string.char(vals[2]+decimal)
        break
      end
    end
    return table.concat(charbytes)
  end
end
c=utf8(0x24) print(c.." is "..#c.." bytes.") --> $ is 1 bytes.
c=utf8(0xA2) print(c.." is "..#c.." bytes.") --> ¢ is 2 bytes.
c=utf8(0x20AC) print(c.." is "..#c.." bytes.") --> € is 3 bytes.
c=utf8(0x24B62) print(c.." is "..#c.." bytes.") --> 𤭢 is 4 bytes.
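As a cross-check from another environment (assuming Python 3 is at hand), the built-in chr and str.encode produce the same UTF-8 bytes the encoder above yields:

print(chr(9658))                  # ►
print(chr(9658).encode("utf-8"))  # b'\xe2\x96\xba' -- 3 bytes, matching the Lua encoder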
Maybe this can help you:
function FromUTF8(pos)
  local mod = math.mod
  local function charat(p)
    -- editor.CharAt is SciTE-specific; substitute your own byte accessor elsewhere
    local v = editor.CharAt[p]; if v < 0 then v = v + 256 end; return v
  end
  local v, c, n = 0, charat(pos), 1
  if c < 128 then v = c
  elseif c < 192 then
    error("Byte values between 0x80 to 0xBF cannot start a multibyte sequence")
  elseif c < 224 then v = mod(c, 32); n = 2
  elseif c < 240 then v = mod(c, 16); n = 3
  elseif c < 248 then v = mod(c, 8); n = 4
  elseif c < 252 then v = mod(c, 4); n = 5
  elseif c < 254 then v = mod(c, 2); n = 6
  else
    error("Byte values between 0xFE and 0xFF cannot start a multibyte sequence")
  end
  for i = 2, n do
    pos = pos + 1; c = charat(pos)
    if c < 128 or c > 191 then
      error("Following bytes must have values between 0x80 and 0xBF")
    end
    v = v * 64 + mod(c, 64)
  end
  return v, pos, n
end
To get broader support for Unicode string content, one approach is slnunicode which was developed as part of the Selene database library. It will give you a module that supports most of what the standard string library does, but with Unicode characters and UTF-8 encoding.