CRC-16 in MATLAB

I have a working program in LabVIEW that I want to port to MATLAB. It takes an input number, converts it to hex, appends it to a constant string (0110 0001 0002 0400 03), calculates a CRC-16, and sends it all to a COM port. Here are two examples for 1500 and 2000 respectively.
0110 0001 0002 0400 0305 DCC0 AA
0110 0001 0002 0400 0307 D0C1 CF
I can see that dec2hex(1500) produces the 5DC, and dec2hex(2000) produces the 7D0. The AA and the CF are produced by a CRC-16 LabVIEW program; they are 170 and 207 in decimal, respectively. I understand these are some sort of checksum, but I can't find a way to reproduce them in MATLAB.

Solution found via: FEX submission
A = ['01';'10';'00';'01';'00';'02';'04';'00';'03';'07';'D0']
dec2hex(append_crc(hex2dec(A)'))
Returns:
01
10
00
01
00
02
04
00
03
07
D0
C1
CF
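For reference, the constant string looks like a Modbus RTU write-multiple-registers frame, and the two trailing bytes match the standard Modbus CRC-16 (polynomial 0xA001, initial value 0xFFFF, appended low byte first). If you would rather not depend on the FEX submission, here is a minimal self-contained sketch; the function name crc16_modbus is my own (save it as crc16_modbus.m):

function crc = crc16_modbus(bytes)
% CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF
crc = uint16(hex2dec('FFFF'));
for b = uint16(bytes(:)')
    crc = bitxor(crc, b);
    for k = 1:8
        if bitand(crc, 1)
            crc = bitxor(bitshift(crc, -1), uint16(hex2dec('A001')));
        else
            crc = bitshift(crc, -1);
        end
    end
end
end

Used on the second example frame:

msg = hex2dec(['01';'10';'00';'01';'00';'02';'04';'00';'03';'07';'D0'])';
crc = crc16_modbus(msg);                                   % 0xCFC1
frame = [uint16(msg) bitand(crc, 255) bitshift(crc, -8)];  % append low byte first
dec2hex(frame')                                            % same list as above, ending C1, CF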

Related

Sending multiple bytes of data from PC serial port to custom hardware using MATLAB

I am using the following MATLAB program to send multiple bytes of serial data from the PC through a USB-serial adapter:
close all; clear all; clc;

array_1st = ones(64, 1, 'uint8');
array_1st = array_1st';
for i = 1:64
    array_1st(i) = array_1st(i) * i;   % fill with 1, 2, ..., 64
end
dummy_var = zeros(1, 1);

% close and delete any serial objects left over from earlier runs
if ~isempty(instrfind)
    fclose(instrfind);
    delete(instrfind);
end

s = serial('COM9');
set(s, 'Terminator','CR', 'DataBits',8, 'Parity','none', 'StopBits',1, 'BaudRate',38400);
fopen(s);
disp('fopen(s)');
fwrite(s, array_1st);   % write all 64 bytes
pause(0.5);
dummy_var = fread(s, 1);
dummy_var
fclose(s);
clear s
Custom hardware is connected to the serial port. I have already been able to read multiple bytes of data sent by the custom hardware to the PC using a similar MATLAB program (that is, multiple-byte transfer from hardware to PC through the serial port using MATLAB is working).
I am expecting the code above to write an array of 8-bit unsigned ints when fwrite(s, array_1st); is executed. At this point the custom hardware displays the bytes received on the serial port in its IDE's terminal and waits at a breakpoint until dummy_var = fread(s, 1); is called in the MATLAB program.
The problem is that MATLAB always sends only the first byte of the array_1st array to the serial port hardware. I have tried all kinds of variations of the above program, such as removing the array_1st = array_1st'; line, using char instead of uint8, and using LF for the Terminator property of the serial port object, but the problem is always the same: only the first byte of the array actually goes through the serial port hardware to the custom device.
After calling fwrite(s, array_1st);, MATLAB also prints the following warning:
Warning: The specified amount of data was not returned within the
Timeout period.
The properties of the serial object after the fopen():
Serial Port Object : Serial-COM9
Communication Settings
Port: COM9
BaudRate: 38400
Terminator: 'CR'
Communication State
Status: open
RecordStatus: off
Read/Write State
TransferStatus: idle
BytesAvailable: 0
ValuesReceived: 0
ValuesSent: 0
Its properties after executing the dummy_var = fread(s, 1); line are:
Serial Port Object : Serial-COM9
Communication Settings
Port: COM9
BaudRate: 38400
Terminator: 'CR'
Communication State
Status: open
RecordStatus: off
Read/Write State
TransferStatus: idle
BytesAvailable: 0
ValuesReceived: 0
ValuesSent: 64
The ValuesSent property has been updated from 0 to 64, which I guess indicates that, at least as far as the MATLAB serial port object is concerned, 64 values are being sent, but for some reason only the first one actually makes it through the serial port hardware.
What could be the reason for this? Is it that MATLAB just doesn't support what I want to do? If only one byte is actually sent through the serial port hardware, why does ValuesSent show 64?
The MATLAB version is R2009b.
EDIT 1: -----
I recorded the serial object data using the serial/record function, and here is what it recorded:
Legend:
* - An event occurred.
> - A write operation occurred.
< - A read operation occurred.
1 Recording on 29-Jul-2018 at 23:27:19.241. Binary data in little endian format.
2 > 64 uchar values.
01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10
11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20
21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30
31 32 33 34 35 36 37 38 39 3a 3b 3c 3d 3e 3f 40
3 < 0 uchar values.
4 Recording off.
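For reference, a session log like the one above can be produced with the serial interface's record function; a minimal sketch, assuming s is the open serial object from the program above (the log file name is my own choice):

s.RecordName = 'serial_log.txt';   % must be set while recording is off
record(s, 'on');                   % start logging reads and writes
fwrite(s, array_1st);
dummy_var = fread(s, 1);
record(s, 'off');                  % stop logging; inspect serial_log.txt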

Is it safe to replace CP850 with UTF-8 encoding

I have an old project that reads files with CP850 encoding, but it handles accented characters wrongly (e.g., Montréal becomes MontrÚal).
I want to replace CP850 with UTF-8. The question is:
Is it safe? In other words, can we assume UTF-8 is a superset of CP850 and encodes characters the same way?
Thanks
I tried hexdump; below is a sample of my CSV file. Is it UTF-8?
000000d0 76 20 64 65 20 4d 61 72 6c 6f 77 65 2c 2c 4d 6f |v de Marlowe,,Mo|
000000e0 6e 74 72 c3 a9 61 6c 2c 51 43 2c 48 34 41 20 20 |ntr..al,QC,H4A |
If by superset you mean does UTF-8 include all the characters of CP850, then trivially yes, since UTF-8 can encode all valid Unicode code points using a variable-length encoding (1–4 bytes).
If you mean are characters encoded the same way, then as you've seen this is not the case, since é (U+00E9) is encoded as 82 in CP850 and C3 A9 in UTF-8.
I cannot see a character set / code page that encodes Ú as 82. However, Ú is encoded as E9 in CP850, and E9 is the ISO-8859-1 representation of é, so it's possible you've got your conversion the wrong way around (i.e. you're converting your file from ISO-8859-1 to CP850, when you want to convert from CP850 to UTF-8).
Here's an example using hd and iconv:
hd test.cp850.txt
00000000 4d 6f 6e 74 72 82 61 6c |Montr.al|
00000008
iconv --from cp850 --to utf8 test.cp850.txt > test.utf8.txt
hd test.utf8.txt
00000000 4d 6f 6e 74 72 c3 a9 61 6c |Montr..al|
00000009
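If the conversion needs to happen in MATLAB rather than with iconv, here is a hedged sketch using native2unicode / unicode2native (this assumes your MATLAB's ICU data includes the CP850 converter, sometimes named IBM850; the file names are placeholders):

fid = fopen('test.cp850.txt', 'r');
raw = fread(fid, '*uint8')';
fclose(fid);
txt = native2unicode(raw, 'CP850');   % decode the CP850 bytes to text
out = unicode2native(txt, 'UTF-8');   % re-encode the text as UTF-8
fid = fopen('test.utf8.txt', 'w');
fwrite(fid, out);
fclose(fid);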

Hex Encoding and Decoding

I have two modules in a 3rd-party application (it has no documentation, and I cannot reveal the application name due to confidentiality). One module outputs only integers and the other outputs only floating-point numbers.
The module that outputs integers has a very simple data format: just the hex representation of the numbers in reverse byte order, so I am able to decode it successfully. But I am having issues decoding the hex representation of the floating-point numbers.
The data below shows the data dump in hex, followed by the expected converted value. The only information I have about the representation is that the last two bytes are some sort of CRC, so each record is an 8-byte number followed by two CRC bytes.
The 8 bytes that need to be converted, and their expected values, are:
Dataset 1: 02 B5 E6 7B 15 C8 0C 00 0A F9 = 999359.533
Dataset 2: 7C 4C 3A 00 00 00 00 00 B7 4C = 0.001
Can anyone suggest something here? I have tried many encoding schemes, including the IEEE formats. I do not have any other relevant information that I can share (I know solving this will be a matter of trial and error).
Not sure if this helps but:
02 B5 E6 7B 15 C8 0C 00 = 0x000CC8157BE6B502 = 3597694319113474
7C 4C 3A 00 00 00 00 00 = 0x00000000003A4C7C = 3820668
and
3597694319113474 / 3600000000 = 999359.5331
3820668 / 3600000000 = 0.001061297
So within a certain amount of rounding maybe they are fixed point numbers in fractions of 3600000000?
Can you get some more data points?
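To test that hypothesis in MATLAB, a sketch for the first dataset (typecast uses the host byte order, so this assumes a little-endian machine such as x86):

bytes = uint8(hex2dec(['02';'B5';'E6';'7B';'15';'C8';'0C';'00']))';
raw = typecast(bytes, 'uint64');   % reinterpret the 8 bytes as one 64-bit integer
value = double(raw) / 3.6e9        % 999359.5331..., close to the expected 999359.533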

Why do you wrap around in a 16-bit checksum (hex used)?

I have the question:
Compute the 16-bit checksum for the data block E3 4F 23 96 44 27 99
F3. Then perform the verification calculation.
I can perform the addition, and I get an overflow:
E3 4F
23 96
44 27
99 F3
---------
1 E4 FF (overflow)
The solution then takes the overflow and adds it back in, causing E4 FF to become E5 00. Can someone explain to me why this occurs?
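For what it's worth, this is the end-around carry of one's-complement arithmetic: any carry out of bit 16 is folded back into the low 16 bits, so E4 FF plus the carry 1 becomes E5 00. A MATLAB sketch of the sum (the final complement step is an assumption, following the usual convention of transmitting the one's complement of the sum, as in the Internet checksum):

words = hex2dec(['E34F'; '2396'; '4427'; '99F3']);
total = sum(words);                                % 0x1E4FF: 17 bits, carry out of bit 16
total = mod(total, 65536) + floor(total / 65536);  % end-around carry: 0xE4FF + 1
checksum = 65535 - total;                          % one's complement of the folded sum
dec2hex(total)                                     % 'E500'
dec2hex(checksum)                                  % '1AFF'

If the fold itself produced another carry, you would fold again; here one pass is enough.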

Where is my interpretation of ASN.1 DER wrong?

Here is what my structure looks like:
SET OF
    SEQUENCE
        INTEGER: XX
        INTEGER: YY
My encoding looks like this:
11 08 10 06 02 01 XX 02 01 YY
11 08 -- SET OF
10 06 -- SEQUENCE
However, when I decode with openssl, I don't see the expected output. It looks like
0:d=0 hl=2 l= 8 prim: SET
0000 - 10 06 02 01 XX 02 01 YY-
This is not what I expected to see. (Look at the structure I wanted it to look like)
I am not sure what I am missing. Any help would be much appreciated.
A SET and a SEQUENCE are constructed types. That means the bit in the tag that indicates a constructed type needs to be set; that is bit 5 or bit 6, depending on whether you number bits from 0 or from 1. If the bit isn't set, the parser views the element as a primitive type, meaning it holds a single value instead of children. This is why you get prim in your output. The tag number is still 17 or 16, which denotes a SET OF or a SEQUENCE, so the structure is still seen as a SET.
So instead of 11 and 10 you should be using values 31 and 30. Then your code should parse correctly.
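For example, setting the constructed bit (0x20) in both tags, with the lengths unchanged:
31 08 30 06 02 01 XX 02 01 YY
31 08 -- SET OF (constructed)
30 06 -- SEQUENCE (constructed)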