I have a CyberGlove connected to MATLAB via a serial port. I am trying to send a command requesting a data sample and then read the sample back.
I have successfully connected the glove with the following code:
s = serial('/dev/ttyS0')
set(s,'BaudRate',115200)
fopen(s)
I can then write/read to get the sensor information in either binary or ASCII.
In binary I am doing:
fwrite(s,'G')
fread(s)
fread always times out and then spits out a column vector of seemingly random length (1-100+) containing meaningless integers.
With ASCII, the commands are:
fprintf(s,'g')
fscanf(s)
This gives an empty string as the read-out value. I know the glove is receiving and at least partially processing the commands, though, because if I give it an invalid command, I get back the error message e?.
Here's the part that really confuses me, though: I accidentally discovered a way to get a correct reading from the glove.
fread(s) (which times out and gives the seemingly random output)
fprintf(s,'g')
fscanf(s)
I then get the string output 'g 1 76 93 113 89 42 20 77 98 106 117 81 62 23 52 60 34 68 57 254 92 26', which is correct.
My questions are:
(1) Why does that last part produce a correct response?
(2) How can I get a reading from the binary command? This is what I actually want to acquire.
Figured out the problem:
The timeout was occurring because fread wasn't getting back as many bytes as it expected: by default, get(s, 'InputBufferSize') returned 512, while the actual data was only 24 bytes. Before calling fopen(s), I simply ran set(s, 'InputBufferSize', 24); after that, the binary write/read returned the 24 8-bit integers I was expecting, without a timeout.
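For reference, a rough Python/pyserial equivalent of the working sequence looks like this (the port name, baud rate, and 24-byte sample size come from above; the pyserial translation itself is my assumption, not part of the original MATLAB setup):

import serial  # pyserial

glove = serial.Serial('/dev/ttyS0', baudrate=115200, timeout=2)  # same port and baud rate as above
glove.write(b'G')          # request one binary sample
sample = glove.read(24)    # read exactly the 24 bytes the glove returns
print(list(sample))        # 24 8-bit sensor values
glove.close()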
I ran a 32-bit random number generator 100,000 times, which resulted in a file of 275,714 bytes. Then I typed the following line in my terminal:
./assess 1024 (here comes my first question: what exactly should we type here?)
Then I fed my file as input, and it asked "How many bitstreams?", to which I answered 269 (269 = 275,714/1024). I chose Binary as my format. Finally, I got numerous lines of "igamc: UNDERFLOW". How should I deal with this?
The NIST Test Suite works on a number of bitstreams of a given length, and its result is then reported as the proportion of bitstreams that passed each test (the Proportion column in finalAnalysisReport). So when you execute ./assess length, length is the length of one bitstream in bits.
I think the igamc underflow can be caused by bitstreams that are too short. The NIST document specifies a recommended input size for every test; for example, it is somewhere close to 40,000 bits for the Binary Matrix Rank Test and 1,000,000 bits for the Overlapping Template Matching Test, and both of these tests use the igamc function to compute the P-value.
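As a rough back-of-the-envelope check in Python (this assumes the 275,714-byte file is raw binary data, so every byte contributes 8 bits):

total_bits = 275_714 * 8           # 2,205,712 bits available in the file
print(total_bits // 1024)          # -> 2154 bitstreams of length 1024
print(total_bits // 1_000_000)     # -> 2 bitstreams at the recommended 1,000,000 bits

So at the lengths recommended for the tests that use igamc, this file only yields a couple of bitstreams; generating considerably more data is the usual fix.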
I am using Modbus RTU, and I'm trying to figure out how to calculate the CRC16.
I don't need a code example. I am simply curious about the mechanism.
I have learned that a basic CRC is a polynomial division of the data word, which is padded with zeros, depending on the length of the polynomial.
The following test example is supposed to check if my basic understanding is correct:
data word: 0100 1011
polynomial: 1001 (x^3 + 1)
padded by 3 bits because of the highest exponent, x^3
calculation: 0100 1011 000 / 1001 -> remainder: 011
Calculation:
01001011000
 1001
-----------
00000011000
      1001
-----------
00000001010
       1001
-----------
00000000011   -> remainder: 011
Edit1: So far verified by Mark Adler in previous comments/answers.
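Purely as a cross-check of the toy example above, the same division can be run in a few lines of Python (the variable names are mine):

value = 0b01001011 << 3          # data word padded with 3 zero bits
poly = 0b1001                    # x^3 + 1
for shift in range(7, -1, -1):   # clear the leading data bits from left to right
    if value & (1 << (shift + 3)):
        value ^= poly << shift
print(format(value, '03b'))      # -> 011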
While searching for an answer I have seen a lot of different approaches involving bit reversal, dependence on little or big endianness, etc., which alter the outcome from the 011 given above.
Modbus RTU CRC16
Of course I would love to understand how different versions of CRCs work, but my main interest is to simply understand what mechanism is applied here. So far I know:
x^16 + x^15 + x^2 + 1 is the polynomial: 0x18005 or 0b11000000000000101
initial value is 0xFFFF
example message in hex: 01 10 C0 03 00 01
CRC16 of above message in hex: C9CD
I did calculate this manually like the example above, but I'd rather not write this down in binary in this question. I presume my transformation into binary is correct. What I don't know is how to incorporate the initial value -- is it used to pad the data word with it instead of zeros? Or do I need to reverse the answer? Something else?
1st attempt: Padding by 16 bits with zeros.
The calculated remainder in binary would be 1111 1111 1001 1011, which is FF9B in hex and incorrect for CRC-16/MODBUS, but correct for CRC-16/BUYPASS.
2nd attempt: Padding by 16 bits with ones, due to initial value.
Calculated remainder in binary would be 0000 0000 0110 0100 which is 0064 in hex and incorrect.
It would be great if someone could explain, or clarify my assumptions. I honestly did spend many hours searching for an answer, but every explanation is based on code examples in C/C++ or other languages, which I don't understand. Thanks in advance.
EDIT1: According to this site, "1st attempt" corresponds to another CRC16 method with the same polynomial but a different initial value (0x0000), which tells me the calculation itself should be correct.
How do I incorporate the initial value?
EDIT2: Mark Adler's answer does the trick. However, now that I can compute CRC-16/MODBUS, there are some questions left for clarification. Answers are not needed, but appreciated.
A) Would the order of computation be the following?
1st: apply RefIn to the complete input (including the padded bits)
2nd: XOR the InitValue into the first 16 bits (for CRC16)
3rd: apply RefOut to the complete output/remainder (at most 16 bits for CRC16)
B) Referring to RefIn and RefOut: is the input always reflected 8 bits at a time and the output reflected as a whole, regardless of whether I use CRC8, CRC16 or CRC32?
C) What do the 3rd (Check) and 8th (XorOut) columns on the website I am referring to mean? The latter seems rather easy: I am guessing it is applied by XORing the final value after RefOut, just like the InitValue?
Let's take this a step at a time. You now know how to correctly calculate CRC-16/BUYPASS, so we'll start from there.
Let's take a look at CRC-16/CCITT-FALSE. That one has an initial value that is not zero, but still has RefIn and RefOut as false, like CRC-16/BUYPASS. To compute the CRC-16/CCITT-FALSE on your data, you exclusive-or the first 16 bits of your data with the Init value of 0xffff. That gives fe ef C0 03 00 01. Now do what you know on that, but with the polynomial 0x11021. You will get what is in the table, 0xb53f.
Now you know how to apply Init. The next step is dealing with RefIn and RefOut being true. We'll use CRC-16/ARC as an example. RefIn means that we reflect the bits in each byte of input. RefOut means that we reflect the bits of the remainder. The input message is then: 80 08 03 c0 00 80. Dividing by the polynomial 0x18005 we get 0xb34b. Now we reflect all of those bits (not in each byte, but all 16 bits), and we get 0xd2cd. That is what you see as the result in the table.
We now have what we need to compute CRC-16/MODBUS, which has both a non-zero Init value (0xffff) and RefIn and RefOut as true. We start with the message with the bits in each byte reflected and the first 16 bits inverted. That is 7f f7 03 c0 00 80. Divide by 0x18005 and you get the remainder 0xb393. Reflect those bits and we get 0xc9cd, the expected result.
The exclusive-or of Init is applied after the reflection, which you can verify using CRC-16/RIELLO in that table.
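For completeness, here is a short Python sketch that follows the steps above literally for CRC-16/MODBUS: reflect each input byte, exclusive-or the Init value into the first 16 bits, divide by 0x18005 with 16 appended zero bits, then reflect the 16-bit remainder. The function names are mine; running it on 01 10 C0 03 00 01 reproduces 0xc9cd.

def reflect(value, width):
    # Reverse the bit order of a width-bit value.
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def crc16_modbus_longhand(data):
    # RefIn: reflect the bits within each input byte.
    message = bytes(reflect(b, 8) for b in data)
    n = len(message) * 8
    # Append 16 zero bits and XOR the Init value 0xFFFF into the first 16 message bits.
    value = (int.from_bytes(message, 'big') << 16) ^ (0xFFFF << n)
    # Plain polynomial division (XOR) by 0x18005.
    for shift in range(n - 1, -1, -1):
        if value & (1 << (shift + 16)):
            value ^= 0x18005 << shift
    # RefOut: reflect the 16-bit remainder.
    return reflect(value & 0xFFFF, 16)

print(hex(crc16_modbus_longhand(bytes.fromhex('0110C0030001'))))  # -> 0xc9cd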
Answers for added questions:
A) RefIn has nothing to do with the padded bits. You reflect the input bytes. However in a real calculation, you reflect the polynomial instead, which takes care of both reflections.
B) Yes.
C) Yes, XorOut is what you exclusive-or the final result with. Check is the CRC of the nine bytes "123456789" in ASCII.
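To illustrate point A, here is the usual shortcut in Python (the names are mine): keep the input bytes as they are, use the reflected polynomial 0xA001 and shift right, and no explicit input or output reflection is needed. It gives the same 0xc9cd for the example message.

def crc16_modbus(data):
    crc = 0xFFFF                            # Init value
    for byte in data:
        crc ^= byte                         # bring in the next input byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # 0xA001 is 0x8005 with its bits reflected
            else:
                crc >>= 1
    return crc                              # XorOut is 0x0000, so nothing more to do

print(hex(crc16_modbus(bytes.fromhex('0110C0030001'))))  # -> 0xc9cd
print(hex(crc16_modbus(b'123456789')))                   # -> 0x4b37, the Check value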
My problem is as follows. As inputs I have sequences of whole numbers, around 200-500 per sequence. Each number in a sequence is marked as good or bad. The first number in each sequence is always good, but whether or not subsequent numbers are still considered good is determined by which numbers came before them. There is a mathematical function which governs how the numbers affect those that come after them, but the specifics of this function are unknown. All we know for sure is that it starts off accepting every number and then gradually starts rejecting numbers until finally every number is considered bad. Out of every sequence, only around 50 numbers will ever be accepted before this happens.
It is possible that the validity of a number is not only determined by which numbers came before it, but also by whether these numbers were themselves considered good or bad.
For example: (good numbers in bold)
4 17 8 47 52 18 13 88 92 55 8 66 76 85 36 ...
92 13 28 12 36 73 82 14 18 10 11 21 33 98 1 ...
Attempting to determine the logic behind the system through guesswork seems like an impossible task. So my question is, can a neural network be trained to predict if a number will be good or bad? If so, approximately how many sequences would be required to train it? (assuming sequences of 200-500 numbers that are 32 bit integers)
Since your data is sequential and there are dependencies between the numbers, it should be possible to train a recurrent neural network (RNN). The recurrent weights take care of the relationships between numbers.
As a general rule of thumb, the more uncorrelated input sequences you have, the better. This survey article can help you get started with RNNs: https://arxiv.org/abs/1801.01078
This is definitely possible. @salehinejad gives a good answer, but you might want to look at specific RNN architectures, such as the LSTM!
LSTMs are very good for sequence prediction. You just feed the network the numbers one by one (sequentially).
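If it helps, here is a minimal PyTorch sketch of that idea (every layer size, the class name, and the scaling of the inputs are placeholder assumptions, not a tuned model): an LSTM reads the numbers one by one and emits one good/bad logit per position.

import torch
import torch.nn as nn

class GoodBadLSTM(nn.Module):
    # One good/bad prediction per position in the sequence.
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, seq_len, 1), numbers scaled to [0, 1]
        out, _ = self.lstm(x)                # (batch, seq_len, hidden)
        return self.head(out).squeeze(-1)    # one logit per position

model = GoodBadLSTM()
dummy = torch.rand(8, 300, 1)                # 8 dummy sequences of length 300
logits = model(dummy)                        # train against 0/1 labels with BCEWithLogitsLoss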
I have 256 blocks with 16 bytes per block. I'm trying to determine hit or miss for the hexadecimal addresses below in a 2-way set associative cache. I'm unsure whether the second access could be a miss because of the 2-way set associativity; I think it is a hit, but I'm not sure.
2ABC10A2
2ABC10A7
4BBC10A0
2ABC10A9
So if I have 16 bytes per block, that is 2^4, so 4 offset bits, which means my offsets are 2, 7, 0 and 9 respectively. If I have 256 blocks, that is 2^8, so an 8-bit index, which gives 0A, and the remaining bits are the tag. I think I'm right up to here. So I built the table below, but I'm not sure about the miss/hit part. Is it right? If there are mistakes, could you fix them? I want to learn. Thanks.
TAG    INDEX  BLOCK DATA                                  HIT/MISS
2ABC1  0A     2ABC10A0 + 16 BYTE (2ABC10A0 - 2ABC10AF)    MISS
2ABC1  0A     2ABC10A0 + 16 BYTE                          HIT
4BBC1  0A     4BBC10A0 + 16 BYTE                          MISS
2ABC1  0A     2ABC10A0 + 16 BYTE                          HIT
The miss/hit part is correct.
The index bit width is 7, not 8. For a 256-block, 2-way set associative cache, the index bit width is log2(256/2) = 7.
To be more accurate, the miss/hit part is correct assuming that all the accesses are loads (read operations). If stores (write operations) are included, then it depends on the choice of cache write policies.
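A small Python sketch of that breakdown (4 offset bits, 7 index bits, 2 ways; LRU replacement is assumed here) reproduces the MISS, HIT, MISS, HIT column above:

OFFSET_BITS, INDEX_BITS, WAYS = 4, 7, 2        # 16-byte blocks, 256 blocks / 2 ways = 128 sets
accesses = [0x2ABC10A2, 0x2ABC10A7, 0x4BBC10A0, 0x2ABC10A9]
sets = {}                                      # index -> tags currently cached, oldest first
for addr in accesses:
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    ways = sets.setdefault(index, [])
    if tag in ways:
        ways.remove(tag)
        ways.append(tag)                       # refresh LRU order
        print(hex(addr), 'HIT')
    else:
        if len(ways) == WAYS:
            ways.pop(0)                        # evict the least recently used way
        ways.append(tag)
        print(hex(addr), 'MISS')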
I am having a problem with the I2C driver for a Freescale p1022tw board. There is a command on U-Boot's console to read from an I2C device:
i2c md chip address[.0, .1, .2] [# of objects]
When I read 4 bytes from a device with id 0x60, at address 0x0, I get:
tw=>i2c md 60 0 4
0000: 45 45 45 45 EEEE
These values that it returned are wrong. I can get the right values if I read one byte at a time:
tw=>i2c md 60 0 1
0000: 45 E
tw=>i2c md 60 1 1
0001: 45 E
tw=>i2c md 60 2 1
0002: 46 F
tw=>i2c md 60 3 1
0003: 00 .
I should have gotten 45 45 46 00 or EEF0 from the first command. In multi-byte reads from this device, it always returns just the first byte's value. If I try to get 6 bytes starting at address 0x2, this is the output:
tw=>i2c md 60 2 6
0002: 46 46 46 46 46 46 FFFFFF
This problem does not happen on other devices on the bus. For instance, in the device with id 0x4F, the right values are printed:
tw=>i2c md 4F 0.2 6
0000: 18 00 f6 48 00 00 ...H..
The address in the previous command has a ".2" because the chip uses 2 bytes for addresses. The first device only uses 1, so there's no need to put a ".1" (I already tested that).
I went through the implementation of the Freescale driver for the I2C communication, but I didn't change anything in it, and it works for other devices. My coworker also says that the very same code works on his board. Has anybody had a similar issue, or does anyone have a theory about why this is happening?
Thanks in advance.
I have run into a similar situation. I had a driver with read and write functions, and it did not work for all I2C devices. I found the cause: the non-working device used a different operating format for a number of operations. Unfortunately this happens; there are some non-standard protocols out there. If you open the documentation for the problem device and compare it to the working one and/or to the driver implementation, you will most likely see a difference.