I'm working on a project where I need to send a value between two pieces of hardware using CoDeSys. The comms system in use is CAN, which can only transmit in bytes, making the maximum value per byte 255.
I need to send a value higher than 255. I'm happy to split it over more than one byte and reconstruct it on the receiving machine to get the original value.
I'm thinking I can divide the REAL value by 255 and, if the result is over 1, deconstruct the value into one byte holding the remainder and one byte holding the number of 255s in the whole number.
For example, 355 would amount to one byte of 100 and another of 1.
Whilst I can describe this, I'm having a really hard time figuring out how to actually write it in logic.
Can anyone help here?
This is all handled for you in CoDeSys if I understand you correctly.
1. CAN - Yes, it's byte-oriented, but then you must not be using CANopen - are you using the low-level FB that asks you to send a CAN frame as an 8-byte array?
If these are your own two custom controllers (you are programming both of them in CoDeSys), just use netvariables. Netvariables allow you to transfer a variable of any type: you can take the variable list from one controller, import it into the other, and all the data will show up. You don't have to do any variable manipulation; it's handled under the hood for you. But I don't know the specifics of your system and what you are trying to do.
If you are trying to deconstruct/reconstruct variables from one size to another, that is easy and I can share that code with you.
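For illustration, here is a minimal sketch of the idea in Java (not the answerer's actual CoDeSys code; the same bit operations translate directly to Structured Text). It uses the conventional base-256 split - dividing by 256 rather than 255 - since that maps straight onto shift and mask operations:

    // Minimal sketch: split a 16-bit value into two bytes and reconstruct it.
    // Base-256 variant of the divide-by-255 idea in the question: the high byte
    // holds the number of complete 256s, the low byte holds the remainder.
    public class ByteSplit {
        public static void main(String[] args) {
            int value = 355;                     // value to transmit (0..65535)

            int highByte = (value >> 8) & 0xFF;  // 355 / 256 = 1
            int lowByte  = value & 0xFF;         // 355 % 256 = 99

            // ... send highByte and lowByte as two bytes of the CAN frame ...

            int reconstructed = (highByte << 8) | lowByte;  // 1*256 + 99 = 355
            System.out.println(reconstructed);              // prints 355
        }
    }

With the base-256 variant, 355 splits into a high byte of 1 and a low byte of 99, and any value up to 65535 fits in two bytes.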
I am generating random OTP-style strings that serve as a short-term identifier to link two otherwise unrelated systems (which have authentication at each end). These need to be read and re-entered by users, so in order to reduce the error rate and reduce the opportunities for forgery, I'd like to make one of the digits a check digit. At present my random string conforms to the pattern (removing I and O to avoid confusion):
^[ABCDEFGHJKLMNPQRSTUVWXYZ][0-9]{4}$
I want to append one extra decimal digit for the check. So far I've implemented this as a BLAKE2 hash (from libsodium) that's converted to decimal and truncated to 1 char. This gives only 10 possibilities for the check digit, which isn't much. My primary objective is to detect single character errors in the input.
This approach kind of works, but it seems one digit is not enough to detect single-character errors reliably, and undetected errors are quite easy to find: for example, K37705 and K36705 are both considered valid.
I do not have a time value baked into this OTP; instead it's purely random and I'm relying on keeping a record of the OTPs that have been generated recently for each user, which are deleted periodically, and I'm reducing opportunities for brute-forcing by rate and attempt-count limiting.
I'm guessing that BLAKE2 isn't a good choice here, but given there are only 10 possibilities for the result, I don't know that others will be better. What would be a better algorithm/approach to use?
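For comparison, a deterministic position-weighted checksum guarantees that any single-digit error changes the check digit, which a truncated hash cannot. Here is a sketch under the pattern above (the weights are units mod 10, so changing one digit always changes the sum; the 24-letter first position cannot map injectively into 10 values, so some letter substitutions still collide):

    // Hedged sketch: weighted checksum over the ^[A-Z][0-9]{4}$ pattern above.
    // Weights 1, 3, 7, 9 are coprime with 10, so a single changed digit in any
    // numeric position always changes the check digit. The letter position has
    // 24 values, so pairs such as A/L (indices 0 and 10, weight 3) can collide.
    public class CheckDigit {
        private static final String LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ"; // I, O removed
        private static final int[] WEIGHTS = {3, 1, 3, 7, 9};

        static int checkDigit(String code) { // code matches ^[A-Z][0-9]{4}$
            int sum = LETTERS.indexOf(code.charAt(0)) * WEIGHTS[0];
            for (int i = 1; i < 5; i++) {
                sum += (code.charAt(i) - '0') * WEIGHTS[i];
            }
            return sum % 10;
        }

        public static void main(String[] args) {
            System.out.println("K3770" + checkDigit("K3770")); // K37700
            System.out.println("K3670" + checkDigit("K3670")); // K36707 - differs
        }
    }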
Frame challenge
Why do you need a check digit?
It doesn't improve security, and five digits is trivial for most humans to get correct. Check it server-side and return an error message if it's wrong.
Normal TOTP tokens are commonly 6 digits, and actors such as Google have determined that people in general manage to get them correct.
I have a slave device with multiple TPDOs (4) for sending certain sensor data. Each TPDO has about 4 bytes of data, and I want to insert a 'count' in the frame to indicate the data is not stale. My plan is to create an object entry for this and map it to each PDO as the 5th byte. Is this allowed by the CANopen standard, and if so, is this a good idea at all?
PS: I am not sending all 8 bytes in 1 TPDO because the 4 bytes within one TPDO correlate to each other.
Yes, it is allowed to map a (sub)object to multiple PDOs, or even multiple times to the same PDO. When using dummy mappings in RPDOs, this is actually quite common.
Whether inserting a count is a good idea depends on what you are trying to achieve. What is the problem you are trying to detect and how do you want to handle it?
If you want to check that the slave is alive and healthy, use heartbeats. If you want to check that you didn't miss a PDO, there are other ways. For SYNC-driven PDOs, you can set a flag for each PDO when you receive it and at the SYNC, check if you received them all before clearing the flags. For event-driven PDOs, you can use the event timer in the RPDO to generate an error if a PDO didn't arrive within a certain time.
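A rough sketch of that SYNC-driven bookkeeping (the method names are hypothetical, not part of any particular CANopen stack API):

    // Illustrative sketch: per-PDO flags checked at each SYNC.
    public class PdoWatch {
        private static final int NUM_TPDOS = 4;
        private final boolean[] received = new boolean[NUM_TPDOS];

        // Call when TPDO n (0..3) arrives.
        void onPdoReceived(int n) {
            received[n] = true;
        }

        // Call on each SYNC: report any PDO that didn't arrive, then clear the flags.
        void onSync() {
            for (int n = 0; n < NUM_TPDOS; n++) {
                if (!received[n]) {
                    System.err.println("Missed TPDO " + n + " in this SYNC window");
                }
                received[n] = false;
            }
        }
    }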
Inserting a counter will work and help you detect how many PDOs you missed. But the question is, what can you do with that information? The last PDO, even if "stale", is usually still the best guess for the value at the receiving side.
I've got a piece of code that takes into account a given number of features, where each feature is Boolean. I'm looking for the most efficient way to store a set of such features. My initial thought was to store these as a BitSet. But then I realized that this implementation is meant to store numbers in bit format rather than manipulate each bit, which is something I'd like to do (to see the effect of switching any feature on and off). I then thought of using a Boolean array, but apparently the JVM uses much more memory for each Boolean element than the one bit it actually needs.
I'm therefore left with the question: What is the most efficient way to store a set of bits that I'd like to treat as independent bits rather than the building blocks of some number?
Please refer to this question: boolean[] vs. BitSet: Which is more efficient?
According to Peter Lawrey's answer there, boolean[] (not Boolean[]) is the way to go, since its values can be manipulated directly and it takes only one byte of memory per element. Consider that there is no way for a JVM application to store one bit in only one bit of memory and have it directly (array-like) manipulable: finding the bit's address needs a pointer, and the smallest addressable unit is a byte.
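A quick sketch of both options in plain Java: boolean[] trades memory (one byte per flag) for direct indexed access, while java.util.BitSet packs one bit per flag and can still toggle individual bits:

    import java.util.BitSet;

    public class FeatureFlags {
        public static void main(String[] args) {
            boolean[] asArray = new boolean[64];  // one byte per feature
            asArray[3] = !asArray[3];             // toggle feature 3 on

            BitSet asBits = new BitSet(64);       // one bit per feature
            asBits.flip(3);                       // toggle feature 3 on

            System.out.println(asArray[3] + " " + asBits.get(3)); // true true
        }
    }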
The site you referenced already states that the mutable BitSet is the same as java.util.BitSet, and there is nothing you can do in Java that you can't do in Scala. But since you are using Scala, you probably want a safe implementation, possibly even a multithreaded one. Mutable datatypes are not suitable for that, so I would simply use an immutable BitSet and accept the memory cost.
However, BitSets have their limits: they are indexed by int, so they top out around 2^31 bits. If you need larger data sizes, you may use LongBitSets, which are basically a Map<Long, BitSet>. If you need even more space, you may nest them in another map, Map<Long, LongBitSet>, but in that case you need to use two or more identifiers (longs).
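A hedged sketch of that Map<Long, BitSet> idea (the class name LongBitSet here is illustrative, not a reference to any particular library):

    import java.util.BitSet;
    import java.util.HashMap;
    import java.util.Map;

    // The long index is split into a chunk key and an int offset within a BitSet.
    public class LongBitSet {
        private static final long CHUNK = 1L << 31;  // max bits addressable by int
        private final Map<Long, BitSet> chunks = new HashMap<>();

        public void set(long index, boolean value) {
            chunks.computeIfAbsent(index / CHUNK, k -> new BitSet())
                  .set((int) (index % CHUNK), value);
        }

        public boolean get(long index) {
            BitSet chunk = chunks.get(index / CHUNK);
            return chunk != null && chunk.get((int) (index % CHUNK));
        }
    }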
I have a product that is basically a USB flash drive based on an NXP LPC18xx microcontroller. I'm using a library provided by the manufacturer (LPCOpen) that handles the USB MSC and the SD card media (which is where I store data).
Here is the problem: internally, the LPC18xx has a 64 kB buffer (limited by hardware) used to cache reads/writes, which means it can only cache up to 128 blocks (512 B each) of memory. The SCSI WRITE (10) command has a transfer-length field that can request up to 256 blocks (128 kB). When originally testing the product on Windows 7, it never wrote more than 128 blocks at a time, but under Linux it sometimes writes more than 128 blocks, which causes the microcontroller to crash.
Is there a way to tell the host OS not to request more than 128 blocks? I see references[1] to a READ BLOCK LIMITS command (05h), but it doesn't seem to be widely supported. Also, what sense key would I return on the WRITE (10) command to tell Linux the write is too large? I also see references to a Block Limits VPD page in some device spec sheets but cannot find much documentation about how it is implemented.
[1]https://en.wikipedia.org/wiki/SCSI_command
Let me offer a disclaimer up front that this is what you SHOULD do, but none of this may work. A cursory search of the Linux SCSI driver didn't show me what I wanted to see. So, I'm not at all sure that "doing the right thing" will get you the results you want.
Going by the book, you've got to do two things: implement the Block Limits VPD page and handle too-large transfer sizes in READ and WRITE.
First, implement the Block Limits VPD page, which you can find in late revisions of SBC-3 floating around on the Internet (like this one: http://www.13thmonkey.org/documentation/SCSI/sbc3r25.pdf). It's probably worth going to the t10.org site, registering, and then downloading the last revision (http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r36.pdf).
The Block Limits VPD page has a maximum transfer length field that specifies the maximum number of blocks that can be transferred by all the READ and WRITE commands, and basically anything else that reads or writes data. Of course the downside of implementing this page is that you have to make sure that all the other fields you return are correct!
Second, when handling READ and WRITE, if the command's transfer length exceeds your maximum, respond with an ILLEGAL REQUEST key, and set the additional sense code to INVALID FIELD IN CDB. This behavior is indicated by a table in the section that describes the Block Limits VPD, but only in late revisions of SBC-3 (I'm looking at 35h).
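As a sketch of that check (written in Java here for consistency with the other examples; real firmware such as the LPCOpen MSC layer is C, but the decision logic and the standard SCSI sense values are the same):

    // Hedged sketch: reject a WRITE (10) whose transfer length exceeds the buffer.
    public class Write10Check {
        static final int MAX_BLOCKS = 128;  // 64 kB buffer / 512 B blocks

        // Standard SCSI sense values for ILLEGAL REQUEST / INVALID FIELD IN CDB.
        static final int SENSE_KEY_ILLEGAL_REQUEST = 0x05;
        static final int ASC_INVALID_FIELD_IN_CDB  = 0x24;
        static final int ASCQ_INVALID_FIELD_IN_CDB = 0x00;

        // cdb is the 10-byte WRITE (10) CDB; returns true if it must be rejected
        // with CHECK CONDITION and the sense data above instead of crashing.
        static boolean rejectIfTooLarge(byte[] cdb) {
            // Transfer length is a big-endian 16-bit field in CDB bytes 7..8.
            int transferLength = ((cdb[7] & 0xFF) << 8) | (cdb[8] & 0xFF);
            return transferLength > MAX_BLOCKS;
        }
    }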
You might just start with returning INVALID FIELD IN CDB, since it's the easiest course of action. See if that's enough?
I'm developing a system where badges will be created with a QR code for each user, and I need to read that QR code and show specific information to the user on the public screen.
QR code reading is a little 'tricky'. When I did something like this before, I was using MySQL with enumerated ids (1, 100, 2304, 9990)... which is only about 5 characters.
However, MongoDB keys (the DB that I'm using now) consist of a big key such as 52d35bf26bda8a5c8f8a22a8, which has many more characters.
The problem with that: the QR code becomes larger (more data, bigger size), and it becomes harder to read quickly on a webcam (even in HD).
So, here is my idea: use part of the id, so that 52d35bf26bda8a5c8f8a22a8 becomes perhaps 52d35bf26bd.
The question is really simple: can I safely use the partial id key without having re-occurrences? I will have at most on the order of 1000 elements.
The question has nothing to do with QR codes, but it explains the reason why I'm doing this.
ObjectId is a 12-byte BSON type, constructed using:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
Knowing that, it depends on which part of the ObjectId you choose and how much time passes between insertions. For example, the first 8 hex characters are the timestamp, so documents inserted in the same second share them, while the last 6 are the counter, which changes on every insertion.
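A small hedged sketch of that: take the tail of the id (which ends in the counter) and guard against collisions with a lookup. The set below stands in for a database check; none of this is MongoDB API:

    import java.util.HashSet;
    import java.util.Set;

    public class ShortId {
        private final Set<String> used = new HashSet<>(); // stand-in for a DB lookup

        // Last 11 hex chars cover the counter, process id and part of the machine id.
        String shorten(String objectId) {
            String candidate = objectId.substring(objectId.length() - 11);
            if (!used.add(candidate)) {
                throw new IllegalStateException("short id collision: " + candidate);
            }
            return candidate;
        }
    }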
Regardless of whether it's safe or not, the size difference between the QR codes isn't that great.
Using the full string gets you a noticeably larger and denser QR code than using half the characters. Even so, I would suggest that even the cheapest smartphone would be able to scan the larger of the two - it's not very complex at all.
Yes, you can safely compress a long hexadecimal string into a higher base to get fewer characters while retaining the same value.
Example:
Hex: 52d35bf26bda8a5c8f8a22a8
Base64: UtNb8mvailyPiiKo
The same idea can be taken further by using binary or even Chinese pictograph characters instead of base64.
This concept can be tested using a Numeral Base Converter Tool.
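For instance, a minimal Java sketch reproduces the conversion above (HexFormat requires Java 17+):

    import java.util.Base64;
    import java.util.HexFormat;

    public class IdCompress {
        public static void main(String[] args) {
            // 24 hex characters are 12 raw bytes; Base64 encodes them in 16 chars.
            byte[] raw = HexFormat.of().parseHex("52d35bf26bda8a5c8f8a22a8");
            String compact = Base64.getEncoder().withoutPadding().encodeToString(raw);
            System.out.println(compact); // UtNb8mvailyPiiKo
        }
    }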