I know that entries in memcached have a length limit, 1 MB by default, and that this value is configurable by passing -I when starting the daemon. When writing my client, I want to protect users from passing in values that are too large to fit. I ran some tests and found there is a 71-byte margin on my machine between item_size_max and key_length + value_length.
I have the default STAT item_size_max 1048576.
With byte[] key = new byte[250]; byte[] value = new byte[1048255]; the set succeeds, and 1048576 - 250 - 1048255 = 71.
With byte[] key = new byte[250]; byte[] value = new byte[1048256]; it fails.
I tried multiple total cache sizes, max item size limits, and key lengths; there is always a 71-byte margin between the sum of my key and value lengths and item_size_max.
I am wondering how I can figure out the longest value I can store, rather than the length of the whole entry.
Cheers! Thanks in advance.
Got a response on the memcached GitHub issue tracker:
https://github.com/memcached/memcached/issues/334
In short
there's a "./sizes" utility built which describes the overhead of various data structures.
A simple approach could be to set the max value size to 1M minus 2x the maximum key length (500 bytes).
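Following that suggestion, a client-side guard could look like this (a Python sketch; the names and the 500-byte margin are my own, based on the issue response above, not part of any memcached API):

```python
ITEM_SIZE_MAX = 1048576          # memcached default, tunable with -I
MAX_KEY_LEN = 250                # memcached's hard limit on key length
SAFETY_MARGIN = 2 * MAX_KEY_LEN  # 500 bytes, comfortably above the ~71-byte overhead

def max_value_len(key: bytes, item_size_max: int = ITEM_SIZE_MAX) -> int:
    """Largest value length this client will accept for the given key."""
    if len(key) > MAX_KEY_LEN:
        raise ValueError("key exceeds memcached's 250-byte key limit")
    return item_size_max - SAFETY_MARGIN - len(key)
```

The conservative margin trades a few hundred wasted bytes for independence from the exact per-item overhead, which varies between builds (hence the ./sizes utility).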
Related
[EDIT: Thanks to #JRLtechwriting, the Storage Size Calculations doc mentioned below has been updated: it no longer mentions namespaces (Firestore does not, yet, support them) and it includes more complete examples. After these improvements, my question may not come up again!]
I'm trying to write a general function to calculate the storage size of a Cloud Firestore document, but I'm already stuck on calculating the size of the document's name because I don't know exactly what they mean by a document's "namespace" in the Storage Size Calculations guide:
The size of a document name is the sum of:
The namespace string size (if not in the default namespace)
The full string size of the document name (integer IDs are 8 bytes each)
16 additional bytes
It also says that the namespace is stored as a string. So, for this hypothetical CFS doc...
var alovelaceDocumentRef = db.collection('users').doc('alovelace');
...which, per the Cloud Firestore Data Model docs, can also be referenced like this...
var alovelaceDocumentRef = db.doc('users/alovelace');
...would the namespace string be 'users'? Or maybe 'users/'? Unfortunately, all of the examples in the Storage Size Calculations guide assume the default namespace (for which the size is 0).
I feel like I should be able to experimentally find the answer to my question, but the only way I can think of to do so is to:
Create a document in a non-default namespace
Track its size in a variable docSize (using the information in the Storage Size Calculations guide) as I incrementally add data to it
When I get an error message that I have exceeded the maximum document size (1,048,576 bytes, according to the Quotas and Limits guide), subtract docSize from 1,048,576 to get the size of the namespace string
But this approach seems labor-intensive, and probably prone to inaccuracies arising from other limitations of my understanding/knowledge, so I'm hoping one of you more-knowledgeable folks can help. Thanks!
Firestore does not support different namespaces (see this SO answer) so all documents will be in the default namespace. The namespace string size will always be 0.
I help maintain the Firestore docs, so I updated the page.
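Given the rules quoted above, plus the documented rule that a string's size is its UTF-8 byte length plus 1, the name-size calculation for string-ID documents can be sketched like this (Python; the function names are my own, the sketch ignores the 8-byte rule for integer IDs, and the namespace term is always 0 per the answer above):

```python
def string_size(s: str) -> int:
    """Firestore string size: number of UTF-8 encoded bytes, plus 1."""
    return len(s.encode("utf-8")) + 1

def document_name_size(path: str) -> int:
    """Size of a document name like 'users/alovelace': the string size of
    each collection ID and document ID in the path, plus 16 bytes.
    (The namespace contributes 0 -- only the default namespace exists.)"""
    return sum(string_size(part) for part in path.split("/")) + 16

# 'users' (6 bytes) + 'alovelace' (10 bytes) + 16 bytes = 32 bytes
assert document_name_size("users/alovelace") == 32
```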
I'm working on a project where I need to send a value between two pieces of hardware using CoDeSys. The comms system in use is CAN, which transmits in bytes, so the maximum value per byte is 255.
I need to send values higher than 255, so I want to split a value across more than one byte and reconstruct it on the receiving machine to get the original value.
I'm thinking I can divide the REAL value by 255 and, if the result is over 1, deconstruct the value into one byte holding the remainder and another holding the number of whole 255s.
For example, 355 would become one byte of 100 and another of 1.
Whilst I can describe this, I'm having a really hard time figuring out how to actually write this in logic.
Can anyone help here?
This is all handled for you in CoDeSys if I understand you correctly.
1. CAN - Yes, it works in bytes, but you must not be using CANopen; are you using the low-level FB that asks you to send a CAN frame as an 8-byte array?
If these are your own two custom controllers (you are programming both of them in CoDeSys), just use network variables. Network variables let you transfer any type of variable: you can export the variable list from one controller, import it into the other, and all the data will show up. You don't have to do any variable manipulation; it's handled under the hood for you. But I don't know the specifics of your system and what you are trying to do.
If you are trying to deconstruct and reconstruct variables from one size to another, that is easy, and I can share that code with you.
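As an illustration of the deconstruct/reconstruct idea, here is the logic sketched in Python (in a real CoDeSys project this would be ST with WORD/BYTE types, but the arithmetic is identical; note it uses base 256 rather than the 255 proposed in the question, so the two bytes are exactly the high and low bytes of a 16-bit value):

```python
def split_word(value: int) -> tuple[int, int]:
    """Split a 0..65535 value into (high_byte, low_byte) using base 256."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("value must fit in 16 bits")
    return value // 256, value % 256

def join_word(high: int, low: int) -> int:
    """Reconstruct the original value on the receiving side."""
    return high * 256 + low

# 355 splits into high=1, low=99; joining restores 355
assert join_word(*split_word(355)) == 355
```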
I am implementing a HMAC-like solution based upon specifications provided to me by another company. The hashing parameters and use of the secret key is not an issue, and neither is the distribution of the key itself, since we are in close contact and close geographical location.
However - what is best practice for the actual secret key value?
Since both companies are working together, it would seem that
c9ac56dd392a3206fc80145406517d02 (generated with a Rijndael algorithm) and
Daisy Daisy give me your answer do
are both pretty much equally secure (in this context) as a secret key used in the hash?
Citing Wikipedia page on HMAC:
The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output, and on the size and quality of the key.
This means that a completely random key, where every bit is randomly generated, is far better than a string of characters.
The optimum key size equals the hash's block size. If the key is too short, it is padded, usually with zeroes (which are not random). If the key is too long, it is first run through the hash function, reducing it to the hash's output size.
Using only visible characters makes the key easier to guess, because there are far fewer combinations of visible characters than of arbitrary bits. For example:
There are 95 visible characters in ASCII (out of 256 byte values). For a 16-byte key (the MD5 output size) there are 95^16 ≈ 4.4*10^31 combinations of visible characters, but 3.4*10^38 possible 16-byte keys. An attacker who knows the key consists only of visible ASCII characters needs roughly 10,000,000 times less time than if he had to consider every possible combination of bits.
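The arithmetic can be checked directly:

```python
printable_keys = 95 ** 16    # 16-byte keys using only visible ASCII characters
all_keys = 256 ** 16         # every possible 16-byte key

# The full key space is roughly 7.7 million times larger
ratio = all_keys // printable_keys
```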
Summarizing, I recommend using a cryptographic pseudo-random number generator to generate secret keys instead of coming up with your own.
Edit:
As martinstoeckli suggested, if you have to, you can use a key-derivation function to generate a byte key of a specified length from a text password. This is much safer than converting the plain text to bytes and using those bytes directly as the key. Nevertheless, there is nothing more secure than a key consisting of random bytes.
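A minimal sketch of both recommendations in Python, using the standard library (the salt and iteration count are illustrative placeholders only, not recommended production values):

```python
import hashlib
import hmac
import secrets

# Preferred: a fully random key (here 64 bytes, the SHA-256 block size).
key = secrets.token_bytes(64)

# If you must start from a text password, derive the key bytes with a
# key-derivation function rather than using the text's bytes directly.
derived_key = hashlib.pbkdf2_hmac(
    "sha256",
    b"Daisy Daisy give me your answer do",
    b"example-salt",   # illustrative only; use a random salt in practice
    100_000,           # illustrative iteration count
    dklen=64,
)

tag = hmac.new(key, b"message to authenticate", hashlib.sha256).hexdigest()
```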
I'm developing a system in which badges will be created with a QR code for each user, and I need to read that QR code and show specific information to the user on a public screen.
QR code reading is a little tricky. When I did something like this before, I was using MySQL with enumerated ids (1, 100, 2304, 9990), which are only about 5 characters.
However, MongoDB keys (the DB I'm using now) are big keys such as 52d35bf26bda8a5c8f8a22a8, which have many characters.
The problem with that: the QR code becomes larger (more data means a bigger code) and harder to read quickly on a webcam (even in HD).
So here is my idea: use part of the id, so that 52d35bf26bda8a5c8f8a22a8 becomes, say, 52d35bf26bd.
The question is really simple: can I safely use the partial id key without collisions? I will have at most on the order of 1,000 elements.
The question has nothing to do with QR codes, but it explains why I'm doing this.
ObjectId is a 12-byte BSON type, constructed using:
a 4-byte value representing the seconds since the Unix epoch,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
Knowing that, it depends on which part of the ObjectId you choose and how much time passes between insertions.
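For the roughly 1,000 documents in the question, a birthday-bound estimate gives a feel for the risk (a sketch that treats the kept characters as uniformly random; note the leading 8 hex characters are a timestamp, so for documents inserted in the same second it is safer to keep trailing characters, which include the counter):

```python
import math

def collision_probability(n: int, hex_chars: int) -> float:
    """Birthday-bound chance of any collision among n random hex prefixes."""
    space = 16 ** hex_chars
    return 1.0 - math.exp(-n * (n - 1) / (2 * space))

# ~1,000 ids against 11 hex characters: the risk is negligible...
p_ok = collision_probability(1000, 11)
# ...but with only 4 characters a collision is almost certain.
p_bad = collision_probability(1000, 4)
```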
Regards
Regardless of whether it's safe or not, the size difference between the QR codes isn't that great.
Using the full string will get you an image like:
Using half the characters produces a code like:
I would suggest that even the cheapest smartphone would be able to scan the larger of the two images - it's not very complex at all.
Yes, you can safely compress a long hexadecimal string into a higher base to get fewer characters while retaining the same value.
Example:
Hex: 52d35bf26bda8a5c8f8a22a8
Base64: UtNb8mvailyPiiKo
The same idea can be taken further by using binary or even Chinese pictograph characters instead of base64.
This concept can be tested using a Numeral Base Converter Tool.
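In Python the round trip can be sketched with the standard library (if the shortened string ends up in a URL, base64.urlsafe_b64encode avoids the + and / characters):

```python
import base64

def hex_to_b64(hex_id: str) -> str:
    """Re-encode a 24-character hex ObjectId as 16 base64 characters."""
    return base64.b64encode(bytes.fromhex(hex_id)).decode("ascii")

def b64_to_hex(b64_id: str) -> str:
    """Recover the original hex string."""
    return base64.b64decode(b64_id).hex()

assert hex_to_b64("52d35bf26bda8a5c8f8a22a8") == "UtNb8mvailyPiiKo"
assert b64_to_hex("UtNb8mvailyPiiKo") == "52d35bf26bda8a5c8f8a22a8"
```

Unlike truncation, this keeps the full 12 bytes of the id, so there is no collision risk at all.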
I'm wondering if there's some way to encode data (either binary or ASCII) into a printable image or data pattern that can easily be rescanned again and interpreted back into a file. The problem with QR codes is that they won't handle file sizes of 7-10KB. Any suggestions?
EDIT: One catch: Can't store said data on the server. Security reasons. The data must not exist anywhere except on a printed piece of paper.
7 kilobytes is 57,344 bits, so the graphical code needs a great many bars or squares (in the case of QR) to represent the data (a single QR code maxes out at 2,953 bytes of binary data), and that is before thinking about error correction, format information, positioning, alignment, etc.
I think a sound solution would be to put the data on a server, map it to an index, and create a service that retrieves the data by index.
The QR/barcode then encodes only the index, and the scanner fetches the data from the service.