I haven't found information about this anywhere. Is there a minimal required length for virus signatures? I've read in a book by Peter Szor that for 16-bit applications 16 bytes is enough even to avoid false positives. Is there an equivalent minimum for 32-bit and 64-bit applications too?
Thanks.
Maybe my experience will be useful to you:
I generated hex-code signatures using n-grams, opcodes and mnemonics, and I used more than 2000 viruses to mine signatures. The minimum length was 16 bytes and the maximum was 68 bytes. Signatures were created for both malware and benign files. The approach was heuristic and data-mining based.
The benign signatures were shorter than the malware ones. I thought this was because benign programs are written in high-level languages, so the compiler-generated code has more similarity, which reduces the length of the benign signatures. Malware, by contrast, is often written in comparatively low-level assembly, or in inline assembly (embedded within a high-level language), and so produces longer signatures.
Also, long signatures are useful for detailed analysis in offline scanning.
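If it helps, here is a minimal sketch of the byte-level n-gram extraction step (the 16-byte window is just an example); ranking the candidate n-grams against the malware and benign sets, which is where the real mining happens, is omitted:

```c
#include <stdio.h>

/* Slide a window of NGRAM bytes over a binary and print each
   n-gram as a hex string. A signature miner would then score
   these candidates by how often they occur in malware vs.
   benign sets; that bookkeeping is omitted here. */
#define NGRAM 16

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char win[NGRAM];
    if (fread(win, 1, NGRAM, f) < NGRAM) {  /* file shorter than one n-gram */
        fclose(f);
        return 0;
    }

    for (;;) {
        for (int i = 0; i < NGRAM; i++)
            printf("%02x", win[i]);
        putchar('\n');

        int c = fgetc(f);                   /* advance window by one byte */
        if (c == EOF)
            break;
        for (int i = 1; i < NGRAM; i++)
            win[i - 1] = win[i];
        win[NGRAM - 1] = (unsigned char)c;
    }

    fclose(f);
    return 0;
}
```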
Non-cryptographic hashes such as MurmurHash3 and xxHash are almost exclusively designed for hash tables, but they appear to function comparably (and even favorably) to CRC-32, Adler-32 and Fletcher-32. Non-crypto hashes are often faster than CRC-32 and produce more "random" output similar to slow cryptographic hashes (MD5, SHA). Despite this, I only ever see CRC-32 or MD5 recommended for data integrity/checksum purposes.
In the table below, I tested 32-bit checksum/CRC/hash functions to determine how well they detect small differences in data:
The results in each cell mean: (A) the number of collisions found, and (B) the minimum and maximum probability that any of the 32 output bits is set to 1. To pass test B, the max and min should be as close to 50 as possible. Anything under 45 or over 55 indicates bias.
Looking at the table, MurmurHash3 and Jenkins lookup2 compare favorably to CRC-32 (which actually fails one test). They are also well-distributed. DJB2 and FNV1a pass collision tests but aren't well distributed. Fletcher32 and Adler32 struggle with the NullBytes and 8RandBytes tests.
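For reference, test B amounts to a bit-bias check along these lines (FNV-1a is shown only as a stand-in for whichever function is under test):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Stand-in hash under test: 32-bit FNV-1a. */
static uint32_t fnv1a(const unsigned char *p, size_t n)
{
    uint32_t h = 2166136261u;
    while (n--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    enum { TRIALS = 1000000, KEYLEN = 8 };
    uint64_t ones[32] = {0};
    unsigned char key[KEYLEN];

    srand(12345);
    for (int t = 0; t < TRIALS; t++) {
        for (int i = 0; i < KEYLEN; i++)
            key[i] = (unsigned char)(rand() & 0xff);
        uint32_t h = fnv1a(key, KEYLEN);
        for (int b = 0; b < 32; b++)
            if (h & (1u << b))
                ones[b]++;
    }

    /* An unbiased hash sets each output bit ~50% of the time. */
    double min = 100.0, max = 0.0;
    for (int b = 0; b < 32; b++) {
        double pct = 100.0 * (double)ones[b] / TRIALS;
        if (pct < min) min = pct;
        if (pct > max) max = pct;
    }
    printf("bit-set probability: min %.2f, max %.2f\n", min, max);
    return 0;
}
```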
So then my question is, compared to other checksums, how suitable are 'non-cryptographic hashes' for detecting errors or differences in files? Is there any reason a CRC-32/Adler-32/CRC-64 might outperform any decent 32-bit/64-bit hash?
Is there any reason this function would be inferior to CRC-32 or Adler-32 for detecting errors in data?
Yes, for certain kinds of error characteristics. A CRC can be designed to very effectively detect small numbers of bit errors in a packet, as you might expect on an actual communications or storage channel. That's what it's designed for.
For large numbers of errors, any 32-bit check that fills the 32 bits and does a reasonably good job of being sensitive to all of the bits of the packet will work about as well as any other. So yours would be as good as a CRC-32, and a smidge better than an Adler-32. (Adler-32 deliberately does not use all possible 32-bit values: both of its 16-bit halves are reduced modulo 65521, so only 65521^2 of the 2^32 possible values can occur, giving it a slightly higher false positive rate than 32-bit checks that use all possible values.)
By the way, looking a little more at your algorithm, it does not distribute over all 32-bit values until you have many bytes of input. So your check would not be as good as any other 32-bit check on a large number of errors until you have covered the possible 32-bit values of the check.
I'm working with a microcontroller with native HW functions to calculate CRC32 hashes from chunks of memory, where the polynomial can be freely defined. It turns out that the system has different data links with different CRC bit-lengths, such as 16 and 8 bits, and I intend to use the hardware engine for those too.
In simple tests with online tools I've concluded that it is possible to find a 32-bit polynomial that gives the same result as an 8-bit CRC. For example:
hashing "a sample string" with 8-bit engine and poly 0xb7 yelds a result 0x97
hashing "a sample string" with 16-bit engine and poly 0xb700 yelds a result 0x9700
...32-bit engine and poly 0xb7000000 yelds a result 0x97000000
(with zero initial value and zero final xor, no reflections)
So, padding the poly with zeros and right-shifting the results seems to work.
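Here is a minimal software model of that observation (the crcN routine stands in for the configurable hardware engine; zero init, zero final XOR, no reflection, as in the examples above):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Generic MSB-first CRC, width 8..32, zero init, zero final XOR,
   no reflection. Models a hardware engine with a configurable
   polynomial. */
static uint32_t crcN(const unsigned char *p, size_t n,
                     uint32_t poly, int width)
{
    uint32_t topbit = 1u << (width - 1);
    uint32_t mask = (width == 32) ? 0xffffffffu : (1u << width) - 1;
    uint32_t crc = 0;

    while (n--) {
        crc ^= (uint32_t)*p++ << (width - 8);
        for (int k = 0; k < 8; k++)
            crc = (crc & topbit) ? (crc << 1) ^ poly : crc << 1;
        crc &= mask;
    }
    return crc;
}

int main(void)
{
    const char *s = "a sample string";
    size_t n = strlen(s);

    uint32_t c8  = crcN((const unsigned char *)s, n, 0xb7, 8);
    uint32_t c32 = crcN((const unsigned char *)s, n, 0xb7000000u, 32);

    /* With the poly padded with zeros, the 8-bit result appears in
       the top byte of the 32-bit register and the low 24 bits stay
       zero, because nothing ever shifts into them. */
    printf("crc8  = 0x%02x\n", c8);
    printf("crc32 = 0x%08x (top byte 0x%02x)\n", c32, c32 >> 24);
    return 0;
}
```

This also shows why the padding trick works: with a poly of 0xb7000000, the input bytes and the polynomial XORs only ever touch the top 8 bits, and left shifts never move anything into them from below, so those 8 bits evolve exactly like a standalone 8-bit CRC register.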
But is it 'always' possible to find a set of parameters that make 32-bit engines to work as 16 or 8 bit ones? (including poly, final xor, init val and inversions)
To provide more context and prevent 'bypass answers' like 'don't use the native engine': I have a scenario in a safety-critical system where it's necessary to prevent a common design error from propagating to redundant processing nodes. One solution for that is having software-based CRC calculation in one node, and hardware-based in its pair.
Yes, what you're doing will work in general for CRCs that are not reflected. The pre- and post-conditioning can be done very simply with code around the hardware instruction loop.
Assuming that the hardware CRC doesn't have an option for this, to do a reflected CRC you would need to reflect each input byte, and then reflect the final result. That may defeat the purpose of using a hardware CRC. (Though if your purpose is just to have a different implementation, then maybe it wouldn't.)
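A sketch of the reflection wrappers that would be needed (hw_crc32_update is a hypothetical stand-in for one byte-step of the unreflected hardware engine):

```c
#include <stddef.h>
#include <stdint.h>

/* Reverse the bits of an 8-bit value. */
static uint8_t reflect8(uint8_t b)
{
    b = (uint8_t)((b & 0xf0) >> 4 | (b & 0x0f) << 4);
    b = (uint8_t)((b & 0xcc) >> 2 | (b & 0x33) << 2);
    b = (uint8_t)((b & 0xaa) >> 1 | (b & 0x55) << 1);
    return b;
}

/* Reverse the bits of a 32-bit value. */
static uint32_t reflect32(uint32_t v)
{
    v = v >> 16 | v << 16;
    v = (v & 0xff00ff00u) >> 8 | (v & 0x00ff00ffu) << 8;
    v = (v & 0xf0f0f0f0u) >> 4 | (v & 0x0f0f0f0fu) << 4;
    v = (v & 0xccccccccu) >> 2 | (v & 0x33333333u) << 2;
    v = (v & 0xaaaaaaaau) >> 1 | (v & 0x55555555u) << 1;
    return v;
}

/* Hypothetical hardware primitive: one unreflected CRC-32 step. */
extern uint32_t hw_crc32_update(uint32_t crc, uint8_t byte);

/* Reflected CRC-32 on top of an unreflected engine: reflect the
   running value into the engine's domain, reflect each input byte
   on the way in, and reflect the final register on the way out. */
uint32_t reflected_crc32(uint32_t crc, const uint8_t *p, size_t n)
{
    crc = reflect32(crc);
    while (n--)
        crc = hw_crc32_update(crc, reflect8(*p++));
    return reflect32(crc);
}
```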
You don't have to guess; you can calculate it. Because a CRC is the remainder of division by a fixed polynomial (one with a non-zero constant term), it's a 1-to-1 function on inputs no wider than the CRC itself.
So CRC16, for example, has to produce 65536 (64K) unique results if you run it over the inputs 0 through 65535.
To see if you get the same outcome by taking parts of CRC32, run it over 0 through 65535, keep the 2 bytes that you want to keep, and then see if there is any collision.
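A brute-force version of that experiment might look like this (a plain software CRC-32 keeps the sketch self-contained):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Bitwise software CRC-32 (IEEE, reflected), enough for the test. */
static uint32_t crc32_sw(const unsigned char *p, size_t n)
{
    uint32_t crc = 0xffffffffu;
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xedb88320u & (0u - (crc & 1)));
    }
    return ~crc;
}

int main(void)
{
    static unsigned char seen[65536 / 8];  /* one bit per 16-bit value */
    unsigned collisions = 0;

    memset(seen, 0, sizeof seen);
    for (uint32_t i = 0; i <= 0xffff; i++) {
        unsigned char msg[2] = { (unsigned char)(i & 0xff),
                                 (unsigned char)(i >> 8) };
        uint16_t t = (uint16_t)crc32_sw(msg, 2);  /* keep the low 16 bits */
        if (seen[t >> 3] & (1u << (t & 7)))
            collisions++;
        else
            seen[t >> 3] |= (unsigned char)(1u << (t & 7));
    }
    printf("collisions among 65536 two-byte inputs: %u\n", collisions);
    return 0;
}
```

If the count is zero, the truncated CRC32 behaves like a proper 16-bit check over that input set; any non-zero count means it does not.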
If your data is 32 bits wide, this should not be an issue. The issue arises when you have numbers narrower than 32 bits and shuffle them around in a 32-bit space: their first and last bytes are not guaranteed to be uniformly distributed.
I was wondering, what is the maximum possible length of a CISC instruction on most of today's CISC architectures?
I haven't found the definitive answer yet, but it is suggested that it's 16 bytes long, in theory.
In the video, around 15:00, why does the speaker suggest "in theory", and why exactly 16 bytes?
In practice as well. For x86-64, AMD has limited the allowed instruction length to 15 bytes; after that, the instruction decoder will give up and signal an error.
Otherwise, with multiple instruction prefixes and override bytes, we wouldn't know exactly how long an instruction could get: there would be no limit at all if redundant repetitions of some prefixes were allowed.
Agner Fog describes the problem:
Executing three, four or five instructions simultaneously is not unusual. The limit is not the execution units, which we have plenty of, but the instruction decoder. The length of an instruction can be anywhere from one to fifteen bytes. If we want to decode several instructions simultaneously, then we have a serious problem. We have to know the length of the first instruction before we know where the second instruction begins. So we can't decode the second instruction before we have decoded the first instruction. The decoding is a serial process by nature, and it takes a lot of hardware to be able to decode multiple instructions per clock cycle. In other words, the decoding of instructions can be a serious bottleneck, and it becomes worse the more complicated the instruction codes are.
See the rest of his blog post here.
CISC is a design philosophy, not an architecture, so there's no such thing as a "CISC instruction length", only the instruction length of a specific CISC architecture (like x86 or Motorola 68k).
Talking specifically about x86, the limit is 15 bytes. Theoretically the instruction length could be unbounded, because prefixes can be repeated. However, that makes decoding difficult, so starting with the 80286 Intel limited instructions to 10 bytes, and then to 15 bytes in later ISA versions. For more information, read:
x86_64 ASM - maximum bytes for an instruction?
What is the maximum length an Intel 386 instruction without any prefixes?
Also note that RISC doesn't mean fixed-length instructions. Modern MIPS, ARM, RISC-V... all have variable-length instruction modes to increase code density.
Which algorithm provides safer "speedy checksums"? Choose only from MD4 and Adler32.
I'd use both of them for checksums. As both are "relatively speedy", you can't be safer than using both of these (admittedly ancient) options.
However, if you are looking for the safest single option, I'd say MD4.
Quote:
In the original SWID requirements, the acceptable collision rate was determined at 1 in 10 million. The degree of uniqueness here is important, but it does not have to be as robust as MD4's claim of 2^64 operations before a collision may occur.
Source: MD4-SWID.pdf
For short messages, Adler32 has a weakness to be aware of:
Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very short messages. He wrote "Briefly, the problem is that, for very short packets, Adler32 is guaranteed to give poor coverage of the available bits. Don't take my word for it, ask Mark Adler. :-)" The problem is that sum A does not wrap for short messages. The maximum value of A for a 128-byte message is 32640, which is below the value 65521 used by the modulo operation. An extended explanation can be found in RFC 3309, which mandates the use of CRC32 instead of Adler-32 for SCTP, the Stream Control Transmission Protocol.
and:
Running Adler, CRC32 and both on several sets of 1 million randomly generated url-like strings ranging from 16 to 128 characters in length, Adler produced duplicates in ~1% of cases; CRC32 produced ~0.2%; and in several runs the combination of both found just 2 duplicates (circa 0.002%, but not enough samples to be judged representative).
Source: Hashing urls with Adler32
Considering that the chances of a collision are high with Adler32, especially for short messages, MD4 has my vote.
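To see the quoted Adler32 weakness concretely, here is a minimal sketch computing the A sum for the worst-case 128-byte message; A stays well below the modulus 65521, so short inputs can never cover the full output space:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Textbook Adler-32 (modulo 65521). */
static uint32_t adler32(const unsigned char *p, size_t n)
{
    const uint32_t MOD = 65521;
    uint32_t a = 1, b = 0;
    while (n--) {
        a = (a + *p++) % MOD;
        b = (b + a) % MOD;
    }
    return b << 16 | a;
}

int main(void)
{
    /* Worst case for sum A on a 128-byte message: all bytes 0xFF. */
    unsigned char msg[128];
    memset(msg, 0xff, sizeof msg);

    uint32_t sum = adler32(msg, sizeof msg);
    /* A (the low 16 bits) never comes close to the modulus for
       messages this short, so part of the output space is unused. */
    printf("A = %u, modulus = 65521\n", sum & 0xffff);
    return 0;
}
```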
Given that SSE 4.2 (Intel Core i7 & i5 parts) includes a CRC32 instruction, it seems reasonable to investigate whether one could build a faster general-purpose hash function. According to this, only 16 bits of a CRC32 are evenly distributed. So what other transformation would one apply to overcome that?
Update
How about this? Only 16 bits are suitable for a hash value. Fine. If your table has 65535 entries or fewer, then great. If not, run the CRC value through the Nehalem POPCNT (population count) instruction to get the number of bits set. Then use that as an index into an array of tables. This works if your table is south of 1 million entries. I'd bet that's cheaper/faster than the best-performing hash functions. Now that GCC 4.5 has a CRC32 intrinsic, it should be easy to test... if only I had the copious spare time to work on it.
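For what it's worth, a sketch of that scheme using the SSE 4.2 intrinsics (compile with -msse4.2; the 33 x 64K table layout is purely illustrative):

```c
#include <stdint.h>
#include <stddef.h>
#include <nmmintrin.h>   /* _mm_crc32_u8, _mm_popcnt_u32 (SSE 4.2) */

/* Hash a key with the hardware CRC32C instruction, byte by byte. */
static uint32_t crc32c_hash(const unsigned char *p, size_t n)
{
    uint32_t crc = 0xffffffffu;
    while (n--)
        crc = _mm_crc32_u8(crc, *p++);
    return ~crc;
}

/* The proposed two-level lookup: POPCNT of the CRC (0..32) picks
   one of 33 sub-tables, and the low 16 bits index within it. */
typedef struct { void *slots[65536]; } subtable;

void *lookup(subtable tables[33], const unsigned char *key, size_t n)
{
    uint32_t crc = crc32c_hash(key, n);
    int which = _mm_popcnt_u32(crc);         /* sub-table index, 0..32 */
    return tables[which].slots[crc & 0xffff];
}
```

Note that the POPCNT of a random 32-bit value is heavily biased toward 16 (it is binomially distributed), so the sub-tables would fill very unevenly; that alone might sink the idea.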
David
Revisited, August 2014
Prompted by Arnaud Bouchez in a recent comment, and in view of other answers and comments, I acknowledge that the original answer needs to be altered or, at the very least, qualified. I left the original as-is, at the end, for reference.
First, and maybe most important, a fair answer to the question depends on the intended use of the hash code: what does one mean by a "good" hash function? Where and how will the hash be used? (E.g., is it for hashing a relatively short input key? Is it for indexing/lookup purposes, to produce message digests, or yet other uses? How long is the desired hash code itself: all 32 bits [of CRC32 or derivatives thereof], more bits, fewer, etc.?)
The OP's question calls for "a faster general-purpose hash function", so the focus is on SPEED (something less CPU-intensive and/or something which can make use of parallel processing of various kinds). We may note here that the computation time for the hash code itself is often only part of the problem in an application of hashing (for example, if the size of the hash code or its intrinsic characteristics result in many collisions which require extra cycles to deal with). Also, the requirement for "general purpose" leaves many questions open as to the possible uses.
With this in mind, a short and better answer is, maybe:
Yes, the hardware implementations of CRC32C on newer Intel processors can be used to build faster hash codes; beware, however, that depending on the specific implementation of the hash and on its application, the overall results may be sub-optimal because of the frequency of collisions or the need to use longer codes. Also, for sure, cryptographic uses of the hash should be carefully vetted, because the CRC32 algorithm itself is very weak in this regard.
The original answer cited an article on evaluating hash functions by Bret Mulvey, and as pointed out in Mdlg's answer, the conclusions of this article are erroneous in regards to CRC32, as the implementation of CRC32 it was based on was buggy/flawed. Despite this major error in regards to CRC32, the article provides useful guidance as to the properties of hash algorithms in general. The URL to this article is now defunct; I found it on archive.today, but I don't know if the author has it at another location, or whether he updated it.
Other answers here cite CityHash 1.0 as an example of a hash library that uses CRC32C. Apparently, this is used in the context of some longer (than 32 bits) hash codes, but not for the CityHash32() function itself. Also, the use of CRC32 by the CityHash functions is relatively small compared with all the shifting and shuffling and other operations that are performed to produce the hash code. (This is not a critique of CityHash, for which I have no hands-on experience. I'll go out on a limb, from a cursory review of the source code, and say that the CityHash functions produce good, i.e. well-distributed, codes, but are not significantly faster than various other hash functions.)
Finally, you may also find insight on this issue in a quasi-duplicate question on SO.
Original answer and edit (April 2010)
A priori, this sounds like a bad idea!
CRC32 was not designed for hashing purposes, and its distribution is likely to not be uniform, hence making it a relatively poor hash-code. Furthermore, its "scrambling" power is relatively weak, making for a very poor one-way hash, as would be used in cryptographic applications.
[BRB: I'm looking for online references to that effect...]
Google's first [keywords = CRC32 distribution] hit seems to confirm this:
Evaluating CRC32 for hash tables
Edit: The page cited above, and indeed the complete article, provides a good basis for what to look for in hash functions.
Reading [quickly] this article confirmed the blanket statement that, in general, CRC32 should not be used as a hash. However, depending on the specific purpose of the hash, it may be possible to use a CRC32, at least in part, as a hash code.
For example, the lower (or higher, depending on the implementation) 16 bits of the CRC32 code have a relatively even distribution, and, provided that one isn't concerned with the cryptographic properties of the hash code (e.g. the fact that similar keys produce very similar codes), it may be possible to build a hash code from, say, a concatenation of the lower [or higher] 16 bits of two CRC32 codes produced from the two halves (or whatever other division) of the original key.
One would need to run tests to see whether the efficiency of the built-in CRC32 instruction, relative to alternative hash functions, is such that the overhead of calling the instruction twice, splicing the code together, etc. wouldn't result in an overall slower function.
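A minimal sketch of that split-key construction (using the SSE 4.2 intrinsic; whether the result is actually well distributed would need the kind of testing described above):

```c
#include <stdint.h>
#include <stddef.h>
#include <nmmintrin.h>   /* _mm_crc32_u8 (SSE 4.2) */

static uint32_t crc32c(uint32_t crc, const unsigned char *p, size_t n)
{
    while (n--)
        crc = _mm_crc32_u8(crc, *p++);
    return crc;
}

/* CRC each half of the key separately, then concatenate the lower
   16 bits of each result into one 32-bit hash code. */
uint32_t split_crc_hash(const unsigned char *key, size_t len)
{
    size_t half = len / 2;
    uint32_t lo = crc32c(0xffffffffu, key, half);
    uint32_t hi = crc32c(0xffffffffu, key + half, len - half);
    return (hi & 0xffffu) << 16 | (lo & 0xffffu);
}
```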
The article referred to in other answers draws incorrect conclusions based on buggy crc32 code. Google's ranking algorithm does not rank based on scientific accuracy yet.
Contrary to the conclusions of the referenced article "Evaluating CRC32 for hash tables", CRC32 and CRC32C are acceptable for hash table use. The author's sample code has a bug in the crc32 table generation. Fixing the crc32 table gives satisfactory results using the same methodology. Also, the speed of the CRC32 instruction makes it the best choice in many contexts: code using the CRC32 instruction is 16x faster at peak than an optimal software implementation. (Note that CRC32 is not exactly the same as CRC32C, which is what the Intel instruction implements.)
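For reference, correct table generation for the reflected CRC-32 looks like this; a classic source of such bugs is pairing the unreflected polynomial 0x04C11DB7 with this right-shifting loop (whether that was the exact bug in the article's code, I can't say):

```c
#include <stdint.h>
#include <stddef.h>

/* Table for the reflected CRC-32: polynomial 0xEDB88320, the
   bit-reversed form of 0x04C11DB7. */
static uint32_t crc_table[256];

static void make_crc_table(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? 0xedb88320u ^ (c >> 1) : c >> 1;
        crc_table[i] = c;
    }
}

/* Standard table-driven update, for completeness. */
uint32_t crc32_update(uint32_t crc, const unsigned char *p, size_t n)
{
    crc = ~crc;
    while (n--)
        crc = crc_table[(crc ^ *p++) & 0xff] ^ (crc >> 8);
    return ~crc;
}
```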
CRC32 is obviously not suitable for crypto use (32 bits is trivial to brute-force).
Yes. CityHash 1.0.1 includes some new "good hash functions" that use CRC32 instructions.
As long as you're not after a crypto hash, it just might work.
For cryptographic purposes, CRC32 is a bad foundation, because it is linear (over the vector space GF(2)^32) and that is hard to correct. It may work for non-cryptographic purposes.
However, recent Intel cores have the AES-NI instructions, which basically perform 1/10th of an AES block encryption in two clock cycles. They are available on the most recent i5 and i7 processors (see the Wikipedia page for some details). This looks like a good start for building a cryptographic hash function (and a hash function which is good for cryptography will also be good for just about anything else).
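As a purely illustrative sketch (not a vetted construction), invoking one AES round per 16-byte block via AES-NI looks like this (compile with -maes):

```c
#include <stddef.h>
#include <wmmintrin.h>   /* _mm_aesenc_si128 (AES-NI) */

/* Illustration only: fold 16-byte blocks into a 128-bit state with
   one AES round per block. This is NOT a secure hash design; it
   merely shows how cheap the AES-NI round primitive is. */
void toy_aesni_mix(const unsigned char *msg, size_t nblocks,
                   unsigned char out[16])
{
    __m128i state = _mm_set1_epi32((int)0x9e3779b9);  /* arbitrary IV */

    for (size_t i = 0; i < nblocks; i++) {
        __m128i block = _mm_loadu_si128((const __m128i *)(msg + 16 * i));
        /* One AES round, using the message block as the round key. */
        state = _mm_aesenc_si128(state, block);
    }
    _mm_storeu_si128((__m128i *)out, state);
}
```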
Indeed, at least one of the SHA-3 "round 2" candidates (the ECHO hash function) is built around the AES elements so that the AES-NI opcodes provide a very substantial performance boost. (Unfortunately, in the absence of AES-NI instruction, ECHO performance somewhat sucks.)