What is the format of ECDSA Signature used in practical applications? - ecdsa

The output format of the ECDSA signature algorithm is a tuple (r, s) according to its wiki article and other similar sources.
But when I tried to verify a signature generated by my own implementation using standard crypto programming libraries, the signature they required as input was a single hex string.
I have searched the internet for standards covering this conversion but haven't found satisfactory results.
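For what it's worth, the single string most libraries expect is usually the ASN.1 DER encoding of the (r, s) pair (a SEQUENCE of two INTEGERs), often rendered as hex; some APIs instead use a raw r || s concatenation. A minimal sketch of the DER packing in Python (the function names are mine, for illustration; it uses the short-form length, which assumes the body is under 128 bytes, true for typical curve sizes up to about P-384):

```python
def der_int(x: int) -> bytes:
    """Encode a non-negative integer as a DER INTEGER (tag 0x02)."""
    b = x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")
    if b[0] & 0x80:          # high bit set: prepend 0x00 so it stays positive
        b = b"\x00" + b
    return bytes([0x02, len(b)]) + b

def der_signature(r: int, s: int) -> str:
    """Pack (r, s) as a DER SEQUENCE and return a single hex string."""
    body = der_int(r) + der_int(s)
    return (bytes([0x30, len(body)]) + body).hex()   # 0x30 = SEQUENCE tag

print(der_signature(0x11, 0x22))  # → 3006020111020122
```

Real r and s values are curve-sized integers; the tiny ones here just keep the output readable.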

Related

Basics of MD5: How to know hash bit length and symmetry?

I'm curious about some basics of MD5 encryption I couldn't get from Google, Java questions here nor a dense law paper:
1. How do I measure, in bytes, an MD5 hash string? And does it depend on whether the string is Unicode or ANSI?
2. Is MD5 an asymmetric algorithm?
Example: if my app talks (HTTP) to a REST web service using a key (MD5_128 hash string, ANSI, made of 9 chars) to "unencrypt" received data, does that account for 9x8=72 bits in an asymmetric algorithm?
I'm using WinDev 25 on Windows, using functions like Encrypt and HashString, but I lack knowledge about encryption.
Edit: Not answered yet, but it seems like I need to know more about charsets before jumping to hashes and encryption. https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/
An MD5 hash is 128 bits, i.e. 16 bytes. The result is binary, not text, so it is neither "ANSI" nor "Unicode". Like all hashes, it is one-way ("asymmetric", in your terms), which should be obvious from the fact that you can hash inputs longer than 128 bits. Since it is one-way, you cannot "unencrypt" (decrypt) it. This is by design and intentional.
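You can see the fixed 16-byte output for yourself with a quick check in Python's standard hashlib (the input strings here are arbitrary):

```python
import hashlib

for text in ("9charkey!", "a considerably longer input string than nine characters"):
    digest = hashlib.md5(text.encode("utf-8")).digest()
    print(len(digest), digest.hex())   # always 16 bytes, whatever the input length
```

The encoding of the input (UTF-8 vs. a single-byte "ANSI" codepage) can change *which* bytes get hashed, and therefore the digest value, but never the digest's size.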

How to get Perl crypt to encrypt more than 8 characters?

Only the first 8 characters are encrypted when the Perl crypt function is used. Is there a way to get it to use more characters?
As an example:
$crypted_password = crypt ("PassWord", "SALT");
and
$crypted_password = crypt ("PassWord123", "SALT");
return exactly the same result; $crypted_password has exactly the same value in both cases.
I would love to use crypt because it is a quick and easy solution for some non-reversible encryption, but this limit does not make it useful for anything serious.
To quote from the documentation:
Traditionally the result is a string of 13 bytes: two first bytes of the salt, followed by 11 bytes from the set [./0-9A-Za-z], and only the first eight bytes of PLAINTEXT mattered. But alternative hashing schemes (like MD5), higher level security schemes (like C2), and implementations on non-Unix platforms may produce different strings.
So the exact return value of crypt is system dependent, but it often uses an algorithm that only looks at the first 8 bytes of the password. These two things combined make it a poor choice for portable password encryption. If you're using a system with a stronger encryption routine and don't try to check those passwords on incompatible systems, you're fine. But it sounds like you're using an OS with the old crappy DES routine.
So a better option is to use a module off of CPAN that does the encryption in a predictable, more secure way.
Some searching gives a few promising-looking options (that I haven't used and can't recommend one over another; I just looked for promising keywords on MetaCPAN):
Crypt::SaltedHash
Authen::Passphrase::SaltedDigest
Crypt::Bcrypt::Easy
Crypt::Password::Util
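For comparison with what those modules provide, here is a sketch of the same idea (salted, strengthened, non-reversible hashing where every character of the password matters) using only Python's standard library; the iteration count and salt size are illustrative choices, not recommendations:

```python
import hashlib, os

def hash_password(password, salt=None):
    # PBKDF2-HMAC-SHA256: unlike traditional DES crypt, the whole password matters
    salt = salt if salt is not None else os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, dk

salt, h1 = hash_password("PassWord")
_, h2 = hash_password("PassWord123", salt)   # same salt, longer password
print(h1 != h2)   # True: characters past the 8th change the result
```

Storing the salt alongside the hash (as the CPAN modules above do for you) is what makes later verification possible.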

CRC32 integer hash to string

I was looking for a Lua implementation of CRC32 and stumbled upon this:
https://github.com/openresty/lua-nginx-module/blob/master/t/lib/CRC32.lua
However, it returns the integer hash; how would I go about getting the string equivalent of it?
Using the input "something" it returns: 1850105976
Using an online CRC32 generator I get: "879fb991"
There are many CRC-32 algorithms. You can find ten different CRC-32s documented in this catalog. The Lua code you found and the online CRC32 you found (somewhere -- no link was provided) are different CRC-32s.
What you seem to mean by a "string equivalent" is the hexadecimal representation of the 32-bit integer. In Lua you can use string.format with the %x format specifier to get hexadecimal. For the example you gave, 1850105976, that would be 6e466078.
Your "online CRC32 generator" appears to be using the BZIP2 CRC-32, though it is showing you the bytes of the resulting CRC in reversed order (little-endian). So the actual CRC in that case in hexadecimal is 91b99f87. The Lua code you found appears to be using the MPEG-2 CRC-32. The only difference between those is the exclusive-or with ffffffff. So in fact the exclusive-or of the two CRCs you got from the two different sources, 6e466078 ^ 91b99f87 is ffffffff.
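The arithmetic above is easy to check in a few lines (Python here for brevity; the constants are exactly the values quoted in the question and answer):

```python
mpeg2 = 1850105976                       # CRC-32/MPEG-2 value from the Lua code
print(f"{mpeg2:08x}")                    # → 6e466078

bz_le = bytes.fromhex("879fb991")        # the online generator's byte-reversed output
bz = int.from_bytes(bz_le[::-1], "big")  # undo the reversal: actual CRC-32/BZIP2
print(f"{bz:08x}")                       # → 91b99f87

print(f"{mpeg2 ^ bz:08x}")               # → ffffffff: they differ only in the final xor
```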

How are SHA-3 variants named?

How should we succinctly refer to SHA-3 variants of specific width? The precedent set by SHA-2 naming is unfortunately ambiguous if applied to SHA-3. Specifically, we have SHA-0 and SHA-1 (160 bits), followed by SHA-2 (224, 256, 384, or 512 bits), where SHA-224, SHA-256, SHA-384, and SHA-512 refer to the SHA-2 variants. SHA-3 supports the same bit counts as SHA-2, but a different naming convention is needed to distinguish between SHA-2 and SHA-3. SHA-3-224, SHA-3-256, SHA-3-384, and SHA-3-512 seem reasonable (if clumsy), but I can find no established naming convention of any sort.
I believe they have been finalized as follows
"SHA3-224", "SHA3-256", "SHA3-384", "SHA3-512"
SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions
There is no convention yet. Even the standard itself is not published AFAIK.
I'd use SHA3-256 etc. (like MD6-256).
The same naming scheme is also used in the BouncyCastle library.
As for SHA-3-256 and friends, I personally don't like the idea of using the same character "-" both in the algorithm name and as a property separator. If you really need to keep the dash in the algorithm name, I'd go with SHA-3/256; a similar scheme is used in cipher transformation naming in JCA.
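For what it's worth, Python's hashlib follows the FIPS 202 names too, just with an underscore in the identifiers:

```python
import hashlib

# sha3_224, sha3_256, sha3_384, sha3_512 mirror the SHA3-224 … SHA3-512 names
for name in ("sha3_224", "sha3_256", "sha3_384", "sha3_512"):
    h = hashlib.new(name, b"abc")
    print(name, h.digest_size * 8)   # 224, 256, 384, 512 bits respectively
```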

What is base32 encoding?

There is enough information on how to implement base32 encoding, and on the specification of base32 encoding, but I don't understand what it is, why we need it, and what its primary applications are. Can someone please explain and give nice real-life scenarios of usage? Thanks.
crockford base32
wikipedia base32
Like any other "ASCII-only" encoding, base32's primary purpose is to make sure that the data it encodes will survive transportation through systems or protocols which have special restrictions on the range of characters they will accept and emerge unmodified.
For example, b32-encoded data can be passed to a system that accepts single-byte character input, or UTF-8 encoded string input, or appended to a URL, or added to HTML content, without being mangled or resulting in an invalid form. Base64 (which is much more common) is used for the exact same reasons.
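A short round trip illustrates the point, using Python's standard base64 module (the payload here is arbitrary binary):

```python
import base64

payload = bytes(range(256))   # arbitrary binary, including NULs and high bytes
text = base64.b32encode(payload).decode("ascii")   # only A-Z, 2-7 and '=' padding
assert base64.b32decode(text) == payload           # survives the text channel intact
print(text[:20])
```

Because the output alphabet is so restricted, the encoded string can pass through URLs, HTML, JSON, or byte-oriented protocols without escaping.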
The main advantage of b32 over b64 is that it is much more human-readable. That's not much of an advantage because the data will typically be processed by computers, hence the relative rarity of b32 versus b64 (which is more efficient space-wise).
Update: there's the same question asked about Base64 here: What is base 64 encoding used for?
Base32 encoding (and Base64 encoding) is motivated by situations where you need to encode unrestricted binary within a storage or transport system that allows only data of a certain form, such as plain text. Examples include passing data through URLs, XML, or JSON, all of which are plain-text formats that don't otherwise permit or support arbitrary binary data.
To put base32 vs. base64 in numbers, in addition to the previous answers: for the same .pdf file, the encoded result is:
base64.b32encode(content) = 190400 symbols
base64.b64encode(content) = 158668 symbols
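That ratio matches the theoretical overhead (8/5 for base32 vs. 4/3 for base64), as a quick sketch with Python's base64 module shows; the 600-byte payload is arbitrary, chosen as a multiple of both 5 and 3 so neither encoding needs padding:

```python
import base64

data = b"\x00" * 600                   # any 600-byte payload works the same
b32 = base64.b32encode(data)
b64 = base64.b64encode(data)
print(len(b32), len(b64))              # 960 vs 800 symbols
print(len(b32) / len(data), len(b64) / len(data))  # 1.6 vs ~1.33 expansion
```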