How long are Ethereum hashes? (block, transaction, address)

I checked the hash lengths from node data, as below (including the '0x' prefix):
block hash: 66
transaction hash: 66
receipt root: 66
address: 40
Is it always a fixed length?
Or is it variable length?

Yes, it's always a fixed length.
A block hash is calculated with the keccak256 algorithm, which always produces 32 bytes (64 hex characters, prefixed with '0x') no matter the input length.
The same goes for the transaction hash and the receipt root hash.
An address is always the last 20 bytes (40 hex characters, prefixed with '0x', so 42 characters in total; your count of 40 was evidently taken without the prefix) of the keccak256 hash of the public key.
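If you want to see the fixed output size for yourself, here is a minimal sketch in Scala (it assumes the Bouncy Castle library for Keccak-256, which the JDK does not ship, and uses made-up input bytes):
import org.bouncycastle.jcajce.provider.digest.Keccak

val shortHash = new Keccak.Digest256().digest("a".getBytes("UTF-8"))
val longHash = new Keccak.Digest256().digest(("a" * 100000).getBytes("UTF-8"))
println(shortHash.length) // 32 bytes, i.e. 64 hex characters
println(longHash.length)  // 32 bytes, regardless of input length
// an address is the last 20 bytes of keccak256 of the 64-byte public key
val publicKey = new Array[Byte](64) // placeholder bytes, for illustration only
val address = new Keccak.Digest256().digest(publicKey).takeRight(20)
println(address.length) // 20 bytes, i.e. 40 hex characters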

Related

Teradata: Hash string to return string of certain length

I need to "sanitize" data for an external report. The input is a string of length 9, and the required output length is 15 characters.
I have tried hash_md5, but that function returns 32 characters. I can't simply truncate the result, because that might introduce duplicates.
Is there any hash function in Teradata, that allows me to choose the length of the output? Or is there any other approach you can suggest?
Thanks!

Write Bytes to file, bytes shifted

I have my bytes stored as string values in the file D:\source.txt, like this:
208
203
131
132
148
128
128
128
128
128
I just want to read them and store them in another file.
I am quite new to PowerShell, so I wrote a program like this:
$bytes = New-Object System.Collections.ArrayList
# read each decimal string ("208", "203", ...) and convert it to a byte
foreach($line in [System.IO.File]::ReadLines("D:\source.txt"))
{
    [void]$bytes.Add([System.Convert]::ToByte($line));
}
# convert the ArrayList to a byte array and write it all out at once
[System.IO.File]::WriteAllBytes("D:\target.zip", [Byte[]]$bytes.ToArray());
So, from my understanding, it should read each string value, convert it to a byte,
store it in the ArrayList, convert the ArrayList to a byte array, and write it to a file.
And everything seems to go OK; even if I do echo [Byte[]]$bytes.ToArray() I see the correct values.
But the resulting file is corrupted, and when I check it byte by byte I see these values:
-48
-53
-125
-124
-108
-128
-128
-128
-128
-128
It seems like WriteAllBytes shifts my byte values by 128, but why, and where?
I am not very experienced with PowerShell, and I can't find anything related in the documentation.
So can you suggest how I can correct this?
Thanks for any info.
Thanks, I actually found the problem. The cause of the corruption was an incorrect library method for converting from Java bytes (values from -128 to 127) to unsigned PowerShell bytes. In the hex editor I was seeing the int8 (signed) representation, which does correspond; when checked in PowerShell as unsigned values, the bytes are shown correctly. Thanks for the help.
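For anyone hitting the same confusion: the apparent shift is just the two's-complement (signed) reading of the same bits, not a real offset of 128. A quick illustrative check in Scala (any JVM language behaves the same, since JVM bytes are signed; the value 208 is taken from the file above):
val unsigned = 208            // the value stored in the text file
val signed = unsigned.toByte  // JVM bytes are signed, so this wraps around
println(signed)               // -48, what the hex editor displayed
println(signed & 0xFF)        // 208, the original unsigned value back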

Scala - How can I read some specific bytes from a file?

I'd like to encrypt a text (about 1 MB) and I use the max length of RSA keys (4096 bits). However, the key seems too short. As I googled, I learned that the max size of text that RSA can encrypt is 8 bytes shorter than the length of the key. Thus, I can only encrypt 501 bytes at a time. So I decided to divide my text into 2093 arrays (1024*1024/501 ≈ 2093). The question is: how can I pour the first 501 bytes into the first array in Scala? Can anyone help me out?
I can't comment on whether your cryptographic approach is okay. (I don't know, but I would rely on libraries written and vetted by more knowledgeable cryptographers if I were in your shoes. I'm not sure why you chose 501, which is 11 bytes, not 8, shorter than 512.)
But chunking your bytes into fixed-size blocks is easy. Just use the grouped method on Array.
val text: String = ???
val bytes = text.getBytes(scala.io.Codec.UTF8.charSet) // lots of ways to do this
val blocks = bytes.grouped(501)
blocks will be an Iterator[Array[Byte]], with each array 501 bytes long except possibly the last (which may be shorter).
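For example, here is a minimal runnable sketch (the 1 MB of repeated 'x' characters is a made-up placeholder for your text):
val text: String = "x" * (1024 * 1024)
val bytes = text.getBytes(scala.io.Codec.UTF8.charSet)
val blocks = bytes.grouped(501).toVector
println(blocks.length)      // 2093 blocks for 1 MB of single-byte chars
println(blocks.head.length) // 501
println(blocks.last.length) // 484, the shorter remainder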

SignatureValue calculation for XML-DSIG

I am trying to write a method that returns a signature of an XML element for XMLDSIG, using .NET framework components (RSACryptoServiceProvider) in C++/CLI. Could someone please explain this excerpt from the XMLDSIG spec ( http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/ ) in simpler words? I have very little programming and maths background and therefore have trouble understanding it. Or could you provide an excerpt from real code where this is implemented?
The SignatureValue content for an RSA signature is the base64 [MIME] encoding of the octet string computed as per RFC 2437 [PKCS1, section 8.1.1: Signature generation for the RSASSA-PKCS1-v1_5 signature scheme]. As specified in the EMSA-PKCS1-V1_5-ENCODE function RFC 2437 [PKCS1, section 9.2.1], the value input to the signature function MUST contain a pre-pended algorithm object identifier for the hash function, but the availability of an ASN.1 parser and recognition of OIDs is not required of a signature verifier. The PKCS#1 v1.5 representation appears as: CRYPT (PAD (ASN.1 (OID, DIGEST (data))))
Note that the padded ASN.1 will be of the following form: 01 | FF* | 00 | prefix | hash, where "|" is concatenation, "01", "FF", and "00" are fixed octets of the corresponding hexadecimal value, "hash" is the SHA1 digest of the data, and "prefix" is the ASN.1 BER SHA1 algorithm designator prefix required in PKCS1 [RFC 2437], that is, hex 30 21 30 09 06 05 2B 0E 03 02 1A 05 00 04 14. This prefix is included to make it easier to use standard cryptographic libraries. The FF octet MUST be repeated the maximum number of times such that the value of the quantity being CRYPTed is one octet shorter than the RSA modulus.
In other words, if I have the hash value for a certain XML element (not encoded in base64, is that right?), what do I do with it before sending it to the SignHash function (in RSACryptoServiceProvider)?
I know it's in the text, but I have trouble understanding it.
I don't understand "CRYPT (PAD (ASN.1 (OID, DIGEST (data))))" at all, although I understand parts of it. I don't understand how to get the OID, then the ASN.1, and then how to pad it.
Let me try to explain the components, and see if this gets you any closer:
DIGEST(data) is the hash value you already computed.
OID is a globally unique identifier representing the hash algorithm used. For SHA1 this is 1.3.14.3.2.26.
ASN.1 means ASN.1-encoding of the OID and the hash value as an ASN.1 sequence. This comes down to the hex values listed in the reference, followed by the actual hash.
PAD means concatenating 01 | FF* | 00 with the ASN.1-encoded prefix and the hash to get the desired length (FF* means repeat FF an appropriate number of times; the RFC gives the details).
CRYPT is the RSA encryption function.
However, I believe the SignHash function does all of this for you; you just provide the OID and the hash value.
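To make CRYPT (PAD (ASN.1 (OID, DIGEST (data)))) concrete, here is a small sketch in Scala that builds the padded block exactly as the spec excerpt above describes it (the input data and the 2048-bit modulus size are assumptions for illustration; the final raw-RSA step is omitted because SignHash performs the padding and encryption for you):
import java.security.MessageDigest

val data = "some canonicalized XML".getBytes("UTF-8")       // placeholder input
val hash = MessageDigest.getInstance("SHA-1").digest(data)  // DIGEST(data), 20 bytes
// the ASN.1 BER SHA1 designator prefix quoted in the spec excerpt
val prefix = Array(0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2B, 0x0E,
                   0x03, 0x02, 0x1A, 0x05, 0x00, 0x04, 0x14).map(_.toByte)
val t = prefix ++ hash                       // ASN.1 (OID, DIGEST (data))
val modulusBytes = 256                       // assumed 2048-bit RSA modulus
val psLen = modulusBytes - 1 - 2 - t.length  // FF count, so the result is
                                             // one octet shorter than the modulus
val padded = Array(0x01.toByte) ++
             Array.fill(psLen)(0xFF.toByte) ++
             Array(0x00.toByte) ++ t         // 01 | FF* | 00 | prefix | hash
// CRYPT would now be raw RSA on `padded` with the private key, and the
// base64 of that result is the SignatureValue.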

Play! hash password returns bad result

I'm using Play 1.2.1. I want to hash my users' passwords. I thought that Crypto.passwordHash would be good, but it isn't. The passwordHash documentation says it returns an MD5 password hash. I created some user accounts in a fixture, where I put an MD5 password hash:
...
User(admin):
login: admin
password: f1682b54de57d202ba947a0af26399fd
fullName: Administrator
...
The problem is that when I try to log in with something like this:
user.password.equals(Crypto.passwordHash(password))
it doesn't work. So I put a log statement in my authentify method:
Logger.info("\nUser hashed password is %s " +
"\nPassed password is %s " +
"\nHashed passed password is %s",
user.password, password, Crypto.passwordHash(password));
And the password hashes are indeed different, but hey! The output of the passwordHash method isn't even an MD5 hash:
15:02:16,164 INFO ~
User hashed password is f1682b54de57d202ba947a0af26399fd
Passed password is <you don't have to know this :P>
Hashed passed password is 8WgrVN5X0gK6lHoK8mOZ/Q==
How about that? How do I fix it? Or do I have to implement my own solution?
Crypto.passwordHash returns a base64-encoded password hash, while you are comparing it to a hex-encoded one.
MD5 outputs a sequence of 16 bytes, each byte having (potentially) any value between 0 and 255 (inclusive). When you want to print the value, you need to convert the bytes to a sequence of printable characters. There are several possible conventions, the two main ones being hexadecimal and Base64.
In hexadecimal notation, each byte value is represented as two "hexadecimal digits": such a digit is either a decimal digit ('0' to '9') or a letter (from 'a' to 'f', case is irrelevant). The 16 bytes thus become 32 characters.
In Base64 encoding, each group of three successive bytes is encoded as four characters, taken from a list of 64 possible characters (digits, lowercase letters, uppercase letters, '+' and '/'). One or two final '=' signs may be added so that the encoded string consists of a number of characters that is a multiple of 4.
Here, '8WgrVN5X0gK6lHoK8mOZ/Q==' is the Base64 encoding of a sequence of 16 bytes, the first one having value 241, the second one 104, then 43, and so on. In hexadecimal notation, the first byte would be represented by 'f1', the second by '68', the third by '2b'... and the hexadecimal notation of the complete sequence of 16 bytes is then 'f1682b54de57d202ba947a0af26399fd', the value that you expected.
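You can check the round trip yourself; a small sketch in Scala (java.util.Base64 requires Java 8+; the value is the one from the question):
import java.util.Base64

val raw = Base64.getDecoder.decode("8WgrVN5X0gK6lHoK8mOZ/Q==") // 16 bytes
val hex = raw.map(b => f"${b & 0xFF}%02x").mkString
println(raw.length) // 16
println(hex)        // f1682b54de57d202ba947a0af26399fd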
The play.libs.Codec class contains methods for decoding and encoding Base64 and hexadecimal notations. It also contains Codec.hexMD5() which performs MD5 hashing and returns the value in hexadecimal notation instead of Base64.
As Nickolay said, you are comparing hex vs. Base64 strings. Also, I would recommend using BCrypt for this, not Play's Crypto tool.
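If you go the BCrypt route, a minimal sketch (assuming the jBCrypt library, org.mindrot.jbcrypt, on the classpath; shown in Scala, and the Java calls are identical):
import org.mindrot.jbcrypt.BCrypt

val hashed = BCrypt.hashpw("secret", BCrypt.gensalt()) // salt is embedded in the hash
println(BCrypt.checkpw("secret", hashed)) // true
println(BCrypt.checkpw("wrong", hashed))  // false; always compare via checkpw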