How to convert the hex (base16) value of a SHA-256 hash to an unsigned integer(7) (base10) in Scala?

Logic that works on the Ab Initio platform:
Let's take the sample value "123456789", for which we need to generate the SHA-256 hash and convert it into an unsigned integer(7). Expected result: 40876285344408085
m_eval 'hash_SHA256("123456789")'
[void 0x15 0xe2 0xb0 0xd3 0xc3 0x38 0x91 0xeb 0xb0 0xf1 0xef 0x60 0x9e 0xc4 0x19 0x42 0x0c 0x20 0xe3 0x20 0xce 0x94 0xc6 0x5f 0xbc 0x8c 0x33 0x12 0x44 0x8e 0xb2 0x25]
m_eval 'string_to_hex(hash_SHA256("123456789"))'
"15E2B0D3C33891EBB0F1EF609EC419420C20E320CE94C65FBC8C3312448EB225"
m_eval '(unsigned integer(7)) reinterpret(hash_SHA256("123456789"))'
40876285344408085
Scala Approach
import java.math.BigInteger
import java.security.MessageDigest
import org.apache.commons.lang3.StringUtils

val shaVal = "123456789"
println("Input Value : " + shaVal)
val shaCode = "SHA-256"
val utf = "UTF-8"
val digest = MessageDigest.getInstance(shaCode)
println("digest SHA-256 : " + digest)
val InpStr = StringUtils.stripStart(shaVal, "0") // drop leading zeros (no effect on "123456789")
println("InpStr : " + InpStr)
val hashUTF = digest.digest(InpStr.getBytes(utf))
println("hashUTF(UTF-8) : " + hashUTF.mkString(" "))
val hashBigInt = new BigInteger(1, hashUTF) // signum 1 = treat the hash bytes as non-negative
println("hashBigInt : " + hashBigInt)
val HashKeyRes = String.format("%064x", hashBigInt) // 64 hex digits for a 256-bit hash
println("HashKeyRes : " + HashKeyRes)
Console Output
hashUTF(UTF-8) : 21 -30 -80 -45 -61 56 -111 -21 -80 -15 -17 96 -98 -60 25 66 12 32 -29 32 -50 -108 -58 95 -68 -116 51 18 68 -114 -78 37
hashBigInt : 9899097673353459346982371669967256498000649460813128595014811958380719944229
HashKeyRes : 15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225
fromBase : 16
toBase : 10
So the generated hash key matches the expected value, but in hex (base16) format. The required output is the base10 unsigned integer(7) value 40876285344408085.
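Comparing the expected value against the hash bytes shows what reinterpret is doing: 40876285344408085 is 0x9138C3D3B0E215, i.e. the first 7 bytes of the hash (15 E2 B0 D3 C3 38 91) read in little-endian order. A minimal Scala sketch of that interpretation (the method name is my own; the little-endian semantics are inferred from the sample values above):

```scala
import java.security.MessageDigest

// Reproduce Ab Initio's (unsigned integer(7)) reinterpret: take the FIRST 7
// bytes of the SHA-256 hash and read them little-endian, so byte 0 is the
// least significant. BigInt keeps the 56-bit value unsigned.
def sha256ToUnsigned7(s: String): BigInt = {
  val hash = MessageDigest.getInstance("SHA-256").digest(s.getBytes("UTF-8"))
  hash.take(7).zipWithIndex
    .map { case (b, i) => BigInt(b & 0xff) << (8 * i) }
    .sum
}
```

With this reading, `sha256ToUnsigned7("123456789")` reproduces the Ab Initio output 40876285344408085.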

Related

MIDI and Bit Order

I'm stuck trying to format a MIDI Sys Ex message that a device keeps rejecting as invalid. The problem is a section of the message that involves a type of data encoding described below.
According to the manual,
the device "will encode/interpret a consecutive group of 4-bytes"
Byte #0 - b31b30b29b28b27b26b25b24
Byte #1 - b23b22b21b20b19b18b17b16
Byte #2 - b15b14b13b12b11b10b09b08
Byte #3 - b07b06b05b04b03b02b01b00
as the following 5 consecutive SysEx bytes:"
Byte #0 - 0 b06b05b04b03b02b01b00
Byte #1 - 0 b13b12b11b10b09b08b07
Byte #2 - 0 b20b19b18b17b16b15b14
Byte #3 - 0 b27b26b25b24b23b22b21
Byte #4 - 0 0 0 0 b31b30b29b28
where "b" is the bit number. Notice the bit numbering has been flipped. Which way are you supposed to read the bits? MIDI data is, by convention, reverse bit ordered (MSB=7), if that helps. Also, the manual notes that "all data types are in Motorola big-endian byte order."
Here's a description of the message I'm trying to format correctly -
"A command will allow a consecutive group of one to four bytes to be edited. When 3 or less bytes are specified the device expects the Parameter Value field to be bit ordered as if it was performing a full 32-bit (4 byte) parameter change. For example, when editing a two byte parameter, Byte #0 will occupy the bit range of b24-b31, while Byte #1 will occupy bits b16-b23. The remaining bits (b00-b15) in the parameter value field should be set to zero."
bb Parameter Offset - 0 b06b05b04b03b02b01b00
bb Parameter Offset - 0 b13b12b11b10b09b08b07
bb Parameter Offset - 0 b20b19b18b17b16b15b14
bb Parameter Offset - 0 b27b26b25b24b23b22b21
bb Parameter Offset - 0 0 0 0 b31b30b29b28
0b Parameter Byte Size (1 to 4)
00
00
00
00
bb Parameter Value - 0 b06b05b04b03b02b01b00
bb Parameter Value - 0 b13b12b11b10b09b08b07
bb Parameter Value - 0 b20b19b18b17b16b15b14
bb Parameter Value - 0 b27b26b25b24b23b22b21
bb Parameter Value - 0 0 0 0 b31b30b29b28
So, when trying to enter offset values of 15H, 16H, 17H, and 18H, with respective values of, say, 00, 01, 02, 03, how would I encode those hex values, or do I even need to encode them? If I do, in which direction do I write the bits so the binary values are correct?
When written, the order of bits in a byte is always big-endian, i.e., MSB first.
This can be confirmed by the fact that the MSB must be zero for data bytes.
Four bytes:
15h -> 00010101
16h -> 00010110
17h -> 00010111
18h -> 00011000
SysEx bytes:
0 0011000 -> 18h
0 0101110 -> 2eh
0 1011000 -> 58h
0 0101000 -> 28h
0000 0001 -> 01h
Four bytes:
00 -> 00000000
01 -> 00000001
02 -> 00000010
03 -> 00000011
SysEx bytes:
0 0000011 -> 03h
0 0000100 -> 04h
0 0000100 -> 04h
0 0000000 -> 00h
0000 0000 -> 00h
The basic approach to this conversion is to combine the four 8-bit values into a 32-bit value and then successively move the least significant seven bits into five bytes. Here's a sample program that does what you need.
#include <stdio.h>
#include <stdint.h>

void convert(uint8_t byte0, uint8_t byte1, uint8_t byte2, uint8_t byte3, uint8_t *outBytes) {
    // Combine the input bytes into a single 32-bit value; the cast avoids
    // signed-integer overflow when byte0 >= 0x80.
    uint32_t inBytes = (uint32_t)byte0 << 24 | (uint32_t)byte1 << 16 | (uint32_t)byte2 << 8 | byte3;
    for (int i = 0; i < 5; i++) {
        outBytes[i] = inBytes & 0x7F; // copy the least significant seven bits into the next output byte
        inBytes >>= 7;                // shift right to discard the seven bits that were copied
    }
}

void printByteArray(uint8_t *byteArray) {
    for (int i = 0; i < 5; i++) {
        printf("Byte %d: %02xh\n", i, byteArray[i]);
    }
    printf("\n");
}

int main(int argc, char *argv[]) {
    uint8_t sysExBytes[5]; // five bytes to contain the converted SysEx data
    convert(0x15, 0x16, 0x17, 0x18, sysExBytes);
    printByteArray(sysExBytes);
    convert(0x00, 0x01, 0x02, 0x03, sysExBytes);
    printByteArray(sysExBytes);
    return 0;
}
Output:
Byte 0: 18h
Byte 1: 2eh
Byte 2: 58h
Byte 3: 28h
Byte 4: 01h
Byte 0: 03h
Byte 1: 04h
Byte 2: 04h
Byte 3: 00h
Byte 4: 00h
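For cross-checking, the same packing and its inverse can be sketched in Scala (the function names are my own):

```scala
// Pack four 8-bit bytes into five 7-bit SysEx bytes, emitting the least
// significant seven bits first, as the device's manual describes.
def toSysEx(b0: Int, b1: Int, b2: Int, b3: Int): Seq[Int] = {
  val v = (b0.toLong << 24) | (b1 << 16) | (b2 << 8) | b3 // combine into one 32-bit value
  (0 until 5).map(i => ((v >> (7 * i)) & 0x7f).toInt)     // seven bits at a time
}

// Inverse: reassemble the 32-bit value from the five 7-bit SysEx bytes.
def fromSysEx(sysex: Seq[Int]): Long =
  sysex.zipWithIndex.map { case (b, i) => b.toLong << (7 * i) }.sum
```

`toSysEx(0x15, 0x16, 0x17, 0x18)` gives the five bytes 18h, 2eh, 58h, 28h, 01h shown above, and `fromSysEx` recovers 0x15161718, which makes it easy to verify a message before sending it.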
Your question is a bit confusing. 1 byte is 8 bits, so the following lines:
Byte - #0 b31b30b29b28b27b26b25b24
Byte - #1 b23b22b21b20b19b18b17b16
Byte - #2 b15b14b13b12b11b10b09b08
Byte - #3 b07b06b05b04b03b02b01b00
do not make sense to me; the first line, #0, looks like it has 8 bytes in it. I did some searching, and this is a better explanation of MIDI variable-length quantities (http://www.music.mcgill.ca/~ich/classes/mumt306/midiformat.pdf). Taking the contents from page 2 and editing them for clarity:
Offset   | Byte 0 | Byte 1 | Byte 2 | Byte 3
bits     | 24-31  | 16-23  | 8-15   | 0-7
00000000 | 00 | | |
00000040 | 40 | | |
0000007F | 7F | | |
00000080 | 81 | 00 | |
00002000 | C0 | 00 | |
00003FFF | FF | 7F | | <--- example
00004000 | 81 | 80 | 00 |
00100000 | C0 | 80 | 00 |
001FFFFF | FF | FF | 7F |
00200000 | 81 | 80 | 80 | 00
08000000 | C0 | 80 | 80 | 00
0FFFFFFF | FF | FF | FF | 7F
Example at offset 3FFF
Hex format 0xFF7F_0000 (32-bit number, unused bits are 0)
Bit: 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16
      1  1  1  1  1  1  1  1  0  1  1  1  1  1  1  1
Bit: 15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
      0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
Does this help?
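The table above is MIDI's variable-length-quantity encoding: 7 payload bits per byte, the continuation bit 0x80 set on every byte except the last, most significant group first. Note this is a different scheme from the device's fixed five-byte packing discussed earlier. A sketch of the encoder (the name is my own):

```scala
// Encode a non-negative Int as a MIDI variable-length quantity.
def toVlq(value: Int): Seq[Int] = {
  // Split into 7-bit groups, least significant first; zero still needs one byte.
  val groups = Iterator.iterate(value)(_ >>> 7).takeWhile(_ != 0).map(_ & 0x7f).toList
  val gs = if (groups.isEmpty) List(0) else groups
  // Output most significant group first, setting the continuation bit on all but the last byte.
  gs.reverse.zipWithIndex.map { case (g, i) => if (i < gs.length - 1) g | 0x80 else g }
}
```

For example, `toVlq(0x3FFF)` produces FF 7F, matching the highlighted row of the table.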

need a math about (byte) 190 = -66

Arduino decoding problem:
inString = "BE";
val = unhex(inString); // the result is val == 190
(byte) val;            // casting 190 to byte yields -66
How is val converted from int to byte? What is the math?
You're working with a signed byte (which is typical for, say, Java):
0xBE == 190
But 190 > 127, which is why it becomes negative (a signed byte must lie in the range [-128..127]). To find the signed representation of 190, find N such that
190 + N * 256 lies in the range [-128..127]
In your case N == -1, so the answer is
190 - 256 == -66
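The same wraparound can be reproduced on the JVM, where Byte is signed just like the Arduino cast above (a small sketch; the helper name is my own):

```scala
// Parse "BE" the way Arduino's unhex does: 0xBE == 190.
val parsed: Int = Integer.parseInt("BE", 16)

// Narrowing 190 to a signed 8-bit value keeps the low 8 bits and
// reinterprets them in two's complement: 190 - 256 == -66.
val narrowed: Byte = parsed.toByte

// The general rule: values in [128, 255] wrap by subtracting 256.
def asSignedByte(v: Int): Int =
  if ((v & 0xff) > 127) (v & 0xff) - 256 else v & 0xff
```

Here `narrowed` is -66, exactly the value the Arduino cast produces.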

how to remove decimal trailing zeros in matlab

How do I remove trailing decimal zeros from the matrix C in MATLAB?
65.7500 4.7500 4.7500 64.0000 60.0000
118.9000 105.6000 92.5500 147.6000 178.2000
73.6600 84.0100 95.6900 190.0000 164.0000
147.9000 132.0000 140.0000 147.0000 116.5000
ans=
65.75 4.75 4.75 64 60
118.9 105.6 92.55 147.6 178.2
73.66 84.01 95.69 190 164
147.9 132 140 147 116.5
format short g tells MATLAB to pick the more compact of fixed-point and exponential display, which drops the trailing zeros (only the display changes, not the stored values):
>> format short g
>> C
C =
65.75 4.75 4.75 64 60
118.9 105.6 92.55 147.6 178.2
73.66 84.01 95.69 190 164
147.9 132 140 147 116.5

SHA-1 Hash from a namespace Identifier in c/c++

Suppose I have the value 01010101, whose canonical sequence of octets (ASCII) is:
0x30 0x31 0x30 0x31 0x30 0x31 0x30 0x31
Now I need to concatenate it with the namespace identifier, which is given in hex representation.
Then I need to compute the hash:
sha1 (0x03 0xfb 0xac 0xfc 0x73 0x8a 0xef 0x46 0x91 0xb1 0xe5 0xeb 0xee 0xab 0xa4 0xfe 0x30 0x31 0x30 0x31 0x30 0x31 0x30 0x31) =
0xA8 0x82 0x16 0x4B 0x68 0xF9 0x01 0xE7 0x03 0xFC 0x7C 0x67 0x41 0xDC 0x66 0x97 0xB8 0xA1 0xA9 0x3E
After that, how do I turn the hash into the UUID
4b1682a8-f968-5701-83fc-7c6741dc6697
Hello everyone, my problem was solved by following RFC 4122:
http://www.ietf.org/rfc/rfc4122.txt
Some modification to the code was also needed. If anyone has the same problem, just ask me.
Thanks
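For reference, the RFC 4122 name-based (version 5, SHA-1) construction can be sketched as follows: hash the namespace bytes concatenated with the name bytes, keep the first 16 bytes, then overwrite the version and variant bits. Note that the UUID shown above (4b1682a8-f968-5701-...) has its first three fields byte-swapped relative to the straight RFC 4122 reading of the hash (a882164b-68f9-51e7-...), which is the Microsoft GUID mixed-endian layout. A sketch (function name my own):

```scala
import java.security.MessageDigest

// RFC 4122 version-5 (SHA-1, name-based) UUID from raw namespace and name bytes.
def uuidV5(namespace: Array[Byte], name: Array[Byte]): String = {
  val b = MessageDigest.getInstance("SHA-1").digest(namespace ++ name).take(16)
  b(6) = ((b(6) & 0x0f) | 0x50).toByte // set the version field to 5
  b(8) = ((b(8) & 0x3f) | 0x80).toByte // set the RFC 4122 variant bits
  val hex = b.map(x => f"${x & 0xff}%02x").mkString
  Seq(hex.substring(0, 8), hex.substring(8, 12), hex.substring(12, 16),
      hex.substring(16, 20), hex.substring(20)).mkString("-")
}
```

Applying the version/variant overwrite to the SHA-1 shown in the question gives a882164b-68f9-51e7-83fc-7c6741dc6697 in straight RFC 4122 byte order.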

Hex combination of binary flags

Which combination of the following gives back 63 as a long (in Java), and how?
0x0
0x1
0x2
0x4
0x8
0x10
0x20
I'm working with the NetworkManager API flags, if that helps. I'm getting 63 from one of the operations but don't know how to match the return value against the flag descriptions.
Thanks
63 is 32 | 16 | 8 | 4 | 2 | 1, where | is the bitwise OR operator.
Or in other words (in hex): 63 (which is 0x3F) is 0x20 | 0x10 | 0x8 | 0x4 | 0x2 | 0x1. If you look at them all in binary, it is obvious:
0x20 : 00100000
0x10 : 00010000
0x08 : 00001000
0x04 : 00000100
0x02 : 00000010
0x01 : 00000001
And 63 is:
0x3F : 00111111
If you're getting some return status and want to know what it means, you'll have to use bitwise AND. For example:
if (status & 0x02)
{
}
This will execute if the flag 0x02 (that is, the 2nd bit from the right) is turned on in the returned status. Most often, these flags have names (descriptions), so the code above would read something like:
if (status & CONNECT_ERROR_FLAG)
{
}
Again, the status can be a combination of stuff:
// Check if both flags are set in the status
if (status & (CONNECT_ERROR_FLAG | WRONG_IP_FLAG))
{
}
P.S.: To understand why this works, it is worth reading up on binary flags and their combinations.
I'd give you the same answer as Chris: your return value 63 seems to be a combination of all the flags you mention in your list (except 0x0).
When dealing with flags, one easy way to figure out by hand which flags are set is to convert all the numbers to their binary representation. This is especially easy if you already have the numbers in hexadecimal, since every digit corresponds to four bits. First, your list of numbers:
0x01 0x02 0x04 ... 0x20
| | | |
| | | |
V V V V
0000 0001 0000 0010 0000 0100 ... 0010 0000
Now, if you take your value 63, which is 0x3F (= 3 * 16^1 + F * 16^0, where F = 15) in hexadecimal, it becomes:
0x3F
|
|
V
0011 1111
You quickly see that the lower 6 bits are all set, which is an "additive" combination (bitwise OR) of the above binary numbers.
63 (decimal) equals 0x3F (hex). So 63 is a combination of all of the following flags:
0x20
0x10
0x08
0x04
0x02
0x01
Is that what you were looking for?
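To go from the numeric status back to the flag descriptions programmatically, you can AND the status against each flag in turn. A small sketch; the flag names here are made up, since the question doesn't list NetworkManager's actual constants:

```scala
// Hypothetical names for the bit flags in the question.
val flagNames: Seq[(Int, String)] = Seq(
  0x01 -> "FLAG_1", 0x02 -> "FLAG_2", 0x04 -> "FLAG_4",
  0x08 -> "FLAG_8", 0x10 -> "FLAG_10", 0x20 -> "FLAG_20"
)

// Bitwise AND against each flag value tells which ones are set in a status.
def decompose(status: Int): Seq[String] =
  flagNames.collect { case (bit, name) if (status & bit) != 0 => name }
```

`decompose(63)` returns all six names, confirming that 63 is the combination of every flag in the list.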