In Scala we have a piece of code that hashes a string with SHA-512 and then formats the digest as a padded hexadecimal string:
String.format("%032x", new BigInteger(1, MessageDigest.getInstance("SHA-512").digest("MyStringToBeHashed".getBytes("UTF-8"))))
I hash the string in Go using the crypto package:
package main

import (
    "crypto/sha512"
    "encoding/base64"
    "fmt"
)

func main() {
    hasher := sha512.New()
    hasher.Write([]byte("MyStringToBeHashed"))
    sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
    sha = fmt.Sprintf("%032x", sha)
    println(sha)
}
In Scala I get the below output:
b0bcb1263862e574e5c9bcb88a3a14884625613410bac4f0be3e3b601a6dee78f5635d0f7b6eb19ba5a1d80142d9ff2678946331874c998226b16e7ff48e53e5
But in Go:
734c79784a6a68693558546c79627934696a6f556945596c5954515175735477766a343759427074376e6a3159313050653236786d365768324146433266386d654a526a4d59644d6d59496d7357355f3949355435513d3d
The additional base64 encoding is wrong. You either want to show the hash as base64 (i.e. base64(hash)) or you want to show the hash as a hex string (i.e. hex(hash)) - but what you do instead is show the base64-encoded hash as a hex string (i.e. hex(base64(hash))). Simply do this instead:
sha := fmt.Sprintf("%032x", hasher.Sum(nil))
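Putting that fix into the original program gives a complete, runnable version (it is just the question's code with the base64 step removed):

package main

import (
    "crypto/sha512"
    "fmt"
)

func main() {
    hasher := sha512.New()
    hasher.Write([]byte("MyStringToBeHashed"))
    // Hex-encode the raw 64-byte digest directly; SHA-512 always yields
    // 128 hex characters, so the %032x minimum width never actually pads.
    sha := fmt.Sprintf("%032x", hasher.Sum(nil))
    fmt.Println(sha)
}

This prints the same 128-character hex string as the Scala snippet.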
I am trying to use CryptoJS to encrypt something and then generate a hexadecimal string of the encrypted text.
function EncryptAES(text, key) {
    var encrypted = CryptoJS.AES.encrypt(text, key);
    return CryptoJS.enc.Hex.stringify(encrypted);
}
var encrypted = EncryptAES("Hello, World!", "SuperSecretPassword");
console.log(encrypted);
However, instead of a hexadecimal string, a blank line is printed to the console. What am I doing wrong?
CryptoJS.AES.encrypt() returns a CipherParams object that encapsulates several pieces of data, including the ciphertext as a WordArray (see here). By default, .toString() returns the hex-encoded data for a WordArray:
function EncryptAES(text, key) {
    var encrypted = CryptoJS.AES.encrypt(text, key);
    return encrypted.ciphertext.toString();
}
var encrypted = EncryptAES("Hello, World!", "SuperSecretPassword");
console.log(encrypted);
<script src="https://cdnjs.cloudflare.com/ajax/libs/crypto-js/4.1.1/crypto-js.min.js"></script>
Note that in your example the key material is passed as a string and is therefore interpreted as a passphrase (see here); the key and IV are derived from it via a key derivation function in conjunction with a random 8-byte salt, which is why the ciphertext changes each time for the same input data.
Therefore, decryption requires not only the ciphertext but also the salt, which is also encapsulated in the CipherParams object.
For a CipherParams object, .toString() returns the data in the Base64-encoded OpenSSL format, consisting of the ASCII encoding of Salted__ followed by the 8-byte salt and the actual ciphertext, and thus contains all the information needed for decryption.
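To make that byte layout concrete, here is a small sketch in Go (not CryptoJS) that only splits a CipherParams.toString() value into its parts; the function name and error handling are illustrative:

package main

import (
    "encoding/base64"
    "encoding/hex"
    "fmt"
    "os"
)

// splitOpenSSL splits the Base64-encoded OpenSSL format produced by
// CipherParams.toString(): the ASCII bytes "Salted__", an 8-byte salt,
// and then the actual ciphertext.
func splitOpenSSL(blob string) (salt, ciphertext []byte, err error) {
    raw, err := base64.StdEncoding.DecodeString(blob)
    if err != nil {
        return nil, nil, err
    }
    if len(raw) < 16 || string(raw[:8]) != "Salted__" {
        return nil, nil, fmt.Errorf("not in OpenSSL salted format")
    }
    return raw[8:16], raw[16:], nil
}

func main() {
    // Pass the output of encrypted.toString() as the first argument.
    salt, ct, err := splitOpenSSL(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("salt:      ", hex.EncodeToString(salt))
    fmt.Println("ciphertext:", hex.EncodeToString(ct))
}

A decryptor needs exactly those two pieces (plus the passphrase) to re-derive the key and IV.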
I have a base64 string of a document from an API, and I want to know which extension/file format it is. If it is jpg/jpeg/png I want to show it in an Image widget, and if it is a PDF I want to show it in a PdfView widget. So is there any way to get the file extension from base64? Is there any package for it?
If you have a base64 string, you can detect the file type by checking the first character of the base64 string:
'/' means jpeg.
'i' means png.
'R' means gif.
'U' means webp.
'J' means PDF.
I wrote a function for that:
String getBase64FileExtension(String base64String) {
  switch (base64String.characters.first) {
    case '/':
      return 'jpeg';
    case 'i':
      return 'png';
    case 'R':
      return 'gif';
    case 'U':
      return 'webp';
    case 'J':
      return 'pdf';
    default:
      return 'unknown';
  }
}
If you don't have the original filename, there's no way to recover it. That's metadata that's not part of the file's content, and base64 encoding operates only on the file's content. It'd be best if you could save the original filename.
If you can't, you can use package:mime to guess the MIME type of the file from a small amount of binary data. You could take the first n×4 characters of the base64 string (a valid base64 string must have a length that's a multiple of 4), decode them, and call lookupMimeType.
package:mime has a defaultMagicNumbersMaxLength value that you can use to compute n dynamically:
import 'dart:convert';
import 'package:mime/mime.dart' as mime;
String? guessMimeTypeFromBase64(String base64String) {
  // Compute the minimum length of the base64 string we need to decode
  // [mime.defaultMagicNumbersMaxLength] bytes. base64 encodes 3 bytes of
  // binary data to 4 characters.
  var minimumBase64Length = (mime.defaultMagicNumbersMaxLength / 3).ceil() * 4;
  return mime.lookupMimeType(
    '',
    headerBytes: base64.decode(base64String.substring(0, minimumBase64Length)),
  );
}
For the types that package:mime supports as of writing, mime.defaultMagicNumbersMaxLength is 12 (which translates to needing to decode the first 16 characters of the base64 string).
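The same magic-number idea is easy to reproduce outside Dart. As a cross-check, here is a sketch in Go that decodes only the first 16 characters of the base64 string (12 bytes of content, mirroring the calculation above) and lets the standard library sniff the MIME type; the sample input is simply a base64-encoded PDF header:

package main

import (
    "encoding/base64"
    "fmt"
    "log"
    "net/http"
)

// guessMimeTypeFromBase64 decodes a short prefix of the base64 string and
// sniffs the MIME type from those leading bytes.
func guessMimeTypeFromBase64(b64 string) (string, error) {
    if len(b64) < 16 {
        return "", fmt.Errorf("base64 string too short")
    }
    header, err := base64.StdEncoding.DecodeString(b64[:16])
    if err != nil {
        return "", err
    }
    return http.DetectContentType(header), nil
}

func main() {
    // "JVBERi0x..." is how "%PDF-1..." looks after base64 encoding.
    mimeType, err := guessMimeTypeFromBase64("JVBERi0xLjcKJeLjz9M=")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(mimeType) // application/pdf
}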
I am trying to connect to the Coinex API through a Java program.
I follow the exact pattern mentioned in the link below for authorization:
https://github.com/coinexcom/coinex_exchange_api/wiki/012security_authorization
I MD5-hash the whole query string, which looks like the one below, and put the result in the authorization request header parameter:
tonce=1635504041595&access_id=XXXX&secret_key=YYYY
My intention is to get the account balance, so my GET request URL is:
https://api.coinex.com/v1//balance/info?tonce=1635504041595&access_id=XXXX
but the server returns the error below:
{"code": 25, "data": {}, "message": "Signature Incorrect"}
Can anybody advise what the issue is? Thanks, AndyJ
Well, this is my encode method:
public static String encode(String str) {
    try {
        // Compute the MD5 digest of the input string
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(str.getBytes("UTF-8"));
        // digest() returns the MD5 hash as a 16-byte array;
        // BigInteger turns that byte array into a number, and toString(16)
        // renders it as a hex string
        String md5 = new BigInteger(1, md.digest()).toString(16);
        // BigInteger drops leading zeros, so the result must be padded back to 32 characters
        return fillMD5(md5).toUpperCase();
    } catch (Exception e) {
        e.printStackTrace();
        return "";
    }
}
And one more thing: your timestamp (tonce) should have 10 digits, so drop the last 3 digits (the milliseconds) like this:
Integer.parseInt(String.valueOf(timestamp).substring(0, 10));
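For reference, the overall signing flow described in the question (MD5 of the query string with secret_key appended, rendered as upper-case hex and sent in the authorization header) looks roughly like this in Go; the parameter order and header name are taken from the question, not verified against the Coinex documentation:

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "net/http"
    "strings"
)

func main() {
    // Placeholder credentials, as in the question.
    tonce := "1635504041595" // the answer above suggests trimming this to 10 digits
    accessID := "XXXX"
    secretKey := "YYYY"

    // The string that gets MD5-hashed, exactly as shown in the question.
    toSign := "tonce=" + tonce + "&access_id=" + accessID + "&secret_key=" + secretKey

    // MD5, then upper-case hex -- the same thing the Java encode() method produces.
    sum := md5.Sum([]byte(toSign))
    signature := strings.ToUpper(hex.EncodeToString(sum[:]))

    // Attach the signature to the GET request.
    req, err := http.NewRequest("GET",
        "https://api.coinex.com/v1/balance/info?tonce="+tonce+"&access_id="+accessID, nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("authorization", signature)
    fmt.Println("authorization:", signature)
}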
I've been trying to code a UTF-16 string structure, and although the standard library provides a unicode module, it doesn't seem to provide a way to print out a slice of u16.
I've tried this:
const std = @import("std");
const unicode = std.unicode;
const stdout = std.io.getStdOut().outStream();

pub fn main() !void {
    const unicode_str = unicode.utf8ToUtf16LeStringLiteral("😎 hello! 😎");
    try stdout.print("{}\n", .{unicode_str});
}
This outputs:
[12:0]u16@202e9c
Is there a way to print a unicode string ([]u16) without converting it back into a non-unicode string ([]u8)?
Both []const u8 and []const u16 store encoded Unicode codepoints. Unicode codepoints fit within the range 0..1,114,112, so an actual Unicode string with one array index per codepoint would have to be []const u21. UTF-8 and UTF-16 both require encoding for codepoints that don't fit in a single u8 or u16 unit. Unless there is a compatibility reason to use UTF-16 (like some Windows functions), you should probably be using []const u8 Unicode strings.
To print utf-16 to a utf-8 stream, you have to decode utf-16 and re-encode it into utf-8. There is currently no formatting specifier to do this automatically.
You can either convert the entire string at once, requiring allocation:
const utf8string = try std.unicode.utf16leToUtf8Alloc(alloc, utf16le);
Or, without allocation:
var writer = std.io.getStdOut().writer();
var it = std.unicode.Utf16LeIterator.init(utf16le);
while (try it.nextCodepoint()) |codepoint| {
    var buf: [4]u8 = [_]u8{undefined} ** 4;
    const len = try std.unicode.utf8Encode(codepoint, &buf);
    try writer.writeAll(buf[0..len]);
}
Note that this will be very slow if you are writing to something that requires a syscall per write and you don't use a buffered writer.
I made a test suite for the math:hmac_* KRL functions. I compare the KRL results with Python results, and KRL gives me different results.
code: https://gist.github.com/980788 results: http://ktest.heroku.com/a421x68
How can I get valid signatures from KRL? I'm assuming that the Python results are correct.
UPDATE: It works fine unless you want newline characters in the message. How do I sign a string that includes newline characters?
I suspect that your Python SHA library returns a different encoding than is expected by the b64encode library. My library does both the SHA and the base64 in one call, so I had to do some extra work to check the results.
As you show in your KRL, the correct syntax is:
math:hmac_sha1_base64(raw_string,key);
math:hmac_sha256_base64(raw_string,key);
These use the same libraries that I use for the Amazon module which is testing fine right now.
To test those routines specifically, I used the test vectors from the RFC (sha1, sha256). We don't support Hexadecimal natively, so I wasn't able to use all of the test vectors, but I was able to use a simple one:
HMAC SHA1
test_case = 2
key = "Jefe"
key_len = 4
data = "what do ya want for nothing?"
data_len = 28
digest = 0xeffcdf6ae5eb2fa2d27416d5f184df9c259a7c79
HMAC SHA256
Key = 4a656665 ("Jefe")
Data = 7768617420646f2079612077616e7420666f72206e6f7468696e673f ("what do ya want for nothing?")
HMAC-SHA-256 = 5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843
Here is my code:
global {
    raw_string = "what do ya want for nothing?";
    mkey = "Jefe";
}
rule first_rule {
    select when pageview ".*" setting ()
    pre {
        hmac_sha1 = math:hmac_sha1_hex(raw_string,mkey);
        hmac_sha1_64 = math:hmac_sha1_base64(raw_string,mkey);
        bhs256c = math:hmac_sha256_hex(raw_string,mkey);
        bhs256c64 = math:hmac_sha256_base64(raw_string,mkey);
    }
    {
        notify("HMAC sha1", "#{hmac_sha1}") with sticky = true;
        notify("hmac sha1 base 64", "#{hmac_sha1_64}") with sticky = true;
        notify("hmac sha256", "#{bhs256c}") with sticky = true;
        notify("hmac sha256 base 64", "#{bhs256c64}") with sticky = true;
    }
}
Running that rule gives the following results:
var hmac_sha1 = 'effcdf6ae5eb2fa2d27416d5f184df9c259a7c79';
var hmac_sha1_64 = '7/zfauXrL6LSdBbV8YTfnCWafHk';
var bhs256c = '5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843';
var bhs256c64 = 'W9zBRr9gdU5qBCQmCJV1x1oAPwidJzmDnexYuWTsOEM';
The HEX results for SHA1 and SHA256 match the test vectors of the simple case.
I tested the base64 results by decoding the HEX results and putting them through the base64 encoder here
My results were:
7/zfauXrL6LSdBbV8YTfnCWafHk=
W9zBRr9gdU5qBCQmCJV1x1oAPwidJzmDnexYuWTsOEM=
These match my calculations for HMAC SHA1 base64 and HMAC SHA256 base64, respectively.
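The same cross-check can be done without an online encoder. Here is a small sketch in Go that computes both HMACs for the RFC test case and prints the hex and Base64 forms; the expected values in the comments are the ones quoted above (the KRL output above differs only in omitting the trailing = padding):

package main

import (
    "crypto/hmac"
    "crypto/sha1"
    "crypto/sha256"
    "encoding/base64"
    "encoding/hex"
    "fmt"
    "hash"
)

// hmacHexAndB64 computes an HMAC over data with the given key and returns
// the digest as both a hex string and a Base64 string.
func hmacHexAndB64(newHash func() hash.Hash, key, data string) (string, string) {
    mac := hmac.New(newHash, []byte(key))
    mac.Write([]byte(data))
    sum := mac.Sum(nil)
    return hex.EncodeToString(sum), base64.StdEncoding.EncodeToString(sum)
}

func main() {
    key, data := "Jefe", "what do ya want for nothing?"

    // Expected: effcdf6ae5eb2fa2d27416d5f184df9c259a7c79
    //           7/zfauXrL6LSdBbV8YTfnCWafHk=
    hexSha1, b64Sha1 := hmacHexAndB64(sha1.New, key, data)
    fmt.Println("HMAC-SHA1   hex:   ", hexSha1)
    fmt.Println("HMAC-SHA1   base64:", b64Sha1)

    // Expected: 5bdcc146bf60754e6a042426089575c75a003f089d2739839dec58b964ec3843
    //           W9zBRr9gdU5qBCQmCJV1x1oAPwidJzmDnexYuWTsOEM=
    hexSha256, b64Sha256 := hmacHexAndB64(sha256.New, key, data)
    fmt.Println("HMAC-SHA256 hex:   ", hexSha256)
    fmt.Println("HMAC-SHA256 base64:", b64Sha256)
}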
If you are still having problems, could you provide me the base64 and SHA results from Python separately so I can identify the disconnect?