I am trying to create a signature for an HTTP request, using Flutter/Dart for the app and Node.js for the server, but there is a small difference between the two signatures. Any idea what causes that?
EEFSxb_coHvGM-69RhmfAlXJ9J0= //signature in dart
EEFSxb/coHvGM+69RhmfAlXJ9J0= //signature in nodejs
signature.dart
var key = "key";
var data = "data";
List<int> signingKey = utf8.encode("key");
List<int> signatureBaseString = utf8.encode("data");
var hmacSha1 = Hmac(sha1, signingKey);
var digest = hmacSha1.convert(signatureBaseString);
var hashInBase = base64Url.encode(digest.bytes);
print(hashInBase) ; // result : EEFSxb_coHvGM-69RhmfAlXJ9J0=
signature.js
var data = "data" ;
var key = "key" ;
var output = encodeURIComponent(data);
var keyO = encodeURIComponent(key);
var hashed = CryptoJS.HmacSHA1(output , keyO);
var hashInBase = CryptoJS.enc.Base64.stringify(hashed);
console.log(hashInBase); // result : EEFSxb/coHvGM+69RhmfAlXJ9J0=
There are two styles of Base64 encoding, which use slightly different character sets for the 64 characters. (You get 52 from the upper- and lower-case letters and ten more from the digits, so a couple more ASCII characters are needed to make up the 64, plus the special trailing equals.)
The / and + characters have special meanings in URLs, so in the alternative "URL-safe" encoding they are replaced with _ and -. Note also that the trailing equals can be dropped.
It seems you may be trying to compare Base64-encoded strings to test for equality. It's safer to decode from Base64 and compare the byte arrays.
Anyway, to solve your problem, don't use the Dart URL-safe version; use the regular one: base64.encode()
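Purely as an illustration (a Swift sketch, since the question itself is Dart/Node.js): the URL-safe alphabet only swaps two characters, so a URL-safe string can be normalized back to the standard alphabet before decoding and comparing the raw bytes.
import Foundation

let urlSafe = "EEFSxb_coHvGM-69RhmfAlXJ9J0="   // the Dart (base64Url) output
let standard = urlSafe
    .replacingOccurrences(of: "-", with: "+")
    .replacingOccurrences(of: "_", with: "/")
// standard is now "EEFSxb/coHvGM+69RhmfAlXJ9J0=", matching the Node.js output,
// and decoding it yields the raw HMAC bytes for a byte-level comparison:
let signatureBytes = Data(base64Encoded: standard)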
Trying to pass a string to URLComponents' percentEncodedPath crashes the app if the string is not a valid percent-encoded path.
Is there a way to tell whether a string is a valid percent-encoded path before I set it? I am having trouble figuring this one out.
var urlComponents = URLComponents()
urlComponents.percentEncodedPath = "/some failing path/" // <- This crashes
You can do something like this:
extension String {
    var isPercentEncoded: Bool {
        let decoded = self.removingPercentEncoding
        return decoded != nil && decoded != self
    }
}
The rationale:
If the String is URL-encoded, removingPercentEncoding will decode it, and hence decoded will be different from self
If the String contains a percent sign (but is not URL-encoded), removingPercentEncoding will fail, and hence decoded will be nil
Otherwise, the string will remain unmodified
Example:
let failingPath = "/some failing path/"
let succeedingPath = "%2Fsome+succeeding+path%2F"
let doubleEncoded = "%252Fsome%2Bfailing%2Bpath%252F"
let withPercent = "a%b"
failingPath.isPercentEncoded // returns false
succeedingPath.isPercentEncoded // returns true
doubleEncoded.isPercentEncoded // returns true
withPercent.isPercentEncoded // returns false
One disadvantage is that you are at the mercy of removingPercentEncoding, and this is a "mechanical" interpretation of the string, which may not take into account the implementation intent. For example, a%20b will be interpreted as URL-encoded. But what if your app expects that string as a decoded one (so that it needs to be encoded further)?
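As a possible usage sketch (my addition, not part of the original answer), the check could guard the assignment, percent-encoding the path first when needed; this assumes Foundation is imported and the extension above is in scope:
import Foundation

var urlComponents = URLComponents()
let path = "/some failing path/"
if path.isPercentEncoded {
    urlComponents.percentEncodedPath = path
} else if let encoded = path.addingPercentEncoding(withAllowedCharacters: .urlPathAllowed) {
    urlComponents.percentEncodedPath = encoded   // "/some%20failing%20path/"
}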
I have this app of mine that reads datamatrix barcodes from drugs using the camera.
When it reads a particular drug, I receive this string from the detector, as seen in the Xcode console:
0100000000D272671721123110700XXXX\U0000001d91D1
My problem is the \U0000001d91D1 part.
This code can be decomposed as follows:
01 00000000D27267 17 211231 10 700XXXX \U0000001d 91D1
01 = drug code
17 = expiry date (DMY)
10 = batch number
The last part is the dosage rate.
Now, in another part of the application, I am on the simulator with no camera, so I need to pass this string to the module that decomposes the code.
I have tried to store the code as a string using
let code = "0100000000D272671721123110700XXXX\U0000001d91D1"
it complains about the backslash, so I change it to a double backslash:
let code = "0100000000D272671721123110700XXXX\\U0000001d91D1"
The detector analyzes this string and concludes that the batch number is 700XXXX\U0000001d91D1 instead of just 700XXXX, so the information from the \ onward is lost.
I think this is Unicode or something.
How do I create this string correctly?
You can use a string transform to decode your hex Unicode escapes:
let str1 = #"0100000000D272671721123110700XXXX\U00000DF491D1"#
let str2 = #"0100000000D272671721123110700XXXX\U0000001d91D1"#
let decoded1 = str1.applyingTransform(.init("Hex-Any"), reverse: false)! // "0100000000D272671721123110700XXXX෴91D1"
let decoded2 = str2.applyingTransform(.init("Hex-Any"), reverse: false)! // "0100000000D272671721123110700XXXX91D1"
You can also get rid of the verbosity by extending StringTransform and StringProtocol:
extension StringTransform {
    static let hexToAny: Self = .init("Hex-Any")
    static let anyToHex: Self = .init("Any-Hex")
}

extension StringProtocol {
    var decodingHex: String {
        applyingTransform(.hexToAny, reverse: false)!
    }
    var encodingHex: String {
        applyingTransform(.anyToHex, reverse: false)!
    }
}
Usage:
let str1 = #"0100000000D272671721123110700XXXX\U00000DF491D1"#
let str2 = #"0100000000D272671721123110700XXXX\U0000001d91D1"#
let decoded1 = str1.decodingHex // "0100000000D272671721123110700XXXX෴91D1"
let decoded2 = str2.decodingHex // "0100000000D272671721123110700XXXX91D1"
The \U0000001d substring probably represents code point U+001D INFORMATION SEPARATOR THREE, which is also the ASCII code point GS (group separator).
In a Swift string literal, we can write that code point using a Unicode escape sequence: \u{1d}. Try writing your string literal like this:
let code = "0100000000D272671721123110700XXXX\u{1d}91D1"
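Since U+001D is the GS field separator, one way to make use of it (an illustrative sketch, not part of the original answers) is to split on that character, so the batch number no longer swallows the trailing data:
let code = "0100000000D272671721123110700XXXX\u{1d}91D1"
let fields = code.split(separator: "\u{1d}")
// fields[0] == "0100000000D272671721123110700XXXX"   (everything up to and including the batch number)
// fields[1] == "91D1"                                 (the data after the separator)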
I'm passing a password into SHA-256. I successfully create the SHA-256 digest and can also print it. The problem begins when I'm trying to convert digest.bytes into a string and append it to a URL.
import 'dart:convert';
import 'package:crypto/crypto.dart';
var url = "http://example_api.php?";
url += '&hash=';
// hash the password
var bytes = utf8.encode(password);
var digest = sha256.convert(bytes);
print("Digest as hex string: $digest");
url += String.fromCharCodes(digest.bytes);
This is printed: Digest as hex string: 03ac674216f3e15c761ee1a5e255f067953623c8b388b4459e13f978d7c846f4
This is appended to url: ¬gBóá\vá¥âUðg6#ȳ´Eùx×ÈFô
What am I doing wrong? I also tried the utf8.decode method, but using it gives me an error.
When you print digest, the print method will call digest.toString(), which is implemented to return a string of the digest bytes using a hexadecimal representation. If you want the same thing you have several options:
Call digest.toString() explicitly (or implicitly)
final digestHex = digest.toString(); // explicitly
final digestHex = '$digest'; // implicitly
Map the byte array to its hexadecimal equivalent
final digestHex = digest.bytes.map((b) => b.toRadixString(16).padLeft(2, '0')).join();
Use the convert package (this is what the crypto package does)
import 'package:convert/convert.dart';
...
final digestHex = hex.encode(digest.bytes);
The reason you are getting an error using utf8.decode is that your digest isn't an encoded UTF-8 string but a list of bytes that for all intents and purposes are completely random. You are trying to directly convert the bytes into a string, and doing so is easier if you can assume that they already represent a valid string. With the byte output from a hashing algorithm, though, you cannot safely make such an assumption.
However, if for some reason you still want to use this option, pass the optional allowMalformed parameter to utf8.decode to force it to try to decode the bytes anyway:
final digestString = utf8.decode(digest.bytes, allowMalformed: true);
For reference, a byte list of [1, 255, 47, 143, 6, 80, 33, 202] results in "�/�P!�" where "�" represents an invalid/control character. You do not want to use this option, especially where the string will become part of a URL (as it's virtually guaranteed that the resulting string will not be web-safe).
For the hexadecimal representation of a Digest object, explicitly call Digest.toString() (though in interpolated strings, e.g. "url${digest}", this is done for you implicitly).
I'm frankly not familiar with String.fromCharCodes, but I think it expects UTF-16 code units rather than UTF-8 bytes. I wrote a terminal example to show this, and how the outputs differ.
import 'dart:core';
import 'dart:convert';
import 'package:crypto/crypto.dart';
void main() {
  const String password = "mypassword";

  // hash the password
  var bytes = utf8.encode(password);
  var digest = sha256.convert(bytes);

  // different formats
  var bytesDigest = digest.bytes;
  var hexDigest = digest.toString();

  String url = "http://example_api.php?hash=";
  print(url + hexDigest);
  print(url + String.fromCharCodes(bytesDigest));
}
Output:
> dart test.dart
http://example_api.php?hash=89e01536ac207279409d4de1e5253e01f4a1769e696db0d6062ca9b8f56767c8
http://example_api.php?hash=à6¬ ry#Ö,©¸õggÈ
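For comparison only (my addition, not part of either answer), the same bytes-to-hex mapping in Swift with CryptoKit produces the identical digest for "mypassword":
import Foundation
import CryptoKit

let password = "mypassword"
let digest = SHA256.hash(data: Data(password.utf8))
let hex = digest.map { String(format: "%02x", $0) }.joined()
// hex == "89e01536ac207279409d4de1e5253e01f4a1769e696db0d6062ca9b8f56767c8"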
I have a piece of code in my iOS app that should go through a word and check if a character is in it. When it finds at least one, it should change a string full of "_" of the same length as the word to one with the character in the right place:
wordToGuess = six
letterGuessed = i
wordAsUnderscores = _i_
The code works. But I start to have problems when I type in characters like: "ć", "ł", "ą", etc. From using character.utf8.count I saw that Swift thinks those are not 1 but 2 characters. So I get something like this:
wordToGuess = cześć
letterGuessed = ś
wordAsUnderscores = _ _ ś (place filled with empty char) _
It takes up 2 places.
I was at it for 6 hours and didn't figure out how to fix it, so I'm asking you guys for help.
Code that is supposed to do that:
let characterGuessed = Character(letterGuessed)
for index in wordToGuess.indices {
    if (wordToGuess[index] == characterGuessed) {
        let endIndex = wordToGuess.index(after: index)
        let charRange = index..<endIndex
        wordAsUnderscores = wordAsUnderscores.replacingCharacters(in: charRange, with: letterGuessed)
        wordToGuessLabel.text = wordAsUnderscores
    }
}
I would like the code to treat "ć", "ł", "ą" characters the same as "i", "a" and so on. I don't want them to be treated as 2.
The reason is that you cannot use indices from one string (wordToGuess) for subscripting another string (wordAsUnderscores). Generally, indices of one collection must not be used with a different collection. (There are exceptions, like Array, though.)
Here is a working variant:
let wordToGuess = "cześć"
let letterGuessed: Character = "ś"
var wordAsUnderscores = "c____"
wordAsUnderscores = String(zip(wordToGuess, wordAsUnderscores)
.map { $0 == letterGuessed ? letterGuessed : $1 })
print(wordAsUnderscores) // c__ś_
The strings are traversed in parallel, and for each correctly guessed character in wordToGuess the corresponding character in wordAsUnderscores is replaced by that character.
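If it helps, the same zip-based approach can be wrapped in a small helper (the function name here is my own, purely illustrative choice):
func reveal(_ letterGuessed: Character, in wordToGuess: String, updating wordAsUnderscores: String) -> String {
    String(zip(wordToGuess, wordAsUnderscores).map { $0 == letterGuessed ? letterGuessed : $1 })
}

wordAsUnderscores = reveal(letterGuessed, in: wordToGuess, updating: wordAsUnderscores) // "c__ś_"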
How do you convert a String to UInt8 array?
var str = "test"
var ar : [UInt8]
ar = str
Lots of different ways, depending on how you want to handle non-ASCII characters.
But the simplest code would be to use the utf8 view:
let string = "hello"
let array: [UInt8] = Array(string.utf8)
Note that this will result in multi-byte characters being represented as multiple entries in the array, e.g.:
let string = "é"
print(Array(string.utf8))
prints out [195, 169]
There’s also .nulTerminatedUTF8, which does the same thing but then adds a nul character to the end, if your plan is to pass this somewhere as a C string (though if you’re doing that, you can probably also use .withCString or just use the implicit conversion for bridged C functions).
let str = "test"
let byteArray = [UInt8](str.utf8)
Swift 4
func stringToUInt8Array() {
    let str: String = "Swift 4"
    let strToUInt8: [UInt8] = [UInt8](str.utf8)
    print(strToUInt8)
}
I came to this question looking for how to convert to an Int8 array. This is how I'm doing it, but surely there's a less loopy way:
Method on an extension of String:
public func int8Array() -> [Int8] {
    var retVal: [Int8] = []
    for thing in self.utf16 {
        retVal.append(Int8(thing))
    }
    return retVal
}
Note: a UTF-16 code unit is 2 bytes, so Int8(thing) will trap at runtime for any code unit above Int8.max (127); this only works for plain ASCII input.
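A shorter alternative (my sketch, not from the original answer) maps the UTF-8 view instead and keeps every byte value by reinterpreting its bit pattern; the property name is hypothetical:
extension String {
    var int8ArrayUTF8: [Int8] {
        utf8.map { Int8(bitPattern: $0) }
    }
}

"test".int8ArrayUTF8 // [116, 101, 115, 116]
"é".int8ArrayUTF8    // [-61, -87] (the bytes 195 and 169 reinterpreted as signed)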