I have a string and a key, from which I want to generate an HMAC SHA256. I'm using two libraries,
IDZSwiftCommonCrypto and CryptoSwift,
and this answer.
Nothing really worked for me. My sources of truth are these two websites:
https://myeasywww.appspot.com/utility/free/online/Crypt_Decrypt-MD5-AES-HMAC-SHA-DES-RABBIT/en?command=UTILITY&ID=2
and
https://www.freeformatter.com/hmac-generator.html#ad-output
which always generate the correct hash for my case.
Any idea what could work here? Some code samples:
For IDZSwiftCommonCrypto
func getHMacSHA256(forMessage message: String, key: String) -> String? {
    let hMacVal = HMAC(algorithm: HMAC.Algorithm.sha256, key: key).update(string: message)?.final()
    if let encryptedData = hMacVal {
        let decData = NSData(bytes: encryptedData, length: Int(encryptedData.count))
        let base64String = decData.base64EncodedString(options: .lineLength64Characters)
        print("base64String: \(base64String)")
        return base64String
    } else {
        return nil
    }
}
And for CryptoSwift
let password: Array<UInt8> = Array(payload.utf8)
let salt: Array<UInt8> = Array("somekey".utf8)
let signedBody = try? HKDF(password: password, salt: salt, variant: .sha256).calculate()
But nothing produces the same output as the sources of truth. Any idea?
If you target iOS 13.0+ or macOS 10.15+, use Apple's CryptoKit
import CryptoKit
let secretString = "my-secret"
let key = SymmetricKey(data: Data(secretString.utf8))
let string = "An apple a day keeps anyone away, if you throw it hard enough"
let signature = HMAC<SHA256>.authenticationCode(for: Data(string.utf8), using: key)
print(Data(signature).map { String(format: "%02hhx", $0) }.joined()) // 1c161b971ab68e7acdb0b45cca7ae92d574613b77fca4bc7d5c4effab89dab67
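If you need the Base64 form (as in the IDZSwiftCommonCrypto snippet above) rather than hex, you can encode the same MAC; a small sketch:
// Base64-encode the CryptoKit HMAC result (sketch; same key and message as above)
let base64Signature = Data(signature).base64EncodedString()
print(base64Signature)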
I've been using this:
import Foundation
enum CryptoAlgorithm {
    case MD5, SHA1, SHA224, SHA256, SHA384, SHA512

    var HMACAlgorithm: CCHmacAlgorithm {
        var result: Int = 0
        switch self {
        case .MD5:    result = kCCHmacAlgMD5
        case .SHA1:   result = kCCHmacAlgSHA1
        case .SHA224: result = kCCHmacAlgSHA224
        case .SHA256: result = kCCHmacAlgSHA256
        case .SHA384: result = kCCHmacAlgSHA384
        case .SHA512: result = kCCHmacAlgSHA512
        }
        return CCHmacAlgorithm(result)
    }

    var digestLength: Int {
        var result: Int32 = 0
        switch self {
        case .MD5:    result = CC_MD5_DIGEST_LENGTH
        case .SHA1:   result = CC_SHA1_DIGEST_LENGTH
        case .SHA224: result = CC_SHA224_DIGEST_LENGTH
        case .SHA256: result = CC_SHA256_DIGEST_LENGTH
        case .SHA384: result = CC_SHA384_DIGEST_LENGTH
        case .SHA512: result = CC_SHA512_DIGEST_LENGTH
        }
        return Int(result)
    }
}
extension String {
    func hmac(algorithm: CryptoAlgorithm, key: String) -> String {
        let str = self.cString(using: String.Encoding.utf8)
        let strLen = Int(self.lengthOfBytes(using: String.Encoding.utf8))
        let digestLen = algorithm.digestLength
        let result = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: digestLen)
        let keyStr = key.cString(using: String.Encoding.utf8)
        let keyLen = Int(key.lengthOfBytes(using: String.Encoding.utf8))

        CCHmac(algorithm.HMACAlgorithm, keyStr!, keyLen, str!, strLen, result)

        let digest = stringFromResult(result: result, length: digestLen)
        result.deallocate(capacity: digestLen)
        return digest
    }

    private func stringFromResult(result: UnsafeMutablePointer<CUnsignedChar>, length: Int) -> String {
        let hash = NSMutableString()
        for i in 0..<length {
            hash.appendFormat("%02x", result[i])
        }
        return String(hash).lowercased()
    }
}
You'll need to add #import <CommonCrypto/CommonHMAC.h> to your Objective-C bridging header.
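Usage is then a one-liner; a quick sketch (the result is the lowercase hex digest, so compare it against your reference sites as hex):
// Sketch: "secret" and "message" are placeholder values
let digest = "message".hmac(algorithm: .SHA256, key: "secret")
print(digest) // 64 hex characters for SHA-256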
Source: #thevalyreangroup on this github thread
You're doing it wrong with CryptoSwift.
For future readers, here's how to do it:
let result = try! HMAC(key: key, variant: .sha256).authenticate(message.bytes)
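To compare against the online generators, which display hex, you can hex-encode the resulting bytes. A minimal sketch, assuming key and message are Strings and CryptoSwift is imported:
let mac = try! HMAC(key: Array(key.utf8), variant: .sha256).authenticate(Array(message.utf8))
let hex = mac.map { String(format: "%02x", $0) }.joined() // hex digest to compare with the websites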
Swift 4.2 solution for HMAC
Not so long ago I had the same problem, so I wrote a simple framework for use in Swift on all platforms: iOS, macOS and tvOS.
It's called EasyCrypt and you can find it here:
https://github.com/lukszar/EasyCrypt
This framework lets you hash a message with your key, using HMAC algorithms.
Usage is simple, like the following:
let crypto = EasyCrypt(secret: "mySecretKey", algorithm: .sha256)
let result = crypto.hash("This is very secret text to encrypt")
let otherResult = crypto.hash("This is another secret text to encrypt")
print("result: ", result)
print("otherResult: ", otherResult)
You can install it quickly using Carthage. Inside the project you can find a Playground with demo usage and instructions.
Related
When logging in to Steam, I need my password encrypted with the RSA public key they provide. But when I encrypt it with SecKeyCreateEncryptedData and send it for authentication, Steam says my password is incorrect. I think it may be a problem with the encryption format, but I cannot figure it out. Please help me with this.
static func encrypt(string: String, mod: String, exp: String) -> String? {
    let keyString = self.rsaPublicKeyder(mod: mod, exp: exp)
    guard let data = Data(base64Encoded: keyString) else { return nil }

    var attributes: CFDictionary {
        return [kSecAttrKeyType: kSecAttrKeyTypeRSA,
                kSecAttrKeyClass: kSecAttrKeyClassPublic,
                kSecAttrKeySizeInBits: 2048,
                kSecReturnPersistentRef: kCFBooleanTrue!] as CFDictionary
    }

    var error: Unmanaged<CFError>? = nil
    guard let secKey = SecKeyCreateWithData(data as CFData, attributes, &error) else {
        print(error.debugDescription)
        return nil
    }
    guard let result = SecKeyCreateEncryptedData(secKey, SecKeyAlgorithm.rsaEncryptionPKCS1, string.data(using: .utf8)! as CFData, &error) else {
        print(error.debugDescription)
        return nil
    }
    return (result as Data).base64EncodedString()
}
The self.rsaPublicKeyder function converts the mod and exp values into PKCS#8 DER format so the key can be imported with SecKeyCreateWithData. I checked the resulting DER in CyberChef and it seems fine.
I also tried encrypting manually with the BigInt library, but that failed too. How?
class RSA {
    static func encrypt(string: String, mod: String, exp: String) -> String {
        let secret = pkcs1pad2(data: string.data(using: .utf8)!, keySize: mod.count / 2)!
        return secret.power(BigUInt(exp, radix: 16)!, modulus: BigUInt(mod, radix: 16)!).serialize().base64EncodedString()
    }

    static func pkcs1pad2(data: Data, keySize: Int) -> BigUInt? {
        if keySize < data.count + 11 {
            return nil
        }
        var rndData: [UInt8] = [UInt8](repeating: 0, count: keySize - 3 - data.count)
        let status = SecRandomCopyBytes(kSecRandomDefault, rndData.count, &rndData)
        for i in 0..<rndData.count {
            if rndData[i] == 0 {
                rndData[i] = UInt8(i + 1)
            }
        }
        guard status == errSecSuccess else {
            return nil
        }
        return BigUInt(Data([0x00, 0x02]) + Data(rndData) + Data([0x00]) + data)
    }
}
I have spent a lot of time on this problem and I still don't know which part of my code is wrong. I have put all the code on GitHub. If you can help me with it I'll be very grateful.
https://github.com/MTAwsl/iAuth/tree/dev
I debugged it myself using Fiddler and found the answer.
When a Base64 string is sent via the HTTP GET method, it needs to be percent-encoded, so String.addingPercentEncoding should be called with a suitable character set. However, CharacterSet.urlHostAllowed treats "+" as an allowed character and leaves it unencoded, so when the server decodes the Base64 data it interprets "+" as a space, which is definitely not what we want. I added the extension below to String and it solved the problem. Both the BigInt and the SecKey approaches work; the problem had nothing to do with the RSA encryption itself.
extension String {
    var encoded: String? {
        var urlB64Encoded: CharacterSet = .urlHostAllowed
        urlB64Encoded.remove(charactersIn: "+")
        return self.addingPercentEncoding(withAllowedCharacters: urlB64Encoded)
    }
}
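Usage, as a quick sketch with a made-up Base64 value:
let base64 = "ab+cd/ef=="              // hypothetical Base64 ciphertext
let queryValue = base64.encoded        // "+" now comes back percent-encoded as %2B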
I've been at this for about 10 hours now and no matter what HMAC combination I use in Swift I cannot get it to match the signature generated by Python.
Python Code:
signature = hmac.new(secret.decode('hex'), msg=datastring, digestmod=hashlib.sha256).hexdigest()
Swift Code:
let key = SymmetricKey(data: self.secret.data(using: .utf8)!)
let hexData = HMAC<SHA256>.authenticationCode(for: datastring.data(using: .utf8)!, using: key)
let signature = Data(hexData).map { String(format: "%02hhx", $0) }.joined()
Any help with what I'm doing wrong (or missing) in Swift would be greatly appreciated.
Based on the assumption that self.secret is a String containing the hex representation of the secret key, the difference between the two comes down to your use of:
self.secret.data(using: .utf8)!
which will just perform a straight conversion to the underlying bytes instead of converting each character pair into the corresponding byte, as:
secret.decode('hex')
does in Python 2.
From what I can tell, there isn't a function to do this conversion in the Swift standard library, but you could do it with something like:
func bytes(fromHex input: String) -> Data {
    var result = Data()
    var byte: UInt8 = 0
    for (index, character) in input.enumerated() {
        let codeUnit = character.utf8[character.utf8.startIndex]
        var nibble: UInt8 = 0
        switch codeUnit {
        case 0x30..<0x3a:
            nibble = codeUnit - 0x30
        case 0x61..<0x67:
            nibble = codeUnit - 0x57
        default:
            break
        }
        if index % 2 == 0 {
            byte |= (nibble << 4)
        } else {
            byte |= nibble
            result.append(contentsOf: [byte])
            byte = 0
        }
    }
    return result
}
and then your code would become:
let key = SymmetricKey(data: bytes(fromHex: self.secret))
let hexData = HMAC<SHA256>.authenticationCode(for: datastring.data(using: .utf8)!, using: key)
let signature = Data(hexData).map { String(format: "%02hhx", $0) }.joined()
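As an aside, on Swift 5 you could lean on Character.hexDigitValue (SE-0221) instead of matching code units by hand. A minimal sketch that also rejects odd-length or non-hex input by returning nil:
func bytesFromHex(_ input: String) -> Data? {
    guard input.count % 2 == 0 else { return nil }
    var result = Data(capacity: input.count / 2)
    var iterator = input.makeIterator()
    // Consume the string two hex digits at a time, building one byte per pair.
    while let high = iterator.next(), let low = iterator.next() {
        guard let h = high.hexDigitValue, let l = low.hexDigitValue else { return nil }
        result.append(UInt8(h << 4 | l))
    }
    return result
}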
I am using the CryptoSwift library for encryption and decryption, but it only works with 16-byte strings. If I pass a smaller string (less than 16 bytes) I get a nil result.
I am using the incremental operations: an instance of Cryptor, encrypting/decrypting one part at a time.
Please help me here; is there anything I am doing wrong?
Thanks in advance.
func encAndDec() {
    do {
        // Encryption start
        let data = Data.init(base64Encoded: "12345678901234567890123456789012".base64Encoded()!)
        let iv: Array<UInt8> = [0,0,0,0,0,0,0,0,0,0,0,0]
        let nIv = Data(iv)
        let gcmEnc = GCM(iv: nIv.bytes, mode: .detached)
        var enc = try? AES(key: data!.bytes, blockMode: gcmEnc, padding: .noPadding).makeEncryptor()

        let arrStr = ["My name is tarun"] // Working
        //let arrStr = ["tarun"] // Not working for this string

        var ciphertext = Array<UInt8>()
        for txt in arrStr {
            let ciperText = try? enc?.update(withBytes: Array(txt.utf8)) // Getting nil for small string.
            ciphertext.append(contentsOf: ciperText!)
        }
        var res = try? enc?.finish()
        gcmEnc.authenticationTag = self.randomGenerateBytes(count: 16)?.bytes
        res?.append(contentsOf: (gcmEnc.authenticationTag)!)
        let cipherData = Data(ciphertext) + Data(res!)
        let strEnc = String(decoding: cipherData, as: UTF8.self)
        print(strEnc)

        // Decryption start from here
        do {
            let gcmDec = GCM.init(iv: nIv.bytes, additionalAuthenticatedData: nil, tagLength: 16, mode: .detached)
            var aesDec = try! AES(key: data!.bytes, blockMode: gcmDec, padding: .noPadding).makeDecryptor()
            let tag_length = 16
            let encData = cipherData.subdata(in: 0..<cipherData.count - tag_length)
            let tag = cipherData.subdata(in: encData.count ..< cipherData.count)
            let decData = try? aesDec.update(withBytes: encData.bytes) // Getting nil here for small string
            let strData = String(decoding: decData!, as: UTF8.self)
            print(strData)
            do {
                var res = try? aesDec.finish(withBytes: tag.bytes)
                res?.append(contentsOf: tag)
            } catch {
            }
        } catch {
            // failed
        }
    }
}

func randomGenerateBytes(count: Int) -> Data? {
    let bytes = UnsafeMutableRawPointer.allocate(byteCount: count, alignment: 1)
    defer { bytes.deallocate() }
    let status = CCRandomGenerateBytes(bytes, count)
    guard status == kCCSuccess else { return nil }
    return Data(bytes: bytes, count: count)
}
There's nothing wrong with the AES-256-GCM implementation in CryptoSwift, as some of the commenters have suggested; you just have some bugs in your code. Hopefully the following will help you out.
I'm just going to call it GCM below for brevity.
GCM encryption takes as input the plaintext, a key, and an initialization vector, and produces ciphertext and an authentication tag. In your code, you set the authentication tag to random bytes, overwriting the real tag.
I think it's a bit clearer if you break your code up into functions, each with a clearly defined purpose. I also stripped away some of the conversions between Data and [UInt8] for clarity.
Here's what the encryption function would look like:
func enc(plainText: [String], key: [UInt8], iv: [UInt8]) throws -> (cipherText: [UInt8], authenticationTag: [UInt8]?) {
    let gcmEnc = GCM(iv: iv, mode: .detached)
    var enc = try AES(key: key, blockMode: gcmEnc, padding: .noPadding).makeEncryptor()
    var ciphertext = Array<UInt8>()
    for txt in plainText {
        ciphertext += try enc.update(withBytes: Array(txt.utf8))
    }
    ciphertext += try enc.finish()
    return (ciphertext, gcmEnc.authenticationTag)
}
When you're decrypting GCM you need to pass in the ciphertext, key, initialization vector and the authentication tag. That would look like this:
func dec(cipherText: [UInt8], authenticationTag: [UInt8]?, key: [UInt8], iv: [UInt8]) throws -> [UInt8]? {
    let tagLength = authenticationTag?.count ?? 0
    let gcmDec = GCM(iv: iv, additionalAuthenticatedData: nil, tagLength: tagLength, mode: .detached)
    gcmDec.authenticationTag = authenticationTag
    var aesDec = try AES(key: key, blockMode: gcmDec, padding: .noPadding).makeDecryptor()
    var decData = try aesDec.update(withBytes: cipherText)
    decData += try aesDec.finish()
    return decData
}
In both cases, you need to make sure that you append the output of the finish call to the ciphertext or plaintext. This is particularly important with small amounts of data as the update method may produce nothing!
With these two functions written you can rewrite your test function as follows:
func encAndDec() {
    do {
        guard let key = Data.init(base64Encoded: "12345678901234567890123456789012".base64Encoded()!) else {
            fatalError("Failed to create key")
        }
        let iv: Array<UInt8> = [0,0,0,0,
                                0,0,0,0,
                                0,0,0,0]
        //let arrStr = ["My name is tarun"] // Working
        let arrStr = ["tarun"] // Not working for this string

        let (cipherText, authenticationTag) = try enc(plainText: arrStr, key: key.bytes, iv: iv)

        guard let decrypedPlainText = try dec(cipherText: cipherText,
                                              authenticationTag: authenticationTag, key: key.bytes, iv: iv) else {
            fatalError("Decryption return nil")
        }
        guard let decryptedString = String(bytes: decrypedPlainText, encoding: .utf8) else {
            fatalError("Failed to convert plaintext to string using UTF8 encoding.")
        }
        print("Decrypted Plaintext: \(decryptedString)")
    }
    catch let e {
        print("EXCEPTION: \(e)")
    }
}
If you run this you'll find it produces the expected output.
The complete example code can be found at: https://gist.github.com/iosdevzone/45456d2911bf2422bc4a6898876ba0ab
I don't believe GCM requires padding. Here is an example, taken pretty much straight from the Node.js documentation, that works fine and does not use padding. The console.log line below shows that the ciphertext length is 5.
I have done the same with Ruby, Java, Go, and others, and none require padding or an input that is a multiple of 16 bytes, as the Swift library seems to. Can anyone else confirm whether this is a bug in the Swift implementation of GCM?
const crypto = require('crypto');

const key = '12345678901234567890123456789012';
const iv = '000000000000'

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const plaintext = 'Hello';
const ciphertext = cipher.update(plaintext, 'utf8');
console.log("ciphertext length %d", ciphertext.length)
cipher.final();
const tag = cipher.getAuthTag();

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
const receivedPlaintext = decipher.update(ciphertext, null, 'utf8');

try {
  decipher.final();
} catch (err) {
  console.error('Authentication failed!');
  return;
}

console.log(receivedPlaintext);
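For comparison, here is the same short plaintext going through CryptoSwift in one shot (the non-incremental path, using the .combined mode so the tag is appended to the ciphertext automatically). This is only a sketch, assuming CryptoSwift is imported and using the same key/IV as the Node example:
import CryptoSwift

func gcmRoundTrip() {
    do {
        let key: Array<UInt8> = Array("12345678901234567890123456789012".utf8) // 32 bytes
        let iv: Array<UInt8> = Array("000000000000".utf8)                       // 12 bytes
        let plaintext: Array<UInt8> = Array("Hello".utf8)                       // 5 bytes, no padding

        // Encrypt: .combined appends the 16-byte authentication tag to the ciphertext.
        let gcmEnc = GCM(iv: iv, mode: .combined)
        let sealed = try AES(key: key, blockMode: gcmEnc, padding: .noPadding).encrypt(plaintext)

        // Decrypt with a fresh GCM instance; the tag is verified as part of decrypt().
        let gcmDec = GCM(iv: iv, mode: .combined)
        let decrypted = try AES(key: key, blockMode: gcmDec, padding: .noPadding).decrypt(sealed)
        print(String(decoding: decrypted, as: UTF8.self)) // "Hello"
    } catch {
        print("GCM round trip failed: \(error)")
    }
}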
I am using CryptoSwift to encrypt data. I am learning how to use it, but I cannot get past the first basic tutorial. I am unable to convert the encrypted data back to a String, which kind of defeats the purpose of encrypting it in the first place if I cannot legibly decrypt it.
Code:
let string = "Hi. This is Atlas"
let input: [UInt8] = Array(string.utf8)
print(input)
let key: [UInt8] = [0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00]
let iv: [UInt8] = AES.randomIV(AES.blockSize)
do {
let encryptedBytes: [UInt8] = try AES(key: key, iv: iv, blockMode: .CBC).encrypt(input, padding: PKCS7())
print(encryptedBytes)
let decrypted: [UInt8] = try AES(key: key, iv: iv, blockMode: .CBC).decrypt(encryptedBytes, padding: PKCS7())
print(decrypted) // << need to convert this array of byted to a string (should be equal to original input)
} catch {
} catch {
}
Thank you for the help
You'll want Foundation to decode the UTF8 for you since there's no way to generate a String.UTF8View directly. So convert to NSData first.
let decrypted: [UInt8] = [0x48, 0x65, 0x6c, 0x6c, 0x6f]
let data = NSData(bytes: decrypted, length: decrypted.count)
let str = String(data: data, encoding: NSUTF8StringEncoding)
If you want to do it without Foundation, you can, but it's a little work. You have to manage the decoding yourself.
extension String {
    init?(utf8Bytes: [UInt8]) {
        var decoder = UTF8()
        var g = utf8Bytes.generate()
        var characters: [Character] = []
        LOOP:
        while true {
            let result = decoder.decode(&g)
            switch result {
            case .Result(let scalar): characters.append(Character(scalar))
            case .EmptyInput: break LOOP
            case .Error: return nil
            }
        }
        self.init(characters)
    }
}
let unicode = String(utf8Bytes: bytes)
(I'm very surprised that this isn't built into Swift stdlib since it's so common and can be quickly built out of other parts of Swift stdlib. Often when that's the case, there's a reason that I'm just not aware of yet, so there may be some subtle problem with my approach here.)
let stringDecrypted = String(decrypted.map { Character(UnicodeScalar($0)) })
It maps each UInt8 to a UnicodeScalar and then to a Character. After that it uses String's initializer to create a String from the array of Characters.
We know we can print each character's UTF-8 code units.
Given the code units of these characters, how can we create a String from them?
With Swift 5, you can choose one of the following ways in order to convert a collection of UTF-8 code units into a string.
#1. Using String's init(_:) initializer
If you have a String.UTF8View instance (i.e. a collection of UTF-8 code units) and want to convert it to a string, you can use init(_:) initializer. init(_:) has the following declaration:
init(_ utf8: String.UTF8View)
Creates a string corresponding to the given sequence of UTF-8 code units.
The Playground sample code below shows how to use init(_:):
let string = "Café 🇫🇷"
let utf8View: String.UTF8View = string.utf8
let newString = String(utf8View)
print(newString) // prints: Café 🇫🇷
#2. Using Swift's init(decoding:as:) initializer
init(decoding:as:) creates a string from the given Unicode code units collection in the specified encoding:
let string = "Café 🇫🇷"
let codeUnits: [Unicode.UTF8.CodeUnit] = Array(string.utf8)
let newString = String(decoding: codeUnits, as: UTF8.self)
print(newString) // prints: Café 🇫🇷
Note that init(decoding:as:) also works with String.UTF8View parameter:
let string = "Café 🇫🇷"
let utf8View: String.UTF8View = string.utf8
let newString = String(decoding: utf8View, as: UTF8.self)
print(newString) // prints: Café 🇫🇷
#3. Using transcode(_:from:to:stoppingOnError:into:) function
The following example transcodes the UTF-8 representation of an initial string into Unicode scalar values (UTF-32 code units) that can be used to build a new string:
let string = "Café 🇫🇷"
let bytes = Array(string.utf8)
var newString = ""
_ = transcode(bytes.makeIterator(), from: UTF8.self, to: UTF32.self, stoppingOnError: true, into: {
newString.append(String(Unicode.Scalar($0)!))
})
print(newString) // prints: Café 🇫🇷
#4. Using Array's withUnsafeBufferPointer(_:) method and String's init(cString:) initializer
init(cString:) has the following declaration:
init(cString: UnsafePointer<CChar>)
Creates a new string by copying the null-terminated UTF-8 data referenced by the given pointer.
The following example shows how to use init(cString:) with a pointer to the content of a CChar array (i.e. a well-formed UTF-8 code unit sequence) in order to create a string from it:
let bytes: [CChar] = [67, 97, 102, -61, -87, 32, -16, -97, -121, -85, -16, -97, -121, -73, 0]
let newString = bytes.withUnsafeBufferPointer({ (bufferPointer: UnsafeBufferPointer<CChar>) in
    return String(cString: bufferPointer.baseAddress!)
})
print(newString) // prints: Café 🇫🇷
#5. Using Unicode.UTF8's decode(_:) method
To decode a code unit sequence, call decode(_:) repeatedly until it returns UnicodeDecodingResult.emptyInput:
let string = "Café 🇫🇷"
let codeUnits = Array(string.utf8)
var codeUnitIterator = codeUnits.makeIterator()
var utf8Decoder = Unicode.UTF8()
var newString = ""
Decode: while true {
    switch utf8Decoder.decode(&codeUnitIterator) {
    case .scalarValue(let value):
        newString.append(Character(Unicode.Scalar(value)))
    case .emptyInput:
        break Decode
    case .error:
        print("Decoding error")
        break Decode
    }
}
print(newString) // prints: Café 🇫🇷
#6. Using String's init(bytes:encoding:) initializer
Foundation gives String a init(bytes:encoding:) initializer that you can use as indicated in the Playground sample code below:
import Foundation
let string = "Café 🇫🇷"
let bytes: [Unicode.UTF8.CodeUnit] = Array(string.utf8)
let newString = String(bytes: bytes, encoding: String.Encoding.utf8)
print(String(describing: newString)) // prints: Optional("Café 🇫🇷")
It's possible to convert UTF8 code points to a Swift String idiomatically using the UTF8 Swift class. Although it's much easier to convert from String to UTF8!
import Foundation
public class UTF8Encoding {
    public static func encode(bytes: Array<UInt8>) -> String {
        var encodedString = ""
        var decoder = UTF8()
        var generator = bytes.generate()
        var finished: Bool = false
        do {
            let decodingResult = decoder.decode(&generator)
            switch decodingResult {
            case .Result(let char):
                encodedString.append(char)
            case .EmptyInput:
                finished = true
            /* ignore errors and unexpected values */
            case .Error:
                finished = true
            default:
                finished = true
            }
        } while (!finished)
        return encodedString
    }

    public static func decode(str: String) -> Array<UInt8> {
        var decodedBytes = Array<UInt8>()
        for b in str.utf8 {
            decodedBytes.append(b)
        }
        return decodedBytes
    }
}
func testUTF8Encoding() {
    let testString = "A UTF8 String With Special Characters: 😀🍎"
    let decodedArray = UTF8Encoding.decode(testString)
    let encodedString = UTF8Encoding.encode(decodedArray)
    XCTAssert(encodedString == testString, "UTF8Encoding is lossless: \(encodedString) != \(testString)")
}
Of the other alternatives suggested:
Using NSString invokes the Objective-C bridge;
Using UnicodeScalar is error-prone because it converts UnicodeScalars directly to Characters, ignoring complex grapheme clusters; and
Using String.fromCString is potentially unsafe as it uses pointers.
Improving on Martin R's answer:
import AppKit
let utf8 : CChar[] = [65, 66, 67, 0]
let str = NSString(bytes: utf8, length: utf8.count, encoding: NSUTF8StringEncoding)
println(str) // Output: ABC
import AppKit
let utf8 : UInt8[] = [0xE2, 0x82, 0xAC, 0]
let str = NSString(bytes: utf8, length: utf8.count, encoding: NSUTF8StringEncoding)
println(str) // Output: €
What happens is that the Array can be automatically converted to a CConstVoidPointer, which can be used to create a string with NSString(bytes: CConstVoidPointer, length: Int, encoding: UInt).
Swift 3
let s = String(bytes: arr, encoding: .utf8)
I've been looking for a comprehensive answer regarding string manipulation in Swift myself. Relying on casts to and from NSString and other unsafe pointer magic just wasn't doing it for me. Here's a safe alternative:
First, we'll want to extend UInt8. This is the primitive type behind CodeUnit.
extension UInt8 {
    var character: Character {
        return Character(UnicodeScalar(self))
    }
}
This will allow us to do something like this:
let codeUnits: [UInt8] = [
    72, 69, 76, 76, 79
]
let characters = codeUnits.map { $0.character }
let string = String(characters)
// string prints "HELLO"
Equipped with this extension, we can now begin modifying strings.
let string = "ABCDEFGHIJKLMONP"
var modifiedCharacters = [Character]()
for (index, utf8unit) in string.utf8.enumerate() {
// Insert a "-" every 4 characters
if index > 0 && index % 4 == 0 {
let separator: UInt8 = 45 // "-" in ASCII
modifiedCharacters.append(separator.character)
}
modifiedCharacters.append(utf8unit.character)
}
let modifiedString = String(modifiedCharacters)
// modified string == "ABCD-EFGH-IJKL-MONP"
// Swift4
var units = [UTF8.CodeUnit]()
//
// update units
//
let str = String(decoding: units, as: UTF8.self)
I would do something like this. It may not be as elegant as working with 'pointers', but it does the job well. It's essentially a bunch of new += operators for String, like:
#infix func += (inout lhs: String, rhs: (unit1: UInt8)) {
    lhs += Character(UnicodeScalar(UInt32(rhs.unit1)))
}

#infix func += (inout lhs: String, rhs: (unit1: UInt8, unit2: UInt8)) {
    lhs += Character(UnicodeScalar(UInt32(rhs.unit1) << 8 | UInt32(rhs.unit2)))
}

#infix func += (inout lhs: String, rhs: (unit1: UInt8, unit2: UInt8, unit3: UInt8, unit4: UInt8)) {
    lhs += Character(UnicodeScalar(UInt32(rhs.unit1) << 24 | UInt32(rhs.unit2) << 16 | UInt32(rhs.unit3) << 8 | UInt32(rhs.unit4)))
}
NOTE: you can extend the list of supported operators by overloading the + operator as well, defining a list of fully commutative operators for String.
Now you are able to append a Unicode (UTF-8, UTF-16 or UTF-32) character to a String, e.g.:
var string: String = "signs of the Zodiac: "
string += (0x0, 0x0, 0x26, 0x4b)
string += (38)
string += (0x26, 76)
This is a possible solution (now updated for Swift 2):
let utf8: [CChar] = [65, 66, 67, 0]
if let str = utf8.withUnsafeBufferPointer({ String.fromCString($0.baseAddress) }) {
    print(str) // Output: ABC
} else {
    print("Not a valid UTF-8 string")
}
Within the closure, $0 is a UnsafeBufferPointer<CChar> pointing to the array's contiguous storage. From that a Swift String can be created.
Alternatively, if you prefer the input as unsigned bytes:
let utf8: [UInt8] = [0xE2, 0x82, 0xAC, 0]
if let str = utf8.withUnsafeBufferPointer({ String.fromCString(UnsafePointer($0.baseAddress)) }) {
    print(str) // Output: €
} else {
    print("Not a valid UTF-8 string")
}
If you're starting with a raw buffer, such as from the Data object returned from a file handle (in this case, taken from a Pipe object):
let data = pipe.fileHandleForReading.readDataToEndOfFile()
var unsafePointer = UnsafeMutablePointer<UInt8>.allocate(capacity: data.count)
data.copyBytes(to: unsafePointer, count: data.count)
let output = String(cString: unsafePointer)
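A simpler route that avoids the unsafe pointer entirely (and doesn't rely on null-termination) is to decode the Data directly; a sketch:
let data = pipe.fileHandleForReading.readDataToEndOfFile()
let output = String(decoding: data, as: UTF8.self) // invalid byte sequences become U+FFFD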
Here is a Swift 3.0 version of Martin R's answer:
public class UTF8Encoding {
    public static func encode(bytes: Array<UInt8>) -> String {
        var encodedString = ""
        var decoder = UTF8()
        var generator = bytes.makeIterator()
        var finished: Bool = false
        repeat {
            let decodingResult = decoder.decode(&generator)
            switch decodingResult {
            case .scalarValue(let char):
                encodedString += "\(char)"
            case .emptyInput:
                finished = true
            case .error:
                finished = true
            }
        } while (!finished)
        return encodedString
    }

    public static func decode(str: String) -> Array<UInt8> {
        var decodedBytes = Array<UInt8>()
        for b in str.utf8 {
            decodedBytes.append(b)
        }
        return decodedBytes
    }
}
If you want to show emoji from a UTF-8 string, just use the convertEmojiCodesToString method below. It works properly for strings like "U+1F52B" (emoji) or "U+1F1E6 U+1F1F1" (country flag emoji).
class EmojiConverter {
    static func convertEmojiCodesToString(_ emojiCodesString: String) -> String {
        let emojies = emojiCodesString.components(separatedBy: " ")
        var resultString = ""
        for emoji in emojies {
            var formattedCode = emoji
            formattedCode.slice(from: 2, to: emoji.length)
            formattedCode = formattedCode.lowercased()
            if let charCode = UInt32(formattedCode, radix: 16),
               let unicode = UnicodeScalar(charCode) {
                let str = String(unicode)
                resultString += "\(str)"
            }
        }
        return resultString
    }
}
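Note that slice(from:to:) and length are not standard String members; the snippet assumes String extensions along these lines (a hypothetical sketch, used here only to strip the "U+" prefix):
extension String {
    // Hypothetical helper assumed by the snippet above.
    var length: Int { return count }

    // Hypothetical helper: keep the characters in the half-open range [from, to).
    mutating func slice(from: Int, to: Int) {
        let start = index(startIndex, offsetBy: from)
        let end = index(startIndex, offsetBy: to)
        self = String(self[start..<end])
    }
}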