I am fetching a list of Wi-Fi SSIDs from a Bluetooth characteristic. Each SSID is represented as a string, and some contain escaped UTF-8 literals such as "\xc3\xa6".
I have tried multiple ways to decode this, for example:
let s = "\\xc3\\xa6"
let dec = s.utf8
From this I expect
print(dec)
> æ
etc., but it doesn't work; it just results in
print(dec)
> \xc3\xa6
How do I decode UTF-8 literals in Strings in Swift 5?
You'll just have to parse the string, convert each hex string to a UInt8, and then decode that with String(bytes:encoding:):
let s = "\\xc3\\xa6"
let bytes = s
    .components(separatedBy: "\\x")
    // components(separatedBy:) produces an empty string as the first element
    // because the string starts with "\x"; we drop it.
    .dropFirst()
    .compactMap { UInt8($0, radix: 16) }

if let decoded = String(bytes: bytes, encoding: .utf8) {
    print(decoded) // æ
} else {
    print("The UTF8 sequence was invalid!")
}
Follow-up question to my former thread about UTF-8 literals:
It was established that you can decode UTF-8 literals from a string like this, one that consists exclusively of UTF-8 literals:
let s = "\\xc3\\xa6"
let bytes = s
    .components(separatedBy: "\\x")
    // components(separatedBy:) produces an empty string as the first element
    // because the string starts with "\x"; we drop it.
    .dropFirst()
    .compactMap { UInt8($0, radix: 16) }

if let decoded = String(bytes: bytes, encoding: .utf8) {
    print(decoded)
} else {
    print("The UTF8 sequence was invalid!")
}
However, this only works if the string contains nothing but UTF-8 literals. Since I am fetching a list of Wi-Fi names that have these UTF-8 literals embedded in them, how do I go about decoding the entire string?
Example:
let s = "This is a WiFi Name \\xc3\\xa6 including UTF-8 literals \\xc3\\xb8"
With the expected result:
print(s)
> This is a WiFi Name æ including UTF-8 literals ø
In Python there is a simple solution to this:
contents = source_file.read()
uni = contents.decode('unicode-escape')
enc = uni.encode('latin1')
dec = enc.decode('utf-8')
Is there a similar way to decode these strings in Swift 5?
To start with, add the decoding code to a String extension as a computed property (or create a function):
extension String {
    var decodeUTF8: String {
        let bytes = self.components(separatedBy: "\\x")
            .dropFirst()
            .compactMap { UInt8($0, radix: 16) }
        return String(bytes: bytes, encoding: .utf8) ?? self
    }
}
Then use a regular expression and a while loop to replace all matching values:
while let range = string.range(of: #"(\\x[a-f0-9]{2}){2}"#,
                               options: [.regularExpression, .caseInsensitive]) {
    string.replaceSubrange(range, with: String(string[range]).decodeUTF8)
}
As far as I know there's no native Swift solution to this. To make it look as compact as the Python version at the call site, you can build an extension on String to hide the complexity:
extension String {
    func replacingUtf8Literals() -> Self {
        let regex = #"(\\x[a-fA-F0-9]{2})+"#
        var str = self
        while let range = str.range(of: regex, options: .regularExpression) {
            let literalbytes = str[range]
                .components(separatedBy: "\\x")
                .dropFirst()
                .compactMap { UInt8($0, radix: 16) }
            guard let actuals = String(bytes: literalbytes, encoding: .utf8) else {
                fatalError("Regex error")
            }
            str.replaceSubrange(range, with: actuals)
        }
        return str
    }
}
This lets you call
print(s.replacingUtf8Literals())
// prints: This is a WiFi Name æ including UTF-8 literals ø
For convenience I'm trapping a failed conversion with fatalError. You may want to handle this in a better way in production code (although, unless the regex is wrong, it should never occur!). There needs to be some form of break or thrown error here, or else you have an infinite loop.
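If you'd rather not trap, a throwing variant (a sketch, not from the original answer; the method name and error type are made up here) both reports bad input and guarantees the loop terminates:

struct UTF8LiteralDecodingError: Error {}

extension String {
    func replacingUtf8LiteralsOrThrow() throws -> String {
        let regex = #"(\\x[a-fA-F0-9]{2})+"#
        var str = self
        while let range = str.range(of: regex, options: .regularExpression) {
            let bytes = str[range]
                .components(separatedBy: "\\x")
                .dropFirst()
                .compactMap { UInt8($0, radix: 16) }
            // Throwing here replaces the fatalError and breaks out of the loop.
            guard let actuals = String(bytes: bytes, encoding: .utf8) else {
                throw UTF8LiteralDecodingError()
            }
            str.replaceSubrange(range, with: actuals)
        }
        return str
    }
}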
I want to convert the following hex-encoded String in Swift 3:
dcb04a9e103a5cd8b53763051cef09bc66abe029fdebae5e1d417e2ffc2a07a4
to its equivalent String:
Ü°J:\ص7cï ¼f«à)ýë®^A~/ü*¤
The following websites do the job fine:
http://codebeautify.org/hex-string-converter
http://string-functions.com/hex-string.aspx
But I am unable to do the same in Swift 3. The following code doesn't do the job either:
func convertHexStringToNormalString(hexString: String) -> String! {
    if let data = hexString.data(using: .utf8) {
        return String(data: data, encoding: .utf8)
    } else {
        return nil
    }
}
Your code doesn't do what you think it does. This line:
if let data = hexString.data(using: .utf8){
means "encode these characters into UTF-8." That means that "01" doesn't encode to 0x01 (1), it encodes to 0x30 0x31 ("0" "1"). There's no "hex" in there anywhere.
This line:
return String.init(data:data, encoding: .utf8)
just takes the encoded UTF-8 data, interprets it as UTF-8, and returns it. These two methods are symmetrical, so you should expect this whole function to return whatever it was handed.
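A quick playground check (a sketch) makes this concrete:

let hexString = "dcb04a9e"
// "Encode these characters into UTF-8": "d" -> 0x64, "c" -> 0x63, ...
let data = hexString.data(using: .utf8)!
print(Array(data))                          // [100, 99, 98, 48, 52, 97, 57, 101]
// Interpreting that same data as UTF-8 hands back the input unchanged.
print(String(data: data, encoding: .utf8)!) // dcb04a9e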
Pulling together Martin and Larme's comments into one place here. This appears to be encoded in Latin-1. (This is a really awkward way to encode this data, but if it's what you're looking for, I think that's the encoding.)
import Foundation

extension Data {
    // From http://stackoverflow.com/a/40278391:
    init?(fromHexEncodedString string: String) {
        // Convert 0 ... 9, a ... f, A ... F to their decimal value,
        // return nil for all other input characters
        func decodeNibble(u: UInt16) -> UInt8? {
            switch u {
            case 0x30 ... 0x39:
                return UInt8(u - 0x30)
            case 0x41 ... 0x46:
                return UInt8(u - 0x41 + 10)
            case 0x61 ... 0x66:
                return UInt8(u - 0x61 + 10)
            default:
                return nil
            }
        }

        self.init(capacity: string.utf16.count / 2)
        var even = true
        var byte: UInt8 = 0
        for c in string.utf16 {
            guard let val = decodeNibble(u: c) else { return nil }
            if even {
                byte = val << 4
            } else {
                byte += val
                self.append(byte)
            }
            even = !even
        }
        guard even else { return nil }
    }
}
let d = Data(fromHexEncodedString: "dcb04a9e103a5cd8b53763051cef09bc66abe029fdebae5e1d417e2ffc2a07a4")!
let s = String(data: d, encoding: .isoLatin1)
You want to use the hex encoded data as an AES key, but the data is not a valid UTF-8 sequence. You could interpret it as a string in ISO Latin encoding, but the AES(key: String, ...) initializer converts the string back to its UTF-8 representation, i.e. you'll get different key data from what you started with.
Therefore you should not convert it to a string at all. Use the

extension Data {
    init?(fromHexEncodedString string: String)
}

method from hex/binary string conversion in Swift to convert the hex encoded string to Data and then pass that as an array to the AES(key: Array<UInt8>, ...) initializer:
let hexkey = "dcb04a9e103a5cd8b53763051cef09bc66abe029fdebae5e1d417e2ffc2a07a4"
let key = Array(Data(fromHexEncodedString: hexkey)!)
let encrypted = try AES(key: key, ....)
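To see why the detour through String is lossy (a quick sketch, not part of the original answer), round-trip just the first two key bytes:

let original: [UInt8] = [0xDC, 0xB0]                          // first two key bytes
let asString = String(bytes: original, encoding: .isoLatin1)! // "Ü°"
let roundTrip = Array(asString.utf8)                          // [0xC3, 0x9C, 0xC2, 0xB0]
print(roundTrip.count)                                        // 4 bytes, not 2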
There is still a way to convert the key from hex to a readable string by adding the extension below:
extension String {
    func hexToString() -> String {
        var finalString = ""
        let chars = Array(self)
        for count in stride(from: 0, to: chars.count - 1, by: 2) {
            let firstDigit = Int("\(chars[count])", radix: 16) ?? 0
            let lastDigit = Int("\(chars[count + 1])", radix: 16) ?? 0
            let decimal = UInt8(firstDigit * 16 + lastDigit)
            // Interpret each byte as a single scalar value (effectively Latin-1).
            finalString.append(Character(UnicodeScalar(decimal)))
        }
        return finalString
    }

    func base64Decoded() -> String? {
        guard let data = Data(base64Encoded: self) else { return nil }
        return String(data: data, encoding: .utf8)
    }
}
Example of use:
let hexToString = secretKey.hexToString()
let base64ReadableKey = hexToString.base64Decoded() ?? ""
I'm trying to convert emojis to hex values. I found some code online to do it, but it only works in Objective-C. How do I do the same with Swift?
This is a "pure Swift" method, without using Foundation:
let smiley = "😊"
let uni = smiley.unicodeScalars // Unicode scalar values of the string
let unicode = uni[uni.startIndex].value // First element as an UInt32
print(String(unicode, radix: 16, uppercase: true))
// Output: 1F60A
Note that a Swift Character represents a "Unicode grapheme cluster" (compare Strings in Swift 2 from the Swift blog), which can consist of several "Unicode scalar values". Taking the example from @TomSawyer's comment below:
let zero = "0️⃣"
let uni = zero.unicodeScalars // Unicode scalar values of the string
let unicodes = uni.map { $0.value }
print(unicodes.map { String($0, radix: 16, uppercase: true) } )
// Output: ["30", "FE0F", "20E3"]
If someone is trying to find a way to convert an emoji to a Unicode string:
extension String {
    func decode() -> String {
        let data = self.data(using: .utf8)!
        return String(data: data, encoding: .nonLossyASCII) ?? self
    }

    func encode() -> String {
        let data = self.data(using: .nonLossyASCII, allowLossyConversion: true)!
        return String(data: data, encoding: .utf8)!
    }
}
Example:
"😍".encode()
RESULT: \ud83d\ude0d
"\ud83d\ude0d".decode()
RESULT: 😍
It works similarly, but pay attention when you're printing it:

import Foundation

let smiley = "😊"
let data = smiley.data(using: .utf32LittleEndian)!
var unicode: UInt32 = 0
_ = withUnsafeMutableBytes(of: &unicode) { data.copyBytes(to: $0) }
// print(unicode)                      // prints the decimal value
print(String(format: "%2X", unicode)) // prints the hex value of the smiley
We know we can print each character of a string as UTF-8 code units. Then, if we have the code units of those characters, how can we create a String from them?
With Swift 5, you can choose one of the following ways in order to convert a collection of UTF-8 code units into a string.
#1. Using String's init(_:) initializer
If you have a String.UTF8View instance (i.e. a collection of UTF-8 code units) and want to convert it to a string, you can use the init(_:) initializer. init(_:) has the following declaration:
init(_ utf8: String.UTF8View)
Creates a string corresponding to the given sequence of UTF-8 code units.
The Playground sample code below shows how to use init(_:):
let string = "Café 🇫🇷"
let utf8View: String.UTF8View = string.utf8
let newString = String(utf8View)
print(newString) // prints: Café 🇫🇷
#2. Using Swift's init(decoding:as:) initializer
init(decoding:as:) creates a string from the given Unicode code units collection in the specified encoding:
let string = "Café 🇫🇷"
let codeUnits: [Unicode.UTF8.CodeUnit] = Array(string.utf8)
let newString = String(decoding: codeUnits, as: UTF8.self)
print(newString) // prints: Café 🇫🇷
Note that init(decoding:as:) also works with String.UTF8View parameter:
let string = "Café 🇫🇷"
let utf8View: String.UTF8View = string.utf8
let newString = String(decoding: utf8View, as: UTF8.self)
print(newString) // prints: Café 🇫🇷
#3. Using transcode(_:from:to:stoppingOnError:into:) function
The following example transcodes the UTF-8 representation of an initial string into Unicode scalar values (UTF-32 code units) that can be used to build a new string:
let string = "Café 🇫🇷"
let bytes = Array(string.utf8)
var newString = ""
_ = transcode(bytes.makeIterator(), from: UTF8.self, to: UTF32.self, stoppingOnError: true, into: {
    newString.append(String(Unicode.Scalar($0)!))
})
print(newString) // prints: Café 🇫🇷
#4. Using Array's withUnsafeBufferPointer(_:) method and String's init(cString:) initializer
init(cString:) has the following declaration:
init(cString: UnsafePointer<CChar>)
Creates a new string by copying the null-terminated UTF-8 data referenced by the given pointer.
The following example shows how to use init(cString:) with a pointer to the content of a CChar array (i.e. a well-formed UTF-8 code unit sequence) in order to create a string from it:
let bytes: [CChar] = [67, 97, 102, -61, -87, 32, -16, -97, -121, -85, -16, -97, -121, -73, 0]
let newString = bytes.withUnsafeBufferPointer { (bufferPointer: UnsafeBufferPointer<CChar>) in
    return String(cString: bufferPointer.baseAddress!)
}
print(newString) // prints: Café 🇫🇷
#5. Using Unicode.UTF8's decode(_:) method
To decode a code unit sequence, call decode(_:) repeatedly until it returns UnicodeDecodingResult.emptyInput:
let string = "Café 🇫🇷"
let codeUnits = Array(string.utf8)
var codeUnitIterator = codeUnits.makeIterator()
var utf8Decoder = Unicode.UTF8()
var newString = ""
Decode: while true {
    switch utf8Decoder.decode(&codeUnitIterator) {
    case .scalarValue(let value):
        newString.append(Character(Unicode.Scalar(value)))
    case .emptyInput:
        break Decode
    case .error:
        print("Decoding error")
        break Decode
    }
}
print(newString) // prints: Café 🇫🇷
#6. Using String's init(bytes:encoding:) initializer
Foundation gives String an init(bytes:encoding:) initializer that you can use as indicated in the Playground sample code below:
import Foundation
let string = "Café 🇫🇷"
let bytes: [Unicode.UTF8.CodeUnit] = Array(string.utf8)
let newString = String(bytes: bytes, encoding: String.Encoding.utf8)
print(String(describing: newString)) // prints: Optional("Café 🇫🇷")
It's possible to convert UTF-8 code units to a Swift String idiomatically using the UTF8 class, although it's much easier to convert from String to UTF-8! (The code below uses Swift 1.x-era syntax; a Swift 3 version appears further down.)
import Foundation
import XCTest

public class UTF8Encoding {
    public static func encode(bytes: Array<UInt8>) -> String {
        var encodedString = ""
        var decoder = UTF8()
        var generator = bytes.generate()
        var finished: Bool = false
        do {
            let decodingResult = decoder.decode(&generator)
            switch decodingResult {
            case .Result(let char):
                encodedString.append(char)
            case .EmptyInput:
                finished = true
            /* ignore errors and unexpected values */
            case .Error:
                finished = true
            default:
                finished = true
            }
        } while (!finished)
        return encodedString
    }

    public static func decode(str: String) -> Array<UInt8> {
        var decodedBytes = Array<UInt8>()
        for b in str.utf8 {
            decodedBytes.append(b)
        }
        return decodedBytes
    }
}

func testUTF8Encoding() {
    let testString = "A UTF8 String With Special Characters: 😀🍎"
    let decodedArray = UTF8Encoding.decode(testString)
    let encodedString = UTF8Encoding.encode(decodedArray)
    XCTAssert(encodedString == testString, "UTF8Encoding is lossless: \(encodedString) != \(testString)")
}
Of the other alternatives suggested:
Using NSString invokes the Objective-C bridge;
Using UnicodeScalar is error-prone because it converts UnicodeScalars directly to Characters, ignoring complex grapheme clusters; and
Using String.fromCString is potentially unsafe as it uses pointers.
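For instance (a quick illustration of the grapheme-cluster point, not from the original answer), a single Character can be backed by several Unicode scalar values:

let flag: Character = "🇫🇷"
// One visible character, but two scalar values:
// the regional indicators U+1F1EB and U+1F1F7.
print(flag.unicodeScalars.count) // 2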
An improvement on Martin R's answer:

import Foundation

let utf8: [CChar] = [65, 66, 67, 0]
let str = NSString(bytes: utf8, length: utf8.count, encoding: String.Encoding.utf8.rawValue)
print(str!) // Output: ABC

let utf8bytes: [UInt8] = [0xE2, 0x82, 0xAC, 0]
let str2 = NSString(bytes: utf8bytes, length: utf8bytes.count, encoding: String.Encoding.utf8.rawValue)
print(str2!) // Output: €

What happens is that the array is automatically converted to a raw pointer, which can be used to create a string with NSString(bytes: UnsafeRawPointer, length: Int, encoding: UInt).
Swift 3
let s = String(bytes: arr, encoding: .utf8)
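Spelled out (a minimal sketch; arr is assumed to be a [UInt8] holding valid UTF-8, and the initializer returns an optional):

let arr: [UInt8] = [72, 101, 108, 108, 111] // "Hello"
let s = String(bytes: arr, encoding: .utf8)
print(s ?? "invalid UTF-8") // Hello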
I've been looking for a comprehensive answer regarding string manipulation in Swift myself. Relying on cast to and from NSString and other unsafe pointer magic just wasn't doing it for me. Here's a safe alternative:
First, we'll want to extend UInt8. This is the primitive type behind CodeUnit.
extension UInt8 {
    var character: Character {
        return Character(UnicodeScalar(self))
    }
}
This will allow us to do something like this:
let codeUnits: [UInt8] = [72, 69, 76, 76, 79]
let characters = codeUnits.map { $0.character }
let string = String(characters)
// string prints "HELLO"
Equipped with this extension, we can now begin modifying strings.
let string = "ABCDEFGHIJKLMONP"
var modifiedCharacters = [Character]()

for (index, utf8unit) in string.utf8.enumerated() {
    // Insert a "-" every 4 characters
    if index > 0 && index % 4 == 0 {
        let separator: UInt8 = 45 // "-" in ASCII
        modifiedCharacters.append(separator.character)
    }
    modifiedCharacters.append(utf8unit.character)
}

let modifiedString = String(modifiedCharacters)
// modifiedString == "ABCD-EFGH-IJKL-MONP"
// Swift 4
var units = [UTF8.CodeUnit]()
//
// update units
//
let str = String(decoding: units, as: UTF8.self)
I would do something like this. It may not be as elegant as working with pointers, but it does the job well; it amounts to a bunch of new += operators for String, like:
func += (lhs: inout String, rhs: UInt8) {
    lhs.append(Character(UnicodeScalar(rhs)))
}

func += (lhs: inout String, rhs: (unit1: UInt8, unit2: UInt8)) {
    let value = UInt32(rhs.unit1) << 8 | UInt32(rhs.unit2)
    lhs.append(Character(UnicodeScalar(value)!)) // traps on an invalid scalar value
}

func += (lhs: inout String, rhs: (unit1: UInt8, unit2: UInt8, unit3: UInt8, unit4: UInt8)) {
    let value = UInt32(rhs.unit1) << 24 | UInt32(rhs.unit2) << 16 | UInt32(rhs.unit3) << 8 | UInt32(rhs.unit4)
    lhs.append(Character(UnicodeScalar(value)!)) // traps on an invalid scalar value
}
NOTE: you can extend the list of supported operators by overloading the + operator as well, defining a list of fully commutative operators for String.
Now you are able to append a String with a Unicode scalar value assembled from one, two, or four bytes (big-endian), e.g.:
var string: String = "signs of the Zodiac: "
string += (0x0, 0x0, 0x26, 0x4b)
string += (38)
string += (0x26, 76)
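With the definitions above, this appends U+264B (♋), then U+0026 (&, from the single byte 38), then U+264C (♌), so the string ends up as "signs of the Zodiac: ♋&♌".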
This is a possible solution (now updated for Swift 2):
let utf8: [CChar] = [65, 66, 67, 0]
if let str = utf8.withUnsafeBufferPointer({ String.fromCString($0.baseAddress) }) {
    print(str) // Output: ABC
} else {
    print("Not a valid UTF-8 string")
}
Within the closure, $0 is an UnsafeBufferPointer<CChar> pointing to the array's contiguous storage. From that a Swift String can be created.
Alternatively, if you prefer the input as unsigned bytes:
let utf8: [UInt8] = [0xE2, 0x82, 0xAC, 0]
if let str = utf8.withUnsafeBufferPointer({ String.fromCString(UnsafePointer($0.baseAddress)) }) {
    print(str) // Output: €
} else {
    print("Not a valid UTF-8 string")
}
If you're starting with a raw buffer, such as the Data object returned from a file handle (in this case, taken from a Pipe object):

let data = pipe.fileHandleForReading.readDataToEndOfFile()
// String(cString:) expects NUL-terminated input, so reserve one extra byte.
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: data.count + 1)
defer { buffer.deallocate() }
data.copyBytes(to: buffer, count: data.count)
buffer[data.count] = 0
let output = String(cString: buffer)
Here is a Swift 3.0 version of Martin R's answer:
public class UTF8Encoding {
    public static func encode(bytes: Array<UInt8>) -> String {
        var encodedString = ""
        var decoder = UTF8()
        var generator = bytes.makeIterator()
        var finished: Bool = false
        repeat {
            let decodingResult = decoder.decode(&generator)
            switch decodingResult {
            case .scalarValue(let char):
                encodedString += "\(char)"
            case .emptyInput:
                finished = true
            case .error:
                finished = true
            }
        } while (!finished)
        return encodedString
    }

    public static func decode(str: String) -> Array<UInt8> {
        var decodedBytes = Array<UInt8>()
        for b in str.utf8 {
            decodedBytes.append(b)
        }
        return decodedBytes
    }
}
If you want to show an emoji from a string of Unicode code points, just use the convertEmojiCodesToString method below. It works properly for strings like "U+1F52B" (emoji) or "U+1F1E6 U+1F1F1" (country flag emoji).
class EmojiConverter {
    static func convertEmojiCodesToString(_ emojiCodesString: String) -> String {
        let emojies = emojiCodesString.components(separatedBy: " ")
        var resultString = ""
        for emoji in emojies {
            // Strip the "U+" prefix and normalize the hex digits.
            let formattedCode = String(emoji.dropFirst(2)).lowercased()
            if let charCode = UInt32(formattedCode, radix: 16),
               let unicode = UnicodeScalar(charCode) {
                resultString += String(unicode)
            }
        }
        return resultString
    }
}
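A quick usage check (the second example combines two regional-indicator scalars into a flag):

print(EmojiConverter.convertEmojiCodesToString("U+1F52B"))         // 🔫
print(EmojiConverter.convertEmojiCodesToString("U+1F1E6 U+1F1F1")) // 🇦🇱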