In Obj-C I can successfully append bytes enclosed inside two quotation marks like so:
[commands appendBytes:"\x1b\x61\x01"
length:sizeof("\x1b\x61\x01") - 1];
In Swift I supposed I would do something like:
commands.appendBytes("\x1b\x61\x01", length: sizeof("\x1b\x61\x01") - 1)
But this throws the error "invalid escape sequence in literal", how do I escape bytes in Swift?
As already said, in Swift a string stores Unicode characters, and not – as in (Objective-)C – an arbitrary (NUL-terminated) sequence of char, which is a signed
or unsigned byte on most platforms.
Now theoretically you can retrieve a C string from a Swift string:
let commands = NSMutableData()
let cmd = "\u{1b}\u{61}\u{01}"
cmd.withCString {
    commands.appendBytes($0, length: 3)
}
println(commands) // <1b6101>
But this does not produce the expected result for non-ASCII characters:
let commands = NSMutableData()
let cmd = "\u{1b}\u{c4}\u{01}"
cmd.withCString {
    commands.appendBytes($0, length: 3)
}
println(commands) // <1bc384>
Here \u{c4} is "Ä" which has the UTF-8 representation C3 84.
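You can verify the UTF-8 representation directly from the string's utf8 view:

```swift
// "Ä" (U+00C4) encodes to two UTF-8 bytes:
let bytes = Array("\u{c4}".utf8)
print(bytes) // [195, 132], i.e. C3 84
```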
A Swift string cannot represent an arbitrary sequence of bytes.
Therefore you are better off working with a [UInt8] array for (binary) control sequences:
let commands = NSMutableData()
let cmd : [UInt8] = [ 0x1b, 0x61, 0xc4, 0x01 ]
commands.appendBytes(cmd, length: cmd.count)
println(commands) // <1b61c401>
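In Swift 3 and later the same thing can be written with the Data value type instead of NSMutableData; a minimal sketch:

```swift
import Foundation

// Build the control sequence from raw bytes:
var commands = Data([0x1b, 0x61, 0xc4, 0x01])
commands.append(0x0a) // append a single byte
print(commands.map { String(format: "%02x", $0) }.joined()) // 1b61c4010a
```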
For text you have to know which encoding the printer expects.
As an example, NSISOLatin1StringEncoding is the ISO-8859-1 encoding, which is intended for "Western European" languages:
let text = "123Ö\n"
if let data = text.dataUsingEncoding(NSISOLatin1StringEncoding) {
    commands.appendData(data)
    println(commands) // <313233d6 0a>
} else {
    println("conversion failed")
}
Unicode characters in Swift are entered differently: you need to add curly braces around the hex number:
"\u{1b}\u{61}\u{01}"
To avoid duplicating the literal, define a constant for it:
let toAppend = "\u{1b}\u{61}\u{01}"
toAppend.withCString {
    commands.appendBytes($0, length: toAppend.utf8.count)
}
Related
How does one get all the characters of a font with CTFontCopyCharacterSet() in Swift, for macOS?
The issue occurred when implementing the approach from an OSX: CGGlyph to UniChar answer in Swift.
func createUnicodeFontMap() {
    // Get all characters of the font with CTFontCopyCharacterSet().
    let cfCharacterSet: CFCharacterSet = CTFontCopyCharacterSet(ctFont)
    //
    let cfCharacterSetStr = "\(cfCharacterSet)"
    print("CFCharacterSet: \(cfCharacterSet)")
    // Map all Unicode characters to corresponding glyphs
    var unichars = [UniChar](…NYI…) // NYI: lacking unichars for CFCharacterSet
    var glyphs = [CGGlyph](repeating: 0, count: unichars.count)
    guard CTFontGetGlyphsForCharacters(
        ctFont,         // font: CTFont
        &unichars,      // characters: UnsafePointer<UniChar>
        &glyphs,        // UnsafeMutablePointer<CGGlyph>
        unichars.count  // count: CFIndex
    )
    else {
        return
    }
    // For each Unicode character and its glyph,
    // store the mapping glyph -> Unicode in a dictionary.
    // ... NYI
}
What to do with CFCharacterSet to retrieve the actual characters has been elusive. Autocompletion on the cfCharacterSet instance shows no relevant methods.
And the Core Foundation CFCharacterSet documentation appears to have methods for creating another CFCharacterSet, but nothing that provides an array, list, or string of unichars from which to build a mapping dictionary.
Note: I'm looking for a solution which is not specific to iOS as in Get all available characters from a font which uses UIFont.
CFCharacterSet is toll-free bridged with the Cocoa Foundation counterpart NSCharacterSet, and can be bridged to the corresponding Swift value type CharacterSet:
let charset = CTFontCopyCharacterSet(ctFont) as CharacterSet
Then the approach from NSArray from NSCharacterSet can be used to enumerate all Unicode scalar values of that character set (including non-BMP points, i.e. Unicode scalar values greater than U+FFFF).
CTFontGetGlyphsForCharacters() expects non-BMP characters as surrogate pairs, i.e. as an array of UTF-16 code units.
Putting it together, the function would look like this:
func createUnicodeFontMap(ctFont: CTFont) -> [CGGlyph : UnicodeScalar] {
    let charset = CTFontCopyCharacterSet(ctFont) as CharacterSet
    var glyphToUnicode = [CGGlyph : UnicodeScalar]() // Start with empty map.
    // Enumerate all Unicode scalar values from the character set:
    for plane: UInt8 in 0...16 where charset.hasMember(inPlane: plane) {
        for unicode in UTF32Char(plane) << 16 ..< UTF32Char(plane + 1) << 16 {
            if let uniChar = UnicodeScalar(unicode), charset.contains(uniChar) {
                // Get glyph for this `uniChar` ...
                let utf16 = Array(uniChar.utf16)
                var glyphs = [CGGlyph](repeating: 0, count: utf16.count)
                if CTFontGetGlyphsForCharacters(ctFont, utf16, &glyphs, utf16.count) {
                    // ... and add it to the map.
                    glyphToUnicode[glyphs[0]] = uniChar
                }
            }
        }
    }
    return glyphToUnicode
}
You can do something like this.
let cs = CTFontCopyCharacterSet(font) as NSCharacterSet
let bitmapRepresentation = cs.bitmapRepresentation
The format of the bitmap is defined in the reference page for CFCharacterSetCreateWithBitmapRepresentation.
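For BMP characters the layout is simple: bit (n & 7) of byte (n >> 3) is set exactly when U+n is a member. A sketch of decoding it, using a hand-built CharacterSet in place of a font's character set:

```swift
import Foundation

let bitmap = CharacterSet(charactersIn: "0123456789").bitmapRepresentation
var members: [UnicodeScalar] = []
// The first 8192 bytes cover the Basic Multilingual Plane:
for n in 0 ..< min(bitmap.count, 8192) * 8 {
    if (bitmap[n >> 3] >> (n & 7)) & 1 != 0,
       let scalar = UnicodeScalar(UInt32(n)) { // skips surrogate values
        members.append(scalar)
    }
}
print(members.map { String($0) }.joined()) // 0123456789
```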
I need to convert a cyrillic string to its urlencoded version using Windows-1251 encoding. For the following example string:
Моцарт
The correct result should be:
%CC%EE%F6%E0%F0%F2
I tried addingPercentEncoding(withAllowedCharacters:) but it doesn't work.
How to achieve the desired result in Swift?
NSString has an addingPercentEscapes(using:) method which allows you to specify an arbitrary encoding:
let text = "Моцарт"
if let encoded = (text as NSString).addingPercentEscapes(using: String.Encoding.windowsCP1251.rawValue) {
    print(encoded)
    // %CC%EE%F6%E0%F0%F2
}
However, this is deprecated as of iOS 9/macOS 10.11. It causes compiler warnings and may not be available in newer OS versions.
What you can do instead is to convert the string to Data with the desired encoding,
and then convert each byte to the corresponding %NN sequence (using the approach from
How to convert Data to hex string in swift):
let text = "Моцарт"
if let data = text.data(using: .windowsCP1251) {
    let encoded = data.map { String(format: "%%%02hhX", $0) }.joined()
    print(encoded)
    // %CC%EE%F6%E0%F0%F2
}
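If you need this in more than one place, the mapping can be wrapped in a small extension (percentEncoded(using:) is my own name, not a Foundation API):

```swift
import Foundation

extension String {
    // Percent-encode every byte of the string in the given encoding.
    func percentEncoded(using encoding: String.Encoding) -> String? {
        guard let data = self.data(using: encoding) else { return nil }
        return data.map { String(format: "%%%02hhX", $0) }.joined()
    }
}

if let encoded = "Моцарт".percentEncoded(using: .windowsCP1251) {
    print(encoded) // %CC%EE%F6%E0%F0%F2
}
```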
I'm translating an Android application to iOS with Swift 3. My problem is with the next block of Java code:
String b64 = "KJT-AAAhtAvQ";
byte[] bytesCodigo = Base64.decode(b64, Base64.URL_SAFE);
System.out.println(bytesCodigo.length); // 9
How would it be in swift?
Thanks
One would just create a Data (NSData in Objective-C) from the base64 string. Note, though, that the standard Data(base64Encoded:) does not have a “URL-safe” rendition, but you can create one that replaces “-” with “+” and “_” with “/” before trying to convert it:
extension Data {
    init?(base64EncodedURLSafe string: String, options: Base64DecodingOptions = []) {
        let string = string
            .replacingOccurrences(of: "-", with: "+")
            .replacingOccurrences(of: "_", with: "/")
        self.init(base64Encoded: string, options: options)
    }
}
Then you can do:
guard let data = Data(base64EncodedURLSafe: string) else {
    // handle errors in decoding base64 string here
    return
}
// use `data` here
Obviously, if it was not a “URL safe” base64 string, you could simply do:
guard let data = Data(base64Encoded: string) else {
    // handle errors in decoding base64 string here
    return
}
// use `data` here
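One caveat: URL-safe base64 strings often arrive without the trailing “=” padding, which Data(base64Encoded:) insists on. A sketch that restores the padding first (dataFromURLSafeBase64 is my own helper name):

```swift
import Foundation

func dataFromURLSafeBase64(_ string: String) -> Data? {
    var s = string
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    // Pad to a multiple of 4 characters, as required by Data(base64Encoded:):
    let remainder = s.count % 4
    if remainder > 0 {
        s += String(repeating: "=", count: 4 - remainder)
    }
    return Data(base64Encoded: s)
}

print(dataFromURLSafeBase64("KJT-AAAhtAvQ")?.count ?? -1) // 9
```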
You asked how one gets a “byte array”: Data effectively is that, insofar as the Data type conforms to the RandomAccessCollection protocol, just as the Array type does. So, you have many of the “array” sort of behaviors should you need them, e.g.:
for byte in data {
    // do something with `byte`, a `UInt8` value
}
or
let hexString = data.map { String(format: "%02x", $0) }
.joined(separator: " ")
print(hexString) // 28 94 fe 00 00 21 b4 0b d0
So, in essence, you generally do not need a “byte array”: Data provides all the sorts of access to the bytes that you might need. Besides, most Swift APIs handling binary data expect Data types anyway.
If you literally need an [UInt8] array, you can create one:
let bytes = data.map { $0 }
But it is generally inefficient to create a separate array of bytes (especially when the binary payload is large) when the Data provides all the array behaviors that one would generally need, and more.
I have decrypted using AES (CryptoSwift) and am left with a UInt8 array. What's the best approach to convert the UInt8 array into an appropriate string? Casting the array only gives back a string that looks exactly like the array. (When done in Java, a new READABLE string is obtained when casting a byte array to String.)
I'm not sure if this is new to Swift 2, but at least the following works for me:
let chars: [UInt8] = [ 49, 50, 51 ]
var str = String(bytes: chars, encoding: NSUTF8StringEncoding)
In addition, if the array is formatted as a C string (trailing 0), these work:
str = String.fromCString(UnsafePointer(chars)) // UTF-8 is implicit
// or:
str = String(CString: UnsafePointer(chars), encoding: NSUTF8StringEncoding)
I don't know anything about CryptoSwift. But I can read the README:
For your convenience CryptoSwift provides two functions to easily convert an array of bytes to NSData and the other way around:
let data = NSData.withBytes([0x01,0x02,0x03])
let bytes:[UInt8] = data.arrayOfBytes()
So my guess would be: call NSData.withBytes to get an NSData. Now you can presumably call NSString(data:encoding:) to get a string.
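In current Swift that guess boils down to the following (the byte values here are sample data, not real CryptoSwift output):

```swift
import Foundation

let decryptedBytes: [UInt8] = [104, 101, 108, 108, 111]
// Interpret the bytes as UTF-8 text; returns nil for invalid sequences.
if let text = String(bytes: decryptedBytes, encoding: .utf8) {
    print(text) // hello
}
```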
Swift 3.1
Try this:
let decData = NSData(bytes: enc, length: Int(enc.count))
let base64String = decData.base64EncodedString(options: .lineLength64Characters)
`base64String` is the string output.
Extensions allow you to easily modify the framework to fit your needs, essentially building your own version of Swift (my favorite part, I love to customize). Try this one out, put at the end of your view controller and call in viewDidLoad():
func stringToUInt8Extension() {
    var cache : [UInt8] = []
    for byte : UInt8 in 97..<97+26 {
        cache.append(byte)
        print(byte)
    }
    print("The letters of the alphabet are \(String(cache))")
}

extension String {
    init(_ bytes: [UInt8]) {
        self.init()
        for b in bytes {
            self.append(Character(UnicodeScalar(b)))
        }
    }
}
How do you convert a String to UInt8 array?
var str = "test"
var ar : [UInt8]
ar = str
Lots of different ways, depending on how you want to handle non-ASCII characters.
But the simplest code would be to use the utf8 view:
let string = "hello"
let array: [UInt8] = Array(string.utf8)
Note, this will result in multi-byte characters being represented as multiple entries in the array, i.e.:
let string = "é"
print(Array(string.utf8))
prints out [195, 169]
There’s also .nulTerminatedUTF8, which does the same thing, but then adds a nul-character to the end if your plan is to pass this somewhere as a C string (though if you’re doing that, you can probably also use .withCString or just use the implicit conversion for bridged C functions).
let str = "test"
let byteArray = [UInt8](str.utf8)
Swift 4
func stringToUInt8Array() {
    let str: String = "Swift 4"
    let strToUInt8: [UInt8] = [UInt8](str.utf8)
    print(strToUInt8)
}
I came to this question looking for how to convert to an Int8 array. This is how I'm doing it, but surely there's a less loopy way:
Method on an Extension for String
public func int8Array() -> [Int8] {
    var retVal : [Int8] = []
    for byte in self.utf8 {
        retVal.append(Int8(bitPattern: byte))
    }
    return retVal
}
Note: Int8(bitPattern:) reinterprets UTF-8 bytes ≥ 0x80 as negative values; a plain Int8(_:) conversion would trap at runtime for any value above 127. A less loopy alternative: self.utf8.map { Int8(bitPattern: $0) }.