The only answer I found was this one, and I'm not satisfied with it.
I am adding a standard MD5 converter as a String extension:
/* ###################################################################################################################################### */
/**
 From here: https://stackoverflow.com/q/24123518/879365
 I am not making this public, because it requires the common crypto in the bridging header.
 */
fileprivate extension String {
    /* ################################################################## */
    /**
     - returns: the String, as an MD5 hash.
     */
    var md5: String {
        let str = self.cString(using: String.Encoding.utf8)
        let strLen = CUnsignedInt(self.lengthOfBytes(using: String.Encoding.utf8))
        let digestLen = Int(CC_MD5_DIGEST_LENGTH)
        let result = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: digestLen)
        CC_MD5(str!, strLen, result)
        let hash = NSMutableString()
        for i in 0..<digestLen {
            hash.appendFormat("%02x", result[i])
        }
        result.deallocate()
        return hash as String
    }
}
It requires that I add the following to my bridging header:
#import <CommonCrypto/CommonCrypto.h>
Since I'd like to add this to a suite of reusable tools, I'd like to see if there is a way to detect, at compile time, whether or not the CommonCrypto library is available.
Is there a way for me to set this up as a conditional compile?
It's not a big deal if not; just means that I'll need to set this up as a separate source file.
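(For what it's worth, Swift 4.1's #if canImport(...) condition looks like it could be the hook for this; a minimal sketch, assuming an SDK where CommonCrypto is exposed as a module:)

#if canImport(CommonCrypto)
import CommonCrypto
// The module is visible; CC_MD5 and friends can be called directly.
#else
// No module available; fall back to a bridging header, or the dlsym approach below.
#endif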
It might be worth noting that you can call CC_MD5 without a bridging header, if you use dlsym to access it.
import Foundation

typealias CC_MD5_Type = @convention(c) (UnsafeRawPointer, UInt32, UnsafeMutableRawPointer) -> UnsafeMutableRawPointer

// RTLD_DEFAULT: look the symbol up in the default search order. No dlopen required.
let RTLD_DEFAULT = UnsafeMutableRawPointer(bitPattern: -2)
let CC_MD5 = unsafeBitCast(dlsym(RTLD_DEFAULT, "CC_MD5")!, to: CC_MD5_Type.self)

var md5 = Data(count: 16)
md5.withUnsafeMutableBytes {
    _ = CC_MD5("abc", 3, $0)
}
assert(md5 == Data(bytes: [0x90, 0x01, 0x50, 0x98, 0x3C, 0xD2, 0x4F, 0xB0, 0xD6, 0x96, 0x3F, 0x7D, 0x28, 0xE1, 0x7F, 0x72]))
Here's the solution I ended up with. It's a mash-up of my original variant and Rob's excellent answer. It works like a charm (I use it for building responses to RFC 2617 Digest authentication). Because it uses the built-in hooks, I no longer need the bridging header, and I can add this to my set of String extensions.
Pretty classic case of the correct answer coming from a completely different place than where I was looking. I love it when that happens.
Here ya go:
public extension String {
    /* ################################################################## */
    /**
     From here: https://stackoverflow.com/q/24123518/879365, but modified from here: https://stackoverflow.com/a/55639723/879365
     - returns: an MD5 hash of the String
     */
    var md5: String {
        var hash = ""
        // Start by getting a C-style string of our string as UTF-8.
        if let str = self.cString(using: .utf8) {
            // This is a cast for the MD5 function. The convention attribute just says that it's a "raw" C function.
            typealias CC_MD5_Type = @convention(c) (UnsafeRawPointer, UInt32, UnsafeMutableRawPointer) -> UnsafeMutableRawPointer
            // This is a flag, telling the name lookup to happen in the global scope. No dlopen required.
            let RTLD_DEFAULT = UnsafeMutableRawPointer(bitPattern: -2)
            // This loads a function pointer with the CommonCrypto MD5 function.
            let CC_MD5 = unsafeBitCast(dlsym(RTLD_DEFAULT, "CC_MD5")!, to: CC_MD5_Type.self)
            // This is the length of the hash
            let CC_MD5_DIGEST_LENGTH = 16
            // This is where our MD5 hash goes. It's a simple 16-byte buffer.
            let result = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: CC_MD5_DIGEST_LENGTH)
            // Execute the MD5 hash. Save the result in our buffer.
            // Note that cString(using:) includes the terminating NUL, so we hash one byte fewer than the array count.
            _ = CC_MD5(str, CUnsignedInt(str.count - 1), result)
            // Turn it into a normal Swift String of hex digits.
            for i in 0..<CC_MD5_DIGEST_LENGTH {
                hash.append(String(format: "%02x", result[i]))
            }
            // Don't need this anymore.
            result.deallocate()
        }
        return hash
    }
}
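As a quick sanity check, the RFC 1321 test vector for "abc" (the same digest asserted in the dlsym snippet above) comes out as expected:

print("abc".md5)  // 900150983cd24fb0d6963f7d28e17f72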
For context: I'm trying to use the very handy LibXL. I've used it with success in Obj-C and C++ but am now trying to port over to Swift. In order to better support Unicode, I need to send all strings to the LibXL API as wchar_t*.
So, for this purpose I've cobbled together this code:
extension String {
    /// Function to convert a String into a wchar_t buffer.
    /// Don't forget to free the buffer!
    var wideChar: UnsafeMutablePointer<wchar_t>? {
        get {
            guard let _cString = self.cString(using: .utf16) else {
                return nil
            }
            let buffer = UnsafeMutablePointer<wchar_t>.allocate(capacity: _cString.count)
            memcpy(buffer, _cString, _cString.count)
            return buffer
        }
    }
}
The calls to LibXL appear to be working (printing the error messages returns 'Ok'), except when I try to actually write to a cell in a test spreadsheet. There I get "can't write row 0 in trial version":
if let name = "John Doe".wideChar, let passKey = "mac-f.....lots of characters...3".wideChar {
    xlBookSetKeyW(book, name, passKey)
    print(">: " + String(cString: xlBookErrorMessageW(book)))
}

if let sheetName = "Output".wideChar, let path = savePath.wideChar, let test = "Hello".wideChar {
    let sheet: SheetHandle = xlBookAddSheetW(book, sheetName, nil)
    xlSheetWriteStrW(sheet, 0, 0, test, sectionTitleFormat)
    print(">: " + String(cString: xlBookErrorMessageW(book)))
    let success = xlBookSaveW(book, path)
    dump(success)
    print(">: " + String(cString: xlBookErrorMessageW(book)))
}
I'm presuming that my code for converting to wchar_t* is incorrect. Can someone point me in the right direction?
ADDENDUM: Thanks to @MartinR for the answer. It appears that the block 'consumes' any pointers that are used in it. So, for example, when writing a string using
("Hello".withWideChars({ wCharacters in
xlSheetWriteStrW(newSheet, destRow, destColumn, wCharacters, aFormatHandle)
})
the aFormatHandle becomes invalid after the write line executes and isn't reusable. It's necessary to create a new FormatHandle for each write command.
There are different problems here. First, String.cString(using:) does
not work well with multi-byte encodings:
print("ABC".cString(using: .utf16)!)
// [65, 0] ???
Second, wchar_t contains UTF-32 code points, not UTF-16.
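A quick way to confirm this on Apple platforms:

import Foundation
print(MemoryLayout<wchar_t>.size)  // 4: wchar_t is 32 bits on Darwin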
Finally, in
let buffer = UnsafeMutablePointer<wchar_t>.allocate(capacity: _cString.count)
memcpy(buffer, _cString, _cString.count)
the allocation size does not include the trailing null character,
and the copy copies _cString.count bytes, not characters.
All that can be fixed, but I would suggest a different API
(similar to the String.withCString(_:) method):
extension String {
    /// Calls the given closure with a pointer to the contents of the string,
    /// represented as a null-terminated wchar_t array.
    func withWideChars<Result>(_ body: (UnsafePointer<wchar_t>) -> Result) -> Result {
        let u32 = self.unicodeScalars.map { wchar_t(bitPattern: $0.value) } + [0]
        return u32.withUnsafeBufferPointer { body($0.baseAddress!) }
    }
}
which can then be used like
let name = "John Doe"
let passKey = "secret"

name.withWideChars { wname in
    passKey.withWideChars { wpass in
        xlBookSetKeyW(book, wname, wpass)
    }
}
and the clean-up is automatic.
I'm trying to use a library that was written in C. I've imported the .a and .h files into my Xcode project and checked that everything works properly. I've already made it work in Objective-C, and now I'm porting to Swift.
The problem I've got is with the functions' arguments. There's a function that requires an argument of type widechar (defined as typedef unsigned short int in the library), which is UnsafeMutablePointer<UInt16> in Swift. The function translates it and returns the result.
So I need to convert a String to UnsafeMutablePointer<UInt16>. I tried to find the right way to convert it, but I only found out how to convert to UnsafeMutablePointer<UInt8>. I couldn't find any information about converting a String to UnsafeMutablePointer<UInt16>.
Here's a source code I've written.
extension String {
    var utf8CString: UnsafePointer<Int8> {
        return UnsafePointer((self as NSString).utf8String!)
    }
}

func translate(toBraille: String, withTable: String) -> [String]? {
    let filteredString = toBraille.onlyAlphabet
    let table = withTable.utf8CString
    var inputLength = CInt(filteredString.count)
    var outputLength = CInt(maxBufferSize)
    let inputValue = UnsafeMutablePointer<widechar>.allocate(capacity: Int(outputLength))
    let outputValue = UnsafeMutablePointer<widechar>.allocate(capacity: Int(outputLength))
    // This is the function that I should use.
    lou_translateString(table, inputValue, &inputLength, outputValue, &outputLength, nil, nil, 0)
    let result: [String] = []
    return result
}
You have to create an array with the UTF-16 representation of the Swift
string that you can pass to the function, and on return create
a Swift string from the UTF-16 array result.
Let's assume for simplicity that the C function is imported to Swift as
func translateString(_ source: UnsafeMutablePointer<UInt16>, _ sourceLen: UnsafeMutablePointer<CInt>,
                     _ dest: UnsafeMutablePointer<UInt16>, _ destLen: UnsafeMutablePointer<CInt>)
Then the following should work (explanations inline):
// Create array with UTF-16 representation of source string:
let sourceString = "Hello world"
var sourceUTF16 = Array(sourceString.utf16)
var sourceLength = CInt(sourceUTF16.count)
// Allocate array for UTF-16 representation of destination string:
let maxBufferSize = 1000
var destUTF16 = Array<UInt16>(repeating: 0, count: maxBufferSize)
var destLength = CInt(destUTF16.count)
// Call translation function:
translateString(&sourceUTF16, &sourceLength, &destUTF16, &destLength)
// Create Swift string from UTF-16 representation in destination buffer:
let destString = String(utf16CodeUnits: destUTF16, count: Int(destLength))
I have assumed that the C function updates destLength to reflect
the actual length of the translated string on return.
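If this comes up in more than one place, the whole dance could be folded into a small helper. A sketch, assuming the same imported signature and the destLength behavior described above (translate is a hypothetical wrapper name):

func translate(_ source: String, maxBufferSize: Int = 1000) -> String {
    // Create array with UTF-16 representation of the source string:
    var sourceUTF16 = Array(source.utf16)
    var sourceLength = CInt(sourceUTF16.count)
    // Allocate a destination buffer and call the translation function:
    var destUTF16 = [UInt16](repeating: 0, count: maxBufferSize)
    var destLength = CInt(destUTF16.count)
    translateString(&sourceUTF16, &sourceLength, &destUTF16, &destLength)
    // Build a Swift string from the updated destination length:
    return String(utf16CodeUnits: destUTF16, count: Int(destLength))
}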
I am trying to get the MD5 hash of my data (an image downloaded from the interweb). Unfortunately I have upgraded the framework to Swift 3, and the method I had been using doesn't work now.
I have converted most of it but I am unable to get bytes out of the data:
import Foundation
import CommonCrypto

struct MD5 {
    static func get(data: Data) -> String {
        var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
        CC_MD5(data.bytes, CC_LONG(data.count), &digest)
        var digestHex = ""
        for index in 0..<Int(CC_MD5_DIGEST_LENGTH) {
            digestHex += String(format: "%02x", digest[index])
        }
        return digestHex
    }
}
CommonCrypto is already imported as a custom module. The problem is that I am getting 'bytes' is unavailable: use withUnsafeBytes instead on CC_MD5(data.bytes, ...
So the question really is, how do I get the bytes out of the data and will this solution work?
CC_MD5(data.bytes, CC_LONG(data.count), &digest)
As noted, bytes is unavailable because it's dangerous: it's a raw pointer into memory that can vanish. The recommended solution is to use withUnsafeBytes, which promises that the target cannot vanish during the scope of the pointer. From memory, it would look something like this:
data.withUnsafeBytes { bytes in
    CC_MD5(bytes, CC_LONG(data.count), &digest)
}
The point is that the bytes pointer can't escape into scopes where data is no longer valid.
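In Swift 5 the Data.withUnsafeBytes closure receives an UnsafeRawBufferPointer rather than a typed pointer, so the same idea would look roughly like this (a sketch, assuming the CommonCrypto module is importable):

import Foundation
import CommonCrypto

func md5Hex(_ data: Data) -> String {
    var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
    data.withUnsafeBytes { rawBuffer in
        // baseAddress is the raw pointer CC_MD5 expects; it is nil only for empty Data.
        _ = CC_MD5(rawBuffer.baseAddress, CC_LONG(data.count), &digest)
    }
    return digest.map { String(format: "%02x", $0) }.joined()
}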
For an example of this with CCHmac, which is pretty similar to MD5, see RNCryptor.
Here's a one-liner:
import CryptoKit
let md5String = Insecure.MD5.hash(data: data).map { String(format: "%02hhx", $0) }.joined()
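Note that CryptoKit requires iOS 13 / macOS 10.15 or later, and the Insecure namespace is Apple's reminder that MD5 is no longer considered collision-resistant.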
And for anyone that's interested, here's an example that you could build upon to support different Algorithms:
Usage:
Checksum.hash(data: data, using: .md5) == "MyMD5Hash"
Code Snippet:
import Foundation
import CommonCrypto

struct Checksum {
    private init() {}

    static func hash(data: Data, using algorithm: HashAlgorithm) -> String {
        /// Creates an array of unsigned 8 bit integers that contains zeros equal in amount to the digest length
        var digest = [UInt8](repeating: 0, count: algorithm.digestLength())

        /// Call corresponding digest calculation
        data.withUnsafeBytes {
            algorithm.digestCalculation(data: $0.baseAddress, len: UInt32(data.count), digestArray: &digest)
        }

        var hashString = ""
        /// Unpack each byte in the digest array and add them to the hashString
        for byte in digest {
            hashString += String(format: "%02x", UInt8(byte))
        }

        return hashString
    }

    /**
     * Hash using CommonCrypto
     * API exposed from CommonCrypto-60118.50.1:
     * https://opensource.apple.com/source/CommonCrypto/CommonCrypto-60118.50.1/include/CommonDigest.h.auto.html
     **/
    enum HashAlgorithm {
        case md5
        case sha256

        func digestLength() -> Int {
            switch self {
            case .md5:
                return Int(CC_MD5_DIGEST_LENGTH)
            case .sha256:
                return Int(CC_SHA256_DIGEST_LENGTH)
            }
        }

        /// CC_[HashAlgorithm] performs a digest calculation and places the result in the caller-supplied buffer for digest
        func digestCalculation(data: UnsafeRawPointer!, len: UInt32, digestArray: UnsafeMutablePointer<UInt8>!) {
            switch self {
            case .md5:
                CC_MD5(data, len, digestArray)
            case .sha256:
                CC_SHA256(data, len, digestArray)
            }
        }
    }
}
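For example, hashing the UTF-8 bytes of "abc" reproduces the familiar RFC 1321 test vector:

let data = Data("abc".utf8)
print(Checksum.hash(data: data, using: .md5))
// 900150983cd24fb0d6963f7d28e17f72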
I am interfacing with libxml2 in Swift, and the C API bindings (still) produce UnsafePointer<Int8>! for C strings, whereas Swift APIs normally produce UnsafePointer<UInt8>!.
So my question is: am I doing the String to null-terminated C string conversion properly?
let cfilePath = unsafeBitCast(myStringString.nulTerminatedUTF8.withUnsafeBufferPointer { $0.baseAddress }, to: UnsafePointer<Int8>.self)
Should I instead prefer using some other method instead of just bypassing Swift type checking with interpreting UInt8 bytes as Int8 bytes?
I'm not sure this solves your problem exactly, but for a project where I am sending strings over Bluetooth this did the trick:
extension String {
    var nullTerminated: Data? {
        if var data = self.data(using: String.Encoding.utf8) {
            data.append(0)
            return data
        }
        return nil
    }
}
Use it like this:
let data = "asfasf".nullTerminated
I can't find the function the other answers are referencing: nulTerminatedUTF8. Maybe it already does this.
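For illustration, the appended terminator shows up as a trailing zero byte:

let bytes = Array("ab".nullTerminated!)
print(bytes)  // [97, 98, 0]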
don't use unsafeBitCast for that!!
let cstr = Array("alpha".utf8) + [0]   // null-terminated UTF-8 bytes
let int8arr = cstr.map { Int8(bitPattern: $0) }
let uint8arr = cstr
print(type(of: int8arr), type(of: uint8arr))
// Array<Int8> Array<UInt8>
Update:
let uint8: UInt8 = 200
let int8 = Int8(bitPattern: uint8)
print(uint8, int8)
// 200 -56
How do you convert a String to a UInt8 array?
var str = "test"
var ar : [UInt8]
ar = str
Lots of different ways, depending on how you want to handle non-ASCII characters.
But the simplest code would be to use the utf8 view:
let string = "hello"
let array: [UInt8] = Array(string.utf8)
Note, this will result in multi-byte characters being represented as multiple entries in the array, i.e.:
let string = "é"
print(Array(string.utf8))
prints out [195, 169]
There’s also .nulTerminatedUTF8, which does the same thing, but then adds a nul-character to the end if your plan is to pass this somewhere as a C string (though if you’re doing that, you can probably also use .withCString or just use the implicit conversion for bridged C functions).
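For completeness, the withCString route hands you a temporary null-terminated C string without any manual allocation; a minimal sketch:

import Foundation

"hello".withCString { cstr in
    // cstr is an UnsafePointer<CChar>, valid only inside the closure.
    puts(cstr)
}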
let str = "test"
let byteArray = [UInt8](str.utf8)
Swift 4:
func stringToUInt8Array() {
    let str: String = "Swift 4"
    let strToUInt8: [UInt8] = [UInt8](str.utf8)
    print(strToUInt8)
}
I came to this question looking for how to convert to an Int8 array. This is how I'm doing it, but surely there's a less loopy way:
Method on a String extension:
public func int8Array() -> [Int8] {
    var retVal: [Int8] = []
    for thing in self.utf16 {
        // Int8(thing) would trap for code units above 127; truncate instead.
        retVal.append(Int8(truncatingIfNeeded: thing))
    }
    return retVal
}
Note: storing a UTF-16 code unit (2 bytes) in an Int8 (1 byte) will lose information for anything beyond ASCII; a plain Int8(thing) would even trap for code units above 127.
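For reference, the standard utf8CString property is a less loopy route when UTF-8 is acceptable: it yields a null-terminated ContiguousArray<CChar>, and CChar is Int8 on Apple platforms:

let int8s: [Int8] = Array("test".utf8CString)
print(int8s)  // [116, 101, 115, 116, 0]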