HMAC SHA512 using CommonCrypto in Swift 3.1 [duplicate] - swift

This question already has answers here:
CommonHMAC in Swift
(11 answers)
Closed 5 years ago.
I'm trying to sign data to send to an API.
The API requires the data to be sent as an hmac_sha512 hash.
I've found various examples of how it could be done for SHA1 and other algorithms (not SHA512), and also for older versions of Swift.
None of the examples I tried work in Swift 3.1.
Any help in the right direction will be appreciated.
Edit:
In PHP, I successfully send it using:
$sign = hash_hmac('sha512', $post_data, $this->secret);
Edit 2:
I did add a bridging header, but I don't know what to do next, as the code examples that follow from there don't work in Swift 3.1 :(
Edit 3:
Solved! Guess what, I was creating the bridging header incorrectly! :(
P.S. I'm trying to avoid CryptoSwift and focus on CommonCrypto.
The answer given below is not quite right, as it doesn't let the HMAC take a key. I did some research and finally got it working. This repository contains a working example project for HMAC: https://github.com/nabtron/hmacTest
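For reference, the keyed HMAC itself (the equivalent of the PHP hash_hmac call above) can be computed with CommonCrypto's CCHmac function. This is a minimal sketch, assuming a bridging header that imports CommonCrypto; the function name hmacSHA512 is illustrative, not from the linked project:

import Foundation

// Minimal sketch: keyed HMAC-SHA512 via CommonCrypto's CCHmac.
// Assumes a bridging header with: #import <CommonCrypto/CommonCrypto.h>
func hmacSHA512(message: String, key: String) -> String {
    let keyBytes = Array(key.utf8)
    let messageBytes = Array(message.utf8)
    var digest = [UInt8](repeating: 0, count: Int(CC_SHA512_DIGEST_LENGTH))
    CCHmac(CCHmacAlgorithm(kCCHmacAlgSHA512),
           keyBytes, keyBytes.count,
           messageBytes, messageBytes.count,
           &digest)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Usage, mirroring the PHP call above:
// let sign = hmacSHA512(message: postData, key: secret)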

I think the best thing to do is to use the Crypto pod, which is a wrapper around CommonCrypto. If you want to use CommonCrypto directly, you should add a bridging header to the project and import CommonCrypto with: #import <CommonCrypto/CommonCrypto.h>
Edit 1
Create a Swift file and add the following code to it:
import Foundation

extension String {
    var md5: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.MD5)
    }
    var sha1: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.SHA1)
    }
    var sha224: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.SHA224)
    }
    var sha256: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.SHA256)
    }
    var sha384: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.SHA384)
    }
    var sha512: String {
        return HMAC.hash(inp: self, algo: HMACAlgo.SHA512)
    }
}

public struct HMAC {
    static func hash(inp: String, algo: HMACAlgo) -> String {
        if let stringData = inp.data(using: String.Encoding.utf8, allowLossyConversion: false) {
            return hexStringFromData(input: digest(input: stringData as NSData, algo: algo))
        }
        return ""
    }

    private static func digest(input: NSData, algo: HMACAlgo) -> NSData {
        let digestLength = algo.digestLength()
        var hash = [UInt8](repeating: 0, count: digestLength)
        switch algo {
        case .MD5:
            CC_MD5(input.bytes, UInt32(input.length), &hash)
        case .SHA1:
            CC_SHA1(input.bytes, UInt32(input.length), &hash)
        case .SHA224:
            CC_SHA224(input.bytes, UInt32(input.length), &hash)
        case .SHA256:
            CC_SHA256(input.bytes, UInt32(input.length), &hash)
        case .SHA384:
            CC_SHA384(input.bytes, UInt32(input.length), &hash)
        case .SHA512:
            CC_SHA512(input.bytes, UInt32(input.length), &hash)
        }
        return NSData(bytes: hash, length: digestLength)
    }

    private static func hexStringFromData(input: NSData) -> String {
        var bytes = [UInt8](repeating: 0, count: input.length)
        input.getBytes(&bytes, length: input.length)
        var hexString = ""
        for byte in bytes {
            hexString += String(format: "%02x", UInt8(byte))
        }
        return hexString
    }
}

enum HMACAlgo {
    case MD5, SHA1, SHA224, SHA256, SHA384, SHA512

    func digestLength() -> Int {
        var result: CInt = 0
        switch self {
        case .MD5:
            result = CC_MD5_DIGEST_LENGTH
        case .SHA1:
            result = CC_SHA1_DIGEST_LENGTH
        case .SHA224:
            result = CC_SHA224_DIGEST_LENGTH
        case .SHA256:
            result = CC_SHA256_DIGEST_LENGTH
        case .SHA384:
            result = CC_SHA384_DIGEST_LENGTH
        case .SHA512:
            result = CC_SHA512_DIGEST_LENGTH
        }
        return Int(result)
    }
}
Then use it simply as stringName.sha512.
This extension on String makes the hash digests available as computed properties on any string.
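For example (note this computes a plain SHA-512 digest of the string, not a keyed HMAC):

let digest = "Hello, world!".sha512
print(digest) // 128-character hex string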


Is it possible to create custom Swift KeyEncodingStrategy for enums only?

Update June 20, 2019: Thanks to @rudedog, I arrived at a working solution. I've appended the implementation below my original post...
Note that I am NOT looking for "use private enum CodingKeys: String, CodingKey" in your struct/enum declaration.
I have a situation in which a service I call requires upper snake_case (UPPER_SNAKE_CASE) for all enumerations.
Given the following struct:
public struct Request: Encodable {
    public let foo: Bool?
    public let barId: BarIdType

    public enum BarIdType: String, Encodable {
        case test
        case testGroup
    }
}
All enums in any request should be converted to UPPER_SNAKE_CASE.
For example, let request = Request(foo: true, barId: .testGroup) should end up looking like the following when sent:
{
    "foo": true,
    "barId": "TEST_GROUP"
}
I would like to provide a custom JSONEncoder.KeyEncodingStrategy that would ONLY apply to enum types.
Creating a custom strategy seems straightforward, at least according to Apple's JSONEncoder.KeyEncodingStrategy.custom(_:) documentation.
Here's what I have so far:
public struct AnyCodingKey: CodingKey {
    public var stringValue: String
    public var intValue: Int?

    public init(_ base: CodingKey) {
        self.init(stringValue: base.stringValue, intValue: base.intValue)
    }
    public init(stringValue: String) {
        self.stringValue = stringValue
    }
    public init(intValue: Int) {
        self.stringValue = "\(intValue)"
        self.intValue = intValue
    }
    public init(stringValue: String, intValue: Int?) {
        self.stringValue = stringValue
        self.intValue = intValue
    }
}
extension JSONEncoder.KeyEncodingStrategy {
    static var convertToUpperSnakeCase: JSONEncoder.KeyEncodingStrategy {
        return .custom { keys in // keys is [CodingKey]
            // keys = Enum ???
            var key = AnyCodingKey(keys.last!)
            // key = Enum ???
            key.stringValue = key.stringValue.toUpperSnakeCase // toUpperSnakeCase is a String extension
            return key
        }
    }
}
I'm stuck trying to determine whether [CodingKey] represents an enum, or whether the individual CodingKey represents an enum and should therefore become UPPER_SNAKE_CASE.
I know this sounds pointless, since I can simply supply hardcoded CodingKeys, but we have a lot of service calls, all requiring the same handling of enum cases. It would be simpler to just specify a custom KeyEncodingStrategy for the encoder.
What would be ideal is to apply JSONEncoder.KeyEncodingStrategy.convertToSnakeCase in the custom strategy and then just return the uppercased value. But again, only if the value represents an enum case.
Any thoughts?
Here is the code I arrived at that solved my problem, with help from @rudedog:
import Foundation

public protocol UpperSnakeCaseRepresentable: Encodable {
    var upperSnakeCaseValue: String { get }
}

extension UpperSnakeCaseRepresentable where Self: RawRepresentable, Self.RawValue == String {
    var upperSnakeCaseValue: String {
        return _upperSnakeCaseValue(rawValue)
    }
}

extension KeyedEncodingContainer {
    mutating func encode(_ value: UpperSnakeCaseRepresentable, forKey key: KeyedEncodingContainer<K>.Key) throws {
        try encode(value.upperSnakeCaseValue, forKey: key)
    }
}

// The following is copied verbatim from https://github.com/apple/swift/blob/master/stdlib/public/Darwin/Foundation/JSONEncoder.swift
// Copyright (c) 2014 - 2017 Apple Inc. and the Swift project authors
// Licensed under Apache License v2.0 with Runtime Library Exception
// The only change is to call uppercased() on the encoded value as part of the return.
fileprivate func _upperSnakeCaseValue(_ stringKey: String) -> String {
    guard !stringKey.isEmpty else { return stringKey }

    var words: [Range<String.Index>] = []
    // The general idea of this algorithm is to split words on transition from lower to upper case, then on transition of >1 upper case characters to lowercase
    //
    // myProperty -> my_property
    // myURLProperty -> my_url_property
    //
    // We assume, per Swift naming conventions, that the first character of the key is lowercase.
    var wordStart = stringKey.startIndex
    var searchRange = stringKey.index(after: wordStart)..<stringKey.endIndex

    // Find next uppercase character
    while let upperCaseRange = stringKey.rangeOfCharacter(from: CharacterSet.uppercaseLetters, options: [], range: searchRange) {
        let untilUpperCase = wordStart..<upperCaseRange.lowerBound
        words.append(untilUpperCase)

        // Find next lowercase character
        searchRange = upperCaseRange.lowerBound..<searchRange.upperBound
        guard let lowerCaseRange = stringKey.rangeOfCharacter(from: CharacterSet.lowercaseLetters, options: [], range: searchRange) else {
            // There are no more lower case letters. Just end here.
            wordStart = searchRange.lowerBound
            break
        }

        // Is the next lowercase letter more than 1 after the uppercase? If so, we encountered a group of uppercase letters that we should treat as its own word
        let nextCharacterAfterCapital = stringKey.index(after: upperCaseRange.lowerBound)
        if lowerCaseRange.lowerBound == nextCharacterAfterCapital {
            // The next character after capital is a lower case character and therefore not a word boundary.
            // Continue searching for the next upper case for the boundary.
            wordStart = upperCaseRange.lowerBound
        } else {
            // There was a range of >1 capital letters. Turn those into a word, stopping at the capital before the lower case character.
            let beforeLowerIndex = stringKey.index(before: lowerCaseRange.lowerBound)
            words.append(upperCaseRange.lowerBound..<beforeLowerIndex)

            // Next word starts at the capital before the lowercase we just found
            wordStart = beforeLowerIndex
        }
        searchRange = lowerCaseRange.upperBound..<searchRange.upperBound
    }
    words.append(wordStart..<searchRange.upperBound)
    let result = words.map({ (range) in
        return stringKey[range].lowercased()
    }).joined(separator: "_")
    return result.uppercased()
}

enum Snake: String, UpperSnakeCaseRepresentable, Encodable {
    case blackAdder
    case mamba
}

struct Test: Encodable {
    let testKey: String?
    let snake: Snake
}

let test = Test(testKey: "testValue", snake: .mamba)
let enumData = try! JSONEncoder().encode(test)
let json = String(data: enumData, encoding: .utf8)!
print(json)
I think you are actually looking for a value encoding strategy? A key encoding strategy changes how keys are encoded, not how their values are encoded. A value encoding strategy would be something like JSONDecoder's dateDecodingStrategy, and you're looking for one for enums.
This is an approach that might work for you:
protocol UpperSnakeCaseRepresentable {
    var upperSnakeCaseValue: String { get }
}

extension UpperSnakeCaseRepresentable where Self: RawRepresentable, RawValue == String {
    var upperSnakeCaseValue: String {
        // Correct implementation left as an exercise
        return rawValue.uppercased()
    }
}

extension KeyedEncodingContainer {
    mutating func encode(_ value: UpperSnakeCaseRepresentable, forKey key: KeyedEncodingContainer<K>.Key) throws {
        try encode(value.upperSnakeCaseValue, forKey: key)
    }
}
enum Snake: String, UpperSnakeCaseRepresentable, Encodable {
    case blackAdder
    case mamba
}

struct Test: Encodable {
    let snake: Snake
}

let test = Test(snake: .blackAdder)
let data = try! JSONEncoder().encode(test)
let json = String(data: data, encoding: .utf8)!
print(json)
Now, any enums that you declare as conforming to UpperSnakeCaseRepresentable will be encoded as you want.
You can extend the other encoding and decoding containers the same way.
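For the decoding direction, a similar overload on KeyedDecodingContainer can convert the UPPER_SNAKE_CASE value back into the enum's lowerCamelCase raw value before initializing it. The following is a hedged sketch, not part of the original answer: the UpperSnakeCaseDecodable protocol name and the conversion logic are my own, and it relies on the same overload-resolution trick as the encoding extension above.

import Foundation

// Hypothetical counterpart protocol for decoding (not from the original answer).
protocol UpperSnakeCaseDecodable: Decodable, RawRepresentable where RawValue == String {}

extension KeyedDecodingContainer {
    func decode<T: UpperSnakeCaseDecodable>(_ type: T.Type, forKey key: KeyedDecodingContainer<K>.Key) throws -> T {
        let upperSnake = try decode(String.self, forKey: key)
        // "TEST_GROUP" -> ["test", "group"] -> "testGroup"
        let parts = upperSnake.lowercased().split(separator: "_").map { String($0) }
        let camel = ([parts.first ?? ""] + parts.dropFirst().map { $0.capitalized }).joined()
        guard let value = T(rawValue: camel) else {
            throw DecodingError.dataCorruptedError(forKey: key, in: self,
                debugDescription: "Cannot create \(type) from \(upperSnake)")
        }
        return value
    }
}

An enum would then declare both conformances, e.g. enum Snake: String, UpperSnakeCaseRepresentable, UpperSnakeCaseDecodable. Acronyms in raw values (myURLCase and the like) would need smarter reassembly than capitalized.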

How to read bytes of struct in Swift

I am working with a variety of structs in Swift that I need to be able to look at the memory of directly.
How can I look at a struct byte for byte?
For example:
struct AwesomeStructure {
    var index: Int32
    var id: UInt16
    var stuff: UInt8
    // etc.
}
The compiler will not allow me to do this:
func scopeOfAwesomeStruct() {
    withUnsafePointer(to: &self, { (ptr: UnsafePointer<Int8>) in
    })
}
Obviously, because withUnsafePointer is a generic function that requires the UnsafePointer to have the same type as self.
So how can I break down self (my structs) into 8-bit pieces? Yes, I want to be able to look at index as four 8-bit pieces, and so on.
(In this case, I'm trying to port a CRC algorithm from C#, but I have been confounded by this problem for other reasons as well.)
edit/update: Xcode 12.5 • Swift 5.4
extension ContiguousBytes {
    func object<T>() -> T { withUnsafeBytes { $0.load(as: T.self) } }
}

extension Data {
    func subdata<R: RangeExpression>(in range: R) -> Self where R.Bound == Index {
        subdata(in: range.relative(to: self))
    }
    func object<T>(at offset: Int) -> T { subdata(in: offset...).object() }
}

extension Numeric {
    var data: Data {
        var source = self
        return Data(bytes: &source, count: MemoryLayout<Self>.size)
    }
}

struct AwesomeStructure {
    let index: Int32
    let id: UInt16
    let stuff: UInt8
}

extension AwesomeStructure {
    init(data: Data) {
        index = data.object()
        id = data.object(at: 4)
        stuff = data.object(at: 6)
    }
    var data: Data { index.data + id.data + stuff.data }
}

let awesomeStructure = AwesomeStructure(index: 1, id: 2, stuff: 3)
let data = awesomeStructure.data
print(data) // 7 bytes

let structFromData = AwesomeStructure(data: data)
print(structFromData) // "AwesomeStructure(index: 1, id: 2, stuff: 3)\n"
You can use withUnsafeBytes(of:_:) directly like this:
mutating func scopeOfAwesomeStruct() {
    withUnsafeBytes(of: &self) { rbp in
        let ptr = rbp.baseAddress!.assumingMemoryBound(to: UInt8.self)
        //...
    }
}
As already noted, do not export ptr outside of the closure.
And it is not safe even if you have a function that knows the length of the structure. Swift ABI stability has not been declared yet, and none of the layout details of structs are guaranteed, including the order of the properties and how padding is inserted. The layout may differ from the corresponding C# structs and may therefore produce a different result than C#.
I (and many other developers) believe and expect that the current layout strategy will not change in the near future, so I would write code like yours. But I do not think it's safe. Remember, Swift is not C.
(Though, it's all the same if you copy the contents of a struct into a Data.)
If you want a strictly exact layout with C, you can write a C struct and import it into your Swift project.
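To show the shape of this without exporting the pointer, here is a minimal sketch (my own, not the answerer's code) that folds a struct's raw bytes into a simple additive checksum entirely inside the closure; a real CRC update routine could be substituted for the reduce:

struct AwesomeStructure {
    var index: Int32 = 0x56
    var id: UInt16 = 0x34
    var stuff: UInt8 = 0x12
}

func checksum<T>(of value: T) -> UInt32 {
    var copy = value
    return withUnsafeBytes(of: &copy) { raw in
        // raw is an UnsafeRawBufferPointer, a collection of UInt8,
        // so it can be iterated without leaking the pointer out of the closure.
        raw.reduce(UInt32(0)) { $0 &+ UInt32($1) }
    }
}

print(String(format: "0x%08X", checksum(of: AwesomeStructure())))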
Here's a decent first approximation. The trick is to use the free function withUnsafeBytes(of:_:) to get an UnsafeRawBufferPointer, which can then easily be copied into a Data using Data's buffer initializer.
This causes a copy of the memory, so you don't have to worry about any sort of dangling pointer issues.
import Foundation

struct AwesomeStructure {
    let index: Int32 = 0x56
    let id: UInt16 = 0x34
    let stuff: UInt8 = 0x12
}

func toData<T>(_ input: inout T) -> Data {
    var data = withUnsafeBytes(of: &input, Data.init)
    let alignment = MemoryLayout<T>.alignment
    let remainder = data.count % alignment
    if remainder == 0 {
        return data
    } else {
        let paddingByteCount = alignment - remainder
        return data + Data(count: paddingByteCount)
    }
}

extension Data {
    var prettyString: String {
        return self.enumerated()
            .lazy
            .map { byteNumber, byte in String(format: "/* %02i */ 0x%02X", byteNumber, byte) }
            .joined(separator: "\n")
    }
}

var x = AwesomeStructure()
let d = toData(&x)
print(d.prettyString)
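For reference (my own reasoning, not output quoted from the original answer): on a little-endian machine with the current Swift struct layout, the three fields occupy 7 bytes, one padding byte is appended to reach the 4-byte alignment, and the printout should look roughly like:

/* 00 */ 0x56
/* 01 */ 0x00
/* 02 */ 0x00
/* 03 */ 0x00
/* 04 */ 0x34
/* 05 */ 0x00
/* 06 */ 0x12
/* 07 */ 0x00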

Using MD5 hash value on an ansi string in Swift 3

I have a small function that takes a string and returns its MD5 hash value. The problem is that it expects a UTF-8 string, and I need it to calculate the hash value of a byte array encoded with ISO-8859-1 (~ANSI).
How can I change the following code to accept a byte array of characters, then return its hashed value?
static func md5(_ string: String) -> String {
    let context = UnsafeMutablePointer<CC_MD5_CTX>.allocate(capacity: 1)
    var digest = Array<UInt8>(repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
    CC_MD5_Init(context)
    CC_MD5_Update(context, string, CC_LONG(string.lengthOfBytes(using: String.Encoding.utf8)))
    CC_MD5_Final(&digest, context)
    context.deallocate(capacity: 1)
    var hexString = ""
    for byte in digest {
        hexString += String(format: "%02x", byte)
    }
    return hexString
}
The tricky part is the CC_MD5_Update call. Thanks.
You can easily modify your function to take an arbitrary byte array as argument. CC_MD5_Update is mapped to Swift as
func CC_MD5_Update(_ c: UnsafeMutablePointer<CC_MD5_CTX>!, _ data: UnsafeRawPointer!, _ len: CC_LONG) -> Int32
and you can pass an array as the UnsafeRawPointer parameter:
func md5(bytes: [UInt8]) -> String {
    var context = CC_MD5_CTX()
    var digest = Array<UInt8>(repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
    CC_MD5_Init(&context)
    CC_MD5_Update(&context, bytes, CC_LONG(bytes.count))
    CC_MD5_Final(&digest, &context)
    return digest.map { String(format: "%02hhx", $0) }.joined()
}
(I have also simplified it a bit.)
Alternatively, pass a Data argument:
func md5(data: Data) -> String {
    var context = CC_MD5_CTX()
    var digest = Array<UInt8>(repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
    CC_MD5_Init(&context)
    data.withUnsafeBytes {
        _ = CC_MD5_Update(&context, $0, CC_LONG(data.count))
    }
    CC_MD5_Final(&digest, &context)
    return digest.map { String(format: "%02hhx", $0) }.joined()
}
which can then be used as
let s = "foo"
if let data = s.data(using: .isoLatin1) {
let hash = md5(data: data)
print(hash)
}
Update for Swift 5:
import CommonCrypto

func md5(data: Data) -> String {
    var context = CC_MD5_CTX()
    var digest = Array<UInt8>(repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))
    CC_MD5_Init(&context)
    data.withUnsafeBytes {
        _ = CC_MD5_Update(&context, $0.baseAddress, CC_LONG(data.count))
    }
    CC_MD5_Final(&digest, &context)
    return digest.map { String(format: "%02hhx", $0) }.joined()
}
If you are sure your string contains only UTF-8 characters, call CC_MD5_Update with string.utf8, like so:
CC_MD5_Update(context, string.utf8, CC_LONG(string.lengthOfBytes(using: String.Encoding.utf8)))
Strings in swift are 'interesting', this is a good read on the topic: https://oleb.net/blog/2016/08/swift-3-strings/
// requires a bridging header with this:
// #import <CommonCrypto/CommonCrypto.h>

func MD5(_ string: String) -> String? {
    let length = Int(CC_MD5_DIGEST_LENGTH)
    var digest = [UInt8](repeating: 0, count: length)
    if let d = string.data(using: String.Encoding.utf8) {
        d.withUnsafeBytes { (body: UnsafePointer<UInt8>) in
            CC_MD5(body, CC_LONG(d.count), &digest)
        }
    }
    return (0..<length).reduce("") {
        $0 + String(format: "%02x", digest[$1])
    }
}
Justin's answer: https://gist.github.com/jstn/787da74ab4be9d4cf3cb
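As a quick sanity check (my own usage example, not part of the gist), the helper above can be called like this; the value shown is the well-known MD5 of "foo":

if let digest = MD5("foo") {
    print(digest) // acbd18db4cc2f85cedef654fccc4a4d8
}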

how to use SHA256 with salt(some key) in swift

I found that we can hash a string with CommonCrypto, and I've seen some examples, but they don't use a salt.
How can I use SHA256 with a salt?
Complete solution for Swift 4:
extension Data {
    var hexString: String {
        return map { String(format: "%02hhx", $0) }.joined()
    }
    var sha256: Data {
        var digest = [UInt8](repeating: 0, count: Int(CC_SHA256_DIGEST_LENGTH))
        self.withUnsafeBytes({
            _ = CC_SHA256($0, CC_LONG(self.count), &digest)
        })
        return Data(bytes: digest)
    }
}

extension String {
    func sha256(salt: String) -> Data {
        return (self + salt).data(using: .utf8)!.sha256
    }
}
Example:
let hash = "test".sha256(salt: "salt").hexString
Combine your input data with the salt and run the hash calculation:
func hash(input: String, salt: String) -> String {
    let toHash = input + salt
    // TODO: Calculate the SHA256 hash of "toHash" and return it
    // return sha256(toHash)
    // Return the salted input for now
    return toHash
}

print(hash(input: "somedata", salt: "1m8f")) // Prints "somedata1m8f"

Convert Objective-C (#define) macro to Swift

Put simply, I am trying to convert a #define macro into a native Swift data structure of some sort; I'm just not sure how, or what kind.
Details
I would like to try and replicate the following #define from Objective-C to Swift. Source: JoeKun/FileMD5Hash
#define FileHashComputationContextInitialize(context, hashAlgorithmName) \
CC_##hashAlgorithmName##_CTX hashObjectFor##hashAlgorithmName; \
context.initFunction = (FileHashInitFunction)&CC_##hashAlgorithmName##_Init; \
context.updateFunction = (FileHashUpdateFunction)&CC_##hashAlgorithmName##_Update; \
context.finalFunction = (FileHashFinalFunction)&CC_##hashAlgorithmName##_Final; \
context.digestLength = CC_##hashAlgorithmName##_DIGEST_LENGTH; \
context.hashObjectPointer = (uint8_t **)&hashObjectFor##hashAlgorithmName
Obviously #define does not exist in Swift; therefore I'm not looking for a 1:1 port. More generally just the spirit of it.
To start, I made an enum called CryptoAlgorithm. I only care to support two crypto algorithms for the sake of this question; but there should be nothing stopping me from extending it further.
enum CryptoAlgorithm {
    case MD5, SHA1
}
So far so good. Now to implement the digestLength.
enum CryptoAlgorithm {
    case MD5, SHA1

    var digestLength: Int {
        switch self {
        case .MD5:
            return Int(CC_MD5_DIGEST_LENGTH)
        case .SHA1:
            return Int(CC_SHA1_DIGEST_LENGTH)
        }
    }
}
Again, so far so good. Now to implement the initFunction.
enum CryptoAlgorithm {
    case MD5, SHA1

    var digestLength: Int {
        switch self {
        case .MD5:
            return Int(CC_MD5_DIGEST_LENGTH)
        case .SHA1:
            return Int(CC_SHA1_DIGEST_LENGTH)
        }
    }

    var initFunction: UnsafeMutablePointer<CC_MD5_CTX> -> Int32 {
        switch self {
        case .MD5:
            return CC_MD5_Init
        case .SHA1:
            return CC_SHA1_Init
        }
    }
}
Crash and burn. 'CC_MD5_CTX' is not identical to 'CC_SHA1_CTX'. The problem is that CC_SHA1_Init is a UnsafeMutablePointer<CC_SHA1_CTX> -> Int32. Therefore, the two return types are not the same.
Is an enum the wrong approach? Should I be using generics? If so, how should the generic be made? Should I provide a protocol that both CC_MD5_CTX and CC_SHA1_CTX and then are extended by and return that?
All suggestions are welcome (except to use an Objc bridge).
I don't know if I love where this is going in the original ObjC code, because it's pretty type-unsafe. In Swift you just need to make all the type unsafety more explicit:
var initFunction: UnsafeMutablePointer<Void> -> Int32 {
    switch self {
    case .MD5:
        return { CC_MD5_Init(UnsafeMutablePointer<CC_MD5_CTX>($0)) }
    case .SHA1:
        return { CC_SHA1_Init(UnsafeMutablePointer<CC_SHA1_CTX>($0)) }
    }
}
The more "Swift" way of approaching this would be with protocols, such as:
protocol CryptoAlgorithm {
    typealias Context
    init(_ ctx: UnsafeMutablePointer<Context>)
    var digestLength: Int { get }
}
Then you'd have something like (untested):
struct SHA1: CryptoAlgorithm {
    typealias Context = CC_SHA1_CTX
    private let context: UnsafeMutablePointer<Context>

    init(_ ctx: UnsafeMutablePointer<Context>) {
        CC_SHA1_Init(ctx) // This can't actually fail
        self.context = ctx // This is pretty dangerous.... but matches above. (See below)
    }

    let digestLength = Int(CC_SHA1_DIGEST_LENGTH)
}
But I'd be strongly tempted to hide the context, and just make it:
protocol CryptoAlgorithm {
    init()
    var digestLength: Int { get }
}

struct SHA1: CryptoAlgorithm {
    private var context = CC_SHA1_CTX()

    init() {
        CC_SHA1_Init(&context) // This is very likely redundant.
    }

    let digestLength = Int(CC_SHA1_DIGEST_LENGTH)
}
Why do you need to expose the fact that it's CommonCrypto under the covers? And why would you want to rely on the caller to hold onto the context for you? If it goes out of scope, then later calls will crash. I'd hold onto the context inside.
Getting more closely to your original question, consider this (compiles, but not tested):
// Digests are reference types because they are stateful. Copying them may lead to confusing results.
protocol Digest: class {
    typealias Context
    var context: Context { get set }
    var length: Int { get }
    var digester: (UnsafePointer<Void>, CC_LONG, UnsafeMutablePointer<UInt8>) -> UnsafeMutablePointer<UInt8> { get }
    var updater: (UnsafeMutablePointer<Context>, UnsafePointer<Void>, CC_LONG) -> Int32 { get }
    var finalizer: (UnsafeMutablePointer<UInt8>, UnsafeMutablePointer<Context>) -> Int32 { get }
}

// Some helpers on all digests to make them act more Swiftly without having to deal with UnsafeMutablePointers.
extension Digest {
    func digest(data: [UInt8]) -> [UInt8] {
        return perform { digester(UnsafePointer<Void>(data), CC_LONG(data.count), $0) }
    }

    func update(data: [UInt8]) {
        updater(&context, UnsafePointer<Void>(data), CC_LONG(data.count))
    }

    func final() -> [UInt8] {
        return perform { finalizer($0, &context) }
    }

    // Helper that wraps up "create a buffer, update buffer, return buffer"
    private func perform(f: (UnsafeMutablePointer<UInt8>) -> ()) -> [UInt8] {
        var hash = [UInt8](count: length, repeatedValue: 0)
        f(&hash)
        return hash
    }
}

// Example of creating a new digest
final class SHA1: Digest {
    var context = CC_SHA1_CTX()
    let length = Int(CC_SHA1_DIGEST_LENGTH)
    let digester = CC_SHA1
    let updater = CC_SHA1_Update
    let finalizer = CC_SHA1_Final
}

// And here's what you change to make another one
final class SHA256: Digest {
    var context = CC_SHA256_CTX()
    let length = Int(CC_SHA256_DIGEST_LENGTH)
    let digester = CC_SHA256
    let updater = CC_SHA256_Update
    let finalizer = CC_SHA256_Final
}

// Type-eraser, so we can talk about arbitrary digests without worrying about the underlying associated type.
// See http://robnapier.net/erasure
// So now we can say things like `let digests = [AnyDigest(SHA1()), AnyDigest(SHA256())]`
// If this were the normal use-case, you could rename "Digest" as "DigestAlgorithm" and rename "AnyDigest" as "Digest"
// for convenience
final class AnyDigest: Digest {
    var context: Void = ()
    let length: Int
    let digester: (UnsafePointer<Void>, CC_LONG, UnsafeMutablePointer<UInt8>) -> UnsafeMutablePointer<UInt8>
    let updater: (UnsafeMutablePointer<Void>, UnsafePointer<Void>, CC_LONG) -> Int32
    let finalizer: (UnsafeMutablePointer<UInt8>, UnsafeMutablePointer<Void>) -> Int32

    init<D: Digest>(_ digest: D) {
        length = digest.length
        digester = digest.digester
        updater = { digest.updater(&digest.context, $1, $2) }
        finalizer = { (hash, _) in digest.finalizer(hash, &digest.context) }
    }
}