I need a way to convert a string that contains a literal string representing a hexadecimal value into a Character corresponding to that particular hexadecimal value.
Ideally, something along these lines:
let hexString: String = "2C"
let char: Character = fromHexString(hexString)
println(char) // prints -> ","
I've tried to use the syntax "\u{n}" where n is an Int or String, and neither worked.
This could be used to loop over an array of hexStrings like so:
var hexArray = ["2F", "24", "40", "2A"]
var charArray = [Character]()
charArray = map(hexArray) { charArray.append(Character($0)) }
charArray.description // prints -> "[/, $, @, *]"
A couple of things about your code:
var charArray = [Character]()
charArray = map(hexArray) { charArray.append(Character($0)) }
You don't need to create an array and then assign the result of the map; you can just assign the result directly and avoid creating an unnecessary array.
charArray = map(hexArray) { charArray.append(Character($0)) }
Here you can use hexArray.map instead of map(hexArray). Also, when you use a map function, what you are conceptually doing is mapping the elements of the receiver array to a new set of values; the result of the mapping is the new "mapped" array, which means you don't need to call charArray.append inside the map closure.
Anyway, here is a working example:
import Foundation // for strtoul

let hexArray = ["2F", "24", "40", "2A"]
var charArray = hexArray.map { char -> Character in
    let code = Int(strtoul(char, nil, 16))
    return Character(UnicodeScalar(code))
}
println(charArray) // -> [/, $, @, *]
EDIT: This is another implementation that doesn't need Foundation:
func hexToScalar(char: String) -> UnicodeScalar {
    var total = 0
    for scalar in char.uppercaseString.unicodeScalars {
        if !((scalar >= "A" && scalar <= "F") || (scalar >= "0" && scalar <= "9")) {
            assertionFailure("Input is wrong")
        }
        if scalar >= "A" {
            total = 16 * total + 10 + Int(scalar.value) - 65 /* 'A' */
        } else {
            total = 16 * total + Int(scalar.value) - 48 /* '0' */
        }
    }
    return UnicodeScalar(total)
}
let hexArray = ["2F", "24", "40", "2A"]
var charArray = hexArray.map { Character(hexToScalar($0)) }
println(charArray)
EDIT2 Yet another option:
func hexToScalar(char: String) -> UnicodeScalar {
    let map = [ "0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
                "A": 10, "B": 11, "C": 12, "D": 13, "E": 14, "F": 15 ]
    let total = reduce(char.uppercaseString.unicodeScalars, 0, { $0 * 16 + (map[String($1)] ?? 0xff) })
    if total > 0xFF {
        assertionFailure("Input char was wrong")
    }
    return UnicodeScalar(total)
}
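This version didn't ship with a usage example; here's a quick check with the same array as before (my addition, assuming valid two-digit hex input):
let hexArray = ["2F", "24", "40", "2A"]
println(hexArray.map { Character(hexToScalar($0)) }) // -> [/, $, @, *]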
Final edit: explanation
Given that the ASCII table has all the digits together (0123456789), we can convert any digit character to its integer value if we know the ASCII value of '0'.
Because:
'0': 48
'1': 49
...
'9': 57
Then if for example you need to convert '9' to 9 you could do
asciiValue('9') - asciiValue('0') => 57 - 48 = 9
And you can do the same from 'A' to 'F':
'A': 65
'B': 66
...
'F': 70
Now we can do the same as before but, for example for 'F' we'd do:
asciiValue('F') - asciiValue('A') => 70 - 65 = 5
Note that we need to add 10 to this number to get the actual value (15 for 'F'). Then (going back to the code): if the scalar is between A-F we need to do:
10 + asciiValue(<letter>) - asciiValue('A')
which is the same as: 10 + scalar.value - 65
And if it's between 0-9:
asciiValue(<letter>) - asciiValue('0')
which is the same as: scalar.value - 48
For example: '2F'
'2' is 2 and 'F' is 15 (by the previous example), right? Since hex is base 16, we'd need to do:
((16 ^ 1) * 2) + ((16 ^ 0) * 15) = 47
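To make the arithmetic concrete, here is a minimal sketch of the same digit-by-digit computation for "2F" (my illustration, using the scalar values from the table above):
let scalars = Array("2F".unicodeScalars)
let high = Int(scalars[0].value) - 48     // '2' - '0' = 2
let low = 10 + Int(scalars[1].value) - 65 // 10 + ('F' - 'A') = 15
let result = 16 * high + low              // 16 * 2 + 15 = 47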
Here you go:
var string = String(UnicodeScalar(Int("2C", radix: 16)!))
BTW, you can include hex values in literal strings like this:
var string = "\u{2c}"
Note that the \u{n} escape only works with hex digits written literally in the source; it can't be built from a runtime value, which is why the attempts in the question didn't work.
With Swift 5, you have to convert your string into an integer (using the init(_:radix:) initializer), create a Unicode scalar from that integer (using init(_:)), then create a character from the Unicode scalar (using init(_:)).
The Swift 5 Playground sample code below shows how to proceed:
let validHexString: String = "2C"
let validUnicodeScalarValue = Int(validHexString, radix: 16)!
let validUnicodeScalar = Unicode.Scalar(validUnicodeScalarValue)!
let character = Character(validUnicodeScalar)
print(character) // prints: ","
If you want to perform this operation for the elements inside an array, you can use the sample code below:
let hexArray = ["2F", "24", "40", "2A"]
let characterArray = hexArray.map({ (hexString) -> Character in
    let unicodeScalarValue = Int(hexString, radix: 16)!
    let validUnicodeScalar = Unicode.Scalar(unicodeScalarValue)!
    return Character(validUnicodeScalar)
})
print(characterArray) // prints: ["/", "$", "@", "*"]
Alternative with no force unwraps:
let hexArray = ["2F", "24", "40", "2A"]
let characterArray = hexArray.compactMap({ (hexString) -> Character? in
    guard let unicodeScalarValue = Int(hexString, radix: 16),
          let unicodeScalar = Unicode.Scalar(unicodeScalarValue) else {
        return nil
    }
    return Character(unicodeScalar)
})
print(characterArray) // prints: ["/", "$", "@", "*"]
Another simple way based on ICU transforms:
import Foundation

extension String {
    func transformingFromHex() -> String? {
        return "&#x\(self);".applyingTransform(.toXMLHex, reverse: true)
    }
}
Usage:
"2C".transformingFromHex()
Results in: ,
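The transform also runs in the forward direction, converting characters into XML hex entities; a quick sketch (my addition; the exact output is whatever ICU's Any-Hex/XML transform produces):
",".applyingTransform(.toXMLHex, reverse: false) // "&#x2C;"
"2C".transformingFromHex()                       // ","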
Swift characters are so hard to manipulate.
I have a simple request:
For a given string, I'd like to encode it by applying a "moving integer" shift to all the alphabetical characters.
For example: "abc", move 1: would become "bcd"; "ABC", move 1 -> "BCD".
One thing to note: if after moving, the result is larger than "z" or "Z", it should loop back and continue from "a" or "A" again, e.g. "XYZ", move 1 -> "YZA".
This would be very easy to do in Java, but could anyone show me the cleanest way to do it in Swift?
I've done something like:
let arr: [Character] = Array(s)
for i in arr {
    let curAsciiValue = i.asciiValue!
    var updatedVal = (curAsciiValue + UInt8(withRotationFactor))
    if i >= "A" && i <= "Z" {
        newAsciiVal = 65
        updatedVal = updatedVal - 65 >= 26 ? 65 + (updatedVal - 65) % 26 : updatedVal
    } else if i >= "a" && i <= "z" {
        newAsciiVal = 97
        updatedVal = updatedVal - 97 >= 26 ? 97 + (updatedVal - 97) % 26 : updatedVal
    }
}
What would be the best way to do this?
Better to bring all your logic together by extending the Character type:
extension Character {
    var isAlphabet: Bool { isLowercaseAlphabet || isUppercaseAlphabet }
    var isLowercaseAlphabet: Bool { "a"..."z" ~= self }
    var isUppercaseAlphabet: Bool { "A"..."Z" ~= self }

    func rotateAlphabetLetter(_ factor: Int) -> Self? {
        precondition(factor > 0, "rotation must be positive")
        guard let asciiValue = asciiValue, isAlphabet else { return nil }
        let addition = asciiValue + UInt8(factor % 26)
        return .init(UnicodeScalar(addition > (isLowercaseAlphabet ? 122 : 90) ? addition - 26 : addition))
    }
}
Character("a").rotateAlphabetLetter(2) // "c"
Character("A").rotateAlphabetLetter(2) // "C"
Character("j").rotateAlphabetLetter(2) // "l"
Character("J").rotateAlphabetLetter(2) // "L"
Character("z").rotateAlphabetLetter(2) // "b"
Character("Z").rotateAlphabetLetter(2) // "B"
Character("ç").rotateAlphabetLetter(2) // nil
Character("Ç").rotateAlphabetLetter(2) // nil
Expanding on that you can map your string elements using compactMap:
let string = "XYZ"
let rotatedAlphabetLetters = string.compactMap { $0.rotateAlphabetLetter(2) } // ["Z", "A", "B"]
And to convert the sequence of characters back to string:
extension Sequence where Element == Character {
    var string: String { .init(self) }
}
let result = rotatedAlphabetLetters.string // "ZAB"
or simply as a one-liner:
let result = "XYZ".compactMap { $0.rotateAlphabetLetter(2) }.string // "ZAB"
I'm currently using Xcode 10.1 and am trying to count the number of vowels and consonants in my given sentence. I declare the constant globally:
let sentence = "Here is my sentence"
Then I attempt to call a function with a parameter named "sentenceInput" that, under a certain case, passes the sentence to the function, which is meant to count the number of vowels and consonants and return the Int values. However, when the function is called, I'm told there is only 1 consonant and 0 vowels. Being new to programming and Xcode in general, I would very much appreciate the help. Thank you. Code for the function:
func findVowelsConsonantsPunctuation(sentenceInput: String) -> (Int, Int, Int) {
    var vowels = 0; var consonants = 0; var punctuation = 0
    for character in sentenceInput.characters {
        switch String(character).lowercased() {
        case "a", "e", "i", "o", "u":
            vowels += 1
        case ".", "!", ":", ";", "?", " ", "'", "":
            punctuation += 1
        default:
            consonants += 1
        }
        return (vowels, consonants, punctuation)
    }
}
I would suggest reading up on Set.
With that in mind, you could use CharacterSet and create 3 sets.
// Make your vowels
let vowels = CharacterSet(charactersIn: "aeiouy")
let consonants = CharacterSet.letters.subtracting(vowels)
let punctuation = CharacterSet.punctuationCharacters
Then, you'd want to track the counts of vowels, consonants, and punctuation, and loop through the string:
func sentenceComponents(string: String) -> (Int, Int, Int) {
    // Make your vowels
    let vowels = CharacterSet(charactersIn: "aeiouy")
    let consonants = CharacterSet.letters.subtracting(vowels)
    let punctuation = CharacterSet.punctuationCharacters

    // Set up vars to track our counts
    var vowelCount = 0
    var consonantCount = 0
    var punctuationCount = 0

    string.forEach { char in
        // Make a set of one character
        let set = CharacterSet(charactersIn: String(char))
        // If the character is a member of a set, increment the relevant var
        if set.isSubset(of: vowels) { vowelCount += 1 }
        if set.isSubset(of: consonants) { consonantCount += 1 }
        if set.isSubset(of: punctuation) { punctuationCount += 1 }
    }
    return (vowelCount, consonantCount, punctuationCount)
}
let testString = "The quick brown fox jumped over the lazy dog."
sentenceComponents(string: testString)
Update: Neater & Easier Read (Maybe?)
I can't stand unlabeled tuples, so here's an update with a typealias that tells you what you've got without having to go over the river & through the woods to figure out what's what in the tuple:
typealias SentenceComponents = (vowels: Int, consonants: Int, punctuation: Int)

func components(in string: String) -> SentenceComponents {
    // Make your vowels
    let vowels = CharacterSet(charactersIn: "aeiouy")
    let consonants = CharacterSet.letters.subtracting(vowels)
    let punctuation = CharacterSet.punctuationCharacters

    // Set up vars to track our counts
    var vowelCount = 0
    var consonantCount = 0
    var punctuationCount = 0

    string.forEach { char in
        // Make a set of one character
        let singleCharSet = CharacterSet(charactersIn: String(char))
        // If the character is a member of a set, increment the relevant var
        if singleCharSet.isSubset(of: vowels) { vowelCount += 1 }
        if singleCharSet.isSubset(of: consonants) { consonantCount += 1 }
        if singleCharSet.isSubset(of: punctuation) { punctuationCount += 1 }
    }
    return (vowelCount, consonantCount, punctuationCount)
}
let testString = "Smokey, this is not 'Nam. This is bowling. There are rules."
// (vowels 17, consonants 27, punctuation 5)
components(in: testString)
You put your return statement inside the loop, so the function returns the first time the loop body executes. That is why you only see one consonant: the switch statement runs exactly once.
You need to put your return statement outside the loop, like this:
func findVowelsConsonantsPunctuation(sentenceInput: String) -> (Int, Int, Int) {
    var vowels = 0
    var consonants = 0
    var punctuation = 0
    for character in sentenceInput.characters {
        switch String(character).lowercased() {
        case "a", "e", "i", "o", "u":
            vowels += 1
        case ".", "!", ":", ";", "?", " ", "'", "":
            punctuation += 1
        default:
            consonants += 1
        }
    }
    return (vowels, consonants, punctuation)
}
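With the return outside the loop, a quick check against the question's sentence gives plausible counts (my example; note that spaces fall into the punctuation case above):
let (vowels, consonants, punctuation) = findVowelsConsonantsPunctuation(sentenceInput: "Here is my sentence")
print(vowels, consonants, punctuation) // 6 10 3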
Try this below. You had your return statement within your for loop, so you were returning after the first iteration. I also moved the lowercased() call above the for loop, so we don't pay to lowercase each character on every iteration.
func findVowelsConsonantsPunctuation(sentenceInput: String) -> (Int, Int, Int) {
    var vowels = 0; var consonants = 0; var punctuation = 0
    let lowercasedInput = sentenceInput.lowercased()
    for character in lowercasedInput.characters {
        switch character {
        case "a", "e", "i", "o", "u":
            vowels += 1
        case ".", "!", ":", ";", "?", " ", "'":
            punctuation += 1
        default:
            consonants += 1
        }
    }
    return (vowels, consonants, punctuation)
}
func findVowelsConsonants(_ sentence: String) -> (Int, Int) {
    let sentenceLowercase = sentence.lowercased()
    var tuple = (numberOfVowels: 0, numberOfConsonants: 0, numberOfPunctuation: 0)
    for char in sentenceLowercase {
        switch char {
        case "a", "e", "i", "o", "u":
            tuple.0 += 1
        case "b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "y", "z":
            tuple.1 += 1
        default:
            tuple.2 += 1
        }
    }
    return (tuple.0, tuple.1)
}
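For reference, a quick check of this version (my example; the third count is tracked but never returned):
findVowelsConsonants("Here is my sentence") // (6, 10)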
Suppose I have an array that has 11 elements, say:
var arrayElemts = ["1","2","3","4","5","6","7","8","9","10","11"]
Now how can I keep the elements from 0 to 5 in one array and the elements from 6 to 10 in another array?
Use [0...5] to create an ArraySlice and then Array to convert that back to an array:
var arrayElemts = ["1","2","3","4","5","6","7","8","9","10","11"]
let first = Array(arrayElemts[0...5])
let second = Array(arrayElemts[6...10])
print(first) // ["1", "2", "3", "4", "5", "6"]
print(second) // ["7", "8", "9", "10", "11"]
The easiest option is the following:
let partition1 = array.filter { Int($0) ?? 0 <= 5 }
let partition2 = array.filter { Int($0) ?? 0 > 5 }
Conversion to numbers should be the first step though. You should never work with strings as if they were numbers.
let numbers = array.flatMap { Int($0) }
let partition1 = numbers.filter { $0 <= 5 }
let partition2 = numbers.filter { $0 > 5 }
If we suppose the array is sorted, there are easier options:
let sorted = numbers.sorted()
let partition1: [Int]
let partition2: [Int]
if let partition2start = sorted.index(where: { $0 > 5 }) {
    partition1 = Array(sorted.prefix(upTo: partition2start))
    partition2 = Array(sorted.suffix(from: partition2start))
} else {
    partition1 = sorted
    partition2 = []
}
which is what the native partition method can do:
var numbers = array.flatMap { Int($0) }
let index = numbers.partition { $0 > 5 }
let partition1 = Array(numbers.prefix(upTo: index))
let partition2 = Array(numbers.suffix(from: index))
Note that the method changes the original array (which is why numbers is declared as var here), and it does not guarantee to preserve the relative order of the elements.
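A quick sketch of what partition does (my example; the exact reordering is an implementation detail, only the pivot index is guaranteed):
var values = [1, 9, 2, 8, 3, 7]
let pivot = values.partition { $0 > 5 }
print(Array(values.prefix(upTo: pivot))) // the elements <= 5, e.g. [1, 2, 3]
print(Array(values.suffix(from: pivot))) // the elements > 5, e.g. [8, 9, 7]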
Breaking the array up into N-sized chunks
The other answers show you how to "statically" partition the original array into different arrays using ArraySlices. Given your description, possibly you want, more generally, to break up your original array into N-sized chunks (here: N = 5).
We could use the sequence(state:next:) function to implement such a chunk(bySize:) method as an extension to Collection:
extension Collection {
    func chunk(bySize size: IndexDistance) -> [SubSequence] {
        precondition(size > 0, "Chunk size must be a positive integer.")
        return sequence(
            state: (startIndex, index(startIndex, offsetBy: size, limitedBy: endIndex) ?? endIndex),
            next: { indices in
                guard indices.0 != self.endIndex else { return nil }
                indices.1 = self.index(indices.0, offsetBy: size, limitedBy: self.endIndex) ?? self.endIndex
                return (self[indices.0..<indices.1], indices.0 = indices.1).0
            }).map { $0 }
    }
}
Applied to your example:
var arrayElements = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"]
let partitions = arrayElements.chunk(bySize: 5)
/* [["1", "2", "3", "4", "5"],
["6", "7", "8", "9", "10"],
["11"]] */
The chunk(bySize:) method will break the array up into bySize-sized chunks, plus (possibly) a smaller chunk for the final partition.
However, as much as I'd like to use the sequence(state:next:) function here (no mutable intermediate variables other than state), the implementation above is quite bloated and difficult to read, so (as in so many other cases...) we are probably better off simply using a while loop:
extension Collection {
    func chunk(bySize size: IndexDistance) -> [SubSequence] {
        precondition(size > 0, "Chunk size must be a positive integer.")
        var chunks: [SubSequence] = []
        var from = startIndex
        while let to = index(from, offsetBy: size, limitedBy: endIndex) {
            chunks.append(self[from..<to])
            from = to
        }
        if from != endIndex { chunks.append(self[from..<endIndex]) }
        return chunks
    }
}
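Applied to the same input, this version produces the same chunks (a quick sanity check with the array from above):
let arrayElements = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"]
print(arrayElements.chunk(bySize: 5).map(Array.init))
// [["1", "2", "3", "4", "5"], ["6", "7", "8", "9", "10"], ["11"]]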
lol, I don't see why there are so many complicated answers here.
(Consider the "array" variable as [Int], not [Any].)
The first approach is just for number types; the second one (prefix/suffix) should do it for anything.
Simply:
let array = [0,1,2,3,4,5,6,7,8,9,10]
//For instance..
var arrayA = ["A","B","C","D","E","F","G"]

//First six elements (values 0...5):
let arrayOfFirstSix = array.filter { $0 <= 5 }

//Remaining elements:
let restOfArray = array.filter { $0 > 5 }

let elementsToFourth = arrayA.prefix(upTo: 4)
let elementsAfterFourth = arrayA.suffix(from: 4)

print(arrayOfFirstSix)     // [0, 1, 2, 3, 4, 5]
print(restOfArray)         // [6, 7, 8, 9, 10]
print(elementsToFourth)    // ["A", "B", "C", "D"]
print(elementsAfterFourth) // ["E", "F", "G"]
I just want to get the ASCII value of a single char string in Swift. This is how I'm currently doing it:
var singleChar = "a"
println(singleChar.unicodeScalars[singleChar.unicodeScalars.startIndex].value) //prints: 97
This is so ugly though. There must be a simpler way.
edit/update Swift 5.2 or later
extension StringProtocol {
    var asciiValues: [UInt8] { compactMap(\.asciiValue) }
}
"abc".asciiValues // [97, 98, 99]
In Swift 5 you can use the new character properties isASCII and asciiValue
Character("a").isASCII // true
Character("a").asciiValue // 97
Character("á").isASCII // false
Character("á").asciiValue // nil
Old answer
You can create an extension:
Swift 4.2 or later
extension Character {
    var isAscii: Bool {
        return unicodeScalars.allSatisfy { $0.isASCII }
    }
    var ascii: UInt32? {
        return isAscii ? unicodeScalars.first?.value : nil
    }
}

extension StringProtocol {
    var asciiValues: [UInt32] {
        return compactMap { $0.ascii }
    }
}
Character("a").isAscii // true
Character("a").ascii // 97
Character("á").isAscii // false
Character("á").ascii // nil
"abc".asciiValues // [97, 98, 99]
"abc".asciiValues[0] // 97
"abc".asciiValues[1] // 98
"abc".asciiValues[2] // 99
UnicodeScalar("1")!.value // returns 49
Swift 3.1 (originally posted for Xcode 7.1 and Swift 2.1):
var singleChar = "a"
singleChar.unicodeScalars.first?.value
You can use NSString's characterAtIndex to accomplish this (note that it returns a UTF-16 code unit, which matches the ASCII value only for ASCII characters)...
var singleCharString = "a" as NSString
var singleCharValue = singleCharString.characterAtIndex(0)
println("The value of \(singleCharString) is \(singleCharValue)") // The value of a is 97
Swift 4.2
The easiest way to get ASCII values from a Swift string is below
let str = "Swift string"
for ascii in str.utf8 {
    print(ascii)
}
Output:
83
119
105
102
116
32
115
116
114
105
110
103
The way you're doing it is right. If you don't like the verbosity of the indexing, you can avoid it by cycling through the unicode scalars:
var x : UInt32 = 0
let char = "a"
for sc in char.unicodeScalars {x = sc.value; break}
You can actually omit the break in this case, of course, since there is only one unicode scalar.
Or, convert to an Array and use Int indexing (the last resort of the desperate):
let char = "a"
let x = Array(char.unicodeScalars)[0].value
A slightly shorter way of doing this could be:
first(singleChar.unicodeScalars)!.value
As with the subscript version, this will crash if your string is actually empty, so if you’re not 100% sure, use the optional:
if let ascii = first(singleChar.unicodeScalars)?.value {
}
Or, if you want to be extra-paranoid,
if let char = first(singleChar.unicodeScalars) where char.isASCII() {
let ascii = char.value
}
Here's my implementation, it returns an array of the ASCII values.
extension String {
    func asciiValueOfString() -> [UInt32] {
        var retVal = [UInt32]()
        for val in self.unicodeScalars where val.isASCII() {
            retVal.append(val.value)
        }
        return retVal
    }
}
Note: Yes it's Swift 2 compatible.
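Hypothetical usage (my example, not part of the original answer):
"abc!".asciiValueOfString() // [97, 98, 99, 33]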
Swift 4.1
https://oleb.net/blog/2017/11/swift-4-strings/
let flags = "99_problems"
flags.unicodeScalars.map {
"\(String($0.value, radix: 16, uppercase: true))"
}
Result:
["39", "39", "5F", "70", "72", "6F", "62", "6C", "65", "6D", "73"]
Swift 4+
Char to ASCII
let ch: Character = "a" // assuming ch holds your character
let charVal = String(ch).unicodeScalars
let asciiVal = charVal[charVal.startIndex].value // 97
ASCII to Char
let char = Character(UnicodeScalar(asciiVal)!)
var singchar = "a" as NSString
print(singchar.character(at: 0))
Swift 3.1
There's also the UInt8(ascii: Unicode.Scalar) initializer on UInt8 (note that it traps at runtime if the scalar is not ASCII).
var singleChar = "a"
UInt8(ascii: singleChar.unicodeScalars[singleChar.startIndex])
With Swift 5, you can pick one of the following approaches in order to get the ASCII numeric representation of a character.
#1. Using Character's asciiValue property
Character has a property called asciiValue. asciiValue has the following declaration:
var asciiValue: UInt8? { get }
The ASCII encoding value of this character, if it is an ASCII character.
The following Playground sample codes show how to use asciiValue in order to get
the ASCII encoding value of a character:
let character: Character = "a"
print(character.asciiValue) //prints: Optional(97)
let string = "a"
print(string.first?.asciiValue) //prints: Optional(97)
let character: Character = "👍"
print(character.asciiValue) //prints: nil
#2. Using Character's isASCII property and Unicode.Scalar's value property
As an alternative, you can check that the first character of a string is an ASCII character (using Character's isASCII property) then get the numeric representation of its first Unicode scalar (using Unicode.Scalar's value property). The Playground sample code below show how to proceed:
let character: Character = "a"
if character.isASCII, let scalar = character.unicodeScalars.first {
print(scalar.value)
} else {
print("Not an ASCII character")
}
/*
prints: 97
*/
let string = "a"
if let character = string.first, character.isASCII, let scalar = character.unicodeScalars.first {
print(scalar.value)
} else {
print("Not an ASCII character")
}
/*
prints: 97
*/
let character: Character = "👍"
if character.isASCII, let scalar = character.unicodeScalars.first {
print(scalar.value)
} else {
print("Not an ASCII character")
}
/*
prints: Not an ASCII character
*/
Swift 4
print("c".utf8["c".utf8.startIndex])
or
let cu = "c".utf8
print(cu[cu.startIndex])
Both print 99. Works for any ASCII character.
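One caveat worth noting (my addition): for non-ASCII characters the UTF-8 view yields multiple bytes, so this only gives "the ASCII value" for genuinely ASCII input:
let eAcute = "é".utf8
print(Array(eAcute)) // [195, 169] (two UTF-8 bytes, not an ASCII value)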
var input = "Swift".map {
Character(extendedGraphemeClusterLiteral: $0).asciiValue!
}
// [83, 119, 105, 102, 116]
I am trying to cycle through the entire alphabet using Swift. The only problem is that I would like to assign values to each letter.
For Example: a = 1, b = 2, c = 3 and so on until I get to z which would = 26.
How do I go through each letter in the text field that the user typed while using the values previously assigned to the letters in the alphabet?
After this is done, how would I add up all the letters' values to get a sum for the entire word? I am looking for the simplest possible way to accomplish this that works the way I would like it to.
edit/update: Xcode 12.5 • Swift 5.4
extension Character {
    static let alphabetValue = zip("abcdefghijklmnopqrstuvwxyz", 1...26).reduce(into: [:]) { $0[$1.0] = $1.1 }
    var lowercased: Character { .init(lowercased()) }
    var letterValue: Int? { Self.alphabetValue[lowercased] }
}

extension String {
    var wordValue: Int { compactMap(\.letterValue).reduce(0, +) }
}
Character("A").letterValue // 1
Character("b").letterValue // 2
Character("c").letterValue // 3
Character("d").letterValue // 4
Character("e").letterValue // 5
Character("Z").letterValue // 26
"Abcde".wordValue // 15
I'd create a function something like this...
func valueOfLetter(inputLetter: String) -> Int {
    let alphabet = ["a", "b", "c", "d", ... , "y", "z"] // finish the array properly
    for (index, letter) in alphabet.enumerated() {
        if letter == inputLetter.lowercased() {
            return index + 1
        }
    }
    return 0
}
Then you can iterate the word...
let word = "hello"
var score = 0
for character in word {
score += valueOfLetter(character)
}
Assign the letters by iterating over them and building a dictionary with letters corresponding to their respective values:
let alphabet: [String] = [
    "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
    "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"
]
var alphaDictionary = [String: Int]()
var i: Int = 0
for a in alphabet {
    alphaDictionary[a] = ++i
}
Use Swift's built-in Array reduce function to sum up the letters returned from your UITextViewDelegate:
func textViewDidEndEditing(textView: UITextView) {
    let sum = Array(textView.text.unicodeScalars).reduce(0) { a, b in
        var sum = a
        if let d = alphaDictionary[String(b).lowercaseString] {
            sum += d
        }
        return sum
    }
}
I've just put together the following function in swiftstub.com and it seems to work as expected.
func getCount(word: String) -> Int {
    let alphabetArray = Array(" abcdefghijklmnopqrstuvwxyz")
    var count = 0
    // enumerate through each character in the word (as lowercase)
    for (index, value) in enumerate(word.lowercaseString) {
        // get the index from the alphabetArray and add it to the count
        if let alphabetIndex = find(alphabetArray, value) {
            count += alphabetIndex
        }
    }
    return count
}
let word = "Hello World"
let expected = 8+5+12+12+15+23+15+18+12+4
println("'\(word)' should equal \(expected), it is \(getCount(word))")
// 'Hello World' should equal 124 :)
The function loops through each character in the string you pass in and uses the find function to check whether the character (value) exists in the sequence (alphabetArray). If it does, find returns the character's index within the sequence; that index is then added to the count, and once all characters have been checked, the count is returned.
Maybe you are looking for something like this:
func alphabetSum(text: String) -> Int {
    let lowerCase = UnicodeScalar("a")..."z"
    return reduce(filter(text.lowercaseString.unicodeScalars, { lowerCase ~= $0 }), 0) { acc, x in
        acc + Int((x.value - 96))
    }
}
alphabetSum("Az") // 27 case insensitive
alphabetSum("Hello World!") // 124 excludes non a...z characters
The sequence text.lowercaseString.unicodeScalars (the lowercased text as Unicode scalars) is filtered, keeping only the scalars that pattern match (~=) with the lowerCase range.
reduce then sums all the filtered scalar values, each shifted by -96 (so that 'a' gives 1, etc.), starting from an accumulator (acc) value of 0.
In this solution the pattern match operator will just check for the scalar value to be between lowerCase.start (a) and lowerCase.end (z), thus there is no lookup or looping into an array of characters.
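For reference, the same approach reads like this in current Swift (my translation, using method syntax instead of the old free functions):
func alphabetSum(_ text: String) -> Int {
    let lowerCase: ClosedRange<Unicode.Scalar> = "a"..."z"
    return text.lowercased().unicodeScalars
        .filter { lowerCase ~= $0 }
        .reduce(0) { $0 + Int($1.value) - 96 }
}

alphabetSum("Az")           // 27
alphabetSum("Hello World!") // 124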