Swift. How to get the previous character?

For example: I have the character "b" and I want to get "a", since "a" is the previous character.
let b: Character = "b"
let a: Character = b - 1 // Compilation error

It's actually pretty complicated to get the previous character from Swift's Character type, because a Character is composed of one or more Unicode.Scalar values. Depending on your needs, you could restrict your efforts to just the ASCII characters, or you could support all characters composed of a single Unicode scalar. Once you get into characters composed of multiple Unicode scalars (such as the flag emojis or the various skin-toned emojis), I'm not even sure what the "previous character" means.
Here is a pair of computed properties added to a Character extension that can handle ASCII and single-Unicode-scalar characters.
extension Character {
    var previousASCII: Character? {
        if let ascii = asciiValue, ascii > 0 {
            return Character(Unicode.Scalar(ascii - 1))
        }
        return nil
    }

    var previousScalar: Character? {
        if unicodeScalars.count == 1 {
            if let scalar = unicodeScalars.first, scalar.value > 0 {
                if let prev = Unicode.Scalar(scalar.value - 1) {
                    return Character(prev)
                }
            }
        }
        return nil
    }
}
Examples:
let b: Character = "b"
let a = b.previousASCII // Gives Optional("a")
let emoji: Character = "😆"
let previous = emoji.previousScalar // Gives Optional("😅")
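For what it's worth, here is a small usage sketch with the extension above, showing how both properties behave for a multi-scalar character (a flag) and at an ASCII boundary:
let flag: Character = "🇩🇪"       // composed of two Unicode scalars (regional indicators)
let p1 = flag.previousASCII      // nil, since a flag has no asciiValue
let p2 = flag.previousScalar     // nil, since it is made of more than one Unicode scalar
let letterA: Character = "a"
let p3 = letterA.previousScalar  // Optional("`"), since U+0060 precedes U+0061 ("a")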

Related

Get next alphabet of a character

Is there any specific API to get the next letter of the alphabet for a character?
Example:
If "Somestring".characters.first results in "S", then it should return "T".
If there's none, I guess I have to iterate through a collection of the alphabet and return the next character in order. Or is there a better solution?
If you think of the Latin capital letters "A" ... "Z" then the
following should work:
func nextLetter(_ letter: String) -> String? {
    // Check if the string is built from exactly one Unicode scalar:
    guard let uniCode = UnicodeScalar(letter) else {
        return nil
    }
    switch uniCode {
    case "A" ..< "Z":
        return String(UnicodeScalar(uniCode.value + 1)!)
    default:
        return nil
    }
}
It returns the next Latin capital letter if there is one,
and nil otherwise. It works because the Latin capital letters
have consecutive Unicode scalar values.
(Note that UnicodeScalar(uniCode.value + 1)! cannot fail in that
range.) The guard statement handles both multi-character
strings and extended grapheme clusters (such as flags "🇩🇪").
You can use
case "A" ..< "Z", "a" ..< "z":
if lowercase letters should be covered as well.
Examples:
nextLetter("B") // C
nextLetter("Z") // nil
nextLetter("€") // nil
func nextChar(str: String) {
    if let firstChar = str.unicodeScalars.first {
        let nextUnicode = firstChar.value + 1
        if let nextScalar = UnicodeScalar(nextUnicode) {
            var nextString = ""
            nextString.append(Character(nextScalar))
            print(nextString)
        }
    }
}
nextChar(str: "A") // B
nextChar(str: "ζ") // η
nextChar(str: "z") // {

Strange String.unicodeScalars and CharacterSet behaviour

I'm trying to use a Swift 3 CharacterSet to filter characters out of a String but I'm getting stuck very early on. A CharacterSet has a method called contains
func contains(_ member: UnicodeScalar) -> Bool
Test for membership of a particular UnicodeScalar in the CharacterSet.
But testing this doesn't produce the expected behaviour.
let characterSet = CharacterSet.capitalizedLetters
let capitalAString = "A"
if let capitalA = capitalAString.unicodeScalars.first {
    print("Capital A is \(characterSet.contains(capitalA) ? "" : "not ")in the group of capital letters")
} else {
    print("Couldn't get the first element of capitalAString's unicode scalars")
}
I'm getting Capital A is not in the group of capital letters yet I'd expect the opposite.
Many thanks.
CharacterSet.capitalizedLetters
returns a character set containing the characters in Unicode General Category Lt, aka "Letter, titlecase". Those are "Ligatures containing uppercase followed by lowercase letters (e.g., Dž, Lj, Nj, and Dz)" (compare Wikipedia: Unicode character property or Unicode® Standard Annex #44 – Table 12. General_Category Values).
You can find a list here: Unicode Characters in the 'Letter, Titlecase' Category.
You can also use the code from
NSArray from NSCharacterset to dump the contents of the character
set:
extension CharacterSet {
    func allCharacters() -> [Character] {
        var result: [Character] = []
        for plane: UInt8 in 0...16 where self.hasMember(inPlane: plane) {
            for unicode in UInt32(plane) << 16 ..< UInt32(plane + 1) << 16 {
                if let uniChar = UnicodeScalar(unicode), self.contains(uniChar) {
                    result.append(Character(uniChar))
                }
            }
        }
        return result
    }
}
let characterSet = CharacterSet.capitalizedLetters
print(characterSet.allCharacters())
// ["Dž", "Lj", "Nj", "Dz", "ᾈ", "ᾉ", "ᾊ", "ᾋ", "ᾌ", "ᾍ", "ᾎ", "ᾏ", "ᾘ", "ᾙ", "ᾚ", "ᾛ", "ᾜ", "ᾝ", "ᾞ", "ᾟ", "ᾨ", "ᾩ", "ᾪ", "ᾫ", "ᾬ", "ᾭ", "ᾮ", "ᾯ", "ᾼ", "ῌ", "ῼ"]
What you probably want is CharacterSet.uppercaseLetters, which returns "a character set containing the characters in Unicode General Category Lu and Lt."
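A minimal check (the question's test, just with the other character set) confirms the expected result:
let uppercase = CharacterSet.uppercaseLetters
if let capitalA = "A".unicodeScalars.first {
    print(uppercase.contains(capitalA)) // true, since "A" is in General Category Lu
}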

Get numbers characters from a string [duplicate]

How can I get the number characters from a string? I don't want to convert them to Int.
var string = "string_1"
var string2 = "string_20_certified"
My results have to be formatted like this:
newString = "1"
newString2 = "20"
Pattern matching a String's unicode scalars against Western Arabic Numerals
You could pattern match the unicodeScalars view of a String to a given UnicodeScalar pattern (covering e.g. Western Arabic numerals).
extension String {
    var westernArabicNumeralsOnly: String {
        let pattern = UnicodeScalar("0")..."9"
        return String(unicodeScalars
            .flatMap { pattern ~= $0 ? Character($0) : nil })
    }
}
Example usage:
let str1 = "string_1"
let str2 = "string_20_certified"
let str3 = "a_1_b_2_3_c34"
let newStr1 = str1.westernArabicNumeralsOnly
let newStr2 = str2.westernArabicNumeralsOnly
let newStr3 = str3.westernArabicNumeralsOnly
print(newStr1) // 1
print(newStr2) // 20
print(newStr3) // 12334
Extending to matching any of several given patterns
The Unicode scalar pattern matching approach above is particularly useful when extended to match any of several given patterns, e.g. patterns describing different variations of Eastern Arabic numerals:
extension String {
    var easternArabicNumeralsOnly: String {
        let patterns = [UnicodeScalar("\u{0660}")..."\u{0669}", // Eastern Arabic
                        "\u{06F0}"..."\u{06F9}"]                // Perso-Arabic variant
        return String(unicodeScalars
            .flatMap { uc in patterns.contains { $0 ~= uc } ? Character(uc) : nil })
    }
}
This could be used in practice e.g. if writing an Emoji filter, as ranges of unicode scalars that cover emojis can readily be added to the patterns array in the Eastern Arabic example above.
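As a rough sketch of what such a filter could look like (the property name someEmojiOnly and the two ranges, the Emoticons and Transport and Map Symbols blocks, are only an illustrative selection, not a complete emoji definition):
extension String {
    var someEmojiOnly: String {
        let patterns = [UnicodeScalar("\u{1F600}")..."\u{1F64F}", // Emoticons
                        "\u{1F680}"..."\u{1F6FF}"]                // Transport and Map Symbols
        return String(unicodeScalars
            .flatMap { uc in patterns.contains { $0 ~= uc } ? Character(uc) : nil })
    }
}
"Arriving in 10 🚀 minutes 😀!".someEmojiOnly // "🚀😀"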
Why use the UnicodeScalar patterns approach over Character ones?
A Character in Swift represents an extended grapheme cluster, which is made up of one or more Unicode scalar values. This means that Character instances in Swift do not have a fixed size in memory, which in turn means that random access to a character within a collection of sequentially (contiguously) stored characters is not O(1), but rather O(n).
Unicode scalars in Swift, on the other hand, are stored as fixed-size UTF-32 code units, which should allow O(1) random access. Now, I'm not entirely sure whether this is a fact, or the reason for what follows, but it is a fact that if you benchmark the methods above against an equivalent method using the CharacterView (.characters property) for some test String instances, it's very apparent that the UnicodeScalar approach is faster than the Character approach; naive testing showed a factor of 10-25 difference in execution times, growing steadily with the String size.
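A naive timing harness along these lines (nothing rigorous, just wall-clock timing; it assumes the westernArabicNumeralsOnly extension from above and uses the modern filter-over-Characters form in place of the deprecated .characters view) is enough to see a gap of that order:
import Foundation

// Naive wall-clock comparison of scalar-based vs Character-based filtering.
let sample = String(repeating: "a1b2c3_", count: 100_000)

let t0 = Date()
let viaScalars = sample.westernArabicNumeralsOnly
let scalarTime = Date().timeIntervalSince(t0)

let t1 = Date()
let viaCharacters = sample.filter { "0"..."9" ~= $0 }
let characterTime = Date().timeIntervalSince(t1)

print(viaScalars == viaCharacters) // true, both produce the same digits
print(scalarTime, characterTime)   // the scalar-based version tends to come out ahead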
Knowing the limitations of working with Unicode scalars vs Characters in Swift
There are drawbacks to the UnicodeScalar approach, however; namely when working with characters that cannot be represented by a single Unicode scalar, but where one of their Unicode scalars is contained in the pattern we want to match against.
E.g., consider a string holding the four characters "Café". The last character, "é", is represented by two Unicode scalars, "e" and "\u{301}". If we were to implement pattern matching against, say, UnicodeScalar("a")..."e", the filtering method as applied above would allow one of the two Unicode scalars to pass.
extension String {
    var onlyLowercaseLettersAthroughE: String {
        let patterns = [UnicodeScalar("1")..."e"]
        return String(unicodeScalars
            .flatMap { uc in patterns.contains { $0 ~= uc } ? Character(uc) : nil })
    }
}
let str = "Cafe\u{301}"
print(str) // Café
print(str.onlyLowercaseLettersAthroughE) // Cae
/* possibly we'd want "Ca" or "Caé"
as result here */
In the particular use case queried by the OP in this Q&A, the above is not an issue, but depending on the use case, it will sometimes be more appropriate to work with Character pattern matching rather than UnicodeScalar.
Edit: Updated for Swift 4 & 5
Here's a straightforward method that doesn't require Foundation:
let newstring = string.filter { "0"..."9" ~= $0 }
or borrowing from #dfri's idea to make it a String extension:
extension String {
    var numbers: String {
        return filter { "0"..."9" ~= $0 }
    }
}
print("3 little pigs".numbers) // "3"
print("1, 2, and 3".numbers) // "123"
import Foundation
let string = "a_1_b_2_3_c34"
let result = string.components(separatedBy: CharacterSet.decimalDigits.inverted).joined(separator: "")
print(result)
Output:
12334
Here is a Swift 2 example:
let str = "Hello 1, World 62"
let intString = str.componentsSeparatedByCharactersInSet(
NSCharacterSet
.decimalDigitCharacterSet()
.invertedSet)
.joinWithSeparator("") // Return a string with all the numbers
This method iterates through the string's characters and appends the digits to a new string:
class func getNumberFrom(string: String) -> String {
    var number: String = ""
    for c: Character in string.characters {
        if let n = Int(String(c)), n >= 0 && n <= 9 {
            number.append(c)
        }
    }
    return number
}
For example, with a regular expression:
let text = "string_20_certified"
let pattern = "\\d+"
let regex = try! NSRegularExpression(pattern: pattern, options: [])
if let match = regex.firstMatch(in: text, options: [], range: NSRange(location: 0, length: text.utf16.count)) {
    let newString = (text as NSString).substring(with: match.range)
    print(newString)
}
If there are multiple occurrences of the pattern, use matches(in:options:range:):
let matches = regex.matches(in: text, options: [], range: NSRange(location: 0, length: text.utf16.count))
for match in matches {
    let newString = (text as NSString).substring(with: match.range)
    print(newString)
}

Issues when attempting to modify an emoji sequence

I have the following function that takes a string with emojis. If it's an emoji sequence a+b, it will leave a as is and change b to a different emoji.
func changeEmoji(givenString: String) -> String {
    let emojiDictionary: [String: String] = [
        "⛹" : "⛹",
        "♀️" : "👩",
        "🏻" : "💤",
        "♂️" : "👨",
    ]
    let stringCharacters = Array(givenString.characters)
    var returnedString = String()
    for character in stringCharacters {
        if emojiDictionary[String(character)] == nil {
            return "error"
        }
        else {
            returnedString = returnedString + emojiDictionary[String(character)]!
        }
    }
    return returnedString
}
I have no problem with
changeEmoji(givenString: "⛹⛹🏻")
it outputs: "⛹⛹💤"
but:
changeEmoji(givenString: "⛹⛹🏻⛹🏻‍♀️")
outputs "error" while it shouldn't as ♀ Female Sign and Variation Selector-16 is the second key in my emojiDictionary..
Similar issue appears with male sign and variation selector.
Any ideas why is this happening?
The problem is that "⛹🏻‍♀️" is made up of 3 Swift Characters (aka extended grapheme clusters):
"⛹🏻‍" (U+26F9 PERSON WITH BALL)
"🏻‍" (U+1F3FB Emoji Modifier Fitzpatrick Type-1-2, U+200D ZERO WIDTH JOINER)
"♀️" (U+2640 FEMALE SIGN, U+FE0F VARIATION SELECTOR-16)
However, your emojiDictionary only contains a "🏻" (U+1F3FB Emoji Modifier Fitzpatrick Type-1-2), which doesn't match the second Character of "⛹🏻‍♀️" as it's missing the zero width joiner.
The simplest solution therefore is to just add another key to your dictionary to include the Emoji Modifier Fitzpatrick Type-1-2 character, with a zero width joiner suffix. The clearest way of doing this would be to just suffix it with the unicode escape sequence \u{200D}.
For example:
func changeEmoji(givenString: String) -> String? {
    // I have included the unicode point breakdowns for clarity
    let emojiDictionary: [String: String] = [
        "⛹" : "⛹",           // 26F9 : 26F9
        "♀️" : "👩",          // 2640, fe0f : 1f469
        "🏻" : "💤",          // 1f3fb : 1f4a4
        "🏻\u{200D}" : "💤",  // 1f3fb, 200d : 1f4a4
        "♂️" : "👨"           // 2642, fe0f : 1f468
    ]

    // Convert characters of string to an array of string characters,
    // given that you're just going to use the String(_:) initialiser later.
    let stringCharacters = givenString.characters.map(String.init(_:))

    var returnedString = ""

    for character in stringCharacters {
        guard let replacementCharacter = emojiDictionary[character] else {
            // I would advise making your method return an optional
            // in cases where the string can't be converted.
            // Failure is shown by the return of nil, rather than some
            // string sentinel.
            return nil
        }
        returnedString += replacementCharacter
    }
    return returnedString
}

print(changeEmoji(givenString: "⛹⛹🏻⛹🏻‍♀️")) // Optional("⛹⛹💤⛹💤👩")
print(changeEmoji(givenString: "⛹⛹🏻⛹🏻‍♀️")) // Optional("⛹⛹💤⛹💤👩")
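If you want to see how your Swift version segments the string into Characters, a small diagnostic sketch like the following (written against Swift 4+ String iteration) helps; note that grapheme-breaking rules have changed between Swift versions, so newer compilers may keep the whole sequence together as a single Character rather than the three clusters listed above:
let problem = "⛹🏻‍♀️"
for character in problem {
    // Print each Character together with the hex values of its Unicode scalars.
    let scalars = character.unicodeScalars
        .map { String($0.value, radix: 16, uppercase: true) }
        .joined(separator: " ")
    print("\(character): \(scalars)")
}
// In the Swift 3 era this printed three lines (26F9, then 1F3FB 200D,
// then 2640 FE0F); current Swift versions may print a single line for
// the whole emoji sequence.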

NSCharacterSet.characterIsMember() with Swift's Character type

Imagine you've got an instance of Swift's Character type, and you want to determine whether it's a member of an NSCharacterSet. NSCharacterSet's characterIsMember method takes a unichar, so we need to get from Character to unichar.
The only solution I could come up with is the following, where c is my Character:
let u: unichar = ("\(c)" as NSString).characterAtIndex(0)
if characterSet.characterIsMember(u) {
    dude.abide()
}
I looked at Character but nothing leapt out at me as a way to get from it to unichar. This may be because Character is more general than unichar, so a direct conversion wouldn't be safe, but I'm only guessing.
If I were iterating a whole string, I'd do something like this:
let s = myString as NSString
for i in 0..<countElements(myString) {
    let u = s.characterAtIndex(i)
    if characterSet.characterIsMember(u) {
        dude.abide()
    }
}
(Warning: The above is pseudocode and has never been run by anyone ever.) But this is not really what I'm asking.
My understanding is that unichar is a typealias for UInt16. A unichar is just a number.
I think that the problem you are facing is that a Character in Swift can be composed of more than one Unicode "character". Thus, it cannot be converted to a single unichar value because it may be composed of two unichars. You can decompose a Character into its individual unichar values by converting it to a string and using the utf16 property, like this:
let c: Character = "a"
let s = String(c)
var codeUnits = [unichar]()
for codeUnit in s.utf16 {
    codeUnits.append(codeUnit)
}
This will produce an array - codeUnits - of unichar values.
EDIT: Initial code had for codeUnit in s when it should have been for codeUnit in s.utf16
You can tidy things up and test for whether or not each individual unichar value is in a character set like this:
let char: Character = "\u{63}\u{20dd}" // This is a 'c' inside of an enclosing circle
for codeUnit in String(char).utf16 {
    if NSCharacterSet(charactersInString: "c").characterIsMember(codeUnit) {
        dude.abide()
    } // dude will abide() for the first code unit ("c"), but not for the second (0x20dd, the enclosing circle)
}
Or, if you are only interested in the first (and often only) unichar value:
if NSCharacterSet(charactersInString: "c").characterIsMember(String(char).utf16.first!) {
    dude.abide()
}
Or, wrap it in a function:
func isChar(char: Character, inSet set: NSCharacterSet) -> Bool {
    return set.characterIsMember(String(char).utf16.first!)
}

let xSet = NSCharacterSet(charactersInString: "x")
isChar("x", inSet: xSet) // This returns true
isChar("y", inSet: xSet) // This returns false
Now make the function check for all unichar values in a composed character - that way, if you have a composed character, the function will only return true if both the base character and the combining character are present:
func isChar(char: Character, inSet set: NSCharacterSet) -> Bool {
    var found = true
    for ch in String(char).utf16 {
        if !set.characterIsMember(ch) { found = false }
    }
    return found
}
let acuteA: Character = "\u{e1}" // An "a" with an accent
let acuteAComposed: Character = "\u{61}\u{301}" // Also an "a" with an accent
// A character set that includes both the composed and uncomposed unichar values
let charSet = NSCharacterSet(charactersInString: "\u{61}\u{301}\u{e1}")
isChar(acuteA, inSet: charSet) // returns true
isChar(acuteAComposed, inSet: charSet) // returns true (both unichar values were matched)
The last version is important. If your Character is a composed character you have to check for the presence of both the base character ("a") and the combining character (the acute accent) in the character set or you will get false positives.
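To make that false-positive risk concrete, here is a small sketch (written with the current API names, so treat it as illustrative) comparing a first-code-unit check with an all-code-units check when the set contains only the base letter:
let baseOnlySet = NSCharacterSet(charactersIn: "a")   // no combining acute accent in the set
let composed: Character = "\u{61}\u{301}"             // "a" followed by a combining acute accent

let units = Array(String(composed).utf16)             // [0x0061, 0x0301]
let firstUnitOnly = baseOnlySet.characterIsMember(units[0])           // true, a false positive
let allUnits = units.allSatisfy { baseOnlySet.characterIsMember($0) } // false, the accent is missing from the set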
I would treat the Character as a String and let Cocoa do all the work:
func charset(cset: NSCharacterSet, containsCharacter c: Character) -> Bool {
    let s = String(c)
    let ix = s.startIndex
    let ix2 = s.endIndex
    let result = s.rangeOfCharacterFromSet(cset, options: nil, range: ix..<ix2)
    return result != nil
}
And here's how to use it:
let cset = NSCharacterSet.lowercaseLetterCharacterSet()
let c : Character = "c"
let ok = charset(cset, containsCharacter:c) // true
Do it all in a one-liner:
validCharacterSet.contains(String(char).unicodeScalars.first!)
(Swift 3)
Due to changes in Swift 3.0, matt's answer no longer works, so here is a working version (as an extension):
private extension NSCharacterSet {
    func containsCharacter(c: Character) -> Bool {
        let s = String(c)
        let ix = s.startIndex
        let ix2 = s.endIndex
        let result = s.rangeOfCharacter(from: self as CharacterSet, options: [], range: ix..<ix2)
        return result != nil
    }
}
With the Swift 3.0 changes you actually don't need to bridge to NSCharacterSet anymore; you can use Swift's native CharacterSet.
You could do something similar to Jiri's answer directly:
extension CharacterSet {
    func contains(_ character: Character) -> Bool {
        let string = String(character)
        return string.rangeOfCharacter(from: self, options: [], range: string.startIndex..<string.endIndex) != nil
    }
}
or do:
func contains(_ character: Character) -> Bool {
    let otherSet = CharacterSet(charactersIn: String(character))
    return self.isSuperset(of: otherSet)
}
Note: the above crashes and doesn't work due to https://bugs.swift.org/browse/SR-3667. Not sure CharacterSet gets the kind of love it needs.
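For completeness, a quick usage check of the first, rangeOfCharacter-based variant (the one that works reliably):
let lowercaseSet = CharacterSet.lowercaseLetters
let lower: Character = "c"
let upper: Character = "C"
lowercaseSet.contains(lower) // true, resolved to the Character overload above
lowercaseSet.contains(upper) // false, "C" is not a lowercase letter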