How do you turn a string into a unicode family in Swift? - swift

I'm trying to make a feature in my app where, as the user types in a text field, the text is converted into different Unicode "families".
In the screenshot below, as the user types, each cell shows the text rendered in a different Unicode family; tapping a cell copies that version of the text so it can be pasted somewhere else.
If I wanted to turn my text into the black bubble Unicode family shown in the screenshot, how could I do that?

You can define a character map. Here's one to get you started.
let circledMap: [Character : Character] = ["A": "πŸ…", "B": "πŸ…‘", "C": "πŸ…’", "D": "πŸ…“"] // The rest are left as an exercise
let circledRes = String("abacab".uppercased().map { circledMap[$0] ?? $0 })
print(circledRes)
If your map contains mappings for both upper- and lowercase letters, then don't call uppercased().
Create whatever maps you want. Spend lots of time with the "Emoji & Symbols" viewer found on the Edit menu of every macOS program.
let invertedMap: [Character : Character] = ["a": "ɐ", "b": "q", "c": "Ι”", "d": "p", "e": "ǝ", "f": "ɟ", "g": "Ζƒ", "h": "Ι₯"]
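If you use the inverted map, you'll probably also want to reverse the string so it reads like upside-down text. A minimal sketch using the partial invertedMap above (only the letters it covers are flipped):
let flipped = String("badge".map { invertedMap[$0] ?? $0 }.reversed())
print(flipped) // ǝƃpɐq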
In a case like the circled letters, it would be nice to define a range where you can transform "A"..."Z" to "πŸ…"..."πŸ…©".
That actually takes more code than I expected but the following does work:
extension String {
    // A few sample ranges to get started
    // NOTE: Make sure each mapping pair has the same number of characters or bad things may happen
    static let circledLetters: [ClosedRange<UnicodeScalar> : ClosedRange<UnicodeScalar>] = ["A"..."Z" : "πŸ…"..."πŸ…©", "a"..."z" : "πŸ…"..."πŸ…©"]
    static let boxedLetters: [ClosedRange<UnicodeScalar> : ClosedRange<UnicodeScalar>] = ["A"..."Z" : "πŸ…°"..."πŸ†‰", "a"..."z" : "πŸ…°"..."πŸ†‰"]
    static let italicLetters: [ClosedRange<UnicodeScalar> : ClosedRange<UnicodeScalar>] = ["A"..."Z" : "𝐴"..."𝑍", "a"..."z" : "π‘Ž"..."𝑧"]

    func transformed(using mapping: [ClosedRange<UnicodeScalar> : ClosedRange<UnicodeScalar>]) -> String {
        let chars: [UnicodeScalar] = self.unicodeScalars.map { ch in
            for transform in mapping {
                // If the character is found in the key range, convert it
                if let offset = transform.key.firstIndex(of: ch) {
                    // Convert the offset within the key range into an Int
                    let dist = transform.key.distance(from: transform.key.startIndex, to: offset)
                    // Build the corresponding index into the value range
                    let newIndex = transform.value.index(transform.value.startIndex, offsetBy: dist)
                    // Get the mapped character
                    let newch = transform.value[newIndex]
                    return newch
                }
            }
            // Not found in any of the mappings so return the original as-is
            return ch
        }

        // Convert the final [UnicodeScalar] into a new String
        var res = ""
        res.unicodeScalars.append(contentsOf: chars)
        return res
    }
}
print("This works".transformed(using: String.circledLetters)) // πŸ…£πŸ…—πŸ…˜πŸ…’ πŸ…¦πŸ…žπŸ…‘πŸ…šπŸ…’
The above String extension also requires the following extension (thanks to this answer):
extension UnicodeScalar: Strideable {
    public func distance(to other: UnicodeScalar) -> Int {
        return Int(other.value) - Int(self.value)
    }

    public func advanced(by n: Int) -> UnicodeScalar {
        let advancedValue = n + Int(self.value)
        guard let advancedScalar = UnicodeScalar(advancedValue) else {
            fatalError("\(String(advancedValue, radix: 16)) does not represent a valid unicode scalar value.")
        }
        return advancedScalar
    }
}

Related

Replace Accent character with basic in a String - Δ… -> a , Δ‡ -> c

I'm removing accented characters from the Polish alphabet when searching through a database, so the user can type text without accents.
I'm using this in my TableView search controller with approximately 15,000 Strings. The code works, but it is very slow; the app freezes for a second with every letter typed.
Does anyone have a more efficient approach?
My Filter for the TableView:
//My old method which didn't convert accent letters and works smoothly
var arr = dataSetArray.filter({$0.lowercased().contains(searchText.lowercased())})
//My new filtering method
var arr = dataSetArray.filter({$0.forSorting().contains(searchText.lowercased())})
My Extension:
extension String {
    func forSorting() -> String {
        let set = [("Δ…", "a"), ("Δ‡", "c"), ("Δ™", "e"), ("Ε‚", "l"), ("Ε„", "n"), ("Γ³", "o"), ("Ε›", "s"), ("ΕΊ", "z"), ("ΕΌ", "z")]
        let ab = self.lowercased()
        let new = ab.folding(options: .diacriticInsensitive, locale: nil)
        let final = new.replaceCharacters(characters: set)
        return final
    }
}
extension String {
    func replaceCharacters(characters: [(String, String)]) -> String {
        var input: String = self
        let count = characters.count
        if count >= 1 {
            for i in 1...count {
                let c = i - 1
                let first = input
                let working = first.replacingOccurrences(of: characters[c].0, with: characters[c].1)
                input = working
            }
        }
        return input
    }
}
Try range(of:options:) with the caseInsensitive and diacriticInsensitive options. localizedStandardRange(of:) applies those options for you:
let arr = dataSetArray.filter{ $0.localizedStandardRange(of: searchText) != nil }
No extensions required.
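If you prefer to spell the options out explicitly, a roughly equivalent filter with range(of:options:) would look like this (same idea, minus the locale awareness):
let arr = dataSetArray.filter { $0.range(of: searchText, options: [.caseInsensitive, .diacriticInsensitive]) != nil }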
You can use localizedStandardContains which returns a Boolean value indicating whether the string contains the given string, taking the current locale into account.
Declaration
func localizedStandardContains<T>(_ string: T) -> Bool where T : StringProtocol
Discussion
This is the most appropriate method for doing user-level string searches, similar to how searches are done generally in the system.
The search is locale-aware, case and diacritic insensitive. The exact
list of search options applied may change over time
extension Collection where Element: StringProtocol {
    public func localizedStandardFilter(_ element: Element) -> [Element] {
        filter { $0.localizedStandardContains(element) }
    }
}
let array = ["cafe","Café Quente","CAFÉ","Coffe"]
let filtered = array.localizedStandardFilter("cafe")
filtered // ["cafe", "Café Quente", "CAFÉ"]
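Applied to the filter from the original question (assuming dataSetArray is a [String]), this becomes a one-liner with no custom extensions:
let arr = dataSetArray.filter { $0.localizedStandardContains(searchText) }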

Issues when attempting to modify an emoji sequence

I have the following function that takes a string of emoji; if a character is an emoji sequence a+b, the function leaves a as-is and changes b to a different emoji:
func changeEmoji(givenString: String) -> String {
    let emojiDictionary: [String: String] = [
        "β›Ή" : "β›Ή",
        "♀️" : "πŸ‘©",
        "🏻" : "πŸ’€",
        "♂️" : "πŸ‘¨",
    ]
    let stringCharacters = Array(givenString.characters)
    var returnedString = String()
    for character in stringCharacters {
        if emojiDictionary[String(character)] == nil {
            return "error"
        } else {
            returnedString = returnedString + emojiDictionary[String(character)]!
        }
    }
    return returnedString
}
I have no problem with
changeEmoji(givenString: "β›Ήβ›ΉπŸ»")
it outputs: "β›Ήβ›ΉπŸ’€"
but:
changeEmoji(givenString: "β›Ήβ›ΉπŸ»β›ΉπŸ»β€β™€οΈ")
outputs "error" while it shouldn't as ♀ Female Sign and Variation Selector-16 is the second key in my emojiDictionary..
Similar issue appears with male sign and variation selector.
Any ideas why is this happening?
The problem is that "β›ΉπŸ»β€β™€οΈ" is made up of 3 Swift Characters (aka extended grapheme clusters):
"β›ΉπŸ»β€" (U+26F9 PERSON WITH BALL)
"πŸ»β€" (U+1F3FB Emoji Modifier Fitzpatrick Type-1-2, U+200D ZERO WIDTH JOINER)
"♀️" (U+2640 FEMALE SIGN, U+FE0F VARIATION SELECTOR-16)
However, your emojiDictionary only contains a "🏻" (U+1F3FB Emoji Modifier Fitzpatrick Type-1-2), which doesn't match the second Character of "β›ΉπŸ»β€β™€οΈ" as it's missing the zero width joiner.
The simplest solution therefore is to just add another key to your dictionary to include the Emoji Modifier Fitzpatrick Type-1-2 character, with a zero width joiner suffix. The clearest way of doing this would be to just suffix it with the unicode escape sequence \u{200D}.
For example:
func changeEmoji(givenString: String) -> String? {
    // I have included the unicode point breakdowns for clarity
    let emojiDictionary: [String: String] = [
        "β›Ή" : "β›Ή",           // 26F9 : 26F9
        "♀️" : "πŸ‘©",          // 2640, fe0f : 1f469
        "🏻" : "πŸ’€",          // 1f3fb : 1f4a4
        "🏻\u{200D}" : "πŸ’€",  // 1f3fb, 200d : 1f4a4
        "♂️" : "πŸ‘¨"           // 2642, fe0f : 1f468
    ]

    // Convert characters of the string to an array of string characters,
    // given that you're just going to use the String(_:) initialiser later.
    let stringCharacters = givenString.characters.map(String.init(_:))

    var returnedString = ""

    for character in stringCharacters {
        guard let replacementCharacter = emojiDictionary[character] else {
            // I would advise making your method return an optional in cases
            // where the string can't be converted. Failure is shown by the
            // return of nil, rather than some string sentinel.
            return nil
        }
        returnedString += replacementCharacter
    }
    return returnedString
}
print(changeEmoji(givenString: "β›Ήβ›ΉπŸ»β›ΉπŸ»β€β™€οΈ")) // Optional("β›Ήβ›ΉπŸ’€β›ΉπŸ’€πŸ‘©")
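As a side note, grapheme clustering rules have changed across Swift and ICU versions, so on a recent toolchain "β›ΉπŸ»β€β™€οΈ" may come back as a single Character rather than the three described above. A quick sketch (Swift 4 or later) for checking how a string decomposes on your toolchain:
for character in "β›ΉπŸ»β€β™€οΈ" {
    // Print each Character followed by the code points of its scalars
    let scalars = character.unicodeScalars.map { "U+" + String($0.value, radix: 16, uppercase: true) }
    print(scalars.joined(separator: " "))
}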

Remove all non-numeric characters from a string in swift

I have the need to parse some unknown data which should just be a numeric value, but may contain whitespace or other non-alphanumeric characters.
Is there a new way of doing this in Swift? All I can find online seems to be the old C way of doing things.
I am looking at stringByTrimmingCharactersInSet - as I am sure my inputs will only have whitespace/special characters at the start or end of the string. Are there any built in character sets I can use for this? Or do I need to create my own?
I was hoping there would be something like stringFromCharactersInSet() which would allow me to specify only valid characters to keep
You can either use trimmingCharacters with the inverted character set to remove characters from the start or the end of the string. In Swift 3 and later:
let result = string.trimmingCharacters(in: CharacterSet(charactersIn: "0123456789.").inverted)
Or, if you want to remove non-numeric characters anywhere in the string (not just the start or end), you can filter the characters, e.g. in Swift 4.2.1:
let result = string.filter("0123456789.".contains)
Or, if you want to remove characters from a CharacterSet from anywhere in the string, use:
let result = String(string.unicodeScalars.filter(CharacterSet.whitespaces.inverted.contains))
Or, if you want to only match valid strings of a certain format (e.g. ####.##), you could use regular expression. For example:
if let range = string.range(of: #"\d+(\.\d*)?"#, options: .regularExpression) {
let result = string[range] // or `String(string[range])` if you need `String`
}
The behavior of these different approaches differs slightly, so it just depends on precisely what you're trying to do. Include the decimal point if you want decimal numbers, or exclude it for just integers. There are lots of ways to accomplish this.
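For instance, with a couple of made-up inputs, trimming only strips from the ends while filtering removes non-matching characters everywhere:
let padded = "  12.3 kg "
padded.trimmingCharacters(in: CharacterSet(charactersIn: "0123456789.").inverted) // "12.3"
padded.filter("0123456789.".contains)                                             // "12.3"
let mixed = "v1.2 beta 3"
mixed.trimmingCharacters(in: CharacterSet(charactersIn: "0123456789.").inverted)  // "1.2 beta 3"
mixed.filter("0123456789.".contains)                                              // "1.23"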
For older, Swift 2 syntax, see previous revision of this answer.
let result = string.stringByReplacingOccurrencesOfString("[^0-9]", withString: "", options: NSStringCompareOptions.RegularExpressionSearch, range:nil).stringByTrimmingCharactersInSet(NSCharacterSet.whitespaceCharacterSet())
Swift 3
let result = string.replacingOccurrences( of:"[^0-9]", with: "", options: .regularExpression)
I prefer this solution, because I like extensions, and it seems a bit cleaner to me. Solution reproduced here:
extension String {
    var digits: String {
        return components(separatedBy: CharacterSet.decimalDigits.inverted)
            .joined()
    }
}
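Usage, with a hypothetical input (note that CharacterSet.decimalDigits covers all Unicode decimal digits, not just ASCII 0-9):
"Phone: 555-1234".digits // "5551234"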
You can filter the UnicodeScalarView of the string using the pattern-matching operator for ranges: pass a ClosedRange of UnicodeScalar from "0" to "9" and initialise a new String with the filtered result:
extension String {
    private static var digits = UnicodeScalar("0")..."9"
    var digits: String {
        return String(unicodeScalars.filter(String.digits.contains))
    }
}
"abc12345".digits // "12345"
edit/update:
Swift 4.2
extension RangeReplaceableCollection where Self: StringProtocol {
    var digits: Self {
        return filter(("0"..."9").contains)
    }
}
or as a mutating method
extension RangeReplaceableCollection where Self: StringProtocol {
    mutating func removeAllNonNumeric() {
        removeAll { !("0"..."9" ~= $0) }
    }
}
Swift 5.2 β€’ Xcode 11.4 or later
In Swift5 we can use a new Character property called isWholeNumber:
extension RangeReplaceableCollection where Self: StringProtocol {
    var digits: Self { filter(\.isWholeNumber) }
}
extension RangeReplaceableCollection where Self: StringProtocol {
    mutating func removeAllNonNumeric() {
        removeAll { !$0.isWholeNumber }
    }
}
To allow a period as well we can extend Character and create a computed property:
extension Character {
    var isDecimalOrPeriod: Bool { "0"..."9" ~= self || self == "." }
}
extension RangeReplaceableCollection where Self: StringProtocol {
    var digitsAndPeriods: Self { filter(\.isDecimalOrPeriod) }
}
Playground testing:
"abc12345".digits // "12345"
var str = "123abc0"
str.removeAllNonNumeric()
print(str) //"1230"
"Testing0123456789.".digitsAndPeriods // "0123456789."
Swift 4
I found a decent way to get only the alphanumeric characters from a string. For instance:
func getAlphaNumericValue() {
    let yourString = "123456789!##$%^&*()AnyThingYouWant"
    let unsafeChars = CharacterSet.alphanumerics.inverted // Remove the .inverted to get the opposite result.
    let cleanChars = yourString.components(separatedBy: unsafeChars).joined(separator: "")
    print(cleanChars) // 123456789AnyThingYouWant
}
A solution using the filter function and rangeOfCharacterFromSet
let string = "sld [f]34é7*˜¡"
let alphaNumericCharacterSet = NSCharacterSet.alphanumericCharacterSet()
let filteredCharacters = string.characters.filter {
    return String($0).rangeOfCharacterFromSet(alphaNumericCharacterSet) != nil
}
let filteredString = String(filteredCharacters) // -> sldf34Γ©7Β΅
To filter for only numeric characters use
let string = "sld [f]34é7*˜¡"
let numericSet = "0123456789"
let filteredCharacters = string.characters.filter {
    return numericSet.containsString(String($0))
}
let filteredString = String(filteredCharacters) // -> 347
or
let numericSet : [Character] = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
let filteredCharacters = string.characters.filter {
    return numericSet.contains($0)
}
let filteredString = String(filteredCharacters) // -> 347
Swift 4
But without extensions, and without componentsSeparatedByCharactersInSet, which doesn't read as well:
let allowedCharSet = NSCharacterSet.letters.union(.whitespaces)
let filteredText = String(sourceText.unicodeScalars.filter(allowedCharSet.contains))
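The same pattern works for the numeric case asked about here; a sketch assuming sourceText is a String:
let digitsOnly = String(sourceText.unicodeScalars.filter(CharacterSet.decimalDigits.contains))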
let string = "+1*(234) fds567#-8/90-"
let onlyNumbers = string.components(separatedBy: CharacterSet.decimalDigits.inverted).joined()
print(onlyNumbers) // "1234567890"
or
extension String {
    func removeNonNumeric() -> String {
        return self.components(separatedBy: CharacterSet.decimalDigits.inverted).joined()
    }
}
let onlyNumbers = "+1*(234) fds567#-8/90-".removeNonNumeric()
print(onlyNumbers)// "1234567890"
Swift 3, filters all except numbers
let myString = "dasdf3453453fsdf23455sf.2234"
let result = String(myString.characters.filter { String($0).rangeOfCharacter(from: CharacterSet(charactersIn: "0123456789")) != nil })
print(result)
Swift 5 (isNumber is one of the Character properties added in Swift 5)
let numericString = string.filter { (char) -> Bool in
    return char.isNumber
}
You can do something like this...
let string = "[,myString1. \"" // string : [,myString1. "
let characterSet = NSCharacterSet(charactersInString: "[,. \"")
let finalString = (string.componentsSeparatedByCharactersInSet(characterSet) as NSArray).componentsJoinedByString("")
print(finalString)
//finalString will be "myString1"
The issue with Rob's first solution is stringByTrimmingCharactersInSet only filters the ends of the string rather than throughout, as stated in Apple's documentation:
Returns a new string made by removing from both ends of the receiver characters contained in a given character set.
Instead use componentsSeparatedByCharactersInSet to first isolate all non-occurrences of the character set into arrays and subsequently join them with an empty string separator:
"$$1234%^56()78*9££".componentsSeparatedByCharactersInSet(NSCharacterSet(charactersInString: "0123456789").invertedSet)).joinWithSeparator("")
Which returns 123456789
Swift 3
extension String {
    var keepNumericsOnly: String {
        return self.components(separatedBy: CharacterSet(charactersIn: "0123456789").inverted).joined(separator: "")
    }
}
Swift 4.0 version
extension String {
    var numbers: String {
        return String(filter { String($0).rangeOfCharacter(from: CharacterSet(charactersIn: "0123456789")) != nil })
    }
}
Swift 4
String.swift
import Foundation
extension String {
    func removeCharacters(from forbiddenChars: CharacterSet) -> String {
        let passed = self.unicodeScalars.filter { !forbiddenChars.contains($0) }
        return String(String.UnicodeScalarView(passed))
    }

    func removeCharacters(from: String) -> String {
        return removeCharacters(from: CharacterSet(charactersIn: from))
    }
}
ViewController.swift
let character = "1Vi234s56a78l9"
let alphaNumericSet = character.removeCharacters(from: CharacterSet.decimalDigits.inverted)
print(alphaNumericSet) // will print: 123456789
let alphaNumericCharacterSet = character.removeCharacters(from: "0123456789")
print("no digits",alphaNumericCharacterSet) // will print: Vishal
Swift 4.2
let digitChars = yourString.components(separatedBy: CharacterSet.decimalDigits.inverted).joined(separator: "")
Swift 3 Version
extension String {
    func trimmingCharactersNot(in charSet: CharacterSet) -> String {
        var s: String = ""
        for unicodeScalar in self.unicodeScalars {
            if charSet.contains(unicodeScalar) {
                s.append(String(unicodeScalar))
            }
        }
        return s
    }
}
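Usage, with a made-up input:
"abc 123-456 def".trimmingCharactersNot(in: .decimalDigits) // "123456"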

Convert String.CharacterView.Index to int [duplicate]

I want to convert the index of a letter contained within a string to an integer value. I attempted to read the header files, but I cannot find the type for Index, although it appears to conform to the ForwardIndexType protocol with methods such as distanceTo.
var letters = "abcdefg"
let index = letters.characters.indexOf("c")!
// ERROR: Cannot invoke initializer for type 'Int' with an argument list of type '(String.CharacterView.Index)'
let intValue = Int(index) // I want the integer value of the index (e.g. 2)
Any help is appreciated.
edit/update:
Xcode 11 β€’ Swift 5.1 or later
extension StringProtocol {
    func distance(of element: Element) -> Int? { firstIndex(of: element)?.distance(in: self) }
    func distance<S: StringProtocol>(of string: S) -> Int? { range(of: string)?.lowerBound.distance(in: self) }
}
extension Collection {
    func distance(to index: Index) -> Int { distance(from: startIndex, to: index) }
}
extension String.Index {
    func distance<S: StringProtocol>(in string: S) -> Int { string.distance(to: self) }
}
Playground testing
let letters = "abcdefg"
let char: Character = "c"
if let distance = letters.distance(of: char) {
print("character \(char) was found at position #\(distance)") // "character c was found at position #2\n"
} else {
print("character \(char) was not found")
}
let string = "cde"
if let distance = letters.distance(of: string) {
print("string \(string) was found at position #\(distance)") // "string cde was found at position #2\n"
} else {
print("string \(string) was not found")
}
Works for Xcode 13 and Swift 5
let myString = "Hello World"
if let i = myString.firstIndex(of: "o") {
let index: Int = myString.distance(from: myString.startIndex, to: i)
print(index) // Prints 4
}
The function func distance(from start: String.Index, to end: String.Index) -> String.IndexDistance returns an IndexDistance which is just a typealias for Int
Swift 4
var str = "abcdefg"
let index = str.index(of: "c")?.encodedOffset // Result: 2
Note: if the String contains the same character multiple times, it will just get the first one (nearest to the left):
var str = "abcdefgc"
let index = str.index(of: "c")?.encodedOffset // Result: 2
encodedOffset has been deprecated since Swift 4.2.
Deprecation message:
encodedOffset has been deprecated as most common usage is incorrect. Use utf16Offset(in:) to achieve the same behavior.
So we can use utf16Offset(in:) like this:
var str = "abcdefgc"
let index = str.index(of: "c")?.utf16Offset(in: str) // Result: 2
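Keep in mind that a UTF-16 offset is not the same thing as a Character count: the two diverge as soon as the string contains characters that need more than one UTF-16 code unit (emoji, for example). A small illustration with a hypothetical string:
let emojiStr = "πŸ‘πŸ‘Žc"
if let i = emojiStr.firstIndex(of: "c") {
    print(i.utf16Offset(in: emojiStr))                         // 4
    print(emojiStr.distance(from: emojiStr.startIndex, to: i)) // 2
}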
When searching for an index like this
⛔️ guard let index = (positions.firstIndex { position <= $0 }) else {
it is treated as Array.Index. You have to give the compiler a clue that you want an integer:
βœ… guard let index: Int = (positions.firstIndex { position <= $0 }) else {
Swift 5
You can convert the string to an array of characters and then use advanced(by:) to get an integer.
let myString = "Hello World"
if let i = Array(myString).firstIndex(of: "o") {
    let index: Int = i.advanced(by: 0) // an Array's index is already an Int
    print(index) // Prints 4
}
To perform string operations based on an index, you cannot use a plain numeric index, because String.Index comes from the string's indices and is not an Int. Even though a String is a collection of characters, you can't read an element with an integer subscript.
This is frustrating.
So, to create a new string from every even-positioned character of a string, see the code below.
let mystr = "abcdefghijklmnopqrstuvwxyz"
let mystrArray = Array(mystr)
let strLength = mystrArray.count
var resultStrArray : [Character] = []
var i = 0
while i < strLength {
if i % 2 == 0 {
resultStrArray.append(mystrArray[i])
}
i += 1
}
let resultString = String(resultStrArray)
print(resultString)
Output : acegikmoqsuwy
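For what it's worth, the same result can be written more compactly by enumerating the characters directly; a sketch reusing mystr from above:
let everyOther = String(mystr.enumerated().filter { $0.offset % 2 == 0 }.map { $0.element })
print(everyOther) // acegikmoqsuwy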
Here is an extension that will let you access the bounds of a substring as Ints instead of String.Index values:
import Foundation
/// This extension is available at
/// https://gist.github.com/zackdotcomputer/9d83f4d48af7127cd0bea427b4d6d61b
extension StringProtocol {
    /// Access the range of the search string as integer indices
    /// in the rendered string.
    /// - NOTE: This is "unsafe" because it may not return what you expect if
    ///         your string contains single symbols formed from multiple scalars.
    /// - Returns: A `CountableRange<Int>` that will align with the Swift String.Index
    ///            from the result of the standard function range(of:).
    func countableRange<SearchType: StringProtocol>(
        of search: SearchType,
        options: String.CompareOptions = [],
        range: Range<String.Index>? = nil,
        locale: Locale? = nil
    ) -> CountableRange<Int>? {
        guard let trueRange = self.range(of: search, options: options, range: range, locale: locale) else {
            return nil
        }

        let intStart = self.distance(from: startIndex, to: trueRange.lowerBound)
        let intEnd = self.distance(from: trueRange.lowerBound, to: trueRange.upperBound) + intStart
        return Range(uncheckedBounds: (lower: intStart, upper: intEnd))
    }
}
Just be aware that this can lead to weirdness, which is why Apple has chosen to make it hard. (Though that's a debatable design decision - hiding a dangerous thing by just making it hard...)
You can read more in the String documentation from Apple, but the tldr is that it stems from the fact that these "indices" are actually implementation-specific. They represent the indices into the string after it has been rendered by the OS, and so can shift from OS-to-OS depending on what version of the Unicode spec is being used. This means that accessing values by index is no longer a constant-time operation, because the UTF spec has to be run over the data to determine the right place in the string. These indices will also not line up with the values generated by NSString, if you bridge to it, or with the indices into the underlying UTF scalars. Caveat developer.
If you get an "index is out of bounds" error, you can try this approach. It works in Swift 5.
extension String {
    func countIndex(_ char: Character) -> Int {
        var count = 0
        var temp = self
        for c in self {
            if c == char {
                //temp.remove(at: temp.index(temp.startIndex, offsetBy: count))
                //temp.insert(".", at: temp.index(temp.startIndex, offsetBy: count))
                return count
            }
            count += 1
        }
        return -1
    }
}
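Usage (the method returns -1 when the character isn't found):
"hello world".countIndex("o") // 4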

attribute multiple values to a single key ( dictionary) in Swift

I'd like to ask how I can assign several values to the same key, and then look the key back up from any of those values.
The exercise comes from https://www.weheartswift.com/dictionaries/
It's just an adaptation of the code there.
I'm first creating a dictionary like this, with multiple values for one key:
var code = [
"X" : "a","b",
"Y" : "c","d",
"Z" : "e","f",
...
]
Then, when I enter a word containing a, b, c, d, e, or f, I'd like those letters changed to X, Y, or Z according to the dictionary.
var encodedMessage = "abcdef"
var decoder: [String: [String]] = [:]

// reverse the code
for (key, value) in code {
    decoder[value] = key
}
// an error occurs here, what can I do to fix it?

var decodedMessage = ""
for char in encodedMessage {
    var character = "\(char)"
    if let encodedChar = decoder[character] {
        // letter
        decodedMessage += encodedChar
    } else {
        // space
        decodedMessage += character
    }
}
And since I'd prefer to decode the message without splitting it into the "letter" and "space" cases, is there a better, easier way, so there is no "space" branch?
println(decodedMessage)
I'd like decodedMessage to come out as XXYYZZ.
Here:
let encodedMessage = "abcdef"
var code = ["X": ["a", "b"], "Y": ["c", "d"], "Z": ["e", "f"]]
var decoder: [String:String] = [:]
// reverse the code
for key in code.keys {
for newCode in code[key]! {
decoder[newCode] = key
}
}
var decodedMessage = ""
for char in encodedMessage.characters {
var character = "\(char)"
if let encodedChar = decoder[character] {
// letter
decodedMessage += encodedChar
} else {
// space
decodedMessage += character
}
}
You just need to store them as an array.
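As an aside, the reversed lookup table can also be built in a single expression with Dictionary(uniqueKeysWithValues:) (Swift 4 or later), assuming each letter appears under exactly one key:
let decoder = Dictionary(uniqueKeysWithValues: code.flatMap { key, values in
    // one (letter, key) pair per value in the array
    values.map { ($0, key) }
})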