Reduce float precision using RegExp in Swift

I'm trying to reduce the precision of the floats that are embedded in a string.
The example is [93829.38, 1415.45467897]
I'd like to cut the float numbers down to a maximum precision of 2 (I can cut the string directly; there's no need to round the numbers).
The example is [93829.38, 1415.45]
With this regexp on Rubular I can match the float numbers in the string:
(\d+\.\d)
But I can't figure out how to port this regexp to Swift, or how to replace the float strings with the shortened ones...

You may use
let str = "The example is [93829.38, 1415.45467897, 1.2, 134.34]"
let pattern = "(\\d+\\.\\d{2})\\d+"
let result = str.replacingOccurrences(of: pattern, with: "$1", options: [.regularExpression])
print(result) // => The example is [93829.38, 1415.45, 1.2, 134.34]
A pattern like (\d+\.\d{2})\d+ will match and capture into Group 1 one or more digits, a dot, and two digits, and will then match one or more further digits. The replacement is $1, the backreference to the value stored in Group 1, thus truncating the digits matched by the last \d+.
If there are any edge cases, they can usually be handled by means of word boundaries (\b) or lookarounds.
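For instance, a minimal sketch with word boundaries added (the sample string and the \b anchors are assumptions about what such an edge case might look like):
let str = "Build v1.2345 produced [93829.38, 1415.45467897]"
let pattern = "\\b(\\d+\\.\\d{2})\\d+\\b"
let result = str.replacingOccurrences(of: pattern, with: "$1", options: [.regularExpression])
print(result) // => Build v1.2345 produced [93829.38, 1415.45]
Without the boundaries, the original pattern would also rewrite v1.2345 to v1.23; the leading \b prevents a match starting at the 1 in v1.2345.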


How to align numbers vertically? Swift, UIKit

I want to align numbers by digits in table rows. For example:
___123.4
-5 678.9
That is, so that tens are under tens, units under units, and the fractional part under the fractional part.
To convert a number to a String, I use the numberStringFormat function:
func numberStringFormat(_ number: Double) -> String {
    let numberFormatter = NumberFormatter()
    numberFormatter.numberStyle = .decimal
    numberFormatter.maximumFractionDigits = 1
    numberFormatter.groupingSeparator = " "
    let result = numberFormatter.string(from: NSNumber(value: number))
    return result ?? ""
}
This function sets the decimal style, limits the number of fraction digits, and groups the digits before the decimal point.
But if the number is an integer with no fractional part, or the digit after the decimal point rounds to 0, then the formatted string looks like, for example, 123.
Then the numbers in the table rows are shifted, and it turns out like this:
----123
5 678.9
That is, the fractional number on the bottom row is under the integer number on the top row.
I think I can solve this by forcing the number to always show a 0 after converting to a String.
I tried googling but couldn't find an answer to this question.
Maybe someone has already encountered such situations and can suggest a possible solution, or at least point me in the right direction?
Or perhaps there is some other solution that doesn't force a 0 to be shown, but simply aligns the characters vertically?
Any ideas are welcome. I really appreciate your help.
Update: A great solution from HangarRash:
minimumFractionDigits = 1
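With that in place, a minimal sketch of the full formatter (the function is the one from the question; only the minimumFractionDigits line is new):
import Foundation

func numberStringFormat(_ number: Double) -> String {
    let numberFormatter = NumberFormatter()
    numberFormatter.numberStyle = .decimal
    numberFormatter.minimumFractionDigits = 1 // always show one fraction digit, so 123 renders as "123.0"
    numberFormatter.maximumFractionDigits = 1
    numberFormatter.groupingSeparator = " "
    return numberFormatter.string(from: NSNumber(value: number)) ?? ""
}

print(numberStringFormat(123))    // 123.0
print(numberStringFormat(5678.9)) // 5 678.9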

Swift Dictionary is slow?

Situation: I was solving LeetCode 3, "Longest Substring Without Repeating Characters". When I use a Dictionary in Swift, the result is Time Limit Exceeded on the last test case, but the same approach in C++ actually passes with a fine runtime. I thought Swift's Dictionary was the same thing as C++'s unordered_map.
Some research: I found some resources that said to use NSDictionary over the regular one, but it requires reference types instead of Int or Character, etc.
Expected result: fast performance when accessing a Dictionary in Swift.
Question: I know there are better answers for the LeetCode problem, but the main goal here is: is there an efficient way to access and write to a Dictionary, or something we can use as a substitute?
func lengthOfLongestSubstring(_ s: String) -> Int {
    var window: [Character: Int] = [:] // swift dictionary is kind of slow?
    let array = Array(s)
    var res = 0
    var left = 0, right = 0
    while right < s.count {
        let rightChar = array[right]
        right += 1
        window[rightChar, default: 0] += 1
        while window[rightChar]! > 1 {
            let leftChar = array[left]
            window[leftChar, default: 0] -= 1
            left += 1
        }
        res = max(res, right - left)
    }
    return res
}
Because the complexity of count on a String is O(n), you should save the count in a variable. You can read about this in the chapter "Strings and Characters" in the Swift book:
Extended grapheme clusters can be composed of multiple Unicode scalars. This means that different characters—and different representations of the same character—can require different amounts of memory to store. Because of this, characters in Swift don’t each take up the same amount of memory within a string’s representation. As a result, the number of characters in a string can’t be calculated without iterating through the string to determine its extended grapheme cluster boundaries. If you are working with particularly long string values, be aware that the count property must iterate over the Unicode scalars in the entire string in order to determine the characters for that string.
The count of the characters returned by the count property isn’t always the same as the length property of an NSString that contains the same characters. The length of an NSString is based on the number of 16-bit code units within the string’s UTF-16 representation and not the number of Unicode extended grapheme clusters within the string.
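Putting it together, a minimal sketch of the fix (same algorithm as in the question; the only change is comparing right against the pre-computed array count instead of calling s.count on every iteration):
func lengthOfLongestSubstring(_ s: String) -> Int {
    var window: [Character: Int] = [:]
    let array = Array(s)
    let count = array.count // Array.count is O(1); String.count walks the whole string
    var res = 0
    var left = 0, right = 0
    while right < count {
        let rightChar = array[right]
        right += 1
        window[rightChar, default: 0] += 1
        while window[rightChar]! > 1 {
            window[array[left], default: 0] -= 1
            left += 1
        }
        res = max(res, right - left)
    }
    return res
}

print(lengthOfLongestSubstring("abcabcbb")) // 3 ("abc")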

UTF8 String length and indices in Go vs Swift

I have apps in Go and Swift which process strings, such as finding substrings and their indices. At first it worked nicely even with multi-byte characters (e.g. emojis), thanks to Go's utf8.RuneCountInString() and Swift's native String.
But there are some UTF-8 characters that break the string lengths and substring indices, e.g. the string "Lorem 😂😃✌️🤔 ipsum":
Go's utf8.RuneCountInString("Lorem 😂😃✌️🤔 ipsum") returns 17 and the start index of ipsum is 12.
Swift's "Lorem 😂😃✌️🤔 ipsum".count returns 16 and the start index of ipsum is 11.
Using Swift String's utf8 or utf16 views, or casting to NSString, also gives different lengths and indices. There are also other emojis composed of multiple emojis, like 👨‍👩‍👧‍👦, which give even funnier numbers.
This is with Go 1.8 and Swift 4.1.
Is there any way to get the same string lengths and substring indices in both Go and Swift?
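For reference, a quick playground sketch of how the different Swift string views count the sample string (counts assume the exact string above; ✌️ alone accounts for one character, two scalars, two UTF-16 units, and six UTF-8 bytes):
let s = "Lorem 😂😃✌️🤔 ipsum"
print(s.count)                // 16 characters (extended grapheme clusters)
print(s.unicodeScalars.count) // 17 code points (matches Go's runes)
print(s.utf16.count)          // 20 UTF-16 code units (same as NSString length)
print(s.utf8.count)           // 30 UTF-8 bytes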
EDIT
I created a Swift String extension based on @MartinR's great answer:
extension String {
    func runesRangeToNSRange(from: Int, to: Int) -> NSRange {
        let length = to - from
        let start = unicodeScalars.index(unicodeScalars.startIndex, offsetBy: from)
        let end = unicodeScalars.index(start, offsetBy: length)
        let range = start..<end
        return NSRange(range, in: self)
    }
}
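For example, a hypothetical usage sketch (the offsets 12 and 17 for "ipsum" match the Go rune indices mentioned above; Foundation is needed for NSRange/NSString):
import Foundation

let s = "Lorem 😂😃✌️🤔 ipsum"
let nsRange = s.runesRangeToNSRange(from: 12, to: 17)
print((s as NSString).substring(with: nsRange)) // ipsum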
In Swift a Character is an “extended grapheme cluster,” and each of "😂", "😃", "✌️", "🤔", "👨‍👩‍👧‍👦" counts as a single character.
I have no experience with Go, but as I understand it from Strings, bytes, runes and characters in Go,
a “rune” is a Unicode code point, which essentially corresponds to a UnicodeScalar in Swift.
In your example, the difference comes from "✌️" which
counts as a single Swift character, but is built from two Unicode scalars:
print("✌️".count) // 1
print("✌️".unicodeScalars.count) // 2
Here is an example of how you can compute the length and offsets in terms of Unicode scalars:
let s = "Lorem 😂😃✌️🤔 ipsum"
print(s.unicodeScalars.count) // 17
if let idx = s.range(of: "ipsum") {
    print(s.unicodeScalars.distance(from: s.startIndex, to: idx.lowerBound)) // 12
}
As you can see, this gives the same numbers as in your example from Go.
A rune in Go identifies a specific Unicode code point; that does not necessarily mean it maps 1:1 to visually distinct characters. Some characters may be made up of multiple runes/code points, therefore counting runes may not give you what you'd expect from a visual inspection of the string. I don't know what "some text".count actually counts in Swift, so I can't offer any comparison there.

Why two flags only form 1 character? [duplicate]

let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪"
let str2 = "🇩🇪.🇩🇪.🇩🇪.🇩🇪.🇩🇪."
println("\(countElements(str1)), \(countElements(str2))")
Result: 1, 10
But shouldn't str1 have 5 elements?
The bug seems to occur only when I use flag emoji.
Update for Swift 4 (Xcode 9)
As of Swift 4 (tested with Xcode 9 beta) grapheme clusters break after every second regional indicator symbol, as mandated by the Unicode 9
standard:
let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪"
print(str1.count) // 5
print(Array(str1)) // ["🇩🇪", "🇩🇪", "🇩🇪", "🇩🇪", "🇩🇪"]
Also String is a collection of its characters (again), so one can
obtain the character count with str1.count.
(Old answer for Swift 3 and older:)
From "3 Grapheme Cluster Boundaries"
in the "Standard Annex #29 UNICODE TEXT SEGMENTATION":
(emphasis added):
A legacy grapheme cluster is defined as a base (such as A or カ)
followed by zero or more continuing characters. One way to think of
this is as a sequence of characters that form a “stack”.
The base can be single characters, or be any sequence of Hangul Jamo
characters that form a Hangul Syllable, as defined by D133 in The
Unicode Standard, or be any sequence of Regional_Indicator (RI) characters. The RI characters are used in pairs to denote Emoji
national flag symbols corresponding to ISO country codes. Sequences of
more than two RI characters should be separated by other characters,
such as U+200B ZWSP.
(Thanks to @rintaro for the link.)
A Swift Character represents an extended grapheme cluster, so it is (according
to this reference) correct that any sequence of regional indicator symbols
is counted as a single character.
You can separate the "flags" by a ZERO WIDTH NON-JOINER:
let str1 = "🇩🇪\u{200C}🇩🇪"
print(str1.characters.count) // 2
or insert a ZERO WIDTH SPACE:
let str2 = "🇩🇪\u{200B}🇩🇪"
print(str2.characters.count) // 3
This also solves possible ambiguities, e.g. should "🇫​🇷​🇺​🇸" be "🇫​🇷🇺​🇸" or "🇫🇷​🇺🇸"?
See also How to know if two emojis will be displayed as one emoji? about a possible method
to count the number of "composed characters" in a Swift string,
which would return 5 for your let str1 = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪".
Here's how I solved that problem, for Swift 3:
let str = "🇩🇪🇩🇪🇩🇪🇩🇪🇩🇪" // or whatever the string of emojis is
let range = str.startIndex..<str.endIndex
var length = 0
str.enumerateSubstrings(in: range, options: .byComposedCharacterSequences) { (substring, substringRange, enclosingRange, stop) in
    length += 1
}
print("Character Count: \(length)")
print("Character Count: \(length)")
This fixes all the problems with character count and emojis, and is the simplest method I have found.

Dot notation with p for hexadecimal numeric literals in Swift

I'm working through the first basic playground in https://github.com/nettlep/learn-swift using Xcode.
What exactly is happening with this expression?
0xC.3p0 == 12.1875
I've learned about hexadecimal literals and the special "p" notation that indicates a power of 2.
0xF == 15
0xFp0 == 15 // 15 * 2^0
If I try 0xC.3 I get the error: Hexadecimal floating point literal must end with an exponent.
I found this nice overview of numeric literals and another deep explanation, but I didn't see something that explains what .3p0 does.
I've forked the code and upgraded this lesson to Xcode 7 / Swift 2 -- here's the specific line.
This is Hexadecimal exponential notation.
By convention, the letter P (or p, for "power") represents times two
raised to the power of ... The number after the P is decimal and
represents the binary exponent.
...
Example: 1.3DEp42 represents hex(1.3DE) × dec(2^42).
For your example, we get:
0xC.3p0 represents 0xC.3 * 2^0 = 0xC.3 * 1 = hex(C.3) = 12.1875
where hex(C.3) = dec(12 + 3/16) = dec(12.1875)
As an example, you can try 0xC.3p1 (equal to hex(C.3) * dec(2^1)), which yields double the value, i.e., 24.375.
You can also study the binary exponent growth in a playground for the hex value 1:
// ...
print(0x1p-3) // 1/8 (0.125)
print(0x1p-2) // 1/4 (0.25)
print(0x1p-1) // 1/2 (0.5)
print(0x1p1) // 2.0
print(0x1p2) // 4.0
print(0x1p3) // 8.0
// ...
Finally, this is also explained in Apple's Language Reference, under Lexical Structure: Floating-Point Literals:
Hexadecimal floating-point literals consist of a 0x prefix, followed
by an optional hexadecimal fraction, followed by a hexadecimal
exponent. The hexadecimal fraction consists of a decimal point
followed by a sequence of hexadecimal digits. The exponent consists
of an upper- or lowercase p prefix followed by a sequence of decimal
digits that indicates what power of 2 the value preceding the p is
multiplied by. For example, 0xFp2 represents 15 x 2^2, which
evaluates to 60. Similarly, 0xFp-2 represents 15 x 2^-2, which
evaluates to 3.75.
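As a quick sanity check in a playground (the values follow directly from the definitions above):
print(0xC.3p0) // 12.1875 = (12 + 3/16) * 2^0
print(0xC.3p1) // 24.375  = (12 + 3/16) * 2^1
print(0xFp2)   // 60.0    = 15 * 2^2
print(0xFp-2)  // 3.75    = 15 * 2^-2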