Swift 5: Index of a Character in String

Before Swift 5, I had this extension working:
fileprivate extension String {
    func indexOf(char: Character) -> Int? {
        return firstIndex(of: char)?.encodedOffset
    }
}
Now, I get a deprecated message:
'encodedOffset' is deprecated: encodedOffset has been deprecated as most common usage is incorrect. Use `utf16Offset(in:)` to achieve the same behavior.
Is there a simpler solution to this instead of using utf16Offset(in:)?
I just need the index of the character position passed back as an Int.

After some time I have to admit that my original answer was incorrect.
Swift has two methods: firstIndex(of:) and lastIndex(of:).
On an Array, both return an Int? representing the index of the first/last element equal to the passed element (if there is one, otherwise nil).
So you should avoid your custom method to get the index, because there could be two equal elements and you wouldn't know which index you need. Think about your usage and decide which index is more suitable for you: first or last. 🙏
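For example, with a duplicate element the two methods return different indices (a minimal sketch, the values are just illustrative):
let letters: [Character] = ["a", "b", "a"]
letters.firstIndex(of: "a") // Optional(0)
letters.lastIndex(of: "a")  // Optional(2) – same element value, different position
letters.firstIndex(of: "z") // nil – no match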
Original answer:
And what is wrong with utf16Offset(in:)? This is the way to go with Swift 5:
fileprivate extension String {
    func indexOf(char: Character) -> Int? {
        return firstIndex(of: char)?.utf16Offset(in: self)
    }
}
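Note that utf16Offset(in:) returns an offset in UTF-16 code units, which is not necessarily the Character position once emoji or other multi-unit characters are involved. A small sketch of the difference (the string is just an illustration); if you really want the Character offset, distance(from:to:) may be what you need:
let s = "👍a"
s.firstIndex(of: "a").map { $0.utf16Offset(in: s) }                  // 2 – 👍 occupies two UTF-16 code units
s.firstIndex(of: "a").map { s.distance(from: s.startIndex, to: $0) } // 1 – offset in Characters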

Related

Sequence extension that works on Arrays, Sets and Dictionaries in Swift

I'm learning Swift with Paul Hudson's 100 days of Swift. In one of his extension lessons, I have found a notion that more advanced developers could write a Sequence extension that would service Arrays, Sets and Dictionaries:
https://www.hackingwithswift.com/quick-start/understanding-swift/when-are-protocol-extensions-useful-in-swift
I have given this a shot but:
I don't know how to create a variable that could change its type (wonder if that's even possible at all)
I don't know how to create a sequence that would service dictionaries too, since their syntax for allSatisfy is a bit different
Would you be so kind as to give me a hand here? :)
The code:
extension Sequence {
    var isAllEven: Bool {
        numbers.allSatisfy { $0.isMultiple(of: 2) }
    }
}
let numbers = Set([4, 8, 15, 16])
print(numbers.isAllEven)
I can change numbers to be either an Array or a Set, but as far as I understood what Paul said, it should be possible to create one extension that services all three in a single piece of code, without having to change the variable's content.
As isMultiple(of:) belongs to all integer types, a generic version must be constrained to BinaryInteger:
extension Sequence where Element: BinaryInteger {
    var isAllEven: Bool {
        allSatisfy { $0.isMultiple(of: 2) }
    }
}
But this cannot cover Dictionary, because although Dictionary conforms to Sequence the Element type is different.
You could write a second extension of Sequence which matches the Dictionary tuple type
extension Sequence where Element == (key: String, value: Int) {
    var isAllEven: Bool {
        allSatisfy { $0.value.isMultiple(of: 2) }
    }
}
but this considers only String keys and Int values.
A more generic way is to extend Dictionary directly:
extension Dictionary where Value: BinaryInteger {
    var isAllEven: Bool {
        allSatisfy { $0.value.isMultiple(of: 2) }
    }
}
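With the first (Sequence) and last (Dictionary) extensions above in scope, all three collection types get isAllEven (a quick sketch, the literals are just examples):
let array = [4, 8, 15, 16]
let set: Set = [4, 8, 15, 16]
let dict = ["four": 4, "eight": 8]

array.isAllEven   // true – via the Sequence extension
set.isAllEven     // true – via the Sequence extension
dict.isAllEven    // true – via the Dictionary extension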

Set's contains method returns different values at different times

I was thinking about how Swift ensures uniqueness for Set, because I had turned one of my objects from Equatable to Hashable for free, so I came up with this simple playground:
struct SimpleStruct: Hashable {
    let string: String
    let number: Int

    static func == (lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        let areEqual = lhs.string == rhs.string
        print(lhs, rhs, areEqual)
        return areEqual
    }
}
var set = Set<SimpleStruct>()
let first = SimpleStruct(string: "a", number: 2)
set.insert(first)
So my first question was:
Will the static func == method be called every time I insert a new object into the set?
My question comes from this thought:
For an Equatable object, the only way to decide whether two objects are the same is to check the result of static func ==.
For a Hashable object, a faster way is to compare hashValues... but, as in my case, the default implementation will use both string and number, in contrast with the == logic.
So, in order to test how Set behaves, I just added a print statement.
I found that sometimes I got the print output and sometimes I didn't, as if the hashValue alone isn't always enough to make the decision... so the method isn't called every time.
Weird...
So I tried adding two objects that are equal, wondering what the result of set.contains would be:
let second = SimpleStruct(string: "a", number: 3)
print(first == second) // returns true
set.contains(second)
And wonder of wonders, running the playground a couple of times, I got different results, and this might cause unpredictable behaviour...
Adding
var hashValue: Int {
    return string.hashValue
}
it gets rid of the unexpected results, but my doubt is:
Why, without the custom hashValue implementation, does == sometimes get called and sometimes not?
Should Apple prevent this kind of unexpected behaviour?
The synthesized implementation of the Hashable requirement uses all stored properties of a struct, in your case string and number. Your implementation of == is only based on the string:
let first = SimpleStruct(string: "a", number: 2)
let second = SimpleStruct(string: "a", number: 3)
print(first == second) // true
print(first.hashValue == second.hashValue) // false
This is a violation of a requirement of the Hashable protocol:
Two instances that are equal must feed the same values to Hasher in hash(into:), in the same order.
and causes the undefined behavior. (And since hash values are randomized since Swift 4.2, the behavior can be different in each program run.)
What probably happens in your test is that the hash value of second is used to determine the "bucket" of the set in which the value would be stored. That may or may not be the same bucket in which first is stored. But that is an implementation detail: undefined behavior is undefined behavior, and it can cause unexpected results or even runtime errors.
Implementing
var hashValue: Int {
    return string.hashValue
}
or alternatively (starting with Swift 4.2)
func hash(into hasher: inout Hasher) {
    hasher.combine(string)
}
fixes the rule violation, and therefore makes your code behave as expected.
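Putting it together, a version of the struct where == and hash(into:) agree (a sketch based on the code above):
struct SimpleStruct: Hashable {
    let string: String
    let number: Int

    static func == (lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        return lhs.string == rhs.string
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(string) // feed only the property that == compares
    }
}

var set = Set<SimpleStruct>()
set.insert(SimpleStruct(string: "a", number: 2))
set.contains(SimpleStruct(string: "a", number: 3)) // true, on every run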

Can a swift computed property getter mutate a structure?

I have a struct wrapping a var data: [T] that also supplies some statistics on the internal Array. One statistic is the max value, which can be expensive to compute because it requires searching every element, so I'd like to cache the max value and only recalculate it when I need to:
private mutating func getMax() -> T? {
    if let m = maxValue {
        return m
    }
    else if data.count > 0 {
        maxValue = data.maxElement()
        return maxValue
    }
    else {
        return nil
    }
}
That seems to work fine as a method, but I can't figure out how to do the same thing as a computed property.
var max: T? { return getMax() }
leads to a complaint that the accessor needs to be marked "mutating" because getMax() is mutating (actually I'd put the getMax code into the property accessor, but it's easier to not rewrite the code here).
Xcode suggests I rewrite the code thusly:
var max:T? mutating {return getMax()}
which then flags another problem; Xcode suggests putting a semicolon before mutating, which leads to a suggestion to put another semicolon after mutating, and then yet another semicolon after that, and it's clear the compiler isn't even trying to help but just has a semicolon fetish.
Is there a way to write a computed property that permits caching values or am I stuck writing this as a method?
The correct syntax, despite the compiler's suggestions, would be:
var max: T? {
    mutating get { return getMax() }
}
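For reference, a complete sketch of the caching pattern in current Swift (data.max() stands in for the older maxElement(); the Stats name and the append method are made up for the example):
struct Stats<T: Comparable> {
    private(set) var data: [T]
    private var cachedMax: T?   // nil means "not computed yet"

    init(data: [T]) {
        self.data = data
    }

    mutating func append(_ value: T) {
        data.append(value)
        cachedMax = nil         // invalidate the cache on mutation
    }

    var max: T? {
        mutating get {
            if cachedMax == nil {
                cachedMax = data.max()  // the expensive scan, done at most once
            }
            return cachedMax
        }
    }
}

var stats = Stats(data: [3, 1, 4, 1, 5])
stats.max          // Optional(5), computed and cached
stats.append(9)
stats.max          // Optional(9), recomputed after the cache was invalidated
Note that because the getter is mutating, max can only be read on a var, not on a let constant.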

Are `Any` and `All` functions with predicates already built-in or do I need to roll my own? [duplicate]

Swift provides map, filter, reduce, ... for Arrays, but I am not finding some (or any) or every (or all), whose counterparts in JavaScript are Array.some and Array.every. Am I not looking hard enough, or do they not exist?
A related question here is looking for the Swift all method, but JS programmers will probably not find it (there is no all in JS and some or any is not mentioned).
Update:
Use allSatisfy (all) and contains(where:) (some).
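For example, with a hypothetical Item type (just for illustration), in Swift 4.2 and later:
struct Item { let completed: Bool }
let items = [Item(completed: true), Item(completed: false)]

items.allSatisfy { $0.completed }   // "every"/"all": false
items.contains { $0.completed }     // "some"/"any":  true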
Old answer:
Just use contains.
// check if ALL items are completed
// so it does not contain a single item which is not completed
!items.contains { !$0.completed }
// check if SOME item is completed
// so test if there is at least one item which is completed
items.contains { $0.completed }
To replace any/some, you can use SequenceType's contains method (there's one version which takes a boolean predicate; the other one takes an element and works only for sequences of Equatable elements).
There's no built-in function like all/every, but you can easily write your own using an extension:
extension SequenceType
{
    /// Returns `true` iff *every* element in `self` satisfies `predicate`.
    func all(@noescape predicate: Generator.Element throws -> Bool) rethrows -> Bool
    {
        for element in self {
            if try !predicate(element) {
                return false
            }
        }
        return true
    }
}
Or, thanks to De Morgan, you can just negate the result of contains: !array.contains{ !predicate($0) }.
By the way, these work on any SequenceType, not just Array. That includes Set and even Dictionary (whose elements are (key, value) tuples).
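For instance, using the modern spellings (contains(where:) and allSatisfy) on a Dictionary and a Set, with illustrative literals:
let scores = ["alice": 90, "bob": 72]
scores.contains { $0.value < 75 }     // "some" score below 75: true
scores.allSatisfy { $0.value >= 60 }  // "all" scores at least 60: true

let tags: Set = ["swift", "ios"]
tags.contains { $0.hasPrefix("sw") }  // true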

How to handle initial nil value for reduce functions

I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and using it:
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
    if let definiteMaxSofar = maxSofar {
        return max(definiteMaxSofar, newElement)
    }
    return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type '(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion when reading the code later, I explicitly label the accumulator as a and the element coming in from the sequence as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This reads more clearly than $0 and $1 when the accumulator or the single element is used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
This is my second go at it after writing some wrong code. This will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use:
toDoItems.reduce(nil as String?) { $0 != nil && count($0!) > count($1) ? $0 : $1 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil, and you also need to get the optional type correct when using the optional string as $0.
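In current Swift, where count is a property rather than a free function, the same idea can be written with max(by:) or with an explicit optional accumulator (a sketch; toDoItems stands in for the array from the question):
let toDoItems = ["buy milk", "walk the dog", "tidy up"]

// Longest string, or nil for an empty array:
let longest = toDoItems.max { $0.count < $1.count }   // Optional("walk the dog")

// The reduce version with an optional seed:
let longest2 = toDoItems.reduce(nil as String?) { acc, next in
    guard let acc = acc, acc.count >= next.count else { return next }
    return acc
}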