Why does String.contains behave differently when I import Foundation? - swift

I just started learning Swift and am really confused by the following behaviour.
This is what I get when I run String.contains without Foundation:
"".contains("") // true
"a".contains("") // true
"a".contains("a") // true
"" == "" // true
And this is what I get with Foundation:
import Foundation
"".contains("") // false
"a".contains("") // false
"a".contains("a") // true
"" == "" // true
Why are the results different depending on whether I import Foundation? Are there other differences like this, and is there an exhaustive list somewhere? I didn't find anything in the Foundation documentation, but this seems important to document. I'm only aware of this other example.
Also: How does this happen and is it normal? I understand that Swift has stuff like extensions that change the behaviour of every instance of something once they're included, but surely that should only add behaviour, not change existing behaviour. And if existing behaviour is changed, shouldn't the language indicate this somehow, like make me use a different type if I want the different behaviour?

Basically this is the same as the question I answer here.
Foundation is not part of Swift, it's part of Cocoa, the older Objective-C library that preceded Swift by many, many years. Foundation's version of a string is NSString. But Swift String is "bridged" to NSString, so as soon as you import Foundation, a bunch of NSString methods spring to life as if they were part of Swift String even though they are not. In your case, you actually end up calling a completely different method which, as you've discovered, gives different results.
A good way to see this is to command-click on the term contains in your code (or even better, option-click it and then click Open in Developer Documentation):
If you have not imported Foundation (or UIKit), you jump to Swift String's contains.
If you have imported Foundation, you jump to Foundation's contains.
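If you need the standard library's semantics (where the empty string is always "contained") even with Foundation imported, one workaround is to special-case the empty string yourself. A minimal sketch; containsSubstring is a made-up helper name:
import Foundation
// Hypothetical helper restoring the standard library's
// "empty string is always contained" behaviour:
func containsSubstring(_ haystack: String, _ needle: String) -> Bool {
    return needle.isEmpty || haystack.contains(needle)
}
containsSubstring("a", "") // true
"a".contains("")           // false (Foundation's overload)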
As for this part:
shouldn't the language indicate this somehow
I'm afraid Stack Overflow is not very good on "should" questions. I would say, yes, this is a little maddening, but it's part of the price we pay for the easy and seamless integration of Swift into the world of Cocoa programming. You could argue that the Swift people should not have named their method contains, but that train has left the station, and besides, it's a name that perfectly indicates what the method does.
Another thing to keep in mind is that you would probably never really use Swift except in the presence of Foundation (perhaps because you're in the presence of UIKit or SwiftUI or AppKit) so in practical terms the issue wouldn't arise. You've hit an unusual edge-case, which is commendable but, ex hypothesi, unusual.
To make things even more complicated, I think the Swift library method you encountered may have just been introduced as part of Xcode 14 and Swift 5.7. See https://developer.apple.com/videos/play/wwdc2022/110354/?time=1415 for the WWDC '22 discussion of new String features. In earlier versions of Xcode, the expression "a".contains("") would not even have compiled in the absence of Foundation, and so the problem would never have arisen!

Related

Switch behavior when new enum values are added

Note: this is no longer relevant. Recent versions of Swift have multiple features that address enum binary compatibility in various ways, such as @unknown default, frozen enums, etc.
Various enums in HealthKit tend to get new values added with each release of iOS. For example, HKWorkoutActivityType has had new values added in each iOS version since its introduction.
Say I am mapping this enum to a string value using a Swift switch statement:
import HealthKit

extension HKWorkoutActivityType {
    var displayName: String {
        switch self {
        case .americanFootball: return "American Football"
        // ...an exhaustive switch statement, with a line for every enum case,
        // including values added in iOS 10 and 11...
        case .taiChi: return "Tai Chi"
        }
    }
}
let event: HKWorkoutEvent = ...
print("Activity type is: \(event.type.displayName)")
This switch statement, compiled against the iOS 11 SDK, works fine and is backward compatible with older iOS versions. Note that at the time of compilation, the switch statement is exhaustive, so there is no default case.
But if new HKWorkoutActivityType values are added in iOS 12, and I don't recompile this code, how will the displayName getter behave for new enum values? Should I expect a crash? Undefined behavior? Does it depend on the type of enum (for example, here it's an Objective-C NS_ENUM, but will Swift enums behave differently)? etc.
FWIW, this is partially what this Swift Evolution proposal is addressing.
Hopefully they will decide on a solution that satisfies issues like this nicely too!
Long story short, you may be able to avoid this issue by adding a default case (even though the compiler will yell at you) or by using version checks. However, this problem likely falls under "undefined behaviour" currently.
The long story:
The current version of Swift does not have ABI stability, so a compiled Swift application is not guaranteed to (and almost definitely won't) interface with a framework compiled with a newer version (which is the reason the platform frameworks are still Objective-C).
So how this category of changes affects Swift is a work in progress. We will probably have a better definition of how to deal with this type of issue when Swift 5 is released. Until then, adding default and/or version checking is probably the way to go.
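For reference, in current Swift the @unknown default mentioned in the note above is the sanctioned middle ground: the switch stays effectively exhaustive (the compiler still warns about any known case you leave out), while values added in future SDKs fall through to the default. A minimal sketch; futureProofDisplayName is a made-up name:
import HealthKit
extension HKWorkoutActivityType {
    var futureProofDisplayName: String {
        switch self {
        case .americanFootball: return "American Football"
        // ...a case for every value known at compile time...
        @unknown default: return "Unknown Activity"
        }
    }
}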
Very interesting question, and upvoted. I know of no way to perfectly test this in (a) Xcode 9 and (b) iOS 11. But that may be your answer.
I think the desired solution is if #available(iOS 12, *); where it goes, though, is the question. Do you encapsulate the entire switch statement, or just the iOS 12 additions?
The result should be that, between the target iOS version in Xcode and the Swift compiler, you're covered: you should get an error (hopefully one explaining that iOS 11 is targeted but something is only available in iOS 12) indicating that you either need to use if #available(iOS 12, *) somewhere or change your target.
I know of no easy way to test this without rebuilding, though, and rebuilding is integral to your question! Therefore I guess the rule is:
Always rebuild your app when a new iOS (and associated Xcode) version is released.
Consider this part of you taking ownership of your code.

Why are symbols for classes written in Swift difficult to read in lldb? Is it possible to change this?

Some context: I'm new to Swift, going through a book right now.
When looking at exceptions in lldb and there's a stack frame from a Swift class, the symbol is very hard to read.
ex:
_TFC10MyApp16TestViewControllersP01CBLDocumentModel5queryfzT4viewCSo7CBLView4
It looks like lldb just doesn't know how to display the signatures properly. Is there a flag or setting I can change, or is this just a thing that everybody has learned to deal with?
Really the difficult part, for me, is when it prints random letters in the middle of the symbol
This has nothing to do with lldb; it's called name mangling in Swift, and the symbols have very specific meanings. Swift's name mangling is specifically designed so that the mangled name can be deterministically demangled to recover information about the kind of declaration it is and the scope it lives in, among other things.
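If you just need to read a symbol occasionally, the Swift toolchain ships a command-line demangler; pass it the mangled name (the exact output depends on your toolchain version):
xcrun swift-demangle <mangled-symbol>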

Swift: why aren't all variables lazy by default?

In comparing these two options for defining an instance property:
var networkManager = NetworkManager.sharedInstance()
lazy var networkManager = NetworkManager.sharedInstance()
Both:
Can evaluate a block to get the value
Can be declared inline (not a block, like above)
Lazy:
Can refer to self
Is not calculated until needed
If you don't use it, it is never calculated
Non-lazy:
No benefits whatsoever
It appears that there is no benefit to ever use a non-lazy variable. So why does the language allow the programmer to make this inferior choice?
(I am NOT asking about the difference between var and let à la Are Swift constants lazy by default?)
One reason might be that laziness is not well suited to situations where you want to control when the evaluation happens. This is relevant in cases where the work being done in the assignment has side effects.
Although it pertains to Clojure, this blog post by Stuart Sierra explains this idea very well, and I think it applies equally in any language.
As others already said, there are several critical scenarios where you want the initialization of the properties to be deterministic.
This is an example (among many others) related to game development.
Often the instances of classes representing items in a game scene/level are created before the level begins.
Initialisation can be a time-expensive task (loading data from persistent storage, allocating memory, preparing the instances...), and doing this work before the player begins playing the level avoids CPU overhead.
This is critical, because CPU overhead in the middle of a level could cause a drop in the frame rate, which is a nightmare for the user experience.
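As a minimal sketch of that trade-off (all names here are made up):
struct Enemy { }
func loadEnemies() -> [Enemy] {
    // Stand-in for expensive work: disk I/O, decoding, allocation...
    return (0..<1_000).map { _ in Enemy() }
}
final class Level {
    // Eager: the cost is paid at init time, e.g. behind a loading screen.
    let enemies: [Enemy] = loadEnemies()
    // Lazy: the cost is deferred to first access, which might land
    // mid-gameplay and cause a frame drop.
    lazy var reinforcements: [Enemy] = loadEnemies()
}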
FYI. My feeling is that Swift wants to become more like a functional language and would like lazy instantiation in more places.
My early assessment of Swift has held up pretty well over time (well, the "not functional" part has; I didn't anticipate how much Swift would favor methods over functions in later versions). Swift is not a functional language and does not intend to be one. This has come up often in WWDC talks, on the forums, on Twitter, and in conversations with the Swift team. Originally all maps and filters were lazy. Swift removed that because of the problems it caused. Probably the best talk on that subject is "Building Better Apps with Value Types in Swift". As they say:
We like mutation. We think it's valuable. We think it's easy to use when done correctly.
You don't get much more "non-functional" than that. Swift also embraces immutable data. But functional programming is about pure functions over immutable data, and that's not Swift.
(Of course there are plenty of non-lazy functional languages. Lazy and functional are orthogonal concepts. Haskell just happened to embrace both.)
To the question at hand, though:
I've found the lazy attribute rarely useful in real-world Swift (I'm being generous; I have never encountered a case where I kept it in the code). It doesn't offer anything like the laziness you get in Haskell. It isn't thread safe, so that's a nightmare. It forces you into reference types (or forces your structs to be mutable), so that can be annoying. If I heard they were pulling it from the language and we just had to roll our own, that'd be fine with me. (I'm tempted to write a proposal to do just that.) It implements a specific memo pattern that can occasionally be handy, but often isn't the one you want. So it's a very good thing that it isn't the default.
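To see the struct problem concretely, here is a minimal sketch (Config and expensiveComputation are made-up names); a lazy property has a mutating getter, so it's unusable on a let value:
func expensiveComputation() -> Int {
    return (1...1_000).reduce(0, +)
}
struct Config {
    lazy var cached: Int = expensiveComputation()
}
var mutable = Config()
_ = mutable.cached    // fine: first access runs expensiveComputation()
let frozen = Config()
// _ = frozen.cached  // error: cannot use mutating getter on immutable value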
As you likely know, global variables and class variables are lazy by default, and I think that tends to work out pretty well since there are so many fewer of them, there's a much better chance they won't be accessed in practice, and that laziness is thread safe (which has a cost, but since they're so much rarer, the cost is much lower).
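A small illustration of that default, using a real Foundation type:
import Foundation
struct Metrics {
    // Static stored properties are initialized lazily on first access,
    // and that initialization is thread-safe; no lazy keyword needed.
    static let formatter: NumberFormatter = {
        let f = NumberFormatter()
        f.numberStyle = .decimal
        return f
    }()
}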
If you have an expensive object (in terms of taking a long time to create), you would like to decide and control when it is created. One could argue that the lazy variable should be the default, though. Maybe it has historical reasons: lazy properties in ObjC resulted in a lot of boilerplate code.

Objective-C Data Structures (Building my own DAWG)

After not programming for a long, long time (20+ years) I'm trying to get back into it. My first real attempt is a Scrabble/Words With Friends solver/cheater (pick your definition). I've built a pretty good engine, but it solves the problem through brute force instead of efficiency or elegance. After much research, it's pretty clear that the best answer to this problem is a DAWG or CDAWG. I've found a few C implementations out there and have been able to leverage them (search times have gone from 1.5s to 0.005s for the same data sets).
However, I'm trying to figure out how to do this in pure Objective-C. At that, I'm also trying to make it ARC compliant. And efficient enough for an iPhone. I've looked around quite a bit and found several data structure libraries (e.g. CHDataStructures) out there, but they are mostly C/Objective-C hybrids or they are not ARC compliant. They rely very heavily on structs and embed objects inside the structs. ARC doesn't really care for that.
So, my question is (sorry, and I understand if this was tl;dr and if it seems like a total newb question; I just can't get my head around this object stuff yet): how do you program classical data structures (trees, etc.) from scratch in Objective-C? I don't want to rely on NS[Mutable]{Array,Set,etc}. Does anyone have a simple/basic implementation of a tree or anything like that that I can crib from while I go create my DAWG?
Why shoot yourself in the foot before you've even started walking?
You say you're
trying to figure out how to do this in pure Objective-C
yet you
don't want to rely on a NS[Mutable]{Array,Set,etc}
Also, do you want to use ARC, or do you not want to use ARC? If you stick with Objective-C then go with ARC, if you don't want to use the Foundation collections, then you're probably better off without ARC.
My suggestion: do use NS[Mutable]{Array,Set,etc} and get your basic algorithm working with ARC. That should be your first and only goal, everything else is premature optimization. Especially if your goal is to "get back into programming" rather than writing the fastest possible Scrabble analyzer & solver. If you later find out you need to optimize, you have some working code that you can analyze for bottlenecks, and if need be, you can then still replace the Foundation collections.
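For instance, a trie/DAWG-style node built on the Foundation collections might start out like this under ARC (a sketch; the class and property names are made up):
#import <Foundation/Foundation.h>
@interface DAWGNode : NSObject
// Maps a one-letter NSString to the child DAWGNode reached by that letter.
@property (nonatomic, strong) NSMutableDictionary *children;
@property (nonatomic, assign) BOOL isTerminal;
@end
@implementation DAWGNode
- (instancetype)init {
    if ((self = [super init])) {
        _children = [NSMutableDictionary dictionary];
    }
    return self;
}
@end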
As for the other libraries not being ARC compatible: you can pretty easily make them compatible if you follow some rules set by ARC. Whether that's worthwhile depends a lot on the size of the 3rd party codebase.
In particular, casting from void* to id and vice versa requires a bridged cast, so you would write:
void *pointer = (__bridge void *)myObjCObject;
id object = (__bridge id)pointer;
Similarly, if you flag all object pointers in C structs as __unsafe_unretained, you should be able to use the C code as is. Better yet, if the C code can be built as a static library, you can build it with ARC turned off and only need to fix some header files.
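For example, a C struct holding object pointers compiles under ARC as long as those pointers are marked __unsafe_unretained (a sketch with made-up names; you then have to keep the referenced objects alive yourself):
typedef struct DAWGEdge {
    __unsafe_unretained NSString *label; // ARC will not retain this
    struct DAWGEdge *next;
} DAWGEdge;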

How do I import code in Pascal?

What's the Pascal way to do C's #include "code.h", Python's import code, etc.?
Pascal uses
uses
to import other modules.
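For example (a minimal sketch; SysUtils is a standard unit in Delphi and Free Pascal):
program Hello;
uses
  SysUtils;
begin
  WriteLn(UpperCase('hello'));
end.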
While you can explicitly {$INCLUDE} a file, it's rarely done other than for configuration files containing compiler switches. The only time I've ever done it was long ago, when I wanted two versions of the code that were identical except that one used coprocessor-only datatypes and the other didn't. (And how many people these days even know that the single and double types used to require either an expensive additional chip or a slow emulator?)
If you include the same code in two places, you will get two copies of it in your .EXE. If you include the same type definition in two places, you'll get two types with the same name, and since Pascal uses strict typing, they will not match.
The normal mechanism, as Greg Hewgill says, is to use the unit you want. Anything that appears in the interface section of the unit you use is visible; anything that's only in the implementation section is not. This is an all-or-nothing process: you don't specify what you are bringing in. Think of the C# using directive.
Unlike the C# version, it's absolutely mandatory; you can't use fully qualified names to get around it.
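To illustrate with a minimal made-up unit: code that uses MathUtils can call Twice, but Helper, which appears only in the implementation section, is invisible to it:
unit MathUtils;
interface
function Twice(X: Integer): Integer;
implementation
function Helper(X: Integer): Integer;
begin
  Helper := X + X;
end;
function Twice(X: Integer): Integer;
begin
  Twice := Helper(X);
end;
end.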