Is it possible to concoct a compile time assert in Swift like static_assert in C++? Maybe some way to exploit type constraints on generics to force a compiler break?
This has been accepted into Swift as of version 4.2; here is the Swift Evolution proposal.
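If the proposal referred to is SE-0196's compiler diagnostic directives (#error and #warning, shipped in Swift 4.2) - an assumption on my part - a minimal sketch looks like this:

```swift
// Minimal sketch, assuming the feature in question is SE-0196's #error / #warning.
// #error fails the build whenever its surrounding #if branch is active, which makes
// it usable as a crude compile-time assert on build configuration.
#if !canImport(UIKit)
#error("This target requires UIKit")
#endif

#warning("Remember to remove the debug logging before release")
```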
If you're talking about a general assert, where the app will crash if a given condition fails, just use: assert(condition, message)
For example: assert(2 == 3, "failing because 2 does not equal 3")
This is possible in Swift. However, I should note that Apple's design mantra is that an app should never crash, but instead should handle all its errors in a "sophisticated" fashion.
Note: this is no longer relevant. Recent versions of Swift have multiple features that address enum binary compatibility in various ways, such as @unknown default, frozen enums, etc.
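For instance, with a current compiler the displayName switch from the question below can be written like this (a sketch using @unknown default; "Workout" is just a placeholder fallback):

```swift
import HealthKit

extension HKWorkoutActivityType {
    var displayName: String {
        switch self {
        case .americanFootball: return "American Football"
        // ...a case for every value known to the SDK you compile against...
        case .taiChi: return "Tai Chi"
        @unknown default:
            // Runs for values introduced by a newer OS; the compiler also warns at
            // build time if any case it *does* know about is left unhandled.
            return "Workout"
        }
    }
}
```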
Various enums in HealthKit tend to get new values added with each release of iOS. For example, HKWorkoutActivityType has had new values added in each iOS version since its introduction.
Say I am mapping this enum to a string value using a Swift switch statement:
extension HKWorkoutActivityType {
var displayName: String {
switch self {
case .americanFootball: return "American Football"
// ...Exhaustive switch statement, with a line for every enum case.
// Including values added in iOS 10 and 11...
case .taiChi: return "Tai Chi"
}
}
}
let event: HKWorkoutEvent = ...
print("Activity type is: \(event.type.displayName)")
This switch statement, compiled against the iOS 11 SDK, works fine and is backward compatible with older iOS versions. Note that at the time of compilation, the switch statement is exhaustive, so there is no default case.
But if new HKWorkoutActivityType values are added in iOS 12, and I don't recompile this code, how will the displayName getter behave for new enum values? Should I expect a crash? Undefined behavior? Does it depend on the type of enum (for example, here it's an Objective-C NS_ENUM, but will Swift enums behave differently)? etc.
FWIW, this is partially what this Swift Evolution proposal is addressing.
Hopefully they will decide on a solution that satisfies issues like this nicely too!
Long story short, you may be able to avoid this issue by adding a default case (even though the compiler will yell at you) or by using version checks; a sketch follows at the end of this answer. However, this behavior likely falls under "undefined" currently.
The long story:
The current version of Swift does not have ABI stability, so a compiled Swift application is not guaranteed to (and almost definitely won't) interface with a framework compiled with a newer version (which is the reason the platform frameworks are still Objective-C).
So how this category of changes affects Swift is a work in progress. We will probably have a better definition of how to deal with this type of issue when Swift 5 is released. Until then, adding a default case and/or version checking is probably the way to go.
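A minimal sketch of the default-case approach (the fallback string is just a placeholder; when the switch is otherwise exhaustive the compiler warns that the default will never be executed):

```swift
import HealthKit

extension HKWorkoutActivityType {
    var displayName: String {
        switch self {
        case .americanFootball: return "American Football"
        // ...every other case known to the SDK you build against...
        case .taiChi: return "Tai Chi"
        default:
            // Only reachable when a newer OS hands back a value this build
            // doesn't know about.
            return "Workout"
        }
    }
}
```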
Very interesting question, and upvoted. I know of no way to perfectly test this in (a) Xcode 9 and (b) iOS 11. But that may be your answer.
I think the desired solution is if #available(iOS 12, *); where to put it, though, is at issue. Do you encapsulate the entire switch statement, or just the iOS 12 addition?
The result should be that between the deployment target set in Xcode and the Swift compiler, the new case is covered - and you should get an error (hopefully explaining that iOS 11 is targeted but something is only available in iOS 12) indicating that you either need to use if #available(iOS 12, *) somewhere or change your target.
I know of no easy way to test this, though, without rebuilding, which is integral to your question! Therefore I guess the rule is:
Always rebuild your app when a new iOS (and associated Xcode) version is released.
Consider this part of you taking ownership of your code.
What has replaced the methods toUIntMax() and toIntMax() in Swift 4? The error occurred within the FacebookCore framework.
Any help would be appreciated
The concept of IntMax has been completely removed as part of SE-0104.
Converting from one integer type to another is performed using the concept of the 'maximum width integer' (see MaxInt), which is an artificial limitation. The very existence of MaxInt makes it unclear what to do should someone implement Int256, for example.
The proposed model eliminates the 'largest integer type' concept previously used to interoperate between integer types (see toIntMax in the current model) and instead provides access to machine words. It also introduces the multipliedFullWidth(by:), dividingFullWidth(_:), and quotientAndRemainder methods. Together these changes can be used to provide an efficient implementation of bignums that would be hard to achieve otherwise.
In this specific case, the FB SDK should simply use the UInt64($0) initializer, which is now available for any BinaryInteger type thanks to the new protocols.
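As a sketch (not the SDK's actual code), the conversions that used to go through toIntMax()/toUIntMax() now look like this:

```swift
// Sketch: integer conversions after SE-0104 removed the IntMax/UIntMax concept.
let count: Int = 42

let wide = UInt64(count)              // works for any BinaryInteger source
let narrow = Int(exactly: wide) ?? 0  // nil (here defaulted to 0) if it wouldn't fit
let clamped = UInt8(clamping: wide)   // saturates instead of trapping on overflow
```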
For now, you can also select Swift 3.2 under Pods -> Targets -> ObjectMapper -> Swift Language Version.
I'm trying to build a collection view with pagination (showing cells as previews) and found this tutorial. I'm getting the following error from Xcode and am guessing it must be due to an Xcode update (since the tutorial seems to have worked for a lot of people). I would love a hint on how to fix this:
Downcast from '[UICollectionViewLayoutAttributes]?' to '[UICollectionViewLayoutAttributes]' only unwraps optionals; did you mean to use '!'?
You probably did something like this (someOptionalAttributes being of type [UICollectionViewLayoutAttributes]?):
someOptionalAttributes as! [UICollectionViewLayoutAttributes]
However, because someOptionalAttributes is just the optional version of [UICollectionViewLayoutAttributes], you do not need a force downcast; you just need to unwrap it:
someOptionalAttributes!
Force downcasting is only necessary when casting to an entirely new type (usually a subclass).
So, yes, you did mean to use ! as the error says. If you want to be safe, you could use a number of different optional unwrapping techniques instead, such as those sketched below.
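For example (a sketch, reusing the hypothetical someOptionalAttributes from above):

```swift
import UIKit

func handle(_ someOptionalAttributes: [UICollectionViewLayoutAttributes]?) {
    // Optional binding: run the body only when a value is present.
    if let attributes = someOptionalAttributes {
        print("Got \(attributes.count) attributes")
    }

    // Nil-coalescing: substitute an empty array when the optional is nil.
    let attributesOrEmpty = someOptionalAttributes ?? []
    print("Count with fallback: \(attributesOrEmpty.count)")
}
```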
What is the difference between test.length and [test length]?
Which is more useful for iOS development?
There is no difference in meaning; they both access the length property.
Their only difference is syntactic.
Check Apple's documentation about sending a message to an object.
test.length is just convenience syntax introduced in Objective-C 2.0. The two expressions you list are totally equivalent and a matter of preference more than anything else.
They are the same. Sometimes it may (!) be better to use one or the other for readability of your code.
The following code is perfectly safe, yet Xcode 4 gives me an error for it:
if ([self respondsToSelector: @selector(foo)])
    [self foo];
I am aware that I can get around it with a dummy protocol, but I use this pattern pretty often, and it feels like that amount of work should not be necessary. Is there any way to set a setting somewhere, preferably once, so that this "error" does not bug me again?
if ([self respondsToSelector: @selector(foo)])
    [self foo];
That expression is only "perfectly safe" if there are no arguments and no return value. If any type information is required, @selector(foo) is insufficient.
Even then, I suspect that there are architectures whose ABIs are such that the no-arg-no-return case would actually require that type knowledge be available to the compiler to be able to generate code that is absolutely guaranteed correct.
That is to say, your example of fooWithInteger: and/or fooWithX:y:z: could not possibly be compiled correctly without the full type information available, due to the vagaries of the C language and the architecture-specific ABI.
As well, to allow the compiler to compile that without warning would require the compiler to collude a runtime expression -- respondsToSelector: must be dynamically dispatched -- with a compile time expression. Compilers hate that.
To silence the compiler when following that kind of pattern, I use -performSelector:
if ([self respondsToSelector:@selector(foo)]) {
    [self performSelector:@selector(foo)];
}
I don't know of any other way.