In Swift 2.1, I am trying to use reflection in order to add cases generated from a text file to an enum at compile time.
Here is the enum wrapper:
enum Kind : Int {
}
In C/C++ I could just use this macro:
#define X(value, left, right) \
value##Left = left, value##Right = right,
How can I get a similar result in Swift?
Preprocessor directives are deliberately cut down to a bare minimum in Swift. Even if it were technically possible, your particular case would go quite against Swift's philosophy with respect to enums: that philosophy requires switch statements over enumerations to be exhaustive, that is, to cover all possible cases.
Now, if you were able to dynamically fill in an enum's cases from some file, how would the compiler be able to ensure exhaustiveness? Falling back to default: cases all over the program would basically throw Swift's whole idea of enum safety out the window.
If you stick with Swift then you are probably better off with a dictionary, as @RMenke suggests.
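If the cases really do live in a text file, a minimal Swift 2-era sketch of the dictionary approach might look like this. The file name kinds.txt and the Left=0 line format are made up for illustration; the "cases" end up in a dictionary built at runtime instead of a compile-time enum.
import Foundation

// Hypothetical file format: one "Name=rawValue" pair per line, e.g. "Left=0".
func loadKinds(path: String) -> [String: Int] {
    var kinds = [String: Int]()
    guard let contents = try? String(contentsOfFile: path, encoding: NSUTF8StringEncoding) else {
        return kinds
    }
    for line in contents.componentsSeparatedByString("\n") where !line.isEmpty {
        let parts = line.componentsSeparatedByString("=")
        if parts.count == 2 {
            if let value = Int(parts[1]) {
                kinds[parts[0]] = value   // e.g. kinds["Left"] = 0
            }
        }
    }
    return kinds
}

let kinds = loadKinds("kinds.txt")
let leftValue = kinds["Left"]   // Optional(0) if the file contains "Left=0"
You lose compile-time exhaustiveness checking this way, which is exactly the trade-off described above.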
I've been attempting to follow some iOS Swift tutorials for connecting to Parse.com
codementor.io tutorial:
loginViewController.fields = .UsernameAndPassword | .LogInButton | .PasswordForgotten | .SignUpButton | .Facebook | .Twitter
makeschool tutorial:
loginViewController.fields = [.UsernameAndPassword, .LogInButton, .SignUpButton, .PasswordForgotten, .Facebook]
I assume the former is Swift 1.x, and the latter is Swift 2. From context they appear to be doing the same thing, but I haven't yet found the language references for the change in syntax. Awfully hard to search for dots, pipes and commas... can someone explain the syntax in each snippet? (I'm working on reading through the language specification, but it would be fun to actually get an app to work!)
The old Swift 1 syntax is based on the way you deal with option sets in C and Objective-C: you store an option set in an integer type and use bitwise operators (| and & and ~) to manipulate them. So .UsernameAndPassword | .LogInButton means an option set in which both the .UsernameAndPassword and the .LogInButton options are included. In the old syntax, you use nil to represent an empty option set (in which no options are included), which is not obvious based on the syntax for a non-empty set.
Chris Lattner described the changed syntax in WWDC 2015 Session 106: What's New in Swift. First he describes the problems with the old syntax:
The problem is, when you get to the other syntaxes you end up using, it is a bit less nice. You create an empty-option set with nil -- it doesn't make sense because option sets and optionals are completely different concepts and they're conflated together. You extract them with bitwise operations, which is a pain and super error-prone, and you can get it wrong easily.
Then he describes the new approach:
But Swift 2 solves this. It makes option sets set-like. That means option sets and sets are now formed with square brackets. That means you get empty sets with an empty set of square brackets, and you get the full set of standard set API to work with option sets.
The new syntax works because OptionSetType conforms to the ArrayLiteralConvertible protocol (indirectly, by conforming to SetAlgebraType). This protocol allows a conforming type to be initialized with an array literal, via an init that takes a list of elements.
In the new Swift 2 syntax, [ .UsernameAndPassword, .LogInButton ] represents an option set containing both the .UsernameAndPassword and the .LogInButton options. Note that it looks just like the syntax by which you can initialize a plain old Set: let intSet: Set<Int> = [ 17, 45 ]. The new syntax makes it obvious that you specify an empty option set as [].
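To see the machinery, here's a hedged sketch of a custom option set in Swift 2. The type and member names below are invented for illustration; the tutorials' real fields type comes from the Parse SDK.
struct LogInFields: OptionSetType {
    let rawValue: Int
    static let UsernameAndPassword = LogInFields(rawValue: 1 << 0)
    static let LogInButton         = LogInFields(rawValue: 1 << 1)
    static let Facebook            = LogInFields(rawValue: 1 << 2)
}

let fields: LogInFields = [.UsernameAndPassword, .LogInButton]  // set-literal syntax
let empty: LogInFields = []                                     // empty option set
let hasFacebook = fields.contains(.Facebook)                    // false: set-style API
The memberwise init(rawValue:) that the struct gets for free is what satisfies the protocol, and contains comes from SetAlgebraType.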
Swift no longer supports the vertical bar operator used in your first line. The second line takes the different options and describes them as an option set using the new set-literal syntax. Here's an example of enum syntax from the Swift docs:
enum CompassPoint {
case North
case South
case East
case West
}
var directionToHead = CompassPoint.West
directionToHead = .South
switch directionToHead {
case .North:
print("Lots of planets have a north")
case .South:
print("Watch out for penguins")
case .East:
print("Where the sun rises")
case .West:
print("Where the skies are blue")
}
// prints "Watch out for penguins"
I am porting a Clojure program to Swift. Since Clojure is a dynamically typed language, it is easy to throw different values together like this:
(def settings {:total-gens 5
:name "Incredible Program"
:options [:a :b :c :d :e]
:final-comment "Hope you had a good time."})
I pass settings maps like this around in the program, and I wanted to have a fairly similar process in Swift.
Right away, I feel like I am fighting the type system and I'm wondering what is the most elegant way to do this.
Here are two options that were recommended to me, both of which seem verbose or strange:
1) First, make an enum type of all possible settings value types. Then, create a dictionary of String: SettingsEnumType. Every time I need to add a new type of value to my dictionary, I first need to change the enum definition, and then change the actual dictionary.
2) Instead, create an empty protocol with no requirements. Then extend values like Int, String, etc to adopt this protocol, even though it is really a "dummy" protocol. Then make my settings dictionary String : SettingsProtocol so I can add whatever type I want in there (after first extending the type).
Both of these options feel weird to me, like I'm trying to circumvent the type system rather than have it work for me. The second option is frankly silly, but would no doubt work as needed.
Are there any other possibilities for doing something like this? Additionally, would the String type be the only obvious choice for the keys in a settings dictionary? Here again Clojure has spoiled me with its useful keyword type, which acts both as a value and as a look-up function.
Any advice/pointers appreciated as I consider this new language.
After referring to the Array with string and number answer, I believe you can create a heterogeneous dictionary with the syntax below. (The key type has to be Hashable, so Any won't work for the keys, but Any is fine for the values.)
let heteroDict = [String: Any]()
Can you try this one?
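For what it's worth, the settings map from the question would then look something like this (the key names here are just illustrative); reading a value back requires a cast to the expected type:
var settings: [String: Any] = [
    "totalGens": 5,
    "name": "Incredible Program",
    "options": ["a", "b", "c", "d", "e"],
    "finalComment": "Hope you had a good time."
]

// Values come back as Any, so cast to the type you expect.
if let totalGens = settings["totalGens"] as? Int {
    print("Running \(totalGens) generations")   // Running 5 generations
}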
I'm beginning to teach myself Swift and I'm going through examples of games at the moment. I've run across a line of code that I thought was peculiar:
scene.scaleMode = .ResizeFill
In languages I'm used to (C / Java) the "." notation is used to reference some sort of structure or object, but I'm not exactly sure what this line of code does, as there is no explicitly specified object before the ".".
Any clarification of this unqualified "." reference, and when/how it can be used, would be great.
P.S. I'm using sprite kit in Xcode
In Swift, as in the other languages you mentioned, '.' is a member access operator. The syntax you are referring to is a piece of shorthand that Swift allows because it is a type-safe language.
The compiler recognises that the property you are assigning to is of type SKSceneScaleMode and so the value you are assigning must be one of that type's enumerated values - so the enumeration name can be omitted.
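For example, both assignments below do the same thing; the second simply omits the type name the compiler can already infer (a small sketch, assuming a SpriteKit scene):
import SpriteKit

let scene = SKScene(size: CGSize(width: 320, height: 480))
scene.scaleMode = SKSceneScaleMode.ResizeFill   // fully qualified
scene.scaleMode = .ResizeFill                   // shorthand: type inferred from the property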
To add to PaulW11's answer, what's happening here is only valid syntax for enums and won't work with any other type (class, struct, method, function). Swift knows that the type of the property you are assigning to is an enum of type SKSceneScaleMode, so it lets you refer to the enum member without having to explicitly give the type of the enum (i.e. SKSceneScaleMode.ResizeFill).
There are some situations where there will be ambiguity and you will have to give the full name; this depends on the context. For example, you may have two different enum types in scope that both have a matching member name.
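A contrived sketch of that ambiguity, with made-up enum and function names: two enums share a member name and a function is overloaded over both, so the bare dot syntax can't pick one.
enum Direction { case North, South }
enum Pole      { case North, South }

func go(direction: Direction) { print("heading \(direction)") }
func go(pole: Pole)           { print("visiting the \(pole) pole") }

// go(.North)           // error: ambiguous -- both overloads accept a .North
go(Direction.North)     // OK: spelling out the type resolves the ambiguity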
EDIT
Updating this answer as I incorrectly stated that this was only applicable to enums, which is not true. There is a good blog post here which explains it in more detail:
http://ericasadun.com/2015/04/21/swift-occams-code-razor/
In this header file, struct members are declared through a preprocessor macro rather than directly with the language's data types. Usually a data type is used to declare a variable directly, but here the type is passed to a preprocessor macro. When should a data type be passed to the preprocessor to declare variables, and why are the type and the variable name sent to the preprocessor here?
#define DECLARE_REFERENCE(type, name) \
union { type name; int64_t name##_; }
typedef struct _STRING
{
int32_t flags;
int32_t length;
DECLARE_REFERENCE(char*, identifier);
DECLARE_REFERENCE(uint8_t*, string);
DECLARE_REFERENCE(uint8_t*, mask);
DECLARE_REFERENCE(MATCH*, matches_list_head);
DECLARE_REFERENCE(MATCH*, matches_list_tail);
REGEXP re;
} STRING;
Why is this code doing this for declarations? Because, as the body of DECLARE_REFERENCE shows, when a type and a name are passed to this macro it does more than just the declaration - it also builds something else out of the name, for some other unknown purpose. If you only wanted to declare a variable, you wouldn't do this - it does something distinct from simply declaring one variable.
What does it actually do? The unions that the macro declares provide a second name for accessing the same space as a different type. In this case you can get at the references themselves, or at an unconverted integer representation of their bit pattern - assuming that int64_t is the same size as a pointer on the target, anyway.
Using a macro for this potentially serves several purposes I can think of off the bat:
Saves keystrokes
Makes the code more readable - but only to people who already know what the macros mean
If the secondary way of getting at reference data is only used for debugging purposes, it can be disabled easily for a release build, generating compiler errors on any surviving debug code
It enforces the secondary status of the access path, hiding it from people who just want to see what's contained in the struct and its formal interface
Should you do this? No. This does more than just declare variables, it also does something else, and that other thing is clearly specific to the gory internals of the rest of the containing program. Without seeing the rest of the program we may never fully understand the rest of what it does.
When you need to do something specific to the internals of your program, you'll (hopefully) know when it's time to invent your own thing-like-this (most likely never); but don't copy others.
So the overall lesson here is to identify places where people aren't writing in straightforward C, but are coding to their particular application, and to separate those two, and not take quirks from a specific program as guidelines for the language as a whole.
Sometimes it is necessary to have a number of declarations which are guaranteed to have some relationship to each other. Some simple kinds of relationships such as constants that need to be numbered consecutively can be handled using enum declarations, but some applications require more complex relationships that the compiler can't handle directly. For example, one might wish to have a set of enum values and a set of string literals and ensure that they remain in sync with each other. If one declares something like:
#define GENERATE_STATE_ENUM_LIST \
ENUM_LIST_ITEM(STATE_DEFAULT, "Default") \
ENUM_LIST_ITEM(STATE_INIT, "Initializing") \
ENUM_LIST_ITEM(STATE_READY, "Ready") \
ENUM_LIST_ITEM(STATE_SLEEPING, "Sleeping") \
ENUM_LIST_ITEM(STATE_REQ_SYNC, "Starting synchronization") \
// This line should be left blank except for this comment
Then code can use the GENERATE_STATE_ENUM_LIST macro both to declare an enum type and a string array, and ensure that even if items are added or removed from the list each string will match up with its proper enum value. By contrast, if the array and enum declarations were separate, adding a new state to one but not the other could cause the values to get "out of sync".
I'm not sure what the purpose of the macros is in your particular case, but the pattern can sometimes be a reasonable one. The biggest question is whether it's better to (ab)use the C preprocessor so that such relationships can be expressed in valid-but-ugly C code, or whether it would be better to use some other tool that takes a list of states and generates the appropriate C code from it.
I am new to Scala and have heard a lot that everything is an object in Scala. What I don't get is the advantage of "everything's an object": what are the things I couldn't do if everything were not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can write:
String s1 = "one";
String s2 = "two";
static String caps(String s) { return s.toUpperCase(); }
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something upper case. But what if we want to abstract away the identity of the operation--that is, we do something to a String that gives back another String but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
The advantage is that you don't have different operators that follow different rules within your language. For example, in Java, to perform operations involving objects you use the dot-name technique of calling code (static methods still use the dot-name technique, though sometimes the this object or the class is inferred), while built-in items (not objects) use a different mechanism: built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, operator overloading, method hiding, and all the good stuff that you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of the + approach by basically handling it as a dot-name operator, defining a strong one-to-one mapping of such operators to object methods. Hence, in Scala "everything is an object" really does mean everything, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with 7 as its parameter (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
This "everything is an object" property has a lot of benefits in a functional programming environment: for example, blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters, yet still be bound to strict type checking (the block of code only returns Long, or a subclass of String, or whatever).
When it boils down to it, it makes some kinds of solutions very easy to implement, and often the inefficiencies are mitigated by the lack of need to handle "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to mind (since you asked for examples) is that what in Java are primitive types (int, boolean, ...) are in Scala objects that you can add functionality to with implicit conversions. For example, if you want to add a toRoman method to Ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
  def toRoman: String = ??? // some algorithm to convert i to a Roman representation
}
Then, you could call this method on any Int literal like:
val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is in part an illusion. For the most part it is a very welcome illusion. And Scala is very smart to use real JVM primitives as much as possible and to perform automatic transformations (usually referred to as boxing and unboxing) only as much as necessary.
However, if automatic boxing and unboxing happen very frequently at runtime, the associated costs (both memory and CPU) can be undesirable. This can be partially mitigated with specialization, which creates special versions of generic classes when particular type parameters are (programmer-specified) primitive types. This avoids boxing and unboxing but comes at the cost of more .class files in your running application.
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing the implicit behavior and state. Enough with the poetry, let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do things like logging to a file any time your code uses an if statement. Wouldn't that be magical? In some cases it would help you debug; in other cases it would make your hair turn white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: Imagine your favorite OOP language. Think of the standard library it provides. There are plenty of classes there, right? They offer ways to customize, right? They take parameters that are other objects, they create other objects. You can customize all of these. You have polymorphism. Now imagine that the whole standard library were simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless to customize anything there. Such languages exist, and you know them well: the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear which included the most-desired features. This is an extreme world, where you can only program by asking the language designers for new features (which you might not get, because someone else more important may require a feature incompatible with what you want).
In conclusion, NOT everything is an object in Scala: classes, expressions, keywords and packages surely aren't. More things are, however, such as functions.
A nice rule of thumb, IMHO, is that more objects equals more flexibility.
P.S. In Python, for example, even more things are objects (such as the classes themselves and the analogous concept to packages, that is, Python modules and packages). You'd see how black magic is easier to do there, and that brings both good and bad consequences.