Reflectively look up enum value by String in Swift 2 - swift

I'm writing an XML-based descriptor for UIKit and am wondering if there's any slim possibility at all of taking a string like "UIStackViewAlignmentCenter" or "UIStackViewAlignment.Center" and converting it into the appropriate constant value.
I'm really expecting this is impossible, but wanted to ask just in case.
My fallback plan is to create a helper class that lets me register strings like "UIStackViewAlignmentCenter" and map them to values, but adding all of the possible constants that way is going to be painstaking. :(
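Roughly, the helper I have in mind is just a big registration table, something like this sketch (the names here are only illustrative, and I'm using the current Swift spelling UIStackView.Alignment):

import UIKit

// Hypothetical registry: constant name -> raw value, one entry per constant. :(
let constantsByName: [String: Int] = [
    "UIStackViewAlignmentCenter": UIStackView.Alignment.center.rawValue,
    "UIStackViewAlignmentFill": UIStackView.Alignment.fill.rawValue
    // ... and so on, for every constant the descriptor might mention
]

// While parsing the XML descriptor:
if let raw = constantsByName["UIStackViewAlignmentCenter"],
   let alignment = UIStackView.Alignment(rawValue: raw) {
    // apply `alignment` to the view being configured
}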

Related

Simple dictionary of heterogeneous settings, ported from Clojure

I am porting a Clojure program to Swift. Since Clojure is dynamically typed, it is easy to throw different values together like this:
(def settings {:total-gens 5
               :name "Incredible Program"
               :options [:a :b :c :d :e]
               :final-comment "Hope you had a good time."})
I pass settings maps like this around in the program, and I wanted to have a fairly similar process in Swift.
Right away, I feel like I am fighting the type system and I'm wondering what is the most elegant way to do this.
Here are two options that were recommended to me, both of which seem verbose or strange:
1) First, make an enum type covering all possible settings value types. Then, create a dictionary of String: SettingsEnumType (sketched just below). Every time I need to add a new type of value to my dictionary, I first need to change the enum definition, and then change the actual dictionary.
2) Instead, create an empty protocol with no requirements. Then extend values like Int, String, etc. to adopt this protocol, even though it is really a "dummy" protocol. Then make my settings dictionary String: SettingsProtocol so I can add whatever type I want in there (after first extending the type).
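For concreteness, option 1 would look roughly like this (a sketch; the cases are placeholder names, written in current Swift syntax):

enum SettingsEnumType {
    case int(Int)
    case string(String)
    case stringList([String])
}

let settings: [String: SettingsEnumType] = [
    "total-gens": .int(5),
    "name": .string("Incredible Program"),
    "options": .stringList(["a", "b", "c", "d", "e"]),
    "final-comment": .string("Hope you had a good time.")
]

Adding a new kind of value means adding a case to the enum first, which is exactly the extra step described above.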
Both of these options feel weird to me, like I'm trying to circumvent the type system rather than have it work for me. The second option is frankly silly, but would no doubt work as needed.
Are there any other possibilities for doing something like this? Additionally, would the String type be the only obvious type for the keys in a settings dictionary? In this case, Clojure has again spoiled me with its useful keyword type, which acts as a look-up function in addition to being a value.
Any advice/pointers appreciated as I consider this new language.
After referring to the "Array with string and number" answer, I believe you can create a heterogeneous dictionary with the syntax below. The key type has to be Hashable, so use String (or another Hashable type) for the keys and Any for the values:
let heteroDict: [String: Any] = [:]
Can you try this one?
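For example, the settings map from your question could be written roughly like this (a sketch; you cast the values back out when you read them):

let settings: [String: Any] = [
    "total-gens": 5,
    "name": "Incredible Program",
    "options": ["a", "b", "c", "d", "e"],
    "final-comment": "Hope you had a good time."
]

// Values come back as Any, so cast on the way out.
if let totalGens = settings["total-gens"] as? Int {
    print(totalGens) // prints 5
}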

Everything's an object in Scala

I am new to Scala and heard a lot that everything is an object in Scala. What I don't get is what's the advantage of "everything's an object"? What are things that I cannot do if everything is not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can abstract the uppercasing into a method and apply it to either one:
static String caps(String s) { return s.toUpperCase(); }

String s1 = "one";
String s2 = "two";
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something upper case. But what if we want to abstract away the identity of the operation--that is, we do something to a String that gives back another String but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
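For comparison, Swift (the language of the original question above) also treats functions as first-class values, so the same abstraction is available there; a minimal sketch, with illustrative names:

func stringop(_ s: String, _ f: (String) -> String) -> String {
    // Apply the operation only to nonempty strings, mirroring the Scala version.
    return s.isEmpty ? s : f(s)
}

stringop("one") { $0.uppercased() } // "ONE"
stringop("two") { $0.lowercased() } // "two"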
The advantage is that you don't have different kinds of operators that follow different rules within your language. For example, in Java, to perform operations involving objects you use the dot-name technique of calling code (static members still use the dot-name technique, but sometimes the this object or the class is inferred), while built-in items (not objects) use a different mechanism: built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, overloading, method hiding, and all the good stuff that you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of + by essentially handling it with the dot-name technique, defining a strong one-to-one mapping of such operators to object methods. Hence, in Scala, "everything is an object" really does mean everything, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with the parameter 7 (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
This "everything is an object" approach has a lot of benefits in a functional programming environment: blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters, yet still be bound to strict type checking (the block of code only returns a Long, or a subclass of String, or whatever).
When you boil it down, it makes some kinds of solutions very easy to implement, and often the inefficiencies are mitigated by not needing "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to mind (since you asked for examples) is that what are primitive types in Java (int, boolean, ...) are objects in Scala, and you can add functionality to them with implicit conversions. For example, if you want to add a toRoman method to Ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
  def toRoman: String = ??? // some algorithm to convert i to a Roman representation
}
Then, you could call this method on any Int literal, like:
val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs.
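For comparison, the closest Swift equivalent of this technique is an extension on Int; a sketch, with the Roman-numeral algorithm elided just as in the Scala example:

extension Int {
    // Placeholder, mirroring the elided algorithm in the Scala example above.
    var toRoman: String {
        fatalError("some algorithm to convert self to a Roman representation")
    }
}

// let romanFive = 5.toRoman // "V", once the algorithm is filled in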
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is in part an illusion. For the most part it is a very welcome illusion. And Scala is very smart to use real JVM primitives as much as possible and to perform automatic transformations (usually referred to as boxing and unboxing) only as much as necessary.
However, if boxing and unboxing end up being applied very frequently at run time, there can be undesirable costs (both memory and CPU) associated with them. This can be partially mitigated with the use of specialization, which creates special versions of generic classes when particular type parameters are of (programmer-specified) primitive types. This avoids boxing and unboxing, but comes at the cost of more .class files in your running application.
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing the implicit behavior and state. Enough with the poetry, let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do stuff like logging to a file any time your code makes use of the if statement. Wouldn't that be magical? It would in some cases help you debug stuff, and in other cases it would make your hair turn white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: imagine your favorite OOP language. Think of the standard library it provides. There are plenty of classes there, right? They offer ways for customization, right? They take parameters that are other objects, they create other objects. You can customize all of these. You have polymorphism. Now imagine that the entire standard library was simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless to customize anything there. Such languages exist, and you know them well: the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear which included the most-requested features. This would be an extreme world, where you'd only be able to program by asking the language designers for new features (which you might not get, because someone else more important might require a feature incompatible with what you want).
In conclusion, NOT everything is an object in Scala: classes, expressions, keywords and packages surely aren't. More things are objects than in Java, however, such as functions.
What's IMHO a nice rule of thumb is that more objects equals more flexibility.
P.S. In Python, for example, even more things are objects (such as the classes themselves, and the analogous concepts for packages, that is, Python modules and packages). You'd see that black magic is easier to do there, and that brings both good and bad consequences.

In Scala, is it possible to simultaneously extend a library and have a default conversion?

For example in the following article
http://www.artima.com/weblogs/viewpost.jsp?thread=179766
Two separate examples are given:
Automatic string conversion
Addition of append method
Suppose I want to have automatic string conversion AND a new append method. Is this possible? I have been trying to do both at the same time but I get compile errors. Does that mean the two implicits are conflicting?
You can have any number of implicit conversions from a class, provided that each one can be unambiguously determined from usage. So array-to-String and array-to-a-rich-array-class-containing-append is fine, since String doesn't have an append method. But you can't also convert to StringBuffer, which has append methods that would interfere with your rich array's append.

What kind of data type is this?

In a class header I have seen something like this:
enum {
    kAudioSessionProperty_PreferredHardwareSampleRate = 'hwsr', // Float64
    kAudioSessionProperty_PreferredHardwareIOBufferDuration = 'iobd' // Float32
};
Now I wonder what data type such a kAudioSessionProperty_PreferredHardwareSampleRate constant actually is.
I mean, this looks like plain old C, but in Objective-C I would write @"hwsr" if I wanted to make it a string.
I want to pass such a "constant" or "enum thing" as an argument to a method.
This converts to a UInt32 enum value using the ASCII value of each of the four characters. This style has been around for a long time in Mac OS headers.
'hwsr' has the same value as if you had written 0x68777372 ('h' = 0x68, 'w' = 0x77, 's' = 0x73, 'r' = 0x72), but is a lot more reader-friendly. If you used the @"hwsr" style instead, you would need more than 4 bytes to represent the same value.
The advantage of using this style is that you are actually able to quickly identify the content of a raw data stream if you can see the ASCII values of it.
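For illustration (not part of the original answer), here is a small Swift sketch that packs a four-character ASCII string into a UInt32 the same way, so you can check that 'hwsr' really comes out as 0x68777372:

func fourCharCode(_ s: String) -> UInt32 {
    // Pack the (at most four) ASCII bytes most-significant-byte first,
    // e.g. "hwsr" -> 0x68777372.
    var result: UInt32 = 0
    for byte in s.utf8.prefix(4) {
        result = (result << 8) | UInt32(byte)
    }
    return result
}

print(String(fourCharCode("hwsr"), radix: 16)) // 68777372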

How to workaround the XmlSerialization issue with type name aliases (in .NET)?

I have a collection of objects, and for each object I have its type FullName and either its value (as a string) or a subtree of inner objects' details (again, type FullName and value or a subtree).
For each root object I need to create a piece of XML that will be xml de-serializable.
The problem with XmlSerializer is that, for example, the following object
int age = 33;
will be serialized to
<int>33</int>
At first this seems to be perfectly OK; however, when working with reflection you will be using System.Int32 as the type name, and since int is just an alias, this
<System.Int32>33</System.Int32>
will not deserialize.
Now additional complexity comes from the fact that I need to handle any possible data type.
So solutions that utilize System.Activator.CreateInstance(..) and casting won't work, unless I go down the path of code generation and compilation as a way of achieving this (which I would rather avoid).
Notes:
A quick look with .NET Reflector revealed that XmlSerializer uses the internal class TypeScope; look at its static constructor to see that it initializes an inner Hashtable with the mappings.
So the question is what is the best (and I mean elegant and solid) way to workaround this sad fact?
I don't exactly see where your problem originally stems from. XmlSerializer will use the same syntax/mapping for serializing as for deserializing, so there is no conflict when using it for both serializing and deserializing.
The type tags used are probably some XML-standard thing, but I'm not sure about that.
I guess the problem is more your usage of reflection. Do you instantiate your imported/deserialized objects by calling Activator.CreateInstance?
I would recommend the following instead, if you have some type Foo to be created from the XML in xmlReader:
Foo deserializedObject = (Foo)new XmlSerializer(typeof(Foo)).Deserialize(xmlReader);
Alternatively, if you don't want to switch to the XmlSerializer completely, you could do some preprocessing of your input. The standard way would then be to create some XSLT in which you transform all those type elements to their aliases, or vice versa. Then, before processing the XML, you apply your transformation using System.Xml.Xsl.XslCompiledTransform and use your (or the reflector's) mappings for each type.
Why don't you serialize each field's type as an attribute?
Instead of
<age>
<int>33</int>
</age>
you could do
<age type="System.Int32">
33
</age>