Interaction with enum members in a Swift-based app - swift

I'm beginning to teach myself Swift and I'm going through examples of games at the moment. I've run across a line of code that I thought was peculiar:
scene.scaleMode = .ResizeFill
In languages I'm used to (C / Java), the "." notation is used to reference some sort of structure or object, but I'm not exactly sure what this line of code does, as there is no object explicitly specified before the ".".
Any clarification of this unspecified "." reference, or of when and how it can be used, would be great.
P.S. I'm using SpriteKit in Xcode.

In Swift, as in the other languages you mentioned, '.' is a member access operator. The syntax you are referring to is a piece of shorthand that Swift allows because it is a type-safe language.
The compiler recognises that the property you are assigning to is of type SKSceneScaleMode and so the value you are assigning must be one of that type's enumerated values - so the enumeration name can be omitted.
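For illustration, here is a minimal SpriteKit sketch of the two equivalent spellings (newer Swift versions lowercase the case to .resizeFill; the question's version of Swift spells it .ResizeFill):

import SpriteKit

let scene = SKScene()
scene.scaleMode = .resizeFill                  // shorthand: the enum type is inferred from scaleMode
scene.scaleMode = SKSceneScaleMode.resizeFill  // the same assignment with the enum type written out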

To add to PaulW11's answer, what's happening here is only valid syntax for enums, and won't work with any other type (class, struct, method, function). Swift knows that the type of the property you are assigning to is an enum of type SKSceneScaleMode, so it lets you refer to the enum member without having to explicitly give the type of the enum (i.e. SKSceneScaleMode.ResizeFill).
There are some situations where there will be ambiguity and you will have to give the full name; this depends on the context. For example, you may have two different enum types in scope that both have a matching member name, as in the sketch below.
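Here is a minimal sketch of that kind of ambiguity, using hypothetical enum and function names:

enum HorizontalAlignment { case left, right }
enum TextAlignment { case left, right, justified }

func apply(_ alignment: HorizontalAlignment) { print("horizontal") }
func apply(_ alignment: TextAlignment) { print("text") }

// apply(.left)               // error: ambiguous - both overloads accept a ".left"
apply(TextAlignment.left)     // spelling out the enum type resolves the ambiguity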
EDIT
Updating this answer as I incorrectly stated that this was only applicable to enums, which is not true. There is a good blog post here which explains it in more detail:
http://ericasadun.com/2015/04/21/swift-occams-code-razor/
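For example, the same ".member" shorthand also resolves static members of the expected type, not just enum cases. A small sketch using UIKit types (UIColor.red and CGFloat.leastNormalMagnitude are static properties, not enum cases):

import UIKit

let label = UILabel()
label.backgroundColor = .red                // shorthand for UIColor.red
let tiny: CGFloat = .leastNormalMagnitude   // shorthand for CGFloat.leastNormalMagnitude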

Related

Why bother casting the return value since the type has been specified when calling the function?

I have recently been learning editor scripting in Unity, and I came across a piece of code in the Unity Manual like this:
EditorWindowTest window = (EditorWindowTest)EditorWindow.GetWindow(typeof(EditorWindowTest), true, "My Empty Window");
I don't know why I should bother casting the result with (EditorWindowTest) again, since the type has already been specified in the parameter of GetWindow().
Thanks in advance :)
There are multiple overloads of the EditorWindow.GetWindow method.
The one used in your code snippet is one of the non-generic ones. It accepts a Type argument which it can use at runtime to create the right type of window. However, since it doesn't use generics, it's not possible to know the type of the window at compile time, so the method just returns an EditorWindow, as that's the best it can do.
You can hover over a method in your IDE to see the return type of any method for yourself.
When using one of the generic overloads of the GetWindow method, you don't need to do any manual casting, since the method already knows at compile time the exact type of the window and returns an instance of that type directly.
The generic variants should be used when possible, because it makes the code safer by removing the need for casting at runtime, which could cause exceptions.
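As a rough analogue of why the generic variant removes the cast - sketched in Swift rather than Unity's C#, with hypothetical Window types and getWindow functions that are not the Unity API:

class Window {
    required init() {}
}

class TestWindow: Window {
    func runTest() { print("running test") }
}

// Non-generic: the return type is only known as Window, so the caller must cast.
func getWindow(ofType type: Window.Type) -> Window {
    return type.init()
}

// Generic: the exact type is known at compile time, so no cast is needed.
func getWindow<T: Window>(_ type: T.Type) -> T {
    return type.init()
}

let a = getWindow(ofType: TestWindow.self) as! TestWindow   // cast required; as! traps if the cast fails
a.runTest()

let b = getWindow(TestWindow.self)                          // b is already a TestWindow
b.runTest()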
If you look closely, GetWindow's return type is EditorWindow, not EditorWindowTest, so the typecast makes sense.
https://docs.unity3d.com/ScriptReference/EditorWindow.GetWindow.html

Why is a plus operator required in some Powershell type names?

Why is it that, in Powershell, the System.DayOfWeek enum can be referred to like [System.DayOfWeek], whereas the System.Environment.SpecialFolder enum must be referred to like [System.Environment+SpecialFolder] (note the plus character)?
My guess is because SpecialFolder is part of the static Environment class and DayOfWeek is sitting directly in the System namespace, but I'm having trouble finding any information on this. Normally static members would use the "static member operator", but that doesn't work in this case, nor does anything else I try except the mysterious plus character...
[System.DayOfWeek] # returns enum type
[enum]::GetValues([System.DayOfWeek]) # returns enum values
[enum]::GetValues([System.Environment.SpecialFolder]) # exception: unable to find type
[enum]::GetValues([System.Environment]::SpecialFolder) # exception: value cannot be null
[enum]::GetValues([System.Environment+SpecialFolder]) # returns enum values
System.Environment.SpecialFolder is definitely a type, and in C# both enums work the same way:
Enum.GetValues(typeof(System.Environment.SpecialFolder)) // works fine
Enum.GetValues(typeof(System.DayOfWeek)) // also works
I'd really like to understand why there's a distinction in Powershell and the reasoning behind this behaviour. Does anyone know why this is the case?
System.Environment.SpecialFolder is definitely a type
Type SpecialFolder, which is nested inside type Environment, is located in namespace System:
C# refers to that type with its full type name, as in the quoted passage; that is, it uses . not only to separate the namespace from the containing type's name, but also to separate the latter from its nested type's name.
By contrast, PowerShell uses a .NET reflection method, Type.GetType(), to obtain a reference to the type at runtime:
That method uses a language-agnostic notation to identify types, as specified in the documentation topic Specifying fully qualified type names. Tip of the hat to PetSerAl.
In that notation, it is + that is used to separate a nested type from its containing type (not ., as in C#).
That is, a PowerShell type literal ([...]) such as:
[System.Environment+SpecialFolder]
is effectively the same as taking the content between [ and ], System.Environment+SpecialFolder, and passing it as a string argument to Type.GetType, namely (expressed in PowerShell syntax):
[Type]::GetType('System.Environment+SpecialFolder')
Note that PowerShell offers convenient extensions (simplifications) to .NET's language-agnostic type notation, notably:
the ability to use PowerShell's type accelerators (such as [regex] for [System.Text.RegularExpressions.Regex]),
the ability to omit the System. prefix from namespaces (e.g. [Collections.Generic.List`1[string]] instead of [System.Collections.Generic.List`1[string]]),
and the ability to omit the generic arity (e.g. `1) when a list of type arguments is passed (e.g. [Collections.Generic.List[string]] instead of [Collections.Generic.List`1[string]]).
See this answer for more information.

Terminology of Optionals in Swift or other languages

In Swift, the elements we manipulate all have types.
When we use these types, we can add a '!', '?' or nothing to express their nullability.
What shall I call the '?' or '!' used to express this trait?
A type decorator? A decorator? An operator? Something else?
What shall I call the type created when using this character?
Is it a new type? Is it a decorated type? A type variation?
The Swift compiler seems to consider them new types. However, my question is not implementation or language dependent, and therefore I tagged it as language-agnostic.
Edit: I'm looking for a language-agnostic name. I understand from pranjalsatija's comment that optionals are defined as a compound type.
However, this is a language implementation detail.
I could rephrase my questions as:
What do you call a character with a special meaning when used in a type definition, and what do you call the derived type?
This term should probably also apply to capitalized constants in Ruby, as the concept is similar.
? on the end of the type isn’t a decorator or operator. It’s hardcoded syntactic sugar in Swift that allows you to shorten Optional<Thing> to Thing?.
The ? doesn’t really have a name (at least I’ve never heard anyone on the Swift team use one), in the language reference it’s just described as “the postfix ?”. The language grammar doesn’t put it in a syntactic category.
Similarly, [Thing] is shorthand for Array<Thing>, but there isn’t a name for the square brackets in this context.
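A quick sketch showing that the sugared and unsugared spellings name exactly the same types:

let a: Optional<Int> = 42
let b: Int? = 42
print(type(of: a) == type(of: b))     // true

let xs: Array<String> = ["swift"]
let ys: [String] = ["swift"]
print(type(of: xs) == type(of: ys))   // true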
Describing Optional<Int> as "derived from" Int would be to misuse the term "derived". You can, if you want, describe it as "Optional specialized for Int".
In fact, you may be looking for the language-agnostic term for how Swift allows you to build types (like Optional<T> or Array<T>) that apply to any kind of type T without having to care what T actually is, in which case the term would probably be generics.
! is a little different. When applied to a type name as in Thing!, it’s shorthand for ImplicitlyUnwrappedOptional<Thing>, in the same manner as ?.
! when applied to a variable of type Thing? is equivalent to a postfix operator that tests the optional and, if it is nil, terminates your program - something like this:
// Illustrative only: the real force-unwrap is built into the language rather than defined in code.
postfix func ! <T>(value: T?) -> T {
    if let unwrapped = value {
        return unwrapped
    } else {
        fatalError("unexpectedly found nil while unwrapping an Optional value")
    }
}
so in this context, ! can be described as an operator. But not in the first context.
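A small sketch contrasting the two uses of ! (the variable names are just illustrative):

// Type position: an implicitly unwrapped optional String, as described above.
var lazilySetName: String!
lazilySetName = "Ada"
print(lazilySetName.count)   // members can be used without an explicit unwrap

// Expression position: a postfix force-unwrap that traps if the value is nil.
let maybeAge: Int? = 30
let age = maybeAge!
print(age)                   // 30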
For the terminology a given language uses to describe optionals, see the Option type wikipedia page.
Optionals in Swift technically are completely different types, not variations of the same type. However, to a developer, they seem to be variations, so we'll treat them as such. The ? and the ! don't really have a set, specified name just yet, at least not that I know of. In a sense, you shouldn't be calling them type decorators, because optionals are really new types on their own. So to answer your question, the ? and the ! are parts of a type's name, more than anything else. And the new type created when using a ? or an ! is just that. A brand new type.
The type created using '?' is an Optional, and the one created using '!' is an Implicitly Unwrapped Optional.
I think the answer here may help you out; they refer to both as decorations.
Here's a larger explanation about exclamation marks
And here's one for question marks

Definition of statically typed and dynamically typed

Which of these two definitions is correct?
Statically typed - Type matching is checked at compile time (and therefore can only be applied to compiled languages)
Dynamically typed - Type matching is checked at run time, or not at all. (this term can be applied to compiled or interpreted languages)
Statically typed - Types are assigned to variables, so that I would say 'x is of type int'.
Dynamically typed - types are assigned to values (if at all), so that I would say 'x is holding an int'
By this definition, static or dynamic typing is not tied to compiled or interpreted languages.
Which is correct, or is neither one quite right?
Which is correct, or is neither one quite right?
The first pair of definitions are closer but not quite right.
Statically typed - Type matching is checked at compile time (and therefore can only be applied to compiled languages)
This is tricky. I think if a language were interpreted but did type checking before execution began then it would still be statically typed. The OCaml REPL is almost an example of this except it technically compiles (and type checks) source code into its own byte code and then interprets the byte code.
Dynamically typed - Type matching is checked at run time, or not at all.
Rather:
Dynamically typed - Type checking is done at run time.
Untyped - Type checking is not done.
Statically typed - Types are assigned to variables, so that I would say 'x is of type int'.
Dynamically typed - types are assigned to values (if at all), so that I would say 'x is holding an int'
Variables are irrelevant. Although you only see types explicitly in the source code of many statically typed languages at variable and function definitions, all of the subexpressions also have static types. For example, "foo" + 3 is usually a static type error, because you cannot add a string to an int, but there is no variable involved.
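A minimal Swift sketch of that point (any statically typed language would do):

let fine = "foo" + "bar"      // type checks: both operands are Strings
// let broken = "foo" + 3     // rejected at compile time: '+' cannot be applied to
//                            // operands of type 'String' and 'Int' - no variable involved
print(fine)                   // "foobar"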
One helpful way to look at the word static is this: static properties are those that hold for all possible executions of the program on all possible inputs. Then you can look at any given language or type system and consider which static properties it can verify, for example:
JavaScript: no segfaults/memory errors
Java/C#/F#: if a program compiled and a variable had a type T, then the variable only holds values of this type - in all executions. But, sadly, reference types also admit null as a value - the billion dollar mistake.
ML has no null, making the above guarantee stronger
Haskell can verify statements about side effects, for example a property such as "this program does not print anything on stdout"
Coq also verifies termination - "this program terminates on all inputs"
How much you want to verify depends on taste and the problem at hand. All magic (verification) comes at a price.
If you have never ever seen ML before, do give it a try. At least give 5 minutes of attention to Yaron Minsky's talk. It can change your life as a programmer.
The second is a better definition in my eyes, assuming you're not looking for an explanation as to why or how things work.
Better again would be to say that
Static typing gives variables an EXPLICIT type that CANNOT change
Dynamic typing gives variables an IMPLICIT type that CAN change
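A brief Swift sketch of the "cannot change" half of this (note that the type here is inferred rather than explicit, yet it is still fixed at compile time):

var count = 5           // count is statically typed as Int
count = 7               // fine: still an Int
// count = "seven"      // rejected at compile time: cannot assign a String to an Int variable
print(count)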
I like the latter definition. Consider the type checking done when casting from a base class to a derived class in object-oriented languages like Java or C++, which fits the second definition and not the first: these are compiled languages with (optional) dynamic type checking.

How to check a value type?

How do I check the type of a value on runtime?
I'd like to find out where I'm creating doubles.
If you're using Objective-C classes, then the [myObject isKindOfClass: [InterestingClass class]] test is available. If you're using primitive types (which your question, quoting the "double" type, suggests), then you can't. However unless you're doing some very funky stuff, the compiler can tell you when primitive types do or don't match up, and when it doesn't will perform implicit promotion to the desired type.
It would be beneficial to know a little more about what the specific problem is that you're trying to solve, because it may be that the solution doesn't involve detecting the creation of doubles at all :-).
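For comparison, if the code in question is Swift rather than Objective-C, here is a rough sketch of the closest runtime checks ("value" is just a hypothetical example):

let value: Any = 3.14
if value is Double {
    print("value is a Double")
}
print(type(of: value))   // Double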
With very few exceptions, you never need to check a type at runtime. Typed variables can only hold their assigned types, and type promotion is determined at compile time.