How can I disambiguate my type from a module type - swift

I am having trouble referring to a symbol from a 3rd party module; my code looks something like this:
import Foo // 3rd party module

struct Foo { // my type in my module
    // ...
}

struct Bar: Foo.ProtocolA { // here Swift refers to my struct instead of the module
}
I cannot simply use ": ProtocolA" because that name is already being used in my module.
Is there a way to disambiguate this?
I have seen similar questions, but they solve a different problem: disambiguating a module type rather than one's own type.
TIA

You can fix this by importing the specific declarations you need from the module directly; e.g. in your example that would become something like:
import protocol Foo.ProtocolA
You can import many kinds of declarations this way (class, struct, enum, func, let, var), not only a protocol.
The obvious downside is that the number of imports can balloon quickly if you need a lot of declarations from that particular module.
Personally I usually try to avoid naming collisions altogether, sometimes at the expense of slightly less descriptive (but still understandable) naming. Additionally, as far as I know there is currently no way to alias imports in Swift.
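For reference, here is a minimal sketch of how declaration-level imports look in practice. It uses real Foundation types purely for illustration; the idea is the same for any 3rd party module:

// Declaration-level imports: pull in single symbols instead of the whole module.
import struct Foundation.URL             // import one struct
import class Foundation.JSONDecoder      // import one class
import enum Foundation.ComparisonResult  // import one enum

let site = URL(string: "https://swift.org")      // URL is usable unqualified
let order: ComparisonResult = .orderedAscending  // and so is ComparisonResult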

Related

Simple container bindings in Swift?

Disclaimer: I'm still learning Swift so forgive me if I haven't understood certain concepts/capabilities/limitations of Swift.
With the Swinject framework, if you want to bind a protocol to a class, it seems you have to return the class instance in a closure, such as:
container.register(Animal.self) { _ in Cat() }
Is it possible to instead pass two types to the register() method and have the framework instantiate the class for you? It would need to recursively check whether that class has any initializer dependencies, of course (Inversion of Control).
This is possible in the PHP world because you have the concept of reflection, which allows you to get the class types of the dependencies, allowing you to instantiate them on the fly. I wonder if Swift has this capability?
It would be much nicer to write this:
container.register(Animal.self, Cat.self)
This would also allow you to resolve any class from the container and have its dependencies resolved as well (without manually registering the class):
container.resolve(NotRegisteredClass.self)
Note: This only makes sense for classes that do not take scalar types as dependencies (as those need to be given explicitly, of course).
The second case - resolving a type without explicit registration - is currently not possible because of Swift's very limited support for reflection.
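To illustrate that limitation: Swift's Mirror API can inspect an instance you already have, but there is no standard way to enumerate a type's initializer parameters or build an instance from a metatype alone. A minimal sketch (the type and property names are just examples):

struct Cat {
    let name: String
}

// Mirror works on an existing instance...
let mirror = Mirror(reflecting: Cat(name: "Whiskers"))
for child in mirror.children {
    print(child.label ?? "?", child.value)   // prints: name Whiskers
}
// ...but there is no API to ask "what does Cat.init need?" and construct one for you.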
However, there is a SwinjectAutoregistration extension which will enable you to write something very close to your first example:
container.autoregister(Animal.self, initializer: Cat.init)
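For completeness, here is a rough sketch comparing the two styles, assuming a trivial Animal protocol and a Cat class with no initializer dependencies (in practice you would use one registration or the other):

import Swinject
import SwinjectAutoregistration

protocol Animal { func speak() -> String }
final class Cat: Animal { func speak() -> String { return "meow" } }

let container = Container()

// Manual registration: you construct the instance in the factory closure yourself.
container.register(Animal.self) { _ in Cat() }

// Auto-registration: hand over the initializer and let the extension
// resolve any initializer dependencies from the container.
container.autoregister(Animal.self, initializer: Cat.init)

let animal = container.resolve(Animal.self)   // Optional<Animal>
print(animal?.speak() ?? "unresolved")        // "meow"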

Best practice for using same functions between classes

In Swift, what is the best practice for having several functions common to more than one class, where inheritance between those classes isn't feasible?
I'm new to programming, so please don't condescend. It's just that when I first started learning a few months ago I was told it's terrible practice to repeat code, and at the time I was coding in Ruby, where I could create a module in which all the functions resided and then just include the module in any class where I wanted to use those functions. As long as all the variables used in the module's functions were declared in the classes, the code worked.
Is there a similar practice in Swift, or should I be doing something else, like making a bunch of global functions and passing the instance variables to those functions? Please be as specific as possible, as I'm going to follow your advice for all the Swift code I write going forward. Thanks!
The simple answer to your question is a protocol.
Define a protocol, and give it a default implementation in a protocol extension:
protocol ProtocolName {
    /* common functions */
    func echoTestString()
}

extension ProtocolName {
    /* default implementation */
    func echoTestString() {
        print("default string")
    }
}
A class conforming to the protocol, using the default implementation:
class ClassName: ProtocolName {
}

ClassName().echoTestString() // default string
A class conforming to the protocol with its own implementation:
class AnotherClass: ProtocolName {
    func echoTestString() {
        print("my string")
    }
}

AnotherClass().echoTestString() // my string
While this is an opinion, I think the right route is to use a framework target. Protocols work too, but with a framework you can:
Share across projects
Keep things local in scope, exposing only what you need
Be agnostic in many ways
If you want to use Swift's import statement (and all that comes with it), and you want complete separation of code, you pretty much need a framework target. Protocols are used when you are within a single project, do not want to "repeat" code, and know everything will stay local.
If what you want is to (a) use protocols across projects, (b) include truly separate code, (c) have global functions, and (d) pass instance variables to them... consider a separate target.
EDIT: Looking at your question title ("using same functions") and thinking about OOP versus functional programming, I thought I'd add something that doesn't change my solution but enhances it: functional programming means you can pass a function as a parameter. I don't think that's what you were asking, but it's another piece of being Swifty in your coding.
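For the sake of illustration, here is a minimal sketch of passing a function as a parameter (the names are made up):

// A higher-order function: it takes another function as a parameter.
func applyTwice(_ value: Int, using transform: (Int) -> Int) -> Int {
    return transform(transform(value))
}

let result = applyTwice(3, using: { $0 * 2 })   // 12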

Swift: access level between `private` and `internal`?

In my Swift code, I often use the private modifier to limit the visibility of helper classes. For example, in one file, I'll have a GridController and a GridControllerModel.
The GridController (the UI) should be accessible to the rest of the application, but the model class is wholly internal and should never be accessed by the rest of the application.
I can address this in Swift by making both classes private and keeping them in the same file, but this gets unwieldy as the classes get bigger. What I'd like to do is keep each class in a separate file (for programming convenience) but prevent access to the model class from anything but GridController (for information hiding purposes).
Is there any way to do this in Swift?
As others have said, there is no way to do exactly what you want today in Swift.
One alternative is to use an extension in another file to add GridControllerModel as a nested type of GridController, e.g.:
//GridControllerModel.swift
extension GridController {
    struct GridControllerModel {
        let propertyOne: String
        let propertyTwo: String
    }
}
This allows your GridController class in its own separate file to declare something like:
var model = GridControllerModel()
However, the rest of the application can still access the GridControllerModel type like this:
//SomeOtherClass.swift
var nested = GridController.GridControllerModel()
So you do achieve some separation by making the model type a nested type of GridController, but it isn't true access control. On the plus side, it will not appear in code completion outside of the GridController class as "GridControllerModel"; you would need to first type "GridController" and then "." to see the nested type "GridController.GridControllerModel".
It's also worth noting that an additional access control level is currently under review and is likely to be in the next version of Swift (3.0):
https://github.com/apple/swift-evolution/blob/master/proposals/0025-scoped-access-level.md
Assuming this proposal is accepted and implemented, you would be able to update the nested type declaration like this:
//GridControllerModel.swift
local extension GridController {
    struct GridControllerModel {
        let propertyOne: String
        let propertyTwo: String
    }
}
(Note the "local" keyword above now). This would make the GridControllerModel type invisible and inaccessible to all classes except GridController and any extensions of GridController.
So I would recommend considering this nested type approach today, because when Swift 3.0 arrives later this year, it is likely to support what you want by simply adding a keyword in front of the nested type declaration. And in the meantime, you get some of the separation you want as well.
No, there isn't an access modifier that restricts visibility to only a certain set of files. But you probably don't need that.
What does exist:
private: restricts visibility to within the same source file.
internal: restricts visibility to within the same module.
If you're building a piece of software that's too big for one source file, but that both defines an outward-facing interface and has internal details that should stay hidden from clients of that interface, then you're probably working at a level where it's appropriate to build a framework. Your framework can then define features that are internal, for its own use only, kept separate from the public interface it exposes to clients.
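To make that concrete, here is a minimal sketch of how the split might look inside a hypothetical GridKit framework target (the names are made up):

// Inside the GridKit framework target:

public final class GridController {      // public: visible to apps that import GridKit
    var model = GridControllerModel()     // internal property: hidden from clients
    public init() {}
}

final class GridControllerModel {         // internal (the default): invisible outside GridKit
    var cells: [Int] = []
}

// In an app target:
// import GridKit
// let controller = GridController()      // fine
// let model = GridControllerModel()      // error: GridControllerModel is internal to GridKit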

Why are classes inside Scala package objects dispreferred?

Starting with 2.10, -Xlint complains about classes defined inside of package objects. But why? Defining a class inside a package object should be exactly equivalent to defining the classes inside of a separate package with the same name, except a lot more convenient.
In my opinion, one of the serious design flaws in Scala is the inability to put anything other than a class-like entity (e.g. variable declarations or function definitions) at the top level of a file. Instead, you're forced to put them into a separate "package object" (often in package.scala), separate from the rest of the code they belong with, which violates the basic rule that conceptually related code should be physically related as well. I don't see any reason why Scala can't allow anything at the top level that it allows at lower levels, with anything non-class-like automatically placed into the package object so that users never have to worry about it.
For example, in my case I have a util package, and under it I have a number of subpackages (util.io, util.text, util.time, util.os, util.math, util.distances, etc.) that group heterogeneous collections of functions, classes and sometimes variables that are semantically related. I currently store all the various functions, classes, etc. in a package object sitting in a file called io.scala or text.scala or whatever, in the util directory. This works great and it's very convenient because of the way functions and classes can be mixed, e.g. I can do something like:
package object math {
  // Coordinates on a sphere
  case class SphereCoord(lat: Double, long: Double) { ... }

  // Great-circle distance between two points
  def spheredist(a: SphereCoord, b: SphereCoord) = ...

  // Area of a rectangle running along latitude/longitude lines
  def rectArea(topleft: SphereCoord, botright: SphereCoord) = ...

  // ...

  // Exact-decimal functions
  class DecimalInexactError extends Exception

  // Format a floating-point value in decimal; error if it can't be done exactly
  def formatDecimalExactly(num: Double) = ...

  // ...
}
Without this, I would have to split the code up inconveniently according to fun vs. class rather than by semantics. The alternative, I suppose, is to put them in a normal object -- kind of defeating the purpose of having package objects in the first place.
But why? Defining a class inside a package object should be exactly equivalent to defining the classes inside of a separate package with the same name,
Precisely. The semantics are (currently) the same, so if you favor defining a class inside a package object, there should be a good reason. But the reality is that there is at least one good reason not to (keep reading).
except a lot more convenient
How is that more convenient?
If you are doing this:
package object mypkg {
  class MyClass
}
You can just as well do the following:
package mypkg {
  class MyClass
}
You'll even save a few characters in the process :)
Now, a good and concrete reason not to go overboard with package objects is that while packages are open, package objects are not.
A common scenario is to have your code split among several projects, with each project defining classes in the same package. No problem there.
On the other hand, a package object is (like any object) closed; as the spec puts it, "there can be only one package object per package". In other words, you will only be able to define a package object in one of your projects.
If you attempt to define a package object for the same package in two distinct projects, bad things will happen: you will effectively end up with two distinct versions of the same JVM class (in our case, two competing class files for the mypkg package object).
Depending on the case, the compiler may complain that it cannot find something that you defined in the first version of your package object, or you may get a "bad symbolic reference" error, or potentially even a runtime error. This is a general limitation of package objects, so you have to be aware of it.
In the case of defining classes inside a package object, the solution is simple: don't do it (given that you won't gain anything substantial compared to defining the class at the top level of the package).
For type aliases, vals and vars, we don't have that luxury, so in those cases it's a matter of weighing whether the syntactic convenience (compared to defining them in a plain object) is worth it, and then taking care not to define duplicate package objects.
I have not found a good answer to why this semantically equivalent operation generates a lint warning. Methinks this is a lint bug. The only thing I have found that must not be placed inside a package object (as opposed to a plain package) is an object that implements main (or extends App).
Note that -Xlint also complains about implicit classes declared inside package objects, even though they cannot be declared at package scope. (See http://docs.scala-lang.org/overviews/core/implicit-classes.html for the rules on implicit classes.)
I figured out a trick that allows for all the benefits of package objects without the complaints about deprecation. In place of
package object foo {
  ...
}
you can do
protected class FooPackage {
  ...
}

package object foo extends FooPackage { }
Works the same but no complaint. Clear sign that the complaint itself is bogus.

Redeclaring/extending typedef defined in Objective-C protocol in class conforming to protocol

I have an Objective-C protocol:
typedef enum {
    ViewStateNone
} ViewState;

@protocol ViewStateable
- (void)initViewState:(ViewState)viewState;
- (void)setViewState:(ViewState)viewState;
@end
I'm using this protocol in the following class:
#import "ViewStateable.h"
typedef enum {
    ViewStateNone,
    ViewStateSummary,
    ViewStateContact,
    ViewStateLocation
} ViewState;

@interface ViewController : UIViewController <ViewStateable> {
}
@end
I won't go too far into the specifics of my application, but what I'm doing here is typedefing an enumeration in a protocol so that the protocol's methods can take an input value of that type.
I'm then hoping to redeclare or extend that typedef in the classes that conform to the protocol, so that each class can have its own view states. However, I'm running into the following two errors:
Redeclaration of enumerator 'ViewStateNone'
Conflicting types for 'ViewState'
I'm ashamed to admit that my knowledge of C (namely typedefs) is not extensive, so is what I'm trying to do here, firstly, possible and, secondly, sensible?
Cheers friends.
It is neither possible nor sensible. This comes from the fact that typedefs and enums are basically just defines. (Well, not really, but for this purpose, they are.) If you need to do things like this, you might want to review your design (see below).
More info
typedef type newtype;
is (almost) equivalent to
#define newtype type
and
enum {
    ViewStateNone
};
is basically the same as
#define ViewStateNone 0
There are a few finer points with regard to the differences between the two, and the most compelling argument for using enums and typedefs is of course compile-time checking of integer constants.
However, once a typedef enum { ... } type; has been seen, it cannot be unseen; its name is reserved for it, and it alone.
There are ways around all of this; but those are paths rarely traveled, and generally for good reason. It quickly becomes incredibly unmanageable.
As a solution, you might want to create a new class, MyViewState, which represents a view state and associated information, which could easily just be a wrapper around an NSInteger.
In closing: Review your design. I fear you might be doing something overly convoluted.
It's certainly not possible in the form you have it, for reasons the errors explain fairly succinctly. An enum constant can only be declared once in any scope, and the same goes for a typedef.
Moreover, there's a bit of a conceptual difficulty with defining a type in a protocol that implementors can then redefine. The implementors should be conforming to the type, not adding to it. If each class needs to be able to determine its own set of values, the protocol must use a type that is general enough to hold all the values that might be wanted. In this case you could use int or, probably more sensibly, something readable like NSString. You might also add another method to the protocol that reports back the values supported by the implementing class.