This sounds like a stupid question, but I have been trying to find a solution for hours now, and I still don't know what to do. I am using Swift 3.0, and I am having an issue calling a method inside a singleton class from a selector inside another class. My singleton class is as follows:
class Singleton: NSObject {
    static let sharedInstance = Singleton()
    private override init() {} // defeats instantiation
    func myAction() {
        // do something useful...
    }
}
Then, here is the class from which I am calling the method contained in the Singleton:
class StatusBarPresenter {
    func addItemsToMenu(menu: NSMenu) {
        ...
        menu.insertItem(withTitle: "Disconnect this network",
                        action: #selector(Singleton.sharedInstance.myAction),
                        keyEquivalent: "D", at: 4)
        ...
    }
}
Xcode doesn't complain about the code: it compiles without any errors or warnings, but the selector doesn't work. The NSMenuItem that I add to the menu is disabled, which means that the selector is not working. If the selector instead calls a method inside the class itself, everything works fine as usual. This is a screenshot of what I am getting:
Thanks to Martin R. for pointing out that in my code I was not setting an explicit target for the NSMenuItem, so the target was nil and the action was resolved through the responder chain instead of being sent to the singleton.
The following line added to the addItemsToMenu function after the call to insertItem solves the problem:
menu.item(at: 4)?.target = Singleton.sharedInstance
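Put together, a minimal sketch of what the fixed addItemsToMenu(menu:) might look like (the title, key equivalent, and index come from the snippet above; the rest of the menu setup is omitted):

import AppKit

class StatusBarPresenter {
    func addItemsToMenu(menu: NSMenu) {
        menu.insertItem(withTitle: "Disconnect this network",
                        action: #selector(Singleton.sharedInstance.myAction),
                        keyEquivalent: "D", at: 4)
        // Without an explicit target the action is looked up on the responder
        // chain and the item stays disabled; point it at the singleton instead.
        menu.item(at: 4)?.target = Singleton.sharedInstance
    }
}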
I have a class, MyViewController, with several actions which are triggered from menu items.
class MyViewController: NSViewController { … }
The actions are connected to the first responder in IB. The actions look like this:
@IBAction func removeSelectedItems(_ sender: AnyObject) {
    arrayController.remove(contentsOf: arrayController.selectedObjects)
}
The validateMenuItem(_:) code looks like this:
override func validateMenuItem(_ menuItem: NSMenuItem) -> Bool {
    let selection = arrayController.selectedObjects
    if (menuItem.action == #selector(removeSelectedItems(_:))) {
        return selection!.count > 0
    }
    return super.validateMenuItem(menuItem)
}
When I include the actions in the if checks, everything is fine. But if I don't, and validateMenuItem(_:) falls through to super, I get an exception:
[MyApp.MyViewController validateMenuItem:]: unrecognized selector sent to instance 0x618000165ac0
If I instead return false at the end of the method, there's no exception.
This happens when validateMenuItem(_:) is called, e.g. when the menu is opened. In spite of this, though, the action is triggered when the item is selected.
Am I wrong to be calling super at the end of the method? I would expect the responder chain to be queried until a match was found, not an exception claiming I didn't implement a method which I clearly did!
Am I wrong to be calling super at the end of the method
Yes. Neither NSViewController nor any of its superclasses implements validateMenuItem. Despite the override in Swift, it is not actually inherited. It is injected in Objective-C by an informal protocol (NSMenuValidation). [The Swift compiler doesn't understand that kind of trickery; hence the override despite the fact that we are not overriding anything.]
See https://forums.developer.apple.com/thread/46772
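So instead of falling through to super, handle the known actions and return a default for the rest. A minimal sketch of how validateMenuItem(_:) in MyViewController might look (returning true for unhandled items is an assumption; returning false, as the question tried, also avoids the exception):

override func validateMenuItem(_ menuItem: NSMenuItem) -> Bool {
    if menuItem.action == #selector(removeSelectedItems(_:)) {
        return arrayController.selectedObjects.count > 0
    }
    // There is no superclass implementation to fall back on, so return a
    // default instead of calling super.validateMenuItem(menuItem).
    return true
}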
To update complications on the watch, I use a singleton class ComplicationController (irrelevant code has been omitted below):
final class ComplicationController: NSObject, CLKComplicationDataSource {
    static let shared = ComplicationController() // Instantiate the singleton
    private override init() {
        super.init()
        print("====self: \(self): init")
    } // Others can't init the singleton
}
The singleton is created on the watch by the extension delegate:
class ExtensionDelegate: NSObject, WKExtensionDelegate {
    override init() {
        super.init()
        _ = ComplicationController.shared
    }
}
When I launch the watch extension with a breakpoint at the print statement above, the execution breaks and the stack trace is:
When I then execute the print statement in a single step, the debugger shows:
====self: <Watch_Extension.ComplicationController: 0x7bf35f20>: init
When I then continue, the execution breaks again at the same breakpoint, and the stack trace is:
After another single step, the debugger shows:
====self: <Watch_Extension.ComplicationController: 0x7d3211d0>: init
Obviously, the CLKComplicationServer has created another instance of the singleton.
My question is: Did I do something wrong, or is this a bug? If it is a bug, is there a workaround?
PS: Not initializing ComplicationController in the ExtensionDelegate does not help either; in that case, the second instance is created as soon as ComplicationController.shared is used anywhere in the code.
I found a workaround:
Do not instantiate the singleton in the app, i.e. do not use ComplicationController.shared anywhere.
If you have to call functions on ComplicationController.shared, post a notification instead, e.g. to the default notification center.
The ComplicationController singleton has to register for such notifications, and when it receives one, it executes the required function.
This worked for me.
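A minimal sketch of that workaround, assuming a hypothetical notification name and handler (neither is from the original post), with the CLKComplicationDataSource methods omitted as in the question:

import Foundation
import ClockKit

// Hypothetical notification name, used only for this sketch.
extension Notification.Name {
    static let updateComplications = Notification.Name("UpdateComplications")
}

final class ComplicationController: NSObject {
    // CLKComplicationDataSource conformance and its required methods are
    // omitted here, as in the question.
    static let shared = ComplicationController()

    private override init() {
        super.init()
        // Register for the notification instead of being called directly.
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(handleUpdate),
                                               name: .updateComplications,
                                               object: nil)
    }

    @objc func handleUpdate() {
        // Whatever work callers previously triggered via .shared,
        // e.g. asking the server to reload the active complications.
        let server = CLKComplicationServer.sharedInstance()
        server.activeComplications?.forEach { server.reloadTimeline(for: $0) }
    }
}

// Elsewhere in the watch extension, instead of touching ComplicationController.shared:
// NotificationCenter.default.post(name: .updateComplications, object: nil)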
This one has me stumped. I can't figure out why Swift is complaining that self.init is called more than once in this code:
public init(body: String) {
    let parser = Gravl.Parser()
    if let node = parser.parse(body) {
        super.init(document: self, gravlNode: node)
    } else {
        // Swift complains with the mentioned error on this line (it doesn't matter what "whatever" is):
        super.init(document: self, gravlNode: whatever)
    }
}
Unless I'm missing something, it's very obvious that it is only calling init once. Funnily enough, if I comment out the second line, Swift complains that super.init isn't called on all paths, lol.
What am I missing?
Update:
Ok, so the problem was definitely trying to pass self in the call to super.init. I totally forgot I was doing that, ha. I think I had written that experimentally, gotten it to compile, and thought it might actually work, but it looks like it's actually a bug that it compiled that way at all.
Anyhow, since passing self to an initializer is kind of redundant (it's the same object), I changed the parent initializer to accept an optional document parameter (it's just an internal initializer, so no big deal); if it's nil, I just set it to self in the parent initializer.
For those curious, this is what the parent initializer (now) looks like:
internal init(document: Document?, gravlNode: Gravl.Node) {
    self.value = gravlNode.value
    super.init()
    self.document = document ?? self as! Document
    // other stuff...
}
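For completeness, a sketch of how the public initializer from the question might call it now (assuming parse(_:) returns an optional Gravl.Node; the error handling here is just a placeholder):

public init(body: String) {
    let parser = Gravl.Parser()
    guard let node = parser.parse(body) else {
        fatalError("failed to parse body") // placeholder; real handling omitted
    }
    // Pass nil; the parent initializer substitutes self as the document.
    super.init(document: nil, gravlNode: node)
}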
I suspect this is a bad diagnostic (i.e. the wrong error message). It would be very helpful if you had a full example we could experiment with, but this line doesn't make sense (and I suspect it is the underlying problem):
super.init(document: self, gravlNode: node)
You can't pass self to super.init. You're not initialized yet (you're not initialized until you've called super.init). For example, consider the following simplified code:
class S {
    init(document: AnyObject) {}
}

class C: S {
    public init() {
        super.init(document: self)
    }
}
This leads to the error 'self' used before super.init call, which I believe is the correct error.
EDIT: I believe Hamish has definitely uncovered a compiler bug. You can exploit it this way in Xcode 8.3.1 (haven't tested on 8.3.2 yet):
class S {
    var x: Int
    init(document: S) {
        self.x = document.x
    }
}

class C: S {
    public init() {
        super.init(document: self)
    }
}
let c = C() // malloc: *** error for object 0x600000031244: Invalid pointer dequeued from free list
I'm using Swift and Objective-C reflection to try to invoke a method, but the method accepts arguments and I can't work out how to construct a Selector that refers to a method which accepts arguments.
Here is some sample code:
class Thing: NSObject {
    func doSomething() {
    }
    func doSomething(str: String) {
    }
}

extension Thing {
    func doSomethingElse(str: String) -> String {
        return str
    }
}

let t = Thing()
var selector = Selector("doSomething")
//selector = Selector("doSomething:")
if t.responds(to: selector) {
    t.perform(selector)
}
So I can invoke doSomething with no problem, but I cannot seem to create a Selector from a string that refers to doSomething(str: String). I attempted to do so with the string "doSomething:" (the commented-out line).
Just in case it makes any difference, I'm ultimately attempting to invoke the extension method doSomethingElse.
How can I invoke Swift methods with arguments via reflection/selectors?
P.S. I'm aware that in general you're supposed to use #selector nowadays, but this won't work in my case because the method doSomething might not exist in the compiled code.
In Objective-C land, a Swift method with the signature doSomething(str: String) is called "doSomethingWithStr:" (and, likewise, doSomethingElse(str: String) is called "doSomethingElseWithStr:").
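Based on that, a minimal sketch using the Thing class above (the selector strings assume Swift 3's default Objective-C name mangling; the "hello" argument is just an example):

let t = Thing()

let selector = Selector("doSomethingWithStr:") // doSomething(str:) as seen by Objective-C
if t.responds(to: selector) {
    t.perform(selector, with: "hello")
}

// The extension method works the same way; its return value comes back
// as an Unmanaged reference that can be bridged to String.
let elseSelector = Selector("doSomethingElseWithStr:")
if t.responds(to: elseSelector) {
    let result = t.perform(elseSelector, with: "hello")?.takeUnretainedValue() as? String
    print(result ?? "no result")
}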
I'm trying to create a reusable test harness in Swift with the idea that subclasses will extend the test harness to provide the instance under test, and can add their own subclass-specific test methods, something like this:
class FooTestHarness: XCTestCase {
    let instance: Foo

    init(instance: Foo) {
        self.instance = instance
    }

    func testFooBehavior() {
        XCTAssert(instance.doesFoo())
    }
}
class FooPrime: Foo {
    func doesFooPrime() -> Bool { /* ... */ }
}
class FooPrimeTests: XCTestCase {
    init() {
        super.init(instance: FooPrime())
    }

    func myInstance() -> FooPrime {
        return instance as FooPrime
    }

    func testFooPrimeBehavior() {
        XCTAssert(myInstance().doesFooPrime())
    }
}
However, when Xcode's test runner tries to run FooPrimeTests, it doesn't call the no-argument init(); it calls init(invocation: NSInvocation!) (and fails because there isn't one). I tried to override this in FooTestHarness:
init(invocation: NSInvocation!, instance: Foo) {
    self.instance = instance
    super.init(invocation: invocation)
}
and in FooPrimeTests:
init(invocation: NSInvocation!) {
    super.init(invocation: invocation, instance: FooPrime())
}
but this fails with the message 'NSInvocation' is unavailable.
Is there a workaround?
I'm not so sure I got it right, but checking the code you suggested, you should get a compiler error, which I reckon is quite normal since your FooPrimeTests is just subclassing XCTestCase, which has different initializers:
init!(invocation: NSInvocation!)
init!(selector: Selector)
init()
Probably when you posted your question you were running an older version of Swift (I'm currently running the Xcode 6.2 beta); that's why you couldn't see the error. But, to say it again if I got your point right, your class FooPrimeTests can't see your custom initializer simply because it subclasses XCTestCase rather than FooTestHarness, which is the class where init(instance: Foo) is defined.
So you probably want to define FooPrimeTests as a subclass of FooTestHarness. That way you should be able to see your initializer correctly. Hope this helps.
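A minimal sketch of that suggestion, reusing the types from the question as-is (whether the test runner's init(invocation:) requirement is then satisfied is a separate matter, as discussed above):

import XCTest

class FooPrimeTests: FooTestHarness {
    init() {
        // Provide the instance under test to the harness.
        super.init(instance: FooPrime())
    }

    func myInstance() -> FooPrime {
        return instance as! FooPrime
    }

    func testFooPrimeBehavior() {
        XCTAssert(myInstance().doesFooPrime())
    }
}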