I need to implement this class:
class PinImageView: UIImageView {
    var lastLocation: CGPoint
    var panRecognizer: UIPanGestureRecognizer

    init(imageIcon: UIImage?, location: CGPoint) {
        self.lastLocation = location
        super.init(image: imageIcon)
        self.center = location
        self.panRecognizer = UIPanGestureRecognizer(target: self, action: "detectPan:")
        self.gestureRecognizers = [panRecognizer]
    }
}
I think there is a kind of "cyclic" problem: the compiler wants me to initialize panRecognizer before calling super.init(image: imageIcon), but panRecognizer has self as its target, and self can only be used after calling the super init method.
How can I solve this?
panRecognizer is a non-optional stored property:
var panRecognizer: UIPanGestureRecognizer
so you have to give it a value before init completes, and specifically, as you have seen, before calling super.
It doesn't have to be that way, though. You can make it a lazy stored property instead, so it is created the first time it is accessed.
Now, in init you can set up the instance, call super, and then add the gesture recognizer (which creates it in the process):
lazy var panRecognizer: UIPanGestureRecognizer = {
    return UIPanGestureRecognizer(target: self, action: "detectPan:")
}()
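For completeness, here is a minimal sketch of the whole class with that change. The detectPan(_:) body and the userInteractionEnabled line are my own assumptions, since the question does not show that part:

class PinImageView: UIImageView {
    var lastLocation: CGPoint

    // Created lazily on first access, when self is already a valid target.
    lazy var panRecognizer: UIPanGestureRecognizer = {
        return UIPanGestureRecognizer(target: self, action: "detectPan:")
    }()

    init(imageIcon: UIImage?, location: CGPoint) {
        self.lastLocation = location
        super.init(image: imageIcon)
        self.center = location
        self.userInteractionEnabled = true // UIImageView ignores touches by default
        self.addGestureRecognizer(panRecognizer) // first access creates the recognizer
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Hypothetical pan handler: drags the view around.
    func detectPan(recognizer: UIPanGestureRecognizer) {
        let translation = recognizer.translationInView(superview)
        center = CGPoint(x: lastLocation.x + translation.x,
                         y: lastLocation.y + translation.y)
        if recognizer.state == .Ended {
            lastLocation = center
        }
    }
}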
I need to adjust the mask on a UILabel for an animation, but assigning layer.mask seems to have no effect:
class FirstView: UIView {
    var showAbout: UILabel {
        let label = UILabel()
        ...
        return label
    }

    var showAboutMask: CAShapeLayer {
        let layer = CAShapeLayer()
        ...
        return layer
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        showAbout.layer.mask = showAboutMask
        addSubview(showAbout)
    }
}
The label is shown, but the mask never appears.
It looks like you are modifying one instance of showAbout and then adding another instance as a subview. This happens because showAbout is a "computed property". Computed properties don't actually store a value; instead, they compute the value each time they are accessed (like a function).
You can verify this behavior by setting a breakpoint in the body of showAbout and noting that it gets hit twice.
Because of this, in init(frame:), when you call
showAbout.layer.mask = showAboutMask
you compute one UILabel instance and modify the mask of its layer. Then, when you call
addSubview(showAbout)
you compute another UILabel instance (without the layer mask) and add it as a subview.
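A minimal standalone sketch (my own, not the original code) makes the two-instances behavior visible:

import UIKit

class Demo {
    var label: UILabel { // computed: the getter runs on every access
        return UILabel()
    }
}

let demo = Demo()
print(demo.label === demo.label) // false: each access returns a new instance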
Computed properties have their use cases, but here you probably want to store the UILabel instance. One way of doing this is to assign it lazily with the result of a closure:
lazy var showAbout: UILabel = {
    // same label creation as before
}()
Alternatively, set the mask at the time the label is created, inside the getter itself:
var showAbout: UILabel {
    ....
    label.layer.mask = showAboutMask
    return label
}
Here is the situation: I have a protocol and an extension of it.
protocol CustomViewAddable {
    var aView: UIView { get }
    var bView: UIView { get }
    func setupCustomView()
}

extension CustomViewAddable where Self: UIViewController {
    var aView: UIView {
        let _aView = UIView()
        _aView.frame = self.view.bounds
        _aView.backgroundColor = .gray
        // this is for me to observe how many times aView is initialized
        print("aView: \(_aView)")
        return _aView
    }

    var bView: UIView {
        let _bView = UIView(frame: CGRect(x: 30, y: 30, width: 30, height: 30))
        _bView.backgroundColor = .yellow
        return _bView
    }

    func setupCustomView() {
        view.addSubview(aView)
        aView.addSubview(bView)
    }
}
And I make a view controller that conforms to this protocol, then add this custom aView to my view controller's view.
class MyVC: UIViewController, CustomViewAddable {
    override func viewDidLoad() {
        super.viewDidLoad()
        setupCustomView()
    }
}
When I run it, the console prints the init line twice, and when I try to do something with my custom aView it fails. (I simplified the code above to make my intention easy to see.)
Could anybody explain why, or suggest a fix? I would really appreciate it.
Because your var aView: UIView is a computed property, not a stored property, every time you access aView it creates a new UIView.
You can use Associated Objects from the Objective-C runtime to store the view. Here are some tutorials:
swift-objc-runtime
associated-objects
Hope this helps.
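For reference, a rough sketch of the associated-objects approach (my own illustration, replacing the computed aView in the extension above; the key variable is an assumption):

import UIKit
import ObjectiveC

private var aViewKey: UInt8 = 0

extension CustomViewAddable where Self: UIViewController {
    var aView: UIView {
        // Return the previously stored instance, if any...
        if let stored = objc_getAssociatedObject(self, &aViewKey) as? UIView {
            return stored
        }
        // ...otherwise create the view once and attach it to self.
        let _aView = UIView(frame: view.bounds)
        _aView.backgroundColor = .gray
        objc_setAssociatedObject(self, &aViewKey, _aView, .OBJC_ASSOCIATION_RETAIN_NONATOMIC)
        return _aView
    }
}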
Basically, with the way you implemented setupCustomView, nothing can work as intended because, as mentioned in another answer, you're using a computed property, so the view is created again every time you access it.
You don't need associated objects or anything like that to achieve what you want; you only need to keep a reference to aView at the beginning and avoid computing it again:
func setupCustomView() {
    let tView = aView // computed only once
    view.addSubview(tView)
    tView.addSubview(bView)
}
I hope this helps.
Hello there. I am trying to figure out why I can't get a cross on the game board when a tap is made. The TTTImageView class is the one below the first code block, and it is used in the main code. I would appreciate it if someone could help me with this.
@IBOutlet var fields: [TTTImageView]!

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
}

func fieldTapped(recognizer: UITapGestureRecognizer) {
    let tappedField = recognizer.view as! TTTImageView
    tappedField.setPlayer(_player: "x.png")
}

func setupField() {
    for index in 0 ... fields.count - 1 {
        let gestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(fieldTapped))
        gestureRecognizer.numberOfTapsRequired = 1
        fields[index].addGestureRecognizer(gestureRecognizer)
    }
}
import UIKit

class TTTImageView: UIImageView {
    var player: String?
    var activated = false

    func setPlayer(_player: String) {
        self.player = _player
        if activated == false {
            if _player == "x.png" {
                self.image = UIImage(named: "x.png")
            } else {
                self.image = UIImage(named: "o.png")
            }
            activated = true
        }
    }
}
Make sure, where you call fields[index].addGestureRecognizer(gestureRecognizer), that you also set isUserInteractionEnabled to true:
fields[index].isUserInteractionEnabled = true
Whenever you want to detect user activity (e.g. a UITapGestureRecognizer on a UIImageView), you need to set this to true; UIImageView has it disabled by default.
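For example, the loop in setupField could enable interaction as it attaches each recognizer (a sketch based on the code above):

func setupField() {
    for field in fields {
        let gestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(fieldTapped))
        gestureRecognizer.numberOfTapsRequired = 1
        field.isUserInteractionEnabled = true // UIImageView disables this by default
        field.addGestureRecognizer(gestureRecognizer)
    }
}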
EDIT:
I also wanted to mention something I noticed after looking at your code again: the string-based way of writing selectors has been deprecated. If you are using Swift 3 (which I highly recommend), you shouldn't write selectors like "functionName:" anymore; the current form is #selector(functionName(_:)). You should update your code to the current syntax sooner rather than later.
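For example, assuming the handler is declared as func fieldTapped(_ recognizer: UITapGestureRecognizer):

// Older string-based form; the compiler cannot check it:
let oldStyle = UITapGestureRecognizer(target: self, action: Selector("fieldTapped:"))
// Current #selector form; the compiler verifies the method exists:
let newStyle = UITapGestureRecognizer(target: self, action: #selector(fieldTapped(_:)))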
I'm learning the Swift language by following the Stanford University course.
In lecture five, there is a demo that draws a smiley face.
There is a declaration of faceCenter; the code is shown below.
var faceCenter: CGPoint {
    return convertPoint(center, fromView: superview)
}
But my question is: why can't I simply use an assignment, like below?
var faceCenter: CGPoint = convertPoint(center, fromView: superview)
When I do, the compiler gives this error: "Extra argument 'fromView' in call".
Can anyone tell me what the problem is?
It does not work because self is not initialized yet at that point. Every stored property has to be assigned before self becomes available.
That one line actually tries to use self three times:
var faceCenter: CGPoint = self.convertPoint(self.center, fromView: self.superview)
Unfortunately the compiler's error message is not really helpful here.
You can always make the property an optional or give it a default value, and then assign the correct value in the init method.
It is also possible to have a stored property that accesses self without assigning it in init: see faceCenterBeta below. It is declared lazy, so the value is assigned when the property is first read, not when the object is initialized, and it uses a closure instead of a getter to produce the value.
class Test: UIView {
    var faceCenter: CGPoint = CGPointZero // default value; the correct value is assigned in init

    var faceCenterAlpha: CGPoint { // getter
        print("getter")
        return convertPoint(center, fromView: superview)
    }

    lazy var faceCenterBeta: CGPoint = { [unowned self] in // closure
        print("closure")
        return self.convertPoint(self.center, fromView: self.superview)
    }()

    func faceCenterDelta() -> CGPoint { // good ol' function
        print("function")
        return convertPoint(center, fromView: superview)
    }

    init() {
        super.init(frame: CGRectZero)
        faceCenter = convertPoint(center, fromView: superview)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
let test = Test()
// executed every time, just like a function
test.faceCenterAlpha
test.faceCenterAlpha
test.faceCenterAlpha
// only executed once
test.faceCenterBeta
test.faceCenterBeta
test.faceCenterBeta
I have an RNG and want it to go off every three seconds. So far I have:
var timer = NSTimer(timeInterval: 3, target: self, selector: randomnumbers, userInfo: nil, repeats: true)

func randomnumbers() {
    var rockNamesArray: [String] = ["bird", "rock2", "rock3"]
    var rockpos = Int(arc4random_uniform(UInt32(3)))
}
But I have a bunch of error messages and I'm not sure how to organize it.
EDIT
The error message in this code says there is an unresolved identifier "self", and all the other errors just occur because I changed this code, like the unresolved identifiers rockNamesArray and rockpos, which appear four times across three lines of code.
EDIT2
As stated in the comments, the code above is placed outside of a class, which explains why self is not working. But how should the timer routine be set up in that case?
it has an unresolved identifier "self"
It sounds like the code you've provided is not part of an instance method. self is a keyword that refers to the object whose code is currently executing. If you don't have an object, there is no self, hence the error. To solve the problem, you could pass a reference to some other object that has a randomnumbers selector in place of self.
You cannot refer to self as a value until every stored property has been assigned an initial value, that is, until the first phase of initialization is complete.
As The Swift Programming Language says:
Class initialization in Swift is a two-phase process. In the first phase, each stored property is assigned an initial value by the class that introduced it. Once the initial state for every stored property has been determined, the second phase begins, and each class is given the opportunity to customize its stored properties further before the new instance is considered ready for use.
An initializer cannot call any instance methods, read the values of any instance properties, or refer to self as a value until after the first phase of initialization is complete.
Try this one:
var timer = NSTimer(timeInterval: 3, target: self, selector: Selector("randomnumbers:"), userInfo: nil, repeats: true)

func randomnumbers(timer: NSTimer) {
    var rockNamesArray: [String] = ["bird", "rock2", "rock3"]
    var rockpos = Int(arc4random_uniform(UInt32(3)))
}
since the timer routine expects a timer object.
Edit: You need to place it inside a (dummy) class, like this:
class MyTimer {
    // Implicitly unwrapped optional: it starts out nil, so all stored
    // properties have initial values and self can be used as the target in init.
    var timer: NSTimer!

    init() {
        timer = NSTimer(timeInterval: 3, target: self, selector: Selector("randomnumbers:"), userInfo: nil, repeats: true)
    }

    func randomnumbers(timer: NSTimer) {
        // ...
    }
}

let myTimer = MyTimer()
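An alternative sketch (my own suggestion, not from the original answer) is to make the timer itself lazy, so self is already valid when the timer is first created:

class MyLazyTimer {
    lazy var timer: NSTimer = NSTimer(timeInterval: 3, target: self, selector: Selector("randomnumbers:"), userInfo: nil, repeats: true)

    func randomnumbers(timer: NSTimer) {
        // ...
    }

    func start() {
        // NSTimer(timeInterval:...) does not schedule itself, so add it to a run loop.
        NSRunLoop.currentRunLoop().addTimer(timer, forMode: NSDefaultRunLoopMode)
    }
}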
Write selector: "randomnumbers" instead of selector: randomnumbers. You can also instantiate the timer and start it at the same time by using timer = NSTimer.scheduledTimerWithTimeInterval instead of timer = NSTimer(timeInterval: .... Call this in your controller, e.g. in viewDidLoad.
Please also note that your randomnumbers() does not do anything: you assign a value to rockpos but never use it, so you won't be able to tell whether the timer is working.
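Putting the pieces together, a minimal sketch of a view-controller version (the print statement is my own placeholder so the timer's effect is visible):

import UIKit

class RockViewController: UIViewController {
    let rockNamesArray = ["bird", "rock2", "rock3"]
    var timer: NSTimer?

    override func viewDidLoad() {
        super.viewDidLoad()
        // scheduledTimerWithTimeInterval creates the timer and starts it immediately;
        // self is fully initialized here, so it is safe to use as the target.
        timer = NSTimer.scheduledTimerWithTimeInterval(3, target: self, selector: "randomnumbers:", userInfo: nil, repeats: true)
    }

    func randomnumbers(timer: NSTimer) {
        let rockpos = Int(arc4random_uniform(UInt32(rockNamesArray.count)))
        print(rockNamesArray[rockpos]) // placeholder: do something visible with the value
    }
}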