How can I find all closures? - swift

We completely forgot about capturing self and its properties when referencing them inside closures. (Note: the compiler didn't warn us.) Now our application is full of strong reference cycles. To fix them, we have to add a capture list to each closure, one by one.
How can we find them all? I thought of searching for in, but that yields far too many results, including comments and for loops.
Good old Objective-C would have let me search for ^. And it would have warned us...
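For reference, here is a minimal sketch of the kind of fix meant by "add the capture list to each closure" (the NetworkManager/ListViewController names below are hypothetical, not from the project in question):

protocol NetworkManager {
    func fetch(_ completion: @escaping ([String]) -> Void)
}

final class ListViewController {
    var items: [String] = []
    let manager: NetworkManager

    init(manager: NetworkManager) { self.manager = manager }

    func loadBroken() {
        // Implicit strong capture of self: if the manager stores the closure
        // and self also owns the manager, this forms a reference cycle.
        manager.fetch { result in
            self.items = result
        }
    }

    func loadFixed() {
        // The capture list is the one-by-one fix described above.
        manager.fetch { [weak self] result in
            self?.items = result
        }
    }
}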

Related

self. in trailing swift closures, meaning and purpose?

whenever I use a trailing closure on an action ... example:
run(SKAction.wait(forDuration: 10)) { timeRemains = false }
I’m seeing this:
Reference to property (anything) in closure requires explicit use of ‘self’ to make capture semantics explicit.
What does this mean? And what is it on about? I'm curious because I'm only ever doing this in the context/scope of the property or function I want to call in the trailing closure, so I don't know why I need self, and I'm fascinated by the use of the word "semantics" here. Does it have some profound meaning, and will I magically understand closures if I understand this?
Does it have some profound meaning, and will I magically understand closures if I understand this?
No and no. Or perhaps, maybe and maybe.
The reason for this syntactical demand is that this might really be a closure, that is, it might capture and preserve its referenced environment (because the anonymous function you are passing might be stored somewhere for a while). That means that if you refer here to some property of self, such as myProperty, you are in fact capturing a reference to self. Swift demands that you acknowledge this fact explicitly (by saying self.myProperty, not merely myProperty) so that you understand that this is what's happening.
Why do you need to understand it? Because under some circumstances you can end up with a retain cycle, or in some other way preserving the life of self in ways that you didn't expect. This is a way of getting you to think about that fact.
(If it is known that this particular function will not act as a closure, i.e. that it will be executed immediately, then there is no such danger and Swift will not demand that you say self explicitly.)
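To illustrate that last paragraph with a sketch (the Counter type below is hypothetical, not from the question): a closure passed to a non-escaping parameter runs immediately and needs no explicit self, while an @escaping one may be stored and does.

final class Counter {
    var count = 0
    var stored: (() -> Void)?

    func runNow(_ body: () -> Void) { body() }                    // non-escaping: runs immediately
    func runLater(_ body: @escaping () -> Void) { stored = body } // escaping: may run later

    func demo() {
        runNow { count += 1 }          // no explicit self needed: the closure cannot outlive the call
        runLater { self.count += 1 }   // compiler demands explicit self; this really does capture self,
                                       // and since Counter stores the closure it is a retain cycle --
                                       // a capture list like [weak self] would break it
    }
}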

Object class members as pointers to avoid #include in headers - is it good practice?

This is really a question of precedence: which is more preferred in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
As well as this tutorial, stating, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But, there is also evidence that pointers should be avoided most often as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding about avoiding #includes in header files is correct, this can easily be done by changing things like class members to pointers so I can use a forward declaration instead, but is this a good idea for class members whose lifetime only lasts as long as the class itself?
It's not really a case of "one or the other". Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declaration where possible to reduce compile time. Use stack objects or references as much as possible and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement which, taken as is, would be wrong. The more important part behind this statement actually is: "Use forward declarations wherever possible." Includes in header files are not bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class/etc. is only used as a pointer or reference in the new type/class/etc. declaration within the given header. A forward declaration just tells the compiler: "Somewhere along the way you'll find the full definition of type X." The include can even be removed entirely if the type isn't used at all in the declaration. The reason is that the compiler doesn't need to know anything about these types to calculate the required memory layout for the new type; for example, a pointer "always" has the same size. Including the file in the header anyway would only waste processing power, since the compiler would have to open and parse the file, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from header files and using forward declarations instead.
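As a small sketch of that idea (Widget, Texture and bind() are hypothetical names, not taken from the question):

// Widget.h -- only a forward declaration is needed here
#pragma once

class Texture;              // forward declaration: "the full definition exists somewhere"

class Widget {
public:
    void draw();
private:
    Texture* texture_;      // a pointer's size is known without the full type
};

// Widget.cpp -- the full definition is only needed where the type is actually used
#include "Widget.h"
#include "Texture.h"

void Widget::draw() { texture_->bind(); }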
For the sake of completeness: forward declarations are explicitly needed if you have circular references (class A depends on class B, which depends on class C, which depends on class A). However, this can often also reveal bad design and/or old/outdated coding standards, which leads us to the second topic.
"Don't use pointers."
Again the statement is a tiny bit too general. One might rather want to say: "Don't use raw pointers."
With C++11 and soon C++1y (C++14), the language itself has changed a lot. As many bad C++ books as the world has seen, even more outdated C++ books float around nowadays (here's a good list, however). While in the past we were mostly stuck with raw pointers and new and delete for memory management, we've evolved to better, more readable, less risky and 100% memory-leak-free ways to manage data in memory. One of the magic words is RAII; since you linked something from SFML above, here's a nice demonstration of the power of RAII.
I see many people use pointers and new and delete "just because", or maybe because they are thinking in Java or C# terms, where objects get instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's mostly preferable to put things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic memory management in the background. Using the heap is mostly only preferable if you need the data to be dynamic, non-"local", or you need a lot of it. However, when you do use the heap, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr depending on the use case, and certainly not raw pointers. In modern C++, raw pointers should never own an object anymore. There are cases where it's okay to return a raw pointer to reference an object, but there's really no reason in modern C++ to call new on a raw pointer.
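A rough sketch of that contrast, using a made-up Enemy type (assuming C++14 for std::make_unique):

#include <memory>
#include <vector>

struct Enemy { int hp = 100; };

int main() {
    // Stack and containers first: no new, no delete, no leaks.
    std::vector<Enemy> wave(10);

    // If heap allocation is really needed, let a smart pointer own the object.
    auto boss = std::make_unique<Enemy>();                      // exclusive ownership, freed automatically
    std::shared_ptr<Enemy> cached = std::make_shared<Enemy>();  // shared, reference-counted ownership

    // A raw owning pointer (Enemy* e = new Enemy;) would need a matching delete
    // and is easy to leak; in modern C++ raw pointers should not own anything.
}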
Let's get back to the original question, though. "Don't use raw pointers" is essentially more of a design question and quite unrelated to the whole header issue. While there might be some cases where you'll have to switch to raw pointers due to circular references, the use of forward declarations is otherwise just about compilation time (and maybe clean code); it's not as essential for the programming itself.
In short: don't use raw pointers just to avoid includes in header files; instead, use forward declarations wherever possible and utilize smart pointers as much as possible.

recursive reference in Perl

$a=\$a;
The book I'm reading says that in this case $a will NEVER be freed. My question is: why doesn't the Perl interpreter fix this at compile time? When it finds a reference pointing at itself, it could simply not increase the refcount.
Why doesn't Perl do this?
Some garbage collectors have cycle detection; Perl, for performance and historical reasons, does not. If you want a reference that doesn't affect the reference count, you can use Scalar::Util::weaken to obtain a weak reference, which removes the need for cycle detection in most situations where you would need to rely on it. There would need to be cycle-detection built into the interpreter to automatically detect whether \$a should be a weak or strong reference, so you just have to do it explicitly.
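For illustration, a minimal sketch of that explicit approach:

use Scalar::Util qw(weaken);

my $a;
$a = \$a;      # $a now references itself, so its refcount can never drop to zero
weaken($a);    # make the reference weak: it no longer keeps the scalar alive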

Why retain/release rather than new/delete?

I'm a newbie to Objective-C, but I feel comfortable in C++.
My question is:
Why did the language designers of Objective-C prefer retain/release rather than using only new/delete (= alloc/dealloc)?
Maybe my brain is only fit for new/delete-style memory management, but I cannot understand why I should manage reference counts; with my C++ experience, I think I know when an object has to be allocated/deallocated.
(Yes, I spent 4 hours debugging a reference count problem; it was resolved by one line: "release".)
Can anyone explain to me what is better about using a reference counter (in programming-language terms)? I think I can manage the lifecycle of an object with new/delete, but I can't with reference counting.
If you have a link to a long article that explains why reference counting is useful, I'd appreciate it.
P.S.: I heard about compile-time Automatic Reference Counting at WWDC 2011; it is really awesome, and it could be one reason to use reference counting, for example.
The short answer is that it is a way to manage object lifetimes without requiring "ownership" as one does with C++.
When creating an object using new in C++, something has to know when to delete that object later. Often this is straightforward, but it can be difficult when an object can be passed around and shared by many different "owners" with differing lifetimes.
With reference counting, as long as any other object refers to the object, it stays alive. When all other objects remove their references, it disappears. There are drawbacks to this approach (the debugging of retain/release and reference cycles being the most obvious), but it is a useful alternative to fully automatic garbage collection.
Objective-C is not the only language to use reference counting. In C++, it is common to use std::shared_ptr, which is the standard reference-counted smart pointer template. Windows Component Object Model programming requires it. Many languages use automated reference-counting behind the scenes as a garbage-collection strategy.
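Since you already know C++, the same idea can be sketched with std::shared_ptr (retain roughly corresponds to copying the pointer, release to destroying or resetting a copy):

#include <iostream>
#include <memory>

struct Image { ~Image() { std::cout << "freed\n"; } };

int main() {
    std::shared_ptr<Image> a = std::make_shared<Image>();   // reference count is 1
    std::shared_ptr<Image> b = a;                            // count is 2: a second "owner"
    a.reset();                                               // count is 1: still alive through b
    std::cout << b.use_count() << "\n";                      // prints 1
}                                                            // b destroyed, count hits 0, Image freed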
The Wikipedia article is a good place to start looking for more information: http://en.wikipedia.org/wiki/Reference_counting

Why use empty parentheses in Scala if we can just use no parentheses to define a function that does not need any arguments?

As far as I understand, in Scala we can define a function with no parameters either by using empty parentheses after its name, or no parentheses at all, and these two definitions are not synonyms. What is the purpose of distinguishing these 2 syntaxes and when should I better use one instead of another?
It's mostly a question of convention. Methods with empty parameter lists are, by convention, evaluated for their side-effects. Methods without parameters are assumed to be side-effect free. That's the convention.
The Scala Style Guide says to omit parentheses only when the method being called has no side effects:
http://docs.scala-lang.org/style/method-invocation.html
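As a small sketch of the convention (the Logger class below is hypothetical):

class Logger {
  private var buffer = List.empty[String]

  def size: Int = buffer.size   // no parentheses: a pure accessor, no side effects
  def flush(): Unit = {         // empty parentheses: calling it has side effects
    buffer.foreach(println)
    buffer = Nil
  }
}

Callers then write logger.size but logger.flush(), which signals the side effect at the call site.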
Other answers are great, but I also think it's worth mentioning that parameterless methods allow for nice access to a class's fields, like so:
person.name
Because of parameterless methods, you could easily write a method to intercept reads (or writes) to the 'name' field without breaking calling code, like so
def name = { log("Accessing name!"); _name }
This is called the Uniform Access Principle.
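To make that concrete, a self-contained sketch (the Person class and log helper are hypothetical):

class Person(private var _name: String) {
  private def log(msg: String): Unit = println(msg)

  def name: String = { log("Accessing name!"); _name }                       // reads look like field access
  def name_=(value: String): Unit = { log("Writing name!"); _name = value }  // writes are intercepted too
}

// e.g. in the REPL:
val p = new Person("Ada")
p.name            // logs, then returns "Ada" -- callers can't tell it's a method
p.name = "Grace"  // calls name_= behind the scenes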
I have another light to shed on the usefulness of the convention of requiring an empty parentheses block when declaring (and thus later calling) functions with side effects.
It is with the debugger.
If you add a watch in a debugger, say on process, which in the focused debug context happens to refer to a boolean, either as a plain variable or as a pure, side-effect-free function evaluation, it creates a nasty risk for your later troubleshooting.
Indeed, if the debugger keeps trying to evaluate that watch whenever you change context (switch threads, move in the call stack, reach another breakpoint...), which I found to be the case at least with IntelliJ IDEA, and with Visual Studio for other languages, then the side effects of any other process function found in whatever scope you are browsing would be triggered...
Just imagine the kind of puzzling troubleshooting this could lead to, all because of some innocent, ordinary naming, if you do not keep that risk in mind. If the convention were followed, then with my example the evaluation of the process boolean would never fall back to a process() function call in the debugger watches; your debugger might still let you access the function explicitly by putting process() in the watches, but then it would be clear that you are not directly accessing any attribute or local variable, and any fallback to other process() functions in other browsed scopes would, even with a bit of bad luck, at the very least be far less surprising.