Why retain/release rather than new/delete? - iphone

I'm a newbie to Objective-C, though I feel comfortable in C++.
My question is:
Why did the designers of Objective-C prefer retain/release rather than using only new/delete (= alloc/dealloc)?
Maybe my brain is only fit for new/delete-style memory management; I cannot understand why I should manage reference counts. With my C++ experience, I think I know when an object has to be allocated and deallocated.
(Yes, I spent 4 hours debugging a reference-count problem; it was resolved by one line: "release".)
Can anyone explain what is better about using a reference counter (in programming-language terms)? I think I can manage the lifecycle of an object with new/delete, but I can't with reference counting.
If you have a link to a long article that explains why a reference counter is useful, I'd appreciate it.
P.S.: I heard about compile-time Automatic Reference Counting at WWDC 2011; it is really awesome, and it could be one reason to use a reference counter, for example.

The short answer is that it is a way to manage object lifetimes without requiring "ownership" as one does with C++.
When creating an object using new in C++, something has to know when to delete that object later. Often this is straightforward, but it can be difficult when an object can be passed around and shared by many different "owners" with differing lifetimes.
With reference counting, as long as any other object refers to the object, it stays alive. When all other objects remove their references, it disappears. There are drawbacks to this approach (the debugging of retain/release and reference cycles being the most obvious), but it is a useful alternative to fully automatic garbage collection.
Objective-C is not the only language to use reference counting. In C++, it is common to use std::shared_ptr, which is the standard reference-counted smart pointer template. Windows Component Object Model programming requires it. Many languages use automated reference-counting behind the scenes as a garbage-collection strategy.
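For illustration, here is a minimal C++ sketch (with a hypothetical Session type) of how std::shared_ptr's count keeps an object alive exactly as long as anything still refers to it:

#include <iostream>
#include <memory>

struct Session {
    ~Session() { std::cout << "Session destroyed\n"; }
};

int main() {
    auto a = std::make_shared<Session>();    // reference count: 1
    {
        std::shared_ptr<Session> b = a;      // reference count: 2
        std::cout << a.use_count() << "\n";  // prints 2
    }                                        // b leaves scope: count back to 1
    a.reset();                               // count hits 0: destructor runs here
}

No single owner had to decide when to delete the Session; the count did.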
The Wikipedia article is a good place to start looking for more information: http://en.wikipedia.org/wiki/Reference_counting

Swift: why aren't all variables lazy by default?

In comparing these two options for defining an instance property:
var networkManager = NetworkManager.sharedInstance()
lazy var networkManager = NetworkManager.sharedInstance()
Both:
Can evaluate a block to get the value
Can be declared inline (not a block, like above)
Lazy:
Can refer to self
Is not calculated until needed
If you don't use it, it is never calculated
Non-lazy:
No benefits whatsoever
It appears that there is no benefit to ever using a non-lazy variable. So why does the language allow the programmer to make this inferior choice?
(I am NOT asking about the difference between var and let à la Are Swift constants lazy by default?)
One reason might be that laziness is not well-suited for situations where you want control over when the evaluation happens. This is relevant in cases where the work being done in the assignment has side effects.
Although it pertains to closures, this blog post by Stuart Sierra explains the idea very well, and I think it applies equally to any language.
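As a minimal Swift sketch of that point (Analytics and Screen are hypothetical names): with lazy, the side effect in the initializer runs at first access rather than at a predictable point during init, so behavior starts to depend on access order:

final class Analytics {
    var eventsLogged = 0
    func log(_ name: String) {
        eventsLogged += 1
        print("logged: \(name)")
    }
}

final class Screen {
    let analytics: Analytics
    let eagerToken: Int

    // Lazy: this side effect is deferred until first access -- maybe forever.
    lazy var lazyToken: Int = {
        self.analytics.log("lazy token created")
        return 42
    }()

    init(analytics: Analytics) {
        self.analytics = analytics
        // Eager: this side effect happens deterministically, during init.
        analytics.log("eager token created")
        eagerToken = 7
    }
}

let screen = Screen(analytics: Analytics())  // logs "eager token created"
_ = screen.lazyToken                         // only now logs "lazy token created"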
As others already said, there are several critical scenarios where you want the initialization of the properties to be deterministic.
This is an example (among many others) related to game development.
Often the instances of classes representing items in a game scene/level are created before the level begins.
Initialisation can be a time-expensive task (loading from persistent storage, allocating memory, preparing the instances...), and doing this work before the player starts playing the level avoids CPU overhead.
This is critical, because CPU overhead in the middle of a level could cause a drop in the frame rate, which is a nightmare for the user experience.
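For instance, a minimal Swift sketch (hypothetical names) of the trade-off: an eager property pays its cost while a loading screen is up, while a lazy one defers it to first access, potentially mid-frame:

struct TextureAtlas {
    static func load(_ name: String) -> TextureAtlas {
        print("decoding \(name)...")   // stand-in for an expensive disk load
        return TextureAtlas()
    }
}

final class Level {
    // Eager: the cost is paid up front, while a loading screen is shown.
    let enemyAtlas = TextureAtlas.load("enemies")

    // Lazy: the cost is deferred to first access, potentially mid-frame.
    lazy var bossAtlas = TextureAtlas.load("boss")
}

let level = Level()     // prints "decoding enemies..."
// ... gameplay is running ...
_ = level.bossAtlas     // prints "decoding boss..." in the middle of play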
FYI. My feeling is that Swift wants to become more like a functional language and would like lazy instantiation in more places.
My early assessment of Swift has held up pretty well over time (well, the "not functional" part. I didn't anticipate how much Swift would favor methods over functions in later versions). Swift is not a functional language and does not intend to be one. This has come up often in WWDC talks, on the forums, on Twitter, and in conversations with the Swift team. Originally all maps and filters were lazy. Swift removed that because of the problems it caused. Probably the best talk on that subject is "Building Better Apps with Value Types in Swift". As they say:
We like mutation. We think it's valuable. We think it's easy to use when done correctly.
You don't get much more "non-functional" than that. Swift also embraces immutable data. But functional programming is about pure functions over immutable data, and that's not Swift.
(Of course there are plenty of non-lazy functional languages. Lazy and functional are orthogonal concepts. Haskell just happened to embrace both.)
To the question at hand, though:
I've found the lazy attribute rarely useful in real-world Swift (I'm being generous; I have never encountered a case where I kept it in the code). It doesn't offer anything like the laziness you get in Haskell. It isn't thread safe, so that's a nightmare. It forces you into reference types (or forces your structs to be mutable), so that can be annoying. If I heard they were pulling it from the language and we just had to roll our own, that'd be fine with me. (I'm tempted to write a proposal to do just that.) It implements a specific memo pattern that can occasionally be handy, but often isn't the one you want. So it's a very good thing that it isn't the default.
As you likely know, global variables and class variables are lazy by default, and I think that tends to work out pretty well since there are so many fewer of them, there's a much better chance they won't be accessed in practice, and that laziness is thread safe (which has a cost, but since they're so much rarer, the cost is much lower).
If you have an expensive object (one that takes long to create), you would like to decide and control when it is created. One could argue that the lazy variable should be the default, though. Maybe it has historical reasons: lazy properties in ObjC resulted in a lot of boilerplate code.

Object class members as pointers to avoid #include in headers - is it good practice?

This is really a question of precedence: which is more preferred in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
As well as this tutorial, stating, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But there is also evidence that pointers should most often be avoided as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding about avoiding #includes in header files is correct, this can easily be done by changing things like class members to pointers so I can use a forward declaration instead, but is this a good idea for class members whose lifetime only lasts as long as the class itself?
It's not really a "one or the other". Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declaration where possible to reduce compile time. Use stack objects or references as much as possible and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement, which, taken as is, would be wrong. The more important part behind this statement actually is: "Use forward declarations wherever possible". Includes in header files are not something bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class/etc. is used only as a pointer (or reference) in the new type/class/etc. declaration within the given header. A forward declaration just tells the compiler: "Somewhere along the way you'll find the actual declaration of type X." The include can even be removed entirely if the type isn't used at all in the declaration. The reason is that the compiler doesn't need to know anything about these types to calculate the required memory layout for the new type; for example, a pointer "always" has the same size. Including the file in the header anyway would only waste processing power, since the compiler would have to open and parse the file, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from header files and using forward declarations instead.
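A minimal sketch of that pattern (Widget and Engine are hypothetical names):

// widget.h
#pragma once

class Engine;  // forward declaration: enough, since Engine is only used as a pointer

class Widget {
public:
    explicit Widget(Engine* engine);
    void update();
private:
    Engine* engine_;  // the pointer's size is known without Engine's full definition
};

// widget.cpp
#include "widget.h"
#include "engine.h"   // the full definition is needed only here

Widget::Widget(Engine* engine) : engine_(engine) {}

void Widget::update() {
    // engine_'s members can be used here, where engine.h has been included
}

Anyone who includes widget.h no longer transitively pays for parsing engine.h.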
For the sake of completeness: forward declarations are explicitly needed if you have circular references (class A depends on class B, which depends on class C, which depends on class A). However, this can often also reveal bad design and/or old/outdated coding standards, which leads us to the second topic.
"Don't use pointers."
Again the statement is a tiny bit too general. One might rather want to say: "Don't use raw pointers."
With C++11 and soon C++1y, the language itself has changed a lot. As many bad C++ books as the world has seen, even more outdated C++ books float around nowadays (here's a good list however). While in the past we were mostly stuck with raw pointers and new and delete for memory management, we've evolved to better, more readable, less risky and 100% memory-leak-free ways to manage data in memory. One of the magic words is RAII - since you linked something from SFML above, here's a nice demonstration of the power of RAII.
I see many people use pointers and new and delete just because, or maybe because they are thinking in Java or C# terms, where objects get instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's mostly preferable to put things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic management in the background. The heap is mostly only preferable if you need the data to be dynamic or non-"local", or if you need a lot of it. However, when you do use the heap, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr depending on the use case, but certainly not raw pointers. In modern C++, raw pointers should never own an object anymore. There are cases where it's okay to return a raw pointer to reference an object, but there's really no reason in modern C++ to call new on a raw pointer.
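A minimal sketch of those preferences (stack first, containers next, smart pointers for the rare heap cases):

#include <memory>
#include <vector>

struct Texture { int id = 0; };

int main() {
    Texture local;                             // stack: freed automatically at scope end
    std::vector<Texture> frames(10);           // container owns its heap storage for you

    auto owned  = std::make_unique<Texture>(); // sole ownership, freed automatically
    auto shared = std::make_shared<Texture>(); // reference-counted shared ownership

    Texture* view = owned.get();               // non-owning raw pointer: observation only
    view->id = 1;
}   // no delete anywhere: every destructor runs via RAII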
Let's get back to the original question though. "Don't use raw pointers" is essentially more of a design question and quite unrelated to the whole header issue. While there might be some cases where you'll have to switch to raw pointers due to circular references, the use of forward declarations is otherwise just about compilation time (and maybe clean code), but it's not as essential for the programming itself.
In short: don't use raw pointers just to avoid includes in header files; use forward declarations wherever possible and utilize smart pointers as much as possible.

pass by reference method in MATLAB

I'm developing a robotics application in Matlab for my thesis. I'm experienced in C#, PHP, js, etc etc.
I would love if objects I create could somehow be passed by reference. I heard that there are things called "handle objects" and others called "value objects". I can't find any specific documentation on how to create a "handle object" and it seems they are usually graphics objects.
I have a few design patterns that are easy to implement when passing by reference is possible. I would like certain objects to share 'simulation spaces', without making each space a global variable. I would like to avoid passing IDs around everywhere, in an effort to keep objects synchronized. I would like to share environmental objects between robots, without worrying about the fact that passing this object actually copies it. (this will lead to bugs over time)
I'm starting to feel like my only solution will be to have a weird global 'object broker' that has the latest copy of many common system objects. I hope to avoid this sort of thing!
Any advice would be amazing!
Handle objects are created with the following syntax:
classdef myClass < handle
    properties
        % properties here
    end
    methods
        % methods here
    end
end
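A small usage sketch (with a hypothetical Counter class) showing the reference semantics you get from inheriting handle:

% Counter.m
classdef Counter < handle
    properties
        count = 0
    end
    methods
        function increment(obj)
            obj.count = obj.count + 1;
        end
    end
end

% Elsewhere (e.g. in a script):
c1 = Counter();
c2 = c1;           % copies the handle, not the object
c2.increment();
disp(c1.count)     % prints 1: the change is visible through both variables

With a value class, c2 would have been an independent copy and c1.count would still be 0.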
A good place to start looking in the documentation is the classes start page. Note that value and handle classes were only introduced in R2008a, and have been reasonably bug-free since R2009a (though more recent releases have improved performance quite a bit).
If you're coming from other languages, this section about the differences between Matlab and other languages OOP can be useful.
Your classes should inherit from the handle abstract class
classdef MyHandleClass < handle
    % class stuff
end
Classes with these semantics can be passed by reference, in a Java-like way.
Consider also this section of the guide.

Tracing Allocations Test Results - look good or not so good?

I'm trying to memory test my app.
I've followed the Organizer - Documentation article entitled "Recovering Memory You Have Abandoned", but I'm not sure if the results make my tested page good or bad, or somewhere in-between.
(My test involved navigating to page 2, going back to page 1, and pressing 'Mark Heap' - repeated 25 times for good measure.)
Attached is a screenshot of my allocations test. Most of the #Persistent values are 0. But there are some anomalies. Are these typical?
(The last Heapshot, 26, was taken after stopping the recording, and pressing 'Mark Heap' at the end of the trace - as suggested in the documentation.)
I would be very grateful for some advice. Thanks.
I believe that you are using ARC, and if you are using ARC there is no need to bother about the heaps; it will take care of everything.
Here are the rules from Apple's docs to keep in mind while using ARC:
ARC imposes some new rules that are not present when using other compiler modes. The rules are intended to provide a fully reliable memory management model; in some cases, they simply enforce best practice, in some others they simplify your code or are obvious corollaries of your not having to deal with memory management. If you violate these rules, you get an immediate compile-time error, not a subtle bug that may become apparent at runtime.
You cannot explicitly invoke dealloc, or implement or invoke retain, release, retainCount, or autorelease.
The prohibition extends to using @selector(retain), @selector(release), and so on.
You may implement a dealloc method if you need to manage resources other than releasing instance variables. You do not have to (indeed you cannot) release instance variables, but you may need to invoke [systemClassInstance setDelegate:nil] on system classes and other code that isn't compiled using ARC.
Custom dealloc methods in ARC do not require a call to [super dealloc] (it actually results in a compiler error). The chaining to super is automated and enforced by the compiler.
You can still use CFRetain, CFRelease, and other related functions with Core Foundation-style objects.
You cannot use NSAllocateObject or NSDeallocateObject.
You create objects using alloc; the runtime takes care of deallocating objects.
You cannot use object pointers in C structures.
Rather than using a struct, you can create an Objective-C class to manage the data instead.
There is no casual casting between id and void *.
You must use special casts that tell the compiler about object lifetime. You need to do this to cast between Objective-C objects and Core Foundation types that you pass as function arguments. For more details, see "Managing Toll-Free Bridging."
You cannot use NSAutoreleasePool objects.
ARC provides @autoreleasepool blocks instead. These have the advantage of being more efficient than NSAutoreleasePool.
You cannot use memory zones.
There is no need to use NSZone any more; they are ignored by the modern Objective-C runtime anyway.
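To make a couple of those rules concrete, here is a minimal sketch (hypothetical values) of an @autoreleasepool block and a bridged cast under ARC:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {                                    // replaces NSAutoreleasePool
        NSString *name = [NSString stringWithFormat:@"user-%d", 42];

        // Casting between Objective-C and CF types needs a bridged cast:
        CFStringRef cfName = (__bridge CFStringRef)name;  // no ownership transfer
        NSLog(@"%ld characters", (long)CFStringGetLength(cfName));
    }                                                     // pool drains here
    return 0;
}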

Objective-C Data Structures (Building my own DAWG)

After not programming for a long, long time (20+ years) I'm trying to get back into it. My first real attempt is a Scrabble/Words With Friends solver/cheater (pick your definition). I've built a pretty good engine, but it solves the problem through brute force instead of efficiency or elegance. After much research, it's pretty clear that the best answer to this problem is a DAWG or CDAWG. I've found a few C implementations out there and have been able to leverage them (search times have gone from 1.5s to .005s for the same data sets).
However, I'm trying to figure out how to do this in pure Objective-C. At that, I'm also trying to make it ARC compliant. And efficient enough for an iPhone. I've looked quite a bit and found several data structure libraries (e.g. CHDataStructures) out there, but they are mostly C/Objective-C hybrids or they are not ARC compliant. They rely very heavily on structs and embed objects inside the structs. ARC doesn't really care for that.
So - my question is (sorry and I understand if this was tl;dr and if it seems totally a newb question - just can't get my head around this object stuff yet) how do you program classical data structures (trees, etc) from scratch in Objective-C? I don't want to rely on a NS[Mutable]{Array,Set,etc}. Does anyone have a simple/basic implementation of a tree or anything like that that I can crib from while I go create my DAWG?
Why shoot yourself in the foot before you even started walking?
You say you're
trying to figure out how to do this in pure Objective-C
yet you
don't want to rely on a NS[Mutable]{Array,Set,etc}
Also, do you want to use ARC, or do you not want to use ARC? If you stick with Objective-C then go with ARC, if you don't want to use the Foundation collections, then you're probably better off without ARC.
My suggestion: do use NS[Mutable]{Array,Set,etc} and get your basic algorithm working with ARC. That should be your first and only goal, everything else is premature optimization. Especially if your goal is to "get back into programming" rather than writing the fastest possible Scrabble analyzer & solver. If you later find out you need to optimize, you have some working code that you can analyze for bottlenecks, and if need be, you can then still replace the Foundation collections.
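For instance, a tree node for a trie (a DAWG's simpler cousin) can be a plain Objective-C class holding its children in an NSMutableDictionary. A minimal ARC-compatible sketch with hypothetical names:

#import <Foundation/Foundation.h>

@interface TrieNode : NSObject
@property (nonatomic, strong) NSMutableDictionary *children; // letter -> TrieNode
@property (nonatomic, assign) BOOL isWord;
- (void)insertWord:(NSString *)word;
@end

@implementation TrieNode
- (instancetype)init {
    if ((self = [super init])) {
        _children = [NSMutableDictionary dictionary];
    }
    return self;
}

- (void)insertWord:(NSString *)word {
    if (word.length == 0) { self.isWord = YES; return; }
    NSString *first = [word substringToIndex:1];
    TrieNode *child = self.children[first];
    if (child == nil) {
        child = [TrieNode new];
        self.children[first] = child;
    }
    [child insertWord:[word substringFromIndex:1]];
}
@end

No structs, no manual memory management; ARC handles the node lifetimes, and you can later swap the dictionary for something leaner if profiling says so.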
As for the other libraries not being ARC compatible: you can pretty easily make them compatible if you follow some rules set by ARC. Whether that's worthwhile depends a lot on the size of the 3rd party codebase.
In particular, casting from void* to id and vice versa requires a bridged cast, so you would write:
void* pointer = (__bridge void*)myObjCObject;
Similarly, if you flag all pointers in C structs as __unsafe_unretained you should be able to use the C code as is. Even better yet: if the C code can be built as a static library, you can build it with ARC turned off and only need to fix some header files.
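For example, a hypothetical DAWG edge struct might look like this under ARC; the __unsafe_unretained qualifier makes the object pointer legal in a C struct, at the price of ARC not managing its lifetime:

typedef struct Edge {
    unichar letter;
    __unsafe_unretained NSString *label;  // not retained: some object must keep it alive
    struct Edge *next;
} Edge;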