I want to know the use of KVO in Swift. As I read in the Apple docs and other online articles, it provides indirect access to properties, addressable via strings. I have a set of doubts:
If I can set the property directly via person.name = "John", then why use setValue(_:forKey:) to set name indirectly?
The Apple docs say that key-value coding compliant objects can participate in a wide range of Cocoa technologies like Core Data. Why is it used there and not in other frameworks?
It is used at runtime to set values dynamically. How does that work?
Is it type safe, and how?
It's an Objective-C feature, so why is it still used in Swift 4, which has the newer \Type.property key-path syntax for getting and setting?
If I can set the property directly via person.name = "John", then why use setValue(_:forKey:) to set name indirectly?
Please read “What is the point of key-value coding?”
The Apple docs say that key-value coding compliant objects can participate in a wide range of Cocoa technologies like Core Data. Why is it used there and not in other frameworks?
It's used where appropriate. It's used where it is helpful and the performance is acceptable. If it's not useful, or if its performance is too low, it's not used.
It is used at runtime to set values dynamically. How does that work?
Key-Value Coding uses the Objective-C runtime to look up getter and setter methods, and to find the types and locations of instance variables if the setters don't exist. See Friday Q&A 2013-02-08: Let's Build Key-Value Coding for a detailed analysis.
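Mechanically, KVC's string-based access resembles Python's getattr/setattr. This is only an analogy, not Apple's API, but it shows why "addressable via string" matters:

```python
# A rough Python analogy for KVC: properties addressed by string at runtime.
class Person:
    def __init__(self):
        self.name = ""

p = Person()
setattr(p, "name", "John")           # like p.setValue("John", forKey: "name")
assert getattr(p, "name") == "John"  # like p.value(forKey: "name")

# The key can come from data, which is the point: the property to touch
# need not be known at compile time.
key = "name"
setattr(p, key, "Jane")
assert p.name == "Jane"
```

This is what direct access (person.name = "John") cannot do: the direct form fixes the property name at compile time, while the string form lets frameworks operate on properties they have never seen.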
Apple documentation briefly describes the implementation of Key-Value Observing here. It's short enough to quote entirely:
Automatic key-value observing is implemented using a technique called
isa-swizzling.
The isa pointer, as the name suggests, points to the object's class
which maintains a dispatch table. This dispatch table essentially
contains pointers to the methods the class implements, among other
data.
When an observer is registered for an attribute of an object the isa
pointer of the observed object is modified, pointing to an
intermediate class rather than at the true class. As a result the
value of the isa pointer does not necessarily reflect the actual class
of the instance.
You should never rely on the isa pointer to determine class
membership. Instead, you should use the class method to determine the
class of an object instance.
Mike Ash gave a more detailed analysis in Friday Q&A 2009-01-23.
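The isa-swizzling technique quoted above can be sketched in Python terms: when an observer registers, swap the instance's class for a dynamically created intermediate subclass whose setter notifies observers. All names here are illustrative, not Apple's:

```python
# Sketch of "isa-swizzling": replace the instance's class with an
# intermediate subclass that intercepts attribute writes.
class Person:
    def __init__(self, name):
        self.name = name

def observe(obj, attr, callback):
    cls = type(obj)

    def __setattr__(self, key, value):
        old = getattr(self, key, None)
        super(sub, self).__setattr__(key, value)
        if key == attr:
            callback(self, key, old, value)

    # Like KVO's hidden NSKVONotifying_* class.
    sub = type("KVONotifying_" + cls.__name__, (cls,), {"__setattr__": __setattr__})
    obj.__class__ = sub  # the analogue of rewriting the isa pointer

changes = []
p = Person("Alice")
observe(p, "name", lambda obj, key, old, new: changes.append((old, new)))
p.name = "Bob"
assert changes == [("Alice", "Bob")]
```

Note that after observation, type(p) reports the intermediate class, which mirrors the documentation's warning not to rely on the isa pointer for class membership.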
Is it type safe, and how?
It's not particularly type safe. For example, it doesn't stop you from storing a UIView in a property that's declared as NSString, or vice versa.
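The same looseness is easy to see in the Python analogy: string-keyed access will happily store a value of the wrong type, and nothing complains until you try to use it:

```python
class Person:
    def __init__(self):
        self.name = ""   # intended to hold a string

p = Person()
setattr(p, "name", [1, 2, 3])  # no complaint: any object can be stored
assert isinstance(p.name, list)  # the "string" property now holds a list
```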
It's an Objective-C feature, so why is it still used in Swift 4, which has the newer \Type.property key-path syntax for getting and setting?
It's still used because most of Apple's SDK is still implemented in Objective-C, and because it lets you do things that you cannot do in Swift without much more "boilerplate" (manual, repetitive functions). The trade-off is that Objective-C has lower performance. In many, many cases, the lower performance of Objective-C (compared to Swift) is not a significant problem, and the increased dynamism is very helpful.
The Rust standard library has the std::mem::uninitialized function, which allows you to manually create an undefined (and possibly invalid) value of any type. This essentially maps to LLVM's undef. I was wondering if Swift had an equivalent, as I haven't been able to find one skimming through the standard library documentation.
Motivation
The primary use for this is to bypass the compiler's normal memory initialization checks when they prove imprecise. For instance, you might want to initialize some members of a structure using methods (or in Swift's case, property setters), which the compiler usually would not allow you to do. Using dummy values works, but that's potentially less efficient (if the optimizer cannot prove that the dummy is meaningless).
In Rust, uninitialized values are treated as initialized by the initialization checker, but as uninitialized by LLVM, leading to more reliable optimization. Since the Swift compiler is also built atop LLVM, it seems like something the Swift compiler should also be able to support.
My Specific Use Case
I have a struct which encapsulates a set of bit fields compacted into one machine word (an unsigned integer type). The struct type provides a safe and convenient interface for reading and modifying the individual fields, through computed properties.
I also have an initializer that takes the relevant field values as parameters. I could initialize the underlying machine word using the same bitwise operations that the computed properties use (in which case I would be repeating myself), or I could initialize it simply by setting the values of the computed properties.
Currently, Swift does not support using computed properties before self has been fully initialized. It's also unlikely Swift will ever support this case, since the computed property setters modify the existing machine word, which would require it to already be initialized, at least as far as Swift is concerned. Only I know that all my setters, in concert, will fully initialize that machine word.
My current solution is to simply initialize the machine word to 0 and then run the setters. Since a constant 0 is trivially absorbed into the bitwise | chain that combines the fields, there's no CPU time lost, but that's not always going to be the case. This is the kind of situation where, in Rust, I would have used an uninitialized value to express my intent to the optimizer.
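For illustration, the pattern being described looks roughly like this Python sketch; the two-field layout (8-bit kind, 8-bit flags) is invented for the example:

```python
# A struct-like class packing two hypothetical fields into one integer,
# exposed through computed properties, as described above.
class PackedWord:
    def __init__(self, kind, flags):
        self.bits = 0      # the "dummy" initialization the compiler requires
        self.kind = kind   # the setters then fill in the real values
        self.flags = flags

    @property
    def kind(self):
        return self.bits & 0xFF

    @kind.setter
    def kind(self, value):
        self.bits = (self.bits & ~0xFF) | (value & 0xFF)

    @property
    def flags(self):
        return (self.bits >> 8) & 0xFF

    @flags.setter
    def flags(self, value):
        self.bits = (self.bits & ~(0xFF << 8)) | ((value & 0xFF) << 8)

w = PackedWord(kind=3, flags=5)
assert w.bits == (5 << 8) | 3
```

Python has no notion of uninitialized memory, so the self.bits = 0 line stands in for the dummy initialization that the question would like to avoid in Swift.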
I'm studying C#, and I think the difference between mutable and immutable classes (in C#, for example) is that the definition of the variables can't change: a string stays a string. Or maybe it's that the value of the type can't change: a string "Hola" stays "Hola", while a mutable one can change.
Am I right, or what is the real difference?
Thank you.
An immutable object is an object that can't change its property values after it's created (actually its state, but to simplify, let's just assume that a different state implies different property/variable values). Its properties are usually assigned values in the constructor (it may not have any properties at all, just methods).
An immutable object can have internal variables that might change values, as long as they don't affect the state of that object from a public/external point of view.
A string in C# is immutable... if you try to assign a string variable a different value, a new string is created.
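Python strings happen to be immutable too, so the same behavior is easy to demonstrate as an analogy for the C# case:

```python
s = "Hola"
t = s            # t and s refer to the same string object
s += " mundo"    # this does not mutate the string; it binds s to a NEW one
assert t == "Hola"        # the original object is unchanged
assert s == "Hola mundo"  # s now names a different object
assert s is not t
```

The key observation is that += on a string never changes the existing object; it creates a new one, which is exactly what "immutable" means here.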
You can find more information about immutability in OOP on Wikipedia.
PS: it's a bit more complicated than this, but I don't want to confuse you... there are different levels of what can be considered "immutable", but if you want to research further, apart from the Wikipedia article (which doesn't mention C#), there's this post by Eric Lippert which explains the different types way better than I could ever do.
I'm a newbie to Objective-C; I feel comfortable in C++.
My question is:
Why did the designers of Objective-C prefer retain/release rather than using only new/delete (= alloc/dealloc)?
Maybe my brain is only fit for new/delete memory management; I cannot understand why I should manage reference counts. I think I know when objects have to be allocated/deallocated from my C++ experience.
(Yes, I spent 4 hours debugging a reference count problem; it was resolved by one line: "release".)
Can anyone explain what is better about using a reference counter (from a language-design perspective)? I think I can manage the lifecycle of objects with new/delete, but I can't with reference counting.
If you have a link to a long article explaining why reference counting is useful, I'd appreciate it.
P.S.: I heard about compile-time Automatic Reference Counting at WWDC 2011. It's really awesome, and it could be a reason to use reference counting, for example.
The short answer is that it is a way to manage object lifetimes without requiring "ownership" as one does with C++.
When creating an object using new in C++, something has to know when to delete that object later. Often this is straightforward, but it can be difficult when an object can be passed around and shared by many different "owners" with differing lifetimes.
With reference counting, as long as any other object refers to the object, it stays alive. When all other objects remove their references, it disappears. There are drawbacks to this approach (the debugging of retain/release and reference cycles being the most obvious), but it is a useful alternative to fully automatic garbage collection.
Objective-C is not the only language to use reference counting. In C++, it is common to use std::shared_ptr, which is the standard reference-counted smart pointer template. Windows Component Object Model programming requires it. Many languages use automated reference-counting behind the scenes as a garbage-collection strategy.
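CPython is one of those languages: its primary memory-management strategy is reference counting, which makes the "last reference gone, object disappears" behavior easy to observe. This demonstration relies on CPython's deterministic reclamation:

```python
deleted = []

class Node:
    def __del__(self):
        deleted.append(True)   # runs when the reference count hits zero

a = Node()
b = a            # two references now keep the object alive
del a            # one reference remains; the object survives
assert not deleted
del b            # last reference removed; CPython frees it immediately
assert deleted == [True]
```

Note that neither del statement "deletes the object" by itself, the way C++ delete would; each only drops one reference, and the object is reclaimed exactly when no references remain. That is the ownership-free model the answer describes.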
The Wikipedia article is a good place to start looking for more information: http://en.wikipedia.org/wiki/Reference_counting
While thinking about the possibility of a handle class based ORM in MATLAB, the issue of caching instances came up. I could not immediately think of a way to make weak references or a weak map, though I'm guessing that something could be contrived with event listeners. Any ideas?
More Info
In MATLAB, a handle class (as opposed to a value class) has reference semantics. An example included with MATLAB is the containers.Map class. If you instantiate one and pass it to a function, any modifications the function makes to the object will be visible via the original reference. That is, it works like a Java or Python object reference.
Like Java and Python, MATLAB keeps track in one way or another of how many things are referencing each object of a handle class. When there aren't any more, MATLAB knows it is safe to delete the object.
A weak reference is one that refers to the object but does not count as a reference for purposes of garbage collection. So if the only remaining references to the object are weak, then it can be thrown away. Generally an event or callback can be supplied to the weak reference - when the object is thrown away, the weak references to it will be notified, allowing cleanup code to run.
For instance, a weak value map is like a normal map, except that the values (as opposed to the keys) are implemented as weak references. The weak map class can arrange a callback or event on each of these weak references so that when the referenced object is deleted, the key/value entry in the map is removed, keeping the map nice and tidy.
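Python's weakref module provides exactly this kind of weak value map, which makes the behavior described above concrete (CPython removes the entry as soon as the last strong reference goes away):

```python
import weakref

class Resource:
    pass

cache = weakref.WeakValueDictionary()
r = Resource()
cache["big"] = r          # the map holds only a weak reference
assert cache["big"] is r  # still reachable while a strong ref exists
del r                     # last strong reference gone; the entry is
assert "big" not in cache #   cleaned up via the weakref callback
```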
These special reference types are really a language-level feature, something you need the VM and GC to do. Trying to implement it in user code will likely end in tears, especially if you lean on undocumented behavior. (Sorry to be a stick in the mud.)
There are a couple of ways you could do something similar. These are just ideas, not endorsements; I haven't actually done them.
Perhaps instead of caching Matlab object instances per se, you could cache expensive computational results using a real Java weak ref map in the JVM embedded inside Matlab. If you can convert your Matlab values to and from Java relatively quickly, this could be a win. If it's relatively flat numeric data, primitives like double[] or double[][] convert quickly using Matlab's implicit conversion.
Or you could make a regular LRU object cache in the Matlab level (maybe using a containers.Map keyed by hashcodes) that explicitly removes the objects inside it when new ones are added. Either use it directly, or add an onCleanup() behavior to your objects that has them automatically add a copy of themselves to a global "recently deleted objects" LRU cache of fixed size, keyed by an externally meaningful id, and mark the instances in the cache so your onCleanup() method doesn't try to re-add them when they're deleted due to expiration from the cache. Then you could have a factory method or other lookup method "resurrect" instances from the cache instead of constructing brand new ones the expensive way. This sounds like a lot of work, though, and really not idiomatic Matlab.
This is not an answer to your question but just my 2 cents.
A weak reference is a feature of the garbage collector. In Java and .NET, the garbage collector runs when the pressure on memory is high, and collection is therefore nondeterministic.
This MATLAB Digest post says that MATLAB does not use a (nondeterministic) garbage collector. In MATLAB, references are deleted from memory (deterministically) on each stack pop, i.e., on leaving each function.
Thus I do not think that weak references belong in MATLAB's reference-handling concept. But MATLAB has always had tons of undocumented features, so I cannot exclude that it is buried somewhere.
In this SO post I asked about MATLAB's garbage collector implementation and got no real answer. One MathWorks staff member, instead of answering my question, accused me of trying to construct a Python vs. MATLAB argument. Another MathWorks staff member wrote something that looked reasonable but was in substance a clever deception, a purposeful distraction from the problem I asked about. And the best answer was:
if you ask this question then MATLAB
is not the right language for you!
What is the difference between NSMutableArray and CFMutableArray?
In which case(s) should we use one or the other?
CFMutableArray and NSMutableArray are the C and Objective-C equivalents of the same type. They are considered a "toll-free bridged" type pair, which means you can cast a CFMutableArrayRef to an NSMutableArray * (and vice versa) and everything will just work. The cases in which you would use one over the other are ease of use (are you using C or Objective-C?) and compatibility with an API you would like to use. See more information here.
At runtime, they are identical: they are toll-free bridged types, so you can safely (and efficiently) cast one to the other.
They are different types, available in different/overlapping languages.
CFMutableArrayRef is the opaque C object interface
NSMutableArray * is the Objective-C interface
They may be freely interchanged; the difference is that one declaration says it is a pointer to an opaque type, while the other says it is an Objective-C object.
Also, you can (sort of; it requires a little more implementation than usual) subclass the NSMutableArray type using Objective-C.
OS X's APIs are layered: there are basic low-level APIs that are self-contained, and then there are richer APIs built on top of them, in turn using the lower-level APIs themselves.
CFMutableArray is part of the CoreFoundation framework and is used by the lower-level APIs. NSMutableArray (NS stands for NeXTSTEP) is part of the Foundation framework and is used in higher-level APIs like AppKit and Cocoa.
Which you should use depends on where you are working. If you're working in a rich user interface using Cocoa, NSMutableArray is the right choice. If you're working on a daemon, driver or anything else just using CoreFoundation, use CFMutableArray.
Luckily, as pointed out above, many of these CF/NS types are toll-free bridged and so you can use CoreFoundation APIs from e.g. Cocoa without having to constantly convert types.