What is the right choice between NSDecimal, NSDecimalNumber, and CFNumber? (iPhone)

I've read a lot about NSDecimal, NSNumber, NSDecimalNumber, CFNumber... and it's starting to feel like a jungle to me.
Basically, I'm trying to create a simple model class that will handle simple computations, like this one:
#import <Foundation/Foundation.h>
#include <math.h>   // for pow()

@interface Test : NSObject
{
    float rate;
    float amount;
    int duration;
}
- (float)capitalizedAmount;
@end

@implementation Test

- (float)capitalizedAmount {
    return amount * pow(1.0 + rate, duration);
}

@end
I want to access these methods and setters with their names as strings, since I plan to have a lot of other classes like this one, and I am only keeping a list of fields to do key-value coding.
// This is just the desired behavior
// This evidently won't work with the previous class definition
Test *obj = [[Test alloc] init];
[NSNumber numberWithInt:10]
...
float r;
r = [obj performSelector:NSSelectorFromString(@"capitalizedAmount")];
I understand that this is not possible, that performSelector: will return an object, and thus that capitalizedAmount should return an object. I've read about NSInvocation and the relevant part of the Objective-C FAQ on comp.lang.
I also understand that I should use NSDecimalNumber, but there are two things that I would like to know:
Are the memory overhead and performance loss acceptable for a somewhat more complicated class (only financial computations of this kind, shown in a UITableView)? I do not have much background in C...
Isn't it too tedious and complicated to use functions like decimalNumberByAdding:? With Python it was easy to define __add__ to use operators with objects. Should I get float values from NSDecimalNumber, do the computations, and then return the result wrapped in an NSDecimalNumber? How would you deal with this problem?
I am looking for a simple and beautiful solution!
Just another question in the same area: is CFBoolean the object wrapper for BOOL in Core Foundation on the iPhone?
Thank you very much for your help!

If you are dealing with financial computations, you really should use base-10 arithmetic to avoid the rounding errors that can occur with the standard base-2 floating point types. So it's either NSDecimal or NSDecimalNumber. And since you're writing object-oriented code, NSDecimalNumber is the right choice for you.
To answer your questions: only testing of your code can reveal whether the memory overhead and performance loss are acceptable to you. I haven't really worked much with NSDecimalNumber but I'd wager that Apple's implementation is quite efficient and will be more than adequate for most people's needs.
Unfortunately, you won't be able to avoid the likes of decimalNumberByAdding: since Objective-C does not support operator overloading like C++ does. I agree that it makes your code somewhat less elegant.
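To give a sense of what that looks like in practice, here is a rough sketch of the capitalizedAmount calculation rewritten with NSDecimalNumber (the literal values are just placeholders):
// Compound interest with NSDecimalNumber: amount * (1 + rate)^duration
NSDecimalNumber *rate = [NSDecimalNumber decimalNumberWithString:@"0.05"];
NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithString:@"1000"];
NSUInteger duration = 10;

NSDecimalNumber *base = [[NSDecimalNumber one] decimalNumberByAdding:rate];
NSDecimalNumber *factor = [base decimalNumberByRaisingToPower:duration];
NSDecimalNumber *capitalized = [amount decimalNumberByMultiplyingBy:factor];
NSLog(@"capitalized amount: %@", capitalized);
It is certainly wordier than the float version, but the arithmetic stays in base 10 the whole way through.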
One comment on the code you posted: r = [obj performSelector:NSSelectorFromString(@"capitalizedAmount")]; is rather inelegant. Either
r = [obj performSelector:@selector(capitalizedAmount)];
or even the simple
r = [obj capitalizedAmount];
would be better unless you require the NSSelectorFromString syntax for some other reason.

Ole is correct, in that you should be using NSDecimal or NSDecimalNumber to avoid floating point math errors when doing financial calculations. However, my suggestion would be to use NSDecimal and its C functions, rather than NSDecimalNumber. NSDecimal calculations can be much faster, and since they avoid creating a lot of autoreleased objects, much better on memory usage.
As an example, I benchmarked math operations for the two types on my MacBook Air:
NSDecimal
Additions per second: 3355476.75
Subtractions per second: 3866671.27
Multiplications per second: 3458770.51
Divisions per second: 276242.32
NSDecimalNumber
Additions per second: 676901.32
Subtractions per second: 671474.6
Multiplications per second: 720310.63
Divisions per second: 190249.33
Divisions were the only operation that didn't experience a roughly fivefold increase in performance when using NSDecimal vs. NSDecimalNumber. Similar performance improvements occur on the iPhone. This, along with the memory savings, was why we recently switched Core Plot over to using NSDecimal.
The only difficulty you'll run into is in getting values into and out of the NSDecimal types. Going directly to and from float and integer values might require using NSDecimalNumber as a bridge. Also, if you use Core Data, you'll be storing your values as NSDecimalNumbers, not NSDecimals.
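As a rough sketch of what that bridging looks like (the values here are arbitrary), you can pull an NSDecimal out of an NSDecimalNumber, run the C functions on it, and only wrap it back up when you need an object again:
NSDecimal a = [[NSDecimalNumber decimalNumberWithString:@"10.99"] decimalValue];
NSDecimal b = [[NSDecimalNumber decimalNumberWithString:@"0.05"] decimalValue];

NSDecimal sum;
NSCalculationError error = NSDecimalAdd(&sum, &a, &b, NSRoundPlain);
if (error == NSCalculationNoError) {
    // Wrap the result only when an object is needed, e.g. for display or Core Data
    NSLog(@"sum = %@", [NSDecimalNumber decimalNumberWithDecimal:sum]);
}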

I want to access these methods and setters with their names as strings, since I plan to have a lot of other classes like this one, and I am only keeping a list of fields to do key-value coding.
r = [obj performSelector:NSSelectorFromString(@"capitalizedAmount")];
KVC is not simply sending messages to objects using strings. See the Key-Value Coding Programming Guide.
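That said, key-value coding does cover the case in your example: valueForKey: will invoke -capitalizedAmount and box the scalar float return value in an NSNumber for you. A minimal sketch, assuming the Test class from the question:
Test *obj = [[Test alloc] init];
// -valueForKey: finds -capitalizedAmount and wraps its float result in an NSNumber
NSNumber *result = [obj valueForKey:@"capitalizedAmount"];
float r = [result floatValue];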
Should I get float values from NSDecimalNumber, do the computations, and then return the result wrapped in an NSDecimalNumber?
No. Conversion to binary floating-point (float/double) from decimal floating-point (NSDecimal/NSDecimalNumber) is lossy. Your result will be incorrect for some calculations if you do them that way.
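A quick illustration of why the round trip through float is a problem: 0.1 has no exact binary representation, so repeated float additions drift while the decimal version does not.
// Decimal arithmetic: adding 0.1 ten times gives exactly 1
NSDecimalNumber *tenth = [NSDecimalNumber decimalNumberWithString:@"0.1"];
NSDecimalNumber *decimalSum = [NSDecimalNumber zero];
for (int i = 0; i < 10; i++) {
    decimalSum = [decimalSum decimalNumberByAdding:tenth];
}

// Binary floating point: the same loop accumulates a small error
float floatSum = 0.0f;
for (int i = 0; i < 10; i++) {
    floatSum += 0.1f;
}
NSLog(@"decimal: %@   float: %.9f", decimalSum, floatSum);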

Related

Is there any way to convert from Double to NSNumber and maintain accuracy down to at least 6 decimal places in Swift?

I was using a get and a set to store a Double into Core Data as an NSNumber. The conversion was something like this:
var number: Double {
    get {
        return coreDataNumber.doubleValue
    }
    set {
        coreDataNumber = NSNumber(double: newValue)
    }
}
If the syntax is wrong, that has nothing to do with my question; I'm just not on my Mac right now. I eventually came to the conclusion that the only way to maintain accuracy in the conversion was to use a String to store the Double. I am fine with using this method, but for my future knowledge, is there a way to prevent a number like 0.003459 from becoming 0.0034589999999999999999 when you retrieve it? This wasn't the only conversion error I found; sometimes it would round when I didn't want it to. I understand this probably has something to do with the fact that not all decimal values can be represented exactly in binary. If there is a way to convert without losing accuracy, I would appreciate that knowledge.
The accuracy is much higher than 6 decimal digits.
Using your numbers:
0.003459 - 0.0034589999999999999999 = 1e-22
The problem is the formatting function (or lack thereof).
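In other words, the stored value is fine; what you see depends on how many digits you ask the formatter to print. A small sketch of the difference (shown in Objective-C here, but the principle is the same in Swift):
double d = 0.003459;
NSLog(@"%.6f", d);    // rounded to 6 decimal places: 0.003459
NSLog(@"%.20f", d);   // many more digits: you see the nearest binary double, not exactly 0.003459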

Should I ever prefer operator overloading over functions/methods?

I feel like using operator overloading adds unnecessary complexity and ambiguity to the code.
Does it have benefits in real-world cases where it's worth using custom operators, or overloading existing operators, instead of using functions or object methods?
Is it used on a regular basis, or is it more of a funny exotic thing to give a language a bit more hipness?
The main reason for overloading is the convenience of using a custom class with a mathematical or logical background,
like:
vectors
matrices
complex numbers
phasors, tensors, quaternions, ...
finite fields
big numbers (arbnum, biginteger, bigdecimal, ...)
custom-precision floating and fixed-point formats
predicates, booleans, fuzzy and probabilistic logic
strings, lists, queues
and much much more
If coded right, you can write math equations directly instead of converting them into a set of function calls. Reading and understanding math code is much simpler and more straightforward with operators.
Sometimes even classes that are not strictly mathematical are used this way, for example images or signals. In DIP/CV there are usually math/physics equations applied to them, and overloaded operators make that simpler.
For non-math classes, operators are usually useless/meaningless (as you feel), except for the special operator=, which is crucial for any class/struct with dynamically allocated members. Without it, things like std::vector<> will not work properly.
Another example is comparison operators, which are sometimes implemented for non-math classes to make sorting easier.
From Wikipedia:
operator overloading—less commonly known as operator ad hoc polymorphism—is a specific case of polymorphism, where different operators have different implementations depending on their arguments
Swift has over 40 operators, all of them overloaded, and we use them on a regular basis. Do you prefer let sum = value.plus(anotherValue) over let sum = value + anotherValue? I am sure you don't! If value is a custom type conforming to the Equatable protocol, the == operator must be overloaded, and we do that regularly.
Is it a good idea to use custom-defined operators (like ±, <*>, etc.)? In that area I am not sure; I am not a big fan of this.
Is it a good idea to overload the + operator for something other than a sum? No, definitely not!

Where would custom subscripts be suited over methods/functions and why?

I've researched this in Swift and am confused about where custom subscripts are useful compared to methods and functions. What is the advantage of using them rather than a method/func?
It's purely stylistic. Use a subscript whenever you'd prefer this syntax:
myObject[mySubscript] = newValue
over this one:
myObject.setValue(newValue, forSubscript: mySubscript)
Subscripts are more concise and, when used in appropriate situations, clearer in intent.
Which is an easier, clearer way to refer to an array element: myArray[1] or myArray.objectAtIndex(1)?
Would you like to say myArray[1...3], or would it be just fine if you had to say something like myArray.sliceFromIndex(1).throughIndex(3) every time?
And hey, you know what? Arithmetic operators are also just functions. So why don't we abandon them, so that we'd have to say something like
let sum = a.addedTo(b.multipliedBy(c))
Wouldn't that be just the same really? What's the power in having arithmetic operators really?

Do I have to execute the NSDecimalCompact() function on every NSDecimal I create?

The documentation confuses me about this. It says:
void NSDecimalCompact(NSDecimal *number);
Discussion: Formats number so that calculations using it will take up as little memory as possible. All the NSDecimal... arithmetic functions expect compact NSDecimal arguments.
The last part is important:
All the NSDecimal... arithmetic functions expect compact NSDecimal arguments.
So does that mean I have to call NSDecimalCompact() on every NSDecimal, every time I pass it as a parameter to one of the NSDecimal... arithmetic functions? Or do I only do that once, when creating the NSDecimal?
You do not need to call NSDecimalCompact(); the results of the NSDecimal...() functions are already compact as long as the return value is NSCalculationNoError.
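A small sketch of the pattern that implies (values chosen here so the division is exact): feed the result of one NSDecimal... call straight into the next and only check the returned error code.
NSDecimal a = [[NSDecimalNumber decimalNumberWithString:@"1"] decimalValue];
NSDecimal b = [[NSDecimalNumber decimalNumberWithString:@"4"] decimalValue];

NSDecimal quotient, product;
NSCalculationError error = NSDecimalDivide(&quotient, &a, &b, NSRoundPlain);
if (error == NSCalculationNoError) {
    // quotient is already compact, so it can go straight into the next function
    // without an intermediate NSDecimalCompact() call
    error = NSDecimalMultiply(&product, &quotient, &b, NSRoundPlain);
}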

How do you use the different number types in Objective-C?

So I am trying to do a few things with numbers in Objective-C and realize there is a plethora of options, and I am just bewildered as to which type to use for my app.
So here are the types:
NSNumber (which is a class)
NSDecimal (which is a struct)
NSDecimalNumber (which is a class)
float/double (which are primitive types)
So essentially what I need to do is take an NSString representing decimal-based hours (10.4 would be 10 hours and (4/10)*60 minutes) and convert it into:
a string representation D H:M (this needs division, multiplication, and basic arithmetic)
a number type to store for easy calculations later (will mostly be converting between NSTimeIntervals and doing subtractions)
Oh, and I need to be able to take an absolute value of these as well.
It appears that the hard part is actually transitioning between the types.
To me this is a very trivial problem, so I'm not sure if it's just getting late or if Objective-C numerical types suck, but I could use a hand.
Use primitive types (double, CGFloat, NSInteger) for typical arithmetic and when you need to store a number as an instance variable that's going to be used primarily for arithmetic in other places. You can use the C math functions (fabs(), pow(), etc.) as needed. NSTimeInterval is a typedef for double, so you can interchange the two.
Use NSNumber when you need to store a number as an object, for example if you're creating an NSArray of numbers. Some parts of Cocoa like Core Data or key-value coding deal more with NSNumber than with primitive types, so you may find yourself using NSNumber more than usual in those situations. For example, if you write [timeKeepersArray valueForKeyPath:@"@sum.seconds"] you'll get back an NSNumber, so you may find it easier just to keep that variable instead of converting it to a primitive.
Since it's a small amount of extra code to convert between NSNumber and primitive types, usually your application will end up favoring one or the other depending on what you're doing with numbers.
Oh, and NSDecimal and NSDecimalNumber? Don't worry too much about them; they only come up when you need really precise decimal operations, such as when you're storing financial data.
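For what it's worth, the decimal-hours conversion in the question can be done entirely with primitives and the C math functions; a rough sketch (the D H:M format string is just a placeholder):
NSString *input = @"10.4";                          // decimal hours as a string
double decimalHours = [input doubleValue];

double absHours = fabs(decimalHours);               // absolute value
int days = (int)(absHours / 24.0);
int hours = (int)absHours % 24;
int minutes = (int)round((absHours - floor(absHours)) * 60.0);

NSString *display = [NSString stringWithFormat:@"%d %d:%02d", days, hours, minutes];
NSNumber *boxed = [NSNumber numberWithDouble:decimalHours]; // if you need an object, e.g. for an NSArray
NSTimeInterval interval = decimalHours * 3600.0;            // seconds, for NSTimeInterval math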