Find the smallest value among variables? - iPhone

I have anywhere from 4 up to 20 variables, all of type float, holding different values.
Is there an easy way to find the smallest value among them and assign it to a variable?
Thanks

Not sure about Objective-C, but the procedure is something like:
float min = arrayOfValues[0];
foreach (float value in arrayOfValues)
{
    if (value < min)
        min = value;
}

I agree with Davy8 - you could try rewriting his code in Objective-C.
But I have found some min()-like code - in Objective-C!
Look at this:
- (int)smallestOf:(int)a andOf:(int)b andOf:(int)c
{
    int min = a;
    if (b < min)
        min = b;
    if (c < min)
        min = c;
    return min;
}
This code assumes it'll always compare only three variables, but I guess that's something you can deal with ;)
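If you need it to scale to the 4-20 variables from the question, a sketch of a count-based variant (the method name and C-array signature here are illustrative, not from the thread):
// Minimum of an arbitrary number of floats, passed as a C array.
// Sketch only - assumes count > 0.
- (float)smallestOfFloats:(const float *)values count:(int)count
{
    float min = values[0];
    for (int i = 1; i < count; i++) {
        if (values[i] < min)
            min = values[i];
    }
    return min;
}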

The best solution, without foreach:
- (float)minFromArray:(float *)array size:(int)arrSize
{
    float min = array[0];
    for (int i = 1; i < arrSize; i++) {
        if (array[i] < min)
            min = array[i];
    }
    return min;
}
If you want to be safe, add a check that arrSize > 0 before indexing, as in the sketch below.
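A minimal usage sketch with that guard (the values here are illustrative):
float values[] = { 3.2f, 1.5f, 7.8f, 0.9f };
int count = sizeof(values) / sizeof(values[0]);
if (count > 0) {
    float min = [self minFromArray:values size:count];
    // use min...
}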
Marco

Thanks for all your answers and comments. I learn a lot from you guys :)
I ended up using something like Martin suggested.
if (segmentValueNumber == 11) {
    float min = 100000000;
    if (game51 > 0 && game51 < min) {
        min = game51;
    }
    if (game52 > 0 && game52 < min) {
        min = game52;
    }
    // ... and so on for the remaining game variables
}
I could not figure out how to put it all into one array, since each result depends on a segment control, and I think the program is better optimised this way since it only checks the relevant variables.
But thanks again, you are most helpful..
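For anyone who does want the array route: a sketch of folding one segment's variables into a small array (reusing the names above, with FLT_MAX from float.h; purely illustrative):
#import <float.h>
if (segmentValueNumber == 11) {
    float candidates[] = { game51, game52 };
    int count = sizeof(candidates) / sizeof(candidates[0]);
    float min = FLT_MAX;
    for (int i = 0; i < count; i++) {
        if (candidates[i] > 0 && candidates[i] < min)
            min = candidates[i];
    }
    // min now holds the smallest positive value for this segment
}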

Related

Negative Integer Comparisons

I am trying to compare two integers. One is the row of an NSIndexPath, the other is the count of an NSArray. The row is equal to 0 and the count is equal to 2. I have them in an if statement as follows:
if ([self.selectedSetpoint row] < ([self.theCategories count] - 3))
{
    //Do true stuff
}
So by my math, the if statement should be false as I am comparing 0 < -1. However, I keep getting this statement coming through as true, and the block inside the if is being run. I have tried to NSLog the values to make sure that I am not just getting the wrong value. I placed this right BEFORE the if statement:
NSLog(@"%d", [self.selectedSetpoint row]);
NSLog(@"%d", [self.theCategories count] - 3);
NSLog(@"%@", [self.selectedSetpoint row] < ([self.theCategories count] - 3) ? @"Yes" : @"No");
and got this on the console:
2012-07-17 08:58:46.061 App[61345:11603] 0
2012-07-17 08:58:46.061 App[61345:11603] -1
2012-07-17 08:58:46.062 App[61345:11603] Yes
Any ideas why this comparison is coming up as true? Am I misunderstanding something about comparing integers?
I have another if statement just above this one that compares
[self.selectedSetpoint row]<([self.theCategories count]-2)
Which is 0 < 0, and it works fine (returns NO). So I feel like there is some issue with the use of negative integers that I am not getting.
Thank you in advance.
I suspect the issue is that the return value of count is an unsigned integer; when you subtract more than its magnitude, it underflows and becomes quite large. I have run some tests and get the same basic behavior you describe: it looks like -1, and in certain contexts it appears to work as expected, but it clearly underflows in the context of the if() block.
Silly problem, but luckily there is a simple solution: Cast it in place in the if statement:
if ([self.selectedSetpoint row] < ((int)[self.theCategories count] - 3))
{
    //Do true stuff
}
I was going to propose an alternative solution - namely, to avoid the subtraction and instead use addition on the other side of the comparison:
if ([self.selectedSetpoint row] + 3 < [self.theCategories count])
{
    //Do true stuff
}
This sidesteps this kind of underflow bug, but it leaves another gotcha untouched - namely the conversion rules referred to in the answer to this question: What are the general rules for comparing different data types in C?
Quoting from an answer to that question, you see the C99 spec states:
(when one operand is signed and the other unsigned) Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
So if you have a negative value for [self.selectedSetpoint row] + 3 then the comparison will fail...
The other answers here have advocated casting (NSUInteger) to (NSInteger) - but note that this can itself overflow if your unsigned values are very large. For example, on a 32-bit system:
(NSInteger)-3 < (NSInteger)4294967289 == false... // (NSInteger)4294967289 wraps around to -7, and -3 < -7 is false
Figuring there must be an easy way to solve this, I first came up with a hard way to solve it...
#define SafeLT(X, Y) \
({ typeof (X) _X = (X); \
   typeof (Y) _Y = (Y); \
   (_X < (NSInteger)0 ? ((_Y > 0) ? YES : _X < _Y) : (_Y < (NSInteger)0 ? NO : _X < _Y)); })
This should work regardless of how you mix NSUInteger and NSInteger, and will ensure that the operands are evaluated at most once (for efficiency).
By way of proof of correctness:
For _X < (NSInteger) 0 to evaluate to true, _X must be a signed type and < 0, so we check whether _Y > 0. The compiler will make the correct comparison here by evaluating the type of _Y. If _Y > 0 then by definition we return YES. Otherwise we know both _X and _Y are signed and < 0, and can compare them safely.
However, if _X is either unsigned or > 0, we test whether _Y < 0. If _Y is < 0 then by definition we return NO. So now _X is NSUInteger or NSInteger > 0, and _Y is NSUInteger or NSInteger > 0. Since mixed comparisons are promoted to NSUInteger, the conversion is safe every time because there is no chance of underflow.
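Applied to the original problem, usage might look like this (a sketch; note it keeps the addition form from above, so the count - 3 subtraction never happens on the unsigned side):
// Underflow-safe version of the original test.
if (SafeLT([self.selectedSetpoint row] + 3, [self.theCategories count]))
{
    //Do true stuff
}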
A simpler solution, however, would be to just typecast to a longer signed integer (if your system has one):
#define EasySafeLT(X, Y) \
({ long long _X = (X); \
   long long _Y = (Y); \
   (_X < _Y); })
Although this depends on having the larger type available so may not always be feasible.
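A quick sanity check with the problem values from earlier (illustrative; assumes long long can represent the unsigned value, as on 32-bit iOS):
// Casting both sides to NSInteger gave the wrong answer above;
// promoting both to long long preserves both values, so this is YES.
BOOL lessThan = EasySafeLT((NSInteger)-3, (NSUInteger)4294967289ULL);
NSLog(@"%@", lessThan ? @"YES" : @"NO"); // YES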
This does appear to be a case of comparing signed ints to unsigned ints. Your log statements will throw off your assumptions, because you are asking for the numbers to be printed as signed.
NSLog(@"%d", -1); // signed: prints -1
NSLog(@"%u", -1); // unsigned (tested on iOS, 32-bit ints): prints 4294967295
And then when you compare: 0 < 4294967295 certainly is true.
Casting as @ctrahey suggests should fix your problem:
if ([self.selectedSetpoint row] < ((int)[self.theCategories count] - 3))
{
    //Do true stuff
}
The problem is that the count property is an NSUInteger: when you subtract a larger number from a smaller one you don't get a negative number, you get a very large positive one, which causes the strange behaviour.
Try it this way and you should get the right result:
NSLog(@"%@", (NSInteger)[self.selectedSetpoint row] < ((NSInteger)[self.theCategories count] - 3) ? @"Yes" : @"No");

How to convert a double to NSInteger?

Very simple question here. I have a double that I wish to convert back to an NSInteger, truncating to the units place. How would I do that?
Truncation is an implicit conversion:
NSInteger theInteger = theDouble;
That's assuming you're not checking the value is within NSInteger's range. If you want to do that, you'll have to add some branching:
NSInteger theInteger = 0;
if (theDouble > NSIntegerMax) {
    // ...
} else if (theDouble < NSIntegerMin) {
    // ...
} else {
    theInteger = theDouble;
}
NSInteger is a typedef for a C type. So you can just do:
double originalNumber;
NSInteger integerNumber = (NSInteger)originalNumber;
Which, per the C spec, will truncate originalNumber.
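For instance (values illustrative; truncation drops the fractional part, so negative values move toward zero):
double a = 3.7;
double b = -3.7;
NSInteger ia = (NSInteger)a; // 3
NSInteger ib = (NSInteger)b; // -3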
But anyway, assuming you want no rounding, I believe this should work:
double myDouble = 10.4223;
NSInteger myInt = myDouble;
Edit, for rounding (I'm sure there's a much simpler and more precise way to do this; it also doesn't account for negative numbers or maximum boundaries):
double myDecimal = myDouble - myInt;
if (myDecimal < 0.50) {
    // keep the truncated value
} else {
    myInt = myInt + 1;
}
NSInteger is a typedef for a plain C integer type (int or long, depending on the platform). Just assign the value:
double d;
NSInteger i = d;
JesseNaugher mentions rounding, and the OP's needs were met with a simple truncate, but in the spirit of full generalisation it's worth remembering the simple trick of adding 0.5 to the double before invoking floor() to achieve rounding. Extending Jonathan Grynspan's comment: NSInteger myInt = floor(myDouble + 0.5); rounds up in absolute terms. If rounding 'up' means rounding away from zero, a more convoluted approach is needed: NSInteger myInt = floor(myDouble + (myDouble < 0.0 ? -0.5 : 0.5));
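A quick sketch of the difference on a negative halfway value (illustrative; floor() and round() come from math.h):
#import <math.h>
double v = -2.5;
NSInteger up = floor(v + 0.5);                      // -2: rounds toward +infinity
NSInteger away = floor(v + (v < 0.0 ? -0.5 : 0.5)); // -3: rounds away from zero
// round(v) also rounds halves away from zero: round(-2.5) == -3.0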

three integers compare

I have three integers.
I would like to determine which has the highest and which has the lowest value, using Objective-C.
Thank you!
It is good to store those numbers in an array. A plain C array is good enough, and in Objective-C it is best for performance. To find the minimum you can use this function; the maximum is similar.
int find_min(int numbers[], int N) {
    int min = numbers[0];
    for (int i = 1; i < N; i++)
        if (min > numbers[i]) min = numbers[i];
    return min;
}
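Usage might look like this (values illustrative):
int values[] = { 42, 7, 19 };
int lowest = find_min(values, 3); // 7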
If it is just three numbers you can do the comparisons manually for best performance. There are MIN() and MAX() macros in Cocoa, in Foundation/NSObjCRuntime.h. For the maximum, just do:
int m = MAX(myI1, MAX(myI2, myI3));
This can be scaled to more numbers and may be faster than the loop-based approach.
Unfortunately there is neither a short-and-elegant nor a generalized way to do this in Cocoa.
A plain C array plus a custom loop is best. With an NSArray you would have to wrap the integers in NSNumbers without getting any benefit from it.
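For completeness, the NSNumber route does exist via Foundation's key-value coding collection operators, though it is heavier than a C loop (a sketch):
NSArray *numbers = [NSArray arrayWithObjects:
    [NSNumber numberWithInt:3], [NSNumber numberWithInt:1], [NSNumber numberWithInt:2], nil];
NSNumber *minimum = [numbers valueForKeyPath:@"@min.self"]; // 1
NSNumber *maximum = [numbers valueForKeyPath:@"@max.self"]; // 3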
Objective-C's built in MAX(a,b) and MIN(a,b) macros only work for two values.
I have two macros I've created for 2 or more values, called multi-max and multi-min (MMAX and MMIN).
Here are their definitions; just copy-paste them into your .h:
#define MMAX(...) ({\
    long double __inputs[(sizeof((long double[]){__VA_ARGS__})/sizeof(long double))] = {__VA_ARGS__};\
    long double __maxValue = __inputs[0];\
    for (int __i = 0; __i < (sizeof((long double[]){__VA_ARGS__})/sizeof(long double)); ++__i) {\
        long double __inputValue = __inputs[__i];\
        __maxValue = __maxValue > __inputValue ? __maxValue : __inputValue;\
    }\
    __maxValue;\
})
#define MMIN(...) ({\
    long double __inputs[(sizeof((long double[]){__VA_ARGS__})/sizeof(long double))] = {__VA_ARGS__};\
    long double __minValue = __inputs[0];\
    for (int __i = 0; __i < (sizeof((long double[]){__VA_ARGS__})/sizeof(long double)); ++__i) {\
        long double __inputValue = __inputs[__i];\
        __minValue = __minValue < __inputValue ? __minValue : __inputValue;\
    }\
    __minValue;\
})
Example use:
x = MMAX(2,3,9,5);
//sets x to 9.

Percentage Calculation always returns 0

I am trying to calculate the percentage of something.
It's simple maths. Here is the code.
float percentComplete = 0;
if (todaysCollection > 0) {
    percentComplete = ((float)todaysCollection / (float)totalCollectionAvailable) * 100;
}
Here the value of todaysCollection is 1751 and totalCollectionAvailable is 4000. Both are int.
But percentComplete always shows 0. Why is this happening? Can anyone help me out?
I'm new to Objective-C.
But percentComplete always shows 0
How are you displaying percentComplete? Bear in mind it's a float - if you interpret it as an int without casting it you'll get the wrong output. For example, this:
int x = 1750;
int y = 4000;
float result = 0;
if (x > 0) {
    result = ((float)x / (float)y) * 100;
}
NSLog(@"[SW] %0.1f", result);   // interpret as a float - correct
NSLog(@"[SW] %i", result);      // interpret as an int without casting - WRONG!
NSLog(@"[SW] %i", (int)result); // interpret as an int with casting - correct
Outputs this:
2010-09-04 09:41:14.966 Test[6619:207] [SW] 43.8
2010-09-04 09:41:14.967 Test[6619:207] [SW] 0
2010-09-04 09:41:14.967 Test[6619:207] [SW] 43
Bear in mind that casting a floating point value to an integer type just discards the stuff after the decimal point - so in my example 43.8 renders as 43. To round the floating point value to the nearest integer use one of the rounding functions from math.h, e.g.:
#import <math.h>
... rest of code here
NSLog(@"[SW] %i", (int)round(result)); // now prints 44
Maybe try with *(float)100 instead - sometimes that is the problem ;)
I think your values for todaysCollection and totalCollectionAvailable are wrong. Double-check them: put NSLog(@"%d", todaysCollection) right before the if statement.

Do I need to use decimal places when using floats? Is the "f" suffix necessary?

I've seen several examples in books and around the web where they sometimes use decimal places when declaring float values even for whole numbers, and sometimes use an "f" suffix. Is this necessary?
For example:
[UIColor colorWithRed:0.8 green:0.914 blue:0.9 alpha:1.00];
How is this different from:
[UIColor colorWithRed:0.8f green:0.914f blue:0.9f alpha:1.00f];
Does the trailing "f" mean anything special?
Getting rid of the trailing zeros for the alpha value works too, so it becomes:
[UIColor colorWithRed:0.8 green:0.914 blue:0.9 alpha:1];
So are the decimal zeros just there to remind myself and others that the value is a float?
Just one of those things that has puzzled me so any clarification is welcome :)
Decimal literals are treated as double by default. Using 1.0f tells the compiler to use a float (which is smaller than double) instead. In most cases it doesn't really matter if a number is a double or a float; the compiler will make sure you get the right format for the job in the end. In high-performance code you may want to be explicit, but I'd suggest benchmarking it yourself.
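You can see the difference in the literals' types directly (a sketch; the sizes shown are typical for iOS/macOS):
NSLog(@"%zu", sizeof(0.914));  // 8: an unsuffixed decimal literal is a double
NSLog(@"%zu", sizeof(0.914f)); // 4: the f suffix makes it a float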
As John said, numbers with a decimal place default to double. TomTom is wrong.
I was curious whether the compiler would just optimize the double literal down to a const float (which I assumed would happen)... it turns out it doesn't, and the speed increase is actually legit, depending on how much you use it. In a math-heavy application you probably do want to use this trick.
It must be taking the stored float variable, casting it to a double, performing the math against the double (the number without the f), then casting it back to a float to store it again. That would explain the difference in calculation even though we're storing in floats each time.
The code & raw results:
https://gist.github.com/1880400
Pulled out relevant benchmark on an iPad 1 in Debug profile (Release resulted in even more of a performance increase by using the f notation):
------------ 10000000 total loops
timeWithDoubles: 1.33593 sec
timeWithFloats: 0.80924 sec
Float speed up: 1.65x
Difference in calculation: -0.000038
Code:
#import <Foundation/Foundation.h>
#import <limits.h>

void runTest(int numIterations);

int main(int argc, const char *argv[]) {
    for (unsigned int magnitude = 100; magnitude < INT_MAX; magnitude *= 10) {
        runTest(magnitude);
    }
    return 0;
}

void runTest(int numIterations) {
    NSTimeInterval startTime = CFAbsoluteTimeGetCurrent();
    float d = 1.2f;
    for (int i = 0; i < numIterations; i++) {
        d += 1.8368383;  // unsuffixed literals are doubles: each pass converts float <-> double
        d *= 0.976;
    }
    NSTimeInterval timeWithDoubles = CFAbsoluteTimeGetCurrent() - startTime;
    startTime = CFAbsoluteTimeGetCurrent();
    float f = 1.2f;
    for (int i = 0; i < numIterations; i++) {
        f += 1.8368383f; // f-suffixed literals stay float: no conversions
        f *= 0.976f;
    }
    NSTimeInterval timeWithFloats = CFAbsoluteTimeGetCurrent() - startTime;
    printf("\n------------ %d total loops\n", numIterations);
    printf("timeWithDoubles: %2.5f sec\n", timeWithDoubles);
    printf("timeWithFloats: %2.5f sec\n", timeWithFloats);
    printf("Float speed up: %2.2fx\n", timeWithDoubles / timeWithFloats);
    printf("Difference in calculation: %f\n", d - f);
}
Trailing f: this is a float.
Trailing f + "." - redundant.
That simple.
8f is 8 as a float.
8.0 is 8 as a float.
8 is 8 as an integer.
8.0f is 8 as a float.
Mostly the "f" can be style - to make sure it is a float, not a double.