Printing CGFloat in Swift

I'm trying to get familiar with Swift and am testing several things.
Here's a strange thing which I can't understand.
var count : NSInteger = 19
var percent : CGFloat = 22.01
var random : NSInteger = NSInteger(percent)
NSLog("%d, %f, %d", count, percent, random);
println("\(count), \(percent), \(random)")
It should print 19, 22.01, 22 but the log is...
19, 0.000000, 33875549
19, 22.0100002288818, 22
What's wrong here? After I removed the type specifier, it works fine with println but not with NSLog.
Any idea why the log is not correct?
ADDED
What about println? Is there no way to print 22.01 using \()?

My guess is you're compiling for 32-bit iOS, where CGFloat is a 32-bit float.
The closest float to 22.01 is exactly 22.0100002288818359375.
The closest 64-bit double to 22.01 is exactly 22.010000000000001563194018672220408916473388671875.
It appears that Swift string interpolation converts a double to the shortest string that would convert back to exactly the same double. It converts a float to a double (with the extra bits being zeros), then converts the double to a string as if it had been given a double in the first place.
The shortest string that converts back to Double(22.010000000000001563194018672220408916473388671875) is 22.01. But the shortest string that converts back to Double(22.0100002288818359375) is 22.0100002288818.
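A small playground sketch of the effect described above (the exact digits printed can vary with the Swift version, so treat the comments as approximate):
let asDouble: Double = 22.01        // nearest Double is 22.010000000000001563…
let asFloat: Float = 22.01          // nearest Float is exactly 22.0100002288818359375
print("\(asDouble)")                // "22.01" – the shortest string that round-trips
print("\(Double(asFloat))")         // something like "22.0100002288818…" – the Float's rounding error becomes visible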

CGFloats are weird since they can be 32 or 64 bit depending on the system. It seems that NSLog has an issue with them. If you type percent explicitly as a Float or Double, it will work correctly. You can also get the underlying native value with the CGFloat's .native property.
NSLog("%d, %f, %d", count, percent.native, random);

Use %@ for CGFloat and it works fine:
NSLog("%d, %@, %d", count, percent, random);
This works because you can call .description on a CGFloat.
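If you'd rather not rely on how %@ boxes the value, you can pass the description string yourself; a small sketch:
import Foundation
import CoreGraphics

let percent: CGFloat = 22.01
NSLog("%@", percent.description)   // CGFloat.description is a String, which bridges to NSString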

Related

Attempting to have SKLabelNode show a float

Trying to get an SKLabelNode to show a float, but instead it only shows an integer. This is what I have at the moment:
sdrLabel.text = String(Float(NSUserDefaults.standardUserDefaults().integerForKey("TotalScore") / NSUserDefaults.standardUserDefaults().integerForKey("TotalDeath")))
I tried to cast it to a float and present it as a string, but it still shows an int. For example, for 233 / 8 it shows 29.
You are losing precision because you are working with integers and then casting the result to Float. You need to work with Float (or Double) when dividing those two numbers.
You can use this instead:
sdrLabel.text = String(NSUserDefaults.standardUserDefaults().doubleForKey("TotalScore") / NSUserDefaults.standardUserDefaults().doubleForKey("TotalDeath"))
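The underlying issue is that 233 / 8 is evaluated as Int division before the cast; a short sketch with the numbers from the question:
let score = 233
let deaths = 8
let truncated = Float(score / deaths)        // 29.0 – the division already happened in Int
let ratio = Float(score) / Float(deaths)     // 29.125 – convert before dividing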

Double String formatting madness

I have the following in playground:
let a:CGFloat = 1.23
let b:Double = 3.45
let c:Float = 6.78
String(format: "(%2.2f|%2.2f|%2.2f)",a,b,c) // prints "(3.45|6.78|0.00)"
let d = NSMakePoint(9.87,6.54)
String(format: "(%2.2f|%2.2f)",d.x,d.y) // prints "(0.00|0.00)"
So why is c:Float rendered as 0.00? I possibly need something other than f (Apple's documentation on this formatting function is, to be polite, thin to the point of vanishing).
BUT: Why is the CGFloat in the first place rendered correctly while the two CGFloats inside the NSPoint get rendered as 0.00?
And no: it's not a duplicate of Precision String Format Specifier In Swift and its counterpart.
P.S.
String(format: "(%2.2f|%2.2f)",Double(d.x),d.y) // prints "(9.87|0.00)"
which is a work around but no explanation.
And a P.P.S.: Isn't %2.2f supposed to print " 9.87" instead of "9.87" (2 places for the leading digits)? It seems to ignore the number. Specifying %02.2f also prints "9.87" rather than "09.87".
CGFloat isn't a float or a double; it is its own struct. If you want the double or float value of a CGFloat (which depends on whether the architecture is 32- or 64-bit), you can access it with a.native. In other words, try:
String(format: "(%2.2f|%2.2f|%2.2f)",a.native,b,c)
You'd see similar behavior if you tried to pass other non-float or non-double arguments to the %f formatter. For example:
var str = "Hello, playground"
String(format: "%2.2f|%2.2f|%2.2f", arguments: [str,b,c])
would result in "3.45|6.78|0.00". It appears to be looking for another float in your arguments to satisfy the last %f, and defaults to 0.00
As for the PPS. %2.2F is two decimal place and at least 2 total digits. If you wanted two digits minimum before the decimal, you'd want %5.2f. 5 because the decimal itself takes a place.
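A quick width-specifier check in a playground (sketch):
import Foundation

String(format: "%2.2f", 9.87)     // "9.87"  – already wider than 2 characters, so no padding
String(format: "%5.2f", 9.87)     // " 9.87" – space-padded to a minimum width of 5
String(format: "%05.2f", 9.87)    // "09.87" – zero-padded instead of space-padded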

NSString - Truncate everything after decimal point in a double

I have a double that I need only the value of everything before the decimal point.
Currently I am using
NSString *level = [NSString stringWithFormat:#"%.1f",doubleLevel];
but when given a value of 9.96, this returns "10". So it is rounding. I need it to return only the "9". (note - when the value is 9.95, it correctly returns the "9" value.)
Any suggestions?
Thank You.
Simply assign the float/double value to an int variable.
int intValue = doubleLevel;
Cast that baby as an int.
int castedDouble = doubleLevel;
Anything after the . in the double will be truncated.
9.1239809384 --> 9
123.90454980 --> 123
No rounding, simple truncation.
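The same truncation in Swift, for comparison (a small sketch using the value from the question):
let doubleLevel = 9.96
let intValue = Int(doubleLevel)    // 9 – Int(_:) truncates toward zero, no rounding
let level = "\(intValue)"          // "9"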
If you want to keep it as a float:
CGFloat f = 9.99;
f = floorf(f);
There are quite a variety of floor and round implementations. They have been around since UN*X, and are actually part of those low-level libraries, be they BSD, POSIX, or some other variety; you should make yourself familiar with them. There are different versions for different "depths" of floating-point variables.
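Note that floor and trunc behave differently for negative input: floor rounds toward negative infinity, while trunc drops the fractional part, which is what the question asks for. A small sketch (shown in Swift; the C functions floor/floorf/trunc/truncf behave the same way):
import Foundation

floor(9.96)      //  9.0
trunc(9.96)      //  9.0
floor(-9.96)     // -10.0 – toward negative infinity
trunc(-9.96)     //  -9.0 – fractional part simply dropped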
NSString *level = [NSString stringWithFormat:#"%d",doubleLevel];

Performing operations on a double returns 0

I have a method that receives a number as an NSString.
I wish to convert this string to a double which I can use to calculate a temperature.
Here's my method.
NSString *stringTemp = text; // text is an NSString
NSLog(@"%@", stringTemp); // used for debugging
double tempDouble = [stringTemp doubleValue];
NSLog(@"%f", tempDouble); // used for debugging
Please note I put the NSLog commands here just to see if the number was correct. The latter NSLog returns a value of 82.000000 etc. (constantly changes as it's a temperature).
Next I wanted to use this double and convert it to a Celsius value. To do so, I did this:
double celsiusTemp = (5 / 9) * (tempDouble - 32);
Doing this: NSLog(#"%d", celsiusTemp); , or this: NSLog(#"%f", celsiusTemp); both give me a value of 0 in the console. Is there any reason why this would be happening? Have I made a stupid mistake somewhere?
Thank you for your help!
Try doing (5.0 / 9.0). If you only use ints to do math where you are expecting a fractional result (like 0.55), everything after the decimal point will be lost, because the expression is evaluated with integer arithmetic.
5 / 9 is the division of two integers, and as such uses integer division, which performs the division normally and then truncates the result. So the result of 5 / 9 is always the integer 0.
Try:
double celsiusTemp = (5.0 / 9) * (tempDouble - 32);
If you evaluate (5 / 9) as an integer, then it is just 0.
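The pitfall is easy to reproduce in a couple of lines (sketched here in Swift for brevity; the arithmetic rules are the same in C and Objective-C):
let tempDouble = 82.0
let wrong = Double(5 / 9) * (tempDouble - 32)   // 5 / 9 is Int division, so this is 0.0
let right = (5.0 / 9.0) * (tempDouble - 32)     // ≈ 27.78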

Inaccurate division of doubles (Visual C++ 2008)

I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with.
The function looks like this:
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I'm having recently (I don't think I had this problem before, and no changes have been made to the code) is that the result is not very accurate. The result does not contain any decimals, and it is in fact even less accurate than 1 millisecond.
When I enter the expression in the debugger, the result is as accurate as I would expect.
I understand that a double cannot hold the full accuracy of a 64-bit integer, but at this time the PerformanceCounter only requires 46 bits (and a double should be able to store 52 bits without loss).
Furthermore it seems odd that the debugger would use a different format to do the division.
Here are some results I got. The program was compiled in Debug mode, with the Floating Point mode in the C++ options set to the default (Precise, /fp:precise).
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?
Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now.
Maybe the reason why I'm only seeing this behavior now might be because my computer's uptime is now 16 days, so the value is larger than I'm used to?
So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window.
Does it use a higher-precision type than double for its results?
Adion,
If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
You are correct about the size of the numbers. It throws off the accuracy of the floating point calculations.
For more about this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html
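One way to get a feel for how the available precision shrinks as the magnitude grows is to look at the spacing between adjacent doubles; a Swift sketch using the counter value from the question (the start value here is hypothetical):
let rawCounter = 30270310439445.0
rawCounter.ulp                              // ≈ 0.0039 – spacing between representable doubles at this size
let rebased = rawCounter - 30270310000000.0 // subtract a recent start value (hypothetical)
rebased.ulp                                 // far finer spacing, so subsequent math keeps more precision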
Thanks, using decimal would probably be a solution too.
For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run longer than a week or so without restarting.
I just remember the performance counter of when my program started, and subtract this from the current counter before converting to double and doing the division.
I'm not sure which solution would be fastest, I guess I'd have to benchmark that first.
#include <windows.h>

bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart;

double timeGetExactTime()
{
    if (!perfTimerInitialized) {
        // Cache the frequency (in ticks per millisecond) and the starting counter value.
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }
    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        // Subtract the start value so the number stays small enough for a double.
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }
    return (double)timeGetTime();
}