Why can't I get an accurate readout from a CGPoint at extreme coordinates?
For example, if the CGPoint is created for (500000250, 500000100) and I use print(), I only see "5e+08" for both x and y. When I use a number formatter (see below) to convert the scientific notation back to a decimal, I only get 500000000 for each of x and y, which is off by 250 and 100 respectively.
import Foundation
import CoreGraphics

let coordinates = CGPointMake(500000250, 500000100)
let numberFormatter = NSNumberFormatter()
numberFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle
let xCoord = numberFormatter.numberFromString("\(coordinates.x)")
let yCoord = numberFormatter.numberFromString("\(coordinates.y)")
I've also tried:
let decimalValue = NSDecimalNumber(string: "\(coordinates.x)")
All the results read out as 500000000, and the value cannot be stored for future use because it is inaccurate.
I must be missing something.
CGPoint uses CGFloat for the x and y coordinates.
On a 32-bit platform CGFloat is the type Float, which has about 6 significant digits. In your case the last three digits are rounded off.
If you move to a 64-bit platform CGFloat is the type Double which has at least 15 significant digits.
If you need to do calculations with 15 significant digits, do it with Double then cast to CGFloat when you pass it in to Core Graphics.
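A minimal sketch of that approach, using the coordinates from the question (the variable names are just for illustration):

import CoreGraphics

// Do the arithmetic in Double to keep the full precision ...
let x: Double = 500000250
let y: Double = 500000100

// ... and convert to CGFloat only at the Core Graphics boundary.
// On a 64-bit platform this conversion is lossless, because CGFloat is Double there.
let point = CGPoint(x: CGFloat(x), y: CGFloat(y))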
Related
I was converting Float => CGFloat and it gave me the following result. Why does it come out as "0.349999994039536" after the conversion, but work fine with Double => CGFloat?
import CoreGraphics

let float: Float = 0.35
let cgFloatFromFloat = CGFloat(float)
print(cgFloatFromFloat)
// 0.349999994039536

let double: Double = 0.35
let cgFloatFromDouble = CGFloat(double)
print(cgFloatFromDouble)
// 0.35
Both converting “.35” to float and converting “.35” to double produce a value that differs from .35, because the floating-point formats use a binary base, so the exact mathematical value must be approximated using powers of two (negative powers of two in this case).
Because the float format uses fewer bits, its result is less precise and, in this case, less accurate. The float value is 0.3499999940395355224609375, and the double value is 0.34999999999999997779553950749686919152736663818359375.
I am not completely conversant with Swift, but I suspect the algorithm it is using to convert a CGFloat to decimal (with default options) is something like:
Produce a fixed number of decimal digits, with correct rounding from the actual value of the CGFloat to the number of digits, and then suppress any trailing zeroes. For example, if the exact mathematical value is 0.34999999999999997…, and the formatting uses 15 significant digits, the intermediate result is “0.350000000000000”, and then this is shortened to “0.35”.
The way this operates with float and double is:
When converted to double, .35 becomes 0.34999999999999997779553950749686919152736663818359375. When printed using the above methods, the result is “0.35”.
When converted to float, .35 becomes 0.3499999940395355224609375. When printed using the above method, the result is “0.349999994039536”.
Thus, both the float and double values differ from .35, but the formatting for printing does not use enough digits to show the deviation for the double value, while it does use enough digits to show the deviation for the float value.
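If you want to see the deviation yourself, one way (a sketch, not part of the answer above) is to request more digits than the default formatting uses; 9 and 17 significant digits are enough to show the rounded values of a float and a double, respectively:

import Foundation

let f: Float = 0.35
let d: Double = 0.35
print(String(format: "%.9g", f))    // 0.349999994
print(String(format: "%.17g", d))   // 0.34999999999999998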
How to calculate doubles accurately in Swift?
I tried it in a Swift playground, and 10.0 - 0.2 doesn't come out as 9.8.
How to fix it?
You could use round.
So for your example this should work:
let x = 10.0 - 0.2
let y = Double(round(100*x)/100)
If you want more decimal places, multiply and divide by 1000, 10000, and so on.
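If you need exact decimal arithmetic rather than rounding for display, a different option (a sketch in modern Swift, not part of the answer above) is Foundation's Decimal, which stores base-10 digits exactly:

import Foundation

let a = Decimal(string: "10.0")!
let b = Decimal(string: "0.2")!
print(a - b)   // 9.8, exactly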
I have a number that I am trying to append to an array. The number is the coordinate -37.77745068746633, but in my Swift project, if I println the array after the append call, the value that should be -37.77745068746633 prints as -37.77745069.
I am receiving the number through the Google snap-to-roads API, and I can see the original as the full number, but it changes after I call
self.latitude = locations["latitude"] as! Double
Swift just doesn't seem to store the entire value. Is there rounding on by default?
Thanks
If you keep the number as a Double all the time, Swift won't round it; it is most likely the print statement that truncates it to a reasonable precision.
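For example, you can ask for all the stored digits explicitly (a minimal sketch using the value from the question):

import Foundation

let stored: Double = -37.77745068746633
print(String(format: "%.14f", stored))   // -37.77745068746633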
By the way, try to check the distance between the original and the truncated coordinates:
import CoreLocation

let lat1: Double = -37.77745068746633
let lat2: Double = -37.77745069
let lon: Double = 150
let loc1 = CLLocation(latitude: lat1, longitude: lon)
let loc2 = CLLocation(latitude: lat2, longitude: lon)
print(loc1.distanceFromLocation(loc2)) // prints 0.000281218294500339
If I am not making any mistake, the difference is just 0.3 millimeters.
I have a UISlider that sets the radius of a search (5 to x miles, with a maximum of x = 100 miles), and I want to be able to get a random radius distance within this range.
@IBAction func sliderMoved(sender: UISlider) {
    // gives a range (minimum 5 miles, maximum 100 miles)
    var range = (sender.value * 95) + 5
    // gives a random distance in miles from within that range
    var distance = Int((arc4random() % (range)) + 1)
}
When I try to assign "distance" I get the error "could not find an overload for '%' that accepts the supplied arguments".
Use arc4random_uniform()
let distance = arc4random_uniform(UInt32(range)) + 1
arc4random_uniform() will perform the modulo operation without bias.
The problem is that sender.value (a UISlider's value) is a Float, while arc4random() returns a UInt32; that mismatch is your main issue here.
What you've done is try to convert the whole thing to an Int when the internal components need to match first. So get the range and the arc4random() result into the same type (I've used Float here) and then do the casting you want:
Int(Float(arc4random()) % range + 1)
You could of course also use arc4random_uniform, as the other commenters have stated:
arc4random_uniform(UInt32(range)) + 1
which is actually much better if you don't really care about the decimal part of your range.
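Putting it together, a minimal sketch of the handler with matching types (RadiusViewController is a hypothetical name; the 5 to 100 mile mapping is the one from the question):

import UIKit

class RadiusViewController: UIViewController {
    @IBAction func sliderMoved(sender: UISlider) {
        // slider value (0...1) mapped to a 5...100 mile range
        let range = (sender.value * 95) + 5
        // random whole-mile distance from 1 up to range (decimal part of range truncated)
        let distance = Int(arc4random_uniform(UInt32(range))) + 1
        print("distance: \(distance) miles")
    }
}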
http://www.raywenderlich.com/forums/viewtopic.php?f=2&t=20282
How can I get the absolute value of a CGFloat? I tried abs() but it only works for ints. Is there a built-in way to do so?
CGFloat flo = -123;
abs(flo)
this returns 0
Use fabs()
CGFloat f = -123.4f;
CGFloat g = fabs(f);
CGFloat is defined as a double on 64-bit machines and a float on 32-bit machines. If you are targeting both 64-bit and 32-bit, then this answer gets more complicated, but it is still correct.
You want to use fabs() because it works on the double datatype, which is larger than float. On 32-bit, assigning the return value of fabs() (which is a double) into a CGFloat (which is a float) is OK only if you are taking the absolute value of a CGFloat or float; you could potentially overflow a 32-bit CGFloat if you try to store the absolute value of a large double. In short: 64-bit is fine, and on 32-bit don't mix and match double and CGFloat and you'll be fine.
The ABS() macro can apparently have side effects on some compilers, since a macro may evaluate its argument more than once.
For a 64/32-bit system:
#if CGFLOAT_IS_DOUBLE
CGFloat g = fabs(flo);
#else
CGFloat g = fabsf(flo);
#endif
I normally just use the ABS() macro; as far as I can tell, it works regardless of which system you're on or which primitive type you're using:
CGFloat x = -1.1;
double y = -2.1;
NSInteger z = -5;
x = ABS(x);
y = ABS(y);
z = ABS(z);
Small addition:
CGFloat g = fabs(f);
On 32-bit (where CGFloat is a float), this will produce a conversion warning, because fabs() returns a double value. To get a float value back, you need to use fabsf():
CGFloat g = fabsf(f);
fabs is deprecated in Swift. abs is now generic, with this signature:
/// Returns the absolute value of the given number.
///
/// The absolute value of `x` must be representable in the same type. In
/// particular, the absolute value of a signed, fixed-width integer type's
/// minimum cannot be represented.
///
/// let x = Int8.min
/// // x == -128
/// let y = abs(x)
/// // Overflow error
///
/// - Parameter x: A signed number.
/// - Returns: The absolute value of `x`.
#inlinable public func abs<T>(_ x: T) -> T where T : Comparable, T : SignedNumeric
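So in current Swift there is no need for fabs() at all; the generic abs works directly on CGFloat, which conforms to SignedNumeric (a minimal sketch):

import CoreGraphics

let f: CGFloat = -123.4
let g = abs(f)   // 123.4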