What's the difference between M_PI and M_PI_2? - swift

I forked a project from GitHub, and Xcode shows a lot of warnings:
'M_PI' is deprecated: Please use 'Double.pi' or '.pi' to get the value
of correct type and avoid casting.
and
'M_PI_2' is deprecated: Please use 'Double.pi' or '.pi' to get the value
of correct type and avoid casting.
Since both M_PI and M_PI_2 are prompted to be replaced by Double.pi, I assume they are in fact the same value. However, there's this code in the project:
switch angle {
case M_PI_2:
...
case M_PI:
...
case Double.pi * 3:
...
default:
...
}
I'm really confused here: are M_PI and M_PI_2 different, or are they just the same?
UPDATE:
It turns out to be my blunder: Xcode actually says 'M_PI_2' is deprecated: Please use 'Double.pi / 2' or '.pi / 2' to get the value of correct type and avoid casting. So it isn't a bug; the difference between the two prompts was just easy to miss.

Use Double.pi / 2 for M_PI_2 and Double.pi for M_PI.
You can also use Float.pi and CGFloat.pi.
In Swift 3 & 4, pi is defined as a static variable on the floating point number types Double, Float and CGFloat.
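A minimal sketch of the replacements (the halfPi / fullPi names and the describe function are just for illustration):
let halfPi = Double.pi / 2   // replaces M_PI_2 (π/2)
let fullPi = Double.pi       // replaces M_PI (π)

func describe(_ angle: Double) -> String {
    switch angle {
    case halfPi: return "quarter turn (was M_PI_2)"
    case fullPi: return "half turn (was M_PI)"
    default:     return "something else"
    }
}

print(describe(.pi / 2))   // "quarter turn (was M_PI_2)"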

These constants are related to the implementations of different functions in the math library:
s_cacos.c: __real__ res = (double) M_PI_2 - __real__ y;
s_cacosf.c: __real__ res = (float) M_PI_2 - __real__ y;
s_cacosh.c: ? M_PI - M_PI_4 : M_PI_4)
...
s_clogf.c: __imag__ result = signbit (__real__ x) ? M_PI : 0.0;
M_PI, M_PI_2, and M_PI_4 show up quite often but there's no 2.0 * M_PI. 2π is just not that useful, at least in implementing libm.
As for M_PI_2 and M_PI_4, their existence is well justified. The documentation of the GNU C library suggests that "these constants come from the Unix98 standard and were also available in 4.4BSD". Compilers were not that smart back then: typing M_PI/4 instead of M_PI_4 could cause an unnecessary division at run time. Although modern compilers can optimize that away (GCC has used MPFR since 2008, so even the rounding is done correctly), using the precomputed constants is still a more portable way to write high-performance code.
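A quick Swift check of how the suffixed constants relate to each other (on Apple platforms, importing Foundation or Darwin brings in the C math constants; Xcode will flag them as deprecated, which is exactly the warning from the question):
import Foundation

// The suffix is a divisor, not a multiplier:
assert(M_PI_2 == M_PI / 2)     // π/2
assert(M_PI_4 == M_PI / 4)     // π/4
assert(Double.pi == M_PI)      // the modern Swift spelling of the same value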

M_PI is defined as a macro
#define M_PI 3.14159265358979323846264338327950288
in math.h, and it is part of the POSIX standard.

Related

What does "2..xxx" mean in shaderlab?

float3 f = float3(1,2,3);
f *= 2..xxx;
I have no idea what ..xxx does. I got the code from here
It's a "swizzle" operation. In this case of a scalar constant 2.0.
2..xxx is equivalent to float3(2.0, 2.0, 2.0)
You can find more info here in the "Vector swizzle operator" section.
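For readers coming from the Swift questions above, here is a rough analogue using the simd module (not ShaderLab, just an illustration of what the swizzle expands to):
import simd

var f = SIMD3<Float>(1, 2, 3)
// `2.` is the scalar literal 2.0; `.xxx` broadcasts it into all three lanes,
// so the shader line is effectively a multiply by (2, 2, 2):
f *= SIMD3<Float>(repeating: 2.0)
print(f)   // SIMD3<Float>(2.0, 4.0, 6.0)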

"The compiler is unable to type-check this expression in reasonable time" for a simple formula

I have what appears to be a rather simple arithmetic expression:
let N = 2048
// var c = (0..<N).map{ sin( 2.0 * .pi * Float($0) / (Float(N)/2.0)) }
let sinout = (0..<N * 10).map { x in
sin(2 * .pi * Float(x) / Float(N / 2))
}
But this is generating:
The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
Why is such a simple equation not parseable by the Swift compiler? How do we write equations that Swift can actually parse? This must be a major headache for people writing DSP and/or linear algebra libraries: what workarounds or patterns do you use?
You just have to explicitly set the return type of your map expression:
map { x -> Float in
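Applied to the snippet from the question, the fixed version might look like this (a sketch; the explicit -> Float pins down which overloads of *, / and sin the compiler has to consider):
import Foundation

let N = 2048
let sinout = (0..<N * 10).map { x -> Float in
    sin(2 * .pi * Float(x) / Float(N / 2))
}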
Sometimes it is hard for Swift to type-check seemingly easy code. The best thing you can do in those cases is break it into smaller chunks. I honestly think this is a compiler limitation that should be fixed, but for some reason it is still there.
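One way to break it up without sprinkling casts everywhere is to hoist the constant factor out of the closure (a sketch; the step name is my own, and precomputing it also avoids redoing the division on every element):
import Foundation

let N = 2048
let step = 2 * Float.pi / Float(N / 2)           // computed once, type is unambiguous
let sinout = (0..<N * 10).map { sin(Float($0) * step) }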

Hints on getting basic arithmetic expressions to be parsed in Swift

Consider the following expression:
let N = 2048
var c = (0..<N).map{ f -> Float in sin( 2 * .pi * f / (N/2)) }
Swift cannot really parse it: the compiler reports the same "unable to type-check this expression in reasonable time" error.
This is already a very small expression; it seems absurd to break it into even smaller pieces. So I am trying to use type casts, but I am getting weary of adding so many explicit casts:
let N = 2048
var c: [Float] = (0..<N).map{ f -> Float in
Float(sin( 2.0 * .pi * f / (Float(N/2)))) }
Even with the above, the error persists.
Why is Swift so weak at handling these simple arithmetic expressions? What can I do, short of breaking it into pieces of the form
let c = a * b
let f = c * d
That is just too simplistic to be practical for signal processing. I am guessing there are tricks to get the compiler to be a bit more intelligent: please do share.
The issue is that the arithmetic operators (+,-,* and /) have a lot of overloads. Hence, when you write expressions containing a lot of those operators, the compiler cannot resolve them in time.
This is especially true when you have type errors. The compiler tries to find the correct overload, but cannot do so, since your types are mismatching and there's no matching overload. However, by the time the compiler could infer this, it's already past the timeout for resolving expressions and hence you get that error instead of the actual type error.
As soon as you resolve the type errors by casting all Ints to Float, the single line expression compiles just fine.
let c = (0..<N).map{ f -> Float in sin( 2 * .pi * Float(f) / Float(N/2)) }
Once you do that, you don't even need the named closure argument or the explicit return type annotation anymore.
let c = (0..<N).map{ sin(2 * .pi * Float($0) / Float(N/2)) }
That looks like Java. What about:
let N = 2048
var c = (0..<N).map{ f in
sin( 2.0 * .pi * Float(f) / Float(N/2))
}

Weird casting needed in Swift

I was watching some of the videos from WWDC 2014 and trying out code I liked, but one weird thing I noticed is that Swift keeps getting mad at me and wanting me to cast between different number types. This is easy enough, but in the WWDC videos they did NOT need to do this. Here is an example from "What's New With Interface Builder":
In the code below, -M_PI/2 keeps giving me the error "Could not find an overload for '/' that accepts the supplied arguments".
Does anyone have a solution to this problem that does NOT simply involve casting? There is clearly another way of doing this, and I have many more examples of similar problems.
if !ringLayer {
ringLayer = CAShapeLayer()
let innerRect = CGRectInset(bounds, lineWidth / 2.0, lineWidth / 2.0)
let innerPath = UIBezierPath(ovalInRect: innerRect)
ringLayer.path = innerPath.CGPath
ringLayer.fillColor = nil
ringLayer.lineWidth = lineWidth
ringLayer.strokeColor = UIColor.blueColor().CGColor
ringLayer.anchorPoint = CGPointMake(0.5, 0.5)
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
layer.addSublayer(ringLayer)
}
ringLayer.frame = layer.bounds
Edit: NB: CGFloat has changed in beta 4, specifically to make handling this 32/64-bit difference easier. Read the release notes and don't take the below as gospel now: it was written for beta 2.
After a clue from this answer I've worked it out: it depends on the selected project architecture. If I leave the Project architecture at the default of (armv7, arm64), then I get the same error as you with this code:
// Error with armv7 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
...and I need to cast to a Float (well, CGFloat underneath, I'm sure) to make it work:
// Works with an explicit cast on an armv7 target
ringLayer.transform = CATransform3DRotate(ringLayer.transform, Float(-M_PI/2), 0, 0, 1)
However, if I change the target architecture to arm64 only, then the code works as written in the Apple example from the video:
// Works fine with arm64 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
So to answer your question, I believe this is because CGFloat is defined as double on 64-bit architectures, so it's fine to pass M_PI-derived values (M_PI is also a double) as CGFloat parameters. However, when armv7 is the target, CGFloat is a float, not a double, so you'd be losing precision when passing M_PI-derived expressions directly as CGFloat parameters.
Note that by default Xcode will only build for the "active" architecture for Debug builds. I found it was possible to toggle this error by switching between iPhone 4S and iPhone 5S schemes in the standard drop-down in the menu bar of Xcode, as they have different architectures. I'd guess that in the demo video a 64-bit architecture target is selected, but in your project you've got a 32-bit architecture selected?
Given that a CGFloat is double-precision on 64-bit architectures, the simplest way of dealing with this specific problem would be to always cast to CGFloat.
But as a demonstration of dealing with this type of issue when you need to do different things on different architectures, Swift does support conditional compilation:
#if arch(x86_64) || arch(arm64)
ringLayer.transform = CATransform3DRotate (ringLayer.transform, -M_PI / 2, 0, 0, 1)
#else
ringLayer.transform = CATransform3DRotate (ringLayer.transform, CGFloat(-M_PI / 2), 0, 0, 1)
#endif
However, that's just an example. You really don't want to be doing this sort of thing all over the place, so I'd certainly stick to simply using CGFloat(<whatever POSIX double value you need>) to get either a 32- or 64-bit value depending on the target architecture.
Apple has added much more help for dealing with the different float types in later compiler releases. For example, in the early betas you couldn't even take floor() of a single-precision float easily, whereas now (as of Xcode 6.1) there are overloads of floor(), ceil(), etc. for both float and double, so you don't need to fiddle with conditional compilation.
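For what it's worth, on current Swift the cast disappears entirely, because CGFloat has its own pi (as the top answer in this thread notes). A minimal sketch:
import QuartzCore

let ringLayer = CAShapeLayer()
// CGFloat.pi already has the right width for the current architecture,
// so no Float/CGFloat conversion is needed:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -CGFloat.pi / 2, 0, 0, 1)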
There seem to be issues currently with automatic conversions between Objective-C numeric types and Swift types. In this case I was able to get it to work by declaring lineWidth as a Float. I don't know why they didn't have that issue in the video; I assume they were using a different build. Either there is an Objective-C interop setting I'm missing, or it's just a beta issue.
To verify some of the basic issues (which even show up in a Playground) I used:
var x:NSNumber = 1
var y:Integer = 2
var z:Int = 3
x += 5 //error
y += 6 //error
z = z + y //error
For Swift 1.2 you have to cast the second parameter to CGFloat.
This code works:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, CGFloat(-M_PI/2), 0, 0, 1)

objective-c : what is the equivalent of java Math.toRadians(EndLat - StartLat) in objective-c?

What is the equivalent of the Java function
Double EndLat;
Double StartLat;
Math.toRadians(EndLat - StartLat);
in objective-c?
#include <math.h>
(EndLat - StartLat) * M_PI / 180.0;
You might want to put this in a function if you're going to be using it a lot.
Well, Math.toRadians(a) computes the radian value of the angle a given in degrees.
As any basic math text will tell you, the equivalent code is
result = a * 3.14159265358979323846 / 180.0
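And since the rest of this thread is Swift, here is the same helper as a small Swift sketch (the toRadians name mirrors the Java method; pick whatever name you like):
/// Degrees to radians, like Java's Math.toRadians.
func toRadians(_ degrees: Double) -> Double {
    return degrees * .pi / 180.0
}

let endLat = 52.5, startLat = 48.1            // example values
print(toRadians(endLat - startLat))           // ≈ 0.0768 radians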