How can I implement the NORMDIST function in Objective-C? - iphone

I am trying to implement the NORMDIST function in my iPhone application, but I am not sure which library to import or how I would go about doing this.
If someone can point me in a direction, that would be awesome.

Not sure if this is precisely what you're looking for, but here is an algorithm for calculating a cumulative normal distribution approximation. The C implementation below should be trivial to use from Objective-C, since Objective-C is a superset of C (just #include <math.h>).

Try this:
static double normdist(double x, double mean, double standard_dev) {
    double res;
    x = (x - mean) / standard_dev;
    if (x == 0) {
        res = 0.5;
    } else {
        double oor2pi = 1 / (sqrt(2.0 * 3.14159265358979323846));
        double t = 1 / (1.0 + 0.2316419 * fabs(x));
        t *= oor2pi * exp(-0.5 * x * x)
             * (0.31938153 + t
                * (-0.356563782 + t
                   * (1.781477937 + t
                      * (-1.821255978 + t * 1.330274429))));
        if (x >= 0)
            res = 1.0 - t;
        else
            res = t;
    }
    return res;
}

Related

How to modify this code to return Geopoint

I would like this code to return a newly constructed geopoint.
I need this,
GeoPoint prjTest=new GeoPoint(vxi+x,vyi+y);
to stick somewhere and return prjTest. I'm new to programming and I don't know the syntax well. I tried many things; I could keep guessing for a long time. Please help. Thanks.
public class ProjectileTest
{
    public ProjectileTest(float vi, float angle) /** renamed velocity -> vi */
    {
        // Starting location
        double xi = 0, yi = 100;
        final double gravity = -9.81;
        // timeSlice declares the interval before checking the new location.
        double timeSlice = 0.001; /** renamed time -> timeSlice */
        double totalTime = 0;     /** renamed recordedTime -> totalTime */
        double vxi = vi * Math.cos(Math.toRadians(angle)); /** renamed xSpeed -> vxi */
        double vyi = vi * Math.sin(Math.toRadians(angle)); /** renamed ySpeed -> vyi */
        // This (secondsTillImpact) seems to give a very accurate time whenever the angle is positive.
        double secondsTillImpact = Math.sqrt(2 * yi / -(gravity));
        /** Not sure I agree. Does this formula take into account the upward motion
         *  of the projectile along its parabolic arc? My suspicion is that this
         *  formula only "works" when the initial theta is: 180 <= angle <= 360.
         *
         *  Compare with the result predicted by quadratic(). Discarding the
         *  intercept which can't work for us (i.e. the negative one, because time
         *  only moves forward) leaves us with an expected result very close to the
         *  observed result.
         */
        double y; /** Current position along the y-axis */
        double x; /** Current position along the x-axis */
        do {
            // x = x + (xSpeed * time);
            x = vxi * totalTime;
            // y = y + (ySpeed * time);
            y = yi + vyi * totalTime + .5 * gravity * (totalTime * totalTime);
            // ySpeed = ySpeed + (gravity * time);
            double vy = vyi + gravity * totalTime; /** Current velocity of vector y-component */
            System.out.println("X: " + round2(x) + " Y: " + round2(y) + " YSpeed: " + round2(vy));
            totalTime += timeSlice;
        }
        while (y > 0);
        ////////////////////////////++++++++ GeoPoint prjTest = new GeoPoint(vxi + x, vyi + y);
        System.out.println("Incorrectly expected seconds: " + secondsTillImpact + "\nResult seconds: " + totalTime);
        quadratic((.5 * gravity), vyi, yi);
    }

    public double round2(double n) {
        return (int) (n * 100.0 + 0.5) / 100.0;
    }

    public void quadratic(double a, double b, double c) {
        if (b * b - 4 * a * c < 0) {
            System.out.println("No roots in R.");
        } else {
            double dRoot = Math.sqrt(b * b - 4 * a * c); /** root of the discriminant */
            double x1 = (-b + dRoot) / (2 * a); /** x-intercept 1 */
            double x2 = (-b - dRoot) / (2 * a); /** x-intercept 2 */
            System.out.println("x-int one: " + x1 + " x-int two: " + x2);
        }
    }
}

Color conversion RGB to HSL using Core Image Kernel Language

I'm trying to create an image filter that will shift the color of an image. To do this I need to convert RGB to HSL, apply the shift, and then convert HSL back to RGB. I did some research and found formulas that can help with this task.
I implemented them in a Swift playground just to test whether they are reliable, and they are. I won't post the Swift code here, to keep things clean, but I'll show my test results:
input: rgb (61, 117,237) or (0.24,0.46,0.93)
result:
rgb2hsl [0.613527 0.831325 0.585] or (221, 83, 58.5) //hsl
hsl2rgb [0.24 0.46 0.93] //back to rgb
Great! So far so good.
Now we need to convert our Swift code to Core Image Kernel Language (CIKL).
And here it is:
float hue2rgb(float f1, float f2, float hue) {
    if (hue < 0) {
        hue += 1.0;
    } else if (hue > 1) {
        hue -= 1.0;
    }
    float res;
    if (6*hue < 1) {
        res = f1 + (f2 - f1) * 6 * hue;
    } else if (2*hue < 1) {
        res = f2;
    } else if (3*hue < 2) {
        res = f1 + (f2 - f1) * (2.0/3.0 - hue) * 6;
    } else {
        res = f1;
    }
    return res;
}
vec3 hsl2rgb(vec3 hsl) {
    vec3 rgb;
    if (hsl.y == 0) {
        rgb = vec3(hsl.z, hsl.z, hsl.z);
    } else {
        float f2;
        if (hsl.z < 0.5) {
            f2 = hsl.z * (1.0 + hsl.y);
        } else {
            f2 = hsl.z + hsl.y - hsl.y * hsl.z;
        }
        float f1 = 2 * hsl.z - f2;
        float r = hue2rgb(f1, f2, hsl.x + 1.0/3.0);
        float g = hue2rgb(f1, f2, hsl.x);
        float b = hue2rgb(f1, f2, hsl.x - 1.0/3.0);
        rgb = vec3(r, g, b);
    }
    return rgb;
}
vec3 rgb2hsl(vec3 rgb) {
    float maxC = max(rgb.x, max(rgb.y, rgb.z));
    float minC = min(rgb.x, min(rgb.y, rgb.z));
    float l = (maxC + maxC)/2.0;
    float h = 0;
    float s = 0;
    if (maxC != minC) {
        float d = maxC - minC;
        s = l > 0.5 ? d / (2.0 - maxC - minC) : d / (maxC + minC);
        if (maxC == rgb.x) {
            h = (rgb.y - rgb.z) / d + (rgb.y < rgb.z ? 6.0 : 0);
        } else if (maxC == rgb.y) {
            h = (rgb.z - rgb.x) / d + 2.0;
        } else {
            h = (rgb.x - rgb.y) / d + 4.0;
        }
        h /= 6.0;
    }
    return vec3(h, s, l);
}
And here comes the problem. I'm not able to get the right values using these functions in my filter. To check everything I made a Quartz Composer patch.
Since I didn't find any print/log option in CIKL, I made this to check whether my conversions work correctly:
The logic of this patch: my filter takes color as an input, convert it to hsl and back to rgb and returns it; image input ignored for now.
Kernel func of my filter:
kernel vec4 kernelFunc(__sample pixel, __color color) {
    vec3 vec = color.rgb;
    vec3 hsl = rgb2hsl(vec);
    return vec4(hsl2rgb(hsl), 1);
}
Filter includes functions listed above.
The result I see in the viewer is:
The image on the right is a cropped constant-color image from the input color.
The image on the left is the output from our filter.
A digital color picker returns rgb (237, 239.7, 252) for the left image.
I have no more ideas how to debug this thing and find a problem. Any help will be highly appreciated. Thanks.
I found the problem. It was my own fault: while converting the code from Swift to CIKL I made a silly mistake that was very hard to find, because CIKL has no print/log tools (or at least none that I know of).
Anyway, the problem was in the rgb2hsl function:
float l = (maxC + maxC)/2.0; // WRONG
it should be:
float l = (maxC + minC)/2.0;
Hope it'll help someone in the future.

'Float' is not identical to 'UInt8' Swift

I have no idea what's wrong with my code for a Taylor series:
func factorial(n: Int) -> Int {
    return n == 0 ? 1 : n * factorial(n - 1)
}

func sin(num: Float) -> Float {
    let rad : Float = num * 1.0 / 180.0 * 3.1415926535897;
    var sum : Float = rad;
    for i in 1...100 {
        if (i % 2 == 0) {
            sum += Float(pow(rad, 2 * i + 1) / Float(factorial(2 * i + 1)));
        } else {
            sum -= Float(pow(rad, 2 * i + 1)) / Float(factorial(2 * i + 1));
        }
    }
    return sum;
}

print(sin(123.0));
Here are the errors:
<stdin>:11:17: error: cannot invoke '/' with an argument list of type '(#lvalue Float, $T25)'
sum += Float(pow(rad, 2 * i + 1) / Float(factorial(2 * i + 1)));
~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<stdin>:13:13: error: 'Float' is not identical to 'UInt8'
sum -= Float(pow(rad, 2 * i + 1)) / Float(factorial(2 * i + 1));
^
The pow function needs two arguments of the same type, either Float or Double, and so does the division.
Change your sum +/-=... statements to:
if (i % 2 == 0) {
    sum += pow(rad, Float(2 * i + 1)) / Float(factorial(2 * i + 1))
} else {
    sum -= pow(rad, Float(2 * i + 1)) / Float(factorial(2 * i + 1))
}
Do you need to deal with both Ints and Floats? Swift is really picky about types, so if you can stick to one, that will make your life way easier. With just Floats, you can get this working by adding a single line:
func factorial(n: Float) -> Float {
    return n == 0 ? 1 : n * factorial(n - 1)
}

func sin(num: Float) -> Float {
    let rad : Float = num * 1.0 / 180.0 * 3.1415926535897;
    var sum : Float = rad;
    for i in 1...100 {
        let float_index : Float = Float(i)
        if (i % 2 == 0) {
            sum += Float(pow(rad, 2 * float_index + 1) / Float(factorial(2 * float_index + 1)));
        } else {
            sum -= Float(pow(rad, 2 * float_index + 1)) / Float(factorial(2 * float_index + 1));
        }
    }
    return sum;
}
I think pow requires its arguments to be of the same type, but even if you sort that out, what you have isn't going to work, because your factorial function generates numbers that are too big.
The largest 64-bit integer has about twenty digits, while the factorial of 200 has about 375.
Ergo it won't fit into an Int; it won't even fit into a Double (a double-precision IEEE 754 number), which maxes out at about 308 digits (upper range; obviously the precision is far less).
You'd need to move up to Float80, which provides several thousand digits of range.
But, again, due to the limited precision, you're likely to introduce a lot of errors into your calculations. This would be true even with the IEEE754 quad precision, limited to about 34 decimal digits of precision.

Transform screen coordinates to model coordinates

I've got some sort of newbie question.
In my application (Processing.js) I use scale() and translate() to allow the user to zoom and scroll through the scene. As long as I keep the scale set to 1.0 I have no issues, but whenever I use a scale (e.g. scale(0.5)) I'm lost...
I need mouseX and mouseY translated to scene coordinates, which I use to determine the mouseOver state of the objects I draw on the scene.
Can anybody help me translate these coordinates?
Thanks in advance!
/Richard
Unfortunately for me this required a code modification. I'll look at submitting this to the Processing.JS code repository at some point, but here's what I did.
First, you'll want to use modelX() and modelY() to get the coordinates of the mouse in world view. That will look like this:
float model_x = modelX(mouseX, mouseY);
float model_y = modelY(mouseX, mouseY);
Unfortunately Processing.JS doesn't seem to calculate the modelX() and modelY() values correctly in a 2D environment. To correct that I changed the functions to be as follows. Note the test for mv.length == 16 and the section at the end for 2D:
p.modelX = function(x, y, z) {
    var mv = modelView.array();
    if (mv.length == 16) {
        var ci = cameraInv.array();
        var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
        var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
        var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
        var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
        var ox = ci[0] * ax + ci[1] * ay + ci[2] * az + ci[3] * aw;
        var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
        return ow !== 0 ? ox / ow : ox;
    }
    // We assume that we're in 2D
    var mvi = modelView.get();
    // NOTE that the modelViewInv doesn't seem to be correct in this case, so
    // having to re-derive the inverse
    mvi.invert();
    return mvi.multX(x, y);
};

p.modelY = function(x, y, z) {
    var mv = modelView.array();
    if (mv.length == 16) {
        var ci = cameraInv.array();
        var ax = mv[0] * x + mv[1] * y + mv[2] * z + mv[3];
        var ay = mv[4] * x + mv[5] * y + mv[6] * z + mv[7];
        var az = mv[8] * x + mv[9] * y + mv[10] * z + mv[11];
        var aw = mv[12] * x + mv[13] * y + mv[14] * z + mv[15];
        var oy = ci[4] * ax + ci[5] * ay + ci[6] * az + ci[7] * aw;
        var ow = ci[12] * ax + ci[13] * ay + ci[14] * az + ci[15] * aw;
        return ow !== 0 ? oy / ow : oy;
    }
    // We assume that we're in 2D
    var mvi = modelView.get();
    // NOTE that the modelViewInv doesn't seem to be correct in this case, so
    // having to re-derive the inverse
    mvi.invert();
    return mvi.multY(x, y);
};
I hope that helps someone else who is having this problem.
Have you tried another method?
For example, assuming you are in a 2D environment, you can "map" the whole frame as a sort of matrix.
Something like this:
int fWidth = 30;
int fHeight = 20;
float objWidth = 10;
float objHeight = 10;

void setup() {
    fWidth = 30;
    fHeight = 20;
    objWidth = 10;
    objHeight = 10;
    size(fWidth * objWidth, fHeight * objHeight);
}
In this case you will have a 300*200 frame, but divided into 30*20 sections.
This allows you to move your objects around in a somewhat ordered way.
When you draw an object you have to give its sizes, so you can use objWidth and objHeight.
Here's the deal: you can make a "zoom method" that edits the values of the object sizes.
This way you draw a smaller or bigger object without editing any frame property.
This is a simple example, given the limited detail in the question.
You can do it in more complex ways, and in a 3D environment too.

How can I convert C# code to MATLAB?

I have this C# code and I am trying to convert it to MATLAB code.
float randomFloat()
{
    return (float)rand() / (float)RAND_MAX;
}

int calculateOutput(float weights[], float x, float y)
{
    float sum = x * weights[0] + y * weights[1] + weights[2];
    return (sum >= 0) ? 1 : -1;
}
I don't think we can use float and int in MATLAB. How do I change the code?
The first one is simply: rand()
The second function can be written as:
if ( [x y 1]*w(:) >= 0 )
    result = 1;
else
    result = -1;
end
The built-in function rand() already does what you're trying to do with randomFloat().
For calculateOutput you can use something fairly similar to what you've got, but as you say you don't need to declare types:
function result = calculateOutput (weights, x, y)
    s = x * weights(1) + y * weights(2) + weights(3);
    if s >= 0
        result = 1;
    else
        result = -1;
    end
end
Note that MATLAB arrays are one-based, so you need to adjust the indexing.
If you want to generalise this to arbitrary vectors it would make sense to "vectorize" it, but for this simple case a straight translation like this is fine.