Transform MIDI pitch bend to a 0 to 4 logarithmic scale - Swift

I have a MIDI pitch bend message which needs to be transformed from a linear scale between 0 and 16368 to a logarithmic scale between 0.0 and 4.0.
I know that when the pitch bend is at 12432, the value needs to be 1.0, and that at 16368 it should be 4.0.
How can I write a function in Swift to convert between these two scales?

I'm not sure what you want to achieve, but the logarithm has a vertical asymptote, so you should define the left and right bounds of the abscissa. One possible formula that maps 0 to 0.0 and 16368 to 4.0 is:
y = 4 * ln(1 + x * (e - 1) / 16368)
Note that this maps the endpoints correctly but sends 12432 to about 3.3 rather than 1.0; if that constraint matters, you will need to fit a different curve.
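If the two anchor points are what matter most, one simple alternative (my suggestion, not the formula above) is to fit an exponential y = a * e^(b*x) through (12432, 1.0) and (16368, 4.0). A minimal Swift sketch, assuming Double values throughout:
import Foundation
// Fit y = a * exp(b * x) through the two known points (12432, 1.0) and (16368, 4.0).
// Note: this returns roughly 0.0125 at x = 0, not exactly 0.0.
func pitchBendToLogScale(_ bend: Double) -> Double {
    let x1 = 12432.0, y1 = 1.0
    let x2 = 16368.0, y2 = 4.0
    let b = log(y2 / y1) / (x2 - x1)   // growth rate
    let a = y1 * exp(-b * x1)          // scale factor
    return a * exp(b * bend)
}
// pitchBendToLogScale(12432) ≈ 1.0, pitchBendToLogScale(16368) ≈ 4.0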

Related

Create OpenCV camera matrix for iPhone 5 solvePnP

I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
| fx  0  cx |
|  0 fy  cy |
|  0  0   1 |
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another link I found states that:
fx = focalMM * width / sensorSizeMM;
fy = focalMM * height / sensorSizeMM;
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
P.S. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle rather than the focal length in pixels, because the FOV is invariant to image scaling.
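For reference (this is standard pinhole geometry, not something specific to the answer above), the two quantities are related, for an image W pixels wide, by:
f_pixels = (W / 2) / tan(HFOV / 2)
and conversely HFOV = 2 * atan(W / (2 * f_pixels)). The Swift getCamMatrix function further down uses exactly this relationship.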
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance of your camera and measuring its size in the image. You could aso assume the central point is the center of the image. You should definitely not ignore the lens distortion (dist_coef parameter in solvePnPRansac).
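As a concrete illustration of that f * L / z relationship (numbers of my own choosing, not from the answer): an object 0.2 m wide, placed 1.0 m from the camera and spanning 600 pixels in the image, implies f = pixels * z / L = 600 * 1.0 / 0.2 = 3000 pixels.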
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here is a table with the camera specs for the iPhone 4 and 5.
The calculation is:
double f = 4.1;                           // iPhone 5 focal length in mm
double resX = (double)(sourceImage.cols); // image width in pixels
double resY = (double)(sourceImage.rows); // image height in pixels
double sensorSizeX = 4.89;                // sensor width in mm
double sensorSizeY = 3.67;                // sensor height in mm
double fx = f * resX / sensorSizeX;       // focal length in pixel units (x)
double fy = f * resY / sensorSizeY;       // focal length in pixel units (y)
double cx = resX/2.;                      // principal point: image center
double cy = resY/2.;
Try this:
func getCamMatrix() -> (Float, Float, Float, Float)
{
    let format: AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
    let fDesc: CMFormatDescriptionRef = format!.formatDescription
    let dim: CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
    // dim = final image dimensions
    let cx: Float = Float(dim.width) / 2.0
    let cy: Float = Float(dim.height) / 2.0
    let HFOV: Float = format!.videoFieldOfView
    let VFOV: Float = (HFOV / cx) * cy   // approximate: scales HFOV by the aspect ratio
    let fx: Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)))
    let fy: Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)))
    return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist here means the distortion coefficients.
I am using these parameters in a toy project and they are kind of OK. If you do use them in your own project, keep in mind that they may be barely good enough to get started. The best option is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use with the checkerboard they provide. Hope this helps you get started!

Dome rotation on arbitrary axis?

Imagine a dome with its centre in the +z direction. What I want to do is move that dome's centre to a different axis (e.g. 20 degrees on the x axis, 20 degrees on the y axis, 20 degrees on the z axis). How can I do that? Any hint/tip helps.
Add more info:
I've been dabbling with the rotation matrices on Wikipedia for a while. The problem is that the operation is not commutative: RxRyRz is not the same as RzRyRx, so depending on the order I multiply them in, I get different final results. For example, I want my final projection to be 20 degrees from the original X axis, 20 degrees from the original Y axis and 20 degrees from the original Z axis. Based on the matrix, giving alpha, beta and gamma values of 20 (or the corresponding radians) does NOT produce the intended rotation. Am I missing something? Is there a matrix into which I can just put the intended angles and get the result at the end?
Using a rotation matrix is an easy way to rotate a collection of (x,y,z) points. You can calculate a rotation matrix for your case using the equations in the general rotation section. Note that figuring out the angle values to plug into those equations can be tricky. Think of it as rotating about one axis at a time and remember that the order of your rotations (order of multiplications) does matter.
An alternative to the general rotation equations is to calculate a rotation matrix from axis and angle. It may be easier for you to define correct parameters with this method.
Update: After perusing Wikipedia, I found a simple way to calculate rotation axis and angle between two vectors. Just fill in your starting and ending vectors for a and b here:
a = [0.0 0.0 1.0];
b = [0.5 0.5 0.0];
vectorMag = @(x) sqrt(sum(x.^2));   % anonymous function: vector magnitude
rotAngle = acos(dot(a,b) / (vectorMag(a) * vectorMag(b)))
rotAxis = cross(a,b)
which prints:
rotAngle =
1.5708
rotAxis =
-0.5 0.5 0
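Not part of either answer above, just a sketch of what the axis-angle route looks like in code: Rodrigues' formula rotates a point p about a unit axis k by an angle theta, and the axis/angle values below reuse the result of the MATLAB snippet (with the axis normalised to unit length).
import Foundation
// Rodrigues' rotation formula: p' = p*cos(theta) + (k x p)*sin(theta) + k*(k.p)*(1 - cos(theta)),
// where k must be a unit vector.
func rotate(_ p: (Double, Double, Double),
            about k: (Double, Double, Double),
            by theta: Double) -> (Double, Double, Double) {
    let c = cos(theta), s = sin(theta)
    let dot = k.0 * p.0 + k.1 * p.1 + k.2 * p.2
    let cross = (k.1 * p.2 - k.2 * p.1,
                 k.2 * p.0 - k.0 * p.2,
                 k.0 * p.1 - k.1 * p.0)
    return (p.0 * c + cross.0 * s + k.0 * dot * (1 - c),
            p.1 * c + cross.1 * s + k.1 * dot * (1 - c),
            p.2 * c + cross.2 * s + k.2 * dot * (1 - c))
}
let axis = (-0.7071, 0.7071, 0.0)   // cross(a, b) normalised to unit length
let angle = 1.5708                  // rotAngle from above
// Rotating the dome centre direction (0, 0, 1) gives approximately (0.707, 0.707, 0),
// i.e. the direction of b = [0.5 0.5 0]:
// rotate((0.0, 0.0, 1.0), about: axis, by: angle)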

Calculating Scale for a UIView

Suppose the current scale of my UIView is x, and suppose I apply a scale transformation of amount y, i.e.:
view.transform = CGAffineTransformScale(view.transform, y, y);
How do I determine the scale of the UIView after the scale transformation occurs (in terms of x and y)?
The scale transform multiplies the current scale by your scale y.
If the scale was 2.0 for Retina, it is y * 2.0 afterwards.
So x*y is the answer, but don't forget that the x-axis and y-axis scales can be different.
Using x and y for scale factors is confusing; in your code it is better to use s1 and s2, or sx and sy, if the scales on the x and y axes differ.
Scaling combines by multiplication, translation (movement) by addition, and rotation by matrix multiplication. All three can be combined into an AffineTransformation (a matrix with one more row than the dimensions of the space); these are combined by matrix multiplication. 2D AffineTransformations are 3x2 or 3x3 matrices; the extra column just makes them easier to work with.
Edit:
Using clearer names: if the current scale was currxs, currys and the scale applied was xs, ys, the new scale would be currxs*xs, currys*ys. Note that applying a scale will also scale any translation component contained in the AffineTransformation; this is why the order of application is important.
It's quite simple: if you are just using CGAffineTransformScale and not other transformations like rotation, you can use the view's frame and bounds sizes to calculate the resulting scale values.
float scaleX = view.frame.size.width/view.bounds.size.width;
float scaleY = view.frame.size.height/view.bounds.size.height;
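For completeness, a small Swift sketch of the same idea (my addition, assuming the view's transform contains only scaling, no rotation or skew): the scale factors can also be read straight off the CGAffineTransform's a and d components.
import UIKit
// Assuming the view's transform contains only scaling (no rotation or skew),
// the scale factors are the transform's a and d components.
func currentScale(of view: UIView) -> (x: CGFloat, y: CGFloat) {
    return (view.transform.a, view.transform.d)
}
// Equivalent to the frame/bounds approach above:
// view.frame.size.width / view.bounds.size.width, etc.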

Draw Camera Range with Postgis

I am working on some camera data. I have some points which consist of azimuth, angle, distance, and of course coordinate field attributes. In PostgreSQL/PostGIS I want to draw shapes like this with functions.
How can I draw this pink range shape?
Should I first draw a 360 degree circle and then cut my shape out of it? I don't know how.
I would create a circle around the point (x, y) with your distance as the radius, then use the info below to create a triangle that has a larger height than the radius.
Then using those two polygons do an ST_Intersection between the two geometries.
NOTE: This method only works if the angle is less than 180 degrees.
Note that if you extend the outer edges and meet them with a 90 degree angle from the midpoint of your arc, you have an angle and an adjacent side. Now you can SOH CAH TOA!
Get points B and C
Let point A = (x, y).
To get the top point:
point B = (x + radius, y + (radius * tan(angle)))
To get the bottom point:
point C = (x + radius, y - (radius * tan(angle)))
Rotate your triangle to your azimuth
Now that you have the triangle, you need to rotate it to your azimuth, with a pivot point of A. This means you need point A at the origin when you do the rotation. The rotation is the trickiest part. It's used in computer graphics all the time. (Actually, if you know OpenGL you could get it to do the rotation for you.)
NOTE: This method rotates counter-clockwise through an angle (theta) around the origin. You might have to adjust your azimuth accordingly.
First step: translate your triangle so that A (your original x, y) is at (0, 0). Whatever you added/subtracted to x and y, do the same for the other two points. (You need to translate it because you need point A to be at the origin.)
Second step: rotate points B and C using a rotation matrix. More info here, but I'll give you the formula:
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)
Your new point is (x', y'). Do this for points B and C.
Third step: translate them back to the original place by adding or subtracting. If you subtracted x last time, add it this time.
Finally, use points {A, B, C} to create a triangle.
And then do an ST_Intersection(geom_circle, geom_triangle);
Because this takes a lot of calculations, it would be best to write a program that does all these calculations and then populates a table.
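If it helps, here is a rough Swift sketch of the point construction and rotation described above (my own illustration; the names and the counter-clockwise-from-+x azimuth convention are assumptions, so adapt them to your data before generating the geometry):
import Foundation
struct Point { var x: Double; var y: Double }
// Build the triangle {A, B, C} for a sector at A with the given radius,
// half-angle (radians) and azimuth (radians).
func sectorTriangle(a: Point, radius: Double, halfAngle: Double, azimuth: Double) -> (Point, Point, Point) {
    // Unrotated triangle: apex at A, with B and C past the arc on either side.
    let b = Point(x: a.x + radius, y: a.y + radius * tan(halfAngle))
    let c = Point(x: a.x + radius, y: a.y - radius * tan(halfAngle))
    // Rotate a point about A: translate A to the origin, rotate, translate back.
    func rotate(_ p: Point, by theta: Double) -> Point {
        let dx = p.x - a.x, dy = p.y - a.y
        return Point(x: a.x + dx * cos(theta) - dy * sin(theta),
                     y: a.y + dx * sin(theta) + dy * cos(theta))
    }
    return (a, rotate(b, by: azimuth), rotate(c, by: azimuth))
}
// Example: a 45-degree-wide sector (half-angle 22.5 degrees) pointing 30 degrees from +x:
// let (A, B, C) = sectorTriangle(a: Point(x: 0, y: 0), radius: 10,
//                                halfAngle: 22.5 * .pi / 180, azimuth: 30 * .pi / 180)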
PostGIS supports curves, so one way to achieve this that might require less math on your behalf would be to do something like:
SELECT ST_GeomFromText('COMPOUNDCURVE((0 0, 0 10), CIRCULARSTRING(0 10, 7.071 7.071, 10 0), (10 0, 0 0))')
This describes a sector with an origin at 0,0, a radius of 10 degrees (geographic coordinates), and an opening angle of 90°.
Wrapping that with additional functions to convert it from a true curve into a LINESTRING, to reduce the coordinate precision, and to transform it into WKT:
SELECT ST_AsText(ST_SnapToGrid(ST_CurveToLine(ST_GeomFromText('COMPOUNDCURVE((0 0, 0 10), CIRCULARSTRING(0 10, 7.071 7.071, 10 0), (10 0, 0 0))')), 0.01))
Gives:
This requires a few pieces of pre-computed information (the position of the centre, and the two adjacent vertices, and one other point on the edge of the segment) but it has the distinct advantage of actually producing a truly curved geometry. It also works with segments with opening angles greater than 180°.
A tip: the 7.071 x and y positions used in the example sit at the mid-point of the arc (45° here) and can be computed like this:
x = {radius} * cos {angle} = 10 * cos 45° ≈ 7.071
y = {radius} * sin {angle} = 10 * sin 45° ≈ 7.071
Corner cases: at the antimeridian, and at the poles.

What is the depth image received from Kinect

When I ran this Matlab code to get the depth image, the result I got was a 480x640 matrix. The minimum element value is 0 and the maximum element value is 2711. What does 2711 mean? Is that the distance from the camera to the farthest part of the image? And what is the unit of 2711? Is it meters, feet, or something else?
I don't know exactly what the Matlab code does to the depth, but it probably does some processing on it, because the depth sent by the Kinect is an 11-bit value, so it shouldn't be higher than 2048. Try to find out what it does, or get access to the raw data sent by the Kinect.
The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.
From the OpenKinect project wiki (which contains useful information about the Kinect):
From their data, a basic first order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100/(-0.00307 * rawDisparity + 3.33). This approximation is approximately 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum squared difference of .33 cm while the 1/x approximation is about 1.7 cm.
Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x, y, z) is:
x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z
where minDistance = -10 and scaleFactor = .0021. These values were found by hand.
You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others!).
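A small Swift transcription of the formulas quoted above, in case it is easier to read as code (just a sketch; the constants come straight from the OpenKinect quote, while the function shapes and variable names are mine):
import Foundation
// Convert a raw 11-bit Kinect disparity value to a distance in meters
// (Magnenat approximation quoted above, with the -0.037 offset applied).
func depthMeters(rawDisparity: Double) -> Double {
    return 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) - 0.037
}
// Convert pixel coordinates (i, j) plus the distance z obtained above into (x, y, z),
// for an image of width w and height h, using the hand-tuned constants from the quote.
func toXYZ(i: Double, j: Double, z: Double, w: Double, h: Double) -> (x: Double, y: Double, z: Double) {
    let minDistance = -10.0
    let scaleFactor = 0.0021
    let x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
    let y = (j - h / 2) * (z + minDistance) * scaleFactor
    return (x, y, z)
}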
If you map the data to a meter scale it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.