iPhone - AVAudioPlayer - convert decibel level into percent

I'd like to update an existing iPhone application that uses AudioQueue for playing audio files. The levels (peakPowerForChannel, averagePowerForChannel) were linear, from 0.0f to 1.0f.
Now I'd like to use the simpler class AVAudioPlayer, which works fine. The only issue is that the levels are now in decibels, from -120.0f to 0.0f, not linear.
Does anyone have a formula to convert them back to linear values between 0.0f and 1.0f?
Thanks
Tom

Several Apple examples use the following formula to convert the decibels into a linear range (from 0.0 to 1.0):
double percentage = pow (10, (0.05 * power));
where power is the value you get from one of the various level meter methods or functions, such as AVAudioPlayer's averagePowerForChannel:
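For example, here is a minimal Swift sketch of that conversion applied to AVAudioPlayer's metering API (the player instance and the helper name are just for illustration):

import AVFoundation

// `player` is assumed to be an AVAudioPlayer that is already playing,
// with metering switched on beforehand: player.isMeteringEnabled = true
func linearLevel(of player: AVAudioPlayer, channel: Int = 0) -> Float {
    player.updateMeters()                              // refresh the cached level values
    let db = player.averagePower(forChannel: channel)  // decibels, 0 dB = full scale
    return Float(pow(10.0, Double(db) / 20.0))         // back to a linear 0.0 ... 1.0 value
}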

Math behind the Linear and Logarithmic value conversion:
1. Linear to Decibel (logarithmic):
decibelValue = 20.0f * log10(linearValue)
Note: log is base 10
Suppose the linear value is expressed as a percentage ranging from 0 (minimum volume) to 100 (maximum volume); then the decibel value for half volume (50%) is
decibelValue = 20.0f * log10(50.0f/100.0f) ≈ -6 dB
Full volume:
decibelValue = 20.0f * log10(100.0f/100.0f) = 0 dB
Complete mute:
decibelValue = 20.0f * log10(0/100.0f) = -infinity
2. Decibel(logarithmic) to Linear:
LinearValue = pow(10.0f, decibelValue/20.0f)
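As a small sketch, both directions in Swift (plain math, no audio API involved):

import Foundation

func linearToDecibel(_ linear: Double) -> Double {
    return 20.0 * log10(linear)       // 0.5 -> about -6.02 dB, 1.0 -> 0 dB, 0.0 -> -infinity
}

func decibelToLinear(_ decibel: Double) -> Double {
    return pow(10.0, decibel / 20.0)  // -6.02 dB -> about 0.5, 0 dB -> 1.0
}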

Apple uses a lookup table in their SpeakHere sample that converts from dB to a linear value displayed on a level meter.
I moulded their calculation in a small routine; see here.


Extrapolate Animation Curve (Endless game balancing curve)

I need a curve editor for balancing an endless game, so I tried to use AnimationCurve.
I need to set a curve on a certain range, e.g. [0;1], and if I want a value beyond 1, the result of the evaluation has to extrapolate the curve. I want to be able to compute Y from X and X from Y.
The problem is that AnimationCurve has only 3 WrapModes (Clamp, PingPong, Loop).
How can I extrapolate an AnimationCurve?
Is there a better tool for making curves with extrapolation (post- and pre-curve)?
For real extrapolation I think you'd have to implement your own system based on Bézier mathematics; at least I am not aware of Unity providing it out of the box.
A workaround could be to just define values beyond the 0 to 1 range to cover the extents; animation curves do allow this, and I don't think there are too many issues with that.
Another solution, which stays in the 0 to 1 range but achieves the same effect, would be to model the curve from 0 to 1 so that it covers the extreme values within that range, and to remap the time given by the object to a 0 to 1 range before evaluating the curve.
E.g.:
// define range extents
float rangeMin = -5f, rangeMax = 5f;
var range = 10f;
// range could be calculated at runtime if necessary:
// [to] (higher value) - [from] (lower value) = [range]
// 5f - -5f = 10f
var timeRaw = 0f; // value provided at runtime
var time01 = (timeRaw - rangeMin) / range;
// result for timeRaw = 0: (0 - -5) / 10 = 0.5
// result for timeRaw = 5: (5 - -5) / 10 = 1.0
// result for timeRaw = -5: (-5 - -5) / 10 = 0.0
Combining both solutions allows you to cover even more extreme values.

Swift - Converting RSSI to Distance

I was doing some work using the CoreBluetooth API and ran into a problem. Everywhere I have looked, it says that to convert RSSI (the Bluetooth signal strength) to a distance, you must do things like:
Distance = 10 ^ ((Measured Power - RSSI) / (10 * N))
And:
func distance(fromRSSI rssi: Double) -> Double {
    let txPower = -59.0   // hard-coded measured power; usually between -59 and -65
    if rssi == 0 {
        return -1.0       // cannot determine distance
    }
    let ratio = rssi / txPower
    if ratio < 1.0 {
        return pow(ratio, 10.0)
    } else {
        return 0.89976 * pow(ratio, 7.7095) + 0.111
    }
}
I have tried all of the above and everything I could find. None of it gives me accurate measurements for distances from about 0.5 meters to around 5-7 meters.
My code makes both phones running the app act as a Bluetooth central and peripheral, and in my didDiscoverPeripheral callback from the central manager I get the RSSI, which I want to convert to a distance (meters, feet).
Along with that:
I also need to find out how to get the Measured Power (the RSSI at 1 meter) of iPhones, as it would really help with accurate calculations.
Also, what does the environmental factor mean in terms of Bluetooth? What do the different environmental factor values (which range from 2 to 4) mean? Is there a way to change or increase the broadcasting power of an Apple device?
Basically, I am looking for an accurate RSSI-to-distance formula that works for distances from 0.5 meters to 5-7 meters.
Thank you so much!
This is a common solution:
pow(10, ((-56-Double(rssi))/(10*2)))*3.2808
It was good for most distances but got very inaccurate as you get too close or too far, so I ended up using bins, kind of like Apple's iBeacons (Unknown, Far, Near, Immediate). If the raw RSSI is less than -80 it is Far, if it is more than -50 it is Immediate, and if it is between those two it is Near. This solution worked for me.
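For reference, a Swift sketch of that formula plus the binning; the -56 measured power, the path-loss exponent of 2, and the exact bin thresholds are assumptions you would tune for your own devices:

import Foundation

enum Proximity { case unknown, far, near, immediate }

// Rough distance in feet; -56 (assumed measured power at 1 m) and 2 (assumed
// path-loss exponent) should be calibrated for your own hardware.
func approximateDistanceFeet(rssi: Int) -> Double {
    return pow(10.0, (-56.0 - Double(rssi)) / (10.0 * 2.0)) * 3.2808
}

// Binning that proved more robust than the raw formula at the extremes.
func proximity(rssi: Int) -> Proximity {
    if rssi == 0 { return .unknown }   // a reading of 0 is usually invalid
    if rssi < -80 { return .far }
    if rssi > -50 { return .immediate }
    return .near
}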

Calculate acceleration from data points

I have a servo motor, and I would like to make it follow a "motion pattern" as closely as possible, using the same value for acceleration and deceleration.
The attached picture illustrates the "motion pattern" (Y = velocity, X = time).
motion pattern:
accelerates from 0 m/s to 0.100 m/s.
constant velocity of 0.100 m/s for 4 sec.
decelerates to negative ? m/s.
accelerates to 0 m/s, and motor position = 0.
How can I calculate the acceleration and deceleration?
What I have tried so far:
Time = total time - constant-velocity time = 10 - 4 = 6 sec.
Distance = total distance - constant-velocity distance = 1 - 0.4 = 0.6 meter.
acceleration = 2 * distance / time^2 = 2 * 0.6 / 6^2 = 0.0333 m/s^2.
But with this acceleration it overshoots in the negative direction by 500 mm.
Take a look at the PLCopen motion function blocks, for example the MC_MoveRelative and the MC_MoveContinuousRelative blocks:
(Beckhoff documentation)
As Sergey already stated you can use those blocks to create a motion profile by entering all the parameters you need and integrating the blocks in a step chain.

create opencv camera matrix for iPhone 5 solvepnp

I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
| fx  0  cx |
|  0  fy cy |
|  0  0   1 |
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another link I found states that fx = focalMM * width / (sensorSizeMM);
fy = focalMM * length / (sensorSizeMM);
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle, rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance from your camera and measuring its size in the image. You could also assume the central point is the center of the image. You should definitely not ignore the lens distortion (the distCoeffs parameter in solvePnPRansac).
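As a quick sketch of that estimate in Swift (all numbers below are placeholders for your own measurement):

// Pinhole model: pixels = f * L / z, so f = pixels * z / L.
let objectSizeMeters = 0.20        // real size of the object, e.g. a 20 cm ruler
let distanceMeters = 1.0           // perpendicular distance from the camera
let measuredSizePixels = 640.0     // how many pixels the object spans in the image

let focalLengthPixels = measuredSizePixels * distanceMeters / objectSizeMeters
// -> 3200 px for these placeholder numbers; use it for fx and fy if you assume square pixels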
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here is a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;                             // focal length in mm (iPhone 5)
double resX = (double)(sourceImage.cols);   // image width in pixels
double resY = (double)(sourceImage.rows);   // image height in pixels
double sensorSizeX = 4.89;                  // sensor width in mm
double sensorSizeY = 3.67;                  // sensor height in mm
double fx = f * resX / sensorSizeX;         // focal length in pixels, x
double fy = f * resY / sensorSizeY;         // focal length in pixels, y
double cx = resX / 2.;                      // principal point at the image center
double cy = resY / 2.;
Try this:
func getCamMatrix() -> (Float, Float, Float, Float)
{
    let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
    let fDesc:CMFormatDescriptionRef = format!.formatDescription
    let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
    // dim = final image dimensions
    let cx:Float = Float(dim.width) / 2.0;
    let cy:Float = Float(dim.height) / 2.0;
    let HFOV : Float = format!.videoFieldOfView
    let VFOV : Float = ((HFOV)/cx)*cy
    let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
    let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
    return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seem to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may be barely good enough to get started. The best approach will be to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps getting started!

What is the depth image received from Kinect

When I ran this Matlab code to get the depth image, the result I got was a 480x640 matrix. The minimum element value is 0 and the maximum element value is 2711. What does 2711 mean? Is it the distance from the camera to the farthest part of the image? And what is the unit of 2711: meters, feet, or something else?
I don't know exactly what the Matlab code does to the depth, but it probably does some processing on it, because the depth sent by the Kinect is an 11-bit value, so it shouldn't be higher than 2048. Try to find out what it does, or get access to the raw data sent by the Kinect.
The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.
From the OpenKinect project wiki (which contains useful information about the Kinect) :
From their data, a basic first-order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100 / (-0.00307 * rawDisparity + 3.33). This approximation is approximately 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum squared difference of .33 cm while the 1/x approximation is about 1.7 cm.
Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x, y, z) is:
x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z
where minDistance = -10 and scaleFactor = .0021. These values were found by hand.
You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others !).
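Put together, a small Swift sketch of the formulas quoted above (a direct translation of the wiki equations, not validated against a real device):

import Foundation

// Raw 11-bit disparity value -> depth in meters (Magnenat approximation with the -0.037 offset).
func depthMeters(fromRawDisparity raw: Double) -> Double {
    return 0.1236 * tan(raw / 2842.5 + 1.1863) - 0.037
}

// (i, j) pixel coordinates plus depth z -> rough (x, y, z), following the quoted constants.
func worldPoint(i: Double, j: Double, z: Double,
                width w: Double = 640, height h: Double = 480) -> (x: Double, y: Double, z: Double) {
    let minDistance = -10.0   // hand-tuned constants from the OpenKinect wiki
    let scaleFactor = 0.0021
    let x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
    let y = (j - h / 2) * (z + minDistance) * scaleFactor
    return (x, y, z)
}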
If you map the data to a meter scale it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.