I was doing some work with the CoreBluetooth API and ran into a problem. Everywhere I have looked, the advice for converting RSSI (the received Bluetooth signal strength) to distance is to use formulas like:
Distance = 10 ^ ((Measured Power - RSSI) / (10 * N))
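In Swift that formula is a one-liner; here is a sketch, where the default measuredPower and n values are assumptions that must be calibrated per device and environment:
import Foundation

func estimatedDistanceMeters(rssi: Double, measuredPower: Double = -59, n: Double = 2) -> Double {
    // measuredPower = expected RSSI at 1 m; n = environmental factor (typically 2-4)
    return pow(10, (measuredPower - rssi) / (10 * n))
}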
And:
function rssiToDistance(rssi) {
    var txPower = -59; // hard-coded measured power; usually ranges between -59 and -65
    if (rssi == 0) {
        return -1.0; // cannot determine the distance
    }
    var ratio = rssi * 1.0 / txPower;
    if (ratio < 1.0) {
        return Math.pow(ratio, 10);
    } else {
        return (0.89976) * Math.pow(ratio, 7.7095) + 0.111;
    }
}
I have tried all of the above and everything else I could find, but none of it gives me accurate measurements in the range of about 0.5 meters to 5-7 meters between the devices.
My setup makes both phones running the app act as both a central and a peripheral, and in the didDiscoverPeripheral callback from CBCentralManager I receive the RSSI, which I want to convert to a distance (meters or feet).
Along with that:
I also need to find out how to get the Measured Power (the RSSI at 1 meter) of iPhones, as it would really help with accurate calculations.
Also, what does the environmental factor mean in terms of Bluetooth? What do the different environmental factor values (which range from 2 to 4) represent? And is there a way to change or increase the broadcasting power of an Apple device?
Basically, I am looking for an accurate RSSI-to-distance formula that works for distances from 0.5 meters to 5-7 meters.
Thank you so much!
Here is a common solution:
pow(10, ((-56 - Double(rssi)) / (10 * 2))) * 3.2808 // measured power -56, N = 2; 3.2808 converts meters to feet
It was good at most distances but became very inaccurate when you got too close or too far, so I ended up using bins similar to Apple's iBeacon proximity levels (Unknown, Far, Near, Immediate): if the raw RSSI is less than -80 it is Far, if it is greater than -50 it is Immediate, and anything in between is Near. This solution worked for me.
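For illustration, here is that binning approach sketched in Swift; the thresholds are the ones mentioned above and will need tuning per device:
enum Proximity { case unknown, far, near, immediate }

func proximity(forRSSI rssi: Int) -> Proximity {
    if rssi == 0 { return .unknown }    // 0 usually means an invalid reading
    if rssi < -80 { return .far }
    if rssi > -50 { return .immediate }
    return .near
}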
I'm working on a program in which I want to store the distance the user has walked since pressing a button. I retrieve the position via the geolocator package and display the distance on screen, which works just fine.
I know there are distanceBetween functions for locations, but as far as I can tell they just calculate the distance between two points, not the actual distance the user walked. For example, if the user starts at point X, walks over to point Y and back to X, that would compare the start and end point (X to X) and give a distance of 0, but I want the distance X -> Y -> X.
I added the following function, which calculates the distance based on longitude/latitude:
double distance(Position start, Position current) {
  return double.parse(
      (acos(sin(start.latitude) * sin(current.latitude) +
                  cos(start.latitude) *
                      cos(current.latitude) *
                      cos(current.longitude - start.longitude)) *
              6371)
          .toStringAsFixed(2));
}
I call it every frame and store the distance between the current and the last GPS position.
It works, slowly but fine, except for one problem:
At some point, the double suddenly turns into NaN, and I can't figure out why.
It's completely random when this occurs - at the beginning it was always around 0.6, but it also occurred around 4.5 and 0.2, so I think the problem may be somewhere else.
Can anybody help?
Or does anybody know a built-in function that can solve the same problem?
I tried parsing the double to only two decimal places (I didn't round it before) because I thought the number might just have too many decimal places to be displayed, but the error still occurred.
I have a second task that runs at the same time on each time stamp, so I thought it was hindering the GPS retrieval, but disabling it didn't change anything.
You may be running into numerical stability issues with the spherical law of cosines, since you're calculating the distance on every frame: the formula is known to have conditioning issues for very small distances (less than one meter).
Note that the domain of arccosine(x) is -1 <= x <= 1. If you supply a value greater than 1 (or smaller than -1), you get a NaN result.
If you are still debugging this, you can add a simple print statement:
double distance(Position start, Position current) {
  double x = sin(start.latitude) * sin(current.latitude) +
      cos(start.latitude) *
          cos(current.latitude) *
          cos(current.longitude - start.longitude);
  if (x > 1 || x < -1) {
    print("error"); // acos(x) would return NaN here
  }
  return acos(x) * 6371; // 6371 km = mean Earth radius
}
If this is indeed the case, you have a few options: use the Haversine formula, which is better conditioned for small distances, or simply set x to 1 if it's above 1, which just means the distance is zero.
For more information (and the Haversine formula) see also: Great circle distance
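For reference, here is a minimal Haversine sketch (written in Swift for illustration; the Dart translation is mechanical). Note it converts degrees to radians, which the code above does not appear to do:
import Foundation

// Haversine distance in kilometers; better conditioned than the
// spherical law of cosines for very small distances.
func haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double) -> Double {
    let toRad = Double.pi / 180.0          // inputs assumed to be in degrees
    let dLat = (lat2 - lat1) * toRad
    let dLon = (lon2 - lon1) * toRad
    let a = sin(dLat / 2) * sin(dLat / 2)
        + cos(lat1 * toRad) * cos(lat2 * toRad) * sin(dLon / 2) * sin(dLon / 2)
    let c = 2 * asin(min(1.0, sqrt(a)))    // min() guards against rounding above 1
    return 6371.0 * c                       // 6371 km = mean Earth radius
}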
I really didn't think about the arccosine's domain...
So I updated my code with your suggestion to:
double distance(Position start, Position current) {
  double x = sin(start.latitude) * sin(current.latitude) +
      cos(start.latitude) *
          cos(current.latitude) *
          cos(current.longitude - start.longitude);
  if (x > 1 || x < -1) {
    if (kDebugMode) {
      print("error");
    }
    return 0;
  }
  return double.parse((acos(x) * 6371).toStringAsFixed(2));
}
It works fine, thank you for your help!
I have a servo motor, and I would like to make it follow a "motion pattern" as closely as possible, using the same value for acceleration and deceleration.
The attached picture illustrates the "motion pattern" (Y = velocity, X = time):
Motion pattern:
accelerates from 0 m/s to 0.100 m/s.
constant velocity of 0.100 m/s for 4 sec.
decelerates to a negative ? m/s.
accelerates back to 0 m/s, with the motor position back at 0.
How can I calculate the acceleration and deceleration?
What I have tried so far:
Ramp time = total time - constant-velocity time = 10 - 4 = 6 sec.
Ramp distance = total distance - constant-velocity distance = 1 - 0.4 = 0.6 meters.
acceleration = 2 * distance / time^2 = 2 * 0.6 / 6^2 = 0.0333 m/s^2.
But with this acceleration it overshoots in the negative direction by 500 mm.
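For reference, here is the arithmetic above written out as a sketch (in Swift, with the question's numbers; note that a = 2d/t^2 assumes one single ramp from standstill over the whole 6 s, while the profile actually contains several separate ramps):
let totalTime = 10.0                        // s, whole move
let constTime = 4.0                         // s, at constant velocity
let constDist = 0.100 * constTime           // m covered at 0.100 m/s -> 0.4 m
let totalDist = 1.0                         // m, total distance of the move
let rampTime = totalTime - constTime        // 6 s spent accelerating/decelerating
let rampDist = totalDist - constDist        // 0.6 m covered during the ramps
let accel = 2 * rampDist / (rampTime * rampTime)  // ~ 0.0333 m/s^2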
Take a look at the PLCopen motion function blocks, for example the MC_MoveRelative and the MC_MoveContinuousRelative blocks:
(Beckhoff documentation)
As Sergey already stated, you can use those blocks to create a motion profile by entering all the parameters you need and integrating the blocks into a step chain.
I have two variables, a speed and a minimum. The speed gets compared to the minimum to see if the speed should continue decreasing. For some reason, even when the speed is equal to the minimum, it continues to decrease the speed.
var wallSpeed: CGFloat!
var wallSpeedMin: CGFloat!
var wallSpeedChange: CGFloat!

override init() {
    wallSpeed = 0.0035
    wallSpeedMin = 0.0034
    wallSpeedChange = 0.0001
}
The speed minimum is close to the speed for testing purposes.
if wallSpeed > wallSpeedMin {
    print("Wall speed has been increased")
    wallSpeed = wallSpeed - wallSpeedChange
    print("New speed is \(wallSpeed!)")
} else {
    print("Player moved up screen")
    // Move player up instead
    playerNode.position.y = playerNode.position.y + 5
    print("Players Y value is \(playerNode.position.y)")
}
It never hits the else statement, even though the wall speed is equal to the wall speed minimum after the first decrease.
Do I have my if statement set up incorrectly? What is causing this behavior?
Floating point math does not work the way you're expecting it to. See Is floating point math broken?
You can't compare floating point numbers this way.
Because of the way floating point numbers are represented, we can't simply do exact arithmetic on them. Integers are represented directly in binary and arithmetic on them is exact, while a single-precision floating point number is a 32-bit container following the IEEE 754 standard, divided into three sections:
1 sign bit S
8 bits for the exponent
23 bits for the mantissa
(The 64-bit Double behind CGFloat on modern devices follows the same standard, with 11 exponent bits and 52 mantissa bits.)
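A common workaround is to compare with a small tolerance instead of relying on exact equality or a bare greater-than. A minimal Swift sketch (the epsilon value is an assumption to tune for your value range):
import CoreGraphics

// 0.0035 - 0.0001 does not come out as exactly 0.0034 in binary floating
// point, so the question's `>` check can stay true when you expect it to flip.
func isGreater(_ a: CGFloat, than b: CGFloat, epsilon: CGFloat = 1e-9) -> Bool {
    return a - b > epsilon
}

With the question's variables this becomes: if isGreater(wallSpeed, than: wallSpeedMin) { ... } else { ... }.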
For more information Comparing Floating Point Numbers and Floating Point in Swift
I am developing an application for the iPhone using opencv. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx   0  cx ]
[  0  fy  cy ]
[  0   0   1 ]
where cx and cy are the pixel coordinates of the image center and fx and fy are the focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as it is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
It says that the focal length can be converted from millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4 (with pixelDensity in pixels per inch, since 25.4 mm = 1 inch).
Another link I found states that:
fx = focalMM * imageWidth / sensorWidthMM;
fy = focalMM * imageHeight / sensorHeightMM;
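(For a sense of scale, and purely as an assumed example: plugging in the iPhone 5-ish numbers that appear later in this thread, a 4.1 mm focal length, a 4.89 mm sensor width, and a 3264 px image width, the second pair of formulas gives fx = 4.1 * 3264 / 4.89, roughly 2737 pixels.)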
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels per unit length, but I'm still not sure what this unit length needs to be. Can it be arbitrary as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or by reading it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up the sensor's spec sheet on the web, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels along that side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig and an as-carefully executed calibration procedure (because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance of your camera and measuring its size in the image. You could aso assume the central point is the center of the image. You should definitely not ignore the lens distortion (dist_coef parameter in solvePnPRansac).
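As a sketch of that measurement idea (in Swift; all the numbers below are placeholders you would replace with your own measurements):
// Pinhole relation: sizeInPixels = f * L / z, so f = sizeInPixels * z / L.
let objectSizeMeters = 0.297       // e.g. the long side of an A4 sheet
let distanceMeters = 1.0           // measured camera-to-object distance
let objectSizePixels = 1150.0      // the object's extent measured in the image
let fPixels = objectSizePixels * distanceMeters / objectSizeMeters  // ~ 3872 px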
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here you have a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;                            // focal length in mm (iPhone 5)
double resX = (double)(sourceImage.cols);  // image width in pixels
double resY = (double)(sourceImage.rows);  // image height in pixels
double sensorSizeX = 4.89;                 // sensor width in mm
double sensorSizeY = 3.67;                 // sensor height in mm
double fx = f * resX / sensorSizeX;        // focal length in pixels (x)
double fy = f * resY / sensorSizeY;        // focal length in pixels (y)
double cx = resX / 2.;                     // principal point: image center
double cy = resY / 2.;
Try this:
func getCamMatrix() -> (Float, Float, Float, Float) {
    let format: AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
    let fDesc: CMFormatDescriptionRef = format!.formatDescription
    let dim: CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
    // dim = final image dimensions
    let cx: Float = Float(dim.width) / 2.0
    let cy: Float = Float(dim.height) / 2.0
    let HFOV: Float = format!.videoFieldOfView
    let VFOV: Float = (HFOV / cx) * cy // approximate vertical FOV from the aspect ratio
    let fx: Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)))
    let fy: Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)))
    return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seem to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you use them in your own project, keep in mind that they may be barely good enough to get started. The best approach is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use with the checkerboard they provide. Hope this helps you get started!
I'd like to update an existing iPhone application which uses AudioQueue for playing audio files. The levels (peakPowerForChannel, averagePowerForChannel) were linear, from 0.0f to 1.0f.
Now I'd like to use the simpler class AVAudioPlayer, which works fine; the only issue is that the levels are now in decibels, from -120.0f to 0.0f, rather than linear.
Has anyone a formula to convert it back to the linear values between 0.0f and 1.0f?
Thanks
Tom
Several Apple examples use the following formula to convert the decibels into a linear range (from 0.0 to 1.0):
double percentage = pow (10, (0.05 * power));
where power is the value you get from one of the various level meter methods or functions, such as AVAudioPlayer's averagePowerForChannel:
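For context, here is a sketch of how that formula is typically used with AVAudioPlayer in Swift (it assumes player.isMeteringEnabled was set to true before playback):
import AVFoundation

func linearLevel(of player: AVAudioPlayer, channel: Int = 0) -> Double {
    player.updateMeters()                         // refresh the cached meter values
    let db = Double(player.averagePower(forChannel: channel))
    return pow(10, 0.05 * db)                     // dB back to a 0.0...1.0 scale
}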
Math behind the Linear and Logarithmic value conversion:
1. Linear to Decibel (logarithmic):
decibelValue = 20.0f * log10(linearValue)
Note: log is base 10
Suppose the linear value is expressed as a percentage, ranging from 0 (min volume) to 100 (max volume); then the decibel value for half volume (50%) is
decibelValue = 20.0f * log10(50.0f/100.0f) = -6 dB
Full volume:
decibelValue = 20.0f * log10(100.0f/100.0f) = 0 dB
Complete mute:
decibelValue = 20.0f * log10(0/100.0f) = -infinity
2. Decibel(logarithmic) to Linear:
LinearValue = pow(10.0f, decibelValue/20.0f)
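Putting both directions into code, a minimal Swift sketch of the two conversions above:
import Foundation

func linearToDecibel(_ linear: Double) -> Double {
    return 20.0 * log10(linear)      // 1.0 -> 0 dB, 0.5 -> about -6 dB, 0.0 -> -inf
}

func decibelToLinear(_ decibel: Double) -> Double {
    return pow(10.0, decibel / 20.0)
}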
Apple uses a lookup table in their SpeakHere sample that converts from dB to a linear value displayed on a level meter.
I moulded their calculation into a small routine; see here.