Mean and SD of directions (in degrees) in NetLogo

I'm working on a model in NetLogo where I would like to report the mean and standard deviation of a set of turtle dispersal directions (bearings from 0-360 degrees) in each year of the simulation. Of course there's no default command in NetLogo for these circular statistics, so I'll need to write out the calculations by hand. I'm wondering if anyone has worked out a similar custom function in NetLogo before?
I've come across this set of code for calculating the mean:
to-report mean-of-headings [headings]
  let x-mean mean map [sin ?] headings   ; mean of the sine (east) components
  let y-mean mean map [cos ?] headings   ; mean of the cosine (north) components
  if x-mean = 0 and y-mean = 0 [ report random 360 ]   ; no defined mean direction
  report atan x-mean y-mean
end
But I'm not certain that will give the correct mean, and I haven't seen code for the SD. My thought was to translate this R code into NetLogo and create a reporter similar to the one above:
Tester <- c(340, 360, 20)                        # list of bearings
sine   <- sum(sin(Tester * pi/180))              # sin of each angle (convert to radians first)
cosine <- sum(cos(Tester * pi/180))              # cos of each angle (convert to radians first)
Tester_mean <- (atan2(sine, cosine) * 180/pi) %% 360
mu <- (Tester - Tester_mean + 180) %% 360 - 180  # difference of each angle from the mean
Tester_sd <- sqrt(sum(mu^2) / (length(Tester) - 1))  # standard deviation
Tester_mean   # mean bearing
Tester_sd     # SD of bearings
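For comparison, here is a minimal Python sketch (the function name is mine) of the usual circular statistics: the mean direction from the averaged sine/cosine components, and the circular standard deviation sd = sqrt(-2 ln R), where R is the mean resultant length. Note this differs slightly from the R snippet above, which takes an ordinary standard deviation of the wrapped angular deviations from the mean.

import math

def circular_mean_sd(bearings_deg):
    # Circular mean and circular SD (both in degrees) of a list of compass bearings.
    rads = [math.radians(b) for b in bearings_deg]
    s = sum(math.sin(r) for r in rads) / len(rads)    # mean sine (east) component
    c = sum(math.cos(r) for r in rads) / len(rads)    # mean cosine (north) component
    mean_deg = math.degrees(math.atan2(s, c)) % 360   # mean bearing in [0, 360)
    r_len = math.hypot(s, c)                          # mean resultant length R in [0, 1]
    sd_deg = math.degrees(math.sqrt(-2 * math.log(r_len))) if r_len > 0 else float("inf")
    return mean_deg, sd_deg

print(circular_mean_sd([340, 360, 20]))   # mean is 0/360 (up to floating point), SD about 16.4 degrees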

Related

Unity predict endpoint from current velocity

My rocket's rigidbody velocity is Vector2(0, 100) when I call a function. How can I calculate the world coordinate (endpoint) where the velocity reaches 0?
Gravity should be included in the formula.
Thanks!
It sounds like you want the integral of the velocity function, which gives the total distance traveled as a function of time.
Your velocity is going to be v = 100 - t * g, where t is time and g is gravity. Solving for time gives t = (100 - v)/g, so at v = 0 that is t = 100/g. So you should reach zero velocity at t = 100/g (assuming consistent units).
The integral of your velocity will give you distance traveled. An integral calculator is here: http://www.integral-calculator.com/
The integral function of your velocity is 100t - (g*t^2)/2
From zero to a particular time t, you can just plug in numbers. For example, with a starting velocity of 100 and gravity 10, you reach zero velocity at t = 100/10 = 10 seconds, and you will have traveled (100 * 10) - ((10 * 10^2)/2) = 1000 - 500 = 500 units.
Edit: To be clear - first you want to calculate how long it takes to get to velocity zero at a particular starting velocity and gravity:
t = vStart/g
Then plug that time value into the integral function above:
distance = (vStart * t) - ((g * t^2)/2)
(or clearly you could turn it into one function by replacing t with vStart/g in the second function, which simplifies to distance = vStart^2 / (2 * g); but if I were coding I would definitely calculate them in two steps to provide a sanity check in case my units were wrong)
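As a quick sanity check of that two-step calculation, here is a minimal Python sketch (names are my own) of the closed-form expressions t = vStart/g and distance = vStart*t - g*t^2/2:

def time_to_stop(v_start, gravity):
    # Time until the velocity v(t) = v_start - gravity * t reaches zero.
    return v_start / gravity

def distance_to_stop(v_start, gravity):
    # Distance traveled from launch until the velocity reaches zero
    # (the integral of v(t) from 0 to the stop time).
    t = time_to_stop(v_start, gravity)
    return v_start * t - 0.5 * gravity * t ** 2

print(time_to_stop(100, 10), distance_to_stop(100, 10))   # 10.0 seconds, 500.0 units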

Compass Heading from Magnetometer on other axis

I am building a small device that also uses magnetometer data in order to calculate the compass heading. The LSM9DS0 IMU sensor works great if the heading is calculated as a yaw (if the sensor is on a flat surface).
I have 3D printed a shell in which I am going to assemble all the electronics. My problem is that it is poorly designed and the IMU sensor does not lie on a flat surface; it has to be mounted at 90 degrees. Because of this, the Z axis is no longer the axis I can use to calculate the yaw (or heading); that role has moved to the Y axis.
In order to calculate the heading around Z, I was using this formula:
heading.value = atan2((float)dof.my, (float)dof.mx);
if(heading.value < 0) heading.value += 2*PI;
if(heading.value > 2*PI) heading.value -= 2*PI;
heading.value *= 180/PI;
...where my is the magnetometer Y and mx the magnetometer X
Now, I don't know how to calculate the heading based on other axis.
I know this thread hasn't been active for a while, but during my search I came across this publication by NXP which explains the solution really nicely.
In brief:
Align the accelerometer readings (G) and the magnetometer readings (B) so they follow the NED coordinate system with the x-axis pointing forward and the y and z-axis to the right and down, respectively.
Calculate the roll and pitch
// Using atan2 to restrict +/- PI
const roll = Math.atan2(Gy, Gz)
// Using atan to restrict to +/- PI/2
const pitch = Math.atan(-Gx / (Gy * Math.sin(roll) + Gz * Math.cos(roll)))
Calculate the yaw / compass heading (here Vx, Vy, Vz correspond to the Hard-Iron effects that can be calculated separately as discussed in this publication):
// Using atan2 to restrict to +/- PI
let yaw = Math.atan2( (Bz-Vz)*Math.sin(roll) - (By-Vy)*Math.cos(roll),
(Bx-Vx)*Math.cos(pitch) + (By-Vy)*Math.sin(pitch)*Math.sin(roll) + (Bz-Vz)*Math.sin(pitch)*Math.cos(roll))
Correct the heading to [0, 2*PI)
if( yaw < 0 ) {
yaw += 2*Math.PI
}
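For convenience, here is the same roll/pitch/tilt-compensated-yaw calculation gathered into one Python sketch (the function name is mine; the hard-iron offsets Vx, Vy, Vz default to zero here as a rough assumption if you have not calibrated them):

import math

def tilt_compensated_heading(Gx, Gy, Gz, Bx, By, Bz, Vx=0.0, Vy=0.0, Vz=0.0):
    # Compass heading in degrees [0, 360) from accelerometer (G) and magnetometer (B)
    # readings aligned to the NED convention, with optional hard-iron offsets V.
    roll = math.atan2(Gy, Gz)                                             # restricted to +/- pi
    pitch = math.atan(-Gx / (Gy * math.sin(roll) + Gz * math.cos(roll)))  # restricted to +/- pi/2
    yaw = math.atan2(
        (Bz - Vz) * math.sin(roll) - (By - Vy) * math.cos(roll),
        (Bx - Vx) * math.cos(pitch)
        + (By - Vy) * math.sin(pitch) * math.sin(roll)
        + (Bz - Vz) * math.sin(pitch) * math.cos(roll))
    return math.degrees(yaw) % 360   # wrap the heading to [0, 360)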

Find Position based on signal strength (intersection area between circles)

I'm trying to estimate a position based on signal strength received from 4 Wi-Fi access points. I measure the signal strength from 4 access points located one in each corner of a square room of 100 square meters (10 m x 10 m). I recorded the signal strengths at a known position (x, y) = (9.5, 1.5) using an Android phone. Now I want to check how accurate a multilateration method can be under these circumstances.
Using MATLAB, I applied a formula to calculate distance using the signal strength. The following MATLAB function shows the application of the formula:
function [ d_vect ] = distance( RSS )
% Calculate distance from signal strength
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
The input RSS is a vector with the four signal strengths measured in the test point (x,y) = (9.5, 1.5). The RSS vector looks like this:
RSS =
-57.6000
-60.4000
-44.7000
-54.4000
and the resultant vector with all the estimated distances to each access points looks like this:
d_vect =
7.5386
10.4061
1.7072
5.2154
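For what it's worth, this formula is the free-space path-loss model at 2400 MHz solved for distance in meters, with |RSS| treated as the path loss in dB. A quick Python check (helper name is mine) reproduces the same values:

import math

def rss_to_distance(rss_dbm, freq_mhz=2400):
    # Invert the free-space path-loss formula: distance in meters from RSS in dBm.
    exponent = (27.55 - 20 * math.log10(freq_mhz) + abs(rss_dbm)) / 20
    return 10 ** exponent

for rss in (-57.6, -60.4, -44.7, -54.4):
    print(round(rss_to_distance(rss), 4))   # should reproduce d_vect above: 7.5386, 10.4061, 1.7072, 5.2154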
Now I want to estimate my position based on these distances and the access point positions, in order to find the error between the estimated position and the known position (9.5, 1.5). To do that, I want to find the intersection area between four circles, where each access point is the center of one circle and the corresponding estimated distance is its radius.
I want to find the grey area as shown in this image:
http://www.biologycorner.com/resources/venn4.gif
If you want an alternative way of estimating the location without estimating the intersection of circles you can use trilateration. It is a common technique in navigation (e.g. GPS) to estimate a position given a set of distance measurements.
Also, if you wanted the area because you need an estimate of the uncertainty of the position, I would recommend solving the trilateration problem using least squares: it readily gives you an estimate of the parameters involved, and error propagation then yields an uncertainty for the location.
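Here is a minimal Python sketch of that least-squares idea, using scipy.optimize.least_squares. The corner coordinates of the access points and their pairing with the four distances are assumptions, since the question does not say which AP produced which RSS value:

import numpy as np
from scipy.optimize import least_squares

# Assumed access-point positions: one per corner of the 10 m x 10 m room
aps = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
d = np.array([7.5386, 10.4061, 1.7072, 5.2154])   # distances estimated from the RSS

def residuals(p):
    # Difference between the distances implied by candidate position p and the measured ones
    return np.linalg.norm(aps - p, axis=1) - d

result = least_squares(residuals, x0=[5.0, 5.0])   # start from the centre of the room
print(result.x)   # estimated (x, y); compare against the known position (9.5, 1.5)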
I found an answer that solves the question perfectly. It is explained in detail at this link:
https://gis.stackexchange.com/questions/40660/trilateration-algorithm-for-n-amount-of-points
I also developed some MATLAB code for the problem. Here it goes:
Estimate distances from the Access Points:
function [ d_vect ] = distance( RSS )
result = (27.55 - (20 * log10(2400)) + abs(RSS)) / 20;
d_vect = power(10, result);
end
The trilateration function:
function [] = trilat( X, d, real1, real2 )
    cla
    % circles() is a plotting helper (not built into MATLAB)
    circles(X(1), X(5), d(1), 'edgecolor', [0 0 0], 'facecolor', 'none', 'linewidth', 4); % AP1 - black
    circles(X(2), X(6), d(2), 'edgecolor', [0 1 0], 'facecolor', 'none', 'linewidth', 4); % AP2 - green
    circles(X(3), X(7), d(3), 'edgecolor', [0 1 1], 'facecolor', 'none', 'linewidth', 4); % AP3 - cyan
    circles(X(4), X(8), d(4), 'edgecolor', [1 1 0], 'facecolor', 'none', 'linewidth', 4); % AP4 - yellow
    axis([0 10 0 10])
    hold on
    tbl = table(X, d);
    d = d.^2;
    weights = d.^(-1);            % weight each observation by 1/distance^2
    weights = transpose(weights);
    beta0 = [5, 5];               % initial guess: centre of the room
    modelfun = @(b,X)(abs(b(1)-X(:,1)).^2 + abs(b(2)-X(:,2)).^2).^(1/2);
    mdl = fitnlm(tbl, modelfun, beta0, 'Weights', weights);
    b = mdl.Coefficients{1:2,{'Estimate'}}
    scatter(b(1), b(2), 70, [0 0 1], 'filled')    % estimated position - blue
    scatter(real1, real2, 70, [1 0 0], 'filled')  % real position - red
    hold off
end
Where,
X: matrix with the AP coordinates (one row per access point, x in the first column and y in the second)
d: vector of estimated distances
real1: real x position
real2: real y position
If you have three sets of measurements with (x, y) coordinates of the location and the corresponding signal strength, such as:
m1 = (x1,y1,s1)
m2 = (x2,y2,s2)
m3 = (x3,y3,s3)
Then you can calculate distances between each of the point locations:
d12 = Sqrt((x1 - x2)^2 + (y1 - y2)^2)
d13 = Sqrt((x1 - x3)^2 + (y1 - y3)^2)
d23 = Sqrt((x2 - x3)^2 + (y2 - y3)^2)
Now consider that each signal strength measurement signifies an emitter for that signal, located somewhere at a distance. That distance would be a radius around the location where the signal strength was measured, because at this point one does not know the direction the signal came from. Also, the weaker the signal, the larger the radius; in other words, the signal strength measurement is inversely proportional to the radius. The smaller the signal strength, the larger the radius, and vice versa. So, calculate proportional, although not yet accurate, radii for our three points:
r1 = 1/s1
r2 = 1/s2
r3 = 1/s3
So now, for each point pair, set apart by their distance, we can calculate a constant (C) such that the radii from the two locations will just touch one another. For example, for the point pair 1 & 2:
Ca * r1 + Ca * r2 = d12
... solving for the constant Ca:
Ca = d12 / (r1 + r2)
... and we can do this for the other two pairs, as well.
Cb = d13 / (r1 + r3)
Cc = d23 / (r2 + r3)
All right... select the largest C constant, either Ca, Cb, or Cc. Then, use the parametric equation for a circle to find where the coordinates meet. I will explain.
The parametric equation for a circle is:
x = radius * Cos(theta)
y = radius * Sin(theta)
If Ca was the largest constant found, then you would compare points 1 & 2, such as:
Ca * r1 * Cos(theta1) == Ca * r2 * Cos(theta2) &&
Ca * r1 * Sin(theta1) == Ca * r2 * Sin(theta2)
... iterating theta1 and theta2 from 0 to 360 degrees, for both circles. You might write code like:
for theta1 in 0 ..< 360 {
    for theta2 in 0 ..< 360 {
        // Convert degrees to radians before calling sin/cos, and offset each circle
        // by its centre (x1, y1) / (x2, y2), the measurement locations from above
        let t1 = Double(theta1) * Double.pi / 180
        let t2 = Double(theta2) * Double.pi / 180
        let p1 = (x: x1 + Ca * r1 * cos(t1), y: y1 + Ca * r1 * sin(t1))
        let p2 = (x: x2 + Ca * r2 * cos(t2), y: y2 + Ca * r2 * sin(t2))
        if abs(p1.x - p2.x) < 0.01 && abs(p1.y - p2.y) < 0.01 {
            print("point is: (", p1.x, p1.y, ")")
        }
    }
}
Depending on what your tolerance was for a match, you wouldn't have to do too many iterations around the circumferences of each signal radius to determine an estimate for the location of the signal source.
So basically you need to intersect 4 circles. There can be many approaches to it, and there are two that will generate the exact intersection area.
The first approach is to start with one circle, intersect it with the second circle, then intersect the resulting area with the third circle, and so on. That is, at each step you know the current intersection area, and you intersect it with a new circle. The intersection area will always be a region bounded by circular arcs, so to intersect it with a new circle you walk along the boundary of the area and check whether each bounding arc intersects the new circle. If it does, you keep only the part of the arc that lies inside the new circle, remember that you should continue along an arc of the new circle, and keep traversing the boundary until you find the next intersection.
Another approach, which seems to have worse time complexity (although in your case of 4 circles this will not matter), is to find all the intersection points of each pair of circles and keep only the points of interest, that is, those which lie inside all the other circles. These points will be the corners of your area, and then it is fairly easy to reconstruct the area. After googling a bit, I even found a live demo of this approach.
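Here is a small Python sketch of that second approach: compute the intersection points of every pair of circles, and keep only those that lie inside (or on) all the other circles; the surviving points are the corners of the common area. The function names are my own, and the example at the end simply builds four circles around the known test point (9.5, 1.5) with slightly inflated radii, purely to illustrate.

import math
from itertools import combinations

def circle_intersections(c1, r1, c2, r2):
    # Intersection points of two circles given as (centre, radius); empty if they don't intersect.
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                    # separate, contained, or concentric
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)       # distance from c1 to the chord
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))        # half-length of the chord
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

def common_area_corners(circles, eps=1e-9):
    # Corners of the region common to all circles: pairwise intersection points
    # that lie inside (or on) every other circle.
    corners = []
    for (i, (ci, ri)), (j, (cj, rj)) in combinations(enumerate(circles), 2):
        for p in circle_intersections(ci, ri, cj, rj):
            if all(math.hypot(p[0] - ck[0], p[1] - ck[1]) <= rk + eps
                   for k, (ck, rk) in enumerate(circles) if k not in (i, j)):
                corners.append(p)
    return corners

# Illustration: one assumed AP in each corner, radii = true distance to (9.5, 1.5) plus 0.3 m
aps = [(0, 0), (10, 0), (10, 10), (0, 10)]
circles = [(ap, math.hypot(9.5 - ap[0], 1.5 - ap[1]) + 0.3) for ap in aps]
print(common_area_corners(circles))   # corner points of the overlap region near (9.5, 1.5)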

Rotating a vector denoting turtle position by an angle in NetLogo

I set/update each turtle's position as follows:
set xcor xcor + item 0 vector
set ycor ycor + item 1 vector
Therefore I add a vector to the current agent's coordinates.
PROBLEM:
I wish to rotate the added vector by angle x. Thus the vector "vector" should be rotated by angle x.
The angle should be taken from a Gaussian distribution with a specified deviation.
I am trying to achieve something similar to Couzin's model.
http://www.csim.scu.edu.tw/~chiang/course/ComputerGameAdvance/Collective%20Memory%20and%20Spatial%20Sorting%20in%20Animal%20Groups.pdf
Thanks in advance!
You seem to have two questions here; I'll address the one you used for the title. The matrix extension allows matrix multiplication, so you could just create a standard rotation matrix once you have the angle of rotation. But the standard advice in NetLogo would be to use a more turtle-centric approach. Then you need to decide whether to use the NetLogo heading conventions (0 degrees for north, 90 degrees for east, etc.). If so, you could do something like this:
to move [#dx #dy]
  let %dist 0
  ask patch 0 0 [set %dist distancexy #dx #dy]   ; magnitude of the (#dx, #dy) vector
  facexy (xcor + #dx) (ycor + #dy)
  let %theta random-rotation
  rt %theta
  jump %dist
end

to-report random-rotation
  report (random-float 360) - 180
end
Here the random rotation is not Gaussian distributed because I was not sure what you meant. Perhaps a von Mises distribution? In any case, you should clarify and ask as a separate question.
Just to emphasize Alan's point: unless you have a good reason to use vectors, it's usually much easier and clearer to avoid them in NetLogo. If all you want to do is turn the turtle by a random amount drawn from a Gaussian distribution, you can just do:
rt random-normal 0 <std-dev>
where <std-dev> is your desired standard deviation. Then, you can tell the turtle to go forward by what would have been the magnitude of the vector: forward <distance>.
If you absolutely need to do a vector rotation, you can do so without the matrix extension fairly easily:
to-report rotate-vector [ vec angle ]
  let x first vec
  let y last vec
  let mag sqrt (x * x + y * y)   ; magnitude of the vector
  let old-angle atan x y         ; current heading of the vector (0 = north)
  let new-angle angle + old-angle
  report (list (mag * sin new-angle) (mag * cos new-angle))
end
Remember that angles in NetLogo are mirrored about the 45º line relative to the usual math convention, so that 0º is north and 90º is east; thus, sin and cos are swapped when dealing with headings.
It's fairly simple: convert the vector to an angle, rotate (randomize) it, then convert back. For good coding style and such, break it into separate procedures.
to-report rotate [ #vector #angle ]
  let $dx first #vector
  let $dy last #vector
  let $magnitude sqrt ($dx * $dx + $dy * $dy)
  set #angle #angle + atan $dx $dy
  report (list ($magnitude * sin #angle) ($magnitude * cos #angle))
end

to-report nudge-vector [ #vector #std-dev ]
  report rotate #vector random-normal 0 #std-dev
end

to move-inaccurately [ #vector #std-deviation ]
  set #vector nudge-vector #vector #std-deviation
  setxy (xcor + first #vector) (ycor + last #vector)
end
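For comparison outside NetLogo, the same convert-rotate-convert idea can be written as a small Python sketch using the standard rotation matrix (names are mine). Note that NetLogo headings increase clockwise while the standard math convention is counter-clockwise, so the sign of the angle is mirrored; for a zero-mean Gaussian turn that makes no difference.

import math
import random

def rotate_vector(vec, angle_deg):
    # Rotate a 2D vector counter-clockwise by angle_deg (standard rotation matrix).
    x, y = vec
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def nudge_vector(vec, std_dev):
    # Rotate vec by an angle drawn from a Gaussian with mean 0 and the given SD (degrees).
    return rotate_vector(vec, random.gauss(0, std_dev))

print(nudge_vector((1.0, 0.0), 20))   # e.g. roughly (0.97, 0.24) for a draw of about +14 degrees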

What is the depth image received from Kinect

When I ran this MATLAB code to get the depth image, the result I got is a 480x640 matrix. The minimum element value is 0 and the maximum element value is 2711. What does 2711 mean? Is that the distance from the camera to the farthest part of the image? And what is the unit of 2711: meters, feet, or something else?
I don't know exactly what the MATLAB code does to the depth, but it probably does some processing on it, because the depth sent by the Kinect is an 11-bit value, so it shouldn't be higher than 2047. Try to find out what it does, or get access to the raw data sent by the Kinect.
The data sent by the Kinect is not a proper distance (it's a "disparity"), so you have to do some math to convert it to useful units.
From the OpenKinect project wiki (which contains useful information about the Kinect) :
From their data, a basic first order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100/(-0.00307 * rawDisparity + 3.33). This approximation is approximately 10 cm off at 4 m away, and less than 2 cm off within 2.5 m.

A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863), in meters. Adding a final offset term of -0.037 centers the original ROS data. The tan approximation has a sum squared difference of .33 cm while the 1/x approximation is about 1.7 cm.

Once you have the distance using the measurement above, a good approximation for converting (i, j, z) to (x, y, z) is:

x = (i - w / 2) * (z + minDistance) * scaleFactor * (w / h)
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z

where minDistance = -10 and scaleFactor = .0021. These values were found by hand.
You can find more details about the Kinect's depth camera and its calibration on the ROS website (and many others!).
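Here is a minimal Python sketch of the two conversions quoted above (Magnenat's disparity-to-meters approximation including the -0.037 offset, then the hand-tuned (i, j, z) to (x, y, z) projection); the function names are my own:

import math

def raw_disparity_to_meters(raw_disparity):
    # Stéphane Magnenat's approximation for the raw 11-bit Kinect disparity value.
    return 0.1236 * math.tan(raw_disparity / 2842.5 + 1.1863) - 0.037

def depth_to_xyz(i, j, z, w=640, h=480, min_distance=-10, scale_factor=0.0021):
    # Convert pixel (i, j) with depth z into (x, y, z) using the hand-tuned
    # constants quoted from the OpenKinect wiki.
    x = (i - w / 2) * (z + min_distance) * scale_factor * (w / h)
    y = (j - h / 2) * (z + min_distance) * scale_factor
    return x, y, z

print(raw_disparity_to_meters(600))   # about 0.67 m for a raw disparity of 600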
If you map the data to a meter scale it compresses the depth image slightly. I found this was an issue when I was trying to look for planes in the mapped data.