I have a micro:bit project where the micro:bit is inserted vertically into a Kitronik robot. I would like to get the heading of the robot, but
compass.heading()
only works if the micro:bit is horizontal. I have tried reading the x, y, z co-ordinates of the compass using get_x(), get_y(), get_z(), but the ranges of numbers I am getting are scaled differently for the z axis than for the x and y axes.
Does anyone know what the ranges are for the different sensors?
I used the test code below. I can get an accurate compass reading if I run the compass.calibrate() function first, even with the magnetometer vertical.
After commenting out the compass.calibrate() line, when moving the board around three axes in free space, I can see that the z value does not vary as much as x and y. So I got a small magnet; moving it around the magnetometer makes the x, y, z values change within roughly the same limits - this is a rough eyeball experiment.
Looking at the data sheet for the MAG3110 magnetometer, I can't see any indication that the three magnetometer axes are different. So why are the z readings different without an external field? I hypothesise that there is a ground plane in the PCB - this is common in PCB construction - which could be acting as a shield for the z-axis.
from microbit import *

# compass.calibrate()
while True:
    sleep(250)
    # c = compass.heading()
    x = compass.get_x()
    y = compass.get_y()
    z = compass.get_z()
    print('x:{} y:{} z:{}'.format(x, y, z))
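For reference, one way to compute a heading while the board is vertical is to take the arctangent of the two magnetometer axes that lie in the horizontal plane. The sketch below is rough: the x/z pairing is an assumption about the mounting, and hard-iron offsets are ignored.

from microbit import *
import math

while True:
    sleep(250)
    # Assumption: with the board vertical, x and z span the horizontal
    # plane; the correct pair depends on how the board is mounted.
    x = compass.get_x()
    z = compass.get_z()
    # Hard-iron offsets are ignored here; they would need calibrating out.
    heading = (math.degrees(math.atan2(z, x)) + 360) % 360
    print(heading)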
For those working with the micro:bit v2, you can get compass readings using the compass.heading() function too.
Code example:
from microbit import *

while True:
    if button_a.was_pressed():
        display.scroll(str(compass.heading()))
The first time you run this code, the micro:bit will ask you to tilt the board in different directions to calibrate, and then it will start providing the required headings.
As for your question about the compass ranges, here is a quick summary, as in the image below:
Background
I am reading data from an ADXL355 accelerometer via SPI on a Raspberry Pi Zero W and seem to have gotten stable g values after conversion, but the data I am getting seems to have a few problems.
ADXL355 Datasheet
1) Offset from expected value
Even though the accelerometer is sitting on a static table, the data from it does not read 1 g. There are settings for changing the offset of the readings, but it strikes me as strange that the offset would be set to several g, around 3 g in a ±8 g range. This offset is present in the x-axis and y-axis (the z-axis seems to be way out of order); these are shown in "Flat on table".
Flat on table
2) Axes movement in same direction
I tested the accelerometer axes by changing the orientation of the board as per the "Testing sequence [...]" image below.
Testing sequence, RPI headers in red, Analog Discovery headers in green, smallest square board is ADXL355
This gives the following readout from the accelerometer:
Accelerometer testing results
The y-axis being displaced in the same direction at each extreme of the rotation is what surprises me, given that the endpoints of the rotation are 180 degrees opposed.
3) Coupling of axes
As seen in the accelerometer testing results image above, the axes seem to move together. This is something else that confuses me, as I am barely moving the x-axis. Could it be that it is being rotated through space as though it were attached to an arm?
4) Z-Axis Irregularities
Any ideas as to why the z-axis is all over the place would also be greatly appreciated. The axes are all fed through the same conversion from the raw accelerometer data to g values.
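For reference, a typical conversion of the raw samples to g looks like the sketch below, assuming the ±8 g range (roughly 64,000 LSB/g per the datasheet) and the left-justified 20-bit DATA3/DATA2/DATA1 byte layout. A missed sign-extension step in this conversion is a classic cause of large constant offsets like the ones described here.

def adxl355_raw_to_g(b3, b2, b1, lsb_per_g=64000.0):
    # Combine the three data bytes into one 20-bit sample; the data is
    # left-justified, so the low 4 bits of the last byte are padding.
    raw = (b3 << 12) | (b2 << 4) | (b1 >> 4)
    # Sign-extend from 20 bits (two's complement).
    if raw & (1 << 19):
        raw -= 1 << 20
    return raw / lsb_per_g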
I was visualising my wind velocity using glyphs, and cloud water content at the same time. However, I noticed that the direction in which the clouds move does not match the direction the glyphs are pointing.
Below are the steps I used to create the output:
The data is a netCDF file with wind variable arrays "ua" (eastward_wind_speed), "va" (northward_wind_speed), and "wa" (wind_vertical_velocity).
I used a Cell Data to Point Data filter to convert them into point data.
Then I combined these three arrays using a ParaView Calculator with the equation iHat*ua + jHat*va + kHat*wa.
Then I applied a Glyph filter to visualise the wind velocity.
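For reference, the same pipeline can be scripted in pvpython roughly as below. This is only a sketch: the file name is hypothetical, and the Glyph property names vary between ParaView versions.

from paraview.simple import *

# Hypothetical file name standing in for the real netCDF data
reader = NetCDFReader(FileName=['wind.nc'])
pointdata = CellDatatoPointData(Input=reader)

calc = Calculator(Input=pointdata)
calc.Function = 'iHat*ua + jHat*va + kHat*wa'
calc.ResultArrayName = 'velocity'

glyph = Glyph(Input=calc, GlyphType='Arrow')
glyph.OrientationArray = ['POINTS', 'velocity']  # property name in ParaView >= 5.8
Show(glyph)
Render()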
The problem is that the clouds are moving to the left (east), which does not match where the glyphs are pointing (south).
What would be the possible reason for this error?
TIA
Update:
For anyone who might have the same problem: I just solved it, and the glyphs make much more sense now.
Switched off the spherical coordinates option.
Applied a Transform filter to scale down the vertical component.
Then did the Contour and Glyph filters as usual.
There are two things to consider.
Some weather agencies use the convention that wind direction is the direction from which the wind is blowing. However, other agencies use the direction toward which it is blowing.
You are probably not using the wind at the same height as the clouds.
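If the data turns out to use the "blowing from" convention, flipping it to the vector ("blowing toward") direction is a one-line fix; a small sketch in Python, with direction measured in degrees:

def from_direction_to_vector_direction(deg_from):
    # A wind reported as blowing FROM deg_from travels TOWARD the
    # opposite bearing.
    return (deg_from + 180.0) % 360.0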
I am developing a project detecting vehicle headlights in night scenes, and I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to reduce the computational load. I have researched many papers, and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one that changes in each frame. My experiment images are shown here:
If anyone has any ideas, please give me some suggestions.
I would turn the problem around and say that we are looking for headlights ABOVE a certain line, rather than saying that the headlights are below a certain line, i.e. the horizon.
Your images have a very strong reflection off the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the maximum light and use that as our floor, then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. night) and where the reflection of the headlights onto the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys. We are trying to smooth the headlights and the reflection so that there will be a nice peak; ideally, the headlights and the reflection would merge into one area.
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
intensity_profile.append(pl.sum(grey_image[r,:]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y-coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution; the graph on the right shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y-coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile == max_value)[0]
# Take the first matching row and make it a plain int
horizon = int(max_value_location[0])
The blue curve in the right-most figure represents the variable profile.
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.
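For completeness, one possible way to use the detected floor, continuing the variables above; the cropping and the drawn line are illustrations, not part of the original detection:

# Keep only the part of the frame above the detected floor for the
# headlight search.
roi = grey_image[:horizon, :]
# Draw the floor line on the original image for visual verification.
cv2.line(im, (0, horizon), (im.shape[1] - 1, horizon), (0, 0, 255), 2)
cv2.imwrite('headlights_floor.jpg', im)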
I display the result below.
I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.
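For example, a quick sketch with OpenCV (the file name is hypothetical):

import cv2

im = cv2.imread('frame.jpg')  # hypothetical input frame
# pyrDown blurs and halves each dimension in one step
small = cv2.pyrDown(im)
# or pick an explicit scale factor
small = cv2.resize(im, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)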
I've been experimenting with the compass and gyroscope on iPhone 4 and would like some help with an issue I'm having. I want to compensate for the slowness of the compass by using data from the gyroscope.
Using CMMotionManager and its CMDeviceMotion object (motionManager.deviceMotion), I get the CMAttitude object. Correct me if I'm wrong (please), but here is what I've deduced from the CMAttitude object's yaw property (I don't need pitch nor roll for my purposes):
yaw ranges from 0 to PI when the phone is pointing downwards (as indicated by deviceMotion.gravity.z) and swinging counterclockwise and 0 to -PI when swung clockwise
when the device is pointing upwards, yaw ranges from -PI to 0 and PI to 0, respectively
and from the compass data (I'm using locationManager.heading.magneticHeading), I see that the compass gives values from 0 to 360, with the value increasing when swinging clockwise
All right, so using all of this information together, I'm able to compute a value I call horizontal that, regardless of whether the device is pointing up or down, gives values from 0 to 360 and increases when the device is swung clockwise (though I am still having trouble when deviceMotion.gravity.z is around 0 -- the yaw value freaks out at that gravity.z value).
It seems to me that I could "synchronize" the horizontal and magneticHeading values, using a calculated horizontal value that maps to magneticHeading, and "synchronize" the horizontal value to magneticHeading when I feel the compass has "caught up."
So my questions:
Am I on the right track with this?
Am I using the gyro data from CMDeviceMotion properly, and are the assumptions I listed above correct?
Why might yaw freak out when gravity.z is around 0?
Thank you very much. I look forward to hearing your answers!
Just trying to answer... correct me if I'm wrong.
1. Yes, you are on the right track.
2. "gravity" in Core Motion is already isolated from user-induced acceleration; that's why there are two values, "gravity" and "userAcceleration". It's in Apple's Core Motion documentation. (Note: not entirely isolated.)
3. If a gravity component is 0, it means the corresponding axis is perpendicular to gravity. gravity.z is normal to the iPhone screen; that's why it reads -9.82 m/s² when the phone is on a desk with the screen facing up. In practice it is hard to read exactly 0 or the maximum gravity value because of sensor noise (that's normal; all sensors have noise, especially cheap ones).
What I do in my apps is switch my reference axis to another axis (in your case maybe x or y) beyond certain limits; the strategy depends on the purpose and on which side is your reference.
The other thing is that the gyro is fast but not stable, so you need to recalibrate its value at intervals - in my case, every 5 seconds. I have experimented with the gyro for calculating the angle between two planes: trying it against an exactly 90-degree ruler, it gave an error of about 0.5 degrees per try, and the error kept increasing. That is my result; maybe others have a better method for avoiding the error.
Below are my steps:
1. Init.
2. Read gravity XYZ -> Xg, Yg, Zg.
3. Check if Xg < 0.25; if TRUE, try Yg, then Zg. // Note: 1 = 1 g = 9.82 m/s²
4. Read the compass and gyro.
5. Configure and calibrate the gyro using the compass, and calculate based on the axis chosen in step 3.
6. If 5 seconds have passed, recalibrate: read the compass.
7. If the difference from the gyro reading is > 5 degrees, skip recalibrating the gyro.
8. If the difference from the gyro reading is < 5 degrees, calibrate the gyro using the compass value.
Note on step 7: this is to check whether the phone is affected by a magnetic field, e.g. near large steel structures, high-voltage power lines, or noisy heavy equipment in a factory plant.
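A rough Python-style sketch of steps 6 to 8; the 5-second interval and 5-degree threshold are the values from the list above, everything else is illustrative:

def maybe_recalibrate(gyro_heading, compass_heading, last_calib_time, now):
    # Step 6: only consider recalibrating every 5 seconds.
    if now - last_calib_time < 5.0:
        return gyro_heading, last_calib_time
    # Wrap-aware difference between the two headings, in degrees.
    diff = abs(((compass_heading - gyro_heading + 180.0) % 360.0) - 180.0)
    # Step 7: a large difference suggests magnetic disturbance; skip.
    if diff > 5.0:
        return gyro_heading, now
    # Step 8: the compass looks trustworthy; snap the gyro heading to it.
    return compass_heading, now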
That's all... I hope this helps you!
And sorry for my English.
Here is an example of an iPhone app where the compass is compensated with the gyroscope. Code and project can be seen here:
http://www.sundh.com/blog/2011/09/stabalize-compass-of-iphone-with-gyroscope/
The direction of the yaw axis vector is undefined when in zero gravity (or free fall, or close enough).
In order to do synchronization while in motion, you need to create a filter for your "horizontal" value that has the same lag/delay response characteristics as the magnetic compass. Either that, or wait until motion stops long enough for both values to settle before recalculating the offset.
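One common shape for such a filter is a complementary filter. Below is a minimal Python-style sketch; the 0.98 blend factor, the degree units, and the sampling interval dt are all assumptions:

def fuse_heading(heading, gyro_rate, compass_heading, dt, alpha=0.98):
    # Integrate the fast, low-noise gyro yaw rate (degrees/second).
    heading = (heading + gyro_rate * dt) % 360.0
    # Wrap-aware error between the slow compass and the integrated heading.
    error = ((compass_heading - heading + 180.0) % 360.0) - 180.0
    # Pull the integrated heading gently toward the compass to cancel drift.
    return (heading + (1.0 - alpha) * error) % 360.0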
The answer to question 1 is yes. For question 2, you are on the right track, though you could use a variable name other than 'horizontal'. Question 3 is answered by hotpaw2; also, a yaw in a helicopter at near-zero altitude would alert the pilot with an alarm. There is a time lag because part of the software is local, while other factors can slow it down: access to the sensor for detecting magnetic fields, the device's position and direction, preparing the graphic output for the compass display, and computing and outputting data from the gyro and sensors through a relatively slow interface, on a general-purpose handheld device not custom designed for this type of task.
I've found this really cool site on interfacing an Arduino to an optical mouse to read out x-y readings from it. I've done it, and it's working nicely.
Then I thought, 'Why not plot all this as a graph?' and I came across Processing.
I am aware that Processing has an example named 'MouseSignal'
This example is the EXACT thing that I want to write with Processing. The only change is that I want to use the x-y coordinates from the mouse attached to the Arduino and have Processing generate a 'real-time' graph of those coordinates.
Thanks!
Change the spot in the code where it says:
xvals[width-1] = mouseX;
yvals[width-1] = mouseY;
Replace mouseX and mouseY with the values coming from the Arduino. You may need to scale these values to fit within the axes.
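If you would rather prototype the plot outside Processing first, below is a rough Python alternative using pyserial and matplotlib; the port name and the "x,y" line format are assumptions about your Arduino sketch:

import serial
import matplotlib.pyplot as plt
from collections import deque

port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # hypothetical port
xs = deque(maxlen=300)  # keep the last 300 samples, like a scrolling graph
ys = deque(maxlen=300)

plt.ion()
fig, ax = plt.subplots()
while True:
    line = port.readline().decode(errors='ignore').strip()
    try:
        x, y = (int(v) for v in line.split(','))  # assumes "x,y" per line
    except ValueError:
        continue  # skip malformed or partial lines
    xs.append(x)
    ys.append(y)
    ax.clear()
    ax.plot(xs, label='x')
    ax.plot(ys, label='y')
    ax.legend(loc='upper left')
    plt.pause(0.01)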