Controlling robot makes a shaking movement - coordinates

I'm trying to control a robot by sending positions at 100 Hz. It makes a shaking movement when I send that many positions. When I send one position that is about 50 mm from its start position, it moves smoothly. When I use my sensor to steer (so it sends every position from 0 to 50 mm), it shakes. I'm probably sending something like X0-X1-X2-X1-X2-X3-X4-X5-X4-X5, and that is probably why it shakes. How can I make the robot move smoothly when I steer it with my mouse?
The robot asks for commands at 125 Hz.
The IR sensor is sending at 100 Hz.
Or does the 25 Hz difference cause the problem?
Here is my code.
while (true) {
    // If sensor 1 is recording IR light, use its coordinates; otherwise fall back to the defaults.
    if (listen1.newdata == true)   // note: '==' (comparison), not '=' (assignment)
    {
        coX1 = (int) listen1.get1X();
        coY1 = (int) listen1.get1Y();
        newdata = true;
    } else {
        coX1 = 450;
        coY1 = 300;
    }
    if (listen2.newdata == true)
    {
        coX2 = (int) listen2.get1X();
        coY2 = (int) listen2.get1Y();
        newdata = true;
    } else {
        coY2 = 150;
    }
    // If the sensor goes outside the workspace, clamp the coordinates back to these defaults.
    if (newdata == true)
    {
        if (coX1 < 200 || coX1 > 680)
        {
            coX1 = 450;
        }
        if (coY1 < 200 || coY1 > 680)
        {
            coY1 = 300;
        }
        if (coY2 < 80 || coY2 > 300)
        {
            coY2 = 150;
        }
    }
    // This is the actual command sent to the robot.
    Gcode = String.format("movej(p[0.%d,-0.%d, 0.%d, -0.5121, -3.08, 0.0005])" + "\n", coX1, coY1, coY2);
    // Sends the message to the server.
    send(Gcode, out);
    System.out.println(Gcode);
    newdata = false;
}
}
private static void send(String movel, PrintWriter out) {
    try {
        out.println(movel); /* Writes to the server */
        // System.out.println("Writing: " + movel);
        // Thread.sleep(250);
    }
    catch (Exception e) {
        System.out.print("Error Connecting to Server\n");
    }
}
}
# Edit
I discovered a way I could do this: using the min and max. Basically, what I think I have to do is (a rough sketch follows the list):
* Put every individual coordinate in an array (12 coordinates)
* Get the min and max out of this array
* Output the average of the min and max
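A minimal sketch of that idea, assuming the coordinates arrive one at a time; the window size of 12 and the class/method names below are illustrative assumptions, not part of the original program:

import java.util.ArrayDeque;
import java.util.Deque;

// Keep the last 12 raw samples and return the midpoint of the window's
// min and max instead of the raw value, so single-sample jitter is absorbed.
class MinMaxSmoother {
    private final Deque<Integer> window = new ArrayDeque<>();
    private final int size;

    MinMaxSmoother(int size) {
        this.size = size;
    }

    int smooth(int raw) {
        window.addLast(raw);
        if (window.size() > size) {
            window.removeFirst();      // drop the oldest sample
        }
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int v : window) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return (min + max) / 2;        // average of min and max
    }
}

In the loop above you would then send smoother.smooth(coX1) (and likewise for coY1 and coY2, each with its own smoother) instead of the raw values, so a jittery sequence like X4-X5-X4-X5 collapses to a single stable midpoint.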

Without knowing more about your robot's characteristics and how you can control it, here are some general considerations:
To get a smooth motion from your robot, you should control it in speed with a well-designed PID controller.
If you can only control it in position, the best you can do is monitor the position and wait for it to be "near enough" to the targeted position before sending the next position.
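A minimal sketch of that "near enough" gate, assuming you have some way to read the robot's actual position back (how you obtain it depends on the robot and is not shown here; all names are illustrative):

// Decide whether the previously sent waypoint has (almost) been reached,
// so the next one can be sent. currentX/currentY are the robot's measured
// position; targetX/targetY are the last waypoint that was sent.
static boolean nearEnough(int currentX, int currentY,
                          int targetX, int targetY, int tolerance) {
    int dx = currentX - targetX;
    int dy = currentY - targetY;
    return dx * dx + dy * dy <= tolerance * tolerance; // compare squared distances
}

In the send loop you would only call send(Gcode, out) once nearEnough(...) returns true for the last target; otherwise skip this cycle and keep only the newest sensor reading as the next candidate, so stale intermediate positions are dropped rather than queued.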
If you want a more detailed answer, please give more information about the command you send to the robot (movej); I suspect that you can do much more than just send [x, y] coordinates.

Related

How to allow raycasting when the player is at a certain distance from object?

I've implemented opening drawers/doors in my game, but the issue I'm facing is that when the player opens a drawer, the player gets pushed back so the drawer has some room to open; sometimes the player jitters while being pushed back.
Physics.Raycast(mainCamera.transform.position, mainCamera.transform.forward, out hit, 3f);
if (hit.transform)
{
    interactiveObjects = hit.transform.GetComponent<InteractiveObjects>();
}
else
{
    lookObject = null;
    interactiveObjects = null;
}
if (Open)
{
    if (interactiveObjects)
    {
        interactiveObjects.OnOpen();
    }
}
I'm using a raycast to open the drawer. Is there a way to only allow the raycast when the player is not too close to the drawer, so the player doesn't get pushed back by it?
You can check the distance after doing the raycast. If the distance is within the tolerable range, execute the rest of your code.
if (Physics.Raycast(mainCamera.transform.position, mainCamera.transform.forward, out hit, 3f))
{
    if (hit.distance >= minDistance)
    {
        // Code to execute when range is acceptable
    }
    else
    {
        Debug.Log("Player is too close to object!");
    }
}
The distance used above does not take into account the height difference between the hit point and the camera. You can get a much more consistent distance by setting the y component of both vectors equal before getting the distance.
var cameraPos = mainCamera.transform.position;
cameraPos.y = hit.point.y;
var distance = Vector3.Distance(hit.point, cameraPos);

Check microphone for silence

While recording the user's voice, I want to know when they have stopped talking so I can end the recording and send the audio file to the Google speech recognition API.
I found this thread and tried to use its solution, but I always get the same value from the average of the spectrum data, which is 5.004574E-08:
Unity - Microphone check if silent
This is the code I am using to get the GetSpectrumData value:
public void StartRecordingSpeech()
{
    // If there is a microphone
    if (micConnected)
    {
        if (!Microphone.IsRecording(null))
        {
            goAudioSource.clip = Microphone.Start(null, true, 10, 44100); // Currently set for a 10 second clip max
            goAudioSource.Play();
            StartCoroutine(StartRecordingSpeechCo());
        }
    }
    else
    {
        Debug.LogError("No microphone is available");
    }
}

IEnumerator StartRecordingSpeechCo()
{
    while (Microphone.IsRecording(null))
    {
        float[] clipSampleData = new float[128];
        goAudioSource.GetSpectrumData(clipSampleData, 0, FFTWindow.Rectangular);
        Debug.Log(clipSampleData.Average());
        yield return null;
    }
}
PS: I am able to record the user's voice, save it, and get the right response from the voice recognition API.
The following method is what worked for me. It measures the microphone volume and turns it into decibels; it does not need to play the recorded audio or anything. (Credit goes to this old thread on the Unity forums: https://forum.unity.com/threads/check-current-microphone-input-volume.133501/.)
public float LevelMax()
{
    float levelMax = 0;
    float[] waveData = new float[_sampleWindow];
    int micPosition = Microphone.GetPosition(null) - (_sampleWindow + 1); // null means the first microphone
    if (micPosition < 0) return 0;
    goAudioSource.clip.GetData(waveData, micPosition);
    // Getting a peak on the last 128 samples
    for (int i = 0; i < _sampleWindow; i++)
    {
        float wavePeak = waveData[i] * waveData[i];
        if (levelMax < wavePeak)
        {
            levelMax = wavePeak;
        }
    }
    float db = 20 * Mathf.Log10(Mathf.Abs(levelMax));
    return db;
}
In my case, if the value is bigger than -40, the user is talking; if it's 0 or bigger, there is a loud noise; other than that, it's silence.
If you are interested in volume, then GetSpectrumData is actually not what you want. It is used for frequency analysis and returns, as the name says, a frequency spectrum: how loud each frequency is within a given frequency range.
What you rather want is GetOutputData, which (as far as I know) returns an array of amplitudes from -1 to 1. So you square all values, take the average, and take the square root of that result, i.e. the RMS (source).
float[] clipSampleData = new float[128];
goAudioSource.GetOutputData(clipSampleData, 0);
Debug.Log(Mathf.Sqrt(clipSampleData.Select(f => f*f).Average()));

Unity - CheckSphere not working? spawn freezes only on device?

I am trying to spawn a set number of cubes within an ever-changing area (a plane, ARKit) and NOT have them overlap. Pretty simple, I'd think, and I have this working in the Unity editor.
My problem is that when I deploy to a device (iPhone), everything is different. Several things aren't working, and I don't know why; it's a relatively simple script. First I thought CheckSphere wasn't working, perhaps something with scale being different, but this is how I try to get an empty space:
public Vector3 CheckForEmptySpace(Bounds bounds)
{
    float sphereRadius = tierDist;
    Vector3 startingPos = new Vector3(UnityEngine.Random.Range(bounds.min.x, bounds.max.x), bounds.min.y, UnityEngine.Random.Range(bounds.min.z, bounds.max.z));

    // Loop, until empty adjacent space is found
    var spawnPos = startingPos;
    while (true)
    {
        if (!(Physics.CheckSphere(spawnPos, sphereRadius, 1 << 0))) // Check if area is empty
            return spawnPos; // Return location
        else
        {
            // Not empty, so gradually move position down. If we hit the boundary edge, move and start again from the opposite edge.
            var shiftAmount = 0.5f;
            spawnPos.z -= shiftAmount;
            if (spawnPos.z < bounds.min.z)
            {
                spawnPos.z = bounds.max.z;
                spawnPos.x += shiftAmount;
                if (spawnPos.x > bounds.max.x)
                    spawnPos.x = bounds.min.x;
            }

            // If we reach back to a close radius of the starting point, then we didn't find any empty spots
            var proximity = (spawnPos - startingPos).sqrMagnitude;
            var range = shiftAmount - 0.1; // Slight 0.1 buffer so it ignores our initial proximity to the start point
            if (proximity < range * range) // Square the range
            {
                Debug.Log("An empty location could not be found");
                return new Vector3(200, 200, 200);
            }
        }
    }
}
This, again, works perfectly in the editor. This is the code I'm running on my device (without CheckSphere):
public void spawnAllTiers(int maxNum)
{
    if (GameController.trackingReady && !hasTriedSpawn)
    {
        hasTriedSpawn = true;
        int numTimesTried = 0;
        BoxCollider bounds = GetGrid();
        if (bounds != null) {
            while (tiersSpawned.Length < maxNum && numTimesTried < 70) { // still has space
                Tier t = getNextTier();
                Vector3 newPos = new Vector3(UnityEngine.Random.Range(GetGrid().bounds.min.x, GetGrid().bounds.max.x), GetGrid().bounds.min.y, UnityEngine.Random.Range(GetGrid().bounds.min.z, GetGrid().bounds.max.z));
                //Vector3 newPos = CheckForEmptySpace (bounds.bounds);
                if (GetGrid().bounds.Contains(newPos)) // meaning not 200 so it is there
                {
                    spawnTier(newPos, t);
                }
                numTimesTried++;
                platformsSpawned = GameObject.FindObjectsOfType<Platform>();
                tiersSpawned = GameObject.FindObjectsOfType<Tier>();
            }
            if (tiersSpawned.Length < maxNum)
            {
                print("DIDNT REACH - maxed at " + tiersSpawned.Length);
            }
        }
    }
    // maybe check for num times trying, or if size of all spawned tiers is greater than area approx
}
// SPAWN NEXT TIER
public void spawnTier(Vector3 position, Tier t) // if run out of plats THEN we spawn up like tree house
{
    print("SUCCESS - spawn " + position + "SPHERE: " + Physics.CheckSphere(position, tierDist, 1 << 0));
    // Vector3 pos = currentTier.transform.position; // LATER UNCOMMENT - would be the current tier spawning from
    // TO TEST comment to this line ---------------------------------------------------------------------------
#if UNITY_EDITOR
    Instantiate(t, position, Quaternion.identity);
    anchorManager.AddAnchor(t.gameObject);
#else
    //------------------------------------------------------------------------------------------
    Instantiate(t, position, Quaternion.identity);
    anchorManager.AddAnchor(t.gameObject);
#endif
}
This doesn't crash the device, but it spawns ALL the cubes in the same place. I can't understand why. If I do this instead, CHECKING for overlap:
public void spawnAllTiers(int maxNum)
{
    if (GameController.trackingReady && !hasTriedSpawn)
    {
        hasTriedSpawn = true;
        int numTimesTried = 0;
        BoxCollider bounds = GetGrid();
        if (bounds != null) {
            while (tiersSpawned.Length < maxNum && numTimesTried < 70) { // still has space
                Tier t = getNextTier();
                //Vector3 newPos = new Vector3 (UnityEngine.Random.Range(GetGrid ().bounds.min.x, GetGrid ().bounds.max.x), GetGrid ().bounds.min.y, UnityEngine.Random.Range(GetGrid ().bounds.min.z, GetGrid ().bounds.max.z));
                Vector3 newPos = CheckForEmptySpace(GetGrid().bounds);
                if (GetGrid().bounds.Contains(newPos) && t) // meaning not 200 so it is there
                {
                    spawnTier(newPos, t);
                }
                numTimesTried++;
                platformsSpawned = GameObject.FindObjectsOfType<Platform>();
                tiersSpawned = GameObject.FindObjectsOfType<Tier>();
            }
            if (tiersSpawned.Length < maxNum)
            {
                print("DIDNT REACH - maxed at " + tiersSpawned.Length);
            }
        }
    }
    // maybe check for num times trying, or if size of all spawned tiers is greater than area approx
}
This works great in the editor again, but completely freezes the device. The logs are not helpful; I just get this every time, even though the objects aren't spawning in those positions:
SUCCESS - spawn (0.2, -0.9, -0.9)SPHERE: False
SUCCESS - spawn (-0.4, -0.9, 0.2)SPHERE: False
SUCCESS - spawn (0.8, -0.9, 0.2)SPHERE: False
SUCCESS - spawn (-0.4, -0.9, -0.8)SPHERE: False
SUCCESS - spawn (0.9, -0.9, -0.8)SPHERE: False
What the hell is happening - why would it freeze only on device like this?
Summary:
It sounds like you need a short gap between each spawn.
(BTW, a useful trick is to learn how to wait until the next frame - check out the many articles on it.)
All-time classic answer for this:
https://stackoverflow.com/a/35228592/294884
Get into "chunking" for random algorithms, and observe the handy line of code at "How to get sets of unique random numbers." A rough sketch of that idea follows below.
Enjoy
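A minimal, engine-agnostic sketch of that unique-positions idea (written in plain Java here; the grid dimensions, helper name, and the mapping from cells to world positions are assumptions for illustration): split the spawn area into a grid of cells, shuffle the cell indices once, and take the first maxNum of them, so positions can never overlap and no retry loop is needed.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SpawnCellPicker {
    // Pick `count` unique spawn cells from a rows x cols grid.
    // Shuffling once and taking the first `count` entries avoids the
    // "keep trying random points until one is free" loop that can spin forever on device.
    static List<int[]> pickSpawnCells(int rows, int cols, int count) {
        List<int[]> cells = new ArrayList<>();
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                cells.add(new int[] { r, c });
            }
        }
        Collections.shuffle(cells);  // randomise the order once
        return new ArrayList<>(cells.subList(0, Math.min(count, cells.size())));
    }
}

Each chosen cell would then be mapped to a world position (cell index times the cell spacing, offset by the bounds origin), which also gives a natural minimum distance between the spawned cubes.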
Unrelated issue -
Could it be you need to basically wait a small moment between spawning each cube?
For a timed delay in Unity, the simplest option is Invoke; your code pattern would look something like this.
Currently you have something like:
for 1 to 100 .. spawn a cube
To put a pause between each spawn, call Invoke("_spawn", 1f) in Start, and then:
void _spawn()
{
    if (count > 70) return;   // stop once enough cubes have been spawned
    SpawnOneCube();           // whatever your "spawn a cube" call is
    count++;
    Invoke("_spawn", 1f);     // schedule the next spawn in one second
}
Similar example code - https://stackoverflow.com/a/36736807/294884
Even simpler - https://stackoverflow.com/a/35807346/294884
Enjoy

Detect physical movement of iPhone/Apple Watch

I'm trying to detect the movement (to the right or left) performed by users.
We assume that the user starts with his arm extended in front of him and then moves his arm to the right or to the left (about 90 degrees off center).
I've integrated CMMotionManager and want to understand how to detect the direction via the startAccelerometerUpdatesToQueue and startDeviceMotionUpdatesToQueue methods.
Can anyone suggest how to implement this logic on an iPhone and then on an Apple Watch?
Apple provides watchOS 3 SwingWatch sample code demonstrating how to use CMMotionManager() and startDeviceMotionUpdates(to:) to count swings in a racquet sport.
Their code demonstrates how to detect the direction of a one-second interval of motion, although you may have to tweak the thresholds to account for the characteristics of the motion you want to track.
func processDeviceMotion(_ deviceMotion: CMDeviceMotion) {
    let gravity = deviceMotion.gravity
    let rotationRate = deviceMotion.rotationRate

    // r⃗ · ĝ : project the rotation rate onto the gravity vector
    let rateAlongGravity = (rotationRate.x * gravity.x) +
                           (rotationRate.y * gravity.y) +
                           (rotationRate.z * gravity.z)

    rateAlongGravityBuffer.addSample(rateAlongGravity)
    if !rateAlongGravityBuffer.isFull() {
        return
    }

    let accumulatedYawRot = rateAlongGravityBuffer.sum() * sampleInterval
    let peakRate = accumulatedYawRot > 0 ?
        rateAlongGravityBuffer.max() : rateAlongGravityBuffer.min()

    if (accumulatedYawRot < -yawThreshold && peakRate < -rateThreshold) {
        // Counter clockwise swing.
        if (wristLocationIsLeft) {
            incrementBackhandCountAndUpdateDelegate()
        } else {
            incrementForehandCountAndUpdateDelegate()
        }
    } else if (accumulatedYawRot > yawThreshold && peakRate > rateThreshold) {
        // Clockwise swing.
        if (wristLocationIsLeft) {
            incrementForehandCountAndUpdateDelegate()
        } else {
            incrementBackhandCountAndUpdateDelegate()
        }
    }

    // Reset after letting the rate settle to catch the return swing.
    if (recentDetection && abs(rateAlongGravityBuffer.recentMean()) < resetThreshold) {
        recentDetection = false
        rateAlongGravityBuffer.reset()
    }
}

Google Earth API Moving a polygon

I just started coding with Google Earth using the GEPlugin control for .NET and still have a lot to learn.
What has me puzzled is trying to drag a polygon.
The method below is called whenever the mousemove event fires and should move each point of the polygon while retaining the original shape. The lat/long of each point is changed, but the polygon does not move on the map.
Will moving a point in a polygon cause it to redraw? Do I need to call a method to force a redraw, or perhaps do something else entirely?
Thanks!
private void DoMouseMove(IKmlMouseEvent mouseEvent)
{
    if (isDragging)
    {
        mouseEvent.preventDefault();
        var placemark = mouseEvent.getTarget() as IKmlPlacemark;
        if (placemark == null)
        {
            return;
        }

        IKmlPolygon polygon = placemark.getGeometry() as IKmlPolygon;
        if (polygon != null)
        {
            float latOffset = startLatLong.Latitude - mouseEvent.getLatitude();
            float longOffset = startLatLong.Longitude - mouseEvent.getLongitude();
            KmlLinearRingCoClass outer = polygon.getOuterBoundary();
            KmlCoordArrayCoClass coordsArray = outer.getCoordinates();

            for (int i = 0; i < coordsArray.getLength(); i++)
            {
                KmlCoordCoClass currentPoint = coordsArray.get(i);
                currentPoint.setLatLngAlt(currentPoint.getLatitude() + latOffset,
                    currentPoint.getLongitude() + longOffset, 0);
            }
        }
    }
}
Consider voting for these issues to be resolved
http://code.google.com/p/earth-api-utility-library/issues/detail?id=33
http://code.google.com/p/earth-api-samples/issues/detail?id=167
You may find some hints at the following link:
http://earth-api-utility-library.googlecode.com/svn/trunk/extensions/examples/ruler.html
UPDATE:
I've released the extension library: https://bitbucket.org/mutopia/earth
See https://bitbucket.org/mutopia/earth/src/master/sample/index.html to run it.
See the drag() method in the sample code class, which calls setDragMode() and addDragEvent() to enable dragging of the KmlPolygon.
I successfully implemented this using takeOverCamera in the earth-api-utility-library and three events:
setDragMode: function (mode) {
    // summary:
    //     Sets dragging mode on and off
    if (mode == this.dragMode) {
        Log.info('Drag mode is already', mode);
    } else {
        this.dragMode = mode;
        Log.info('Drag mode set', mode);
        if (mode) {
            this.addEvent(this.ge.getGlobe(), 'mousemove', this.dragMouseMoveCallback);
            this.addEvent(this.ge.getGlobe(), 'mouseup', this.dragMouseUpCallback);
            this.addEvent(this.ge.getView(), 'viewchange', this.dragViewChange, false);
        } else {
            this.removeEvent(this.ge.getGlobe(), 'mousemove', this.dragMouseMoveCallback);
            this.removeEvent(this.ge.getGlobe(), 'mouseup', this.dragMouseUpCallback);
            this.removeEvent(this.ge.getView(), 'viewchange', this.dragViewChange, false);
        }
    }
},
This is in a utility library within a much larger project. dragMode is a boolean flag; setting it adds or removes the event listeners. These three events control what happens when you drag. addEvent and removeEvent are my own wrapper functions:
addEvent: function (targetObject, eventID, listenerCallback, capture) {
    // summary:
    //     Convenience method for google.earth.addEventListener
    capture = setDefault(capture, true);
    google.earth.addEventListener(targetObject, eventID, listenerCallback, capture);
},

removeEvent: function (targetObject, eventID, listenerCallback, capture) {
    // summary:
    //     Convenience method for google.earth.removeEventListener
    capture = setDefault(capture, true);
    google.earth.removeEventListener(targetObject, eventID, listenerCallback, capture);
},
Ignoring the minor details, all the important stuff is in the callbacks to those events. The mousedown event locks the camera and sets the polygon I'm dragging as the dragObject (it's just a variable I'm using). It saves the original lat long coordinates.
this.dragMouseDownCallback = lang.hitch(this, function (event) {
    var obj = event.getTarget();
    this.lockCamera(true);
    this.setSelected(obj);
    this.dragObject = obj;
    this.dragLatOrigin = this.dragLatLast = event.getLatitude();
    this.dragLngOrigin = this.dragLngLast = event.getLongitude();
});
The mousemove callback updates to the latest lat long coordinates:
this.dragMouseMoveCallback = lang.hitch(this, function (event) {
    if (this.dragObject) {
        var lat = event.getLatitude();
        var lng = event.getLongitude();
        var latDiff = lat - this.dragLatLast;
        var lngDiff = lng - this.dragLngLast;
        if (Math.abs(latDiff) > this.dragSensitivity || Math.abs(lngDiff) > this.dragSensitivity) {
            this.addPolyCoords(this.dragObject, [latDiff, lngDiff]);
            this.dragLatLast = lat;
            this.dragLngLast = lng;
        }
    }
});
Here I'm using some sensitivity values to prevent updating too often. Finally, addPolyCoords is also my own function, which adds lat/long values to the existing coordinates of the polygon, effectively moving it across the globe. I do this with the built-in setLatitude() and setLongitude() functions for each coordinate. You can get the coordinates like so, where polygon is a KmlPolygon object:
polygon.getGeometry().getOuterBoundary().getCoordinates()
And of course, the mouseup callback turns off the drag mode so that moving the mouse doesn't continue to drag the polygon:
this.dragMouseUpCallback = lang.hitch(this, function (event) {
    if (this.dragObject) {
        Log.info('Stop drag', this.dragObject.getType());
        setTimeout(lang.hitch(this, function () {
            this.lockCamera(false);
            this.setSelected(null);
        }), 100);
        this._dragEvent(event);
        this.dragObject = this.dragLatOrigin = this.dragLngOrigin = this.dragLatLast = this.dragLngLast = null;
    }
});
And finally, _dragEvent is called to ensure that the final coordinates are the actual coordinates the mouse event finished with (and not the latest mousemove call):
_dragEvent: function (event) {
    // summary:
    //     Helper function for moving drag object
    var latDiff = event.getLatitude() - this.dragLatLast;
    var lngDiff = event.getLongitude() - this.dragLngLast;
    if (!(latDiff == 0 && lngDiff == 0)) {
        this.addPolyCoords(this.dragObject, [latDiff, lngDiff]);
        Log.info('Moved ' + latDiff + ', ' + lngDiff);
    }
},
The mousemove callback isn't too important and can actually be ignored - the only reason I use it is to show the polygon moving as the user moves their mouse. Removing it will result in the object being moved when they lift their mouse up.
Hopefully this incredibly long answer gives you some insights into how to implement dragging in the Google Earth API. And I also plan to release my library in the future when I've ironed out the kinks :)