Wrong axis values of Gamepad in MATLAB's Psychtoolbox

I'm using the Gamepad interface in Psychtoolbox together with a Logitech Attack3 joystick and the following code:
while ~Gamepad('GetButton', 4, 1)      % loop until button 1 of gamepad 4 is pressed
    force = Gamepad('GetAxis', 4, 2);  % read axis 2 (vertical) of gamepad 4
    force = force / 32768;             % normalize raw value to roughly [-1, 1]
    zoomFactor = 0.1 * force;
    zoom(1 + zoomFactor);              % zoom in/out of the current figure
end
It's supposed to get the vertical axis value from the joystick and use it to calculate the zoom factor (toy problem: zooming in and out of a picture).
When querying the axis value, I get strange results. If I move the joystick, the axis value changes as expected. However, when I release the joystick back to its resting position, the axis value should return to 0, but it just stays at the last reported value. Basically, the joystick only registers movement away from the center, not the returning motion back to the resting position.

Related

Rotating rotation value by different normalized vector directions

I have written a script in Unity which takes a SkinnedMeshRenderer and AnimationClip and rotates the vertices in each by a specified number of degrees. It looks mostly correct, except that the rotations come out wrong. Here is an example bone rotation (in euler angles) in the skeleton, along with the correct values that would be needed for the animation to look right.
With no rotation: (0, 0, -10)
Rotated 90 degrees: (-10, 0, 0)
Rotated 180 degrees: (0, 0, 10)
I have been trying to find a way to rotate these bones to make this conversion make sense with the data I have here, but have come up short. I know I want to rotate these values around the Y axis, but don't actually want the Y value in the euler angle to change. I am aware I could just reorient the root bone around the Y axis and the problem would be solved, but I want to have no rotation in the Y axis. I am "fixing" some older animations that have unnecessary rotation values in them.
// Convert the stored quaternion keys to euler angles
var localBoneRotation = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value).eulerAngles;
// Rotate the forward vector about Y by the requested angle
var reorientedForward = Quaternion.AngleAxis(rotation, Vector3.up) * Vector3.forward;
// Scale each euler component by the reoriented forward vector
localBoneRotation.x *= reorientedForward.x;
localBoneRotation.y *= reorientedForward.y;
localBoneRotation.z *= reorientedForward.z;
var finalRotation = Quaternion.Euler(localBoneRotation);
// Write the result back into the animation keys
keysX[j].value = finalRotation.x;
keysY[j].value = finalRotation.y;
keysZ[j].value = finalRotation.z;
keysW[j].value = finalRotation.w;
I have also tried using a matrix and Vector3, but most of the time I end up with values in the Y component. Perhaps I am going about this incorrectly. I just need to be able to specify an angle of rotation and then have the input data match the final euler angles for each of these data points.
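One approach that matches the sample values above (a sketch, not from the original thread; it assumes the goal is to rotate each bone's rotation axis about world Y without introducing any yaw of its own) is to conjugate the quaternion by the yaw rotation instead of manipulating euler components:

// Conjugating R by a yaw rotation Y (R' = Y * R * Y^-1) rotates R's axis
// about world Y while leaving the rotation's own angle unchanged, so no
// Y component is introduced. `rotation` is the angle in degrees.
var yaw = Quaternion.AngleAxis(rotation, Vector3.up);
var localBoneRotation = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value);
var finalRotation = yaw * localBoneRotation * Quaternion.Inverse(yaw);
keysX[j].value = finalRotation.x;
keysY[j].value = finalRotation.y;
keysZ[j].value = finalRotation.z;
keysW[j].value = finalRotation.w;

With the sample data this checks out: conjugating (0, 0, -10) by a 90-degree yaw gives (-10, 0, 0), and by a 180-degree yaw gives (0, 0, 10).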

Moving camera to proper position in Zoom function in Unity

Hi, I have a question that I'm hoping someone can help me work through. I've asked elsewhere to no avail, but it seems like a standard problem, so I'm not sure why I haven't been getting answers.
It's basically a matter of setting up a zoom function that mirrors Google Maps zoom, i.e. the camera zooms in/out on where your mouse is. I know this probably gets asked a lot, but I think Unity's new Input System changed things up a bit since the 4-6 year old questions that I've found in my own research.
In any case, I've set up a parent GameObject that holds all the 2D sprites in my scene, plus an orthographic camera. I can set the orthographic size through code to change the zoom, but it's moving the camera to the proper place that I'm having trouble with.
This was my 1st attempt:
public void Zoom(float direction, Vector2 mousePosition) {
    // zoom calcs
    float rate = 1 + direction * Time.deltaTime;
    float targetOrtho = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize / rate, 0.1f);
    // move calcs
    mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
    Vector2 deltaPosition = previousPosition - mousePosition;
    // move and zoom
    transform.position += new Vector3(deltaPosition.x, deltaPosition.y, 0);
    // zoomLevels is a generic struct that holds the max/min values.
    SetZoomLevel(Mathf.Clamp(targetOrtho, zoomLevels.min, zoomLevels.max));
    previousPosition = mousePosition;
}
This function gets called through my input controller, activated through Unity's Input System events. When the mouse wheel scrolls, the Zoom function is given a normalized value as direction (1 or -1) and the current mousePosition. When it's finished its calculation, the mousePosition is stored in previousPosition.
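For reference, the wiring might look something like this (a sketch only; zoomAction and cameraController are assumed names, not code from the question):

// requires: using UnityEngine.InputSystem;
// Hypothetical Input System hookup: forward scroll events to Zoom().
zoomAction.performed += ctx =>
{
    float direction = Mathf.Sign(ctx.ReadValue<float>());
    cameraController.Zoom(direction, Mouse.current.position.ReadValue());
};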
The code actually works -- except it is extremely jittery. This, of course, happens because there is no Time.deltaTime applied to the camera movement, nor is this in LateUpdate, both of which help to smooth the movements. Except, in the former case, multiplying Time.deltaTime into new Vector3(deltaPosition.x, deltaPosition.y, 0) seems to cause the zoom to occur at the camera's centre rather than the mouse position. When I put the zoom into LateUpdate, it creates a cool but unwanted vibration effect when the camera moves.
So, after doing some thinking and reading, I thought it might be best to calculate the difference between the mouse position and the camera's center point, then multiply it by a scale factor, which is the camera's orthographic size * 2 (maybe...??). Hence my updated code here:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom
    float rate = 1 + direction * Time.unscaledDeltaTime * zoomSpeed;
    float orthoTarget = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize * rate, maxZoomDelta);
    SetZoomLevel(Mathf.Clamp(orthoTarget, zoomLevels.min, zoomLevels.max));
    // movement
    if (mainCam.orthographicSize < zoomLevels.max && mainCam.orthographicSize > zoomLevels.min)
    {
        mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
        Vector2 offset = (mousePosition - new Vector2(transform.position.x, transform.position.y)) / (mainCam.orthographicSize * 2);
        // panPositions are the same generic struct holding min/max values
        offset.x = Mathf.Clamp(offset.x, panPositions.min.x, panPositions.max.x);
        offset.y = Mathf.Clamp(offset.y, panPositions.min.y, panPositions.max.y);
        transform.position += new Vector3(offset.x, offset.y, 0) * Time.deltaTime;
    }
}
This seems a little closer to what I'm trying to achieve, but the camera still zooms in near its center point and zooms out on some other point... I'm a bit lost as to what I'm missing here.
Is anyone able to help guide my thinking about what I need to do to create a smooth zoom in/out on the point where the mouse currently is? Much appreciated & thanks for reading through this.
OK, I figured it out, for anyone who ever comes across the same problem. It is a standard problem that is easily solved once you know the math.
Basically, it's a matter of scaling and translating the camera. You can do one or the other first - it does not matter; the outcome is the same. Imagine your screen looks like this:
The green box is your camera viewport, the arrow is your cursor. When you zoom in, the orthographic size gets smaller and shrinks around its anchor point (usually P1(0,0)). This is the scaling aspect of the problem and the following image explains it well:
So, now we want to move the camera position to the new position:
So how do we do this? It's just a matter of getting the distance from the old camera position (P1(0, 0)) to the new camera position (P2(x, y)). Basically, we only want this:
My solution to find the length of the arrow in the picture above was basically to take the length from the cursor position to the old camera position (oldLength) and subtract from it the length from the cursor position to the new camera position (newLength).
But how do you find newLength? Well, since we know the length will be scaled according to the size of the camera viewport, newLength will be either oldLength / scaleFactor or oldLength * scaleFactor, depending on whether you want to zoom in or out, respectively. The scale factor can be whatever you want (zoom in/out by 2, 4, 1.4... whatever).
From there, it's just a matter of subtracting newLength from oldLength and adding that difference to the current camera position. The pseudocode is below:
(Note that I renamed 'oldLength' to 'length' and 'newLength' to 'scaledLength')
// make sure you're working in world space
Vector3 mouseWorld = camera.ScreenToWorldPoint(mousePosition);
Vector3 length = mouseWorld - currentCameraPosition;
Vector3 scaledLength = length / scaleFactor;  // to zoom in; otherwise length * scaleFactor
Vector3 deltaLength = length - scaledLength;
// change position: move the camera toward the cursor
cameraPosition = currentCameraPosition + deltaLength;
// do zoom
camera.orthographicSize /= scaleFactor;  // to zoom in; otherwise orthographicSize *= scaleFactor
Works perfectly for me. Thanks to those who helped me in a Discord coding community!
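Put together, a self-contained version might look like the sketch below (illustrative only, not the thread's verbatim code; mainCam, scaleFactor, and the size limits are assumed names, and the clamp handling is one possible choice):

using UnityEngine;

// Minimal sketch of the zoom-to-cursor math described above.
public class ZoomToCursor : MonoBehaviour
{
    [SerializeField] private Camera mainCam;
    [SerializeField] private float scaleFactor = 1.1f;  // per scroll step
    [SerializeField] private float minSize = 1f;
    [SerializeField] private float maxSize = 20f;

    // direction: +1 to zoom in, -1 to zoom out; mousePosition in screen space
    public void Zoom(float direction, Vector2 mousePosition)
    {
        float s = direction > 0 ? scaleFactor : 1f / scaleFactor;
        float oldSize = mainCam.orthographicSize;
        float newSize = Mathf.Clamp(oldSize / s, minSize, maxSize);
        if (Mathf.Approximately(newSize, oldSize))
            return;  // already at a zoom limit

        // Recompute the effective scale in case the clamp kicked in
        float effective = oldSize / newSize;
        Vector3 mouseWorld = mainCam.ScreenToWorldPoint(mousePosition);
        Vector3 length = mouseWorld - mainCam.transform.position;
        Vector3 delta = length - length / effective;
        delta.z = 0;  // keep the camera's depth fixed

        mainCam.orthographicSize = newSize;
        mainCam.transform.position += delta;  // move toward the cursor
    }
}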

Unity make Joystick always result in speed 1

So I have a joystick object, which gives me values of -1 to 1 for each axis.
float horizontalMove = joystick.Horizontal * speed;
float verticalMove = joystick.Vertical * speed;
rb.velocity = new Vector3(horizontalMove, verticalMove, 0);
Now, what I want is that no matter how far you pull the joystick in each direction, it always results in speed 1, just like my current code behaves when the joystick is pulled all the way to the edge. I also made it so that at most one direction can be set to 0.
You can use the .normalized property of the vector, which ensures it either has length 1, or is equal to Vector3.zero.
rb.velocity = new Vector3(horizontalMove, verticalMove, 0).normalized;
Unlike using Mathf.Sign on each axis, the angle of the vector is preserved, so the player will still be able to move in any orientation, not just along axes and diagonals.
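Note that horizontalMove and verticalMove already include the speed multiplier, which normalizing throws away. If a speed other than 1 is ever wanted, one option (a sketch, not part of the original answer) is to normalize the raw input first and scale afterwards:

// Normalize the direction first, then apply the speed multiplier
Vector3 input = new Vector3(joystick.Horizontal, joystick.Vertical, 0);
rb.velocity = input.normalized * speed;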

Unity - get position of UI Slider Handle

I am working on a Unity 4.7 project and need to create shooting at a target. I simulated the gunpoint using horizontal and vertical sliders moving over time. When I click the button I need to memorize the x and y coordinates of the handles and instantiate a bullet hole at that point, but I don't know how to get the coordinates of the sliders' handles. It is possible to get the values, but it seems that they don't correspond to coordinates. If a horizontal slider changes its value by 1, would its handle change its x position by 1?
Use this then:
public static Vector3 GetScreenPositionFromWorldPosition(Vector3 targetPosition)
{
    // Convert a world-space position to screen-space (pixel) coordinates
    Vector3 screenPos = Camera.main.WorldToScreenPoint(targetPosition);
    return screenPos;
}
Keep references to the Handles of the horizontal and vertical sliders, and use them like:
Vector3 pos = GetScreenPositionFromWorldPosition(horizontalHandle.transform.position);
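To turn the two slider values into a single point, one option (a sketch; horizontalHandle and verticalHandle are the assumed handle references) is to take the x from one handle and the y from the other:

// Combine the two handle positions into one screen-space hit point
Vector3 xPos = GetScreenPositionFromWorldPosition(horizontalHandle.transform.position);
Vector3 yPos = GetScreenPositionFromWorldPosition(verticalHandle.transform.position);
Vector2 hitPoint = new Vector2(xPos.x, yPos.y);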

How do I use the gravity vector to correctly transform scene for augmented reality?

I'm trying to figure out how to get an OpenGL-specified object to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer, and the heading from the compass).
The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180deg as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project.
Can anyone provide sample code that shows how to adjust correctly for the device orientation (ie. gravity vector), or to fix the GLGravity example so that it doesn't include spurious heading changes?
//Clear matrix to be used to rotate from the current referential to one based on the gravity vector
bzero(matrix, sizeof(matrix));
matrix[3][3] = 1.0;
//Setup first matrix column as gravity vector
matrix[0][0] = accel[0] / length;
matrix[0][1] = accel[1] / length;
matrix[0][2] = accel[2] / length;
//Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz} defined by the equation "Gx * x + Gy * y + Gz * z = 0" in which we arbitrarily set x=0 and y=1
matrix[1][0] = 0.0;
matrix[1][1] = 1.0;
matrix[1][2] = -accel[1] / accel[2]; //note: assumes accel[2] is non-zero
length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
matrix[1][0] /= length;
matrix[1][1] /= length;
matrix[1][2] /= length;
//Setup third matrix column as the cross product of the first two
matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];
//Finally load matrix
glMultMatrixf((GLfloat*)matrix);
Here's a clarification showing how to get the elevation and tilt that are needed for gluLookAt solution as shown in my last answer:
// elevation comes from z component (0 = facing horizon)
elevationRadians = asin(gravityVector.z / Vector3DMagnitude(gravityVector));
// tilt is how far screen is from vertical, looking along z axis
tiltRadians = atan2(-gravityVector.y, -gravityVector.x) - M_PI_2;
Following up on Chris's suggestion: I'm not sure if I've got this all correct due to differing conventions of row/column order and heading being cw or ccw. However, the following code is what I came up with:
Vector3D forward = Vector3DMake(0.0f, 0.0f, -1.0f);
// Multiply it by current rotation matrix to get teapot direction
Vector3D direction;
direction.x = matrix[0][0] * forward.x + matrix[1][0] * forward.y + matrix[2][0] * forward.z;
direction.y = matrix[0][1] * forward.x + matrix[1][1] * forward.y + matrix[2][1] * forward.z;
direction.z = matrix[0][2] * forward.x + matrix[1][2] * forward.y + matrix[2][2] * forward.z;
heading = atan2(direction.z, direction.x) * 180 / M_PI;
// Use this heading to adjust the teapot direction back to keep it fixed
// Rotate about vertical axis (Y), as it is a heading adjustment
glRotatef(heading, 0.0, 1.0, 0.0);
When I run this code, the teapot behaviour has apparently "improved" eg. heading no longer flips 180deg when device screen (in portrait view) is pitched forward/back through upright. However, it still makes major jumps in heading when device (in landscape view) is pitched forward/back. So something's not right. It suggests that the above calculation of the actual heading is incorrect...
I finally found a solution that works. :-)
I dropped the rotation matrix approach, and instead adopted gluLookAt. To make this work you need to know the device "elevation" (viewing angle relative to the horizon, i.e. 0 on the horizon, +90 overhead), and the camera's "tilt" (how far the device is from vertical in its x/y plane, i.e. 0 when vertical/portrait, +/-90 when horizontal/landscape), both of which are obtained from the device gravity vector components.
Vector3D eye, scene, up;
CGFloat distanceFromScene = 0.8;
// Adjust eye position for elevation (y/z)
eye.x = 0;
eye.y = distanceFromScene * -sin(elevationRadians); // eye position goes down as elevation angle goes up
eye.z = distanceFromScene * cos(elevationRadians); // z position is maximum when elevation is zero
// Lookat point is origin
scene = Vector3DMake(0, 0, 0); // Scene is at origin
// Camera tilt - involves x/y plane only - arbitrary vector length
up.x = sin(tiltRadians);
up.y = cos(tiltRadians);
up.z = 0;
Then you just apply the gluLookAt transformation, and also rotate the scene according to the device heading.
// Adjust view for device orientation
gluLookAt(eye.x, eye.y, eye.z, scene.x, scene.y, scene.z, up.x, up.y, up.z);
// Apply device heading to scene
glRotatef(currentHeadingDegrees, 0.0, 1.0, 0.0);
Try rotating the object depending upon the iPhone acceleration values.
float angle = -atan2(accelX, accelY) * 180.0f / M_PI; // convert to degrees; glRotatef expects degrees
glPushMatrix();
glTranslatef(centerPoint.x, centerPoint.y, 0);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerPoint.x, -centerPoint.y, 0);
glPopMatrix();
Where centerPoint is the middle point of the object.
oo, nice.
GLGravity seems to get everything right except for the yaw. Here's what I would try. Do everything GLGravity does, and then this:
Project a vector in the direction you want the teapot to face, using the compass or whatever you choose. Then multiply a "forward" vector by the teapot's current rotation matrix, which will give you the direction the teapot is facing. Flatten the two vectors onto the horizontal plane and take the angle between them.
This angle is your corrective yaw. Then just glRotatef by it.
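For concreteness, the same idea expressed in Unity-style C# (illustrative only, since the thread's code is OpenGL/C; compassForward and currentForward stand in for the two direction vectors described above):

// Flatten both directions onto the horizontal plane, then take the signed
// angle between them about the up axis; that angle is the corrective yaw.
Vector3 desired = Vector3.ProjectOnPlane(compassForward, Vector3.up);
Vector3 actual = Vector3.ProjectOnPlane(currentForward, Vector3.up);
float correctiveYaw = Vector3.SignedAngle(actual, desired, Vector3.up);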
Whether or not the 3GS's compass is reliable and robust enough for this to work is another thing. Normal compasses don't work when the north vector is perpendicular to their face. But I just tried the Maps app on my workmate's 3GS and it seems to cope, so maybe they have got a mechanical solution in there. Knowing what the device is actually doing will help interpret the results it gives.
Make sure to test your app at the north and south poles once you're done. :-)
A much more stable gravity-based reference can now be obtained using CMMotionManager.
When starting motion updates with startDeviceMotionUpdates(using:), you can specify a reference frame.
This fuses the accelerometer, gyroscope, and optionally (depending on the chosen reference frame) magnetometer data. Accelerometer data is pretty noisy and bouncy (any sideways motion of the device temporarily tilts the apparent gravity vector by the device's acceleration) and alone doesn't make a good reference.
I've been low-pass filtering the accelerometer data, which helps a bit but makes the system slow.
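Such a low-pass filter is typically a single blend per sample; a minimal sketch (written in C# for consistency with the Unity snippets above; alpha and the variable names are assumptions):

// Exponential low-pass filter: smaller alpha gives a smoother but laggier
// gravity estimate, which is exactly the responsiveness trade-off noted above.
const float alpha = 0.1f;
filteredGravity = alpha * rawAccel + (1f - alpha) * filteredGravity;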