How can I get my sprite to move up and down the screen in FANG?

Right now it's just bouncing back and forth across the screen:
def moveTriangleTwo {
  triTwo.translateX(triTwoDX)
  if (triTwo.getX < 0.0) {
    // It hit the left wall - go the other direction
    triTwo.setX(0.0)      // Place it on the left wall
    triTwoDX = -triTwoDX  // Move in the opposite direction
  } else if (triTwo.getX > 1.0) {
    // It hit the right wall - go the other direction
    triTwo.setX(1.0)      // Place it on the right wall
    triTwoDX = -triTwoDX  // Move in the opposite direction
  }
}

Perhaps you should read about vectors and coordinate systems.
The short answer is: on a computer screen, the Y coordinate is the vertical axis, starting at 0 at the top and increasing downward. The X coordinate is the horizontal axis, starting at 0 on the left and increasing to the right.
For horizontal movement you change X; for vertical movement you change Y; for diagonal movement you change both at once. In your code, that means adding a triTwoDY speed alongside triTwoDX and mirroring the same bounce logic with the Y equivalents (translateY, getY, setY).

Positioning UI or Camera relative to each other (and the screen border)

I'm struggling with this sort of screen disposition:
I want to position my Camera so that the world is positioned like in the image, with the origin at the bottom left. It's easy to set the orthographicSize of the camera, as I know how many units I want vertically. It is also easy to calculate the Y position of the camera, as I just want it centered vertically. But I cannot find how to compute the X position of the camera that puts the origin of the world in this position, no matter what the aspect ratio of the screen is.
This brings up two questions:
How can I calculate the X position of the camera so that the origin of the world is always at the same distance from the screen's left and bottom borders?
Instead of positioning the camera relative to the UI, should I use RenderMode.WorldSpace for the UI canvas? And if so, how could I manage responsiveness?
I don't understand the second question, but regarding positioning the Camera on the X axis so that the lower-left corner is always at world 0, you could do the following:
// Sample the screen's lower-left corner (the Z component is the distance
// from the camera).
var lowerLeftScreen = new Vector3(0, 0, 10);
var pos = transform.position;

// Where is that corner in world space right now?
var lowerLeftScreenPoint = Camera.main.ScreenToWorldPoint(lowerLeftScreen).x;

// Shift the camera so the corner lands exactly on world X = 0.
if (lowerLeftScreenPoint > 0)
{
    pos.x -= lowerLeftScreenPoint;
}
else
{
    pos.x += Mathf.Abs(lowerLeftScreenPoint);
}
transform.position = pos;

Debug.Log(Camera.main.ScreenToWorldPoint(lowerLeftScreen));
Not the nicest code, but it gets the job done.
Also, the Z component of the vector does not really matter for our orthographic camera.
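For an orthographic camera you can also compute the offset directly instead of correcting it after the fact: orthographicSize is the half-height of the view in world units, and the half-width is that times the aspect ratio. Here is a minimal sketch under that assumption (the AlignOrthoCamera name and the margin field are hypothetical, standing in for whatever distance you want between the origin and the borders):
using UnityEngine;

public class AlignOrthoCamera : MonoBehaviour
{
    // Hypothetical field: world-space distance to keep between the world
    // origin and the screen's left and bottom borders.
    public float margin = 0f;

    void Start()
    {
        var cam = Camera.main;

        // orthographicSize is the half-height of an orthographic camera's
        // view in world units; the half-width follows from the aspect ratio.
        float halfHeight = cam.orthographicSize;
        float halfWidth = halfHeight * cam.aspect;

        // Placing the camera at (halfWidth, halfHeight) puts world (0, 0)
        // exactly in the lower-left corner; subtracting the margin keeps
        // the origin a fixed distance from both borders instead.
        var pos = cam.transform.position;
        pos.x = halfWidth - margin;
        pos.y = halfHeight - margin;
        cam.transform.position = pos;
    }
}
Because halfWidth is recomputed from the current aspect ratio, the origin stays at the same distance from the borders whatever the screen's aspect ratio is, which answers the first question directly.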

Leap Motion - Angle of proximal bone to metacarpal (side to side movement)

I am trying to get the angle between bones such as the metacarpal bone and the proximal bone, i.e. the angle of moving the finger side to side (for example, the angle when your index finger is as close to your thumb as you can move it, versus the angle when it is as close to your middle finger as you can move it).
I have tried using Vector3.Angle with the directions of the bones, but that doesn't work, as it includes the bending of the finger: a fist gives a completely different value to an open hand.
What I really want is a way I can "normalize" (I know normalizing isn't the correct term, but it's the best I could think of) the direction of the bones, so that even if the finger is bent, the direction vector would still point forwards rather than down, while following the side-to-side direction of the finger.
I have added a diagram below to try and illustrate what I mean.
In the second diagram, the blue represents what I currently get if I use the bones' directions, the green is the metacarpal direction, and the red is what I want (from the side view). The first diagram shows what I am looking for from a top-down view. The blue line is the metacarpal bone direction and, in this example, the red line is the proximal bone direction, with the green smudge representing the angle I am looking for.
To get this value, you need to "uncurl" the finger direction based on the current metacarpal direction. It's a little involved in the end; you have to construct some basis vectors in order to uncurl the hand along juuust the right axis. Hopefully the comments in this example script will explain everything.
using Leap;
using Leap.Unity;
using UnityEngine;

public class MeasureIndexSplay : MonoBehaviour {

  // Update is called once per frame
  void Update () {
    var hand = Hands.Get(Chirality.Right);
    if (hand != null) {
      Debug.Log(GetIndexSplayAngle(hand));
    }
  }

  // Some member variables for drawing gizmos.
  private Ray _metacarpalRay;
  private Ray _proximalRay;
  private Ray _uncurledRay;

  /// <summary>
  /// This method returns the angle of the proximal bone of the index finger relative to
  /// its metacarpal, when ignoring any angle due to the curling of the finger.
  ///
  /// In other words, this method measures the "side-to-side" angle of the finger.
  /// </summary>
  public float GetIndexSplayAngle(Hand h) {
    var index = h.GetIndex();

    // These are the directions we care about.
    var metacarpalDir = index.bones[0].Direction.ToVector3();
    var proximalDir = index.bones[1].Direction.ToVector3();

    // Let's start with the palm basis vectors.
    var distalAxis = h.DistalAxis();  // finger axis
    var radialAxis = h.RadialAxis();  // thumb axis
    var palmarAxis = h.PalmarAxis();  // palm axis

    // We need a basis whose forward direction is aligned to the metacarpal, so we can
    // uncurl the finger with the proper uncurling axis. The hand's palm basis is close,
    // but not aligned with any particular finger, so let's fix that.
    //
    // We construct a rotation from the palm "finger axis" to align it to the metacarpal
    // direction. Then we apply that same rotation to the other two basis vectors so
    // that we still have a set of orthogonal basis vectors.
    var metacarpalRotation = Quaternion.FromToRotation(distalAxis, metacarpalDir);
    distalAxis = metacarpalRotation * distalAxis;
    radialAxis = metacarpalRotation * radialAxis;
    palmarAxis = metacarpalRotation * palmarAxis;

    // Note: At this point, we don't actually need the distal axis anymore, and we
    // don't need to use the palmar axis, either. They're included above to clarify that
    // we're able to apply the aligning rotation to each axis to maintain a set of
    // orthogonal basis vectors, in case we wanted a complete "metacarpal-aligned basis"
    // for performing other calculations.

    // The radial axis, which has now been rotated a bit to be orthogonal to our
    // metacarpal, is the axis pointing generally towards the thumb. This is our curl
    // axis.
    // If you're unfamiliar with using directions as rotation axes, check out the images
    // here: https://en.wikipedia.org/wiki/Right-hand_rule
    var curlAxis = radialAxis;

    // We want to "uncurl" the proximal bone so that it is in line with the metacarpal,
    // when considered only on the radial plane -- this is the plane defined by the
    // direction approximately towards the thumb, and after the above step, it's also
    // orthogonal to the direction our metacarpal is facing.
    var proximalOnRadialPlane = Vector3.ProjectOnPlane(proximalDir, radialAxis);
    var curlAngle = Vector3.SignedAngle(metacarpalDir, proximalOnRadialPlane,
                                        curlAxis);

    // Construct the uncurling rotation from the axis and angle and apply it to the
    // *original* bone direction. We determined the angle of positive curl, so our
    // rotation flips its sign to rotate the other direction -- to _un_curl.
    var uncurlingRotation = Quaternion.AngleAxis(-curlAngle, curlAxis);
    var uncurledProximal = uncurlingRotation * proximalDir;

    // Upload some data for gizmo drawing (optional).
    _metacarpalRay = new Ray(index.bones[0].PrevJoint.ToVector3(),
                             index.bones[0].Direction.ToVector3());
    _proximalRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           index.bones[1].Direction.ToVector3());
    _uncurledRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           uncurledProximal);

    // This final direction is now uncurled and can be compared against the direction
    // of the metacarpal under the assumption it was constructed from an open hand.
    return Vector3.Angle(metacarpalDir, uncurledProximal);
  }

  // Draw some gizmos for debugging purposes.
  public void OnDrawGizmos() {
    Gizmos.color = Color.white;
    Gizmos.DrawRay(_metacarpalRay.origin, _metacarpalRay.direction);
    Gizmos.color = Color.blue;
    Gizmos.DrawRay(_proximalRay.origin, _proximalRay.direction);
    Gizmos.color = Color.red;
    Gizmos.DrawRay(_uncurledRay.origin, _uncurledRay.direction);
  }
}
For what it's worth, while the index finger is curled, tracked Leap hands don't have a whole lot of flexibility on this axis.

How to calculate number of sprites to spawn across the device's screen height?

In my Unity2D project, I am trying to spawn copies of my sprite on top of each other, across the entire height of the device's screen. To give an idea, think of boxes stacked on top of each other, filling the screen from bottom to top; in my case, I'm spawning arrow sprites instead of boxes.
I already have the sprites spawning on top of each other successfully. My problem now is calculating how many sprites to spawn so that they cover the screen's full height.
I currently have this snippet of code:
public void SpawnInitialArrows()
{
    // get the size of our sprite first
    Vector3 arrowSizeInWorld = dummyArrow.GetComponent<Renderer>().bounds.size;
    // get screen.height in world coords
    float screenHeightInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, Screen.height, 0)).y;
    // get the bottom edge of the screen in world coords
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));
    // calculate how many arrows to spawn based on screen.height/arrow.size.y
    int numberOfArrowsToSpawn = (int)screenHeightInWorld / (int)arrowSizeInWorld.y;
    // create a vector3 to store the position of the previous arrow
    Vector3 lastArrowPos = Vector3.zero;

    for (int i = 0; i < numberOfArrowsToSpawn; ++i)
    {
        GameObject newArrow = this.SpawnArrow();

        // if this is the first arrow in the list, spawn at the bottom of the screen
        if (LevelManager.current.arrowList.Count == 0)
        {
            // we only handle the y position because we're stacking them on top of each other!
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      bottomEdgeInWorld.y + arrowSizeInWorld.y / 2,
                                                      newArrow.transform.position.z);
        }
        else
        {
            // else, spawn on top of the previous arrow
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      lastArrowPos.y + arrowSizeInWorld.y,
                                                      newArrow.transform.position.z);
        }

        // save the position of this arrow so that we know where to spawn the next arrow!
        lastArrowPos = new Vector3(newArrow.transform.position.x,
                                   newArrow.transform.position.y,
                                   newArrow.transform.position.z);
        LevelManager.current.arrowList.Add(newArrow);
    }
}
The problem with my current code is that it doesn't spawn the correct number of sprites to cover the entire height of the device's screen; it only spawns my arrow sprites up to approximately the middle of the screen. What I want is for them to reach the top edge of the screen.
Does anyone know where the calculation went wrong, and how to make the current code cleaner?
If the sprites are rendered by a camera in perspective mode, so that they appear at varying sizes on screen (sprites farther from the camera look smaller than sprites closer to it), then a different way of calculating the numberOfArrowsToSpawn value is needed.
You could try adding sprites with a while loop instead of a for loop: just keep creating sprites until the calculated world position for the next sprite would no longer be visible to the camera. You can check whether a point is visible to the camera using the technique Jessy provides in this link:
http://forum.unity3d.com/threads/point-in-camera-view.72523/
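For example, here is a minimal sketch of that while loop using Camera.WorldToViewportPoint (a point is inside the view when its viewport coordinates fall between 0 and 1); dummyArrow, SpawnArrow() and LevelManager are assumed from the question's code:
void SpawnArrowsWhileVisible()
{
    // Arrow size and the screen's bottom edge, both in world units.
    Vector3 arrowSize = dummyArrow.GetComponent<Renderer>().bounds.size;
    Vector3 bottomEdge = Camera.main.ScreenToWorldPoint(Vector3.zero);
    float nextY = bottomEdge.y + arrowSize.y / 2f;

    // Keep stacking until the next arrow's center would leave the view.
    while (Camera.main.WorldToViewportPoint(new Vector3(0f, nextY, 0f)).y <= 1f)
    {
        GameObject arrow = SpawnArrow();
        arrow.transform.position = new Vector3(arrow.transform.position.x,
                                               nextY,
                                               arrow.transform.position.z);
        LevelManager.current.arrowList.Add(arrow);
        nextY += arrowSize.y;
    }
}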
I think your screenHeightInWorld is really a screenTopInWorld: it is a point, and a point can be anywhere in space.
You need the screen height itself in world coordinates.
With an orthographic projection, as you are using, orthographicSize is the half-height of the camera frustum, so the full screen height in world units is twice that:
float screenHeightInWorld = Camera.main.orthographicSize * 2.0f;
I did not read the rest, but it is probably fine; it is up to you how you implement this.
I'd simply create an arrow-spawning method, something like bool SpawnArrowAboveIfFits(), which can call itself recursively on the new instances.
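Putting both fixes together, a corrected version of the question's calculation might look like the sketch below. Note that the original also cast each operand to int before dividing, which truncates badly (and divides by zero whenever the arrow is less than one world unit tall); dividing as floats and rounding up with Mathf.CeilToInt makes the stack reach the top edge instead of stopping short:
// Full visible height of an orthographic camera, in world units
// (orthographicSize is the half-height).
float screenHeightInWorld = Camera.main.orthographicSize * 2f;

// Divide as floats first, then round up so the last arrow overlaps the
// top edge rather than leaving a gap.
int numberOfArrowsToSpawn = Mathf.CeilToInt(screenHeightInWorld / arrowSizeInWorld.y);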

Swift: comparing two objects' positions in an if statement not working?

Alright, so I'm trying to do something simple here and I think I've overcomplicated it. I need an if statement saying: if this object goes off the screen (into negative y coordinates), have something happen. I can't get it to work.
I've tried a number of things, including if statements that compare against a literal number like this, first with equality and then with greater/less than:
if block1.position.y == -50 {
    savior.hidden = true
}
I've tried testing whether the object is below self.size.height:
if block1.position.y < self.size.height {
    savior.hidden = true
}
And I've tried placing an object at the off-screen point and having an if statement compare the two objects' y positions:
if block1.position.y == ptBlock1.position.y {
    savior.hidden = true
}
And nothing's working. block1, the object I'm working with, is being sent to the specific point by an SKAction, so I know that it's getting there:
var moveDownLeft = SKAction.moveTo(CGPointMake(self.size.width * 0.35,-50), duration:5.5)
block1.runAction(moveDownLeft)
Why won't the if statement work?
EDIT:
I have tried this, and even when block1 visibly has a y position lower than ptBlock1, nothing happens:
if block1.position.y < ptBlock1.position.y {
    savior.hidden = true
} else if block1.position.y > ptBlock1.position.y {
    savior.hidden = false
}
this object goes off the screen (into negative y coordinates)
You're making a false assumption there. Off the screen is not necessarily negative y coordinates.
The position of an SKNode is not measured with respect to the screen; it is measured with respect to its parent, ultimately the SKScene. And the SKScene can be much bigger than the screen! You need to convert from scene coordinates to screen coordinates if you want to know what's happening relative to the screen.
(Just to give an example: if you make a new SpriteKit project from the template and log the tap position in touchesBegan, you will discover that a tap in the top left corner is at about {303, 758}. So in that coordinate system, an object is off the screen to the top if its y is greater than about 760. Contrast that with screen coordinates, where you are off the screen to the top if your y is less than 0. These are very different coordinate systems!)

Two-finger rotation gesture on the iPhone?

I'm working on an iPhone app with a lot of different gesture inputs that you can do. Currently there is single finger select / drag, two finger scroll, and two finger pinch zoom-in / zoom-out. I want to add in two finger rotation (your fingers rotate a point in between them), but I can't figure out how to get it to work right. All the other gestures were linear so they were only a matter of using the dot or cross product, pretty much.
I'm thinking I've got to store the slope between the previous two points of each finger, and if the angle between the vectors is near 90, then there is the possibility of a rotation. If the next finger movement angle is also near 90, and the direction of the vector on one finger changed positively and changed negatively, then you've got a rotation. The problem is, I need a really clean distinction between this gesture and the other ones - and the above isn't far enough removed.
Any suggestions?
EDIT: Here's how I did it, in a vector-analysis manner (as opposed to the suggestion below about matching pixels). Note that I use my own Vector struct here; you should be able to guess what each function does:
//First, find the vector formed by the first touch's previous and current positions.
struct Vector2f firstChange = getSubtractedVector([theseTouches get:0], [lastTouches get:0]);

//We're going to store whether or not we should scroll.
BOOL scroll = NO;

//If there was only one touch, then we'll scroll no matter what.
if ([theseTouches count] <= 1)
{
    scroll = YES;
}
//Otherwise, we might scroll, scale, or rotate.
else
{
    //In the case of multiple touches, we need to test the slope between the two touches.
    //If they're going in roughly the same direction, we should scroll. If not, zoom.
    struct Vector2f secondChange = getSubtractedVector([theseTouches get:1], [lastTouches get:1]);

    //Get the dot product of the two change vectors.
    float dotChanges = getDotProduct(&firstChange, &secondChange);

    //Get the 2D cross product of the two normalized change vectors.
    struct Vector2f normalFirst = getNormalizedVector(&firstChange);
    struct Vector2f normalSecond = getNormalizedVector(&secondChange);
    float crossChanges = getCrossProduct(&normalFirst, &normalSecond);

    //For normalized vectors the cross product is sin(angle), so if its magnitude is
    //at most sinf(30) and the dot product is positive, the angle between them is
    //30 degrees or less.
    if (fabsf(crossChanges) <= SCROLL_MAX_CROSS && dotChanges > 0)
    {
        scroll = YES;
    }
    //Otherwise, they're in different directions so we should zoom or rotate.
    else
    {
        //Store the vectors represented by the two sets of touches.
        struct Vector2f previousDifference = getSubtractedVector([lastTouches get:1], [lastTouches get:0]);
        struct Vector2f currentDifference = getSubtractedVector([theseTouches get:1], [theseTouches get:0]);

        //Also find the normals of the two vectors.
        struct Vector2f previousNormal = getNormalizedVector(&previousDifference);
        struct Vector2f currentNormal = getNormalizedVector(&currentDifference);

        //Find the distance between the two previous points and the two current points.
        float previousDistance = getMagnitudeOfVector(&previousDifference);
        float currentDistance = getMagnitudeOfVector(&currentDifference);

        //Find the angle between the two previous points and the two current points.
        float angleBetween = atan2(previousNormal.y, previousNormal.x) - atan2(currentNormal.y, currentNormal.x);

        //If we had a small change in distance and the angle between touches is a big one, rotate.
        if (fabsf(previousDistance - currentDistance) <= ROTATE_MIN_DISTANCE && fabsf(angleBetween) >= ROTATE_MAX_ANGLE)
        {
            if (angleBetween > 0)
            {
                printf("Rotate right.\n");
            }
            else
            {
                printf("Rotate left.\n");
            }
        }
        else
        {
            //Get the dot product of the previous separation between the touches and
            //the difference of the two change vectors.
            struct Vector2f differenceChange = getSubtracted(&secondChange, &firstChange);
            float dotDifference = getDot(&previousDifference, &differenceChange);

            if (dotDifference > 0)
            {
                printf("Zoom in.\n");
            }
            else
            {
                printf("Zoom out.\n");
            }
        }
    }
}

if (scroll)
{
    printf("Scroll.\n");
}
You should note that if you're just doing image manipulation or direct rotation/zooming, then the above approach should be fine. However, if you're like me and you're using a gesture to trigger something that takes time to load, then you'll probably want to avoid firing the action until the gesture has been detected a few times in a row. The separation between gestures with my code is still not perfect, so occasionally in a run of zooms you'll get a rotation, or vice versa.
I've done that before by finding the previous and current distances between the two fingers, and the angle between the previous and current lines through them.
Then I picked some empirical thresholds for that distance delta and angle theta, and that has worked out pretty well for me.
If the distance delta was greater than my threshold and the angle was less than my threshold, I scaled the image. Otherwise I rotated it.
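In sketch form, that classifier is just a couple of comparisons. Here's a self-contained version of the idea, written in plain C# rather than Objective-C purely for illustration; the threshold values are made up, and angle wrap-around at +/-180 degrees is ignored for brevity:
using System;

static class PinchRotateClassifier
{
    // Made-up empirical thresholds, in points and degrees.
    const float DistanceDeltaThreshold = 10f;
    const float AngleThreshold = 5f;

    // Classify one frame of two-finger movement from the previous and
    // current positions of both touches.
    public static string Classify(
        (float x, float y) prev1, (float x, float y) prev2,
        (float x, float y) cur1, (float x, float y) cur2)
    {
        float prevDist = Distance(prev1, prev2);
        float curDist = Distance(cur1, cur2);

        // Angle of the line through the two touches, before and after.
        float prevAngle = MathF.Atan2(prev2.y - prev1.y, prev2.x - prev1.x);
        float curAngle = MathF.Atan2(cur2.y - cur1.y, cur2.x - cur1.x);
        float angleDelta = MathF.Abs(curAngle - prevAngle) * 180f / MathF.PI;

        // Fingers moving apart or together with little twist: scale.
        // Otherwise, treat the motion as a rotation.
        if (MathF.Abs(curDist - prevDist) > DistanceDeltaThreshold &&
            angleDelta < AngleThreshold)
        {
            return "scale";
        }
        return "rotate";
    }

    static float Distance((float x, float y) a, (float x, float y) b) =>
        MathF.Sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
}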
Two-finger scroll seems easy to distinguish from this.
BTW, in case you are storing previous touch positions yourself: the touches already store their previous point values.
CGPoint previousPoint1 = [self scalePoint:[touch1 previousLocationInView:nil]];
CGPoint previousPoint2 = [self scalePoint:[touch2 previousLocationInView:nil]];
CGPoint currentPoint1 = [self scalePoint:[touch1 locationInView:nil]];
CGPoint currentPoint2 = [self scalePoint:[touch2 locationInView:nil]];
Two fingers, both moving, in opposite(ish) directions. What gesture conflicts with this?
Pinch/zoom I guess comes close, but whereas a pinch/zoom starts off moving away from a center point (if you trace backwards from each finger's path, the lines are roughly parallel and close together), a rotate initially has roughly parallel backward traces that are far apart from each other, and those lines constantly change slope (while keeping their distance).
edit: You know, both of these could be solved with the same algorithm.
Rather than calculating lines, calculate the pixel under each finger. If the fingers move, transform the image so that the two initial pixels are still under the two fingers.
This solves all two-finger actions, including scroll.
Two-finger scroll or zoom might look a little wobbly at times, since it will perform the other operations as well, but this is how the Maps app seems to work (excluding rotation, which it doesn't have).
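To make the pixel-matching idea concrete: two point correspondences determine a similarity transform (uniform scale, rotation, translation) exactly, so you can solve for it in closed form each frame. A sketch of that calculation, again in plain C# vector math rather than iOS code:
using System;
using System.Numerics;

static class TwoFingerTransform
{
    // Given the two touch positions at gesture start (p1, p2) and now
    // (q1, q2), compute the scale, rotation, and translation that map the
    // initial points onto the current ones. Applying this transform to the
    // image keeps the initially-touched pixels under the fingers.
    public static (float scale, float angleRadians, Vector2 translation)
        FromTouchPairs(Vector2 p1, Vector2 p2, Vector2 q1, Vector2 q2)
    {
        Vector2 before = p2 - p1;
        Vector2 after = q2 - q1;

        // Uniform scale: ratio of the finger separations.
        float scale = after.Length() / before.Length();

        // Rotation: change in the angle of the line between the fingers.
        float angle = MathF.Atan2(after.Y, after.X)
                    - MathF.Atan2(before.Y, before.X);

        // Translation: whatever is left over after scaling and rotating p1
        // about the origin must move it onto q1.
        Vector2 rotatedScaledP1 = Rotate(p1 * scale, angle);
        Vector2 translation = q1 - rotatedScaledP1;

        return (scale, angle, translation);
    }

    static Vector2 Rotate(Vector2 v, float angle)
    {
        float c = MathF.Cos(angle), s = MathF.Sin(angle);
        return new Vector2(c * v.X - s * v.Y, s * v.X + c * v.Y);
    }
}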