The following code rotates the second cube around the origin. How can I rotate the second cube around its center point ([5,5,0]) instead?
cube([10,10,1]);
rotate([0,0,45]) cube([10,10,1]);
This module will perform the desired rotation.
// rotate as per a, v, but around point pt
module rotate_about_pt(a, v, pt) {
translate(pt)
rotate(a,v)
translate(-pt)
children();
}
cube([10,10,1]);
rotate_about_pt(45,0,[5,5,0]) cube([10,10,1]);
In newer versions (tested with the January 2019 preview) the above code generates a warning. To fix that, update the parameters to rotate:
module rotate_about_pt(z, y, pt) {
translate(pt)
rotate([0, y, z]) // CHANGE HERE
translate(-pt)
children();
}
If you are willing to 'center' the shape it is much easier:
cube([10,10,1], center=true);
rotate([0,0,45]) cube([10,10,1], center=true);
I'm a newbie in Swift and macOS.
I'm trying to find a method to get the exact display coordinates, like
NSEvent.mouseLocation
I have found a method in CoreGraphics:
func CGDisplayBounds(_ display: CGDirectDisplayID) -> CGRect
but the coordinate system is different.
I can work around this by converting the Y coordinate mathematically myself.
But is there any method to get or convert the position programmatically, so that it returns the same coordinates as NSEvent.mouseLocation?
Thanks for your attention.
As you noted, CoreGraphics has what Apple calls ‘flipped’ geometry, with the origin at the top left and the y coordinates increasing toward the bottom of the screen. This is the geometry used by most computer graphics systems.
AppKit prefers what Apple calls ‘non-flipped’, with the origin at the bottom left and the y coordinates increasing toward the top of the screen. This is the geometry normally used in mathematics.
The origin (0, 0) of the CoreGraphics global geometry is always at the top-left of the ‘main’ display (identified by CGMainDisplayID()). The origin of the AppKit global geometry is always at the bottom-left of the main display. To convert between the two geometries, subtract your y coordinate from the height of the main display.
That is:
import CoreGraphics

extension CGPoint {
func convertedToAppKit() -> CGPoint {
return .init(
x: x,
y: CGDisplayBounds(CGMainDisplayID()).height - y
)
}
func convertedToCoreGraphics() -> CGPoint {
return .init(
x: x,
y: CGDisplayBounds(CGMainDisplayID()).height - y
)
}
}
You may notice that these two functions have the same implementation. You don't really need two functions; you can just use one. It converts in both directions.
Calling CGDisplayBounds(CGMainDisplayID()) might also be inefficient. You might want to cache the value or batch your transformations if you're going to be doing a lot of them. But if you cache the value, you'll want to subscribe to NSApplication.didChangeScreenParametersNotification so you can update the cached value if it needs to change.
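For example, here is a minimal sketch of that caching approach, assuming an AppKit app; the class name and property are illustrative, not part of the original answer:
import AppKit

final class DisplayHeightCache {
    // Cached height of the main display, used for the y-flip conversion above.
    private(set) var mainDisplayHeight = CGDisplayBounds(CGMainDisplayID()).height
    private var observer: NSObjectProtocol?

    init() {
        // Refresh the cached height whenever the screen configuration changes.
        observer = NotificationCenter.default.addObserver(
            forName: NSApplication.didChangeScreenParametersNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.mainDisplayHeight = CGDisplayBounds(CGMainDisplayID()).height
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }

    // Same symmetric conversion as in the extension above, but using the cached height.
    func converted(_ point: CGPoint) -> CGPoint {
        return CGPoint(x: point.x, y: mainDisplayHeight - point.y)
    }
}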
How do I get the mouse world position on the X-Y plane only in Unity? ScreenToWorldPoint isn't working for me. I think I need to cast a ray to the mouse, but I'm not sure.
This is what I am using. It doesn't seem to give the correct coordinates or the right plane. I need this for targeting and raycasting.
private void Get3dMousePoint()
{
var screenPosition = Input.mousePosition;
screenPosition.z = 1;
worldPosition = mainCamera.ScreenToWorldPoint(screenPosition);
worldPosition.z = 0;
}
Just need XY coords.
I tried it with ScreenToWorldPoint() and it works.
The key, I think, is in understanding the z coordinate of the position.
Geometrically, in 3D space we need 3 coordinates to define a point. With only 2 coordinates we have a straight line with a variable z parameter. To obtain a point from that line, we must choose at what distance (i.e. at what z) we want the sought point to be.
Obviously, since the camera is perspective, the coordinates you have at z = 1 are different from those at z = 100, unlike in a 2D plane.
If you can figure out how far away to set the z, you can find the point you want.
Just remember that the z must be at least the camera's near clipping distance; I set exactly that value in the script.
Also remember that the resulting vector will have a z equal to the camera's z position plus the z value of the vector passed to ScreenToWorldPoint.
void Get3dMousePoint()
{
Vector3 worldPosition = Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane));
print(worldPosition);
}
Can someone tell me how I can reset my accelerometer?
I have a 3D environment, and my ball bounces left and right correctly. My Y is fine (I don't need it), but my Z is not, and I use the device in landscape mode.
Practically, when my device is at 90° to the floor (landscape mode), it is in the neutral Z position.
I need the neutral position to be set when the user starts the game. I have seen other answers on the Unity blog, but they don't work for me.
This is my code:
public float posZ;
// Use this for initialization
void Start () {
Salvataggio.control.Load ();
posZ = Input.acceleration.z;
}
void Update () {
trovaPallina = GameObject.FindGameObjectWithTag ("Pallina");
if (trovaPallina) {
trovaPallina.transform.position += new Vector3(Input.acceleration.x / 10, 0, (Input.acceleration.z - posZ) / 10) ;
}
}
Why not create some sort of calibration for it? So, you could create a calibration menu in which you click a button once you are satisfied with the zero position of the device. Store the x, y & z values as your zero values. That way, you can use the current values against your zero values to determine the offset in each direction.
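For instance, here is a minimal C# sketch of that calibration idea; the class and method names are illustrative, not from the question's code:
using UnityEngine;

public class AccelerometerCalibration : MonoBehaviour
{
    // The reading stored as the "zero" (neutral) orientation.
    private Vector3 zeroOffset = Vector3.zero;

    // Hook this up to a button in your calibration menu.
    public void Calibrate()
    {
        zeroOffset = Input.acceleration;
    }

    // Use this instead of Input.acceleration when moving the ball.
    public Vector3 CalibratedAcceleration()
    {
        return Input.acceleration - zeroOffset;
    }
}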
The image below shows a rotated box that should be moved horizontally on the X and Z axes. Y should stay unaffected to simplify the scenario. The box could also be the SCNNode of the camera, so I guess a projection does not make sense at this point.
So let's say we want to move the box in the direction of the red arrow. How can this be achieved using SceneKit?
The red arrow indicates -Z direction of the box. It also shows us it is not parallel to the camera's projection or to the global axes that are shown as dark grey lines of the grid.
My latest approach is to take the product of a translation matrix and a rotation matrix, which results in a new transformation matrix. Do I then have to add the current transform to the new transform?
If yes, where is the SceneKit function for adding matrices, analogous to SCNMatrix4Mult for multiplication, or do I have to write it myself using Metal?
If not, what am I missing in the matrix calculations?
I don't want to make use of GLKit.
So my understanding is that you want to move the Box Node along its own X axis (not its parent's X axis). And because the Box Node is rotated, its X axis is not aligned with its parent's, so you have the problem of converting the translation between the two coordinate systems.
The node hierarchy is
parentNode
|
|----boxNode // rotated around Y (vertical) axis
Using Transformation Matrices
To move boxNode along its own X axis
// First let's get the current boxNode transformation matrix
SCNMatrix4 boxTransform = boxNode.transform;
// Let's make a new matrix for translation +2 along X axis
SCNMatrix4 xTranslation = SCNMatrix4MakeTranslation(2, 0, 0);
// Combine the two matrices, THE ORDER MATTERS !
// if you swap the parameters you will move it in parent's coord system
SCNMatrix4 newTransform = SCNMatrix4Mult(xTranslation, boxTransform);
// Apply the newly generated transform
boxNode.transform = newTransform;
Please Note: The order matters when multiplying matrices
Another option:
Using the SCNNode coordinate conversion functions, which looks more straightforward to me:
// Get the boxNode current position in parent's coord system
SCNVector3 positionInParent = boxNode.position;
// Convert that coordinate to boxNode's own coord system
SCNVector3 positionInSelf = [boxNode convertPosition:positionInParent fromNode:parentNode];
// Translate along own X axis by +2 points
positionInSelf = SCNVector3Make(positionInSelf.x + 2,
positionInSelf.y,
positionInSelf.z);
// Convert that back to parent's coord system
positionInParent = [parentNode convertPosition: positionInSelf fromNode:boxNode];
// Apply the new position
boxNode.position = positionInParent;
Building on @Sulevus's correct answer, here's an extension to SCNNode that simplifies things by using the convertVector rather than the convertPosition transformation, in Swift.
I've done it as a var returning a unit vector, and supplied an SCNVector3 overload of multiply so you can say things like
let action = SCNAction.move(by: 2 * cameraNode.leftUnitVectorInParent, duration: 1)
import SceneKit

public extension SCNNode {
    var leftUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 1, y: 0, z: 0)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }

    var forwardUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 0, y: 0, z: 1)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }
}

// Operator overloads must be declared static and are looked up via the operand types,
// so they live in an SCNVector3 extension rather than inside the SCNNode extension.
public extension SCNVector3 {
    static func *(lhs: SCNVector3, rhs: CGFloat) -> SCNVector3 {
        return SCNVector3(x: lhs.x * rhs, y: lhs.y * rhs, z: lhs.z * rhs)
    }

    static func *(lhs: CGFloat, rhs: SCNVector3) -> SCNVector3 {
        return SCNVector3(x: lhs * rhs.x, y: lhs * rhs.y, z: lhs * rhs.z)
    }
}
The far easier way this is usually done:
The usual, normal, and extremely easy way to do this in any game engine or 3D engine is:
You simply have a wrapper node, which holds the node in question.
This is indeed the entire point of transforms: they enable you to abstract out a certain motion.
That's the whole point of 3D engines - the GPU just multiplies out all the quaternions on the way down to the object; it's wholly pointless to (A) figure out the math in your head and (B) do it manually (indeed on the CPU).
In Unity it's "game objects", in SceneKit it's "nodes" and so on.
In all 3D engines, including SceneKit, almost everything has one or more "holders" around it.
To repeat, the reasons for this are (A) it's the entire raison d'être of a game engine, to achieve performance in multiplying out the quaternions of every vertex, and (B) sheer convenience and code solidity.
One of a million examples ...
Of course you can trivially do it in code,
cameraHolder.addChildNode(camera)
In the OP's example, it looks like you would use cameraHolder only to rotate the camera, and then, for the motion the OP is asking about, simply move camera.
It's perfectly normal to have a chain of a number of nodes to get to an object.
This is often used for "effects". Say you have an object, which sometimes has to "vibrate up and down". You can have one node which only does that movement. Note that then, all the animations etc for that movement only have to be on that node. And critically, they can run independently of any other animations or movements. (And indeed you can just use the node elsewhere to jiggle something else.)
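As a rough Swift sketch of that holder pattern (the scene setup, the vibrator node, and the numbers are illustrative assumptions, not part of the original answer):
import SceneKit

let scene = SCNScene()

// The holder handles rotation; the camera node handles the camera and its own motion.
let cameraHolder = SCNNode()
let camera = SCNNode()
camera.camera = SCNCamera()
cameraHolder.addChildNode(camera)
scene.rootNode.addChildNode(cameraHolder)

// Rotate the holder; the camera inherits the rotation.
cameraHolder.eulerAngles.y = .pi / 4

// Move the camera; the motion happens inside the already-rotated holder.
camera.position = SCNVector3(x: 0, y: 0, z: 5)

// An "effect" node that only vibrates up and down, independently of everything else;
// whatever should jiggle would be attached as its child.
let vibrator = SCNNode()
cameraHolder.addChildNode(vibrator)
let up = SCNAction.moveBy(x: 0, y: 0.1, z: 0, duration: 0.05)
vibrator.runAction(.repeatForever(.sequence([up, up.reversed()])))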
Suppose you have two points in 3-D space. Call the first o for origin and the other t for target. The rotation axes of each are aligned with the world/parent coordinate system (and each other). Place a third point r coincident with the origin, with the same position and rotation.
How, in Swift, can you rotate r such that its y-axis points at t? If pointing the z-axis is easier, I'll take that instead. The resulting orientation of the other two axes is immaterial for my needs.
I've been through many discussions related to this but none satisfy. I have learned, from reading and experience, that Euler angles are probably not the way to go. We didn't cover this in calculus, and that was 50 years ago anyway.
Got it! Incredibly simple when you add a container node. The following seems to work for any positions in any quadrants.
// pointAt_c is a container node located at, and child of, the originNode
// pointAtNode is its child, position coincident with pointAt_c (and originNode)
// get deltas (positions of target relative to origin)
let dx = targetNode.position.x - originNode.position.x
let dy = targetNode.position.y - originNode.position.y
let dz = targetNode.position.z - originNode.position.z
// rotate container node about y-axis (pointAtNode rotated with it)
let y_angle = atan2(dx, dz)
pointAt_c.rotation = SCNVector4(0.0, 1.0, 0.0, y_angle)
// now rotate the pointAtNode about its x-axis
let dz_dx = sqrt((dz * dz) + (dx * dx))
// (due to rotation the adjacent side of this angle is now a hypotenuse)
let x_angle = atan2(dz_dx, dy)
pointAtNode.rotation = SCNVector4(1.0, 0.0, 0.0, x_angle)
I needed this to replace lookAt constraints which cannot, easily anyway, be archived with a node tree. I'm pointing the y-axis because that's how SCN cylinders and capsules are directed.
If anyone knows how to obviate the container node, please do tell. Every time I try to apply sequential rotations to a single node, the last overwrites the previous one. I haven't the knowledge to formulate a rotation expression to do it in one shot.
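For what it's worth, here is a hedged sketch of one possible way to avoid the overwrite (my own assumption, not part of the answer above): compose the two rotations as quaternions and assign the combined orientation once, reusing y_angle, x_angle, and pointAtNode from the code above.
import SceneKit
import simd

// Build the two rotations from the angles computed above.
let yRotation = simd_quatf(angle: Float(y_angle), axis: simd_float3(0, 1, 0))
let xRotation = simd_quatf(angle: Float(x_angle), axis: simd_float3(1, 0, 0))

// Quaternion multiplication applies the right-hand factor first, so this is
// "rotate about x, then about y" - the same order as the two-node version.
pointAtNode.simdOrientation = yRotation * xRotation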