I am wondering how to avoid a sharp corner between two objects in OpenSCAD.
MWE:
I have the following code, which produces sharp corners where the cylinders meet the sphere.
sphere(r = 0.3, $fn = 50);
rotate([90, 0, 0])
{
    cylinder(h = 2, r = 0.1, center = true, $fn = 20);
}
rotate([0, 90, 0])
{
    cylinder(h = 2, r = 0.1, center = true, $fn = 20);
}
What am I actually looking for? A 3D arc swept through 360 degrees connecting the cylinder and the sphere, something like an expanding column. I tried several other combinations using the minkowski() function (code below), but the cylinder ends never get smoothed where they meet the sphere.
module draw() {
    sphere(r = 0.3, $fn = 50);
    rotate([90, 0, 0])
    {
        cylinder(h = 2, r = 0.1, center = true, $fn = 20);
    }
    rotate([0, 90, 0])
    {
        cylinder(h = 2, r = 0.1, center = true, $fn = 20);
    }
}

minkowski() {
    draw();
    sphere(0.01);
}
Can anyone give me a hint here, please?
How about some negative donuts?
sphere(r = 0.3, $fn = 50);
rotate([90, 0, 0])
    cylinder(h = 2, r = 0.1, center = true, $fn = 20);
rotate([0, 90, 0])
    cylinder(h = 2, r = 0.1, center = true, $fn = 20);

// one fillet collar at each of the four cylinder ends
for (r = [0 : 90 : 270])
    rotate([90, 0, r]) negative_donut();

// the small torus minus a larger, offset torus leaves a concave
// collar that blends the cylinder into the sphere
module negative_donut() {
    difference() {
        translate([0, 0, 0.3]) donut(0.1, 0.05);
        translate([0, 0, 0.345]) donut(0.2, 0.1);
    }
}

module donut(r1, r2) {
    rotate_extrude($fn = 50)
        translate([r1, 0, 0])
            circle(r = r2);
}
I eyeballed the sizes and distances, so you can probably improve on the math here.
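If you want less eyeballing, the placement can be derived: a fillet arc of radius f is tangent to the cylinder wall when its center lies at radial distance r + f from the cylinder axis, and tangent to the sphere when it lies at distance R + f from the sphere's center, i.e. at height sqrt((R + f)^2 - (r + f)^2) along the axis. (This also shows why minkowski() with a tiny sphere doesn't help: dilation rounds convex edges but leaves concave junctions sharp.) Below is a minimal parametric sketch built on that derivation; it is my own construction rather than the donut numbers above, and it is meant to be unioned with the model:

// Parametric fillet: R = sphere radius, r = cylinder radius,
// f = fillet radius. Built along +z, then rotated onto each
// cylinder end below.
module fillet(R = 0.3, r = 0.1, f = 0.05) {
    zc = sqrt(pow(R + f, 2) - pow(r + f, 2)); // height of the arc center
    rotate_extrude($fn = 50)
        difference() {
            square([r + f, zc]);              // washer cross-section
            translate([r + f, zc])
                circle(r = f, $fn = 30);      // carve the fillet arc
        }
}

// one fillet per cylinder end (+y, -y, +x, -x)
for (v = [[90, 0, 0], [-90, 0, 0], [0, 90, 0], [0, -90, 0]])
    rotate(v) fillet();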
I am working on a simple AR engine and I am having a problem with matching a 3d object to a camera image.
For better understanding, I have illustrated it with a picture. Points A and B are in 3d space. Points C and D are given on a texture plane. The distance to the plane from the camera is known.
I know how to obtain the coordinates of Anear, Bnear, Afar, Bfar, Cnear, Dnear, Cfar and Dfar.
The problem is how to find points A' and B' in 3d space such that vector d == d', point Anear == Cnear, and point Bnear == Dnear (projecting the 3d points to the screen should result in the same coordinates).
Could anyone please help me with the math here, or at least point me to where to look for the answer?
PS. It seems my problem description is not clear enough, so to put it in other words: I have a pair of points in 3d space and a pair of points on the texture plane (the image from a webcam). I need to place the points in 3d space at the correct distance from the camera, so that after the perspective transformation they overlay the points on the texture plane. The spatial relation of the 3d points needs to be preserved. In the drawing, the solution is shown as points A' and B'. The dashed line illustrates the perspective transformation (where they are cast onto the near plane at the same locations as points C and D).
So if I understand correctly: the given world-space points are A, B, C and D; also known are the distance d and, implicitly, the camera origin Camera.position and the Camera.transform.forward direction. Sought are A' and B'.
As I understand it, you can find the first point A' as the intersection of the line (origin = A, direction = camera forward) with the line (origin = Camera.position, direction = Camera.position -> C).
Equally, the second point B' is the intersection of the line (origin = B, direction = camera forward) with the line (origin = Camera.position, direction = Camera.position -> D).
Unity's community Math3d utilities offer some helpers for this, e.g.:
//Calculate the intersection point of two lines. Returns true if lines intersect, otherwise false.
//Note that in 3d, two lines do not intersect most of the time. So if the two lines are not in the
//same plane, use ClosestPointsOnTwoLines() instead.
public static bool LineLineIntersection(out Vector3 intersection, Vector3 linePoint1, Vector3 lineVec1, Vector3 linePoint2, Vector3 lineVec2)
{
    Vector3 lineVec3 = linePoint2 - linePoint1;
    Vector3 crossVec1and2 = Vector3.Cross(lineVec1, lineVec2);
    Vector3 crossVec3and2 = Vector3.Cross(lineVec3, lineVec2);

    float planarFactor = Vector3.Dot(lineVec3, crossVec1and2);

    // is coplanar, and not parallel
    if (Mathf.Abs(planarFactor) < 0.0001f && crossVec1and2.sqrMagnitude > 0.0001f)
    {
        float s = Vector3.Dot(crossVec3and2, crossVec1and2) / crossVec1and2.sqrMagnitude;
        intersection = linePoint1 + (lineVec1 * s);
        return true;
    }
    else
    {
        intersection = Vector3.zero;
        return false;
    }
}
So you could probably do something like
public static bool TryFindPoints(Vector3 cameraOrigin, Vector3 cameraForward, Vector3 A, Vector3 B, Vector3 C, Vector3 D, out Vector3 AMapped, out Vector3 BMapped)
{
    AMapped = default;
    BMapped = default;

    if (LineLineIntersection(out AMapped, A, cameraForward, cameraOrigin, C - cameraOrigin))
    {
        if (LineLineIntersection(out BMapped, B, cameraForward, cameraOrigin, D - cameraOrigin))
        {
            return true;
        }
    }

    return false;
}
and then use it like
if (TryFindPoints(Camera.main.transform.position, Camera.main.transform.forward, A, B, C, D, out var aMapped, out var bMapped))
{
    // do something with aMapped and bMapped
}
else
{
    Debug.Log("It was mathematically impossible to find valid points");
}
Note: typed on a smartphone, but I hope the idea is clear.
Given K the camera position, X = A' and Y = B': in the triangle K-X-Y the side XY has the known length |AB| and lies opposite the angle at K, so the law of sines yields the distance XK. Note that Vector3.Angle returns degrees while Mathf.Sin expects radians, so the angles must be converted:
var angleK = Vector3.Angle(C - K, D - K) * Mathf.Deg2Rad;
var angleB = Vector3.Angle(D - K, A - B) * Mathf.Deg2Rad;
var XK = Mathf.Sin(angleB) * Vector3.Distance(A, B) / Mathf.Sin(angleK);
var X = K + (C - K).normalized * XK;
var Y = B + X - A;
I have a plane with four vertices. It can be rotated around the z-axis (0, 0, 1) (achieved using the model matrix in Metal). The model matrix is changed based on a rotation gesture.
What I need is to rotate the plane around an axis through an arbitrary point (x, y), where x and y are not zero, i.e. around an axis that is perpendicular to the xy-plane and passes through (x, y).
Any suggestions?
This works for me. Here the dargCanvas method changes the translation in the model matrix, while rotateCanvas changes its rotation; you may implement your own methods that do the same. The method convertCoodinates maps the coordinate system to suit, as described in https://developer.apple.com/documentation/metal/hello_triangle. The trick is the usual composition translate(anchor) * rotate * translate(-anchor), which rotates about the axis through the anchor point.
@objc func rotate(rotateGesture: UIRotationGestureRecognizer) {
    guard rotateGesture.view != nil else { return }

    let location = rotateGesture.location(in: self.view)
    let rotatingAnchorPoint = convertCoodinates(tapx: location.x, tapy: location.y)

    if rotateGesture.state == UIGestureRecognizerState.changed {
        print("rotation:\(rotateGesture.rotation)")
        // compose translate(+anchor) * rotate * translate(-anchor),
        // i.e. rotate about the axis through the anchor point
        renderer?.dargCanvas(axis: float3(Float(rotatingAnchorPoint.x), Float(rotatingAnchorPoint.y), 0))
        renderer?.rotateCanvas(rotation: Float(rotateGesture.rotation))
        renderer?.dargCanvas(axis: float3(Float(-rotatingAnchorPoint.x), Float(-rotatingAnchorPoint.y), 0))
        rotateGesture.rotation = 0
    } else if rotateGesture.state == UIGestureRecognizerState.began {
    } else if rotateGesture.state == UIGestureRecognizerState.ended {
    }
}
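For reference, here is the matrix those three calls compose, as a self-contained sketch using simd (this assumes the renderer post-multiplies the model matrix; the helper below is illustrative and not part of the code above):

import Foundation
import simd

// Rotation about the z-axis through the anchor point p:
// M = T(p) * Rz(angle) * T(-p)
func rotation(about p: SIMD2<Float>, angle: Float) -> simd_float4x4 {
    let c = cosf(angle)
    let s = sinf(angle)

    var rz = matrix_identity_float4x4
    rz.columns.0 = SIMD4<Float>( c, s, 0, 0)
    rz.columns.1 = SIMD4<Float>(-s, c, 0, 0)

    var tPlus = matrix_identity_float4x4
    tPlus.columns.3 = SIMD4<Float>(p.x, p.y, 0, 1)

    var tMinus = matrix_identity_float4x4
    tMinus.columns.3 = SIMD4<Float>(-p.x, -p.y, 0, 1)

    // Applied right to left: move the anchor to the origin,
    // rotate, then move it back.
    return tPlus * rz * tMinus
}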
So the question is this: I've got an ARPointCloud with a bunch of 3d points, and I'd like to select them based on a 2d frame from the perspective of the camera / screen.
I was thinking about converting the 2d frame to a 3d frustum and checking whether the points lie inside that frustum; I'm not sure if this is the ideal method, or even how to do it.
Would anyone know how to do this, or have a better method of achieving it?
Given the size of the ARKit frame W x H and the camera intrinsics (fx, fy, cx, cy), we can create planes for the view frustum sides. For example, using C++ / Eigen, we can construct our four planes (which pass through the origin) as
std::vector<Eigen::Vector3d> frustumPlanes;
frustumPlanes.emplace_back(Eigen::Vector3d( fx,   0, cx - W));
frustumPlanes.emplace_back(Eigen::Vector3d(-fx,   0, -cx));
frustumPlanes.emplace_back(Eigen::Vector3d(  0,  fy, cy - H));
frustumPlanes.emplace_back(Eigen::Vector3d(  0, -fy, -cy));
We can then clip a 3D point by checking its position against the z < 0
half-space and the four sides of the frustum:
auto pointIsVisible = [&](const Eigen::Vector3d& P) -> bool {
    if (P.z() >= 0) return false;  // behind camera
    for (auto&& N : frustumPlanes) {
        if (P.dot(N) < 0)
            return false;  // outside frustum plane
    }
    return true;
};
Note that it is best to perform this clipping in 3D (before the projection), since points behind or near the camera, or points far outside the frustum, can have unstable projection values (u, v).
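As a usage sketch, assuming the point cloud has already been transformed into the camera's coordinate frame and stored in a std::vector<Eigen::Vector3d> named cloud (the names here are illustrative):

// keep only the points that fall inside the view frustum
std::vector<Eigen::Vector3d> selected;
for (const Eigen::Vector3d& P : cloud)
    if (pointIsVisible(P))
        selected.push_back(P);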
In a surface shader, given the world's up axis (and the others too), a world-space position and a normal in world space, how can we rotate the world-space position into the space of the normal?
That is, given an up vector and a non-orthogonal target up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position affected only by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is bidimensional, but I need to solve this for a 3D space.
Looks like you're trying to rotate pos by the same rotation that would transform up to new_up.
Using the rotation matrix found here, we can rotate pos using the following code. This will work either in the surface function or a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0, 1, 0);

// Build the rotation matrix using notation from the link above
float3 v = cross(up, new_up);
float s = length(v);        // Sine of the angle
float c = dot(up, new_up);  // Cosine of the angle

// The skew-symmetric cross-product matrix of v
float3x3 VX = float3x3(
       0, -v.z,  v.y,
     v.z,    0, -v.x,
    -v.y,  v.x,    0
);

// The identity matrix
float3x3 I = float3x3(
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
);

// The rotation matrix! YAY!
// Note: (1 - c) / (s * s) simplifies to 1 / (1 + c) since s*s = 1 - c*c;
// this avoids a 0/0 when new_up == up (it is still singular when
// new_up == -up, where the rotation is ambiguous anyway).
float3x3 R = I + VX + mul(VX, VX) / (1 + c);

// Finally we rotate
float3 new_pos = mul(R, pos);
This is assuming that new_up is normalized.
If the "target up normal" is a constant, the calculation of R could (and should) only happen once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable. Calculating it for every vertex/fragment is costly, consider what it is you actually need.
If your pos is a vector-4, just do the above with the first three elements, the fourth element can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactical mistakes in the above, please forgive me.
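Assuming a Unity setup (the question uses surface-shader terminology), a minimal sketch of that CPU-side approach might look like this; the _UpRotation property name and the helper component are illustrative assumptions, not part of the answer above:

using UnityEngine;

public class UpRotationSetter : MonoBehaviour
{
    public Material material;             // material running the surface shader
    public Vector3 targetUp = Vector3.up; // the world-space new_up

    void Update()
    {
        // Quaternion.FromToRotation builds the rotation carrying
        // Vector3.up onto targetUp -- the same R as in the shader code.
        Quaternion q = Quaternion.FromToRotation(Vector3.up, targetUp.normalized);
        material.SetMatrix("_UpRotation", Matrix4x4.Rotate(q));
    }
}

The shader would then declare float4x4 _UpRotation; and rotate with mul((float3x3)_UpRotation, pos).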
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, which is a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;
using System.Collections;

public class Follower : MonoBehaviour {

    public Vector3 point;
    public Vector3 origin = Vector3.zero;
    public Vector3 axis = Vector3.forward;

    public float procession = 0f; // < normalized (0-1) fraction of a full turn

    float distance;
    Vector3 direction;

    void Update() {
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        direction = offset.normalized;

        // procession is a fraction of the circumference, so the swept
        // angle in radians is that fraction of 2 * pi
        float angle = (procession % 1f) * 2f * Mathf.PI;

        // Quaternion * Vector3 (in that order) rotates the vector
        direction = Quaternion.AngleAxis(Mathf.Rad2Deg * angle, axis) * direction;

        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}
I have a sphere gameobject with 5 cubes placed at different points on the surface of the sphere. When a key is pressed, I would like the sphere to spin for a few seconds and then slowly stop on the first cube point, always keeping the same direction of rotation. The issue I am facing is that Quaternion.Slerp always takes the shortest path to the next cube, which means the rotation direction sometimes flips. Any ideas?
Thanks
You can easily handle the rotation as a Vector3 of Euler angles.
This way you can linearly interpolate the angles to the correct value, and you can use coterminal angles so that you're always interpolating to a higher value (so no backwards rotation occurs); a sketch of that adjustment follows. After every rotational step you might want to normalize the angles back to the 0-360 range with this approach, though.
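A minimal sketch of the coterminal-angle adjustment mentioned above (this helper is mine, not part of the original answer): lift each target angle by whole turns until it is at least the start angle, so the interpolation always moves forward.

// Lift each Euler angle of `to` by multiples of 360 so it ends up
// >= the corresponding angle of `from`; interpolating from -> to
// then never rotates backwards.
static Vector3 MakeForward(Vector3 from, Vector3 to)
{
    for (int i = 0; i < 3; i++)
        while (to[i] < from[i]) to[i] += 360f;
    return to;
}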
Example Code :
using UnityEngine;
using System.Collections;

public class Rotation : MonoBehaviour {

    public Transform firstPosition;
    public Transform secondPosition;
    public float rotationDuration = 3;

    void Start () {
        transform.rotation = firstPosition.rotation;
        StartCoroutine(Rotate());
    }

    IEnumerator Rotate() {
        var elapsed = 0.0f;
        // total Euler-angle travel; run the target angles through a
        // coterminal adjustment (e.g. MakeForward above) to force a
        // forward rotation
        var distance = secondPosition.rotation.eulerAngles - firstPosition.rotation.eulerAngles;

        while (elapsed < rotationDuration) {
            var rotation = firstPosition.rotation.eulerAngles + elapsed / rotationDuration * distance;
            transform.rotation = Quaternion.Euler(rotation);
            elapsed += Time.deltaTime;
            yield return null;
        }

        transform.rotation = secondPosition.rotation;
    }
}