I use marching cubes to obtain the surface of a volume as vtkPolyData. I then want to visualize the vtkPolyData as triangulation lines (the first picture) rather than as a surface rendering (the second picture).
My current code is:
import vtk

# vtkImage is the input vtkImageData volume
surface = vtk.vtkMarchingCubes()
surface.SetInputData(vtkImage)
surface.ComputeNormalsOn()
surface.SetValue(0, 0.5)
surface.Update()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(surface.GetOutputPort())
mapper.ScalarVisibilityOff()
renderer = vtk.vtkRenderer()
renderer.SetBackground(0.1, 0.2, 0.3)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetRepresentationToWireframe()
actor.GetProperty().ShadingOff()
renderer.AddActor(actor)
renderWindow.Render()
interactor.Start()
This successfully produces the second picture. How can I get the first picture from the "surface" output?
The SetRepresentationToWireframe() method already controls whether the polydata is shown as a wireframe or as a surface, so the question is not very clear to me.
This is confusing to me, and I would be grateful if anyone could help me with it.
I have a shadow plane to show the shadow below an AR object. I have read articles where this shadow plane is defined in viewDidLoad and added as a child node to sceneView.scene. My question is: should it be defined only once for the floor surface?
For instance, I could add the shadow plane in renderer(_:didAdd:for:), which is called once whenever a new surface is detected. That works nicely for me. But should the position of the shadow plane be updated as well? Can someone explain where it should be defined and where/when it should be updated?
Here is how I define the shadow plane:
private func addShadowPlane(node: SCNNode, planeAnchor: ARPlaneAnchor) {
    let anchorX = planeAnchor.center.x
    let anchorY = planeAnchor.center.y
    let anchorZ = planeAnchor.center.z
    let floor = SCNFloor()
    let floorNode = SCNNode(geometry: floor)
    floorNode.position = SCNVector3(anchorX, anchorY, anchorZ)
    floor.length = CGFloat(planeAnchor.extent.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]
    node.addChildNode(floorNode)
}
func shadowMaterialStandard() -> SCNMaterial {
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.writesToDepthBuffer = true
    material.readsFromDepthBuffer = true
    material.colorBufferWriteMask = []
    return material
}
The issue you might run into is this: do you want one single shadow plane that is defined in some initial position and then remains there (or can be repositioned), or do you want lots of shadow planes, one on every surface ARKit detects? The problem is that those planes will only match the real surfaces they are created on more or less accurately. You can build more accurate shapes for surfaces, but they are refined in an ongoing process and need more time to complete (imagine scanning a table while walking around it).
I have also built some AR apps with shadow planes. I usually create one single shadow plane (around 20x20 meters) on request using a focus square: I fetch the worldPosition from the focus square and then add the plane at that location using SceneKit (not in the renderer delegate for plane anchors). Keep in mind there are many ways to do this; there is no single best way.
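A minimal sketch of that single-plane approach might look like the following, assuming worldPosition comes from the focus square (or a raycast) and reusing shadowMaterialStandard() from the question; the names addSingleShadowPlane and shadowPlaneNode are only illustrative:
import ARKit
import SceneKit

// Keep a reference so the plane is created only once and can be repositioned later.
private var shadowPlaneNode: SCNNode?

private func addSingleShadowPlane(at worldPosition: SCNVector3, in sceneView: ARSCNView) {
    // If the plane already exists, just move it.
    if let existing = shadowPlaneNode {
        existing.position = worldPosition
        return
    }

    let floor = SCNFloor()
    floor.reflectivity = 0
    floor.width = 20            // one large plane, e.g. 20 x 20 meters
    floor.length = 20
    floor.materials = [shadowMaterialStandard()]    // material from the question

    let node = SCNNode(geometry: floor)
    node.position = worldPosition                   // e.g. the focus square's worldPosition
    sceneView.scene.rootNode.addChildNode(node)     // added via SceneKit, not the plane-anchor renderer
    shadowPlaneNode = node
}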
Try to study this Apple sample app for more information on placing objects, casting shadows, etc.:
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
How to create a line between two points in 3d space with RealityKit?
There are examples of creating lines between two points in SceneKit; however, there are basically none for RealityKit.
To create the line, I've created a rectangle model entity and placed it between my first touched point and the current touched point. From here, all I need to do is rotate the rectangle to face the current touched point. However, using simd_quatf(from:to:) doesn't work as intended:
rectangleModelEntity.transform.rotation = simd_quatf(from: firstTouchedPoint,
to: currTouchedPoint)
If I touch a point and then drag directly downwards, the rectangle model should form a straight line between the first touched point and the current touched point, but it stays horizontal with a slight tilt.
To solve this, I tried getting the angle between my initially horizontal line as a vector and the vector from the first touched point to the current touched point:
let startVec = currTouchedPoint - firstTouchedPoint
let endVec = endOfModelEntityPoint - modelEntityCenterPoint
let lengthVec = simd_length(cross(startVec, endVec))
let theta = atan2(lengthVec, dot(startVec, endVec))
This gives me the angle between the two vectors in 3D space, which seems correct: when I checked, it gave me 90 degrees when touching and dragging directly downwards.
The problem is that I don't know what the rotation axis should be. Since this is 3D space, the line doesn't have to lie in a 2D plane; the current touched position can be below and in front of the starting touch position.
rectangleModelEntity.transform.rotation = simd_quatf(angle: theta, axis: ???)
Personally, I'm not even sure the above is the correct approach to creating a line between two points. In theory it's rather basic: create a rectangle with a small height/depth to mimic a line, position it at the midpoint of the starting and current touch points, then rotate it so it's oriented correctly.
What should the rotation axis be for the angle between the two vectors above?
Is there a better method of creating a line between two points in 3D space with RealityKit/ARKit?
I have implemented this using a box. Let me know if you have a better way.
let midPosition = SIMD3(x: (position1.x + position2.x) / 2,
                        y: (position1.y + position2.y) / 2,
                        z: (position1.z + position2.z) / 2)

let anchor = AnchorEntity()
anchor.position = midPosition
anchor.look(at: position1, from: midPosition, relativeTo: nil)

let meters = simd_distance(position1, position2)

let lineMaterial = SimpleMaterial(color: .red,
                                  roughness: 1,
                                  isMetallic: false)

let bottomLineMesh = MeshResource.generateBox(width: 0.025,
                                              height: 0.025 / 2.5,
                                              depth: meters)

let bottomLineEntity = ModelEntity(mesh: bottomLineMesh,
                                   materials: [lineMaterial])
bottomLineEntity.position = .init(0, 0.025, 0)
anchor.addChild(bottomLineEntity)
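Note that the anchor still needs to be added to the scene for the line to show up, e.g. with arView.scene.addAnchor(anchor), where arView is assumed to be your ARView instance.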
The axis is the cross product of the direction your object is facing at the beginning and the direction it should be facing now.
For example, if the object is at position p1 = [x1, y1, z1], initially facing d1 = [0, 0, -1], and you want it to face a point p2 = [x2, y2, z2], the axis is the cross product of the normalized directions: normalize(d1) × normalize(p2 - p1).
You may have to swap the two operands, or just negate the angle, depending on how it works out.
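To make this concrete, here is a minimal sketch in Swift using simd, assuming the line model's unrotated geometry points along -Z; the function name lineRotation and the 1e-6 tolerance are purely illustrative:
import simd
import Foundation

// Build the rotation that turns a line entity, whose unrotated geometry points
// along `defaultForward`, so that it points from `start` towards `end`.
func lineRotation(from start: SIMD3<Float>,
                  to end: SIMD3<Float>,
                  defaultForward: SIMD3<Float> = SIMD3(0, 0, -1)) -> simd_quatf {
    let direction = simd_normalize(end - start)
    let axis = simd_cross(defaultForward, direction)                  // rotation axis
    let cosAngle = min(max(simd_dot(defaultForward, direction), -1), 1)
    let angle = acos(cosAngle)                                        // angle between the two directions

    // Degenerate case: the directions are (anti)parallel and the cross product is ~zero.
    // Any axis perpendicular to defaultForward works; +Y is perpendicular to -Z.
    if simd_length(axis) < 1e-6 {
        return simd_quatf(angle: angle, axis: SIMD3(0, 1, 0))
    }
    return simd_quatf(angle: angle, axis: simd_normalize(axis))
}

// Usage, matching the snippet in the question:
// rectangleModelEntity.transform.rotation = lineRotation(from: firstTouchedPoint,
//                                                        to: currTouchedPoint)
Note that simd_quatf(from:to:) computes essentially the same rotation, but it expects unit direction vectors; in the snippet from the question it was given the two touch positions themselves, which is likely why it did not behave as expected.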
Finding the angle between the view vector and the surface normal is useful for determining the visible surfaces, since it is used for back-face culling and for obtaining contours and crease edges of the object.
To obtain the visible surfaces I use the back-face culling code below:
N = normals(vertex,faces);
BC = barycenter(vertex,faces);
back_facing = sum(N.*bsxfun(@minus,BC,campos),2) <= 0;
t.FaceVertexCData = 1*(sum(N.*bsxfun(@minus,BC,campos),2) <= 0);
t.FaceVertexCData(sum(N.*bsxfun(@minus,BC,campos),2) > 0) = nan;
faces1 = faces(t.FaceVertexCData(:)==1,:);
facesv = sort(unique(faces1(:)));
How does one obtain the angle?
r = sum(N.*bsxfun(@minus,BC,campos),2);
rr = bsxfun(@minus,BC,campos);
V_mag = sqrt(rr(:,1).^2 + rr(:,2).^2 + rr(:,3).^2);
N_mag = sqrt(N(:,1).^2 + N(:,2).^2 + N(:,3).^2);
for i = 1:size(r,1)
    A(i) = acosd(r(i)/(N_mag(i).*V_mag(i)));
end
This is what I have done thus far. I am not sure if it is correct, and the code is slow.
I am implementing a boids simulation using Swift and SceneKit. Implementing the simulation itself has been fairly straightforward; however, I have been unable to make my models face the direction they are flying (at least not consistently and correctly). The full project is available here: https://github.com/kingreza/Swift-Boids
Here is what I am doing to rotate the models to face the direction they are going:
func rotateShipToFaceForward(ship: Ship, positionToBe: SCNVector3)
{
    var source = (ship.node.position - ship.velocity).normalized();
    // positionToBe is ship.node.position + ship.velocity which is assigned to ship.position at the end of this call
    var destination = (positionToBe - ship.node.position).normalized();
    var dot = source.dot(destination)
    var rotAngle = GLKMathDegreesToRadians(acos(dot));
    var rotAxis = source.cross(destination);
    rotAxis.normalize();
    var q = GLKQuaternionMakeWithAngleAndAxis(Float(rotAngle), Float(rotAxis.x), Float(rotAxis.y), Float(rotAxis.z))
    ship.node.rotation = SCNVector4(x: CGFloat(q.x), y: CGFloat(q.y), z: CGFloat(q.z), w: CGFloat(q.w))
}
Here is how they are behaving right now
https://youtu.be/9k07wxod3yI
Three years too late to help the original questioner, and the original YouTube video is gone, but you can see one at the project's GitHub page.
The original Boids code stored orientation as the three basis vectors of the boid’s local coordinate space, which can be thought of as the columns of a 3x3 rotation matrix. Each frame, a behavioral “steering force” would act on the current velocity to produce a new velocity. Assuming “velocity alignment”, this new velocity is parallel to the new forward (z) vector. The code took the cross product of the old up (y) vector and the new forward vector to produce a new side vector, then crossed the new side and forward vectors to get a new up vector. FYI, here is the code for that in OpenSteer.
Since it looks like you want orientation as a quaternion, there is probably a constructor for your quaternion class that takes a rotation matrix as an argument.
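Since the project uses Swift and SceneKit, a minimal sketch of that basis-vector update with simd might look like this. It assumes the model's forward axis is the local +Z axis (matching the forward (z) convention above) and does not handle the degenerate case where the velocity is parallel to the old up vector; the function name is only illustrative:
import simd

// Rebuild the boid's orientation from its new velocity, OpenSteer-style:
// forward follows the velocity, side = up x forward, up = forward x side.
func orientation(forVelocity velocity: SIMD3<Float>,
                 oldUp: SIMD3<Float> = SIMD3(0, 1, 0)) -> simd_quatf {
    let forward = simd_normalize(velocity)                   // new forward (z) basis vector
    let side = simd_normalize(simd_cross(oldUp, forward))    // new side (x) basis vector
    let up = simd_cross(forward, side)                       // new up (y) basis vector

    // The three basis vectors are the columns of a 3x3 rotation matrix,
    // and simd can build a quaternion directly from that matrix.
    let rotationMatrix = simd_float3x3(columns: (side, up, forward))
    return simd_quatf(rotationMatrix)
}

// Usage sketch with a SceneKit node (newVelocity as a SIMD3<Float>):
// ship.node.simdOrientation = orientation(forVelocity: newVelocity)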
Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. An aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is that I am unsure how to get the values for the particular point on the surface that has been clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible, under some limitations:
The conversion is ambiguous. The mouse event only provides two dimensions: the X and Y screen coordinates. In the 3D model there might be more than one point 'behind' this 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and extending into infinite depth.
While in theory it would be possible at least to try to find the intersection of that line with the 3D objects, ILNumerics currently does not. Even in the simple case of a surface it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions to acquire the X and Y coordinates in 3D and use those only. Let's say your plot is a 2D line plot or a surface plot, but viewed only from 'above' (i.e., the unrotated X-Y plane), and the Z coordinate of the clicked point is not of interest. Let's further assume you have set up an ILScene in a common Windows Forms application with an ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node
            Matrix4 trans = group.Transform;
            while (group != null && !(group is ILCamera)) {
                group = group.Parent;
                // collect all nodes on the path up
                if (group != null) {
                    trans = group.Transform * trans;
                }
            }
            if (group != null && (group is ILCamera)) {
                // convert arg.LocationF to world coords
                // The Z coord is not provided by the mouse! -> choose an arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left-multiply by the transformation matrix.
                pos = trans * pos;
                // view the result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. While rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix hosted in every group node. All transformations are accumulated, and so the vertex coordinates end up in the 'world coordinate' system established by the camera. This is how rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates, one must go the other way around. In the example, we acquire the transformation matrices for every group node, multiply them all up, and invert the resulting transformation matrix. The inversion is needed because such transforms naturally describe the conversion from the child node to the parent, and here we need the opposite direction.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! Here we do not take into account any rotation of the plot cube (the plot cube must be left unrotated), nor any projection transforms (plot cubes use an orthographic transform by default, which basically is a no-op). In order to account for those variables as well, you may extend the example accordingly.