Visualization fails when displaying two polygons at the same time - visualization

I want to display two polygons in one renderer. The result is:
The green and red colors indicate the two polygons.
However, when I rotate the scene, part of the green polygon disappears:
My environment is:
win 10
python 3.7.13
vtk: 9.2.4
I can reproduce this phenomenon 100% of the time, and the code to reproduce my problem is:
import vtkmodules.all as vtk
def buildPolygon(points):
    polydata = vtk.vtkPolyData()
    vps = vtk.vtkPoints()
    polygon = vtk.vtkPolygon()
    polygon.GetPointIds().SetNumberOfIds(len(points))
    for i in range(len(points)):
        vps.InsertNextPoint(points[i][0], points[i][1], points[i][2])
        polygon.GetPointIds().SetId(i, i)
    polygons = vtk.vtkCellArray()
    polygons.InsertNextCell(polygon)
    polydata.SetPoints(vps)
    polydata.SetPolys(polygons)
    return polydata
polydata1 = buildPolygon([
    [0, 0, 0],
    [10, 0, 0],
    [10, 10, 0],
    [0, 10, 0]
])
map1 = vtk.vtkPolyDataMapper()
map1.SetInputData(polydata1)
actor1 = vtk.vtkActor()
actor1.SetMapper(map1)
actor1.GetProperty().SetColor(1, 0, 0)
polydata2 = buildPolygon([
    [0, 0, 0],
    [5, 0, 0],
    [5, 5, 0],
    [0, 5, 0]
])
map2 = vtk.vtkPolyDataMapper()
map2.SetInputData(polydata2)
actor2 = vtk.vtkActor()
actor2.SetMapper(map2)
actor2.GetProperty().SetColor(0, 1, 0)
render = vtk.vtkRenderer()
render.AddActor(actor1)
render.AddActor(actor2)
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(render)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
iren.Initialize()
iren.Start()

Since both polygons lie exactly on the same plane z=0, the depth buffer has no way to know which one should be drawn on top of the other (z-fighting). Just add a small tolerance:
polydata2 = buildPolygon([
    [0, 0, 0.001],
    [5, 0, 0.001],
    [5, 5, 0.001],
    [0, 5, 0.001]
])
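Alternatively, if both polygons need to stay exactly at z=0, a minimal sketch using VTK's coincident-topology resolution (a depth offset applied at render time, rather than moving the points) could look like this; the offset parameters may need tuning for your scene:
# Sketch: resolve coincident geometry with a polygon (depth) offset
# instead of shifting the points themselves.
vtk.vtkMapper.SetResolveCoincidentTopologyToPolygonOffset()
# Pull the green polygon's mapper slightly towards the camera relative to the red one.
map2.SetRelativeCoincidentTopologyPolygonOffsetParameters(-1.0, -1.0)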

Related

Metal/Swift - First element of vertex buffer has one wrong value

I'm just trying to render a red square using Metal, and I'm creating a vertex buffer from an array of Vertex structures that look like this:
struct Vertex {
    var position: SIMD3<Float>
    var color: SIMD4<Float>
}
This is where I'm rendering the square:
var vertices: [Vertex] = [
    Vertex(position: [-0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [-0.5, 0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, -0.5, 0], color: [1, 0, 0, 1]),
    Vertex(position: [0.5, 0.5, 0], color: [1, 0, 0, 1])
]
var vertexBuffer: MTLBuffer?
func render(using renderCommandEncoder: MTLRenderCommandEncoder) {
    if self.vertexBuffer == nil {
        self.vertexBuffer = self.device.makeBuffer(
            bytes: self.vertices,
            length: MemoryLayout<Vertex>.stride * self.vertices.count,
            options: []
        )
    }
    if let vertexBuffer = self.vertexBuffer {
        renderCommandEncoder.setRenderPipelineState(RenderPipelineStates.defaultState)
        renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        renderCommandEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: vertexBuffer.length / MemoryLayout<Vertex>.stride)
    }
}
This is what my render pipeline state looks like:
let library = device.makeDefaultLibrary()!
let vertexShader = library.makeFunction(name: "basicVertexShader")
let fragmentShader = library.makeFunction(name: "basicFragmentShader")
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
renderPipelineDescriptor.vertexFunction = vertexShader
renderPipelineDescriptor.fragmentFunction = fragmentShader
renderPipelineDescriptor.sampleCount = 4
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].bufferIndex = 0 // Position
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[1].format = .float4
vertexDescriptor.attributes[1].bufferIndex = 0 // Color
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
vertexDescriptor.layouts[0].stride = MemoryLayout<Vertex>.stride
renderPipelineDescriptor.vertexDescriptor = vertexDescriptor
self.defaultState = try! device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
The vertex and fragment shaders just pass through the position and color. For some reason, when this is rendered the first float of the color of the first vertex comes into the vertex shader as an extremely small value, effectively showing black. It only happens for the red value of the first vertex in the array.
Red square with one black vertex
I can see from debugging the GPU frame that the first vertex has a red color component of 5E-41 (essentially 0).
I have no idea why this is the case; it happens at some point when the vertices are copied into the vertex buffer. I'm guessing it has something to do with my render pipeline vertex descriptor, but I haven't been able to figure out what's wrong. Thanks for any help!
This is, with high likelihood, a duplicate of this question. I'd encourage you to consider the workarounds there, and also to file your own feedback to raise visibility of this bug. - warrenm
Correct, this appears to be a driver bug of some sort. I fixed it by adding the cpuCacheModeWriteCombined option to makeBuffer, and I have filed feedback.
self.vertexBuffer = self.device.makeBuffer(
    bytes: self.vertices,
    length: MemoryLayout<Vertex>.stride * self.vertices.count,
    options: [.cpuCacheModeWriteCombined]
)

Order of connecting points in paraview under table to structured grids

I'm a beginner in ParaView. I have a question about displaying a CSV file in ParaView. If my data file looks like this
x coord, y coord, z coord, scalar
0, 0, 0, 1
1, 0, 0, 2
0, 1, 0, 3
1, 1, 0, 4
0, 0, 1, 5
1, 0, 1, 6
0, 1, 1, 7
1, 1, 1, 8
it will create a cubic grid. But if I switch the order of the points like this
x coord, y coord, z coord, scalar
0, 0, 0, 1
1, 0, 0, 2
1, 0, 1, 6
0, 1, 0, 3
1, 1, 1, 8
1, 1, 0, 4
0, 0, 1, 5
0, 1, 1, 7
it will give me a really messy connected wireframe. What is the order of connection? How does ParaView form those grids?
In ParaView (and indeed in the underlying VTK library it uses), structured grid points are ordered such that the index of the x dimension varies fastest, the index of the y dimension varies second-fastest, and the index of the z dimension varies slowest. Hence, your first example gives the expected result while the second example does not.
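As a small illustration, a Python sketch that writes the points in this order (x index fastest, z index slowest) for the 2x2x2 example above might look like:
# Write the CSV points so x varies fastest, then y, then z (slowest).
nx, ny, nz = 2, 2, 2  # grid dimensions for the 8-point example
with open("points.csv", "w") as f:
    f.write("x coord, y coord, z coord, scalar\n")
    scalar = 1
    for k in range(nz):            # z index: slowest
        for j in range(ny):        # y index: second-fastest
            for i in range(nx):    # x index: fastest
                f.write(f"{i}, {j}, {k}, {scalar}\n")
                scalar += 1
This reproduces the first (working) ordering from the question; the shuffled ordering breaks the implicit i-j-k indexing and therefore the cell connectivity.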

Polygons transparency

I need to bind a texture to 2 crossed polygons and make them (the polygons) invisible (with alpha=0). But the textures become transparent along with the polygons.
Is it possible to make only the polygons transparent, without their textures?
This is how I bind the texture:
Gl.glEnable(Gl.GL_BLEND);
Gl.glEnable(Gl.GL_ALPHA_TEST);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);
Gl.glColor4d(255,255,255,0.1);
Gl.glBegin(Gl.GL_QUADS);
Gl.glTexCoord2f(1, 0); Gl.glVertex3d(2, 2, 3);
Gl.glTexCoord2f(0, 0); Gl.glVertex3d(4, 2, 3);
Gl.glTexCoord2f(0, 1); Gl.glVertex3d(4, 4, 3);
Gl.glTexCoord2f(1, 1); Gl.glVertex3d(2, 4, 3);
Gl.glEnd();
Image
I need something like the left part of the image.
I found the solution.
Load a PNG image (so the texture has an alpha channel):
Gl.glBindTexture(Gl.GL_TEXTURE_2D, this.texture[i]);
Gl.glTexEnvi(Gl.GL_TEXTURE_ENV, Gl.GL_TEXTURE_ENV_MODE, Gl.GL_REPLACE);
Gl.glAlphaFunc(Gl.GL_LESS, 0.2f);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR);
Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, (int)Gl.GL_RGBA, image[i].Width, image[i].Height, 0, Gl.GL_RGBA, Gl.GL_UNSIGNED_BYTE, bitmapdata.Scan0);
And draw the object:
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
Gl.glEnable(Gl.GL_BLEND); // Enable blending
Gl.glDisable(Gl.GL_DEPTH_TEST);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);
Gl.glBindTexture(Gl.GL_TEXTURE_2D, texture[0]);
Gl.glBegin(Gl.GL_QUADS);
Gl.glTexCoord2f(1, 0); Gl.glVertex3d(2, 2, 3);
Gl.glTexCoord2f(0, 0); Gl.glVertex3d(4, 2, 3);
Gl.glTexCoord2f(0, 1); Gl.glVertex3d(4, 6, 3);
Gl.glTexCoord2f(1, 1); Gl.glVertex3d(2, 6, 3);
Gl.glEnd();
And you will get your image without any background.

Program for specific sequence of Integers

I am solving the steady-state heat equation with the boundary condition varying like this: 10, 0, 0, 10, 0, 0, 10, 0, 0, 10, 0, 0, 10, ... and so on, depending upon the number of points I select.
I want to construct a matrix for these boundary conditions, but I am unable to specify the logic for the sequence in terms of the i-th element of a matrix.
I am using Mathematica for this; however, I only need the formula. Just as we can specify 2n+1 for odd numbers and 2n for even numbers, I need something like that for the sequence 10, 0, 0, 10, 0, 0, 10, 0, 0, 10, ....
In MATLAB, it would be
M = zeros(1000, 1);
M(1:3:1000) = 10;
to make a 1000-element vector with that structure. 1:3:1000 is 1, 4, 7, ....
Since you specifically want a mathematical formula let me suggest a method:
seq = PadRight[{}, 30, {10, 0, 0}];
func = FindSequenceFunction[seq]
10/3 (1 + Cos[2/3 \[Pi] (-1 + #1)] + Cos[4/3 \[Pi] (-1 + #1)]) &
Test it:
Array[func, 10]
{10, 0, 0, 10, 0, 0, 10, 0, 0, 10}
There are surely simpler programs to generate this sequence, such as:
Array[10 Boole[1 == Mod[#, 3]] &, 10]
{10, 0, 0, 10, 0, 0, 10, 0, 0, 10}
A way to do this in Mathematica:
Take[Flatten[ConstantArray[{10, 0, 0}, Ceiling[1000/3] ], 1],1000]
Another way
Table[Boole[Mod[i,3]==1]*10, {i,1,1000}]
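Outside Mathematica, the same i-th-element rule (10 when i mod 3 == 1 for 1-based i, otherwise 0) could be sketched in Python as:
# Value of the i-th boundary element (1-based): 10 when i mod 3 == 1, else 0.
def boundary_value(i):
    return 10 if i % 3 == 1 else 0

seq = [boundary_value(i) for i in range(1, 11)]
print(seq)  # [10, 0, 0, 10, 0, 0, 10, 0, 0, 10]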

Correct format for loading vertex arrays from file

I've been banging my head on my keyboard for the past couple of weeks over this. What I'm trying to do is load an array of floats (GLfloat) and an array of unsigned shorts (GLushort) from a text file into equivalent arrays in Objective-C so that I can render the contained objects. I've got my arrays loaded into vector objects as
vector<float> vertices;
and
vector<GLushort> indices;
But for some reason I can't get these to render, and I can't figure out why. Here is my code for rendering the above:
glVertexPointer(3, GL_FLOAT, sizeof(vertices[0])*6, &vertices[0]);
glNormalPointer(GL_FLOAT, sizeof(vertices[0])*6, &vertices[3]);
glDrawElements(GL_TRIANGLES, sizeof(indices)/sizeof(indices[0]), GL_UNSIGNED_SHORT, indices);
Sample arrays are below:
vertices: (Vx, Vy, Vz, Nx, Ny, Nz)
{10, 10, 0, 0, 0, 1,
-10, 10, 0, 0, 0, 1,
-10, -10, 0, 0, 0, 1,
10, -10, 0, 0, 0, 1};
indices: (v1, v2, v3)
{0, 1, 2,
0, 2, 3};
The text file I want to load these arrays from for rendering looks like this:
4 //Number of Vertices
###Vertices###
v 10 10 0 0 0 1
v -10 10 0 0 0 1
v -10 -10 0 0 0 1
v 10 -10 0 0 0 1
###Object1###
2 //Number of faces
f 0 1 2
f 3 4 5
Are vector objects the best approach to take? If not, what is? And what am I doing wrong that these won't render? Thanks.
You are using GL_TRIANGLES, so every three indices define one independent triangle.
See the diagram for GL_TRIANGLES: your index format is wrong, since the second face in your file (f 3 4 5) references vertices 4 and 5, which don't exist.
Also, I prefer the GL_TRIANGLE_STRIP format; it needs fewer vertices.
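For example, assuming the second face was meant to reuse the four existing vertices (as in the sample indices array shown in the question), the face section of the file would be:
f 0 1 2
f 0 2 3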