I created a collection of particles that are held together by bonds. The file is in .vtu format. When I try to view the surface, the particles are not shown: only the bonds appear.
This is what I am getting:
I think I have a difficult problem here.
I want to be able to get the surfaces of, for example, the orange object in this three.js example: https://threejs.org/examples/?q=stl#webgl_loader_stl
I want to click with the mouse and find the correct surface, which should then be highlighted, so I can confirm it is the surface I want.
(I have already implemented the raycaster successfully, so that is not an issue.)
The intersectObject method returns an array of intersections, each of which has a face property. The face contains the vertex indices of the intersected triangle.
For STL files containing multiple solids, each solid is assigned to a different group, and the groups are available in the geometry object that is returned from STLLoader. Each group is defined by a range of vertex indices.
So, I think you can correlate the vertex indices returned from the raycaster with the vertex indices in the geometry groups.
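Sketched below (in Python for brevity, though in the app this logic would live in JavaScript) is the lookup described above: given the face index from an intersection and the start/count ranges of geometry.groups, find which solid was clicked. The field names follow three.js's BufferGeometry group convention; the helper name is my own.

```python
def group_for_face(face_index, groups):
    """Return the index of the geometry group containing a picked face.

    groups: list of dicts with "start"/"count" vertex ranges, as in
    three.js BufferGeometry.groups. For non-indexed STL geometry,
    face i spans vertices 3*i .. 3*i + 2.
    """
    first_vertex = 3 * face_index
    for i, g in enumerate(groups):
        if g["start"] <= first_vertex < g["start"] + g["count"]:
            return i
    return None  # hit fell outside every group (should not happen)

# Two solids: one covering vertices 0-89, the other 90-179.
groups = [{"start": 0, "count": 90}, {"start": 90, "count": 90}]
```

In the real app you would pass `intersections[0].faceIndex` and `mesh.geometry.groups`, then highlight the group by changing the material at the group's materialIndex.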
I have a GameObject sphere in my program that represents the Earth.
So I apply a material to it like so:
Using data and a positioning script, I position markers on the globe that represent locations (by longitude and latitude).
Everything seems to work, except that the texture does not line up with the points plotted.
How can I shift the texture so that my data points are on top of the actual locations?
You can see this in the following figure, where the South America points are clearly plotted over the ocean between Antarctica and South America, in the wrong orientation.
EDIT:
After playing a lot, I found that X offset works, but Y offset does not work. The combination will help me accomplish the task, but it's not wrapping correctly...
To create a new Material, use Assets->Create->Material from the main menu or the Project View context menu.
Drag your texture into the inspector field and change the Offset variables until you get the desired offset result.
You should also consider using a modeling program such as Blender for creating textured models or shapes such as circles, but keep in mind that textured models need to be in .fbx format.
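Assuming the Earth texture is a standard equirectangular map, a quick sanity check for the offset is to compute the expected UV of a known landmark. The sketch below is illustrative Python, not Unity API code; it also shows why an X offset wraps cleanly (longitude is periodic) while a Y offset cannot wrap at the poles.

```python
def latlon_to_uv(lat_deg, lon_deg, offset_x=0.0):
    """Map latitude/longitude to equirectangular UV coordinates.

    offset_x shifts the texture horizontally and wraps modulo 1, which
    is why an X offset can fix longitude misalignment; latitude does
    not wrap, so vertical misalignment usually means the texture
    itself needs re-projecting or the sphere's UVs are flipped.
    """
    u = (lon_deg + 180.0) / 360.0
    v = (lat_deg + 90.0) / 180.0
    return ((u + offset_x) % 1.0, v)
```

Comparing the UV this predicts for, say, a capital city against where the marker lands on the sphere tells you whether the remaining error is a constant offset (fixable in the material) or a projection mismatch.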
I am attempting to isolate the shape of a cow in an image. The image is captured using a modified kinect camera. Below is the stage I have got to so far, showing what is left after I have deleted all non-required parts of the image. (The image shows the torso of a cow, viewed from above with the head of the animal on the left).
This was done by deleting all the pixels that were furthest away from the camera (the floor) and then isolating the region that can be seen below.
I am struggling to obtain useful data about this shape. Ideally, I would like to obtain the perimeter, area and major axis. If anyone can help I would be very grateful.
The end goal is to be able to detect a 'cow shape' and then I can move onto the next phase which is to ID each animal.
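As a sketch of the requested measurements, the function below computes area, perimeter and major-axis orientation from a binary mask using pixel counts and second central moments. On real Kinect frames a library routine (for example OpenCV's contour functions) would be the practical choice, so treat this as illustrative of the math rather than a production method.

```python
import math

def mask_metrics(mask):
    """Compute area, perimeter and major-axis angle of a binary mask.

    mask: list of rows of 0/1 values. Area = pixel count; perimeter =
    count of pixel edges exposed to background or the image border;
    major-axis angle (radians from the x-axis) from the second
    central moments of the pixel coordinates.
    """
    h, w = len(mask), len(mask[0])
    pts = [(x, y) for y in range(h) for x in range(w) if mask[y][x]]
    area = len(pts)
    perimeter = 0
    for x, y in pts:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < w and 0 <= ny < h) or not mask[ny][nx]:
                perimeter += 1
    # Centroid and second central moments give the axis orientation.
    cx = sum(p[0] for p in pts) / area
    cy = sum(p[1] for p in pts) / area
    mxx = sum((p[0] - cx) ** 2 for p in pts) / area
    myy = sum((p[1] - cy) ** 2 for p in pts) / area
    mxy = sum((p[0] - cx) * (p[1] - cy) for p in pts) / area
    angle = 0.5 * math.atan2(2 * mxy, mxx - myy)
    return area, perimeter, angle
```

The same moments also give the major/minor axis lengths (from the eigenvalues), which is a useful size feature for telling a cow-shaped blob from noise blobs.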
I have created a simple scene in DirectX 11 that has a plane as a floor, with several spheres, cubes and rectangular walls. Only three objects are loaded: a plane, a cube and a sphere; the cube and sphere are instanced several times with different scales, positions and rotations. Two of these objects are dynamic.
I would like to voxelize this entire scene (100x100x20 units) into 0.2 unit voxels, taking into account the object instances which have different scales and rotations.
I have read several articles on voxelization and have the source code from GPU Pro 3 of "Practical Binary Surface and Solid Voxelization with Direct3D 11"; but all of these articles show the voxelization of single objects - taking their triangles and splitting them into a grid.
How would I extend these methods to account for an entire scene with multiple object instances?
The only approach I can think of is a top-down octree subdivision of the entire scene. But for a dynamic scene, would this be too expensive?
In my scene, I use a separate buffer for each loaded model, so if I were to voxelize in a compute shader, would I need to copy all three buffers into a single buffer? And how do I account for the model instances?
Thank you.
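This is not the GPU Pro 3 algorithm itself, but a CPU sketch of the scene-level bookkeeping the question asks about: one shared voxel grid, with each instance's world transform applied to its model's triangles before binning, so one model buffer serves every instance. The binning here is deliberately coarse (each triangle marks the voxels its AABB overlaps); a real voxelizer would do exact triangle/box tests on the GPU. All names are illustrative.

```python
def voxelize_scene(models, instances, origin, voxel_size, dims):
    """Conservatively voxelize a scene of instanced models on the CPU.

    models: dict name -> list of triangles (each a list of 3 vertices).
    instances: list of (model_name, world_matrix) pairs, where
    world_matrix is a 3x4 row-major affine transform carrying the
    instance's rotation, scale and translation.
    origin/voxel_size/dims define the grid covering the scene.
    """
    grid = set()
    for name, m in instances:
        for tri in models[name]:
            # Transform the model-space triangle into world space.
            world = [
                (m[0][0]*x + m[0][1]*y + m[0][2]*z + m[0][3],
                 m[1][0]*x + m[1][1]*y + m[1][2]*z + m[1][3],
                 m[2][0]*x + m[2][1]*y + m[2][2]*z + m[2][3])
                for x, y, z in tri
            ]
            lo = [min(v[i] for v in world) for i in range(3)]
            hi = [max(v[i] for v in world) for i in range(3)]
            # Mark every voxel the triangle's AABB touches (conservative).
            cells = []
            for i in range(3):
                a = int((lo[i] - origin[i]) // voxel_size)
                b = int((hi[i] - origin[i]) // voxel_size)
                cells.append(range(max(a, 0), min(b, dims[i] - 1) + 1))
            for ix in cells[0]:
                for iy in cells[1]:
                    for iz in cells[2]:
                        grid.add((ix, iy, iz))
    return grid
```

The key point is that nothing about the grid cares which buffer a triangle came from: on the GPU you could dispatch the voxelization shader once per model buffer, passing the per-instance world matrices in a structured buffer, and have all dispatches write into the same UAV grid, rather than merging the vertex buffers.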
In most OpenGL ES tutorials, a structure is created that holds the vertices of the geometry, containing the position and color of each vertex. This vertex information is then sent to the vertex buffer and used to render the geometry on the screen. My question is: if I want to draw two cubes on the screen, do I need to create two different structure objects, or can I get by with a single structure and change the color dynamically?
This is the definition of my structure
struct Vertex {
    float Position[3];
    float Color[4];
};
Yes, you can use just one instance of the structure: draw it, then change its colors and draw it again with another world matrix. However, I don't think that would be very good for performance.
The best approach would be to create two instances of that structure, each containing different colors, then draw them in different positions by applying a translation matrix to each one's world matrix.
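A related option worth knowing: keep a single shared vertex buffer and vary only small per-instance state (transform and color) between draw calls, which in OpenGL ES or D3D11 would live in a uniform/constant buffer rather than in the vertex structure. The sketch below is illustrative Python, not GPU API code, and all names are hypothetical.

```python
def draw_instances(vertices, instances):
    """Sketch of drawing one mesh several times with different state.

    vertices: shared list of (x, y, z) positions -- one vertex buffer,
    uploaded once.
    instances: list of ((tx, ty, tz), color) pairs -- the small
    per-instance state that changes between draw calls.
    Returns one (transformed_vertices, color) pair per "draw call".
    """
    frames = []
    for (tx, ty, tz), color in instances:
        transformed = [(x + tx, y + ty, z + tz) for x, y, z in vertices]
        frames.append((transformed, color))
    return frames
```

This avoids both duplicating vertex data and rewriting the vertex buffer per frame: the per-cube color becomes a shader uniform instead of a vertex attribute.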