I am creating meshes from CSV files in the format "x,y,z,dataValue". Based on the data value, I color the vertex: for example, a data value of 10-20 gets dark green, 20-30 a lighter green, and so on. I have this working, but I would like to build an analysis tool for the mesh. Basically, I want to show the data value of the point being hovered over by the mouse. So, if the user hovers over a vertex, it shows "data value = x". However, once the mesh is created I can only access the color; the data value has essentially been baked into the material. I need a way to store the data value for each vertex.
I have about 450,000 lines of data, so I need an efficient way to look up the data value. I have thought of two options: store the vertices mapped to data values and search the map for the right one (I think this might be too slow), or store the data value per vertex in the shader and, when hovering over a vertex, read the value back from the shader. I'm not sure how to do the second approach, or whether it is the better way to go. I'm just looking for an efficient way to do this.
I think you need to organize your vertex data in an Octree.
First get the hit point from the mouse ray against the MeshCollider via Physics.Raycast, then search the octree for the vertex nearest the hit point.
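A minimal sketch of such an octree (in plain JavaScript for illustration; in Unity you would write the same structure in C#). Each stored point keeps its CSV dataValue, so once Physics.Raycast gives you a hit point you query the tree for the nearest vertex and read its value directly. The class and method names here are hypothetical:

```javascript
// Octree of points { x, y, z, dataValue }. Leaves hold up to `capacity`
// points; a leaf subdivides into 8 octants when it overflows.
class Octree {
  constructor(cx, cy, cz, half, capacity = 32) {
    this.cx = cx; this.cy = cy; this.cz = cz; // cube center
    this.half = half;                         // half of the cube's side
    this.capacity = capacity;
    this.points = [];
    this.children = null;
  }
  insert(p) {
    if (this.children) return this.childFor(p).insert(p);
    this.points.push(p);
    if (this.points.length > this.capacity && this.half > 1e-6) this.subdivide();
  }
  subdivide() {
    const h = this.half / 2;
    this.children = [];
    for (let i = 0; i < 8; i++) {
      this.children.push(new Octree(
        this.cx + (i & 1 ? h : -h),
        this.cy + (i & 2 ? h : -h),
        this.cz + (i & 4 ? h : -h),
        h, this.capacity));
    }
    for (const p of this.points) this.childFor(p).insert(p);
    this.points = [];
  }
  childFor(p) {
    const i = (p.x > this.cx ? 1 : 0) | (p.y > this.cy ? 2 : 0) | (p.z > this.cz ? 4 : 0);
    return this.children[i];
  }
  // Nearest stored point to (x, y, z) within maxDist, or null if none.
  nearest(x, y, z, maxDist, best = { d2: maxDist * maxDist, p: null }) {
    // Prune nodes whose cube cannot contain anything closer than best so far.
    const dx = Math.max(Math.abs(x - this.cx) - this.half, 0);
    const dy = Math.max(Math.abs(y - this.cy) - this.half, 0);
    const dz = Math.max(Math.abs(z - this.cz) - this.half, 0);
    if (dx * dx + dy * dy + dz * dz > best.d2) return best.p;
    for (const p of this.points) {
      const d2 = (p.x - x) ** 2 + (p.y - y) ** 2 + (p.z - z) ** 2;
      if (d2 < best.d2) { best.d2 = d2; best.p = p; }
    }
    if (this.children) for (const c of this.children) c.nearest(x, y, z, maxDist, best);
    return best.p;
  }
}
```

With the tree built once at load time, the per-frame cost of the hover query is a handful of box checks rather than a scan over 450,000 entries.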
I think I have a difficult problem here.
I want to be able to get the surfaces of, for example, the orange object in this three.js example: https://threejs.org/examples/?q=stl#webgl_loader_stl
I want to click with the mouse and find the correct surface, which should then be highlighted, so I can confirm it is the surface I want.
(I already implemented the raycaster successfully, so that's not an issue.)
The intersectObject method returns an array of intersections, each of which has a face property. The face contains the vertex indices.
For STL files containing multiple solids, each solid is assigned to a different group, and the groups are available in the geometry object that is returned from STLLoader. Each group is defined by a range of vertex indices.
So, I think you can correlate the vertex indices returned from the raycaster with the vertex indices in the geometry groups.
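The correlation comes down to a range check. A sketch, assuming the usual three.js shapes (geometry.groups entries with start and count, and the face.a vertex index from the intersection); the helper name is hypothetical:

```javascript
// Given geometry.groups (each { start, count, materialIndex }) and a vertex
// index from a raycaster hit, return the index of the group (STL solid) the
// vertex falls in, or -1 if it is in no group. For non-indexed STL geometry,
// start and count are expressed in vertices.
function groupForVertexIndex(groups, vertexIndex) {
  return groups.findIndex(
    g => vertexIndex >= g.start && vertexIndex < g.start + g.count);
}

// Hedged three.js usage (not run here):
// const hits = raycaster.intersectObject(mesh);
// if (hits.length > 0) {
//   const i = groupForVertexIndex(mesh.geometry.groups, hits[0].face.a);
//   // highlight group i, e.g. by swapping the material at that materialIndex
// }
```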
I am currently developing an asteroid mining/exploration game with fully deformable, smooth voxel terrain using marching cubes in Unity 3D. I want to implement an "element ID" system that is kind of similar to Minecraft's, as in each type of material has a unique integer ID. This is simple enough to generate, but right now I am trying to figure out a way to render it, so that each individual face represents the element its voxel is assigned to. I am currently using a triplanar shader with a texture array, and I have gotten it set up to work with pre-set texture IDs. However, I need to be able to pass in the element IDs into this shader for the entire asteroid, and this is where my limited shader knowledge runs out. So, I have two main questions:
How do I get data from a 3D array in an active script to my shader, or otherwise how can I sample points from this array?
Is there a better/more efficient way to do this? I thought about creating an array with only the surface vertices and their corresponding ID, but then I would have trouble sampling them correctly. I also thought about possibly bundling an extra variable in with the vertices themselves, but I don't know if this is even possible. I appreciate any ideas, thanks.
Each frame, Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, it should write the ID of the object at the corresponding place in the int array.
In OpenGL I know this is pretty common, and I found a lot of tutorials for this kind of thing: basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I am using a given shader, and I didn't find a proper way to do just that. I would think there would be built-in functions for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), which gives up to 256 IDs without HDR. In deferred rendering you could potentially use an unused channel of the G-buffer.
That is if you want to minimize overhead; otherwise you could have a more generic system that re-renders specific objects into a screen-space texture, with a very simple shader that just outputs the ID in whatever format you need, using command buffers.
You'll want to make a custom shader that renders the default textures and colors to the main camera and renders an ID color to a RenderTexture through a second camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
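Whichever variant you pick, the core trick is the same: pack an integer ID into an RGB color when rendering the ID pass, and unpack it when you read the pixel back. A minimal sketch (plain JavaScript for illustration; the function names are hypothetical):

```javascript
// Pack an object ID into one byte per RGB channel. With all three channels
// this covers millions of distinct IDs; reserve one color (e.g. black) for
// "no object" if you need it.
function idToColor(id) {
  return { r: (id >> 16) & 0xff, g: (id >> 8) & 0xff, b: id & 0xff };
}

// Recover the ID from a pixel read back from the ID texture.
function colorToId(c) {
  return (c.r << 16) | (c.g << 8) | c.b;
}
```

Note that the ID pass must render with no lighting, fog, anti-aliasing, or texture filtering, since any blending of neighboring pixels would corrupt the decoded IDs.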
I am creating a mobile painting application. I have two textures (Texture2D): a template image and a color map for it.
This color map contains a unique color for each region of the template where the player can draw.
I need several other textures: one texture per unique color in the color map.
For now I am using GetPixels on the color map and checking every pixel against a dictionary:
If the color is not yet a key in the dictionary, create a new texture and SetPixel at that coordinate.
If the color is already a key, get that color's texture and SetPixel at the coordinate.
But when I run this, even my computer lags badly, never mind mobile devices.
Is there a more efficient way?
To help you visualize the issue, I am attaching the color map, the texture I need to split.
I don't see a magically fast way to do it, but here are a few tips that may help:
Try using GetPixels32 (and SetPixels32) instead of plain GetPixels: the return value is not Color but Color32, which uses bytes rather than floats, so it should be faster. See http://docs.unity3d.com/ScriptReference/Texture2D.SetPixels32.html and http://docs.unity3d.com/ScriptReference/Texture2D.GetPixels32.html
Do not call SetPixel for each pixel; that is really slow. Instead, create a temporary Color32 array for each color and work with that, and only at the end assign all the arrays to new textures using SetPixels32.
Don't use a foreach loop, Array.ForEach, or LINQ to iterate over the colors array; use a simple for loop, which is the fastest way.
Hope this helps.
Now there is a faster way to do this, which is using Texture2D.GetRawTextureData() and Texture2D.LoadRawTextureData().
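The batched approach from the tips above can be sketched as follows (plain JavaScript standing in for the C# you would write in Unity; `pixels` plays the role of the Color32 array from GetPixels32, with each pixel packed into a single integer, and the function name is hypothetical):

```javascript
// Read all pixels once, bucket them into one in-memory layer per unique
// color, and only upload each layer to its own texture at the very end
// (one SetPixels32 + Apply per layer in Unity).
function splitByColor(pixels, width, height) {
  const layers = new Map(); // packed color -> Uint32Array layer
  for (let i = 0; i < pixels.length; i++) { // simple for loop, as advised
    const c = pixels[i];
    let layer = layers.get(c);
    if (!layer) {
      layer = new Uint32Array(width * height); // starts all zero (transparent)
      layers.set(c, layer);
    }
    layer[i] = c; // copy this pixel into its color's own layer
  }
  return layers;
}
```

The whole split is then a single linear pass over the image plus one texture upload per unique color, instead of one SetPixel call per pixel.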
I have three OpenGL objects shown at the same time. If the user touches any one of them, only that object should be displayed on screen.
Just use gluUnProject to convert your touch point into a point on your near clipping plane and a point on your far clipping plane. Use the ray between those two points in a ray-triangle intersection algorithm, figure out which triangle is closest, and whichever object that triangle is part of is your object.
Another approach is to give each object a unique ID color. Whenever the user touches the screen, render using your unique ID colors with no lighting, but don't present the render buffer. Now you can just check the color of the pixel where the user touched and compare it against your list of object color IDs. Quick and easy, and it supports up to 16,581,375 unique objects.
You would have to go through every object in your scene and test each one for a possible collision with the ray you computed via gluUnProject.
Depending on whether you want to select a face or a whole object, you could, for efficiency, first test the ray against bounding volumes (e.g., bounding boxes) of your objects.
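For the ray approach, the standard ray/triangle test is Möller-Trumbore. A self-contained sketch (JavaScript for illustration; vectors are plain [x, y, z] arrays, and the ray direction comes from the two unprojected points):

```javascript
// Small vector helpers.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}

// Möller-Trumbore: distance t along the ray (orig + t * dir) to the hit
// with triangle (v0, v1, v2), or null if the ray misses it.
function rayTriangle(orig, dir, v0, v1, v2) {
  const e1 = sub(v1, v0), e2 = sub(v2, v0);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < 1e-8) return null;   // ray parallel to triangle plane
  const inv = 1 / det;
  const t0 = sub(orig, v0);
  const u = dot(t0, p) * inv;
  if (u < 0 || u > 1) return null;         // outside barycentric range
  const q = cross(t0, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t >= 0 ? t : null;                // hit only in front of the origin
}
```

Run this over the candidate triangles (after the bounding-volume pre-test), keep the smallest t, and the object owning that triangle is the one touched.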