I have this quad in the 3D scene:
I need to get the local positions of all painted (non-transparent) pixels of this quad. I already tried using GetPixels() and filtering the result by alpha to keep only pixels with a valid color. But then I noticed that I can't get the pixels' local positions this way, because it returns a Color array, which doesn't offer a way to retrieve that information. I've already tried googling and nothing came up; maybe the only way to get what I want is to build something at the shader level, but I don't know much about that subject either. I can offer more context if needed, but I'm trying to keep things short here. Also, there's no code to show except the wrong attempt using GetPixels(), which doesn't work for my case as far as I know.
Any help is appreciated!
Related
I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel; I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering an outline with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and to calculate the surface area. I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
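As a rough sketch of that idea (the class and method names here are mine, not an existing API; only PolygonCollider2D.pathCount and GetPath are Unity calls): walk each path and apply the shoelace formula to estimate the outline's surface area, then compare it against the mesh area or simply check whether it is near zero.

using UnityEngine;

// Sketch: approximate a sprite's painted area from its PolygonCollider2D outline paths
// using the shoelace formula. Summing signed areas means holes (paths wound the opposite
// way) subtract from the total; if your paths don't follow that convention, sum the
// absolute area of each path instead.
public static class SpriteAreaUtil
{
    public static float OutlineArea(PolygonCollider2D outline)
    {
        float total = 0f;

        for (int p = 0; p < outline.pathCount; p++)
        {
            Vector2[] path = outline.GetPath(p);
            float signedArea = 0f;

            for (int i = 0; i < path.Length; i++)
            {
                Vector2 a = path[i];
                Vector2 b = path[(i + 1) % path.Length];   // wrap around: the path is cyclic
                signedArea += a.x * b.y - b.x * a.y;
            }

            total += 0.5f * signedArea;
        }

        return Mathf.Abs(total);
    }
}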
2) You could also improve performance of your first approach:
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the array in a single for loop (see the sketch after this list).
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, as that should be good enough for an approximation
limit the number of type casts
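A minimal sketch combining those points (TransparencyUtil, step and alphaCutoff are placeholder names; the texture must have Read/Write enabled in its import settings):

using UnityEngine;

// Sketch: estimate the fraction of transparent pixels in a readable Texture2D.
// 'step' > 1 samples only every nth pixel, trading accuracy for speed.
public static class TransparencyUtil
{
    public static float TransparentFraction(Texture2D texture, int step = 2, byte alphaCutoff = 10)
    {
        Color32[] pixels = texture.GetPixels32();   // one native call instead of per-pixel GetPixel
        int sampled = 0;
        int transparent = 0;

        for (int i = 0; i < pixels.Length; i += step)
        {
            sampled++;
            if (pixels[i].a <= alphaCutoff)
                transparent++;
        }

        return sampled == 0 ? 0f : (float)transparent / sampled;
    }
}

A piece could then be treated as "empty" when TransparentFraction returns a value close to 1.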
Each frame Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, to write the object's ID at the corresponding position in that int array.
In OpenGL I know this is pretty common and I found a lot of tutorials for this kind of thing: basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I'm using a given shader and I didn't find a proper way to do just that. I think there should be some built-in functionality for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects), giving up to 256 IDs without HDR. In deferred rendering you could potentially use an unused channel of the G-buffer.
That is if you want to minimize overhead; otherwise you could have a more generic system that re-renders specific objects into a screen-space texture, with a very simple shader that just outputs the ID, in whatever format you need, using command buffers.
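Here is what the command-buffer variant might look like as a sketch (the ID materials, the pickables list, and the texture wiring are all assumptions; each renderer needs its own material instance whose colour encodes that object's ID):

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: re-render chosen objects into a screen-space ID texture via a command buffer.
public class ObjectIdBuffer : MonoBehaviour
{
    public Renderer[] pickables;      // objects that should get an ID
    public Material[] idMaterials;    // one simple "output my ID colour" material per object
    public RenderTexture idTexture;   // screen-sized target that ends up holding the IDs

    void OnEnable()
    {
        var cmd = new CommandBuffer { name = "Object IDs" };
        cmd.SetRenderTarget(idTexture);
        cmd.ClearRenderTarget(true, true, Color.clear);   // clear colour = "no object"

        for (int i = 0; i < pickables.Length; i++)
            cmd.DrawRenderer(pickables[i], idMaterials[i]);

        // Run the buffer as part of the camera's normal rendering.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }
}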
You'll want to make a custom shader that renders the default textures and colors to the main camera and renders an ID color to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
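A minimal sketch of that two-camera setup (the _ObjectID property, the helper-camera wiring and the script itself are assumptions rather than what the video does): a disabled second camera copies the main camera and renders the scene with a replacement shader that simply outputs each object's ID colour into a RenderTexture.

using UnityEngine;

// Sketch: render per-object ID colours into a RenderTexture with a second, disabled camera.
// Assumes an ID shader whose fragment output is an _ObjectID colour, and that every
// material (or MaterialPropertyBlock) carries a unique _ObjectID value.
public class ObjectIdCamera : MonoBehaviour
{
    public Camera idCamera;          // disabled helper camera, never drawn to screen
    public Shader idShader;          // simple unlit shader that returns _ObjectID
    public RenderTexture idTexture;  // target the IDs are rendered into

    void LateUpdate()
    {
        idCamera.CopyFrom(Camera.main);                    // mirror position, FOV, culling mask...
        idCamera.targetTexture = idTexture;                // ...but draw into the ID texture
        idCamera.clearFlags = CameraClearFlags.SolidColor;
        idCamera.backgroundColor = Color.clear;            // clear colour = "no object"

        idCamera.RenderWithShader(idShader, "");           // render now, replacing every shader
    }
}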
I'm trying to write a simple frag/vert shader that, depending on whether a fragment is within range of a light, paints the appropriate colour from either the 'lit' texture or the 'unlit' texture.
To do that, I need to compare the distance to the light against the light's range.
I've been googling all kinds of things, but I can't seem to find a way of accessing the range value of the light. Is there a way to do so? If not, is there some kind of derived data I could use as an alternative?
Update
I was able to find this method here, which seems the most promising so far; however, after playing around for a bit, I still can't get what I need. There's some talk about _LightMatrix0 not being populated. Can anyone confirm?
Update 2
I found the variable unity_LightAtten in the Unity Shader Variables documentation. However, this is only used for Vertex Lit shading, which isn't exactly ideal, especially considering the lack of console support.
Could there be a way to pipe this variable to Forward Rendering?
You can pass Light.range into the shader using Material.SetFloat. You need to attach a script to do that.
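For example, a minimal sketch (the property names _LightRange and _LightPosition are placeholders that your shader would have to declare):

using UnityEngine;

// Sketch: push a light's range (and position) into a material every frame so the
// fragment shader can test whether a pixel lies within the light's range.
public class PassLightRange : MonoBehaviour
{
    public Light sceneLight;          // the light whose range the shader needs
    public Material litUnlitMaterial; // material using the custom lit/unlit shader

    void Update()
    {
        litUnlitMaterial.SetFloat("_LightRange", sceneLight.range);
        litUnlitMaterial.SetVector("_LightPosition", sceneLight.transform.position);
    }
}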
I have an image (see attached) and I am trying to calculate the variance of the image inside the region of interest (the dark region) using the stdfilt function. Image here.
The dark side is what I need to work on. When I use stdfilt on this image, it shows me the boundaries between the dark and bright regions.
My idea is that we could threshold the image to show only the dark side and tell MATLAB to work only with this region of interest. So far, I have not found a proper way of doing this.
The area is not a perfect polygon, which would make things way easier. At that point, I'm not sure what to do, so any suggestions are welcome.
Cheers
If the spatial location of the pixels is not relevant, you could just do:
datatoprocess = I(I < threshold);   % keep only the pixels darker than the threshold
Here threshold is a value that separates white from black; graythresh is a fantastic function for computing it. datatoprocess will be a 1xN array with the pixel values.
If, instead, the spatial location of the pixels is relevant, then you need to modify your processing so that it ignores certain pixels. The usual approach is to set the pixels you don't want to take into account to NaN:
Itoprocess = I;                     % copy the image
Itoprocess(I > threshold) = NaN;    % mask out the bright pixels
Without more information on what exactly you are doing with the image, this is about the best anyone can suggest.
I would like to track (if that is the right word for it) the movement of a point on an object and return the coordinates of that point in each frame to arrays for plotting. How would you go about doing this?
The point on the video is a certain color, so my first effort was to eliminate all other colors, changing the part I wish to follow to black and everything else to white. Doing this left me with some areas in the background which are the same color, but I wish to ignore them and just focus on the moving point. I don't know where to even begin with this, or whether I've even been trying the right thing so far.
Any help would be greatly appreciated! :)
Try searching for terms like 'tracking', 'morphological', 'computer vision', 'matlab'
Here's a project that I found that will probably get you started.
http://www.mathworks.com/matlabcentral/fileexchange/28757-tracking-red-color-objects-using-matlab
If your object of interest is of a specific color, you can always apply a color filter. To give you a bit of background, I was trying to track not a point on an object but a moving object in one of my videos (a ping-pong video; my goal was to track the ping-pong ball). My algorithm was simple and fast, as I did not want any of my filters to add heavy computation at any single frame. The basic idea was to apply a color filter. As with other shape filters, if your target is highly similar to the filter, the response will be distinctive enough to notice. In other words, if you subtract two patches that are extremely similar, you get roughly 0; otherwise, the difference is far greater than 0.