I have a brown sprite which contains a triangular hole.
I've added a trail renderer (and set its order in layer to appear behind the sprite), so the user can paint the sprite's hole without painting the sprite itself.
My question is: how can I detect when the hole is completely painted?
I thought about using a shader to check whether there is any black pixel left on the screen, but I don't know if that's possible, because the shader won't know what percentage of the image it has covered.
One way would be to take a screenshot with the ScreenCapture.CaptureScreenshotAsTexture method and then loop through an array of pixel colors from Texture2D.GetPixels32. You could then check if the array contains 'black' pixels.
I would do it in a coroutine for better performance, as doing it every frame may slow down your application. Also, here is what the Unity docs say is important about CaptureScreenshotAsTexture:
To get a reliable output from this method you must make sure it is called once the frame rendering has ended, and not during the rendering process. A simple way of ensuring this is to call it from a coroutine that yields on WaitForEndOfFrame. If you call this method during the rendering process you will get unpredictable and undefined results.
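Putting that together, here is a minimal sketch of such a coroutine. The class name, the blackThreshold value and the assumption that the unpainted part of the hole shows up as (near-)black pixels are mine, not from the question:

using System.Collections;
using UnityEngine;

public class HolePaintChecker : MonoBehaviour
{
    // Hypothetical threshold: how dark a pixel must be to count as "unpainted".
    [SerializeField] private byte blackThreshold = 32;

    public IEnumerator CheckHolePainted()
    {
        // Per the Unity docs, capture only once frame rendering has ended.
        yield return new WaitForEndOfFrame();

        Texture2D screenshot = ScreenCapture.CaptureScreenshotAsTexture();
        Color32[] pixels = screenshot.GetPixels32();

        bool anyBlackLeft = false;
        for (int i = 0; i < pixels.Length; i++)
        {
            Color32 p = pixels[i];
            if (p.r < blackThreshold && p.g < blackThreshold && p.b < blackThreshold)
            {
                anyBlackLeft = true;
                break;
            }
        }

        // Avoid leaking the temporary screenshot texture.
        Destroy(screenshot);

        if (!anyBlackLeft)
        {
            Debug.Log("Hole is fully painted");
        }
    }
}

Rather than running this every frame, you could call StartCoroutine(CheckHolePainted()) only when the player finishes a stroke.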
Related
I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path(s) for the outline and for calculating the surface area (a rough sketch follows after this list). I'm not sure, however, if this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
2) You could also improve the performance of your first approach (see the second sketch below):
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the array in a single for loop.
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, as that should be good enough for an approximation
limit the number of type casts
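For idea 1), here is a minimal sketch of computing the outline area from the collider paths with the shoelace formula. It assumes one PolygonCollider2D per piece; the class and method names are hypothetical, and paths that represent holes are not treated specially:

using UnityEngine;

public static class SpriteAreaUtil
{
    // Sums the area of every path on the collider using the shoelace formula.
    // Note: if the collider contains hole paths, they would need special handling.
    public static float GetColliderArea(PolygonCollider2D collider)
    {
        float totalArea = 0f;
        for (int p = 0; p < collider.pathCount; p++)
        {
            Vector2[] path = collider.GetPath(p);
            float signedArea = 0f;
            for (int i = 0; i < path.Length; i++)
            {
                Vector2 a = path[i];
                Vector2 b = path[(i + 1) % path.Length];
                signedArea += (a.x * b.y) - (b.x * a.y);
            }
            totalArea += Mathf.Abs(signedArea) * 0.5f;
        }
        return totalArea;
    }
}

Comparing that value to the area of a fully opaque piece (or simply checking whether it is close to zero) would tell you whether the piece is essentially empty.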
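For idea 2), here is a sketch of the sampled GetPixels32 loop, assuming the texture has Read/Write enabled; the helper name and the step and alphaThreshold defaults are mine:

using UnityEngine;

public static class TransparencyUtil
{
    // Approximates the transparent fraction of a readable Texture2D
    // by sampling every 'step'-th pixel from a single GetPixels32 call.
    public static float GetTransparentFraction(Texture2D texture, int step = 2, byte alphaThreshold = 10)
    {
        Color32[] pixels = texture.GetPixels32(); // one call instead of many GetPixel calls

        int sampled = 0;
        int transparent = 0;
        for (int i = 0; i < pixels.Length; i += step)
        {
            sampled++;
            if (pixels[i].a <= alphaThreshold)
                transparent++;
        }

        return sampled == 0 ? 0f : (float)transparent / sampled;
    }
}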
I want to mix SpriteKit and Metal, and I found that there is a special class for that - SKRenderer. But, as far as I can see, it can only draw the whole SKScene as a single layer on top of everything on the screen. So, can it use the depth buffer and the zPosition property for proper rendering? For example, what if my scene contains two SKNodes and I want to draw a "Metal object" between them?
I can see in a GPU debugger that every SKNode is rendered with a separate draw call (even more than one, actually), so, in theory, it's possible to use SKNode.zPosition not only for sorting inside the SKScene. For example, it could just be translated into the viewport's z-position as-is (and, of course, keep the depth test).
The reason why I think it's possible is that I see this sentence in the documentation:
For example, you might write the environmental effects layer of your app that does fog, clouds, and rain, with custom Metal shaders, and continue to layer content below and above that with SpriteKit.
and I just can't believe that by "continue to layer content below and above that with SpriteKit" they mean "OK, you can create two different SKScenes".
I have a death transformation for one of my GameObjects which goes from a spherical ball to a bunch of small individual blocks. I want each of these blocks to fade at a different time, but since they all use the same shader I cannot figure out how to stop them all fading out at the same time.
The first picture shows the spherical ball in its first step, when it turns from a sphere into a Minecraft'ish looking block ball; to the right of it, indicated by the red arrow, is one of the blocks that make up that ball.
Now this is my Inspector for one of the little blocks that make up the Minecraft'ish looking ball.
I have an arrow pointing to what makes the object fade, but that applies globally to all of the blocks since they use the same shader. Is it possible to have each block fade separately, or am I stuck and need to find a new disappear act for the little block dudes?
You need to modify the material property by script at runtime, and you need to do it through the Renderer.material property. When you access Renderer.material, Unity will automatically create a copy of the material for you that is handled separately -- including getting its own draw call, if you care about performance. You can tell this has happened because the material name in the renderer will change to "Materialname (Instance)".
Set the material's fade property using Renderer.material.SetFloat() (or whatever the appropriate Set... function is). Unfortunately, the property's internal name isn't the "Fade Factor" label you see in the inspector. You can find the property's name by looking at the shader script, or by switching the inspector to debug mode and digging through the Saved Properties array for one that looks right.
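As a minimal sketch of that idea, here is a per-block fade component. The "_FadeFactor" property name, the component name and the timing values are placeholders; look up the real property name as described above:

using System.Collections;
using UnityEngine;

public class BlockFader : MonoBehaviour
{
    // Hypothetical shader property name; replace with the one your shader actually uses.
    private static readonly int FadeProperty = Shader.PropertyToID("_FadeFactor");

    [SerializeField] private float fadeDuration = 1f;

    private Renderer blockRenderer;

    private void Awake()
    {
        blockRenderer = GetComponent<Renderer>();
    }

    public IEnumerator FadeOut(float delay)
    {
        yield return new WaitForSeconds(delay);

        float t = 0f;
        while (t < fadeDuration)
        {
            t += Time.deltaTime;
            // Accessing .material (not .sharedMaterial) creates a per-renderer copy,
            // so each block fades independently.
            blockRenderer.material.SetFloat(FadeProperty, 1f - t / fadeDuration);
            yield return null;
        }
    }
}

Starting the coroutine with a different delay on each block is what makes them fade at different times, at the cost of one extra material instance (and draw call) per block.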
I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setup is simple: I create a grid using several quads (40x40) which I use to snap buildings. Those buildings also have a base, made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one in front, with the white ones as background? Of course, I could change the red quad's Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use SpriteRenderer (Unity 2D) you don’t have that problem, and you can just set “Sorting Layer” or “Order in Layer” if you want to modify the rendering order.
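If you do switch to SpriteRenderer, here is a minimal sketch of setting that order from a script; the component name, sorting layer name and order value are just example choices:

using UnityEngine;

public class BuildingBaseSorting : MonoBehaviour
{
    // Example value: grid sprites could stay at order 0 and building bases above them.
    [SerializeField] private int orderInLayer = 1;

    private void Awake()
    {
        SpriteRenderer sr = GetComponent<SpriteRenderer>();
        sr.sortingLayerName = "Default"; // or a dedicated sorting layer for buildings
        sr.sortingOrder = orderInLayer;  // higher values render on top
    }
}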
We're in the process of creating an iPhone game using cocos2d. We're trying to layer several sprites on top of each other and have them cast shadows.
Right now the shadows are rendered as sprites which works fine for the most part. But we only want the shadows to hit the closest layer.
I've made an image that hopefully explains what we're trying to accomplish:
And here's what we have at the moment:
Basically we want the sprite to only render the part of the shadow that is at the same depth as the z-buffer.
We've played around with glDepthFunc and GL_DEPTH_TEST but nothing seems to work.
Here's how we're rendering the shadow sprite (subclassed CCSprite):
- (void)draw {
    // Blending is disabled only so the sprite stays visible for debugging (see note below).
    glDisable( GL_BLEND );

    // Only draw where the shadow is nearer than what's already in the depth buffer.
    glEnable( GL_DEPTH_TEST );
    glDepthFunc( GL_LESS );

    // Don't let the shadow write its own depth values.
    glDepthMask( GL_FALSE );
    [super draw];
    glDepthMask( GL_TRUE );

    // Restore the previous state.
    glDisable( GL_DEPTH_TEST );
    glEnable( GL_BLEND );
}
The GL_BLEND calls are only there so we can see the sprite at all times.
All sprites that aren't shadows use glDepthMask( GL_TRUE ) and we're clearing the depth buffer on each frame.
Any help would be much appreciated!
glDepthFunc(GL_LESS)
is actually the default value; it means "draw the pixel only if the thing currently in the depth buffer is further away". If you wanted exactly equal you'd use glDepthFunc(GL_EQUAL), but in practice you'll get all sorts of rounding oddities if you do that.
Assuming you're able to use depth values for this purpose, if you have ten objects then I'd suggest you:
set glClearDepth to 0 before you glClear; this'll fill the depth buffer with the nearest storable value so that with normal depth buffering nothing else would be drawn.
disable the depth test and draw the shadows as they're supposed to fall on the back plane; at this point your depth buffer will still be full of the nearest possible value.
enable the depth test but set glDepthFunc to GL_ALWAYS. Then draw all your solid rectangles in back to front order with their depth values set appropriately.
set glDepthFunc to GL_LESS and draw the shadows that are meant to fall on other sprites, each positioned further back than the sprite they're associated with but in front of the sprite behind.
By the time you get to step 4, you'll have correct depth information everywhere a sprite was drawn and you'll have the closest possible value set wherever the background plane was. So normal depth testing will work on the intermediate shadows — they'll draw on top of anything drawn in step 3 but not on top of anything drawn in step 2.
You're sort of using the depth buffer as a surrogate stencil, which the older iPhones don't support.
If you can't afford to use the depth buffer for this task then all I can think of is projecting the shadows as textures in the second texture unit, using the first for a mask texture (or not if you're actually drawing rectangles, but I guess you're probably not) and doing one rendering pass per sprite per shadow that falls upon it. Is that a passable solution?