How to 9-slice a sprite while keeping the center not scaled? - unity3d

Is there any way to 9-slice this sprite (a dialog pop-up) while keeping the bottom center (the upside-down triangle) unscaled? I'm using NGUI, if that matters.

Nope
Sorry, but that's how 9-slice scaling works. You would need 25-slice scaling to do what you're looking for, and that's overkill for most things, so I've never seen an implementation.
What to do instead...
Break up your sprite into two pieces: the 9-slice portion and the "notch" portion. Then just position the notch to be in the right place.
I haven't used NGUI (only iGUI and the native Unity UI, both old and new), so I'm not sure precisely how NGUI will let you do this, but you'd still need two sprites, one of which is scaled and one of which isn't, positioned either manually or through a parent-child relationship. If your dialog is always the same width, it'll be pretty straightforward. If not, it might be more challenging.
A few other things:
You'll probably want the notch sprite and the bubble sprite to be the same native image size, but it's not necessary (it might make things easier, it might not).
The notch will want some "overbleed" so that when the two stack, the underlying rendering code doesn't go all squinty-eyed, decide "there's a gap here...", and draw through in some cases.
Depending on the bubble portion's drawn edge, you might want the notch in front or behind. In your precise case, I don't think it'll make a difference. It's a little hard to tell due to the colors, but when I did a selectable tab (which is built similarly), the tab sat on top of the container window so that the shaded edge flowed nicely. The unselected version then had no overbleed, so it looked like it sat "behind" (accurate pixel placement in a 2D game at a fixed size ensures that no gap is rendered).
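A minimal sketch of the two-sprite idea using the native Unity UI (I haven't shown NGUI, and the class and field names here are illustrative, not from any library): parent the unscaled notch to the 9-sliced bubble and pin it to the bubble's bottom-center edge, so stretching the bubble never touches the notch.

```csharp
using UnityEngine;

// Illustrative sketch: pin an unscaled "notch" Image to the bottom-center
// of a 9-sliced bubble Image. Both are plain UI Images on a Canvas.
public class BubbleNotch : MonoBehaviour
{
    public RectTransform bubble;  // the 9-sliced Image
    public RectTransform notch;   // the triangle, Image Type = Simple
    public float overbleed = 2f;  // overlap so no gap can be rendered

    void Awake()
    {
        notch.SetParent(bubble, worldPositionStays: false);
        // Anchor to the bubble's bottom-center edge; the notch keeps its
        // own size no matter how wide the bubble is stretched.
        notch.anchorMin = notch.anchorMax = new Vector2(0.5f, 0f);
        notch.pivot = new Vector2(0.5f, 1f);
        notch.anchoredPosition = new Vector2(0f, overbleed);
    }
}
```

Because the notch is anchored rather than positioned in absolute coordinates, this also covers the variable-width case mentioned above.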

It's a little tedious but pretty straightforward to implement this for UI images. I recently did it in order to make a slice stretch the left/right borders of a 9-slice instead of the center.
The trick is to subclass Image and override OnPopulateMesh, where you do the calculations you need and set positions/uvs to whatever you require.
Here's a helpful how-to article: https://www.hallgrimgames.com/blog/2018/11/25/custom-unity-ui-meshes
Things for a non-UI sprite will be harder. I think you'll have to create all your geometry in a script, and the calculations might be a little complicated because you're using an atlas.
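The subclassing trick above can be sketched as follows; this is a hedged outline, not a full implementation, and the extra quad's size and UVs are placeholders you'd replace with your atlas coordinates:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the Image-subclass approach: override OnPopulateMesh and
// append your own geometry after the normal sliced quads.
public class NotchedImage : Image
{
    protected override void OnPopulateMesh(VertexHelper vh)
    {
        base.OnPopulateMesh(vh); // keep the standard (sliced) geometry

        Rect r = GetPixelAdjustedRect();
        float half = 16f;        // placeholder notch half-width in pixels
        int i = vh.currentVertCount;
        Color32 c = color;
        // Append one unscaled quad hanging below the bottom-center edge.
        // The UVs (0..1 here) would really come from your atlas region.
        vh.AddVert(new Vector3(r.center.x - half, r.yMin - 2 * half), c, new Vector2(0, 0));
        vh.AddVert(new Vector3(r.center.x - half, r.yMin), c, new Vector2(0, 1));
        vh.AddVert(new Vector3(r.center.x + half, r.yMin), c, new Vector2(1, 1));
        vh.AddVert(new Vector3(r.center.x + half, r.yMin - 2 * half), c, new Vector2(1, 0));
        vh.AddTriangle(i, i + 1, i + 2);
        vh.AddTriangle(i + 2, i + 3, i);
    }
}
```

The linked article walks through the same `VertexHelper` calls in more depth.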

Related

2D sprite problem when setting up an instant messaging UI

I'm new to Unity and to game development in general.
I would like to make a text-based game.
I'm looking to reproduce the behavior of an instant messenger like Messenger or WhatsApp.
I chose to use the Unity UI system for the pre-made components like the Scroll Rect.
But this choice led me to the following problem:
I have "bubbles" of dialogs, which must be able to grow in width as well as in height with the size of the text. Fig.1
1. I immediately tried to use VectorGraphics to import an .svg, with the idea of moving the points of my Bezier curves at runtime. But I did not find how to access those points and edit them at runtime.
2. I then found Sprite Shapes, but they are not part of the UI, so if I went with that solution I would have to reimplement scrolling, buttons, etc.
3. I thought of cutting my speech bubble into 7 parts (Fig.2) and scaling them according to the text size. But I have the feeling that this is very heavy for not much.
4. Finally, I wonder if a hybrid solution would not be the best: use the UI for scrolling, then get the transforms and inject them into Sprite Shapes (outside the Canvas).
If 1. is possible, I would be very grateful for an example. If not, 2., 3. and 4. all seem feasible, and I would like your opinion on which of the three is the most relevant.
Thanks in advance.
There is a simpler and quite elegant solution to your problem that uses nothing but the sprite itself (or rather the design of the sprite).
Take a look at 9-slicing Sprites in the official Unity documentation.
With the Sprite Editor you can create borders around the "core" of your speech bubble. Since these speech bubbles are usually colored in a single color and contain nothing else, the ImageType: Sliced would be the perfect solution for what you have in mind. I've created a small Example Sprite to explain in more detail how to approach this:
The sprite itself is 512 pixels wide and 512 pixels high. Each of the cubes missing from the edges is 8x8 pixels, so the top, bottom, and left borders are 3x8=24 pixels deep. The right side has an extra 16 pixels to make room for a small "tail" on the bubble (bottom-right corner). So we have four borders: top=24, bottom=24, left=24 and right=40 pixels.
After importing such a sprite, set its Mesh Type to Full Rect, click Apply, and set the four borders using the Sprite Editor (don't forget to Apply them too). The last step is to use the sprite in an Image component on the Canvas and set the Image Type of that component to Sliced.
Now you can scale/stretch the Image as much as you like: the border will always keep its original size without deforming, and since your bubble has a solid "core", the Sliced option will stretch that core unnoticed.
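For reference, the same border setup can be done in code rather than in the Sprite Editor (the editor is the usual route; treat this as a sketch). Note that the border vector in `Sprite.Create` is ordered (left, bottom, right, top), in pixels:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: build the sliced bubble sprite in code, matching the borders
// from the example above (left=24, bottom=24, right=40, top=24).
public class SlicedBubble : MonoBehaviour
{
    public Texture2D bubbleTexture; // the 512x512 sprite from the example

    void Start()
    {
        var sprite = Sprite.Create(
            bubbleTexture,
            new Rect(0, 0, 512, 512),
            new Vector2(0.5f, 0.5f),      // pivot at the center
            100f, 0,
            SpriteMeshType.FullRect,      // the Full Rect mesh type
            new Vector4(24, 24, 40, 24)); // left, bottom, right, top

        var image = GetComponent<Image>();
        image.sprite = sprite;
        image.type = Image.Type.Sliced;
    }
}
```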
Edit: When scaling the Image you must use its Width and Height instead of the (1,1,1)-based Scale, because the Scale would still distort your Image. Also, here is another screenshot showing the result at different sizes.
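Driving the Width/Height from the text is then a couple of lines; a sketch, assuming a `Text` label sits inside the bubble (the names and padding values here are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: resize a sliced bubble via Width/Height (sizeDelta), never via
// localScale, so the 9-slice borders stay undistorted.
public class BubbleResizer : MonoBehaviour
{
    public RectTransform bubble;  // Image with Image Type = Sliced
    public Text label;            // the text inside the bubble
    public Vector2 padding = new Vector2(24f, 24f);

    void LateUpdate()
    {
        // preferredWidth/Height grow with the text content.
        float w = label.preferredWidth + padding.x * 2f;
        float h = label.preferredHeight + padding.y * 2f;
        bubble.sizeDelta = new Vector2(w, h); // this is Width/Height
    }
}
```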

Why does unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency, and is formatted to RGBA 32bit.
Now, the transparency renders in the sprite, but not in the material.
How do I get the semi-transparency to render without also making the supposedly opaque parts of the albedo transparent?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried Opaque, but it ruins the texture. I tried Cutout, but then the semi-transparent parts either disappear or become fully opaque (depending on the cutout threshold).
There is no code to this.
I expect the semi-transparent parts of the material to render semi-transparent and the opaque parts opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I put off other work and added a submesh; it is really close to solving the problem.
It's still doing that glitch.
Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is because it's a universal issue with no perfect solution. But we may be able to find you a work around, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: When drawing the pixels of an object, you also draw the depth of those pixels. That is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z Buffer is generally a single channel (read: greyscale) image that never gets shown directly. As well as depth sorting, it can also be used for various post processing effects such as fog or ambient occlusion.
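The per-pixel rule described above can be written out as a toy model (purely illustrative, not engine code): one depth value per pixel, and a new pixel only lands if it is closer to the camera.

```csharp
using UnityEngine;

// Toy model of the Z-buffer rule: store one depth per pixel, overwrite
// only when the incoming fragment is closer to the camera.
public class ToyZBuffer
{
    readonly float[,] depth;
    readonly Color[,] color;

    public ToyZBuffer(int w, int h)
    {
        depth = new float[w, h];
        color = new Color[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                depth[x, y] = float.PositiveInfinity; // nothing drawn yet
    }

    public void WritePixel(int x, int y, float d, Color c)
    {
        if (d < depth[x, y]) // new pixel is closer than what's there
        {
            depth[x, y] = d;
            color[x, y] = c;
        }
        // otherwise the existing pixel wins and nothing happens
    }
}
```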
Now, one key property of the depth buffer is that it can only store one value per pixel. Most of the time this is fine; after all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object will be hidden, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity's) goes through the following steps:
Draw all opaque objects, in any order.
Sort all transparent objects by distance from the camera.
Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth-sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, so it gets drawn according to its depth relative to the other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes.
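The three-step policy above can be sketched in a few lines; this is a toy illustration of the idea, not actual engine code (the only Unity-specific fact used is that transparent materials live in render queue 3000 and above):

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Toy illustration of the draw-order policy: opaque objects first in any
// order (the Z-buffer sorts them), then transparent ones back to front.
public static class DrawOrder
{
    public static IEnumerable<Renderer> Sort(IEnumerable<Renderer> all, Camera cam)
    {
        var opaque = all.Where(r => !IsTransparent(r));
        var transparent = all.Where(IsTransparent)
            .OrderByDescending(r =>
                Vector3.Distance(cam.transform.position, r.bounds.center));
        // Furthest transparent object is drawn first, closest last.
        return opaque.Concat(transparent);
    }

    // Unity places transparent materials in render queue 3000 and above.
    static bool IsTransparent(Renderer r) =>
        r.sharedMaterial != null && r.sharedMaterial.renderQueue >= 3000;
}
```

Note the sort key is a single distance per object, which is exactly why large intersecting transparent meshes break: no single per-object distance orders them correctly.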
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is one. Because it's not really transparent, you don't run into any of the depth-sorting issues that you typically would. It's a trade-off.
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers is truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen, you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll pay a premium in performance if you do.
Good luck.
You said you tried the Transparent shader mode, but did you try changing the Alpha channel value of your material's color afterwards?
The second image looks like the Alpha in RGBA is 0; try changing it.

Unity: Positioning an element on canvas

I need to move an image down through the canvas so that its central point ends up where its top edge is now. That's about 50 points, but if I decrease y by 50, it moves to a different part of the screen on devices with different screen sizes. I guess that's because my main canvas is set to scale with the screen size. So I suppose I need to manually divide 50 by my screen height and then multiply by Screen.height in code? Isn't there a more convenient way to move UI objects?
Allow me a second question: do you think it is even wise to make a game purely on the canvas? My game is a simple 2D game, only slightly animated, and contains many layout elements, so I decided to go for it, but I have a hard time grasping the UI positioning rules.
You may have an anchoring problem.
Unity UI depends entirely on anchoring; if you have the right anchoring, there is no issue.
For example, if you anchor something at the center, then changing the left and right values moves it relative to the center anchor.
For a clearer picture, you could paste a screenshot of the behavior.
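For the original question, a sketch (assuming the image's pivot is at its center): shift by half the rect's own height rather than a hard-coded 50, and do it through `anchoredPosition`, which works in the canvas' scaled units and is therefore screen-size independent.

```csharp
using UnityEngine;

// Sketch: move a UI image down by half its own height, so its center
// lands where its top edge was. Works on any screen size because
// anchoredPosition and rect.height share the same canvas units.
public class ShiftDown : MonoBehaviour
{
    void Start()
    {
        var rt = (RectTransform)transform;
        rt.anchoredPosition -= new Vector2(0f, rt.rect.height * 0.5f);
    }
}
```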

Draw curved lines with texture and glow with Unity

I'm looking for an efficient way to draw curved lines and to make an object follow them in Unity.
I also need to draw them using a custom image and not a solid color.
And on top of that I would like to apply an outer glow to them, and not to the rest of the scene.
I don't ask for a copy/paste solution for each of these elements, I list them all to give some context.
I did something similar in a web app, using the HTML5 canvas to draw text progressively. Here is a gif showing the render:
I only used small lines to draw what you see above. Here is a very big letter with thicker lines, so the lines are more visible:
Of course it's not perfect, but the goal was to keep it simple and efficient. And spaces on the outer edges are not very visible in normal size.
This is used in an educational game that runs on mobile as a progressive web app. In real-world usage I attach a particle emitter to it for a better effect:
And it runs smoothly even on low end devices.
I don't want to recreate this exact effect on Unity but the core functionality is very close.
Because of how I did it the first time, I thought about creating a big list of segments to draw manually, but unity may have better tools to create this kind of stuff, maybe working directly with bezier curves.
I'm a beginner in Unity, so I don't really know what the most efficient way to do it is.
I looked at the Line Renderer, which seemed (at first) to be a good choice, but I'm a little worried about performance with a list of 500+ points (considering mobiles are a target).
Also, the glow I would like to add may impact on the technique to choose.
Do you have any advice or direction to give me?
Thank you very much.
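As a point of reference for the Bezier idea mentioned above, sampling a curve into a Line Renderer is only a few lines; a hedged sketch (the control points and sample count are placeholders):

```csharp
using UnityEngine;

// Sketch: sample a quadratic Bezier into a LineRenderer, one way to
// prototype the "list of segments" approach before tackling glow.
public class BezierLine : MonoBehaviour
{
    public LineRenderer line;  // assumed assigned in the Inspector
    public Vector3 p0, p1, p2; // control points
    public int samples = 64;

    void Start()
    {
        line.positionCount = samples;
        for (int i = 0; i < samples; i++)
        {
            float t = i / (samples - 1f);
            // Quadratic Bezier: (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2
            Vector3 p = (1 - t) * (1 - t) * p0
                      + 2 * (1 - t) * t * p1
                      + t * t * p2;
            line.SetPosition(i, p);
        }
    }
}
```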

OpenGL ES. Scrolling 3 layer starfield textures gets me from 60 -> 40 FPS

I need to draw the background for a 2D space scrolling shooter. I need to implement 3 layers of stars: one distant nebula (moving really slow) in the background, one layer of far away stars (moving slow) and one layer of close stars (moving normal) on top of the other two.
The way I first tried this was using 3 textures of 320 x 480 that were transparent PNGs of the stars. I used GL_BLEND with SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
The results were not great even on the 3GS. On first-generation devices the FPS dropped to 40-50, so I think I'm doing this the wrong way.
When I disable GL_BLEND, everything works great even on the 1st-gen devices and the FPS is back to 60 again... so it must be the fact that I'm trying to blend large transparent textures.
The problem is I don't know how to do it any other way...
Should I draw only the nebula as an opaque texture and then try to emulate the middle and top star layers with small points moving around the screen?
Is there any other approach to the blending issue? How can I speed up the rendering process? Is one big texture (a tileset) the answer?
Please help me, because I'm stuck here and I can't get out.
I don't know what you want your stars to look like, but you might want to try moving them from a texture to geometry by using GL_POINTS in DrawElements or DrawArrays; maybe just replace the top two layers with layers of geometry. You can manipulate the points using PointSize, PointSizePointerOES and PointParameter to modify how the points are rendered.
You might want to use multi-texturing to see if that speeds it up. Each multi-texture stage can be assigned a unique transformation matrix, so you should be able to translate each layer at different speeds.
I believe all iPhone models support two texture stages, so you should be able to combine two of your layers into a single draw call. You might still need to resort to blending for the third layer.
Also note that alpha testing could be faster than alpha blending.
Good luck!
The back nebula should definitely be opaque; everything else is getting drawn on top of it, and I assume the only thing behind it is black. Also, prideout has a point: assuming your star layers can have effectively 1-bit alpha, that's definitely something you can try. Failing that, the GL_POINTS technique Harald mentions would work as well.