Unity3D - Make texture edges not stretch

I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. So, in the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split up into 9 parts - the corners, middle, and sides.
This works because the button's middle and sides are stretched, but not the corners, so the button looks crisp at any size.
So, the question is as follows: How can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane that I know how to re-scale to fit the text in the speech bubble.
I've set the texture type to Multiple Sprite, and divided it up into 9 parts. However, I cannot seem to find where I can set the texture to act like the UI button does, and I'm not sure that this is even possible in this way in 3D space.
Is there a way, or should I just make the different parts of the texture separate objects and move them together? That would seem very inefficient and ugly compared to how the UI handles it.

To accomplish what you are asking, you would need to create tiles for this speech bubble and then write a script that procedurally builds a speech bubble based on the plane's scale value. You could also try just changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not just use a Unity Canvas and set its Render Mode to World Space? Then you can make your text box a sprite rather than a texture and set its Filter Mode to Point. This would also make it a lot easier when you want to put text in the speech bubble later on.
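To make that concrete, here is a rough sketch of what that world-space Canvas setup could look like in C#. It is only an illustration of the idea, not tested against your project; the names (SpeechBubbleBuilder, speechBubbleSprite and so on) are placeholders I made up, and I've also set the Image to Sliced so the 9-slice borders from your question are respected:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: build a world-space Canvas with a sliced speech-bubble
// Image and a Text child. Names and sizes are illustrative assumptions.
public class SpeechBubbleBuilder : MonoBehaviour
{
    public Sprite speechBubbleSprite; // 9-sliced sprite, Filter Mode: Point
    public Font font;

    void Start()
    {
        // World-space canvas that can be positioned above a character, etc.
        var canvasGO = new GameObject("SpeechBubbleCanvas", typeof(Canvas));
        var canvas = canvasGO.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        canvasGO.transform.SetParent(transform, false);
        canvasGO.transform.localScale = Vector3.one * 0.01f; // shrink UI units to world units

        // The bubble itself: a sliced Image so the border never stretches.
        var bubbleGO = new GameObject("Bubble", typeof(Image));
        bubbleGO.transform.SetParent(canvasGO.transform, false);
        var bubble = bubbleGO.GetComponent<Image>();
        bubble.sprite = speechBubbleSprite;
        bubble.type = Image.Type.Sliced;
        bubble.rectTransform.sizeDelta = new Vector2(200, 100);

        // Text inside the bubble.
        var textGO = new GameObject("Text", typeof(Text));
        textGO.transform.SetParent(bubbleGO.transform, false);
        var text = textGO.GetComponent<Text>();
        text.font = font;
        text.text = "Hello!";
        text.rectTransform.sizeDelta = new Vector2(180, 80);
    }
}
```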

Related

2D sprite problem when setting up an instant messaging UI

I'm new to Unity and to game development in general.
I would like to make a text-based game.
I'm looking to reproduce the behavior of an instant messenger like Messenger or WhatsApp.
I chose to use the Unity UI system for the pre-made components like the Scroll Rect.
But this choice led me to the following problem:
I have "bubbles" of dialogs, which must be able to grow in width as well as in height with the size of the text. Fig.1
I immediately tried to use VectorGraphics to import .svg with the idea to move runtime the points of my curves of Beziers.
But I did not find how to access these points and edit them runtime.
I then found the "Sprite shapes" but they are not part of the "UI",
so if I went with such a solution, I would have to reimplement
scroll, buttons etc...
I thought of cutting my speech bubble in 7 parts Fig.2 and scaling it according to the text size. But I have the feeling that this is very heavy for not much.
Finally, I wonder whether a hybrid solution would be best: use the UI for scrolling, then get the transforms and inject them into Sprite Shapes (outside the Canvas).
If the first option (editing the .svg points at runtime) is possible, I would be very grateful for an example.
If not, the other three approaches seem feasible, and I would like your opinion on which of them is the most relevant.
Thanks in advance.
There is a simpler and quite elegant solution to your problem that uses nothing but the sprite itself (or rather the design of the sprite).
Take a look at 9-slicing Sprites in the official Unity documentation.
With the Sprite Editor you can create borders around the "core" of your speech bubble. Since these speech bubbles are usually colored in a single color and contain nothing else, the ImageType: Sliced would be the perfect solution for what you have in mind. I've created a small Example Sprite to explain in more detail how to approach this:
The sprite itself is 512 pixels wide and 512 pixels high. Each of the cubes missing from the edges is 8x8 pixels, so the top, bottom, and left borders are 3x8=24 pixels deep. The right side has an extra 16 pixels of space to represent a small "tail" on the bubble (bottom right corner). So, we have 4 borders: top=24, bottom=24, left=24 and right=40 pixels. After importing such a sprite, we just have to set its MeshType to FullRect, click Apply and set the 4 borders using the Sprite Editor (don't forget to Apply them too). The last thing to do is to use the sprite in an Image Component on the Canvas and set the ImageType of this Component to Sliced. Now you can scale/warp the Image as much as you like - the border will always keep its original size without deforming. And since your bubble has a solid "core", the Sliced option will stretch this core unnoticed.
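If you ever need to set those borders from a script rather than the Sprite Editor, the same values could be passed to Sprite.Create. This is just a rough sketch, assuming the 512x512 example texture described above:

```csharp
using UnityEngine;

// Sketch: the same 512x512 bubble sprite created from a script instead of
// the Sprite Editor, with the borders from the example (left/bottom/right/top).
public static class BubbleSpriteFactory
{
    public static Sprite Create(Texture2D bubbleTexture) // assumed 512x512 readable texture
    {
        // Border order for Sprite.Create is (left, bottom, right, top).
        var border = new Vector4(24f, 24f, 40f, 24f);

        return Sprite.Create(
            bubbleTexture,
            new Rect(0, 0, bubbleTexture.width, bubbleTexture.height),
            new Vector2(0.5f, 0.5f),     // pivot in the center
            100f,                        // pixels per unit
            0,                           // extrude
            SpriteMeshType.FullRect,     // matches the Mesh Type: Full Rect import setting
            border);
    }
}
```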
Edit: When scaling the Image you must use its Width and Height instead of the (1,1,1)-based Scale, because the Scale might still distort your Image.
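To illustrate that last point, resizing such a sliced Image from a script would go through the RectTransform's width and height (sizeDelta) rather than localScale. Again only a rough sketch; the field names and the padding value are assumptions:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: size a 9-sliced bubble Image to fit its Text, using width/height
// (sizeDelta) rather than localScale so the borders are not distorted.
public class BubbleResizer : MonoBehaviour
{
    public Image bubble;        // Image with Image Type set to Sliced
    public Text message;        // the text shown inside the bubble
    public Vector2 padding = new Vector2(40, 30); // extra space around the text

    public void Resize()
    {
        bubble.type = Image.Type.Sliced;   // make sure slicing is on

        // preferredWidth/Height report how much room the text wants.
        float w = message.preferredWidth + padding.x;
        float h = message.preferredHeight + padding.y;

        // Change the rect's size, not the scale, so the border keeps its pixel size.
        bubble.rectTransform.sizeDelta = new Vector2(w, h);
    }
}
```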

What is a good way to have a 2D battle royale circle with an adjustable center and radius in Unity?

I am having trouble creating a battle royale zone in Unity. I want it to be translucent and cover the map, with a masked circle in the middle. However, I don't know if there is any way to control the radius of this mask, which is something I need to do regularly. Does anyone know how to achieve this?
It should look like the map view from Fortnite BR (https://assets.rockpapershotgun.com/images/2019/01/fortnite-small-storm-690x388.jpg/RPSS/resize/690x-1/format/jpg/).
There are a few ways to achieve this effect.
The first and easiest is the built-in Mask component, if you are using UI to render the minimap. You can also create a custom UI mesh.
The last option is a (custom) shader. You can download Unity's built-in shaders and copy the UI one, then add your own logic to the fragment shader: compute the distance from the current UV coordinate to the circle's center, and if it is smaller than the radius the pixel is inside the circle. In that case, either clip the pixel or set its alpha to 0.
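For the first (Mask-based) option, something along these lines could drive the circle's center and radius at runtime. This is only a sketch of one possible setup, with an assumed hierarchy (a circular Image with a Mask component, and the map Image as its child) and made-up field names:

```csharp
using UnityEngine;

// Sketch of the UI Mask approach: a circular Image with a Mask component
// reveals only a circular part of the map Image parented underneath it.
// Field names and hierarchy are assumptions for illustration.
public class ZoneCircle : MonoBehaviour
{
    public RectTransform maskRect;   // circle Image + Mask component
    public RectTransform mapRect;    // full map Image, child of maskRect

    // Set the circle in map-space coordinates (pixels of the map rect).
    public void SetZone(Vector2 centerOnMap, float radius)
    {
        // Resize the circular mask to the zone's diameter.
        maskRect.sizeDelta = Vector2.one * (radius * 2f);

        // Move the mask so it sits over the requested point of the map...
        maskRect.anchoredPosition = centerOnMap;

        // ...and shift the map child the opposite way so the map itself
        // does not appear to move; only the visible circle does.
        mapRect.anchoredPosition = -centerOnMap;
    }
}
```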

How can I make the SCNCamera that I instantiated zoom into only the nodes that I want in SceneKit

Pretend I have 3 nodes in total. One of the nodes is a large SCNSphere, and I put the camera inside this sphere and make the sphere double-sided with a textured image. I then put two smaller spheres next to each other in the center of this sphere. I also enable allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and messing up the detail on that sphere.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterial documentation explains the options for supplying a cube map.
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
I like the idea of using an image as the background, but there are two problems. One is that I looked on the web for ways to make an image the background and none of them work. Two, I want the background to have depth, so to go with that idea I need to find a way to zoom into the background and have the image pan in the opposite direction that I drag.

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera and a render texture. The object is on a separate layer that the main camera does not render, but the secondary camera does. The secondary camera has its target texture set to a render texture that I have created. Everything is working as intended except that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
When rendered directly to the screen it looks correct, but when rendered to a texture and then displayed on a quad in the scene it has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: Width: Screen.width, Height: Screen.height, Format: RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
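For reference, a bare-bones version of the setup described above could look roughly like this; the "Crowd" layer name and the quad reference are placeholders, not taken from the original project:

```csharp
using UnityEngine;

// Sketch of the setup described in the question: a secondary camera renders
// only the "Crowd" layer into a RenderTexture, which is then shown on a quad.
// Layer and object names are assumptions for illustration.
public class CrowdCapture : MonoBehaviour
{
    public Camera crowdCamera;      // secondary camera, matched to the main camera
    public Renderer displayQuad;    // quad that shows the captured texture

    void Start()
    {
        var rt = new RenderTexture(Screen.width, Screen.height, 24,
                                   RenderTextureFormat.ARGBFloat);

        crowdCamera.targetTexture = rt;
        crowdCamera.cullingMask = LayerMask.GetMask("Crowd"); // render only this layer
        crowdCamera.clearFlags = CameraClearFlags.SolidColor;
        crowdCamera.backgroundColor = Color.clear;             // keep the background transparent

        // The main camera skips that layer, so the crowd is only seen through the quad.
        Camera.main.cullingMask &= ~LayerMask.GetMask("Crowd");

        displayQuad.material.mainTexture = rt;
    }
}
```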
I'm not quite sure, but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with:
The reason for this is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) actually still have a colour value too. PNGs (or at least the way Photoshop exports PNGs) seem to default to white for the completely transparent pixels. With other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3d engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled depending on whether the pixels in the texture's image are appearing larger or smaller than actual size. For downsizing, a series of downscaled versions gets created during import. These downscaled versions are used when the texture is displayed at smaller sizes or steeper angles in relation to the view, and are intended to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With the mipmaps, for each smaller level, the problem with the invisible mixing with the visible pixel colours increases (with the result that your nasty white edges become more apparent at further distances away).
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas, however your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects under them become semi-transparent as well, exposing the camera's clear colour background.

OpenGL, primitives with opacity without visible overlap

I'm trying to draw semi-transparent primitives (lines, circles) in OpenGL ES using Cocos2d but can't avoid the visible overlap regions. Does anyone know how to solve this?
This is a problem you usually come across pretty often, even in 3D.
I'm not too familiar with Cocos2D, but one way to solve this in generic OpenGL is to fill the framebuffer with your desired alpha channel, switch the blending mode to glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA) and draw the primitives. The idea behind this is that each primitive is drawn with the desired transparency taken from the framebuffer, and in the process it masks the area it has drawn to, so that subsequent primitives are masked out there.
Another approach is to render the whole thing to a texture or assemble the shape using polygons that don't overlap.
I'm not sure whether Cocos2D supports any of these…
I do not know what capabilities Cocos2D specifically provides, but I can see two options:
One, do not overlap like that, but rather construct more complex geometry such that each pixel is only covered once.
Two, use the stencil buffer to create a mask as you draw, and reject any pixels that are already masked.