I'm implementing a parallax background for my 2D side-scrolling game with Flame, similar to the example.
The ParallaxComponent API seems to support only placing layers using Alignment, which puts them at the top/center/bottom of the screen.
I wonder if there's a way to position layers more precisely and change their size. Thanks!
The API of ParallaxComponent doesn't support this yet, but what you could do is use multiple ParallaxComponents on top of each other, since they are PositionComponents that you can place and size as you like.
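A rough sketch of that approach, assuming a recent Flame 1.x API; the image names, velocities and the 60/40 screen split are placeholders:

```dart
import 'package:flame/components.dart';
import 'package:flame/game.dart';
import 'package:flame/parallax.dart';

class MyGame extends FlameGame {
  @override
  Future<void> onLoad() async {
    // One ParallaxComponent per layer group. Each one is a PositionComponent,
    // so it can be placed and sized independently of the others.
    final far = await loadParallaxComponent(
      [ParallaxImageData('far.png')],
      baseVelocity: Vector2(10, 0),
    );
    far.position = Vector2(0, 0);
    far.size = Vector2(size.x, size.y * 0.6); // upper 60% of the screen

    final near = await loadParallaxComponent(
      [ParallaxImageData('near.png')],
      baseVelocity: Vector2(30, 0),
    );
    near.position = Vector2(0, size.y * 0.6);
    near.size = Vector2(size.x, size.y * 0.4); // lower 40% of the screen

    addAll([far, near]);
  }
}
```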
I'm new to Unity and to game development in general.
I would like to make a text-based game.
I'm looking to reproduce the behavior of an instant messenger like Messenger or WhatsApp.
I made the choice to use the Unity UI system for its pre-made components, like the Scroll Rect.
But this choice led me to the following problem:
I have "bubbles" of dialogs, which must be able to grow in width as well as in height with the size of the text. Fig.1
1. I first tried using VectorGraphics to import an .svg, with the idea of moving the points of my Bézier curves at runtime. But I couldn't find a way to access and edit these points at runtime.
2. I then found Sprite Shapes, but they are not part of the UI system, so if I went with that solution I would have to reimplement scrolling, buttons, etc.
3. I thought of cutting my speech bubble into 7 parts (Fig. 2) and scaling it according to the text size. But I have the feeling that this is very heavy for not much.
4. Finally, I wonder if a hybrid solution would be best: use the UI for scrolling, then get the transforms and inject them into Sprite Shapes (outside the Canvas).
If 1. is possible, I would be very grateful for an example. If not, 2., 3. and 4. seem feasible, and I would like your opinion on which of the three is most relevant.
Thanks in advance.
There is a simpler and quite elegant solution to your problem that uses nothing but the sprite itself (or rather the design of the sprite).
Take a look at 9-slicing Sprites in the official Unity documentation.
With the Sprite Editor you can create borders around the "core" of your speech bubble. Since these speech bubbles are usually colored in a single color and contain nothing else, the ImageType: Sliced would be the perfect solution for what you have in mind. I've created a small Example Sprite to explain in more detail how to approach this:
The sprite itself is 512 pixels wide and 512 pixels high. Each of the cubes missing from the edges is 8x8 pixels, so the top, bottom, and left borders are 3x8=24 pixels deep. The right side has an extra 16 pixels of space to represent a small "tail" on the bubble (bottom right corner). So, we have 4 borders: top=24, bottom=24, left=24 and right=40 pixels.
After importing such a sprite, we just have to set its MeshType to FullRect, click Apply and set the 4 borders using the Sprite Editor (don't forget to Apply them too).
The last thing to do is to use the sprite in an Image Component on the Canvas and set the ImageType of this Component to Sliced. Now you can scale/warp the Image as much as you like - the border will always keep its original size without deforming. And since your bubble has a solid "core", the Sliced option will stretch this core unnoticed.
Edit: When scaling the Image you must use its Width and Height instead of the (1,1,1)-based Scale, because the Scale might still distort your Image. Also, here is another screenshot showing the results in different sizes.
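For the runtime side, here is a minimal sketch of how the resizing could look. It assumes the sliced bubble sprite sits on a UI Image with a child Text holding the message; the field names, padding and maximum width are placeholders:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class SpeechBubble : MonoBehaviour
{
    [SerializeField] private Image bubbleImage;  // Image Type set to Sliced
    [SerializeField] private Text label;         // child Text with the message
    [SerializeField] private Vector2 padding = new Vector2(24f, 16f);
    [SerializeField] private float maxWidth = 400f;

    public void SetText(string message)
    {
        label.text = message;

        // Constrain the text width first, then read the preferred height,
        // which depends on how the text wraps at that width.
        float width = Mathf.Min(label.preferredWidth, maxWidth);
        label.rectTransform.SetSizeWithCurrentAnchors(RectTransform.Axis.Horizontal, width);
        float height = label.preferredHeight;
        label.rectTransform.SetSizeWithCurrentAnchors(RectTransform.Axis.Vertical, height);

        // Resize the bubble via Width/Height, not localScale, so the
        // sliced borders keep their original pixel size.
        var bubble = bubbleImage.rectTransform;
        bubble.SetSizeWithCurrentAnchors(RectTransform.Axis.Horizontal, width + padding.x * 2f);
        bubble.SetSizeWithCurrentAnchors(RectTransform.Axis.Vertical, height + padding.y * 2f);
    }
}
```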
I am making a 3D game in Unity and am currently working on the main menu. The problem is that when I use a 16:9 aspect ratio I get this result:
The background is not scaled.
However, when I use the Free Aspect ratio, I get this:
Here are the background object's properties and the sprite's properties:
I have no clue what the problem is here or how to solve it. I hope to find some help here. Thanks in advance.
You should extract one tile of the background sprite like this:
Then it should be tiled correctly.
The key is to extract the smallest repeatable element.
Also, you don't have to worry about the Free Aspect mode; just make sure it looks fine at the real screen resolutions you target.
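If you would rather size the background from code than by hand, here is a minimal sketch. It assumes the background is drawn by a SpriteRenderer in front of an orthographic camera, and that the sprite's Mesh Type is set to Full Rect so it can be tiled; the class name is made up:

```csharp
using UnityEngine;

[RequireComponent(typeof(SpriteRenderer))]
public class TiledBackground : MonoBehaviour
{
    void Start()
    {
        var cam = Camera.main;                  // must be an orthographic camera
        var sr = GetComponent<SpriteRenderer>();
        sr.drawMode = SpriteDrawMode.Tiled;     // repeat the tile instead of stretching it

        // World-space size of the camera view:
        // height = 2 * orthographicSize, width = height * aspect.
        float height = cam.orthographicSize * 2f;
        float width = height * cam.aspect;
        sr.size = new Vector2(width, height);   // the tile repeats to fill this area
    }
}
```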
In Unity, is there a way to give slight color variations to a scene (a tint of purple here, some yellow blur there) without adjusting every single texture? And can that work in VR stereo images too (ideally in a semi-consistent way as one moves around, and perhaps also without having to use computationally heavy colored lights)? Many thanks!
A simple way to achieve this, if your color effect is fixed, would be to add a canvas that renders a half-transparent image over the whole screen. But I suppose you might prefer a dynamic effect.
To achieve that, look at Unity's Post Processing Stack. It lets you add many post-process effects, such as chromatic aberration and color grading, which should allow you to do what you want.
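As a rough illustration of the second approach, here is a sketch against the Post Processing Stack v2 API. It assumes a PostProcessVolume in the scene whose profile already contains a Color Grading override; the tint value is a placeholder:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

public class SceneTint : MonoBehaviour
{
    [SerializeField] private PostProcessVolume volume;
    [SerializeField] private Color tint = new Color(0.9f, 0.8f, 1.0f); // slight purple

    void Start()
    {
        // Color grading is applied after the scene is rendered, so it affects
        // both eyes of a VR rig consistently and is far cheaper than colored lights.
        if (volume.profile.TryGetSettings(out ColorGrading grading))
        {
            grading.colorFilter.overrideState = true;
            grading.colorFilter.value = tint;
        }
    }
}
```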
I'm trying to add lighting, to a certain extent, within my tilemap-based iPhone game. For lack of a better example, I'm trying to add Minecraft-style lighting - the further a tile is from the light source, the darker its tint.
The most efficient way I can think of to do this would be to add some kind of mask over the tilemap layer to create the effect, and simply move the mask with the tilemap as the player moves around.
I haven't been able to find any documentation on how to add masks to an entire layer. Is this possible, or is it bad practice? Or can you think of a better method for achieving this effect?
The simplest and most efficient solution would be to modify the color property of a tile. By default, all nodes have the color white, and by applying gray shades between black and white you can control the brightness of the tile.
Note, however, that when you treat a tile like a CCSprite, cocos2d converts the tile from its basic implementation into an actual CCSprite. This may become a performance and/or memory issue: each CCSprite instance was 420 bytes the last time I checked, in cocos2d 0.99.
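A rough sketch of the idea for cocos2d-iphone, assuming a CCTMXTiledMap with a layer named "ground"; the layer name, light position and falloff factor are placeholders:

```objc
// Darken each tile based on its distance (in tiles) from a light source.
CCTMXLayer *layer = [tileMap layerNamed:@"ground"];
CGPoint lightTile = ccp(10, 10);

for (int x = 0; x < layer.layerSize.width; x++) {
    for (int y = 0; y < layer.layerSize.height; y++) {
        // Note: tileAt: converts the tile into a CCSprite (see the caveat above).
        CCSprite *tile = [layer tileAt:ccp(x, y)];
        if (!tile) continue;

        float distance = ccpDistance(ccp(x, y), lightTile);
        GLubyte brightness = (GLubyte)MAX(0, 255 - distance * 40); // tweak the falloff
        tile.color = ccc3(brightness, brightness, brightness);
    }
}
```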
I've come across a strange render bug on iPhone OS 3.0...
I have two images. One is a non-transparent PNG that is predominantly black with a white gradient fading upward.
The second is a transparent PNG with translucent clouds.
When I overlay the two using UIImageViews, the intersection of the clouds and the white gradient triggers a render bug: a rather odd-looking graphical glitch that removes all opacity from the image on top (in this case the clouds) and causes the glitched portion of the image to render on top of all layers in the current view (including ones it is technically underneath).
It only occurs at the intersection of the two portions of the images. So typically only a very small block is experiencing the error while the rest of the images render normally.
Has anyone seen this, and does anyone have a fix? I want to check before I move on to Core Animation, which will hopefully address the problem (since I imagine that CA, or even OpenGL, is more apt to handle overlapping alpha channels).
Screenshot found here:
http://www.jasconi.us/glitch.jpg
You can see the intersect of the two images at the lower right.
From your description, this seems to be a bug in Apple's code. I would report it to Apple and wait for a fix.
In the meantime, you can try to implement the same functionality in Core Animation or OpenGL in the hope that the bug is in the higher-level UIImageView, but since the UIImageView itself uses Core Animation, it's possible that this bug is simply unavoidable until it's fixed.
I assume you're displaying them using UIImageView? If so, have you set opaque to NO on the transparent view?
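For reference, the check would look something like this; `cloudView` is a made-up name for whichever UIImageView shows the transparent PNG:

```objc
// A view showing a transparent PNG must not be flagged as opaque,
// otherwise the compositor may ignore its alpha channel.
cloudView.opaque = NO;
cloudView.backgroundColor = [UIColor clearColor];
```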