I want to make an irregular shape display its colors from a different set of images. Currently, I have a flashlight represented by an irregular (kite-shaped) shape that, when cast over an area, makes certain objects appear. When the shape is removed, the items disappear. Now I want the background within this irregular shape to display an altered version of the background.
I am planning to use a RenderTexture that gets its contents from a camera viewing the flashlight's corresponding location in the other world, and then use this image as the basis of the flashlight's altered background. However, when I try this, the flashlight shows black instead of the other world. When I texture a plane, the RenderTexture works properly. Anyone have any ideas how to accomplish this?
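Here is roughly what I'm trying, as a minimal sketch (otherWorldCamera and flashlightMaterial are my own names):

using UnityEngine;

public class FlashlightView : MonoBehaviour
{
    public Camera otherWorldCamera;      // camera viewing the other world
    public Material flashlightMaterial;  // material on the flashlight shape

    private RenderTexture rt;

    void Start()
    {
        // Render the other-world camera into a texture...
        rt = new RenderTexture(512, 512, 16);
        otherWorldCamera.targetTexture = rt;

        // ...and use that texture as the flashlight's altered background.
        flashlightMaterial.mainTexture = rt;
    }
}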
I want to apply a brightening effect on top of my scene.
My scene contains tiles, and I want to perform a white flash for a few frames from code.
I have already tried this code:
private Tilemap tm;
...
tm = GetComponent<Tilemap>();
tm.color = new Color(0.5f, 1.0f, 1.0f, 1.0f); // tint applied to every tile
This code darkens the scene by a certain amount, but I want to brighten it instead.
Your code is not working because in Unity a rendered image (in your case, a tile) shows its original colors only when its tint color is white (255, 255, 255, 255).
This means that if you change the color of an image, Unity multiplies the image's pixels by that color, so any tint below white can only darken it.
For instance, if you set the color of an image to red, the image's colors will be filtered toward red, because the green and blue channels are suppressed.
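To illustrate the multiply with concrete numbers (Unity's 0..1 color range; Color * Color multiplies channel by channel):

// A pixel (1.0, 0.8, 0.6) tinted with (0.5, 1.0, 1.0) renders as (0.5, 0.8, 0.6).
Color pixel = new Color(1.0f, 0.8f, 0.6f);
Color tint = new Color(0.5f, 1.0f, 1.0f);
Color rendered = pixel * tint; // never brighter than the original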
As I see it, you have two ways to perform the white flash:
A) Add another image of a white rectangle that covers the whole screen and set its alpha to a low value (the lower the value, the subtler the flash effect).
In the editor, disable this object's renderer, and when you want to perform a flash, enable the object from code. (You can improve this with animations or code to get a smooth flash; a rough sketch follows option B below.)
B) Install the "2D Light" package. This is an experimental package that allows you to render 2D lights.
This package contains a number of components that let you simulate lighting.
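For option A, a minimal sketch of the flash from code (the names, e.g. flashImage, are placeholders; it fades a full-screen white Image in and out):

using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class WhiteFlash : MonoBehaviour
{
    public Image flashImage;       // full-screen white Image, disabled by default
    public float duration = 0.2f;  // total flash time in seconds
    public float maxAlpha = 0.6f;  // lower = subtler flash

    public void Flash()
    {
        StartCoroutine(FlashRoutine());
    }

    private IEnumerator FlashRoutine()
    {
        flashImage.enabled = true;
        float half = duration / 2f;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            // Fade in over the first half, out over the second.
            float a = t < half ? t / half : 1f - (t - half) / half;
            flashImage.color = new Color(1f, 1f, 1f, a * maxAlpha);
            yield return null;
        }
        flashImage.enabled = false;
    }
}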
I have found a way to do this.
I created a new PNG that contains only white shapes on a transparent background.
There are about 20 pieces that match the shapes of my level tilemap.
Now I just create a new (white) tilemap above the level tilemap in the shape of the highlight.
Then I set the alpha of the white tilemap in code.
It works :)
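In case it's useful, the alpha change itself is just this (a minimal sketch; highlightTilemap is my white tilemap):

using UnityEngine;
using UnityEngine.Tilemaps;

public class HighlightFade : MonoBehaviour
{
    public Tilemap highlightTilemap; // the white tilemap above the level tilemap

    public void SetHighlightAlpha(float alpha)
    {
        // Only the alpha channel changes; the white tiles brighten what's below.
        highlightTilemap.color = new Color(1f, 1f, 1f, alpha);
    }
}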
Say I have 3 nodes in total. One of the nodes is a large SCNSphere, and I put the camera inside this sphere, making the sphere double-sided with a textured image. I then put two smaller spheres next to each other in the center, inside the large sphere. I also enable allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and ruining the detail on that sphere.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterial documentation explains the options for supplying a cube map.
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
I like the idea of using an image as the background, but there are two problems. First, I looked on the web for ways to make an image the background, and none of them worked. Second, I want the background to have depth, so to go with that idea I need a way to zoom into the background and have the image pan in the opposite direction that I drag.
I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. So, in the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split up into 9 parts - the corners, middle, and sides.
This works because the button's middle and sides are stretched, but not its corners, so the button stays crisp at any size.
So, the question is as follows: how can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane, and I know how to rescale the plane to fit the text inside the speech bubble.
I've set the texture's sprite mode to Multiple and divided it up into 9 parts. However, I cannot seem to find where to make the texture act like the UI button does, and I'm not sure this is even possible this way in 3D space.
Is there a way, or should I just make the different parts of the texture different objects, and move them together? That would seem very inefficient and ugly compared to this.
To accomplish what you are asking, you would need to create tiles for this speech bubble and then write a script that procedurally builds a speech bubble based on the plane's scale value. You could also try just changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not just use a Unity Canvas and set its Render Mode to World Space? Then you can make your text box a sprite rather than a texture and set its filter mode to Point. This would also make it a lot easier when you want text in the speech bubble later on.
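If you'd rather build that setup from code, here is a minimal sketch (bubbleSprite is a placeholder; normally you'd wire this up in the editor):

using UnityEngine;
using UnityEngine.UI;

public class SpeechBubbleSetup : MonoBehaviour
{
    public Sprite bubbleSprite; // 9-sliced sprite (borders set in the Sprite Editor)

    void Start()
    {
        // A world-space canvas lives in the 3D scene like any other object.
        var canvasGO = new GameObject("SpeechBubble", typeof(Canvas));
        var canvas = canvasGO.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // A sliced Image stretches its middle and edges but not its corners.
        var imageGO = new GameObject("Bubble", typeof(Image));
        imageGO.transform.SetParent(canvasGO.transform, false);
        var image = imageGO.GetComponent<Image>();
        image.sprite = bubbleSprite;
        image.type = Image.Type.Sliced;
    }
}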
I have an image of a mountain with small gutters and tunnels in it. I want to move a small image through those tunnels. How can I detect the intersection of that small image with the exact boundaries of the large image in cocos2d?
I would make a collision mask for this.
What this means is to create an exact copy of the image you are using for your terrain, except make it only two colors: white and black.
Make the areas that you want the player to be able to move through (not walls) white, and make the walls and anything you want the player to collide with black. Next, do some pixel collision detection. To do this, I would get the RGB data (not RGBA, because alpha doesn't matter here). Loop through this data (or a section of it, for better performance) and detect whether the player is on a black or a white pixel.
Do whatever you need to accordingly.
If you need more help, feel free to ask.
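The technique itself is engine-agnostic, so to illustrate the lookup here is a sketch in Unity C# (in cocos2d you would read the pixel data from the underlying CGImage instead; maskTexture must be imported as readable):

using UnityEngine;

public class MaskCollision : MonoBehaviour
{
    public Texture2D maskTexture; // black/white copy of the terrain

    // Returns true if the given mask pixel is a wall (black).
    public bool IsWall(int x, int y)
    {
        // On a black/white mask, a single channel is enough to test.
        return maskTexture.GetPixel(x, y).r < 0.5f;
    }
}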
I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach was to generate a mask based on the touch position and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), and this degrades performance and greatly increases memory usage, eventually crashing the game. (I don't think I have any memory leaks, but I can't guarantee that; in any case, the performance itself isn't acceptable.)
Can anyone think of a better approach (which doesn't involve using OpenGL ES -- that's not an option in this project) to do this?
To go with my comments above.
Maybe to get around the different shadow levels you could also have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity, and when the spot is over a grid square, that square's alpha opacity changes to 0. When the spot moves off the grid square, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case, but you could generate a single mask image, say a radial alpha gradient, and then apply an affine transform to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
I would try this: use one view with a custom drawRect implementation. First draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in a 'multiply' blend mode.
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and mask that instead? That way memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, initialize the mask once and manipulate it (reset its frame) as needed upon touch events.