I'm trying to figure out if there is a better way to draw a fading line than the method I am currently using.
Currently, to draw a fading line that can be moved around the screen, I am using SKEmitterNodes. The SKEmitterNodes, however, are extremely CPU-expensive: they have a birth rate of 300 to be able to maintain a thick line while being moved around the screen. Does anyone know of a better way to achieve this fading-line drawing effect?
The effect I am looking for is similar to the lines being drawn in Dark Echo. Here is a video. https://www.youtube.com/watch?v=tuOC8oTrFbM
Thanks,
Chris
If you are looking to fade a node's alpha property, you can use (SKAction *)fadeOutWithDuration:(NSTimeInterval)sec. This will gradually fade your node's alpha to zero over the specified time frame.
For proper memory management, you should remove the node from its parent once you no longer need it and its alpha has reached zero.
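A minimal sketch of that approach (lineNode is an assumed name for one segment of your line, already added to the scene):

#import <SpriteKit/SpriteKit.h>

// Fade the segment out over half a second, then remove it so it
// doesn't accumulate in the node tree.
SKAction *fadeOut = [SKAction fadeOutWithDuration:0.5];
SKAction *remove = [SKAction removeFromParent];
[lineNode runAction:[SKAction sequence:@[fadeOut, remove]]];

Spawning cheap SKSpriteNode or SKShapeNode segments along the touch path and fading them like this should be far lighter than an emitter with a birth rate of 300.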
Related
In Unity, is there a way to give slight color variations to a scene (a tint of purple here, some yellow blur there) without adjusting every single texture? And for that to work in VR stereo images too (and ideally in a semi-consistent way as one moves around, and perhaps also without having to use computing-heavy colored lights)? Many thanks!
A simple way to achieve this, if your color effect is fixed, would be to add a canvas that renders a half-transparent image over the whole screen. But I suppose you might prefer a dynamic effect.
To achieve that, look at Unity's Post Processing Stack. It lets you add many post-process effects, such as chromatic aberration and color grading, which might allow you to do what you want.
I'm trying to create a cumulative trail effect in a render texture. By cumulative I mean that the render texture would show the last few frames overlaid on each other. Currently, when my camera outputs to a render texture it completely overwrites whatever was there previously.
Let me know if I can clarify anything.
Thanks!
You could set the camera's clear flags to Don't Clear. This prevents the camera from clearing the previous frame, which creates the overlap, somewhat like Flash-style motion trails.
The issue is that everything will be kept on screen, so if only the character moves it is fine, but if the camera moves the effect also applies to the environment and your scene becomes a big blur.
You could use two cameras for the effect, each with different rendering layers: one takes care of the items that should not have the effect, and one takes care of those that should. This way you can apply the effect to characters and ignore the environment; if that separation is not required, just go with one camera.
I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach was to generate a mask depending on the position of the touch, and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), which hurts performance and greatly increases memory usage, eventually crashing the game. (I don't think I have any memory leaks, but that's not guaranteed; in any case, the performance itself isn't acceptable.)
Can anyone think of a better approach (which doesn't involve using OpenGL ES -- that's not an option in this project) to do this?
To go with my comments above: maybe, to get around the different shadow levels, you could also have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity; when the spot is over a grid square, that square's alpha opacity changes to 0, and when the spot moves off it, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case, but what you could do is generate a single mask image, say a radial alpha gradient, and then apply an affine transform to it to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
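A rough sketch of that idea (not necessarily your exact setup): stack a second, lit copy of the scene above the shadow image, mask it with a single radial-gradient layer built up front, and just move the mask on each touch. revealView and the 100-point radius are assumed names/values.

#import <QuartzCore/QuartzCore.h>

// Build the spotlight mask once, not on every touch event.
- (CALayer *)spotlightMaskWithRadius:(CGFloat)radius {
    CGSize size = CGSizeMake(radius * 2.0, radius * 2.0);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    NSArray *colors = @[(__bridge id)[UIColor whiteColor].CGColor,
                        (__bridge id)[UIColor colorWithWhite:1.0 alpha:0.0].CGColor];
    CGGradientRef gradient = CGGradientCreateWithColors(space, (__bridge CFArrayRef)colors, NULL);
    CGPoint center = CGPointMake(radius, radius);
    // Opaque in the middle, transparent at the edge: the masked view is
    // fully visible under the finger and invisible everywhere else.
    CGContextDrawRadialGradient(ctx, gradient, center, 0.0, center, radius, 0);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGGradientRelease(gradient);
    CGColorSpaceRelease(space);

    CALayer *mask = [CALayer layer];
    mask.contents = (__bridge id)image.CGImage;
    mask.bounds = CGRectMake(0.0, 0.0, size.width, size.height);
    return mask;
}

// Setup, once: self.revealView.layer.mask = [self spotlightMaskWithRadius:100.0];

// On every touch, only the mask's position changes -- no new images are
// created, so memory use stays flat.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.revealView];
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // skip the implicit animation
    self.revealView.layer.mask.position = p;
    [CATransaction commit];
}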
I would try this: use one view with a custom drawRect: implementation. First draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in 'multiply' blend mode.
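Something along those lines (a sketch; shadowImage, spotImage, backgroundImage, and spotOrigin are assumed properties of the view):

- (void)drawRect:(CGRect)rect {
    // Grayscale shadow first, then a soft white spot where the finger is.
    [self.shadowImage drawInRect:self.bounds];
    [self.spotImage drawAtPoint:self.spotOrigin]; // top-left corner of the spot
    // Multiplying the background over the white/gray result keeps it bright
    // inside the spot and darkens it everywhere the shadow is.
    [self.backgroundImage drawInRect:self.bounds blendMode:kCGBlendModeMultiply alpha:1.0];
}

On each touch you would update spotOrigin and call setNeedsDisplay.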
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and mask that instead? That way the memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, initialize the mask once and manipulate it (reset its frame) as needed upon touch events.
I have my app up and running and would like to add some polish to it. One of the first things I'd like to do is improve the transitions.
Unfortunately I have spent most of my time in OpenGL and still haven't got a solid grasp on working with the UIView system. What is a good way to transition into your app?
I load pretty quickly, so I was thinking of a quick fade-in, but my GL view loads and draws at least a frame before I really get control, so I am not sure of the best way to go about this.
A quick and dirty way would be to just create a black (or white) solid-color, full-screen UIView overlaying the OpenGL view, and have it fade its alpha down to zero over some n seconds.
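For example (a sketch; assumes you are in your view controller once the GL view has drawn its first frame):

// Cover the GL view with a black overlay, then fade it out and remove it.
UIView *coverView = [[UIView alloc] initWithFrame:self.view.bounds];
coverView.backgroundColor = [UIColor blackColor];
[self.view addSubview:coverView];

[UIView animateWithDuration:1.0 animations:^{
    coverView.alpha = 0.0;
} completion:^(BOOL finished) {
    [coverView removeFromSuperview];
}];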
UIView also has an alpha property (the setAlpha: setter). In your draw call, gradually decrease the alpha value until it reaches 0; when it hits 0, draw the next "view" of your app (which pretty much loads your next 3D objects) and increase the alpha back to 1. That should do the trick.
I have never seen this issue before; it's very strange. Just wondering if anyone else has come across this too.
I have added a sprite to my game; it's supposed to be the top-left corner of a box to put text onto. I want to make it scalable without losing anything, so I have it broken up into sections. In the image above, the one on top is the image itself, and the one on the bottom is the image as it is drawn in the iPhone simulator.
Any idea why the last column of pixels on the right is altered? I have not scaled the image at all.
I don't know about Cocos2D, but in general terms what you've done here is to draw the image at a position that isn't an exact multiple of one pixel.
Consequently, even without scaling, you have resampled the image across a grid that doesn't coincide with the original image data, causing all pixels to be a bit off. The right-hand edge is just the most obvious case, since the resampling leaves you with partial transparency there. But look at, e.g., the two rows of purple pixels in the border: they're not the same, because your vertical alignment is off as well, causing a small amount of colour from the grey border below to bleed into the lower row of purple.
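If that is the cause, snapping the sprite to whole-pixel coordinates should fix it; in Cocos2D that could look like this (a sketch, assuming a sprite called mySprite):

// Round the position to an exact pixel so no resampling occurs.
mySprite.position = ccp(roundf(mySprite.position.x), roundf(mySprite.position.y));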
OK, I actually figured it out this time. Cocos2D adds a bit of antialiasing to CCTextures. To stop it from doing this, you need to call this:
[[mySprite texture] setAliasTexParameters];
To turn it back on, you call this:
[[mySprite texture] setAntiAliasTexParameters];