I'm developing a 2D game on iPhone in which I want to magnify the views on the screen whenever the user taps, to give the effect of looking through a sniper scope.
The objects the player wants to shoot have to keep moving, so I'm incrementing their positions as well as increasing their size.
I want the magnified image to show the updated positions of the moving objects at runtime, so the scope effect stays live.
Scaling the individual images didn't help me because it slowed my application down a lot; the objects' positions are updated every 0.01 seconds.
Please help.
Is it a 2D game, or are you using OpenGL ES? If the latter, you can always render your scene to a texture and let the hardware scale it for you. That shouldn't slow down the game too much.
If it's 2D, it's always faster to scale one single image than many individual objects, so here too it may be beneficial to render to an image first, and then scale and draw that image on the screen.
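Here is a minimal UIKit sketch of the second suggestion (render the scene to one image, then scale the result). It assumes a UIKit-drawn scene; `gameView`, the zoom factor, and the lack of cropping to the tapped region are all illustrative simplifications.

```swift
import UIKit

// Sketch: snapshot the whole game view once, then draw that single image scaled up.
// A real scope effect would also crop to the region under the player's finger.
func scopeImage(of gameView: UIView, zoom: CGFloat) -> UIImage {
    // One pass over the scene: render the live view hierarchy into an image.
    let snapshot = UIGraphicsImageRenderer(bounds: gameView.bounds).image { context in
        gameView.layer.render(in: context.cgContext)
    }

    // Scaling a single bitmap is much cheaper than scaling every object separately.
    let zoomedSize = CGSize(width: gameView.bounds.width * zoom,
                            height: gameView.bounds.height * zoom)
    return UIGraphicsImageRenderer(size: zoomedSize).image { _ in
        snapshot.draw(in: CGRect(origin: .zero, size: zoomedSize))
    }
}
```

Because the snapshot is retaken whenever you need a fresh frame, the magnified image automatically reflects the objects' updated positions.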
I was going to use the VideoPlayer to render to Camera Near Plane, but I also want to display subtitles for the video for the sake of accessibility. I'm wondering what the best way to do that is.
I can't see anything on a canvas if I render to Near Plane. I'd like the video to appear in front of the scene so that I can have the scene there once the video is complete.
Do I need to be using a render texture to achieve this? Seems like a render texture might incur some unnecessary overhead for my purposes, but I could be wrong.
The idea is this:
Far Background - Scene
Background - Black Image (so i can fade to scene)
Middleground - Video
Foreground - Subtitles
More info:
This is a 2D point and click adventure game with a pre-rendered cutscene.
You could do this with a render texture placed in front of the camera at an exact distance and size, but I wouldn't. It would probably need to be a different camera anyway, for lighting or clipping purposes.
I would use a second Camera, rendering over top of the Main Camera, with the subtitle UI's canvas targeting the second camera's screen space, and clearing depth only. It will render what it sees, but with a totally transparent background. Then, you can render your video on either the main camera's near plane or the new subtitle camera's far plane.
You could put your black square in front of this camera, too, though it would be in front of the video. It could be UI on the main camera, or stick a third camera in between them. You might have to worry about performance if there are too many cameras, but I have used two or three before to no noticeable performance hit.
Robert Mocks's answer is perfectly tenable and makes sense to me. Thank you for that!
What I decided to do instead was use a RawImage so that I wouldn't have to deal with extra cameras. This way I can use the canvas as I normally would and don't have to deal with render textures.
This involves using the API Only setting along with the following code:
rawImage.texture = videoPlayer.texture;
That seems to work well for me.
Hey guys, I'm using Sprite Kit to develop my game. I want to create a big world for my player to roam around in, but when I use large tile images of 1024 x 768 it uses a lot of memory, which I definitely want to avoid.
In my game the player has the ability to move in all directions, and the camera is centered on the player. I've converted my tiles to 128 x 128, and I've loaded all the tiles and added them to an array, as in Apple's Adventure sample game. However, I want to load only those tiles that are within a certain distance (say x = 1024 and y = 768) of the player, and tiles farther than that distance should be removed from their parent.
Is there a way to achieve this? I'm open to all suggestions. Please help.
Thank You.
I am currently working on a library for a tile scroller; it's not totally finished, but it may suit your needs. It uses a table-view-style datasource pattern to ask for the nodes, with node-reuse logic included: it will ask you for tiles just before showing them, and it will remove the ones that are no longer shown. Take a look at it:
RPTileScroller
I took some effort to make it as efficient as possible, but mind you, I am not a game developer. I tried it on my iPhone 5 with randomly colored 10 x 10 pixel tiles, and it runs at a solid 60 fps.
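For reference, here is a minimal SpriteKit sketch of the distance-based loading the question describes (the straightforward approach, not RPTileScroller's datasource pattern). The 128 x 128 tile size and the 1024 x 768 load radius come from the question; the class name, the dictionary keyed by grid position, and `tileTexture(at:)` are illustrative placeholders.

```swift
import SpriteKit

// Sketch: keep only the tiles within a fixed radius of the player in the scene.
final class TileStreamer {
    let tileSize: CGFloat = 128
    let loadRadius = CGSize(width: 1024, height: 768)
    private var liveTiles: [String: SKSpriteNode] = [:]

    // Call this whenever the player has moved far enough to matter.
    func update(around player: SKNode, in world: SKNode) {
        let minCol = Int((player.position.x - loadRadius.width) / tileSize)
        let maxCol = Int((player.position.x + loadRadius.width) / tileSize)
        let minRow = Int((player.position.y - loadRadius.height) / tileSize)
        let maxRow = Int((player.position.y + loadRadius.height) / tileSize)

        var wanted = Set<String>()
        for col in minCol...maxCol {
            for row in minRow...maxRow {
                let key = "\(col)_\(row)"
                wanted.insert(key)
                if liveTiles[key] == nil {
                    // Tile entered the load radius: create it and add it to the world.
                    let tile = SKSpriteNode(texture: tileTexture(at: col, row))
                    tile.position = CGPoint(x: CGFloat(col) * tileSize,
                                            y: CGFloat(row) * tileSize)
                    world.addChild(tile)
                    liveTiles[key] = tile
                }
            }
        }

        // Tiles that fell outside the load radius are removed from their parent.
        for (key, tile) in liveTiles where !wanted.contains(key) {
            tile.removeFromParent()
            liveTiles[key] = nil
        }
    }

    private func tileTexture(at col: Int, _ row: Int) -> SKTexture {
        // Placeholder: load (or cache) the texture for this grid position.
        SKTexture(imageNamed: "tile_\(col)_\(row)")
    }
}
```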
I'm developing a game with large obstacles and sprites (in cocos2d + Box2D for iPhone). I zoom out my sprites and layer by increasing the camera's Z, and then let the user play the game, but this causes problems with touch detection of dynamic objects.
Is this a good approach to work with? If not, what would be the right way to fix it (considering that I have already come quite far with this approach)?
NOTE: [self.camera setEyeX:0 eyeY:0 eyeZ:180]; (I'm using this line to zoom out, moving the camera away from my sprites by increasing Z.)
If you use a camera for zooming, cocos2d will no longer correctly convert your touch locations to OpenGL coordinates, since it doesn't invert the camera transform. I would recommend implementing zooming by scaling the layer your objects reside on instead. This gives you precise control over the zoom factor, and touches are transformed correctly when you use the methods that convert touches from screen space to node space.
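Since the original cocos2d code isn't shown here, below is the same idea expressed in SpriteKit (cocos2d's layer scaling and node-space conversion are analogous): zoom by scaling the node that contains the world, and convert touches into that node's space so hit-testing stays correct. `worldNode` is an illustrative name.

```swift
import SpriteKit

class GameScene: SKScene {
    let worldNode = SKNode()   // all game objects are children of this node

    override func didMove(to view: SKView) {
        addChild(worldNode)
    }

    // Zoom by scaling the world, not by moving a camera away from it.
    func zoomOut(to factor: CGFloat) {
        worldNode.setScale(factor)   // e.g. 0.5 shows twice as much of the world
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // location(in:) accounts for the node's scale, so this point is in world coordinates.
        let worldPoint = touch.location(in: worldNode)
        let tappedNodes = worldNode.nodes(at: worldPoint)
        // ... hit-test tappedNodes against your dynamic objects ...
    }
}
```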
Hi Friends
I want to make a simple game in which the user hits a car and the car breaks at that point, i.e. the image gets slightly deformed wherever the user hits it. I know this could be done with lots of pre-drawn images that get swapped in when the user hits the car, but I don't want to use that many images.
Is there any solution for this? How can I deform the image? Sorry for my English; here is a link to a Flash game that does exactly what I want:
http://www.playgecogames.com/file.php?f=657&a=popup
Please respond soon. Thanks.
You don't say if this is in 2D or 3D, or what techniques you're going to use.
If you're implementing the game using OpenGL, it's fairly straightforward. The object can be made up of a regular mesh, with the image as a texture mapped to the mesh. When the user hits the object, you just deform the mesh.
A simple method would be to take a vector in the direction of the hit, displace the nearest vertex by an amount proportional to the force of the strike, and then fan out to deform the rest of the mesh by decreasing amounts. By deforming the mesh, the image texture will be rendered with all the dents or deformations you like.
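A sketch of that falloff idea, independent of any particular renderer: vertex positions are displaced along the hit direction, with the displacement shrinking linearly out to a chosen radius. All names and the linear falloff curve are illustrative.

```swift
import simd

// Displace mesh vertices near the impact point, fading out toward `radius`.
func deform(vertices: inout [SIMD2<Float>],
            hitPoint: SIMD2<Float>,
            hitDirection: SIMD2<Float>,
            force: Float,
            radius: Float) {
    let direction = simd_normalize(hitDirection)
    for i in vertices.indices {
        let distance = simd_distance(vertices[i], hitPoint)
        guard distance < radius else { continue }
        // Displacement is proportional to the strike force and falls off with distance.
        let falloff = 1 - distance / radius
        vertices[i] += direction * (force * falloff)
    }
}
```

After updating the vertices, re-upload them to your vertex buffer; the texture coordinates stay the same, so the image renders with the dent.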
If you want to do this without OpenGL and just use straight images, you could use image resampling to simulate the effect. You keep your original pristine image, which is 'filtered' to produce the displayed image. At first there are no deformations, so you copy the original image verbatim. Each time the user hits the object, you add a deformation using a filter or transform within a local region of interest; this function resamples the source image in a distorted manner, making the object look damaged.
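On iOS, Core Image can do this kind of local distortion without writing your own resampler. A sketch, assuming a `CIBumpDistortion` filter is enough for a dent-like effect; the radius and scale values and the `dentedImage` name are illustrative.

```swift
import UIKit
import CoreImage

// Return a copy of `image` with a dent-shaped distortion centred on `point`
// (given in UIKit coordinates, top-left origin, in pixels).
func dentedImage(_ image: UIImage, at point: CGPoint,
                 radius: CGFloat = 60, scale: CGFloat = -0.5) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIBumpDistortion") else { return nil }

    // Core Image uses a bottom-left origin, so flip the y coordinate.
    let center = CIVector(x: point.x, y: image.size.height * image.scale - point.y)
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(center, forKey: kCIInputCenterKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)   // negative scale pushes pixels inward

    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```

Feeding the previous result back into this function each time the car is hit accumulates the dents, which is the same "filter the source into a damaged result" idea described above.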
If you look up some good books on game development, you'll find a great range of approaches to object collisions, deformations and so on.
If you know a bit about image processing techniques, here is the documentation for accessing the pixels of an image:
Apple Reference
There are also libraries for this, such as this one:
simple-iphone-image-processing
But for what you want to do this might not be the easiest way. What I would suggest is that you divide the car into several images depending on what areas can be impacted. Then you just change the image corresponding to the damaged zone each time the car is hit.
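A small sketch of that zone-swapping approach, assuming the car is split into one UIImageView per impact zone and that undamaged/damaged variants exist for each zone; the zone names and asset names are illustrative.

```swift
import UIKit

enum CarZone: String, CaseIterable { case hood, roof, leftSide, rightSide }

final class CarView: UIView {
    private var zoneViews: [CarZone: UIImageView] = [:]

    override init(frame: CGRect) {
        super.init(frame: frame)
        // One image view per zone; real frames/layout are omitted from this sketch.
        for zone in CarZone.allCases {
            let imageView = UIImageView(image: UIImage(named: "car_\(zone.rawValue)"))
            addSubview(imageView)
            zoneViews[zone] = imageView
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Swap only the hit zone for its pre-drawn damaged variant.
    func registerHit(on zone: CarZone) {
        zoneViews[zone]?.image = UIImage(named: "car_\(zone.rawValue)_damaged")
    }
}
```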
I think you should use the cocos2d effects (http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide%3aeffects) plus multiple images, because there are many parts that drop off after the player kicks the car. For example, when the user kicks the side mirror, you should swap the car image for one without the side mirror.
The person who made that Flash game used around 4 images to display the car. If you want the game to be 2D, the easiest way is to draw the car and cut it into about 4 pieces: left side, right side (a mirrored duplicate of the left side), hood, and roof.
If you want to "really" deform the car, you'll have to use a 3D API like OpenGL ES.
I'd really suggest doing it in 2D :)
I suggest having a look at the cocos2d game engine. You can modify images with effects, which are applied using a virtual grid. Have a look at the effects page in their programming guide.
I am writing a game in which there are thumbnails of mini games displayed in a grid, CCSprites in a NSArray. One of these is then scaled and moved to create a zooming effect. Once it has zoomed in it is hidden to reveal the actual "live" minigame (a CCNode), which has been added to the scene invisibly while the zooming animation took place. This means that if the minigame looks exactly the same as the thumbnail there is a seamless transition. After a few seconds, the zoomed in thumbnail reappears covering the actual minigame and zooms out.
My question is, how can I take a snapshot of the actual minigame and use that as the thumbnail so the user cannot tell that the thumbnails are not actually real games? This would have to happen in the split second when the game has paused but the sprite has not reappeared.
I fear that my explanation is not very good, but I hope that someone will understand it!
Ok... solved it. I guess I should have searched more before posting.
After a while, I came across these two articles:
http://www.bit-101.com/blog/?p=1861 and
Replacing image in sprite - cocos2d game development of iphone
I used the code in the first article (after adjusting it for the Retina display) to create an array containing the pixel data. This is then inverted (it's upside down to start with) and pushed into a UIImage. I then init a CCTexture2D with the image and replace the existing sprite texture with it.
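For anyone doing the same thing in SpriteKit rather than cocos2d, the equivalent snapshot is much shorter; a sketch, assuming `minigameNode` is the live minigame and `thumbnail` is the sprite that covers it during the zoom animation.

```swift
import SpriteKit

// Capture the current state of the minigame and use it as the thumbnail's texture.
func refreshThumbnail(in view: SKView, minigameNode: SKNode, thumbnail: SKSpriteNode) {
    if let texture = view.texture(from: minigameNode) {
        thumbnail.texture = texture   // the thumbnail now matches the paused minigame exactly
    }
}
```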
I hope this helps someone else at some point.