Sequence of images for my sprite hurting performance - iPhone

So I have a sprite that I create on the screen every second. This sprite is an animation made of a sequence of 20 images. I would like to know if this can hurt performance, and if so, how I can reduce the impact. Thank you :) Sorry for my English, I'm French :/

I've worked with sprites before, and yes, the more you have on screen the lower your performance will be. The "sequence of 20 images" part is what worries me. Instead of using 20 separate images, look into something called a spritesheet.
A spritesheet puts all your images (an animation in your case, right?) into one file, and you keep a few parameters stored alongside it:
-How big is one frame?
-How many frames?
-X and Y position of the first frame.
Example: if I have a 5-frame animation and each frame is 20x100 pixels, I would put them all in one image file side by side, making the image file 100x100. I would then draw each portion of this spritesheet on the screen in sequential order.
So my parameters would be:
-SizePerFrame = (20, 100)
-TotalSizeOfImage = (100, 100)
-FramesTotal = 5
-x y of First frame = (0,0)
So I would draw the portion from (0,0) to (20,100) for the first frame, (20,0) to (40,100) for the second, and so on.
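If you happen to be using cocos2d, computing the source rectangle for a given frame could look like this minimal sketch (the file name and frame index are made up; it assumes the side-by-side layout described above):
CGSize sizePerFrame = CGSizeMake(20, 100); // size of one frame
int frameIndex = 2; // third frame, zero-based (hypothetical)
// Frames sit side by side, so the x offset is frameIndex * frame width.
CGRect frameRect = CGRectMake(frameIndex * sizePerFrame.width, 0,
                              sizePerFrame.width, sizePerFrame.height);
CCSprite *sprite = [CCSprite spriteWithFile:@"walk_strip.png" rect:frameRect];
The sheet is loaded once and only the texture rectangle changes per frame, which is where the performance win over 20 separate images comes from.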
Hope this makes sense

Related

Getting a sprite's 'real space' height

My Android game uses screen co-ordinates based on real-space co-ordinates, and my conversion goes like this (it's all pseudo-code, but I've formatted it as code):
realSpace = position / 480; (so, for example, 240 / 480 would be halfway across the screen)
velocity = 1 / time; (time in seconds)
realSpace = realSpace + (velocity * deltaTime);
screenCoordinate = realSpace * screenWidth; (or screenHeight for the vertical axis)
Now the problem I have is that in my game I need to match the screen co-ordinates of 2 sprites so that they move together (with one on top of the other), so I'm simply using something like:
Sprite1_Screen_Coords = Sprite2_Screen_Coords - Sprite1_Height
(All heights are scaled, so they are relative to the screen size the app is currently running on.)
This works OK, but I also need to match the 'real space' co-ordinates.
Obviously I can do something like:
Sprite1_Real_Coords = Sprite2_Real_Coords
which means they would be the same, but what value should I subtract so one sprite 'sits' perfectly on top of the other? How do I derive this missing value from the sprites I have? So to summarise, I need something like:
Sprite1_Real_Coords = Sprite2_Real_Coords - (something representing the sprite's height)
Thanks all! :-)
The answer was that I simply divided the sprite's scaled height by the current screen height and used that value. It seems to work on the three resolutions I tested it on.
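In code, that fix might look something like this minimal sketch (the variable names are hypothetical):
// Convert sprite 2's scaled on-screen height into real-space units,
// then sit sprite 1 directly on top of sprite 2.
float sprite2HeightReal = sprite2ScaledHeightPx / screenHeightPx;
float sprite1RealY = sprite2RealY - sprite2HeightReal;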

How to scale down a Cocos2D image repeatedly

I'm repeatedly shrinking an image by a small amount (and then rendering it to a new full-sized image), and the result is that a stripe down the middle is not being shrunk. I'm assuming this has to do with the resize method cocos2d uses. If I increase the amount I scale the image down by, the resize is too fast, and if I decrease the shrink amount, the bar down the middle gets even bigger! The following code is called 60 times a second; the picture below shows the result. So... any suggestions on how to get rid of the bar?
[mySprite setScaleX:rtt.scaleX - .05];
I wasn't quite sure what you meant, but are you calling this line 60 times a second?
[mySprite setScaleX:rtt.scaleX - .05];
If so, then your sprite's scale is dropping by 0.05 × 60 = 3 per second, so it will become negative within a third of a second...
Every time you manipulate an image, you lose information.
A better approach would be to always resize from the original, and just change the resize amount each time, rather than continually resizing the result of the last resize operation.
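For example, a minimal sketch of that approach (originalScaleX, shrinkRate and elapsed are hypothetical names; elapsed is the time since the effect started):
// Recompute the scale from the original value every frame instead of
// compounding rounding error from the previous frame's result.
float targetScaleX = originalScaleX - shrinkRate * elapsed;
[mySprite setScaleX:MAX(targetScaleX, 0.0f)];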
I'm new to the cocos2d engine, so I hope this helps. If you're shrinking an image, I would suggest using CCScaleBy. You can try something like this...
CCScaleBy *shrink = [CCScaleBy actionWithDuration:0.01f scaleX:0.95f scaleY:1.0f];
[yourSprite runAction:shrink];
This will scale your sprite down by 5% each time it's run. Then you can have the sprite replaced by the new image when it reaches what you would consider its smallest pixel point. The duration may need some tuning, but I thought this would help.

Cocos2d. Blurred image while moving (60 fps)

The game was built with cocos2d 0.99.5 and Box2d, on iPhone SDK 4.3.
We have a character. When the character moves quickly it looks blurred (fuzzy/unfocused), both on the simulator and on a device (iPhone 3G).
The character is moved using a mouseJoint (dampingRatio = 0, frequencyHz = -1).
The screenshot (link) doesn't capture the problem: in it the image is clear and the character is in focus.
The frame rate is a constant 60 fps.
Params I have tried:
kCCDirectorProjection2D // 3D
alias // anti-alias texture parameters
CC_COCOSNODE_RENDER_SUBPIXEL set to 1 and 0
Video sample: link
How can I get a clear image of the character while it is moving?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
This is the comment for that define; maybe it helps someone.
/** #def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enabled set it to 1. Disabled by default.
#since v0.99.5
*/
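To make that formula concrete, here is a worked example with made-up numbers (a 256-px-wide texture and a frame at x = 32, width = 32):
// texCoord.left  = (32*2 + 1) / (256*2) = 65/512  ≈ 0.1270
// texCoord.right = 65/512 + (32*2 - 2) / (256*2) = 127/512 ≈ 0.2480
// Without the fix these would be 32/256 = 0.125 and 64/256 = 0.25,
// so the fix insets each edge by half a texel (1/512), which stops
// neighbouring frames from bleeding into the sampled area.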
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
With regards to your comment:
For an experiment, I took a pencil, put it on a sheet of paper and began to move it quickly. My eyes see the pencil in focus, so the problem is not an optical effect but a code problem.
Looking at a moving object in the real world is not the same as looking at a moving object on the screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly, while the screen image "jumps" from place to place. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60fps is still juddery, there is basically no way around this; it is a limitation of current technology.

Getting pixel data from an image on iPhone

I want to use bitmap images as a "map" for levels in an iPhone game. Basically it's all about the locations of obstacles in a rectangular world. The obstacles would be color-coded: where a pixel is white there is no obstacle, and black means there is one at that point.
Now I need to use this data to do 2 things: (a) display the level map, and (b) feed in-game calculations. So, in general, I need a way to read the data from the bitmap and build a matrix-like data structure from it, both to overlay the bitmap onto the level graphics and to calculate collisions and the like.
How should I do this? Is there an easy way to read the data from an image? And what's the best image format to keep these maps in?
Have you looked at how Texture2D translates an image file into an OpenGL texture?
Tip: take a look at this method in Texture2D.m:
- (id) initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum) filter
In 3D apps it's quite common to use this kind of representation for height maps: a texture with colors ranging from black to white, where white represents the maximum altitude. (Example images of a grayscale height map and the 3D terrain rendered from it omitted.)
That was just to tell you that your representation is not that crazy :).
About reading the bitmap, I would also recommend reading this, in case you want to go deeper.
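If it helps, here is a minimal sketch of reading the pixels with plain Core Graphics (an alternative I'm suggesting rather than the Texture2D route; the file name and sample coordinates are made up):
// Draw the image into a bitmap context so we can read raw RGBA bytes.
CGImageRef image = [UIImage imageNamed:@"level_map.png"].CGImage;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
unsigned char *pixels = calloc(width * height * 4, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
CGContextRelease(ctx);
// White = free, black = obstacle, so test one channel against a threshold.
size_t x = 10, y = 20; // sample coordinate (hypothetical)
BOOL isObstacle = pixels[(y * width + x) * 4] < 128;
// ... build your matrix from the buffer, then free(pixels);
Note that Core Graphics contexts put the origin at the bottom-left, so you may need to flip the y axis to match your level's coordinate system.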
Hope I helped a bit!

Warping an image on the iphone with OpenGL

I am fairly new to programming and I'm doing it, at this point, just to educate myself and have fun.
I'm having a lot of trouble understanding some OpenGL stuff despite having read this great article here. I've also downloaded and played around with an example from the Apple developer site that uses a .png image for a sprite. I do eventually want to use an image.
All I want to do is take an image and warp it such that its four corners end up at four different x,y coordinates that I supply. This would run on a timer of sorts (CADisplayLink?), with one or more of these points changing at each moment. I just want to stretch the image between these dynamic points.
I'm just having trouble understanding exactly how this works. As I've understood some example code over at the developer center, I can use:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
where spriteVertices is something like:
const GLfloat spriteVertices[] = {
    -0.90f, -0.85f,
     0.95f, -0.83f,
    -0.85f,  0.85f,
     0.80f,  0.80f,
};
The problem is that I don't understand what the numbers actually mean, why some have negatives in front of them, or where they are counted from to get the four corners. How would I need to change the normal x,y coordinates I have in order to plug them into this? (The numbers I have for x,y wouldn't look like numbers between 0 and 1, would they?) I would like something akin to per-pixel accuracy.
Any help is greatly appreciated even if it's just a link to more reading. I'm having trouble finding resources for a newb.
It isn't as complicated as it seems at first. Each pair of numbers is an x,y position in OpenGL's normalized device coordinates, which run from -1 to +1 across the drawable area with (0,0) at the centre. So 0.80f, 0.80f says go 80% of the way from the centre toward the right and top edges, while -0.80, -0.80 goes 80% of the way toward the left and bottom edges; the negatives just switch sides. A point of note: OpenGL's y axis points up (as if you were looking up a building from the ground), while the iPhone's drawing y axis points down (as though you were reading a book).
To convert to pixels, map [-1, 1] onto [0, width]: pixel = (ndc + 1) / 2 × width. For example, 0.8 on a 1024-pixel-wide drawable is (0.8 + 1) / 2 × 1024 = 921.6.
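Going the other way (pixels to OpenGL coordinates) might look like this minimal sketch, assuming a 320x480 screen and made-up corner positions:
// Map a UIKit pixel position (origin top-left, y down) to OpenGL
// normalized device coordinates (origin centre, y up).
static inline GLfloat toNDCx(GLfloat px, GLfloat screenW) { return px / screenW * 2.0f - 1.0f; }
static inline GLfloat toNDCy(GLfloat py, GLfloat screenH) { return 1.0f - py / screenH * 2.0f; }

GLfloat quad[8] = {
    toNDCx( 10.0f, 320.0f), toNDCy(460.0f, 480.0f),  // bottom-left corner
    toNDCx(310.0f, 320.0f), toNDCy(450.0f, 480.0f),  // bottom-right corner
    toNDCx( 20.0f, 320.0f), toNDCy( 30.0f, 480.0f),  // top-left corner
    toNDCx(300.0f, 320.0f), toNDCy( 40.0f, 480.0f),  // top-right corner
};
glVertexPointer(2, GL_FLOAT, 0, quad);
Updating the four corner values each frame (from your CADisplayLink callback, say) and redrawing gives you the dynamic warp you described.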
This tutorial is for textures, but it is amazing and really helps you learn the coordinate systems:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html