Pixelation issues on iPhone AIR game

I am working on a cross-platform mobile game for Android and iOS devices. I am using Adobe Flash with AIR and AS3 to code the game, and I am drawing my character, obstacles, and backgrounds in Adobe Illustrator. The canvas in Flash is set to 960x640. The character was intended to be 1/3 of the screen height, so around 213 pixels high, but I designed him in Adobe Illustrator at somewhere around 900 pixels high. When I imported the character into Flash I animated him, instantiated him with var player:Player = new Player(), and scaled him down to size using the scaleX and scaleY properties.
I tested it on the desktop and an Android phone and it looked wonderful. However, when I tested it on an iPhone, the player was unacceptably pixelated around the edges. I figured that drawing the animation much larger than the intended height must be the problem, so I redrew the player at exactly 213 pixels high and retested on the iPhone, without any improvement in the quality of the animation. I also tried converting the MovieClip to a Bitmap as explained here, but that also had no effect on the quality of the animation.
At this point, I am at a loss. Does anyone have any suggestions on how to avoid this pixelation issue that I am experiencing when going from Adobe Illustrator to Flash to the iPhone?

For a more optimised route on iPhone you might want to consider creating your animation as a set of bitmap graphics, i.e. export them as PNG files from Photoshop at the exact size you want them displayed at.
By doing this you'll save the CPU work of having Flash scale and smooth the bitmaps at runtime.

Try setting smoothing to true on your bitmap:
yourbitmap.smoothing = true;

I have a couple of suggestions for your problem.
The character should not have dark outlines; gradients give a better effect.
Also pay attention to the color combination for your character, the background you use, and the other on-screen elements.

Related

Unity Viewport size to Full HD?

At the moment I am starting with Unity for 2D development on Android. Before Unity I developed with LibGDX, where I could set the viewport to a static resolution of 1080 * 1920; if the game then ran on a smaller device, for example 480 * 800, it still looked pretty good. In Unity, when I use an orthographic camera and set the width to 1080 and the height to 1920, it looks like a square rather than portrait.
How can I use a static camera whose viewport is 1080 * 1920 and have Unity handle the resolution itself on other devices?
Sorry for my bad english :(
greetings coco07!
You can't. You're thinking in terms of 2D engines. Unity is a 3D engine with a (sort of) 2D extension. You set your camera up (it doesn't matter what orthographic size you use), and it gets scaled to the viewport of the device it's running on. The size you set is the size of the viewport, in world units, along the y axis of the device's display. The width is set automatically based on the device's aspect ratio. You can see this by resizing the game window in Unity: doing that causes the visualization of the camera's bounds to change.
To create a static camera, you'd have to manually add black borders around the edges (or at least I think you do), or, better yet, create your game in a manner that runs on all aspect ratios. You should consider aspect ratios between 4:3 and 16:9 for landscape games and 3:4 to 9:16 for portrait games. This is especially important since many recent devices use on-screen system buttons (such as the Google Nexus and Xperia Z series), which means you get an aspect ratio a little bit less than 16:9 (or a bit more, on tablets). This doesn't sound like much, but if you down-scale a 16:9 image to fit on those devices' screens, it looks ugly as hell.

Cocos2d : tile maps VS 1 giant image

I have been doing some reading and some tutorials on tile maps in cocos2d, but what I want is one large graphic map, not made up of tiles, that the user can drag around. So my question is this:
Is it going to cause performance issues to have a large map (this will be on the iPad, so maybe 4x the screen size)?
The maximum texture size you can have is 2048x2048 for any device above the iPhone 3G (this includes all iPads, although it may be larger on the iPhone 4S, but I doubt it). You can find tons of links on Google with a simple search for "iphone max texture size". That means the largest image you can have is between 4 and 6 iPad screens. If this is enough for you, great! If not, you'll have to tile the screen anyway, regardless of the size of your tiles. Hope that helps!
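If you'd rather check at runtime than rely on a table of devices, you can ask the driver directly. A rough sketch, assuming an OpenGL ES 1.1 context is already current (the helper name is made up):

    #include <OpenGLES/ES1/gl.h>

    /* Ask the driver for the largest texture dimension it supports
       (e.g. 1024 on the iPhone 3G, 2048 on later devices). */
    static GLint maxTextureSize(void)
    {
        GLint size = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size);
        return size;
    }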

How can I get rid of artifacts in my OpenGL ES iPhone app?

I'm using the Texture2D class from the CrashLanding sample code. I'm getting strange artifacts around my images in both the simulator and on the phone. The artifacts are little gray borders around the textures; the borders are inconsistent and do not surround the entire texture. I'm using PNGs.
Hey MrDatabase - it sounds like the problem is that your texture images have premultiplied alpha. I've had this problem on the iPhone too - the PNG processing performed when you build your app automatically premultiplies all the alpha values. If you're using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), you're basically applying the alpha twice - try glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead. There's lots of stuff in the Apple forums about this :-)
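For reference, a minimal sketch of that blend setup for premultiplied PNGs, assuming ES 1.1 fixed-function rendering (the function name is made up):

    #include <OpenGLES/ES1/gl.h>

    /* Blend state for textures whose PNG data was premultiplied at build time.
       The colour channels already carry the alpha, so the source factor is
       GL_ONE; using GL_SRC_ALPHA here would apply the alpha a second time and
       produce the grey fringes around the edges. */
    static void setupPremultipliedBlending(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }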
Are your textures all powers of two in width and height? If not, that's probably your problem.
I also had problems with textures smaller than a certain size. I remember someone saying that for small textures you should clear the memory after it's allocated; changing malloc to calloc in the Texture2D source fixed the problem for me.
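The idea looks roughly like this (a sketch only; the helper is hypothetical and just mirrors the kind of allocation Texture2D does):

    #include <stdlib.h>

    /* calloc returns zero-initialised memory, so any padding a small or
       non-power-of-two image leaves inside the power-of-two buffer uploads
       as transparent black instead of whatever garbage malloc left there. */
    static void *allocTexturePixels(size_t width, size_t height)
    {
        /* was: malloc(width * height * 4); */
        return calloc(width * height, 4);   /* 4 bytes per RGBA pixel, zeroed */
    }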

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (without the status bar), so the image won't have power-of-two dimensions. Is the main option copying it into a 512x512 texture and adjusting the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
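In code it looks roughly like this - a sketch assuming tightly packed RGBA pixel data from the camera and an ES 1.1 context (the names are made up):

    #include <OpenGLES/ES1/gl.h>

    /* Upload a 320x480 camera frame into a 512x512 power-of-two texture,
       then only sample the used sub-rectangle when drawing. */
    static void uploadCameraFrame(GLuint tex, const void *rgbaPixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        /* Allocate the full power-of-two texture (contents undefined). */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* Copy the 320x480 image into one corner of it. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 480,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
    }

    /* When drawing the quad, map the texture coordinates to the used region
       only: s runs from 0 to 320/512, t from 0 to 480/512. */
    static const GLfloat texCoords[] = {
        0.0f,            0.0f,
        320.0f / 512.0f, 0.0f,
        0.0f,            480.0f / 512.0f,
        320.0f / 512.0f, 480.0f / 512.0f,
    };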
An alternative would be dividing the picture into squares with a width and height of 32 pixels (i.e. tiling), resulting in 10x15 tiles for a 320x480 image. Displaying it would however involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory with a tiled approach.

Why do images for textures on the iPhone need to have power-of-two dimensions?

I'm trying to solve this flickering problem on the iPhone (OpenGL ES game). I have a few images that don't have power-of-two dimensions. I'm going to replace them with images with appropriate dimensions... but why do the dimensions need to be powers of two?
The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created in order to make the thing look correct at a very small size. The image is divided by 2 over and over to make new images.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If this image was 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5 which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
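To see where a non-power-of-two size breaks down, you can just print the chain. Plain C, nothing GL-specific (the function name is made up):

    #include <stdio.h>

    /* Print the mipmap chain for a base image size, halving both dimensions
       each level. 256x128 halves cleanly all the way to 1x1; 256x192 reaches
       4x3, whose half would be 2x1.5 - not a whole number of pixels, which is
       exactly what strict hardware objects to (here it simply rounds down). */
    static void printMipChain(int w, int h)
    {
        while (w > 1 || h > 1) {
            printf("%dx%d\n", w, h);
            w = w > 1 ? w / 2 : 1;
            h = h > 1 ? h / 2 : 1;
        }
        printf("1x1\n");
    }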
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware has access to a 2x2 version of the texture, so each pixel drawn is the average color of its quadrant of the image. This eliminates the odd color flashing.
http://en.wikipedia.org/wiki/Mipmap
Edit to people who say this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash on game load on a laptop that was only about a year old. It worked fine on most machines, though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.
Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.
You can find OpenGL ES support info for Apple iPod/iPhone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is based on desktop OpenGL 2.0.
The constraint on texture sizes was only lifted in version 2.0.
So if you use an OpenGL ES version lower than 2.0, this is normal behaviour.
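If you'd rather check at runtime than rely on the version number alone, you can look at the extensions string. A rough sketch (the helper name is made up, and whether a given device advertises this extension is an assumption to verify):

    #include <OpenGLES/ES2/gl.h>
    #include <string.h>

    /* Returns non-zero if the current context advertises full
       non-power-of-two texture support via GL_OES_texture_npot. */
    static int hasNPOTSupport(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, "GL_OES_texture_npot") != NULL;
    }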
I imagine it's a pretty decent optimization in the graphics hardware to assume power-of-two textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-two in Maya, the rendering is all messed up.
Are you using PVRTC compression? That requires powers of 2 and square images.
Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-two sizes are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
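For example, wrapping a texel coordinate (a sketch in plain C; the function names are made up):

    /* With a power-of-two width, wrapping a texel coordinate is a single
       bitwise AND; with an arbitrary width it needs an integer modulo. */
    static inline int wrapPow2(int x, int width)   /* width must be a power of two */
    {
        return x & (width - 1);
    }

    static inline int wrapAny(int x, int width)
    {
        return ((x % width) + width) % width;      /* also handles negative x */
    }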
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.