I am trying to draw individual pixels in Xcode.
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have a map of X*Y and be able to show a pixel at (x, y) with color RGB.
I haven't been able to find a tutorial for this, and I think it must be very quick to do. Do you guys have a file like this, or could you point me to a tutorial?
Any help is greatly appreciated.
Simply use NSBitmapImageRep with setColor:atX:y:
Create an empty NSBitmapImageRep.
Every time you need to update the view, do something like this:
- Set the specific pixel to a certain color with setColor:atX:y:
- Convert the NSBitmapImageRep to an NSImage
- Show the result in an NSImageView
This works perfectly if you don't need to update the view too many times per second.
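A minimal sketch of those steps, assuming a Cocoa (Mac) app with an NSImageView outlet named imageView; the dimensions, coordinates, and color are illustrative, and memory management is elided:

    // Create an empty 32-bit RGBA bitmap (dimensions are illustrative).
    NSInteger width = 256, height = 256;
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    // Set the pixel at (x, y) to an RGB color.
    [rep setColor:[NSColor colorWithCalibratedRed:1.0 green:0.0 blue:0.0 alpha:1.0]
              atX:10 y:20];

    // Wrap the bitmap in an NSImage and hand it to the NSImageView.
    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];
    [imageView setImage:image];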
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have a map of X*Y and be able to show a pixel at (x, y) with color RGB.
If you're not interested in learning Core Graphics, then tough luck. You get two choices for graphics: OpenGL, or UIKit/Core Graphics. Your choice, but Core Graphics is considerably easier. You can use OpenGL to paint on a per-pixel basis, but I'm assuming that if you have no interest in learning Core Graphics, you're probably not going to be keen on OpenGL. For high-performance applications, though, OpenGL is the only realistic option.
So, if you don't want to learn OpenGL your best bet is the Quartz programming guide:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP40007533-SW1
Out of interest, why wouldn't you want to look into Quartz?
You could either use Core Graphics to draw a filled rect in drawRect: after calling setNeedsDisplay on the view, or you could generate a bitmap context and assign it to the layer contents of your view at a 1.0 scale if you want actual pixels instead of 1x1-point rectangles.
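For example, a rough sketch of the first approach in a hypothetical UIView subclass (the coordinates and color are illustrative):

    // Fills one "pixel" (a 1x1-point rect) whenever the view redraws.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);        // red
        CGContextFillRect(ctx, CGRectMake(10.0, 20.0, 1.0, 1.0)); // "pixel" at (10, 20)
    }

    // Elsewhere, trigger the redraw with: [myView setNeedsDisplay];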
Related
Suppose I was writing a game which involved a relatively complex geometric game board. Something like a dartboard.
I would want a view to display the game state. What is the best way to implement that view?
For example, should I draw the board offline in something like Photoshop, add it as a resource, and then show it using a UIImageView? Or should I use drawing primitives and essentially draw the board programmatically?
What are the trade-offs?
If I do use an image, what format should I prefer? .png, .tiff, .gif, .jpg?
Thanks,
John
If you decide to go the image route, you should use PNG. You pay a performance hit for displaying any other format (as mentioned in the comment).
To decide between building in Photoshop vs. drawing via code, you need to decide how much time you want to put into learning Quartz/Core Graphics. Apple's docs:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
If you already know Photoshop, then building the graphic there is probably much easier; if you don't, then Quartz is probably a less steep learning curve than Photoshop...
If it's a simple board, it's easy enough to draw it into the view, which gives you the possibility of easily manipulating it in interesting ways. Drawing in a view is done with a set of PostScript-like primitives (see the sketch at the end of this answer).
For something more fancy, Photoshop might be the way to go.
PNGs are preferred.
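To illustrate the programmatic route, here's a rough sketch of drawing a dartboard-like board with Core Graphics primitives in a custom view's drawRect: (the ring count and colors are my own illustrative choices):

    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
        CGFloat maxRadius = MIN(rect.size.width, rect.size.height) / 2.0;

        // Fill concentric rings from the outside in, alternating colors.
        for (int ring = 4; ring >= 1; ring--) {
            CGFloat r = maxRadius * ring / 4.0;
            if (ring % 2 == 0)
                CGContextSetRGBFillColor(ctx, 0.0, 0.5, 0.0, 1.0); // green
            else
                CGContextSetRGBFillColor(ctx, 0.9, 0.9, 0.9, 1.0); // off-white
            CGContextFillEllipseInRect(ctx,
                CGRectMake(center.x - r, center.y - r, r * 2.0, r * 2.0));
        }
    }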
I'm trying to create a "page curl" animation of an image in my iPhone application. I tried UIViewAnimationTransitionCurlUp and its undocumented Core Animation siblings; however, the image I need to animate is a transparent PNG with "uneven" (some alpha pixels) outlines. When using the aforementioned pre-made transition, those alpha pixels are painted black as soon as the animation starts, which looks terribly ugly.
Therefore, I seek to create a Core Animation of my own. I have tried to research the subject, but have been unable to find a good overview of the techniques involved. The implementation would of course have to be more complex than a single property change; I get the feeling that even CATransform3D would be too limited for this purpose, as the image needs to have different 3D transformations applied to different parts of it, changing over time. How would one then go about this? I'm very grateful for any thoughts or ideas!
Best,
Eli
As Corey points out, you'll probably need to go with OpenGL ES for this one. Core Animation exposes the ability to work with layers, even in 3-D, but all layers are just rectangles and they are manipulated as such. You can animate the flipping of a layer about an axis, even with a perspective distortion, but the kind of curving you want to do is more complex than you can manage using the Core Animation APIs.
You might be able to split your image up into a mesh of tiny layers and manipulate each using a CATransform3D to create this curving effect, but at that point you might as well be using OpenGL ES to create the same effect.
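For what it's worth, a rough sketch of that split-into-layers idea, slicing the image into vertical strips so each strip can carry its own CATransform3D. The strip count, the source UIImage (image), the container layer (containerLayer), and the per-strip rotation are all illustrative assumptions:

    int strips = 20;
    CGFloat stripWidth = image.size.width / strips;
    for (int i = 0; i < strips; i++) {
        // Cut one vertical strip out of the source image.
        CGRect slice = CGRectMake(i * stripWidth, 0, stripWidth, image.size.height);
        CGImageRef sliceImage = CGImageCreateWithImageInRect(image.CGImage, slice);

        CALayer *layer = [CALayer layer];
        layer.frame = slice;
        layer.contents = (id)sliceImage;
        CGImageRelease(sliceImage);

        // Give each strip a slightly different 3D transform (animatable).
        CATransform3D t = CATransform3DIdentity;
        t.m34 = -1.0 / 500.0;                          // perspective
        t = CATransform3DRotate(t, 0.02 * i, 0, 1, 0); // rotate about Y
        layer.transform = t;
        [containerLayer addSublayer:layer];
    }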
The book Core Animation for Mac OS X and the iPhone: Creating Compelling Dynamic User Interfaces from Pragmatic Programmer may help you write custom Core Animation animations.
I have a straight image and I want to deform it in a wave-like manner.
Original image:
straight texture http://img145.imageshack.us/img145/107/woodstraight.png
and I want it to look like this (except animated):
bent texture http://img145.imageshack.us/img145/8496/woodbent.png
I haven't tackled the learning curve of OpenGL yet, so if I can do this with Core Animation it would be great.
Is this possible?
Unfortunately, I think this is a job for OpenGL. You could achieve the same effect in Quartz by slicing the image up vertically and drawing segments with different vertical offsets... but I don't think you'd be able to achieve good enough performance to animate it (at least, with 1px- or 2px-wide slices).
You could also leave the image stationary, and use Quartz to animate a masking path that would create the waving edges. That probably wouldn't look too natural, though.
As far as I know, Core Animation on the iPhone isn't capable of doing this, either. On the Mac it comes with some more advanced filters, but I think you'd probably see a lot more stuff like this if the iPhone filters could do it :-)
OpenGL does have quite a learning curve, but here's what you'd want to do to achieve the effect: create a flat rectangle in OpenGL with several vertices along its length. Point the camera at the rectangle so that it appears flat. Then use a sine() function of some sort to animate the vertices back and forth in place.
This approach is also used to achieve the rippling-water effect, and you might be able to find an example or two of it.
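A rough sketch of that vertex-animation idea in OpenGL ES 1.1 (the segment count, amplitude, and function names are illustrative; texturing is omitted for brevity):

    #include <OpenGLES/ES1/gl.h>
    #include <math.h>

    #define SEGMENTS 32
    static GLfloat vertices[(SEGMENTS + 1) * 2 * 2]; // (x, y) pairs for a triangle strip

    // Recompute the vertex positions each frame, displacing them with sinf().
    void updateWave(float time) {
        for (int i = 0; i <= SEGMENTS; i++) {
            float x = -1.0f + 2.0f * i / SEGMENTS;
            float offset = 0.05f * sinf(time + i * 0.5f); // the wave displacement
            vertices[i * 4 + 0] = x;              // top vertex
            vertices[i * 4 + 1] = 0.2f + offset;
            vertices[i * 4 + 2] = x;              // bottom vertex
            vertices[i * 4 + 3] = -0.2f + offset;
        }
    }

    void drawWave(void) {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, (SEGMENTS + 1) * 2);
    }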
Sorry to bring bad news :-) Hope that helps!
I'm writing a game that displays 56 hexagon pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class that, when called to draw a piece, creates a path from 6 points based on the coordinates passed in. This path is filled with a solid color, and then a 59x59 PNG with an alpha-to-white gradient is overlaid on the drawing to give the piece a shiny look. Note that I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could somehow do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, drawing the PNG appears to be the most taxing part of the process. I've tried rendering just the PNG overlay and just the path without the overlay, and both give me some frame gains, although removing the PNG overlay yields the most frames.
My current thought is that at startup I should render 6 paths (one for each piece color), overlay them with the PNG, store images of these finished pieces, and then just redraw those images each time I need them. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kind of sounds like I'd be running into the whole drawing-PNGs-too-often problem again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
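A hedged sketch of the CGLayer route: render the piece once at startup, then stamp the cached layer each frame instead of rebuilding the path. The helper name and vertex placement are my own assumptions:

    // Build one piece into a CGLayer; Quartz can cache this efficiently.
    CGLayerRef MakePieceLayer(CGContextRef ctx, CGSize size) {
        CGLayerRef layer = CGLayerCreateWithContext(ctx, size, NULL);
        CGContextRef lc = CGLayerGetContext(layer);
        CGFloat w = size.width, h = size.height;

        // A rough hexagon path (the PNG gradient overlay would go here too).
        CGContextBeginPath(lc);
        CGContextMoveToPoint(lc, w * 0.5, 0);
        CGContextAddLineToPoint(lc, w, h * 0.25);
        CGContextAddLineToPoint(lc, w, h * 0.75);
        CGContextAddLineToPoint(lc, w * 0.5, h);
        CGContextAddLineToPoint(lc, 0, h * 0.75);
        CGContextAddLineToPoint(lc, 0, h * 0.25);
        CGContextClosePath(lc);
        CGContextSetRGBFillColor(lc, 0.2, 0.4, 0.8, 1.0);
        CGContextFillPath(lc);
        return layer;
    }

    // Per frame, drawing each piece is then just a cheap blit:
    // CGContextDrawLayerAtPoint(ctx, CGPointMake(x, y), pieceLayer);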
General thoughts:
- Game programming on the iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
- Prerender this "shiny look" into the textures as much as possible (as in: do it in Photoshop before you even insert them into your project). Alpha blending is hell on performance.
- Maybe try PVRTC, as it's a format used by the iPhone GPU's manufacturer. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed, you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation; they can conflict.
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move that layer around without taking a big hit. Also note that CA stores the layer contents in texture memory, so it should be much faster.
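For instance, a minimal sketch of the tile-as-layer idea (hexImage is an assumed prerendered piece image):

    // One CALayer per piece; contents are set once and cached as a texture.
    CALayer *piece = [CALayer layer];
    piece.bounds = CGRectMake(0, 0, 59, 59);
    piece.contents = (id)hexImage.CGImage;
    [self.view.layer addSublayer:piece];

    // Moving it later just re-composites the cached texture; no redraw occurs.
    piece.position = CGPointMake(160, 240);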
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems to be significantly faster than using CoreAnimation in my tests, and provides lots of useful stuff for games.
I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
    // Extract just the 64x64 sub-texture; per the docs, this references the
    // original image's data rather than copying the pixels.
    CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
    if (subImage) {
        // Draw the sub-texture into the destination rect on the context.
        CGContextDrawImage(context, destRect, subImage);
        CFRelease(subImage);
    }
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.
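For example, a sketch of the texture coordinates that would select one 64x64 tile from an assumed 256x256 page (the tile column and row are illustrative):

    float pageSize = 256.0f, tileSize = 64.0f;
    int col = 2, row = 1; // which tile to select

    // Normalized (0..1) texture coordinates of the tile's corners.
    float u0 = (col * tileSize) / pageSize;
    float v0 = (row * tileSize) / pageSize;
    float u1 = u0 + tileSize / pageSize;
    float v1 = v0 + tileSize / pageSize;

    // Triangle-strip order: bottom-left, bottom-right, top-left, top-right.
    GLfloat texCoords[] = { u0, v1,  u1, v1,  u0, v0,  u1, v0 };
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);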