iPhone, how do you draw from a texture page using Core Graphics?

I'm trying to work out how to draw from a texture page using Core Graphics.
Given a texture page (a CGImageRef) which contains multiple packed 64x64 textures, how do I render sub-areas of that page onto the device context?
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect, but this creates a new image. I don't want a new image; I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks

What's wrong with CGImageCreateWithImageInRect?
// Create a sub-image that refers to the source rect of the original
// (it retains the original image rather than copying the pixels),
// draw it, then release it.
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
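
As a usage sketch: for a page of packed 64x64 tiles, the source rect can be derived from the page width. Here n is a hypothetical tile index, and the tiles are assumed to be packed row by row:

// Example: source rect of tile n in an atlas of packed 64x64 tiles.
// image is the CGImageRef texture page; n is a hypothetical index.
size_t n = 5;
size_t tilesPerRow = CGImageGetWidth(image) / 64;
CGRect srcRect = CGRectMake((n % tilesPerRow) * 64,
                            (n / tilesPerRow) * 64,
                            64, 64);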

Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route, since it seems the simplest and most natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.

Related

Split the image into a mesh of small quadrangles in OpenGL ES

How do I split an image into a mesh of small quadrangles in OpenGL ES?
I need to split the image into small parts and then stretch only one part of the image, not the whole image.
Is this possible using OpenGL? I am new to OpenGL.
[Before-and-after images of the hair edit were attached here.]
So the image stretches from any side and in any way.
Can you explain in more detail? What quadrangles would you like to use? Are they simple rectangles or squares? Or are they a bit more complicated, like parallelograms, trapezoids, kites or rhombuses?
Anyway, it's easy to split an image into any quadrangles you want. You can get any quadrangle from your image by giving OpenGL the texture coordinates at each vertex: glTexCoord2f(x, y)
I forgot to mention that a texture in OpenGL is addressed with normalized coordinates from (0.f, 0.f) to (1.f, 1.f), so it doesn't matter what size your picture is: you can split it into as many quadrangles as you want.
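One caveat: glTexCoord2f is desktop OpenGL's immediate mode, which OpenGL ES on the iPhone doesn't have; there you pass the same normalized coordinates as an array via glTexCoordPointer. A rough OpenGL ES 1.1 sketch that draws one quadrangle sampling only a sub-region of the bound texture (all coordinate values are made-up examples):

// Draw one quad textured with the sub-region (0.25, 0.25)-(0.5, 0.5)
// of the currently bound texture.
static const GLfloat vertices[] = {
    -0.5f, -0.5f,   // bottom-left
     0.5f, -0.5f,   // bottom-right
    -0.5f,  0.5f,   // top-left
     0.5f,  0.5f,   // top-right
};
static const GLfloat texCoords[] = {
    0.25f, 0.25f,
    0.50f, 0.25f,
    0.25f, 0.50f,
    0.50f, 0.50f,
};
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);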
There is one very useful tutorial about texture mapping where you may find more detailed answers. I also recommend keeping an eye on the official OpenGL documentation.
Good luck.

How to copy arbitrary path from one UIImage to another

I'm new to iPhone graphics and it's a bit daunting.
The problem: I have UIImageA and UIImageB. Both are the same picture, except UIImageB has all of its pixel values darkened.
I'd like to copy an arbitrary piece of UIImageA on top of UIImageB. The end result would be a dark image with part of the original image bright.
My guess is that I will need to:
Create a "path" that is the arbitrary shape to copy. I think I can figure this out.
Take UIImageA and somehow crop it or mask it to the path.
Copy the part of UIImageA onto UIImageB at the exact same position.
It's steps 2 and 3 that have me confused. I've seen many examples of cropping images to a rectangle, or masking images with another pre-defined image, but nothing that exactly does this.
Does anyone have any general pointers?
You could try Core Image.
You can do all that with CGImage in your environment. You can use a bitmap or layer context and then later render it on whatever view you wish.
There is a good Core Graphics Quartz tutorial at
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
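
A minimal sketch of the bitmap-context route for the steps above, assuming imageA and imageB (same-sized UIImages) and a CGPathRef called path already exist:

// Composite the path-clipped part of imageA over the darkened imageB.
CGSize size = imageB.size;
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Step 3's destination: draw the dark image as the base layer.
[imageB drawInRect:CGRectMake(0, 0, size.width, size.height)];

// Step 2: clip to the arbitrary path, so only the pixels inside it
// land on top of imageB.
CGContextAddPath(ctx, path);
CGContextClip(ctx);
[imageA drawInRect:CGRectMake(0, 0, size.width, size.height)];

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();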

Drawing pixels with Xcode

I am trying to draw individual pixels in Xcode.
I already know Objective-C, but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
I don't know how to find a tutorial for this, and I think it must be very quick. Do you guys have a file like this, or could you point me to a tutorial?
Any help is greatly appreciated.
Simply use NSBitmapImageRep with setColor:atX:y:
Create an empty NSBitmapImageRep.
Every time you need to update the view, do something like this:
- Set the specific pixel to a certain color with setColor:atX:y:
- Convert the NSBitmapImageRep to an NSImage
- Show the result in an NSImageView
This works perfectly if you don't need to update the view too many times per second.
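
A sketch of those steps, assuming a 256x256 canvas and an outlet called imageView (both made-up for the example; memory management elided). Note this is AppKit, since NSBitmapImageRep is a Mac class:

// Create an empty 32-bit RGBA bitmap rep.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:256
                  pixelsHigh:256
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// Set one pixel to a color.
[rep setColor:[NSColor redColor] atX:10 y:20];

// Wrap the rep in an NSImage and show it in the image view.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image addRepresentation:rep];
[imageView setImage:image];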
I already know Objective-C, but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
If you're not interested in learning Core Graphics, then tough luck. You get two choices for graphics: OpenGL, or UIKit/Core Graphics. Your choice, but Core Graphics is considerably easier. You can use OpenGL to paint on a per-pixel basis, but I'm assuming that if you have no interest in learning Core Graphics, you're probably not going to be keen on OpenGL. For high-performance applications, OpenGL is the only realistic option.
So, if you don't want to learn OpenGL your best bet is the Quartz programming guide:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP40007533-SW1
Out of interest, why wouldn't you want to look into Quartz?
You could either use Core Graphics to draw a filled rect in drawRect: after a setNeedsDisplay on the view, or you could generate a bitmap context and assign it to the layer contents of your view at a 1.0 scale if you want actual pixels instead of 1x1-point rectangles.
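
A sketch of the drawRect: variant, in a UIView subclass (the color and position are example values):

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Example: paint the point at (10, 20) red. At a 1.0 scale,
    // one 1x1-point rect is one pixel.
    CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(10, 20, 1, 1));
}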

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general, and I'm trying to make a 3D game for the iPhone; more specifically, I want to use textures that contain transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip), but the transparent parts of the image are not transparent when I run the app in the simulator: they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all, your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
Depth values are set in the depth buffer. Note that depth values are written all across your object; transparency does not affect them.
You then draw other objects behind the transparent object.
But these objects' pixels will not be drawn, because they fail the depth test against the values the transparent object already wrote.
The solution to this problem is to draw your scene back-to-front (start with the things that are farthest away).
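If sorting every triangle is impractical, a common variant is to draw opaque geometry first, then draw transparent geometry back-to-front with depth writes disabled, so nearer transparent quads don't mask farther ones. A sketch, assuming OpenGL ES 1.1 (the draw* helpers are hypothetical):

glEnable(GL_DEPTH_TEST);
drawOpaqueObjects();                          // hypothetical helper

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // premultiplied alpha
glDepthMask(GL_FALSE);                        // keep the depth test, stop depth writes
drawTransparentObjectsBackToFront();          // hypothetical helper
glDepthMask(GL_TRUE);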
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.

Generating graphics at runtime with Cocos2D - How to display?

I'm trying to create dynamic graphics for my game, which I'm building with Cocos2D. The graphics generation will occur at predictable, finite points, such as level loading. I'm having a hard time figuring out how to actually draw this at runtime. From what I can tell, the easiest way would be to draw into a PNG file at runtime and then load an AtlasSprite based on the PNG file, but I can't seem to figure out if this is indeed the best way or how to go about doing it. Any suggestions?
I'm not sure how Cocos2D loads Sprites or Atlases, so this is a more general answer.
It might be worth taking a look at the Texture2D class that comes with the old CrashLanding example app. It uses a bitmap graphics context to generate a texture of a string for drawing with OpenGL. The code uses the CGBitmapContextCreate function to create a context. You can draw whatever you want onto it.
Then once you've finished drawing, you can either save the file as a PNG or you can call glTexImage2D on the data to use it with OpenGL.
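
A rough sketch of that path, with example dimensions and error handling omitted:

// Create an RGBA bitmap context backed by a buffer we own.
size_t w = 128, h = 128;
void *data = calloc(w * h * 4, 1);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4, space,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// ...draw whatever you want into ctx with Core Graphics calls...

// Upload the pixels as an OpenGL texture.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

CGContextRelease(ctx);
free(data);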
There's more information about it in the Graphics and Drawing documentation, specifically the section Creating and Drawing Images.
Edit: It looks like Cocos2D comes with Texture2D so you should be in good shape. Check out the initWithString method here.