I have zero experience with manipulating image files of any sort with code, so I am lost about where to begin. All I need to do is open a PNG image file and save it rotated 90 degrees in objective-C. I am a quick learner, so even a push in the right direction would help immensely. I know this is no obscure function; any GUI image editor is capable of this, so I figure someone should be able to help. Thanks in advance!
(also, I have tagged this with iPhone to get more exposure; this is not something that needs to be iPhone-exclusive.)
Here's your "push":
Create a CGImage from the original file, using ImageIO (CGImageSourceCreateWithURL, CGImageSourceCreateImageAtIndex).
Create a bitmap context with transposed size, using Core Graphics (CGBitmapContextCreate).
Rotate the context's transformation matrix (CGContextConcatCTM).
Draw the original image into that context (CGContextDrawImage).
Create a new image from the bitmap context (CGBitmapContextCreateImage).
Save the image to a new file (CGImageDestinationCreateWithURL, CGImageDestinationAddImage, CGImageDestinationFinalize).
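Put together, a rough sketch of those steps might look like the following (error handling is omitted, and the function name and bitmap parameters are my own choices for a typical RGBA PNG):

#import <CoreGraphics/CoreGraphics.h>
#import <ImageIO/ImageIO.h>
#include <math.h>

void RotatePNG90(CFURLRef srcURL, CFURLRef dstURL)
{
    // 1. Load the original image.
    CGImageSourceRef source = CGImageSourceCreateWithURL(srcURL, NULL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    // 2. Bitmap context with transposed size.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, height, width, 8, 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);

    // 3. Adjust the CTM so the rotated image lands inside the context
    //    (equivalent to a single CGContextConcatCTM with a rotation matrix).
    CGContextTranslateCTM(context, 0, width);
    CGContextRotateCTM(context, -M_PI_2);

    // 4. Draw the original image into the rotated context.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    // 5. Pull a new image out of the bitmap context.
    CGImageRef rotated = CGBitmapContextCreateImage(context);

    // 6. Write the result out as a PNG ("public.png" is the PNG UTI).
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(dstURL, CFSTR("public.png"), 1, NULL);
    CGImageDestinationAddImage(dest, rotated, NULL);
    CGImageDestinationFinalize(dest);

    CFRelease(dest);
    CGImageRelease(rotated);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(image);
    CFRelease(source);
}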
Have you tried looking at the NSImage Rotation question asked on Stack Overflow?
I'm new to iPhone graphics and it's a bit daunting.
The problem: I have UIImageA, and UIImageB. Both are the same picture, except UIImageB has all the pixel values darkened.
I'd like to copy an arbitrary piece of UIImageA onto the top of UIImageB. The end result would be a dark image, with the part of the original image bright.
My guess is that I will need to:
Create a "path" that is the arbitrary shape to copy. I think I can figure this out.
Take UIImageA and somehow crop it or mask it to the path.
Copy the part of UIImageA onto UIImageB at the exact same position.
It's steps 2 and 3 that have me confused. I've seen many examples of cropping images to a rectangle, or masking images with another pre-defined image, but nothing that exactly does this.
Does anyone have any general pointers?
You could try Core Image.
You can do all that with CGImage in your environment. You can use a bitmap or layer context and then later render it on whatever view you wish.
There is a good Core Graphics Quartz tutorial at
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
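For example, one way to handle steps 2 and 3 is to let a clipping path do the cropping while you draw with UIKit (a sketch only; the helper name is made up, and it assumes both images are the same size and you already have a CGPathRef for your shape):

#import <UIKit/UIKit.h>

UIImage *CompositeBrightRegion(UIImage *darkImage, UIImage *brightImage, CGPathRef path)
{
    UIGraphicsBeginImageContextWithOptions(darkImage.size, NO, darkImage.scale);

    // Start with the darkened image (UIImageB) as the base layer.
    [darkImage drawAtPoint:CGPointZero];

    // Clip all further drawing to the arbitrary shape...
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, path);
    CGContextClip(context);

    // ...so only the clipped region of the bright image (UIImageA) is copied,
    // at exactly the same position.
    [brightImage drawAtPoint:CGPointZero];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}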
I am trying to draw individual pixels in Xcode.
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
I don't know how to find a tutorial for this, and I think it must be very quick. Do you guys have a file like this, or could point me to a tutorial?
Any help is greatly appreciated.
Simply use NSBitmapImageRep with setColor:atX:y:
Create an empty NSBitmapImageRep.
Every time you need to update the view do something like this:
- Set the specific pixel to a certain color with setColor:atX:y:
- Convert NSBitmapImageRep to NSImage
- Show result in a NSImageView
This works perfectly if you don't need to update the view too many times per second.
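A minimal sketch of that approach (AppKit, so Mac rather than iPhone; the 256x256 size and the imageView outlet are placeholders):

#import <Cocoa/Cocoa.h>

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:256
                  pixelsHigh:256
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// Set the pixel at (10, 20) to red.
[rep setColor:[NSColor redColor] atX:10 y:20];

// Wrap the rep in an NSImage and show it.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image addRepresentation:rep];
imageView.image = image;   // assumes an NSImageView outlet called imageView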
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
If you're not interested in learning Core Graphics, then tough luck. You get two choices for graphics: OpenGL, or UIKit/Core Graphics. Of the two, Core Graphics is considerably easier. You can paint on a per-pixel basis with OpenGL, but if you have no interest in learning Core Graphics you're probably not going to be keen on OpenGL either; it is, however, the only realistic option for high-performance applications.
So, if you don't want to learn OpenGL your best bet is the Quartz programming guide:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html%23//apple_ref/doc/uid/TP40007533-SW1
Out of interest, why wouldn't you want to look into Quartz?
You could either use Core Graphics to draw a filled rect in drawRect: after a setNeedsDisplay on the view, or you could generate a bitmap context and assign its image to your view's layer contents at a 1.0 scale if you want actual pixels instead of 1x1 point rectangles.
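A sketch of the drawRect: approach might look like this (the map size and the placeholder color math are made up; substitute your own (x, y) -> RGB lookup and call setNeedsDisplay when it changes):

#import <UIKit/UIKit.h>

static const int kMapWidth  = 64;
static const int kMapHeight = 64;

@interface PixelView : UIView
@end

@implementation PixelView

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    for (int y = 0; y < kMapHeight; y++) {
        for (int x = 0; x < kMapWidth; x++) {
            // Placeholder color; look up your own (x, y) -> RGB value here.
            CGFloat r = x / (CGFloat)kMapWidth;
            CGFloat g = y / (CGFloat)kMapHeight;
            CGContextSetRGBFillColor(context, r, g, 0.5, 1.0);
            // Each map cell becomes a 1x1 point rectangle.
            CGContextFillRect(context, CGRectMake(x, y, 1, 1));
        }
    }
}

@end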
I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all, your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
Depth values are written to the depth buffer. Note that the depth buffer gets values across your whole object; transparency does not affect it.
You then draw other objects behind the transparent object.
But none of these objects' pixels will be drawn, because they fail the depth test against the depth values already written by the transparent object.
The solution to this problem is to draw your scene back-to-front (start with the things that are furthest away).
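In practice that usually means something like this (a sketch assuming OpenGL ES 1.1 and the blend function from the question; the Draw... calls stand in for your own rendering code):

glEnable(GL_DEPTH_TEST);

// Opaque geometry first, in any order; the depth buffer sorts it out.
DrawOpaqueObjects();

// Then blended geometry, sorted so the farthest objects are drawn first.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
DrawTransparentObjectsBackToFront();
glDisable(GL_BLEND);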
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
So I'd like to create a class that accepts a CGImage from an image file I just read from disk, does work on that image as a texture (color transformations), then returns the result as a CGImage, and does all of this in the background without drawing to the screen. I've looked at Apple's GLImageProcessing demo app, but it draws all the processing to the screen, and I've seen bits and pieces of how to do parts of what I want but can't assemble them.
Any suggestions would be greatly appreciated.
Thanks
You will have to use Framebuffer Objects (FBOs) to draw offscreen, but you still need a GL rendering context.
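For example, the offscreen target might be set up roughly like this (OpenGL ES 1.1 with the OES framebuffer extension, as on the iPhone; error checks are omitted, an EAGLContext is assumed to be current, and the 512x512 size is a placeholder):

#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#include <stdlib.h>

GLsizei width = 512, height = 512;               // size of the image being processed
GLubyte *pixels = malloc(width * height * 4);    // destination for the result

// Texture that will receive the rendered (color-transformed) result.
GLuint texture, fbo;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Framebuffer object with that texture as its color attachment.
glGenFramebuffersOES(1, &fbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, texture, 0);

// Draw the source CGImage (uploaded as another texture) with your color
// transformation here, then read the result back to build a new CGImage.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);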
I'm trying to work out how to draw from a texture page using Core Graphics.
Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub areas from that page onto the device context.
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of the polygon vertices to select just that section of your big texture.
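For instance, picking one 64x64 tile out of a larger page just means handing the quad that tile's normalized texture coordinates (the tile position and page size below are made-up example values, ordered for a triangle strip):

const float pageSize = 512.0f;              // texture page is 512x512
const float tileX = 128.0f, tileY = 64.0f;  // top-left corner of the tile
const float tileSize = 64.0f;

GLfloat texCoords[] = {
    tileX / pageSize,              tileY / pageSize,               // top left
    (tileX + tileSize) / pageSize, tileY / pageSize,               // top right
    tileX / pageSize,              (tileY + tileSize) / pageSize,  // bottom left
    (tileX + tileSize) / pageSize, (tileY + tileSize) / pageSize,  // bottom right
};

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);   // paired with your vertex array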