How to copy an arbitrary path from one UIImage to another - iPhone

I'm new to iPhone graphics and it's a bit daunting.
The problem: I have UIImageA, and UIImageB. Both are the same picture, except UIImageB has all the pixels values darkened.
I'd like to copy an arbitrary piece of UIImageA onto the top of UIImageB. The end result would be a dark image, with the part of the original image bright.
My guess is that I will need to:
1. Create a "path" that is the arbitrary shape to copy. I think I can figure this out.
2. Take UIImageA and somehow crop it or mask it to the path.
3. Copy the part of UIImageA onto UIImageB at the exact same position.
It's steps 2 and 3 that have me confused. I've seen many examples of cropping images to a rectangle, or masking images with another pre-defined image, but nothing that exactly does this.
Does anyone have any general pointers?

You could try Core Image.
You can do all that with CGImage in your environment. You can use a bitmap or layer context and then later render it on whatever view you wish.
There is a good Core Graphics Quartz tutorial at http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
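For steps 2 and 3, here is a minimal sketch of the clip-and-draw approach with UIKit and Core Graphics. The names darkImage, brightImage, and path are illustrative, and the two images are assumed to be the same size:

UIGraphicsBeginImageContextWithOptions(darkImage.size, NO, darkImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Draw the darkened image as the base layer.
[darkImage drawAtPoint:CGPointZero];
// Clip to the arbitrary path; from here on, only pixels inside it are touched.
CGContextAddPath(ctx, path);
CGContextClip(ctx);
// Draw the original image; only the clipped region shows through.
[brightImage drawAtPoint:CGPointZero];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();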

Related

How to merge an image to fill in another image's shape?

I want to merge an image into another image in one shape. Example:
1- People image
2- Shape image
So how do I draw that? I have already implemented the merging, but it doesn't fill the shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else the two won't line up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. This means you're including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that supports transparency (PNG is preferred) so that the part of the image outside the shape, which is white in your JPEG image, is alpha-transparent. Make sure the section in the center of the image is white (really, any color that is NOT USED in the wanted part of the shape image would work, but I'll say white for this example) and that no part of it drifts away from pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(). You can then apply the mask to the people image using the function CGImageCreateWithMask(), which will give you an image with the person's portrait, with the correct shape cropped from the center.
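A minimal sketch of those two calls, assuming maskUIImage is a grayscale mask image (black where the people image should show through) and peopleUIImage is the portrait; both names are illustrative:

CGImageRef maskSource = maskUIImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                    CGImageGetHeight(maskSource),
                                    CGImageGetBitsPerComponent(maskSource),
                                    CGImageGetBitsPerPixel(maskSource),
                                    CGImageGetBytesPerRow(maskSource),
                                    CGImageGetDataProvider(maskSource),
                                    NULL,    // no decode array
                                    false);  // no interpolation
CGImageRef maskedRef = CGImageCreateWithMask(peopleUIImage.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
// Quartz objects aren't autoreleased; release them to avoid leaks.
CGImageRelease(mask);
CGImageRelease(maskedRef);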
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.

Blur a Specific Part of an Image

I am new to iOS development. After googling, I found that it is easy to blur a whole image, but difficult to blur only a specific part of it, such as a rectangular or circular region. So please help me: how can I blur a specific part of an image rather than the whole image?
Thanks in advance.
Blur the whole image, then crop to the part you care about. You can use a mask for non-rectangular/non-sharp-edged blurs, but don't skip the crop.
The lovely, but sometimes tricky, thing about Core Image is that it's extremely lazy. It doesn't work from the start to the end; it's more of a pull model, working from the last thing you asked for all the way back to the original rasters. Moreover, it won't actually filter any pixels you have not asked for.
So, in your case, a crop means not asking for any blurred pixels outside of the crop. Since you didn't ask for them, they don't get blurred. The blur only runs on the pixels you ask for—the ones inside the crop.
Masking works differently; by definition, it needs to look at every pixel in the mask image, and I would be surprised if it didn't also look at every pixel in the source (even to multiply it by zero). This is why you should still crop, even with a mask.
Note that the blurred-and-cropped portion of the image will still be where it is in the original image. It doesn't copy/move the pixels within the image, because that would be expensive; instead, it returns an image with a different extent—namely, the crop rectangle. You'll want to retrieve that extent and subtract its origin from the coordinates where you want to draw the image—either that or use an affine transform filter, but, again, that would probably be expensive.
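A minimal sketch of the blur-then-crop pipeline, assuming inputImage is a CIImage and cropRect is the region you care about; both names are illustrative:

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];
// Cropping the output means only the pixels inside cropRect ever get blurred.
CIImage *cropped = [blur.outputImage imageByCroppingToRect:cropRect];
// The result's extent is still cropRect in the original image's coordinate
// space; subtract its origin when deciding where to draw.
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:cropped fromRect:cropped.extent];
// ... draw cgImage, offset by cropped.extent.origin ...
CGImageRelease(cgImage);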

Rotating a PNG file

I have zero experience with manipulating image files of any sort with code, so I am lost about where to begin. All I need to do is open a PNG image file and save it rotated 90 degrees, in Objective-C. I am a quick learner, so even a push in the right direction would help immensely. I know this is no obscure function; any GUI image editor is capable of this, so I figure someone should be able to help. Thanks in advance!
(also, I have tagged this with iPhone to get more exposure; this is not something that needs to be iPhone-exclusive.)
Here's your "push":
1. Create a CGImage from the original file, using ImageIO (CGImageSourceCreateWithURL, CGImageSourceCreateImageAtIndex).
2. Create a bitmap context with transposed size, using Core Graphics (CGBitmapContextCreate).
3. Rotate the context's transformation matrix (CGContextConcatCTM).
4. Draw the original image into that context (CGContextDrawImage).
5. Create a new image from the bitmap context (CGBitmapContextCreateImage).
6. Save the image to a new file (CGImageDestinationCreateWithURL, CGImageDestinationAddImage, CGImageDestinationFinalize).
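A minimal sketch of those steps, assuming srcURL and dstURL are CFURLRefs for the input and output PNGs, and using CGContextTranslateCTM/CGContextRotateCTM as one concrete way to adjust the CTM; manual releases included since these are Core Foundation objects:

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypePNG

CGImageSourceRef source = CGImageSourceCreateWithURL(srcURL, NULL);
CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);

// Transposed size: width and height swap for a 90-degree rotation.
size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, h, w, 8, 0, space,
                                         kCGImageAlphaPremultipliedLast);

// Rotate the CTM by -90 degrees, then translate so the image
// lands inside the transposed context.
CGContextTranslateCTM(ctx, 0, w);
CGContextRotateCTM(ctx, -M_PI_2);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);

CGImageRef rotated = CGBitmapContextCreateImage(ctx);
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(dstURL, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dest, rotated, NULL);
CGImageDestinationFinalize(dest);

// Release everything we created.
CGImageRelease(rotated);
CFRelease(dest);
CGContextRelease(ctx);
CGColorSpaceRelease(space);
CGImageRelease(image);
CFRelease(source);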
Have you tried looking at the NSImage rotation question asked on Stack Overflow?

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8 bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First, your transparent object is drawn.
At this point, two things happen:
The pixels are drawn correctly to the back buffer.
The depth buffer values are set across the whole object; transparency does not affect the depth buffer.
You then draw other objects behind the transparent object.
But none of those objects' pixels will be drawn, because they fail the depth test: they are farther away than the values already written for the transparent object.
The solution to this problem is to draw your scene back-to-front (draw starting at the further away things).
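A minimal sketch of that ordering in OpenGL ES 1.1, with hypothetical helpers standing in for your own draw code; keeping the depth test on but disabling depth writes during the transparent pass is a common refinement:

glEnable(GL_DEPTH_TEST);
drawOpaqueObjects();                          // hypothetical helper
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // premultiplied alpha
glDepthMask(GL_FALSE);                        // test depth, but don't write it
drawTransparentObjectsBackToFront();          // hypothetical helper: sort far-to-near
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);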
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.

iPhone: how do you draw from a texture page using Core Graphics?

I'm trying to work out how to draw from a texture page using Core Graphics.
Given a texture page (a CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
// Extract just the sub-rectangle; this typically shares the original
// image's pixel data rather than copying it.
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    // Draw the sub-image into the destination rect, then release it.
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.
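A minimal sketch of that texture-coordinate approach, assuming OpenGL ES 1.1, a 512x512 atlas of 64x64 tiles, and illustrative col/row variables:

const GLfloat tile = 64.0f / 512.0f;  // one tile in normalized coordinates
GLfloat s0 = col * tile, t0 = row * tile;
GLfloat texCoords[] = {
    s0,        t0,          // order matches the triangle strip's vertices
    s0 + tile, t0,
    s0,        t0 + tile,
    s0 + tile, t0 + tile,
};
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
// ... set glVertexPointer for the quad's four vertices, then:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);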