I am very new to iOS and I am developing a fun painting app for iPhone & iPad using Objective-C.
The app will let you touch the image, and it will then fill all the nearby pixels that have the same color as the touched pixel with your selected color (a paint bucket tool).
I know the flood fill algorithm is what I need, but I am really stuck on how to implement it to fill the area I want with color.
I also saw that one, but it just has 2 files and no description; I tried to use it, but I wasn't successful.
All I want is to load an image (like that one) into a UIImageView, and have it fill with color when I touch the UIImageView.
If you use UIBezierPaths for drawing, you can use the -fill method to fill the shapes.
You can get the byte data of the UIImage by accessing its CGImage.
From this you can find the colour of the pixel that was touched.
Then it's just a case of running a simple flood fill algorithm from the pixel that was touched.
Getting the colour of a pixel: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Flood fill algorithm: http://en.wikipedia.org/wiki/Flood_fill
I'd probably do this myself rather than look for a framework or anything.
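Putting those pieces together, here is a rough sketch under a few assumptions of my own: the helper name FloodFillImage is made up, the touch point is assumed to already be converted into the bitmap's pixel coordinates, and the fill only replaces pixels that match the touched colour exactly (no tolerance for anti-aliased edges).

    #import <UIKit/UIKit.h>

    // Hypothetical helper (the name is mine, not from any framework). Draws the
    // image into an RGBA bitmap, reads the colour of the touched pixel, runs a
    // 4-way exact-match flood fill over the raw bytes, and returns a new UIImage.
    static UIImage *FloodFillImage(UIImage *image, CGPoint start, UIColor *fillColor)
    {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        size_t bytesPerRow = width * 4;

        // Render into a known RGBA8888 premultiplied format so the bytes can be
        // indexed directly, regardless of how the source image is encoded.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        uint8_t *data = calloc(height * bytesPerRow, 1);
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, bytesPerRow, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

        size_t x = (size_t)start.x, y = (size_t)start.y;
        if (start.x < 0 || start.y < 0 || x >= width || y >= height) {
            CGContextRelease(ctx);
            CGColorSpaceRelease(colorSpace);
            free(data);
            return image;
        }

        // Colour of the pixel that was touched (the colour to be replaced).
        size_t startOffset = y * bytesPerRow + x * 4;
        uint8_t tR = data[startOffset],     tG = data[startOffset + 1],
                tB = data[startOffset + 2], tA = data[startOffset + 3];

        // Replacement colour as premultiplied RGBA bytes.
        CGFloat r = 0, g = 0, b = 0, a = 0;
        [fillColor getRed:&r green:&g blue:&b alpha:&a];
        uint8_t nR = (uint8_t)(r * a * 255), nG = (uint8_t)(g * a * 255),
                nB = (uint8_t)(b * a * 255), nA = (uint8_t)(a * 255);

        // Only fill if the replacement colour differs from the touched colour,
        // otherwise the fill would revisit pixels it has already recoloured.
        if (tR != nR || tG != nG || tB != nB || tA != nA) {
            // Explicit stack of byte offsets. Each pixel is recoloured when it is
            // pushed, so it can never be pushed twice and the stack stays bounded.
            size_t *stack = malloc(width * height * sizeof(size_t));
            size_t top = 0;

            data[startOffset]     = nR; data[startOffset + 1] = nG;
            data[startOffset + 2] = nB; data[startOffset + 3] = nA;
            stack[top++] = startOffset;

            while (top > 0) {
                size_t off = stack[--top];
                size_t px = (off % bytesPerRow) / 4;
                size_t py = off / bytesPerRow;

                // Collect the 4-connected neighbours that lie inside the image.
                size_t neighbours[4];
                size_t count = 0;
                if (px > 0)          neighbours[count++] = off - 4;
                if (px < width - 1)  neighbours[count++] = off + 4;
                if (py > 0)          neighbours[count++] = off - bytesPerRow;
                if (py < height - 1) neighbours[count++] = off + bytesPerRow;

                for (size_t i = 0; i < count; i++) {
                    size_t n = neighbours[i];
                    if (data[n] == tR && data[n + 1] == tG &&
                        data[n + 2] == tB && data[n + 3] == tA) {
                        data[n]     = nR; data[n + 1] = nG;
                        data[n + 2] = nB; data[n + 3] = nA;
                        stack[top++] = n;
                    }
                }
            }
            free(stack);
        }

        CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
        UIImage *result = [UIImage imageWithCGImage:resultRef];
        CGImageRelease(resultRef);
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(data);
        return result;
    }

In a real app you would probably add a colour tolerance for anti-aliased outlines and run the fill off the main thread for large images.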
I want to merge one image into another image in a given shape. Example:
1- People image
2- Shape Image:
So how do I draw that? I have already implemented the merging, but it doesn't fill to that shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little bit more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image editing software to create a second mask image, with the same size as your shape image, with pure black in the area where you want the people image to appear, and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else the two won't match up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. This would mean that you're including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that allows transparency - PNG is preferred - so that there is alpha transparency in the part of the image outside of the shape, which is white in your JPEG image. Make sure the section in the center of the image is white (really, any color that is NOT used in the wanted part of the shape image would work, but I'll say white for this example) and that you don't have parts of it that aren't pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(). You can then apply the mask to the people image using the function CGImageCreateWithMask(), which will give you an image with the person's portrait, with the correct shape cropped from the center.
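A minimal sketch of those two calls, assuming ARC for the UIImage objects and that peopleImage and a same-sized grayscale maskImage (no alpha channel) already exist; both names are mine, not from the question:

    // Hypothetical helper: builds an image mask from a grayscale mask image and
    // applies it to the people image. Assumes the mask image is grayscale with
    // no alpha channel, and the same pixel size as the people image.
    static UIImage *MaskedPeopleImage(UIImage *peopleImage, UIImage *maskImage)
    {
        CGImageRef maskSource = maskImage.CGImage;

        // CGImageMaskCreate wants an image mask, not an ordinary image, so build
        // one from the grayscale image's own data provider.
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                            CGImageGetHeight(maskSource),
                                            CGImageGetBitsPerComponent(maskSource),
                                            CGImageGetBitsPerPixel(maskSource),
                                            CGImageGetBytesPerRow(maskSource),
                                            CGImageGetDataProvider(maskSource),
                                            NULL,   // no decode array
                                            false); // no interpolation

        CGImageRef maskedRef = CGImageCreateWithMask(peopleImage.CGImage, mask);
        UIImage *result = [UIImage imageWithCGImage:maskedRef];

        // Quartz objects are not managed by ARC, so release them explicitly.
        CGImageRelease(mask);
        CGImageRelease(maskedRef);
        return result;
    }

Black areas of the mask let the people image show through and white areas hide it, matching the convention described above.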
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.
I'm trying to create an app for children. It will have different images, and those images will have different closed shapes like stars, balls, or other similar figures.
I am aware of how to draw on the iPhone. I know how to select a color, how to change the brush size, etc.
What I want is to select a color and, on touching the image, flood fill the closed area around the touched coordinate. Is it possible? How can I do it?
(For example, if I touch inside the ball, the ball must be filled with one color)
Check this flood fill library:
http://gauravstomar.blogspot.com/2014/09/uiimage-fill-color-objective-c.html
Hope it will help.
I think you need to use blending for that, see this answer:
Iphone App: How to fill an image with empty areas with coregraphics?
I am trying to detect a touch event on a PNG image loaded into a UIImageView. I have everything working fine, except that the touch is being tested against the bounding rectangle around the image (as expected). What I would like to do is test whether the user has touched part of the visible PNG, as opposed to the UIImageView itself.
For example, if I have a horseshoe image, I want it to respond to touches only when you select the sides and not the center part where nothing is drawn. I am kind of at a loss on this one; Google reveals a number of people with the same issue, but not even a hint of where to begin looking.
Two ways:
a) You examine the pixel data of your image to determine whether the touched pixel is transparent. You have to draw your image into an offscreen buffer to make this possible. Use CGContextDrawImage and CGBitmapContextGetData to get access to the pixel data of UIImage.CGImage; this Apple Q&A explains the basic method of accessing pixel data. (A sketch of this approach follows after option b.)
b) You have a polygon representation of the horseshoe and use polygon hit testing to determine whether the horseshoe was touched. Google "point in polygon" for algorithms.
a) is probably less work if you need this just for a few images, but if you have a lot of hit testing (a game with a lot of movement), b) might be better.
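Here is a rough sketch of option a), under assumptions of my own: the helper name is made up, the touch point has already been converted from view coordinates into the image's pixel coordinates (content mode and scale handling are left out), and very low alpha values count as "nothing drawn":

    // Hypothetical helper: renders the image's alpha channel into an offscreen
    // buffer, then checks the alpha byte at (x, y), counted from the top-left.
    static BOOL ImageIsOpaqueAtPixel(UIImage *image, NSUInteger x, NSUInteger y)
    {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        if (x >= width || y >= height) {
            return NO;
        }

        // Alpha-only context: one byte per pixel, no colour information needed.
        CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width,
                                                 NULL, (CGBitmapInfo)kCGImageAlphaOnly);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

        // The buffer's first row is the top row of the drawn image, so the
        // top-left (x, y) maps straight to this index.
        const uint8_t *alphaData = CGBitmapContextGetData(ctx);
        uint8_t alpha = alphaData ? alphaData[y * width + x] : 0;
        CGContextRelease(ctx);

        return alpha > 10; // small threshold to ignore nearly transparent edges
    }

Re-rendering the alpha buffer on every touch is wasteful; in practice you would probably render it once and cache it for the lifetime of the image.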
I am developing an iPhone app. I have a background image, let's say an airplane with black outlines, and from a color palette the user can pick a color and fill a region of the airplane. Any help, code, or suggestions will be highly appreciated.
A simple fill algorithm should do: just expand from the point you are on until you meet the region's boundary pixels.
See http://en.wikipedia.org/wiki/Flood_fill. You can also try googling for the boundary fill algorithm.
My first thought was to have a UIView and a mask image of the plane on top of it, but this only works in certain situations. If the shape of the plane does not change, you could also change the color and then "fill" the plane in during drawRect: using functions like CGContextAddArc and CGContextAddRect.
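A rough sketch of that drawRect: idea, with a made-up rectangle-plus-circle path standing in for the real airplane outline (the class name and property are mine):

    #import <UIKit/UIKit.h>

    // Sketch of a custom view that redraws a fixed shape in whatever fill colour
    // the user last picked from the palette.
    @interface ShapeFillView : UIView
    @property (nonatomic, strong) UIColor *fillColor;
    @end

    @implementation ShapeFillView

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Placeholder shape: a rectangular "fuselage" with a circular "nose".
        // In a real app this would be the airplane outline.
        CGContextAddRect(ctx, CGRectMake(40, 60, 160, 40));
        CGContextAddArc(ctx, 200, 80, 20, 0, 2 * M_PI, 0);

        // Fill the path with the user's selected colour (white until one is picked).
        CGContextSetFillColorWithColor(ctx, (self.fillColor ?: [UIColor whiteColor]).CGColor);
        CGContextDrawPath(ctx, kCGPathFill);
    }

    - (void)setFillColor:(UIColor *)fillColor
    {
        _fillColor = fillColor;
        [self setNeedsDisplay]; // redraw with the new colour
    }

    @end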
I'm working on a view-based application and am trying to find some code that will let me grab some pixel colors from one of my images and use them for collision detection against one of my UIImageViews, but I haven't had any luck finding anything on this subject. So if the UIImageView for my player collides with the UIImageView of my map && collides with the color black in the image that's placed inside my map view... then run the collision code... or something along those lines.
Is your question about getting the pixel color, or about doing collision detection?
If you want to get the pixel color, I'm not sure there's an easy way to do it - you may have to mess with your current graphics context to get it, and nothing is coming up in the docs.
If it's just collision detection you want to do, take a look at UIView's convertPoint:toView: and convertPoint:fromView: methods. They let you take defined points within a given view and get their equivalents in other views. With some basic math on the resultant points, you could theoretically do some pretty good collision detection without having to worry about pixel colors.
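For the frame-overlap part, a minimal sketch using convertRect:toView: (the rectangle counterpart of the convertPoint: methods; the view names are placeholders):

    // Hypothetical check: do two views overlap, even if they have different
    // superviews? Convert one view's bounds into the other's coordinate space,
    // then test the rectangles for intersection.
    static BOOL ViewsIntersect(UIView *playerView, UIView *mapView)
    {
        CGRect playerRectInMap = [playerView convertRect:playerView.bounds toView:mapView];
        return CGRectIntersectsRect(playerRectInMap, mapView.bounds);
    }

If you also need the pixel-level check against black areas of the map image, you could combine this with the offscreen-buffer pixel reading described in the earlier answers, sampling the map image inside the overlapping rectangle.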