I know how to blur a whole image, but I just want to blur a part of the image like the face. Answers are appreciated. Thanks!
Just to close this thread, I used both @Anshu's and @Michael's suggestions. Here are my steps (a rough sketch in code follows below):
Crop the part of the image that needs to be blurred.
Use UIImage+Stack to blur the cropped image.
Place the blurred image on top of the original image.
Combine the two images into one new image.
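A minimal sketch of those steps, assuming the UIImage+Stack category mentioned below provides the -stackBlur: method (as in the StackBluriOS project); the method name here is illustrative:

// rect is in the image's coordinate space (for @2x images you would scale the rect
// to pixels before cropping; omitted here to keep the sketch short).
- (UIImage *)imageByBlurringRect:(CGRect)rect ofImage:(UIImage *)image radius:(NSUInteger)radius
{
    // 1. Crop the part of the image that needs to be blurred.
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);

    // 2. Blur the cropped piece using the category method.
    UIImage *blurred = [cropped stackBlur:radius];

    // 3 & 4. Draw the original, then the blurred piece on top, and flatten into one image.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [blurred drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}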
This should get you started: https://github.com/tomsoft1/StackBluriOS
You could look inside the UIImage+Stack implementation and use what you find there to make a version of this method that blurs a specific section of your image.
Something like:
- (UIImage *)stackBlur:(NSUInteger)inradius insideRect:(CGRect)rect;
Hint: the code from line #166 is where the blur class starts looping over the image pixels and blurring them.
I have this poster image:
I have to blend it with another image: I need to put this poster on a picture of a building, like this:
The rotated poster image has white pixels that I don't know how to get rid of.
Can someone please help me with code to paste this image onto the building image? Onto the front of the building.
First, find the four corner points of the building facade where you want to blend the image, and the four corner points of your poster. Then compute a perspective transform from those point pairs and warp the poster onto the building image (in OpenCV that is getPerspectiveTransform plus warpPerspective; warpAffine only handles three-point mappings, and a four-corner mapping is perspective, not affine).
This link may help you get the idea, though it uses OpenCV: http://www.learnopencv.com/homography-examples-using-opencv-python-c/
I have tried this using OpenCV C++ and get the following image after applying the perspective transform. Let me know if you want this kind of result:
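For reference, a minimal Objective-C++ (.mm) sketch of that approach with OpenCV, assuming the four corner points on the building are already known (all names here are illustrative):

#import <opencv2/opencv.hpp>

// poster and building are cv::Mat images already loaded (e.g. via cv::imread).
// buildingCorners are the four facade corners, in the same order as the poster corners.
static cv::Mat pastePosterOnBuilding(const cv::Mat &poster, const cv::Mat &building,
                                     const std::vector<cv::Point2f> &buildingCorners)
{
    // Corners of the poster itself (top-left, top-right, bottom-right, bottom-left).
    std::vector<cv::Point2f> posterCorners = {
        {0, 0},
        {(float)poster.cols - 1, 0},
        {(float)poster.cols - 1, (float)poster.rows - 1},
        {0, (float)poster.rows - 1}
    };

    // Perspective transform mapping the poster onto the building facade.
    cv::Mat H = cv::getPerspectiveTransform(posterCorners, buildingCorners);

    // Warp the poster into the building's coordinate space.
    cv::Mat warped;
    cv::warpPerspective(poster, warped, H, building.size());

    // Build a mask of the warped poster area so the white background pixels are not copied.
    cv::Mat mask(building.size(), CV_8UC1, cv::Scalar(0));
    std::vector<cv::Point> polygon;
    for (const auto &p : buildingCorners) polygon.emplace_back(p);
    cv::fillConvexPoly(mask, polygon, cv::Scalar(255));

    // Copy only the poster pixels onto the building image.
    cv::Mat result = building.clone();
    warped.copyTo(result, mask);
    return result;
}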
I want to merge an image into another image within a given shape. Example:
1- People image
2- Shape Image:
So how do I draw that? I have already implemented the merging, but the result doesn't fill that shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else the two won't match up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. It means you're including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that allows transparency (PNG is preferred) so that the part of the image outside the shape, which is white in your JPEG, is alpha-transparent. Make sure the section in the center of the image is white (really, any color that is NOT USED in the wanted part of the shape image would work, but I'll say white for this example) and that no part of it ends up less than pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(). You can then apply the mask to the people image using the function CGImageCreateWithMask(), which will give you an image of the person's portrait with the correct shape cropped from the center.
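That masking step might look roughly like this (a sketch, assuming the mask is a grayscale image the same size as the people image, black where the people image should show; the method name is illustrative):

- (UIImage *)maskImage:(UIImage *)image withMaskImage:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;

    // Build the Quartz mask from the grayscale mask image.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL,   // decode array
                                        false); // shouldInterpolate

    // Apply the mask to the people image: black mask areas show through, white areas are hidden.
    CGImageRef maskedRef = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *masked = [UIImage imageWithCGImage:maskedRef];

    // Quartz objects follow Core Foundation ownership rules: release what you create.
    CGImageRelease(mask);
    CGImageRelease(maskedRef);

    return masked;
}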
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.
I want to cut out a part of a large image using the shape of a small image. Is there any way to use the exact shape of the small image instead of making a rect from it using boundingBox? Please reply.
You could do it by masking the bigger sprite with the smaller one.
Here's a great tutorial on that:
http://www.raywenderlich.com/4421/how-to-mask-a-sprite-with-cocos2d-1-0
And here is the tool they use to experiment with the blend functions:
http://www.andersriggelsen.dk/glblendfunc.php
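A rough sketch of the render-texture masking technique from that tutorial (cocos2d 1.x API; the sprite names are placeholders):

- (CCSprite *)maskedSpriteWithSprite:(CCSprite *)bigSprite maskSprite:(CCSprite *)maskSprite
{
    // Render target the size of the mask (the shape we want to cut out).
    CCRenderTexture *rt =
        [CCRenderTexture renderTextureWithWidth:maskSprite.contentSize.width
                                         height:maskSprite.contentSize.height];

    // Center both sprites inside the render texture.
    maskSprite.position = ccp(maskSprite.contentSize.width / 2, maskSprite.contentSize.height / 2);
    bigSprite.position  = ccp(maskSprite.contentSize.width / 2, maskSprite.contentSize.height / 2);

    // Draw the mask first, keeping its alpha channel...
    maskSprite.blendFunc = (ccBlendFunc){ GL_ONE, GL_ZERO };
    // ...then draw the big sprite only where the mask left alpha behind.
    bigSprite.blendFunc  = (ccBlendFunc){ GL_DST_ALPHA, GL_ZERO };

    [rt begin];
    [maskSprite visit];
    [bigSprite visit];
    [rt end];

    // Render texture contents come out upside down, so flip the resulting sprite.
    CCSprite *masked = [CCSprite spriteWithTexture:rt.sprite.texture];
    masked.flipY = YES;
    return masked;
}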
I was wondering if it's possible to cut an image within an image with the Corona SDK using your finger? If it is indeed possible, I'd like to know how. It's for an iPhone game. Thanks.
There are two ways you can do this.

The first is to replace the image with two halves of the original. You will need to edit the image itself, which is cumbersome.

The second is to use bitmap masks as alpha masks. The masks are grayscale images: white = show, black = hide. This method doesn't require you to change the original image: create two copies of it and hide half of each (shown here as A and B) using two separate alpha masks. There is sample code from Corona here.

Note that this assumes your image will always be sliced in the same way.
I'm trying to achieve a sort of dynamic UIView masking effect. Here is a sketch:
So as you can see, I'm trying to create a UIView that can effectively cut through an image to reveal the image behind it. I already know how to return an image with a mask applied statically; however, I would like the "revealer" to be draggable (I'll use a pan gesture) and update live.
Does anyone have any ideas or starting points on how to achieve this? Thanks
(NOTE: My demo says White layer, but I'd actually like to show another image or photo).
Masking an image is not that difficult. This link shows the basics:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
But personally, I think I would make two UIImageViews and crop the content of the draggable UIView. I'm not sure, but I would expect that clipping and panning the second image view will be less computationally expensive than applying the mask, and will get you a better frame rate.
So I would do: a UIImageView of the full image; a UIView on top of it with a white background and some transparency to make it look white; then a UIImageView with the image either placed or cropped so that only the correct section is showing.
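A rough sketch of that idea, assuming the image is already loaded and that self.revealer and self.innerImageView are declared as properties (all names here are placeholders; the image in the revealer could just as well be a different photo, per the question's note):

- (void)setupRevealerWithImage:(UIImage *)image
{
    // Bottom layer: the full image.
    UIImageView *bottomView = [[UIImageView alloc] initWithImage:image];
    bottomView.frame = self.view.bounds;
    [self.view addSubview:bottomView];

    // Translucent white overlay, as in the sketch.
    UIView *whiteLayer = [[UIView alloc] initWithFrame:self.view.bounds];
    whiteLayer.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.8];
    [self.view addSubview:whiteLayer];

    // The draggable revealer: clips its content to its own bounds.
    self.revealer = [[UIView alloc] initWithFrame:CGRectMake(40, 40, 120, 120)];
    self.revealer.clipsToBounds = YES;

    // A second copy of the image, offset so the visible section lines up with the bottom image.
    self.innerImageView = [[UIImageView alloc] initWithImage:image];
    self.innerImageView.frame = CGRectMake(-self.revealer.frame.origin.x,
                                           -self.revealer.frame.origin.y,
                                           self.view.bounds.size.width,
                                           self.view.bounds.size.height);
    [self.revealer addSubview:self.innerImageView];
    [self.view addSubview:self.revealer];

    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    [self.revealer addGestureRecognizer:pan];
}

// Move the revealer and counter-offset the inner image so the "cut-out" stays live.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGPoint translation = [pan translationInView:self.view];
    CGRect frame = self.revealer.frame;
    frame.origin.x += translation.x;
    frame.origin.y += translation.y;
    self.revealer.frame = frame;
    self.innerImageView.frame = CGRectMake(-frame.origin.x, -frame.origin.y,
                                           self.view.bounds.size.width,
                                           self.view.bounds.size.height);
    [pan setTranslation:CGPointZero inView:self.view];
}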