GPUImage for face blending (iPhone)

I'm trying to use the GPUImagePoissonBlendFilter from the GPUImage framework to blend two faces in my face-blending application. Here is my code:
- (void)applyPoissonBlendToImage:(UIImage *)rearFace withImage:(UIImage *)frontFace
{
    GPUImagePicture *picture1 = [[GPUImagePicture alloc] initWithImage:rearFace];
    GPUImagePicture *picture2 = [[GPUImagePicture alloc] initWithImage:frontFace];

    GPUImagePoissonBlendFilter *poissonFilter = [[GPUImagePoissonBlendFilter alloc] init];
    poissonFilter.mix = 0.7;
    poissonFilter.numIterations = 200;

    [picture1 addTarget:poissonFilter];
    [picture1 processImage];
    [picture2 addTarget:poissonFilter];
    [picture2 processImage];

    finalResultImage = [poissonFilter imageFromCurrentlyProcessedOutputWithOrientation:rearFace.imageOrientation];
}
As you can see, I am giving two images (rearFace and frontFace) as inputs to this method. The front face image is a polygonal shape (formed by joining the relative eye and mouth positions) and is the same size as the rearFace image (to match the size, I filled the space outside the polygon with transparent color while drawing).
However, blending does not happen as I expected: the sharp edges of the front face are not blended into the rear face properly. My assumption is that the PoissonBlendFilter starts blending the second image from its top-left corner rather than from the top-left boundary of the face.
Problem: I feel that the input image is not being fed into the filter correctly. Do I need to apply some kind of mask to the input image? Can anyone guide me on this?

GPUImage can sometimes get tricky with two-input filters. When you add the blend filter as a target of the first source image, specify the texture location explicitly. So instead of:
[picture1 addTarget:poissonFilter];
Try this:
[picture1 addTarget:poissonFilter atTextureLocation:0];
The rest (picture2 or any others) don't need this, but there is a little bug with two-input filters that sometimes requires explicitly specifying the texture location for the first input.
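Putting it together, the method from the question might look like this. This is a minimal sketch; the capture API name follows the question's GPUImage version and may differ in newer releases:

- (UIImage *)blendRearFace:(UIImage *)rearFace withFrontFace:(UIImage *)frontFace
{
    GPUImagePicture *picture1 = [[GPUImagePicture alloc] initWithImage:rearFace];
    GPUImagePicture *picture2 = [[GPUImagePicture alloc] initWithImage:frontFace];

    GPUImagePoissonBlendFilter *poissonFilter = [[GPUImagePoissonBlendFilter alloc] init];
    poissonFilter.mix = 0.7;
    poissonFilter.numIterations = 200;

    // Pin the first input to texture slot 0 to work around the two-input ordering bug.
    [picture1 addTarget:poissonFilter atTextureLocation:0];
    [picture2 addTarget:poissonFilter];

    [picture1 processImage];
    [picture2 processImage];

    return [poissonFilter imageFromCurrentlyProcessedOutputWithOrientation:rearFace.imageOrientation];
}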

Related

iOS Sprite Kit - SKSpriteNode - blend two sprites

Actually, I'm migrating a game from another platform, and I need to generate a sprite from two images.
The first image is something like a form, a pattern, or a stamp, and the second is just a rectangle that gives color to the first. If the color were plain, it would be easy: I could use sprite.color and sprite.colorBlendFactor to play with it. But there are levels where the second image is a rectangle with two colors (red and green, for example).
Is there any way to implement these with Sprite Kit?
I mean, something like using Core image filter, and CIBlendWithAlphaMask, but only with Image and Mask image. (https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/uid/TP40004346) -> CIBlendWithAlphaMask.
Thanks.
Look into the SKCropNode class - it allows you to set a mask for an image underneath it.
In short, you would create two SKSpriteNodes - one with your stamp, the other with your coloured rectangle. Then:
SKCropNode *myCropNode = [SKCropNode node];
[myCropNode addChild:colouredRectangle]; // the colour to be rendered by the form/pattern
myCropNode.maskNode = stampNode; // the pattern sprite node
[self addChild:myCropNode];
Note that the results will probably be closer to CIBlendWithMask than to CIBlendWithAlphaMask: the crop node masks out any pixels below 5% alpha and renders all pixels above that level, so the edges will be jagged rather than smoothly faded. Just don't use any semi-transparent areas in your mask and you'll be fine.
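For completeness, a minimal sketch of the whole setup inside an SKScene; the texture names "colours" and "stamp" are placeholders:

// Colour rectangle revealed through the stamp's opaque pixels.
SKSpriteNode *colouredRectangle = [SKSpriteNode spriteNodeWithImageNamed:@"colours"];
SKSpriteNode *stampNode = [SKSpriteNode spriteNodeWithImageNamed:@"stamp"];

SKCropNode *myCropNode = [SKCropNode node];
[myCropNode addChild:colouredRectangle]; // the colour to be rendered by the form/pattern
myCropNode.maskNode = stampNode;         // opaque pixels of the stamp define the visible area
myCropNode.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
[self addChild:myCropNode];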

CAShapeLayer annoying clipping error

I am working on a map feature. The map is built up out of multiple CAShapeLayers with CGPaths from calculated coordinates. I have a clipping problem: Alaska is badly clipped. The coordinates of the Alaska path extend beyond the bounds of my container layer; accordingly, if I make the container layer big enough, the clipping effect is gone (of course).
You see a dark line because the bottom of the Alaska path is solid from left to right. The line is also darker than the rest of the map because the map has opacity (overlapping areas add up and get darker).
I drilled down into the problem and narrowed it down to a single big polygon (no other polygons are responsible for the clipping error).
As a workaround, I make the layer bigger and then make the UIView smaller again to hide the line.
I'd like to know what is causing the issue instead of relying on workarounds.
After a lot of digging, I managed to find an answer to my own question.
I was rendering the layers into a UIImage for better performance. The background layer was scaled up by a UIScrollView, and then several things went wrong:
Apparently, setting masksToBounds:YES has no effect when using renderInContext:, and neither does the mask property of a CALayer. masksToBounds (or clipsToBounds) only applies to sublayers.
When scaling a bitmap, be sure to pass an integral value as the scale argument of UIGraphicsBeginImageContextWithOptions. If you don't, the image will have a fractional size, e.g. 24.2323 x 34.3290. By the way, that scale argument is meant to provide full detail on Retina screens, but it can be (mis)used to zoom in on CAShapeLayer drawings.
When using a fractional-size image as a background layer, you get distortion at the edge.
The clipping effect disappeared after I updated my layer-to-image function. This one did the trick:
- (UIImage *)getImageWithSize:(CGSize)size opaque:(BOOL)opaque contentScale:(CGFloat)scale
{
    // Round the size and scale to integral values; fractional sizes
    // cause distortion at the edges of the rendered image.
    size = CGSizeMake(ceilf(size.width), ceilf(size.height));
    scale = roundf(scale);

    UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self renderInContext:context];
    UIImage *outputImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImg;
}
Whether you use ceilf, roundf, or floorf doesn't really matter, as long as you lose the fractions.
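Since the method calls renderInContext: on self, it presumably lives in a CALayer category; calling it might look like this (the category, layer, and view names here are hypothetical):

// Hypothetical CALayer category exposing the method above:
// @interface CALayer (ImageRendering)
// - (UIImage *)getImageWithSize:(CGSize)size opaque:(BOOL)opaque contentScale:(CGFloat)scale;
// @end

UIImage *mapImage = [mapLayer getImageWithSize:mapLayer.bounds.size
                                        opaque:NO
                                  contentScale:[UIScreen mainScreen].scale];
mapImageView.image = mapImage;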
Sorry if my stupidity wasted any of your time, but perhaps others have the same issue.

Move the image around another image

I need to move one image around another image, where both images are circular; they should not collide or overlap with each other. I tried CGRectIntersectsRect, but it was no use because of the images' corner radius, i.e. the intersect function fires before the circles actually collide.
You can do this with frame animation: treat the two circles as a single image, and make a series of images, each with the blue circle at a different position around the red circle.
NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i <= 13; i++) {
    [frames addObject:[UIImage imageNamed:[NSString stringWithFormat:@"circle%d.png", i]]];
}
loadingImageView.animationImages = frames;

if (![loadingImageView isAnimating])
{
    loadingImageView.animationDuration = 4;
    [loadingImageView startAnimating];
}
circle1.png, circle2.png, circle3.png, etc. are frames that contain the blue and red circles as one image, with the blue circle at a different position in each frame. I hope this helps; if you have any problems, let me know.
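If you'd rather not pre-render a frame for every position, an alternative sketch (assuming both circles are UIImageViews, with hypothetical names redImageView and blueImageView) animates the blue image along a circular path whose radius is the sum of the two radii, so the images touch but never overlap:

#import <QuartzCore/QuartzCore.h>

// redImageView stays fixed; blueImageView orbits around it.
CGFloat orbitRadius = redImageView.bounds.size.width / 2.0
                    + blueImageView.bounds.size.width / 2.0; // circles touch, never overlap

UIBezierPath *orbit = [UIBezierPath bezierPathWithArcCenter:redImageView.center
                                                     radius:orbitRadius
                                                 startAngle:0
                                                   endAngle:2.0 * M_PI
                                                  clockwise:YES];

CAKeyframeAnimation *orbitAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
orbitAnimation.path = orbit.CGPath;
orbitAnimation.duration = 4.0;
orbitAnimation.repeatCount = HUGE_VALF;
[blueImageView.layer addAnimation:orbitAnimation forKey:@"orbit"];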

How do I binarize a CGImage using OpenCV on iOS?

In my iOS project, I have a CGImage in RGB that I'd like to binarize (convert to black and white). I would like to use OpenCV to do this, but I'm new to OpenCV. I found a book on OpenCV, but it was not for iPhone.
How can I binarize such an image using OpenCV on iOS?
If you don't want to set up OpenCV in your iOS project, my open source GPUImage framework has two threshold filters within it for binarization of images, a simple threshold and an adaptive one based on local luminance near a pixel.
You can apply a simple threshold to an image and then extract a resulting binarized UIImage using code like the following:
UIImage *inputImage = [UIImage imageNamed:@"inputimage.png"];

GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
thresholdFilter.threshold = 0.5;
UIImage *thresholdedImage = [thresholdFilter imageByFilteringImage:inputImage];
(release the above filter if not using ARC in your application)
If you wish to display this image to the screen instead, you can send the thresholded output to a GPUImageView. You can also process live video with these filters, if you wish, because they are run entirely on the GPU.
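If the lighting across the image is uneven, the adaptive variant mentioned above can be swapped in; a minimal sketch, reusing inputImage from the snippet above (the blur radius is a starting value you would tune):

GPUImageAdaptiveThresholdFilter *adaptiveFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];
adaptiveFilter.blurRadiusInPixels = 4.0; // size of the local luminance neighbourhood
UIImage *binarizedImage = [adaptiveFilter imageByFilteringImage:inputImage];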
Take a look at cv::threshold() and pass thresholdType as cv::THRESH_BINARY:
double cv::threshold(const cv::Mat& src,
cv::Mat& dst,
double thresh,
double maxVal,
int thresholdType)
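On iOS this is typically called from an Objective-C++ (.mm) file; here is a minimal sketch using the UIImageToMat/MatToUIImage helpers bundled with the OpenCV iOS framework (header paths vary between OpenCV versions):

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h> // UIImageToMat / MatToUIImage; path varies by version

UIImage *binarizeImage(UIImage *input)
{
    cv::Mat src, gray, binary;
    UIImageToMat(input, src);                     // UIImage -> cv::Mat (RGBA)
    cv::cvtColor(src, gray, cv::COLOR_RGBA2GRAY); // threshold expects a single channel
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);
    return MatToUIImage(binary);
}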
There is also an example that uses the C interface of OpenCV to convert an image to black & white.
What you want to do is remove the low rate of changes and leave the high rate of changes; this is a high-pass filter. I only have experience with audio signal processing, so I don't really know what options are available to you, but that is the direction I would be looking.

Animated Masks in C4

I know that it is possible to create layer masks in C4 like this:
object.layer.mask = anotherObject.layer;
Is there a known way to use an animated mask?
Yes. You can animate a mask in a couple of different ways.
First, if you use basic shapes as the object whose layer will become the mask, you can animate them as you would normally, and this becomes an animated mask.
This can be done for any visible object in C4 (i.e. shapes, movies, images, etc...).
For instance:
object.layer.mask = aShape.layer;
aShape.animationDuration = 1.0f;
aShape.origin = CGPointMake(x, y);
The above can be done with images as well. When using images, any clear parts of the image will turn out transparent in your original object.
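For instance, reusing the constructs shown above (here maskImage is a hypothetical C4Image that has already been created):

object.layer.mask = maskImage.layer;        // clear areas of maskImage hide object
maskImage.animationDuration = 2.0f;         // animate the mask like any other C4 object
maskImage.origin = CGPointMake(newX, newY); // the visible region slides with the mask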
Furthermore, there is an undocumented animatable image method, which is experimental and available only in the latest template.
Using it would look like:
NSArray *imageNamesArray = [NSArray arrayWithObjects:@"imageName01.png",...,nil];
C4Image *animatedImage = [C4Image animatedImageWithNames:imageNamesArray];
object.layer.mask = animatedImage.mask;
Essentially, this method creates an animated-GIF-style image... But because this method is brand new / experimental, there isn't any control over the speed of the transitions between images.