I need to move one image around another image, where both images are circular; they must not collide or overlap with each other. I tried CGRectIntersectsRect, but it is no use because of the images' corner radius, i.e. the intersection function fires before the circles actually touch.
You can do this with animation, but for that you should treat both circles as a single image, as shown in the first picture, and create a set of images with the blue circle at different positions around the ring.
loadingImageView.animationImages = @[[UIImage imageNamed:@"circle1.png"],
                                     [UIImage imageNamed:@"circle2.png"],
                                     [UIImage imageNamed:@"circle3.png"],
                                     [UIImage imageNamed:@"circle4.png"],
                                     [UIImage imageNamed:@"circle5.png"],
                                     [UIImage imageNamed:@"circle6.png"],
                                     [UIImage imageNamed:@"circle7.png"],
                                     [UIImage imageNamed:@"circle8.png"],
                                     [UIImage imageNamed:@"circle9.png"],
                                     [UIImage imageNamed:@"circle10.png"],
                                     [UIImage imageNamed:@"circle11.png"],
                                     [UIImage imageNamed:@"circle12.png"],
                                     [UIImage imageNamed:@"circle13.png"]];
if (![loadingImageView isAnimating])
{
    loadingImageView.animationDuration = 4;
    [loadingImageView startAnimating];
}
circle1.png, circle2.png, circle3.png, etc. are images that contain the blue and red circles together as one image, with the blue circle at a different position in each frame. I hope this is helpful for you; if there is any problem, tell me.
Actually, I'm migrating a game from another platform, and I need to generate a sprite from two images.
The first image is the form, a pattern or stamp, and the second is just a rectangle that provides the color for the first. If the color were plain it would be easy: I could use sprite.color and sprite.colorBlendFactor to play with it, but there are levels where the second image is a rectangle with two colors (red and green, for example).
Is there any way to implement these with Sprite Kit?
I mean, something like using a Core Image filter such as CIBlendWithAlphaMask, but with only an input image and a mask image. (https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/uid/TP40004346) -> CIBlendWithAlphaMask.
Thanks.
Look into the SKCropNode class (documentation here) - it allows you to set a mask for an image underneath it.
In short, you would create two SKSpriteNodes - one with your stamp, the other with your coloured rectangle. Then:
SKCropNode *myCropNode = [SKCropNode node];
[myCropNode addChild:colouredRectangle]; // the colour to be rendered by the form/pattern
myCropNode.maskNode = stampNode; // the pattern sprite node
[self addChild:myCropNode];
Note that the results will probably be more similar to CIBlendWithMask rather than CIBlendWithAlphaMask, since the crop node will mask out any pixels below 5% transparency and render all pixels above this level, so the edges will be jagged rather than smoothly faded. Just don't use any semi-transparent areas in your mask and you'll be fine.
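The hard-edge behaviour is easy to see in isolation: SKCropNode effectively applies a binary threshold to the mask's alpha, whereas an alpha-mask blend multiplies by it. A tiny plain-C sketch of the difference per pixel (the function names are mine; the 0.05 cutoff matches the threshold described above):

```c
/* Crop-node-style masking: a pixel is either fully kept or fully
   dropped, depending on whether the mask alpha clears the threshold. */
static double cropMask(double pixelAlpha, double maskAlpha) {
    return maskAlpha > 0.05 ? pixelAlpha : 0.0;
}

/* Alpha-mask blending for comparison: smoothly scales by the mask. */
static double alphaMask(double pixelAlpha, double maskAlpha) {
    return pixelAlpha * maskAlpha;
}
```

A 50%-transparent mask pixel passes through at full opacity under the crop rule but at half opacity under the alpha-mask rule — hence the jagged edges with semi-transparent masks.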
I'm trying to use the GPUImagePoissonBlendFilter of the GPUImage framework to blend two faces in my face-blending application. Here is my code.
- (void)applyPoissonBlendToImage:(UIImage *)rearFace withImage:(UIImage *)frontFace
{
    GPUImagePicture *picture1 = [[GPUImagePicture alloc] initWithImage:rearFace];
    GPUImagePicture *picture2 = [[GPUImagePicture alloc] initWithImage:frontFace];

    GPUImagePoissonBlendFilter *poissonFilter = [[GPUImagePoissonBlendFilter alloc] init];
    poissonFilter.mix = 0.7;
    poissonFilter.numIterations = 200;

    [picture1 addTarget:poissonFilter];
    [picture1 processImage];
    [picture2 addTarget:poissonFilter];
    [picture2 processImage];

    finalResultImage = [poissonFilter imageFromCurrentlyProcessedOutputWithOrientation:rearFace.imageOrientation];
}
As you can see, I pass two images (rearFace and frontFace) into this method. The frontFace image is a shape (a polygon formed by joining the relative eye and mouth positions) and is the same size as the rearFace image (to match the sizes, I filled the space outside the polygonal shape with transparent color while drawing).
However, the blending does not happen as I expected: the sharp edges of the front face are not blended into the rear face properly. My assumption is that the PoissonBlendFilter starts blending the second image from its top-left corner rather than from the top-left boundary of the face.
Problem: I feel that the input image is not being fed into the filter correctly. Do I need to apply some kind of mask to the input image? Can anyone guide me on this?
GPUImage can sometimes become tricky with two-input filters. When you add the blend filter to the first source image, specify the texture location explicitly. So instead of:
[picture1 addTarget:poissonFilter];
Try this:
[picture1 addTarget:poissonFilter atTextureLocation:0];
The rest (picture2 or any others) don't need this, but there is a little bug with two-input filters that sometimes requires explicitly specifying the texture location.
I've been doing some research online for a project I'm working on, but so far I haven't been able to quite get it working. I want to be able to slide my finger over a UIImage and delete part of it, kind of like an eraser. I'm able to draw lines on the screen but can't figure out how to do this. Any help would be greatly appreciated.
You could mask the image, and when you draw on it, add the lines to the mask in white (the rest of the mask being black); that should make those spots transparent.
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
There are two parts to this problem-
a) Determining the curve along which the finger was moved
b) Drawing the curve (which is really a combination of short lines) with the white color
For part (a), have a look at UIPanGestureRecognizer, or the touchesBegan: and touchesMoved: methods: you will be notified every time the finger moves even the smallest distance, along with the source and destination coordinates, say (x1, y1) and (x2, y2).
For part (b), since you already know how to draw a line, you just need to draw one from the source to the destination with the line's width (thickness) matching the finger's. You can set the width using CGContextSetLineWidth.
I'm trying to build this:
Where the white background is in fact transparent. I know how to clip a CGPath to a given region, but this seems to be the other way around, since I need to subtract regions from a filled CGPath.
I guess the right way to go would be to subtract the whole outer circles from the CGPath and then draw smaller circles at my CGPoints, but I'm not sure how to do the former. Can anyone point me in the right direction?
Here's what I would do:
1) Draw your general line
2) CGContextSetBlendMode(context, kCGBlendModeClear) to "clear the context" when you draw.
3) Draw your bigger circles
4) CGContextSetBlendMode(context, kCGBlendModeNormal) to return to normal drawing
5) Draw your little circles.
You could instead start a transparency layer, draw the lines, then draw the larger transparent circles using the clear color, then draw the smaller black circles. Then when you finish the transparency layer, it will composite exactly what you want back onto the context.
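The ordering in those five steps matters: the clear blend mode wipes out everything already drawn under the big circles, including the line, so the small circles must come after the mode is reset. A plain-C sketch of the same sequence on a one-row alpha strip (the buffer model and names are mine, just to make the ordering visible):

```c
#include <string.h>

/* Steps 1-5 on a 1-pixel-high strip: draw the line, "clear-mode" a big
   circle of radius 3 around `center`, then draw a small circle of
   radius 1 back on top in normal mode. Writes the result into `out`. */
static void drawStrip(unsigned char *out, int n, int center) {
    memset(out, 255, n);                        /* 1) the line            */
    for (int x = 0; x < n; x++)                 /* 2)+3) clear mode: the  */
        if (x >= center - 3 && x <= center + 3) /*    big circle punches  */
            out[x] = 0;                         /*    through the line    */
    for (int x = 0; x < n; x++)                 /* 4)+5) normal mode: the */
        if (x >= center - 1 && x <= center + 1) /*    small circle is     */
            out[x] = 255;                       /*    drawn back on top   */
}
```

The result is a filled dot surrounded by a transparent ring, sitting on an otherwise solid line — the same shape the blend-mode recipe produces in a real CGContext.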
I am making a simple jigsaw puzzle game.
For that I crop a single image into 9 pieces and display them in 9 image views.
Now I need to detect a collision when one image view comes over half of another image view's frame, and swap the images (or image views) with each other.
How can I do this? Can anyone please help me?
You can use the CGRectIntersectsRect() function; it takes two CGRects and returns YES if the rects intersect, otherwise NO.
Here is a short example:
if (CGRectIntersectsRect(image1.frame, image2.frame))
{
    UIImage *temp = [[image1 image] retain];
    [image1 setImage:image2.image];
    [image2 setImage:[temp autorelease]];
}
(It is of course easier if you have an array to iterate through.)