I know that it is possible to create layer masks in C4 like this:
object.layer.mask = anotherObject.layer;
Is there a known way to use an animated mask?
Yes. You can animate a mask in a couple of different ways.
First, if you use basic shapes as the object whose layer will become the mask, you can animate them as you would normally, and this becomes an animated mask.
This can be done for any visible object in C4 (e.g. shapes, movies, images, etc.).
For instance:
object.layer.mask = aShape.layer;
aShape.animationDuration = 1.0f;
aShape.origin = CGPointMake(x, y);
The above can be done with images as well. When using images any clear parts of the image will turn out transparent in your original object.
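For instance, a sketch of using an image as a moving mask; the imageNamed: constructor and the file name are assumptions here, and the animation properties are the same ones used on the shape above:
C4Image *maskImage = [C4Image imageNamed:@"maskImage.png"]; // assumed constructor / file name
object.layer.mask = maskImage.layer;
maskImage.animationDuration = 1.0f;
maskImage.origin = CGPointMake(100, 100); // moving the mask changes what is revealed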
Furthermore, there is an undocumented animatable image method, which is experimental and available only in the latest template.
Using it would look like:
NSArray *imageNamesArray = [NSArray arrayWithObjects:@"imageName01.png",...,nil];
C4Image *animatedImage = [C4Image animatedImageWithNames:imageNamesArray];
object.layer.mask = animatedImage.layer;
Essentially, this method creates an animated-GIF-style image. But because this method is brand new / experimental, there isn't any control over the speed of the transitions between images.
I'm trying to use GPUImagePoissonBlendFilter from the GPUImage framework to blend two faces in my face-blending application. Here is my code:
- (void)applyPoissonBlendToImage:(UIImage *) rearFace withImage:(UIImage *) frontFace
{
GPUImagePicture* picture1 = [[GPUImagePicture alloc] initWithImage:rearFace];
GPUImagePicture* picture2 = [[GPUImagePicture alloc] initWithImage:frontFace];
GPUImagePoissonBlendFilter * poissonFilter = [[GPUImagePoissonBlendFilter alloc] init];
poissonFilter.mix = .7;
poissonFilter.numIterations = 200;
[picture1 addTarget:poissonFilter];
[picture1 processImage];
[picture2 addTarget:poissonFilter];
[picture2 processImage];
finalResultImage = [poissonFilter imageFromCurrentlyProcessedOutputWithOrientation:rearFace.imageOrientation];
}
As you can see, I am giving two images (rearFace and frontFace) as inputs to this method. The frontFace image is a polygon shape (formed by joining the relative eye and mouth positions) and is the same size as the rearFace image (to match the size, I've filled the space outside the polygonal shape with transparent color while drawing).
However, blending does not happen as I expected: the sharp edges of the front face are not blended into the rear face properly. My assumption here is that the PoissonBlendFilter starts blending the second image from its top-left corner rather than from the top-left boundary of the face.
Problem: I feel that the input image is not fed into the filter correctly. Do I need to apply some kind of masking to the input image? Can anyone guide me on this?
GPUImage can sometimes become tricky with two-input filters. When you are adding the blend filter to the first source image, specify the texture location explicitly. So instead of:
[picture1 addTarget:poissonFilter];
Try this:
[picture1 addTarget:poissonFilter atTextureLocation:0];
The rest (picture2 or any others) don't need this, but there is a little bug with two-input filters that sometimes requires explicitly specifying the texture location.
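Putting it together, a lightly edited version of the method from your question might look like this; everything else is unchanged from your code, only the first addTarget: call is different (treat it as a sketch rather than a guaranteed fix):
- (void)applyPoissonBlendToImage:(UIImage *)rearFace withImage:(UIImage *)frontFace
{
    GPUImagePicture *picture1 = [[GPUImagePicture alloc] initWithImage:rearFace];
    GPUImagePicture *picture2 = [[GPUImagePicture alloc] initWithImage:frontFace];

    GPUImagePoissonBlendFilter *poissonFilter = [[GPUImagePoissonBlendFilter alloc] init];
    poissonFilter.mix = 0.7;
    poissonFilter.numIterations = 200;

    // Pin the first input to texture location 0 explicitly.
    [picture1 addTarget:poissonFilter atTextureLocation:0];
    [picture1 processImage];

    // The second input can be added normally.
    [picture2 addTarget:poissonFilter];
    [picture2 processImage];

    finalResultImage = [poissonFilter imageFromCurrentlyProcessedOutputWithOrientation:rearFace.imageOrientation];
}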
Hi all. Up to now I know how to make a rectangle with CGRectMake and use that rect as an image view's frame, like UIImageView *someImage = [[UIImageView alloc] initWithFrame:someRect]; so I can add an image with the frame of someRect. My problem is when the coordinates are like this:
(rectangleFirstX, rectangleFirstY) = (10, 10)
(rectangleLastX, rectangleLastY) = (17, 7)
How can I give a frame to the UIImageView? This is like an inclined (rotated) rectangle. Can anyone suggest how to apply a frame through the iOS libraries for these kinds of coordinates? Thanks in advance.
Your example isn't very clear, because a rectangle with opposite corners at (10,10) and (17,7) can be in any one of a myriad of different orientations, including one perfectly aligned along the x and y axes.
What you can certainly do is create a UIImageView of the desired size and location and then rotate it by using one of many techniques, including animation methods.
[UIView animateWithDuration:0.1 animations:^
{
your_UIImageView_here.transform = CGAffineTransformMakeRotation((M_PI/180.0) * degrees);
}];
You can hide the UIImageView until the rotation is done and then show it.
If your question is about how to use the coordinates you provided to arrive at an angle I'd suggest that more data is needed because it is impossible to pick one of the billions of possible rectangles with corners at those two points without more information. Once you have more data then it is pretty basic trigonometry to figure out the angle to feed into the rotation.
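For example, if the two points you gave are meant to be the two endpoints of one edge of the rectangle (that's an assumption on my part, since the question doesn't say), the angle could be computed roughly like this, reusing the your_UIImageView_here placeholder from above:
CGPoint p1 = CGPointMake(10, 10);
CGPoint p2 = CGPointMake(17, 7);
CGFloat angle = atan2f(p2.y - p1.y, p2.x - p1.x); // angle of that edge, in radians
your_UIImageView_here.transform = CGAffineTransformMakeRotation(angle);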
I have a UIView container that has two UIImageViews inside it, one partially obscuring the other (they're being composed like this to allow for occasional animation of one "layer" or another).
Sometimes I want to make this container 50% alpha, so what the user sees fades. Here's the problem: setting my container view to 50% alpha makes all my subviews inherit this as well, and now you can see through the first subview into the second, which in my application produces a weird X-ray effect that I'm not looking for.
What I'm after, of course, is for what the user currently sees to become 50% transparent-- the equivalent of flattening the visible view into one bitmap, and then making that 50% alpha.
What are my best bets for accomplishing this? Ideally I would like to avoid actually, dynamically flattening the views if I can help it, but best practices on that are welcome as well. Am I missing something obvious? Since most views have subviews and would run into this issue, I feel like there's some obvious solution here.
Thanks!
EDIT: Thanks for the thoughts folks. I'm just moving one image around on top of another image, which it only partially obscures. And this pair of images has to move together sometimes, as well. And sometimes I want to fade the whole thing out, wherever it is, and whatever the state of the image pair is at the moment. Later, I want to bring it back and continue animating it.
Taking a snapshot of the container, either by rendering its layer (?) or by doing some other offscreen compositing on the fly before alpha'ing out the whole thing, is definitely possible, and I know there are a couple ways to do it. But what if the animation should continue to happen while the whole thing's at 50% alpha, for example?
It sounds like there's no obvious solution to what I'm trying to do, which seems odd to me, but thank you all for the input.
Recently I had this same problem, where I needed to animate layers of content with a global transparency. Since my animation was quite complex, I discovered that flattening the UIView hierarchy made for a choppy animation.
The solution I found was using CALayers instead of UIViews, and setting the .shouldRasterize property to YES in the container layer, so that any sublayers would be flattened automatically prior to applying the opacity.
Here's what a UIView could look like:
#import <QuartzCore/QuartzCore.h> //< Needed to use CALayers
...
@interface MyView : UIView {
CALayer *layer1;
CALayer *layer2;
CALayer *compositingLayer; //< Layer where compositing happens.
}
...
- (void)initialization
{
UIImage *im1 = [UIImage imageNamed:@"image1.png"];
UIImage *im2 = [UIImage imageNamed:@"image2.png"];
/***** Setup the layers *****/
layer1 = [CALayer layer];
layer1.contents = im1.CGImage;
layer1.bounds = CGRectMake(0, 0, im1.size.width, im1.size.height);
layer1.position = CGPointMake(100, 100);
layer2 = [CALayer layer];
layer2.contents = im2.CGImage;
layer2.bounds = CGRectMake(0, 0, im2.size.width, im2.size.height);
layer2.position = CGPointMake(300, 300);
compositingLayer = [CALayer layer];
compositingLayer.shouldRasterize = YES; //< Here we turn this into a compositing layer.
compositingLayer.frame = self.bounds;
/***** Create the layer tree *****/
[compositingLayer addSublayer:layer1]; //< Add first, so it's in back.
[compositingLayer addSublayer:layer2]; //< Add second, so it's in front.
// Don't mess with the UIView's layer, it's picky; just add sublayers to it.
[self.layer addSublayer:compositingLayer];
}
- (IBAction)animate:(id)sender
{
/* Since we're using CALayers, we can use implicit animation
* to move and change the opacity.
* Layer2 is over Layer1, and the compositing layer is partially transparent.
*/
layer1.position = CGPointMake(200, 200);
layer2.position = CGPointMake(200, 200);
compositingLayer.opacity = 0.5;
}
I think that flattening the UIView into a UIImageView is your best bet if you have your heart set on providing this feature. Also, I don't think that flattening the image is going to be as complicated as you might think. Take a look at the answer provided in this question.
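A common way to do that flattening, sketched here under the assumption that containerView is the container from your question, is to render the container's layer into an image context:
// Requires <QuartzCore/QuartzCore.h> for renderInContext:.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageView *flattenedView = [[UIImageView alloc] initWithImage:flattened];
flattenedView.alpha = 0.5; // the fade now applies to the composite, not to each subview separately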
Set the bottom UIImageView to have .hidden = YES, then set .hidden = NO when you setup a cross-fade animation between the top and bottom UIImageViews.
When you need to fade the whole thing, you can either set .alpha = 0.5 on the container view or the top image view - it shouldn't matter. It may be computationally more efficient to set .alpha = 0.5 on the image view itself, but I don't know enough about the graphics pipeline on the iPhone to be sure about that.
The only downside to this approach is that you can't do a cross-fade when your top image is set to 50% opacity.
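A sketch of that cross-fade, where topImageView and bottomImageView are placeholder names for the two image views in your container:
bottomImageView.hidden = NO;
bottomImageView.alpha = 0.0;
[UIView animateWithDuration:0.3 animations:^{
    topImageView.alpha = 0.0;    // fade the front image out...
    bottomImageView.alpha = 1.0; // ...while the back image fades in
}];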
A way to do this would be to add the image views to the UIWindow (the container would be a fake one).
I want to stretch an image. For that I use a sprite. I want to stretch the sprite, and this stretching may be a circular or curved animation. I don't understand what method to use for that. Can anyone help me?
Since you tagged your question with cocos2d, I guess you'll be using that. It's really basic to stretch an image:
Sprite *mySprite = [Sprite spriteWithFile:@"mysprite.png"];
mySprite.position = ccp(100, 100);
mySprite.scale = 2.0;
[self addChild:mySprite];
If you want to animate it you can use the cocos2d actions or just create your own animation. The example below does a linear animation to 3x original sprite size in 1 second:
id action1 = [ScaleTo actionWithDuration:1.0 scale:3.0];
[mySprite runAction: action1];
For manipulating views and images in general in ways such as stretching, you can read up on the transforms provided by the SDK. You can learn about 2D transforms here: http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_affine/dq_affine.html and you can extend that to 3D by manipulating the layer's transform instead of the view's transform. You'll be able to do things such as scaling and rotating, and you can define your own transforms as well. This example project http://developer.apple.com/iphone/library/samplecode/MoveMe/ is a good reference to get started with transforms and animating them.
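For instance, here is a minimal plain-UIKit sketch of stretching and rotating a view with an affine transform; the view name, scale factors and angle are arbitrary:
CGAffineTransform stretch = CGAffineTransformMakeScale(2.0, 1.0);            // stretch horizontally to 2x
CGAffineTransform stretchAndTilt = CGAffineTransformRotate(stretch, M_PI_4); // then rotate 45 degrees
[UIView animateWithDuration:1.0 animations:^{
    yourImageView.transform = stretchAndTilt;
}];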
I have created a CG context that is 800 pixels wide and 1200 pixels high. I have created a CGLayer over this context that has been transformed (scaled, translated and rotated). So, at some point, since the CGLayer is bigger than the context and has been translated, rotated, etc., not all parts of the CGLayer fall inside the context. See the next picture:
(image: layer and context)
As you can see in the picture, some parts of the layer fall outside the context area. When I render the final composition using
CGContextDrawLayerInRect(context, superRect, objectLayer);
it will render the full layer, including those unnecessary parts outside the context.
My problem is: if I can make it draw just the relevant parts inside the context, I can make it render faster and save memory.
Is there any way to do that?
NOTE: the layer contains transparency.
Please refrain from giving solutions that don't involve CGLayers.
Thanks in advance.
You can clip the context using CGContextClip/-ToMask/-ToRect.
But I think it's actually cheaper/faster to simply 'dump' pixels into a context than to calculate the clipping bounds and 'draw less'.
The surplus drawing doesn't (normally) use up extra memory.
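If you do decide to clip, a minimal sketch (using the superRect and objectLayer names from your snippet, and assuming the area you care about is just the context's own 800x1200 bounds) would be:
CGContextSaveGState(context);
CGContextClipToRect(context, CGRectMake(0, 0, 800, 1200)); // restrict drawing to the visible area
CGContextDrawLayerInRect(context, superRect, objectLayer);
CGContextRestoreGState(context);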
Can you use a CATiledLayer? This should lazy-load in squares, à la Google Maps...
+(Class)layerClass
{
return [CATiledLayer class];
}
- (id)init {
    self = [super init];
    if (self) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(x, x); // x, y, z here are placeholder values
        tiledLayer.levelsOfDetail = y;
        tiledLayer.levelsOfDetailBias = z;
    }
    return self;
}
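With a CATiledLayer-backed view, -drawRect: is then called once per tile, so you only draw what is actually needed. A sketch, with the actual drawing left as a placeholder:
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // For a tiled layer, the clip bounding box is the tile currently being requested.
    CGRect tileRect = CGContextGetClipBoundingBox(ctx);
    // Draw only the content that intersects tileRect here (placeholder fill).
    CGContextSetFillColorWithColor(ctx, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(ctx, tileRect);
}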