Drawing a vector path with a custom style - iPhone

I was very surprised not to find the answer to this one on Stack Overflow.
I have a vector path in PDF format, like the ones Safari and the Mac App Store use as image icons.
Now I'd like to specify the fill color and a custom shadow in code, rather than making images and exporting them. I haven't figured out how to do this.
The shadow works; the fill color, however, does not.
Can anyone tell me how to do this?
Current Code
[NSGraphicsContext saveGraphicsState];
{
    // Has no effect
    [[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] setFill];
    // Neither has this
    [[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] set];

    NSShadow *shadow = [NSShadow new];
    [shadow setShadowOffset:NSMakeSize(0, -1)];
    [shadow setShadowColor:[NSColor blackColor]];
    [shadow setShadowBlurRadius:3.0];
    [shadow set];

    [image drawInRect:imgRect
             fromRect:NSZeroRect
            operation:NSCompositeXOR
             fraction:1.0
       respectFlipped:YES
                hints:nil];
}
[NSGraphicsContext restoreGraphicsState];

In terms of calling drawInRect:..., the image "is what it is". Setting the fill and stroke will affect only primitive operations. A good way to think about this is to realize that all images, vector or raster, have to behave the same way; it would be weird for the current fill color set on the context to affect the drawing of a raster-based image, right? Same idea -- the image is the image. The vector image might also have multiple paths in it, each with different fills. It wouldn't make sense for those to be overridden by the fill color set on the context either.
The shadow works regardless because it's effectively a compositing operation; drawing a given image with a given shadow setting produces the same shadow whether the image was raster-based or the vector equivalent thereof.
In short, if you want to change the contents of the image, you're going to have to write the code to extract the vectors from the image and then draw them as primitives.
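For comparison, once you have extracted a path, primitive drawing does respect the fill color. A minimal sketch (the oval here is a hypothetical stand-in for your extracted vectors):
[NSGraphicsContext saveGraphicsState];

NSShadow *shadow = [NSShadow new];
[shadow setShadowOffset:NSMakeSize(0, -1)];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:3.0];
[shadow set];

// Unlike -drawInRect:..., primitive fills use the current fill color.
[[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] setFill];
NSBezierPath *path = [NSBezierPath bezierPathWithOvalInRect:NSMakeRect(10, 10, 64, 64)];
[path fill];

[NSGraphicsContext restoreGraphicsState];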
Alternately, if all you want is to fill any filled areas with the color, you could use the vector image to create a mask on the context, then set the color on the context and fill. That might produce the desired effect.
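A minimal sketch of that mask-and-fill approach, reusing image and imgRect from the question (note: the CGContext property on NSGraphicsContext is 10.10+; older SDKs use graphicsPort):
CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
CGImageRef cgImage = [image CGImageForProposedRect:&imgRect
                                           context:[NSGraphicsContext currentContext]
                                             hints:nil];
CGContextSaveGState(ctx);
// Clip to the image's alpha, so filling only paints where the vector shapes are.
CGContextClipToMask(ctx, NSRectToCGRect(imgRect), cgImage);
[[NSColor colorWithCalibratedRed:0.92f green:0.97f blue:0.98f alpha:1.00f] setFill];
NSRectFill(imgRect);
CGContextRestoreGState(ctx);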

Related

How to capture screen after applying mask to the layer in iOS?

I added two UIViews to ViewController.view and applied two square images, one to each view.layer.mask, to make it look like a single square sliced into two pieces, then added the image view over them with addSubview.

I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.

Is there any solution for capturing something like picture no. 1 after applying the mask?
Below is the reference from Apple regarding renderInContext::
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which literally does a printscreen of a UIView. I don't use it because it doesn't work well for my needs, but maybe you can use it:
UIImage *img;
// Capture the view's layer into a bitmap context (scale 0.0 = screen scale).
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size,
                                       UIViewYouWantToCapture.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has the alpha of the masked-in region set to 1 and the rest set to 0.
When we capture an image of the view, the complete image is still there (we only see half of it because the other half has alpha = 0, but it is still a complete image), so we get a screenshot of the complete view.
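If you can target iOS 7 or later, UIView's drawViewHierarchyInRect:afterScreenUpdates: snapshots what is actually composited on screen, which may pick up the mask where renderInContext: does not. A minimal sketch (viewToCapture is a placeholder for your container view):
UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, 0.0);
[viewToCapture drawViewHierarchyInRect:viewToCapture.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);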

iPhone SDK boundary checking for coloring

I'm creating an app where the user already has an image (with different objects) without colors. I have to detect which object is touched and then color that object with the appropriate color. How should I do this? Can anyone help me?
I would say that is non-trivial. I can only give hints, since I have not built such an app yet.
First, you need to convert the image into a CGImageRef, for example by calling [uiimage_object CGImage].
Next, you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code, but for your app you need to treat the array as two-dimensional, based on the image width and height.
Then, use the coordinates of the user's touch to read the exact pixel color value from the array. Next, read the color values of the surrounding pixels and determine whether each is similar to the touched pixel (you might need to read some Wikipedia articles etc. on color comparison). If a color is similar, change it to the one you want. Recurse until the surrounding color is no longer similar (i.e. you hit the boundary).
When you are finished modifying the pixel color array, convert it back into a CGImageRef using the CGImageCreate function, then convert that back to a UIImage with [UIImage imageWithCGImage:imageref].
Now you are on your own to implement these steps in code. It would be unreasonable to expect me to write all of it for you, wouldn't it?
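That said, here is a rough, untested sketch of the pixel-extraction step (the helper name is made up; error handling omitted):
#import <UIKit/UIKit.h>

// Hypothetical helper: copies a UIImage into a raw RGBA buffer so individual
// pixels can be read and modified. Caller must free() the returned buffer.
static UInt8 *CopyRGBAPixels(UIImage *image, size_t *outWidth, size_t *outHeight)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    UInt8 *pixels = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                             width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    // Draw the image into the buffer; afterwards pixels[] holds RGBA bytes.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    *outWidth = width;
    *outHeight = height;
    return pixels;
}

// The pixel under a touch at (x, y) then starts at:
//     UInt8 *p = pixels + (y * width + x) * 4;   // p[0..3] = R, G, B, A
// and the flood fill recurses over neighboring pixels from there. When done,
// rebuild a CGImageRef from the modified buffer (e.g. via a bitmap context
// and CGBitmapContextCreateImage, or CGImageCreate as mentioned above).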

Creating a stroke/outline of a .PNG with CALayer?

I would like to apply a "stroke" or outline to a PNG, exactly the way Photoshop does it. I have a feeling this can be done with CALayer, but after some tinkering it is not immediately obvious. setBorderWidth + setBorderColor is almost what I want, except that it only adds a border around the entire dimensions of the image, rather than around the outline of the PNG's shape itself.
Once the stroke is applied, I'd also like to knock out the fill of the PNG, leaving only an outlined border of the initial shape.
There is no automatic way to do what you're asking. You have to know the path of the shape within your PNG that you want to "knock out". Once you've defined that, you can create a CAShapeLayer, which accepts a CGPathRef containing your points. You can stroke and fill the path layer with whatever colors you choose, then add it to the layer hierarchy of the displaying view, or use it to define a mask for one of the view's layers.
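A minimal sketch, assuming you have already built a CGPathRef (shapePath here) that traces the shape, and a view to host the layer:
#import <QuartzCore/QuartzCore.h>

CAShapeLayer *outlineLayer = [CAShapeLayer layer];
outlineLayer.path = shapePath;                          // your traced shape
outlineLayer.strokeColor = [UIColor redColor].CGColor;  // the "stroke"
outlineLayer.fillColor = [UIColor clearColor].CGColor;  // knocks out the fill
outlineLayer.lineWidth = 2.0;
[someView.layer addSublayer:outlineLayer];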

Only White Fill Color is Transparent in UIView

I've got a UIView that is set to opaque = NO, and it all works nicely. In drawRect: I'm doing custom drawing. This works:
CGContextSetFillColor(context, CGColorGetComponents([UIColor blueColor].CGColor));
CGContextFillRect(context, labelOutside);
CGContextAddRect(context, labelOutside);
but this
CGContextSetFillColor(context, CGColorGetComponents([UIColor whiteColor].CGColor));
CGContextFillRect(context, labelOutside);
CGContextAddRect(context, labelOutside);
results in NO fill being produced (you can even see other stuff that I drew on the CGContext through it). How can I draw a white fill?
Note: If I set the control not to be opaque, it still doesn't work.
Why not use CGContextSetFillColorWithColor? Then you can pass the CGColor object directly instead of extracting its components and assuming it's in the same color space as the context.
Edited to add:
this works
CGContextSetFillColor(context, CGColorGetComponents([UIColor blueColor].CGColor));
but this
CGContextSetFillColor(context, CGColorGetComponents([UIColor whiteColor].CGColor));
results in NO fill being produced (you can even see other stuff that I drew on the CGContext through it).
As I mentioned, this way of setting the fill color assumes that the color is in the same color space as the context.
Normally, most CGContexts are in an RGB color space. whiteColor, on the other hand, is almost certainly in a white color space, which has only one or two (white and alpha) components.
So, since the context is in an RGB color space, CGContextSetFillColor expects to be passed three or four (red, green, blue, and alpha) components. When you get the components of whiteColor's CGColor, you get a pointer to a C array holding only two components. CGContextSetFillColor will read the three or four components it wants no matter what; if you pass fewer, it may find garbage after them, or it may find zeroes.
If it finds a zero in the alpha position, then the result is that you have set the fill color to something between yellow (red=1.0, green=1.0, blue=0.0) and white (all three=1.0) with zero alpha. That's what causes the fill to not show up: You are filling with a transparent color.
The problem is that mismatch between the color space the context is in (with three or four components) and the color space the color is in (with one or two components). The solution is to remove the mismatch, which is what CGContextSetFillColorWithColor does: Since you are passing the whole CGColor object, not just an array of numbers, Core Graphics will be able to set the context's color space to that of the color object and interpret the color object's components correctly.
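Concretely, the one-line fix (using the same context and labelOutside rect from the question):
// whiteColor's CGColor has only two components (white, alpha), but that is
// fine here: Core Graphics converts the color to the context's color space.
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, labelOutside);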

Writing a masked image to disk as a PNG file

Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked; otherwise, displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel. It doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image, and then save that?
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
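One caveat: UIGraphicsBeginImageContext renders at a scale of 1.0; if Retina resolution matters, UIGraphicsBeginImageContextWithOptions(img.size, NO, img.scale) should preserve the source image's scale (and it keeps the alpha channel, since opaque is NO).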
See the description of CGImageCreateWithMask in the CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a mask image while saving. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to save correctly with UIImagePNGRepresentation, there are several choices:
Use an inverted version of the image mask (see the sketch below).
Use a "mask image" instead of an "image mask".
Render into a bitmap context and then save that, as epatel mentioned.
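For the first option, one possible way to invert an image mask (a sketch, assuming originalMask is an existing CGImageRef image mask) is to rebuild it with a decode array of {1, 0}, which remaps the mask's sample values:
// Rebuild the mask with an inverting decode array: samples that read as S
// are reinterpreted as (1 - S), flipping painted and unpainted regions.
const CGFloat decode[] = { 1.0f, 0.0f };
CGImageRef invertedMask =
    CGImageMaskCreate(CGImageGetWidth(originalMask),
                      CGImageGetHeight(originalMask),
                      CGImageGetBitsPerComponent(originalMask),
                      CGImageGetBitsPerPixel(originalMask),
                      CGImageGetBytesPerRow(originalMask),
                      CGImageGetDataProvider(originalMask),
                      decode,
                      false);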