What does proposedRect mean in NSImage.cgImage(forProposedRect:context:hints:)?

I'm looking into this method because I would like to convert a rather large NSImage to a smaller CGImage in order to assign it to a CALayer's contents.
From Apple's documentation I gather that the proposedRect is supposed to be the size of the CGImage that will be returned, and that if I pass nil for the proposedRect, I will get a CGImage the size of the original NSImage. (Please correct me if I'm wrong.)
I tried calling it with nil for the proposed rect and it works perfectly, but when I try giving it a rectangle like (0,0,400,300), the resulting CGImage is still the size of the original image. The bit of code I'm using is as follows.
var r = NSRect(x: 0, y: 0, width: 400, height: 300)
let img = NSImage(contentsOf: url)?.cgImage(forProposedRect: &r, context: nil, hints: nil)
There must be something about this that I understood wrong. I really hope someone can tell me what that is.

This method is not for producing scaled images. The basic idea is that drawing the NSImage to the input rect in the context would produce a certain result. This method creates a CGImage such that, if it were drawn to the output rect in that same context, it would produce the same result.
So, it's perfectly valid for the method to return a CGImage the size of the original image. The scaling would occur when that CGImage is drawn to the rect.
There's some documentation about this that only exists in the historical release notes from when it was first introduced. Search for "NSImage, CGImage, and CoreGraphics impedance matching".
To produce a scaled-down image, you should create a new image of the size you want, lock focus on it, and draw the original image to it. Or, if you weren't aware, you can just assign your original image as the layer's contents and see if that's performant enough.
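For concreteness, here is a minimal sketch of that lock-focus approach; the 400x300 target size is just an example, and url is the one from the question's code (lockFocus is deprecated on recent macOS in favor of NSImage(size:flipped:drawingHandler:), but it illustrates the idea):
import AppKit

// Draw the original image into a smaller NSImage, then ask that for a CGImage.
let targetSize = NSSize(width: 400, height: 300)
guard let original = NSImage(contentsOf: url) else { fatalError("no image at url") }

let scaled = NSImage(size: targetSize)
scaled.lockFocus()
// A zero source rect means "use the whole source image".
original.draw(in: NSRect(origin: .zero, size: targetSize),
              from: .zero,
              operation: .copy,
              fraction: 1.0)
scaled.unlockFocus()

var rect = NSRect(origin: .zero, size: targetSize)
let smallCGImage = scaled.cgImage(forProposedRect: &rect, context: nil, hints: nil)
// Note: on a Retina screen the backing may still be 2x the point size,
// since lockFocus uses the main screen's backing scale.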

How to render UIView into an image at higher resolution?

I'm trying to create image-snapshot tests for UIView. Unfortunately for me, my CI machines have a @1x pixel-to-point ratio and my local machine has @2x, so basically I'm trying to render a UIView on a @1x machine as it would look on a @2x machine.
My code looks like this:
let contentsScale: CGFloat = 2
view.contentScaleFactor = contentsScale
view.layer.contentsScale = contentsScale
let format = UIGraphicsImageRendererFormat()
format.scale = contentsScale
let renderer = UIGraphicsImageRenderer(size: view.bounds.size, format: format)
let image = renderer.image { ctx in
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
}
The problem is that when it reaches CALayer.draw(in ctx: CGContext) inside drawHierarchy, view.contentScaleFactor and view.layer.contentsScale are back to 1 (or whatever UIScreen.main.scale is). It happens in this call stack:
* MyUIView.contentScaleFactor
* _UIDrawViewRectAfterCommit
* closure #1 in convertViewToImage
I also noticed that there is a ; _moveViewToTemporaryWindow comment in the assembly of the _UIDrawViewRectAfterCommit call, which I guess means it attaches my view to some temporary window, which resets the scale. I tried setting the scale again in didMoveToWindow, i.e. right before the drawing, but the view comes out pixelated even though view.contentScaleFactor is correct during the rendering of the layer.
I noticed that some people try to solve this by applying a scale to the CGContext, but that makes no sense, since the underlying rendering quality is not scaled.
So what am I missing? How do I render a UIView into an image at the desired scale?
I did several things to get this working (a sketch combining them follows the list):
Render the layer instead of the view itself (view.layer.render(in: ctx.cgContext)), as suggested here: https://stackoverflow.com/a/51944513/6257435
Views should have a size that is a multiple of your contentsScale, otherwise you get weird antialiasing and interpolation issues on the lines.
Avoid transforms with odd scales (like 1.00078... in my case), otherwise you get the same antialiasing and interpolation issues.
I'm also using format.preferredRange = .standard so the color range is handled the same locally and on CI.
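For reference, a minimal sketch combining those points; the snapshot(of:) helper name and the default scale of 2 are mine, not from the original answer:
import UIKit

// Sketch: render a view's layer into an image at a forced scale,
// independent of UIScreen.main.scale.
func snapshot(of view: UIView, scale: CGFloat = 2) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale                  // force the pixel density
    format.preferredRange = .standard     // same color range locally and on CI

    let renderer = UIGraphicsImageRenderer(size: view.bounds.size, format: format)
    return renderer.image { ctx in
        // Render the layer directly instead of drawHierarchy(in:afterScreenUpdates:).
        view.layer.render(in: ctx.cgContext)
    }
}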

Change the color of one pixel of an UIImage without creating a new UIImage/CGImage

All solutions I've found for doing this (like this one: Change color of certain pixels in a UIImage) suggest creating a new UIImage, but I want to modify a pixel directly in the UIImage without creating a new one. Is there a way to do this?
It seems that CGImage is not mutable, but is there a way to create an image from a pixel data buffer, so that modifying this pixel data buffer would directly modify the image?
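As a hedged sketch of the buffer-backed idea the question describes (all sizes and names below are illustrative): a CGContext can draw into a pixel buffer that you own and mutate directly, but CGImage itself remains immutable, so each snapshot of the buffer is still a new CGImage object.
import CoreGraphics

// Sketch: a CGContext that draws into a caller-owned RGBA pixel buffer.
// Mutating the buffer changes what the next snapshot will contain.
let width = 64, height = 64
let bytesPerPixel = 4
var pixels = [UInt8](repeating: 0, count: width * height * bytesPerPixel)

pixels.withUnsafeMutableBytes { buffer in
    guard let context = CGContext(data: buffer.baseAddress,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * bytesPerPixel,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return }

    // Change one pixel in place (premultiplied RGBA layout).
    let offset = (10 * width + 10) * bytesPerPixel
    buffer[offset] = 255        // red
    buffer[offset + 3] = 255    // alpha

    // makeImage() captures the current buffer contents, but the result
    // is still a new, immutable CGImage.
    let snapshot = context.makeImage()
    _ = snapshot
}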

How to capture screen after applying mask to the layer in iOS?

I added two UIViews to ViewController.view and applied two square images to each view.layer.mask, to make it look like a square sliced into two pieces, and then added the image view over them with addSubview.

I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.

Is there any way to capture the view so that it looks like picture no. 1 after applying the mask?
Below is the reference from Apple regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which literally does a printscreen of a UIView. I don't use it because it does not work well for my needs, but maybe you can use it:
UIImage *img;
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, self.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
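For what it's worth, a roughly equivalent sketch in Swift using UIGraphicsImageRenderer (not part of the original answer, and it has the same renderInContext limitations quoted above):
import UIKit

// Sketch: capture a view's layer into a UIImage, same idea as the
// Objective-C snippet above.
func capture(_ viewToCapture: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: viewToCapture.bounds)
    return renderer.image { ctx in
        viewToCapture.layer.render(in: ctx.cgContext)
    }
}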
When a mask is applied, the visible part of the image keeps an alpha of 1 and the masked-out part gets an alpha of 0, but the complete image is still there underneath.
So when we capture an image of the view, we still get a screenshot of the complete view (we only appear to see half the image on screen because the other half has alpha 0).

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iPhone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
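A rough Swift sketch of that recipe (the settingPixel helper and the 32-bit RGBA assumption are mine, not from the answer):
import UIKit

// Sketch of the recipe above: copy the pixel bytes, edit them, rebuild a
// CGImage from the edited bytes, and wrap that in a new UIImage.
// Assumes a 32-bit-per-pixel layout (e.g. RGBA); check bitmapInfo in practice.
func settingPixel(of image: UIImage, x: Int, y: Int,
                  to rgba: (UInt8, UInt8, UInt8, UInt8)) -> UIImage? {
    guard let cgImage = image.cgImage,
          let cfData = cgImage.dataProvider?.data else { return nil }

    var bytes = cfData as Data                       // mutable copy of the pixel data
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    let offset = y * cgImage.bytesPerRow + x * bytesPerPixel
    bytes[offset]     = rgba.0
    bytes[offset + 1] = rgba.1
    bytes[offset + 2] = rgba.2
    bytes[offset + 3] = rgba.3

    // Rebuild a CGImage from the modified bytes, then wrap it in a UIImage.
    guard let provider = CGDataProvider(data: bytes as CFData),
          let space = cgImage.colorSpace,
          let newCGImage = CGImage(width: cgImage.width,
                                   height: cgImage.height,
                                   bitsPerComponent: cgImage.bitsPerComponent,
                                   bitsPerPixel: cgImage.bitsPerPixel,
                                   bytesPerRow: cgImage.bytesPerRow,
                                   space: space,
                                   bitmapInfo: cgImage.bitmapInfo,
                                   provider: provider,
                                   decode: nil,
                                   shouldInterpolate: false,
                                   intent: cgImage.renderingIntent) else { return nil }

    return UIImage(cgImage: newCGImage, scale: image.scale, orientation: image.imageOrientation)
}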

Writing a masked image to disk as a PNG file

Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image, and then saving that?
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a mask image (with the opposite alpha interpretation) while saving. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to correctly save with UIImagePNGRepresentation, there are several choices:
Use an inverted version of the image mask.
Use a "mask image" instead of an "image mask".
Render to a bitmap context and then save it, like epatel mentioned (a sketch follows below).
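A minimal Swift sketch of that third choice (the maskedImage and cacheURL names are placeholders, not from the question):
import UIKit

// Sketch: flatten the masked image by re-rendering it into a new bitmap,
// then PNG-encode the flattened copy for caching.
func writeMaskedImage(_ maskedImage: UIImage, to cacheURL: URL) throws {
    let renderer = UIGraphicsImageRenderer(size: maskedImage.size)
    let flattened = renderer.image { _ in
        maskedImage.draw(at: .zero)
    }
    if let png = flattened.pngData() {
        try png.write(to: cacheURL, options: .atomic)
    }
}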