I'm trying to create image-snapshot tests for UIView. Unfortunately, my CI machines have a 1x pixel-to-point ratio while my local machine has 2x, so essentially I'm trying to render a UIView on a 1x machine as it would look on a 2x machine.
My code looks like this:
let contentsScale: CGFloat = 2   // must be CGFloat to assign to the scale properties
view.contentScaleFactor = contentsScale
view.layer.contentsScale = contentsScale

let format = UIGraphicsImageRendererFormat()
format.scale = contentsScale

let renderer = UIGraphicsImageRenderer(size: bounds.size, format: format)
let image = renderer.image { ctx in
    self.drawHierarchy(in: bounds, afterScreenUpdates: true)
}
The problem is that by the time CALayer.draw(in ctx: CGContext) is reached inside drawHierarchy, view.contentScaleFactor and view.layer.contentsScale are back to 1 (or whatever UIScreen.main.scale is). It happens in this call stack:
* MyUIView.contentScaleFactor
* _UIDrawViewRectAfterCommit
* closure #1 in convertViewToImage
I also noticed a ; _moveViewToTemporaryWindow comment in the assembly of the _UIDrawViewRectAfterCommit call, which I guess means my view is attached to some temporary window, and that resets the scale. I tried changing the scale again in didMoveToWindow, i.e. right before the drawing, but the view still comes out pixelated even though view.contentScaleFactor is correct during the rendering of the layer.
I noticed that some people try to solve this by applying a scale transform to the CGContext, but that makes no sense here, since the underlying rendering quality is not increased.
So what am I missing? How do I render a UIView into an image at the desired scale?
I did several things to get this working:
Render the layer instead of the view itself (view.layer.render(in: ctx.cgContext)), as suggested here: https://stackoverflow.com/a/51944513/6257435
Views should have a size that is a multiple of your contentsScale, otherwise you get weird antialiasing and interpolation issues on the lines.
Avoid transforms with odd scales (like 1.00078... in my case), otherwise you get the same antialiasing and interpolation issues.
I'm also using format.preferredRange = .standard so the color range matches locally and on CI. A sketch of the full approach is below.
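A minimal sketch putting those points together, assuming this runs in a snapshot test where the view is already laid out (the helper name makeSnapshot is mine, not from the original post):

import UIKit

// Renders a view into a UIImage at an explicit scale, independent of the
// screen scale of the machine running the tests.
func makeSnapshot(of view: UIView, scale: CGFloat = 2) -> UIImage {
    // Push the desired scale down to the layer tree before rendering.
    view.contentScaleFactor = scale
    view.layer.contentsScale = scale
    view.setNeedsLayout()
    view.layoutIfNeeded()

    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    format.preferredRange = .standard   // same color range locally and on CI

    let renderer = UIGraphicsImageRenderer(size: view.bounds.size, format: format)
    return renderer.image { ctx in
        // Render the layer directly; drawHierarchy moves the view to a
        // temporary window, which resets the scale.
        view.layer.render(in: ctx.cgContext)
    }
}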
I'm experiencing an issue with a UIProgressView (using Xcode 11.5 and current Swift version). My aim is to create a progress bar with an increased height (vs. the Apple standard height) and rounded corners, which represents the progress of a played audio file smoothly.
(Image: intended progress bar style)
I can easily achieve the desired look by combining a height constraint in interface builder with the following code:
audioProgress.layer.cornerRadius = bubble.frame.size.height / 3
audioProgress.clipsToBounds = true
However, the issue I'm experiencing is that the progress bar does not start smoothly from the beginning – instead it jumps to about 1/3 of the bar (the exact position changes depending on device size) and stops there until the audio "catches up" with that position, from where it continues running smoothly until the end.
I made sure this is not an error in my code regarding audio files, timers etc., but some layout issue. E.g. once I change the height of the bar back to Apple's standard height, the progress shows as it should.
I can avoid this odd behaviour by using the below code to increase the bar's height (instead of a constraint):
let transform: CGAffineTransform = CGAffineTransform(scaleX: 1.0, y: transformFactor)
audioProgress.transform = transform
However, this no longer allows me to round the corners as shown above, as the combination of layer.cornerRadius and the CGAffineTransform commands leads to oddly skewed corners, as shown in other posts.
So my 2 questions are:
Can anyone explain what causes this weird behavior of the bar jumping and stopping for a while in the first place? And how to avoid it?
If using the CGAffineTransform command is the only way to go – is there some other way to achieve the rounded corners? E.g. something along the lines of "first transform the height, and only THEN round the corners" (as opposed to doing it the other way around, which, in my understanding, causes the skewed look...)
Try this -
audioProgress?.layer.cornerRadius = bubble.frame.size.height / 3
audioProgress?.layer.masksToBounds = true
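If you keep the height constraint, a minimal sketch of where I would apply the rounding (placing it in viewDidLayoutSubviews, and deriving the radius from the progress view's own height, are my assumptions, not part of the original answer):

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // At this point the height constraint has been applied, so the
    // frame already reflects the taller bar when the radius is computed.
    audioProgress.layer.cornerRadius = audioProgress.frame.size.height / 3
    audioProgress.layer.masksToBounds = true
}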
I'm looking into this method because I would like to convert a rather large NSImage to a smaller CGImage in order to assign it to a CALayer's contents.
From Apple's documentation I gather that the proposedRect is supposed to be the size of the CGImage that will be returned and that, if I pass nil for the proposedRect, I will get a CGImage the size of the original NSImage. (Please correct me if I'm wrong.)
I tried calling it with nil for the proposed rect and it works perfectly, but when I try giving it a rectangle like (0, 0, 400, 300), the resulting CGImage is still the size of the original image. The bit of code I'm using is as follows.
var r = NSRect(x: 0, y: 0, width: 400, height: 300)
let img = NSImage(contentsOf: url)?.cgImage(forProposedRect: &r, context: nil, hints: nil)
There must be something about this that I understood wrong. I really hope someone can tell me what that is.
This method is not for producing scaled images. The basic idea is that drawing the NSImage to the input rect in the context would produce a certain result. This method creates a CGImage such that, if it were drawn to the output rect in that same context, it would produce the same result.
So, it's perfectly valid for the method to return a CGImage the size of the original image. The scaling would occur when that CGImage is drawn to the rect.
There's some documentation about this that only exists in the historical release notes from when it was first introduced. Search for "NSImage, CGImage, and CoreGraphics impedance matching".
To produce a scaled-down image, you should create a new image of the size you want, lock focus on it, and draw the original image to it. Or, if you weren't aware, you can just assign your original image as the layer's contents and see if that's performant enough.
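A minimal sketch of the lock-focus approach described above (the helper name and target size are mine, for illustration):

import AppKit

// Draws the original image into a new, smaller NSImage of the given size.
func scaledImage(from image: NSImage, to size: NSSize) -> NSImage {
    let scaled = NSImage(size: size)
    scaled.lockFocus()
    image.draw(in: NSRect(origin: .zero, size: size),
               from: .zero,          // .zero means "use the whole source image"
               operation: .copy,
               fraction: 1.0)
    scaled.unlockFocus()
    return scaled
}

// Usage, e.g. for a layer's contents:
// layer.contents = scaledImage(from: bigImage, to: NSSize(width: 400, height: 300))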
I have spent hours on Google searching for an answer to this and trying pieces of code but I have just not been able to find one. I also recognise that this is a question that has been asked lots of times, however I do not know what else to do now.
I have access to 500x500 pixel rainfall radar images from the Met Office's DataPoint API, covering the UK. They must be displayed in a 640x852 pixel area (an NSImageView, whose scaling property I currently have set to axis independent) because this is the correct size of the map generated for the boundaries covered by the imagery. I want to display them at the enlarged size of 640x852 using the nearest-neighbour algorithm and in an aliased format. This can be achieved in Photoshop by going to Image > Image Size... and setting resample to nearest neighbour (hard edges). The source images should remain at 500x500 pixels; I just want to display them in a larger view.
I have tried setting the magnificationFilter of the NSImageView.layer to all three of the different kCAFilter... options but this has made no difference. I have also tried setting the shouldRasterize property of the NSImageView.layer to true, which also had no effect. The images always end up being smoothed or anti-aliased, which I do not want.
Having recently come from C#, there could be something I have missed as I have not been programming in Swift for very long. In C# (using WPF), I was able to get what I want by setting the BitmapScalingOptions of the image element to NearestNeighbour.
To summarise, I want to display a 500x500 pixel image in a 640x852 pixel NSImageView in a pixelated form, without any kind of smoothing (irrespective of whether the display is retina or not) using Swift. Thanks for any help you can give me.
Below is the image source:
Below is the actual result (screenshot from a 5K iMac):
This was created by simply setting the image property of the NSImageView in the tableViewSelectionDidChange event of the NSTableView I use to select the time to show the image for, using:
let selected = times[timesTable.selectedRow]
let formatter = DateFormatter()
formatter.dateFormat = "d/M/yyyy 'at' HH:mm"
let date = formatter.date(from: selected)
formatter.dateFormat = "yyyyMMdd'T'HHmmss"
imageData.image = NSImage(contentsOfFile: basePathStr +
    "RainObs_" + formatter.string(from: date!) + ".png")
Below is what I want it to look like (ignoring the background and cropped out parts). If you save the image yourself you will see it is pixellated and aliased:
Below is the map that the source is displayed over (the source is just in an NSImageView laid on top of another NSImageView containing the map):
Try using a custom subclass of NSView instead of an NSImageView. It will need an image property with a didSet observer that sets needsDisplay. In the drawRect() method, either:
use the drawInRect(_:fromRect:operation:fraction:respectFlipped:hints:) method of the NSImage with a hints dictionary of [NSImageHintInterpolation:NSImageInterpolation.None], or
save the current value of NSGraphicsContext.currentContext.imageInterpolation, change it to .None, draw the NSImage with any of the draw...(...) methods, and then restore the context's original imageInterpolation value
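A minimal sketch of such a subclass, using the second option (saving and restoring the context's imageInterpolation); the class name PixelatedImageView is mine:

import Cocoa

// Displays its image scaled to the view's bounds without any smoothing.
class PixelatedImageView: NSView {
    var image: NSImage? {
        didSet { needsDisplay = true }
    }

    override func draw(_ dirtyRect: NSRect) {
        guard let image = image, let context = NSGraphicsContext.current else { return }
        let savedInterpolation = context.imageInterpolation
        context.imageInterpolation = .none   // nearest-neighbour, hard pixel edges
        image.draw(in: bounds, from: .zero, operation: .sourceOver, fraction: 1.0)
        context.imageInterpolation = savedInterpolation
    }
}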
I added two UIViews to ViewController.view and applied a square image to each view's layer.mask, so that it looks like one square sliced into 2 pieces, then added an image view over each with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture it so it looks like picture no. 1 after applying the mask?
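Roughly, the setup looks like this inside the view controller (a rough sketch; the asset names and frames are placeholders, not the real values):

let leftPiece = UIView(frame: CGRect(x: 20, y: 100, width: 100, height: 200))
let rightPiece = UIView(frame: CGRect(x: 130, y: 100, width: 100, height: 200))

for piece in [leftPiece, rightPiece] {
    // Mask each container with a square image so the two pieces together
    // read as one square sliced in half.
    let mask = CALayer()
    mask.contents = UIImage(named: "squareMask")?.cgImage   // placeholder asset
    mask.frame = piece.bounds
    piece.layer.mask = mask

    // The actual photo is an image view added on top of the masked view.
    let imageView = UIImageView(image: UIImage(named: "photo"))  // placeholder asset
    imageView.frame = piece.bounds
    piece.addSubview(imageView)

    view.addSubview(piece)
}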
Below is the reference from Apple regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before, which literally takes a screenshot of a UIView. I don't use it because it does not work well for my needs, but maybe you can use it:
UIImage *img;
// Begin a bitmap context matching the view's size; 0.0 uses the screen's scale.
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, UIViewYouWantToCapture.opaque, 0.0);
// Render the view's layer tree into the current context and grab the result.
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
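Since the note quoted above says renderInContext: does not render mask values, another thing worth trying (my suggestion, not part of this answer) is drawHierarchy(in:afterScreenUpdates:), which captures the view as it is actually composited on screen, masks included. A Swift sketch:

import UIKit

// Captures a view as it appears on screen, including layer masks.
func capture(_ viewToCapture: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: viewToCapture.bounds)
    return renderer.image { _ in
        viewToCapture.drawHierarchy(in: viewToCapture.bounds, afterScreenUpdates: true)
    }
}

// Then save it, e.g.:
// UIImageWriteToSavedPhotosAlbum(capture(containerView), nil, nil, nil)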
When we apply a mask to an image, the resulting image has the alpha of the masked-in part set to 1 and the alpha of the remaining part set to 0.
The underlying image is still complete – we only appear to see half of it because the other half has alpha 0 – so when we capture an image of the view, we get a screenshot of the complete view.
I have a subclass of UIView in which I draw an NSString using the drawInRect:withFont:lineBreakMode:alignment: method.
Unfortunately, when I change the orientation of the device, say from portrait to landscape, the text doesn't get redrawn correctly but gets scaled and distorted, contrary to my intentions.
How can I solve this problem?
This is part of how Core Animation animates transitions. A snapshot of your view is taken, and then stretched/moved into the new location.
There are a number of ways you can tackle this. First of all you can try this:
self.contentMode = UIViewContentModeRedraw;
This might be all you need and will tell Core Animation to redraw your contents instead of using a snapshot. If you still have issues, you can try defining a "stretchable" region that is stretched instead of your text.
For example, if you know you have a vertical slice of your view where there is never any text, you can define the content stretch to be in that little section and any stretching only occurs there, hopefully keeping your text intact.
CGFloat stretchStartX = 25.0f;
CGFloat stretchEndX = 30.0f;
/* Content stretch values are from 0.0-1.0, as a fraction of the view bounds;
   the rect is (x, y, width, height), so the width is the stretchable span. */
self.contentStretch = CGRectMake(stretchStartX / self.bounds.size.width,
                                 0.0f,
                                 (stretchEndX - stretchStartX) / self.bounds.size.width,
                                 1.0f);
This causes your view to be stretched only between x values 25 and 30.