I currently have a custom progress view in my application. I am attempting to draw the progress view and then take a "snapshot" of it so I can pass this along where needed. So once I have drawn my layers, I want to then convert them into a single UIImage.
Here is my current code, which also includes an attempt to save to the documents directory in order to view the image.
UIGraphicsBeginImageContext(self.frame.size)
self.backgroundRingLayer.renderInContext(UIGraphicsGetCurrentContext())
self.progressRingLayer.renderInContext(UIGraphicsGetCurrentContext())
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// Write the snapshot to a named file inside the documents directory (file name is arbitrary),
// not to the bare directory path
let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let documentsDirectory = paths[0] as NSString
let filePath = documentsDirectory.stringByAppendingPathComponent("snapshot.png")
let data: NSData = UIImagePNGRepresentation(image)!
data.writeToFile(filePath, atomically: true)
I am relatively new to using CGContext and such, so I may be way off with this. Any help would be great, thanks!
Just render the view's layer instead of the individual sublayers. Rendering a layer includes all of its sublayers (and the layers of any subviews).
self.layer.renderInContext(UIGraphicsGetCurrentContext())
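For reference, a minimal sketch of the same idea in current Swift syntax (the helper names are mine, not from the original post): render the view's layer, sublayers included, into a UIImage, then write it out for inspection.

func snapshotImage(of view: UIView) -> UIImage {
    // UIGraphicsImageRenderer manages the context and scale for us
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
}

func saveSnapshot(_ image: UIImage, named name: String) throws {
    // Write the PNG into the documents directory so it can be inspected
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    try image.pngData()?.write(to: documents.appendingPathComponent(name), options: .atomic)
}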
Using CIGaussianBlur causes UIImageView to apply the blur from the border in, making the image appear to shrink (right image). Using .blur on a SwiftUI view does the opposite; the blur is applied from the border outwards (left image). This is the effect I’m trying to achieve in UIKit. How can I go about this?
I've seen a few posts about using CIAffineClamp, but that causes the blur to stop at the image border, which is not what I want.
private let context = CIContext()
private let filter = CIFilter(name: "CIGaussianBlur")!

private func createBlurredImage(using image: UIImage, value: CGFloat) -> UIImage? {
    let beginImage = CIImage(image: image)
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(value, forKey: kCIInputRadiusKey)

    guard
        let outputImage = filter.outputImage,
        let cgImage = context.createCGImage(outputImage, from: outputImage.extent)
    else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}
When I used CIGaussianBlur I wanted my output image to be contained inside the image frame, so I used CIAffineClamp on the image before applying the blur, as you describe.
You might need to render your source image into a larger frame, clamp to that larger frame using CIAffineClamp, apply your blur filter, then load the resulting blurred output image. Core Image is a bit of a pain to set up and figure out, so I don’t have a full solution ready for you, but that’s what I would suggest.
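For what it's worth, here is a rough sketch of that idea in current Swift (the padding factor and helper name are mine, not from any Apple sample): place the source inside a larger transparent frame, clamp to that frame, blur, then render the larger frame so the blur can spill past the original edges.

import CoreImage
import UIKit

func blurredBeyondEdges(_ image: UIImage, radius: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // Larger working frame: roughly 3x the blur radius on every side.
    let paddedRect = input.extent.insetBy(dx: -radius * 3, dy: -radius * 3)

    // Composite the image over a transparent background that fills the larger frame.
    let transparentBackground = CIImage(color: .clear).cropped(to: paddedRect)
    let padded = input.composited(over: transparentBackground)

    // Clamp the padded frame, blur, and crop back to the padded frame.
    let blurred = padded.clampedToExtent()
        .applyingGaussianBlur(sigma: Double(radius))
        .cropped(to: paddedRect)

    let context = CIContext()
    guard let cgImage = context.createCGImage(blurred, from: paddedRect) else { return nil }
    return UIImage(cgImage: cgImage)
}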
I am trying to save an NSView to a PNG.
I start with the NSView and then call dataWithPDF for the PDF, or cacheDisplay for the PNG. The code to do both looks like this.
guard view.lockFocusIfCanDraw() else {
    assert(false)
    return
}
let pdfData = view.dataWithPDF(inside: rect)
guard let imgData = view.bitmapImageRepForCachingDisplay(in: rect) else {
    assert(false)
    return
}
view.cacheDisplay(in: rect, to: imgData)
view.unlockFocus()

try pdfData.write(to: pdfName, options: .atomic)
let pngData = imgData.representation(using: .png, properties: [:])
try pngData!.write(to: pngName, options: .atomic)
So far, so good. However, the two outputs differ.
PDF (correct!)
And this is the PNG output. As one can see, the subviews aren't included; only the arrows, which are drawn as part of the view itself, show up.
Why is the outcome so different?
Many thanks in advance!
OK, I found the answer. Thanks to View Debugging I saw that the subviews use a layer (self.wantsLayer = true), and the layers don't find their way into the PNG, although they do into the PDF. I'm not sure whether this is a bug or a feature, but now I can fix the PNG output.
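For anyone hitting the same thing, here is a sketch of one possible fix (my own helper, assuming the view is layer-backed and already laid out): render the layer tree into an explicit bitmap context instead of relying on lockFocus/cacheDisplay.

func pngData(of view: NSView) -> Data? {
    let rect = view.bounds
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(rect.width),
                                     pixelsHigh: Int(rect.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0),
          let context = NSGraphicsContext(bitmapImageRep: rep) else { return nil }

    // Render the layer tree (sublayers included) into the bitmap context.
    // Note: this ignores the backing scale factor for brevity.
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = context
    view.layer?.render(in: context.cgContext)
    NSGraphicsContext.restoreGraphicsState()

    return rep.representation(using: .png, properties: [:])
}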
Why is the outcome so different?
Trying your code with a different view that has subviews (I obviously don't have your view) works as expected, and the PNG is fine. So it has to be something to do with your views, but I can make no suggestion as to what. However...
As you've got valid PDF data you can generate your PNG from that using something like:
let captured = NSImage(data:pdfData)
let rep = NSBitmapImageRep(data:(captured?.tiffRepresentation)!)
let pngData = rep?.representation(using: NSPNGFileType, properties:[:])
(that is Swift 3, hence NSPNGFileType rather than .png)
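In current Swift the same conversion would look roughly like this (a sketch reusing pdfData and pngName from your code):

if let captured = NSImage(data: pdfData),
   let tiff = captured.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff),
   let pngData = rep.representation(using: .png, properties: [:]) {
    try pngData.write(to: pngName, options: .atomic)
}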
This of course doesn't solve whatever problem you have; it avoids it :-) You should really figure out why your views are failing and treat this as a temporary band-aid (assuming it works for you...).
HTH
I was wondering if there is a way to convert an array of NSImages into an animated GIF using Swift on macOS/OS X?
I should be able to export it to a file afterwards, so an animation of images displayed in my app would not be enough.
Thanks!
Image I/O has the functionality you need. Try this:
var images = ... // init your array of NSImage
let destinationURL = NSURL(fileURLWithPath: "/path/to/image.gif")
let destinationGIF = CGImageDestinationCreateWithURL(destinationURL, kUTTypeGIF, images.count, nil)!
// The final size of your GIF. This is an optional parameter
var rect = NSMakeRect(0, 0, 350, 250)
// This dictionary controls the delay between frames
// If you don't specify this, CGImage will apply a default delay
let properties = [
    (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): 1.0/16.0]
]

for img in images {
    // Convert an NSImage to CGImage, fitting within the specified rect
    // You can replace `&rect` with nil
    let cgImage = img.CGImageForProposedRect(&rect, context: nil, hints: nil)!

    // Add the frame to the GIF image
    // You can replace `properties` with nil
    CGImageDestinationAddImage(destinationGIF, cgImage, properties)
}
// Write the GIF file to disk
CGImageDestinationFinalize(destinationGIF)
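If the GIF should also loop, a loop count can be set on the destination as well. This is a sketch of an optional extra (0 means loop indefinitely); it goes right after creating destinationGIF, before the finalize call above:

// Loop count applies to the whole file, not to individual frames
let gifFileProperties = [
    (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFLoopCount as String): 0]
]
CGImageDestinationSetProperties(destinationGIF, gifFileProperties)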
I am implementing a Top Shelf extension in Apple tvOS. I have downloaded images and assigned them to the imageURL of my TVContentItems, but the downloaded images' aspect ratio does not fit the Top Shelf images properly. I have tried to change the size by appending the width and height to the image link:
www.mydownloadedimages.com/{width}x{height}
But it didn't work.
Can I do the resizing on the client side in any other way? In the TVContentItem class I only have an NSURL property; there is no UIImage object.
Thanks a lot.
Here is Apple's documentation on image sizes and shapes, from the TVContentItem header:
TVContentItemImageShapePoster      // ~ 2 : 3
TVContentItemImageShapeSquare      // ~ 1 : 1
TVContentItemImageShapeSDTV        // ~ 4 : 3
TVContentItemImageShapeHDTV        // ~ 16 : 9
TVContentItemImageShapeWide        // ~ 8 : 3
TVContentItemImageShapeExtraWide   // ~ 87 : 28
@property imageShape
@abstract A TVContentItemImageShape value describing the intended aspect ratio or shape of the image.
@discussion For Top Shelf purposes: the subset of values which are valid in this property, for TVContentItems in the topShelfItems property of the TVTopShelfProvider, depends on the value of the topShelfStyle property of the TVTopShelfProvider:
TVTopShelfContentStyleInset:
valid: TVContentItemImageShapeExtraWide
TVTopShelfContentStyleSectioned:
valid: TVContentItemImageShapePoster
valid: TVContentItemImageShapeSquare
valid: TVContentItemImageShapeHDTV
When the value of this property is not valid for the Top Shelf style, the system reserves the right to scale the image in any way.
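In code, that means picking a shape that matches the provider's style before assigning images. A small sketch, assuming a TVContentItem named contentItem and the provider's topShelfStyle property, as used in the answer below:

switch self.topShelfStyle {
case .Inset:
    contentItem.imageShape = .ExtraWide
case .Sectioned:
    contentItem.imageShape = .HDTV   // .Poster and .Square are also valid for this style
}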
You're right that TVContentItem has no UIImage-typed property. Since TVContentItem also accepts local file URLs in the imageURL property, a workaround can be:
grabbing the UIImage from the internet
creating a new image context with the size of the top shelf image
saving the result into the caches directory (NSCachesDirectory)
setting the local file URL as imageURL.
Here are the steps:
Let's create our TVContentItem object:
let identifier = TVContentIdentifier(identifier: "myPicture", container: wrapperID)!
let contentItem = TVContentItem(contentIdentifier: identifier )!
Set the contentItem's imageShape:
contentItem.imageShape = .HDTV
Grab the image from the internet. I did this synchronously here; you can also use other async methods to get it (NSURLConnection, AFNetworking, etc.); a small sketch of an async alternative follows the snippet below:
let data : NSData = NSData(contentsOfURL: NSURL(string: "https://s3-ak.buzzfed.com/static/2014-07/16/9/enhanced/webdr08/edit-14118-1405517808-7.jpg")!)!
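As mentioned, an asynchronous alternative could look like this (a sketch in the same Swift 2-era syntax as the snippet above; the URL is a placeholder, and the remaining steps would run inside the completion handler):

NSURLSession.sharedSession().dataTaskWithURL(NSURL(string: "https://example.com/picture.jpg")!) { data, response, error in
    guard let data = data else { return }
    // continue with the remaining steps using `data`
}.resume()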
Prepare the path where your image will be saved and get your UIImage from the data object:
let filename = "picture-test.jpg"
let paths = NSSearchPathForDirectoriesInDomains(.CachesDirectory, .UserDomainMask, true)
let filepath = paths.first! + "/" + filename
let img : UIImage = UIImage(data: data)!
Assuming you've already set the topShelfStyle property, get the size of the top shelf image with the method TVTopShelfImageSizeForShape. This will be the size of your image context:
let shapeSize : CGSize = TVTopShelfImageSizeForShape(contentItem.imageShape, self.topShelfStyle)
Create your image context of shapeSize size and draw the downloaded image into the context rect. Here you can do all your modifications to the image to adjust it into the desired size. In this example I took a square image from Instagram and I put white letterbox bands on the right and left sides.
UIGraphicsBeginImageContext(shapeSize)
let imageShapeInRect : CGRect = CGRectMake((shapeSize.width - shapeSize.height) / 2, 0, shapeSize.height, shapeSize.height)
img.drawInRect(imageShapeInRect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
In the end, save this image into the caches directory and set the image path as the contentItem's imageURL.
UIImageJPEGRepresentation(newImage, 0.8)!.writeToFile(filepath, atomically: true)
contentItem.imageURL = NSURL(fileURLWithPath: filepath)
Complete your TVContentItem with other details (like title, internal link, etc...), run the top shelf extension... et voilà!
I would like to add to Nicola Giancecchi's answer. The Top Shelf extension runs on a non-main thread, so using methods like UIImageJPEGRepresentation or UIImagePNGRepresentation there will sometimes stop the Top Shelf thread, printing this in the console:
Program ended with exit code: 0
To fix this you can wrap your code like this:
DispatchQueue.main.sync {
UIImageJPEGRepresentation(newImage, 0.8)!.writeToFile(filepath, atomically: true)
let imageURL = NSURL(fileURLWithPath: filepath)
if #available(tvOSApplicationExtension 11.0, *) {
contentItem.setImageURL(imageURL, forTraits: .userInterfaceStyleLight)
contentItem.setImageURL(imageURL, forTraits: .userInterfaceStyleDark)
} else {
contentItem.imageURL = imageURL
}
}
Currently I am working on an NSImageView onto which the user drags and drops an image, and the image gets saved to disk. I am able to save PNG and JPEG images, but when saving GIF images, all that is saved is a single frame from the GIF. The image view is able to display the whole animated GIF, though.
My current implementation to save image from NSImageView to disk is:
let cgRef = image.CGImageForProposedRect(nil, context: nil, hints: nil)
let newRep = NSBitmapImageRep(CGImage: cgRef!)
newRep.size = image.size
let type = getBitmapImageFileType(imageName.lowercaseString) // getBitmapImageFileType: returns NSBitmapImageFileType
let properties = type == .NSGIFFileType ? [NSImageLoopCount: 0] : Dictionary<String, AnyObject>()
let data: NSData = newRep.representationUsingType(type, properties: properties)!
data.writeToFile(link, atomically: true)
How should I modify this code to be able to save all the frames of the GIF to disk?
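One way to approach this, as a sketch in current Swift syntax (untested against your exact setup; the helper name writeAnimatedGIF is mine): instead of flattening the NSImage into a single CGImage, walk the frames of its bitmap rep and write each one to an Image I/O GIF destination.

import Cocoa
import CoreServices
import ImageIO

func writeAnimatedGIF(from image: NSImage, to url: URL) {
    guard let rep = image.representations.first as? NSBitmapImageRep,
        let frameCount = (rep.value(forProperty: .frameCount) as? NSNumber)?.intValue,
        frameCount > 1,
        let destination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeGIF, frameCount, nil)
        else { return }

    // Loop forever, matching the NSImageLoopCount: 0 used in the question's code
    let fileProperties = [(kCGImagePropertyGIFDictionary as String):
                              [(kCGImagePropertyGIFLoopCount as String): 0]]
    CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)

    for frame in 0..<frameCount {
        // Select each frame of the animated bitmap rep in turn
        rep.setProperty(.currentFrame, withValue: NSNumber(value: frame))
        let delay = (rep.value(forProperty: .currentFrameDuration) as? NSNumber)?.doubleValue ?? 0.1
        guard let cgImage = rep.cgImage else { continue }
        let frameProperties = [(kCGImagePropertyGIFDictionary as String):
                                   [(kCGImagePropertyGIFDelayTime as String): delay]]
        CGImageDestinationAddImage(destination, cgImage, frameProperties as CFDictionary)
    }
    CGImageDestinationFinalize(destination)
}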