Is there a way to convert an array of NSImages into an animated GIF using Swift on macOS?
I need to be able to export it to a file afterwards, so simply displaying the images as an animation in my app is not enough.
Thanks!
Image I/O has the functionality you need. Try this:
import AppKit
import CoreServices
import ImageIO

let images: [NSImage] = [] // fill this with your array of NSImage
let destinationURL = URL(fileURLWithPath: "/path/to/image.gif")
let destinationGIF = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeGIF, images.count, nil)!
// The final size of your GIF. This parameter is optional.
var rect = NSMakeRect(0, 0, 350, 250)
// This dictionary controls the delay between frames.
// If you don't specify it, ImageIO applies a default delay.
let properties = [
    (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): 1.0/16.0]
] as CFDictionary
for img in images {
    // Convert the NSImage to a CGImage, fitting within the specified rect.
    // You can pass nil instead of &rect.
    let cgImage = img.cgImage(forProposedRect: &rect, context: nil, hints: nil)!
    // Add the frame to the GIF. You can pass nil instead of properties.
    CGImageDestinationAddImage(destinationGIF, cgImage, properties)
}
// Write the GIF file to disk
CGImageDestinationFinalize(destinationGIF)
Some context first:
I simply draw a UIImage onto a PDFPage by subclassing PDFPage and overriding draw(with:to:):
override func draw(with box: PDFDisplayBox, to context: CGContext) {
/* Draw image on PDF */
UIGraphicsPushContext(context)
// Change the PDF context to match the UIKit coordinate system.
context.translateBy(x: 0, y: pageBounds.height)
context.scaleBy(x: 1, y: -1)
context.interpolationQuality = .high
// The important line is here: drawing the image
self.myImage.draw(in: CGRect(x: leftMargin, y: topMargin, width: fittedImageSize.width, height: fittedImageSize.height))
}
where self.myImage contains a UIImage. So far so good.
The problem: persisting the image to save memory
If I init my CustomPDFPage with the original UIImage from memory, I get a PDF file with a reasonable size and everything works well.
However, if I persist the image using pngData() and then reload it with UIImage(contentsOfFile: url.path) for drawing, my PDF file suddenly becomes much heavier.
Writing the image to TMP:
let urlToWrite = tmpDir.appendingPathComponent(fileName)
do {
if let tmpData = image.pngData() {
DLog("TMPDATA SIZE = \(tmpData.count). Image dimensions = \(image.size) with scale = \(image.scale)")
}
try image.pngData()?.write(to: urlToWrite)
self.tmpImgURL = urlToWrite
} catch {
DLog("ERROR: could not write image to \(urlToWrite). Error is \(error)")
}
Reloading the image into memory:
var image = UIImage(contentsOfFile: self.tmpImgURL.path)
Using that image to draw the PDF increases the PDF size dramatically.
Inspecting the UIImage's size, scale, and byte count before writing to file and after reading it back gives exactly the same values.
The reason behind this mess is that the user has the option to reduce the quality of the image.
In that case, the source UIImage was recreated from jpegData (which was used to apply the compression).
In short, calling UIImage.pngData() after UIImage.jpegData(...) is not a good idea: PNG losslessly re-encodes the decompressed JPEG pixels, compression noise included, which produces a much larger file. Just write the jpegData directly when the image might have been compressed.
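A minimal sketch of that fix, reusing urlToWrite from the code above (the 0.8 quality value is just an illustrative assumption):
// Persist the JPEG bytes as-is instead of re-encoding them as PNG
if let jpegData = image.jpegData(compressionQuality: 0.8) {
    try? jpegData.write(to: urlToWrite)
    self.tmpImgURL = urlToWrite
}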
How do we name an image programmatically? For example, how can we assign a name to the image generated below, a name we can use to distinguish this image from other images drawn programmatically?
func drawOval(width: CGFloat, height: CGFloat, name: String) -> UIImage {
let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
let image = renderer.image { ctx in
let path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: width, height: height))
path.stroke()
}
// TO DO: Assign this image a name, for example "image01"
return image
}
You can use tags on each UIImageView. I'm not aware of a way to add an identifier to a UIImage directly, since it is a subclass of NSObject rather than UIView, and tag is a UIView property: to tag an object, the object must be a view of some kind.
To implement this, you would keep a variable outside of that function that keeps track of the current tag, then increment it in your function. For example:
var currentTag = 0
//Function now returns a UIImageView
func drawOval(width: CGFloat, height: CGFloat, name: String) -> UIImageView {
let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
let image = renderer.image { ctx in
let path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: width, height: height))
path.stroke()
}
let imageView = UIImageView(image: image)
imageView.tag = currentTag
currentTag += 1
return imageView
}
Then later in your code:
if imageView.tag == 0 {
    // Do something
}
// You can also look a view up by its tag (here `view` is assumed to be the containing view):
let taggedImageView = view.viewWithTag(0)
EDIT: If you want to save the images and load them via one of the available UIImage initializers, you can write them to a cache folder on disk, then retrieve them using UIImage(contentsOfFile:):
//This will store the images in the caches directory for your app, which
//the system can clear when the device is low on storage. It will not be
//cleared while your app is open, though.
func saveImageToCacheDynamically(image: UIImage, name: String) {
    let paths = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)
    let localPath = paths[0].appendingPathComponent("ImageCache", isDirectory: true)
    do {
        // Create the cache directory if it doesn't exist yet
        if !FileManager.default.fileExists(atPath: localPath.path) {
            try FileManager.default.createDirectory(at: localPath, withIntermediateDirectories: true, attributes: nil)
        }
        // Write the PNG data representation of the image to disk
        // (the .txt extension is arbitrary; UIImage doesn't rely on it when loading)
        try image.pngData()?.write(to: localPath.appendingPathComponent("\(name).txt"))
    } catch {
        print("Error locally saving image: \(error.localizedDescription)")
    }
}
// Later in your code...
let paths = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)
let localPath = paths[0].appendingPathComponent("ImageCache/\(imageIdentifier).txt")
if FileManager.default.fileExists(atPath: localPath.path) {
    do {
        let fileData = try Data(contentsOf: localPath)
        let image = UIImage(data: fileData)
        // Do something with image
    } catch {
        print("Error reading image file: \(error.localizedDescription)")
    }
} else {
    print("Error: image does not exist")
}
I believe what you are looking for is something like NSCache...
You can define a cache like this (note that NSCache keys must be reference types, hence NSString rather than String):
let imageCache = NSCache<NSString, UIImage>()
Then you can add objects to the cache like this, where someKeyString is the 'name' you are referring to:
imageCache.setObject(someImage, forKey: someKeyString as NSString)
And then finally you can retrieve images from the cache with:
imageCache.object(forKey: someKeyString as NSString)
I would recommend using extensions or something similar to maintain a reference to your cache everywhere in your app.
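For instance, a minimal sketch of that idea (the extension, the ovalImage variable, and the key name are illustrative assumptions):
// A shared cache hung off a UIImage extension so it's reachable app-wide
extension UIImage {
    static let cache = NSCache<NSString, UIImage>()
}
// Usage: store and retrieve an image by name
UIImage.cache.setObject(ovalImage, forKey: "image01")
let cached = UIImage.cache.object(forKey: "image01")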
NOTE: NSCache contents are cleared when memory is short, when your app closes, and so on.
For more permanent storage, I would recommend UserDefaults, which Apple describes as "an interface to the user's defaults database, where you store key-value pairs persistently across launches of your app." Use it for things like profile images or data that won't change very often. I would also recommend looking into Core Data.
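For completeness, a minimal sketch of the UserDefaults route (someImage and the key name are illustrative assumptions; large blobs in the defaults database are generally discouraged):
// Save the image's PNG data under a key
UserDefaults.standard.set(someImage.pngData(), forKey: "profileImage")
// Load it back later
if let data = UserDefaults.standard.data(forKey: "profileImage") {
    let restored = UIImage(data: data)
}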
I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument: the path of the image to load. It crops the image and filters that image fragment with a SepiaTone filter.
It works just fine: it crops the image to 200x200 and filters it with SepiaTone. Now here's the problem I'm facing: the whole process takes 600 ms on my MacBook Air, but when I RESIZE (instead of cropping) the input image to the same dimensions (200x200), it takes 150 ms.
Why is that? In both cases I'm filtering an image that is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
let ciImage = CIImage(cgImage: cgImage)
END OF UPDATE
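For reference, a minimal wall-clock timing sketch (my assumption about how such a measurement could be taken):
let start = CFAbsoluteTimeGetCurrent()
let ciImage = CIImage(cgImage: cgImage)
print("CIImage init took \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")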
Code for cropping (200x200):
// parse args and get image path
let args = CommandLine.arguments
let inputFile = args[args.count - 1]
let inputURL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
let nsImage = NSImage(contentsOf: inputURL),
var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
exit(EXIT_FAILURE)
}
// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
cgImage = croppedImage
} else {
exit(EXIT_FAILURE)
}
// END CROPPING
// convert CGImage to CIImage
let ciImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
let result = sepiaFilter.outputImage
else {
exit(EXIT_FAILURE)
}
let context = CIContext()
// perform the filtering in a GPU-backed context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args = CommandLine.arguments
let inputFile = args[args.count - 1]
let inputURL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
let nsImage = NSImage(contentsOf: inputURL),
var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
exit(EXIT_FAILURE)
}
// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
width: 200,
height: 200,
bitsPerComponent: cgImage.bitsPerComponent,
bytesPerRow: cgImage.bytesPerRow,
space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
cgImage = resizeOutput
}
// END RESIZING
// convert CGImage to CIImage
let ciImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
let result = sepiaFilter.outputImage
else {
exit(EXIT_FAILURE)
}
let context = CIContext()
// perform the filtering in a GPU-backed context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image the hardware writes it to a new area of memory. When you crop the cgImage, the documentation implies that the result merely references the original image. The line
let ciImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and for your scaled image it can probably read the whole buffer contiguously. For the cropped image it may be reading it line by line, which could account for the difference, but that's just me guessing.
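If that guess is right, one way to test it (my own sketch, not part of the original answer) is to re-render the cropped CGImage into a fresh bitmap so it no longer references the large original, using the same CGContext API as the question's resizing code:
// Redraw a CGImage into its own compact buffer; bytesPerRow of 0 lets
// CoreGraphics choose the stride. Returns nil if the context can't be created.
func copied(_ image: CGImage) -> CGImage? {
    guard let ctx = CGContext(data: nil,
                              width: image.width,
                              height: image.height,
                              bitsPerComponent: image.bitsPerComponent,
                              bytesPerRow: 0,
                              space: image.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: image.bitmapInfo.rawValue) else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return ctx.makeImage()
}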
It looks like you are doing two very different things. In the "slow" version you are cropping (taking a small CGRect out of the original image) and in the "fast" version you are resizing (reducing the whole original down to 200x200).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code, where the "slow" CGRect origin is (0,0); the second is with the origin adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can guess at a few things affecting the timing.
I'm including a link to Apple's documentation on the cropping function below. It explains that some CGRect calculations happen behind the scenes, but it doesn't explain how the pixel bits are pulled out of the full-sized CGImage; I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)
In a Swift playground, I am loading a JPEG, converting it to a UIImage and filtering it to monochrome.
I then convert the resulting filtered image to a UIImage.
The input and the filtered images display correctly.
I then convert both images to a CGImage type.
This works for the input image, but the conversion returns nil for the filtered image:
// Get an input image
let imageFilename = "yosemite.jpg"
let inputImage = UIImage(named: imageFilename)
let inputCIImage = CIImage(image:inputImage!)
// Filter the input image - make it monochrome
let filter = CIFilter(name: "CIPhotoEffectMono")
filter!.setDefaults()
filter!.setValue(inputCIImage, forKey: kCIInputImageKey)
let CIout = filter!.outputImage
let filteredImage = UIImage(CIImage: CIout!)
// Convert the input image to a CGImage
let inputCGImageRef = inputImage!.CGImage // Result: <CGImage 0x7fdd095023d0>
// THE LINE ABOVE WORKS
// Try to convert the filtered image to a CGImage
let filteredCGImageRef = filteredImage.CGImage // Result: nil
// THE LINE ABOVE DOES NOT WORK
// Note that the compiler objects to 'filteredImage!.CGImage'
What's wrong?
A UIImage created from a CIImage, as you've done, isn't backed by a CGImage. You need to create one explicitly:
let context = CIContext()
let filteredCGImageRef = context.createCGImage(
CIout!,
fromRect: CIout!.extent)
If you need a UIImage, create that from the CGImage rendered by the CIContext:
UIImage(CGImage: filteredCGImageRef)
Cheers,
Simon
Currently I am working on an NSImageView onto which the user drags and drops an image, and the image then gets saved to disk. I am able to save PNG and JPEG images, but when saving GIF images, all that is saved is a single frame of the GIF. The image view is able to display the whole animated GIF, though.
My current implementation to save image from NSImageView to disk is:
let cgRef = image.CGImageForProposedRect(nil, context: nil, hints: nil)
let newRep = NSBitmapImageRep(CGImage: cgRef!)
newRep.size = image.size
let type = getBitmapImageFileType(imageName.lowercaseString) // getBitmapImageFileType: returns NSBitmapImageFileType
let properties = type == .NSGIFFileType ? [NSImageLoopCount: 0] : Dictionary<String, AnyObject>()
let data: NSData = newRep.representationUsingType(type, properties: properties)!
data.writeToFile(link, atomically: true)
How should I modify this code to be able to save all the frames of the GIF to disk?
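One direction that seems consistent with the Image I/O answer at the top of this page (a sketch, not a verified solution; it assumes image and link from the question's code and uses current Swift API names): an animated GIF loaded into an NSImage keeps its frames in an NSBitmapImageRep, so you can select each frame in turn and feed it to a CGImageDestination.
import Cocoa
import CoreServices
import ImageIO

// Assumes `image` is the NSImage from the image view and `link` is the destination path
if let rep = image.representations.first as? NSBitmapImageRep,
   let frameCount = rep.value(forProperty: .frameCount) as? Int, frameCount > 1,
   let dest = CGImageDestinationCreateWithURL(URL(fileURLWithPath: link) as CFURL,
                                              kUTTypeGIF, frameCount, nil) {
    for i in 0..<frameCount {
        // Select frame i, then hand its CGImage to the destination
        rep.setProperty(.currentFrame, withValue: i)
        if let frame = rep.cgImage {
            CGImageDestinationAddImage(dest, frame, nil)
        }
    }
    CGImageDestinationFinalize(dest)
}
Note that this sketch does not copy per-frame delays; you would read each frame's kCGImagePropertyGIFDelayTime and pass it in a properties dictionary, as in the GIF answer above.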