UIImagePNGRepresentation(UIImage()) returns nil - swift

Why does UIImagePNGRepresentation(UIImage()) return nil?
I'm trying to create a UIImage() in my test code just to assert that it was correctly passed around.
My comparison method for two UIImages uses UIImagePNGRepresentation(), but for some reason it is returning nil.
Thank you.

UIImagePNGRepresentation() will return nil if the UIImage provided does not contain any data. From the UIKit Documentation:
Return Value
A data object containing the PNG data, or nil if there was a problem generating the data. This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format.
When you initialize a UIImage by simply using UIImage(), it creates a UIImage with no data. The image isn't nil, but because it has no data, UIImagePNGRepresentation() returns nil.
To fix this, you have to create the UIImage with data. For example:
let imageName = "MyImageName.png"
if let image = UIImage(named: imageName) {
    let rep = UIImagePNGRepresentation(image)
}
Where imageName is the name of your image, included in your application.
In order to use UIImagePNGRepresentation(image), image must not be nil, and it must also have data.
If you want to check whether the image has any data, you could use:
if image == nil || image == UIImage() {
    // image is nil, or has no data
} else {
    // image has data
}

The UIImage documentation says
Image objects are immutable, so you cannot change their properties after creation. This means that you generally specify an image’s properties at initialization time or rely on the image’s metadata to provide the property value.
Since you've created the UIImage without providing any image data, the object you've created has no meaning as an image. UIKit and Core Graphics don't appear to allow 0x0 images.
The simplest fix is to create a 1x1 image instead:
UIGraphicsBeginImageContext(CGSize(width: 1, height: 1))
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
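On iOS 10 and later, the same idea can be expressed with UIGraphicsImageRenderer, which always returns a backed image. A minimal sketch:
import UIKit

// A 1x1 image with real bitmap data; even an empty drawing closure renders pixels.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1))
let image = renderer.image { _ in }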

I faced the same issue: I was converting a UIImage to PNG data, but sometimes it returned nil. I fixed it by redrawing the image to create a backed copy:
func getImagePngData(img: UIImage) -> Data {
    var pngData = Data()
    if let hasData = img.pngData() {
        pngData = hasData
    } else {
        // Redraw the image into a bitmap context to get a CGImage-backed copy
        UIGraphicsBeginImageContext(img.size)
        img.draw(in: CGRect(x: 0.0, y: 0.0, width: img.size.width, height: img.size.height))
        let resultImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        if let data = resultImage?.pngData() {
            pngData = data
        }
    }
    return pngData
}
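Usage is straightforward (someImage is a hypothetical UIImage from elsewhere in your code):
let someImage = UIImage(named: "photo")! // hypothetical asset name
let data = getImagePngData(img: someImage)
Note that the redraw path rasterizes the image, so metadata such as a color profile carried by the original may be lost.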

Related

Question regarding UIImage -> CVPixelBuffer -> UIImage conversion

I am working on a simple denoising POC in SwiftUI where I want to:
Load an input image
Apply a CoreML model (denoising) to the input image
Display the output image
I have something working based on dozens of code samples I found online. Based on what I've read, a CoreML model (at least the one I'm using) accepts a CVPixelBuffer and also outputs a CVPixelBuffer. So my idea was to do the following:
Convert the input UIImage into a CVPixelBuffer
Apply the CoreML model to the CVPixelBuffer
Convert the newly created CVPixelBuffer into a UIImage
(Note that I've read that using the Vision framework, one can input a CGImage directly into the model. I'll try this approach as soon as I'm familiar with what I'm trying to achieve here as I think it is a good exercise.)
As a start, I wanted to skip step (2) to focus on the conversion problem. What I tried to achieve in the code below is:
Convert the input UIImage into a CVPixelBuffer
Convert the CVPixelBuffer into a UIImage
I'm not a Swift or an Objective-C developer, so I'm pretty sure that I've made at least a few mistakes. I found this code quite complex and I was wondering if there was a better / simpler way to do the same thing?
func convert(input: UIImage) -> UIImage? {
    // Input CGImage
    guard let cgInput = input.cgImage else {
        return nil
    }
    // Image size
    let width = cgInput.width
    let height = cgInput.height
    let region = CGRect(x: 0, y: 0, width: width, height: height)
    // Attributes needed to create the CVPixelBuffer
    let attributes = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                      kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
    // Create the input CVPixelBuffer
    var pbInput: CVPixelBuffer? = nil
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_32ARGB,
                                     attributes as CFDictionary,
                                     &pbInput)
    // Sanity check
    if status != kCVReturnSuccess {
        return nil
    }
    // Fill the input CVPixelBuffer with the content of the input CGImage
    CVPixelBufferLockBaseAddress(pbInput!, CVPixelBufferLockFlags(rawValue: 0))
    // Use the pixel buffer's bytes-per-row (its rows may be padded),
    // not the source image's
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pbInput!),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: cgInput.bitsPerComponent,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pbInput!),
                                  space: cgInput.colorSpace!,
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
        return nil
    }
    context.draw(cgInput, in: region)
    CVPixelBufferUnlockBaseAddress(pbInput!, CVPixelBufferLockFlags(rawValue: 0))
    // Create the output CGImage
    let ciOutput = CIImage(cvPixelBuffer: pbInput!)
    let temporaryContext = CIContext(options: nil)
    guard let cgOutput = temporaryContext.createCGImage(ciOutput, from: region) else {
        return nil
    }
    // Create and return the output UIImage
    return UIImage(cgImage: cgOutput)
}
When I used this code in my SwiftUI project, the input and output images looked the same, but they were not identical. I think the input image had a color profile (ColorSync) associated with it that was lost during the conversion. I assumed I was supposed to use cgInput.colorSpace during the CGContext creation, but it seemed that using CGColorSpace(name: CGColorSpace.sRGB)! worked better. Can somebody please explain that to me?
Thanks for your help.
You can also use CGImage objects with Core ML, but you have to create the MLFeatureValue object by hand and then put it into an MLFeatureProvider to give it to the model. But that only takes care of the model input, not the output.
Another option is to use the code from my CoreMLHelpers repo.
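A rough sketch of that input path, assuming iOS 13+ where MLFeatureValue(cgImage:constraint:options:) is available (the feature name "image" is hypothetical; check model.modelDescription for the real one):
import CoreGraphics
import CoreML

// Sketch: feed a CGImage to a Core ML model without going through CVPixelBuffer.
func predict(model: MLModel, cgImage: CGImage) throws -> MLFeatureProvider {
    // The image constraint (size, pixel format) comes from the model itself
    guard let constraint = model.modelDescription
        .inputDescriptionsByName["image"]?.imageConstraint else {
        throw NSError(domain: "Demo", code: -1)
    }
    let value = try MLFeatureValue(cgImage: cgImage, constraint: constraint, options: nil)
    let provider = try MLDictionaryFeatureProvider(dictionary: ["image": value])
    return try model.prediction(from: provider)
}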

Application performance issue with CGImageSourceCreateThumbnailAtIndex

I'm using CGImageSourceCreateThumbnailAtIndex to convert Data into a UIImage, but if I convert around 7-8 images with this method the application gets slow. If I use UIImage(data: imageData) instead, everything works fine. How do I fix this issue? I need to use CGImageSourceCreateThumbnailAtIndex to resize the image.
Below is the code I'm using.
convenience init?(data: Data, maxSize: CGSize) {
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, imageSourceOptions) else {
        return nil
    }
    let options = [
        // The size of the longest edge of the thumbnail
        kCGImageSourceThumbnailMaxPixelSize: max(maxSize.width, maxSize.height),
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
    ] as CFDictionary
    // Generate the thumbnail
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options) else {
        return nil
    }
    print("Generating Image....")
    self.init(cgImage: cgImage)
}
I had the same problem when batch processing images. Take a look at your RAM usage: it's off the charts. Apple's documentation for CGImageSourceCreateWithData and CGImageSourceCreateWithURL says "You're responsible for releasing this type using CFRelease."
Apple Docs
In Swift, Core Foundation objects are memory-managed for you and you can't call CFRelease directly, but you can make the temporary objects be released promptly by wrapping each iteration in an autoreleasepool:
autoreleasepool {
    let img = CGImageSourceCreateWithURL ...
}
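For example, a sketch of batch thumbnail generation using the initializer from the question (imageDatas is a hypothetical [Data] array):
var thumbnails: [UIImage] = []
for imageData in imageDatas {
    autoreleasepool {
        // Temporary decode buffers are released at the end of each iteration
        if let thumb = UIImage(data: imageData, maxSize: CGSize(width: 300, height: 300)) {
            thumbnails.append(thumb)
        }
    }
}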

Changing JUST .scale in UIImage?

Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage {
    let wrapperA: UIView = ... // say, a picture
    let wrapperB: UIView = ... // say, some text to go on top
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)
    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)
    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed = UIImage(cgImage: result!.cgImage!,
                              scale: 1.0,
                              orientation: result!.imageOrientation)
    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")
    // return result
    return resultFixed
    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to have a .scale of 1, but .scale is a read-only property.
The only thing I know how to do is make a whole new image copy ... but set the scale to 1 as it's being created.
Is there a better way?
Handy tip -
This was motivated by the following: say you're saving a large image to the user's album, and also offering a UIActivityViewController so the user can post to (for example) Instagram. As a general rule, it seems best to make the scale 1 before sending it to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image on your Instagram post. In terms of saving to the iOS photo album, it seems to be harmless (perhaps even better in some ways) to set the scale to 1. (I only say "better" because, if the image is ultimately emailed to a friend on a PC, for example, it can cause less confusion if the scale is 1.) Interestingly, though, if you take a scale 2 or 3 image in the iOS Photos app and share it to Instagram, it does in fact appear properly (perhaps Apple's Photos knows it is best to make the scale 1 before sending it somewhere like Instagram!).
As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
That being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage directly from the context through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    defer {
        UIGraphicsEndImageContext()
    }
    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    // -- do drawing here --
    // get the CGImage from the context by calling makeImage() – then wrap in a UIImage
    // through using Optional's map(_:) (as makeImage() can return nil)
    // by default, the scale of the UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)
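As a quick sanity check that no bitmap copy is made, a sketch (=== compares the underlying CGImage instance):
// Same backing CGImage instance; only the lightweight UIImage wrapper is new.
assert(newScaleImage.cgImage === oldScaleImage.cgImage)
assert(newScaleImage.scale == 1.0)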

Making a GIF from images using Swift (macOS)

I was wondering if there is a way to convert an array of NSImages to a GIF using Swift in macOS/OS X?
I should be able to export it to a file afterwards, so an animation of images displayed in my app would not be enough.
Thanks!
Image I/O has the functionalities you need. Try this:
var images = ... // init your array of NSImage
let destinationURL = NSURL(fileURLWithPath: "/path/to/image.gif")
let destinationGIF = CGImageDestinationCreateWithURL(destinationURL, kUTTypeGIF, images.count, nil)!
// The final size of your GIF. This is an optional parameter
var rect = NSMakeRect(0, 0, 350, 250)
// This dictionary controls the delay between frames
// If you don't specify this, CGImage will apply a default delay
let properties = [
    (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): 1.0/16.0]
]
for img in images {
    // Convert an NSImage to CGImage, fitting within the specified rect
    // You can replace `&rect` with nil
    let cgImage = img.CGImageForProposedRect(&rect, context: nil, hints: nil)!
    // Add the frame to the GIF image
    // You can replace `properties` with nil
    CGImageDestinationAddImage(destinationGIF, cgImage, properties)
}
// Write the GIF file to disk
CGImageDestinationFinalize(destinationGIF)
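If you're on current Swift, a rough equivalent looks like this (a sketch: kUTTypeGIF has been deprecated in favor of UTType.gif, and NSImage's CGImage accessor is now cgImage(forProposedRect:context:hints:)):
import AppKit
import ImageIO
import UniformTypeIdentifiers

// Sketch: write an array of NSImages to an animated GIF at `url`.
func writeGIF(images: [NSImage], to url: URL, frameDelay: Double = 1.0 / 16.0) {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.gif.identifier as CFString, images.count, nil) else { return }
    let frameProperties = [
        kCGImagePropertyGIFDictionary as String: [
            kCGImagePropertyGIFDelayTime as String: frameDelay
        ]
    ] as CFDictionary
    var rect = NSRect(x: 0, y: 0, width: 350, height: 250)
    for image in images {
        if let cgImage = image.cgImage(forProposedRect: &rect, context: nil, hints: nil) {
            CGImageDestinationAddImage(destination, cgImage, frameProperties)
        }
    }
    CGImageDestinationFinalize(destination)
}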

Swift playground - cannot convert a filtered UIImage to a CGImage

In a Swift playground, I am loading a JPEG, converting it to a UIImage and filtering it to monochrome.
I then convert the resulting filtered image to a UIImage.
The input and the filtered images display correctly.
I then convert both images to a CGImage type.
This works for the input image, but the filtered image returns nil from the conversion:
// Get an input image
let imageFilename = "yosemite.jpg"
let inputImage = UIImage(named: imageFilename )
let inputCIImage = CIImage(image:inputImage!)
// Filter the input image - make it monochrome
let filter = CIFilter(name: "CIPhotoEffectMono")
filter!.setDefaults()
filter!.setValue(inputCIImage, forKey: kCIInputImageKey)
let CIout = filter!.outputImage
let filteredImage = UIImage(CIImage: CIout!)
// Convert the input image to a CGImage
let inputCGImageRef = inputImage!.CGImage // Result: <CGImage 0x7fdd095023d0>
// THE LINE ABOVE WORKS
// Try to convert the filtered image to a CGImage
let filteredCGImageRef = filteredImage.CGImage // Result: nil
// THE LINE ABOVE DOES NOT WORK
// Note that the compiler objects to 'filteredImage!.CGImage'
What's wrong?
A UIImage created from a CIImage as you've done isn't backed by a CGImage. You need to explicitly create one:
let context = CIContext()
let filteredCGImageRef = context.createCGImage(CIout!, fromRect: CIout!.extent)
If you need a UIImage, create that from the CGImage rendered by the CIContext:
UIImage(CGImage: filteredCGImageRef)
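In current Swift the same fix reads as follows (a sketch; createCGImage(_:from:) returns an optional that should be unwrapped):
import CoreImage
import UIKit

let context = CIContext()
if let ciOut = filter?.outputImage,
   let cgImage = context.createCGImage(ciOut, from: ciOut.extent) {
    let filteredUIImage = UIImage(cgImage: cgImage)
}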