Application performance issue with CGImageSourceCreateThumbnailAtIndex - swift

I'm using CGImageSourceCreateThumbnailAtIndex to convert Data into a UIImage, but after converting around 7-8 images with this method the application gets slow. If I use UIImage(data: imageData) instead, everything works fine. How do I fix this? I need to use CGImageSourceCreateThumbnailAtIndex so that the image is resized.
Below is the code I'm using.
convenience init?(data: Data, maxSize: CGSize) {
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, imageSourceOptions) else {
        return nil
    }
    let options = [
        // The size of the longest edge of the thumbnail
        kCGImageSourceThumbnailMaxPixelSize: max(maxSize.width, maxSize.height),
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
    ] as CFDictionary
    // Generate the thumbnail
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options) else {
        return nil
    }
    print("Generating Image....")
    self.init(cgImage: cgImage)
}

I had the same problem when batch processing images. Take a look at your RAM usage, it's off the charts. According to Apple, with CGImageSourceCreateWithData and CGImageSourceCreateWithURL, "You’re responsible for releasing this type using CFRelease."
Apple Docs
With Swift, you can do it using:
autoreleasepool {
    let img = CGImageSourceCreateWithURL ...
}
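For example (a sketch, assuming the initializer from the question lives in a UIImage extension, and imageDataArray / thumbnails are placeholder names):
var thumbnails: [UIImage] = []
for imageData in imageDataArray {
    autoreleasepool {
        // Decode and downscale inside the pool so temporary buffers are
        // released after each iteration instead of piling up.
        if let thumbnail = UIImage(data: imageData,
                                   maxSize: CGSize(width: 200, height: 200)) {
            thumbnails.append(thumbnail)
        }
    }
}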

Related

cvPixelBuffer to CGImage Conversion only Gives Black-White Image

I am trying to convert raw camera sensor data to a color image. The data is first provided in a [UInt16] array and subsequently converted to a CVPixelBuffer.
The following Swift 5 code "only" creates a black-and-white image and disregards the color filter array of the RGGB pixel data.
I also tried VTCreateCGImageFromCVPixelBuffer to no avail. It returns nil.
// imgRawData is a [UInt16] array
var pixelBuffer: CVPixelBuffer?
let attrs: [CFString: Any] = [kCVPixelBufferPixelFormatTypeKey: "rgg4"]
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, kCVPixelFormatType_14Bayer_RGGB,
                             &imgRawData, 2 * width, nil, nil, attrs as CFDictionary, &pixelBuffer)

// This creates a black-and-white image, not color
let ciimg = CIImage(cvPixelBuffer: pixelBuffer!)
let context = CIContext(options: [CIContextOption(rawValue: "workingFormat"): CIFormat.RGBA16])
guard let cgi = context.createCGImage(ciimg, from: ciimg.extent, format: CIFormat.RGBA16,
                                      colorSpace: CGColorSpace(name: CGColorSpace.sRGB), deferred: false)
else { return dummyImg! }

// This function returns nil
var cgI: CGImage?
VTCreateCGImageFromCVPixelBuffer(pixelBuffer!, options: nil, imageOut: &cgI)
Any hint is highly appreciated.
As for the demosaicing, I want Core Image or Core Graphics to take care of the RGGB pixel-color interpolation.

CIFilter Apply to animationImages of UIImageView

Using CIFilter, I want to apply the same filter to multiple images.
I have multiple animationImages on a UIImageView.
let sepiaFilter = CIFilter(name: "CIColorControls")
let brightness = 0.8
for image in imageView.animationImages ?? [] {
    guard let ciimage = CIImage(image: image) else { return }
    if let newimage = self.sepiaFilter(ciimage, filter: sepiaFilter, intensity: brightness) {
        let cgImage: CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!
        let image: UIImage = UIImage(cgImage: cgImage)
        newImages.append(image)
    }
}
func sepiaFilter(_ input: CIImage, filter: CIFilter?, intensity: Double) -> CIImage? {
    filter?.setValue(input, forKey: kCIInputImageKey)
    filter?.setValue(intensity, forKey: kCIInputBrightnessKey)
    return filter?.outputImage
}
So let me know: what is the best way to apply a CIFilter to multiple images?
With the for loop above, CPU usage increases to more than 100%, so it is clearly the wrong approach.
Is it possible to do such animations in a GLKit view? If yes, please provide details, or suggest the best solution.
**let cgImage:CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!**
This line takes the most CPU usage and time.
Thanks.

Binarize Picture with Core Image on iOS

I was wondering if it is possible to binarize an image (convert it to black and white only) with Core Image?
I made it work with OpenCV and GPUImage, but would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
            return
        }
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)
        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note: I'm writing this quickly; you may need to tighten things up to handle nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 keeps full color, 1.0 removes all color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also pass the kernel source in as a String directly instead of using the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel?.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success converting the image to greyscale using CIPhotoEffectMono (or equivalent), and then applying CIColorControls with a ridiculously high inputContrast value (I used 10000). This effectively makes the image black and white, and thus binarized. Useful for those who don't want to mess with custom kernels.
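A minimal sketch of that approach, assuming you already have a CIImage in hand; the 10000 contrast value is the one mentioned above:
func binarized(_ input: CIImage) -> CIImage {
    // Grayscale first, then push the contrast so far that every pixel
    // ends up effectively black or white.
    let mono = input.applyingFilter("CIPhotoEffectMono")
    return mono.applyingFilter("CIColorControls",
                               parameters: [kCIInputContrastKey: 10000])
}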
Also, you can use an approach like Apple's "Chroma Key" example, which filters on hue; instead of looking at hue, you supply the rules for binarizing the data (i.e. when to set RGB all to 1.0 and when to set it all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
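A rough sketch of that idea, borrowing the color-cube technique the chroma key sample uses, but mapping each cube entry to black or white by luminance instead of keying on hue (the 0.5 threshold and 64-entry cube are arbitrary choices):
func binarizingColorCubeFilter(threshold: Float = 0.5, cubeSize: Int = 64) -> CIFilter? {
    var cubeData = [Float]()
    cubeData.reserveCapacity(cubeSize * cubeSize * cubeSize * 4)
    // In CIColorCube data the R component varies fastest, then G, then B.
    for b in 0..<cubeSize {
        for g in 0..<cubeSize {
            for r in 0..<cubeSize {
                let rf = Float(r) / Float(cubeSize - 1)
                let gf = Float(g) / Float(cubeSize - 1)
                let bf = Float(b) / Float(cubeSize - 1)
                // The binarizing rule: Rec. 601 luma against the threshold
                let luma = 0.299 * rf + 0.587 * gf + 0.114 * bf
                let value: Float = luma >= threshold ? 1.0 : 0.0
                cubeData.append(contentsOf: [value, value, value, 1.0])
            }
        }
    }
    let data = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(cubeSize, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}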
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
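Usage is then a one-liner per filter (a sketch, assuming the inputThreshold key with its default of 0.5; CIColorThresholdOtsu takes no threshold parameter):
let thresholded = inputImage.applyingFilter("CIColorThreshold",
                                            parameters: ["inputThreshold": 0.5])
let otsuThresholded = inputImage.applyingFilter("CIColorThresholdOtsu")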
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
                                            parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250-odd CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951

Why filtering a cropped image is 4x slower than filtering a resized image (both have the same dimensions)

I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument - the path of the image to load. It crops the image and filters that fragment with a SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Now here's the problem that I'm facing - the whole process takes 600ms on my MacBook Air. When I RESIZE (instead of crop) the input image to the same dimensions (200x200), it takes 150ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
var ciImage:CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
// parse args and get image path
let args: Array = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}

let context: CIContext = CIContext()

// perform filtering in a GPU context
guard
    let output = context.createCGImage(sepiaFilter.outputImage!, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args: Array = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}

let context: CIContext = CIContext()

// perform filtering in a GPU context
guard
    let output = context.createCGImage(sepiaFilter.outputImage!, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image it actually uses the hardware to write the image to a new area of memory. When you crop the cgImage, the documentation implies that it just references the original image. The line
var ciImage:CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably just read the whole buffer continuously. In the case of the cropped image it may be reading it line by line, and this could account for the difference, but that's just me guessing.
It looks like you are doing two very different things. In the "slow" version you are cropping (as in taking a small CGRect of the original image) and in the "fast" version you are resizing (as in reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code, where the "slow" CGRect origin is (0,0); the second is with it adjusted to (2000,2000):
[Screenshot: origin is (0,0)]
[Screenshot: origin is (2000,2000)]
Knowing this, I can come up with a few explanations for the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that some CGRect calculations happen behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CGImage - I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)

Core Graphics alternative to UIImage/NSImage file representations

I have a CGImage that I have manipulated, and I wish to write it to file (or, rather, end up with an NSData representation suitable for writing to file). I know "how" to do this in both OSX & iOS, but only using the platform dependent calls, not the Core Graphics calls. I cannot find any documentation on how to do this in a platform independent way. Here's what I have:
func newImageData() -> NSData? {
    var newImageRef: CGImage
    // ...
    // Create & manipulate newImageRef
    // ...
    #if os(iOS)
        let newImage = UIImage(cgImage: newImageRef, scale: 1, orientation: .up)
        return UIImagePNGRepresentation(newImage) as NSData?
    #elseif os(OSX)
        let newImage = NSImage(cgImage: newImageRef, size: NSSize(width: width, height: height))
        return newImage.tiffRepresentation as NSData?
    #endif
}
Just to be clear - the code above works fine; I am just frustrated that, having done everything else in CG, I have to step out of it to get the file representation.
Finally, I don't care that one platform returns a PNG and the other a TIFF - in actual fact this is creating a tile for MKTileOverlay, so either will do...
You are looking for the ImageIO framework, which is the same for both platforms (and, incidentally, is a much better way to save an image to a file than what you are doing now).
Many thanks to @Matt for pointing me in the correct direction - the ImageIO framework.
What I needed was this:
let dataOut = CFDataCreateMutable(nil, CFIndex(byteCount))
if let iDest = CGImageDestinationCreateWithData(dataOut, kUTTypePNG, 1, nil) {
    let options: [CFString: Any] = [
        kCGImagePropertyOrientation: 1, // Top left
        kCGImagePropertyHasAlpha: true,
        kCGImageDestinationLossyCompressionQuality: 1.0
    ]
    CGImageDestinationAddImage(iDest, newImageRef, options as CFDictionary)
    CGImageDestinationFinalize(iDest)
    return dataOut
} else {
    print("Failed to create image destination")
    return nil
}