Apply CIFilter to animationImages of UIImageView - Swift

Using CIFilter, I want to apply the same filter to multiple images.
I have multiple animationImages in a UIImageView:
let sepiaFilter = CIFilter(name: "CIColorControls")
let brightness = 0.8
for image in imageView.animationImages ?? [] {
    guard let ciimage = CIImage(image: image) else { return }
    if let newimage = self.sepiaFilter(ciimage, filter: sepiaFilter, intensity: brightness) {
        let cgImage: CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!
        let filteredImage = UIImage(cgImage: cgImage)
        newImages.append(filteredImage)
    }
}
func sepiaFilter(_ input: CIImage, filter: CIFilter?, intensity: Double) -> CIImage? {
    filter?.setValue(input, forKey: kCIInputImageKey)
    filter?.setValue(intensity, forKey: kCIInputBrightnessKey)
    return filter?.outputImage
}
So what is the best way to apply a CIFilter to multiple images?
With the for loop above, CPU usage increases to more than 100%, so it is clearly the wrong approach.
Is it possible to run the animation in a GLKit view?
If so, please provide details about it, or suggest a better solution.
**let cgImage:CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!**
This line takes the most CPU usage and time.
Thanks.
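For reference, a minimal sketch of one common direction (not taken from this thread): create a single CIContext and a single filter up front, reuse them for every frame, and render off the main thread. The helper name makeFilteredImages and the shared context are illustrative assumptions, not the asker's code.
import UIKit
import CoreImage

// Assumption: one CIContext shared across all frames instead of one per image.
let sharedContext = CIContext(options: nil)

func makeFilteredImages(from images: [UIImage], brightness: Double,
                        completion: @escaping ([UIImage]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Reuse one filter object; only its input changes per frame.
        let filter = CIFilter(name: "CIColorControls")
        var results: [UIImage] = []
        for image in images {
            guard let ciImage = CIImage(image: image) else { continue }
            filter?.setValue(ciImage, forKey: kCIInputImageKey)
            filter?.setValue(brightness, forKey: kCIInputBrightnessKey)
            if let output = filter?.outputImage,
               let cgImage = sharedContext.createCGImage(output, from: output.extent) {
                results.append(UIImage(cgImage: cgImage))
            }
        }
        DispatchQueue.main.async { completion(results) }
    }
}
The createCGImage call is still the expensive step; reusing one CIContext and rendering before assigning animationImages at least keeps that cost off the main thread and avoids recreating GPU resources for every frame.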

Related

Add Dark and Highlight effect on an image like the Apple native app's Dark and Highlight effect

Hi everyone, I am facing an issue implementing the Dark and Highlight effect using Core Image's built-in filters. I have worked with other filters, but this one gives me the wrong result, or maybe I am not using the right filter. I am using the CIFilter.highlightShadowAdjust filter. Here is my implementation:
let context = CIContext(options: nil)
let filter = CIFilter.highlightShadowAdjust()
let aCIImage = CIImage(image: self.image)
filter.setValue(aCIImage, forKey: kCIInputImageKey)
filter.setValue(highlitValue, forKey: "inputHighlightAmount")
filter.setValue(shadowValue, forKey: "inputShadowAmount")
filter.setValue(radiousValue, forKey: "inputRadius")
guard let outputImage = filter.outputImage else {return UIImage()}
guard let cgimg = context.createCGImage(outputImage, from: outputImage.extent) else {return UIImage()}
return UIImage(cgImage: cgimg)
For clarification, I also uploaded a video of what I want to achieve.
Any suggestion or guidance will be greatly appreciated.
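For comparison, a hedged sketch of the same call using the typed CIFilterBuiltins properties instead of string keys (assuming iOS 13+; the value names mirror the question's, and the surrounding function is assumed to return UIImage):
import CoreImage
import CoreImage.CIFilterBuiltins

let context = CIContext(options: nil)
let filter = CIFilter.highlightShadowAdjust()
filter.inputImage = CIImage(image: self.image)
filter.highlightAmount = Float(highlitValue)   // 1.0 is the default (highlights unchanged)
filter.shadowAmount = Float(shadowValue)       // 0.0 is the default (shadows unchanged)
filter.radius = Float(radiousValue)
guard let outputImage = filter.outputImage,
      let cgimg = context.createCGImage(outputImage, from: outputImage.extent) else { return UIImage() }
return UIImage(cgImage: cgimg)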

Swift, blurred images in UITableViewCell do not run smoothly

What I am trying to do is show a list of blurred images from the web. This works fine with this code in my custom UITableViewCell:
func blurImage(image: UIImage, imageView: UIImageView) {
    DispatchQueue.global(qos: .background).async {
        let inputImage = CIImage(image: image)
        let originalOrientation = image.imageOrientation
        let originalScale = image.scale
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(inputImage, forKey: kCIInputImageKey)
        filter?.setValue(15.0, forKey: kCIInputRadiusKey)
        let outputImage = filter?.outputImage
        var cgImage: CGImage?
        if let outputImage = outputImage {
            cgImage = self.context.createCGImage(outputImage, from: (inputImage?.extent)!)
        }
        DispatchQueue.main.async {
            if let cgImageA = cgImage {
                imageView.image = UIImage(cgImage: cgImageA, scale: originalScale, orientation: originalOrientation)
            }
        }
    }
}
The problem is that the blur calculation takes some time, and although it runs on a background thread, the scrolling is not as fast and smooth as it is without the blur effect at all.
Is there a way to make it run smoother, or to show a placeholder image until the blurred image is ready to be drawn, resulting in smooth scrolling?
Step 1) Don't use qos: .background for user-initiated tasks. Docs say: Background tasks have the lowest priority of all tasks. Assign this class to tasks or dispatch queues that you use to perform work while your app is running in the background.
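A hedged sketch of that change, reusing the question's blurImage method and its context property, with only the queue's quality of service raised:
func blurImage(image: UIImage, imageView: UIImageView) {
    // .userInitiated instead of .background: this work feeds visible UI the user is waiting on.
    DispatchQueue.global(qos: .userInitiated).async {
        guard let inputImage = CIImage(image: image),
              let filter = CIFilter(name: "CIGaussianBlur") else { return }
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(15.0, forKey: kCIInputRadiusKey)
        guard let outputImage = filter.outputImage,
              let cgImage = self.context.createCGImage(outputImage, from: inputImage.extent) else { return }
        DispatchQueue.main.async {
            imageView.image = UIImage(cgImage: cgImage,
                                      scale: image.scale,
                                      orientation: image.imageOrientation)
        }
    }
}
A placeholder can be assigned to imageView.image before dispatching, so the cell shows something immediately while the blur renders.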

UIView invert mask MaskView

I want to apply an inverted mask to my UIView. I set the mask to a UIImageView with a transparent image. However, the output with
view.mask = imageView
is not the desired result. How can I achieve the desired result as illustrated below? The desired result uses the mask cutout as transparency. When I check the mask of the view, it isn't a CAShapeLayer, so I can't invert it that way.
It seems like you could do a few things. You could use the image you have but mask a white view and place a blue view behind it. Or you could adjust the image asset you're using by reversing the transparency. Or you could use Core Image to do that in code. For example:
func invertMask(_ image: UIImage) -> UIImage? {
    guard let inputMaskImage = CIImage(image: image),
          let backgroundImageFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.black]),
          let inputColorFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.clear]),
          let inputImage = inputColorFilter.outputImage,
          let backgroundImage = backgroundImageFilter.outputImage,
          let filter = CIFilter(name: "CIBlendWithAlphaMask", withInputParameters: [kCIInputImageKey: inputImage, kCIInputBackgroundImageKey: backgroundImage, kCIInputMaskImageKey: inputMaskImage]),
          let filterOutput = filter.outputImage,
          let outputImage = CIContext().createCGImage(filterOutput, from: inputMaskImage.extent) else { return nil }

    let finalOutputImage = UIImage(cgImage: outputImage)
    return finalOutputImage
}
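A short usage sketch (the maskImage constant and the frame handling are assumptions):
// Feed the original transparent image through invertMask(_:) and use the
// result as the view's mask.
if let invertedImage = invertMask(maskImage) {
    let maskImageView = UIImageView(image: invertedImage)
    maskImageView.frame = view.bounds
    view.mask = maskImageView
}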

Binarize Picture with Core Image on iOS

I was wondering if it is possible to binarize an image (convert it to black and white only) with Core Image?
I made it work with OpenCV and GPUImage, but I would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code for the class you need:
class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }

        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)

        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 keeps full color, 1.0 removes all color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel (via the openKernelFile helper). You can also pass the kernel source directly as a String instead.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))!
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success by converting it to greyscale using CIPhotoEffectMono or equivalent, and then using CIColorControls with a ridiculously high inputContrast number (I used 10000). This effectively makes it black and white and thus binarized. Useful for those who don't want to mess with custom kernels.
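A minimal sketch of that chain (standard built-in filter names and keys; the contrast value of 10000 is the one mentioned above):
import CoreImage

// Grayscale first, then push contrast so high that mid-tones snap to black or white.
func binarize(_ input: CIImage, context: CIContext) -> CGImage? {
    let mono = input.applyingFilter("CIPhotoEffectMono")
    let binary = mono.applyingFilter("CIColorControls",
                                     parameters: [kCIInputContrastKey: 10000])
    return context.createCGImage(binary, from: binary.extent)
}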
Also, you can use an approach like Apple's "Chroma Key" filter example, which uses hue to filter; instead of looking at hue, you just give the rules for binarizing the data (i.e. when to set RGB all to 1.0 and when to set them all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
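A hedged sketch of that idea as a tiny color kernel, using the same older CIColorKernel string API as the FadeToBW example above (the 0.5 default threshold is arbitrary):
import CoreImage

// Compare each pixel's luminance to a threshold; output pure black or white,
// preserving alpha.
let binarizeKernel = CIColorKernel(source: """
    kernel vec4 binarize(__sample s, float threshold) {
        float lum = dot(s.rgb, vec3(0.299, 0.587, 0.114));
        float bw = step(threshold, lum);
        return vec4(vec3(bw), s.a);
    }
    """)

func binarized(_ input: CIImage, threshold: CGFloat = 0.5) -> CIImage? {
    return binarizeKernel?.apply(extent: input.extent, arguments: [input, threshold as NSNumber])
}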
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and OSX 11.0, CoreImage includes CIColorThreshold and CIColorThresholdOtsu filters (the latter using Otsu's method to calculate the threshold value from the image histogram)
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
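A minimal usage sketch (assuming iOS 14+/macOS 11+ and an existing CIImage named inputImage):
import CoreImage
import CoreImage.CIFilterBuiltins

// Fixed threshold: luminance above 0.5 becomes white, below becomes black.
let thresholdFilter = CIFilter.colorThreshold()
thresholdFilter.inputImage = inputImage
thresholdFilter.threshold = 0.5
let binaryImage = thresholdFilter.outputImage

// Or let Core Image derive the threshold from the histogram (Otsu's method).
let otsuFilter = CIFilter.colorThresholdOtsu()
otsuFilter.inputImage = inputImage
let otsuBinaryImage = otsuFilter.outputImage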
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250 CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951

Why is filtering a cropped image 4x slower than filtering a resized image (both have the same dimensions)?

I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument: the path of the image to load. It crops the image and filters that fragment with the SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Now here's the problem I'm facing: the whole process takes 600 ms on my MacBook Air. When I RESIZE the input image (instead of cropping it) to the same dimensions (200x200), it takes 150 ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
var ciImage:CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
// parse args and get image path
let args: Array = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL:URL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
let nsImage = NSImage(contentsOf: inputURL),
var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
exit(EXIT_FAILURE)
}
// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
cgImage = croppedImage
} else {
exit(EXIT_FAILURE)
}
// END CROPPING
// convert CGImage to CIImage
var ciImage:CIImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
let result = sepiaFilter.outputImage
else {
exit(EXIT_FAILURE)
}
let context:CIContext = CIContext()
// perform filtering in a GPU context
guard
let output = context.createCGImage(result, from: ciImage.extent)
else {
exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args: Array = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL:URL = URL(fileURLWithPath: inputFile)
// load the image from path into NSImage
// and convert NSImage into CGImage
guard
let nsImage = NSImage(contentsOf: inputURL),
var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
exit(EXIT_FAILURE)
}
// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
width: 200,
height: 200,
bitsPerComponent: cgImage.bitsPerComponent,
bytesPerRow: cgImage.bytesPerRow,
space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
cgImage = resizeOutput
}
// END RESIZING
// convert CGImage to CIImage
var ciImage:CIImage = CIImage(cgImage: cgImage)
// initiate SepiaTone
guard
let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard
let result = sepiaFilter.outputImage
else {
exit(EXIT_FAILURE)
}
let context:CIContext = CIContext()
// perform filtering in a GPU context
guard
let output = context.createCGImage(result, from: ciImage.extent)
else {
exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image it actually uses the hardware to write the image to a new area of memory. When you crop the cgImage, the documentation implies that it is just referencing the original image. The line
var ciImage:CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably just read the whole buffer contiguously. In the case of the cropped image it may be reading it line by line, and this could account for the difference, but that's just me guessing.
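If that is indeed the bottleneck, a hedged workaround sketch (not from the answer above): force the cropped pixels into their own compact buffer before wrapping them in a CIImage, using the same CGContext technique as the resize variant.
// Redraw the cropped CGImage into its own small bitmap so that
// CIImage(cgImage:) only reads a 200x200 buffer instead of a slice of the
// original full-size backing store. Assumes croppedImage came from
// cgImage.cropping(to:).
func compactCopy(of croppedImage: CGImage) -> CGImage? {
    guard let ctx = CGContext(data: nil,
                              width: croppedImage.width,
                              height: croppedImage.height,
                              bitsPerComponent: croppedImage.bitsPerComponent,
                              bytesPerRow: 0, // let Core Graphics pick a tight row stride
                              space: croppedImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: croppedImage.bitmapInfo.rawValue) else { return nil }
    ctx.draw(croppedImage, in: CGRect(x: 0, y: 0, width: croppedImage.width, height: croppedImage.height))
    return ctx.makeImage()
}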
It looks like you are doing two very different things. In the "slow" version you are cropping (as in taking a small CGRect of the original image) and in the "fast" version you are resizing (as in reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code where the "slow" CGRect origin is (0,0) and the second is with it adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can come up with a few things happening on the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that it is doing some CGRect calculations behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CGImage - I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)