Is it possible to binarize an image (convert it to black and white only) with Core Image?
I've done it with OpenCV and GPUImage, but I'd prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel:
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class you need:
import CoreImage
import MetalPerformanceShaders

class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
            return
        }

        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)

        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options: CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note: I'm writing this quickly; you may need to tighten things up to handle nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 is full color; 1.0 is no color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also pass the kernel source as a String directly instead of using the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))!
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: outputImage!.extent)!
    return UIImage(cgImage: cgimg)
}
Again, add some guards, etc.
I have had success by converting the image to grayscale using CIPhotoEffectMono or equivalent, and then applying CIColorControls with a ridiculously high inputContrast value (I used 10000). This effectively makes the image black and white, and thus binarized. Useful for those who don't want to mess with custom kernels.
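For reference, here is a minimal sketch of that pipeline; the helper name and the reusable context parameter are illustrative, not part of the original answer:

import CoreImage

// Grayscale first, then extreme contrast: every pixel snaps to black or white.
func binarize(_ input: CIImage, context: CIContext) -> CGImage? {
    let mono = input.applyingFilter("CIPhotoEffectMono")
    let binary = mono.applyingFilter("CIColorControls",
                                     parameters: [kCIInputContrastKey: 10000.0])
    return context.createCGImage(binary, from: binary.extent)
}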
Also, you can adapt an example like Apple's "Chroma Key" filter, which filters by hue; instead of looking at hue, you supply the rules for binarizing the data (i.e., when to set all RGB channels to 1.0 and when to set them to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
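Apple's sample builds a CIColorCube from such rules. Here is a hedged sketch of the same idea repurposed for thresholding; the function name, cube size, and Rec. 601 luma weights are illustrative assumptions, not from Apple's sample:

import CoreImage

func thresholdColorCubeFilter(threshold: Float = 0.5, size: Int = 64) -> CIFilter? {
    var cubeData = [Float]()
    cubeData.reserveCapacity(size * size * size * 4)
    // CIColorCube expects RGBA floats with red varying fastest.
    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                let rf = Float(r) / Float(size - 1)
                let gf = Float(g) / Float(size - 1)
                let bf = Float(b) / Float(size - 1)
                let luma = 0.299 * rf + 0.587 * gf + 0.114 * bf
                // The binarizing rule: 0.0 below the threshold, 1.0 at or above it.
                let v: Float = luma < threshold ? 0.0 : 1.0
                cubeData.append(contentsOf: [v, v, v, 1.0])
            }
        }
    }
    let data = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
    return CIFilter(name: "CIColorCube", parameters: [
        "inputCubeDimension": size,
        "inputCubeData": data
    ])
}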
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes the CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
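For completeness, a minimal usage sketch via CIFilterBuiltins, assuming an existing inputImage: CIImage:

import CoreImage
import CoreImage.CIFilterBuiltins

// Fixed threshold: pixels at or above 0.5 become white, the rest black.
let thresholdFilter = CIFilter.colorThreshold()
thresholdFilter.inputImage = inputImage
thresholdFilter.threshold = 0.5
let binarized = thresholdFilter.outputImage

// Or let Otsu's method derive the threshold from the image histogram.
let otsuFilter = CIFilter.colorThresholdOtsu()
otsuFilter.inputImage = inputImage
let autoBinarized = otsuFilter.outputImage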
For a simple grayscale conversion, there is also CIColorMonochrome with a white input color:

let outputImage = inputImage.applyingFilter("CIColorMonochrome",
                                            parameters: [kCIInputColorKey: CIColor.white])
If you want to play with each of the 250 CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951
I am trying to get the average RGB value of my AVCaptureVideoDataOutput feed. I found the following solution on Stack Overflow:
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(cameraImage, forKey: kCIInputImageKey)
let outputImage = filter.outputImage!
let ctx = CIContext(options: nil)
let cgImage = ctx.createCGImage(outputImage, from: outputImage.extent)!
let rawData = cgImage.dataProvider!.data! as Data
// CIAreaAverage outputs a single BGRA pixel; index 3 is alpha and is ignored.
for (index, byte) in rawData.enumerated() {
    switch index {
    case 0:
        bluemean = CGFloat(byte)
    case 1:
        greenmean = CGFloat(byte)
    case 2:
        redmean = CGFloat(byte)
    default:
        break
    }
}
But this produces the average as integer byte values, and I need it in a Float format with the precision kept. The rounding is quite problematic in the problem domain I'm working with. Is there a way to get a Float average efficiently?
Thanks a lot!
May I recommend using our library CoreImageExtensions for reading the value? We added methods for reading pixel values from CIImages in different formats. For your case it would look like this:
import CoreImageExtensions
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(cameraImage, forKey: kCIInputImageKey)
filter.setValue(CIVector(cgRect: cameraImage.extent), forKey: kCIInputExtentKey)
let outputImage = filter.outputImage!
let context = CIContext()
// get the value of a specific pixel as a `SIMD4<Float32>`
let average = context.readFloat32PixelValue(from: outputImage, at: CGPoint.zero)
Also, if you want to compute the average regularly (not just once), keep in mind to create only a single CIContext instance and reuse it for every camera frame. Creating one is expensive, and reusing the same instance actually improves performance since it caches internal resources.
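A small sketch of that advice with assumed names, using the same library call as above:

import CoreImage
import CoreImageExtensions

final class AverageColorReader {
    // One CIContext for the lifetime of the capture session;
    // it caches internal resources across frames.
    private let context = CIContext()

    func averageColor(of image: CIImage) -> SIMD4<Float32> {
        let filter = CIFilter(name: "CIAreaAverage")!
        filter.setValue(image, forKey: kCIInputImageKey)
        filter.setValue(CIVector(cgRect: image.extent), forKey: kCIInputExtentKey)
        return context.readFloat32PixelValue(from: filter.outputImage!, at: .zero)
    }
}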
There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()

    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)

        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }

    return nil
}
This code produces the following image:
But when magnifying any corner marker, we can see the difference in border thickness:
I.e., not every scale value produces a correct final image. How can I fix this?
The behavior you show is expected whenever you use a non-integer scale such as 5.3. If consistent marker widths are something you care about, use only integer scales, such as 5 or 6.
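If you need a particular output size, one hedged way is to derive an integer scale from it; the helper and its desiredWidth parameter are illustrative:

import CoreImage

// Rounds the scale down to a whole number so each QR module covers
// an integral number of pixels, keeping the corner markers uniform.
func scaledQRCode(_ qr: CIImage, desiredWidth: CGFloat) -> CIImage {
    let scale = max(1, floor(desiredWidth / qr.extent.width))
    return qr.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
}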
I have followed this tutorial (https://medium.com/#dominicfholmes/generating-qr-codes-in-swift-4-b5dacc75727c) to generate a QR code, but I am trying to generate a customized one, and one of the requirements is that the corner markers be circles instead of squares. Is this possible?
func generateQR(fromString: String) -> UIImage? {
    let data = fromString.data(using: String.Encoding.ascii)

    // Get a QR CIFilter
    guard let qrFilter = CIFilter(name: "CIQRCodeGenerator") else { return nil }

    // Input the data
    qrFilter.setValue(data, forKey: "inputMessage")

    // Get the output image
    guard let qrImage = qrFilter.outputImage else { return nil }

    // Scale the image
    let transform = CGAffineTransform(scaleX: 10, y: 10)
    let scaledQrImage = qrImage.transformed(by: transform)

    // Invert the colors
    guard let colorInvertFilter = CIFilter(name: "CIColorInvert") else { return nil }
    colorInvertFilter.setValue(scaledQrImage, forKey: "inputImage")
    guard let outputInvertedImage = colorInvertFilter.outputImage else { return nil }

    // Replace the black with transparency
    guard let maskToAlphaFilter = CIFilter(name: "CIMaskToAlpha") else { return nil }
    maskToAlphaFilter.setValue(outputInvertedImage, forKey: "inputImage")
    guard let outputCIImage = maskToAlphaFilter.outputImage else { return nil }

    // Do some processing to get the UIImage
    let context = CIContext()
    guard let cgImage = context.createCGImage(outputCIImage, from: outputCIImage.extent) else { return nil }
    let processedImage = UIImage(cgImage: cgImage)

    return processedImage
}
Here is an example of the expected result:
https://www.qrcode-monkey.com/img/qrcode-logo.png
It's been a while since I've used the Core Image QR code generator filter, CIQRCodeGenerator. Looking at the docs, it takes only a couple of parameters, inputMessage and inputCorrectionLevel. There's no facility beyond those parameters to customize the QR code it generates.
I guess you could do image processing on the resulting image to find the "bullseye" corner squares and change them to rounded shapes, but that would be a fair challenge.
Alternatively, you could write your own QR code rendering library. The image-processing part isn't that complicated; it's figuring out the QR code standard and how to generate the dot pattern that would be hard. I haven't looked up the specs for QR codes, but they're public.
You might also take an existing open source QR code library and modify it to create the rounded corner markers you are after. This is the option I would pursue if it were my task. With any luck you can find a well-written library that first generates the QR code as a grid of booleans, and then uses a separate function to render that grid of booleans into an image, as sketched below.
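To illustrate the rendering half of that idea, here is a hedged sketch that draws an assumed [[Bool]] module grid with circular dots. All names are illustrative; the boolean grid would come from whatever QR library you adopt:

import UIKit

func renderQRGrid(_ grid: [[Bool]], moduleSize: CGFloat = 10) -> UIImage {
    let side = CGFloat(grid.count) * moduleSize
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side))
    return renderer.image { ctx in
        UIColor.white.setFill()
        ctx.fill(CGRect(x: 0, y: 0, width: side, height: side))
        UIColor.black.setFill()
        for (row, columns) in grid.enumerated() {
            for (col, isDark) in columns.enumerated() where isDark {
                // Draw each dark module as a circle instead of a square.
                let rect = CGRect(x: CGFloat(col) * moduleSize,
                                  y: CGFloat(row) * moduleSize,
                                  width: moduleSize, height: moduleSize)
                UIBezierPath(ovalIn: rect).fill()
            }
        }
    }
}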
Using CIFilter, I want to apply the same filter to multiple images.
I have multiple animationImages in a UIImageView.
let sepiaFilter = CIFilter(name: "CIColorControls")
let brightness = 0.8
var newImages = [UIImage]()

for image in imageView.animationImages ?? [] {
    guard let ciimage = CIImage(image: image) else { return }
    if let newimage = self.sepiaFilter(ciimage, filter: sepiaFilter, intensity: brightness) {
        let cgImage: CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!
        let image: UIImage = UIImage(cgImage: cgImage)
        newImages.append(image)
    }
}

func sepiaFilter(_ input: CIImage, filter: CIFilter?, intensity: Double) -> CIImage? {
    filter?.setValue(input, forKey: kCIInputImageKey)
    filter?.setValue(intensity, forKey: kCIInputBrightnessKey)
    return filter?.outputImage
}
So what is the best way to apply a CIFilter to multiple images? With the above for loop, CPU usage increases to more than 100%, so this is clearly the wrong approach.
Is it possible to do the animation in a GLKView? If yes, please give me details, or suggest the best solution.
This line takes the most CPU usage and time:

let cgImage: CGImage = ciImageCtx!.createCGImage(newimage, from: newimage.extent)!

Thanks.
I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument: the path of an image to load. It crops the image and filters that fragment with a SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Here's the problem I'm facing: the whole process takes 600 ms on my MacBook Air. When I RESIZE (instead of cropping) the input image to the same dimensions (200x200), it takes 150 ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
var ciImage: CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}

sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)

guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}

let context: CIContext = CIContext()

// perform filtering in a GPU context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[args.count - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}

CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))

if let resizeOutput = CGcontext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard
    let sepiaFilter = CIFilter(name: "CISepiaTone")
else {
    exit(EXIT_FAILURE)
}

sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)

guard
    let result = sepiaFilter.outputImage
else {
    exit(EXIT_FAILURE)
}

let context: CIContext = CIContext()

// perform filtering in a GPU context
guard
    let output = context.createCGImage(result, from: ciImage.extent)
else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image it actually uses the hardware to write the image to a new area of memory. When you crop the cgImage, the documentation implies that it just references the original image. The line

var ciImage: CIImage = CIImage(cgImage: cgImage)

must be triggering a read (maybe to main memory?), and in the case of your scaled image it can probably just read the whole buffer contiguously. In the case of the cropped image it may be reading it line by line, which could account for the difference, but that's just me guessing.
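If that guess is right, a hedged workaround is to force the cropped CGImage into its own tightly packed buffer by redrawing it, mirroring what the fast resize path already does (this helper is illustrative, not tested):

import CoreGraphics

// Redraws the image into a fresh bitmap so downstream readers see one
// small contiguous allocation instead of strided rows of the original.
func deepCopy(_ image: CGImage) -> CGImage? {
    guard let ctx = CGContext(data: nil,
                              width: image.width,
                              height: image.height,
                              bitsPerComponent: image.bitsPerComponent,
                              bytesPerRow: 0, // let CoreGraphics pick a tight stride
                              space: image.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: image.bitmapInfo.rawValue) else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return ctx.makeImage()
}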
It looks like you are doing two very different things. In the "slow" version you are cropping (taking a small CGRect out of the original image), and in the "fast" version you are resizing (reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code where the "slow" CGRect origin is (0,0) and the second is with it adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can make a few guesses about what's happening with the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that it does some CGRect calculations behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CGImage - I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)