I've been trying to find a good way to convert white pixels in an NSImage to transparent pixels on the fly. I've seen Swift examples for UIImage, but not NSImage. Below is what I've created (and which is working). Two questions:
1. Is there a better way to remove the alpha channel from an NSImage? I'm currently converting it to a JPEG representation (which has no alpha channel) and then to another bitmap representation again. That feels rather ugly.
2. Is there a method or way to trim only the white pixels on the edges of the image (not those on the 'inside' of the image)?
func mask(color: NSColor, tolerance: CGFloat = 4) -> NSImage {
    // This is an extremely ugly hack to strip the alpha channel from a bitmap
    // representation (because maskingColorComponents does not want an alpha
    // channel): round-trip through JPEG, which has no alpha.
    // https://stackoverflow.com/questions/43852900/swift-3-cgimage-copy-always-nil
    // https://stackoverflow.com/questions/36770611/replace-a-color-colour-in-a-skspritenode
    guard let data = tiffRepresentation,
          let jpegData = NSBitmapImageRep(data: data)?.representation(using: .jpeg, properties: [:]),
          let rep = NSBitmapImageRep(data: jpegData) else {
        return self
    }
    if let ciColor = CIColor(color: color) {
        NSLog("trying to mask image")
        // Build a [min, max] range per component (R, G, B) on the 0-255 scale.
        let maskComponents: [CGFloat] = [ciColor.red, ciColor.green, ciColor.blue].flatMap { value in
            [(value * 255) - tolerance, (value * 255) + tolerance]
        }
        guard let masked = rep.cgImage(forProposedRect: nil, context: nil, hints: nil)?
            .copy(maskingColorComponents: maskComponents) else {
            return self
        }
        return NSImage(cgImage: masked, size: size)
    }
    return self
}
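On question 1, a possibly cleaner alternative to the JPEG round-trip (an untested sketch): redraw the bitmap into a CGContext created without an alpha component, then mask the resulting CGImage.
import Cocoa

// Sketch: strip alpha by redrawing into an alpha-less CGContext.
// noneSkipLast keeps a padding byte but reports "no alpha", which is
// what copy(maskingColorComponents:) requires.
func strippingAlpha(from rep: NSBitmapImageRep) -> CGImage? {
    guard let cgImage = rep.cgImage,
          let ctx = CGContext(data: nil,
                              width: cgImage.width,
                              height: cgImage.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else {
        return nil
    }
    ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    return ctx.makeImage()
}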
I am trying to get the average RGB value for my "AVCaptureVideoDataOutput" feed. I found the following solution on StackOverflow:
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(cameraImage, forKey: kCIInputImageKey)
let outputImage = filter.outputImage!
let ctx = CIContext(options: nil)
let cgImage = ctx.createCGImage(outputImage, from: outputImage.extent)!
let rawData = cgImage.dataProvider!.data! as Data
// CIAreaAverage outputs a single pixel; its bytes are read in BGRA order here.
for (index, byte) in rawData.enumerated() {
    switch index {
    case 0:
        bluemean = CGFloat(byte)
    case 1:
        greenmean = CGFloat(byte)
    case 2:
        redmean = CGFloat(byte)
    default:
        break
    }
}
But this produces the average as 8-bit integer components, while I need it in a Float format with the precision kept. The rounding is quite problematic in the problem domain I'm working with. Is there a way to get a Float average efficiently?
Thanks a lot!
May I recommend using our library CoreImageExtensions for reading the value? We added methods for reading pixel values from CIImages in different formats. For your case it would look like this:
import CoreImageExtensions
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(cameraImage, forKey: kCIInputImageKey)
filter.setValue(CIVector(cgRect: cameraImage.extent), forKey: kCIInputExtentKey)
let outputImage = filter.outputImage!
let context = CIContext()
// get the value of a specific pixel as a `SIMD4<Float32>`
let average = context.readFloat32PixelValue(from: outputImage, at: CGPoint.zero)
Also keep in mind: if you want to compute the average regularly (not just once), create a single CIContext instance and reuse it for every camera frame. Creating one is expensive, and reusing the same instance improves performance because it caches internal resources.
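If you would rather avoid a third-party dependency, you can also read the 1x1 CIAreaAverage output as 32-bit floats with plain Core Image (a sketch, assuming the outputImage and reusable context from above):
var pixel = [Float32](repeating: 0, count: 4) // RGBA
context.render(outputImage,
               toBitmap: &pixel,
               rowBytes: 4 * MemoryLayout<Float32>.size,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: .RGBAf,
               colorSpace: nil)
// pixel now holds the average without 8-bit rounding.
let (red, green, blue) = (CGFloat(pixel[0]), CGFloat(pixel[1]), CGFloat(pixel[2]))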
There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()
    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
This code produces the following image:
But when magnifying any corner marker, we can see the difference in border thickness:
I.e., not every scale value produces a correct final image. How can this be fixed?
The behavior you show is expected whenever you use a non-integer scale such as 5.3: the QR modules no longer align with the pixel grid, so some modules are rendered one pixel wider than others. If consistent marker widths matter to you, use only integer scales, such as 5 or 6.
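For example, you can derive the largest integer scale that fits a desired output width (a sketch; targetWidth is a hypothetical parameter):
import UIKit
import CoreImage

func generateQRCode(from string: String, targetWidth: CGFloat) -> UIImage? {
    let data = string.data(using: .ascii)
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let output = filter.outputImage else { return nil }
    // Largest whole-number scale that still fits the target width,
    // so every QR module maps to an integer number of pixels.
    let scale = max(1, (targetWidth / output.extent.width).rounded(.down))
    let scaled = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}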
I have a problem with modifying an NSImage's pixel colors. What I'm doing is checking each pixel's color and changing it.
Here is my code:
image.lockFocus()
guard let ctx = NSGraphicsContext.current?.cgContext else {
    image.unlockFocus()
    return nil
}
// draw
guard let buffer = ctx.data else {
    image.unlockFocus()
    return nil
}
// Derive the buffer dimensions from the context itself (bytesPerRow may
// include row padding, so it is divided by the bytes per pixel).
let widthInPixelBuffer = ctx.bytesPerRow / (ctx.bitsPerPixel / ctx.bitsPerComponent)
let heightInPixelBuffer = ctx.height
let pixelBuffer = buffer.bindMemory(to: UInt32.self, capacity: widthInPixelBuffer * heightInPixelBuffer)
let upperBound = Int(Float(heightInPixelBuffer) * 0.5)
for column in 0 ..< widthInPixelBuffer {
    for row in 1 ... upperBound {
        let offset = (upperBound - row) * widthInPixelBuffer + column
        if pixelBuffer[offset] == bgColor {
            break
        } else {
            pixelBuffer[offset] = 0xFFFFFF00 // yellow
        }
    }
}
image.unlockFocus()
My problem is that after I change some pixels to yellow, what I see in the NSImage is transparent (on my MacBook Pro).
But it works fine on my Mac mini. I see the yellow color as expected.
I can't figure it out; could someone tell me why?
How can I fix it?
I have searched for some days but haven't found the reason online.
Thanks!
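One thing worth checking (a guess, since the display hardware of the two machines likely differs): the context you get from lockFocus is not guaranteed to have the same pixel layout or backing scale everywhere, and writing the hard-coded value 0xFFFFFF00 assumes one particular byte order and alpha position. A small diagnostic sketch:
import Cocoa

// Sketch: log the properties of the lockFocus context. On a Retina display
// the pixel size can be twice the image's point size, and the alpha/byte
// layout decides what 0xFFFFFF00 actually means.
image.lockFocus()
if let ctx = NSGraphicsContext.current?.cgContext {
    NSLog("points: \(image.size), pixels: \(ctx.width)x\(ctx.height)")
    NSLog("alphaInfo: \(ctx.alphaInfo.rawValue), bitmapInfo: \(ctx.bitmapInfo.rawValue)")
}
image.unlockFocus()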
I was wondering if it is possible to binarize an image (convert it to black and white only) with Core Image.
I've done it with OpenCV and GPUImage, but would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel:
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed:
import CoreImage
import MetalPerformanceShaders

class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
            return
        }
        // Binarize: pixels above the threshold become 1.0, the rest 0.0.
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)
        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")!
    filter.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter.outputImage!
    let cgimg = ciCtx.createCGImage(outputImage, from: outputImage.extent)!
    return UIImage(cgImage: cgimg)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 keeps full color; 1.0 removes all color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also pass the source as a String directly instead of the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))!
    let inputImage = CIImage(image: image)!
    let arguments: [Any] = [inputImage, inputColorFade]
    let outputImage = ciKernel.apply(extent: inputImage.extent, arguments: arguments)!
    let cgimg = ciCtx.createCGImage(outputImage, from: outputImage.extent)!
    return UIImage(cgImage: cgimg)
}
Again, add some guards, etc.
I have had success by converting it to greyscale using CIPhotoEffectMono or equivalent, and then using CIColorControls with a ridiculously high inputContrast number (I used 10000). This effectively makes it black and white and thus binarized. Useful for those who don't want to mess with custom kernels.
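A minimal sketch of that greyscale-plus-extreme-contrast approach (the 10000 contrast value is the one mentioned above; the exact cutoff may vary with color space):
import CoreImage

func binarized(_ input: CIImage) -> CIImage {
    // Greyscale first, then push contrast so far that every pixel
    // snaps to black or white.
    input
        .applyingFilter("CIPhotoEffectMono")
        .applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 10000])
}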
Also, you can use an example like Apple's "Chroma Key" filter, which uses hue to filter; instead of looking at hue, you just supply the rules for binarizing the data (i.e., when to set RGB all to 1.0 and when to set it to 0.0). A color-cube sketch of that idea follows the link below.
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
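A sketch of that idea using CIColorCube instead of a custom kernel (the cube size of 16 and the 0.5 luminance threshold are arbitrary choices):
import CoreImage
import Foundation

func binarizationFilter(threshold: Float = 0.5) -> CIFilter? {
    let size = 16
    var cube = [Float]()
    cube.reserveCapacity(size * size * size * 4)
    // CIColorCube data is RGBA, with red varying fastest, then green, then blue.
    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                // Rec. 601 luma, normalized to 0...1.
                let lum = (0.299 * Float(r) + 0.587 * Float(g) + 0.114 * Float(b)) / Float(size - 1)
                let value: Float = lum >= threshold ? 1.0 : 0.0
                cube.append(contentsOf: [value, value, value, 1.0])
            }
        }
    }
    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(size, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}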
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes the CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
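For example, with the CIFilterBuiltins Swift interface (a sketch; inputImage is assumed to be your CIImage, and the 0.5 threshold is arbitrary):
import CoreImage
import CoreImage.CIFilterBuiltins

let filter = CIFilter.colorThreshold()
filter.inputImage = inputImage
filter.threshold = 0.5
let binaryImage = filter.outputImage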
Alternatively, a one-line greyscale conversion with CIColorMonochrome (greyscale, not a true binarization):
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
                                            parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250+ CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951