I've been trying to find a good way to convert white pixels in an NSImage to transparent pixels on the fly. I've seen Swift examples for UIImage, but not for NSImage. Below is what I've created (and it works). Two questions:
Is there a better way to lose the alpha channel on an NSImage? I'm currently converting it first to a JPEG representation (which doesn't have an alpha channel) and then to another representation again. That feels rather ugly.
Is there a method/way to trim only white pixels on the edges of the image (not on the 'inside' of the image)?
func mask(color: NSColor, tolerance: CGFloat = 4) -> NSImage {
    // This is an extremely ugly hack to strip the alpha channel from a bitmap
    // representation (because maskingColorComponents does not want an alpha channel):
    // https://stackoverflow.com/questions/43852900/swift-3-cgimage-copy-always-nil
    // https://stackoverflow.com/questions/36770611/replace-a-color-colour-in-a-skspritenode
    guard let data = tiffRepresentation,
          let jpegData = NSBitmapImageRep(data: data)?.representation(using: .jpeg, properties: [:]),
          let rep = NSBitmapImageRep(data: jpegData) else {
        return self
    }
    if let ciColor = CIColor(color: color) {
        NSLog("trying to mask image")
        // Build a [min, max] pair per channel, scaled to 0...255.
        let maskComponents: [CGFloat] = [ciColor.red, ciColor.green, ciColor.blue].flatMap { value in
            [(value * 255) - tolerance, (value * 255) + tolerance]
        }
        guard let masked = rep.cgImage(forProposedRect: nil, context: nil, hints: nil)?
            .copy(maskingColorComponents: maskComponents) else { return self }
        return NSImage(cgImage: masked, size: size)
    }
    return self
}
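One possibly less ugly way to lose the alpha channel (an untested sketch I'm adding here, not from the original post) is to redraw the CGImage into a bitmap context whose pixel format carries no alpha; maskingColorComponents should then accept the result. The helper name is hypothetical:

// Hypothetical helper: redraw into an alpha-free (RGBX) bitmap context
// instead of round-tripping through JPEG.
func strippingAlpha(from cgImage: CGImage) -> CGImage? {
    let context = CGContext(
        data: nil,
        width: cgImage.width,
        height: cgImage.height,
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) // no alpha channel
    context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    return context?.makeImage()
}

If you need an NSBitmapImageRep again afterwards, NSBitmapImageRep(cgImage:) can wrap the result.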
What I Have:
Referencing Apple's chroma key sample code, it states that we can create a chroma key filter cube via:
func chromaKeyFilter(fromHue: CGFloat, toHue: CGFloat) -> CIFilter? {
    // 1
    let size = 64
    var cubeRGB = [Float]()
    // 2
    for z in 0 ..< size {
        let blue = CGFloat(z) / CGFloat(size-1)
        for y in 0 ..< size {
            let green = CGFloat(y) / CGFloat(size-1)
            for x in 0 ..< size {
                let red = CGFloat(x) / CGFloat(size-1)
                // 3
                let hue = getHue(red: red, green: green, blue: blue)
                let alpha: CGFloat = (hue >= fromHue && hue <= toHue) ? 0 : 1
                // 4
                cubeRGB.append(Float(red * alpha))
                cubeRGB.append(Float(green * alpha))
                cubeRGB.append(Float(blue * alpha))
                cubeRGB.append(Float(alpha))
            }
        }
    }
    let data = Data(buffer: UnsafeBufferPointer(start: &cubeRGB, count: cubeRGB.count))
    // 5
    let colorCubeFilter = CIFilter(name: "CIColorCube", withInputParameters: ["inputCubeDimension": size, "inputCubeData": data])
    return colorCubeFilter
}
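Note that the getHue(red:green:blue:) helper isn't shown above. In Apple's chroma key article it derives the hue by way of UIColor, roughly like this:

func getHue(red: CGFloat, green: CGFloat, blue: CGFloat) -> CGFloat {
    // Let UIColor do the RGB-to-HSB conversion for us.
    let color = UIColor(red: red, green: green, blue: blue, alpha: 1)
    var hue: CGFloat = 0
    color.getHue(&hue, saturation: nil, brightness: nil, alpha: nil)
    return hue
}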
I then created a function to be able to insert any image into this filter and return the filtered image.
public func filteredImage(ciimage: CIImage) -> CIImage? {
    let filter = chromaKeyFilter(fromHue: 110/360, toHue: 130/360)! // green screen effect colors
    filter.setValue(ciimage, forKey: kCIInputImageKey)
    return RealtimeDepthMaskViewController.filter.outputImage
}
I can then execute this function on any image and obtain a chroma-keyed image.
if let maskedImage = filteredImage(ciimage: ciimage) {
    // Do something
} else {
    print("Not filtered image")
}
Update Issues:
let data = Data(buffer: UnsafeBufferPointer(start: &cubeRGB, count: cubeRGB.count))
However, once I updated Xcode to 11.6, I get the warning "Initialization of 'UnsafeBufferPointer<Float>' results in a dangling buffer pointer" as well as the runtime error "Thread 1: EXC_BAD_ACCESS (code=1, address=0x13c600020)" on the line of code above.
I tried addressing this issue with this answer to correct Swift's new UnsafeBufferPointer warning. The warning then goes away and I no longer get a runtime error.
Problem
Now, although the warning doesn't appear and I don't experience a runtime error, I still get the print statement Not filtered image. I assume the issue stems from the way the data is being handled, or deallocated; I'm not entirely sure how to correctly handle UnsafeBufferPointers alongside Data.
What is the appropriate way to correctly obtain the Data for the Chroma Key?
I wasn't sure what RealtimeDepthMaskViewController was in this context, so I just returned the filter output instead. Apologies if this was meant to be left as-is. I also added a guard statement with the possibility of returning nil, which matches the optional return type of your function.
public func filteredImage(ciImage: CIImage) -> CIImage? {
    guard let filter = chromaKeyFilter(fromHue: 110/360, toHue: 130/360) else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    return filter.outputImage // instead of RealtimeDepthMaskViewController.filter.outputImage
}
For the dangling pointer compiler warning, I found a couple of approaches:
// approach #1
var data = Data()
cubeRGB.withUnsafeBufferPointer { ptr in
    data = Data(buffer: ptr)
}

// approach #2
let byteCount = MemoryLayout<Float>.size * cubeRGB.count
let data = Data(bytes: &cubeRGB, count: byteCount)
One caveat: I looked at this with Xcode 11.6 rather than 11.5.
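For completeness, a third variant (my own suggestion, not from the linked answer) keeps the pointer inside the closure and returns the Data directly, avoiding the mutable temporary of approach #1:

// approach #3: the buffer pointer never escapes its valid scope
let data = cubeRGB.withUnsafeBufferPointer { Data(buffer: $0) }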
I have a strange problem when resizing an image that's in an NSAttributedString. The resizing extension works fine, but when the image is added to the NSAttributedString, it gets flipped vertically for some reason.
This is the resizing extension:
extension NSImage {
    func resize(containerWidth: CGFloat) -> NSImage {
        var scale: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scale = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scale
        let newHeight = currentHeight * scale
        self.size = NSSize(width: newWidth, height: newHeight)
        return self
    }
}
And here is the enumeration over the images in the attributed string:
newAttributedString.enumerateAttribute(NSAttributedStringKey.attachment, in: NSMakeRange(0, newAttributedString.length), options: []) { value, range, stop in
    if let attachment = value as? NSTextAttachment {
        let image = attachment.image(forBounds: attachment.bounds, textContainer: NSTextContainer(), characterIndex: range.location)!
        let newImage = image.resize(containerWidth: markdown.bounds.width)
        let newAttachment = NSTextAttachment()
        newAttachment.image = newImage
        newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttachment, range: range)
    }
}
I've set breakpoints and inspected the images, and they are all in the correct rotation, except when it reaches this line:
newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttachment, range: range)
where the image gets flipped vertically.
I have no clue what could be causing this vertical flip. Is there a way to fix this?
If you look at the developer docs for NSTextAttachment:
https://developer.apple.com/documentation/uikit/nstextattachment
The bounds parameter is defined as follows:
“Defines the layout bounds of the receiver's graphical representation in the text coordinate system.”
I know that when using Core Text to lay out text, you need to flip the coordinates, so I imagine you need to transform your bounds parameter with a vertical reflection too.
Hope that helps.
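As a rough sketch of that idea (mine, untested, not from the original answer), you could pre-flip the NSImage before assigning it to the attachment:

// Hypothetical helper: draw the image through a vertical-reflection transform.
func flippedVertically(_ image: NSImage) -> NSImage {
    let flipped = NSImage(size: image.size)
    flipped.lockFocus()
    let transform = NSAffineTransform()
    transform.translateX(by: 0, yBy: image.size.height)
    transform.scaleX(by: 1, yBy: -1)
    transform.concat()
    image.draw(in: NSRect(origin: .zero, size: image.size))
    flipped.unlockFocus()
    return flipped
}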
I figured it out and it was so much simpler than I was making it.
Because the image was in an NSAttributedString being appended into an NSTextView, I didn't need to resize each image in the NSAttributedString; rather, I just had to set the attachment scaling inside the NSTextView with
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
One line is all it took
I'm currently trying to scale an image using Swift. This shouldn't be a difficult task, since I implemented a scaling solution in C# in 30 minutes; however, I've been stuck for two days now.
I've tried Googling and crawling through Stack Overflow posts, but to no avail. The two main solutions I have seen people use are:
A function written in Swift to resize an NSImage proportionately
and
resizeNSImage.swift
An Objective-C implementation of the above link
So I would prefer to use the most efficient/least CPU-intensive solution, which according to my research is option 2. Because option 2 uses NSImage.lockFocus() and NSImage.unlockFocus(), the image scales fine on non-Retina Macs, but the scaled size doubles on Retina Macs. I know this is due to the pixel density of Retina Macs and is to be expected, but I need a scaling solution that ignores HiDPI specifications and just performs a normal scale operation.
This led me to do more research into option 1. It seems like a sound function, but it literally doesn't scale the input image, and the file size then doubles when I save the returned image (presumably due to pixel density). I found another Stack Overflow post where someone has the exact same problem, using the exact same implementation (found here). Of the two suggested answers, the first one doesn't work, and the second is the other implementation I've been trying to use.
If people could post Swift answers, as opposed to Objective-C, I'd appreciate it very much!
EDIT:
Here's a copy of my implementation of the first solution; I've divided it into two functions:
func getSizeProportions(oWidth: CGFloat, oHeight: CGFloat) -> NSSize {
    var ratio: Float = 0.0
    let imageWidth = Float(oWidth)
    let imageHeight = Float(oHeight)
    var maxWidth = Float(0)
    var maxHeight = Float(600)
    if maxWidth == 0 {
        maxWidth = imageWidth
    }
    if maxHeight == 0 {
        maxHeight = imageHeight
    }
    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }
    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    return NSMakeSize(CGFloat(newWidth), CGFloat(newHeight))
}
func resizeImage(image: NSImage) -> NSImage {
    print("original: ", image.size.width, image.size.height)
    // Cast the NSImage to a CGImage
    var imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: CGFloat(450), height: CGFloat(600))
    //let newSize = getSizeProportions(oWidth: CGFloat(image.size.width), oHeight: CGFloat(image.size.height))
    // Create NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    print("scaled: ", imageWithNewSize.size.width, imageWithNewSize.size.height)
    return NSImage(data: imageWithNewSize.tiffRepresentation!)!
}
EDIT 2:
As pointed out by Zneak, I need to save the returned image to disk. Using both implementations, my save function writes the file to disk successfully. Although I don't think my save function could be interfering with my current resizing implementation, I've attached it anyway just in case:
func saveAction(image: NSImage, url: URL) {
    if let tiffdata = image.tiffRepresentation,
       let bitmaprep = NSBitmapImageRep(data: tiffdata) {
        let props = [NSImageCompressionFactor: Appearance.imageCompressionFactor]
        if let bitmapData = NSBitmapImageRep.representationOfImageReps(in: [bitmaprep], using: .JPEG, properties: props) {
            let path: NSString = "~/Desktop/out.jpg"
            let resolvedPath = path.expandingTildeInPath
            try! bitmapData.write(to: URL(fileURLWithPath: resolvedPath), options: [])
            print("Your image has been saved to \(resolvedPath)")
        }
    }
}
To anyone else experiencing this problem: I ended up spending countless hours trying to find a way to do this, and ended up just getting the scale factor of the screen (1 for normal Macs, 2 for Retina Macs). The code looks like this:
func getScaleFactor() -> CGFloat {
    return NSScreen.main()!.backingScaleFactor
}
Then once you have the scale factor, you either scale normally or halve the dimensions for Retina:
if scaleFactor == 2 {
    // halve size proportions for saving on Retina Macs
    return NSMakeSize(CGFloat(oWidth * ratio) / 2, CGFloat(oHeight * ratio) / 2)
} else {
    return NSMakeSize(CGFloat(oWidth * ratio), CGFloat(oHeight * ratio))
}
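For what it's worth, here is a sketch (mine, not from the thread) of a scale that works in pixels rather than points: draw into an NSBitmapImageRep with explicit pixel dimensions, which sidesteps backingScaleFactor entirely. It uses the same Swift 3-era API names as the rest of this thread:

// Editor's sketch: scale to an exact pixel size, independent of display DPI.
func scaled(image: NSImage, toPixelSize pixelSize: NSSize) -> NSImage? {
    guard let rep = NSBitmapImageRep(
        bitmapDataPlanes: nil,
        pixelsWide: Int(pixelSize.width),
        pixelsHigh: Int(pixelSize.height),
        bitsPerSample: 8,
        samplesPerPixel: 4,
        hasAlpha: true,
        isPlanar: false,
        colorSpaceName: NSDeviceRGBColorSpace,
        bytesPerRow: 0,
        bitsPerPixel: 0) else { return nil }
    rep.size = pixelSize // one point per pixel, regardless of display
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrent(NSGraphicsContext(bitmapImageRep: rep))
    image.draw(in: NSRect(origin: .zero, size: pixelSize), from: .zero, operation: .copy, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    let result = NSImage(size: pixelSize)
    result.addRepresentation(rep)
    return result
}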
I was wondering if it is possible to binarize an image (convert it to black and white only) with Core Image?
I made it work with OpenCV and GPUImage, but I'd prefer to use Apple's Core Image if that's possible.
You can use MetalPerformanceShaders for that, driven through a custom CIImageProcessorKernel:
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
import CoreImage
import MetalPerformanceShaders

class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }
        // Pixels above thresholdValue become maximumValue (1.0); the rest become 0.
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)
        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options: CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note: I'm writing this quickly; you may need to tighten things up for nil returns.
CIColorKernel:
The FadeToBW GLSL (a factor of 0.0 means full color; 1.0 means no color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel; you could also pass the kernel source as a String directly instead of using the openKernelFile helper.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(string: openKernelFile("FadeToBW"))!
    let ciImage = CIImage(image: image)! // the kernel needs a CIImage, not a UIImage
    let arguments = [ciImage, inputColorFade] as [Any]
    let outputImage = ciKernel.apply(withExtent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success by converting it to greyscale using CIPhotoEffectMono or equivalent, and then using CIColorControls with a ridiculously high inputContrast number (I used 10000). This effectively makes it black and white and thus binarized. Useful for those who don't want to mess with custom kernels.
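Expressed as code, that approach might look like the sketch below (mine, not the answerer's exact code; the 10000 contrast figure is theirs):

// Grayscale first, then push contrast to an extreme to force pixels to black or white.
let mono = inputImage.applyingFilter("CIPhotoEffectMono")
let binarized = mono.applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 10000])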
Also, you can use an approach like Apple's "Chroma Key" filter example, which keys on hue; instead of looking at hue, you just supply the rule for binarizing the data (i.e., when to set RGB all to 1.0 and when to set it all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
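Hypothetical usage, assuming a deployment target of iOS 14 / macOS 11 (inputThreshold is the documented parameter name; 0.5 is just an example value):

let thresholded = inputImage.applyingFilter("CIColorThreshold", parameters: ["inputThreshold": 0.5])
let otsu = inputImage.applyingFilter("CIColorThresholdOtsu") // threshold computed from the histogram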
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250 CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951
An SKSpriteNode's SKColor can be created with hue, saturation, brightness, and alpha:
let myColor = SKColor(hue: 0.5, saturation: 1, brightness: 1, alpha: 1)
mySprite.color = myColor
How do I get at the hue of an SKSpriteNode and change it, e.g. divide it by 2?
An SKSpriteNode is a node that draws a texture (optionally blended with a color), an image, or a colored square. So, this is its nature.
When you make an SKSpriteNode, you get an instance property, called texture, that represents the texture used to draw the sprite.
Since iOS 9.x, we have been able to retrieve an image from a texture, as in the code below. In this example my SKSpriteNode is called spriteBg:
let spriteBg = SKSpriteNode(texture: SKTexture(imageNamed: "myImage.png"))
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
    } else {
        // Fallback on earlier versions
    }
}
Following this interesting answer, we can translate it to a more comfortable Swift 3.0 version:
func imageWith(source: UIImage, rotatedByHue: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = CIImage(cgImage: source.cgImage!)
    // Apply a CIHueAdjust filter
    guard let hueAdjust = CIFilter(name: "CIHueAdjust") else { return source }
    hueAdjust.setDefaults()
    hueAdjust.setValue(sourceCore, forKey: "inputImage")
    hueAdjust.setValue(rotatedByHue, forKey: "inputAngle")
    guard let resultCore = hueAdjust.outputImage else { return source }
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore, from: resultCore.extent)
    let result = UIImage(cgImage: resultRef!)
    return result
}
So, finally with the previous code we can do:
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
        let changedImage = imageWith(source: image, rotatedByHue: 0.5)
        spriteBg.texture = SKTexture(image: changedImage)
    } else {
        // Fallback on earlier versions
    }
}
I'm not in a place where I can test this right now, but looking at the UIColor documentation (UIColor and SKColor are basically the same thing), you should be able to use the getHue(...) function to retrieve the color's components, make changes to them, and then set the SKSpriteNode's color property to the new value. The getHue(...) function "returns the components that make up the color in the HSB color space."
https://developer.apple.com/reference/uikit/uicolor/1621949-gethue
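For example (untested sketch; this assumes the sprite's color property is meaningful, i.e. a solid-color sprite or one drawn with a nonzero colorBlendFactor):

// Read the HSB components, halve the hue, and write the color back.
var hue: CGFloat = 0, saturation: CGFloat = 0, brightness: CGFloat = 0, alpha: CGFloat = 0
mySprite.color.getHue(&hue, saturation: &saturation, brightness: &brightness, alpha: &alpha)
mySprite.color = SKColor(hue: hue / 2, saturation: saturation, brightness: brightness, alpha: alpha)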