How do you extend the space (bounds) of a CIImage without stretching the original? - swift

I'm applying several filters on an already cropped image, and I'd like a flipped duplicate of it next to the original. This would make it twice as wide.
Problem: how do you extend the bounds so both can fit? .cropped(to:) will stretch whatever original content was there. The reason there is existing content is that I'm trying to use applyingFilter as much as possible to save on processing. It's also why I'm cropping the original un-mirrored image.
Below is my CIImage "alphaMaskBlend2" with a compositing filter, and a transform applied to the same image that flips it and adjusts its position. sourceCore.extent is the size I want the final image to be.
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!]).cropped(to: sourceCore.extent)
I've played around with the position of the transform in LLDB. I found that with this filter being cropped, the leftmost image becomes stretched. If I use clamped(to:) with the same extent and then re-crop the image to that same extent, the image is no longer distorted, but the bounds of the image are only half the width they should be.
The only way I could achieve this is by compositing against a background image (sourceCore) that is the size of the two images combined, and then compositing the other image:
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: alphaMaskBlend2!,
                 kCIInputBackgroundImageKey: sourceCore])
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!])
The problem is that this is more expensive than necessary; I even confirmed it with benchmarking. It would make a lot more sense if I could do this with one composite.
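For reference, one candidate for a single-composite approach (a sketch, assuming `scaledImageTransform` already flips the copy and shifts its extent to sit beside the original): `composited(over:)` performs a source-over composite whose output extent is the union of both input extents, so nothing gets stretched.

```swift
// Sketch: the mirrored copy is translated so its extent lies beside the
// original's, rather than on top of it.
let mirrored = alphaMaskBlend2!.transformed(by: scaledImageTransform)

// composited(over:) keeps each input at its own extent; the output extent is
// their union - twice as wide, with no stretching and no background image.
let stitched = mirrored.composited(over: alphaMaskBlend2!)
```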

While I can "flip" a CIImage, I couldn't find a way to use an existing CIFilter to "stitch" it alongside the original. However, with some basic knowledge of writing your own CIKernel, you can. A simple project achieving this is here.
This project contains a sample image, and using CoreImage and a GLKView it:
- flips the image by transposing the Y "bottom/top" coordinates for CIPerspectiveCorrection
- creates a new "palette" image using CIConstantColor and then crops it using CICrop to be twice the width of the original
- uses a very simple CIKernel (registered as "Stitch") to actually stitch it together
Here's the code to flip:
// use CIPerspectiveCorrection to "flip" on the Y axis
let minX:CGFloat = 0
let maxY:CGFloat = 0
let maxX = originalImage?.extent.width
let minY = originalImage?.extent.height
let flipFilter = CIFilter(name: "CIPerspectiveCorrection")
flipFilter?.setValue(CIVector(x: minX, y: maxY), forKey: "inputTopLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: maxY), forKey: "inputTopRight")
flipFilter?.setValue(CIVector(x: minX, y: minY!), forKey: "inputBottomLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: minY!), forKey: "inputBottomRight")
flipFilter?.setValue(originalImage, forKey: "inputImage")
flippedImage = flipFilter?.outputImage
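For comparison, the same vertical flip can also be expressed with a plain affine transform (a sketch, assuming the image's extent origin is at (0, 0); whether it is cheaper than CIPerspectiveCorrection depends on how Core Image fuses the filter graph):

```swift
// Negate Y, then shift back up by the image height so the extent stays put.
let flip = CGAffineTransform(scaleX: 1, y: -1)
    .translatedBy(x: 0, y: -originalImage!.extent.height)
let flippedImage = originalImage?.transformed(by: flip)
```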
Here's the code to create the palette:
let paletteFilter = CIFilter(name: "CIConstantColorGenerator")
paletteFilter?.setValue(CIColor(red: 0.7, green: 0.4, blue: 0.4), forKey: "inputColor")
paletteImage = paletteFilter?.outputImage
let cropFilter = CIFilter(name: "CICrop")
cropFilter?.setValue(paletteImage, forKey: "inputImage")
cropFilter?.setValue(CIVector(x: 0, y: 0, z: (originalImage?.extent.width)! * 2, w: (originalImage?.extent.height)!), forKey: "inputRectangle")
paletteImage = cropFilter?.outputImage
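As an aside, if you don't need the filter-based plumbing for the teaching exercise, the same palette image can be built more concisely with the `CIImage(color:)` convenience initializer (a sketch; same color and dimensions as above):

```swift
// Infinite constant-color image, cropped to twice the original's width.
let palette = CIImage(color: CIColor(red: 0.7, green: 0.4, blue: 0.4))
    .cropped(to: CGRect(x: 0, y: 0,
                        width: originalImage!.extent.width * 2,
                        height: originalImage!.extent.height))
```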
Here's the code to register and use the custom CIFilter:
// register and use stitch filter
StitchedFilters.registerFilters()
let stitchFilter = CIFilter(name: "Stitch")
stitchFilter?.setValue(originalImage?.extent.width, forKey: "inputThreshold")
stitchFilter?.setValue(paletteImage, forKey: "inputPalette")
stitchFilter?.setValue(originalImage, forKey: "inputOriginal")
stitchFilter?.setValue(flippedImage, forKey: "inputFlipped")
finalImage = stitchFilter?.outputImage
All of this code (along with layout constraints) in the demo project is in viewDidLoad, so please, place it where it belongs!
Here's the code to (a) create a CIFilter subclass called Stitch and (b) register it so you can use it like any other filter:
func openKernelFile(_ name: String) -> String {
    let filePath = Bundle.main.path(forResource: name, ofType: ".cikernel")
    do {
        return try String(contentsOfFile: filePath!)
    } catch let error as NSError {
        return error.description
    }
}
let CategoryStitched = "Stitch"

class StitchedFilters: NSObject, CIFilterConstructor {
    static func registerFilters() {
        CIFilter.registerName(
            "Stitch",
            constructor: StitchedFilters(),
            classAttributes: [
                kCIAttributeFilterCategories: [CategoryStitched]
            ])
    }
    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "Stitch":
            return Stitch()
        default:
            return nil
        }
    }
}
class Stitch: CIFilter {
    let kernel = CIKernel(source: openKernelFile("Stitch"))
    var inputThreshold: Float = 0
    var inputPalette: CIImage!
    var inputOriginal: CIImage!
    var inputFlipped: CIImage!

    override var attributes: [String : Any] {
        return [
            kCIAttributeFilterDisplayName: "Stitch",
            "inputThreshold": [kCIAttributeIdentity: 0,
                               kCIAttributeClass: "NSNumber",
                               kCIAttributeDisplayName: "Threshold",
                               kCIAttributeDefault: 0.5,
                               kCIAttributeMin: 0,
                               kCIAttributeSliderMin: 0,
                               kCIAttributeSliderMax: 1,
                               kCIAttributeType: kCIAttributeTypeScalar],
            "inputPalette": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Palette",
                             kCIAttributeType: kCIAttributeTypeImage],
            "inputOriginal": [kCIAttributeIdentity: 0,
                              kCIAttributeClass: "CIImage",
                              kCIAttributeDisplayName: "Original",
                              kCIAttributeType: kCIAttributeTypeImage],
            "inputFlipped": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Flipped",
                             kCIAttributeType: kCIAttributeTypeImage]
        ]
    }

    override init() {
        super.init()
    }

    override func setValue(_ value: Any?, forKey key: String) {
        switch key {
        case "inputThreshold":
            inputThreshold = value as! Float
        case "inputPalette":
            inputPalette = value as! CIImage
        case "inputOriginal":
            inputOriginal = value as! CIImage
        case "inputFlipped":
            inputFlipped = value as! CIImage
        default:
            break
        }
    }

    @available(*, unavailable)
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override var outputImage: CIImage? {
        return kernel!.apply(
            extent: inputPalette.extent,
            roiCallback: { (index, rect) in return rect },
            arguments: [
                inputThreshold as Any,
                inputPalette as Any,
                inputOriginal as Any,
                inputFlipped as Any
            ])
    }
}
Finally, the CIKernel code:
kernel vec4 stitch(float threshold, sampler palette, sampler original, sampler flipped) {
    vec2 coord = destCoord();
    if (coord.x < threshold) {
        return sample(original, samplerCoord(original));
    } else {
        vec2 flippedCoord = coord - vec2(threshold, 0.0);
        vec2 flippedCoordinate = samplerTransform(flipped, flippedCoord);
        return sample(flipped, flippedCoordinate);
    }
}
Now, someone else may have something more elegant - maybe even using an existing CIFilter - but this works well. It runs entirely on the GPU, so, performance-wise, it can be used in "real time". I added some strictly unnecessary code (registering the filter, using a dictionary to define attributes) to make it more of a teaching exercise for those new to creating CIKernels, so that anyone familiar with using CIFilters can follow it. If you focus on the kernel code, you'll recognize how similar to C it looks.
Last, a caveat. I am only stitching the (Y-axis) flipped image to the right of the original. You'll need to adjust things if you want something else.

Related

CIQRCodeGenerator produces wrong image

There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()
    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
This code produces the following image:
But when magnifying any corner marker, we can see the difference in border thickness:
I.e., not every scale value produces a correct final image. How can this be fixed?
The behavior you show is expected whenever you use a non-integer scale, such as 5.3. If having consistent marker widths is something you care about, use only integer scales, such as 5 or 6.
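A sketch of that fix (the `targetWidth` parameter is hypothetical): derive a whole-number scale from the desired output size, so every QR module, including the corner markers, maps to a uniform number of pixels.

```swift
// Sketch: `qrImage` is the raw filter.outputImage from CIQRCodeGenerator.
func scaledQRCode(_ qrImage: CIImage, targetWidth: CGFloat) -> CIImage {
    // floor() guarantees an integer scale factor, so marker borders stay
    // uniformly thick; max() avoids a zero scale for tiny targets.
    let scale = max(1, floor(targetWidth / qrImage.extent.width))
    return qrImage.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
}
```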

Scaled up MTKView shows gaps when joining CIImages

I'm using an MTKView written by Simon Gladman that "exposes an image property of type CIImage to simplify Metal-based rendering of Core Image filters." It has been slightly altered for performance. I left out an additional scaling operation since it has nothing to do with the issue here.
Problem: When creating a composite of smaller CIImages into a larger one, they are aligned pixel perfect. MTKView's image property is set to this CIImage composite. However, there is a scale done to this image so it fits the entire MTKView which makes gaps between the joined images visible. This is done by dividing the drawableSize width/height by the CIImage's extent width/height.
This makes me wonder if something needs to be done on the CIImage side to actually join those pixels. Saving that CIImage to the camera roll shows no separation between the joined images; it's only visible when the MTKView scales up. In addition, whatever needs to be done must have virtually no impact on performance, since these image renders are done in real time from the camera's output (the MTKView is a preview of the effect being applied).
Here is the MTKView that I'm using to render with:
class MetalImageView: MTKView {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue = {
        [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext = {
        [unowned self] in
        // cacheIntermediates disabled for real-time rendering
        return CIContext(mtlDevice: self.device!, options: [.cacheIntermediates: false])
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())
        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)
        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    /// The image to display
    var image: CIImage? {
        didSet {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard let image = image,
              let targetTexture = currentDrawable?.texture else {
            return
        }
        let commandBuffer = commandQueue.makeCommandBuffer()
        let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)
        let originX = image.extent.origin.x
        let originY = image.extent.origin.y
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scale = min(scaleX, scaleY)
        let scaledImage = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
        ciContext.render(scaledImage,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)
        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
When compositing the images, I have a full size camera image as the background just as a foundation for what the size should be, then I duplicate half of that halfway across the width or height of the image using the CISourceAtopCompositing CIFilter and translate it using a CGAffineTransform. I also give it a negative scale to add a mirror effect:
var scaledImageTransform = CGAffineTransform.identity
scaledImageTransform = scaledImageTransform.translatedBy(x: 0, y: sourceCore.extent.height)
scaledImageTransform = scaledImageTransform.scaledBy(x: 1.0, y: -1.0)

alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: alphaMaskBlend2!,
                 kCIInputBackgroundImageKey: sourceCore])
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!])
sourceCore is the original image that came through the camera. alphaMaskBlend2 is the final CIImage that I assign to the MTKView's image property. The cropRect correctly crops the mirrored part of the image. In the scaled-up MTKView there is a visible gap between these two joined CIImages. What can be done to make this image display as continuous pixels no matter how scaled the MTKView is, just like any other image does?

Programmatically creating an SKTileDefinition

I've been beating my head against a wall for hours now. I am trying to modify a texture inside my app using a CIFilter and then use that new texture as a part of a new SKTileDefinition to recolor tiles on my map.
The function below finds tiles that players "own" and attempts to recolor them by changing the SKTileDefinition to the coloredDefinition.
func updateMapTileColoration(for players: Array<Player>) {
    for player in players {
        for row in 0..<mainBoardMap!.numberOfRows {
            for col in 0..<mainBoardMap!.numberOfColumns {
                let rowReal = mainBoardMap!.numberOfRows - 1 - row
                if player.crownLocations!.contains(CGPoint(x: row, y: col)) {
                    if let tile = mainBoardMap!.tileDefinition(atColumn: col, row: rowReal) {
                        let coloredDefinition = colorizeTile(tile: tile, into: player.color!)
                        print(coloredDefinition.name)
                        mainBoardMap!.tileSet.tileGroups[4].rules[0].tileDefinitions.append(coloredDefinition)
                        mainBoardMap!.setTileGroup(crownGroup!, andTileDefinition: crownGroup!.rules[0].tileDefinitions[1], forColumn: col, row: rowReal)
                    }
                }
            }
        }
    }
}
And here is the function that actually applies the CIFilter, colorizeTile:
func colorizeTile(tile: SKTileDefinition, into color: UIColor) -> SKTileDefinition {
    let texture = tile.textures[0]
    let colorationFilter = CIFilter(name: "CIColorMonochrome")
    colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
    colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
    colorationFilter!.setValue(0.25, forKey: "inputIntensity")
    let coloredTexture = texture.applying(colorationFilter!)
    let newDefinition = SKTileDefinition(texture: texture)
    newDefinition.textures[0] = coloredTexture
    newDefinition.name = "meow"
    return newDefinition
}
I would love any help in figuring out why I cannot change the tileDefinition like I am trying to do. It seems intuitively correct to be able to define a new TileDefinition and add it to the tileGroup and then set the tile group to the specific tile definition. However, this is leading to blank tiles...
Any pointers?
After trying a bunch of things, I finally figured out what was wrong. The tile definition wasn't being created correctly because I never actually drew a new texture. As I learned, a CIImage is not a drawn texture; it's just a recipe, and we need a CIContext to draw the texture. After this change, the SKTileDefinition is properly created. The problem wasn't where I thought it was, so I'm sort of answering my own question second hand: my method for creating an SKTileDefinition was correct.
if drawContext == nil {
    drawContext = CIContext()
}
let texture = tile.textures[0]
let colorationFilter = CIFilter(name: "CIColorMonochrome")
colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
colorationFilter!.setValue(0.75, forKey: "inputIntensity")
let result = colorationFilter!.outputImage!
let output = drawContext!.createCGImage(result, from: result.extent)
let coloredTexture = SKTexture(cgImage: output!)
let newDefinition = SKTileDefinition(texture: texture)
newDefinition.textures[0] = coloredTexture
newDefinition.name = "meow"

Binarize Picture with Core Image on iOS

I was wondering if it is possible to binarize an image (convert to black and white only) with Core Image?
I made it work with OpenCV and GPUImage, but would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String : Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)
        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 means full color; 1.0 means no color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also pass the kernel source as a String directly instead of using the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))
    // A UIImage has no extent; convert to CIImage first.
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel!.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success by converting the image to greyscale using CIPhotoEffectMono or equivalent, and then applying CIColorControls with a ridiculously high inputContrast value (I used 10000). This effectively makes it black and white, and thus binarized. Useful for those who don't want to mess with custom kernels.
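A minimal sketch of that approach (the filter names and keys are the standard built-in Core Image ones; the contrast value is the one mentioned above):

```swift
// Sketch: greyscale first, then push contrast so high that every pixel
// snaps to either black or white.
func binarize(_ input: CIImage) -> CIImage {
    return input
        .applyingFilter("CIPhotoEffectMono")
        .applyingFilter("CIColorControls", parameters: ["inputContrast": 10000])
}
```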
Also, you can use an example like Apple's "Chroma Key" filter, which filters by hue; but instead of looking at hue, you just supply the rules for binarizing the data (i.e., when to set RGB all to 1.0 and when to set it all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes the CIColorThreshold and CIColorThresholdOtsu filters (the latter using Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
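A minimal usage sketch (assuming iOS 14+/macOS 11+; `inputImage` is any CIImage, and 0.5 is just an example threshold):

```swift
// CIColorThreshold: pixels above the threshold become white, the rest black.
let binary = inputImage.applyingFilter("CIColorThreshold",
                                       parameters: ["inputThreshold": 0.5])

// CIColorThresholdOtsu needs no parameter; it derives the threshold from
// the image's histogram.
let binaryOtsu = inputImage.applyingFilter("CIColorThresholdOtsu")
```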
Another minimal option is CIColorMonochrome with a white input color:
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
    parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250 or so built-in CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951

Get and change hue of SKSpriteNode's SKColor(HSBA)?

A SKSpriteNode's SKColor has a way to be created with Hue, Saturation, Brightness & Alpha:
let myColor = SKColor(hue: 0.5, saturation: 1, brightness: 1, alpha: 1)
mySprite.color = myColor
How do I get at the hue of a SKSpriteNode and make a change to it? eg, divide it by 2.
An SKSpriteNode is a node that draws a texture (optionally blended with a color), an image, or a colored square. So, this is its nature.
When you make an SKSpriteNode, you have an instance property, called texture, that represents the texture used to draw the sprite.
Since iOS 9.x, we are able to retrieve an image from a texture as shown in the code below. In this example I call my SKSpriteNode spriteBg:
let spriteBg = SKSpriteNode(texture: SKTexture(imageNamed: "myImage.png"))
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
    } else {
        // Fallback on earlier versions
    }
}
Following this interesting answer, we can translate it to a more comfortable Swift 3.0 version:
func imageWith(source: UIImage, rotatedByHue: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = CIImage(cgImage: source.cgImage!)
    // Apply a CIHueAdjust filter
    guard let hueAdjust = CIFilter(name: "CIHueAdjust") else { return source }
    hueAdjust.setDefaults()
    hueAdjust.setValue(sourceCore, forKey: "inputImage")
    hueAdjust.setValue(rotatedByHue, forKey: "inputAngle")
    let resultCore = hueAdjust.outputImage!
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore, from: resultCore.extent)
    let result = UIImage(cgImage: resultRef!)
    return result
}
So, finally with the previous code we can do:
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
        let changedImage = imageWith(source: image, rotatedByHue: 0.5)
        spriteBg.texture = SKTexture(image: changedImage)
    } else {
        // Fallback on earlier versions, or buy a new iPhone
    }
}
I'm not in a place to be able to test this right now, but looking at the UIColor documentation (UIColor and SKColor are basically the same thing), you should be able to use the .getHue(...) function to retrieve the color's components, make changes to them, and then set the SKSpriteNode's color property to the new value. The .getHue(...) function "returns the components that make up the color in the HSB color space."
https://developer.apple.com/reference/uikit/uicolor/1621949-gethue
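A sketch of that approach (assuming iOS, where SKColor is UIColor and getHue returns a Bool indicating whether the conversion succeeded):

```swift
// Read back the HSBA components, halve the hue, and reassign the color.
var h: CGFloat = 0, s: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
if mySprite.color.getHue(&h, saturation: &s, brightness: &b, alpha: &a) {
    mySprite.color = SKColor(hue: h / 2, saturation: s, brightness: b, alpha: a)
    mySprite.colorBlendFactor = 1.0  // ensure the color actually tints the texture
}
```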