I have a UIImage and I would like to convert it to a black & white picture, or if you know how to apply some other filters I would also appreciate that, for example a chrome filter (with nice colours).
Most of the code I have found so far is in Objective-C, which I don't understand very well.
Right now I am using this code to give the image some effects:
func applyFilter() {
    let inputImage = CIImage(image: tempImageView.image!)
    // Pick a random hue-rotation angle in roughly [0, π)
    let randomColor = [kCIInputAngleKey: (Double(arc4random_uniform(314)) / 100)]
    let filteredImage = inputImage!.imageByApplyingFilter("CIHueAdjust", withInputParameters: randomColor)
    let renderedImage = context.createCGImage(filteredImage, fromRect: filteredImage.extent)
    tempImageView.image = UIImage(CGImage: renderedImage)
}
It works, but the effects are terrible.
Thanks.
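For reference, Apple ships ready-made photo-effect filters that cover both requests: CIPhotoEffectNoir for black & white and CIPhotoEffectChrome for the chrome look. A minimal sketch adapted from the code above (same tempImageView and context, using the current Swift API names):
func applyPhotoEffect(named name: String) {
    // name is e.g. "CIPhotoEffectNoir" (black & white) or "CIPhotoEffectChrome"
    guard let input = CIImage(image: tempImageView.image!) else { return }
    let filtered = input.applyingFilter(name, parameters: [:])
    if let rendered = context.createCGImage(filtered, from: filtered.extent) {
        tempImageView.image = UIImage(cgImage: rendered)
    }
}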
Related
When I capture the contents of an MTKView into a UIImage, the resulting image looks qualitatively different, as shown below:
The code I use to generate the UIImage is as follows:
let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
kCIContextOutputPremultiplied: true,
kCIContextUseSoftwareRenderer: false] as [String : Any]
let lastDrawableDisplayed = self.currentDrawable! // needed to hold the last drawable presented to screen
drawingUIView.image = UIImage(ciImage: CIImage(mtlTexture: lastDrawableDisplayed.texture, options: kciOptions)!)
Since I don't correct the CIImage orientation (with .oriented(CGImagePropertyOrientation.downMirrored)), the resulting image is upside down, as shown in the image above. I leave the mirrored orientation as-is so I can point out the color differences between the two captures.
No matter how I change the kciOptions parameters (say, even changing the colorspace to grayscale), I see no changes in the resulting UIImage, which appears much dimmer and more desaturated than the original. Does anybody have suggestions for how I can accurately capture what I'm drawing on the MTKView into a UIImage? Any suggestions would be much appreciated.
Below are my MTKView settings which may prove relevant:
let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
renderPipelineDescriptor.vertexFunction = vertexProgram
renderPipelineDescriptor.sampleCount = self.sampleCount
renderPipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormat.bgra8Unorm
renderPipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
renderPipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
renderPipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
renderPipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
renderPipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
renderPipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
renderPipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
self.isOpaque = false // makes MTKView bg transparent
let context = CIContext()
let texture = metalView.currentDrawable!.texture
let cImg = CIImage(mtlTexture: texture, options: nil)!
let cgImg = context.createCGImage(cImg, from: cImg.extent)!
let uiImg = UIImage(cgImage: cgImg)
I've seen this issue in a couple of posts, but no clear answer. Here is what I've found:
For starters,
renderPipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormat.bgra8Unorm
should really just be set to the MTKView's native pixel format:
renderPipelineDescriptor.colorAttachments[0].pixelFormat = self.colorPixelFormat
Secondly, when I set the CIImage's options:
let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
kCIContextOutputPremultiplied: true,
kCIContextUseSoftwareRenderer: false] as [String : Any]
It didn't matter what I set kCIContextWorkingColorSpace to; I never saw any visual difference regardless of what I used. The property I really needed to set is kCIImageColorSpace. So the updated kciOptions look like:
let kciOptions = [kCIImageColorSpace: CGColorSpaceCreateDeviceRGB(),
kCIContextOutputPremultiplied: true,
kCIContextUseSoftwareRenderer: false] as [String : Any]
In the same way that the view's native pixel format is used above, calling CGColorSpaceCreateDeviceRGB() creates an RGB colorspace that is specific to the device being used.
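Putting the pieces together, a sketch of the capture path with these fixes (assuming metalView is the MTKView, the pipeline's pixel format has been set to the view's native format as above, and the drawable has just been presented; the .downMirrored flip from the question is included):
let kciOptions = [kCIImageColorSpace: CGColorSpaceCreateDeviceRGB(),
                  kCIContextOutputPremultiplied: true,
                  kCIContextUseSoftwareRenderer: false] as [String: Any]
let texture = metalView.currentDrawable!.texture
let ciImage = CIImage(mtlTexture: texture, options: kciOptions)!
    .oriented(CGImagePropertyOrientation.downMirrored) // Metal textures are flipped relative to UIKit
let context = CIContext()
let cgImage = context.createCGImage(ciImage, from: ciImage.extent)!
let uiImage = UIImage(cgImage: cgImage)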
Your CGColorSpace is .sRGB but your renderPipelineDescriptor's pixelFormat is .bgra8Unorm. Try changing that line to:
renderPipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormat.bgra8Unorm_srgb
I want to apply an inverted mask to my UIView. I set the mask to a UIImageView with a transparent image. However the output with
view.mask = imageView
is not the desired result. How can I achieve the result I illustrated below, where the mask cutout becomes transparency? When I check the mask of the view, it isn't a CAShapeLayer, so I can't invert it that way.
Seems like you could do a few things. You could use the image you have but mask a white view and place a blue view behind it. Or you could adjust the image asset you're using by reversing the transparency. Or you could use Core Image to do that in code. For example:
func invertMask(_ image: UIImage) -> UIImage? {
    guard let inputMaskImage = CIImage(image: image),
        let backgroundImageFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.black]),
        let inputColorFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.clear]),
        let inputImage = inputColorFilter.outputImage,
        let backgroundImage = backgroundImageFilter.outputImage,
        let filter = CIFilter(name: "CIBlendWithAlphaMask", withInputParameters: [kCIInputImageKey: inputImage, kCIInputBackgroundImageKey: backgroundImage, kCIInputMaskImageKey: inputMaskImage]),
        let filterOutput = filter.outputImage,
        let outputImage = CIContext().createCGImage(filterOutput, from: inputMaskImage.extent) else { return nil }
    return UIImage(cgImage: outputImage)
}
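A quick usage sketch, following the question's setup (maskImage here stands for the transparent mask asset from the question):
if let inverted = invertMask(maskImage) {
    let maskView = UIImageView(image: inverted)
    maskView.frame = view.bounds
    view.mask = maskView
}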
I was wondering if it is possible to binarize an image (convert to black and white only) with Core Image?
I made it work with OpenCV and GPUImage, but I'd prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
import CoreImage
import MetalPerformanceShaders

class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }

        // Pixels above the threshold become 1.0, everything else becomes 0.0
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)

        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
// croppedCIImage is the CIImage you want to binarize
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
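For what it's worth, a guarded variant of the above might look like this (a sketch; still assuming ciCtx is a shared CIContext, as in the original):
func createMonoImage(_ image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    guard let outputImage = filter.outputImage,
          let cgimg = ciCtx.createCGImage(outputImage, from: outputImage.extent) else { return nil }
    return UIImage(cgImage: cgimg)
}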
CIColorKernel:
The FadeToBW GLSL kernel (factor 0.0 is full color, factor 1.0 is no color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also paste the kernel source directly into the openKernelFile call as a String.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(string: openKernelFile("FadeToBW"))!
    // A UIImage has no extent; convert to a CIImage first
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: outputImage!.extent)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success by converting to grayscale using CIPhotoEffectMono or equivalent, and then applying CIColorControls with a ridiculously high inputContrast value (I used 10000). This effectively makes the image black and white, thus binarizing it. Useful for those who don't want to mess with custom kernels.
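A minimal sketch of that chain (assuming inputImage is a CIImage):
let binarized = inputImage
    .applyingFilter("CIPhotoEffectMono")
    .applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 10000])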
Also, you can adapt an example like Apple's "Chroma Key" filter, which filters on hue; instead of looking at hue, you just supply the rules for binarizing the data (i.e. when to set RGB all to 1.0 and when to set it all to 0.0). See the sketch after the link below.
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
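Apple's sample builds a CIColorCube from such a rule; a rough sketch of the same idea, binarizing on luminance instead of hue (the threshold and function name here are illustrative):
import CoreImage

func binarized(_ input: CIImage, threshold: Float = 0.5) -> CIImage {
    let size = 16
    var cube = [Float]()
    cube.reserveCapacity(size * size * size * 4)
    for b in 0..<size {          // cube data is ordered blue, then green, then red
        for g in 0..<size {
            for r in 0..<size {
                let lum = (0.299 * Float(r) + 0.587 * Float(g) + 0.114 * Float(b)) / Float(size - 1)
                let v: Float = lum > threshold ? 1.0 : 0.0
                cube += [v, v, v, 1.0]  // RGBA entry: pure white or pure black
            }
        }
    }
    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    return input.applyingFilter("CIColorCube",
                                parameters: ["inputCubeDimension": size,
                                             "inputCubeData": data])
}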
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
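A minimal usage sketch (assuming inputImage is a CIImage; inputThreshold is the cutoff in 0...1):
let binary = inputImage.applyingFilter("CIColorThreshold",
                                       parameters: ["inputThreshold": 0.5])
// Or let Core Image derive the threshold from the histogram via Otsu's method:
let binaryOtsu = inputImage.applyingFilter("CIColorThresholdOtsu", parameters: [:])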
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
parameters: [kCIInputColorKey: CIColor.white])
If you want to play with all 250+ CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951
I want to convert an SKSpriteNode into a UIImage.
let testImage = SKSpriteNode(imageNamed: "PlainProject") as! UIImage
but I get a crash on the line above. Is there another way to do this?
You should get the cgImage of the sprite's texture first, and then create a UIImage from it:
let testImage = SKSpriteNode(imageNamed: "someImage.png")
let image = UIImage(cgImage: (testImage.texture?.cgImage())!)
or a better version without force-unwrapping a cgImage that might be nil:
var image = UIImage()
if let cgImage = testImage.texture?.cgImage() {
    image = UIImage(cgImage: cgImage)
}
Result (in playground):
That is exactly what my image looks like. Hope this helps!
You cannot do that directly.
First you need to get the texture from the SKSpriteNode.
After that you can get the image with textureOfNode!.cgImage(), as shown in the example below:
let testNode = SKSpriteNode(imageNamed: "Spaceship")
let textureOfNode = testNode.texture
let imageFromTexture = UIImage.init(cgImage: textureOfNode!.cgImage())
print(imageFromTexture) //<UIImage: 0x61000008afa0>, {394, 347}
As the title says, I need to apply a Gaussian blur to a UIImage; I tried to search for a tutorial, but I am still not able to implement it. I tried this:
var imageToBlur = CIImage(image: coro.logo)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(imageToBlur, forKey: "inputImage")
blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.immagineCoro.image = blurredImage
importing the CoreImage framework, but Xcode shows me an error ("NSInvalidArgumentException") at line 5. Can anyone help me implement Gaussian blur and CIFilter in general?
Edit: thanks to you both, but I have another question; I need to apply the blur only to a small part of the image, like this:
I just tried your code, and here's the modification I suggest; this works:
let fileURL = NSBundle.mainBundle().URLForResource("th", withExtension: "png")
let beginImage = CIImage(contentsOfURL: fileURL)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(beginImage, forKey: "inputImage")
//blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.profileImageView.image = blurredImage
So, commenting out the line you see above did the trick, and I get a blurred image as expected. I'm using a file path here, but this shouldn't make a difference from what you have.
You've used inputImage twice. The second time is probably meant to be inputRadius.
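If so, the corrected line would look something like:
blurfilter.setValue(2, forKey: kCIInputRadiusKey) // "inputRadius", the blur radius in pixels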
You might want to create a grayscale CIImage mask with the shape you want, a blurred CIImage (using CIGaussianBlur), and then use CIBlendWithMask to blend them together.
The inputs of CIBlendWithMask are the input image (the blurred image), the input background image (the unblurred image), and the mask image (the shape you want). The output is the image you desire.
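A sketch of that pipeline (assuming inputCIImage is the source image and maskCIImage is the grayscale shape mask; white areas of the mask take the blurred image):
let blurred = inputCIImage
    .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 8])
    .cropped(to: inputCIImage.extent) // the blur expands the extent; crop it back
let output = blurred.applyingFilter("CIBlendWithMask", parameters: [
    kCIInputBackgroundImageKey: inputCIImage,
    kCIInputMaskImageKey: maskCIImage
])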