CIGaussianBlur shrinks UIImageView - swift

Using CIGaussianBlur causes UIImageView to apply the blur from the border in, making the image appear to shrink (right image). Using .blur on a SwiftUI view does the opposite; the blur is applied from the border outwards (left image). This is the effect I’m trying to achieve in UIKit. How can I go about this?
I've seen a few posts about using CIAffineClamp, but that causes the blur to stop at the image border, which is not what I want.
private let context = CIContext()
private let filter = CIFilter(name: "CIGaussianBlur")!

private func createBluredImage(using image: UIImage, value: CGFloat) -> UIImage? {
    let beginImage = CIImage(image: image)

    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(value, forKey: kCIInputRadiusKey)

    guard
        let outputImage = filter.outputImage,
        let cgImage = context.createCGImage(outputImage, from: outputImage.extent)
    else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}

When I used CIGaussianBlur I wanted my output image to be contained inside the image frame, so I used CIAffineClamp on the image before applying the blur, as you describe.
You might need to render your source image into a larger frame, clamp to that larger frame using CIAffineClamp, apply your blur filter, then load the resulting blurred output image. Core Image is a bit of a pain to set up and figure out, so I don’t have a full solution ready for you, but that’s what I would suggest.
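For what it's worth, here is a rough sketch of one way to get the outward spill in UIKit, under these assumptions: skip the clamp (since you don't want the blur to stop at the border), render a padded region around the source, and then enlarge the image view by the same padding so the visible content keeps its original size. The helper name and the padding factor are illustrative, not from the answer above.

private let context = CIContext()

// Returns a blurred image on a canvas padded so the blur can spill past
// the original border, along with the padding (in pixels) that was added.
private func makeOutwardBlurredImage(from image: UIImage, radius: CGFloat) -> (image: UIImage, padding: CGFloat)? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }

    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let output = filter.outputImage else { return nil }

    // Render a region larger than the source so the blur that bleeds past
    // the original border is kept instead of being cropped away.
    let padding = radius * 3
    let paddedRect = input.extent.insetBy(dx: -padding, dy: -padding)
    guard let cgImage = context.createCGImage(output, from: paddedRect) else { return nil }

    return (UIImage(cgImage: cgImage), padding)
}

The caller would then outset the image view's frame by the returned padding (converted from pixels to points) so the original content stays the same size on screen and only the blur extends past the old bounds.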

Related

Swift, blurred images in UITableViewCell does not run smooth

What I am trying to do is show a list of blurred images from the web. This works fine with this code in my custom UITableViewCell:
func blurImage(image: UIImage, imageView: UIImageView) {
    DispatchQueue.global(qos: .background).async {
        let inputImage = CIImage(image: image)
        let originalOrientation = image.imageOrientation
        let originalScale = image.scale

        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(inputImage, forKey: kCIInputImageKey)
        filter?.setValue(15.0, forKey: kCIInputRadiusKey)
        let outputImage = filter?.outputImage

        var cgImage: CGImage?
        if let outputImage = outputImage {
            cgImage = self.context.createCGImage(outputImage, from: (inputImage?.extent)!)
        }

        DispatchQueue.main.async {
            if let cgImageA = cgImage {
                imageView.image = UIImage(cgImage: cgImageA, scale: originalScale, orientation: originalOrientation)
            }
        }
    }
}
The problem is that the blur calculation takes some time, and although it runs on a background thread, scrolling is not as fast and smooth as it is without the blur effect at all.
Is there a way to make it run more smoothly, or to show a placeholder image until the blurred image is ready to be drawn, resulting in smooth scrolling?
Step 1) Don't use qos: .background for user-initiated tasks. Docs say: Background tasks have the lowest priority of all tasks. Assign this class to tasks or dispatch queues that you use to perform work while your app is running in the background.
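As for the placeholder part of the question, one rough sketch (not from the answer above) is to show a placeholder immediately and cache each blurred result so a cell only pays the filtering cost once. The shared CIContext, the cache, and the cacheKey parameter are assumptions for illustration:

private let blurCache = NSCache<NSString, UIImage>()
private let sharedContext = CIContext()

func setBlurredImage(_ image: UIImage, cacheKey: String, on imageView: UIImageView, placeholder: UIImage?) {
    // Reuse a previously blurred result so scrolling never re-filters.
    if let cached = blurCache.object(forKey: cacheKey as NSString) {
        imageView.image = cached
        return
    }

    // Show the placeholder right away so the cell has something to draw.
    imageView.image = placeholder

    DispatchQueue.global(qos: .userInitiated).async {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIGaussianBlur") else { return }

        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(15.0, forKey: kCIInputRadiusKey)

        guard let output = filter.outputImage,
              let cgImage = sharedContext.createCGImage(output, from: input.extent) else { return }

        let blurred = UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
        blurCache.setObject(blurred, forKey: cacheKey as NSString)

        DispatchQueue.main.async {
            imageView.image = blurred
        }
    }
}

Because cells are reused, the caller should also check that the image view still belongs to the same row before assigning the result; otherwise a slow blur can land in the wrong cell.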

Cropping/Compositing An Image With Vision/CoreImage

I am working with the Vision framework in iOS 13 and am trying to achieve the following tasks:
Take an image (in this case, a CIImage) and locate all faces in the image using Vision.
Crop each face into its own CIImage (I'll call this a "face image").
Filter each face image using a Core Image filter, such as a blur or comic-book effect.
Composite the face image back over the original image, thereby creating effects that only apply to the face.
A better example of this would be the end goal of taking a live camera feed from an AVCaptureSession and blurring every face in the video frame, compositing the blurred faces back over the original image for saving.
I almost have this working, save for the fact that there seems to be a coordinate/translation issue. For example, when I test this code and move my face, the "blurred" section goes the wrong direction (if I turn my face right, the box goes left; if I look up, the box goes down). While I think this may have something to do with mirroring on the front-facing camera, I can't seem to figure out what I should try next:
func drawFaceBox(bufferImage: CIImage, observations: [VNFaceObservation]) -> CVPixelBuffer? {
    // The filter
    let blur = CIFilter(name: "CICrystallize")

    // The unfiltered image, prepared for filtering
    var filteredImage = bufferImage

    // Find and crop each face
    if !observations.isEmpty {
        for face in observations {
            let faceRect = VNImageRectForNormalizedRect(face.boundingBox, Int(bufferImage.extent.size.width), Int(bufferImage.extent.size.height))
            let croppedFace = bufferImage.cropped(to: faceRect)

            blur?.setValue(croppedFace, forKey: kCIInputImageKey)
            blur?.setValue(10.0, forKey: kCIInputRadiusKey)

            if let blurred = blur?.value(forKey: kCIOutputImageKey) as? CIImage {
                compositorCIFilter?.setValue(blurred, forKey: kCIInputImageKey)
                compositorCIFilter?.setValue(filteredImage, forKey: kCIInputBackgroundImageKey)

                if let output = compositorCIFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
                    filteredImage = output
                }
            }
        }
    }

    // Convert image to CVPixelBuffer and return. This part works fine.
}
Any thoughts on how I can composite the blurred face image(s) back to their original position with accuracy? Or any other approach to only filter part of the original CIImage to avoid this issue altogether/save processing? Thanks!
I believe this issue stems from an orientation problem earlier in the pipeline (specifically, during the output of the sample buffers from the camera, which is where the Vision task was instantiated). I have updated my didOutputSampleBuffer code like so:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    ...

    // Setup the current device orientation
    let curDeviceOrientation = UIDevice.current.orientation

    // Handle the image property orientation
    //let orientation = self.exifOrientation(from: curDeviceOrientation)

    // Setup the image request handler
    //let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: CGImagePropertyOrientation(rawValue: UInt32(1))!)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])

    // Setup the completion handler
    let completion: VNRequestCompletionHandler = { request, error in
        let observations = request.results as! [VNFaceObservation]
        // Draw faces
        DispatchQueue.main.async {
            // HANDLE FACES
            self.drawFaceBoxes(for: observations)
        }
    }

    // Setup the image request
    let request = VNDetectFaceRectanglesRequest(completionHandler: completion)

    // Handle the request
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}
As noted, I have commented out the let orientation = ... and the first let handler = ..., which was using the orientation. By removing the reference to the orientation, I seem to have removed any issue with orientation in the Vision calculations.
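For reference, here is a rough sketch of the crop-filter-composite step itself, assuming the orientation fix above is in place. It uses CIGaussianBlur; another filter such as CICrystallize slots in the same way. The function name is illustrative.

import CoreImage
import Vision

func blurFaces(in image: CIImage, observations: [VNFaceObservation]) -> CIImage {
    var result = image
    for face in observations {
        // Convert the normalized bounding box into pixel coordinates.
        let faceRect = VNImageRectForNormalizedRect(face.boundingBox,
                                                    Int(image.extent.width),
                                                    Int(image.extent.height))

        // Cropping keeps the face's extent, so the blurred patch already
        // carries its original position in the image's coordinate space.
        let blurred = image
            .cropped(to: faceRect)
            .clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10.0])
            .cropped(to: faceRect)

        // Source-over compositing drops the blurred patch back onto the frame.
        result = blurred.composited(over: result)
    }
    return result
}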

UIView invert mask MaskView

I want to apply an inverted mask to my UIView. I set the mask to a UIImageView with a transparent image. However, the output with
view.mask = imageView
is not the desired result. How can I achieve the desired result, as illustrated below? The desired result uses the mask cutout as transparency. When I check the view's mask, it isn't a CAShapeLayer, so I can't invert it that way.
Seems like you could do a few things. You could use the image you have but mask a white view and place a blue view behind it. Or you could adjust the image asset you're using by reversing the transparency. Or you could use Core Image to do that in code. For example:
func invertMask(_ image: UIImage) -> UIImage? {
    guard let inputMaskImage = CIImage(image: image),
          let backgroundImageFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.black]),
          let inputColorFilter = CIFilter(name: "CIConstantColorGenerator", withInputParameters: [kCIInputColorKey: CIColor.clear]),
          let inputImage = inputColorFilter.outputImage,
          let backgroundImage = backgroundImageFilter.outputImage,
          let filter = CIFilter(name: "CIBlendWithAlphaMask", withInputParameters: [kCIInputImageKey: inputImage, kCIInputBackgroundImageKey: backgroundImage, kCIInputMaskImageKey: inputMaskImage]),
          let filterOutput = filter.outputImage,
          let outputImage = CIContext().createCGImage(filterOutput, from: inputMaskImage.extent)
    else { return nil }

    let finalOutputImage = UIImage(cgImage: outputImage)
    return finalOutputImage
}
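Illustrative usage, assuming the original mask image is in a constant called maskImage: build an image view from the inverted result and assign it as the view's mask.

if let inverted = invertMask(maskImage) {
    let maskImageView = UIImageView(image: inverted)
    // The mask view must cover the area of the view it masks.
    maskImageView.frame = view.bounds
    view.mask = maskImageView
}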

Apply CiFilter GaussianBlur

As the title says, I need to apply GaussianBlur to a UIImage; I tried to search for a tutorial, but I am still not able to implement it. I tried this:
var imageToBlur = CIImage(image: coro.logo)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(imageToBlur, forKey: "inputImage")
blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.immagineCoro.image = blurredImage
I'm importing the CoreImage framework, but Xcode shows me an error ("NSInvalidArgumentException") at line 5. Can anyone help me implement GaussianBlur and CIFilter in general?
Edit: thanks to you both, but I have another question; I need to apply the blur only to a small part of the image, like this
I just tried your code, and here's the modification I suggest, this works:
let fileURL = NSBundle.mainBundle().URLForResource("th", withExtension: "png")
let beginImage = CIImage(contentsOfURL: fileURL)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(beginImage, forKey: "inputImage")
//blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.profileImageView.image = blurredImage
So, commenting out the portion you see above did the trick, and I get a blurred image as expected. I'm using a file path, but this shouldn't make a difference from what you have.
You've used inputImage twice. The second time is probably meant to be inputRadius.
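In other words, the second call presumably should have been:

blurfilter.setValue(2, forKey: "inputRadius")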
You might want to create a CIImage greyscale mask image with the shape you want, a blurred CIImage (using CIGaussianBlur), and then use CIBlendWithMask to blend them together.
The inputs of CIBlendWithMask are the input image (the blurred image), the input background image (the unblurred image), and the mask image (the shape you want). The output is the image you desire.
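Here is a rough sketch of that approach, assuming a circular blur region; the radial-gradient mask, the function name, and the radii are illustrative choices, not the only way to build the mask.

func blurRegion(of image: CIImage, around center: CGPoint, radius: CGFloat) -> CIImage? {
    // Blurred copy of the whole image (clamped so the edges don't darken).
    let blurred = image
        .clampedToExtent()
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 10.0])
        .cropped(to: image.extent)

    // Greyscale mask: white inside the circle, fading to black outside it.
    guard let mask = CIFilter(name: "CIRadialGradient", parameters: [
        "inputCenter": CIVector(x: center.x, y: center.y),
        "inputRadius0": radius * 0.8,
        "inputRadius1": radius,
        "inputColor0": CIColor.white,
        "inputColor1": CIColor.black
    ])?.outputImage?.cropped(to: image.extent) else { return nil }

    // White areas of the mask take the blurred image; black areas keep the original.
    return blurred.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputBackgroundImageKey: image,
        kCIInputMaskImageKey: mask
    ])
}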

UIImageView Image Disappears With Certain UIImageOrientations

Working through a Core Image tutorial: http://www.raywenderlich.com/76285/beginning-core-image-swift
There's one part where you need to preserve the UIImageOrientation (of course!). However, I'm noticing something VERY quirky and I'm not sure what the cause is.
Here is the code block:
@IBAction func amountSliderValueChanged(sender: UISlider) {
    let sliderValue = sender.value
    filter.setValue(sliderValue, forKey: kCIInputIntensityKey)

    let outputImage = filter.outputImage
    let cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())

    let newImage = UIImage(CGImage: cgimg, scale: 1, orientation: UIImageOrientation.UpMirrored)
    // let newImage = UIImage(CGImage: cgimg)

    println("New image is \(newImage)")
    self.imageView.image = newImage
}
When I change the image to certain orientations, say Up, the image appears. However, when I change to Right or Left, it moves outside the UIImageView (and if I set the scale up really high, I can see parts of it coming back onto the screen).
I am not rotating the UIImageView, only the UIImage. Before, when I was doing this in Objective-C, I never had this issue; I would just set the UIImage orientation and it would always be at 0,0 in the UIImageView.
Using Swift (or perhaps something else?) seems to result in different behaviour.
Can you think of any reason why rotating the image moves it away from the 0,0 of its UIImageView, and what I can do to fix that?
Thanks!