Apply CIFilters on UI elements - swift

I want to apply a CIFilter to a UI element. I tried to apply it to the view's layer via the .filters property. However, the filter won't get applied.

Here's an approach: use UIGraphicsGetImageFromCurrentImageContext to generate a UIImage, apply the filter to that, and overlay an image view containing the filtered image over your original component.
Here's a way to do that with a blur (taken from my blog):
Getting a blurred representation of a UIView is pretty simple: I need to begin an image context, use the view's layer's render(in:) method to render into the context, and then get a UIImage from the context:
UIGraphicsBeginImageContextWithOptions(CGSize(width: frame.width, height: frame.height), false, 1)
layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
Once I have the image populated, it's a fairly standard workflow to apply a Gaussian blur to it:
guard let blur = CIFilter(name: "CIGaussianBlur") else
{
    return
}

blur.setValue(CIImage(image: image), forKey: kCIInputImageKey)
blur.setValue(blurRadius, forKey: kCIInputRadiusKey)

let ciContext = CIContext(options: nil)
let result = blur.outputImage!

let boundingRect = CGRect(x: -blurRadius * 4,
                          y: -blurRadius * 4,
                          width: frame.width + (blurRadius * 8),
                          height: frame.height + (blurRadius * 8))

let cgImage = ciContext.createCGImage(result, from: boundingRect)!
let filteredImage = UIImage(cgImage: cgImage)
A blurred image will be larger than its input image, so I need to be explicit about the size I require in createCGImage.
The next step is to add a UIImageView to my view and hide all the other views. I've subclassed UIImageView to BlurOverlay so that when it comes to removing it, I can be sure I'm not removing an existing UIImageView:
let blurOverlay = BlurOverlay()
blurOverlay.frame = boundingRect
blurOverlay.image = filteredImage
subviews.forEach { $0.isHidden = true }
addSubview(blurOverlay)
When it comes to de-blurring, I want to ensure the last subview is one of my BlurOverlay instances, remove it, and unhide the existing views:
func unBlur()
{
    if let blurOverlay = subviews.last as? BlurOverlay
    {
        blurOverlay.removeFromSuperview()
        subviews.forEach { $0.isHidden = false }
    }
}
Finally, to see if a UIView is currently blurred, I just need to see if its last subview is a BlurOverlay:
var isBlurred: Bool
{
    return subviews.last is BlurOverlay
}
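Assuming the snippets above live in a UIView extension or subclass, with the rendering and filtering code wrapped up in a blur(blurRadius:) method (the wrapper and view names here are illustrative, not from the original blog post), usage would look something like this:
// Hypothetical usage; blur(blurRadius:) is an assumed wrapper around the code above.
someView.blur(blurRadius: 5)    // renders the view, blurs it, and overlays the BlurOverlay

if someView.isBlurred
{
    someView.unBlur()           // removes the overlay and unhides the original subviews
}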

Related

How do I position an image correctly in MTKView?

I am trying to implement an image editing view using MTKView and Core Image filters. I have the basics working and can see the filter applied in real time. However, the image is not positioned correctly in the view. Can someone point me in the right direction as to what needs to be done to get the image to render correctly? It needs to fit the view and retain its original aspect ratio.
Here is the Metal draw function, along with the empty drawableSizeWillChange. It's probably also worth mentioning that the MTKView is a subview of another view in a scroll view and can be resized by the user. It's not clear to me how Metal handles resizing the view, but it seems that doesn't come for free.
I am also trying to call the draw() function from a background thread, and this appears to sort of work: I can see the filter effects as they are applied to the image using a slider. As I understand it, this should be possible.
It also seems that the coordinate space for rendering is the image's coordinate space, so if the image is smaller than the MTKView, then to position the image in the centre the X and Y coordinates will be negative.
When the view is resized, everything goes crazy, with the image suddenly becoming far too big and parts of the background not being cleared.
Also, when resizing the view the image gets stretched rather than redrawing smoothly.
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
}

public func draw(in view: MTKView) {
    if let ciImage = self.ciImage {
        if let currentDrawable = view.currentDrawable { // 1
            let commandBuffer = commandQueue.makeCommandBuffer()
            let inputImage = ciImage // 2
            exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
            exposureFilter.setValue(ev, forKey: kCIInputEVKey)

            context.render(exposureFilter.outputImage!,
                           to: currentDrawable.texture,
                           commandBuffer: commandBuffer,
                           bounds: CGRect(origin: .zero, size: view.drawableSize),
                           colorSpace: colorSpace)

            commandBuffer?.present(currentDrawable)
            commandBuffer?.commit()
        }
    }
}
As you can see, the image ends up in the bottom left.
let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
That should help you out. The issue is that your CIImage, wherever it might come from, is not the same size as the view you are rendering it in.
So what you could opt to do is calculate the scale, and apply it as a filter:
let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
scaleFilter?.setValue(ciImage, forKey: kCIInputImageKey)
scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
This resolves your scale issue; I currently do not know what the most efficient approach would be to actually reposition the image.
Further reference: https://nshipster.com/image-resizing/
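For completeness, here is a rough sketch (my own, not from the answer) of how the scale could be computed from the drawable size and the image extent, using the names from the question's draw(in:) method, and how the scaled output would be read back:
// Sketch: uniform scale so the image fits the drawable while keeping its aspect ratio.
let imageExtent = ciImage.extent
let scale = min(view.drawableSize.width / imageExtent.width,
                view.drawableSize.height / imageExtent.height)

let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
scaleFilter?.setValue(ciImage, forKey: kCIInputImageKey)
scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
scaleFilter?.setValue(1.0, forKey: kCIInputAspectRatioKey)

// Feed this into the rest of the filter chain instead of the raw ciImage.
let scaledImage = scaleFilter?.outputImage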
The problem is your call to context.render — you are calling render with bounds: origin .zero. That’s the lower left.
Placing the drawing in the correct spot is up to you. You need to work out where the right bounds origin should be, based on the image dimensions and your drawable size, and render there. If the size is wrong, you also need to apply a scale transform first.
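To make that concrete, here is a sketch (my own, reusing the names from the question's draw(in:) method) of centring an already-scaled image: the bounds rectangle is expressed in the image's coordinate space, so the origin goes negative to push the image toward the middle of the drawable:
// Sketch: centre the (scaled) output image inside the drawable.
let output = exposureFilter.outputImage!
let dx = (view.drawableSize.width - output.extent.width) / 2
let dy = (view.drawableSize.height - output.extent.height) / 2

context.render(output,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: CGRect(x: -dx, y: -dy, width: view.drawableSize.width, height: view.drawableSize.height),
               colorSpace: colorSpace)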
Thanks to Tristan Hume's MetalTest2 I now have it working nicely in two synchronised scroll views. The basics are in the subclass below; the renderer and shaders can be found in Tristan's MetalTest2 project. This class is managed by a view controller and is a subview of the scrollView's documentView.
//
// MetalLayerView.swift
// MetalTest2
//
// Created by Tristan Hume on 2019-06-19.
// Copyright © 2019 Tristan Hume. All rights reserved.
//
import Cocoa
// Thanks to https://stackoverflow.com/questions/45375548/resizing-mtkview-scales-old-content-before-redraw
// for the recipe behind this, although I had to add presentsWithTransaction and the wait to make it glitch-free
class ImageMetalView: NSView, CALayerDelegate {
    var renderer: Renderer
    var metalLayer: CAMetalLayer!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var ciMgr: CIManager?
    var showEdits: Bool = false

    var ciImage: CIImage? {
        didSet {
            self.metalLayer.setNeedsDisplay()
        }
    }

    @objc dynamic var fileUrl: URL? {
        didSet {
            if let url = fileUrl {
                self.ciImage = CIImage(contentsOf: url)
            }
        }
    }

    /// Bind to this property from the viewController to receive notifications of changes to CI filter parameters
    @objc dynamic var adjustmentsChanged: Bool = false {
        didSet {
            self.metalLayer.setNeedsDisplay()
        }
    }

    override init(frame: NSRect) {
        let _device = MTLCreateSystemDefaultDevice()!
        renderer = Renderer(pixelFormat: .bgra8Unorm, device: _device)
        self.commandQueue = _device.makeCommandQueue()
        self.context = CIContext()
        self.ciMgr = CIManager(context: self.context)
        super.init(frame: frame)
        self.wantsLayer = true
        self.layerContentsRedrawPolicy = .duringViewResize
        // This property only matters in the case of a rendering glitch, which shouldn't happen anymore
        // The .topLeft version makes glitches less noticeable for normal UIs,
        // while .scaleAxesIndependently matches what MTKView does and makes them very noticeable
        // self.layerContentsPlacement = .topLeft
        self.layerContentsPlacement = .scaleAxesIndependently
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func makeBackingLayer() -> CALayer {
        metalLayer = CAMetalLayer()
        metalLayer.pixelFormat = .bgra8Unorm
        metalLayer.device = renderer.device
        metalLayer.delegate = self
        // If you're using the strategy of .topLeft placement and not presenting with transaction
        // to just make the glitches less visible instead of eliminating them, it can help to make
        // the background color the same as the background of your app, so the glitch artifacts
        // (solid color bands at the edge of the window) are less visible.
        // metalLayer.backgroundColor = CGColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
        metalLayer.allowsNextDrawableTimeout = false
        // these properties are crucial to resizing working
        metalLayer.autoresizingMask = CAAutoresizingMask(arrayLiteral: [.layerHeightSizable, .layerWidthSizable])
        metalLayer.needsDisplayOnBoundsChange = true
        metalLayer.presentsWithTransaction = true
        metalLayer.framebufferOnly = false
        return metalLayer
    }

    override func setFrameSize(_ newSize: NSSize) {
        super.setFrameSize(newSize)
        self.size = newSize
        renderer.viewportSize.x = UInt32(newSize.width)
        renderer.viewportSize.y = UInt32(newSize.height)
        // the conversion below is necessary for high DPI drawing
        metalLayer.drawableSize = convertToBacking(newSize)
        self.viewDidChangeBackingProperties()
    }

    var size: CGSize = .zero

    // This will hopefully be called if the window moves between monitors of
    // different DPIs but I haven't tested this part
    override func viewDidChangeBackingProperties() {
        guard let window = self.window else { return }
        // This is necessary to render correctly on retina displays with the topLeft placement policy
        metalLayer.contentsScale = window.backingScaleFactor
    }

    func display(_ layer: CALayer) {
        if let drawable = metalLayer.nextDrawable(),
           let commandBuffer = commandQueue.makeCommandBuffer() {
            let passDescriptor = MTLRenderPassDescriptor()
            let colorAttachment = passDescriptor.colorAttachments[0]!
            colorAttachment.texture = drawable.texture
            colorAttachment.loadAction = .clear
            colorAttachment.storeAction = .store
            colorAttachment.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
            if let outputImage = self.ciImage {
                let xscale = self.size.width / outputImage.extent.width
                let yscale = self.size.height / outputImage.extent.height
                let scale = min(xscale, yscale)
                if let scaledImage = self.ciMgr!.scaleTransformFilter(outputImage, scale: scale, aspectRatio: 1),
                   let processed = self.showEdits ? self.ciMgr!.processImage(inputImage: scaledImage) : scaledImage {
                    let x = self.size.width/2 - processed.extent.width/2
                    let y = self.size.height/2 - processed.extent.height/2
                    context.render(processed,
                                   to: drawable.texture,
                                   commandBuffer: commandBuffer,
                                   bounds: CGRect(x: -x, y: -y, width: self.size.width, height: self.size.height),
                                   colorSpace: colorSpace)
                }
            } else {
                print("Image is nil")
            }
            commandBuffer.commit()
            commandBuffer.waitUntilScheduled()
            drawable.present()
        }
    }
}
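For context, wiring this view up from the managing view controller might look roughly like this (a sketch; scrollView and imageURL are placeholders for the project's own objects):
let metalView = ImageMetalView(frame: NSRect(x: 0, y: 0, width: 800, height: 600))
scrollView.documentView?.addSubview(metalView)

metalView.fileUrl = imageURL           // loads the CIImage and triggers a redraw
metalView.showEdits = true             // run the CIManager filter chain when drawing
metalView.adjustmentsChanged.toggle()  // redraw after a filter parameter changes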

Getting masked layer as UIImage on Swift on top of UIImageView

I'm trying to get the UIImage of the mask that I applied to a UIImageView.
I'm adding the mask using UIBezierPath and want the actual masked layer as UIImage, not the whole image. Think of it as a crop feature.
I'm cropping the image using:
func cropImage() {
    shapeLayer.fillColor = UIColor.black.cgColor
    viewSource.imageView.layer.mask = shapeLayer
    viewSource.imageView.layer.masksToBounds = true

    UIGraphicsBeginImageContextWithOptions(viewSource.imageView.bounds.size, false, 1)
    viewSource.imageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    self.completionObservable.onNext(newImage)
}
This eventually gives me the masked image on top of the old dimensions (the initial imageView width and height). But I want to have only the masked image, excluding the white background around it.
I know what you mean now. Here is the answer: just update the size of the image context.
UIGraphicsBeginImageContextWithOptions((shapeLayer.path?.boundingBoxOfPath)!.size, false, 1)
If that isn't enough, you can try a CIImage pipeline to achieve it.
let context = CIContext()
let m1 = newImage?.cgImage
let m = CIImage.init(cgImage: m1!)
let bounds = imageView.layer.bounds
let cgImage = context.createCGImage(m, from: CGRect.init(x: 0, y: bounds.size.height, width: bounds.size.width, height: bounds.size.height))
let newUIImage = UIImage.init(cgImage: cgImage!)
You may need to adjust the transform.
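One detail worth adding (my own sketch, not part of the original answer): when the context is sized to the path's bounding box, the layer also has to be translated by the box's origin before rendering, otherwise the cropped region can come out offset. Using the same shapeLayer and imageView as the question:
func croppedMaskedImage() -> UIImage? {
    guard let box = shapeLayer.path?.boundingBoxOfPath else { return nil }

    UIGraphicsBeginImageContextWithOptions(box.size, false, 1)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }

    // Shift the drawing so the path's bounding box lands at the context origin.
    ctx.translateBy(x: -box.origin.x, y: -box.origin.y)
    viewSource.imageView.layer.render(in: ctx)

    return UIGraphicsGetImageFromCurrentImageContext()
}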

Swift 4 ImageScrollView resizing visible cells in UITableView

I am using the ImageScrollView cocoa pod v1.5 inside a UITableViewCell (Swift 4).
I am downloading images from Firestore using SDWebImage (I also tried Kingfisher, with no change in my issue). My images are 700x700 and are displayed in an ImageScrollView that, depending on the device, is about 400x100. I have set the ImageScrollView imageContentMode to .widthFill. I rotate the image to get it into the form I want for the table view; I use it in its regular orientation in other places.
The first time my cells are shown, the images are displayed correctly. If I go back to the previous page and then show the same results in the table again, the visible cells no longer have their images fitting correctly with regard to width; they are now too wide. If I scroll down, hiding those cells, all new cells are displayed properly, and when I scroll back up the previously incorrect cells now display correctly. This happens in the simulator and on an actual phone.
Here are the important parts of my UITableViewCell class :
class MyTableCell: UITableViewCell {

    var ski: Ski!
    var skiImageView: UIImageView = UIImageView()

    @IBOutlet weak var skiImageScrollView: ImageScrollView!

    func configureCell(ski: Ski) {
        self.ski = ski
        let imageUrl = ski.imageUrl!
        let url = URL(string: imageUrl)

        skiImageView.sd_setImage(with: url, placeholderImage: placeholderImage,
                                 options: [.retryFailed, .continueInBackground],
                                 completed: { (image, error, cacheType, url) in
            if (error != nil) {
                print("ConfigureCell Error : \(error!)")
                return
            }
            let rotatedImage = self.imageRotatedByDegrees(oldImage: image!, deg: 90.0)
            self.skiImageScrollView.display(image: rotatedImage)
            self.skiImageScrollView.imageContentMode = .widthFill
        })
    }
    func imageRotatedByDegrees(oldImage: UIImage, deg degrees: CGFloat) -> UIImage {
        // Calculate the size of the rotated view's containing box for our drawing space
        let rotatedViewBox: UIView = UIView(frame: CGRect(x: 0, y: 0, width: oldImage.size.width, height: oldImage.size.height))
        let t: CGAffineTransform = CGAffineTransform(rotationAngle: degrees * CGFloat.pi / 180)
        rotatedViewBox.transform = t
        let rotatedSize: CGSize = rotatedViewBox.frame.size

        // Create the bitmap context
        UIGraphicsBeginImageContext(rotatedSize)
        let bitmap: CGContext = UIGraphicsGetCurrentContext()!

        // Move the origin to the middle of the image so we will rotate and scale around the center.
        bitmap.translateBy(x: rotatedSize.width / 2, y: rotatedSize.height / 2)

        // Rotate the image context
        bitmap.rotate(by: (degrees * CGFloat.pi / 180))

        // Now, draw the rotated/scaled image into the context
        bitmap.scaleBy(x: 1.0, y: -1.0)
        bitmap.draw(oldImage.cgImage!, in: CGRect(x: -oldImage.size.width / 2, y: -oldImage.size.height / 2, width: oldImage.size.width, height: oldImage.size.height))

        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
Another strange thing: if I slap the back of my phone or drop it on my desk, the images jump down so that only parts of the top of my image are visible in the ImageScrollView, yet the ImageScrollView zoom callbacks are not triggered, so I have no idea what is happening there either.
You might try the following:
imageScrollView.imageContentMode = .aspectFit
before
self.skiImageScrollView.display(image: rotatedImage)
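In the question's configureCell completion handler, that reordering would look roughly like this (a sketch; .aspectFit is the mode the answer suggests, substitute .widthFill if that is the behaviour you want):
skiImageView.sd_setImage(with: url, placeholderImage: placeholderImage,
                         options: [.retryFailed, .continueInBackground]) { image, error, _, _ in
    guard let image = image, error == nil else { return }
    let rotatedImage = self.imageRotatedByDegrees(oldImage: image, deg: 90.0)
    // Set the content mode before displaying, so the initial zoom is computed with the right mode.
    self.skiImageScrollView.imageContentMode = .aspectFit
    self.skiImageScrollView.display(image: rotatedImage)
}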

Image shown in imageView loses aspect ratio and becomes squished when saved to camera roll

I'm making an image app that can put simple frames over images loaded into the imageView and save them as a new image, for practice, because I'm very new to Swift and trying to learn. These screenshots show the frame hidden because the problem is with the image loaded behind the frame.
My rep isn't high enough to post images, but the specific issue is: the image shown in the imageView loses its aspect ratio and becomes squished widthwise when saved to the camera roll.
I have my imageView's constraints set to maintain a specific aspect ratio on any sized device, so it grows and shrinks accordingly. I also have its content mode set to aspect fill via IB.
The aspect fill works exactly how I'd expect until I save the image. When I hit save, the image inside the image view instantly squishes widthwise and loses its aspect ratio.
I import the image to the imageView with this:
func importPicture() {
    let picker = UIImagePickerController()
    picker.allowsEditing = true
    picker.delegate = self
    present(picker, animated: true)
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    guard let image = info[UIImagePickerControllerEditedImage] as? UIImage else { return }
    dismiss(animated: true)
    currentImage = image
    unchangedImage = image
    self.imageView.image = currentImage
}
Then I draw:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))

    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        let frame = UIImage(named: "5x4frame")
        frame?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }

    imageView.image = img
}
This was the only way I could think to do it to allow the image to be drawn while also having the image view dynamic.
This is the save function I'm using; I have this function tied to a save button's outlet from IB.
UIImageWriteToSavedPhotosAlbum(img, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}
thanks for any and all advice
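As a sketch of the likely cause (my addition, not an answer from the thread): drawImagesAndText draws the picked image into the full image view bounds, which stretches it whenever the aspect ratios differ; computing an aspect-fill rect first, to match the view's aspect-fill content mode, would avoid the squish:
// Sketch: draw the background image aspect-filled into the renderer
// instead of stretching it to the image view's bounds.
func aspectFillRect(for imageSize: CGSize, in bounds: CGRect) -> CGRect {
    let scale = max(bounds.width / imageSize.width, bounds.height / imageSize.height)
    let scaledSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (bounds.width - scaledSize.width) / 2,
                         y: (bounds.height - scaledSize.height) / 2)
    return CGRect(origin: origin, size: scaledSize)
}

// Inside renderer.image { ctx in ... }:
// bgImage?.draw(in: aspectFillRect(for: bgImage!.size, in: imageView.bounds))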

How can I resize a UIImage as it becomes more distorted?

I've got a dynamically distorted UIImage appearing in a UIImageView (basically updating the UIImageView's image with a function). Every time I try to crop the image as it grows, based on the distortion and touch response, I get a fatal error: trying to unwrap an optional that comes back nil.
Any ideas?
func pincushionDistortionWithAmount(amount: CGFloat, position: CGPoint) {
    //1 Remove all subviews
    for view in subviews {
        view.removeFromSuperview()
    }

    //2
    let beginImage = CIImage(image: pinImageView.image!)

    // 3
    let filter: CIFilter = CIFilter(name: "CIBumpDistortion")!
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: position.x, y: position.y), forKey: "inputCenter")
    filter.setValue(75, forKey: "inputRadius")
    filter.setValue(-1.0 * amount, forKey: "inputScale")

    // 4
    let newImage: UIImage = UIImage(ciImage: filter.outputImage!)

    let imageViewWidth = pinImageView.frame.size.width
    let imageViewHeight = pinImageView.frame.size.height
    let imageWidth = newImage.size.width
    let imageHeight = newImage.size.height

    if imageWidth > imageViewWidth {
        print("Image is wider than ImageView")
        //Code to crop image to fit in imageView
    }

    if imageHeight > imageViewHeight {
        print("Image is taller than ImageView")
        //Code to crop image to fit in imageView
    }

    pinImageView.image = newImage
    addSubview(pinImageView)
}
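As a sketch of one way around the crash and the sizing issue (my addition, reusing the question's names): crop the filter output back to the original image's extent and render it through a CIContext, so the result is a bitmap-backed UIImage with a well-defined size instead of one wrapping a CIImage whose extent no longer matches the view:
// Sketch: replace step 4 above with an explicit crop and render.
let ciContext = CIContext(options: nil)
if let beginImage = beginImage,
   let output = filter.outputImage?.cropped(to: beginImage.extent),
   let cgImage = ciContext.createCGImage(output, from: output.extent) {
    pinImageView.image = UIImage(cgImage: cgImage)
}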