I have an IOSurface-backed CVPixelBuffer that is getting updated from an outside source at 30fps. I want to render a preview of the image data in an NSView -- what's the best way for me to do that?
I can directly set the .contents of a CALayer on the view, but that only updates the first time my view updates (or if, say, I resize the view). I've been poring over the docs but I can't figure out the correct invocation of needsDisplay on the layer or view to let the view infrastructure know to refresh itself, especially when updates are coming from outside the view.
Ideally I'd just bind the IOSurface to my layer and any changes I make to it would be propagated, but I'm not sure if that's possible.
class VideoPreviewController: NSViewController, VideoFeedConsumer {
let customLayer : CALayer = CALayer()
override func viewDidLoad() {
super.viewDidLoad()
// Do view setup here.
print("Loaded our video preview")
view.layer?.addSublayer(customLayer)
customLayer.frame = view.frame
// register our view with the browser service
VideoFeedBrowser.instance.registerConsumer(self)
}
override func viewWillDisappear() {
// deregister our view from the video feed
VideoFeedBrowser.instance.deregisterConsumer(self)
super.viewWillDisappear()
}
// This callback gets called at 30fps whenever the pixelbuffer is updated
@objc func updateFrame(pixelBuffer: CVPixelBuffer) {
guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
print("pixelbuffer isn't IOsurface backed! noooooo!")
return;
}
// Try and tell the view to redraw itself with new contents?
// These methods don't work
//self.view.setNeedsDisplay(self.view.visibleRect)
//self.customLayer.setNeedsDisplay()
self.customLayer.contents = surface
}
}
Here's my attempt at a scaling version that's NSView- rather than NSViewController-based, which also doesn't update correctly (or scale correctly, for that matter):
class VideoPreviewThumbnail: NSView, VideoFeedConsumer {
required init?(coder decoder: NSCoder) {
super.init(coder: decoder)
self.wantsLayer = true
// register our view with the browser service
VideoFeedBrowser.instance.registerConsumer(self)
}
override init(frame frameRect: NSRect) {
super.init(frame: frameRect)
self.wantsLayer = true
// register our view with the browser service
VideoFeedBrowser.instance.registerConsumer(self)
}
deinit{
VideoFeedBrowser.instance.deregisterConsumer(self)
}
override func updateLayer() {
// Do I need to put something here?
print("update layer")
}
@objc
func updateFrame(pixelBuffer: CVPixelBuffer) {
guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
print("pixelbuffer isn't IOsurface backed! noooooo!")
return;
}
self.layer?.contents = surface
self.layer?.transform = CATransform3DMakeScale(
self.frame.width / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
self.frame.height / CGFloat(CVPixelBufferGetHeight(pixelBuffer)),
CGFloat(1))
}
}
What am I missing?
Maybe I'm wrong, but I think you are updating your NSView on a background thread. (I suppose the callback to updateFrame happens on a background thread.)
If I'm right, when you want to update the NSView, convert your pixelBuffer to whatever you want (an NSImage?), and then dispatch it on the main thread.
Pseudocode (I don't work often with CVPixelBuffer so I'm not sure this is the right way to convert to an NSImage)
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let context = CIContext(options: nil)
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
// createCGImage can fail, so guard the optional result
guard let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: width, height: height)) else { return }
let nsImage = NSImage(cgImage: cgImage, size: CGSize(width: width, height: height))
DispatchQueue.main.async {
    // assign the NSImage to your NSView here
}
Another catch: I did some tests, and it seems that you cannot assign an IOSurface directly to the contents of a CALayer.
I tried with this:
let textureImageWidth = 1024
let textureImageHeight = 1024
let macPixelFormatString = "ARGB"
var macPixelFormat: UInt32 = 0
for c in macPixelFormatString.utf8.reversed() {
macPixelFormat *= 256
macPixelFormat += UInt32(c)
}
let ioSurface = IOSurfaceCreate([kIOSurfaceWidth: textureImageWidth,
kIOSurfaceHeight: textureImageHeight,
kIOSurfaceBytesPerElement: 4,
kIOSurfaceBytesPerRow: textureImageWidth * 4,
kIOSurfaceAllocSize: textureImageWidth * textureImageHeight * 4,
kIOSurfacePixelFormat: macPixelFormat] as CFDictionary)!
IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
v1?.layer?.contents = ioSurface
Where v1 is my view. No effect.
Even with a CIImage there's no effect (only the last few lines change):
IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
v1?.layer?.contents = test
If I create a CGImage, it works:
IOSurfaceLock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let test = CIImage(ioSurface: ioSurface)
IOSurfaceUnlock(ioSurface, IOSurfaceLockOptions.readOnly, nil)
let context = CIContext.init()
let img = context.createCGImage(test, from: test.extent)
v1?.layer?.contents = img
I encountered this problem myself, and the solution is to double-buffer the IOSurface source: use two IOSurface objects instead of one, render into the current surface, set that surface as the layer contents, then on the next rendering pass render into the alternate (back/front) surface and swap.
It would appear that setting CALayer.contents twice to the same CVPixelBufferRef has no effect. However, if you alternate between two IOSurfaceRefs it works wonderfully.
It may also be possible to invalidate the layer contents by setting it to nil and then resetting it. I did not try that case, but I am using the double-buffer technique, roughly as sketched below.
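A minimal sketch of that idea in Swift, assuming you already have two surfaces and some way of filling them (frontSurface, backSurface, and renderNextFrame(into:) below are placeholders, not real API):
// Two pre-created IOSurfaces, filled alternately (hypothetical setup)
var surfaces: [IOSurface] = [frontSurface, backSurface]
var currentIndex = 0

func pushFrame() {
    let surface = surfaces[currentIndex]
    renderNextFrame(into: surface)          // however you produce this frame's pixels
    CATransaction.begin()
    CATransaction.setDisableActions(true)   // avoid implicit animations on contents changes
    customLayer.contents = surface          // alternating surfaces makes Core Animation pick up the new frame
    CATransaction.commit()
    currentIndex = (currentIndex + 1) % surfaces.count   // swap front/back for the next pass
}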
If you have some IBActions that update it, create an observed variable with a didSet block, and whenever the IBAction is triggered, change its value. Remember to put the code you want to run on each update inside that block.
I'd suggest making the variable an Int, giving it a default value of 0, and adding 1 to it every time it updates.
For the part where you ask about showing the image data on an NSView, you can swap the NSView for an NSImageView, which does the job.
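A rough sketch of that pattern (the property and outlet names here are made up for illustration):
// Hypothetical trigger property: bumping it re-runs the refresh code in didSet
var refreshCounter: Int = 0 {
    didSet {
        // put whatever should run on each update here,
        // e.g. handing the latest frame to an NSImageView
        previewImageView.image = latestFrameImage
    }
}

@IBAction func frameArrived(_ sender: Any) {
    refreshCounter += 1   // triggering the action bumps the counter, which fires didSet
}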
You need to convert the pixel buffer to a CGImage and assign it to the layer so that you can update the main view's layer.
Please try this code:
@objc
func updateFrame(pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }
    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else {
        print("pixel buffer has no base address")
        return
    }
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Wrap the pixel buffer's memory in a bitmap context and snapshot it as a CGImage
    guard let cgContext = CGContext(data: baseAddress,
                                    width: width,
                                    height: height,
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                    space: colorSpace,
                                    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue),
          let cgImage = cgContext.makeImage() else {
        return
    }
    // Assign the snapshot to the layer contents
    self.layer?.contents = cgImage
}
Related
I'm stuck with SwiftUI and Metal, to the point of being about to give up.
I got this example from https://developer.apple.com/forums/thread/119112?answerId=654964022#654964022 :
import MetalKit
struct MetalView: NSViewRepresentable {
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
func makeNSView(context: NSViewRepresentableContext<MetalView>) -> MTKView {
let mtkView = MTKView()
mtkView.delegate = context.coordinator
mtkView.preferredFramesPerSecond = 60
mtkView.enableSetNeedsDisplay = true
if let metalDevice = MTLCreateSystemDefaultDevice() {
mtkView.device = metalDevice
}
mtkView.framebufferOnly = false
mtkView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
mtkView.drawableSize = mtkView.frame.size
mtkView.enableSetNeedsDisplay = true
return mtkView
}
func updateNSView(_ nsView: MTKView, context: NSViewRepresentableContext<MetalView>) {
}
class Coordinator : NSObject, MTKViewDelegate {
var parent: MetalView
var metalDevice: MTLDevice!
var metalCommandQueue: MTLCommandQueue!
init(_ parent: MetalView) {
self.parent = parent
if let metalDevice = MTLCreateSystemDefaultDevice() {
self.metalDevice = metalDevice
}
self.metalCommandQueue = metalDevice.makeCommandQueue()!
super.init()
}
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
}
func draw(in view: MTKView) {
guard let drawable = view.currentDrawable else {
return
}
let commandBuffer = metalCommandQueue.makeCommandBuffer()
let rpd = view.currentRenderPassDescriptor
rpd?.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1)
rpd?.colorAttachments[0].loadAction = .clear
rpd?.colorAttachments[0].storeAction = .store
let re = commandBuffer?.makeRenderCommandEncoder(descriptor: rpd!)
re?.endEncoding()
commandBuffer?.present(drawable)
commandBuffer?.commit()
}
}
}
... but I can't get my head around how to use this MetalView(), which does seem to work when I call it from a SwiftUI view, to display data. I want to use it to display a CIImage which will be filtered and manipulated with CIFilters...
Can someone please point me in the right direction on how to tell this view to display something? I think I need it to display the contents of a texture, but I've tried for countless hours and ended up starting from scratch countless more times...
This is how I run my image filters now, but it results in very slow sliders, which is why I decided to try learning about Metal... but it's been really time-consuming and frustrating due to the lack of documentation...
func ciExposure (inputImage: CIImage, inputEV: Double) -> CIImage {
let filter = CIFilter(name: "CIExposureAdjust")!
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(inputEV, forKey: kCIInputEVKey)
return filter.outputImage!
}
I think I need to take that filter.outputImage and pass it on to the MetalView somehow?
Any help is really, really appreciated...
Apple's WWDC 2022 contained a tutorial/video entitled "Display EDR Content with Core Image, Metal, and SwiftUI" which describes how to blend Core Image with Metal and SwiftUI. It points to some new sample code entitled "Generating an Animation with a Core Image Render Destination".
This sample project is very CoreImage-centric (which should suit your purposes nicely), but I wish Apple would post more sample-code examples showing Metal integrated with SwiftUI.
I have a small Core Image + SwiftUI sample project on Github that might be a good starting point for you. It doesn't cover a lot yet, but it demonstrates how to display filtered camera frames already.
Especially check out the draw function of the view. It's used to render a CIImage into the MTKView (you can do the same in your delegate's draw function).
Ok so this does the trick for me:
func draw(in view: MTKView) {
guard let drawable = view.currentDrawable else {
return
}
let colorSpace = CGColorSpaceCreateDeviceRGB()
let commandBuffer = metalCommandQueue.makeCommandBuffer()
let rpd = view.currentRenderPassDescriptor
rpd?.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1)
rpd?.colorAttachments[0].loadAction = .clear
rpd?.colorAttachments[0].storeAction = .store
let re = commandBuffer?.makeRenderCommandEncoder(descriptor: rpd!)
re?.endEncoding()
context.render((AppState.shared.rawImage ?? AppState.shared.rawImageOriginal)!,
to: drawable.texture,
commandBuffer: commandBuffer,
bounds: AppState.shared.rawImageOriginal!.extent,
colorSpace: colorSpace)
commandBuffer?.present(drawable)
commandBuffer?.commit()
}
AppState.shared.rawImage is the CIImage I got from my filtering function.
The context is made somewhere else but should be:
context = CIContext(mtlDevice: metalDevice)
Next up is adding the centering part of the code provided by Frank Schlegel.
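For reference, a generic scale-to-fit-and-center pass could look roughly like this; this is only a sketch (not taken from that project) and it reuses the context, drawable, commandBuffer, and colorSpace from the draw function above:
// Scale the CIImage to fit the drawable while preserving aspect ratio, then center it
let image = (AppState.shared.rawImage ?? AppState.shared.rawImageOriginal)!
let drawableSize = view.drawableSize
let scale = min(drawableSize.width / image.extent.width,
                drawableSize.height / image.extent.height)
let scaled = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
let offsetX = (drawableSize.width - scaled.extent.width) / 2 - scaled.extent.origin.x
let offsetY = (drawableSize.height - scaled.extent.height) / 2 - scaled.extent.origin.y
let centered = scaled.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
context.render(centered,
               to: drawable.texture,
               commandBuffer: commandBuffer,
               bounds: CGRect(origin: .zero, size: drawableSize),
               colorSpace: colorSpace)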
Hey, I have been struggling with this for a couple of days now and can't seem to find any documentation outside of the standard grid views for MSStickerView sizes.
I am working on an app that creates MSStickerViews dynamically. It does this by converting a UIView into a UIImage, saving it to disk, then passing the URL to MSSticker before creating the MSStickerView; the frame of this view is then set to the size of the original view.
The problem I have is that when I drag the MSStickerView into the messages window, it shrinks while being dragged, then, when dropped in the messages window, changes to a larger size. I have no idea how to control the size while dragging or the final image size.
Here's my code to create an image from a view:
extension UIView {
func imageFromView() -> UIImage? {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
defer { UIGraphicsEndImageContext() }
if let context = UIGraphicsGetCurrentContext() {
self.layer.render(in: context)
let image = UIGraphicsGetImageFromCurrentImageContext()
return image
}
return nil
}
}
And here's the code to save this to disk
extension UIImage {
func savedPath(name: String) -> URL{
let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
let filePath = "\(paths[0])/\(name).png"
let url = URL(fileURLWithPath: filePath)
// Save image.
if let data = self.pngData() {
do {
try data.write(to: url)
} catch let error as NSError {
}
}
return url
}
}
Finally, here is the code that converts the file path to a sticker:
if let stickerImage = backgroundBox.imageFromView() {
let url = stickerImage.savedPath(name: textBox.text ?? "StickerMCSticker")
if let msSticker = try? MSSticker(contentsOfFileURL: url, localizedDescription: "") {
var newFrame = self.backgroundBox.frame
newFrame.size.width = newFrame.size.width
newFrame.size.height = newFrame.size.height
let stickerView = MSStickerView(frame: newFrame, sticker: msSticker)
self.view.addSubview(stickerView)
print("** sticker frame \(stickerView.frame)")
self.sticker = stickerView
}
}
I wondered at first if there was something I needed to do regarding retina sizes, but adding @2x to the file name just breaks the image, so I'm stuck on this. The WWDC sessions seem to show stickers being created from file paths without changing size in the transition between drag and drop. Any help would be appreciated!
I eventually fixed this issue by getting the frame from the view I was copying and then calling sizeToFit():
init(sticker: MSSticker, size: CGSize) {
let stickerFrame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
self.sticker = MSStickerView(frame: stickerFrame, sticker: sticker)
self.sticker.sizeToFit()
super.init(nibName: nil, bundle: nil)
}
This was needed because the MSStickerView was not setting the correct size on its own. Essentially, the sticker size in my view did not match the actual size of the MSSticker, so the moment the drag was initiated, the real sticker size took over (which was different from the frame size / Auto Layout I was applying in my view).
I'm using live camera output to update a CIImage on an MTKView. My main issue is that I have a large, negative performance difference where an older iPhone gets better CPU performance than a newer one, despite the fact that all the settings I've come across are the same.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection){
/*
Create CIImage from camera.
Here I save a few percent of CPU by using a function
to convert a sampleBuffer to a Metal texture, but
whether I use this or the commented out code
(without captureOutputMTLOptions) does not have
significant impact.
*/
guard let texture:MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else{
return
}
var cameraImage:CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!
var transform: CGAffineTransform = .identity
transform = transform.scaledBy(x: 1, y: -1)
transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
cameraImage = cameraImage.transformed(by: transform)
/*
// old non-Metal way of getting the ciimage from the cvPixelBuffer
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else
{
return
}
var cameraImage:CIImage = CIImage(cvPixelBuffer: pixelBuffer)
*/
var orientation = UIImage.Orientation.right
if(isFrontCamera){
orientation = UIImage.Orientation.leftMirrored
}
// apply filter to camera image
if debug_applyLiveFilter {
cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes:currentCameraRes!)
}
DispatchQueue.main.async(){
if debug_updateMetalView {
self.MTLCaptureView!.image = cameraImage
}
}
}
Below is a chart of the results for both phones, toggling the different combinations of bools discussed above:
Even without the Metal view's CIImage updating and with no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's, which isn't significant overhead, but it makes me suspect that the camera capture somehow differs between the devices.
My AVCaptureSession's preset is set identically between both phones
(AVCaptureSession.Preset.hd1280x720)
The CIImage created from captureOutput is the same size (extent)
between both phones.
Are there any AVCaptureDevice settings, including activeFormat properties, that I need to set manually to make the two phones behave the same?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for:AVMediaType.video) {
do {
try captureDevice.lockForConfiguration()
captureDevice.isSubjectAreaChangeMonitoringEnabled = true
captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
captureDevice.unlockForConfiguration()
} catch {
// Handle errors here
print("There was an error focusing the device's camera")
}
}
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView
{
let colorSpace = CGColorSpaceCreateDeviceRGB()
var textureCache: CVMetalTextureCache?
var sourceTexture: MTLTexture!
lazy var commandQueue: MTLCommandQueue =
{
[unowned self] in
return self.device!.makeCommandQueue()
}()!
lazy var ciContext: CIContext =
{
[unowned self] in
return CIContext(mtlDevice: self.device!)
}()
override init(frame frameRect: CGRect, device: MTLDevice?)
{
super.init(frame: frameRect,
device: device ?? MTLCreateSystemDefaultDevice())
if super.device == nil
{
fatalError("Device doesn't support Metal")
}
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)
framebufferOnly = false
enableSetNeedsDisplay = true
isPaused = true
preferredFramesPerSecond = 30
}
required init(coder: NSCoder)
{
fatalError("init(coder:) has not been implemented")
}
// The image to display
var image: CIImage?
{
didSet
{
setNeedsDisplay()
}
}
override func draw(_ rect: CGRect)
{
guard var
image = image,
let targetTexture:MTLTexture = currentDrawable?.texture else
{
return
}
let commandBuffer = commandQueue.makeCommandBuffer()
let customDrawableSize:CGSize = drawableSize
let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)
let originX = image.extent.origin.x
let originY = image.extent.origin.y
let scaleX = customDrawableSize.width / image.extent.width
let scaleY = customDrawableSize.height / image.extent.height
let scale = min(scaleX*IVScaleFactor, scaleY*IVScaleFactor)
image = image
.transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
ciContext.render(image,
to: targetTexture,
commandBuffer: commandBuffer,
bounds: bounds,
colorSpace: colorSpace)
commandBuffer?.present(currentDrawable!)
commandBuffer?.commit()
}
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are setup below:
func setupCameraAndMic(){
let backCamera = AVCaptureDevice.default(for:AVMediaType.video)
var error: NSError?
var videoInput: AVCaptureDeviceInput!
do {
videoInput = try AVCaptureDeviceInput(device: backCamera!)
} catch let error1 as NSError {
error = error1
videoInput = nil
print(error!.localizedDescription)
}
if error == nil &&
captureSession!.canAddInput(videoInput) {
guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
print("Error: could not create a texture cache")
return
}
captureSession!.addInput(videoInput)
setDeviceFrameRateForCurrentFilter(device:backCamera)
stillImageOutput = AVCapturePhotoOutput()
if captureSession!.canAddOutput(stillImageOutput!) {
captureSession!.addOutput(stillImageOutput!)
let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
videoOutput.setSampleBufferDelegate(self, queue: q)
videoOutput.videoSettings = [
kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: NSNumber(value: kCVPixelFormatType_32BGRA),
kCVPixelBufferMetalCompatibilityKey as String: true
]
videoOutput.alwaysDiscardsLateVideoFrames = true
if captureSession!.canAddOutput(videoOutput){
captureSession!.addOutput(videoOutput)
}
captureSession!.startRunning()
}
}
setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and on recording video have been left out, since my focus is the performance of the live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head: you are not comparing apples with apples. Even though you are running the 2.49 GHz A12 against the 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, there are several features of the XS's camera that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry about the lack of sources; I tried to find metrics for the CPU cost of those features, but I couldn't. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever in a smartphone.
They are selling it as the best processor as well; we don't know what would happen using the XS camera with an A9 processor, it would probably crash, but we will never know...
PS: Are your metrics for the whole processor or for the core in use? For the whole processor you also need to consider other tasks the device could be executing; for a single core, it's 21% of 200% against 39% of 600%.
I'm using an MTKView written by Simon Gladman that "exposes an image property type of 'CIImage' to simplify Metal based rendering of Core Image filters." It has been slightly altered for performance. I left out an additional scaling operation since it has nothing to do with the issue here.
Problem: When creating a composite of smaller CIImages into a larger one, they are aligned pixel perfect. MTKView's image property is set to this CIImage composite. However, there is a scale done to this image so it fits the entire MTKView which makes gaps between the joined images visible. This is done by dividing the drawableSize width/height by the CIImage's extent width/height.
This makes me wonder if something needs to be done on the CIImage side to actually join those pixels. Saving that CIImage to the camera roll shows no separation between the joined images; it's only visible when the MTKView scales up. In addition, whatever needs to be done must have virtually no impact on performance, since these image renders are happening in real time from the camera's output. (The MTKView is a preview of the effect being applied.)
Here is the MTKView that I'm using to render with:
class MetalImageView: MTKView
{
let colorSpace = CGColorSpaceCreateDeviceRGB()
var textureCache: CVMetalTextureCache?
var sourceTexture: MTLTexture!
lazy var commandQueue: MTLCommandQueue =
{
[unowned self] in
return self.device!.makeCommandQueue()
}()!
lazy var ciContext: CIContext =
{
[unowned self] in
//cacheIntermediates
return CIContext(mtlDevice: self.device!, options:[.cacheIntermediates:false])
//return CIContext(mtlDevice: self.device!)
}()
override init(frame frameRect: CGRect, device: MTLDevice?)
{
super.init(frame: frameRect,
device: device ?? MTLCreateSystemDefaultDevice())
if super.device == nil
{
fatalError("Device doesn't support Metal")
}
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)
framebufferOnly = false
enableSetNeedsDisplay = true
isPaused = true
preferredFramesPerSecond = 30
}
required init(coder: NSCoder)
{
fatalError("init(coder:) has not been implemented")
}
/// The image to display
var image: CIImage?
{
didSet
{
//renderImage()
//draw()
setNeedsDisplay()
}
}
override func draw(_ rect: CGRect)
{
guard let
image = image,
let targetTexture = currentDrawable?.texture else
{
return
}
let commandBuffer = commandQueue.makeCommandBuffer()
let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)
let originX = image.extent.origin.x
let originY = image.extent.origin.y
let scaleX = drawableSize.width / image.extent.width
let scaleY = drawableSize.height / image.extent.height
let scale = min(scaleX, scaleY)
let scaledImage = image
.transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
ciContext.render(scaledImage,
to: targetTexture,
commandBuffer: commandBuffer,
bounds: bounds,
colorSpace: colorSpace)
commandBuffer?.present(currentDrawable!)
commandBuffer?.commit()
}
}
When compositing the images, I have a full-size camera image as the background, just as a foundation for what the size should be. Then I duplicate half of it halfway across the width or height of the image using the CISourceAtopCompositing CIFilter and translate it with a CGAffineTransform. I also give it a negative scale to add a mirror effect:
var scaledImageTransform = CGAffineTransform.identity
scaledImageTransform = scaledImageTransform.translatedBy(x:0, y:sourceCore.extent.height)
scaledImageTransform = scaledImageTransform.scaledBy(x:1.0, y:-1.0)
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
parameters: [kCIInputImageKey: alphaMaskBlend2!,
kCIInputBackgroundImageKey: sourceCore])
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
kCIInputBackgroundImageKey: alphaMaskBlend2!])
sourceCore is the original image that came through the camera. alphaMaskBlend2 is the final CIImage that I assign to the MTKView. The cropRect correctly crops the mirrored part of the image. In the scaled-up MTKView there is a visible gap between these two joined CIImages. What can be done to make this image display as continuous pixels, no matter how the MTKView is scaled, just like any other image does?
I was wondering how to set the radius/blur factor of iOS's new UIBlurEffectStyle.Light. I could not find anything in the documentation, but I want it to look similar to the classic UIImage+ImageEffects.h blur effect.
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
let blur = UIBlurEffect(style: UIBlurEffectStyle.Light)
let effectView = UIVisualEffectView(effect: blur)
effectView.frame = frame
addSubview(effectView)
}
Changing alpha is not a perfect solution; it does not affect blur intensity. You can set up an animation from nil to the target blur effect and manually set the time offset to get the desired blur intensity. Unfortunately, iOS will reset the animation offset when the app returns from the background.
Thankfully, there is a simple solution that works on iOS >= 10: you can use UIViewPropertyAnimator. I didn't notice any issues with using it, and it keeps the custom blur intensity when the app returns from the background. Here is how you can implement it:
class CustomIntensityVisualEffectView: UIVisualEffectView {
/// Create visual effect view with given effect and its intensity
///
/// - Parameters:
/// - effect: visual effect, eg UIBlurEffect(style: .dark)
/// - intensity: custom intensity from 0.0 (no effect) to 1.0 (full effect) using linear scale
init(effect: UIVisualEffect, intensity: CGFloat) {
super.init(effect: nil)
animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [unowned self] in self.effect = effect }
animator.fractionComplete = intensity
}
required init?(coder aDecoder: NSCoder) {
fatalError()
}
// MARK: Private
private var animator: UIViewPropertyAnimator!
}
I also created a gist: https://gist.github.com/darrarski/29a2a4515508e385c90b3ffe6f975df7
You can change the alpha of the UIVisualEffectView that you add your blur effect to.
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.Light)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.alpha = 0.5
blurEffectView.frame = self.view.bounds
self.view.addSubview(blurEffectView)
This is not a true solution, as it doesn't actually change the radius of the blur, but I have found that it gets the job done with very little work.
Although it is a hack and probably won't be accepted in the App Store, it is still possible. You have to subclass UIBlurEffect like this:
#import <objc/runtime.h>
@interface UIBlurEffect (Protected)
@property (nonatomic, readonly) id effectSettings;
@end
@interface MyBlurEffect : UIBlurEffect
@end
@implementation MyBlurEffect
+ (instancetype)effectWithStyle:(UIBlurEffectStyle)style
{
id result = [super effectWithStyle:style];
object_setClass(result, self);
return result;
}
- (id)effectSettings
{
id settings = [super effectSettings];
[settings setValue:@50 forKey:@"blurRadius"];
return settings;
}
- (id)copyWithZone:(NSZone*)zone
{
id result = [super copyWithZone:zone];
object_setClass(result, [self class]);
return result;
}
@end
Here the blur radius is set to 50. You can change 50 to any value you need.
Then just use MyBlurEffect class instead of UIBlurEffect when creating your effect for UIVisualEffectView.
We recently developed the Bluuur library to dynamically change the blur radius of a UIVisualEffectView without using any private APIs: https://github.com/ML-Works/Bluuur
It uses a paused animation of setting the effect to achieve a changing blur radius. The solution is based on this gist: https://gist.github.com/n00neimp0rtant/27829d87118d984232a4
And the main idea is:
// Freeze animation
blurView.layer.speed = 0;
blurView.effect = nil;
[UIView animateWithDuration:1.0 animations:^{
blurView.effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
}];
// Set animation progress from 0 to 1
blurView.layer.timeOffset = 0.5;
UPDATE:
Apple introduced the UIViewPropertyAnimator class in iOS 10. That's exactly what we need to animate the .effect property of UIVisualEffectView. Hopefully the community will be able to back-port this functionality to earlier iOS versions.
This is totally doable. Use CIFilter from the Core Image module to customize the blur radius. In fact, you can even achieve a blur effect with a continuously varying (aka gradient) blur radius (https://stackoverflow.com/a/51603339/3808183):
import CoreImage
let ciContext = CIContext(options: nil)
guard let inputImage = CIImage(image: yourUIImage),
let mask = CIFilter(name: "CIGaussianBlur") else { return }
mask.setValue(inputImage, forKey: kCIInputImageKey)
mask.setValue(10, forKey: kCIInputRadiusKey) // Set your blur radius here
guard let output = mask.outputImage,
let cgImage = ciContext.createCGImage(output, from: inputImage.extent) else { return }
outUIImage = UIImage(cgImage: cgImage)
I'm afraid there's no such API currently. Going by Apple's way of doing things, new functionality is usually introduced with restrictions, and capabilities are rolled out gradually. Maybe that will be possible in iOS 9, or maybe 10...
I have the ultimate solution for this question:
fileprivate final class UIVisualEffectViewInterface {
func setIntensity(effectView: UIVisualEffectView, intensity: CGFloat){
let effect = effectView.effect
effectView.effect = nil
animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [weak effectView] in effectView?.effect = effect }
animator.fractionComplete = intensity
}
private var animator: UIViewPropertyAnimator!
}
extension UIVisualEffectView{
private var key: UnsafeRawPointer? { UnsafeRawPointer(bitPattern: 16) }
private var interface: UIVisualEffectViewInterface{
if let key = key, let visualEffectViewInterface = objc_getAssociatedObject(self, key) as? UIVisualEffectViewInterface{
return visualEffectViewInterface
}
let visualEffectViewInterface = UIVisualEffectViewInterface()
if let key = key{
objc_setAssociatedObject(self, key, visualEffectViewInterface, objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
}
return visualEffectViewInterface
}
func intensity(_ value: CGFloat){
interface.setIntensity(effectView: self, intensity: value)
}
}
This idea hit me after trying the above solutions; it's a little hacky, but I got it working. Since we cannot modify the default radius, which is set to 50, we can just enlarge the view and scale it back down.
previewView.snp.makeConstraints { (make) in
make.centerX.centerY.equalTo(self.view)
make.width.height.equalTo(self.view).multipliedBy(4)
}
previewBlur.snp.makeConstraints { (make) in
make.edges.equalTo(previewView)
}
And then,
previewView.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
previewBlur.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
That gives an effective blur radius of 12.5. Hope this helps :-)
I haven't found a real solution yet.
However, you can add a little hack to make the blur mask less "blurry", like this:
let blurView = .. // create the blur view as usual
if let blurSubviews = self.blurView?.subviews {
for subview in blurSubviews {
if let filterView = NSClassFromString("_UIVisualEffectFilterView") {
if subview.isKindOfClass(filterView) {
subview.hidden = true
}
}
}
}
For iOS 11.*, in viewDidLoad():
let blurEffect = UIBlurEffect(style: .dark)
let blurEffectView = UIVisualEffectView()
view.addSubview(blurEffectView)
//always fill the view
blurEffectView.frame = self.view.bounds
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
UIView.animate(withDuration: 1) {
blurEffectView.effect = blurEffect
}
blurEffectView.pauseAnimation(delay: 0.5)
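Note that pauseAnimation(delay:) is not a standard UIKit method; the snippet above presumably relies on a small helper extension along these lines (a sketch using the usual layer speed/timeOffset trick):
extension UIView {
    // Freeze the layer's animations after `delay` seconds so the blur
    // stays at whatever intensity it has reached by then
    func pauseAnimation(delay: Double) {
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            let pausedTime = self.layer.convertTime(CACurrentMediaTime(), from: nil)
            self.layer.speed = 0.0
            self.layer.timeOffset = pausedTime
        }
    }
}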
There is an undocumented way to do this. Not necessarily recommended, as it may get your app rejected by Apple. But it does work.
if let blurEffectType = NSClassFromString("_UICustomBlurEffect") as? UIBlurEffect.Type {
let blurEffectInstance = blurEffectType.init()
// set any value you want here. 40 is quite blurred
blurEffectInstance.setValue(40, forKey: "blurRadius")
let effectView: UIVisualEffectView = UIVisualEffectView(effect: blurEffectInstance)
// Now you have your blurred visual effect view
}
This works for me.
I put the UIVisualEffectView inside a UIView before adding it to my view.
I made this function to make it easier to use; you can use it to blur any area in your view.
func addBlurArea(area: CGRect) {
let effect = UIBlurEffect(style: UIBlurEffectStyle.Dark)
let blurView = UIVisualEffectView(effect: effect)
blurView.frame = CGRect(x: 0, y: 0, width: area.width, height: area.height)
let container = UIView(frame: area)
container.alpha = 0.8
container.addSubview(blurView)
self.view.insertSubview(container, atIndex: 1)
}
For example, you can blur your entire view by calling:
addBlurArea(area: self.view.frame)
You can change Dark to your desired blur style and 0.8 to your desired alpha value.
If you want to accomplish the same behaviour as iOS Spotlight search, you just need to change the alpha value of the UIVisualEffectView (tested on the iOS 9 simulator).