How do I position an image correctly in MTKView? - swift

I am trying to implement an image editing view using MTKView and Core Image filters. I have the basics working and can see the filter applied in real time. However, the image is not positioned correctly in the view. Can someone point me in the right direction for what needs to be done to get the image to render correctly? It needs to fit the view and retain its original aspect ratio.
Here is the Metal draw function, along with the empty drawableSizeWillChange. It's probably also worth mentioning that the MTKView is a subview of another view in a scroll view and can be resized by the user. It's not clear to me how Metal handles resizing the view, but it seems that doesn't come for free.
I am also calling the draw() function from a background thread, and this appears to mostly work: I can see the filter effects as they are applied to the image with a slider. As I understand it, this should be possible.
It also seems that rendering happens in the image's coordinate space, so if the image is smaller than the MTKView, the X and Y coordinates of the render bounds have to be negative to position the image in the centre.
When the view is resized, everything goes wrong: the image suddenly becomes far too big and parts of the background are not cleared.
Also, while resizing the view the image gets stretched rather than redrawing smoothly.
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
}

public func draw(in view: MTKView) {
    if let ciImage = self.ciImage {
        if let currentDrawable = view.currentDrawable { // 1
            let commandBuffer = commandQueue.makeCommandBuffer()
            let inputImage = ciImage // 2
            exposureFilter.setValue(inputImage, forKey: kCIInputImageKey)
            exposureFilter.setValue(ev, forKey: kCIInputEVKey)

            context.render(exposureFilter.outputImage!,
                           to: currentDrawable.texture,
                           commandBuffer: commandBuffer,
                           bounds: CGRect(origin: .zero, size: view.drawableSize),
                           colorSpace: colorSpace)

            commandBuffer?.present(currentDrawable)
            commandBuffer?.commit()
        }
    }
}
As you can see, the image ends up in the bottom left.

The issue is that your CIImage, wherever it might come from, is not the same size as the view you are rendering it in. A CILanczosScaleTransform filter should help you out: calculate the scale and apply it with the filter:
let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
scaleFilter?.setValue(ciImage, forKey: kCIInputImageKey)
scaleFilter?.setValue(scale, forKey: kCIInputScaleKey)
This resolves the scale issue; I currently do not know what the most efficient approach would be to actually reposition the image.
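For illustration, here is a minimal sketch of how that scale might be computed, assuming ciImage is the source image and view is the MTKView being rendered into (those names come from the question, not from this answer):
// Sketch only: pick the smaller of the two axis ratios so the image fits the view
// without cropping (use max(...) instead for an "aspect fill" effect).
let targetSize = view.drawableSize
let xScale = targetSize.width / ciImage.extent.width
let yScale = targetSize.height / ciImage.extent.height
let scale = min(xScale, yScale)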
Further reference: https://nshipster.com/image-resizing/

The problem is your call to context.render — you are calling render with bounds: origin .zero. That’s the lower left.
Placing the drawing in the correct spot is up to you. You need to work out where the right bounds origin should be, based on the image dimensions and your drawable size, and render there. If the size is wrong, you also need to apply a scale transform first.
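For example, a rough sketch of that calculation, assuming the image has already been scaled to fit and reusing the names from the question's draw(in:) (scaledImage here stands for the scaled output image):
// Sketch only: centre the already-scaled image in the drawable.
// CIContext.render interprets `bounds` in the image's coordinate space,
// so the origin is the negative of the centring offset.
let drawableSize = view.drawableSize
let offsetX = (drawableSize.width - scaledImage.extent.width) / 2
let offsetY = (drawableSize.height - scaledImage.extent.height) / 2
context.render(scaledImage,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: CGRect(x: -offsetX, y: -offsetY,
                              width: drawableSize.width, height: drawableSize.height),
               colorSpace: colorSpace)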

Thanks to Tristan Hume's MetalTest2, I now have this working nicely in two synchronised scroll views. The basics are in the subclass below; the renderer and shaders can be found in Tristan's MetalTest2 project. This class is managed by a view controller and is a subview of the scroll view's documentView. See the image of the final result.
//
//  MetalLayerView.swift
//  MetalTest2
//
//  Created by Tristan Hume on 2019-06-19.
//  Copyright © 2019 Tristan Hume. All rights reserved.
//

import Cocoa

// Thanks to https://stackoverflow.com/questions/45375548/resizing-mtkview-scales-old-content-before-redraw
// for the recipe behind this, although I had to add presentsWithTransaction and the wait to make it glitch-free
class ImageMetalView: NSView, CALayerDelegate {
    var renderer: Renderer
    var metalLayer: CAMetalLayer!
    var commandQueue: MTLCommandQueue!
    var sourceTexture: MTLTexture!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var context: CIContext!
    var ciMgr: CIManager?
    var showEdits: Bool = false

    var ciImage: CIImage? {
        didSet {
            self.metalLayer.setNeedsDisplay()
        }
    }

    @objc dynamic var fileUrl: URL? {
        didSet {
            if let url = fileUrl {
                self.ciImage = CIImage(contentsOf: url)
            }
        }
    }

    /// Bind to this property from the viewController to receive notifications of changes to CI filter parameters
    @objc dynamic var adjustmentsChanged: Bool = false {
        didSet {
            self.metalLayer.setNeedsDisplay()
        }
    }

    override init(frame: NSRect) {
        let _device = MTLCreateSystemDefaultDevice()!
        renderer = Renderer(pixelFormat: .bgra8Unorm, device: _device)
        self.commandQueue = _device.makeCommandQueue()
        self.context = CIContext()
        self.ciMgr = CIManager(context: self.context)
        super.init(frame: frame)
        self.wantsLayer = true
        self.layerContentsRedrawPolicy = .duringViewResize

        // This property only matters in the case of a rendering glitch, which shouldn't happen anymore.
        // The .topLeft version makes glitches less noticeable for normal UIs,
        // while .scaleAxesIndependently matches what MTKView does and makes them very noticeable.
        // self.layerContentsPlacement = .topLeft
        self.layerContentsPlacement = .scaleAxesIndependently
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func makeBackingLayer() -> CALayer {
        metalLayer = CAMetalLayer()
        metalLayer.pixelFormat = .bgra8Unorm
        metalLayer.device = renderer.device
        metalLayer.delegate = self

        // If you're using the strategy of .topLeft placement and not presenting with transaction
        // to just make the glitches less visible instead of eliminating them, it can help to make
        // the background color the same as the background of your app, so the glitch artifacts
        // (solid color bands at the edge of the window) are less visible.
        // metalLayer.backgroundColor = CGColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)

        metalLayer.allowsNextDrawableTimeout = false

        // These properties are crucial to resizing working.
        metalLayer.autoresizingMask = CAAutoresizingMask(arrayLiteral: [.layerHeightSizable, .layerWidthSizable])
        metalLayer.needsDisplayOnBoundsChange = true
        metalLayer.presentsWithTransaction = true
        metalLayer.framebufferOnly = false

        return metalLayer
    }

    override func setFrameSize(_ newSize: NSSize) {
        super.setFrameSize(newSize)
        self.size = newSize
        renderer.viewportSize.x = UInt32(newSize.width)
        renderer.viewportSize.y = UInt32(newSize.height)
        // The conversion below is necessary for high DPI drawing.
        metalLayer.drawableSize = convertToBacking(newSize)
        self.viewDidChangeBackingProperties()
    }

    var size: CGSize = .zero

    // This will hopefully be called if the window moves between monitors of
    // different DPIs, but I haven't tested this part.
    override func viewDidChangeBackingProperties() {
        guard let window = self.window else { return }
        // This is necessary to render correctly on retina displays with the topLeft placement policy.
        metalLayer.contentsScale = window.backingScaleFactor
    }

    func display(_ layer: CALayer) {
        if let drawable = metalLayer.nextDrawable(),
           let commandBuffer = commandQueue.makeCommandBuffer() {
            let passDescriptor = MTLRenderPassDescriptor()
            let colorAttachment = passDescriptor.colorAttachments[0]!
            colorAttachment.texture = drawable.texture
            colorAttachment.loadAction = .clear
            colorAttachment.storeAction = .store
            colorAttachment.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)

            if let outputImage = self.ciImage {
                let xscale = self.size.width / outputImage.extent.width
                let yscale = self.size.height / outputImage.extent.height
                let scale = min(xscale, yscale)

                if let scaledImage = self.ciMgr!.scaleTransformFilter(outputImage, scale: scale, aspectRatio: 1),
                   let processed = self.showEdits ? self.ciMgr!.processImage(inputImage: scaledImage) : scaledImage {
                    let x = self.size.width / 2 - processed.extent.width / 2
                    let y = self.size.height / 2 - processed.extent.height / 2
                    context.render(processed,
                                   to: drawable.texture,
                                   commandBuffer: commandBuffer,
                                   bounds: CGRect(x: -x, y: -y, width: self.size.width, height: self.size.height),
                                   colorSpace: colorSpace)
                }
            } else {
                print("Image is nil")
            }

            commandBuffer.commit()
            commandBuffer.waitUntilScheduled()
            drawable.present()
        }
    }
}
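A note on the design, plus a hypothetical usage sketch (my own addition, not part of the original answer): the centring works because CIContext.render interprets its bounds in the image's coordinate space, so display(_:) scales the image with min(xscale, yscale) to fit and then uses the negative of the centring offset as the bounds origin. Wiring the view up from a view controller might look roughly like this, assuming an NSScrollView whose documentView is already set up (scrollView and the image path are placeholders):
// Hypothetical wiring; names and the file path are placeholders.
let imageMetalView = ImageMetalView(frame: NSRect(x: 0, y: 0, width: 800, height: 600))
scrollView.documentView?.addSubview(imageMetalView)
imageMetalView.fileUrl = URL(fileURLWithPath: "/path/to/image.jpg")   // didSet loads the CIImage and triggers a redraw
imageMetalView.showEdits = true                                       // route rendering through the CI filter chain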

Related

Draw graphics and export with pixel precision with CoreGraphics

I saw a few questions here on Stack Overflow, but none of them solves my problem. What I want to do is subclass NSView and draw some shapes on it, then export/save the created graphics to a PNG file. While the drawing is quite simple, I want to be able to store the image with pixel precision; I know that drawing is done in points rather than pixels. So I override the draw() method to draw a graphic like so:
override func draw(_ dirtyRect: NSRect) {
    super.draw(dirtyRect)
    let currentContext = NSGraphicsContext.current?.cgContext
    NSColor.white.setFill()
    dirtyRect.fill()
    NSColor.green.setFill()
    NSColor.green.setStroke()
    currentContext?.beginPath()
    currentContext?.setLineWidth(1.0)
    currentContext?.setStrokeColor(NSColor.green.cgColor)
    currentContext?.move(to: CGPoint(x: 0, y: 0))
    currentContext?.addLine(to: CGPoint(x: self.frame.width, y: self.frame.height))
    currentContext?.closePath()
    currentContext?.strokePath()
}
While it looks OK on screen, what gets saved to the file is not what I expected. I set the line width to 1, but in the exported file it is 2 pixels wide. To save the image, I create an NSImage from the current view:
func getImage() -> NSImage? {
    let size = self.bounds.size
    let imageSize = NSMakeSize(size.width, size.height)
    guard let imageRepresentation = self.bitmapImageRepForCachingDisplay(in: self.bounds) else {
        return nil
    }
    imageRepresentation.size = imageSize
    self.cacheDisplay(in: self.bounds, to: imageRepresentation)
    let image = NSImage(size: imageSize)
    image.addRepresentation(imageRepresentation)
    return image
}
and this image is then saved to a file:
do {
    guard let image = self.canvasView?.getImage() else {
        return
    }
    let imageRep = image.representations.first as? NSBitmapImageRep
    let data = imageRep?.representation(using: .png, properties: [:])
    try data?.write(to: url, options: .atomic)
} catch {
    print(error.localizedDescription)
}
Do you have any tips on what I am doing wrong?
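One possible culprit, offered here only as an assumption: bitmapImageRepForCachingDisplay(in:) creates a representation at the view's backing scale, so on a Retina (2x) display a 1-point line ends up 2 pixels wide in the exported PNG. A sketch of forcing a 1:1 point-to-pixel representation instead:
// Hypothetical sketch: build a bitmap rep whose pixel size equals the view's point size,
// so a 1-point stroke comes out 1 pixel wide in the export.
func getImage1x(from view: NSView) -> NSImage? {
    let size = view.bounds.size
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(size.width),
                                     pixelsHigh: Int(size.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = size                              // 1 point == 1 pixel
    view.cacheDisplay(in: view.bounds, to: rep)  // render the view into the 1x rep
    let image = NSImage(size: size)
    image.addRepresentation(rep)
    return image
}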

Render SwiftUI view offscreen and save view as UIImage to share

I'm trying to create a share button with SwiftUI that, when pressed, can share a generated image. I've found some tutorials that can screenshot a currently displayed view and convert it to a UIImage. But I want to create a view programmatically off screen, and then save that to a UIImage that users can share with a share sheet.
import SwiftUI
import SwiftyJSON
import MapKit

struct ShareRentalView: View {
    @State private var region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 32.786038, longitude: -117.237324), span: MKCoordinateSpan(latitudeDelta: 0.025, longitudeDelta: 0.025))
    @State var coordinates: [JSON] = []
    @State var origin: CGPoint? = nil
    @State var size: CGSize? = nil

    var body: some View {
        GeometryReader { geometry in
            VStack(spacing: 0) {
                ZStack {
                    HistoryMapView(region: region, pointsArray: $coordinates)
                        .frame(height: 300)
                }.frame(height: 300)
            }.onAppear {
                self.origin = geometry.frame(in: .global).origin
                self.size = geometry.size
            }
        }
    }

    func returnScreenShot() -> UIImage {
        return takeScreenshot(origin: self.origin.unsafelyUnwrapped, size: self.size.unsafelyUnwrapped)
    }
}
extension UIView {
    var renderedImage: UIImage {
        // rect of capture
        let rect = self.bounds
        // create the bitmap context
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context: CGContext = UIGraphicsGetCurrentContext()!
        self.layer.render(in: context)
        // get an image from the current context bitmap
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}

extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        let hosting = UIHostingController(rootView: self)
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        return hosting.view.renderedImage
    }
}
This is roughly my code idea at the moment. I have a view I've built that, on appear, sets the CGPoint and CGSize for the screen capture, plus an attached method that can then take the screenshot of the view. The problem right now is that this view never renders, because I never add it to a parent view; I don't want this view to appear to the user. In the parent view I have:
struct HistoryCell: View {
    ...
    private var shareRental: ShareRentalView? = nil
    private var uiimage: UIImage? = nil
    ...

    init() {
        ...
        self.shareRental = ShareRentalView()
    }

    var body: some View {
        ...
        Button{action: {self.uiimage = self.shareRental?.returnScreenShot()}}
        ...
    }
}
This doesn't work because the view I want to screenshot is never rendered. Is there a way to render it in memory or off screen and then create an image from it? Or do I need to think of another way of doing this?

This ended up working to get a screenshot of a view that was not presented on screen and save it as a UIImage:
extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { context in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let size = controller.sizeThatFits(in: UIScreen.main.bounds.size)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}
And then in my parent view
var shareRental: ShareRentalView?

init() {
    ....
    self.shareRental = ShareRentalView()
}

var body: some View {
    Button(action: {
        let shareImage = self.shareRental.asImage()
    }
This gets me almost there. However, MKMapSnapshotter loads its map with a delay, the image creation happens too fast, and there is no map in the UIImage when it is created.
To get around the delay in the map loading, I created a class that builds all the UIImages and stores them in an array.
class MyUser: ObservableObject {
    ...
    public func buildHistoryRental() {
        self.historyRentals.removeAll()
        MapSnapshot().generateSnapshot(completion: self.snapShotRsp)
    }

    private func snapShotRsp(image: UIImage) {
        self.historyRentals.append(image)
    }
}
And then I made a class to create snapshot images like this:
func generateSnapshot(completion: @escaping (UIImage) -> ()) {
    let mapSnapshotOptions = MKMapSnapshotOptions()

    // Set the region of the map that is rendered (derived from the polyline).
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region

    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale

    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)

    // Show buildings and Points of Interest on the snapshot.
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true

    let snapshotter = MKMapSnapshotter(options: mapSnapshotOptions)
    var image: UIImage = UIImage()
    snapshotter.start(completionHandler: { (snapshot, error) in
        if error != nil {
            print("\(String(describing: error))")
        } else {
            image = self.drawLineOnImage(snapshot: snapshot.unsafelyUnwrapped)
        }
        completion(image)
    })
}
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image

    // for Retina screens
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)

    // draw the original image into the context
    image.draw(at: CGPoint.zero)

    // get the context for Core Graphics
    let context = UIGraphicsGetCurrentContext()

    // set stroking width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)

    // Here is the trick:
    // We use addLine() and move() to draw the line; this should be easy to understand.
    // The difficult part is that they both take CGPoint parameters, and it would be far too complex
    // to calculate these ourselves, so we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 0...yourCoordinates.count - 1 {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
        context!.move(to: snapshot.point(for: yourCoordinates[i]))
    }

    // apply the stroke to the context
    context!.strokePath()

    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    // end the graphics context
    UIGraphicsEndImageContext()

    return resultImage!
}
It's important to return the image asynchronously via the callback. Trying to return the image directly from the function call yielded a blank map.
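For illustration, a small sketch of consuming that callback (assuming the class is called MapSnapshot, as in the call site above):
// Hypothetical usage: the UIImage only exists once the snapshotter finishes,
// so anything that needs the map image has to run inside the completion.
MapSnapshot().generateSnapshot { image in
    self.historyRentals.append(image)   // mirrors snapShotRsp(image:) above
}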

.aspectFill on NSImageView

I'm porting my SpriteKit app from iOS to MacOS. I am designing my main menu in the main.storyboard, and I have an image as the background. When I resize the window, however, my image does not fill the whole screen.
I've tried:
.scaleAxesIndependently //???
.scaleNone //Centre
.scaleProportionallyDown //???
.scaleProportionallyUpOrDown //AspectFit
but none of them behaves like .aspectFill.
I am using Swift.

By subclassing NSImageView and overriding intrinsicContentSize, you can resize the image while keeping its aspect ratio, like so:
class AspectFillImageView: NSImageView {
    override var intrinsicContentSize: CGSize {
        guard let img = self.image else { return .zero }
        let viewWidth = self.frame.size.width
        let ratio = viewWidth / img.size.width
        return CGSize(width: viewWidth, height: img.size.height * ratio)
    }
}
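As a usage sketch (my own addition; the asset name and constraints are placeholders): pin the view's width and let the height follow from intrinsicContentSize.
// Hypothetical wiring; "background" is a placeholder asset name.
let imageView = AspectFillImageView()
imageView.image = NSImage(named: "background")
imageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(imageView)
NSLayoutConstraint.activate([
    imageView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    imageView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    imageView.topAnchor.constraint(equalTo: view.topAnchor)
    // height comes from intrinsicContentSize, so no height constraint is added
])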
If you just want to fill the whole view ignoring the ratio, use this extension instead:
extension NSImage {
    func resize(to size: NSSize) -> NSImage {
        return NSImage(size: size, flipped: false, drawingHandler: {
            self.draw(in: $0)
            return true
        })
    }
}
Extension usage (called on an existing NSImage instance):
let resized = image.resize(to: self.view.frame.size)

Applying MPSImageGaussianBlur with depth data

I am trying to create an imitation of the portrait mode in Apple's native camera.
The problem is that applying the blur effect using CIImage with respect to the depth data is too slow for the live preview I want to show to the user.
My code for this task is:
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up, blurRadius: CGFloat) -> UIImage? {
    let start = Date()
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur", withInputParameters: ["inputMask": invertedMask,
                                                                                    "inputRadius": blurRadius])
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }
    let end = Date()
    let elapsed = end.timeIntervalSince1970 - start.timeIntervalSince1970
    print("took \(elapsed) seconds to apply blur")
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
I want to apply the blur on the GPU for better performance. For this task, I found this implementation provided by Apple here.
So in Apple's implementation, we have this code snippet:
/** Applies a Gaussian blur with a sigma value of 0.5.
    This is a pre-packaged convolution filter.
*/
class GaussianBlur: CommandBufferEncodable {
    let gaussian: MPSImageGaussianBlur

    required init(device: MTLDevice) {
        gaussian = MPSImageGaussianBlur(device: device,
                                        sigma: 5.0)
    }

    func encode(to commandBuffer: MTLCommandBuffer, sourceTexture: MTLTexture, destinationTexture: MTLTexture) {
        gaussian.encode(commandBuffer: commandBuffer,
                        sourceTexture: sourceTexture,
                        destinationTexture: destinationTexture)
    }
}
How can I apply the depth data into the filtering through the Metal blur version? Or in other words - how can I achieve the first code snippets functionality, with the performance speed of the second code snippet?
For anyone still looking: you need to get the currentDrawable first, in the draw(in view: MTKView) method. Implement MTKViewDelegate:
func makeBlur() {
    device = MTLCreateSystemDefaultDevice()
    commandQueue = device.makeCommandQueue()
    selfView.mtkView.device = device
    selfView.mtkView.framebufferOnly = false
    selfView.mtkView.delegate = self

    let textureLoader = MTKTextureLoader(device: device)
    if let image = self.backgroundSnapshotImage?.cgImage, let texture = try? textureLoader.newTexture(cgImage: image, options: nil) {
        sourceTexture = texture
    }
}

func draw(in view: MTKView) {
    if let currentDrawable = view.currentDrawable,
       let commandBuffer = commandQueue.makeCommandBuffer() {
        let gaussian = MPSImageGaussianBlur(device: device, sigma: 5)
        gaussian.encode(commandBuffer: commandBuffer, sourceTexture: sourceTexture, destinationTexture: currentDrawable.texture)
        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }
}
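One way to bring the depth data back in (my own sketch, not from this thread, and not Apple's sample): blur the whole frame with MPSImageGaussianBlur as above, then blend the sharp and blurred textures using the depth map as a mask with Core Image's CIBlendWithMask. The texture and image names below are placeholders.
// Hypothetical sketch: combine a sharp texture, an MPS-blurred texture, and a depth-derived mask.
func composite(sharpTexture: MTLTexture, blurredTexture: MTLTexture, depthMask: CIImage,
               drawable: CAMetalDrawable, commandBuffer: MTLCommandBuffer,
               context: CIContext, colorSpace: CGColorSpace) {
    guard let sharp = CIImage(mtlTexture: sharpTexture, options: nil),
          let blurred = CIImage(mtlTexture: blurredTexture, options: nil) else { return }
    // Note: images created from Metal textures may need an .oriented(...) transform to fix flipping.
    let blend = CIFilter(name: "CIBlendWithMask")
    blend?.setValue(sharp, forKey: kCIInputImageKey)              // in-focus foreground
    blend?.setValue(blurred, forKey: kCIInputBackgroundImageKey)  // blurred background
    blend?.setValue(depthMask, forKey: kCIInputMaskImageKey)      // white = keep sharp
    if let output = blend?.outputImage {
        context.render(output,
                       to: drawable.texture,
                       commandBuffer: commandBuffer,
                       bounds: output.extent,
                       colorSpace: colorSpace)
    }
}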

Can't reliably verify texture for SKSpriteNode

I am trying to write a test that verifies an SKSpriteNode in my scene has the correct texture.
The test looks like this:
let sceneSprite = scene.childNodeWithName("sceneSprite") as SKSpriteNode!
let sprite = SKSpriteNode(imageNamed: expectedSpriteTexture)
sprite.size = sceneSprite.size // 102.4 x 136.533
XCTAssertTrue(sceneSprite.texture!.sameAs(sprite.texture!), "Scene sprite has wrong texture")
The sameAs method for SKTexture is implemented with the following extensions:
extension SKTexture {
    func sameAs(texture: SKTexture) -> Bool {
        return self.image.sameAs(texture.image)
    }

    var image: UIImage {
        let view = SKView(frame: CGRectMake(0, 0, size().width, size().height))
        let scene = SKScene(size: size())
        let sprite = SKSpriteNode(texture: self)
        sprite.position = CGPoint(x: CGRectGetMidX(view.frame), y: CGRectGetMidY(view.frame))
        scene.addChild(sprite)
        view.presentScene(scene)
        return self.imageWithView(view)
    }

    func imageWithView(view: UIView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0)
        view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

extension UIImage {
    func sameAs(image: UIImage) -> Bool {
        let firstData = UIImagePNGRepresentation(self)
        let secondData = UIImagePNGRepresentation(image)
        return firstData.isEqual(secondData)
    }
}
The problem is that sometimes the test passes and sometimes it fails. I changed the code so it saves the images on failure, and discovered that the test fails because, even though the first image is correct, the second image is completely black.
What can be done so the test will pass reliably?
This failure is happening on the simulator for iPad 2.

I made the following change, and it seems to make the test pass reliably:
func imageWithView(view: UIView) -> UIImage {
    UIGraphicsBeginImageContext(view.bounds.size)
    view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: false)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
What ideas do people have for why this allows the test to succeed?