I am compositing an array of UIImages via an MTKView, and I am seeing refresh issues that only manifest themselves during the composite phase, but which go away as soon as I interact with the app. In other words, the composites are working as expected, but their appearance on-screen looks glitchy until I force a refresh by zooming in/translating, etc.
I posted two videos that show the problem in action: Glitch1, Glitch2
The composite approach I've chosen is to convert each UIImage into an MTLTexture, which I submit to a render pass whose load action is set to ".load"; the pass renders a polygon with this texture on it, and I repeat the process for each image in the UIImage array.
The composites work, but the screen feedback, as you can see from the videos, is very glitchy.
Any ideas as to what might be happening? Any suggestions would be appreciated.
Some pertinent code:
for strokeDataCurrent in strokeDataArray {
    let strokeImage = UIImage(data: strokeDataCurrent.image)
    let strokeBbox = strokeDataCurrent.bbox
    let strokeType = strokeDataCurrent.strokeType
    self.brushStrokeMetal.drawStrokeImage(paintingViewMetal: self.canvasMetalViewPainting, strokeImage: strokeImage!, strokeBbox: strokeBbox, strokeType: strokeType)
} // end of for strokeDataCurrent in strokeDataArray
...
func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect, strokeType: brushTypeMode) {
    // set up proper compositing mode fragmentFunction
    self.updateRenderPipeline(stampCompStyle: drawStampCompMode)
    let stampTexture = UIImageToMTLTexture(strokeUIImage: strokeUIImage)
    let stampColor = UIColor.white
    let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
    self.stampAppendToVertexBuffer(stampUse: stampUseMode.strokeBezier, stampCorners: stampCorners, stampColor: stampColor)
    self.renderStampSingle(stampTexture: stampTexture)
} // end of func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect)
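For reference, the UIImageToMTLTexture helper isn't shown in this post; a minimal sketch of such a conversion, assuming an MTKTextureLoader-based approach and an existing device property (the option values here are my assumptions, not the actual helper):

import MetalKit

// Hypothetical sketch of a UIImage-to-MTLTexture conversion via MTKTextureLoader.
// `device` is assumed to be the view's MTLDevice.
func UIImageToMTLTexture(strokeUIImage: UIImage) -> MTLTexture? {
    guard let cgImage = strokeUIImage.cgImage else { return nil }
    let textureLoader = MTKTextureLoader(device: device)
    // .SRGB: false avoids gamma conversion of the stroke pixels during load
    return try? textureLoader.newTexture(cgImage: cgImage, options: [.SRGB: false])
}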
func renderStampSingle(stampTexture: MTLTexture) {
    // this routine is designed to update metalDrawableTextureComposite one stroke at a time, taking into account
    // whatever compMode the stroke requires. Note that we copy the contents of metalDrawableTextureComposite to
    // self.currentDrawable!.texture because the goal will be to eventually display a resulting composite
    let renderPassDescriptorSingleStamp: MTLRenderPassDescriptor? = self.currentRenderPassDescriptor
    renderPassDescriptorSingleStamp?.colorAttachments[0].loadAction = .load
    renderPassDescriptorSingleStamp?.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
    renderPassDescriptorSingleStamp?.colorAttachments[0].texture = metalDrawableTextureComposite

    // Create a new command buffer for each tessellation pass
    let commandBuffer: MTLCommandBuffer? = commandQueue.makeCommandBuffer()
    let renderCommandEncoder: MTLRenderCommandEncoder? = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptorSingleStamp!)
    renderCommandEncoder?.label = "Render Command Encoder"
    renderCommandEncoder?.setTriangleFillMode(.fill)

    defineCommandEncoder(
        renderCommandEncoder: renderCommandEncoder,
        vertexArrayStamps: vertexArrayStrokeStamps,
        metalTexture: stampTexture) // foreground sub-curve chunk

    renderCommandEncoder?.endEncoding() // finalize render encoder setup

    // begin presentsWithTransaction approach (needed to better synchronize with Core Image scheduling)
    copyTexture(buffer: commandBuffer!, from: metalDrawableTextureComposite, to: self.currentDrawable!.texture)
    commandBuffer?.commit() // commit and send task to the GPU
    commandBuffer?.waitUntilScheduled()
    self.currentDrawable!.present()
    // end presentsWithTransaction approach

    self.initializeStampArray(stampUse: stampUseMode.strokeBezier) // clears out the stamp array in preparation for the next draw call
} // end of func renderStampSingle(stampTexture: MTLTexture)
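The copyTexture(buffer:from:to:) helper isn't shown either; a minimal sketch, assuming it performs a full-texture blit on the GPU (an assumption, not the actual implementation):

// Hypothetical sketch of the copyTexture(buffer:from:to:) helper used above,
// assuming it blits the full contents of one texture into another of the same size.
func copyTexture(buffer: MTLCommandBuffer, from source: MTLTexture, to destination: MTLTexture) {
    guard let blitEncoder = buffer.makeBlitCommandEncoder() else { return }
    blitEncoder.copy(from: source, sourceSlice: 0, sourceLevel: 0,
                     sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                     sourceSize: MTLSize(width: source.width, height: source.height, depth: 1),
                     to: destination, destinationSlice: 0, destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.endEncoding()
}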
First of all, Metal is a very deep domain, and its use within the MTKView construct is sparsely documented, especially for applications that fall outside the more traditional gaming paradigm. This is where I have found myself in the limited experience I have accumulated with Metal, with help from folks like #warrenm, #ken-thomases, and #modj, whose contributions have been so valuable to me and to the Swift/Metal community at large. So a deep thank you to all of you.
Secondly, to anyone troubleshooting Metal, please take note of the following: if you are getting the message:
[CAMetalLayerDrawable present] should not be called after already presenting this drawable. Get a nextDrawable instead
please don't ignore it. It may seem harmless enough, especially if it only gets reported once, but beware: it is a sign that part of your implementation is flawed and must be addressed before you can troubleshoot any other Metal-related aspect of your app. At least this was the case for me. As you can see from the video posts, the symptoms of this problem were pretty severe and caused unpredictable behavior whose source I had a difficult time pinpointing. What was especially difficult for me to see was that I only got this message ONCE, early in the app cycle, but that single instance was enough to throw everything else graphically out of whack in ways that I thought were attributable to Core Image and/or other totally unrelated design choices I had made.
So, how did I get rid of this warning? Well, in my case, I assumed that having the settings:
self.enableSetNeedsDisplay = true // needed so we can call setNeedsDisplay() to force a display update as soon as metal deems possible
self.isPaused = true // needed so the draw() loop does not get called once/fps
self.presentsWithTransaction = true // for better synchronization with CoreImage (such as simultaneously turning on a layer while also clearing MTKView)
meant that I could pretty much call currentDrawable!.present() or commandBuffer.presentDrawable(view.currentDrawable) directly whenever I wanted to refresh the screen. Well, this is not the case AT ALL. It turns out these calls should only be made within the draw() loop, and only triggered via a setNeedsDisplay() call. Once I made this change, I was well on my way to solving my refresh riddle.
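Concretely, with presentsWithTransaction enabled, the present sequence that belongs inside the draw() path looks like this (a minimal sketch; the commit/wait/present ordering is what Apple documents for this mode):

// Minimal sketch: the present sequence, executed only from within the draw() path.
commandBuffer.commit()
commandBuffer.waitUntilScheduled()   // required before present() when presentsWithTransaction = true
self.currentDrawable?.present()      // only from within draw(), never from arbitrary call sites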
Furthermore, I found that the MTKView setting self.isPaused = true (so that I could make setNeedsDisplay() calls directly) still resulted in some unexpected behavior. So, instead, I settled for:
self.enableSetNeedsDisplay = false // setNeedsDisplay() no longer drives display updates; the view's display loop does
self.isPaused = false // draw() loop gets called once/fps
self.presentsWithTransaction = true // for better synchronization with CoreImage
as well as modifying my draw() loop to drive what kind of update to carry out once I set a metalDrawableDriver flag AND call setNeedsDisplay():
override func draw(_ rect: CGRect) {
    autoreleasepool(invoking: { () -> () in
        switch metalDrawableDriver {
        case stampRenderMode.canvasRenderNoVisualUpdates:
            return
        case stampRenderMode.canvasRenderClearAll:
            renderClearCanvas()
        case stampRenderMode.canvasRenderPreComputedComposite:
            renderPreComputedComposite()
        case stampRenderMode.canvasRenderStampArraySubCurve:
            renderSubCurveArray()
        } // end of switch metalDrawableDriver
    }) // end of autoreleasepool
} // end of draw()
This may seem roundabout, but it was the only mechanism I found to get consistent user-driven display updates.
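For example, a user-driven refresh elsewhere in the app then reduces to setting the flag and requesting a redraw (a hypothetical usage, reusing names from earlier in this post):

// Hypothetical usage: select the kind of update, then request a redraw.
metalDrawableDriver = stampRenderMode.canvasRenderPreComputedComposite
canvasMetalViewPainting.setNeedsDisplay()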
It is my hope that this post describes an error-free and viable solution that Metal developers may find useful in the future.
Related
I would like to animate the appearance of an NSSplitViewItem using .setPosition(), with Swift, Cocoa, and storyboards. My app allows a student to enter a natural deduction proof. When it is not correct, an 'advice view' appears on the right. When it is correct, this advice view disappears.
The code I'm using is below; the first function makes the 'advice' appear, and the second makes it disappear:
func showAdviceView() {
    // Our window
    let windowSize = view.window?.frame.size.width
    // A CGFloat proportion currently held as a constant
    let adviceViewProportion = BKPrefConstants.adviceWindowSize
    // Position is window size minus the proportion, since
    // origin is top left
    let newPosition = windowSize! - (windowSize! * adviceViewProportion)
    NSAnimationContext.runAnimationGroup { context in
        context.allowsImplicitAnimation = true
        context.duration = 0.75
        splitView.animator().setPosition(newPosition, ofDividerAt: 1)
    }
}
func hideAdviceView() {
    let windowSize = view.window?.frame.size.width
    let newPosition = windowSize!
    NSAnimationContext.runAnimationGroup { context in
        context.allowsImplicitAnimation = true
        context.duration = 0.75
        splitView.animator().setPosition(newPosition, ofDividerAt: 1)
    }
}
My problem is that the animation action itself is causing the text in the views to stretch, as you can see in this example: Current behaviour
What I really want is the text itself to maintain all proportions and slide gracefully in the same manner that we see when the user themselves moves the separator: Ideal behaviour (but to be achieved programmatically, not manually)
Thus far in my troubleshooting, I've tried animating this outside of NSAnimationContext; played with concurrent drawing and autoresizing of subviews in Xcode; and looked generally into Cocoa's animation system (though much of what I've read doesn't seem to apply directly here, and I might well be misunderstanding it). I suspect what's going on is that the .animator() proxy object allows only alpha changes and stretches; redrawing so that text alignment is honoured during the animation might be too non-standard. My feeling is that I need to 'trick' the app into treating the animation as though it's being performed by the user, but I'm not sure how to go about that.
Any tips greatly appreciated...
Cheers
I am processing a PHLivePhoto using .frameProcessor to modify each frame. The frames appear to be processed in sequence, which is slow. Can I get PHLivePhotoEditingContext.frameProcessor to take advantage of more than one core?
func processLivePhoto(input: PHContentEditingInput) {
    guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: input)
        else { fatalError("not a Live Photo editing input") }
    context.frameProcessor = { frame, _ in
        let renderedFrame = expensiveOperation(using: frame.image)
        return renderedFrame
    }
    // ...logic for saving
}
I'm afraid there's no way to parallelize the frame processing in this case. You have to keep in mind:
- Video frames need to be written in order.
- The more frames you process in parallel, the more memory you need.
- Core Image processes the frames on the GPU, which can usually only process one frame at a time anyway.
- Your expensiveOperation is not really happening inside the frameProcessor block anyway, since the actual rendering is handled by the framework outside this scope.
When I run my program, the code I put into "override func sceneDidLoad()" runs twice.
For example, the console output shows "spawn" printed twice.
This code should only run once, when "sceneDidLoad()" is called.
Here is the code for the "sceneDidLoad" function and for the "testSpawn()" function (which is the specific one that gave the duplicated printout).
class GameScene: SKScene {
    var mapTerrain: SKTileMapNode!

    override func sceneDidLoad() {
        cam = SKCameraNode()
        cam.xScale = 1
        cam.yScale = 1
        // do zoom by change in scale in pinch. (E.g. if they start out 5 units apart and end up 15 units apart, zoom by a factor of 3)
        self.camera = cam
        self.addChild(cam)
        cam.position = CGPoint(x: 100, y: 100)
        setupLayers()
        loadSceneNodes()
        setUpUI()
        testSpawn()
        //print("\(self.frame.width), \(self.frame.height)")
    }

    func testSpawn() {
        let RedLegion = legion(texture: textureRedLegion, moveTo: nil, tag: 1, health: 2)
        RedLegion.position = mapTerrain.centerOfTile(atColumn: 0, row: 0)
        RedLegion.team = "Red"
        unitsLayer.addChild(RedLegion)
        legionList.append(RedLegion)
        print("spawn")
    }
}
Note: not all of the code is here (like "setUpLayers()"); if needed I can supply it, I just do not think it is necessary.
Search your whole project for print("spawn") just to make sure that is the only place it appears. Also check for testSpawn() to make sure it is only called once. Additionally, instead of relying on this print to count how many times sceneDidLoad runs, place a print directly within sceneDidLoad itself. Finally, check to make sure you are not creating the scene twice.
I've also seen this and submitted a bug report, but Apple responded saying that it is intended behavior. Apple said that it creates a dummy scene and then creates the actual scene. Before it runs the second time, it gets rid of anything done the first time, so you shouldn't get any errors from it. The bug is really hard to reproduce; one of my friends was working off the same repository as I was but did not experience it.
If you are looking for a solution to this, I changed sceneDidLoad to didMove(to:) (didMoveToView: in Objective-C), as sketched below. Make sure your Xcode is up to date.
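A minimal sketch of that change, reusing the setup calls from the question (the hasLoaded flag is my own addition, a hypothetical guard in case the scene is presented more than once):

var hasLoaded = false // hypothetical guard against repeated setup

override func didMove(to view: SKView) {
    guard !hasLoaded else { return }
    hasLoaded = true
    setupLayers()
    loadSceneNodes()
    setUpUI()
    testSpawn()
}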
I am programming a small game using SpriteKit.
I added an SKLabelNode to my SKScene with the initial text of just "0".
When I try to update the text of this SKLabel using:
func updateScoreLabel() {
    scoreNumber++
    scoreLabel.text = String(scoreNumber)
}
there is a short pause of the entire SKScene between when it gets called and when the text is updated.
However, this only happens the first time it is called; if I update the scoreLabel any subsequent time, the pause does not occur.
What triggers the method call is the following (CC is an enum of physicsBody categoryBitMasks typed to Int):
func isCollisionBetween(nodeTypeOne: CC, nodeTypeTwo: CC, contact: SKPhysicsContact) -> Bool {
    let isAnodeTypeOne = contact.nodeA.physicsBody!.categoryBitMask == nodeTypeOne.rawValue
    let isAnodeTypeTwo = contact.nodeA.physicsBody!.categoryBitMask == nodeTypeTwo.rawValue
    let isBnodeTypeOne = contact.nodeB.physicsBody!.categoryBitMask == nodeTypeOne.rawValue
    let isBnodeTypeTwo = contact.nodeB.physicsBody!.categoryBitMask == nodeTypeTwo.rawValue
    if (isAnodeTypeOne && isBnodeTypeTwo) || (isAnodeTypeTwo && isBnodeTypeOne) {
        return true
    } else {
        return false
    }
}
and it is called like this:
if isCollisionBetween(CC.TypeA, nodeTypeTwo: CC.TypeB, contact: contact) {
    updateScoreLabel()
}
Can someone please point out the problem? The score updating does not pause the scene when the same collision is detected and a println statement is used to output the score, so I think it is specific to changing the text of the SKLabelNode.
You should check for two things, either of which can cause the lag:
- typos in the font name
- loading an entire font family, i.e. Arial instead of just Arial-BoldMT or Arial-ItalicMT. You want to be specific, because loading an entire font family can introduce a delay with certain fonts. A list of iOS fonts can be found here.
If you need to list the available fonts (and see their real font names), you can use something like this:
for familyName in UIFont.familyNames() as [String] {
    println("\(familyName)")
    for fontName in UIFont.fontNamesForFamilyName(familyName) as [String] {
        println("\tFont: \(fontName)")
    }
}
When initializing a label for the first time (say, at the moment a collision occurs), a delay can happen if you are using custom fonts which are not available in iOS.
In that case, try to "preload" the font before using the label: before actual gameplay, instantiate an SKLabelNode and set its text property to some value. You have to set the text property, because doing so is what actually loads the font and makes it ready for use; otherwise it will be loaded the first time you set the label's text property.
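A minimal sketch of that preload ("Arial-BoldMT" is just an example font name):

// Create a throwaway label before gameplay; setting its text forces the font to load.
let preloadLabel = SKLabelNode(fontNamed: "Arial-BoldMT")
preloadLabel.text = "0"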
EDIT:
Sorry, I just noticed that you said you are already initializing the label with initial text. So just ignore the part of my answer related to that, and instead look for typos and check the part about loading a specific font.
Hope this will take you somewhere. Good luck!
I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() to draw those layers into the drawLayer context. While profiling, I noticed that CGContextDrawImage() takes 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same, and all the layers have the same size, so I don't understand why the last line crashes.
Also I don't see how to use the new convenience functions to create vImageBuffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get the pointer to inBitmapData. That is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers that I need for the alpha-blend input. Now it's not crashing anymore, and luckily the speed boost is visible in the profiler (vImageAlphaBlend only takes about a third of CGContextDrawImage), but unfortunately the result is a transparent image with pixel failures instead of the white image background.
So far I don't get any runtime errors anymore, but since the result is not as expected, I fear that I still don't use the alpha blend function correctly.
vImage_Buffer.data should point to the CFData data (pixel data), not the CFDataRef.
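In other words, pull the byte pointer out of the CFData first. A minimal sketch in current Swift syntax (variable names mirror the question; the buffer is only read from, so the mutating cast is tolerable here):

import Accelerate

// imageLayer: CGImage, as in the question's loop
let inBitmapData = imageLayer.dataProvider!.data!
let pixelPtr = UnsafeMutablePointer(mutating: CFDataGetBytePtr(inBitmapData)!)
var buffer = vImage_Buffer(
    data: pixelPtr,                              // the pixel bytes, not &inBitmapData
    height: vImagePixelCount(imageLayer.height),
    width: vImagePixelCount(imageLayer.width),
    rowBytes: imageLayer.bytesPerRow)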
Also, not all images store their data as four-channel, 8-bit-per-channel data. If it turns out to be three-channel, RGBA, or monochrome, you may get more crashing or funny colors. You have also assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_initWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
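For reference, a minimal sketch of that call in current Swift syntax (the format values here are assumptions; match them to your actual pipeline, and note that vImage allocates the buffer's pixel storage, which the caller must free):

import Accelerate

var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceRGB()),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
    version: 0,
    decode: nil,
    renderingIntent: .defaultIntent)
var buffer = vImage_Buffer()
// cgImage stands in for one of your layer images
let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, cgImage, vImage_Flags(kvImageNoFlags))
precondition(error == kvImageNoError)
// ... blend using buffer ...
free(buffer.data) // caller owns the allocation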
Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there wasn't something I could do to get those to match up a bit better.
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I thought that CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.
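If you do go that route, the call itself is a one-liner (a hypothetical usage, assuming both buffers hold premultiplied BGRA8888 pixels of the same size, blending in place over the bottom buffer as the question's code does):

// Hypothetical usage: blend topBuffer over bottomBuffer, writing the result in place.
vImagePremultipliedAlphaBlend_BGRA8888(&topBuffer, &bottomBuffer, &bottomBuffer, vImage_Flags(kvImageNoFlags))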