I am processing a PHLivePhoto using .frameProcessor to modify each frame. The frames appear to be processed in sequence, which is slow. Can I get PHLivePhotoEditingContext.frameProcessor to take advantage of more than one core?
func processLivePhoto(input: PHContentEditingInput) {
    guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: input) else {
        fatalError("not a Live Photo editing input")
    }
    context.frameProcessor = { frame, _ in
        let renderedFrame = expensiveOperation(using: frame.image)
        return renderedFrame
    }
    // ...logic for saving
}
I'm afraid there's no way to parallelize the frame processing in this case. You have to keep in mind:
Video frames need to be written in order.
The more frames you process in parallel, the more memory you need.
Core Image processes the frames on the GPU, which can usually only process one frame at a time anyway.
Your expensiveOperation is not really happening inside the frameProcessor block anyway: the block only returns a processing recipe (a CIImage), and the actual rendering is handled by the framework outside this scope.
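To illustrate that last point: when the processing is expressed with Core Image filters, the block merely assembles a recipe and returns immediately; the pixels are rendered later by the framework. A minimal sketch (the sepia filter is an arbitrary stand-in for a real effect):

context.frameProcessor = { frame, _ in
    // Cheap: this only records a filter graph on the CIImage.
    // Core Image evaluates it lazily, when the framework renders the frame.
    frame.image.applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.8])
}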
I have an Entity containing a sprite sheet and a struct instance:
let texture_handle = asset_server.load("turret_idle.png");
let texture_atlas: TextureAtlas = TextureAtlas::from_grid(texture_handle, ...);
let texture_atlas_handle = texture_atlases.add(texture_atlas);

let mut turret = Turret::create(...);

commands
    .spawn_bundle(SpriteSheetBundle {
        texture_atlas: texture_atlas_handle,
        transform: Transform::from_xyz(pos),
        ..default()
    })
    .insert(turret)
    .insert(AnimationTimer(Timer::from_seconds(0.04, true)));
The AnimationTimer will then be used in a query together with the TextureAtlas handle to render the next sprite:
fn animate_turret(
    time: Res<Time>,
    texture_atlases: Res<Assets<TextureAtlas>>,
    mut query: Query<(
        &mut AnimationTimer,
        &mut TextureAtlasSprite,
        &Handle<TextureAtlas>,
    )>,
) {
    for (mut timer, mut sprite, texture_atlas_handle) in &mut query {
        timer.tick(time.delta());
        if timer.just_finished() {
            let texture_atlas = texture_atlases.get(texture_atlas_handle).unwrap();
            sprite.index = (sprite.index + 1) % texture_atlas.textures.len();
        }
    }
}
This works perfectly fine as long as the turret is idle and thus plays the idle animation. As soon as a target is found and attacked, I want to display another sprite sheet instead.
let texture_handle_attack = asset_server.load("turret_attack.png");
Unfortunately, it seems that I cannot add multiple TextureAtlas handles to a SpriteSheetBundle and decide later which one to render. How do I solve this? I have already thought about merging all animations into one sprite sheet, but that is very messy since they have different numbers of frames.
Maybe create a struct with all the different handles you need and add it as a resource? Then you need a component for the states ("idle", "attacking", etc.) and a system that sets the correct handle in texture_atlas from your resource handles, as sketched below.
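A minimal sketch of that idea, written against the same Bevy version as the question's code (pre-0.9; later versions also need #[derive(Resource)] on the resource). TurretAtlases, TurretState, and swap_turret_atlas are made-up names:

use bevy::prelude::*;

// Hypothetical resource holding one atlas handle per animation.
// Insert during setup, e.g. commands.insert_resource(TurretAtlases { ... });
struct TurretAtlases {
    idle: Handle<TextureAtlas>,
    attack: Handle<TextureAtlas>,
}

// Hypothetical state component; flip it when a target is acquired or lost.
#[derive(Component, PartialEq, Eq)]
enum TurretState {
    Idle,
    Attacking,
}

// Runs only for entities whose TurretState changed; swaps the rendered atlas.
fn swap_turret_atlas(
    atlases: Res<TurretAtlases>,
    mut query: Query<
        (&TurretState, &mut Handle<TextureAtlas>, &mut TextureAtlasSprite),
        Changed<TurretState>,
    >,
) {
    for (state, mut atlas, mut sprite) in &mut query {
        *atlas = match state {
            TurretState::Idle => atlases.idle.clone(),
            TurretState::Attacking => atlases.attack.clone(),
        };
        // Reset so the index can't point past the end of a shorter sheet.
        sprite.index = 0;
    }
}

The existing animate_turret system keeps working unchanged, since it animates whatever atlas handle is currently on the entity.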
I am compositing an array of UIImages via an MTKView, and I am seeing refresh issues that only manifest themselves during the composite phase, but which go away as soon as I interact with the app. In other words, the composites are working as expected, but their appearance on-screen looks glitchy until I force a refresh by zooming in/translating, etc.
I posted two videos that show the problem in action: Glitch1, Glitch2
The composite approach I've chosen is to convert each UIImage into an MTLTexture, which I submit to a render pass whose load action is set to .load; this renders a poly with that texture on it, and I repeat the process for each image in the UIImage array.
The composites work, but the screen feedback, as you can see from the videos, is very glitchy.
Any ideas as to what might be happening? Any suggestions would be appreciated.
Some pertinent code:
for strokeDataCurrent in strokeDataArray {
    let strokeImage = UIImage(data: strokeDataCurrent.image)
    let strokeBbox = strokeDataCurrent.bbox
    let strokeType = strokeDataCurrent.strokeType
    self.brushStrokeMetal.drawStrokeImage(paintingViewMetal: self.canvasMetalViewPainting, strokeImage: strokeImage!, strokeBbox: strokeBbox, strokeType: strokeType)
} // end of for strokeDataCurrent in strokeDataArray
...
func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect, strokeType: brushTypeMode) {
    // set up proper compositing mode fragmentFunction
    self.updateRenderPipeline(stampCompStyle: drawStampCompMode)
    let stampTexture = UIImageToMTLTexture(strokeUIImage: strokeUIImage)
    let stampColor = UIColor.white
    let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
    self.stampAppendToVertexBuffer(stampUse: stampUseMode.strokeBezier, stampCorners: stampCorners, stampColor: stampColor)
    self.renderStampSingle(stampTexture: stampTexture)
} // end of func drawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect)
func renderStampSingle(stampTexture: MTLTexture) {
    // This routine is designed to update metalDrawableTextureComposite one stroke at a time, taking into account
    // whatever compMode the stroke requires. Note that we copy the contents of metalDrawableTextureComposite to
    // self.currentDrawable!.texture because the goal is to eventually display the resulting composite.
    let renderPassDescriptorSingleStamp: MTLRenderPassDescriptor? = self.currentRenderPassDescriptor
    renderPassDescriptorSingleStamp?.colorAttachments[0].loadAction = .load
    renderPassDescriptorSingleStamp?.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
    renderPassDescriptorSingleStamp?.colorAttachments[0].texture = metalDrawableTextureComposite

    // Create a new command buffer for each tessellation pass
    let commandBuffer: MTLCommandBuffer? = commandQueue.makeCommandBuffer()
    let renderCommandEncoder: MTLRenderCommandEncoder? = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptorSingleStamp!)
    renderCommandEncoder?.label = "Render Command Encoder"
    renderCommandEncoder?.setTriangleFillMode(.fill)

    defineCommandEncoder(
        renderCommandEncoder: renderCommandEncoder,
        vertexArrayStamps: vertexArrayStrokeStamps,
        metalTexture: stampTexture) // foreground sub-curve chunk

    renderCommandEncoder?.endEncoding() // finalize renderEncoder setup

    // begin presentsWithTransaction approach (needed to better synchronize with Core Image scheduling)
    copyTexture(buffer: commandBuffer!, from: metalDrawableTextureComposite, to: self.currentDrawable!.texture)
    commandBuffer?.commit() // commit and send task to GPU
    commandBuffer?.waitUntilScheduled()
    self.currentDrawable!.present()
    // end presentsWithTransaction approach

    self.initializeStampArray(stampUse: stampUseMode.strokeBezier) // clear out the stamp array in preparation for the next draw call
} // end of func renderStampSingle(stampTexture: MTLTexture)
First of all, the Metal domain is very deep, and its use within the MTKView construct is sparsely documented, especially for applications that fall outside the more traditional gaming paradigm. This is where I have found myself in the limited experience I have accumulated with Metal, with help from folks like #warrenm, #ken-thomases, and #modj, whose contributions have been so valuable to me and to the Swift/Metal community at large. So a deep thank you to all of you.
Secondly, to anyone troubleshooting Metal, please take note of the following: if you are getting the message:
[CAMetalLayerDrawable present] should not be called after already presenting this drawable. Get a nextDrawable instead
please don't ignore it. It may seem harmless enough, especially if it only gets reported once. But beware: this is a sign that a part of your implementation is flawed and must be addressed before you can troubleshoot any other Metal-related aspect of your app. At least this was the case for me. As you can see from the video posts, the symptoms of this problem were pretty severe and caused unpredictable behavior whose source I had a difficult time pinpointing. The thing that was especially difficult for me to see was that I only got this message ONCE, early on in the app cycle, but that single instance was enough to throw everything else graphically out of whack in ways that I thought were attributable to CoreImage and/or other totally unrelated design choices I had made.
So, how did I get rid of this warning? Well, in my case, I assumed that having the settings:
self.enableSetNeedsDisplay = true // needed so we can call setNeedsDisplay() to force a display update as soon as Metal deems possible
self.isPaused = true // needed so the draw() loop does not get called once/fps
self.presentsWithTransaction = true // for better synchronization with CoreImage (such as simultaneously turning on a layer while also clearing MTKView)
meant that I could pretty much call currentDrawable!.present() or commandBuffer.presentDrawable(view.currentDrawable) directly whenever I wanted to refresh the screen. Well, this is not the case AT ALL. It turns out these calls should only be made within the draw() loop and only triggered via a setNeedsDisplay() call. Once I made this change, I was well on my way to solving my refresh riddle.
Furthermore, I found that the MTKView setting self.isPaused = true (so that I could make setNeedsDisplay() calls directly) still resulted in some unexpected behavior. So, instead, I settled for:
self.enableSetNeedsDisplay = false // display updates are now driven by the draw() loop itself, not by setNeedsDisplay() alone
self.isPaused = false // draw() loop gets called once/fps
self.presentsWithTransaction = true // for better synchronization with CoreImage
as well as modifying my draw() loop to decide what kind of update to carry out once I set a metalDrawableDriver flag and call setNeedsDisplay():
override func draw(_ rect: CGRect) {
    autoreleasepool(invoking: { () -> () in
        switch metalDrawableDriver {
        case stampRenderMode.canvasRenderNoVisualUpdates:
            return
        case stampRenderMode.canvasRenderClearAll:
            renderClearCanvas()
        case stampRenderMode.canvasRenderPreComputedComposite:
            renderPreComputedComposite()
        case stampRenderMode.canvasRenderStampArraySubCurve:
            renderSubCurveArray()
        } // end of switch metalDrawableDriver
    }) // end of autoreleasepool
} // end of draw()
This may seem roundabout, but it was the only mechanism I found to get consistent user-driven display updates.
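For concreteness, a user-driven update then amounts to setting the flag and asking the view to redraw (a sketch; requestCompositeRedraw is a hypothetical name, while the flag and mode come from the draw() loop above):

// Hypothetical helper inside the MTKView subclass.
func requestCompositeRedraw() {
    metalDrawableDriver = stampRenderMode.canvasRenderPreComputedComposite
    setNeedsDisplay() // draw(_:) picks the flag up on the next display pass
}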
It is my hope that this post describes an error-free and viable solution that Metal developers may find useful in the future.
I am building an app that uses microphone input to detect sounds and trigger events. I based my code on AKAmplitudeTap, but when I ran it, I found that I was only obtaining sample data intermittently, with sections missing.
The tap code looks like this (with the guts ripped out and simply keeping track of how many samples would have been processed):
open class MyTap {
    // internal let bufferSize: UInt32 = 1_024 // 8-9 kSamples/sec
    internal let bufferSize: UInt32 = 4096 // 39.6 kSamples/sec
    // internal let bufferSize: UInt32 = 16536 // 43.3 kSamples/sec

    public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
            sampleCount += self.bufferSize
        }
    }
}
I initialize the tap with:
func afterLoad() {
    assert(!loaded)
    AKSettings.audioInputEnabled = true
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        print("Could not set session category.")
    }
    mic = AKMicrophone()
    myTap = MyTap(mic) // seriously, can it be that easy?
    loaded = true
}
The original tap code was capturing samples to a buffer, but I saw that big chunks of time were missing with a buffer size of 1024. I suspected that the processing time for the sample buffer might be excessive, so...
I simplified the code to just keep track of how many samples were being passed to the tap. In another part of the code, I print out sampleCount/elapsedTime and, as noted in the comments after 'bufferSize', I get different numbers of samples per second.
The sample rate converges on 43.1 kSamples/sec with a 16K buffer, and only about 20% of the samples are collected with a 1K buffer. I would prefer to use the small buffer size to obtain near real-time response to detected sounds. As I've been writing this, the 4K buffer version has been running and has stabilized at 39,678 samples/sec.
Am I missing something? Can a tap with a small buffer size actually capture 44.1 kHz sample data?
Problem resolved... the tap requires this line of code:
buffer.frameLength = self.bufferSize
... and suddenly all the samples appear. I had obviously stripped out a bit too much from code I didn't fully understand.
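For completeness, here is the tap closure from above with the fix applied (same assumptions as the original snippet):

input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
    buffer.frameLength = self.bufferSize // the fix: without this, big chunks of samples went missing
    sampleCount += self.bufferSize
}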
I'm building a 2d tile-based game for iOS with swift and Firebase. Because the world is large, I've designed it so that I only subscribe to the tiles that are on screen. That is, instead of adding listeners for all 10,000x10,000 tiles, I add them to just the tiles on screen. As the player moves, I de-register the old listeners and register the new ones. I've added a bit of a buffer zone around the edge of the screen, in the hopes that everything will be sufficiently loaded by the time it moves on screen. Unfortunately, there is often significant enough lag from Firebase that this strategy simply isn't working. On sub-optimal internet connections, it's possible to keep walking into the "unloaded world," taking several seconds at times to load the missing tiles.
Here's the thing, though: other MMO iOS games on the same connection and same device work fine. It's not a terrible connection. Which makes me suspect my implementation, or Firebase itself is at fault.
Fundamentally I'm waiting on the "load once" event for about 20 tiles each time I take a step. A step takes about 1/4 of a second, so each second I'm requesting about 100 items from Firebase. I can't think of a better way, though. Firebase documentation suggests that this should not be a problem, since it's all one socket connection. I could "bucket" the objects into, say, 10x10 blocks which would mean that I'd subscribe to fewer objects, but this would also be more wasteful in terms of total data transfer. If the socket connection is truly optimized, total data transfer should be the only bottleneck, implying this strategy would be wrong.
edit
Here's a video showing how it works. The buffer-size has been reduced to -1, so that you can easily see the edges of the screen and the tiles loading and unloading. Near the end of the video, lag strikes and I wander into the emptiness. I opened up another game and it loaded almost instantly. http://forevermaze.inzania.com/videos/FirebaseLag.mov (n.b., I ended the recording before the screen loaded again. It never fails to load, so it's not as if the code is failing to work. It's pure lag.)
Here is the code I'm using to load the tiles. It's called once for each tile. As I said, this means that this code is called about 20 times per step, in parallel. All other apps are running at a fine speed with no lag. I'm on a MiFi with LTE connectivity in Tokyo, so it's a solid connection.
/**
 * Given a path to a firebase object, get the snapshot with a timeout.
 */
static func loadSnapshot(firebasePath: String!) -> Promise<FDataSnapshot?> {
    let (promise, fulfill, _) = Promise<FDataSnapshot?>.pendingPromise()
    let connection = Firebase(url: Config.firebaseUrl + firebasePath)
    connection.observeSingleEventOfType(.Value, withBlock: { snapshot in
        if !promise.resolved {
            fulfill(snapshot)
        }
    })
    after(Config.timeout).then { () -> Void in
        if !promise.resolved {
            DDLogWarn("[TIMEOUT] [FIREBASE-READ] \(firebasePath)")
            fulfill(nil)
            // reject(Errors.network)
        }
    }
    return promise
}
The tiles reside at [ROOT]/tiles/[X]x[Y]. Most tiles contain very little data, but if there are objects on that tile (e.g., other players) those are stored.
edit2
Per request, I've recreated this issue very simply. Here is a 100-line XCTestCase class: http://forevermaze.com/code/LagTests.swift
Usage:
Drop the file into your Swift project (it should be stand-alone, requiring only Firebase)
Change the value of firebaseUrl to your root URL (e.g., https://MyProject.firebaseio.com)
Run the testSetupDatabase() function test once to setup the initial state of the database
Run the testWalking() function to test the lag. This is the main test. It will fail if any tile takes longer than 2 seconds to load.
I've tried this test on several different connections. A top-notch office connection passes with no problem, but even a high-end LTE or MiFi connection fails. 2 seconds is already a very long timeout, since it implies that I need to have a 10-tile buffer zone (0.2 seconds * 10 tiles = 2 seconds). Here's some output when I'm connected to an LTE connection, showing that it took nearly 10 seconds (!!) to load a tile:
error: -[ForeverMazeTests.LagTests testWalking] : XCTAssertTrue failed - Tile 2x20 took 9.50058007240295
I ran a few tests and the loading completes in 15-20 seconds when I test over a 3G connection. Over my regular connection it takes 1-2 seconds, so the difference is likely purely based on bandwidth.
I rewrote your test case into a JavaScript version, because I had a hard time figuring out what's going on. Find mine here: http://jsbin.com/dijiba/edit?js,console
var ref = new Firebase(URL);
var tilesPerStep = 20;
var stepsToTake = 100;

function testWalking() {
  var startTime = Date.now();
  var promises = [];
  for (var x = 0; x < stepsToTake; x++) {
    promises.push(testStep(x));
  }
  Promise.all(promises).then(function() {
    console.log('All ' + promises.length + ' steps done in ' + (Date.now() - startTime) + 'ms');
  });
}

function testStep(x) {
  var result = new Promise(function(resolve, reject) {
    var tiles = ref.child("/tiles_test");
    var loading = 0;
    var startTime = Date.now();
    console.log('Start loading step ' + x);
    for (var y = 0; y < tilesPerStep; y++) {
      loading++;
      tiles.child(x + 'x' + y).once('value', function(snapshot) {
        loading--;
        if (loading === 0) {
          console.log('Step ' + x + ' took ' + (Date.now() - startTime) + 'ms');
          resolve(Date.now() - startTime);
        }
      });
    }
  });
  return result;
}

testWalking();
The biggest difference is that I don't delay starting any of the loading and I don't fail for a specific tile. I think that last bit is why your tests are failing.
All loading from Firebase happens asynchronously, but all requests are going through the same connection. When you start loading, you are queueing up a lot of requests. This timing is skewed by "preceding requests that haven't been fulfilled yet".
This is a sample of a test run with just 10 steps:
"Start loading step 0"
"Start loading step 1"
"Start loading step 2"
"Start loading step 3"
"Start loading step 4"
"Start loading step 5"
"Start loading step 6"
"Start loading step 7"
"Start loading step 8"
"Start loading step 9"
"Step 0 took 7930ms"
"Step 1 took 7929ms"
"Step 2 took 7948ms"
"Step 3 took 8594ms"
"Step 4 took 8669ms"
"Step 5 took 9141ms"
"Step 6 took 9851ms"
"Step 7 took 10365ms"
"Step 8 took 10425ms"
"Step 9 took 11520ms"
"All 10 steps done in 11579ms"
You'll probably note that the time taken for each step does not add up to the time taken for all steps combined. Essentially you are starting each request while there are still requests in the pipeline. This is the most efficient way of loading these items, but it does mean that you'll need to measure performance differently.
Essentially all steps start at almost the same time. Then you're waiting for the first response (which in the above case includes establishing a WebSocket connection from the client to the correct Firebase server), and after that the responses come in at reasonable intervals (given that there are 20 requests per step).
All of this is very interesting, but it doesn't solve your problem of course. I would recommend that you model your data into screen-sized buckets. So instead of having each tile separate, store every 10x10 tiles in a "bucket". You'll reduce the overhead of each separate request and only need to request at most one bucket for every 10 steps, as sketched below.
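For example, keeping the question's own loadSnapshot helper, a hypothetical /buckets/[BX]x[BY] layout would reduce this to one request per 100 tiles (the path layout and the loadBucket name are assumptions for illustration):

// Hypothetical bucket layout: /buckets/3x7 holds tiles 30x70 through 39x79.
static func loadBucket(x: Int, y: Int) -> Promise<FDataSnapshot?> {
    let bucketX = x / 10
    let bucketY = y / 10
    return loadSnapshot("/buckets/\(bucketX)x\(bucketY)")
}

A new bucket only needs to be requested when the player crosses a 10-tile boundary, rather than ~20 tiles on every step.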
Update
I'm pretty sure we're just debugging multiple artifacts of your benchmark approach. If I update the code to this:
func testWalking() {
    let expectation = expectationWithDescription("Load tiles")
    let maxTime = self.timeLimit + self.stepTime * Double(stepsToTake)
    let startTime = NSDate().timeIntervalSince1970
    for (var x = 0; x < stepsToTake; x++) {
        let delay = Double(x) * stepTime
        let data = ["x": x, "ex": expectation]
        stepsRemaining++
        NSTimer.scheduledTimerWithTimeInterval(0, target: self, selector: Selector("testStep:"), userInfo: data, repeats: false)
    }
    waitForExpectationsWithTimeout(maxTime) { error in
        let time = NSDate().timeIntervalSince1970 - startTime
        print("Completed loading after \(time)")
        if error != nil {
            print("Error: \(error!.localizedDescription)")
        }
    }
}

/**
 * Helper function to test a single step (executes `tilesPerStep` number of tile loads)
 */
func testStep(timer: NSTimer) {
    let tiles = Firebase(url: firebaseUrl).childByAppendingPath("/tiles_test")
    let data = timer.userInfo as! Dictionary<String, AnyObject>
    let x = data["x"] as! Int
    let expectation = data["ex"] as! XCTestExpectation
    var loading = 0
    print("Start loading \(x)")
    for (var y = 0; y < tilesPerStep; y++) {
        loading++
        tiles.childByAppendingPath("\(x)x\(y)").observeSingleEventOfType(.Value, withBlock: { snapshot in
            loading--
            if loading == 0 {
                print("Done loading \(x)")
                self.stepsRemaining--
                if self.stepsRemaining == 0 {
                    expectation.fulfill()
                }
            }
        })
    }
}
It completes the entire loading in less than 2 seconds over a high-speed network; over 3G it takes between 15 and 25 seconds.
But my recommendation to model the data at a coarser level than single tiles remains.
I have the following pseudo code to clarify my problem and a solution. My original posting and detailed results are on Stack Overflow at: Wait() & Sleep() Not Working As Thought.
public class PixelArtSlideShow { // called with click of Menu item.
    create List<File> of each selected pixelArtFile
    for (File pixelArtFile : List<File>) {
        call displayFiles(pixelArtFile);
        TimeUnit.SECONDS.sleep(5);
    }
}

public static void displayFiles(File pixelArtFile) {
    for (loop array rows)
        for (loop array columns)
            read-in sRGB for each pixel - Circle Object
    window.setTitle(....)
}

// when the above code is used to Open a pixelArtFile, it will appear instantly in a 32 x 64 array
PROBLEM: As detailed extensively on the other post, each pixelArtFile will display the setTitle() correctly and pause for about 5 secs, but the Circles will not change to the assigned colors, except for the last file, after the 5 secs have passed. It's as if all the code around the TimeUnit.SECONDS.sleep(5); is skipped EXCEPT the window.setTitle(...)?
My understanding is that TimeUnit.SECONDS.sleep(5); blocks the UI thread, and I guess it must somehow be isolated to allow displayFiles(pixelArtFile) to fully execute.
Could you please show me the most straightforward way to solve this problem, using the pseudo code above as the basis for a more complete solution?
I have tried Runnables, Platform.runLater(), FutureTask<Void>, etc., and I'm pretty confused as to how they are meant to work and how exactly they should be coded.
I also have the two UI windows posted on the web at: Virtual Art. I think the pixelArtFile shown in the Pixel Array window may clarify the problem.
THANKS
Don't sleep the UI thread. A Timeline will probably do what you want.
List<File> files;
int curFileIdx = 0;

// prereq, files have been appropriately populated.
public void runAnimation() {
    Timeline timeline = new Timeline(
        new KeyFrame(Duration.seconds(5), event -> {
            if (!files.isEmpty()) {
                displayFile(curFileIdx);
                curFileIdx = (curFileIdx + 1) % files.size();
            }
        })
    );
    timeline.setCycleCount(Timeline.INDEFINITE);
    timeline.play();
}

// prereq, files have been appropriately populated.
public void displayFile(int idx) {
    File fileToDisplay = files.get(idx);
    // do your display logic.
}
Note, in addition to the above, you probably want to run a separate task to read the file data into memory, and just have a List<ModelData>, where ModelData is some class for the data you have read from a file. That way you wouldn't be continuously running IO in your animation loop. For a five-second-per-frame animation it probably doesn't matter much, but for a more frequent animation such optimizations are very important. A sketch follows.
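For instance, the loading could be handed to a javafx.concurrent.Task (a sketch; ModelData, ModelData.readFrom, and the modelFrames field are hypothetical):

// Hypothetical: ModelData.readFrom(File) parses one pixel-art file into memory.
Task<List<ModelData>> loader = new Task<List<ModelData>>() {
    @Override
    protected List<ModelData> call() throws Exception {
        List<ModelData> frames = new ArrayList<>();
        for (File f : files) {
            frames.add(ModelData.readFrom(f)); // runs off the FX application thread
        }
        return frames;
    }
};
loader.setOnSucceeded(e -> {
    modelFrames = loader.getValue(); // onSucceeded runs back on the FX thread
    runAnimation();                  // start the Timeline once everything is in memory
});
new Thread(loader, "file-loader").start();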