I'm new to Metal, and I was able to achieve a simple scene with a quad rotating on the screen, but I want to export a 1-minute video/frames without having to do it in real time.
So imagine the user just opens the app and taps an 'export' button, and the CPU/GPU goes full speed to output a 1-minute video/frames of the quad rotating, without previewing it.
I know how to convert frames to video using AVFoundation, but not how to turn my 3D scene into frames without doing it in real time.
Can someone point me to where I should look?
Thank you so much!
I adapted my answer here and the Apple Metal game template to create this sample, which demonstrates how to record a video file directly from a sequence of frames rendered by Metal.
Since all rendering in Metal draws to a texture, it's not too hard to adapt normal Metal code so that it's suitable for rendering offline into a movie file. To recap the core recording process (sketched in code after these steps):
Create an AVAssetWriter that targets your URL of choice
Create an AVAssetWriterInput of type .video so you can write video frames
Wrap an AVAssetWriterInputPixelBufferAdaptor around the input so you can append CVPixelBuffers as frames to the video
After you start recording, copy the pixels of each rendered frame texture into a pixel buffer obtained from the adaptor's pixel buffer pool.
When you're done, mark the input as finished and finish writing to the asset writer.
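Here is a minimal sketch of that setup in Swift, targeting an H.264 MP4; videoURL, width, and height are placeholders for your own values, and error handling is omitted:

import AVFoundation

// Create a writer targeting the output URL (videoURL is a placeholder).
let assetWriter = try AVAssetWriter(outputURL: videoURL, fileType: .mp4)

// A video input configured for H.264 at the render size.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: width,
    AVVideoHeightKey: height
]
let input = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
input.expectsMediaDataInRealTime = false   // we're rendering offline, not live

// The adaptor lets us append CVPixelBuffers and hands us a pixel buffer pool.
// 32BGRA matches a .bgra8Unorm Metal texture.
let sourceAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: width,
    kCVPixelBufferHeightKey as String: height
]
let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                   sourcePixelBufferAttributes: sourceAttributes)

assetWriter.add(input)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)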
As for driving the recording, since you aren't getting delegate callbacks from an MTKView or CADisplayLink, you need to do it yourself. The basic pattern looks like this:
for t in stride(from: 0, through: duration, by: frameDelta) {
    draw(in: renderBuffer, depthTexture: depthBuffer, time: t) { (texture) in
        recorder.writeFrame(forTexture: texture, time: t)
    }
}
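And here's what writeFrame(forTexture:time:) might look like inside; this is a sketch of the approach rather than the sample's exact code, and it assumes the rendered texture is .bgra8Unorm to match the pixel buffer format above:

func writeFrame(forTexture texture: MTLTexture, time: TimeInterval) {
    guard let pool = adaptor.pixelBufferPool else { return }

    // Grab a pixel buffer from the adaptor's pool.
    var maybePixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &maybePixelBuffer)
    guard let pixelBuffer = maybePixelBuffer else { return }

    // Copy the rendered pixels into the buffer.
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(baseAddress, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)

    // Append the frame at its presentation time (production code should check
    // the input's isReadyForMoreMediaData before appending).
    let presentationTime = CMTime(seconds: time, preferredTimescale: 240)
    _ = adaptor.append(pixelBuffer, withPresentationTime: presentationTime)

    CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
}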
If your rendering and recording code is asynchronous and thread-safe, you can run this on a background queue to keep your interface responsive. You could also add a progress callback to update your UI if rendering takes a long time.
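For example, a minimal sketch (renderAllFrames and progressView are hypothetical names, not from the sample):

// Run the offline render/record loop off the main thread and report progress.
DispatchQueue.global(qos: .userInitiated).async {
    renderAllFrames { progress in          // hypothetical wrapper around the loop above
        DispatchQueue.main.async {
            progressView.progress = Float(progress)   // progress in 0...1
        }
    }
}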
Note that since you're not running in real-time, you'll need to ensure that any animation takes into account the current frame time (or the timestep between frames) so things run at the proper rate when played back. In my sample, I do this by just having the rotation of the cube depend directly on the frame's presentation time.
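In code terms, that looks something like this (rotationRate is an assumed constant; matrix4x4_rotation is the helper function from Apple's Metal game template):

// The angle is a pure function of the presentation time t, so playback runs
// at the correct rate no matter how fast the frames were rendered.
let rotationRate: Float = .pi / 2          // quarter turn per second (assumed)
let angle = rotationRate * Float(t)
let modelMatrix = matrix4x4_rotation(radians: angle, axis: SIMD3<Float>(0, 1, 0))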
I am using Unity to create movies. Right now, I create a scene, play it, record it in real time, and export the video file. But I'd like to render the scene as fast as possible and reduce the time it takes to create the movie.
How can I render simple scenes faster than real time in Unity? For example, if I have a 10-second movie at 24 frames/second and my machine can render 240 frames in a second, I'd like to finish the job in one second rather than wait for 10 seconds. Working with framerate and time might do the job, but I was wondering if there is a better/simpler way to do it.
Unreal Engine's movie capture renders simple scenes faster than real time without me changing anything, and I'd like to know if there is something similar in Unity.
Update: You can disable Cap FPS to speed up rendering if you are using the Unity recorder. I am using something else because I want to record the video in a Unity build.
I'm currently working on a 2D pixel Jump'n'Run. I want the player to be able to "buy" new skins for the player-character. I have multiple sprite-sheets. They all have the same structure. I'm using sprite animations.
How can I change the sprite-sheet at runtime? I found the following solution, but it's very resource intense: https://youtu.be/HM17mAmLd7k?t=1818
Sincerely,
Julian
The reason it's so resource intensive in the video is that all the sprites are loaded in each LateUpdate(), which runs once per frame. The script grabs every sprite in the sprite-sheet and loads them all every frame, so that if spriteSheetName ever changes, the renderer is updated on the next frame.
I don't believe that's necessary, and in the video he mentions that it's just being used as an example. What I'd do is move that code out of the LateUpdate() method and into its own method that is called only when the user wants to change the sprite-sheet. So instead of mindlessly loading the sprites from the sprite-sheet each frame, you'll only load them when the user selects a new one.
That should drastically cut down the cost of this script, because you're no longer loading all the sprites in a sprite-sheet and looping through each of their renderers on every single frame.
How can I capture frames in the background while a Unity game is running?
I already know about this approach:

cam.Render();
RenderTexture.active = cam.targetTexture;  // make the camera's target the source for ReadPixels
Texture2D image = new Texture2D(cam.targetTexture.width, cam.targetTexture.height);
image.ReadPixels(new Rect(0, 0, cam.targetTexture.width, cam.targetTexture.height), 0, 0);

and then converting the texture into an image using EncodeToPNG/EncodeToJPG.
But I've found that the extra cam.Render() call, the PNG encoding, and capturing the frames this way slow the operation down drastically; it takes a huge amount of time.
How can I get the textures or frames directly while the game is running, maybe from the OpenGL calls the GPU is executing?
Does anyone have any idea how to achieve this?
Have you tried using Unity's Video Capture API to record gameplay? There are tutorials and guides you can find, but the Unity documentation is a good place to start.
https://docs.unity3d.com/Manual/windowsholographic-videocapture.html
It's flexibly implemented, so you can work with whatever requirements you have in terms of fps, resolution, etc., based on your application and how it's set up.
I am using this example on Gaussian mixture models.
I have a video displaying moving cars, but it's on a street that isn't very busy. A few cars go past every now and again, but the vast majority of the time there isn't any motion in the background. It gets pretty tedious watching nothing moving, so I would like to cut that time out. Is it possible to remove the still frames from the video, only leaving the motion frames? I guess it would essentially crop the video.
The example you give uses a foreground detector. A still frame should not have any foreground pixels detected, so you can choose to skip such frames when building a demo video of your results.
You can build your new video with a rule of the type: if N frames in a row contain no foreground, do not write those frames to the output video.
This is just an idea...
I am building a 2D OpenGL ES application for iPad. It displays a background texture with numerous textures on top of it, which are always in motion.
Every frame, their locations are recalculated based on the time delta and their speed, and the whole scene renders successfully at 60 fps. Still, as the movement speed of the sprites rises, things start to look stuttery.
Any ideas? Are there inherent problems with what I'm doing? Are there known design patterns for smooth animation?
Try computing the time delta as an average of the last N frames, because some frames can take more time than others. Do you use the time delta for your animations? It is very important that you do! Also, try to load all resources at load time instead of loading them when you first use them; that too can slow down some frames.
If you take a look at the time deltas, you'll probably find they're not very consistent frame-to-frame. This probably isn't because the frames are taking different amounts of time to render, it's just an artifact of the batched, pipelined nature of the GLES driver.
Try using some kind of moving average of the time deltas instead: http://en.wikipedia.org/wiki/Moving_average
Note that even though the time deltas you're observing in your application aren't very consistent, the fact that you're getting 60fps implies the frames are probably getting displayed at fixed intervals (I'm assuming you're getting one frame per display v-sync). What this means is that the delay between your application making GL calls and the resulting frame appearing on the display is changing. It's this change that you're trying to hide with a moving average.
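If it helps, here's a minimal sketch of that idea in Swift (your app may well be Objective-C, and FrameTimer and the 30-frame window are my own choices, not anything standard):

import QuartzCore

// Smooths per-frame time deltas with a simple moving average.
final class FrameTimer {
    private var deltas: [CFTimeInterval] = []
    private var lastTimestamp: CFTimeInterval?
    private let windowSize = 30   // arbitrary; tune for your app

    // Call once per frame, e.g. with CACurrentMediaTime(), and use the
    // returned delta to advance your animations.
    func smoothedDelta(now: CFTimeInterval) -> CFTimeInterval {
        defer { lastTimestamp = now }
        guard let last = lastTimestamp else { return 1.0 / 60.0 }  // assume 60 Hz on the first frame
        deltas.append(now - last)
        if deltas.count > windowSize { deltas.removeFirst() }
        return deltas.reduce(0, +) / CFTimeInterval(deltas.count)
    }
}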