I am using Unity to create movies. Right now, I create a scene, play it, record it in real time, and export the video file. But I'd like to render the scene as fast as possible and reduce the time it takes to create the movie.
How can I render simple scenes faster than real time in Unity? For example, if I have a 10-second movie at 24 frames per second and my machine can render 240 frames per second, I'd like to finish the job in one second rather than wait for ten. Working with framerate and time might do the job, but I was wondering if there is a better/simpler way to do it.
Unreal Engine's Capture Movie feature renders simple scenes faster than real time without me changing anything, and I'd like to know if there is something similar in Unity.
Update: You can disable Cap FPS to speed up rendering if you are using the Unity Recorder. I am using something else, because I want to record the video in a Unity build.
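For reference, "working with framerate and time" as mentioned above maps onto Unity's built-in Time.captureFramerate: while it is non-zero, every rendered frame advances game time by exactly 1/captureFramerate seconds, decoupled from wall-clock time, so a fast machine gets through the movie faster than real time, even in a build. A minimal sketch (the output path and frame-count bookkeeping are my own assumptions):

using UnityEngine;

public class OfflineCapture : MonoBehaviour
{
    public int frameRate = 24;    // movie framerate
    public float duration = 10f;  // seconds of movie to capture

    void Start()
    {
        // Each rendered frame now advances Time.time by exactly
        // 1/frameRate, as fast as the machine can render.
        Time.captureFramerate = frameRate;
    }

    void Update()
    {
        int totalFrames = Mathf.RoundToInt(duration * frameRate);
        if (Time.frameCount <= totalFrames)
        {
            // Hypothetical output path; encode the saved frames into a
            // video afterwards (e.g. with ffmpeg) or feed them to your
            // own encoder inside the build.
            ScreenCapture.CaptureScreenshot($"Frames/frame_{Time.frameCount:D4}.png");
        }
    }
}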
I am using a prefab with an audio source attached to it; the source is only used when you click on the prefab, to play a short click sound. There is a scene in which I use this prefab about 50 times.
There is no problem at all, it works great, but I was just wondering: is it bad practice to have so many prefab instances, each one using its own audio source?
Thank you.
It depends on the use case, but in most cases you can't really avoid it (using more than one audio source). If you look at the inspector of the Audio Source component, you see a field for referencing the audio clip. So even if you have 50 Audio Source components, there is still just one audio file (in the case that you only want to play this single sound). The intention of this approach with multiple audio sources is to get a "physically realistic" feeling: as in real life, if you are out of range of an audio source, you won't hear it.
For example, if you have a game with around 50 enemies in the current scene, it's more or less necessary to attach an Audio Source component to each of them, because you want to hear only the enemies that are within your range.
If you have just one central audio source, it has to play everything, and in most cases that is more work than benefit. But a static game like a card game can work very well with this approach, so that you have only one GameObject holding an Audio Source component. If you have more than one sound effect, you have to change the referenced AudioClip programmatically every time you want to play a sound that isn't the currently selected one (see the sketch below).
So basically it's not really bad practice, because in most cases it is more or less intended that you have more than one audio source.
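To illustrate the single-source approach, a minimal sketch (the clip fields and method name are my own):

using UnityEngine;

// One central AudioSource for a static game such as a card game.
[RequireComponent(typeof(AudioSource))]
public class CentralAudio : MonoBehaviour
{
    public AudioClip clickSound;
    public AudioClip dealSound;

    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void Play(AudioClip clip)
    {
        // Swap the referenced clip as described above...
        if (source.clip != clip)
            source.clip = clip;
        source.Play();

        // ...or use source.PlayOneShot(clip) instead, which plays a clip
        // without replacing the assigned one and allows sounds to overlap.
    }
}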
I'm new to Metal, and I was able to achieve a simple scene: a quad rotating on the screen. But I want to export a one-minute video/frames without having to do it in real time.
So think of the user just opening the app and tapping an 'export' button, and the CPU/GPU going full speed to output a one-minute video/frames of the quad rotating, without previewing it.
I know how to convert frames to video using AVFoundation, but not how to turn my 3D scene into frames without doing it in real time.
Can someone point me to where I should look?
Thank you so much!
I adapted my answer here and the Apple Metal game template to create this sample, which demonstrates how to record a video file directly from a sequence of frames rendered by Metal.
Since all rendering in Metal draws to a texture, it's not too hard to adapt normal Metal code so that it's suitable for rendering offline into a movie file. To recap the core recording process:
Create an AVAssetWriter that targets your URL of choice
Create an AVAssetWriterInput of type .video so you can write video frames
Wrap an AVAssetWriterInputPixelBufferAdaptor around the input so you can append CVPixelBuffers as frames to the video
After you start recording, copy the pixels from each rendered frame texture into a pixel buffer obtained from the adaptor's pixel buffer pool.
When you're done, mark the input as finished and finish writing to the asset writer.
As for driving the recording, since you aren't getting delegate callbacks from an MTKView or CADisplayLink, you need to do it yourself. The basic pattern looks like this:
// Step through presentation times at the movie's frame interval,
// rendering and recording each frame as fast as the GPU allows.
for t in stride(from: 0, through: duration, by: frameDelta) {
    draw(in: renderBuffer, depthTexture: depthBuffer, time: t) { (texture) in
        recorder.writeFrame(forTexture: texture, time: t)
    }
}
If your rendering and recording code is asynchronous and thread-safe, you can throw this on a background queue to keep your interface responsive. You could also throw in a progress callback to update your UI if your rendering takes a long time.
Note that since you're not running in real-time, you'll need to ensure that any animation takes into account the current frame time (or the timestep between frames) so things run at the proper rate when played back. In my sample, I do this by just having the rotation of the cube depend directly on the frame's presentation time.
I am using the VideoPlayer API that Unity provides for playing a video on a surface texture. When I change the video clip, the FPS in the editor drops sharply; switching to and loading the new video clip takes a lot of time (500-600 ms).
videoPlayer.clip = videoClips[vindex]; // the line used for changing the video clip
I put a timer before and after this line and found that it consumes a huge amount of time.
Can anyone please tell me how to reduce the time and increase the FPS? Any alternative way or suggestion will be highly appreciated. (Platform: Unity Editor on Windows)
If the videos are really small, you can consider using multiple VideoPlayers to play each video at the same time. Set the renderMode to RenderTexture, and switch the RenderTexture on the displaying material instead of switching the videoClip:
surface.GetComponent<MeshRenderer>().material.mainTexture = videoPlayers[vindex].targetTexture;
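A minimal sketch of that idea; the field names are my own, and it assumes each player already has a RenderTexture assigned as its targetTexture (e.g. in the inspector):

using UnityEngine;
using UnityEngine.Video;

public class VideoSwitcher : MonoBehaviour
{
    public VideoPlayer[] videoPlayers; // one player per clip, each with its own RenderTexture
    public MeshRenderer surface;

    void Start()
    {
        // Keep every clip decoded and playing up front, so switching
        // later is just a cheap texture swap.
        foreach (var player in videoPlayers)
        {
            player.renderMode = VideoRenderMode.RenderTexture;
            player.Play();
        }
    }

    public void Show(int index)
    {
        // Swapping which RenderTexture the material samples avoids the
        // expensive re-open/re-buffer that assigning a new clip causes.
        surface.material.mainTexture = videoPlayers[index].targetTexture;
    }
}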
I'm using Unity3D 5.3. I'm working on a 2D endless-runner game. It works normally on PC, but when I build it to my phone, all of my GameObjects shake while they move. The GameObjects are in a respawn loop, and I move the camera by increasing its transform position on x. So when my camera is in motion, all of the other objects look like they are shaking a lot, and my game runs slowly on my phone as a result. I tried my game on several Samsung phones: it works normally on some of them, but even on some Samsung devices it still shakes. So I don't understand what the problem is. Can you help me with this?
One thing you can do is start optimising, if you have a game that is either finished or close to it. If you open the profiler, click "Deep Profile" and then run it in the editor on your PC, you'll get a very detailed breakdown of what is using the most resources within your game. Generally it's something like draw calls or the physics engine doing unnecessary work.
Another thing that might help is to use Time.deltaTime, if you aren't already. If the script that increases the transform doesn't multiply the increase by Time.deltaTime, then you're moving your camera by an amount per frame rather than per second, which means that if you have any framerate drops for any reason, the camera will move a smaller distance, and that could be throwing off some of your other calculations. Using Time.deltaTime won't improve your framerate, but it will make your game framerate-independent, which is very important (see the sketch below).
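For illustration, the difference looks like this (the speed value and script are hypothetical):

using UnityEngine;

public class CameraScroller : MonoBehaviour
{
    public float speed = 3f; // world units per second

    void Update()
    {
        // Frame-dependent (bad): the camera covers more distance per
        // second on fast devices and less on slow ones.
        // transform.position += Vector3.right * speed;

        // Frame-independent (good): Time.deltaTime scales the step by the
        // duration of the last frame, so movement is a constant speed
        // per second regardless of framerate.
        transform.position += Vector3.right * speed * Time.deltaTime;
    }
}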
I'm currently using Box2D with cocos2d on iPhone. I have quite a complex scene set up, and I want the end user to be able to record it as video as part of the app. I have implemented a recorder using AVAssetWriter etc., and have managed to get it recording frames grabbed from OpenGL pixel data.
However, this video recording seems to a) slow the app down a bit, but more importantly b) record only a few frames per second at best.
This led me to the idea of rendering the Box2D scene offline, manually firing ticks and grabbing an image every tick. However, dt could be an issue here.
Just wondering if anyone has already done this, or if anyone has any better ideas?
A good solution, I guess, would be to use a screen-recording tool like ScreenFlow or similar...
I think your Box2D idea is a good one... however, you would want to use a fixed timestep. If you use the raw dt, the steps in the physics simulation will be too big, and Box2D will be unstable and jittery. (A sketch of the capture loop follows below.)
http://gafferongames.com/game-physics/fix-your-timestep/
The frame rate will take a hit, but you'll get every frame. I don't think you'll be able to record every frame and still maintain a steady frame rate - that seems to be asking a lot of the hardware.
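A minimal sketch of that offline capture loop, written here in C# against a Box2D port such as Box2DX (the World binding, iteration counts, and CaptureFrame helper are stand-ins for your own Box2D and OpenGL code):

using Box2DX.Dynamics; // a C# Box2D port; adjust to your own binding

public class OfflineRecorder
{
    // Step the simulation by a fixed dt and grab a frame after every
    // step, completely decoupled from wall-clock time.
    public void Record(World world, float duration)
    {
        const float fixedDt = 1f / 30f;            // fixed step keeps Box2D stable
        int totalFrames = (int)(duration / fixedDt);

        for (int frame = 0; frame < totalFrames; frame++)
        {
            // Standard Box2D step: timestep, velocity iterations,
            // position iterations.
            world.Step(fixedDt, 8, 3);

            CaptureFrame(frame);
        }
    }

    void CaptureFrame(int frame)
    {
        // Placeholder: render the scene, read back the pixels, and hand
        // them to the AVAssetWriter-based recorder mentioned above.
    }
}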