I have a program that lets the user record video. I have to provide options like Black & White, Crystal, etc., the way the "Viddy" iPhone application applies effects to recorded video. How can I achieve this programmatically?
Please guide me.
Thank you!
Here's one way:
1. Start capturing video frames with AVCaptureSession + AVCaptureVideoDataOutput
2. Convert the frames to OpenGL textures for display
3. Write GLSL shaders for each desired effect and apply them to the textures from step 2
4. Read back the textures with the effect applied and write them to a movie file
5. Optimise until performance is adequate
6. goto 5
5 is probably the most important step. You'll need to tune and tweak the algorithm, video quality, texture size, frame rates, shaders, etc.
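To make step 3 concrete: the Black & White case is just a per-pixel luminance conversion. Here is a minimal sketch of that math; it's written in plain C# purely for illustration, since in the pipeline above it would live in the GLSL fragment shader running on the GPU.

    // Illustration only: Rec.601 luma conversion of an RGBA byte buffer.
    // In the actual pipeline this math belongs in the fragment shader of step 3.
    static class Grayscale
    {
        public static void Apply(byte[] rgba)
        {
            for (int i = 0; i < rgba.Length; i += 4)
            {
                byte luma = (byte)(0.299f * rgba[i] + 0.587f * rgba[i + 1] + 0.114f * rgba[i + 2]);
                rgba[i] = rgba[i + 1] = rgba[i + 2] = luma;   // alpha (rgba[i + 3]) is left untouched
            }
        }
    }

Other effects ("Crystal" and the like) are the same idea with different per-pixel or per-neighbourhood math.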
Enjoy!
How to capture the background frames while a unity game is running?
I already know about

    cam.Render();                                              // render the camera into its target RenderTexture
    RenderTexture.active = cam.targetTexture;                  // ReadPixels reads from the currently active RenderTexture
    Texture2D image = new Texture2D(cam.targetTexture.width, cam.targetTexture.height);
    image.ReadPixels(new Rect(0, 0, cam.targetTexture.width, cam.targetTexture.height), 0, 0);

and then converting the texture into an image using EncodeToPNG/EncodeToJPG.
But I have seen that this extra cam.Render(), the PNG encoding, and capturing frames this way slow the operation down drastically; it takes a huge amount of time.
How can I get the textures or frames directly while the game is running, maybe from the OpenGL calls the GPU is using?
Does anyone have any idea how to achieve this?
Have you tried using Unity's Video Capture API to record gameplay? There are tutorials and walkthroughs you can find, but the Unity documentation is a good place to start.
https://docs.unity3d.com/Manual/windowsholographic-videocapture.html
It's flexibly implemented, so you can work around whatever requirements you have in terms of fps, resolution, etc., based on your application and how it's set up.
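For reference, here is a trimmed-down sketch based on that documentation page. It assumes a Unity version where the class lives in UnityEngine.XR.WSA.WebCam (newer releases moved it to UnityEngine.Windows.WebCam), and this particular API targets Windows Mixed Reality / HoloLens builds, so treat it as a starting point rather than a drop-in solution; the class name and output file below are placeholders.

    using System.IO;
    using System.Linq;
    using UnityEngine;
    using UnityEngine.XR.WSA.WebCam;   // UnityEngine.Windows.WebCam in newer Unity versions

    public class GameplayRecorder : MonoBehaviour
    {
        VideoCapture m_VideoCapture;

        void Start()
        {
            // false = do not composite holograms into the recording
            VideoCapture.CreateAsync(false, OnVideoCaptureCreated);
        }

        void OnVideoCaptureCreated(VideoCapture videoCapture)
        {
            if (videoCapture == null) return;
            m_VideoCapture = videoCapture;

            // Pick the largest supported resolution and its highest frame rate.
            Resolution resolution = VideoCapture.SupportedResolutions
                .OrderByDescending(r => r.width * r.height).First();
            float frameRate = VideoCapture.GetSupportedFrameRatesForResolution(resolution)
                .OrderByDescending(f => f).First();

            var parameters = new CameraParameters
            {
                frameRate = frameRate,
                cameraResolutionWidth = resolution.width,
                cameraResolutionHeight = resolution.height,
                pixelFormat = CapturePixelFormat.BGRA32
            };

            m_VideoCapture.StartVideoModeAsync(parameters,
                VideoCapture.AudioState.ApplicationAndMicAudio,
                startResult => m_VideoCapture.StartRecordingAsync(
                    Path.Combine(Application.persistentDataPath, "gameplay.mp4"),
                    recordResult => Debug.Log("Recording started")));
        }
    }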
I've been getting bad results trying to render a video from a Unity game.
I imagined (and I could easily be wrong) that I need to capture the screen every 33 ms to get roughly 30 images per second for Blender. I first tried recording while playing the game in the Unity editor, with the Game view set to a 16:9 aspect ratio, and the result was far from what I expected (bad quality).
I then built the game and ran it at 1280x1024. I wanted 1920x1080, but I don't understand how to get that, because in the Unity documentation for CaptureScreenshot the last parameter seems to mean "how much bigger to make it", and I could not figure out "relative to what exactly". I tried setting it to 4 and 8 and could not control the result.
I recorded 90 images at 1280x1024 and added them to Blender in the Video Editing view. When adding the images, I was confused about what to choose in the left sidebar for Start Frame and End Frame, because I don't know whether to count every picture as a frame. If so, then 90 images should produce a 3 s video, which is not what I get.
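For context, here is the kind of capture script I mean (the class and folder names are placeholders, and older Unity versions expose the same call as Application.CaptureScreenshot instead of ScreenCapture.CaptureScreenshot). If I understand the docs correctly, Time.captureFramerate makes game time advance exactly 1/30 s per rendered frame, and the superSize argument multiplies the current game resolution, so running the built player at 1920x1080 with superSize 1 should give 1080p images.

    using System.IO;
    using UnityEngine;

    // Placeholder capture script: writes one numbered PNG per rendered frame.
    public class FrameDumper : MonoBehaviour
    {
        int frameCount;

        void Start()
        {
            Time.captureFramerate = 30;           // game time advances 1/30 s per frame, regardless of real time
            Directory.CreateDirectory("Frames");
        }

        void Update()
        {
            // superSize of 1 captures at the resolution the player is running at.
            ScreenCapture.CaptureScreenshot(Path.Combine("Frames", "frame" + frameCount.ToString("D4") + ".png"), 1);
            frameCount++;
        }
    }

If 90 such images go into Blender as an image sequence, then Start Frame 1 and End Frame 90 at 30 fps should, as far as I understand, give the 3 s clip I'm expecting.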
Back to Blender, the settings I change:
File format: H.264
Encoding: Preset H.264 (format)
Resolution/Quality to 100% (it is X:1920, Y:1080)
Start frame: 1
End frame: ??? (anything I try ends in an unexpected result)
Frame rate: 30
Other than the output path and file name, I didn't change anything else. What I get is roughly a 3 s video, but the character movements are not realistic; it looks like the video is being fast-forwarded.
How can I achieve a good result? Or can you point me to something to read that would help me understand what to do, both for obtaining screenshots from Unity and for the settings in Blender?
I am going to build an FPS video game. While developing my game, this question came to mind. Every video game developer spends a great deal of time and effort making their game's environment more realistic and life-like. So my question is:
Can we use HD or 4K real videos as our game's environment? (Like what we see on Google Street View, but with higher quality.)
If we can, how do we program the game engine?
Thank you very much!
The simple answer to this is NO.
Of course, you can extract textures from the video by capturing frames from it, but that's it. Once you capture the texture, you still need a way to make a 3D model/mesh you can apply the texture to.
Now, there have been many companies working on video-to-3D-model converters. That technology exists, but it is aimed more at film work. Even with it, the 3D models generated from a video are not accurate, and they are not meant to be used in a game, because they end up with so many polygons that they will easily choke your game engine.
Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh for that high-quality texture, and clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You would then have to generate UVs for the mesh so that the extracted image can be applied to it.
Finally, each one of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I've also made this sound easier than it is. What you can do with the video is use it as a reference to model your 3D environment in a 3D application. That's it.
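To illustrate the distinction, the easy half is putting video frames onto geometry you already have. A minimal Unity sketch (Unity is used only as an example engine; the class name and path are placeholders) that streams a clip onto an existing mesh's material looks like this, and the mesh itself still has to be modelled by hand:

    using UnityEngine;
    using UnityEngine.Video;

    // Placeholder example: plays a video file onto whatever mesh this component is attached to.
    public class VideoBackdrop : MonoBehaviour
    {
        public string videoUrl = "file:///path/to/clip.mp4";   // placeholder path

        void Start()
        {
            var player = gameObject.AddComponent<VideoPlayer>();
            player.url = videoUrl;
            player.renderMode = VideoRenderMode.MaterialOverride;   // decoded frames go into this renderer's material
            player.targetMaterialRenderer = GetComponent<Renderer>();
            player.targetMaterialProperty = "_MainTex";
            player.isLooping = true;
            player.Play();
        }
    }

The hard half, turning those frames into accurate, game-ready 3D geometry, is exactly what the paragraphs above say you cannot do automatically.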
I am working on simultaneous camera streaming and recording using the MediaCodec API. I want to merge the frames from both cameras and feed them to rendering as well as to MediaCodec (as a Surface) for recording.
I do not want to create multiple EGLContexts; the same one should be used throughout.
I am using the Bigflake MediaCodec examples as a reference; however, I am not clear whether this is possible. Also, how do I bind the multiple textures? We need two textures for the two cameras.
Your valuable input will help me progress further. Currently I am stuck and unsure what to do next.
regards
Nehal
I'm currently using box2d with cocos2d on iPhone. I have quite a complex scene set up, and I want the end user to be able to record it as video as part of the app. I have implemented a recorder using the AVAssetWriter etc. and have managed to get it recording frames grabbed from OpenGL pixel data.
However, this video recording seems to a) slow down the app a bit, but more importantly b) only record a few frames per second at best.
This led me to the idea of rendering a Box2D scene, manually firing ticks and grabbing an image every tick. However, dt could be an issue here.
Just wondering if anyone has already done this, or if anyone has any better ideas?
A good solution I guess would be to use a screen recorder solution like ScreenFlow or similar...
I think your Box2D idea is a good one... however, you would want to use a fixed time step. If you use dt, the steps in the physics simulation will be too big, and Box2D will be unstable and jittery.
http://gafferongames.com/game-physics/fix-your-timestep/
The frame rate will take a hit, but you'll get every frame. I don't think you'll be able to record every frame and still maintain a steady frame rate - that seems to be asking a lot of the hardware.
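To make the fixed-step idea concrete, here is a rough sketch of the loop (written in C# only for illustration; world.Step, RenderScene and WriteFrame are stand-ins for b2World::Step, your GL drawing, and the AVAssetWriter recorder):

    // Sketch of an "offline" fixed-step record loop. The physics always advances by the
    // same dt, no matter how long rendering and recording actually take per frame.
    const float FixedDt = 1f / 60f;   // 60 Hz physics step
    const int StepsPerFrame = 2;      // two physics steps per captured frame => 30 fps video

    void RecordClip(int totalFrames)
    {
        for (int frame = 0; frame < totalFrames; frame++)
        {
            for (int i = 0; i < StepsPerFrame; i++)
                world.Step(FixedDt, velocityIterations, positionIterations);   // fixed dt keeps Box2D stable

            RenderScene();       // draw the scene with the updated body positions
            WriteFrame(frame);   // grab the pixels and append them to the movie file
        }
    }

Because every captured frame corresponds to exactly 1/30 s of simulated time, it doesn't matter that recording makes the app run slower than real time; the finished movie still plays back at the correct speed.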