Custom Buffer Rendering in NextLevel's slow motion mode - Swift

There's a video capture library in Swift called "NextLevel".
According to its description, https://github.com/NextLevel/NextLevel,
it supports Custom Buffer Rendering.
But I'd like to know whether that is supported in slow motion mode. So far I've tried to use it without luck: it worked for normal recording, but not for slow motion.
Am I missing something?
My goal is to add a logo/watermark to the recorded video.
Any help would be much appreciated.
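
The compositing step itself is independent of NextLevel: whatever custom buffer rendering hook the library invokes (in normal or slow motion mode) should hand you a CVPixelBuffer per frame, and you can draw the logo into that buffer with Core Image. Below is a minimal sketch; the WatermarkRenderer class and the idea of calling render(into:) from the rendering callback are my own illustration, not NextLevel API.

    import AVFoundation
    import CoreImage
    import UIKit

    // Generic Core Image compositing helper, not NextLevel API. Call render(into:)
    // from whatever per-frame custom buffer rendering callback the library exposes.
    final class WatermarkRenderer {
        private let context = CIContext()   // reuse; creating one per frame is expensive
        private let logo: CIImage

        init?(logo: UIImage) {
            guard let ciLogo = CIImage(image: logo) else { return nil }
            self.logo = ciLogo
        }

        // Draws the logo near the top-left corner of the frame, writing back in place.
        func render(into pixelBuffer: CVPixelBuffer) {
            let frame = CIImage(cvPixelBuffer: pixelBuffer)
            let offset = CGAffineTransform(translationX: 16,
                                           y: frame.extent.height - logo.extent.height - 16)
            let output = logo.transformed(by: offset).composited(over: frame)
            context.render(output, to: pixelBuffer)
        }
    }

Reusing a single CIContext matters here because the callback runs once per frame.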

Related

How to record video with ARCore with Unity?

I have been stuck on this problem for over a month now. I just need to record the video feed when people are using the AR app.
There are several options:
1. Take a screenshot in Unity for every frame.
I tried taking a screenshot every frame. This is way too slow; the fps is only 5.
Then I tried saving the textures to an array and encoding them to images afterwards. This takes a lot of memory and causes a significant frame drop on mobile phones; the fps is around 10.
If anyone has a great idea for this method, please let me know.
2. Use native plugins to record video.
I haven't found any solutions for this one. I am afraid that it may conflict with ARCore.
I know that there is an Android solution, but ideally I want to use Unity. Any help is appreciated, thank you!
3. Save the texture from the texture reader API provided by the ARCore Computer Vision example.
There is a Computer Vision example in the directory, and I can get the texture directly from the GPU with its API.
However, the fps is still low. With its edge detector example, the fps is around 15. I succeeded in saving those frames to a local directory on another thread, but the fps is still not acceptable. The bottom line is 720p at 30 fps.
PS: I just need to save the frames. I can handle encoding them into videos.
PPS: Recording just the camera feed, or recording the camera feed together with the augmented objects, are both okay. Either one is great.
You can easily implement video recording AND sharing using the (really great) NatCorder Unity asset (asset store link) and the related NatShare API. I did this very same thing in my own ARCore experiment/"game."
Edit: you may have to implement this workaround to get a smooth framerate.

What is the difference between AVCapture and the iPhone's default camera?

My app uses AVCapture to capture images; this is my supervisor's idea. But I've researched on the internet and can't find any information about the difference between AVCapture and the default camera of the iPhone or iPod (tap to focus, camera quality, ...). Please tell me what the advantages of the AVFoundation framework are.
With AVCaptureSession you can give your recorder a lot more functionality. You can customize nearly every aspect of the recording session, and you can even get the raw data straight from the camera. The code can get quite complex, however, and nothing is taken care of for you.
With the iOS default image capture controller you will be stuck with a few presets, and you will only have a little bit of camera functionality. But it is really simple to implement.
Updated with a link to Apple sample code:
If you want to see how to use AVFoundation for your camera recording, you will probably like this app from Apple.
Like I said, you will have to do everything manually, so be prepared for a fair amount of work.
AVCam demo app by Apple
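To make the comparison concrete, here is a minimal sketch of an AVCaptureSession that delivers raw frames to a delegate, which is the kind of access the default picker never gives you. Permissions, error handling, and preset/focus configuration are omitted, and the class name is just for illustration.

    import AVFoundation

    final class RawFrameCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let queue = DispatchQueue(label: "camera.frames")

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)
            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: queue)

            session.beginConfiguration()
            if session.canAddInput(input) { session.addInput(input) }
            if session.canAddOutput(output) { session.addOutput(output) }
            session.commitConfiguration()
            session.startRunning()
        }

        // Called for every frame: this is the raw data the default camera UI never exposes.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Inspect or modify CMSampleBufferGetImageBuffer(sampleBuffer) here.
        }
    }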

Export CoreAnimation to video file

I wrote a basic animation framework using Core Animation on the iPhone. It supports pausing and resuming animations, and can also run animations at a specified time. My basic problem is that I cannot find a way to export my animations to a video file (.mov, .avi, etc.). I have read about AVAssetWriter and AVComposition but cannot understand how to make them work in my case.
Searching the internet, the closest I got was reading my animations frame by frame. Even for that I could not find a way to make it work, nor could I find whether the iPhone SDK has anything for this kind of frame-by-frame reading in my case. I also came across this question on Stack Overflow and still could not figure it out (sorry if it seems that I am a beginner in these things; I am not, I just could not understand some parts).
If anyone knows how to make this work, or even how to do something similar, please share. And if there is no way, but you know another approach, e.g. using OpenGL ES instead of Core Animation, please share that too.
Check this presentation: http://www.slideshare.net/invalidname/advanced-media-manipulation-with-av-foundation
Around page 84 he talks about adding animation to video compositions. I believe this will help you get what you need.
EDIT: Specifically, you need to look at the animationTool of your video composition. This is an AVVideoCompositionCoreAnimationTool object that allows you to add a Core Animation layer to your output video. See also this question:
Recording custom overlay on iPhone
I am sorry I do not have time to give you a full code snippet, but basically you set this animation tool on your video composition, then create an AVAssetExportSession and set its videoComposition to the one you made.
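As a rough sketch of that flow, assuming your animation lives in a CALayer you can hand over: build a video composition, point its animationTool at a parent layer that contains both the video layer and your animation layer, then export. Sizing, timing, and error handling are glossed over, and the asset, overlay layer, and output URL are placeholders.

    import AVFoundation
    import QuartzCore

    func export(asset: AVAsset, overlay: CALayer, to outputURL: URL,
                completion: @escaping () -> Void) {
        let composition = AVMutableVideoComposition(propertiesOf: asset)

        // The video and parent layers must match the composition's render size.
        let frame = CGRect(origin: .zero, size: composition.renderSize)
        let videoLayer = CALayer()
        let parentLayer = CALayer()
        videoLayer.frame = frame
        parentLayer.frame = frame
        overlay.frame = frame
        parentLayer.addSublayer(videoLayer)   // the source video is rendered into this layer
        parentLayer.addSublayer(overlay)      // your Core Animation content, drawn on top

        // Animations intended for export should begin at AVCoreAnimationBeginTimeAtZero,
        // not CACurrentMediaTime(), and keep isRemovedOnCompletion = false.
        composition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, in: parentLayer)

        guard let exporter = AVAssetExportSession(asset: asset,
                                                  presetName: AVAssetExportPresetHighestQuality) else { return }
        exporter.videoComposition = composition
        exporter.outputURL = outputURL
        exporter.outputFileType = .mov
        exporter.exportAsynchronously(completionHandler: completion)
    }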

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However, taking a screenshot causes a small stutter in the gameplay (which otherwise runs smoothly at 60 fps). How can I modify the method from Apple to take lower quality screenshots (and so eliminate the stutter caused by taking the screenshot)?
Edit #1: the end goal is to create a video of the game play using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions etc. instead and redrawing the sequence in 3D. This also makes it possible to replay sequences from other viewpoints.
If you want to show the gameplay on, for example, YouTube, you can look into recording it with another device/camera, or recording some gameplay running in the simulator using screen capture software such as ScreenFlow.
The Apple method uses glReadPixels(), which just pulls all the data across from the display buffer, and probably triggers sync barriers, etc., between GPU and CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video, or do you want the user to be able to trigger this behavior in the production code? If the former, you could do all sorts of trickery to speed it up: render everything at a smaller size, don't present at all and just capture frames based on a recording of the input data fed into the game, or, going even further, run the whole simulation at half speed to get all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
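On the AVAssetWriter side mentioned in the question's edit, one concrete saving is to take CVPixelBuffers from the writer adaptor's own pool instead of allocating a new one per frame, and to write at a reduced size if full resolution isn't needed. A sketch under those assumptions; the pixel format and stride handling are simplified, so check them against what glReadPixels actually gives you.

    import AVFoundation
    import CoreVideo

    final class FrameWriter {
        private let writer: AVAssetWriter
        private let input: AVAssetWriterInput
        private let adaptor: AVAssetWriterInputPixelBufferAdaptor

        init(outputURL: URL, width: Int, height: Int) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: width,
                AVVideoHeightKey: height
            ])
            input.expectsMediaDataInRealTime = true
            adaptor = AVAssetWriterInputPixelBufferAdaptor(
                assetWriterInput: input,
                sourcePixelBufferAttributes: [
                    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
                    kCVPixelBufferWidthKey as String: width,
                    kCVPixelBufferHeightKey as String: height
                ])
            if writer.canAdd(input) { writer.add(input) }
            writer.startWriting()
            writer.startSession(atSourceTime: .zero)
        }

        // Copies one frame of BGRA bytes into a pooled buffer and appends it.
        // Assumes the source stride matches the pooled buffer's; copy row by row if not.
        func append(frameBytes: UnsafeRawPointer, bytesPerRow: Int, height: Int, at time: CMTime) {
            guard input.isReadyForMoreMediaData,
                  let pool = adaptor.pixelBufferPool else { return }
            var pixelBuffer: CVPixelBuffer?
            CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
            guard let buffer = pixelBuffer else { return }

            CVPixelBufferLockBaseAddress(buffer, [])
            if let dest = CVPixelBufferGetBaseAddress(buffer) {
                memcpy(dest, frameBytes, bytesPerRow * height)
            }
            CVPixelBufferUnlockBaseAddress(buffer, [])

            adaptor.append(buffer, withPresentationTime: time)
        }
    }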
If all else fails, get one of these:
http://store.apple.com/us/product/MC748ZM/A
and then convert the composite video to digital through some sort of external device.
I've done this when converting VHS movies to DVD a long time ago.

How to capture motion with the iPhone camera

In my application, as soon as the user opens the camera, it should capture an image whenever the current frame differs from the previous one, and the camera should always stay in capturing mode.
This should be done automatically, without any user interaction. Please help me out as soon as possible, as I couldn't find a solution.
Thanks,
ravi
I don't think the iPhone camera can do what you want.
It sounds like you're doing a type of motion detection: comparing two snapshots taken at different times and seeing whether something has changed between the older and the newer image.
The camera is not set up to take photos automatically, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
Hmm, thinking about it, you might be able to detect motion by somehow measuring the frame differentials in the video compression. All video codecs save space by only registering the parts of the video that change from frame to frame. So a large change in the saved data would indicate a large change in the environment.
I have no idea how to go about doing that but it might give you a starting point.
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure whether the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
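As a non-OpenCV sketch of the same idea on current iOS, where AVCaptureVideoDataOutput does deliver a continuous stream of frames: compare a sparse grid of luminance samples between consecutive frames and trigger a capture when enough of them change. The 16-pixel sampling step and both thresholds below are arbitrary choices, not tuned values.

    import AVFoundation
    import CoreVideo

    // Fires `onMotion` when enough sampled pixels change between consecutive frames.
    // Attach it to an AVCaptureVideoDataOutput as its sample buffer delegate.
    final class MotionDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        var onMotion: (() -> Void)?
        private var previousSamples: [UInt8] = []

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            CVPixelBufferLockBaseAddress(buffer, .readOnly)
            defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

            // Assumes a biplanar YCbCr pixel format (the usual capture default); plane 0 is luma.
            guard let base = CVPixelBufferGetBaseAddressOfPlane(buffer, 0) else { return }
            let width = CVPixelBufferGetWidthOfPlane(buffer, 0)
            let height = CVPixelBufferGetHeightOfPlane(buffer, 0)
            let rowBytes = CVPixelBufferGetBytesPerRowOfPlane(buffer, 0)
            let bytes = base.assumingMemoryBound(to: UInt8.self)

            // Sample every 16th pixel to keep the per-frame cost tiny.
            var samples: [UInt8] = []
            for y in stride(from: 0, to: height, by: 16) {
                for x in stride(from: 0, to: width, by: 16) {
                    samples.append(bytes[y * rowBytes + x])
                }
            }

            defer { previousSamples = samples }
            guard previousSamples.count == samples.count else { return }

            // Count samples whose brightness changed noticeably; thresholds are arbitrary.
            let changed = zip(samples, previousSamples)
                .filter { abs(Int($0.0) - Int($0.1)) > 25 }
                .count
            if changed > samples.count / 10 {
                onMotion?()   // e.g. trigger a still capture here
            }
        }
    }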
You can take a screenshot to automatically capture the image, using the UIGetScreenImage function.