How to record video with ARCore in Unity?

I have been stuck on this problem for over a month now. I just need to record the video feed when people are using the AR app.
There are several options:
1. Take a screenshot in Unity every frame.
I tried taking a screenshot every frame. This is far too slow: the fps drops to 5.
Then I tried saving the textures to an array and encoding them to images afterwards. This takes a lot of memory and causes a significant frame drop on mobile phones; the fps is around 10. (See the asynchronous readback sketch below.)
If anyone has a great idea for this method, please let me know.
2. Use native plugins to record video.
I haven't found any solution for this one. I am afraid that it may conflict with ARCore.
I know that there is an Android solution but ideally I want to use Unity. Any help is appreciated, thank you!
3. Save the texture from the texture reader API provided by ARCore's computer vision example.
There is a Computer Vision example in the SDK, and I can get the texture directly from the GPU with its API.
However, the fps is still low: with its edge detector example, the fps is around 15. I succeeded in saving those frames to a local directory on another thread, but the fps is still not acceptable. The bottom line is 720p at 30 fps.
PS: I just need to save the frames. I can handle encoding them into videos.
PPS: Recording just the camera feed, or recording the camera feed and the augmented objects together, are both okay. Achieving either one is great.
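For reference, here is a minimal sketch of the per-frame capture from option 1 done with asynchronous GPU readback instead of blocking screenshots. It assumes Unity 2018.1+ (for AsyncGPUReadback); the class name, the captureRT RenderTexture (which you would create and point your AR camera at), and the encoder hand-off are all hypothetical:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Rendering;

    // Minimal sketch: request non-blocking readbacks of a RenderTexture each
    // frame and queue the raw RGBA32 bytes for an encoder thread to consume.
    public class AsyncFrameGrabber : MonoBehaviour
    {
        public RenderTexture captureRT;   // hypothetical: assign your AR camera's target texture
        readonly Queue<byte[]> pendingFrames = new Queue<byte[]>();

        void LateUpdate()
        {
            // Non-blocking: the callback fires a few frames later, so the
            // render thread never stalls the way ReadPixels does.
            AsyncGPUReadback.Request(captureRT, 0, TextureFormat.RGBA32, OnReadback);
        }

        void OnReadback(AsyncGPUReadbackRequest request)
        {
            if (request.hasError) return;
            pendingFrames.Enqueue(request.GetData<byte>().ToArray());
            // Hand pendingFrames to your own encoding thread from here.
        }
    }

Because the readback is asynchronous, frames arrive a few frames late, but the main thread is never blocked, which is what kills the fps in the ReadPixels approach.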

You can easily implement video recording AND sharing using the (really great) NatCorder Unity asset (asset store link) and the related NatShare API. I did this very same thing in my own ARCore experiment/"game."
Edit: you may have to implement this workaround to get a smooth framerate.
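For what it's worth, the NatCorder recording loop looked roughly like this at the time; the namespaces and signatures below are from the NatCorder 1.x docs as I recall them, so treat them as assumptions and check the current asset version:

    using NatSuite.Recorders;          // assumption: NatCorder 1.x namespaces
    using NatSuite.Recorders.Clocks;
    using NatSuite.Recorders.Inputs;
    using UnityEngine;

    public class RecordingExample : MonoBehaviour
    {
        MP4Recorder recorder;
        CameraInput cameraInput;

        public void StartRecording()
        {
            // 720p at 30 fps, matching the question's target.
            recorder = new MP4Recorder(1280, 720, 30);
            var clock = new RealtimeClock();
            // CameraInput commits each rendered frame to the recorder for us.
            cameraInput = new CameraInput(recorder, clock, Camera.main);
        }

        public async void StopRecording()
        {
            cameraInput.Dispose();
            var path = await recorder.FinishWriting();
            Debug.Log($"Saved recording to {path}");
        }
    }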

Related

Custom Buffer Rendering in NextLevel's slow motion mode

There's a video capture library in Swift called NextLevel.
According to its description (https://github.com/NextLevel/NextLevel), it supports Custom Buffer Rendering.
But I'd like to know whether that's supported in slow motion mode. So far I've been trying to use it without luck: it works for normal recording, but not for slow motion.
Am I missing something?
My goal is adding a logo/watermark to the recorded video.
Any help would be much appreciated.

How to improve camera quality in ARKit

I am building an ARKit app where we want to be able to take a photo of the scene. I am finding the image quality of the ARCamera view is not good enough to take photos with on an iPad Pro.
Standard camera image:
ARCamera image:
I have seen an Apple forum post that mentions this could be iPad Pro 10.5 specific and is related to fixed lens position (https://forums.developer.apple.com/message/262950#262950).
Is there a public way to change this setting?
Alternatively, I have tried to use AVCaptureSession to take a normal photo and apply it to sceneView.scene.background.contents, to switch out the blurred image for a higher-res image at the point the photo is taken, but I can't get AVCapturePhotoOutput to work with ARKit.
Update: Congrats to whoever filed feature requests! In iOS 11.3 (aka "ARKit 1.5"), you can control at least some of the capture settings. And you now get 1080p with autofocus enabled by default.
Check ARWorldTrackingConfiguration.supportedVideoFormats for a list of ARConfiguration.VideoFormat objects, each of which defines a resolution and frame rate. The first in the list is the default (and best) option supported on your current device, so if you just want the best resolution/framerate available you don't have to do anything. (And if you want to step down for performance reasons by setting videoFormat, it's probably better to do that based on array order rather than hardcoding sizes.)
Autofocus is on by default in iOS 11.3, so your example picture (with a subject relatively close to the camera) should come out much better. If for some reason you need to turn it off, there's a switch for that (ARWorldTrackingConfiguration's isAutoFocusEnabled).
There's still no API for changing the camera settings for the underlying capture session used by ARKit.
According to engineers back at WWDC, ARKit uses a limited subset of camera capture capabilities to ensure a high frame rate with minimal impact on CPU and GPU usage. There's some processing overhead to producing higher quality live video, but there's also some processing overhead to the computer vision and motion sensor integration systems that make ARKit work — increase the overhead too much, and you start adding latency. And for a technology that's supposed to show users a "live" augmented view of their world, you don't want the "augmented" part to lag camera motion by multiple frames. (Plus, on top of all that, you probably want some CPU/GPU time left over for your app to render spiffy 3D content on top of the camera view.)
The situation is the same between iPhone and iPad devices, but you notice it more on the iPad just because the screen is so much larger — 720p video doesn't look so bad on a 4-5" screen, but it looks awful stretched to fill a 10-13" screen. (Luckily you get 1080p by default in iOS 11.3, which should look better.)
The AVCapture system does provide for taking higher resolution / higher quality still photos during video capture, but ARKit doesn't expose its internal capture session in any way, so you can't use AVCapturePhotoOutput with it. (Capturing high resolution stills during a session probably remains a good feature request.)
I had to look for a while to figure out how to set the config, so maybe this will help somebody:

    configuration.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[1]

The variant below picks the format with the highest resolution instead; you can change the sort key so that it picks by most fps, etc.:

    if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
        .sorted(by: { ($0.imageResolution.width * $0.imageResolution.height) < ($1.imageResolution.width * $1.imageResolution.height) })
        .last {
        configuration.videoFormat = videoFormat
    }

Screen record in unity3d

How do I do screen recording in Unity?
I want to record my screen (gameplay) while my game is running.
It should support play/stop, replay, saving the recording locally on the device, and opening/loading an existing recording from the device.
In my game there is one camera, which captures the native camera feed, and one 3D model.
I wish to record both and use this functionality whenever I want.
Thank you in advance.
This is hard to implement, but not impossible: every frame (or at some interval) you need to capture a screenshot of your camera view and store it in a list. You need a good interval value; a smaller interval needs more memory, but if the interval is too big the replay can look laggy. (A minimal sketch of this buffer approach follows below.)
While you play the game your RAM fills up and the OS will terminate the app, so you need to cover memory optimization thoroughly. Another solution is assets from the Unity Asset Store.
EZ Replay Manager can be used. (Keep in mind: I haven't tried it yet.)
(Free and Pro versions are available.)
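A minimal sketch of that screenshot-buffer idea, assuming a recent Unity version (for ScreenCapture.CaptureScreenshotAsTexture); the class name, interval, and buffer size are arbitrary, and textures are destroyed as they fall out of the buffer to keep memory bounded:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    // Minimal sketch: keep the last maxFrames screenshots in a bounded queue
    // so a replay can be rebuilt from them later.
    public class ReplayBuffer : MonoBehaviour
    {
        public float interval = 0.1f;   // ~10 fps; smaller = smoother but more memory
        public int maxFrames = 100;     // ~10 seconds of replay at this interval
        readonly Queue<Texture2D> frames = new Queue<Texture2D>();

        IEnumerator Start()
        {
            while (true)
            {
                // Screenshots must be grabbed at the end of a rendered frame.
                yield return new WaitForEndOfFrame();
                frames.Enqueue(ScreenCapture.CaptureScreenshotAsTexture());
                if (frames.Count > maxFrames)
                    Destroy(frames.Dequeue());   // drop the oldest frame
                yield return new WaitForSeconds(interval);
            }
        }
    }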
Check out this open-source project: https://github.com/getsocial-im/getsocial-capture.
By default our project records Main Camera's rendered content. C# examples are in the repo.
You can record in 2 modes:
Continuous mode - capture last X frames.
Manual mode - capture frames on your own when needed. For example, record a timelapse of the level.
Once the recording is done, you can generate GIF, get raw bytes and do whatever you want. E.g. let your users share that GIF with friends.
Here's a recording of a game session from the test app; the recorded GIF shows up at the end. (The embedded animation is omitted here.)
Disclaimer: I worked at GetSocial at the time of writing.
Well, I know a guy who posted a similar project on GitHub: https://github.com/thanh-nguyen-kim/Unity_Android_Screen_Recorder
But there is a limitation: this code only works on Android devices (Android only, not iOS).
It is a very powerful recorder: it captures whatever appears on the screen (so it is basically a screen recorder made with Unity), and it will also capture your microphone output. Give it a try.
And if you find any other solution, please tell me too, because it would be very helpful for me; I want to record video with in-game audio and also save it to the gallery.
Unity now has a built-in screen recording tool. It's called Recorder and doesn't require any coding.
In Unity, go to the Window menu, then click on Package Manager.
By default, Packages might be set to "In Project"; select "Unity Registry" instead.
Type "Recorder" in the search box.
Select the Recorder and click Install in the lower right corner of the window.
That's about all you need to get everything set up, and hopefully the options make sense. The main thing to be aware of is that setting "Recording Mode" to "Single" will take a single screenshot (with F10).
NOTE: This is a copy of my answer from a Unity screenshots question
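Recorder also has an editor-only scripting API if you'd rather start recordings from code. Here is a rough sketch under the assumption of Recorder 2.x class names (UnityEditor.Recorder namespace; this does not work in built players, so check your package version's docs):

    using UnityEditor.Recorder;          // assumption: Recorder 2.x, editor-only
    using UnityEditor.Recorder.Input;
    using UnityEngine;

    public static class RecorderExample
    {
        public static RecorderController StartMovieRecording()
        {
            var controllerSettings = ScriptableObject.CreateInstance<RecorderControllerSettings>();
            var movie = ScriptableObject.CreateInstance<MovieRecorderSettings>();
            movie.name = "My Video";
            movie.Enabled = true;
            movie.OutputFormat = MovieRecorderSettings.VideoRecorderOutputFormat.MP4;
            movie.ImageInputSettings = new GameViewInputSettings { OutputWidth = 1280, OutputHeight = 720 };
            movie.OutputFile = "Recordings/gameplay";   // hypothetical output path

            controllerSettings.AddRecorderSettings(movie);
            controllerSettings.SetRecordModeToManual();
            controllerSettings.FrameRate = 30;

            var controller = new RecorderController(controllerSettings);
            controller.PrepareRecording();
            controller.StartRecording();   // call controller.StopRecording() when done
            return controller;
        }
    }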

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However taking a screenshot causes a small stutter in the game play (which otherwise runs smoothly at 60 fps). How can I modify the method from Apple to take lower quality screenshots (hence eliminating the stutter caused by taking the screenshot)?
Edit #1: the end goal is to create a video of the game play using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions etc. instead and redrawing the sequence in 3D. This also makes it possible to replay sequences from other viewpoints.
If you want to show the gameplay on e.g. YouTube or elsewhere, you can record it with another device/camera, or run the game in the simulator and record it with screen capture software such as ScreenFlow.
The Apple method uses glReadPixels() which just pulls all the data across from the display buffer, and probably triggers sync barriers, etc, between GPU and CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video, or do you want users to be able to trigger this behavior in production code? If the former, you could use all sorts of trickery to speed things up: render everything at a smaller size; skip presenting entirely and capture frames from a recording of the input data fed into the game; or, going even further, run the whole simulation at half speed to get all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
If all else fails, get one of these:
http://store.apple.com/us/product/MC748ZM/A
Then convert that composite video to digital through some sort of external capture device. I did this when I converted VHS movies to DVD a long time ago.

How to capture motion with the iPhone camera

In my application, as soon as the user opens the camera, the camera should capture an image whenever there is a difference from the previous image, and it should always stay in capturing mode.
This should happen automatically, without any user interaction. Please help me out, as I couldn't find a solution.
Thanks,
ravi
It sounds like you're doing a type of motion detection: comparing two snapshots taken at different times and seeing if something has changed between the older and the newer image.
I don't think the iPhone can do what you want. The camera is not set up to automatically take photos, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
Hmmmm, thinking about it, you might be able to detect motion by somehow measuring the frame differentials in the video compression. All video codecs save space by only registering the parts of the video that change from frame to frame. So, a large change in the saved data would indicate a large change in the environment.
I have no idea how to go about doing that but it might give you a starting point.
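The basic snapshot-comparison idea these answers describe is simple to sketch. Here it is in C# for illustration (grabbing the frames on the iPhone is the hard part; the class name and threshold values here are arbitrary assumptions):

    // Minimal sketch of the snapshot-difference idea: count how many pixels
    // changed beyond a per-pixel threshold between two grayscale frames of
    // equal size, and report motion if enough of the image changed.
    public static class MotionDetector
    {
        public static bool MotionDetected(byte[] previous, byte[] current,
                                          int pixelThreshold = 25,
                                          double minChangedFraction = 0.02)
        {
            int changed = 0;
            for (int i = 0; i < current.Length; i++)
                if (System.Math.Abs(current[i] - previous[i]) > pixelThreshold)
                    changed++;
            // Motion if more than ~2% of pixels changed noticeably.
            return changed > current.Length * minChangedFraction;
        }
    }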
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure if the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
You can take a screenshot to capture the image automatically, using the UIGetScreenImage function. (Note that UIGetScreenImage is a private API, so App Store acceptance is not guaranteed.)