Turn off camera after use in iOS 14 - swift

With the updates in iOS 14, there is a green dot that appears in the top corner when the camera is in use. My app sometimes uses the camera for AR, but it isn't needed for other aspects of the app. I'd like to make the green dot go away while the camera isn't needed so users don't think we're doing something with the camera in the background, but I can't seem to find a way to programmatically stop using the camera. Is there a way to do this?

It sounds a little strange – you're using an AR app but don't always need the AR camera. Keep in mind that the RGB sensor (as well as the IMU sensors) must stay active the whole time an ARSession is running, because the session is built on the data coming from that sensor, so you can't simply switch the camera off mid-session.
First, let's discuss how to start and stop AR tracking. You can start running an ARSession using the standard ARKit procedure:
let config = ARWorldTrackingConfiguration()
sceneView.session.run(config, options: someOptions)
...and you can stop the running ARSession using:
sceneView.session.pause()
Pausing the ARSession eliminates the heavy CPU/GPU processing and the considerable battery drain. However, once a session has been paused, the next time you run it, it has to begin tracking from scratch.
Solution
The best solution in this case is to reuse pre-saved scene data stored in an ARWorldMap. Track the surrounding environment, save the ARWorldMap, and then pause the session. Do whatever you need with no active ARSession (and no green indicator), and when you need AR again, run the session and load the ARWorldMap data.
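A minimal sketch of that save-and-restore flow, assuming an ARSCNView called sceneView and a stored savedWorldMap property to hold the map (both names are placeholders):

import ARKit

// Save the current world map, then pause the session so the camera (and the green indicator) turns off
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    self.savedWorldMap = map                  // assumed stored property of type ARWorldMap?
    self.sceneView.session.pause()
}

// Later, resume tracking from the saved map instead of starting from scratch
func resumeARSession() {
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = savedWorldMap
    sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}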

Related

What to use to watch a video from multiple angles

I am trying to create an app and/or web-based online school for which I am filming instruction from four different angles. I can't find anything that I could code to allow the user to select a different camera view and keep their progress during the video.
Ideally there will be four cameras filming the exact technique but the user should be able to jump to different views throughout the video without having to start again.
I have searched online for three days now but cannot find anything close to what I want/need.
I just want the video to play and the user to be able to switch to different camera views.
You can set the property:
VideoPlayer.frame
When the user "changes camera angle", record the frame (playback position) of the currently playing video, stop it, start playing the video from the new angle, and set its frame property to match the previous video.
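If you end up doing this natively on iOS, the same idea sketched with AVPlayer (an assumption on my part; the answer above refers to a generic VideoPlayer) might look roughly like this, with angleURLs standing in for your four camera-angle files:

import AVFoundation

// Switch to another camera angle while keeping the current playback position.
func switchAngle(to index: Int, player: AVPlayer, angleURLs: [URL]) {
    let currentTime = player.currentTime()              // where we are in the current angle's timeline
    player.replaceCurrentItem(with: AVPlayerItem(url: angleURLs[index]))
    player.seek(to: currentTime, toleranceBefore: .zero, toleranceAfter: .zero)
    player.play()
}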

How to stop video when target is lost in augmented reality?

I have made an AR application which plays a video when the target is detected. The problem is that even when the camera is not in front of the target image (no target), the video keeps playing until I pause it again by tapping on the target.
If using Vuforia, there is a callback function, OnTrackingLost(), indicating that the tracker has been lost. You can stop the video in the body of this function.
If using another technology and you have to implement such a function by yourself, the obvious solution would be to use a timer. If the target image (previously recognized and tracked) is not detected for a given period of time, the tracker is lost. Again, you stop the video when you realize that the tracked image is lost.
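A minimal sketch of that timer idea in Swift, assuming your detection code calls targetSeen() every time the target shows up in a frame (the class and method names are illustrative, not from any particular SDK):

import Foundation

final class TrackingLossWatchdog {
    private var timer: Timer?
    private let timeout: TimeInterval
    private let onLost: () -> Void

    init(timeout: TimeInterval = 0.5, onLost: @escaping () -> Void) {
        self.timeout = timeout
        self.onLost = onLost
    }

    // Call this from your detection callback every time the target is seen.
    func targetSeen() {
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: timeout, repeats: false) { [weak self] _ in
            self?.onLost()    // nothing detected for `timeout` seconds: treat the target as lost
        }
    }
}

// Usage: pause the video when the target has not been seen for half a second.
// let watchdog = TrackingLossWatchdog { videoPlayer.pause() }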

Record video in cocos2d iOS game, low resolution for video and high resolution for normal cases

I am using cocos2d's CCRenderTexture to record video of my game. Recording video at Retina display resolution costs a lot of CPU and memory, so I want to use a low resolution for the video recording but keep the Retina resolution for normal gameplay. Is that possible?
I've tried "[[CCDirector sharedDirector] enableRetinaDisplay:NO];" while recording video, but it doesn't seem to work; the generated output is totally wrong.
This is not feasible.
You'd have to render each frame twice, once on the screen, then onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture as an image to flash memory. This is extremely slow. You'll also end up with a huge amount of data. If each (PNG/JPG) image file ends up being a reasonably small 50 KB, then one second of recorded data at 60 fps will consume about 3 megabytes of flash memory. One minute would be around 180 megabytes.
To record a demo of your game, most games follow the simple principle of recording the user input and then playing it back as if the user had just issued those commands. This requires careful planning, no breaking changes to the game logic when updating the app (or old demos become invalid), and no non-deterministic randomizers (i.e. none seeded with the current time).
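Purely as an illustration of that input-recording idea (every name here is hypothetical, not part of cocos2d):

import Foundation

// A demo is just the RNG seed plus a timestamped list of input events;
// replaying the events through the same game logic reproduces the session.
struct InputEvent: Codable {
    let time: TimeInterval     // seconds since the demo started
    let action: String         // e.g. "tap", plus whatever payload your game needs
    let x: Float
    let y: Float
}

struct DemoRecording: Codable {
    var randomSeed: UInt64     // seed the game's RNG with this so replay is deterministic
    var events: [InputEvent] = []

    mutating func record(_ event: InputEvent) {
        events.append(event)
    }
}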
If you need to record a demo for making a trailer video, there's plenty of screengrabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requires a source code/library component) or from the Simulator.
You should check out the Kamcord SDK for recording gameplay: http://kamcord.com/
Kamcord has a built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos with an API. Your users can then replay and share these gameplay videos via YouTube, Facebook, Twitter, and email.

Capture video without displaying the actual video feed

So I have an application that can currently capture video with the front-facing iPhone camera and then do some processing on the video feed in real time. What I'm trying to do, however, is make this process run in the background and put other controls onscreen. So for example, say I'd like to run the camera and process the image feed, but I want the user to see a black screen with some buttons on it. Any ideas on how to do this?
Just so we get terminology right, by "in the background", you mean running the camera capture while your application is in the foreground, but not displaying the actual video feed. This is possible, but I wanted to make clear that if you move your whole application into the background you will not have access to the camera then.
There are a few ways to do this, but the one that I've spent the most time with is grabbing frames of video (or photos) via AV Foundation. Using an AVCaptureDevice and AVCaptureSession, you can grab the frames of video and route them to an encoder for saving to disk or for processing using your own custom code. None of this requires the camera feed to be displayed onscreen, so you can put up whatever interface you like and do this video recording or photo capture without any onscreen indication.
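Here's a rough sketch of that AV Foundation setup (the queue label and function name are placeholders); note that no AVCaptureVideoPreviewLayer is attached, so nothing from the camera is drawn on screen:

import AVFoundation

func makeCaptureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession? {
    let session = AVCaptureSession()

    // Front camera as the input
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    // Route raw frames to your delegate on a background queue
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "camera.frames"))
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)

    session.startRunning()
    return session
}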
I would caution that you should make it explicit to your users what you are doing, so that you do not run the risk of violating someone's privacy. Apple does not react kindly to those who do this (for good reason).
I encapsulate a lot of this within my open source GPUImage video and photo processing framework, so you could look at the code for the GPUImageVideoCamera class there to see how I configure the capture inputs. I hand the video frames off to OpenGL ES for the application of filters and other processing operations, but you could ignore that portion of it if you just wanted to do your own encoding or processing.
Here's example code from Apple's docs:
http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html
It also shows how to customize the camera interface.

iPhone Dev: Process each frame of a live recording movie for AR app?

I'm doing research into AR on the iPhone and am trying to figure out how people get each frame of video. I want to do the AR using computer vision (OpenCV). So basically I will have a pattern on a piece of paper that I will find using OpenCV, and then place a graphic on top of the pattern.
I know about the UIImagePickerController class, but am unsure how you would go about getting at each frame.
Can someone point me in the right direction?
UIImagePickerController is the means for displaying a camera view and taking single pictures with a camera-like front end. It's not what you're looking for.
Instead you need to look into AVFoundation, particularly the classes surrounding AVCaptureSession. You'll want to acquire a meaningful AVCaptureDevice (which can be the front or back camera on the iPhone 4 and current iPod Touch), create an AVCaptureDeviceInput that references it and add that as an input to an AVCaptureSession. Then just create an AVCaptureVideoDataOutput and set it up with a meaningful delegate and a Grand Central Dispatch dispatch queue.
When you start the session running, you'll receive delegate callbacks on the queue you created, providing CMSampleBufferRefs from which you can pull a CVImageBufferRef and hence the pixel data.
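A minimal sketch of that delegate side in Swift (the class name is a placeholder): each frame arrives as a CMSampleBuffer, and the pixel buffer you pull out of it is what you would hand to OpenCV:

import AVFoundation

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)   // raw pixel data for your CV code
        _ = (width, height, baseAddress)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
    }
}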