Take a video with ARKit - Swift

Hello Community,
I'm trying to build an app with Swift 4 and the upcoming ARKit framework, but I'm stuck. I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.
This is what I've tried:
In ARKit you have a session that tracks your world. That session has a capturedImage property where you can get the current camera image. So I created a Timer that appends the capturedImage to an array every 0.1 s. This would work for me, but as soon as I start the Timer by tapping a "start" button, the camera starts to lag. I don't think it's the Timer itself, because if I invalidate the Timer by tapping a "stop" button the camera is smooth again.
Is there a way to fix the lag, or an even better approach?
Thanks

I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
    // Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    // Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So, in our example, you can present previewVc if it isn't nil:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or some other object) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the preview vc is ready to be dismissed
        previewController.dismiss(animated: true, completion: nil)
    }
}
Caveats
You'll notice that startRecording records "the app display", so any views you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window where I had a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
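Putting those pieces together, a minimal sketch of the overlay-window approach might look like the following. The class name and the stopTapped action are placeholders of mine, not part of the answer above; the key points are keeping a strong reference to the window and un-hiding it when recording starts.
import UIKit
import ReplayKit

class RecordingOverlayViewController: UIViewController {
    // Keep a strong reference; a UIWindow with no reference is deallocated and never appears.
    private var overlayWindow: UIWindow?

    func showRecordingOverlay() {
        let window = UIWindow(frame: view.frame)
        window.backgroundColor = .clear

        let stopButton = UIButton(type: .system)
        stopButton.frame = CGRect(x: 20, y: 40, width: 120, height: 44)
        stopButton.setTitle("Stop", for: .normal)
        stopButton.addTarget(self, action: #selector(stopTapped), for: .touchUpInside)
        window.addSubview(stopButton)

        window.isHidden = false   // UIWindows are hidden by default
        overlayWindow = window
    }

    @objc private func stopTapped() {
        RPScreenRecorder.shared().stopRecording { previewVc, error in
            // Present previewVc as shown earlier, then hide or discard the overlay window.
        }
    }
}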
Best of luck to you!

Use a custom renderer.
Render the scene using the custom renderer, then get the texture from that renderer, and finally convert it to a CVPixelBufferRef.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;
    // Set the view's delegate
    self.sceneView.delegate = self;
    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;
    // Create a new scene
    SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];
    // Set the scene to the view
    self.sceneView.scene = scene;
    self.sceneView.preferredFramesPerSecond = 30;
    [self setupMetal];
    [self setupTexture];
    self.renderer.scene = self.sceneView.scene;
}
- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(nil, @"Only Support Metal");
    }
}
- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}
- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}
- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;
    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
    id<MTLTexture> texture = self.offscreenTexture;
    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    self.renderer.pointOfView = self.sceneView.pointOfView;
    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];
    [commandBuffer commit];
}
Then, in the recorder, set up an AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];
    // presentationTime is tracked by the recorder (e.g. derived from the frame count and fps)
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
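For reference, here is a rough sketch (in Swift, with made-up names and sizes hard-coded to match the 640x960 BGRA texture above) of how the writer and pixel buffer adaptor mentioned earlier might be set up. Treat it as an outline of the idea, not the answer's exact recorder.
import AVFoundation

// Sketch only: the dimensions and the BGRA pixel format must match the offscreen texture.
func makeRecorder(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 960
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    input.expectsMediaDataInRealTime = true

    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 640,
            kCVPixelBufferHeightKey as String: 960
        ])

    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    return (writer, adaptor)
}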
I tested this with the default ship.scn, and it only consumed about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.

I have released an open-source framework that takes care of this: https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to get your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
Then grab the drawable from the Metal layer:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be aware that calling nextDrawable() consumes one of the layer's drawables. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.
let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)
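Putting those two steps together, the per-frame capture might look roughly like this. This is a sketch: the adaptor and presentation time are assumed to come from your own AVAssetWriter setup, and error handling is omitted.
import ARKit
import AVFoundation
import Metal

func captureFrame(from sceneView: ARSCNView,
                  into adaptor: AVAssetWriterInputPixelBufferAdaptor,
                  at time: CMTime) {
    autoreleasepool {
        // Grab the drawable inside the pool so it is released promptly.
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable(),
              let pool = adaptor.pixelBufferPool else { return }

        let texture = drawable.texture
        var pixelBufferOut: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBufferOut)
        guard status == kCVReturnSuccess, let pixelBuffer = pixelBufferOut else { return }

        // Copy the texture bytes into the pixel buffer.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let bytes = CVPixelBufferGetBaseAddress(pixelBuffer) {
            texture.getBytes(bytes,
                             bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                             from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                             mipmapLevel: 0)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        if !adaptor.append(pixelBuffer, withPresentationTime: time) {
            // Handle a dropped frame here if needed.
        }
    }
}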
With these in mind, the rest is pretty straightforward video recording on iOS/Cocoa.
You can find all these implemented in the repo I've shared above.

I had a similar need: I wanted to record the ARSceneView internally in the app, without ReplayKit, so that I could manipulate the video generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video. The project is made to render a SceneView to a video, but you can configure it to accept ARSceneViews. It works pretty well, and you can choose to get an image feed instead of the video using the delegate function if you like.

Related

How to swap from MTKView to UIView display seamlessly

I have an MTKView whose contents I draw into a UIView. I want to swap the display from the MTKView to the UIView without any perceptible change. How can I achieve this?
Currently, I have
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeUIView.layerWillDraw(layerStroke) //heads up to strokeUIView
and a delegate method within layerWillDraw() that clears the MTKView.
strokeViewMetal.metalClearDisplay()
The result is that I'll see a frame drop every so often in which nothing is displayed.
In the hopes of cleanly separating the two tasks, I also tried the following:
let dispatchWorkItem = DispatchWorkItem {
    print("lyr add start")
    self.pageCanvasImage.layer.addSublayer(sublayer)
    print("lyr add end")
}
let dg = DispatchGroup()
DispatchQueue.main.async(group: dg, execute: dispatchWorkItem)
// print message when all blocks in the group finish
dg.notify(queue: DispatchQueue.main) {
    print("dispatch mtl clear")
    self.strokeCanvasMetal.setNeedsDisplay() // clear MTKView
}
The idea being to add the new CALayer to the UIImageView, and only THEN clear the MTKView.
Over many screen draws, I think this results in fewer frame drops during the view swap, but I'd like a foolproof solution with NO drops. Basically, what I'm after is to clear strokeViewMetal only once strokeUIView is ready to display. Any pointers would be appreciated.
Synchronicity issues between MTKView and UIView are resolved for 99% of my tests when I set MTKView's presentsWithTransaction property to true. According to Apple's documentation:
Setting this value to true changes this default behavior so that your MTKView displays its drawable content synchronously, using whichever Core Animation transaction is current at the time the drawable’s present() method is called.
Once that is done, the draw loop has to be modified from:
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
to:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilScheduled() // synchronously wait until the drawable is ready
drawable.present() // call the drawable’s present() method directly
This is done to prevent the current Core Animation transaction from completing before we're ready to present the MTKView's drawable.
With all of this set up, I can simply:
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
// the last two events will happen synchronously
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeViewMetal.metalClearDisplay() // empty out MTKView
With all of this said, I do still see the views overlap every now and then, but at a much, much lower frequency.

Swift Mac OS - How to play another video/change the URL of the video with AVPlayer upon a button click?

I am new to Swift, and I am trying to make a macOS app that loops a video from the app's resources using AVPlayer as the background of the window once the app has launched. When the user selects a menu item or clicks a button, the background video should instantly change to a different video from the app's resources and start looping that video as the window's background.
I was able to play the first video once the app launches following this tutorial: (https://youtu.be/QgeQc587w70) and I also successfully made the video loop itself seamlessly following this post: (Looping AVPlayer seamlessly).
The problem I am now facing is changing the video to the other one once a menu item is selected or a button is clicked. The approach I went for is to change the URL, create a new AVPlayer with the new URL, and assign it to playerView.player, following this post: (Swift How to update url of video in AVPlayer when clicking on a button?). However, every time the menu item is selected the app crashes with the error "thread 1 exc_bad_instruction (code=exc_i386_invop subcode=0x0)". This is apparently caused by the value of playerView being nil. I don't really understand the reason for this, as playerView is an AVPlayerView object that I created in the xib file and linked to the Swift file by control-dragging, and I couldn't find another appropriate way of doing what I wanted. If you know the reason for this and a way of fixing it, please help me out, or if you know a better method of doing what I've mentioned above, please tell me as well. Any help would be much appreciated!
My code is as follow, the line that crashes the app is at the bottom:
import Cocoa
import AppKit
import AVKit
import AVFoundation

struct videoVariables {
    static var videoName = "Test_Video" //declaring the video name as a global variable
}

var videoIsPlaying = true
var theURL = Bundle.main.url(forResource:videoVariables.videoName, withExtension: "mp4") //creating the video url
var player = AVPlayer.init(url: theURL!)

class BackgroundWindow: NSWindowController {

    @IBOutlet weak var playerView: AVPlayerView! // AVPlayerView Linked using control-drag from xib file
    @IBOutlet var mainWindow: NSWindow!
    @IBOutlet weak var TempBG: NSImageView!

    override var windowNibName : String! {
        return "BackgroundWindow"
    }

    //function used for resizing the temporary background image and the playerView to the window’s size
    func resizeBG() {
        var scrn: NSScreen = NSScreen.main()!
        var rect: NSRect = scrn.frame
        var height = rect.size.height
        var width = rect.size.width
        TempBG.setFrameSize(NSSize(width: Int(width), height: Int(height)))
        TempBG.frame.origin = CGPoint(x: 0, y: 0)
        playerView!.setFrameSize(NSSize(width: Int(width), height: Int(height)))
        playerView!.frame.origin = CGPoint(x: 0, y: 0)
    }

    override func windowDidLoad() {
        super.windowDidLoad()
        self.window?.titleVisibility = NSWindowTitleVisibility.hidden //hide window’s title
        self.window?.styleMask = NSBorderlessWindowMask //hide window’s border
        self.window?.hasShadow = false //hide window’s shadow
        self.window?.level = Int(CGWindowLevelForKey(CGWindowLevelKey.desktopWindow)) //set window’s layer as desktopWindow layer
        self.window?.center()
        self.window?.makeKeyAndOrderFront(nil)
        NSApp.activate(ignoringOtherApps: true)
        if let screen = NSScreen.main() {
            self.window?.setFrame(screen.visibleFrame, display: true, animate: false) //resizing the window to cover the whole screen
        }
        resizeBG() //resizing the temporary background image and the playerView to the window’s size
        startVideo() //start playing and loop the first video as the window’s background
    }

    //function used for starting the video again once it has been played fully
    func playerItemDidReachEnd(notification: NSNotification) {
        playerView.player?.seek(to: kCMTimeZero)
        playerView.player?.play()
    }

    //function used for starting and looping the video
    func startVideo() {
        //set the seeking time to be 2ms ahead to prevent a black screen every time the video loops
        let playAhead = CMTimeMake(2, 100);
        //loops the video
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object:
            playerView.player?.currentItem, queue: nil, using: { (_) in
                DispatchQueue.main.async {
                    self.playerView.player?.seek(to: playAhead)
                    self.playerView.player?.play()
                }
        })
        var playerLayer: AVPlayerLayer?
        playerLayer = AVPlayerLayer(player: player)
        playerView?.player = player
        print(playerView?.player)
        playerLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        player.play()
    }

    //changing the url to the new url and create a new AVPlayer then affect it to the playerView.player once the menu item is being selected
    @IBAction func renderBG(_ sender: NSMenuItem) {
        videoVariables.videoName = "Test_Video_2"
        var theNewURL = Bundle.main.url(forResource:videoVariables.videoName, withExtension: "mp4")
        player = AVPlayer.init(url: theNewURL!)
        //!!this line crashes the app with the error "thread 1 exc_bad_instruction (code=exc_i386_invop subcode=0x0)" every time the menu item is being selected!!
        playerView.player = player
    }
}
Additionally, the background video is not supposed to be interactive (e.g. the user cannot pause or fast-forward it), so any issues that might be caused by user interactivity can be ignored. The purpose of the app is to play a video on the user's desktop, creating the same effect as running the command "/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine -background" in Terminal.
Any help would be much appreciated!
You don't need to create a new AVPlayer from the URL. There is the AVPlayerItem class for manipulating the player's playback queue.
let firstAsset = AVURLAsset(url: firstVideoUrl)
let firstPlayerItem = AVPlayerItem(asset: firstAsset)
let player = AVPlayer(playerItem: firstPlayerItem)
let secondAsset = AVURLAsset(url: secondVideoUrl)
let secondPlayerItem = AVPlayerItem(asset: secondAsset)
player.replaceCurrentItem(with: secondPlayerItem)
Docs about AVPlayerItem
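One thing to keep in mind for the looping setup described in the question (my note, not part of the answer above): the AVPlayerItemDidPlayToEndTime observer is registered for a specific item, so after swapping you'd want to observe the new item. A rough sketch, with an assumed helper name:
import AVFoundation

// Sketch: re-register the end-of-playback observer for the newly swapped-in item,
// and keep the returned token so the old observer can be removed later.
func loop(_ item: AVPlayerItem, on player: AVPlayer) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: .AVPlayerItemDidPlayToEndTime,
        object: item,
        queue: .main
    ) { _ in
        player.seek(to: CMTime.zero)
        player.play()
    }
}
For example, after player.replaceCurrentItem(with: secondPlayerItem) you would call loop(secondPlayerItem, on: player) and hold on to the returned token.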

Stereo ARSCNView to make a VR and AR mix

I want to make a mix of virtual reality and augmented reality.
The goal is to have a stereo view (one image for each eye).
I tried to put two ARSCNViews in a view controller, but it seems ARKit only allows one ARWorldTrackingSessionConfiguration at a time. How can I do that?
I also looked into copying the graphical representation of one view and pasting it into the other view, but I couldn't find a way to do it. Please help me find a solution.
I found this link; maybe it can shed some light:
ARKit with multiple users
Here's a sample of my issue:
https://www.youtube.com/watch?v=d6LOqNnYm5s
PS: before downvoting my post, please comment why!
The following code is basically what Hal said. I previously wrote a few lines on GitHub that might help you get started. (Simple code, no barrel distortion, no adjustment for the narrow FOV - yet.)
Essentially, we connect the same scene to a second ARSCNView (so both ARSCNViews see the same scene), so there's no need to get ARWorldTrackingSessionConfiguration working with two ARSCNViews. Then we offset the second view's pointOfView so it's positioned as the second eye.
https://github.com/hanleyweng/iOS-Stereoscopic-ARKit-Template
The ARSession documentation says that ARSession is a shared object.
Every AR experience built with ARKit requires a single ARSession object. If you use an ARSCNView or ARSKView object to easily build the visual part of your AR experience, the view object includes an ARSession instance. If you build your own renderer for AR content, you'll need to instantiate and maintain an ARSession object yourself.
So there's a clue in that last sentence. Instead of two ARSCNView instances, use SCNView and share the single ARSession between them.
I expect this is a common use case, so it's worth filing a Radar to request stereo support.
How to do it right now?
The (singleton) session has only one delegate. You need two different delegate instances, one for each view. You could solve that with an object that sends the delegate messages to each view; solvable but a bit of extra work.
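For what it's worth, a sketch of that forwarding object might look something like this (my own illustration with an assumed class name, not something ARKit provides; only the frame callback is forwarded here):
import ARKit

// Fans ARSession delegate callbacks out to several listeners; add the other
// ARSessionDelegate/ARSessionObserver methods you care about in the same way.
class ARSessionDelegateSplitter: NSObject, ARSessionDelegate {
    private let targets: [ARSessionDelegate]

    init(targets: [ARSessionDelegate]) {
        self.targets = targets
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        for target in targets {
            target.session?(session, didUpdate: frame)
        }
    }
}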
There's also the problem of needing two slightly different camera locations, one for each eye, for stereo vision. ARKit uses one camera, placed at the iOS device's location, so you'll have to fuzz that.
Then you have to deal with the different barrel distortions for each eye.
This, for me, adds up to writing my own custom object to intercept ARKit delegate messages, convert the coordinates to what I'd see from two different cameras, and manage the two distinct SCNViews (not ARSCNViews). Or perhaps use one ARSCNView (one eye), intercept its frame updates, and pass those frames on to a SCNView (the other eye).
File the Radar, post the number, and I'll dupe it.
To accomplish this, please use the following code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene
        sceneView.isPlaying = true

        // SceneView2 Setup
        sceneView2.scene = scene
        sceneView2.showsStatistics = sceneView.showsStatistics
        // Now sceneView2 starts receiving updates
        sceneView2.isPlaying = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
And don't forget to set the .isPlaying instance property to true for both ARSCNViews.
Objective-C version of Han's GitHub code, with the scene views created programmatically and the y + z positions not updated (all credit to Han):
- (void)setup {
    //left
    leftSceneView = [ARSCNView new];
    leftSceneView.frame = CGRectMake(0, 0, w, h/2);
    leftSceneView.delegate = self;
    leftSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:leftSceneView];

    //right
    rightSceneView = [ARSCNView new];
    rightSceneView.frame = CGRectMake(0, h/2, w, h/2);
    rightSceneView.playing = true;
    rightSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:rightSceneView];

    //scene
    SCNScene *scene = [SCNScene new];
    leftSceneView.scene = scene;
    rightSceneView.scene = scene;

    //tracking
    ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
    configuration.planeDetection = ARPlaneDetectionHorizontal;
    [leftSceneView.session runWithConfiguration:configuration];
}

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    dispatch_async(dispatch_get_main_queue(), ^{
        //update right eye
        SCNNode *pov = self->leftSceneView.pointOfView.clone;
        SCNQuaternion orientation = pov.orientation;
        GLKQuaternion orientationQuaternion = GLKQuaternionMake(orientation.x, orientation.y, orientation.z, orientation.w);
        GLKVector3 eyePosition = GLKVector3Make(1, 0, 0);
        GLKVector3 rotatedEyePosition = GLKQuaternionRotateVector3(orientationQuaternion, eyePosition);
        SCNVector3 rotatedEyePositionSCNV = SCNVector3Make(rotatedEyePosition.x, rotatedEyePosition.y, rotatedEyePosition.z);
        float mag = 0.066f;
        float rotatedX = pov.position.x + rotatedEyePositionSCNV.x * mag;
        float rotatedY = pov.position.y; // + rotatedEyePositionSCNV.y * mag;
        float rotatedZ = pov.position.z; // + rotatedEyePositionSCNV.z * mag;
        [pov setPosition:SCNVector3Make(rotatedX, rotatedY, rotatedZ)];
        self->rightSceneView.pointOfView = pov;
    });
}

Capturing Video with Swift using AVCaptureVideoDataOutput or AVCaptureMovieFileOutput

I need some guidance on how to capture video without having to use a UIImagePicker. The video needs to start and stop on a button click, and then this data should be saved to the NSDocumentDirectory. I am new to Swift, so any help will be useful.
The section of code that I need help with is starting and stopping a video session and turning that into data. I created a picture-taking version that runs captureStillImageAsynchronouslyFromConnection and saves the data to the NSDocumentDirectory. I have set up a video capture session and have the code ready to save the data, but do not know how to get the data from the session.
var previewLayer : AVCaptureVideoPreviewLayer?
var captureDevice : AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()
let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()
    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                }
            }
        }
    }
}

func beginSession() {
    var err : NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    if err != nil {
        println("Error: \(err?.localizedDescription)")
    }
    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey:kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
    captureSession.addOutput(videoCaptureOutput)

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.view.layer.addSublayer(previewLayer)
    previewLayer?.frame = CGRectMake(0, 0, screenWidth, screenHeight)
    captureSession.startRunning()

    var startVideoBtn = UIButton(frame: CGRectMake(0, screenHeight/2, screenWidth, screenHeight/2))
    startVideoBtn.addTarget(self, action: "startVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(startVideoBtn)

    var stopVideoBtn = UIButton(frame: CGRectMake(0, 0, screenWidth, screenHeight/2))
    stopVideoBtn.addTarget(self, action: "stopVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(stopVideoBtn)
}
I can supply more code or explanation if needed.
For best results, read the Still and Video Media Capture section from the AV Foundation Programming Guide.
To process frames from AVCaptureVideoDataOutput, you will need a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. The delegate's captureOutput method will be called whenever a new frame is written. When you set the output’s delegate, you must also provide a queue on which callbacks should be invoked. It will look something like this:
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(myDelegate, queue: cameraQueue)
captureSession.addOutput(videoCaptureOutput)
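For example, the delegate might look roughly like this. This sketch uses the current Swift API names (the question's code predates Swift 3), and the class name is a placeholder:
import AVFoundation

// Receives one callback per captured frame, on the queue passed to setSampleBufferDelegate(_:queue:).
class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Hand pixelBuffer to an AVAssetWriterInput, a Metal/Core Image pipeline, etc.
        _ = pixelBuffer
    }
}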
NB: If you just want to save the movie to a file, you may prefer the AVCaptureMovieFileOutput class instead of AVCaptureVideoDataOutput. In that case, you won't need a queue. But you'll still need a delegate, this time adopting the AVCaptureFileOutputRecordingDelegate protocol instead. (The relevant method is still called captureOutput.)
Here's one excerpt from the part about AVCaptureMovieFileOutput from the guide linked to above:
Starting a Recording
You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.
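For reference, a rough modern-Swift equivalent of that excerpt might look like the following sketch (the question's own code is written against the older API names, and the class name here is a placeholder):
import AVFoundation

class MovieRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
    let movieOutput = AVCaptureMovieFileOutput()

    func start(writingTo fileURL: URL) {
        // The URL must not point at an existing file; the output won't overwrite it.
        movieOutput.startRecording(to: fileURL, recordingDelegate: self)
    }

    func stop() {
        movieOutput.stopRecording()
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // Check `error`, then e.g. move outputFileURL into the Documents directory
        // or save it to the Photos library.
    }
}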

SKScene Custom Size

I would like to create a new scene so that I can embed a UI element (a picker view) in it, but I do not want the scene to take up the entire screen; rather, it should be smaller. I used the following code:
- (void)switchToPickerScene
{
    // Present the new picker scene:
    self.paused = YES;
    CGSize pickerSceneSize = CGSizeMake(300, 440);
    BBPickerScene *pickerScene = [BBPickerScene sceneWithSize:pickerSceneSize];
    pickerScene.scaleMode = SKSceneScaleModeFill;
    SKTransition *trans = [SKTransition doorsOpenHorizontalWithDuration:0.5];
    [self.scene.view presentScene:pickerScene transition:trans];
}
But no matter which SKSceneScaleMode I choose, it always fills the entire phone screen. Is there any way to do this?