Capturing Video with Swift using AVCaptureVideoDataOutput or AVCaptureMovieFileOutput - swift

I need some guidance on how to capture video without having to use a UIImagePicker. The video needs to start and stop on a button tap, and the resulting data should be saved to the NSDocumentDirectory. I am new to Swift, so any help will be useful.
The part I need help with is starting and stopping a video session and turning it into data. I created a picture-taking version that runs captureStillImageAsynchronouslyFromConnection and saves the data to the NSDocumentDirectory. I have set up a video capturing session and have the code ready to save the data, but I do not know how to get the data from the session.
var previewLayer : AVCaptureVideoPreviewLayer?
var captureDevice : AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()
let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()
    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                }
            }
        }
    }
}

func beginSession() {
    var err : NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    if err != nil {
        println("Error: \(err?.localizedDescription)")
    }

    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
    captureSession.addOutput(videoCaptureOutput)

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.view.layer.addSublayer(previewLayer)
    previewLayer?.frame = CGRectMake(0, 0, screenWidth, screenHeight)
    captureSession.startRunning()

    var startVideoBtn = UIButton(frame: CGRectMake(0, screenHeight/2, screenWidth, screenHeight/2))
    startVideoBtn.addTarget(self, action: "startVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(startVideoBtn)

    var stopVideoBtn = UIButton(frame: CGRectMake(0, 0, screenWidth, screenHeight/2))
    stopVideoBtn.addTarget(self, action: "stopVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(stopVideoBtn)
}
I can supply more code or explanation if needed.

For best results, read the Still and Video Media Capture section from the AV Foundation Programming Guide.
To process frames from AVCaptureVideoDataOutput, you will need a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. The delegate's captureOutput method will be called whenever a new frame is written. When you set the output’s delegate, you must also provide a queue on which callbacks should be invoked. It will look something like this:
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(myDelegate, queue: cameraQueue)
captureSession.addOutput(videoCaptureOutput)
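For reference, here is a minimal sketch of the delegate side in the same Swift 2-era syntax as the snippet above. FrameHandler is a made-up name, and the body only marks where you would touch each frame:
class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Called on cameraQueue for every captured frame.
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Process or encode the frame here (e.g. hand it to an AVAssetWriter).
        }
    }
}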
NB: If you just want to save the movie to a file, you may prefer the AVCaptureMovieFileOutput class instead of AVCaptureVideoDataOutput. In that case, you won't need a queue. But you'll still need a delegate, this time adopting the AVCaptureFileOutputRecordingDelegate protocol instead. (The relevant method is still called captureOutput.)
Here's one excerpt from the part about AVCaptureMovieFileOutput from the guide linked to above:
Starting a Recording
You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.
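Tying that back to the start/stop buttons in your question, a rough sketch in the same old-Swift style could look like the following. This is only a sketch: movieOutput, the capture.mov file name, and saving into the documents directory are assumptions, and you would add movieOutput to the session in beginSession() with captureSession.addOutput(movieOutput).
let movieOutput = AVCaptureMovieFileOutput()

func startVideo() {
    let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
    let outputPath = "\(documentsPath)/capture.mov"
    // The movie file output refuses to overwrite, so clear any previous recording first.
    NSFileManager.defaultManager().removeItemAtPath(outputPath, error: nil)
    movieOutput.startRecordingToOutputFileURL(NSURL(fileURLWithPath: outputPath), recordingDelegate: self)
}

func stopVideo() {
    movieOutput.stopRecording()
}

// AVCaptureFileOutputRecordingDelegate
func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    if error != nil {
        println("Recording failed: \(error.localizedDescription)")
    } else {
        println("Movie saved to \(outputFileURL)")
    }
}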

Related

AVFoundation Camera in view

I'm having a lot of problems trying to get a view to show my back camera feed. I looked through Apple's docs and came up with this, but all it seems to do is show a black screen. I also added the permissions to my plist and am running on a real device. I don't need it to take a photo or save anything, just show the live camera feed in a view.
import UIKit
import AVFoundation

class ViewController: UIViewController {
    @IBOutlet weak var cameraView: UIView!
    var captureSession = AVCaptureSession()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        loadCamera()
    }

    func loadCamera() {
        let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
        do {
            let input = try AVCaptureDeviceInput(device: device!)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
                previewLayer.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(previewLayer)
            }
        } catch {
            print(error)
        }
    }
}
Welcome!
The problem is that there is actually no data flow (of video frames) happening with your current setup. You need to at least attach one input and one output to your capture session. The preview layer doesn't count as an output itself since it will only attach to an existing connection between input and output.
So to fix it, you can just add an AVCapturePhotoOutput to the session (probably before you add the layer) but never use it. The preview layer should start displaying the frames then.
You probably also want to set the session's sessionPreset to .photo before you add the inputs and outputs. This will cause the session to produce video frames that have an ideal size for displaying on your device's screen.
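A rough sketch of that fix applied inside your loadCamera() (photoOutput is just an assumed local name; note the layer also needs a frame and the session has to be started for anything to appear):
captureSession.sessionPreset = .photo // ideally set before adding inputs and outputs

let photoOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(photoOutput) {
    // Never used for capturing, but it completes the input-to-output connection
    // that the preview layer attaches to.
    captureSession.addOutput(photoOutput)
}

previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.frame = cameraView.bounds
cameraView.layer.addSublayer(previewLayer)

captureSession.startRunning()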

How do you add an overlay while recording a video in Swift?

I am trying to record, and then save, a video in Swift using AVFoundation. This works. I am also trying to add an overlay, such as a text label containing the date, to the video.
For example: the video saved is not only what the camera sees, but the timestamp as well.
Here is how I am saving the video:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    saveVideo(toURL: movieURL!)
}

private func saveVideo(toURL url: URL) {
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }) { (success, error) in
        if success {
            print("Video saved to Camera Roll.")
        } else {
            print("Video failed to save.")
        }
    }
}
I have a movieOutput that is an AVCaptureMovieFileOutput. My preview layer does not contain any sublayers. I tried adding the timestamp label's layer to the previewLayer, but this did not succeed.
I have tried Ray Wenderlich's example as well as this Stack Overflow question, and lastly this tutorial, all to no avail.
How can I add an overlay to my video that is in the saved video in the camera roll?
Without more information, it sounds like what you are asking for is a watermark, not an overlay.
A watermark is a markup on the video that is saved with the video.
An overlay is generally shown as subviews on the preview layer and is not saved with the video.
Check this out here: https://stackoverflow.com/a/47742108/8272698
func addWatermark(inputURL: URL, outputURL: URL, handler: @escaping (_ exportSession: AVAssetExportSession?) -> Void) {
    let mixComposition = AVMutableComposition()
    let asset = AVAsset(url: inputURL)
    let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    let compositionVideoTrack: AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
    let watermarkImage = CIImage(image: UIImage(named: "waterMark")!)
    let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in
        let source = filteringRequest.sourceImage.clampedToExtent()
        watermarkFilter.setValue(source, forKey: "inputBackgroundImage")
        let transform = CGAffineTransform(translationX: filteringRequest.sourceImage.extent.width - (watermarkImage?.extent.width)! - 2, y: 0)
        watermarkFilter.setValue(watermarkImage?.transformed(by: transform), forKey: "inputImage")
        filteringRequest.finish(with: watermarkFilter.outputImage!, context: nil)
    }

    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset640x480) else {
        handler(nil)
        return
    }
    exportSession.outputURL = outputURL
    exportSession.outputFileType = AVFileType.mp4
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.videoComposition = videoComposition
    exportSession.exportAsynchronously { () -> Void in
        handler(exportSession)
    }
}
And here's how to call the function.
let outputURL = NSURL.fileURL(withPath: "TempPath")
let inputURL = NSURL.fileURL(withPath: "VideoWithWatermarkPath")
addWatermark(inputURL: inputURL, outputURL: outputURL, handler: { (exportSession) in
    guard let session = exportSession else {
        // Error
        return
    }
    switch session.status {
    case .completed:
        guard NSData(contentsOf: outputURL) != nil else {
            // Error
            return
        }
        // Now you can find the video with the watermark in the location outputURL
    default:
        // Error
        break
    }
})
Let me know if this code works for you.
It is in Swift 3, so some changes will be needed.
I am currently using this code in an app of mine; I have not updated it to Swift 5 yet.
I do not have an actual development environment for Swift that can utilize AVFoundation, so I can't provide you with any example code.
To add metadata (date, location, timestamp, watermark, frame rate, etc.) as an overlay to the video while recording, you would have to process the video feed frame by frame, live, while recording. Most likely you would have to store the frames in a buffer and process them before actually recording them.
When it comes to the metadata, there are two types: static and dynamic. A static type such as a watermark should be easy enough, as all the frames get the same thing.
However, dynamic metadata such as a timestamp or GPS location needs a few extra considerations. Processing video frames takes computational power and time, so depending on the type of dynamic data and how you obtain it, the processed value may sometimes not be correct. For example, suppose you get a frame at 1:00:01, process it, and add a timestamp to it, and suppose the processing took 2 seconds. The next frame arrives at 1:00:02, but you can't process it until 1:00:03 because processing the previous frame took 2 seconds. Depending on how you obtain the new timestamp for that frame, the timestamp value may not be the one you wanted.
When processing dynamic metadata, you should also take hardware lag into consideration. For example, suppose the software is supposed to add live GPS location data to each frame and there were no lags in development or testing. In real life, however, a user runs the software in an area with a bad connection, and their phone lags while obtaining the GPS location, sometimes for as long as 5 seconds. What do you do in that situation? Do you set a time-out for the GPS location and use the last good position? Do you report the error? Do you defer that frame to be processed later when the GPS data becomes available (which may ruin live recording) and use an expensive algorithm to try to predict the user's location for that frame?
Besides those considerations, I have some references here that I think may help you. The one from medium.com looks pretty good.
https://medium.com/ios-os-x-development/ios-camera-frames-extraction-d2c0f80ed05a
Adding watermark to currently recording video and save with watermark
Render dynamic text onto CVPixelBufferRef while recording video
Adding on to @Kevin Ng, you can do an overlay on video frames with a UIViewController and a UIView.
The UIViewController will have:
a property to work with the video stream:
private var videoSession: AVCaptureSession?
a property to work with the overlay (the UIView class):
private var myOverlay: MyUIView { view as! MyUIView }
a property to work with the video output queue:
private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)
a method to create the video session
a method to process and display the overlay
The UIView will have the task-specific helper methods needed to act as the overlay. For example, if you are doing hand detection, this overlay class can have helper methods to draw points at given coordinates (the ViewController class detects the coordinates of hand features, does the necessary coordinate conversions, then passes the coordinates to the UIView class to display as an overlay).
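A bare-bones sketch of that structure, for illustration only: the class names, the detection stub, and the drawing code are placeholders, and it assumes the controller's root view is set to the overlay class in the storyboard, matching the view as! MyUIView pattern above.
import UIKit
import AVFoundation

final class MyUIView: UIView {
    // Task-specific overlay drawing, e.g. dots at detected coordinates.
    var points: [CGPoint] = [] { didSet { setNeedsDisplay() } }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.red.cgColor)
        for p in points {
            ctx.fillEllipse(in: CGRect(x: p.x - 4, y: p.y - 4, width: 8, height: 8))
        }
    }
}

final class OverlayViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var videoSession: AVCaptureSession?
    private var myOverlay: MyUIView { view as! MyUIView }
    private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)

    override func viewDidLoad() {
        super.viewDidLoad()
        setupVideoSession()
    }

    private func setupVideoSession() {
        let session = AVCaptureSession()
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: videoOutputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        videoSession = session
        session.startRunning()
    }

    // Called on videoOutputQueue for every frame: detect features, convert
    // coordinates, then hand them to the overlay on the main queue.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let detected: [CGPoint] = [] // run detection / coordinate conversion here
        DispatchQueue.main.async { self.myOverlay.points = detected }
    }
}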

Swift Mac OS - How to play another video/change the URL of the video with AVPlayer upon a button click?

I am new to Swift and I am trying to make a macOS app that loops a video from the app's resources, using AVPlayer, as the background of the window once the app has been launched. When the user selects a menu item/clicks a button, the background video should instantly change to a different video from the app's resources and start looping that video as the window's background.
I was able to play the first video once the app launches by following this tutorial: (https://youtu.be/QgeQc587w70), and I also successfully made the video loop itself seamlessly by following this post: (Looping AVPlayer seamlessly).
The problem I am now facing is changing the video to the other one once a menu item is selected/a button is clicked. The approach I went for is to change the URL, create a new AVPlayer with the new URL, and assign it to playerView.player, following this post: (Swift How to update url of video in AVPlayer when clicking on a button?). However, every time the menu item is selected the app crashes with the error "thread 1 exc_bad_instruction (code=exc_i386_invop subcode=0x0)". This is apparently caused by playerView being nil, which I don't really understand, since playerView is an AVPlayerView object that I created in the xib file and linked to the Swift file by control-dragging, and I couldn't find another appropriate way of doing what I wanted. If you know the reason for this and how to fix it, please help me, or if you know a better way of achieving what I've described above, please tell me as well.
My code is as follows; the line that crashes the app is near the bottom:
import Cocoa
import AppKit
import AVKit
import AVFoundation

struct videoVariables {
    static var videoName = "Test_Video" //declaring the video name as a global variable
}

var videoIsPlaying = true
var theURL = Bundle.main.url(forResource: videoVariables.videoName, withExtension: "mp4") //creating the video url
var player = AVPlayer.init(url: theURL!)

class BackgroundWindow: NSWindowController {

    @IBOutlet weak var playerView: AVPlayerView! // AVPlayerView linked using control-drag from xib file
    @IBOutlet var mainWindow: NSWindow!
    @IBOutlet weak var TempBG: NSImageView!

    override var windowNibName: String! {
        return "BackgroundWindow"
    }

    //function used for resizing the temporary background image and the playerView to the window’s size
    func resizeBG() {
        var scrn: NSScreen = NSScreen.main()!
        var rect: NSRect = scrn.frame
        var height = rect.size.height
        var width = rect.size.width
        TempBG.setFrameSize(NSSize(width: Int(width), height: Int(height)))
        TempBG.frame.origin = CGPoint(x: 0, y: 0)
        playerView!.setFrameSize(NSSize(width: Int(width), height: Int(height)))
        playerView!.frame.origin = CGPoint(x: 0, y: 0)
    }

    override func windowDidLoad() {
        super.windowDidLoad()
        self.window?.titleVisibility = NSWindowTitleVisibility.hidden //hide window’s title
        self.window?.styleMask = NSBorderlessWindowMask //hide window’s border
        self.window?.hasShadow = false //hide window’s shadow
        self.window?.level = Int(CGWindowLevelForKey(CGWindowLevelKey.desktopWindow)) //set window’s layer as desktopWindow layer
        self.window?.center()
        self.window?.makeKeyAndOrderFront(nil)
        NSApp.activate(ignoringOtherApps: true)
        if let screen = NSScreen.main() {
            self.window?.setFrame(screen.visibleFrame, display: true, animate: false) //resizing the window to cover the whole screen
        }
        resizeBG() //resizing the temporary background image and the playerView to the window’s size
        startVideo() //start playing and loop the first video as the window’s background
    }

    //function used for starting the video again once it has been played fully
    func playerItemDidReachEnd(notification: NSNotification) {
        playerView.player?.seek(to: kCMTimeZero)
        playerView.player?.play()
    }

    //function used for starting and looping the video
    func startVideo() {
        //set the seeking time to be 2ms ahead to prevent a black screen every time the video loops
        let playAhead = CMTimeMake(2, 100);
        //loops the video
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: playerView.player?.currentItem, queue: nil, using: { (_) in
            DispatchQueue.main.async {
                self.playerView.player?.seek(to: playAhead)
                self.playerView.player?.play()
            }
        })
        var playerLayer: AVPlayerLayer?
        playerLayer = AVPlayerLayer(player: player)
        playerView?.player = player
        print(playerView?.player)
        playerLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        player.play()
    }

    //change the url to the new url, create a new AVPlayer, and assign it to playerView.player once the menu item is selected
    @IBAction func renderBG(_ sender: NSMenuItem) {
        videoVariables.videoName = "Test_Video_2"
        var theNewURL = Bundle.main.url(forResource: videoVariables.videoName, withExtension: "mp4")
        player = AVPlayer.init(url: theNewURL!)
        //!!this line crashes the app with the error "thread 1 exc_bad_instruction (code=exc_i386_invop subcode=0x0)" every time the menu item is selected!!
        playerView.player = player
    }
}
Additionally, the background video is not supposed to be interactive (e.g. the user cannot pause or fast-forward the video), so any issues that might be caused by user interactivity can be ignored. The purpose of the app is to play a video on the user's desktop, creating the exact same effect as running the command "/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine -background" in Terminal.
Any help would be much appreciated!
You don't need to create a new AVPlayer from the URL. There is the AVPlayerItem class for manipulating the player's playback queue.
let firstAsset = AVURLAsset(url: firstVideoUrl)
let firstPlayerItem = AVPlayerItem(asset: firstAsset)
let player = AVPlayer(playerItem: firstPlayerItem)
let secondAsset = AVURLAsset(url: secondVideoUrl)
let secondPlayerItem = AVPlayerItem(asset: secondAsset)
player.replaceCurrentItem(with: secondPlayerItem)
Docs about AVPlayerItem
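Applied to the renderBG(_:) action from the question, a hedged sketch (it assumes the playerView outlet is loaded and keeps the existing AVPlayer, so only the item changes):
@IBAction func renderBG(_ sender: NSMenuItem) {
    guard let url = Bundle.main.url(forResource: "Test_Video_2", withExtension: "mp4") else { return }
    let newItem = AVPlayerItem(asset: AVURLAsset(url: url))
    // Keep the existing AVPlayer and just swap its item; note that the
    // AVPlayerItemDidPlayToEndTime observer must either be re-registered for
    // newItem or have been added with object: nil for the loop to keep working.
    playerView.player?.replaceCurrentItem(with: newItem)
    playerView.player?.play()
}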

Modify AVCaptureStillImageOutput to AVCapturePhotoOutput

I am currently working on a snippet of code which looks like the following:
if error == nil && (captureSession?.canAddInput(input))! {
    captureSession?.addInput(input)

    stillImageOutput = AVCaptureStillImageOutput()
    //let settings = AVCapturePhotoSettings()
    //settings.availablePreviewPhotoPixelFormatTypes =
    stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    if (captureSession?.canAddOutput(stillImageOutput))! {
        captureSession?.addOutput(stillImageOutput)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
        previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraView.layer.addSublayer(previewLayer!)

        captureSession?.startRunning()
    }
}
I am aware that I should be using AVCapturePhotoOutput() instead of AVCaptureStillImageOutput() but am confused as to how I can transform the rest of this block if I make that change.
Specifically, how can I apply the same settings using the commented let settings = AVCapturePhotoSettings()?
For reference, I am using this tutorial as a guide.
Thanks
Apple's documentation explains very clearly how to use AVCapturePhotoOutput.
These are the steps to capture a photo.
Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
Put the code below in your clickCapture method, and don't forget to conform to and implement the delegate in your class.
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: 160,
    kCVPixelBufferHeightKey as String: 160,
]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
If you would like to know a different way of capturing a photo with AVFoundation, check out my previous SO answer.
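For the delegate side, here is a rough sketch of the callback that hands you the captured data. It uses the iOS 11+ AVCapturePhotoCaptureDelegate method; on iOS 10 the older sample-buffer based callback is used instead, and YourViewController is just a placeholder name:
extension YourViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        let image = UIImage(data: data)
        // use, display, or save `image` here
    }
}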

Disable Audio (and interruption) with MPMoviePlayerController using Swift

At the moment, this is how I'm playing a video on the subview of my UIViewController:
override func viewDidAppear(animated: Bool) {
    let filePath = NSBundle.mainBundle().pathForResource("musicvideo", ofType: "mp4")
    self.moviePlayerController.contentURL = NSURL.fileURLWithPath(filePath)
    self.moviePlayerController.play()
    self.moviePlayerController.repeatMode = .One
    self.moviePlayerController.view.frame = self.view.bounds
    self.moviePlayerController.scalingMode = .AspectFill
    self.moviePlayerController.controlStyle = .None
    self.moviePlayerController.allowsAirPlay = false
    self.view.addSubview(self.moviePlayerController.view)
}
I've read about ways to disable the audio by doing the following (none of which work at all). Keep in mind I'm trying to disable it to the point of not interrupting the music currently playing via the Music app, Spotify, etc.
// Playing media items with the applicationMusicPlayer will restore the user's Music state after the application quits.
// The current volume of playing music, in the range of 0.0 to 1.0.
// This property is deprecated -- use MPVolumeView for volume control instead.
1) MPMusicPlayerController.applicationMusicPlayer().volume = 0
2) MPVolumeView doesn't even have a setting for setting the actual volume? It's a control.
3) self.moviePlayerController.useApplicationAudioSession = false
So I found this answer.
This is my Swift code that I ended up going with. I then used an AVPlayerLayer to add to the view as a sublayer, which works perfectly.
Thanks to the OP who managed to get a hold of an Apple technician and provided the original Objective-C code.
The only problems I'm facing now is that it:
1) Interrupts current music playback, whether it's from Music, Spotify, etc.
2) Video stops playing if I close the app and open it up again.
override func viewDidAppear(animated: Bool) {
    let filePath = NSBundle.mainBundle().pathForResource("musicvideo", ofType: "mp4")
    var asset: AVURLAsset?
    asset = AVURLAsset.URLAssetWithURL(NSURL.fileURLWithPath(filePath), options: nil)
    var audioTracks = NSArray()
    audioTracks = asset!.tracksWithMediaType(AVMediaTypeAudio)

    // Mute all the audio tracks
    let allAudioParams = NSMutableArray()
    for track: AnyObject in audioTracks {
        // AVAssetTrack
        let audioInputParams = AVMutableAudioMixInputParameters()
        audioInputParams.setVolume(0.0, atTime: kCMTimeZero)
        audioInputParams.trackID = track.trackID
        allAudioParams.addObject(audioInputParams)
    }
    let audioZeroMix = AVMutableAudioMix()
    audioZeroMix.inputParameters = allAudioParams

    // Create a player item with the muted audio mix
    let playerItem = AVPlayerItem(asset: asset!)
    playerItem.audioMix = audioZeroMix

    // Create a new player, and set it to use the muted player item
    let player = AVPlayer.playerWithPlayerItem(playerItem) as AVPlayer
    player.play()

    let layer = AVPlayerLayer(player: player)
    player.actionAtItemEnd = .None
    layer.frame = self.view.bounds
    layer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.view.layer.addSublayer(layer)
}
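On the interruption problem: whether other apps' audio keeps playing is governed by the app's AVAudioSession category rather than by the player or the audio mix. A hedged sketch in modern Swift syntax (older SDKs use the AVAudioSessionCategoryAmbient string constant instead), assuming you are fine with the video's own audio staying silent:
import AVFoundation

// e.g. in application(_:didFinishLaunchingWithOptions:) or before playing the video
do {
    // .ambient mixes with audio from other apps (Music, Spotify, ...) and
    // respects the silent switch, so playback won't interrupt anything.
    try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default, options: [])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session error: \(error)")
}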