Changing AVCaptureStillImageOutput to AVCapturePhotoOutput - Swift

I am currently working on a snippet of code which looks like the following:
if error == nil && (captureSession?.canAddInput(input))! {
    captureSession?.addInput(input)
    stillImageOutput = AVCaptureStillImageOutput()
    //let settings = AVCapturePhotoSettings()
    //settings.availablePreviewPhotoPixelFormatTypes =
    stillImageOutput?.outputSettings = [AVVideoCodecKey : AVVideoCodecJPEG]
    if (captureSession?.canAddOutput(stillImageOutput))! {
        captureSession?.addOutput(stillImageOutput)
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
        previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraView.layer.addSublayer(previewLayer!)
        captureSession?.startRunning()
    }
}
I am aware that I should be using AVCapturePhotoOutput() instead of AVCaptureStillImageOutput() but am confused as to how I can transform the rest of this block if I make that change.
Specifically, how can I apply the same settings using the commented let settings = AVCapturePhotoSettings()?
For reference, I am using this tutorial as a guide.
Thanks

Apple's documentation explains very clearly how to use AVCapturePhotoOutput.
These are the steps to capture a photo.
Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
Put the code below in your clickCapture method, and don't forget to conform to and implement the AVCapturePhotoCaptureDelegate protocol in your class (a sketch of the delegate callback follows the snippet).
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [
    kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
    kCVPixelBufferWidthKey as String: 160,
    kCVPixelBufferHeightKey as String: 160
]
settings.previewPhotoFormat = previewFormat
self.cameraOutput.capturePhoto(with: settings, delegate: self)
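The answer above targets iOS 10 / Swift 3, where the delegate callback is the capture(_:didFinishProcessingPhotoSampleBuffer:...) method shown further down this page. On iOS 11 and later you can implement the simpler callback below instead; this is only a sketch, and YourViewController and captureImageView are placeholders for your own names.
import AVFoundation
import UIKit

extension YourViewController: AVCapturePhotoCaptureDelegate {
    // iOS 11+ callback: AVCapturePhoto already wraps the encoded image data.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        captureImageView.image = image   // show the captured photo in your own image view
    }
}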
If you would like to know a different way of capturing a photo with AVFoundation, check out my previous SO answer.

Related

How do you add an overlay while recording a video in Swift?

I am trying to record, and then save, a video in Swift using AVFoundation. This works. I am also trying to add an overlay, such as a text label containing the date, to the video.
For example: the video saved is not only what the camera sees, but the timestamp as well.
Here is how I am saving the video:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    saveVideo(toURL: movieURL!)
}

private func saveVideo(toURL url: URL) {
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
    }) { (success, error) in
        if success {
            print("Video saved to Camera Roll.")
        } else {
            print("Video failed to save.")
        }
    }
}
I have a movieOutput that is an AVCaptureMovieFileOutput. My preview layer does not contain any sublayers. I tried adding the timestamp label's layer to the previewLayer, but this did not succeed.
I have tried Ray Wenderlich's example as well as this Stack Overflow question. Lastly, I also tried this tutorial, all to no avail.
How can I add an overlay to my video that is in the saved video in the camera roll?
Without more information, it sounds like what you are asking for is a watermark, not an overlay.
A watermark is a markup on the video that will be saved with the video.
An overlay is generally shown as subviews on the preview layer and will not be saved with the video.
Check this out here: https://stackoverflow.com/a/47742108/8272698
func addWatermark(inputURL: URL, outputURL: URL, handler: @escaping (_ exportSession: AVAssetExportSession?) -> Void) {
    let mixComposition = AVMutableComposition()
    let asset = AVAsset(url: inputURL)
    let videoTrack = asset.tracks(withMediaType: AVMediaType.video)[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack: AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!
    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    let watermarkFilter = CIFilter(name: "CISourceOverCompositing")!
    let watermarkImage = CIImage(image: UIImage(named: "waterMark")!)
    let videoComposition = AVVideoComposition(asset: asset) { (filteringRequest) in
        let source = filteringRequest.sourceImage.clampedToExtent()
        watermarkFilter.setValue(source, forKey: "inputBackgroundImage")
        let transform = CGAffineTransform(translationX: filteringRequest.sourceImage.extent.width - (watermarkImage?.extent.width)! - 2, y: 0)
        watermarkFilter.setValue(watermarkImage?.transformed(by: transform), forKey: "inputImage")
        filteringRequest.finish(with: watermarkFilter.outputImage!, context: nil)
    }

    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPreset640x480) else {
        handler(nil)
        return
    }

    exportSession.outputURL = outputURL
    exportSession.outputFileType = AVFileType.mp4
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.videoComposition = videoComposition
    exportSession.exportAsynchronously { () -> Void in
        handler(exportSession)
    }
}
And here's how to call the function.
let outputURL = NSURL.fileURL(withPath: "TempPath")
let inputURL = NSURL.fileURL(withPath: "VideoWithWatermarkPath")
addWatermark(inputURL: inputURL, outputURL: outputURL, handler: { (exportSession) in
    guard let session = exportSession else {
        // Error
        return
    }
    switch session.status {
    case .completed:
        guard NSData(contentsOf: outputURL) != nil else {
            // Error
            return
        }
        // Now you can find the video with the watermark in the location outputURL
    default:
        // Error
        break
    }
})
Let me know if this code works for you.
It is in Swift 3, so some changes will be needed.
I currently use this code in an app of mine; I have not updated it to Swift 5 yet.
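As a hint for that migration, the Core Media spellings are the main thing that changed after Swift 3. Here is a small hedged sketch of the Swift 4.2+ equivalents used in the time-range setup above (the helper name is mine):
import AVFoundation

// Swift 4.2+ spellings: kCMTimeZero became CMTime.zero and
// CMTimeRangeMake(_:_:) became CMTimeRange(start:duration:).
func insertFullRange(of videoTrack: AVAssetTrack,
                     from asset: AVAsset,
                     into compositionTrack: AVMutableCompositionTrack) throws {
    let timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    try compositionTrack.insertTimeRange(timeRange, of: videoTrack, at: .zero)
    compositionTrack.preferredTransform = videoTrack.preferredTransform
}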
I do not have an actual development environment for Swift that can utilize AVFoundation. Thus, I can't provide you with any example code.
To add metadata (date, location, timestamp, watermark, frame rate, etc.) as an overlay to the video while recording, you would have to process the video feed frame by frame, live, while recording. Most likely you would have to store the frames in a buffer and process them before actually recording them.
When it comes to the metadata, there are two types: static and dynamic. A static type such as a watermark should be easy enough, as all the frames get the same thing.
However, for a dynamic metadata type such as a timestamp or GPS location, there are a few things to take into consideration. It takes computational power and time to process the video frames, so depending on the type of dynamic data and how you obtain it, the processed value may sometimes not be the value you want. For example, say you get a frame at 1:00:01, you process it and add a timestamp to it, and processing the timestamp takes 2 seconds. The next frame arrives at 1:00:02, but you can't process it until 1:00:03 because processing the previous frame took 2 seconds. Thus, depending on how you obtain the new timestamp for the new frame, that timestamp value may not be the value you wanted.
For processing dynamic metadata, you should also take hardware lag into consideration. For example, suppose the software is supposed to add live GPS location data to each frame, and there weren't any lags in development or in testing. In real life, however, a user runs the software in an area with a bad connection, and their phone lags while obtaining the GPS location, sometimes by as much as 5 seconds. What do you do in that situation? Do you set a timeout for the GPS location and use the last good position? Do you report the error? Do you defer that frame to be processed later when the GPS data becomes available (which may ruin live recording) and use an expensive algorithm to try to predict the user's location for that frame?
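As one hedged illustration of the "time out and use the last good position" option, a small helper could pick which GPS fix to stamp onto a frame; the 5-second cutoff and the name lastGoodLocation are illustrative, not from the answer:
import CoreLocation

/// Sketch: choose the location to stamp onto the current frame,
/// falling back to the last good fix when the fresh one is too stale.
func locationToStamp(current: CLLocation?,
                     lastGoodLocation: CLLocation?,
                     maxAge: TimeInterval = 5) -> CLLocation? {
    if let current = current, -current.timestamp.timeIntervalSinceNow <= maxAge {
        return current              // fresh enough: use it
    }
    return lastGoodLocation         // stale or missing: fall back (or report an error instead)
}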
Besides those considerations, here are some references that I think may help you. The one from medium.com looks pretty good to me.
https://medium.com/ios-os-x-development/ios-camera-frames-extraction-d2c0f80ed05a
Adding watermark to currently recording video and save with watermark
Render dynamic text onto CVPixelBufferRef while recording video
Adding on to @Kevin Ng's answer, you can do an overlay on video frames with a UIViewController and a UIView (a skeleton sketch follows the list below).
The UIViewController will have:

a property to work with the video stream
private var videoSession: AVCaptureSession?

a property to work with the overlay (the UIView subclass)
private var myOverlay: MyUIView { view as! MyUIView }

a property to work with the video output queue
private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)

a method to create the video session

a method to process and display the overlay

The UIView subclass will have the task-specific helper methods needed to act as the overlay. For example, if you are doing hand detection, the overlay class can have helper methods to draw points at coordinates (the view controller detects the coordinates of the hand features, does the necessary coordinate conversions, then passes the coordinates to the UIView subclass to display as an overlay).
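A minimal skeleton of that arrangement might look like the sketch below; the class names, the session-setup details, and the hand-detection step are placeholders, not code from the answer.
import AVFoundation
import UIKit

final class OverlayViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var videoSession: AVCaptureSession?
    private var myOverlay: MyUIView { view as! MyUIView }   // root view set to MyUIView in the storyboard
    private let videoOutputQueue = DispatchQueue(label: "outputQueue", qos: .userInteractive)

    override func viewDidLoad() {
        super.viewDidLoad()
        videoSession = makeVideoSession()
        videoSession?.startRunning()
    }

    private func makeVideoSession() -> AVCaptureSession {
        let session = AVCaptureSession()
        // ... add the camera input and an AVCaptureVideoDataOutput whose
        // sample-buffer delegate is self, dispatched on videoOutputQueue ...
        return session
    }

    // Called for every captured frame on videoOutputQueue.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let points: [CGPoint] = []   // placeholder: detected hand coordinates, already converted
        DispatchQueue.main.async { self.myOverlay.show(points: points) }
    }
}

/// Task-specific overlay view; draws whatever coordinates the controller passes in.
final class MyUIView: UIView {
    func show(points: [CGPoint]) {
        // e.g. update a CAShapeLayer with the points
    }
}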

can I set different aspect ratio when capturing LivePhoto images?

I am using the Swift 4 SDK for taking Live Photos in my app.
Is it possible to take Live Photo images in different aspect ratios? I want to support 4:3, 16:9 and 1:1. I have added some code showing how I take my photo. I couldn't figure out how to change the aspect ratio, and I understand it may not be a trivial task since a Live Photo has both a video and a photo part.
//define the AVCapturePhotoOutput object somewhere in the code
let photoOutput = AVCapturePhotoOutput()

//capture a photo with the settings defined for livePhoto using the helper function that returns the AVCapturePhotoSettings
photoOutput.capturePhoto(with: getCaptureSettings(), delegate: photoCaptureDelegate!)

func getCaptureSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    let writeURL = NSURL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("LivePhotoVideo\(settings.uniqueID).mov")
    settings.livePhotoMovieFileURL = writeURL
    return settings
}
So is it possible to do this, and if so, how?

Capturing still image with AVFoundation

I'm currently creating a simple application which uses AVFoundation to stream video into a UIImageView.
To achieve this, I created an instance of AVCaptureSession() and an AVCaptureSessionPreset():
let input = try AVCaptureDeviceInput(device: device)
print(input)
if (captureSession.canAddInput(input)) {
    captureSession.addInput(input)
    if (captureSession.canAddOutput(sessionOutput)) {
        captureSession.addOutput(sessionOutput)
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraView.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    }
}
cameraView refers to the UIImageView outlet.
I now want to implement a way of capturing a still image from the AVCaptureSession.
Correct me if there's a more efficient way, but I plan to have an additional UIImageView to hold the still image, placed on top of the UIImageView which holds the video.
I've created a button with action:
@IBAction func takePhoto(_ sender: Any) {
    // functionality to obtain still image
}
My issue is, I'm unsure how to actually obtain a still image from the capture session and populate the new UIImageView with it.
After looking at information/questions posted on Stack Overflow, the majority of the solutions use:
captureStillImageAsynchronouslyFromConnection
I'm unsure if it's just Swift 3.0, but Xcode isn't recognising this function.
Could someone please advise me on how to actually obtain and display a still image on a button click?
Here is a link to my full code for a better understanding of my program.
Thank you all in advance for taking the time to read my question, and please feel free to tell me in case I've missed out some relevant data.
If you are targeting iOS 10 or above, captureStillImageAsynchronously(from:completionHandler:) is deprecated along with AVCaptureStillImageOutput.
As per the documentation:
The AVCaptureStillImageOutput class is deprecated in iOS 10.0 and does not support newer camera capture features such as RAW image output, Live Photos, or wide-gamut color. In iOS 10.0 and later, use the AVCapturePhotoOutput class instead. (The AVCaptureStillImageOutput class remains supported in macOS 10.12.)
As per your code, you are already using AVCapturePhotoOutput. So just follow the steps below to take a photo from the session. The same can be found in the Apple documentation.
Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
You are already doing steps 1 and 2, so add this line to your code:
@IBAction func takePhoto(_ sender: Any) {
    print("Taking Photo")
    sessionOutput.capturePhoto(with: sessionOutputSetting, delegate: self as! AVCapturePhotoCaptureDelegate)
}
and implement the AVCapturePhotoCaptureDelegate method:
optional public func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?)
Note that this delegate gives you a lot of control over taking photos; check out the documentation for more functions. You also need to process the image data, which means you have to convert the sample buffer to a UIImage.
if let sampleBuffer = photoSampleBuffer,
    let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer),
    let dataProvider = CGDataProvider(data: imageData as CFData),
    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) {
    let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: .right)
    // ...
    // Add the image to captureImageView here...
}
Note that the image you get is rotated left, so we have to manually rotate it right to get a preview-like image.
More info can be found in my previous SO answer

Use Front Camera for AVCaptureDevice Preview Layer Automatically Like Snapchat or Houseparty (Swift 3) [duplicate]

This question already has answers here:
How to get the front camera in Swift?
(8 answers)
Essentially, what I'm trying to accomplish is having the front camera of the AVCaptureDevice be the first and only option in the application during an AVCaptureSession.
I've looked around StackOverflow and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
I know you're supposed to enumerate the devices with AVCaptureDeviceDiscoverySession and look at them to distinguish front from back, but I'm unsure of how to do so.
Could anyone help? It would be amazing if so!
Here's my code:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    previewLayer.frame = singleViewCameraSlot.bounds
    self.singleViewCameraSlot.layer.addSublayer(previewLayer)
    captureSession.startRunning()
}

lazy var captureSession: AVCaptureSession = {
    let capture = AVCaptureSession()
    capture.sessionPreset = AVCaptureSessionPreset1920x1080
    return capture
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let preview = AVCaptureVideoPreviewLayer(session: self.captureSession)
    preview?.videoGravity = AVLayerVideoGravityResizeAspect
    preview?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    preview?.bounds = CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height)
    preview?.position = CGPoint(x: self.view.bounds.midX, y: self.view.bounds.midY)
    return preview!
}()

func setupCameraSession() {
    let frontCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice
    do {
        let deviceInput = try AVCaptureDeviceInput(device: frontCamera)
        captureSession.beginConfiguration()
        if (captureSession.canAddInput(deviceInput) == true) {
            captureSession.addInput(deviceInput)
        }
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        if (captureSession.canAddOutput(dataOutput) == true) {
            captureSession.addOutput(dataOutput)
        }
        captureSession.commitConfiguration()
        let queue = DispatchQueue(label: "io.goodnight.videoQueue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
    }
    catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}
If you just need to find a single device based on simple characteristics (like a front-facing camera that can shoot video), just use AVCaptureDevice.default(_:for:position:). For example:
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .front)
    else { fatalError("no front camera. but don't all iOS 10 devices have them?") }
// then use the device: captureSession.addInput(device) or whatever
Really that's all there is to it for most use cases.
There's also AVCaptureDeviceDiscoverySession as a replacement for the old method of iterating through the devices array. However, most of the things you'd usually iterate through the devices array for can be found using the new default(_:for:position:) method, so you might as well use that and write less code.
The cases where AVCaptureDeviceDiscoverySession is worth using are the less common, more complicated cases: say you want to find all the devices that support a certain frame rate, or use key-value observing to see when the set of available devices changes.
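For example, here is a hedged sketch (using the current AVCaptureDevice.DiscoverySession spelling) of finding every camera that has a format supporting at least 60 fps:
import AVFoundation

// Enumerate cameras and keep those with a format that supports >= 60 fps.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
    mediaType: .video,
    position: .unspecified)

let highFrameRateCameras = discovery.devices.filter { device in
    device.formats.contains { format in
        format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
    }
}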
By the way...
I've looked around StackOverflow and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
If you read Apple's docs for those methods (at least this one, this one, and this one), you'll see along with those deprecation warnings some recommendations for what to use instead. There's also a guide to the iOS 10 / Swift 3 photo capture system and some sample code that both show current best practices for these APIs.
If you explicitly need the front camera, you can use AVCaptureDeviceDiscoverySession as specified here.
https://developer.apple.com/reference/avfoundation/avcapturedevicediscoverysession/2361539-init
This allows you to specify the types of devices you want to search for. The following (untested) should give you the front facing camera.
let deviceSessions = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.front)
This deviceSessions object has a devices property, which is an array of AVCaptureDevice containing only the devices matching that search criteria:
deviceSessions?.devices
Its count should be either 0 or 1, depending on whether the device has a front-facing camera (some iPods won't, for example).
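A rough usage sketch with the same Swift 3-era names as above, reusing the captureSession from the question (untested):
if let frontCamera = deviceSessions?.devices.first,
    let input = try? AVCaptureDeviceInput(device: frontCamera),
    captureSession.canAddInput(input) {
    captureSession.addInput(input)   // the session now uses the front camera
}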

Capturing Video with Swift using AVCaptureVideoDataOutput or AVCaptureMovieFileOutput

I need some guidance on how to capture video without having to use a UIImagePicker. The video needs to start and stop on a button click, and then this data needs to be saved to the NSDocumentDirectory. I am new to Swift, so any help will be useful.
The section of code that I need help with is starting and stopping a video session and turning that into data. I created a picture-taking version that runs captureStillImageAsynchronouslyFromConnection and saves this data to the NSDocumentDirectory. I have set up a video capturing session and have the code ready to save data, but do not know how to get the data from the session.
var previewLayer : AVCaptureVideoPreviewLayer?
var captureDevice : AVCaptureDevice?
var videoCaptureOutput = AVCaptureVideoDataOutput()
let captureSession = AVCaptureSession()

override func viewDidLoad() {
    super.viewDidLoad()
    captureSession.sessionPreset = AVCaptureSessionPreset640x480
    let devices = AVCaptureDevice.devices()
    for device in devices {
        if (device.hasMediaType(AVMediaTypeVideo)) {
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                }
            }
        }
    }
}

func beginSession() {
    var err : NSError? = nil
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
    if err != nil {
        println("Error: \(err?.localizedDescription)")
    }

    videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
    captureSession.addOutput(videoCaptureOutput)

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.view.layer.addSublayer(previewLayer)
    previewLayer?.frame = CGRectMake(0, 0, screenWidth, screenHeight)
    captureSession.startRunning()

    var startVideoBtn = UIButton(frame: CGRectMake(0, screenHeight/2, screenWidth, screenHeight/2))
    startVideoBtn.addTarget(self, action: "startVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(startVideoBtn)

    var stopVideoBtn = UIButton(frame: CGRectMake(0, 0, screenWidth, screenHeight/2))
    stopVideoBtn.addTarget(self, action: "stopVideo", forControlEvents: UIControlEvents.TouchUpInside)
    self.view.addSubview(stopVideoBtn)
}
I can supply more code or explanation if needed.
For best results, read the Still and Video Media Capture section from the AV Foundation Programming Guide.
To process frames from AVCaptureVideoDataOutput, you will need a delegate that adopts the AVCaptureVideoDataOutputSampleBufferDelegate protocol. The delegate's captureOutput method will be called whenever a new frame is written. When you set the output’s delegate, you must also provide a queue on which callbacks should be invoked. It will look something like this:
let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(myDelegate, queue: cameraQueue)
captureSession.addOutput(videoCaptureOutput)
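For reference, the delegate callback itself looks roughly like the sketch below. It is shown in current Swift, where the method is spelled captureOutput(_:didOutput:from:); in the Swift 2-era code above it would be captureOutput(_:didOutputSampleBuffer:fromConnection:). MyDelegate is a placeholder for whatever object you pass as myDelegate.
import AVFoundation

final class MyDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Called on the queue passed to setSampleBufferDelegate for every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Process or write the frame here (e.g. hand it to an AVAssetWriter).
            print("frame:", CVPixelBufferGetWidth(pixelBuffer), "x", CVPixelBufferGetHeight(pixelBuffer))
        }
    }
}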
NB: If you just want to save the movie to a file, you may prefer the AVCaptureMovieFileOutput class instead of AVCaptureVideoDataOutput. In that case, you won't need a queue. But you'll still need a delegate, this time adopting the AVCaptureFileOutputRecordingDelegate protocol instead. (The relevant method is still called captureOutput.)
Here's one excerpt from the part about AVCaptureMovieFileOutput from the guide linked to above:
Starting a Recording
You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, because the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the Camera Roll album. It should also check for any errors that might have occurred.
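Translating that excerpt (plus the start/stop-on-button-click and documents-directory requirements from the question) into current Swift might look roughly like this; the class name, the file name, and the assumption that the session and preview layer are set up elsewhere are mine:
import AVFoundation
import UIKit

final class RecorderViewController: UIViewController, AVCaptureFileOutputRecordingDelegate {
    let captureSession = AVCaptureSession()
    let movieOutput = AVCaptureMovieFileOutput()   // added to captureSession during setup

    func startVideo() {
        // The URL must not point at an existing file; write into the documents directory.
        let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let fileURL = documents.appendingPathComponent("capture-\(Date().timeIntervalSince1970).mov")
        movieOutput.startRecording(to: fileURL, recordingDelegate: self)
    }

    func stopVideo() {
        movieOutput.stopRecording()
    }

    // Called once the movie file has been written; check the error, then use the file.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        if let error = error {
            print("Recording failed: \(error)")
        } else {
            print("Movie saved to \(outputFileURL)")
        }
    }
}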