I am working on this piece of Swift 3 code:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let videoCaptureDevice = AVCaptureDevice.defaultDevice(withDeviceType: AVCaptureDeviceType.builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
if captureSession.canAddInput(videoInput) {
    captureSession.addInput(videoInput)
}
Then I take a picture with an AVCapturePhotoOutput object and get the picture in my AVCapturePhotoCaptureDelegate object.
It works fine.
What I want to do is take a picture with the iPhone 7 Plus dual camera. I want to get two pictures, like the official iOS Camera app:
- One picture with background blur
- A second picture, without blur
Do you think it is possible?
Thanks
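For reference, on iOS 11 and later the dual camera exposes depth data through AVCapturePhotoOutput rather than a ready-made blurred second image, so the usual route is to capture one normal photo plus its depth map and apply the blur yourself, which naturally gives you both versions. A minimal sketch, assuming iOS 11+, with names (captureSession, photoOutput) mirroring the snippet above:
import AVFoundation

// Sketch (assumes iOS 11+): select the dual camera and request depth data with the photo.
let captureSession = AVCaptureSession()
captureSession.sessionPreset = .photo

if let dualCamera = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
   let videoInput = try? AVCaptureDeviceInput(device: dualCamera),
   captureSession.canAddInput(videoInput) {
    captureSession.addInput(videoInput)
}

let photoOutput = AVCapturePhotoOutput()
if captureSession.canAddOutput(photoOutput) {
    captureSession.addOutput(photoOutput)
    // Depth delivery must be enabled on the output before it can be requested per capture.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
}

let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
// photoOutput.capturePhoto(with: settings, delegate: self)
// In the delegate, photo.depthData (AVDepthData) is what you would use to build the blurred variant.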
When I enable LivePhotoCapture on my AVCapturePhotoOutput and switch to builtInUltraWideCamera on my iPhone 12, I get a distorted image on the preview layer. The issue goes away if LivePhotoCapture is disabled.
This issue isn't reproducible on iPhone 13 Pro.
I've tried playing with the videoGravity settings, but no luck. Any tips are appreciated!
On my AVCapturePhotoOutput:
if self.photoOutput.isLivePhotoCaptureSupported {
    self.photoOutput.isLivePhotoCaptureEnabled = true
}
Preview layer:
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer.videoGravity = .resizeAspect
videoPreviewLayer.connection?.videoOrientation = .portrait
previewView.layer.addSublayer(videoPreviewLayer)
self.captureSession.startRunning()
self.videoPreviewLayer.frame = self.previewView.bounds
Result (the picture is mirrored, but that's not the problem; the problem is on the right and bottom edges of the picture):
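One thing worth checking (a guess, not a confirmed fix): whether enabling Live Photo capture switches the ultra-wide device to a different active format, which would explain the edges only breaking when it is enabled. A small diagnostic sketch, assuming ultraWideDevice is your builtInUltraWideCamera AVCaptureDevice:
import AVFoundation

// Hypothetical diagnostic: compare the active format before and after enabling Live Photo capture.
// If the dimensions change, the distortion comes from the session being reconfigured,
// not from the preview layer's videoGravity.
func logActiveFormat(_ label: String, for device: AVCaptureDevice) {
    let dims = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    print("\(label): \(dims.width)x\(dims.height)")
}

logActiveFormat("before", for: ultraWideDevice)
if photoOutput.isLivePhotoCaptureSupported {
    photoOutput.isLivePhotoCaptureEnabled = true
}
logActiveFormat("after", for: ultraWideDevice)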
I have an AVPlayer which streams video from a URL. I want to show a thumbnail of the video while dragging the slider forward or backward (like the feature in YouTube). Can you explain how to add this feature using Swift?
Like this:
https://drive.google.com/file/d/1g9ngVJbhTDvWABBt49T-QrSfA354bERa/view
If your playlist files already support the EXT-X-IMAGE-STREAM-INF tag and you are using the standard player controller (AVPlayerViewController), then you get that feature for free. But if you are using a customized UI for your player, or your playlist doesn't support that tag, you have to build the feature yourself. It's possible, because all you have to do is display the correct frames on top of the AVPlayer layer at the proper positions.
if fileUrl.containsVideo {   // containsVideo: custom helper from the original snippet
    let generator = AVAssetImageGenerator(asset: AVAsset(url: fileUrl))
    generator.appliesPreferredTrackTransform = true
    // Grab one frame 30 seconds in; fall back to a placeholder symbol if that fails.
    let cgThumbnail = try? generator.copyCGImage(at: CMTime(seconds: 30, preferredTimescale: 60), actualTime: nil)
    img = cgThumbnail.map { UIImage(cgImage: $0) } ?? UIImage(systemName: "music.note")!
    itemtitle = fileUrl.lastPathComponent
}
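As a sketch of the do-it-yourself route described above (assuming a file-based or progressive-download asset; for a plain HLS stream without an image playlist you would still need the EXT-X-IMAGE-STREAM-INF approach or server-side thumbnails), AVAssetImageGenerator can feed a small preview UIImageView while the slider moves:
import AVFoundation
import UIKit

// Sketch: generate a frame for the time under the slider thumb and show it in a small image view.
final class ScrubPreview {
    private let generator: AVAssetImageGenerator

    init(asset: AVAsset) {
        generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true
        generator.maximumSize = CGSize(width: 200, height: 112)   // small thumbnails decode faster
        // Allow some tolerance so the generator can reuse nearby keyframes quickly.
        generator.requestedTimeToleranceBefore = CMTime(seconds: 1, preferredTimescale: 600)
        generator.requestedTimeToleranceAfter = CMTime(seconds: 1, preferredTimescale: 600)
    }

    func showThumbnail(at seconds: Double, in imageView: UIImageView) {
        generator.cancelAllCGImageGeneration()   // drop requests from earlier slider positions
        let time = CMTime(seconds: seconds, preferredTimescale: 600)
        generator.generateCGImagesAsynchronously(forTimes: [NSValue(time: time)]) { _, cgImage, _, _, _ in
            guard let cgImage = cgImage else { return }
            DispatchQueue.main.async { imageView.image = UIImage(cgImage: cgImage) }
        }
    }
}
Call showThumbnail(at:in:) from the slider's .valueChanged handler, mapping the slider value to a time within the asset's duration, and position the image view above the slider thumb.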
I downloaded Apple's sample project Recognizing Objects in Live Capture.
When I tried the app, I saw that if I put the object to recognize near the top or the bottom of the camera view, the app doesn't recognize it:
In this first image the banana is in the center of the camera view and the app is able to recognize it.
image object in center
In these two images the banana is near the camera view's border and the app is not able to recognize it.
image object on top
image object on bottom
This is how session and previewLayer are set:
func setupAVCapture() {
    var deviceInput: AVCaptureDeviceInput!

    // Select a video device, make an input
    let videoDevice = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first
    do {
        deviceInput = try AVCaptureDeviceInput(device: videoDevice!)
    } catch {
        print("Could not create video device input: \(error)")
        return
    }

    session.beginConfiguration()
    session.sessionPreset = .vga640x480 // Model image size is smaller.

    // Add a video input
    guard session.canAddInput(deviceInput) else {
        print("Could not add video device input to the session")
        session.commitConfiguration()
        return
    }
    session.addInput(deviceInput)

    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
        // Add a video data output
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    } else {
        print("Could not add video data output to the session")
        session.commitConfiguration()
        return
    }

    let captureConnection = videoDataOutput.connection(with: .video)
    // Always process the frames
    captureConnection?.isEnabled = true
    do {
        try videoDevice!.lockForConfiguration()
        let dimensions = CMVideoFormatDescriptionGetDimensions((videoDevice?.activeFormat.formatDescription)!)
        bufferSize.width = CGFloat(dimensions.width)
        bufferSize.height = CGFloat(dimensions.height)
        videoDevice!.unlockForConfiguration()
    } catch {
        print(error)
    }
    session.commitConfiguration()

    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    rootLayer = previewView.layer
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
}
You can download the project here.
I am wondering if this is normal or not.
Is there any solution to fix it?
Does it take square photos to process with Core ML, so that the top and bottom areas are not included?
Any hints? Thanks
That's probably because the imageCropAndScaleOption is set to centerCrop.
The Core ML model expects a square image but the video frames are not square. This can be fixed by changing the imageCropAndScaleOption on the VNCoreMLRequest so the full frame is used (for example, scaleFill or scaleFit). However, the results may not be as good as with center crop (it depends on how the model was originally trained).
See also VNImageCropAndScaleOption in the Apple docs.
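For example, a minimal sketch (visionModel stands for the VNCoreMLModel the sample already builds):
import Vision

// Use the full frame instead of a centre square. .scaleFill stretches the frame into the model's
// square input, .scaleFit letterboxes it; both see the top and bottom of the camera view.
let objectRecognitionRequest = VNCoreMLRequest(model: visionModel) { request, error in
    // handle the VNRecognizedObjectObservation results as in the sample
}
objectRecognitionRequest.imageCropAndScaleOption = .scaleFill
Keep in mind that the normalized bounding boxes Vision returns are relative to the region it processed, so the sample's overlay math (written for centerCrop) may need adjusting after changing this option.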
I am using the Swift 4 SDK for taking Live Photos in my app.
Is it possible to take Live Photo images in different aspect ratios? I want to support 4:3, 16:9 and 1:1. I will add some code for how I take my photo. I couldn't figure out how to change the aspect ratio, and I understand it may not be a trivial task since a Live Photo has both a video and a photo part.
//define the AVCapturePhotoOutput object somewhere in the code
let photoOutput = AVCapturePhotoOutput()
//capture a photo with the settings defined for livePhoto using the helper function that returns the AVCapturePhotoSettings
photoOutput.capturePhoto(with: getCaptureSettings(), delegate: photoCaptureDelegate!)
func getCaptureSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    // Write the Live Photo's movie component to a temporary file.
    let writeURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("LivePhotoVideo\(settings.uniqueID).mov")
    settings.livePhotoMovieFileURL = writeURL
    return settings
}
So is it possible to do it? And if so, how?
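There is no aspect-ratio setting on AVCapturePhotoSettings itself; the sensor output is 4:3, and a common approach is to crop after capture. For the still part of a Live Photo that is straightforward (the paired movie would need separate treatment); a hypothetical helper, ignoring EXIF orientation for brevity:
import UIKit

// Crop a captured image to a target aspect ratio (width / height), centred.
func crop(_ image: UIImage, toAspectRatio ratio: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let targetWidth = min(width, height * ratio)
    let targetHeight = min(height, width / ratio)
    let rect = CGRect(x: (width - targetWidth) / 2,
                      y: (height - targetHeight) / 2,
                      width: targetWidth,
                      height: targetHeight)
    guard let cropped = cgImage.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
For 1:1 you would call crop(stillImage, toAspectRatio: 1), for 16:9 crop(stillImage, toAspectRatio: 16.0 / 9.0), and so on.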
I'm currently creating a simple application which uses AVFoundation to stream video into a UIImageView.
To achieve this, I created an instance of AVCaptureSession() and set an AVCaptureSessionPreset:
let input = try AVCaptureDeviceInput(device: device)
print(input)
if captureSession.canAddInput(input) {
    captureSession.addInput(input)
    if captureSession.canAddOutput(sessionOutput) {
        captureSession.addOutput(sessionOutput)
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraView.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    }
}
cameraView refers to the UIImageView outlet.
I now want to implement a way of capturing a still image from the AVCaptureSession.
Correct me if there's a more efficient way, but I plan to have an additional UIImageView to hold the still image, placed on top of the UIImageView which holds the video.
I've created a button with action:
@IBAction func takePhoto(_ sender: Any) {
// functionality to obtain still image
}
My issue is, I'm unsure how to actually obtain a still image from the capture session and populate the new UIImageView with it.
After looking at information/questions posted on Stack, the majority of the solutions is to use:
captureStillImageAsynchronouslyFromConnection
I'm unsure if it's just Swift 3.0, but Xcode isn't recognising this function.
Could someone please advise me on how to obtain and display a still image when the button is tapped?
Here is a link to my full code for better understanding of my program.
Thank you all in advance for taking the time to read my question, and please feel free to tell me in case I've missed out some relevant data.
If you are targeting iOS 10 or above, captureStillImageAsynchronously(from:completionHandler:) is deprecated along with AVCaptureStillImageOutput.
As per the documentation:
The AVCaptureStillImageOutput class is deprecated in iOS 10.0 and does not support newer camera capture features such as RAW image output, Live Photos, or wide-gamut color. In iOS 10.0 and later, use the AVCapturePhotoOutput class instead. (The AVCaptureStillImageOutput class remains supported in macOS 10.12.)
As per your code, you are already using AVCapturePhotoOutput, so just follow the steps below to take a photo from the session. The same can be found in the Apple documentation.
1. Create an AVCapturePhotoOutput object. Use its properties to determine supported capture settings and to enable certain features (for example, whether to capture Live Photos).
2. Create and configure an AVCapturePhotoSettings object to choose features and settings for a specific capture (for example, whether to enable image stabilization or flash).
3. Capture an image by passing your photo settings object to the capturePhoto(with:delegate:) method along with a delegate object implementing the AVCapturePhotoCaptureDelegate protocol. The photo capture output then calls your delegate to notify you of significant events during the capture process.
You are already doing steps 1 and 2, so add this line to your code:
@IBAction func takePhoto(_ sender: Any) {
    print("Taking Photo")
    sessionOutput.capturePhoto(with: sessionOutputSetting, delegate: self as! AVCapturePhotoCaptureDelegate)
}
and implement the AVCapturePhotoCaptureDelegate function
optional public func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?)
Note that this delegate gives you a lot of control over taking photos. Check out the documentation for more functions. You also need to process the image data, which means you have to convert the sample buffer to a UIImage.
if let sampleBuffer = photoSampleBuffer,
    let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer),
    let dataProvider = CGDataProvider(data: imageData as CFData),
    let cgImageRef = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) {
    let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: .right)
    // ...
    // Add the image to captureImageView here...
}
Note that the image you get is rotated left, so we have to manually rotate it right to get a preview-like image.
More info can be found in my previous SO answer
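If you can target iOS 11 or later (beyond the Swift 3 / iOS 10 scope of the question above), the newer delegate callback hands you an AVCapturePhoto and avoids the sample-buffer conversion entirely; a short sketch:
// iOS 11+ sketch: no JPEG sample-buffer handling needed.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard error == nil,
          let data = photo.fileDataRepresentation(),
          let image = UIImage(data: data) else { return }
    captureImageView.image = image   // captureImageView: the UIImageView that shows the still
}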