Intro and background:
I have been working for some time on a project that lets the user apply some custom manipulations to a live feed from their camera.
At the moment, I start the capture session in the following way:
var session: AVCaptureSession?
var stillImageOutput: AVCaptureStillImageOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    videoPreviewLayer!.frame = CameraView.bounds
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    session = AVCaptureSession()
    session!.sessionPreset = AVCaptureSessionPresetPhoto

    let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }

    if error == nil && session!.canAddInput(input) {
        session!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        if session!.canAddOutput(stillImageOutput) {
            session!.addOutput(stillImageOutput)

            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
            videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
            videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
            CameraView.layer.addSublayer(videoPreviewLayer!)

            session!.startRunning()
        }
    }
}
where CameraView is a UIView in my view controller. I now have a function called singleTapped(), in which I want to get every frame of the capture, process it, and then put it into the CameraView frame (perhaps I should be using a UIImageView instead?)...
Research:
I have looked here and here, as well as at many other posts about getting frames from the camera, yet these don't quite lead where I need to go. What's interesting is the first link I provided: in their answer they have:
self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)) { (buffer: CMSampleBuffer!, error: NSError!) -> Void in
    var image = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
    var data_image = UIImage(data: image) // THEY EXTRACTED A UIIMAGE HERE
    self.imageView.image = data_image
}
which does indeed get a UIImage from the camera, but is this a viable method for 30fps?
Rationale and Constraints:
The reason I need a UIImage is that I am using a library someone else wrote that quickly transforms a UIImage in a custom way. I want to present this transformation to the user "live".
In conclusion:
Please let me know if I am missing something or if I should reword anything. As said above, this is my first post, so I am not quite up on SO conventions. Thanks, and cheers.
You should maybe reconsider using AVCaptureSession. For what you are doing (I assume), you should try OpenCV. It's a great utility for image manipulation, especially if you are doing it at 30/60 fps (the actual frame rate after processing might be, and I guarantee will be, lower). Depending on what this manipulation you have been given looks like, you can easily port it into Xcode using bridging headers, or convert everything to C++ for use with OpenCV.
With OpenCV you can drive the camera through its built-in functions, which can save you a lot of processing time and therefore runtime. For example, take a look at this.
I have used OpenCV in situations similar to the one you just described, and I think you could benefit. Swift is nice, but sometimes certain things are better handled through other means...
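If you do want to stay entirely inside AVFoundation, the usual way to get every frame (rather than repeatedly asking the still-image output) is an AVCaptureVideoDataOutput with a sample-buffer delegate. A rough, untested sketch of that idea; imageView and processImage(_:) are placeholders for your own view and for the UIImage library you mentioned:

import AVFoundation
import CoreImage
import CoreMedia
import UIKit

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    let videoOutput = AVCaptureVideoDataOutput()
    weak var imageView: UIImageView?   // assumption: the view you show processed frames in

    func attach(to session: AVCaptureSession) {
        // Ask for BGRA pixel buffers and drop frames if processing falls behind.
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        videoOutput.alwaysDiscardsLateVideoFrames = true
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(videoOutput) {
            session.addOutput(videoOutput)
        }
    }

    // Called for every captured frame, roughly 30 times per second.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let frame = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))

        // processImage(_:) stands in for the library call you mentioned.
        let processed = processImage(frame)

        // UIKit must be touched on the main queue.
        DispatchQueue.main.async {
            self.imageView?.image = processed
        }
    }

    private func processImage(_ image: UIImage) -> UIImage {
        return image   // placeholder: apply your custom transformation here
    }
}

Whether 30 fps is actually sustainable then depends entirely on how fast that per-frame transformation is, which is where the OpenCV suggestion above comes in.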
Related
I'm trying to mirror the recorded video from a capture session. The video preview for the front-facing camera shows a mirrored version; however, when I go to save the file and play it back, the captured video is actually mirrored. I'm using Apple's AVCam demo as a reference and can't seem to figure this out. Please help.
I've tried creating an AVCaptureConnection and setting its .isVideoMirrored property. However, I get this error:
cannot be added to the session because the source and destination media types are incompatible'
I would have thought mirroring the video would be much easier. I think I may be creating my connection incorrectly; the code below never actually adds the connection, because the .canAddConnection check fails.
var captureSession: AVCaptureSession!

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    captureSession = AVCaptureSession()

    // Setup camera
    if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .front) {
        defaultVideoDevice = dualCameraDevice
    } else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
        // If the dual camera isn't available, default to the front wide-angle camera.
        defaultVideoDevice = frontCameraDevice
    }

    guard let videoDevice = defaultVideoDevice else {
        print("Default video device is unavailable.")
        // setupResult = .configurationFailed
        captureSession.commitConfiguration()
        return
    }

    let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
    if captureSession.canAddInput(videoDeviceInput) {
        captureSession.addInput(videoDeviceInput)
    }

    let movieOutput = AVCaptureMovieFileOutput()

    // Video input ports for the AVCaptureConnection
    let videoInput: [AVCaptureInput.Port] = videoDeviceInput.ports

    if captureSession.canAddOutput(movieOutput) {
        captureSession.beginConfiguration()
        captureSession.addOutput(movieOutput)
        captureSession.sessionPreset = .medium
Then I try to set up the AVCaptureConnection and set the parameters for mirroring. Please tell me if there is an easier way to mirror the output / playback.
        avCaptureConnection = AVCaptureConnection(inputPorts: videoInput, output: movieOutput)
        avCaptureConnection.isEnabled = true

        // Mirror the capture connection?
        avCaptureConnection.automaticallyAdjustsVideoMirroring = false
        avCaptureConnection.isVideoMirrored = false

        // Check if we can add a connection
        if captureSession.canAddConnection(avCaptureConnection) {
            // Add the connection
            captureSession.addConnection(avCaptureConnection)
        }

        captureSession.commitConfiguration()
        self.movieOutput = movieOutput
        setupLivePreview()
    }
}
Somewhere else in the code, connected to an IBAction, I start the recording:
// Start recording video to a temporary file.
let outputFileName = NSUUID().uuidString
let outputFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((outputFileName as NSString).appendingPathExtension("mov")!)
print("Recording in tap function")
movieOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)
I think I'm using AVCaptureConnection incorrectly, especially because of the error stating that the media types are incompatible. If there is a proper way to implement this, please do let me know. I'm also open to suggestions for an easier way to mirror the playback. Thank you!
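One thing that might be worth trying (just a sketch, not verified against your project, and reusing your captureSession and movieOutput): skip the hand-built AVCaptureConnection entirely, let addOutput(_:) create the connection, and then set the mirroring flags on the connection the movie output already owns:

captureSession.beginConfiguration()
if captureSession.canAddOutput(movieOutput) {
    captureSession.addOutput(movieOutput)   // AVFoundation creates the video connection here
}

// Configure mirroring on the connection that was created for us.
if let connection = movieOutput.connection(with: .video),
   connection.isVideoMirroringSupported {
    connection.automaticallyAdjustsVideoMirroring = false
    connection.isVideoMirrored = true   // or false, depending on how you want the saved file to look
}
captureSession.commitConfiguration()

Letting addOutput(_:) form the connection also avoids adding a second connection on top of the one it already created, which may be why canAddConnection returns false in your version.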
I am using MLVision cloud text recognition in my app. I capture or upload a photo and then start the process. Once it recognizes the image and extracts the text, I separate the text and append every separated block to an array.
The code below is for the whole process.
lazy var vision = Vision.vision()
var textRecognizer: VisionTextRecognizer!
var test = [] as Array<String>

override func viewDidLoad() {
    super.viewDidLoad()
    let options = VisionCloudTextRecognizerOptions()
    options.languageHints = ["en", "hi"]
    textRecognizer = vision.cloudTextRecognizer(options: options)
}

// where pickedImage is the image that the user captures
let visionImage = VisionImage(image: pickedImage)
textRecognizer.process(visionImage, completion: { (features, error) in
    guard error == nil, let features = features else {
        self.resultView.text = "Could not recognize any text"
        self.dismiss(animated: true, completion: nil)
        return
    }
    for block in features.blocks {
        for line in block.lines {
            //for element in line.elements {
            self.resultView.text = self.resultView.text + "\(line.text)"
        }
    }
    self.separate()
})

func separate() {
    let separators = CharacterSet(charactersIn: (":)(,•/·]["))
    let ofWordsArray = self.resultView.text.components(separatedBy: separators)
    for word in ofWordsArray {
        let low = word.trimmingCharacters(in: .whitespacesAndNewlines).lowercased()
        if low != "" {
            test.append(low)
        }
    }
    print(test)
}
Everything works fine and I get the result that I want. The problem is that I think it is really slow: it takes about 20 seconds for the entire process. Is there a way to make it faster?
Thanks in advance.
You are using the VisionCloudTextRecognizer. Speed will depend on your connection; in my case it was only a few seconds. Your other option is to use on-device text recognition, or a hybrid approach where you first detect on-device and then correct with the Cloud API later.
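If you want to try the on-device route, the change is small. A hedged sketch (based on my memory of the Firebase ML Kit API, so double-check names against the current docs) that slots into your existing class, reusing your vision, visionImage, resultView, and separate():

// On-device recognizer: no network round trip, so it is usually much faster,
// at the cost of somewhat lower accuracy than the cloud recognizer.
let onDeviceRecognizer = vision.onDeviceTextRecognizer()

onDeviceRecognizer.process(visionImage) { result, error in
    guard error == nil, let result = result else {
        self.resultView.text = "Could not recognize any text"
        return
    }
    for block in result.blocks {
        for line in block.lines {
            self.resultView.text = self.resultView.text + "\(line.text)"
        }
    }
    self.separate()
}

A hybrid version would show this on-device result immediately and kick off the cloud request in the background to refine it.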
I followed this video https://www.youtube.com/watch?v=7TqXrMnfJy8&t=45s to a T, but when I open the camera view all I see is a black screen and a white button. I get no error messages when I load the camera view. Can someone please help me figure out what I'm doing wrong?
My code is below:
import UIKit
import AVFoundation

class CameraViewController: UIViewController {

    var captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCaptureSession()
        setupDevice()
        setupInputOutput()
        setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    }

    func setupDevice() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        let devices = deviceDiscoverySession.devices
        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            }
        }
        currentCamera = backCamera
    }

    func setupInputOutput() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.addInput(captureDeviceInput)
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        } catch {
            print(error)
        }
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = self.view.frame
        self.view.layer.insertSublayer(cameraPreviewLayer!, at: 1)
    }

    func startRunningCaptureSession() {
        captureSession.startRunning()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated
    }
}
I ran your code and it worked perfectly fine — almost! The only problem is that I had to add a Privacy — Camera Usage Description entry to the app's Info.plist. Otherwise the app crashes.
Once I did that and ran your code, I saw the live camera view on my device.
So why isn't it working for you? Let's think of some possible reasons. You didn't give enough info to know for sure (seeing as the code itself works just fine), but here are some possibilities:
You don't have the Privacy — Camera Usage Description entry in the app's Info.plist.
You are testing on the Simulator. Maybe this code works only on a device.
There is something in your interface in front of the sublayer that you add when you say insertSublayer. To test this, try saying addSublayer instead; this will make the camera layer the frontmost layer (this is just for testing purposes, remember).
Maybe your code never runs at all? Perhaps we never actually go to this view controller. To test that theory, put a print statement in your viewDidLoad and see if it actually prints to the console.
Maybe your code runs too soon? To test that theory, move all those calls out of viewDidLoad and into something later, such as viewDidAppear. Remember, this is just for testing purposes.
Hopefully one of those will help you figure out what the problem is.
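If it does turn out to be the permission, one way to make the failure visible (a small sketch; setupAndStart() is a hypothetical wrapper for the five setup calls in your viewDidLoad) is to check the authorization status before touching the session:

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    setupAndStart()   // hypothetical wrapper around your existing setup... calls
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print("Camera access granted: \(granted)")
        if granted {
            DispatchQueue.main.async { self.setupAndStart() }
        }
    }
default:
    print("Camera access denied or restricted; check Settings and the Info.plist entry")
}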
I am following Google's guide for getting an image from the storage and then placing it into an imageView using "FirebaseStorageUI", which seems to use SDWebImage.
The problem I have is that the image simply is not set. There are no errors in the console. My code:
@IBOutlet weak var imageView: UIImageView!

var ref = FIRDatabase.database().reference()
var storageRef = FIRStorage.storage().reference()

override func viewWillAppear(_ animated: Bool) {
    self.getHouseID() { (houseName) -> () in
        if houseName.characters.count > 0 {
            self.houseIDLabel.text = houseName // does work correctly
            let photoRef = self.storageRef.child("\(houseName)/\("housePhoto.jpg")")
            self.housePhotoImageView.sd_setImage(with: photoRef) // not working
        } else {
            print("error while setting image and house name label")
        }
    }
}
My Storage looks like this:
The label is correctly set to the houseName, which is also used in the storage path to retrieve the image. Is there anything I have missed here?
1. I think the problem is that your awesomehouse/ has a slash at the end.
Check which child() you are creating: awesomehouse or awesomehouse/.
2. The second possible problem is that you are trying to fetch "housePhoto.jpg" instead of the stored housePhoto.
So, it should be:
let photoRef = self.storageRef.child("awesomehouse/").child("housePhoto")
self.housePhotoImageView.sd_setImage(with: photoRef) // should work now
Or, better, save the photo to Storage with the ".jpg" extension; then it would be:
let photoRef = self.storageRef.child("awesomehouse/").child("housePhoto.jpg")
self.housePhotoImageView.sd_setImage(with: photoRef) // should work now
Try both. I think it will solve your problem.
Hope it helps
My first guess would be that you are not setting this on the main queue.
Try this:
DispatchQueue.main.async {
    self.housePhotoImageView.sd_setImage(with: photoRef)
}
I don't know if it matters that your image file does not have a file extension; that might also be worth trying. Hope this helps.
This is a super basic question that is troubling me.
I have a UISlider IBAction that generates a Double (var rounded). I want to use this Double in viewDidLoad, but I am getting the error "Use of unresolved identifier 'rounded'".
@IBAction func sliderValueChanged(sender: UISlider) {
    var currentValue = Double(sender.value)
    var rounded = Double(round(100*currentValue)/100)
    label.text = "\(rounded)"
}

override func viewDidLoad() {
    super.viewDidLoad()
    let fileURL = NSBundle.mainBundle().URLForResource("puppy", withExtension: "jpg")
    let beginImage = CIImage(contentsOfURL: fileURL)
    let filter = CIFilter(name: "CISepiaTone")
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(rounded, forKey: kCIInputIntensityKey)
    let newImage = UIImage(CIImage: filter.outputImage)
    self.imageView.image = newImage
}
filter.setValue(rounded, forKey: kCIInputIntensityKey) is where I am getting the error. I want to use the 'rounded' variable from the slider function here.
Any help with using a variable from one function in another function would be very much appreciated. I have run into this a couple of times without success, so once you help me out here, it should fix my other issues as well.
Thanks
Declare an instance variable at the beginning of the class (outside any method) with the default value of the UISlider:
var rounded : Double = <defaultValueOfTheSlider>
Then delete the keyword var before rounded in sliderValueChanged(), so the method assigns to that property instead of creating a new local variable.
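Concretely, the refactor looks something like this (a sketch; 0.5 just stands in for whatever your slider's starting value is, and the commented CIFilter line is your own existing code reading the property):

import UIKit

class FilterViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    // Instance property: it lives as long as the view controller does,
    // so every method in the class can read and write it.
    var rounded: Double = 0.5

    @IBAction func sliderValueChanged(sender: UISlider) {
        let currentValue = Double(sender.value)
        rounded = Double(round(100 * currentValue) / 100)   // no `var`: assigns to the property
        label.text = "\(rounded)"
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // `rounded` is visible here now, e.g.
        // filter.setValue(rounded, forKey: kCIInputIntensityKey)
    }
}

Keep in mind that viewDidLoad runs once, before the slider has ever moved, so if you want the sepia intensity to follow the slider you will also need to re-run the filter code from sliderValueChanged.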
A few concepts that will help you out:
The issue you are having is that you are creating a variable inside a function, and it is lost when the function returns. The reason is that a function's local data lives on the stack, which is temporary memory your app's process uses.
To access the variable throughout the class, declare it at the top of your class. It is then stored with the class instance, which lives on the heap, so it stays available until the instance is deallocated.