How to capture depth data from camera in iOS 11 and Swift 4?

I'm trying to get depth data from the camera in iOS 11 with AVDepthData, though when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.
So I tried setting up an AVCaptureDepthDataOutputDelegate with an AVCaptureDepthDataOutput, though I don't know how to capture the depth photo.
Has anyone ever gotten an image from AVDepthData?
Edit:
Here's the code I tried:
// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate

@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!

var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

@IBAction func capture(_ sender: Any) {
    self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    self.previewLayer?.removeFromSuperlayer()
    self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)
    let depth_map = photo.depthData?.depthDataMap
    print("depth_map:", depth_map) // is nil
}

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
    print("depth data") // never called
}

override func viewDidLoad() {
    super.viewDidLoad()

    self.captureSession = AVCaptureSession()
    self.captureSession?.sessionPreset = .photo

    self.sessionOutput = AVCapturePhotoOutput()
    self.depthOutput = AVCaptureDepthDataOutput()
    self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))

    do {
        let device = AVCaptureDevice.default(for: .video)
        let input = try AVCaptureDeviceInput(device: device!)
        if (self.captureSession?.canAddInput(input))! {
            self.captureSession?.addInput(input)
            if (self.captureSession?.canAddOutput(self.sessionOutput!))! {
                self.captureSession?.addOutput(self.sessionOutput!)
                if (self.captureSession?.canAddOutput(self.depthOutput!))! {
                    self.captureSession?.addOutput(self.depthOutput!)
                    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
                    self.previewLayer?.frame = self.image_view.bounds
                    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
                    self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                    self.image_view.layer.addSublayer(self.previewLayer!)
                }
            }
        }
    } catch {}

    self.captureSession?.startRunning()
}
I'm trying two things: one where the depth data comes back nil, and one where I try to call a depth delegate method that is never invoked.
Does anyone know what I'm missing?

First, you need to use the dual camera, otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
And keep a reference to your queue
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs in your viewDidLoad() method like this
if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true
    depthDataOutput?.connection(with: .depthData)?.isEnabled = true
    depthDataOutput?.isFilteringEnabled = true
    outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
    outputSynchronizer!.setDelegate(self, queue: self.dataOutputQueue)
}
I would recommend watching WWDC 2017 session 507 ("Capturing Depth in iPhone Photography") - Apple also provides a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
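For completeness, here is a minimal sketch of the synchronizer callback (conform to AVCaptureDataOutputSynchronizerDelegate; this assumes the depthDataOutput and dataOutputQueue properties above):
// Sketch: pull the depth data for this "time slice" out of the synchronized collection.
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!) as? AVCaptureSynchronizedDepthData,
          !syncedDepth.depthDataWasDropped else { return }
    let depthData = syncedDepth.depthData // AVDepthData for this frame
    // ... process depthData.depthDataMap here ...
}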

To add more detail to @klinger's answer, here is what you need to do to get the depth data for each pixel. I wrote some comments, hope it helps!
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {

    //## Convert Disparity to Depth ##
    guard let depthData = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) else { return }
    let depthDataMap = depthData.depthDataMap //AVDepthData -> CVPixelBuffer

    //## Data Analysis ##

    // Useful data
    let width = CVPixelBufferGetWidth(depthDataMap) //768 on an iPhone 7+
    let height = CVPixelBufferGetHeight(depthDataMap) //576 on an iPhone 7+
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))

    // Convert the base address to a safe pointer of the appropriate type
    let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)

    // Read the data (returns a value of type Float32).
    // The buffer is row-major: valid indices run from 0 to width * height - 1,
    // and the value for pixel (x, y) lives at index y * width + x.
    let distanceAtXYPoint = floatBuffer[y * width + x]

    CVPixelBufferUnlockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
}
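And since the original question asked about getting an image out of AVDepthData: once you have the depth CVPixelBuffer, a quick way to visualize it (a rough sketch; you may need to scale or normalize the values for good contrast) is via Core Image:
// Sketch: visualize the depth buffer (e.g. inside photoOutput above).
let ciImage = CIImage(cvPixelBuffer: depthDataMap)
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    let depthImage = UIImage(cgImage: cgImage)
    // e.g. self.image_view.image = depthImage
}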

There are two ways to do this, and you are trying to do both at once:
Capture depth data along with the image. This is done by using the photo.depthData object from photoOutput(_:didFinishProcessingPhoto:error:). I explain why this did not work for you below.
Use an AVCaptureDepthDataOutput and implement depthDataOutput(_:didOutput:timestamp:connection:). I am not sure why this did not work for you, but implementing depthDataOutput(_:didDrop:timestamp:connection:reason:) might help you figure out why (see below).
I think that #1 is a better option, because it pairs the depth data with the image. Here's how you would do that:
@IBAction func capture(_ sender: Any) {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    settings.isDepthDataDeliveryEnabled = true
    self.sessionOutput?.capturePhoto(with: settings, delegate: self)
}

// ...

override func viewDidLoad() {
    // ...
    self.sessionOutput = AVCapturePhotoOutput()
    self.sessionOutput?.isDepthDataDeliveryEnabled = true
    // ...
}
Then, depth_map shouldn't be nil. Make sure to read Apple's documentation on AVDepthData and on enabling depth capture with AVCapturePhotoOutput (two separate but similar pages) for more information about obtaining depth data.
For #2, I'm not quite sure why depthDataOutput(_:didOutput:timestamp:connection:) isn't being called, but you should implement depthDataOutput(_:didDrop:timestamp:connection:reason:) to see if depth data is being dropped for some reason.
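That callback is part of the same delegate protocol; a minimal sketch:
// Sketch: log why depth frames are being dropped, if they are.
func depthDataOutput(_ output: AVCaptureDepthDataOutput, didDrop depthData: AVDepthData,
                     timestamp: CMTime, connection: AVCaptureConnection,
                     reason: AVCaptureOutput.DataDroppedReason) {
    print("dropped depth data, reason: \(reason.rawValue)")
}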

The way you initialize your capture device is not right: you should use the dual camera.
In Objective-C, that looks like the following:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInDualCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];

Related

Is there a way to create a spectrogram of an audio file using Swift and AudioKit?

I am trying to create a spectrogram, like the one in the image, from an audio file using Swift for a macOS app. I am using AppKit but could implement SwiftUI as well. I came across AudioKit and it seems like the perfect library for this type of thing, but I have not been able to find any examples of what I am looking for in any of the AudioKit repositories, neither AudioKitUI nor the Cookbook. Is this something that is possible with AudioKit? If so, can anyone help me with this?
Thanks so much!
I have previously tried using apple's example project and changed the code in the AudioSpectrogram + AVCaptureAudioDataOutputSampleBufferDelegate file. The original code is as follows:
extension AudioSpectrogram: AVCaptureAudioDataOutputSampleBufferDelegate {

    public func captureOutput(_ output: AVCaptureOutput,
                              didOutput sampleBuffer: CMSampleBuffer,
                              from connection: AVCaptureConnection) {
        var audioBufferList = AudioBufferList()
        var blockBuffer: CMBlockBuffer?

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sampleBuffer,
            bufferListSizeNeededOut: nil,
            bufferListOut: &audioBufferList,
            bufferListSize: MemoryLayout.stride(ofValue: audioBufferList),
            blockBufferAllocator: nil,
            blockBufferMemoryAllocator: nil,
            flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            blockBufferOut: &blockBuffer)

        guard let data = audioBufferList.mBuffers.mData else {
            return
        }

        /// The _Nyquist frequency_ is the highest frequency that a sampled system can properly
        /// reproduce and is half the sampling rate of such a system. Although this app doesn't use
        /// `nyquistFrequency`, you may find this code useful to add an overlay to the user interface.
        if nyquistFrequency == nil {
            let duration = Float(CMSampleBufferGetDuration(sampleBuffer).value)
            let timescale = Float(CMSampleBufferGetDuration(sampleBuffer).timescale)
            let numsamples = Float(CMSampleBufferGetNumSamples(sampleBuffer))
            nyquistFrequency = 0.5 / (duration / timescale / numsamples)
        }

        if self.rawAudioData.count < AudioSpectrogram.sampleCount * 2 {
            let actualSampleCount = CMSampleBufferGetNumSamples(sampleBuffer)
            let ptr = data.bindMemory(to: Int16.self, capacity: actualSampleCount)
            let buf = UnsafeBufferPointer(start: ptr, count: actualSampleCount)
            rawAudioData.append(contentsOf: Array(buf))
        }

        while self.rawAudioData.count >= AudioSpectrogram.sampleCount {
            let dataToProcess = Array(self.rawAudioData[0 ..< AudioSpectrogram.sampleCount])
            self.rawAudioData.removeFirst(AudioSpectrogram.hopCount)
            self.processData(values: dataToProcess)
        }

        createAudioSpectrogram()
    }

    func configureCaptureSession() {
        // Also note that:
        //
        // When running in iOS, you must add a "Privacy - Microphone Usage
        // Description" entry.
        //
        // When running in macOS, you must add a "Privacy - Microphone Usage
        // Description" entry to `Info.plist`, and check "audio input" and
        // "camera access" under the "Resource Access" category of "Hardened
        // Runtime".
        switch AVCaptureDevice.authorizationStatus(for: .audio) {
        case .authorized:
            break
        case .notDetermined:
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .audio,
                                          completionHandler: { granted in
                if !granted {
                    fatalError("App requires microphone access.")
                } else {
                    self.configureCaptureSession()
                    self.sessionQueue.resume()
                }
            })
            return
        default:
            // Users can add authorization in "Settings > Privacy > Microphone"
            // on an iOS device, or "System Preferences > Security & Privacy >
            // Microphone" on a macOS device.
            fatalError("App requires microphone access.")
        }

        captureSession.beginConfiguration()

        #if os(macOS)
        // Note that in macOS, you can change the sample rate, for example to
        // `AVSampleRateKey: 22050`. This reduces the Nyquist frequency and
        // increases the resolution at lower frequencies.
        audioOutput.audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMBitDepthKey: 16,
            AVNumberOfChannelsKey: 1]
        #endif

        if captureSession.canAddOutput(audioOutput) {
            captureSession.addOutput(audioOutput)
        } else {
            fatalError("Can't add `audioOutput`.")
        }

        guard
            let microphone = AVCaptureDevice.default(.builtInMicrophone,
                                                     for: .audio,
                                                     position: .unspecified),
            let microphoneInput = try? AVCaptureDeviceInput(device: microphone) else {
                fatalError("Can't create microphone.")
        }

        if captureSession.canAddInput(microphoneInput) {
            captureSession.addInput(microphoneInput)
        }

        captureSession.commitConfiguration()
    }

    /// Starts the audio spectrogram.
    func startRunning() {
        sessionQueue.async {
            if AVCaptureDevice.authorizationStatus(for: .audio) == .authorized {
                self.captureSession.startRunning()
            }
        }
    }
}
I got rid of the configureCaptureSession function and replaced the rest of the code to get the following code:
public func captureBuffer() {
    var samplesArray: [Int16] = []
    let asset = AVAsset(url: audioFileUrl)
    let reader = try! AVAssetReader(asset: asset)
    let track = asset.tracks(withMediaType: AVMediaType.audio)[0]
    let settings = [
        AVFormatIDKey: kAudioFormatLinearPCM
    ]
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(readerOutput)
    reader.startReading()

    while let buffer = readerOutput.copyNextSampleBuffer() {
        var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
        var blockBuffer: CMBlockBuffer?

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            buffer,
            bufferListSizeNeededOut: nil,
            bufferListOut: &audioBufferList,
            bufferListSize: MemoryLayout<AudioBufferList>.size,
            blockBufferAllocator: nil,
            blockBufferMemoryAllocator: nil,
            flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            blockBufferOut: &blockBuffer
        )

        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))

        for buffer in buffers {
            let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<Int16>.size
            let samplesPointer = audioBufferList.mBuffers.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
            let samples = UnsafeMutableBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
            for sample in samples {
                // do something with your sample (which is an Int16 amplitude value)
                samplesArray.append(sample)
            }
        }

        guard let data = audioBufferList.mBuffers.mData else {
            return
        }

        /// The _Nyquist frequency_ is the highest frequency that a sampled system can properly
        /// reproduce and is half the sampling rate of such a system. Although this app doesn't use
        /// `nyquistFrequency`, you may find this code useful to add an overlay to the user interface.
        if nyquistFrequency == nil {
            let duration = Float(CMSampleBufferGetDuration(buffer).value)
            let timescale = Float(CMSampleBufferGetDuration(buffer).timescale)
            let numsamples = Float(CMSampleBufferGetNumSamples(buffer))
            nyquistFrequency = 0.5 / (duration / timescale / numsamples)
        }

        if self.rawAudioData.count < AudioSpectrogram.sampleCount * 2 {
            let actualSampleCount = CMSampleBufferGetNumSamples(buffer)
            let ptr = data.bindMemory(to: Int16.self, capacity: actualSampleCount)
            let buf = UnsafeBufferPointer(start: ptr, count: actualSampleCount)
            rawAudioData.append(contentsOf: Array(buf))
        }

        while self.rawAudioData.count >= AudioSpectrogram.sampleCount {
            let dataToProcess = Array(self.rawAudioData[0 ..< AudioSpectrogram.sampleCount])
            self.rawAudioData.removeFirst(AudioSpectrogram.hopCount)
            self.processData(values: dataToProcess)
        }

        createAudioSpectrogram()
    }
}
In the AudioSpectrogram: CALayer file, I changed the original lines 10-30 from
public class AudioSpectrogram: CALayer {
    // MARK: Initialization
    override init() {
        super.init()

        contentsGravity = .resize
        configureCaptureSession()
        audioOutput.setSampleBufferDelegate(self,
                                            queue: captureQueue)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override public init(layer: Any) {
        super.init(layer: layer)
    }
to the following:
public class AudioSpectrogram: CALayer {
    @objc var audioFileUrl: URL

    // MARK: Initialization
    override init() {
        self.audioFileUrl = selectedTrackUrl!
        super.init()

        contentsGravity = .resize
        captureBuffer()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override public init(layer: Any) {
        self.audioFileUrl = selectedTrackUrl!
        super.init(layer: layer)
    }
The changed code allows me to specify the audio file to use when the Spectrogram is called from another area in my app.
The following is an example of what I am trying to achieve. It was generated using FFmpeg.
Example Spectrogram
This is the output I get from my code:
Output Image
AudioKit is not the tool you want for this. You want AVFoundation. Apple has an example project that does exactly what you're describing.
The tool at the heart of this is a DCT (discrete cosine transform) to convert windows of samples into a collection of component frequencies you can visualize. AVFoundation is the tool you use to turn your audio file or live recording into a buffer of audio samples so you can apply the DCT.
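To make the processing step concrete, here is a minimal sketch (my own illustration, not Apple's sample code) of running one window of samples through Accelerate's vDSP DCT:
import Accelerate

// One window of audio samples (e.g. 1024 Int16 values pulled from the buffers above),
// converted to Float and run through a type-II DCT.
let window: [Int16] = Array(repeating: 0, count: 1024) // placeholder input
let floatSamples = vDSP.integerToFloatingPoint(window, floatingPointType: Float.self)

if let dct = vDSP.DCT(count: 1024, transformType: .II) {
    let coefficients = dct.transform(floatSamples)
    // Each element is a DCT coefficient for one frequency bin; map the
    // magnitudes to colors to paint one column of the spectrogram.
}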
There actually is a Spectrogram in the AudioKitUI Swift package: https://github.com/AudioKit/AudioKitUI/blob/main/Sources/AudioKitUI/Visualizations/SpectrogramView.swift
You would need to pass it an AudioKit Node but it should be interchangeable with the other visualizers in the Cookbook.
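Usage would look roughly like this (a sketch; I'm assuming the SpectrogramView(node:) initializer from the file linked above, so double-check against the current AudioKitUI source):
import AudioKit
import AudioKitUI
import SwiftUI

// Sketch: feed an AudioKit player node into the spectrogram view.
struct SpectrogramDemo: View {
    // In a real app, keep the engine/player in a model object, not the view.
    let engine = AudioEngine()
    let player = AudioPlayer()

    var body: some View {
        SpectrogramView(node: player) // assumed initializer, see link above
            .onAppear {
                engine.output = player
                try? engine.start()
                player.play()
            }
    }
}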

How can I add multipeer connectivity to an ARKit app that doesn't have 3D assets, but uses UITextView for rendering instead? [Swift]

So I am trying to add a multipeer element to this Sticky Note app from Apple's own sample code (link to the sample code page). There are several examples of multipeer ARKit apps, but the problem here is that in the app I am working from, the sticky note is NOT a 3D element. As the sample's documentation puts it:
For the purposes of this sample app, the sticky note entity has no geometry and thus, no appearance. Its anchor provides a 3D location only, and itʼs the sticky noteʼs screen-space annotation that has an appearance. To display it, you define the sticky noteʼs annotation. Following RealityKitʼs entity-component model, design a component that houses the annotation, which in this case is a view. See ScreenSpaceComponent.
I have been trying to follow the examples of multipeer AR apps that use ARKit with 3D elements stored either as assets [the "Collaborative Session" example] or as ModelEntity geometry [the "Creating a Multiuser AR Experience" example], but I haven't been successful in translating them to this app, which uses screen space only.
I am able to get the message on the screen that it's connected to a peer, but that is as far as it goes: it will not render the notes on the second phone. I am burned out from all the attempts at making it work :(
One alternative is to forget about the notes being tethered to the screen space, and to recreate this as a regular 3D-space-with-2D-geometry thing using SpriteKit.
The system will not render the app's sticky notes on the other phone. I know there is a way around this, but I have been trying for days and haven't been able to find it.
I have been testing this using 2 phones.
I have:
Added the required entries to the Info.plist
Added the MultipeerSession file
Added the multipeer-related code to the ViewController file
Added code to the arViewGestureSetup() extension file, which has the rendering info for the sticky notes.
What works: I can see the notes on both phones, and I get the messages saying that a peer has joined. What I can't do is view the other user's notes like I would in a regular 3D ARKit app. They will not render.
This is what I have added to the insertNewSticky(_ sender: UITapGestureRecognizer) function, taken from one of the other examples:
let anchor = ARAnchor(name: "Anchor for object placement", transform: raycastResult.worldTransform)
arView.session.add(anchor: anchor)
Below is the full code for the Gesture Recognizer Setup
import UIKit
import ARKit
extension ViewController {
// MARK: - Gesture recognizer setup
// - Tag: AddViewTapGesture
func arViewGestureSetup() {
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tappedOnARView))
arView.addGestureRecognizer(tapGesture)
let swipeGesture = UISwipeGestureRecognizer(target: self, action: #selector(swipedDownOnARView))
swipeGesture.direction = .down
arView.addGestureRecognizer(swipeGesture)
}
func stickyNoteGestureSetup(_ note: StickyNoteEntity) {
let panGesture = UIPanGestureRecognizer(target: self, action: #selector(panOnStickyView))
note.view?.addGestureRecognizer(panGesture)
let tapOnStickyView = UITapGestureRecognizer(target: self, action: #selector(tappedOnStickyView(_:)))
note.view?.addGestureRecognizer(tapOnStickyView)
}
// MARK: - Gesture recognizer callbacks
/// Tap gesture input handler.
/// - Tag: TapHandler
@objc
func tappedOnARView(_ sender: UITapGestureRecognizer) {
// Ignore the tap if the user is editing a sticky note.
for note in stickyNotes where note.isEditing { return }
// Create a new sticky note at the tap location.
insertNewSticky(sender)
}
/**
Hit test the feature point cloud and use any hit as the position of a new StickyNote. Otherwise, display a tip.
- Tag: ScreenSpaceViewInsertionTag
*/
func insertNewSticky(_ sender: UITapGestureRecognizer) {
// Get the user's tap screen location.
let touchLocation = sender.location(in: arView)
// Cast a ray to check for its intersection with any planes.
guard let raycastResult = arView.raycast(from: touchLocation, allowing: .estimatedPlane, alignment: .any).first
else {
messageLabel.displayMessage("No surface detected, try getting closer.", duration: 2.0)
return
}
// Create a new sticky note positioned at the hit test result's world position.
let frame = CGRect(origin: touchLocation, size: CGSize(width: 200, height: 200))
let note = StickyNoteEntity(frame: frame, worldTransform: raycastResult.worldTransform)
// Center the sticky note's view on the tap's screen location.
note.setPositionCenter(touchLocation)
// Add the sticky note to the scene's entity hierarchy.
arView.scene.addAnchor(note)
// Add the sticky note's view to the view hierarchy.
guard let stickyView = note.view else { return }
arView.insertSubview(stickyView, belowSubview: trashZone)
// Enable gestures on the sticky note.
stickyNoteGestureSetup(note)
// Save a reference to the sticky note.
stickyNotes.append(note)
// Volunteer to handle text view callbacks.
stickyView.textView.delegate = self
let anchor = ARAnchor(name: "Anchor for object placement", transform: raycastResult.worldTransform)
arView.session.add(anchor: anchor)
}
/// Dismisses the keyboard.
@objc
func swipedDownOnARView(_ sender: UISwipeGestureRecognizer) {
dismissKeyboard()
}
fileprivate func dismissKeyboard() {
for note in stickyNotes {
guard let textView = note.view?.textView else { continue }
if textView.isFirstResponder {
textView.resignFirstResponder()
return
}
}
}
@objc
func tappedOnStickyView(_ sender: UITapGestureRecognizer) {
guard let stickyView = sender.view as? StickyNoteView else { return }
stickyView.textView.becomeFirstResponder()
}
//- Tag: PanOnStickyView
fileprivate func panStickyNote(_ sender: UIPanGestureRecognizer, _ stickyView: StickyNoteView, _ panLocation: CGPoint) {
messageLabel.isHidden = true
let feedbackGenerator = UIImpactFeedbackGenerator()
switch sender.state {
case .began:
// Prepare the taptic engine to reduce latency in delivering feedback.
feedbackGenerator.prepare()
// Drag if the gesture is beginning.
stickyView.stickyNote.isDragging = true
// Save offsets to implement smooth panning.
guard let frame = sender.view?.frame else { return }
stickyView.xOffset = panLocation.x - frame.origin.x
stickyView.yOffset = panLocation.y - frame.origin.y
// Fade in the widget that's used to delete sticky notes.
trashZone.fadeIn(duration: 0.4)
case .ended:
// Stop dragging if the gesture is ending.
stickyView.stickyNote.isDragging = false
// Delete the sticky note if the gesture ended on the trash widget.
if stickyView.isInTrashZone {
deleteStickyNote(stickyView.stickyNote)
// ...
} else {
attemptRepositioning(stickyView)
}
// Fades out the widget that's used to delete sticky notes when there are no sticky notes currently being dragged.
if !stickyNotes.contains(where: { $0.isDragging }) {
trashZone.fadeOut(duration: 0.2)
}
default:
// Update the sticky note's screen position based on the pan location, and initial offset.
stickyView.frame.origin.x = panLocation.x - stickyView.xOffset
stickyView.frame.origin.y = panLocation.y - stickyView.yOffset
// Give feedback whenever the pan location is near the widget used to delete sticky notes.
trashZoneThresholdFeedback(sender, feedbackGenerator)
}
}
/// Sticky note pan-gesture handler.
/// - Tag: PanHandler
@objc
func panOnStickyView(_ sender: UIPanGestureRecognizer) {
guard let stickyView = sender.view as? StickyNoteView else { return }
let panLocation = sender.location(in: arView)
// Ignore the pan if any StickyViews are being edited.
for note in stickyNotes where note.isEditing { return }
panStickyNote(sender, stickyView, panLocation)
}
func deleteStickyNote(_ note: StickyNoteEntity) {
guard let index = stickyNotes.firstIndex(of: note) else { return }
note.removeFromParent()
stickyNotes.remove(at: index)
note.view?.removeFromSuperview()
note.view?.isInTrashZone = false
}
/// - Tag: AttemptRepositioning
fileprivate func attemptRepositioning(_ stickyView: StickyNoteView) {
// Conducts a ray-cast for feature points using the panned position of the StickyNoteView
let point = CGPoint(x: stickyView.frame.midX, y: stickyView.frame.midY)
if let result = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any).first {
stickyView.stickyNote.transform.matrix = result.worldTransform
} else {
messageLabel.displayMessage("No surface detected, unable to reposition note.", duration: 2.0)
stickyView.stickyNote.shouldAnimate = true
}
}
fileprivate func trashZoneThresholdFeedback(_ sender: UIPanGestureRecognizer, _ feedbackGenerator: UIImpactFeedbackGenerator) {
guard let stickyView = sender.view as? StickyNoteView else { return }
let panLocation = sender.location(in: trashZone)
if trashZone.frame.contains(panLocation), !stickyView.isInTrashZone {
stickyView.isInTrashZone = true
feedbackGenerator.impactOccurred()
} else if !trashZone.frame.contains(panLocation), stickyView.isInTrashZone {
stickyView.isInTrashZone = false
feedbackGenerator.impactOccurred()
}
}
@objc
func tappedReset(_ sender: UIButton) {
reset()
}
}
And this is the full code for the ViewController file:
/*
See LICENSE folder for this sample’s licensing information.
Abstract:
Main view controller for the AR experience.
*/
import UIKit
import RealityKit
import Combine
import ARKit
import MultipeerConnectivity
class ViewController: UIViewController, ARSessionDelegate {
// MARK: - Class variable declarations
@IBOutlet var arView: ARView!
@IBOutlet weak var messageLabel: MessageLabel!
var trashZone: GradientView!
var shadeView: UIView!
var resetButton: UIButton!
var keyboardHeight: CGFloat!
var stickyNotes = [StickyNoteEntity]()
var subscription: Cancellable!
//added Sat May 28 5:12pm
var multipeerSession: MultipeerSession?
// end of added Sat May 28 5:12pm
//added Sat May 28 5:12pm
// A dictionary to map MultiPeer IDs to ARSession ID's.
// This is useful for keeping track of which peer created which ARAnchors.
var peerSessionIDs = [MCPeerID: String]()
var sessionIDObservation: NSKeyValueObservation?
var configuration: ARWorldTrackingConfiguration?
// end of added Sat May 28 5:12pm
// MARK: - View Controller Life Cycle
override func viewDidLoad() {
super.viewDidLoad()
subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [unowned self] in
self.updateScene(on: $0)
}
arViewGestureSetup()
overlayUISetup()
arView.session.delegate = self
}
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
// Add observer to the keyboardWillShowNotification to get the height of the keyboard every time it is shown
let notificationName = UIResponder.keyboardWillShowNotification
let selector = #selector(keyboardIsPoppingUp(notification:))
NotificationCenter.default.addObserver(self, selector: selector, name: notificationName, object: nil)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
arView.session.delegate = self
// Prevent the screen from being dimmed to avoid interrupting the AR experience.
UIApplication.shared.isIdleTimerDisabled = true
// Turn off ARView's automatically-configured session
// to create and set up your own configuration.
arView.automaticallyConfigureSession = false
configuration = ARWorldTrackingConfiguration()
// Enable a collaborative session.
configuration?.isCollaborationEnabled = true
// Enable realistic reflections.
configuration?.environmentTexturing = .automatic
// Begin the session.
arView.session.run(configuration!)
// Use key-value observation to monitor your ARSession's identifier.
sessionIDObservation = observe(\.arView.session.identifier, options: [.new]) { object, change in
print("SessionID changed to: \(change.newValue!)")
// Tell all other peers about your ARSession's changed ID, so
// that they can keep track of which ARAnchors are yours.
guard let multipeerSession = self.multipeerSession else { return }
self.sendARSessionIDTo(peers: multipeerSession.connectedPeers)
}
// Start looking for other players via MultiPeerConnectivity.
multipeerSession = MultipeerSession(receivedDataHandler: receivedData, peerJoinedHandler:
peerJoined, peerLeftHandler: peerLeft, peerDiscoveredHandler: peerDiscovered)
//arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:))))
messageLabel.displayMessage("Tap the screen to place cubes.\nInvite others to launch this app to join you.", duration: 60.0)
}
//peerDiscovered
func peerDiscovered(_ peer: MCPeerID) -> Bool {
guard let multipeerSession = multipeerSession else { return false }
if multipeerSession.connectedPeers.count > 3 {
// Do not accept more than four users in the experience.
messageLabel.displayMessage("A fifth peer wants to join the experience.\nThis app is limited to four users.", duration: 6.0)
return false
} else {
return true
}
}
// end of added Sat May 28 5:12pm
/// - Tag: PeerJoined
// added Sat May 28 5:12pm
func peerJoined(_ peer: MCPeerID) {
messageLabel.displayMessage("""
A peer has joined the experience.
Hold the phones next to each other.
""", duration: 6.0)
// Provide your session ID to the new user so they can keep track of your anchors.
sendARSessionIDTo(peers: [peer])
}
// end of added Sat May 28 5:12pm
// added Sat May 28 5:12pm
func peerLeft(_ peer: MCPeerID) {
messageLabel.displayMessage("A peer has left the shared experience.")
// Remove all ARAnchors associated with the peer that just left the experience.
if let sessionID = peerSessionIDs[peer] {
removeAllAnchorsOriginatingFromARSessionWithID(sessionID)
peerSessionIDs.removeValue(forKey: peer)
}
}
// end of added Sat May 28 5:12pm
//added Sat May 28 5:12pm
func receivedData(_ data: Data, from peer: MCPeerID) {
if let collaborationData = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARSession.CollaborationData.self, from: data) {
arView.session.update(with: collaborationData)
return
}
// ...
let sessionIDCommandString = "SessionID:"
if let commandString = String(data: data, encoding: .utf8), commandString.starts(with: sessionIDCommandString) {
let newSessionID = String(commandString[commandString.index(commandString.startIndex,
offsetBy: sessionIDCommandString.count)...])
// If this peer was using a different session ID before, remove all its associated anchors.
// This will remove the old participant anchor and its geometry from the scene.
if let oldSessionID = peerSessionIDs[peer] {
removeAllAnchorsOriginatingFromARSessionWithID(oldSessionID)
}
peerSessionIDs[peer] = newSessionID
}
}
// end of added Sat May 28 5:12pm
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
messageLabel.displayMessage("Established joint experience with a peer.")
// ...
}
func updateScene(on event: SceneEvents.Update) {
let notesToUpdate = stickyNotes.compactMap { !$0.isEditing && !$0.isDragging ? $0 : nil }
for note in notesToUpdate {
// Gets the 2D screen point of the 3D world point.
guard let projectedPoint = arView.project(note.position) else { return }
// Calculates whether the note can be currently visible by the camera.
let cameraForward = arView.cameraTransform.matrix.columns.2.xyz
let cameraToWorldPointDirection = normalize(note.transform.translation - arView.cameraTransform.translation)
let dotProduct = dot(cameraForward, cameraToWorldPointDirection)
let isVisible = dotProduct < 0
// Updates the screen position of the note based on its visibility
note.projection = Projection(projectedPoint: projectedPoint, isVisible: isVisible)
note.updateScreenPosition()
}
}
func reset() {
guard let configuration = arView.session.configuration else { return }
arView.session.run(configuration, options: .removeExistingAnchors)
for note in stickyNotes {
deleteStickyNote(note)
}
}
func session(_ session: ARSession, didFailWithError error: Error) {
guard error is ARError else { return }
let errorWithInfo = error as NSError
let messages = [
errorWithInfo.localizedDescription,
errorWithInfo.localizedFailureReason,
errorWithInfo.localizedRecoverySuggestion
]
let errorMessage = messages.compactMap({ $0 }).joined(separator: "\n")
DispatchQueue.main.async {
// Present an alert informing about the error that has occurred.
let alertController = UIAlertController(title: "The AR session failed.", message: errorMessage, preferredStyle: .alert)
let restartAction = UIAlertAction(title: "Restart Session", style: .default) { _ in
alertController.dismiss(animated: true, completion: nil)
self.reset()
}
alertController.addAction(restartAction)
self.present(alertController, animated: true, completion: nil)
}
}
override var prefersStatusBarHidden: Bool {
return true
}
override var prefersHomeIndicatorAutoHidden: Bool {
return true
}
private func sendARSessionIDTo(peers: [MCPeerID]) {
guard let multipeerSession = multipeerSession else { return }
let idString = arView.session.identifier.uuidString
let command = "SessionID:" + idString
if let commandData = command.data(using: .utf8) {
multipeerSession.sendToPeers(commandData, reliably: true, peers: peers)
}
}
private func removeAllAnchorsOriginatingFromARSessionWithID(_ identifier: String) {
guard let frame = arView.session.currentFrame else { return }
for anchor in frame.anchors {
guard let anchorSessionID = anchor.sessionIdentifier else { continue }
if anchorSessionID.uuidString == identifier {
arView.session.remove(anchor: anchor)
}
}
}
}
Update: I spoke to a staff engineer on Apple's RealityKit team, who explained that what I was trying to accomplish is not feasible because the note has an embedded subclass that is not codable per Swift's Codable protocol.
I will have to rebuild the note differently than in the example I had been working from, so that it conforms to Codable, which will then let the data travel across the network via the Multipeer Connectivity framework.
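For example, a minimal sketch of what a Codable payload might look like (all names here are hypothetical; only plain data crosses the network, and each phone rebuilds its own local view/entity from it):
// Hypothetical transport struct: the UIView/Entity subclass itself never travels.
struct StickyNotePayload: Codable {
    let id: UUID
    let text: String
    let transform: [Float] // the 16 values of the anchor's float4x4, flattened
}

// Encode before handing the bytes to MultipeerSession, decode on receipt,
// then rebuild a local StickyNoteEntity from the decoded payload.
let payload = StickyNotePayload(id: UUID(), text: "Hello", transform: Array(repeating: 0, count: 16))
if let data = try? JSONEncoder().encode(payload) {
    // multipeerSession?.sendToPeers(data, reliably: true, peers: peers)
    let decoded = try? JSONDecoder().decode(StickyNotePayload.self, from: data) // rebuild note from this
}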

Swift AVCaptureSession for barcode scanning is not working

I am trying to build a barcode scanner. I adapted some of the code from this tutorial. The video capture session is working, but it is not detecting any barcode. I have gone through the code multiple times and still could not find what the problem could be. Here is my code for detecting the barcode:
class ScanController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var qrCodeFrameView: UIView?

    let supportedCodeTypes = [AVMetadataObject.ObjectType.upce,
                              AVMetadataObject.ObjectType.code39,
                              AVMetadataObject.ObjectType.qr]

    override func viewDidLoad() {
        super.viewDidLoad()

        // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video as the media type parameter
        let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)

        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object.
            let input = try AVCaptureDeviceInput(device: captureDevice!)

            // Initialize the captureSession object.
            captureSession = AVCaptureSession()

            // Set the input device on the capture session.
            captureSession?.addInput(input)

            let captureMetadataOutput = AVCaptureMetadataOutput()
            captureSession?.addOutput(captureMetadataOutput)

            // Set delegate and use the default dispatch queue to execute the call back
            captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            captureMetadataOutput.metadataObjectTypes = supportedCodeTypes

            // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            view.layer.addSublayer(videoPreviewLayer!)

            // Start video capture.
            captureSession?.startRunning()

            // Add the message label
            self.view.addSubview(messageLabel)

            // Initialize QR Code Frame to highlight the QR Code
            qrCodeFrameView = UIView()

            if let qrCodeFrameView = qrCodeFrameView {
                qrCodeFrameView.layer.borderColor = UIColor.green.cgColor
                qrCodeFrameView.layer.borderWidth = 2
                view.addSubview(qrCodeFrameView)
                view.bringSubview(toFront: qrCodeFrameView)
            }
        } catch {
            // If any error occurs, simply print it out and don't continue any more.
            print("THERE IS A PROBLEM WITH THE CAPTURE SESSION *****************")
            print(error)
            return
        }
    }
}
What am I missing?
Maybe you are missing the delegate method? The tutorial implements this delegate method:
optional func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
under the section "Decoding the QR Code".
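A minimal sketch of what that implementation could look like (reusing the qrCodeFrameView and supportedCodeTypes from the question):
// Sketch: without implementing this method, the session runs but nothing
// ever reacts to detected codes.
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    guard let metadataObj = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
          supportedCodeTypes.contains(metadataObj.type) else {
        qrCodeFrameView?.frame = CGRect.zero
        return
    }
    // Highlight the detected barcode on screen and read its payload.
    if let barCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj) {
        qrCodeFrameView?.frame = barCodeObject.bounds
    }
    if let value = metadataObj.stringValue {
        print("Detected code: \(value)")
    }
}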

Camera feed of dimensions one pixel by one pixel

This is a rather strange request, but I am looking to build an app that has a live camera feed taking up the whole screen. However, instead of displaying at the normal resolution, it would all be one color. In particular, I want to take the color of what would normally be the middle pixel on the screen and make that take up the entire screen. It needs to be done live and fast.
I attempted to make a function which saved the capture session as a UIImage and then got the pixel data from that; however, it proved too slow for real time. Any suggestions?
Assuming you have an AVCaptureSession set up, you need to add an AVCaptureVideoDataOutput and then set its sample buffer delegate. The delegate class should implement captureOutput(_:didOutputSampleBuffer:from:). Within this function you can get access to the pixel buffer to sample your centre point. You could do it as below. I've left the actual sampling of the centre point to you.
class MyClass: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func addVideoOutput() {
        // Add video data output.
        if session.canAddOutput(videoDataOutput) {
            videoDataOutput.setSampleBufferDelegate(self, queue: sessionQueue)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as NSString: Int(kCVPixelFormatType_32BGRA)]
            videoDataOutput.alwaysDiscardsLateVideoFrames = true
            session.addOutput(videoDataOutput)
        }
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            process(pixelBuffer: buffer)
        }
    }

    func process(pixelBuffer: CVPixelBuffer) {
        let sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let rt = CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        if rt == kCVReturnSuccess {
            // ... Do your processing of the pixel data here ...
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        }
    }

    private let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "session queue", attributes: [], target: nil) // Communicate with the session
    private let videoDataOutput = AVCaptureVideoDataOutput()
}
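To fill in the part that's left to you: a sketch of sampling the centre pixel, assuming the kCVPixelFormatType_32BGRA format set in addVideoOutput():
// Sketch: read the BGRA bytes of the centre pixel and turn them into a UIColor.
func sampleCentre(of pixelBuffer: CVPixelBuffer) -> UIColor? {
    guard CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly) == kCVReturnSuccess,
          let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let x = CVPixelBufferGetWidth(pixelBuffer) / 2
    let y = CVPixelBufferGetHeight(pixelBuffer) / 2

    // Each 32BGRA pixel is 4 bytes, ordered B, G, R, A.
    let pixel = base.advanced(by: y * rowBytes + x * 4).assumingMemoryBound(to: UInt8.self)
    return UIColor(red: CGFloat(pixel[2]) / 255.0,
                   green: CGFloat(pixel[1]) / 255.0,
                   blue: CGFloat(pixel[0]) / 255.0,
                   alpha: CGFloat(pixel[3]) / 255.0)
}
Setting a full-screen view's backgroundColor to that colour (dispatched to the main queue) on each frame then gives the single-colour live feed.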

How to write image data and GPS metadata to the saved photos album using Swift?

I am trying to add GPS metadata to images I take using a custom camera application. My camera app takes images as expected, and there are lots of Apple-specific metadata values included in each picture I take. However, I can't seem to add GPS data. What am I doing wrong? Thanks in advance.
class ViewController: UIViewController {
    var output: AVCaptureStillImageOutput!
    var locationManager = CLLocationManager()

    override func viewDidLoad() {
        createCamera()
        initLocationManager()
        super.viewDidLoad()
    }

    func createCamera() {
        // create camera code. works fine
    }

    func takePhoto() {
        self.output.captureStillImageAsynchronousFromConnection(self.connection) {
            buffer, error in
            if let error = error {
                println("Error: \(error)")
            }
            else {
                var imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
                var metadata = self.getImageMetadata()
                var library = ALAssetsLibrary()
                library.writeImageDataToSavedPhotosAlbum(imageData, metadata: metadata, completionBlock: {
                    (assetURL: NSURL!, error: NSError!) -> Void in
                })
            }
        }
    }

    func getImageMetadata() -> NSMutableDictionary {
        var locDict = NSMutableDictionary()
        locDict.setObject(self.locationManager.location.timestamp, forKey: "kCGImagePropertyGPSTimeStamp")
        locDict.setObject("N", forKey: "kCGImagePropertyGPSLatitudeRef")
        locDict.setObject(self.locationManager.location.coordinate.latitude, forKey: "kCGImagePropertyGPSLatitude")
        locDict.setObject("W", forKey: "kCGImagePropertyGPSLongitudeRef")
        locDict.setObject(self.locationManager.location.coordinate.longitude, forKey: "kCGImagePropertyGPSLongitude")
        return locDict
        // this returns the correct coordinates
        // NOTE: I hard coded the Ref values for brevity.
    }
}
NOTE: I am using ExifTool to view the metadata.
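For what it's worth, the GPS keys normally need to be the actual ImageIO constants (kCGImagePropertyGPS...) nested under kCGImagePropertyGPSDictionary, not their names as string literals. A rough sketch of that, written in modern Swift:
// Sketch: build the GPS metadata with the real ImageIO constants, nested under
// kCGImagePropertyGPSDictionary, instead of string literals like "kCGImagePropertyGPSLatitude".
import ImageIO

func getImageMetadata() -> NSMutableDictionary {
    let metadata = NSMutableDictionary()
    guard let location = self.locationManager.location else { return metadata }
    let gpsDict: [String: Any] = [
        // GPS latitude/longitude are unsigned; the sign lives in the Ref values.
        kCGImagePropertyGPSLatitude as String: abs(location.coordinate.latitude),
        kCGImagePropertyGPSLatitudeRef as String: location.coordinate.latitude >= 0 ? "N" : "S",
        kCGImagePropertyGPSLongitude as String: abs(location.coordinate.longitude),
        kCGImagePropertyGPSLongitudeRef as String: location.coordinate.longitude >= 0 ? "E" : "W"
        // A time stamp can be added too; Exif expects it as an "HH:MM:SS" UTC string.
    ]
    metadata[kCGImagePropertyGPSDictionary as String] = gpsDict
    return metadata
}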