I am using Twilio to stream a video and use that video in SceneKit as a texture. It works fine on an iPhone X, but on an iPhone XR and XS I get this error: Unsupported IOSurface format: 0x26424741.
This is what I am doing:
Get Video:
func subscribed(to videoTrack: TVIRemoteVideoTrack, publication: TVIRemoteVideoTrackPublication, for participant: TVIRemoteParticipant) {
    print("Participant \(participant.identity) added a video track.")
    let remoteView = TVIVideoView.init(frame: UIWindow().frame,
                                       delegate: self)
    videoTrack.addRenderer(remoteView!)
    delegate.participantAdded(with: remoteView!)
}
delegate:
func participantAdded(with videoView: UIView) {
    sceneView.addVideo(with: videoView)
}
And add the video to the plane:
func addVideo(with view: UIView) {
    videoPlane.geometry?.firstMaterial?.diffuse.contents = view
}
The problem was actually with the renderingType of the remoteView. Metal was fine on older devices, but the newer devices needed OpenGL ES. I don't know why, but that was the fix.
I used this solution to find out the device type.
Next I determined which renderingType to use (a sketch of the device-type helper is shown after the code below):
var renderingType: VideoView.RenderingType {
    get {
        let device = UIDevice()
        switch device.type {
        case .iPhoneXS:
            return .openGLES
        case .iPhoneXR:
            return .openGLES
        case .iPhoneXSMax:
            return .openGLES
        default:
            return .metal
        }
    }
}
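For reference, the device.type property used above is not part of UIKit; it comes from the linked device-detection solution. A hypothetical sketch of that kind of UIDevice extension, reduced to only the cases used here:

enum DeviceType {
    case iPhoneXS, iPhoneXSMax, iPhoneXR, other
}

extension UIDevice {
    // Hypothetical helper: map the hardware identifier (e.g. "iPhone11,8")
    // to a device type. The linked solution covers every model.
    var type: DeviceType {
        var systemInfo = utsname()
        uname(&systemInfo)
        let identifier = Mirror(reflecting: systemInfo.machine).children.reduce("") { id, element in
            guard let value = element.value as? Int8, value != 0 else { return id }
            return id + String(UnicodeScalar(UInt8(value)))
        }
        switch identifier {
        case "iPhone11,2":               return .iPhoneXS
        case "iPhone11,4", "iPhone11,6": return .iPhoneXSMax
        case "iPhone11,8":               return .iPhoneXR
        default:                         return .other
        }
    }
}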
And used it to initialize remoteView:
func didSubscribeToVideoTrack(videoTrack: RemoteVideoTrack, publication: RemoteVideoTrackPublication, participant: RemoteParticipant) {
    print("Participant \(participant.identity) added a video track.")
    let remoteView = VideoView.init(frame: UIWindow().frame,
                                    delegate: self,
                                    renderingType: renderingType)
    videoTrack.addRenderer(remoteView!)
    delegate.participantAddedVideo(for: participant.identity, with: remoteView!)
}
I was following the Apple documentation and example project to load a 3D object from a .scn file with a VirtualObject class (a subclass of SCNReferenceNode), but I suddenly needed to change the model from .scn to .usdz. Now my .usdz object loads successfully, but it is not on the surface (it floats midway in the air) and I can't interact with it (tap, pan, rotate). Is there any other way to get interaction with a .usdz object, and how can I place it on the surface like I was doing before with the .scn file?
For getting the model URL (downloaded from the server):
static let loadDownloadedModel: VirtualObject = {
    let downloadedScenePath = getDocumentsDirectory().appendingPathComponent("\(Api.Params.inputModelName).usdz")
    return VirtualObject(url: downloadedScenePath)!
}()
Loading it from the URL:
func loadVirtualObject(_ object: VirtualObject, loadedHandler: @escaping (VirtualObject) -> Void) {
    isLoading = true
    loadedObjects.append(object)

    // Load the content asynchronously.
    DispatchQueue.global(qos: .userInitiated).async {
        object.reset()
        object.load()
        self.isLoading = false
        loadedHandler(object)
    }
}
Placing it in the scene:
func placeObjectOnFocusSquare() {
    virtualObjectLoader.loadVirtualObject(VirtualObject.loadDownloadedModel) { (loadedObject) in
        DispatchQueue.main.async {
            self.placeVirtualObject(loadedObject)
            self.setupBottomButtons(isSelected: true)
        }
    }
}

func placeVirtualObject(_ virtualObject: VirtualObject) {
    guard let cameraTransform = session.currentFrame?.camera.transform,
        let focusSquarePosition = focusSquare.lastPosition else {
            statusViewController.showMessage("CANNOT PLACE OBJECT\nTry moving left or right.")
            return
    }

    Api.Params.selectedModel = virtualObject
    virtualObject.name = String(Api.Params.inputPreviewId)
    virtualObject.scale = SCNVector3(Api.Params.modelXAxis, Api.Params.modelYAxis, Api.Params.modelZAxis)
    virtualObject.setPosition(focusSquarePosition, relativeTo: cameraTransform, smoothMovement: false)

    updateQueue.async {
        self.sceneView.scene.rootNode.addChildNode(virtualObject)
    }
}
[Screenshot: .usdz object floating in the scene view]
After many tries, I finally found out that dynamic scaling of the model was causing the problem. Reference:
iOS ARKit: Large size object always appears to move with the change in the position of the device camera
I scaled the object to 0.01 on all axes (x, y, and z):
virtualObject.scale = SCNVector3Make(0.01, 0.01, 0.01)
How do I animate an NSSlider value change so it looks continuous?
I tried using NSAnimationContext:
private func moveSlider(videoTime: VLCTime) {
    DispatchQueue.main.async { [weak self] in
        NSAnimationContext.beginGrouping()
        NSAnimationContext.current.duration = 1
        self?.playerControls.seekSlider.animator().intValue = videoTime.intValue
        NSAnimationContext.endGrouping()
    }
}
My NSSlider still does not move smoothly.
To put you in the picture: I am making a video player that uses an NSSlider for scrubbing through the movie. The slider should also move along as the video plays. As I said, it does move, but I cannot get it to move smoothly.
There is Apple sample code in Objective-C for exactly what you are looking for. Below is how it would look in Swift.
Basically, you need an extension on NSSlider:
extension NSSlider {
    override open class func defaultAnimation(forKey key: NSAnimatablePropertyKey) -> Any? {
        if key == "floatValue" {
            return CABasicAnimation()
        } else {
            return super.defaultAnimation(forKey: key)
        }
    }
}
Then you can simply use something like this in your move-slider function:
private func moveSlider(videoTime: Float) {
    NSAnimationContext.beginGrouping()
    NSAnimationContext.current.duration = 0.5
    slider.animator().floatValue = videoTime
    NSAnimationContext.endGrouping()
}
I am trying to build a simple Swift 4 macOS app that uses an iPhone camera connected to my Mac.
I started from a blank macOS template app, turned on the sandbox to allow camera, microphone, and USB access, and added the following code to my ViewController:
import Cocoa
import AVFoundation

class ViewController: NSViewController {

    @IBOutlet weak var camera: NSView!

    override func viewDidLoad() {
        super.viewDidLoad()
        camera.layer = CALayer()

        let session: AVCaptureSession = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.high

        let device: AVCaptureDevice = (AVCaptureDevice.default(for: AVMediaType.video))!
        // let listdevices = (AVCaptureDevice.devices())

        do {
            try session.addInput(AVCaptureDeviceInput(device: device))

            // Preview
            let previewLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
            let myView: NSView = self.view
            previewLayer.frame = myView.bounds
            previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            self.camera.layer?.addSublayer(previewLayer)
            session.startRunning()

            // print(listdevices)
            // print(device)
        } catch {
            print(device)
        }
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
In the storyboard I have dropped in a Custom View.
The app builds OK and uses the FaceTime camera with no problem. However, with the iPhone connected, I don't see it as a device that AVFoundation can use. I am not sure of the next steps to get the preview layer to use the USB camera (the iPhone).
P.S. It needs to support landscape orientation for all cameras.
According to this Apple Developer Forum post, capturing the camera of a connected iOS device from a macOS app is not supported.
The closest you can do (as the post suggests) is to capture the screen of the iOS device while the camera (Camera.app) is running, effectively capturing the live camera preview (or you can roll your own companion camera app on iOS if you want to remove the Camera app's UI from the captured screen).
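If you go the screen-capture route, here is a rough sketch of the CoreMediaIO opt-in that makes a USB-connected iOS device's screen show up as a capture device (this is what QuickTime uses for iPhone screen recording). The constant names come from CoreMediaIO; verify them against your SDK:

import AVFoundation
import CoreMediaIO

// Sketch: allow CoreMediaIO "screen capture" (DAL) devices so a connected
// iOS device appears as an AVCaptureDevice.
func enableIOSScreenCaptureDevices() {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
    var allow: UInt32 = 1
    CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject),
                              &address, 0, nil,
                              UInt32(MemoryLayout<UInt32>.size), &allow)
}

// The iOS device's screen then typically shows up as a muxed device:
// AVCaptureDevice.devices(for: .muxed)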
As there is no autofocus in ARKit, I wanted to load ARKit in a view that takes half the screen, with the second half showing an AVFoundation camera (AVCaptureSession).
Is it possible to load an AVFoundation camera and ARKit simultaneously in the same app?
Thanks.
Nope.
ARKit uses AVCapture internally (as explained in the WWDC talk introducing ARKit). Only one AVCaptureSession can be running at a time, so if you run your own capture session it’ll suspend ARKit’s session (and break tracking).
Update: However, in iOS 11.3 (aka "ARKit 1.5"), ARKit enables autofocus by default, and you can choose to disable it with the isAutoFocusEnabled option.
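For reference, a minimal sketch of toggling that option (assuming a standard ARSCNView named sceneView and a world-tracking setup):

// Autofocus is on by default in iOS 11.3+ and can be disabled explicitly.
let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = false // or true (the default)
sceneView.session.run(configuration)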
Changing the camera focus would disrupt the tracking - so this is definitely not possible (right now at least).
Update: See @rickster's answer above.
I managed to use AVFoundation with ARKit by calling self.sceneView.session.run(self.sceneView.session.configuration!) right after taking the photo.
Use self.captureSession?.stopRunning() right after taking the photo to make the session resume faster.
self.takePhoto()
self.sceneView.session.run(self.sceneView.session.configuration!)
func startCamera() {
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSession.Preset.photo
    cameraOutput = AVCapturePhotoOutput()

    if let device = AVCaptureDevice.default(for: .video),
        let input = try? AVCaptureDeviceInput(device: device) {
        if (captureSession.canAddInput(input)) {
            captureSession.addInput(input)
            if (captureSession.canAddOutput(cameraOutput)) {
                captureSession.addOutput(cameraOutput)
                captureSession.startRunning()
            }
        } else {
            print("issue here : captureSession.canAddInput")
        }
    } else {
        print("some problem here")
    }
}

func takePhoto() {
    startCamera()
    let settings = AVCapturePhotoSettings()
    cameraOutput.capturePhoto(with: settings, delegate: self)
    self.captureSession?.stopRunning()
}
I am trying to combine Core ML and ARKit in my project, using the InceptionV3 model provided on Apple's website.
I am starting from the standard ARKit template (Xcode 9 beta 3).
Instead of instantiating a new camera session, I reuse the session that has been started by the ARSCNView.
At the end of my viewDelegate, I write:
sceneView.session.delegate = self
I then extend my ViewController to conform to the ARSessionDelegate protocol (its methods are optional):
// MARK: ARSessionDelegate
extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        do {
            let prediction = try self.model.prediction(image: frame.capturedImage)
            DispatchQueue.main.async {
                if let prob = prediction.classLabelProbs[prediction.classLabel] {
                    self.textLabel.text = "\(prediction.classLabel) \(String(describing: prob))"
                }
            }
        }
        catch let error as NSError {
            print("Unexpected error ocurred: \(error.localizedDescription).")
        }
    }
}
At first I tried that code, but then noticed that Inception requires a pixel buffer of type Image<RGB, 299, 299>.
Although not recommended, I thought I would just resize my frame and then try to get a prediction from it. I am resizing using this function (taken from https://github.com/yulingtianxia/Core-ML-Sample):
func resize(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let imageSide = 299
    var ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    let transform = CGAffineTransform(scaleX: CGFloat(imageSide) / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                                      y: CGFloat(imageSide) / CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
    ciImage = ciImage.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: imageSide, height: imageSide))
    let ciContext = CIContext()
    var resizeBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, imageSide, imageSide, CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &resizeBuffer)
    ciContext.render(ciImage, to: resizeBuffer!)
    return resizeBuffer
}
Unfortunately, this is not enough to make it work. This is the error that is caught:
Unexpected error ocurred: Input image feature image does not match model description.
2017-07-20 AR+MLPhotoDuplicatePrediction[928:298214] [core]
Error Domain=com.apple.CoreML Code=1
"Input image feature image does not match model description"
UserInfo={NSLocalizedDescription=Input image feature image does not match model description,
NSUnderlyingError=0x1c4a49fc0 {Error Domain=com.apple.CoreML Code=1
"Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)"
UserInfo={NSLocalizedDescription=Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)}}}
Not sure what I can do from here.
If there is any better suggestion to combine both, I'm all ears.
Edit: I also tried the resizePixelBuffer method from YOLO-CoreML-MPSNNGraph, suggested by @dfd; the error is exactly the same.
Edit2: So I changed the pixel format to be kCVPixelFormatType_32BGRA (not the same format as the pixelBuffer passed in the resizePixelBuffer).
let pixelFormat = kCVPixelFormatType_32BGRA // line 48
I do not have the error anymore, but as soon as I try to make a prediction, the AVCaptureSession stops. It seems I am running into the same issue Enric_SA describes on the Apple developer forums.
Edit 3: So I tried implementing rickster's solution. It works well with InceptionV3. I then wanted to try a feature observation (VNClassificationObservation). At this time, it is not working with TinyYOLO; the bounding boxes are wrong. Trying to figure it out.
Don't process images yourself to feed them to Core ML. Use Vision. (No, not that one. This one.) Vision takes an ML model and any of several image types (including CVPixelBuffer) and automatically gets the image to the right size and aspect ratio and pixel format for the model to evaluate, then gives you the model's results.
Here's a rough skeleton of the code you'd need:
var request: VNRequest

func setup() {
    let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
    request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
}

func classifyARFrame() {
    let handler = VNImageRequestHandler(cvPixelBuffer: session.currentFrame.capturedImage,
                                        orientation: .up) // fix based on your UI orientation
    handler.perform([request])
}

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
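A hypothetical way to wire this into the ARSessionDelegate setup from the question, replacing the direct model.prediction call (frame throttling and proper orientation handling omitted; ViewController and request are the names used above):

// Sketch: feed each ARFrame's pixel buffer to the Vision request off the main thread.
extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .up) // adjust for your UI orientation
        DispatchQueue.global(qos: .userInitiated).async {
            try? handler.perform([self.request])
        }
    }
}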
See this answer to another question for some more pointers.