How can I replay a paused (removed) video - Swift

I am creating an AR application that tracks two images and plays a video for each image. The problem I have is that I am not able to replay the first video, because it gets paused (and removed).
I tried to create a store for the removed video nodes and then used that stored data to replay them, but it doesn't work as desired.
The video that was removed first starts playing on every image after the first image is scanned again.
The Code:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        let videos = ["harrypotter": "harrypotter.mp4", "deatheater": "deatheater.mp4"]
        if let videoName = videos[imageAnchor.referenceImage.name!] {
            if let currentVideoNode = currentVideoNode {
                currentVideoNode.pause()
                currentVideoNode.removeFromParent()
            }
            let videoNode = SKVideoNode(fileNamed: videoName)
            videoNode.play()
            currentVideoNode = videoNode
            let videoScene = SKScene(size: CGSize(width: 480, height: 360))
            videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
            videoNode.yScale = -1.0
            videoScene.addChild(videoNode)
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
            plane.firstMaterial?.diffuse.contents = videoScene
            let planeNode = SCNNode(geometry: plane)
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
        }
    }
    return node
}
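
One way to make the first video resumable is to cache one SKVideoNode per image name instead of removing and recreating it. Below is a minimal sketch under that assumption; videoNodes and the helper videoNode(forImageName:fileNamed:) are illustrative names, not part of the original code. The key detail is that an SKNode can only have one parent, so the cached node must leave its old scene before joining a new one:

var videoNodes = [String: SKVideoNode]()
var currentVideoNode: SKVideoNode?

// Returns the cached SKVideoNode for this image, creating it only on first detection.
func videoNode(forImageName imageName: String, fileNamed videoName: String) -> SKVideoNode {
    if let cached = videoNodes[imageName] {
        cached.removeFromParent()   // an SKNode can only have one parent at a time
        return cached               // play() on this node resumes where it paused
    }
    let videoNode = SKVideoNode(fileNamed: videoName)
    videoNodes[imageName] = videoNode
    return videoNode
}

In renderer(_:nodeFor:) you would then pause currentVideoNode (without removing it from the dictionary), fetch the node for the newly detected image with this helper, add it to the freshly created SKScene, and call play() on it.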

Related

Turn occlusion on for ARFaceTrackingConfiguration

I am trying to follow the steps in this article to create the effect shown below (image from the article):
It basically recognizes a face, puts something on it (a tattoo), and places a background image behind it.
I am using an iPhone X device to test the code, but every time I check whether .personSegmentation is supported, it is false:
if ARFaceTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    configuration.frameSemantics.insert(.personSegmentation) // code never executed.
}
My whole code for adding the plane on top of the face, plus the background, is below.
The ARSCNViewDelegate method that adds the nodes:
private var virtualBackgroundNode = SCNNode()

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let device = sceneView.device else {
        return nil
    }
    let faceGeometry = ARSCNFaceGeometry(device: device)
    let faceNode = SCNNode(geometry: faceGeometry)
    faceNode.geometry?.firstMaterial?.transparency = 0
    let tattooPlane = SCNPlane(width: 0.13, height: 0.06)
    tattooPlane.firstMaterial?.diffuse.contents = UIImage(named: "Tattoos/tattoo0")!
    tattooPlane.firstMaterial?.isDoubleSided = true
    let tattooNode = SCNNode()
    tattooNode.position.z = faceNode.boundingBox.max.z * 3 / 4
    tattooNode.position.y = 0.027
    tattooNode.geometry = tattooPlane
    faceNode.addChildNode(tattooNode)
    configureBackgroundView()
    sceneView.scene.rootNode.addChildNode(virtualBackgroundNode)
    return faceNode
}
Resizing the background image, setting it and positioning the background node:
func configureBackgroundView() {
    let (skScene, mediaAspectRatio) = makeImageBackgroundScene(image: UIImage(named: "Cats/cat0")!)
    let size = skScene.size
    virtualBackgroundNode.geometry = SCNPlane(width: size.width, height: size.height)
    let material = SCNMaterial()
    material.diffuse.contents = skScene
    virtualBackgroundNode.geometry?.materials = [material]
    virtualBackgroundNode.scale = SCNVector3(1.7 * mediaAspectRatio, 1.7, 1)
    let cameraPosition = sceneView.pointOfView?.scale
    let position = SCNVector3(cameraPosition!.x, cameraPosition!.y, cameraPosition!.z - 1000)
    virtualBackgroundNode.position = position
}
This method creates a SpriteKit scene by resizing the original image asset:
func makeImageBackgroundScene(image: UIImage) -> (scene: SKScene, mediaAspectRatio: Double) {
    // Adjusted so that the aspect ratio of the image is not distorted
    let width = image.size.width
    let height = image.size.height
    let mediaAspectRatio = Double(width / height)
    let cgImage = image.cgImage!
    let newImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: .up)
    let skScene = SKScene(size: CGSize(width: 1000 * mediaAspectRatio, height: 1000))
    let texture = SKTexture(image: newImage)
    let skNode = SKSpriteNode(texture: texture)
    skNode.position = CGPoint(x: skScene.size.width / 2.0, y: skScene.size.height / 2.0)
    skNode.size = skScene.size
    skNode.yScale = -1.0
    skScene.addChild(skNode)
    return (skScene, mediaAspectRatio)
}
Any advice on what to try? Snapchat and TikTok have similar face recognition + background setups, and they work on my device.
Thanks for any help.
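
For context, Apple's documentation states that person segmentation requires an A12 chip or newer, and the iPhone X uses an A11, so supportsFrameSemantics(.personSegmentation) returning false on that device is expected rather than a bug. A minimal capability check with an explicit fallback, where the print message and fallback comment are illustrative, might look like this:

let configuration = ARFaceTrackingConfiguration()
if ARFaceTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    configuration.frameSemantics.insert(.personSegmentation)
} else {
    // A11 and older devices land here; fall back to a non-occluding effect,
    // e.g. keep the background plane far behind the face node as above.
    print("Person segmentation is not supported on this device")
}
sceneView.session.run(configuration)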

Add text to a recognized image via ARKIT

Right now I have a simple image detection, with an overlay of a SCNPlane.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.5)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    return node
}
Instead of the image overlay, I want to display simple text on the right side of the recognized image.
I already tried:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    let text = SCNText(string: "testtext", extrusionDepth: 1)
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.green
    text.materials = [material]
    let textNode = SCNNode(geometry: text)
    node.addChildNode(textNode)
    return node
}
What am I doing wrong here?
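
A likely culprit is scale and placement rather than the text itself: SCNText is measured in scene units (meters), so even a small font is enormous next to a real-world-sized image anchor, and the node is also placed at the anchor's center. Here is a hedged sketch of one fix; the 0.002 scale factor is an illustrative starting point, not a canonical value:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    guard let imageAnchor = anchor as? ARImageAnchor else { return node }

    let text = SCNText(string: "testtext", extrusionDepth: 1)
    text.firstMaterial?.diffuse.contents = UIColor.green

    let textNode = SCNNode(geometry: text)
    textNode.scale = SCNVector3(0.002, 0.002, 0.002)   // shrink from font points to meters
    textNode.eulerAngles.x = -.pi / 2                  // lie flat, matching the image anchor
    // Start just past the image's right edge.
    textNode.position.x = Float(imageAnchor.referenceImage.physicalSize.width / 2)
    node.addChildNode(textNode)
    return node
}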

3D Model is shaky with ARKit in Xcode

I am using ARKit's image detection to place a 3D object when a certain image is detected. Everything works fine except for the 3D model that is being created: it shakes like crazy. I double-checked, and the reference image has the correct physical measurements.
I call addModel() when the image is detected. Here is what my code looks like.
Finding the reference image:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let imageAnchor = anchor as? ARImageAnchor {
        let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
        let planeNode = SCNNode(geometry: plane)
        addModel(addTo: planeNode)
        node.addChildNode(planeNode)
    }
    return node
}
The addModel() function looks like this:
func addModel(addTo: SCNNode) {
    let testScene = SCNScene(named: "art.scnassets/testModel.scn")
    let testNode = testScene?.rootNode.childNode(withName: "test", recursively: true)
    let testMaterial = SCNMaterial()
    testMaterial.diffuse.contents = UIImage(named: "art.scnassets/bricks")
    testNode?.geometry?.materials = [testMaterial]
    testNode!.position = SCNVector3Zero
    testNode!.position.x = -0.3
    testNode!.position.z = 0.3
    addTo.addChildNode(testNode!)
}
Did you try deleting:
testNode!.position.x = -0.3
testNode!.position.z = 0.3
Because when you write testNode!.position = SCNVector3Zero, that means x = 0.0, y = 0.0, z = 0.0, and right after that you assign different coordinates.
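
If deleting those lines does not stop the shaking, the jitter may come from the image anchor itself: with detection-only images, ARKit places the anchor once, and subsequent tracking corrections can make attached content appear to tremble. One commonly suggested mitigation, assuming iOS 12+ (ARKit 2) and an "AR Resources" reference-image group as in the other questions here, is to enable continuous image tracking:

// A sketch: continuous image tracking usually stabilizes content
// attached to an image anchor.
func runImageTrackingSession() {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    // 0 (the default) means images are detected once and not tracked afterwards.
    configuration.maximumNumberOfTrackedImages = 1
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}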

How to play a local video when an image is recognized using ARKit in Swift?

I have image recognition working using ARKit. When the image is detected, I need to show and play a video in the presented scene (above the detected image).
lazy var fadeAndSpinAction: SCNAction = {
    return .sequence([
        .fadeIn(duration: fadeDuration),
        .rotateBy(x: 0, y: 0, z: CGFloat.pi * 360 / 180, duration: rotateDuration),
        .wait(duration: waitDuration),
        .fadeOut(duration: fadeDuration)
    ])
}()

lazy var fadeAction: SCNAction = {
    return .sequence([
        .fadeOpacity(by: 0.8, duration: fadeDuration),
        .wait(duration: waitDuration),
        .fadeOut(duration: fadeDuration)
    ])
}()

lazy var fishNode: SCNNode = {
    guard let scene = SCNScene(named: "Catfish1.scn"),
        let node = scene.rootNode.childNode(withName: "Catfish1", recursively: false) else { return SCNNode() }
    let scaleFactor = 0.005
    node.scale = SCNVector3(scaleFactor, scaleFactor, scaleFactor)
    node.eulerAngles.x = -.pi / 2
    return node
}()

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    sceneView.delegate = self
    configureLighting()
}

func configureLighting() {
    sceneView.autoenablesDefaultLighting = true
    sceneView.automaticallyUpdatesLighting = true
}

override func viewWillAppear(_ animated: Bool) {
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    let options: ARSession.RunOptions = [.resetTracking, .removeExistingAnchors]
    sceneView.session.run(configuration, options: options)
    statusLabel.text = "Move camera around to detect images"
}

extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        DispatchQueue.main.async {
            guard let imageAnchor = anchor as? ARImageAnchor,
                let imageName = imageAnchor.referenceImage.name else { return }
            // TODO: Overlay 3D Object
            let overlayNode = self.getNode(withImageName: imageName)
            overlayNode.opacity = 0
            overlayNode.position.y = 0.2
            overlayNode.runAction(self.fadeAndSpinAction)
            node.addChildNode(overlayNode)
            self.statusLabel.text = "Image detected: \"\(imageName)\""
            self.videoNode.geometry = SCNPlane(width: 1276.0 / 2.0, height: 712.0 / 2.0)
            self.spriteKitScene.scaleMode = .aspectFit
            self.videoSpriteKitNode?.position = CGPoint(x: self.spriteKitScene.size.width / 2.0, y: self.spriteKitScene.size.height / 2.0)
            self.videoSpriteKitNode?.size = self.spriteKitScene.size
            self.spriteKitScene.addChild(self.videoSpriteKitNode!)
            self.videoNode.geometry?.firstMaterial?.diffuse.contents = self.spriteKitScene
            var transform = SCNMatrix4MakeRotation(Float(M_PI), 0.0, 0.0, 1.0)
            transform = SCNMatrix4Translate(transform, 1.0, 1.0, 0)
            self.videoNode.geometry?.firstMaterial?.diffuse.contentsTransform = transform
            self.videoNode.position = SCNVector3(x: 0, y: 30, z: 7)
            node.addChildNode(self.videoNode)
            self.videoSpriteKitNode?.play()
        }
    }

    func getPlaneNode(withReferenceImage image: ARReferenceImage) -> SCNNode {
        let plane = SCNPlane(width: image.physicalSize.width,
                             height: image.physicalSize.height)
        let node = SCNNode(geometry: plane)
        return node
    }
}
Looking at your code, firstly you are setting your SCNPlane to be 638 meters wide and 356 meters tall; I'm sure that's not what you actually want ^________^.
Anyway, here is an example of playing a local video using an SKScene & SKVideoNode which works well:
//--------------------------
// MARK: - ARSCNViewDelegate
//--------------------------
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have An ARImageAnchor And Have Detected Our Reference Image
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage

        //2. Get The Physical Width & Height Of Our Reference Image
        let width = CGFloat(referenceImage.physicalSize.width)
        let height = CGFloat(referenceImage.physicalSize.height)

        //3. Create An SCNNode To Hold Our Video Player With The Same Size As The Image Target
        let videoHolder = SCNNode()
        let videoHolderGeometry = SCNPlane(width: width, height: height)
        videoHolder.transform = SCNMatrix4MakeRotation(-Float.pi / 2, 1, 0, 0)
        videoHolder.geometry = videoHolderGeometry

        //4. Create Our Video Player
        if let videoURL = Bundle.main.url(forResource: "BlackMirrorz", withExtension: "mp4") {
            setupVideoOnNode(videoHolder, fromURL: videoURL)
        }

        //5. Add It To The Hierarchy
        node.addChildNode(videoHolder)
    }

    /// Creates A Video Player As An SCNGeometry's Diffuse Contents
    ///
    /// - Parameters:
    ///   - node: SCNNode
    ///   - url: URL
    func setupVideoOnNode(_ node: SCNNode, fromURL url: URL) {

        //1. Create An SKVideoNode
        var videoPlayerNode: SKVideoNode!

        //2. Create An AVPlayer With Our Video URL
        let videoPlayer = AVPlayer(url: url)

        //3. Initialize The Video Node With Our Video Player
        videoPlayerNode = SKVideoNode(avPlayer: videoPlayer)
        videoPlayerNode.yScale = -1

        //4. Create A SpriteKit Scene & Position It
        let spriteKitScene = SKScene(size: CGSize(width: 600, height: 300))
        spriteKitScene.scaleMode = .aspectFit
        videoPlayerNode.position = CGPoint(x: spriteKitScene.size.width / 2, y: spriteKitScene.size.height / 2)
        videoPlayerNode.size = spriteKitScene.size
        spriteKitScene.addChild(videoPlayerNode)

        //5. Set The Node's Geometry Diffuse Contents To Our SpriteKit Scene
        node.geometry?.firstMaterial?.diffuse.contents = spriteKitScene

        //6. Play The Video
        videoPlayerNode.play()
        videoPlayer.volume = 0
    }
}
Update:
If you want to place the video above the target you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Check We Have An ARImageAnchor And Have Detected Our Reference Image
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    //2. Get The Physical Width & Height Of Our Reference Image
    let width = CGFloat(referenceImage.physicalSize.width)
    let height = CGFloat(referenceImage.physicalSize.height)

    //3. Create An SCNNode To Hold Our Video Player
    let videoHolder = SCNNode()
    let planeHeight = height / 2
    let videoHolderGeometry = SCNPlane(width: width, height: planeHeight)
    videoHolder.transform = SCNMatrix4MakeRotation(-Float.pi / 2, 1, 0, 0)
    videoHolder.geometry = videoHolderGeometry

    //4. Place It Above The Target
    let zPosition = height - (planeHeight / 2)
    videoHolder.position = SCNVector3(0, 0, -zPosition)

    //5. Create Our Video Player
    if let videoURL = Bundle.main.url(forResource: "BlackMirrorz", withExtension: "mp4") {
        setupVideoOnNode(videoHolder, fromURL: videoURL)
    }

    //6. Add It To The Hierarchy
    node.addChildNode(videoHolder)
}
Hope it helps...
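
As a small extension of the answer above (not something the original answer includes), the video can be made to loop by observing AVPlayerItemDidPlayToEndTime and rewinding the player; a sketch:

import AVFoundation
import SpriteKit

/// A sketch: builds an SKVideoNode whose player restarts whenever playback ends.
func makeLoopingVideoNode(fromURL url: URL) -> SKVideoNode {
    let videoPlayer = AVPlayer(url: url)
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemDidPlayToEndTime,
        object: videoPlayer.currentItem,
        queue: .main) { _ in
            videoPlayer.seek(to: .zero)   // rewind to the start...
            videoPlayer.play()            // ...and keep playing
    }
    return SKVideoNode(avPlayer: videoPlayer)
}

You could then use this node in place of the videoPlayerNode created in setupVideoOnNode(_:fromURL:).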

ARKit image detection, add image

I'm using ARKit 1.5 (beta) for image detection. Once I detect my image, I would like to place an AR scene image using the detected plane. How can this be done?
My code so far, which detects the image (stored in my assets folder):
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    updateQueue.async {
        // Create a plane to visualize the initial position of the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width,
                             height: referenceImage.physicalSize.height)
        let planeNode = SCNNode(geometry: plane)
        planeNode.opacity = 0.25
        /*
         `SCNPlane` is vertically oriented in its local coordinate space, but
         `ARImageAnchor` assumes the image is horizontal in its local space, so
         rotate the plane to match.
         */
        planeNode.eulerAngles.x = -.pi / 2
        /*
         Image anchors are not tracked after initial detection, so create an
         animation that limits the duration for which the plane visualization appears.
         */
        planeNode.runAction(self.imageHighlightAction)
        // Add the plane visualization to the scene.
        node.addChildNode(planeNode)
    }
    DispatchQueue.main.async {
        let imageName = referenceImage.name ?? ""
        self.statusViewController.cancelAllScheduledMessages()
        self.statusViewController.showMessage("Detected image “\(imageName)”")
    }
}

var imageHighlightAction: SCNAction {
    return .sequence([
        .wait(duration: 0.25),
        .fadeOpacity(to: 0.85, duration: 1.50),
        .fadeOpacity(to: 0.15, duration: 1.50),
        .fadeOpacity(to: 0.85, duration: 1.50),
        .fadeOut(duration: 0.75),
        .removeFromParentNode()
    ])
}
Assuming that the value of referenceImage.name is the actual image filename:
if let imageName = referenceImage.name {
    plane.materials = [SCNMaterial()]
    plane.materials[0].diffuse.contents = UIImage(named: imageName)
}
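
Putting that snippet into the delegate callback, a minimal sketch, assuming the reference image's name matches an image asset in the bundle, could look like this:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    let plane = SCNPlane(width: referenceImage.physicalSize.width,
                         height: referenceImage.physicalSize.height)
    if let imageName = referenceImage.name {
        plane.materials = [SCNMaterial()]
        plane.materials[0].diffuse.contents = UIImage(named: imageName)
    }
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical; the image anchor is horizontal
    node.addChildNode(planeNode)
}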