I would like my users to be able to zoom in my AR application. Is it possible to zoom using ARView?
I have written the following code and call it from a tap action.
let discoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTrueDepthCamera, .builtInDualCamera, .builtInWideAngleCamera],
    mediaType: .video, position: .back)
let devices: [AVCaptureDevice] = discoverySession.devices
let zoomFactor: CGFloat = 2

for de in devices {
    print("name of camera")
    print(de.localizedName)
    do {
        try de.lockForConfiguration()
        de.videoZoomFactor = zoomFactor
        de.unlockForConfiguration()
    } catch {
        print("error")
    }
}
I run it on an iPhone X and see this result in the log:
name of camera
Back Dual Camera
name of camera
Back Camera
But it has no effect on zoom.
Is it even possible to zoom in while using ARKit?
You can't use the camera's zoom with RealityKit. RealityKit uses the camera feed provided by ARKit, which is fixed at a 28 mm focal length. But you can zoom the ARView itself, as in Andy Jazz's answer.
Try this approach:
// Use ARView as a subview of UIView
@IBOutlet var arView: ARView!

// Set the slider's minimumValue and initial value to 1
@IBAction func sliderForZooming(_ sender: UISlider) {
    // CGAffineTransform is a 3x3 matrix; scale the view uniformly
    arView.transform = .init(a: CGFloat(sender.value), b: 0, c: 0,
                             d: CGFloat(sender.value), tx: 0, ty: 0)
}
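The same idea works with a pinch gesture instead of a slider. A rough sketch (not from the original answer), assuming a UIPinchGestureRecognizer has been added to the view that contains the ARView:
// e.g. in viewDidLoad:
// view.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
@objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    // Never scale below the ARView's natural size
    let scale = max(1, recognizer.scale)
    arView.transform = CGAffineTransform(scaleX: scale, y: scale)
}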
I have a camera node.
Around the camera node there is another big node (an .obj file) of a building.
The user can move inside the building.
The user can perform a long-press gesture, and an additional node (say, a sphere) appears on the wall of the building. I want to rotate my camera toward this new node (toward the tap location).
I don't know how to do it. Can someone help me?
Other answers did not work for me; the camera just rotated in random directions.
I've found a way!
First, I take the location of the tap (or whatever coordinates you need to turn toward):
@objc private func handleLongPress(pressRec: UILongPressGestureRecognizer) {
    let arr: [UIGestureRecognizer.State] = [.cancelled, .ended, .failed]
    if !arr.contains(pressRec.state) {
        let touchPoint = pressRec.location(in: sceneView)
        let hitResults = sceneView.hitTest(touchPoint, options: [:])
        if let result: SCNHitTestResult = hitResults.first {
            createAnnotation(result.worldCoordinates)
            pressRec.state = .cancelled
        }
    }
}
The function that turns the camera:
func turnCameraTo(worldCoordinates: SCNVector3) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = C.hotspotAnimationDuration
    cameraNode.look(at: worldCoordinates)
    sceneView.defaultCameraController.clearRoll()
    SCNTransaction.completionBlock = {
    }
    SCNTransaction.commit()
}
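For example, call it right after placing the annotation in the long-press handler above:
if let result: SCNHitTestResult = hitResults.first {
    createAnnotation(result.worldCoordinates)
    turnCameraTo(worldCoordinates: result.worldCoordinates)
    pressRec.state = .cancelled
}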
I'm trying to reverse engineer the 3D Scanner App using RealityKit and am having real trouble getting even a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3D Scanner App above. I'm relatively new to iOS and read that one should use RealityKit because Apple isn't really supporting SceneKit anymore, but I'm now wondering whether SceneKit would be the way to go, since RealityKit is still young. Or does anyone know of an extension to RealityKit's ModelEntity objects that gives them better interaction capabilities?
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk; that model renders, but it gets no gesture interaction either.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)
        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous, and Exclusive compositions.
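As a rough sketch of the idea, a custom pan gesture in UIKit could translate an entity directly; modelEntity and the points-to-meters factor below are placeholder assumptions:
// e.g. in viewDidLoad:
// arView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: arView)
    // Map screen points to a small world-space offset (0.001 m per point is arbitrary)
    modelEntity.position.x += Float(translation.x) * 0.001
    modelEntity.position.y -= Float(translation.y) * 0.001
    recognizer.setTranslation(.zero, in: arView)
}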
From what I understand, the gestures work for the box but not for your .usdz file/model. If that's the case, the issue is that the model does not have a collision shape (HasCollision). If you are using Reality Composer to edit your models, you can do the following:
click on the model
under the Physics dropdown, click Participate
under Collision Shape select Automatic
Overall, make sure the model has collision and that you cast it in code as having collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision
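If you'd rather not go through Reality Composer, a sketch of doing the same thing in code (the file name is a placeholder):
if let scanEntity = try? Entity.loadModel(named: "fileName") {
    // Gives the model a collision component so installGestures can attach to it
    scanEntity.generateCollisionShapes(recursive: true)
    arView.installGestures([.translation, .rotation, .scale], for: scanEntity)
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(scanEntity)
    arView.scene.addAnchor(anchor)
}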
In iOS 14, hitTest(_:types:) was deprecated. It seems that you are supposed to use raycastQuery(from:allowing:alignment:) now. From the documentation:
Raycasting is the preferred method for finding positions on surfaces in the real-world environment, but the hit-testing functions remain present for compatibility. With tracked raycasting, ARKit continues to refine the results to increase the position accuracy of virtual content you place with a raycast.
However, how can I hit test SCNNodes with raycasting? I only see options to hit test a plane.
(The raycastQuery(from:allowing:alignment:) documentation shows that the only choices for allowing: are planes.)
This is my current code, which uses hit-testing to detect taps on the cube node and turn it blue.
class ViewController: UIViewController {

    @IBOutlet weak var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        /// Run the configuration
        let worldTrackingConfiguration = ARWorldTrackingConfiguration()
        sceneView.session.run(worldTrackingConfiguration)

        /// Make the red cube
        let cube = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        cube.materials.first?.diffuse.contents = UIColor.red
        let cubeNode = SCNNode(geometry: cube)
        cubeNode.position = SCNVector3(0, 0, -0.2) /// 20 cm in front of the camera
        cubeNode.name = "ColorCube"

        /// Add the node to the ARKit scene
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard let location = touches.first?.location(in: sceneView) else { return }

        let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode: 1])
        for result in results.filter({ $0.node.name == "ColorCube" }) { /// See if the beam hit the cube
            let cubeNode = result.node
            cubeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue /// change to blue
        }
    }
}
How can I replace let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode : 1]) with the equivalent raycastQuery code?
About Hit-Testing
Official documentation says that only ARKit's hitTest(_:types:) instance method was deprecated in iOS 14. However, in iOS 15 you can still use it. ARKit's hit-testing method is supposed to be replaced with the raycasting methods.
Deprecated hit-testing:
let results: [ARHitTestResult] = sceneView.hitTest(sceneView.center,
types: .existingPlaneUsingGeometry)
Raycasting equivalent:
let raycastQuery: ARRaycastQuery? = sceneView.raycastQuery(
from: sceneView.center,
allowing: .estimatedPlane,
alignment: .any)
let results: [ARRaycastResult] = sceneView.session.raycast(raycastQuery!)
If you prefer a raycasting method for hitting a node (entity), use the RealityKit module instead of SceneKit:
let arView = ARView(frame: .zero)
let query: CollisionCastQueryType = .nearest
let mask: CollisionGroup = .default
let raycasts: [CollisionCastHit] = arView.scene.raycast(from: [0, 0, 0],
to: [5, 6, 7],
query: query,
mask: mask,
relativeTo: nil)
guard let raycast: CollisionCastHit = raycasts.first else { return }
print(raycast.entity.name)
P.S.
There is no need to look for a replacement for SceneKit's hitTest(_:options:) instance method, which returns [SCNHitTestResult], because it works fine and is not being deprecated.
I have the zoom feature working (1x onwards) for a custom camera implemented using AVFoundation. This is fine up to the iPhone X models. But I want to have 0.5x zoom on iPhone 11 and iPhone 11 Pro devices.
The code I wrote cannot set it to 0.5x zoom. I have tried all possible combinations of [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera]. The capture device with the device type .builtInUltraWideCamera does not give 0.5 for minAvailableVideoZoomFactor.
While testing on iPhone 11, I also removed [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera] from the deviceTypes.
I'd appreciate any help to solve this. Below is the code, which works for 1x zoom onwards.
/// Called from -handlePinchGesture
private func zoom(_ scale: CGFloat) {
    let captureDevice = cameraDevice(.back)
    do {
        try captureDevice?.lockForConfiguration()

        var minZoomFactor: CGFloat = captureDevice?.minAvailableVideoZoomFactor ?? 1.0
        let maxZoomFactor: CGFloat = captureDevice?.maxAvailableVideoZoomFactor ?? 1.0

        if #available(iOS 13.0, *) {
            if captureDevice?.deviceType == .builtInDualWideCamera || captureDevice?.deviceType == .builtInTripleCamera || captureDevice?.deviceType == .builtInUltraWideCamera {
                minZoomFactor = 0.5
            }
        }

        zoomScale = max(minZoomFactor, min(beginZoomScale * scale, maxZoomFactor))
        captureDevice?.videoZoomFactor = zoomScale
        captureDevice?.unlockForConfiguration()
    } catch {
        print("ERROR: locking configuration")
    }
}
@objc private func handlePinchGesture(_ recognizer: UIPinchGestureRecognizer) {
    var allTouchesOnPreviewLayer = true
    let numTouch = recognizer.numberOfTouches

    for i in 0 ..< numTouch {
        let location = recognizer.location(ofTouch: i, in: view)
        let convertedTouch = previewLayer.convert(location, from: previewLayer.superlayer)
        if !previewLayer.contains(convertedTouch) {
            allTouchesOnPreviewLayer = false
            break
        }
    }
    if allTouchesOnPreviewLayer {
        zoom(recognizer.scale)
    }
}
func cameraDevice(_ position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    var deviceTypes = [AVCaptureDevice.DeviceType]()
    deviceTypes.append(contentsOf: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera])
    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera])
    }

    let availableCameraDevices = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: position).devices
    guard availableCameraDevices.isEmpty == false else {
        debugPrint("ERROR: No camera devices found!!!")
        return nil
    }

    for device in availableCameraDevices {
        if device.position == position {
            return device
        }
    }

    guard let defaultDevice = AVCaptureDevice.default(for: AVMediaType.video) else {
        debugPrint("ERROR: Can't initialize default back camera!!!")
        return nil
    }
    return defaultDevice
}
Update for people who are looking to set the optical zoom level to 0.5x.
courtesy: https://github.com/NextLevel/NextLevel/issues/187
public class func primaryVideoDevice(forPosition position: AVCaptureDevice.Position) -> AVCaptureDevice? {

    // -- Changes begin
    if #available(iOS 13.0, *) {
        let hasUltraWideCamera: Bool = true // Set this variable to true if your device is one of the following - iPhone 11, iPhone 11 Pro, & iPhone 11 Pro Max
        if hasUltraWideCamera {
            // Your iPhone has an ultra-wide camera.
            let deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInUltraWideCamera]
            let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
            return discoverySession.devices.first
        }
    }
    // -- Changes end

    var deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInWideAngleCamera] // builtInWideAngleCamera // builtInUltraWideCamera
    if #available(iOS 11.0, *) {
        deviceTypes.append(.builtInDualCamera)
    } else {
        deviceTypes.append(.builtInDuoCamera)
    }

    // prioritize duo camera systems before wide angle
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
    for device in discoverySession.devices {
        if #available(iOS 11.0, *) {
            if (device.deviceType == AVCaptureDevice.DeviceType.builtInDualCamera) {
                return device
            }
        } else {
            if (device.deviceType == AVCaptureDevice.DeviceType.builtInDuoCamera) {
                return device
            }
        }
    }
    return discoverySession.devices.first
}
The minimum "zoomFactor" property of an AVCaptureDevice can't be less than 1.0 according to the Apple Docs. It's a little confusing becuase depending on what camera you've selected, a zoom factor of 1 will be a different field of view or optical view angle. The default iPhone camera app shows a label reading "0.5" but that's just a label for the ultra wide lens in relation to the standard camera's zoom factor.
You're already getting the minZoomFactor from the device, (which will probably be 1), so you should use the device's min and max that you're reading to set the bounds of the factor you input into "captureDevice.videoZoomFactor". Then when you;ve selected the ultra wide lens, setting the zoomfactor to 1 will be as wide as you can go! (a factor of 0.5 in relation to the standard lens's field of view).
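As a minimal sketch of that (not the original poster's code): select the ultra-wide camera explicitly and clamp the requested factor to the device's own limits. On the ultra-wide lens, videoZoomFactor = 1.0 is already the widest framing, which the Camera app labels "0.5x".
if #available(iOS 13.0, *),
   let ultraWide = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) {
    do {
        try ultraWide.lockForConfiguration()
        let requested: CGFloat = 1.0
        ultraWide.videoZoomFactor = max(ultraWide.minAvailableVideoZoomFactor,
                                        min(requested, ultraWide.maxAvailableVideoZoomFactor))
        ultraWide.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}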
The problem is that when you try to get a device of some type from discoverySession.devices, it returns a default device that may not support the ultra-wide zoom you need.
That was the case for me on an iPhone 12 Pro Max: it returned only one device for the back position, reporting the type builtInWideAngleCamera, but that was misleading. It was the middle (wide) camera, not ultra-wide and not telephoto. I don't know why Apple's engineers made it that way; it looks like outdated legacy architecture.
The solution was not obvious: use AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) to get the real device capable of zooming from 1 (your logical 0.5).
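A sketch of that lookup, with fallbacks for devices that have no triple-camera system (the fallback order is my own assumption):
// .builtInTripleCamera and .builtInDualWideCamera are iOS 13+ device types
let backCamera = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)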
We cannot set the zoom factor to less than 1.
I resolved this issue by using ".builtInDualWideCamera".
In this case, we use the ultra-wide camera with a zoom factor of 2.0 (the default value), which matches the normal framing of the wide-angle camera (the minimum value will be 1.0).
If the iPhone doesn't support ".builtInDualWideCamera", we use ".builtInWideAngleCamera" as usual, and the zoom factor is 1.0 (the minimum value).
func getCameraDevices() -> [AVCaptureDevice] {
    var deviceTypes = [AVCaptureDevice.DeviceType]()

    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInDualWideCamera])
        self.isUltraWideCamera = true
        self.defaultZoomFactor = 2.0
    }

    if deviceTypes.isEmpty {
        deviceTypes.append(contentsOf: [.builtInWideAngleCamera])
        self.isUltraWideCamera = false
        self.defaultZoomFactor = 1.0
    }

    return AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: .unspecified).devices
}
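The default zoom factor still has to be applied to the selected device; a minimal sketch of that step, assuming the defaultZoomFactor property set in getCameraDevices():
func applyDefaultZoom(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // 2.0 on .builtInDualWideCamera (wide-angle framing), 1.0 on .builtInWideAngleCamera
        device.videoZoomFactor = self.defaultZoomFactor
        device.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}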
I'm trying to put a 3D object on a QR code with ARKit.
For that I use an AVCaptureDevice to detect a QR code and establish the area of the QR code, which gives me a CGRect.
Then, I make a hitTest on every point of the CGRect to get the average 3D coordinates, like so:
positionGiven = SCNVector3(0, 0, 0)
for column in Int(qrZone.origin.x)...2 * Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2 * Int(qrZone.origin.y + qrZone.height) {
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column) / 2, y: CGFloat(row) / 2), types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x = positionGiven.x / cpts
positionGiven.y = positionGiven.y / cpts
positionGiven.z = positionGiven.z / cpts
But the hitTest doesn't detect any result and freezes the camera, whereas a hitTest from a touch on the screen works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want to do?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
Edit: I tried moving my 3D object every time I touch the screen and the hitTest is positive, and it's pretty accurate! I really don't understand why the hitTest on an area of pixels doesn't work...
Edit 2: Here is the code of the hitTest that works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x, y: sender.location(in: view).y), types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")

        // Moving the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to resolve my problem partially.
I take the transform matrix of the camera (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) with the position of the CGRect.
However, I can't translate along the z-axis because I don't have enough information,
and I will probably need an estimation of the z coordinate like the hitTest provides.
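For reference, this is roughly what that camera-transform placement looks like (the 0.5 m depth is just a placeholder for the missing z estimate):
if let camera = sceneView.session.currentFrame?.camera {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5   // assumed distance in front of the camera
    let transform = camera.transform * translation
    addObject(SCNVector3(transform.columns.3.x,
                         transform.columns.3.y,
                         transform.columns.3.z))
}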
Thanks in advance ! :)
You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a Barcode Detection Request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR code only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
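Call startQrCodeDetection() once, for example in viewDidLoad, so the request exists before ARKit starts delivering frames.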
In ARSession's session(_:didUpdate:) delegate method:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {

        }
    }
}
Handle the Vision QR request and trigger the hit test
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        guard let payload = result.payloadStringValue else { return }

        // Get the bounding box for the bar code and find the center
        var rect = result.boundingBox

        // Flip coordinates
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))

        // Get center
        let center = CGPoint(x: rect.midX, y: rect.midY)

        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}
func hitTestQrCode(center: CGPoint) {
    // `latestFrame` is assumed to be the most recent ARFrame, stored elsewhere
    // (for example in session(_:didUpdate:)).
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
       let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
           let node = self.sceneView.node(for: detectedDataAnchor) {
            let previousQrPosition = node.position
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in delegate methods
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
Then handle adding the node when the anchor is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}
Source