HitTest prints AR Entity name even when I am not tapping on it - swift

My Experience.rcproject has animations that can be triggered by a tap action.
Two cylinders are named "Button 1" and "Button 2" and have Collide turned on.
I am using the asynchronous load method to load the Experience.Map scene and the addAnchor method to add the mapAnchor to the ARView in a view controller.
I ran a HitTest on the scene to check whether the app reacts properly.
However, the HitTest result prints the entity name of a button even when I am tapping not on it but on an area near it.
class augmentedReality: UIViewController {

    @IBOutlet weak var arView: ARView!

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        let tapLocation = sender.location(in: arView)
        // Get the entity at the location we've tapped, if one exists
        if let button = arView.entity(at: tapLocation) {
            // For testing purposes, print the name of the tapped entity
            print(button.name)
        }
    }
}
Below is my attempt to add the AR scene and tap gesture recogniser to arView.
class augmentedReality: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // mapAnchor comes from the loaded Experience.Map scene
        arView.scene.addAnchor(mapAnchor)
        mapAnchor.notifications.hideAll.post()
        mapAnchor.notifications.mapStart.post()
        self.arView.isUserInteractionEnabled = true
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(onTap))
        self.arView.addGestureRecognizer(tapGesture)
    }
}
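For reference, the asynchronous load mentioned earlier looks roughly like this (a sketch assuming the Reality-Composer-generated loadMapAsync method for a scene named "Map"; the handler body is illustrative):

Experience.loadMapAsync { [weak self] result in
    switch result {
    case .success(let mapAnchor):
        // add the loaded anchor to the AR view once it is ready
        self?.arView.scene.addAnchor(mapAnchor)
    case .failure(let error):
        print("Unable to load the map scene: \(error)")
    }
}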
Question 1
How can I make the entity name of a button print only when I really tap on it, instead of when I tap near it?
Question 2
Do I actually need to turn Collide on for both buttons to be detectable by the HitTest?
Question 3
There is an installGestures method, but there are no online tutorials or discussions about it at the moment. I tried it, but I am confused by (Entity & HasCollision). How can this method be implemented?

To implement robust hit-testing in RealityKit, all you need is the following code:
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let scene = try! Experience.loadScene()

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        let tapLocation: CGPoint = sender.location(in: arView)
        // Hit-test against the collision shapes in the scene
        let result: [CollisionCastHit] = arView.hitTest(tapLocation)
        guard let hitTest: CollisionCastHit = result.first else { return }
        let entity: Entity = hitTest.entity
        print(entity.name)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        scene.steelBox!.scale = [2, 2, 2]
        scene.steelCylinder!.scale = [2, 2, 2]
        scene.steelBox!.name = "BOX"
        scene.steelCylinder!.name = "CYLINDER"
        arView.scene.anchors.append(scene)
    }
}
When you tap an entity in the ARView, the debug area prints "BOX" or "CYLINDER". If you tap anything other than those entities, the debug area prints just "Ground Plane".
If you need to implement ray-casting, read this post.
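For reference, a minimal ray-casting sketch against detected real-world surfaces might look like this (using ARView's raycast(from:allowing:alignment:); the anchor placement is illustrative):

let results = arView.raycast(from: tapLocation,
                             allowing: .estimatedPlane,
                             alignment: .any)
if let firstResult = results.first {
    // anchor an entity at the real-world position that was hit
    let anchor = AnchorEntity(world: firstResult.worldTransform)
    arView.scene.addAnchor(anchor)
}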
P.S.
If you run this app in the Simulator on macOS, it prints just "Ground Plane" instead of "BOX" and "CYLINDER", so you need to run the app on a real iPhone.
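As for Question 3, installGestures works on entities conforming to Entity & HasCollision, i.e. entities that have collision shapes. A minimal sketch, assuming the same steelBox entity as above (the gesture list is illustrative):

// generate collision shapes so the entity can receive gestures
scene.steelBox!.generateCollisionShapes(recursive: true)
// the model entity inside the scene conforms to HasCollision
if let box = scene.steelBox!.children[0] as? Entity & HasCollision {
    // install RealityKit's built-in translate/rotate/scale gestures
    arView.installGestures([.translation, .rotation, .scale], for: box)
}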

Related

How would I make a UIPanGestureRecognizer check if the finger is in the button's frame?

I am trying to make an app where the user can drag a finger on top of multiple buttons and trigger an action for each button.
There was a similar question from a while back, but when I tried to use CGRectContainsPoint(button.frame, point), it said it was replaced with button.frame.contains(point); however, this didn't seem to work for me. Here is a link to the Photo.
What I have done so far:
var buttonArray: NSMutableArray!

override func viewDidLoad() {
    super.viewDidLoad()
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(panGestureMethod(_:)))
    a1.addGestureRecognizer(panGesture)
    a2.addGestureRecognizer(panGesture)
}

@objc func panGestureMethod(_ gesture: UIPanGestureRecognizer) {
    if gesture.state == UIGestureRecognizer.State.began {
        buttonArray = NSMutableArray()
    }
    let pointInView = gesture.location(in: gesture.view)
    if !buttonArray.contains(a1!) && a1.frame.contains(pointInView) {
        buttonArray.add(a1!)
        a1Tapped(a1)
    } else if !buttonArray.contains(a2!) && a2.frame.contains(pointInView) {
        buttonArray.add(a2!)
        a2Tapped(a2)
    }
}
The code compiled and ran fine, but when I tried the drag nothing happened. Any tips?
You want behavior similar to the system keyboard, I assume? CGRectContainsPoint is not deprecated: see the docs. In Swift it is written as frame.contains().
When dealing with rects and points, you have to make sure both are translated into the same coordinate system first. To do so, you can use the convert(to:)/convert(from:) methods on UIView: see the docs.
In your case, something like the following should work (first translate the button frames, then check whether the point is inside):
func touchedButtonForGestureRecognizer(_ gesture: UIPanGestureRecognizer) -> UIView? {
    let pointInView = gesture.location(in: gesture.view)
    for case let button as UIView in buttonArray {
        // Translate the button's frame into the gesture view's coordinate system
        let rect = gesture.view!.convert(button.frame, from: button.superview)
        if rect.contains(pointInView) {
            return button
        }
    }
    return nil
}
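The pan handler might then call this helper like so (a sketch; a1Tapped/a2Tapped stand in for your per-button actions):

@objc func panGestureMethod(_ gesture: UIPanGestureRecognizer) {
    if gesture.state == .began {
        buttonArray = NSMutableArray()
    }
    // fire each button's action at most once per drag
    if let button = touchedButtonForGestureRecognizer(gesture),
       !buttonArray.contains(button) {
        buttonArray.add(button)
        if button == a1 { a1Tapped(a1) } else if button == a2 { a2Tapped(a2) }
    }
}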

swift SpriteNode and SceneKit multiple touch gestures

I am making a word game. For this I am using SceneKit and adding SpriteNodes to represent letter tiles.
The idea is that when a user taps on a letter tile, some extra tiles appear around it with different letter options. My issue concerns the touch gestures for the various interactions.
When a user taps on a letter tile, additional tiles are shown. I have achieved this using the following method in my tile SpriteNode class:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        return
    }
    delegate?.updateLetter(row: row, column: column, x: xcoord, y: ycoord, useCase: 1)
}
This triggers the delegate correctly, which shows another sprite node.
What I would like to achieve is for a long press to remove the sprite node from its parent. I have found the .removeFromParent() method; however, I cannot get it to detect a long press gesture.
My understanding is that this type of gesture must be added using a UIGestureRecognizer. I can add the following method to my Scene class:
override func didMove(to view: SKView) {
    let longPress = UILongPressGestureRecognizer(target: self,
                                                 action: #selector(GameScene.longPress(sender:)))
    view.addGestureRecognizer(longPress)
}

@objc func longPress(sender: UILongPressGestureRecognizer) {
    print("Long Press")
}
This detects a long press anywhere on the scene. However, I need to be able to read the pressed node's properties before removing it. I have tried adding the code below to the longPress function:
let location = sender.location(in: self)
let touchedNodes = nodes(at: location)
let firstTouchedNode = atPoint(location).name
touchedNodes[0].removeFromParent()
but I get the following error: Cannot convert value of type 'GameScene' to expected argument type 'UIView?'
This seems a little messy, as I have touch methods in different places.
So my question is: how can I keep the current touchesBegan method in the tile class, and add a long press gesture that can reference and delete the SpriteNode?
Long press gestures are continuous gestures that may fire multiple times, as you are seeing. Have you tried checking for UIGestureRecognizer.State.began, .changed, and .ended? I solved a similar problem this way.
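A minimal sketch of that state check inside the scene's long-press handler (note it converts the view-space point into scene space, which also avoids the UIView? error above):

@objc func longPress(sender: UILongPressGestureRecognizer) {
    guard sender.state == .began else { return }     // fire once per press
    guard let skView = sender.view as? SKView else { return }
    let viewLocation = sender.location(in: skView)   // location in view space
    let sceneLocation = convertPoint(fromView: viewLocation)
    // remove the topmost node under the press, if any
    nodes(at: sceneLocation).first?.removeFromParent()
}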
EDIT: I think one way to get there is to grab your object in handleTap and hang on to it. Then, when the long press happens, you already have your node. If something changes before the long press, you obviously need to reset. Sorry, this is some extra code, but look at the hitTest part.
@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: gameScene)
    if data.isAirStrikeModeOn == true {
        let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
        let scenePoint = gameScene.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedPoint.z)))
        gameControl.airStrike(position: scenePoint)
    } else {
        let hitResults = gameScene.hitTest(location, options: hitTestOptions)
        for vHit in hitResults {
            if vHit.node.name?.prefix(5) == "Panel" {
                // May have selected an invalid panel or auto upgrade was on
                if gameControl.selectPanel(vPanel: vHit.node.name!) == false { return }
                return
            }
        }
    }
}
I am not completely satisfied with this answer; however, it is a workaround for what I need.
What I have done is add two variables, touchesStart and touchesEnd, to my tiles class.
In touchesBegan() I update touchesStart with CACurrentMediaTime(), and I update touchesEnd via the touchesEnded() function.
Then in touchesEnded() I subtract touchesStart from touchesEnd. If the difference is more than 1.0 I call the function for a long press; if it is less than 1.0 I call the function for a tap.
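A minimal sketch of that timing approach in the tile class (handleTap/handleLongPress are hypothetical stand-ins for the tap and long-press actions):

import SpriteKit

class TileNode: SKSpriteNode {

    private var touchesStart: TimeInterval = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        touchesStart = CACurrentMediaTime()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touchesEnd = CACurrentMediaTime()
        if touchesEnd - touchesStart > 1.0 {
            handleLongPress()   // held for more than a second
        } else {
            handleTap()
        }
    }

    private func handleTap() { /* show the extra letter tiles */ }
    private func handleLongPress() { removeFromParent() }
}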

Anchoring Multiple Scenes in RealityKit

When loading multiple scenes (from Reality Composer) into an arView, the scenes are not anchored in the same space.
In this example, scene1 is loaded when the app starts. After the button is pressed, scene2 is added to the scene. In both scenes the models are placed at the origin and are expected to overlap when scene2 is added to the view. However, the positions of scene1 and scene2 differ when they are added to the arView.
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    @IBOutlet weak var button: UIButton!

    var scene1: Experience.Scene1!
    var scene2: Experience.Scene2!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Load both scenes from the "Experience" Reality File
        scene1 = try! Experience.loadScene1()
        scene2 = try! Experience.loadScene2()
        // Add the first scene's anchor to the AR scene
        arView.scene.addAnchor(scene1)
    }

    @IBAction func buttonPressed(_ sender: Any) {
        arView.scene.addAnchor(scene2)
    }
}
Note: this issue does not occur when both scenes are added simultaneously.
How can I make sure that both scenes are anchored to the same ARAnchor?
Use the following approach: re-parent both models under a single AnchorEntity, so that they share one anchor transform instead of two independently resolved anchors:
let scene01 = try! Cube.loadCube()
let scene02 = try! Ball.loadSphere()

let cubeEntity: Entity = scene01.steelCube!.children[0]
let ballEntity: Entity = scene02.glassBall!.children[0]

// var cubeComponent: ModelComponent = cubeEntity.components[ModelComponent.self]!
// var ballComponent: ModelComponent = ballEntity.components[ModelComponent.self]!

let anchor = AnchorEntity()
anchor.addChild(cubeEntity)
anchor.addChild(ballEntity)

// scene01.steelCube!.components.set(cubeComponent)
// scene02.glassBall!.components.set(ballComponent)

arView.scene.anchors.append(anchor)
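Alternatively, if you want to keep the generated scene anchors, you could re-anchor the second scene at the first scene's current world transform (a sketch using reanchor(_:preservingWorldTransform:); it assumes scene1 has already been anchored and tracked):

// pin scene2's anchor to the world-space pose scene1 currently has
let worldTransform = scene1.transformMatrix(relativeTo: nil)
scene2.reanchor(.world(transform: worldTransform), preservingWorldTransform: false)
arView.scene.addAnchor(scene2)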

How do I make an entity a physics entity in RealityKit?

I am not able to figure out how to make the "ball" entity a physics entity/body and apply a force to it.
// I'm using UIKit for the user interface and RealityKit +
// the models made in Reality Composer for the Augmented Reality and code
import RealityKit
import ARKit

class ViewController: UIViewController {

    var ball: (Entity & HasPhysics)? {
        try? Entity.load(named: "golfball") as? Entity & HasPhysics
    }

    @IBOutlet var arView: ARView!

    // referencing the Play Now button on the home screen
    @IBAction func playNow(_ sender: Any) { }

    // referencing the slider in the AR view - this slider will be used to
    // control the power of the swing. The slider values range from 10% to
    // 100% of swing power with a default value of 55%. The user will have
    // to gain experience in the game to know how much power to use.
    @IBAction func slider(_ sender: Any) { }

    // the following code will fire when the view loads
    override func viewDidLoad() {
        super.viewDidLoad()
        // defining the anchor - it looks for a flat surface 0.3 by 0.3
        // meters (about a foot by a foot) - on this surface, it anchors
        // the golf course and ball when you tap
        let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.3, 0.3])
        // placing the anchor in the scene
        arView.scene.addAnchor(anchor)
        // defining my golf course entity - using ModelEntity so it
        // participates in the physics of the scene
        let entity = try? ModelEntity.load(named: "golfarnew")
        // defining the ball entity - again using ModelEntity so it
        // participates in the physics of the scene
        let ball = try? ModelEntity.load(named: "golfball")
        // loading my golf course entity
        anchor.addChild(entity!)
        // loading the golf ball
        anchor.addChild(ball!)
        // my (non-compiling) attempt to apply a force to the ball at the
        // ball's position, relative to the ball
        ball.physicsBody(SIMD3(1.0, 1.0, 1.0), at: ball.position, relativeTo: ball)
        // to do: sounds, add physics body to ball, iPad for shot direction,
        // connect slider to impulse force
    }
}
Use the following code to find out how to implement RealityKit's physics.
Pay particular attention: Participates in Physics must be turned ON for the model in Reality Composer.
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let boxScene = try! Experience.loadBox()
        let secondBoxAnchor = try! Experience.loadBox()

        let boxEntity = boxScene.steelBox as! (Entity & HasPhysics)

        let kinematics: PhysicsBodyComponent = .init(massProperties: .default,
                                                     material: nil,
                                                     mode: .kinematic)
        let motion: PhysicsMotionComponent = .init(linearVelocity: [0.1, 0, 0],
                                                   angularVelocity: [3, 3, 3])
        boxEntity.components.set(kinematics)
        boxEntity.components.set(motion)

        let anchor = AnchorEntity()
        anchor.addChild(boxEntity)

        arView.scene.addAnchor(anchor)
        arView.scene.addAnchor(secondBoxAnchor)

        print(boxEntity.isActive) // Entity must be active!
    }
}
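And to actually push the ball, as asked in the question, you might give it a dynamic body and then apply a force once it is active in the scene (a sketch; ballEntity is a hypothetical name for your loaded golf-ball entity):

if let ball = ballEntity as? Entity & HasPhysics {
    // a dynamic body is driven by forces and collisions
    ball.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                            material: .default,
                                            mode: .dynamic)
    // apply 1 N along world -Z at the ball's centre of mass
    ball.addForce([0, 0, -1], relativeTo: nil)
}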
Also, look at THIS POST to find out how to implement RealityKit's physics with a custom class.

In Swift’s UIKit Dynamics, how can I define a circle boundary to contain a UIView?

I have researched a lot, but the only examples I can find anywhere define the bounds of UIViews so that they collide/bounce off each other on the OUTSIDE of the objects.
Example: a ball hits another ball and they bounce away from each other.
But what I want to do is create a circular view to CONTAIN other UIViews, such that the containing boundary is a circle, not the default rectangular bounds. Is there a way to achieve this?
Yes, that's totally possible. The key to achieving collision within a circle is to:
1. set the boundary of the collision behaviour to a circular path (a custom UIBezierPath), and
2. set the animator's referenceView to the circle view.
[Output GIF and Storyboard setup screenshot omitted]
Below is the code of the view controller for that Storyboard setup. The magic happens in the simulateGravityAndCollision method:
Full Xcode project
class ViewController: UIViewController {

    @IBOutlet weak var redCircle: UIView!
    @IBOutlet weak var whiteSquare: UIView!

    var animator: UIDynamicAnimator!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.redCircle.setCornerRadius(self.redCircle.bounds.width / 2)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.5) { [unowned self] in
            self.simulateGravityAndCollision()
        }
    }

    func simulateGravityAndCollision() {
        // The dynamic animation happens only within the reference view, i.e. our red circle view
        animator = UIDynamicAnimator(referenceView: self.redCircle)
        // Only the inner white square will be affected by gravity
        let gravityBehaviour = UIGravityBehavior(items: [self.whiteSquare])
        // We also apply collision only to the white square
        let collisionBehaviour = UICollisionBehavior(items: [self.whiteSquare])
        // This is where we create the circle boundary from the redCircle view's bounds
        collisionBehaviour.addBoundary(withIdentifier: "CircleBoundary" as NSCopying,
                                       for: UIBezierPath(ovalIn: self.redCircle.bounds))
        animator.addBehavior(gravityBehaviour)
        animator.addBehavior(collisionBehaviour)
    }
}
extension UIView {

    open override func awakeFromNib() {
        super.awakeFromNib()
        self.layer.allowsEdgeAntialiasing = true
    }

    func setCornerRadius(_ amount: CGFloat) {
        self.layer.cornerRadius = amount
        self.layer.masksToBounds = true
        self.clipsToBounds = true
    }
}