Mapbox add interactivity to polygon in FillLayer - swift

I have a FillLayer with polygons generated from a local GeoJSON file. I want to add interactivity to these polygons but am not sure how to do it with the iOS SDK. I found an example that does something similar to what I need, but in the Mapbox GL JS environment:
https://docs.mapbox.com/mapbox-gl-js/example/polygon-popup-on-click/
You can add a gesture recognizer to the map view in the iOS SDK like this:
mapView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(self.mapClickedFunction)))
but there doesn't seem to be a way to add a gesture recognizer to the layer or to the polygons inside the layer. Any help on this would be much appreciated!

Use the tap point received from the gesture recognizer to query the map for rendered features at the given point within the layer specified.
@objc public func findFeatures(_ sender: UITapGestureRecognizer) {
    let tapPoint = sender.location(in: mapView)
    mapView.mapboxMap.queryRenderedFeatures(
        with: tapPoint,
        options: RenderedQueryOptions(layerIds: ["US-states"], filter: nil)) { [weak self] result in
        switch result {
        case .success(let queriedFeatures):
            // Pull the state name from the first matching feature's properties.
            if let firstFeature = queriedFeatures.first?.feature.properties,
               case let .string(stateName) = firstFeature["STATE_NAME"] {
                self?.showAlert(with: "You selected \(stateName)")
            }
        case .failure(let error):
            self?.showAlert(with: "An error occurred: \(error.localizedDescription)")
        }
    }
}
This is a chunk of code from the Mapbox examples. The logic is to put the gesture recognizer on the map view. When a tap gesture is received, you get the location the user tapped as a CGPoint inside the map view, then you send it to MapboxMap to get the list of rendered features that contain that point.
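For completeness, a minimal sketch of wiring the recognizer to the handler above, assuming this lives in the view controller that owns mapView (the layer id "US-states" comes from the snippet above):
// In viewDidLoad(), after the style containing the "US-states" layer has loaded.
let tapRecognizer = UITapGestureRecognizer(target: self,
                                           action: #selector(findFeatures(_:)))
mapView.addGestureRecognizer(tapRecognizer)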
I recommend taking a look at the official Mapbox examples repository, which shows many ways of doing things:
https://github.com/mapbox/mapbox-maps-ios/tree/main/Apps/Examples
Have a nice day!

Related

How to install gestures for Reality Composer?

I added a USDZ with animation in Reality Composer (.rcproject). After loading the scene and adding it to the view, I tried to install gestures like rotate, scale, and so on, but they don't work:
let ganGes = gangnim?.gnagnumObject as? (Entity & HasCollision)
arView.installGestures([.rotation,.translation,.scale], for: ganGes!)
How can I install Gestures to Reality Composer?
To implement RealityKit's translate, rotate and scale gestures, you also need to call the generateCollisionShapes(recursive:) instance method to prepare the model's shape for collision detection.
guard let ganGes = gangnim.gnagnumObject as? ModelEntity else { return }
ganGes.generateCollisionShapes(recursive: true)
arView.installGestures([.all], for: ganGes as (Entity & HasCollision))
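For context, a fuller sketch under assumptions: a default Reality Composer project generated as Experience.rcproject with a scene named Box and an entity named "Steel Box" (all of these names are placeholders for your own project):
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load the Reality Composer scene (names here are placeholders).
        let boxScene = try! Experience.loadBox()
        arView.scene.anchors.append(boxScene)

        // Gestures can only be installed on an Entity & HasCollision,
        // and the entity needs collision shapes first.
        if let model = boxScene.findEntity(named: "Steel Box") as? ModelEntity {
            model.generateCollisionShapes(recursive: true)
            arView.installGestures([.rotation, .translation, .scale], for: model)
        }
    }
}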

What is the best way to utilize memory inside of "session(_:didUpdate:)" method?

My use case: I want to classify various gestures of a hand (the first hand) seen by the camera. I am able to find body anchors, hand anchors, and poses. See my video here.
I am trying to use previous SIMD3 position information to work out what kind of gesture was demonstrated. I did see the example posted by Apple which shows pinching to write virtually, but I am not sure that a buffer is the right solution for something like this (a rough sketch of such a buffer follows the snippet below).
A specific example of what I am trying to do is detecting a swipe, long-press, or tap, as if the user were wearing a pair of AR glasses (made by Apple one day). For clarification, I want to raycast from my hand and perform a gesture on an Entity or Anchor.
Here is a snippet for those of you that want to know how to get body anchors:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let capturedImage = frame.capturedImage
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: capturedImage,
                                                    orientation: .right,
                                                    options: [:])
    let handPoseRequest = VNDetectHumanHandPoseRequest()
    // let bodyPoseRequest = VNDetectHumanBodyPoseRequest()
    do {
        try imageRequestHandler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first else {
            return
        }
        // Get points for thumb and fingers.
        let thumbPoints = try observation.recognizedPoints(.thumb)
        let indexFingerPoints = try observation.recognizedPoints(.indexFinger)
        let pinkyFingerPoints = try observation.recognizedPoints(.littleFinger)
        let ringFingerPoints = try observation.recognizedPoints(.ringFinger)
        let middleFingerPoints = try observation.recognizedPoints(.middleFinger)
        self.detectHandPose(handObservations: observation)
    } catch {
        print("Failed to perform image request.")
    }
}
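As a rough sketch of the buffering idea from the question (everything here is hypothetical: the type name, the capacity, and the thresholds), keeping the last few fingertip positions and classifying a swipe from the accumulated displacement:
import simd

// Hypothetical helper: stores a short history of 3D fingertip positions
// and flags a swipe once the tip has moved far enough, mostly horizontally.
struct FingertipHistory {
    private var samples: [SIMD3<Float>] = []
    private let capacity = 30                      // roughly one second of frames (assumption)

    mutating func append(_ position: SIMD3<Float>) {
        samples.append(position)
        if samples.count > capacity {
            samples.removeFirst()
        }
    }

    // A "swipe" here is simply a large, mostly horizontal displacement (thresholds are guesses).
    var isSwipe: Bool {
        guard let first = samples.first, let last = samples.last else { return false }
        let delta = last - first
        return abs(delta.x) > 0.15 && abs(delta.x) > 2 * abs(delta.y)
    }
}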

swift SpriteNode and SceneKit multiple touch gestures

I am making a word game. For this I am using SceneKit and adding SpriteNodes to represent letter tiles.
The idea is that when a user clicks on a letter tile, some extra tiles appear around it with different letter options. My issue is regarding the touch gestures for various interactions.
When a user taps on a letter tile, additional tiles are shown. I have achieved this using the following method in my tile SpriteNode class:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        return
    }
    delegate?.updateLetter(row: row, column: column, x: xcoord, y: ycoord, useCase: 1)
}
This triggers the delegate correctly, which shows another sprite node.
What I would like to achieve is for a long press to remove the sprite node from its parent. I have found the .removeFromParent() method; however, I cannot get a long press gesture to be detected.
My understanding is that this type of gesture must be added using a UIGestureRecognizer. I can add the following method to my Scene class:
override func didMove(to view: SKView) {
    let longPress = UILongPressGestureRecognizer(target: self,
                                                 action: #selector(GameScene.longPress(sender:)))
    view.addGestureRecognizer(longPress)
}

@objc func longPress(sender: UILongPressGestureRecognizer) {
    print("Long Press")
}
This will detect a long press anywhere on the scene. However, I need to be able to handle the pressed node's properties before removing it. I have tried adding the below to the longPress function:
let location = sender.location(in: self)
let touchedNodes = nodes(at: location)
let firstTouchedNode = atPoint(location).name
touchedNodes[0].removeFromParent()
but I get the following error: Cannot convert value of type 'GameScene' to expected argument type 'UIView?'
This seems a little bit of a messy way of doing things, as I have touch methods in different places.
So my question is, how can I keep the current touchesBegan method that is in the tile class, and add a long press gesture to be able to reference and delete the spriteNode?
Long press gestures are continuous gestures that may fire multiple times, as you are seeing. Have you tried checking the recognizer's state (UIGestureRecognizer.State.began, .changed, .ended)? I solved a similar problem doing things this way.
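A minimal sketch of that state-based approach inside the scene, assuming GameScene is an SKScene presented in an SKView; convertPoint(fromView:) translates the recognizer's view location into scene coordinates, which also avoids the UIView?/GameScene type error shown above:
@objc func longPress(sender: UILongPressGestureRecognizer) {
    guard sender.state == .began else { return }          // react once per press
    guard let view = self.view else { return }
    let viewLocation = sender.location(in: view)
    let sceneLocation = convertPoint(fromView: viewLocation)
    let touchedNodes = nodes(at: sceneLocation)
    if let tile = touchedNodes.first {
        // Inspect any properties of the pressed node here, then remove it.
        tile.removeFromParent()
    }
}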
EDIT - I think one way to get there is to grab your object in handleTap and hang on to it. Then when the long press happens, you already have your node. If something changes before the long press, you obviously need to reset. Sorry, this is some extra code, but look at hitTest.
@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: gameScene)
    if data.isAirStrikeModeOn == true {
        let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
        let scenePoint = gameScene.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedPoint.z)))
        gameControl.airStrike(position: scenePoint)
    } else {
        let hitResults = gameScene.hitTest(location, options: hitTestOptions)
        for vHit in hitResults {
            if vHit.node.name?.prefix(5) == "Panel" {
                // May have selected an invalid panel or auto upgrade was on
                if gameControl.selectPanel(vPanel: vHit.node.name!) == false { return }
                return
            }
        }
    }
}
So I am not completely satisfied with this answer, but it is a workaround for what I need.
What I have done is add two variables, touchesStart and touchesEnd, to my tile class.
In touchesBegan() I update touchesStart with CACurrentMediaTime(), and I update touchesEnd via the touchesEnded() function.
Then in touchesEnded() I subtract touchesStart from touchesEnd. If the difference is more than 1.0 I call the function for a long press; if less than 1.0 I call the function for a tap.
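A minimal sketch of that workaround inside the tile class (the class and handler names are placeholders):
import SpriteKit
import QuartzCore

class TileNode: SKSpriteNode {                      // hypothetical tile class

    private var touchesStart: CFTimeInterval = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        touchesStart = CACurrentMediaTime()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        let pressDuration = CACurrentMediaTime() - touchesStart
        if pressDuration > 1.0 {
            handleLongPress()                       // placeholder long-press handler
        } else {
            handleTap()                             // placeholder tap handler
        }
    }

    private func handleLongPress() { removeFromParent() }
    private func handleTap() { /* show the extra letter tiles */ }
}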

ARKit – Poster as a window to a virtual room

I am working on an iOS app using ARKit.
In the real world, there is a poster on the wall. The poster is a fixed thing, so any needed preprocessing may be applied.
The goal is to make this poster a window into a virtual room, so that when the user approaches the poster, they can look "through" it into some virtual 3D environment (a room). Of course, the user cannot go through the "window" and wander around in that 3D environment; they can only observe the virtual room by looking "through" the poster.
I know that it's possible to make this poster detectable by ARKit, and to play some visual effects around it, or even a movie on top of it.
But I have not found information on how to turn it into a window into a virtual 3D world.
Any ideas and links to sample projects are greatly appreciated.
Look at the video posted on the Augmented Images webpage (use the Chrome browser to watch it).
It's easy to create that type of virtual cube. All you need is a 3D model of a simple cube primitive without a front polygon (so that you can see its inner surface), plus a plane with a square hole. Assign an out-of-the-box RealityKit occlusion material, or a hand-made SceneKit occlusion material, to this plane and it will hide all the outer walls of the cube behind it.
In Autodesk Maya, the occlusion material is the Hold-Out option in Render Stats (for Viewport 2.0 only).
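For the hand-made SceneKit occlusion material mentioned above, a minimal sketch (maskingPlaneNode is a placeholder for the plane with the square hole): the material writes to the depth buffer but not to the color buffer, so it occludes the cube's outer walls without being visible itself.
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []        // draw nothing into the color buffer
occlusionMaterial.writesToDepthBuffer = true       // but still write depth, so it occludes what is behind

maskingPlaneNode.geometry?.firstMaterial = occlusionMaterial
maskingPlaneNode.renderingOrder = -10              // render before the cube so the mask wins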
When you track your poster on a wall (with the detectionImages option activated), your app must recognize the picture and "load" the 3D cube together with its masking plane carrying the occlusion shader. Because the ARImageAnchor on the poster and the pivot point of the 3D cube must coincide, the cube's pivot point has to be located on the front edge of the cube (at the same level as the wall's surface).
If you wish to download Apple's sample code containing the image detection experience, just click the blue button on the same webpage that documents detectionImages.
Here is a short example of my code:
@IBOutlet var sceneView: ARSCNView!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self    // for using renderer() methods of ARSCNViewDelegate
    sceneView.scene = SCNScene()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {
    guard let refImage = ARReferenceImage.referenceImages(inGroupNamed: "Poster",
                                                          bundle: nil)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImage
    config.maximumNumberOfTrackedImages = 1
    let options = [ARSession.RunOptions.removeExistingAnchors,
                   ARSession.RunOptions.resetTracking]
    sceneView.session.run(config, options: ARSession.RunOptions(options))
}
...and, of course, SceneKit's renderer() instance method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }
    anchorsArray.append(imageAnchor)
    if anchorsArray.first != nil {
        node.addChildNode(portalNode)
    }
}
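The portalNode used above would be assembled elsewhere; here is a hypothetical sketch that loads it from a bundled scene file ("PortalRoom.scn" and the node name "maskingPlane" are placeholders) and applies the occlusion setup from earlier:
lazy var portalNode: SCNNode = {
    let node = SCNNode()
    guard let portalScene = SCNScene(named: "art.scnassets/PortalRoom.scn") else { return node }
    for child in portalScene.rootNode.childNodes {
        node.addChildNode(child)                   // cube without a front polygon + masking plane
    }
    if let mask = node.childNode(withName: "maskingPlane", recursively: true) {
        mask.geometry?.firstMaterial?.colorBufferWriteMask = []   // occlusion material
        mask.renderingOrder = -10
    }
    return node
}()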

Hit-test on Animated Character – Getting the correct worldCoordinates

In my ARKit app I have an animated character (stored as a T-Bone model in an SCN file). The animations are taken from several DAE files and applied to the model using SCNAnimationPlayer, like so:
let myAnimation = Animations.configMyAnimationFunction()
myAnimation.stop()
enemyNode.childNodes[2].addAnimationPlayer(myAnimation, forKey: "myKey")
enemyNode.childNodes[2].animationPlayer(forKey: "myKey")?.play()
The animation plays perfectly.
Now I do a hit-test against the animated geometry, like this:
let currentTouchPoint = touches.first?.location(in: self.sceneView)
let hitTest = sceneView.hitTest(currentTouchPoint!,
                                options: [SCNHitTestOption.categoryBitMask: NodeCategory.catEnemy.rawValue,
                                          SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
let hitObject = hitTest.first?.node   // node returned by the hit-test
if hitObject != nil {
    // code...
    let hitLocation = hitTest.first?.worldCoordinates
    // code...
}
I want to use the result from worldCoordinates, but it seems that the result always contains the coordinates of the static T-Bone model instead of the location the geometry occupies during the animation.
Imagine the animated model clapping its hands (as a humanoid character) or touching the ground. When I touch the model's hands, the hit-test works and returns a result, but at the wrong coordinates.
Apple's documentation describes worldCoordinates as "The point of intersection between the geometry and the search ray, in the scene’s world coordinate system." I also tried localCoordinates, but with even less success.
How can I determine the real coordinates at the current touch location on the geometry while the model is being animated?