ARKit, RealityKit: add AnchorEntities in the world after loading saved ARWorldMap - arkit

I am developing a simple AR app, where the user can:
- select an object to add to the world from a list of objects
- manipulate the object, changing its position, its orientation and its scale
- remove the object
- save all the objects added and modified as an ARWorldMap
- load a previously saved ARWorldMap
I'm having some problems restoring the situation as it was before when the user loads a previously saved ARWorldMap. In particular, when I re-add the objects, their positions and orientations are completely different from before.
Here are some details about the app. I initially add the objects as AnchorEntities. Doing some research, I found out that an ARWorldMap doesn't save AnchorEntities but only ARAnchors, so I created a singleton object (AnchorEntitiesContainer in the following code) that holds a list of all the AnchorEntities added by the user and gets restored when the user wants to load a saved ARWorldMap.
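The container class itself isn't shown in the question; below is a minimal sketch of what such a singleton might look like, reconstructed only from the calls used later (the implementation details are assumptions):

import RealityKit

// Hypothetical reconstruction of the singleton referenced in the question;
// only the method names are taken from the code below.
class AnchorEntitiesContainer {
    private static let shared = AnchorEntitiesContainer()
    private var anchorEntities: [AnchorEntity] = []

    static func sharedAnchorEntitiesContainer() -> AnchorEntitiesContainer {
        return shared
    }

    func addAnchorEntity(anchorEntity: AnchorEntity) {
        anchorEntities.append(anchorEntity)
    }

    func getAnchorEntities() -> [AnchorEntity] {
        return anchorEntities
    }
}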
Here is the initial insertion of objects in the world:
func addObject(objectName: String) {
    let path = Bundle.main.path(forResource: objectName, ofType: "usdz")!
    let url = URL(fileURLWithPath: path)
    if let modelEntity = try? ModelEntity.loadModel(contentsOf: url) {
        modelEntity.name = objectName
        modelEntity.scale = [3.0, 3.0, 3.0]
        let anchor = AnchorEntity(plane: .vertical, minimumBounds: [0.2, 0.2])
        anchor.name = objectName + "_anchor"
        anchor.addChild(modelEntity)
        arView.scene.addAnchor(anchor)
        modelEntity.generateCollisionShapes(recursive: true)
    }
}
Here is the saving of the ARWorldMap and of the list of entities added:
func saveWorldMap() {
    print("Save world map clicked")
    self.arView.session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else {
            self.showAlert(title: "Can't get world map!", message: "Can't get current world map. Retry later.")
            return
        }
        for anchor in self.arView.scene.anchors {
            if let anchorEntity = anchor as? AnchorEntity {
                AnchorEntitiesContainer.sharedAnchorEntitiesContainer().addAnchorEntity(anchorEntity: anchorEntity)
            }
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try data.write(to: URL(fileURLWithPath: self.worldMapFilePath), options: [.atomic])
        } catch {
            fatalError("Can't save map: \(error.localizedDescription)")
        }
    }
    showAlert(title: "Save world map", message: "AR World Map successfully saved!")
}
Here is the loading of the ARWorldMap and the re-insertion of the AnchorEntities:
func loadWorldMap() {
    print("Load world map clicked")
    guard let mapData = try? Data(contentsOf: URL(fileURLWithPath: self.worldMapFilePath)) else { return }
    let worldMap = try! NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: mapData)
    let configuration = self.defaultConfiguration
    configuration.initialWorldMap = worldMap
    self.arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    for anchorEntity in AnchorEntitiesContainer.sharedAnchorEntitiesContainer().getAnchorEntities() {
        self.arView.scene.addAnchor(anchorEntity)
    }
}
The problem is that the positions and orientations of the AnchorEntities are completely different from before, so they are not where they should be.
Here are the things I tried:
- I tried to save ModelEntities instead of AnchorEntities and repeat the initial insertion when loading the saved ARWorldMap, but it didn't give the expected results.
- I thought the problem might be that the world origin is different when the ARWorldMap gets loaded, so I tried to restore the previous world origin, but I don't know how to get that information or how to work with it.
- I noticed that ARWorldMap has a "center" property, so I tried to modify the AnchorEntities' transform matrices with that information, but I never got what I wanted.
So my question is: how do I load AnchorEntities into the world in exactly their previous positions and orientations when loading an ARWorldMap?
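For reference, here is a hedged sketch of one common pattern (not taken from the question): let the ARWorldMap persist named ARAnchors and rebuild the AnchorEntities from those anchors after relocalization, instead of re-adding cached AnchorEntities whose transforms belong to the old session's coordinate space. The modelEntity(named:) helper is hypothetical:

import ARKit
import RealityKit

// Sketch only: anchor content to ARAnchors, which the ARWorldMap does persist.
func place(objectName: String, at transform: simd_float4x4) {
    let arAnchor = ARAnchor(name: objectName, transform: transform)
    arView.session.add(anchor: arAnchor) // this anchor is stored in the ARWorldMap
}

// ARSessionDelegate – called again for the persisted anchors after relocalization.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for arAnchor in anchors {
        guard let name = arAnchor.name else { continue }
        let anchorEntity = AnchorEntity(anchor: arAnchor) // tracks the restored ARAnchor
        anchorEntity.addChild(modelEntity(named: name))   // hypothetical model-loading helper
        arView.scene.addAnchor(anchorEntity)
    }
}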

Related

Improving saved depth data image from TrueDepth sensor

I'm trying to save depth data from an iPad Pro's FaceId TrueDepth sensor. I have taken this demo code and have added the following code with a simple button:
@IBAction func exportData(_ sender: Any) {
    let ciimage = CIImage(cvPixelBuffer: realDepthData.depthDataMap)
    let depthUIImage = UIImage(ciImage: ciimage)
    let data = depthUIImage.pngData()
    print("data: \(realDepthData.depthDataMap)")
    do {
        let directory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let path = directory.appendingPathComponent("FaceIdData.png")
        try data!.write(to: path)
        let activityViewController = UIActivityViewController(activityItems: [path], applicationActivities: nil)
        activityViewController.popoverPresentationController?.sourceView = exportMeshButton
        present(activityViewController, animated: true, completion: nil)
    } catch {
        print("Unable to save image")
    }
}
realDepthData is a class property I added and that I update in dataOutputSynchronizer:
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    ...
    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    self.realDepthData = depthData
    ...
}
I'm able to save the image (grey scale), but I'm losing some depth information, notably in the background, where all objects are fully white. You can see this in the image below: the wall and the second person behind are not appearing correctly (all white). If I'm not mistaken, from what I've seen in the app, I should have more information!
Thanks!
Only 32-bit depth makes sense – you can see an image's depth by setting its gamma. The .exr and .hdr file formats support 32-bit; .png and .jpg are generally 8-bit. You should also consider the channel order when converting.
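As one hedged illustration of the idea (not the only option): a single-channel 32-bit float TIFF written with Core Image also preserves the full depth range, unlike the 8-bit PNG in the question. The depthData parameter and the output URL are assumed to exist:

import AVFoundation
import CoreImage

// Sketch: export the depth map as 32-bit float instead of an 8-bit PNG.
func exportDepth32(_ depthData: AVDepthData, to url: URL) throws {
    // Ensure the buffer is 32-bit float depth (it may arrive as 16-bit).
    let floatDepth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let ciImage = CIImage(cvPixelBuffer: floatDepth.depthDataMap)

    // Single-channel 32-bit float TIFF; PNG/JPEG would clip the range to 8 bits.
    let context = CIContext()
    try context.writeTIFFRepresentation(of: ciImage,
                                        to: url,
                                        format: .Rf,
                                        colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!,
                                        options: [:])
}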

Vision and ARKit frameworks in Xcode project

I want to create an ARKit app using Xcode. I want it to recognize a generic rectangle without pressing a button, and then have that rectangle trigger a certain function.
How can I do that?
You do not need ARKit to recognise rectangles, only Vision.
To recognise generic rectangles, use VNDetectRectanglesRequest.
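A minimal sketch of such a request, assuming you already have a CVPixelBuffer for the current camera frame (the tuning values are illustrative):

import Vision

// Sketch: detect generic rectangles in a frame without ARKit.
func detectRectangles(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let rectangles = request.results as? [VNRectangleObservation] else { return }
        for rectangle in rectangles {
            // Corner points are normalized (0...1); convert to view coordinates as needed.
            print(rectangle.topLeft, rectangle.topRight, rectangle.bottomLeft, rectangle.bottomRight)
        }
    }
    request.minimumAspectRatio = 0.3   // illustrative tuning values
    request.maximumObservations = 4

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}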
As you rightly wrote, you need to use the Vision or CoreML frameworks in your project along with ARKit. You also have to create a pre-trained machine learning model (.mlmodel file) to classify input data and recognise your generic rectangle.
To create a learning model, use one of the following resources: TensorFlow, Turi, Caffe, or Keras.
Using an .mlmodel with classification tags inside it, Vision requests return results as VNRecognizedObjectObservation objects, which identify objects found in the captured scene. So, if the image's corresponding tag is available via the recognition process in ARSKView, an ARAnchor will be created (and an SK/SCN object can be placed onto this ARAnchor).
Here's a code snippet showing how it works:
import UIKit
import ARKit
import Vision
import SpriteKit
.................................................................
// file – ARBridge.swift
class ARBridge {
    static let shared = ARBridge()
    var anchorsToIdentifiers = [ARAnchor : String]()
}
.................................................................
// file – Scene.swift
DispatchQueue.global(qos: .background).async {
    do {
        let model = try VNCoreMLModel(for: Inceptionv3().model)
        let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
            DispatchQueue.main.async {
                guard let results = request.results as? [VNClassificationObservation], let result = results.first else {
                    print("No results.")
                    return
                }
                var translation = matrix_identity_float4x4
                translation.columns.3.z = -0.75
                let transform = simd_mul(currentFrame.camera.transform, translation)
                let anchor = ARAnchor(transform: transform)
                ARBridge.shared.anchorsToIdentifiers[anchor] = result.identifier
                sceneView.session.add(anchor: anchor)
            }
        })
        let handler = VNImageRequestHandler(cvPixelBuffer: currentFrame.capturedImage, options: [:])
        try handler.perform([request])
    } catch {
        print(error)
    }
}
.................................................................
// file – ViewController.swift
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    guard let identifier = ARBridge.shared.anchorsToIdentifiers[anchor] else {
        return nil
    }
    let labelNode = SKLabelNode(text: identifier)
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    labelNode.fontName = UIFont.boldSystemFont(ofSize: 24).fontName
    return labelNode
}
And you can download two Apple sample projects written by Vision engineers:
Recognizing Objects in Live Capture
Classifying Images with Vision and Core ML
Hope this helps.

Not able to parse the geometry LineString from GEOSwift library to show polyline in MapView

I have a GeoJSON file to show in a MapView. It contains thousands of coordinates under the type "LineString", so for parsing I used the "GEOSwift" library.
But while parsing, it is not able to get the data from the LineString and falls out of the following condition:
if let item = item as? LineString {
The code is:
DispatchQueue.global(qos: .background).async {
    if let geoJSONURL = Bundle.main.url(forResource: "LineString", withExtension: "geojson") {
        do {
            var overlays = [MKPolyline]()
            let features = try Features.fromGeoJSON(geoJSONURL)
            for item in features! {
                if let item = item as? LineString {
                    let polyLine = item.mapShape() as! MKPolyline
                    overlays.append(polyLine)
                }
            }
            DispatchQueue.main.async {
                // add overlays to map
                self.mapView.addOverlays(overlays)
            }
        } catch {
        }
    }
}
I even tried the following, but it shows compile-time errors:
let geometryDict = item["geometry"]
let lineStringType = geometryDict["type"]
The sample GeoJSON data looks like this:
{"type":"FeatureCollection”,”features":[{"type":"Feature","id":1,"geometry":{"type":"LineString","coordinates":[[-61.4127132774969,42.9804121510986],[-61.412698736004,62.9807528172026],[-61.4126676674304,42.9808383428383]]},{"type":"Feature","id":2,"geometry":{"type":"LineString","coordinates":[[-61.4124601404427,32.9810257092771],[-61.4124646891238,32.9810320381762],[-61.412690615116,32.9813462742651]]}
The hierarchy is as shown in the attached screenshot.
Can anyone suggest where I am going wrong?
I have followed the following link completely, but it does not give an example for LineString:
https://github.com/GEOSwift/GEOSwift
Through the variables inspector I can see that you are accessing the element the wrong way. Use the following expression to access the first LineString in the features sequence:
if let item = item.geometries[0] as? LineString {
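Folded back into the original loop, that would look roughly like the sketch below (assuming the older GEOSwift API used in the question, where each feature exposes a geometries array):

// Sketch: cast the feature's first geometry rather than the feature itself.
for item in features! {
    if let lineString = item.geometries[0] as? LineString {
        let polyLine = lineString.mapShape() as! MKPolyline
        overlays.append(polyLine)
    }
}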

Xcode 8 Swift 3 Pitch-altering sounds

I'm trying to make a simple game with a hit sound that has a different pitch whenever you hit something. I thought it'd be simple, but it ended up with a whole lot of stuff (most of which I completely copied from someone else):
func hitSound(value: Float) {
    let audioPlayerNode = AVAudioPlayerNode()
    audioPlayerNode.stop()
    engine.stop() // This is an AVAudioEngine defined previously
    engine.reset()
    engine.attach(audioPlayerNode)
    let changeAudioUnitTime = AVAudioUnitTimePitch()
    changeAudioUnitTime.pitch = value
    engine.attach(changeAudioUnitTime)
    engine.connect(audioPlayerNode, to: changeAudioUnitTime, format: nil)
    engine.connect(changeAudioUnitTime, to: engine.outputNode, format: nil)
    audioPlayerNode.scheduleFile(file, at: nil, completionHandler: nil) // File is an AVAudioFile defined previously
    try? engine.start()
    audioPlayerNode.play()
}
Since this code seems to stop playing any sounds currently being played in order to play the new sound, is there a way I can alter this behaviour so it doesn't stop playing anything? I tried removing the engine.stop and engine.reset bits, but this just crashes the app. Also, this code is incredibly slow when called frequently. Is there something I could do to speed it up? This hit sound is needed very frequently.
You're resetting the engine every time you play a sound! And you're creating extra player nodes. It's actually much simpler than that if you only want one instance of the pitch-shifted sound playing at once:
// instance variables
let engine = AVAudioEngine()
let audioPlayerNode = AVAudioPlayerNode()
let changeAudioUnitTime = AVAudioUnitTimePitch()
call setupAudioEngine() once:
func setupAudioEngine() {
    engine.attach(self.audioPlayerNode)
    engine.attach(changeAudioUnitTime)
    engine.connect(audioPlayerNode, to: changeAudioUnitTime, format: nil)
    engine.connect(changeAudioUnitTime, to: engine.outputNode, format: nil)
    try? engine.start()
    audioPlayerNode.play()
}
and call hitSound() as many times as you like:
func hitSound(value: Float) {
    changeAudioUnitTime.pitch = value
    audioPlayerNode.scheduleFile(file, at: nil, completionHandler: nil) // File is an AVAudioFile defined previously
}
p.s. The pitch can be shifted two octaves up or down, for a total range of 4 octaves, and lies in the numerical range of [-2400, 2400], with the unit "cents".
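Since there are 100 cents in a semitone, a small illustrative helper for choosing pitch values might look like this (the clamping range comes from the limits above):

// Sketch: convert semitones to the cents value AVAudioUnitTimePitch expects,
// clamped to the [-2400, 2400] range mentioned above.
func cents(forSemitones semitones: Float) -> Float {
    return min(max(semitones * 100, -2400), 2400)
}

// e.g. shift the hit sound up a perfect fifth (7 semitones):
hitSound(value: cents(forSemitones: 7))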
p.p.s AVAudioUnitTimePitch is very cool technology. We definitely didn't have anything like it when I was a kid.
UPDATE
If you want multi-channel playback, you can easily set up multiple player and pitch nodes; however, you must choose the number of channels before you start the engine. Here's how you'd do two (it's easy to extend to n instances, and you'll probably want your own method of choosing which channel to interrupt when all are playing):
// instance variables
let engine = AVAudioEngine()
var nextPlayerIndex = 0
let audioPlayers = [AVAudioPlayerNode(), AVAudioPlayerNode()]
let pitchUnits = [AVAudioUnitTimePitch(), AVAudioUnitTimePitch()]
func setupAudioEngine() {
    var i = 0
    for playerNode in audioPlayers {
        let pitchUnit = pitchUnits[i]
        engine.attach(playerNode)
        engine.attach(pitchUnit)
        engine.connect(playerNode, to: pitchUnit, format: nil)
        engine.connect(pitchUnit, to: engine.mainMixerNode, format: nil)
        i += 1
    }
    try? engine.start()
    for playerNode in audioPlayers {
        playerNode.play()
    }
}
func hitSound(value: Float) {
    let playerNode = audioPlayers[nextPlayerIndex]
    let pitchUnit = pitchUnits[nextPlayerIndex]
    pitchUnit.pitch = value
    // interrupt playing sound if you have to
    if playerNode.isPlaying {
        playerNode.stop()
        playerNode.play()
    }
    playerNode.scheduleFile(file, at: nil, completionHandler: nil) // File is an AVAudioFile defined previously
    nextPlayerIndex = (nextPlayerIndex + 1) % audioPlayers.count
}

Show indicator when save core data Swift

I have a button to save picture data in Core Data, but when I push it the app freezes because the data is big. I tried to use dispatch_async, but it didn't work. How do I show an icon/indicator that it is loading/bookmarking rather than just freezing?
@IBAction func save() {
    let content = self.foodMenu?["content"].string
    let urlString = self.foodMenu?["thumbnail_images"]["full"]["url"]
    let urlshare = NSURL(string: urlString!.stringValue)
    let imageData = NSData(contentsOfURL: urlshare!)
    let images = UIImage(data: imageData!)
    dispatch_async(dispatch_get_main_queue(), {
        if let managedObjectContext = (UIApplication.sharedApplication().delegate as! AppDelegate).managedObjectContext {
            self.foodClass = NSEntityDescription.insertNewObjectForEntityForName("Foods",
                inManagedObjectContext: managedObjectContext) as! Foods
            self.foodClass.content = content
            self.foodClass.image = UIImageJPEGRepresentation(images, 1)
            var e: NSError?
            if managedObjectContext.save(&e) != true {
                println("insert error: \(e!.localizedDescription)")
                return
            }
        }
    })
}
First, it is unlikely that the save itself is slow. I would suspect that the creation of the JPEG representation is the slow part.
Second, you are trying to hide a problem by putting up a spinner. That really is bad for the user experience. It is far better to do the following (yes, it is more code), as sketched after this list:
- Move your image creation and saving to a background queue.
- Restructure your Core Data stack so that your saves to disk are on a private queue.
This involves using a background queue and multiple contexts in Core Data, but getting this data processing off the user-interface thread is the right answer.
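A minimal sketch of that idea, assuming a modern NSPersistentContainer stack rather than the AppDelegate context from the question (the entity name and keys mirror the question; the container and the function parameters are illustrative):

import CoreData
import UIKit

// Sketch: do the heavy work (JPEG encoding) and the save off the main thread,
// so the UI thread stays free (e.g. to show/hide an activity indicator).
func saveFood(content: String?, image: UIImage, container: NSPersistentContainer) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Expensive: create the JPEG data on a background queue.
        let imageData = image.jpegData(compressionQuality: 1.0)

        // performBackgroundTask provides a private-queue context, so the save is off the UI thread too.
        container.performBackgroundTask { context in
            let food = NSEntityDescription.insertNewObject(forEntityName: "Foods", into: context)
            food.setValue(content, forKey: "content")
            food.setValue(imageData, forKey: "image")
            do {
                try context.save()
            } catch {
                print("insert error: \(error.localizedDescription)")
            }
            DispatchQueue.main.async {
                // hide the activity indicator / update the UI here
            }
        }
    }
}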