Not able to parse LineString geometry with the GEOSwift library to show a polyline in MapView - Swift

I have a GeoJSON file to show in a MapView. It contains thousands of coordinates under the type "LineString", so I used the "GEOSwift" library for parsing.
But at parsing time it is not able to get the data as a LineString, and execution falls out of the following condition:
if let item = item as? LineString {
The code is:
DispatchQueue.global(qos: .background).async {
    if let geoJSONURL = Bundle.main.url(forResource: "LineString", withExtension: "geojson") {
        do {
            var overlays = [MKPolyline]()
            let features = try Features.fromGeoJSON(geoJSONURL)
            for item in features! {
                if let item = item as? LineString {
                    let polyLine = item.mapShape() as! MKPolyline
                    overlays.append(polyLine)
                }
            }
            DispatchQueue.main.async {
                // add overlays to map
                self.mapView.addOverlays(overlays)
            }
        } catch {
            // parsing errors are silently ignored here
        }
    }
}
I even tried the following, but it shows compile-time errors:
let geometryDict = item["geometry"]
let lineStringType = geometryDict["type"]
The sample GeoJSON data looks like this:
{"type":"FeatureCollection”,”features":[{"type":"Feature","id":1,"geometry":{"type":"LineString","coordinates":[[-61.4127132774969,42.9804121510986],[-61.412698736004,62.9807528172026],[-61.4126676674304,42.9808383428383]]},{"type":"Feature","id":2,"geometry":{"type":"LineString","coordinates":[[-61.4124601404427,32.9810257092771],[-61.4124646891238,32.9810320381762],[-61.412690615116,32.9813462742651]]}
The hierarchy looks like the attached screenshot (PFA).
Can anyone suggest where I am going wrong?
I followed the link below completely, but it does not give an example for LineString.
https://github.com/GEOSwift/GEOSwift

Looking through the variables inspector, I can see that you are accessing the element the wrong way. Use the following expression to access the first LineString in the features sequence:
if let item = item.geometries[0] as? LineString {
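For completeness, the loop in the question would then become something like this (a sketch, assuming each feature exposes its geometries through a geometries array as above):
let features = try Features.fromGeoJSON(geoJSONURL)
for item in features! {
    // Each feature wraps its geometries; walk all of them, not just the first.
    for geometry in item.geometries {
        if let lineString = geometry as? LineString,
            let polyline = lineString.mapShape() as? MKPolyline {
            overlays.append(polyline)
        }
    }
}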

Related

ARKit, RealityKit: add AnchorEntities in the world after loading saved ARWorldMap

I am developing a simple AR app, where the user can:
select an object to add to the world from a list of objects
manipulate the object, changing its position, its orientation and its scale
remove the object
save all the objects added and modified as an ARWorldMap
load a previously saved ARWorldMap
I'm having trouble restoring the situation as it was before when the user loads a previously saved ARWorldMap. In particular, when I re-add the objects, their positions and orientations are completely different than before.
Here are some details about the app. I initially add the objects as AnchorEntities. Doing some research I found out that the ARWorldMap doesn't save AnchorEntities but only ARAnchors, so I created a singleton object (AnchorEntitiesContainer in the following code) where I have a list of all the AnchorEntities added by the user that gets restored when the user wants to load a saved ARWorldMap.
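For reference, here is a minimal sketch of what such a container might look like (the method names are taken from the snippets below; the actual implementation may differ):
import RealityKit

class AnchorEntitiesContainer {
    private static let shared = AnchorEntitiesContainer()
    private var anchorEntities: [AnchorEntity] = []

    // Accessors matching the calls in saveWorldMap()/loadWorldMap() below.
    static func sharedAnchorEntitiesContainer() -> AnchorEntitiesContainer {
        return shared
    }

    func addAnchorEntity(anchorEntity: AnchorEntity) {
        anchorEntities.append(anchorEntity)
    }

    func getAnchorEntities() -> [AnchorEntity] {
        return anchorEntities
    }
}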
Here is the initial insertion of objects in the world:
func addObject(objectName: String) {
    let path = Bundle.main.path(forResource: objectName, ofType: "usdz")!
    let url = URL(fileURLWithPath: path)
    if let modelEntity = try? ModelEntity.loadModel(contentsOf: url) {
        modelEntity.name = objectName
        modelEntity.scale = [3.0, 3.0, 3.0]
        let anchor = AnchorEntity(plane: .vertical, minimumBounds: [0.2, 0.2])
        anchor.name = objectName + "_anchor"
        anchor.addChild(modelEntity)
        arView.scene.addAnchor(anchor)
        modelEntity.generateCollisionShapes(recursive: true)
    }
}
Here is the saving of the ARWorldMap and of the list of entities added:
func saveWorldMap() {
    print("Save world map clicked")
    self.arView.session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else {
            self.showAlert(title: "Can't get world map!", message: "Can't get current world map. Retry later.")
            return
        }
        // scene.anchors holds HasAnchoring values; keep only actual AnchorEntities.
        for case let anchorEntity as AnchorEntity in self.arView.scene.anchors {
            AnchorEntitiesContainer.sharedAnchorEntitiesContainer().addAnchorEntity(anchorEntity: anchorEntity)
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try data.write(to: URL(fileURLWithPath: self.worldMapFilePath), options: [.atomic])
        } catch {
            fatalError("Can't save map: \(error.localizedDescription)")
        }
    }
    showAlert(title: "Save world map", message: "AR World Map successfully saved!")
}
Here is the loading of the ARWorldMap and the re-insertion of the AnchorEntities:
func loadWorldMap() {
    print("Load world map clicked")
    guard let mapData = try? Data(contentsOf: URL(fileURLWithPath: self.worldMapFilePath)),
        let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: mapData) else {
        return
    }
    let configuration = self.defaultConfiguration
    configuration.initialWorldMap = worldMap
    self.arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    for anchorEntity in AnchorEntitiesContainer.sharedAnchorEntitiesContainer().getAnchorEntities() {
        self.arView.scene.addAnchor(anchorEntity)
    }
}
The problem is that the position and orientation of the AnchorEntities are completely different than before, so they are not where they should be.
Here are the things I tried:
I tried to save ModelEntities instead of AnchorEntities and repeat the initial insertion when loading the saved ARWorldMap but it didn't give the expected results.
I thought that maybe the problem was that the world origins are different when the ARWorldMap gets loaded, so I tried to restore the previous world origin, but I don't know how to get that information or how to work with it.
I noticed that the ARWorldMap has a "center" parameter so I tried to modify the AnchorEntities transform matrix with that information but I never got what I wanted.
So my question is how do I load AnchorEntities into the world in exactly their previous positions and orientations when loading an ARWorldMap?

Can't fill my collection views with API data by using Alamofire

There is an API (https://docs.api.jikan.moe/#section/Information). I get data from it, but I can't display it in my collection views in any way. The data does come through; I checked. I fill the collection view cells through a view model (ViewController <-> ViewModel) together with a network manager / API manager.
The result is just a white collectionView (screenshot).
This is my first time working with Alamofire, and apparently I don't understand something. Please tell me what the problem is. There is a link to GitHub in case someone needs it.
Updated
The problem is probably with the asynchronous code. I still have no idea how to fix it, because I don't understand GCD well either (screenshot). One direction I'm considering is a completion handler; a sketch follows the snippet below.
func fetchRequest(typeRequest: TypeRequest) -> [AnimeModel] {
    var animeModels: [AnimeModel] = []
    switch typeRequest {
    case .name(let name):
        let urlString = "https://api.jikan.moe/v4/anime?q=\(name)"
        AF.request(urlString).response { response in
            guard let data = response.data else { return print("NO DATA FOR - \(name)") }
            do {
                let json = try JSON(data: data)
                let title = json["data"][0]["title_english"].string ?? "Anime"
                let imageURL = json["data"][0]["images"]["jpg"]["image_url"].string ?? ""
                let image = AnimeModel.downloadImage(stringURL: imageURL)
                animeModels.append(AnimeModel(image: image, title: title))
                print(".NAME ANIME MODELS - \(animeModels)")
            } catch let error {
                print(error.localizedDescription)
            }
        }
    }
    print("BEFORE RETURN ANIME MODELS - \(animeModels)")
    return animeModels // returns empty array and then "animeModel.append()" is applied
}
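Here is the completion-handler sketch: the function returns before the Alamofire response arrives, so instead of returning an array, the fetch could hand the models back asynchronously (this assumes TypeRequest's other cases, if any, are handled the same way):
func fetchRequest(typeRequest: TypeRequest, completion: @escaping ([AnimeModel]) -> Void) {
    switch typeRequest {
    case .name(let name):
        let urlString = "https://api.jikan.moe/v4/anime?q=\(name)"
        AF.request(urlString).response { response in
            guard let data = response.data else { return completion([]) }
            var animeModels: [AnimeModel] = []
            if let json = try? JSON(data: data) {
                let title = json["data"][0]["title_english"].string ?? "Anime"
                let imageURL = json["data"][0]["images"]["jpg"]["image_url"].string ?? ""
                let image = AnimeModel.downloadImage(stringURL: imageURL)
                animeModels.append(AnimeModel(image: image, title: title))
            }
            // Deliver the models only after the response has actually arrived.
            completion(animeModels)
        }
    }
}
The view model would then assign its models inside that closure and reload the collection view on the main queue.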

Vision and ARKit frameworks in Xcode project

I want to create an ARKit app using Xcode. I want it to recognize a generic rectangle without pressing a button, and then have the rectangle trigger a certain function.
How can I do this?
You do not need ARKit to recognise rectangles, only Vision.
To recognise generic rectangles, use VNDetectRectanglesRequest.
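A minimal sketch (pixelBuffer here is assumed to come from something like ARFrame.capturedImage or a camera feed):
let request = VNDetectRectanglesRequest { request, error in
    guard let results = request.results as? [VNRectangleObservation] else { return }
    for rectangle in results {
        // Corner points are normalised to the image (0...1).
        print(rectangle.topLeft, rectangle.topRight, rectangle.bottomRight, rectangle.bottomLeft)
    }
}
request.maximumObservations = 1 // report only the most prominent rectangle
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
try? handler.perform([request])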
As you rightly wrote, you need to use the Vision or CoreML framework in your project along with ARKit. You also have to supply a pre-trained machine learning model (.mlmodel file) to classify input data and recognise your generic rectangle.
For creating a learning model, use one of the following resources: TensorFlow, Turi, Caffe, or Keras.
Using an .mlmodel with classification tags inside it, Vision requests return results as VNRecognizedObjectObservation objects, which identify objects found in the captured scene. So, if an image's corresponding tag becomes available via the recognition process in ARSKView, an ARAnchor will be created (and an SK/SCN object can be placed onto this ARAnchor).
Here's a code snippet showing how it works:
import UIKit
import ARKit
import Vision
import SpriteKit
.................................................................
// file – ARBridge.swift
class ARBridge {
    static let shared = ARBridge()
    var anchorsToIdentifiers = [ARAnchor : String]()
}
.................................................................
// file – Scene.swift
DispatchQueue.global(qos: .background).async {
    do {
        let model = try VNCoreMLModel(for: Inceptionv3().model)
        let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
            DispatchQueue.main.async {
                guard let results = request.results as? [VNClassificationObservation], let result = results.first else {
                    print("No results.")
                    return
                }
                var translation = matrix_identity_float4x4
                translation.columns.3.z = -0.75
                let transform = simd_mul(currentFrame.camera.transform, translation)
                let anchor = ARAnchor(transform: transform)
                ARBridge.shared.anchorsToIdentifiers[anchor] = result.identifier
                sceneView.session.add(anchor: anchor)
            }
        })
        let handler = VNImageRequestHandler(cvPixelBuffer: currentFrame.capturedImage, options: [:])
        try handler.perform([request])
    } catch {
        print(error)
    }
}
.................................................................
// file – ViewController.swift
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    guard let identifier = ARBridge.shared.anchorsToIdentifiers[anchor] else {
        return nil
    }
    let labelNode = SKLabelNode(text: identifier)
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    labelNode.fontName = UIFont.boldSystemFont(ofSize: 24).fontName
    return labelNode
}
And you can download two Apple sample-code projects written by Vision engineers:
Recognizing Objects in Live Capture
Classifying Images with Vision and Core ML
Hope this helps.

How to display CloudKit RecordType instances in a tableview controller

To my knowledge, the following code (or something very close to it) would retrieve one CloudKit instance of the record type...
let pred = NSPredicate(value: true)
let query = CKQuery(recordType: "Stores", predicate: pred)
publicDatabase.performQuery(query, inZoneWithID: nil) { (result, error) in
    if error != nil {
        print("Error" + (error?.localizedDescription)!)
    } else {
        if result?.count > 0 {
            let record = result![0]
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                self.txtDesc.text = record.objectForKey("storeDesc") as? String
                self.position = record.objectForKey("storeLocation") as! CLLocation
                let img = record.objectForKey("storeImage") as! CKAsset
                self.storeImage.image = UIImage(contentsOfFile: img.fileURL.path!)
                ....(& so on)
However, how and when (where physically in the code?) would I query so that I could set each cell to the information of each instance in my DiningType record type?
For instance, would I query inside didReceiveMemoryWarning? Or in cellForRowAtIndexPath? Or somewhere else?
If I am misunderstanding something in my code above, please jot it down in the comments; all help at this point is valuable and extremely appreciated.
Without a little more information, I will make a few assumptions about the rest of the code not shown. I will assume:
You are using a UITableView to display your data
Your UITableView (tableView) is properly wired to your view controller, including a proper outlet, assigning the table view's dataSource and delegate to your view controller, and implementing the required methods for those protocols.
Your data (for each cell) is stored in some type of collection, like an Array (although there are many options).
When you call the code to retrieve records from the database (in this case CloudKit) the data should eventually be stored in your Array. When your Array changes (new or updated data), you would call tableView.reloadData() to tell the tableView that something has changed and to reload the cells.
The cells are wired up (manually) in tableView(_:cellForRowAtIndexPath:). The table view calls this method for each item (provided you implemented tableView(_:numberOfRowsInSection:) and numberOfSectionsInTableView(_:)).
If you are unfamiliar with using UITableViews, they can seem difficult at first. If you'd like to see a fuller example of wiring up a UITableView, just let me know!
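In the meantime, here is a minimal sketch of those data-source methods, assuming the fetched CKRecords are kept in a categories array on the view controller (the "name" field is hypothetical):
func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 1
}

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return categories.count
}

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath)
    let record = categories[indexPath.row]
    // "name" is a hypothetical field on the DiningType record type.
    cell.textLabel?.text = record.objectForKey("name") as? String
    return cell
}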
First, I had to take care of the typical CloudKit requirements: setting up the container, public database, predicate, and query inputs. Then I had the public database perform the query, in this case on the record type "DiningType". In the first if statement, if an error is discovered, the console prints "Error" and no further action is taken. If no run-time problem is discovered, each result matching the query is appended to the categories array created above the viewDidLoad function.
var categories: Array<CKRecord> = []

override func viewDidLoad() {
    super.viewDidLoad()
    func fetchdiningtypes() {
        let container = CKContainer.defaultContainer()
        let publicDatabase = container.publicCloudDatabase
        let predicate = NSPredicate(value: true)
        let query = CKQuery(recordType: "DiningType", predicate: predicate)
        publicDatabase.performQuery(query, inZoneWithID: nil) { (results, error) -> Void in
            if error != nil {
                print("Error")
            } else {
                for result in results! {
                    self.categories.append(result)
                }
                NSOperationQueue.mainQueue().addOperationWithBlock({ () -> Void in
                    self.tableView.reloadData()
                })
            }
        }
    }
    fetchdiningtypes()
}

Using MapKit within the MVC guidelines

All,
I am trying to follow MVC guidelines with my app, so I am removing the code from the View (View Controller). I am trying to get a MapKit displayed with lat/long from Parse. I have it working fine when I enter the lat/long manually.
So I want a computed property (I think) on a tuple. The tuple will hold the lat/long, and when the tuple is used, I want to execute the Parse query to retrieve the lat/long.
I am a little stuck putting a computed property on a tuple.
Here is my code.
var latAndlongTuple = (Double, Double)?
{
    var query = PFQuery(className: "TableViewData")
    query.includeKey("EventLoc")
    query.findObjectsInBackgroundWithBlock {
        (objects: [AnyObject]!, error: NSError!) -> Void in
        if error == nil {
            for object in objects {
                var EventLocation = object["EventLoc"] as PFObject!
                EventLocation.fetchIfNeededInBackgroundWithBlock {
                    (EventLocation: PFObject!, error: NSError!) -> Void in
                    dispatch_async(dispatch_get_main_queue()) {
                        let latitude = EventLocation["Latitude"] as NSString
                        let longitude = EventLocation["Longitude"] as NSString
                    }
                }
            }
        }
    }
}

func LocationCoordinate() -> MKCoordinateRegion {
    let location = CLLocationCoordinate2D(latitude: latAndlongTuple.0, longitude: latAndlongTuple.1)
    let span = MKCoordinateSpanMake(0.001, 0.001)
    let region = MKCoordinateRegion(center: location, span: span)
    return region
}
So when the tuple is evaluated, i.e. when the function LocationCoordinate runs, I want the Parse code in the tuple's computed property to execute and update the tuple's components with the lat/long.
If you have any questions, let me know.
I have done this by:
var latAndlongTuple: (lat: CLLocationDegrees, long: CLLocationDegrees) {
    get {
        return (52.606907, -1.104780)
    }
}
and instead of just returning the two values, I have used Parse to get the values and then returned the CLLocationDegrees tuple.
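For example, the getter could run the synchronous variant of the query. A sketch (findObjects() blocks the calling thread, so avoid it on the main thread, and depending on your Parse SDK version the call may be marked throws; the field names match the question):
var latAndlongTuple: (lat: CLLocationDegrees, long: CLLocationDegrees) {
    get {
        let query = PFQuery(className: "TableViewData")
        query.includeKey("EventLoc")
        // findObjects() is the blocking counterpart of findObjectsInBackgroundWithBlock.
        if let objects = query.findObjects() as? [PFObject],
            let eventLoc = objects.first?["EventLoc"] as? PFObject,
            let lat = (eventLoc["Latitude"] as? NSString)?.doubleValue,
            let long = (eventLoc["Longitude"] as? NSString)?.doubleValue {
            return (lat, long)
        }
        return (52.606907, -1.104780) // fallback if the query fails
    }
}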