Mapping an Indoor Floor Plan into OSM-XML for Use in an iPhone App

I am currently working on a project at my home university to create an infrastructure-less indoor navigation iPhone application. I have a couple of questions regarding IndoorOSM and hope that the experts here can steer me in the right direction.
Given an indoor floor plan, how can I use JOSM to map it into the OSM-XML format? I understand that the floor plan will be represented as nodes, ways, tags and relations, with each node having a lat-lon value. As the indoor space I would like to map is located in Singapore, where little existing mapping work has been done, I am not sure where to accurately place the floor plan in JOSM before modelling (the buildings are non-existent on the map). The thing is, if I start modelling at the wrong location, the lat-lon values generated in the OSM-XML file will be way off from the lat-lon values of the actual real-world space, right? In that case I don't think I will be able to use the magnetometer to identify where the user currently is on the map...
In the OSM wiki, it is mentioned that a node represents a geospatial point and that a way is simply an ordered list of 2-2,000 nodes, which can be used to represent an area. Pardon my ignorance, but how can I tell what the physical size/area of this "point" or node is?
Other than IndoorOSM, is there an easier way to convert an indoor floor plan into something that my application can understand and use easily to allow navigation? I have seen a project known as roodin on YouTube, but I'm not sure how they did the mapping (link).
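For reference, the OSM-XML that JOSM produces boils down to node elements carrying id/lat/lon attributes, plus way and relation elements that reference those nodes by id. A minimal Swift sketch of reading the nodes back out on the phone with Foundation's XMLParser (the Node type and the file name are placeholders of mine, not part of IndoorOSM):

    import Foundation

    // Minimal reader for the <node> elements of an OSM-XML file.
    // The Node type and the "floorplan.osm" file name are placeholders.
    struct Node {
        let id: String
        let lat: Double
        let lon: Double
    }

    final class OSMNodeReader: NSObject, XMLParserDelegate {
        private(set) var nodes: [Node] = []

        func parse(url: URL) -> [Node] {
            guard let parser = XMLParser(contentsOf: url) else { return [] }
            parser.delegate = self
            parser.parse()
            return nodes
        }

        func parser(_ parser: XMLParser, didStartElement elementName: String,
                    namespaceURI: String?, qualifiedName qName: String?,
                    attributes attributeDict: [String: String] = [:]) {
            // Every <node> carries its id and its WGS84 position as attributes.
            guard elementName == "node",
                  let id = attributeDict["id"],
                  let latString = attributeDict["lat"], let lat = Double(latString),
                  let lonString = attributeDict["lon"], let lon = Double(lonString) else { return }
            nodes.append(Node(id: id, lat: lat, lon: lon))
        }
    }

    // Usage:
    // let url = Bundle.main.url(forResource: "floorplan", withExtension: "osm")!
    // let nodes = OSMNodeReader().parse(url: url)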

I'm also working on an app with similar functionality. This is my understanding:
1 - When mapping a building, you should map it with correct lat/long coordinates. It shouldn't be difficult to get the building's coordinates if you can go there and check them with a GPS-enabled smartphone. If you can't go there, it's more difficult; maybe asking someone to do that for you would help. But to start drawing your floor plan, you don't need coordinates; you can add them after you finish your drawing (see the sketch after this list).
2 - When you use JOSM to draw, it shows (in the status bar) the length of your way.
3 - Currently, I'm sticking with IndoorOSM. I like the way it reuses nodes, ways and relations to draw a floor plan. That's my recommendation.
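To illustrate point 1 above: relocating a floor plan that was drawn in the wrong place is just a constant latitude/longitude shift applied to every node (JOSM can also do this interactively by selecting and moving the data). A rough Swift sketch, where the Coordinate type is a placeholder and the coordinate values are made up:

    // Shift a floor plan drawn at the wrong place onto its real-world location.
    // 'drawnOrigin' is some reference node as drawn; 'realOrigin' is where that
    // same corner of the building actually is. All values below are made up.
    struct Coordinate {
        var lat: Double
        var lon: Double
    }

    func shift(nodes: [Coordinate], drawnOrigin: Coordinate, realOrigin: Coordinate) -> [Coordinate] {
        let dLat = realOrigin.lat - drawnOrigin.lat
        let dLon = realOrigin.lon - drawnOrigin.lon
        return nodes.map { Coordinate(lat: $0.lat + dLat, lon: $0.lon + dLon) }
    }

    // Example: move everything so the drawn entrance lands on the surveyed entrance.
    // let corrected = shift(nodes: allNodes,
    //                       drawnOrigin: Coordinate(lat: 1.00000, lon: 103.00000),
    //                       realOrigin:  Coordinate(lat: 1.29650, lon: 103.77640))

A constant offset is only a good approximation when the drawn and real positions are at similar latitudes; over large moves the longitude scale changes, so it is still best to draw reasonably close to the real site.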

How do Unity WorldAnchors work on the HoloLens?

I'm currently building a HoloLens application and have a feature in-mind that requires holograms to be dynamically created, placed, and to persist between sessions. Those holograms don't need to be shared between devices.
I've had a nightmare trying to find (working) implementations of and documentation for Unity WorldAnchors, with Azure Spatial Anchors seeming to stomp out most traces of them. Thankfully I've gotten past that and have managed to implement WorldAnchors by using the older HoloToolkit, since documentation for WorldAnchors in the newer MRTK seems to have disappeared as well.
MY QUESTION (because I am unable to find any docs for it) is how do WorldAnchors work?
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
Does anybody know the answer or where I might look (beyond the limited Unity and MS Docs for this matter) to find out implementation details?
Thank you.
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
We won't divulge the internal implementation details of the WorldAnchor, but we can state that it is not based on GPS on either HoloLens v1 or HoloLens v2. Currently, the WorldAnchor uses the data in the spatial map for placement. The key underlying piece is that anchors rely on spatial scanning, and the scanning can use Wi-Fi to improve its speed and accuracy; see these two references: 1 & 2
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
It is certainly possible for two identical rooms with the exact same layout to trick the mapping into thinking it is the same room. We document that here:
https://learn.microsoft.com/en-us/windows/mixed-reality/coordinate-systems#headset-tracks-incorrectly-due-to-identical-spaces-in-an-environment

Hololens-SpatialMapping (Unity3D)

I'm currently doing a project with Microsoft's HoloLens. The problem is that the HoloLens's memory is limited, so I can only make a spatial mapping of a room and not of a whole building, because it can't remember the entire building. I had an idea: maybe I can create several objects and assemble them? But nobody talks about this... Do you think it's possible?
Thanks for reading.
Y.P
Since you don't have a compass, you could establish some convention to help. For example, you could start the scanning with a voice command (and stop it with another one), and decide to only start scanning when you're facing north. Then it would be easy to know the orientation of each room. What may be harder is getting the angle exactly right: your head might be off by a few degrees, and you may have to work some "magic" (post-processing) to correct it.
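As a concrete example of that post-processing (a sketch only; the heading error is something you would have to measure or estimate yourself), correcting a room scanned while facing a few degrees off north is just a rotation of its vertices around the vertical axis:

    import simd

    // Rotate a scanned room's vertices around the vertical (Y) axis to compensate
    // for the head not facing exactly north when the scan started.
    func correctHeading(vertices: [SIMD3<Float>], headingErrorDegrees: Float) -> [SIMD3<Float>] {
        let radians = headingErrorDegrees * .pi / 180
        let rotation = simd_quatf(angle: -radians, axis: SIMD3<Float>(0, 1, 0))
        return vertices.map { rotation.act($0) }
    }

    // Example: the scan started with the head about 4 degrees east of north.
    // let corrected = correctHeading(vertices: roomVertices, headingErrorDegrees: 4)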
Alternatively, placing QR codes on a wall (printer paper + scotch tape) and using something like Vuforia can help you avoid this orientation problem altogether (you would get the QR code's orientation, which would match that of the wall).
You can also simplify the scanned mesh and convert it to planes. That way you can remember simpler objects instead of the raw spatial mapping mesh. (Search for the SurfaceToPlanes script in the Holographic Academy tutorials).
Scanning, the first layer (the HoloLens trying to reason about the environment), is an unstoppable process. There is no API for starting or stopping it, and as far as I know that process also slowly consumes more and more memory. The only things you can do are deleting space (i.e. deleting holograms) or covering the sensors. But that's OS/hardware level, not app level, which is presumably what you want.
Layer two, which is probably what you are talking about, is starting and stopping the spatial reconstruction process, where that raw spatial data is processed into a low-poly mesh (aka spatial mapping). This process can be started or stopped, for example through Unity's SpatialMappingCollider and SpatialMappingRenderer components if you use Unity.
Finally, the third level is extracting objects/segments from that spatial mapping mesh into primitives, like that SurfaceToPlanes script. That step you can also fully control in terms of when it runs.
There has been great confusion, especially due to the renaming sprees in the MixedRealityToolkit (overuse of the word "Scanning") and Unity (SpatialAnchor to WorldAnchor, etc.), and misleading tutorials using a lot of colloquialisms instead of crisp terminology.
Theory aside: if you want the HoloLens to think of your entire building as one continuous space in terms of the first layer, you're out of luck. It was designed for a living room, and there is a lot of voodoo involved in making it work stably in facilities of 30x30 meters. You probably want to rely on disjointed "islands" with specific detection anchors to identify where you are, or rely on markers and coordinates relative to them.
Cheers

Modeling a Physical Place inside an iPhone Application

I need to find a way to model a physical place inside an iPhone application. For example, I want to be able to take images of a restaurant and then use some tool or programming API to model this restaurant as a 3D place and let the user navigate and explore the place and its rooms.
I have thought about HTML5 inside a web view, but I don't think WebGL is compatible with the iPhone web view (Safari engine).
Can you please recommend a method, API, commercial library or anything else to help me achieve this task?
First, you need to be able to display 3D models on the iPhone. One of the most popular 3D engines is Unity3D:
http://unity3d.com/
It is extremely easy to start playing with Unity3D. There is even a free license with limited features:
http://unity3d.com/unity/licenses
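If you would rather stay native than embed Unity, SceneKit (my suggestion, not part of the answer above) can also display a 3D model and gives you camera navigation for free. A minimal sketch, assuming a "restaurant.scn" model shipped in the app bundle:

    import UIKit
    import SceneKit

    // Show a 3D model of the place and let the user orbit/zoom around it.
    // "restaurant.scn" is a placeholder asset name.
    final class PlaceViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()

            let sceneView = SCNView(frame: view.bounds)
            sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            view.addSubview(sceneView)

            sceneView.scene = SCNScene(named: "restaurant.scn")
            sceneView.allowsCameraControl = true      // pinch/zoom/orbit for free
            sceneView.autoenablesDefaultLighting = true
        }
    }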
Then you need to reconstruct a 3D model from the pictures. This is not a trivial problem, so it is better if you know some computer vision. You can try playing with OpenCV:
http://opencv.willowgarage.com/wiki/
Best regards.
Actually, Nuke from The Foundry has a decent start on the future of creating computer models from images.
Basically, it takes a high-contrast point and tracks it through successive frames. Given hundreds or thousands of tracked points, the next step is to calculate the perspective change between points.
Say two points are a known pixel distance apart at time zero, and some time later they are a different distance apart. This change in distance could be due to a bad tracking point. But assuming the two points are tracked perfectly, the change could be caused by the camera moving laterally or rotationally. In real space, a point further away from you shows a different perspective shift than a closer point; this perspective change is a mathematical certainty.
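A quick numeric illustration of that certainty: under a simple pinhole model (ignoring rotation), a sideways camera move of b meters shifts a point at depth Z by roughly f*b/Z pixels, where f is the focal length in pixels; the numbers below are made up:

    // Pinhole-camera parallax: pixel shift is roughly focalLengthPixels * baseline / depth.
    func parallaxShift(focalLengthPixels: Double, baselineMeters: Double, depthMeters: Double) -> Double {
        return focalLengthPixels * baselineMeters / depthMeters
    }

    // A 0.1 m sideways camera move, with a focal length of 1000 pixels:
    // parallaxShift(focalLengthPixels: 1000, baselineMeters: 0.1, depthMeters: 2)   // point 2 m away:  ~50 px
    // parallaxShift(focalLengthPixels: 1000, baselineMeters: 0.1, depthMeters: 20)  // point 20 m away: ~5 px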
Initially, the tracking is typically used to refilm a piece of footage to stabilize it. But the data the software produces while analyzing the footage can be saved; it is often called a point cloud. Clusters of many close points that track very similarly are usually parts of the same surface, so a model can be built from them.
But my friend, we are still barbarians compared with the speed and software that could do that perfectly. Otherwise all the CG artists out there would have nothing left to model in Maya except fantasy monsters and spaceships that don't exist yet...

iOS image comparison

I am just doing some research into image processing and would appreciate it if someone could point me in the right direction. I want to compare image 'A', which is a picture of a person's face, with images stored in a database (B, C, D, E, etc.), which are also pictures of faces. I want to compare them to see if person 'A' is already in the database.
Several questions:
1. How is face recognition comparison usually done? (Do you extract features, e.g. eyes/mouth, and compare them to the other images?)
2. Are there prebuilt libraries that can do the comparison between images, or do I need to write my own algorithm?
3. Where can I start with this? (I would appreciate some references/reading material.)
Yes, you identify, extract and quantify various aspects of human faces, such as the distance between the pupils, the width of the mouth, the percentage of head height at which the tip of the nose sits, etc.
There is a company, Luxand, which makes software to do this, and I think they license it. Last time I looked (2009?) they didn't have an Objective-C library. They do have an app that claims to merge faces from photographs, so you can see what the offspring of any two people would look like, but it is very cheesy, with lots of hard-coded faces. (If you cross a dog with a teapot, you get the same baby face as from crossing two real faces.)
AFAIK, there is nothing in the iOS SDK that does this.
I would just Google "face recognition" and start reading. Good luck.
I would go with compiling OpenCV for the iPhone ( http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/ ) and then implementing one of the classical approaches to face recognition, like eigenfaces ( http://www.shervinemami.info/faceRecognition.html ).
But don't expect miracles: the accuracy will be low, and the app will be slow.
Also, when you say face recognition is difficult: doesn't the first link show how easy it is to detect faces in a picture?
The face detection from the first link only detects faces. It just checks whether there is a face in the image, which you can then pass as input to the recognition algorithm.
Face recognition is very difficult: you need to extract some kind of "features" and perform some measurements... iPhone hardware isn't very well suited for this job.
2. Yes, you can check here
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
for a tutorial and here
http://maniacdev.com/2011/12/open-source-library-for-adding-easy-face-to-your-ios-app-with-the-free-face-com-api/
for a free web service (a minimal Core Image sketch follows below).
3. I suggest Google Scholar (http://scholar.google.it/scholar?q=face+recognition&hl=it&btnG=Cerca&lr=), but I think that if you want to write your own algorithm you will need a lot of spare time :)
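To show the detection half concretely (detection only, not recognition), here is a minimal Swift sketch along the lines of the Core Image tutorial linked above; the function name and the idea of passing the crops on to a recogniser are just illustrative:

    import UIKit
    import CoreImage

    // Detect face bounding boxes in a UIImage with Core Image (available since iOS 5).
    // This only finds faces; recognising whose face it is needs a separate step.
    func detectFaces(in image: UIImage) -> [CGRect] {
        guard let ciImage = CIImage(image: image),
              let detector = CIDetector(ofType: CIDetectorTypeFace,
                                        context: nil,
                                        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]) else {
            return []
        }
        return detector.features(in: ciImage).map { $0.bounds }
    }

    // let faceRects = detectFaces(in: somePhoto)
    // An empty array means no face was found; otherwise crop the rects and feed
    // them to whatever recognition step (eigenfaces, a web service, ...) you pick.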

Static maps with routing on iOS

Is there a way to have static maps on the iPhone, with either MapKit or a third-party framework? By this I mean a fixed area of, say, 5 square miles, which can be zoomed/panned etc., but which doesn't require an internet connection to load the map.
Additionally, is it possible to get route directions, and draw them on the map?
You can of course always roll your own solution with CATiledLayer if the area you want to display is that small, but it's probably better and easier to have a look at mapping frameworks like MapBox (http://mapbox.com/blog/introducing-mapbox-ios-sdk/), which provides offline support for iOS.
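If you do go the CATiledLayer route, the sketch below shows the basic shape: a UIView backed by a CATiledLayer that draws pre-rendered tile images shipped with the app, so no network is needed. The 256-pixel tile size and the tile file naming scheme are assumptions of mine, not anything MapBox- or MapKit-specific.

    import UIKit

    // A view backed by CATiledLayer that draws map tiles bundled with the app,
    // so no network connection is needed. The tile naming scheme is made up.
    final class OfflineTileView: UIView {
        override class var layerClass: AnyClass { CATiledLayer.self }

        override init(frame: CGRect) {
            super.init(frame: frame)
            (layer as! CATiledLayer).tileSize = CGSize(width: 256, height: 256)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func draw(_ rect: CGRect) {
            // CATiledLayer calls draw(_:) once per visible tile rect.
            let tileSize: CGFloat = 256
            let col = Int(rect.origin.x / tileSize)
            let row = Int(rect.origin.y / tileSize)
            UIImage(named: "tile_\(col)_\(row).png")?.draw(in: rect)
        }
    }

Embed it in a UIScrollView to get panning and zooming; for multiple zoom levels you would also set levelsOfDetail on the layer and pick tiles according to the current scale, which is omitted here.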
The MapKit framework doesn't currently offer offline maps.
It is possible to define an area on the map and lock the user into that area, but an internet connection is still required.
Perhaps a more direct way to do what you want is to download a static image of the zone you are interested in and cache it, using that image of the map area to zoom and pan around in. Of course this would require an initial internet connection, but that is not really such an obstacle; after all, the user must have a connection to download your application in the first place.
You could also ship this image directly in your application's bundle, but you haven't really told us enough to conclude whether that option is feasible.
As for routing, it's also not currently supported. You could, however, retrieve a list of waypoints from point A to B directly from the Google Maps remote API - note that you cannot do this with the MapKit framework.
With these waypoints (which contain coordinates) and the current zoom level, you can plot the points and draw lines between each one to implement your own routing. This gets a little ugly, or maybe better to say "laggy", when the user zooms in and out, because you only know how to redraw your route once the user finishes zooming (lifts their fingers from the screen). But, like most things in programming, there is a solution to this, which I feel is out of scope for this question.
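To illustrate the drawing step with current MapKit (the waypoint list itself still has to come from whichever directions service you use), here is a minimal Swift sketch that renders the waypoints as a polyline overlay; the names are placeholders:

    import MapKit

    // Draw a route through a list of waypoint coordinates as a polyline overlay.
    func showRoute(on mapView: MKMapView, waypoints: [CLLocationCoordinate2D]) {
        let polyline = MKPolyline(coordinates: waypoints, count: waypoints.count)
        mapView.addOverlay(polyline)
    }

    // The map view's delegate supplies the renderer that actually draws the line.
    final class RouteMapDelegate: NSObject, MKMapViewDelegate {
        func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
            guard let polyline = overlay as? MKPolyline else { return MKOverlayRenderer(overlay: overlay) }
            let renderer = MKPolylineRenderer(polyline: polyline)
            renderer.strokeColor = .systemBlue
            renderer.lineWidth = 4
            return renderer
        }
    }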
I hope this helps.