iOS 3D indoor navigation application - iPhone

What are the steps needed to create an indoor 3D navigation application? I have some AutoCAD files for a building, and it would not be a problem to create a 3D model using 3ds Max. Inertial sensors will be used for localization, but after getting the model, how can I integrate it into iOS and create the visualization?

Depending on what your complete requirements are, I believe you will need OpenGL programming to create that 3D environment. For navigation, I would suggest using GPS to determine your location rather than inertial sensors, or perhaps a mix of both so as to reduce your errors. I am guessing you want to be able to locate yourself in a building where GPS, Wi-Fi, or 3G signals are not available; relying on inertial sensors alone would definitely be error prone.
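If writing raw OpenGL feels heavyweight, Apple's SceneKit framework can display a COLLADA (.dae) export from 3ds Max directly. A minimal Swift sketch, assuming the model has been exported as "IndoorModel.dae" (a placeholder name) and added to the app bundle:

    import UIKit
    import SceneKit

    class BuildingViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()

            let sceneView = SCNView(frame: view.bounds)
            view.addSubview(sceneView)

            // "IndoorModel.dae" is a placeholder: a COLLADA export of the
            // 3ds Max building model, added to the app bundle.
            if let scene = SCNScene(named: "IndoorModel.dae") {
                sceneView.scene = scene
            }
            sceneView.allowsCameraControl = true        // pinch/drag to orbit the model
            sceneView.autoenablesDefaultLighting = true
        }
    }

From there, the position estimate coming out of your inertial sensors could drive an SCNNode that marks the user's location inside the model.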

Related

Is it possible to use Reality Composer for detecting 3D assets in the real world?

I'm trying to find a way to create an .arobject for detecting 3D assets in the real world. The only solution I've found is to use Apple's scanning application, but I wonder whether there is a way to use Reality Composer to achieve this. Since Reality Composer can detect images and anchors, maybe this is possible.
You can indeed use the iOS/iPadOS version of Reality Composer to create an .arobject and then recognize the corresponding real-world object via AnchorEntity(.object).
Take into consideration that you can't scan cylindrical or moving real-world objects!
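A minimal RealityKit sketch of the recognition side, assuming the .arobject has been added to an AR resource group in the app's asset catalog (the group name "AR Resources" and object name "scannedObject" below are placeholders):

    import UIKit
    import RealityKit

    class ObjectDetectionViewController: UIViewController {
        @IBOutlet var arView: ARView!   // an ARView wired up in a storyboard

        override func viewDidLoad() {
            super.viewDidLoad()

            // Anchor that activates once the scanned object is recognized.
            // "AR Resources" and "scannedObject" are placeholder names for the
            // asset-catalog resource group and the .arobject it contains.
            let anchor = AnchorEntity(.object(group: "AR Resources", name: "scannedObject"))

            // Attach a simple marker so the recognition is visible.
            let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                     materials: [SimpleMaterial(color: .green, isMetallic: false)])
            anchor.addChild(marker)
            arView.scene.addAnchor(anchor)
        }
    }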

What to use for an app with an anatomy model?

I want to develop an Android application that contains a 3D model. The app will be a pain diary with a 3D model of the human anatomy, which should be able to rotate and zoom in and out. The user should be able to tap on a point, or drag over an area, on the model to locate their pain.
Since I have never made an application like this before, I would be glad if you could give me some advice on which frameworks to consider for this kind of project.
It would be awesome if you could recommend two or three frameworks, because I have to write a paper for this project in which I have to discuss a few different frameworks.
Thanks in advance.

Making a trackable human body - Oculus Rift

I'm very new to this. During my research for my PhD thesis I found a way to solve a problem, and for that I need to move my lab testing into a virtual environment. I have an Oculus Rift and an OPTOTRAK system that allows me to motion-capture a full body for VR (in theory). My question is: can someone point me in the right direction as to what materials I should check out to start working on such a project? I have a background in programming, so I just need a nudge in the right direction (or a pointer to a similar project).
https://www.researchgate.net/publication/301721674_Insert_Your_Own_Body_in_the_Oculus_Rift_to_Improve_Proprioception - I want to make something like this :)
Thanks a lot
Nice challenge. How accurate, and how close to real time, does the image of your body in the Oculus Rift world need to be? My two (or three) cents:
A selfie-based approach would be the most comfortable for the user: there's an external camera somewhere, and the software transforms your image to reflect the correct perspective, as you would see your body through the Oculus at any moment. This is not trivial and requires quite expensive vision software. To make it work through 360 degrees, there would have to be more than one camera watching each individual Oculus user in a room!
An indirect approach could be easier: model your body and only show its dynamics. There are Wii-style electronics in bracelets and on/in special user clothing, involving multiple tilt and acceleration sensors. Together they form a cluster of "body state" sensor information to be read by the modelling software. No camera is needed, and the software is not that complicated if you use a skeleton model.
Or combine the two: use the camera for the rendering texture and drive the skeleton model with the dynamics reported by the clothing sensors. Maybe deep learning could be applied; in conjunction with a large number of tilt sensors in the clothing, a variety of body-movement patterns could be trained and connected to the rendering in the Oculus. This would need the same hardware as the previous solution, but the software could be simpler, and your body would look properly textured and move less "mechanistically". Some research would be needed to find the right deep-learning strategy.
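To make the skeleton-model idea concrete, here is a minimal, hypothetical Swift sketch of the data flow; every type and joint name is invented for illustration, and a real system would read the orientations from the OPTOTRAK or clothing-sensor SDK:

    import simd

    // One wearable sensor reports an orientation for the body segment it sits on.
    struct SensorReading {
        let jointName: String
        let orientation: simd_quatf   // fused tilt/acceleration estimate
    }

    // A joint in a simple skeleton model.
    final class Joint {
        let name: String
        var orientation = simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0))
        var children: [Joint] = []
        init(name: String, children: [Joint] = []) {
            self.name = name
            self.children = children
        }
    }

    // Apply the latest sensor cluster to the skeleton, once per frame.
    func update(skeleton root: Joint, with readings: [SensorReading]) {
        let byName = Dictionary(uniqueKeysWithValues: readings.map { ($0.jointName, $0.orientation) })
        var stack = [root]
        while let joint = stack.popLast() {
            if let q = byName[joint.name] { joint.orientation = q }
            stack.append(contentsOf: joint.children)
        }
    }

    // Example: a two-joint arm driven by one sensor reading.
    let arm = Joint(name: "shoulder", children: [Joint(name: "elbow")])
    update(skeleton: arm, with: [SensorReading(jointName: "elbow",
                                               orientation: simd_quatf(angle: .pi / 2,
                                                                       axis: SIMD3<Float>(1, 0, 0)))])

The renderer would then pose the body mesh from these joint orientations each frame.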

Is there a Unity plug-in that would allow you to generate a 3d model using your webcam?

I've looked into Metaio, which can do facial 3D reconstruction;
video here: https://www.youtube.com/watch?v=Gq_YwW4KSjU
but I'm not looking to do that. I simply want the user to be able to scan in a small, simple object and have a 3D model created from it. I don't need it textured or anything. As far as I can tell, Metaio cannot do what I'm looking for, or at least I can't find the documentation for it.
Since you are targeting mobile, you would have to take multiple pictures from different angles and use an approach like the one in this CSAIL paper.
Steps
For finding the keypoints, I would use FAST or a method based on the Laplacian of Gaussian. Other options include SURF and SIFT.
Once you have matched the keypoints across images, use triangulation to find where the points lie in 3D (see the sketch after this list).
With all of the points, create a point cloud. In Unity, I would recommend doing something similar to this project, which used particle systems as the points.
You now have a 3D reconstruction of the object!
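As a concrete illustration of the triangulation step, here is the math for the simplest case, a calibrated and rectified stereo pair, in a minimal Swift sketch (the math ports directly to C# for Unity; all camera parameters below are made-up placeholder values):

    import Foundation

    // Intrinsics of a calibrated, rectified stereo rig (placeholder values).
    let focalLength = 800.0          // in pixels
    let baseline = 0.12              // distance between the two cameras, in meters
    let cx = 320.0, cy = 240.0       // principal point, in pixels

    // A matched keypoint: the same scene point seen in the left and right image.
    struct Match { let xLeft, xRight, y: Double }

    // Rectified stereo triangulation: depth Z = f * B / disparity,
    // then back-project X and Y through the pinhole model.
    func triangulate(_ m: Match) -> (x: Double, y: Double, z: Double)? {
        let disparity = m.xLeft - m.xRight
        guard disparity > 0 else { return nil }     // point at infinity or bad match
        let z = focalLength * baseline / disparity
        let x = (m.xLeft - cx) * z / focalLength
        let y = (m.y - cy) * z / focalLength
        return (x, y, z)
    }

    // Turn all matches into a point cloud.
    let matches = [Match(xLeft: 400, xRight: 380, y: 250)]
    let cloud = matches.compactMap(triangulate)
    print(cloud)   // [(x: 0.48, y: 0.06, z: 4.8)]

For unrectified views you would instead solve the full two-view triangulation, which OpenCV implements as cv::triangulatePoints.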
Now, in implementing each of these steps, you could reinvent the wheel or use C++ native plugins in Unity. That enables you to use OpenCV, which already implements many of these operations (SURF, SIFT, and possibly even some 3D-reconstruction classes/methods that use stereo calibration*).
That all being said, the Android Computer Vision Plugin (also apparently called "Starry Night") seems to have these capabilities. However, in version 1.0, only PrimeSense sensors are supported. See the description of the plugin:**
Starry Night is an easy to use Unity plugin that provides high-level 3D computer vision processing functions that allow applications to interact with the real world. Version 1.0 provides SLAM (Simultaneous Localization and Mapping) functions which can be used for 3D reconstruction, augmented reality, robot controls, and many other applications. Starry Night will interface to any type of 3D sensor or stereo camera. However, version 1.0 will interface only to a PrimeSense Carmine sensor.
*Note: that tutorial is in MATLAB, but I think the overview section gives a good understanding of stereo calibration.
**As of May 12th, 2014.

How do you make maps in Flash CS4 and then use them in iPhone games?

I was watching a video showing an ngmoco Rolando 2 level designer.
He seemed to be using Flash CS4 to make the maps.
Would anyone know how I would go about doing this?
Just in case you need to know, I am an intermediate programmer, I know both Java and Objective-C pretty well.
I don't know if any of what I'm about to say is true, but hopefully my input will be helpful.
It could simply be that the levels used in Rolando are vector graphic images, and the designer you saw in the video preferred Flash CS4 as his vector editor.
Again, I could be wrong here.
It's also possible that the game has some code that decodes Flash files into usable levels somehow, assuming Apple would permit this under their "no interpreters" rule.
My final thought, which in my opinion is the least likely, is that the game may be a Flash game compiled to run on the iPhone using Adobe's beta Flash-to-iPhone SDK. I say this is the least likely because I believe ngmoco haven't used this method in any of their previous games, and I don't see why they would suddenly resort to it.
In my game Hudriks I also used Flash to design levels and even to make some animations.
There is no ready-made tool for this, so you need to develop one yourself around the requirements of your game.
First of all, it depends on your game and what exactly you need to design in Flash: just placing images, defining their parameters (bonus values), ground paths, and so on.
After that, it is important to define the structure of your Flash file: how you store the different levels (in symbols or scenes) and which layers each level has (boundaries, objects, obstacles, etc.).
If you need extra information for your objects in Flash, you will most probably have to develop a custom panel in Flash to set up all the parameters. I used setPersistentData to store information on Flash objects.
After that, you need to develop a script that walks through all the objects in your symbols and extracts the basic information, such as transforms, plus your custom data. I faced some problems getting correct transformation values, especially for rotation, and had to add some extra heuristics.
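On the iPhone side, the exporter script can write whatever intermediate format suits you. A hypothetical minimal sketch in Swift, assuming the script emits JSON (every field name here is invented for illustration):

    import Foundation

    // One placed object exported from a Flash level: its symbol name,
    // transform, and any custom data attached through the authoring panel.
    struct LevelObject: Codable {
        let symbol: String
        let x, y: Double
        let rotation: Double       // degrees
        let bonusValue: Int?       // example of custom per-object data
    }

    struct Level: Codable {
        let name: String
        let objects: [LevelObject]
    }

    // Parse a level emitted by the Flash-side export script.
    let json = """
    { "name": "level1",
      "objects": [ { "symbol": "coin", "x": 120, "y": 48, "rotation": 0, "bonusValue": 10 } ] }
    """.data(using: .utf8)!

    let level = try! JSONDecoder().decode(Level.self, from: json)
    print(level.objects[0].symbol)   // "coin"

Keeping the intermediate format this simple means the Flash-side script and the game stay decoupled: either can change as long as the JSON contract holds.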
For animations, I just used the motion tween data. My animation framework has a simple implementation supporting basic parameters (transform and alpha) and only linear curves. Fortunately, Flash CS4 has a copyMotion function that gives you the animation as XML; you just need to parse it or convert it to your own format.