Wavefront OBJ converted from LightWave taking forever to render on iPhone

I'm working on a project where I need to render a 3D human body on an iOS device. The 3D object was built in Adobe LightWave and is 7.4MB. I opened it in Blender and exported it as an OBJ/MTL pair, which are 5.5MB and 4KB, respectively. Using Jeff LaMarche's Wavefront Loader (linked below) as a starting point to figure out OpenGL ES and check out performance and whatnot, I stuck the object in there (in place of an OBJ/MTL pair he'd been using) and ran it in the simulator. Of course, it crashed on startup, so I decided to performSelectorInBackground it. A half hour later, it's still loading.
I'm just guessing that the file is way too detailed to draw with any kind of performance expectation on a device with a 600MHz processor. Is there a way to lower the quality of these files somewhat easily? Or, if performance issues have arisen with this particular loader, could somebody enlighten me?
Thanks,
Will
http://iphonedevelopment.blogspot.com/2009/03/wavefront-obj-loader-open-sourced-to.html

Will,
I don't know if I can solve your problem, but I may be able to point you in the right direction. I did a project for a client loading a 3D model exported from Blender using the SIO2 3D engine.
Anyway, at that time, I had trouble with the 3D engine taking a long time to load the model. I found that reducing the number of polygons was very important - if it is a high-quality model, you will almost certainly need to do so.
Blender has a function for this - the Decimate modifier, or something like that. Blender should also report to you how many polygons and surfaces are in use, so if it's more than, say, 20,000, you're likely in for performance issues.
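If you want a quick sanity check without opening Blender at all, counting the face lines in the exported OBJ gives you roughly the same ballpark figure. This is just a standalone desktop sketch of my own, not part of LaMarche's loader:

```c
/* Rough OBJ complexity check: count vertex ("v ") and face ("f ") lines.
 * Standalone desktop sketch, not part of the Wavefront Loader project. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.obj\n", argv[0]);
        return 1;
    }

    FILE *file = fopen(argv[1], "r");
    if (!file) {
        perror("fopen");
        return 1;
    }

    char line[1024];
    long vertices = 0, faces = 0;
    while (fgets(line, sizeof line, file)) {
        if (strncmp(line, "v ", 2) == 0)
            vertices++;
        else if (strncmp(line, "f ", 2) == 0)
            faces++;
    }
    fclose(file);

    printf("%ld vertices, %ld faces\n", vertices, faces);
    return 0;
}
```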

Related

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images the way Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stay stationary you could scan an ADF and should get good-quality tracking. If all the objects stay still you should see a little drift, but not too much.
However, if you are moving those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Best practice to increase frames per second for my Daydream VR application [duplicate]

I have been building a game for VR using Unity3D. It has only low-poly models and the file size is less than 40 MB, yet the game still lags when played on mobile. Please suggest how to improve the performance.
Thank you in advance.
In order to improve performance in VR for mobile you have to optimize everything as best you can. You should keep some of these variables in mind:
Graphics Side
- Number of polygons in the scene
- How many light sources you have
Programming Side
- How much work your code is doing, and whether it is doing it efficiently
The programming part can include problems within the physics system, as well as logic problems that can decrease overall performance through extra computation.
My advice is to learn about the Profiler that Unity offers; with it you can actually observe how much work your code is doing and exactly where your bottleneck is. This video can also be useful.
Of course, a solution could be to implement your code following design standards, like design patterns and software architecture (depending on the size of the project).
I hope it can be useful for you!
What I found from developing and launching a VR game is covered by some of the issues below.
Number of polygons is usually the first thing to check, even though your models are low poly. For example, I looked at Synty models in the Unity store and some of them were over 1k polygons for a bag and 7k for a character model. This seriously reduces the number of objects you can have if you want to target a maximum of 50,000 polygons per eye.
With some models, you can use Blender and the Decimate tool to reduce the polygon count pretty easily. From there I would use LODs to reduce their count further based on distance.
Use occlusion culling (pro version only)
Set your camera's draw distance to maybe 100 instead of the default
Use mobile shaders and be careful using some of the standard shaders, as they are expensive. Transparent shaders also become expensive because they cause overdraw.
Batch your textures and make them static if possible
Don't use dynamic shadows on lighting but instead bake your lights
Try to avoid using physics, as this becomes expensive; instead, raycast to trigger events or shoot weapons.
Run profiler often and check for any bottlenecks (pro version only)
Reduce the number of particle effects and tone down their settings
Character bones can also cause issues, so remove as many as possible
There is also your code to look at as mentioned by Manujamming
Set the quality setting to Low in the inspector to gain the best performance.
Could you provide a screenshot of your game scene?
I hope this makes sense.
Best of luck!

SCNBox with SCNTube through the middle

I have been digging around trying to find a way to show a game board of sorts.
It is basically a square board with a round hole in the middle. I am able to render the SCNBox and the SCNTube, but I would like the area of the box where the SCNTube sits to be transparent, so you can see through the game board at that spot. I can't seem to find an example of this anywhere. Any help would be much appreciated. I am hoping that I am just missing something very simple, but this is my first time using SceneKit.
Thank you.
Before Unreal Engine 4 (UDK and prior), Epic's modelling space was subtractive: a filled block was your game world and its extents. From inside this block you took chunks out to create space for players to run around in and shoot each other. All's fair in love and war.
I'm telling you this because it's a good example of how contrived 3D modelling is compared to real world scenarios, and should (hopefully) put you at sufficient unease to digest what follows.
This approach of carving out of a finite block is still in Unreal Engine 4 and popular with older users, but it now defaults to an open, infinite world into which things are added. Most new users gravitate towards building into an infinite space of nothingness rather than carving space out of a solid, finite block.
Everything about 3D modelling is virtual, and virtually impossible to relate to the real world. Instead of thinking in terms of how things could be done if objects were real and literal, you need to think in terms of the limitations (and there are many) of geometry definitions as used in most 3D modelling and game engines.
The programming equivalent of this mental gymnastics is going from the concept of classes and objects to their realities within languages and frameworks. On the one hand the ideas and their ideals are wonderful, and on the other the realities are a bleak reminder that programming languages haven't really progressed very far, at all.
3D modelling is exactly like this. It's not much further along than it was decades ago, and is still using archaic ways to solve many of its original problems.
Cutting a nice, clean, efficient round hole in a cube is one such original problem.
A very simple shape is being intersected and cut by a shape with the potential for infinite complexity. What should happen? Should the simple become complex or the complex become simple? How to make the most graceful transition between the two?
That's the problem you're facing: a cube is a simple geometric shape, easily defined by minimal line segments. A cylinder introduces infinite possibilities for line segmentation around its circumference.
So somewhere along the lines of development, the architects of 3D modelling had to come up with a way to make these contrasting line complexities play well together for lightweight presentation on limited hardware. Their solution, in most cases, is a hybrid and a disaster of user operability, but masterful in its geometric efficiency: Polygon modelling, UVW unwrapping and subdivision!
All of which means that if you want to achieve this in the best way possible, with today's tools, for the purposes of Scene Kit, I suggest polygon modelling this board in Maya, for four reasons:
It's got a 30 day free trial.
It works on a Mac.
Its polygon modelling tools are second only to 3ds Max's.
It's easier to learn (for a complete newcomer) than MODO, and miles easier to learn than Blender.
MODO is interesting if you're already skilled in Polygon modelling, but it's so utterly discombobulating if you don't have that prior experience that I'd recommend using just about anything else first. Except Blender. Blender is free, but don't be tempted. It will cost you more in learning time than buying a copy of every other professional 3D app.
In MODO's favour, and the reason I mention it, it does export nicely for Scene Kit. I know that for a fact, but am not yet sure how well Maya exports for Scene Kit.
Which is the next problem you're going to come up against. All COLLADA files are not born equal.
New Maya does have Unity and Unreal export presets, so I presume it's possible to calibrate its COLLADA exports to match the demands of Scene Kit perfectly; I just haven't yet needed to do it. This will (very likely) involve trial and error to get the settings right. It would be nice if Apple would tell us exactly how to configure export from all major 3D apps for Scene Kit, but instead they're giving us the half-baked Model I/O, so we can double the effort of importing artwork.
All context aside (which has largely been to demonstrate that 3D is no simpler nor more refined than using an IDE and frameworks like Xcode and Cocoa), here's the meat and potatoes:
Here's a video on one aspect of how best to make holes; it starts out as you are, with a cube and a cylinder:
https://www.youtube.com/watch?v=zaEv5rio8bk
But it does presume a certain amount of Maya familiarity, some of which you can gain from this rather slow and ponderous examination of two other ways to make holes in cubes:
https://www.youtube.com/watch?v=lvMfoH5Ikrc
Yes, if you're counting, that's 3 ways to make holes. Actually four, because the first video starts with the boolean operation you might have been expecting to be how this could/should be done. In some parallel future we'll have well working boolean geometry operations. We're not there, yet.
Hopefully that same parallel future will offer us a programming language, frameworks and terminology that's not confusing and maintains metaphors long enough to make teaching easy and usage elegant and simple.
I don't know about that long answer, but this can be achieved with boolean subtraction. You create a cube and a cylinder, then subtract the cylinder from the cube. In 3ds Max this is under Compound Objects > Boolean (using subtraction). I guess Maya has a similar function somewhere in the menus.

3D free rotation of object

I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of 360-degree images every 30 degrees around the object, but that would be around 360 images per product. I'd then write the code to handle the matrix that would be required to handle rotation of the object. Seems very excessive, but doable.
2) OpenGL - I have never done any 3d animation using this, though.
We are using LightWave 3D, if that helps.
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.

Skybox OpenGL ES iPhone and iPad

I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it when he's inside a historic building or when he's visiting an archaeological dig. No need for a Doom-like subjective point of view: just a skybox. The application will have a list of points of interest (POIs). Every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad. I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first-person shooter or Google Street View style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane (this and the previous item are sketched in code just after this list)
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters accordingly to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
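To make the two matrix-stack items above concrete, here's a minimal OpenGL ES 1.x sketch of the projection and model-view setup. The function names, field-of-view figure and rotation variables are my own placeholders, not taken from the project described:

```c
/* OpenGL ES 1.x fixed-pipeline sketch; names and constants are placeholders. */
#include <OpenGLES/ES1/gl.h>

/* Perspective projection using the view's aspect ratio (width / height). */
void setupProjection(float aspect)
{
    const float nearZ = 0.1f, farZ = 100.0f, halfWidth = 0.1f;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-halfWidth * aspect, halfWidth * aspect,
               -halfWidth, halfWidth, nearZ, farZ);
}

/* Two-axis, first-person style rotation driven by the touch-tracking code. */
void applyViewRotation(float rotationX, float rotationY)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(rotationX, 1.0f, 0.0f, 0.0f);
    glRotatef(rotationY, 0.0f, 1.0f, 0.0f);
}
```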
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
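In code, those CoreGraphics steps look roughly like this. It's a sketch under my own naming, assumes RGBA output, and omits error handling; it isn't lifted from the project described:

```c
/* CoreGraphics PNG -> raw RGBA pixel sketch; the caller frees the buffer.
 * Note that CoreGraphics' origin is top-left while OpenGL's is bottom-left,
 * so in practice you may also want to flip the context before drawing. */
#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

void *pixelDataFromPNG(const char *path, size_t *width, size_t *height)
{
    CGDataProviderRef provider = CGDataProviderCreateWithFilename(path);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);
    *width  = CGImageGetWidth(image);
    *height = CGImageGetHeight(image);

    /* Memory we allocate ourselves; everything else gets released below. */
    void *pixels = malloc(*width * *height * 4);

    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, *width, *height,
                                                 8, *width * 4, colourSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context,
                       CGRectMake(0, 0, (CGFloat)*width, (CGFloat)*height),
                       image);

    CGContextRelease(context);
    CGColorSpaceRelease(colourSpace);
    CGImageRelease(image);
    CGDataProviderRelease(provider);

    return pixels;  /* ready to hand to glTexImage2D */
}
```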
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
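A sketch of that texture housekeeping, again with my own function name, assuming an RGBA buffer from a loader like the one above:

```c
/* Name allocation, parameters for a single level of detail, and upload. */
#include <OpenGLES/ES1/gl.h>

GLuint createSkyboxTexture(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);

    /* One level of detail only: plain linear filtering, no mipmaps. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Clamp so adjacent skybox faces don't bleed into one another. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return name;
}
```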
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
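As an illustration of those three arrays, here is what a single face of the cube might look like (placeholder names and values of my own, triangles only; the real project repeats this for all six faces):

```c
/* One face of the skybox cube; the face's texture is assumed already bound. */
#include <OpenGLES/ES1/gl.h>

/* Four corners of the +Z face. */
static const GLfloat frontVertices[] = {
    -1.0f, -1.0f, 1.0f,
     1.0f, -1.0f, 1.0f,
     1.0f,  1.0f, 1.0f,
    -1.0f,  1.0f, 1.0f,
};

/* One texture coordinate per vertex, covering the whole image. */
static const GLfloat frontTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

/* Two triangles built from the four vertices above. */
static const GLubyte frontIndices[] = { 0, 1, 2,  0, 2, 3 };

void drawFrontFace(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, frontVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, frontTexCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, frontIndices);
}
```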
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x is the fully programmable pipeline with pixel and vertex shaders, but in ES land the fixed pipeline is completely removed. So if you want one then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders to do 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
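For a feel of how little maths is involved, the inertial part can be as simple as the following sketch. The structure, names and decay constant are mine, not from the project described:

```c
/* Per-frame update: touch deltas drive rotation directly while a finger is
 * down; after release the last velocity decays to give the inertial feel. */
typedef struct {
    float rotationX, rotationY;   /* fed to glRotatef each frame      */
    float velocityX, velocityY;   /* degrees per frame                */
    int   touching;               /* non-zero while a finger is down  */
} ViewRotation;

void viewRotationStep(ViewRotation *view, float touchDeltaX, float touchDeltaY)
{
    if (view->touching) {
        view->velocityX = touchDeltaY * 0.5f;   /* vertical drag -> pitch */
        view->velocityY = touchDeltaX * 0.5f;   /* horizontal drag -> yaw */
    } else {
        view->velocityX *= 0.95f;               /* friction after release */
        view->velocityY *= 0.95f;
    }
    view->rotationX += view->velocityX;
    view->rotationY += view->velocityY;
}
```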
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch. I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D; from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skyboxes (and much more) out of the box, and it should be pretty straightforward to do.
I think you can do this; here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems (I would post direct links but Stack Overflow won't let me):
look at Stack Overflow questions 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported these to the phone as well; I just can't find them now.