I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of images taken every 30 degrees around the object, but that would be around 360 images per product. Then I would write the code to handle the matrix that would be required to handle rotation of the object. Seems very excessive, but doable.
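For concreteness, a minimal sketch of how option 1 might look (the frame naming, the 36-frame count, and the single rotation axis are all my assumptions, not part of the plan above):

```objc
// Sketch only: assumes 36 pre-rendered frames named "product_00.png" ...
// "product_35.png", one per 10 degrees around a single axis.
#import <UIKit/UIKit.h>

@interface SpinViewController : UIViewController {
    UIImageView *frameView;
    NSInteger currentFrame;
}
@end

@implementation SpinViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    frameView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    frameView.image = [UIImage imageNamed:@"product_00.png"];
    [self.view addSubview:frameView];
    // (Balance these allocations with releases in dealloc under manual
    // reference counting.)
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePan:)];
    [self.view addGestureRecognizer:pan];
    [pan release];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // Roughly 8 points of horizontal drag per frame step.
    CGFloat dx = [pan translationInView:self.view].x;
    NSInteger frame = (currentFrame + (NSInteger)(dx / 8.0f)) % 36;
    if (frame < 0) frame += 36;
    frameView.image = [UIImage imageNamed:
        [NSString stringWithFormat:@"product_%02d.png", (int)frame]];
    if (pan.state == UIGestureRecognizerStateEnded) currentFrame = frame;
}

@end
```

Note that this only covers one axis; free rotation about two axes multiplies the frame count quickly, which is what makes the approach feel excessive.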
2) OpenGL - I have never done any 3D animation using it, though.
We are using LightWave 3D, if that helps.
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.
Related
I was wondering, what is the easiest way to create an animated 3D cartoon? I want to create a character which moves and talks. I am not familiar with an easy way to do it; I have used Blender and Unity, but I was wondering if there is an easier way, as I believe that not everyone making 3D cartoons has to dig deep into coding. They surely use some software/GUI to create it.
I am not sure how "easy" is defined for people; something might be easy for me but hard for you. I think it depends on past experience.
If you want to create a custom 3D animated model, you need to model it first, or you can find a 3D model that is similar to yours and edit it. Either way, you are going to use a 3D modeling program such as Blender or 3ds Max; even SketchUp might work.
One easy way, which requires less work
You can check out Adobe Fuse, which lets you select predefined parts to create a 3D humanoid model, and lets you tweak them a bit. After completing it, you can export it to Mixamo, where you can rig your model (you can also select premade models there). They have a lot of animations to choose from. After selecting your animations, you can import the result into Unity.
Animation techniques
For most of them, you will need rigs. Once a model is rigged, you can create animations with those rigs.
This is one of my old works from university (ignore the music, please :D). Before making the animations, I recorded myself at the exact angles of the model (two videos, front and side) and used the footage as reference while animating. Then I set key frames, and Blender did the rest (interpolating between the keyframes). This is one of the techniques.
--EDIT--
This might give you more information about the video referencing.
Bigger-budget projects mostly use mo-cap. Here is one example for facial animation.
There are also cheaper infrared cameras, like the Kinect, which let you do mo-cap, but the results are not that promising. You can find more than a few supporting assets in the Unity Asset Store. I suggest you use at least two cameras if you choose this route.
There might be more ways to create 3D model animation; I am not an animator, but those are the ones I know.
Hope this helps! Cheers!
I would like to know what kind of advantages I get from using Core Graphics instead of OpenGL ES. My main question is based on these points:
Creating simple View animations.
Creating some visual appealing objects (Graphics like Core Plot for instance, Animated Objects, etc).
Time consuming (both learning and implementing)
Simple 2D Games
Complex 2D Games
3D Games
Code maintenance and also cleaner code.
Easier integration with other UI elements.
Thanks.
First, I want to clear up a little terminology here. When people talk about Core Graphics, they generally are referring to Quartz 2D drawing, which is a 2-D vector-based drawing API. It is used to draw out vector elements either to the screen or to offscreen contexts like PDFs. Core Animation is responsible for animation, layout, and some limited 3-D effects involving rectangular layers and UI elements. OpenGL ES is a lower-level API for talking with the graphics hardware on iOS devices for both 2-D and 3-D drawing.
You're asking a lot in your question, and the judgment on what's best in each scenario is subjective and completely up to the developer and their particular needs. I can, however, provide a few general tips.
In general, a recommendation you'll see in Apple's documentation and in presentations by engineers is that you're best off using the highest level of abstraction that solves your particular problem.
If you need to just draw a 2-D user interface, the first thing you should try is to implement this using Apple's provided UIKit elements. If they don't have the capability you need, make custom UIViews. If you are designing Mac-iOS cross-platform code (like in the Core Plot framework), you might drop down to using custom Core Animation CALayers. Each step down in this process requires you to write more code to handle things that the level above did for you.
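To make that ladder concrete, here is a trivial example of the second rung: a custom UIView doing its own Quartz drawing. The view and what it draws are invented purely for illustration:

```objc
#import <UIKit/UIKit.h>

// A custom UIView that drops down one level to Quartz 2D for its drawing.
@interface DialView : UIView
@end

@implementation DialView

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Vector drawing: a stroked circle inset from the view's bounds.
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    CGContextSetLineWidth(context, 4.0f);
    CGContextStrokeEllipseInRect(context,
        CGRectInset(self.bounds, 10.0f, 10.0f));
}

@end
```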
You can do a surprising amount of stuff with Core Animation, with pretty good performance. This isn't just limited to 2-D animations, but can extend into some simple 3-D work as well.
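For instance, a plain CALayer can be rotated in perspective with a CATransform3D, no OpenGL involved. This is a sketch with arbitrary values, assumed to run inside a view controller method (and requiring the QuartzCore framework):

```objc
// A standalone CALayer rotated in perspective about the Y axis.
CALayer *card = [CALayer layer];
card.frame = CGRectMake(40.0f, 40.0f, 200.0f, 120.0f);
card.backgroundColor = [UIColor orangeColor].CGColor;
[self.view.layer addSublayer:card];

CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0f / 500.0f;                                // perspective term
t = CATransform3DRotate(t, M_PI_4, 0.0f, 1.0f, 0.0f);  // 45 degrees about Y

// Standalone layers animate property changes implicitly, so this one
// line both applies the rotation and animates it.
card.transform = t;
```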
OpenGL ES is underneath the drawing of everything you see on the screen for an iOS device, although this is not exposed to you. As such, it provides the least abstraction for onscreen rendering, and requires you to write the most code to get something done. However, it can be necessary in situations where you want to extract the most performance from 2-D display (say, in an action game) or to render true 3-D objects and environments.
Again, I tend to recommend that people start at the highest level of abstraction when writing an application, and only drop down when they find that they cannot do something or the performance is not within the specification they are trying to hit. Fewer lines of code makes applications easier to write, debug, and maintain.
That said, there are some nice frameworks that have developed around abstracting away OpenGL ES, such as cocos2D and Unity 3D, which might make working with OpenGL ES easier in many situations. For each case, you'll need to evaluate what makes sense for the particular needs of your application.
Basically, use OpenGL if you are making a game. Otherwise, use CoreGraphics. CoreGraphics lets you do simple things embedded in your normal UI code.
Creating simple View animations.
-> CG
Creating some visual appealing objects (Graphics like Core Plot for instance, Animated Objects, etc).
-> CG
Time consuming (both learning and implementing)
-> OpenGL and CG are both kind of tough at first.
Simple 2D Games
-> OpenGL
Complex 2D Games
-> OpenGL
3D Games
-> OpenGL
Code maintenance and also cleaner code.
-> Irrelevant
I need to create a virtual tour tool for iOS. It's an archaeological application: the user could open it while inside a historic building or while visiting an archaeological dig. No need for a Doom-like subjective point of view: just a skybox. The application will have a list of points of interest (POIs), and every POI will have its own skybox.
I thought that I could use OpenGL ES to create a sort of textured skybox that could be driven/rotated by touches. The textures are high-resolution PNG photos.
It's a funded project and I have 4 months.
Where do I have to go to learn how to develop it? Do I have to purchase a book? Which one?
I have just moderate Objective-C and Cocoa Touch skills, since I've built just one application for the iPad, and I have zero knowledge of OpenGL ES.
Since I know OpenGL ES quite well, I had a go at a demo project, doing much of what you describe. The specific intention was to do everything in the simplest way available under OpenGL ES as long as the performance was good enough.
Starting from the OpenGL template that Apple supply, I have written one new class with a heavily commented implementation file 122 lines long that loads PNG images as textures. I've modified the sample view controller to draw a skybox as required and to respond to touches with a version of the normal iPhone inertial scrolling, which has meant writing less than 200 lines of (also commented) code.
To achieve this I needed to know:
the CoreGraphics means for getting pixel data from a PNG
how to set up the PROJECTION stack to get a perspective projection with the correct aspect ratio (this step and the MODELVIEW step below are sketched in code just after this list)
how to manipulate the MODELVIEW stack to ensure two-axis rotation (first person shooter or Google StreetView style) of the scene according to member variables and to ensure that the cube geometry I defined doesn't visibly intersect the near clip plane
how to specify vertex locations and texture coordinates to OpenGL
how to specify the triangles OpenGL should construct between vertices
how to set the OpenGL texture parameters so as to supply only one level of detail for the texture
how to track a touch to manipulate the member variables dictating rotation, including a tiny bit of mechanics to give an inertial rotation
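To give a flavor of the PROJECTION and MODELVIEW items above, here's a sketch under OpenGL ES 1.x; the function name and the rotation variable names are mine, not necessarily what the project uses:

```objc
#import <OpenGLES/ES1/gl.h>
#include <math.h>

// Perspective PROJECTION with the view's aspect ratio, then a two-axis
// MODELVIEW rotation. rotationX/rotationY (degrees) are whatever member
// variables your touch tracking maintains.
static void setupMatrices(GLfloat width, GLfloat height,
                          GLfloat rotationX, GLfloat rotationY) {
    const GLfloat zNear = 0.1f, zFar = 10.0f;
    // Half-height of the near plane for a 45-degree vertical field of view.
    GLfloat top = zNear * tanf(45.0f * (GLfloat)M_PI / 360.0f);
    GLfloat aspect = width / height;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-top * aspect, top * aspect, -top, top, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Pitch, then yaw; the viewpoint stays at the origin, inside the cube,
    // so geometry out at +/-1 never crosses the 0.1 near plane.
    glRotatef(rotationX, 1.0f, 0.0f, 0.0f);
    glRotatef(rotationY, 0.0f, 1.0f, 0.0f);
}
```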
Of course, the normal view controller lifecycle instructions are obeyed. Textures are loaded on viewDidLoad and released on viewDidUnload, for example, to ensure that this view controller plays nicely with potential memory warnings.
The main observations are that, beyond knowing the Objective-C signalling mechanisms, most of this is C stuff. You're primarily using C arrays and references to make C function calls, both for OpenGL and CoreGraphics. So a prerequisite for coding this yourself is being happy in C, not just Objective-C.
The CoreGraphics stuff is a bit tedious but it's all just reading the docs to figure out how each type of thing relates to the next — none of it is really confusing. Just get into your head that you need a data provider for the PNG data, you can create an image from that data provider and then create a bitmap context with memory that you've allocated yourself, draw the image into the context and then release everything except the memory you allocated yourself to be left with the result. That result can be directly uploaded to OpenGL. It's relatively short boilerplate stuff, but OpenGL has no concept of PNGs and CoreGraphics has no convenient methods of pushing things into OpenGL.
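A hedged sketch of that sequence (the function name is invented, and error handling is omitted):

```objc
#import <UIKit/UIKit.h>
#include <stdlib.h>

// The CoreGraphics dance described above: data provider -> image ->
// bitmap context over memory we allocated -> raw RGBA bytes. The caller
// frees the returned buffer once it has been handed to OpenGL.
static void *rgbaBytesForPNG(NSString *name, size_t *width, size_t *height) {
    NSString *path = [[NSBundle mainBundle] pathForResource:name
                                                     ofType:@"png"];
    CGDataProviderRef provider =
        CGDataProviderCreateWithFilename([path fileSystemRepresentation]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL,
        false, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    *width = CGImageGetWidth(image);
    *height = CGImageGetHeight(image);
    void *pixels = calloc(*width * *height * 4, 1); // memory we keep

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, *width, *height,
        8, *width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context,
        CGRectMake(0.0f, 0.0f, (CGFloat)*width, (CGFloat)*height), image);

    // Release everything except the pixel memory itself.
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(image);
    return pixels;
}
```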
I've assumed that textures are a suitable size on disk. For practical purposes, that means assuming they're a power-of-two in size along each edge. Mine are 512x512.
The OpenGL texture management stuff is easy enough; it's just reading the manual to learn about texture names, name allocation, texture parameters and uploading image data. More routine stuff that is more about knowing the right functions than managing an intuitive leap.
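In code, that routine comes down to something like the following sketch; GL_CLAMP_TO_EDGE is my choice to avoid visible seams at skybox edges, not necessarily what the project does:

```objc
#import <OpenGLES/ES1/gl.h>

// Allocate a texture name, set single level-of-detail filtering, and
// upload the RGBA pixels produced by the CoreGraphics step above.
static GLuint textureFromRGBABytes(const void *pixels,
                                   size_t width, size_t height) {
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);

    // One mip level only, so filtering must not reference other levels.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Clamp so face edges don't bleed across to the opposite side.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return name;
}
```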
For supplying the geometry to OpenGL I've just written out the arrays in full. I guess you need a bit of a spatial mind to do it, but sketching out a 3d cube on paper and numbering the corners would be a big help. There are three relevant arrays:
the vertex positions
the texture coordinates that go with each vertex location
a list of indices referring to vertex positions that defines the geometry
In my code I've used 24 vertices, treating each face of the cube as a logically discrete thing (so, six faces, each with four vertices). I've defined the geometry using triangles only, for simplicity. Supplying this stuff to OpenGL is actually quite annoying when you're starting; making an error generally means your program crashes deep inside the OpenGL driver without giving you a hint as to what you did wrong. It's probably best to build up a bit at a time.
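To make that concrete, here is a sketch of the three arrays for a single face, plus the draw call; the full cube repeats the pattern six times, and the numbers are illustrative rather than copied from the project:

```objc
#import <OpenGLES/ES1/gl.h>

// One face of the skybox cube, as three arrays: positions, per-vertex
// texture coordinates, and triangle indices into the positions.
static const GLfloat faceVertices[] = {
    -1.0f, -1.0f, -1.0f,   // 0: bottom left
     1.0f, -1.0f, -1.0f,   // 1: bottom right
     1.0f,  1.0f, -1.0f,   // 2: top right
    -1.0f,  1.0f, -1.0f,   // 3: top left
};
static const GLfloat faceTexCoords[] = {
    0.0f, 1.0f,   1.0f, 1.0f,   1.0f, 0.0f,   0.0f, 0.0f,
};
static const GLubyte faceIndices[] = {
    0, 1, 2,    // first triangle
    0, 2, 3,    // second triangle
};

static void drawFace(GLuint texture) {
    // Assumes GL_TEXTURE_2D was enabled during setup.
    glBindTexture(GL_TEXTURE_2D, texture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, faceVertices);
    glTexCoordPointer(2, GL_FLOAT, 0, faceTexCoords);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, faceIndices);
}
```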
In terms of a UIView capable of hosting OpenGL content, I've more or less used the vanilla stuff Apple directly supply in the OpenGL template. The one change I made was explicitly to disable any attempted use of OpenGL ES 2.x. 1.x is more than sufficient for this task, so we gain simplicity firstly by not providing two alternative rendering paths and secondly because the ES 2.x path would be a lot more complicated. ES 2.x gives you the fully programmable pipeline with vertex and fragment shaders, but in ES, choosing 2.x means the fixed pipeline is completely removed. So if you want it, then you have to supply your own substitutes for the normal matrix stacks, you have to write vertex and fragment shaders just to draw 'a triangle with a texture', etc.
The touch tracking isn't particularly complicated, more or less just requiring me to understand how the view frustum works and how touches are delivered in Cocoa Touch. Once you've done everything else, this bit should be quite easy.
Notably, the maths I had to implement was extremely simple. Just the touch tracking, really. Assuming you wanted a Google Maps-type view meant that I could rely entirely on OpenGL's built-in ability to rotate things, for example. At no point do I explicitly handle a matrix.
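A sketch of what that touch tracking might look like, in the view controller; the variable names are hypothetical, and the decay constant is picked by feel:

```objc
// rotationX/rotationY drive the MODELVIEW code shown earlier;
// velocityX/velocityY decay each frame to give the inertia.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint now = [touch locationInView:self.view];
    CGPoint before = [touch previousLocationInView:self.view];

    // A quarter of a degree per point of movement, tuned by feel.
    velocityY = (now.x - before.x) * 0.25f;
    velocityX = (now.y - before.y) * 0.25f;
    rotationY += velocityY;
    rotationX += velocityX;
}

// Called by the render timer once the finger has lifted.
- (void)coastOneFrame {
    rotationY += velocityY;
    rotationX += velocityX;
    velocityX *= 0.95f;   // exponential decay gives the inertial feel
    velocityY *= 0.95f;
}
```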
So, how long it would take you to write depends on your own confidence with C and with CoreGraphics, and how happy you are sometimes coding in the dark. Because I know what I'm doing, the whole thing took two or three hours.
I'll try to find somewhere to upload the project so that you can have a look at it. I think it'd be helpful to leaf through it and see how alien it looks. That'll probably give you a good idea about whether you could implement something that meets all of your needs within the time frame of your project.
I've left the view controller as having exactly one view, which is the OpenGL view. However, the normal iPhone compositing rules apply and in your project you can easily put normal controls on top. You can grab my little implementation at mediafire. StackOverflow post length limits prevent me from putting big snippets of code here, but please feel free to ask if you have any specific questions.
It's going to be pretty tough if you're learning OpenGL ES from scratch, so I'd use a graphics engine to do most of the heavy lifting. I'm currently playing with Ogre3D, and from what I've seen so far I can recommend it: http://www.ogre3d.org/. It has skybox support (and much more) out of the box, and it should be pretty straightforward to work with.
I think you can do this. Here are some links to help get you started:
http://sidvind.com/wiki/Skybox_tutorial
Common problems:
(I would post direct links, but Stack Overflow won't let me.)
Look at Stack Overflow questions 2859722 and 2297564.
Some programs and tips to help make the textures:
Spacescape
There are some great OpenGL tutorials here:
nehe.gamedev.net
They are not iPhone-specific, but they explain OpenGL pretty well. I think some folks have ported them to the phone as well; I just can't find the ports now.
I'm working on a project where I need to render a 3D human body on an iOS device. The 3D object was built in LightWave and is 7.4MB. I opened it in Blender and exported it as an OBJ/MTL pair, which are 5.5MB and 4KB, respectively. Using Jeff LaMarche's Wavefront loader (linked below) as a starting point to figure out OpenGL ES and check out performance and whatnot, I stuck the object in there (in place of the OBJ/MTL pair he'd been using) and ran it in the simulator. Of course, it crashed on startup, so I decided to load it via performSelectorInBackground:. A half hour later, it's still loading.
I'm just guessing that the file is way too detailed to draw with any kind of performance expectation on a device with a 600MHz processor. Is there a way to lower the quality of these files somewhat easily? Or, if performance issues have arisen with this particular loader, could somebody enlighten me?
Thanks,
Will
http://iphonedevelopment.blogspot.com/2009/03/wavefront-obj-loader-open-sourced-to.html
Will,
I don't know if I can solve your problem, but I may be able to point you in the right direction. I did a project for a client loading a 3D model exported from Blender using the SIO2 3D engine.
Anyway, at that time, I had trouble with the 3D engine taking a long time to load the model. I found that reducing the number of polygons was very important; if it is a high-quality model, you will almost certainly need to do so.
Blender has a function for this: polygon reduction (the Decimate modifier). Blender should also report how many polygons and surfaces are in use, so if it's more than, say, 20,000, you're likely in for performance issues.
I am developing a simple iPhone app, which:
retrieves data from the server
presents the data
In order to present the data better I want to add nice 3d dynamic objects, for example:
a car with spinning wheels next to a car sales bar chart
a power plant with smoke coming out of the chimney next to CO2 emission numbers
The questions are:
How do I work with the designer on this? What output format should he provide for me?
How do I put it in my application, should I involve some 3d engine/framework?
The team behind cocos2d has just announced cocos3d, and it seems really promising.
The first public beta can be downloaded here:
http://www.cocos2d-iphone.org/archives/1274
You can use cocos2d for the iPhone and fake 3D with the art. So you have a car that is drawn to look 3D, but you're only using 2D to display it. The effects you want don't require full 3D models.
You may also have a look at this one I discovered recently:
http://nineveh.gl/
It's pretty new, but it is well documented and comes with video demos.