Collada geometry and animation file loading - iPhone

I'm considering writing a Collada loader for geometry and animation. Could someone describe from a high level how this would be done? If it will take longer than a few weekends of work I may switch strategies, so I'm trying to get a feel for what this involves. I tried to read the Collada spec for animation, but I stopped understanding it once it started talking about different animation channels.
I'm not using any game engine. I'm interfacing directly with OpenGL.

I am just now learning Collada myself. It seems feasible to get some parts working soon, but implementing the entire capability set is quite a handful. There are a few libraries, like the Open Asset Import Library, but for the iPhone that is somewhat heavy-weight and I haven't tested it. The code is probably portable enough that you could grab the Collada portion without too much trouble.
If you are going to parse it yourself, I'd recommend this XML parser, which I just read about from this question. It is very small and fast, works on the iPhone, and looks useful for other projects as well.
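To give a feel for the parsing side, here is a minimal sketch of pulling the raw vertex positions out of the first <geometry> in a Collada document. I'm using pugixml purely as a stand-in for whichever small parser you end up choosing; a real loader also has to resolve the <vertices>/<input> references by id, honour the <accessor> stride, walk the <p> index list, and handle units and the up-axis. As for the animation channels that confused you: a <channel> just binds a <sampler> (keyframe times, output values, interpolation type) to a target transform on a node, so it is more of the same kind of traversal.

#include "pugixml.hpp"  // assumption: pugixml as the XML parser; any small parser works
#include <sstream>
#include <vector>

// Minimal sketch: read the first <float_array> of the first <geometry>.
// Collada stores raw floats as whitespace-separated text inside the element.
std::vector<float> loadFirstPositionArray(const char* path)
{
    std::vector<float> positions;

    pugi::xml_document doc;
    if (!doc.load_file(path))
        return positions;  // parse failure: return an empty array

    pugi::xml_node mesh = doc.child("COLLADA")
                             .child("library_geometries")
                             .child("geometry")
                             .child("mesh");

    // NOTE: a real loader would locate the source referenced by the
    // <vertices> POSITION input instead of taking the first <source>.
    pugi::xml_node array = mesh.child("source").child("float_array");

    std::istringstream text(array.child_value());
    float value;
    while (text >> value)
        positions.push_back(value);

    return positions;
}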

Related

I want to make a small co-op 3D RPG. Where do I begin?

I know how to program, although I haven't done much in C# yet, but I understand every code snippet I read. I mostly script at my job. I have little to no experience in graphic design or animation.
The only thing I have done so far in Unity 3D is the ball rolling tutorial.
I have some questions, though, as I have a hard time understanding where I should begin. I know a co-op 3D RPG is a project with a HUGE scope, but this is what I would like to accomplish to begin with:
Mini world (half the size of a WoW zone)
Populate world with terrain, trees and some buildings. Maybe a cave.
Have a playable character that can move around and interact with some objects.
Could anyone guide me in the right direction? What documentation should I read? Are there any RPG packs or plugins that can help me achieve this? Any nice tutorials you know of?
If 3D is too complicated to start with, I'm also willing to try an isometric game.
PS: Are there any free (or reasonably priced) HD asset packs that include animations? Or will I have to provide those myself as well?
Co-op RPGs are quite large; in fact most people use tools that help speed up production. But you're asking where to start. Most of my programming knowledge comes from theory and fairly basic experience, but I have an idea of where you should start!
A big factor in co-op is, well, of course the co-op itself, so something you should play around with first is data and how it is stored and shared. Even if you already know this area, Unity has a good built-in way of synchronizing it from host to client:
https://docs.unity3d.com/Manual/UNetStateSync.html?_ga=2.98238303.1333003911.1509902124-1740889112.1506544791
Another factor of co-op is dynamic spawning, which is going to be pretty heavy. There are quite a few ways of doing this, and I'm sure there are plenty of correct as well as incorrect ones.
Personally I would build a database and call events with SQLite, though doing that correctly and efficiently would be a heavy and complex job.
Unity's UNet custom spawning documentation seems to work pretty well for its intended use, although it takes some heavy tinkering to get right, so once you have synchronization done I would suggest checking out spawning:
https://docs.unity3d.com/Manual/UNetCustomSpawning.html
Every co-op action also has to be handled over the network, and here's how:
https://docs.unity3d.com/Manual/UNetActions.html
How about the actual connection?
https://docs.unity3d.com/Manual/UNetDiscovery.html
Reading a lot of this will help, but I would suggest reading through the setup guide and giving it a shot:
https://docs.unity3d.com/Manual/UNetSetup.html
And if that doesn't help, or it gives you some trouble, there's the Tanks tutorial:
https://unity3d.com/learn/tutorials/s/tanks-tutorial
You may hear about "Tanks" a lot. I'm personally quite stubborn, so I couldn't bear to look into it until earlier this month, but it shows a correct way of doing things, from shooting to spawning and many other topics.
I'm not certain whether this is a good or bad answer for you, as I believe this is quite a personal question, and there are a thousand and a thousand more ways of doing this, but I think what I have shown is a great way of starting, just to get an idea of how the ball should start rolling!
-Thanks!

iPhone - Using OpenGL to create apps - What is a good wrapper or low-level engine to use?

I'm working on a couple of apps which require OpenGL ES 2.0. I made a prototype of one starting from a simple sample project. However, I wasn't very happy with the clutter that all of the OpenGL code caused, and I think that clutter would cause issues as I keep extending the code.
So: is there a good solution for working with OpenGL at a slightly higher level? I don't really need all the complexity and overhead of a game engine; I'm just slightly frustrated that I can't deal with OpenGL like this:
ShaderProgram shader(fragmentCode, vertexCode);
RenderBuffer renderBuffer(xResolution, yResolution);
You'll have to pardon the shameless self-promotion, but I've been working on just such a framework due to the exact frustrations you've been having. I grew so tired of the nonsense of having to properly initialize resources and then clean them up. Here is a sample from my XPG framework.
XPG::Texture2D tex("texture.jpg"); // automatically cleaned up
tex.bind(); // ready for use
I have built similar objects for things like vertex buffer objects (VBOs). I am still working on it, but the OpenGL tools will certainly benefit you greatly. I have yet to see another framework make things this simple; if anyone knows of one, I would love to hear about it. The one I've been working on even works on Android. It should work on iOS, but I haven't tested it there yet. It does work on OSX though. :)
To see a high level demonstration, see the test module source code: interface and implementation.
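If you do decide to roll your own instead, the core of the idea is small. Below is a rough sketch of an RAII shader-program wrapper in the spirit of what the question asks for; it is not XPG's actual API, just an illustration of the pattern (acquire the GL object in the constructor, release it in the destructor):

#include <OpenGLES/ES2/gl.h>  // iOS header; use <GLES2/gl2.h> on other platforms
#include <stdexcept>
#include <string>

// Sketch only: compiles and links in the constructor, frees the program in
// the destructor. A real version would also check GL_COMPILE_STATUS, report
// the info logs, and forbid copying to avoid double deletion.
class ShaderProgram
{
public:
    ShaderProgram(const std::string& vertexCode, const std::string& fragmentCode)
        : mProgram(glCreateProgram())
    {
        attach(GL_VERTEX_SHADER, vertexCode);
        attach(GL_FRAGMENT_SHADER, fragmentCode);
        glLinkProgram(mProgram);

        GLint linked = 0;
        glGetProgramiv(mProgram, GL_LINK_STATUS, &linked);
        if (!linked)
            throw std::runtime_error("shader program failed to link");
    }

    ~ShaderProgram() { glDeleteProgram(mProgram); }

    void use() const { glUseProgram(mProgram); }

private:
    void attach(GLenum type, const std::string& code)
    {
        GLuint shader = glCreateShader(type);
        const char* source = code.c_str();
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);
        glAttachShader(mProgram, shader);
        glDeleteShader(shader);  // the program keeps the shader alive until it is deleted
    }

    GLuint mProgram;
};

Usage is exactly the style the question sketches: construct it with the two source strings, call use() before drawing, and let the destructor clean up.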
I don't think a position somewhere between raw OpenGL and a complete engine would be effective. Suppose you already have the ability to manage OpenGL objects like shaders, buffers and textures:
You will still need loading logic to get the input data from somewhere. An engine has it.
You'll need tools to compose shaders and test scenes. An engine should have them.
You'll face hidden errors from incompatible vertex attributes, shaders and uniform parameters. An engine has to check that consistency and link those pieces together smoothly for you (see the sketch below).
Hence my conclusion: once you've decided to move on from raw GL, you'll eventually end up with an engine, either in the long term if you build it yourself, or in the short term if you take an existing one.
More than that, I think the engine should let you create shader programs and render buffers in the way you want, and I wouldn't expect much overhead from these operations.
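To illustrate the consistency point: even without a full engine, a thin layer can interrogate a linked program and report its active attributes, so a mismatch with your vertex layout shows up at load time instead of as garbage on screen. A hypothetical helper might look like this:

#include <OpenGLES/ES2/gl.h>  // iOS; use <GLES2/gl2.h> elsewhere
#include <cstdio>

// Dump a program's link status and active vertex attributes. The names and
// locations printed here can be compared against the vertex format you bind.
void dumpActiveAttributes(GLuint program)
{
    GLint linked = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked)
    {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        printf("link failed: %s\n", log);
        return;
    }

    GLint count = 0;
    glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
    for (GLint i = 0; i < count; ++i)
    {
        char name[64];
        GLint size = 0;
        GLenum type = 0;
        glGetActiveAttrib(program, i, sizeof(name), NULL, &size, &type, name);
        printf("attribute %s at location %d\n", name, glGetAttribLocation(program, name));
    }
}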

Core Animation XML or JSON framework

Does anyone know if there is a framework for dynamically loading Core Animation sequences from some kind of description file like XML or JSON, or, even better, if there is some kind of Core Animation studio? I need some way to let designers work on animations without having to talk to programmers for every single change...
In a project I worked on, we did this exact thing (at least the description part). The animation data is passed down in JSON, then parsed and interpreted. It maps to a lot of the major animation capabilities provided by Core Animation, mostly position and frame animations.
Unfortunately what we developed is proprietary and it is highly doubtful the company would be willing to release it as open source.
In the end, my answer to your question is that there don't appear to be any frameworks that currently support this; however, implementing it yourself wouldn't be terribly difficult. Creating a tool your designers can use to generate the animation JSON would be the next logical step. If the tool were not WYSIWYG but rather a simple pseudo design tool, it probably wouldn't be too hard to create either.
Good luck and best regards.
It sounds like you're looking for Quartz Composer, except that QC depends heavily on hardware acceleration and isn't available on iOS. Perhaps that will change in the future.

OpenGL ES and real world development

I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me). I have read around a fair bit, with plenty of success at rendering basic models, some basic lighting, and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use. Is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc.) have exporting features?
Say I have my model in a nice .h file, with all the vertices, texture coordinates, etc. lined up: how do I make my model (say, of a basic person) walk? Or, to be more general, how do you animate "part" of a model (legs only, turn head, etc.)? Does it need to be a massive mash-up of many different tiny models, or can you pre-bake animations "into" models these days (somehow)?
Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage the designer/developer workflow? Surely not all the animations, textures, etc. are done programmatically.
I hope these are not stupid questions. In actual fact, the app I'm trying to investigate how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give, I appreciate all types of response big or small :)
Cheers,
Mark
Let me try to explain why the answer to this question will always be vague.
OpenGL ES is very low level. It's all about pushing triangles to the screen and filling pixels, and basically nothing else.
What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, managing animations, textures, sound, maybe network, physics, etc.
These parts are the "game engine".
Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source, and in-house built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best...
Workflow and tools used vary a lot from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-)
For the hobbyist and indie dev, there are several cheap and open-source engines you can use, of varying maturity and amounts of documentation/support. Same there: you have to look around until you find one you like.
On top of the game engine, you write your game code, which uses the engine (and any other libraries you might need) to create whatever game it is you want to make.
Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones (DirectX, for example, comes with a .x file format supported right away). This is because OpenGL works at a somewhat lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format, though, if you only need geometry (see the sketch below). Things get more complex when you also want to take animation and shading into account; take a look at Collada for that sort of thing.
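To make the "roll your own geometry format" idea concrete, here is a sketch of about the simplest thing that works: a small header followed by raw vertex and index blocks. The struct layout is made up for illustration, not any standard; just make sure the exporter on the tool side writes exactly the same layout.

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical minimal mesh file: [MeshHeader][Vertex * vertexCount][uint16_t * indexCount]
struct MeshHeader
{
    uint32_t vertexCount;
    uint32_t indexCount;
};

struct Vertex
{
    float position[3];
    float normal[3];
    float texCoord[2];
};

bool loadMesh(const char* path,
              std::vector<Vertex>& vertices,
              std::vector<uint16_t>& indices)
{
    FILE* file = fopen(path, "rb");
    if (!file)
        return false;

    MeshHeader header;
    bool ok = fread(&header, sizeof(header), 1, file) == 1;

    if (ok)
    {
        vertices.resize(header.vertexCount);
        indices.resize(header.indexCount);
        ok = fread(vertices.data(), sizeof(Vertex), header.vertexCount, file) == header.vertexCount
          && fread(indices.data(), sizeof(uint16_t), header.indexCount, file) == header.indexCount;
    }

    fclose(file);
    return ok;
}

The loaded arrays can then be handed to glBufferData, or drawn directly with client-side vertex arrays on OpenGL ES 1.x.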
Again, animation can be done in several ways. Characters are often animated with skeletal animation; have a look at the cal3d library as a starting point (a rough sketch of the underlying idea follows).
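For a feel of what skeletal animation boils down to (independent of cal3d's actual API), here is a sketch of linear-blend skinning on the CPU: each vertex carries up to four bone indices and weights, and its posed position is the weighted blend of the vertex transformed by each bone's current matrix. The Vec3/Mat4 helpers below are stand-ins, not library types.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mat4
{
    float m[16];  // column-major, as OpenGL expects

    Vec3 transformPoint(const Vec3& p) const  // treat p as a point (w = 1)
    {
        Vec3 r;
        r.x = m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12];
        r.y = m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13];
        r.z = m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14];
        return r;
    }
};

struct SkinnedVertex
{
    Vec3  position;   // bind-pose position
    int   bone[4];    // indices into the bone matrix array
    float weight[4];  // weights summing to 1
};

// Compute the posed position of every vertex for the current frame.
void skin(const std::vector<SkinnedVertex>& bindPose,
          const std::vector<Mat4>& boneMatrices,
          std::vector<Vec3>& skinnedPositions)
{
    skinnedPositions.resize(bindPose.size());
    for (std::size_t i = 0; i < bindPose.size(); ++i)
    {
        const SkinnedVertex& v = bindPose[i];
        Vec3 result = { 0.0f, 0.0f, 0.0f };
        for (int j = 0; j < 4; ++j)
        {
            Vec3 t = boneMatrices[v.bone[j]].transformPoint(v.position);
            result.x += t.x * v.weight[j];
            result.y += t.y * v.weight[j];
            result.z += t.z * v.weight[j];
        }
        skinnedPositions[i] = result;
    }
}

The animation itself then reduces to updating boneMatrices each frame by interpolating keyframes along the skeleton; in practice this work usually moves into the vertex shader on ES 2.0 hardware.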
You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be told about the limits of the engine, both in terms of polygon count and shading. Sometimes complex custom editors are written to create levels, worlds, etc. in a way that fits your specific needs.
Write or use a model-loading library, or use an existing graphics library; that will already have routines to load models and textures.
Animating models is done with bones in the 3D model editor; the graphics library will take care of moving the vertices and so on for you.
No, artists create art and programmers create engines.
This is a link to my favourite graphics engine.
Hope that helps

From a 3D modeler to an iPhone app - what are best practices?

I am quite new to 3D programming on the iPhone and I would like to ask for hints about organizing the work between designers and programmers on that platform. Most of all: what kinds of tools, libraries or plugins cooperate best on both sides?
Although I consider this a general best-practices question, I would also like to find a solution for my current situation, which I describe further down.
I've already done some research and found the following libraries:
SIO2
Khronos OpenGL ES 1.x SDK for PowerVR MBX
Unity3D
Oolong Game Engine
I've checked modellers, or plugins for them, that produce output formats readable by those tools:
obj2opengl Wavefront OBJ to plain header file converter
Blender with SIO2 exporter
iphonewavefrontloader
Cheetah3D
PVRGeoPOD for 3DS / Maya
Unfortunately I still have no clear vision of how to combine any of those tools to get a designer's work into an application. I'm looking for a way to bring it in as completely as possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics), yet so far I have nothing.
And here is my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly rules out PVRGeoPOD), and are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible.
The basic rule I've found in some places, "use Wavefront OBJ", doesn't always work. I haven't gotten any acceptable results with production files, actually; the only things that worked fine were simple examples. Some of my models imported incomplete, sometimes the exporters hung or generated enormous files not really usable on an iPhone, and sometimes enabling textures (with GL_TEXTURE_2D) just crashed the app.
I know the problem might be overly complicated models or my own mistakes coming from inexperience, but I am not able to find any guidelines for this process that would give me streamlined cooperation with the designers.
I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid what can be avoided and get the most out of the model files. The best would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment all I get is "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang" and so on...
Is it really this frustrating, or have I just missed something obvious? Can anybody share their experience in this field and tell me what kind of software they use for "making things happen"?
Well, I can't say I know the perfect way to do this, but after some experimenting I did get something working by doing the following:
created the model(s) in Blender and exported to Wavefront .obj format (TRIANGLE, normals, hq);
then used the obj2opengl.pl script to convert the model to a header file (.h);
then added the header to the project and used it in GLGravity, a sample program from Apple, modifying the drawView function.
Maybe that could be a starting point for you too, just to get something up and running?
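For reference, the drawView side ends up being only a few OpenGL ES 1.1 client-state calls. The header and array names below are hypothetical; the actual names obj2opengl.pl generates depend on your model's file name, so check the generated header:

#include <OpenGLES/ES1/gl.h>  // GLGravity targets OpenGL ES 1.1
#include "banana.h"           // hypothetical header produced by obj2opengl.pl

// Call from drawView after the sample has set up the matrices and lighting.
static void drawConvertedModel()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, bananaVerts);    // hypothetical array names
    glNormalPointer(GL_FLOAT, 0, bananaNormals);

    glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}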