The GLGravity iPhone example showing how to use the accelerometer with OpenGL suffers from the gimbal lock problem. I'm wondering whether any code is available that uses quaternion rotation instead of Euler angles. Any help will be greatly appreciated; I've been struggling with this for a long time without success.
It helps to have a good grasp of the theory before trying to implement and use it yourself. Below are two introductory articles about using quaternions for rotations. Both are primarily concerned with smooth rotation interpolation and with avoiding gimbal lock in accumulated rotations:
Gamedev.net - Quaternion Powers
Gamasutra - Rotating Objects Using Quaternions
Now, as far as actual code goes, I would suggest getting, and using, an "industrial-strength" vector math library rather than rolling your own. My suggestion would be to grab the LinearMath part of the Bullet Physics middleware project. Bullet physics, and the included linear math library, is developed by some of Sony's top engineers and has been in active development for years. It's freely available under the permissive zlib license and is used by professional game developers all over the world. The lib is cross-platform/architecture and compiles on anything from iPhone to PS3.
The lib offers a Quaternion class that lets you create quaternions from Euler angles or from a rotation about an arbitrary axis, e.g. using setEulerZYX. Once you have your quaternions, there are built-in functions for all the common operations on them: plus, minus, mul, normalize, slerp and much more.
For actually applying your final quaternion to OpenGL rendering, the Transform class allows you to construct a matrix from a quaternion. The Transform class in turn includes a getOpenGLMatrix function that directly gives you a compatible matrix to pass to OpenGL.
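For illustration, here is a minimal sketch of how these pieces fit together, assuming Bullet's src directory is on the include path (the helper function and its parameters are mine, not part of the lib):

```cpp
#include "LinearMath/btQuaternion.h"
#include "LinearMath/btTransform.h"

// Build an orientation quaternion and convert it to an OpenGL matrix.
void orientationToGL(btScalar yaw, btScalar pitch, btScalar roll,
                     btScalar glMatrix[16])
{
    // Create the initial orientation from Euler angles once...
    btQuaternion q;
    q.setEulerZYX(yaw, pitch, roll);

    // ...then accumulate further rotations by quaternion multiplication,
    // which is what sidesteps gimbal lock:
    btQuaternion turn(btVector3(0, 1, 0), 0.01f); // small turn about +Y
    q = turn * q;
    q.normalize(); // re-normalize to counter floating-point drift

    // btTransform turns the quaternion into a column-major 4x4 matrix
    // that OpenGL accepts directly.
    btTransform t(q, btVector3(0, 0, 0));
    t.getOpenGLMatrix(glMatrix);
    // glLoadMatrixf(glMatrix); // e.g. in the fixed-function ES 1.x pipeline
}
```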
The lib also includes a host of other very useful matrix and vector classes and functions.
Grab the latest Bullet distribution from Google Code, or grab just the LinearMath portion of the code directly from Subversion using: svn checkout http://bullet.googlecode.com/svn/trunk/src/LinearMath
I am making a game in Unity that involves creatures whose animations are determined by physics. For example, to move a limb of a creature, I can apply forces to the rigidbody it's associated with. I know how to apply forces programmatically through a script to create movements, but I'd like to create more complex and organic movements and thought that I might be able to use a neural network to do this.
I'd like each of the creatures to have a distinct way of moving in the world. I'd like to first puppeteer the creatures manually using my hand (with a Leap Motion controller), and have a neural network generate new movements based on the training I did with my hand.
More concretely, my manual puppeteering setup will apply forces to the rigidbodies of the creature as I move my hand. So if I lift my finger up, the system would apply a series of upward forces to the limb that is mapped to my finger. As I am puppeteering the creature, the NN receives Vector3 forces for each of the rigidbodies. In a way this is the same task as generating a new text based on a corpus of texts, but in this case my input is forces rather than strings.
Based on that training set, is it possible for the NN to generate movements for the characters (forces to be applied to the limbs) to mimic the movements I did with my hand?
I don't have that much experience with neural networks, but am eager to learn, specifically for this project. It would be great to know about similar projects that were done in Unity, or relevant libraries I could use that would simplify the implementation. Also, please let me know if there is anything I can clarify!
Not really an answer, but it would not fit in a comment.
I'm not sure the strategy you want to apply to train your model is the right one.
I would go for reinforcement learning methods (you can check this question for more info) using, for example, the distance traveled by the creature's center of mass along the x-axis as the fitness. If this leads to weird behaviours (like this well-known robot), you could, for example, penalize individuals based on the distance traveled by the CoM along the y and z axes, to favour individuals that keep their CoM in the same plane.
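To make the fitness idea concrete, here is a rough, language-agnostic sketch (written in C++ purely for illustration; the penalty weights are made-up tuning knobs, not values from any reference):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical fitness for one individual: reward distance traveled by the
// center of mass along x, penalize drift of the CoM on y and z.
float fitness(const std::vector<Vec3>& comSamples,
              float yPenalty = 0.5f, float zPenalty = 0.5f)
{
    if (comSamples.size() < 2) return 0.0f;
    const Vec3 start = comSamples.front();
    float score = comSamples.back().x - start.x; // forward progress
    for (const Vec3& p : comSamples) {
        score -= yPenalty * std::fabs(p.y - start.y);
        score -= zPenalty * std::fabs(p.z - start.z);
    }
    return score;
}
```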
Without knowing exactly what you want to achieve, it is hard to give you more advice. However, if you are not looking only for neural-network-based techniques, there is a really great paper you might want to have a look at (here is the video of their results).
There does not seem to be a simple way to apply an affine transformation to a node in SpriteKit. (For example, in VB I am used to setting a transform matrix as a property of e.Graphics.)
I've tried to look up how to do it, but the only answer I can find is this:
SpriteKit missing linear transformation matrices
However, the answer seems to be very complex for what I am trying to achieve, and perhaps it is outdated? Is there a simple way of applying a transformation matrix to any SKNode?
Whilst SpriteKit is likely a tight wrapper around parts of Core Animation (which does have affine transformations), the 3D matrix capabilities of Core Animation have not been brought over.
This is why the answer you found is complex: the author is "faking" the result of a 3D transformation by using a filter.
Your best option, while keeping your SpriteKit content, is to use SceneKit and render your SpriteKit content onto/into SceneKit objects/planes, which have full 3D transformation abilities.
However, whilst these frameworks were designed to work together in this manner, there are many bugs and issues, very few people doing it, and even fewer working on it at Apple. So it's not necessarily stable, nor is it easy to find guidance on doing things your way.
Here's a starting point (see point 3) on using SpriteKit scenes as materials in SceneKit:
http://code.tutsplus.com/tutorials/combining-the-power-of-spritekit-and-scenekit--cms-24049
So I am looking for some general advice. For my final project in my Computational Physics class, I have to complete the following problem.
(4.16 from Computational Physics, 2nd Edition, by Giordano and Nakanishi)
-Carry out a true three-body simulation in which the motions of the Earth, Jupiter, and the Sun are all calculated. Since all three bodies are now in motion, it is useful to take the center of mass of the three-body system as the origin, rather than the position of the Sun. We also suggest that you give the Sun an initial velocity which makes the total momentum of the system exactly zero (so that the center of mass will remain fixed). Study the motion of the Earth with different initial conditions. Also, try increasing the mass of Jupiter to 10, 100, and 1000 times its true mass.
My question: Is it possible to write the code for the problem above, and then import that code (or the result) into Adobe After Effects to model the three-body simulation? My teacher has expressed that if I am able to do so, he would be inclined to give me extra credit, which I desperately need.
It is possible to do some scripting in After Effects, but the language is JavaScript (or rather, ExtendScript), not MATLAB. I would do everything in one script: first calculate your solution (three streams of position data), then use these to keyframe the corresponding layers in After Effects. You will have to learn a bit of the After Effects object model, but in this case you won't need much: the Layer object and how to keyframe its position property. You should go to the Adobe After Effects scripting forum https://forums.adobe.com/community/aftereffects_general_discussion/ae_scripting/ for script examples.
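Purely to illustrate the numerical half, here is a rough C++ sketch of the Euler-Cromer integration for the three bodies (it would translate almost line-for-line to ExtendScript). Units are AU, years, and solar masses, so G = 4π²; the masses, step size, and output format are illustrative choices:

```cpp
#include <cmath>
#include <cstdio>

struct Body { double m, x, y, vx, vy; }; // mass, position (AU), velocity (AU/yr)

int main() {
    const double PI = 3.141592653589793;
    const double G  = 4.0 * PI * PI; // in AU^3 / (M_sun * yr^2)

    Body sun     = { 1.0,     0.0, 0.0, 0.0, 0.0 };
    Body earth   = { 3.0e-6,  1.0, 0.0, 0.0, 2.0 * PI };              // ~circular
    Body jupiter = { 9.55e-4, 5.2, 0.0, 0.0, 2.0 * PI * 5.2 / 11.86 };

    // Give the Sun the velocity that makes the total momentum exactly zero.
    sun.vx = -(earth.m * earth.vx + jupiter.m * jupiter.vx) / sun.m;
    sun.vy = -(earth.m * earth.vy + jupiter.m * jupiter.vy) / sun.m;

    Body* b[3] = { &sun, &earth, &jupiter };

    // Shift positions so the center of mass sits at the origin.
    double M = sun.m + earth.m + jupiter.m, cx = 0.0, cy = 0.0;
    for (int i = 0; i < 3; ++i) { cx += b[i]->m * b[i]->x; cy += b[i]->m * b[i]->y; }
    for (int i = 0; i < 3; ++i) { b[i]->x -= cx / M; b[i]->y -= cy / M; }

    const double dt = 0.001; // years
    for (int step = 0; step < 20000; ++step) {
        double ax[3] = {0, 0, 0}, ay[3] = {0, 0, 0};
        for (int i = 0; i < 3; ++i)          // pairwise gravity
            for (int j = 0; j < 3; ++j) {
                if (i == j) continue;
                double dx = b[j]->x - b[i]->x, dy = b[j]->y - b[i]->y;
                double r  = std::sqrt(dx * dx + dy * dy);
                double a  = G * b[j]->m / (r * r);
                ax[i] += a * dx / r;
                ay[i] += a * dy / r;
            }
        for (int i = 0; i < 3; ++i) {        // Euler-Cromer update
            b[i]->vx += ax[i] * dt;  b[i]->vy += ay[i] * dt;
            b[i]->x  += b[i]->vx * dt;  b[i]->y  += b[i]->vy * dt;
        }
        if (step % 100 == 0)                 // sample positions for keyframing
            std::printf("%g %g %g %g %g %g %g\n", step * dt,
                        sun.x, sun.y, earth.x, earth.y, jupiter.x, jupiter.y);
    }
    return 0;
}
```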
I'm currently trying to implement a silhouette algorithm in my project (using OpenGL ES; it's for mobile devices, primarily iPhone at the moment). One of the requirements is that a set of 3D lines be drawn. The issue with the default OpenGL lines is that they don't connect nicely at an angle when they are thick (gaps appear). Other subtle artifacts are also evident, which detract from the visual appeal of the lines.
Now, I have looked into using some sort of quad strip as an alternative to this. However, drawing a quad strip in screen space requires some sort of visibility detection - lines obscured in the actual 3D world should not be visible.
There are numerous approaches to this problem - i.e. quantitative invisibility. But such an approach, particularly on a mobile device with limited processing power, is difficult to implement efficiently, considering raycasting needs to be employed. Looking around some more I found this paper, which describes a couple of methods for using z-buffer sampling to achieve such an effect. However, I'm not an expert in this area, and while I understand the theory behind the techniques to an extent, I'm not sure how to go about the practical implementation. I was wondering if someone could guide me here at a more technical level - on the OpenGLES side of things. I'm also open to any suggestions regarding 3D line visibility in general.
The z-buffer technique will be too complex for iOS devices: it needs a heavy pixel shader, and (IMHO) it will introduce some visual artifacts.
If your models are not complex, you can find the geometric silhouette at runtime, for example by comparing the normals of polygons that share an edge: if the z values of the two normals in view space have different signs (one normal points toward the camera, the other away from it), then that edge should be used for the silhouette.
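A rough sketch of that edge test (the data layout here is assumed, and normals are taken to be already transformed into view space):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// An edge plus the view-space normals of the two faces that share it.
struct Edge {
    int  v0, v1; // endpoint indices into the mesh's vertex array
    Vec3 n0, n1; // view-space normals of the two adjacent faces
};

// An edge lies on the silhouette when the adjacent faces point in
// opposite directions relative to the camera. (For a perspective camera
// it is more accurate to test dot(normal, camera-to-face vector); the
// sign-of-z comparison is the cheap version described above.)
bool isSilhouetteEdge(const Edge& e)
{
    return (e.n0.z > 0.0f) != (e.n1.z > 0.0f);
}

std::vector<Edge> collectSilhouette(const std::vector<Edge>& edges)
{
    std::vector<Edge> result;
    for (const Edge& e : edges)
        if (isSilhouetteEdge(e))
            result.push_back(e);
    return result;
}
```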
Another approach is more "FPS friendly": keep an extruded version of your model, render the extruded model first in the silhouette color (without textures and lighting), and then render the normal model over it. You will need more memory for vertices, but no real-time computations.
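A sketch of the two-pass idea in fixed-function OpenGL ES 1.x; the model-drawing functions are placeholders, and the front-face culling in the first pass is the common trick that keeps the shell from occluding the model (the answer above glosses over this detail):

```cpp
#include <OpenGLES/ES1/gl.h>

void drawExtrudedModel(); // placeholder: the pre-extruded copy of the mesh
void drawModel();         // placeholder: the normal mesh

void drawWithSilhouette()
{
    // Pass 1: the extruded shell in a flat silhouette color.
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glColor4f(0.0f, 0.0f, 0.0f, 1.0f);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);  // draw only the shell's back faces, so the
                           // shell never hides the real model
    drawExtrudedModel();

    // Pass 2: the normal model on top; only the shell's rim stays visible.
    glEnable(GL_LIGHTING);
    glEnable(GL_TEXTURE_2D);
    glCullFace(GL_BACK);
    drawModel();
}
```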
PS: In all the games I have looked at, the silhouettes were geometric.
I have worked out a solution that works nicely on an iPhone 4S (not tested on any other devices). It builds on the idea of rendering world-space quads, and does the silhouette detection all on the GPU. It works along these lines (pun not intended):
We generate edge information. This consists of a list of edges/"lines" in the mesh, and for each we associate two normals which represent the tris on either side of the edge.
This is processed into a set of quads that are uploaded to the GPU - each quad represents an edge. Each vertex of each quad is accompanied by three attributes (vec3s), namely the edge direction vector and the two neighbor tri normals. All quads are passed w/o "thickness" - i.e. the vertices on either end are in the same position. However, the edge direction vector is opposite for each vertex in the same position. This means they will extrude in opposite directions to form a quad when required.
We determine whether a vertex is part of a visible edge in the vertex shader by performing two dot products between each tri norm and the view vector and checking if they have opposite signs. (see standard silhouette algorithms around the net for details)
For vertices that are part of visible edges, we take the cross product of the edge direction vector with the view vector to get a screen-oriented "extrusion" vector. We add this vector to the vertex, but divided by the w value of the projected vertex in order to create a constant thickness quad.
This does not directly resolve the gaps that can appear between neighboring edges, but it is far more flexible when it comes to combating them. One solution may involve bridging the vertices between lines that meet at a large angle with another quad, which I am exploring at the moment.
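To make the shader side of this concrete, here is a hypothetical GLSL ES vertex shader along these lines, embedded as a C++ string. All attribute/uniform names are my assumptions, and the constant-thickness trick is written in its equivalent clip-space form (scaling the offset by w so the later perspective divide cancels it):

```cpp
// Each edge endpoint is uploaded twice, with a_edgeDir negated on the
// paired vertex, so a visible edge expands into a screen-facing quad
// while an invisible edge stays degenerate (zero area).
static const char* kSilhouetteVertexShader = R"glsl(
attribute vec3 a_position;    // edge endpoint
attribute vec3 a_edgeDir;     // edge direction; sign flipped on the pair
attribute vec3 a_faceNormal0; // normals of the two tris sharing the edge
attribute vec3 a_faceNormal1;

uniform mat4  u_modelView;
uniform mat4  u_projection;
uniform mat3  u_normalMatrix;  // inverse-transpose of u_modelView's 3x3
uniform float u_halfThickness; // in normalized device coordinates

void main() {
    vec4 posEye = u_modelView * vec4(a_position, 1.0);
    vec3 view   = normalize(-posEye.xyz);

    // Silhouette test: the two adjacent faces straddle the view vector
    // exactly when the dot products have opposite signs.
    float d0 = dot(u_normalMatrix * a_faceNormal0, view);
    float d1 = dot(u_normalMatrix * a_faceNormal1, view);

    vec4 clip = u_projection * posEye;
    if (d0 * d1 < 0.0) {
        // Screen-oriented extrusion, perpendicular to the edge and the
        // view vector. Multiplying by clip.w cancels the perspective
        // divide, giving a constant screen-space thickness.
        vec3 extrude = normalize(cross(u_normalMatrix * a_edgeDir, view));
        clip.xy += extrude.xy * u_halfThickness * clip.w;
    }
    gl_Position = clip;
}
)glsl";
```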
I want to make a Fruit Ninja-style blade. I am using cocos2d, and MotionStreak is really ugly for this. Is there another approach, or better settings for MotionStreak? Maybe a particle system? Are there any good free tools similar to ParticleDesigner?
I have my own implementation using texture-mapped OpenGL triangle strips. The blade is very smooth if the distances between adjacent points are small enough. I use linear interpolation to insert more points between any two points whose distance is greater than a predefined constant. I'm thinking of using second-order interpolation, but the implementation is more difficult and performance may suffer.
Source code is available here: https://github.com/hiepnd/CCBlade
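Independent of the linked code, a minimal sketch of the interpolation step described above might look like this:

```cpp
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Whenever two successive touch samples are farther apart than maxGap,
// insert linearly interpolated points so the textured triangle strip
// built from them stays smooth.
std::vector<Point> densify(const std::vector<Point>& samples, float maxGap)
{
    std::vector<Point> out;
    for (const Point& s : samples) {
        if (!out.empty()) {
            const Point prev = out.back(); // copy: out may reallocate below
            float dx = s.x - prev.x;
            float dy = s.y - prev.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            int extra = static_cast<int>(dist / maxGap);
            for (int k = 1; k <= extra; ++k) {
                float t = static_cast<float>(k) / (extra + 1);
                out.push_back({prev.x + t * dx, prev.y + t * dy});
            }
        }
        out.push_back(s);
    }
    return out;
}
```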
I don't know how much effort it would take, but you can create and change the shape of a filter and just apply a white-to-gray gradient as its texture; it gives very good-looking results. I myself am working with cocos2d-x (a C++ port of cocos2d), which has samples for dynamic filters (you create and manipulate a mesh, and everything else is done automatically) built on the CCActionGrid class. I haven't used this class myself yet; if you can't solve your problem with it, ask me and I'll dig deeper.
http://pixlatedstudios.com/2012/02/fruit-ninja-like-blade-effect/
Worth checking out! It is based on hiepnd's CCBlade tutorial.