I am currently working on creating an import-based pipeline for my indie game, using Maya ASCII .ma as the source format and my own formats for physics and graphics as output. I'll keep attributes like range of motion inside Maya, such as for a hinge joint. Other parameters that need a lot of tweaking end up in separate source files (possibly .ini for things like mass, spring constants, strength of physical engines, and the like).
The input is thus one .ma and one .ini, and the output is, among other things, one .physics and several .mesh files (one .mesh file per geometry/material).
I am also probably going to use Python 3.1 to reformat the data, and I have already found some LGPL 2.1 code that reads basic Maya ASCII. I'll probably also use Python to launch the platform during development. The game itself is developed in C++.
Is there anything in all of this that you would advise against? A quick summary of things that might be flawed:
Import-based pipeline (not export-based)?
Maya (not 3DS)?
Maya ASCII .ma (not .mb)?
.ini (not .xml)?
Separation of motion attributes in Maya and "freak-tweak" attributes in .ini (not all in Maya)?
Python 3.1 for building data (not embedded C++)?
Edit: if you have a better suggestion of how to implement the physics/graphics import/export tool chain, I'd appreciate the input.
If you really want to do this, you should be aware of a few things. The main one is that it's probably more of a hassle than you'd first expect. Some others:
Maya .ma (at least up to the current version, 2010) is built out of MEL. MEL is Turing-complete, but the way the hierarchical scene is described in terms of nodes is a lot more straightforward than arbitrary “code”.
Add error-handling early or you’ll be sorry later.
You have to handle a lot of different node types, of which the transforms are by far the most obnoxious. Other types include meshes, materials (many different kinds), “shading engines” and files (such as textures).
.ma mostly describes shapes and tweaks; it only very rarely defines raw vertices. I chose to keep a small “export” script inside the .ma to avoid having to generate all primitives exactly the same way Maya does (a minimal sketch of such a dump script follows the list below). In hindsight, this was the right way to go. Otherwise you have to be able to replay operations like
“Create sphere”,
“Move soft selection with radius this-and-that from here to there”, and
“Move vertex 252 xyz units” (with all vertices implicitly defined).
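For the dump itself, something along these lines run inside Maya would do. This is a minimal sketch using maya.cmds; the output path and the decision to dump every mesh are my own assumptions, not the original script:

import maya.cmds as cmds

# Dump evaluated world-space vertex positions for every mesh in the scene,
# so the importer never has to re-create Maya's primitive generation.
with open("C:/export/verts.txt", "w") as out:
    for mesh in cmds.ls(type="mesh"):
        count = cmds.polyEvaluate(mesh, vertex=True)
        for i in range(count):
            x, y, z = cmds.pointPosition("%s.vtx[%d]" % (mesh, i), world=True)
            out.write("%s %d %f %f %f\n" % (mesh, i, x, y, z))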
Maya defines polygons for meshes; you might want to convert these to triangles.
All parameters that exist on a given node type are either explicitly or implicitly defined, so you have to know the default values for the implicitly defined ones.
Basically, an object is defined by a transform, a mesh and a primitive. The transform is the parent of the mesh. The transform contains the scaling, rotation, translation, pivot translations, and a few more. The mesh links to a primitive and vice versa. The primitive has a type (“polyCube”) and dimensions (“width, height, depth”).
Nodes may have “multiple inheritance”. For instance, a mesh instanced several times has a single mesh (and a single primitive), but multiple parents (the transformations).
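To make those relationships concrete, here is a minimal sketch of how the node graph might be modeled on the importer side (class and attribute names are mine, not Maya's):

class Primitive:
    def __init__(self, kind, **dims):   # e.g. kind="polyCube", width=1.0
        self.kind = kind
        self.dims = dims
        self.mesh = None                # back-link, set by Mesh

class Mesh:
    def __init__(self, primitive):
        self.primitive = primitive      # mesh and primitive link both ways
        primitive.mesh = self
        self.parents = []               # several transforms when instanced

class Transform:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent            # transforms form the scene hierarchy
        self.translate = (0.0, 0.0, 0.0)
        self.rotate = (0.0, 0.0, 0.0)
        self.scale = (1.0, 1.0, 1.0)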
Node transforms are computed like so (see the Maya xform documentation for more info):
vrt = getattr("rpt")  # rotate-pivot translation
rt = mat4.translation(vrt)
...  # the other component matrices (t, r, ar, s, sh, pivots) are built likewise
# t: translate, rt: rotate-pivot translation, r: rotate, ar: rotate axis,
# rp/rpi: rotate pivot and its inverse, st: scale-pivot translation,
# sp/spi: scale pivot and its inverse, sh: shear, s: scale
m = t * rt * rpi * r * ar * rp * st * spi * sh * s * sp
I built my engine around physics, so the game engine wants meshes placed on physical shapes, but when modeling I want it the other way around, to keep things generic for future applications (“meshes without physics”). This tiny decision caused me serious grief in the transformations. Linear algebra got a brush-up. Problems with scaling, rotation, translation and shearing: you name it, I've had it.
I built my import tool on cgkit’s Maya parser. Thanks Matthias Baas!
If you’re going to do something similar, I strongly recommend peeking at my converter before writing your own. This “small” project took me three agonizing months to get to a basic working condition.
As a general serialization format that's both human-readable and human-writable, and has excellent Python support (and, well, support in just about any language), you might want to consider YAML or JSON over .ini files or XML.
XML could be acceptable in your case if you never generate files by hand.
One of the advantages of JSON and YAML is typing: both formats are parsed straight into Python lists, dictionaries, floats, ints, and so on. Basically: sane Python types.
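For example (a minimal sketch; the keys are made up, and the YAML half assumes the third-party PyYAML package):

import json

text = '{"mass": 12.5, "spring_constant": 40, "parts": ["hinge", "axle"]}'
data = json.loads(text)        # -> dict with float, int and list values
total_mass = data["mass"] * 2  # no string-to-float conversion needed

# YAML is analogous, via the third-party PyYAML package:
#   import yaml
#   data = yaml.safe_load(open("physics.yaml"))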
Also, unless you're sure that every library you'll ever use works on 3.1, you might want to consider sticking with 2.x for a bit due to library availability issues.
You should consider using an export-based pipeline or a standardized file format such as OBJ or COLLADA instead of re-implementing a .ma parser and replicating all the Maya internals necessary to interpret it.
The .ma/.mb format is not intended to be read by any program other than Maya itself, so Autodesk does not put any effort into making this an easy process. To parse it 100% correctly you would need to implement the whole MEL scripting language.
All of the Maya-based pipelines I've seen either first export content into a standardized file format, or run MEL scripts within Maya to dump content using the MEL node interfaces.
Note that Maya can be run in a "headless" mode where it loads a scene, executes a MEL script, and exits, without loading the GUI. Thus there is no problem using it within automated build systems.
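A build step could then look something like this (a sketch; "export_all" is a hypothetical MEL procedure that your pipeline would define):

import subprocess

# Load the scene headless, run the exporter, and exit -- no GUI involved.
subprocess.check_call([
    "maya", "-batch",
    "-file", "scene.ma",
    "-command", "export_all",
])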
I am very new to operating systems, so this question may be very basic.
According to the resources I have read, all program icons, the desktop surface, and other symbols for files and folders are generated by the graphical user interface so that users can manage things easily. That makes a lot of sense.
However, alongside this definition I kept running into the term "abstraction". For example, these resources say that file systems are an abstraction.
Actually, I am a little confused by the term "abstraction", and I cannot understand the difference between an abstraction and a graphical user interface. Can anyone explain what abstraction means in operating systems, and what the difference is between an abstraction and a GUI?
abstraction | əbˈstrakʃ(ə)n |
noun [mass noun]
the quality of dealing with ideas rather than events [..]
the process of considering something independently of its associations or attributes [..]
the process of removing something [..]
ORIGIN
late Middle English: from Latin abstractio(n-), from the verb abstrahere ‘draw away’.
An abstraction in this context is generally anything that simplifies something to a more understandable form. A computer works just with blips of electricity. That is rather difficult to comprehend on a day-to-day basis. Those electric impulses are first abstracted to "ones and zeros" or "bits". Those are further abstracted to form numbers. Those numbers are used in specific ways to represent readable characters. Bits are also used in certain ways to store data on a spinning disk of metal or in chips, which we generally call a file system. That file system is made visible in a hierarchical form using "files" and "directories". That hierarchy is made visible in a GUI using windows and icons. Interacting with those things is abstracted into using a "mouse" to push around those "icons", which in the end translates back down to moving electric impulses around on metal.
All those abstractions allow you to use a computer without having to be aware of the underlying things that are going on to make it happen.
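A tiny concrete illustration of that stacking (a sketch):

# One innocuous line of code...
with open("notes.txt", "w") as f:
    f.write("hello")
# ...rests on a stack of abstractions: the language runtime calls the C
# library, which makes a system call into the kernel's file-system layer,
# which drives a block device, which ultimately moves electric impulses
# around on hardware. You never need to know which disk sectors end up
# holding "notes.txt".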
The title pretty much says it. What I really want is a layer mode that takes the alpha channel of the one below it and in all other respects behaves the same. The general question seems worth asking.
I'm skimming the docs, and it seems like layer modes are a fixed enum, but I want to be sure there isn't something I'm overlooking. I'll also take any alternative suggestions.
Thanks.
No, it is not possible to add new layer modes except by building your own modes into the GIMP source code.
However, layer modes are a bit more generic now, since they can be written as GEGL operations. I'd have to check the source, but all that is needed is probably to write the proper GEGL operation (which is easy to derive from the other layer modes) and add the new operation to the enums. The big drawback of this approach compared to plug-ins is that you can't share the layer mode with other GIMP users, and even worse: the XCF files you create with your custom mode will only be "readable" in your modified copy of GIMP.
A workaround is to write a plug-in that creates a new layer from the two underlying layers, combining them as you like. You'd have to invoke it manually each time you updated either layer. You'd have to use Python-fu instead of Script-Fu, as the latter does not give you access to pixel values.
For the simple case you describe, though, it seems like a sequence of "alpha-to-selection", "selection-to-channel", "copy", "add-layer-mask", "paste" can do what you want without any need to copy pixels around in a high-level language.
Are there any tools which diff hierarchies?
I.e., consider the following hierarchy:
A has child B.
B has child C.
which is compared to:
A has child B.
A has child C.
I would like a tool that shows that C has moved from a child of B to a child of A. Do any such utilities exist? If there are no specific tools, I'm not opposed to writing my own, so what are some good algorithms which are applicable to this problem?
A great general resource for diffing hierarchies (not specifically XML, HTML, etc) is the Hierarchical-Diff github project based on a bit of Dartmouth research. They have a pretty extensive list of related work ranging from XML diffing, to configuration file diffing to HTML diffing.
In general, actually performing diffs/patches on tree structures is a fairly well-solved problem, but displaying those diffs in a manner that makes sense to humans is still the wild west. That's doubly true when your data structure already has some semantic meaning, as with HTML.
You might consider our SmartDifferencer tools.
These tools compare computer source code files in a diff-like way. Unlike diff, which is line-oriented, these tools see changes according to code structure (variable name, expression, statement, block, function, class, etc.) as plausible edits ("move, insert, delete, replace, copy, rename"), producing answers that make sense to programmers.
Computer source code has exactly the "hierarchy" structure you are suggesting; the various constructs nest. Specific to your topic, code blocks can typically nest inside code blocks. The SmartDifferencer tools use parsers that are accurate for the target language to "deconstruct" the source text into these hierarchical entities. We have a SmartDifferencer for XML, in which you can obviously write nested tags.
The answer isn't reported as "Nth child of M has moved", although it is actually computed that way by operating on the parse trees produced by the parsers. Rather, it is reported as "code fragment of type at line x col y to line a col b has moved/...".
The answer, my good sir, is: depth-first search, also known as depth-first traversal. You might also find some use for the Visitor pattern.
You can't swing a dead cat without hitting some sort of implementation for this when dealing with comparing XML trees. Take a gander at diffxml for an example.
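If you roll your own, a depth-first pass that records each node's parent already detects the move in your example. A minimal sketch, assuming nodes carry unique labels:

def parent_map(node, parent=None, out=None):
    # Depth-first traversal recording each label's parent label.
    if out is None:
        out = {}
    out[node["label"]] = parent
    for child in node.get("children", []):
        parent_map(child, node["label"], out)
    return out

def moved(old, new):
    # Labels present in both trees whose parent changed.
    a, b = parent_map(old), parent_map(new)
    return [n for n in a if n in b and a[n] != b[n]]

old = {"label": "A", "children": [{"label": "B", "children": [{"label": "C"}]}]}
new = {"label": "A", "children": [{"label": "B"}, {"label": "C"}]}
print(moved(old, new))  # ['C'] -- C moved from under B to under A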
I'm searching for a timeline graph for version control systems (like git, svn, cvs, ...) showing creation dates, ancestors and versions. I've found nothing like that.
If there is no such graph, what tool can I use to create such graphs like this or this?
Edit: I've made one for myself: https://aaron-fischer.net/zed
I'd recommend that you look into:
graphviz, for visualizing graphs, which comes in a variety of incarnations. This is my first choice: a very flexible language that should let you do what you want, with a little programming to automate generating the graphs (including things like the dotted lines from your first example).
igraph, which is a library for R, Python, etc. for working with (and visualizing) graphs.
cytoscape, for network analysis (in the graph-theory sense).
gephi, which is similar to cytoscape.
Also consider mind-mapping software like Freemind, Xmind, etc.
In all cases, these tools can display the hierarchical network that describes your data, though adding dates/times might be difficult. (Graphviz lets you place nodes exactly where you want, so you might add the time scale in another program. In any case, you'd need to do some programming to munge the actual VCS data into something graphable.)
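As a starting point, generating such a graph from Python with the graphviz bindings takes only a few lines (a sketch; it assumes the "graphviz" pip package plus a Graphviz installation, and the revision data is made up):

from graphviz import Digraph

g = Digraph("history", graph_attr={"rankdir": "LR"})  # left-to-right timeline
for rev, label in [("r1", "1.0"), ("r2", "1.1"), ("b1", "1.0.1")]:
    g.node(rev, label)
g.edge("r1", "r2")                  # trunk succession
g.edge("r1", "b1", style="dotted")  # branch, drawn as a dotted line
g.format = "svg"
g.render("history")                 # writes history.svg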
A suitable graph for your requirement is called a Sankey chart.
It is usually used to describe flows and transitions. It can be adapted to show source-control revisions: you can use the width of a line to represent the number of lines of code changed, and colors to indicate different release versions, etc.
Another nice implementation for this is evolines.
Another option that is a bit simpler is a SpaceTree like the one in the JavaScript InfoVis Toolkit (http://thejit.org/). Check out their demo:
http://thejit.org/static/v20/Jit/Examples/Spacetree/example1.html
I've managed to finally build and run pocketsphinx (pocketsphinx_continuous). The problem I'm running into is how to improve accuracy. From what I understand, you can specify a dictionary file (-dict test.dic). So I took the default dictionary file and added some more pronunciations of the same words, for example:
pencil P EH N S AH L
pencil(2) P EH N S IH L
spaghetti S P AH G EH T IY
spaghetti(2) S P UH G EH T IY
Yet pocketsphinx still does not recognize either word at all. I know there is a JSGF file you can specify as well, but that seems to be more for phrases and grammar. How can I get pocketsphinx to recognize common words such as pencil and spaghetti?
thanks
-Mike
With something like this, you can't be certain, but I can offer the following suggestions:
Perhaps the language model somehow has low probabilities for "spaghetti" and "pencil". As you suggested, you could use a JSGF to test how recognition does when it doesn't use the N-gram model but instead a simple grammar (give it around twenty words, including spaghetti and pencil). This way you can see whether it is the language model that makes these words hard to recognize, and whether it does okay when all words are considered equally probable.
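Such a test run could look like this (a sketch; the grammar and audio file names are hypothetical, but -dict, -jsgf and -infile are real pocketsphinx_continuous options):

import subprocess

subprocess.check_call([
    "pocketsphinx_continuous",
    "-dict", "test.dic",            # your dictionary with the extra entries
    "-jsgf", "twenty_words.gram",   # small grammar instead of the N-gram LM
    "-infile", "test.wav",          # a recording of the test words
])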
Perhaps you simply pronounce these words poorly, even with the alternative dictionary entries. Try either A. testing other people's voices, or B. adapting the acoustic model to your voice (see http://cmusphinx.sourceforge.net/wiki/tutorialam).
Also, what is it recognizing them as when it fails? If possible, remove the words it mistakes them for from the dictionary.
Again, for overall accuracy, only three things are really going to help you: restricting the grammar, adapting the acoustic model, and perhaps getting higher-quality recording input.
To improve accuracy you may want to try adapting the acoustic model to your voice.
http://cmusphinx.sourceforge.net/wiki/tutorialadapt
To learn how to add new words: http://ghatage.com/tech/2012/12/13/Make-Pocketsphinx-recognize-new-words/
Make sure you put a tab (not a space) after the word and before the start of the pronunciation.
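A quick way to sanity-check an edited dictionary for that (a sketch; only useful if your version is indeed tab-sensitive):

# Flag dictionary lines that contain no tab between word and pronunciation.
with open("test.dic") as f:
    for lineno, line in enumerate(f, 1):
        if line.strip() and "\t" not in line:
            print("line %d: no tab separator: %r" % (lineno, line))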
Maybe the problem is with Pocketsphinx. I too was not getting good results with Pocketsphinx, but I was getting very good accuracy with Sphinx4 (for a US speaker with a noise-cancelling microphone). Therefore I did a comparison between the two using the same audio recordings. For Pocketsphinx I used pocketsphinx_batch with the WSJ audio model and a small-vocabulary language model and dictionary (created online with the CMU-Cambridge language modelling toolkit). For Sphinx4 I wrote a small Java program using the Sphinx4 library. The result was that Sphinx4 was much more accurate. All the gory details are at http://www.jaivox.com/pocketsphinx.html.
To achieve good accuracy with pocketsphinx:
Important! Check that your mic, audio device, and files support 16 kHz, since the general model is trained on 16 kHz acoustic examples.
You should create your own limited dictionary; you cannot use cmusphinx-voxforge-de.dic, or accuracy drops dramatically.
You should create your own language model.
You can search for the Jasper project on GitLab to see how it's implemented.
Also, please check the documentation.
This is from the CMUSphinx website:
"There are various phonesets to represent phones, such as IPA or SAMPA. CMUSphinx does not yet require you to use any well-known phoneset, moreover, it prefers to use letter-only phone names without special symbols. This requirement simplifies some processing algorithms, for example, you can create files with phone names as part of the filenames without any violating of the OS filename requirements.
A dictionary should contain all the words you are interested in, otherwise the recognizer will not be able to recognize them. However, it is not sufficient to have the words in the dictionary. The recognizer looks for a word in both the dictionary and the language model. Without the language model, a word will not be recognized, even if it is present in the dictionary."
https://cmusphinx.github.io/wiki/tutorialdict/