What are the equivalent functions of glArrayElement() & glTexGeni() in OpenGL ES (iPhone)?

Can anyone please tell me what the equivalent functions for glArrayElement() & glTexGeni() are in OpenGL ES?

There are no equivalents for these two functions.
OpenGL|ES is a stripped down and leaner version of OpenGL, so some stuff has been left out.
glArrayElement is not required because the whole glBegin / glEnd rendering mode has been removed from GL|ES; glArrayElement no longer makes sense without glBegin/glEnd. Since you'll have to rewrite the rendering code to use glDrawArrays/glDrawElements, you won't miss it.
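Conceptually, a glDrawElements call does the per-index vertex lookup that glArrayElement used to do for you inside glBegin/glEnd. A plain-C sketch of that expansion (a hypothetical helper, just to show what the index list means):

```c
#include <stddef.h>

/* Hypothetical helper: expand an index list into a flat vertex stream.
 * This is conceptually what glDrawElements does with your bound arrays,
 * and what a loop of glArrayElement calls used to do one index at a time. */
static void expand_indices(const float *vertices, size_t components,
                           const unsigned short *indices, size_t index_count,
                           float *out)
{
    for (size_t i = 0; i < index_count; ++i)
        for (size_t c = 0; c < components; ++c)
            out[i * components + c] = vertices[indices[i] * components + c];
}
```

In GL|ES you would not expand anything yourself: bind the arrays with glVertexPointer and call glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices) directly.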
glTexGen has been removed as well. If you use a later version of GL|ES, you can emulate all the functionality using vertex shaders. If you're stuck with early GL|ES (1.0 or 1.1) you have to emulate it in software (e.g. calculate the texture coordinates on your own and pass them to gl via glTexCoordPointer).


VHDL beta function

A friend of mine needs to implement some statistical calculations in hardware.
She wants it to be accomplished using VHDL.
(cross my heart, I haven't written a line of code in VHDL and know nothing about its subtleties)
In particular, she needs a direct analogue of MATLAB's betainc function.
Is there a good package around for doing this?
Any hints on the implementation are also highly appreciated.
If it's not a good idea at all, please tell me about it as well.
Thanks a lot!
There isn't a core available that performs an incomplete beta function in the Xilinx toolset. I can't speak for the other toolsets available, although I would doubt that there is such a thing.
What Xilinx does offer is a set of signal processing blocks, like multipliers, adders and RAM Blocks (amongst other things, filters, FFTs), that can be used together to implement various custom signal transforms.
In order for this to be done, there needs to be a complete understanding of the inner workings of the transform to be applied.
A good first step is to implement the function "manually" in matlab as a proof of concept:
Instead of using the built-in function in matlab, your friend can try to implement the function just using fundamental operators like multipliers and adders.
The results can be compared with those produced by the built-in function for verification.
The concept can then be moved to VHDL using the building blocks that are provided.
Doing this for the incomplete beta function isn't something for the faint-hearted, but it can be done.
As far as I know there is no tool which allows interfacing VHDL and MATLAB directly.
But interfacing VHDL and C is fairly easy, so if you can implement your code (MATLAB's betainc function) in C, then it can be done easily with the FLI (foreign language interface).
If you are using ModelSim, the link below can be helpful.
link
First of all a word of warning, if you haven't done any VHDL/FPGA work before, this is probably not the best place to start. With VHDL (and other HDL languages) you are basically describing hardware, rather than a sequential line of commands to execute on a processor (as you are with C/C++, etc.). You thus need a completely different skill- and mind-set when doing FPGA-development. Just because something can be written in VHDL, it doesn't mean that it actually can work in an FPGA chip (that it is synthesizable).
With that said, Xilinx (one of the major manufacturers of FPGA chips and development tools) does provide the System Generator package, which interfaces with Matlab and can automatically generate code for FPGA chips from this. I haven't used it myself, so I'm not at all sure if it's usable in your friend's case - but it's probably a good place to start.
The System Generator User guide (link is on the previously linked page) also provides a short introduction to FPGA chips in general, and in the context of using it with Matlab.
You COULD write it yourself. However, the incomplete beta function is an integral. For many values of the parameters (as long as both are greater than 1) it is fairly well behaved. However, when either parameter is less than 1, a singularity arises at an endpoint, making the problem a bit nasty. The point is, don't write it yourself unless you have a solid background in numerical analysis.
Anyway, there are surely many versions in C available. Netlib must have something, or look in Numerical Recipes. Or compile it from MATLAB. Then link it in as nav_jan suggests.
As an alternative to VHDL, you could use MyHDL to write and test your beta function - that can produce synthesisable (ie. can go into an FPGA chip) VHDL (or Verilog as you wish) out of the back end.
MyHDL is an extra set of modules on top of Python which allow hardware to be modelled, verified and generated. Python will be a much more familiar environment to write validation code in than VHDL (which is missing many of the abstract data types you might take for granted in a programming language).
The code under test will still have to be written with a "hardware mindset", but that is usually a smaller piece of code than the test environment, so in some ways less hassle than figuring out how to work around the verification limitations of VHDL.

Objective-C Data Structures (Building my own DAWG)

After not programming for a long, long time (20+ years) I'm trying to get back into it. My first real attempt is a Scrabble/Words With Friends solver/cheater (pick your definition). I've built a pretty good engine, but it solves the problem through brute force instead of efficiency or elegance. After much research, it's pretty clear that the best answer to this problem is a DAWG or CDAWG. I've found a few C implementations out there and have been able to leverage them (search times have gone from 1.5s to .005s for the same data sets).
However, I'm trying to figure out how to do this in pure Objective-C. At that, I'm also trying to make it ARC compliant. And efficient enough for an iPhone. I've looked quite a bit and found several data structure libraries (e.g. CHDataStructures) out there, but they are mostly C/Objective-C hybrids or they are not ARC compliant. They rely very heavily on structs and embed objects inside the structs. ARC doesn't really care for that.
So - my question is (sorry and I understand if this was tl;dr and if it seems totally a newb question - just can't get my head around this object stuff yet) how do you program classical data structures (trees, etc) from scratch in Objective-C? I don't want to rely on a NS[Mutable]{Array,Set,etc}. Does anyone have a simple/basic implementation of a tree or anything like that that I can crib from while I go create my DAWG?
Why shoot yourself in the foot before you've even started walking?
You say you're
trying to figure out how to do this in pure Objective-C
yet you
don't want to rely on a NS[Mutable]{Array,Set,etc}
Also, do you want to use ARC, or do you not want to use ARC? If you stick with Objective-C then go with ARC, if you don't want to use the Foundation collections, then you're probably better off without ARC.
My suggestion: do use NS[Mutable]{Array,Set,etc} and get your basic algorithm working with ARC. That should be your first and only goal, everything else is premature optimization. Especially if your goal is to "get back into programming" rather than writing the fastest possible Scrabble analyzer & solver. If you later find out you need to optimize, you have some working code that you can analyze for bottlenecks, and if need be, you can then still replace the Foundation collections.
As for the other libraries not being ARC compatible: you can pretty easily make them compatible if you follow some rules set by ARC. Whether that's worthwhile depends a lot on the size of the 3rd party codebase.
In particular, casting from void* to id and vice versa requires a bridged cast, so you would write:
void* pointer = (__bridge void*)myObjCObject;
Similarly, if you flag all pointers in C structs as __unsafe_unretained you should be able to use the C code as is. Even better yet: if the C code can be built as a static library, you can build it with ARC turned off and only need to fix some header files.
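For instance, a struct-based trie, which is the usual starting point before merging equivalent subtrees into a DAWG, is plain C that ARC never sees if you keep it behind a thin Objective-C wrapper. A minimal sketch (illustrative names, lowercase a-z only):

```c
#include <stdlib.h>

/* A bare-bones trie over lowercase a-z. A DAWG is a trie whose
 * equivalent subtrees have been merged, so this structure is the
 * usual input to that minimization pass. */
typedef struct TrieNode {
    struct TrieNode *child[26];
    int is_word;                 /* terminal flag */
} TrieNode;

static TrieNode *trie_new(void) { return calloc(1, sizeof(TrieNode)); }

static void trie_insert(TrieNode *root, const char *word)
{
    for (; *word; ++word) {
        int i = *word - 'a';
        if (!root->child[i])
            root->child[i] = trie_new();
        root = root->child[i];
    }
    root->is_word = 1;
}

static int trie_contains(const TrieNode *root, const char *word)
{
    for (; *word; ++word) {
        root = root->child[*word - 'a'];
        if (!root) return 0;
    }
    return root->is_word;
}
```

Compiled in a .c file (or with -fno-objc-arc), ARC never inspects these structs, and an Objective-C class can simply hold the root pointer as __unsafe_unretained-free plain C data.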

Replace these OpenGL functions with OpenGL ES?

I'm searching for a way to merge my PC OpenGL application and an iPhone app into one Xcode project (for convenience), so that if I make changes to the source files they apply to both platforms, and I can compile for both platforms from one project. How could I accomplish this?
Is there a way to do so in Xcode 4 or 3.2.5? Any help would be highly appreciated.
edit: Okay, I got this far - all in all, it seems to work with Xcode 4.
My only problems are these OpenGL/GLUT functions that aren't working on iPhone:
glPushAttrib( GL_DEPTH_BUFFER_BIT | GL_LIGHTING_BIT );
glPopAttrib();
glutGet(GLUT_ELAPSED_TIME);
glutSwapBuffers();
Any ideas how to fix these issues?
PushAttrib/PopAttrib you'll need to replace yourself with code which manually tracks those states, or you could rewrite your code in such a way that anything which relies on those states, sets them itself.
glutGet(GLUT_ELAPSED_TIME) could be replaced by mach_absolute_time (not particularly easy, but the right thing to use) or [NSDate timeIntervalSinceReferenceDate] (easy, but potentially problematic since it's not guaranteed to be monotonically increasing, or to increase at 1.0 per second)
glutSwapBuffers will be replaced by -presentRenderbuffer: on your EAGLContext (but it sounds like you're already doing this; if you weren't, you wouldn't be able to see anything).
You could use dgles to wrap your desktop OpenGL implementation and provide a OpenGL ES 1.x interface.
I found this documentation online in regards to the OpenGL ES 1.0 specification
There are several reasons why one type or another of internal state needs to be queried by an application. The application may need to dynamically discover implementation limits (pixel component sizes, texture dimensions, etc.), or the application might be part of a layered library and it may need to save and restore any state that it disturbs as part of its rendering. PushAttrib and PopAttrib can be used to perform this but they are expensive to implement and use and therefore not supported. Generally speaking state queries are discouraged as they are often detrimental to performance. Rather than trying to partition different types of dynamic state that can be queried, tops of matrix stacks for example, no dynamic state queries are supported and applications must shadow state changes rather than querying the pipeline. This makes things difficult for layered libraries, but there hasn't been enough justification to retain dynamic state queries or attribute pushing and popping.
That and this other link here seems to indicate that you'll need to keep track of those bits yourself. No help for you there.
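Shadowing the state yourself can be as small as a struct and a tiny stack for the few pieces of state you actually touch. A plain-C sketch (hypothetical names; the real GL calls are shown as comments so the sketch stays self-contained):

```c
/* Minimal replacement for glPushAttrib/glPopAttrib: shadow only the
 * state you actually change (here, blending) and restore it yourself. */
typedef struct {
    int blend_enabled;
    int blend_src, blend_dst;
} BlendState;

#define ATTRIB_STACK_DEPTH 8
static BlendState attrib_stack[ATTRIB_STACK_DEPTH];
static int attrib_top = 0;
static BlendState current = {0, 0, 0};

static void push_blend_state(void) { attrib_stack[attrib_top++] = current; }

static void pop_blend_state(void)
{
    current = attrib_stack[--attrib_top];
    /* Re-issue the restored state to GL:
       glBlendFunc(current.blend_src, current.blend_dst);
       current.blend_enabled ? glEnable(GL_BLEND) : glDisable(GL_BLEND); */
}

static void set_blend(int enabled, int src, int dst)
{
    current.blend_enabled = enabled;
    current.blend_src = src;
    current.blend_dst = dst;
    /* ...issue the matching glEnable/glBlendFunc calls here... */
}
```

All state changes must go through the set_* wrappers; if anything touches GL directly, the shadow copy goes stale, which is exactly the discipline the spec excerpt above is asking for.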
glut just calls the system code, yeah? (wglSwapBuffers, glXSwapBuffers)
Try this here eglSwapBuffers and maybe this book out
Sorry I don't have a more concrete answer for you.
Very annoyed by this lack of the seemingly simple functionality of pushing attribs, but you can use glGet to get the state of something before you set the param, like this, which worked for me:
GLboolean tmpB;
GLint tmpSrc, tmpDst;
glGetBooleanv(GL_BLEND,&tmpB);
glGetIntegerv(GL_BLEND_SRC_ALPHA,&tmpSrc);
glGetIntegerv(GL_BLEND_DST_ALPHA,&tmpDst);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//draw rectangle
CGSize winSize=[[CCDirector sharedDirector] winSize];
CGPoint vertices[] = { ccp(0,0), ccp(winSize.width,0), ccp(winSize.width,20), ccp(0,20) };
glColor4ub(255, 0, 255, 55);
ccFillPoly( vertices, 4, YES);
if(!tmpB)
glDisable(GL_BLEND);
glBlendFunc(tmpSrc, tmpDst);

What are the major differences between making cool stuff with OpenGL ES VS making it with Flash? [closed]

I am digging into OpenGL ES on the iPhone, and I still have no clear picture of OpenGL ES versus Flash.
About Flash I only know it is capable of making really nice animations and transitions. For the best example, let's look at the Ocarina flute for the iPhone. This application makes some simple but powerful animations. It expands little circles so they get bigger and bigger, and while they do, they have radial gradients and change alpha.
That is the kind of cool flash animation stuff I mean.
Now, both in Flash and OpenGL ES there is the capability of hooking up some kind of "objects" like circles or whatever with some kind of code, so that touching a circle kicks off another animation.
To my understanding both things are pretty much exactly the same when looking at the surface. Maybe someone can point out the differences from the inside, i.e. does a Flash developer have to learn a completely new way of thinking and working, or do both technologies have many things in common?
first of all, there is no flash player for the iPhone ... so yeah, that's not really an option ... but there is a flash like alternative ... have a look at this post ... and at Haxe ...
in short: Haxe is an open source language, that allows targeting flash player 6-10, PHP, JavaScript, neko VM, and C++ (more targets coming) ... the creator of the Haxe/cpp compiler backend uses it to get on the iPhone ... as a matter of fact, he has written a library, that allows you to use flash9 API on C++ or neko with openGL (neash? nme? can't remember which is which ... :D) ...
this allows you to write Haxe for the iPhone, using flash 9 api ... Haxe is a little different from actionscript, but you will quickly get the hang of it ...
currently, a limitation is, that his library uses either software rendering or Open GL for graphics, which is why he has to use software rendering on the iPhone (that's not really an option) ... but it's fair enough for developing and i think, hopefully he'll be having the Open GL ES integration soon ...
so maybe that would be a way to go ...
but as your active tags indicate, you never really dealt with actionscript or flash, so i am not sure, if it is not simpler to use the adequate Cocoa frameworks to get your stuff done ... there is an advantage in using Haxe and the flash API: i think it is a lot easier than the related Cocoa frameworks (a friend of mine, who is a mac developer, started with the iPhone, so i have a superficial insight, but it was a little shocking, how little he had up and running after 2 days of work) ... the flash API is more high level, thus simpler, but also limited and may not be as performant ... but you will get results really quickly ... but performance matters on the iPhone, because it means power usage ...
this is a full Haxe program doing pretty much the circle stuff the ocarina does:
package;
import flash.display.DisplayObject;
import flash.display.GradientType;
import flash.display.LineScaleMode;
import flash.display.Shape;
import flash.display.StageAlign;
import flash.display.StageScaleMode;
import flash.events.Event;
import flash.filters.BlurFilter;
import flash.geom.Matrix;
import flash.Lib;
class Main {
    static function main() {
        for (i in 0...(Std.random(5) + 5)) {
            var s = genCircle();
            Lib.current.addChild(s);
            pulsate(s, Std.random(10) + 10);
        }
    }

    static function genCircle():Shape {
        var radius = Std.random(50) + 50;
        var s = new Shape();
        var color = 0x110000 * Std.random(8) + 0x001100 * Std.random(8) + 0x000011 * Std.random(8) + 0x7F7F7F;
        s.graphics.lineStyle(5, color, 1, false, LineScaleMode.NONE);
        if (Std.random(2) == 1)
            s.filters = [new BlurFilter(4, 4, 3)];
        else {
            var m = new Matrix();
            m.createGradientBox(radius, radius, 0, -radius / 2, -radius / 2);
            s.graphics.beginGradientFill(GradientType.RADIAL, [color, color], [1, 0.5], [0, 0xFF], m);
        }
        s.graphics.drawCircle(0, 0, radius);
        s.x = 50 + Std.random(300);
        s.y = 50 + Std.random(300);
        return s;
    }

    static function pulsate(d:DisplayObject, speedModifier:Float):Void {
        var cter = Std.int(Std.random(Std.int(speedModifier)));
        d.addEventListener(Event.ENTER_FRAME, function (e:Event):Void {
            var x = Math.sin(cter++ / speedModifier) / 2 + 0.5;
            d.scaleX = d.scaleY = x + 0.5;
            d.alpha = 1.2 - x;
        });
    }
}
still, this is a bit further in the future ... what is left to do, is use Open GL ES, as mentioned, and, by far more tricky, get Haxe bindings to all the inputs an iPhone produces (acceleration, multi-touch) ...
well, you might wanna keep an eye on it, if this road seems interesting to you ... or even, as someone, who works with the iPhone, you might even want to contribute to the Haxe->C++->iPhone solution ... ;)
hope that helped ... or that it was interesting at least ... :)
In OpenGL there is no notion of 'objects' and 'gradients' and all the nice things flash offers. OpenGL just knows about vertices, normals, colors, matrices etc... Those things combined, together with a lot of programming can make amazing things, but it won't be as easy as using Flash. Keep in mind there's no (good) Flash player for the iPhone, yet.
OpenGL is a lower-level API than what you normally use when developing with Flash. You can do a lot of stuff with it, but either you have to build your higher level abstractions yourself or use some existing library.
If you are coming from Flash world, I assume that you mostly need 2D abstractions like Layers etc. Cocos 2D for iPhone is a great library for that with a lot of cool stuff already built in. You can get it from here

Importing Maya ASCII to game

I am currently working on creating an import-based pipeline for my indie game using Maya ASCII .ma as source format and my own format for physics and graphics as output. I'll keep stuff like range-of-motion attributes inside Maya, such as for a hinge joint. Other types of parameters that needs a lot of tweaks end up in separate source files (possibly .ini for stuff like mass, spring constants, strength of physical engines and the like).
The input is thus one .ma and one .ini, and the output is, among other things, one .physics and several .mesh files (one .mesh file per geometry/material).
I am also probably going to use Python 3.1 to reformat the data, and I already found some LGPL 2.1 code that reads basic Maya ASCII. I'll probably also use Python to launch the platform during development. Game is developed in C++.
Is there anything in all of this which you would advice against? A quick summary of things that might be flawed:
Import-based pipeline (not export-based)?
Maya (not 3DS)?
Maya ASCII .ma (not .mb)?
.ini (not .xml)?
Separation of motion attributes in Maya and "freak-tweak" attributes in .ini (not all in Maya)?
Python 3.1 for building data (not embedded C++)?
Edit: if you have a better suggestion of how to implement the physics/graphics import/export tool chain, I'd appreciate the input.
If you really want to do this, you should be aware of a few things. The main one being that it's probably more of a hassle than you'd first expect. Some others are:
Maya .ma (at least up to the current v2010) is built out of MEL. MEL is Turing-complete, but the way the hierarchical scene is described in terms of nodes is a lot more straightforward than "code".
Add error-handling early or you’ll be sorry later.
You have to handle a lot of different nodes, where the transforms are by far the most obnoxious ones. Other types include meshes, materials (many different types), “shader engines” and files (such as textures).
.ma only describes shapes and tweaks; but very rarely defines raw vertices. I chose to keep a small “export” script inside the .ma to avoid having to generate all primitives exactly the same way as Maya. In hindsight, this was the right way to go. Otherwise you have to be able to do stuff like
“Create sphere”,
“Move soft selection with radius this-and-that from here to there”, and
“Move vertex 252 xyz units” (with all vertices implicitly defined).
Maya defines polygons for meshes; you might want to convert these to triangles.
All parameters that exist on a certain type of node are either explicitly or implicitly defined. You have to know their default values (for when they are implicitly defined).
Basically, an object is defined by a transform, a mesh and a primitive. The transform is parent of the mesh. The transform contains the scaling, rotation, translation, pivot translations, and yet some. The mesh links to a primitive and vice versa. The primitive has a type (“polyCube”) and dimensions (“width, height, depth”).
Nodes may have “multiple inheritance”. For instance, a mesh instanced several times has a single mesh (and a single primitive), but multiple parents (the transformations).
Node transforms are computed like so (see Maya xform doc for more info):
vrt = getattr("rpt")
rt = mat4.translation(vrt)
...
m = t * rt * rpi * r * ar * rp * st * spi * sh * s * sp
I build my engine around physics, so the game engine wants meshes placed on physical shapes, but when modeling I want it the other way around. This is to keep it generic for future applications ("meshes without physics"). This tiny decision caused me serious grief in transformations. Linear algebra got a brush-up. Problems in scaling, rotation, translation and shearing; you name it, I've had it.
I built my import tool on cgkit’s Maya parser. Thanks Matthias Baas!
If you're going to do something similar I strongly recommend you peek at my converter before writing your own. This "small" project took me three agonizing months to get to a basic working condition.
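On the polygon-to-triangle point above: convex polygons can be split with a simple fan, which is often all a modeling export needs. A sketch in C (concave polygons would need real ear clipping):

```c
#include <stddef.h>

/* Fan-triangulate a convex polygon: n vertices become n - 2 triangles
 * (v0, v_i, v_i+1). Writes index triples to out and returns the number
 * of triangles produced. */
static size_t fan_triangulate(const unsigned *poly, size_t n, unsigned *out)
{
    size_t tris = 0;
    for (size_t i = 1; i + 1 < n; ++i) {
        out[tris * 3 + 0] = poly[0];
        out[tris * 3 + 1] = poly[i];
        out[tris * 3 + 2] = poly[i + 1];
        ++tris;
    }
    return tris;
}
```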
As a general serialization format that's both human readable and human writable, has excellent Python support (and, well, any language support really), you might want to consider using YAML or JSON over ini files or XML.
XML could be acceptable in your case if you never generate files by hand.
One of the advantages of JSON and YAML is typing: both formats are parsed down to Python lists, dictionaries, floats, ints... Basically: sane python types.
Also, unless you're sure that every library you'll ever use works on 3.1, you might want to consider sticking with 2.x for a bit due to library availability issues.
You should consider using an export-based pipeline or a standardized file format such as OBJ or COLLADA instead of re-implementing a .ma parser and replicating all the Maya internals necessary to interpret it.
The .ma/.mb format is not intended to be read by any program other than Maya itself, so Autodesk does not put any effort into making this an easy process. To parse it 100% correctly you would need to implement the whole MEL scripting language.
All of the Maya-based pipelines I've seen either first export content into a standardized file format, or run MEL scripts within Maya to dump content using the MEL node interfaces.
Note that Maya can be run in a "headless" mode where it loads a scene, executes a MEL script, and exits, without loading the GUI. Thus there is no problem using it within automated build systems.