Is there any way I can set a symbolic breakpoint that will trigger when any OpenGL function call sets any state other than GL_NO_ERROR? Initial evidence suggests opengl_error_break is intended to serve just that purpose, but it doesn't break.
Building on Lars' approach, you can track these errors automatically; the idea is based on some preprocessor magic and generated stub functions.
I wrote a small Python script which processes the OpenGL header (I used the Mac OS X one in the example, but it should also work with the iOS one).
The Python script generates two files: a header to include everywhere you call OpenGL in your project, like this (you can name the header however you want):
#include "gl_debug_overwrites.h"
The header contains macros and function declarations after this scheme:
#define glGenLists _gl_debug_error_glGenLists
GLuint _gl_debug_error_glGenLists(GLsizei range);
The script also writes a source file to the same output stream; save it separately, then compile and link it with your project.
This wraps every gl* function in another function prefixed with _gl_debug_error_, which checks for errors along these lines:
GLuint _gl_debug_error_glGenLists(GLsizei range) {
    GLuint var = glGenLists(range);
    CHECK_GL_ERROR();
    return var;
}
Wrap your OpenGL calls so that glGetError is called after every call in debug mode. Within the wrapper method, create a conditional breakpoint and check whether the return value of glGetError is something other than GL_NO_ERROR.
Details:
Add this macro to your project (from OolongEngine project):
#define CHECK_GL_ERROR() ({ GLenum __error = glGetError(); if(__error) printf("OpenGL error 0x%04X in %s\n", __error, __FUNCTION__); (__error ? NO : YES); })
Search for all your OpenGL calls manually or with an appropriate regex. Then you have two options, shown here for the glViewport() call as an example:
Replace the call with glViewport(...); CHECK_GL_ERROR()
Replace the call with glDebugViewport(...); and implement glDebugViewport as shown in (1).
I think what could get you out of this problem is capturing OpenGL ES frames (scroll down to "Capture OpenGL ES Frames"), which is now supported by Xcode. At least this is how I debug my OpenGL games.
By capturing frames when you know an error is happening, you can identify the issue in the OpenGL stack without much effort.
Hope it helps!
Related
I'm working on a project based on the stm32f4discovery board using IAR Embedded Workbench (though I'm very close to the 32 KB limit on the free version, so I'll have to find something else soon). This is a learning project for me, and so far I've been able to solve most of my issues with a few Google searches and a lot of trial and error. But this is the first time I've encountered a run-time error that doesn't appear to be caused by a problem with my logic, and I'm pretty stuck. Any general debugging strategy advice is welcome.
So here's what happens. I have an interrupt on a button; each time the button is pressed, the callback function runs my void cal_acc(uint16_t* data) function defined in stm32f4xx_it.c. This function gathers some data, and on the 6th press, it calls my void gn(float32_t* data, float32_t* beta) function. Eventually, two functions are called, gn_resids and gn_jacobian. The functions are very similar in structure. Both take in 3 pointers to 3 arrays of floats and then modify the values of the first array based on the second two. Unfortunately, when the second function gn_jacobian exits, I get the HardFault.
Please look at the link (code structure) for a picture showing how the program runs up to the fault.
Thank you very much! I appreciate any advice or guidance you can give me,
-Ben
Extra info that might be helpful below:
Running in debug mode, I can step into the function and run through all of its lines one by one and it's OK. But as soon as I execute the last line, when it should exit and move on to the next line in the calling function, it crashes. I have also tried rearranging the order of the calls around this function, and it is always this one that crashes.
I had been getting a similar crash on the first function gn_resids when one of the input pointers pointed to an array that was not defined as "static". But now all the arrays are static and I'm quite confused - especially since I can't tell what is different between the gn_resids function that works and the gn_jacobian function that does not work.
acc1beta is declared as a float array at the beginning of main.c and then also as extern float32_t acc1beta[6] at the top of stm32f4xx_it.c. I want it as a global variable; there is probably a better way to do this, but it's been working so far with many other variables defined in the same way.
Here's a screenshot of what I see when it crashes during debug (after I pause the session) IAR view at crash
EDIT: I changed the code of gn_step to look like this for a test so that it just runs gn_resids twice and it crashes as soon as it gets to the second call - I can't even step into it. gn_jacobian is not the problem.
void gn_step(float32_t* data, float32_t* beta) {
    static float32_t resids[120];
    gn_resids(resids, data, beta);
    arm_matrix_instance_f32 R;
    arm_mat_init_f32(&R, 120, 1, resids);
    // static float32_t J_f32[720];
    // gn_jacobian(J_f32, data, beta);
    static float32_t J_f32[120];
    gn_resids(J_f32, data, beta);
    arm_matrix_instance_f32 J;
    arm_mat_init_f32(&J, 120, 1, J_f32);
Hardfaults on Cortex M devices can be generated by various error conditions, for example:
Access of data outside valid memory
Invalid instructions
Division by zero
It is possible to gather information about the source of the hardfault by looking into some processor registers. IAR provides a debugger macro that helps to automate that process. It can be found in the IAR installation directory arm\config\debugger\ARM\vector_catch.mac. Please refer to this IAR Technical Note on Debugging Hardfaults for details on using this macro.
Depending on the type of the hardfault that occurs in your program you should try to narrow down the root cause within the debugger.
The answer to this question here
Libopencm3 interrupt table on STM32F4
explains the whole mechanism nicely, but what I get is the whole vector table filled with blocking handlers.
I know that because I see it in the debugger (apart from the whole thing not working): disassembly screenshot showing the vector table.
It is as though the linker simply ignores my nicely defined interrupt handler function(s), e.g.:
void sys_tick_handler(void)
{
    ...
}
void tim1_up_isr(void)
{
    ...
}
I am using EmBitz IDE and have followed this tutorial here to get libopencm3 to work (and it does work except for this issue).
I have checked the function names n-fold and have tried several online examples including those from the libopencm3-examples project.
Everything compiles without a glitch and loads into the target board (STM32F103C8) and runs fine - except no ISRs get invoked (I do get interrupt(s) but they get stuck in blocking handlers).
Does anyone have an idea why is this happening?
It looks like you are linking with the standard vector table (from ST's SPL or HAL).
To check this, try renaming your sys_tick_handler() to SysTick_Handler() and tim1_up_isr() to TIM1_UP_IRQHandler().
If that works, find the file that defines SysTick_Handler and TIM1_UP_IRQHandler (I think it will be a startup*.s file) and delete it from your project.
I am using the glImageProcessing example from Apple to perform some filter operations on images. However, I would like to be able to load a new image into the texture.
Currently, the example loads the image with the line:
loadTexture("Image.png", &Input, &renderer);
(which I've modified to accept an actual UIImage):
loadTexture(image, &Input, &renderer);
However, in testing how to redraw a new image I tried implementing (in Imaging.c):
loadTexture(image, &Input, &renderer);
loadTexture(newImage, &Input, &renderer);
and the sample app crashes at the line:
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
in Texture.c
I have also tried deleting the active texture by
loadTexture(image, &Input, &renderer);
glDeleteTextures(GL_TEXTURE_2D, 0);
loadTexture(newImage, &Input, &renderer);
which also fails.
Does anyone have any idea how to remove the image/texture from the OpenGL ES interface so that I can load a new image?
Note: in Texture.c, Apple states "The caller of this function is responsible for deleting the GL texture object." I suppose this is what I am asking how to do. Apple doesn't seem to give any clues ;-)
Also note: I've seen this question posed many places, but no one seems to have an answer. I'm sure others will appreciate some help on this topic as well! Many thanks!
Cheers,
Brett
You're using glDeleteTextures() incorrectly in the second case. The first parameter to that function is how many textures you wish to delete, and the second is an array of texture names (or a pointer to a single texture name). You'll need to do something like the following:
glDeleteTextures(1, &textureName);
Where textureName is the name of the texture obtained at its creation. It looks like that value is stored within the texID component of the Image struct passed into loadTexture().
That doesn't fully explain the crash you see, which seems like a memory management issue with your input image (possibly an autoreleased object that is being discarded before you access its CGImage component).
The following function calls are deprecated in OpenAL 1.1; what is a proper replacement? The only answer I found on Google was "write your own function!" ;-)
alutLoadWAVFile
alutUnloadWAV
There are 8 file loading functions in ALUT (not including the three deprecated functions alutLoadWAVFile, alutLoadWAVMemory, and alutUnloadWAV).
The prefix of the function determines where the data is going; four of them start with alutCreateBuffer (create a new buffer and put the sound data into it), and the other four start with alutLoadMemory (allocate a new memory region and put the sound data into it).
The suffix of the function determines where the data comes from. Your options are FromFile (from a file!), FromFileImage (from a memory region), HelloWorld (fixed internal data of someone saying "Hello, world!"), and Waveform (generate a waveform).
I believe the correct replacement for alutLoadWAVFile would therefore be alutCreateBufferFromFile.
However, I would not use this blindly - it's suitable for short sound clips, but for e.g. a music track you probably want to load it in chunks and queue up multiple buffers, to ease the memory load.
These functions are all covered in the alut documentation, by the way.
"write your own" is pretty much the correct answer.
You can usually get away with using the deprecated functions since most implementations still include the WAV file handling functions, with one notable exception being iOS, for which you'd need to use audio file services.
I'd suggest making a standard prototype for "load wav file" and then depending on the OS, use a different loading routine. You can just stub it with a call to alutLoadWAVFile for systems known to still support it.
I'm working on a mobile game for several platforms (Android, iOS, and maybe even some kind of console in the future).
I'm trying to decide whether to use tr1::unordered_map or google::dense_hash_map to retrieve textures from a resource manager (for later binding with OpenGL). This can happen quite a few times per second (N lookups per frame, with my game running at ~60 fps).
Considerations are:
Performance (memory and cpu wise)
Portability
Any ideas or suggestions are welcome.
http://attractivechaos.wordpress.com/2008/10/07/another-look-at-my-old-benchmark/
http://attractivechaos.wordpress.com/2008/08/28/comparison-of-hash-table-libraries/
Go with the STL for standard containers. They have predictable behavior and can be used seamlessly with STL algorithms and iterators. The STL also gives you some performance guarantees.
This should also guarantee portability. Most compilers have the new standard implemented.
In a C++ project I developed, I wondered something similar: which was best, tr1::unordered_map, boost::unordered_map, or std::map? I ended up declaring a typedef, controlled at compile time:
#ifdef UnorderedMapBoost
typedef boost::unordered_map<cell_key, Cell> cell_map;
#else
#ifdef UnorderedMapTR1
typedef std::tr1::unordered_map<cell_key, Cell> cell_map;
#else
typedef std::map<cell_key, Cell> cell_map;
#endif // #ifdef UnorderedMapTR1
#endif // #ifdef UnorderedMapBoost
I could then control at compile-time which one to use, and profiled it. In my case, the portability ended up being more important, so I normally use std::map.