I am using the glImageProcessing example from Apple to perform some filter operations. However, I would like to be able to load a new image into the texture.
Currently, the example loads the image with the line:
loadTexture("Image.png", &Input, &renderer);
(which I've modified to accept an actual UIImage):
loadTexture(image, &Input, &renderer);
However, in testing how to redraw a new image I tried implementing (in Imaging.c):
loadTexture(image, &Input, &renderer);
loadTexture(newImage, &Input, &renderer);
and the sample app crashes at the line:
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
in Texture.c
I have also tried deleting the active texture by
loadTexture(image, &Input, &renderer);
glDeleteTextures(GL_TEXTURE_2D, 0);
loadTexture(newImage, &Input, &renderer);
which also fails.
Does anyone have any idea how to remove the image/texture from the OpenGL ES interface so that I can load a new image?
Note: in Texture.c, Apple states "The caller of this function is responsible for deleting the GL texture object." I suppose this is what I am asking how to do. Apple doesn't seem to give any clues ;-)
Also note: I've seen this question posed many places, but no one seems to have an answer. I'm sure others will appreciate some help on this topic as well! Many thanks!
Cheers,
Brett
You're using glDeleteTextures() incorrectly in the second case. The first parameter to that function is how many textures you wish to delete, and the second is an array of texture names (or a pointer to a single texture name). You'll need to do something like the following:
glDeleteTextures(1, &textureName);
Where textureName is the name of the texture obtained at its creation. It looks like that value is stored within the texID component of the Image struct passed into loadTexture().
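Assuming Input is the Image struct from the sample and its texID field holds the GL texture name (which matches Apple's glImageProcessing code), the reload sequence might look like this sketch:

loadTexture(image, &Input, &renderer);

/* ...later, when a new image should replace the old one... */
glDeleteTextures(1, &Input.texID);        /* free the old GL texture object */
loadTexture(newImage, &Input, &renderer); /* create a fresh texture */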
That doesn't fully explain the crash you see, which seems like a memory management issue with your input image (possibly an autoreleased object that is being discarded before you access its CGImage component).
I am trying to use OpenCV with Unity for image processing, and I am trying to make the data transfers between OpenCV and Unity code as efficient as possible.
Currently, I am able to create a new byte[] in C#, then load an image into these bytes in OpenCV, and then use texture.LoadRawTextureData(array) and texture.Apply() to show this texture in Unity.
However, the Unity documentation recommends using texture.GetRawTextureData() to get a reference to the NativeArray (the overload that returns byte[] makes a copy of the raw data) and then writing the data directly into this buffer (+ calling Apply()).
Unfortunately, the documentation on NativeArrays is rather scarce - how exactly are NativeArrays laid out in memory? They do have a ToArray() function, but this again makes a copy of the data. What I need is a byte[] array, which can be either RGB24 or RGBA32 (RGBA seems to be preferred; even though it is memory-inefficient when the texture is opaque, modern GPUs apparently do not support RGB24).
Is there any way to pass a pointer to the beginning of the texture's buffer without making copies and calling LoadRawTextureData()? Or is the data in the Texture stored in a completely different format?
I had the same confusion over NativeArray. The CopyTo() method doesn't seem to allocate memory. There is a ToArray() method, which I'm certain allocates.
I was able to work out this utility method which is working fine for a webcam feed.
private byte[] m_byteCache = null;

public byte[] GetRawTextureData(Texture2D texture)
{
    // A view into the texture's raw memory -- no allocation, no copy.
    NativeArray<byte> nativeByteArray = texture.GetRawTextureData<byte>();

    // Reallocate the managed cache only when the size changes.
    if (m_byteCache?.Length != nativeByteArray.Length)
    {
        m_byteCache = new byte[nativeByteArray.Length];
    }

    // CopyTo copies the bytes but allocates nothing.
    nativeByteArray.CopyTo(m_byteCache);
    return m_byteCache;
}
ToArray() allocates a new array. CopyTo() doesn't allocate memory, but of course it still copies the data.
But what I gather from the documentation is that you should be able to just access the NativeArray like a normal array to modify the memory, and then call .Apply() on the corresponding Texture object. If you can make your C# OpenCV code write into it, that should do the trick.
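As a hedged sketch of that direct-write approach (assuming a Texture2D created with TextureFormat.RGBA32; the loop body just stands in for whatever your OpenCV-side code would write):

void WriteDirect(Texture2D tex)
{
    // A view into the texture's raw memory; writes land directly in the buffer.
    NativeArray<byte> buffer = tex.GetRawTextureData<byte>();
    for (int i = 0; i < buffer.Length; i += 4)
    {
        buffer[i]     = 255; // R
        buffer[i + 1] = 0;   // G
        buffer[i + 2] = 0;   // B
        buffer[i + 3] = 255; // A
    }
    tex.Apply(); // upload the modified buffer to the GPU
}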
The caveat with NativeArray, I guess, is that you get direct access to the memory of whatever implementation your code runs on, so the exact byte representation could differ depending on the platform. Also, the memory becomes invalid as soon as the texture is gone.
Is there any way I can set a symbolic breakpoint that will trigger when any OpenGL function call sets any state other than GL_NO_ERROR? Initial evidence suggests opengl_error_break is intended to serve just that purpose, but it doesn't break.
Building on Lars' approach, you can track these errors automatically; the idea relies on some preprocessor magic and generated stub functions.
I wrote a small Python script which processes the OpenGL header (I used the Mac OS X one in the example, but it should also work with the iOS one).
The Python script generates two files: a header to include in your project everywhere you call OpenGL (you can name the header however you want), like this:
#include "gl_debug_overwrites.h"
The header contains macros and function declarations following this scheme:
#define glGenLists _gl_debug_error_glGenLists
GLuint _gl_debug_error_glGenLists(GLsizei range);
The script also produces a source file in the same output stream, which you should save separately, compile, and link into your project.
This wraps each gl* function in another function prefixed with _gl_debug_error_, which then checks for errors, similar to this:
GLuint _gl_debug_error_glGenLists(GLsizei range) {
    GLuint var = glGenLists(range);
    CHECK_GL_ERROR();
    return var;
}
Wrap your OpenGL calls so that glGetError is called after every call in debug mode. Within the wrapper, create a conditional breakpoint that checks whether the return value of glGetError is anything other than GL_NO_ERROR.
Details:
Add this macro to your project (from the OolongEngine project):
#define CHECK_GL_ERROR() ({ GLenum __error = glGetError(); if(__error) printf("OpenGL error 0x%04X in %s\n", __error, __FUNCTION__); (__error ? NO : YES); })
Search for all your OpenGL calls, manually or with an appropriate regex. Then you have two options, shown here for the glViewport() call:
1. Replace the call with glViewport(...); CHECK_GL_ERROR();
2. Replace the call with glDebugViewport(...); and implement glDebugViewport so it performs the check from option (1) internally (see the sketch below).
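A minimal sketch of option (2), reusing the CHECK_GL_ERROR() macro from above (glDebugViewport is just a name chosen for the wrapper):

void glDebugViewport(GLint x, GLint y, GLsizei width, GLsizei height) {
    glViewport(x, y, width, height);
    CHECK_GL_ERROR(); /* logs the error code and enclosing function if glGetError() != GL_NO_ERROR */
}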
I think what could get you out of this problem is capturing OpenGL ES frames (scroll down to "Capture OpenGL ES Frames"), which is now supported by Xcode. At least this is how I debug my OpenGL games.
By capturing the frames when you know an error is happening you could identify the issue in the OpenGL Stack without too much effort.
Hope it helps!
I want to modify Apple's aurioTouch sample code to generate the waveform from an audio file instead of rendering the waveform from the mic input. I tried to do it, but I am not able to understand where and what changes to make. Can anyone guide me on how this can be achieved?
Thanks,
Look inside the render callback for a function named AudioUnitRender.
The render callback fires whenever the speakers are hungry for data.
IIRC, aurioTouch simply grabs however many samples are required from the microphone using this function.
Of course, the first time round it will fail, because there will be nothing waiting.
Anyway, just comment out this function and instead fill the buffer yourself with samples from your file (which I think you would probably want to load into memory in advance; you probably don't want file I/O clogging a high-priority thread).
That means you will probably need to create some sort of AudioFile class and pass a reference to an instance of it when you set up the render callback. That way you will be able to access the data from within the render callback (which is a plain C function, i.e. not a member of a class, so it has no other way to access class data -- unless you want to do something horrible with file-level variables).
Make sure you declare this AudioFile* audiofile property as nonatomic if it is a property; you don't want your render callback kept waiting because some other thread is inside the object and consequently holds a lock on it.
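A minimal sketch of that idea, assuming the file's samples were decoded into memory beforehand (AudioFileData and its fields are hypothetical names, not part of aurioTouch; mono 16-bit output is assumed):

typedef struct {
    SInt16 *samples; /* whole file, decoded into memory up front */
    UInt32  count;   /* total sample frames */
    UInt32  cursor;  /* next frame to hand to the speakers */
} AudioFileData;

static OSStatus renderFromFile(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioFileData *file = (AudioFileData *)inRefCon; /* set when registering the callback */
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        out[i] = file->samples[file->cursor % file->count]; /* loop when the file ends */
        file->cursor++;
    }
    return noErr;
}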
We're repeatedly making a CGLayer, doing processing, and then releasing it. This happens a lot in real time. Surely there is a lot of overhead in making a whole new CGLayer each time. So...
Surely it would be better to just keep the layer around, and erase all the data from it each time -- rather than creating a new one from scratch.
Note: if you paint in a blank or clear rectangle covering everything, that just adds even more data on top of your extant paths.
So, how to actually "erase" or "start again" a CGLayer?
There is a function CGContextBeginPath(cc), but it's confusing: it seems to clear out only "that" path; it does not appear to erase the whole CGLayer back to a no-data state.
How to return a CGLayer to a state of no-data? Does anyone know?
Update...
It turns out there is actually NO WAY TO DO THIS.
After considerable experimentation, we have determined that there appears to be no way to clear out all the data from a CGLayer (which is disappointing really).
Note that adding a new white or clear rectangle does only that - it actually adds more data.
So unfortunately there is no known way to do this. If you are building these at a high rate (perhaps for a calculation), you just have to start with a fresh one each time. Or, you can apparently delete (actually delete, not just cover) just the one path using CGContextBeginPath().
Hopefully this will help someone in the future.
Once you have the context, call CGContextClearRect(cc, someRect) to clear the contents.
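For example (a sketch; layer is your CGLayerRef and the rect covers its full size):

CGContextRef cc = CGLayerGetContext(layer);
CGSize size = CGLayerGetSize(layer);
CGContextClearRect(cc, CGRectMake(0, 0, size.width, size.height)); /* back to transparent */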
Why don't you just fill it with a rectangle of (clear/white) color?
Make sure the layer is not opaque if you want to clear it with clearColor (transparent).
I am designing a music visualiser application for the iPhone.
I was thinking of doing this by picking up data via the iPhone's mic, running a Fourier Transform on it and then creating visualisations.
The best example I have been able to find of this is aurioTouch, which produces a perfect graph based on FFT data. However, I have been struggling to understand / replicate aurioTouch in my own project.
I am unable to understand where exactly aurioTouch picks up the data from the microphone before it does the FFT.
Also, are there any other examples of code that I could use to do this in my project? Or any other tips?
Since I am planning to use the mic input myself, I thought your question was a good opportunity to get familiar with the relevant sample code.
I will retrace my steps through the code:
Starting off in SpectrumAnalysis.cpp (since it is obvious the audio has to get to this class somehow), you can see that the class method SpectrumAnalysisProcess has a 2nd input argument, const int32_t* inTimeSig --- this sounds like a promising starting point, since the input time signal is what we are looking for.
Using the right-click menu item Find in project on this method, you can see that except for the obvious definition & declaration, this method is used only inside the FFTBufferManager::ComputeFFT method, where it gets mAudioBuffer as its 2nd argument (the inTimeSig from step 1). Looking for this class data member gives more than 2 or 3 results, but most of them are again just definitions/memory allocations etc. The interesting search result is where mAudioBuffer is used as an argument to memcpy, inside the method FFTBufferManager::GrabAudioData.
Again using the search option, we see that FFTBufferManager::GrabAudioData is called only once, inside a method called PerformThru. This method has an input argument called ioData (sounds promising) of type AudioBufferList.
Looking for PerformThru, we see it is used in the following line: inputProc.inputProc = PerformThru; - we're almost there: it looks like registering a callback function. Looking at the type of inputProc, we indeed see it is AURenderCallbackStruct - that's it. The callback is called by the audio framework, which is responsible for feeding it with samples.
You will probably have to read the documentation for AURenderCallbackStruct (or better yet, the Audio Unit Hosting guide) to get a deeper understanding, but I hope this gives you a good starting point.
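For reference, registering such an input callback looks roughly like this (a sketch modeled on aurioTouch's setup; PerformThru and THIS are names from the sample, rioUnit stands for the RemoteIO audio unit, and the rest is the standard Audio Unit API):

AURenderCallbackStruct inputProc;
inputProc.inputProc = PerformThru; /* invoked whenever audio frames are ready */
inputProc.inputProcRefCon = THIS;  /* context pointer handed back to the callback */

AudioUnitSetProperty(rioUnit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,
                     &inputProc,
                     sizeof(inputProc));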