OpenGL ES 2.0 for iOS - multiple calls to glDrawElements causing EXC_BAD_ACCESS

Several years ago, I wrote a small Cocoa/Obj-C game framework for OpenGL ES 1.1 and iPhone. This was back when iOS 3.x was popular. My OpenGL ES 1.1 / iOS 3.x implementation of this all worked fine. Time passed, and here we are now with iOS 5.1, OpenGL ES 2.0, ARC, blocks, and other things. I decided that it was high time to port the project over to more... modern standards.
EDIT: Solved one of the problems on my own - the simulator crash. Sort of - I am now able to draw smaller models, but larger ones (like the test police car) still cause an EXC_BAD_ACCESS, even if that is the only, single call to glDrawElements. I was also able to fix drawing multiple meshes on the Simulator - however, I won't know if this works on-device until tomorrow morning (my 5.0 test device is my friend's iPhone, which I don't have access to until then). So I guess the main question is: why are larger models causing an EXC_BAD_ACCESS on the simulator?
Original post below
However, in moving it up to 5.0, I've run into two OpenGL ES 2.0 errors - possibly related. The first is simple: if I render my model on a device (iPhone 4S running 5.0.1), it displays, but if I render it on the simulator (iPhone Simulator running 5.0), it throws an EXC_BAD_ACCESS in glDrawElements. The second is also simple: I cannot draw multiple meshes. When I draw the model as one big group (one vertex array/index array combo) it draws fine - but when I draw the model as multiple parts (e.g., multiple calls to glDrawElements) it fails and displays a big black screen. The blackness is not from the model being drawn (I have verified this, as outlined below).
To sum it up before the much more detailed part: attempting to render my model on the simulator crashes, and attempting to draw it as multiple meshes gives a black screen.
Caveat: It all works fine for small meshes. I have no problem drawing my small, statically-declared cube over and over, even on the simulator. When I say statically-declared, I mean a hard-coded const array of structs that gets bound and loaded into the vertex buffer and a const array of GLushorts bound and loaded into the index array.
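For illustration, the statically-declared test data is along these lines - a rough sketch with made-up values, not the actual cube, using the same ModelVertex layout shown further down:
// Sketch of the hard-coded test geometry (illustrative values only)
static const ModelVertex kCubeVertices[] = {
    { .position = { -1, -1, -1, 1 }, .uv = { 0, 0, 0, 0 }, .normal = { 0, 0, -1, 0 } },
    { .position = {  1, -1, -1, 1 }, .uv = { 1, 0, 0, 0 }, .normal = { 0, 0, -1, 0 } },
    { .position = {  1,  1, -1, 1 }, .uv = { 1, 1, 0, 0 }, .normal = { 0, 0, -1, 0 } },
    // ... remaining vertices ...
};
static const GLushort kCubeIndices[] = { 0, 1, 2, /* ... */ };
// These get handed straight to glBufferData for the GL_ARRAY_BUFFER and
// GL_ELEMENT_ARRAY_BUFFER respectively.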
Note: when I say 'model' I mean an overall model, possibly made up of multiple vertex and index buffers. In code, this means a model simply holds an array of meshes, or model-groups. A mesh or model-group is a sub-unit of a model - one contiguous piece of it - with one vertex array and one index array, and it stores the lengths of both as well. In the case of the model I've been using, the body of the car is one mesh, the windows another, the lights a third. All together, they make up the model.
The model I am using is a police car; it has several thousand vertices and faces and is split into multiple parts (body, lights, windows, etc.) - the body is about 3000 faces, the windows about 100, the lights a bit fewer.
Here are some things to know:
My model is loading properly. I have verified this in two ways:
printing out the model vertices and manually inspecting them, and
displaying each model-group individually, as outlined in 2) below. I'd post images, but with the reputation limit and this being my first question, I can't. I have also re-built the model loader twice from scratch with no change, so I know the vertex and index buffers are in the correct order/format.
When I load the model as a single model-group (i.e., one vertex buffer/index buffer) it displays the whole model correctly. When I load the model as multiple model-groups, and display any given model-group individually, it displays correctly. When I try to draw multiple model-groups (multiple calls to glDrawElements), the big black screen happens.
The black screen is not because of the model being drawn. I verified this by changing my fragment shader to draw every pixel red no matter what. I always clear the color buffer to a medium gray (and the depth buffer as well, obviously), but attempting to draw multiple meshes/model-groups results in a black screen. We know it is not the model simply obscuring the view, because it is colored black instead of red. This occurs on the device; I do not know what would happen on the simulator, as I cannot get it to draw there at all.
My model will not draw in the simulator - not as a single mesh/model-group, nor as multiple mesh/model-groups. The application loads properly, but attempting to draw a mesh/model-group results in an EXC_BAD_ACCESS inside glDrawElements. The relevant parts of the backtrace are:
thread #1: tid = 0x1f03, 0x10b002b5, stop reason = EXC_BAD_ACCESS (code=1, address=0x94fd020)
frame #0: 0x10b002b5
frame #1: 0x09744392 GLEngine`gleDrawArraysOrElements_ExecCore + 883
frame #2: 0x09742a9b GLEngine`glDrawElements_ES2Exec + 505
frame #3: 0x00f43c3c OpenGLES`glDrawElements + 64
frame #4: 0x0001cb11 MochaARC`-[Mesh draw] + 177 at Mesh.m:81
EDIT: It is consistently able to draw smaller dynamically-created models (~100 faces), but not the ~3000 faces of the whole model.
I was able to get it to render a much smaller, less complicated, but still dynamically loaded model consisting of 192 faces / 576 vertices. I was able to display it both as a single vertex and index buffer, and split up into parts and rendered as multiple smaller vertex and index buffers. Attempting to draw the single-mesh model in the simulator still resulted in the EXC_BAD_ACCESS being thrown, but only on the first frame. If I force it to continue, it displays a very screwed-up model, and then every frame after that it displays 100% fine, exactly as it ought to.
My shaders are not in error. They compile and display correctly when I use a small, statically declared vertex buffer. However, for completeness I will post them at the bottom.
My code is as follows:
Render loop:
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//muShader is a subclass of a shader-handler I've written that tracks the active shader
//and handles attributes/uniforms
//[muShader use] just does glUseProgram(muShader.program); then
//disables the previous shader's attributes (if needed) and then
//activates its own attributes - in this case:
//it does:
// glEnableVertexAttribArray(self.position);
// glEnableVertexAttribArray(self.uv);
//where position and uv are handles to the position and texture coordinate attributes
[self.muShader use];
GLKMatrix4 model = GLKMatrix4MakeRotation(GLKMathDegreesToRadians(_rotation), 0, 1, 0);
GLKMatrix4 world = GLKMatrix4Identity;
GLKMatrix4 mvp = GLKMatrix4Multiply(_camera.projection, _camera.view);
mvp = GLKMatrix4Multiply(mvp,world);
mvp = GLKMatrix4Multiply(mvp, model);
//muShader.modelViewProjection is a handle to the shader's model-view-projection matrix uniform
glUniformMatrix4fv(self.muShader.modelViewProjection,1,0,mvp.m);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, self.policeTextureID);
//ditto on muShader.texture
glUniform1i(self.muShader.texture, 0);
for(int i=0; i < self.policeModel.count; i++)
{
//I'll expand muShader readyForFormat after this
[self.muShader readyForFormat:ModelVertexFormat];
//I'll expand mesh draw after this
[[self.policeModel meshAtIndex:i] draw];
}
muShader stuff
muShader binding attributes and uniforms
I won't post the whole muShader class - it is unnecessary; suffice to say that it works, or else it would not display anything at all, ever.
//here is where we bind the attribute locations when the shader is created
-(void)bindAttributeLocations
{
_position = glGetAttribLocation(self.program, "position");
_uv = glGetAttribLocation(self.program, "uv");
}
//ditto for uniforms
-(void)bindUniformLocations
{
_modelViewProjection = glGetUniformLocation(self.program, "modelViewProjection");
_texture = glGetUniformLocation(self.program, "texture");
}
muShader readyForFormat
-(void)readyForFormat:(VertexFormat)vertexFormat
{
switch (vertexFormat)
{
//... extra vertex formats removed for brevity
case ModelVertexFormat:
//ModelVertex is a struct, with the following definition:
//typedef struct{
// GLKVector4 position;
// GLKVector4 uv;
// GLKVector4 normal;
//}ModelVertex;
glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(_uv, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));
break;
//... extra vertex formats removed for brevity
}
}
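(BUFFER_OFFSET isn't shown in the question; it is presumably the usual macro for passing a byte offset as a pointer while a VBO is bound - something along the lines of:)
#define BUFFER_OFFSET(i) ((char *)NULL + (i))   // byte offset into the currently bound VBO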
Mesh stuff
setting up the vertex/index buffers
//this is how I set/create the vertex buffer for a mesh/model-group
//vertices is a c-array of ModelVertex structs
// created with malloc(count * sizeof(ModelVertex))
// and freed using free(vertices) - after setVertices is called, of course
-(void)setVertices:(ModelVertex *)vertices count:(GLushort)count
{
//frees previous data if necessary
[self freeVertices];
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(ModelVertex) * count, vertices, GL_STATIC_DRAW);
_vertexCount = count;
}
//this is how I set/create the index buffer for a mesh/model-group
//indices is a c-array of GLushort,
// created with malloc(count * sizeof(GLushort))
// and freed using free(indices) - after setIndices is called, of course
-(void)setIndices:(GLushort *)indices count:(GLushort)count
{
[self freeIndices];
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * count, indices, GL_STATIC_DRAW);
_indexCount = count;
}
mesh draw
//vertexBuffer and indexBuffer are handles to a vertex/index buffer
//I have verified that they are loaded properly
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);
Shader stuff
Vertex Shader
attribute highp vec4 position;
attribute lowp vec3 uv;
varying lowp vec3 fragmentUV;
uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;
void main()
{
fragmentUV = uv;
gl_Position = modelViewProjection * position;
}
Fragment shader
varying lowp vec3 fragmentUV;
uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;
void main()
{
gl_FragColor = texture2D(texture,fragmentUV.xy);
//used below instead to test the aforementioned black screen by setting
//every pixel of the model being drawn to red
//the screen stayed black, so the model wasn't covering the whole screen or anything
//gl_FragColor = vec4(1,0,0,1);
}

Answered it myself: when using multiple buffer objects, glEnableVertexAttribArray has to be called every time you bind the vertex/index buffer object, rather than simply once per frame (per shader). This was the cause of all of the problems, including the simulator crashing.
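In other words, the attribute enable/pointer calls belong in the per-mesh draw path, after that mesh's buffers are bound - roughly like this (a sketch; positionAttrib and uvAttrib stand in for the handles the muShader class actually holds):
//Inside the per-mesh draw, after binding this mesh's buffers:
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
//re-enable and re-specify the attributes for this buffer binding
glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(uvAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(uvAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));
glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);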
Closed.

Related

my shader is ignoring my worldspace height

I'm VERY new to shaders, so bear with me. I have a mesh that I want to put a sand texture on below a worldspace y position of, say, 10; otherwise it should be a grass texture. Apparently it seems to be ignoring anything I put in and only selecting the grass texture. Something IS happening, because my vert and tris count explodes with this function compared to if I just return the same texture. I just don't see anything, no matter what my sandStart value is.
this is in my frag function:
if (input.positionWS.y < _SandStart) {
return tex2D(_MainTex, input.uv)* mainLight.shadowAttenuation;
} else {
return tex2D(_SandTex, input.uv) * mainLight.shadowAttenuation;
}
Is there also a way I can easily debug some of the values?
Please note that the OP figured out that their specific problem wasn't caused by the code in the question but by an error in their geometry function; this answer is only about the question "Is there a way to debug shader values", as this debugging method helped the OP find the problem.
Debugging shader code can be quite a challenging task, depending on what it is you need to debug, and there are multiple approaches to it. Personally, the approach I like best is using colours.
If we break it down, there are three aspects of your code that could be faulty:
the value of input.positionWS.y
the if statement (input.positionWS.y < _SandStart)
Returning your texture return tex2D(_MainTex, input.uv)* mainLight.shadowAttenuation;
Let's walk down the list and test each individually.
First, check whether input.positionWS.y actually contains a value we expect it to contain. To do this, we can set any of the RGB channels to its value and just return that directly.
return float4(input.positionWS.y, 0, 0, 1);
Now if input.positionWS.y isn't a normalized value (i.e., a value that ranges from 0 to 1), this is almost guaranteed to just return your texture as entirely red. To normalize it we divide the value by its max value; let's take max = 100 for the example.
return float4(input.positionWS.y / 100, 0, 0, 1);
This should now make the texture full red at the top (where input.positionWS.y / 100 would be 1) and black at the bottom (where input.positionWS.y / 100 is zero), with a gradient from black to full red in between. (Note that since it's a position in world space, you may need to move the texture up/down to see the colour shift.) If this doesn't happen - for example, it always stays black or full red - then your issue is most likely input.positionWS.y.
The if statement. It could be that your statement (input.positionWS.y < _SandStart) always evaluates to true or always to false, meaning it'll never split. We can test this quite easily by commenting out the current texture return and instead just returning a flat colour, like so:
if(input.positionWS.y < _SandStart)
{
return float4(1,0,0,1);
}
else
{
return float4(0,0,1,1);
}
If we verified input.positionWS.y to be correct in step 1, and _SandStart is set correctly, we should see the texture divided into a red part (where the condition is true) and a blue part (where it is false) (again, since we're basing it off world position, we might need to change the material's height a bit to see it). If this division in colours doesn't happen, then the likely cause is that _SandStart isn't set properly, or is set to an incorrect value. (Assuming this is a property, you can inspect its value in the material editor.)
If both of the above steps yield the expected result, then return tex2D(_MainTex, input.uv) * mainLight.shadowAttenuation; is possibly the culprit. To debug this we can return one of the textures without the if statement and without shadowAttenuation, see if it applies the texture, and then return the other texture by changing which line is commented.
return tex2D(_MainTex, input.uv);
//return tex2D(_SandTex, input.uv);
If each of these textures gets applied properly separately, then it is unlikely that that was your cause, leaving either the shadowAttenuation (just add the multiplication back to the above test) or something different altogether that isn't covered by the code in your question.
Bonus round: if you have a shader property you want to debug, you can actually do this from C# as well using the material.Get<type> functions (the supported types can be found in the docs, and include the array variants too, as well as both Get and Set). A small example:
Properties
{
_Foo ("Foo", Float) = 2
_Bar ("Bar", Color) = (1,1,1,1)
}
can be debugged from C# using
Material mat = GetComponent<Renderer>().material;
Debug.LogFormat("_Foo value: {0}", mat.GetFloat("_Foo")); //prints 2
Debug.LogFormat("_Bar value: {0}", mat.GetColor("_Bar")); //prints (1,1,1,1)

How to get current frame from Animated Tile/Tilemap.animationFrameRate in Unity

I am using tilemaps and animated tiles from the 2D Extras package in Unity.
My tiles have 6 frames, at speed=2f, and my tilemap frame rate is 2.
Newly placed tiles always start on frame 1 and then immediately jump to the current frame of the other tiles already placed; the tilemap keeps every tile at the same pace, which is working as I want.
However, I would like newly placed tiles to start at the frame the others are currently on (instead of placing a tile that jumps from frame 1 to frame 4, I would like the new tile to start on frame 4).
I've found how to pick the frame I want to start on; however, I am having trouble retrieving which frame the animation is currently on. So I was wondering: how exactly can I access the current frame of animation of a given tilemap (or a given tile - I can create a dummy tile and just read the info out of it - how can I get the current frame of an animated tile?)
The animated tile feature seems to lack a way to retrieve this information; also, when I try tilemap.GetSprite it always returns the first frame of the sequence (not the sprite currently displayed), and there doesn't seem to be any method to poll info from tilemap.animationFrameRate.
I thought another method would be to set a clock and sync it to the rate of the animation, but since I can't get the exact frame duration, the clock eventually goes out of sync.
Any help would be appreciated!
I found a way to solve this, but it's not 100% reliable.
First of all, I used SuperTiled2Unity; that doesn't seem to be the point, though.
private void LateUpdate()
{
// I use this variable to monitor the run time of the game
this.totalTime += Time.deltaTime;
}
private void func()
{
// ...
TileBase[] currentTiles = tilemap.GetTilesBlock(new BoundsInt(new Vector3Int(0, 0, 0), new Vector3Int(x, y, 1)));
Dictionary<string, Sprite> tempTiles = new Dictionary<string, Sprite>();
//I use SuperTiled2Unity, but it doesn't matter - the point is to find the animated tiles
foreach (SuperTiled2Unity.SuperTile tile in currentTiles)
{
if (tile == null)
{
continue;
}
if (tile.m_AnimationSprites.Length > 1 && !tempTiles.ContainsKey(tile.name))
{
// find the animated tile's current frame
// SuperTiled2Unity processes animation by generating a sprite array based on the per-frame time set in the Tiled animation and on the AnimationFrameRate parameter.
// The length of the array is always a multiple of AnimationFrameRate; you can verify this in the debugger.
tempTiles.Add(tile.name, tile.m_AnimationSprites[GetProbablyFrameIndex(tile.m_AnimationSprites.Length)]);
}
}
//...
}
private int GetProbablyFrameIndex(int totalFrame)
{
//From the total running time, the total length of the tile animation, and AnimationFrameRate, the approximate frame index can be deduced.
int overFrameTime = (int)(totalTime * animationFrameRate);
return overFrameTime % totalFrame;
}
I have done some tests: for at least 30 minutes there is no deviation in the animations, but there may be a critical value; if that critical time is exceeded, errors may appear. It depends on the size of AnimationFrameRate and on how totalTime accumulates - after all, we don't know exactly when and how Unity updates animated tiles internally.
You could try the implementation presented in [1], which looks as follows:
MyAnimator.GetCurrentAnimatorClipInfo(0)[0].clip.length * (MyAnimator.GetCurrentAnimatorStateInfo(0).normalizedTime % 1) * MyAnimator.GetCurrentAnimatorClipInfo(0)[0].clip.frameRate;
[1] https://gamedev.stackexchange.com/questions/165289/how-to-fetch-a-frame-number-from-animation-clip

Copying data between metal textures of different shapes

I am converting two trained Keras models to Metal Performance Shaders. I have to reshape the output of the first graph and use it as input to the second graph. The first graph's output is an MPSImage with "shape" (1,1,8192), and the second graph's input is an MPSImage of "shape" (4,4,512).
I cast graph1's output image.texture as a float16 array, and pass it to the following function to copy the data into "midImage", a 4x4x512 MPSImage:
func reshapeTexture(imageArray:[Float16]) -> MPSImage{
let image = imageArray
image.withUnsafeBufferPointer { ptr in
let width = midImage.texture.width
let height = midImage.texture.height
for slice in 0..<128{
for w in 0..<width{
for h in 0..<height{
let region = MTLRegion(origin: MTLOriginMake(w, h, 0),
size: MTLSizeMake(1, 1, 1))
midImage.texture.replace(region: region, mipmapLevel: 0, slice: slice, withBytes: ptr.baseAddress!.advanced(by: ((slice * 4 * width * height) + ((w + h) * 4))), bytesPerRow: MemoryLayout<Float16>.stride * 4, bytesPerImage: 0)
}
}
}
}
return midImage
}
When I pass midImage to graph2, the output of the graph is a square with 3/4 garbled noise, 1/4 black in the bottom right corner. I think I am not understanding something about the MPSImage slice property for storing extra channels. Thanks!
Metal 2D texture arrays are nearly always stored in a Morton or “Z” ordering of some kind. Certainly MPS always allocates them that way, though I suppose on macOS there may be a means to make a linear 2D texture array and wrap an MPSImage around it. So, without due care, direct access to a 2D texture array's backing store is going to result in sadness and confusion.
The right way to do this is to write a simple Metal copy kernel. This gives you storage order independence and you don’t have to wait for the command buffer to complete before you can do the operation.
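For what it's worth, such a kernel is only a few lines of Metal. A minimal sketch, assuming the 8192 features are packed channels-last (as Keras typically stores them) with 4 channels per texture slice - the exact index math depends on how the first graph actually lays out its output:
#include <metal_stdlib>
using namespace metal;

// Copy/reshape a (1,1,8192) MPSImage into a (4,4,512) one, one thread per destination texel and slice
kernel void reshape_to_4x4x512(texture2d_array<half, access::read>  src [[texture(0)]],
                               texture2d_array<half, access::write> dst [[texture(1)]],
                               ushort3 gid [[thread_position_in_grid]])
{
    if (gid.x >= dst.get_width() || gid.y >= dst.get_height() || gid.z >= dst.get_array_size())
        return;
    half4 v;
    for (uint c = 0; c < 4; ++c) {
        // flat feature index of channel c at destination (x, y, slice), assuming (h, w, c) ordering
        uint f = (gid.y * dst.get_width() + gid.x) * 512 + gid.z * 4 + c;
        // source is 1x1x8192: pixel (0,0), slice f/4, component f%4
        v[c] = src.read(uint2(0, 0), f / 4)[f % 4];
    }
    dst.write(v, uint2(gid.x, gid.y), gid.z);
}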
A feature request in Radar might also be warranted. Please also look in the latest macOS / iOS seed to see if Apple recently added a reshape filter for you.

Render to FBO gives unexpected results

I have an Android plugin in Unity which will do some native rendering using OpenGL ES.
I have simplified the code to this, and it successfully reproduces the problem:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
//Draw texture to framebuffer
GLES20.glViewport(0, 0, width, height);
GLES20.glUseProgram(program);
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, matrix, 0);
GLES20.glEnableVertexAttribArray(a_Position);
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, verticesBuffer);
GLES20.glEnableVertexAttribArray(a_texCoord);
GLES20.glVertexAttribPointer(a_texCoord, 2, GLES20.GL_FLOAT, false, 0, uvBuffer);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
GLES20.glFinish();
GLES20.glFlush();
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
It works fine when I force Unity to use only OpenGL ES 2.0, but when using OpenGL ES 3.0 I get unexpected results, and the input array given here:
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, verticesBuffer);
is ignored and the given quad is drawn as a different shape. No matter what I change the input coordinates to, I still get the same odd shape.
I am not an OpenGL coder, so I cannot find the issue. Am I missing some state setup here?
As suggested by Reto Koradi in the comments, client-side vertex arrays are deprecated in ES 3.0, and when switching to VBOs it works. Not sure why, but I assume it is related to some OpenGL state left behind by Unity.
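For reference, the VBO path looks roughly like this (a sketch; the buffer handles are made up, and the sizes assume verticesBuffer is a FloatBuffer and indicesBuffer a ShortBuffer):
// One-time setup: upload the client-side buffers into VBOs
int[] vbo = new int[2];
GLES20.glGenBuffers(2, vbo, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, verticesBuffer.capacity() * 4, verticesBuffer, GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, vbo[1]);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indices.length * 2, indicesBuffer, GLES20.GL_STATIC_DRAW);

// Per draw: bind the VBOs and pass byte offsets instead of client-side pointers
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glEnableVertexAttribArray(a_Position);
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, 0);
// (same pattern for the UV buffer / a_texCoord)
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, vbo[1]);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, 0);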

GLKBaseEffect: light + textures

I'm trying to display some simple object with texture and enable spot light in my scene. I use GLKBaseEffect's texture & light.
When textures are disabled, I can see the light effect (when I rotate the object, it partly becomes dark, as expected). But when I enable textures (load them with GLKTextureInfo and bind them in -(void)glkView:drawInRect:), the light effect disappears.
I've tried to search in Google and re-read Apple's documentation, but still can't find the answer.
UPDATE:
Here is the code, I use to setup light:
_effect.lightingType = GLKLightingTypePerPixel;
_effect.lightModelAmbientColor = GLKVector4Make(.3f, .3f, .3f, 1);
_effect.colorMaterialEnabled = GL_TRUE;
_effect.light0.enabled = GL_TRUE;
_effect.light0.spotCutoff = [[PRSettings instance] floatForKey:PRSettingsKeyLightSpotCutoff];
_effect.light0.spotExponent = [[PRSettings instance] floatForKey:PRSettingsKeyLightExponent];
_effect.light0.diffuseColor = _effect.light0.specularColor = GLKVector4Make(1, 1, 1, 1);
_effect.light0.position = GLKVector4Make(0, 0, 0, 1);
[_effect prepareToDraw];
If I call this code twice, the light gets disabled somehow. Even without textures, on the second call there is no light at all.
Simple answer... you should use _effect.texture2d0.envMode = GLKTextureEnvModeModulate; to mix the input colour (the lit one) with the texture.
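In other words, when binding the texture loaded via GLKTextureInfo, the effect's first texture property should look roughly like this (a sketch; textureInfo stands in for your loaded GLKTextureInfo):
_effect.texture2d0.enabled = GL_TRUE;
_effect.texture2d0.name = textureInfo.name;
_effect.texture2d0.envMode = GLKTextureEnvModeModulate; // modulate the texture with the lit colour
[_effect prepareToDraw];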