In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want to get the images that are shown in the Synthetic Camera RGB data and Synthetic Camera Depth data previews (especially the latter), which are the camera windows you can see on the left of the following picture.
import pybullet as p
from PIL import Image

# point the debug visualizer camera at the scene
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
# img is a tuple: (width, height, rgbPixels, depthPixels, segmentationMask)
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)  # float32 values in [0, 1], so PIL mode 'F'
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I have already run the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel format, or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get "cannot write mode F as PNG" errors otherwise. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And to end, just a remark: these depth images keep changing. Looking through all of them, then at the RGB images, and then back at the depth images shows different results, even though they come from the same frames. I have never seen anything like this before.
I thought: "I managed to fix this some time ago, might as well post the answer I found."
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

IMG_SIZE = 224  # must match the width/height passed to getCameraImage

img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
# img[2] is the RGBA buffer, img[3] the depth buffer, img[4] the segmentation mask
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# linearize the non-linear depth buffer; near and far are the clipping-plane
# distances used when building the projection matrix
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop the alpha channel so it can be saved as JPEG
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
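For completeness, here is a minimal sketch of where near and far come from (the values and camera placement here are assumptions, not my actual setup): they must be the same clipping-plane distances used to build the projection matrix that rendered the image.

near, far = 0.01, 10.0  # assumed clipping planes; use whatever your camera uses
view_matrix = p.computeViewMatrix(cameraEyePosition=[0, 0, 1],
                                  cameraTargetPosition=[0, 0, 0],
                                  cameraUpVector=[0, 1, 0])
proj_matrix = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)
img = p.getCameraImage(IMG_SIZE, IMG_SIZE, viewMatrix=view_matrix,
                       projectionMatrix=proj_matrix,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)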
Final Images:
This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few using Metal, but those focused on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up in Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil test to pass where the number of overlapping triangles is odd, so that I can achieve something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
let stencilDescriptor = MTLDepthStencilDescriptor()
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get looks as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil...
Any clue?
Thank you in advance
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertexes and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
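Here is a minimal sketch of the two depth/stencil states that approach implies (device is your MTLDevice; the variable names are mine, so treat this as an illustration of the idea rather than a drop-in implementation):

// Pass 1: stencil-only fill. The compare function always passes, and every
// triangle drawn over a pixel inverts the low stencil bit, so the buffer
// ends up 1 where the coverage count is odd and 0 where it is even.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always
fillStencil.stencilFailureOperation = .keep
fillStencil.depthFailureOperation = .keep
fillStencil.depthStencilPassOperation = .invert
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1
let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil // same behavior for both windings
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)

// Pass 2: color render. Only fragments whose stencil value equals the
// reference value (1) survive, and the stencil buffer is left untouched.
let drawStencil = MTLStencilDescriptor()
drawStencil.stencilCompareFunction = .equal
drawStencil.stencilFailureOperation = .keep
drawStencil.depthFailureOperation = .keep
drawStencil.depthStencilPassOperation = .keep
drawStencil.readMask = 0x1
drawStencil.writeMask = 0x1
let drawDescriptor = MTLDepthStencilDescriptor()
drawDescriptor.frontFaceStencil = drawStencil
drawDescriptor.backFaceStencil = drawStencil
let drawState = device.makeDepthStencilState(descriptor: drawDescriptor)

In the second pass's encoder you would also call renderEncoder.setStencilReferenceValue(1) and set the stencil attachment's loadAction to .load so the counts written in the first pass survive.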
I'm seeing very strange behaviour when rendering SKSpriteNodes using an atlas via imageNamed.
sprite = SKSpriteNode(imageNamed: triangle.triangleType1.spriteName)
sprite.size.height = TileHeight
sprite.size.width = TileWidth
sprite.zRotation = DegreesToRadians(triangle.rotation)
sprite.position = pointForColumn(triangle.column, row:triangle.row, columnOffset: TileWidth, rowOffset: TileHeight, tw: TileWidth,th: TileHeight)
triangleLayer.addChild(sprite)
triangle.sprite1 = sprite
addedTriangles += 1
print(sprite)
Some of those sprites are not rendered when they are added to the layer (another SKSpriteNode) for the first time - printing the sprite shows that its size is (0, 0) and that its image is "TrViolet" instead of "TrViolet@2x.png".
Other ones are rendered as they are supposed to be.
Removing all the nodes and executing the same code a second time DOES work, though.
When I put the image in xcassets it works like a charm - even for the first time.
Is there any way to debug this kind of behaviour, or do you have any idea what might be happening here?
I am making a game that involves freehand drawing and sprites that animate when they pass over it. So I have to use color detection and trigger an event when a sprite encounters a change in color at the screen position it passes through. For this I am using glReadPixels() with RGBA_8888 and GLES20, and I receive the value in red/green/blue form, but it always returns 0 for everything. I tried changing the pixel format and made many hit-and-trial attempts, but with no success. Can you please help?
My code:
// NOTE: glReadPixels reads from the currently bound framebuffer, so it has to
// run on the GL thread while the frame's contents are still available.
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
// read one RGBA pixel at window coordinates (100, 100)
GLES20.glReadPixels(100, 100, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
byte b[] = new byte[4];
PixelBuffer.get(b);
Log.e("COLOR", "R:" + PixelBuffer.get(0) + PixelBuffer.get(1) + PixelBuffer.get(2));
Result
Logcat : COLOR R: 000.
I tried using a non-black background and having red color at the screen coordinate provided.
Thanks in advance
Several years ago, I wrote a small Cocoa/Obj-C game framework for OpenGL ES 1.1 and iPhone. This was back when iOS 3.x was popular. My OpenGL ES 1.1 / iOS 3.x implementation of this all worked fine. Time passed, and here we are now with iOS 5.1, OpenGL ES 2.0, ARC, blocks, and other things. I decided that it was high time to port the project over to more... modern standards.
EDIT: Solved one of the problems on my own - that of why it was crashing on the simulator. Sort of: I am now able to draw smaller models, but larger ones (like the test police car) still cause an EXC_BAD_ACCESS - even if that is the only, single call to glDrawElements. I was also able to fix drawing multiple meshes on the Simulator - however, I won't know whether this works on-device until tomorrow morning (my 5.0 test device is my friend's iPhone, which I don't have until then). So I guess the main question is: why are larger models causing an EXC_BAD_ACCESS on the simulator?
Original post below
However, in moving it up to 5.0, I've run into some OpenGL ES 2.0 errors - two of them, specifically, although they may be related. The first is simple: if I render my model on a device (iPhone 4S running 5.0.1), it displays, but if I render it on the simulator (iPhone Simulator running 5.0), it throws an EXC_BAD_ACCESS on glDrawElements. The second is also simple: I cannot draw multiple meshes. When I draw the model as one big group (one vertex array/index array combo) it draws fine, but when I draw the model as multiple parts (i.e., multiple calls to glDrawElements) it fails and displays a big black screen. The blackness is not from the model being drawn (I have verified this, as outlined below).
To sum it up before the much-more-detailed part: attempting to render my model on the simulator crashes, and attempting to draw it as multiple meshes on the device gives a black screen.
Caveat: it all works fine for small meshes. I have no problem drawing my small, statically declared cube over and over, even on the simulator. By statically declared, I mean a hard-coded const array of structs that gets bound and loaded into the vertex buffer, and a const array of GLushorts bound and loaded into the index array.
Note: when I say 'model' I mean an overall model, possibly made up of multiple vertex and index buffers. In code, this means that a model simply holds an array of meshes or model-groups. A mesh or model-group is a sub-unit of a model - one contiguous piece of it - with one vertex array and one index array, and it stores the lengths of both as well. In the case of the model I've been using, the body of the car is one mesh, the windows another, the lights a third. Together, they make up the model.
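In code, the shape of those two classes is roughly this (an abbreviated sketch matching the calls used further down; the exact declarations are inferred, not copied):

@interface Mesh : NSObject
- (void)setVertices:(ModelVertex *)vertices count:(GLushort)count;
- (void)setIndices:(GLushort *)indices count:(GLushort)count;
- (void)draw;
@end

@interface Model : NSObject
@property (nonatomic, readonly) NSUInteger count; // number of meshes
- (Mesh *)meshAtIndex:(NSUInteger)index;          // one contiguous model-group
@end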
The model I am using is a police car; it has several thousand vertices and faces and is split into multiple parts (body, lights, windows, etc.) - the body is about 3000 faces, the windows about 100, the lights a bit fewer.
Here are some things to know:
1) My model is loading properly. I have verified this in two ways: printing out the model vertices and manually inspecting them, and displaying each model-group individually as outlined in 2). I'd post images, but between the reputation limit and this being my first question, I can't. I have also re-built the model loader twice from scratch with no change, so I know the vertex and index buffers are in the correct order/format.
2) When I load the model as a single model-group (i.e., one vertex buffer/index buffer), it displays the whole model correctly. When I load the model as multiple model-groups and display any given model-group individually, it displays correctly. When I try to draw multiple model-groups (multiple calls to glDrawElements), the big black screen happens.
3) The black screen is not because of the model being drawn. I verified this by changing my fragment shader to draw every pixel red no matter what. I always clear the color buffer to a medium gray (I clear the depth buffer as well, obviously), but attempting to draw multiple meshes/model-groups results in a black screen. We know it is not the model simply obscuring the view, because it is colored black instead of red. This occurs on the device; I do not know what would happen on the simulator, as I cannot get it to draw there at all.
4) My model will not draw in the simulator, either as a single mesh/model-group or as multiple mesh/model-groups. The application loads properly, but attempting to draw a mesh/model-group results in an EXC_BAD_ACCESS in glDrawElements. The relevant parts of the backtrace are:
thread #1: tid = 0x1f03, 0x10b002b5, stop reason = EXC_BAD_ACCESS (code=1, address=0x94fd020)
frame #0: 0x10b002b5
frame #1: 0x09744392 GLEngine`gleDrawArraysOrElements_ExecCore + 883
frame #2: 0x09742a9b GLEngine`glDrawElements_ES2Exec + 505
frame #3: 0x00f43c3c OpenGLES`glDrawElements + 64
frame #4: 0x0001cb11 MochaARC`-[Mesh draw] + 177 at Mesh.m:81
EDIT: It is consistently able to draw smaller dynamically-created models (~100 faces), but not the ~3000 faces of the whole model.
I was able to get it to render a much smaller, less complicated, but still dynamically loaded model consisting of 192 faces / 576 vertices. I was able to display it both as a single vertex and index buffer, and split up into parts rendered as multiple smaller vertex and index buffers. Attempting to draw the single-mesh version in the simulator still resulted in the EXC_BAD_ACCESS being thrown, but only on the first frame. If I force it to continue, it displays a very screwed-up model, and then on every frame after that it displays 100% fine, exactly as it ought to.
My shaders are not in error. They compile and display correctly when I use a small, statically declared vertex buffer. However, for completeness I will post them at the bottom.
My code is as follows:
Render loop:
glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//muShader is a subclass of a shader-handler I've written that tracks the active shader
//and handles attributes/uniforms
//[muShader use] just does glUseProgram(muShader.program); then
//disables the previous shader's attributes (if needed) and then
//activates its own attributes - in this case:
//it does:
// glEnableVertexAttribArray(self.position);
// glEnableVertexAttribArray(self.uv);
//where position and uv are handles to the position and texture coordinate attributes
[self.muShader use];
GLKMatrix4 model = GLKMatrix4MakeRotation(GLKMathDegreesToRadians(_rotation), 0, 1, 0);
GLKMatrix4 world = GLKMatrix4Identity;
GLKMatrix4 mvp = GLKMatrix4Multiply(_camera.projection, _camera.view);
mvp = GLKMatrix4Multiply(mvp, world);
mvp = GLKMatrix4Multiply(mvp, model);
//muShader.modelViewProjection is a handle to the shader's model-view-projection matrix uniform
glUniformMatrix4fv(self.muShader.modelViewProjection, 1, GL_FALSE, mvp.m);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, self.policeTextureID);
//ditto on muShader.texture
glUniform1i(self.muShader.texture, 0);
for(int i=0; i < self.policeModel.count; i++)
{
//I'll expand muShader readyForFormat after this
[self.muShader readyForFormat:ModelVertexFormat];
//I'll expand mesh draw after this
[[self.policeModel meshAtIndex:i] draw];
}
muShader stuff
muShader binding attributes and uniforms
I won't post the whole muShader class; it is unnecessary. Suffice it to say that it works, or else it would not display anything at all, ever.
//here is where we bind the attribute locations when the shader is created
-(void)bindAttributeLocations
{
_position = glGetAttribLocation(self.program, "position");
_uv = glGetAttribLocation(self.program, "uv");
}
//ditto for uniforms
-(void)bindUniformLocations
{
_modelViewProjection = glGetUniformLocation(self.program, "modelViewProjection");
_texture = glGetUniformLocation(self.program, "texture");
}
muShader readyForFormat
-(void)readyForFormat:(VertexFormat)vertexFormat
{
switch (vertexFormat)
{
//... extra vertex formats removed for brevity
case ModelVertexFormat:
//ModelVertex is a struct, with the following definition:
//typedef struct{
// GLKVector4 position;
// GLKVector4 uv;
// GLKVector4 normal;
//}ModelVertex;
glVertexAttribPointer(_position, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(_uv, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));
break;
//... extra vertex formats removed for brevity
}
}
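A quick sanity check on those numbers (my own arithmetic, not from the loader): GLKVector4 is four 32-bit floats, so sizeof(ModelVertex) is 3 × 16 = 48 bytes, and the uv field starts at byte offset 16 - which is exactly what the stride and BUFFER_OFFSET(16) arguments above encode.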
Mesh stuff
setting up the vertex/index buffers
//this is how I set/create the vertex buffer for a mesh/model-group
//vertices is a c-array of ModelVertex structs
// created with malloc(count * sizeof(ModelVertex))
// and freed using free(vertices) - after setVertices is called, of course
-(void)setVertices:(ModelVertex *)vertices count:(GLushort)count
{
//frees previous data if necessary
[self freeVertices];
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(ModelVertex) * count, vertices, GL_STATIC_DRAW);
_vertexCount = count;
}
//this is how I set/create the index buffer for a mesh/model-group
//indices is a c-array of GLushort,
// created with malloc(count * sizeof(GLushort))
// and freed using free(indices) - after setIndices is called, of course
-(void)setIndices:(GLushort *)indices count:(GLushort)count
{
[self freeIndices];
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLushort) * count, indices, GL_STATIC_DRAW);
_indexCount = count;
}
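For context, a hypothetical loading call for one mesh (names like vertexCount, indexCount, and mesh are mine, mirroring the comments above) looks like:

ModelVertex *vertices = malloc(vertexCount * sizeof(ModelVertex));
GLushort *indices = malloc(indexCount * sizeof(GLushort));
// ... fill vertices/indices from the model file ...
[mesh setVertices:vertices count:vertexCount];
[mesh setIndices:indices count:indexCount];
free(vertices); // safe: glBufferData has already copied the data into GL-owned storage
free(indices);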
mesh draw
//vertexBuffer and indexBuffer are handles to a vertex/index buffer
//I have verified that they are loaded properly
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);
Shader stuff
Vertex Shader
attribute highp vec4 position;
attribute lowp vec3 uv;
varying lowp vec3 fragmentUV;
uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;
void main()
{
fragmentUV = uv;
gl_Position = modelViewProjection * position;
}
Fragment shader
varying lowp vec3 fragmentUV;
uniform highp mat4 modelViewProjection;
uniform lowp sampler2D texture;
void main()
{
gl_FragColor = texture2D(texture,fragmentUV.xy);
//used below instead to test the aforementioned black screen by setting
//every pixel of the model being drawn to red
//the screen stayed black, so the model wasn't covering the whole screen or anything
//gl_FragColor = vec4(1,0,0,1);
}
Answered it myself: when using multiple buffer objects, glEnableVertexAttribArray has to be called every time you bind the vertex/index buffer object, rather than simply once per frame (per shader). This was the cause of all of the problems, including the simulator crashing.
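In terms of the mesh draw method above, the fix looks roughly like this (positionAttrib and uvAttrib stand for the shader's attribute handles; how they get into the mesh is up to you):

glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
// re-enable and re-point the attributes after *every* bind,
// not just once per frame (per shader)
glEnableVertexAttribArray(positionAttrib);
glEnableVertexAttribArray(uvAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(0));
glVertexAttribPointer(uvAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(ModelVertex), BUFFER_OFFSET(16));
glDrawElements(GL_TRIANGLES, _indexCount, GL_UNSIGNED_SHORT, 0);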