In PyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want to get the images shown in the Synthetic Camera RGB data and Synthetic Camera Depth data windows (especially the latter), which are the camera windows you can see on the left of the following picture.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I have already run the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
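For what it's worth, this is the kind of explicit setup I mean, in case the matrices turn out to be necessary (a minimal sketch; the eye position, FOV, and near/far values are placeholders):
view_matrix = p.computeViewMatrix(
    cameraEyePosition=[0, -1, 1],            # placeholder camera position
    cameraTargetPosition=[center_x, center_y, 0.785],
    cameraUpVector=[0, 0, 1])
proj_matrix = p.computeProjectionMatrixFOV(
    fov=60, aspect=1.0, nearVal=0.1, farVal=100)
img = p.getCameraImage(224, 224,
                       viewMatrix=view_matrix,
                       projectionMatrix=proj_matrix,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)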
I need to save it as a .tiff because otherwise I get "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And to end, just a remark: these depth images keep changing. Looking at all of them, then at the RGB images, and then passing back to the depth images shows different results, even though it is the same image. I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)  # IMG_SIZE = 224
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))        # RGBA pixels
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])  # non-linear depth buffer
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)  # linearize with the projection's near/far planes
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.       # segmentation mask
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')
rgbim_no_alpha.save('dataset/'+obj_name+'/'+ obj_name +'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+ obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
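If a true 16-bit depth file is needed instead of the matplotlib-colormapped JPEG, a minimal sketch along these lines should work (assuming near and far are the same clipping-plane values used to build the projection):
import numpy as np
from PIL import Image

# Map the linearized metric depth into [0, 65535] and save it as a 16-bit TIFF;
# near and far are assumed to match the projection's clipping planes.
depth_normalized = np.clip((depth_opengl - near) / (far - near), 0.0, 1.0)
depth_uint16 = np.round(depth_normalized * 65535).astype(np.uint16)
Image.fromarray(depth_uint16).save('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.tiff')
This also sidesteps the "cannot save F" problem from the question, because the array handed to Pillow is integer-typed rather than float.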
Final Images:
I'm trying to find the algorithm for this blending in Rust.
When searching through the image crate source, the blending method used is 'src-over' (i.e. 'SrcAlpha, InvSrcAlpha'), if I'm correct (source).
So the only thing I need to change is the source factor, from SrcAlpha to One.
The Unity docs say: "The value of this input is one. Use this to use the value of the source or the destination color."
So I should use the source value without multiplying it by alpha.
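To make that concrete, here is a small NumPy sketch of the two variants as I understand them (my own reconstruction of the math, not the image crate's actual code):
import numpy as np

def blend(fg, bg, src_factor_one=False):
    # Blend straight-alpha RGBA pixels (channels in 0..1):
    # out = fg * src_factor + bg * bg_a * (1 - fg_a)
    fg_rgb, fg_a = fg[:3], fg[3]
    bg_rgb, bg_a = bg[:3], bg[3]
    src_factor = 1.0 if src_factor_one else fg_a   # SrcAlpha vs. One
    out_rgb = fg_rgb * src_factor + bg_rgb * bg_a * (1.0 - fg_a)
    out_a = fg_a + bg_a * (1.0 - fg_a)
    return out_rgb, out_a

# With the One factor, a bright but low-alpha foreground keeps its full color
# contribution, so out_rgb can exceed 1.0 and needs clamping:
print(blend(np.array([1.0, 1.0, 1.0, 0.1]), np.array([1.0, 1.0, 1.0, 1.0]), True))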
I tried to change that:
let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a);
let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * fg_a, fg_g * fg_a, fg_b * fg_a);
to
let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a);
let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * 1., fg_g * 1., fg_b * 1.);
Here's an example result:
The 'expected' image is in fact not the exact result I want, but it is very close. I got it from a manual alpha unmultiply and a manual color boost (multiplying each channel by 5).
When debugging the blend function, I noticed that r, g, b overflow the color range (more than 1.0), which makes the result image white, while the alpha channel remains very low.
So is the algorithm correct?
Otherwise, the issue must come from my source data.
I'm trying to make a script that takes an input image and fades it out. So far this is my script:
imgObject = im.open(imageName)
toAppend = []
for i in range(256):
    imgObject.putalpha(i)
    toAppend.append(imgObject)
    #imgObject.save('images/'+str(i)+'.png', 'PNG')
imgObject.save('finished.gif', save_all=True, append_images=toAppend)
When this is run, the output GIF is just a still of the input with no changes. But if I save each image as a PNG, the transparency works! It saves 255 different images where you can see it fade out. I've also tried stitching these photos together after the fact, but the same or similar problems occurred.
I've also tried this, this, this, and this, all producing the same effect.
I read more into the docs and found this:
imgObject = im.open(imageName)
bpiObject = im.open(backroundName)
toAppend = []
for i in range(100):
    imgObject = im.blend(imgObject, bpiObject, i/100)
    toAppend.append(imgObject)
imgObject.save('finished.gif', save_all=True, append_images=toAppend, loop=0)
This worked for me; the two images do have to have the same dimensions (and mode), though.
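For completeness, a minimal self-contained version of that approach (file names are placeholders):
from PIL import Image

base = Image.open('input.png').convert('RGB')
background = Image.new('RGB', base.size, (255, 255, 255))  # fade to white

# Image.blend returns a new image each step, so every frame is distinct.
frames = [Image.blend(base, background, i / 100) for i in range(100)]
frames[0].save('finished.gif', save_all=True,
               append_images=frames[1:], duration=40, loop=0)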
How do I open a (dm4) image with annotations in a DM-script?
When a dm4 image has annotations (e.g. a scale bar or some text), they are displayed when I open the image via the menu (Ctrl + O). But when I open the same file in a script via openImage(), they do not show up, as shown below.
On the left is the image opened via the menu; on the right is the exact same image opened by openImage(), which is missing the annotations.
The following example shows the same thing. The code adds text to an image, saves it, and opens it again. The opened image does not show the annotations, just like the images above:
String path = GetApplicationDirectory("current", 0);
path = PathConcatenate(path, "temp.dm4");
// get the current image
image img;
img.getFrontImage();
ImageDisplay display = img.ImageGetImageDisplay(0);
// add some test annotations
number height = img.ImageGetDimensionSize(1);
number padding = height / 100;
number font_size = height/10;
for(number y = padding; y + font_size + padding < height; y += font_size + padding){
    Component annotation = NewTextAnnotation(padding, y, "Test", font_size);
    annotation.componentSetForegroundColor(255, 255, 255);
    display.ComponentAddChildAtEnd(annotation);
}
// save the current image
img.saveImage(path);
// show the saved image
image img2 = openImage(path);
img2.showImage();
You have a mistake in the second-to-last line.
By using = instead of := you are copying (the values only) from the opened image into a new image. You want to do:
image img2 := openImage(path)
This is a rather typical mistake when you are new to scripting, because it is a "specialty" of the scripting language not found in other languages. It comes about because scripting aims to enable very simple expressions like Z = log(A), where new images (here Z) are created on the fly from processing existing images (here A).
So there needs to be a different operator for when one wants to assign an image to a variable.
For further details, see the F1 help documentation.
The same logic (and source of bugs) concerns the use of := instead of = when "finding" images, creating new images, and cloning images (with metadata).
Note the differences when trying both:
image a := RealImage("Test",4,100,100)
ShowImage(a)
image b = RealImage("Test",4,100,100)
ShowImage(b)
and
image a := GetFrontImage()
a = 0
image b = GetFrontImage()
b = 0
and
image src := GetFrontImage()
image a := ImageClone( src )
showImage(a)
image b := ImageClone( src )
showImage(b)
I am converting two trained Keras models to Metal Performance Shaders. I have to reshape the output of the first graph and use it as input to the second graph. The first graph's output is an MPSImage with "shape" (1,1,8192), and the second graph's input is an MPSImage of "shape" (4,4,512).
I cast graph1's output image.texture as a float16 array, and pass it to the following function to copy the data into "midImage", a 4x4x512 MPSImage:
func reshapeTexture(imageArray: [Float16]) -> MPSImage {
    let image = imageArray
    image.withUnsafeBufferPointer { ptr in
        let width = midImage.texture.width
        let height = midImage.texture.height
        for slice in 0..<128 {
            for w in 0..<width {
                for h in 0..<height {
                    let region = MTLRegion(origin: MTLOriginMake(w, h, 0),
                                           size: MTLSizeMake(1, 1, 1))
                    midImage.texture.replace(region: region, mipmapLevel: 0, slice: slice,
                                             withBytes: ptr.baseAddress!.advanced(by: (slice * 4 * width * height) + ((w + h) * 4)),
                                             bytesPerRow: MemoryLayout<Float16>.stride * 4,
                                             bytesPerImage: 0)
                }
            }
        }
    }
    return midImage
}
When I pass midImage to graph2, the output of the graph is a square with 3/4 garbled noise, 1/4 black in the bottom right corner. I think I am not understanding something about the MPSImage slice property for storing extra channels. Thanks!
Metal 2D texture arrays are nearly always stored in a Morton or "Z" ordering of some kind. Certainly MPS always allocates them that way, though I suppose on macOS there may be a means to make a linear 2D texture array and wrap an MPSImage around it. So, without due care, direct access to a 2D texture array's backing store is going to result in sadness and confusion.
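For intuition, here is a rough Python illustration of what a Z-order (Morton) layout does to addressing. The real layout Metal uses is opaque and device-specific, so treat this purely as a picture of the idea:
def morton_index(x, y, bits=16):
    # Interleave the bits of x and y to get the Z-order (Morton) index.
    result = 0
    for bit in range(bits):
        result |= ((x >> bit) & 1) << (2 * bit)
        result |= ((y >> bit) & 1) << (2 * bit + 1)
    return result

# Row-major math assumes offsets 0,1,2,3,... along each row, but a Z-order
# walk of a 4x4 texture visits quite different offsets, so CPU-side pointer
# arithmetic lands on the wrong texels:
print([morton_index(x, y) for y in range(4) for x in range(4)])
# [0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15]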
The right way to do this is to write a simple Metal copy kernel. This gives you storage order independence and you don’t have to wait for the command buffer to complete before you can do the operation.
A feature request in Radar might also be warranted. Please also look in the latest macOS / iOS seed to see if Apple recently added a reshape filter for you.
This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few using Metal, but those focused on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up in Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps in this exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get is just as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil...
Any clue?
Thank you in advance.
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertices and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
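To see why the invert operation gives the odd/even rule, here is the counting logic in plain Python (just the logic, not Metal code):
def final_stencil(triangles_covering_pixel):
    stencil = 0  # cleared to 0 by the .clear load action
    for _ in range(triangles_covering_pixel):
        stencil ^= 0x1  # MTLStencilOperation.invert on the low bit
    return stencil

# The second pass draws only where stencil == 1 (the reference value):
for count in range(5):
    print(count, 'overlapping triangles ->',
          'draw' if final_stencil(count) == 1 else 'skip')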