Using resources directly from an expansion file - AndEngine

How do I create a texture directly from expansion files?
We get an InputStream, but how do we use this InputStream? If I convert it to a Bitmap and then use that, it starts throwing OutOfMemoryError.
I have tried
ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(appContext,
mainVersion, patchVersion);
InputStream fileStream = expansionFile.getInputStream(pathToFileInsideZip);

You can create it as follows:
File imageFile = new File(imagePath);
BitmapTextureAtlas mBitmapTextureAtlas = new BitmapTextureAtlas(
        activity.getTextureManager(), 1024, 1024, TextureOptions.BILINEAR);
IBitmapTextureAtlasSource fileTextureSource = FileBitmapTextureAtlasSource
        .create(imageFile);
ITextureRegion textureRegion = BitmapTextureAtlasTextureRegionFactory
        .createFromSource(mBitmapTextureAtlas, fileTextureSource, 0, 0);
Memory management:
Size the texture atlas relative to your image. For example, if your image is 200x200, use a 256x256 atlas; making the atlas larger than necessary just wastes memory.
Also, unload the textures whenever they are no longer needed.
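For example, a minimal sketch of loading and later releasing the atlas created above (assuming the AndEngine GLES2 branch, where the texture atlas exposes load() and unload()):
// Upload the atlas to the GPU before attaching sprites that use textureRegion.
mBitmapTextureAtlas.load();
// ... later, when the texture region is no longer displayed (e.g. when leaving the scene):
mBitmapTextureAtlas.unload(); // frees the GPU memory held by the atlas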

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing:
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in Synthetic Camera RGB data and Synthetic Camera Depth Data (especially this one), which are the camera windows you can see in the following picture on the left.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already ran the following, so I don't know if it is related to the settings:
p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand if it needs to be treated separately because of the pixel color information, or if I need to work with the projection and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit information but accomplished nothing. In case you asked,
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image.
And to end, just to remark that these depth images keep changing: looking at all of them, then at the RGB images, and then going back to the depth images shows different results, even though they come from the same capture. I have never seen anything like this before.
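Regarding the commented-out conversion above, a hedged sketch (not from the original post) of scaling the [0, 1] float depth buffer into the 16-bit range before converting, rather than clipping the raw values at 65535:
import numpy as np
from PIL import Image

# depthBuffer and counter as in the snippet above; the OpenGL depth buffer is float in [0, 1].
depth = np.asarray(depthBuffer, dtype=np.float32)
im_uint16 = np.round(np.clip(depth, 0.0, 1.0) * 65535).astype(np.uint16)
Image.fromarray(im_uint16).save('test_img/depth' + str(counter) + '.tiff')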
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))                # RGBA image, uint8
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])          # non-linear depth in [0, 1]
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)  # linearized depth; near/far are the camera clipping planes
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.       # segmentation mask
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')                                   # drop the alpha channel before saving as JPEG
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
Final Images:
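As a complement (an assumption-laden sketch, not part of the original answer): the near and far values used in the linearization above come from the projection matrix, so it can be convenient to build the view and projection matrices explicitly and pass them to getCameraImage:
import numpy as np
import pybullet as p

IMG_SIZE = 224
near, far = 0.01, 5.0  # assumed clipping planes; use whatever your scene needs

view_matrix = p.computeViewMatrixFromYawPitchRoll(
    cameraTargetPosition=[0, 0, 0.785],  # same target as in the question
    distance=0.5, yaw=0, pitch=-40, roll=0, upAxisIndex=2)  # example yaw/pitch values
proj_matrix = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)

img = p.getCameraImage(IMG_SIZE, IMG_SIZE,
                       viewMatrix=view_matrix,
                       projectionMatrix=proj_matrix,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)

depth_buffer = np.reshape(img[3], (IMG_SIZE, IMG_SIZE))
depth_m = far * near / (far - (far - near) * depth_buffer)  # same linearization, now with known near/far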

How to create a 1D Metal Texture?

I want an array of 1-dimensional data (essentially an array of arrays) that is created at run time, so the size of this array is not known at compile time. I want to easily send the array of that data to a kernel shader using setTextures so my kernel can just accept a single argument like texture1d_array which will bind each element texture to a different kernel index automatically, regardless of how many there are.
The question is how to actually create a 1D MTLTexture? All the MTLTextureDescriptor options seem to focus on 2D or 3D. Is it as simple as creating a 2D texture with a height of 1? Would that then be a 1D texture that the kernel would accept?
I.e.
let textureDescriptor = MTLTextureDescriptor
    .texture2DDescriptor(pixelFormat: .r16Uint,
                         width: length,
                         height: 1,
                         mipmapped: false)
If my data is actually just one-dimensional (not actually pixel data), is there an equally convenient way to use an ordinary buffer instead of MTLTexture, with the same flexibility of sending an arbitrarily-sized array of these buffers to the kernel as a single kernel argument?
You can recreate all of those factory methods yourself. I would just use an initializer though unless you run into collisions.
public extension MTLTextureDescriptor {
    /// A 1D texture descriptor.
    convenience init(
        pixelFormat: MTLPixelFormat = .r16Uint,
        width: Int
    ) {
        self.init()
        textureType = .type1D
        self.pixelFormat = pixelFormat
        self.width = width
    }
}
MTLTextureDescriptor(width: 512)
This is no different than texture2DDescriptor with a height of 1, but it's not as much of a lie. (I.e. yes, a texture can be thought of as infinite-dimensional, with a magnitude of 1 in everything but the few that matter. But we throw out all the dimensions with 1s when saying "how dimensional" something is.)
var descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .r16Uint,
    width: 512,
    height: 1,
    mipmapped: false
)
descriptor.textureType = .type1D
descriptor == MTLTextureDescriptor(width: 512) // true
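For completeness, a minimal sketch (assuming a default Metal device and 16-bit unsigned data, matching the .r16Uint default above) of creating the 1D texture from such a descriptor and copying CPU data into it:
import Metal

let device = MTLCreateSystemDefaultDevice()!          // assumption: a default device exists
let values = (0..<512).map { UInt16($0) }             // hypothetical 1D data

let oneD = MTLTextureDescriptor(width: values.count)  // the convenience initializer above
let texture = device.makeTexture(descriptor: oneD)!

// Copy the array into the texture; for a 1D texture, bytesPerRow spans the whole row.
values.withUnsafeBytes { bytes in
    texture.replace(region: MTLRegionMake1D(0, values.count),
                    mipmapLevel: 0,
                    withBytes: bytes.baseAddress!,
                    bytesPerRow: values.count * MemoryLayout<UInt16>.stride)
}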
——
textureType = .typeTextureBuffer takes care of your second question, but why not use textureBufferDescriptor then?
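A hedged sketch of that texture-buffer route (again assuming a default device and .r16Uint data): the data lives in an ordinary MTLBuffer, and a texture view is created over it, so the kernel can still take a texture argument:
import Metal

let device = MTLCreateSystemDefaultDevice()!   // assumption: a default device exists
let count = 512
let stride = MemoryLayout<UInt16>.stride
let buffer = device.makeBuffer(length: count * stride, options: .storageModeShared)!

let tbDescriptor = MTLTextureDescriptor.textureBufferDescriptor(
    with: .r16Uint,
    width: count,
    resourceOptions: .storageModeShared,
    usage: .shaderRead)

// bytesPerRow may need to satisfy device.minimumLinearTextureAlignment(for: .r16Uint).
let textureView = buffer.makeTexture(descriptor: tbDescriptor,
                                     offset: 0,
                                     bytesPerRow: count * stride)!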

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL and a few on Metal, but those focus on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up on Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps in the exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: drawable.texture.width, height: drawable.texture.height, mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)
let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get looks as if there were no stencil at all. I can see some differences when changing the depth test settings, but nothing happens when I change the stencil settings...
Any clue?
Thank you in advance
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertexes and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).
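A hedged sketch of the two depth-stencil states described above (the names and the device variable are illustrative, not from the original post):
import Metal

let device = MTLCreateSystemDefaultDevice()!   // assumption: a default device exists

// Pass 1: stencil-only fill. Each triangle drawn over a pixel toggles the low bit,
// so the stencil ends up 1 where the overlap count is odd.
let fillStencil = MTLStencilDescriptor()
fillStencil.stencilCompareFunction = .always
fillStencil.stencilFailureOperation = .keep
fillStencil.depthFailureOperation = .keep
fillStencil.depthStencilPassOperation = .invert
fillStencil.readMask = 0x1
fillStencil.writeMask = 0x1

let fillDescriptor = MTLDepthStencilDescriptor()
fillDescriptor.frontFaceStencil = fillStencil
fillDescriptor.backFaceStencil = fillStencil   // same behaviour for both windings
let fillState = device.makeDepthStencilState(descriptor: fillDescriptor)!

// Pass 2: colour render, only where the stencil is 1 (odd coverage).
let drawStencil = MTLStencilDescriptor()
drawStencil.stencilCompareFunction = .equal
drawStencil.stencilFailureOperation = .keep
drawStencil.depthFailureOperation = .keep
drawStencil.depthStencilPassOperation = .keep
drawStencil.readMask = 0x1
drawStencil.writeMask = 0x1

let drawDescriptor = MTLDepthStencilDescriptor()
drawDescriptor.frontFaceStencil = drawStencil
drawDescriptor.backFaceStencil = drawStencil
let drawState = device.makeDepthStencilState(descriptor: drawDescriptor)!

// In the second pass (stencil attachment loadAction must be .load, not .clear):
// renderEncoder.setStencilReferenceValue(1)
// renderEncoder.setDepthStencilState(drawState)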

AndEngine, how to get the width of a sprite on runtime?

I have a sprite for a PNG file (the dimensions of the PNG file are 432x10). The PNG file is in the drawable-xxhdpi folder. When I run on an emulator with hdpi density, mySprite.getWidth() returns 432 (mySprite.getWidthScaled() also returns 432), but the PNG looks only about 200 pixels wide. Which method gives the right value, i.e. not the width of the PNG file, but the number of screen pixels the PNG is actually displayed in? Thank you very much.
Note : My English is insufficient, sorry.
public Engine onLoadEngine() {
    ....
    SCR_WIDTH = getResources().getDisplayMetrics().widthPixels;
    SCR_HEIGHT = getResources().getDisplayMetrics().heightPixels;
    MyCamera = new Camera(0, 0, SCR_WIDTH, SCR_HEIGHT);
    ......
}
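With a camera set up like this, a rough sketch (an assumption: the camera is stretched over the whole render surface, e.g. with a FillResolutionPolicy) of converting a sprite's scene-unit width into on-screen pixels:
// Scene units -> screen pixels; only valid if the camera covers the full surface.
float pixelsPerUnit = (float) SCR_WIDTH / MyCamera.getWidth();
float onScreenWidthPx = mySprite.getWidth() * pixelsPerUnit;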

Memory Map UIImage

I have a UIImage and I would like to put its data in a file and then use a mapped file to save some memory. Apparently, the UIImage data is private and it's not possible to access it. Would you have any suggestions to solve that?
Thanks!
If you want to memory map the encoded image data, then mmap a file and provide a reference to the data by passing a CGDataProviderRef to CGImageCreate.
mapped = mmap( NULL , length , ... );
provider = CGDataProviderCreateWithData( mapped , mapped , length , munmap_wrapper );
image = CGImageCreate( ... , provider , ... );
uiimage = [UIImage imageWithCGImage:image];
...
Where munmap_wrapper is something like this:
// conform to CGDataProviderReleaseDataCallback
void munmap_wrapper( void *p , const void *cp , size_t l ) { munmap( p , l ); }
If you want to memory map the actual pixels, instead of the encoded source data, you would do something similar with a CGBitmapContext. You would also create the provider and image so the image refers to the same pixels as the context. Whatever is drawn in the context will be the content of the image. The width, height, color space and other parameters should be identical for the context and image.
context = CGBitmapContextCreate( mapped , ... );
In this case, length will be at least bytes_per_row*height bytes so the file must be at least that large.
If you have an existing image and you want to mmap the pixels, then create the bitmap context with the size and color space of your image and use CGContextDrawImage to draw the image in the context.
You did not say the source of your image, but if you are creating it at runtime it would be more efficient to create it directly in the bitmap context. Any image creation requires a bitmap context behind the scenes, so it might as well be the memory mapped one from the start.
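Putting the pieces above together, a hedged sketch in C (error handling omitted; the 32-bit RGBA layout and the file-path handling are assumptions, not something the original answer specifies):
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <CoreGraphics/CoreGraphics.h>

// conform to CGDataProviderReleaseDataCallback: unmap the pixels when the image is released
static void unmap_pixels( void *info , const void *data , size_t size ) { munmap( info , size ); }

CGImageRef CreateMappedCopy( CGImageRef source , const char *path ) {
    size_t width = CGImageGetWidth( source );
    size_t height = CGImageGetHeight( source );
    size_t bytesPerRow = width * 4;                   // assuming 32-bit RGBA pixels
    size_t length = bytesPerRow * height;             // the file must be at least this large

    int fd = open( path , O_RDWR | O_CREAT , 0600 );
    ftruncate( fd , length );
    void *mapped = mmap( NULL , length , PROT_READ | PROT_WRITE , MAP_SHARED , fd , 0 );
    close( fd );                                      // the mapping keeps the file accessible

    // Draw the existing image into a bitmap context backed by the mapped file.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate( mapped , width , height , 8 , bytesPerRow ,
                                              space , kCGImageAlphaPremultipliedLast );
    CGContextDrawImage( ctx , CGRectMake( 0 , 0 , width , height ) , source );
    CGContextRelease( ctx );

    // Create an image that reads the very same mapped bytes.
    CGDataProviderRef provider =
        CGDataProviderCreateWithData( mapped , mapped , length , unmap_pixels );
    CGImageRef image = CGImageCreate( width , height , 8 , 32 , bytesPerRow , space ,
                                      kCGImageAlphaPremultipliedLast , provider ,
                                      NULL , false , kCGRenderingIntentDefault );
    CGDataProviderRelease( provider );
    CGColorSpaceRelease( space );
    return image;  // wrap with [UIImage imageWithCGImage:image] and release when done
}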