Memory Map UIImage - iphone

I have a UIImage and I would like to put its data in a file and then use a memory-mapped file to save some memory. Apparently the UIImage data is private and it's not possible to access it directly. Would you have any suggestions for solving this?
Thanks!

If you want to memory map the encoded image data, then mmap a file and provide a reference to the data by passing a CGDataProviderRef to CGImageCreate.
mapped = mmap( NULL , length , ... );
// pass the mapping as both the info pointer and the data pointer
provider = CGDataProviderCreateWithData( mapped , mapped , length , munmap_wrapper );
image = CGImageCreate( ... , provider , ... );
uiimage = [UIImage imageWithCGImage:image];
...
Where munmap_wrapper is something like this:
// conform to CGDataProviderReleaseDataCallback
void munmap_wrapper( void *p , const void *cp , size_t l ) { munmap( p , l ); }
If you want to memory map the actual pixels, instead of the encoded source data, you would do something similar with a CGBitmapContext. You would also create the provider and image so the image refers to the same pixels as the context. Whatever is drawn in the context will be the content of the image. The width, height, color space and other parameters should be identical for the context and image.
context = CGBitmapContextCreate( mapped , ... );
In this case, length will be at least bytes_per_row*height bytes so the file must be at least that large.
If you have an existing image and you want to mmap the pixels, then create the bitmap context with the size and color space of your image and use CGContextDrawImage to draw the image in the context.
You did not say the source of your image, but if you are creating it at runtime it would be more efficient to create it directly in the bitmap context. Any image creation requires a bitmap context behind the scenes, so it might as well be the memory mapped one from the start.
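The same idea can be sketched outside of Core Graphics. Here is a minimal Python sketch (Python just for brevity) of backing a pixel buffer with a memory-mapped file sized `bytes_per_row * height`, so the pixels live in the file rather than in anonymous app memory. The file path and dimensions are illustrative.

```python
import mmap
import os
import tempfile

# Illustrative dimensions: a 64x64 RGBA image, 4 bytes per pixel.
width, height, bytes_per_pixel = 64, 64, 4
bytes_per_row = width * bytes_per_pixel
length = bytes_per_row * height  # the file must be at least this large

# Create a backing file of the required size.
path = os.path.join(tempfile.mkdtemp(), "pixels.raw")
with open(path, "wb") as f:
    f.truncate(length)

# Map the file and use the mapping as the pixel buffer.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), length)
    # Write an opaque red pixel at (0, 0): R, G, B, A.
    m[0:4] = bytes([255, 0, 0, 255])
    m.flush()  # push dirty pages back to the file
    m.close()

# The pixel data now lives in the file, not in app memory.
with open(path, "rb") as f:
    first_pixel = f.read(4)
print(first_pixel)  # b'\xff\x00\x00\xff'
```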

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in Synthetic Camera RGB data and Synthetic Camera Depth Data (especially this one), which are the camera windows you can see in the following picture on the left.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already ran the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand if it needs to be treated separately because of the pixel color information or if I need to work with the projection matrices and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you were going to ask, I also tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And finally, just to note that these depth images keep changing (looking through all of them, then at the RGB ones, and back again to the depth images shows different results even though it is the same image). I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
# img[2] is the RGBA pixel buffer, img[3] the non-linear depth buffer,
# and img[4] the segmentation mask
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# Linearize the depth buffer; near and far are the camera's clipping planes
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
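For reference, the depth conversion can be checked in isolation. This small numpy sketch (the near/far values are illustrative; use the clipping planes from your projection matrix) linearizes a non-linear depth buffer and quantizes it to uint16 for saving in a 16-bit format:

```python
import numpy as np

near, far = 0.01, 10.0  # illustrative clipping planes
IMG_SIZE = 4            # tiny buffer for demonstration

# Fake non-linear depth buffer in [0, 1], as returned by getCameraImage.
depth_buffer = np.linspace(0.0, 1.0, IMG_SIZE * IMG_SIZE, dtype=np.float32)
depth_buffer = depth_buffer.reshape(IMG_SIZE, IMG_SIZE)

# Linearize: a buffer value of 0 maps to the near plane, 1 to the far plane.
depth = far * near / (far - (far - near) * depth_buffer)

# Quantize to 16 bits for a format that supports it (e.g. 16-bit TIFF/PNG).
depth_u16 = np.round(
    (depth - depth.min()) / (depth.max() - depth.min()) * 65535
).astype(np.uint16)

print(depth.min(), depth.max())  # roughly near ... far
print(depth_u16.dtype)           # uint16
```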
Final Images:

sws_scale PAL8 to RGBA returns image that isn't clear

I'm using sws_scale to convert images and videos from every format to RGBA, using an SwsContext created like this:
auto context = sws_getContext(width, height, pix_fmt, width, height, AV_PIX_FMT_RGBA,
SWS_BICUBIC, nullptr, nullptr, nullptr);
but when using a PNG with color type Palette (pix_fmt = AV_PIX_FMT_PAL8), sws_scale doesn't seem to take the transparent color into account, and the resulting RGBA raster isn't transparent. Is this a bug in sws_scale, or am I making a wrong assumption about the result?
palette image:
https://drive.google.com/file/d/1CIPkYeHElNSsH2TAGMmr0kfHxOkYiZTK/view?usp=sharing
RGBA image:
https://drive.google.com/open?id=1GMlC7RxJGLy9lpyKLg2RWfup1nJh-JFc
I was making a wrong assumption: sws_scale doesn't promise to return premultiplied-alpha colors, so the values I was getting (r:255, g:255, b:255, a:0) were straight-alpha transparent white.
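Since sws_scale hands back straight (non-premultiplied) alpha, the fixup is to premultiply yourself after the conversion. A small numpy sketch of the operation (the RGBA buffer here is a stand-in for the sws_scale output, not FFmpeg API code):

```python
import numpy as np

# Stand-in for an RGBA buffer produced by sws_scale: one white,
# fully transparent pixel and one opaque mid-gray pixel.
rgba = np.array([[255, 255, 255, 0],
                 [128, 128, 128, 255]], dtype=np.uint8)

# Premultiply: scale each color channel by alpha / 255.
alpha = rgba[:, 3:4].astype(np.float32) / 255.0
premul = rgba.copy()
premul[:, :3] = np.round(rgba[:, :3] * alpha).astype(np.uint8)

print(premul[0])  # transparent white becomes [0 0 0 0]
print(premul[1])  # the opaque pixel is unchanged: [128 128 128 255]
```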

AndEngine, how to get the width of a sprite on runtime?

I have a sprite created from a PNG file (the PNG's dimensions are 432x10). The PNG file is in the drawable-xxhdpi folder. When I run on an emulator with hdpi density, mySprite.getWidth() returns 432 (mySprite.getWidthScaled() also returns 432), but the PNG appears only about 200 pixels wide on screen. Which method gives the actual displayed width in pixels, rather than the width of the PNG file? Thank you very much.
Note : My English is insufficient, sorry.
public Engine onLoadEngine() {
    ....
    SCR_WIDTH = getResources().getDisplayMetrics().widthPixels;
    SCR_HEIGHT = getResources().getDisplayMetrics().heightPixels;
    MyCamera = new Camera(0, 0, SCR_WIDTH, SCR_HEIGHT);
    ......
}
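One thing worth keeping in mind: getWidth() returns the sprite's width in scene/camera coordinates, not device pixels, and the on-screen size depends on the ratio between the render surface and the camera. A rough sketch of that relationship (the camera and surface widths below are hypothetical, not taken from the question):

```python
# Sprite width in scene units (AndEngine camera coordinates).
sprite_width = 432.0

# Hypothetical camera width and actual screen surface width.
camera_width = 800.0   # width the Camera was created with
surface_width = 480.0  # widthPixels on an hdpi device

# Pixels actually covered on screen:
displayed_width = sprite_width * (surface_width / camera_width)
print(displayed_width)  # about 259.2 pixels on this hypothetical device
```

Note also that a bitmap placed in a density-qualified folder like drawable-xxhdpi gets pre-scaled by Android when loaded on a lower-density device, which can shrink it further.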

Using resources directly from Expansion file

How do I create a texture directly from expansion files?
We get an InputStream; now how do we use this InputStream? If I convert it to a Bitmap and then use it, it starts giving OutOfMemory errors.
I have tried
ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(appContext,
mainVersion, patchVersion);
InputStream fileStream = expansionFile.getInputStream(pathToFileInsideZip);
You can create as follows:
File imageFile = new File(imagePath);
BitmapTextureAtlas mBitmapTextureAtlas = new BitmapTextureAtlas(
activity.getTextureManager(), 1024, 1024, TextureOptions.BILINEAR);
IBitmapTextureAtlasSource fileTextureSource = FileBitmapTextureAtlasSource
.create(imageFile);
ITextureRegion textureRegion = BitmapTextureAtlasTextureRegionFactory
.createFromSource(mBitmapTextureAtlas, fileTextureSource, 0, 0);
Memory management:
Give the texture atlas a size relative to your image size.
For example, if your image is 200x200, use a 256x256 atlas; making the texture any larger than necessary wastes memory.
Also, unload the textures whenever they are no longer needed.
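Texture atlas dimensions are conventionally powers of two, so "relative to your image size" means the smallest power of two that fits each dimension. A quick sketch of that calculation (plain Python for brevity):

```python
def next_pow2(n: int) -> int:
    """Smallest power of two >= n (for n >= 1)."""
    p = 1
    while p < n:
        p *= 2
    return p

# A 200x200 image fits in a 256x256 atlas.
print(next_pow2(200))  # 256
```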

How to create a GtkPixbuf from a stock item (like new_from_stock)?

I want to get a stock item to use in a treeview, but I can't get it as a pixbuf directly, as there is no new_from_stock method for pixbufs.
say you want to get a pixbuf from stock_item.
There are 2 ways:
First (easy):
pixbuf = gtk_widget_render_icon ( widget, stock_item, size )
Second (hard):
You need to look for it in the list of default icon factories:
icon_set = gtk_style_lookup_icon_set ( style, stock_item )
OR:
icon_set = gtk_icon_factory_lookup_default ( stock_item )
Then check the available sizes with gtk_icon_set_get_sizes.
Check whether the size you want is available, or take the largest size, which will be the last in the returned list.
Then you need to render it to get the pixbuf:
pixbuf = gtk_icon_set_render_icon ( icon_set, style, direction, state, size, widget )
Then scale it to whatever size you want using:
gdk_pixbuf_scale_simple ( pixbuf, width, height, GdkInterpType )
Hope you got it
Are you aware of the icon-name property in GtkCellRendererPixbuf? That should solve the problem of showing a stock icon in a treeview.
A slightly easier (and more future-proof) version of ophidion's answer is to do the following:
#define ICON_NAME "go-next"
#define ICON_SIZE 96 /* pixels */
GError *error = NULL;
GtkIconTheme *icon_theme = gtk_icon_theme_get_default();
GdkPixbuf *pixbuf = gtk_icon_theme_load_icon(icon_theme, ICON_NAME, ICON_SIZE,
GTK_ICON_LOOKUP_USE_BUILTIN, &error);
Assuming you don't get an error, you can then put the resulting GdkPixbuf in a treemodel, or use it with a GtkButton, or whatever else you'd like to do with it.
As others have pointed out, GTK stock items are likely to be deprecated, so you should use names according to the Icon Naming Spec instead if you want to be future proof. See this Google Docs spreadsheet for the equivalent icon spec names for GTK stock items.
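A few of the common stock-ID to icon-name equivalences, shown as a plain Python mapping (this is a small hand-picked subset for illustration; the spreadsheet mentioned above has the full list):

```python
# Subset of GTK stock IDs mapped to their freedesktop icon names.
STOCK_TO_ICON_NAME = {
    "gtk-go-forward": "go-next",
    "gtk-go-back": "go-previous",
    "gtk-open": "document-open",
    "gtk-save": "document-save",
    "gtk-quit": "application-exit",
}

print(STOCK_TO_ICON_NAME["gtk-go-forward"])  # go-next
```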
You can use gtk_image_new_from_stock(), then get the raw pixbuf using gtk_image_get_pixbuf().
But check out #jku's answer first, that sounds like the proper solution. I just wanted to add how you'd do it in any other context.