It looks like Android SDK's BitmapRegionDecoder uses Skia to decode a region of the specified bitmap. Under the hood, it uses the appropriate codec (JPEG, PNG, etc.) for this. I'm looking at ways to optimize this using RenderScript.
Is it possible to define a RenderScript kernel function that ignores certain data from the input allocation and saves the rest in the output allocation? I'm new to RenderScript, and most kernel functions tend to operate on the entire input data set.
Yes, use the LaunchOptions API to limit the rectangle that you launch over:
Script.LaunchOptions lo = new Script.LaunchOptions();
lo.setX(10, 100);
lo.setY(5, 20);
kernel.forEach(in, out, lo);
https://developer.android.com/reference/android/renderscript/Script.LaunchOptions.html
I have downloaded some .raw file of depth data from this website.
3D Video Download
In order to get a depth data image, I wrote a script in Unity as below:
However, this is the texture I got.
How can I get the depth data texture as below?
RAW is not a standardized format. While most variants are pretty easy to read (there's rarely any compression), it might not be just one call to LoadRawTextureData.
I am assuming you have tried texture formats other than PVRTC_RGBA4 and they all failed?
First off, if you have the resolution of your image and the file size, you can try to guess the format. For depth it's common to use 8-bit or 16-bit values; if you have 16-bit values, you take two bytes and combine them with
a << 8 | b
or, equivalently,
a * 256 + b
But sometimes another operation is required (e.g. for 18-bit formats).
Once you have your values, getting the texture is as easy as calling SetPixel enough times.
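The byte-combining step above can be sketched in plain Java (the Unity side would be C#, but the bit arithmetic is identical; the `depthRaw` array and its big-endian layout are assumptions about the file, not something the question confirms):

```java
public class DepthDecode {
    // Combine a high byte and a low byte into one 16-bit depth sample.
    // (a << 8 | b) and (a * 256 + b) are equivalent for unsigned byte values.
    static int depth16(int a, int b) {
        return (a << 8) | b;
    }

    public static void main(String[] args) {
        byte[] depthRaw = {0x12, 0x34, (byte) 0xFF, 0x00}; // two big-endian samples
        int[] samples = new int[depthRaw.length / 2];
        for (int i = 0; i < samples.length; i++) {
            int hi = depthRaw[2 * i] & 0xFF;     // mask to undo Java's sign extension
            int lo = depthRaw[2 * i + 1] & 0xFF;
            samples[i] = depth16(hi, lo);
        }
        System.out.println(samples[0]); // 4660  (0x1234)
        System.out.println(samples[1]); // 65280 (0xFF00)
    }
}
```

If the file turns out to be little-endian, swap the two bytes before combining them.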
I use Matlab to render a complex mesh (using trimesh, material, camlight, view, ...) and do not need to display it to the user, only to get the rendered image. This is discussed in another question.
Using any of the suggested solutions (saving as an image, saving into a video object, or using the undocumented hardcopy) is very slow (~1 sec), especially compared to rendering the plot itself, which, including painting on the screen, takes less than 0.5 sec.
I believe this is caused by the hardcopy method not utilizing the GPU, while rendering the original plot for display does use it; using the GPU-Z monitoring software, I see the GPU working during plotting but not during hardcopy.
The figure uses 'opengl' as its renderer, but hardcopy, which is the underlying implementation of all the suggested methods, doesn't seem to respect this...
Any suggestion on how to configure it to use the GPU?
EDIT: following this thread I've moved to using the call below, but GPU usage is still a flatliner.
cdata=hardcopy(f, '-Dopengl', '-r0')
Being used to Matlab and its great vector-graphics capabilities, I am looking for something similar in OpenCV. OpenCV's drawing functions seem to rasterize lines and points at the pixel level. Currently, I dump the data to text, copy-paste it into Matlab, and do all the plots there. I also thought about using the Matlab engine to pass it the parameters and run the plots, but that seems like too much of a mess for a simple debug operation.
I want to be able to do the following:
Zoom in, out of the image
Draw a line/point which is re-rasterized each time I zoom, like in Matlab.
Currently, I found the Image Watch plugin to take care of zooming, but it does not help with the second part.
Any idea?
OpenCV has a lot of capabilities for processing an image but only minimal ones for displaying the result. It has nothing that can display vector graphics like Matlab does. When I need to see polygons on an image (or just polygons), I dump them to a file and use a third-party viewer (usually the Giv viewer).
I'm trying to implement a state-preserving particle system on the iPhone using OpenGL ES 2.0. By state-preserving, I mean that each particle is integrated forward in time, having a unique velocity and position vector that changes with time and cannot be calculated from the initial conditions at every rendering call.
Here's one possible way I can think of.
1. Set up particle initial conditions in a VBO.
2. Integrate particles in the vertex shader and write the result to a texture in the fragment shader. (1st rendering call)
3. Copy the data from the texture to the VBO.
4. Render the particles from the data in the VBO. (2nd rendering call)
5. Repeat steps 2-4.
The only thing I don't know how to do efficiently is step 3. Do I have to go through the CPU? I wonder if it is possible to do this entirely on the GPU with OpenGL ES 2.0. Any hints are greatly appreciated!
I don't think this is possible without simply using glReadPixels -- ES 2.0 doesn't have the flexible buffer management that desktop OpenGL has to let you copy buffer contents using the GPU (where, for example, you could copy data between the texture and the VBO, or simply use transform feedback, which is basically designed to do exactly what you want).
I think your only option, if you need to use the GPU, is to use glReadPixels to copy the framebuffer contents back out after rendering. You probably also want to check for and use EXT_color_buffer_float or a related extension, if available, to make sure you get high-precision values (RGBA8 is probably not going to be sufficient for your particles). If you're intermixing this with normal rendering, you probably want to build in some buffering (wait a frame or two) so you don't stall the CPU waiting for the GPU (this would be especially bad on PowerVR, since it buffers a whole frame before rendering).
ES 3.0 will have support for transform feedback, which doesn't help you now but hopefully gives you some hope for the future.
Also, if you are running on an ARM CPU, it seems like it would be faster to use NEON to quickly update all your particles. It can be quite fast and skips all the overhead you'd incur from the CPU+GPU method.
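The CPU-side update the last paragraph alludes to is just a tight integration loop. A minimal forward-Euler sketch in Java (NEON intrinsics would be C, but the arithmetic is the same; the structure-of-arrays layout, gravity value, and time step are illustrative assumptions):

```java
public class Particles {
    // Structure-of-arrays layout: contiguous float arrays are what
    // SIMD units like NEON process efficiently.
    float[] posX, posY, velX, velY;

    Particles(int n) {
        posX = new float[n]; posY = new float[n];
        velX = new float[n]; velY = new float[n];
    }

    // One forward-Euler step: apply gravity to velocity, then advance positions.
    void step(float dt, float gravityY) {
        for (int i = 0; i < posX.length; i++) {
            velY[i] += gravityY * dt;
            posX[i] += velX[i] * dt;
            posY[i] += velY[i] * dt;
        }
    }

    public static void main(String[] args) {
        Particles p = new Particles(1);
        p.velX[0] = 2.0f;              // initial horizontal velocity
        p.step(0.5f, -10.0f);          // dt = 0.5 s, gravity = -10 units/s^2
        System.out.println(p.posX[0]); // 1.0
        System.out.println(p.posY[0]); // -2.5
    }
}
```

After each step you would upload the position arrays into the VBO with glBufferSubData, which replaces the texture round trip entirely.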
I am looking to implement a mechanism that combines bitmaps together in a variety of complex ways, using ternary raster operations like you can in Windows.
The idea is to be able to blit an image to a destination using any combination of the source, brush, and destination pixels (source AND destination, source AND brush AND destination, etc.).
This is supported by Windows GDI in what's called Ternary Raster Operations (check out http://msdn.microsoft.com/en-us/library/dd145130(VS.85).aspx). Is it possible that OS X and iOS completely lack this functionality? The only things I've been able to find are blend modes, but they are not nearly as flexible.
Any ideas?
There are no ternary operators in Quartz or AppKit, and almost certainly not in UIKit, either. All drawing in Quartz-land is from a single source (image, color, gradient, etc.) into a single destination (context).
You can have two source images, using one as the "source" and the other as the "pattern"/"brush". For actual pattern drawing, you can use a CGPattern instead of a second image.
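Since a GDI ternary raster op is just an 8-entry truth table over the (pattern, source, destination) bits, it is straightforward to implement yourself on raw pixel data when the platform lacks it. A sketch (the op indices 0xCC for SRCCOPY and 0x88 for SRCAND are the high bytes of the GDI ROP codes from the MSDN table; everything else is illustrative):

```java
public class TernaryRop {
    // Apply a ternary raster op to 32-bit pixels. 'ropIndex' is the high byte
    // of the GDI ROP code: bit number (p<<2 | s<<1 | d) of it is the result bit.
    static int apply(int ropIndex, int pattern, int source, int dest) {
        int result = 0;
        for (int bit = 0; bit < 32; bit++) {
            int p = (pattern >>> bit) & 1;
            int s = (source >>> bit) & 1;
            int d = (dest >>> bit) & 1;
            int idx = (p << 2) | (s << 1) | d;
            result |= ((ropIndex >>> idx) & 1) << bit;
        }
        return result;
    }

    public static void main(String[] args) {
        int src = 0xF0F0F0F0, dst = 0xFF00FF00, pat = 0x0F0F0F0F;
        // SRCCOPY (0xCC): result is just the source.
        System.out.printf("%08X%n", apply(0xCC, pat, src, dst)); // F0F0F0F0
        // SRCAND (0x88): source AND destination.
        System.out.printf("%08X%n", apply(0x88, pat, src, dst)); // F000F000
    }
}
```

Running this per pixel over a CGBitmapContext's backing buffer gives you any of the 256 ternary ops; the bit loop can be replaced by word-wide boolean expressions for the specific ops you need.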