WebCamTexture to byte[]? - unity3d

I want to stream the phone camera's video, and to do that I'm trying to convert a WebCamTexture to byte[], but it appears there's no direct way to do that. I can only see the GetPixels() and GetPixels32() methods, but I believe they're quite slow. Is there an efficient way to get a byte array out of a webcam frame? Or am I on the wrong track?

This cannot be done efficiently without a native plugin and some knowledge of OpenGL (or Vulkan). If you target devices with OpenGL ES 3.0 or higher, you can use a PBO (pixel buffer object) for asynchronous pixel transfer operations; a sketch of that follows the synchronous version below.
This is the actual read-pixels function without a PBO:
void ReadPixels(void* data, int textureWidth, int textureHeight)
{
    GLint currentFBORead;
    GLint currentFBOWrite;

    // Remember which framebuffers are currently bound so we can restore them.
    glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &currentFBORead);
    glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &currentFBOWrite);

    // Read the draw framebuffer back as RGBA8 (4 bytes per pixel),
    // which is what you want when the target is a byte[].
    glBindFramebuffer(GL_READ_FRAMEBUFFER, currentFBOWrite);
    glReadPixels(0, 0, textureWidth, textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, currentFBORead);
}
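A hedged sketch of the PBO path on GLES 3.0 (not drop-in code; g_pbo and the call sites are placeholders and error handling is omitted): glReadPixels writes into a bound GL_PIXEL_PACK_BUFFER and returns immediately, and the data is mapped and copied out a frame or two later.
#include <GLES3/gl3.h>
#include <cstring>

// Created once at initialization; placeholder name.
static GLuint g_pbo = 0;

void InitPBO(int textureWidth, int textureHeight)
{
    glGenBuffers(1, &g_pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    // Reserve space for one RGBA8 frame (4 bytes per pixel).
    glBufferData(GL_PIXEL_PACK_BUFFER, textureWidth * textureHeight * 4,
                 nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Kick off the transfer on the render thread; does not block.
void StartReadPixelsAsync(int textureWidth, int textureHeight)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    // With a PBO bound, the last argument is an offset into the buffer,
    // so glReadPixels returns without waiting for the GPU.
    glReadPixels(0, 0, textureWidth, textureHeight,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call one or more frames later to copy the finished transfer out.
bool FetchPixels(void* dst, int textureWidth, int textureHeight)
{
    const size_t size = (size_t)textureWidth * textureHeight * 4;
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    void* src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, size, GL_MAP_READ_BIT);
    if (src != nullptr)
    {
        memcpy(dst, src, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    return src != nullptr;
}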

Related

When we bind a texture created in Unity to OpenGL, the internal format is modified

I want to manipulate a texture that has been created in Unity directly with OpenGL.
I create the texture in Unity with these parameters:
_renderTexture = new RenderTexture(_sizeTexture, _sizeTexture, 0,
    RenderTextureFormat.ARGB32, RenderTextureReadWrite.Linear)
{
    useMipMap = false,
    autoGenerateMips = false,
    anisoLevel = 6,
    filterMode = FilterMode.Trilinear,
    wrapMode = TextureWrapMode.Clamp,
    enableRandomWrite = true
};
Then I send the texture pointer to a native rendering plugin with the GetNativeTexturePtr() method. In the native rendering plugin I bind the texture with glBindTexture(GL_TEXTURE_2D, gltex);, where gltex is the pointer of my Unity texture.
Finally, I check the internal format of my texture with:
GLint format;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
I get format = GL_RGBA8 even though I defined the texture in Unity with the format RenderTextureFormat.ARGB32. You can reproduce this by using Unity's native rendering plugin example and replacing the function RenderAPI_OpenGLCoreES::EndModifyTexture in the file RenderAPI_OpenGLCoreES.cpp with:
void RenderAPI_OpenGLCoreES::EndModifyTexture(void* textureHandle, int textureWidth, int textureHeight, int rowPitch, void* dataPtr)
{
    GLuint gltex = (GLuint)(size_t)(textureHandle);

    // Update texture data, and free the memory buffer
    glBindTexture(GL_TEXTURE_2D, gltex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, textureWidth, textureHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, dataPtr);

    GLint format;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);

    delete[] (unsigned char*)dataPtr;
}
Why did the internal format change after binding the texture in OpenGL? And is it possible to "impose" the GL_RGBA32F format on my OpenGL texture?
I tried using glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); after my binding, but I get the following error: OpenGL error 0x0502 (GL_INVALID_OPERATION).
Sorry if the question is simple; I am new to OpenGL!
I just misread the documentation; moreover, Unity and OpenGL do not use the same naming convention. I thought RenderTextureFormat.ARGB32 was a texture with 32 bits per channel, but it is 32 bits in total, i.e. 8 bits per channel, which corresponds to GL_RGBA8.
There is a summary table of Unity formats here: https://docs.unity3d.com/Manual/class-ComputeShader.html.
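As a sanity check from the native side, the per-channel bit widths can be queried the same way as the internal format (glGetTexLevelParameteriv needs desktop GL or GLES 3.1+); and if a genuine 32-bit-float texture is what you want, RenderTextureFormat.ARGBFloat on the Unity side should come through as GL_RGBA32F. A small sketch of the query, assuming the texture is still bound to GL_TEXTURE_2D:
// Query the per-channel bit depth of the currently bound texture.
// For RenderTextureFormat.ARGB32 this should report 8/8/8/8, i.e. GL_RGBA8.
GLint redBits = 0, greenBits = 0, blueBits = 0, alphaBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &redBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &greenBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &blueBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &alphaBits);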

Flutter - Trying to Use Tensorflowlite - FloatEfficientNet

I am attempting to use a model that is successfully inferencing in both native Swift and Android/Java to do the same in Flutter, specifically on the Android side.
In this case the values I am receiving are way off.
What I have done so far:
I took the TensorFlow Lite Android example GitHub repo: https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android, and found that the FloatEfficientNet option was accurately giving values for my model.
I took the flutter_tflite library and modified it so that the inferencing section of the Android code matched the TensorFlow example above:
https://github.com/shaqian/flutter_tflite
I used this tutorial and included repo which uses the above library to inference tensorflow via the platform channel:
https://github.com/flutter-devs/tensorflow_lite_flutter
Via the Flutter tutorial, I use the camera plugin, which can stream CameraImage objects from the camera's live feed. I pass those into the modified flutter_tflite library, which uses the platform channel to pass the image into the Android layer. It does so as a list of byte arrays (3 planes, YuvImage). The TensorFlow Android example (1), with the working FloatEfficientNet code, expects a Bitmap, so I am using this method to convert:
public Bitmap imageToBitmap(List<byte[]> planes, float rotationDegrees, int width, int height) {
    // NV21 is a plane of 8 bit Y values followed by interleaved Cb Cr
    ByteBuffer ib = ByteBuffer.allocate(width * height * 2);

    ByteBuffer y = ByteBuffer.wrap(planes.get(0));
    ByteBuffer cr = ByteBuffer.wrap(planes.get(1));
    ByteBuffer cb = ByteBuffer.wrap(planes.get(2));
    ib.put(y);
    ib.put(cb);
    ib.put(cr);

    YuvImage yuvImage = new YuvImage(ib.array(),
            ImageFormat.NV21, width, height, null);

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
    byte[] imageBytes = out.toByteArray();
    Bitmap bm = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    Bitmap bitmap = bm;

    // On android the camera rotation and the screen rotation
    // are off by 90 degrees, so if you are capturing an image
    // in "portrait" orientation, you'll need to rotate the image.
    if (rotationDegrees != 0) {
        Matrix matrix = new Matrix();
        matrix.postRotate(rotationDegrees);
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(bm,
                bm.getWidth(), bm.getHeight(), true);
        bitmap = Bitmap.createBitmap(scaledBitmap, 0, 0,
                scaledBitmap.getWidth(), scaledBitmap.getHeight(), matrix, true);
    }
    return bitmap;
}
The inference is successful and I am able to return the values back to Flutter and display the results, but they are way off. Running on the same Android phone, the results are completely different from the native example.
I suspect the flaw is in the conversion of the CameraImage data format into the Bitmap, since it's the only piece of the whole chain that I am not able to test independently. If anyone who has faced a similar issue could assist, I would be grateful; I am rather puzzled.
I think the reason is that the matrix.postRotate() method expects an integer but you give it a float, so you have an implicit conversion from float to integer which messes it up.

Stream Unity RenderTexture to GStreamer

I would like to export a Camera view to a native plugin implementing a GStreamer pipeline which encodes and streams the rendered texture over a network to a web browser. I did some research and figured out that the best way to do that is probably to use a RenderTexture in Unity.
However, I don't understand how to interface this RenderTexture with GStreamer inside a native plugin. Do I need to write my own GStreamer source element for this? If so, what would be a good starting point? Or is there a more straightforward solution for exporting the Camera view from Unity into GStreamer?
Here is one possible way to do this using OpenGL and appsrc.
For more details, refer to Unity's low-level native plug-in interface documentation and the corresponding source code available on GitHub. There is also a nice tutorial on how to use appsrc.
Here is a short summary of what needs to be done:
Unity (C#)
On the Unity side you have to get your camera, create a RenderTexture and assign it to the camera. Then you have to pass the GetNativeTexturePtr of that texture to your native plugin. Here are the most relevant parts of the code showing how this can be done:
...
[DllImport("YourPluginName")]
private static extern IntPtr GetRenderEventFunc();

[DllImport("YourPluginName")]
private static extern void SetTextureFromUnity(System.IntPtr texture, int w, int h);
...

IEnumerator Start()
{
    CreateTextureAndPassToPlugin();
    yield return StartCoroutine("CallPluginAtEndOfFrames");
}

private void CreateTextureAndPassToPlugin()
{
    // get main camera and set its size
    m_MainCamera = Camera.main;
    m_MainCamera.pixelRect = new Rect(0, 0, 512, 512);

    // create RenderTexture and assign it to the main camera
    m_RenderTexture = new RenderTexture(m_MainCamera.pixelWidth, m_MainCamera.pixelHeight, 24, RenderTextureFormat.ARGB32);
    m_RenderTexture.Create();
    m_MainCamera.targetTexture = m_RenderTexture;
    m_MainCamera.Render();

    // Pass texture pointer to the plugin
    SetTextureFromUnity(m_RenderTexture.GetNativeTexturePtr(), m_RenderTexture.width, m_RenderTexture.height);
}

private IEnumerator CallPluginAtEndOfFrames()
{
    while (true)
    {
        // Wait until all frame rendering is done
        yield return new WaitForEndOfFrame();

        // Issue a plugin event with an arbitrary integer identifier.
        GL.IssuePluginEvent(GetRenderEventFunc(), m_EventID);
    }
}
Native plugin (C++)
Here you have to store your texture handle and then access the pixel data on the rendering thread, for example like this:
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API SetTextureFromUnity(void* textureHandle, int w, int h)
{
g_TextureHandle = textureHandle;
g_TextureWidth = w;
g_TextureHeight = h;
}
static void OnRenderEvent(int eventID)
{
uint32_t uiSize = g_TextureWidth * g_TextureHeight * 4; // RGBA = 4
unsigned char* pData = new unsigned char[uiSize];
GLuint gltex = (GLuint)(size_t)(g_TextureHandle);
glBindTexture(GL_TEXTURE_2D, gltex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pData);
// now we have our pixel data in memory, we can now feed appsrc with it
...
}
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API GetRenderEventFunc()
{
return OnRenderEvent;
}
As soon as you get the pixel data you can wrap it into a GstBuffer and feed your pipeline using the push-buffer signal:
GstBuffer* pTextureBuffer = gst_buffer_new_wrapped(pData, uiSize);
...
g_signal_emit_by_name(pAppsrc, "push-buffer", pTextureBuffer, ...);
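Note that gst_buffer_new_wrapped() takes ownership of the memory and frees it with g_free(), so pData would need to be allocated with g_malloc() rather than new[] if you go that route. A hedged, slightly fuller sketch of the push, where pAppsrc, g_FrameCount and FPS are placeholders for state the plugin keeps:
// Copy the pixel data into a GstBuffer, timestamp it, and push it to appsrc.
GstBuffer* pTextureBuffer = gst_buffer_new_allocate(NULL, uiSize, NULL);
gst_buffer_fill(pTextureBuffer, 0, pData, uiSize);

// Timestamps keep downstream encoders running at a steady rate.
GST_BUFFER_PTS(pTextureBuffer)      = gst_util_uint64_scale(g_FrameCount, GST_SECOND, FPS);
GST_BUFFER_DURATION(pTextureBuffer) = gst_util_uint64_scale(1, GST_SECOND, FPS);
g_FrameCount++;

GstFlowReturn ret = GST_FLOW_OK;
g_signal_emit_by_name(pAppsrc, "push-buffer", pTextureBuffer, &ret);
gst_buffer_unref(pTextureBuffer); // appsrc keeps its own reference

if (ret != GST_FLOW_OK)
{
    // Pipeline is not accepting data (e.g. not PLAYING yet or shutting down).
}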
In case someone knows how to feed the pipeline with the OpenGL texture handle directly (without copying it into RAM), I would appreciate some input on this.

OpenGL ES 2d rendering into image

I need to write an OpenGL ES 2-dimensional renderer on iOS. It should draw some primitives such as lines and polygons into a 2D image (it will be a rendering of a vector map). Which way is best for getting an image out of the OpenGL context for that task? I mean, should I render these primitives into a texture and then get the image from it, or what? Also, it would be great if someone could point to examples or tutorials that look like what I need (2D GL rendering into an image). Thanks in advance!
If you need to render an OpenGL ES 2-D scene, then extract an image of that scene to use outside of OpenGL ES, you have two main options.
The first is to simply render your scene and use glReadPixels() to grab RGBA data for the scene and place it in a byte array, like in the following:
GLubyte *rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
// Do something with the image
free(rawImagePixels);
The second, and much faster, way of doing this is to render your scene to a texture-backed framebuffer object (FBO), where the texture has been provided by iOS 5.0's texture caches. I describe this approach in this answer, although I don't show the code for raw data access there.
You do the following to set up the texture cache and bind the FBO texture:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &rawDataTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d");
}

// Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/

CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)imageSize.width,
                    (int)imageSize.height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             rawDataTextureCache, renderTarget,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)imageSize.width,
                                             (int)imageSize.height,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
and then you can just read directly from the bytes that back this texture (in BGRA format, not the RGBA of glReadPixels()) using something like:
CVPixelBufferLockBaseAddress(renderTarget, 0);
_rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Do something with the bytes
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
However, if you just want to reuse your image within OpenGL ES, you just need to render your scene to a texture-backed FBO and then use that texture in your second level of rendering.
I show an example of rendering to a texture, and then performing some processing on it, within the CubeExample sample application within my open source GPUImage framework, if you want to see this in action.
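If you only need the plain texture-backed FBO (without the texture cache), the setup is just a texture attached as the color attachment of a framebuffer object. A minimal OpenGL ES 2.0 sketch, with width, height and the actual drawing left as placeholders:
// Render-to-texture setup: the 2-D scene will be drawn into fboTexture.
GLuint fboTexture = 0, fbo = 0;

glGenTextures(1, &fboTexture);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboTexture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // Handle the incomplete framebuffer before drawing.
}

// Draw the lines/polygons here; they end up in fboTexture, which can then
// be sampled in a second pass or read back with one of the methods above.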

Using multiple textures when rendering causes a low framerate

I tried to use different textures to render the ground of a game. First, I create some textures and upload them to the GPU.
void initialize(float width, float height)
{
    for (int i = 0; i < 10; i++)
    {
        glActiveTexture(GL_TEXTURE1 + i);

        GLuint gridTexture;
        glGenTextures(1, &gridTexture);
        glBindTexture(GL_TEXTURE_2D, gridTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        setPngTexture(gameMap->texture);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
}
Then I use the textures when rendering every frame.
void render() const
{
    for (int i = 0; i < map->maps.size(); i++)
    {
        glUniform1i(uniforms.Sampler, i);

        glBindBuffer(GL_ARRAY_BUFFER, gameMap->verticesbuffer);
        glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, stride, 0);
        glVertexAttribPointer(normal, 3, GL_FLOAT, GL_FALSE, stride, offset);
        glVertexAttribPointer(texture, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gameMap->indicesbuffer);
        glDrawElements(GL_TRIANGLES, gameMap->indexCount, GL_UNSIGNED_SHORT, 0);
    }
}
The code works properly in the iPhone simulator, but when I test the app on a device it gives me a very low framerate: 2 fps. If I change the line glUniform1i(uniforms.Sampler, i); to glUniform1i(uniforms.Sampler, 0); the app works fine!
I'm new to OpenGL ES and I'm not sure I'm using multiple textures in the correct way. Can someone kindly tell me the reason for this problem and how to correct it? Thanks a lot!
There are many things that could be wrong with this, and you don't provide enough code (for example, the shader source) to really diagnose the issue. I'd start by moving glUniform1i(uniforms.Sampler, i); outside of your loop. It makes no sense the way you are currently using it: you're rebinding uniforms.Sampler to the i-th texture unit on every iteration up to map->maps.size(), and I'm pretty sure this is throwing tons of OpenGL errors. Re-read the documentation on glUniform1i; a sketch of a saner pattern is below.
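A hedged sketch of that pattern, assuming initialize() is changed to store the generated texture names in a gridTextures array (the posted version discards them): keep the sampler uniform pointed at a single unit and bind the texture you need before each draw.
// Suggested pattern: one texture unit, sampler uniform set once,
// and the per-tile texture bound before each draw call.
// gridTextures[] is assumed to hold the names generated in initialize().
void render() const
{
    glActiveTexture(GL_TEXTURE0);
    glUniform1i(uniforms.Sampler, 0);   // set once, outside the loop

    for (size_t i = 0; i < map->maps.size(); i++)
    {
        glBindTexture(GL_TEXTURE_2D, gridTextures[i]); // texture for this tile

        glBindBuffer(GL_ARRAY_BUFFER, gameMap->verticesbuffer);
        glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, stride, 0);
        glVertexAttribPointer(normal, 3, GL_FLOAT, GL_FALSE, stride, offset);
        glVertexAttribPointer(texture, 2, GL_FLOAT, GL_FALSE, stride, texCoordOffset);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gameMap->indicesbuffer);
        glDrawElements(GL_TRIANGLES, gameMap->indexCount, GL_UNSIGNED_SHORT, 0);
    }
}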