I want to implement an automatic capture function for the Game view in Unity - unity3d

I'm using this to capture:
ScreenCapture.CaptureScreenshot($"Screenshot{_shotIndex}.png", size);
But CaptureScreenshot() is too slow.
So I tried different scripts, but they failed. With one of them I did succeed in capturing the Scene view:
public void TakeTransparentScreenshot(Camera cam, int width, int height, string savePath)
{
    // Depending on your render pipeline, this may not work.
    var bak_cam_targetTexture = cam.targetTexture;
    var bak_cam_clearFlags = cam.clearFlags;
    var bak_RenderTexture_active = RenderTexture.active;

    var tex_transparent = new Texture2D(width, height, TextureFormat.ARGB32, false);
    // Must use 24-bit depth buffer to be able to fill background.
    var render_texture = RenderTexture.GetTemporary(width, height, 24, RenderTextureFormat.ARGB32);
    var grab_area = new Rect(0, 0, width, height);

    RenderTexture.active = render_texture;
    cam.targetTexture = render_texture;
    cam.clearFlags = CameraClearFlags.SolidColor;

    // Simple: use a clear background
    cam.backgroundColor = Color.clear;
    cam.Render();
    tex_transparent.ReadPixels(grab_area, 0, 0);
    tex_transparent.Apply();

    // Encode the resulting output texture to a byte array then write to the file
    byte[] pngShot = ImageConversion.EncodeToPNG(tex_transparent);
    File.WriteAllBytes(savePath, pngShot);

    cam.clearFlags = bak_cam_clearFlags;
    cam.targetTexture = bak_cam_targetTexture;
    RenderTexture.active = bak_RenderTexture_active;
    RenderTexture.ReleaseTemporary(render_texture);
    Texture2D.Destroy(tex_transparent);
}
How do I capture a screenshot with a transparent background in Unity3D?
I need to capture the Game view, but this script captures the Scene view.
I also tried to implement automatic capture using CaptureScreenshot() and a coroutine, but it was too slow to be usable.
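For reference, this is the kind of capture loop I am trying to end up with: a sketch only, where a coroutine waits for the end of the frame and calls the method above on the main game camera (_shotIndex and the capture interval are placeholders, and the script needs using System.Collections and System.IO):

private int _shotIndex;

// Sketch of the intended capture loop: wait until the frame has finished rendering,
// then run the transparent capture on the game camera.
private IEnumerator CaptureLoop()
{
    while (true)
    {
        yield return new WaitForEndOfFrame();
        string path = Path.Combine(Application.persistentDataPath, $"Screenshot{_shotIndex++}.png");
        TakeTransparentScreenshot(Camera.main, Screen.width, Screen.height, path);
        yield return new WaitForSeconds(1f); // capture interval (placeholder)
    }
}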

Related

ImGui overlay with UpdateLayeredWindow function

As the title says, I'm trying to create a partially transparent in-game overlay using ImGui that is clickable on its UI but click-through otherwise, i.e. you can click on the ImGui elements, but outside those elements your clicks go through to the game.
I was able to do it using
https://github.com/ocornut/imgui/blob/master/examples/example_win32_directx11/main.cpp
by setting the extended window style to WS_EX_TOPMOST | WS_EX_LAYERED and calling
SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 0, LWA_COLORKEY);
However this would impact the rendering performance of the underlying game.
So, I decided to try the UpdateLayeredWindow function.
Here is one of the templates I referenced:
https://github.com/riley-x/TransparentWindow
This does not impact performance and worked perfectly; now I have to integrate the ImGui rendering to replace the green ellipse.
However, ImGui uses the following code:
g_pd3dDeviceContext->OMSetRenderTargets(1, &g_mainRenderTargetView, NULL);
g_pd3dDeviceContext->ClearRenderTargetView(g_mainRenderTargetView, clear_color_with_alpha);
But in order to use the
BOOL UpdateLayeredWindow(
    HWND hWnd,
    HDC hdcDst,
    POINT *pptDst,
    SIZE *psize,
    HDC hdcSrc,
    POINT *pptSrc,
    COLORREF crKey,
    BLENDFUNCTION *pblend,
    DWORD dwFlags
);
function, I have to tell ImGui to render to an HDC.
I've thought of some possible solutions.
One idea is to bind g_mainRenderTargetView to an HDC, but I'm not sure how that could possibly work, unlike ID2D1Factory::CreateDCRenderTarget, which can actually be bound to an HDC with BindDC.
Or I could let ImGui render as usual but retrieve the back buffer as a bitmap, and then do the following:
HDC hdcWnd = GetDC(hwnd);
HDC hdcMem = CreateCompatibleDC(hdcWnd);
HBITMAP memBitmap = CreateCompatibleBitmap(hdcWnd, rect.right - rect.left, rect.bottom - rect.top);
SelectObject(hdcMem, memBitmap);
m_pRenderTarget->BindDC(hdcMem, &rect);
m_pRenderTarget->BeginDraw();
m_pRenderTarget->Clear({ 0 });
m_pRenderTarget->DrawBitmap(bitmap.Get()); // ???
m_pRenderTarget->EndDraw();
POINT pt0 = { 0 };
SIZE sz = { rect.right - rect.left, rect.bottom - rect.top };
BLENDFUNCTION bfunc = { 0 };
bfunc.AlphaFormat = AC_SRC_ALPHA;
bfunc.BlendFlags = 0;
bfunc.BlendOp = AC_SRC_OVER;
bfunc.SourceConstantAlpha = 255;
UpdateLayeredWindow(hwnd, hdcWnd, &pt0, &sz, hdcMem, &pt0, 0, &bfunc, ULW_COLORKEY);
DeleteObject(memBitmap);
ReleaseDC(hwnd, hdcWnd);
DeleteDC(hdcMem); // a memory DC from CreateCompatibleDC is freed with DeleteDC, not ReleaseDC
where m_pRenderTarget is made from :
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
D2D1_RENDER_TARGET_TYPE_DEFAULT,
D2D1::PixelFormat(
DXGI_FORMAT_B8G8R8A8_UNORM,
D2D1_ALPHA_MODE_PREMULTIPLIED),
0,
0,
D2D1_RENDER_TARGET_USAGE_NONE,
D2D1_FEATURE_LEVEL_10
);
d2Factory->CreateDCRenderTarget(&props, m_pRenderTarget.GetAddressOf());
The problem is that when I try to retrieve the back-buffer bitmap, it always fails.
Or maybe there is some other way to make the overlay work without using UpdateLayeredWindow?

How to change texture format from Alpha8 to RGBA in Unity3d?

I have been trying to convert the Alpha8 texture that a camera gives me to RGBA, and have been unsuccessful so far.
This is the code I've tried:
public static class TextureHelperClass
{
    public static Texture2D ChangeFormat(this Texture2D oldTexture, TextureFormat newFormat)
    {
        // Create a new empty texture (it must match the source dimensions,
        // otherwise SetPixels below will not line up)
        Texture2D newTex = new Texture2D(oldTexture.width, oldTexture.height, newFormat, false);
        // Copy old texture pixels into new one
        newTex.SetPixels(oldTexture.GetPixels());
        // Apply
        newTex.Apply();
        return newTex;
    }
}
And I'm calling the code like this:
Texture imgTexture = Alpha8Texture.ChangeFormat(TextureFormat.RGBA32);
But the image gets corrupted and isn't visible.
Does anyone know how to change this Alpha8 to RGBA so I can process it like any other image in OpenCV?
A friend provided me with the answer:
Color[] cs = oldTexture.GetPixels();
for (int i = 0; i < cs.Length; i++)
{
    // We want to copy the alpha value into the r, g and b channels
    cs[i].r = cs[i].a;
    cs[i].g = cs[i].a;
    cs[i].b = cs[i].a;
    cs[i].a = 1.0f;
}
// Set the pixels in the new texture
newTex.SetPixels(cs);
// Apply
newTex.Apply();
This takes a lot of resources, but it works for sure.
If you know a better way to make this change please add an answer to this thread.
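For reference, here is the friend's loop folded into the ChangeFormat helper above as one complete method. This is only a sketch; it assumes the goal is a grayscale, fully opaque RGBA32 copy of the Alpha8 data:

public static class TextureHelperClass
{
    // Sketch: convert an Alpha8 texture to a grayscale RGBA32 texture on the CPU,
    // combining the ChangeFormat helper with the per-pixel loop from the answer above.
    public static Texture2D Alpha8ToRGBA32(this Texture2D oldTexture)
    {
        Texture2D newTex = new Texture2D(oldTexture.width, oldTexture.height, TextureFormat.RGBA32, false);
        Color[] cs = oldTexture.GetPixels();
        for (int i = 0; i < cs.Length; i++)
        {
            // Copy the alpha value into the color channels and make the pixel opaque.
            cs[i].r = cs[i].a;
            cs[i].g = cs[i].a;
            cs[i].b = cs[i].a;
            cs[i].a = 1.0f;
        }
        newTex.SetPixels(cs);
        newTex.Apply();
        return newTex;
    }
}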

How to modify a Texture's pixels from a compute shader in Unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the image returned is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I have to first rotate the image and then flip it sideways. The approach I am using is to simply iterate over the pixels of the image and modify them, which performs very poorly. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);
Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); //rotate 180deg
image.SetPixels32(colors); //apply rotation
image = FlipTexture(image); //flip sideways
//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int width = original.width;
    int height = original.height;

    for (int col = 0; col < width; col++)
    {
        for (int row = 0; row < height; row++)
        {
            if (upSideDown)
            {
                flipped.SetPixel(row, (width - 1) - col, original.GetPixel(row, col));
            }
            else
            {
                flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
            }
        }
    }

    flipped.Apply();
    return flipped;
}
To improve the performance I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little for myself. The following works for flipping a texture vertically (rotating 180 degrees and then flipping sideways amounts to a vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Mirror the coordinates (the size is hardcoded here; subtract 1 so the
    // index stays inside the 512 x 1024 texture).
    uint2 flip = uint2(512 - 1, 1024 - 1) - id.xy;
    Result[id.xy] = float4(ImageInput[flip].x, ImageInput[flip].y, ImageInput[flip].z, 1.0);
}
and called from any script:
public void FlipImage()
{
    // shader (the ComputeShader asset), myTexture (the input texture) and
    // result (an output Texture2D) are fields assigned elsewhere.
    int kernelHandle = shader.FindKernel("CSMain");

    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();

    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);

    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
}
This takes an input Texture2D, flips it in the shader, applies it to a RenderTexture and to a Texture2D, whatever you need.
Note that the image sizes are hardcoded in my instance and should be replaced by whatever size you need (to pass them into the shader, use shader.SetInt()).
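For example, here is a minimal sketch of passing the size in from the calling script instead of hardcoding it. It assumes the shader is changed to declare matching int width; int height; properties and to use them when computing the flipped index:

// Sketch: feed the texture size to the compute shader instead of hardcoding 512 x 1024.
// Assumes the shader declares "int width; int height;" and uses them for the flip;
// "tex" is the RenderTexture created as in the snippet above.
int kernelHandle = shader.FindKernel("CSMain");
shader.SetInt("width", myTexture.width);
shader.SetInt("height", myTexture.height);
shader.SetTexture(kernelHandle, "Result", tex);
shader.SetTexture(kernelHandle, "ImageInput", myTexture);
shader.Dispatch(kernelHandle, Mathf.CeilToInt(myTexture.width / 8f), Mathf.CeilToInt(myTexture.height / 8f), 1);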

Unity3D Texture2D.LoadRawTextureData causing massive memory leak?

In one application (Server), I am capturing the display:
public byte[] GetFrame()
{
    int width = Screen.width;
    int height = Screen.height;

    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    byte[] bytes = tex.GetRawTextureData();
    Destroy(tex);
    return bytes;
}
I send this frame over the network to another application (Client), which loads the received frame and puts it as texture on a plane or whatever (some other 3D primitive):
void DisplayFrame(byte[] frame)
{
    Texture2D texture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    texture.LoadRawTextureData(frame);
    texture.Apply();
    GetComponent<Renderer>().material.mainTexture = texture;
    ScreenUpdate = true;
    Destroy(texture);
}
This works and the image is displayed as I expected. However, when watching the RAM, I notice that the Client RAM goes nuts... over 8GB.
The interesting thing is that if I set frame = null; after calling Destroy(texture); in DisplayFrame on the Client, the user only sees a black screen.
I don't understand what is happening here... It's as if the frame is staying in memory and each new received frame just increases the memory used.
Does anyone have any ideas?
It turns out that if I move the Destroy(texture); call from the bottom of DisplayFrame(byte[] frame) to the top of the function, the memory leak is resolved.
It's as if the Destroy is being blocked because the texture is still applied to the material when the next iteration comes around. I'm not clear on the issue here, so if someone with more knowledge could chip in, that would be great.
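In case it helps anyone else, one way to sidestep the problem entirely (a sketch, not a verified explanation of the leak) is to reuse a single Texture2D and only upload the new bytes each frame, so nothing has to be created or destroyed per frame:

// Sketch: keep one Texture2D for the lifetime of the component and reuse it,
// so no per-frame allocation or Destroy is needed.
Texture2D _receiveTexture;

void DisplayFrame(byte[] frame)
{
    if (_receiveTexture == null)
    {
        _receiveTexture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        GetComponent<Renderer>().material.mainTexture = _receiveTexture;
    }
    _receiveTexture.LoadRawTextureData(frame);
    _receiveTexture.Apply();
    ScreenUpdate = true;
}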

Scale a PNG in Unity5? - Bountie

Surprisingly, for years the only way in Unity to simply scale an actual PNG has been to use the very awesome library http://wiki.unity3d.com/index.php/TextureScale
Example below.
How do you scale a PNG using Unity5 functions? There must be a way now with the new UI and so on.
So: scaling actual pixels (such as in a Color[]), or literally a PNG file, perhaps downloaded from the net.
(BTW, if you're new to Unity, the Resize call is unrelated: it merely changes the size of an array.)
public WebCamTexture wct;

public void UseFamousLibraryToScale()
{
    // Take the photo and scale down to 256,
    // also crop to a central square.
    // (wct is the WebCamTexture field above; the example assumes it is wider than high.)
    int oldW = wct.width;
    int oldH = wct.height;
    Texture2D photo = new Texture2D(oldW, oldH, TextureFormat.ARGB32, false);

    // Consider WaitForEndOfFrame() before GetPixels
    photo.SetPixels(0, 0, oldW, oldH, wct.GetPixels());
    photo.Apply();

    int newH = 256;
    int newW = Mathf.FloorToInt(((float)newH / (float)oldH) * oldW);

    // Use a famous Unity library to scale
    TextureScale.Bilinear(photo, newW, newH);

    // Crop to a central 256x256 square
    int startAcross = (newW - 256) / 2;
    Color[] pix = photo.GetPixels(startAcross, 0, 256, 256);
    photo = new Texture2D(256, 256, TextureFormat.ARGB32, false);
    photo.SetPixels(pix);
    photo.Apply();

    demoImage.texture = photo;

    // Consider WriteAllBytes(
    //   Application.persistentDataPath + "p.png",
    //   photo.EncodeToPNG()); etc.
}
Just BTW, it occurs to me I'm probably only talking about scaling down here (as you often have to do to post an image, create something on the fly, or whatever). I guess there would rarely be a need to scale an image up in size; it's pointless quality-wise.
If you're okay with stretch-scaling, there's actually a simpler way: use a temporary RenderTexture and Graphics.Blit. If you need the result as a Texture2D, temporarily swap RenderTexture.active and read the pixels back into a Texture2D. For example:
public Texture2D ScaleTexture(Texture src, int width, int height)
{
    RenderTexture rt = RenderTexture.GetTemporary(width, height);
    Graphics.Blit(src, rt);

    RenderTexture currentActiveRT = RenderTexture.active;
    RenderTexture.active = rt;

    Texture2D tex = new Texture2D(rt.width, rt.height);
    tex.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    tex.Apply();

    // Restore the previously active render texture before releasing the temporary one.
    RenderTexture.active = currentActiveRT;
    RenderTexture.ReleaseTemporary(rt);
    return tex;
}
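As a usage sketch tying this back to the question's "literally a PNG file" case (file names and sizes are placeholders): load the PNG into a Texture2D, scale it, and write it back out:

// Usage sketch (placeholder file names and sizes): scale a PNG file on disk.
byte[] pngBytes = File.ReadAllBytes(Path.Combine(Application.persistentDataPath, "input.png"));
Texture2D source = new Texture2D(2, 2); // LoadImage resizes this to the PNG's dimensions
source.LoadImage(pngBytes);
Texture2D scaled = ScaleTexture(source, 256, 256);
File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "scaled.png"), scaled.EncodeToPNG());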