Unity: Reduce size of Render Texture before executing Texture2D.ReadPixels

I'm working on a script that has to take a low-quality screenshot roughly every 30 milliseconds. The script is attached to a camera.
What I want to do is reduce the render texture size. The way the code is right now, changing either W or H just gives me a SECTION of what the camera sees instead of a reduced-size version. So my question is: how can I resize or downsample what is read into the screenshot (Texture2D) so that it is still a representation of the entire screen?
public class CameraRenderToImage : MonoBehaviour
{
    private RemoteRenderServer rrs;

    void Start()
    {
        TimeStamp.SetStart();
        Camera.onPostRender += OnPostRenderCallback;
    }

    void OnPostRenderCallback(Camera cam)
    {
        if (TimeStamp.HasMoreThanThisEllapsed(30))
        {
            TimeStamp.SetStart();
            int W = Screen.width;
            int H = Screen.height;
            Texture2D screenshot = new Texture2D(W, H, TextureFormat.RGB24, false);
            screenshot.ReadPixels(new Rect(0, 0, W, H), 0, 0);
            byte[] bytes = screenshot.EncodeToPNG();
            System.IO.File.WriteAllBytes("check_me_out.png", bytes);
            TimeStamp.Tok("Encode to PNG and Save");
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}

If you need to resize your render texture from script, you can refer to the following code snippet:
void Resize(RenderTexture renderTexture, int width, int height)
{
    if (renderTexture)
    {
        renderTexture.Release();
        renderTexture.width = width;
        renderTexture.height = height;
    }
}
To make it possible to resize the render texture, you first need to make sure it is released.
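For example, you could call it like this to quarter the resolution (a hypothetical usage; cam is assumed to be a Camera with a render texture assigned):

// Hypothetical usage: shrink the camera's render texture to quarter resolution.
Camera cam = GetComponent<Camera>();
Resize(cam.targetTexture, Screen.width / 4, Screen.height / 4);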
To get a Texture2D:
private Texture2D ToCompressedTexture(ref RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
    var previousTarget = RenderTexture.active;
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    RenderTexture.active = previousTarget;
    texture.Compress(false);
    texture.Apply(false, true);
    renderTexture.Release();
    renderTexture = null;
    return texture;
}
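If what you actually want is a smaller image of the entire view rather than a cropped section, you can also blit the full-size source into a smaller temporary render texture and read the pixels from that. A minimal sketch, assuming a configurable reduction factor and a temporary render texture (both are my additions, not part of the code above):

// Sketch: downsample the full image on the GPU, then read back the small version.
Texture2D CaptureDownsampled(RenderTexture source, int factor)
{
    int w = source.width / factor;   // assumed reduction factor
    int h = source.height / factor;
    RenderTexture small = RenderTexture.GetTemporary(w, h, 0);
    Graphics.Blit(source, small);    // scales the whole image down, no cropping
    var previous = RenderTexture.active;
    RenderTexture.active = small;
    Texture2D screenshot = new Texture2D(w, h, TextureFormat.RGB24, false);
    screenshot.ReadPixels(new Rect(0, 0, w, h), 0, 0);  // now reads the reduced image
    screenshot.Apply();
    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(small);
    return screenshot;
}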

Related

Converting RenderTexture to Texture2D in Unity 2019

I'm using an Intel RealSense camera to capture pictures. The capture result is displayed as a RenderTexture. Since I need to send it via UDP, I need to convert it to byte[], but that only works for Texture2D. Is it possible to convert a RenderTexture into a Texture2D in Unity 2019?
Edit:
Right now, I'm using this code to convert RenderTexture to Texture2D:
Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.ARGB32, false);
    RenderTexture.active = rTex;
    tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
    tex.Apply();
    return tex;
}
I got this code from here, but it doesn't work anymore in Unity 2019: if I display the texture, it only gives me a white texture.
Edit 2:
Here is how I called that function:
// sender side
Texture2D WebCam;
public RawImage WebCamSender;
public RenderTexture tex;
Texture2D CurrentTexture;

// receiver side
public RawImage WebCamReceiver;
Texture2D Textur;

IEnumerator InitAndWaitForWebCamTexture()
{
    WebCamSender.texture = tex;
    CurrentTexture = new Texture2D(WebCamSender.texture.width,
        WebCamSender.texture.height, TextureFormat.RGB24, false, false);
    WebCam = toTexture2D(tex);
    while (WebCamSender.texture.width < 100) //WebCam
    {
        yield return null;
    }
    StartCoroutine(SendUdpPacketVideo());
}
Then I send it over the network like this:
IEnumerator SendUdpPacketVideo()
{
    ...
    CurrentTexture.SetPixels(WebCam.GetPixels());
    byte[] PNGBytes = CurrentTexture.EncodeToPNG();
    ...
}
On the receiver side, I decode it and display it on a raw image:
....
Textur.LoadImage(ReceivedVideo);
WebCamReceiver.texture = Textur;
...
The most optimized way to do this is:
public Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D dest = new Texture2D(rTex.width, rTex.height, TextureFormat.RGBA32, false);
    dest.Apply(false);                 // upload the (empty) texture so a GPU resource exists
    Graphics.CopyTexture(rTex, dest);  // GPU-side copy, no CPU readback
    return dest;
}
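Note that Graphics.CopyTexture copies between GPU resources, so the result is fine for displaying, but the CPU-side pixel data is not updated; if you need GetPixels or EncodeToPNG for the UDP path, you still have to read the pixels back with ReadPixels as in the snippet above. A sketch of the display case (WebCamReceiver and tex are the fields from the question):

// Assumed usage: a GPU-side copy is enough when the texture is only displayed.
Texture2D snapshot = toTexture2D(tex);
WebCamReceiver.texture = snapshot;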

Take a photo in Unity C#

I'm trying to build a program that takes your photo and places it on a different background, like a monument. So far, I was able to turn the camera on when I start the project with this code:
webcamTexture = new WebCamTexture();
rawImage.texture = webcamTexture;
rawImage.material.mainTexture = webcamTexture;
webcamTexture.Play();
Texture2D PhotoTaken = new Texture2D (webcamTexture.width, webcamTexture.height);
PhotoTaken.SetPixels (webcamTexture.GetPixels ());
PhotoTaken.Apply ();
However, I can't take a screenshot or photo because it always ends up all black. I've tried different code but nothing is working. Can someone please help? Thanks.
EDIT
After some tries, this is the code I have:
WebCamTexture webcamTexture;
public RawImage rawImage;

void Start () {
    webcamTexture = new WebCamTexture();
    rawImage.texture = webcamTexture;
    rawImage.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    RenderTexture texture = new RenderTexture(webcamTexture.width, webcamTexture.height, 0);
    Graphics.Blit(webcamTexture, texture);
    Button btn = yourButton.GetComponent<Button>();
    btn.onClick.AddListener(OnClick);
}

public IEnumerator Coroutine() {
    yield return new WaitForEndOfFrame ();
}

public void OnClick() {
    var width = 767;
    var height = 575;
    Texture2D texture = new Texture2D(width, height);
    texture.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    texture.Apply();
    // Encode texture into PNG
    var bytes = texture.EncodeToPNG();
    //Destroy(texture);
    File.WriteAllBytes (Application.dataPath + "/../SavedScreen.png", bytes);
}
With this next code a screenshot is taken, but it captures the whole screen instead of just a part of it:
void Start()
{
    // Set the playback framerate!
    // (real time doesn't influence time anymore)
    Time.captureFramerate = frameRate;

    // Find a folder that doesn't exist yet by appending numbers!
    realFolder = folder;
    int count = 1;
    while (System.IO.Directory.Exists(realFolder))
    {
        realFolder = folder + count;
        count++;
    }
    // Create the folder
    System.IO.Directory.CreateDirectory(realFolder);
}

void Update()
{
    // name is "realFolder/shot 0005.png"
    var name = string.Format("{0}/shot {1:D04}.png", realFolder, Time.frameCount);

    // Capture the screenshot
    Application.CaptureScreenshot(name, sizeMultiplier);
}
You can take a screenshot like this in Unity
Application.CaptureScreenshot("Screenshot.png");
Reference
EDIT 1
To take a screenshot on a specific part of the screen use the following script:
var width = 400;
var height = 300;
var startX = 200;
var startY = 100;
var tex = new Texture2D (width, height, TextureFormat.RGB24, false);
tex.ReadPixels (new Rect(startX, startY, width, height), 0, 0);
tex.Apply ();

// Encode texture into PNG
var bytes = tex.EncodeToPNG();
Destroy(tex);
File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
Reference
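Regarding the webcam photo coming out black: a WebCamTexture typically needs a few frames after Play() before it delivers pixels, so copying it immediately returns an empty image. A minimal sketch of one way to wait first (the coroutine and the didUpdateThisFrame check are my assumptions, not from the code above):

IEnumerator TakeWebcamPhoto()
{
    // Wait until the webcam has actually produced a frame.
    while (!webcamTexture.didUpdateThisFrame)
        yield return new WaitForEndOfFrame();
    Texture2D photo = new Texture2D(webcamTexture.width, webcamTexture.height);
    photo.SetPixels(webcamTexture.GetPixels());
    photo.Apply();
    File.WriteAllBytes(Application.dataPath + "/../WebcamPhoto.png", photo.EncodeToPNG());
}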

Taking snapshots of an image in Unity

I am trying to take snapshots of the materials used in my application in Unity. I simply add a directional light and a camera in perspective mode, then render the result to a texture and save it as a .png file. The result is good, but there is a strange gizmo-like figure in the middle of the image. Here it is:
The camera and light are far enough from the object. I also disabled the light to check whether the figure was caused by the directional light, but that didn't solve it. Does anyone know what causes this elliptic figure? Thanks in advance.
Edit: Here is the code.
public static Texture2D CreateThumbnailFromMaterial(Material _material, string _name, string _path)
{
    GameObject sphereObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphereObj.name = _name;
    sphereObj.GetComponent<Renderer>().material = _material;
    Texture2D thumbnailTexture = CreateThumbnailFromModel(sphereObj, _path);
    sphereObj.GetComponent<Renderer>().material = null;
    Object.DestroyImmediate(sphereObj.gameObject);
    return thumbnailTexture;
}

public static Texture2D CreateThumbnailFromModel(GameObject _gameObject, string _path)
{
    Texture2D thumbnailTexture = new Texture2D(textureSize, textureSize);
    thumbnailTexture.name = _gameObject.name.Simplify();
    GameObject cameraObject = Object.Instantiate(Resources.Load("SceneComponent/SnapshotCamera") as GameObject);
    Camera snapshotCamera = cameraObject.GetComponent<Camera>();
    if (snapshotCamera)
    {
        GameObject sceneObject = GameObject.Instantiate(_gameObject) as GameObject;
        sceneObject.transform.Reset();
        sceneObject.transform.position = new Vector3(1000, 0, -1000);
        sceneObject.hideFlags = HideFlags.HideAndDontSave;

        // Create render texture
        snapshotCamera.targetTexture = RenderTexture.GetTemporary(textureSize, textureSize, 24);
        RenderTexture.active = snapshotCamera.targetTexture;

        // Set layer
        foreach (Transform child in sceneObject.GetComponentsInChildren<Transform>(true))
        {
            child.gameObject.layer = LayerMask.NameToLayer("ObjectSnapshot");
        }

        // Calculate bounding box
        Bounds bounds = sceneObject.GetWorldSpaceAABB();
        float maxBoundValue = 0f;
        if (bounds.IsValid())
        {
            maxBoundValue = Mathf.Max(bounds.size.x, bounds.size.y, bounds.size.z);
        }
        double fov = Mathf.Deg2Rad * snapshotCamera.GetComponent<Camera>().fieldOfView;
        float distanceToCenter = (maxBoundValue) / (float)System.Math.Tan(fov);
        cameraObject.transform.LookAt(bounds.center);
        cameraObject.transform.position = bounds.center - (snapshotCamera.transform.forward * distanceToCenter);
        cameraObject.transform.SetParent(sceneObject.transform);
        snapshotCamera.Render();
        thumbnailTexture.ReadPixels(new Rect(0, 0, textureSize, textureSize), 0, 0);
        thumbnailTexture.Apply();
        sceneObject.transform.Reset();
        snapshotCamera.transform.SetParent(null);
        RenderTexture.active = null;
        GameObject.DestroyImmediate(sceneObject);
        GameObject.DestroyImmediate(cameraObject);

        // Save as .png
        IO.IOManager.Instance.SaveAsPNG(_path + thumbnailTexture.name, thumbnailTexture);
    }
    return thumbnailTexture;
}
And here are my camera properties:

Modify WebcamTexture from a Unity native plugin

I just want to pass a WebcamTexture to a native plugin (using GetNativeTexturePtr() for performance), draw something on it and render the result onto a plane on the screen.
I guess that some Unity thread updates the WebcamTexture contents every frame, so writing to that texture could produce flickering, as two threads would be updating its contents. Because of that, I'm using another texture, "drawTexture", to render the result.
The algorithm is easy, on every render event:
Read camTexture
Copy camTexture contents to drawTexture
Draw things on drawTexture
Render drawTexture (OpenGL bindTexture)
I'm following the NativeRenderingPlugin example, but every time I try to copy contents from camTexture to drawTexture (even just one pixel) the main thread freezes.
I think that may be happening because I'm trying to read camTexture while it is being modified by an external thread.
Here is some code:
C# plugin source
[DllImport ("plugin")]
private static extern void setTexture (IntPtr cam, IntPtr draw, int width, int height);

[DllImport ("plugin")]
private static extern IntPtr GetRenderEventFunc();

WebCamTexture webcamTexture;
Texture2D drawTexture;
GameObject plane;
bool initialized = false;

IEnumerator Start () {
    // PlaneTexture
    // Create a texture
    drawTexture = new Texture2D(1280, 720, TextureFormat.BGRA32, true);
    drawTexture.filterMode = FilterMode.Point;
    drawTexture.Apply ();
    plane = GameObject.Find("Plane");
    plane.GetComponent<MeshRenderer> ().material.mainTexture = drawTexture;

    // Webcam texture
    webcamTexture = new WebCamTexture();
    webcamTexture.filterMode = FilterMode.Point;
    webcamTexture.Play();

    yield return StartCoroutine("CallPluginAtEndOfFrames");
}

private IEnumerator CallPluginAtEndOfFrames()
{
    while (true) {
        // Wait until all frame rendering is done
        yield return new WaitForEndOfFrame();

        // wait for webcam and initialize
        if (!initialized && webcamTexture.width > 16) {
            setTexture (webcamTexture.GetNativeTexturePtr(), drawTexture.GetNativeTexturePtr(), webcamTexture.width, webcamTexture.height);
            initialized = true;
        } else if (initialized) {
            // Issue a plugin event with arbitrary integer identifier.
            // The plugin can distinguish between different
            // things it needs to do based on this ID.
            // For our simple plugin, it does not matter which ID we pass here.
            GL.IssuePluginEvent (GetRenderEventFunc (), 1);
        }
    }
}
C++ plugin (draw texture method)
static void ModifyTexturePixels()
{
    void* textureHandle = drawTexturePointer;
    int width = textureWidth;
    int height = textureHeight;
    if (!textureHandle)
        return;

    int textureRowPitch;
    void* newEmptyTexturePointer = (unsigned char*)s_CurrentAPI->BeginModifyTexture(textureHandle, width, height, &textureRowPitch);
    memcpy(newEmptyTexturePointer, webcamTexturePointer, width * height * 4);
    s_CurrentAPI->EndModifyTexture(textureHandle, width, height, textureRowPitch, newEmptyTexturePointer);
}
Any help would be appreciated.

EmguCV + Unity open webcam error

I use EmguCV to open a webcam in Unity, but its fps is very low. This is my code:
private Texture2D texture;
private Capture capture;
private Color32[] color = new Color32[640*480];

// Use this for initialization
void Start () {
    texture = new Texture2D (640, 480);
    capture = new Capture ();
}

// Update is called once per frame
void Update () {
    Image<Bgr, Byte> currentFrame = capture.QueryFrame();
    Bitmap bitmapCurrentFrame = currentFrame.ToBitmap();
    Image<Bgra, Byte> img = new Image<Bgra, Byte> (bitmapCurrentFrame);
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 640; x++) {
            int index = y + x * 480;
            print(index + ";" + x + ";" + y);
            //byte b = img.Data[x,y,0];
            color[index].r = img.Data[x,y,2];
            color[index].g = img.Data[x,y,1];
            color[index].b = img.Data[x,y,0];
            color[index].a = 0xff;
        }
    }
    texture.SetPixels32 (color);
    texture.Apply (false);
    renderer.material.mainTexture = texture;
}
I don't know why the fps is so low, or why my boss likes EmguCV with Unity instead of Unity's WebCamTexture.
Thank you for reading; I hope I can get some answers.
Look at Texture2D.LoadRawTextureData. It has no proper docs, so here's a snippet:
Texture2D tex = new Texture2D(width, height, format, false, true);
tex.LoadRawTextureData(buffer);
tex.Apply(false, true);
The buffer must be in the correct hardware format. For the format variable, look at the list of formats that Unity accepts.
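As a sketch of how that could replace the per-pixel loop above: EmguCV images expose their raw data as a byte array, so the whole frame can be uploaded in one call. This assumes img is an Image<Bgra, Byte> of size 640x480 (as in the question), that img.Bytes is a contiguous BGRA buffer, and that the platform supports TextureFormat.BGRA32:

// Sketch: upload the whole frame at once instead of copying pixel by pixel.
Texture2D tex = new Texture2D(640, 480, TextureFormat.BGRA32, false, true);
byte[] buffer = img.Bytes;      // raw BGRA data from EmguCV (assumption)
tex.LoadRawTextureData(buffer);
tex.Apply(false, true);         // upload to the GPU and release the CPU copy
renderer.material.mainTexture = tex;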