I am trying to pass the render texture from Unity into a C++ plugin so I can run CUDA on it. This works fine with DirectX in the editor and in builds, and with OpenGL in the editor, but an OpenGL build gives me a blank texture. I've also noticed that if I try to save the texture, it also saves as blank. I call my plugin using GL.IssuePluginEvent(PluginFunction(), 1), and this happens in a coroutine after yield return new WaitForEndOfFrame(). Any ideas on how to fix it?
private IEnumerator CallPlugin()
{
    while (enabled)
    {
        yield return new WaitForEndOfFrame();

        // Attempt to save the image to a file; this only works in the editor.
        // Texture2D tex = new Texture2D(width, height, TextureFormat.RGBA32, false);
        // RenderTexture.active = _renderTexture;
        // tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        // tex.Apply();
        // byte[] bytes = tex.EncodeToPNG();
        // System.IO.File.WriteAllBytes("takeScreenshot.png", bytes);

        GL.IssuePluginEvent(GetRenderEventFunc(), 1);
    }
}
void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    if (shader != null)
    {
        Graphics.Blit(src, dest, material);
    }
}
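For completeness, a rough sketch of how the render texture's native pointer could be handed to the plugin before the loop runs (SetTextureFromUnity is an illustrative name borrowed from the NativeRenderingPlugin example, not my actual entry point):

[DllImport("plugin")]
private static extern void SetTextureFromUnity(System.IntPtr texture, int width, int height);

void Start()
{
    // Make sure the RenderTexture exists on the GPU before asking for its native handle.
    if (!_renderTexture.IsCreated())
        _renderTexture.Create();

    // Hand the handle to the plugin once; the plugin uses it later on the render
    // thread when GL.IssuePluginEvent fires.
    SetTextureFromUnity(_renderTexture.GetNativeTexturePtr(), _renderTexture.width, _renderTexture.height);

    StartCoroutine(CallPlugin());
}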
I'm working on code where I have to take a low-quality screenshot roughly every 30 milliseconds. The script is attached to a camera.
What I want to do is reduce the render texture size. The way the code is right now, changing either W or H just gets me a section of what the camera sees instead of a reduced-size version. So my question is: how can I resize or downsample what is read into the screenshot (Texture2D) so that it is still a representation of the entire screen?
public class CameraRenderToImage : MonoBehaviour
{
    private RemoteRenderServer rrs;

    void Start()
    {
        TimeStamp.SetStart();
        Camera.onPostRender += OnPostRenderCallback;
    }

    void OnPostRenderCallback(Camera cam)
    {
        if (TimeStamp.HasMoreThanThisEllapsed(30))
        {
            TimeStamp.SetStart();
            int W = Screen.width;
            int H = Screen.height;
            Texture2D screenshot = new Texture2D(W, H, TextureFormat.RGB24, false);
            screenshot.ReadPixels(new Rect(0, 0, W, H), 0, 0);
            byte[] bytes = screenshot.EncodeToPNG();
            System.IO.File.WriteAllBytes("check_me_out.png", bytes);
            TimeStamp.Tok("Encode to PNG and Save");
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}
If you need to resize your render texture from a script, you can refer to the following code snippet:
void Resize(RenderTexture renderTexture, int width, int height)
{
    if (renderTexture)
    {
        renderTexture.Release();
        renderTexture.width = width;
        renderTexture.height = height;
    }
}
To make it possible to resize the render texture, you first need to make sure it is released.
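For example, halving the current resolution would look something like this (assuming _renderTexture is the render texture you are writing to):

// Hypothetical usage: shrink an existing render texture to half its resolution.
Resize(_renderTexture, _renderTexture.width / 2, _renderTexture.height / 2);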
To get a Texture2D:
private Texture2D ToCompressedTexture(ref RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
    var previousTarget = RenderTexture.active;
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    RenderTexture.active = previousTarget;
    texture.Compress(false);
    texture.Apply(false, true);
    renderTexture.Release();
    renderTexture = null;
    return texture;
}
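If the goal is a smaller screenshot that still shows the whole screen, another option is to let the GPU downscale first: blit the full-size source into a smaller temporary render texture and read that back. A minimal sketch, assuming source is the camera's target render texture (or any full-size texture you already have):

Texture2D CaptureDownsampled(Texture source, int width, int height)
{
    // Scale the full frame down on the GPU into a temporary render texture,
    // then read the small version back on the CPU.
    RenderTexture small = RenderTexture.GetTemporary(width, height, 0);
    Graphics.Blit(source, small);

    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = small;

    Texture2D screenshot = new Texture2D(width, height, TextureFormat.RGB24, false);
    screenshot.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    screenshot.Apply();

    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(small);
    return screenshot;
}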
I'm using an Intel RealSense camera to capture a picture. The capture result is displayed as a RenderTexture. Since I need to send it via UDP, I need to convert it to byte[], but that only works for a Texture2D. Is it possible to convert a RenderTexture into a Texture2D in Unity 2019?
Edit:
Right now, I'm using this code to convert RenderTexture to Texture2D:
Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.ARGB32, false);
    RenderTexture.active = rTex;
    tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
    tex.Apply();
    return tex;
}
I got this code from here, but it doesn't work anymore in Unity 2019: if I display the texture, it only gives me a white texture.
Edit 2:
Here is how I call that function:
// sender side
Texture2D WebCam;
public RawImage WebCamSender;
public RenderTexture tex;
Texture2D CurrentTexture;

// receiver side
public RawImage WebCamReceiver;
Texture2D Textur;

IEnumerator InitAndWaitForWebCamTexture()
{
    WebCamSender.texture = tex;
    CurrentTexture = new Texture2D(WebCamSender.texture.width,
        WebCamSender.texture.height, TextureFormat.RGB24, false, false);
    WebCam = toTexture2D(tex);

    while (WebCamSender.texture.width < 100) // WebCam
    {
        yield return null;
    }

    StartCoroutine(SendUdpPacketVideo());
}
Then I send it over the network like this:
IEnumerator SendUdpPacketVideo()
{
    ...
    CurrentTexture.SetPixels(WebCam.GetPixels());
    byte[] PNGBytes = CurrentTexture.EncodeToPNG();
    ...
}
On the receiver side, I decode it and display it on a RawImage:
....
Textur.LoadImage(ReceivedVideo);
WebCamReceiver.texture = Textur;
...
The most optimized way to do this is:
public Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D dest = new Texture2D(rTex.width, rTex.height, TextureFormat.RGBA32, false);
    dest.Apply(false);
    Graphics.CopyTexture(rTex, dest);
    return dest;
}
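One caveat: Graphics.CopyTexture is a GPU-side copy, so the source and destination need matching dimensions and compatible formats, and the copied pixels are not visible to CPU-side calls such as GetPixels or EncodeToPNG afterwards. A sketch of a guarded version that falls back to ReadPixels when the fast path is unavailable (an assumption, not part of the original answer):

public Texture2D toTexture2DSafe(RenderTexture rTex)
{
    Texture2D dest = new Texture2D(rTex.width, rTex.height, TextureFormat.RGBA32, false);

    if (SystemInfo.copyTextureSupport != UnityEngine.Rendering.CopyTextureSupport.None)
    {
        // Fast path: GPU-to-GPU copy. Suitable when the result is only used for
        // rendering, because the CPU-side pixel data is not updated.
        dest.Apply(false);
        Graphics.CopyTexture(rTex, dest);
    }
    else
    {
        // Fallback: slower readback through the CPU, but the pixels are readable.
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rTex;
        dest.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
        dest.Apply();
        RenderTexture.active = previous;
    }

    return dest;
}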
I just want to pass a WebCamTexture to a native plugin (using GetNativeTexturePtr() for performance), draw something on it, and render the result onto a plane on the screen.
I guess that some Unity thread updates the WebCamTexture contents every frame, so writing to that texture directly could produce flickering, since two threads would be updating its contents. Because of that, I'm using another texture, drawTexture, to render the result.
The algorithm is simple; on every render event:
Read camTexture
Copy camTexture contents to drawTexture
Draw things on drawTexture
Render drawTexture (OpenGL bindTexture)
I'm following the NativeRenderingPlugin example, but every time I try to copy contents from camTexture to drawTexture (even just one pixel), the main thread freezes.
I think that may be happening because I'm trying to read camTexture while it is being modified by an external thread.
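A minimal sketch of what doing the copy (step 2 above) on the Unity side instead could look like, so the native code only ever touches drawTexture (this assumes the webcam and draw textures end up with compatible sizes and formats):

// Inside the end-of-frame loop, before issuing the plugin event:
// copy the current webcam frame into drawTexture on the GPU so the
// plugin never has to read camTexture directly.
Graphics.CopyTexture(webcamTexture, drawTexture);
GL.IssuePluginEvent(GetRenderEventFunc(), 1);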
Here is some code:
C# plugin source
[DllImport("plugin")]
private static extern void setTexture(IntPtr cam, IntPtr draw, int width, int height);

[DllImport("plugin")]
private static extern IntPtr GetRenderEventFunc();

WebCamTexture webcamTexture;
Texture2D drawTexture;
GameObject plane;
bool initialized = false;

IEnumerator Start()
{
    // PlaneTexture
    // Create a texture
    drawTexture = new Texture2D(1280, 720, TextureFormat.BGRA32, true);
    drawTexture.filterMode = FilterMode.Point;
    drawTexture.Apply();

    plane = GameObject.Find("Plane");
    plane.GetComponent<MeshRenderer>().material.mainTexture = drawTexture;

    // Webcam texture
    webcamTexture = new WebCamTexture();
    webcamTexture.filterMode = FilterMode.Point;
    webcamTexture.Play();

    yield return StartCoroutine("CallPluginAtEndOfFrames");
}
private IEnumerator CallPluginAtEndOfFrames()
{
    while (true)
    {
        // Wait until all frame rendering is done
        yield return new WaitForEndOfFrame();

        // Wait for the webcam and initialize
        if (!initialized && webcamTexture.width > 16)
        {
            setTexture(webcamTexture.GetNativeTexturePtr(), drawTexture.GetNativeTexturePtr(), webcamTexture.width, webcamTexture.height);
            initialized = true;
        }
        else if (initialized)
        {
            // Issue a plugin event with an arbitrary integer identifier.
            // The plugin can distinguish between different things it needs
            // to do based on this ID. For our simple plugin, it does not
            // matter which ID we pass here.
            GL.IssuePluginEvent(GetRenderEventFunc(), 1);
        }
    }
}
C++ plugin (draw texture method)
static void ModifyTexturePixels()
{
    void* textureHandle = drawTexturePointer;
    int width = textureWidth;
    int height = textureHeight;
    if (!textureHandle)
        return;

    int textureRowPitch;
    void* newEmptyTexturePointer = (unsigned char*)s_CurrentAPI->BeginModifyTexture(textureHandle, width, height, &textureRowPitch);
    memcpy(newEmptyTexturePointer, webcamTexturePointer, width * height * 4);
    s_CurrentAPI->EndModifyTexture(textureHandle, width, height, textureRowPitch, newEmptyTexturePointer);
}
Any help would be appreciated.
I have a problem with a camera: I want to take a screenshot, but I get this error:
Flare renderer to update not found
UnityEngine.Camera:Render()
c__Iterator4:MoveNext() (at Assets/Scripts/ActionCam.cs:43)
My code:
public IEnumerator TakeScreenshot()
{
    yield return new WaitForEndOfFrame();

    Camera camOV = _Camera;
    RenderTexture currentRT = RenderTexture.active;
    RenderTexture.active = camOV.targetTexture;
    camOV.Render(); // here is the problem...

    Texture2D imageOverview = new Texture2D(camOV.targetTexture.width, camOV.targetTexture.height, TextureFormat.RGB24, false);
    imageOverview.ReadPixels(new Rect(0, 0, camOV.targetTexture.width, camOV.targetTexture.height), 0, 0);
    imageOverview.Apply();
    RenderTexture.active = currentRT;

    byte[] bytes = imageOverview.EncodeToPNG();
    string path = ScreenShotName(Convert.ToInt32(imageOverview.width), Convert.ToInt32(imageOverview.height));
    System.IO.File.WriteAllBytes(path, bytes);
}
Here are my camera settings:
If I deactivate the "Flare Layer" I don't get this error, but my screenshots are more or less empty, showing only the skybox:
Any ideas?
I use three ways to take screenshots in Unity:
Application.CaptureScreenshot(ScreenShotFile): I don't use this anymore because I prefer methods that keep the screenshot in memory (I don't need the file, since it is attached to a MailMessage and sent via an SmtpClient).
OnPostRender (it must be attached to a camera):
void OnPostRender()
{
    Texture2D ScreenShot = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    RenderTexture.active = null;
    ScreenShot.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    ScreenShot.Apply();

    byte[] PngData = ScreenShot.EncodeToPNG();
    PngStream = new MemoryStream(PngData);
    //File.WriteAllBytes(FileShot, PngData);

    // Preview on a sprite
    Sprite Spr = Sprite.Create(ScreenShot, new Rect(0, 0, ScreenShot.width, ScreenShot.height), Vector2.zero, 100);
    Preview.sprite = Spr;
    Preview.gameObject.SetActive(true);
}
The same code, but at the end of the frame:
void Update()
{
    ...
    StartCoroutine(ScreenShotAtEndOfFrame());
}

public IEnumerator ScreenShotAtEndOfFrame()
{
    yield return new WaitForEndOfFrame();

    Texture2D ScreenShot = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    RenderTexture.active = null;
    ScreenShot.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    ScreenShot.Apply();

    byte[] PngData = ScreenShot.EncodeToPNG();
    PngStream = new MemoryStream(PngData);
    //File.WriteAllBytes(FileShot, PngData);

    // Preview on a sprite
    Sprite Spr = Sprite.Create(ScreenShot, new Rect(0, 0, ScreenShot.width, ScreenShot.height), Vector2.zero, 100);
    Preview.sprite = Spr;
    Preview.gameObject.SetActive(true);
}
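On newer Unity versions there is also ScreenCapture.CaptureScreenshotAsTexture, which keeps everything in memory and fits the same MemoryStream workflow; a small sketch, again called after WaitForEndOfFrame:

public IEnumerator ScreenShotAsTexture()
{
    yield return new WaitForEndOfFrame();

    // Captures the whole screen into a Texture2D without writing a file.
    Texture2D ScreenShot = ScreenCapture.CaptureScreenshotAsTexture();
    byte[] PngData = ScreenShot.EncodeToPNG();
    PngStream = new MemoryStream(PngData);

    // Destroy the texture when done to avoid leaking it on every capture.
    Destroy(ScreenShot);
}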
I am trying to turn a base64 string into a Sprite in Unity 3D, but my sprite in the scene remains blank.
public var cardPicture : Image;

function ReceiveData(jsonReply : JSONObject) {
    var pictureBytes : byte[] = System.Convert.FromBase64String(jsonReply.GetString("picture"));
    var cardPictureTexture = new Texture2D(720, 720);
    Debug.Log(cardPictureTexture.LoadImage(pictureBytes));

    var sprite : Sprite = new Sprite();
    sprite = Sprite.Create(cardPictureTexture, new Rect(0, 0, 720, 720), new Vector2(0.5f, 0.5f));
    cardPicture.overrideSprite = sprite;
}
This prints out true, but I am not sure whether it is loading the image from the bytes correctly or whether something else is going wrong, and I am not sure what to check to find out. Assigning some picture to cardPicture in the scene displays correctly.
I logged jsonReply.picture, used an online base64-to-image converter, and it displayed the image correctly.
byte[] pictureBytes = System.Convert.FromBase64String(jsonReply.GetString("picture"));
Texture2D tex = new Texture2D(2, 2);
tex.LoadImage(pictureBytes);
Sprite sprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f);
cardPicture.overrideSprite = sprite;
I assume you are trying to fetch an image from a remote URL and parse the bytes into a texture. In Unity, WWW handles this and does not require manual conversion.
I believe your response may contain header details that cause issues when converting to a texture. You can use code like the following:
public string Url = @"http://dummyimage.com/300/09f/fff.png";

void Start()
{
    // Starting a coroutine to avoid blocking
    StartCoroutine("LoadImage");
}

IEnumerator LoadImage()
{
    WWW www = new WWW(Url);
    yield return www;
    Debug.Log("Loaded");

    Texture texture = www.texture;
    this.gameObject.GetComponent<Renderer>().material.SetTexture(0, texture);
}
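Note that WWW is deprecated in recent Unity versions; the same idea with UnityWebRequest would look roughly like this (a sketch, assuming UnityEngine.Networking is imported):

IEnumerator LoadImage()
{
    using (UnityWebRequest www = UnityWebRequestTexture.GetTexture(Url))
    {
        yield return www.SendWebRequest();

        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // DownloadHandlerTexture decodes the downloaded bytes into a Texture2D.
            Texture2D texture = DownloadHandlerTexture.GetContent(www);
            GetComponent<Renderer>().material.mainTexture = texture;
        }
    }
}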
I don't know if this is solved, but I want to share my solution.
void Start()
{
    StartCoroutine(GetQR());
}

IEnumerator GetQR()
{
    using (UnityWebRequest www = UnityWebRequest.Get(GetQR_URL))
    {
        yield return www.SendWebRequest();

        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Show results as text
            Debug.Log(www.downloadHandler.text);

            QRData qr = JsonUtility.FromJson<QRData>(www.downloadHandler.text);
            string result = Regex.Replace(qr.img, @"^data:image\/[a-zA-Z]+;base64,", string.Empty);
            CovertBase64ToImage(result);
        }
    }
}

void CovertBase64ToImage(string img)
{
    byte[] bytes = Convert.FromBase64String(img);
    Texture2D myTexture = new Texture2D(512, 212);
    myTexture.LoadImage(bytes);

    Sprite sprite = Sprite.Create(myTexture, new Rect(0, 0, myTexture.width, myTexture.height), new Vector2(0.5f, 0.5f));
    QRimage.transform.parent.gameObject.SetActive(true);
    QRimage.sprite = sprite;
}
It works perfectly on Unity 2019.4.