RenderTexture not updating each frame when copying contents with CopyTexture() - unity3d

I'm trying to save 2D textures from a RenderTexture using CopyTexture() to capture multiple angles of a 3D object, but for some reason it only returns multiple copies of the same image, as if it's not updating each frame.
public Texture2D ReadPixels(Texture2D tex)
{
    Graphics.CopyTexture(rt, tex);
    return tex;
}
I know it should be working, because it worked with SetPixels(); I switched to CopyTexture() for performance, but now the texture no longer updates.
I have tried things like camera.Render(), tex.Apply(), rt.Create(), RenderTexture.active = rt, and rt.Release(), but these seem to be more for use with ReadPixels().
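A note on the likely cause (hedged, not verified against this project): the Unity documentation for Graphics.CopyTexture mentions that it operates on GPU data only, so the CPU-side copy of the destination Texture2D is not updated; anything that later reads the texture on the CPU, such as GetPixels() or EncodeToPNG(), keeps returning the old pixels. A minimal sketch of a CPU readback that does update each frame (names are placeholders):

public Texture2D ReadBack(RenderTexture rt, Texture2D tex)
{
    RenderTexture prev = RenderTexture.active;
    RenderTexture.active = rt;
    // ReadPixels copies from the active RenderTexture into the CPU-side data.
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
    RenderTexture.active = prev;
    return tex;
}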

Related

Graphics.DrawTexture() causes extra image to appear on canvas

I have some code like this:
readonly Rect WORK_SOURCE_RECT = new Rect(0f, 0f, 1f, 1f);
Texture2D workTexture; // a field rather than a Start() local, so OnGUI() can reach it
Color[] workPixels;

void Start() {
    // GraphicsFormat needs: using UnityEngine.Experimental.Rendering;
    workTexture = new Texture2D(256, 256, GraphicsFormat.R8G8B8A8_UNorm,
        TextureCreationFlags.None);
    workPixels = workTexture.GetPixels();
}

void OnGUI() {
    workTexture.SetPixels(workPixels);
    workTexture.Apply();
    // toRect and renderColor are declared elsewhere in the class.
    Graphics.DrawTexture(toRect, workTexture, WORK_SOURCE_RECT,
        0, 0, 0, 0, renderColor);
}

void Update() {
    // Omitted - some changes are made by code here to the workPixels array.
}
The call to Graphics.DrawTexture() correctly draws the content of workTexture to the screen, just how I want it. But there is a strange side effect...
Any other GUI drawn inside a scene object containing a Canvas component will show an extra Y-reversed copy of the work texture. (Never mind the reversal; that's not the issue.) I don't know why this extra image is drawn. It seems like there is a shared resource between two GUI things I'd hoped were completely unrelated.
In the image shown below, the reversed face on the right is the unwanted extra render. Strangely, it appears when I move to the right side of my scene, so it seems to be in world space. But it updates when GUI-based elements like subtitles are shown.
On Unity 2019.4.13f1 on macOS.
The solution that resolved my problem was camera stacking. I created a second camera culled to just the UI layer, and removed the UI layer from the first camera's culling mask. After that, the calls to Graphics.DrawTexture() no longer appeared on the canvas used for UI.
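In code, that camera split looks roughly like this (a minimal sketch, assuming the built-in "UI" layer and hypothetical camera references):

void ConfigureCameras(Camera mainCam, Camera uiCam)
{
    int uiLayer = LayerMask.NameToLayer("UI");
    mainCam.cullingMask &= ~(1 << uiLayer);    // main camera: everything except UI
    uiCam.cullingMask = 1 << uiLayer;          // second camera: only the UI layer
    uiCam.clearFlags = CameraClearFlags.Depth; // keep the main camera's image underneath
    uiCam.depth = mainCam.depth + 1;           // render the UI camera on top
}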

How to draw a 2D plot on a plane in Unity 3D?

I have a plane game object in the 3D scene and I want to plot a 2D graph z = f(x) = sin kx on it, for example. I am very new to Unity; could you tell me what I should do?
There are three ways to show a plot:
1. Create a bunch of small GameObjects and piece together lines.
2. Create a Texture2D and draw into it.
3. Leave Unity a little: call Texture.GetNativeTexturePtr() and use D3D calls on it.
I think option 2 is what you might use best. Option 3 steps outside Unity a little and will not port across target platforms.
Option 2 leaves it up to you how to draw graphics on the texture; SetPixel alone is not a really big graphics API.
Here's an example of how to fill a texture with graphics drawn at runtime.
To use it, create an object (don't forget to assign a material) and attach this script.
using UnityEngine;

public class DrawTex : MonoBehaviour
{
    Material mat;
    Texture2D tx;

    void Start()
    {
        MeshRenderer rend = GetComponent<MeshRenderer>();
        UnityEngine.Assertions.Assert.IsNotNull(rend);
        mat = rend.material;
        UnityEngine.Assertions.Assert.IsNotNull(mat);
        tx = new Texture2D(128, 128, TextureFormat.ARGB32, true);
        // Draw stuff: a grey border around a translucent colour gradient.
        for (int y = 0; y < 128; y++)
        {
            for (int x = 0; x < 128; x++)
            {
                float a, r, g, b;
                r = g = b = a = 0f;
                if (x < 20 || y < 20 || x > 108 || y > 108)
                {
                    a = 1.0f; r = g = b = 0.75f;
                }
                else
                {
                    a = 0.5f; r = b = 0.25f + (x / 256.0f); g = 0.25f + (y / 256.0f);
                }
                tx.SetPixel(x, y, new Color(r, g, b, a));
            }
        }
        tx.Apply(true); // now really upload all those pixels - once, after the loops, not per row
        mat.mainTexture = tx;
    }
}
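To plot z = sin kx specifically, you could sample the function once per pixel column and draw into the same texture. A minimal sketch (the parameter k and the black-on-white styling are assumptions for illustration):

void PlotSine(Texture2D tx, float k)
{
    int w = tx.width, h = tx.height;
    // Clear to white.
    Color[] bg = new Color[w * h];
    for (int i = 0; i < bg.Length; i++) bg[i] = Color.white;
    tx.SetPixels(bg);
    // One sample per pixel column.
    for (int x = 0; x < w; x++)
    {
        float fx = (float)x / (w - 1);                         // normalise x to [0, 1]
        float z = Mathf.Sin(k * fx);
        int y = Mathf.RoundToInt((z * 0.5f + 0.5f) * (h - 1)); // map [-1, 1] to [0, h-1]
        tx.SetPixel(x, y, Color.black);
    }
    tx.Apply();
}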
Hope this helps.

Is WaitForEndOfFrame the same as OnRenderImage?

In Unity, is WaitForEndOfFrame

void Update()
{
    StartCoroutine(_c());
}

IEnumerator _c()
{
    yield return new WaitForEndOfFrame();
    BuildTexturesForExample();
}

identical to OnRenderImage?

void OnRenderImage()
{
    BuildTexturesForExample();
}
(It goes without saying the minimal/useless Unity doco on the two calls does not help.)
If not, what is done "after" OnRenderImage until WaitForEndOfFrame is called?
Does anyone have experience using the two comparatively?
Safe to replace?
Can you always safely replace a WaitForEndOfFrame pattern with OnRenderImage?¹
What's the deal?
¹ (Of course, gizmos/OnGUI are irrelevant.)
I think I just found the actual answer in the documentation for OnPostRender:
OnPostRender is called after the camera renders all its objects. If you want to do something after all cameras and GUI is rendered, use WaitForEndOfFrame coroutine.
So (contrary to what I thought, and to what the execution order makes it look/sound like) all methods in the Render block (except OnGUI and OnDrawGizmos) are called on a per-camera basis. Also note that
OnPostRender: This function is called only if the script is attached to the camera and is enabled.
or
OnRenderImage: This message is sent to all scripts attached to the camera.
Its purpose is post-processing (I only understood how these work by looking at the examples), so it actually takes two arguments:
OnRenderImage(RenderTexture src, RenderTexture dest)
This lets you overwrite the output texture (dest) with some render effects after receiving the input (src), as in their example:
Material material;

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Copy the source Render Texture to the destination,
    // applying the material along the way.
    Graphics.Blit(source, destination, material);
}
There is also e.g. OnRenderObject, which is called on all GameObjects (MonoBehaviours), not only on the Camera. Again, I didn't really understand what it does (or what makes it different from OnRenderImage) until I saw the example, but this one helped:
void OnRenderObject()
{
    // Render different meshes for the object depending on whether
    // the main camera or the minimap camera is viewing it.
    if (Camera.current.name == "MiniMapcam")
    {
        Graphics.DrawMeshNow(miniMapMesh, transform.position, transform.rotation);
    }
    else
    {
        Graphics.DrawMeshNow(mainMesh, transform.position, transform.rotation);
    }
}
Bonus: I finally understand the real purpose of Camera.current! :D
WaitForEndOfFrame, on the other hand, fires after all cameras have finished rendering, and it can be used anywhere, not only on a Camera GameObject:
Waits until the end of the frame after all cameras and GUI is rendered, just before displaying the frame on screen.
So I'd say no, you cannot/should not replace WaitForEndOfFrame with OnRenderImage!
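To see the difference empirically, a small probe script can log both callbacks (a hedged sketch; attach it to a camera and watch the ordering in the Console):

using UnityEngine;
using System.Collections;

public class RenderOrderProbe : MonoBehaviour
{
    void Update()
    {
        StartCoroutine(EndOfFrame());
    }

    IEnumerator EndOfFrame()
    {
        yield return new WaitForEndOfFrame();
        Debug.Log("WaitForEndOfFrame"); // once per frame, after ALL cameras and GUI
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Debug.Log("OnRenderImage");     // once per render of the camera this script is on
        Graphics.Blit(src, dest);       // pass the image through unchanged
    }
}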

Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious whether it has the capability to take a picture and then manipulate it. Specifically, my desire is to have the user take a selfie and then trace around their face to create a PNG that would then be texture-mapped onto a model.
I know that mapping the face onto a model is simple, but I'm wondering if I need to write the photo/carving functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it, just asking whether it's possible.
Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, allows you to get the pixels via a GetPixels() call. This lets you take a snapshot whenever you like, and you can save it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation; I have not tested it. You might have to select the correct device if more than one is available.
using UnityEngine;
using System.Collections;
using System.IO;

public class WebCamPhotoCamera : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        webCamTexture = new WebCamTexture();
        // Requires a Renderer on the GameObject this script is attached to.
        GetComponent<Renderer>().material.mainTexture = webCamTexture;
        webCamTexture.Play();
    }

    IEnumerator TakePhoto() // Start this Coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // it's a rare case where the Unity doco is pretty clear,
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // be sure to scroll down to the SECOND long example on that doco page
        Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
        photo.SetPixels(webCamTexture.GetPixels());
        photo.Apply();
        // Encode to a PNG.
        byte[] bytes = photo.EncodeToPNG();
        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "photo.png", bytes);
    }
}
For those trying to get the camera to render a live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture is assigned in Update rather than just in Start:
void Start()
{
    webCamTexture = new WebCamTexture();
    webCamTexture.Play();
}

void Update()
{
    GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one via Right Click -> UI -> Raw Image in the Hierarchy of the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, Unity 5 supports the use of webcams in its free Personal edition.
I hope this helps anyone looking for a good way to capture live camera feed in Unity.
It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns the pixel color at coordinates (x, y).
GetPixels() -- Gets a block of pixel colors.
GetPixels32() -- Returns the pixel data in raw format.
MarkNonReadable() -- Marks the WebCamTexture as unreadable.
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
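For snapshots, GetPixels32() is worth knowing about. A minimal sketch (this helper is my own illustration, not from the answers above; the raw 32-bit path skips the float conversion that GetPixels() performs):

Texture2D Snapshot(WebCamTexture cam)
{
    Texture2D snap = new Texture2D(cam.width, cam.height, TextureFormat.RGBA32, false);
    snap.SetPixels32(cam.GetPixels32()); // raw 32-bit-per-pixel copy
    snap.Apply();
    return snap;
}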
Bart's answer needs one modification: I used his code and the picture I was getting was black. The required modification is to
convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy of @fafase.)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output [Unity3D]
Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video, and save it to a desired location on your device with just a few lines of code.
You need to find your webcam device's index by searching for it in the devices list, and select that device for the webcam texture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;

public class GetCam : MonoBehaviour
{
    WebCamTexture webCam;
    string your_path = "C:\\Users\\Jay\\Desktop"; // any path you want to save your image to
    public RawImage display;
    public AspectRatioFitter fit;

    public void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogError("Cannot find any camera!");
            return;
        }
        int index = -1;
        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
            {
                Debug.Log("WebCam Name: " + WebCamTexture.devices[i].name + " WebCam Index: " + i);
                index = i;
            }
        }
        if (index == -1)
        {
            Debug.LogError("Cannot find a camera with that name!");
            return;
        }
        WebCamDevice device = WebCamTexture.devices[index];
        webCam = new WebCamTexture(device.name);
        webCam.Play();
        StartCoroutine(TakePhoto());
        display.texture = webCam;
    }

    public void Update()
    {
        // Keep the on-screen image matched to the camera's aspect ratio,
        // vertical mirroring, and rotation.
        float ratio = (float)webCam.width / (float)webCam.height;
        fit.aspectRatio = ratio;
        float scaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
        display.rectTransform.localScale = new Vector3(1f, scaleY, 1f);
        int orient = -webCam.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }

    public void callTakePhoto() // call this function in a button click event
    {
        StartCoroutine(TakePhoto());
    }

    IEnumerator TakePhoto() // Start this Coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // it's a rare case where the Unity doco is pretty clear,
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // be sure to scroll down to the SECOND long example on that doco page
        Texture2D photo = new Texture2D(webCam.width, webCam.height);
        photo.SetPixels(webCam.GetPixels());
        photo.Apply();
        // Encode to a PNG.
        byte[] bytes = photo.EncodeToPNG();
        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "\\photo.png", bytes);
    }
}
There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 - and while the functionality provided is geared towards mobile, it contains a demo of how you can use WebCamTexture to take a still image.
If you want to do this without a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), it's not possible within Unity; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a blog post that summarises this and will guide you to achieve your goal:
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: note that the above post describes picking an image from the gallery, but the same process applies to saving an image to the gallery.

RenderWithShader texture passing

I wish to create a night-vision effect with a shader for my camera. I have written the shader for a normal material, to which I pass a noise mask and a texture (in my camera example, the texture should be the image I get from the camera itself).
I have some questions. First, I see that I can pass a shader to the camera using Camera.RenderWithShader. The thing is that I don't know how to link the image of what I see through my camera to my shader. I would also like to pass the noise mask to my shader and don't know how. This is different from having a material to which you can link the textures.
I found some code on the net for linking the shader and the camera. The thing is, I don't know if it's right, because I can't see the final night-vision effect: I don't know how to pass textures to the camera. I can see the view changing, but I don't know if it's correct.
void Start()
{
    nightVisionShader = Shader.Find("Custom/nightvisionShader");
    // Camera.mainCamera and transform.camera are old-Unity shortcuts
    // (Camera.main and GetComponent<Camera>() in current versions).
    Camera.mainCamera.RenderWithShader(nightVisionShader, "");
}

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture sceneNormals = RenderTexture.GetTemporary(source.width, source.height, 24, RenderTextureFormat.ARGB32);
    transform.camera.targetTexture = sceneNormals;
    transform.camera.RenderWithShader(nightVisionShader, "");
    transform.camera.targetTexture = null;
    // display contents in game view
    Graphics.Blit(sceneNormals, destination);
    RenderTexture.ReleaseTemporary(sceneNormals);
}
Found out how to do it!
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    overlayMaterial.SetTexture("_MainTex", Resources.Load("nightvision/") as Texture2D);
    overlayMaterial.SetTexture("_noiseTex", Resources.Load("nightvision/noise_tex6") as Texture2D);
    overlayMaterial.SetTexture("_maskTex", Resources.Load("nightvision/binoculars_mask") as Texture2D);
    overlayMaterial.SetFloat("_elapsedTime", Time.time);
    Graphics.Blit(source, destination, overlayMaterial, 0);
}