RenderWithShader texture passing - Unity3D

I want to create a night-vision effect with a shader for my camera. I have written the shader for a normal material, in which I pass a noise mask and a texture (in my camera example, the texture should be the image I get from the camera itself).
I have some questions. First, I see that I can pass a shader to the camera using Camera.RenderWithShader. The thing is that I don't know how to link the image I see through my camera to my shader. I would also like to feed the noise mask to my shader and don't know how to pass it. This is different from having a material to which you can link the textures.
I found some code on the net for linking the shader and the camera. The thing is that I don't know if it's correct, because I can't see the final night-vision effect without knowing how to pass textures to the camera. I can see the view changing, but I don't know if it's right.
void Start()
{
    nightVisionShader = Shader.Find("Custom/nightvisionShader");
    Camera.mainCamera.RenderWithShader(nightVisionShader, "");
}

void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    RenderTexture sceneNormals = RenderTexture.GetTemporary(source.width, source.height, 24, RenderTextureFormat.ARGB32);

    transform.camera.targetTexture = sceneNormals;
    transform.camera.RenderWithShader(nightVisionShader, "");
    transform.camera.targetTexture = null;

    // display contents in game view
    Graphics.Blit(sceneNormals, destination);
    RenderTexture.ReleaseTemporary(sceneNormals);
}

Found how to do it!
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    overlayMaterial.SetTexture("_MainTex", Resources.Load("nightvision/") as Texture2D);
    overlayMaterial.SetTexture("_noiseTex", Resources.Load("nightvision/noise_tex6") as Texture2D);
    overlayMaterial.SetTexture("_maskTex", Resources.Load("nightvision/binoculars_mask") as Texture2D);
    overlayMaterial.SetFloat("_elapsedTime", Time.time);

    Graphics.Blit(source, destination, overlayMaterial, 0);
}
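For anyone reusing this, a possible refinement (just a sketch, using the same asset paths as above, not taken from the original answer): load the static textures once in Start instead of on every frame, and rely on Graphics.Blit binding the camera image to _MainTex automatically.

using UnityEngine;

public class NightVisionEffect : MonoBehaviour
{
    public Material overlayMaterial; // material using Custom/nightvisionShader

    void Start()
    {
        // Load the static textures once; they don't change per frame.
        overlayMaterial.SetTexture("_noiseTex", Resources.Load("nightvision/noise_tex6") as Texture2D);
        overlayMaterial.SetTexture("_maskTex", Resources.Load("nightvision/binoculars_mask") as Texture2D);
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Graphics.Blit passes 'source' to the shader as _MainTex,
        // so the camera image doesn't have to be set by hand.
        overlayMaterial.SetFloat("_elapsedTime", Time.time);
        Graphics.Blit(source, destination, overlayMaterial, 0);
    }
}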

Related

RenderTexture not updating each frame, when copying contents with CopyTexture()

I'm trying to save 2D textures from a RenderTexture using CopyTexture() to capture multiple angles of a 3D object, but for some reason it only returns multiple copies of the same image, as if the RenderTexture is not updating each frame.
public Texture2D ReadPixels(Texture2D tex)
{
    Graphics.CopyTexture(rt, tex);
    return tex;
}
I know the setup should be working, because it worked with SetPixels(); I switched to CopyTexture() for performance, but now the RenderTexture isn't updating.
I have tried things like camera.Render(), tex.Apply(), rt.Create(), RenderTexture.active = rt and rt.Release(). It seems like these are more relevant when using ReadPixels.
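For reference, here is a minimal sketch of the kind of capture loop being described (the cam and rt fields are assumptions, not from the original post); it forces the camera to render into the RenderTexture before each copy, which is one of the things mentioned above.

using UnityEngine;

public class AngleCapture : MonoBehaviour
{
    public Camera cam;        // camera orbiting the 3D object (assumed)
    public RenderTexture rt;  // the camera's target RenderTexture (assumed)

    public Texture2D ReadPixels(Texture2D tex)
    {
        // Render this angle into rt before copying, otherwise
        // consecutive copies can see identical contents.
        cam.targetTexture = rt;
        cam.Render();
        Graphics.CopyTexture(rt, tex);
        return tex;
    }
}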

Is WaitForEndOfFrame the same as OnRenderImage?

In Unity, is WaitForEndOfFrame
void Update()
{
    StartCoroutine(_c());
}

IEnumerator _c()
{
    yield return new WaitForEndOfFrame();
    BuildTexturesForExample();
}
identical to OnRenderImage?
void OnRenderImage()
{
    BuildTexturesForExample();
}
(It goes without saying the minimal/useless Unity doco on the two calls does not help.)
If not, what is done "after" OnRenderImage until WaitForEndOfFrame is called?
Does anyone have any experience of using the two comparatively?
Safe to replace?
Can you always safely replace a WaitForEndOfFrame pattern with OnRenderImage?¹
What's the deal?
¹ (Of course, gizmos/OnGUI are irrelevant.)
I guess I just found the actual answer in OnPostRender:
OnPostRender is called after the camera renders all its objects. If you want to do something after all cameras and GUI is rendered, use WaitForEndOfFrame coroutine.
So (other than what I thought, and what the ExecutionOrder makes it look/sound like) all methods in the Render block (except OnGUI and OnDrawGizmos) are called on a per-camera basis. Also note that
OnPostRender: This function is called only if the script is attached to the camera and is enabled.
or
OnRenderImage: This message is sent to all scripts attached to the camera.
Its purpose is post-processing (I only understood how these work by looking at the examples), therefore it actually takes two arguments:
OnRenderImage(RenderTexture src, RenderTexture dest)
so you can overwrite the output texture (dest) with some render effects after receiving the input (src), as in their example:
Material material;

private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Copy the source Render Texture to the destination,
    // applying the material along the way.
    Graphics.Blit(source, destination, material);
}
There is also e.g. OnRenderObject, which is called on all GameObjects (MonoBehaviours), not only the Camera. Again, until I saw the example I didn't really understand what it does (or what makes it different from OnRenderImage), but this example helped:
void OnRenderObject()
{
    // Render different meshes for the object depending on whether
    // the main camera or minimap camera is viewing.
    if (Camera.current.name == "MiniMapcam")
    {
        Graphics.DrawMeshNow(miniMapMesh, transform.position, transform.rotation);
    }
    else
    {
        Graphics.DrawMeshNow(mainMesh, transform.position, transform.rotation);
    }
}
Bonus: I finally understand the real purpose of Camera.current! :D
WaitForEndOfFrame, on the other hand, resumes after all cameras have finished rendering, and it works anywhere, not only on a Camera GameObject.
Waits until the end of the frame after all cameras and GUI is rendered, just before displaying the frame on screen.
So I'd say no, you can't/shouldn't replace WaitForEndOfFrame with OnRenderImage!
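To make the difference concrete, here is a minimal end-of-frame capture sketch (the standard ReadPixels pattern, not code from the question): it sees the fully composited frame from every camera plus the GUI, which a single camera's OnRenderImage never does.

using System.Collections;
using UnityEngine;

public class EndOfFrameCapture : MonoBehaviour
{
    IEnumerator Capture()
    {
        // Resumes once per frame, after every camera and the GUI have rendered.
        yield return new WaitForEndOfFrame();

        // Read the finished back buffer. OnRenderImage on a single camera
        // would instead receive only that camera's intermediate image.
        Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();
    }
}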

How to cycle more than one video inside the sphere for VR view? [duplicate]

I want to play a stereo 360-degree video in virtual reality in Unity on an Android device. So far I have been doing some research, and I have two cameras for the right and left eye, each with a sphere around it. I also need a custom shader to make the image render on the inside of the sphere. I have the upper half of the image showing on one sphere by setting the y-tiling to 0.5, and the lower half shows on the other sphere with y-tiling 0.5 and y-offset 0.5. With this I can already show a 3D 360-degree image correctly. The whole idea is from this tutorial.
Now, for video, I need control over the playback speed, so it turned out I need the VideoPlayer from the new Unity 5.6 beta. My setup so far would require the VideoPlayer to play the video on both spheres, with one sphere playing the upper part (one eye) and the other playing the lower part (the other eye).
Here is my problem: I don't know how to get the VideoPlayer to play the same video on two different materials (since they have different tiling values). Is there a way to do that?
I got a hint that I could use the same material and achieve the tiling effect via UV, but I don't know how that works, and I haven't even gotten the video player to play the video on two objects using the same material on both of them. I have a screenshot of that here. The right sphere just has the material videoMaterial, with no tiling, since I'd have to do that via UV.
Which way to go and how to do it? Am I on the right way here?
Am I on the right way here?
Almost, but you are currently using a Renderer and Material instead of a RenderTexture and Material.
Which way to go and how to do it?
You need to use a RenderTexture for this. Basically, you render the video to a RenderTexture, then assign that texture to the material of both spheres.
1. Create a RenderTexture and assign it to the VideoPlayer.
2. Create two materials for the spheres.
3. Set VideoPlayer.renderMode to VideoRenderMode.RenderTexture.
4. Set the texture of both spheres to the texture from the RenderTexture.
5. Prepare and play the video.
The code below does exactly that. It should work out of the box. The only thing you need to do is modify the tiling and offset of each material to your needs (a sketch of that is shown after the code).
You should also comment out:
leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));
then use a sphere imported from any 3D application. Those lines of code are only there for testing purposes, and it's not a good idea to play video on Unity's built-in sphere because it doesn't have enough detail to make the video look smooth.
using UnityEngine;
using UnityEngine.Video;

public class StereoscopicVideoPlayer : MonoBehaviour
{
    RenderTexture renderTexture;

    Material leftSphereMat;
    Material rightSphereMat;

    public GameObject leftSphere;
    public GameObject rightSphere;

    private VideoPlayer videoPlayer;
    //Audio
    private AudioSource audioSource;

    void Start()
    {
        //Create Render Texture
        renderTexture = createRenderTexture();

        //Create Left and Right Sphere Materials
        leftSphereMat = createMaterial();
        rightSphereMat = createMaterial();

        //Create the Left and Right Spheres
        leftSphere = createSphere("LeftEye", new Vector3(-5f, 0f, 0f), new Vector3(4f, 4f, 4f));
        rightSphere = createSphere("RightEye", new Vector3(5f, 0f, 0f), new Vector3(4f, 4f, 4f));

        //Assign material to the Spheres
        leftSphere.GetComponent<MeshRenderer>().material = leftSphereMat;
        rightSphere.GetComponent<MeshRenderer>().material = rightSphereMat;

        //Add VideoPlayer to the GameObject
        videoPlayer = gameObject.AddComponent<VideoPlayer>();

        //Add AudioSource
        audioSource = gameObject.AddComponent<AudioSource>();

        //Disable Play on Awake for both Video and Audio
        videoPlayer.playOnAwake = false;
        audioSource.playOnAwake = false;

        // We want to play from url
        videoPlayer.source = VideoSource.Url;
        videoPlayer.url = "http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4";

        //Set Audio Output to AudioSource
        videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;

        //Assign the Audio from Video to AudioSource to be played
        videoPlayer.EnableAudioTrack(0, true);
        videoPlayer.SetTargetAudioSource(0, audioSource);

        //Set the mode of output to be RenderTexture
        videoPlayer.renderMode = VideoRenderMode.RenderTexture;

        //Set the RenderTexture to store the images to
        videoPlayer.targetTexture = renderTexture;

        //Set the Texture of both Spheres to the Texture from the RenderTexture
        assignTextureToSphere();

        //Prepare Video to prevent Buffering
        videoPlayer.Prepare();

        //Subscribe to prepareCompleted event
        videoPlayer.prepareCompleted += OnVideoPrepared;
    }

    RenderTexture createRenderTexture()
    {
        RenderTexture rd = new RenderTexture(1024, 1024, 16, RenderTextureFormat.ARGB32);
        rd.Create();
        return rd;
    }

    Material createMaterial()
    {
        return new Material(Shader.Find("Specular"));
    }

    void assignTextureToSphere()
    {
        //Set the Texture of both Spheres to the Texture from the RenderTexture
        leftSphereMat.mainTexture = renderTexture;
        rightSphereMat.mainTexture = renderTexture;
    }

    GameObject createSphere(string name, Vector3 spherePos, Vector3 sphereScale)
    {
        GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        sphere.transform.position = spherePos;
        sphere.transform.localScale = sphereScale;
        sphere.name = name;
        return sphere;
    }

    void OnVideoPrepared(VideoPlayer source)
    {
        Debug.Log("Done Preparing Video");

        //Play Video
        videoPlayer.Play();

        //Play Sound
        audioSource.Play();

        //Change Play Speed
        if (videoPlayer.canSetPlaybackSpeed)
        {
            videoPlayer.playbackSpeed = 1f;
        }
    }
}
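As mentioned above, the remaining step is setting the tiling and offset so each sphere samples one half of the over/under frame. A sketch of what that could look like (which eye gets which half depends on your source footage; this helper is not part of the original answer):

void applyOverUnderLayout()
{
    // One eye samples the top half of the video frame...
    leftSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
    leftSphereMat.mainTextureOffset = new Vector2(0f, 0.5f);

    // ...and the other eye samples the bottom half.
    rightSphereMat.mainTextureScale = new Vector2(1f, 0.5f);
    rightSphereMat.mainTextureOffset = new Vector2(0f, 0f);
}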
There is also a Unity tutorial on how to do this with a special shader, but it did not work for me and some other people. I suggest you use the method above until VR support is added to the VideoPlayer API.

How to control Oculus rotation and position in Unity 5.3.x

I want to control the rotation and position of the Oculus DK2 in Unity 5.3 and above. It doesn't seem to be trivial; I've already tried everything I could find on the Unity forum, but nothing seems to work. The CameraRig script doesn't appear to do anything when I change it. I want to disable all rotation and position tracking, because I have a mocap system that is more reliable for those things.
Need some help!
To be able to control the pose, your camera must be represented by an OVRCameraRig, which is included with the OVRPlugin for Unity 5.
Once you have that, you can use the UpdatedAnchors event from the camera rig to apply the mocap data to the camera pose: just overwrite the value of OVRCameraRig.trackerAnchor for the head, and OVRCameraRig.leftHandAnchor and OVRCameraRig.rightHandAnchor for the hand positions if your suit supports those.
public class MocapController : MonoBehaviour
{
    public OVRCameraRig camera; //Drag camera rig object on to the script in the editor.

    void Awake()
    {
        camera.UpdatedAnchors += UpdatedAnchors;
    }

    void UpdatedAnchors(OVRCameraRig rigToUpdate)
    {
        Transform headTransform = GetHeadTransform();   //Write yourself
        Transform lHandTransform = GetLHandTransform(); //Write yourself
        Transform rHandTransform = GetRHandTransform(); //Write yourself

        rigToUpdate.trackerAnchor = headTransform;
        rigToUpdate.leftHandAnchor = lHandTransform;
        rigToUpdate.rightHandAnchor = rHandTransform;
    }
}

Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious if it has the capability to take a picture and then manipulate it. Specifically, my desire is to have the user take a selfie and then have them trace around their face to create a PNG that would then be texture mapped onto a model.
I know that mapping the face onto a model is simple, but I'm wondering if I need to write the photo/carving functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it; I'm just asking if it's possible.
Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, allows you to get the pixels via a GetPixels() call. This lets you take a snapshot whenever you like, and you can save it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation. I have not tested it. You might have to select the correct device if more than one is available.
using UnityEngine;
using System.Collections;
using System.IO;

public class WebCamPhotoCamera : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        webCamTexture = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = webCamTexture; //Add a Mesh Renderer to the GameObject this script is attached to
        webCamTexture.Play();
    }

    IEnumerator TakePhoto() // Start this Coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();

        // it's a rare case where the Unity doco is pretty clear,
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // be sure to scroll down to the SECOND long example on that doco page

        Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
        photo.SetPixels(webCamTexture.GetPixels());
        photo.Apply();

        //Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        //Write out the PNG. Of course you have to substitute your_path for something sensible
        File.WriteAllBytes(your_path + "photo.png", bytes);
    }
}
For those trying to get the camera to render a live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture would be assigned on Update rather than just on Start:
void Start()
{
    webCamTexture = new WebCamTexture();
    webCamTexture.Play();
}

void Update()
{
    GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one via Right Click -> UI -> RawImage in the Hierarchy in the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, Unity 5 supports the use of webcams in the free Personal edition.
I hope this helps anyone looking for a good way to capture live camera feed in Unity.
It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns the pixel color at coordinates (x, y).
GetPixels() -- Gets a block of pixel colors.
GetPixels32() -- Returns the pixel data in raw format.
MarkNonReadable() -- Marks the WebCamTexture as unreadable.
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
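As a small illustration of the functions listed above (a sketch only, not from the original answer), GetPixels32 can be used instead of GetPixels for the snapshot step, since it works with raw Color32 data:

// Hypothetical snapshot helper using GetPixels32 (sketch, untested here).
Texture2D Snapshot(WebCamTexture cam)
{
    Texture2D photo = new Texture2D(cam.width, cam.height, TextureFormat.RGBA32, false);
    photo.SetPixels32(cam.GetPixels32()); // raw Color32 data, no per-pixel conversion
    photo.Apply();
    return photo;
}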
Bart's answer needs one modification: I used his code and the picture I was getting was black. The required change is to convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy @fafase)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output [Unity3D]
Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video, and save it to the desired location on your device with just a few lines of code.
You need to find your webcam device index by searching for it in the devices list, then select it for the WebCamTexture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;

public class GetCam : MonoBehaviour
{
    WebCamTexture webCam;
    string your_path = "C:\\Users\\Jay\\Desktop"; // any path you want to save your image
    public RawImage display;
    public AspectRatioFitter fit;

    public void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogError("can not found any camera!");
            return;
        }

        int index = -1;
        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
            {
                Debug.LogError("WebCam Name:" + WebCamTexture.devices[i].name + " Webcam Index:" + i);
                index = i;
            }
        }

        if (index == -1)
        {
            Debug.LogError("can not found your camera name!");
            return;
        }

        WebCamDevice device = WebCamTexture.devices[index];
        webCam = new WebCamTexture(device.name);
        webCam.Play();
        StartCoroutine(TakePhoto());
        display.texture = webCam;
    }

    public void Update()
    {
        float ratio = (float)webCam.width / (float)webCam.height;
        fit.aspectRatio = ratio;

        float ScaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
        display.rectTransform.localScale = new Vector3(1f, ScaleY, 1f);

        int orient = -webCam.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }

    public void callTakePhoto() // call this function in button click event
    {
        StartCoroutine(TakePhoto());
    }

    IEnumerator TakePhoto() // Start this Coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();

        // it's a rare case where the Unity doco is pretty clear,
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // be sure to scroll down to the SECOND long example on that doco page

        Texture2D photo = new Texture2D(webCam.width, webCam.height);
        photo.SetPixels(webCam.GetPixels());
        photo.Apply();

        //Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        //Write out the PNG. Of course you have to substitute your_path for something sensible
        File.WriteAllBytes(your_path + "\\photo.png", bytes);
    }
}
There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 - and while the functionality provided is geared towards mobile, it contains a demo of how you can use the WebCamTexture to take a still image.
If you want to do that without using a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), that's not possible within Unity; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a summarised blog post which will guide you to achieve your goal.
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: Note that the above post describes picking an image from the gallery, but the same process applies to saving an image to the gallery.