Can I take a photo in Unity using the device's camera?

I'm entirely unfamiliar with Unity3D's more complex feature set and am curious whether it has the capability to take a picture and then manipulate it. Specifically, I'd like the user to take a selfie and then trace around their face to create a PNG that would then be texture-mapped onto a model.
I know that mapping the face onto a model is simple, but I'm wondering whether I need to write the photo/tracing functionality into the encompassing Chrome app, or if it can all be done from within Unity. I don't need a tutorial on how to do it; I'm just asking whether it's possible.

Yes, this is possible. You will want to look at the WebCamTexture functionality.
You create a WebCamTexture and call its Play() function, which starts the camera. WebCamTexture, like any Texture, lets you read its pixels via a GetPixels() call. This allows you to take a snapshot whenever you like, and you can store it in a Texture2D. A call to EncodeToPNG() and a subsequent write to file should get you there.
Do note that the code below is a quick write-up based on the documentation. I have not tested it. You might have to select the correct device if more than one is available.
using UnityEngine;
using System.Collections;
using System.IO;

public class WebCamPhotoCamera : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        webCamTexture = new WebCamTexture();
        // The GameObject this script is attached to needs a Renderer (e.g. a MeshRenderer)
        GetComponent<Renderer>().material.mainTexture = webCamTexture;
        webCamTexture.Play();
    }

    IEnumerator TakePhoto() // Start this Coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();

        // It's a rare case where the Unity doco is pretty clear:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // Be sure to scroll down to the SECOND long example on that doco page.

        Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
        photo.SetPixels(webCamTexture.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "photo.png", bytes);
    }
}
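For completeness, here is a minimal sketch of triggering that coroutine from a UI Button. This helper class, its fields, and the string-based StartCoroutine call are illustrative additions, not part of the original answer:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: wires a UI Button's onClick to the TakePhoto coroutine above.
public class PhotoButton : MonoBehaviour
{
    public Button takePhotoButton;        // assign in the Inspector
    public WebCamPhotoCamera photoCamera; // the component from the answer above

    void Start()
    {
        // The string overload can start the private TakePhoto coroutine
        // on the WebCamPhotoCamera instance.
        takePhotoButton.onClick.AddListener(() => photoCamera.StartCoroutine("TakePhoto"));
    }
}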

For those trying to get the camera to render a live feed, here's how I managed to pull it off. First, I edited Bart's answer so the texture would be assigned on Update rather than just on Start:

// In Bart's class above, assign the texture in Update instead of Start.
// (RawImage requires: using UnityEngine.UI;)
WebCamTexture webCamTexture;

void Start()
{
    webCamTexture = new WebCamTexture();
    webCamTexture.Play();
}

void Update()
{
    GetComponent<RawImage>().texture = webCamTexture;
}
Then I attached the script to a GameObject with a RawImage component. You can easily create one via Right Click -> UI -> Raw Image in the Hierarchy in the Unity Editor (this requires Unity 4.6 and above). Running it should show a live feed of the camera in your view. As of this writing, Unity 5 supports the use of webcams in the free Personal edition.
I hope this helps anyone looking for a good way to capture a live camera feed in Unity.

It is possible. I highly recommend you look at the WebCamTexture Unity API. It has some useful functions:
GetPixel() -- Returns the pixel color at coordinates (x, y).
GetPixels() -- Gets a block of pixel colors.
GetPixels32() -- Returns the pixel data in raw format.
MarkNonReadable() -- Marks the WebCamTexture as unreadable.
Pause() -- Pauses the camera.
Play() -- Starts the camera.
Stop() -- Stops the camera.
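As a quick, untested sketch of those calls in context (the default device choice and the Renderer target are assumptions):

using UnityEngine;

public class WebCamApiDemo : MonoBehaviour
{
    WebCamTexture cam;

    void Start()
    {
        cam = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = cam;
        cam.Play();                                   // starts the camera
    }

    void OnGUI()
    {
        if (GUILayout.Button("Pause")) cam.Pause();   // pauses the camera
        if (GUILayout.Button("Stop"))  cam.Stop();    // stops the camera
        if (cam.isPlaying && GUILayout.Button("Sample"))
            Debug.Log(cam.GetPixel(0, 0));            // pixel color at (0, 0)
    }
}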

Bart's answer needs one modification: I used his code and the picture I was getting was black. The required modification is to
convert TakePhoto to a coroutine and add
yield return new WaitForEndOfFrame();
at the start of the coroutine. (Courtesy @fafase)
For more details see
http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
You can also refer to
Take photo using webcam is giving black output [Unity3D]
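In sketch form (using the field names from Bart's answer above), the change is:

// Changed from a regular method to a coroutine:
IEnumerator TakePhoto()
{
    // Wait until the frame has finished rendering before reading pixels,
    // otherwise the texture can come back black.
    yield return new WaitForEndOfFrame();

    Texture2D photo = new Texture2D(webCamTexture.width, webCamTexture.height);
    photo.SetPixels(webCamTexture.GetPixels());
    photo.Apply();
    // ... encode and save as before
}

// And invoke it with StartCoroutine rather than a direct call:
StartCoroutine(TakePhoto());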

Yes, you can. I created an Android native camera plugin that can open your Android device's camera, capture an image, record video, and save it to the desired location on your device with just a few lines of code.

You need to find your webcam's device index by searching for it in the devices list, then select it for the WebCamTexture to play.
You can use this code:
using UnityEngine;
using System.Collections;
using System.IO;
using UnityEngine.UI;
using System.Collections.Generic;

public class GetCam : MonoBehaviour
{
    WebCamTexture webCam;
    string your_path = "C:\\Users\\Jay\\Desktop"; // any path you want to save your image to
    public RawImage display;
    public AspectRatioFitter fit;

    public void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogError("Cannot find any camera!");
            return;
        }

        // Find the device index by (partial) name.
        int index = -1;
        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].name.ToLower().Contains("your webcam name"))
            {
                Debug.Log("WebCam Name: " + WebCamTexture.devices[i].name + " WebCam Index: " + i);
                index = i;
            }
        }

        if (index == -1)
        {
            Debug.LogError("Cannot find a camera with that name!");
            return;
        }

        WebCamDevice device = WebCamTexture.devices[index];
        webCam = new WebCamTexture(device.name);
        webCam.Play();
        StartCoroutine(TakePhoto()); // takes one photo right after startup
        display.texture = webCam;
    }

    public void Update()
    {
        if (webCam == null || !webCam.isPlaying)
            return; // no camera found or not started yet

        // Keep the RawImage's aspect ratio, mirroring and rotation in sync with the camera.
        float ratio = (float)webCam.width / (float)webCam.height;
        fit.aspectRatio = ratio;

        float scaleY = webCam.videoVerticallyMirrored ? -1f : 1f;
        display.rectTransform.localScale = new Vector3(1f, scaleY, 1f);

        int orient = -webCam.videoRotationAngle;
        display.rectTransform.localEulerAngles = new Vector3(0, 0, orient);
    }

    public void callTakePhoto() // call this function in a button click event
    {
        StartCoroutine(TakePhoto());
    }

    IEnumerator TakePhoto() // start this coroutine on some button click
    {
        // NOTE - you almost certainly have to do this here:
        yield return new WaitForEndOfFrame();
        // It's a rare case where the Unity doco is pretty clear:
        // http://docs.unity3d.com/ScriptReference/WaitForEndOfFrame.html
        // Be sure to scroll down to the SECOND long example on that doco page.

        Texture2D photo = new Texture2D(webCam.width, webCam.height);
        photo.SetPixels(webCam.GetPixels());
        photo.Apply();

        // Encode to a PNG
        byte[] bytes = photo.EncodeToPNG();

        // Write out the PNG. Of course you have to substitute your_path for something sensible.
        File.WriteAllBytes(your_path + "\\photo.png", bytes);
    }
}

There is a plugin available for this type of functionality called Camera Capture Kit - https://www.assetstore.unity3d.com/en/#!/content/56673 - and while the functionality provided is geared towards mobile, it contains a demo of how you can use WebCamTexture to take a still image.

If you want to do that without a third-party plugin, then @FuntionR's solution will help you. But if you want to save the captured photo to the gallery (Android & iOS), then it's not possible within Unity; you have to write native code to transfer the photo to the gallery and then call it from Unity.
Here is a blog post that summarizes the process and will guide you to achieve your goal:
http://unitydevelopers.blogspot.com/2018/07/pick-image-from-gallery-in-unity3d.html
Edit: Note that the above post describes picking an image from the gallery, but the same process applies to saving an image to the gallery.
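For the Unity side of that bridge, a hypothetical call into such a native plugin could look like this (the Java class and method names are made up for illustration; the actual plugin is the part you have to write natively):

using UnityEngine;

public static class GallerySaver
{
    // Hypothetical bridge: assumes an Android plugin class
    // com.example.gallery.GalleryBridge with a static SaveToGallery(String path) method.
    public static void SaveToGallery(string pngPath)
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        using (var bridge = new AndroidJavaClass("com.example.gallery.GalleryBridge"))
        {
            bridge.CallStatic("SaveToGallery", pngPath);
        }
#else
        Debug.Log("Gallery save only implemented for Android builds: " + pngPath);
#endif
    }
}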

Related

Unity mirror and webcam streaming WebGL

I have a project in which I use Mirror (UNet 2.0 :P) to make a simple low-poly WebGL game in Unity. The game works great. Now I want to send a webcam stream over the network, and I want to do this for every player.
I managed to get the cam working locally. Now I want to get it running over the network. I made a Color32[] array which holds the image, and it also works locally. But if I try to send it over the network using a [Command] function, I get the message that my packet is too large. I heard that splitting a packet adds a lot of latency, so I tried this solution (sendTextures), but it doesn't play nice with Mirror: I only get one frame (if I'm lucky) once I turn my NetworkManager on.
Do you guys have any pointers for me to get it working?
This is the code I have so far:
public class WebCamScript : NetworkBehaviour
{
    static WebCamTexture camTexture;
    Color32[] data;
    private int width = 640, height = 480;
    public GameObject screen;

    void Start()
    {
        screen = GameObject.Find("screen");

        if (camTexture == null)
            camTexture = new WebCamTexture(width, height);

        data = new Color32[width * height];

        if (!camTexture.isPlaying)
            camTexture.Play();
    }

    // Update is called once per frame
    void Update()
    {
        CmdSendCam(data.Length, camTexture.GetPixels32(data));
    }

    [Command]
    void CmdSendCam(int length, Color32[] receivedImage)
    {
        /*
        Texture2D t = new Texture2D(width, height);
        t.SetPixels32(receivedImage);
        t.Apply();
        screen.GetComponent<Renderer>().material.mainTexture = t;
        */
    }
}

How to use scene camera with Agora.io in Unity

In Unity I have integrated Agora.io such that, from within my virtual reality app, I can connect a video call to an outside user on a webpage. The VR user can see the website user, but the website user cannot see the VR user because there is no physical camera available to use. Is there a way to use a scene camera for the Agora video feed? This would mean the website user would be able to see into the VR user's world.
Yes. Although I haven't done VR projects before, the concept should carry over. You may use the External Video Source to send any video frames as if they were sent from the physical camera. For scene cameras, you may use a RenderTexture to output the camera feed and extract the raw data from the RenderTexture. So the steps are:
Set up your camera to output to a RenderTexture (plus logic to display this RenderTexture somewhere locally if needed.)
Also, when you set up the Agora RTC engine, make sure to enable the external video source with this call:
mRtcEngine.SetExternalVideoSource(true, false);
At each frame, extract the raw image data from the RenderTexture
Send the raw frame data to the SDK function rtc.pushVideoFrame()
You may find the code for the last step here
https://gist.github.com/icywind/92053d0983e713515c64d5c532ebee21
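For the first step, here is a minimal sketch of sending a scene camera's output into a RenderTexture (the class and field names are illustrative; this is standard Unity, not Agora API):

using UnityEngine;

public class SceneCameraFeed : MonoBehaviour
{
    public Camera sceneCamera;        // the scene camera to broadcast
    public RenderTexture feedTexture; // created via Assets > Create > Render Texture

    void Start()
    {
        // Everything sceneCamera renders now goes into feedTexture,
        // which the share code below reads back each frame.
        sceneCamera.targetTexture = feedTexture;
    }
}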
I modified the screen-share code Agora.io provided to extract a render texture. The problem is I only get a white or black screen on the receiver, while my render texture is a depth-camera video stream.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using agora_gaming_rtc;
using UnityEngine.UI;
using System.Globalization;
using System.Runtime.InteropServices;
using System;

public class ShareScreen : MonoBehaviour
{
    Texture2D mTexture;
    Rect mRect;
    [SerializeField]
    private string appId = "Your_AppID";
    [SerializeField]
    private string channelName = "agora";
    public IRtcEngine mRtcEngine;
    int i = 100;
    public RenderTexture depthMap;

    void Start()
    {
        Debug.Log("ScreenShare Activated");
        mRtcEngine = IRtcEngine.getEngine(appId);
        mRtcEngine.SetLogFilter(LOG_FILTER.DEBUG | LOG_FILTER.INFO | LOG_FILTER.WARNING | LOG_FILTER.ERROR | LOG_FILTER.CRITICAL);
        mRtcEngine.SetParameters("{\"rtc.log_filter\": 65535}");
        mRtcEngine.SetExternalVideoSource(true, false);
        mRtcEngine.EnableVideo();
        mRtcEngine.EnableVideoObserver();
        mRtcEngine.JoinChannel(channelName, null, 0);

        mRect = new Rect(0, 0, depthMap.width, depthMap.height);
        mTexture = new Texture2D((int)mRect.width, (int)mRect.height, TextureFormat.RGBA32, false);
    }

    void Update()
    {
        // Start the screen-share coroutine (once per frame)
        StartCoroutine(shareScreen());
    }

    // Screen share
    IEnumerator shareScreen()
    {
        yield return new WaitForEndOfFrame();

        // Activate the render texture so ReadPixels copies from it
        RenderTexture.active = depthMap;
        // Read the pixels inside the rectangle
        mTexture.ReadPixels(mRect, 0, 0);
        // Apply the pixels read from the rectangle to the texture
        mTexture.Apply();

        // Get the raw texture data from the texture into an array of bytes
        byte[] bytes = mTexture.GetRawTextureData();
        // Make enough space for the bytes array
        int size = Marshal.SizeOf(bytes[0]) * bytes.Length;

        // Check to see if there is an engine instance already created
        IRtcEngine rtc = IRtcEngine.QueryEngine();
        // If the engine is present
        if (rtc != null)
        {
            // Create a new external video frame
            ExternalVideoFrame externalVideoFrame = new ExternalVideoFrame();
            // Set the buffer type of the video frame
            externalVideoFrame.type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
            // Set the video pixel format
            externalVideoFrame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA;
            // Apply the raw data pulled from the rectangle to the video frame
            externalVideoFrame.buffer = bytes;
            // Set the width of the video frame (in pixels)
            externalVideoFrame.stride = (int)mRect.width;
            // Set the height of the video frame
            externalVideoFrame.height = (int)mRect.height;
            // Remove pixels from the sides of the frame
            externalVideoFrame.cropLeft = 0;
            externalVideoFrame.cropTop = 0;
            externalVideoFrame.cropRight = 0;
            externalVideoFrame.cropBottom = 0;
            // Rotate the video frame (0, 90, 180, or 270)
            externalVideoFrame.rotation = 180;
            // Increment i for the video timestamp
            externalVideoFrame.timestamp = i++;
            // Push the external video frame we just created
            int a = rtc.PushVideoFrame(externalVideoFrame);
            Debug.Log(" pushVideoFrame = " + a);
        }
    }
}

Capture image on every frame in Unity 3D

I'm working on an algorithm based on hand gesture recognition. I found this algorithm and ran it in WinForms with C# scripts. I need to use the same technique in my game to perform hand gesture detection through the webcam. I tried to use the algorithm in my game scripts but was unable to capture any image with it. Below is the code I'm currently working on. I'm using the AForge.NET framework to implement the motion detection idea. The bitmap image always returns null. However, using the same algorithm in WinForms, it captures an image on every frame change. I know there is a technique of using PhotoCapture in Unity, but I'm not sure how to use it at runtime on every frame. Any guidance is appreciated. Thanks!
OpenCamera.cs
using AForge.GestureRecognition;
using System.Collections;
using System.Collections.Generic;
using System.Drawing;
using UnityEngine;
using System.Drawing.Design;
using UnityEngine.VR.WSA.WebCam;
using System.Linq;
using System;

public class OpenCamera : MonoBehaviour
{
    // statistics length
    private const int statLength = 15;

    Bitmap image;
    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;

    // current statistics index
    private int statIndex = 0;
    // ready statistics values
    private int statReady = 0;
    // statistics array
    private int[] statCount = new int[statLength];

    private GesturesRecognizerFromVideo gesturesRecognizer = new GesturesRecognizerFromVideo();
    private Gesture gesture = new Gesture();
    private int gestureShowTime = 0;

    // Use this for initialization
    void Start()
    {
        WebCamTexture webcamTexture = new WebCamTexture();
        Renderer renderer = GetComponent<Renderer>();
        renderer.material.mainTexture = webcamTexture;
        webcamTexture.Play();
    }

    // Update is called once per frame
    void Update()
    {
        gesturesRecognizer.ProcessFrame(ref image);

        // check if we need to draw gesture information on top of the image
        if (gestureShowTime > 0)
        {
            if ((gesture.LeftHand == HandPosition.NotRaised) || (gesture.RightHand != HandPosition.NotRaised))
            {
                System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(image);
                string text = string.Format("Left = " + gesture.LeftHand + "\nRight = " + gesture.RightHand);
                System.Drawing.Font drawFont = new System.Drawing.Font("Courier", 13, System.Drawing.FontStyle.Bold);
                SolidBrush drawBrush = new SolidBrush(System.Drawing.Color.Blue);
                g.DrawString(text, drawFont, drawBrush, new PointF(0, 5));
                drawFont.Dispose();
                drawBrush.Dispose();
                g.Dispose();
            }
            gestureShowTime--;
        }
    }
}
As mentioned in the comments (which I am not able to reply to at the moment), those libraries always depend heavily on the System.Drawing libraries, which are essentially Windows-only.
What you could try is downloading the System.Drawing DLL (or copying it from your system folder) into your Unity project (it's managed code, so it'll run on every exportable platform) to keep the references. Then you could capture a Texture2D every frame (via RenderTextures or similar) and draw those pixels into a Bitmap which is fed to the library.
Note that copying thousands of pixels every frame is really, really heavy, and you'd also have to convert each UnityEngine.Color to a System.Drawing.Color...
This is the easiest solution that might work, but definitely not a good final one.
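As a rough sketch of that conversion (assuming the System.Drawing DLL is already in the project; untested and intentionally naive, purely to illustrate the data hand-off):

using UnityEngine;
using System.Drawing;
using System.Drawing.Imaging;

public static class TextureToBitmap
{
    // Copies a readable Texture2D into a System.Drawing.Bitmap, pixel by pixel.
    // Slow (per-pixel SetPixel); shown only to illustrate the hand-off to the library.
    public static Bitmap Convert(Texture2D source)
    {
        var bitmap = new Bitmap(source.width, source.height, PixelFormat.Format32bppArgb);
        Color32[] pixels = source.GetPixels32();

        for (int y = 0; y < source.height; y++)
        {
            for (int x = 0; x < source.width; x++)
            {
                // Texture2D rows start at the bottom; Bitmap rows start at the top.
                Color32 c = pixels[(source.height - 1 - y) * source.width + x];
                bitmap.SetPixel(x, y, System.Drawing.Color.FromArgb(c.a, c.r, c.g, c.b));
            }
        }
        return bitmap;
    }
}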

How to control Oculus rotation and position in Unity 5.3.x

I want to control the rotation and position of the Oculus DK2 in Unity 5.3 and above. It doesn't seem to be trivial; I've already tried everything I could find on the Unity forum, but nothing seems to work. The CameraRig script doesn't appear to do anything when I change it. I want to disable all rotation and position tracking because I have a mocap system that is more reliable for those things.
Need some help!
To be able to control the pose, your camera must be represented by an OVRCameraRig, which is included with the OVRPlugin for Unity 5.
Once you have that, you can use the UpdatedAnchors event from the camera rig to apply the mocap data to the camera position: just overwrite the value of OVRCameraRig.trackerAnchor for the head, and OVRCameraRig.leftHandAnchor and OVRCameraRig.rightHandAnchor for the hand positions if your suit supports those.
public class MocapController : MonoBehaviour
{
    public OVRCameraRig cameraRig; // Drag the camera rig object onto this script in the Editor.

    void Awake()
    {
        cameraRig.UpdatedAnchors += UpdatedAnchors;
    }

    void UpdatedAnchors(OVRCameraRig rigToUpdate)
    {
        Transform headTransform = GetHeadTransform();   // Write yourself
        Transform lHandTransform = GetLHandTransform(); // Write yourself
        Transform rHandTransform = GetRHandTransform(); // Write yourself

        rigToUpdate.trackerAnchor = headTransform;
        rigToUpdate.leftHandAnchor = lHandTransform;
        rigToUpdate.rightHandAnchor = rHandTransform;
    }
}

Video texture Unity 5

I have a problem getting a video texture to show up in Unity 5.2 Personal edition. I have applied a material with an unlit shader and assigned it as a video texture. I also call the specific video texture through a script attached to the object with the video texture.
using UnityEngine;
using System.Collections;

[RequireComponent(typeof(AudioSource))]
public class CallMovie : MonoBehaviour
{
    public MovieTexture myMovie;

    // Use this for initialization
    void Start()
    {
        Debug.Log("start intro");
        GetComponent<Renderer>().material.mainTexture = myMovie; // set the movie as the main texture
        myMovie.Play(); // play the movie
    }

    void Update()
    {
        myMovie.loop = true;
    }
}
When I hit the play button in Unity, the video texture stays black and nothing happens on screen, although Debug.Log says the video is running.
Since I can't post questions in comments on your initial post, the following is an attempt to answer with what I know.
In your first statement after the debug call, you are setting the mainTexture of the instanced material to myMovie. Depending on the shader, this may or may not work, as 'mainTexture' may not be referencing the texture you expect.
You can ensure you hit the desired texture using the following method:
// Note the difference between a material instance and the shared material.
// ... don't forget to clean up instances if you make them, which happens when you call .material
Material instancedMaterial = gameObject.GetComponent<Renderer>().material;
Material sharedMaterial = gameObject.GetComponent<Renderer>().sharedMaterial;

// _MainTex is the name of the main texture for most stock shaders ... but not all.
// You can select the shader by clicking the gear in the inspector of the material;
// this will display the shader in the inspector where you can see its properties by name.
instancedMaterial.SetTexture("_MainTex", movie);
The following code is from a working class I use to set a Unity UI RawImage to render a movie. From what I see in your example, you have the movie part correct; I suspect your issue is with the shader parameter.
using UnityEngine;
using System.Collections;

public class RawImageMovePlayer : MonoBehaviour
{
    public UnityEngine.UI.RawImage imageSource;
    public bool play;
    public bool isLoop = true;
    public MovieTexture movie;

    // Use this for initialization
    void Start()
    {
        movie = (MovieTexture)imageSource.texture;
        movie.loop = isLoop;
    }

    // Update is called once per frame
    void Update()
    {
        if (!movie.isPlaying && play)
            movie.Play();
    }

    public void ChangeMovie(MovieTexture movie)
    {
        imageSource.texture = movie;
        this.movie = (MovieTexture)imageSource.texture;
        this.movie.loop = isLoop;
    }

    public void OnDisable()
    {
        if (movie != null && movie.isPlaying)
            movie.Stop();
    }
}