Take a photo in Unity C# - unity3d

I'm trying to build a program that takes your photo and places it on a different background, like a monument. So far, I've been able to turn the camera on when the project starts with this code:
webcamTexture = new WebCamTexture();
rawImage.texture = webcamTexture;
rawImage.material.mainTexture = webcamTexture;
webcamTexture.Play();

Texture2D PhotoTaken = new Texture2D(webcamTexture.width, webcamTexture.height);
PhotoTaken.SetPixels(webcamTexture.GetPixels());
PhotoTaken.Apply();
However, I can't take a screenshot or photo because the result always comes out black. I've tried different code but nothing works. Can someone please help? Thanks.
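One common cause of the all-black result: WebCamTexture.Play() returns before the camera has delivered its first frame, so copying the pixels immediately reads an empty texture. A minimal sketch that waits for the first frame before copying, reusing the fields above (run it with StartCoroutine):

IEnumerator TakePhoto()
{
    // Right after Play() the webcam texture is still empty (black);
    // wait until the device has actually delivered a frame.
    while (!webcamTexture.didUpdateThisFrame)
        yield return null;

    Texture2D photoTaken = new Texture2D(webcamTexture.width, webcamTexture.height);
    photoTaken.SetPixels(webcamTexture.GetPixels());
    photoTaken.Apply();
}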
EDIT
After some attempts, this is the code I have:
WebCamTexture webcamTexture;
public RawImage rawImage;

void Start () {
    webcamTexture = new WebCamTexture();
    rawImage.texture = webcamTexture;
    rawImage.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    RenderTexture texture = new RenderTexture(webcamTexture.width, webcamTexture.height, 0);
    Graphics.Blit(webcamTexture, texture);
    Button btn = yourButton.GetComponent<Button>();
    btn.onClick.AddListener(OnClick);
}

public IEnumerator Coroutine() {
    yield return new WaitForEndOfFrame();
}

public void OnClick() {
    var width = 767;
    var height = 575;
    Texture2D texture = new Texture2D(width, height);
    texture.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    texture.Apply();

    // Encode texture into PNG
    var bytes = texture.EncodeToPNG();
    //Destroy(texture);
    File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
}
With this next code a screenshot is taken, but it captures the whole screen rather than just the part I want.
void Start()
{
    // Set the playback framerate
    // (real time no longer influences Time)
    Time.captureFramerate = frameRate;

    // Find a folder that doesn't exist yet by appending numbers
    realFolder = folder;
    int count = 1;
    while (System.IO.Directory.Exists(realFolder))
    {
        realFolder = folder + count;
        count++;
    }

    // Create the folder
    System.IO.Directory.CreateDirectory(realFolder);
}

void Update()
{
    // name is "realFolder/shot 0005.png"
    var name = string.Format("{0}/shot {1:D04}.png", realFolder, Time.frameCount);

    // Capture the screenshot
    Application.CaptureScreenshot(name, sizeMultiplier);
}

You can take a screenshot like this in Unity
Application.CaptureScreenshot("Screenshot.png");
Reference
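In Unity 2017.1 and later, Application.CaptureScreenshot was replaced by ScreenCapture.CaptureScreenshot; the call is otherwise the same:

ScreenCapture.CaptureScreenshot("Screenshot.png");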
EDIT 1
To take a screenshot of a specific part of the screen, use the following script:
var width = 400;
var height = 300;
var startX = 200;
var startY = 100;
var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(startX, startY, width, height), 0, 0);
tex.Apply();

// Encode texture into PNG
var bytes = tex.EncodeToPNG();
Destroy(tex);
File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
Reference
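Note that ReadPixels copies from the render target of the current frame, so it should run after rendering has finished, typically from a coroutine that waits for the end of the frame. A minimal self-contained sketch of the same region capture (class and method names are illustrative):

using System.Collections;
using System.IO;
using UnityEngine;

public class RegionScreenshot : MonoBehaviour
{
    public void TakeScreenshot()
    {
        StartCoroutine(CaptureRegion(200, 100, 400, 300));
    }

    IEnumerator CaptureRegion(int startX, int startY, int width, int height)
    {
        // Wait until everything has been drawn this frame.
        yield return new WaitForEndOfFrame();

        var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(startX, startY, width, height), 0, 0);
        tex.Apply();

        var bytes = tex.EncodeToPNG();
        Destroy(tex);
        File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
    }
}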

Related

Unity: Reduce size of Render Texture before executing Texture2D.ReadPixels

I'm working on code where I basically have to take a low-quality screenshot about every 30 milliseconds. The script is attached to a camera.
What I want to do is reduce the render texture size. As the code is right now, changing either W or H gets me a SECTION of all that the camera sees instead of a reduced-size version. So my question is: how can I resize or downsample what is read into the screenshot (Texture2D) so that it is still a representation of the entire screen?
public class CameraRenderToImage : MonoBehaviour
{
    private RemoteRenderServer rrs;

    void Start() {
        TimeStamp.SetStart();
        Camera.onPostRender += OnPostRenderCallback;
    }

    void OnPostRenderCallback(Camera cam) {
        if (TimeStamp.HasMoreThanThisEllapsed(30)) {
            TimeStamp.SetStart();
            int W = Screen.width;
            int H = Screen.height;
            Texture2D screenshot = new Texture2D(W, H, TextureFormat.RGB24, false);
            screenshot.ReadPixels(new Rect(0, 0, W, H), 0, 0);
            byte[] bytes = screenshot.EncodeToPNG();
            System.IO.File.WriteAllBytes("check_me_out.png", bytes);
            TimeStamp.Tok("Encode to PNG and Save");
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}
If you need to resize your render texture from a script, you can refer to the following snippet:
void Resize(RenderTexture renderTexture, int width, int height) {
    if (renderTexture) {
        renderTexture.Release();
        renderTexture.width = width;
        renderTexture.height = height;
    }
}
To make it possible to resize the render texture, you first need to make sure it is released.
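Usage would then look something like this (the quarter-size values are illustrative):

Resize(renderTexture, Screen.width / 4, Screen.height / 4);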
To get a Texture2D:
private Texture2D ToCompressedTexture(ref RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
    var previousTarget = RenderTexture.active;
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    RenderTexture.active = previousTarget;
    texture.Compress(false);
    texture.Apply(false, true);
    renderTexture.Release();
    renderTexture = null;
    return texture;
}
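To get a smaller screenshot that still shows the whole image rather than a cropped section, one option is to Blit the full-size source into a smaller temporary RenderTexture and read the pixels back from that. A minimal sketch, assuming the camera renders into a full-size RenderTexture named source and a downscale divisor of 4 (both illustrative):

Texture2D CaptureDownsampled(Texture source, int divisor)
{
    int w = source.width / divisor;
    int h = source.height / divisor;

    // The GPU scales the image down during the Blit.
    RenderTexture small = RenderTexture.GetTemporary(w, h, 0);
    Graphics.Blit(source, small);

    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = small;

    // ReadPixels now reads the reduced-size image, not a cropped region.
    Texture2D result = new Texture2D(w, h, TextureFormat.RGB24, false);
    result.ReadPixels(new Rect(0, 0, w, h), 0, 0);
    result.Apply();

    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(small);
    return result;
}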

Taking snapshots of an image in Unity

I am trying to take snapshots of materials I use in my application in Unity. I simply add a directional light and a camera in perspective mode, then render the result to a texture and save it as a .png file. The result is good, but there is a strange gizmo-like figure in the middle of the image. Here it is:
The camera and light are far enough from the object. I also disabled the light to check whether the directional light caused it, but that didn't solve it. Does anyone know what causes this elliptical figure? Thanks in advance.
Edit: here is the code.
public static Texture2D CreateThumbnailFromMaterial(Material _material, string _name, string _path)
{
    GameObject sphereObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphereObj.name = _name;
    sphereObj.GetComponent<Renderer>().material = _material;
    Texture2D thumbnailTexture = CreateThumbnailFromModel(sphereObj, _path);
    sphereObj.GetComponent<Renderer>().material = null;
    Object.DestroyImmediate(sphereObj.gameObject);
    return thumbnailTexture;
}

public static Texture2D CreateThumbnailFromModel(GameObject _gameObject, string _path)
{
    Texture2D thumbnailTexture = new Texture2D(textureSize, textureSize);
    thumbnailTexture.name = _gameObject.name.Simplify();
    GameObject cameraObject = Object.Instantiate(Resources.Load("SceneComponent/SnapshotCamera") as GameObject);
    Camera snapshotCamera = cameraObject.GetComponent<Camera>();
    if (snapshotCamera)
    {
        GameObject sceneObject = GameObject.Instantiate(_gameObject) as GameObject;
        sceneObject.transform.Reset();
        sceneObject.transform.position = new Vector3(1000, 0, -1000);
        sceneObject.hideFlags = HideFlags.HideAndDontSave;

        // Create render texture
        snapshotCamera.targetTexture = RenderTexture.GetTemporary(textureSize, textureSize, 24);
        RenderTexture.active = snapshotCamera.targetTexture;

        // Set layer
        foreach (Transform child in sceneObject.GetComponentsInChildren<Transform>(true))
        {
            child.gameObject.layer = LayerMask.NameToLayer("ObjectSnapshot");
        }

        // Calculate bounding box
        Bounds bounds = sceneObject.GetWorldSpaceAABB();
        float maxBoundValue = 0f;
        if (bounds.IsValid())
        {
            maxBoundValue = Mathf.Max(bounds.size.x, bounds.size.y, bounds.size.z);
        }

        double fov = Mathf.Deg2Rad * snapshotCamera.fieldOfView;
        float distanceToCenter = maxBoundValue / (float)System.Math.Tan(fov);
        cameraObject.transform.LookAt(bounds.center);
        cameraObject.transform.position = bounds.center - (snapshotCamera.transform.forward * distanceToCenter);
        cameraObject.transform.SetParent(sceneObject.transform);

        snapshotCamera.Render();
        thumbnailTexture.ReadPixels(new Rect(0, 0, textureSize, textureSize), 0, 0);
        thumbnailTexture.Apply();

        sceneObject.transform.Reset();
        snapshotCamera.transform.SetParent(null);
        RenderTexture.active = null;
        GameObject.DestroyImmediate(sceneObject);
        GameObject.DestroyImmediate(cameraObject);

        // Save as .png
        IO.IOManager.Instance.SaveAsPNG(_path + thumbnailTexture.name, thumbnailTexture);
    }
    return thumbnailTexture;
}
And here are my camera properties:

Load sprite from a base64 string which came from a websocket

I am trying to turn a base64 string into a Sprite in Unity 3D, but my sprite in the scene remains blank.
public var cardPicture : Image;

function ReceiveData(jsonReply : JSONObject) {
    var pictureBytes : byte[] = System.Convert.FromBase64String(jsonReply.GetString("picture"));
    var cardPictureTexture = new Texture2D(720, 720);
    Debug.Log(cardPictureTexture.LoadImage(pictureBytes));
    var sprite : Sprite = new Sprite();
    sprite = Sprite.Create(cardPictureTexture, new Rect(0, 0, 720, 720), new Vector2(0.5f, 0.5f));
    cardPicture.overrideSprite = sprite;
}
This prints out true, but I am not sure whether it is loading the image from the bytes correctly or whether something else is going wrong; I am also not sure what to check to find out. Assigning some picture to cardPicture in the scene displays correctly.
I logged jsonReply.picture and used an online base64-to-image converter, and it displayed the image correctly.
byte[] pictureBytes = System.Convert.FromBase64String(jsonReply.GetString("picture"));
Texture2D tex = new Texture2D(2, 2);
tex.LoadImage(pictureBytes);
Sprite sprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f);
cardPicture.overrideSprite = sprite;
I assume you are trying to fetch an image from a remote URL and parse the bytes into a texture. In Unity, WWW facilitates this and does not require manual conversion.
I believe your response may contain header details that cause issues when converting to a texture. You can use code like the one below:
public string Url = @"http://dummyimage.com/300/09f/fff.png";

void Start () {
    // Starting a coroutine to avoid blocking
    StartCoroutine ("LoadImage");
}

IEnumerator LoadImage()
{
    WWW www = new WWW(Url);
    yield return www;
    Debug.Log ("Loaded");
    Texture texture = www.texture;
    this.gameObject.GetComponent<Renderer>().material.SetTexture(0, texture);
}
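WWW has since been deprecated; in newer Unity versions the equivalent uses UnityWebRequestTexture. A minimal sketch:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ImageLoader : MonoBehaviour
{
    IEnumerator LoadImage(string url)
    {
        using (UnityWebRequest www = UnityWebRequestTexture.GetTexture(url))
        {
            yield return www.SendWebRequest();
            if (www.isNetworkError || www.isHttpError)
            {
                Debug.Log(www.error);
            }
            else
            {
                // The download handler decodes the response bytes into a texture.
                Texture2D texture = DownloadHandlerTexture.GetContent(www);
                GetComponent<Renderer>().material.mainTexture = texture;
            }
        }
    }
}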
I don't know if this is solved, but I want to share my solution.
void Start()
{
    StartCoroutine(GetQR());
}

IEnumerator GetQR()
{
    using (UnityWebRequest www = UnityWebRequest.Get(GetQR_URL))
    {
        yield return www.SendWebRequest();
        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Show results as text
            Debug.Log(www.downloadHandler.text);
            QRData qr = JsonUtility.FromJson<QRData>(www.downloadHandler.text);
            string result = Regex.Replace(qr.img, @"^data:image\/[a-zA-Z]+;base64,", string.Empty);
            CovertBase64ToImage(result);
        }
    }
}

void CovertBase64ToImage(string img)
{
    byte[] bytes = Convert.FromBase64String(img);
    Texture2D myTexture = new Texture2D(512, 212);
    myTexture.LoadImage(bytes);
    Sprite sprite = Sprite.Create(myTexture, new Rect(0, 0, myTexture.width, myTexture.height), new Vector2(0.5f, 0.5f));
    QRimage.transform.parent.gameObject.SetActive(true);
    QRimage.sprite = sprite;
}
It works perfectly on Unity 2019.4.

EmguCV + Unity open webcam error

I use EmguCV to open a webcam in Unity, but its FPS is very low. This is my code:
private Texture2D texture;
private Capture capture;
private Color32[] color = new Color32[640 * 480];

// Use this for initialization
void Start () {
    texture = new Texture2D(640, 480);
    capture = new Capture();
}

// Update is called once per frame
void Update () {
    Image<Bgr, Byte> currentFrame = capture.QueryFrame();
    Bitmap bitmapCurrentFrame = currentFrame.ToBitmap();
    Image<Bgra, Byte> img = new Image<Bgra, Byte>(bitmapCurrentFrame);
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 640; x++) {
            int index = y + x * 480;
            print(index + ";" + x + ";" + y);
            //byte b = img.Data[x,y,0];
            color[index].r = img.Data[x, y, 2];
            color[index].g = img.Data[x, y, 1];
            color[index].b = img.Data[x, y, 0];
            color[index].a = 0xff;
        }
    }
    texture.SetPixels32(color);
    texture.Apply(false);
    renderer.material.mainTexture = texture;
}
I don't know why the FPS is so low, or why my boss likes EmguCV with Unity instead of Unity's WebCamTexture.
Thank you for reading; I hope I can get some answers.
Look at Texture2D.LoadRawTextureData. It has no proper docs, so here's a snippet:
Texture2D tex = new Texture2D(width, height, format, false, true);
tex.LoadRawTextureData(buffer);
tex.Apply(false, true);
The buffer must be in the correct hardware format. For the format variable, look at the list of texture formats that Unity accepts.
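Applied to the webcam case above, a sketch of that idea; this assumes EmguCV's Image<Bgra, byte>.Bytes exposes the raw pixel buffer and that the rows carry no padding at this resolution:

// Convert the frame to BGRA once, then hand the raw bytes to Unity
// instead of copying pixel by pixel (and printing inside the loop,
// which dominates the cost in the original code).
Image<Bgra, byte> frame = capture.QueryFrame().Convert<Bgra, byte>();
Texture2D tex = new Texture2D(frame.Width, frame.Height, TextureFormat.BGRA32, false, true);
tex.LoadRawTextureData(frame.Bytes);
tex.Apply(false);
// Note: the result may be vertically flipped; flip the source image first if so.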

EmguCV Image to Texture2D in Unity

public class testEmguCV : MonoBehaviour
{
    private Capture capture;

    void Start()
    {
        capture = new Capture();
    }

    void Update()
    {
        Image<Gray, Byte> currentFrame = capture.QueryGrayFrame();
        Bitmap bitmapCurrentFrame = currentFrame.ToBitmap();
        MemoryStream m = new MemoryStream();
        bitmapCurrentFrame.Save(m, bitmapCurrentFrame.RawFormat);
        Texture2D camera = new Texture2D(400, 400);
        if (currentFrame != null)
        {
            camera.LoadImage(m.ToArray());
            renderer.material.mainTexture = camera;
        }
    }
}
I used the above code to convert the camera feed from an EmguCV camera to a Texture2D in Unity, but I am having a problem with bitmapCurrentFrame.Save(m, bitmapCurrentFrame.RawFormat);
It gives the following errors:
ArgumentNullException: Argument cannot be null. Parameter name: encoder
System.Drawing.Image.Save (System.IO.Stream stream, System.Drawing.Imaging.ImageCodecInfo encoder, System.Drawing.Imaging.EncoderParameters encoderParams)
System.Drawing.Image.Save (System.IO.Stream stream, System.Drawing.Imaging.ImageFormat format)
(wrapper remoting-invoke-with-check) System.Drawing.Image:Save (System.IO.Stream, System.Drawing.Imaging.ImageFormat)
WebcamUsingEmgucv.Update () (at Assets/WebcamUsingEmgucv.cs:51)
After several hours of thinking and searching I don't know what is going on. Please help.
I used your example in our project, thanks! But I modified it to:
void Update()
{
    if (capture == null)
    {
        Debug.LogError("Capture is null");
        return;
    }
    Image<Gray, Byte> currentFrame = capture.QueryGrayFrame();
    if (currentFrame != null)
    {
        MemoryStream m = new MemoryStream();
        currentFrame.Bitmap.Save(m, currentFrame.Bitmap.RawFormat);
        Texture2D camera = new Texture2D(400, 400);
        camera.LoadImage(m.ToArray());
        renderer.material.mainTexture = camera;
    }
}
And it works! FPS averages around 30-35. Good luck!
Try using this:
https://github.com/neutmute/emgucv/blob/3ceb85cba71cf957d5e31ae0a70da4bbf746d0e8/Emgu.CV/PInvoke/Unity/TextureConvert.cs
It has something like this:
public static Texture2D ImageToTexture2D<TColor, TDepth>(Image<TColor, TDepth> image, bool correctForVerticleFlip)
    where TColor : struct, IColor
    where TDepth : new()
{
    Size size = image.Size;
    if (typeof(TColor) == typeof(Rgb) && typeof(TDepth) == typeof(Byte))
    {
        Texture2D texture = new Texture2D(size.Width, size.Height, TextureFormat.RGB24, false);
        byte[] data = new byte[size.Width * size.Height * 3];
        GCHandle dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
        using (Image<Rgb, byte> rgb = new Image<Rgb, byte>(size.Width, size.Height, size.Width * 3, dataHandle.AddrOfPinnedObject()))
        {
            rgb.ConvertFrom(image);
            if (correctForVerticleFlip)
                CvInvoke.cvFlip(rgb, rgb, FLIP.VERTICAL);
        }
        dataHandle.Free();
        texture.LoadRawTextureData(data);
        texture.Apply();
        return texture;
    }
    else //if (typeof(TColor) == typeof(Rgba) && typeof(TDepth) == typeof(Byte))
    {
        Texture2D texture = new Texture2D(size.Width, size.Height, TextureFormat.RGBA32, false);
        byte[] data = new byte[size.Width * size.Height * 4];
        GCHandle dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
        using (Image<Rgba, byte> rgba = new Image<Rgba, byte>(size.Width, size.Height, size.Width * 4, dataHandle.AddrOfPinnedObject()))
        {
            rgba.ConvertFrom(image);
            if (correctForVerticleFlip)
                CvInvoke.cvFlip(rgba, rgba, FLIP.VERTICAL);
        }
        dataHandle.Free();
        texture.LoadRawTextureData(data);
        texture.Apply();
        return texture;
    }
    //return null;
}
If you don't want to use interop, you can also use something like
cameraframe.Convert<Rgb,byte>().Data.Cast<byte>().ToArray<byte>()
and use it instead of the section using interop.
Both solutions worked for me. Just remember to destroy the texture before replacing it. I had memory leak issues before I did that.
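A minimal sketch of that destroy-before-replace pattern (component and method names are illustrative):

using UnityEngine;

public class FrameDisplay : MonoBehaviour
{
    Texture2D current;

    // Swap in a newly converted frame, destroying the previous texture
    // so abandoned textures don't accumulate and leak memory.
    public void ShowFrame(Texture2D next)
    {
        if (current != null)
            Destroy(current);
        current = next;
        GetComponent<Renderer>().material.mainTexture = current;
    }
}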