Load sprite from a base64 string which came from a websocket - unity3d

I am trying to turn a base64 string into a Sprite in Unity 3D, but my sprite in the scene remains blank.
public var cardPicture : Image;

function ReceiveData(jsonReply : JSONObject) {
    var pictureBytes : byte[] = System.Convert.FromBase64String(jsonReply.GetString("picture"));
    var cardPictureTexture = new Texture2D(720, 720);
    Debug.Log(cardPictureTexture.LoadImage(pictureBytes));
    var sprite : Sprite = new Sprite();
    sprite = Sprite.Create(cardPictureTexture, new Rect(0, 0, 720, 720), new Vector2(0.5f, 0.5f));
    cardPicture.overrideSprite = sprite;
}
The Debug.Log prints true, but I am not sure whether the image is actually being loaded from the bytes or whether something else is going wrong, and I don't know what to check to find out. Assigning a picture to cardPicture in the scene displays correctly.
I logged the jsonReply.picture and used an online base64 to image converter and it displayed the image correctly.

byte[] pictureBytes = System.Convert.FromBase64String(jsonReply.GetString("picture"));
Texture2D tex = new Texture2D(2, 2);
tex.LoadImage(pictureBytes);
Sprite sprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f);
cardPicture.overrideSprite = sprite;
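Note that Texture2D.LoadImage replaces the texture's size and format with those of the decoded image, so the initial 2x2 dimensions are just placeholders.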

I assume you are trying to fetch an image from a remote URL and parse the bytes into a texture. In Unity, the WWW class facilitates this and does not require you to do the conversion yourself.
I believe your response may contain header details that cause issues when converting to a texture. You can use code like the following:
public string Url = @"http://dummyimage.com/300/09f/fff.png";

void Start () {
    // Starting a coroutine to avoid blocking
    StartCoroutine ("LoadImage");
}

IEnumerator LoadImage()
{
    WWW www = new WWW(Url);
    yield return www;
    Debug.Log ("Loaded");
    Texture texture = www.texture;
    this.gameObject.GetComponent<Renderer>().material.SetTexture(0, texture);
}
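WWW has since been deprecated; on newer Unity versions, a sketch of the same fetch using UnityWebRequestTexture looks like this (the result-checking enum assumes Unity 2020.2+; names are otherwise the same as above):
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ImageLoader : MonoBehaviour
{
    public string Url = @"http://dummyimage.com/300/09f/fff.png";

    IEnumerator Start()
    {
        using (UnityWebRequest www = UnityWebRequestTexture.GetTexture(Url))
        {
            yield return www.SendWebRequest();
            // On Unity versions before 2020.2, check www.isNetworkError / www.isHttpError instead
            if (www.result != UnityWebRequest.Result.Success)
            {
                Debug.Log(www.error);
            }
            else
            {
                // The download handler decodes the bytes into a texture for us
                Texture2D texture = DownloadHandlerTexture.GetContent(www);
                GetComponent<Renderer>().material.mainTexture = texture;
            }
        }
    }
}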

I don't know if this is solved, but I want to share my solution.
void Start()
{
    StartCoroutine(GetQR());
}

IEnumerator GetQR()
{
    using (UnityWebRequest www = UnityWebRequest.Get(GetQR_URL))
    {
        yield return www.SendWebRequest();
        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Show results as text
            Debug.Log(www.downloadHandler.text);
            QRData qr = JsonUtility.FromJson<QRData>(www.downloadHandler.text);
            // Strip the "data:image/...;base64," prefix before decoding
            string result = Regex.Replace(qr.img, @"^data:image\/[a-zA-Z]+;base64,", string.Empty);
            ConvertBase64ToImage(result);
        }
    }
}

void ConvertBase64ToImage(string img)
{
    byte[] bytes = Convert.FromBase64String(img);
    Texture2D myTexture = new Texture2D(512, 212);
    myTexture.LoadImage(bytes);
    Sprite sprite = Sprite.Create(myTexture, new Rect(0, 0, myTexture.width, myTexture.height), new Vector2(0.5f, 0.5f));
    QRimage.transform.parent.gameObject.SetActive(true);
    QRimage.sprite = sprite;
}
It works perfectly on Unity 2019.4.
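The QRData type passed to JsonUtility is not shown in the answer; a minimal sketch (the img field name is inferred from the Regex call above):
[System.Serializable]
public class QRData
{
    // Must match the JSON key that carries the "data:image/...;base64," URI
    public string img;
}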

Related

Unity: Reduce size of Render Texture before executing Texture2D.ReadPixels

I'm working on a script that has to take a low-quality screenshot roughly every 30 milliseconds. The script is attached to a camera.
What I want to do is reduce the render texture size. The way the code is right now, changing either W or H just gets me a SECTION of what the camera sees instead of a reduced-size version. So my question is: how can I resize or downsample what is read into the screenshot (Texture2D) so that it still represents the entire screen?
public class CameraRenderToImage : MonoBehaviour
{
    private RemoteRenderServer rrs;

    void Start() {
        TimeStamp.SetStart();
        Camera.onPostRender += OnPostRenderCallback;
    }

    void OnPostRenderCallback(Camera cam) {
        if (TimeStamp.HasMoreThanThisEllapsed(30)) {
            TimeStamp.SetStart();
            int W = Screen.width;
            int H = Screen.height;
            Texture2D screenshot = new Texture2D(W, H, TextureFormat.RGB24, false);
            screenshot.ReadPixels(new Rect(0, 0, W, H), 0, 0);
            byte[] bytes = screenshot.EncodeToPNG();
            System.IO.File.WriteAllBytes("check_me_out.png", bytes);
            TimeStamp.Tok("Encode to PNG and Save");
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}
If you need to resize your render texture from a script, you can refer to the following snippet:
void Resize(RenderTexture renderTexture, int width, int height) {
    if (renderTexture) {
        renderTexture.Release();
        renderTexture.width = width;
        renderTexture.height = height;
    }
}
To make it possible to resize the render texture, you first need to make sure it is released.
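A hedged usage sketch (cam is an assumed Camera reference; Unity re-creates the released texture the next time it is rendered into):
// Halve the resolution of the camera's target render texture
Resize(cam.targetTexture, Screen.width / 2, Screen.height / 2);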
To get a Texture2D:
private Texture2D ToCompressedTexture(ref RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
    var previousTarget = RenderTexture.active;
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    RenderTexture.active = previousTarget;
    texture.Compress(false);
    texture.Apply(false, true);
    renderTexture.Release();
    renderTexture = null;
    return texture;
}
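For the original question, a sketch combining these ideas: render the camera into a small temporary target and read that back, so the result is the whole view downsampled rather than a cropped section. The method name and parameters are illustrative, and it should be called outside the camera's own render callback:
private Texture2D CaptureDownsampled(Camera cam, int width, int height)
{
    // Render into a small temporary target; this scales the whole
    // view down instead of cropping a section of the screen
    RenderTexture small = RenderTexture.GetTemporary(width, height, 24);
    RenderTexture previousTarget = cam.targetTexture;
    RenderTexture previousActive = RenderTexture.active;

    cam.targetTexture = small;
    cam.Render();    // renders the full frustum at the reduced resolution

    RenderTexture.active = small;
    Texture2D screenshot = new Texture2D(width, height, TextureFormat.RGB24, false);
    screenshot.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    screenshot.Apply();

    // Restore state and release the temporary target
    cam.targetTexture = previousTarget;
    RenderTexture.active = previousActive;
    RenderTexture.ReleaseTemporary(small);
    return screenshot;
}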

Converting RenderTexture to Texture2D in Unity 2019

I'm using Intel RealSense as the camera device to capture pictures. The capture result is displayed as a RenderTexture. Since I need to send it via UDP, I need to convert it to byte[], but that only works for Texture2D. Is it possible to convert a RenderTexture into a Texture2D in Unity 2019?
Edit:
Right now, I'm using this code to convert RenderTexture to Texture2D:
Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.ARGB32, false);
    RenderTexture.active = rTex;
    tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
    tex.Apply();
    return tex;
}
I got this code from here, but it doesn't work anymore in Unity 2019: if I display the texture, it only gives me a white texture.
Edit 2:
Here is how I call that function:
//sender side
Texture2D WebCam;
public RawImage WebCamSender;
public RenderTexture tex;
Texture2D CurrentTexture;
//receiver side
public RawImage WebCamReceiver;
Texture2D Textur;
IEnumerator InitAndWaitForWebCamTexture()
{
    WebCamSender.texture = tex;
    CurrentTexture = new Texture2D(WebCamSender.texture.width,
        WebCamSender.texture.height, TextureFormat.RGB24, false, false);
    WebCam = toTexture2D(tex);
    while (WebCamSender.texture.width < 100) //WebCam
    {
        yield return null;
    }
    StartCoroutine(SendUdpPacketVideo());
}
Then I send it over the network like this:
IEnumerator SendUdpPacketVideo()
{
    ...
    CurrentTexture.SetPixels(WebCam.GetPixels());
    byte[] PNGBytes = CurrentTexture.EncodeToPNG();
    ...
}
On the receiver side, I decode it and display it on a raw image:
....
Textur.LoadImage(ReceivedVideo);
WebCamReceiver.texture = Textur;
...
The most optimized way to do this is:
public Texture2D toTexture2D(RenderTexture rTex)
{
    Texture2D dest = new Texture2D(rTex.width, rTex.height, TextureFormat.RGBA32, false);
    dest.Apply(false);
    Graphics.CopyTexture(rTex, dest);
    return dest;
}
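One caveat: Graphics.CopyTexture copies on the GPU, so the pixels are not visible to CPU-side calls such as GetPixels or EncodeToPNG. If you need the bytes on the CPU (as in the UDP case above), a ReadPixels variant is the safer sketch:
public Texture2D toTexture2DReadback(RenderTexture rTex)
{
    Texture2D dest = new Texture2D(rTex.width, rTex.height, TextureFormat.RGBA32, false);
    // Remember the active target so we can restore it afterwards
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rTex;
    dest.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
    dest.Apply();
    RenderTexture.active = previous;
    return dest;
}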

How to access the HoloLens front camera

I'm working with HoloLens, and I'm trying to get the image from the front camera. The only thing I need is to take each frame from that camera and turn it into a byte array.
Follow the example in the Unity documentation; it is well written:
https://docs.unity3d.com/Manual/windowsholographic-photocapture.html
Copied from the Unity documentation at the link above:
using UnityEngine;
using System.Collections;
using System.Linq;
using UnityEngine.XR.WSA.WebCam;

public class PhotoCaptureExample : MonoBehaviour {
    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;

    // Use this for initialization
    void Start() {
        Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();
        targetTexture = new Texture2D(cameraResolution.width, cameraResolution.height);

        // Create a PhotoCapture object
        PhotoCapture.CreateAsync(false, delegate (PhotoCapture captureObject) {
            photoCaptureObject = captureObject;
            CameraParameters cameraParameters = new CameraParameters();
            cameraParameters.hologramOpacity = 0.0f;
            cameraParameters.cameraResolutionWidth = cameraResolution.width;
            cameraParameters.cameraResolutionHeight = cameraResolution.height;
            cameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

            // Activate the camera
            photoCaptureObject.StartPhotoModeAsync(cameraParameters, delegate (PhotoCapture.PhotoCaptureResult result) {
                // Take a picture
                photoCaptureObject.TakePhotoAsync(OnCapturedPhotoToMemory);
            });
        });
    }

    void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
        // Copy the raw image data into the target texture
        photoCaptureFrame.UploadImageDataToTexture(targetTexture);

        // Create a GameObject to which the texture can be applied
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Renderer quadRenderer = quad.GetComponent<Renderer>() as Renderer;
        quadRenderer.material = new Material(Shader.Find("Custom/Unlit/UnlitTexture"));
        quad.transform.parent = this.transform;
        quad.transform.localPosition = new Vector3(0.0f, 0.0f, 3.0f);
        quadRenderer.material.SetTexture("_MainTex", targetTexture);

        // Deactivate the camera
        photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }

    void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result) {
        // Shutdown the photo capture resource
        photoCaptureObject.Dispose();
        photoCaptureObject = null;
    }
}
To get the bytes, replace the following method:
void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame) {
    // Copy the raw image data into the target texture
    photoCaptureFrame.UploadImageDataToTexture(targetTexture);

    // Create a GameObject to which the texture can be applied
    GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
    Renderer quadRenderer = quad.GetComponent<Renderer>() as Renderer;
    quadRenderer.material = new Material(Shader.Find("Custom/Unlit/UnlitTexture"));
    quad.transform.parent = this.transform;
    quad.transform.localPosition = new Vector3(0.0f, 0.0f, 3.0f);
    quadRenderer.material.SetTexture("_MainTex", targetTexture);

    // Deactivate the camera
    photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}
with the method below:
void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    List<byte> imageBufferList = new List<byte>();
    imageBufferList.Clear();
    photoCaptureFrame.CopyRawImageDataIntoBuffer(imageBufferList);
    var bytesArray = imageBufferList.ToArray();
}
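A hedged follow-up, since the snippet stops at the byte array: with CapturePixelFormat.BGRA32 the buffer holds raw BGRA pixels, so rebuilding a texture needs the resolution used at capture time (cameraResolution here is assumed to be kept from Start):
// Rebuild a texture from the raw buffer; format and size must match
// the CameraParameters used when photo mode was started
Texture2D tex = new Texture2D(cameraResolution.width, cameraResolution.height, TextureFormat.BGRA32, false);
tex.LoadRawTextureData(bytesArray);
tex.Apply();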
If you are using Unity 2018.1 or 2018.2 (I think also 2017.4), this will NOT work. Unity has a public issue tracker for it:
https://issuetracker.unity3d.com/issues/windowsmr-failure-to-take-photo-capture-in-hololens
I created a workaround until Unity fixes the bug:
https://github.com/MSAlshair/HoloLensMediaCapture
Below is a basic sample that works around the issue without Unity's PhotoCapture; more details at the link above.
You must wrap the code in #if WINDOWS_UWP to be able to use MediaCapture. Ideally you would use Unity's PhotoCapture to avoid this, but until Unity resolves the issue, you can use something like this:
#if WINDOWS_UWP
public async System.Threading.Tasks.Task<byte[]> GetPhotoAsync()
{
    // Get available devices info
    var devices = await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(
        Windows.Devices.Enumeration.DeviceClass.VideoCapture);
    var numberOfDevices = devices.Count;
    byte[] photoBytes = null;

    // Check if the device has a camera
    if (devices.Count > 0)
    {
        Windows.Media.Capture.MediaCapture mediaCapture = new Windows.Media.Capture.MediaCapture();
        await mediaCapture.InitializeAsync();

        // Get the highest available resolution
        var highestResolution = mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(
            Windows.Media.Capture.MediaStreamType.Photo).
            Select(item => item as Windows.Media.MediaProperties.ImageEncodingProperties).
            Where(item => item != null).
            OrderByDescending(Resolution => Resolution.Height * Resolution.Width).
            ToList().First();

        using (var photoRandomAccessStream = new Windows.Storage.Streams.InMemoryRandomAccessStream())
        {
            await mediaCapture.CapturePhotoToStreamAsync(highestResolution, photoRandomAccessStream);
            // Convert stream to byte array
            photoBytes = await ConvertFromInMemoryRandomAccessStreamToByteArrayAsync(photoRandomAccessStream);
        }
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("No camera device detected!");
    }
    return photoBytes;
}

public static async System.Threading.Tasks.Task<byte[]> ConvertFromInMemoryRandomAccessStreamToByteArrayAsync(
    Windows.Storage.Streams.InMemoryRandomAccessStream inMemoryRandomAccessStream)
{
    using (var dataReader = new Windows.Storage.Streams.DataReader(inMemoryRandomAccessStream.GetInputStreamAt(0)))
    {
        var bytes = new byte[inMemoryRandomAccessStream.Size];
        await dataReader.LoadAsync((uint)inMemoryRandomAccessStream.Size);
        dataReader.ReadBytes(bytes);
        return bytes;
    }
}
#endif
Do NOT forget to add capabilities to the manifest:
WebCam
Microphone: I am not sure if it is needed since we are only taking photos, but I added it anyway.
Pictures Library, if you want to save to it.
In player settings, enable access to the camera, and try again.

Take photo in unity c#

I'm trying to build a program that takes your photo and places it over a different background, like a monument. So far, I've been able to turn the camera on when I start the project with this code:
webcamTexture = new WebCamTexture();
rawImage.texture = webcamTexture;
rawImage.material.mainTexture = webcamTexture;
webcamTexture.Play();
Texture2D PhotoTaken = new Texture2D (webcamTexture.width, webcamTexture.height);
PhotoTaken.SetPixels (webcamTexture.GetPixels ());
PhotoTaken.Apply ();
However, I can't take a screenshot or photo because it always ends up all black. I've tried different code but nothing works. Can someone please help? Thanks.
EDIT
After some tries, this is the code I have:
WebCamTexture webcamTexture;
public RawImage rawImage;

void Start () {
    webcamTexture = new WebCamTexture();
    rawImage.texture = webcamTexture;
    rawImage.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    RenderTexture texture = new RenderTexture(webcamTexture.width, webcamTexture.height, 0);
    Graphics.Blit(webcamTexture, texture);
    Button btn = yourButton.GetComponent<Button>();
    btn.onClick.AddListener(OnClick);
}

public IEnumerator Coroutine() {
    yield return new WaitForEndOfFrame ();
}

public void OnClick() {
    var width = 767;
    var height = 575;
    Texture2D texture = new Texture2D(width, height);
    texture.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    texture.Apply();

    // Encode texture into PNG
    var bytes = texture.EncodeToPNG();
    //Destroy(texture);
    File.WriteAllBytes (Application.dataPath + "/../SavedScreen.png", bytes);
}
With this next code the screenshot is taken, but it captures the whole screen, not just a part of it.
void Start()
{
    // Set the playback framerate!
    // (real time doesn't influence time anymore)
    Time.captureFramerate = frameRate;

    // Find a folder that doesn't exist yet by appending numbers!
    realFolder = folder;
    int count = 1;
    while (System.IO.Directory.Exists(realFolder))
    {
        realFolder = folder + count;
        count++;
    }
    // Create the folder
    System.IO.Directory.CreateDirectory(realFolder);
}

void Update()
{
    // name is "realFolder/shot 0005.png"
    var name = string.Format("{0}/shot {1:D04}.png", realFolder, Time.frameCount);

    // Capture the screenshot
    Application.CaptureScreenshot(name, sizeMultiplier);
}
You can take a screenshot like this in Unity:
Application.CaptureScreenshot("Screenshot.png");
Reference
EDIT 1
To take a screenshot of a specific part of the screen, use the following script:
var width = 400;
var height = 300;
var startX = 200;
var startY = 100;
var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(startX, startY, width, height), 0, 0);
tex.Apply();

// Encode texture into PNG
var bytes = tex.EncodeToPNG();
Destroy(tex);
File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
Reference
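As for the original all-black result: that usually means the pixels were copied before the WebCamTexture delivered its first frame. A hedged sketch that waits before copying (the coroutine name is illustrative):
IEnumerator TakePhotoWhenReady(WebCamTexture webcamTexture)
{
    // A webcam needs a few frames before it starts delivering images;
    // copying earlier produces a black texture
    while (!webcamTexture.didUpdateThisFrame)
        yield return null;

    Texture2D photo = new Texture2D(webcamTexture.width, webcamTexture.height);
    photo.SetPixels(webcamTexture.GetPixels());
    photo.Apply();
    File.WriteAllBytes(Application.dataPath + "/../SavedPhoto.png", photo.EncodeToPNG());
}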

EmguCV + Unity open webcam error

I use EmguCV to open a webcam in Unity, but its FPS is very low. This is my code:
private Texture2D texture;
private Capture capture;
private Color32[] color = new Color32[640*480];

// Use this for initialization
void Start () {
    texture = new Texture2D (640, 480);
    capture = new Capture ();
}

// Update is called once per frame
void Update () {
    Image<Bgr, Byte> currentFrame = capture.QueryFrame();
    Bitmap bitmapCurrentFrame = currentFrame.ToBitmap();
    Image<Bgra, Byte> img = new Image<Bgra, Byte> (bitmapCurrentFrame);
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 640; x++) {
            int index = y + x * 480;
            print(index + ";" + x + ";" + y);
            //byte b = img.Data[x,y,0];
            color[index].r = img.Data[x,y,2];
            color[index].g = img.Data[x,y,1];
            color[index].b = img.Data[x,y,0];
            color[index].a = 0xff;
        }
    }
    texture.SetPixels32 (color);
    texture.Apply (false);
    renderer.material.mainTexture = texture;
}
I don't know why the FPS is so low, or why my boss likes EmguCV with Unity instead of Unity's WebCamTexture.
Thank you for reading; I hope I can get some answers.
Look at Texture2D.LoadRawTextureData. It has no proper docs, so here's a snippet:
Texture2D tex = new Texture2D(width, height, format, false, true);
tex.LoadRawTextureData(buffer);
tex.Apply(false, true);
The buffer must be in the correct hardware format. For the format variable, look at the list of texture formats that Unity accepts.
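Applied to the webcam code above, a sketch (it assumes the EmguCV frame exposes tightly packed bytes via its Bytes property and that BGRA data matches TextureFormat.BGRA32; verify both against your EmguCV version):
// Convert the BGR frame to BGRA once, then upload all bytes in one call
// instead of looping over 640x480 pixels (and remove the per-pixel print,
// which by itself is enough to destroy the frame rate)
Image<Bgra, byte> img = capture.QueryFrame().Convert<Bgra, byte>();
Texture2D tex = new Texture2D(640, 480, TextureFormat.BGRA32, false, true);
tex.LoadRawTextureData(img.Bytes);
tex.Apply(false);   // keep the texture readable so it can be updated every frame
// Note: the result may be vertically flipped, since Unity textures start at the bottom-left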