EmguCV + Unity: opening a webcam gives very low FPS

I'm using EmguCV to open a webcam in Unity, but the FPS is very low. This is my code:
private Texture2D texture;
private Capture capture;
private Color32[] color = new Color32[640 * 480];

// Use this for initialization
void Start () {
    texture = new Texture2D (640, 480);
    capture = new Capture ();
}

// Update is called once per frame
void Update () {
    Image<Bgr, Byte> currentFrame = capture.QueryFrame();
    Bitmap bitmapCurrentFrame = currentFrame.ToBitmap();
    Image<Bgra, Byte> img = new Image<Bgra, Byte> (bitmapCurrentFrame);
    // Per-pixel copy from the EmguCV image into the Color32 buffer
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 640; x++) {
            int index = y + x * 480;
            print(index + ";" + x + ";" + y); // logs 640*480 = 307,200 times per frame
            //byte b = img.Data[x,y,0];
            color[index].r = img.Data[x, y, 2];
            color[index].g = img.Data[x, y, 1];
            color[index].b = img.Data[x, y, 0];
            color[index].a = 0xff;
        }
    }
    texture.SetPixels32 (color);
    texture.Apply (false);
    renderer.material.mainTexture = texture;
}
I don't know why the FPS is so low... and I don't know why my boss prefers EmguCV with Unity instead of Unity's own WebCamTexture...
Thank you for reading. I hope I can get some answers.

Look at Texture2D.LoadRawTextureData. It has no proper docs, so here's a snippet:
Texture2D tex = new Texture2D(width, height, format, false, true);
tex.LoadRawTextureData(buffer);
tex.Apply(false, true);
The buffer must already be in the correct hardware format. For the format argument, check the list of texture formats that Unity accepts.
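For the capture code in the question, the biggest win is dropping the per-pixel managed loop (and the print call) and handing EmguCV's raw bytes straight to LoadRawTextureData. A minimal sketch, assuming an EmguCV version whose Image<TColor, TDepth> exposes Bytes with no row padding at 640x480; note that OpenCV's top-down rows will appear vertically flipped in Unity:

private Texture2D texture;
private Capture capture;

void Start () {
    // BGRA32 matches the byte layout of Image<Bgra, byte>, so no channel swizzling is needed
    texture = new Texture2D (640, 480, TextureFormat.BGRA32, false);
    capture = new Capture ();
}

void Update () {
    using (Image<Bgr, Byte> frame = capture.QueryFrame ()) {
        if (frame == null) return;
        // One BGR -> BGRA conversion on the EmguCV side replaces 307,200 managed pixel writes
        using (Image<Bgra, Byte> bgra = frame.Convert<Bgra, Byte> ()) {
            texture.LoadRawTextureData (bgra.Bytes);
            texture.Apply (false);
        }
    }
    renderer.material.mainTexture = texture;
}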

Related

Unity: Reduce size of Render Texture before executing Texture2D.ReadPixels

I'm working on a script that has to take a low-quality screenshot roughly every 30 milliseconds. The script is attached to a camera.
What I want to do is reduce the render texture size. The way the code is right now, changing either W or H gets me a section of what the camera sees instead of a scaled-down version of the whole view. So my question is: how can I resize or downsample what is read into the screenshot (Texture2D) so that it is still a representation of the entire screen?
public class CameraRenderToImage : MonoBehaviour
{
    private RemoteRenderServer rrs;

    void Start()
    {
        TimeStamp.SetStart();
        Camera.onPostRender += OnPostRenderCallback;
    }

    void OnPostRenderCallback(Camera cam)
    {
        if (TimeStamp.HasMoreThanThisEllapsed(30))
        {
            TimeStamp.SetStart();
            int W = Screen.width;
            int H = Screen.height;
            Texture2D screenshot = new Texture2D(W, H, TextureFormat.RGB24, false);
            screenshot.ReadPixels(new Rect(0, 0, W, H), 0, 0);
            byte[] bytes = screenshot.EncodeToPNG();
            System.IO.File.WriteAllBytes("check_me_out.png", bytes);
            TimeStamp.Tok("Encode to PNG and Save");
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}
If you need to resize your render texture from a script, you can refer to the following snippet:
void Resize(RenderTexture renderTexture, int width, int height) {
    if (renderTexture) {
        renderTexture.Release();
        renderTexture.width = width;
        renderTexture.height = height;
    }
}
To make it possible to resize the render texture, you first need to make sure it is released.
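For example (a hypothetical usage, assuming the render texture is assigned in the inspector and nothing has rendered into it yet):

public RenderTexture rt;

void Awake() {
    // Quarter resolution; the Release() inside Resize makes changing the size legal
    Resize(rt, Screen.width / 4, Screen.height / 4);
}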
To get a Texture2D:
private Texture2D ToCompressedTexture(ref RenderTexture renderTexture)
{
    var texture = new Texture2D(renderTexture.width, renderTexture.height, TextureFormat.ARGB32, false);
    var previousTarget = RenderTexture.active;
    RenderTexture.active = renderTexture;
    texture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
    RenderTexture.active = previousTarget;
    texture.Compress(false);
    texture.Apply(false, true);
    renderTexture.Release();
    renderTexture = null;
    return texture;
}
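For the original question (a scaled-down screenshot of everything the camera sees rather than a cropped section), one option is to render the camera into a small temporary RenderTexture and read the pixels back from that, so the GPU does the downscaling. This is a sketch, not a drop-in replacement for the onPostRender callback above (calling Camera.Render from inside that callback would recurse):

Texture2D CaptureDownscaled(Camera cam, int w, int h)
{
    RenderTexture small = RenderTexture.GetTemporary(w, h, 24);
    RenderTexture prevTarget = cam.targetTexture;
    RenderTexture prevActive = RenderTexture.active;

    cam.targetTexture = small;
    cam.Render(); // the GPU renders directly at w x h

    RenderTexture.active = small;
    Texture2D shot = new Texture2D(w, h, TextureFormat.RGB24, false);
    shot.ReadPixels(new Rect(0, 0, w, h), 0, 0); // copies only w x h pixels
    shot.Apply();

    cam.targetTexture = prevTarget;
    RenderTexture.active = prevActive;
    RenderTexture.ReleaseTemporary(small);
    return shot;
}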

How to read from multiple render textures and set Texture2D values in Unity 3D

My idea is to create a real-time security camera system on a texture in Unity: use multiple Unity cameras scattered throughout a scene, have them all render to RenderTextures, and then combine them onto one Texture2D which I could either overlay on the screen or attach to an object in the scene (most likely overlay on the screen).
So my question becomes: how do I read from multiple RenderTextures in script in Unity? And how can I then write blocks of each image into segments of a Texture2D, also in script?
This is what I am doing currently
public class Cameras : MonoBehaviour
{
    public RawImage outText;
    public RenderTexture[] cameras;
    public int imgWidth;
    public int imgHeight;

    // hoisted to fields so Update can see them
    private Texture2D[] textures;
    private Texture2D Combined;

    void Start()
    {
        textures = new Texture2D[cameras.Length];
        for (int i = 0; i < cameras.Length; ++i)
        {
            textures[i] = new Texture2D(cameras[i].width, cameras[i].height, TextureFormat.RGB24, false);
        }
        Combined = new Texture2D(imgWidth, imgHeight, TextureFormat.RGB24, false);
    }

    void Update()
    {
        for (int i = 0; i < cameras.Length; ++i)
        {
            //is this a decent way?
            RenderTexture.active = cameras[i];
            textures[i].ReadPixels(new Rect(0, 0, cameras[i].width, cameras[i].height), 0, 0);
            textures[i].Apply();
            RenderTexture.active = null;
            Color32[] camPixels = textures[i].GetPixels32(0);
            /* someway to combine it?
            for(int i = camOffset; i < camBlock.width; ++i)
            {
                for(int j = camOffset; j < camBlock.height; ++j)
                {
                    Combined.SetPixel(i,j,camPixels);
                }
            }
            */
        }
        outText.texture = Combined;
    }
}
As a follow up question, say I wanted to do some effects. How would I write solely to the red channel or the green channel of the Combined texture?
Thanks for the help in advance!
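One straightforward way to combine the cameras is to use the destX/destY parameters of Texture2D.ReadPixels, which copy from the active RenderTexture into a chosen tile of the target texture. A sketch, assuming every camera renders at the same tile size:

Texture2D CombineCameras(RenderTexture[] cams, int tileW, int tileH, int columns)
{
    int rows = Mathf.CeilToInt(cams.Length / (float)columns);
    Texture2D combined = new Texture2D(tileW * columns, tileH * rows, TextureFormat.RGB24, false);

    for (int i = 0; i < cams.Length; ++i)
    {
        // ReadPixels reads from RenderTexture.active and writes at (destX, destY)
        RenderTexture.active = cams[i];
        int destX = (i % columns) * tileW;
        int destY = (i / columns) * tileH;
        combined.ReadPixels(new Rect(0, 0, tileW, tileH), destX, destY);
    }

    RenderTexture.active = null;
    combined.Apply();
    return combined;
}

For the follow-up question: pull the pixels out with GetPixels32, overwrite only the r (or g) field of each Color32, and write them back with SetPixels32.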

Take a photo in Unity C#

I'm trying to build a program that takes your photo and places it on a different background, such as a monument. So far, I was able to turn the camera on when I start the project with this code:
webcamTexture = new WebCamTexture();
rawImage.texture = webcamTexture;
rawImage.material.mainTexture = webcamTexture;
webcamTexture.Play();
Texture2D PhotoTaken = new Texture2D (webcamTexture.width, webcamTexture.height);
PhotoTaken.SetPixels (webcamTexture.GetPixels ());
PhotoTaken.Apply ();
However, I can't take a screenshot or photo because it always ends up all black. I've tried different code but nothing is working. Can someone please help? Thanks.
EDIT
After some tries, this is the code I have:
WebCamTexture webcamTexture;
public RawImage rawImage;

void Start () {
    webcamTexture = new WebCamTexture();
    rawImage.texture = webcamTexture;
    rawImage.material.mainTexture = webcamTexture;
    webcamTexture.Play();
    RenderTexture texture = new RenderTexture(webcamTexture.width, webcamTexture.height, 0);
    Graphics.Blit(webcamTexture, texture);
    Button btn = yourButton.GetComponent<Button>();
    btn.onClick.AddListener(OnClick);
}

public IEnumerator Coroutine() {
    yield return new WaitForEndOfFrame ();
}

public void OnClick() {
    var width = 767;
    var height = 575;
    Texture2D texture = new Texture2D(width, height);
    texture.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    texture.Apply();
    // Encode texture into PNG
    var bytes = texture.EncodeToPNG();
    //Destroy(texture);
    File.WriteAllBytes (Application.dataPath + "/../SavedScreen.png", bytes);
}
and with this next code a screenshot is taken, but it captures the whole screen rather than just the part I want.
void Start()
{
    // Set the playback framerate!
    // (real time doesn't influence time anymore)
    Time.captureFramerate = frameRate;

    // Find a folder that doesn't exist yet by appending numbers!
    realFolder = folder;
    int count = 1;
    while (System.IO.Directory.Exists(realFolder))
    {
        realFolder = folder + count;
        count++;
    }
    // Create the folder
    System.IO.Directory.CreateDirectory(realFolder);
}

void Update()
{
    // name is "realFolder/shot 0005.png"
    var name = string.Format("{0}/shot {1:D04}.png", realFolder, Time.frameCount);
    // Capture the screenshot
    Application.CaptureScreenshot(name, sizeMultiplier);
}
You can take a screenshot like this in Unity:
Application.CaptureScreenshot("Screenshot.png");
EDIT 1
To take a screenshot of a specific part of the screen, use the following script:
var width = 400;
var height = 300;
var startX = 200;
var startY = 100;
var tex = new Texture2D(width, height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(startX, startY, width, height), 0, 0);
tex.Apply();

// Encode texture into PNG
var bytes = tex.EncodeToPNG();
Destroy(tex);
File.WriteAllBytes(Application.dataPath + "/../SavedScreen.png", bytes);
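If the goal is a photo of just the webcam feed rather than the screen, you can read from the WebCamTexture itself instead of calling ReadPixels on the frame buffer. A sketch, assuming webcamTexture is the playing WebCamTexture from the code above; an all-black result usually means the pixels were grabbed before the camera delivered its first frame:

IEnumerator TakeWebcamPhoto()
{
    // WebCamTexture reports a tiny placeholder size until the first real frame arrives
    while (webcamTexture.width <= 16)
        yield return null;
    yield return new WaitForEndOfFrame();

    Texture2D photo = new Texture2D(webcamTexture.width, webcamTexture.height);
    photo.SetPixels(webcamTexture.GetPixels()); // copies only the camera image, not the screen
    photo.Apply();
    File.WriteAllBytes(Application.dataPath + "/../WebcamPhoto.png", photo.EncodeToPNG());
}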

Taking snapshots of an image in Unity

I am trying to take snapshots of the materials I use in my application in Unity. I simply add a directional light and a camera in perspective mode, then render the result to a texture and save it as a .png file. The result is good, but there is a strange gizmo-like figure in the middle of the image. Here it is:
The camera and light are far enough from the object. I also disabled the light to see if the figure was caused by the directional light, but that didn't solve it. Does anyone know what causes this elliptical figure? Thanks in advance.
Edit: here is the code.
public static Texture2D CreateThumbnailFromMaterial(Material _material, string _name, string _path)
{
    GameObject sphereObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphereObj.name = _name;
    sphereObj.GetComponent<Renderer>().material = _material;
    Texture2D thumbnailTexture = CreateThumbnailFromModel(sphereObj, _path);
    sphereObj.GetComponent<Renderer>().material = null;
    Object.DestroyImmediate(sphereObj.gameObject);
    return thumbnailTexture;
}

public static Texture2D CreateThumbnailFromModel(GameObject _gameObject, string _path)
{
    Texture2D thumbnailTexture = new Texture2D(textureSize, textureSize);
    thumbnailTexture.name = _gameObject.name.Simplify();
    GameObject cameraObject = Object.Instantiate(Resources.Load("SceneComponent/SnapshotCamera") as GameObject);
    Camera snapshotCamera = cameraObject.GetComponent<Camera>();
    if (snapshotCamera)
    {
        GameObject sceneObject = GameObject.Instantiate(_gameObject) as GameObject;
        sceneObject.transform.Reset();
        sceneObject.transform.position = new Vector3(1000, 0, -1000);
        sceneObject.hideFlags = HideFlags.HideAndDontSave;

        // Create render texture
        snapshotCamera.targetTexture = RenderTexture.GetTemporary(textureSize, textureSize, 24);
        RenderTexture.active = snapshotCamera.targetTexture;

        // Set layer
        foreach (Transform child in sceneObject.GetComponentsInChildren<Transform>(true))
        {
            child.gameObject.layer = LayerMask.NameToLayer("ObjectSnapshot");
        }

        // Calculate bounding box
        Bounds bounds = sceneObject.GetWorldSpaceAABB();
        float maxBoundValue = 0f;
        if (bounds.IsValid())
        {
            maxBoundValue = Mathf.Max(bounds.size.x, bounds.size.y, bounds.size.z);
        }

        double fov = Mathf.Deg2Rad * snapshotCamera.GetComponent<Camera>().fieldOfView;
        float distanceToCenter = maxBoundValue / (float)System.Math.Tan(fov);
        cameraObject.transform.LookAt(bounds.center);
        cameraObject.transform.position = bounds.center - (snapshotCamera.transform.forward * distanceToCenter);
        cameraObject.transform.SetParent(sceneObject.transform);

        snapshotCamera.Render();
        thumbnailTexture.ReadPixels(new Rect(0, 0, textureSize, textureSize), 0, 0);
        thumbnailTexture.Apply();

        sceneObject.transform.Reset();
        snapshotCamera.transform.SetParent(null);
        RenderTexture.active = null;
        GameObject.DestroyImmediate(sceneObject);
        GameObject.DestroyImmediate(cameraObject);

        // Save as .png
        IO.IOManager.Instance.SaveAsPNG(_path + thumbnailTexture.name, thumbnailTexture);
    }
    return thumbnailTexture;
}
And here are my camera properties:

Load sprite from a base64 string which came from a websocket

I am trying to turn a base64 string into a Sprite in Unity 3D, but my sprite in the scene remains blank.
public var cardPicture : Image;

function ReceiveData(jsonReply : JSONObject) {
    var pictureBytes : byte[] = System.Convert.FromBase64String(jsonReply.GetString("picture"));
    var cardPictureTexture = new Texture2D(720, 720);
    Debug.Log(cardPictureTexture.LoadImage(pictureBytes));
    var sprite : Sprite = new Sprite();
    sprite = Sprite.Create(cardPictureTexture, new Rect(0, 0, 720, 720), new Vector2(0.5f, 0.5f));
    cardPicture.overrideSprite = sprite;
}
This prints true, but I am not sure whether the image is actually being loaded from the bytes or something else is going wrong, and I am not sure what to check to find out. Assigning some other picture to cardPicture in the scene displays correctly.
I logged jsonReply.picture and ran it through an online base64-to-image converter, and it displayed the image correctly.
byte[] pictureBytes = System.Convert.FromBase64String(jsonReply.GetString("picture"));
Texture2D tex = new Texture2D(2, 2);
tex.LoadImage(pictureBytes);
Sprite sprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f);
cardPicture.overrideSprite = sprite;
I assume you are trying to fetch an image from a remote URL and parse the bytes into a texture. In Unity, WWW handles this and does not require you to do the conversion yourself.
I believe your response may contain header details that cause issues when converting to a texture. You can use code like the one below:
public string Url = @"http://dummyimage.com/300/09f/fff.png";

void Start () {
    // Starting a coroutine to avoid blocking
    StartCoroutine ("LoadImage");
}

IEnumerator LoadImage()
{
    WWW www = new WWW(Url);
    yield return www;
    Debug.Log ("Loaded");
    Texture texture = www.texture;
    this.gameObject.GetComponent<Renderer>().material.SetTexture(0, texture);
}
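WWW is deprecated in recent Unity versions; the same fetch with UnityWebRequestTexture looks roughly like the sketch below (written against the 2019.4-era API used in the next answer, and requiring UnityEngine.Networking):

IEnumerator LoadImageModern()
{
    using (UnityWebRequest www = UnityWebRequestTexture.GetTexture(Url))
    {
        yield return www.SendWebRequest();
        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // The download handler decodes the bytes into a Texture2D for us
            Texture2D texture = DownloadHandlerTexture.GetContent(www);
            GetComponent<Renderer>().material.mainTexture = texture;
        }
    }
}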
I don't know if this is solved, but I want to share my solution.
void Start()
{
    StartCoroutine(GetQR());
}

IEnumerator GetQR()
{
    using (UnityWebRequest www = UnityWebRequest.Get(GetQR_URL))
    {
        yield return www.SendWebRequest();
        if (www.isNetworkError || www.isHttpError)
        {
            Debug.Log(www.error);
        }
        else
        {
            // Show results as text
            Debug.Log(www.downloadHandler.text);
            QRData qr = JsonUtility.FromJson<QRData>(www.downloadHandler.text);
            string result = Regex.Replace(qr.img, @"^data:image\/[a-zA-Z]+;base64,", string.Empty);
            CovertBase64ToImage(result);
        }
    }
}

void CovertBase64ToImage(string img)
{
    byte[] bytes = Convert.FromBase64String(img);
    Texture2D myTexture = new Texture2D(512, 212);
    myTexture.LoadImage(bytes);
    Sprite sprite = Sprite.Create(myTexture, new Rect(0, 0, myTexture.width, myTexture.height), new Vector2(0.5f, 0.5f));
    QRimage.transform.parent.gameObject.SetActive(true);
    QRimage.sprite = sprite;
}
It works perfectly on Unity version 2019.4.