I'm using screenshot code to take a screenshot of the screen, which works fine, but it also captures the ARCore models. Is there a way to take a screenshot before the models are rendered?
I tried SetActive(false), then taking the screenshot, then SetActive(true). It does work, but there's a noticeable flicker, i.e. the model disappears and then reappears.
Update: This is a script applied to the ScreenShotCamera, updated after removing all the bugs (thanks to @Shingo). Feel free to use it; it's working properly.
using GoogleARCore;
using OpenCVForUnitySample;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

[RequireComponent(typeof(Camera))]
public class SnapshotCamera : MonoBehaviour
{
    Camera snapCam;
    public UnityEngine.UI.Text text;
    public RenderTexture mRenderTexture;
    int resWidth = 480;
    int resHeight = 800;

    public void initialize(ARBackgroundRenderer background, Material material)
    {
        background = new ARBackgroundRenderer();
        snapCam = GetComponent<Camera>();
        background.backgroundMaterial = material;
        background.camera = snapCam;
        background.mode = ARRenderMode.MaterialAsBackground;
        if (snapCam.targetTexture == null)
        {
            snapCam.targetTexture = new RenderTexture(resWidth, resHeight, 24);
        }
        else
        {
            snapCam.targetTexture.height = resHeight;
            snapCam.targetTexture.width = resWidth;
        }
        // NameToLayer returns a layer index (Default == 0), not a bitmask.
        // Used as a culling mask, 0 renders nothing, so only the AR background
        // material is drawn -- which is exactly what a model-free screenshot needs.
        background.camera.cullingMask = LayerMask.NameToLayer("Default");
        snapCam.gameObject.SetActive(false);
    }

    public void TakeSnapShot()
    {
        // Enabling the camera triggers the capture in LateUpdate below
        snapCam.gameObject.SetActive(true);
    }

    void LateUpdate()
    {
        if (snapCam.gameObject.activeInHierarchy)
        {
            snapCam.cullingMask = LayerMask.NameToLayer("Default");
            if (ARCoreBackgroundRenderer.screenShot == null)
                ARCoreBackgroundRenderer.screenShot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
            snapCam.Render();
            RenderTexture.active = snapCam.targetTexture;
            ARCoreBackgroundRenderer.screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
            ARCoreBackgroundRenderer.screenShot.Apply();
            snapCam.gameObject.SetActive(false);
            HandPoseRecognition.captureTexture = false;
            //string name = string.Format("{0}_Capture{1}_{2}.png", Application.productName, "{0}", System.DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss"));
            //UnityEngine.Debug.Log("Permission result: " + NativeGallery.SaveImageToGallery(ARCoreBackgroundRenderer.screenShot, Application.productName + " Captures", name));
        }
    }
}
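For context, a hypothetical caller might look like this (snapshotCamera, arBackgroundMaterial and the button hookup are illustrative names, not part of the script above):
using GoogleARCore;
using UnityEngine;

// Hypothetical caller -- names are illustrative
public class SnapshotTrigger : MonoBehaviour
{
    public SnapshotCamera snapshotCamera;
    public Material arBackgroundMaterial;

    void Start()
    {
        snapshotCamera.initialize(new ARBackgroundRenderer(), arBackgroundMaterial);
    }

    // Wire this to a UI button
    public void OnSnapshotButtonPressed()
    {
        // Enables the snapshot camera; the capture itself happens in LateUpdate,
        // and the result lands in ARCoreBackgroundRenderer.screenShot
        snapshotCamera.TakeSnapShot();
    }
}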
Perhaps I was a little ambiguous. What you mentioned in the comment has already been resolved (thanks to you), but the problem now is this:
I'll show you the images:
These are the 2 cameras I have:
This is what my main (ARCore) camera shows
And this is what the screenshot camera shows
You can use layers: put every ARCore model in one layer (e.g. ARLAYER), then set the camera's culling mask to exclude that layer.
Pseudo code:
// Put the models on the AR layer
foreach (GameObject arcoreModel in arcoreModels)
    arcoreModel.layer = ARLAYER;

// Exclude that layer from the camera's culling mask
camera.cullingMask = ~(1 << ARLAYER);
camera.Render();
Create the screenshot camera from another camera:
var go = new GameObject("screenshotcamera");
// Copy the transform
go.transform.position = mainCamera.transform.position;
...
// Copy the camera settings
var screenshotCamera = go.AddComponent<Camera>();
screenshotCamera.CopyFrom(mainCamera);
Then update your script to use it:
snapCam = GetComponent<Camera>();
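Putting both ideas together, a minimal sketch (assuming the models sit on a layer named "ARLAYER" and mainCamera is your ARCore camera; the names are illustrative):
using UnityEngine;

public class ModelFreeScreenshot : MonoBehaviour
{
    public Camera mainCamera; // the ARCore camera
    int resWidth = 480;
    int resHeight = 800;

    public Texture2D Capture()
    {
        // Clone the main camera so its background and settings match
        var go = new GameObject("screenshotcamera");
        var snapCam = go.AddComponent<Camera>();
        snapCam.CopyFrom(mainCamera);

        // Hide everything on the AR layer from this camera only
        snapCam.cullingMask &= ~(1 << LayerMask.NameToLayer("ARLAYER"));

        // Render offscreen and read the pixels back
        var rt = new RenderTexture(resWidth, resHeight, 24);
        snapCam.targetTexture = rt;
        snapCam.Render();
        RenderTexture.active = rt;
        var shot = new Texture2D(resWidth, resHeight, TextureFormat.RGB24, false);
        shot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
        shot.Apply();

        // Clean up
        RenderTexture.active = null;
        snapCam.targetTexture = null;
        Destroy(rt);
        Destroy(go);
        return shot;
    }
}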
This script works as follows: when I raycast an object, it fades out.
The script works great in Unity Play Mode, but objects won't fade out in the PC/Android build. Nothing happens, even though the raycast is detecting the object (so the problem isn't the raycast).
I debugged the Android/PC builds, and the script does reach the SetMaterialProperties method.
Standard material, default new project, default scene, a simple capsule GameObject, nothing specific.
What could be causing this?
private IEnumerator FadeIn()
{
    Color objectColor = GetComponent<MeshRenderer>().material.color;
    while (objectColor.a < 1)
    {
        float fadeAmount = objectColor.a + (_fadeSpeed * Time.deltaTime);
        objectColor = new Color(objectColor.r, objectColor.g, objectColor.b, fadeAmount);
        SetMaterialProperties(objectColor);
        yield return null;
    }
}

private void SetMaterialProperties(Color color)
{
    foreach (var material in _materials)
    {
        material.SetColor("_Color", color);
        material.SetFloat("_Mode", 3);
        material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
        material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
        material.EnableKeyword("_ALPHABLEND_ON");
        material.renderQueue = 3000;
    }
}
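A likely culprit (an assumption, but a common one): switching the Standard shader to a transparent mode at runtime only works in a build if the transparent shader variant was actually compiled in, and Unity strips variants that no material in the project uses. Including at least one material saved in Fade mode, or adding the Standard shader to Always Included Shaders in Graphics settings, usually fixes this. For reference, the full set of properties Unity's own material inspector applies for Fade mode, which the loop body above partially omits:
// Full Fade-mode setup, mirroring what Unity's StandardShaderGUI does
material.SetOverrideTag("RenderType", "Transparent");
material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
material.SetInt("_ZWrite", 0); // transparent objects don't write depth
material.DisableKeyword("_ALPHATEST_ON");
material.EnableKeyword("_ALPHABLEND_ON");
material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
material.renderQueue = (int)UnityEngine.Rendering.RenderQueue.Transparent;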
So I'm making a card game and want my users to be able to create custom cards and not only use them in the game but also be able to print them.
Currently, I'm stuck on converting the UI GameObject of the card into a single image.
The card GameObject has a template card background, several other images (resources, main image, etc), and multiple text boxes (title, type, and description). Then I want to take that card hierarchy and convert it to a single image.
I thought this would be a fairly simple task, but it appears to be non-trivial... Or am I being a melon here?
Good question! Have a button call Capture below when pressed, and assign the GameObject you want to capture:
Code:
using System.Collections;
using System.IO;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    public GameObject target; // the card root; must have a RectTransform

    public void Capture()
    {
        // ReadPixels reads from the back buffer, so wait until rendering is done
        StartCoroutine(CaptureAtEndOfFrame());
    }

    private IEnumerator CaptureAtEndOfFrame()
    {
        yield return new WaitForEndOfFrame();
        var rectTransform = target.GetComponent<RectTransform>();
        var delta = rectTransform.sizeDelta;
        var position = rectTransform.position;
        // Grow a zero-size rect at the (centered) pivot by half the size on each side,
        // giving the object's rect in screen pixels (screen-space overlay canvas assumed)
        var offset = new RectOffset((int) (delta.x / 2), (int) (delta.x / 2), (int) (delta.y / 2), (int) (delta.y / 2));
        var rect = new Rect(position, Vector2.zero);
        var add = offset.Add(rect);
        var tex = new Texture2D((int) add.width, (int) add.height);
        tex.ReadPixels(add, 0, 0);
        tex.Apply();
        var encodeToPNG = tex.EncodeToPNG();
        File.WriteAllBytes("card.png", encodeToPNG);
        DestroyImmediate(tex);
    }
}
In short: get the object's rect on screen, copy the pixels, voila!
Result:
I have the exact code from a tutorial I copied, and the webcam is plugged in and working. But when I run the game in the Unity Editor, there is no "No device connected" error and no script errors; the camera feed just isn't displayed. I'm confused as to why it isn't working.
Why isn't it being displayed?
My webCamScript:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class webCamScript : MonoBehaviour {

    public GameObject webCameraPlane;

    // Use this for initialization
    void Start () {
        if (Application.isMobilePlatform) {
            // On device, parent the camera so it can be tilted to match the gyro
            GameObject cameraParent = new GameObject ("camParent");
            cameraParent.transform.position = this.transform.position;
            this.transform.parent = cameraParent.transform;
            cameraParent.transform.Rotate (Vector3.right, 90);
        }
        Input.gyro.enabled = true;
        WebCamTexture webCameraTexture = new WebCamTexture ();
        webCameraPlane.GetComponent<MeshRenderer> ().material.mainTexture = webCameraTexture;
        webCameraTexture.Play ();
    }

    // Update is called once per frame
    void Update () {
        // Convert the right-handed gyro attitude into Unity's left-handed space
        Quaternion attitude = Input.gyro.attitude;
        Quaternion cameraRotation = new Quaternion (attitude.x, attitude.y, -attitude.z, -attitude.w);
        this.transform.localRotation = cameraRotation;
    }
}
Solved
I found the problem, I had a custom texture on the plane which was stopping the camera texture from being inserted.
I presume it has something to do with the fact that you have your code wrapped in an if statement that checks whether you are running on a mobile platform. The Editor is not classed as a mobile platform, so that code is skipped.
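If you do want that branch to run in Play Mode as well, one option (a sketch, not from the original thread) is to widen the check:
// Run the camera-parent setup on devices and in the Editor while testing
if (Application.isMobilePlatform || Application.isEditor)
{
    GameObject cameraParent = new GameObject ("camParent");
    cameraParent.transform.position = transform.position;
    transform.parent = cameraParent.transform;
    cameraParent.transform.Rotate (Vector3.right, 90);
}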
I'm working on an algorithm based on hand-gesture recognition. I found this algorithm running in a WinForms app with C# scripts, and I need to use the same technique in my game to perform hand-gesture detection through the webcam. I tried to use the algorithm in my game scripts but was unable to capture any image with it. Below is the code I'm currently working on. I'm using the AForge.NET framework to implement the motion-detection idea. The bitmap image always returns null, yet the same algorithm in WinForms captures an image on every frame change. I know there is a PhotoCapture technique in Unity, but I'm not sure how to use it at runtime on every frame. Any guidance is appreciated. Thanks!
OpenCamera.cs
using AForge.GestureRecognition;
using System.Collections;
using System.Collections.Generic;
using System.Drawing;
using UnityEngine;
using System.Drawing.Design;
using UnityEngine.VR.WSA.WebCam;
using System.Linq;
using System;

public class OpenCamera : MonoBehaviour {

    // statistics length
    private const int statLength = 15;

    // Note: this bitmap is never assigned anywhere, which is why it stays null
    Bitmap image;

    PhotoCapture photoCaptureObject = null;
    Texture2D targetTexture = null;

    // current statistics index
    private int statIndex = 0;
    // ready statistics values
    private int statReady = 0;
    // statistics array
    private int[] statCount = new int[statLength];

    private GesturesRecognizerFromVideo gesturesRecognizer = new GesturesRecognizerFromVideo();
    private Gesture gesture = new Gesture();
    private int gestureShowTime = 0;

    // Use this for initialization
    void Start()
    {
        WebCamTexture webcamTexture = new WebCamTexture();
        Renderer renderer = GetComponent<Renderer>();
        renderer.material.mainTexture = webcamTexture;
        webcamTexture.Play();
    }

    // Update is called once per frame
    void Update ()
    {
        gesturesRecognizer.ProcessFrame(ref image);
        // check if we need to draw gesture information on top of image
        if (gestureShowTime > 0)
        {
            if ((gesture.LeftHand == HandPosition.NotRaised) || (gesture.RightHand != HandPosition.NotRaised))
            {
                System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(image);
                string text = string.Format("Left = " + gesture.LeftHand + "\nRight = " + gesture.RightHand);
                System.Drawing.Font drawFont = new System.Drawing.Font("Courier", 13, System.Drawing.FontStyle.Bold);
                SolidBrush drawBrush = new SolidBrush(System.Drawing.Color.Blue);
                g.DrawString(text, drawFont, drawBrush, new PointF(0, 5));
                drawFont.Dispose();
                drawBrush.Dispose();
                g.Dispose();
            }
            gestureShowTime--;
        }
    }
}
As mentioned in the comments (which I am not able to reply to at the moment), those libraries always depend heavily on the System.Drawing libraries, which are more or less Windows-only.
What you could try is copying the System.Drawing DLL (from your system folder) into your Unity project (it's managed code, so it'll run on every exportable platform) to keep the references. Then you could capture a Texture2D every frame (via RenderTextures or similar) and draw those pixels into a Bitmap that is fed to the library.
Note that copying thousands of pixels every frame is really, really heavy, and you'd have to convert each UnityEngine.Color to a System.Drawing.Color...
This is the easiest solution that might work, but definitely not a good final one.
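A minimal sketch of that conversion (assuming a playing WebCamTexture; this is the slow per-pixel route described above, fine for prototyping only):
using System.Drawing;
using System.Drawing.Imaging;
using UnityEngine;

public static class TextureToBitmap
{
    // Copy one frame from a WebCamTexture into a System.Drawing.Bitmap.
    // Slow: one managed call per pixel.
    public static Bitmap Convert(WebCamTexture source)
    {
        Color32[] pixels = source.GetPixels32();
        var bitmap = new Bitmap(source.width, source.height, PixelFormat.Format24bppRgb);
        for (int y = 0; y < source.height; y++)
        {
            for (int x = 0; x < source.width; x++)
            {
                // GetPixels32 is bottom-up; Bitmap is top-down, so flip Y
                Color32 c = pixels[(source.height - 1 - y) * source.width + x];
                bitmap.SetPixel(x, y, System.Drawing.Color.FromArgb(c.r, c.g, c.b));
            }
        }
        return bitmap;
    }
}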
I am using UGUI to make a novice guide that teaches people how to play my game.
I need the whole UI to be masked (darkened), with some rectangular areas highlighted.
How can I do this?
Create a new GameObject and add an Image component to it. Create an image with transparent areas where you want your UI to be visible, and assign that image to the Image component. Then add a Mask component.
Put your other GUI elements inside this GameObject so that it overlaps and hides everything except the transparent areas. Here is a picture of the demo setup.
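The same setup built in code, as a rough sketch (cutoutSprite is assumed to be your image with the cut-out areas):
using UnityEngine;
using UnityEngine.UI;

public class TutorialMaskSetup : MonoBehaviour
{
    public Sprite cutoutSprite; // the image with the cut-out highlight areas

    void Start()
    {
        // The mask object: child elements are clipped to this image's shape
        var maskGO = new GameObject("TutorialMask", typeof(RectTransform));
        maskGO.transform.SetParent(transform, false);

        var image = maskGO.AddComponent<Image>();
        image.sprite = cutoutSprite;
        maskGO.AddComponent<Mask>();

        // Re-parent the GUI elements you want clipped under maskGO here
    }
}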
IMHO, what you want to achieve is not easy to do perfectly in Unity. Here is my personal solution:
I put a black panel below every other GUI element, so that it darkens my entire screen.
I put an empty game object called BrightRoot below the panel, so that everything under BrightRoot floats over the panel and is "brightened".
In my tutorial script, I add a function that looks for a UI game object by name and changes its parent to BrightRoot. For example:
// To brighten the object
GameObject button = GameObject.Find("PlayButton");
Transform oldParent = button.transform.parent;
button.transform.SetParent(BrightRoot, true);

// To darken it again
button.transform.SetParent(oldParent, true);
The perfect solution would be to write a UI shader that darkens any pixel outside a set of rectangles and brightens those inside, then set that shader on all UI objects.
Edited:
Here is another easy method, using a UI vertex effect. You just need to implement IsPointInsideClipRect, put this component on your UI objects, and set the rectangles list:
using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;

[AddComponentMenu("UI/Effects/Clip")]
public class Clip : BaseVertexEffect
{
    // We need a list of rectangles here - can be an array of RectTransform
    public RectTransform[] ClipRects;

    public override void ModifyVertices(List<UIVertex> vertexList)
    {
        if (!IsActive())
        {
            return;
        }
        // Darken the element unless at least one of its vertices lies inside a rect
        bool isClipped = true;
        for (int i = 0; i < vertexList.Count; i++)
        {
            UIVertex uiVertex = vertexList[i];
            foreach (RectTransform rect in ClipRects)
            {
                if (IsPointInsideClipRect(rect, uiVertex.position))
                {
                    isClipped = false;
                    break;
                }
            }
        }
        // Color32 takes bytes (0-255), not floats
        Color32 color = isClipped ? new Color32(128, 128, 128, 128) : new Color32(255, 255, 255, 255);
        for (int i = 0; i < vertexList.Count; i++)
        {
            UIVertex uiVertex = vertexList[i];
            uiVertex.color = color;
            vertexList[i] = uiVertex; // UIVertex is a struct, so write the copy back
        }
    }

    private static bool IsPointInsideClipRect(RectTransform rect, Vector3 position)
    {
        // ...
    }
}
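One way to fill in the missing IsPointInsideClipRect, as a sketch (it assumes the vertex positions ModifyVertices receives are in the same space InverseTransformPoint expects, which depends on your canvas setup):
private static bool IsPointInsideClipRect(RectTransform rect, Vector3 position)
{
    // Bring the point into the rect's local space, then test containment
    Vector3 local = rect.InverseTransformPoint(position);
    return rect.rect.Contains(local);
}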