How to draw a line or rectangle on a Plane in code? - unity3d

I created a Plane 3D object in my scene and attached the C# script below to it. But the plane's color does not change; it still shows white. Why? BTW, the plane uses a Mesh Renderer component.
private Texture2D drawTexture;
private Color[] buffer;

// Start is called before the first frame update
void Start()
{
    Texture2D mainTexture = (Texture2D)GetComponent<Renderer>().material.mainTexture;
    Color[] pixels = mainTexture.GetPixels();
    buffer = new Color[pixels.Length];
    pixels.CopyTo(buffer, 0);
    // Change pixel color of drawing area
    for (int i = 0; i < pixels.Length; ++i)
    {
        buffer[i] = Color.red;
    }
    // Update pixels of texture with changed pixels
    drawTexture = new Texture2D(mainTexture.width, mainTexture.height, TextureFormat.RGBA32, false);
    drawTexture.filterMode = FilterMode.Point;
    drawTexture.SetPixels(buffer);
    drawTexture.Apply();
    GetComponent<Renderer>().material.mainTexture = drawTexture;
}
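For reference, the loop above floods the entire buffer red rather than drawing a shape. A minimal sketch of filling only a rectangular region instead, assuming the same buffer layout as above (the helper name and coordinates are made up for illustration):

// Fill a rectangle from (x0, y0) to (x1, y1) in texture pixel coordinates.
// Assumes the buffer holds width * height pixels row by row, as GetPixels returns them.
void FillRect(Color[] buffer, int width, int x0, int y0, int x1, int y1, Color color)
{
    for (int y = y0; y <= y1; ++y)
    {
        for (int x = x0; x <= x1; ++x)
        {
            buffer[y * width + x] = color; // row-major index into the pixel array
        }
    }
}

After filling, the same SetPixels/Apply calls as above upload the buffer to the texture.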

Related

How to read from multiple render textures and set texture 2d values Unity 3D

So my idea is to create a real-time security camera system on a texture in Unity. The idea is to use multiple Unity cameras scattered throughout a scene and have them all render to a RenderTexture, then combine them onto one Texture2D which I could either overlay on the screen or attach to an object in the scene (most likely just overlay on the screen).
So my question becomes: how do I read from multiple RenderTextures in a script in Unity? And how can I then write blocks of the image to segments of a Texture2D, also in script?
This is what I am doing currently:
using UnityEngine;
using UnityEngine.UI;

public class Cameras : MonoBehaviour
{
    public RawImage outText;
    public RenderTexture[] cameras;
    public int imgWidth;
    public int imgHeight;

    // Declared as fields so Update() can see them
    private Texture2D[] textures;
    private Texture2D Combined;

    void Start()
    {
        textures = new Texture2D[cameras.Length];
        for (int i = 0; i < cameras.Length; ++i)
        {
            textures[i] = new Texture2D(cameras[i].width, cameras[i].height, TextureFormat.RGB24, false);
        }
        Combined = new Texture2D(imgWidth, imgHeight, TextureFormat.RGB24, false);
    }

    void Update()
    {
        for (int i = 0; i < cameras.Length; ++i)
        {
            //is this a decent way?
            RenderTexture.active = cameras[i];
            textures[i].ReadPixels(new Rect(0, 0, cameras[i].width, cameras[i].height), 0, 0);
            textures[i].Apply();
            RenderTexture.active = null;
            Color32[] camPixels = textures[i].GetPixels32(0);
            /* someway to combine it?
            for (int x = camOffset; x < camBlock.width; ++x)
            {
                for (int y = camOffset; y < camBlock.height; ++y)
                {
                    Combined.SetPixel(x, y, camPixels);
                }
            }
            */
        }
        outText.texture = Combined;
    }
}
As a follow-up question, say I wanted to do some effects. How would I write solely to the red channel or the green channel of the Combined texture?
Thanks for the help in advance!
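One way to approach the combining step (a sketch, not from the original question): Texture2D.ReadPixels takes destination offsets, so each camera's RenderTexture can be read directly into a region of the combined texture, which avoids a per-pixel SetPixel loop. The side-by-side layout here is an assumption for illustration:

// Sketch: tile each camera's RenderTexture into one combined Texture2D.
// Assumes all RenderTextures share one size and Combined is wide enough for a single row.
void Combine()
{
    for (int i = 0; i < cameras.Length; ++i)
    {
        RenderTexture.active = cameras[i];
        // destX/destY place this camera's pixels at an offset inside Combined
        Combined.ReadPixels(new Rect(0, 0, cameras[i].width, cameras[i].height),
            i * cameras[i].width, 0);
    }
    RenderTexture.active = null;
    Combined.Apply();
}

For the channel question, one option is to zero the other channels in the Color32 array before writing it back:

// Keep only the red channel of a pixel block
for (int p = 0; p < camPixels.Length; ++p)
{
    camPixels[p].g = 0;
    camPixels[p].b = 0;
}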

Generate mesh from one-color texture

I wrote some code to be able to draw and generate a sprite of the drawing, so I get a sprite with a white background and my drawing (which is in a different color).
My question: how could I remove the white background at runtime (with C# code)?
My problem is: I want to generate a mesh from the drawing, but with the white background I only get 4 vertices (the four corners of the sprite), and I want to get all the vertices of the actual shape I drew on the sprite (so many more than 4 vertices).
My current idea is to convert the drawing to have a transparent background and then use Unity's sprite packer to generate a mesh from that.
My project: it's a game where users can create their own game circuit. The user draws a black and white sprite, then I convert it to a mesh with a collider and generate the new game circuit.
I already thought of removing all the white pixels, but I don't think I will get many vertices with that technique.
Thanks for the help,
Axel
using System.IO;
using UnityEngine.UI;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEditor;
using UnityEngine.Networking;

public class scri : MonoBehaviour
{
    // For saving the mesh------------------------
    public KeyCode saveKey = KeyCode.F12;
    public string saveName = "SavedMesh";

    // Concerning mesher--------------------------
    public GameObject mesher; //required
    public List<Vector3> vertices;
    public List<int> triangles;
    public Vector3 point0;
    public Vector3 point1;
    public Vector3 point2;
    public Vector3 point3;
    public int loop;
    public float size;
    public Mesh meshFilterMesh;
    public Mesh meshColliderMesh;

    // Sprite work
    public Color[] pixels;
    public Texture2D newTexture;
    public Texture2D oldTexture; //required
    private Sprite mySprite;
    private SpriteRenderer spriteRenderer;
    public int pathCount;
    public GameObject displayerComponent; //required
    public PolygonCollider2D polygonColliderAdded; //required

    void Start()
    {
        // Mesher
        vertices = new List<Vector3>();
        triangles = new List<int>();
        meshFilterMesh = mesher.GetComponent<MeshFilter>().mesh;
        meshColliderMesh = mesher.GetComponent<MeshCollider>().sharedMesh;
        size = 10; // length of the mesh in the Z direction
        loop = 0;
        // Sprite
        pixels = oldTexture.GetPixels();
        newTexture = new Texture2D(oldTexture.width, oldTexture.height, TextureFormat.ARGB32, false);
        spriteRenderer = gameObject.AddComponent<SpriteRenderer>();
        ConvertSpriteAndCreateCollider(pixels);
        BrowseColliderToCreateMesh(polygonColliderAdded);
    }

    void Update()
    {
        // Save if F12 is pressed
        if (Input.GetKeyDown(saveKey)) { SaveAsset(); }
    }

    public void ConvertSpriteAndCreateCollider(Color[] pixels)
    {
        for (int i = 0; i < pixels.Length; i++)
        {
            // Delete all black pixels (black is the circuit, white is the walls)
            if (pixels[i].r == 0 && pixels[i].g == 0 && pixels[i].b == 0 && pixels[i].a == 1)
            {
                pixels[i] = Color.clear;
            }
        }
        // Set a new texture with this pixel list
        newTexture.SetPixels(pixels);
        newTexture.Apply();
        // Create a sprite from this texture
        mySprite = Sprite.Create(newTexture, new Rect(0, 0, newTexture.width, newTexture.height), new Vector2(10.0f, 10.0f), 10.0f, 0, SpriteMeshType.Tight, new Vector4(0, 0, 0, 0), false);
        // Add it to our displayerComponent
        displayerComponent.GetComponent<SpriteRenderer>().sprite = mySprite;
        // Add the polygon collider to our displayer component and get its path count
        polygonColliderAdded = displayerComponent.AddComponent<PolygonCollider2D>();
    }

    // Method to browse the collider and launch MakeMesh
    public void BrowseColliderToCreateMesh(PolygonCollider2D polygonColliderAdded)
    {
        // Browse all paths from the collider
        pathCount = polygonColliderAdded.pathCount;
        for (int i = 0; i < pathCount; i++)
        {
            Vector2[] path = polygonColliderAdded.GetPath(i);
            // Browse all path points
            for (int j = 1; j < path.Length; j++)
            {
                if (j != (path.Length - 1)) // if we aren't at the last point
                {
                    point0 = new Vector3(path[j - 1].x, path[j - 1].y, 0);
                    point1 = new Vector3(path[j - 1].x, path[j - 1].y, size);
                    point2 = new Vector3(path[j].x, path[j].y, size);
                    point3 = new Vector3(path[j].x, path[j].y, 0);
                    MakeMesh(point0, point1, point2, point3);
                }
                else // at the last point, close the loop with the first point
                {
                    point0 = new Vector3(path[j - 1].x, path[j - 1].y, 0);
                    point1 = new Vector3(path[j - 1].x, path[j - 1].y, size);
                    point2 = new Vector3(path[j].x, path[j].y, size);
                    point3 = new Vector3(path[j].x, path[j].y, 0);
                    MakeMesh(point0, point1, point2, point3);
                    point0 = new Vector3(path[j].x, path[j].y, 0);
                    point1 = new Vector3(path[j].x, path[j].y, size);
                    point2 = new Vector3(path[0].x, path[0].y, size); // first point
                    point3 = new Vector3(path[0].x, path[0].y, 0); // first point
                    MakeMesh(point0, point1, point2, point3);
                }
            }
        }
    }

    // Method to generate a 2-triangle mesh from the 4 points 0 1 2 3 and add it to the collider
    public void MakeMesh(Vector3 point0, Vector3 point1, Vector3 point2, Vector3 point3)
    {
        // Vertex add
        vertices.Add(point0);
        vertices.Add(point1);
        vertices.Add(point2);
        vertices.Add(point3);
        // Triangle order
        triangles.Add(0 + loop * 4);
        triangles.Add(2 + loop * 4);
        triangles.Add(1 + loop * 4);
        triangles.Add(0 + loop * 4);
        triangles.Add(3 + loop * 4);
        triangles.Add(2 + loop * 4);
        loop = loop + 1;
        // Create mesh
        meshFilterMesh.vertices = vertices.ToArray();
        meshFilterMesh.triangles = triangles.ToArray();
        // Add this mesh to the MeshCollider
        mesher.GetComponent<MeshCollider>().sharedMesh = meshFilterMesh;
    }

    // Save if F12 is pressed
    public void SaveAsset()
    {
        var mf = mesher.GetComponent<MeshFilter>();
        if (mf)
        {
            var savePath = "Assets/" + saveName + ".asset";
            Debug.Log("Saved Mesh to: " + savePath);
            AssetDatabase.CreateAsset(mf.mesh, savePath);
        }
    }
}
One approach is to generate the mesh directly on your own terms. The pro of this is that you have very fine control over exactly what you want pixel boundaries to look like, and you have better information to do your own triangulation of the mesh. The downside is that you have to do all of this yourself.
One way of implementing this is to use the Marching Squares algorithm to generate isobands from the pixel data (you can use the blue/green/alpha channel to get the isovalue, depending on whether the background is white or transparent), and then generate a piece of the mesh from each of the 2x2 pixel groups that contains a part of the isoband.
To get the pixel data from the image you can use Texture2D.GetPixels. Then you can run the marching squares algorithm on that information to determine how to represent every 2x2 cluster of pixels in the mesh, and use that to find the vertices of each triangle that represents that quad of pixels.
Once you convert each quad of pixels into triangles, arrange the vertices of those triangles into an array (make sure you order the vertices of each triangle in a clockwise direction as seen from the visible side) and use Mesh.SetVertices to create a mesh with those vertices. A simplified sketch of this pipeline follows.
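As a rough illustration of that pipeline (a sketch only: it skips the 16 marching-squares cases and simply emits one quad per opaque pixel, so the outline is blocky, but the GetPixels-to-Mesh plumbing is the same; all names are made up):

using System.Collections.Generic;
using UnityEngine;

public class PixelMeshSketch : MonoBehaviour
{
    public Texture2D source; // readable texture with a transparent background

    Mesh BuildMesh()
    {
        Color[] pixels = source.GetPixels();
        var vertices = new List<Vector3>();
        var triangles = new List<int>();
        for (int y = 0; y < source.height; y++)
        {
            for (int x = 0; x < source.width; x++)
            {
                // Treat any sufficiently opaque pixel as "inside" the shape
                if (pixels[y * source.width + x].a < 0.5f) continue;
                int v = vertices.Count;
                // One unit quad per pixel; real marching squares would instead
                // place vertices along the cell edges of each 2x2 group
                vertices.Add(new Vector3(x, y, 0));
                vertices.Add(new Vector3(x, y + 1, 0));
                vertices.Add(new Vector3(x + 1, y + 1, 0));
                vertices.Add(new Vector3(x + 1, y, 0));
                // Two triangles per quad, wound clockwise as seen by a camera looking along +Z
                triangles.AddRange(new[] { v, v + 1, v + 2, v, v + 2, v + 3 });
            }
        }
        var mesh = new Mesh();
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32; // allow more than 65k vertices
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}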
Another approach is to set the alpha of any non-red pixel to zero, and let Unity's sprite packer generate the mesh for you.
Here is one way to do that:
If the texture is an asset and you want to modify it, check Read/Write Enabled on the texture asset. If the texture is created at runtime (and is therefore not an asset), this step can be skipped.
Get the pixel data with Texture2D.GetPixels. This will get you an array of pixels in the form of Color[] pixels:
public Texture2D tex;
...
Color[] pixels = tex.GetPixels();
Iterate through each index and replace any pixel that is not pure red (such as the white background pixels) with a clear pixel:
for (int i = 0; i < pixels.Length; i++)
{
    if (pixels[i].r != 1f
        || pixels[i].g != 0f
        || pixels[i].b != 0f)
    {
        pixels[i] = Color.clear;
    }
}
Set the texture pixel data with the modified pixel array:
tex.SetPixels(pixels);
tex.Apply();
The downside to this approach is that I do not know whether the Unity sprite packer can pack textures created at runtime onto the sprite atlas. If it cannot, then a different tool would be needed for this approach to generate meshes from sprites at runtime.
OK, I've made something:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class scri : MonoBehaviour
{
    public Texture2D tex;
    public Texture2D newText;
    public Sprite sprite;
    public List<Color> colorList;
    private Sprite mySprite;
    private SpriteRenderer sr;

    // Start is called before the first frame update
    void Start()
    {
        sr = gameObject.AddComponent<SpriteRenderer>() as SpriteRenderer;
        newText = new Texture2D(tex.width, tex.height, TextureFormat.ARGB32, false);
        Color[] pixels = sprite.texture.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            Debug.Log(pixels[i]);
            if (pixels[i].r == 1)
            {
                pixels[i] = Color.clear;
            }
        }
        newText.SetPixels(pixels);
        newText.Apply();
        mySprite = Sprite.Create(newText, new Rect(0.0f, 0.0f, newText.width, newText.height), new Vector2(0.5f, 0.5f), 100.0f);
        sr.sprite = mySprite;
    }

    // Update is called once per frame
    // void Update()
    // {
    //     Debug.Log(sprite.triangles.Length);
    //     Debug.Log(sprite.vertices.Length);
    // }
}
Useful links:
https://forum.unity.com/threads/setting-pixel-to-transparent-turns-out-black.172375/
https://docs.unity3d.com/ScriptReference/Sprite.Create.html
https://forum.unity.com/threads/is-it-possible-to-convert-a-texture2d-from-one-format-to-another-in-standalone-run-time.327141/
https://forum.unity.com/threads/texture-setpixels.431177/
But I don't know why, if the PNG has a white background to begin with, it doesn't work well...
With an SVG it works from the start, even without my code.
But in the Sprite Editor I could generate a custom physics shape:

SpriteRenderer shrinks after copying its texture

I have a SpriteRenderer with a texture, and I want to make a copy of the texture and assign this copy to the same SpriteRenderer.
// Copy of the texture that will be changed at runtime
[HideInInspector]
public Texture2D tempTexture;

// FilterMode of the copied texture
[SerializeField]
private FilterMode filterMode = FilterMode.Bilinear;

/// <summary>
/// Create a copy of the texture that is used by a SpriteRenderer and unload the original texture from memory
/// </summary>
void Start()
{
    var spriteRenderer = GetComponent<SpriteRenderer>();
    if (spriteRenderer == null)
    {
        Debug.LogError("SpriteRenderer is null");
        return;
    }
    var tex = spriteRenderer.sprite.texture;
    if (tex == null)
    {
        Debug.LogError("Sprite's texture is null");
        return;
    }
    Debug.Log("original texture size is: " + tex.width + " : " + tex.height);
    tempTexture = new Texture2D(tex.width, tex.height, tex.format, false);
    tempTexture.filterMode = filterMode;
    try
    {
        var colors = tex.GetPixels();
        tempTexture.SetPixels(colors);
    }
    catch (Exception ex) // requires "using System;"
    {
        Debug.LogError(ex.Message);
        return;
    }
    tempTexture.Apply();
    Resources.UnloadAsset(tex);
    spriteRenderer.sprite = Sprite.Create(tempTexture, spriteRenderer.sprite.textureRect, Vector2.one * 0.5f);
}
If I don't change the texture's size (or make it bigger) in the Inspector by overriding it, everything works perfectly. But if I make the texture's size smaller, the sprite shrinks once my script completes its job.
To put it another way: if the texture is, say, bigger than 512x512 and I change its max size to 512 in the Inspector, the sprite shrinks and becomes two times smaller.
The problem is in my script, because if I disable it everything is okay, even if I override the texture's size.
Any ideas about how to fix it?
I have found a solution:
var oldSprite = spriteRenderer.sprite;
spriteRenderer.sprite = Sprite.Create(tempTexture, spriteRenderer.sprite.textureRect, Vector2.one * 0.5f, oldSprite.pixelsPerUnit, 0, SpriteMeshType.Tight, oldSprite.border);
The sprite's border property is responsible for its size. (Note also that Sprite.Create defaults to 100 pixels per unit when that argument is omitted, so passing oldSprite.pixelsPerUnit here also matters for preserving the original scale.)

Taking snapshots of an image in Unity

I am trying to take snapshots of the materials I use in my application in Unity. I simply add a directional light and a camera in perspective mode, then render the result to a texture and save it as a .png file. The result is good, but there is a strange gizmo-like figure in the middle of the image. Here it is:
The camera and light are far enough from the object. I also disabled the light to see if the figure was caused by the directional light, but that didn't solve it. Does anyone know what causes this elliptical figure? Thanks in advance.
Edit: here is the code.
public static Texture2D CreateThumbnailFromMaterial(Material _material, string _name, string _path)
{
    GameObject sphereObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    sphereObj.name = _name;
    sphereObj.GetComponent<Renderer>().material = _material;
    Texture2D thumbnailTexture = CreateThumbnailFromModel(sphereObj, _path);
    sphereObj.GetComponent<Renderer>().material = null;
    Object.DestroyImmediate(sphereObj.gameObject);
    return thumbnailTexture;
}

public static Texture2D CreateThumbnailFromModel(GameObject _gameObject, string _path)
{
    Texture2D thumbnailTexture = new Texture2D(textureSize, textureSize);
    thumbnailTexture.name = _gameObject.name.Simplify();
    GameObject cameraObject = Object.Instantiate(Resources.Load("SceneComponent/SnapshotCamera") as GameObject);
    Camera snapshotCamera = cameraObject.GetComponent<Camera>();
    if (snapshotCamera)
    {
        GameObject sceneObject = GameObject.Instantiate(_gameObject) as GameObject;
        sceneObject.transform.Reset();
        sceneObject.transform.position = new Vector3(1000, 0, -1000);
        sceneObject.hideFlags = HideFlags.HideAndDontSave;
        // Create render texture
        snapshotCamera.targetTexture = RenderTexture.GetTemporary(textureSize, textureSize, 24);
        RenderTexture.active = snapshotCamera.targetTexture;
        // Set layer
        foreach (Transform child in sceneObject.GetComponentsInChildren<Transform>(true))
        {
            child.gameObject.layer = LayerMask.NameToLayer("ObjectSnapshot");
        }
        // Calculate bounding box
        Bounds bounds = sceneObject.GetWorldSpaceAABB();
        float maxBoundValue = 0f;
        if (bounds.IsValid())
        {
            maxBoundValue = Mathf.Max(bounds.size.x, bounds.size.y, bounds.size.z);
        }
        double fov = Mathf.Deg2Rad * snapshotCamera.fieldOfView;
        float distanceToCenter = maxBoundValue / (float)System.Math.Tan(fov);
        cameraObject.transform.LookAt(bounds.center);
        cameraObject.transform.position = bounds.center - (snapshotCamera.transform.forward * distanceToCenter);
        cameraObject.transform.SetParent(sceneObject.transform);
        snapshotCamera.Render();
        thumbnailTexture.ReadPixels(new Rect(0, 0, textureSize, textureSize), 0, 0);
        thumbnailTexture.Apply();
        sceneObject.transform.Reset();
        snapshotCamera.transform.SetParent(null);
        RenderTexture.active = null;
        GameObject.DestroyImmediate(sceneObject);
        GameObject.DestroyImmediate(cameraObject);
        // Save as .png
        IO.IOManager.Instance.SaveAsPNG(_path + thumbnailTexture.name, thumbnailTexture);
    }
    return thumbnailTexture;
}
And here are my camera properties:

Wrong result when using SetPixel()

I've been dealing with a problem for a few days now with SetPixel() on Texture2D.
What I'm doing is getting the mouse position or touch position (on Android), then using that in SetPixel() with a transparent color. But the result occurs elsewhere instead of exactly where the mouse is...
public class EarshPic : MonoBehaviour
{
    public SpriteRenderer sr;
    public SpriteRenderer srO;
    public Camera c;

    // Use this for initialization
    void Start()
    {
        CreateCover(); // This method is working fine
    }

    private void CreateCover()
    {
        Color color = new Color(0.5f, 0.5f, 0.5f, 1.0f);
        int x = srO.sprite.texture.width;
        int y = srO.sprite.texture.height;
        Texture2D tmpTexture = new Texture2D(x, y);
        for (int i = 0; i < tmpTexture.width; i++)
        {
            for (int j = 0; j < tmpTexture.height; j++)
            {
                tmpTexture.SetPixel(i, j, color);
            }
        }
        tmpTexture.Apply(true);
        sr.sprite = Sprite.Create(tmpTexture, srO.sprite.rect,
            new Vector2(0.5f, 0.5f), srO.sprite.pixelsPerUnit);
    }

    // I have a problem in this method
    // Vector2 v = mouse position or touch position
    void Eraser(Vector2 v)
    {
        Color color = new Color(0.5f, 0.5f, 0.5f, 0.0f);
        // v is in screen coordinates, but SetPixel expects texture pixel coordinates
        sr.sprite.texture.SetPixel((int)v.x, (int)v.y, color);
        sr.sprite.texture.Apply(true);
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.mousePosition != null) // note: always true for a Vector3
        {
            Eraser(Input.mousePosition);
        }
        if (Input.touchCount == 1)
        {
            Touch touch = Input.GetTouch(0);
            switch (touch.phase)
            {
                case TouchPhase.Moved:
                    Eraser(touch.position);
                    break;
            }
        }
    }
}
Problem
You are mixing different coordinate systems. Your click is in screen coordinates, but you are using it to set the transparency in texture coordinates. This is a problem whenever the texture does not exactly cover the screen.
Solution
This approach requires 3D models with colliders and textures on them. For a 2D scenario you can use a box and set its texture to your 2D sprite. I don't know of an easier method, but hopefully there is one.
You first have to convert the screen position to a world-space ray. This can be done with Camera.ScreenPointToRay.
Then you need to Physics.Raycast that ray to check which point of the 3D model's collider it intersects.
The intersection point can be converted to texture coordinates with RaycastHit.textureCoord. In the previous link you can find a complete code example of the whole process.
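Putting those steps together, a minimal sketch (adapted from the pattern in the Unity documentation, not from the original thread; note that RaycastHit.textureCoord only works when the object has a MeshCollider, and this reuses the c and sr fields from the question):

// Sketch: erase the texture where the mouse ray hits a MeshCollider.
// textureCoord returns UVs in 0..1, so scale by the texture size.
void Update()
{
    if (!Input.GetMouseButton(0)) return;
    Ray ray = c.ScreenPointToRay(Input.mousePosition);
    if (Physics.Raycast(ray, out RaycastHit hit))
    {
        Texture2D tex = sr.sprite.texture;
        int px = (int)(hit.textureCoord.x * tex.width);
        int py = (int)(hit.textureCoord.y * tex.height);
        tex.SetPixel(px, py, Color.clear); // transparent pixel at the hit point
        tex.Apply();
    }
}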