I added a cube to my Unity scene, and I want to set the cube's texture from an image.
I use the code below to load the image and set the texture:
Texture2D text2D = new Texture2D(Screen.width, Screen.height,TextureFormat.RGB24 , false);
text2D.SetPixels(((Texture2D)Resources.Load("image")).GetPixels());
MeshRenderer renderer = cube.GetComponent<MeshRenderer>();
renderer.material.mainTexture = text2D;
I see only a gray cube in the scene, not the image.
You can shorten this quite a bit to just:
renderer.material.mainTexture = Resources.Load<Texture2D>("image");
Note that if the image is not found, Resources.Load returns null.
Also, for changes made with SetPixels to actually show up on a Texture2D, you have to call text2D.Apply(); — that is the missing piece in your original code.
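Putting both points together, a minimal sketch of the original approach with the missing pieces added (variable names match the question's code):

```csharp
Texture2D source = Resources.Load<Texture2D>("image");
if (source == null)
{
    Debug.LogError("\"image\" not found in a Resources folder");
    return;
}

// match the source size rather than the screen size
Texture2D text2D = new Texture2D(source.width, source.height, TextureFormat.RGB24, false);
text2D.SetPixels(source.GetPixels());
text2D.Apply(); // upload the pixel changes to the GPU

MeshRenderer renderer = cube.GetComponent<MeshRenderer>();
renderer.material.mainTexture = text2D;
```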
This is even easier to do. Try:
public GameObject _cube;

void Start()
{
    Renderer rend = _cube.GetComponent<Renderer>();
    rend.material.mainTexture = Resources.Load("image") as Texture;
}
The LoadImage method can also do the job, but it takes the raw image bytes, e.g. from a TextAsset's .bytes property.
Example:
public TextAsset image;

void Start()
{
    // the size and format given here are replaced by whatever LoadImage decodes
    var texture = new Texture2D(100, 100, TextureFormat.ARGB32, false);
    texture.LoadImage(image.bytes);
    GetComponent<Renderer>().material.mainTexture = texture;
    texture.Apply();
}
Related
I'm developing a VR app using the Skybox/Panoramic material. I want to bind the material to a Texture2D created in a C# script and then render into that Texture2D from a native plugin. The code is below, but the skybox doesn't show anything. Why?
private int texWidth = 2304;
private int texHeight = 2304;
private Texture2D tex;

// Start is called before the first frame update
void Start()
{
    // Create a texture; assign to the field, don't shadow it with a local
    tex = new Texture2D(texWidth, texHeight, TextureFormat.ARGB32, true);
    tex.filterMode = FilterMode.Trilinear;
    // Call Apply() so it's actually uploaded to the GPU
    tex.Apply();
    init(tex.GetNativeTexturePtr());
    // Set the texture onto our material
    //RenderSettings.skybox.SetTexture("_MainTex", tex);
    RenderSettings.skybox.mainTexture = tex;
}
[Original picture]
[The picture taken in unity]
I want to take a screenshot with a camera in Unity. I can capture one, but it looks weird, like the second picture. How can I solve this problem?
public void ScreenShot(int imgName)
{
    RenderTexture activeRenderTexture = RenderTexture.active;
    RenderTexture.active = targetCam.targetTexture;
    targetCam.Render();
    Texture2D image = new Texture2D(targetCam.targetTexture.width, targetCam.targetTexture.height);
    _texture = image;
    image.ReadPixels(new Rect(0, 0, targetCam.targetTexture.width, targetCam.targetTexture.height), 0, 0);
    image.Apply();
    RenderTexture.active = activeRenderTexture;
    byte[] bytes = image.EncodeToPNG();
}
This is the code I use in Unity.
I have been trying to change the format of a texture that a camera gives me in Alpha8 to RGBA, and have been unsuccessful so far.
This is the code I've tried:
public static class TextureHelperClass
{
    public static Texture2D ChangeFormat(this Texture2D oldTexture, TextureFormat newFormat)
    {
        // Create a new empty texture with the same dimensions as the old one
        Texture2D newTex = new Texture2D(oldTexture.width, oldTexture.height, newFormat, false);
        // Copy the old texture's pixels into the new one
        newTex.SetPixels(oldTexture.GetPixels());
        // Upload the changes to the GPU
        newTex.Apply();
        return newTex;
    }
}
And I'm calling the code like this:
Texture imgTexture = Alpha8Texture.ChangeFormat(TextureFormat.RGBA32);
But the image gets corrupted and isn't visible.
Does anyone know how to change this Alpha8 to RGBA so I can process it like any other image in OpenCV?
A friend provided me with the answer:
Color[] cs = oldTexture.GetPixels();
for (int i = 0; i < cs.Length; i++)
{
    // copy the alpha value into the r, g and b channels
    cs[i].r = cs[i].a;
    cs[i].g = cs[i].a;
    cs[i].b = cs[i].a;
    cs[i].a = 1.0f;
}
// set the pixels in the new texture
newTex.SetPixels(cs);
// upload the changes to the GPU
newTex.Apply();
This takes a lot of resources, but it works for sure.
If you know a better way to make this change, please add an answer to this thread.
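A possibly cheaper variant (a sketch, not tested against the original setup) is to use GetPixels32/SetPixels32, which work on byte channels and skip the float conversion of GetPixels/SetPixels. It assumes newTex already has the same dimensions as oldTexture and a 32-bit format such as RGBA32:

```csharp
// Same channel copy as above, but with Color32 (bytes instead of floats)
Color32[] cs = oldTexture.GetPixels32();
for (int i = 0; i < cs.Length; i++)
{
    byte a = cs[i].a;
    cs[i] = new Color32(a, a, a, 255);
}
newTex.SetPixels32(cs);
newTex.Apply();
```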
I have three scenes.
1) Where you make your team.
2) Where the level is built.
3) The game.
On my team, there are 5 choices for each team member.
I am trying to figure out how to set the player and then recall the Image or Sprite of that player in another scene.
I figured a PlayerPref would work, but it seems like that is not an option.
What is a good way of saving an image from one scene and recalling the image in a different scene?
You can store the texture of a sprite as a base64 string in PlayerPrefs, then recreate the sprite from the stored texture. Note that the texture must be Read/Write Enabled and in a supported format such as ARGB32, RGBA32, or RGB24. Here is an example:
using UnityEngine;
using System.Collections;
public class TextureStore
{
    public static void WriteTextureToPlayerPrefs(string tag, Texture2D tex)
    {
        // if the texture is a PNG; otherwise you can use tex.EncodeToJPG()
        byte[] texByte = tex.EncodeToPNG();
        // convert the byte array to a base64 string
        string base64Tex = System.Convert.ToBase64String(texByte);
        // write the string to PlayerPrefs
        PlayerPrefs.SetString(tag, base64Tex);
        PlayerPrefs.Save();
    }

    public static Texture2D ReadTextureFromPlayerPrefs(string tag)
    {
        // load the string from PlayerPrefs
        string base64Tex = PlayerPrefs.GetString(tag, null);
        if (!string.IsNullOrEmpty(base64Tex))
        {
            // convert it back to a byte array
            byte[] texByte = System.Convert.FromBase64String(base64Tex);
            Texture2D tex = new Texture2D(2, 2);
            // load the texture from the byte array
            if (tex.LoadImage(texByte))
            {
                return tex;
            }
        }
        return null;
    }
}
You could just save the name of the sprite to PlayerPrefs and then load it from Resources: Resources.Load<Sprite>(spriteName);
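A minimal sketch of that approach, assuming the sprite lives in a Resources folder and using a hypothetical key name "playerSprite":

```csharp
// Team scene: remember which sprite was chosen
PlayerPrefs.SetString("playerSprite", chosenSprite.name);
PlayerPrefs.Save();

// Game scene: load it back from a Resources folder
string spriteName = PlayerPrefs.GetString("playerSprite");
Sprite sprite = Resources.Load<Sprite>(spriteName);
```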
You can also create a static texture variable: write the image's texture into it in one scene and read it in the other scene.
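For example, a hypothetical static holder class; statics survive scene loads, so no serialization is needed (but the reference is lost when the app quits):

```csharp
public static class SelectedPlayer
{
    // set in the team scene, read in the game scene
    public static Sprite Sprite;
}
```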
Surprisingly, for years the only way in Unity to simply scale an actual PNG was to use the very awesome TextureScale library: http://wiki.unity3d.com/index.php/TextureScale
Example below
How do you scale a PNG using Unity 5 functions? There must be a way now, with the new UI and so on.
So: scaling actual pixels (such as in a Color[]), or literally a PNG file, perhaps downloaded from the net.
(BTW, if you're new to Unity: the Resize call is unrelated; it merely changes the size of an array.)
public WebCamTexture wct;

public void UseFamousLibraryToScale()
{
    // take the photo, scale down to 256,
    // and crop to a central square
    int oldW = wct.width; // NOTE: example code assumes wider than high
    int oldH = wct.height;
    Texture2D photo = new Texture2D(oldW, oldH,
        TextureFormat.ARGB32, false);
    // consider WaitForEndOfFrame() before GetPixels
    photo.SetPixels(0, 0, oldW, oldH, wct.GetPixels());
    photo.Apply();
    int newH = 256;
    int newW = Mathf.FloorToInt(
        ((float)newH / (float)oldH) * oldW);
    // use a famous Unity library to scale
    TextureScale.Bilinear(photo, newW, newH);
    // crop to a central 256x256 square
    int startAcross = (newW - 256) / 2;
    Color[] pix = photo.GetPixels(startAcross, 0, 256, 256);
    photo = new Texture2D(256, 256, TextureFormat.ARGB32, false);
    photo.SetPixels(pix);
    photo.Apply();
    demoImage.texture = photo;
    // consider WriteAllBytes(
    //   Application.persistentDataPath + "p.png",
    //   photo.EncodeToPNG()); etc.
}
Just BTW, it occurs to me that I'm probably only talking about scaling down here (as you often have to do to post an image, create something on the fly, or whatever). There is rarely a need to scale an image up; it's pointless quality-wise.
If you're okay with stretch-scaling, there's actually a simpler way using a temporary RenderTexture and Graphics.Blit. If you need the result as a Texture2D, temporarily swap RenderTexture.active and read the pixels back into one. For example:
public Texture2D ScaleTexture(Texture src, int width, int height)
{
    RenderTexture rt = RenderTexture.GetTemporary(width, height);
    Graphics.Blit(src, rt);
    RenderTexture currentActiveRT = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D tex = new Texture2D(rt.width, rt.height);
    tex.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    tex.Apply();
    // restore the previously active render texture before releasing ours
    RenderTexture.active = currentActiveRT;
    RenderTexture.ReleaseTemporary(rt);
    return tex;
}
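A usage sketch; the source texture name here is a placeholder for whatever texture you want to shrink:

```csharp
// e.g. stretch-scale any texture down to a 256x256 Texture2D
Texture2D small = ScaleTexture(someSourceTexture, 256, 256);
byte[] png = small.EncodeToPNG();
```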