Tint a Texture2D in the same way Unity does

I'm trying to tint a Texture2D with a Color in Unity3D in the same way the sprite renderer does.
What is the best way to do this?
Edit:
This is the code I tried:
private Color Tint(Color source, Color tint) {
    Color tinted = source;
    //tinted.r = Math.Min(Math.Max(0f, (source.r + (255f - source.r) * tint.r)), 255f);
    //tinted.g = Math.Min(Math.Max(0f, (source.g + (255f - source.g) * tint.g)), 255f);
    //tinted.b = Math.Min(Math.Max(0f, (source.b + (255f - source.b) * tint.b)), 255f);
    //tinted.r = (float)Math.Floor(source.r * tint.r / 255f);
    //tinted.g = (float)Math.Floor(source.g * tint.g / 255f);
    //tinted.b = (float)Math.Floor(source.b * tint.b / 255f);
    Color.RGBToHSV(source, out float sourceH, out float sourceS, out float sourceV);
    Color.RGBToHSV(tint, out float tintH, out float tintS, out float tintV);
    tinted = Color.HSVToRGB(tintH, tintS, sourceV);
    tinted.a = source.a;
    return tinted;
}

AFAIK the tint for a Renderer is in general done by multiplication (black stays black) and can simply be achieved using the * operator:
private Color Tint(Color source, Color tint)
{
    return source * tint;
}

A more flexible way is to use Color.Lerp:
Color original = Color.red;
Color tint = Color.cyan;
float tintPercentage = 0.3f;

// This will give you the original color tinted by tintPercentage
UnityEngine.Color.Lerp(original, tint, tintPercentage);
// This will give you the original color
UnityEngine.Color.Lerp(original, tint, 0f);
// This will give you the full tint color
UnityEngine.Color.Lerp(original, tint, 1f);
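Applied to the original question, either approach can be mapped over a whole Texture2D. Here is a minimal sketch of my own using the multiplicative tint (it assumes the texture has Read/Write enabled in its import settings; the helper name is hypothetical):

using UnityEngine;

public static class TextureTintUtil
{
    // Hypothetical helper: returns a tinted copy of a readable Texture2D,
    // multiplying every pixel by the tint the way the SpriteRenderer does.
    public static Texture2D Tinted(Texture2D source, Color tint)
    {
        var result = new Texture2D(source.width, source.height);
        Color[] pixels = source.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            pixels[i] *= tint; // component-wise multiply, so black stays black
        }
        result.SetPixels(pixels);
        result.Apply();
        return result;
    }
}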

Related

Unity material.color.a code is not working

void OnTriggerStay(Collider other)
{
    if (other.gameObject.CompareTag("tree"))
    {
        Color color = other.gameObject.GetComponent<Renderer>().material.color;
        color.a = 0.5f;
    }
}
If an object with a "tree" tag enters the camera's trigger, I want the opacity of that object to be 0.5. But it didn't work. How can I fix this? Thanks.
While the other two answers are technically correct, you are most likely missing a very important step needed to allow changing the alpha of a Material. I'll take a guess and assume you created a new Material in the Editor using the asset creation menu. By default, the Material's Rendering Mode is set to Opaque.
To allow changes to the Material color's alpha, you will need to set the Rendering Mode to either Transparent or Fade. If you are working with a custom shader, you will need to alter the shader code to conform to one of the mentioned RenderTypes. If you need help modifying your shader, that would best be answered in a new question.
For clarity, here is a gif of what the confusion might be:
Edit: For completeness, here is a full script that will toggle the Rendering Mode of your material at runtime if you do not wish to set it ahead of time in the Editor.
using UnityEngine;

public static class MaterialUtils
{
    public enum BlendMode
    {
        Opaque,
        Cutout,
        Fade,
        Transparent
    }

    public static void SetupBlendMode(Material material, BlendMode blendMode)
    {
        switch (blendMode)
        {
            case BlendMode.Transparent:
                material.SetOverrideTag("RenderType", "Transparent");
                material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
                material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
                material.SetInt("_ZWrite", 0);
                material.DisableKeyword("_ALPHATEST_ON");
                material.EnableKeyword("_ALPHABLEND_ON");
                material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
                material.renderQueue = (int)UnityEngine.Rendering.RenderQueue.Transparent;
                material.SetFloat("_Mode", 3.0f);
                break;
            case BlendMode.Opaque:
                material.SetOverrideTag("RenderType", "");
                material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);
                material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.Zero);
                material.SetInt("_ZWrite", 1);
                material.DisableKeyword("_ALPHATEST_ON");
                material.DisableKeyword("_ALPHABLEND_ON");
                material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
                material.renderQueue = -1;
                material.SetFloat("_Mode", 0.0f);
                break;
            default:
                Debug.LogWarning("Warning: BlendMode: " + blendMode + " is not yet implemented!");
                break;
        }
    }
}
[RequireComponent(typeof(MeshRenderer))]
public class TestScript : MonoBehaviour
{
    [SerializeField] private MeshRenderer mr = null;
    [SerializeField] private float alphaChange = 0.5f;

    private bool isOpaque = true;

    private void Awake()
    {
        if (mr == null)
            mr = GetComponent<MeshRenderer>();
    }

    private void OnMouseDown()
    {
        // store our color struct and change the alpha channel
        Color clr = mr.material.color;
        clr.a = alphaChange;

        // instance our material to alter the rendermode
        Material mat = mr.material;

        // update our render mode to transparent and our color to the new alpha
        MaterialUtils.SetupBlendMode(mat, isOpaque ? MaterialUtils.BlendMode.Transparent : MaterialUtils.BlendMode.Opaque);
        mat.color = clr;

        // apply our material change
        mr.material = mat;

        // toggle our bool
        isOpaque = !isOpaque;
    }
}
Your original question does not state whether or not you need to toggle the material back to opaque, but I included it. You can keep the Rendering Mode as Transparent and simply change the alpha back to 1.0f to make it fully opaque again. Again, here's a gif example of the above script in action:
To show that the snippet is working, I placed 2 spheres behind the cubes. The snippet is probably a bit overkill for what you need, but if someone else stumbles on the question and needs a more versatile answer, here it is!
Color is just a struct, basically a container of values without further functionality. It is not linked to the Material it was taken from.
By assigning only
color.a = XY;
you change only your local copy, nothing else yet.
You have to assign it back to the material!
void OnTriggerStay(Collider other)
{
    if (!other.CompareTag("tree")) return;

    var material = other.GetComponent<Renderer>().material;
    var color = material.color;
    color.a = 0.5f;
    material.color = color;
}
You're not really setting the color with the code you wrote.
Color color = other.gameObject.GetComponent<Renderer>().material.color;
color.a = 0.5f;
With the first line you take the color from the object and with the second you set the opacity. But you don't assign it back to the object. You can assign the color back to the object and it should work:
other.gameObject.GetComponent<Renderer>().material.color = color;

Changing the color of a gameobject shows as white not the colour requested

I have a set of gameobjects (simple cubes). I can set their initial colour when instantiating them. However, when I try to change the colour in code, the object in the Game view and Inspector shows as white, but the colour picker shows the correct colour!
There is a single directional light (the default one).
IEnumerator ColourChange()
{
    Color targetColour = new Color(Random.Range(0, 255), Random.Range(0, 255), Random.Range(0, 255));
    Debug.Log("color = " + targetColour);
    for (int x = 0; x < CreateCubeGrid.GRIDSIZE; x++) {
        for (int z = 0; z < CreateCubeGrid.GRIDSIZE; z++) {
            CreateCubeGrid.cubeGrid[x,z].GetComponent<Renderer>().material.color = targetColour;
        }
        yield return new WaitForSeconds (0.05f);
    }
}
Colours are 0 to 1, not 0 to 255.
Use Color32 if you want to use 0-255 values.
Color32 Documentation
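For example, the colour line in the coroutine above could be fixed either way (a sketch; note that Random.Range with int arguments excludes the max value, hence 256):

// Floats in the 0-1 range...
Color targetColour = new Color(Random.Range(0f, 1f), Random.Range(0f, 1f), Random.Range(0f, 1f));

// ...or 0-255 byte values via Color32, which converts implicitly to Color.
Color32 targetColour32 = new Color32(
    (byte)Random.Range(0, 256),
    (byte)Random.Range(0, 256),
    (byte)Random.Range(0, 256),
    255);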
In order to change the material's color, you must tell it exactly which color property you're trying to change ("_Color" for Albedo, "_EmissionColor" for Emission, etc.) by using material.SetColor.
An approach that should work for materials using Standard Shader can be seen below:
private void ChangeColor(Color color)
{
    // Fetch the Renderer from the GameObject.
    Renderer rend = GetComponent<Renderer>();

    // Make sure the material uses the Standard shader,
    // then set its main color property ("_Color") to the new one.
    rend.material.shader = Shader.Find("Standard");
    rend.material.SetColor("_Color", color);
}
You can read more about changing a Material's color at Unity's Documentation.
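As a side note beyond the answers here: assigning material.color or material.SetColor instantiates a copy of the material per renderer, which adds up on a large cube grid. A MaterialPropertyBlock avoids that; here is a minimal sketch of my own using standard Unity API (the class name is hypothetical):

using UnityEngine;

public class CubeColorSetter : MonoBehaviour
{
    private static MaterialPropertyBlock block;

    public void SetColor(Color color)
    {
        if (block == null) block = new MaterialPropertyBlock();

        Renderer rend = GetComponent<Renderer>();
        rend.GetPropertyBlock(block);
        block.SetColor("_Color", color); // "_Color" = Standard shader main color
        rend.SetPropertyBlock(block);
    }
}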

Drawing a Rectangle with color and thickness in OnGUI

I would like to draw a frame / rectangle in OnGUI in order to display a certain area for debugging purposes.
This rectangle should be displayed with a certain "thickness" / line width and color.
So far, I have only found GUI.Label and GUI.Box, which both seems inadequate for this.
Thank you!
If it is only for debugging, I would recommend using Gizmos.DrawWireCube.
Note: Gizmos are only drawn in the SceneView, not in the GameView, so this really is only for debugging.
private void OnDrawGizmosSelected()
{
    // Draw a yellow wire cube at the transform position
    var color = Gizmos.color;
    Gizmos.color = Color.yellow;
    Gizmos.DrawWireCube(transform.position, new Vector3(1, 1, 1));
    Gizmos.color = color;
}
Use OnDrawGizmosSelected for showing it only while the object is selected, or OnDrawGizmos for showing it always.
Note that this is done in world space, so if you rather want the size vector etc. to rotate together with the object, you can wrap it in between
var matrix = Gizmos.matrix;
Gizmos.matrix = transform.localToWorldMatrix;
//...
Gizmos.matrix = matrix;
Unfortunately there is no option to change the line thickness...
... but you could overcome this by simply drawing e.g. 4 normal cubes using Gizmos.DrawCube to form a rectangle. Something maybe like
private void OnDrawGizmos()
{
    DrawDebugRect(new Vector2(0.5f, 0.3f), 0.05f);
}

private void DrawDebugRect(Vector2 size, float thickness)
{
    var matrix = Gizmos.matrix;
    Gizmos.matrix = transform.localToWorldMatrix;

    // top cube
    Gizmos.DrawCube(Vector3.up * size.y / 2, new Vector3(size.x, thickness, 0.01f));
    // bottom cube
    Gizmos.DrawCube(Vector3.down * size.y / 2, new Vector3(size.x, thickness, 0.01f));
    // left cube
    Gizmos.DrawCube(Vector3.left * size.x / 2, new Vector3(thickness, size.y, 0.01f));
    // right cube
    Gizmos.DrawCube(Vector3.right * size.x / 2, new Vector3(thickness, size.y, 0.01f));

    Gizmos.matrix = matrix;
}
I'm only on a smartphone, so it might not be copy-paste-able, but I think you'll get the idea ;)
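If you do need the rectangle in the Game view via OnGUI as originally asked, here is a minimal sketch of my own (not from the answer above) that draws the outline as four filled strips using GUI.DrawTexture, Texture2D.whiteTexture and GUI.color:

using UnityEngine;

public class DebugRectGUI : MonoBehaviour
{
    private void OnGUI()
    {
        // Example values: a screen-space rect, 4 px line width, yellow.
        DrawScreenRect(new Rect(100, 100, 200, 120), 4f, Color.yellow);
    }

    private static void DrawScreenRect(Rect rect, float thickness, Color color)
    {
        var previous = GUI.color;
        GUI.color = color; // GUI.color tints what GUI.DrawTexture draws

        GUI.DrawTexture(new Rect(rect.xMin, rect.yMin, rect.width, thickness), Texture2D.whiteTexture);             // top
        GUI.DrawTexture(new Rect(rect.xMin, rect.yMax - thickness, rect.width, thickness), Texture2D.whiteTexture); // bottom
        GUI.DrawTexture(new Rect(rect.xMin, rect.yMin, thickness, rect.height), Texture2D.whiteTexture);            // left
        GUI.DrawTexture(new Rect(rect.xMax - thickness, rect.yMin, thickness, rect.height), Texture2D.whiteTexture); // right

        GUI.color = previous;
    }
}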

How to modify a Texture's pixels from a compute shader in Unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the returned image is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I first have to rotate the image and then flip it sideways. The approach I am using simply iterates over the pixels of the image and modifies them, which is very poor performance-wise. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);
Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); //rotate 180deg
image.SetPixels32(colors); //apply rotation
image = FlipTexture(image); //flip sideways
//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int width = original.width;
    int height = original.height;

    for (int col = 0; col < width; col++)
    {
        for (int row = 0; row < height; row++)
        {
            if (upSideDown)
            {
                // vertical flip: mirror the row index
                flipped.SetPixel(col, (height - 1) - row, original.GetPixel(col, row));
            }
            else
            {
                // horizontal flip: mirror the column index
                flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
            }
        }
    }

    flipped.Apply();
    return flipped;
}
To improve the performance I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little for myself. The following works for flipping a texture vertically (rotating 180 degrees and then flipping horizontally amounts to a vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Vertical flip: mirror the y coordinate (size hardcoded to 512 x 1024;
    // the mirrored coordinate must be a per-thread local, not a global).
    uint2 flip = uint2(id.x, 1023 - id.y);
    Result[id.xy] = float4(ImageInput[flip].xyz, 1.0);
}
and called from any script:
// Assumed fields: the compute shader asset, the input texture,
// and a Texture2D to read the result back into.
public ComputeShader shader;
public Texture2D myTexture;
private Texture2D result;

public void FlipImage()
{
    int kernelHandle = shader.FindKernel("CSMain");

    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();

    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);

    // Read the RenderTexture back into the Texture2D.
    result = new Texture2D(512, 1024);
    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
    RenderTexture.active = null;
}
This takes an input Texture2D, flips it in the shader, applies it to a RenderTexture and to a Texture2D, whatever you need.
Note that the image sizes are hardcoded in my instance and should be replaced by whatever size you need (to pass them into the shader, use shader.SetInt()).
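For example, building on the answer's own hint (the shader would then declare matching int Width; int Height; globals and use them in place of the hardcoded constants):

// Sketch: pass the texture size in instead of hardcoding it.
shader.SetInt("Width", myTexture.width);
shader.SetInt("Height", myTexture.height);
shader.Dispatch(kernelHandle, myTexture.width / 8, myTexture.height / 8, 1);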

cocos2dx: Sprite3D rotating, culling error

Hi, I'm trying to have 2 sprites with different z in a 3D world, and a camera that rotates around the center of the screen and points at it.
Even though the sprites have different z (and z-order; I don't know if this is necessary), both sprites are always visible, while I'm expecting the second sprite to be hidden by the other...
This is the HelloWorld layer init:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
this->setCameraMask((unsigned short)CameraFlag::USER2, true);
camera = Camera::createPerspective(60, (float)visibleSize.width/visibleSize.height, 1.0, 1000);
camera->setCameraFlag(CameraFlag::USER2);
camera->setPosition3D(spritePos + Vec3(-200,0,800));
camera->lookAt(spritePos, Vec3(0.0,1.0,0.0));
this->addChild(camera);
this->scheduleUpdate();
angle=0;
and this is update:
void TestScene::update(float dt)
{
    angle += 0.1;
    Size visibleSize = Director::getInstance()->getVisibleSize();
    Vec2 origin = Director::getInstance()->getVisibleOrigin();
    Vec3 spritePos = Vec3(visibleSize.width/2, visibleSize.height/2, 0);
    camera->setPosition3D(Vec3(visibleSize.width/2, visibleSize.height/2, 0) + Vec3(800*cos(angle), 0, 800*sin(angle)));
    camera->lookAt(spritePos, Vec3(0.0, 1.0, 0.0));
}
I have tried something simpler:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
even with sp3d->runAction(RotateTo::create(20, Vec3(0, 3000, 0))) the same error occurs.
Is it a cocos2dx bug?
The sprite with z=10 disappears before it is covered by the other sprite, remains hidden for a while, and when it should be completely hidden it reappears!
Have I forgotten something?
Thanks
Maybe you should check this.
_camControlNode = Node::create();
_camControlNode->setNormalizedPosition(Vec2(.5,.5));
addChild(_camControlNode);
_camNode = Node::create();
_camNode->setPositionZ(Camera::getDefaultCamera()->getPosition3D().z);
_camControlNode->addChild(_camNode);
auto sp3d = Sprite3D::create();
sp3d->setPosition(s.width/2, s.height/2);
addChild(sp3d);
auto lship = Label::create();
lship->setString("Ship");
lship->setPosition(0, 20);
sp3d->addChild(lship);
and
_lis->onTouchMoved = [this](Touch* t, Event* e) {
    float dx = t->getDelta().x;
    Vec3 rot = _camControlNode->getRotation3D();
    rot.y += dx;
    _camControlNode->setRotation3D(rot);

    Vec3 worldPos;
    _camNode->getNodeToWorldTransform().getTranslation(&worldPos);
    Camera::getDefaultCamera()->setPosition3D(worldPos);
    Camera::getDefaultCamera()->lookAt(_camControlNode->getPosition3D());
};