I am trying to get scaled vertices from a mesh after scaling the gameObject.
The gameObject shows up scaled in the editor, but if I print the mesh's vertices, they are not scaled.
gameObject.transform.localScale *= 10;
_mesh = gameObject.GetComponent<MeshFilter>().mesh;
// recalculate bounds/normals here if not already done before reading the mesh
for (int i = 0; i < _mesh.vertexCount; i++)
{
print(_mesh.vertices[i]); // Not the right scale
}
I am wondering why this happens.
I think this will work if you want the scaled points converted from local to world space:
var scale = 12f;
gameObject.transform.localScale *= scale;
_mesh = gameObject.GetComponent<MeshFilter>().mesh;
for (int i = 0; i < _mesh.vertexCount; i++)
{
print(transform.TransformPoint(_mesh.vertices[i]));
}
If you want the points scaled but still in local space, multiply the components of _mesh.vertices[i] by the components of the local scale vector, using Vector3.Scale:
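A minimal sketch of that component-wise approach (the loop and the `_mesh` variable are assumed to match the snippet above):

```csharp
// Component-wise scale: the result stays in local space and is
// unaffected by the object's rotation or world position.
Vector3 scaledLocal = Vector3.Scale(_mesh.vertices[i], transform.localScale);
print(scaledLocal);
```

Unlike TransformPoint, this ignores rotation and translation, so it only answers "how big is this vertex offset after scaling".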
Related
I'm calculating the normals of a mesh I generated with the marching cubes algorithm, but when I run it, the object looks blurry, as in the picture.
Variables:
CurrentTri is a Vector3int with the indexes of each vertex
CurrentNorm is a Vector3 with the current normal
Verts is a Vector3 array of the positions of the vertices
VertNorm is a Vector3 array of the normals of the vertices
The c# code where I calculate the normals:
// Repeated for each triangle
CurrentNorm = Vector3.Cross(Verts[CurrentTri.y] - Verts[CurrentTri.x], Verts[CurrentTri.z] - Verts[CurrentTri.x]);
VertNorm[CurrentTri.x] += CurrentNorm;
VertNorm[CurrentTri.y] += CurrentNorm;
VertNorm[CurrentTri.z] += CurrentNorm;
Normalising the normals:
for(int i = 0; i < VertNorm.Length; i++)
{
VertNorm[i] = VertNorm[i].normalized;
}
mesh.normals = VertNorm;
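For reference, Unity can also generate these averaged per-vertex normals itself once the triangle indices are assigned; a minimal sketch (here `Tris` is a hypothetical int[] of triangle indices, not a variable from the question):

```csharp
// Built-in alternative to the manual accumulation above.
// 'Tris' is an assumed int[] holding the triangle index list.
mesh.vertices = Verts;
mesh.triangles = Tris;
mesh.RecalculateNormals(); // averages face normals into vertex normals
```

Comparing its output against the manual version is a quick way to check the hand-rolled math.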
It turns out it's supposed to look like that: the averaged normals just give smooth shading. I was a complete idiot.
I'm working on a raycast-based pathfinding system. Basically, I generate points around an object, check whether that object can reach those points, and check whether those points can reach the target. The target is the green cylinder in the back of the photo. Here is my layer mask, which basically says to ignore the player as a collider/obstacle:
layerMask = Physics.DefaultRaycastLayers & ~(1 << 3);
Here is my raycasting code:
// Check if enemy can see player without any obstructions
bool CanSeeDestination(Vector3 startingPoint, Vector3 destination)
{
if(Physics.Raycast(startingPoint, destination, 50f, layerMask))
{
Debug.DrawLine(startingPoint, destination, Color.red);
return false;
} else
{
Debug.DrawLine(startingPoint, destination, Color.green);
return true;
}
}
And finally my pathfinding function:
// Raycast based pathfinding
void Pathfind()
{
List<Vector3> surroundingPoints = new List<Vector3>();
bool foundTarget = false;
// Nested loop to build surrounding points vector array
for(var i = 1; i <= 10; i++)
{
for(var k = 1; k <= 10; k++)
{
// Offset by half of max to get negative distance
int offsetI = i - 5;
int offsetK = k - 5;
surroundingPoints.Add(new Vector3(transform.localPosition.x + offsetI, stepOverHeight.y, transform.localPosition.z + offsetK));
}
}
// Loop through array of surrounding vectors
for(var m = 0; m < surroundingPoints.Count; m++)
{
// If enemy can reach this surrounding point and this surrounding point has an unobstructed path to the target
if(CanSeeDestination(transform.localPosition, surroundingPoints[m]) && CanSeeDestination(surroundingPoints[m], player.transform.position))
{
float distanceFromEnemyToTarget = Vector3.Distance(transform.position, surroundingPoints[m]);
float distanceFromTargetToPlayer = Vector3.Distance(surroundingPoints[m], player.transform.position);
float totalDistance = distanceFromEnemyToTarget + distanceFromTargetToPlayer;
// If this total path distance is shorter than current path distance set this as target
if(totalDistance < currentPathDistance)
{
currentPathDistance = totalDistance;
target = surroundingPoints[m];
foundTarget = true;
}
}
}
if (!foundTarget)
{
target = transform.position;
}
}
For some reason the raycasts trigger on the right side of the obstacle but not the left. Also if I increase the obstacle size or collider size I can eventually block the left side. Not sure why raycasts on the left are green and still passing through the collider.
I resolved the issue. The problem was in this line:
if(Physics.Raycast(startingPoint, destination, 50f, layerMask))
I should have been using Physics.Linecast to go between two points. Raycast travels along a direction vector; Linecast goes between two points. The correct code is:
if(Physics.Linecast(startingPoint, destination, layerMask))
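For comparison, a Raycast call that behaves like this Linecast would have to turn the two points into a direction plus a capped distance, roughly:

```csharp
// Equivalent Raycast: the direction is the normalized offset between
// the points, and the distance is capped at the gap between them.
Vector3 dir = (destination - startingPoint).normalized;
float dist = Vector3.Distance(startingPoint, destination);
if (Physics.Raycast(startingPoint, dir, dist, layerMask)) { /* blocked */ }
```

The original bug was passing `destination` where a direction was expected, so rays on one side happened to point somewhere harmless.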
I want to copy a selected part of a raw image to another image.
I get the start and end positions as percentages, and from those I can calculate the corresponding pixel range along the width.
How can I copy that selected part to another raw image?
Assuming it's a Texture2D, you can do the following:
Calculate the selected start/end X in texture A (call the width dX)
Create a new Texture2D (B), sized dX wide and the full height of A
Call A.GetPixels()
Iterate over the array, copying pixels into the new texture
Call Apply() on the new texture
Pseudo code:
var aPixels = aTexture.GetPixels();
var bWidth = endX - startX;
var bHeight = aTexture.height;
var bTexture = new Texture2D(bWidth, bHeight);
var bPixels = new Color[bWidth * bHeight];
for (int x = startX; x < endX; x++)
{
for (int y = 0; y < bHeight; y++)
{
var aIndex = x + y * aTexture.width;
var bIndex = (x - startX) + y * bWidth;
bPixels[bIndex] = aPixels[aIndex];
}
}
bTexture.SetPixels(bPixels);
bTexture.Apply();
Note that my code quite possibly won't work, as I'm typing this on a mobile phone.
Image processing is usually expensive on the CPU, so I don't recommend doing it in Unity.
But for your image, in this special case, I think you can crop it by changing the tiling and offset of the texture in the material.
Update:
This is an example of what I mentioned:
You can calculate the tiling and offset based on the dragged mouse position on the texture. (Check here.)
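A minimal sketch of the material-based crop (the 25%/50% numbers are made-up example values, not taken from the question):

```csharp
// Show only a horizontal band of the texture: start at 25%, span 50%.
var mat = GetComponent<Renderer>().material;
mat.mainTextureScale  = new Vector2(0.5f, 1f);   // fraction of the texture shown
mat.mainTextureOffset = new Vector2(0.25f, 0f);  // where the shown region starts
```

This never copies pixels, so it is essentially free compared to GetPixels/SetPixels, but it only changes how the one material samples the texture.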
I found this.
You can pass start coordinates plus a width and height to GetPixels():
// A field can't be declared inside a method, so mTexture lives on the class
public Texture2D mTexture;

void Start () {
Color[] c = mTexture.GetPixels (startX, startY, width, height);
Texture2D m2Texture = new Texture2D (width, height);
m2Texture.SetPixels (c);
m2Texture.Apply ();
gameObject.GetComponent<MeshRenderer> ().material.mainTexture = m2Texture;
}
I generate a 4x4 grid of squares with the code below. They all draw in the correct positions, rows and columns, on the canvas on stage.update(). But on inspection, the x,y coordinates for all sixteen of them are 0,0. Why? Does each shape have its own x,y coordinate system? If so, once I have a handle to a shape, how do I determine where it was originally drawn onto the canvas?
The EaselJS documentation is silent on the topic ;-). Maybe you had to know Flash.
var stage = new createjs.Stage("demoCanvas");
for (var i = 0; i < 4; i++) {
for (var j = 0; j < 4; j++) {
var square = new createjs.Shape();
square.graphics.drawRect(i*100, j*100, 100, 100);
console.log("Created square at " + square.x + "," + square.y);
stage.addChild(square);
}
}
You are drawing the graphics at the coordinates you want, instead of drawing them at 0,0, and moving them using x/y coordinates. If you don't set the x/y yourself, it will be 0. EaselJS does not infer the x/y or width/height based on the graphics content (more info).
Here is an updated fiddle where the graphics are all drawn at [0,0], and then positioned using x/y instead: http://jsfiddle.net/0o63ty96/
Relevant code:
square.graphics.beginStroke("red").drawRect(0,0,100,100);
square.x = i * 100;
square.y = j * 100;
I have a Texture2D in Unity, and I want to generate a color histogram based on each pixel's hue value. I've tried using GetPixels, but it's insanely slow for a 1920x1080 texture.
tex = GetComponent<GUITexture>().texture as Texture2D;
if (tex == null)
{
return null;
}
Color32[] texColors = tex.GetPixels32();
int total = texColors.Length;
int[] Harray = new int[360];
for (int i = 0; i < total; i++)
{
float H;
float S;
float V;
// Color32 channels are bytes (0-255); Color expects floats in 0-1
Color.RGBToHSV(new Color(texColors[i].r / 255f, texColors[i].g / 255f, texColors[i].b / 255f), out H, out S, out V);
// RGBToHSV returns H in the 0-1 range, so scale it into the 360 buckets
Harray[Mathf.Min(359, (int)(H * 360f))]++;
}
for (int i = 0; i < 360; i++)
{
PlotManager.Instance.PlotAdd("Hue", Harray[i]);
}
I'm thinking a shader might be able to help, but I have no experience with shader programming. Can a shader populate the hue occurrence array and return it while maintaining a good frame rate?
There's an asset on the Unity Asset Store that uses a compute shader with a RWStructuredBuffer to achieve this: https://www.assetstore.unity3d.com/en/#!/content/15699