Convert from one 2D coordinate system to another with a different (0,0) origin

I am writing a program that receives data for a 2D display in a coordinate system where (0,0) is the upper left corner, the x axis grows to the right, and the y axis grows downwards.
My graphing library is the Python library pyglet, which considers (0,0) to be the bottom left corner. The x axis grows to the right (no conversion needed), but the y axis grows upwards. Thus, I cannot pass the coordinates from the data I receive directly to my graphing library.
How can I convert the y-axis component?

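For the pyglet case in the question, flipping a single coordinate is one subtraction: y_pyglet = window_height - y_data. A minimal Python sketch, assuming the incoming data spans the same pixel dimensions as the pyglet window (window_height is an illustrative name for the window's height in pixels):

def to_pyglet(x, y, window_height):
    # top-left origin, y down  ->  bottom-left origin, y up
    return x, window_height - y

# e.g. in a 480-pixel-tall window, the data's top-left corner (0, 0)
# maps to pyglet's (0, 480):
print(to_pyglet(0, 0, 480))  # (0, 480)
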
If your graphing library has support for views, you should be able to set graphing_height to -non_graphing_height. If the view is positioned by its center, it shouldn't need to change; if it is positioned by its bottom-left corner, setting graphing_bottom_left to non_graphing_top_left + non_graphing_height should work.
This might look like this in C++ with the SFML library:
const auto& current_view = window.getView();
const auto& current_view_size = current_view.getSize();
window.setView(sf::View(current_view.getCenter(), { current_view_size.x, -current_view_size.y }));
If you are in 3D this could instead be applied to the projection matrix.
If there are no views, you should be able to recreate them with a view matrix. You should then be able to pass that view matrix into your graphics library.
If you cannot pass in the view matrix, apply it to your non-graphing transforms to get your graphing transforms. Once you have that, you will have a transformation matrix that you can pass into your graphics library.
If your graphics library does not support passing in a matrix either, you can decompose the transformed matrix into a translation, rotation and scale.
Here is an example of a C++ function, without SFML, that does this with a view from (0,0) to (200,200) in the source coordinate system:
/// transform the passed in position, rotation and scale
/// \param rotation radians
void transform(Vector2& position, float& rotation, Vector2& scale) {
    // create a transformation matrix from the position, rotation and scale
    const Matrix transform_matrix{ position, rotation, scale };
    // create an upside-down view matrix at the origin, of size 200x200
    const Vector2 view_position{ 0.f, 0.f };
    const Vector2 view_size{ 200.f, 200.f };
    const Matrix view_matrix =
        Matrix::translation({ -view_position.x, -(view_position.y + view_size.y) }) *
        Matrix::scale({ 1.f / view_size.x, -1.f / view_size.y });
    // multiply the transformation matrix with the view matrix
    const auto multiplied = transform_matrix * view_matrix;
    // update the passed in values
    position = multiplied.getTranslation();
    rotation = multiplied.getAngle();
    scale = multiplied.getScale();
}
Note that in 3D the rotation might be multiple values or a Quaternion, so make sure you have a Matrix class that supports that.
Here are the example Vector2 and Matrix classes I used for the example above, with the includes they need:
#include <cmath>
#include <vector>

struct Vector2 {
    float x;
    float y;
};

class Matrix {
public:
    /// creates a matrix from the passed in rows
    Matrix(const std::vector<std::vector<float>>& matrix) :
        matrix_{ matrix }
    {}
    /// creates a 2d transformation matrix (row-vector convention)
    /// \param angle radians
    Matrix(const Vector2& translation, const float angle, const Vector2& scale) :
        Matrix({
            { scale.x * std::cos(angle), scale.x * std::sin(angle), 0.f },
            { -scale.y * std::sin(angle), scale.y * std::cos(angle), 0.f },
            { translation.x, translation.y, 1.f }
        })
    {}
    /// creates a 2d translation matrix
    static Matrix translation(const Vector2& translation) {
        return Matrix({
            { 1.f, 0.f, 0.f },
            { 0.f, 1.f, 0.f },
            { translation.x, translation.y, 1.f }
        });
    }
    /// creates a 2d scale matrix
    static Matrix scale(const Vector2& scale) {
        return Matrix({
            { scale.x, 0.f, 0.f },
            { 0.f, scale.y, 0.f },
            { 0.f, 0.f, 1.f }
        });
    }
    /// multiplies two matrices together
    Matrix operator*(const Matrix& rhs) const {
        // get the number of rows, columns and elements
        const auto rows = matrix_.size();
        const auto columns = rhs.matrix_[0].size();
        const auto elements = rhs.matrix_.size();
        // create a new matrix of the correct size, filled with zeroes
        Matrix new_matrix(
            std::vector<std::vector<float>>(
                rows, std::vector<float>(columns, 0.f)
            )
        );
        // go through each row of the new matrix
        for (size_t row = 0; row < rows; ++row) {
            // go through each column of the new matrix
            for (size_t column = 0; column < columns; ++column) {
                // set the element to the dot product of this matrix's row
                // and the rhs matrix's column
                for (size_t element = 0; element < elements; ++element) {
                    new_matrix.matrix_[row][column] +=
                        matrix_[row][element] * rhs.matrix_[element][column];
                }
            }
        }
        return new_matrix;
    }
    /// gets the translation from the matrix
    Vector2 getTranslation() const {
        return { matrix_[2][0], matrix_[2][1] };
    }
    /// gets the scale from the matrix (sign recovered from the diagonal,
    /// which is enough for axis flips like the view above)
    Vector2 getScale() const {
        Vector2 scale{
            std::sqrt(std::pow(matrix_[0][0], 2.f) + std::pow(matrix_[0][1], 2.f)),
            std::sqrt(std::pow(matrix_[1][0], 2.f) + std::pow(matrix_[1][1], 2.f))
        };
        if (matrix_[0][0] < 0) {
            scale.x = -scale.x;
        }
        if (matrix_[1][1] < 0) {
            scale.y = -scale.y;
        }
        return scale;
    }
    /// gets the angle from the matrix (assumes a positive x scale)
    /// \returns radians
    float getAngle() const {
        return std::atan2(matrix_[0][1], matrix_[0][0]);
    }
private:
    std::vector<std::vector<float>> matrix_;
};
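The flip-and-decompose round trip is easy to sanity-check outside C++. Here is a small NumPy sketch of the same row-vector construction and decomposition (make_transform and the variable names are my own illustration, not part of the code above):

import numpy as np

def make_transform(tx, ty, angle, sx, sy):
    # row-vector convention: scaled/rotated basis vectors in the first
    # two rows, translation in the last row
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[ sx * c, sx * s, 0.0],
                     [-sy * s, sy * c, 0.0],
                     [ tx,     ty,     1.0]])

M = make_transform(10.0, 20.0, 0.3, 2.0, 1.5)
translation = M[2, :2]                                        # -> [10. 20.]
scale = [np.linalg.norm(M[0, :2]), np.linalg.norm(M[1, :2])]  # -> [2.0, 1.5]
angle = np.arctan2(M[0, 1], M[0, 0])                          # -> 0.3
# (sign recovery for flipped axes works as in getScale above)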

Related

How can I get smooth random movement (2D Unity)?

I'm using Random.onUnitSphere to simulate bubbles floating around jostling for position. It works well, although the movement is too "jerky". I'd like to create the same effect but slow it down and make smoother random movements. Is there any way I can easily achieve this? Here's my code:
private void Update()
{
    if (floaty == true)
    {
        rb.AddRelativeForce(Random.onUnitSphere * speed);
        speed = 0.06f;
    }
}
A Perlin noise random walker should work:
//There are probably better ways to do this.
Vector3 RandomSmoothPointOnUnitSphere(float time)
{
    //Get the x of the vector
    float x = Mathf.PerlinNoise(time, /* your x seed */);
    //Get the y of the vector
    float y = Mathf.PerlinNoise(time, /* your y seed */);
    //Get the z of the vector
    float z = Mathf.PerlinNoise(time, /* your z seed */);
    //Create a Vector3
    Vector3 vector = new Vector3(x, y, z);
    //Normalize the vector and return it
    return Vector3.Normalize(vector);
}
And in the Update function:
if (floaty)
{
    //Get the vector
    Vector3 movementvector = RandomSmoothPointOnUnitSphere(Time.time);
    //You can also use CharacterController.Move()
    transform.Translate(movementvector * Time.deltaTime);
}
I should also mention that this approach may not work with Rigidbody.AddForce(); I don't usually use Unity's default physics, so it might. Regardless, the idea itself doesn't change.
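For intuition outside Unity, here is a self-contained Python sketch of the same idea; it substitutes a simple smoothstep-interpolated 1D value noise for Mathf.PerlinNoise (the noise implementation is a stand-in of mine, not Unity's):

import math
import random

def value_noise_1d(t, seed=0):
    # deterministic pseudo-random value at each integer lattice point
    def lattice(i):
        return random.Random(i * 1000003 + seed).uniform(-1.0, 1.0)
    i0 = math.floor(t)
    f = t - i0
    u = f * f * (3.0 - 2.0 * f)  # smoothstep easing between lattice points
    return lattice(i0) * (1.0 - u) + lattice(i0 + 1) * u

def smooth_random_direction(t):
    # three independently seeded noise channels, normalized to a unit vector
    x = value_noise_1d(t, seed=1)
    y = value_noise_1d(t, seed=2)
    z = value_noise_1d(t, seed=3)
    n = math.sqrt(x * x + y * y + z * z) or 1.0
    return (x / n, y / n, z / n)

# the direction drifts smoothly as t advances in small steps
for step in range(5):
    print(smooth_random_direction(step * 0.1))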

UV mapping a procedural cylinder in Unity

I have a method that creates a cylinder based on variables that contain the height, radius and number of sides.
The mesh generates fine with any number of sides; however, I am really struggling to understand how it should be UV mapped.
Each side of the cylinder is a quad made up of two triangles.
The triangles share vertices.
I think the placement of the UV code is correct; however, I have no idea what values would be fitting.
Right now the texture is stretched/crooked on all sides of the mesh.
Please help me understand this.
private void _CreateSegmentSides(float height)
{
    if (m_Sides > 2) {
        float angleStep = 360.0f / (float) m_Sides;
        BranchSegment seg = new BranchSegment(m_NextID++);
        Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
        int index_tr = 0, index_tl = 3, index_br = 2, index_bl = 1;
        float u0 = (float) 1 / (float) m_Sides;
        int max = m_Sides - 1;
        // Make first triangles.
        seg.vertexes.Add(rotation * (new Vector3(m_Radius, height, 0f)));
        seg.vertexes.Add(rotation * (new Vector3(m_Radius, 0f, 0f)));
        seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 1]);
        seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 3]);
        // Add triangle indices.
        seg.triangles.Add(index_tr); // 0
        seg.triangles.Add(index_bl); // 1
        seg.triangles.Add(index_br); // 2
        seg.triangles.Add(index_tr); // 0
        seg.triangles.Add(index_br); // 2
        seg.triangles.Add(index_tl); // 3
        seg.uv.Add(new Vector2(0, 0));
        seg.uv.Add(new Vector2(0, u0));
        seg.uv.Add(new Vector2(u0, u0));
        seg.uv.Add(new Vector2(u0, 0));
        for (int i = 0; i < max; i++)
        {
            seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 2]); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 1); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 2); // shared
            seg.triangles.Add(seg.vertexes.Count - 3); // shared
            seg.vertexes.Add(rotation * seg.vertexes[seg.vertexes.Count - 2]); // new vertex
            seg.triangles.Add(seg.vertexes.Count - 3); // shared
            seg.triangles.Add(seg.vertexes.Count - 2); // shared
            seg.triangles.Add(seg.vertexes.Count - 1); // new vertex
            // How should I set up the variables for this part??
            // I know they are not supposed to be zero.
            if (i % 2 == 0) {
                seg.uv.Add(new Vector2(0, 0));
                seg.uv.Add(new Vector2(0, u0));
            } else {
                seg.uv.Add(new Vector2(u0, u0));
                seg.uv.Add(new Vector2(u0, 0));
            }
        }
        m_Segments.Add(seg);
    }
    else
    {
        Debug.LogWarning("Too few sides in the segment.");
    }
}
Edit: Added pictures
This is what the cylinder looks like (one-sided triangles):
This is what the same shader should look like (on a flat plane):
Edit 2: Wireframe pics
So your wireframe is okay (you linked only the wireframe, but I asked for a shaded wireframe; still, this works).
The reason your texture looks like this is that the image is stretched across the whole height: it might look good on a 1 m tall cylinder but will look stretched on a 1000 m tall one, so you actually need to stretch the UV map dynamically.
Example for a 1 m tall cylinder - the texture is fine because it maps to a 1x1 area:
Example for a 2 m tall cylinder - the texture is stretched to double the length over a 2x1 area:
So if you always generate cylinders of the same height, you can just adjust it inside Unity: in the texture properties it's called tiling. Increase the x or y tiling of your texture, and don't forget to set the texture to repeat.
Also, your cylinder cap should look like this (it is not a must-have thing, but still):
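If you would rather bake the stretch into the mesh instead of the material, scale the v coordinate with the cylinder's height when generating the UVs. A minimal Python sketch of the idea (n_sides, height and tiles_per_unit are illustrative names of mine; the C# code would do the same per vertex):

def cylinder_side_uvs(n_sides, height, tiles_per_unit=1.0):
    # u walks once around the circumference; v repeats the texture every
    # 1 / tiles_per_unit world units of height, so a 2 m tall cylinder
    # gets twice the tiling of a 1 m one instead of stretching
    v_max = height * tiles_per_unit
    uvs = []
    for side in range(n_sides + 1):  # +1 duplicates the seam column
        u = side / n_sides
        uvs.append((u, 0.0))    # bottom ring vertex
        uvs.append((u, v_max))  # top ring vertex
    return uvs

print(cylinder_side_uvs(4, 2.0))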

How to manipulate a shaped area of terrain in runtime - Unity 3D

My game has a drawing tool - a looping line renderer that is used as a marker to manipulate an area of the terrain in the shape of the line. This all happens in runtime as soon as the player stops drawing the line.
So far I have managed to raise the terrain vertices that match the coordinates of the line renderer's points, but I have difficulties with raising the points that fall inside the marker's shape. Here is an image describing what I currently have:
I tried using the "Polygon Fill Algorithm" (http://alienryderflex.com/polygon_fill/), but raising the terrain vertices one line at a time is too resource-intensive (even when the algorithm is narrowed to a rectangle that surrounds only the marked area). Also, my marker's outline points have gaps between them, meaning I would need to add a radius to the line that raises the terrain, and that might leave the result sloppy.
Maybe I should discard the drawing mechanism and use a mesh with a mesh collider as the marker?
Any ideas are appreciated on how to get the terrain manipulated in the exact shape as the marker.
Current code:
I used this script to create the line - the first and the last line points have the same coordinates.
The code used to manipulate the terrain is currently triggered by clicking a GUI button:
using System;
using System.Collections;
using UnityEngine;

public class changeTerrainHeight_lineMarker : MonoBehaviour
{
    public Terrain TerrainMain;
    public LineRenderer line;

    void OnGUI()
    {
        //Get the terrain heightmap width and height.
        int xRes = TerrainMain.terrainData.heightmapWidth;
        int yRes = TerrainMain.terrainData.heightmapHeight;
        //GetHeights - gets the heightmap points of the terrain. Store them in an array.
        float[,] heights = TerrainMain.terrainData.GetHeights(0, 0, xRes, yRes);
        if (GUI.Button(new Rect(30, 30, 200, 30), "Line points"))
        {
            /* Store the line positions in the array "positions" */
            Vector3[] positions = new Vector3[line.positionCount];
            line.GetPositions(positions);
            /* Height to apply to the affected terrain vertices */
            float height = 0.05f;
            for (int i = 0; i < line.positionCount; i++)
            {
                /* Assign height data */
                heights[Mathf.RoundToInt(positions[i].z), Mathf.RoundToInt(positions[i].x)] = height;
            }
            //SetHeights to change the terrain height.
            TerrainMain.terrainData.SetHeights(0, 0, heights);
        }
    }
}
I got to the solution thanks to Siim's personal help, and thanks to the article "How can I determine whether a 2D Point is within a Polygon?".
The end result is visualized here:
First the code, then the explanation:
using System;
using System.Collections;
using UnityEngine;

public class changeTerrainHeight_lineMarker : MonoBehaviour
{
    public Terrain TerrainMain;
    public LineRenderer line;

    void OnGUI()
    {
        //Get the terrain heightmap width and height.
        int xRes = TerrainMain.terrainData.heightmapWidth;
        int yRes = TerrainMain.terrainData.heightmapHeight;
        //GetHeights - gets the heightmap points of the terrain. Store them in an array.
        float[,] heights = TerrainMain.terrainData.GetHeights(0, 0, xRes, yRes);
        //Trigger line area raiser
        if (GUI.Button(new Rect(30, 30, 200, 30), "Line fill"))
        {
            /* Store the line positions in the array "positions" */
            Vector3[] positions = new Vector3[line.positionCount];
            line.GetPositions(positions);
            float height = 0.10f; // height to apply to the affected terrain vertices
            /* Find the rectangle the shape is in! Its sides are based on the top-, right-, bottom- and left-most vertices. */
            float ftop = float.NegativeInfinity;
            float fright = float.NegativeInfinity;
            float fbottom = Mathf.Infinity;
            float fleft = Mathf.Infinity;
            for (int i = 0; i < line.positionCount; i++)
            {
                //find the outermost points
                if (ftop < positions[i].z)
                {
                    ftop = positions[i].z;
                }
                if (fright < positions[i].x)
                {
                    fright = positions[i].x;
                }
                if (fbottom > positions[i].z)
                {
                    fbottom = positions[i].z;
                }
                if (fleft > positions[i].x)
                {
                    fleft = positions[i].x;
                }
            }
            int top = Mathf.RoundToInt(ftop);
            int right = Mathf.RoundToInt(fright);
            int bottom = Mathf.RoundToInt(fbottom);
            int left = Mathf.RoundToInt(fleft);
            int terrainXmax = right - left; // width of the bounding rectangle
            int terrainZmax = top - bottom; // height of the bounding rectangle
            float[,] shapeHeights = TerrainMain.terrainData.GetHeights(left, bottom, terrainXmax, terrainZmax);
            Vector2 point; //Vector2 point to test against the shape
            /* Loop through all points in the rectangle surrounding the shape */
            for (int i = 0; i < terrainZmax; i++)
            {
                point.y = i + bottom; //Add the offset so the element matches the position of the line
                for (int j = 0; j < terrainXmax; j++)
                {
                    point.x = j + left; //Add the offset so the element matches the position of the line
                    if (InsidePolygon(point, bottom))
                    {
                        shapeHeights[i, j] = height; // set the height value of the terrain vertex
                    }
                }
            }
            //SetHeights to change the terrain height.
            TerrainMain.terrainData.SetHeightsDelayLOD(left, bottom, shapeHeights);
            TerrainMain.ApplyDelayedHeightmapModification();
        }
    }
    //Checks if the given vertex is inside the shape.
    bool InsidePolygon(Vector2 p, int terrainZmax)
    {
        // Assign the points that define the outline of the shape
        Vector3[] positions = new Vector3[line.positionCount];
        line.GetPositions(positions);
        int count = 0;
        Vector2 p1, p2;
        int n = positions.Length;
        // Walk the lines that define the shape
        for (int i = 0; i < n; i++)
        {
            p1.y = positions[i].z;
            p1.x = positions[i].x;
            if (i != n - 1)
            {
                p2.y = positions[i + 1].z;
                p2.x = positions[i + 1].x;
            }
            else
            {
                p2.y = positions[0].z;
                p2.x = positions[0].x;
            }
            // check if the ray cast from point p intersects this side of the shape
            if (LinesIntersect(p1, p2, p, terrainZmax))
            {
                count++;
            }
        }
        // the point is inside the shape when the number of intersections is odd
        return count % 2 == 1;
    }

    // Checks if two line segments intersect with each other
    bool LinesIntersect(Vector2 A, Vector2 B, Vector2 C, int terrainZmax)
    {
        Vector2 D = new Vector2(C.x, terrainZmax);
        Vector2 CmP = new Vector2(C.x - A.x, C.y - A.y);
        Vector2 r = new Vector2(B.x - A.x, B.y - A.y);
        Vector2 s = new Vector2(D.x - C.x, D.y - C.y);
        float CmPxr = CmP.x * r.y - CmP.y * r.x;
        float CmPxs = CmP.x * s.y - CmP.y * s.x;
        float rxs = r.x * s.y - r.y * s.x;
        if (CmPxr == 0f)
        {
            // Lines are collinear, and so intersect if they have any overlap
            return ((C.x - A.x < 0f) != (C.x - B.x < 0f))
                || ((C.y - A.y < 0f) != (C.y - B.y < 0f));
        }
        if (rxs == 0f)
            return false; // Lines are parallel.
        float rxsr = 1f / rxs;
        float t = CmPxs * rxsr;
        float u = CmPxr * rxsr;
        return (t >= 0f) && (t <= 1f) && (u >= 0f) && (u <= 1f);
    }
}
The method used fills the shape one line at a time - "the ray casting method". It turns out that this method only starts taking noticeable resources if the given shape has a lot of sides. (A side of the shape is a line that connects two points in the outline of the shape.)
When I posted this question, my Line Renderer had 134 points defining the line, which also means the shape has the same number of sides to check against each ray cast.
When I narrowed the number of points down to 42, the method got fast enough, and the shape lost almost no detail.
Furthermore, I am planning to use some methods to make the contours smoother, so the shape can be defined with even fewer points.
In short, you need these steps to get to the result (a compact sketch of the inside test follows the list):
1. Create the outline of the shape;
2. Find the 4 points that mark the bounding box around the shape;
3. Ray cast the box one line at a time;
4. Count how many times each ray intersects the sides of the shape - the points with an odd count are located inside the shape;
5. Assign your attributes to all of the points that were found to be inside the shape.
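Language-agnostic, the inside test boils down to very little code. A minimal Python sketch of the even-odd ray cast, assuming the outline is a closed polygon given as (x, z) pairs:

def inside_polygon(px, pz, poly):
    # cast a horizontal ray from (px, pz) and count edge crossings;
    # an odd count means the point is inside (even-odd rule)
    inside = False
    n = len(poly)
    for i in range(n):
        x1, z1 = poly[i]
        x2, z2 = poly[(i + 1) % n]  # wrap around to close the outline
        if (z1 > pz) != (z2 > pz):  # edge straddles the ray's z level
            x_cross = x1 + (pz - z1) * (x2 - x1) / (z2 - z1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_polygon(5, 5, square))   # True
print(inside_polygon(15, 5, square))  # False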

EaselJS shape x,y properties confusion

I generate a 4x4 grid of squares with the code below. They all draw in the correct position, in rows and columns, on the canvas after stage.update(). But on inspection, the x,y coordinates for all sixteen of them are 0,0. Why? Does each shape have its own x,y coordinate system? If so, if I get a handle to a shape, how do I determine where it was originally drawn onto the canvas?
The EaselJS documentation is silent on the topic ;-). Maybe you had to know Flash.
var stage = new createjs.Stage("demoCanvas");
for (var i = 0; i < 4; i++) {
    for (var j = 0; j < 4; j++) {
        var square = new createjs.Shape();
        square.graphics.drawRect(i * 100, j * 100, 100, 100);
        console.log("Created square " + square.x + "," + square.y);
        stage.addChild(square);
    }
}
You are drawing the graphics at the final coordinates instead of drawing them at 0,0 and then moving the shapes with their x/y properties. If you don't set x/y yourself, they will be 0. EaselJS does not infer the x/y or width/height from the graphics content (more info).
Here is an updated fiddle where the graphics are all drawn at [0,0], and then positioned using x/y instead: http://jsfiddle.net/0o63ty96/
Relevant code:
square.graphics.beginStroke("red").drawRect(0,0,100,100);
square.x = i * 100;
square.y = j * 100;
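The same pattern - geometry in local coordinates plus a per-object position - is what most scene graphs do. A tiny language-neutral Python sketch of the idea (the Shape class here is illustrative, not the EaselJS API):

class Shape:
    def __init__(self):
        self.x = 0.0  # position of the shape's local origin on the canvas
        self.y = 0.0
        self.rect = None  # geometry, in local coordinates

square = Shape()
square.rect = (0, 0, 100, 100)  # drawn at the local origin
square.x, square.y = 100, 200   # placed on the canvas
# where the rectangle's corner ends up on the canvas:
print(square.x + square.rect[0], square.y + square.rect[1])  # 100.0 200.0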

Vertex position relative to normal

In a surface shader, given the world's up axis (and the others too), a world-space position and a normal in world space, how can we rotate the world-space position into the space of the normal?
That is, given an up vector and a non-orthogonal target up vector, how can we transform the position by rotating its up vector?
I need this so I can get the vertex position only affected by the object's rotation matrix, which I don't have access to.
Here's a graphical visualization of what I want to do:
Up is the world up vector
Target is the world space normal
Pos is arbitrary
The diagram is bidimensional, but I need to solve this for a 3D space.
Looks like you're trying to rotate pos by the same rotation that would transform up to new_up.
Using the rotation matrix found here, we can rotate pos using the following code. This will work either in the surface function or a supplementary vertex function, depending on your application:
// Our 3 vectors
float3 pos;
float3 new_up;
float3 up = float3(0, 1, 0);

// Build the rotation matrix using the notation from the link above
float3 v = cross(up, new_up);
float s = length(v);       // Sine of the angle
float c = dot(up, new_up); // Cosine of the angle
// The skew-symmetric cross-product matrix of v
float3x3 VX = float3x3(
    0, -v.z, v.y,
    v.z, 0, -v.x,
    -v.y, v.x, 0
);
// The identity matrix
float3x3 I = float3x3(
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
);
// The rotation matrix! YAY!
float3x3 R = I + VX + mul(VX, VX) * (1 - c) / pow(s, 2);
// Finally we rotate
float3 new_pos = mul(R, pos);
This is assuming that new_up is normalized. Note also that (1 - c) / pow(s, 2) degenerates when up and new_up are parallel (s = 0); since s*s = 1 - c*c, it can be rewritten as 1 / (1 + c), which is safe everywhere except the exact opposite case new_up == -up.
If the "target up normal" is a constant, the calculation of R could (and should) only happen once per frame. I'd recommend doing it on the CPU side and passing it into the shader as a variable. Calculating it for every vertex/fragment is costly, consider what it is you actually need.
If your pos is a four-component vector, just do the above with the first three elements; the fourth element can remain unchanged (it doesn't really mean anything in this context anyway).
I'm away from a machine where I can run shader code, so if I made any syntactical mistakes in the above, please forgive me.
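For sanity-checking the construction outside a shader, the same math is a few lines of NumPy (rotation_between is my name for it; the 1 / (1 + c) simplification is the one noted above):

import numpy as np

def rotation_between(up, new_up):
    # rotation matrix taking unit vector `up` onto unit vector `new_up`
    v = np.cross(up, new_up)
    c = np.dot(up, new_up)  # cosine of the angle
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])  # skew-symmetric cross-product matrix
    # I + [v]x + [v]x^2 * (1 - c) / s^2, with (1 - c) / s^2 == 1 / (1 + c)
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

up = np.array([0.0, 1.0, 0.0])
new_up = np.array([0.0, 0.0, 1.0])
R = rotation_between(up, new_up)
print(R @ up)  # ~ [0. 0. 1.], i.e. new_up
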
Not tested, but you should be able to input a starting point and an axis. Then all you do is change procession, a normalized (0-1) float along the circumference, and your point will update accordingly.
using UnityEngine;
using System.Collections;

public class Follower : MonoBehaviour {

    Vector3 point;
    Vector3 origin = Vector3.zero;
    Vector3 axis = Vector3.forward;
    float distance;
    Vector3 direction;
    float procession = 0f; // < normalized

    void Update() {
        Vector3 offset = point - origin;
        distance = offset.magnitude;
        direction = offset.normalized;
        // a full procession (1.0) is one full turn around the axis
        float angle = (procession % 1f) * 2f * Mathf.PI;
        direction = Quaternion.AngleAxis(Mathf.Rad2Deg * angle, axis) * direction;
        Ray ray = new Ray(origin, direction);
        point = ray.GetPoint(distance);
    }
}