Bevy: How to Render a Triangle (or Polygon) in 2D

In the Bevy examples, breakout only uses rectangles, there are examples of loading sprites, and there is an example of loading a 3D mesh. In 2D I'd like to draw a triangle (or other polygons), but I haven't been able to figure it out from the docs.

Currently there is no support for 'drawing' in 2D.
This is being looked at, but is not there yet.

Not sure if that is still relevant, but I ran into the same issue today, and here is how I was able to draw a simple triangle:
fn create_triangle() -> Mesh {
    let mut mesh = Mesh::new(PrimitiveTopology::TriangleList);
    mesh.set_attribute(
        Mesh::ATTRIBUTE_POSITION,
        vec![[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]],
    );
    mesh.set_attribute(Mesh::ATTRIBUTE_COLOR, vec![[0.0, 0.0, 0.0, 1.0]; 3]);
    mesh.set_indices(Some(Indices::U32(vec![0, 1, 2])));
    mesh
}
This will create a triangle mesh.
For me, the tricky part was figuring out that by default the triangle is drawn transparent, so an alpha value has to be set for the vertices.
Later you can use this mesh-generating function in your system like this:
fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<ColorMaterial>>,
) {
    commands.spawn_bundle(OrthographicCameraBundle::new_2d());
    commands.spawn_bundle(MaterialMesh2dBundle {
        mesh: meshes.add(create_triangle()).into(),
        transform: Transform::default().with_scale(Vec3::splat(128.)),
        material: materials.add(ColorMaterial::from(Color::PURPLE)),
        ..Default::default()
    });
}
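For completeness, here is a minimal sketch of the wiring needed to run this setup system, assuming a Bevy version around 0.6/0.7 (where spawn_bundle, OrthographicCameraBundle and MaterialMesh2dBundle exist); the exact import paths may differ slightly between versions:

use bevy::prelude::*;
use bevy::render::mesh::Indices;
use bevy::render::render_resource::PrimitiveTopology;
use bevy::sprite::MaterialMesh2dBundle;

fn main() {
    // Register the setup system above as a startup system and run the app.
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(setup)
        .run();
}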

Related

Cannot display QImage correctly on QQuickPaintedItem when providing a world transform matrix

In a Qt Quick project, I derived a custom class from QQuickPaintedItem and mapped the screen coordinate system to a Cartesian coordinate system by providing a transform matrix. Now I want to display a PNG on the custom class with QPainter::drawImage, but the y coordinate of the image is inverted. How can I fix it? Thanks!
Below is the code snippet:
void DrawArea::paint(QPainter *painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);

    // Map the 800x600 window onto a Cartesian system from -5 to 5, with y pointing up.
    QTransform transform;
    transform.setMatrix(800.0/10.0, 0.0,         0.0,
                        0.0,        -600.0/10.0, 0.0,
                        400,        300,         1.0);
    painter->setWorldTransform(transform);

    painter->drawImage(QRectF(0, 0, 3, 3), m_image,
                       QRectF(0, 0, m_image.width(), m_image.height()));
}
The window size is 800x600, and the Cartesian coordinates run from -5 to 5 on both x and y.
The y coordinate is inverted because of -600.0/10.0. If I remove the minus sign and use 600.0/10.0, the image is displayed correctly, except that it then extends below the y=0 axis of the Cartesian coordinate system.
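(For reference, with the numbers above the transform maps a Cartesian point (x, y) to the pixel (400 + 80*x, 300 - 60*y), so the origin lands at the window centre and +y points up; the image data itself, however, is still laid out with its y axis pointing down, which is presumably why it comes out mirrored under this transform.)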

RealityKit – Updating Entity's translation returns unexpected values

This little method that I wrote changes spotlight1's position to an unexpected value.
If I understand correctly, the setPosition method should set the spotlight's translation relative to the tv's position:
TV's translation: [0.0, 0.0, -5.0]
setPosition to [0.0, 5.0, 0.5] relative to Tv's translation.
So:
[0.0 + 0, 0.0 + 5, -5.0 + 0.5] = [0.0, 5.0, -4.5]
But what I get is:
[0.0, 0.9999994, -4.9]
Am I missing some important information here?
func loadLights() {
    arView.scene.addAnchor(lightAnchor)
    lightAnchor.addChild(spotlight1)

    print(tv?.position)    // 0.0, 0.0, -5.0
    spotlight1.setPosition([0, 5, 0.5], relativeTo: tv)

    if let tv = tv {
        spotlight1.look(at: tv.position,
                      from: spotlight1.position,
                relativeTo: nil)
    }
    print(spotlight1.position)    // 0.0, 0.99999994, -4.99
}
Reference Coordinate Frame
As paradoxical as it may sound, RealityKit did everything right. You need to take the frame of reference (the frame of your tv model) into account. As far as I understand, you reduced the scale of the tv model by five times, am I right? The reference entity's transform matters.
In other words, you've scaled down the local coordinate frame (a.k.a. local coordinate system) of the TV model that you're trying to position spotlight1 relative to.
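A minimal sketch of the arithmetic, assuming the tv model's uniform scale is 0.2 (i.e. 1/5): with relativeTo: tv, the offset [0, 5, 0.5] is interpreted in tv's local frame, so in world space it becomes [0 * 0.2, 5 * 0.2, 0.5 * 0.2] = [0, 1, 0.1]; added to tv's world translation [0, 0, -5], that gives [0, 1, -4.9], which matches the printed value.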
About relativeTo parameter
Read this post and this post to explore possible issues.

Swift: What matrix should be used to convert a 3D point to 2D in ARKit/SceneKit?

I am trying to use the ARCamera matrix to convert a 3D point to 2D in ARKit/SceneKit. Previously, I used projectPoint to get the projected x and y coordinates, which worked fine. However, the app slowed down significantly and would crash when appending long recordings.
So I turned to another approach: using the ARCamera parameters to do the conversion on my own. The Apple documentation for projectionMatrix did not give much, so I looked into the theory behind projection matrices (The Perspective and Orthographic Projection Matrix, and the Metal Tutorial). From my understanding, when we have a 3D point P = (x, y, z), in theory we should be able to get the 2D point simply as P'(2D) = P(3D) * projectionMatrix.
I assumed that would be the case, so I did:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let arCamera = session.currentFrame?.camera else { return }

    // intrinsics: a matrix that converts between the 2D camera plane and 3D world coordinate space.
    // projectionMatrix: a transform matrix appropriate for rendering 3D content to match the image captured by the camera.
    print("ARCamera ProjectionMatrix = \(arCamera.projectionMatrix)")
    print("ARCamera Intrinsics = \(arCamera.intrinsics)")
}
I am able to get the projection matrix and intrinsics (I even tried the intrinsics to see whether they change), but they are the same for every frame.
ARCamera ProjectionMatrix = simd_float4x4([[1.774035, 0.0, 0.0, 0.0], [0.0, 2.36538, 0.0, 0.0], [-0.0011034012, 0.00073593855, -0.99999976, -1.0], [0.0, 0.0, -0.0009999998, 0.0]])
ARCamera Intrinsics = simd_float3x3([[1277.3052, 0.0, 0.0], [0.0, 1277.3052, 0.0], [720.29443, 539.8974, 1.0]])...
I am not too sure I understand what is happening here, as I expected the projection matrix to be different for each frame. Can someone explain the theory behind the projection matrix in SceneKit/ARKit and validate my approach? Am I using the right matrix, or am I missing something in the code?
Thank you so much in advance!
You'd likely need to use the camera's transform matrix as well, as this is what changes between frames as the user moves the real-world camera around and the virtual camera's transform is updated to best match it. Composing that together with the projection matrix should allow you to get into screen space.
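As a rough sketch of that composition (my own notation, not verified against ARKit's exact conventions): with column vectors, clip = projectionMatrix * viewMatrix * [x, y, z, 1], where viewMatrix is the inverse of the camera's transform (ARCamera also exposes it via viewMatrix(for:)). Then ndc = clip.xyz / clip.w, and the pixel position is roughly screenX = (ndc.x + 1) / 2 * viewportWidth and screenY = (1 - ndc.y) / 2 * viewportHeight. The projection matrix encodes only the lens/frustum, which is why it barely changes between frames; the per-frame motion lives in the camera transform.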

Flutter: convert 3D points to 2D points

I have 3D points from an OBJ file. I want to be able to select a point, say v -0.822220 0.216242 -0.025730, overlay it with a container, and save the point. (I have a vehicle 3D OBJ; I want to be able to select, say, the driver's door and save the selected point, maybe a door handle.)
sample points:
v 0.822220 0.216242 -0.025730
v -0.822220 0.216242 -0.025730
v 0.811726 0.220845 0.029668
v -0.811726 0.220845 0.029668
v 0.777874 0.214472 0.075458
v -0.777874 0.214472 0.075458
v 0.724172 0.189587 0.073470
v -0.724172 0.189587 0.073470
v 0.704111 0.180226 0.027508
What I have achieved:
return new GestureDetector(
  onTapDown: (TapDownDetails details) => onTapDown(context, details),
  child: new Stack(fit: StackFit.expand, children: <Widget>[
    Object3D(...),
    new Positioned(
      child: new Container(color: Colors.red),
      left: posx,
      top: posy,
    )
  ]),
);

void onTapDown(BuildContext context, TapDownDetails details) {
  print('${details.globalPosition}');
  final RenderBox box = context.findRenderObject();
  final Offset localOffset = box.globalToLocal(details.globalPosition);
  setState(() {
    posx = localOffset.dx;
    posy = localOffset.dy;
  });
}
I got a suggestion to convert the points to 2D and use the 2D points to overlay the container. How do I convert the 3D points to 2D points?
Is there a better way of doing this?
I'm using the flutter_3d_obj package.
(Disclaimer: 3D graphics in Dart/Flutter are extremely experimental. Flutter does not provide any 3D rendering context to draw 3D objects onto, and any 3D rendering packages such as flutter_3d_obj are A) software-based and therefore very slow, and B) extremely limited in feature set [i.e. lacking lighting, shading, normals, texturing, etc.]. As such, it's not recommended to try and draw 3D objects in Flutter directly. The recommendation is to either use something like flare to replicate the 3D effect using 2D animations or to use something like the Unity3D Widget package to draw 3D graphics on a non-Flutter canvas.)
The conversion from a point in a 3D space to a 2D plane is called a Projection Transformation. This transformation is the basis of all "cameras" in 3D software from simple games to 3D animated Hollywood films. There are quite a few excellent write-ups on how a projection transformation works (a Google search brings up this one), but an overly simplified explanation is as follows.
(Like any other transformation, this will require linear algebra and matrix multiplication. Fortunately, Dart has a matrix math package in vector_math.)
There are two general types of projection transformations: perspective and orthogonal.
Orthogonal is the easiest to conceptualize and implement, as it's just a flat conversion from a point in 3D space to the place on the plane that is closest to that point. It's literally just stripping the Z coordinate off of the 3D point and using the X and Y coordinates as your new 2D point:
import 'package:vector_math/vector_math.dart';

Vector2 transformPointOrtho(Vector3 input) {
  final ortho = Matrix4(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 0,
  );
  return (ortho * input).xy;
}
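As a quick sanity check, feeding the first sample point from the question into this, transformPointOrtho(Vector3(0.822220, 0.216242, -0.025730)), simply gives back (0.822220, 0.216242) as the 2D point.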
Perspective is more complex, as it also takes perspective (a.k.a. field-of-view angles) into account. As such, there are some parameters that go into creating the transformation matrix:
n = near clipping plane
f = far clipping plane
S = a representation of the vertical viewing angle, derived from the field of view as S = 1 / tan((fov / 2) * (π / 180)), where fov is the field of view (FOV) angle in degrees. (To use radians, omit the "* (π/180)" part.)
import 'dart:math';
import 'package:vector_math/vector_math.dart';

Vector2 transformPointPersp(Vector3 input, double fovDeg, double nearClip, double farClip) {
  final s = 1 / (tan((fovDeg / 2) * (pi / 180)));
  final fdfn = -farClip / (farClip - nearClip);
  final fndfn = -(farClip * nearClip) / (farClip - nearClip);
  // Note: Matrix4's constructor takes its arguments in column-major order.
  final persp = Matrix4(
    s, 0, 0, 0,
    0, s, 0, 0,
    0, 0, fdfn, -1,
    0, 0, fndfn, 0,
  );
  // Transform as a homogeneous point, then do the perspective divide by w.
  final clip = persp.transform(Vector4(input.x, input.y, input.z, 1));
  return Vector2(clip.x / clip.w, clip.y / clip.w);
}
Obviously, this is an overly simplistic explanation and doesn't take into account factors such as camera position/rotation. This is also the simplest way to form the transformation matrices, and not necessarily the best. For further reading, I highly suggest you look at various linear algebra and low-level 3D rendering tutorials (such as the one linked above).
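One more note for the overlay use case in the question: assuming the projected point ends up in the usual -1..1 normalized range, it still has to be mapped to widget coordinates before it can drive the Positioned widget, roughly posx = (p.x + 1) / 2 * widgetWidth and posy = (1 - p.y) / 2 * widgetHeight (y is flipped because screen coordinates grow downwards).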

Why do vertices of a quad and the localScale of the quad not match in Unity?

I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible, given that the vertices make a square but the localScale does not?
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a value that your mesh's size is multiplied by along each axis (x, y, z). A scale of 1 is the default size, a scale of 2 is double the size, and so on. Your localSpace coordinates will then be multiplied by this scale.
Say a localSpace coordinate is (1, 0, 2) and the scale is (3, 1, 3); the result is then (1*3, 0*1, 2*3) = (3, 0, 6).
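Tying that back to the numbers in the question (ignoring position and rotation): the vertex (0.5, 0.5, 0) multiplied by the localScale (6.4, 4.8, 0) ends up at (0.5*6.4, 0.5*4.8, 0*0) = (3.2, 2.4, 0), so the unit quad spans 6.4 by 4.8 units in the scene.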
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that allows you to change the worldSpace coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from localSpace to worldSpace based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my space coordinates despite them having other positions in the world?
If I print the vertices of a quad using the script below, I get results which have 3 coordinates and can be multiplied as such by localScale.
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;

Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There also exists something called worldSpace, and you can convert between local- and worldSpace.
localSpace is the object's mesh vertices in relation to the object itself, while worldSpace is the object's location in the Unity scene.
Converting gives you first the localSpace coordinates as above, then the worldSpace coordinates derived from those local coordinates.
Here is the script I used to print both:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;

Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}

Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree in which each parent Transform (position, rotation, scale; rotation is actually a quaternion, but let's assume it's Euler angles for simplicity so that the math works) is applied to its children. By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject, and that GameObject's Transform has a localScale other than one, all the vertices in the mesh get multiplied by that scale, and the position is added.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, etc.
To answer your question: the global positions of your vertices are different from those contained in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not scale, and this is also the reason why non-uniform scaling of objects which contain rotated children can sometimes give very strange results. Transforms stack.
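A small worked example of that stacking, with made-up numbers: take a vertex at (1, 1, 0) in a mesh whose GameObject has localScale (2, 2, 2) and sits at its parent's origin, and parent that GameObject to another one with localScale (3, 1, 1) and position (0, 10, 0). Ignoring rotation, the vertex ends up at (1*2*3 + 0, 1*2*1 + 10, 0*2*1 + 0) = (6, 12, 0) in world space.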