Cannot display QImage correctly on QQuickPaintedItem providing world transform matrix - qtquick2

In a Qt Quick project, I derived a custom class from QQuickPaintedItem and mapped the screen coordinate system to a Cartesian coordinate system by providing a transform matrix. Now I want to display a PNG on the custom item with QPainter::drawImage; however, the image's y coordinate is inverted. How can I fix it? Thanks!
Below is the code snippet:
void DrawArea::paint(QPainter *painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);
    QTransform transform;
    transform.setMatrix(800.0/10.0, 0.0, 0.0,
                        0.0, -600.0/10.0, 0.0,
                        400, 300, 1.0);
    painter->setWorldTransform(transform);
    painter->drawImage(QRectF(0, 0, 3, 3), m_image,
                       QRectF(0, 0, m_image.width(), m_image.height()));
}
The window size is 800x600, and the Cartesian coordinates run from -5 to 5 on both x and y.
The y coordinate is inverted because of the -600.0/10.0 term. If I remove the minus sign and use 600.0/10.0, the image is displayed correctly, but it then extends below the y=0 axis of the Cartesian coordinate system.
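For reference, here is what that matrix does to Cartesian points, sketched in plain Python (hand-rolled arithmetic, not Qt; the constants mirror the snippet above):

```python
def to_screen(x, y):
    """Apply the same world transform as the QPainter snippet:
    80 px per unit in x, -60 px per unit in y (the flip),
    with the origin translated to the window centre."""
    sx, sy = 800.0 / 10.0, -600.0 / 10.0   # scale factors from setMatrix
    dx, dy = 400, 300                      # translation to the window centre
    return (sx * x + dx, sy * y + dy)

# The Cartesian origin lands at the window centre:
print(to_screen(0, 0))    # (400.0, 300.0)
# +y in Cartesian space goes *up* on screen (smaller pixel y):
print(to_screen(0, 5))    # (400.0, 0.0)
```

Because the same negative y scale is also applied to the image's own pixel rows, the image comes out mirrored; one commonly suggested workaround (untested here) is to draw a pre-flipped copy of the image, e.g. via QImage::mirrored(), so the two flips cancel out.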

Related

CGAffineTransform translationX or translatedBy

I'm getting confused.
Using Vision, I transform bottom-left coordinates to top-left with
CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
However, to rotate the camera view according to the orientation, I use
CGAffineTransform(translationX: 1, y: 0).rotated(by: -CGFloat.pi / 2)
Why, in the second case, do we use CGAffineTransform(translationX:...) rather than CGAffineTransform(scaleX:...)?
What is the difference between the two?
So why, to transform bottom-left coordinates to top-left, do we use scale?
So your question really is: Why is
CGAffineTransform(scaleX: 1, y: -1).translatedBy(x:0, y: -1)
the way to flip?
Let's start with the scale part. Scaling the y coordinate by -1 is a multiplication: it reverses the scale so that up is down. That's the flip. (Scaling x by 1 just means "leave it alone".)
The translate part is because transforms take place around the origin (the bottom left corner, originally). So when we flip by scaling, we flip ourselves right off the screen. In order to compensate for that, we slide back onto the screen.
They have completely different meanings, as per the Apple docs:
.translatedBy
Returns an affine transformation matrix constructed by translating an
existing affine transform.
.scaledBy
Returns an affine transformation matrix constructed by scaling an
existing affine transform.
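The order of operations in that flip can be sketched in plain Python (not CoreGraphics; with translatedBy, the translation is applied to the point first, then the existing scale):

```python
def flip_normalized(x, y):
    """Mimic CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    on a point in normalized [0, 1] coordinates."""
    x, y = x + 0, y - 1      # the .translatedBy(x: 0, y: -1) part runs first
    x, y = x * 1, y * -1     # then scale (1, -1) mirrors the y axis
    return x, y

print(flip_normalized(0.0, 0.0))   # (0.0, 1.0): bottom-left becomes top-left
print(flip_normalized(0.5, 0.25))  # (0.5, 0.75)
```

The net effect is y' = 1 - y: the flip, then the slide back onto the screen that the answer describes.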

flutter convert 3d points to 2d points

I have 3D points from an OBJ file. I want to be able to select a point, say v -0.822220 0.216242 -0.025730, overlay it with a container, and save the point. (I have a vehicle 3D OBJ; I want to be able to select, say, the driver's door and save the selected point, maybe a door handle.)
sample points:
v 0.822220 0.216242 -0.025730
v -0.822220 0.216242 -0.025730
v 0.811726 0.220845 0.029668
v -0.811726 0.220845 0.029668
v 0.777874 0.214472 0.075458
v -0.777874 0.214472 0.075458
v 0.724172 0.189587 0.073470
v -0.724172 0.189587 0.073470
v 0.704111 0.180226 0.027508
what i have achieved
return new GestureDetector(
  onTapDown: (TapDownDetails details) => onTapDown(context, details),
  child: new Stack(fit: StackFit.expand, children: <Widget>[
    Object3D(...),
    new Positioned(
      child: new Container(color: Colors.red),
      left: posx,
      top: posy,
    )
  ]),
);

void onTapDown(BuildContext context, TapDownDetails details) {
  print('${details.globalPosition}');
  final RenderBox box = context.findRenderObject();
  final Offset localOffset = box.globalToLocal(details.globalPosition);
  setState(() {
    posx = localOffset.dx;
    posy = localOffset.dy;
  });
}
I got a suggestion to convert the points to 2D and use the 2D points to overlay the container. How do I convert the 3D points to 2D points?
Is there a better way of doing this?
I'm using this package: flutter_3d_obj
(Disclaimer: 3D graphics in Dart/Flutter are extremely experimental. Flutter does not provide any 3D rendering context to draw 3D objects onto, and any 3D rendering packages such as flutter_3d_obj are A) software-based and therefore very slow, and B) extremely limited in feature set [i.e. lacking lighting, shading, normals, texturing, etc.]. As such, it's not recommended to try and draw 3D objects in Flutter directly. The recommendation is to either use something like flare to replicate the 3D effect using 2D animations or to use something like the Unity3D Widget package to draw 3D graphics on a non-Flutter canvas.)
The conversion from a point in a 3D space to a 2D plane is called a Projection Transformation. This transformation is the basis of all "cameras" in 3D software from simple games to 3D animated Hollywood films. There are quite a few excellent write-ups on how a projection transformation works (a Google search brings up this one), but an overly simplified explanation is as follows.
(Like any other transformation, this will require linear algebra and matrix multiplication. Fortunately, Dart has a matrix math package in vector_math.)
There are two general types of projection transformations: perspective and orthogonal.
Orthogonal is the easiest to conceptualize and implement, as it's just a flat conversion from a point in 3D space to the place on the plane that is closest to that point. It's literally just stripping the Z coordinate off of the 3D point and using the X and Y coordinates as your new 2D point:
import 'package:vector_math/vector_math.dart';

Vector2 transformPointOrtho(Vector3 input) {
  final ortho = Matrix4(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 0,
  );
  return (ortho * input).xy;
}
Perspective is more complex, as it also takes perspective (a.k.a field of view angles) into account. As such, there are some parameters that go into creating the transformation matrix:
n = near clipping plane
f = far clipping plane
S = representation of the vertical viewing angle, derived by the following equation:
S = 1 / tan((fov / 2) * (π / 180))
fov = field of view (FOV) angle in degrees
(To use radians, omit the "* (π/180)" part)
import 'dart:math';
import 'package:vector_math/vector_math.dart';

Vector2 transformPointPersp(Vector3 input, double fovDeg, double nearClip, double farClip) {
  final s = 1 / (tan((fovDeg / 2) * (pi / 180)));
  final fdfn = -farClip / (farClip - nearClip);
  final fndfn = -(farClip * nearClip) / (farClip - nearClip);
  final persp = Matrix4(
    s, 0, 0, 0,
    0, s, 0, 0,
    0, 0, fdfn, -1,
    0, 0, fndfn, 0,
  );
  return (persp * input).xy;
}
Obviously, this is an overly simplistic explanation and doesn't take into account factors such as camera position/rotation. This is also the simplest way to form the transformation matrices, and not necessarily the best. For further reading, I highly suggest you look at various linear algebra and low-level 3D rendering tutorials (such as the one linked above).
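One step the answer above leaves out: once a point has been projected into normalized device coordinates (roughly [-1, 1] on each axis after the perspective divide), it still has to be mapped to widget pixels before it can feed Positioned's left/top values. A rough Python sketch of that mapping (the widget size and y-axis orientation here are assumptions about your layout):

```python
def ndc_to_widget(ndc_x, ndc_y, width, height):
    """Map a point from NDC ([-1, 1], y up) to widget pixels (y down)."""
    px = (ndc_x + 1) / 2 * width    # -1 -> 0, +1 -> width
    py = (1 - ndc_y) / 2 * height   # +1 -> 0 (top), -1 -> height (bottom)
    return px, py

# The centre of NDC space lands in the middle of a 400x300 widget:
print(ndc_to_widget(0, 0, 400, 300))   # (200.0, 150.0)
print(ndc_to_widget(-1, 1, 400, 300))  # (0.0, 0.0): top-left corner
```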

How to handle 3D object size and its position in ARKit

I am facing difficulties with a 3D object's size and its x, y, z positioning. I added the 3D object to the sceneView, but its size is too big. How do I reduce the 3D object's size to fit my requirements? Can anyone help me handle the 3D object's size and its x, y, z positioning?
I am using Swift to code.
Each SCNNode has a scale property:
Each component of the scale vector multiplies the corresponding
dimension of the node’s geometry. The default scale is 1.0 in all
three dimensions. For example, applying a scale of (2.0, 0.5, 2.0) to
a node containing a cube geometry reduces its height and increases its
width and depth.
Which can be set as follows:
var scale: SCNVector3 { get set }
If, for example, your node was called myNode, you could thus use the following to scale it to 1/10 of its original size:
myNode.scale = SCNVector3(0.1, 0.1, 0.1)
Regarding positioning SCNNodes this can be achieved by setting the position property:
The node’s position locates it within the coordinate system of its
parent, as modified by the node’s pivot property. The default position
is the zero vector, indicating that the node is placed at the origin
of the parent node’s coordinate system.
If, therefore, you wanted to add your SCNNode at the center of the worldOrigin and 1m away from the camera, you could use the following:
myNode.position = SCNVector3(0, 0, -1)
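Purely as an illustration of the arithmetic (not SceneKit code): scale multiplies each vertex component, and position then offsets the node within its parent's coordinate space. Ignoring rotation and pivot, that composition looks like this:

```python
def node_to_parent(vertex, scale, position):
    """Apply a node's scale, then its position, to a vertex in local
    space (rotation and pivot are deliberately left out)."""
    return tuple(v * s + p for v, s, p in zip(vertex, scale, position))

# A vertex at (1, 1, 1) on a node scaled to 1/10 and placed 1 m in
# front of the origin (-z is "forward" toward the scene in SceneKit):
print(node_to_parent((1, 1, 1), (0.1, 0.1, 0.1), (0, 0, -1)))
# (0.1, 0.1, -0.9)
```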
Hope it helps...

In libgdx's Batch interface, what are these arguments used for?

I'm trying to figure out what all these arguments do; when I draw my bullet image, it appears as a solid block instead of a sprite that alternates between solid color and empty portions (i.e. instead of 10101 it's 11111, with the 0's being empty parts of the texture).
Before, I was using batch.draw(texture, float x, float y) and it displayed the texture correctly. However, I was playing around with rotation, and this is the version of draw that seemed most suitable:
batch.draw(texture, x, y, originX, originY, width, height, scaleX, scaleY, rotation, srcX, srcY, srcWidth, srcHeight, flipX, flipY)
I can figure out the obvious ones, those being originX and originY (the location to draw the image from its upper-left pixel, I believe); however, I then don't know what the x, y coordinates after texture are for.
scaleX, scaleY, rotation, and flipX, flipY I know what to do with, but what are srcX and srcY, along with srcWidth and srcHeight, for?
edit: I played around and figured out what srcX, srcY and srcWidth, srcHeight do. I cannot figure out what originX, originY does, though I'm guessing it's the centerpoint of the image. Since I don't want to play around with this one anyway, should I leave it as 0,0?
What would be common uses for manipulating the centerpoint of images?
Answering main question.
srcX, srcY, srcWidth, srcHeight are values that determine which part (rectangle) of the source texture you want to draw. For example, say your source image is 100x100 pixels in size and you want to draw only a 60x60 part from the middle of it:
batch.draw(texture, x, y, 20, 20, 60, 60);
Answering your edited question.
Origin is the center point for rotation and scale transformations. So if you want your sprite to scale and rotate around its center point, you should set the origin values like so:
float originX = width * 0.5f;
float originY = height * 0.5f;
In case you don't care about rotation and scaling, you don't need to specify these params (leave them 0).
And keep in mind that origin does not determine the image's drawing position (this is the most common mistake). That means the next two method calls draw the image at the same position (the fourth and fifth params are originX and originY):
batch.draw(image, x, y, 0, 0, width, height, ...);
batch.draw(image, x, y, 50, 50, width, height, ...);
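For intuition, the rotation around originX/originY boils down to: translate the point so the origin sits at (0, 0), rotate, translate back. A small Python sketch of that math (not libgdx code):

```python
import math

def rotate_about(px, py, ox, oy, degrees):
    """Rotate point (px, py) around origin (ox, oy) by the given angle."""
    rad = math.radians(degrees)
    dx, dy = px - ox, py - oy                     # move origin to (0, 0)
    rx = dx * math.cos(rad) - dy * math.sin(rad)  # standard 2D rotation
    ry = dx * math.sin(rad) + dy * math.cos(rad)
    return rx + ox, ry + oy                       # move back

# Rotating a corner of a 60x60 image 90 degrees around its centre (30, 30):
x, y = rotate_about(0, 0, 30, 30, 90)
print(round(x), round(y))   # 60 0
```

With origin left at (0, 0), the same rotation would instead swing the whole image around its lower-left corner.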
According to the documentation, the parameters are as defined:
srcX - the x-coordinate in texel space
srcY - the y-coordinate in texel space
srcWidth - the source width in texels
srcHeight - the source height in texels

Translate 3d object center coordinates to 2d visible viewport coordinates

I have loaded a Wavefront object in iPhone OpenGL.
It can be rotated around the x/y axes, panned around, and zoomed in/out.
My task is: when the object is tapped, highlight its 2D center coordinates on screen, for example like this: (Imagine that + is in the center of the visible object.)
When loading the OpenGL object I store its:
center position in the world,
x,y,z position offset,
x,y,z rotation,
zoom scale.
When the user taps on the screen, I can distinguish which object was tapped. But, as the user can tap anywhere on the object, the tapped point is not the center.
When the user touches an object, I want to be able to find the corresponding object's approximate visible center coordinates.
How can I do that?
Most of the code I could find on Google translates 3D coordinates to 2D, but without rotation.
Some variables in code:
Vertex3D centerPosition;
Vertex3D currentPosition;
Rotation3D currentRotation;
//centerPosition.x, centerPosition.y, centerPosition.z
//currentPosition.x, currentPosition.y, currentPosition.z
//currentRotation.x, currentRotation.y, currentRotation.z
Thank you in advance.
(To find out which object was tapped, I re-color each object in a different color, so I know which color the user tapped.)
The object's drawSelf function:
// Save the current transformation by pushing it on the stack
glPushMatrix();
// Load the identity matrix to restore to origin
glLoadIdentity();
// Translate to the current position
glTranslatef(currentPosition.x, currentPosition.y, currentPosition.z);
// Rotate to the current rotation
glRotatef(currentRotation.x, 1.0, 0.0, 0.0);
glRotatef(currentRotation.y, 0.0, 1.0, 0.0);
glRotatef(currentRotation.z, 0.0, 0.0, 1.0);
// Enable and load the vertex arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, vertexNormals);
if (textureCoords != NULL)
{
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(valuesPerCoord, GL_FLOAT, 0, textureCoords);
}
// Loop through each group
for (OpenGLWaveFrontGroup *group in groups)
{
    if (textureCoords != NULL && group.material.texture != nil)
        [group.material.texture bind];
    // Set color and materials based on the group's material
    Color3D ambient = group.material.ambient;
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (const GLfloat *)&ambient);
    Color3D diffuse = group.material.diffuse;
    glColor4f(diffuse.red, diffuse.green, diffuse.blue, diffuse.alpha);
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, (const GLfloat *)&diffuse);
    Color3D specular = group.material.specular;
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, (const GLfloat *)&specular);
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, group.material.shininess);
    glDrawElements(GL_TRIANGLES, 3*group.numberOfFaces, GL_UNSIGNED_SHORT, &(group.faces[0]));
}
if (textureCoords != NULL)
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
// Restore the previous transformation by popping it off the stack
glPopMatrix();
OK, as I said, you'll need to apply the same transformations to the object center that the graphics pipeline applies to the object's vertices; only this time, the graphics pipeline won't help you - you'll have to do it yourself. It involves some matrix calculations, so I'd suggest getting a good maths library like OpenGL Mathematics (GLM), which has the advantage that function names etc. are extremely similar to OpenGL.
Step 1: transform the center from object coordinates to modelview coordinates
in your code, you set up your 4x4 modelview matrix like this:
// Load the identity matrix to restore to origin
glLoadIdentity();
// Translate to the current position
glTranslatef(currentPosition.x, currentPosition.y, currentPosition.z);
// Rotate to the current rotation
glRotatef(currentRotation.x, 1.0, 0.0, 0.0);
glRotatef(currentRotation.y, 0.0, 1.0, 0.0);
glRotatef(currentRotation.z, 0.0, 0.0, 1.0);
You need to multiply that matrix with the object center, and OpenGL won't help you with that, since it's not a maths library itself. If you use GLM, there are functions like rotate(), translate() etc. that work similarly to glRotatef() and glTranslatef(), and you can use them to build your modelview matrix. Also, since the matrix is 4x4, you'll have to append 1.f as the 4th component to the object center (called the 'w component'), otherwise you can't multiply it with a 4x4 matrix.
Alternatively, you could query the current value of the modelview matrix directly from OpenGL:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
but then you'll have to write your own code for the multiplication...
Step 2: go from modelview coordinates to clip coordinates
From what you posted, I can't tell whether you ever change the projection matrix (is there a glMatrixMode(GL_PROJECTION) somewhere?). If you never touch the projection matrix, you can omit this step; otherwise you'll now need to multiply the transformed object center with the projection matrix as well.
Step 3: perspective division
Divide all 4 components of the object center by the 4th, then throw away the 4th component, keeping only x, y, z.
If you omitted step 2, you can also omit the division.
Step 4: map the object center coordinates to window coordinates
The object center is now defined in normalized device coordinates, with the x and y components in the range [-1.f, 1.f]. The last step is mapping them to your viewport, i.e. to pixel positions. The z component doesn't really matter here, so let's ignore z and call the x and y components obj_x and obj_y, respectively.
The viewport dimensions should be set somewhere in your code with glViewport(viewport_x, viewport_y, width, height). From those arguments, you can calculate the pixel position of the center like this:
pixel_x = width/2 * obj_x + viewport_x + width/2;
pixel_y = height/2 * obj_y + viewport_y + height/2;
and that's basically it.
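The four steps can be collapsed into one short sketch; here it is in plain Python with hand-rolled matrices (illustrative only - in the real app you'd plug in your actual modelview/projection matrices, or let gluProject do the whole job if it's available to you):

```python
def mat_mul_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_center(center, modelview, projection, viewport):
    """Steps 1-4: object coords -> eye -> clip -> NDC -> window pixels."""
    x, y, z = center
    eye = mat_mul_vec(modelview, [x, y, z, 1.0])  # step 1: append w = 1, multiply
    clip = mat_mul_vec(projection, eye)           # step 2: projection matrix
    ndc = [c / clip[3] for c in clip[:3]]         # step 3: perspective divide
    vx, vy, w, h = viewport                       # step 4: viewport mapping
    return (w / 2 * ndc[0] + vx + w / 2,
            h / 2 * ndc[1] + vy + h / 2)

# Identity matrices stand in for the real modelview/projection state here:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(project_center((0, 0, 0), identity, identity, (0, 0, 800, 600)))
# (400.0, 300.0): the origin maps to the viewport centre
```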