I have a 720x576 video that was played full screen on a display with 1280x960 resolution, along with the corresponding eye tracker gaze coordinate data.
I have written gaze-tracking visualization code, but the one thing I am not sure about is how to convert my input coordinates to match the original video.
So, does anybody have an idea on what to do?
The native aspect ratio of the video (720/576 = 1.25) does not match the aspect ratio at which it was displayed (1280/960 = 1.33). In other words, the pixels were not just scaled in size, but also in shape.
Assuming your gaze coordinates were calibrated to match the physical screen (1280 × 960), you will need to independently scale the x coordinates by 720/1280 = 0.5625 and the y coordinates by 576/960 = 0.6.
Note that this will distort the actual gaze behaviour (horizontal saccades are shrunk more than vertical ones). Your safest option would actually be to rescale the video to the same aspect ratio as the screen and project the gaze coordinates onto that. That way they won't be distorted, and the slightly stretched movie will match what was actually shown to the subjects.
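As a minimal sketch of the first option (plain Python; the assumption is that each gaze sample is an (x, y) pair in 1280x960 screen pixels and you want it in 720x576 video pixels):

SCREEN_W, SCREEN_H = 1280, 960   # resolution the gaze data was calibrated to
VIDEO_W, VIDEO_H = 720, 576      # native resolution of the video

def screen_to_video(x_screen, y_screen):
    # Independent per-axis scaling; as noted above this distorts the geometry
    # slightly (x is scaled by 0.5625, y by 0.6).
    x_video = x_screen * VIDEO_W / SCREEN_W
    y_video = y_screen * VIDEO_H / SCREEN_H
    return x_video, y_video

print(screen_to_video(640, 480))   # screen centre -> (360.0, 288.0), the video centre

The second option needs no conversion at all: rescale the video frames to 1280x960 and plot the raw screen coordinates directly on them.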
Related
I'm having trouble setting the right resolution in Unity to avoid pixel distortion in my pixel art assets. When I create a tile grid, the assets look terrible in the preview tab.
I have a tilemap where each tile is 64x32 pixels.
I'm using 64 pixels per unit.
The camera size is set to 5 in a 640x360 resolution (using the following formula: vertical resolution / PPU / 2).
What am I doing wrong and what am I missing?
I don't know how the tiles are defined, but assuming they are rects with textures on top, you could check your texture filter setting and play with it a little, for example setting it to "anisotropic".
To solve this problem and get a "pixel perfect" view, you need to apply the following formula:
Camera size = height of the screen resolution / PPU (pixels per unit) / 2
This will do the job!
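As a quick sanity check with the numbers from the question (a sketch only, assuming the 640x360 target resolution and 64 PPU given above):

# Applying the formula above to the question's numbers.
vertical_resolution = 360
pixels_per_unit = 64

camera_size = vertical_resolution / pixels_per_unit / 2
print(camera_size)   # 2.8125

# A camera size of 5 corresponds to 640 / 64 / 2, i.e. the horizontal resolution,
# which would stretch the tiles and explain the distortion.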
I have dimensions in millimeters (mostly rectangles and squares) and I'm trying to draw them at their actual size.
Something like 6.70 x 4.98 x 3.33 mm.
I really won't be using the depth in the object but just threw it in.
New to drawing shapes with my hands ;)
Screens are typically measured in pixels (Android) or points (iOS), and neither corresponds to a fixed physical size, since modern devices have different pixel ratios. To figure out an exact size you need to determine the current device's screen size and its pixel ratio; both can be obtained from WidgetsBinding.instance.window... Then you just do the math from there to convert those measurements to mm.
However, this seems like an odd requirement, so you may just be asking how to draw a square of an exact size. You may want to look into the Canvas/Paint API, which can be used in conjunction with a CustomPainter. Another option is a Stack with some Positioned.fromRect or .fromRelativeRect children, and drawing them using that setup.
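As a rough sketch of "the math from there" (plain Python arithmetic only; Flutter's window gives you physicalSize and devicePixelRatio, but the physical DPI used below is an assumption you would still have to obtain from the platform):

MM_PER_INCH = 25.4

def mm_to_logical_pixels(mm, physical_dpi, device_pixel_ratio):
    physical_pixels = mm / MM_PER_INCH * physical_dpi   # length in device pixels
    return physical_pixels / device_pixel_ratio         # length in logical pixels

# Hypothetical example: a 6.70 mm wide rectangle on a 400 dpi screen with a 2.625 pixel ratio
print(mm_to_logical_pixels(6.70, 400, 2.625))   # roughly 40 logical pixels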
I have discovered that certain tablets (e.g. Samsung SM-T210 - Galaxy Tab 3) have equal horizontal and vertical angles of view (implying an aspect ratio of 1), while NONE of the picture sizes supported by their camera1.parameters have an aspect ratio of 1 (the closest being 1.33). What's going on? Are pixel pitches different in the x and y directions? Is some kind of cropping always applied?
I am testing an augmented reality app, and I need to have a very clear understanding of how an optical point maps onto the sensor and then onto the screen or image.
On devices supporting Camera2, one can discover the physical sensor size, the full pixel array size, and the pixel array actually being used, which would enable me to answer this question. But this device seems to have older hardware.
I would have thought that the largest supported picture size would be (close to) the actual usable pixel array size. But the largest size has an aspect ratio of 1.33, not 1. Does this mean cropping is happening? Is there a scaler in play? What's going on?
The reported vertical/horizontal camera angles of view are not consistent with the supported picture sizes. Presumably this discrepancy is resolved by filling/clipping before an image is returned in the onPictureTaken() callback. How is it resolved? I would like to measure angles correctly by processing the picture.
Actually, the angles of view are consistent with some of the supported picture sizes. In the case of a Samsung Captivate, the angles are reported as 51.2 x 39.4 degrees. Taking the ratio of the tangents of half these angles, you get 1.338. This agrees closely enough with the 640x480, 1600x1200, 2048x1536, and 2560x1920 aspect ratio of 1.333...
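A quick way to reproduce that number (a sketch in Python, using the reported 51.2 x 39.4 degree angles):

import math

horizontal_fov = 51.2   # reported horizontal angle of view, degrees
vertical_fov = 39.4     # reported vertical angle of view, degrees

aspect = math.tan(math.radians(horizontal_fov / 2)) / math.tan(math.radians(vertical_fov / 2))
print(aspect)   # about 1.338, close to the 4:3 (1.333...) picture sizes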
Additionally, the angles do not change when you change either the picture size or the zoom, so they describe the hardware, specifically a relationship between the lens and the sensor.
So my question only applies to the other picture sizes, having an aspect ratio of 1.66....
I use the glTranslate command to shift the position of a sprite which I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the image. The issue is a bit subtle: instead of seeing the screen and the texture as arrays of points, think of them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared and scaled). To make things look crisp, the two grids must align perfectly.

The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels but the outer edges of the texture sheet, so you need to offset and scale a little to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must align with the pixel edges, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it is difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) doesn't address the center of the lower left pixel but the lower left corner of the viewport
// (width,height) similarly addresses the upper right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture (i.e. its vertex positions match the texture size), the texture coordinates cover the full 0..1 range, and the vertex positions are integers.
Now the interesting part: what if those conditions aren't met? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off the grid they are (in layman's terms; the actual implementation looks slightly different).
So, how to solve your issue: draw sprites with a projection/modelview that matches the viewport, use only integer values for the vertex coordinates, and make your texture cover the whole picture. If you're using only a part of the texture coordinate range, things get even more interesting, since texture coordinates address the texture grid, not the texel centers (see the arithmetic sketch below).
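To make that last point concrete, here is a minimal arithmetic sketch (plain Python, not OpenGL calls; the atlas and sprite sizes are made-up examples) of how texel edges and texel centers map to texture coordinates:

# For a texture that is N texels wide, texel i spans [i/N, (i+1)/N] in texture
# coordinates and its center sits at (i + 0.5)/N.
tex_w, tex_h = 256, 256                     # full texture size in texels (assumed)
atlas_x, atlas_y, w, h = 32, 64, 48, 48     # sprite sub-rectangle in texels (assumed)

# Edge-based coordinates: map the sprite's outer texel edges to the quad's edges.
u0, v0 = atlas_x / tex_w, atlas_y / tex_h
u1, v1 = (atlas_x + w) / tex_w, (atlas_y + h) / tex_h

# Center of the sprite's first texel, for comparison with the edge value u0.
u_center0 = (atlas_x + 0.5) / tex_w

print(u0, u1, u_center0)   # 0.125 0.3125 0.126953125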
I would recommend looking at your modelview matrix declaration and be sure that glLoadIdentity() is being called to ensure that the matrix stack is clean before applying the transform.