Reported vertical / horizontal camera angle of view is not consistent with supported picture sizes. Which gets clipped or filled on display?

The reported vertical / horizontal camera angle of view is not consistent with the supported picture sizes. Presumably this discrepancy is resolved by filling / clipping before an image is returned in the onPictureTaken() callback. How is it resolved? I would like to measure angles correctly by processing the picture.

Actually, the angles of view are consistent with some of the supported picture sizes. In the
case of a Samsung Captivate, the angles are reported as 51.2 x 39.4 degrees. Taking the ratio of the tangents of half of each angle, tan(25.6°) / tan(19.7°) ≈ 1.338, which agrees closely enough with the 4:3 aspect ratio (1.333...) of the 640x480, 1600x1200, 2048x1536, and 2560x1920 sizes.
Additionally, the angles do not change when you change either the picture size or the zoom, so they describe the hardware, specifically the relationship between the lens and the sensor.
So my question only applies to the remaining picture sizes, which have an aspect ratio of 1.66....
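To make that comparison concrete, here is a minimal sketch assuming the deprecated Camera1 API the question uses; it derives the optical aspect ratio implied by the reported angles and prints it next to each supported picture size. The class and method names are just for illustration.

```java
import android.hardware.Camera;

public class FovAspectCheck {
    // Compare the aspect ratio implied by the reported angles of view
    // with the aspect ratio of each supported picture size.
    public static void check(Camera camera) {
        Camera.Parameters p = camera.getParameters();
        double h = Math.toRadians(p.getHorizontalViewAngle()); // e.g. 51.2 deg
        double v = Math.toRadians(p.getVerticalViewAngle());   // e.g. 39.4 deg
        // For a pinhole model, width/height = tan(h/2) / tan(v/2).
        double opticalAspect = Math.tan(h / 2) / Math.tan(v / 2);
        for (Camera.Size s : p.getSupportedPictureSizes()) {
            double pictureAspect = (double) s.width / s.height;
            System.out.printf("%dx%d  picture aspect %.3f  optical aspect %.3f%n",
                    s.width, s.height, pictureAspect, opticalAspect);
        }
    }
}
```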

Related

Drawing a shape with dimensions in millimeters

I have dimensions in millimeters (mostly rectangles and squares) and I'm trying to draw them to their size.
Something like 6.70 x 4.98 x 3.33 mm.
I really won't be using the depth in the object but just threw it in.
New to drawing shapes with my hands ;)
Screens are typically measured in pixels (Android) or points (iOS). Both are logical units rather than physical ones, and modern devices have different pixel ratios. To figure out an exact physical size you need to determine the current device's screen size and its pixel ratio, both of which are available from WidgetsBinding.instance.window. Then you just do the math from there to convert those measurements to mm.
However, this seems like an odd requirement, so you may just be asking how to draw a square of an exact size. You may want to look into the Canvas/Paint API, which can be used in conjunction with a CustomPainter. Another option is a Stack with Positioned.fromRect or .fromRelativeRect, drawing the shapes using that setup.
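The question itself is about Flutter, but the unit conversion is plain arithmetic. Here is a minimal sketch in Java (matching the other examples in this collection) using Android's DisplayMetrics, whose xdpi/ydpi fields report the screen's physical pixels per inch; the class name is just for illustration.

```java
import android.content.Context;
import android.util.DisplayMetrics;

public class MmToPx {
    // 1 inch = 25.4 mm, so mm -> px is mm / 25.4 * dpi.
    // xdpi and ydpi can differ, so convert each axis separately.
    public static float mmToPxX(Context ctx, float mm) {
        DisplayMetrics dm = ctx.getResources().getDisplayMetrics();
        return mm / 25.4f * dm.xdpi;
    }

    public static float mmToPxY(Context ctx, float mm) {
        DisplayMetrics dm = ctx.getResources().getDisplayMetrics();
        return mm / 25.4f * dm.ydpi;
    }
}
```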

optical aspect ratio not consistent with supportedPictureSizes

I have discovered that certain tablets (e.g. the Samsung SM-T210 - Galaxy Tab 3) report equal horizontal and vertical angles of view (implying an aspect ratio of 1), while NONE of the supported picture sizes in their Camera1 parameters have an aspect ratio of 1 (the closest being 1.33). What's going on? Are the pixel pitches different in the x and y directions? Is some kind of cropping always applied?
I am testing an augmented reality app, and I need to have a very clear understanding of how an optical point maps onto the sensor and then onto the screen or image.
On devices supporting Camera2, one can discover the actual physical sensor size, the pixel sensor size, and the pixel size actually being used, which would enable me to answer this question. But this device seems to have older hardware.
I would have thought that the largest supported picture size would be (close to) the actual usable pixel array size. But the largest size has an aspect ratio of 1.33, not 1. Does this mean cropping is happening? Is there a scaler in play? What's going on?
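For reference, on devices that do support Camera2, the sensor geometry mentioned above can be queried directly. A minimal sketch (the class and method names are illustrative):

```java
import android.content.Context;
import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.Size;
import android.util.SizeF;

public class SensorGeometry {
    // Print the physical sensor size, the full pixel array, and the
    // active (actually used) pixel array for one camera.
    public static void dump(Context ctx, String cameraId) throws Exception {
        CameraManager mgr =
                (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);
        CameraCharacteristics c = mgr.getCameraCharacteristics(cameraId);
        SizeF physical = c.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE); // mm
        Size pixelArray = c.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE);
        Rect activeArray = c.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
        System.out.println("physical size (mm): " + physical);
        System.out.println("pixel array:        " + pixelArray);
        System.out.println("active array:       " + activeArray);
    }
}
```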

Optical lens distance from an object

I am using a Raspberry Pi camera, and the problem at hand is how to find the best position for it in order to fully see an object.
The object looks like this:
The question is how to find the perfect position, given that the camera is placed at the centre of the above image. Ideally the camera will capture only the object, as the idea is to get the camera as close as possible.
Take a picture with your camera, save it as a JPG, then open it in a viewer that allows you to inspect the EXIF header. If you are lucky you should see the focal length (in mm) and the sensor size. If the latter is missing, you can probably work it out from the sensor's spec sheet. From the two quantities you can work out the angles of the field of view (HorizFOV = 2 * atan(0.5 * sensor_width / focal_length), VertFOV = 2 * atan(0.5 * sensor_height / focal_length)). From these angles you can derive an approximate distance from your subject that will keep it fully in view.
Note that these are only approximations. Nonlinear lens distortion will produce a slightly larger effective FOV, especially near the corners.
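As a worked example, here is a small Java sketch of that math. The focal length and sensor size are assumed values for a Raspberry Pi Camera Module v1; substitute whatever your EXIF header or spec sheet reports.

```java
public class FovDistance {
    // Full field of view (radians) along one axis, pinhole model.
    static double fov(double sensorSizeMm, double focalLengthMm) {
        return 2.0 * Math.atan(0.5 * sensorSizeMm / focalLengthMm);
    }

    // Distance at which an object of the given size exactly fills the
    // field of view along that axis (same unit as objectSize).
    static double distanceToFit(double objectSize, double fovRadians) {
        return 0.5 * objectSize / Math.tan(fovRadians / 2.0);
    }

    public static void main(String[] args) {
        // Assumed: 3.6 mm focal length, 3.76 x 2.74 mm sensor (Pi Camera v1).
        double hFov = fov(3.76, 3.6);
        double vFov = fov(2.74, 3.6);
        System.out.printf("FOV: %.1f x %.1f deg%n",
                Math.toDegrees(hFov), Math.toDegrees(vFov));
        // Object 100 mm wide and 60 mm tall: both axes must fit,
        // so take the larger of the two required distances.
        double d = Math.max(distanceToFit(100, hFov), distanceToFit(60, vFov));
        System.out.printf("Minimum distance: %.0f mm%n", d);
    }
}
```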

Relationship of video coordinates before/after resizing

I have a 720x576 video that was played full screen on a screen with 1280x960 resolution and the relevant eye tracker gaze coordinates data.
I have built a gaze tracking visualization code but the only thing I am not sure about is how to convert my input coordinates to match the original video.
So, does anybody have an idea on what to do?
The native aspect ratio of the video (720/576 = 1.25) does not match the aspect ratio at which it was displayed (1280/960 = 1.33). i.e. the pixels didn't just get scaled in size, but in shape.
So, assuming your gaze coordinates were calibrated to match the physical screen (1280 x 960), you will need to independently scale the x coordinates by 720/1280 = 0.5625 and the y coordinates by 576/960 = 0.6.
Note that this will distort the actual gaze behaviour (horizontal saccades are being scaled by more than vertical ones). Your safest option would actually be to rescale the video to have the same aspect ratio as the screen, and project the gaze coordinates onto that. That way, they won't be distorted, and the slightly skewed movie will match what was actually shown to the subjects.
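A minimal sketch of that first mapping in Java (the class and method names are just illustrative):

```java
public class GazeToVideo {
    // Screen (1280x960) to native video (720x576): note the two axes
    // are scaled by different factors because the aspect ratios differ.
    static final double SX = 720.0 / 1280.0; // 0.5625
    static final double SY = 576.0 / 960.0;  // 0.6

    static double[] toVideo(double gazeX, double gazeY) {
        return new double[] { gazeX * SX, gazeY * SY };
    }

    public static void main(String[] args) {
        double[] p = toVideo(640, 480);           // centre of the screen
        System.out.printf("%.1f, %.1f%n", p[0], p[1]); // 360.0, 288.0
    }
}
```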

Why are UIView frames made up of floats?

Technically the x, y, width, and height represent a set of dimensions that relate to pixels. I can't have 200.23422 pixels, so why do they use floats instead of ints?
The reason for the floats is that modern CPUs and GPUs are optimized to work with many floating point numbers in parallel. This is true for iOS as well as Mac.
With Quartz you don't address individual pixels; everything you draw is antialiased. When you draw at coordinate (1.0, 1.0), this actually adds color to the 2x2 block of pixels around that point.
This is why you might get blurry lines if you draw at integer coordinates. On non-Retina displays you have to draw offset by 0.5. Technically you would need to offset by 0.25 to hit exact pixels on Retina displays, though there it does not really matter much, because at that pixel size you can't see the difference any more.
Long story short: you don't address pixels directly; the graphics engine maps between floating point coordinates and pixels for you.
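The same half-pixel alignment effect exists on Android's Canvas, used here to keep this collection's examples in Java rather than Swift. A minimal sketch (assuming antialiasing is on and canvas units map 1:1 to pixels): a 1px stroke is centred on its coordinate, so an integer coordinate straddles two pixel rows, while a 0.5 offset lands it on one row.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

public class HairlineDemo {
    public static void draw(Canvas canvas) {
        Paint paint = new Paint();
        paint.setAntiAlias(true);
        paint.setColor(Color.BLACK);
        paint.setStrokeWidth(1f);
        // Blurry: the stroke is centred on y=10, so it covers half of
        // row 9 and half of row 10, and antialiasing dims both.
        canvas.drawLine(0f, 10f, 200f, 10f, paint);
        // Crisp: centred on y=20.5, the stroke exactly fills row 20.
        canvas.drawLine(0f, 20.5f, 200f, 20.5f, paint);
    }
}
```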
Resolution independence.
You want to keep your mathematical representation of your UI as accurate as practicable, only translating to pixel int values when you actually need to draw to the output device (and even then, not really). That's so that you can apply any number of transformations to your views and still get an accurate result.
Moreover it is possible to render lines, for example, at half-pixel widths and even less with a visible result - the system uses intelligent antialiasing to display a fine line.
It's the same principle that vector drawing has used for decades (Adobe's PostScript, SVG, etc.). In fact, Quartz is based on PDF, the modern descendant of PostScript. NeXT used Display PostScript in its time, when it was considered pretty revolutionary.
The dimensions are actually points, which on non-Retina screens have a 1-to-1 relation to pixels, but on Retina screens 1 point = 2 pixels. So on a Retina screen you can actually increment by half a point.