What is the size (in pixels or points) of the notch of the iPhone XR and the iPhone XS Max? I know the dimensions for the iPhone X; see "iPhone X Cutout" on https://www.paintcodeapp.com/news/iphone-x-screen-demystified
I believe the iPhone XS has the same dimensions.
I'm trying to make a game that hugs the space right up to the pixel.
Via #prabhat-kasera
It's 30 × 209 pt.
I see now what my error was: I was using 30/2436 of the screen for both phones.
I should use 30/1624 for the iPhone XR and 30/2688 for the iPhone XS Max.
Device-agnostic code:
// Landscape
float landscapeNotchRatio = 30 * ((float)UIScreen.mainScreen.scale) /
                            UIScreen.mainScreen.currentMode.size.width;
// Portrait
float portraitNotchRatio = 30 * ((float)UIScreen.mainScreen.scale) /
                           UIScreen.mainScreen.currentMode.size.height;
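As a quick check against the code above, on an iPhone X (scale factor 3, native height 2436 px) the portrait ratio comes out to 30 × 3 / 2436 ≈ 0.037, i.e. the notch occupies roughly 3.7% of the screen height.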
Related
In Android we define text size in dp, but in Flutter text size is in pixels. How can I achieve the same density-independent sizing in Flutter?
Any help is appreciated!
From the Android Developer Documentation:
px
> Pixels - corresponds to actual pixels on the screen.
in
> Inches - based on the physical size of the screen.
> 1 Inch = 2.54 centimeters
mm
> Millimeters - based on the physical size of the screen.
pt
> Points - 1/72 of an inch based on the physical size of the screen.
dp or dip
> Density-independent Pixels - an abstract unit that is based on the physical density of the screen. These units are relative to a 160 dpi screen, so one dp is one pixel on a 160 dpi screen. The ratio of dp-to-pixel will change with the screen density, but not necessarily in direct proportion. Note: The compiler accepts both "dip" and "dp", though "dp" is more consistent with "sp".
sp
> Scaleable Pixels OR scale-independent pixels - this is like the dp unit, but it is also scaled by the user's font size preference. It is recommended you use this unit when specifying font sizes, so they will be adjusted for both the screen density and the user's preference. Note, the Android documentation is inconsistent on what sp actually stands for; one doc says "scale-independent pixels", the other says "scaleable pixels".
From Understanding Density Independence In Android:
| Density Bucket | Screen Density | Physical Size | Pixel Size |
| --- | --- | --- | --- |
| ldpi | 120 dpi | 0.5 x 0.5 in | 0.5 in * 120 dpi = 60x60 px |
| mdpi | 160 dpi | 0.5 x 0.5 in | 0.5 in * 160 dpi = 80x80 px |
| hdpi | 240 dpi | 0.5 x 0.5 in | 0.5 in * 240 dpi = 120x120 px |
| xhdpi | 320 dpi | 0.5 x 0.5 in | 0.5 in * 320 dpi = 160x160 px |
| xxhdpi | 480 dpi | 0.5 x 0.5 in | 0.5 in * 480 dpi = 240x240 px |
| xxxhdpi | 640 dpi | 0.5 x 0.5 in | 0.5 in * 640 dpi = 320x320 px |
| Unit | Description | Units Per Physical Inch | Density Independent? | Same Physical Size On Every Screen? |
| --- | --- | --- | --- | --- |
| px | Pixels | Varies | No | No |
| in | Inches | 1 | Yes | Yes |
| mm | Millimeters | 25.4 | Yes | Yes |
| pt | Points | 72 | Yes | Yes |
| dp | Density Independent Pixels | ~160 | Yes | No |
| sp | Scale Independent Pixels | ~160 | Yes | No |
More info can also be found in the Google Design Documentation.
I am writing an ARKit app where I need to use camera poses and intrinsics for 3D reconstruction.
The camera intrinsics matrix returned by ARKit seems to use a different image resolution than the device's screen resolution. Below is one example of this issue.
Intrinsics matrix returned by ARKit is :
[[1569.249512, 0, 931.3638306],[0, 1569.249512, 723.3305664],[0, 0, 1]]
whereas the input image resolution is 750 (width) x 1182 (height). In this case, the principal point seems to lie outside the image, which cannot be right; it should ideally be close to the image center. So the above intrinsics matrix might be using an image resolution of 1920 (width) x 1440 (height), which is completely different from the original image resolution.
The questions are:
Do the returned camera intrinsics correspond to a 1920x1440 image resolution?
If yes, how can I get the intrinsics matrix representing the original image resolution, i.e. 750x1182?
Intrinsics 3x3 matrix
The intrinsic camera matrix converts between the 2D camera plane and 3D world coordinate space. Here's a decomposition of an intrinsic matrix, where:
fx and fy are the focal lengths in pixels
xO and yO are the principal point offsets in pixels
s is the axis skew
According to Apple Documentation:
The values fx and fy are the pixel focal length, and are identical for square pixels. The values ox and oy are the offsets of the principal point from the top-left corner of the image frame. All values are expressed in pixels.
So let's examine your data:
[1569, 0, 931]
[ 0, 1569, 723]
[ 0, 0, 1]
fx=1569, fy=1569
xO=931, yO=723
s=0
To convert a known focal length in pixels to mm use the following expression:
F(mm) = F(pixels) * SensorWidth(mm) / ImageWidth(pixels)
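As a quick worked example with the matrix above (the sensor width here is a hypothetical placeholder, not an Apple-published figure): with F(pixels) = 1569 and an assumed sensor width of about 4.8 mm for a 1920-pixel-wide capture, F(mm) ≈ 1569 × 4.8 / 1920 ≈ 3.9 mm.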
Points Resolution vs Pixels Resolution
Look at this post to find out what point resolution and pixel resolution are.
Let's explore what is what using iPhone X data.
@IBOutlet var arView: ARSCNView!
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
let imageRez = (self.arView.session.currentFrame?.camera.imageResolution)!
let intrinsics = (self.arView.session.currentFrame?.camera.intrinsics)!
let viewportSize = self.arView.frame.size
let screenSize = self.arView.snapshot().size
print(imageRez as Any)
print(intrinsics as Any)
print(viewportSize as Any)
print(screenSize as Any)
}
Apple Documentation:
imageResolution instance property describes the image in the capturedImage buffer, which contains image data in the camera device's native sensor orientation. To convert image coordinates to match a specific display orientation of that image, use the viewMatrix(for:) or projectPoint(_:orientation:viewportSize:) method.
iPhone X imageRez (4:3 aspect ratio; these values correspond to the camera sensor):
(1920.0, 1440.0)
iPhone X intrinsics:
simd_float3x3([[1665.0, 0.0, 0.0], // first column
[0.0, 1665.0, 0.0], // second column
[963.8, 718.3, 1.0]]) // third column
iPhone X viewportSize (a third of screenSize in each dimension, i.e. a ninth of its area):
(375.0, 812.0)
iPhone X screenSize (resolution declared in tech spec):
(1125.0, 2436.0)
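To address the second question, here is a minimal sketch of a helper you could write yourself (scaledIntrinsics is not an ARKit API) for rescaling the intrinsics to a resized copy of the capturedImage. It assumes a plain resize at the same aspect ratio; if the target image is cropped (for example to match the 750 x 1182 screen aspect), the crop offset must also be subtracted from the principal point before scaling.
import simd
import CoreGraphics

// Hypothetical helper: rescales an intrinsics matrix from the native capture
// resolution (e.g. 1920 x 1440) to a resized copy of the same image.
// Assumes a uniform resize with no cropping.
func scaledIntrinsics(_ K: simd_float3x3,
                      from captureSize: CGSize,
                      to targetSize: CGSize) -> simd_float3x3 {
    let sx = Float(targetSize.width / captureSize.width)
    let sy = Float(targetSize.height / captureSize.height)
    var scaled = K
    scaled[0][0] *= sx   // fx (first column)
    scaled[1][1] *= sy   // fy (second column)
    scaled[2][0] *= sx   // ox, principal point x (third column)
    scaled[2][1] *= sy   // oy, principal point y (third column)
    return scaled
}
For example, scaledIntrinsics(intrinsics, from: CGSize(width: 1920, height: 1440), to: CGSize(width: 960, height: 720)) would halve fx, fy, ox and oy.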
Note that there's no snapshot() method for RealityKit's ARView.
Is there a good way to do auto-scaling for Windows Store games developed in MonoGame? I am facing many problems. Currently I am doing this.
The following two lines of code keep my game's window at a fixed scale.
graphics.PreferredBackBufferHeight = 768;
graphics.PreferredBackBufferWidth = 1024;
but this is not really solving my problem.
See the code below for auto-scaling your Store game.
Create two variables, scalex and scaley, by dividing the current device width and height by your base resolution. I use 1024 x 768 because that is the smallest PC resolution that will run the Windows 8 OS.
When you pass x and y coordinates for rectangles and vectors, multiply them by scalex and scaley, with positions laid out for the 1024x768 resolution.
For example, I set the x of the rectangle to (int)(scalex * 102): the x position is 102 px in the base layout, and multiplying by scalex auto-scales it. A quick numeric check follows the code.
public static float scalex, scaley;

scalex = (float)graphics.GraphicsDevice.Viewport.Width / 1024;
scaley = (float)graphics.GraphicsDevice.Viewport.Height / 768;

jumposition = new Rectangle((int)(scalex * 102), (int)(scaley * 320), (int)(scalex * 250), (int)(scaley * 250));
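For example, on a hypothetical 1920x1080 display, scalex = 1920 / 1024 = 1.875 and scaley = 1080 / 768 ≈ 1.406, so an element placed at x = 102 in the 1024x768 layout ends up at (int)(1.875 * 102) = 191 px.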
I am developing an application for the iPhone using opencv. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
__ __
| fx 0 cx |
| 0 fy cy |
|_0 0 1 _|
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another Link I found states that fx = focalMM * width / (sensorSizeMM);
fy = focalMM * length / (sensorSizeMM);
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary, as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle, rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance of your camera and measuring its size in the image. You could aso assume the central point is the center of the image. You should definitely not ignore the lens distortion (dist_coef parameter in solvePnPRansac).
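As a worked example with made-up numbers: an object 0.2 m wide, placed 1 m from the camera and perpendicular to it, that spans 500 px in the image gives f ≈ 500 × 1 / 0.2 = 2500 pixels.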
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here you have a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;
double resX = (double)(sourceImage.cols);
double resY = (double)(sourceImage.rows);
double sensorSizeX = 4.89;
double sensorSizeY = 3.67;
double fx = f * resX / sensorSizeX;
double fy = f * resY / sensorSizeY;
double cx = resX/2.;
double cy = resY/2.;
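Plugging in the full 8 MP resolution of 3264 x 2448 (assuming the capture is taken at full resolution), this gives fx = 4.1 × 3264 / 4.89 ≈ 2737 px, fy = 4.1 × 2448 / 3.67 ≈ 2735 px, cx = 1632 and cy = 1224.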
Try this:
func getCamMatrix()->(Float, Float, Float, Float)
{
let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
let fDesc:CMFormatDescriptionRef = format!.formatDescription
let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
// dim = final image dimensions
let cx:Float = Float(dim.width) / 2.0;
let cy:Float = Float(dim.height) / 2.0;
let HFOV : Float = format!.videoFieldOfView
let VFOV : Float = ((HFOV)/cx)*cy
let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
return (fx, fy, cx, cy)
}
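For reference, a small hypothetical usage sketch (the variable names are mine) arranging the returned tuple in the 3x3 layout from the question:
let (fx, fy, cx, cy) = getCamMatrix()
let cameraMatrix: [[Float]] = [
    [fx, 0,  cx],
    [0,  fy, cy],
    [0,  0,  1]
]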
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may barely be good enough to get started. The best approach is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps you get started!
In the specs for iPhone 4 screen resolution & pixel density:
* iPhone 4 has a screen resolution of 960×640 pixels, which is twice that of the prior iPhone models
As we know, when we code like this,
CGImageRef screenImage = UIGetScreenImage();
CGRect fullRect = [[UIScreen mainScreen] applicationFrame];
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, fullRect);
the saveCGImage will have a size of (320, 480). My question is: what about the iPhone 4? Is it (640, 960)?
Another question is about black images in the thumbnail view when you open the Photos app with code like this:
CGImageRef screenImage = UIGetScreenImage();
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, CGRectMake(0, 0, 320, 460)); // please note, I used 460 instead of 480
The problem is that when I open the Photos app, those images appear black in the thumbnail view; when I tap one to see the details, it displays fine.
Is there any solution for this issue now?
Thanks for your time.
Updated question:
When you invoke UIGetScreenImage() to capture the screen on an iPhone 4, is the result also 320x480?
From Falk Lumo:
iPhone 4 main camera:
5.0 Mpixels (2592 x 1936)
1/3.2" back-illuminated CMOS sensor
4:3 aspect ratio
35 mm film camera crop factor: 7.64
Low ISO 80 (or better)
3.85 mm lens focal length
f/2.8 lens aperture
Autofocus: tap to focus
Equivalent 35mm film camera and lens:
30 mm f/22
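As a sanity check, the 35 mm equivalent follows from the crop factor: 3.85 mm × 7.64 ≈ 29.4 mm, consistent with the quoted 30 mm equivalent lens.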
Photo resolutions of iOS device cameras:
iPhone 6/6+, iPhone 5/5S, iPhone 4S (8 MP): 3264 x 2448 pixels
iPhone 4, iPad 3, iPod Touch (5 MP): 2592 x 1936 pixels
iPhone 3GS (3.2 MP): 2048 x 1536 pixels
iPhone 2G/3G (2 MP): 1600 x 1200 pixels