In the specs, under "iPhone 4 screen resolution & pixel density":
* iPhone 4 has a screen resolution of 960×640 pixels, which is twice that of the prior iPhone models.
As we know, when we write code like this:
CGImageRef screenImage = UIGetScreenImage();
CGRect fullRect = [[UIScreen mainScreen] applicationFrame];
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, fullRect);
the saveCGImage will have size (320, 480). My question is: what about iPhone 4? Is it (640, 960)?
Another question is about black images in the thumbnail view when you open Photos.app, if the code looks like this:
CGImageRef screenImage = UIGetScreenImage();
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, CGRectMake(0, 0, 320, 460)); // please note, I used 460 instead of 480
The problem is that when opening Photos.app, those images appear black in the thumbnail view; when tapping one to see the detail view, it looks fine.
Is there any solution for this issue?
Thanks for your time.
Update: when you invoke UIGetScreenImage() to capture the screen on iPhone 4, is the result also 320×480?
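One way to sanity-check this is to compare the captured image's size against the screen's point size multiplied by its scale factor; a minimal sketch (modern Swift for brevity, and it does not use the private UIGetScreenImage API):

```swift
import UIKit

// Minimal sketch: derive the expected pixel dimensions of a full-screen
// capture from the screen's point size and its scale factor.
// On iPhone 4 (scale 2.0) a 320x480-point screen corresponds to 640x960 pixels.
func expectedCapturePixelSize() -> CGSize {
    let bounds = UIScreen.main.bounds   // size in points, e.g. 320x480
    let scale = UIScreen.main.scale     // 1.0 on non-Retina, 2.0 on iPhone 4
    return CGSize(width: bounds.width * scale,
                  height: bounds.height * scale)
}
```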
From Falk Lumo:
iPhone 4 main camera:
5.0 Mpixels (2592 x 1936)
1/3.2" back-illuminated CMOS sensor
4:3 aspect ratio
35 mm film camera crop factor: 7.64
Low ISO 80 (or better)
3.85 mm lens focal length
f/2.8 lens aperture
Autofocus: tap to focus
Equivalent 35mm film camera and lens:
30 mm f/22
Photo resolutions of iOS device cameras:
* iPhone 6/6+, iPhone 5/5S, iPhone 4S (8 MP) - 3264 x 2448 pixels
* iPhone 4, iPad 3, iPod Touch (5 MP) - 2592 x 1936 pixels
* iPhone 3GS (3.2 MP) - 2048 x 1536 pixels
* iPhone 2G/3G (2 MP) - 1600 x 1200 pixels
In Android we define text size in dp, but in Flutter text size is specified in pixels. How can I match the same text size in Flutter?
Any help is appreciated!
From the Android Developer Documentation:
px
> Pixels - corresponds to actual pixels on the screen.
in
> Inches - based on the physical size of the screen.
> 1 Inch = 2.54 centimeters
mm
> Millimeters - based on the physical size of the screen.
pt
> Points - 1/72 of an inch based on the physical size of the screen.
dp or dip
> Density-independent Pixels - an abstract unit that is based on the physical density of the screen. These units are relative to a 160
dpi screen, so one dp is one pixel on a 160 dpi screen. The ratio of dp-to-pixel will change with the screen density, but not necessarily in direct proportion. Note: The compiler accepts both "dip" and "dp", though "dp" is more consistent with "sp".
sp
> Scaleable Pixels OR scale-independent pixels - this is like the dp unit, but it is also scaled by the user's font size preference. It is recommended you
use this unit when specifying font sizes, so they will be adjusted
for both the screen density and the user's preference. Note, the Android documentation is inconsistent on what sp actually stands for, one doc says "scale-independent pixels", the other says "scaleable pixels".
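A rough illustration of the dp-to-px relationship described above, px = dp × (dpi / 160); Swift is used here only as neutral pseudocode, since the formula itself is platform-independent:

```swift
// dp -> px: px = dp * (dpi / 160). The densities below are the standard buckets.
func dpToPx(_ dp: Double, screenDpi: Double) -> Double {
    return dp * screenDpi / 160.0
}

let textSizeDp = 16.0
for dpi in [120.0, 160.0, 240.0, 320.0, 480.0, 640.0] {
    // 16 dp -> 12, 16, 24, 32, 48, 64 px respectively
    print("\(dpi) dpi: \(dpToPx(textSizeDp, screenDpi: dpi)) px")
}
```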
From Understanding Density Independence In Android:
| Density Bucket | Screen Density | Physical Size | Pixel Size |
|---|---|---|---|
| ldpi | 120 dpi | 0.5 x 0.5 in | 0.5 in * 120 dpi = 60x60 px |
| mdpi | 160 dpi | 0.5 x 0.5 in | 0.5 in * 160 dpi = 80x80 px |
| hdpi | 240 dpi | 0.5 x 0.5 in | 0.5 in * 240 dpi = 120x120 px |
| xhdpi | 320 dpi | 0.5 x 0.5 in | 0.5 in * 320 dpi = 160x160 px |
| xxhdpi | 480 dpi | 0.5 x 0.5 in | 0.5 in * 480 dpi = 240x240 px |
| xxxhdpi | 640 dpi | 0.5 x 0.5 in | 0.5 in * 640 dpi = 320x320 px |
| Unit | Description | Units Per Physical Inch | Density Independent? | Same Physical Size On Every Screen? |
|---|---|---|---|---|
| px | Pixels | Varies | No | No |
| in | Inches | 1 | Yes | Yes |
| mm | Millimeters | 25.4 | Yes | Yes |
| pt | Points | 72 | Yes | Yes |
| dp | Density-Independent Pixels | ~160 | Yes | No |
| sp | Scale-Independent Pixels | ~160 | Yes | No |
More info can also be found in the Google Design Documentation.
I am writing an ARKit app where I need to use camera poses and intrinsics for 3D reconstruction.
The camera Intrinsics matrix returned by ARKit seems to be using a different image resolution than mobile screen resolution. Below is one example of this issue
Intrinsics matrix returned by ARKit is :
[[1569.249512, 0, 931.3638306],[0, 1569.249512, 723.3305664],[0, 0, 1]]
whereas the input image resolution is 750 (width) x 1182 (height). In this case, the principal point appears to lie outside the image, which cannot be right; it should ideally be close to the image center. So the intrinsics matrix above might be using an image resolution of 1920 (width) x 1440 (height), which is completely different from the original image resolution.
The questions are:
Do the returned camera intrinsics correspond to a 1920x1440 image resolution?
If yes, how can I get the intrinsics matrix representing the original image resolution, i.e. 750x1182?
Intrinsics 3x3 matrix
The intrinsic camera matrix converts between the 2D camera plane and 3D world coordinate space. Here's a decomposition of an intrinsic matrix, where:
fx and fy are the focal length in pixels
xO and yO are the principal point offsets in pixels
s is an axis skew
According to Apple Documentation:
The values fx and fy are the pixel focal length, and are identical for square pixels. The values ox and oy are the offsets of the principal point from the top-left corner of the image frame. All values are expressed in pixels.
So let's examine what your data is:
[1569, 0, 931]
[ 0, 1569, 723]
[ 0, 0, 1]
fx=1569, fy=1569
xO=931, yO=723
s=0
To convert a known focal length in pixels to mm use the following expression:
F(mm) = F(pixels) * SensorWidth(mm) / ImageWidth(pixels)
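A worked example of that expression, assuming a round sensor width of 4.8 mm purely for illustration (not an official spec), and the 1569 px focal length / 1920 px image width from the intrinsics above:

```swift
// F(mm) = F(pixels) * SensorWidth(mm) / ImageWidth(pixels)
// The 4.8 mm sensor width is an assumed value for illustration only.
func focalLengthInMm(focalLengthPx: Double,
                     sensorWidthMm: Double,
                     imageWidthPx: Double) -> Double {
    return focalLengthPx * sensorWidthMm / imageWidthPx
}

let f = focalLengthInMm(focalLengthPx: 1569, sensorWidthMm: 4.8, imageWidthPx: 1920)
print(f)  // ~3.92 mm under the assumed sensor width
```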
Points Resolution vs Pixels Resolution
Look at this post to find out what point resolution and pixel resolution are.
Let's explore what is what using iPhone X data:
@IBOutlet var arView: ARSCNView!
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    let imageRez = (self.arView.session.currentFrame?.camera.imageResolution)!
    let intrinsics = (self.arView.session.currentFrame?.camera.intrinsics)!
    let viewportSize = self.arView.frame.size
    let screenSize = self.arView.snapshot().size
    print(imageRez as Any)
    print(intrinsics as Any)
    print(viewportSize as Any)
    print(screenSize as Any)
}
Apple Documentation:
imageResolution instance property describes the image in the capturedImage buffer, which contains image data in the camera device's native sensor orientation. To convert image coordinates to match a specific display orientation of that image, use the viewMatrix(for:) or projectPoint(_:orientation:viewportSize:) method.
iPhone X imageRez (4:3 aspect ratio); these values correspond to the camera sensor's native capture resolution:
(1920.0, 1440.0)
iPhone X intrinsics:
simd_float3x3([[1665.0, 0.0, 0.0], // first column
[0.0, 1665.0, 0.0], // second column
[963.8, 718.3, 1.0]]) // third column
iPhone X viewportSize (in points; one ninth the area of screenSize):
(375.0, 812.0)
iPhone X screenSize (resolution declared in tech spec):
(1125.0, 2436.0)
Note that there is no snapshot() method on RealityKit's ARView.
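For completeness, the projectPoint(_:orientation:viewportSize:) method mentioned in the documentation quote above maps a 3D world point into 2D view coordinates for a given orientation and viewport; a minimal sketch (the world point here is arbitrary):

```swift
import ARKit

// Minimal sketch: project an arbitrary world-space point into 2D view
// coordinates for the current viewport size and portrait orientation.
func projectedPoint(in arView: ARSCNView) -> CGPoint? {
    guard let camera = arView.session.currentFrame?.camera else { return nil }
    let worldPoint = simd_float3(0, 0, -1)   // arbitrary point 1 m in front of the world origin
    return camera.projectPoint(worldPoint,
                               orientation: .portrait,
                               viewportSize: arView.bounds.size)
}
```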
What is the size (in pixels or points) of the notch on the iPhone Xr and the iPhone Xs Max? I know it for the iPhone X; see "iPhone X Cutout" on https://www.paintcodeapp.com/news/iphone-x-screen-demystified
I believe the iPhone Xs has the same dimensions.
I'm trying to make a game that hugs the space right up to the pixel.
Via #prabhat-kasera
It's 30 × 209 pt.
I see now what my error was: I was using 30/2436 of the screen for both phones.
I should use 30/1624 for the iPhone Xr and 30/2688 for the iPhone Xs Max.
Device-agnostic code:
// Landscape
float screenRatio = 30 * ((float)UIScreen.mainScreen.scale) / UIScreen.mainScreen.currentMode.size.width;

// Portrait
float screenRatio = 30 * ((float)UIScreen.mainScreen.scale) / UIScreen.mainScreen.currentMode.size.height;
What are the measurement units used to specify sizes or X,Y coordinates in OpenXML (Presentation)?
Does it make sense to map those to pixels, and if so, how can they be converted to pixels?
graphicFrame.Transform = new Transform(new Offset() { X = 1650609L, Y = 4343400L }, new Extents { Cx = 6096000L, Cy = 741680L });
In the above code, X is set to 1650609. What units are these?
They are called EMU (English Metric Units)
http://en.wikipedia.org/wiki/English_Metric_Unit#DrawingML
http://polymathprogrammer.com/2009/10/22/english-metric-units-and-open-xml/
1 pt = 12,700 EMU
Also, as explained here, 1 px ≈ 9525 EMU:
http://openxmldeveloper.org/discussions/formats/f/15/p/396/933.aspx
EMU is right, although converting EMU to px depends on the image density. The conversion factor for 96 ppi images is 9525, for a 72 ppi image it is 12700, and for a 300 ppi image it is 3048.
So the conversion factor is EMUs per inch (914,400) divided by the image ppi.
Example: a 200 px wide image with a density of 300 ppi gives us 609,600 EMU:
609,600 EMU / (914,400 emus-per-inch / 300 pixels-per-inch) = 200 px
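A small sketch of that conversion (pure arithmetic; Swift is used only for consistency with the other snippets on this page):

```swift
// EMU <-> px conversion: 914,400 EMU per inch, divided by the image's ppi.
let emusPerInch = 914_400.0

func emuToPx(_ emu: Double, ppi: Double) -> Double {
    return emu / (emusPerInch / ppi)
}

func pxToEmu(_ px: Double, ppi: Double) -> Double {
    return px * (emusPerInch / ppi)
}

print(emuToPx(609_600, ppi: 300))  // 200 px, matching the example above
print(pxToEmu(200, ppi: 96))       // 1,905,000 EMU (factor 9525 at 96 ppi)
```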
I am using a website that helps me a lot with these things. I found it in another post about all the measurement units in Word and their equivalents: https://unit-converter-bcmmybn3dq-ez.a.run.app/
I found it here: Default WordML Unit Measurement? pixel or point or inches
You just enter EMUs to px and the page calculates the equivalent with plenty of decimals for precision.
I hope it really helps you.
I am using UIImagePickerController to record videos and I always want the output video to be 1280 x 720 (small enough for other purposes), but on iPhone 4s/iPhone 5 the output video is 1920 x 1280. I cannot find a configuration that makes the video exactly 1280 x 720.
UIImagePickerControllerQualityTypeHigh = 0,
UIImagePickerControllerQualityTypeMedium = 1, // default value
UIImagePickerControllerQualityTypeLow = 2,
UIImagePickerControllerQualityType640x480 = 3,
High will be 1920 x 1280 on iPhone 4s/iPhone 5.
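For reference, the preset is applied via the picker's videoQuality property. The same enum also contains an iFrame 1280x720 case, which may be worth trying, though whether a given device honours it is not guaranteed; a minimal Swift sketch:

```swift
import UIKit

// Minimal sketch: configure UIImagePickerController for video capture.
// .typeIFrame1280x720 is part of the same quality enum and may yield
// 1280x720 output on supporting devices (not guaranteed on all hardware).
let picker = UIImagePickerController()
picker.sourceType = .camera
picker.mediaTypes = ["public.movie"]        // kUTTypeMovie
picker.videoQuality = .typeIFrame1280x720   // alternatives: .typeHigh, .typeMedium, .typeLow, .type640x480
```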
Maybe this thread could help!
You should use Google before asking questions like this. :D
Link to the thread on Stack Overflow.