OpenXML distance, size units

What are the measurement units used to specify sizes or X,Y coordinates in OpenXML (Presentation)?
Does it make sense to map these to pixels, and if so, how can they be converted to pixels?
graphicFrame.Transform = new Transform(
    new Offset() { X = 1650609L, Y = 4343400L },
    new Extents { Cx = 6096000L, Cy = 741680L });
In the above code, X is set to 1650609. What units are these?

They are called EMU (English Metric Units)
http://en.wikipedia.org/wiki/English_Metric_Unit#DrawingML
http://polymathprogrammer.com/2009/10/22/english-metric-units-and-open-xml/
1 pt = 12,700 EMU
Also, as explained here, 1 px ≈ 9,525 EMU (at 96 dpi):
http://openxmldeveloper.org/discussions/formats/f/15/p/396/933.aspx

EMU is right, although converting EMU to px depends on the image density. The conversion factor for 96 ppi images is 9,525, while for a 72 ppi image it is 12,700 and for a 300 ppi image it is 3,048.
So the conversion factor is EMUs per inch (914,400) divided by the image ppi.
Example: a 200 px wide image with a density of 300 ppi gives 609,600 EMU:
609,600 EMU / (914,400 EMU-per-inch / 300 pixels-per-inch) = 200 px
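As a quick sanity check, here is a minimal Python sketch of that conversion; the helper names are made up for illustration, and it only assumes the 914,400 EMU-per-inch constant quoted above:

EMU_PER_INCH = 914400

def emu_to_px(emu, ppi=96):
    # Pixels = EMU / (EMU per inch / pixels per inch).
    return emu / (EMU_PER_INCH / ppi)

def px_to_emu(px, ppi=96):
    # EMU = pixels * (EMU per inch / pixels per inch).
    return px * (EMU_PER_INCH / ppi)

print(px_to_emu(200, ppi=300))     # 609600.0 EMU (the worked example above)
print(emu_to_px(609600, ppi=300))  # 200.0 px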

I am using a website that helps me a lot with these conversions. I found it in another post about all the measurement units in Word and their equivalents: https://unit-converter-bcmmybn3dq-ez.a.run.app/
I found it here: Default WordML Unit Measurement? pixel or point or inches
You just select the EMU-to-px conversion and the page calculates the equivalent with plenty of decimal places for precision.
I hope it really helps you.

Related

Geometrical transformation of a polygon to a higher resolution image

I'm trying to resize and reposition a ROI (region of interest) correctly from a low-resolution image (256x256) to a higher-resolution image (512x512). It should also be mentioned that the two images cover different fields of view - the low- and high-resolution images have 330 mm x 330 mm and 180 mm x 180 mm FoVs, respectively.
What I've got at my disposal are:
Physical reference point (in mm) in the 256x256 and 512x512 image, which are refpoint_lowres=(-164.424,-194.462) and refpoint_highres=(-94.3052,-110.923). The reference points are located in the top left pixel (1,1) in their respective images.
Pixel coordinates of the ROI in the 256x256 image (named pxX and pxY). These coordinates are positioned relative to the reference point of the lower resolution image, refpoint_lowres=(-164.424,-194.462).
Pixel spacing for the 256x256 and 512x512 images, which is 0.7757 pixels/mm and 2.8444 pixels/mm, respectively.
How can I rescale and reposition the ROI (the binary mask) to the correct pixel location in the 512x512 image? Many thanks in advance!!
Attempt
% This gives correctly placed and scaled binary array in the 256x256 image
mask_lowres = double(poly2mask(pxX, pxY, 256., 256.));
% Compute translational shift in pixel
mmShift = refpoint_lowres - refpoint_highres;
pxShift = abs(mmShift./pixspacing_highres)
% This produces a binary array that is only positioned correctly in the
% 512x512 image, but it is not upscaled correctly...(?)
mask_highres = double(poly2mask(pxX + pxShift(1), pxY + pxShift(2), 512., 512.));
So you have coordinates pxX and pxY in pixels with respect to the low-resolution image. You can transform these coordinates to real-world coordinates:
pxX_rw = pxX / 0.7757 - 164.424;
pxY_rw = pxY / 0.7757 - 194.462;
Next you can transform these real-world coordinates to high-res pixel coordinates by subtracting the high-res reference point (which is negative, hence the plus signs) and multiplying by the high-res pixel density:
pxX_hr = (pxX_rw + 94.3052) * 2.8444;
pxY_hr = (pxY_rw + 110.923) * 2.8444;
Since the original coordinates fit in the low-res image, but the high-res image is smaller (in physical coordinates) than the low-res one, it is possible that these new coordinates do not fit in the high-res image. If this is the case, cropping the polygon is a non-trivial exercise, it cannot be done by simply moving the vertices to be inside the field of view. MATLAB R2017b introduces the polyshape object type, which you can intersect:
bbox = polyshape([0 0 180 180] - 94.3052, [180 0 0 180] - 110.923);
poly = polyshape(pxX_rw, pxY_rw);
poly = intersect([poly bbox]);
pxX_rw = poly.Vertices(:,1);
pxY_rw = poly.Vertices(:,2);
If you have an earlier version of MATLAB, maybe the easiest solution is to make the field of view larger to draw the polygon, then crop the resulting image to the right size. But this does require some proper calculation to get it right.
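For what it is worth, here is the whole chain collected into one routine as a rough Python/NumPy sketch (the thread's code is MATLAB; scikit-image's polygon2mask is used here as a stand-in for poly2mask, and the example ROI coordinates are made up):

import numpy as np
from skimage.draw import polygon2mask   # stand-in for MATLAB's poly2mask

# Pixel densities (px/mm) and reference points (mm at pixel (1,1)) from the question.
px_per_mm_low, px_per_mm_high = 0.7757, 2.8444
ref_low = np.array([-164.424, -194.462])
ref_high = np.array([-94.3052, -110.923])

def roi_lowres_to_highres(pxX, pxY):
    """Map ROI vertices from 256x256 pixel coords to 512x512 pixel coords."""
    pxX = np.asarray(pxX, dtype=float)
    pxY = np.asarray(pxY, dtype=float)
    # low-res pixels -> real-world mm
    x_rw = pxX / px_per_mm_low + ref_low[0]
    y_rw = pxY / px_per_mm_low + ref_low[1]
    # real-world mm -> high-res pixels (subtract the high-res reference point)
    x_hr = (x_rw - ref_high[0]) * px_per_mm_high
    y_hr = (y_rw - ref_high[1]) * px_per_mm_high
    return x_hr, y_hr

# Hypothetical example ROI (a small triangle) in low-res pixel coordinates.
pxX = np.array([100.0, 140.0, 120.0])
pxY = np.array([100.0, 100.0, 150.0])
x_hr, y_hr = roi_lowres_to_highres(pxX, pxY)
# polygon2mask expects (row, col) vertices, i.e. (y, x).
mask_highres = polygon2mask((512, 512), np.column_stack([y_hr, x_hr]))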

How to change a pixel distance to meters?

I have a .bmp image with a map. What i know:
Height an Width of bmp image
dpi
Map Scale
Image Center's coordinates in meters.
What i want:
How can i calculate some points of image (for example corners) in meters.
Or how can i change a pixel distanse to meters?
What I did before:
I know the image center's coordinates in pixels for sure:
CenterXpix = Width/2;
CenterYpix = Height/2;
But what should I do to find the corners' coordinates? I don't think that:
metersDistance = pixelDistance*Scale;
is a correct equation.
Any advice?
If you know the height or width in both meters and pixels, you can calculate the scale in meters/pixel. Your equation:
metersDistance = pixelDistance*Scale;
is correct, but only if your points are on the same axis. If your two points are diagonal from each other, you have to use good old Pythagoras (in pseudocode):
X = XdistancePix*scale;
Y = YdistancePix*scale;
Distance_in_m = sqrt(X*X+Y*Y);
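Since the question also lists the dpi and the map scale, here is a small Python sketch of one way to get the metres-per-pixel factor from them; it assumes dpi describes the scanned map image and that "Map Scale" is the scale denominator (e.g. 25000 for a 1:25000 map):

import math

def meters_per_pixel(dpi, scale_denominator):
    # One pixel covers (1 / dpi) inch on the printed map; one inch is 0.0254 m;
    # the map scale blows that up to real-world metres.
    return (0.0254 / dpi) * scale_denominator

def pixel_distance_to_meters(dx_px, dy_px, dpi, scale_denominator):
    m_per_px = meters_per_pixel(dpi, scale_denominator)
    # Pythagoras, as in the answer above.
    return math.hypot(dx_px * m_per_px, dy_px * m_per_px)

# Example: a 1:10000 map scanned at 300 dpi; 1000 px horizontally is about 847 m.
print(pixel_distance_to_meters(1000, 0, dpi=300, scale_denominator=10000))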

leaflet pixel size depending on zoom level

I have to display 5x5-degree pie charts on a Leaflet map. I can display a pie chart using the great leaflet-dvf library, but I have to provide the radius in pixels, and so far it is static.
I would like to make it dynamic, so that at any zoom level the pie chart fills the 5x5-degree square (i.e. the radius is derived from the 5-degree side length).
How can I find the length in pixels of the side of a 5x5-degree square, depending on the zoom level?
Thanks
This will work out metres per pixel:
metresPerPixel = 40075016.686 * Math.abs(Math.cos(map.getCenter().lat / 180 * Math.PI)) / Math.pow(2, map.getZoom()+8);
I used the following page, which provides the meters per pixel on a Leaflet map depending on the zoom level:
http://wiki.openstreetmap.org/wiki/Zoom_levels
Then I computed the side length of a 5x5-degree square at the equator: 556,000 meters.
Then I store the length ratio for zoom level 0:
$scope.lengthRatio = 556000 / 156412; // meters / (meters per pixel at zoom 0) = side length in pixels at zoom 0
Finally, I get the radius of a pie chart depending on the zoom level ($scope.mapZoom):
var radius = ($scope.lengthRatio * Math.pow(2, $scope.mapZoom)) / 2;
The /2 is because I want the radius and not the diameter.
Simple!
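Putting the two answers together: the side of the 5-degree square in pixels is just 556,000 m divided by the metres-per-pixel value for the current zoom. A small Python sketch of that arithmetic (Leaflet-agnostic, using only the numbers already quoted above):

import math

EQUATOR_M = 40075016.686     # Earth's circumference in metres
SQUARE_SIDE_M = 556000.0     # side of a 5-degree square at the equator

def metres_per_pixel(lat_deg, zoom):
    # Same formula as in the first answer above.
    return EQUATOR_M * abs(math.cos(math.radians(lat_deg))) / 2 ** (zoom + 8)

def pie_radius_px(lat_deg, zoom):
    side_px = SQUARE_SIDE_M / metres_per_pixel(lat_deg, zoom)
    return side_px / 2   # radius, not diameter

print(pie_radius_px(0.0, 5))   # roughly 57 px at the equator, zoom 5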

create opencv camera matrix for iPhone 5 solvepnp

I am developing an application for the iPhone using OpenCV. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx  0  cx ]
[  0  fy  cy ]
[  0   0   1 ]
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
it says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another Link I found states that fx = focalMM * width / (sensorSizeMM);
fy = focalMM * length / (sensorSizeMM);
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary, as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle, rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance of your camera and measuring its size in the image. You could aso assume the central point is the center of the image. You should definitely not ignore the lens distortion (dist_coef parameter in solvePnPRansac).
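As a concrete illustration of the two estimates mentioned in the answers above (focal length from the sensor spec, and focal length from an object of known size at a known distance), here is a small Python sketch; the example numbers reuse the iPhone 5 figures quoted later in this thread, and the known-object values are made up:

def f_pixels_from_sensor(f_mm, sensor_width_mm, image_width_px):
    # Focal length in pixels = focal length in mm / pixel pitch in mm.
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return f_mm / pixel_pitch_mm

def f_pixels_from_object(object_size_px, distance, object_size):
    # Pinhole model: size_in_pixels = f * L / z  =>  f = size_in_pixels * z / L.
    # distance and object_size must be in the same unit (e.g. metres).
    return object_size_px * distance / object_size

print(f_pixels_from_sensor(4.1, 4.89, 3264))   # ~2737 px, using the figures quoted below
print(f_pixels_from_object(500, 2.0, 0.3))     # a 0.3 m object, 2 m away, 500 px wide -> ~3333 px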
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here is a table with the specs of the cameras for the iPhone 4 and 5.
The calculation is:
double f = 4.1;                            // focal length in mm (iPhone 5)
double resX = (double)(sourceImage.cols);  // image width in pixels
double resY = (double)(sourceImage.rows);  // image height in pixels
double sensorSizeX = 4.89;                 // sensor width in mm
double sensorSizeY = 3.67;                 // sensor height in mm
double fx = f * resX / sensorSizeX;        // focal length in pixels (horizontal)
double fy = f * resY / sensorSizeY;        // focal length in pixels (vertical)
double cx = resX/2.;                       // principal point: the image center
double cy = resY/2.;
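For reference, here is roughly how those values would be assembled into the 3x3 matrix shown in the question and passed to solvePnPRansac, sketched with OpenCV's Python bindings (the 3D/2D point arrays are just placeholders to show the call):

import numpy as np
import cv2

fx, fy = 2737.0, 2735.0            # focal lengths in pixels (e.g. from the formulas above)
cx, cy = 3264 / 2.0, 2448 / 2.0    # principal point: the image centre

camera_matrix = np.array([[fx, 0., cx],
                          [0., fy, cy],
                          [0., 0., 1.]])
dist_coeffs = np.zeros(5)          # or the distortion coefficients from a real calibration

# object_points: Nx3 points in your world frame; image_points: the matching Nx2 pixel coords.
object_points = np.random.rand(10, 3).astype(np.float32)          # placeholder data
image_points = (np.random.rand(10, 2) * 1000).astype(np.float32)  # placeholder data

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points,
                                             camera_matrix, dist_coeffs)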
Try this:
func getCamMatrix()->(Float, Float, Float, Float)
{
let format:AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
let fDesc:CMFormatDescriptionRef = format!.formatDescription
let dim:CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
// dim = final image dimensions
let cx:Float = Float(dim.width) / 2.0;
let cy:Float = Float(dim.height) / 2.0;
let HFOV : Float = format!.videoFieldOfView
let VFOV : Float = ((HFOV)/cx)*cy
let fx:Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
let fy:Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them on your own project, please keep in mind that they may be hardly good enough to get started. The best approach is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps you get started!

Detecting bright/dark points on iPhone screen

I would like to detect and mark the brightest and the darkest spot on an image.
For example I am creating an AVCaptureSession and showing the video frames on screen using AVCaptureVideoPreviewLayer. Now on this camera output view I would like to be able to mark the current darkest and lightest points.
Would I have to read the image pixel data? If so, how can I do that?
In any case, you must read pixels to detect this. But if you want to make it fast, don't read EVERY pixel: read only 1 of every 100:
for (int x = 0; x < width - 10; x += 10) {
    for (int y = 0; y < height - 10; y += 10) {
        // Detect bright/dark points here
    }
}
Then you may read the pixels around the ones you find to make the results more accurate.
Here is the way to get pixel data: stackoverflow.com/questions/448125/… At the brightest point, red+green+blue must be at its maximum (255+255+255 = 765 = 100% white). At the darkest point, red+green+blue must be at its minimum (0 = 100% black).
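If you already have the frame as an RGB byte buffer, the sparse scan described above boils down to something like this rough NumPy sketch (the (height, width, 3) layout and channel order are assumptions; adapt them to your pixel format):

import numpy as np

def find_extremes(rgb, step=10):
    """Return ((x, y) of the brightest pixel, (x, y) of the darkest pixel),
    sampling only every `step`-th pixel. `rgb` is a (height, width, 3) uint8 array."""
    sampled = rgb[::step, ::step, :].astype(np.int32)
    brightness = sampled.sum(axis=2)                 # R + G + B: 0 (black) .. 765 (white)
    by, bx = np.unravel_index(brightness.argmax(), brightness.shape)
    dy, dx = np.unravel_index(brightness.argmin(), brightness.shape)
    # Scale the sampled indices back to full-resolution coordinates.
    return (bx * step, by * step), (dx * step, dy * step)

# Hypothetical usage with a random frame:
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
brightest, darkest = find_extremes(frame)
print(brightest, darkest)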