Auto scaling in MonoGame for Windows Store games using XNA and XAML

Is there a best way to do auto scaling for Windows Store games developed in MonoGame? I am facing many problems. Currently I am doing this.
The following two lines of code keep my game's window at a fixed scale.
graphics.PreferredBackBufferHeight = 768;
graphics.PreferredBackBufferWidth = 1024;
but this is not solving my problem.

See the code below for auto scaling in your Store game.
Declare two variables, scalex and scaley, and divide the current device width and height by your required resolution. I am using 1024 and 768 because that is the smallest PC resolution that will run the Windows 8 OS.
When you pass x and y coordinates for rectangles and vectors, multiply them by scalex and scaley, and lay the positions out according to the 1024x768 resolution.
For example, I take the x of the rect as (int)(scalex*102): the x of the rect sits at 102px in the 1024x768 layout, and multiplying by scalex auto scales it.
public static float scalex, scaley;
scalex = (float)graphics.GraphicsDevice.Viewport.Width / 1024;
scaley = (float)graphics.GraphicsDevice.Viewport.Height / 768;
jumposition = new Rectangle((int)(scalex * 102), (int)(scaley * 320), (int)(scalex * 250), (int)(scaley * 250));
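A related approach, shown here only as a minimal sketch (it is not part of the original answer), is to lay everything out for the 1024x768 design resolution and let SpriteBatch apply the scaling through a transform matrix, so individual coordinates never need to be multiplied by hand:

// Sketch: scale a 1024x768 "virtual" layout to the actual back buffer size.
float scaleX = (float)GraphicsDevice.Viewport.Width / 1024f;
float scaleY = (float)GraphicsDevice.Viewport.Height / 768f;
Matrix scaleMatrix = Matrix.CreateScale(scaleX, scaleY, 1f);

spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, scaleMatrix);
// Draw with plain 1024x768 coordinates, e.g. new Rectangle(102, 320, 250, 250).
spriteBatch.End();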

Related

Convert pixels to world space units

I'm planning to make my sprite 1.5 inches in size on all devices. My problem now is converting pixels to world space units so that I can scale my sprite correctly.
float pixelLength = Screen.dpi * 1.5f; // 1.5 inches
// code to convert pixelLength to world space units
float pixelLength = Screen.dpi * 1.5f;
// Using your sprite's pixel per unit...
float worldLength = pixelLength / spriteRenderer.sprite.pixelsPerUnit;
Alternatively, instead of scaling your sprites, you can change the camera's orthographicSize. Changing orthographicSize works better if all of your sprites' pixelsPerUnit values are roughly the same and you are scaling everything.
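For reference, a minimal sketch of the orthographic-size route (the pixelsPerUnit value below is an assumed import setting, not something from the question). orthographicSize is half of the camera's vertical extent in world units, so deriving it from the screen height makes one sprite pixel map to one screen pixel:

float pixelsPerUnit = 100f; // assumed sprite import setting
// The camera then shows Screen.height / pixelsPerUnit world units vertically.
Camera.main.orthographicSize = Screen.height / (2f * pixelsPerUnit);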

Focal length to Field of view

I'm importing some 3D models from Rhino 3D to Unity. When doing that I need to import the camera views. In Rhino they have the property Focal length and in Unity we have Field of view.
I need to convert focal length to Field of View. I found a formula to convert the values here
http://paulbourke.net/miscellaneous/lens/
I'm planning to use this formula
vertical field of view = 2 atan(0.5 height / focal length)
My question is how can I find the value for the height. I'm not sure from where I can get that in Unity.
Thanks
Maths isn't my strong suit, but I do recall this formula being used in a certain BFBC2 FOV tool:
hFov = 2 * atan(tan( vFov/2 ) * width/height)
Where width and height are your current screen resolution dimensions.
I hope this is correct for your purpose.
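For concreteness, that conversion written out with Unity's Mathf looks roughly like this (a sketch; the example vertical FOV value is arbitrary):

// Convert a vertical FOV in degrees to the horizontal FOV for the current screen aspect.
float vFovDeg = 60f; // example value
float vFovRad = vFovDeg * Mathf.Deg2Rad;
float hFovRad = 2f * Mathf.Atan(Mathf.Tan(vFovRad / 2f) * (float)Screen.width / Screen.height);
float hFovDeg = hFovRad * Mathf.Rad2Deg;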
To answer my own question, this is the formula I ended up using in the end. Hope that'll be useful for anyone having a similar issue.
Code in C# in Unity
// Standard film size
int filmHeight = 24;
int filmWidth = 36;
// Formula to convert focal length to field of view - Unity uses vertical FOV,
// so we use filmHeight to calculate the vertical FOV.
// focalLen is the focal length (in mm) imported from Rhino.
double fovdub = Mathf.Rad2Deg * 2.0 * Math.Atan(filmHeight / (2.0 * focalLen));
float fov = (float) fovdub;

create opencv camera matrix for iPhone 5 solvepnp

I am developing an application for the iPhone using opencv. I have to use the method solvePnPRansac:
http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html
For this method I need to provide a camera matrix:
[ fx  0  cx ]
[  0  fy cy ]
[  0  0   1 ]
where cx and cy represent the center pixel positions of the image and fx and fy represent focal lengths, but that is all the documentation says. I am unsure what to provide for these focal lengths. The iPhone 5 has a focal length of 4.1 mm, but I do not think that this value is usable as is.
I checked another website:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
which shows how opencv creates camera matrices. Here it states that focal lengths are measured in pixel units.
I checked another website:
http://www.velocityreviews.com/forums/t500283-focal-length-in-pixels.html
(about half way down)
It says that focal length can be converted from units of millimeters to pixels using the equation: fx = fy = focalMM * pixelDensity / 25.4;
Another link I found states that fx = focalMM * width / (sensorSizeMM);
fy = focalMM * length / (sensorSizeMM);
I am unsure about these equations and how to properly create this matrix.
Any help, advice, or links on how to create an accurate camera matrix (especially for the iPhone 5) would be greatly appreciated,
Isaac
p.s. I think that (fx/fy) or (fy/fx) might be equal to the aspect ratio of the camera, but that might be completely wrong.
UPDATE:
Pixel coordinates to 3D line (opencv)
Using this link, I can figure out how they want fx and fy to be formatted, because they use them to scale angles relative to their distance from the center. Therefore, fx and fy are likely in pixels/(unit length), but I'm still not sure what this unit length needs to be. Can it be arbitrary as long as x and y are scaled to each other?
You can get an initial (rough) estimate of the focal length in pixels by dividing the focal length in mm by the width of a pixel of the camera's sensor (CCD, CMOS, whatever).
You get the former from the camera manual, or read it from the EXIF header of an image taken at full resolution. Finding out the latter is a little more complicated: you may look up on the interwebs the sensor's spec sheet, if you know its manufacturer and model number, or you may just divide the overall width of its sensitive area by the number of pixels on the side.
Absent other information, it's usually safe to assume that the pixels are square (i.e. fx == fy), and that the sensor is orthogonal to the lens's focal axis (i.e. that the term in the first row and second column of the camera matrix is zero). Also, the pixel coordinates of the principal point (cx, cy) are usually hard to estimate accurately without a carefully designed calibration rig, and an as-carefully executed calibration procedure (that's because they are intrinsically confused with the camera translation parallel to the image plane). So it's best to just set them equal to the geometrical center of the image, unless you know that the image has been cropped asymmetrically.
Therefore, your simplest camera model has only one unknown parameter, the focal length f = fx = fy.
Word of advice: in your application it is usually more convenient to carry around the horizontal (or vertical) field-of-view angle, rather than the focal length in pixels. This is because the FOV is invariant to image scaling.
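To illustrate that relationship (a sketch with made-up numbers; imageWidthPx and fx are just example values):

// Horizontal FOV from a focal length in pixels, and back again.
double imageWidthPx = 3264.0;                                // example full-resolution width
double fx = 3288.0;                                          // example focal length in pixels
double fovX = 2.0 * Math.Atan(imageWidthPx / (2.0 * fx));    // in radians
double fxBack = imageWidthPx / (2.0 * Math.Tan(fovX / 2.0)); // recovers fx
// Halving the image resolution halves fx, but fovX stays the same.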
The "focal length" you are dealing with here is simply a scaling factor from objects in the world to camera pixels, used in the pinhole camera model (Wikipedia link). That's why its units are pixels/unit length. For a given f, an object of size L at a distance (perpendicular to the camera) z, would be f*L/z pixels.
So, you could estimate the focal length by placing an object of known size at a known distance of your camera and measuring its size in the image. You could aso assume the central point is the center of the image. You should definitely not ignore the lens distortion (dist_coef parameter in solvePnPRansac).
In practice, the best way to obtain the camera matrix and distortion coefficients is to use a camera calibration tool. You can download and use the MRPT camera_calib software from this link, there's also a video tutorial here. If you use matlab, go for the Camera Calibration Toolbox.
Here you have a table with the spec of the cameras for iPhone 4 and 5.
The calculation is:
double f = 4.1;                            // iPhone 5 focal length, in mm
double resX = (double)(sourceImage.cols);  // image width in pixels
double resY = (double)(sourceImage.rows);  // image height in pixels
double sensorSizeX = 4.89;                 // sensor width in mm
double sensorSizeY = 3.67;                 // sensor height in mm
double fx = f * resX / sensorSizeX;        // focal length in pixels (x)
double fy = f * resY / sensorSizeY;        // focal length in pixels (y)
double cx = resX / 2.;                     // principal point at the image center
double cy = resY / 2.;
Try this:
func getCamMatrix() -> (Float, Float, Float, Float)
{
    let format: AVCaptureDeviceFormat? = deviceInput?.device.activeFormat
    let fDesc: CMFormatDescriptionRef = format!.formatDescription
    let dim: CGSize = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true)
    // dim = dimensions of the final image
    let cx: Float = Float(dim.width) / 2.0;
    let cy: Float = Float(dim.height) / 2.0;
    let HFOV: Float = format!.videoFieldOfView
    let VFOV: Float = ((HFOV)/cx)*cy   // approximates the vertical FOV by scaling with the aspect ratio
    let fx: Float = abs(Float(dim.width) / (2 * tan(HFOV / 180 * Float(M_PI) / 2)));
    let fy: Float = abs(Float(dim.height) / (2 * tan(VFOV / 180 * Float(M_PI) / 2)));
    return (fx, fy, cx, cy)
}
Old thread, present problem.
As Milo and Isaac mentioned after Milo's answer, there seems to be no "common" params available for, say, the iPhone 5.
For what it is worth, here is the result of a run with the MRPT calibration tool, with a good old iPhone 5:
[CAMERA_PARAMS]
resolution=[3264 2448]
cx=1668.87585
cy=1226.19712
fx=3288.47697
fy=3078.59787
dist=[-7.416752e-02 1.562157e+00 1.236471e-03 1.237955e-03 -5.378571e+00]
Average err. of reprojection: 1.06726 pixels (OpenCV error=1.06726)
Note that dist means distortion here.
I am conducting experiments on a toy project with these parameters, and they are kind of OK. If you do use them in your own project, please keep in mind that they may be barely good enough to get started. The best thing is to follow Milo's recommendation with your own data. The MRPT tool is quite easy to use, with the checkerboard they provide. Hope this helps getting started!

Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)?

Folks,
While coding up a few dials and sliders (e.g. like a big volume button one can rotate around) - I found that the standard CGContextAddArc() used like:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE - KR) + 8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    // ... more colour/width/etc. settings
    CGContextAddArc(ctx, dx, dy, radius, 0, 2 * M_PI, 0);
to be unbelievably slow.
On an iPad, with a handful of filled/stroked circles, I get less than some 10 clean [self setNeedsDisplay] updates/second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the emulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc() which seems to be very slow.
//
void CGContextAddCirlce(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc..
    //
    float x = ox + radius;
    float y = oy;

    // stupid hack - should just do a quadrant and mirror it twice.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    };
    CGContextClosePath(ctx);
};
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a vector/straight-line path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Béziers.
BUT:
The code above did not compile as one would read it; M_PI was not the 3.14... actually expected, but was set to (3.14... * (EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (scaled to x100).
Hence it specified the end-arc double a factor of 256 too large.
And it was the latter which made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round..).
So issue now understood (and will keep an optimized/benchmarked version).
Thanks for the help!

How do I use the gravity vector to correctly transform scene for augmented reality?

I'm trying to figure out how to get an object specified in OpenGL to be displayed correctly according to the device orientation (i.e. according to the gravity vector from the accelerometer, and the heading from the compass).
The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180deg as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project.
Can anyone provide sample code that shows how to adjust correctly for the device orientation (ie. gravity vector), or to fix the GLGravity example so that it doesn't include spurious heading changes?
//Clear matrix to be used to rotate from the current referential to one based on the gravity vector
bzero(matrix, sizeof(matrix));
matrix[3][3] = 1.0;
//Setup first matrix column as gravity vector
matrix[0][0] = accel[0] / length;
matrix[0][1] = accel[1] / length;
matrix[0][2] = accel[2] / length;
//Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz}, defined by the equation "Gx * x + Gy * y + Gz * z = 0", in which we arbitrarily set x=0 and y=1
matrix[1][0] = 0.0;
matrix[1][1] = 1.0;
matrix[1][2] = -accel[1] / accel[2];
length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]);
matrix[1][0] /= length;
matrix[1][1] /= length;
matrix[1][2] /= length;
//Setup third matrix column as the cross product of the first two
matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1];
matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0];
matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0];
//Finally load matrix
glMultMatrixf((GLfloat*)matrix);
Here's a clarification showing how to get the elevation and tilt that are needed for gluLookAt solution as shown in my last answer:
// elevation comes from z component (0 = facing horizon)
elevationRadians = asin(gravityVector.z / Vector3DMagnitude(gravityVector));
// tilt is how far screen is from vertical, looking along z axis
tiltRadians = atan2(-gravityVector.y, -gravityVector.x) - M_PI_2;
Following up on Chris's suggestion: I'm not sure if I've got this all correct due to differing conventions of row/column order and heading cw or ccw. However the following code is what I came up with:
Vector3D forward = Vector3DMake(0.0f, 0.0f, -1.0f);
// Multiply it by current rotation matrix to get teapot direction
Vector3D direction;
direction.x = matrix[0][0] * forward.x + matrix[1][0] * forward.y + matrix[2][0] * forward.z;
direction.y = matrix[0][1] * forward.x + matrix[1][1] * forward.y + matrix[2][1] * forward.z;
direction.z = matrix[0][2] * forward.x + matrix[1][2] * forward.y + matrix[2][2] * forward.z;
heading = atan2(direction.z, direction.x) * 180 / M_PI;
// Use this heading to adjust the teapot direction back to keep it fixed
// Rotate about vertical axis (Y), as it is a heading adjustment
glRotatef(heading, 0.0, 1.0, 0.0);
When I run this code, the teapot behaviour has apparently "improved", e.g. the heading no longer flips 180deg when the device screen (in portrait view) is pitched forward/back through upright. However, it still makes major jumps in heading when the device (in landscape view) is pitched forward/back. So something's not right. It suggests that the above calculation of the actual heading is incorrect...
I finally found a solution that works. :-)
I dropped the rotation matrix approach, and instead adopted gluLookAt. To make this work you need to know the device "elevation" (viewing angle relative to the horizon, i.e. 0 on the horizon, +90 overhead), and the camera's "tilt" (how far the device is from vertical in its x/y plane, i.e. 0 when vertical/portrait, +/-90 when horizontal/landscape), both of which are obtained from the device gravity vector components.
Vector3D eye, scene, up;
CGFloat distanceFromScene = 0.8;
// Adjust eye position for elevation (y/z)
eye.x = 0;
eye.y = distanceFromScene * -sin(elevationRadians); // eye position goes down as elevation angle goes up
eye.z = distanceFromScene * cos(elevationRadians); // z position is maximum when elevation is zero
// Lookat point is origin
scene = Vector3DMake(0, 0, 0); // Scene is at origin
// Camera tilt - involves x/y plane only - arbitrary vector length
up.x = sin(tiltRadians);
up.y = cos(tiltRadians);
up.z = 0;
Then you just apply the gluLookAt transformation, and also rotate the scene according to the device heading.
// Adjust view for device orientation
gluLookAt(eye.x, eye.y, eye.z, scene.x, scene.y, scene.z, up.x, up.y, up.z);
// Apply device heading to scene
glRotatef(currentHeadingDegrees, 0.0, 1.0, 0.0);
Try rotating the object depending upon the iPhone acceleration values.
float angle = -atan2(accelX, accelY) * 180.0f / M_PI; // glRotatef below expects degrees; atan2 returns radians
glPushMatrix();
glTranslatef(centerPoint.x, centerPoint.y, 0);
glRotatef(angle, 0, 0, 1);
glTranslatef(-centerPoint.x, -centerPoint.y, 0);
glPopMatrix();
Where centerPoint is the middle point of the object.
oo, nice.
GLGravity seems to get everything right except for the yaw. Here's what I would try. Do everything GLGravity does, and then this:
Project a vector in the direction you want the teapot to face, using the compass or whatever you so choose. Then multiply a "forward" vector by the teapot's current rotation matrix, which will give you the direction the teapot is facing. Flatten the two vectors to the horizontal plane and take the angle between them.
This angle is your corrective yaw. Then just glRotatef by it.
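A minimal sketch of that flatten-and-compare step, written here in C# with System.Numerics just to show the math (the example vectors are made up; in the app, "desired" would come from the compass heading and "facing" from multiplying the forward vector by the rotation matrix, as in the code above):

using System;
using System.Numerics;

// desired: where the teapot should point (e.g. derived from the compass heading)
// facing:  where it currently points (forward vector times the rotation matrix)
Vector3 desired = new Vector3(0.0f, 0.1f, -1.0f);
Vector3 facing = new Vector3(0.3f, -0.2f, -0.9f);

// Flatten both onto the horizontal XZ plane (drop Y) and compare their headings.
float desiredYaw = MathF.Atan2(desired.Z, desired.X);
float facingYaw = MathF.Atan2(facing.Z, facing.X);
float correctiveYawDegrees = (desiredYaw - facingYaw) * 180.0f / MathF.PI;
Console.WriteLine(correctiveYawDegrees); // this is the angle to pass to glRotatef(angle, 0, 1, 0)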
Whether or not the 3GS's compass is reliable and robust enough for this to work is another thing. Normal compasses don't work when the north vector is perpendicular to their face. But I just tried the Maps app on my workmate's 3GS and it seems to cope, so maybe they have got a mechanical solution in there. Knowing what the device is actually doing will help interpret the results it gives.
Make sure to test your app at the north and south poles once you're done. :-)
Getting a much more stable gravity-based reference can now be done using CMMotionManager.
When starting motion updates with startDeviceMotionUpdates(), you can specify a reference frame.
This fuses the accelerometer, gyroscope and, optionally (depending on the chosen reference frame), magnetometer data. Accelerometer data is pretty noisy and bouncy (any sideways motion of the device temporarily tilts the gravity vector by the device's acceleration) and alone doesn't make a good reference.
I've been low-pass filtering the accelerometer data, which helps a bit but makes the system slow.