Why are UIView frames made up of floats?

Technically the x, y, width, and height represent a set of dimensions that relate to pixels. I can't have 200.23422 pixels, so why do they use floats instead of ints?

The reason for the floats is that modern CPUs and GPUs are optimized to work with many floating-point numbers in parallel. This is true for iOS as well as the Mac.
With Quartz you don't address individual pixels; everything you draw is antialiased. If you draw at the coordinate (1.0, 1.0), color is actually applied to the 2x2 block of pixels surrounding that point.
This is why you may get blurry lines if you draw at integer coordinates. On non-Retina screens you have to offset your coordinates by 0.5; technically you would need an offset of 0.25 to hit exact pixels on Retina displays, though at that pixel density it does not really matter because you can't see the difference any more.
Long story short: you don't address pixels directly; the graphics engine maps between floating-point coordinates and pixels for you.
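To make the half-pixel point concrete, here is a small sketch (plain Python computing just the geometry, not iOS code; a 1x non-Retina scale is assumed) of how much of each pixel column a one-unit-wide vertical stroke covers:

def covered_columns(center_x, stroke_width=1.0):
    left = center_x - stroke_width / 2
    right = center_x + stroke_width / 2
    # each whole-number interval [n, n+1) is one pixel column
    return [(n, min(right, n + 1) - max(left, n))  # (column, coverage fraction)
            for n in range(int(left // 1), int(-(-right // 1)))]

print(covered_columns(3.0))  # [(2, 0.5), (3, 0.5)] -> two half-covered, blurry columns
print(covered_columns(3.5))  # [(3, 1.0)]           -> one fully covered, crisp column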

Resolution independence.
You want to keep your mathematical representation of your UI as accurate as practicable, only translating to pixel int values when you actually need to draw to the output device (and even then, not really). That's so that you can apply any number of transformations to your views and still get an accurate result.
Moreover, it is possible to render lines at half-pixel widths, and even less, with a visible result: the system uses intelligent antialiasing to display a fine line.
It's the same principle vector drawing has used for decades (Adobe's PostScript, SVG, etc.). In fact, Quartz is based on PDF, which is the modern descendant of PostScript. NeXT used Display PostScript in its day, and back then it was considered pretty revolutionary.
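To see why keeping the math in floats matters, here is a minimal sketch (Python; the 1.3 scale factor is an arbitrary choice for illustration) comparing a float pipeline with one that rounds to ints at every step:

scale = 1.3
x = 10.0
for _ in range(5):          # five successive scale transforms
    x *= scale
print(x)                    # 37.129...: round once at the end -> pixel 37

xi = 10
for _ in range(5):
    xi = round(xi * scale)  # the int pipeline rounds at every step
print(xi)                   # 38: already off by a pixel after five steps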

The dimensions are actually points, which on non-Retina screens have a 1:1 relation to pixels; on a Retina screen 1 point = 2 pixels along each axis. So on a Retina screen you can meaningfully increment by half a point.
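A tiny sketch of that point-to-pixel relationship (Python; the 2x scale is an assumption, real devices may be 1x, 2x, or 3x):

scale = 2.0  # pixels per point (1.0 on non-Retina)
for points in [0.0, 0.5, 1.0, 1.5]:
    print(points, "pt ->", points * scale, "px")  # half-point steps land on whole pixels at 2x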

Related

Scale graphics with pygame_util

I am building a model with pymunk and I need to use real dimensions (physical size of model is approximately 1 meter). Is there a way to scale the graphics in pygame_util so that 1 meter corresponds to 800 pixels?
Pymunk itself is unitless, as described here: http://www.pymunk.org/en/latest/overview.html#mass-weight-and-units
Pymunk 6.1 (and later)
With Pymunk 6.1 it's now possible to set a Transform on the SpaceDebugDrawOptions object (or on one of the library-specific implementations such as pymunk.pygame_util.DrawOptions), as documented here: http://www.pymunk.org/en/latest/pymunk.html#pymunk.SpaceDebugDrawOptions.transform
With this new feature it should be possible to set a scaling Transform to achieve what you are asking about, as in the sketch below.
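A minimal sketch of that approach (assuming pymunk 6.1+ and a hypothetical pygame `screen` surface that already exists; 1 pymunk unit = 1 meter, mapped to 800 pixels):

import pymunk
import pymunk.pygame_util

draw_options = pymunk.pygame_util.DrawOptions(screen)   # `screen`: your pygame surface
draw_options.transform = pymunk.Transform.scaling(800)  # 1 m -> 800 px
# then, inside the render loop:
# space.debug_draw(draw_options)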
Pymunk 6.0 (and earlier)
When used with pygame_util the distances will be measured in pixels, e.g. a 10x20 box shape (create_box(size=(10,20))) will be drawn as a 10x20 pixel rectangle. This means that the easiest way to achieve what you ask is to simply define that one Pymunk length unit is 0.125 cm, which makes the box shape above 1.25 cm x 2.5 cm.
An alternative would be to scale the surface once drawing is complete. So instead of passing the screen surface to pymunk.pygame_util.DrawOptions(), you use a custom surface that you scale after the space has been drawn, and then blit the result to the screen (see the sketch below). I don't think this option is as good as the first, since there may be scaling artifacts, but depending on your exact use case it might work.
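A rough sketch of that alternative (assuming a `space` and an 800x600 `screen` already exist; the 2x factor is arbitrary):

import pygame
import pymunk
import pymunk.pygame_util

buffer = pygame.Surface((400, 300))                   # off-screen surface at native size
draw_options = pymunk.pygame_util.DrawOptions(buffer)
space.debug_draw(draw_options)                        # draw the space onto the buffer
scaled = pygame.transform.smoothscale(buffer, (800, 600))
screen.blit(scaled, (0, 0))                           # blit the scaled frame to the screen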

Drawing a shape with dimensions in millimeters

I have dimensions in millimeters (mostly rectangles and squares) and I'm trying to draw them to their size.
Something like 6.70 x 4.98 x 3.33 mm.
I really won't be using the depth in the object but just threw it in.
New to drawing shapes with my hands ;)
Screens are typically measured in pixels (Android) or points (iOS); both descend from the old standard of 72 points per inch. Now, though, we have devices with different pixel ratios. To figure out an exact physical size you need to determine the current device's screen size and its pixel ratio. Both can be obtained from WidgetsBinding.instance.window... From there you just do the math to convert those measurements to mm (sketched below).
However, this seems like an odd requirement, so you may just be asking how to draw a square of an exact size. You may want to look into the Canvas/Paint API, which can be used in conjunction with a CustomPainter. Another option is a Stack with some Positioned.fromRect or .fromRelativeRect children, and draw them using that setup.
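The conversion math itself is straightforward; here is a sketch in Python (the density and pixel-ratio values are made-up placeholders, you would query the real ones from the device):

MM_PER_INCH = 25.4

def mm_to_logical_px(mm, physical_ppi=400.0, device_pixel_ratio=2.5):
    physical_px = mm / MM_PER_INCH * physical_ppi  # millimeters -> physical pixels
    return physical_px / device_pixel_ratio       # physical -> logical pixels

print(mm_to_logical_px(6.70))  # logical width of the 6.70 mm rectangle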

Floats SpriteKit

Why does SpriteKit use floats for position even though there can't be a decimal point in pixels?
I am building a game in SpriteKit and am wondering whether I should store the character positions as ints to optimize, but I'm not sure whether there would be any performance improvement.
Is it worth converting the character positions to ints to optimize?
In general, it's simpler to use floats than integers when dealing with graphics above the bit-twiddling level. Note that Core Graphics also uses floats for coordinates.
The reason it's simpler is that you want to do things like scale (by a non-integer factor) and rotate coordinates, images, paths, etc. If you use integers, the rounding errors accumulate by a noticeable amount quickly. If you use floats, the rounding errors take much longer to become noticeable - usually they don't become noticeable at all.
Also consider that an SKNode has xScale and yScale properties. If you scale up a node, the fractional part of its children's positions can have a large effect on their positions on the screen.
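To illustrate how quickly per-step rounding accumulates, here is a quick Python sketch (not SpriteKit code) that rotates a point by one degree 360 times, once in floats and once rounding to ints after every step:

import math

def rotate(x, y, deg):
    r = math.radians(deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

fx, fy = 100.0, 0.0  # float pipeline
ix, iy = 100, 0      # int pipeline, rounded after every step
for _ in range(360):
    fx, fy = rotate(fx, fy, 1)
    ix, iy = [round(v) for v in rotate(ix, iy, 1)]

print(fx, fy)  # ~(100.0, 0.0): the float errors stay negligible
print(ix, iy)  # drifts by several units after 360 rounded steps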
You should not convert your character positions to ints. SpriteKit uses floats, so you would just end up converting them back to floats to use them with SpriteKit; the CPU has floating-point support, and the GPU is designed to perform massive amounts of floating-point arithmetic.
The other answers are good, but keep in mind that you are NOT working in pixels: you are working in points. Depending on the device, a point maps to a different number of pixels. The reason behind this was to accommodate Retina displays, which had the same physical size: you keep working in points, and when the pixel density increases everything still lines up, because the number of pixels per point went up while the number of points on the screen stayed the same.
SpriteKit uses floats because the values involved are not that large: the maximum screen size (the Retina iPad's) is 2048 by 1536, which is well within the range a float represents exactly, so there is no need to declare positions as integers when floats work just fine at those sizes.
There is no use converting to integers. I've made a game in SpriteKit and it works just fine using CGFloats.
If you convert to integers, do note that almost anything dealing with a sprite's image or object's size will be in floats.

Fragment Shader - Average Luminosity

Does anybody know how to find the average luminosity of a texture in a fragment shader? I have access to both RGB and YUV textures; the Y component in YUV is an array, and I want to get an average value from it.
I recently had to do this myself for input images and video frames that I had as OpenGL ES textures. I didn't go with generating mipmaps for these due to the fact that I was working with non-power-of-two textures, and you can't generate mipmaps for NPOT textures in OpenGL ES 2.0 on iOS.
Instead, I did a multistage reduction similar to mipmap generation, but with some slight tweaks. Each step down reduced the size of the image by a factor of four in both width and height, rather than the normal factor of two used for mipmaps. I did this by sampling from four texture locations that were in the middle of the four squares of four pixels each that made up a 4x4 area in the higher-level image. This takes advantage of hardware texture interpolation to average the four sets of four pixels, then I just had to average those four pixels to yield a 16X reduction in pixels in a single step.
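Here is a rough CPU-side illustration of the arithmetic of one such pass (Python/NumPy standing in for the shader; the hardware's bilinear fetch at a 2x2 block center is what averages each group of four texels for free):

import numpy as np

img = np.random.rand(480, 640).astype(np.float32)  # stand-in 640x480 luminance frame

def reduce_pass(a):
    h, w = a.shape
    # a bilinear fetch at each 2x2 block center == the mean of that block
    bilinear = a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # the shader then averages four such samples: one value per 4x4 block
    bh, bw = bilinear.shape
    return bilinear.reshape(bh // 2, 2, bw // 2, 2).mean(axis=(1, 3))

small = reduce_pass(img)  # 480x640 -> 120x160: a 16X reduction in one pass
print(small.shape, float(small.mean()), float(img.mean()))  # the means agree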
I converted the image to luminance at the very first stage using a dot product of the RGB values with a vec3 of (0.2125, 0.7154, 0.0721). This allowed me to just read the red channel for each subsequent reduction stage, which really helps on iOS hardware. Note that you don't need this if you are starting with a Y channel luminance texture already, but I was dealing with RGB images.
Once the image had been reduced to a sufficiently small size, I read the pixels from that back onto the CPU and did a last quick iteration over the remaining few to arrive at the final luminosity value.
For a 640x480 video frame, this process yields a luminosity value in ~6 ms on an iPhone 4, and I think I can squeeze out a 1-2 ms reduction in that processing time with a little tuning. In my experience, that seems faster than the iOS devices normally generate mipmaps for power-of-two images at around that size, but I don't have solid numbers to back that up.
If you wish to see this in action, check out the code for the GPUImageLuminosity class in my open source GPUImage framework (and the GPUImageAverageColor superclass). The FilterShowcase example demonstrates this luminosity extractor in action.
You generally don't do this just with a shader.
One of the more common methods is to create a buffer texture with a full mipmap chain (down to 1x1; this is important). When you want to find luminosity, you copy the backbuffer to this buffer, then regenerate the mips with an averaging (box) filter. The bottom 1x1 level will then hold the average color of the entire surface, which can be used to find average lum through something like (c.r * 0.6) + (c.g * 0.3) + (c.b * 0.1). (Edit: if you have YUV, do the same and use the Y; the trick is just averaging the texture down to a single value, which is what mips do.)
This isn't a precise technique, but is reasonably fast, especially on hardware that can generate mipmaps internally.
I'm presenting a solution for the RGB texture here as I'm not sure mip map generation would work with a YUV texture.
The first step is to create mipmaps for the texture, if not already present:
glGenerateMipmapOES(GL_TEXTURE_2D);
Now we can access the RGB value of the smallest mipmap level from the fragment shader by using the optional third argument of the sampler function texture2D, the "bias":
vec4 color = texture2D(sampler, vec2(0.5, 0.5), 8.0);
This will shift the mipmap level up eight levels, resulting in sampling a far smaller level.
If you have a 256x256 texture and render it with a scale of 1, a bias of 8.0 will effectively reduce the picked mipmap to the smallest 1x1 level (256 / 2^8 == 1). Of course you have to adjust the bias for your conditions to sample the smallest level.
OK, now we have the average RGB value of the whole image. The third step is to reduce RGB to a luminosity:
float lum = dot(vec3(0.30, 0.59, 0.11), color.xyz);
The dot product is just a fancy (and fast) way of calculating a weighted sum.
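Spelled out, that shader line is just the following weighted sum (those weights are the classic NTSC/Rec. 601 luma coefficients); a Python equivalent:

def luminance(r, g, b):
    return 0.30 * r + 0.59 * g + 0.11 * b

print(luminance(1.0, 1.0, 1.0))  # 1.0: pure white has full luminance
print(luminance(0.0, 1.0, 0.0))  # 0.59: green dominates perceived brightness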

Warping an image on the iphone with OpenGL

I am fairly new to programming and I'm doing it, at this point, just to educate myself and have fun.
I'm having a lot of trouble understanding some OpenGL stuff despite having read this great article here. I've also downloaded and played around with an example from the apple developer site that uses a .png image for a sprite. I do eventually want to use an image.
All I want to do is take an image and warp it so that its four corners end up at four different x,y coordinates that I supply. This would be on a timer of sorts (CADisplayLink?), with one or more of these points changing at each moment. I just want to stretch the image between these dynamic points.
I'm just having trouble understanding exactly how this works. As I've understood some example code over at the developer center, I can use:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
where spriteVertices is something like:
const GLfloat spriteVertices[] = {
    -0.90f, -0.85f,
     0.95f, -0.83f,
    -0.85f,  0.85f,
     0.80f,  0.80f,
};
The problem is that I don't understand what the numbers actually mean, why some have negatives in front of them, and where they are counted from to get the four corners. How would I need to change the normal x,y coordinates I have in order to plug them into this? (The numbers I would have for x,y wouldn't be between 0 and 1, would they?) I would like something akin to per-pixel accuracy.
Any help is greatly appreciated even if it's just a link to more reading. I'm having trouble finding resources for a newb.
It isn't as complicated as it seems at first. Each pair of numbers is an x,y position in OpenGL's normalized coordinate system, which runs from -1.0 to 1.0 on each axis with (0, 0) at the center of the drawable area. So 0.80f, 0.80f says: go 80% of the way from the center toward the positive edge in both x and y (left to right, bottom to top), while -0.80, -0.80 goes 80% of the way toward the opposite sides; the negatives just switch sides. A point of note: OpenGL's y axis points up (as if you were looking up at a building from the ground), while the iPhone draws top to bottom (as though you were reading a book).
To get pixels, you map the -1..1 range onto the drawable size: pixel_x = (x + 1) / 2 * width, so 0.8 on a 1024-pixel-wide drawable lands at (0.8 + 1) / 2 * 1024 = 921.6 (see the sketch below).
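For per-pixel accuracy you just convert back and forth between pixel coordinates and these normalized coordinates. A sketch in Python (a 1024x768 drawable is assumed; note the y flip between screen space and OpenGL):

def pixel_to_ndc(px, py, width=1024.0, height=768.0):
    # screen y grows downward, OpenGL's normalized y grows upward
    return 2.0 * px / width - 1.0, 1.0 - 2.0 * py / height

def ndc_to_pixel(x, y, width=1024.0, height=768.0):
    return (x + 1.0) / 2.0 * width, (1.0 - y) / 2.0 * height

print(pixel_to_ndc(921.6, 76.8))  # (0.8, 0.8)
print(ndc_to_pixel(0.8, 0.8))     # (921.6, 76.8)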
This tutorial is for textures, but it is amazing and really helps you learn the coordinate systems:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html