Getting pixel data from an image on iPhone

I want to use bitmap images as a "map" for levels in an iPhone game. Basically, it's all about the locations of obstacles in a rectangular world. The obstacles would be color-coded: where there's a white pixel there's no obstacle, and black means there is one at that point.
Now I need to use this data for two things: (a) displaying the level map, and (b) in-game calculations. In general, I need a way to read the data from the bitmap and build a matrix-like data structure from it, both to overlay the bitmap onto the level graphics and to calculate collisions and the like.
How should I do it? Is there an easy way to read the data from an image? And what's the best image format to store these maps in?

Have you looked at how Texture2D translates an image file into an OpenGL texture?
Tip: take a look at this method in Texture2D.m:
- (id) initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum) filter
In 3D apps it's quite common to use this kind of representation for height maps. In a height map, you use a texture with colors ranging from black to white (white represents the maximum altitude).
For example, from a grayscale height-map image to rendered 3D terrain (the original answer showed before/after images here).
That was just to tell you that your representation is not that crazy :).
About reading the bitmap, I would also recommend reading this (just in case you want to go deeper).
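To make the bitmap-reading part concrete, here is a minimal Swift sketch (the function name and the 128 threshold are my own; the era of the original question would have used Objective-C and CGBitmapContextCreate, but the CoreGraphics calls are the same). It draws the image into a context with a known byte layout and thresholds each pixel:

```swift
import CoreGraphics

// Minimal sketch: read a black/white level bitmap into a Bool matrix
// (true = obstacle). PNG is a good storage format here because it is
// lossless; JPEG artifacts would smear the black/white boundary.
func obstacleMap(from image: CGImage) -> [[Bool]] {
    let w = image.width, h = image.height
    var pixels = [UInt8](repeating: 0, count: w * h * 4)
    pixels.withUnsafeMutableBytes { buffer in
        // Drawing into our own context guarantees an RGBA8888 layout,
        // whatever the source file's internal format was.
        let ctx = CGContext(data: buffer.baseAddress,
                            width: w, height: h,
                            bitsPerComponent: 8, bytesPerRow: w * 4,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))
    }
    var map = [[Bool]](repeating: [Bool](repeating: false, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            // The red channel is enough for a black/white map; 128 is
            // an arbitrary threshold. Dark pixel -> obstacle.
            map[y][x] = pixels[(y * w + x) * 4] < 128
        }
    }
    return map
}
```

You'd get the CGImage from something like UIImage(named: "level1")?.cgImage, and the same pixel buffer can also be uploaded as an OpenGL texture for the overlay.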
Hope I helped a bit!

Related

Different ways to detect size of image on mesh versus size of mesh

I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel; I posted my solution to this HERE. However, this process adds a few seconds to loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area that I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and the surface-area calculation (a sketch of the area computation follows below). I'm not sure, however, whether this would be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or the path length may be good enough to determine whether the sprite is 'empty'.
Sprite.vertices and Sprite.triangles may also be helpful.
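Given an ordered outline (such as one collider path), the enclosed area follows from the shoelace formula. A minimal sketch, in Swift for consistency with the other examples on this page (Unity would hand you Vector2[]; SIMD2<Float> stands in for it):

```swift
// Shoelace formula: area of a simple polygon from its ordered outline
// points. abs() makes the result independent of winding direction.
func polygonArea(_ points: [SIMD2<Float>]) -> Float {
    guard points.count >= 3 else { return 0 }
    var twiceArea: Float = 0
    for i in points.indices {
        let p = points[i]
        let q = points[(i + 1) % points.count]
        twiceArea += p.x * q.y - p.y * q.x
    }
    return abs(twiceArea) / 2
}
```

Comparing the summed path areas to the mesh's surface area would give the approximation described above without touching any pixels.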
2) You could also improve performance of your first approach:
instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop (see the sketch after this list).
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
check only every 2nd or nth pixel, which should be good enough for an approximation
limit the number of type casts
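To make the single-loop bullet concrete, here is a sketch in Swift over a raw RGBA byte array (standing in for the Color32[] that GetPixels32 returns, where alpha is the fourth byte of each pixel); the function name and `step` parameter are my own:

```swift
// Sketch of the "one loop over the whole pixel array" idea. `rgba` is
// 4 bytes per pixel with alpha last. `step` > 1 samples only every nth
// pixel for a faster approximation.
func transparentFraction(rgba: [UInt8], step: Int = 1) -> Double {
    var transparent = 0, sampled = 0
    var i = 3 // index of the first alpha byte
    while i < rgba.count {
        if rgba[i] == 0 { transparent += 1 }
        sampled += 1
        i += 4 * step
    }
    return sampled > 0 ? Double(transparent) / Double(sampled) : 0
}
```

One pass over a flat array avoids the per-call overhead of repeated GetPixel calls, which is where most of the loading-time cost goes.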

Why do 3D games need to separate a material into so many textures for a static object?

Perhaps the question isn't phrased quite right; the textures should perhaps be called a kind of channel, although I know they are ultimately mixed together in the shader.
I know that understanding the various texture types is very important, but it's also a bit hard to grasp completely.
From my understanding:
diffuse - the 'real' color of an object, with no lighting involved.
light - for static objects; lighting effects rendered into a texture beforehand.
specular - the areas with direct reflection.
ao - ambient occlusion; how strongly different areas of an object are shielded from indirect light.
alpha - to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used when computing lighting.
bump - (I don't know the exact difference from a normal map).
height - stores Z-range values; used to generate terrain, displace vertices, etc.
And the items below are probably related to PBR materials, which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if some misunderstandings there.
But in any case, my question is: why do we need to keep these textures separate for a material, rather than rendering them all together into the diffuse map directly for a static object?
Some examples (especially for PBR) would be appreciated. Thank you very much.
I can bake everything into the diffuse map beforehand and apply that to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game, which you can't do if you combine them. For example, when you have two similar objects but want to randomize their looks (an aging effect), you can have them share the same color (albedo) map but use different AO maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you combined everything into one texture, it would be impossible to share it between similar objects that you want to look slightly different.
Customize-able:
If you separate them, you can change how strongly each texture affects the object; for example, the slider on the metallic slot of the Standard shader. There are more of these sliders on the other map slots, but they only appear once you plug a texture into the slot. You can't do this when you combine the textures into one.
Shader:
The Standard shader can't do this, so you would have to learn to write shaders: you can't get the effects of all those texture maps from one image with the Standard shader. A custom shader is required, along with some way to read the per-map information out of the combined texture.
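As a toy illustration of why baking everything down loses information (in Swift like the other sketches on this page, with hypothetical names): the separate maps interact with values that only exist at run time, such as the light's color and direction.

```swift
// Toy sketch (hypothetical names): how separate maps combine at shade
// time. Baking all of this into one diffuse map would freeze lightColor
// and ndotl at authoring time; separate maps let them vary at run time.
struct SurfaceSample {
    var albedo: SIMD3<Float>   // from the diffuse/albedo map
    var ao: Float              // from the ambient-occlusion map
    var emissive: SIMD3<Float> // from the emissive map
}

func shade(_ s: SurfaceSample, lightColor: SIMD3<Float>, ndotl: Float) -> SIMD3<Float> {
    let lit = s.albedo * lightColor * max(ndotl, 0) // diffuse term
    return lit * s.ao + s.emissive                  // occlusion, then self-glow
}
```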
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for representing some property (say, "roughness" within some BRDF model), which you will encounter if you are using some kind of engine, or whatever you decide that detail to be if you are writing your own engine. You can store whatever you want, however you want.
You'll notice on that page that different "mapping" techniques are mentioned, each with its own page. Each is the result of someone doing research and publishing a paper detailing the technique; others adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.

Placing objects around image highlights - cocos2d / OpenGL / Core Graphics?

I would like to extract the white/bright areas of an image and place custom objects in those areas. I need to know which framework to work with; if anyone has done something similar, I would appreciate an answer. I know how to get pixel values; the hard part is creating a bloom/star effect in those highlighted areas.
You could make a mask of the pixels whose luminance is above a threshold, then blur the mask (or process it however you like) and composite it over the image.
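A sketch of the masking step (Swift, operating on a packed RGBA8888 buffer; the Rec. 601 luma weights and the 0.8 threshold are arbitrary choices):

```swift
// Build a mask of pixels whose luminance exceeds a threshold. `rgba`
// is a tightly packed RGBA8888 buffer; the result is one byte per
// pixel (255 = bright, 0 = dark), ready to be blurred and composited.
func brightMask(rgba: [UInt8], threshold: Float = 0.8) -> [UInt8] {
    let count = rgba.count / 4
    var mask = [UInt8](repeating: 0, count: count)
    for p in 0..<count {
        let i = p * 4
        // Rec. 601 luma weights.
        let luma = 0.299 * Float(rgba[i])
                 + 0.587 * Float(rgba[i + 1])
                 + 0.114 * Float(rgba[i + 2])
        if luma / 255 > threshold { mask[p] = 255 }
    }
    return mask
}
```

The blur-and-composite step could then be done with Core Image's CIGaussianBlur and an additive blend, or with Accelerate/vImage.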

Map image to irregular polygon?

Say the user taps 4 spots on the iPhone, defining an irregular 4-sided polygon (in 2D space). Is there a way to map/fit a (potentially highly distorted) image onto this shape without using OpenGL?
Something like this: (example image omitted)
Is my only option to somehow calculate the 3D space that my irregular 4-sided shape sits in (based on where the tapped 2D points sit), create an OpenGL plane in that space, and map my texture onto it flatly? It seems like there should be an easier way...
Thanks in advance.
Update: After diving into OpenGL I'm almost there... but I still can't get the texture to distort correctly. The triangulation seems to be messing with the texture mapping: (screenshot omitted)
I can't answer your question completely, but one thing I can say is that you don't need to think about any conversion or mapping to 3D. Using OpenGL you can easily draw the shape in 2D and have the texture mapped as you desire; no need for any fancy maths or conversions. It's no more complicated than drawing a rectangle: OpenGL doesn't care that your 4-sided shape isn't actually rectangular.
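About the seam described in the update: splitting the quad into two triangles and interpolating 2D texture coordinates affinely is exactly what causes that artifact. One standard fix (not part of the original answer) is projective texture mapping on the quad: scale each corner's (s, t) by a factor q derived from where the diagonals intersect, and submit 4-component texture coordinates so the q division happens during rasterization. A hedged Swift sketch of computing those coordinates:

```swift
import simd

// Projective texture coordinates for an arbitrary convex quad
// p0..p3 (in draw order), so that its two triangles interpolate the
// texture without a visible seam. Standard trick: find where the
// diagonals intersect, then for each corner set q = (d_i + d_opp) /
// d_opp and pass (s*q, t*q, 0, q) as the texture coordinate.
func projectiveTexCoords(_ p: [SIMD2<Float>]) -> [SIMD4<Float>]? {
    precondition(p.count == 4)
    // Intersection of diagonals p0-p2 and p1-p3.
    let d1 = p[2] - p[0], d2 = p[3] - p[1], r = p[1] - p[0]
    let denom = d1.x * d2.y - d1.y * d2.x
    guard abs(denom) > 1e-6 else { return nil } // degenerate quad
    let t = (r.x * d2.y - r.y * d2.x) / denom
    let center = p[0] + t * d1

    let uv: [SIMD2<Float>] = [[0, 0], [1, 0], [1, 1], [0, 1]]
    return (0..<4).map { i in
        let dI = simd_distance(p[i], center)
        let dOpp = simd_distance(p[(i + 2) % 4], center)
        let q = dOpp > 1e-6 ? (dI + dOpp) / dOpp : 1
        return SIMD4<Float>(uv[i].x * q, uv[i].y * q, 0, q)
    }
}
```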

Algorithm for "filling in" texture in a 2D image

I recall seeing a paper a while back describing an algorithm that could automatically and seamlessly "graft" texture from one part of an image onto another part.
The approach was something along the lines of the following:
You'd build up a database of small squares of pixels (perhaps 8x8) from the parts of the picture that are present.
You'd then pick an empty pixel (the "destination" for the texture graft) and look for the square in your database that most closely matches the surrounding pixels. You'd then color the empty pixel according to the color of the corresponding pixel in that square. Then you pick another empty pixel and repeat until no empty pixels remain.
Of course, this is only a vague description because I can't find any references to this algorithm to refresh my memory of the details! Can anyone help?
Sounds a lot like Texture Synthesis by Non-parametric Sampling (Efros & Leung, 1999)
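A rough Swift sketch of the fill loop as described above, assuming a grayscale image and a known-pixel mask (with the "graft" framing, `known` starts true everywhere except the destination region). This is brute force; the actual paper adds Gaussian window weighting, a random choice among near-best matches, and an onion-skin fill order:

```swift
// Fill unknown pixels by copying the center of the best-matching
// known neighborhood, Efros & Leung style.
func synthesize(image: inout [[Float]], known: inout [[Bool]], half: Int = 4) {
    let h = image.count, w = image[0].count

    // Mean squared difference over window pixels known in BOTH places.
    func distance(_ y: Int, _ x: Int, _ sy: Int, _ sx: Int) -> Float {
        var sum: Float = 0; var n = 0
        for dy in -half...half {
            for dx in -half...half {
                let ay = y + dy, ax = x + dx
                guard ay >= 0, ay < h, ax >= 0, ax < w,
                      known[ay][ax], known[sy + dy][sx + dx] else { continue }
                let d = image[ay][ax] - image[sy + dy][sx + dx]
                sum += d * d; n += 1
            }
        }
        return n > 0 ? sum / Float(n) : .greatestFiniteMagnitude
    }

    var filledSomething = true
    while filledSomething {
        filledSomething = false
        for y in 0..<h {
            for x in 0..<w where !known[y][x] {
                // Only fill pixels that border something known.
                let borders = (max(y - 1, 0)...min(y + 1, h - 1)).contains { yy in
                    (max(x - 1, 0)...min(x + 1, w - 1)).contains { known[yy][$0] }
                }
                guard borders else { continue }
                // Search all fully interior source windows for the best match.
                var bestDist = Float.greatestFiniteMagnitude
                var bestValue: Float = 0
                for sy in half..<(h - half) {
                    for sx in half..<(w - half) where known[sy][sx] {
                        let d = distance(y, x, sy, sx)
                        if d < bestDist { bestDist = d; bestValue = image[sy][sx] }
                    }
                }
                if bestDist < .greatestFiniteMagnitude {
                    image[y][x] = bestValue
                    known[y][x] = true
                    filledSomething = true
                }
            }
        }
    }
}
```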