Getting the pixel color from an image - iPhone

I'm working on a view-based application and am trying to find some code that will let me grab pixel colors from one of my images and use them for collision detection against one of my UIImageViews, but I haven't had any luck finding anything on this subject. So if the UIImageView for my player collides with the UIImageView of my map, and also collides with the color black in the image placed inside of my map view... then run collision code... or something along those lines.

Is your question about getting the pixel color, or about doing collision detection?
If you want to get the pixel color, I'm not sure there's an easy way to do it - you may have to mess with your current graphics context to get it, and nothing is coming up in the docs.
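For what it's worth, the usual trick is to draw the image into a tiny bitmap context of your own rather than poking at the current graphics context. A minimal sketch (the helper name is made up, and it assumes a UIKit-style point with the origin at the top left):

```objc
#import <UIKit/UIKit.h>

// Hypothetical helper: draws the image so that the pixel of interest lands on a
// 1x1 bitmap context, then reads the RGBA bytes back out of that context.
static UIColor *PixelColorInImage(UIImage *image, CGPoint point) {
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    // Flip from UIKit (top-left origin) to Core Graphics (bottom-left origin)
    // and shift so the requested pixel is drawn at (0, 0) of the 1x1 context.
    CGContextTranslateCTM(context, -point.x, point.y - image.size.height);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(context);
    return [UIColor colorWithRed:pixel[0] / 255.0 green:pixel[1] / 255.0
                            blue:pixel[2] / 255.0 alpha:pixel[3] / 255.0];
}
```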
If it's just collision detection you want to do, take a look at UIView's convertPoint:toView: and convertPoint:fromView: methods. They let you take defined points within a given view and get their equivalents in other views. With some basic math on the resultant points, you could theoretically do some pretty good collision detection without having to worry about pixel colors.
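A rough sketch of that rectangle-based approach, assuming playerView and mapView are placeholder names for the two UIImageViews and the code runs inside a view controller (convertRect:toView: is the rect counterpart of convertPoint:toView:):

```objc
// Convert both frames into a common coordinate space, then test for overlap.
CGRect playerRect = [playerView.superview convertRect:playerView.frame toView:self.view];
CGRect mapRect    = [mapView.superview convertRect:mapView.frame toView:self.view];

if (CGRectIntersectsRect(playerRect, mapRect)) {
    // The bounding rectangles overlap - run the collision code here, optionally
    // refining the hit with a pixel-color check like the helper sketched above.
}
```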

Related

How can I make the SCNCamera that I instantiated zoom into only the nodes that I want in SceneKit

Pretend I have 3 nodes in total. One of the nodes is a large SCNSphere; I put the camera inside this sphere and make the sphere double-sided with a textured image. I then put two smaller spheres next to each other in the center of this sphere. I also enable allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and messing up the detail on that sphere.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterialProperty documentation explains the options for supplying a cube map.
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
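For reference, a minimal sketch of the cube-map route. The six image names and scnView are placeholders; the images must be square and all the same size, supplied in +X, -X, +Y, -Y, +Z, -Z order:

```objc
#import <SceneKit/SceneKit.h>

SCNScene *scene = [SCNScene scene];
// The scene's background is an SCNMaterialProperty; giving its contents an
// array of six images makes SceneKit treat them as a cube map around the scene.
scene.background.contents = @[[UIImage imageNamed:@"px"], [UIImage imageNamed:@"nx"],
                              [UIImage imageNamed:@"py"], [UIImage imageNamed:@"ny"],
                              [UIImage imageNamed:@"pz"], [UIImage imageNamed:@"nz"]];
scnView.scene = scene;
```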
I like the idea of using an image as the background, but there are two problems. One: I looked on the web for ways to make an image the background and none of them worked. Two: I want the background to have depth, so to go with that idea I would need a way to zoom into the background and have the image pan in the opposite direction of my drag.

How to trace the intersection of an image with the boundaries of an irregularly shaped image in cocos2d?

I have an image of a mountain with small gutters and tunnels in it. I want to pass a small image through those tunnels. How can I trace the intersection of that small image with the exact boundaries of the large image in cocos2d?
I would make a collision mask for this.
What this means is to create an exact copy of the image you are using for your terrain except make it only two colors: white and black.
Make the areas that you want the player to be able to move through (not walls) white. Make the walls and anything you want the player to collide with black. Next, just do some pixel collision detection. To do this, I would get the RGB data (not RGBA, because alpha doesn't matter here). Loop through this data (or a section of it for better performance) and detect whether the player is on a black or white pixel.
Do whatever you need to accordingly.
If you need more help, feel free to ask.
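A sketch of what that lookup might look like, outside of any cocos2d-specific code; maskImage is the black-and-white copy, and the point is assumed to already be in the mask's pixel coordinates (top-left origin). In a real game you would extract the pixel buffer once and keep it around instead of redrawing it per query:

```objc
// Returns YES if the collision mask is black (a wall) at the given pixel.
static BOOL MaskHasWallAtPoint(UIImage *maskImage, CGPoint point) {
    CGImageRef maskRef = maskImage.CGImage;
    size_t width  = CGImageGetWidth(maskRef);
    size_t height = CGImageGetHeight(maskRef);
    unsigned char *data = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, width, height, 8, width * 4,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), maskRef);
    CGContextRelease(context);

    // Red channel: 255 on white (walkable) pixels, 0 on black (wall) pixels.
    unsigned char red = data[((size_t)point.y * width + (size_t)point.x) * 4];
    free(data);
    return red < 128;
}
```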

UIGesture recognition on different areas of a UIImageView

I have this image (a single image made up of several leaves; not reproduced here):
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) in order to trigger different actions according to the leaf tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap and a tap on one will be recognized as a tap on another. Keeping just one image means I need to know which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
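A minimal sketch of the single-image route, assuming self.imageView is a placeholder property for the UIImageView showing the leaves and handleTap: is a hypothetical handler:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    // UIImageView ignores touches unless user interaction is enabled.
    self.imageView.userInteractionEnabled = YES;
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [self.imageView addGestureRecognizer:tap];
}

- (void)handleTap:(UITapGestureRecognizer *)tap {
    CGPoint point = [tap locationInView:tap.view];
    // Decide which leaf (if any) contains `point`, e.g. with a path test.
}
```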
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested: use Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (which basically means reimplementing CGPathContainsPoint() without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go that route, but honestly, for a shape as simple as what you've drawn, just recreate it with some UIBezierPath code.
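A sketch of that path-based test. The control points are purely illustrative stand-ins for a traced leaf outline, and `point` is the tap location from the gesture handler above:

```objc
// Build one UIBezierPath per leaf, keep the paths around, and hit test against
// them whenever a tap comes in.
UIBezierPath *leafPath = [UIBezierPath bezierPath];
[leafPath moveToPoint:CGPointMake(160, 40)];
[leafPath addQuadCurveToPoint:CGPointMake(200, 160) controlPoint:CGPointMake(240, 80)];
[leafPath addQuadCurveToPoint:CGPointMake(160, 40) controlPoint:CGPointMake(120, 120)];
[leafPath closePath];

BOOL tappedThisLeaf = [leafPath containsPoint:point];
// Or, with a bare CGPath:
// BOOL tappedThisLeaf = CGPathContainsPoint(leafPath.CGPath, NULL, point, false);
```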
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; make anything you want to ignore black. When the full-size image is clicked, get the x/y coordinates and scale them by the same percentage as your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. From that pixel color you will know which leaf was clicked.
Sounds clunky but it works really well and is fast.
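The lookup itself is only a couple of lines once you have a way to read a pixel (the PixelColorInImage helper sketched earlier on this page would do); hitMapImage, fullSizeImageView, and tapPoint are placeholder names:

```objc
// Scale the tap point from the on-screen image down to the small hit-map copy,
// then read the color there and compare it to the fill colors of your regions.
CGFloat scale = hitMapImage.size.width / fullSizeImageView.bounds.size.width;
CGPoint scaledPoint = CGPointMake(tapPoint.x * scale, tapPoint.y * scale);
UIColor *regionColor = PixelColorInImage(hitMapImage, scaledPoint);
```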
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this Stack Overflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

How to draw a light effect over a texture on iPhone using UIKit/Quartz

I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach is to generate a mask depending on the position of the touch and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), and this hurts performance and increases memory usage a lot, eventually crashing the game (I don't think I have any memory leaks, but that's not guaranteed... in any case, the performance itself isn't acceptable).
Can anyone think of a better approach (which doesn't involve using OpenGL ES -- that's not an option in this project) to do this?
To go with my comments above: maybe, to get around the different shadow levels, you could also have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity, and when the spot is over a grid square, that square's alpha opacity changes to 0. When the spot moves off the grid square, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case but what you could do is generate a single mask image, say, a radial alpha gradient and then apply an affine transform to it to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
I would try this: use one view with a custom drawRect: implementation. First draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in a 'multiply' blend mode.
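A sketch of that drawRect: idea, using a plain black fill standing in for the grayscale shadow image; spotImage, spotCenter, and backgroundImage are hypothetical properties of the custom UIView subclass, and the spot image is assumed to be a white radial gradient:

```objc
- (void)drawRect:(CGRect)rect {
    // Dark base layer.
    [[UIColor blackColor] setFill];
    UIRectFill(self.bounds);

    // White spot where the "light" is; everything else stays dark.
    CGPoint spotOrigin = CGPointMake(self.spotCenter.x - self.spotImage.size.width / 2,
                                     self.spotCenter.y - self.spotImage.size.height / 2);
    [self.spotImage drawAtPoint:spotOrigin blendMode:kCGBlendModeNormal alpha:1.0];

    // Multiplying the scene on top shows it only where the layer below is bright.
    [self.backgroundImage drawInRect:self.bounds blendMode:kCGBlendModeMultiply alpha:1.0];
    // Call -setNeedsDisplay whenever spotCenter changes (e.g. in touchesMoved:).
}
```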
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and mask it then? That way the memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, let the mask be initialized once and manipulate it (reset its frame) as needed upon touch events.
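A rough sketch of that idea, assuming the shadow is a plain view (shadowView, a placeholder) and the mask asset is hypothetical: an image that is opaque everywhere except for a transparent "hole" at its center, sized well beyond the view so the rest stays covered wherever the hole goes. Only the mask layer's position changes per touch:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    CGSize shadowSize = self.shadowView.bounds.size;
    CALayer *mask = [CALayer layer];
    mask.contents = (id)[UIImage imageNamed:@"shadow-with-hole"].CGImage; // hypothetical asset
    // Oversize the mask so the shadow view stays fully masked as the hole moves.
    mask.frame = CGRectMake(0, 0, shadowSize.width * 3, shadowSize.height * 3);
    self.shadowView.layer.mask = mask;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.shadowView];
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // no implicit animation on every move
    self.shadowView.layer.mask.position = p; // the hole follows the finger
    [CATransaction end];
}
```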

Creating a bounding box for a UIImageView

How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas which are inside the box but are not part of the actual image.
How does one achieve this?
This is a non-trivial problem. The basics: a CGRect is a rectangle, and a hit test inside a rectangle is fairly easy to understand. However, it sounds like you want a more complex shape. A UIImageView displays an image; it has no idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy thing to do is to look at the alpha/transparency values of the displayed image to define the shape. To answer the question "is this point hitting the image?", figure out the location of the point within the image and return true if the alpha there is greater than 0. If you do this, you can use any image with a transparent background and the code will just work.
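A sketch of that alpha test, assuming the point has already been converted into the image's own pixel coordinates (top-left origin):

```objc
// Draw only the alpha channel of the pixel of interest into a one-byte,
// alpha-only bitmap context and check whether anything opaque landed there.
static BOOL ImageIsOpaqueAtPoint(UIImage *image, CGPoint point) {
    unsigned char alpha = 0;
    CGContextRef context = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    if (context == NULL) {
        return NO;
    }
    // Flip from top-left-origin coordinates to Core Graphics' bottom-left origin.
    CGContextTranslateCTM(context, -point.x, point.y - image.size.height);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(context);
    return alpha > 0;
}
```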
If that will not work for you, you can also run a hit test between a point and a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?