Detecting Touch on PNGs with Alpha - iPhone

I am trying to detect a touch event on a PNG image loaded into a UIImageView. I have everything working fine except that the touch is being tested for the bounding rectangle around the image (as expected). What I would like to do is test if the user has selected part of the visible PNG as opposed to the UIImageView itself.
For example, if I have a horseshoe image, I want it to respond only to touches on the sides and not on the center part where nothing is drawn. I am kind of at a loss on this one; Google reveals a number of people with the same issue but not even a hint towards where to begin looking.

Two ways:
a) You examine the pixel data of your image to determine whether the touched pixel is transparent. You have to draw the image into an offscreen buffer to make this possible; use CGContextDrawImage and CGBitmapContextGetData to get access to the pixel data of UIImage.CGImage. This Apple Q&A explains the basic method for accessing pixel data. (A sketch of this approach appears below.)
b) You have a polygon representation of the horseshoe and use polygon hit testing to determine whether the horseshoe was touched. Google "point in polygon" for algorithms.
Option a) is probably less work if you only need this for a few images, but if you do a lot of hit testing (say, a game with a lot of movement), b) might be better.
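
Here is a minimal sketch of approach a), assuming the UIImageView draws its image stretched to fill its bounds (other content modes need extra mapping); the method name is made up for illustration:

// Rough sketch, not production code: checks whether the touched point in the
// image view lands on a pixel with non-zero alpha.
- (BOOL)touchPoint:(CGPoint)point hitsOpaquePixelOf:(UIImageView *)imageView
{
    CGImageRef cgImage = imageView.image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Map the touch from view coordinates to pixel coordinates.
    NSInteger x = (NSInteger)(point.x * width  / imageView.bounds.size.width);
    NSInteger y = (NSInteger)(point.y * height / imageView.bounds.size.height);
    if (x < 0 || y < 0 || x >= (NSInteger)width || y >= (NSInteger)height) return NO;

    // Draw the image into an offscreen RGBA buffer so the raw pixels can be read.
    unsigned char *data = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    // Row 0 of the buffer is the top row of the image; alpha is the last byte of each RGBA pixel.
    unsigned char alpha = data[(y * (NSInteger)width + x) * 4 + 3];
    free(data);

    return alpha > 0;   // any non-transparent pixel counts as a hit
}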

Is it possible to create a 3D photo from a normal photo?

If I have understood correctly, 3D 360 photos are created from a panorama photo, so I guess it should be possible to create a 3D (non-360) photo from a normal photo. But how? I did not find anything on Google! Any idea of what I should search for?
So far, if nothing is available (and I suspect nothing is), I'll try duplicating the same photo for each eye, with one picture shifted a little to the right and the other shifted a little to the left. But I think the real distortion algorithm is much more complicated.
Note: I'm also receiving answers here: https://plus.google.com/u/0/115463690952639951338/posts/4KdqFcqUTT9
I am in no way certain of this, but my intuition on how 3D 360 images are created in GoogleVR is this:
As you take a panorama image, it actually takes a series of images. As you turn the phone around, the perspective changes slightly with each image, not only by angle, but also offset (except in the unlikely event you spin the phone around its own axis). When it stitches together the final image, it creates one image for each eye, picking suitable images from the series so that it creates a 3D effect when viewed together. The same "area" of the image for each eye comes from a different source image.
You can't do anything similar with a single image. It's the multitude of images produced, each with a different perspective coming from the turning of the phone, that enables the algorithm to create a 3D image.
A 2D image lacks a dimension and hence cannot be converted to 3D just like that, but there are clever workarounds. For example, the Google Pixel, even though it doesn't have two cameras, can make an image seem 3D by applying a machine-learning algorithm that creates an effect of perspective and depth through selective blurring.
3D photos can't be taken with a normal camera, but you can take 360 photos with one. There are many apps that let you do this, and there are also algorithms for doing it programmatically.

UIGesture recognition on different areas of a UIImageView

I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the parts it consists of and add a UITapGestureRecognizer to each part) so that a different action runs depending on which leaf is tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap and a tap on one may be recognized as a tap on another. Keeping just one image means knowing which points of the screen belong to one leaf rather than another.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that is (virtually) overlaid on the image to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (but then you would basically be reimplementing CGPathContainsPoint() without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go that route, but honestly, for a shape as simple as the one you've drawn, just recreate it in code with some UIBezierPath calls (see the sketch below).
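
For instance, a tap handler using hit-test paths might look roughly like this; self.imageView and self.leafPaths are hypothetical properties, the latter an array of UIBezierPath objects, one per leaf, built to match the artwork:

// Rough sketch: decide which leaf (if any) contains the tap.
- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self.imageView];
    [self.leafPaths enumerateObjectsUsingBlock:^(UIBezierPath *path, NSUInteger idx, BOOL *stop) {
        if ([path containsPoint:location]) {
            NSLog(@"Tapped leaf %lu", (unsigned long)idx);   // run the action for this leaf
            *stop = YES;
        }
    }];
}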
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; make anything you want to ignore black. When the full-size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. The pixel color tells you which leaf was clicked (see the sketch below).
Sounds clunky but it works really well and is fast.
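
A rough sketch of that lookup, for what it's worth; hitMap is assumed to be the shrunken, color-keyed copy of the artwork, mapScale the factor it was shrunk by (e.g. 0.25), and red/green/blue are example key colors:

// Rough sketch: returns an index for the leaf whose key color is under the tap,
// or NSNotFound for black / unkeyed areas.
- (NSUInteger)leafIndexForTap:(CGPoint)point inHitMap:(UIImage *)hitMap scale:(CGFloat)mapScale
{
    CGImageRef cgImage = hitMap.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Scale the full-size tap coordinate down to the hit map's size.
    NSInteger x = (NSInteger)(point.x * mapScale);
    NSInteger y = (NSInteger)(point.y * mapScale);
    if (x < 0 || y < 0 || x >= (NSInteger)width || y >= (NSInteger)height) return NSNotFound;

    // Render the (small) hit map into an RGBA buffer and read the pixel's color.
    unsigned char *data = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    unsigned char *p = data + (y * (NSInteger)width + x) * 4;   // p[0..2] = R, G, B
    NSUInteger leaf = NSNotFound;
    if (p[0] > 200 && p[1] < 50 && p[2] < 50)      leaf = 0;    // red region
    else if (p[1] > 200 && p[0] < 50 && p[2] < 50) leaf = 1;    // green region
    else if (p[2] > 200 && p[0] < 50 && p[1] < 50) leaf = 2;    // blue region
    free(data);
    return leaf;
}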
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this stackoverflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Zooming in/out and painting in OpenGL

I've recently had some issues implementing a zooming feature into a painting application. Please let me start off by giving you some background information.
First, I started off by modifying Apple's glPaint demo app. I think it's a great source, since it shows you how to set up the EAGLView, etc...
Now, what I wanted to do next, was to implement zooming functionality. After doing some research, I tried two different approaches.
1) use glOrthof
2) change the frame size of my EAGLView.
While both ways allow me to zoom in and out perfectly, I experience different problems when it actually comes to painting while zoomed in.
When I use (1), I have to render the view like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(left, right, bottom, top, -1.0f, 1.0f); //those values have been previously calculated
glDisable(GL_BLEND);
//I'm using Apple's Texture2D class here to render an image
[_textures[kTexture_MyImage] drawInRect:[self bounds]];
glEnable(GL_BLEND);
[self swapBuffers];
Now, let's assume I zoom in a little, THEN I paint, and after that I want to zoom out again. In order to get this to work, I need to make sure that "kTexture_MyImage" always contains the latest changes. In order to do that, I need to capture the screen contents after changes have been made and merge them with the original image. The problem here is that when I zoom in, my screen only shows part of the image (enlarged), and I haven't found a proper way to deal with this yet.
I tried to calculate which part of the screen was enlarged, then do the capturing. After that I'd resize this part to its original size and use yet another method to paste it into the original image at the correct position.
Now, I could go into more detail on how I achieved this, but it's really complicated and I figured there has to be an easier way. There are already several apps out there that do exactly what I'm trying to achieve, so it must be possible.
As far as approach (2) goes, I can avoid most of the above, since I only change the size of my EAGLView window. However, when painting, the strokes are way off their expected position. I probably need to take the zoom level into account when painting and recalculate the CGPoints in a different way.
However, if you have done similar things in the past or can give me a hint, how I could implement zooming into my painting app, I'd really appreciate it.
Thanks in advance.
Yes, it is definitely possible.
When it comes to paint programs, you should keep a linked list or tree of objects to draw, for easy insertion and removal. When the user stops painting (i.e. in touchesEnded), you add objects to the data structure containing your scene.
When your user zooms you need to modulate the coordinates of the objects you are drawing with respect to the current viewport, projection, and modelview transforms. In your case, you're not changing the viewport or the modelview transforms so you need only account for the projection transform. You could also implement your zoom using a translation and scale on the modelview matrix but I'll ignore that case for simplicity because it involves inverting the transforms.
The good news is that you are using an orthographic projection so world coordinates correspond to window coordinates when no zooming is in effect. The "world" in your case is a simple canvas that probably corresponds to the size of the device in window coordinates.
Before you add an object to your scene data structure, convert all of the coordinates, using the current projection transform (i.e. the parameters to the glOrthof() call) to world coordinates (i.e. full canvas coordinates). You'll only remain sane if you keep all things in your model in the same coordinate space.
To convert the coordinates, assuming you can never zoom out past the full device dimensions in your glOrthof() call, you have to scale them down in proportion to the ratio of your zoomed ortho dimensions to your unzoomed ortho dimensions, and then bias them by the difference between your zoomed ortho bottom/left values and the original unzoomed ones (see the sketch below).
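
Concretely, that conversion might look something like this. This is only a sketch: it assumes you keep the current glOrthof() parameters around, that the touch uses UIKit's usual top-left origin (flip y yourself if your touch handling already does this, as GLPaint does), and that the unzoomed ortho box matches the full canvas.

// Rough sketch: map a touch in view coordinates to full-canvas ("world")
// coordinates under the current zoomed orthographic projection.
CGPoint CanvasPointFromTouch(CGPoint touch,
                             CGFloat zoomedLeft, CGFloat zoomedRight,
                             CGFloat zoomedBottom, CGFloat zoomedTop,
                             CGFloat viewWidth, CGFloat viewHeight)
{
    // Fraction of the way across the view; flip y because UIKit's y axis points
    // down while the ortho y axis points up.
    CGFloat fx = touch.x / viewWidth;
    CGFloat fy = 1.0f - touch.y / viewHeight;

    // Scale by the zoomed ortho extent and bias by its left/bottom edges.
    CGPoint canvas;
    canvas.x = zoomedLeft   + fx * (zoomedRight - zoomedLeft);
    canvas.y = zoomedBottom + fy * (zoomedTop   - zoomedBottom);
    return canvas;
}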

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8 bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First, your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer.
Depth values are written to the depth buffer. Note that depth is written all across your object; transparency does not affect it.
You then draw other objects behind the transparent object.
But none of those objects' pixels will be drawn, because they fail the depth test: they are farther away than the depth values the transparent object already wrote.
The solution to this problem is to draw your scene back-to-front (draw starting at the further away things).
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
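For what it's worth, here is a rough sketch of the usual ordering fix, assuming depth testing is enabled; the two draw functions are placeholders for your own drawing code. Draw opaque geometry first, then transparent geometry sorted far-to-near with depth writes disabled:

// 1. Opaque pass: normal depth testing and depth writes.
glDepthMask(GL_TRUE);
drawOpaqueObjects();

// 2. Transparent pass, back to front: keep the depth test so transparent
// geometry is still hidden behind opaque geometry, but stop writing depth so
// it cannot occlude later transparent draws.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   // premultiplied alpha, as in the question
glDepthMask(GL_FALSE);
drawTransparentObjectsBackToFront();           // sorted by distance from the camera
glDepthMask(GL_TRUE);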

Getting the pixel color from an image

I'm working on a view-based application and am trying to find some code that will let me grab pixel colors from one of my images and use them for collision detection against one of my UIImageViews, but I haven't had any luck finding anything on this subject. So: if the UIImageView for my player collides with the UIImageView of my map && collides with the color black in the image placed inside my map view... then run the collision code... or something along those lines.
Is your question about getting the pixel color, or about doing collision detection?
If you want to get the pixel color, I'm not sure there's an easy way to do it - you may have to mess with your current graphics context to get it, and nothing is coming up in the docs.
If it's just collision detection you want to do, take a look at UIView's convertPoint:toView: and convertPoint:fromView: methods. They let you take defined points within a given view and get their equivalents in other views. With some basic math on the resultant points, you could theoretically do some pretty good collision detection without having to worry about pixel colors.
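
A minimal sketch of the bounding-box part; playerView and mapView are assumed to be your two UIImageViews, and convertRect:fromView: is the rectangle counterpart of the point-conversion methods mentioned above:

// Convert the player's frame into the map view's coordinate space and test overlap.
CGRect playerInMapCoords = [mapView convertRect:playerView.frame
                                       fromView:playerView.superview];
if (CGRectIntersectsRect(playerInMapCoords, mapView.bounds)) {
    // The bounding boxes overlap; if you also need the "black pixel" test,
    // combine this with a pixel-color lookup like the ones discussed for the
    // other questions above.
}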