Zooming a hotspot when it is tapped - iPhone

I have to make some hotspots on my image, so that when I tap any one of them it zooms.
For example, consider the image: when I tap the butterfly, it should zoom in on it.
Please suggest a good solution.
Here is my thinking so far:
Use a button; when it is tapped, add a sub-image view and zoom that.
Zoom the particular region of the image that was touched.
Both ideas have their limitations. With the first, I have to create a separate image for each hotspot, so my app size will become very large.
The limitation of the second is that it will not zoom exactly the required region.
I was also thinking of masking, but I don't think that is a very good way either, because this is just a sample; I have many images like this, and a single image can have many hotspots.
Please guide me.

If you are dealing with rather large images, you should check out CATiledLayer, which is very fast and efficient to use.
Check out this blog post, including the demo app:
http://www.cimgf.com/2011/03/01/subduing-catiledlayer/
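For the hotspot behaviour itself, here is a rough Swift sketch, under stated assumptions (the hotspot rects, the "scene" image name, and the class name are all illustrative, not from the post): put the image in a UIScrollView and zoom to whichever hotspot rect contains the tap, so no separate per-hotspot image is needed.

    import UIKit

    final class HotspotZoomViewController: UIViewController, UIScrollViewDelegate {
        private let scrollView = UIScrollView()
        private let imageView = UIImageView(image: UIImage(named: "scene"))
        // Hotspot rects in image coordinates; e.g. the butterfly. Assumed values.
        private let hotspots = [CGRect(x: 120, y: 80, width: 60, height: 60)]

        override func viewDidLoad() {
            super.viewDidLoad()
            scrollView.frame = view.bounds
            scrollView.delegate = self
            scrollView.maximumZoomScale = 4.0
            imageView.isUserInteractionEnabled = true
            scrollView.addSubview(imageView)
            scrollView.contentSize = imageView.bounds.size
            view.addSubview(scrollView)

            let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
            imageView.addGestureRecognizer(tap)
        }

        func viewForZooming(in scrollView: UIScrollView) -> UIView? { imageView }

        @objc private func handleTap(_ tap: UITapGestureRecognizer) {
            let point = tap.location(in: imageView)
            // Zoom to the first hotspot containing the tap.
            if let hit = hotspots.first(where: { $0.contains(point) }) {
                scrollView.zoom(to: hit, animated: true)
            }
        }
    }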

Related

Detect coordinates of some element on an image

Please tell me how to solve this problem: where to start and which way to go.
I have an image with some buttons.
How can I detect the coordinates of the blue round button, for example?
The difficulty lies in the fact that these are not application buttons, but just a picture on the desktop.
I understand that this is a vast and complex question, but please point me at least in the right direction.
It will be useful to many people.
The first thing I can imagine is to take a screenshot of the desktop and then try to detect pixels of blue color.
You don't need to do manual image detection because Apple's Vision framework already does this. You can use it to detect rectangular regions, detect text, or recognize an image within an image, depending on your needs.
See Detecting Objects in Still Images
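As a hedged sketch of that Vision approach (assuming you already have a CGImage of the captured screen; the function name is made up), rectangular-region detection looks roughly like this:

    import Vision
    import CoreGraphics

    func detectRectangles(in screenshot: CGImage) {
        let request = VNDetectRectanglesRequest { request, error in
            guard let results = request.results as? [VNRectangleObservation] else { return }
            for observation in results {
                // boundingBox is normalized (0...1) with the origin at the
                // bottom-left; convert it to pixel coordinates before using it.
                let box = observation.boundingBox
                let rect = CGRect(x: box.minX * CGFloat(screenshot.width),
                                  y: (1 - box.maxY) * CGFloat(screenshot.height),
                                  width: box.width * CGFloat(screenshot.width),
                                  height: box.height * CGFloat(screenshot.height))
                print("Candidate region:", rect)
            }
        }
        request.maximumObservations = 10
        let handler = VNImageRequestHandler(cgImage: screenshot, options: [:])
        try? handler.perform([request])
    }

A round button would need a different request (or a trained model), but the handler/request flow is the same.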

How to save an image larger than the one on the phone

I made an app following these instructions:
http://www.ifans.com/forums/showthread.php?t=132024
and it was great. But now I want to draw in another view at the same time (when the stroke begins, the line has to be drawn in both views) at a different size, without scaling the view I'm touching.
Is that possible?
Thank you.
I would draw with OpenGL instead in that case, as you can add a larger view and a camera that scrolls based on the user's actions. You can find a great sample project to do this here. To learn more about the camera and about making a larger view, look here, or search Google for OpenGL iPhone tutorials. Hope that helps!
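The OpenGL route aside, the coordinate math for "draw in both views at once" can be sketched in plain UIKit (the class name and scaleFactor property are illustrative assumptions): forward each touched point to the second view, scaled by the ratio of the two view sizes, so the touched view itself is never scaled.

    import UIKit

    final class CanvasView: UIView {
        var mirror: CanvasView?          // the second, differently sized view
        var scaleFactor: CGFloat = 1.0   // mirror size / this view's size
        private var path = UIBezierPath()

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let p = touches.first?.location(in: self) else { return }
            path.move(to: p)
            mirror?.path.move(to: p.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)))
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let p = touches.first?.location(in: self) else { return }
            path.addLine(to: p)
            mirror?.path.addLine(to: p.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)))
            setNeedsDisplay()
            mirror?.setNeedsDisplay()
        }

        override func draw(_ rect: CGRect) {
            UIColor.black.setStroke()
            path.stroke()
        }
    }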

UIImageView interactions

I am working on an app in which I want functionality similar to that of the WebMD body image.
How can I identify which part of the image was touched, in an optimal way? Do I have to slice the image according to my requirements?
How can I add tags onto the image, similar to Facebook's photo-upload tagging on the iPhone?
You need some way to figure out what the user touched, or tried to touch.
You might use a list of annotation-like objects, where each object has a location. When the user touches the image, you'll need to find the annotation in the list that's closest to the touch location and react appropriately. The "optimal" way to do that is probably to use a quad tree. For an iPhone app, though, the number of touchable points is probably pretty small (several dozen?), and a brute force search through the list will probably be more than fast enough.
Another option would be to overlay a transparent view on top of your image for each region that you want the user to be able to touch. Doing this would also make it simple to draw a "tag" at each of those locations.
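A minimal sketch of that brute-force search in Swift (the Annotation type and the 44-point touch radius are assumptions for illustration):

    import CoreGraphics

    struct Annotation {
        let location: CGPoint
        let title: String
    }

    func annotation(nearest touch: CGPoint,
                    in annotations: [Annotation],
                    maxDistance: CGFloat = 44) -> Annotation? {
        // A linear scan is fine for a few dozen points; a quad tree only
        // pays off at much larger counts.
        let hit = annotations.min { a, b in
            hypot(a.location.x - touch.x, a.location.y - touch.y) <
            hypot(b.location.x - touch.x, b.location.y - touch.y)
        }
        guard let nearest = hit,
              hypot(nearest.location.x - touch.x, nearest.location.y - touch.y) <= maxDistance
        else { return nil }
        return nearest
    }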

How to draw an image according to the pixels of another image?

Hi all. What I want is to map one image onto another. Suppose I have two images of people, one of a fat person and one of a thin person; now I want to match their faces and eyes. I want to increase or decrease the face size and eye size of one image according to the other, the way Adobe Photoshop lets you fatten or squeeze a face. These are the kinds of image-manipulation operations I want to implement, but I don't know where to start.
Please guide and help me. Can I do all this with Core Graphics, and if so, how?
Any reference, tutorial address, or sample code is appreciated.
You are probably going to have to deal with some sort of edge detection and face recognition algorithms, at the very least, if this is to be accomplished automatically. Otherwise, if the user is going to resize one image to match the other, this only requires simple resizing operations, driven perhaps by pinch and zoom gestures.
UPDATE:
For manual resizing:
Download the source code for the great book Cool iPhone Projects. One of the projects is called 'Touching'. This project contains code that accomplishes what you need: pinch and zoom functionality.
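If you only need the manual route, a minimal pinch-to-resize sketch in Swift (the view and method names are illustrative, not from that book) looks like this:

    import UIKit

    func attachPinchResizing(to faceView: UIView, in controller: UIViewController) {
        let pinch = UIPinchGestureRecognizer(target: controller,
                                             action: #selector(UIViewController.handlePinch(_:)))
        faceView.isUserInteractionEnabled = true
        faceView.addGestureRecognizer(pinch)
    }

    extension UIViewController {
        @objc func handlePinch(_ pinch: UIPinchGestureRecognizer) {
            guard let view = pinch.view else { return }
            // Accumulate the scale into the view's transform, then reset it
            // so each callback reports an incremental change.
            view.transform = view.transform.scaledBy(x: pinch.scale, y: pinch.scale)
            pinch.scale = 1.0
        }
    }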

Multiple changeable areas of an image on an iPhone

I have an image showing a person, and I want to let the user pick some area of the person and change its color. What is the best way to create a multi-mask image?
For example, the user should be able to change the color of a leg or a hand.
I am using Titanium Appcelerator, and right now I have a solution with buttons placed over the image, which is neither pretty nor acceptable.
The KitchenSink example has only one area that can be changed.
The only solution I found for working with sections of an image is to divide the image into different views and then use a vertical or horizontal view to glue them together. It sounds like you took a similar approach using buttons.
Another option might be to use one of the jQuery image libraries within the webview. That will most likely carry a performance penalty, though.
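If dropping to native code is an option, one hedged Core Graphics sketch (not Titanium; the base image and per-area mask images are assumptions) is to keep one grayscale mask per editable area (leg, hand, ...) and recolor the image through it:

    import UIKit

    func tint(_ base: UIImage, throughMask mask: UIImage, with color: UIColor) -> UIImage? {
        guard let maskCG = mask.cgImage else { return nil }
        let renderer = UIGraphicsImageRenderer(size: base.size)
        return renderer.image { ctx in
            base.draw(at: .zero)                       // the original picture
            let cg = ctx.cgContext
            // Core Graphics' coordinate system is flipped relative to UIKit's.
            cg.translateBy(x: 0, y: base.size.height)
            cg.scaleBy(x: 1, y: -1)
            let rect = CGRect(origin: .zero, size: base.size)
            cg.clip(to: rect, mask: maskCG)            // restrict drawing to the area
            cg.setFillColor(color.cgColor)
            cg.setBlendMode(.multiply)                 // keep the shading underneath
            cg.fill(rect)
        }
    }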