I'm having some trouble with GestureDetector. I have a set of images that I want to add a GestureDetector to, but I only want the GestureDetector to be active within a border inside the image, as shown in the images below. The images contain random shapes, with transparency behind them, and I want the gesture detector to respond only to taps inside those random shapes (the parts marked in green). For the circle I did this by taking the tap's localPosition and checking whether it lay within a circle with x^2 + y^2 = ..., but that cannot be done for custom shapes. Could someone help me with an idea of how to solve this? If I simply wrap the image in a GestureDetector, it is also possible to tap on the transparent part of the image, which I don't want. Thanks
This is not something you will be able to do natively in Flutter. If you want to extract the exact region of interest, you will need to do image processing to find the right shape, and your best bet for doing that processing live is C/C++.
Dart can call native C code; take a look at this: https://dart.dev/guides/libraries/c-interop
You should attempt to extract a polygon outline of the image areas and then use those polygon path values to draw a path with a CustomPainter.
If you do not need to be exact, approximate the shape with a simple polygon path by hand; the user probably will not notice the difference.
Please tell me how to solve this problem: where to start and which way to go.
I have an image with some buttons:
How can I detect the coordinates of the blue round button, for example?
The difficulty lies in the fact that these are not application buttons, but just a picture on the desktop.
I understand that this is a vast and complex question, but please at least point me in the right direction.
It will be useful to many people.
The first thing I can imagine is to take a screenshot of the desktop and then try to detect the pixels with the blue color.
You don't need to do manual image detection, because Apple's Vision framework already does this. You can use it to detect rectangular regions, detect text, or recognize an image within an image, depending on your needs.
See Detecting Objects in Still Images
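To sketch what that looks like in Swift (a rough illustration, assuming you already have the screenshot as a UIImage; VNDetectRectanglesRequest is just one of several request types you could pick):

```swift
import UIKit
import Vision

// A minimal sketch, not a drop-in solution: detect rectangular regions in a
// screenshot with the Vision framework. Swap the request type (e.g. text
// detection) for other needs.
func detectRectangles(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNRectangleObservation] else { return }
        for observation in observations {
            // boundingBox is normalized (0...1) with a lower-left origin,
            // so convert it back to pixel coordinates with a top-left origin.
            let box = observation.boundingBox
            let rect = CGRect(x: box.origin.x * CGFloat(cgImage.width),
                              y: (1 - box.origin.y - box.height) * CGFloat(cgImage.height),
                              width: box.width * CGFloat(cgImage.width),
                              height: box.height * CGFloat(cgImage.height))
            print("Found rectangle at \(rect)")
        }
    }
    request.maximumObservations = 10   // tune for how many buttons you expect

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```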
I want to merge one image into another image, within one shape. Example:
1- People image
2- Shape Image:
So how do I draw that? I have already implemented the merging, but it does not fill that shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little bit more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image-editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else the two won't match up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. It would mean including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that allows transparency (PNG is preferred), so that the part of the image outside the shape, which is white in your JPEG, has alpha transparency. Make sure the section in the center of the image is white (really, any color that is NOT USED in the wanted part of the shape image would work, but I'll say white for this example) and that no parts of it end up less than pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(). You can then apply the mask to the people image using the function CGImageCreateWithMask(), which will give you an image with the person's portrait, with the correct shape cropped from the center.
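Here is a minimal Swift sketch of those two calls; the asset names are placeholders, and it assumes "mask.png" is a grayscale image with no alpha channel (black where the portrait should show through):

```swift
import UIKit

// A minimal sketch: build an image mask from a grayscale mask image and
// apply it to the people image. Asset names are placeholders.
func maskedPortrait() -> UIImage? {
    guard
        let people = UIImage(named: "people.jpg")?.cgImage,
        let maskSource = UIImage(named: "mask.png")?.cgImage,
        let maskData = maskSource.dataProvider
    else { return nil }

    // The Swift spelling of CGImageMaskCreate: black samples let the
    // people image show through, white samples hide it.
    guard let mask = CGImage(maskWidth: maskSource.width,
                             height: maskSource.height,
                             bitsPerComponent: maskSource.bitsPerComponent,
                             bitsPerPixel: maskSource.bitsPerPixel,
                             bytesPerRow: maskSource.bytesPerRow,
                             provider: maskData,
                             decode: nil,
                             shouldInterpolate: false)
    else { return nil }

    // CGImageCreateWithMask in the C API.
    guard let masked = people.masking(mask) else { return nil }
    return UIImage(cgImage: masked)
}
```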
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.
I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or I could split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) so that I can trigger different actions according to which leaf was tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap, and a tap on one could be recognized as a tap on another. Keeping just one image means I need to know which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could implement it in your callback as well, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested: use Quartz drawing and CGPathContainsPoint(). Even if you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (which basically means reimplementing CGPathContainsPoint(), just without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go this route, but honestly, for a shape as simple as what you've drawn, just recreate it with a bit of UIBezierPath code.
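For illustration, a minimal Swift sketch of that path-based hit test; the leaf outline below is a made-up placeholder that you would replace by tracing your own artwork:

```swift
import UIKit

// A minimal sketch: a UIImageView subclass that only reacts to taps that
// land inside a hand-built leaf path. Coordinates are placeholders.
final class LeafView: UIImageView {

    private lazy var leafPath: UIBezierPath = {
        let path = UIBezierPath()
        path.move(to: CGPoint(x: 40, y: 120))
        path.addQuadCurve(to: CGPoint(x: 120, y: 20),
                          controlPoint: CGPoint(x: 30, y: 30))
        path.addQuadCurve(to: CGPoint(x: 40, y: 120),
                          controlPoint: CGPoint(x: 130, y: 130))
        path.close()
        return path
    }()

    func setUpTap() {
        isUserInteractionEnabled = true   // UIImageView defaults to false
        addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                    action: #selector(handleTap(_:))))
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        // locationInView: in Objective-C, location(in:) in Swift.
        let point = recognizer.location(in: self)
        if leafPath.contains(point) {      // wraps CGPathContainsPoint
            print("Leaf tapped")
        } else {
            print("Tap outside the leaf, ignored")
        }
    }
}
```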
Not sure if this will be helpful but if you get stuck on figuring out which leaf was clicked, you could use an old image map trick we used to use in CD-ROM projects for pixel accurate click tracking on images.
You have your full size image. Make a 25% (or less) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; anything you want to ignore make black. When the full size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. By determining the pixel color you will know which leaf was clicked.
Sounds clunky but it works really well and is fast.
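If you go that route, here is a minimal Swift sketch of the lookup, assuming a quarter-scale "leafMap.png" in which each leaf is filled with a distinct solid color (the names and scale factor are placeholders):

```swift
import UIKit

// A minimal sketch of the image-map trick: scale the tap down to the map
// image and read the color of the pixel under it.
func mapColor(forTapAt point: CGPoint, in mapImage: UIImage, mapScale: CGFloat = 0.25) -> UIColor? {
    guard let cgImage = mapImage.cgImage else { return nil }

    // Scale the full-size tap coordinate down to the map image.
    let x = Int(point.x * mapScale)
    let y = Int(point.y * mapScale)
    guard x >= 0, y >= 0, x < cgImage.width, y < cgImage.height else { return nil }

    // Redraw the map into a known RGBA layout so we don't have to care
    // about the PNG's original pixel format.
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: nil,
                                  width: cgImage.width,
                                  height: cgImage.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))

    guard let data = context.data else { return nil }
    let bytes = data.assumingMemoryBound(to: UInt8.self)

    // Row 0 of the buffer is the top row of the drawn image, which matches
    // UIKit's top-left origin.
    let offset = y * context.bytesPerRow + x * 4
    return UIColor(red: CGFloat(bytes[offset]) / 255,
                   green: CGFloat(bytes[offset + 1]) / 255,
                   blue: CGFloat(bytes[offset + 2]) / 255,
                   alpha: CGFloat(bytes[offset + 3]) / 255)
}
```

Comparing the returned color against the fill colors you used in the map tells you which leaf was tapped.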
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this Stack Overflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents
How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas that are inside the box but are not part of the actual image.
How does one achieve this?
This is a non-trivial problem. The basics are simple: a CGRect is a rectangle, and a hit test inside a rectangle is fairly easy to understand. However, it sounds like you want a more complex shape. A UIImageView displays an image; it does not have any idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy thing to do is to look at the alpha (transparency) values of the displayed image to define the shape. To answer "is this point hitting the image?", figure out the location of the point within the image and return true if the alpha there is greater than 0. If you do this, you can create any image with a transparent background and the code will just work.
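A minimal Swift sketch of that alpha test, written as a point(inside:with:) override on a UIImageView subclass (the class name is a placeholder, and it assumes the view is laid out at the image's aspect ratio):

```swift
import UIKit

// A minimal sketch: touches over fully transparent pixels fall through.
// Remember to set isUserInteractionEnabled = true if you attach gestures.
final class ShapeImageView: UIImageView {

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard bounds.contains(point), let cgImage = image?.cgImage else { return false }

        // Map the view coordinate to a pixel coordinate.
        let x = Int(point.x * CGFloat(cgImage.width) / bounds.width)
        let y = Int(point.y * CGFloat(cgImage.height) / bounds.height)
        return alpha(at: x, y: y, in: cgImage) > 0
    }

    // Draws the single pixel of interest into a 1x1 RGBA buffer and reads
    // its alpha byte.
    private func alpha(at x: Int, y: Int, in cgImage: CGImage) -> CGFloat {
        guard x >= 0, y >= 0, x < cgImage.width, y < cgImage.height else { return 0 }
        var pixel = [UInt8](repeating: 0, count: 4)
        pixel.withUnsafeMutableBytes { (buffer: UnsafeMutableRawBufferPointer) -> Void in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return }
            // Quartz's origin is at the bottom-left, hence the y flip.
            context.translateBy(x: -CGFloat(x), y: -CGFloat(cgImage.height - 1 - y))
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: cgImage.width, height: cgImage.height))
        }
        return CGFloat(pixel[3]) / 255
    }
}
```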
If that will not work for you, then you can also run a hit test of a point against a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?
I am adding some functionality to an iPhone app, and could use some help in picking the fastest / most efficient / best practice approach for solving this problem:
In the upper half of my screen, I have speech bubbles (think comic book) that are UIImageViews translating across the screen (dynamic x and y position). Each is a UIImageView because there is an image as the background of the speech bubble.
Each speech bubble has a matching image moving around the bottom of the screen (elsewhere in the layer tree).
I would like to draw a tail (that triangle bit of a speech bubble) so the point of the triangle tracks the lower image, with the base of the triangle attached to the bottom of the upper UIImageView. (Technically the base doesn't have to be butted up against it; it can overlap, as long as I can match the color of my background image to the triangle.)
I have already done all the tracking & drawn a line with CGContextStrokePath methods, and now I am stuck on how to replace the line with a triangle.
I have looked at drawing a triangle in Quartz and filling it. My concern is the speech bubbles are repositioned every 1/10th of a second, and it looks like drawing just the line used for proof of concept had a pretty severe performance / visual smoothness impact.
One idea I have is to do the trigonometry myself, and stretch & rotate an image of a triangle to connect each of these speech bubbles with the lower spot. Something is telling me there is a more efficient / more elegant solution, but I am not able to see it looking through the documentation. Any help on how you have or would approach this issue is appreciated. Thanks.
If the speech bubbles are fixed in size, just use a static UIImage. Set the image view's layer.position property to the point of the triangle. Then you can use view animation to move the bubbles around.
If you need the speech bubbles to be different sizes, I'd create a resizable image using resizableImageWithCapInsets. Then I'd do the same as above to position it.
If there were something special about the speech bubble that I couldn't achieve with either a static image or a resizable image, I'd probably create a custom CALayer or layers to get the effect I wanted (like a gradient layer with a shape layer as its mask).
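For the resizable-image route, here is a minimal Swift sketch; the asset name, cap insets, and coordinates are placeholders. Anchoring the layer at the tail's tip makes placing it on the tracked point trivial:

```swift
import UIKit

// A minimal sketch of the resizable-bubble approach. The cap insets should
// cover the corners and the tail so only the flat middle stretches.
let bubbleImage = UIImage(named: "bubble.png")?
    .resizableImage(withCapInsets: UIEdgeInsets(top: 12, left: 12, bottom: 24, right: 12))

let bubbleView = UIImageView(image: bubbleImage)
bubbleView.bounds = CGRect(x: 0, y: 0, width: 160, height: 80)

// Put the layer's anchor point at the tip of the tail (bottom-center in
// this sketch); the view's center then *is* the tip, since center maps to
// layer.position.
bubbleView.layer.anchorPoint = CGPoint(x: 0.5, y: 1.0)
bubbleView.center = CGPoint(x: 200, y: 300)
// ...add bubbleView to your view hierarchy before animating...

// Tracking the lower image then reduces to moving the center, which UIKit
// animates far more cheaply than redrawing a path every tick.
UIView.animate(withDuration: 0.1) {
    bubbleView.center = CGPoint(x: 240, y: 320)
}
```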