How to check whether UI elements overlap or not? - unity3d

I have to check whether two UI images are overlapping or not. I have already tried Rect.Overlaps(), but the output is not acceptable, and Rect.Contains() does not work either because the two images are children of different parents. Is there any other way to calculate it?
Currently I'm experimenting with RectTransformUtility.WorldToScreenPoint. This function gives me the center of a UI image, so I can create a new rect based on that center, but this approach will not work for rotated images.

Related

Scale UI buttons and images with screen size

I am developing a game in Unity and I want to be able to scale my buttons and UI elements to fit different screen sizes. How can I go about this? I have tried Scale With Screen Size and it doesn't seem to help. Is there a script I can use for this?
You will want to use a Canvas Scaler for this.
You simply attach this component to the canvas parenting your UI elements and it will scale them accordingly.
You will also want to make sure your UI elements are anchored correctly so that they stretch from their intended anchor points.

How to create a trail effect in a rendertexture?

I'm trying to create a cumulative trail effect in a render texture. By cumulative I mean that the render texture would show the last few frames overlaid on each other. Currently, when my camera outputs to a render texture it completely overwrites whatever was there previously.
Let me know if I can clarify anything.
Thanks!
You could set the camera's clear flags to Don't Clear. This prevents the previous frame from being cleared, so new frames accumulate on top of each other, giving an overlapping trail somewhat like old Flash-style motion effects.
The issue is that everything is kept on screen: if only the character moves, that's fine, but if the camera moves, the effect also applies to the environment and your scene becomes a big blur.
You could use two cameras for this, each rendering different layers. One takes care of the items that should not have the effect, and the other takes care of those that should. This way you can apply the effect to characters and ignore the environment; if that isn't required, just go with one camera.

UIGesture recognition on different areas of a UIImageView

I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) in order to trigger different actions depending on which leaf is tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap and a tap on one will be recognized as a tap on another. Keeping just one image means I need to know which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance in figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that need to be rendered as a PNG, you could certainly construct a simple path that is (virtually) overlaid on the image to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (which basically means reimplementing CGPathContainsPoint() without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go that route, but honestly, for a shape as simple as the one you've drawn, just recreate it with some UIBezierPath code.
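A minimal sketch of that path-based route, assuming a hypothetical leafPaths array of UIBezierPaths traced over the leaves and a hypothetical leafTappedAtIndex: action:
// Sketch only: leafPaths and leafTappedAtIndex: are placeholders; the paths are
// assumed to be defined in the image view's coordinate space.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint point = [recognizer locationInView:self.imageView];
    [self.leafPaths enumerateObjectsUsingBlock:^(UIBezierPath *path, NSUInteger idx, BOOL *stop) {
        if ([path containsPoint:point]) {   // UIBezierPath wraps CGPathContainsPoint()
            [self leafTappedAtIndex:idx];   // dispatch the per-leaf action
            *stop = YES;
        }
    }];
}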
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; fill anything you want to ignore with black. When the full-size image is clicked, get the x/y coordinates and scale them by the same percentage as your scaled image. Then read the pixel color of the scaled image at the scaled x/y coordinate. The pixel color tells you which leaf was clicked.
Sounds clunky but it works really well and is fast.
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
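A rough Objective-C sketch of the pixel-color lookup described above, assuming a hypothetical hitMapImage (the scaled, color-coded copy) and a point that has already been multiplied by the same scale factor; the byte layout is an assumption, as noted in the comments:
// Sketch only: reads the raw pixel bytes of the hypothetical hit-map image at a
// point given in that image's pixel coordinates. Assumes an RGBA/RGBX byte order;
// check CGImageGetBitmapInfo() for the real layout of your image.
- (UIColor *)hitMapColorAtPoint:(CGPoint)point inImage:(UIImage *)hitMapImage {
    CGImageRef cgImage = hitMapImage.CGImage;
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *bytes = CFDataGetBytePtr(pixelData);
    NSUInteger bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;
    NSUInteger bytesPerRow = CGImageGetBytesPerRow(cgImage);
    NSUInteger offset = (NSUInteger)point.y * bytesPerRow + (NSUInteger)point.x * bytesPerPixel;
    UIColor *color = [UIColor colorWithRed:bytes[offset] / 255.0
                                     green:bytes[offset + 1] / 255.0
                                      blue:bytes[offset + 2] / 255.0
                                     alpha:1.0];
    CFRelease(pixelData);
    return color;
}
Comparing the returned color against the handful of region colors you filled in then tells you which leaf was hit.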
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this Stack Overflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Dragging images from a scrollable region in Raphael?

I'm investigating the feasibility of using Raphael for a user-research project. One of the features allows users to drag images onto a canvas, and we record where they placed them. The pool of images is potentially quite large, and we'll have them in a scrollable box in the tool.
I put together a quick wireframe of the issue I'm looking into since it'll probably be clearer than my explanation.
Please see the wireframe:
I'd stick with straight HTML/CSS and use jQueryUI draggables, as you mention in your comment.
You don't appear to need any of the drawing/display features SVG offers, yet if you went that route, you'd have to build your own custom scrolling behavior (instead of setting a CSS overflow-y rule) and picture layout algorithms (again instead of using CSS floats or something).
You can create a scrollable region using Raphael:
1) Create the viewport with fixed dimensions (say 800x600).
2) Draw the images with increasing y values. After a few images, the y value will go beyond 600; those images are still drawn but are not visible in the viewport.
3) Create a scrollbar using Raphael rects. Attach drag events to the scrollbar handle rect.
4) When the handle is moved, translate all the images accordingly.
For example, let's assume that in step 2 you have drawn all the images and the bottom-most point of the last image has a y value of 2000. Assuming the scrollbar has length 500, each dx movement of the handle will have to translate the images by 2000/500 = 4dx. You can calculate the handle length similarly using ratios.
Since everything is inside a single Raphael paper, the dragging of images will work seamlessly. You will have to maintain the position of each image.
You might find this demo similar
Remember you can always use getBBox when you drop. In this case it's rects, but images would work the same.
http://irunmywebsite.com/raphael/additionalhelp.php?q=bearbones

Zooming in/out and painting in openGL

I've recently had some issues implementing a zooming feature into a painting application. Please let me start off by giving you some background information.
First, I started off by modifying Apple's glPaint demo app. I think it's a great source, since it shows you how to set up the EAGLView, etc...
Now, what I wanted to do next, was to implement zooming functionality. After doing some research, I tried two different approaches.
1) use glOrthof
2) change the frame size of my EAGLView.
While both ways allow me to zoom in and out perfectly, I experience different problems when it actually comes to painting while zoomed in.
When I use (1), I have to render the view like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(left, right, bottom, top, -1.0f, 1.0f); //those values have been previously calculated
glDisable(GL_BLEND);
//I'm using Apple's Texture2D class here to render an image
[_textures[kTexture_MyImage] drawInRect:[self bounds]];
glEnable(GL_BLEND);
[self swapBuffers];
Now, let's assume I zoom in a little, THEN I paint, and after that I want to zoom out again. In order to get this to work, I need to make sure that "kTexture_MyImage" always contains the latest changes. In order to do that, I need to capture the screen contents after changes have been made and merge them with the original image. The problem here is that when I zoom in, my screen only shows part of the image (enlarged), and I haven't found a proper way to deal with this yet.
I tried to calculate which part of the screen was enlarged, then do the capturing. After that I'd resize this part to its original size and use yet another method to paste it into the original image at the correct position.
Now, I could go into more detail on how I achieved this, but it's really complicated and I figured there has to be an easier way. There are already several apps out there that do exactly what I'm trying to achieve, so it must be possible.
As far as approach (2) goes, I can avoid most of the above, since I only change the size of my EAGLView window. However, when painting, the strokes are way off their expected position. I probably need to take the zoom level into account when painting and recalculate the CGPoints in a different way.
However, if you have done similar things in the past or can give me a hint, how I could implement zooming into my painting app, I'd really appreciate it.
Thanks in advance.
Yes, it is definitely possible.
When it comes to paint programs, you should be keeping a linked list or tree of objects to draw, for easy insertion and removal. When the user stops painting (i.e. in touchesEnded), you add objects to the data structure containing your scene.
When your user zooms you need to modulate the coordinates of the objects you are drawing with respect to the current viewport, projection, and modelview transforms. In your case, you're not changing the viewport or the modelview transforms so you need only account for the projection transform. You could also implement your zoom using a translation and scale on the modelview matrix but I'll ignore that case for simplicity because it involves inverting the transforms.
The good news is that you are using an orthographic projection so world coordinates correspond to window coordinates when no zooming is in effect. The "world" in your case is a simple canvas that probably corresponds to the size of the device in window coordinates.
Before you add an object to your scene data structure, convert all of the coordinates, using the current projection transform (i.e. the parameters to the glOrthof() call) to world coordinates (i.e. full canvas coordinates). You'll only remain sane if you keep all things in your model in the same coordinate space.
To convert the coordinates, assuming you can never zoom out past the full device dimensions in your glOrthof() call, you'll have to scale them down in proportion to the ratio of your zoomed ortho dimensions to your unzoomed ortho dimensions, then bias them by the difference between your zoomed ortho bottom/left values and those of the original unzoomed ortho.
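A minimal sketch of that conversion, assuming the unzoomed ortho matches the view bounds (0 .. viewWidth, 0 .. viewHeight) and that zoomedLeft/zoomedRight/zoomedBottom/zoomedTop are the values currently being passed to glOrthof() (all names here are placeholders):
// Sketch only: converts a point given in unzoomed view coordinates into canvas
// ("world") coordinates while a zoomed orthographic projection is active.
// Depending on how you map touches to GL coordinates, you may need to flip the
// y value first (viewHeight - touch.y), since UIKit's y axis points down.
CGPoint ConvertToCanvasCoordinates(CGPoint touch,
                                   float zoomedLeft, float zoomedRight,
                                   float zoomedBottom, float zoomedTop,
                                   float viewWidth, float viewHeight)
{
    float scaleX = (zoomedRight - zoomedLeft) / viewWidth;   // ratio of zoomed to unzoomed width
    float scaleY = (zoomedTop - zoomedBottom) / viewHeight;  // ratio of zoomed to unzoomed height
    CGPoint canvas;
    canvas.x = zoomedLeft + touch.x * scaleX;    // scale, then bias by the zoomed left edge
    canvas.y = zoomedBottom + touch.y * scaleY;  // scale, then bias by the zoomed bottom edge
    return canvas;
}
Running each stroke point through a conversion like this before storing it keeps everything in your scene data structure in the same full-canvas coordinate space.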