SceneKit HitTest Small Object in Swift

I am trying to move a cylinder using a pan gesture.
I got this working, but the hit test does not work well with small objects and my big fingers.
Is there any way I can expand the object's bounding box so it is bigger than the cylinder, making it easier to hit-test and move?
I am passing in the SCNHitTestBoundingBoxOnlyKey option, so if I could expand the bounding box, maybe it would work better.

I think I found a solution!
You can add a bigger cylinder and make it a child node of the original cylinder, then make it hidden and pass the option SCNHitTestIgnoreHiddenNodesKey = NO when doing the hit test. This way a small cylinder/object can be moved with the pan gesture, although it is smaller than a touch/finger point.
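Here is a minimal sketch of that idea in Swift; the sizes, node names, and the way the pan handler uses the result are placeholders, not from the original post:

import SceneKit
import UIKit

// Wrap the small cylinder in a larger, hidden "touch proxy" child so
// hit tests have a bigger target (sizes here are illustrative).
func addTouchProxy(to smallCylinder: SCNNode) {
    let proxy = SCNNode(geometry: SCNCylinder(radius: 0.5, height: 1.0))
    proxy.isHidden = true        // invisible, but still hit-testable
    proxy.name = "touchProxy"
    smallCylinder.addChildNode(proxy)
}

// In the pan handler, tell the hit test not to skip hidden nodes.
func nodeUnderPan(_ gesture: UIPanGestureRecognizer, in sceneView: SCNView) -> SCNNode? {
    let location = gesture.location(in: sceneView)
    let hits = sceneView.hitTest(location, options: [.ignoreHiddenNodes: false,
                                                     .boundingBoxOnly: true])
    guard let hit = hits.first else { return nil }
    // When the proxy was hit, move its visible parent instead.
    return hit.node.name == "touchProxy" ? hit.node.parent : hit.node
}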

Related

How to 9-slice a sprite while keeping the center not scaled?

I wonder if there is any way to slice this sprite (a dialog pop-up) so that the bottom-center piece (the upside-down triangle) stays unscaled. I'm using nGUI, if it matters.
Nope
Sorry, but that's how 9-slice scaling works. You would need 25-slice scaling to do what you're looking for, and that's overkill for most things, so I've never seen an implementation.
What to do instead...
Break up your sprite into two pieces: the 9-slice portion and the "notch" portion. Then just position the notch to be in the right place.
I haven't used nGUI (only iGUI and the native Unity UI, both old and new), so I'm not sure precisely how nGUI will let you do that, but you'd still need two sprites, one of which is scaled and the other of which isn't, positioned either manually or through a parent-child relationship. If your dialog is always the same width, it'll be pretty straightforward. If not, it might be more challenging.
A few other things:
You'll probably want the notch sprite and the bubble sprite to be the same native image size, but it's not necessary (might make things easier, might not).
The notch will want to have some "overbleed" so that when the two sprites stack, the underlying rendering code doesn't go all squinty-eyed, decide "there's a gap here...", and draw through in some cases.
Depending on the bubble portion's drawn edge, you might want the notch in front or behind. In your precise case, I don't think it'll make a difference. It's a little hard to tell due to the colors, but when I did a selectable tab (which is built similarly), the tab sits on top of the container window so that the shaded edge flows nicely. The unselected version then has no overbleed, so it looks like it sits "behind" (accurate pixel placement in a 2D game at a fixed size ensures that no gap is rendered).
It's a little tedious but pretty straightforward to implement this for UI images. I recently did it in order to make a slice stretch the left/right borders of a 9-slice instead of the center.
The trick is to subclass Image and override OnPopulateMesh, where you do the calculations you need and set positions/uvs to whatever you require.
Here's a helpful how-to article: https://www.hallgrimgames.com/blog/2018/11/25/custom-unity-ui-meshes
Things will be harder for a non-UI sprite. I think you'll have to create all your geometry in a script, and the calculations might be a little complicated because you're using an atlas.

How can I make the SCNCamera that I instantiated zoom into only the nodes that I want in SceneKit

Pretend I have 3 nodes in total. One of the nodes is a large SCNSphere; I put the camera inside this sphere and make the sphere double-sided with a textured image. I then put two smaller spheres next to each other at the center, inside this sphere. I also set allowsCameraControl. I want to be able to zoom into these two smaller spheres without zooming into the larger sphere and messing up the detail on that sphere.
You can't put limits on the camera that's automatically created with allowsCameraControl. You'll have to do your own camera management, using your own gesture recognizers.
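For example, here is a minimal sketch of managing your own zoom with a pinch recognizer; the class name, camera node, and field-of-view limits are illustrative assumptions:

import SceneKit
import UIKit

class PanoramaViewController: UIViewController {
    let cameraNode = SCNNode()   // your own camera node, configured and added to the scene elsewhere

    // Zoom by narrowing the field of view, clamped so the user can close
    // in on the two small spheres without distorting the big one.
    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let camera = cameraNode.camera else { return }
        camera.fieldOfView = min(max(camera.fieldOfView / gesture.scale, 20), 60)
        gesture.scale = 1        // reset so each callback applies an incremental change
    }
}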
Another solution would be to rethink your approach to the background image. Instead of using a sky sphere for the background (which is what it sounds like you're doing), use a skybox, or cube map. You can supply a cube map through the scene's background property. The SCNMaterialProperty documentation explains the options for supplying a cube map.
Hmm, I wonder what would happen if you use the large sphere's textured image/material as the scene's background, instead of putting it on an enclosing sphere?
I like the idea of using an image as the background, but there are two problems. One: I looked on the web for ways to make an image the background, and none of them work. Two: I want the background to have depth, so to pursue that idea I need a way to zoom into the background and have the image pan in the opposite direction that I drag.
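For what it's worth, supplying a cube map through the scene's background property looks something like this; the six image names are placeholders:

import SceneKit
import UIKit

// background is an SCNMaterialProperty; a six-element array of images
// is treated as a cube map (+X, -X, +Y, -Y, +Z, -Z faces).
let scene = SCNScene()
let faces = ["px", "nx", "py", "ny", "pz", "nz"].compactMap { UIImage(named: $0) }
scene.background.contents = faces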

At some positions the character cannot see the whole mesh in Unity

I want to show a transparent building in our project. I did that by setting the material of the mesh to "transparent/diffuse". However, there is a visibility problem with the building's mesh. From some positions, I can only see two or three sides of the cuboid (the transparent block, i.e. the building). If I adjust my character's position, I can see the whole cuboid. I googled similar questions; someone mentioned the camera's view frustum. It seems the character has to be inside the camera's view frustum for the user to see the whole mesh of the cuboid.
Can anyone give me some suggestions? I feel like it might be something about the way I build my mesh for the building, but from some positions I can see the whole cuboid.
I've solved this problem. It is just about the way you construct the mesh. For the cuboid, I rebuilt the front face from two triangles, winding the corner-vertex indices clockwise (Unity treats clockwise winding as front-facing):

// Front face: two clockwise triangles, where topLeft, topRight,
// bottomLeft, and bottomRight are the indices of the four corner
// vertices in the mesh's vertex array.
triangles[0] = topLeft;
triangles[1] = topRight;
triangles[2] = bottomRight;

triangles[3] = bottomRight;
triangles[4] = bottomLeft;
triangles[5] = topLeft;

Note: this is just the front side; the other sides should be constructed the same way.
Besides, in order to show the mesh when the user enters the block, you also have to construct the inside faces of the block in the same way.

UIGesture recognition on different areas of a UIImageView

I have this image (a drawing made up of several leaves):
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) in order to have different actions according to the leaf tapped. If I split the image into different images, one per leaf, the UIImageViews will probably overlap, and tapping on one will be recognized as a tap on another. Having just one image implies knowing which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer callback to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could still construct a simple path to be (virtually) overlaid for hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (but then you would basically be reimplementing CGPathContainsPoint(), just without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go this route, but honestly, for a shape as simple as what you've drawn, just use some UIBezierPath code to recreate it in code.
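For the path route, a minimal sketch in Swift, assuming you can trace each leaf with a UIBezierPath in the image view's coordinate space (the paths and names here are placeholders):

import UIKit

class LeafTapHandler: NSObject {
    // One path per leaf, traced to match the drawing.
    let leafPaths: [String: UIBezierPath] = [:]   // e.g. "leftLeaf": somePath

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: gesture.view)
        // The first path containing the touch decides which leaf was tapped.
        if let (name, _) = leafPaths.first(where: { $0.value.contains(point) }) {
            print("tapped \(name)")
        }
    }
}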
Not sure if this will be helpful, but if you get stuck figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; make anything you want to ignore black. When the full-size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinates. The pixel color tells you which leaf was clicked.
Sounds clunky, but it works really well and is fast.
(All that said, I don't think transparent areas of images trigger the gesture recognizer, so breaking the image up would be less complicated/code-intensive.)
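If you do go the pixel-color route, here is a rough Swift sketch of sampling the hit map; the helper draws the one pixel you care about into a 1x1 bitmap, and the hit-map image and 25% scale are assumptions:

import UIKit

// Returns the color of `image` at `point` (pixel coordinates, origin
// top-left; assumes a 1x image where points equal pixels).
func color(of image: UIImage, at point: CGPoint) -> UIColor? {
    guard let cgImage = image.cgImage else { return nil }
    var pixel = [UInt8](repeating: 0, count: 4)
    let drawn = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress, width: 1, height: 1,
                                      bitsPerComponent: 8, bytesPerRow: 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        // Core Graphics puts the origin at the bottom-left, so flip y.
        context.translateBy(x: -point.x, y: point.y - image.size.height)
        context.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        return true
    }
    guard drawn else { return nil }
    return UIColor(red: CGFloat(pixel[0]) / 255, green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255, alpha: CGFloat(pixel[3]) / 255)
}

// Usage: scale the tap point down to the hit map, then match the color
// against the key color each leaf region was filled with.
// let mapPoint = CGPoint(x: tap.x * 0.25, y: tap.y * 0.25)
// let leafColor = color(of: hitMapImage, at: mapPoint)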
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this Stack Overflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Creating a bounding box for a UIImageView

How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas which are inside the box but are not part of the actual image.
How does one achieve this?
This is a non-trivial problem. But the basics are: a CGRect is a rectangle, and a hit test inside a rectangle is fairly easy to understand. However, it sounds like you want a more complex shape. UIImageView displays an image; it does not have any idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy thing to do is to look at the alpha/transparency values of the displayed image to create a shape. To answer the question "is this point hitting the image?", we figure out the location of the point in the image and return true if the alpha there is greater than 0. If you do this, you can create any image with a transparent background and the code will just work.
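A minimal sketch of that alpha test, assuming a pixel-sampling helper like the color(of:at:) function sketched in the image-map answer above, and an image displayed with .scaleToFill:

import UIKit

// Touches count as "inside" only where the image is non-transparent.
class AlphaHitImageView: UIImageView {
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard let image = image else { return false }
        // Map the view point into image coordinates.
        let imagePoint = CGPoint(x: point.x * image.size.width / bounds.width,
                                 y: point.y * image.size.height / bounds.height)
        guard let hit = color(of: image, at: imagePoint) else { return false }
        var alpha: CGFloat = 0
        guard hit.getRed(nil, green: nil, blue: nil, alpha: &alpha) else { return false }
        return alpha > 0
    }
}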
If that will not work for you, then you can also run a hit test of a point against a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?
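For reference, the standard even-odd ray-casting version of that test is short enough to write directly in Swift:

import CoreGraphics

// Cast a horizontal ray from the point and count edge crossings:
// an odd count means the point is inside the polygon.
func polygon(_ vertices: [CGPoint], contains p: CGPoint) -> Bool {
    var inside = false
    var j = vertices.count - 1
    for i in 0..<vertices.count {
        let a = vertices[i], b = vertices[j]
        if (a.y > p.y) != (b.y > p.y),
           p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}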