How can I make a box's edges smooth without anti-aliasing? - iPhone

I'm creating a dice game for the iPhone. I'm using SIO2 as the engine, but I think this question is more generally OpenGL-related.
Since the iPhone lacks support for anti-aliasing, my die looks kind of edgy. If possible, I'd like to make the edges of the die rounded and smooth instead of sharp. I've found one app, MotionX, that manages to do this, and I think without using anti-aliasing. See the screenshot here. If you look closely at the dice edges, you see there is a smooth transition from the brightly lit top face to the shadowed side face. This looks kind of round from far away.
Does anyone know how to recreate such an effect?

You need to create the dice with slightly rounded edges and corners. That way there won't be a sharp transition between each face.
If your modelling package can create them, you could use superquadrics to make this sort of model; you can change the parameters of the equation to control how rounded the edges and corners are.
See the top-left figure in this image (source: free-online.co.uk).
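For reference (my sketch of the maths, not part of the original answer), a simple superellipsoid has the implicit form

    |x/a|^n + |y/b|^n + |z/c|^n = 1

where a, b and c are the half-extents of the box. With n = 2 you get an ellipsoid; as n grows the shape approaches a sharp-edged box, so a moderate exponent (roughly n = 4 to 8) gives a die with visibly rounded edges and corners. Full superquadrics use two separate exponents (one for the horizontal cross-section, one for the vertical profile) for finer control.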

If you look closely at the dice edges, you see there is a smooth transition from the brightly lit top face to the shadowed side face
Not sure if the iPhone supports this, but you may be able to achieve this effect with a normal map:
http://en.wikipedia.org/wiki/Normal_mapping
Of course, you'll need to truncate the corners to get them sufficiently rounded that the normal map can get you the rest of the way.
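For what it's worth (my summary, not part of the original answer), the reason this works is that with a normal map the per-pixel diffuse lighting uses the normal stored in the texture instead of the flat face normal, i.e. roughly

    intensity = max(0, dot(N_mapped, L))

so the shading can fade smoothly across an edge even though the underlying geometry stays sharp.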

Related

Draw curved lines with texture and glow with Unity

I'm looking for an efficient way to draw curved lines and to make an object follow them in Unity.
I also need to draw them using a custom image and not a solid color.
And on top of that I would like to apply an outer glow to them, and not to the rest of the scene.
I don't ask for a copy/paste solution for each of these elements, I list them all to give some context.
I did something similar in a web app, using the HTML5 canvas to draw text progressively. Here's a gif showing the render:
I only used small line segments to draw what you see above. Here's a very big letter with thicker lines so the segments are more visible:
Of course it's not perfect, but the goal was to keep it simple and efficient, and the gaps on the outer edges are not very visible at normal size.
This is used in an educational game running on mobile as a progressive web app. In real-world usage I attach a particle emitter to it for a better effect:
And it runs smoothly even on low-end devices.
I don't want to recreate this exact effect in Unity, but the core functionality is very close.
Because of how I did it the first time, I thought about creating a big list of segments to draw manually, but Unity may have better tools for this kind of thing, maybe working directly with Bézier curves.
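For example, something along these lines is roughly what I have in mind (an untested sketch using Unity's LineRenderer and a quadratic Bézier; the class and field names are just placeholders):

    using UnityEngine;

    // Untested sketch: sample a quadratic Bézier curve into a LineRenderer.
    [RequireComponent(typeof(LineRenderer))]
    public class BezierLine : MonoBehaviour
    {
        public Transform start;
        public Transform control;
        public Transform end;
        public int segments = 50;

        void Start()
        {
            var line = GetComponent<LineRenderer>();
            line.positionCount = segments + 1;
            for (int i = 0; i <= segments; i++)
            {
                float t = i / (float)segments;
                // Quadratic Bézier: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2
                Vector3 p = Mathf.Pow(1f - t, 2f) * start.position
                          + 2f * (1f - t) * t * control.position
                          + Mathf.Pow(t, 2f) * end.position;
                line.SetPosition(i, p);
            }
        }
    }

An object could then follow the same curve by evaluating B(t) with an increasing t, rather than reading the line's points back.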
I'm a beginner in Unity, so I don't really know what the most efficient way to do it is.
I looked at the LineRenderer, which seemed (at first) to be a good choice, but I'm a little worried about performance with a list of 500+ points (considering mobile is a target).
Also, the glow I would like to add may affect which technique to choose.
Do you have any advice or direction to give me?
Thank you very much.

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that can help someone understand the dimensions of an item, but I keep running into the issue that, since I first need to detect a plane and then put my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem isn't having to detect a plane first, it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem: helping someone understand the dimensions of an item.
Depending on your choice of render engine (e.g. SceneKit), you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (to the user, invisible) 3D object, and you can tell it not to draw those parts. This gives the illusion (borrowing from Apple here) that your soda can has the box around it.
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (with gestures or buttons, your choice) until it roughly matches the soda can's dimensions. Then confirm you're done, stop drawing any texture on it, and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

How to 9-slice a sprite while keeping the center not scaled?

I wonder if there is any way to slice this sprite (a dialog pop-up) that keeps the bottom center (the upside-down triangle) unscaled? I'm using nGUI if it matters.
Nope
Sorry, but that's how 9-slice scaling works. You would need 25-slice scaling to do what you're looking for, and that's overkill for most things, so I've never seen an implementation.
What to do instead...
Break up your sprite into two pieces: the 9-slice portion and the "notch" portion. Then just position the notch to be in the right place.
I haven't used nGUI (only iGUI and the Unity native UI, both old and new), so I'm not sure precisely how nGUI will let you do that, but you'd still need two sprites, one of which is scaled and the other of which isn't, positioned either manually or through a parent-child relationship. If your dialog is always the same width, it'll be pretty straightforward. If not, it might be more challenging.
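With Unity's built-in UI, for example, the idea would look roughly like this (a hypothetical sketch with made-up names, not nGUI code):

    using UnityEngine;

    // Hypothetical sketch (Unity built-in UI, not nGUI): pin an unscaled "notch"
    // sprite to the bottom-center of a 9-sliced bubble.
    public class NotchPositioner : MonoBehaviour
    {
        public RectTransform notch;   // child of the bubble's RectTransform

        void Start()
        {
            // Anchor the notch to its parent's bottom-center so the bubble can
            // stretch freely while the notch keeps its own size and position.
            notch.anchorMin = new Vector2(0.5f, 0f);
            notch.anchorMax = new Vector2(0.5f, 0f);
            notch.pivot = new Vector2(0.5f, 1f);   // hang just below the bubble's edge
            notch.anchoredPosition = Vector2.zero; // nudge upward here for overbleed
        }
    }

The same values could just as well be set once in the inspector; the point is only that the notch's RectTransform is anchored, not scaled, with the bubble.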
A few other things:
You'll probably want the notch sprite and the bubble sprite to be the same native image size, but it's not necessary (it might make things easier, it might not).
The notch will want to have some "overbleed" so that when the two stack, the underlying rendering code doesn't go all squinty-eyed, say "there's a gap here...", and draw through in some cases.
Depending on the bubble portion's drawn edge, you might want the notch to be in front or behind. In your precise case, I don't think it'll make a difference. It's a little hard to tell due to the colors, but when I did a selectable tab (which is built similarly), the tab sits on top of the container window so that the shaded edge flows nicely. The unselected version then has no overbleed, so it looks like it sits "behind" (accurate pixel placement, in a 2D game at a fixed size, ensures that no "gap" is rendered).
It's a little tedious but pretty straightforward to implement this for UI images. I recently did it in order to make a slice stretch the left/right borders of a 9-slice instead of the center.
The trick is to subclass Image and override OnPopulateMesh, where you do the calculations you need and set positions/uvs to whatever you require.
Here's a helpful how-to article: https://www.hallgrimgames.com/blog/2018/11/25/custom-unity-ui-meshes
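As a rough illustration of what such an override looks like (a minimal sketch, not the code from that article; Image, OnPopulateMesh and VertexHelper are the real uGUI API, the rest is placeholder):

    using UnityEngine;
    using UnityEngine.UI;

    // Minimal sketch of a custom Image that overrides OnPopulateMesh.
    // The single quad below is a placeholder; real slice math would emit
    // one quad per slice region with its own positions and UVs.
    public class CustomSlicedImage : Image
    {
        protected override void OnPopulateMesh(VertexHelper vh)
        {
            vh.Clear();
            Rect r = GetPixelAdjustedRect();

            AddQuad(vh,
                    new Vector2(r.xMin, r.yMin), new Vector2(r.xMax, r.yMax),
                    new Vector2(0f, 0f), new Vector2(1f, 1f));
        }

        static void AddQuad(VertexHelper vh, Vector2 posMin, Vector2 posMax,
                            Vector2 uvMin, Vector2 uvMax)
        {
            int i = vh.currentVertCount;
            vh.AddVert(new Vector3(posMin.x, posMin.y), Color.white, new Vector2(uvMin.x, uvMin.y));
            vh.AddVert(new Vector3(posMin.x, posMax.y), Color.white, new Vector2(uvMin.x, uvMax.y));
            vh.AddVert(new Vector3(posMax.x, posMax.y), Color.white, new Vector2(uvMax.x, uvMax.y));
            vh.AddVert(new Vector3(posMax.x, posMin.y), Color.white, new Vector2(uvMax.x, uvMin.y));
            vh.AddTriangle(i + 0, i + 1, i + 2);
            vh.AddTriangle(i + 2, i + 3, i + 0);
        }
    }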
Doing this for a non-UI sprite will be harder. I think you'll have to create all your geometry in a script, and the calculations might be a little complicated because you're using an atlas.

How to scale a beveled rectangular prism, without destroying the aspect ratio of the corners in Unity

I am making a 3D virtual keyboard, and the keys are rectangular prism meshes with rounded corners / beveled edges. I made the mesh with Blender and it works fine, except that the aspect ratio of the keys needs to change depending on device orientation etc. I have just been achieving this with a scale transform, but that distorts the corner rounding.
In 2D this is handled with 9-slice scaling algorithms, which scale the middle and the lengths of the sides but not the corners.
Is there a common way of doing this in Unity?
Maybe there is an easy way to render something with rounded corners after a transform on a regular cube?
Or to specify the way a mesh scales, with some areas remaining static?
Or building my own mesh with code? Is there a good example of this? Building a mesh of a rectangular prism seems doable - but rounding the corners seems like a formidable task.
It all depends on how much beveling is needed. If it is just a little, it is possible to do this with a shader without touching the mesh: you can do post-processing like in the accepted answer of this question, which links to this paper.
Another option for small amounts of beveling is to use relief mapping, as in this question. It is probably easier to make dynamic changes to the relief-mapping texture than to the model.
If multiple pixels of beveling are needed, I think the mesh needs to be modified. (Hopefully someone proves me wrong and points out some clever scaling trick.) Making the whole mesh from a script is quite easy with the Mesh class once you get used to it, but depending on your amount of 3D experience, it might take a lot of time to get the rounded edges working correctly.
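For reference, the basic pattern for a script-built mesh looks something like this (a minimal sketch that builds a single quad; a beveled prism would use the same API with many more vertices and triangles):

    using UnityEngine;

    // Minimal sketch: build one quad with the Mesh API at runtime.
    // Assign a material to the MeshRenderer to actually see it.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class ProceduralQuad : MonoBehaviour
    {
        void Start()
        {
            var mesh = new Mesh();
            mesh.vertices = new[]
            {
                new Vector3(0f, 0f, 0f), new Vector3(1f, 0f, 0f),
                new Vector3(0f, 1f, 0f), new Vector3(1f, 1f, 0f)
            };
            mesh.triangles = new[] { 0, 2, 1, 2, 3, 1 };
            mesh.uv = new[]
            {
                new Vector2(0f, 0f), new Vector2(1f, 0f),
                new Vector2(0f, 1f), new Vector2(1f, 1f)
            };
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }

For the keyboard keys, the interesting part is then generating the beveled corner and edge vertices so that only the flat middle sections stretch when a key's width or height changes.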
Another idea is to make one model for the corners, one model for the edges and one model for the sides. Then you can use 8 instances of the corner, 12 instances of stretched edges and 6 scaled sides to assemble a complete cube out of them.

Can this type of wiggle image deformation be done on iPhone without using OpenGL

I have a straight image and I want to deform it in a wave-like manner.
Original image:
straight texture http://img145.imageshack.us/img145/107/woodstraight.png
and I want it to look like this (except animated):
bent texture http://img145.imageshack.us/img145/8496/woodbent.png
I haven't tackled the learning curve of OpenGL yet, so if I can do this with Core Animation it would be great.
Is this possible?
Unfortunately, I think this is a job for OpenGL. You could achieve the same effect in Quartz by slicing the image up vertically and drawing segments with different vertical offsets... but I don't think you'd be able to achieve good enough performance to animate it (at least not with 1px- or 2px-wide slices).
You could also leave the image stationary, and use Quartz to animate a masking path that would create the waving edges. That probably wouldn't look too natural, though.
As far as I know, Core Animation on the iPhone isn't capable of doing this, either. On the Mac it comes with some more advanced filters, but I think you'd probably see a lot more stuff like this if the iPhone filters could do it :-)
OpenGL does have quite a learning curve, but here's what you'd want to do to achieve the effect: create a flat rectangle in OpenGL with several vertices along its length, point the camera at the rectangle so that it appears flat, then use a sine function of some sort to animate the vertices back and forth in place.
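Concretely (my notation, not from the original answer), each vertex's offset could be driven by something like

    y(x, t) = A * sin(k * x + w * t)

where x is the vertex's position along the strip, A is the wave amplitude, k sets the wavelength, and w sets how fast the wave travels as time t advances.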
This approach is also used to achieve the rippling-water effect, so you might be able to find an example or two of it.
Sorry to bring bad news :-) Hope that helps!