Painting Polygon Shaped Areas on Terrain/Surface/Ground in Unity - unity3d

I have a set of 2D polygon shapes, each one with a varying number of points that are determined at runtime.
I need to draw (or "paint") these polygons on top of a surface.
I also need to fill in the areas of each polygon with a specific color that is also determined at runtime.
The polygons only really need to be drawn once, but solutions that would allow me to update/change the colors would be nice. The project will be built for WebGL.
The polygons are to mark off specific areas on a surface, so they shouldn't repeat.
Polygons can overlap with each other.
What are the different/best solutions that may help me achieve this?
I'm also open to suggestions and further reading.
I'm a relative beginner to Unity but a somewhat experienced programmer, and I know a little about shader programming.
By polygons I don't mean geometry; I just mean regular shapes I would like to paint onto a surface. It's just that I will always have polygons with a varying number of points that are loaded at runtime.

First, you need to split the surface into those polygon shapes, then put a material on each of them; the code that specifies the colour should update the colour of the material on that specific polygon. Creating the materials can be done in code, in a simple loop, to avoid a lot of manual work.
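If it helps, here is a rough sketch of that loop in three.js terms, since the project targets WebGL and that is the only concrete code elsewhere on this page; scene, pointsPerPolygon (the runtime 2D point lists) and colors are assumed to exist, and in Unity the equivalent would be triangulating each point list into a Mesh and creating a Material per polygon in the same kind of loop:
// Hypothetical sketch: one filled, colored overlay mesh per runtime polygon.
pointsPerPolygon.forEach(function (pts, i) {
  var shape = new THREE.Shape(pts.map(function (p) { return new THREE.Vector2(p.x, p.y); }));
  var geometry = new THREE.ShapeGeometry(shape);            // triangulates the polygon
  var material = new THREE.MeshBasicMaterial({ color: colors[i] });
  var overlay = new THREE.Mesh(geometry, material);
  overlay.rotation.x = -Math.PI / 2;                        // lay it flat on the ground plane
  overlay.position.y = 0.01;                                // small offset to avoid z-fighting
  scene.add(overlay);
  // later, to change it at runtime: overlay.material.color.set(newColor);
});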

Related

Casting Civilization V based Hex Grid on Unity Terrain and Select Certain Areas of Grid

I am looking for an approach for casting a hex-based grid onto terrain, which for now is pre-made but will eventually be procedural, for my exploration game where you can scan a planet and elements will be highlighted/hex cells selected. What would be a good approach to making this kind of hex grid, given that my terrain is uneven?
I have seen approaches like mesh creation, tilemaps, and Unity Projectors, but I feel this should really be done with shaders. What about selection, though?
Can someone please point me in the right direction?
I think this topic is a better fit for https://gamedev.stackexchange.com.
My tips for you:
I think the hex grid projection can be solved with Unity's built-in Projector. You can use an orthographic projection with it, so it does not matter that your terrain is uneven, and it has a convenient way of limiting which layers are affected (terrain, your buildings, etc.).
(The Projector is shader magic under the hood, though: it blends the picture you give it with the layers below it.)
If the Projector does not satisfy your needs, I'm pretty sure there are grid shaders already written for Unity.
As for selection, I think you could also solve that with a Projector, or give some trail effect to the grid boundaries; I guess you are going to store the boundaries anyway, so that data is available.
About country borders in Civ:
I think they cast a spline through the hex grid border points, then blend it onto the terrain. I have seen a shader that can draw lines on a terrain, so you might be able to find it!
Keywords to search for: Bézier, Catmull–Rom, spline, terrain shader
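For what it's worth, the Catmull–Rom part is just a formula; here is a rough JavaScript sketch (the {x, y, z} point format is an assumption, and the same math ports directly to C# or shader code):
// Uniform Catmull–Rom: interpolate between p1 and p2, with p0/p3 as neighbours, t in [0, 1].
function catmullRom(p0, p1, p2, p3, t) {
  var t2 = t * t, t3 = t2 * t;
  function blend(a, b, c, d) {
    return 0.5 * (2 * b + (c - a) * t + (2 * a - 5 * b + 4 * c - d) * t2 + (-a + 3 * b - 3 * c + d) * t3);
  }
  return { x: blend(p0.x, p1.x, p2.x, p3.x),
           y: blend(p0.y, p1.y, p2.y, p3.y),
           z: blend(p0.z, p1.z, p2.z, p3.z) };
}
// Sample it at a handful of t values between each pair of border points,
// then hand the resulting polyline to whatever draws lines on the terrain.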

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It was slow, so I started searching, and I found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
But I do not understand why merging geometries also has a positive influence on the rendering itself. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU runs a bunch of math and logic to describe a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are structures that group these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors using trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M and create a triangle, and so on).
Now these numbers exist in JavaScript land; it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
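A stripped-down sketch of what such loops look like (not the real SphereGeometry source, just the idea):
// Sketch only: latitude/longitude bands produce the vectors, a second pass
// weaves their indices into triangles.
function makeSphere(radius, widthSegments, heightSegments) {
  var positions = [];   // floats: x, y, z, x, y, z, ...
  var indices = [];     // ints, three per triangle
  for (var iy = 0; iy <= heightSegments; iy++) {
    var v = iy / heightSegments;                       // 0..1 from pole to pole
    for (var ix = 0; ix <= widthSegments; ix++) {
      var u = ix / widthSegments;                      // 0..1 around the equator
      positions.push(-radius * Math.cos(u * 2 * Math.PI) * Math.sin(v * Math.PI),
                      radius * Math.cos(v * Math.PI),
                      radius * Math.sin(u * 2 * Math.PI) * Math.sin(v * Math.PI));
    }
  }
  for (var iy2 = 0; iy2 < heightSegments; iy2++) {     // stitch neighbouring rows together
    for (var ix2 = 0; ix2 < widthSegments; ix2++) {
      var a = iy2 * (widthSegments + 1) + ix2;
      var b = a + widthSegments + 1;
      indices.push(a, b, a + 1,  b, b + 1, a + 1);     // two triangles per grid cell
    }
  }
  return { positions: positions, indices: indices };
}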
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the gpu, how to compute things in parallel (or gives you the tools), it does know how to do mathematical operations that are crucial for 3d graphics, but the same math can be used to mine bitcoins, without even drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into the appropriate buffers; it needs the shader programs; it needs to be set up for that specific call (will there be blending, which is transparency in three.js land, or depth testing, stencil testing, etc.); then it needs to know what it's actually drawing (so you need to provide strides, attribute sizes, etc. to tell it where a 'mesh' actually is in memory), how it's drawing it (triangle strips, fans, points...), and what to draw it with, i.e. which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens for pretty much every single 3d object drawn in perspective, ever.
To sum up the tutorial:
a perspective camera is basically two 4x4 matrices: a perspective matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS (translate-rotate-scale) matrix (the world matrix in three.js terms) is used to transform the object into world space.
So this stuff, a concept such as the "projection matrix", is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left/right, top/bottom.
Three.js also abstracts the transformation matrices (the view matrix on the camera, and the world matrix on every object) because it lets you set "position" and "rotation" and computes the matrices from those under the hood.
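In code, that abstraction looks roughly like this (the numbers are made up, and width/height are assumed to be the canvas size):
var camera = new THREE.PerspectiveCamera(60, width / height, 0.1, 1000); // -> perspective matrix
camera.position.set(0, 5, 10);        // three.js derives the view matrix from this position...
camera.lookAt(0, 0, 0);               // ...and from the orientation
sphereMesh.position.set(2, 0, 0);     // TRS -> world matrix, computed under the hood
sphereMesh.rotation.y = Math.PI / 4;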
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. Each will have its own world matrix: numbers that move it around your scene.
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
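Conceptually, per draw call it boils down to something like this (sketched on the CPU with three.js's own matrix properties, reusing the camera and sphereMesh from above):
camera.updateMatrixWorld();
sphereMesh.updateMatrixWorld();
var projection = camera.projectionMatrix;     // shared by everything drawn with this camera
var view = camera.matrixWorldInverse;         // moves the world into camera space
var world = sphereMesh.matrixWorld;           // this mesh's own TRS / world matrix
// What the vertex shader effectively computes for each vertex:
var mvp = new THREE.Matrix4().multiplyMatrices(projection, view).multiply(world);
var clipPosition = new THREE.Vector4(10, 0, 0, 1).applyMatrix4(mvp);  // one vertex of the sphere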
Let's consider two materials: black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed: color. This allows you to have many instances of a plastic material (green, blue, pink), but it means that each of these requires a separate draw call.
WebGL will have to issue specific calls to change that uniform from red to black, and only then is it ready to draw stuff using that 'material'.
So now imagine a particle system displaying a thousand cubes, each with a unique color. You have to issue a thousand draw calls to draw them all if you treat them as separate meshes and change colors via a uniform.
If, on the other hand, you assign vertex colors to each cube, you no longer rely on the uniform but on an attribute. Now if you merge all the cubes together, you can issue a single draw call that processes all the cubes with the same shader.
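A hedged sketch of that contrast, assuming the BufferGeometryUtils helper from three.js's examples is available (its exact name and import path vary between releases):
// A thousand cubes, colored per cube via a vertex attribute, merged into one
// geometry and drawn with a single call.
var pieces = [];
for (var i = 0; i < 1000; i++) {
  var box = new THREE.BoxGeometry(1, 1, 1);
  box.translate(Math.random() * 100, Math.random() * 100, Math.random() * 100);
  var color = new THREE.Color(Math.random(), Math.random(), Math.random());
  var colors = [];
  for (var v = 0; v < box.attributes.position.count; v++) {
    colors.push(color.r, color.g, color.b);    // bake this cube's color into its vertices
  }
  box.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
  pieces.push(box);
}
var merged = THREE.BufferGeometryUtils.mergeBufferGeometries(pieces);
var material = new THREE.MeshBasicMaterial({ vertexColors: true });
myScene.add(new THREE.Mesh(merged, material));  // one draw call instead of a thousand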
You can see why this is more efficient simply by glancing at WebGLRenderer in three.js and all the stuff it has to do in order to translate your 3d calls to WebGL. Better done once than a thousand times.
Back to those few lines above: sphereMaterial can take a color argument which, if you look at the source, translates to a uniform vec3 in the shader. However, you can achieve the same thing by rendering with vertex colors and assigning the color you want beforehand.
sphereMesh wraps that computed geometry into an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3d images based upon 3d primitives that are defined by vertices. This is not the only way to render images with OpenGL but it is the most common. The technique that you describe sounds much more like Ray-Tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the position of each of your vertices and convert it into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on matrices, to understand what the graphics card is doing.
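As a concrete, framework-free sketch of what those matrices do (and roughly what the xyPixelPos_for_Vector3 from the question would compute), written in JavaScript only for consistency with the rest of this page; the combined projection * view * model matrix is assumed to be given as a row-major 4x4 array, and the math is identical in Cg:
// mvp is assumed to be projection * view * model, computed elsewhere.
function mat4MulVec4(m, v) {
  return m.map(function (row) {
    return row[0] * v[0] + row[1] * v[1] + row[2] * v[2] + row[3] * v[3];
  });
}
function xyPixelPosForVector3(v, mvp, screenWidth, screenHeight) {
  var clip = mat4MulVec4(mvp, [v.x, v.y, v.z, 1]);
  var ndcX = clip[0] / clip[3];                          // perspective divide -> [-1, 1]
  var ndcY = clip[1] / clip[3];
  return { x: (ndcX * 0.5 + 0.5) * screenWidth,          // viewport transform
           y: (1 - (ndcY * 0.5 + 0.5)) * screenHeight }; // flip Y for screen pixels
}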
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are very different from eye coordinates!

OpenGL ES 2.0: Touch detection

Hi guys, I am doing some work on iOS that requires the use of OpenGL ES. I now have a bunch of squares, cubes and triangles on the screen, and some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given: squares, cubes and triangles are convex shapes, so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from a plane and the intersection of a ray with a plane. As a simple test you can implement yourself very quickly: for each polygon on the convex shape, work out the intersection between the ray and that polygon's plane, then check whether that point is behind all the planes defined by the polygons that share an edge with the one you just tested. If so, the hit is on the surface of the object, though you should be careful about coplanar adjoining polygons and rounding errors.
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
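A small sketch of those two tests, assuming each face's plane is stored as a unit outward normal n plus an offset d (so dot(n, p) + d = 0 on the plane):
// Assumed formats: plane = { n: [nx, ny, nz] (unit, outward), d: offset },
// ray = { origin: [x, y, z], dir: [x, y, z] (unit) }.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
// Distance t along the ray to the plane, or null if parallel / behind the origin.
function rayPlaneT(ray, plane) {
  var denom = dot(plane.n, ray.dir);
  if (Math.abs(denom) < 1e-6) return null;
  var t = -(dot(plane.n, ray.origin) + plane.d) / denom;
  return t >= 0 ? t : null;
}
// True if the point is behind (or on) the plane, i.e. on its inner side.
function behindPlane(point, plane) {
  return dot(plane.n, point) + plane.d <= 1e-6;
}
// Hit test one convex object: intersect each face's plane, keep hits that lie
// behind every other face's plane, and return the nearest t (or null for a miss).
function hitConvex(ray, planes) {
  var best = null;
  planes.forEach(function (plane, i) {
    var t = rayPlaneT(ray, plane);
    if (t === null) return;
    var p = [ray.origin[0] + ray.dir[0] * t,
             ray.origin[1] + ray.dir[1] * t,
             ray.origin[2] + ray.dir[2] * t];
    var inside = planes.every(function (other, j) { return j === i || behindPlane(p, other); });
    if (inside && (best === null || t < best)) best = t;
  });
  return best;  // compare across objects: the smallest t is the object in front
}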
If that's fast enough then great; otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects make this really simple: consider all the edges that run between one polygon and the next. If exactly one of the two polygons sharing an edge is front-facing, then that edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane, and you can then test touches by performing a 2D point-in-polygon check against that shape.
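The 2D point-in-polygon step can be the classic even-odd crossing test, sketched here with {x, y} points:
// Toggle "inside" each time a horizontal ray from the point crosses a polygon edge.
function pointInPolygon(p, polygon) {
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var a = polygon[i], b = polygon[j];
    var crosses = (a.y > p.y) !== (b.y > p.y) &&
      p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}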
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
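The readback itself is a one-liner; this is the WebGL spelling, but glReadPixels on GLES 2.0 takes the same arguments (the picking framebuffer, the touch coordinates and the colour-to-id encoding are all assumed to be set up elsewhere):
// After rendering the picking pass (each object in its own flat colour) into
// an offscreen framebuffer bound on `gl`:
var pixel = new Uint8Array(4);
gl.readPixels(touchX, gl.drawingBufferHeight - touchY, 1, 1,
              gl.RGBA, gl.UNSIGNED_BYTE, pixel);       // GL's y axis starts at the bottom
// Decode the colour back into whatever object id it was encoded from.
var pickedId = pixel[0] + pixel[1] * 256 + pixel[2] * 65536;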
For the purposes of glu on the iPhone, you can grab SGI's implementation (as used by MESA). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I used is an implementation of gluUnProject that you can find on Google (it uses the inverse of the model-view-projection matrix and the viewport size). This allows you to map the 2D screen coordinates to a 3D vector in the world. Then you can use this vector to intersect with your objects and see which one it intersects (or comes really close to intersecting).
I do hope there are better ways of doing this, so I look forward to other answers as well!
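For reference, the core of that unproject step looks roughly like this; invViewProj (the inverse of projection * view, as a row-major 4x4 array) is assumed to be computed elsewhere:
function mat4MulVec4(m, v) {
  return m.map(function (row) {
    return row[0] * v[0] + row[1] * v[1] + row[2] * v[2] + row[3] * v[3];
  });
}
function screenPointToRay(x, y, width, height, invViewProj) {
  var ndcX = (x / width) * 2 - 1;          // screen -> normalized device coordinates
  var ndcY = 1 - (y / height) * 2;         // flip Y
  function unproject(z) {                  // z = -1 near plane, z = +1 far plane
    var p = mat4MulVec4(invViewProj, [ndcX, ndcY, z, 1]);
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3]];   // perspective divide
  }
  var near = unproject(-1);
  var far = unproject(1);
  var dir = [far[0] - near[0], far[1] - near[1], far[2] - near[2]];
  var len = Math.sqrt(dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2]);
  return { origin: near, direction: [dir[0] / len, dir[1] / len, dir[2] / len] };
}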
Once you have the inverse model-view and have cast your ray (vector), you still need to know whether the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far, then see whether the vector's "head" ends up within the volume of your object or not (you need the object's center and, e.g., its radius if it's a sphere).

Map image to irregular polygon?

Say the user taps 4 spots on the iPhone, defining an irregular 4-sided polygon (in 2D space). Is there a way to map/fit a (potentially highly distorted) image onto this shape, without using OpenGL?
Is my only option to somehow calculate the 3D space that my irregular 4-sided shape sits in (based on where the tapped 2D points sit), create an OpenGL plane in that space, and map my texture to it flatly? It seems like there should be an easier way...
Thanks in advance.
Update: After diving into OpenGL I'm almost there, but I still can't get the texture to distort correctly. The triangulation seems to be messing with the texture mapping.
I can't answer your question completely, but one thing I would say is that you don't need to think about any conversion/mapping to 3D. Using OpenGL you can easily draw the shape in 2D and have the texture mapped as you desire; no need for any fancy maths or conversions. It's no more complicated than drawing a rectangle. OpenGL doesn't care that your 4-sided shape isn't actually rectangular.
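A quick sketch of that 'just draw it in 2D' idea, using three.js here only because it keeps the buffer setup short (raw GLES 2.0 needs the same positions and UVs, just more boilerplate); corners is the array of four tapped points in order, and texture and scene are assumed to exist:
// Four tapped corner points become two triangles; the texture's UV corners are pinned to them.
var positions = [];
corners.forEach(function (c) { positions.push(c.x, c.y, 0); });
var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geometry.setAttribute('uv', new THREE.Float32BufferAttribute([0, 0, 1, 0, 1, 1, 0, 1], 2));
geometry.setIndex([0, 1, 2, 0, 2, 3]);   // two triangles covering the 4-sided shape
var quad = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(quad);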