Types of Light in Metal API - swift

I'd like to know if there are only Point Light and Ambient Light in the Metal API, or if I can also use Spot Light and Directional Light in my 3D environment.
struct Light {
    var color: (Float, Float, Float)
    var ambientIntensity: Float

    // Size in bytes of the four floats packed by raw().
    static func size() -> Int {
        return MemoryLayout<Float>.size * 4
    }

    // Flattens the light into an array suitable for copying into an MTLBuffer.
    func raw() -> [Float] {
        let raw = [color.0, color.1, color.2, ambientIntensity]
        return raw
    }
}
If they exist in Apple's Metal API, how do I implement Spot Light and Directional Light?

The Metal API itself has no concept of anything as high-level as a “light”, in the same way that modern OpenGL doesn’t: the APIs for that kind of thing went the way of the dinosaur along with the fixed-function pipeline. With modern low-level graphics APIs, you need to roll your own lighting; the excellent Metal By Example series has an article on doing exactly that, though you may want to go through the earlier sections to get a clearer idea of what’s going on. Note that the article only deals with directional lights; spot lights are significantly trickier, and you’ll need to do some research to find out how those are usually done and then implement that approach in Metal.
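For a concrete starting point, here is a minimal sketch of the kind of data and math involved; the struct, its names, and the N·L formula are illustrative, not anything the Metal API prescribes (in practice the lighting calculation lives in your fragment shader, fed from a buffer):

import simd

// Illustrative CPU-side model of a directional light; in a real renderer the
// same data would be copied into a buffer and the shading done in a shader.
struct DirectionalLight {
    var color: SIMD3<Float>
    var ambientIntensity: Float
    var direction: SIMD3<Float>      // direction the light travels, in world space
    var diffuseIntensity: Float

    // Classic Lambert shading: ambient term plus an N·L diffuse term.
    func shade(normal: SIMD3<Float>, albedo: SIMD3<Float>) -> SIMD3<Float> {
        let n = normalize(normal)
        let l = normalize(-direction)            // from the surface towards the light
        let diffuse = max(dot(n, l), 0) * diffuseIntensity
        return albedo * color * (ambientIntensity + diffuse)
    }
}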
As an alternative to using Metal directly, you might want to look into SceneKit, which is quite powerful and has built-in support for many types of light via the SCNLight class.
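For example, creating a spot light and a directional light in SceneKit looks roughly like this (a minimal sketch; the angles and positions are arbitrary):

import SceneKit

let scene = SCNScene()

// Spot light: attach an SCNLight to a node and position/orient the node.
let spot = SCNLight()
spot.type = .spot                    // SceneKit also offers .ambient, .omni, .directional, ...
spot.spotInnerAngle = 30             // full-intensity cone, in degrees
spot.spotOuterAngle = 60             // angle at which intensity falls to zero
let spotNode = SCNNode()
spotNode.light = spot
spotNode.position = SCNVector3(0, 10, 0)
scene.rootNode.addChildNode(spotNode)

// Directional light: only the node's orientation matters, not its position.
let sun = SCNLight()
sun.type = .directional
let sunNode = SCNNode()
sunNode.light = sun
sunNode.eulerAngles = SCNVector3(-Float.pi / 3, 0, 0)
scene.rootNode.addChildNode(sunNode)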

Marius gave me a hint that the MDLLight class is the abstract superclass for objects that describe light sources in a scene. When you load lights from an asset file using the MDLAsset class, or create lights when building an asset for export, you use one or more of the concrete subclasses MDLPhysicallyPlausibleLight, MDLAreaLight, MDLPhotometricLight, or MDLLightProbe.
...and MDLLightType is an enumeration with options for the shape and style of illumination provided by a light, used by the lightType property:
case unknown = 0
    The type of the light is unknown or has not been initialized.
case ambient = 1
    The light source should illuminate a scene evenly regardless of position or direction.
case directional = 2
    The light source illuminates a scene from a uniform direction regardless of position.
case spot = 3
    The light source illuminates a scene from a specific position and direction.
case point = 4
    The light source illuminates a scene in all directions from a specific position.
case linear = 5
    The light source illuminates a scene in all directions from an area in the shape of a line.
case discArea = 6
    The light source illuminates a scene in all directions from an area in the shape of a disc.
case rectangularArea = 7
    The light source illuminates a scene in all directions from an area in the shape of a rectangle.
case superElliptical = 8
    The light source illuminates a scene in all directions from an area in the shape of a superellipse.
case photometric = 9
    The illumination from the light is determined by a photometric profile.
case probe = 10
    The illumination from the light is determined by texture images representing a sample of a scene at a specific point.
case environment = 11
    The illumination from the light is determined by texture images representing a sample of the surrounding environment for a scene.
These are excerpts from Apple's Model I/O framework reference.
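For illustration, a minimal Swift sketch of configuring Model I/O lights via lightType (the variable names and values are my own, chosen arbitrarily):

import ModelIO

// Configure a physically plausible light as a spot light.
let spot = MDLPhysicallyPlausibleLight()
spot.lightType = .spot
spot.lumens = 800                     // overall brightness
spot.innerConeAngle = 30              // full-intensity cone angle
spot.outerConeAngle = 60              // falloff cone angle
spot.attenuationStartDistance = 1
spot.attenuationEndDistance = 50

// A directional light only needs the type; its direction comes from the
// transform of the object it is attached to.
let sun = MDLPhysicallyPlausibleLight()
sun.lightType = .directional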
MDLPhysicallyPlausibleLight
A MDLPhysicallyPlausibleLight object describes a light source for use in a shading model based on real-world physics.
MDLAreaLight
A MDLAreaLight object represents a source of light that illuminates a 3D scene not from a single point or direction, but from an area with a specific shape. The shape of an area light is a two-dimensional figure in the xy-plane of the light’s local coordinate space, and its illumination is directed in the negative z-axis direction (spreading out from that direction according to the inherited innerConeAngle and outerConeAngle properties).
MDLPhotometricLight
A MDLPhotometricLight object represents a light source whose shape, direction, and intensity of illumination is determined by a photometric profile. You create a photometric light from a file in the IES format, containing physical measurements of a light source. Many manufacturers of real-world light fixtures publish such files describing the lighting characteristics of their products. This photometry data measures the light web surrounding a light source—measurements of the light’s intensity in all directions around the source.
MDLLightProbe
A MDLLightProbe object describes a light source in terms of the variations in color and intensity of its illumination in all directions. A light probe represents this variation either as a cube map texture, where each texel represents the color and intensity of light in a particular direction from the cube’s center, or as a set of spherical harmonic coefficients. In addition to describing such light sources, the MDLLightProbe provides methods for generating light probe textures based on the contents of a scene and for generating spherical harmonic coefficients from a texture.
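And a small sketch of the loading side, pulling whatever lights an asset contains out of an MDLAsset (the file name is only a placeholder):

import Foundation
import ModelIO

// Load an asset and enumerate any lights it contains.
let asset = MDLAsset(url: URL(fileURLWithPath: "scene.usd"))

for object in asset.childObjects(of: MDLLight.self) {
    guard let light = object as? MDLLight else { continue }
    switch light.lightType {
    case .directional:
        print("directional light: \(light.name)")
    case .spot:
        print("spot light: \(light.name)")
    default:
        print("light of type \(light.lightType.rawValue): \(light.name)")
    }
}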

Related

How can I set a Projection Matrix to have a Tibia-like projection?

I have been beating my head against this for a while, but I still could not find a way to set up a matrix that projects my Unity game in a Tibia-like manner:
Reading tutorials on the internet, I could figure out how a normal orthographic projection works, but Tibia's is kind of odd.
Digging around the web, I found a post by Clint Bellanger who describes really well how to get the same perspective in Blender's renderer. According to him:
Start with a scene in 45 degree isometric, video game style, where the camera angle in Blender is (60, 0, 45).
In Blender, if you look at Buttons Window -> Scene -> Render Buttons -> Format, you can set the render aspect ratio. Set AspY to half of AspX. This is the same as taking regular rendered output and scaling X by 50%. If you rendered a cube, the top of the cube will be a perfect square (though at a 45 degree angle).
We can then use Blender nodes to rotate the result 45 degrees. The output:
Note this started as a cube, so there's a lot of "vertical" distortion, so you might have to scale meshes to 50% Z before using this method. Also notice the Edge seems to be applied after the Aspect, so the edge isn't distorted.
Blend file: http://clintbellanger.net/images/temp/UltimaVII.blend (I'm a Nodes noob so there might be a smarter setup).
For kicks, here is that tower again. I pulled it into the above workflow scene and scaled Z by 50%. Click "Re-render this layer" on the first node to create the composite.
In his method he uses things like rescaling the render and changing the scale of the models; I'm convinced I could get by with just the 4x4 matrix in Unity (or any other 3D environment, really).
I hope someone more experienced with the quirks of 3D math can help me figure it out. Thank you! =D
What you ask for is a simple parallel projection. The typical orthographic projection is just a special case where the projection rays are perpendicular to the image plane. However, every parallel projection can be represented by an affine shear transformation followed by a standard orthogonal projection.
I'm convinced I could get by with just the 4x4 matrix in Unity (or any other 3D environment, really).
Yes. Using default GL conventions here, all you have to do is take the standard ortho matrix, post-multiply it by an appropriate shear matrix, and use that as the projection matrix.
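As a sketch of what that looks like in code (written with simd here since the same idea applies in any engine; the ranges and shear factors are placeholder values):

import simd

// Standard GL-style orthographic projection matrix.
func ortho(l: Float, r: Float, b: Float, t: Float, n: Float, f: Float) -> simd_float4x4 {
    return simd_float4x4(rows: [
        SIMD4<Float>(2 / (r - l), 0, 0, -(r + l) / (r - l)),
        SIMD4<Float>(0, 2 / (t - b), 0, -(t + b) / (t - b)),
        SIMD4<Float>(0, 0, -2 / (f - n), -(f + n) / (f - n)),
        SIMD4<Float>(0, 0, 0, 1)
    ])
}

// Shear that offsets x and y proportionally to z, which turns the
// orthographic projection into an oblique parallel projection.
func shear(x shx: Float, y shy: Float) -> simd_float4x4 {
    return simd_float4x4(rows: [
        SIMD4<Float>(1, 0, shx, 0),
        SIMD4<Float>(0, 1, shy, 0),
        SIMD4<Float>(0, 0, 1, 0),
        SIMD4<Float>(0, 0, 0, 1)
    ])
}

// Post-multiplying applies the shear first: projection * v == ortho * (shear * v).
let projection = ortho(l: -10, r: 10, b: -10, t: 10, n: 0.1, f: 100) * shear(x: 0.5, y: 0.25)

In Unity the equivalent would be filling a Matrix4x4 the same way and assigning it to the camera's projectionMatrix.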

Unity: how to paint a mesh in different colors with a gradient

I want to generate low-poly terrain like in the picture below.
I've written a mesh generator, but I cannot figure out how to apply colors to that mesh; for example, a brown color on steep points and light colors on flat points, with a gradient between the colors. Can someone advise me on what I should learn?
The terrain data holds... well, the data for the terrain.
Exactly what the terrain data holds can be found here:
http://docs.unity3d.com/ScriptReference/TerrainData.html
You will need to do a bit of coding.
Terrain data can be accessed with something like:
TerrainData terrainData = Terrain.activeTerrain.terrainData;
Here is a post that may be useful:
http://answers.unity3d.com/questions/12835/how-to-automatically-apply-different-textures-on-t.html
The code given in that post creates a 3D array in which to store your splatmap data, then uses the terrain data and some logic (based on the elevation of that particular block of terrain) to 'splatter' texture and give a more realistic look.
Here is a random example of a terrain using a splatmap (found via Google):
http://www.cygengames.com/images/terrainBeautyShot09_720.png
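Independent of Unity's terrain system, the underlying idea for the original low-poly question is just to map some per-vertex quantity (height or slope) to a colour with a linear blend. A small illustrative sketch, with arbitrary colours and the surface normal as the slope measure:

import simd

// Linear blend between two colours; t is clamped to [0, 1].
func lerp(_ a: SIMD3<Float>, _ b: SIMD3<Float>, _ t: Float) -> SIMD3<Float> {
    let k = min(max(t, 0), 1)
    return a + (b - a) * k
}

let grass = SIMD3<Float>(0.45, 0.65, 0.30)    // light colour for flat areas
let rock  = SIMD3<Float>(0.45, 0.35, 0.25)    // brown colour for steep areas

// Blend factor from the slope: 0 where the normal points straight up (flat),
// approaching 1 where the surface is vertical.
func vertexColor(normal: SIMD3<Float>) -> SIMD3<Float> {
    let slope = 1 - max(dot(normalize(normal), SIMD3<Float>(0, 1, 0)), 0)
    return lerp(grass, rock, slope)
}

In Unity, the resulting per-vertex colours would typically be written into the mesh's colors array and read back by a vertex-colour shader.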

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based upon 3D primitives defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique that you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the positions of each of your vertices and convert them into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on matrices, to understand what the graphics card is doing.
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are very different from eye coordinates!
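To make the 'right space' point concrete, here is a rough sketch of the usual linear chain and where a nonlinear warp would slot in; the barrel-style warp and all the names are illustrative only:

import simd

// The usual linear chain: model -> world -> eye -> clip space, then the
// perspective divide gives normalized device coordinates in [-1, 1].
func project(_ v: SIMD3<Float>,
             model: simd_float4x4,
             view: simd_float4x4,
             projection: simd_float4x4) -> SIMD3<Float> {
    let clip = projection * view * model * SIMD4<Float>(v.x, v.y, v.z, 1)
    return SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w
}

// A custom, nonlinear distortion applied in NDC, after the linear projection.
// This is where a fisheye/barrel-style warp could live in a vertex shader.
func barrelWarp(_ ndc: SIMD3<Float>, strength: Float) -> SIMD3<Float> {
    let r2 = ndc.x * ndc.x + ndc.y * ndc.y
    let scale = 1 + strength * r2
    return SIMD3<Float>(ndc.x * scale, ndc.y * scale, ndc.z)
}

Note that warping per vertex does not curve triangle edges, so large triangles need to be tessellated before a nonlinear projection looks convincing.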

How to use a different texture on intersecting part of 2 quads

I'm looking for a way to dynamically change part of a Quad that has a SpriteRenderer attached to it. Let's say I have a red Quad and a blue Quad; when I drag one onto the other (fast or slow), the intersecting part should be colored using a green sprite. This illustration shows the scenario I'm trying to solve.
Can someone please help me with this?
You have two options:
First, if your mid color only needs to be the blend of the other two colors, you can use the Mobile/Particles/Additive or Mobile/Particles/Multiply shaders.
Alternatively, you can write your own shader that takes the intersection area as a parameter and paints your textures according to that parameter.
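To see why a built-in blend mode cannot produce an arbitrary third colour such as green, here is a tiny per-channel sketch of the two blend equations (written in Swift purely for illustration):

import simd

// Additive blending: red (1,0,0) + blue (0,0,1) = magenta (1,0,1), not green.
// (A real framebuffer clamps each channel to 1.)
func additiveBlend(_ src: SIMD3<Float>, _ dst: SIMD3<Float>) -> SIMD3<Float> {
    return simd_min(src + dst, SIMD3<Float>(repeating: 1))
}

// Multiplicative blending: red * blue = black (0,0,0), not green either.
func multiplyBlend(_ src: SIMD3<Float>, _ dst: SIMD3<Float>) -> SIMD3<Float> {
    return src * dst
}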

How to make a realistic 3D earth for iOS with atmosphere shaders

How do I port a realistic earth model from 3ds Max (or another 3D application) to an iOS device (OpenGL ES)?
How do I port atmosphere effects (not the clouds, which are a texture), i.e. the glow of the sky?
If speed is not the main point, you can use ray tracing. You can model the earth as an opaque sphere, and its atmosphere as a few non-opaque large spheres. It gives you a model that handles clouds, shadows, scattering, and light filtering for a reasonable amount of work and not too many tweaks. Ray tracing a dozen spheres with the same center is very cheap. Each 'atmosphere' layer will deviate light rays, with a decreasing refraction index for each layer, and they will absorb some light, more so for the lower layers. Spending some time on paper, you can simplify the math a bit and make it really cheap :)
Also, just for the atmospheric effect, I guess doing it at half resolution should be enough, as atmospheric effects are rather low-frequency.
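As a sketch of why concentric spheres are cheap to ray-trace: with the earth's centre at the origin, each layer costs one quadratic per ray (the code below is purely illustrative):

import simd

// Nearest positive distance along the ray to a sphere of the given radius
// centred at the origin, or nil if the ray misses it.
func hitSphere(origin: SIMD3<Float>, direction: SIMD3<Float>, radius: Float) -> Float? {
    let d = normalize(direction)
    let b = dot(origin, d)                          // half of the linear coefficient
    let c = dot(origin, origin) - radius * radius
    let discriminant = b * b - c
    guard discriminant >= 0 else { return nil }
    let root = discriminant.squareRoot()
    let t0 = -b - root
    let t1 = -b + root
    if t0 > 0 { return t0 }
    if t1 > 0 { return t1 }
    return nil
}

// Each atmosphere layer is just another call with a slightly larger radius,
// so a dozen layers cost a dozen quadratics per ray.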
I do it like this:
first render pass:
surface model is an ellipsoid
plus color texture
plus bump mapping
plus alpha blending with a cloud texture
second render pass:
just draw a single quad over the whole screen
and blend in the sky color via a simplified atmospheric scattering GLSL shader
[Notes]
you can also add atmospheric refraction to be more precise
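A very rough sketch of the kind of per-pixel blend the second pass performs; this is a deliberately simplified stand-in, not a real scattering model, and all the names are illustrative:

import simd

// Simplified sky blend for the full-screen pass: the closer the view ray
// grazes the horizon, the more sky colour is mixed into the scene colour.
func skyBlend(sceneColor: SIMD3<Float>, skyColor: SIMD3<Float>,
              viewDir: SIMD3<Float>, up: SIMD3<Float>) -> SIMD3<Float> {
    // 0 when looking straight up or down, approaching 1 along the horizon.
    let graze = 1 - abs(dot(normalize(viewDir), normalize(up)))
    let g2 = graze * graze
    let amount = g2 * g2                            // sharpen the falloff near the horizon
    return sceneColor + (skyColor - sceneColor) * amount
}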