Metal Alpha Blending to create smooth line - swift

I am trying to implement the method of drawing a smooth line through a large number of 3D points shown here --> https://ahmetkermen.com/simple-smooth-line-drawing-with-opengl.
I have a small quad at each point, with a billboarded modelview matrix, onto which I render this image:
My blending options are defined like this:
// Allow alpha blending
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
I am seeing that the alpha blending works with the rest of the scene as shown here:
But the blending does not work with the quads in the line themselves. Some quads show the sharp corner 'over' the next quad in the line:
Perhaps it is a z-order issue; however, each point does have a unique position.
My fragment shader is only sampling the texture and setting the color:
fragment float4 frag_main(VertexOut fragmentIn [[stage_in]],
constant FragUniforms &uniforms [[buffer(0)]],
texture2d<float, access::sample> baseColorTexture [[texture(0)]],
sampler baseColorSampler [[sampler(0)]])
{
// Sample the image
float3 baseColor = baseColorTexture.sample(baseColorSampler, fragmentIn.texCoords).rgb;
return float4(baseColor, 1.0);
}
What am I doing wrong and how can I get the blending to work so that the quads appear as a continuous and smooth line?
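For comparison, the conventional straight-alpha setup I have seen in Metal examples looks like this; note that the destination factors differ from mine above:
// Conventional straight-alpha "over" compositing (for comparison only)
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha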

Related

How to replace colors with textures via Shaders in Unity 3D

I have a problem. I tried to search but I can't find what I want.
I have a texture. This texture has blue, green, and black colors. They are masks; this is actually a face texture. I want to replace them, e.g. the blue color will be replaced with my eye texture in Unity, and the green color with a face texture. How can I write this shader? I searched but only found color-changing shaders :( Thanks..
The way I interpret your question is that you want a shader where one texture serves as a mask, blending between 3 other textures. I'm assuming that this is for character customization, stitching different pieces of a face together.
In the fragment (or surf) function, sample your 3 textures and the mask:
fixed4 face = tex2D(_FaceTex, i.uv); // Green channel of the mask
fixed4 eyes = tex2D(_EyeTex, i.uv); // Blue channel of the mask
fixed4 mouth = tex2D(_MouthTex, i.uv); // No mask value (black)
fixed4 mask = tex2D(_MaskTex, i.uv);
Then, you need to blend them together using the mask. Let's assume that whatever black represents in the mask is the background color, and then we interpolate in the other textures.
fixed4 col = lerp(mouth, eyes, mask.b);
Then we can interpolate between the resulting color and our third value:
col = lerp(col, face, mask.g);
You could repeat this once more with the red channel, for a fourth texture. Of course, this assumes that you use pure red, green, or blue in the mask. There are ways to use a more specific color as a key too; for instance, you can use the distance between the mask color and some reference color:
fixed4 eyeMaskColor = fixed4(0.5, 0.5, 1, 1);
half t = 1 - saturate(abs(length(mask - eyeMaskColor)));
In this case, t is the lerp factor you use to blend in the texture. The saturate function clamps the value in the range of [0, 1]. If the mask color is the same as eyeMaskColor, then the length of the vector between them is 0 and the statement evaluates to 1.
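If you want to sanity-check the keying math outside a shader, here is a minimal sketch in Swift using simd; the sampled colors are hypothetical stand-ins for the tex2D calls above, not Unity API:
import simd

// Stand-ins for tex2D(_MouthTex, uv), tex2D(_EyeTex, uv), tex2D(_MaskTex, uv)
let mouth = simd_float4(0.8, 0.3, 0.3, 1.0)
let eyes  = simd_float4(0.2, 0.4, 0.9, 1.0)
let mask  = simd_float4(0.5, 0.5, 1.0, 1.0)

let eyeMaskColor = simd_float4(0.5, 0.5, 1.0, 1.0)

// t == 1 when the mask matches the key exactly, falling toward 0 with distance
let t = 1 - min(max(simd_length(mask - eyeMaskColor), 0), 1)

// Equivalent of Cg's lerp(mouth, eyes, t)
let col = simd_mix(mouth, eyes, simd_float4(repeating: t))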

How to apply fresnel effect only on one part of the sphere in SceneKit?

I am applying a fresnel effect to an SCNSphere to achieve a beautiful glow of the Earth.
However, because I am making a model of the Earth, I want the illuminated part of the sphere to have the fresnel effect and the dark side to be without it.
Maybe it is possible to achieve that effect if the dark side of the sphere is transparent? Or maybe I can just use the shader modifier property of SceneKit, _surface.fresnel?
For this you can use the emission instance property, which defines the color emitted by each point on a surface.
You can use an emissive map texture to simulate parts of a surface that glow with their own light. SceneKit does not treat the material as a light source—rather, the emission property determines colors for a material independent of lighting. (To create an object that appears to glow, you may wish to combine a geometry with an emissive map and additional SCNLight objects added to the scene.)
var emission: SCNMaterialProperty { get }
You can find the emission property in any of SceneKit's available shading models: constant, Lambert, Blinn, Phong, or physically based.
let earth = SCNNode(geometry: SCNSphere(radius: 5))
earth.geometry?.firstMaterial?.emission.contents = UIImage(named: "emissionMap.png")
And, of course, if you need to create a square emissionMap.png texture (a.k.a. UV map), you can make it in Autodesk Maya, Autodesk 3ds Max, Maxon Cinema 4D, etc. Such maps are typically 512x512, 1024x1024, 2048x2048, and so on.
I am no expert in SceneKit and shaders, and maybe this solution is in no way optimal, but one way to do it is to just use shader modifiers:
First, I take the diffuse (= direct, in SceneKit) lighting hitting the sphere and convert the red, green and blue values into luminance. Source can be found here.
vec3 light = _lightingContribution.diffuse;
float lum = 0.2126*light.r + 0.7152*light.g + 0.0722*light.b;
I calculate a factor based on the angle of the surface from the point of view of the camera (see the _surface entry point).
float factorOpacity = ( _surface.normal.x * _surface.normal.x + _surface.normal.y * _surface.normal.y );
I set up a fresnel exponent which controls the thickness of the edge halo. Lower values mean the fresnel effect is more visible around the center and higher values mean that the fresnel effect appears more around the edges.
float fresnelExponent = 1.0;
I set up the red, green and blue values (0.0-1.0) for my atmosphere color.
float red = 0.1;
float green = 0.2;
float blue = 0.4;
I set up the nadir (= looking straight down) transparency that I want. 0.0 means fully transparent and 1.0 means fully opaque.
float nadirTransparency = 0.2;
I set the edge transparency.
float edgeTransparency = 1.0;
I calculate the final fresnel factor taking into account the fresnel exponent.
factorOpacity = pow(factorOpacity,fresnelExponent);
Finally, I compute the final color of the geometry and multiply by the desired transparency depending on the input parameters and the fresnel factors.
_output.color = vec4(red,green,blue,1.0) * min(lum,1.0) * (nadirTransparency + (edgeTransparency - nadirTransparency) * factorOpacity);
The Swift implementation looks like this:
let sphere = SCNSphere(radius: radiusValue)
self.geometry = sphere
let shaderModifier =
"""
#pragma transparent
vec3 light = _lightingContribution.diffuse;
float lum = 0.2126*light.r + 0.7152*light.g + 0.0722*light.b;
float factorOpacity = ( _surface.normal.x * _surface.normal.x + _surface.normal.y * _surface.normal.y );
float fresnelExponent = 1.0;
float red = 0.1;
float green = 0.2;
float blue = 0.4;
float nadirTransparency = 0.2;
float edgeTransparency = 1.0;
factorOpacity = pow(factorOpacity,fresnelExponent);
_output.color = vec4(red,green,blue,1.0) * min(lum,1.0) * (nadirTransparency + (edgeTransparency - nadirTransparency) * factorOpacity);
"""
self.geometry?.firstMaterial?.shaderModifiers = [.fragment: shaderModifier]
Here is how it looks (cloud colors are from another shader)
(image: atmosphere shader in SceneKit)
And here is how it looks when setting the nadir transparency to 0, edge transparency to 1, a bright blue color and a high fresnel exponent
(image: amplified fresnel effect)

OpenGL ES transparency not working, instead things just blend with the background

So I have a simple simulation set up on my phone. The goal is to have circles of red, white, and blue that appear on the screen with various transparencies. I have most of that working, except for one thing: while transparency sort of works, the only blending happens with the black background. As a result, the circle in the center appears dark red instead of showing the white circles under it. What am I doing wrong?
Note that I am working with an orthographic 2D projection matrix. All of the objects' z positions are the same, and they are rendered in a specific order.
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
//Note: some of these things aren't compatible with OpenGL ES, but they can't hurt, right?
Here is the fragment shader:
precision mediump float;
varying vec4 outColor;
varying vec3 center;
varying float o_width;
varying float o_height;
varying float o_pointSize;
void main()
{
vec4 fc = gl_FragCoord;
vec3 fp = vec3(fc);
vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
if (dot(circCoord, circCoord) > 1.0) {
discard;
}
gl_FragColor = outColor;//colorOut;
}
Here is how I pass each circle to the shader:
func drawParticle(part: Particle,color_loc: GLint, size_loc: GLint)
{
//print("Drawing: " , part)
let p = part.position
let c = part.color
glUniform4f(color_loc, GLfloat(c.h), GLfloat(c.s), GLfloat(c.v), GLfloat(c.a))
glUniform1f(size_loc, GLfloat(part.size))
glVertexAttribPointer(0, GLint(3), GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, [p.x, p.y, p.z]);
glEnableVertexAttribArray(0);
glDrawArrays(GLenum(GL_POINTS), 0, GLint(1));
}
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
And that's not how transparency works. OpenGL is not a scene graph; it just draws geometry in the order you specify. If the first thing you draw is the red circles, they will blend with the background. Once things get drawn that are "behind" the red circles, the "occluded" parts will simply be discarded due to the depth test. There is no way for OpenGL (or any other depth-test-based algorithm) to automatically sort the different depth layers and blend them appropriately.
What you're trying to do there is order-independent transparency, a problem for which efficient solutions are still an active area of research.
For what you want to achieve you'll have to:
sort your geometry far to near and draw in that order
disable the depth test while rendering (see the sketch below)
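A minimal sketch in Swift, assuming the particles array, uniform locations, and drawParticle function from the question:
// Depth test off so earlier (farther) circles aren't discarded
glDisable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))

// Draw farthest first; flip the comparison if your projection looks down +z
let sorted = particles.sorted { $0.position.z < $1.position.z }
for part in sorted {
    drawParticle(part: part, color_loc: color_loc, size_loc: size_loc)
}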

Real-Time glow shader confusion

So I have a rather simple real-time 2D game that I am trying to add some nice glow to. In its most basic form, it is simply circles and lines drawn on a black surface. And if you consider the scene from an HSV color space perspective, all colors (except for black) have a "V" value of 100%.
Currently I have a sort of "accumulation" buffer where the current frame is joined with the previous frame. It works by using two off-screen buffers and a black texture.
1. Buffer one activated
2. Lines and dots drawn
3. Buffer one deactivated
4. Buffer two activated
5. Buffer two's contents drawn as a full-screen quad
6. Black texture drawn with slight transparency over the full screen
7. Buffer one's contents drawn
8. Buffer two deactivated
9. On-screen buffer activated
10. Buffer two's contents drawn to the screen
Right now all "lag" by far comes from latency on the CPU. The GPU handles all of this really well.
So I was thinking of maybe trying to spice things up a bit by adding a glow effect. I was thinking that perhaps for step 10, instead of using a regular texture shader, I could use one that draws the texture except with glow!
Unfortunately I am a bit confused about how to do this. Here is why:
Blur stuff. Mostly that some people claim a Gaussian blur can be done in real time while others say it can't. Also, people mention another type of blur called a "focus" blur that I don't know anything about.
Most of the examples I can find use XNA. I need one written in a shader language like OpenGL ES 2.0's.
Some people call it glow, others call it bloom
Different blending modes(?) can be used to add the glow to the original texture.
How to combine vertical and horizontal blur? Perhaps in one draw call?
Anyway, the process as I understand it for rendering glow is thus:
Cut out the dark data from the scene
Blur the light data (using Gaussian?)
Blend the light data on top of the original (screen blending?)
So far I have gotten to the point where I have a shader that draws a texture. What does my next step look like?
//Vertex
precision highp float;
attribute vec2 positionCoords;
attribute vec2 textureCoords;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;
void main()
{
gl_Position = matrix * vec4(positionCoords, 0.0, 1.0);
v_texcoord = textureCoords.xy;
o_alpha = alpha;
}
//Fragment
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D s_texture;
varying float o_alpha;
void main()
{
vec4 color = texture2D(s_texture, v_texcoord);
gl_FragColor = vec4(color.r, color.g, color.b, color.a - o_alpha);
}
Also is this a feasible thing to do in real-time?
Edit: I probably want a 5px-or-less blur
To address your initial confusion items:
Any kind of blur filter will effectively spread each pixel into a blob based on its original position, and accumulate this result additively for all pixels. The difference between filters is the shape of the blob.
For a Gaussian blur, this blob should be a smooth gradient, feathering gradually to zero around the edges. You probably want a Gaussian blur.
A "focus" blur would be an attempt to emulate an out-of-focus camera: rather than fading gradually to zero, its blob would spread each pixel over a hard-edged circle, giving a subtly different effect.
For a straightforward, one-pass effect, the computational cost is proportional to the width of the blur. This means that a narrow (e.g. 5px or less) blur is likely to be feasible as a real-time one-pass effect. (It is possible to achieve a wide Gaussian blur in real-time by using multiple passes and a multi-resolution pyramid, but I'd recommend trying something simpler first...)
You could reasonably call the effect either "glow" or "bloom". However, to me, "glow" connotes a narrow blur leading to a neon-like effect, while "bloom" connotes using a wide blur to emulate the visual effect of bright objects in a high-dynamic-range visual environment.
The blend mode determines how what you draw is combined with the existing colors in the target buffer. In OpenGL, activate blending with glEnable(GL_BLEND) and set the mode with glBlendFunc().
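For example, additive blending is a common choice when compositing a glow layer over the scene; a minimal sketch in the Swift/OpenGL style used in the question above:
// Additive blending: the glow layer simply adds to what is already drawn
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE))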
For a narrow blur, you should be able to do horizontal and vertical filtering in one pass.
To do fast one-pass full-screen sampling, you will need to determine the pixel increment in your source texture. It is fastest to determine this statically, so that your fragment shader doesn't need to compute it at run-time:
float dx = 1.0 / x_resolution_drawn_over;
float dy = 1.0 / y_resolution_drawn_over;
You can do a 3-pixel (1,2,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 4 samples from source texture t as follows:
float dx2 = 0.5*dx; float dy2 = 0.5*dy; // filter steps
[...]
vec2 a1 = vec2(x+dx2, y+dy2);
vec2 a2 = vec2(x+dx2, y-dy2);
vec2 b1 = vec2(x-dx2, y+dy2);
vec2 b2 = vec2(x-dx2, y-dy2);
result = 0.25*(texture(t,a1) + texture(t,a2) + texture(t,b1) + texture(t,b2));
You can do a 5-pixel (1,4,6,4,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 9 samples from source texture t as follows:
float dx12 = 1.2*dx; float dy12 = 1.2*dy; // filter steps
float k0 = 0.375; float k1 = 0.3125; // filter constants
vec4 blur3(vec4 a, vec4 b, vec4 c) {
return k1*a + k0*b + k1*c;
}
[...]
vec2 a1 = vec2(x+dx12, y+dy12);
vec2 a2 = vec2(x, y+dy12);
vec2 a3 = vec2(x-dx12, y+dy12);
vec4 a = blur3(texture(t,a1), texture(t,a2), texture(t,a3));
vec2 b1 = vec2(x+dx12, y);
vec2 b2 = vec2(x, y);
vec2 b3 = vec2(x-dx12, y);
vec4 b = blur3(texture(t,b1), texture(t,b2), texture(t,b3));
vec2 c1 = vec2(x+dx12, y-dy12);
vec2 c2 = vec2(x, y-dy12);
vec2 c3 = vec2(x-dx12, y-dy12);
vec4 c = blur3(texture(t,c1), texture(t,c2), texture(t,c3));
result = blur3(a,b,c);
I can't tell you if these filters will be real-time feasible on your platform; 9 samples/pixel at full resolution could be slow.
Any wider Gaussian would make separate horizontal and vertical passes advantageous; a substantially wider Gaussian would require multi-resolution techniques for real-time performance. (Note that, unlike the Gaussian, filters such as the "focus" blur are not separable, which means they cannot be split into horizontal and vertical passes.)
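The Gaussian is separable because it factors into independent horizontal and vertical terms:
\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) = \exp\left(-\frac{x^2}{2\sigma^2}\right) \cdot \exp\left(-\frac{y^2}{2\sigma^2}\right)
so a 2D blur can be computed as a 1D horizontal pass followed by a 1D vertical pass.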
Everything that #comingstorm has said is true, but there's a much easier way. Don't write the blur or glow yourself. Since you're on iOS, why not use CoreImage, which has a number of interesting filters to choose from and which work in real time already? For example, there is a Bloom filter which will likely produce the results you want. Also of interest might be the Gloom filter.
Chaining together CoreImage filters is much easier than writing shaders. You can create a CIImage from an OpenGL texture via [+CIImage imageWithTexture:size:flipped:colorSpace:].
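A minimal sketch of the CoreImage route (the radius and intensity values are just placeholders to tune):
import CoreImage

func applyBloom(to input: CIImage) -> CIImage? {
    guard let bloom = CIFilter(name: "CIBloom") else { return nil }
    bloom.setValue(input, forKey: kCIInputImageKey)
    bloom.setValue(5.0, forKey: kCIInputRadiusKey)    // blur width, in pixels
    bloom.setValue(1.0, forKey: kCIInputIntensityKey) // glow strength
    return bloom.outputImage
}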

How to avoid these edges around my transparent PNGs with WebGL?

I recently added opacity to my WebGL fragment shader, replacing
gl_FragColor = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
with
vec4 color = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
gl_FragColor = vec4(color.rgb, alpha * color.a);
but since I did, a little edge appears around my transparent PNGs.
Here you can see a little white edge around the worm body parts, where they layer on the background, and thin black edge where they layer on other worm parts.
With the old code, before I added the opacity, or when using Canvas 2D context instead of WebGL, there is no edge :
Here is one of the pngs I use for the worm body parts. As you can see there is no white edge around it, and no black edge on the bottom where it layers on another body part.
Update: I updated the SuperPNG export plugin to the latest version, which no longer gives a white color to fully transparent pixels. It seems to me that the edge is actually still there, but it is now the same color as the shape border, so you can barely see it. The thin black edge at the back of the worm parts is still there though.
I once had a similar sprite edge problem and added "premultipliedAlpha: false" to my getContext() call to fix it. The new code seems to ignore it. Whether I remove it or leave it in the getContext() call, the little edge stays around my sprites.
Here is my code:
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform sampler2D u_image;
varying vec2 v_texCoord;
uniform float alpha;
void main()
{
vec4 color = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
gl_FragColor = vec4(color.rgb, alpha * color.a);
}
</script>
...
webglalpha = gl.getUniformLocation(gl.program, "alpha");
...
gl.uniform1f(webglalpha, d_opacity);
Does anybody know what the problem is and how to solve it?
Consider using premultiplied alpha. Change your blending mode to
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
If your images are not already premultiplied, then do this in your shader:
float newAlpha = textureColor.a * alpha;
gl_FragColor = vec4(textureColor.rgb * newAlpha, newAlpha);
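To see why this kills the edge, consider a fully transparent "white" texel: premultiplying zeroes out the color that was bleeding in. A toy Swift illustration of the math (not part of the WebGL code above):
import simd

// Premultiplied alpha: scale RGB by A once, up front
func premultiply(_ c: simd_float4) -> simd_float4 {
    simd_float4(c.x * c.w, c.y * c.w, c.z * c.w, c.w)
}

let straight = simd_float4(1.0, 1.0, 1.0, 0.0) // "white" but fully transparent
let premul = premultiply(straight)             // (0, 0, 0, 0): nothing to bleed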