Alpha blending looking wrong/different on iPhone/iPad

I'm drawing some alpha-blended sprites. I have the same assets rendering on Win32 using D3D and OpenGL, and the correct result looks like this when the vertex color is 0xFFFFFF and the background color is 0xC0D0E0:
On Win32/OpenGL I enable blending as follows:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
and on Win32/D3D as follows:
CD3D11_BLEND_DESC blendDesc(D3D11_DEFAULT);
D3D11_RENDER_TARGET_BLEND_DESC rtBlendDesc;
rtBlendDesc.BlendEnable = true;
rtBlendDesc.BlendOp = D3D11_BLEND_OP_ADD;
rtBlendDesc.BlendOpAlpha = D3D11_BLEND_OP_ADD;
rtBlendDesc.DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
rtBlendDesc.DestBlendAlpha = D3D11_BLEND_ZERO;
rtBlendDesc.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
rtBlendDesc.SrcBlend = D3D11_BLEND_SRC_ALPHA;
rtBlendDesc.SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0] = rtBlendDesc;
gDevice->CreateBlendState(&blendDesc, &mBlendState);
then, at render time:
gContext->OMSetBlendState(mBlendState, NULL, 0xffffffff);
On Win32/OpenGL I use the fixed function pipeline, and on DX11 I use very simple vert/pixel shaders:
float4 SpritePixelShader(PS_INPUT input) : SV_Target
{
return (txDiffuse.Sample(samLinear, input.Tex)) * input.Col;
}
PS_INPUT SpriteVertexShader(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = mul(float4(input.Pos.x, input.Pos.y, 0.5, 1.0), projection);
output.Tex = input.Tex;
output.Col = input.Col;
return output;
}
All well and good. OpenGL and D3D give me identical results.
On OpenGL ES 2.0 on the iPhone simulator (I'm not sure whether this is a simulator thing or not; I can't test on a device yet), I enable blending similarly:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
My fragment/vertex shaders are similar:
// fragment shader
void main()
{
gl_FragColor = color * texture2D(s_texture, texCoord);
}
// vertex shader
void main()
{
color = vertexColor;
texCoord = vertexTexCoord;
gl_Position = matrix * vec4(vertexPosition, 0, 1);
}
But I get this:
It seems to be using modulate blending rather than additive. I've tried calling:
glBlendEquation(GL_FUNC_ADD);
but that doesn't seem to do anything, nor does
glBlendEquationOES(GL_FUNC_ADD_OES);
If I use:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
then it looks right, but the alpha component of the vertex colors is ignored (and I need it to be used).
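If the texture data has premultiplied alpha (which CGContext-based image loading on iOS typically produces), then GL_ONE, GL_ONE_MINUS_SRC_ALPHA would be the correct pair, and the vertex alpha could be restored by premultiplying the vertex color in the fragment shader. A sketch, assuming the same varyings as above (unverified, just a possibility consistent with the symptoms):
// Premultiply the vertex color so its alpha still participates
// under glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
void main()
{
    vec4 premult = vec4(color.rgb * color.a, color.a);
    gl_FragColor = premult * texture2D(s_texture, texCoord);
}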
Does anyone have any ideas?
Thanks!
Charlie.

Related

Selectively discarding pixels based on background color with shader, Unity + ARFoundation

I have a problem that I'm finding hard to solve so far... I'm also new to shaders and less new to Unity, so that probably doesn't help.
I am using AR Foundation to create an application. The application uses the camera image as a background texture. Objects in view should generally be rendered normally, but depending on the color of the camera image being rendered to the background, need to be discarded. Example: the color of the pixel in the background exceeds a certain blue value, so the object in the foreground does not render that pixel.
My approach so far has been to attempt stencil testing.
I've tried altering the AR Foundation default background shader. I've managed to do basic things, like setting color values for the fragment before it is returned, so I assumed I could use GLES3 functions like glEnable(GL_STENCIL_TEST) to enable stencil testing. This prevents rendering altogether, so I assume I can't use the full GLES3 functionality.
Using ShaderLab stencil testing outside the GLSLPROGRAM block works fine, but I can't figure out how to selectively choose Ref values based on the fragment color computed in the GLSLPROGRAM block.
Is there a way to achieve what I'm attempting here?
This is the current state of the shader (it's basically still the default ARCameraBackground shader; uncommenting
//glEnable(GL_STENCIL_TEST);
causes errors. Uncommenting:
//Stencil
//{
// Ref 2
// Comp Always
// Pass Replace
//}
works fine and lets me fully hide or fully show objects with shaders that stencil test as well):
Shader "CustomARCoreBackground"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
_CutoffThresh("CutoffValue", Float) = 0.5
}
SubShader
{
Tags
{
"Queue" = "Background"
"RenderType" = "Background"
"ForceNoShadowCasting" = "True"
}
Pass
{
Cull Off
ZTest Always
ZWrite On
Lighting Off
LOD 100
Tags
{
"LightMode" = "Always"
}
GLSLPROGRAM
#pragma only_renderers gles3
#ifdef SHADER_API_GLES3
#extension GL_OES_EGL_image_external_essl3 : require
#endif // SHADER_API_GLES3
// Device display transform is provided by the AR Foundation camera background renderer.
uniform mat4 _UnityDisplayTransform;
#ifdef VERTEX
varying vec2 textureCoord;
void main()
{
#ifdef SHADER_API_GLES3
// Transform the position from object space to clip space.
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
// Remap the texture coordinates based on the device rotation.
textureCoord = (_UnityDisplayTransform * vec4(gl_MultiTexCoord0.x, 1.0f - gl_MultiTexCoord0.y, 1.0f, 0.0f)).xy;
#endif // SHADER_API_GLES3
}
#endif // VERTEX
#ifdef FRAGMENT
varying vec2 textureCoord;
uniform samplerExternalOES _MainTex;
//added threshold variable
uniform float _CutoffThresh;
#if defined(SHADER_API_GLES3) && !defined(UNITY_COLORSPACE_GAMMA)
float GammaToLinearSpaceExact (float value)
{
if (value <= 0.04045F)
return value / 12.92F;
else if (value < 1.0F)
return pow((value + 0.055F)/1.055F, 2.4F);
else
return pow(value, 2.2F);
}
vec3 GammaToLinearSpace (vec3 sRGB)
{
// Approximate version from http://chilliant.blogspot.com.au/2012/08/srgb-approximations-for-hlsl.html?m=1
return sRGB * (sRGB * (sRGB * 0.305306011F + 0.682171111F) + 0.012522878F);
// Precise version, useful for debugging, but the pow() function is too slow.
// return vec3(GammaToLinearSpaceExact(sRGB.r), GammaToLinearSpaceExact(sRGB.g), GammaToLinearSpaceExact(sRGB.b));
}
#endif // SHADER_API_GLES3 && !UNITY_COLORSPACE_GAMMA
void main()
{
//glEnable(GL_STENCIL_TEST);
#ifdef SHADER_API_GLES3
vec3 result = texture(_MainTex, textureCoord).xyz;
#ifndef UNITY_COLORSPACE_GAMMA
result = GammaToLinearSpace(result);
#endif // !UNITY_COLORSPACE_GAMMA
gl_FragColor = vec4(result, 1);
gl_FragDepth = 1.0f;
#endif // SHADER_API_GLES3
}
#endif // FRAGMENT
ENDGLSL
//Stencil
//{
// Ref 2
// Comp Always
// Pass Replace
//}
}
}
FallBack Off
}
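One stencil-free direction, given that this pass already has ZWrite On and writes gl_FragDepth: push masked pixels to the near plane, so any depth-tested foreground object fails the depth test there. A sketch of only the GLES3 fragment main, using the _CutoffThresh property already declared above (an assumption: foreground objects use a normal ZTest such as LEqual):
void main()
{
#ifdef SHADER_API_GLES3
    vec3 result = texture(_MainTex, textureCoord).xyz;
#ifndef UNITY_COLORSPACE_GAMMA
    result = GammaToLinearSpace(result);
#endif // !UNITY_COLORSPACE_GAMMA
    gl_FragColor = vec4(result, 1.0);
    // Masked pixels get depth 0.0 (nearest), so foreground fragments
    // behind them are rejected; all other pixels keep the far plane.
    gl_FragDepth = (result.b > _CutoffThresh) ? 0.0 : 1.0;
#endif // SHADER_API_GLES3
}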

RawImage flicker when changing from Texture2D to VideoPlayer texture

Using Unity 2017.3
I have a RawImage into which I display a sequence of images loaded as Texture2D's. Works perfectly and seamlessly.
I then show a video into the same RawImage, using the sample VideoPlayer code, and assigning
rawImage.texture = videoPlayer.texture;
The video plays perfectly well, but as part of switching from the still images to the video, there is a noticeable flicker in the RawImage, as if there's a frame or two of black displayed. The first frame of the video matches the last static image I displayed, so I had expected the transition to be pretty seamless.
Note that the video has been "Prepared" prior to all this - my code yields until videoPlayer.isPrepared returns true, and only then tells the video to play and sets the texture.
I thought maybe there was an issue with the texture not being quite ready, so I tried yielding once or twice after calling Play and before setting the texture, but that had no effect on the flicker.
I saw this item: https://answers.unity.com/questions/1294011/raw-image-flickering-when-texture-changed.html which suggests that this is something to do with material instances being set up. I don't fully understand the solution presented in that answer, nor do I understand how I could adapt it to my own case, but maybe it means something to those more skilled in Unity than I am.
Any suggestions on how to get rid of that flickery frame?
EDIT: Here's the code
public class VideoAnimation : MonoBehaviour, IAnimation {
private VideoPlayer _videoPlayer;
private UnityAction _completeAction;
private bool _prepareStarted;
public void Configure(VideoClip clip, bool looping)
{
_videoPlayer = gameObject.AddComponent<VideoPlayer>();
_videoPlayer.playOnAwake = false;
_videoPlayer.isLooping = looping;
_videoPlayer.source = VideoSource.VideoClip;
_videoPlayer.clip = clip;
_videoPlayer.skipOnDrop = true;
}
public void Prepare()
{
_prepareStarted = true;
_videoPlayer.Prepare();
}
public void Begin(RawImage destImage, UnityAction completeAction)
{
_completeAction = completeAction;
_videoPlayer.loopPointReached += OnLoopPointReached;
StartCoroutine(GetVideoPlaying(destImage));
}
private IEnumerator GetVideoPlaying(RawImage destImage)
{
if (!_prepareStarted)
{
_videoPlayer.Prepare();
}
while(!_videoPlayer.isPrepared)
{
yield return null;
}
_videoPlayer.Play();
destImage.texture = _videoPlayer.texture;
}
public void OnLoopPointReached(VideoPlayer source)
{
if (_completeAction != null)
{
_completeAction();
}
}
public void End()
{
_videoPlayer.Stop();
_videoPlayer.loopPointReached -= OnLoopPointReached;
}
public class Factory : Factory<VideoAnimation>
{
}
}
In the specific case I'm dealing with, Configure and Prepare are called ahead of time, while the RawImage is showing the last static image before the video. Then when it's time to show the video, Begin is called. Thus, _prepareStarted is already true when Begin is called. Inserting log messages shows that isPrepared is returning true by the time I get around to calling Begin, so I don't loop there either.
I've tried altering the order of the two lines
_videoPlayer.Play();
destImage.texture = _videoPlayer.texture;
but it doesn't seem to change anything. I also thought that maybe the VideoPlayer was somehow outputting a black frame ahead of the normal video, but inserting a yield or three after Play and before the texture set made no difference.
None of the samples I've seen have a Texture in the RawImage before the VideoPlayer's texture is inserted. So in those, the RawImage is starting out black, which means that an extra black frame isn't going to be noticeable.
EDIT #2:
Well, I came up with a solution and, I think, somewhat of an explanation.
First, VideoPlayer.frame is documented as "The frame index currently being displayed by the VideoPlayer." This is not strictly true. Or, maybe it is somewhere in the VideoPlayer's pipeline, but it's not the frame that's observable by code using the VideoPlayer's texture.
When you Prepare the VideoPlayer, at least in the mode I'm using it, the VideoPlayer creates an internal RenderTexture. You would think that, once the player has been prepared, that texture would contain the first frame of the video. It doesn't. There is a very noticeable delay before there's anything there. Thus, when my code set the RawImage texture to the player's texture, it was arranging for a texture that was, at least at that moment, empty to be displayed. This perfectly explains the black flicker, since that's the color of the background Canvas.
So my first attempt at a solution was to insert the loop here:
_videoPlayer.Play();
while(_videoPlayer.frame < 1)
{
yield return null;
}
destImage.texture = _videoPlayer.texture;
between Play and the texture set.
I figured that, despite the documentation, maybe frame was the frame about to be displayed. If so, this should result in the first (0th) frame already being in the buffer, and would get rid of the flicker. Nope. Still flickered. But when I changed to
_videoPlayer.Play();
while(_videoPlayer.frame < 2)
{
yield return null;
}
destImage.texture = _videoPlayer.texture;
then the transition was seamless. So my initial attempt where I inserted yields between the two was the right approach - I just didn't insert quite enough. One short, as a matter of fact. I inserted a counter in the loop, and it showed that I yielded 4 times in the above loop, which is what I would expect, since the video is 30fps and I'm running at 60fps on my computer. (VSync is on.)
A final experiment showed that:
_videoPlayer.Play();
while(_videoPlayer.frame < 1)
{
yield return null;
}
yield return null;
destImage.texture = _videoPlayer.texture;
also did not result in a flicker. (Or, at least, not one that I could see.) So once the VideoPlayer was reporting that it was displaying the second frame (the numbers are 0-based according to the docs), it took one additional game frame before the transition was seamless. (Unless there was a 60th-of-a-second flicker that my eyes can't see.) That game frame might have something to do with Unity's graphics pipeline or the VideoPlayer pipeline - I don't know.
So, the bottom line is that there is a noticeable delay from the time you call Play until there is actually anything in the VideoPlayer's texture that will make it to the screen, and unless you wait for that, you'll be displaying "nothing" (which, in my case, resulted in black background flickering through.)
It occurs to me that since the VideoPlayer is producing a RenderTexture, it might also be possible to blit the previous static texture to the VideoPlayer's texture (so that there would be something there right away) and then do the switch immediately. Another experiment to run...
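A hedged sketch of that untried experiment, assuming the player's texture really is a RenderTexture at that point, and with _lastStillTexture as a hypothetical reference to the still image currently displayed:
while (!_videoPlayer.isPrepared)
{
    yield return null;
}
RenderTexture rt = _videoPlayer.texture as RenderTexture;
if (rt != null)
{
    Graphics.Blit(_lastStillTexture, rt); // pre-fill so the swap never shows an empty texture
}
destImage.texture = _videoPlayer.texture;
_videoPlayer.Play();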
Hmm, let's try using shaders; maybe that helps you.
First we must create a custom shader, and it must work like a standard UI shader.
You can download all built-in shaders at this link.
Take UI-Default.shader and modify it. I've modified it for you!
Just create a shader in Unity and paste this code:
Shader "Custom/CustomShaderForUI"
{
Properties
{
//[PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
_CustomTexture ("Texture", 2D) = "white" {} // <--------------- new property
_Color ("Tint", Color) = (1,1,1,1)
_StencilComp ("Stencil Comparison", Float) = 8
_Stencil ("Stencil ID", Float) = 0
_StencilOp ("Stencil Operation", Float) = 0
_StencilWriteMask ("Stencil Write Mask", Float) = 255
_StencilReadMask ("Stencil Read Mask", Float) = 255
_ColorMask ("Color Mask", Float) = 15
[Toggle(UNITY_UI_ALPHACLIP)] _UseUIAlphaClip ("Use Alpha Clip", Float) = 0
}
SubShader
{
Tags
{
"Queue"="Transparent"
"IgnoreProjector"="True"
"RenderType"="Transparent"
"PreviewType"="Plane"
"CanUseSpriteAtlas"="True"
}
Stencil
{
Ref [_Stencil]
Comp [_StencilComp]
Pass [_StencilOp]
ReadMask [_StencilReadMask]
WriteMask [_StencilWriteMask]
}
Cull Off
Lighting Off
ZWrite Off
ZTest [unity_GUIZTestMode]
Blend SrcAlpha OneMinusSrcAlpha
ColorMask [_ColorMask]
Pass
{
Name "Default"
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma target 2.0
#include "UnityCG.cginc"
#include "UnityUI.cginc"
#pragma multi_compile __ UNITY_UI_CLIP_RECT
#pragma multi_compile __ UNITY_UI_ALPHACLIP
struct appdata_t
{
float4 vertex : POSITION;
float4 color : COLOR;
float2 texcoord : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct v2f
{
float4 vertex : SV_POSITION;
fixed4 color : COLOR;
float2 texcoord : TEXCOORD0;
float4 worldPosition : TEXCOORD1;
UNITY_VERTEX_OUTPUT_STEREO
};
fixed4 _Color;
fixed4 _TextureSampleAdd;
float4 _ClipRect;
v2f vert(appdata_t v)
{
v2f OUT;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(OUT);
OUT.worldPosition = v.vertex;
OUT.vertex = UnityObjectToClipPos(OUT.worldPosition);
OUT.texcoord = v.texcoord;
OUT.color = v.color * _Color;
return OUT;
}
//sampler2D _MainTex;
sampler2D _CustomTexture; // <---------------------- new property
fixed4 frag(v2f IN) : SV_Target
{
//half4 color = (tex2D(_MainTex, IN.texcoord) + _TextureSampleAdd) * IN.color;
half4 color = (tex2D(_CustomTexture, IN.texcoord) + _TextureSampleAdd) * IN.color; // <- using new property
#ifdef UNITY_UI_CLIP_RECT
color.a *= UnityGet2DClipping(IN.worldPosition.xy, _ClipRect);
#endif
#ifdef UNITY_UI_ALPHACLIP
clip (color.a - 0.001);
#endif
return color;
}
ENDCG
}
}
}
Next, create a material and assign your RenderTexture to the shader's texture field (not to the RawImage component).
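A hedged hook-up sketch (assuming the rawImage and videoPlayer references from the question; the property name matches _CustomTexture in the shader above):
// Create a material from the custom shader and route the video through it.
Material videoMat = new Material(Shader.Find("Custom/CustomShaderForUI"));
videoMat.SetTexture("_CustomTexture", videoPlayer.texture);
rawImage.material = videoMat;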
Hope this helps!
Try hiding the RawImage gameObject while the video is loading. This should fix any flickering caused by the VideoPlayer not being fully loaded.
public VideoPlayer _videoPlayer;
public RawImage _videoImage;
private void PlayClip(VideoClip videoClip)
{
StartCoroutine(PlayClipCoroutine(videoClip));
}
private IEnumerator PlayClipCoroutine(VideoClip clip)
{
_videoImage.gameObject.SetActive(false);
_videoPlayer.clip = clip;
_videoPlayer.Prepare();
while (!_videoPlayer.isPrepared)
{
yield return null;
}
_videoPlayer.Play();
_videoImage.texture = _videoPlayer.texture;
_videoImage.gameObject.SetActive(true);
}

Send a BOOL value to a Fragment Shader OpenGL ES 2.0 on iOS / iPhone

I'm new to OpenGL ES 2.0 so please bear with me... I'd like to pass a BOOL flag into my fragment shader so that after a certain touch event has occurred in my app, it renders gl_FragColor differently. I tried using a vec2 attribute for this and just "faking" the .x value as my "BOOL" but it looks like OpenGL is normalizing the value from 0.0 to 1.0 before the shader gets ahold of it. So even though in my app I've set it to 0.0, while the shader is doing its thing, the value will eventually reach 1.0. Any suggestions would be hugely appreciated.
VertexAttrib Code:
// set up context, shaders, use program etc.
[filterProgram addAttribute:@"inputBrushMode"];
inputBrushModeAttribute = [filterProgram attributeIndex:@"inputBrushMode"];
bMode[0] = 0.0;
bMode[1] = 0.0;
glVertexAttribPointer(inputBrushModeAttribute, 2, GL_FLOAT, 0, 0, bMode);
Current Vertex Shader Code:
...
attribute vec2 inputBrushMode;
varying highp float brushMode;
void main()
{
gl_Position = position;
...
brushMode = inputBrushMode.x;
}
Current Fragment Shader Code:
...
varying highp float brushMode;
void main()
{
if(brushMode < 0.5) {
// render the texture
gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
} else {
// cover things in yellow funk
gl_FragColor = vec4(1,1,0,1);
}
}
Thanks in advance.
Create the bool as a glUniform (1.0 or 0.0) instead. Set its value with glUniform1f(GLint location, GLfloat v0). In the shader, check its value like so:
if (my_uniform < 0.5) {
// FALSE
} else {
// TRUE
}
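Host-side, that might look like the sketch below, in plain GL ES 2.0 C calls; program (the GLuint behind filterProgram) and touched are assumed names:
GLint brushModeLoc = glGetUniformLocation(program, "brushMode");
glUseProgram(program);                            // uniforms are set on the active program
glUniform1f(brushModeLoc, touched ? 1.0f : 0.0f); // 1.0 = "true", 0.0 = "false"
In the shaders, the attribute/varying pair then collapses to a single uniform highp float brushMode; declared in the fragment shader.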

OpenGL ES 2.0 / MonoTouch: Rendering GUI Textures shows nothing

I'm building a simple framework for OpenGL UIs in MonoTouch. I've set everything up and also succeeded in rendering 3D models, but a simple 2D textured object fails. The texture has a size of 256x256, so it's not too large, and it is a power of two.
Here is some rendering code (note: I removed the existing, working code):
// Render the gui objects ( flat )
Projection = Matrix4x4.Orthographic(0, WindowProperties.Width, WindowProperties.Height, 0);
View = new Matrix4x4();
GL.Disable(All.CullFace);
GL.Disable(All.DepthTest);
_Stage.RenderGui();
Stage:
public void RenderGui ()
{
Draw(this);
// Renders every child control, all of them call "DrawImage" when rendering something
}
public void DrawImage (Control caller, ITexture2D texture, PointF position, SizeF size)
{
PointF gposition = caller.GlobalPosition; // Resulting position is 0,0 in my tests
gposition.X += position.X;
gposition.Y += position.Y;
// Renders the UI model; this is done using an existing (and working) vertex buffer
// The shader gets some parameters (this works in 3D space too)
_UIModel.Render(new RenderParameters() {
Model = Matrix4x4.Scale(size.Width, size.Height, 1) * Matrix4x4.Translation(gposition.X, gposition.Y, 0),
TextureParameters = new TextureParameter[] {
new TextureParameter("texture", texture)
}
});
}
The model uses a vec2 for positions; no other attributes are given to the shader.
The shader below should render the texture.
Vertex:
attribute vec2 position;
uniform mat4 modelViewMatrix;
varying mediump vec2 textureCoordinates;
void main()
{
gl_Position = modelViewMatrix * vec4(position.xy, -3.0, 1.0);
textureCoordinates = position;
}
Fragment:
varying mediump vec2 textureCoordinates;
uniform sampler2D texture;
void main()
{
gl_FragColor = texture2D(texture, textureCoordinates) + vec4(0.5, 0.5, 0.5, 0.5);
}
I found out that the drawing issue is caused by the shader. This line produces a GL_INVALID_OPERATION (it works with other shaders):
GL.UniformMatrix4(uni.Location, 1, false, (parameters.Model * _Device.View * _Device.Projection).ToArray());
EDIT:
It turns out that the shader uniform locations changed (yes, I'm wondering about this too, because the lookup happens once the shader is completely initialized). I changed it, and now everything works.
As mentioned in the other thread, the texture is wrong, but that is another issue (OpenGL ES 2.0 / MonoTouch: Texture is colorized red).
The shader initialization with the GL.GetUniformLocation problem mentioned above:
[... Compile shaders ...]
// Attach vertex shader to program.
GL.AttachShader (_Program, vertexShader);
// Attach fragment shader to program.
GL.AttachShader (_Program, pixelShader);
// Bind attribute locations
for (int i = 0; i < _VertexAttributeList.Length; i++) {
ShaderAttribute attribute = _VertexAttributeList [i];
GL.BindAttribLocation (_Program, i, attribute.Name);
}
// Link program
if (!LinkProgram (_Program)) {
GL.DeleteShader (vertexShader);
GL.DeleteShader (pixelShader);
GL.DeleteProgram (_Program);
throw new Exception ("Shader could not be linked");
}
// Get uniform locations
for (int i = 0; i < _UniformList.Length; i++) {
ShaderUniform uniform = _UniformList [i];
uniform.Location = GL.GetUniformLocation (_Program, uniform.Name);
Console.WriteLine ("Uniform: {0} Location: {1}", uniform.Name, uniform.Location);
}
// Detach shaders
GL.DetachShader (_Program, vertexShader);
GL.DetachShader (_Program, pixelShader);
GL.DeleteShader (vertexShader);
GL.DeleteShader (pixelShader);
// Shader is initialized add it to the device
_Device.AddResource (this);
I don't know what Matrix4x4.Orthographic uses as near-far range, but if it's something simple like [-1,1], the object may just be out of the near-far-interval, since you set its z value explicitly to -3.0 in the vertex shader (and neither the scale nor the translation of the model matrix will change that). Try to use a z of 0.0 instead. Why is it -3, anyway?
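In shader terms, that suggestion is simply:
// Keep the UI quad at z = 0 so it falls inside a [-1,1] orthographic depth range.
gl_Position = modelViewMatrix * vec4(position.xy, 0.0, 1.0);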
EDIT: So if the GL.UniformMatrix4 call throws a GL_INVALID_OPERATION, it seems you didn't retrieve the corresponding uniform location successfully. So the code where you do this might also help to find the issue.
Or it may also be that you call GL.UniformMatrix4 before the corresponding shader program is used. Keep in mind that uniforms can only be set once the program is active (GL.UseProgram or something similar was called with the shader program).
And by the way, you're multiplying the matrices in the wrong order anyway (given your shader and matrix-setting code). If it really works this way for other renderings, then either you were just lucky or you have some severe conceptual and mathematical inconsistency in your matrix library.
It turns out that the shader uniforms change at an unknown time. Everything is created and initialized when I ask OpenGL ES for the uniform location, so it must be a bug in OpenGL.
Calling GL.GetUniformLocation(..) each time I set the shader uniforms solves the problem.
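For reference, a minimal C#/OpenTK ordering sketch that avoids the GL_INVALID_OPERATION (uniform locations normally do not change after a successful link; the usual culprit is setting a uniform while the program isn't active, as noted above):
GL.UseProgram(_Program); // the program must be current before its uniforms are set
int location = GL.GetUniformLocation(_Program, "modelViewMatrix");
GL.UniformMatrix4(location, 1, false, (parameters.Model * _Device.View * _Device.Projection).ToArray());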

Multiple textures don't show

I'm a newbie to DirectX 10. I'm developing a Direct3D 10 application that mixes two textures, which are filled manually according to the user's input. The current implementation is:
Create two empty textures with usage D3D10_USAGE_STAGING.
Create two shader resource views to bind to the pixel shader, because the shader needs them.
Copy the textures to the GPU memory by calling CopyResource.
Now the problem is that I can only see the first texture; I don't see the second. It looks to me as if the binding doesn't work for the second texture.
I don't know what's wrong with it. Can anyone here shed some light on it?
Thanks,
Marshall
The class COverlayTexture is responsible for creating the texture, creating the resource view, filling the texture with the bitmap mapped from another application, and binding the resource view to the pixel shader.
HRESULT COverlayTexture::Initialize(VOID)
{
D3D10_TEXTURE2D_DESC texDesStaging;
texDesStaging.Width = m_width;
texDesStaging.Height = m_height;
texDesStaging.Usage = D3D10_USAGE_STAGING;
texDesStaging.BindFlags = 0;
texDesStaging.ArraySize = 1;
texDesStaging.MipLevels = 1;
texDesStaging.SampleDesc.Count = 1;
texDesStaging.SampleDesc.Quality = 0;
texDesStaging.MiscFlags = 0;
texDesStaging.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesStaging.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &texDesStaging, NULL, &m_pStagingResource ) );
D3D10_TEXTURE2D_DESC texDesShader;
texDesShader.Width = m_width;
texDesShader.Height = m_height;
texDesShader.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesShader.ArraySize = 1;
texDesShader.MipLevels = 1;
texDesShader.SampleDesc.Count = 1;
texDesShader.SampleDesc.Quality = 0;
texDesShader.MiscFlags = 0;
texDesShader.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesShader.Usage = D3D10_USAGE_DEFAULT;
texDesShader.CPUAccessFlags = 0;
HR( m_Device->CreateTexture2D( &texDesShader, NULL, &m_pShaderResource ) );
D3D10_SHADER_RESOURCE_VIEW_DESC viewDesc;
ZeroMemory( &viewDesc, sizeof( viewDesc ) );
viewDesc.Format = texDesShader.Format;
viewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
viewDesc.Texture2D.MipLevels = texDesShader.MipLevels;
HR( m_Device->CreateShaderResourceView( m_pShaderResource, &viewDesc, &m_pShaderResourceView ) );
}
HRESULT COverlayTexture::Render(VOID)
{
m_Device->PSSetShaderResources(0, 1, &m_pShaderResourceView);
D3D10_MAPPED_TEXTURE2D lockedRect;
m_pStagingResource->Map( 0, D3D10_MAP_WRITE, 0, &lockedRect );
// Fill in the texture with the bitmap mapped from shared memory view
m_pStagingResource->Unmap(0);
m_Device->CopyResource(m_pShaderResource, m_pStagingResource);
}
I use two instances of the class COverlayTexture, each of which fills its own bitmap into its own texture, and I render them in the sequence COverlayTexture[1] then COverlayTexture[0].
COverlayTexture* pOverlayTexture[2];
for( int i = 1; i >= 0; i-- )
{
pOverlayTexture[i]->Render();
}
The blend state setting in the FX file is defined as below:
BlendState AlphaBlend
{
AlphaToCoverageEnable = FALSE;
BlendEnable[0] = TRUE;
SrcBlend = SRC_ALPHA;
DestBlend = INV_SRC_ALPHA;
BlendOp = ADD;
BlendOpAlpha = ADD;
SrcBlendAlpha = ONE;
DestBlendAlpha = ZERO;
RenderTargetWriteMask[0] = 0x0f;
};
The pixel shader in the FX file is defined as below:
Texture2D txDiffuse;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
return ret;
}
Thanks again.
Edit for Paulo:
Thanks a lot, Paulo. The problem is deciding which instance of the object should be bound to the alpha texture and which to the diffuse texture. As a test, I bind COverlayTexture[0] to the alpha and COverlayTexture[1] to the diffuse texture.
Texture2D txDiffuse[2];
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse[1].Sample(samLinear, input.Tex);
float alpha = txDiffuse[0].Sample(samLinear, input.Tex).x;
return float4(ret.xyz, alpha);
}
I called PSSetShaderResources with the two resource views.
g_pShaderResourceViews[0] = overlay[0].m_pShaderResourceView;
g_pShaderResourceViews[1] = overlay[1].m_pShaderResourceView;
m_Device->PSSetShaderResources(0, 2, g_pShaderResourceViews);
The result is that I don't see anything. I also tried the channels x, y, z, and w.
Post some more code.
I'm not sure how you mean to mix these two textures. If you want to mix them in the pixel shader, you need to sample both of them and then add them (or whatever operation you require) together.
How do you add the textures together? By setting an ID3D10BlendState or in the pixel shader?
EDIT:
You don't need two textures in every class: if you want to write to your texture your usage should be D3D10_USAGE_DYNAMIC. When you do this, you can also have this texture as your shader resource so you don't need to do the m_Device->CopyResource(m_pShaderResource, m_pStagingResource); step.
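A hedged sketch of that single dynamic texture (replacing the staging/default pair; m_pTexture is an assumed member standing in for both):
// One CPU-writable texture that is also directly bindable as a shader resource.
D3D10_TEXTURE2D_DESC texDesc = {};
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D10_USAGE_DYNAMIC;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &texDesc, NULL, &m_pTexture ) );
// Each update: map with WRITE_DISCARD, fill, unmap - no CopyResource needed.
D3D10_MAPPED_TEXTURE2D mapped;
m_pTexture->Map( 0, D3D10_MAP_WRITE_DISCARD, 0, &mapped );
// ... copy the shared-memory bitmap into mapped.pData row by row ...
m_pTexture->Unmap( 0 );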
Since you're using alpha blending you must control the alpha value output in the pixel shader (the w component of the float4 that the pixel shader returns).
Bind both textures to your pixel shader and use one texture's value as the alpha component:
Texture2D txDiffuse;
Texture2D txAlpha;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
float alpha=txAlpha.Sample(samLinear,input.Tex).x; // Choose the proper channel
return float4(ret.xyz,alpha); // Alpha is the 4th component
}