How Do I Enable MSAA for a Render to Texture iOS App - swift

I have a working render-to-texture toy iOS app. The issue is that it has a ton of jaggies because it is point-sampled and not anti-aliased.
I increased the sample count in my MTKView subclass to 4 to enable MSAA.
Here is what the relevant code looks like.
// render to texture render pass descriptor
renderPassDesc = MTLRenderPassDescriptor()
renderPassDesc.EI_configure(clearColor: MTLClearColorMake(1, 1, 1, 1), clearDepth: 1)
// my MTLRenderPassDescriptor extension convenience method
public func EI_configure(clearColor:MTLClearColor, clearDepth: Double) {
    // color
    colorAttachments[ 0 ] = MTLRenderPassColorAttachmentDescriptor()
    colorAttachments[ 0 ].storeAction = .store
    colorAttachments[ 0 ].loadAction = .clear
    colorAttachments[ 0 ].clearColor = clearColor
    // depth
    depthAttachment = MTLRenderPassDepthAttachmentDescriptor()
    depthAttachment.storeAction = .dontCare
    depthAttachment.loadAction = .clear
    depthAttachment.clearDepth = clearDepth
}
I attach a color and depth buffer configured for MSAA to renderPassDesc:
// color
let colorDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat:view.colorPixelFormat, width:Int(view.bounds.size.width), height:Int(view.bounds.size.height), mipmapped:false)
colorDesc.mipmapLevelCount = 1;
colorDesc.textureType = .type2DMultisample
colorDesc.sampleCount = view.sampleCount
colorDesc.usage = [.renderTarget, .shaderRead]
renderPassDesc.colorAttachments[ 0 ].texture = view.device!.makeTexture(descriptor:colorDesc)
// depth
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat:.depth32Float, width:Int(view.bounds.size.width), height:Int(view.bounds.size.height), mipmapped:false)
depthDesc.mipmapLevelCount = 1;
depthDesc.textureType = .type2DMultisample
depthDesc.sampleCount = view.sampleCount
depthDesc.usage = .renderTarget
renderPassDesc.depthAttachment.texture = view.device!.makeTexture(descriptor:depthDesc)
In my draw loop I am getting the following error from my fragment shader that consumes the texture that was rendered into:
failed assertion Fragment Function(finalPassOverlayFragmentShader):
incorrect type of texture (MTLTextureType2DMultisample) bound at texture binding at index 0 (expect MTLTextureType2D) for underlay[0]
This is the fragment shader:
fragment float4 finalPassOverlayFragmentShader(InterpolatedVertex vert [[ stage_in ]],
                                               texture2d<float> underlay [[ texture(0) ]],
                                               texture2d<float> overlay [[ texture(1) ]]) {
    constexpr sampler defaultSampler;
    float4 _F = overlay.sample(defaultSampler, vert.st).rgba;
    float4 _B = underlay.sample(defaultSampler, vert.st).rgba;
    float4 rgba = _F + (1.0f - _F.a) * _B;
    return rgba;
}
I am sure I have missed a setting somewhere but I cannot find it.
What have I missed here?
UPDATE 0
I now have MSAA happening for my 2-pass toy. The only problem is that there is not much anti-aliasing happening; in fact, it is hard to tell that anything has changed. Here are my latest settings:
// color - multi-sampled texture target
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat:format, width:w, height:h, mipmapped:false)
desc.mipmapLevelCount = 1;
desc.textureType = .type2DMultisample
desc.sampleCount = view.sampleCount
desc.usage = .renderTarget
let tex:MTLTexture? = view.device!.makeTexture(descriptor:desc)
// color - point-sampled resolve-texture
let resolveDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat:format, width:w, height:h, mipmapped:true)
let resolveTex:MTLTexture? = view.device!.makeTexture(descriptor:resolveDesc)
// depth texture target
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat:.depth32Float, width:w, height:h, mipmapped:false)
depthDesc.mipmapLevelCount = 1;
depthDesc.textureType = .type2DMultisample
depthDesc.sampleCount = view.sampleCount
depthDesc.usage = .renderTarget
let depthTex:MTLTexture? = view.device!.makeTexture(descriptor:depthDesc)
// render pass descriptor
renderPassDesc = MTLRenderPassDescriptor()
// color
renderPassDesc.colorAttachments[ 0 ] = MTLRenderPassColorAttachmentDescriptor()
renderPassDesc.colorAttachments[ 0 ].storeAction = .storeAndMultisampleResolve
renderPassDesc.colorAttachments[ 0 ].loadAction = .clear
renderPassDesc.colorAttachments[ 0 ].clearColor = MTLClearColorMake(0.25, 0.25, 0.25, 1)
renderPassDesc.colorAttachments[ 0 ].texture = tex
renderPassDesc.colorAttachments[ 0 ].resolveTexture = resolveTex
// depth
renderPassDesc.depthAttachment = MTLRenderPassDepthAttachmentDescriptor()
renderPassDesc.depthAttachment.storeAction = .dontCare
renderPassDesc.depthAttachment.loadAction = .clear
renderPassDesc.depthAttachment.clearDepth = 1;
renderPassDesc.depthAttachment.texture = depthTex
UPDATE 1
The jaggies appear to come from the render-to-texture pass and not from the asset. Below is a side-by-side comparison. The top image is rendered using a single pass with MSAA enabled; the bottom image is rendered to texture. The jaggies are clearly visible in the bottom image.
single pass
2-pass

The error is not about your render target (a.k.a. color and depth attachments). It's about a texture you're passing in via the render command encoder's fragment texture table — that is, where you're calling setFragmentTexture(_:index:). The one you're passing for index 0 is a .type2DMultisample when the shader is coded to expect .type2D, because you declared underlay as a texture2d<...>.
Your setup for MSAA is OK for an intermediate step. You will eventually need to resolve the texture to a non-multisampled texture in order to draw it to screen. For that step (which might be for this render command encoder or a later one, depending on your needs), you need to set the storeAction for the color attachment to either .multisampleResolve or .storeAndMultisampleResolve. And you need to set the resolveTexture to a 2D texture. That could be one of your own or a drawable's texture.
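As a rough sketch of that resolve step (the names msaaTex, resolveTex, w, h, and encoder are placeholders, not taken from your code), the pass setup and the final-pass binding might look something like this:

// a plain 2D resolve target alongside the multisampled color target
let resolveDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: view.colorPixelFormat,
                                                           width: w, height: h, mipmapped: false)
resolveDesc.usage = [.renderTarget, .shaderRead]   // it gets sampled in the final pass
let resolveTex = view.device!.makeTexture(descriptor: resolveDesc)

renderPassDesc.colorAttachments[ 0 ].texture = msaaTex                 // .type2DMultisample target
renderPassDesc.colorAttachments[ 0 ].resolveTexture = resolveTex       // .type2D resolve target
renderPassDesc.colorAttachments[ 0 ].storeAction = .multisampleResolve // or .storeAndMultisampleResolve

// in the final pass, bind the resolved 2D texture, not the MSAA texture
encoder.setFragmentTexture(resolveTex, index: 0)   // matches texture2d<float> underlay

That way the shader's texture2d<float> declaration stays valid, because only the single-sample resolve texture is ever bound to the fragment function.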

Related

How to add a CIFilter to MTLTexture Using ARMatteGenerator?

I am working off of Apple's sample project related to using the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }
    var targetImage: CIImage?
    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)
    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!
    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }
    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")
    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)
    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)
    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)
    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?
I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the video linked in your previous question, I would recommend adjusting the metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to see if they should draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);
    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);
    if (dilatedDepth < renderedDepth) { // People in front of rendered
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal. Find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above. We're deciding if we should use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);
    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);
    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);
    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;
    // get just the numbers after the decimal point
    float fraction = fract(huge_number);
    // send the result back to the caller
    return fraction;
}
(taken from #twostraws ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get:
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it was typically smooth when running on its own.

How to modify a Texture pixels from a compute shader in unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the image returned is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I have to first rotate the image and then flip it sideways. The approach I am using is simply iterating over the pixels of the image and modifying them, which performs very poorly. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);
Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); //rotate 180deg
image.SetPixels32(colors); //apply rotation
image = FlipTexture(image); //flip sideways
//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int width = original.width;
    int height = original.height;
    for (int col = 0; col < width; col++)
    {
        for (int row = 0; row < height; row++)
        {
            if (upSideDown)
            {
                flipped.SetPixel(row, (width - 1) - col, original.GetPixel(row, col));
            }
            else
            {
                flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
            }
        }
    }
    flipped.Apply();
    return flipped;
}
To improve the performance I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little. The following works for flipping a texture vertically (rotating 180 degrees and then flipping horizontally amounts to a vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;
float2 flip;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    flip = float2(512, 1024) - id.xy;
    Result[id.xy] = float4(ImageInput[flip].x, ImageInput[flip].y, ImageInput[flip].z, 1.0);
}
and called from any script:
public void FlipImage()
{
    int kernelHandle = shader.FindKernel("CSMain");
    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();
    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);
    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
}
This takes an input Texture2D, flips it in the shader, and applies it to a RenderTexture and to a Texture2D, whatever you need.
Note that the image sizes are hardcoded in my example and should be replaced by whatever size you need (to pass them into the shader, use shader.SetInt()).
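For example, a small sketch of that parameterisation (the uniform names width and height are my own, not part of the code above) would pass the real texture size in before dispatching, instead of the hardcoded 512 x 1024:

// C# side: pass the actual size to the compute shader and size the dispatch from it
shader.SetInt("width", myTexture.width);
shader.SetInt("height", myTexture.height);
shader.Dispatch(kernelHandle, myTexture.width / 8, myTexture.height / 8, 1);

// HLSL side: declare "int width; int height;" and use them instead of the literals, e.g.
//   flip = float2(width, height) - id.xy;

This assumes the texture dimensions are multiples of the 8x8 thread group size, as in the original example.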

cocos2dx: Sprite3D rotating, culling error

Hi, I'm trying to have 2 sprites with different z in a 3D world and a camera that rotates around the center of the screen and points at the center of the screen.
Even though the sprites have different z (and z-order, I don't know if this is necessary), both sprites are always visible, while I'm expecting the second sprite to be hidden by the other...
This is the HelloWorld layer init:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
this->setCameraMask((unsigned short)CameraFlag::USER2, true);
camera = Camera::createPerspective(60, (float)visibleSize.width/visibleSize.height, 1.0, 1000);
camera->setCameraFlag(CameraFlag::USER2);
camera->setPosition3D(spritePos + Vec3(-200,0,800));
camera->lookAt(spritePos, Vec3(0.0,1.0,0.0));
this->addChild(camera);
this->scheduleUpdate();
angle=0;
and this is update:
void TestScene::update(float dt)
{
    angle += 0.1;
    Size visibleSize = Director::getInstance()->getVisibleSize();
    Vec2 origin = Director::getInstance()->getVisibleOrigin();
    Vec3 spritePos = Vec3(visibleSize.width/2, visibleSize.height/2, 0);
    camera->setPosition3D(Vec3(visibleSize.width/2, visibleSize.height/2, 0) + Vec3(800*cos(angle), 0, 800*sin(angle)));
    camera->lookAt(spritePos, Vec3(0.0, 1.0, 0.0));
}
I have tried something simpler:
auto sp3d = Sprite3D::create();
sp3d->setPosition(visibleSize.width/2, visibleSize.height/2);
addChild(sp3d);
auto sprite = Sprite::create("JP9_table.png");
auto spritePos = Vec3(0,0,0);
sprite->setScale(0.3);
sprite->setPosition3D(spritePos);
sp3d->addChild(sprite,0);
auto sprite2 = Sprite::create("JP9_logo_yc.png");
auto spritePos2 = Vec3(0,0,10);
sprite2->setPosition3D(spritePos2);
sp3d->addChild(sprite2,10);
sp3d->setCullFace(GL_BACK);
sp3d->setCullFaceEnabled(true);
even with sp3d->runAction(RotateTo::create(20, Vec3(0, 3000, 0))) I get the same error.
Is this a cocos2d-x bug?
The sprite with z=10 disappears before it is covered by the other sprite, stays hidden for a while, and then reappears when it should be completely hidden!
Have I forgotten something?
thanks
Maybe you should check this.
_camControlNode = Node::create();
_camControlNode->setNormalizedPosition(Vec2(.5,.5));
addChild(_camControlNode);
_camNode = Node::create();
_camNode->setPositionZ(Camera::getDefaultCamera()->getPosition3D().z);
_camControlNode->addChild(_camNode);
auto sp3d = Sprite3D::create();
sp3d->setPosition(s.width/2, s.height/2);
addChild(sp3d);
auto lship = Label::create();
lship->setString("Ship");
lship->setPosition(0, 20);
sp3d->addChild(lship);
and
_lis->onTouchMoved = [this](Touch* t, Event* e) {
    float dx = t->getDelta().x;
    Vec3 rot = _camControlNode->getRotation3D();
    rot.y += dx;
    _camControlNode->setRotation3D(rot);
    Vec3 worldPos;
    _camNode->getNodeToWorldTransform().getTranslation(&worldPos);
    Camera::getDefaultCamera()->setPosition3D(worldPos);
    Camera::getDefaultCamera()->lookAt(_camControlNode->getPosition3D());
};

Setting diffuse map of material in 3ds max using C++

I am developing a utility plugin for 3ds Max 2013 and 2014 in C++. In it, I need to set a bitmap as the diffuse map of the currently selected object's material. I tried the following code: the file name shows up next to the diffuse map in the material editor, but the image is not applied to the material, and clicking the View Image button in the Bitmap Parameters rollout does not show any image either.
Bitmap* bmap;
BitmapInfo bmap_info(L"D:\\misc\\licenseplates.jpg");
// Load the selected image
BMMRES status;
bmap = TheManager->Load(&bmap_info, &status);
for( int i = 0; i < GetCOREInterface()->GetSelNodeCount(); ++i )
{
    INode *node = GetCOREInterface()->GetSelNode(i);
    // Get the material from the node
    Mtl *m = node->GetMtl();
    if (!m) return; // No material assigned
    StdMat* std = (StdMat *)m;
    // Access the Diffuse map
    BitmapTex *tmap = (BitmapTex *)std->GetSubTexmap(ID_DI);
    // No map assigned
    if (!tmap)
    {
        tmap = (BitmapTex*)NewDefaultBitmapTex();
    }
    tmap->SetBitmap(bmap);
    tmap->SetBitmapInfo(bmap_info);
    tmap->ReloadBitmapAndUpdate();
    tmap->fnReload();
    std->SetSubTexmap(ID_DI, tmap);
    std->SetTexmapAmt(ID_DI, 1.0f, 0);
    std->EnableMap(ID_DI, TRUE);
}
Do I need to set some other parameters also to set the map?

Multiple textures don't show

I'm a newbie to DirectX 10. I'm developing a Direct3D 10 application that mixes two textures, which are filled manually according to the user's input. The current implementation is:
Create two empty textures with usage D3D10_USAGE_STAGING.
Create two shader resource views to bind to the pixel shader, because the shader needs them.
Copy the textures to GPU memory by calling CopyResource.
Now the problem is that I can only see the first texture; I don't see the second. It looks to me like the binding doesn't work for the second texture.
I don't know what's wrong with it. Can anyone shed some light on it?
Thanks,
Marshall
The class COverlayTexture is responsible for creating the texture, creating the resource view, filling the texture with the bitmap mapped from another application, and binding the resource view to the pixel shader.
HRESULT COverlayTexture::Initialize(VOID)
{
    D3D10_TEXTURE2D_DESC texDesStaging;
    texDesStaging.Width = m_width;
    texDesStaging.Height = m_height;
    texDesStaging.Usage = D3D10_USAGE_STAGING;
    texDesStaging.BindFlags = 0;
    texDesStaging.ArraySize = 1;
    texDesStaging.MipLevels = 1;
    texDesStaging.SampleDesc.Count = 1;
    texDesStaging.SampleDesc.Quality = 0;
    texDesStaging.MiscFlags = 0;
    texDesStaging.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    texDesStaging.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
    HR( m_Device->CreateTexture2D( &texDesStaging, NULL, &m_pStagingResource ) );

    D3D10_TEXTURE2D_DESC texDesShader;
    texDesShader.Width = m_width;
    texDesShader.Height = m_height;
    texDesShader.BindFlags = D3D10_BIND_SHADER_RESOURCE;
    texDesShader.ArraySize = 1;
    texDesShader.MipLevels = 1;
    texDesShader.SampleDesc.Count = 1;
    texDesShader.SampleDesc.Quality = 0;
    texDesShader.MiscFlags = 0;
    texDesShader.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    texDesShader.Usage = D3D10_USAGE_DEFAULT;
    texDesShader.CPUAccessFlags = 0;
    HR( m_Device->CreateTexture2D( &texDesShader, NULL, &m_pShaderResource ) );

    D3D10_SHADER_RESOURCE_VIEW_DESC viewDesc;
    ZeroMemory( &viewDesc, sizeof( viewDesc ) );
    viewDesc.Format = texDesShader.Format;
    viewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    viewDesc.Texture2D.MipLevels = texDesShader.MipLevels;
    HR( m_Device->CreateShaderResourceView( m_pShaderResource, &viewDesc, &m_pShaderResourceView ) );
}
HRESULT COverlayTexture::Render(VOID)
{
    m_Device->PSSetShaderResources(0, 1, &m_pShaderResourceView);
    D3D10_MAPPED_TEXTURE2D lockedRect;
    m_pStagingResource->Map( 0, D3D10_MAP_WRITE, 0, &lockedRect );
    // Fill in the texture with the bitmap mapped from shared memory view
    m_pStagingResource->Unmap(0);
    m_Device->CopyResource(m_pShaderResource, m_pStagingResource);
}
I use two instances of the class COverlayTexture each of which fills its own bitmap to its texture respectively and renders with sequence COverlayTexture[1] then COverlayTexture[0].
COverlayTexture* pOverlayTexture[2];
for( int i = 1; i >= 0; i-- )
{
    pOverlayTexture[i]->Render();
}
The blend state setting in the FX file is defined as below:
BlendState AlphaBlend
{
    AlphaToCoverageEnable = FALSE;
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    BlendOpAlpha = ADD;
    SrcBlendAlpha = ONE;
    DestBlendAlpha = ZERO;
    RenderTargetWriteMask[0] = 0x0f;
};
The pixel shader in the FX file is defined as below:
Texture2D txDiffuse;
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse.Sample(samLinear, input.Tex);
    return ret;
}
Thanks again.
Edit for Paulo:
Thanks a lot, Paulo. The problem is deciding which instance of the object should be bound to the alpha texture and which to the diffuse texture. As a test, I bind COverlayTexture[0] to the alpha texture and COverlayTexture[1] to the diffuse texture.
Texture2D txDiffuse[2];
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse[1].Sample(samLinear, input.Tex);
    float alpha = txDiffuse[0].Sample(samLinear, input.Tex).x;
    return float4(ret.xyz, alpha);
}
I call PSSetShaderResources for the two resource views:
g_pShaderResourceViews[0] = overlay[0].m_pShaderResourceView;
g_pShaderResourceViews[1] = overlay[1].m_pShaderResourceView;
m_Device->PSSetShaderResources(0, 2, g_pShaderResourceViews);
The result is that I don't see anything. I also tried the channels x, y, z, and w.
Post some more code.
I'm not sure how you mean to mix these two textures. If you want to mix them in the pixel shader you need to sample both of them and then add them (or whatever operation you require) together.
How do you add the textures together? By setting an ID3D10BlendState or in the pixel shader?
EDIT:
You don't need two textures in every class: if you want to write to your texture your usage should be D3D10_USAGE_DYNAMIC. When you do this, you can also have this texture as your shader resource so you don't need to do the m_Device->CopyResource(m_pShaderResource, m_pStagingResource); step.
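A minimal sketch of that dynamic-texture approach, using the same descriptor values as above (the member name m_pTexture is illustrative, not from your code), might look like:

// One texture that is both CPU-writable and a shader resource
D3D10_TEXTURE2D_DESC desc;
ZeroMemory( &desc, sizeof( desc ) );
desc.Width = m_width;
desc.Height = m_height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DYNAMIC;             // CPU write, GPU read
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &desc, NULL, &m_pTexture ) );

// Update it each frame with Map/Unmap instead of CopyResource
D3D10_MAPPED_TEXTURE2D mapped;
HR( m_pTexture->Map( 0, D3D10_MAP_WRITE_DISCARD, 0, &mapped ) );
// ... copy the bitmap rows into mapped.pData, honouring mapped.RowPitch ...
m_pTexture->Unmap( 0 );

The shader resource view is then created directly on this texture and bound with PSSetShaderResources as before.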
Since you're using alpha blending you must control the alpha value output in the pixel shader (the w component of the float4 that the pixel shader returns).
Bind both textures to your pixel shader and use one texture's value as the alpha component:
Texture2D txDiffuse;
Texture2D txAlpha;
float4 PS(PS_INPUT input) : SV_Target
{
    float4 ret = txDiffuse.Sample(samLinear, input.Tex);
    float alpha = txAlpha.Sample(samLinear, input.Tex).x; // Choose the proper channel
    return float4(ret.xyz, alpha); // Alpha is the 4th component
}