About additive blending in Metal - Swift

In WebGL (OpenGL), I write it as follows:
// alpha blending
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
gl.bindFramebuffer(gl.FRAMEBUFFER, buffers[0].framebuffer);
gl.useProgram(firstProgram);
webgl.enableAttribute(planeVBO, attLocation, attStride);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, planeIBO);
gl.uniform2fv(resolution, [width, height]);
gl.drawElements(gl.TRIANGLES, plane.index.length, gl.UNSIGNED_SHORT, 0);
I am unsure how to achieve this with Metal. In WebGL, if you bind the framebuffer you created and draw into it with a shader, the new output is added to what is already there; in Metal, is it more appropriate to send the framebuffer contents to the shader as a texture and do the addition on the shader side, rather than reading it back? Or does the same kind of mechanism exist?
Or, with a render pass descriptor like this:
renderToTextureRenderPassDescriptor = MTLRenderPassDescriptor()
renderToTextureRenderPassDescriptor.colorAttachments[0].texture = renderTargetTexture
renderToTextureRenderPassDescriptor.colorAttachments[0].loadAction = .clear
renderToTextureRenderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderToTextureRenderPassDescriptor.colorAttachments[0].storeAction = .store
Is it possible to continue adding in the same way, as long as the attachment is not cleared here?
Or, since the render target from the earlier pipeline setup,
renderTargetTexture = mtlDevice.makeTexture(descriptor: textureDescriptor)
has already been initialized at that point, is no change needed?
The code template itself is as follows.
https://sgaworks.com/metalsample/MetalSample2.zip
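For reference, the closest Metal analogue of the gl.blendFuncSeparate call above is fixed-function blending configured on the render pipeline state, combined with a loadAction of .load so the previous contents of the render target are preserved; no read-back inside the shader is needed. A minimal sketch, assuming pipelineDescriptor is the MTLRenderPipelineDescriptor already created elsewhere in the template:

// Sketch: blending equivalent to gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE)
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .one
// To accumulate over multiple passes, load the existing attachment contents instead of clearing:
renderToTextureRenderPassDescriptor.colorAttachments[0].loadAction = .load

With .load in place, each draw blends on top of whatever is already in renderTargetTexture, so the accumulation continues until loadAction is set back to .clear.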

How to add a CIFilter to MTLTexture Using ARMatteGenerator?

I am working off of Apple's sample project related to using the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }
    var targetImage: CIImage?
    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)
    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!
    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }
    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")
    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)
    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)
    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)
    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?
I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the effect in the video linked in your previous question, I would recommend adjusting the Metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to see whether they should draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);
    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);

    if (dilatedDepth < renderedDepth) { // People in front of rendered
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal and find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above: we're deciding whether we should use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader, add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);
    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);
    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);
    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;
    // get just the numbers after the decimal point
    float fraction = fract(huge_number);
    // send the result back to the caller
    return fraction;
}
(taken from @twostraws ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get dynamic grayscale static over the people in the matte (the original answer includes a screenshot of the result).
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it typically ran smoothly when running on its own.

NSRect.fill() not working with clear colors on Xcode 9 Swift 4

Swift 4 has a new NSRect.fill() function to replace NSRectFill(). When attempting to clear a bitmap using NSColor.clear.setFill(), the bitmap remains unchanged after calling NSRect.fill().
A workaround is to use NSRect.fill(using:) and specifying .copy.
Here is sample code demonstrating the behavior in a Swift 4 playground (the comments describe what each step produces):
// Duplicating a bug with Xcode 9 and Swift 4
import Cocoa
var image = NSImage(size: NSSize(width: 32, height: 32))
let rect = NSRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
image.lockFocus()
NSColor.red.setFill()
rect.fill()
image.unlockFocus()
image.lockFocus()
NSColor.clear.setFill()
// calling fill() with a clear color does not work
rect.fill()
image.unlockFocus()
// Image above should not be red
image.lockFocus()
NSColor.clear.setFill()
// using fill(using:) does work
rect.fill(using: .copy)
image.unlockFocus()
// image above is properly cleared
Is this a bug that I should file with Apple or am I missing something? This sequence worked in Swift 3 using NSRectFill().
The two calls simply have different default compositing logic:
The old syntax always used .copy.
The new syntax uses the current context's compositing operation if present, or .sourceOver otherwise.
void NSRectFill(NSRect rect);
Fills aRect with the current color using the compositing mode NSCompositingOperationCopy
(source: https://developer.apple.com/documentation/appkit/1473652-nsrectfill)
/// Fills this rect in the current NSGraphicsContext in the context's fill
/// color.
/// The compositing operation of the fill defaults to the context's
/// compositing operation, not necessarily using `.copy` like `NSRectFill()`.
/// - precondition: There must be a set current NSGraphicsContext.
@available(swift 4)
public func fill(using operation: NSCompositingOperation = NSGraphicsContext.current?.compositingOperation ?? .sourceOver)
(source: Xcode comments for https://developer.apple.com/documentation/corefoundation/cgrect/2903486-fill/)
So if you're converting old code to newer format, then rect.fill(using: .copy) is the syntax to keep identical behaviour.
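Since the default comes from the current context, another option suggested by the doc comment above is to set the context's compositing operation once, after which plain fill() behaves like the old NSRectFill(). A minimal sketch, assuming it runs between lockFocus() and unlockFocus() as in the playground code:

// Make .copy the context-wide default so fill() clears as NSRectFill() did
NSGraphicsContext.current?.compositingOperation = .copy
NSColor.clear.setFill()
rect.fill() // now clears, because fill() picks up the context's .copy operation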

NSColorWell for View Background | Swift OS X

Currently I can change the background color of my window like so:
tab_view.layer?.backgroundColor = NSColor(red: 0.9, green: 0.9, blue: 0.9, alpha: 1).CGColor
Which is great and all, but I want the user to be able to change the color.
This is when I found out about NSColorWell.
I tried applying the color like so:
tab_view.layer?.backgroundColor = NSColorWell
but then it says "Can't assign value of type 'NSColorWell.Type' to type 'CGColor'?"
I tried looking it up, but everything I found (which was barely anything) was written in Objective-C.
Someone please explain to me how I can make the view background color a color from the NSColorWell.
You need to pass the NSColorWell's color property's CGColor value. See the Apple docs for NSColorWell.
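For example, a minimal sketch, assuming a hypothetical outlet named colorWell connected to an NSColorWell in your interface (using the Swift 2-era spelling from the question):

@IBOutlet weak var colorWell: NSColorWell! // hypothetical outlet

func applyBackgroundColor() {
    // Read the well's currently selected color and hand its CGColor to the layer
    tab_view.layer?.backgroundColor = colorWell.color.CGColor
}

In Swift 3 and later the property is spelled cgColor rather than CGColor.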

Having issues with drawing to a frame buffer texture. It draws blank

I am using OpenGL ES 2.0 with GLKit, trying to render on iOS devices.
Basically my goal is, instead of drawing to the main buffer, to draw to a texture, then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately they mention something about the power of two (I'm assuming with regard to resolution) but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES

class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei) {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)
        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))

        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE)) {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin() {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end() {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
First, code that theoretically renders any texture full screen. This has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint) {
    loadBuffers()
    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)
    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1) {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)
    glDisable(GLenum(GL_TEXTURE_2D))
    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see its methods, but it works. This is how I will know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally, here is the gist of the code I am trying to run.
func initialize() {
    nfbo = RenderTexture(width: width, height: height)
}

func draw() {
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know. I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512 just in case being a power of two would make things work. Unfortunately, it didn't.
Another note: everything I am doing will be 2D, so I don't need depth buffers, right?
Yesterday I saw exactly the same issue. After struggling for hours, I found out why. The trick is configuring your texture map with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything on the texture map. The reason seems to be that while iOS supports texture maps that are not a power of 2, it requires GL_CLAMP_TO_EDGE; otherwise it won't work. It really should report an incomplete framebuffer; it took me quite a long time to debug this problem!
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone
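For placement in the RenderTexture initializer above, these calls would go between glBindTexture and glTexImage2D; a sketch under that assumption:

// Inside init, right after glBindTexture(GLenum(GL_TEXTURE_2D), tex):
// NPOT textures on ES 2.0 require clamp-to-edge wrapping, and the default
// minification filter expects mipmaps, which this render target doesn't have
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)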

OpenGL ES 2.0 texturing

I'm trying to render a simple textured quad in OpenGL ES 2.0 on an iPhone. The geometry is fine and I get the expected quad if I use a solid color in my shader:
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
And I get the expected gradients if I render the texture coordinates directly:
gl_FragColor = vec4(texCoord.x, texCoord.y, 0.0, 1.0);
The image data is loaded from a UIImage, scaled to fit within 1024x1024, and loaded into a texture like so:
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, data);
width, height, and the contents of data are all correct, as examined in the debugger.
When I change my fragment shader to use the texture:
gl_FragColor = texture2D(tex, texCoord);
... and bind the texture and render like so:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture);
// this is unnecessary, as it defaults to 0, but for completeness...
GLuint texLoc = glGetUniformLocation(_program, "tex");
glUniform1i(texLoc, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
... I get nothing. A black quad. glGetError() doesn't return an error and glIsTexture(_texture) returns true.
What am I doing wrong here? I've been over and over every example I could find online, but everybody is doing it exactly as I am, and the debugger shows my parameters to the various GL functions are what I expect them to be.
After glTexImage2D, set the MIN/MAG filters with glTexParameteri; the defaults use mipmaps, so the texture is incomplete with that code.
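For example, a minimal sketch of that fix, written in the same Swift-style GL used in the previous question (in the Objective-C code above, simply drop the GLenum casts):

// Use non-mipmap filters so the texture is complete without mipmap levels
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)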
I was experiencing the same issue (a black quad) and could not find an answer until a response by jfcalvo from this question led me to the cause. Basically, make sure you are not loading the texture on a different thread.
Make sure you set the texture wrap parameters to GL_CLAMP_TO_EDGE in both the S and T directions. Without this, the texture is incomplete and will appear black.
Make sure that you are calling glTexImage2D with the right formats (constants).
Make sure that you are freeing the image's resources after glTexImage2D.
That's how I do it on Android:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
mTextureID = textures[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);

InputStream is = mContext.getResources().openRawResource(R.drawable.ywemmo2);
Bitmap bitmap;
try {
    bitmap = BitmapFactory.decodeStream(is);
} finally {
    try {
        is.close();
    } catch (IOException e) {
        // Ignore.
    }
}
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
Maybe you forgot to
glEnable(GL_TEXTURE_2D);
In such a case, texture2D in the shader would return black, which is what the OP seems to be experiencing.