Having issues with drawing to a framebuffer texture. It draws blank (Swift)

I am in OpenGL ES 2.0 with GLKit, trying to render to iOS devices.
Basically my goal is, instead of drawing to the main buffer, to draw to a texture, then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately they mention something about power-of-two sizes (I'm assuming with regards to resolution) but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES

class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)

        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)

        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))

        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
First, a function that (in theory) renders any texture full screen. This has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()

    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)

    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }

    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)

    glDisable(GLenum(GL_TEXTURE_2D))
    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see its internals, but it works. This is how I will know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally, here is the gist of the code I am trying to run:
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know. I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512 just in case that would make things work by being a power of two. Unfortunately it didn't.
Another Note: Everything I am doing is going to be 2D, so I don't need depth buffers, right?

Yesterday I saw exactly the same issue. After struggling for hours, I found out why.
The trick is configuring your texture with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
Otherwise, you won't draw anything onto the texture.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It should really report an incomplete framebuffer. It took me quite a long time to debug this problem!
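Applied to the asker's RenderTexture init, the texture setup might look like this (a minimal sketch reusing the question's variable names; the GL_LINEAR filter lines are my own assumption, added because the default minification filter expects mipmaps, and sampling such a texture also comes back black):
glBindTexture(GLenum(GL_TEXTURE_2D), tex)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
// NPOT textures on iOS must clamp to edge; with the default GL_REPEAT nothing usable ends up in the texture
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)
// Assumption: non-mipmap filters, since this render target never gets mipmap levels
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)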
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone

Related

How to add a CIFilter to MTLTexture Using ARMatteGenerator?

I am working off of Apple's sample project related to using the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }

    var targetImage: CIImage?

    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)

    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!

    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }

    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")

    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)

    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)

    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)

    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?
I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the video linked in your previous question, I would recommend adjusting the metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to see if they should draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);

    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);

    if (dilatedDepth < renderedDepth) { // People in front of rendered
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal. Find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above. We're deciding if we should use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);

    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);

    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);

    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;

    // get just the numbers after the decimal point
    float fraction = fract(huge_number);

    // send the result back to the caller
    return fraction;
}
(taken from #twostraws ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get a dynamic grayscale static effect wherever the matte detects a person.
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it was typically smooth when running on its own.

Unity3D Texture2D.LoadRawTextureData causing massive memory leak?

In one application (Server), I am capturing the display:
public byte[] GetFrame()
{
    int width = Screen.width;
    int height = Screen.height;

    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    byte[] bytes = tex.GetRawTextureData();
    Destroy(tex);

    return bytes;
}
I send this frame over the network to another application (Client), which loads the received frame and applies it as a texture on a plane (or some other 3D primitive):
void DisplayFrame(byte[] frame)
{
    Texture2D texture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    texture.LoadRawTextureData(frame);
    texture.Apply();

    GetComponent<Renderer>().material.mainTexture = texture;
    ScreenUpdate = true;

    Destroy(texture);
}
This works and the image is displayed as I expected. However, when watching the RAM, I notice that the Client RAM goes nuts... over 8GB.
The interesting thing is that if I set frame = null; after calling Destroy(texture); in DisplayFrame on the Client, the user only sees a black screen.
I don't understand what is happening here... It's as if the frame is staying in memory and each new received frame just increases the memory used.
Does anyone have any ideas?
It turns out that if I move the Destroy(texture); from the bottom of DisplayFrame(byte[] frame) to the top of the function, the memory leak is resolved.
It's as if Destroy is being blocked because the texture is still being applied to the material when the next iteration comes around. I'm not clear on the issue here, so if someone with more knowledge could chip in, that would be great.

vImageAlphaBlend crashes

I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() for drawing those layers into the drawLayer context. While profiling I noticed CGContextDrawImage() needs 70% of the CPU time so I decided to try the Accelerate framework. I changed the code but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)

    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
        height: CGBitmapContextGetHeight(ctx),
        width: CGBitmapContextGetWidth(ctx),
        rowBytes: CGBitmapContextGetBytesPerRow(ctx))

    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)

        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
            height: CGImageGetHeight(imageLayer),
            width: CGImageGetWidth(imageLayer),
            rowBytes: CGImageGetBytesPerRow(imageLayer))

        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same, and all the layers have the same size too, so I don't understand why the last line crashes.
Also I don't see how to use the new convenience functions to create vImageBuffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get the pointer to inBitmapData. That is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers that I need as input to the alpha blend. Now it's not crashing anymore, and the speed boost shows up in a profiler (vImageAlphaBlend only takes about a third of the time of CGContextDrawImage), but unfortunately the result is a transparent image with pixel garbage instead of the white image background.
So far I don't get any runtime errors anymore but since the result is not as expected I fear that I still don't use the alpha blend function correctly.
vImage_Buffer.data should point to the CFData data (pixel data), not the CFDataRef.
Also, not all images store their data as four channel, 8-bit per channel data. If it turns out to be three channel or RGBA or monochrome, you may get more crashing or funny colors. Also, you have assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
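For what it's worth, here is a rough sketch of that approach in current Swift syntax (the question uses Swift 1.x-era APIs, so the exact spelling will differ); cgImage stands in for one of the question's imageLayer images, and the 8-bit premultiplied ARGB format is only an assumption:
import Accelerate
import CoreGraphics

// Describe the layout the pixel data should be converted into: 8-bit, 4-channel, premultiplied ARGB
let colorSpace = CGColorSpaceCreateDeviceRGB()
var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: Unmanaged.passUnretained(colorSpace),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
    version: 0,
    decode: nil,
    renderingIntent: .defaultIntent)

var buffer = vImage_Buffer()
// Allocates buffer.data and converts the CGImage's pixels into the requested format
let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, cgImage, vImage_Flags(kvImageNoFlags))
if error != kvImageNoError {
    // handle the error
}

// ... blend, e.g. vImagePremultipliedAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0) ...

free(buffer.data) // the init call allocates the pixel memory; the caller owns it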
Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there wasn't something I could do to get those to match up a bit better.
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I thought that CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.

Cropping image with CIImage.ImageByCroppingToRect or CICrop in MonoTouch

I have a very large image that I would like to show a 200x200px thumbnail of (showing a portion of the image, not a stretched version of the entire image). To achieve this I am looking into using CIImage.ImageByCroppingToRect or CICrop, but I am not able to get anything useful. Either the result is just black (I assume what I see is the black portion of the cropped image) or I get a SIGABRT ("Cannot handle a (6000 x 3000) sized texture with the given GLES context!").
There is a ObjC sample in this thread:
Cropping CIImage with CICrop isn't working properly
But I haven't managed to translate it in to C# and get it working properly.
Here's a MonoTouch port of the answer from the post you mentioned:
var croppedImaged = CIImage.FromCGImage (inputCGImage).ImageByCroppingToRect (new RectangleF (150, 150, 300, 300));
var transformFilter = new CIAffineTransform();
var affineTransform = CGAffineTransform.MakeTranslation (-150, 150);
transformFilter.Transform = affineTransform;
transformFilter.Image = croppedImaged;
CIImage transformedImage = transformFilter.OutputImage;

OpenGL ES 2.0 texturing

I'm trying to render a simple textured quad in OpenGL ES 2.0 on an iPhone. The geometry is fine and I get the expected quad if I use a solid color in my shader:
gl_FragColor = vec4 (1.0, 0.0, 0.0, 1.0);
And I get the expected gradients if I render the texture coordinates directly:
gl_FragColor = vec4 (texCoord.x, texCoord.y, 0.0, 1.0);
The image data is loaded from a UIImage, scaled to fit within 1024x1024, and loaded into a texture like so:
glGenTextures (1, &_texture);
glBindTexture (GL_TEXTURE_2D, _texture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
width, height, and the contents of data are all correct, as examined in the debugger.
When I change my fragment shader to use the texture:
gl_FragColor = texture2D (tex, texCoord);
... and bind the texture and render like so:
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, _texture);
// this is unnecessary, as it defaults to 0, but for completeness...
GLuint texLoc = glGetUniformLocation(_program, "tex");
glUniform1i(texLoc, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
... I get nothing. A black quad. glGetError() doesn't return an error and glIsTexture(_texture) returns true.
What am I doing wrong here? I've been over and over every example I could find online, but everybody is doing it exactly as I am, and the debugger shows my parameters to the various GL functions are what I expect them to be.
After glTexImage2D, set the MIN/MAG filters with glTexParameteri; the defaults use mipmaps, so the texture is incomplete with that code.
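For example (written in the Swift spelling used elsewhere on this page; for the C code in this question the same two calls apply without the GLenum() casts), non-mipmap filters would be:
// Sampling with the default mipmap MIN filter and no mipmaps returns black
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)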
I was experiencing the same issue (black quad) and could not find an answer until a response by jfcalvo from this question led me to the cause. Basically make sure you are not loading the texture in a different thread.
Make sure you set the texture wrap parameters to GL_CLAMP_TO_EDGE in both the S and T directions. Without this, the texture is incomplete and will appear black.
Make sure that you are calling glTexImage2D with the right formats (constants).
Make sure that you are freeing the image's resources after glTexImage2D.
That's how I do it on Android:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);

mTextureID = textures[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID);

GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);

InputStream is = mContext.getResources().openRawResource(R.drawable.ywemmo2);
Bitmap bitmap;
try {
    bitmap = BitmapFactory.decodeStream(is);
} finally {
    try {
        is.close();
    } catch (IOException e) {
        // Ignore.
    }
}

GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
Maybe you forgot to call
glEnable(GL_TEXTURE_2D);
In such a case, texture2D in the shader would return black, which is what the OP seems to be experiencing.