How to add a CIFilter to MTLTexture Using ARMatteGenerator?

I am working off of Apple's sample project that uses the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }
    var targetImage: CIImage?
    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)
    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!
    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }

    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")

    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)

    // Set up plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)

    // Set up textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)

    // Draw the final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?

I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the video linked in your previous question, I would recommend adjusting the metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to see if they should draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);
    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);

    if (dilatedDepth < renderedDepth) { // People in front of rendered
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front, so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal. Find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above. We're deciding whether we should use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);

    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);

    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);

    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;

    // get just the numbers after the decimal point
    float fraction = fract(huge_number);

    // send the result back to the caller
    return fraction;
}
(taken from @twostraws' ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get dynamic grayscale static wherever the segmentation matte detects a person.
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it was typically smooth when running on its own.

Related

How can I get rid of blurry textures in Metal?

I just created a Metal template and changed up the code a little. I switched out the default colormap with the Minecraft 16x16 lapis ore texture, but for some reason it is blurred when rendered at low resolution. I am trying to achieve that pixelated Minecraft look, so I would like to know how to disable this blurring/filtering.
Is there a way to load/present assets without this blurring? Here is my load asset function:
class func loadTexture(device: MTLDevice, textureName: String) throws -> MTLTexture {
    /// Load texture data with optimal parameters for sampling
    return try MTKTextureLoader(device: device).newTexture(name: textureName, scaleFactor: 1.0, bundle: nil, options: [
        MTKTextureLoader.Option.textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue),
        MTKTextureLoader.Option.textureStorageMode: NSNumber(value: MTLStorageMode.`private`.rawValue)
    ])
}
Here is a screenshot of the blurry cube I'm getting:
In your texture sample call (in the shader), you need to set your magnification filter to 'nearest', instead of 'linear', like so (assuming your sampler is declared inline inside your shader):
constexpr sampler textureSampler (mag_filter::nearest, // <-- Set this to 'nearest' if you don't want any filtering on magnification
                                  min_filter::nearest);

// Sample the texture to obtain a color
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
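If you would rather configure this on the CPU side, here is a minimal sketch of the equivalent host-side setup. It assumes device and renderEncoder are your existing MTLDevice and render command encoder, and that the fragment function is changed to take a sampler argument instead of declaring one inline:

// Build a nearest-neighbor sampler state once and bind it to the fragment stage.
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .nearest
samplerDescriptor.magFilter = .nearest
let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)

// During encoding; the fragment function would then declare:
// sampler textureSampler [[ sampler(0) ]]
renderEncoder.setFragmentSamplerState(samplerState, index: 0)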

How Do I Enable MSAA for a Render to Texture iOS App

I have a working render-to-texture toy iOS app. The issue is it has a ton of jaggies because it is point sampled and not anti-aliased:
I increased the sample count in my MTKView subclass to 4 to enable MSAA.
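For reference, that step is a one-line property change, though the render pipeline used with the multisampled attachments has to advertise the same count (a sketch; mtkView and pipelineDescriptor stand in for my own view and pipeline descriptor):

// The attachments' sampleCount and the pipeline's sampleCount must agree
mtkView.sampleCount = 4
pipelineDescriptor.sampleCount = 4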
Here is what the relevant code looks like.
// render to texture render pass descriptor
renderPassDesc = MTLRenderPassDescriptor()
renderPassDesc.EI_configure(clearColor: MTLClearColorMake(1, 1, 1, 1), clearDepth: 1)

// my MTLRenderPassDescriptor extension convenience method
public func EI_configure(clearColor: MTLClearColor, clearDepth: Double) {
    // color
    colorAttachments[0] = MTLRenderPassColorAttachmentDescriptor()
    colorAttachments[0].storeAction = .store
    colorAttachments[0].loadAction = .clear
    colorAttachments[0].clearColor = clearColor

    // depth
    depthAttachment = MTLRenderPassDepthAttachmentDescriptor()
    depthAttachment.storeAction = .dontCare
    depthAttachment.loadAction = .clear
    depthAttachment.clearDepth = clearDepth
}
I attach a color and depth buffer configured for MSAA to renderPassDesc:
// color
let colorDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: view.colorPixelFormat, width: Int(view.bounds.size.width), height: Int(view.bounds.size.height), mipmapped: false)
colorDesc.mipmapLevelCount = 1
colorDesc.textureType = .type2DMultisample
colorDesc.sampleCount = view.sampleCount
colorDesc.usage = [.renderTarget, .shaderRead]
renderPassDesc.colorAttachments[0].texture = view.device!.makeTexture(descriptor: colorDesc)

// depth
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: Int(view.bounds.size.width), height: Int(view.bounds.size.height), mipmapped: false)
depthDesc.mipmapLevelCount = 1
depthDesc.textureType = .type2DMultisample
depthDesc.sampleCount = view.sampleCount
depthDesc.usage = .renderTarget
renderPassDesc.depthAttachment.texture = view.device!.makeTexture(descriptor: depthDesc)
In my draw loop I am getting the following error from my fragment shader that consumes the texture that was rendered into:
failed assertion Fragment Function(finalPassOverlayFragmentShader):
incorrect type of texture (MTLTextureType2DMultisample) bound at texture binding at index 0 (expect MTLTextureType2D) for underlay[0]
This is the fragment shader:
fragment float4 finalPassOverlayFragmentShader(InterpolatedVertex vert [[ stage_in ]],
                                               texture2d<float> underlay [[ texture(0) ]],
                                               texture2d<float> overlay [[ texture(1) ]]) {
    constexpr sampler defaultSampler;

    float4 _F = overlay.sample(defaultSampler, vert.st).rgba;
    float4 _B = underlay.sample(defaultSampler, vert.st).rgba;
    float4 rgba = _F + (1.0f - _F.a) * _B;

    return rgba;
}
I am sure I have missed a setting somewhere but I cannot find it.
What have I missed here?
UPDATE 0
I now have MSAA happening for my 2-pass toy. The only problem is there is not much anti-aliasing happening. In fact, it is hard to tell that anything has changed. Here are my latest settings:
// color - multi-sampled texture target
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: format, width: w, height: h, mipmapped: false)
desc.mipmapLevelCount = 1
desc.textureType = .type2DMultisample
desc.sampleCount = view.sampleCount
desc.usage = .renderTarget
let tex: MTLTexture? = view.device!.makeTexture(descriptor: desc)

// color - point-sampled resolve-texture
let resolveDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: format, width: w, height: h, mipmapped: true)
let resolveTex: MTLTexture? = view.device!.makeTexture(descriptor: resolveDesc)

// depth texture target
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: w, height: h, mipmapped: false)
depthDesc.mipmapLevelCount = 1
depthDesc.textureType = .type2DMultisample
depthDesc.sampleCount = view.sampleCount
depthDesc.usage = .renderTarget
let depthTex: MTLTexture? = view.device!.makeTexture(descriptor: depthDesc)

// render pass descriptor
renderPassDesc = MTLRenderPassDescriptor()

// color
renderPassDesc.colorAttachments[0] = MTLRenderPassColorAttachmentDescriptor()
renderPassDesc.colorAttachments[0].storeAction = .storeAndMultisampleResolve
renderPassDesc.colorAttachments[0].loadAction = .clear
renderPassDesc.colorAttachments[0].clearColor = MTLClearColorMake(0.25, 0.25, 0.25, 1)
renderPassDesc.colorAttachments[0].texture = tex
renderPassDesc.colorAttachments[0].resolveTexture = resolveTex

// depth
renderPassDesc.depthAttachment = MTLRenderPassDepthAttachmentDescriptor()
renderPassDesc.depthAttachment.storeAction = .dontCare
renderPassDesc.depthAttachment.loadAction = .clear
renderPassDesc.depthAttachment.clearDepth = 1
renderPassDesc.depthAttachment.texture = depthTex
UPDATE 1
The jaggies appear to be from the render-to-texture and not in the asset. Below is a side-by-side comparison. The top image is rendered using a single pass with MSAA enabled. The bottom image is rendered to texture. The jaggies are clearly visible in the bottom image.
single pass
2-pass
The error is not about your render target (a.k.a. color and depth attachments). It's about a texture you're passing in via the render command encoder's fragment texture table — that is, where you're calling setFragmentTexture(_:index:). The one you're passing for index 0 is a .type2DMultisample when the shader is coded to expect .type2D, because you declared underlay as a texture2d<...>.
Your setup for MSAA is OK for an intermediate step. You will eventually need to resolve the texture to a non-multisampled texture in order to draw it to screen. For that step (which might be for this render command encoder or a later one, depending on your needs), you need to set the storeAction for the color attachment to either .multisampleResolve or .storeAndMultisampleResolve. And you need to set the resolveTexture to a 2D texture. That could be one of your own or a drawable's texture.
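Here is a minimal sketch of that resolve configuration, assuming msaaTexture is your .type2DMultisample render target and resolvedTexture is an ordinary .type2D texture of the same size and pixel format. (Alternatively, if you really wanted to sample the multisampled texture directly, the shader argument would have to be declared texture2d_ms rather than texture2d.)

// Resolve the MSAA color attachment into a plain 2D texture
let passDesc = MTLRenderPassDescriptor()
passDesc.colorAttachments[0].texture = msaaTexture
passDesc.colorAttachments[0].resolveTexture = resolvedTexture
passDesc.colorAttachments[0].loadAction = .clear
passDesc.colorAttachments[0].storeAction = .multisampleResolve // or .storeAndMultisampleResolve

// Later passes bind the resolved texture, which matches texture2d<float>
renderEncoder.setFragmentTexture(resolvedTexture, index: 0)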

How to modify a texture's pixels from a compute shader in Unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the image returned is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I have to first rotate the image and then flip it sideways. The approach I am using is simply iterating over the pixels of the image and modifying them, which performs very poorly. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);

Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); // rotate 180deg
image.SetPixels32(colors);                      // apply rotation
image = FlipTexture(image);                     // flip sideways

//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int width = original.width;
    int height = original.height;

    for (int col = 0; col < width; col++)
    {
        for (int row = 0; row < height; row++)
        {
            if (upSideDown)
            {
                flipped.SetPixel(row, (width - 1) - col, original.GetPixel(row, col));
            }
            else
            {
                flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
            }
        }
    }

    flipped.Apply();
    return flipped;
}
To improve the performance, I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little for myself. The following works for flipping a texture vertically (rotating and flipping horizontally should be just a vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain

// Create a RenderTexture with the enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;

[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    // Mirror the coordinate; subtract 1 so the last row/column maps back to 0
    // (512 x 1024 is hardcoded here; see the note below)
    uint2 flip = uint2(512, 1024) - 1 - id.xy;
    Result[id.xy] = float4(ImageInput[flip].x, ImageInput[flip].y, ImageInput[flip].z, 1.0);
}
and called from any script:
public void FlipImage()
{
    int kernelHandle = shader.FindKernel("CSMain");

    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();

    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);

    // Read the flipped result back into a Texture2D ("result" is a
    // Texture2D field sized to match the RenderTexture)
    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
}
This takes an input Texture2D, flips it in the shader, and applies it to a RenderTexture and to a Texture2D, whatever you need.
Note that the image sizes are hardcoded in my instance and should be replaced by whatever size you need; to pass the size into the shader, use shader.SetInt().

Having issues with drawing to a frame buffer texture. It draws blank

I am in OpenGL ES 2.0 with GLKit trying to render to iOS devices.
Basically, my goal is to draw to a texture instead of the main buffer, then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately, they mention something about the power of two (I'm assuming with regard to the resolution), but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES

class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)

        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)

        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))

        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }

        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
First, code that theoretically renders any texture full screen. This has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()

    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)

    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }

    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)
    glDisable(GLenum(GL_TEXTURE_2D))

    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see the methods, but it works. This is how I am going to know that OpenGL is drawing from the buffer texture and NOT a preloaded texture.
Finally here is the gist of the code I am trying to do.
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????

    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()

    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know. I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the fbo class 512 x 512 just in case it would make things work by being a power of two. Unfortunately, it didn't.
Another note: All I am doing is going to be 2D, so I don't need depth buffers, right?
Yesterday I saw exactly the same issue. After struggling for hours, I found out why.
The trick is configuring your texture map with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything on the texture map.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It should really report an incomplete framebuffer; it took me quite a long time to debug this problem!
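In the RenderTexture initializer from the question, these calls would go right after glBindTexture. A fuller sketch (the filter lines are my addition: ES 2.0 also restricts non-power-of-two textures to non-mipmapped minification filters, so setting them explicitly is safer):

// NPOT-safe parameters for a render-target texture in OpenGL ES 2.0
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)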
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone

OpenGL ES 2.0 / MonoTouch: Rendering GUI Textures shows nothing

I'm building a simple framework for OpenGL UIs for MonoTouch. I set up everything and also succeeded in rendering 3D models, but a simple 2D texture object fails. The texture has a size of 256x256, so it's not too large, and it is a power of two.
Here is some rendering code (note: I removed the existing, working code):
// Render the gui objects ( flat )
Projection = Matrix4x4.Orthographic(0, WindowProperties.Width, WindowProperties.Height, 0);
View = new Matrix4x4();
GL.Disable(All.CullFace);
GL.Disable(All.DepthTest);
_Stage.RenderGui();
Stage:
public void RenderGui ()
{
    Draw(this);
    // Renders every child control; all of them call "DrawImage" when rendering something
}

public void DrawImage (Control caller, ITexture2D texture, PointF position, SizeF size)
{
    PointF gposition = caller.GlobalPosition; // Resulting position is 0,0 in my tests
    gposition.X += position.X;
    gposition.Y += position.Y;

    // Renders the ui model, this is done by using an existing ( and working ) vertex buffer
    // The shader gets some parameters ( this works too in 3d space )
    _UIModel.Render(new RenderParameters() {
        Model = Matrix4x4.Scale(size.Width, size.Height, 1) * Matrix4x4.Translation(gposition.X, gposition.Y, 0),
        TextureParameters = new TextureParameter[] {
            new TextureParameter("texture", texture)
        }
    });
}
The model uses a vector2 for positions; no other attributes are given to the shader.
The shader below should render the texture.
Vertex:
attribute vec2 position;
uniform mat4 modelViewMatrix;
varying mediump vec2 textureCoordinates;

void main()
{
    gl_Position = modelViewMatrix * vec4(position.xy, -3.0, 1.0);
    textureCoordinates = position;
}
Fragment:
varying mediump vec2 textureCoordinates;
uniform sampler2D texture;

void main()
{
    gl_FragColor = texture2D(texture, textureCoordinates) + vec4(0.5, 0.5, 0.5, 0.5);
}
I found out that the drawing issue is caused by the shader. This line produces a GL_INVALID_OPERATION (it works with other shaders):
GL.UniformMatrix4(uni.Location, 1, false, (parameters.Model * _Device.View * _Device.Projection).ToArray());
EDIT:
It turns out that the shader uniform locations changed (yes, I'm wondering about this too, because the initialization happens after the shader is completely initialized). I changed it, and now everything works.
As mentioned in the other thread, the texture is wrong, but that is another issue (OpenGL ES 2.0 / MonoTouch: Texture is colorized red).
The shader initialization with the GL.GetUniformLocation problem mentioned above:
[... Compile shaders ...]

// Attach vertex shader to program.
GL.AttachShader (_Program, vertexShader);

// Attach fragment shader to program.
GL.AttachShader (_Program, pixelShader);

// Bind attribute locations
for (int i = 0; i < _VertexAttributeList.Length; i++) {
    ShaderAttribute attribute = _VertexAttributeList [i];
    GL.BindAttribLocation (_Program, i, attribute.Name);
}

// Link program
if (!LinkProgram (_Program)) {
    GL.DeleteShader (vertexShader);
    GL.DeleteShader (pixelShader);
    GL.DeleteProgram (_Program);
    throw new Exception ("Shader could not be linked");
}

// Get uniform locations
for (int i = 0; i < _UniformList.Length; i++) {
    ShaderUniform uniform = _UniformList [i];
    uniform.Location = GL.GetUniformLocation (_Program, uniform.Name);
    Console.WriteLine ("Uniform: {0} Location: {1}", uniform.Name, uniform.Location);
}

// Detach shaders
GL.DetachShader (_Program, vertexShader);
GL.DetachShader (_Program, pixelShader);
GL.DeleteShader (vertexShader);
GL.DeleteShader (pixelShader);

// Shader is initialized; add it to the device
_Device.AddResource (this);
I don't know what Matrix4x4.Orthographic uses as its near-far range, but if it's something simple like [-1,1], the object may just be outside the near-far interval, since you set its z value explicitly to -3.0 in the vertex shader (and neither the scale nor the translation of the model matrix will change that). Try to use a z of 0.0 instead. Why is it -3, anyway?
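For comparison, a sketch using the GLKit math from the earlier question on this page, where the near-far range is an explicit parameter (the [-10, 10] range is arbitrary, but it would keep a z of -3.0 inside the view volume):

// GLKit's orthographic constructor spells out the near and far planes
let projection = GLKMatrix4MakeOrtho(0, Float(width), Float(height), 0, -10, 10)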
EDIT: So if the GL.UniformMatrix4 function throws a GL_INVALID_OPERATION, it seems you didn't retrieve the corresponding uniform location successfully. So the code where you do this might also help to find the issue.
Or it may also be that you call GL.UniformMatrix4 before the corresponding shader program is used. Keep in mind that uniforms can only be set once the program is active (GL.UseProgram or something similar was called with the shader program).
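The required ordering, sketched in the Swift GL style used earlier on this page (program and matrix stand in for your own objects):

// Uniforms apply to the currently active program, so activate it first
glUseProgram(program)
let loc = glGetUniformLocation(program, "modelViewMatrix")
if loc != -1 {
    glUniformMatrix4fv(loc, 1, GLboolean(GL_FALSE), &matrix)
}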
And by the way, you're multiplying the matrices in the wrong order anyway (given your shader and matrix-setting code). If it really works this way for other renderings, then you either were just lucky or you have some severe conceptual and mathematical inconsistency in your matrix library.
It turns out that the shader uniforms change at an unknown time. Everything is created and initialized when I ask OpenGL ES for the uniform location, so it must be a bug in OpenGL.
Calling GL.GetUniformLocation(..) each time I set the shader uniforms solves the problem.