CGContext.init returns nil - swift

I'm making a simple graphics app with moving objects and some animation for macOS.
What I'm trying to do is create a bitmap in memory that gets rendered into the actual context at the end of each frame (I'm not sure whether Core Graphics is the proper way to do this):
let tempBitmap = CGContext(data: nil,
                           width: width,
                           height: height,
                           bitsPerComponent: 8,
                           bytesPerRow: 0,
                           space: CGColorSpaceCreateDeviceRGB(),
                           bitmapInfo: CGImageAlphaInfo.none.rawValue)
but it always returns nil.
What is the proper way to create an in-memory bitmap?

Your combination is not supported. You can't have RGB color with no alpha. The list of supported pixel formats is in the Quartz 2D Programming Guide.
You can ignore the alpha ("none skip first" or "none skip last") but you can't work directly on 24-bit (3x8) packed pixels.
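For example, a minimal sketch of a supported combination (32 bits per pixel with the alpha channel skipped), where the dimensions are just placeholders:
import CoreGraphics

let width = 640, height = 480        // placeholder dimensions
let tempBitmap = CGContext(data: nil,
                           width: width,
                           height: height,
                           bitsPerComponent: 8,
                           bytesPerRow: 0,   // 0 lets Quartz choose the stride
                           space: CGColorSpaceCreateDeviceRGB(),
                           bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
// tempBitmap is non-nil here; use .premultipliedLast instead if you actually need alpha.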

How to create a 1D Metal Texture?

I want an array of 1-dimensional data (essentially an array of arrays) that is created at run time, so the size of this array is not known at compile time. I want to easily send the array of that data to a kernel shader using setTextures so my kernel can just accept a single argument like texture1d_array which will bind each element texture to a different kernel index automatically, regardless of how many there are.
The question is: how do I actually create a 1D MTLTexture? All the MTLTextureDescriptor options seem to focus on 2D or 3D. Is it as simple as creating a 2D texture with a height of 1? Would that then be a 1D texture that the kernel would accept?
I.e.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r16Uint,
                                                                 width: length,
                                                                 height: 1,
                                                                 mipmapped: false)
If my data is actually just one-dimensional (not actually pixel data), is there an equally convenient way to use an ordinary buffer instead of MTLTexture, with the same flexibility of sending an arbitrarily-sized array of these buffers to the kernel as a single kernel argument?
You can recreate all of those factory methods yourself. I would just use an initializer though unless you run into collisions.
public extension MTLTextureDescriptor {
    /// A 1D texture descriptor.
    convenience init(
        pixelFormat: MTLPixelFormat = .r16Uint,
        width: Int
    ) {
        self.init()
        textureType = .type1D
        self.pixelFormat = pixelFormat
        self.width = width
    }
}
MTLTextureDescriptor(width: 512)
This is no different from texture2DDescriptor with a height of 1, but it's not as much of a lie. (I.e. yes, a texture can be thought of as infinite-dimensional, with an extent of 1 in everything but the few dimensions that matter. But we throw out all the dimensions with 1s when saying "how dimensional" something is.)
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r16Uint,
                                                          width: 512,
                                                          height: 1,
                                                          mipmapped: false)
descriptor.textureType = .type1D
descriptor == MTLTextureDescriptor(width: 512) // true
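To round things out, here is a rough sketch (assuming a device already exists, and values is just example data) of turning the descriptor into an actual texture and uploading one row of UInt16 data:
let values: [UInt16] = (0..<512).map { UInt16($0) }   // example data
let oneDDescriptor = MTLTextureDescriptor(pixelFormat: .r16Uint, width: values.count)
let texture = device.makeTexture(descriptor: oneDDescriptor)!
values.withUnsafeBytes { bytes in
    texture.replace(region: MTLRegionMake1D(0, values.count),
                    mipmapLevel: 0,
                    withBytes: bytes.baseAddress!,
                    bytesPerRow: values.count * MemoryLayout<UInt16>.stride)
}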
——
textureType = .typeTextureBuffer takes care of your second question, but why not use textureBufferDescriptor then?

sws_scale PAL8 to RGBA returns image that isn't clear

I'm using sws_scale to convert images and videos from any format to RGBA, using an SwsContext created like this:
auto context = sws_getContext(width, height, pix_fmt, width, height, AV_PIX_FMT_RGBA,
SWS_BICUBIC, nullptr, nullptr, nullptr);
but when using a PNG with color type Palette (pix_fmt = AV_PIX_FMT_PAL8), sws_scale doesn't seem to take the transparent color into account, and the resulting RGBA raster isn't transparent. Is this a bug in sws_scale, or am I making a wrong assumption about the result?
palette image:
https://drive.google.com/file/d/1CIPkYeHElNSsH2TAGMmr0kfHxOkYiZTK/view?usp=sharing
RGBA image:
https://drive.google.com/open?id=1GMlC7RxJGLy9lpyKLg2RWfup1nJh-JFc
I was making a wrong assumption: sws_scale doesn't promise to return premultiplied-alpha pixels, so the values I was getting for the transparent color were r:255, g:255, b:255, a:0 (fully transparent, just not premultiplied).

Copying data between metal textures of different shapes

I am converting two trained Keras models to Metal Performance Shaders. I have to reshape the output of the first graph and use it as input to the second graph. The first graph's output is an MPSImage with "shape" (1,1,8192), and the second graph's input is an MPSImage of "shape" (4,4,512).
I cast graph1's output image.texture as a float16 array, and pass it to the following function to copy the data into "midImage", a 4x4x512 MPSImage:
func reshapeTexture(imageArray: [Float16]) -> MPSImage {
    imageArray.withUnsafeBufferPointer { ptr in
        let width = midImage.texture.width
        let height = midImage.texture.height
        for slice in 0..<128 {
            for w in 0..<width {
                for h in 0..<height {
                    let region = MTLRegion(origin: MTLOriginMake(w, h, 0),
                                           size: MTLSizeMake(1, 1, 1))
                    midImage.texture.replace(region: region,
                                             mipmapLevel: 0,
                                             slice: slice,
                                             withBytes: ptr.baseAddress!.advanced(by: (slice * 4 * width * height) + ((w + h) * 4)),
                                             bytesPerRow: MemoryLayout<Float16>.stride * 4,
                                             bytesPerImage: 0)
                }
            }
        }
    }
    return midImage
}
When I pass midImage to graph2, the output of the graph is a square with 3/4 garbled noise, 1/4 black in the bottom right corner. I think I am not understanding something about the MPSImage slice property for storing extra channels. Thanks!
Metal 2D texture arrays are nearly always stored in a Morton or "Z" ordering of some kind. Certainly MPS always allocates them that way, though I suppose on macOS there may be a means to make a linear 2D texture array and wrap an MPSImage around it. So, without due care, direct access of a 2D texture array's backing store is going to result in sadness and confusion.
The right way to do this is to write a simple Metal copy kernel. This gives you storage order independence and you don’t have to wait for the command buffer to complete before you can do the operation.
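For instance, here is a rough host-side sketch of dispatching such a copy kernel. The compute function name reshape_channels is hypothetical; you would still write the MSL that maps the (1,1,8192) channels onto the (4,4,512) layout yourself:
import Metal
import MetalPerformanceShaders

func encodeReshape(device: MTLDevice,
                   commandBuffer: MTLCommandBuffer,
                   source: MPSImage,          // (1,1,8192) output of graph 1
                   destination: MPSImage) {   // (4,4,512) input for graph 2
    let library = device.makeDefaultLibrary()!
    let function = library.makeFunction(name: "reshape_channels")!   // hypothetical kernel
    let pipeline = try! device.makeComputePipelineState(function: function)

    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setTexture(source.texture, index: 0)
    encoder.setTexture(destination.texture, index: 1)

    // One threadgroup per destination slice; 512 channels pack into 128 RGBA slices.
    let threadsPerGroup = MTLSizeMake(destination.width, destination.height, 1)
    let groups = MTLSizeMake(1, 1, destination.texture.arrayLength)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
}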
A feature request in Radar might also be warranted. Please also look in the latest macOS / iOS seed to see if Apple recently added a reshape filter for you.

Render to FBO gives unexpected results

I have an Android plugin in Unity which will do some native rendering using OpenGL ES.
I have simplified the code to this, and it successfully reproduces the problem:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
//Draw texture to framebuffer
GLES20.glViewport(0, 0, width, height);
GLES20.glUseProgram(program);
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUniformMatrix4fv(u_MVPMatrix, 1, false, matrix, 0);
GLES20.glEnableVertexAttribArray(a_Position);
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, verticesBuffer);
GLES20.glEnableVertexAttribArray(a_texCoord);
GLES20.glVertexAttribPointer(a_texCoord, 2, GLES20.GL_FLOAT, false, 0, uvBuffer);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);
GLES20.glFinish();
GLES20.glFlush();
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
It works fine when I force Unity to use only OpenGL ES 2.0, but when using OpenGL ES 3.0 I get unexpected results, and the vertex array given here:
GLES20.glVertexAttribPointer(a_Position, 3, GLES20.GL_FLOAT, false, 0, verticesBuffer);
is ignored, and the quad is drawn in a different shape. No matter what I change the input coordinates to, I still get the same odd shape.
I am not an OpenGL coder, so I cannot find the issue. Am I failing to set some state here?
As suggested by Reto Koradi in the comments, client-side vertex arrays are deprecated in ES 3.0, and after switching to VBOs it works. I'm not sure why, but I assume it is related to some OpenGL state left behind by Unity (most likely a non-default vertex array object still bound; with a VAO bound, glVertexAttribPointer must source its data from a GL_ARRAY_BUFFER, so the client-side pointer is rejected).

How to Improve SKEffectNode Performance (Swift)?

So, in my project I have an SKEffectNode that I use to provide a glow effect around some of my sprite nodes. I use a sprite node (blurNode) to pick up the color of my obstacle and then give it to the effect node. This works fine.
let blurNode = SKSpriteNode(imageNamed: "neonLine.png")
blurNode.color = obstacle.color
blurNode.colorBlendFactor = 1.0
blurNode.size = CGSize(width: obstacle.size.width + 30, height: obstacle.size.height + 30)
let effectNode = SKEffectNode()
effectNode.shouldRasterize = true
obstacle.addChild(effectNode)
effectNode.addChild(blurNode)
effectNode.filter = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius":30])
effectNode.alpha = 1.0
My issue occurs here.
let colorFadegreen = SKAction.sequence([SKAction.colorize(with: UIColor(red: 0, green: 0.6471, blue: 0.3569, alpha: 1.0), colorBlendFactor: 1.0, duration: 3)])
obstacle.removeAllActions()
obstacle.run(colorFadegreen)
blurNode.removeAllActions()
blurNode.run(colorFadegreen)
What I want to do is have the "glow" that's around the obstacle change colors with the obstacle. That is exactly what happens; however, when I do so my frame rate drops down to 30fps.
So, my question is: does anyone know how to improve the performance of this task? Or is there maybe another way I could go about doing this?
One of the ideas I thought of was to manually blur "neonLine.png" in Photoshop and then give it to the blur node, like so:
let blurNode = SKSpriteNode(imageNamed: "bluredNeonLine.png")
The only thing is I can never get the blur right; it always looks off.
Any help would be very much appreciated. Thanks!
EDIT:
(Screenshots were attached here showing the glows in my project, and the glow and lines changing color.)
Three answers to the performance question with regard to glows:
Use a pre-rendered glow, as you mention in the question, done in Photoshop or a similar bitmap editor, exported as a bitmap with opacity and used as an SKSpriteNode texture, probably with additive blending for best results, and a colour cast to taste.
Bake the SKEffectNode that's creating a desirable glow within SpriteKit into a texture, and then load it into an SKSpriteNode, as per this example: https://stackoverflow.com/a/40137270/2109038
Rasterise the results from your SKEffectNode and then hope your changes to colour casts don't cause re-rendering. This is shown in a wonderful extension, here: https://stackoverflow.com/a/40362874/2109038
In all cases, you're best off rendering a white glow that fades out as you like, and then applying colour blend changes to it, since SpriteKit has this built in, and it's reasonably performant in the few tests I've done. This is known as colorizing:
You can change and animate both the blend amount: https://developer.apple.com/reference/spritekit/skspritenode/1519780-colorblendfactor
and the color being blended with the texture: https://developer.apple.com/reference/spritekit/skspritenode/1519639-color
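As a rough sketch of option 1 combined with that colorizing approach (the asset name "whiteGlow.png" is hypothetical and stands in for a pre-blurred white glow exported with transparency):
let glow = SKSpriteNode(imageNamed: "whiteGlow.png")   // pre-blurred white glow, alpha fade baked in
glow.blendMode = .add                                  // additive blending reads best for glows
glow.color = obstacle.color                            // tint the white glow to match the obstacle
glow.colorBlendFactor = 1.0
glow.size = CGSize(width: obstacle.size.width + 30, height: obstacle.size.height + 30)
obstacle.addChild(glow)

// Animating the tint stays cheap because no CIFilter has to re-render every frame.
let colorFadeGreen = SKAction.colorize(with: UIColor(red: 0, green: 0.6471, blue: 0.3569, alpha: 1.0),
                                       colorBlendFactor: 1.0,
                                       duration: 3)
glow.run(colorFadeGreen)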