I just created a Metal template and changed the code a little. I swapped the default colormap for the Minecraft 16x16 lapis ore texture, but for some reason it gets blurred out because it is low resolution. I am trying to achieve that pixelated Minecraft look, so I would like to know how to disable this blurring/filtering.
Is there a way to load/present assets without this blurring? Here is my load asset function:
class func loadTexture(device: MTLDevice, textureName: String) throws -> MTLTexture {
    /// Load texture data with optimal parameters for sampling
    return try MTKTextureLoader(device: device).newTexture(name: textureName, scaleFactor: 1.0, bundle: nil, options: [
        MTKTextureLoader.Option.textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue),
        MTKTextureLoader.Option.textureStorageMode: NSNumber(value: MTLStorageMode.`private`.rawValue)
    ])
}
Here is a screenshot of the blurry cube I'm getting:
In your texture sample call (in the shader), you need to set your magnification filter to 'nearest', instead of 'linear', like so (assuming your sampler is declared inline inside your shader):
constexpr sampler textureSampler (mag_filter::nearest, // <-- set this to 'nearest' if you don't want any filtering on magnification
                                  min_filter::nearest);
// Sample the texture to obtain a color
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
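If you create your sampler on the CPU side instead of declaring it inline in the shader, the equivalent fix is to set the filters on an MTLSamplerDescriptor before creating the sampler state. A minimal sketch; device, renderEncoder, and the sampler index are assumptions based on a typical render pass:

// Build a sampler state that uses nearest-neighbor (unfiltered) lookups
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .nearest
samplerDescriptor.magFilter = .nearest

let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)

// Bind it for the fragment function; index 0 corresponds to [[sampler(0)]] in the shader
renderEncoder.setFragmentSamplerState(samplerState, index: 0)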
I'm trying to load a rendered image from Blender into Unity. But I saved the image with a color depth of 16 bits per channel, and now I want to have the same accuracy inside Unity. However, when I put the texture on a material and zoom in a bit, it looks like this:
As far as I can tell, this is only 8 bits per channel. What fixed it for me was overriding the format in the Texture Import Settings (from the default RGBA Compressed DXT5 to RGBA64, because my image also has an alpha channel):
And now the image looks nice again:
However, I would like to be able to import the image at runtime. So far I've been doing it like this:
Texture2D tex = new Texture2D(0, 0);
byte[] bytes = File.ReadAllBytes(path);
tex.LoadImage(bytes);
The problem is that, according to the documentation for Texture2D.LoadImage, "PNG files are loaded into ARGB32 format" by default. And even if I set the format when creating the Texture2D, it seems to get overwritten when I call LoadImage ("After LoadImage, texture size and format might change").
Is there a way to import an image in a specific format (at runtime)? Thanks in advance
Subclass AssetPostprocessor:
public class MyPostProcessor : AssetPostprocessor {
    public void OnPreprocessTexture() {
        if (assetPath.Contains("SomeFileName")) {
            TextureImporter textureImporter = (TextureImporter)assetImporter;
            TextureImporterPlatformSettings settings = new TextureImporterPlatformSettings();
            settings.textureCompression = TextureImporterCompression.Uncompressed;
            settings.format = TextureImporterFormat.RGBA64;
            textureImporter.SetPlatformTextureSettings(settings);
        }
    }
}
Using CoreImage to filter photos, I have found that saving to a JPG file results in an image that has a subtle but visible blue hue. In this example using a B&W image, the histogram reveals how the colors have been shifted in the saved file.
Input vs. output: the histogram of the saved file shows the color layers are offset.
-- Issue demonstrated with the macOS 'Preview' app
I can show a similar result using only the Preview App.
test image here: https://i.stack.imgur.com/Y3f03.jpg
1. Open the JPG image using Preview.
2. Export to JPEG at any 'Quality' other than the default (85%?).
3. Open the exported file and look at the histogram: the same color shifting can be seen as I experience within my app.
-- Issue demonstrated in a custom macOS app
The code here is as bare bones as possible, creating a CIImage from the photo and immediately saving it without performing any filters. In this example I chose 0.61 for compression as it resulted in a similar file size as the original. The distortion seems to be broader if using a higher compression ratio, but I could not find any value that would eliminate it.
if let img = CIImage(contentsOf: url) {
    let dest = procFolder.url(named: "InOut.jpg")
    img.jpgWrite(url: dest)
}
extension CIImage {
    func jpgWrite(url: URL) {
        let prop: [NSBitmapImageRep.PropertyKey: Any] = [
            .compressionFactor: 0.61
        ]
        let bitmap = NSBitmapImageRep(ciImage: self)
        let data = bitmap.representation(using: NSBitmapImageRep.FileType.jpeg, properties: prop)
        do {
            try data?.write(to: url, options: .atomic)
        } catch {
            log.error(error)
        }
    }
}
Update 1: Using @Frank Schlegel's answer for saving the JPG file
The JPG now carries a ColorSync profile, and I can (unscientifically) track a ~10% performance boost for portrait images (less for landscape), which are nice improvements. Unfortunately, the resulting file still skews the colors in the same way demonstrated in the histograms above.
extension CIImage {
    static let writeContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [
        // using an extended working color space allows you to retain wide gamut information, e.g., if the input is in DisplayP3
        .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
        .workingFormat: CIFormat.RGBAh // 16 bit color depth, needed in extended space
    ])

    func jpgWrite(url: URL) {
        // write the output in the same color space as the input; fallback to sRGB if it can't be determined
        let outputColorSpace = colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!
        do {
            try CIImage.writeContext.writeJPEGRepresentation(of: self, to: url, colorSpace: outputColorSpace, options: [:])
        } catch {
        }
    }
}
Question:
How can I open a B&W JPG as a CIImage, and re-save a JPG file avoiding any color shifting?
This looks like a color sync issue (as Leo pointed out) – more specifically a mismatch/misinterpretation of color spaces between input, processing, and output.
When you call NSBitmapImageRep(ciImage:), a lot happens under the hood. The system needs to render the CIImage you provide to get the bitmap data of the result. It does so by creating a CIContext with default (device-specific) settings, using it to process your image (with all filters and transformations applied), and then handing you the raw bitmap data of the result. In the process, multiple color space conversions happen that you can't control when using this API (and that seemingly don't lead to the result you intended). I don't like these "convenience" APIs for rendering CIImages for this reason, and I see a lot of questions on SO that are related to them.
I recommend you instead use a CIContext to render your CIImage into a JPEG file. This gives you direct control over color spaces and more:
let input = CIImage(contentsOf: url)! // force-unwrapped for brevity

// ideally you create this context once and re-use it because it's an expensive object
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [
    // using an extended working color space allows you to retain wide gamut information, e.g., if the input is in DisplayP3
    .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    .workingFormat: CIFormat.RGBAh // 16 bit color depth, needed in extended space
])

// write the output in the same color space as the input; fallback to sRGB if it can't be determined
let outputColorSpace = input.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!

try context.writeJPEGRepresentation(of: input, to: dest, colorSpace: outputColorSpace,
                                    options: [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): 0.61])
Please let me know if you still see a discrepancy when using this API.
I never found the underlying cause of this issue, and therefore no 'true' solution as I was seeking. Discussion with @Frank Schlegel led me to believe that it is an artifact of Apple's JPEG converter. The issue was certainly more apparent when using test files that appear monochrome but actually have a small amount of color info in them.
The simplest fix for my app was to ensure there was no color in the source image, so I drop the saturation to 0 prior to saving the file.
let params = [
    "inputBrightness": brightness, // -1...1, this filter calculates brightness by adding a bias value: color.rgb + vec3(brightness)
    "inputContrast": contrast,     // 0...2, this filter uses the following formula: (color.rgb - vec3(0.5)) * contrast + vec3(0.5)
    "inputSaturation": saturation  // 0...2
]
image.applyingFilter("CIColorControls", parameters: params)
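Combined with the writer from above, the final save path looks roughly like this (a sketch; jpgWrite is the extension method defined earlier, and dest is the destination URL from the first snippet):

// applyingFilter returns a new CIImage, so capture the result before saving
let desaturated = image.applyingFilter("CIColorControls", parameters: ["inputSaturation": 0.0])
desaturated.jpgWrite(url: dest)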
I am working off of Apple's sample project related to using the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }
    var targetImage: CIImage?
    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)
    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!
    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }

    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")

    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)

    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)

    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)

    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?
I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the effect from the video linked in your previous question, I would recommend adjusting the Metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to see if they should draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);
    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);
    if (dilatedDepth < renderedDepth) { // people are in front of the rendered content
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // people are not in front, so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal and find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above. We're deciding whether to use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);

    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);

    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);

    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;

    // get just the numbers after the decimal point
    float fraction = fract(huge_number);

    // send the result back to the caller
    return fraction;
}
(taken from @twostraws' ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get:
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it typically ran smoothly when launched on its own.
I'm looking at SceneKit's handle binding method with the SCNBufferBindingBlock call back as described here:
https://developer.apple.com/documentation/scenekit/scnbufferbindingblock
Does anyone have an example of how this works?
let program = SCNProgram()
program.handleBinding(ofBufferNamed: "", frequency: .perFrame) { (stream, theNode, theShadable, theRenderer) in
}
To me it reads like I can use a *.metal shader on an SCNNode without having to go through the hassle of SCNTechniques... any takers?
Just posting this in case someone else came here looking for a concise example. Here's how SCNProgram's handleBinding() method can be used with Metal:
First define a data structure in your .metal shader file:
struct MyShaderUniforms {
    float myFloatParam;
    float2 myFloat2Param;
};
Then pass this as an argument to a shader function:
fragment half4 myFragmentFunction(MyVertex vertexIn [[stage_in]],
                                  constant MyShaderUniforms& shaderUniforms [[buffer(0)]]) {
    ...
}
Next, define the same data structure in your Swift file:
struct MyShaderUniforms {
    var myFloatParam: Float = 1.0
    var myFloat2Param = simd_float2()
}
Now create an instance of this data structure, change its values, and define the SCNBufferBindingBlock:
var myUniforms = MyShaderUniforms()
myUniforms.myFloatParam = 3.0
...

program.handleBinding(ofBufferNamed: "shaderUniforms", frequency: .perFrame) { (bufferStream, node, shadable, renderer) in
    bufferStream.writeBytes(&myUniforms, count: MemoryLayout<MyShaderUniforms>.stride)
}
Here, the string passed to ofBufferNamed: corresponds to the argument name in the fragment function. Inside the block, the bufferStream parameter receives the underlying buffer, which can then be written to with updated values of the user-defined type MyShaderUniforms.
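One thing to watch out for: the Swift struct and the Metal struct must agree on memory layout. Here they happen to match (float/Float at offset 0, float2/simd_float2 at offset 8 because of its 8-byte alignment, 16 bytes total), which a quick sanity check can confirm:

// Both structs occupy 16 bytes, with myFloat2Param at offset 8
assert(MemoryLayout<MyShaderUniforms>.stride == 16)
assert(MemoryLayout<MyShaderUniforms>.offset(of: \.myFloat2Param) == 8)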
The .handleBinding(ofBufferNamed:frequency:handler:) method registers a block for SceneKit to call at render time to bind a Metal buffer to the shader program. This method can only be used with Metal or OpenGL shading language based programs. An SCNProgram object lets you perform this kind of custom rendering: it contains a vertex shader and a fragment shader, and using a program object completely replaces SceneKit's rendering. Your shaders take input from SceneKit and become responsible for all the transform, lighting, and shading effects you want to produce. Use the .handleBinding() method to associate a block with a Metal shader program to handle setup of a buffer used in that shader.
Here's a link to Developer Documentation on SCNProgram class.
You'll also need the instance method writeBytes(_:count:), which copies all your necessary data bytes into the underlying Metal buffer for use by a shader.
The SCNTechnique class is specifically made for post-processing SceneKit's rendering of a scene using additional drawing passes with custom Metal or OpenGL shaders. Using SCNTechnique you can create effects such as color grading, displacement, or motion blur, and render ambient occlusion as well as other render passes.
Here is a first code excerpt showing how to properly use the .handleBinding() method:
func useTheseAPIs(shadable: SCNShadable,
                  bufferStream: SCNBufferStream,
                  voidPtr: UnsafeMutableRawPointer,
                  bindingBlock: @escaping SCNBindingBlock,
                  bufferFrequency: SCNBufferFrequency,
                  bufferBindingBlock: @escaping SCNBufferBindingBlock,
                  program: SCNProgram) {

    bufferStream.writeBytes(voidPtr, count: 4)

    shadable.handleBinding!(ofSymbol: "symbol", handler: bindingBlock)
    shadable.handleUnbinding!(ofSymbol: "symbol", handler: bindingBlock)

    program.handleBinding(ofBufferNamed: "pass",
                          frequency: bufferFrequency,
                          handler: bufferBindingBlock)
}
And here is a second code excerpt:
let program = SCNProgram()
program.delegate = self as? SCNProgramDelegate
program.vertexShader = NextLevelGLContextYUVVertexShader
program.fragmentShader = NextLevelGLContextYUVFragmentShader

program.setSemantic(SCNGeometrySource.Semantic.vertex.rawValue,
                    forSymbol: NextLevelGLContextAttributeVertex,
                    options: nil)
program.setSemantic(SCNGeometrySource.Semantic.texcoord.rawValue,
                    forSymbol: NextLevelGLContextAttributeTextureCoord,
                    options: nil)

if let material = self._material {
    material.program = program
    material.handleBinding(ofSymbol: NextLevelGLContextUniformTextureSamplerY, handler: {
        (programId: UInt32, location: UInt32, node: SCNNode?, renderer: SCNRenderer) in
        glUniform1i(GLint(location), 0)
    })
    material.handleBinding(ofSymbol: NextLevelGLContextUniformTextureSamplerUV, handler: {
        (programId: UInt32, location: UInt32, node: SCNNode?, renderer: SCNRenderer) in
        glUniform1i(GLint(location), 1)
    })
}
Also, look at the Simulating refraction in SceneKit SO post.
I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() for drawing those layers into the drawLayer context. While profiling, I noticed that CGContextDrawImage() needs 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))

    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)

        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))

        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same and all the layers have the same size too, so I don't understand why the last line crashes.
Also I don't see how to use the new convenience functions to create vImageBuffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get the pointer to inBitmapData. That is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers that I need for the alpha blend input. Now it's not crashing anymore, and luckily the speed boost is visible in a profiler (vImageAlphaBlend only takes about a third of CGContextDrawImage), but unfortunately the result is a transparent image with pixel failures instead of the white image background.
So far I don't get any runtime errors anymore but since the result is not as expected I fear that I still don't use the alpha blend function correctly.
vImage_Buffer.data should point to the CFData data (pixel data), not the CFDataRef.
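In other words, take the byte pointer out of the CFData and hand that to vImage_Buffer. A sketch, reusing the names from the question and the same era-appropriate CG calls as the question's code:

let inBitmapData = CGDataProviderCopyData(CGImageGetDataProvider(imageLayer))
// point at the bytes inside the CFData, not at the Swift variable holding the reference
let pixelPtr = UnsafeMutablePointer<Void>(CFDataGetBytePtr(inBitmapData))
var buffer = vImage_Buffer(data: pixelPtr,
                           height: CGImageGetHeight(imageLayer),
                           width: CGImageGetWidth(imageLayer),
                           rowBytes: CGImageGetBytesPerRow(imageLayer))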
Also, not all images store their data as four channel, 8-bit per channel data. If it turns out to be three channel or RGBA or monochrome, you may get more crashing or funny colors. Also, you have assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and color space of the raw image data. A more specific question about that function might help us resolve your confusion about it.
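A minimal sketch of that approach in later Swift syntax, assuming you want 8-bit premultiplied ARGB to match the blend function (cgImage stands in for one of the question's imageLayer values):

import Accelerate

// Describe the pixel format vImage should convert the CGImage into
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
var format = vImage_CGImageFormat(bitsPerComponent: 8,
                                  bitsPerPixel: 32,
                                  colorSpace: Unmanaged.passUnretained(rgbColorSpace),
                                  bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
                                  version: 0,
                                  decode: nil,
                                  renderingIntent: .defaultIntent)

var layerBuffer = vImage_Buffer()
let error = vImageBuffer_InitWithCGImage(&layerBuffer, &format, nil, cgImage, vImage_Flags(kvImageNoFlags))
assert(error == kvImageNoError)
// vImage allocates layerBuffer.data for you; free(layerBuffer.data) when you're done with it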
Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there wasn't something I could do to get those to match up a bit better.
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I thought that CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.
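With buffers prepared as premultiplied ARGB (as in the sketch above), the blend itself is a one-liner; layerBuffer is from that sketch and ctxImageBuffer is the destination buffer from the question:

// Blend the premultiplied layer over the destination, writing the result in place
let blendError = vImagePremultipliedAlphaBlend_ARGB8888(&layerBuffer, &ctxImageBuffer, &ctxImageBuffer, vImage_Flags(kvImageNoFlags))
assert(blendError == kvImageNoError)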