Why does the SceneKit material look different, even when the image is the same? - sprite-kit

A material's contents property supports many types of values; two of these are NSImage (or UIImage) and SKTexture.
I noticed that when loading the same image file (.png) with these different loaders, the material renders differently.
I'm fairly sure SpriteKit applies some extra transformation when it loads the texture, but I don't know what it is.
Why does the SceneKit material look different, even when the image is the same?
This is the rendered example:
About the code:
// 1. Plain color
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSColor.green

// 2. NSImage
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSImage(named: "texture")

// 3. SKTexture
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = SKTexture(imageNamed: "texture")
The complete example is here: https://github.com/Maetschl/SceneKitExamples/tree/master/MaterialTests

I think this has something to do with color spaces/gamma correction. My guess is that textures loaded via the SKTexture(imageNamed:) initializer aren't properly gamma corrected. You would think this would be documented somewhere, or other people would have noticed, but I can't seem to find anything.
Here's some code to swap with the last image in your linked sample project. I've force unwrapped as much as possible for brevity:
// Create the texture using the SKTexture(cgImage:) init
// to prove it has the same output image as SKTexture(imageNamed:)
let originalDogNSImage = NSImage(named: "dog")!
var originalDogRect = CGRect(x: 0, y: 0, width: originalDogNSImage.size.width, height: originalDogNSImage.size.height)
let originalDogCGImage = originalDogNSImage.cgImage(forProposedRect: &originalDogRect, context: nil, hints: nil)!
let originalDogTexture = SKTexture(cgImage: originalDogCGImage)
// Create the ciImage of the original image to use as the input for the CIFilter
let imageData = originalDogNSImage.tiffRepresentation!
let ciImage = CIImage(data: imageData)
// Create the gamma adjustment Core Image filter
let gammaFilter = CIFilter(name: "CIGammaAdjust")!
gammaFilter.setValue(ciImage, forKey: kCIInputImageKey)
// 0.75 is the default. 2.2 makes the dog image mostly match the NSImage(named:) initializer
gammaFilter.setValue(2.2, forKey: "inputPower")
// Create a SKTexture using the output of the CIFilter
let gammaCorrectedDogCIImage = gammaFilter.outputImage!
let gammaCorrectedDogCGImage = CIContext().createCGImage(gammaCorrectedDogCIImage, from: gammaCorrectedDogCIImage.extent)!
let gammaCorrectedDogTexture = SKTexture(cgImage: gammaCorrectedDogCGImage)
// Looks bad, like in StackOverflow question image.
// let planeWithSKTextureDog = planeWith(diffuseContent: originalDogTexture)
// Looks correct
let planeWithSKTextureDog = planeWith(diffuseContent: gammaCorrectedDogTexture)
Using a CIGammaAdjust filter with an inputPower of 2.2 makes the SKTexture almost (but not quite) match the NSImage(named:) init. I've included the original image being loaded through SKTexture(cgImage:) to rule out any changes caused by using that initializer versus the SKTexture(imageNamed:) you asked about.
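For convenience, here's the same workaround wrapped in a small helper. This is my own packaging of the code above, not part of the original answer; the function name and the default power of 2.2 (which approximates the sRGB gamma curve) are my choices:
import AppKit
import CoreImage
import SpriteKit

// Load an image and return an SKTexture with gamma correction applied,
// so it renders like the NSImage(named:) material contents.
func gammaCorrectedTexture(named name: String, power: CGFloat = 2.2) -> SKTexture? {
    guard
        let image = NSImage(named: name),
        let tiffData = image.tiffRepresentation,
        let ciImage = CIImage(data: tiffData),
        let gammaFilter = CIFilter(name: "CIGammaAdjust")
    else { return nil }
    gammaFilter.setValue(ciImage, forKey: kCIInputImageKey)
    gammaFilter.setValue(power, forKey: "inputPower")
    guard
        let output = gammaFilter.outputImage,
        let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }
    return SKTexture(cgImage: cgImage)
}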

Related

Create a transparent texture in Swift

I just need to create a transparent texture (pixels with alpha 0).
func layerTexture() -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    let temporaryTexture = self.device?.makeTexture(descriptor: texDescriptor)
    return temporaryTexture!
}
When I open temporaryTexture using Quick Look, it appears to be black. What is missing here?
UPDATE
I just tried to create the texture using a transparent image.
Code:
func layerTexture(imageData: Data) -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let bytesPerRow = width * 4
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
    let temporaryTexture = self.device?.makeTexture(descriptor: texDescriptor)
    let region = MTLRegionMake2D(0, 0, width, height)
    imageData.withUnsafeBytes { (u8Ptr: UnsafePointer<UInt8>) in
        let rawPtr = UnsafeRawPointer(u8Ptr)
        temporaryTexture?.replace(region: region, mipmapLevel: 0, withBytes: rawPtr, bytesPerRow: bytesPerRow)
    }
    return temporaryTexture!
}
The method is called as follows:
let image = UIImage(named: "layer1.png")!
let imageData = UIImagePNGRepresentation(image)
self.layerTexture(imageData: imageData!)
where layer1.png is a transparent PNG. But it crashes with the message "Thread 1: EXC_BAD_ACCESS (code=1, address=0x107e8c000)" at the point where I try to replace the texture's contents. I believe it's because the image data is compressed and the raw pointer should point to uncompressed data. How can I resolve this?
Am I on the right path, or headed in completely the wrong direction? Are there any alternatives? All I need is to create a transparent texture.
Pre-edit: When you quick-look a transparent texture, it will appear black. I just double-checked with some code I have running stably in production - that is the expected result.
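If you'd rather verify the pixels than trust Quick Look, here's a minimal sketch of mine (assuming a .bgra8Unorm texture whose storageMode allows CPU reads, i.e. .shared or .managed) that reads one pixel back and checks the alpha byte:
import Metal

// Read back the top-left pixel of a .bgra8Unorm texture.
func firstPixelAlpha(of texture: MTLTexture) -> UInt8 {
    var pixel = [UInt8](repeating: 0, count: 4) // B, G, R, A
    texture.getBytes(&pixel, bytesPerRow: 4,
                     from: MTLRegionMake2D(0, 0, 1, 1), mipmapLevel: 0)
    return pixel[3] // 0 means fully transparent
}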
Post-edit: You are correct, you should not be copying PNG or JPEG data to a MTLTexture's contents directly. I would recommend doing something like this:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]
var status = CVPixelBufferCreate(nil, Int(image.size.width), Int(image.size.height),
                                 kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == noErr)

// Render the UIImage into the pixel buffer via Core Image
let coreImage = CIImage(image: image)!
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
context.render(coreImage, to: pixelBuffer!)

// Wrap the pixel buffer in a Metal texture via a texture cache
// (GPUManager.shared.textureCache is my own CVMetalTextureCache; see below)
var textureWrapper: CVMetalTexture?
status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    GPUManager.shared.textureCache, pixelBuffer!, nil, .bgra8Unorm,
    CVPixelBufferGetWidth(pixelBuffer!), CVPixelBufferGetHeight(pixelBuffer!), 0, &textureWrapper)

let texture = CVMetalTextureGetTexture(textureWrapper!)!
// Use `texture` as your Metal texture. It is now map-bound to the
// CVPixelBuffer's underlying memory.
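Note that the snippet assumes an existing CVMetalTextureCache (GPUManager.shared.textureCache is my own helper, not a system API). If you don't already have one, a minimal sketch for creating it looks like this:
import CoreVideo
import Metal

// Create a CVMetalTextureCache for the default device; this stands in
// for the GPUManager.shared.textureCache used above.
let device = MTLCreateSystemDefaultDevice()!
var textureCache: CVMetalTextureCache?
let cacheStatus = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
assert(cacheStatus == kCVReturnSuccess)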
The issue you are running into is that it is actually pretty hard to fully grasp how bitmaps work and how they can be laid out differently. Graphics is a very closed field with lots of esoteric terminology, some of which refers to things that take years to grasp, some of which refers to things that are trivial but people just picked a weird word to call them by. My main pointers are:
Get out of UIImage land as early in your code as possible. The best way to avoid overhead and delays when you go into Metal land is to get your images into a GPU-compatible representation as soon as you can.
Once you are outside of UIImage land, always know your channel order (RGBA, BGRA). At any point in code that you are editing, you should have a mental model of what pixel format each CVPixelBuffer / MTLTexture has.
Read up on premultiplied vs non-premultiplied alpha, you may not run into issues with this, but it threw me off repeatedly when I was first learning.
Total byte size of a bitmap/pixel buffer = bytesPerRow * height (see the sketch below).
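To illustrate that last pointer (a sketch of mine, not production code): bytesPerRow is not always width * bytesPerPixel, because CoreVideo may pad rows for alignment, so query the buffer instead of computing it yourself:
import CoreVideo

// Total byte size of a pixel buffer = bytesPerRow * height.
// Always query bytesPerRow; rows may be padded beyond width * 4.
func totalByteSize(of pixelBuffer: CVPixelBuffer) -> Int {
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    return bytesPerRow * height
}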

Get and change hue of SKSpriteNode's SKColor(HSBA)?

An SKSpriteNode's SKColor can be created with hue, saturation, brightness & alpha:
let myColor = SKColor(hue: 0.5, saturation: 1, brightness: 1, alpha: 1)
mySprite.color = myColor
How do I get at the hue of an SKSpriteNode and change it? E.g., divide it by 2.
An SKSpriteNode is a node that draws a texture (optionally blended with a color), an image, or a colored square. That is its nature.
When you make an SKSpriteNode, you have an instance property that represents the texture used to draw the sprite, also called texture.
Since iOS 9.x, we have been able to retrieve an image from a texture, as in the code below. In this example I call my SKSpriteNode spriteBg:
let spriteBg = SKSpriteNode(texture: SKTexture(imageNamed: "myImage.png"))
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
    } else {
        // Fallback on earlier versions
    }
}
Following this interesting answer, we can translate it into a more comfortable Swift 3.0 version:
func imageWith(source: UIImage, rotatedByHue: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = CIImage(cgImage: source.cgImage!)
    // Apply a CIHueAdjust filter. Note: inputAngle is in radians.
    guard let hueAdjust = CIFilter(name: "CIHueAdjust") else { return source }
    hueAdjust.setDefaults()
    hueAdjust.setValue(sourceCore, forKey: kCIInputImageKey)
    hueAdjust.setValue(rotatedByHue, forKey: kCIInputAngleKey)
    guard let resultCore = hueAdjust.outputImage else { return source }
    // Render the filtered CIImage back into a UIImage.
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore, from: resultCore.extent)
    return UIImage(cgImage: resultRef!)
}
So, finally with the previous code we can do:
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
        let changedImage = imageWith(source: image, rotatedByHue: 0.5)
        spriteBg.texture = SKTexture(image: changedImage)
    } else {
        // Fallback on earlier versions
    }
}
I'm not in a place to be able to test this right now, but looking at the UIColor documentation (UIColor and SKColor are basically the same thing), you should be able to use the .getHue(...) function to retrieve the color's components, make changes to them, then set the SKSpriteNode's color property to the new value. The .getHue(...) function "Returns the components that make up the color in the HSB color space."
https://developer.apple.com/reference/uikit/uicolor/1621949-gethue
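A minimal, untested sketch of that approach (on iOS, where SKColor is UIColor; getHue returns false if the color can't be converted to HSB):
import SpriteKit

// Halve the hue of a sprite's color, keeping saturation, brightness and alpha.
func halveHue(of sprite: SKSpriteNode) {
    var h: CGFloat = 0, s: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    if sprite.color.getHue(&h, saturation: &s, brightness: &b, alpha: &a) {
        sprite.color = SKColor(hue: h / 2, saturation: s, brightness: b, alpha: a)
    }
}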

Why is filtering a cropped image 4x slower than filtering a resized image (both have the same dimensions)?

I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument: the path of the image to load. It crops the image and filters that image fragment with a SepiaTone filter.
It works just fine. It crops the image to 200x200 and filters it with SepiaTone. Now here's the problem I'm facing: the whole process takes 600 ms on my MacBook Air. When I RESIZE (instead of cropping) the input image to the same dimensions (200x200), it takes 150 ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with the cropped image:
var ciImage: CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
// Parse args and get the image path
let args = CommandLine.arguments
let inputFile: String = args[Int(CommandLine.argc) - 1]
let inputURL = URL(fileURLWithPath: inputFile)

// Load the image from the path into an NSImage
// and convert the NSImage into a CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THESE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING

// Convert the CGImage to a CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// Set up SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

// Perform the filtering in a GPU context
let context = CIContext()
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
// Parse args and get the image path
let args = CommandLine.arguments
let inputFile: String = args[Int(CommandLine.argc) - 1]
let inputURL = URL(fileURLWithPath: inputFile)

// Load the image from the path into an NSImage
// and convert the NSImage into a CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THESE TWO EXAMPLES
guard let cgContext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}
cgContext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = cgContext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING

// Convert the CGImage to a CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// Set up SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

// Perform the filtering in a GPU context
let context = CIContext()
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image it actually uses the hardware to write the image to a new area of memory. When you crop the cgImage, the documentation implies that it is just referencing the original image. The line
var ciImage: CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably just read the whole buffer continuously. In the case of the cropped image it may be reading it line by line, and this could account for the difference, but that's just me guessing.
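If that guess is right, one way to test it (my own sketch, not from the answer above) is to force the cropped CGImage into its own compact buffer by redrawing it before handing it to Core Image:
import CoreGraphics

// Redraw a cropped CGImage (which may still reference the full-size
// original) into its own small bitmap, so reads are contiguous.
func compactCopy(of cropped: CGImage) -> CGImage? {
    guard let ctx = CGContext(data: nil,
                              width: cropped.width,
                              height: cropped.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0, // let CoreGraphics choose
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cropped, in: CGRect(x: 0, y: 0, width: cropped.width, height: cropped.height))
    return ctx.makeImage()
}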
It looks like you are doing two very different things. In the "slow" version you are cropping (as in taking a small CGRect of the original image) and in the "fast" version you are resizing (as in reducing the original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code where the "slow" CGRect origin is (0,0) and the second is with it adjusted to (2000,2000):
Origin is (0,0)
Origin is (2000,2000)
Knowing this, I can come up with a few guesses about what's happening with the timing.
I'm including a link to Apple's documentation on the cropping function. It explains that it does some CGRect calculations behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CG image - I think that's where the real slowdown is.
In the end though, it looks like the timing is due to doing two entirely different things.
CGImage.cropping(to:)

CIFilter GaussianBlur seems to be broken on iOS9.x (used with SKEffectNode)

I am trying to create a blur effect using the following snippet:
let glowEffectNode = SKEffectNode()
glowEffectNode.shouldRasterize = true
let glowSize = CGSize(width: barSize.width, height: barSize.height)
let glowEffectSprite = SKSpriteNode(color: barColorData.topColor, size: glowSize)
glowEffectNode.addChild(glowEffectSprite)
let glowFilter = CIFilter(name: "CIGaussianBlur")
glowFilter!.setDefaults()
glowFilter!.setValue(5, forKey: "inputRadius")
glowEffectNode.filter = glowFilter
Of course, on iOS 8.x it works perfectly, but as of iOS 9.x (tried it on both 9.0 and 9.1) the blur is not working properly. (On the simulator the node seems to be a bit transparent but definitely not blurred, and on the device it seems blurred but cropped, and it also has an offset from its center position :/)
Is there a quick way to fix this using CIFilter ?
I fiddled a bit more with this and found a solution...
First of all, it seems that using odd numbers for the blur radius causes the entire node to be rendered with an offset (???), so using 10, for example, fixed the offset issue.
Secondly, it seems that the blur is cropped to the rendered sprite's bounds, and a blur effect needs extra space, so I add a transparent sprite to provide that extra space. The following code snippet now works:
let glowEffectNode = SKEffectNode()
glowEffectNode.shouldRasterize = true
// An oversized transparent sprite gives the blur room to spread
let glowBackgroundSize = CGSize(width: barSize.width + 60, height: barSize.height + 60)
let glowSize = CGSize(width: barSize.width + 10, height: barSize.height + 10)
let glowEffectSprite = SKSpriteNode(color: barColorData.topColor, size: glowSize)
glowEffectNode.addChild(SKSpriteNode(color: SKColor.clearColor(), size: glowBackgroundSize))
glowEffectNode.addChild(glowEffectSprite)
let glowFilter = CIFilter(name: "CIGaussianBlur")
glowFilter!.setDefaults()
// An even radius: odd radii rendered the node with an offset (see above)
glowFilter!.setValue(10, forKey: "inputRadius")
glowEffectNode.filter = glowFilter
I should have mentioned that I am creating a texture from this node using view.textureFromNode(glowEffectNode) for efficiency purposes, but I tried using the node itself and the problem was still there, so the above should work regardless.
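For reference, a sketch of that texture-baking step (my own; it assumes an SKView called view and a scene, which aren't shown in the answer, and uses the Swift 2-era textureFromNode API to match the code above):
// Bake the blurred node into a texture once, then draw a cheap sprite
// instead of re-rendering the effect node every frame.
if let glowTexture = view.textureFromNode(glowEffectNode) {
    let bakedGlow = SKSpriteNode(texture: glowTexture)
    scene.addChild(bakedGlow)
}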

Apply CIFilter GaussianBlur

As the title says, I need to apply a Gaussian blur to a UIImage; I tried to search for a tutorial, but I am still not able to implement it. I tried this:
var imageToBlur = CIImage(image: coro.logo)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(imageToBlur, forKey: "inputImage")
blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.immagineCoro.image = blurredImage
after importing the CoreImage framework, but Xcode shows me an error ("NSInvalidArgumentException") at line 5. Can anyone help me implement Gaussian blur and CIFilter in general?
Edit: thanks to you both, but I have another question; I need to apply the blur to only a small part of the image, like this
I just tried your code, and here's the modification I suggest; this works:
let fileURL = NSBundle.mainBundle().URLForResource("th", withExtension: "png")
let beginImage = CIImage(contentsOfURL: fileURL)
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(beginImage, forKey: "inputImage")
//blurfilter.setValue(2, forKey: "inputImage")
var resultImage = blurfilter.valueForKey("outputImage") as! CIImage
var blurredImage = UIImage(CIImage: resultImage)
self.profileImageView.image = blurredImage
So, commenting out the portion you see above did the trick, and I get a blurred image as expected. I'm using the file path, but this shouldn't make a difference from what you have.
You've used inputImage twice. The second time is probably meant to be inputRadius.
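A sketch of the corrected lines (using the named key constants; the second setValue now sets the radius instead of overwriting the image):
var blurfilter = CIFilter(name: "CIGaussianBlur")
blurfilter.setValue(imageToBlur, forKey: kCIInputImageKey)
blurfilter.setValue(2, forKey: kCIInputRadiusKey) // radius, not a second inputImage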
You might want to create a CIImage greyscale mask image with the shape you want, a blurred CIImage (using CIGaussianBlur), and then use CIBlendWithMask to blend them together.
The inputs of CIBlendWithMask are the input image (the blurred image), the input background image (the unblurred image), and the mask image (the shape you want). The output is the image you desire.
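A minimal sketch of that blend (assuming original, blurred, and mask are CIImages you have already prepared):
// White areas of the mask show the blurred image;
// black areas show the original.
let blend = CIFilter(name: "CIBlendWithMask")!
blend.setValue(blurred, forKey: kCIInputImageKey)            // the blurred image
blend.setValue(original, forKey: kCIInputBackgroundImageKey) // the unblurred image
blend.setValue(mask, forKey: kCIInputMaskImageKey)           // the greyscale shape mask
let outputImage = blend.outputImage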