ARKit with SceneKit – Banding on semi-transparent diffuse material - swift

I'm having a hard time removing this banding in SceneKit.
The diffuse image is fine (I added a black background here for contrast; if you see a bit of banding, it's due to compression after uploading).
It has no banding, but this is the result in ARKit (I occluded the camera to get a dark background).
Code is:
var bloomBackground = UIImage(named: "diffuse_map_02")!.withRenderingMode(.alwaysTemplate)
bloomBackground = bloomBackground.maskWithColor(color: UIColor(hex: baseColorFullOpacity))
bNode.geometry?.firstMaterial?.diffuse.contents = bloomBackground
Am I missing a flag that needs to be set to remove this banding problem?

Solution I
Banding artifacts on gradients are a fairly common issue in computer graphics. To eliminate banding you usually need to apply a blur. Here's code that does it for a SceneKit diffuse material:
import UIKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: SCNView!
    let ciContext = CIContext()

    fileprivate func gaussianBlur() -> UIImage? {
        guard let uiImage = UIImage(named: "art.scnassets/banding.png"),
              let ciImage = CIImage(image: uiImage),
              let ciBlurFilter = CIFilter(name: "CIGaussianBlur")
        else { return nil }

        ciBlurFilter.setValue(ciImage, forKey: kCIInputImageKey)

        // Render the blurred CIImage into a CGImage so SceneKit gets real bitmap data.
        guard let resultedImage = ciBlurFilter.outputImage,
              let cgImage = ciContext.createCGImage(resultedImage,
                                                    from: resultedImage.extent)
        else { return nil }

        return UIImage(cgImage: cgImage)
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.scene = SCNScene()

        let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.2))
        sphereNode.geometry?.firstMaterial?.diffuse.contents = gaussianBlur()
        sceneView.scene?.rootNode.addChildNode(sphereNode)
    }
}
Solution II
Banding artifacts don't occur when the source is generated as a 16-bit or 32-bit image (for instance, the .psd, .hdr, .tiff or .exr file formats). Regular .png or .jpg files are 8 bits per channel.
Increasing the size of an 8-bit image doesn't help, because you still have only 256 grey half-tones per channel. But if you use a 16-bit .tiff you get 65,536 steps of grey per channel, which is 256 times more than in an 8-bit image.
However, let's see what Apple's documentation says about it.
Although image objects support all platform-native image formats, it is recommended that you use PNG or JPEG files for most images in your app. Image objects are optimized for reading and displaying both formats, and those formats offer better performance than most other image formats. Because the PNG format is lossless, it is especially recommended for the images you use in your app’s interface.
So Apple is telling us that using 16-bit and 32-bit files is possible, but it smells like a non-optimized way of development. If you plan to render too many 32-bit textures in an SCNScene, be ready for a frozen (unresponsive) view.
I have personally tried the .hdr, .tiff and .exr file formats and they look fine. I'm not 100% sure, but I think you could also use 16-bit and 32-bit .psd files; however, I suppose they must be flattened (to a single layer) before importing them into the Xcode project.
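For what it's worth, assigning such a texture to a material is no different from assigning an 8-bit one. A minimal sketch, assuming a flattened 16-bit TIFF named "gradient_16bit.tiff" (a hypothetical asset name) has been added to the project:
import SceneKit

// Hypothetical 16-bit asset; any flattened .tiff / .exr / .hdr works the same way.
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "gradient_16bit.tiff")

let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.2))
sphereNode.geometry?.firstMaterial = material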
Solution III
You can build CIFilter's CISmoothLinearGradient programmatically. This filter has four parameters:
inputPoint0 (CIVector)
inputPoint1 (CIVector)
inputColor0 (CIColor)
inputColor1 (CIColor)
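A minimal sketch of wiring that filter up might look like the following; the gradient points, colors, and output size are placeholder values to adapt to your material:
import CoreImage
import UIKit

func smoothGradientImage(size: CGSize) -> UIImage? {
    guard let filter = CIFilter(name: "CISmoothLinearGradient") else { return nil }

    // Placeholder gradient: black at the bottom, white at the top.
    filter.setValue(CIVector(x: 0, y: 0), forKey: "inputPoint0")
    filter.setValue(CIVector(x: 0, y: size.height), forKey: "inputPoint1")
    filter.setValue(CIColor(red: 0, green: 0, blue: 0), forKey: "inputColor0")
    filter.setValue(CIColor(red: 1, green: 1, blue: 1), forKey: "inputColor1")

    // The generated gradient has infinite extent, so crop it before rendering.
    guard let output = filter.outputImage?.cropped(to: CGRect(origin: .zero, size: size)),
          let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }

    return UIImage(cgImage: cgImage)
}
The resulting UIImage can then be assigned to the material's diffuse.contents, just like the blurred image in Solution I.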

Related

CIGaussianBlur shrinks UIImageView

Using CIGaussianBlur causes UIImageView to apply the blur from the border in, making the image appear to shrink (right image). Using .blur on a SwiftUI view does the opposite; the blur is applied from the border outwards (left image). This is the effect I’m trying to achieve in UIKit. How can I go about this?
I've seen a few posts about using CIAffineClamp, but that causes the blur to stop at the image border, which is not what I want.
private let context = CIContext()
private let filter = CIFilter(name: "CIGaussianBlur")!

private func createBluredImage(using image: UIImage, value: CGFloat) -> UIImage? {
    let beginImage = CIImage(image: image)
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(value, forKey: kCIInputRadiusKey)

    guard
        let outputImage = filter.outputImage,
        let cgImage = context.createCGImage(outputImage, from: outputImage.extent)
    else {
        return nil
    }

    return UIImage(cgImage: cgImage)
}
When I used CIGaussianBlur I wanted my output image to be contained inside the image frame, so I used CIAffineClamp on the image before applying the blur, as you describe.
You might need to render your source image into a larger frame, clamp to that larger frame using CIAffineClamp, apply your blur filter, then load the resulting blurred output image. Core Image is a bit of a pain to set up and figure out, so I don’t have a full solution ready for you, but that’s what I would suggest.
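A rough sketch of that approach: render the source into a larger, transparent frame, clamp it, blur, then crop back to the padded frame. The padding value below is arbitrary and should be comfortably larger than the blur radius:
import CoreImage
import UIKit

func blurredBeyondEdges(_ image: UIImage, radius: CGFloat, padding: CGFloat = 40) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let blurFilter = CIFilter(name: "CIGaussianBlur")
    else { return nil }

    // 1. Render the source into a larger, transparent frame so the blur has room
    //    to spread outwards past the original border.
    let paddedExtent = ciImage.extent.insetBy(dx: -padding, dy: -padding)
    let clearBackground = CIImage(color: .clear).cropped(to: paddedExtent)
    let padded = ciImage.composited(over: clearBackground)

    // 2. Clamp the padded frame (the job CIAffineClamp does), 3. blur, 4. crop back.
    blurFilter.setValue(padded.clampedToExtent(), forKey: kCIInputImageKey)
    blurFilter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let blurred = blurFilter.outputImage?.cropped(to: paddedExtent),
          let cgImage = CIContext().createCGImage(blurred, from: blurred.extent)
    else { return nil }

    return UIImage(cgImage: cgImage)
}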

SwiftUI - Saving Image to Share Sheet causes image to save blurry/low res

I have a bit of code in my app that generates a QR code and scales it up (the code reference I used is from this Hacking with Swift link). Now I'm using the share sheet to allow the user to save the QR code to their camera roll, and it is working, but it saves the image at a low resolution, so it appears blurry in the camera roll (and I assume that if it's shared via other methods it will also be blurry).
Here is the code of my share sheet function:
struct ActivityView: UIViewControllerRepresentable {
    let activityItems: [Any]
    let applicationActivities: [UIActivity]?

    func makeUIViewController(context: UIViewControllerRepresentableContext<ActivityView>) -> UIActivityViewController {
        return UIActivityViewController(activityItems: activityItems, applicationActivities: applicationActivities)
    }

    func updateUIViewController(_ uiViewController: UIActivityViewController, context: UIViewControllerRepresentableContext<ActivityView>) {
    }
}
and here's the code in my view struct:
.sheet(isPresented: $showShareSheet) {
ShareSheet(activityItems: [self.qrCodeImage])
}
Is there a trick to disable interpolation on the image when it's saved via the share sheet, like .interpolation(.none) on the image view itself?
Your problem is that the QR code image is actually tiny! Like really tiny:
Printing description of image:
<UIImage:0x60000202cc60 anonymous {23, 23}>
When you share this image, the way it is displayed depends on the program or app that displays it, and is out of your app's control as far as I know.
However, there is a way you could potentially make it "pretty" in other apps: increase the resolution so that when it's rendered it appears to have "sharp" pixels.
How would this be accomplished? I think I have an example buried somewhere in old code, I'll dig into it and see if I can find you an example ;)
Edit
I found the code:
extension UIImage {
    func resized(toWidth width: CGFloat) -> UIImage? {
        let canvasSize = CGSize(width: round(width), height: CGFloat(ceil(width/size.width * size.height)))
        UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
        defer { UIGraphicsEndImageContext() }
        let context = UIGraphicsGetCurrentContext()
        context?.interpolationQuality = .none
        // Set the quality level to use when rescaling
        draw(in: CGRect(origin: .zero, size: canvasSize))
        let r = UIGraphicsGetImageFromCurrentImageContext()
        return r
    }
}
The trick is to provide a way to scale the image, but the real magic is on line 7:
context?.interpolationQuality = .none
If you exclude this line, you'll get blurry images, which is what the OS does by default because you don't generally want to see the pixel edges in images.
You could use this extension like so:
.sheet(isPresented: $showShareSheet) {
ShareSheet(activityItems: [self.qrCodeImage.resized(toWidth: 512) ?? UIImage()])
}
However, this may resize the image far more often than necessary. Optimally, you'd resize it in the same function in which you generate it.
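For example, a sketch of generating the QR code at a usable size up front, assuming the CIQRCodeGenerator route used by the linked Hacking with Swift article (the string and scale factor below are placeholders):
import CoreImage
import UIKit

func qrCodeImage(from string: String, scale: CGFloat = 20) -> UIImage? {
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(Data(string.utf8), forKey: "inputMessage")

    // The raw output is tiny (on the order of 23x23 pixels); scaling it with a
    // transform keeps the modules sharp instead of interpolating them later.
    guard let output = filter.outputImage?
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale)),
          let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }

    return UIImage(cgImage: cgImage)
}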

NSView to PDF and PNG: Why is the outcome so different?

I am trying to save an NSView to a PNG.
I start with the NSView and then call dataWithPDF, or cacheDisplay for the PNG. The code to do both looks like this:
guard view.lockFocusIfCanDraw() else {
    assert(false)
    return
}

let pdfData = view.dataWithPDF(inside: rect)

guard let imgData = view.bitmapImageRepForCachingDisplay(in: rect) else {
    assert(false)
    return
}
view.cacheDisplay(in: rect, to: imgData)
view.unlockFocus()

try pdfData.write(to: pdfName, options: .atomic)
let pngData = imgData.representation(using: .png, properties: [:])
try pngData!.write(to: pngName, options: .atomic)
So far, so good. However, the two outputs are quite different.
PDF (correct!)
And this is the PNG output. As one can see, the subviews aren't included. The arrows are drawn as part of the view itself.
Why is the outcome so different?
Many thanks in advance!
OK, I found the answer. Thanks to "View Debugging" I saw that the subviews use a layer (self.wantsLayer = true), and layers don't find their way into the PNG, but they do into the PDF. I'm not sure whether this is a bug or a feature, but now I can fix the PNG output.
Why is the outcome so different?
Trying your code with a different view that has subviews (I obviously don't have your view) works as expected and the PNG is fine. So it has to be something to do with your views, but I can make no suggestion as to what. However...
As you've got valid PDF data you can generate your PNG from that using something like:
let captured = NSImage(data:pdfData)
let rep = NSBitmapImageRep(data:(captured?.tiffRepresentation)!)
let pngData = rep?.representation(using: NSPNGFileType, properties:[:])
(that is Swift 3, hence NSPNGFileType rather than .png)
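For reference, a rough equivalent in current Swift, reusing pdfData and pngName from the question's snippet, might be:
if let captured = NSImage(data: pdfData),
   let tiffData = captured.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiffData),
   let pngData = rep.representation(using: .png, properties: [:]) {
    try pngData.write(to: pngName, options: .atomic)
}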
This of course doesn't solve whatever problem you have, it avoids it :-) You should really figure out why your views are failing and treat this as a temporary band aid (assuming it works for you...).
HTH

Changing JUST .scale in UIImage?

Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage
{
    let wrapperA: UIView = say, a picture
    let wrapperB: UIView = say, some text to go on top
    let mainSize = basicImage.bounds.size

    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)

    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)

    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed: UIImage = UIImage(cgImage: result!.cgImage!,
                                       scale: 1.0,
                                       orientation: result!.imageOrientation)
    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")

    // return result
    return resultFixed

    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to be .scale of 1 - but .scale is a read only property.
The only thing I know how to do is make a whole new image copy ... but set the scale to 1 as it's being created.
Is there a better way?
Handy tip -
This was motivated by the following: say you're saving a large image to the user's album, and also allowing UIActivityViewController so the user can post to (for example) Instagram. As a general rule, it seems best to set the scale to 1 before sending to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image in your Instagram post. In terms of saving it to the iOS photo album, it seems harmless (perhaps even better in some ways) to set the scale to 1. (I only say "better" because, if the image is, for example, ultimately emailed to a friend on a PC, it causes less confusion if the scale is 1.) Interestingly though, if you just use the iOS Photos app, take a scale 2 or 3 image, and share it to Instagram, it does in fact appear properly on Instagram! (Perhaps Apple's Photos knows it is best to set the scale to 1 before sending it somewhere like Instagram.)
As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
That being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage from the context directly through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    defer {
        UIGraphicsEndImageContext()
    }

    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // -- do drawing here --

    // get the CGImage from the context by calling makeImage() – then wrap in a UIImage
    // through using Optional's map(_:) (as makeImage() can return nil)
    // by default, the scale of the UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)

How to force SKTextureAtlas created from a dictionary to not modify textures size?

In my project, textures are procedurally generated by methods provided by PaintCode (paint-code).
I then create an SKTextureAtlas from a dictionary filled with the UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that at init, the atlas scales the images to the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, using @3x images, which is why the size is ×3.
And for the iPhone 4S, using @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (156 x 156)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to prevent the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(#frame: CGRect) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    StyleKit.drawCell(frame: frame)
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
Update 1
Comparing two approaches to creating textures:
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies in SKTextureAtlas created from a dictionary, as the SKSpriteNode initialization does not use the UIImage's scale property to correctly size the node.
Here are descriptions on console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture2 is missing some data! That could explain the lack of scale information needed to properly size the node, since:
node's size = texture's size divided by texture's scale.
Update 2
The problem occurs when the scale property of the UIImage is different from 1.
So you can use the following method to generate the picture:
func imageOfCell(frame: CGRect, color: SKColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    let bezierPath = UIBezierPath(rect: frame)
    color.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
The problem comes from the use of SKTextureAtlas(dictionary:) to initialize the atlas.
An SKTexture created this way does not embed data related to the image's scale property. So when an SKSpriteNode is created with init(texture:), the lack of scale information in the texture leads to the texture's size being used in place of the image's size.
One way to correct it is to provide the node's size during SKSpriteNode creation: init(texture:size:).
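A minimal sketch, reusing testImage and myAtlas from Update 1, where the image's point size is passed explicitly:
let texture1 = myAtlas.textureNamed("texture1")
let sprite1 = SKSpriteNode(texture: texture1, size: testImage.size)
// testImage.size is in points, so the node keeps its intended on-screen size
// regardless of the texture's pixel dimensions.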
From the documentation for the scale parameter of UIGraphicsBeginImageContextWithOptions:
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMoveToView(view: SKView) {
    let image = imageOfCell(CGRectMake(0, 0, 10, 10), scale: 0)
    let dict: [String: UIImage] = ["t1": image]
    let texture = SKTextureAtlas(dictionary: dict)

    let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
    sprite1.position = CGPointMake(CGRectGetMidX(view.frame), CGRectGetMidY(view.frame))
    addChild(sprite1)

    println(sprite1.size)
    // prints (30.0, 30.0) if scale = 0
    // prints (10.0, 10.0) if scale = 1
}

func imageOfCell(frame: CGRect, scale: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
    let bezierPath = UIBezierPath(rect: frame)
    UIColor.whiteColor().setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}