How do I render CALayer sublayers in the correct position - swift

I am trying to get an image drawn of a CALayer containing a number of sublayers positioned at specific points, but CALayer.render(in:) does not honour the zPosition of the sublayers. It works fine on screen, but when rendering to PDF the sublayers are drawn in the order they were created.
The sublayers are positioned (x, y, angle) on the drawing layer.
One solution seems to be to override the render(in:) method on the drawing layer. This fixes the z-ordering, except that the sublayers are then rendered in the wrong position: they all end up in the bottom-left corner (0, 0) and are not rotated correctly.
override func render(in ctx: CGContext) {
    if let layers: [CALayer] = self.sublayers {
        let orderedLayers = layers.sorted(by: {
            $0.zPosition < $1.zPosition
        })
        for v in orderedLayers {
            v.render(in: ctx)
        }
    }
}
If I don't override this method then they are positioned correctly but in the wrong z-order - i.e. ones that should be at the bottom (zPosition = 0) are drawn at the top.
What am I missing here? It seems I need to position the sublayers correctly somehow in the render(in:) override?
How do I do this? These sublayers have already been positioned on screen, and all I am trying to do is generate an image of the drawing. This is done using the following function.
func createPdfData() -> Data? {
    DebugLog("")
    let scale: CGFloat = 1
    let mWidth = drawingLayer.frame.width * scale
    let mHeight = drawingLayer.frame.height * scale
    var cgRect = CGRect(x: 0, y: 0, width: mWidth, height: mHeight)
    let documentInfo = [kCGPDFContextCreator as String: "MakeSpace(www.xxxx.com)",
                        kCGPDFContextTitle as String: "Layout Image",
                        kCGPDFContextAuthor as String: GlobalVars.shared.appUser?.username ?? "",
                        kCGPDFContextSubject as String: self.level?.imageCode ?? "",
                        kCGPDFContextKeywords as String: "XXXX, Layout"]
    let data = NSMutableData()
    guard let pdfData = CGDataConsumer(data: data),
          let ctx = CGContext(consumer: pdfData, mediaBox: &cgRect, documentInfo as CFDictionary) else {
        return nil
    }
    ctx.beginPDFPage(nil)
    ctx.saveGState()
    ctx.scaleBy(x: scale, y: scale)
    self.drawingLayer.render(in: ctx)
    ctx.restoreGState()
    ctx.endPDFPage()
    ctx.closePDF()
    return data as Data
}
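
For reference, a minimal usage sketch of the function above (the temporary-directory URL is just an illustration, not part of my app):

// Hedged usage sketch: write the generated PDF data to a hypothetical file URL.
if let data = createPdfData() {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("layout.pdf")
    try? data.write(to: url)
}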

This is what I ended up doing - and it seems to work.
class ZOrderDrawingLayer: CALayer {
    override func render(in ctx: CGContext) {
        if let layers: [CALayer] = self.sublayers {
            let orderedLayers = layers.sorted(by: {
                $0.zPosition < $1.zPosition
            })
            for v in orderedLayers {
                ctx.saveGState()
                // Translate and rotate the context using the sublayer's
                // size, position and transform (angle)
                let w = v.bounds.width / 2
                let ww = w * w
                let h = v.bounds.height / 2
                let hh = h * h
                let c = sqrt(ww + hh)           // distance from center to corner
                let theta = asin(h / c)         // corner angle before rotation
                let angle = atan2(v.transform.m12, v.transform.m11) // sublayer rotation
                let x = c * cos(theta + angle)  // rotated corner offset
                let y = c * sin(theta + angle)
                ctx.translateBy(x: v.position.x - x, y: v.position.y - y)
                ctx.rotate(by: angle)
                v.render(in: ctx)
                ctx.restoreGState()
            }
        }
    }
}
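
To adopt it, the drawing layer just needs to be created as the subclass. A minimal setup sketch (the sizes and angle here are made up for illustration):

// Hedged usage sketch: host the sublayers in a ZOrderDrawingLayer so that
// render(in:) draws them sorted by zPosition.
let drawingLayer = ZOrderDrawingLayer()
drawingLayer.frame = CGRect(x: 0, y: 0, width: 600, height: 400)

let item = CALayer()
item.bounds = CGRect(x: 0, y: 0, width: 100, height: 50)
item.position = CGPoint(x: 200, y: 150)                            // (x, y)
item.setAffineTransform(CGAffineTransform(rotationAngle: .pi / 6)) // angle
item.zPosition = 2
drawingLayer.addSublayer(item)
// createPdfData() now honours zPosition when rendering to the PDF context.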

Related

Cropping visible part of UIImage in UIImageView for saliency

I'm doing attention-based saliency and need to pass an image to the request. When the contentMode is .scaleAspectFill, the result of the request is incorrect, because I pass the full image (not just the part visible on screen).
I'm trying to crop the UIImage first, but this method doesn't crop correctly:
let newImage = cropImage(imageToCrop: imageView.image, toRect: imageView.frame)

func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
How can I make saliency request only for the visible part of the image (which changes when change contentMode)?
If I understand your goal correctly...
Suppose we have this 640 x 360 image:
and we display it in a 240 x 240 image view, using .scaleAspectFill...
It looks like this (the red outline is the image view frame):
and, with .clipsToBounds = true:
we want to generate this new 360 x 360 image (that is, we want to keep the original image resolution... we don't want to end up with a 240 x 240 image):
To crop the visible portion of the image, we need to calculate the scaled rect, including the offset:
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}

func myCrop(imgView: UIImageView) -> UIImage? {
    // get the image from the imageView
    guard let img = imgView.image else { return nil }
    // image view rect
    let vr: CGRect = imgView.bounds
    // image size -- we need to account for scale
    let imgSZ: CGSize = CGSize(width: img.size.width * img.scale, height: img.size.height * img.scale)
    let viewRatio: CGFloat = vr.width / vr.height
    let imgRatio: CGFloat = imgSZ.width / imgSZ.height
    var newRect: CGRect = .zero
    // calculate the rect that needs to be clipped from the full image
    if viewRatio > imgRatio {
        // the image view has a wider aspect ratio than the image,
        // so top and bottom will be clipped
        let f: CGFloat = imgSZ.width / vr.width
        let h: CGFloat = vr.height * f
        newRect.origin.y = (imgSZ.height - h) * 0.5
        newRect.size.width = imgSZ.width
        newRect.size.height = h
    } else {
        // the image view has a narrower aspect ratio than the image,
        // so left and right will be clipped
        let f: CGFloat = imgSZ.height / vr.height
        let w: CGFloat = vr.width * f
        newRect.origin.x = (imgSZ.width - w) * 0.5
        newRect.size.width = w
        newRect.size.height = imgSZ.height
    }
    return cropImage(imageToCrop: img, toRect: newRect)
}
and call it like this:
if let croppedImage = myCrop(imgView: theImageView) {
    // do something with the new image
}
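With the crop in hand, you can then run the saliency request against only the visible portion. A hedged sketch using Vision's attention-based request (visibleSaliency is a hypothetical helper name, and it assumes the myCrop(imgView:) function above):

import Vision

// Sketch: run attention-based saliency on just the visible (cropped) part.
func visibleSaliency(for imageView: UIImageView) -> VNSaliencyImageObservation? {
    guard let cropped = myCrop(imgView: imageView),
          let cgImage = cropped.cgImage else { return nil }
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
    return request.results?.first
}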

Drawing a Tiled Logo over NSImage at 45 degrees

I'm trying to draw a logo tiled over an image at 45 degrees, but I always get a gap on the left side.
var y_offset: CGFloat = logo.size.width * sin(45 * (CGFloat.pi / 180.0))
// the sin of the angle may return zero or a negative value,
// which won't work with this formula
if y_offset >= 0 {
    var x: CGFloat = 0
    while x < size.width {
        var y: CGFloat = 0
        while y < size.height {
            // move to this position
            context.saveGState()
            context.translateBy(x: x, y: y)
            // draw text rotated around its center
            context.rotate(by: (CGFloat(-45) * CGFloat.pi) / 180)
            logo.draw(at: NSPoint(x: x, y: y), from: .zero, operation: .sourceOver, fraction: CGFloat(logotransparency))
            // reset
            context.restoreGState()
            y = y + CGFloat(y_offset)
        }
        x = x + logo.size.width
    }
}
This is the result I get.
As you can see, there is some spacing on the left side. I cannot figure out what I'm doing wrong. I have tried setting y to size.height and decrementing it by y_offset in the loop, but I get the same result.
Update:
var dirtyRect: NSRect = NSMakeRect(0, 0, size.width, size.height)
let deg45 = CGFloat.pi / 4
if let ciImage = logo.ciImage() {
    let ciTiled = ciImage.tiled(at: deg45).cropped(to: dirtyRect)
    let color = NSColor(patternImage: NSImage.fromCIImage(ciTiled))
    color.setFill()
    context.fill(dirtyRect)
}
Updated answer
If you need more control over the appearance, you can manually draw the overlays. See the code below for a fixed version of your original code, with two options for spacing.
In production, you would of course want to avoid using ! and move the image loading out of the draw function (even though NSImage(named:) uses a cache).
override func draw(_ dirtyRect: NSRect) {
    let bgImage = NSImage(named: "landscape")!
    bgImage.draw(in: dirtyRect)
    let deg45 = CGFloat.pi / 4
    let logo = NSImage(named: "TextTile")!
    let context = NSGraphicsContext.current!.cgContext
    let h = logo.size.height // (sin(deg45) * logo.size.height) + (cos(deg45) * logo.size.height)
    let w = logo.size.width  // (sin(deg45) * logo.size.width ) + (cos(deg45) * logo.size.width )
    var x: CGFloat = -w
    while x < dirtyRect.width + w {
        var y: CGFloat = -h
        while y < dirtyRect.height + h {
            context.saveGState()
            context.translateBy(x: x, y: y)
            context.rotate(by: deg45)
            logo.draw(at: NSPoint(x: 0, y: 0),
                      from: .zero,
                      operation: .sourceOver,
                      fraction: 1)
            context.restoreGState()
            y = y + h
        }
        x = x + w
    }
    super.draw(dirtyRect)
}
Original answer
You can set a backgroundColor with a patternImage to get the effect of drawing image tiles in a rect.
To tilt the image by some angle, use Core Image's CIAffineTile filter with a rotation transform.
Here is some example code:
import Cocoa
import CoreImage

class ViewController: NSViewController {
    override func loadView() {
        let size = CGSize(width: 500, height: 500)
        let view = TiledView(frame: CGRect(origin: .zero, size: size))
        self.view = view
    }
}

class TiledView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        let bgImage = NSImage(named: "landscape")!
        bgImage.draw(in: dirtyRect)
        let deg45 = CGFloat.pi / 4
        if let ciImage = NSImage(named: "TextTile")?.ciImage() {
            let ciTiled = ciImage.tiled(at: deg45).cropped(to: dirtyRect)
            let color = NSColor(patternImage: NSImage.fromCIImage(ciTiled))
            color.setFill()
            dirtyRect.fill()
        }
        super.draw(dirtyRect)
    }
}

extension NSImage {
    // source: https://rethunk.medium.com/convert-between-nsimage-and-ciimage-in-swift-d6c6180ef026
    func ciImage() -> CIImage? {
        guard let data = self.tiffRepresentation,
              let bitmap = NSBitmapImageRep(data: data) else {
            return nil
        }
        let ci = CIImage(bitmapImageRep: bitmap)
        return ci
    }

    static func fromCIImage(_ ciImage: CIImage) -> NSImage {
        let rep = NSCIImageRep(ciImage: ciImage)
        let nsImage = NSImage(size: rep.size)
        nsImage.addRepresentation(rep)
        return nsImage
    }
}

extension CIImage {
    func tiled(at angle: CGFloat) -> CIImage {
        // try different transforms here
        let transform = CGAffineTransform(rotationAngle: angle)
        return self.applyingFilter("CIAffineTile", parameters: [kCIInputTransformKey: transform])
    }
}
The result looks like this:

setColor on NSBitmapImageRep not working in Swift

I'm trying to figure out how setColor works. I have the following code:
lazy var imageView: NSImageView = {
    let imageView = NSImageView(frame: view.frame)
    return imageView
}()

override func viewDidLoad() {
    super.viewDidLoad()
    createColorProjection()
    view.wantsLayer = true
    view.addSubview(imageView)
    view.needsDisplay = true
}

func createColorProjection() {
    let bitmap = NSBitmapImageRep(cgImage: cgImage!)
    var x = 0
    while x < bitmap.pixelsWide {
        var y = 0
        while y < bitmap.pixelsHigh {
            //pixels[Point(x: x, y: y)] = (getColor(x: x, y: y, bitmap: bitmap))
            bitmap.setColor(NSColor(cgColor: .black)!, atX: x, y: y)
            y += 1
        }
        x += 1
    }
    let image = createImage(bitmap: bitmap)
    imageView.image = image
    imageView.needsDisplay = true
}

func createImage(bitmap: NSBitmapImageRep) -> NSImage {
    let image = bitmap.cgImage
    return NSImage(cgImage: image!, size: CGSize(width: image!.width, height: image!.height))
}
The intention of the code is to change a photo (a rainbow) to be entirely black (I'm just testing with black right now to make sure I understand how it works). However, when I run the program, the unchanged picture of the rainbow is shown, not a black photo.
I am getting these errors:
Unrecognized colorspace number -1 and Unknown number of components for colorspace model -1.
Thanks.
First, you're right: setColor has been broken at least since Catalina. Apple hasn't fixed it, probably because it's so slow and inefficient that nobody ever used it.
Second, the docs say NSBitmapImageRep(cgImage:) produces a read-only bitmap, so your code wouldn't have worked even if setColor did.
As Alexander says, making your own CIFilter is the best way to change a photo's pixels to different colors. Writing and implementing the OpenGL-style kernel code isn't easy, but it's the best approach.
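For illustration only (not the custom-kernel approach itself), here's a hedged sketch of the Core Image route using the built-in CIColorMonochrome filter; recolored is a hypothetical helper name:

import Cocoa
import CoreImage

// Sketch: tint every pixel of an NSImage via a built-in Core Image filter.
func recolored(_ image: NSImage, tint: CIColor) -> NSImage? {
    guard let tiff = image.tiffRepresentation,
          let bitmap = NSBitmapImageRep(data: tiff),
          let input = CIImage(bitmapImageRep: bitmap),
          let filter = CIFilter(name: "CIColorMonochrome") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(tint, forKey: kCIInputColorKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage else { return nil }
    let rep = NSCIImageRep(ciImage: output)
    let result = NSImage(size: rep.size)
    result.addRepresentation(rep)
    return result
}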
If you were to add an extension to NSBitmapImageRep like this:
extension NSBitmapImageRep {
    func setColorNew(_ color: NSColor, atX x: Int, y: Int) {
        guard let data = bitmapData else { return }
        let ptr = data + bytesPerRow * y + samplesPerPixel * x
        ptr[0] = UInt8(color.redComponent * 255.1)
        ptr[1] = UInt8(color.greenComponent * 255.1)
        ptr[2] = UInt8(color.blueComponent * 255.1)
        if samplesPerPixel > 3 {
            ptr[3] = UInt8(color.alphaComponent * 255.1)
        }
    }
}
Then simply changing an image's pixels could be done like this:
func changePixels(image: NSImage, newColor: NSColor) -> NSImage {
    guard let imgData = image.tiffRepresentation,
          let bitmap = NSBitmapImageRep(data: imgData),
          let color = newColor.usingColorSpace(.deviceRGB)
    else { return image }
    var y = 0
    while y < bitmap.pixelsHigh {
        var x = 0
        while x < bitmap.pixelsWide {
            bitmap.setColorNew(color, atX: x, y: y)
            x += 1
        }
        y += 1
    }
    let newImage = NSImage(size: image.size)
    newImage.addRepresentation(bitmap)
    return newImage
}
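And a quick usage sketch (assuming an image named "rainbow" in the asset catalog, and the imageView from the question):

// Hedged usage sketch: recolor the test photo and display it.
if let rainbow = NSImage(named: "rainbow") {
    imageView.image = changePixels(image: rainbow, newColor: .black)
    imageView.needsDisplay = true
}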

Swift SpriteKit Detect TouchesBegan on SKSpriteNode with SKPhysicsBody from Sprite's Texture

I have a SpriteKit scene with a sprite. The sprite has a physics body derived from the texture's alpha channel to get an accurate physics shape, like so:
let texture_bottle = SKTexture(imageNamed: "Bottle")
let sprite_bottle = SKSpriteNode(texture: texture_bottle)
physicsBody_bottle = SKPhysicsBody(texture: texture_bottle, size: size)
physicsBody_bottle.affectedByGravity = false
sprite_bottle.physicsBody = physicsBody_bottle
root.addChild(sprite_bottle)
....

func touchesBegan(_ touches: Set<UITouch>?, with event: UIEvent?, touchLocation: CGPoint!) {
    let hitNodes = self.nodes(at: touchLocation)
}
When a user taps the screen, how can I detect if they actually touched within the physics body shape (not the sprite's rect)?
You "can't" (Not easily)
UITouch commands are based on CGRects, so let hitNodes = self.nodes(at: touchLocation) is going to be filled with any node who's frame intersects with that touch.
This can't be avoided, so the next step is to determine pixel accuracy from the nodes that registered as "hit". The first thing you should do is convert the touch position to local coordinates to your sprite.
for node in hitNodes {
    // assuming touchLocation is in scene coordinates
    // (self here is the SKScene)
    let localLocation = node.convert(touchLocation, from: self)
}
Then from this point you need to figure out which method you want to use.
If you need speed, then I would recommend creating a 2D boolean array that behaves as a mask: fill it with false for transparent areas and true for opaque areas. Then you can use localLocation to index into the array (remember to add anchorPoint * width and height to your x and y values, then cast to Int). Here is a lookup function, followed by a sketch of building the mask:
func isHit(node: SKSpriteNode, mask: [[Bool]], position: CGPoint) -> Bool {
    return mask[Int(node.size.height * node.anchorPoint.y + position.y)]
               [Int(node.size.width * node.anchorPoint.x + position.x)]
}
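A hedged sketch of building such a mask from the sprite's texture (makeAlphaMask is a hypothetical helper, and it assumes the texture fits comfortably in memory):

import SpriteKit

// Sketch: build a [[Bool]] alpha mask by rendering the texture's RGBA bytes
// into a bitmap context and thresholding the alpha channel.
func makeAlphaMask(from texture: SKTexture, threshold: UInt8 = 0) -> [[Bool]] {
    let cgImage = texture.cgImage()
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    var pixels = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    guard let context = CGContext(data: &pixels,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * bytesPerPixel,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return [] }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    var mask = [[Bool]](repeating: [Bool](repeating: false, count: width), count: height)
    for row in 0..<height {
        for x in 0..<width {
            // Bitmap memory rows run top-down, while SpriteKit's local
            // y axis runs bottom-up, so flip the row index here.
            let alpha = pixels[(row * width + x) * bytesPerPixel + 3]
            mask[height - 1 - row][x] = alpha > threshold
        }
    }
    return mask
}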
If speed is not a concern, then you can create a CGContext, fill your texture into this context, and then check whether the point in the context is transparent or not.
Something like this would help you out:
How do I get the RGB Value of a pixel using CGContext?
//: Playground - noun: a place where people can play
import UIKit
import XCPlayground

extension CALayer {
    func colorOfPoint(point: CGPoint) -> UIColor {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, bitmapInfo.rawValue)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.renderInContext(context!)
        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
        //println("point color - red:\(red) green:\(green) blue:\(blue)")
        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
}

extension UIColor {
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r, g, b, a)
    }
}

//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string: "https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50)
let imageSlice = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea)
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)
let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))

let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)
UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)
for x in 0...Int(image.size.width) {
    for y in 0...Int(image.size.height) {
        var pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        } else {
            CGContextSetRGBFillColor(context, 255, 0.5, 0.5, 1)
        }
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()
where you would eventually call colorOfPoint(point:localLocation).cgColor.alpha > 0 to determine if you are touching a node or not.
Now I would recommend you make colorOfPoint an extension of SKSpriteNode, so be creative with the code posted above.
func isHit(node: SKSpriteNode, position: CGPoint) -> Bool {
    return node.colorOfPoint(point: position).cgColor.alpha > 0
}
Your final code would look something like this:
hitNodes = hitNodes.filter { node in
    guard let sprite = node as? SKSpriteNode else { return false }
    // assuming touchLocation is in scene coordinates
    let localLocation = sprite.convert(touchLocation, from: self)
    return isHit(node: sprite, mask: mask, position: localLocation)
}
OR
hitNodes = hitNodes.filter { node in
    guard let sprite = node as? SKSpriteNode else { return false }
    // assuming touchLocation is in scene coordinates
    let localLocation = sprite.convert(touchLocation, from: self)
    return isHit(node: sprite, position: localLocation)
}
which filters out all nodes that were detected by the frame comparison but not actually touched, leaving you with pixel-perfect touched nodes.
Note: The code from the separate SO link may need to be converted to Swift 4.

Sample NSImage and retain quality (swift 3)

I wrote an NSImage extension to allow me to take random samples of an image. I would like those samples to retain the same quality as the original image. However, they appear to be aliased or slightly blurry. Here's an example - the original drawn on the right and a random sample on the left:
I'm playing around with this in SpriteKit at the moment. Here's how I create the original image:
let bg = NSImage(imageLiteralResourceName: "ref")
let tex = SKTexture(image: bg)
let sprite = SKSpriteNode(texture: tex)
sprite.position = CGPoint(x: size.width / 2, y: size.height / 2)
addChild(sprite)
And here's how I create the sample:
let sample = bg.sample(size: NSSize(width: 100, height: 100))
let sampletex = SKTexture(image: sample!)
let samplesprite = SKSpriteNode(texture: sampletex)
samplesprite.position = CGPoint(x: 60, y: size.height / 2)
addChild(samplesprite)
Here's the NSImage extension (and randomNumber func) that creates the sample:
extension NSImage {
    /// Returns the height of the current image.
    var height: CGFloat {
        return self.size.height
    }
    /// Returns the width of the current image.
    var width: CGFloat {
        return self.size.width
    }

    func sample(size: NSSize) -> NSImage? {
        // Resize the current image, while preserving the aspect ratio.
        let source = self
        // Make sure that we are within a suitable range
        var checkedSize = size
        checkedSize.width = floor(min(checkedSize.width, source.size.width * 0.9))
        checkedSize.height = floor(min(checkedSize.height, source.size.height * 0.9))
        // Get random points for the crop.
        let x = randomNumber(range: 0...(Int(source.width) - Int(checkedSize.width)))
        let y = randomNumber(range: 0...(Int(source.height) - Int(checkedSize.height)))
        // Create the cropping frame.
        var frame = NSRect(x: x, y: y, width: Int(checkedSize.width), height: Int(checkedSize.height))
        // let ref = source.cgImage.cropping(to: frame)
        let ref = source.cgImage(forProposedRect: &frame, context: nil, hints: nil)
        let rep = NSBitmapImageRep(cgImage: ref!)
        // Create a new image with the new size
        let img = NSImage(size: checkedSize)
        // Set a graphics context
        img.lockFocus()
        defer { img.unlockFocus() }
        // Fill in the sample image
        if rep.draw(in: NSMakeRect(0, 0, checkedSize.width, checkedSize.height),
                    from: frame,
                    operation: NSCompositingOperation.copy,
                    fraction: 1.0,
                    respectFlipped: false,
                    hints: [NSImageHintInterpolation: NSImageInterpolation.high.rawValue]) {
            // Return the cropped image.
            return img
        }
        // Return nil in case anything fails.
        return nil
    }
}

func randomNumber(range: ClosedRange<Int> = 0...100) -> Int {
    let min = range.lowerBound
    let max = range.upperBound
    return Int(arc4random_uniform(UInt32(1 + max - min))) + min
}
I've tried this about 10 different ways and the results always seem to be a slightly blurry sample. I even checked for smudges on my screen. :)
How can I create a sample of an NSImage that retains the exact qualities of the section of the original source image?
Switching the interpolation mode to NSImageInterpolation.none was apparently sufficient in this case.
It's also important to handle the draw destination rect correctly. Since cgImage(forProposedRect:context:hints:) may change the proposed rect, you should use a destination rect that's based on it: essentially a copy of frame offset by (-x, -y), so it's relative to (0, 0) instead of (x, y).
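Putting both together, a hedged sketch of how the end of sample(size:) might look (same names as the extension above; keeping the source and destination rects the same size means no resampling occurs):

// Sketch: draw with .none interpolation, into a rect derived from `frame`.
let ref = source.cgImage(forProposedRect: &frame, context: nil, hints: nil)
let rep = NSBitmapImageRep(cgImage: ref!)
// Destination: `frame` shifted back to the origin, so source and
// destination are the same size and no interpolation is needed.
var destRect = frame
destRect.origin = .zero
let img = NSImage(size: frame.size)
img.lockFocus()
defer { img.unlockFocus() }
if rep.draw(in: destRect,
            from: frame,
            operation: .copy,
            fraction: 1.0,
            respectFlipped: false,
            hints: [NSImageHintInterpolation: NSImageInterpolation.none.rawValue]) {
    return img
}
return nil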