NSBezierPath stroke not scaled correctly - swift

I have what should be a simple subclass of an NSView that draws circular nodes at specified locations.
To render the nodes in my view, I translate the graphics context's origin to the center of the view's frame and scale it such that it spans from -1.25 to 1.25 in the limiting dimension (the node coordinates are all in the range -1...1). I then create for each node an NSBezierPath using the ovalIn: constructor. Finally, I fill the path with yellow and stroke it with black.
But... While the yellow fill looks ok, the black outline is not being scaled correctly!
What am I missing?
Here's the code:
override func draw(_ dirtyRect: NSRect)
{
    let nodeRadius = CGFloat(0.05)
    let unscaledSpan = CGFloat(2.5)
    super.draw(dirtyRect)
    NSColor.white.set()
    self.frame.fill()
    guard let graph = graph else { return }
    let scale = min(bounds.width/unscaledSpan, bounds.height/unscaledSpan)
    NSGraphicsContext.current?.saveGraphicsState()
    defer { NSGraphicsContext.current?.restoreGraphicsState() }
    let xform = NSAffineTransform()
    xform.translateX(by: 0.5*bounds.width, yBy: 0.5*bounds.height)
    xform.scale(by: scale)
    xform.concat()
    for v in graph.vertices
    {
        let r = NSRect(x: v.x-nodeRadius, y: v.y-nodeRadius, width: 2.0*nodeRadius, height: 2.0*nodeRadius)
        let p = NSBezierPath(ovalIn: r)
        NSColor.yellow.set()
        p.fill()
        NSColor.black.set()
        p.stroke()
    }
}
This is what I'm seeing (shown with two different window sizes)
Clearly, the translation is working fine for both fill and stroke.
But, the scaling is off for stroke.
Thanks for any/all hints/suggestions.

Doh... I wasn't considering the effect of scaling on the line width.
Made the following edit and all is well:
...
let p = NSBezierPath(ovalIn: r)
p.lineWidth = CGFloat(0.01)
NSColor.yellow.set()
p.fill()
NSColor.black.set()
p.stroke()
...
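For what it's worth, and not part of the original fix: the default lineWidth of 1.0 is also multiplied by the scale in the current transform, which is why the outline came out so thick. If you want the stroke to keep the same on-screen width at every window size, one sketch of an alternative is to divide by the scale factor computed earlier:
...
let p = NSBezierPath(ovalIn: r)
p.lineWidth = 1.0 / scale   // keeps roughly a 1-point stroke at any window size
...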

Related

Swift Combine and reverse UIBezierPaths

The problem I am facing is reversing subpaths. Take this example:
let circlePaths: [UIBezierPath] = ...
let rectanglePath: UIBezierPath = ... // a rectangle
let totalPath: UIBezierPath = .init()
for path in circlePaths {
    totalPath.append(path)
}
rectanglePath.append(totalPath)
It should look like this:
Now ideally I want to cut out all the circles using
bezierPath.append(totalPath.reversing())
However, the effect is not as expected. I expected the two circles to form a single path that gets reversed as a whole; in reality each circle path is reversed individually, which leaves the intersection as part of the path (and reversing() twice has no effect). I'd like to combine the circle paths into one, so the intersection is no longer present as a separate region but is part of a single outline. I want the smaller circle to "extend" the larger circle as a path.
Any idea how I would do it?
Edit 1: Here is an image of how the resulting path should look.
If you need to actually create a single path as the combination / union of your multiple paths, you may want to look at one of the libraries that are out there.
However, if you only need that visual output, this might be a usable approach.
Create 3 paths -- outer rect, large circle, small circle.
Stroke and Fill the outer rect path
Stroke both circle paths
Fill both circle paths
An example UIView class:
class FakeUnionUIBezierPaths : UIView {
    override func draw(_ rect: CGRect) {
        // yellow line width
        let lWidth: CGFloat = 8
        // paths are stroked on the center, so
        // use one-half the line width to adjust edges
        let hWidth: CGFloat = lWidth * 0.5
        // border / outer rect, inset by one-half line width
        let dRect: CGRect = rect.insetBy(dx: hWidth, dy: hWidth)
        // first circle
        //   full-height
        //   aligned to left edge
        var c1Rect: CGRect = dRect
        c1Rect.size.width = c1Rect.height
        // second circle
        //   half-height
        //   aligned to right edge
        var c2Rect: CGRect = dRect
        c2Rect.size.height *= 0.5
        c2Rect.size.width = c2Rect.height
        c2Rect.origin.x = dRect.width - c2Rect.width + hWidth
        c2Rect.origin.y = dRect.height * 0.25
        let pRect: UIBezierPath = UIBezierPath(rect: dRect)
        let p1: UIBezierPath = UIBezierPath(ovalIn: c1Rect)
        let p2: UIBezierPath = UIBezierPath(ovalIn: c2Rect)
        UIColor.yellow.setStroke()
        UIColor.blue.setFill()
        // same line-width for all three paths
        [pRect, p1, p2].forEach { p in
            p.lineWidth = lWidth
        }
        // stroke and fill the border / outer rect
        pRect.stroke()
        pRect.fill()
        // stroke both circle paths
        p1.stroke()
        p2.stroke()
        // fill both circle paths
        p1.fill()
        p2.fill()
    }
}
Output:
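If you eventually do need a genuine single union path rather than this visual approach, and you can require a recent OS version, CGPath gained boolean path operations (iOS 16 / macOS 13, as far as I recall). Treat this as an untested sketch, reusing the names from the question:
// Untested sketch: union the two circle paths into one CGPath, then append it reversed.
let combined = circlePaths[0].cgPath.union(circlePaths[1].cgPath, using: .winding)
rectanglePath.append(UIBezierPath(cgPath: combined).reversing())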

custom overlay (renderer) is getting cut off by map tiles (in some cases)

I wrote a custom renderer to represent a min and a max radius. In some cases the renderer is not working as expected. It looks like the overlay is getting cut off by the map tiles.
See the full video
Here is how I did it. Did I miss something?
class RadiusOverlayRenderer: MKOverlayRenderer {
    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
        guard let overlay = self.overlay as? RadiusOverlay else {
            return
        }
        let maxRadiusRect = self.rect(for: overlay.boundingMapRect)
            .offsetBy(
                dx: CGFloat(-overlay.boundingMapRect.height)/2,
                dy: CGFloat(-overlay.boundingMapRect.width)/2
            )
        let minRadiusRect = CGRect(
            x: Double(maxRadiusRect.midX)-overlay.minRadRect.width/2,
            y: Double(maxRadiusRect.midY)-overlay.minRadRect.height/2,
            width: overlay.minRadRect.width,
            height: overlay.minRadRect.height)
        let aPath = CGMutablePath()
        aPath.addEllipse(in: maxRadiusRect)
        aPath.addEllipse(in: minRadiusRect)
        aPath.closeSubpath()
        context.setFillColor(overlay.color.cgColor)
        context.setAlpha(overlay.alpha)
        context.addPath(aPath)
        context.drawPath(using: .eoFillStroke)
    }
}
Notice that only the upper left parts are clipped?
With .offsetBy you are drawing outside of the .boundingMapRect.
Remove the .offsetBy...
If you want to draw your circle at a different place, then adjust the coordinate and/or boundingMapRect of your MKOverlay.
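A rough sketch of the same draw method with the offset removed (assuming the RadiusOverlay type from the question, with minRadRect already expressed in the renderer's drawing points):
class RadiusOverlayRenderer: MKOverlayRenderer {
    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
        guard let overlay = self.overlay as? RadiusOverlay else { return }
        // The outer circle fills the rect that corresponds to boundingMapRect,
        // so nothing is drawn outside the area MapKit asks this renderer to cover.
        let maxRadiusRect = self.rect(for: overlay.boundingMapRect)
        // The inner circle stays centered on the outer one.
        let minRadiusRect = CGRect(
            x: maxRadiusRect.midX - overlay.minRadRect.width / 2,
            y: maxRadiusRect.midY - overlay.minRadRect.height / 2,
            width: overlay.minRadRect.width,
            height: overlay.minRadRect.height)
        let aPath = CGMutablePath()
        aPath.addEllipse(in: maxRadiusRect)
        aPath.addEllipse(in: minRadiusRect)
        context.setFillColor(overlay.color.cgColor)
        context.setAlpha(overlay.alpha)
        context.addPath(aPath)
        context.drawPath(using: .eoFillStroke)
    }
}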

Change drawing order of multiple CAShapeLayers

I have my program drawing several different CGMutablePaths, each of which belongs to its own CAShapeLayer. It currently looks like this
Notice how the lines of the rectangle overlap the circles. Here is how I want it to look
Essentially, I want the ellipses and their fill to be drawn on top of the lines of the rectangle.
In my code, I have an array of CAShapeLayers, the first of which is the rectangle. In awakeFromNib(), I loop through all the shape layers and update information like the stroke width and fill, then I manually change the fill color of the rectangle to be different. In the draw function, I create all my paths, rectangle first, then ellipses. Finally, I add those paths to their relative shape layer, ellipses after the rectangle.
I have tried swapping the order in each case, but no luck. I honestly can't think of anything else that might be affecting the draw order. The strangest part about it is that, in the first image, you'll see that the bottom left ellipse has indeed been drawn above the rectangle, but nowhere in my code is it set apart.
var shapeLayers: [String: CAShapeLayer] = [
    "rectangle": CAShapeLayer(),
    "ellipse1": CAShapeLayer(),
    "ellipse2": CAShapeLayer()...and so on
]
override func awakeFromNib() {
    wantsLayer = true
    for (_, shapeLayer) in shapeLayers {
        shapeLayer.lineWidth = borderWidth
        shapeLayer.strokeColor = ColorScheme.blue.cgColor
        shapeLayer.fillColor = ColorScheme.white.cgColor
        shapeLayer.path = nil
        shapeLayer.lineCap = kCALineCapSquare
        layer?.addSublayer(shapeLayer)
    }
    shapeLayers["rectangle"]?.fillColor = NSColor.clear.cgColor
}
override func draw(_ dirtyRect: NSRect) {
    let rectangle = CGMutablePath().addRect(selectionBounds)
    let ellipse1 = CGMutablePath().addEllipse(in: CGRect(origin: somePoint, size: someSize))
    let ellipse2 = CGMutablePath().addEllipse(in: CGRect(origin: somePoint, size: someSize))
    ...and so on
    shapeLayers["rectangle"]?.path = rectangle
    shapeLayers["ellipse1"]?.path = ellipse1
    shapeLayers["ellipse2"]?.path = ellipse2...and so on
}
I'm clearly missing something basic in my understanding of shapeLayers and paths. Your help is greatly appreciated - TIA
You can use the zPosition to force the layers to draw in a specific order. Just assign them arbitrary values as long as the numbers ascend from back layer to front layer.
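For example, assuming the shapeLayers dictionary from the question, a sketch like this should push the ellipses in front of the rectangle:
// Lower zPosition values sit further back; the rectangle stays behind the ellipses.
shapeLayers["rectangle"]?.zPosition = 0
shapeLayers["ellipse1"]?.zPosition = 1
shapeLayers["ellipse2"]?.zPosition = 1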

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function that checks if the user is touching inside the image view or not, if not then sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if (touch.view == imageView){
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        }else{
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console and they appear without issue.
This is the part where i apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left hand side of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage's extent. Many times it's the UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example, a CGPoint(200,100), scaled up to CGPoint(800,400), would become CIVector(800,100) once the Y value is flipped against the image height of 500.
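Putting both points together, here is a rough sketch of the conversion. It is not code from the question or from my project; the function and parameter names are hypothetical, and it assumes an aspect-fit UIImageView:
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView, image: UIImage) -> CIVector {
    // How much the image was scaled (down or up) to aspect-fit the view.
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    // Letterbox offsets: where the drawn image starts inside the view.
    let xOffset = (imageView.bounds.width - image.size.width * scale) / 2
    let yOffset = (imageView.bounds.height - image.size.height * scale) / 2
    // Touch point expressed in the image's own coordinates.
    let ix = (touchPoint.x - xOffset) / scale
    let iy = (touchPoint.y - yOffset) / scale
    // Flip Y: UIKit's origin is top left, Core Image's is bottom left.
    return CIVector(x: ix, y: image.size.height - iy)
}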
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all its subclasses (meaning UIKit), use the CPU. That's okay; using the GPU here means using GLKit, specifically a GLKView. Think of it as roughly the GPU-backed counterpart of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {
    var renderContext: CIContext
    var myClearColor:UIColor!
    var rgb:(Int?,Int?,Int?)!
    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }
    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }
    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }
    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }
    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }
    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0,0,0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0);
            glClear(0x00004000)
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2);
            glBlendFunc(1, 0x0303);
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for good-performing, near real time use of Core Image on the GPU. One reason my afore-mentioned scaling code (run after getting the output of a filter) was never updated? It didn't need to be.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.

Image Cropping grabbing the wrong portion of UIImage during crop

I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason whenever the image is cropped, it is grabbing the wrong reference. I've looked at just about every other post on this to deal with cropping.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL UR DL DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }
    // determine the orientation of the image and apply a transformation to the crop rectangle to shift it to the correct position
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }
    // adjust the transformation scale based on the image scale
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)
    // apply the transformation to the rect to create a new, shifted rect
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)
    // use the rect to crop the image
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)
    // create a new UIImage and set the scale and orientation appropriately
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions to set and translate the mask view
func setTopMask(){
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}
func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    // print(sender.translationInView(self.view))
    sender.setTranslation(CGPointZero, inView: self.view)
    // print("panned mask")
    if sender.state == .Ended {
        printFrames()
    }
}
func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view that holds the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a full solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
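As a rough sketch of that idea, written in current Swift with hypothetical names and assuming an aspect-fit image view: convert the selection rect from the image view's coordinates to the image's own coordinates before handing it to the crop function.
func imageRect(forViewRect viewRect: CGRect, imageView: UIImageView, image: UIImage) -> CGRect {
    // How much the image was scaled to aspect-fit the view.
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    // Letterbox offsets: where the drawn image starts inside the view.
    let xOffset = (imageView.bounds.width - image.size.width * scale) / 2
    let yOffset = (imageView.bounds.height - image.size.height * scale) / 2
    return CGRect(x: (viewRect.origin.x - xOffset) / scale,
                  y: (viewRect.origin.y - yOffset) / scale,
                  width: viewRect.width / scale,
                  height: viewRect.height / scale)
}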