Custom overlay (renderer) is getting cut off by map tiles (in some cases) - Swift

I wrote a custom renderer to represent a min and a max radius. In some cases the renderer is not working as expected. It looks like the overlay is getting cut off by the map tiles.
Here is how I did it. Did I miss something?
class RadiusOverlayRenderer: MKOverlayRenderer {
    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
        guard let overlay = self.overlay as? RadiusOverlay else {
            return
        }
        let maxRadiusRect = self.rect(for: overlay.boundingMapRect)
            .offsetBy(
                dx: CGFloat(-overlay.boundingMapRect.height)/2,
                dy: CGFloat(-overlay.boundingMapRect.width)/2
            )
        let minRadiusRect = CGRect(
            x: Double(maxRadiusRect.midX) - overlay.minRadRect.width/2,
            y: Double(maxRadiusRect.midY) - overlay.minRadRect.height/2,
            width: overlay.minRadRect.width,
            height: overlay.minRadRect.height)
        let aPath = CGMutablePath()
        aPath.addEllipse(in: maxRadiusRect)
        aPath.addEllipse(in: minRadiusRect)
        aPath.closeSubpath()
        context.setFillColor(overlay.color.cgColor)
        context.setAlpha(overlay.alpha)
        context.addPath(aPath)
        context.drawPath(using: .eoFillStroke)
    }
}

Notice that only the upper left parts are clipped?
With .offsetBy you are drawing outside of the boundingMapRect.
Remove the .offsetBy...
If you want to draw your circle at a different place, then adjust the coordinate and/or boundingMapRect of your MKOverlay.
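For clarity, here is a minimal sketch of the fixed renderer, assuming (as the question's code implies) that RadiusOverlay's boundingMapRect is already centered on the overlay coordinate and sized to the max radius; RadiusOverlay, minRadRect, color and alpha are the question's own names:

class RadiusOverlayRenderer: MKOverlayRenderer {
    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
        guard let overlay = self.overlay as? RadiusOverlay else { return }

        // boundingMapRect already encloses the max radius, so no offset is needed.
        let maxRadiusRect = rect(for: overlay.boundingMapRect)
        let minRadiusRect = CGRect(
            x: maxRadiusRect.midX - CGFloat(overlay.minRadRect.width) / 2,
            y: maxRadiusRect.midY - CGFloat(overlay.minRadRect.height) / 2,
            width: CGFloat(overlay.minRadRect.width),
            height: CGFloat(overlay.minRadRect.height))

        // Even-odd fill punches the min-radius hole out of the max-radius disc.
        let path = CGMutablePath()
        path.addEllipse(in: maxRadiusRect)
        path.addEllipse(in: minRadiusRect)

        context.setFillColor(overlay.color.cgColor)
        context.setAlpha(overlay.alpha)
        context.addPath(path)
        context.drawPath(using: .eoFillStroke)
    }
}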


How Do I Draw a String in an MKOverlayRenderer

The use case I have is one where I want to draw and label counties in a state. Annotations don't seem like the right approach to solve this problem. First of all, the label refers to a region rather than a point. Second, there are far too many; so, I would have to selectively show and hide annotations based on zoom level (actually something more like the size of the MKCoordinateRegion span). Lastly, county labels are not all that relevant unless the user starts zooming in.
Just as a side note, county boundaries may be present in map tiles, but they are not emphasized. Moreover, there are a multitude of other boundaries I might want to draw that are completely absent from map tiles.
Ultimately, what I want to do is create an overlay for each county shape (counties are clickable and I can navigate to details) and another set of overlays for the labels. I separate county shapes and labels because county shapes are messy and I just use the center of the county. There is no guarantee with this approach that labels will not draw outside of county shapes, which means labels could end up getting clipped when other counties are drawn.
Drawing the county shapes was relatively easy, or at least relatively well documented. I do not include any code on rendering shapes. Drawing text, on the other hand, is not straightforward, not well documented, and most of the posts on the subject are ancient. The lack of recent posts on the subject, as well as the fact that most posts posit solutions that no longer work, use deprecated APIs, or only solve part of the problem, motivates this post. Of course, the lack of activity on this problem could be because my strategy is mind-numbingly stupid.
I have posted a complete solution to the problem. If you can improve on the solution below or believe there is a better way, I would appreciate the feedback. Alternatively, if you are trying to find a solution to this problem, you will find this post more helpful than the dozens I have looked at, which on the whole got me to where I am now.
Below is a complete solution that can be run in an Xcode single view Playground. I am running Xcode 14.2. The most important bit of code is the overridden draw function of LabelOverlayRenderer. That bit of code is what I struggled to craft for more than a day. I almost gave up. Another key point is that when drawing text, one uses Core Text. The APIs pertaining to drawing and managing text are many, and most have gone through name changes and deprecations.
import UIKit
import MapKit
import SwiftUI

class LabelOverlayRenderer: MKOverlayRenderer {
    let title: String
    let center: CLLocationCoordinate2D

    init(overlay: LabelOverlay) {
        center = overlay.coordinate
        title = overlay.title!
        super.init(overlay: overlay)
    }

    override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext) {
        context.saveGState()
        // Set drawing mode
        context.setTextDrawingMode(.fillStroke)
        // If I don't do this, the text is upside down.
        context.textMatrix = CGAffineTransform(scaleX: 1.0, y: -1.0)
        // Text size is crazy big because the label has to be miles across
        // to be visible.
        var attrs = [NSAttributedString.Key: Any]()
        attrs[NSAttributedString.Key.font] = UIFont(name: "Helvetica", size: 128000.0)!
        attrs[NSAttributedString.Key.foregroundColor] = UIColor(Color.red)
        let attributedString = NSAttributedString(string: title, attributes: attrs)
        let line = CTLineCreateWithAttributedString(attributedString)
        // Get the size of the whole string, so the string can
        // be centered. CGSize is huge because I don't want
        // to clip or wrap the string. The range setting
        // is just cut and paste. Looks like a placeholder.
        // Ideally, it is the range of that portion
        // of the string for which I want the size.
        let frameSetter = CTFramesetterCreateWithAttributedString(attributedString)
        let size = CTFramesetterSuggestFrameSizeWithConstraints(frameSetter, CFRangeMake(0, 0), nil, CGSize(width: 1000000, height: 1000000), nil)
        // Center is lat-lon, but map is in meters (maybe? definitely
        // not lat-lon). Center string and draw.
        var p = point(for: MKMapPoint(center))
        p.x -= size.width / 2
        p.y += size.height / 2
        // There is no "at" on CTLineDraw. The string
        // is positioned in the context.
        context.textPosition = p
        CTLineDraw(line, context)
        context.restoreGState()
    }
}

class LabelOverlay: NSObject, MKOverlay {
    let title: String?
    let coordinate: CLLocationCoordinate2D
    let boundingMapRect: MKMapRect

    init(title: String, coordinate: CLLocationCoordinate2D, boundingMapRect: MKMapRect) {
        self.title = title
        self.coordinate = coordinate
        self.boundingMapRect = boundingMapRect
    }
}

class MapViewCoordinator: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let overlay = overlay as? LabelOverlay {
            return LabelOverlayRenderer(overlay: overlay)
        }
        fatalError("Unknown overlay type!")
    }
}

struct MyMapView: UIViewRepresentable {
    func makeCoordinator() -> MapViewCoordinator {
        return MapViewCoordinator()
    }

    func updateUIView(_ view: MKMapView, context: Context) {
        // Center on Georgia
        let center = CLLocationCoordinate2D(latitude: 32.6793, longitude: -83.62245)
        let span = MKCoordinateSpan(latitudeDelta: 4.875, longitudeDelta: 5.0003)
        let region = MKCoordinateRegion(center: center, span: span)
        view.setRegion(region, animated: true)
        view.delegate = context.coordinator
        let coordinate = CLLocationCoordinate2D(latitude: 32.845084, longitude: -84.3742)
        let mapRect = MKMapRect(x: 70948460.0, y: 107063759.0, width: 561477.0, height: 613908.0)
        let overlay = LabelOverlay(title: "Hello World!", coordinate: coordinate, boundingMapRect: mapRect)
        view.addOverlay(overlay)
    }

    func makeUIView(context: Context) -> MKMapView {
        // Create a map with constrained zoom gestures only
        let mapView = MKMapView(frame: .zero)
        mapView.isPitchEnabled = false
        mapView.isRotateEnabled = false
        let zoomRange = MKMapView.CameraZoomRange(
            minCenterCoordinateDistance: 160000,
            maxCenterCoordinateDistance: 1400000
        )
        mapView.cameraZoomRange = zoomRange
        return mapView
    }
}

struct ContentView: View {
    var body: some View {
        VStack {
            MyMapView()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
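As a side note, the MKMapRect above is a hard-coded captured value. If you would rather derive a bounding rect from a coordinate, a sketch like this works (the helper name and the width/height in map points are assumptions to tune per label):

// Hypothetical helper: build a boundingMapRect centered on a coordinate.
func labelMapRect(around coordinate: CLLocationCoordinate2D,
                  width: Double, height: Double) -> MKMapRect {
    let center = MKMapPoint(coordinate)
    return MKMapRect(x: center.x - width / 2,
                     y: center.y - height / 2,
                     width: width, height: height)
}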

NSBezierPath stroke not scaled correctly

I have what should be a simple subclass of an NSView that draws circular nodes at specified locations.
To render the nodes in my view, I translate the graphics context's origin to the center of the view's frame and scale it such that it spans from -1.25 to 1.25 in the limiting dimension (the node coordinates are all in the range -1...1). I then create for each node an NSBezierPath using the ovalIn: constructor. Finally, I fill the path with yellow and stroke it with black.
But... While the yellow fill looks ok, the black outline is not being scaled correctly!
What am I missing?
Here's the code:
override func draw(_ dirtyRect: NSRect)
{
    let nodeRadius = CGFloat(0.05)
    let unscaledSpan = CGFloat(2.5)

    super.draw(dirtyRect)

    NSColor.white.set()
    self.frame.fill()

    guard let graph = graph else { return }

    let scale = min(bounds.width/unscaledSpan, bounds.height/unscaledSpan)

    NSGraphicsContext.current?.saveGraphicsState()
    defer { NSGraphicsContext.current?.restoreGraphicsState() }

    let xform = NSAffineTransform()
    xform.translateX(by: 0.5*bounds.width, yBy: 0.5*bounds.height)
    xform.scale(by: scale)
    xform.concat()

    for v in graph.vertices
    {
        let r = NSRect(x: v.x - nodeRadius, y: v.y - nodeRadius, width: 2.0*nodeRadius, height: 2.0*nodeRadius)
        let p = NSBezierPath(ovalIn: r)
        NSColor.yellow.set()
        p.fill()
        NSColor.black.set()
        p.stroke()
    }
}
This is what I'm seeing (shown with two different window sizes)
Clearly, the translation is working fine for both fill and stroke.
But, the scaling is off for stroke.
Thanks for any/all hints/suggestions.
Doh... I wasn't considering the effect of scaling on the line width. The path's default line width of 1 is expressed in the scaled coordinate space, so after the transform it is drawn roughly "scale" pixels thick.
Made the following edit and all is well:
...
let p = NSBezierPath(ovalIn: r)
p.lineWidth = CGFloat(0.01)
NSColor.yellow.set()
p.fill()
NSColor.black.set()
p.stroke()
...
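If instead you want the outline to stay a constant width on screen at any window size, one option (a sketch reusing the r and scale values from the draw method above) is to express the width in pixels and divide by the scale factor:

// Sketch: a 1-pixel outline regardless of the current scale.
let p = NSBezierPath(ovalIn: r)
p.lineWidth = 1.0 / scale
NSColor.yellow.set()
p.fill()
NSColor.black.set()
p.stroke()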

Draw Trails in Xcode

I am working on a screensaver in Xcode. I have an array of points and am changing their positions every frame in the overridden animateOneFrame() function. How would I have the points create a fading trail similar to this: https://www.youtube.com/watch?v=fDSIRXmnVvk&t=117s&ab_channel=CodeParade
For reference, in my draw function I have this and it only displays the points, no trail. It effectively draws a black background every frame even though I don't tell it to:
override func draw(_ rect: NSRect) {
    for point in 1..<PointArray.count {
        drawPoint(p: PointArray[point])
    }
}

private func drawPoint(p: point) {
    let pixel = NSRect(x: p.position.x,
                       y: p.position.y,
                       width: 3,
                       height: 3)
    p.color.setFill()
    pixel.fill()
}
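One common way to get that kind of fading trail is to keep a short history of recent positions per point and redraw them with decaying alpha each frame. A minimal sketch, assuming a hypothetical history array and maxTrailLength added to the question's point type:

// Sketch: fading trails by redrawing each point's recent positions
// with decreasing alpha. `history` is a hypothetical [CGPoint] property.
private let maxTrailLength = 30

private func drawTrail(p: point) {
    for (i, pos) in p.history.enumerated() {
        // Older positions (low i) get lower alpha, so the tail fades out.
        let alpha = CGFloat(i + 1) / CGFloat(p.history.count)
        p.color.withAlphaComponent(alpha).setFill()
        NSRect(x: pos.x, y: pos.y, width: 3, height: 3).fill()
    }
}

// In animateOneFrame(), after moving each point:
// p.history.append(p.position)
// if p.history.count > maxTrailLength { p.history.removeFirst() }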

How could I get mouse click coordinates using Swift on macOS?

I am trying to find out how to get mouse click coordinates so I can draw a line. I would like to make two clicks (for the first and second points) and have a line created between them.
I analysed a lot of code, but the examples are huge (for example, I'm fond of this approach: https://stackoverflow.com/a/47496766/9058168).
I hope drawing a line in Swift isn't difficult. Could I use a variable holding the mouse click coordinates instead of the hard-coded coordinates (see the code below)? If so, how would I code it? Please help me make it simpler.
import Cocoa

class DrawLine: NSView {
    override func draw(_ dirtyRect: NSRect) {
        NSBezierPath.strokeLine(from: CGPoint(x: 20, y: 20), to: CGPoint(x: 0, y: 100))
    }
}
Listen for the mouse-down event, record the start and end locations, and use them to draw the path.
import Cocoa

class DrawLine: NSView {
    var startPoint: NSPoint?
    var endPoint: NSPoint?

    override func mouseDown(with event: NSEvent) {
        // Convert from window coordinates to this view's coordinates.
        let location = convert(event.locationInWindow, from: nil)
        if startPoint == nil || endPoint != nil {
            // Start a new line; clear the old end point so the next
            // click can set it.
            startPoint = location
            endPoint = nil
        } else {
            endPoint = location
            needsDisplay = true
        }
    }

    override func draw(_ dirtyRect: NSRect) {
        if let theStart = startPoint, let theEnd = endPoint {
            NSBezierPath.strokeLine(from: theStart, to: theEnd)
        }
    }
}

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function that checks if the user is touching inside the image view or not, if not then sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if (touch.view == imageView) {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see I have the checker there to make the filter center appear in the middle of the image if the tap landed outside the image view. I'm also printing out the tapped coordinates to Xcode's console and they appear without issue.
This is the part where i apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using. Ignore the "frame" in there; it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of it regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated. Thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage's extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would actually be CIVector(800,100).
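To make that conversion concrete, here is a minimal sketch for an aspect-fit UIImageView (the function name is hypothetical, and it also subtracts the letterbox offset, which the multiply-by-4 shorthand above glosses over):

// Sketch: convert a touch point in an aspect-fit UIImageView into
// CIImage coordinates (origin bottom-left). Assumes imageView.image is set.
func ciPoint(forTouch touch: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    // Scale factor used by .scaleAspectFit.
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    let displayedSize = CGSize(width: image.size.width * scale,
                               height: image.size.height * scale)
    // The displayed image is centered (letterboxed) in the view.
    let origin = CGPoint(x: (imageView.bounds.width - displayedSize.width) / 2,
                         y: (imageView.bounds.height - displayedSize.height) / 2)
    // Map into image pixel coordinates.
    let xImage = (touch.x - origin.x) / scale
    let yImage = (touch.y - origin.y) / scale
    // Flip Y: UIKit origin is top-left, Core Image origin is bottom-left.
    return CGPoint(x: xImage, y: image.size.height - yImage)
}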
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. That's okay; using the GPU means dropping below UIKit, specifically to a GLKView. It's roughly the GPU-backed equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox the image into the drawable (AspectFit).
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a custom UIColor extension (not shown here) returning
            // the 0-255 color components.
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)      // GL_COLOR_BUFFER_BIT
            // Set the blend mode to "source over" so that CI will use that.
            glEnable(0x0BE2)         // GL_BLEND
            glBlendFunc(1, 0x0303)   // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
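A hedged usage sketch (bumpFilter and glkView are assumed to exist elsewhere, and the view-to-image scaling discussed above is glossed over): update the filter's center as the finger moves and assign the output to image, and the didSet triggers a GPU redraw:

// Hypothetical usage: near real time filtering as the finger moves.
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let p = touch.location(in: glkView)
    // Convert to CI coordinates (bottom-left origin) before setting the key.
    let center = CIVector(x: p.x, y: glkView.bounds.height - p.y)
    bumpFilter.setValue(center, forKey: kCIInputCenterKey)
    // Setting image calls setNeedsDisplay() via didSet, redrawing on the GPU.
    glkView.image = bumpFilter.outputImage
}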
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for a good performing, near real time use of Core Image, using the GPU. One reason my aforementioned code to scale after getting the output of a filter was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.