Canvas doesn't get redrawn in SwiftUI

I have a project in SwiftUI on macOS where I draw to a canvas twice per second.
This is my ContentView:
struct ContentView: View {
    @State var score: Int = 0

    var body: some View {
        VStack {
            Text("Score: \(self.score)")
                .fixedSize(horizontal: true, vertical: true)
            Canvas(renderer: { gc, size in
                start(
                    gc: &gc,
                    size: size,
                    onPoint: { newScore in
                        self.score = newScore
                    }
                )
            })
        }
    }
}
The start function:
var renderer: Renderer?

func start(
    gc: inout GraphicsContext,
    size: CGSize,
    onPoint: @escaping (Int) -> ()
) {
    if renderer != nil {
        renderer!.set(gc: &gc)
    } else {
        renderer = Renderer(
            context: &gc,
            canvasSize: size,
            onPoint: onPoint
        )
        startGameLoop(renderer: renderer!)
    }
    renderer!.drawFrame()
}

var timer: Timer?

func startGameLoop(renderer: Renderer) {
    timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true, block: { _ in
        renderer.handleNextFrame()
    })
}
And the renderer roughly looks like this:
class Renderer {
    var gc: GraphicsContext
    var size: CGSize
    var cellSize: CGFloat
    let pointCallback: (Int) -> ()
    var player: (Int, Int) = (0, 0)

    init(
        context: inout GraphicsContext,
        canvasSize: CGSize,
        onPoint: @escaping (Int) -> ()
    ) {
        self.gc = context
        self.size = canvasSize
        self.pointCallback = onPoint
        self.cellSize = min(self.size.width, self.size.height)
    }
}
extension Renderer {
    func handleNextFrame() {
        self.player = (self.player.0 + 1, self.player.1 + 1)
        self.drawFrame()
    }

    func drawFrame() {
        self.gc.fill(
            Path(
                CGRect(
                    x: CGFloat(self.player.0) * self.cellSize,
                    y: CGFloat(self.player.1) * self.cellSize,
                    width: self.cellSize,
                    height: self.cellSize
                )
            ),
            with: .color(.red) // fill needs a shading; red is a placeholder
        )
    }
}
So the handleNextFrame method is called twice per second; it calls the drawFrame method, which draws the player's position to the canvas.
However, nothing is drawn to the canvas.
Only the first frame is drawn, which comes from the renderer!.drawFrame() call in start. When a point is scored, the canvas is also redrawn, because the start function gets called again.
The problem is that nothing is drawn to the Canvas when drawFrame is called from handleNextFrame.
Where does my problem lie, and how can I fix this issue?
Thanks in advance,
Jonas

I had the same issue.
Go to your project settings, select the 'Build Settings' tab, and set 'Strict Concurrency Checking' to 'Complete'.
It will give you hints about GraphicsContext not being thread-safe, because it does not conform to the Sendable protocol.
I am no expert on the topic, but I believe this means that GraphicsContext is not meant to be used in an asynchronous context: you are not meant to save a reference to it for future use. You are given a GraphicsContext instance and you are meant to use it only for as long as the renderer closure's execution lasts (and that execution, of course, ends before your asynchronous callbacks could use the GraphicsContext instance).
What you really want is a TimelineView:
import SwiftUI
private func nanosValue(for date: Date) -> Int {
    return Calendar.current.component(.nanosecond, from: date)
}
struct ContentView: View {
    var body: some View {
        TimelineView(.periodic(from: Date(), by: 0.000001)) { timeContext in
            Canvas { context, size in
                let value = nanosValue(for: timeContext.date) % 2
                let rect = CGRect(origin: .zero, size: size).insetBy(dx: 25, dy: 25)
                // Path
                let path = Path(roundedRect: rect, cornerRadius: 35.0)
                // Gradient
                let gradient = Gradient(colors: [.green, value == 1 ? .blue : .red])
                let from = rect.origin
                let to = CGPoint(x: rect.width + from.x, y: rect.height + from.y)
                // Stroke path
                context.stroke(path, with: .color(.blue), lineWidth: 25)
                // Fill path
                context.fill(path, with: .linearGradient(gradient, startPoint: from, endPoint: to))
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
This flips the gradient colour of the drawn rectangle based on the nanosecond component of the timeline's date. (The schedule asks for a microsecond period, but in practice TimelineView updates are capped at the display's refresh rate.)
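If you want your half-second game loop rather than a cosmetic demo, here is a minimal sketch of the same pattern (illustrative names, not from your project): derive the player's cell from the schedule's tick count inside the Canvas closure, so the GraphicsContext never outlives the render pass.
import SwiftUI

struct GameView: View {
    // Fixed reference date so the tick count grows over time.
    private let start = Date()

    var body: some View {
        // TimelineView re-evaluates its content every 0.5 s; the
        // GraphicsContext is only ever used inside the Canvas closure.
        TimelineView(.periodic(from: .now, by: 0.5)) { timeline in
            Canvas { gc, size in
                // Derive game state from the schedule's date instead of
                // mutating a stored renderer from a Timer.
                let ticks = Int(timeline.date.timeIntervalSince(start) / 0.5)
                let cell = min(size.width, size.height) / 10
                gc.fill(
                    Path(CGRect(x: CGFloat(ticks % 10) * cell,
                                y: CGFloat(ticks % 10) * cell,
                                width: cell,
                                height: cell)),
                    with: .color(.red)
                )
            }
        }
    }
}
The important design point is that nothing stores the context: on every tick SwiftUI hands the closure a fresh GraphicsContext and the frame is drawn from scratch.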
Funny thing: I heard on some blog that you need a completely different mindset to use SwiftUI, haha. It just does not work the traditional way.

Related

Unable to get pixel color from an image in SwiftUI

I want to design a draggable color selector view that selects the color of the pixel under it. My strategy is, first, to choose the position of the pixel and then, second, to get the pixel color. The position changes correctly while dragging the view. The color of the view also changes, but I don't understand the manner in which it changes; I think it doesn't return the proper color of its pixel. Additionally, when x or y of the CGImage coordinate is less than 0, getPixelColor() returns 'UIExtendedSRGBColorSpace 0 0 0 0'.
After a long search, I have not found any proper solution for getting a pixel color in SwiftUI.
The program is given below:
struct ColorSelectorView: View {
    @Binding var position: CGPoint
    @State private var isDragging = false
    @State private var colorPicked = UIColor.black
    var image: UIImage
    var colorPickedCallBack: (UIColor) -> ()

    var body: some View {
        colorSelectorView()
            .onAppear {
                colorPicked = getPixelColor(image: image, point: position) ?? .clear
            }
            .gesture(DragGesture()
                .onChanged { value in
                    self.position.x += CGFloat(Int(value.location.x - value.startLocation.x))
                    self.position.y += CGFloat(Int(value.location.y - value.startLocation.y))
                    colorPicked = getPixelColor(image: image, point: position) ?? .clear
                }
                .onEnded { value in
                    colorPickedCallBack(colorPicked)
                }
            )
            .offset(x: position.x, y: position.y)
    }
}
func getPixelColor(image: UIImage, point: CGPoint) -> UIColor? {
    guard let pixelData = image.cgImage?.dataProvider?.data else { return nil }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = Int((image.size.width * point.y + point.x) * 4.0)
    let i = Array(0...3).map { CGFloat(data[pixelInfo + $0]) / CGFloat(255) }
    return UIColor(red: i[0], green: i[1], blue: i[2], alpha: 1)
}
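For what it's worth, the index math above assumes a tightly packed RGBA bitmap at the image's point size, while a CGImage has its own pixel width, a padded bytesPerRow, and taps can land outside the bitmap entirely. A hedged sketch of a more defensive version (the function name is mine, and the channel order is an assumption; it varies with the image's byte order):
func pixelColor(in image: UIImage, at point: CGPoint) -> UIColor? {
    guard let cgImage = image.cgImage,
          let data = cgImage.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }

    // Work in pixel coordinates and reject out-of-bounds taps instead of
    // reading outside the pixel buffer.
    let x = Int(point.x), y = Int(point.y)
    guard x >= 0, y >= 0, x < cgImage.width, y < cgImage.height else { return nil }

    // bytesPerRow includes any row padding; bitsPerPixel gives the true stride.
    let offset = y * cgImage.bytesPerRow + x * (cgImage.bitsPerPixel / 8)
    let r = CGFloat(bytes[offset]) / 255.0
    let g = CGFloat(bytes[offset + 1]) / 255.0
    let b = CGFloat(bytes[offset + 2]) / 255.0
    let a = CGFloat(bytes[offset + 3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Returning nil for out-of-range points also makes the '?? .clear' fallback in onChanged explicit rather than the result of a stray memory read.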

SwiftUI: Update struct View when calling mutating func

New to SwiftUI. I need to implement an analog clock that auto-updates every second to the current time (like a typical analog clock).
Drawing the views etc. is no problem, but I'm having trouble auto-updating the time.
View
struct ClockView: View {
    var viewModel: ClockViewModel
    var clock: ClockViewModel.TransformedClock
    var size: CGFloat
    @State var timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var body: some View {
        Circle()
            .fill(.white)
            .frame(width: size, height: size)
            .overlay {
                GeometryReader { geometry in
                    let radius = geometry.size.width / 2
                    let midX = geometry.safeAreaInsets.top + radius
                    let midY = geometry.safeAreaInsets.leading + radius
                    ClockFace(size: size, radius: radius, x: midX, y: midY)
                    ClockHand(type: .second, angle: clock.secondAngle, x: midX, y: midY, length: radius - size * 0.04, size: size)
                        .onReceive(timer) { _ in updateAngleHere }
                }
            }
    }
}
struct ClockHand: View {
    let type: ClockHandType
    @State var angle: Double
    var x: CGFloat
    var y: CGFloat
    var length: CGFloat
    var size: CGFloat

    var body: some View {
        RoundedRectangle(cornerRadius: 5)
            .fill(type.getColor())
            .frame(width: size * type.getWidth(), height: length)
            .position(x: x, y: y)
            .offset(y: -length / 2)
            .rotationEffect(Angle.degrees(angle))
    }

    mutating func update(angle: Double) {
        print("calling \(self.angle)")
        self.angle = angle
    }
}
ViewModel
class ClockViewModel: ObservableObject {
    @Published private var model: ClockModel

    init() {
        model = ClockViewModel.initClock()
    }

    static func initClock() -> ClockModel {
        let places = [
            "Europe/Bern",
        ]
        return ClockModel(number: places.count) { index in
            places[index]
        }
    }

    var activeClock: TransformedClock {
        TransformedClock(clock: model.activeWorldClock!)
    }

    struct TransformedClock: Identifiable {
        var id: Int
        var secondAngle: Double
        var clock: ClockModel.WorldClock

        init(clock: ClockModel.WorldClock) {
            let second = clock.calendar.component(.second, from: Date())
            self.clock = clock
            self.id = clock.id
            self.secondAngle = Double(second) / 60.0 * 360
        }
    }
}
Model
struct ClockModel {
    var worldClocks: [WorldClock]

    init(number: Int, timeFactory: (Int) -> String) {
        worldClocks = [WorldClock]()
        for index in 0..<number {
            let place = timeFactory(index)
            let calendar = Calendar.current
            worldClocks.append(
                WorldClock(
                    id: index,
                    calendar: calendar,
                    place: formatPlaceString(place)
                )
            )
        }
    }

    struct WorldClock: Identifiable {
        var id: Int
        var calendar: Calendar
        var place: String
    }
}
Find my code above. First of all, I just want to move the ClockHand for seconds to different positions. For this, I've been told to use a timer and make use of onReceive. My first idea was to call a mutating func of that struct in the closure, but since I can't seem to call it on that specific struct instance, I guess it's the wrong option.
So I need to find a way to teach my second-hand struct to update and redraw itself whenever I call the mutating func.
Examples I've found have only shown functions outside a struct...
Any input would be much appreciated.
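A common way out, sketched below with illustrative names, is to not mutate the hand at all: keep the angle as @State in an enclosing view, recompute it on each timer tick, and let SwiftUI rebuild the hand from the new value. One caveat carried over from the code above: because ClockHand declares angle as @State, it keeps its first value and ignores later initializer arguments, so the hand should receive the angle as a plain property instead.
struct SecondHandDemo: View {
    @State private var secondAngle: Double = 0
    private let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()

    var body: some View {
        RoundedRectangle(cornerRadius: 5)
            .frame(width: 4, height: 80)
            .rotationEffect(.degrees(secondAngle))
            .onReceive(timer) { date in
                // Recompute the angle from the current time; mutating
                // @State is what makes SwiftUI redraw the hand.
                let second = Calendar.current.component(.second, from: date)
                secondAngle = Double(second) / 60.0 * 360
            }
    }
}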

Draw multiple circles in view | ForEach

What I am trying to achieve
I am learning Swift. I am trying to draw a line with multiple circles on it.
Then I want to be able to modify the color of a circle when the user taps it.
And eventually support drag and drop for moving the circles, as in a custom Bezier curve.
Currently, I am stuck on drawing multiple circles on the line.
What I'm trying to do:
1 - I create an array containing CGPoint
public var PointArray: [CGPoint] = []
2 - Then doing an Identifiable struct
private struct Positions: Identifiable {
    var id: Int
    let point: CGPoint
}
3 - With CurveCustomInit, I fill the array. I get this error:
No exact matches in call to instance method 'append'
I have tried a lot, but I never succeed at using a ForEach in one view.
I am using shape structs because I want to customise the shapes and then reuse the components, and also add a gesture function to each circle.
Here is the whole code:
Note that a GeometryReader supplies the size of the view.
import SwiftUI
public var PointArray: [CGPoint] = []
public var PointArrayInit: Bool = false

private struct Positions: Identifiable {
    var id: Int
    let point: CGPoint
}

struct Arc: Shape {
    var startAngle: Angle
    var endAngle: Angle
    var clockwise: Bool
    var centerCustom: CGPoint

    func path(in rect: CGRect) -> Path {
        let rotationAdjustment = Angle.degrees(90)
        let modifiedStart = startAngle - rotationAdjustment
        let modifiedEnd = endAngle - rotationAdjustment
        var path = Path()
        path.addArc(center: centerCustom, radius: 20, startAngle: modifiedStart, endAngle: modifiedEnd, clockwise: !clockwise)
        return path
    }
}

struct CurveCustomInit: Shape {
    private var Divider: Int = 10

    func path(in rect: CGRect) -> Path {
        var path = Path()
        let xStep: CGFloat = DrawingZoneWidth / CGFloat(Divider)
        let yStep: CGFloat = DrawingZoneHeight / 2
        var xStepLoopIncrement: CGFloat = 0
        path.move(to: CGPoint(x: 0, y: yStep))
        var incr = 0
        for _ in 0...Divider {
            let Point: CGPoint = CGPoint(x: xStepLoopIncrement, y: yStep)
            let value = Positions(id: incr, point: Point)
            PointArray.append(value) // <- 'No exact matches in call to instance method append'
            path.addLine(to: Point)
            xStepLoopIncrement += xStep
            incr += 1
        }
        PointArrayInit = true
        return path
    }
}

struct TouchCurveBasic: View {
    var body: some View {
        if !PointArrayInit {
            // initialisation
            CurveCustomInit()
                .stroke(Color.red, style: StrokeStyle(lineWidth: 10, lineCap: .round, lineJoin: .round))
                .frame(width: 300, height: 300)
                .overlay(Arc(startAngle: .degrees(0), endAngle: .degrees(360), clockwise: true).stroke(Color.blue, lineWidth: 10))
        } else {
            // Update point
        }
    }
}

struct TouchCurveBasic_Previews: PreviewProvider {
    static var previews: some View {
        TouchCurveBasic()
    }
}
Your array PointArray contains values of type CGPoint, but you are trying to append a value of type Positions. That's why you're getting this error.
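One way to make the types line up, as a hedged sketch with illustrative names: store the positions as @State inside the view, in an array whose element type matches what you append, and let ForEach place one circle per point.
struct TouchCurveBasic: View {
    private struct Position: Identifiable {
        let id: Int
        var point: CGPoint
    }

    // The view owns the points; mutating @State triggers a redraw.
    @State private var positions: [Position] = (0...10).map { i in
        Position(id: i, point: CGPoint(x: CGFloat(i) * 30, y: 150))
    }

    var body: some View {
        ZStack {
            // The connecting line through all points.
            Path { path in
                guard let first = positions.first else { return }
                path.move(to: first.point)
                for position in positions.dropFirst() {
                    path.addLine(to: position.point)
                }
            }
            .stroke(Color.red, lineWidth: 4)

            // One tappable circle per point; the types now match.
            ForEach(positions) { position in
                Circle()
                    .fill(Color.blue)
                    .frame(width: 20, height: 20)
                    .position(position.point)
            }
        }
        .frame(width: 300, height: 300)
    }
}
From here, a tap or drag gesture on each circle can mutate the matching element of positions, and the line redraws automatically.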

Render SwiftUI view offscreen and save view as UIImage to share

I'm trying to create a share button with SwiftUI that, when pressed, can share a generated image. I've found some tutorials that can screenshot a currently displayed view and convert it to a UIImage. But I want to create a view programmatically off-screen and then save that to a UIImage that users can share with a share sheet.
import SwiftUI
import SwiftyJSON
import MapKit
struct ShareRentalView: View {
    @State private var region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 32.786038, longitude: -117.237324), span: MKCoordinateSpan(latitudeDelta: 0.025, longitudeDelta: 0.025))
    @State var coordinates: [JSON] = []
    @State var origin: CGPoint? = nil
    @State var size: CGSize? = nil

    var body: some View {
        GeometryReader { geometry in
            VStack(spacing: 0) {
                ZStack {
                    HistoryMapView(region: region, pointsArray: $coordinates)
                        .frame(height: 300)
                }.frame(height: 300)
            }.onAppear {
                self.origin = geometry.frame(in: .global).origin
                self.size = geometry.size
            }
        }
    }

    func returnScreenShot() -> UIImage {
        return takeScreenshot(origin: self.origin.unsafelyUnwrapped, size: self.size.unsafelyUnwrapped)
    }
}
extension UIView {
    var renderedImage: UIImage {
        // rect of capture
        let rect = self.bounds
        // create the bitmap context
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
        let context: CGContext = UIGraphicsGetCurrentContext()!
        self.layer.render(in: context)
        // get an image from the current context's bitmap
        let capturedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return capturedImage
    }
}

extension View {
    func takeScreenshot(origin: CGPoint, size: CGSize) -> UIImage {
        let window = UIWindow(frame: CGRect(origin: origin, size: size))
        let hosting = UIHostingController(rootView: self)
        hosting.view.frame = window.frame
        window.addSubview(hosting.view)
        window.makeKeyAndVisible()
        return hosting.view.renderedImage
    }
}
This is roughly my code idea at the moment. I have a view that, onAppear, sets the CGPoint and CGSize for the screen capture, plus an attached method that can then take the screenshot of the view. The problem right now is that this view never renders, because I never add it to a parent view: I don't want this view to appear to the user. In the parent view I have
struct HistoryCell: View {
    ...
    private var shareRental: ShareRentalView? = nil
    private var uiimage: UIImage? = nil
    ...
    init() {
        ...
        self.shareRental = ShareRentalView()
    }

    var body: some View {
        ...
        Button(action: { self.uiimage = self.shareRental?.returnScreenShot() }) { ... }
        ...
    }
}
This doesn't work because the view I want to screenshot is never rendered. Is there a way to render it in memory or off-screen and then create an image from it? Or do I need to think of another way of doing this?
This ended up working to get a screenshot of a view that was not presented on the screen and save it as a UIImage:
extension UIView {
    func asImage() -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        return UIGraphicsImageRenderer(size: self.layer.frame.size, format: format).image { _ in
            self.drawHierarchy(in: self.layer.bounds, afterScreenUpdates: true)
        }
    }
}

extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let size = controller.sizeThatFits(in: UIScreen.main.bounds.size)
        controller.view.bounds = CGRect(origin: .zero, size: size)
        let image = controller.view.asImage()
        return image
    }
}
And then in my parent view
var shareRental: ShareRentalView?

init() {
    ....
    self.shareRental = ShareRentalView()
}

var body: some View {
    Button(action: {
        let shareImage = self.shareRental?.asImage()
    }) { ... }
}
This gets me almost there. The MKMapSnapshotter has a delay while loading, and the image creation happens too fast, so there is no map when the UIImage is created.
To get around the delay in map loading, I created a class that builds all the UIImages and stores them in an array.
class MyUser: ObservableObject {
    ...
    public func buildHistoryRental() {
        self.historyRentals.removeAll()
        MapSnapshot().generateSnapshot(completion: self.snapShotRsp)
    }

    private func snapShotRsp(image: UIImage) {
        self.historyRentals.append(image)
    }
}
And then I made a class to create the snapshot images, like this:
func generateSnapshot(completion: @escaping (UIImage) -> ()) {
    let mapSnapshotOptions = MKMapSnapshotter.Options()
    // Set the region of the map that is rendered (derived from the polyline).
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegion(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region
    // Set the scale of the image. We'll just use the scale of the current device, which is 2x on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale
    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)
    // Show buildings and points of interest on the snapshot.
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true
    let snapshotter = MKMapSnapshotter(options: mapSnapshotOptions)
    var image: UIImage = UIImage()
    snapshotter.start(completionHandler: { (snapshot: MKMapSnapshotter.Snapshot?, error: Error?) -> Void in
        if error != nil {
            print("\(String(describing: error))")
        } else {
            image = self.drawLineOnImage(snapshot: snapshot.unsafelyUnwrapped)
        }
        completion(image)
    })
}
func drawLineOnImage(snapshot: MKMapSnapshotter.Snapshot) -> UIImage {
    let image = snapshot.image
    // for Retina screens
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)
    // draw the original image into the context
    image.draw(at: CGPoint.zero)
    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()
    // set the stroke width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)
    // Here is the trick:
    // We use addLine() and move() to draw the line; this should be easy to understand.
    // The difficult part is that they both take CGPoint parameters, and it would be way too complex to calculate those ourselves,
    // so we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 0...yourCoordinates.count - 1 {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
        context!.move(to: snapshot.point(for: yourCoordinates[i]))
    }
    // apply the stroke to the context
    context!.strokePath()
    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    // end the graphics context
    UIGraphicsEndImageContext()
    return resultImage!
}
It's important to return the image asynchronously via the callback. Trying to return the image directly from the function call yielded a blank map.
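So a call site should consume the image inside the completion, along these lines (the MapSnapshot wrapper and historyRentals come from the code above; dispatching to the main queue is my own precaution):
MapSnapshot().generateSnapshot { image in
    // The map is only fully rendered by the time this closure runs;
    // hop to the main queue before touching published state.
    DispatchQueue.main.async {
        self.historyRentals.append(image)
    }
}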

Swiftui getting an image's displaying dimensions

I'm trying to get the dimensions of a displayed image to draw bounding boxes over the text I have recognized using Apple's Vision framework.
So I run the VNRecognizeTextRequest upon the press of a button with this function:
func readImage(image: NSImage, completionHandler: @escaping ([VNRecognizedText]?, Error?) -> (), comp: @escaping (Double?, Error?) -> ()) {
    var recognizedTexts = [VNRecognizedText]()
    var rr = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage(forProposedRect: &rr, context: nil, hints: nil)!, options: [:])
    let textRequest = VNRecognizeTextRequest { (request, error) in
        guard let observations = request.results as? [VNRecognizedTextObservation] else {
            completionHandler(nil, error)
            return
        }
        for currentObservation in observations {
            let topCandidate = currentObservation.topCandidates(1)
            if let recognizedText = topCandidate.first {
                recognizedTexts.append(recognizedText)
            }
        }
        completionHandler(recognizedTexts, nil)
    }
    textRequest.recognitionLevel = .accurate
    textRequest.recognitionLanguages = ["es"]
    textRequest.usesLanguageCorrection = true
    textRequest.progressHandler = { (request, value, error) in
        comp(value, nil)
    }
    try? requestHandler.perform([textRequest])
}
and compute the bounding box offsets using this struct and function:
struct DisplayingRect: Identifiable {
    var id = UUID()
    var width: CGFloat = 0
    var height: CGFloat = 0
    var xAxis: CGFloat = 0
    var yAxis: CGFloat = 0

    init(width: CGFloat, height: CGFloat, xAxis: CGFloat, yAxis: CGFloat) {
        self.width = width
        self.height = height
        self.xAxis = xAxis
        self.yAxis = yAxis
    }
}
func createBoundingBoxOffSet(recognizedTexts: [VNRecognizedText], image: NSImage) -> [DisplayingRect] {
    var rects = [DisplayingRect]()
    let imageSize = image.size
    let imageTransform = CGAffineTransform.identity.scaledBy(x: imageSize.width, y: imageSize.height)
    for obs in recognizedTexts {
        let observationBounds = try? obs.boundingBox(for: obs.string.startIndex..<obs.string.endIndex)
        let rectangle = observationBounds?.boundingBox.applying(imageTransform)
        print("Rectangle: \(rectangle!)")
        let width = rectangle!.width
        let height = rectangle!.height
        let xAxis = rectangle!.origin.x - imageSize.width / 2 + rectangle!.width / 2
        let yAxis = -(rectangle!.origin.y - imageSize.height / 2 + rectangle!.height / 2)
        let rect = DisplayingRect(width: width, height: height, xAxis: xAxis, yAxis: yAxis)
        rects.append(rect)
    }
    return rects
}
I place the rects using this code in the ContentView
ZStack {
    Image(nsImage: self.img!)
        .scaledToFit()
    ForEach(self.rects) { rect in
        Rectangle()
            .fill(Color(.sRGB, red: 1, green: 0, blue: 0, opacity: 0.2))
            .frame(width: rect.width, height: rect.height)
            .offset(x: rect.xAxis, y: rect.yAxis)
    }
}
If I use the original image's dimensions, I get these results:
But if I add
Image(nsImage: self.img!)
    .resizable()
    .scaledToFit()
I get these results:
Is there a way to get the dimensions of the image as it is being displayed, and pass those in, so I get the proper sizes? I also need this because sometimes I can't show the whole image and need to scale it.
Thanks a lot
I would use GeometryReader on the background, so it reads exactly the size of the image, as below:
@State var imageSize: CGSize = .zero // << or initial from NSImage
...
Image(nsImage: self.img!)
    .resizable()
    .scaledToFit()
    .background(rectReader())
// ... somewhere below
private func rectReader() -> some View {
    return GeometryReader { (geometry) -> Color in
        let imageSize = geometry.size
        DispatchQueue.main.async {
            print(">> \(imageSize)") // use the image's actual size in your calculations
            self.imageSize = imageSize
        }
        return .clear
    }
}
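With the displayed size in hand, the rects from createBoundingBoxOffSet can be scaled from bitmap coordinates to screen coordinates. A small sketch of the adjustment, assuming the image keeps its aspect ratio via scaledToFit (so one uniform factor is enough) and that image is the original NSImage:
// Scale factor from the original bitmap to what is actually on screen.
// With .scaledToFit, width and height shrink by the same ratio.
let scale = min(imageSize.width / image.size.width,
                imageSize.height / image.size.height)

let displayRects = rects.map { rect in
    DisplayingRect(width: rect.width * scale,
                   height: rect.height * scale,
                   xAxis: rect.xAxis * scale,
                   yAxis: rect.yAxis * scale)
}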
Rather than pass in the frame to every view, Apple elected to give you a separate GeometryReader view that gets its frame passed in as a parameter to its child closure.
struct Example: View {
    var body: some View {
        GeometryReader { geometry in
            Image(systemName: "check")
                .onAppear {
                    print(geometry.frame(in: .local))
                }
        }
    }
}