How to draw NSImage to CGContext - swift

I have an NSImage object that I want to draw into a CGContext.
I found that CGContext has no method that takes an NSImage directly, but there seems to be one that takes a CGImage mask, clip:
func clip(to: CGRect, mask: CGImage)
How should I do this, please?

The easiest way is to create an NSGraphicsContext wrapping your CGContext, and draw into that:
let gc: CGContext = ...
let nsImage: NSImage = ...
let rect: CGRect = ...
// Save the current NSGraphicsContext and restore it when done.
let priorNsgc = NSGraphicsContext.current
defer { NSGraphicsContext.current = priorNsgc }

// Wrap the CGContext in an NSGraphicsContext and make it current,
// so that NSImage.draw(in:) targets it.
NSGraphicsContext.current = NSGraphicsContext(cgContext: gc, flipped: false)
nsImage.draw(in: rect)
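If you would rather stay entirely in Core Graphics, a hedged alternative (a sketch, not part of the original answer, reusing gc, nsImage and rect from the snippet above) is to ask the NSImage for a CGImage and draw that with CGContext.draw(_:in:):

// Sketch only: convert the NSImage to a CGImage first, then draw it directly.
// Note that draw(_:in:) uses Core Graphics' default (unflipped) coordinates.
var proposedRect = rect
if let cgImage = nsImage.cgImage(forProposedRect: &proposedRect, context: nil, hints: nil) {
    gc.draw(cgImage, in: rect)
}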

Related

Draw graphics and export with pixel precision with CoreGraphics

I saw a few questions here on Stack Overflow, but none of them solves my problem. What I want to do is subclass NSView, draw some shapes on it, and then export/save the resulting graphics to a PNG file. The drawing itself is quite simple, but I want to store the image with pixel precision - I know that drawing is done in points rather than pixels. So I override the draw() method to draw the graphics like so:
override func draw(_ dirtyRect: NSRect) {
    super.draw(dirtyRect)
    // currentContext is assumed to be NSGraphicsContext.current?.cgContext
    NSColor.white.setFill()
    dirtyRect.fill()
    NSColor.green.setFill()
    NSColor.green.setStroke()
    currentContext?.beginPath()
    currentContext?.setLineWidth(1.0)
    currentContext?.setStrokeColor(NSColor.green.cgColor)
    currentContext?.move(to: CGPoint(x: 0, y: 0))
    currentContext?.addLine(to: CGPoint(x: self.frame.width, y: self.frame.height))
    currentContext?.closePath()
    currentContext?.strokePath() // stroke the path so the line actually appears
}
and while it looks OK on screen, the saved file is not what I expected: I set the line width to 1, but in the exported file it is 2 pixels wide. To save the image, I create an NSImage from the current view:
func getImage() -> NSImage? {
    let size = self.bounds.size
    let imageSize = NSMakeSize(size.width, size.height)
    guard let imageRepresentation = self.bitmapImageRepForCachingDisplay(in: self.bounds) else {
        return nil
    }
    imageRepresentation.size = imageSize
    self.cacheDisplay(in: self.bounds, to: imageRepresentation)
    let image = NSImage(size: imageSize)
    image.addRepresentation(imageRepresentation)
    return image
}
and this image is then saved to a file:
do {
    guard let image = self.canvasView?.getImage() else {
        return
    }
    let imageRep = image.representations.first as? NSBitmapImageRep
    let data = imageRep?.representation(using: .png, properties: [:])
    try data?.write(to: url, options: .atomic)
} catch {
    print(error.localizedDescription)
}
Do you have any tips on what I am doing wrong?
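The 2-pixel line is consistent with the view being cached at the display's 2x (Retina) backing scale. A hedged sketch of one possible fix (not from the original thread, function name is made up): draw into a bitmap rep whose pixel dimensions exactly match the view's bounds in points, so the export comes out at one pixel per point:

import AppKit

// Sketch only: render a view at exactly 1 pixel per point, regardless of screen scale.
func imageAtOnePixelPerPoint(of view: NSView) -> NSImage? {
    let size = view.bounds.size
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(size.width),
                                     pixelsHigh: Int(size.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = size // 1 point == 1 pixel

    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    view.draw(view.bounds) // assumes draw(_:) renders the whole view
    NSGraphicsContext.restoreGraphicsState()

    let image = NSImage(size: size)
    image.addRepresentation(rep)
    return image
}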

how to draw images on top of circles in UIView swift

I'm trying to draw circles, and in the middle of each circle I want to draw an image.
My circles work fine, but I'm not getting anywhere with the images.
I don't understand why I can't just draw a UIImage directly.
The code below the //draw PNGs: comment is what my question is about, but I posted the whole thing.
Thanks in advance for any help.
import UIKit
import CoreGraphics

enum Shape2 {
    case circle
    case Rectangle
    case Line
}

class Canvas: UIView {
    let viewModel = ViewModel(set: set1)
    var currentShape: Shape2?

    override func draw(_ rect: CGRect) {
        guard let currentContext = UIGraphicsGetCurrentContext() else {
            print("Could not get Context")
            return
        }
        drawIcons(user: currentContext)
    }

    private func drawIcons(user context: CGContext) {
        for i in 0...viewModel.iconsList.count-1 {
            let centerPoint = CGPoint(x: viewModel.icons_coord_x[i], y: viewModel.icons_coord_y[i])
            context.addArc(center: centerPoint, radius: CGFloat(viewModel.Diameters[i]), startAngle: CGFloat(0).degreesToRadians, endAngle: CGFloat(360).degreesToRadians, clockwise: true)
            context.setFillColor(UIColor.blue.cgColor)
            //context.setFillColor(viewModel.iconsbackground_colors[i].cgColor)
            context.fillPath()
            context.setLineWidth(4.0)

            //draw PNGs:
            let image = UIImage(named: "rocket")!
            let ciImage = image.ciImage
            let imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
            let context2 = CIContext(options: nil)
            let cgImage = context2.createCGImage(ciImage ?? <#default value#>, from: ciImage?.extent ?? <#default value#>)
            context.draw(CGImage() as! CGLayer, in: imageRect)
        }
    }

    func drawShape(selectedShape: Shape2) {
        currentShape = selectedShape
        setNeedsDisplay()
    }
}
I don't understand why I can't just draw a UIImage directly.
You can.
https://developer.apple.com/documentation/uikit/uiimage/1624132-draw
https://developer.apple.com/documentation/uikit/uiimage/1624092-draw
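A minimal sketch of what that looks like (assuming the same "rocket" asset and the centerPoint from the question's loop): UIImage.draw(in:) renders into the current UIKit graphics context, so no CIImage/CGImage round trip is needed.

// Sketch only: draw the PNG centered on the circle, inside draw(_:).
if let image = UIImage(named: "rocket") {
    let imageRect = CGRect(x: centerPoint.x - image.size.width / 2,
                           y: centerPoint.y - image.size.height / 2,
                           width: image.size.width,
                           height: image.size.height)
    image.draw(in: imageRect)
}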

Swift: UIGraphicsImageRendererFormat instead of UIGraphicsBeginImageContext returning blank image

I'm replacing UIGraphicsBeginImageContext with UIGraphicsImageRenderer to optimize performance and modernize my code. For some reason, the UIGraphicsImageRenderer version returns a blank image in my function. I must be doing something wrong!
Old function: (works fine)
func drawImageInRect(inputImage: UIImage, inRect imageRect: CGRect) -> UIImage {
    UIGraphicsBeginImageContext(self.size)
    self.draw(in: CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height))
    inputImage.draw(in: imageRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
New function: (draws nothing)
func drawImageInRect(inputImage: UIImage, inRect imageRect: CGRect) -> UIImage {
    let renderFormat = UIGraphicsImageRendererFormat.default()
    renderFormat.opaque = true
    let renderer = UIGraphicsImageRenderer(size: self.size, format: renderFormat)
    let scaledImage = renderer.image { (context) in
        inputImage.draw(in: imageRect)
    }
    return scaledImage
}
The problem is that in your updated function you forgot to draw self, so you get a black background where the original image should be.
func drawImageInRect(inputImage: UIImage, inRect imageRect: CGRect) -> UIImage {
    let renderFormat = UIGraphicsImageRendererFormat.default()
    renderFormat.opaque = true
    let renderer = UIGraphicsImageRenderer(size: self.size, format: renderFormat)
    let scaledImage = renderer.image { (context) in
        // Add the following missing line
        self.draw(in: CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height))
        inputImage.draw(in: imageRect)
    }
    return scaledImage
}
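For context, a hypothetical call site (the asset names and rect are placeholders, not from the question), assuming the function is declared in a UIImage extension:

// Hypothetical usage: composite a badge onto a background image.
let base = UIImage(named: "background")!  // placeholder asset name
let badge = UIImage(named: "badge")!      // placeholder asset name
let badgeRect = CGRect(x: 20, y: 20, width: 64, height: 64)
let combined = base.drawImageInRect(inputImage: badge, inRect: badgeRect)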

Drawing a simple line on a JPEG image

I'm stuck again with an apparently simple question.
I loaded a JPEG file into a CGImage. I got the correct values for width and height (in pixels) and was able to show "myImage" in an image view. But I wanted to add some graphics on top of this image and found that I should get it into an NSImage instead. I did, but got different (proportional) values for width and height: 595.08 instead of 1653, and 841.68 instead of 2338, respectively.
I tried to create an NSGraphicsContext from a CGContext 'gc' for drawing (a simple line and a rectangle), which resulted in "Value of optional type 'CGContext?' not unwrapped, did you mean to use '!' or '?'?"... I'm lost...
// with NSData
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
let imageWidth = myImage!.width
let imageHeight = myImage!.height
// with NSImage, now
//
let imageAsNSImage = NSImage(contentsOf: chosenFiles[0])
let imageSize = imageAsNSImage?.size // ---> 0.36 * pixels
// creating a CG context and drawing
//
let colorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()
let gc = CGContext(data: nil, width: imageWidth, height: imageHeight, bitsPerComponent: 8, bytesPerRow: 0,space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
let NSGContext = NSGraphicsContext(cgContext: gc, flipped: true)
let currentContext = NSGraphicsContext.current() // Cocoa GC object appropriate for the current drawing environment
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = NSGContext
NSGContext?.beginPath()
NSGContext?.setStrokeColor(redColor)
NSGContext?.setLineWidth(50.0)
NSGContext?.move(to: targetStart)
NSGContext?.addLine(to: targetEnd)
NSGContext?.setStrokeColor(grayColor)
NSGContext?.setFillColor(grayColor)
NSGContext?.addRect(ROIRect)
NSGContext?.closePath()
NSGContext.restoreGraphicsState()
imageAsNSImage?.draw(at: NSZeroPoint, from: NSZeroRect, operation: NSCompositeSourceOver, fraction: 1.0)
imageAsNSImage?.unlockFocus()
NSGraphicsContext.setcurrent(currentContext)
myImageView.image = imageAsNSImage // image & drawings should show in View
Drawing a simple line on a JPEG image
// load JPEG from main bundle
guard let path = Bundle.main.path(forImageResource: NSImage.Name("picture.jpg")),
      let image = NSImage(contentsOfFile: path)
else { fatalError() }

let size = image.size
image.lockFocus() // prepare image for drawing
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: .zero, to: NSPoint(x: size.width, y: size.height))
image.unlockFocus() // drawing commands done
The code above strokes a red line from the lower left corner to the top right.
If you have an NSImageView at hand you can use the image directly:
@IBOutlet weak var imageView: NSImageView!
...
imageView.image = image
Thanks to djromero, here is the solution I just reached:
// Load the JPEG image from disk into a CGImage
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
// Create a NSImage from the CGImage (with the same width and height in pixels)
//
let imageAsNSImage = NSImage(cgImage: myImage!, size: NSZeroSize)
// Drawing a simple line
//
imageAsNSImage.lockFocusFlipped(true) // Otherwise, the origin is at the lower left corner
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: targetStart, to: targetEnd)
imageAsNSImage.unlockFocus()
// Show the NSImage in the NSImageView
//
myImageView.image = imageAsNSImage
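As an aside, a minimal sketch of the same idea without lockFocus/unlockFocus, using the block-based NSImage initializer (the function name and line endpoints here are placeholders, not from the thread):

import AppKit

// Sketch only: build an annotated image with the block-based initializer.
func annotatedImage(from cgImage: CGImage, lineFrom start: NSPoint, to end: NSPoint) -> NSImage {
    let pixelSize = NSSize(width: cgImage.width, height: cgImage.height)
    return NSImage(size: pixelSize, flipped: true) { rect in
        // Draw the photo first, then stroke the overlay line on top of it.
        NSImage(cgImage: cgImage, size: pixelSize).draw(in: rect)
        NSColor.red.setStroke()
        let path = NSBezierPath()
        path.move(to: start)
        path.line(to: end)
        path.lineWidth = 5
        path.stroke()
        return true
    }
}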

How can I 'cut' a transparent hole in a UIImage?

I'm trying to cut a transparent square in a UIImage, but I honestly have no idea where or how to start.
Any help would be greatly appreciated.
Thanks!
Presume that your image is being displayed in a view - probably a UIImageView. Then we can punch a rectangular hole in that view by masking the view's layer. Every view has a layer. We will apply to this view's layer a mask which is itself a layer containing an image, which we will generate in code. The image will be black except for a clear rectangle somewhere in the middle. That clear rectangle will cause the hole in the image view.
So, let self.iv be this UIImageView. Try running this code:
CGRect r = self.iv.bounds;
CGRect r2 = CGRectMake(20,20,40,40); // adjust this as desired!
UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextAddRect(c, r2);
CGContextAddRect(c, r);
CGContextEOClip(c);
CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
CGContextFillRect(c, r);
UIImage* maskim = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CALayer* mask = [CALayer layer];
mask.frame = r;
mask.contents = (id)maskim.CGImage;
self.iv.layer.mask = mask;
For example, in the resulting image the white square is not a superimposed square; it is a hole, showing the white of the window background behind it.
EDIT: I feel obligated, since I mentioned it in a comment, to show how to do the same thing with a CAShapeLayer. The result is exactly the same:
CGRect r = self.iv.bounds;
CGRect r2 = CGRectMake(20,20,40,40); // adjust this as desired!
CAShapeLayer* lay = [CAShapeLayer layer];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, nil, r2);
CGPathAddRect(path, nil, r);
lay.path = path;
CGPathRelease(path);
lay.fillRule = kCAFillRuleEvenOdd;
self.iv.layer.mask = lay;
Here's a simple Swift function, cut(hole:inView:), to copy and paste for 2017:
func cut(hole: CGRect, inView view: UIView) {
    let path = CGMutablePath()
    path.addRect(view.bounds)
    path.addRect(hole)

    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path
    shapeLayer.fillRule = .evenOdd
    view.layer.mask = shapeLayer
}
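A hypothetical call site (the view, colors, and rects are placeholders):

// Hypothetical usage: dim a screen-sized overlay except for a 120x120 cut-out.
let overlay = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
overlay.backgroundColor = UIColor.black.withAlphaComponent(0.6)
cut(hole: CGRect(x: 100, y: 180, width: 120, height: 120), inView: overlay)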
Just needed the version from @Fattie, thanks again! Here is the updated code for Swift 5.1:
private func cut(holeRect: CGRect, inView view: UIView) {
    let combinedPath = CGMutablePath()
    combinedPath.addRect(view.bounds)
    combinedPath.addRect(holeRect)

    let maskShape = CAShapeLayer()
    maskShape.path = combinedPath
    maskShape.fillRule = .evenOdd
    view.layer.mask = maskShape
}
If you want the cutout to have rounded corners, you can replace combinedPath.addRect(holeRect) with combinedPath.addRoundedRect(in: holeRect, cornerWidth: 8, cornerHeight: 8).
Here's the updated code to cut a hole in a UIImage (instead of a UIView) using Swift:
func cut(hole: CGRect, inView image: UIImage) -> UIImage? {
    UIGraphicsBeginImageContext(image.size)
    image.draw(at: CGPoint.zero)

    let context = UIGraphicsGetCurrentContext()!
    let bez = UIBezierPath(rect: hole)
    context.addPath(bez.cgPath)
    context.clip()
    context.clear(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))

    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
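A hedged alternative sketch using UIGraphicsImageRenderer (the function name is made up, not from the answer), which preserves the image's scale and keeps the output non-opaque so the hole is truly transparent:

import UIKit

// Sketch only: clear a rect out of a UIImage with UIGraphicsImageRenderer.
func cuttingHole(_ hole: CGRect, in image: UIImage) -> UIImage {
    let format = UIGraphicsImageRendererFormat.default()
    format.opaque = false      // keep an alpha channel so the hole is transparent
    format.scale = image.scale // match the source image's scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { ctx in
        image.draw(at: .zero)
        ctx.cgContext.clear(hole) // erase the hole rect to transparent pixels
    }
}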