I know I can fill a rect using NSRectFill(bounds). However, I wanted to preserve transparency for PDF output, and I discovered that I can only do that with NSBezierPath(rect: bounds).fill().
What is the difference (behind the scenes) between those two?
func drawBackground() {
    // NSGraphicsContext's underlying CGContext; the original snippet referenced
    // an undefined `currentContext`, so it is derived here
    let currentContext = NSGraphicsContext.currentContext()!.CGContext
    CGContextSaveGState(currentContext)
    // Only draw the checkerboard when rendering to screen, not to PDF
    if NSGraphicsContext.currentContextDrawingToScreen() {
        NSColor(patternImage: checkerboardImage).set()
        NSRectFillUsingOperation(bounds, NSCompositingOperation.CompositeSourceOver)
    }
    NSColor.clearColor().setFill()
    //NSRectFill(bounds) // option 1
    NSBezierPath(rect: bounds).fill() // option 2
    CGContextRestoreGState(currentContext)
}
extension NSImage {
    static func checkerboardImageWithSize(size: CGFloat) -> NSImage {
        let fullRect = NSRect(x: 0, y: 0, width: size, height: size)
        let halfSize: CGFloat = size * 0.5
        let upperSquareRect = NSRect(x: 0, y: 0, width: halfSize, height: halfSize)
        let bottomSquareRect = NSRect(x: halfSize, y: halfSize, width: halfSize, height: halfSize)
        let image = NSImage(size: NSSize(width: size, height: size))
        image.lockFocus()
        // Note: the original created the white color without setting it;
        // setFill() is required for NSRectFill to actually use it
        NSColor.whiteColor().setFill()
        NSRectFill(fullRect)
        NSColor(deviceWhite: 0.0, alpha: 0.1).set()
        NSRectFill(upperSquareRect)
        NSRectFill(bottomSquareRect)
        image.unlockFocus()
        return image
    }
}
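For context, the checkerboardImage used in drawBackground() would come from this helper, along these lines (the 16-point size here is an assumption, not from the original):
    let checkerboardImage = NSImage.checkerboardImageWithSize(16)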
I'm mostly an iOS programmer and not very fluent on the AppKit side of things these days, but my guess is that you're getting the wrong NSCompositingOperation. The docs say NSRectFill uses NSCompositeCopy, which replaces the destination (alpha included) rather than compositing over it. Perhaps it would work better if you used NSRectFillUsingOperation, which lets you specify the compositing operation.
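For example, something like this should reproduce what the NSBezierPath fill does, since path fills composite with source-over rather than copy (a minimal sketch in the question's Swift 2 style, untested):
    NSColor.clearColor().setFill()
    // Composite over the existing content instead of replacing it outright
    NSRectFillUsingOperation(bounds, NSCompositingOperation.CompositeSourceOver)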
How do I resize an image from UIImagePickerController? If the length, the width, or both exceed 1500, both dimensions must be reduced to 65%. For example, if the photo is 2000x3000, both sides should be multiplied by 0.65, giving 1300x1950.
And if neither the length nor the width exceeds 1500, the photo should remain unchanged.
Here is code to resize based on a maximum size and a scale factor.
extension UIImage {
    func resizeImage(maxSize: CGFloat, scale: CGFloat) -> UIImage {
        let size = self.size
        if size.width > maxSize || size.height > maxSize {
            let newSize = CGSize(width: size.width * scale, height: size.height * scale)
            let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
            // Scale of 1.0 renders the bitmap in points rather than device pixels
            UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
            self.draw(in: rect)
            let newImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return newImage ?? self
        }
        return self
    }
}
let resizedImage = yourImage.resizeImage(maxSize: 1500, scale: 0.65)
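If you can target iOS 10 or later, UIGraphicsImageRenderer manages the context setup and teardown for you. A minimal sketch of the same logic (the method name here is mine, not from the original):
extension UIImage {
    func resizedImage(maxSize: CGFloat, scale: CGFloat) -> UIImage {
        // Unchanged if neither dimension exceeds the threshold
        guard size.width > maxSize || size.height > maxSize else { return self }
        let newSize = CGSize(width: size.width * scale, height: size.height * scale)
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1.0 // match the 1.0 scale used in the answer above
        return UIGraphicsImageRenderer(size: newSize, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}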
How can I draw a custom oval shape like the one in the image below in Swift (not SwiftUI)?
Thank you in advance.
I tried to clone a similar view using UIView and was able to create a UI like the one in your screenshot.
Here is my code snippet:
let width: CGFloat = view.frame.size.width
let height: CGFloat = view.frame.size.height
// Outer path covering the whole view
let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: width, height: height), cornerRadius: 0)
// Oval cut-out, centered horizontally
let rect = CGRect(x: width / 2 - 150, y: height / 2.5 - 100, width: 300, height: 400)
let circlePath = UIBezierPath(ovalIn: rect)
path.append(circlePath)
// The even-odd rule fills the outer rect but leaves the oval clear
path.usesEvenOddFillRule = true
let fillLayer = CAShapeLayer()
fillLayer.path = path.cgPath
fillLayer.fillRule = .evenOdd
fillLayer.fillColor = UIColor.red.cgColor
fillLayer.opacity = 0.5
view.layer.addSublayer(fillLayer)
Hope this helps you. Good day.
Alternatively, you can draw an oval path in a custom view like this:
class CustomOval: UIView {
    override func draw(_ rect: CGRect) {
        let ovalPath = UIBezierPath(ovalIn: rect)
        UIColor.gray.setFill()
        ovalPath.fill()
    }
}
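Usage might look like this (a sketch; the frame values are assumptions to roughly match the screenshot):
let oval = CustomOval(frame: CGRect(x: 40, y: 120, width: 300, height: 400))
oval.backgroundColor = .clear // only the oval fill should show
view.addSubview(oval)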
I'm building an augmented reality app and would like to show preview images of the available DAE files.
Is it possible to use Swift to extract image data from a DAE file and load it into a UIImage?
Out of the box there is no function you can call that will pull a screenshot from a .dae file. That's not to say there isn't a way to do this on device.
To understand what you are asking, let's consider the different components.
The .dae file is loaded into an ARSCNView, which renders the scene. The ARSCNView is traditionally added as a subview of your view controller. In this case you need the viewer to render the 3D file offscreen, so create an ARSCNView but do not add it as a subview of the view controller (I would add it during the initial setup to make sure everything loads, then comment that line out).
Once your 3D content is loaded into the ARSCNView, use the function below to capture the view's contents into an image:
let viewSnapshot = takeSnapshotOfView(view: yourARSCNView)
Here is the function, which captures the view and resizes the result:
import UIKit
import CoreGraphics

func takeSnapshotOfView(view: UIView) -> UIImage? {
    UIGraphicsBeginImageContext(CGSize(width: view.frame.size.width, height: view.frame.size.height))
    view.drawHierarchy(in: CGRect(x: 0.0, y: 0.0, width: view.frame.size.width, height: view.frame.size.height), afterScreenUpdates: true)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    guard let snapshot = image else { return nil }
    // Resize to a 300-point-wide thumbnail, keeping the aspect ratio
    // (the original computed the height with the width/height terms swapped)
    return snapshot.resize(CGRect(x: 0, y: 0, width: 300, height: 300 * snapshot.size.height / snapshot.size.width))
}
extension UIImage {
    // Aspect-fit resize: scales by whichever ratio is smaller so the
    // result fits entirely within toSize
    func resize(_ toSize: CGRect) -> UIImage {
        let size = self.size
        let widthRatio = toSize.width / self.size.width
        let heightRatio = toSize.height / self.size.height
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage ?? self
    }
}
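As an aside, SCNView (which ARSCNView subclasses) also has a built-in snapshot() method that renders the scene directly, which may be simpler than going through drawHierarchy. A sketch, where yourARSCNView is a placeholder for your own view:
let sceneImage = yourARSCNView.snapshot() // renders the current scene into a UIImage
let preview = sceneImage.resize(CGRect(x: 0, y: 0, width: 300, height: 300))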
I have a transformed UIImage inside a UIView (with clipsToBounds = true) and I want to recreate (redraw) it exactly using a CGContext. It's hard to explain, so here is the code:
class ViewController: UIViewController {
    @IBOutlet weak var topView: UIView!
    @IBOutlet weak var topImageView: UIImageView!
    @IBOutlet weak var bottomImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // The example image
        let image = UIImage(named: "image")!
        // topImageView is embedded inside topView so that the image is clipped
        topView.clipsToBounds = true
        topImageView.image = image
        topImageView.contentMode = .center // set to center because it seems easier to draw that way
        // The transformation data
        let size = CGSize(width: 800, height: 200)
        let scale: CGFloat = 1.5
        let offset: (x: CGFloat, y: CGFloat) = (x: 0.0, y: 0.0)
        // Transform the image view
        topImageView.transform = CGAffineTransform(translationX: offset.x, y: offset.y).scaledBy(x: scale, y: scale)
        // Now try to recreate the transform by using a transformed CGContext
        UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
        let context = UIGraphicsGetCurrentContext()!
        // Transform the context. Almost the same as above, but additionally move the
        // context so that the middle of the image lands in the middle of `size`
        context.concatenate(CGAffineTransform(translationX: (-image.size.width / 2 + size.width / 2) + offset.x, y: (-image.size.height / 2 + size.height / 2) + offset.y).scaledBy(x: scale, y: scale))
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        bottomImageView.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}
This works only when scale is set to 1.0 (i.e. no scaling). Otherwise I don't get the correct image. Here's a screenshot so you can see it:
It's not the same. I can't figure out how to correctly transform the context; I don't entirely understand it. Any ideas?
Thanks!
It's a mystery. I literally spent hours trying to get this right, and then solved it 5 minutes after posting this question. It's actually not that hard if you draw it out (I used Photoshop to visualize the coordinate system and played around a bit. Should have done that earlier...)
UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
// Scale first, then translate in the scaled coordinate space (hence the divisions by scale)
context.concatenate(CGAffineTransform.identity.scaledBy(x: scale, y: scale).translatedBy(x: size.width / (2 * scale), y: size.height / (2 * scale)).translatedBy(x: offset.x / scale, y: offset.y / scale))
// Draw the image centered on the (now transformed) origin
image.draw(in: CGRect(x: -image.size.width / 2, y: -image.size.height / 2, width: image.size.width, height: image.size.height))
bottomImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
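For anyone trying to follow the math, the single concatenate above is equivalent to these separate steps (same transform, just spelled out; an untested sketch):
var t = CGAffineTransform.identity
// 1. Scale about the origin
t = t.scaledBy(x: scale, y: scale)
// 2. Move the origin to the center of the output size; translatedBy operates in the
//    already-scaled space, hence the division by scale
t = t.translatedBy(x: size.width / (2 * scale), y: size.height / (2 * scale))
// 3. Apply the view's offset, again compensating for the scale
t = t.translatedBy(x: offset.x / scale, y: offset.y / scale)
context.concatenate(t)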
I'm trying to place an icon (in the form of an image) next to the text in a UILabel. The icons are imported into the asset catalog in all three sizes and are not blurry at all when I simply place them in a normal UIImageView.
However, within the NSTextAttachment they suddenly become extremely blurry and are too big as well.
I already tried several things on my own and also tried nearly every snippet I could find online; nothing helps. This is what I'm left with:
func updateWinnableCoins(coins: Int) {
    let attachImg = NSTextAttachment()
    attachImg.image = resizeImage(image: #imageLiteral(resourceName: "geld"), targetSize: CGSize(width: 17.0, height: 17.0))
    attachImg.setImageHeight(height: 17.0)
    let imageOffsetY: CGFloat = -3.0
    attachImg.bounds = CGRect(x: 0, y: imageOffsetY, width: attachImg.image!.size.width, height: attachImg.image!.size.height)
    let attchStr = NSAttributedString(attachment: attachImg)
    let completeText = NSMutableAttributedString(string: "")
    let tempText = NSMutableAttributedString(string: "You can win " + String(coins) + " ")
    completeText.append(tempText)
    completeText.append(attchStr)
    self.lblWinnableCoins.textAlignment = .left
    self.lblWinnableCoins.attributedText = completeText
}
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let newRect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height).integral
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // Set the quality level to use when rescaling
    context!.interpolationQuality = CGInterpolationQuality.default
    // Core Graphics uses a flipped coordinate system, so flip vertically
    let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: targetSize.height)
    context!.concatenate(flipVertical)
    // Draw into the context; this scales the image
    context?.draw(image.cgImage!, in: CGRect(x: 0.0, y: 0.0, width: newRect.width, height: newRect.height))
    // Get the resized image from the context as a UIImage
    let newImageRef = context!.makeImage()!
    let newImage = UIImage(cgImage: newImageRef)
    UIGraphicsEndImageContext()
    return newImage
}
extension NSTextAttachment {
    func setImageHeight(height: CGFloat) {
        guard let image = image else { return }
        // Preserve the image's aspect ratio at the requested height
        let ratio = image.size.width / image.size.height
        bounds = CGRect(x: bounds.origin.x, y: bounds.origin.y, width: ratio * height, height: height)
    }
}
And this is how it looks:
The font size of the UILabel is 17, so I sized the text attachment at 17 as well. When I set it to 9 it fits, but it's still very blurry.
What can I do about that?
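One direction worth trying (a sketch, not a confirmed fix): skip the bitmap resize entirely and let the attachment's bounds do the scaling, so the full-resolution asset is drawn down at render time instead of a pre-shrunk bitmap:
let attachImg = NSTextAttachment()
attachImg.image = #imageLiteral(resourceName: "geld") // full-resolution asset, no resizeImage call
// bounds scales the attachment at draw time, preserving the @2x/@3x detail
attachImg.bounds = CGRect(x: 0, y: -3.0, width: 17.0, height: 17.0)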