Transparency when drawing in CGContext does not work (Swift)

Hello, I'm developing a drawing application. I'm trying to draw multiple lines with transparency in a context (one line over another), but the transparency doesn't behave the way I expected when drawing one image over another. I made this test with 2 squares:
let imageView = UIImageView()
imageView.frame = CGRect(x: 0, y: 0, width: 400, height: 200)
self.addSubview(imageView)
/* CREATE A SQUARE UIImage */
let color:UIColor = UIColor(red:1.00, green:0.40, blue:0.21, alpha:1.0)
let bounds:CGRect = CGRect(x: 0, y: 0, width: 200, height: 200)
UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
var ctx = UIGraphicsGetCurrentContext()
ctx?.setFillColor(color.cgColor)
ctx?.fill(bounds)
let rectangleUIImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
/* DRAW SQUARE WITH ALPHA 1 */
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0.0)
ctx = UIGraphicsGetCurrentContext()
ctx?.setAlpha(1.0)
ctx?.draw(rectangleUIImage.cgImage!, in: CGRect(x: 0, y: 0, width: 200, height: 200))
/* DRAW 10 SQUARES WITH ALPHA 0.1 IN THE SAME PLACE */
ctx?.setAlpha(0.1)
for _ in 0..<10 {
    ctx?.draw(rectangleUIImage.cgImage!, in: CGRect(x: 200, y: 0, width: 200, height: 200))
}
imageView.image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
I would expect the above code to result in this: two boxes with visually the same transparency.
But no matter how many times I draw a transparent square over another, I always get this instead :( :
Two boxes, one without transparency and one with transparency
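For context, this matches how Core Graphics composites with the normal (source-over) blend mode: each pass at alpha 0.1 only covers 10% of whatever is still showing underneath, so repeated passes approach full opacity but never reach it. A minimal sketch of the arithmetic (plain Swift, not part of the original post):
import Foundation

// Source-over: each pass at alpha a covers fraction a of what is still uncovered,
// so n passes accumulate to 1 - (1 - a)^n, not n * a.
let passAlpha = 0.1
var accumulated = 0.0
for _ in 0..<10 {
    accumulated += passAlpha * (1.0 - accumulated)
}
print(accumulated)                          // ≈ 0.651
print(1.0 - pow(1.0 - passAlpha, 10.0))     // same value in closed form
After 10 passes the covered fraction is only about 0.65, which is why the right-hand square still looks translucent next to the fully opaque one.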

Related

Erasing pixels from image and then reverting them back

I'm creating an eraser app where I have a picture and the user can erase the background with the following func:
// Note: AVMakeRect comes from AVFoundation; `proxy` is an existing property of the surrounding type.
func eraseImage(image: UIImage, line: [CGPoint], brushSize: CGFloat) -> UIImage? {
    guard let first = line.first else { return nil }
    UIGraphicsBeginImageContextWithOptions(proxy.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    let context = UIGraphicsGetCurrentContext()
    let rect = AVMakeRect(aspectRatio: image.size, insideRect: CGRect(x: 0, y: 0, width: proxy.size.width, height: proxy.size.height))
    //lassoImageView.image?.draw(in: calculateRectOfImageInImageView(imageView: lassoImageView))
    image.draw(in: rect, blendMode: .normal, alpha: 1)
    context?.move(to: CGPoint(x: first.x - 50, y: first.y - 50))
    for pointIndex in 1..<line.count {
        context?.addLine(to: CGPoint(x: line[pointIndex].x - 50, y: line[pointIndex].y - 50))
    }
    // Stroke the path with the .clear blend mode so it punches transparency into the image.
    context?.setBlendMode(.clear)
    context?.setLineCap(.round)
    context?.setLineWidth(brushSize)
    context?.setShadow(offset: CGSize(width: 0, height: 0), blur: 8)
    context?.strokePath()
    return UIGraphicsGetImageFromCurrentImageContext()
}
I'm struggling to figure out how I can add a drawing function where the user can correct an eraser mistake and draw back some parts.
I'm thinking of drawing the original image over the edited image with a mask built from the path that tracks the CGPoints of the user's touches. Is that possible?
I would suggest setting up your image with a second CALayer installed as a mask. Install an image view into the mask layer's contents, and draw with clear/opaque colors into the mask to mask/expose pixels from the image layer.
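To make that concrete, here is a minimal sketch of that idea (the class and names such as EraseRestoreView and stroke(along:brushSize:erase:) are hypothetical, not from the answer): the photo lives in one layer, a second layer acts as its mask, and erasing or restoring only repaints the mask with clear or opaque strokes, so the original pixels are never lost.
import UIKit

final class EraseRestoreView: UIView {
    private let imageLayer = CALayer()
    private let maskContentLayer = CALayer()
    private var maskImage: UIImage?

    func configure(with image: UIImage) {
        imageLayer.frame = bounds
        imageLayer.contents = image.cgImage
        layer.addSublayer(imageLayer)
        // Start with a fully opaque mask: the whole photo is visible.
        maskImage = UIGraphicsImageRenderer(bounds: bounds).image { ctx in
            UIColor.black.setFill()
            ctx.fill(bounds)
        }
        maskContentLayer.frame = bounds
        maskContentLayer.contents = maskImage?.cgImage
        imageLayer.mask = maskContentLayer
    }

    // erase == true hides pixels (clear strokes); erase == false restores them (opaque strokes).
    func stroke(along points: [CGPoint], brushSize: CGFloat, erase: Bool) {
        guard let current = maskImage, let first = points.first else { return }
        maskImage = UIGraphicsImageRenderer(bounds: bounds).image { ctx in
            current.draw(in: bounds)
            let cg = ctx.cgContext
            cg.setLineCap(.round)
            cg.setLineWidth(brushSize)
            cg.setBlendMode(erase ? .clear : .normal)
            cg.setStrokeColor(UIColor.black.cgColor)
            cg.move(to: first)
            points.dropFirst().forEach { cg.addLine(to: $0) }
            cg.strokePath()
        }
        maskContentLayer.contents = maskImage?.cgImage
    }
}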

Swift: NSStackView bug, only one subview is added

Hello, in the code below I want to add NSImageViews to my stack view, but there is a bug: only one gets added. The loop runs 3 iterations, so normally I should have 3 images:
let imageView = NSImageView(frame: NSRect(x: 0, y: 0, width: 50, height: 50))
imageView.image = image.image
icons.forEach { _ in
    stackImage.addArrangedSubview(imageView)
}
print(stackImage.subviews.count) // Output: 1
Create the NSImageView instances inside the for loop: a view can only have one superview, so adding the same instance repeatedly just moves it rather than adding a new one.
Also check stackImage.arrangedSubviews.count rather than stackImage.subviews.count:
let icons = [NSImage(named: ""), NSImage(named: ""), NSImage(named: "")]
icons.forEach { image in
    let imageView = NSImageView(frame: NSRect(x: 0, y: 0, width: 50, height: 50))
    imageView.image = image
    stackImage.addArrangedSubview(imageView)
}
print(stackImage.arrangedSubviews.count)

Swift UIView "Empty Image"

Could someone explain the difference between these two pictures to me, please?
Code with preview of the UIView:
Code without the preview of the UIView:
What's the difference? And why can't I get a preview in the second code example? It only shows "empty image"...
Thanks for helping me.
This seems to be a "feature" (bug) of a Swift playground. If you don't create the view instance using a non-zero frame width and height, you will get "empty image".
This works:
let rect = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
rect.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
rect.backgroundColor = .green
But this doesn't:
let rect = UIView(frame: .zero)
// also bad: let rect = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 0))
rect.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
rect.backgroundColor = .green
And remember that:
let rect = UIView()
is essentially the same as doing:
let rect = UIView(frame: .zero)
So when using a playground, create a view with a non-zero frame width and height in the initializer if you don't want to see "empty image".
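If you prefer an explicit preview rather than the result sidebar, a playground can also display the view as a live view. A minimal sketch, assuming the PlaygroundSupport module is available:
import UIKit
import PlaygroundSupport

// A non-zero initial frame avoids the "empty image" quick look described above.
let rect = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
rect.backgroundColor = .green
PlaygroundPage.current.liveView = rect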

How to apply an inverse text mask in Swift?

I am trying to programmatically create a layer with transparent text. Everything I try doesn't seem to work. My end goal is to create an inner shadow on text.
Instead of a circle as in the code below I want text.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
view.backgroundColor = .white
// MASK
let blackSquare = UIView(frame: CGRect(x: 100, y: 100, width: 200, height: 200))
blackSquare.backgroundColor = .black
view.addSubview(blackSquare)
let maskLayer = CAShapeLayer()
// Create a path with the rectangle in it.
let path = CGMutablePath()
path.addArc(center: CGPoint(x: 100, y: 100), radius: 50, startAngle: 0.0, endAngle: 2.0 * .pi, clockwise: false)
path.addRect(CGRect(x: 0, y: 0, width: blackSquare.frame.width, height: blackSquare.frame.height))
maskLayer.backgroundColor = UIColor.black.cgColor
maskLayer.path = path
maskLayer.fillRule = .evenOdd
blackSquare.layer.mask = maskLayer
maskLayer.masksToBounds = false
maskLayer.shadowRadius = 4
maskLayer.shadowOpacity = 0.5
maskLayer.shadowOffset = CGSize(width: 5, height: 5)
I'm also able to use text as a mask but I'm unable to invert the mask. Any help is appreciated.
EDIT:
I figured it out based on Rob's answer as suggested by Josh. Here's my playground code.
import Foundation
import UIKit
import PlaygroundSupport
// view
let view = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
view.backgroundColor = .black
// button
let button = UIButton(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
button.setTitle("120", for: .normal)
button.setTitleColor(.white, for: .normal)
button.titleLabel?.font = UIFont(name: "AvenirNextCondensed-UltraLight", size: 200)
view.addSubview(button)
addInnerShadow(button: button)
func addInnerShadow(button: UIButton) {
    // text
    let text = button.titleLabel!.text!
    // get context
    UIGraphicsBeginImageContextWithOptions(button.bounds.size, true, 0)
    let context = UIGraphicsGetCurrentContext()
    context?.scaleBy(x: 1, y: -1)
    context?.translateBy(x: 0, y: -button.bounds.size.height)
    let font = button.titleLabel!.font!
    // draw the text
    let attributes = [
        NSAttributedString.Key.font: font,
        NSAttributedString.Key.foregroundColor: UIColor.white
    ]
    let size = text.size(withAttributes: attributes)
    let point = CGPoint(x: (button.bounds.size.width - size.width) / 2.0, y: (button.bounds.size.height - size.height) / 2.0)
    text.draw(at: point, withAttributes: attributes)
    // capture the image and end context
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // create image mask
    let cgimage = image?.cgImage!
    let bytesPerRow = cgimage?.bytesPerRow
    let dataProvider = cgimage?.dataProvider!
    let bitsPerPixel = cgimage?.bitsPerPixel
    let width = cgimage?.width
    let height = cgimage?.height
    let bitsPerComponent = cgimage?.bitsPerComponent
    let mask = CGImage(maskWidth: width!, height: height!, bitsPerComponent: bitsPerComponent!, bitsPerPixel: bitsPerPixel!, bytesPerRow: bytesPerRow!, provider: dataProvider!, decode: nil, shouldInterpolate: false)
    // create background
    UIGraphicsBeginImageContextWithOptions(button.bounds.size, false, 0)
    UIGraphicsGetCurrentContext()!.clip(to: button.bounds, mask: mask!)
    view.backgroundColor!.setFill()
    UIBezierPath(rect: button.bounds).fill()
    let background = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let backgroundView = UIImageView(image: background)
    // add shadows
    backgroundView.layer.shadowOffset = CGSize(width: 2, height: 2)
    backgroundView.layer.shadowRadius = 2
    backgroundView.layer.shadowOpacity = 0.75
    button.addSubview(backgroundView)
}
PlaygroundPage.current.liveView = view
Whilst not exactly the same, please refer to the answer provided here by Rob who answered a similar question:
How do I style a button to have transparent text?
This should get you started at the very least...
Stumbled on a possible solution that I've updated to proper syntax:
func mask(withRect rect: CGRect, inverse: Bool = false) {
    let path = UIBezierPath(rect: rect)
    let maskLayer = CAShapeLayer()
    if inverse {
        path.append(UIBezierPath(rect: self.view.bounds))
        maskLayer.fillRule = .evenOdd
    }
    maskLayer.path = path.cgPath
    self.view.layer.mask = maskLayer
}
You'll obviously need to pick parts out to see if it works for you.
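As a usage sketch (assuming the method lives in a view controller and the rect values are arbitrary), passing inverse: true keeps everything except the rect visible thanks to the even-odd rule:
// Hypothetical call site: cut a transparent 200x200 window out of self.view.
mask(withRect: CGRect(x: 100, y: 100, width: 200, height: 200), inverse: true)

// Without inverse, only the rect itself stays visible.
mask(withRect: CGRect(x: 100, y: 100, width: 200, height: 200))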

How to make a UIImage be a blur effect view?

OK, I'm working in Swift here, and there are a lot of answers like this one: How to use UIVisualEffectView? They talk about how to apply a UIVisualEffectView OVER an image so that it blurs it like a background.
My problem is that I need my image, or rather the outline of my image, to BE the blur view, meaning I create a blur UIVisualEffectView in the shape of my image so the "color" of the image itself is the blur. An example mockup (pretend that is a blur):
I know you can trace a UIImage into a custom color like this:
func overlayImage(color: UIColor, img: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(img.size, false, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()
    color.setFill()
    context!.translateBy(x: 0, y: img.size.height)
    context!.scaleBy(x: 1.0, y: -1.0)
    context!.setBlendMode(CGBlendMode.colorBurn)
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    context!.draw(img.cgImage!, in: rect)
    context!.setBlendMode(CGBlendMode.sourceIn)
    context!.addRect(rect)
    context!.drawPath(using: CGPathDrawingMode.fill)
    let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return coloredImage!
}
But I can't get my UIImageView to "mask" the blur view and achieve the effect. Right now, with this attempt:
var img = UIImageView(image: UIImage(named: "dudeIco"))
img.frame = CGRect(x: 0, y: 0, width: self.bounds.width * 0.7, height: self.bounds.width * 0.7)
img.center = CGPoint(x: self.bounds.width/2, y: self.bounds.height/2)
self.addSubview(img)
let blur = UIVisualEffectView(effect: UIBlurEffect(style: .light))
blur.frame = img.bounds
blur.isUserInteractionEnabled = false
img.insertSubview(blur, at: 0)
I just get a blurred square. I need the shape of the image. How can I do this? Is this impossible?
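One possible direction (a hedged sketch, not a confirmed answer): a UIVisualEffectView can be masked with a plain UIView via its mask property (rather than a layer mask), so an image view whose image has real transparency around its outline can shape the blur. "dudeIco" is reused from the question and assumed to have an alpha channel:
import UIKit

// Sketch: let the image's alpha channel decide which parts of the blur stay visible.
let shape = UIImage(named: "dudeIco")

let blur = UIVisualEffectView(effect: UIBlurEffect(style: .light))
blur.frame = CGRect(x: 0, y: 0, width: 200, height: 200)

// The mask view must not be added to the view hierarchy; its alpha defines the blur's shape,
// so this only works if the image is transparent outside its outline.
let maskView = UIImageView(image: shape)
maskView.frame = blur.bounds
maskView.contentMode = .scaleAspectFit
blur.mask = maskView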