How to add a circular mask on the camera in Swift (iPhone)

I am a Swift newbie, but I want to learn how to add a circular mask to the camera in Swift.
I am using Xcode 7.2.1 and Swift 2.1, and I have no idea how to do this.
I am developing this camera application using https://github.com/imaginary-cloud/CameraManager as a third-party library.
I want to add my own circular mask on top of the camera preview.
Any help would be greatly appreciated.
Thanks

UIImagePickerController has a cameraOverlayView property which you can use to provide a custom view that acts as an overlay. The example below creates an overlay image with a square cut-out; experiment with it to get the circular mask you want (a circular variant is sketched after the code).
@IBAction func takePhoto(sender: AnyObject) {
    if !UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.Camera) {
        return
    }

    let imagePicker = UIImagePickerController()
    imagePicker.delegate = self
    imagePicker.sourceType = UIImagePickerControllerSourceType.Camera

    // Create the camera overlay: a black image with a clear square punched out of the middle.
    let pickerFrame = CGRectMake(0, UIApplication.sharedApplication().statusBarFrame.size.height, imagePicker.view.bounds.width, imagePicker.view.bounds.height - imagePicker.navigationBar.bounds.size.height - imagePicker.toolbar.bounds.size.height)
    let squareFrame = CGRectMake(pickerFrame.width/2 - 200/2, pickerFrame.height/2 - 200/2, 200, 200)

    UIGraphicsBeginImageContext(pickerFrame.size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSaveGState(context)

    // Add the full bounds and the square to the path, then clip with the even-odd rule,
    // so that everything *except* the square gets filled.
    CGContextAddRect(context, CGContextGetClipBoundingBox(context))
    CGContextMoveToPoint(context, squareFrame.origin.x, squareFrame.origin.y)
    CGContextAddLineToPoint(context, squareFrame.origin.x + squareFrame.width, squareFrame.origin.y)
    CGContextAddLineToPoint(context, squareFrame.origin.x + squareFrame.width, squareFrame.origin.y + squareFrame.size.height)
    CGContextAddLineToPoint(context, squareFrame.origin.x, squareFrame.origin.y + squareFrame.size.height)
    CGContextAddLineToPoint(context, squareFrame.origin.x, squareFrame.origin.y)
    CGContextEOClip(context)
    CGContextMoveToPoint(context, pickerFrame.origin.x, pickerFrame.origin.y)

    // Fill the clipped area with black.
    CGContextSetRGBFillColor(context, 0, 0, 0, 1)
    CGContextFillRect(context, pickerFrame)
    CGContextRestoreGState(context)

    let overlayImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let overlayView = UIImageView(frame: pickerFrame)
    overlayView.image = overlayImage
    imagePicker.cameraOverlayView = overlayView

    self.presentViewController(imagePicker, animated: true, completion: nil)
}
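Since the question asks for a circular mask, here is a minimal Swift 2 sketch of the same technique with an ellipse punched out instead of the square (circularOverlayImage is just an illustrative helper name, not part of CameraManager or UIKit):
func circularOverlayImage(size: CGSize, holeDiameter: CGFloat) -> UIImage {
    // Build an overlay image that is black everywhere except a clear circle in the middle.
    let holeRect = CGRectMake((size.width - holeDiameter) / 2,
                              (size.height - holeDiameter) / 2,
                              holeDiameter, holeDiameter)
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextAddRect(context, CGRectMake(0, 0, size.width, size.height))
    CGContextAddEllipseInRect(context, holeRect)   // the circular hole
    CGContextEOClip(context)                       // even-odd rule: clip to everything except the circle
    CGContextSetRGBFillColor(context, 0, 0, 0, 1)
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height))
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
You would then set overlayView.image = circularOverlayImage(pickerFrame.size, holeDiameter: 200) instead of building the square overlay above.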

Related

Swift ImageCropper returns an image outside of the specified "window"

I am making an app that has a UIWebView along with a button on a single view controller. When the button is tapped, an image of the UIWebView is captured using a UIGraphics image context.
That part works great. After capturing the image, it is displayed as a subview on the same view, and I am using an image-cropper library that draws a resizable CGRect in another subview over the UIImageView, together with a submit button. The rectangle can be resized by dragging its corners/edges and moved around the view.
When the submit button is tapped, another subview is displayed in the top-left portion of the screen showing the cropped image. The idea is to capture only what is inside the rectangle. The code runs, but the captured image is the full original image rather than the section inside the CGRect.
I have three screenshots that show how it works and how the image ends up cropped incorrectly (Shot 1, Shot 2, Shot 3). I believe the problem is that the size of the captured image and the size of the image shown with the crop rect are not equal, which is why the result is distorted.
Does anyone know what might be the cause? Sorry for the long-winded question, but any help would be greatly appreciated!
Here is my code below:
ViewController.swift:
class ViewController: UIViewController {

    @IBOutlet var webView: UIWebView!
    @IBOutlet var imageView: UIImageView!

    override func viewDidLoad() {
        imageView.isHidden = true
        let aString = URL(string: "https://www.kshuntfishcamp.com/home.page")
        webView.loadRequest(URLRequest(url: aString!))
        super.viewDidLoad()
    }

    @IBAction func takePhotoPressed(_ sender: UIButton) {
        UIGraphicsBeginImageContextWithOptions(webView.bounds.size, false, 0.0)
        if let aContext = UIGraphicsGetCurrentContext() {
            webView.layer.render(in: aContext)
        }
        let capturedImage: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
        imageView = UIImageView(frame: CGRect(x: 22, y: 123, width: 330, height: 330))
        let image = capturedImage
        imageView.image = image
        imageView.contentMode = UIViewContentMode.scaleAspectFill
        imageView.clipsToBounds = true
        imageView.isHidden = true
        webView.isHidden = true
        let editView = EditImageView(frame: self.view.frame)
        let image2 = capturedImage!
        editView.initWithImage(image: image2)
        let croppedImage = editView.getCroppedImage()
        self.view.addSubview(editView)
        self.view.backgroundColor = UIColor.clear
        UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)
    }
}
EditImageView.swift (source: https://github.com/Thanatos-L/LyEditImageView); only the parts that seem relevant to the problem are included:
func initWithImage(image: UIImage) {
    imageView = UIImageView(frame: CGRect(x: 22, y: 123, width: 330, height: 330))
    imageView.tag = IMAGE_VIEW_TAG
    self.addSubview(self.imageView)
    imageView.isUserInteractionEnabled = true
    imageView.image = image
    imageView.frame = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let frame = AVMakeRect(aspectRatio: imageView.frame.size, insideRect: self.frame)
    imageView.frame = frame
    originImageViewFrame = frame
    NSLog("initWithImage %@", NSStringFromCGRect(originImageViewFrame))
    imageZoomScale = 1.0
    commitInit()
}

private func cropImage() {
    let rect = self.convert(cropView.frame, to: imageView)
    let imageSize = imageView.image?.size
    let ratio = originImageViewFrame.size.width / (imageSize?.width)!
    let zoomedRect = CGRect(x: rect.origin.x / ratio, y: rect.origin.y / ratio, width: rect.size.width / ratio, height: rect.size.height / ratio)
    let croppedImage = cropImage(image: imageView.image!, toRect: zoomedRect)
    var view: UIImageView? = self.viewWithTag(1301) as? UIImageView
    if view == nil {
        view = UIImageView()
    }
    view?.frame = CGRect(x: 0, y: 0, width: croppedImage.size.width, height: croppedImage.size.height)
    view?.image = croppedImage
    view?.tag = 1301
    self.addSubview(view!)
}
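For reference, the cropImage(image:toRect:) helper called above is not shown in the question. A minimal sketch of what such a helper typically looks like is below (this is an assumption for illustration, not the library's actual code); note that it has to account for the image's scale so the crop happens in pixel space, which is exactly the kind of size mismatch the question suspects:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    // Hypothetical sketch, NOT the library's implementation.
    // CGImage.cropping(to:) works in pixels, so the rect (given in points) is scaled first.
    let pixelRect = CGRect(x: rect.origin.x * image.scale,
                           y: rect.origin.y * image.scale,
                           width: rect.size.width * image.scale,
                           height: rect.size.height * image.scale)
    guard let croppedCGImage = image.cgImage?.cropping(to: pixelRect) else {
        return image
    }
    return UIImage(cgImage: croppedCGImage, scale: image.scale, orientation: image.imageOrientation)
}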

Render As Template Image on an animated WKInterfaceImage

I have a WKInterfaceImage animation with 160 frames. The animation works great. Now I want to add a tint. This SO question mentions the WatchKit tint method using the Render As Template Image option in the Inspector, but it seems to work only on a single static image. It tints only the last frame, and it tints that frame with the color set in the Inspector rather than the tint from my code. I have tried rendering only the first frame as a template and rendering all the frames, to no avail.
Do I have to loop through all of the frames, set a range, or incorporate the setTint method inside the startAnimatingWithImagesInRange call?
rotateButtonImage.setImageNamed("frame")
rotateButtonImage.startAnimatingWithImagesInRange(NSRange(location: 0, length: 159), duration: 1, repeatCount: 1)
rotateButtonImage.setTintColor(UIColor.redColor())
EDIT: What I did was create an extension. It looks like this:
WKImage+Tint.swift
extension UIImage {
    func imageWithTintColor(colorTint: UIColor) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        colorTint.setFill()
        let context: CGContextRef = UIGraphicsGetCurrentContext()! as CGContextRef
        CGContextTranslateCTM(context, 0, self.size.height)
        CGContextScaleCTM(context, 1.0, -1.0)
        CGContextSetBlendMode(context, CGBlendMode.Normal)
        let rect: CGRect = CGRectMake(0, 0, self.size.width, self.size.height)
        CGContextClipToMask(context, rect, self.CGImage)
        CGContextFillRect(context, rect)
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext() as UIImage
        UIGraphicsEndImageContext()
        return newImage
    }
}
Then in my custom VC in awakeWithContext I called:
rotateButtonImage.image = rotateButtonImage.image.imageWithColor(UIColor.redColor())
But for some reason things are not auto-completing. My WKInterfaceImage is called rotateButtonImage and I've imported Foundation and WatchKit etc.
Should my extension or function return type be of type WKInterfaceImage instead? I tried changing those but got tons of errors.
It turns out this extension approach will not work in WatchKit.
So you have to use the Inspector method, but the tint is still not applied to my animation. I think this may be a bug: a single image can be tinted, but apparently not multiple frames, even though the code is valid.
setTintColor only works when the image contains a single template image, as explained here:
https://developer.apple.com/library/ios/documentation/WatchKit/Reference/WKInterfaceImage_class/index.html#//apple_ref/occ/instm/WKInterfaceImage/setTintColor:
But there actually is a way to recolor an animated image. It isn't performant, and you don't want to do this with a large set of images.
static func animatedImagesWithColor(color: UIColor) -> [UIImage] {
    var animatedImages = [UIImage]()
    (0...60).forEach { imageNumber in
        if let img = UIImage(named: "MyImage\(imageNumber)") {
            UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
            let context = UIGraphicsGetCurrentContext()
            color.setFill()
            CGContextTranslateCTM(context, 0, img.size.height)
            CGContextScaleCTM(context, 1.0, -1.0)
            CGContextClipToMask(context, CGRectMake(0, 0, img.size.width, img.size.height), img.CGImage)
            CGContextFillRect(context, CGRectMake(0, 0, img.size.width, img.size.height))
            animatedImages.append(UIGraphicsGetImageFromCurrentImageContext())
            UIGraphicsEndImageContext()
        }
    }
    return animatedImages
}
You use that array this way:
let animation = UIImage.animatedImageWithImages(UIImage.animatedImagesWithColor(.redColor()), duration: 50)
let range = NSRange(location: 0, length: 60)
animationGroup.setBackgroundImage(animation)
animationGroup.startAnimatingWithImagesInRange(range, duration: 50, repeatCount: -1)
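For reference, here is a more recent Swift sketch of the same per-frame tinting idea using UIGraphicsImageRenderer; the image names are placeholders, as above, and this is only a sketch of the approach, not code from the original answer:
static func animatedImages(tintedWith color: UIColor) -> [UIImage] {
    // Mask each frame with itself, then fill with the tint color.
    return (0...60).compactMap { index -> UIImage? in
        guard let frame = UIImage(named: "MyImage\(index)"), let cgImage = frame.cgImage else { return nil }
        let renderer = UIGraphicsImageRenderer(size: frame.size)
        return renderer.image { ctx in
            color.setFill()
            // Flip the context so the CGImage mask is not drawn upside down.
            ctx.cgContext.translateBy(x: 0, y: frame.size.height)
            ctx.cgContext.scaleBy(x: 1, y: -1)
            let rect = CGRect(origin: .zero, size: frame.size)
            ctx.cgContext.clip(to: rect, mask: cgImage)
            ctx.cgContext.fill(rect)
        }
    }
}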

How can I 'cut' a transparent hole in a UIImage?

I'm trying to cut a transparent square in a UIImage, but I honestly have no idea where or how to start.
Any help would be greatly appreciated.
Thanks!
Presume that your image is being displayed in a view - probably a UIImageView. Then we can punch a rectangular hole in that view by masking the view's layer. Every view has a layer. We will apply to this view's layer a mask which is itself a layer containing an image, which we will generate in code. The image will be black except for a clear rectangle somewhere in the middle. That clear rectangle will cause the hole in the image view.
So, let self.iv be this UIImageView. Try running this code:
CGRect r = self.iv.bounds;
CGRect r2 = CGRectMake(20,20,40,40); // adjust this as desired!
UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextAddRect(c, r2);
CGContextAddRect(c, r);
CGContextEOClip(c);
CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
CGContextFillRect(c, r);
UIImage* maskim = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CALayer* mask = [CALayer layer];
mask.frame = r;
mask.contents = (id)maskim.CGImage;
self.iv.layer.mask = mask;
For example, in the screenshot the white square is not a superimposed square; it is a hole, showing the white of the window background behind it.
EDIT: I feel obligated, since I mentioned it in a comment, to show how to do the same thing with a CAShapeLayer. The result is exactly the same:
CGRect r = self.iv.bounds;
CGRect r2 = CGRectMake(20,20,40,40); // adjust this as desired!
CAShapeLayer* lay = [CAShapeLayer layer];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, nil, r2);
CGPathAddRect(path, nil, r);
lay.path = path;
CGPathRelease(path);
lay.fillRule = kCAFillRuleEvenOdd;
self.iv.layer.mask = lay;
Here's a simple Swift function, cut(hole:inView:), to copy and paste (2017 version):
func cut(hole: CGRect, inView view: UIView) {
let path: CGMutablePath = CGMutablePath()
path.addRect(view.bounds)
path.addRect(hole)
let shapeLayer = CAShapeLayer()
shapeLayer.path = path
shapeLayer.fillRule = .evenOdd
view.layer.mask = shapeLayer
}
I just needed the version from @Fattie, thanks again! Here is the updated code for Swift 5.1:
private func cut(holeRect: CGRect, inView view: UIView) {
let combinedPath = CGMutablePath()
combinedPath.addRect(view.bounds)
combinedPath.addRect(holeRect)
let maskShape = CAShapeLayer()
maskShape.path = combinedPath
maskShape.fillRule = .evenOdd
view.layer.mask = maskShape
}
If you want the cutout to have rounded corners, you can replace combinedPath.addRect(holeRect) with combinedPath.addRoundedRect(in: holeRect, cornerWidth: 8, cornerHeight: 8).
Here's the updated code to cut a hole in a UIImage (instead of a UIView) using Swift:
func cut(hole: CGRect, inView image: UIImage) -> UIImage? {
UIGraphicsBeginImageContext(image.size)
image.draw(at: CGPoint.zero)
let context = UIGraphicsGetCurrentContext()!
let bez = UIBezierPath(rect: hole)
context.addPath(bez.cgPath)
context.clip()
context.clear(CGRect(x:0,y:0,width: image.size.width,height: image.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
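A usage sketch (the image name, rect, and imageView are placeholders):
// Punch a 100x100 clear square out of the top-left corner of an image.
if let source = UIImage(named: "photo"),
   let punched = cut(hole: CGRect(x: 0, y: 0, width: 100, height: 100), inView: source) {
    imageView.image = punched
}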

Moving an object along a curve in iPhone development

I want to animate an image object by moving it along a particular curve. It is not a general or random curve, but a curve that follows a particular path on screen.
Currently, I'm manually specifying the list of x and y coordinates of the path along which I want the image object to move, by setting its frame each time. This is a laborious process in the sense that I'm setting specific x and y coordinates and moving the image along them. Is there a more efficient way to do this?
Is there a way that I can specify, say, just 15-20 points and have a curve traced through them to move the object along? Any other way to achieve this? Any help would be much appreciated. Thanks.
You could use a combination of UIBezierPath and CAKeyframeAnimation.
I found a very useful blog post dealing with this subject.
http://oleb.net/blog/2010/12/animating-drawing-of-cgpath-with-cashapelayer/
Here's a simplified version of what I used (it just moves the layer along a square path):
UIBezierPath *customPath = [UIBezierPath bezierPath];
[customPath moveToPoint:CGPointMake(100, 100)];
[customPath addLineToPoint:CGPointMake(200, 100)];
[customPath addLineToPoint:CGPointMake(200, 200)];
[customPath addLineToPoint:CGPointMake(100, 200)];
[customPath addLineToPoint:CGPointMake(100, 100)];

UIImage *movingImage = [UIImage imageNamed:@"foo.png"];
CALayer *movingLayer = [CALayer layer];
movingLayer.contents = (id)movingImage.CGImage;
movingLayer.anchorPoint = CGPointZero;
movingLayer.frame = CGRectMake(0.0f, 0.0f, movingImage.size.width, movingImage.size.height);
[self.view.layer addSublayer:movingLayer];

CAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
pathAnimation.duration = 4.0f;
pathAnimation.path = customPath.CGPath;
pathAnimation.calculationMode = kCAAnimationLinear;
[movingLayer addAnimation:pathAnimation forKey:@"movingAnimation"];
Swift 3 version of @Jilouc's answer:
override func viewDidLoad() {
    super.viewDidLoad()
    addAdditiveAnimation()
    initiateAnimation()
}

// Curve which follows a particular path
func pathToTrace() -> UIBezierPath {
    let path = UIBezierPath(ovalIn: CGRect(x: 120, y: 120, width: 100, height: 100))
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.strokeColor = UIColor.red.cgColor
    shapeLayer.lineWidth = 1.0
    self.view.layer.addSublayer(shapeLayer)
    return path
}

func addAdditiveAnimation() {
    let movement = CAKeyframeAnimation(keyPath: "position")
    movement.path = pathToTrace().cgPath
    movement.duration = 5
    movement.repeatCount = HUGE
    movement.calculationMode = kCAAnimationPaced
    movement.timingFunctions = [CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseOut)]
    self.movement = movement
}

func createLayer() -> CALayer {
    let layer = CALayer()
    let image = UIImage(named: "launch.png")
    layer.frame = CGRect(x: 0, y: 0, width: (image?.size.width)!, height: (image?.size.height)!)
    layer.position = CGPoint(x: 5, y: 5)
    layer.contents = image?.cgImage
    layer.anchorPoint = .zero
    layer.backgroundColor = UIColor.red.cgColor
    //layer.cornerRadius = 5
    self.view.layer.addSublayer(layer)
    return layer
}

func initiateAnimation() {
    let layer = createLayer()
    layer.add(self.movement, forKey: "Object Movement")
}
Github Demo
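The Swift 3 snippet above assumes the animation is kept in a stored property between addAdditiveAnimation() and initiateAnimation(); a minimal declaration for it (not shown in the original answer) would be:
// Stored on the view controller so initiateAnimation() can attach it later.
var movement: CAKeyframeAnimation!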

How do I add a gradient to the text of a UILabel, but not the background?

Hey, I want to be able to have a gradient fill on the text in a UILabel. I know about CGGradient, but I don't know how I would use it on a UILabel's text.
I found this on Google but I can't manage to get it to work:
http://silverity.livejournal.com/26436.html
I was looking for a solution and DotSlashSlash has the answer hidden in one of the comments!
For the sake of completeness, the answer and the simplest solution is:
UIImage *myGradient = [UIImage imageNamed:@"textGradient.png"];
myLabel.textColor = [UIColor colorWithPatternImage:myGradient];
(Skip to bottom for full class source code)
Really useful answers by both Brad Larson and Bach. The second worked for me but it requires an image to be present in advance. I wanted something more dynamic so I combined both solutions into one:
draw the desired gradient on a UIImage
use the UIImage to set the color pattern
The result works and in the screenshot below you can see some Greek characters rendered fine too. (I have also added a stroke and a shadow on top of the gradient)
Here's the custom init method of my label, along with the method that renders a gradient on a UIImage (part of the code for that functionality came from a blog post I can no longer find to reference):
- (id)initWithFrame:(CGRect)frame text:(NSString *)aText {
self = [super initWithFrame:frame];
if (self) {
self.backgroundColor = [UIColor clearColor];
self.text = aText;
self.textColor = [UIColor colorWithPatternImage:[self gradientImage]];
}
return self;
}
- (UIImage *)gradientImage
{
CGSize textSize = [self.text sizeWithFont:self.font];
CGFloat width = textSize.width; // max 1024 due to Core Graphics limitations
CGFloat height = textSize.height; // max 1024 due to Core Graphics limitations
// create a new bitmap image context
UIGraphicsBeginImageContext(CGSizeMake(width, height));
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// push context to make it current (need to do this manually because we are not drawing in a UIView)
UIGraphicsPushContext(context);
//draw gradient
CGGradientRef glossGradient;
CGColorSpaceRef rgbColorspace;
size_t num_locations = 2;
CGFloat locations[2] = { 0.0, 1.0 };
CGFloat components[8] = { 0.0, 1.0, 1.0, 1.0, // Start color
1.0, 1.0, 0.0, 1.0 }; // End color
rgbColorspace = CGColorSpaceCreateDeviceRGB();
glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
CGPoint topCenter = CGPointMake(0, 0);
CGPoint bottomCenter = CGPointMake(0, textSize.height);
CGContextDrawLinearGradient(context, glossGradient, topCenter, bottomCenter, 0);
CGGradientRelease(glossGradient);
CGColorSpaceRelease(rgbColorspace);
// pop context
UIGraphicsPopContext();
// get a UIImage from the image context
UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up drawing environment
UIGraphicsEndImageContext();
return gradientImage;
}
I'll try to complete that UILabel subclass and post it.
EDIT:
The class is done and it's on my GitHub repository. Read about it here!
Swift 4.1
class GradientLabel: UILabel {

    var gradientColors: [CGColor] = []

    override func drawText(in rect: CGRect) {
        if let gradientColor = drawGradientColor(in: rect, colors: gradientColors) {
            self.textColor = gradientColor
        }
        super.drawText(in: rect)
    }

    private func drawGradientColor(in rect: CGRect, colors: [CGColor]) -> UIColor? {
        let currentContext = UIGraphicsGetCurrentContext()
        currentContext?.saveGState()
        defer { currentContext?.restoreGState() }

        let size = rect.size
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                        colors: colors as CFArray,
                                        locations: nil) else { return nil }

        let context = UIGraphicsGetCurrentContext()
        context?.drawLinearGradient(gradient,
                                    start: CGPoint.zero,
                                    end: CGPoint(x: size.width, y: 0),
                                    options: [])
        let gradientImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        guard let image = gradientImage else { return nil }
        return UIColor(patternImage: image)
    }
}
Usage:
label.gradientColors = [UIColor.blue.cgColor, UIColor.red.cgColor]
SWIFT 3+
This solution is based on @Dimitris's answer. It is an extension on the UILabel class that creates a gradient over the label's text using the startColor and endColor you pass in. The UILabel extension is below:
extension UILabel {
    func applyGradientWith(startColor: UIColor, endColor: UIColor) -> Bool {
        var startColorRed: CGFloat = 0
        var startColorGreen: CGFloat = 0
        var startColorBlue: CGFloat = 0
        var startAlpha: CGFloat = 0
        if !startColor.getRed(&startColorRed, green: &startColorGreen, blue: &startColorBlue, alpha: &startAlpha) {
            return false
        }

        var endColorRed: CGFloat = 0
        var endColorGreen: CGFloat = 0
        var endColorBlue: CGFloat = 0
        var endAlpha: CGFloat = 0
        if !endColor.getRed(&endColorRed, green: &endColorGreen, blue: &endColorBlue, alpha: &endAlpha) {
            return false
        }

        let gradientText = self.text ?? ""
        let name: String = NSFontAttributeName
        let textSize: CGSize = gradientText.size(attributes: [name: self.font])
        let width: CGFloat = textSize.width
        let height: CGFloat = textSize.height

        UIGraphicsBeginImageContext(CGSize(width: width, height: height))
        guard let context = UIGraphicsGetCurrentContext() else {
            UIGraphicsEndImageContext()
            return false
        }
        UIGraphicsPushContext(context)

        let glossGradient: CGGradient?
        let rgbColorspace: CGColorSpace?
        let num_locations: size_t = 2
        let locations: [CGFloat] = [0.0, 1.0]
        let components: [CGFloat] = [startColorRed, startColorGreen, startColorBlue, startAlpha,
                                     endColorRed, endColorGreen, endColorBlue, endAlpha]
        rgbColorspace = CGColorSpaceCreateDeviceRGB()
        glossGradient = CGGradient(colorSpace: rgbColorspace!, colorComponents: components, locations: locations, count: num_locations)

        let topCenter = CGPoint.zero
        let bottomCenter = CGPoint(x: 0, y: textSize.height)
        context.drawLinearGradient(glossGradient!, start: topCenter, end: bottomCenter, options: CGGradientDrawingOptions.drawsBeforeStartLocation)
        UIGraphicsPopContext()

        guard let gradientImage = UIGraphicsGetImageFromCurrentImageContext() else {
            UIGraphicsEndImageContext()
            return false
        }
        UIGraphicsEndImageContext()

        self.textColor = UIColor(patternImage: gradientImage)
        return true
    }
}
And usage:
let text = "YAAASSSSS!"
label.text = text
if label.applyGradientWith(startColor: .red, endColor: .blue) {
print("Gradient applied!")
}
else {
print("Could not apply gradient")
label.textColor = .black
}
SWIFT 2
class func getGradientForText(text: NSString) -> UIImage {
let font:UIFont = UIFont(name: "YourFontName", size: 50.0)!
let name:String = NSFontAttributeName
let textSize: CGSize = text.sizeWithAttributes([name:font])
let width:CGFloat = textSize.width // max 1024 due to Core Graphics limitations
let height:CGFloat = textSize.height // max 1024 due to Core Graphics limitations
//create a new bitmap image context
UIGraphicsBeginImageContext(CGSizeMake(width, height))
// get context
let context = UIGraphicsGetCurrentContext()
// push context to make it current (need to do this manually because we are not drawing in a UIView)
UIGraphicsPushContext(context!)
//draw gradient
let glossGradient:CGGradientRef?
let rgbColorspace:CGColorSpaceRef?
let num_locations:size_t = 2
let locations:[CGFloat] = [ 0.0, 1.0 ]
let components:[CGFloat] = [(202 / 255.0), (197 / 255.0), (52 / 255.0), 1.0, // Start color
(253 / 255.0), (248 / 255.0), (101 / 255.0), 1.0] // End color
rgbColorspace = CGColorSpaceCreateDeviceRGB();
glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
let topCenter = CGPointMake(0, 0);
let bottomCenter = CGPointMake(0, textSize.height);
CGContextDrawLinearGradient(context, glossGradient, topCenter, bottomCenter, CGGradientDrawingOptions.DrawsBeforeStartLocation);
// pop context
UIGraphicsPopContext();
// get a UIImage from the image context
let gradientImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up drawing environment
UIGraphicsEndImageContext();
return gradientImage;
}
Props to @Dimitris.
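Usage would look something like this (assuming the function lives in a helper class named TextGradients; the class name is just for illustration):
// Apply the generated gradient image as the label's text color.
label.textColor = UIColor(patternImage: TextGradients.getGradientForText(label.text!))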
The example you provide relies on private text drawing functions that you don't have access to on the iPhone. The author provides an example of how to do this using public API in a subsequent post. His later example uses a gradient image for the color of the text. (Unfortunately, it appears his blog has since been removed, but see Bach's answer here for the approach he used.)
If you still want to draw the gradient for your text color in code, it can be done by subclassing UILabel and overriding -drawRect: to have code like the following within it:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.0f, self.bounds.size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextSelectFont(context, "Helvetica", 20.0f, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextClip);
CGContextSetTextPosition(context, 0.0f, round(20.0f / 4.0f));
CGContextShowText(context, [self.text UTF8String], strlen([self.text UTF8String]));
CGContextClip(context);
CGGradientRef gradient;
CGColorSpaceRef rgbColorspace;
size_t num_locations = 2;
CGFloat locations[2] = { 0.0, 1.0 };
CGFloat components[8] = { 1.0, 1.0, 1.0, 1.0, // Start color
1.0, 1.0, 1.0, 0.1 }; // End color
rgbColorspace = CGColorSpaceCreateDeviceRGB();
gradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
CGRect currentBounds = self.bounds;
CGPoint topCenter = CGPointMake(CGRectGetMidX(currentBounds), 0.0f);
CGPoint midCenter = CGPointMake(CGRectGetMidX(currentBounds), CGRectGetMidY(currentBounds));
CGContextDrawLinearGradient(context, gradient, topCenter, midCenter, 0);
CGGradientRelease(gradient);
CGColorSpaceRelease(rgbColorspace);
CGContextRestoreGState(context);
One shortcoming of this approach is that the Core Graphics functions I use don't handle Unicode text properly.
What the code does is it flips the drawing context vertically (the iPhone inverts the normal Quartz coordinate system on for the Y axis), sets the text drawing mode to intersect the drawn text with the clipping path, clips the area to draw to the text, and then draws a gradient. The gradient will only fill the text, not the background.
I tried using NSString's -drawAtPoint: method for this, which does support Unicode, but all the characters ran on top of one another when I switched the text mode to kCGTextClip.
Here's what I'm doing in Swift 3
override func viewDidLoad() {
    super.viewDidLoad()
    timerLabel.textColor = UIColor(patternImage: gradientImage(size: timerLabel.frame.size, color1: CIColor(color: UIColor.green), color2: CIColor(color: UIColor.red), direction: .Left))
}

func gradientImage(size: CGSize, color1: CIColor, color2: CIColor, direction: GradientDirection = .Up) -> UIImage {
    let context = CIContext(options: nil)
    let filter = CIFilter(name: "CILinearGradient")
    var startVector: CIVector
    var endVector: CIVector
    filter!.setDefaults()

    switch direction {
    case .Up:
        startVector = CIVector(x: size.width * 0.5, y: 0)
        endVector = CIVector(x: size.width * 0.5, y: size.height)
    case .Left:
        startVector = CIVector(x: size.width, y: size.height * 0.5)
        endVector = CIVector(x: 0, y: size.height * 0.5)
    case .UpLeft:
        startVector = CIVector(x: size.width, y: 0)
        endVector = CIVector(x: 0, y: size.height)
    case .UpRight:
        startVector = CIVector(x: 0, y: 0)
        endVector = CIVector(x: size.width, y: size.height)
    }

    filter!.setValue(startVector, forKey: "inputPoint0")
    filter!.setValue(endVector, forKey: "inputPoint1")
    filter!.setValue(color1, forKey: "inputColor0")
    filter!.setValue(color2, forKey: "inputColor1")

    let image = UIImage(cgImage: context.createCGImage(filter!.outputImage!, from: CGRect(x: 0, y: 0, width: size.width, height: size.height))!)
    return image
}
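The GradientDirection type used above is not shown in the answer; a minimal definition consistent with the cases the switch handles (an assumption on my part) would be:
// Minimal enum matching the cases used in gradientImage(size:color1:color2:direction:).
enum GradientDirection {
    case Up
    case Left
    case UpLeft
    case UpRight
}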
There is a really simple solution for this! Here's how you add gradient colors to UILabel text.
We will achieve this in just two steps:
Create Gradient Image
Apply Gradient Image As textColor to UILabel
1. Create Gradient Image
extension UIImage {
static func gradientImageWithBounds(bounds: CGRect, colors: [CGColor]) -> UIImage {
let gradientLayer = CAGradientLayer()
gradientLayer.frame = bounds
gradientLayer.colors = colors
UIGraphicsBeginImageContext(gradientLayer.bounds.size)
gradientLayer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image!
}
}
Use this as follows:
let gradientImage = UIImage.gradientImageWithBounds(bounds: myLabel.bounds, colors: [firstColor.cgColor, secondColor.cgColor])
2. Apply Gradient Image As textColor to UILabel
myLabel.textColor = UIColor.init(patternImage: gradientImage)
Note:
If you want the gradient to be horizontal, just add these two lines to gradientLayer instance:
gradientLayer.startPoint = CGPoint(x: 0.0, y: 0.5)
gradientLayer.endPoint = CGPoint(x: 1.0, y: 0.5)
Note 2:
The UIImage extension function works with other UIViews too; not just UILabel! So feel free to use this method no matter which UIView you use to apply gradient color.
yourLabel.textColor = UIColor(patternImage: UIImage(named: "yourGradientImageName")!)
SwiftUI
Although we use Text in SwiftUI instead of UILabel, to apply a gradient to a Text you should apply it as a mask. Since gradients are stretchable, you can write a simple extension like this:
extension View {
    func selfSizeMask<T: View>(_ mask: T) -> some View {
        ZStack {
            self.opacity(0)
            mask.mask(self)
        }.fixedSize()
    }
}
Demo
And then you can assign any gradient or other type of view as a self-size mask like:
Text("Gradient is on FIRE !!!")
.selfSizeMask(
LinearGradient(
gradient: Gradient(colors: [.red, .yellow]),
startPoint: .bottom,
endPoint: .top)
)
This method contains some bonus advantages that you can see here in this answer
Simplest Swift 3 Solution
Add an image to your project assets or create one programmatically then do the following:
let image = UIImage(named: "myGradient.png")!
label.textColor = UIColor.init(patternImage: image)
You could subclass UILabel and do the drawing yourself. That would probably be the more difficult approach; there might be an easier way.