Pie chart/plot in Swift

I've tried so many different ways: adding a plotting package from GitHub, using Core Plot, combining Objective-C with Swift - but so many problems appeared along the way that after a week I still haven't produced a single working chart. I'm really discouraged.
Has anyone succeeded in creating a pie chart in Swift? The similar questions here don't seem to have working answers.
I would really appreciate your help!

Don't be discouraged. You just have to ask a more specific question to get more help. For example, if you start from scratch and try to integrate a plotting package from GitHub, you have to say which package, how you tried to integrate it, what errors you got, and so on.
However, drawing a simple pie chart is pretty easy with plain Core Graphics. Here is a little gift from my code; it draws a progress value as a simple black-and-white pie chart. It only has two sections, but you can generalize from it (see the sketch after the code):
@IBDesignable class ProgressPieIcon: UIView {
    @IBInspectable var progress: Double = 0.0 {
        didSet {
            self.setNeedsDisplay()
        }
    }

    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.contentMode = .Redraw
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.backgroundColor = UIColor.clearColor()
        self.contentMode = .Redraw
    }

    override func drawRect(rect: CGRect) {
        let color = UIColor.blackColor().CGColor
        let lineWidth: CGFloat = 2.0

        // Calculate box with insets
        let margin: CGFloat = lineWidth
        let box0 = CGRectInset(self.bounds, margin, margin)
        let side: CGFloat = min(box0.width, box0.height)
        let box = CGRectMake((self.bounds.width - side) / 2, (self.bounds.height - side) / 2, side, side)

        let ctx = UIGraphicsGetCurrentContext()

        // Draw outline
        CGContextBeginPath(ctx)
        CGContextSetStrokeColorWithColor(ctx, color)
        CGContextSetLineWidth(ctx, lineWidth)
        CGContextAddEllipseInRect(ctx, box)
        CGContextClosePath(ctx)
        CGContextStrokePath(ctx)

        // Draw arc
        let delta: CGFloat = -CGFloat(M_PI_2)  // offset so progress starts at 12 o'clock
        let radius: CGFloat = min(box.width, box.height) / 2.0

        // Map a progress value (0...1) to an angle in radians
        func prog_to_rad(p: Double) -> CGFloat {
            let rad = CGFloat(p * 2 * M_PI)
            return rad + delta
        }

        // Fill a pie wedge from angle s to angle e
        func draw_arc(s: CGFloat, e: CGFloat, color: CGColor) {
            CGContextBeginPath(ctx)
            CGContextMoveToPoint(ctx, box.midX, box.midY)
            CGContextSetFillColorWithColor(ctx, color)
            CGContextAddArc(ctx, box.midX, box.midY, radius - lineWidth / 2, s, e, 0)
            CGContextClosePath(ctx)
            CGContextFillPath(ctx)
        }

        if progress > 0 {
            let s = prog_to_rad(0)
            let e = prog_to_rad(min(1.0, progress))
            draw_arc(s, e, color)
        }
    }
}
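If you need more than two slices, here is a minimal sketch of a generalization (written in current Swift syntax with UIBezierPath, unlike the Swift 1-era CoreGraphics calls above; the class name and properties are mine, not from the answer):

import UIKit

@IBDesignable class PieChartView: UIView {
    // Slice values; each slice's angle is proportional to its share of the total.
    var values: [Double] = [1, 2, 3] { didSet { setNeedsDisplay() } }
    var sliceColors: [UIColor] = [.red, .green, .blue]

    override func draw(_ rect: CGRect) {
        let total = values.reduce(0, +)
        guard total > 0 else { return }
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = min(bounds.width, bounds.height) / 2
        var startAngle = -CGFloat.pi / 2   // start at 12 o'clock
        for (i, value) in values.enumerated() {
            let endAngle = startAngle + CGFloat(value / total) * 2 * .pi
            // Each slice is a wedge: center -> arc -> back to center.
            let path = UIBezierPath()
            path.move(to: center)
            path.addArc(withCenter: center, radius: radius,
                        startAngle: startAngle, endAngle: endAngle, clockwise: true)
            path.close()
            sliceColors[i % sliceColors.count].setFill()
            path.fill()
            startAngle = endAngle
        }
    }
}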

We've made some changes to the Core Plot API, including replacing NSDecimal with NSDecimalNumber, for Swift compatibility. The changes are on the release-2.0 branch and not in a release package yet. See Core Plot issue #96 for more discussion of the issue.

Have you tried looking for a Core Plot tutorial with Swift? Like this one, maybe.
Otherwise, you might want to take a look at other frameworks.

Related

NSBezierPath stroke not scaled correctly

I have what should be a simple subclass of an NSView that draws circular nodes at specified locations.
To render the nodes in my view, I translate the graphics context's origin to the center of the view's frame and scale it such that it spans from -1.25 to 1.25 in the limiting dimension (the node coordinates are all in the range -1...1). I then create for each node an NSBezierPath using the ovalIn: constructor. Finally, I fill the path with yellow and stroke it with black.
But... While the yellow fill looks ok, the black outline is not being scaled correctly!
What am I missing?
Here's the code:
override func draw(_ dirtyRect: NSRect)
{
    let nodeRadius = CGFloat(0.05)
    let unscaledSpan = CGFloat(2.5)

    super.draw(dirtyRect)

    NSColor.white.set()
    self.frame.fill()

    guard let graph = graph else { return }

    let scale = min(bounds.width / unscaledSpan, bounds.height / unscaledSpan)

    NSGraphicsContext.current?.saveGraphicsState()
    defer { NSGraphicsContext.current?.restoreGraphicsState() }

    let xform = NSAffineTransform()
    xform.translateX(by: 0.5 * bounds.width, yBy: 0.5 * bounds.height)
    xform.scale(by: scale)
    xform.concat()

    for v in graph.vertices
    {
        let r = NSRect(x: v.x - nodeRadius, y: v.y - nodeRadius,
                       width: 2.0 * nodeRadius, height: 2.0 * nodeRadius)
        let p = NSBezierPath(ovalIn: r)
        NSColor.yellow.set()
        p.fill()
        NSColor.black.set()
        p.stroke()
    }
}
This is what I'm seeing (shown with two different window sizes)
Clearly, the translation is working fine for both fill and stroke.
But, the scaling is off for stroke.
Thanks for any/all hints/suggestions.
Doh... I wasn't considering the effect of scaling on the line width: the path's default line width of 1.0 is also multiplied by the context's scale, which is why the outline comes out so fat. Setting an explicitly small line width (expressed in the scaled coordinate space) fixes it.
Made the following edit and all is well:
...
let p = NSBezierPath(ovalIn: r)
p.lineWidth = CGFloat(0.01)   // in scaled user space, so the stroke stays thin on screen
NSColor.yellow.set()
p.fill()
NSColor.black.set()
p.stroke()
...
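Equivalently, if you want the stroke to keep a constant on-screen width regardless of window size, you can compensate for the transform instead (a sketch reusing the scale value computed earlier in draw(_:)):

// The context is scaled by `scale`, so divide the desired on-screen width
// (here 1 point) by it to get the equivalent width in user-space units.
p.lineWidth = 1.0 / scale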

Colored text background with rounded corners

I'm currently working on an iOS application in Swift, and I need to achieve the text effect shown in the attached picture.
I have a label which displays some text written by the user, and I need to make the corners of the text's background (not the label's background) rounded.
Is there a way to do that?
I've searched the web and Stack Overflow but with no luck.
Thank you.
Here is some code that could help you. The result I got is quite similar to what you want.
class MyTextView: UITextView {
    let textViewPadding: CGFloat = 7.0

    override func draw(_ rect: CGRect) {
        // Draw a rounded rectangle behind each line fragment of text
        self.layoutManager.enumerateLineFragments(forGlyphRange: NSMakeRange(0, self.text.count)) { (rect, usedRect, textContainer, glyphRange, stop) in
            let rect = CGRect(x: usedRect.origin.x,
                              y: usedRect.origin.y + self.textViewPadding,
                              width: usedRect.size.width,
                              height: usedRect.size.height * 1.2)
            let rectanglePath = UIBezierPath(roundedRect: rect, cornerRadius: 3)
            UIColor.red.setFill()
            rectanglePath.fill()
        }
    }
}
Xavier's solution works great.
If you want to update the background in real time, call textView.setNeedsDisplay(), e.g. from the text view's delegate, as shown below.
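A minimal sketch of wiring that up (the controller name and delegate assignment are my assumptions, not part of the answer):

extension MyViewController: UITextViewDelegate {
    func textViewDidChange(_ textView: UITextView) {
        // Redraw the rounded highlight rects as the user types.
        textView.setNeedsDisplay()
    }
}

Remember to set textView.delegate = self (e.g. in viewDidLoad) for this to fire.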

Drawing a CAShapeLayer using autolayout

I am trying to draw and animate a circular button using CAShapeLayer, but just the drawing is giving me a lot of headaches - I can't seem to figure out how to pass data into my class.
This is my setup:
- a class of type UIView which will draw the CAShapeLayer
- the view is rendered in my view controller and built using Auto Layout constraints
I have tried using layoutIfNeeded but seem to be passing the data too late for the view to be drawn. I have also tried redrawing the view in viewWillLayoutSubviews() but nothing. Example code below. What am I doing wrong?
Am I passing the data too early/too late?
Am I drawing the bezier path too late?
I'd highly appreciate pointers.
And maybe a second follow-up question: is there a simpler way to draw a circular path that is bound to its view's size?
In my View Controller:
import UIKit

class ViewController: UIViewController {
    let buttonView: CircleButton = {
        let view = CircleButton()
        view.backgroundColor = .black
        view.translatesAutoresizingMaskIntoConstraints = false
        return view
    }()

    override func viewWillLayoutSubviews() {
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(buttonView)
        buttonView.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
        buttonView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
        buttonView.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.75).isActive = true
        buttonView.heightAnchor.constraint(equalTo: view.heightAnchor, multiplier: 0.25).isActive = true
        buttonView.layoutIfNeeded()
        buttonView.arcCenter = buttonView.center
        buttonView.radius = buttonView.frame.width / 2
    }

    override func viewDidAppear(_ animated: Bool) {
        print(buttonView.arcCenter)
        print(buttonView.radius)
    }
}
And the class for the buttonView:
class CircleButton: UIView {
    // Outer circular layer
    let trackLayer = CAShapeLayer()
    var arcCenter = CGPoint()
    var radius = CGFloat()

    // UIView init
    override init(frame: CGRect) {
        super.init(frame: frame)
    }

    // UIView post-init
    override func layoutSubviews() {
        super.layoutSubviews()
        print("StudyButtonView arcCenter \(arcCenter)")
        print("StudyButtonView radius \(radius)")
        layer.addSublayer(trackLayer)
        let outerCircularPath = UIBezierPath(arcCenter: arcCenter, radius: radius,
                                             startAngle: 0, endAngle: 2 * CGFloat.pi, clockwise: true)
        trackLayer.path = outerCircularPath.cgPath
        trackLayer.strokeColor = UIColor.lightGray.cgColor
        trackLayer.lineWidth = 5
        trackLayer.strokeStart = 0
        trackLayer.strokeEnd = 1
        trackLayer.fillColor = UIColor.clear.cgColor
        trackLayer.transform = CATransform3DMakeRotation(-CGFloat.pi / 2, 0, 0, 1)
    }

    // Required for subclass
    required init?(coder aDecoder: NSCoder) {
        fatalError("has not been implemented")
    }
}
There really isn't any correlation between Auto Layout and the proper implementation of your CircleButton class. Your CircleButton class doesn't know or care whether it's being sized via Auto Layout or whether it has some fixed size.
Your Auto Layout code looks OK (other than points 5 and 6 below). Most of the issues in your code snippet rest in your CircleButton class. A few observations:
1. If you're going to rotate the shape layer, you have to set its frame, too; otherwise the size is .zero and it will end up rotating about the origin of the view (and rotating outside the bounds of the view, which is especially problematic if you're clipping subviews). Make sure to set the frame of the CAShapeLayer to the bounds of the view before trying to rotate it. Frankly, I'd remove the transform, but given that you're playing around with strokeStart and strokeEnd, I'm guessing you may want to change these values later and have the stroke start at 12 o'clock, in which case the transform makes sense. Bottom line: if rotating, set the frame first. If not, setting the layer's frame is optional.
2. If you're going to change the properties of the view in order to update the shape layer, you'll want to make sure that the didSet observers do the appropriate updating of the shape layer (or call setNeedsLayout). You don't want your view controller to have to mess around with the internals of the shape layer, but you do want to make sure that these changes get reflected in the shape layer.
3. It's a minor observation, but I'd suggest adding the shape layer during init, configuring it and adding it to the layer hierarchy only once; this is more efficient. So have the various init methods call your own configure method, do size-related stuff (like updating the path) in layoutSubviews, and have property observers that update the shape layer directly. This division of labor is more efficient.
4. If you want, you can make this @IBDesignable and put it in its own target in your project. Then you can add it right in IB and see what it will look like. You can also make the various properties @IBInspectable, and you'll be able to set them right in IB, too. You then don't have to do anything in the code of your view controller if you don't want to (but if you want to, feel free).
5. A minor issue, but when you add your view programmatically, you don't need to call buttonView.layoutIfNeeded(). You only need to do that if you're animating constraints, which you're not doing here. Once you add the constraints (and fix the above issues), the button will be laid out correctly, with no explicit layoutIfNeeded required.
6. Your view controller has this line of code: buttonView.arcCenter = buttonView.center. That conflates arcCenter (which is a coordinate within buttonView's coordinate space) and buttonView.center (which is the coordinate of the button's center within the view controller's root view's coordinate space). One has nothing to do with the other. Personally, I'd get rid of this manual setting of arcCenter and instead have layoutSubviews in CircleButton take care of it dynamically, using bounds.midX and bounds.midY.
Pulling that all together, you get something like:
@IBDesignable
class CircleButton: UIView {
    private let trackLayer = CAShapeLayer()

    @IBInspectable var lineWidth: CGFloat = 5 { didSet { updatePath() } }
    @IBInspectable var fillColor: UIColor = .clear { didSet { trackLayer.fillColor = fillColor.cgColor } }
    @IBInspectable var strokeColor: UIColor = .lightGray { didSet { trackLayer.strokeColor = strokeColor.cgColor } }
    @IBInspectable var strokeStart: CGFloat = 0 { didSet { trackLayer.strokeStart = strokeStart } }
    @IBInspectable var strokeEnd: CGFloat = 1 { didSet { trackLayer.strokeEnd = strokeEnd } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        configure()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        configure()
    }

    private func configure() {
        trackLayer.fillColor = fillColor.cgColor
        trackLayer.strokeColor = strokeColor.cgColor
        trackLayer.strokeStart = strokeStart
        trackLayer.strokeEnd = strokeEnd
        layer.addSublayer(trackLayer)
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        updatePath()
    }

    private func updatePath() {
        let arcCenter = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = (min(bounds.width, bounds.height) - lineWidth) / 2
        trackLayer.lineWidth = lineWidth
        trackLayer.path = UIBezierPath(arcCenter: arcCenter, radius: radius,
                                       startAngle: 0, endAngle: 2 * .pi, clockwise: true).cgPath
        // There's no need to rotate it if you're drawing a complete circle.
        // But if you're going to transform, set the `frame`, too.
        trackLayer.transform = CATransform3DIdentity
        trackLayer.frame = bounds
        trackLayer.transform = CATransform3DMakeRotation(-.pi / 2, 0, 0, 1)
    }
}
That yields the stroked circle (screenshot in the original answer). Or you can tweak the settings right in IB and watch them take effect.
And having made sure that all of the didSet observers for the properties of CircleButton either update the path or directly update the shape layer, the view controller can now update these properties and they'll automatically be rendered.
The main issue that I see in your code is that you are adding the layer inside layoutSubviews(); this method is called multiple times during a view's lifecycle.
If you don't want to make the view's hosted layer a CAShapeLayer by using the layerClass property, you need to override the two init methods (frame and coder) and call a common setup method where you instantiate and add your CAShapeLayer as a sublayer.
In layoutSubviews(), just set the layer's frame and its path according to the new view size. (A sketch of the layerClass alternative follows.)
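For reference, a minimal sketch of that layerClass approach (line width hard-coded for brevity; the class name is mine, not from the answer):

import UIKit

class CircleLayerView: UIView {
    // Make the view's backing layer itself a CAShapeLayer - no sublayer bookkeeping needed.
    override class var layerClass: AnyClass { return CAShapeLayer.self }

    private var shapeLayer: CAShapeLayer { return layer as! CAShapeLayer }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Recompute the path whenever Auto Layout changes the view's size.
        let lineWidth: CGFloat = 5
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = (min(bounds.width, bounds.height) - lineWidth) / 2
        shapeLayer.path = UIBezierPath(arcCenter: center, radius: radius,
                                       startAngle: 0, endAngle: 2 * .pi,
                                       clockwise: true).cgPath
        shapeLayer.lineWidth = lineWidth
        shapeLayer.strokeColor = UIColor.lightGray.cgColor
        shapeLayer.fillColor = UIColor.clear.cgColor
    }
}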

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to setup having the users tap a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function; it checks whether the user is touching inside the image view, and if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that lets me save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower-left corner of it regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan point is CGPoint(200,100). (NOTE: if your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working with the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape-shaped) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled-down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind: images in AspectFit can also be scaled up!
One last note on CIImage: the extent. Many times it's the UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example, a CGPoint(200,100), scaled to CGPoint(800,400), would actually be CIVector(800,100) - the Y is flipped against the image height (500 - 400 = 100).
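Putting the aspect-fit scaling and the Y-flip together, a conversion helper might look like this sketch (it assumes .scaleAspectFit and also accounts for the letterbox offset; the function name is mine, not from the answer):

import UIKit
import CoreImage

/// Converts a touch point inside an aspect-fit UIImageView into the displayed
/// image's Core Image coordinate space (origin bottom-left), or nil if the
/// touch falls in the letterbox area outside the image.
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size
    // Aspect-fit scale factor and the letterbox offsets.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayed = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let xOffset = (viewSize.width - displayed.width) / 2
    let yOffset = (viewSize.height - displayed.height) / 2
    // Map from view coordinates into image coordinates.
    let x = (touchPoint.x - xOffset) / scale
    let y = (touchPoint.y - yOffset) / scale
    guard x >= 0, x <= imageSize.width, y >= 0, y <= imageSize.height else { return nil }
    // Flip Y: UIKit's origin is top-left, Core Image's is bottom-left.
    return CIVector(x: x, y: imageSize.height - y)
}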
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my own mistake. Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually write some blog posts on this, but the real source you want is Simon Gladman (he's moved on; look back to his posts in 2015-16) and his eBook Core Image for Swift (it uses Swift 2, but most of it upgrades automatically to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. Using the GPU means stepping outside UIKit, specifically to a GLKView - roughly the GPU-backed equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))

            // Letterbox the image inside the drawable (aspect-fit)
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }

            // rgb() is a custom UIColor extension (not shown) returning 0-255 components
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000)  // GL_COLOR_BUFFER_BIT

            // Set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)        // GL_BLEND
            glBlendFunc(1, 0x0303)  // GL_ONE, GL_ONE_MINUS_SRC_ALPHA

            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit objc.io for much of this. It is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the ability to change the "background color" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for well-performing, near-real-time use of Core Image on the GPU. One reason my aforementioned code that scaled things after getting a filter's output was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun (and pretty cool, too) side of iOS.

How to set the BlurRadius of UIBlurEffectStyle.Light

I was wondering how to set the radius/blur factor of iOS's new UIBlurEffectStyle.Light. I could not find anything in the documentation, but I want it to look similar to the classic UIImage+ImageEffects.h blur effect.
required init(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    let blur = UIBlurEffect(style: UIBlurEffectStyle.Light)
    let effectView = UIVisualEffectView(effect: blur)
    effectView.frame = frame
    addSubview(effectView)
}
Changing alpha is not a perfect solution; it does not affect blur intensity. You can set up an animation from nil to the target blur effect and manually set the time offset to get the desired blur intensity. Unfortunately, iOS will reset the animation offset when the app returns from the background.
Thankfully there is a simple solution that works on iOS >= 10: UIViewPropertyAnimator. I didn't notice any issues with using it; it keeps the custom blur intensity when the app returns from the background. Here is how you can implement it:
class CustomIntensityVisualEffectView: UIVisualEffectView {
    /// Create visual effect view with given effect and its intensity
    ///
    /// - Parameters:
    ///   - effect: visual effect, e.g. UIBlurEffect(style: .dark)
    ///   - intensity: custom intensity from 0.0 (no effect) to 1.0 (full effect) using linear scale
    init(effect: UIVisualEffect, intensity: CGFloat) {
        super.init(effect: nil)
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [unowned self] in self.effect = effect }
        animator.fractionComplete = intensity
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError()
    }

    // MARK: Private
    private var animator: UIViewPropertyAnimator!
}
I also created a gist: https://gist.github.com/darrarski/29a2a4515508e385c90b3ffe6f975df7
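Usage is then straightforward; for example, to cover a view controller's view with a 20%-intensity dark blur:

let blurView = CustomIntensityVisualEffectView(effect: UIBlurEffect(style: .dark),
                                               intensity: 0.2)
blurView.frame = view.bounds
blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
view.addSubview(blurView)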
You can change the alpha of the UIVisualEffectView that you add your blur effect to.
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.Light)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.alpha = 0.5
blurEffectView.frame = self.view.bounds
self.view.addSubview(blurEffectView)
This is not a true solution, as it doesn't actually change the radius of the blur, but I have found that it gets the job done with very little work.
Although it is a hack and probably won't be accepted in the App Store, it is still possible. You have to subclass UIBlurEffect like this:

#import <objc/runtime.h>

@interface UIBlurEffect (Protected)
@property (nonatomic, readonly) id effectSettings;
@end

@interface MyBlurEffect : UIBlurEffect
@end

@implementation MyBlurEffect

+ (instancetype)effectWithStyle:(UIBlurEffectStyle)style
{
    id result = [super effectWithStyle:style];
    object_setClass(result, self);
    return result;
}

- (id)effectSettings
{
    id settings = [super effectSettings];
    [settings setValue:@50 forKey:@"blurRadius"];
    return settings;
}

- (id)copyWithZone:(NSZone *)zone
{
    id result = [super copyWithZone:zone];
    object_setClass(result, [self class]);
    return result;
}

@end
Here blur radius is set to 50. You can change 50 to any value you need.
Then just use MyBlurEffect class instead of UIBlurEffect when creating your effect for UIVisualEffectView.
I recently developed the Bluuur library to dynamically change the blur radius of a UIVisualEffectView without using any private APIs: https://github.com/ML-Works/Bluuur
It uses a paused animation of setting the effect to achieve a changing blur radius. The solution is based on this gist: https://gist.github.com/n00neimp0rtant/27829d87118d984232a4
And the main idea is:

// Freeze animation
blurView.layer.speed = 0;
blurView.effect = nil;
[UIView animateWithDuration:1.0 animations:^{
    blurView.effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
}];
// Set animation progress from 0 to 1
blurView.layer.timeOffset = 0.5;

UPDATE:
Apple introduced the UIViewPropertyAnimator class in iOS 10. That's exactly what we need to animate the .effect property of a UIVisualEffectView. Hopefully the community will be able to back-port this functionality to previous iOS versions.
This is totally doable. Use CIFilter from the Core Image module to customize the blur radius. In fact, you can even achieve a blur effect with a continuously varying (i.e. gradient) blur radius (https://stackoverflow.com/a/51603339/3808183):

import CoreImage

let ciContext = CIContext(options: nil)

guard let inputImage = CIImage(image: yourUIImage),
      let mask = CIFilter(name: "CIGaussianBlur") else { return }
mask.setValue(inputImage, forKey: kCIInputImageKey)
mask.setValue(10, forKey: kCIInputRadiusKey)   // Set your blur radius here

guard let output = mask.outputImage,
      let cgImage = ciContext.createCGImage(output, from: inputImage.extent) else { return }
outUIImage = UIImage(cgImage: cgImage)
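One caveat worth adding (my note, not part of the answer above): CIGaussianBlur samples beyond the image edges, so the result fades toward the borders. Clamping the input and cropping the output back to the original extent avoids that:

// Extend edge pixels to infinity so the blur doesn't fade at the borders,
// then crop the infinite-extent result back to the original rectangle.
mask.setValue(inputImage.clampedToExtent(), forKey: kCIInputImageKey)
let output = mask.outputImage?.cropped(to: inputImage.extent)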
I'm afraid there's no such API currently. In Apple's way of doing things, new functionality often ships with restrictions, and capabilities are rolled out gradually. Maybe it will become possible in iOS 9, or maybe 10...
I have an ultimate solution for this question:

fileprivate final class UIVisualEffectViewInterface {
    private var animator: UIViewPropertyAnimator!

    func setIntensity(effectView: UIVisualEffectView, intensity: CGFloat) {
        let effect = effectView.effect
        effectView.effect = nil
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [weak effectView] in effectView?.effect = effect }
        animator.fractionComplete = intensity
    }
}

extension UIVisualEffectView {
    private var key: UnsafeRawPointer? { UnsafeRawPointer(bitPattern: 16) }

    private var interface: UIVisualEffectViewInterface {
        if let key = key, let visualEffectViewInterface = objc_getAssociatedObject(self, key) as? UIVisualEffectViewInterface {
            return visualEffectViewInterface
        }
        let visualEffectViewInterface = UIVisualEffectViewInterface()
        if let key = key {
            objc_setAssociatedObject(self, key, visualEffectViewInterface, objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
        }
        return visualEffectViewInterface
    }

    func intensity(_ value: CGFloat) {
        interface.setIntensity(effectView: self, intensity: value)
    }
}
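With that extension in place, adjusting the blur of any UIVisualEffectView is a single call:

let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .dark))
view.addSubview(blurView)
blurView.intensity(0.2)   // 20% of the full blur effect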
This idea hit me after trying the above solutions; it's a little hacky, but I got it working. Since we cannot modify the default radius, which is set to 50, we can just enlarge the view and scale it back down (this example uses SnapKit for the constraints):

previewView.snp.makeConstraints { (make) in
    make.centerX.centerY.equalTo(self.view)
    make.width.height.equalTo(self.view).multipliedBy(4)
}
previewBlur.snp.makeConstraints { (make) in
    make.edges.equalTo(previewView)
}

And then:

previewView.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
previewBlur.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)

I got a 12.5 blur radius. Hope this will help :-)
Currently I haven't found a real solution. By the way, you can add a little hack to make the blur mask less "blurry", this way:

let blurView = ... // create the blur view as usual
if let blurSubviews = self.blurView?.subviews {
    for subview in blurSubviews {
        if let filterView = NSClassFromString("_UIVisualEffectFilterView") {
            if subview.isKindOfClass(filterView) {
                subview.hidden = true
            }
        }
    }
}
For iOS 11+, in viewDidLoad():

let blurEffect = UIBlurEffect(style: .dark)
let blurEffectView = UIVisualEffectView()
view.addSubview(blurEffectView)

// Always fill the view
blurEffectView.frame = self.view.bounds
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

// Animate the effect in, then freeze the animation halfway for half intensity
UIView.animate(withDuration: 1) {
    blurEffectView.effect = blurEffect
}
blurEffectView.pauseAnimation(delay: 0.5)
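Note that pauseAnimation(delay:) is not a UIKit API; the answer presumably relies on a helper along these lines (the layer-clock trick from the gist linked earlier - a sketch, not the author's exact code):

extension UIVisualEffectView {
    /// Hypothetical helper: after `delay` seconds, stop the layer's clock so the
    /// in-flight effect animation freezes at a partial (lighter) blur.
    func pauseAnimation(delay: Double) {
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            let pausedTime = self.layer.convertTime(CACurrentMediaTime(), from: nil)
            self.layer.speed = 0
            self.layer.timeOffset = pausedTime
        }
    }
}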
There is an undocumented way to do this. Not necessarily recommended, as it may get your app rejected by Apple, but it does work:

if let blurEffectType = NSClassFromString("_UICustomBlurEffect") as? UIBlurEffect.Type {
    let blurEffectInstance = blurEffectType.init()
    // Set any value you want here; 40 is quite blurred
    blurEffectInstance.setValue(40, forKey: "blurRadius")
    let effectView = UIVisualEffectView(effect: blurEffectInstance)
    // Now you have your blurred visual effect view
}
This works for me: I put the UIVisualEffectView inside a UIView before adding it to my view.
I made this function to make it easier to use; you can use it to blur any area of your view:

func addBlurArea(area: CGRect) {
    let effect = UIBlurEffect(style: UIBlurEffectStyle.Dark)
    let blurView = UIVisualEffectView(effect: effect)
    blurView.frame = CGRect(x: 0, y: 0, width: area.width, height: area.height)

    let container = UIView(frame: area)
    container.alpha = 0.8
    container.addSubview(blurView)
    self.view.insertSubview(container, atIndex: 1)
}

For example, you can blur your entire view by calling:

addBlurArea(self.view.frame)

You can change Dark to your desired blur style and 0.8 to your desired alpha value.
If you want to achieve the same behaviour as the iOS Spotlight search, you just need to change the alpha value of the UIVisualEffectView (tested on the iOS 9 simulator).