Get and change hue of SKSpriteNode's SKColor(HSBA)? - sprite-kit

An SKSpriteNode's SKColor can be created with hue, saturation, brightness & alpha:
let myColor = SKColor(hue: 0.5, saturation: 1, brightness: 1, alpha: 1)
mySprite.color = myColor
How do I get the hue of an SKSpriteNode and change it? E.g., divide it by 2.

An SKSpriteNode is a node that draws a texture (optionally blended with a color), an image, or a colored square; that is its nature.
When you create an SKSpriteNode, it has an instance property, also called texture, that represents the texture used to draw the sprite.
Since iOS 9.x, we are able to retrieve an image from a texture with the code below. In this example I call my SKSpriteNode spriteBg:
let spriteBg = SKSpriteNode(texture: SKTexture(imageNamed: "myImage.png"))
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
    } else {
        // Fallback on earlier versions
    }
}
Following this interesting answer, we can translate it to a more comfortable Swift 3.0 version:
func imageWith(source: UIImage, rotatedByHue: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = CIImage(cgImage: source.cgImage!)
    // Apply a CIHueAdjust filter
    guard let hueAdjust = CIFilter(name: "CIHueAdjust") else { return source }
    hueAdjust.setDefaults()
    hueAdjust.setValue(sourceCore, forKey: "inputImage")
    hueAdjust.setValue(rotatedByHue, forKey: "inputAngle")
    guard let resultCore = hueAdjust.outputImage else { return source }
    // Render the filtered CIImage back into a UIImage
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore, from: resultCore.extent)
    let result = UIImage(cgImage: resultRef!)
    return result
}
So, finally with the previous code we can do:
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
        let changedImage = imageWith(source: image, rotatedByHue: 0.5)
        spriteBg.texture = SKTexture(image: changedImage)
    } else {
        // Fallback on earlier versions
    }
}

I'm not in a place to be able to test this right now, but looking at the UIColor documentation (UIColor and SKColor are basically the same thing), you should be able to use the .getHue(...) function to retrieve the color's components, make changes to them, then set the SKSpriteNode's color property to the new value. The .getHue(...) function "Returns the components that make up the color in the HSB color space."
https://developer.apple.com/reference/uikit/uicolor/1621949-gethue
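For example, a minimal sketch along these lines (on iOS, where SKColor is UIColor; halveHue is a hypothetical helper name, and the change is only visible to the extent that colorBlendFactor blends the color into the texture):
import SpriteKit

// Hypothetical helper: halve the hue of a sprite's blend color (untested sketch)
func halveHue(of sprite: SKSpriteNode) {
    var h: CGFloat = 0, s: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    // getHue(_:saturation:brightness:alpha:) fills in the HSBA components
    if sprite.color.getHue(&h, saturation: &s, brightness: &b, alpha: &a) {
        sprite.color = SKColor(hue: h / 2, saturation: s, brightness: b, alpha: a)
    }
}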

Related

Why does the SceneKit material look different, even when the image is the same?

The material contents property supports many options to be loaded; two of these are NSImage (or UIImage) and SKTexture.
I noticed that when loading the same image file (.png) with different loaders, the material is rendered differently.
I'm fairly sure it is an extra property applied by the SpriteKit transformation, but I don't know what it is.
Why does the SceneKit material look different, even when the image is the same?
This is the rendered example:
About the code:
// 1. Plain color
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSColor.green

// 2. NSImage
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSImage(named: "texture")

// 3. SKTexture
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = SKTexture(imageNamed: "texture")
The complete example is here: https://github.com/Maetschl/SceneKitExamples/tree/master/MaterialTests
I think this has something to do with color spaces/gamma correction. My guess is that textures loaded via the SKTexture(imageNamed:) initializer aren't properly gamma corrected. You would think this would be documented somewhere, or other people would have noticed, but I can't seem to find anything.
Here's some code to swap with the last image in your linked sample project. I've force unwrapped as much as possible for brevity:
// Create the texture using the SKTexture(cgImage:) init
// to prove it has the same output image as SKTexture(imageNamed:)
let originalDogNSImage = NSImage(named: "dog")!
var originalDogRect = CGRect(x: 0, y: 0, width: originalDogNSImage.size.width, height: originalDogNSImage.size.height)
let originalDogCGImage = originalDogNSImage.cgImage(forProposedRect: &originalDogRect, context: nil, hints: nil)!
let originalDogTexture = SKTexture(cgImage: originalDogCGImage)
// Create the ciImage of the original image to use as the input for the CIFilter
let imageData = originalDogNSImage.tiffRepresentation!
let ciImage = CIImage(data: imageData)
// Create the gamma adjustment Core Image filter
let gammaFilter = CIFilter(name: "CIGammaAdjust")!
gammaFilter.setValue(ciImage, forKey: kCIInputImageKey)
// 0.75 is the default. 2.2 makes the dog image mostly match the NSImage(named:) initializer
gammaFilter.setValue(2.2, forKey: "inputPower")
// Create a SKTexture using the output of the CIFilter
let gammaCorrectedDogCIImage = gammaFilter.outputImage!
let gammaCorrectedDogCGImage = CIContext().createCGImage(gammaCorrectedDogCIImage, from: gammaCorrectedDogCIImage.extent)!
let gammaCorrectedDogTexture = SKTexture(cgImage: gammaCorrectedDogCGImage)
// Looks bad, like in StackOverflow question image.
// let planeWithSKTextureDog = planeWith(diffuseContent: originalDogTexture)
// Looks correct
let planeWithSKTextureDog = planeWith(diffuseContent: gammaCorrectedDogTexture)
Using a CIGammaAdjust filter with an inputPower of 2.2 makes the SKTexture almost (but not exactly) match the NSImage(named:) init. I've included the original image being loaded through SKTexture(cgImage:) to rule out any changes caused by using that initializer versus the SKTexture(imageNamed:) you asked about.

How do you extend the space (bounds) of a CIImage without stretching the original?

I'm applying several filters on an already cropped image, and I'd like a flipped duplicate of it next to the original. This would make it twice as wide.
Problem: How do you extend the bounds so both can fit? .cropped(to:CGRect) will stretch whatever original content was there. The reason there is existing content is because I'm trying to use applyingFilter as much as possible to save on processing. It's also why I'm cropping the original un-mirrored image.
Below is my CIImage "alphaMaskBlend2" with a compositing filter, and a transform applied to the same image that flips it and adjusts its position. sourceCore.extent is the size I want the final image to be.
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!]).cropped(to: sourceCore.extent)
I've played around with the position of the transform in LLDB. I found that with this filter being cropped, the leftmost image becomes stretched. If I use clamped(to:) with the same extent, and then re-crop the image to that same extent, the image is no longer distorted, but the bounds of the image are only half the width they should be.
The only way I could achieve this, is compositing against a background image (sourceCore) that would be the size of the two images combined, and then compositing the other image:
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: alphaMaskBlend2!,
                 kCIInputBackgroundImageKey: sourceCore])
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!])
The problem is that this is more expensive than necessary. I even tested it with benchmarking. It would make a lot more sense if I could do this with one composite.
While I can "flip" a CIImage, I couldn't find a way to use an existing CIFilter to "stitch" it alongside the original. However, with some basic knowledge of writing your own CIKernel, you can. A simple project achieving this is here.
This project contains a sample image, and using CoreImage and a GLKView it:
flips the image by transposing the Y "bottom/top" coordinates for CIPerspectiveCorrection
creates a new "palette" image using CIConstantColor and then crops it using CICrop to be twice the width of the original
uses a very simple CIKernel (registered as "Stitch") to actually stitch it together
Here's the code to flip:
// use CIPerspectiveCorrection to "flip" on the Y axis
let minX:CGFloat = 0
let maxY:CGFloat = 0
let maxX = originalImage?.extent.width
let minY = originalImage?.extent.height
let flipFilter = CIFilter(name: "CIPerspectiveCorrection")
flipFilter?.setValue(CIVector(x: minX, y: maxY), forKey: "inputTopLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: maxY), forKey: "inputTopRight")
flipFilter?.setValue(CIVector(x: minX, y: minY!), forKey: "inputBottomLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: minY!), forKey: "inputBottomRight")
flipFilter?.setValue(originalImage, forKey: "inputImage")
flippedImage = flipFilter?.outputImage
Here's the code to create the palette:
let paletteFilter = CIFilter(name: "CIConstantColorGenerator")
paletteFilter?.setValue(CIColor(red: 0.7, green: 0.4, blue: 0.4), forKey: "inputColor")
paletteImage = paletteFilter?.outputImage
let cropFilter = CIFilter(name: "CICrop")
cropFilter?.setValue(paletteImage, forKey: "inputImage")
cropFilter?.setValue(CIVector(x: 0, y: 0, z: (originalImage?.extent.width)! * 2, w: (originalImage?.extent.height)!), forKey: "inputRectangle")
paletteImage = cropFilter?.outputImage
Here's the code to register and use the custom CIFilter:
// register and use the stitch filter
StitchedFilters.registerFilters()
let stitchFilter = CIFilter(name: "Stitch")
stitchFilter?.setValue(originalImage?.extent.width, forKey: "inputThreshold")
stitchFilter?.setValue(paletteImage, forKey: "inputPalette")
stitchFilter?.setValue(originalImage, forKey: "inputOriginal")
stitchFilter?.setValue(flippedImage, forKey: "inputFlipped")
finalImage = stitchFilter?.outputImage
All of this code (along with layout constraints) in the demo project is in viewDidLoad, so please, place it where it belongs!
Here's the code to (a) create a CIFilter subclass called Stitch and (b) register it so you can use it like any other filter:
func openKernelFile(_ name: String) -> String {
    let filePath = Bundle.main.path(forResource: name, ofType: ".cikernel")
    do {
        return try String(contentsOfFile: filePath!)
    } catch let error as NSError {
        return error.description
    }
}
let CategoryStitched = "Stitch"

class StitchedFilters: NSObject, CIFilterConstructor {
    static func registerFilters() {
        CIFilter.registerName(
            "Stitch",
            constructor: StitchedFilters(),
            classAttributes: [
                kCIAttributeFilterCategories: [CategoryStitched]
            ])
    }
    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "Stitch":
            return Stitch()
        default:
            return nil
        }
    }
}
class Stitch: CIFilter {
    let kernel = CIKernel(source: openKernelFile("Stitch"))
    var inputThreshold: Float = 0
    var inputPalette: CIImage!
    var inputOriginal: CIImage!
    var inputFlipped: CIImage!

    override var attributes: [String: Any] {
        return [
            kCIAttributeFilterDisplayName: "Stitch",
            "inputThreshold": [kCIAttributeIdentity: 0,
                               kCIAttributeClass: "NSNumber",
                               kCIAttributeDisplayName: "Threshold",
                               kCIAttributeDefault: 0.5,
                               kCIAttributeMin: 0,
                               kCIAttributeSliderMin: 0,
                               kCIAttributeSliderMax: 1,
                               kCIAttributeType: kCIAttributeTypeScalar],
            "inputPalette": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Palette",
                             kCIAttributeType: kCIAttributeTypeImage],
            "inputOriginal": [kCIAttributeIdentity: 0,
                              kCIAttributeClass: "CIImage",
                              kCIAttributeDisplayName: "Original",
                              kCIAttributeType: kCIAttributeTypeImage],
            "inputFlipped": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Flipped",
                             kCIAttributeType: kCIAttributeTypeImage]
        ]
    }

    override init() {
        super.init()
    }

    override func setValue(_ value: Any?, forKey key: String) {
        switch key {
        case "inputThreshold":
            inputThreshold = value as! Float
        case "inputPalette":
            inputPalette = value as! CIImage
        case "inputOriginal":
            inputOriginal = value as! CIImage
        case "inputFlipped":
            inputFlipped = value as! CIImage
        default:
            break
        }
    }

    @available(*, unavailable) required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override var outputImage: CIImage {
        return kernel!.apply(
            extent: inputPalette.extent,
            roiCallback: { (index, rect) in return rect },
            arguments: [
                inputThreshold as Any,
                inputPalette as Any,
                inputOriginal as Any,
                inputFlipped as Any
            ])!
    }
}
Finally, the CIKernel code:
kernel vec4 stitch(float threshold, sampler palette, sampler original, sampler flipped) {
    vec2 coord = destCoord();
    if (coord.x < threshold) {
        return sample(original, samplerCoord(original));
    } else {
        vec2 flippedCoord = coord - vec2(threshold, 0.0);
        vec2 flippedCoordinate = samplerTransform(flipped, flippedCoord);
        return sample(flipped, flippedCoordinate);
    }
}
Now, someone else may have something more elegant - maybe even using an existing CIFilter - but this works well. It only uses the GPU, so, performance-wise, it can be used in "real time". I added some code that isn't strictly needed (registering the filter, using a dictionary to define attributes) to make it more of a teaching exercise for those new to creating CIKernels, so that anyone with knowledge of using CIFilters can consume it. If you focus on the kernel code, you'll recognize how similar to C it looks.
Last, a caveat. I am only stitching the (Y-axis) flipped image to the right of the original. You'll need to adjust things if you want something else.

Programmatically creating an SKTileDefinition

I've been beating my head against a wall for hours now. I am trying to modify a texture inside my app using a CIFilter and then use that new texture as part of a new SKTileDefinition to recolor tiles on my map.
The function below finds tiles that players "own" and attempts to recolor them by changing the SKTileDefinition to the coloredDefinition.
func updateMapTileColoration(for players: Array<Player>) {
    for player in players {
        for row in 0..<mainBoardMap!.numberOfRows {
            for col in 0..<mainBoardMap!.numberOfColumns {
                let rowReal = mainBoardMap!.numberOfRows - 1 - row
                if player.crownLocations!.contains(CGPoint(x: row, y: col)) {
                    if let tile = mainBoardMap!.tileDefinition(atColumn: col, row: rowReal) {
                        let coloredDefinition = colorizeTile(tile: tile, into: player.color!)
                        print(coloredDefinition.name)
                        mainBoardMap!.tileSet.tileGroups[4].rules[0].tileDefinitions.append(coloredDefinition)
                        mainBoardMap!.setTileGroup(crownGroup!, andTileDefinition: crownGroup!.rules[0].tileDefinitions[1], forColumn: col, row: rowReal)
                    }
                }
            }
        }
    }
}
And here is the function that actually applies the CIFilter: colorizeTile
func colorizeTile(tile: SKTileDefinition, into color: UIColor) -> SKTileDefinition {
    let texture = tile.textures[0]
    let colorationFilter = CIFilter(name: "CIColorMonochrome")
    colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
    colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
    colorationFilter!.setValue(0.25, forKey: "inputIntensity")
    let coloredTexture = texture.applying(colorationFilter!)
    let newDefinition = SKTileDefinition(texture: texture)
    newDefinition.textures[0] = coloredTexture
    newDefinition.name = "meow"
    return newDefinition
}
I would love any help in figuring out why I cannot change the tileDefinition like I am trying to do. It seems intuitively correct to be able to define a new TileDefinition and add it to the tileGroup and then set the tile group to the specific tile definition. However, this is leading to blank tiles...
Any pointers?
After trying a bunch of things I finally figured out what was wrong. The tile definition wasn't being created correctly because I never actually drew a new texture. As I learned, a CIImage is not the drawn texture, it's just a recipe; we need a context to draw the texture. After this change, the SKTileDefinition is created properly. The problem wasn't where I thought it was, so I am sort of answering my own question second hand. My method for creating an SKTileDefinition was correct.
if drawContext == nil {
    drawContext = CIContext()
}
let texture = tile.textures[0]
let colorationFilter = CIFilter(name: "CIColorMonochrome")
colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
colorationFilter!.setValue(0.75, forKey: "inputIntensity")
let result = colorationFilter!.outputImage!
let output = drawContext!.createCGImage(result, from: result.extent)
let coloredTexture = SKTexture(cgImage: output!)
let newDefinition = SKTileDefinition(texture: texture)
newDefinition.textures[0] = coloredTexture
newDefinition.name = "meow"

Binarize Picture with Core Image on iOS

I was wondering if it is possible to binarize an image (convert to black and white only) with Core Image?
I made it with OpenCV and GPUImage, but would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }
        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)
        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)
if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
CIColorKernel:
The FadeToBW GLSL (a factor of 0.0 is full color, 1.0 is no color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below opens this as a file called FadeToBW.cikernel. You can also pass the kernel source as a String directly instead of using the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))
    let ciImage = CIImage(image: image)!
    let extent = ciImage.extent
    let arguments = [ciImage, inputColorFade] as [Any]
    let outputImage = ciKernel?.apply(extent: extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success converting the image to grayscale using CIPhotoEffectMono (or equivalent), and then using CIColorControls with a ridiculously high inputContrast value (I used 10000). This effectively makes it black and white, and thus binarized. Useful for those who don't want to mess with custom kernels.
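A minimal sketch of that approach (assuming a UIImage input and a shared CIContext named ciCtx, as in the answers above; the function name is just for illustration):
// Grayscale + extreme contrast as a quick-and-dirty binarization (sketch)
func binarizeViaContrast(image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    let mono = input.applyingFilter("CIPhotoEffectMono")
    // Pushing contrast very high forces mid-tones to pure black or white
    let binarized = mono.applyingFilter("CIColorControls", parameters: ["inputContrast": 10000])
    guard let cgImage = ciCtx.createCGImage(binarized, from: binarized.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}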
Also, you can use an approach like Apple's "Chroma Key" filter example, which filters by hue; instead of looking at hue, you just supply the rules for binarizing the data (i.e. when to set RGB all to 1.0 and when to set them all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
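A hedged sketch of that idea using a CIColorCube lookup table (mapping every color to black or white based on its luma; the 0.5 threshold, 64-entry cube size, and function name are arbitrary illustrative choices, not taken from Apple's sample):
import CoreImage

// Build a CIColorCube filter that maps each color to pure black or white (sketch)
func binarizationColorCube(threshold: Float = 0.5, dimension: Int = 64) -> CIFilter? {
    var cube = [Float]()
    cube.reserveCapacity(dimension * dimension * dimension * 4)
    // CIColorCube expects red to vary fastest, then green, then blue
    for b in 0..<dimension {
        for g in 0..<dimension {
            for r in 0..<dimension {
                // Rec. 601 luma of this cube entry
                let luma = (0.299 * Float(r) + 0.587 * Float(g) + 0.114 * Float(b)) / Float(dimension - 1)
                let value: Float = luma < threshold ? 0.0 : 1.0
                cube.append(contentsOf: [value, value, value, 1.0]) // RGBA
            }
        }
    }
    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    guard let filter = CIFilter(name: "CIColorCube") else { return nil }
    filter.setValue(dimension, forKey: "inputCubeDimension")
    filter.setValue(data, forKey: "inputCubeData")
    return filter
}
You would then set your source image as the filter's inputImage and read outputImage as usual.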
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and OSX 11.0, CoreImage includes CIColorThreshold and CIColorThresholdOtsu filters (the latter using Otsu's method to calculate the threshold value from the image histogram)
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
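Usage is essentially a one-liner (a sketch; inputImage is assumed to be a CIImage, and 0.5 is an arbitrary threshold):
// iOS 14+ / macOS 11+: built-in threshold filters
let thresholded = inputImage.applyingFilter("CIColorThreshold", parameters: ["inputThreshold": 0.5])
// Or let Core Image pick the threshold from the histogram via Otsu's method
let otsuThresholded = inputImage.applyingFilter("CIColorThresholdOtsu")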
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250 CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951

How to set the BlurRadius of UIBlurEffectStyle.Light

I was wondering how to set the radius/blur factor of the new iOS UIBlurEffectStyle.Light? I could not find anything in the documentation, but I want it to look similar to the classic UIImage+ImageEffects.h blur effect.
required init(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    let blur = UIBlurEffect(style: UIBlurEffectStyle.Light)
    let effectView = UIVisualEffectView(effect: blur)
    effectView.frame = frame
    addSubview(effectView)
}
Changing alpha is not a perfect solution. It does not affect blur intensity. You can set up an animation from nil to the target blur effect and manually set the time offset to get the desired blur intensity. Unfortunately iOS will reset the animation offset when the app returns from the background.
Thankfully there is a simple solution that works on iOS >= 10. You can use UIViewPropertyAnimator. I didn't notice any issues with using it. It keeps the custom blur intensity when the app returns from the background. Here is how you can implement it:
class CustomIntensityVisualEffectView: UIVisualEffectView {

    /// Create visual effect view with given effect and its intensity
    ///
    /// - Parameters:
    ///   - effect: visual effect, eg UIBlurEffect(style: .dark)
    ///   - intensity: custom intensity from 0.0 (no effect) to 1.0 (full effect) using linear scale
    init(effect: UIVisualEffect, intensity: CGFloat) {
        super.init(effect: nil)
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [unowned self] in self.effect = effect }
        animator.fractionComplete = intensity
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError()
    }

    // MARK: Private

    private var animator: UIViewPropertyAnimator!
}
I also created a gist: https://gist.github.com/darrarski/29a2a4515508e385c90b3ffe6f975df7
You can change the alpha of the UIVisualEffectView that you add your blur effect to.
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.Light)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.alpha = 0.5
blurEffectView.frame = self.view.bounds
self.view.addSubview(blurEffectView)
This is not a true solution, as it doesn't actually change the radius of the blur, but I have found that it gets the job done with very little work.
Although it is a hack and probably won't be accepted in the App Store, it is still possible. You have to subclass UIBlurEffect like this:
#import <objc/runtime.h>

@interface UIBlurEffect (Protected)
@property (nonatomic, readonly) id effectSettings;
@end

@interface MyBlurEffect : UIBlurEffect
@end

@implementation MyBlurEffect

+ (instancetype)effectWithStyle:(UIBlurEffectStyle)style
{
    id result = [super effectWithStyle:style];
    object_setClass(result, self);
    return result;
}

- (id)effectSettings
{
    id settings = [super effectSettings];
    [settings setValue:@50 forKey:@"blurRadius"];
    return settings;
}

- (id)copyWithZone:(NSZone*)zone
{
    id result = [super copyWithZone:zone];
    object_setClass(result, [self class]);
    return result;
}

@end
Here the blur radius is set to 50; you can change it to any value you need.
Then just use the MyBlurEffect class instead of UIBlurEffect when creating the effect for your UIVisualEffectView.
I recently developed the Bluuur library to dynamically change the blur radius of a UIVisualEffectView without using any private APIs: https://github.com/ML-Works/Bluuur
It uses a paused animation of setting the effect to achieve a changing blur radius. The solution is based on this gist: https://gist.github.com/n00neimp0rtant/27829d87118d984232a4
And the main idea is:
// Freeze animation
blurView.layer.speed = 0;
blurView.effect = nil;
[UIView animateWithDuration:1.0 animations:^{
    blurView.effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
}];
// Set animation progress from 0 to 1
blurView.layer.timeOffset = 0.5;
UPDATE:
Apple introduced the UIViewPropertyAnimator class in iOS 10. That's exactly what we need to animate the .effect property of a UIVisualEffectView. Hopefully the community will be able to back-port this functionality to previous iOS versions.
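A minimal sketch of that UIViewPropertyAnimator approach (the same idea as the CustomIntensityVisualEffectView shown earlier; blurView and the 0.5 intensity are placeholders):
let blurView = UIVisualEffectView(effect: nil)
// Animate from no effect to a full blur, then freeze the animation part-way
let animator = UIViewPropertyAnimator(duration: 1, curve: .linear) {
    blurView.effect = UIBlurEffect(style: .light)
}
animator.fractionComplete = 0.5 // 0.0 = no blur, 1.0 = roughly the default radius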
This is totally doable. Use CIFilter from the Core Image module to customize the blur radius. In fact, you can even achieve a blur effect with a continuously varying (aka gradient) blur radius (https://stackoverflow.com/a/51603339/3808183)
import CoreImage
let ciContext = CIContext(options: nil)
guard let inputImage = CIImage(image: yourUIImage),
let mask = CIFilter(name: "CIGaussianBlur") else { return }
mask.setValue(inputImage, forKey: kCIInputImageKey)
mask.setValue(10, forKey: kCIInputRadiusKey) // Set your blur radius here
guard let output = mask.outputImage,
let cgImage = ciContext.createCGImage(output, from: inputImage.extent) else { return }
outUIImage = UIImage(cgImage: cgImage)
I'm afraid there's no such API currently. In Apple's way of doing things, new functionality is often introduced with restrictions, and capabilities are rolled out gradually. Maybe that will be possible on iOS 9, or maybe 10...
I have an ultimate solution for this question:
fileprivate final class UIVisualEffectViewInterface {
    func setIntensity(effectView: UIVisualEffectView, intensity: CGFloat) {
        let effect = effectView.effect
        effectView.effect = nil
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [weak effectView] in effectView?.effect = effect }
        animator.fractionComplete = intensity
    }

    private var animator: UIViewPropertyAnimator!
}

extension UIVisualEffectView {
    private var key: UnsafeRawPointer? { UnsafeRawPointer(bitPattern: 16) }

    private var interface: UIVisualEffectViewInterface {
        if let key = key, let visualEffectViewInterface = objc_getAssociatedObject(self, key) as? UIVisualEffectViewInterface {
            return visualEffectViewInterface
        }
        let visualEffectViewInterface = UIVisualEffectViewInterface()
        if let key = key {
            objc_setAssociatedObject(self, key, visualEffectViewInterface, objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
        }
        return visualEffectViewInterface
    }

    func intensity(_ value: CGFloat) {
        interface.setIntensity(effectView: self, intensity: value)
    }
}
This idea hit me after trying the above solutions; it's a little hacky, but I got it working. Since we cannot modify the default radius, which is set to 50, we can just enlarge the view and scale it back down.
previewView.snp.makeConstraints { (make) in
    make.centerX.centerY.equalTo(self.view)
    make.width.height.equalTo(self.view).multipliedBy(4)
}
previewBlur.snp.makeConstraints { (make) in
    make.edges.equalTo(previewView)
}
And then,
previewView.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
previewBlur.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
I got a 12.5 blur radius. Hope this will help :-)
Currently I haven't found a real solution.
By the way, you can add a little hack to make the blur mask less "blurry", this way:
let blurView = ... // create the blur view as usual
if let blurSubviews = self.blurView?.subviews {
    for subview in blurSubviews {
        if let filterView = NSClassFromString("_UIVisualEffectFilterView") {
            if subview.isKindOfClass(filterView) {
                subview.hidden = true
            }
        }
    }
}
For iOS 11.*, in viewDidLoad():
let blurEffect = UIBlurEffect(style: .dark)
let blurEffectView = UIVisualEffectView()
view.addSubview(blurEffectView)
// always fill the view
blurEffectView.frame = self.view.bounds
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

UIView.animate(withDuration: 1) {
    blurEffectView.effect = blurEffect
}
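// Note: pauseAnimation(delay:) below is not a UIKit API; it is presumably a helper
// extension (e.g. pausing the layer's animations via layer.speed = 0 after the delay).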
blurEffectView.pauseAnimation(delay: 0.5)
There is an undocumented way to do this. Not necessarily recommended, as it may get your app rejected by Apple. But it does work.
if let blurEffectType = NSClassFromString("_UICustomBlurEffect") as? UIBlurEffect.Type {
    let blurEffectInstance = blurEffectType.init()
    // set any value you want here. 40 is quite blurred
    blurEffectInstance.setValue(40, forKey: "blurRadius")
    let effectView: UIVisualEffectView = UIVisualEffectView(effect: blurEffectInstance)
    // Now you have your blurred visual effect view
}
This works for me.
I put the UIVisualEffectView in a UIView before adding it to my view.
I made this function to make it easier to use. You can use it to blur any area in your view.
func addBlurArea(area: CGRect) {
    let effect = UIBlurEffect(style: UIBlurEffectStyle.Dark)
    let blurView = UIVisualEffectView(effect: effect)
    blurView.frame = CGRect(x: 0, y: 0, width: area.width, height: area.height)

    let container = UIView(frame: area)
    container.alpha = 0.8
    container.addSubview(blurView)
    self.view.insertSubview(container, atIndex: 1)
}
For example, you can blur your entire view by calling:
addBlurArea(self.view.frame)
You can change Dark to your desired blur style and 0.8 to your desired alpha value.
If you want to accomplish the same behaviour as the iOS Spotlight search, you just need to change the alpha value of the UIVisualEffectView (tested on the iOS 9 simulator).