macOS and Swift 3 with CIAffineClamp filter - swift

I need to use CIAffineClamp in order to extend the image and prevent Gaussian Blur from blurring out edges of the image. I have the following code working in Swift 2:
let transform = CGAffineTransformIdentity
let clampFilter = CIFilter(name: "CIAffineClamp")
clampFilter.setValue(inputImage, forKey: "inputImage")
clampFilter.setValue(NSValue(CGAffineTransform: transform), forKey: "inputTransform")
In Swift 3, CGAffineTransformIdentity was renamed to CGAffineTransform.identity. My code compiles, but I get the following error message in the console:
[CIAffineClamp inputTransfom] is not a valid object.
Apple's documentation states that on macOS the inputTransform parameter takes an NSAffineTransform object whose attribute type is CIAttributeTypeTransform, but I'm unsure how to use it.
Any help would be appreciated.

It seems NSAffineTransform has an initializer, NSAffineTransform.init(transform:), which takes an AffineTransform.
Please try this:
let transform = AffineTransform.identity
let clampFilter = CIFilter(name: "CIAffineClamp")!
clampFilter.setValue(inputImage, forKey: "inputImage")
clampFilter.setValue(NSAffineTransform(transform: transform), forKey: "inputTransform")
Or the last line can be:
clampFilter.setValue(transform, forKey: "inputTransform")
NSAffineTransform
Important
The Swift overlay to the Foundation framework provides the
AffineTransform structure, which bridges to the NSAffineTransform
class. The AffineTransform value type offers the same functionality as
the NSAffineTransform reference type, and the two can be used
interchangeably in Swift code that interacts with Objective-C APIs.
This behavior is similar to how Swift bridges standard string,
numeric, and collection types to their corresponding Foundation
classes.
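Putting the pieces together, here is a minimal Swift 3 sketch of the full clamp-then-blur pipeline. The function name, the blur radius, and the final crop are illustrative assumptions, not part of the original answer:

```swift
import CoreImage
import Foundation

// Sketch: clamp the image so CIGaussianBlur has pixels to sample
// beyond the original borders, then crop back to the original extent.
// Assumes macOS / Swift 3 naming (cropping(to:)).
func blurredPreservingEdges(_ inputImage: CIImage, radius: Double = 10) -> CIImage? {
    guard let clampFilter = CIFilter(name: "CIAffineClamp") else { return nil }
    clampFilter.setValue(inputImage, forKey: "inputImage")
    clampFilter.setValue(NSAffineTransform(transform: AffineTransform.identity),
                         forKey: "inputTransform")

    guard let blurFilter = CIFilter(name: "CIGaussianBlur") else { return nil }
    blurFilter.setValue(clampFilter.outputImage, forKey: "inputImage")
    blurFilter.setValue(radius, forKey: "inputRadius")

    // The clamped image has infinite extent, so crop to the original.
    return blurFilter.outputImage?.cropping(to: inputImage.extent)
}
```

Without the crop at the end, the blurred image would have an infinite extent, since CIAffineClamp extends the edge pixels indefinitely.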

Related

List all window names in Swift

I’m learning Swift. How do I fix the following code to list the window names?
import CoreGraphics
let windows = CGWindowListCopyWindowInfo(CGWindowListOption.optionAll, kCGNullWindowID)
for i in 0..<CFArrayGetCount(windows) {
    if let window = CFArrayGetValueAtIndex(windows, i) {
        print(CFDictionaryGetValue(window, kCGWindowName))
    }
}
The error:
main.swift:6:32: error: cannot convert value of type 'UnsafeRawPointer' to expected argument type 'CFDictionary?'
print(CFDictionaryGetValue(window, kCGWindowName))
^~~~~~
as! CFDictionary
It becomes easier if you avoid using the Core Foundation types and methods, and bridge the values to native Swift types as early as possible.
Here, CGWindowListCopyWindowInfo() returns an optional CFArray of CFDictionaries, and that can be bridged to the corresponding Swift type [[String : Any]]. Then you can access its values with the usual Swift methods (array enumeration and dictionary subscripting):
if let windowInfo = CGWindowListCopyWindowInfo(.optionAll, kCGNullWindowID) as? [[String: Any]] {
    for windowDict in windowInfo {
        if let windowName = windowDict[kCGWindowName as String] as? String {
            print(windowName)
        }
    }
}
You can use unsafeBitCast(_:to:) to convert the opaque raw pointer to a CFDictionary. Note that you'll also need to convert the second parameter to a raw pointer:
CFDictionaryGetValue(unsafeBitCast(window, to: CFDictionary.self), unsafeBitCast(kCGWindowName, to: UnsafeRawPointer.self))
unsafeBitCast(_:to:) tells the compiler to treat a value as another type; however, it's not very safe (hence the unsafe prefix). I recommend reading the documentation for more details, especially the following note:
Warning
Calling this function breaks the guarantees of the Swift type system; use with extreme care.
In your particular case there should not be any problems using the function, since you're working with the appropriate types, as declared in the documentation of the Foundation functions you're calling.
Complete, workable code could look something like this:
import CoreGraphics
let windows = CGWindowListCopyWindowInfo(CGWindowListOption.optionAll, kCGNullWindowID)
for i in 0..<CFArrayGetCount(windows) {
    let windowDict = unsafeBitCast(CFArrayGetValueAtIndex(windows, i), to: CFDictionary.self)
    let rawWindowNameKey = unsafeBitCast(kCGWindowName, to: UnsafeRawPointer.self)
    let rawWindowName = CFDictionaryGetValue(windowDict, rawWindowNameKey)
    let windowName = unsafeBitCast(rawWindowName, to: CFString?.self) as String?
    print(windowName ?? "")
}
Update
You can bring the Core Foundation array into the Swift world sooner by casting right from the start:
let windows = CGWindowListCopyWindowInfo(CGWindowListOption.optionAll, kCGNullWindowID) as? [[AnyHashable: Any]]
windows?.forEach { window in
    print(window[kCGWindowName])
}
The code is much more readable; however, it might pose performance problems, as the cast to [[AnyHashable: Any]] can be expensive for a large array consisting of large dictionaries.

Convert an array of dictionaries into a set Swift 4

I have an array of dictionaries ([[Double:Double]]) which I want to convert into a Set of dictionaries. My goal is to use the .symmetricDifference to find the differences between two arrays (both are of type [[Double:Double]]). How can I do this?
I found this on hackingwithswift.com and tried to use it, but I am getting this error:
Type '[[Double : Double]]' does not conform to protocol 'Hashable'
I have also tried this code...
let array1:[[Double:Double]] = [[4.5:3.678], [6.7:9.2867], [7.3: 8.7564]]
let array2:[[Double:Double]] = [[4.5:3.678], [6.7:9.2867]]
let array3 = Set<[[Double:Double]]>(array1).symmetricDifference(Set(array2)) //On this line I get the error above.
You don't want a Set of [[Double:Double]]. You want a Set of [Double:Double], because those are the objects in the array and you want them to be the objects in the Set.
Thus the right thing will happen if you simply say
let array1:[[Double:Double]] = [[4.5:3.678], [6.7:9.2867], [7.3: 8.7564]]
let set1 = Set(array1)
and so on.
This might require you to update to a newer version of Swift. It works in Swift 4.2.
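For completeness, here is a sketch of the full comparison from the question (this requires Swift 4.2+, where Dictionary conforms to Hashable):

```swift
let array1: [[Double: Double]] = [[4.5: 3.678], [6.7: 9.2867], [7.3: 8.7564]]
let array2: [[Double: Double]] = [[4.5: 3.678], [6.7: 9.2867]]

// The set's element type is [Double: Double] -- one dictionary per element.
let set1 = Set(array1)
let set2 = Set(array2)

// Elements that appear in exactly one of the two sets.
let difference = set1.symmetricDifference(set2)
print(difference) // contains only [7.3: 8.7564]
```

Note that Set is unordered, so the order of any remaining elements is not guaranteed.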

Swift MetalKit unknown return type MTKMesh.newMeshes

Up until now I have been following a tutorial (released around the time of Metal 1) to learn Metal. I haven't encountered any errors I couldn't figure out until this point. I am trying to execute this code:
var meshes: [AnyObject]?
//code
let device = MTLDevice() //device is fine
let asset = MDLAsset() //asset works fine
do {
    meshes = try MTKMesh.newMeshes(asset: asset, device: device)
} catch //...
The error I'm getting is Cannot assign value of type '(modelIOMeshes: [MDLMesh], metalKitMeshes: [MTKMesh])' to type '[AnyObject]?'
What is the type of MTKMesh.newMeshes, and how can I store it in a variable? I tried casting it as! [AnyObject], but then Xcode tells me that this cast would fail every time.
The return type of that method is ([MDLMesh], [MTKMesh]), a tuple consisting of an array of MDLMeshes and an array of MTKMeshes. The reason for this is that you might want the original collection of MDLMesh objects contained in the asset, in addition to the MTKMesh objects that are created for you.
So, you can declare meshes like this:
var meshes: ([MDLMesh], [MTKMesh])
Or, if you don't care about the original MDLMeshes, you can "destructure" the tuple to get just the portion you care about into a variable of type [MTKMesh]:
var meshes: [MTKMesh]
(_, meshes) = try MTKMesh.newMeshes(asset: asset, device: device)
As the function signature and the compiler error clearly show, the return type is (modelIOMeshes: [MDLMesh], metalKitMeshes: [MTKMesh]), so you should declare meshes accordingly:
var meshes: (modelIOMeshes: [MDLMesh], metalKitMeshes: [MTKMesh])?
The type is a named tuple containing two Arrays, holding MDLMesh and MTKMesh instances respectively.
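A short sketch of both ways to consume the named tuple (assuming asset and device are already set up, as in the question):

```swift
import MetalKit
import ModelIO

// `asset` and `device` are assumed to already exist.
let result = try MTKMesh.newMeshes(asset: asset, device: device)

// Access the components through the tuple labels...
let mdlMeshes: [MDLMesh] = result.modelIOMeshes
let mtkMeshes: [MTKMesh] = result.metalKitMeshes

// ...or destructure in one step, discarding the Model I/O meshes.
let (_, meshes) = try MTKMesh.newMeshes(asset: asset, device: device)
```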

iOS: Ambiguous Use of init(CGImage)

I am trying to convert a CGImage into a CIImage; however, it is not working.
This line of code:
let personciImage = CIImage(CGImage: imageView.image!.CGImage!)
throws the following error
Ambiguous use of 'init(CGImage)'
I'm really confused as to what this error means.
I need to do this conversion because CIDetector.featuresInImage() from the built-in CoreImage framework requires a CIImage.
I solved it on my own.
It turns out, I was capitalizing CGImage wrong. The code should really read:
let personciImage = CIImage(cgImage: imageView.image!.cgImage!)
This throws no errors.
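For reference, here is a hedged sketch of the fixed conversion feeding a CIDetector in Swift 3; the face-detector type and options are illustrative assumptions, not from the original question:

```swift
import UIKit
import CoreImage

func detectFaces(in imageView: UIImageView) -> [CIFeature] {
    // Assumes imageView.image is a non-nil UIImage backed by a CGImage.
    guard let cgImage = imageView.image?.cgImage else { return [] }
    let personciImage = CIImage(cgImage: cgImage)

    // CIDetector.features(in:) -- the Swift 3 spelling of
    // featuresInImage() -- requires a CIImage, hence the conversion.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    return detector?.features(in: personciImage) ?? []
}
```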

How do I initialise a Swift array of objects, e.g. CALayer

I was able to initialise CALayer using method 1, which is working properly in the rest of the code, but not method 2. Could you please advise what is wrong?
Initialise CALayer
var layers:[CALayer]!
Method 1. is working
layers = [CALayer(), CALayer(), CALayer(), CALayer(), CALayer(), CALayer(), CALayer(), CALayer()]
Method 2. is not working
layers = [CALayer](count: 8, repeatedValue: CALayer())
You can use map and an interval:
layers = (0..<8).map { _ in CALayer() }
The _ in is an annoyance that shouldn’t be necessary but Swift’s type inference currently needs it.
The map approach has a big advantage over a for-based approach, since it means you can declare layers with let if it isn't going to change later:
let layers = (0..<8).map { _ in CALayer() }
It may also be marginally more efficient than multiple appends, since map can calculate the size of the array ahead of time, whereas append may need to resize the array multiple times as it grows.
Method 2 doesn't work as intended because that initializer installs the exact same object in every index of the array, so all eight entries reference a single CALayer.
You can use method one, or you can use a for loop:
let layerCount = 10
var layers = [CALayer]()
layers.reserveCapacity(layerCount)
for _ in 1...layerCount
{
    layers.append(CALayer())
}
(I'm still getting used to Swift so that syntax might not be perfect, but it should give you the general idea)
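To see why the repeating initializer is a problem for reference types, here is a small sketch using a plain class (the Box type is a stand-in for illustration; the same applies to CALayer):

```swift
class Box {
    var value = 0
}

// The repeating initializer evaluates Box() once and stores
// the same reference at every index.
let shared = [Box](repeating: Box(), count: 3)
shared[0].value = 42
print(shared[1].value) // 42 -- all three elements are the same object

// map evaluates the closure once per element, so each index
// gets a distinct instance.
let distinct = (0..<3).map { _ in Box() }
distinct[0].value = 42
print(distinct[1].value) // 0 -- separate objects
```

This distinction only matters for reference types; for value types like Int or struct instances, Array(repeating:count:) is perfectly fine.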