There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()

    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)

        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }

    return nil
}
This code produces the following image:
But when magnifying any corner marker, we can see the difference in border thickness:
I.e., not every scale value produces a correct final image. How can this be fixed?
The behavior you show is expected whenever you use a non-integer scale, such as 5.3. If having consistent marker widths is something you care about, use only integer scales, such as 5 or 6.
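For example, a sketch of the same generator with the scale snapped to a whole number (the scale parameter and the rounding step are additions for illustration, not part of the original code):

func generateQRCode(from string: String, scale: CGFloat) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")

    // Snap the scale to a whole number so every QR module maps to an
    // integral block of pixels and all markers render with equal borders.
    let integerScale = scale.rounded()
    let transform = CGAffineTransform(scaleX: integerScale, y: integerScale)

    guard let output = filter.outputImage?.transformed(by: transform) else { return nil }
    return UIImage(ciImage: output)
}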
I'm applying several filters to an already cropped image, and I'd like a flipped duplicate of it next to the original. This would make the result twice as wide.
Problem: how do you extend the bounds so both can fit? .cropped(to:) will stretch whatever original content was there. The reason there is existing content is that I'm trying to use applyingFilter as much as possible to save on processing. It's also why I'm cropping the original, un-mirrored image.
Below is my CIImage alphaMaskBlend2 with a compositing filter, and a transform applied to the same image that flips it and adjusts its position. sourceCore.extent is the size I want the final image to be.
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!]).cropped(to: sourceCore.extent)
I've played around with the position of the transform in LLDB. I found that, with this filter being cropped, the leftmost image becomes stretched. If I clamp to the same extent and then re-crop the image to that same extent, the image is no longer distorted, but the bounds of the image are only half the width they should be.
The only way I could achieve this was by compositing against a background image (sourceCore) the size of the two images combined, and then compositing the other image:
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: alphaMaskBlend2!,
                 kCIInputBackgroundImageKey: sourceCore])

alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!])
The problem is that this is more expensive than necessary; I even confirmed it with benchmarking. It would make a lot more sense if I could do this with one composite.
While I can "flip" a CIImage I couldn't find a way to use an existing CIFilter to "stitch" it along side the original. However, with some basic knowledge of writing your own CIKernel, you can. A simple project of achieving this is here.
This project contains a sample image and, using Core Image and a GLKView, it:

- flips the image by transposing the Y ("bottom/top") coordinates for CIPerspectiveCorrection
- creates a new "palette" image using CIConstantColorGenerator and then crops it with CICrop so it is twice the width of the original
- uses a very simple CIKernel (registered as "Stitch") to actually stitch the two together
Here's the code to flip:
// use CIPerspectiveCorrection to "flip" on the Y axis
let minX:CGFloat = 0
let maxY:CGFloat = 0
let maxX = originalImage?.extent.width
let minY = originalImage?.extent.height
let flipFilter = CIFilter(name: "CIPerspectiveCorrection")
flipFilter?.setValue(CIVector(x: minX, y: maxY), forKey: "inputTopLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: maxY), forKey: "inputTopRight")
flipFilter?.setValue(CIVector(x: minX, y: minY!), forKey: "inputBottomLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: minY!), forKey: "inputBottomRight")
flipFilter?.setValue(originalImage, forKey: "inputImage")
flippedImage = flipFilter?.outputImage
Here's the code to create the palette:
let paletteFilter = CIFilter(name: "CIConstantColorGenerator")
paletteFilter?.setValue(CIColor(red: 0.7, green: 0.4, blue: 0.4), forKey: "inputColor")
paletteImage = paletteFilter?.outputImage
let cropFilter = CIFilter(name: "CICrop")
cropFilter?.setValue(paletteImage, forKey: "inputImage")
cropFilter?.setValue(CIVector(x: 0, y: 0, z: (originalImage?.extent.width)! * 2, w: (originalImage?.extent.height)!), forKey: "inputRectangle")
paletteImage = cropFilter?.outputImage
Here's the code to register and use the custom CIFilter:
// register and use the stitch filter
StitchedFilters.registerFilters()
let stitchFilter = CIFilter(name: "Stitch")
stitchFilter?.setValue(originalImage?.extent.width, forKey: "inputThreshold")
stitchFilter?.setValue(paletteImage, forKey: "inputPalette")
stitchFilter?.setValue(originalImage, forKey: "inputOriginal")
stitchFilter?.setValue(flippedImage, forKey: "inputFlipped")
finalImage = stitchFilter?.outputImage
All of this code (along with layout constraints) in the demo project is in viewDidLoad, so please, place it where it belongs!
Here's the code to (a) create a CIFilter subclass called Stitch and (b) register it so you can use it like any other filter:
func openKernelFile(_ name: String) -> String {
    let filePath = Bundle.main.path(forResource: name, ofType: ".cikernel")
    do {
        return try String(contentsOfFile: filePath!)
    } catch let error as NSError {
        return error.description
    }
}
let CategoryStitched = "Stitch"

class StitchedFilters: NSObject, CIFilterConstructor {
    static func registerFilters() {
        CIFilter.registerName(
            "Stitch",
            constructor: StitchedFilters(),
            classAttributes: [
                kCIAttributeFilterCategories: [CategoryStitched]
            ])
    }

    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "Stitch":
            return Stitch()
        default:
            return nil
        }
    }
}
class Stitch: CIFilter {
    let kernel = CIKernel(source: openKernelFile("Stitch"))
    var inputThreshold: Float = 0
    var inputPalette: CIImage!
    var inputOriginal: CIImage!
    var inputFlipped: CIImage!

    override var attributes: [String: Any] {
        return [
            kCIAttributeFilterDisplayName: "Stitch",
            "inputThreshold": [kCIAttributeIdentity: 0,
                               kCIAttributeClass: "NSNumber",
                               kCIAttributeDisplayName: "Threshold",
                               kCIAttributeDefault: 0.5,
                               kCIAttributeMin: 0,
                               kCIAttributeSliderMin: 0,
                               kCIAttributeSliderMax: 1,
                               kCIAttributeType: kCIAttributeTypeScalar],
            "inputPalette": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Palette",
                             kCIAttributeType: kCIAttributeTypeImage],
            "inputOriginal": [kCIAttributeIdentity: 0,
                              kCIAttributeClass: "CIImage",
                              kCIAttributeDisplayName: "Original",
                              kCIAttributeType: kCIAttributeTypeImage],
            "inputFlipped": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Flipped",
                             kCIAttributeType: kCIAttributeTypeImage]
        ]
    }

    override init() {
        super.init()
    }

    @available(*, unavailable)
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func setValue(_ value: Any?, forKey key: String) {
        switch key {
        case "inputThreshold":
            inputThreshold = value as! Float
        case "inputPalette":
            inputPalette = value as? CIImage
        case "inputOriginal":
            inputOriginal = value as? CIImage
        case "inputFlipped":
            inputFlipped = value as? CIImage
        default:
            break
        }
    }

    // CIFilter declares outputImage as CIImage?, so the override must be optional too.
    override var outputImage: CIImage? {
        return kernel?.apply(
            extent: inputPalette.extent,
            roiCallback: { (index, rect) in return rect },
            arguments: [
                inputThreshold as Any,
                inputPalette as Any,
                inputOriginal as Any,
                inputFlipped as Any
            ])
    }
}
Finally, the CIKernel code:
kernel vec4 stitch(float threshold, sampler palette, sampler original, sampler flipped) {
    vec2 coord = destCoord();
    if (coord.x < threshold) {
        return sample(original, samplerCoord(original));
    } else {
        vec2 flippedCoord = coord - vec2(threshold, 0.0);
        vec2 flippedCoordinate = samplerTransform(flipped, flippedCoord);
        return sample(flipped, flippedCoordinate);
    }
}
Now, someone else may have something more elegant - maybe even using an existing CIFilter - but this works well. It runs entirely on the GPU, so, performance-wise, it can be used in "real time". I added code that isn't strictly needed (registering the filter, using a dictionary to define attributes) to make it more of a teaching exercise for those new to creating CIKernels, so that anyone who knows how to use CIFilters can consume it. If you focus on the kernel code, you'll recognize how similar to C it looks.
Last, a caveat. I am only stitching the (Y-axis) flipped image to the right of the original. You'll need to adjust things if you want something else.
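For reference, one possibly more elegant route of that kind, sketched here with illustrative names and untested against this project: translate the flipped copy to the right of the original and use CIImage's composited(over:) (the built-in source-over composite), whose output extent is the union of the two input extents:

// Sketch only: assumes `original` and `flipped` are CIImages with
// identical extents anchored at the origin.
let shiftRight = CGAffineTransform(translationX: original.extent.width, y: 0)
let stitched = flipped.transformed(by: shiftRight).composited(over: original)
// stitched.extent is now twice as wide as original.extent.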
I am trying to move an SCNNode that I placed onto a surface. It moves, but the scale changes and it becomes smaller when I first start to move it.
This is what I did:
@IBAction func dragBanana(_ sender: UIPanGestureRecognizer) {
    guard let _ = self.sceneView.session.currentFrame else { return }

    if sender.state == .began {
        let location = sender.location(in: self.sceneView)
        let hitTestResult = sceneView.hitTest(location, options: nil)
        if !hitTestResult.isEmpty {
            guard let hitResult = hitTestResult.first else { return }
            movedObject = hitResult.node
        }
    }

    if sender.state == .changed {
        if movedObject != nil {
            let location = sender.location(in: self.sceneView)
            let hitTestResult = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
            guard let hitResult = hitTestResult.first else { return }
            let matrix = SCNMatrix4(hitResult.worldTransform)
            let vector = SCNVector3Make(matrix.m41, matrix.m42, matrix.m43)
            movedObject?.position = vector
        }
    }

    if sender.state == .ended {
        movedObject = nil
    }
}
My answer is probably very late, but I faced this issue myself, and it took me a while to figure out why it might happen. I'll share my experience; maybe you can relate to it.
My problem was that I was trying to change the position of the node after changing its scale at runtime (most of my 3D assets were very large when added, so I scale them down with a pinch gesture). I noticed that changing the scale was the cause of the position change not working as expected.
I found a very simple solution to this. You simply need to change this line:
movedObject?.position = vector
to this:
movedObject?.worldPosition = vector
According to the SCNNode documentation, the position property determines the position of the node relative to its parent, while worldPosition is the position of the node relative to the scene's root node (i.e., the world origin of the ARSCNView).
I hope this answers your question.
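(If you need to support versions before iOS 11, where the worldPosition setter isn't available, an equivalent conversion is sketched below; convertPosition(_:from:) with a nil node interprets the vector as world-space coordinates.)

// Sketch: map the world-space hit-test vector into the parent's
// coordinate space before assigning it to position.
if let parent = movedObject?.parent {
    movedObject?.position = parent.convertPosition(vector, from: nil)
}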
It's because you're moving the object on all three axes, and the Z value changes; that's why it feels like it's scaling, but it's only getting closer to you.
I am trying to make a sprite move towards a point while avoiding some obstacles. The graph that I am using is a GKObstacleGraph, obtained from this scene:
For simplicity I decided not to use circular physics bodies for the obstacles; the sprites' bounds are enough for now. So this is how I created the graph:
lazy var graph: GKObstacleGraph? = {
    guard let spaceship = self.spaceship else { return nil }
    let obstacles = SKNode.obstacles(fromNodeBounds: self.rocks)
    let graph = GKObstacleGraph(obstacles: obstacles, bufferRadius: Float(spaceship.size.width))
    return graph
}()
When the user taps on any location of the scene, the spaceship should start moving toward that position by avoiding the obstacles:
func tap(locationInScene location: CGPoint) {
    guard let graph = self.graph else { return }
    guard let spaceship = self.spaceship else { return }

    let startNode = GKGraphNode2D(point: vector_float2(withPoint: spaceship.position))
    let endNode = GKGraphNode2D(point: vector_float2(withPoint: location))

    graph.connectUsingObstacles(node: startNode)
    graph.connectUsingObstacles(node: endNode)

    let path = graph.findPath(from: startNode, to: endNode)
    print(path)

    let goal = GKGoal(toFollow: GKPath(graphNodes: path, radius: Float(spaceship.size.width)),
                      maxPredictionTime: 1.0, forward: true)
    let behavior = GKBehavior(goal: goal, weight: 1.0)
    let agent = GKAgent2D()
    agent.behavior = behavior

    graph.remove([startNode, endNode])
    spaceship.entity?.addComponent(agent)

    // This is necessary. I don't know why, but if I don't do that, I
    // end up having a duplicate entity in my scene's entities array.
    self.entities = [spaceship.entity!]
}
But when I tap on a point on the scene, the spaceship starts moving indefinitely upwards. I tried to print the position of the GKAgent at every frame, and this is what I got:
With very low values, the spaceship still keeps moving upwards without stopping, ever.
This is my project on GitHub.
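One thing worth checking when an agent drifts like this (a sketch with illustrative names, not a confirmed fix for this project): a freshly created GKAgent2D starts at position (0, 0), is only simulated when update(deltaTime:) is called on it each frame, and its simulated position has to be mirrored back onto the sprite, typically via GKAgentDelegate:

import SpriteKit
import GameplayKit

// Sketch: keep an agent and a sprite in sync. The class and property
// names here are illustrative, not taken from the project above.
class SpriteAgentSync: NSObject, GKAgentDelegate {
    let sprite: SKSpriteNode
    init(sprite: SKSpriteNode) { self.sprite = sprite }

    func agentWillUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        // Push the sprite's current position into the agent before it simulates,
        // so the simulation doesn't start from (0, 0).
        agent2D.position = vector_float2(Float(sprite.position.x), Float(sprite.position.y))
    }

    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent2D = agent as? GKAgent2D else { return }
        // Pull the simulated position back onto the sprite.
        sprite.position = CGPoint(x: CGFloat(agent2D.position.x), y: CGFloat(agent2D.position.y))
    }
}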
An SKSpriteNode's SKColor can be created with hue, saturation, brightness & alpha:
let myColor = SKColor(hue: 0.5, saturation: 1, brightness: 1, alpha: 1)
mySprite.color = myColor
How do I get at the hue of an SKSpriteNode and make a change to it? E.g., divide it by 2.
An SKSpriteNode is a node that draws a texture (optionally blended with a color), an image, or a colored square. So, this is its nature.
When you make an SKSpriteNode, it has an instance property, also called texture, that represents the texture used to draw the sprite.
Since iOS 9, we have been able to retrieve an image from a texture, as in the code below. In this example I call my SKSpriteNode spriteBg:
let spriteBg = SKSpriteNode(texture: SKTexture(imageNamed: "myImage.png"))

if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
    } else {
        // Fallback on earlier versions
    }
}
Following this interesting answer, we can translate it to a more comfortable Swift 3 version:
func imageWith(source: UIImage, rotatedByHue: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = CIImage(cgImage: source.cgImage!)

    // Apply a CIHueAdjust filter.
    guard let hueAdjust = CIFilter(name: "CIHueAdjust") else { return source }
    hueAdjust.setDefaults()
    hueAdjust.setValue(sourceCore, forKey: "inputImage")
    hueAdjust.setValue(rotatedByHue, forKey: "inputAngle")
    guard let resultCore = hueAdjust.outputImage else { return source }

    // Render the result back to a CGImage, then wrap it in a UIImage.
    let context = CIContext(options: nil)
    guard let resultRef = context.createCGImage(resultCore, from: resultCore.extent) else { return source }
    return UIImage(cgImage: resultRef)
}
So, finally with the previous code we can do:
if let txt = spriteBg.texture {
    if #available(iOS 9.0, *) {
        let image: UIImage = UIImage(cgImage: txt.cgImage())
        let changedImage = imageWith(source: image, rotatedByHue: 0.5)
        spriteBg.texture = SKTexture(image: changedImage)
    } else {
        // Fallback on earlier versions
    }
}
I'm not in a place to be able to test this right now, but looking at the UIColor documentation (UIColor and SKColor are basically the same thing), you should be able to use the .getHue(...) function to retrieve the color's components, make changes to them, and then set the SKSpriteNode's color property to the new value. The .getHue(...) function "returns the components that make up the color in the HSB color space."
https://developer.apple.com/reference/uikit/uicolor/1621949-gethue
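A minimal sketch of that approach (untested, and assuming iOS, where SKColor is UIColor), halving the hue of a sprite named mySprite as in the question:

var hue: CGFloat = 0, saturation: CGFloat = 0, brightness: CGFloat = 0, alpha: CGFloat = 0

// getHue(...) returns true when the color can be expressed in HSB.
if mySprite.color.getHue(&hue, saturation: &saturation, brightness: &brightness, alpha: &alpha) {
    mySprite.color = SKColor(hue: hue / 2, saturation: saturation, brightness: brightness, alpha: alpha)
    mySprite.colorBlendFactor = 1.0 // the color only shows on a textured sprite when blended in
}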