Performant way to determine if paths are intersecting - Swift

I have an app where the user can draw numbers, I'm currently trying to differentiate between digits. My idea was to group overlapping lines by checking if they intersect. Currently it takes all the points and draws a SwiftUI Path.
The problem is that the paths contain a lot of points. The arm of the 4 contains 49 points, the stalk of the 4 has 30, and the 2 has 82 points. This makes comparing all the line segments for an intersection very expensive.
I have two questions:
Are there any Swift functions to reduce the number of points while retaining the overall shape?
Are there any good methods for quickly determining whether complex paths intersect?

For curve simplification, start with Ramer–Douglas–Peucker. For this problem, that might get you most of the way there all by itself.
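A minimal sketch of what that looks like over plain point arrays (the `Point` struct and the epsilon value are illustrative, not from the question's code):

```swift
struct Point {
    var x: Double
    var y: Double
}

// Perpendicular distance from p to the infinite line through a and b.
func perpendicularDistance(_ p: Point, _ a: Point, _ b: Point) -> Double {
    let dx = b.x - a.x, dy = b.y - a.y
    let mag = (dx * dx + dy * dy).squareRoot()
    guard mag > 0 else {
        // a == b: fall back to point-to-point distance
        return ((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y)).squareRoot()
    }
    return abs(dx * (p.y - a.y) - dy * (p.x - a.x)) / mag
}

// Ramer–Douglas–Peucker: recursively drop points that deviate from the
// chord between the endpoints by less than epsilon.
func simplify(_ points: [Point], epsilon: Double) -> [Point] {
    guard points.count > 2 else { return points }
    var maxDist = 0.0
    var index = 0
    for i in 1..<(points.count - 1) {
        let d = perpendicularDistance(points[i], points[0], points[points.count - 1])
        if d > maxDist {
            maxDist = d
            index = i
        }
    }
    if maxDist > epsilon {
        // The farthest point is significant: keep it and recurse on both halves.
        let left = simplify(Array(points[...index]), epsilon: epsilon)
        let right = simplify(Array(points[index...]), epsilon: epsilon)
        return Array(left.dropLast()) + right
    } else {
        // Everything between the endpoints is within tolerance: drop it all.
        return [points[0], points[points.count - 1]]
    }
}
```

Larger epsilon values remove more points; for hand-drawn strokes you would tune epsilon against your canvas scale.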
Next, consider the bounding boxes of the curves. For a Path, this is the boundingRect property. If two bounding boxes do not overlap, then the paths cannot intersect. If all your curves are composed of straight lines, this boundingRect should work well. If you also use quad or cubic curves, you may want to investigate path.cgPath.boundingBoxOfPath, which will be a tighter box (it doesn't include the control points).
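The pre-check pattern can be sketched with plain value types (`Box` and `Pt` here are illustrative stand-ins for CGRect and CGPoint, and the crossing test is the standard orientation check, not anything from the question):

```swift
struct Box {
    var minX, minY, maxX, maxY: Double
}

struct Pt {
    var x, y: Double
}

// Cheap reject: if two axis-aligned boxes don't overlap, the paths inside them
// cannot intersect and no segment tests are needed.
func boxesOverlap(_ a: Box, _ b: Box) -> Bool {
    a.minX <= b.maxX && b.minX <= a.maxX && a.minY <= b.maxY && b.minY <= a.maxY
}

// Twice the signed area of triangle a-b-c: positive when c lies left of line a->b.
func cross(_ a: Pt, _ b: Pt, _ c: Pt) -> Double {
    (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x)
}

// Proper crossing of segments p1-p2 and q1-q2: each segment's endpoints must lie
// on opposite sides of the other segment (collinear touching is ignored here).
func segmentsIntersect(_ p1: Pt, _ p2: Pt, _ q1: Pt, _ q2: Pt) -> Bool {
    cross(p1, p2, q1) * cross(p1, p2, q2) < 0 &&
        cross(q1, q2, p1) * cross(q1, q2, p2) < 0
}
```

Only pairs of paths whose boxes overlap need the segment-by-segment comparison, which is also where simplifying the curves first pays off.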
I expect those approaches will be the best way, but another approach is to draw the paths and then look for the intersections. For example, you can create a small context and draw one curve in red and another in blue with a .screen blend mode. Then scan for any purple. (You can easily make this work for three curves simultaneously. With some linear algebra, it may be possible to scale to more simultaneous curves.)
This approach is an approximation, and its accuracy can be tuned by changing the size of the context. I expect an approximation is going to be fine (and even preferable) for your problem.
Here is a slapped together implementation just to show how it can work. I haven't given any thought to making this clean; the point is just the technique:
let width = 64
let height = 64

let context = CGContext(data: nil,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: width * 4,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)!
context.setShouldAntialias(false)
context.setBlendMode(.screen)

let path1 = UIBezierPath(rect: CGRect(x: 10, y: 20, width: 50, height: 8))
context.addPath(path1.cgPath)
context.setFillColor(UIColor.blue.cgColor)
context.drawPath(using: .fill)

let path2 = UIBezierPath(rect: CGRect(x: 40, y: 0, width: 8, height: 50))
context.addPath(path2.cgPath)
context.setFillColor(UIColor.red.cgColor)
context.drawPath(using: .fill)

let data = context.data!.bindMemory(to: UInt8.self, capacity: width * height * 4)
for i in stride(from: 0, to: width * height * 4, by: 4) {
    if data[i + 1] == 255 && data[i + 3] == 255 {
        print("Found overlap!")
        break
    }
}
I've turned off anti-aliasing here for consistency (and speed). With anti-aliasing, you may get partial colors. On the other hand, turning on anti-aliasing and adjusting what range of colors you treat as "overlap" may lead to more accurate results.
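With anti-aliasing enabled, the per-pixel check becomes a threshold comparison instead of an exact match. A sketch of that idea (the cutoff of 128 is an arbitrary assumption you would tune):

```swift
// With anti-aliasing, edge pixels carry partial channel values, so count a pixel
// as overlap when both the red and blue channels clear a cutoff.
func isOverlapPixel(red: UInt8, blue: UInt8, cutoff: UInt8 = 128) -> Bool {
    red >= cutoff && blue >= cutoff
}
```

A higher cutoff ignores faint anti-aliased fringes; a lower one treats near-misses as intersections.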

CGContext issue: neighboring paths drawn individually don't blend together

This is my first post on Stack Overflow, so pardon any etiquette problems I'm not aware of.
I am creating a simple drawing engine for one of my apps. It is mostly based on Apple's Touch Canvas demo for the Apple Pencil:
https://developer.apple.com/documentation/uikit/touches_presses_and_gestures/illustrating_the_force_altitude_and_azimuth_properties_of_touch_input
That engine works by stroking paths; since I need to vary the width smoothly, I decided to do it by filling paths instead. The logic to create that path is very similar to this cocos2d project:
https://github.com/krzysztofzablocki/LineDrawing
I'm very happy with the performance of the engine at the moment, so I decided to be more ambitious: introduce variable opacity according to force. To do this, instead of drawing the whole path from the stroke, I subdivide it into segments. A segment looks like this when drawn as a stroked line:

I draw each segment path one by one; in filled form, it looks like this:
So no issues, right? Well, if you have an iPad (or zoom in), you notice a problem:

The paths are actually stitched together! There is no smooth transition; it seems to be an anti-aliasing problem. What I noticed is that if I very slightly offset the beginning of a new segment into the previous segment (right now, the end of the previous segment is the start of the new one), it blends better, sometimes making the stitches disappear completely. But this is very tricky: while it works with colors at full opacity, the same constant might not work when you reduce the opacity, and results sometimes vary between the simulator and a real device.
I'm a bit lost here. Is there any way to smooth the transition between segments?
This is my drawing routine. Right now I'm using gradients, but I already tried filling the path with a solid color and no clipping, and got the same results:
for (index, subpath) in pathResults.subPaths.enumerated() {
    // If this isn't the start of a line, the first segment is history kept for smoothing; don't draw it
    if !startOfLine && index == 0 {
        continue
    }
    let colors = [properties.color.cgColor.copy(alpha: opacityForPoint(subpath.startPoint)),
                  properties.color.cgColor.copy(alpha: opacityForPoint(subpath.endPoint))]
                 as CFArray
    if let gradient = CGGradient(colorsSpace: context.colorSpace,
                                 colors: colors,
                                 locations: [0.0, 1.0]) {
        context.addPath(subpath.subpath)
        context.clip()
        context.drawLinearGradient(gradient,
                                   start: subpath.startPoint.properties.location,
                                   end: subpath.endPoint.properties.location,
                                   options: gradientOptions)
        context.resetClip()
    }
}
this is the code for the context I draw in:
private lazy var frozenContext: CGContext = {
    let scale = self.properties.scale
    var size = self.properties.size
    size.width *= scale
    size.height *= scale
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context: CGContext = CGContext(data: nil,
                                       width: Int(size.width),
                                       height: Int(size.height),
                                       bitsPerComponent: 8,
                                       bytesPerRow: 0,
                                       space: colorSpace,
                                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    let transform = CGAffineTransform(scaleX: scale, y: scale)
    context.concatenate(transform)
    return context
}()

How to scale SCNNodes to fit in a box?

I have multiple Collada files with objects (humans) of various sizes, created from different 3D program sources. I want to scale the objects so they fit inside a frame or box. From my reading, I can't use the bounding box to scale the node, so what feature do you use to scale the nodes relative to each other?
// humanNode = {...get node, which is some unknown size }
let (minBound, maxBound) = humanNode.boundingBox
let blockNode = SCNNode(geometry: SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0))
// calculate scale factor so it fits inside the box without knowing its size beforehand
s = { ...some method to calculate the scale to fit the humanNode into the box }
humanNode.scale = SCNVector3Make(s, s, s)
How do I get its size relative to the literal box I want to put it in, and scale it accordingly?
Is it possible to draw the node off screen to measure its size?
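One way to sketch the missing scale computation (my assumption, not from the post: take the largest bounding-box extent and divide the target box side by it):

```swift
// Sketch: a uniform scale factor that fits an object's bounding box, given as
// min/max corners (as from SCNNode.boundingBox), into a cube with side boxSide.
func fitScale(minBound: (x: Float, y: Float, z: Float),
              maxBound: (x: Float, y: Float, z: Float),
              boxSide: Float) -> Float {
    let extents = [maxBound.x - minBound.x,
                   maxBound.y - minBound.y,
                   maxBound.z - minBound.z]
    // Degenerate (zero-size) geometry: leave the scale alone.
    guard let largest = extents.max(), largest > 0 else { return 1 }
    return boxSide / largest
}
```

The result would then be applied uniformly, as in the snippet above with SCNVector3Make(s, s, s).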

Swift SceneKit — physical blocks do not stick to each other

Blocks just crumble apart.
How can this problem be solved?
Initializing blocks:
var boxNode = SCNNode(geometry: SCNBox(width: 0.75, height: 0.15, length: 0.25, chamferRadius: 0))
boxNode.position = SCNVector3(x: x1, y: y, z: z1)
boxNode.geometry?.firstMaterial = SCNMaterial()
boxNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "wood.jpg")
boxNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
boxNode.eulerAngles.y = Float(Double.pi / 2) * rotation
boxNode.physicsBody?.friction = 1
boxNode.physicsBody?.mass = 0.5
boxNode.physicsBody?.angularDamping = 1.0
boxNode.physicsBody?.damping = 1
I won't be able to tell you how to fix it as I have the exact same problem which I wasn't able to solve. However, as I played around I figured out a couple of things (which you may find useful):
The same problem hasn't happened to me in pure SceneKit, hence I think it's a bug in ARKit
Node with physics has to be added to the rootNode of the scene, otherwise odd stuff happens (elements passing through each other, gravity behaving in an inconsistent way)
If you pass nil as the shape parameter, SceneKit will figure out a bounding shape based on the geometry of the node. This hasn't worked properly for me, so what I've done (using the SceneKit editor) was to duplicate the geometry and then set it as a custom shape for the physics body (have a look at the attached image)
Overall I've found physics simulation in SceneKit when used with ARKit to be extremely buggy and I spent a lot of time "tricking" it into working more-or-less how I wanted it to work.

SceneKit – Rotate and animate a SCNNode

I'm trying to display a pyramid that points along the z-axis and then rotates on itself around z too.
As my camera is on the z-axis, I'm expecting to see the pyramid from above. I managed to rotate the pyramid to see it this way, but when I add the animation it seems to rotate around multiple axes.
Here is my code:
// The following create the pyramid and place it how I want
let pyramid = SCNPyramid(width: 1.0, height: 1.0, length: 1.0)
let pyramidNode = SCNNode(geometry: pyramid)
pyramidNode.position = SCNVector3(x: 0, y: 0, z: 0)
pyramidNode.rotation = SCNVector4(x: 1, y: 0, z: 0, w: Float(M_PI / 2))
scene.rootNode.addChildNode(pyramidNode)
// But the animation seems to rotate around 2 axes and not just z
var spin = CABasicAnimation(keyPath: "rotation")
spin.byValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 2*Float(M_PI)))
spin.duration = 3
spin.repeatCount = HUGE
pyramidNode.addAnimation(spin, forKey: "spin around")
Trying to both manually set and animate the same property can cause issues. Using a byValue animation makes the problem worse -- that concatenates to the current transform, so it's harder to keep track of whether the current transform is what the animation expects to start with.
Instead, separate the fixed orientation of the pyramid (its apex is in the -z direction) from the animation (it spins around the axis it points in). There are two good ways to do this:
Make pyramidNode the child of another node that gets the one-time rotation (π/2 around x-axis), and apply the spin animation directly to pyramidNode. (In this case, the apex of the pyramid will still point in the +y direction of its local space, so you'll want to spin around that axis instead of the z-axis.)
Use the pivot property to transform the local space of pyramidNode's contents, and animate pyramidNode relative to its containing space.
Here's some code to show the second approach:
let pyramid = SCNPyramid(width: 1.0, height: 1.0, length: 1.0)
let pyramidNode = SCNNode(geometry: pyramid)
pyramidNode.position = SCNVector3(x: 0, y: 0, z: 0)
// Point the pyramid in the -z direction
pyramidNode.pivot = SCNMatrix4MakeRotation(CGFloat(M_PI_2), 1, 0, 0)
scene.rootNode.addChildNode(pyramidNode)
let spin = CABasicAnimation(keyPath: "rotation")
// Use from-to to explicitly make a full rotation around z
spin.fromValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 0))
spin.toValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: CGFloat(2 * M_PI)))
spin.duration = 3
spin.repeatCount = .infinity
pyramidNode.addAnimation(spin, forKey: "spin around")
Some unrelated changes to improve code quality:
Use CGFloat when explicit conversion is required to initialize an SCNVector component; using Float or Double specifically will break on 32 or 64 bit architecture.
Use .infinity instead of the legacy BSD math constant HUGE. This type-infers to whatever the type of spin.repeatCount is, and uses a constant value that's defined for all floating-point types.
Use M_PI_2 for π/2 to be pedantic about precision.
Use let instead of var for the animation, since we never assign a different value to spin.
More on the CGFloat error business: In Swift, numeric literals have no type until the expression they're in needs one. That's why you can do things like spin.duration = 3 -- even though duration is a floating-point value, Swift lets you pass an "integer literal". But if you do let d = 3; spin.duration = d you get an error. Why? Because variables/constants have explicit types, and Swift doesn't do implicit type conversion. The 3 is typeless, but when it gets assigned to d, type inference defaults to choosing Int because you haven't specified anything else.
If you're seeing type conversion errors, you probably have code that mixes literals, constants, and/or values returned from functions. You can probably just make the errors go away by converting everything in the expression to CGFloat (or whatever the type you're passing that expression to is). Of course, that'll make your code unreadable and ugly, so once you get it working you might start removing conversions one at a time until you find the one that does the job.
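A toy illustration of that inference behavior (nothing SceneKit-specific; the names are made up):

```swift
var duration: Double = 0
duration = 3          // fine: the typeless literal 3 becomes a Double in this context

let d = 3             // no context here, so type inference defaults d to Int
// duration = d       // error: cannot assign value of type 'Int' to type 'Double'
duration = Double(d)  // an explicit conversion is required instead
```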
SceneKit includes animation helpers which are much simpler & shorter to use than CAAnimations. This is ObjC but gets across the point:
[pyramidNode runAction:
    [SCNAction repeatActionForever:
        [SCNAction rotateByX:0 y:0 z:2*M_PI duration:3]]];
I changed byValue to toValue and this worked for me. So change the line...
spin.byValue = NSValue(SCNVector4: SCNVector4(...
Change it to...
spin.toValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 2 * Float(M_PI)))

In libgdx's Batch interface, what are these arguments used for?

I'm trying to figure out what all these arguments do, as when I draw my bullet image it appears as a solid block instead of a sprite that alternates between solid color and an empty portion (i.e. instead of 10101 it's 11111, with 0s being empty parts in the texture).
Before, I was using batch.draw(texture, float x, float y) and it displayed the texture correctly. However, I was playing around with rotation, and this is the version of draw that seemed most suitable:
batch.draw(texture, x, y, originX, originY, width, height, scaleX, scaleY, rotation, srcX, srcY, srcWidth, srcHeight, flipX, flipY)
I can figure out the obvious ones, originX and originY (the location to draw the image from, its upper-left pixel, I believe); however, I then don't know what the x, y coordinates after texture are for.
scaleX, scaleY, rotation, and flipX, flipY I know what to do with, but what are srcX and srcY, along with srcWidth and srcHeight, for?
Edit: I played around and figured out what srcX, srcY and srcWidth, srcHeight do. I cannot figure out what originX, originY do, even though I'm guessing it's the center point of the image. Since I don't want to play around with this one anyway, should I leave it as 0,0?
What would be common uses for manipulating the centerpoint of images?
Answering main question.
srcX, srcY, srcWidth, and srcHeight determine which part (rectangle) of the source texture you want to draw. For example, say your source image is 100x100 pixels and you want to draw only a 60x60 part in the middle of it:
batch.draw(texture, x, y, 20, 20, 60, 60);
Answering your edited question.
Origin is the center point for rotation and scale transformations. So if you want your sprite to scale and rotate around its center point, set the origin values like so:
float originX = width * 0.5f;
float originY = height * 0.5f;
If you don't care about rotation and scaling, you don't need to specify these params (leave them 0).
And keep in mind that the origin does not determine the image's drawing position (this is the most common mistake). That means the next two method calls draw the image at the same position (the fourth and fifth params are originX and originY):
batch.draw(image, x, y, 0, 0, width, height, ...);
batch.draw(image, x, y, 50, 50, width, height, ...);
According to the documentation, the parameters are as defined:
srcX - the x-coordinate in texel space
srcY - the y-coordinate in texel space
srcWidth - the source width in texels
srcHeight - the source height in texels