How to scale SCNNodes to fit in a box? - swift

I have multiple Collada files with objects (humans) of various sizes, created from different 3D program sources. I want to scale the objects so they fit inside a frame or box. From my reading, I can't use the bounding box alone to scale the node, so what technique do you use to scale the nodes relative to each other?
// humanNode = {...get node, which is some unknown size }
let (minBound, maxBound) = humanNode.boundingBox
let blockNode = SCNNode(geometry: SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0))
// calculate the scale factor so it fits inside the box without knowing its size beforehand
let s = { ...some method to calculate the scale to fit the humanNode into the box }
humanNode.scale = SCNVector3Make(s, s, s)
How do I get its size relative to the box I want to fit it into, and scale it accordingly?
Is it possible to draw the node off screen to measure its size?
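For what it's worth, here is a minimal sketch of one common approach (my illustration, not from the original thread): measure the bounding-box extents and scale by the ratio of the box side to the largest extent. It assumes the node has no prior scaling, and that SCNVector3 components are Float, as on iOS:

let (minBound, maxBound) = humanNode.boundingBox
let extents = SCNVector3(maxBound.x - minBound.x,
                         maxBound.y - minBound.y,
                         maxBound.z - minBound.z)
// Fit the largest dimension into the 10 x 10 x 10 box, preserving proportions.
let boxSide: Float = 10
let s = boxSide / max(extents.x, extents.y, extents.z)
humanNode.scale = SCNVector3Make(s, s, s)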

Related

Performant way to determine if paths are intersecting

I have an app where the user can draw numbers, and I'm currently trying to differentiate between digits. My idea was to group overlapping lines by checking whether they intersect. Currently the code takes all the points and draws a SwiftUI Path.
The problem is that the paths contain a lot of points. The arm of the 4 contains 49 points, the stalk of the 4 has 30, and the 2 has 82 points. This makes comparing all the line segments for an intersection very expensive.
I have two questions:
Are there any Swift functions to reduce the number of points while retaining the overall shape?
Are there any good methods for quickly determining whether complex paths intersect?
For curve simplification, start with Ramer–Douglas–Peucker. For this problem, that might get you most of the way there all by itself.
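For illustration, here is a minimal RDP sketch (my own, not from the original answer; the simplify and distance functions are illustrative names, and tolerance is the maximum allowed deviation, in points, from the simplified polyline):

import UIKit

// Minimal Ramer–Douglas–Peucker: keep the endpoints, and recursively keep
// the farthest interior point whenever it deviates more than `tolerance`.
func simplify(_ points: [CGPoint], tolerance: CGFloat) -> [CGPoint] {
    guard points.count > 2 else { return points }
    let first = points.first!
    let last = points.last!

    // Find the interior point farthest from the segment first-last.
    var maxDistance: CGFloat = 0
    var maxIndex = 0
    for i in 1..<(points.count - 1) {
        let d = distance(from: points[i], toSegment: (first, last))
        if d > maxDistance {
            maxDistance = d
            maxIndex = i
        }
    }

    // Everything is within tolerance: collapse the run to one segment.
    if maxDistance <= tolerance { return [first, last] }

    // Otherwise recurse on both halves and splice, dropping the duplicate point.
    let left = simplify(Array(points[...maxIndex]), tolerance: tolerance)
    let right = simplify(Array(points[maxIndex...]), tolerance: tolerance)
    return Array(left.dropLast()) + right
}

func distance(from p: CGPoint, toSegment segment: (a: CGPoint, b: CGPoint)) -> CGFloat {
    let (a, b) = segment
    let dx = b.x - a.x
    let dy = b.y - a.y
    let length = hypot(dx, dy)
    if length == 0 { return hypot(p.x - a.x, p.y - a.y) } // degenerate segment
    // Distance from p to the infinite line through a and b.
    return abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / length
}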
Next, consider the bounding boxes of the curves. For a Path, this is the boundingRect property. If two bounding boxes do not overlap, then the paths cannot intersect. If all your curves are composed of straight lines, this boundingRect should work well. If you also use quad or cubic curves, you may want to investigate path.cgPath.boundingBoxOfPath, which will be a tighter box (it doesn't include the control points).
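As a sketch, assuming two SwiftUI Path values with the illustrative names pathA and pathB:

import SwiftUI

// Cheap rejection test before any per-segment work: if the bounding
// boxes don't intersect, the paths cannot intersect.
func mightIntersect(_ pathA: Path, _ pathB: Path) -> Bool {
    pathA.boundingRect.intersects(pathB.boundingRect)
}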
I expect those approaches will be the best way, but another approach is to draw the paths and then look for the intersections. For example, you can create a small context and draw one curve in red and another in blue with a .screen blend mode. Then scan for any purple. (You can easily make this work for three curves simultaneously. With some linear algebra, it may be possible to scale to more simultaneous curves.)
This approach is an approximation, and its accuracy can be tuned by changing the size of the context. I expect an approximation is going to be fine (and even preferable) for your problem.
Here is a slapped-together implementation just to show how it can work. I haven't given any thought to making this clean; the point is just the technique:
let width = 64
let height = 64

let context = CGContext(data: nil,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: width * 4,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)!
context.setShouldAntialias(false)
context.setBlendMode(.screen)

let path1 = UIBezierPath(rect: CGRect(x: 10, y: 20, width: 50, height: 8))
context.addPath(path1.cgPath)
context.setFillColor(UIColor.blue.cgColor)
context.drawPath(using: .fill)

let path2 = UIBezierPath(rect: CGRect(x: 40, y: 0, width: 8, height: 50))
context.addPath(path2.cgPath)
context.setFillColor(UIColor.red.cgColor)
context.drawPath(using: .fill)

// The pixel layout is XRGB (alpha skipped), so byte 1 is red and byte 3 is blue.
// Red over blue with .screen blending produces purple, which marks an overlap.
let data = context.data!.bindMemory(to: UInt8.self, capacity: width * height * 4)
for i in stride(from: 0, to: width * height * 4, by: 4) {
    if data[i + 1] == 255 && data[i + 3] == 255 {
        print("Found overlap!")
        break
    }
}
I've turned off anti-aliasing here for consistency (and speed). With anti-aliasing, you may get partial colors. On the other hand, turning on anti-aliasing and adjusting what range of colors you treat as "overlap" may lead to more accurate results.
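For instance, the exact-match test in the loop above could become a thresholded one (a sketch reusing data, width, and height from the snippet above; the threshold value is illustrative and would need tuning):

let threshold: UInt8 = 128 // illustrative; tune against your context size
for i in stride(from: 0, to: width * height * 4, by: 4) {
    // Count a pixel as overlap when both the red and blue bytes clear the threshold.
    if data[i + 1] > threshold && data[i + 3] > threshold {
        print("Found overlap!")
        break
    }
}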

Is it possible to define a 3D model/prefab size in Unity?

Is there a way I can define a 3D model's size in Unity, like height = 1, width = 3, depth = 3?
I want the model to take up a defined space in Unity's scene, no matter how big or small I make the FBX in Blender. So I can't just use scale, since changing the model's size in Blender would break that scaling.
I need it to be a block 3 wide, 3 long, and 1 high, regardless of the model size exported from Blender. Is it possible?
The same question from another angle: how do I set a model's size in Unity? There is only a scale setting, no size setting. This looks weird.
So far I have found a workaround: getting the object's rendered bounds and adjusting the scale factor in a script, but this doesn't seem right to me.
You can use Mesh.bounds to get the 3D model's size without any scaling applied.
Then recalculate the scale according to your needs, e.g.
// The desired scales
// x = width
// y = height
// z = depth
var targetScale = new Vector3(3, 1, 3);
var meshFilter = GetComponent<MeshFilter>();
var mesh = meshFilter.mesh;
// This would be equal to the 3D bounds if scale was 1,1,1
var meshBounds = mesh.bounds.size;
// This would make the model have world scale 1,1,1
var invertMeshBounds = new Vector3(1/meshBounds.x, 1/meshBounds.y, 1/meshBounds.z);
// Use this if you want exactly the scale 3,1,3 but maybe stretching the fbx
var finalScale = Vector3.Scale(invertMeshBounds, targetScale);
As I understand it, you want to keep the model's relative proportions but make it fit within the defined targetScale, so I would use the smallest of the three values as the scaling factor:
var minFactor = Mathf.Min(finalScale.x, finalScale.y);
minFactor = Mathf.Min(minFactor, finalScale.z);
transform.localScale = Vector3.one * minFactor;

How to handle 3D object size and its position in ARKit

I am having difficulty with a 3D object's size and its x, y, z positioning. I added the 3D object to the sceneView, but it is too big. How do I reduce the object's size to what I need, and how do I control its x, y, z position?
I am using Swift to code.
Each SCNNode has a scale property:
Each component of the scale vector multiplies the corresponding dimension of the node's geometry. The default scale is 1.0 in all three dimensions. For example, applying a scale of (2.0, 0.5, 2.0) to a node containing a cube geometry reduces its height and increases its width and depth.
Which can be set as follows:
var scale: SCNVector3 { get set }
If, for example, your node was called myNode, you could thus use the following to scale it to 1/10 of its original size:
myNode.scale = SCNVector3(0.1, 0.1, 0.1)
Regarding positioning SCNNodes, this can be achieved by setting the position property:
The node's position locates it within the coordinate system of its parent, as modified by the node's pivot property. The default position is the zero vector, indicating that the node is placed at the origin of the parent node's coordinate system.
If, therefore, you wanted to add your SCNNode at the center of the world origin, 1m away from the camera, you could use the following:
myNode.position = SCNVector3(0, 0, -1)
Hope it helps...

Getting SCNNode Bounding Size in Meters

I'm trying to wrap an SCNPlane around an SCNNode. I'm using ARKit, so everything is measured in meters, but when I get the boundingBox, I get measurements in some other unit. I looked at Apple's documentation, and they don't specify what the units are.
For example, one of the nodes is roughly 3 meters wide, but it reports 26 units.
I could do a rough division to get a constant and use that to do the unit conversions, but I was wondering if there's a less hacky way to do it?
let textContainerSize = textBodyNode.boundingBox
let xSize = textContainerSize.max.x - textContainerSize.min.x
let ySize = textContainerSize.max.y - textContainerSize.min.y
print("size")
print(xSize, ySize) // <-- prints (26, 2)
// SCNPlane takes CGFloat, while SCNVector3 components are Float on iOS, so convert.
let planeGeometry = SCNPlane(width: CGFloat(xSize), height: CGFloat(ySize))
One SceneKit unit is one meter in ARKit, but boundingBox is defined in the node's local coordinate system. So your node probably has a parent with a scale different from 1.
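If you need the size in meters directly, one option (a sketch; valid as written only when the node isn't rotated relative to the world axes) is to convert the bounding-box corners to world space:

let (minB, maxB) = textBodyNode.boundingBox
let worldMin = textBodyNode.convertPosition(minB, to: nil) // nil means world space
let worldMax = textBodyNode.convertPosition(maxB, to: nil)
let widthInMeters = abs(worldMax.x - worldMin.x)
let heightInMeters = abs(worldMax.y - worldMin.y)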

How to avoid resizing textures in Swift

I am adding the sprite node to the scene with a given size.
But when I change the texture of the sprite node, the size automatically changes to the original size of the texture's image (PNG).
How can I avoid this?
My code:
let bomba = SKSpriteNode(imageNamed: "bomba2")
bomba.size = CGSize(width: frame2.size.width / 18, height: frame2.size.width / 18)

let bomba3 = SKTexture(imageNamed: "bomba3.png")
var actionbomba = [SKAction]()
actionbomba.append(SKAction.moveBy(x: 0, y: frame.size.height / 2.65, duration: beweegsnelheid))
actionbomba.append(SKAction.setTexture(bomba3, resize: false))

addChild(bomba)
bomba.run(SKAction.repeatForever(SKAction.sequence(actionbomba)))
Do not set the size explicitly. SpriteKit will not automatically find the scale factor for each texture and scale it.
Instead, set the scale factor on the node, and each texture it displays will have that scale applied to it:
playernode.setScale(x)
Something like this. You only have to set it when you create the node, and each texture will then be the size you would expect, given that your textures are all the same size.
I use this method for all of my nodes that are animated with multiple textures and it works every time.
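As a minimal sketch applied to the question's code (reusing bomba and frame2 from above; assumes the node already has a texture):

// Derive the scale from the desired on-screen width instead of setting size.
let desiredWidth = frame2.size.width / 18
bomba.setScale(desiredWidth / bomba.texture!.size().width)
// Texture swaps with resize: false then keep this scale, so source images
// of the same pixel size render at the same on-screen size.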