Display bounding boxes in Apple AR SceneKit - swift

I am using YOLOv3 with Apple Vision to classify objects during my AR SceneKit session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have the wrong aspect ratio. Does anyone know what the issue might be?
This is how I am currently transforming the bounding boxes.
Assumptions:
The app is in portrait mode
The Vision request is performed with imageCropAndScaleOption = .centerCrop and orientation .right.
Step 1: Fix the coordinate origin of Vision (Vision uses a lower-left origin):
let newY = 1 - boundingBox.origin.y
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)
Step 2: Undo the center cropping of Vision:
let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0,
                 y: 1 - (imageWidth / imageHeight + ((imageHeight - imageWidth) / (imageHeight * 2))),
                 width: 1,
                 height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)
Step 3: Bring the coordinates back to normalized form:
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)
Step 4: Transform to the scene view. (I assume the error is here: while debugging, I found that the aspect ratio of the bounding box changes in this step.)
let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)
Step 5: Scale up to the viewport size:
let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)
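Putting the five steps together, the whole pipeline looks like this in one place (a condensed sketch of exactly the code above, not a fix; viewPort would be sceneView.frame.size as in steps 4 and 5):

import ARKit
import Vision

func transformBoundingBox(_ boundingBox: CGRect, currentFrame: ARFrame, viewPort: CGSize) -> CGRect {
    // 1. Flip Vision's lower-left origin.
    var box = CGRect(x: boundingBox.origin.x, y: 1 - boundingBox.origin.y,
                     width: boundingBox.width, height: boundingBox.height)

    // 2. Undo the center cropping (width/height swapped because the image is rotated).
    let imageResolution = currentFrame.camera.imageResolution
    let imageWidth = imageResolution.height
    let imageHeight = imageResolution.width
    let roi = CGRect(x: 0,
                     y: 1 - (imageWidth / imageHeight + ((imageHeight - imageWidth) / (imageHeight * 2))),
                     width: 1,
                     height: imageWidth / imageHeight)
    box = VNImageRectForNormalizedRectUsingRegionOfInterest(box, Int(imageWidth), Int(imageHeight), roi)

    // 3. Back to normalized coordinates.
    box = box.applying(CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight))

    // 4. Map into the view (the aspect ratio changes here).
    box = box.applying(currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort))

    // 5. Scale up to the viewport.
    return box.applying(CGAffineTransform(scaleX: viewPort.width, y: viewPort.height))
}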
Thanks in advance for any help!
[Picture of the incorrect bounding box on a bottle of water]

Related

How can I use CGRect() dynamically?

I have the following scene example where I can crop an image based on the selection (the red square).
That square has a dynamic height and width; based on this, I want to use the selected height and width to crop whatever is inside the red square.
The function that I am using for cropping is from the Apple developer documentation and looks like this:
func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage? {
    let imageViewScale = max(inputImage.size.width / viewWidth,
                             inputImage.size.height / viewHeight)

    // Scale cropRect to handle images larger than shown-on-screen size
    let cropZone = CGRect(x: cropRect.origin.x * imageViewScale,
                          y: cropRect.origin.y * imageViewScale,
                          width: cropRect.size.width * imageViewScale,
                          height: cropRect.size.height * imageViewScale)

    // Perform cropping in Core Graphics
    guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to: cropZone) else {
        return nil
    }

    // Return image as UIImage
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    return croppedImage
}
Now I want to use the given height and width to crop that selection.
let croppedImage = cropImage(image!, toRect: CGRect(x:?? , y:?? , width: ??, height: ??), viewWidth: ??, viewHeight: ??)
What should I fill in these parameters in order to crop the image based on the above dynamic selection?
OK, since you only have the width and height of the cropping shape, you'll need to calculate the x and y yourself.
First, let's consider this information:
// let's pretend this is a sample of size that your crop tool provides to you
let cropSize = CGSize(width: 120, height: 260)
Next, you'll need to obtain the display size (width and height) of your image. The display size here is the size of your image's frame, not the size of the image itself.
// again, let's pretend it's just the frame size of your image
let imageSize = CGSize(width: 320, height: 480)
With this info, you can obtain the x and y needed to compose a CGRect to pass to whichever cropping function you prefer.
let x = (imageSize.width - cropSize.width) / 2    // (320 - 120) / 2 = 100
let y = (imageSize.height - cropSize.height) / 2  // (480 - 260) / 2 = 110
So now, you can create a rectangle to crop your image like this:
let cropRect = CGRect(x: x, y: y, width: cropSize.width, height: cropSize.height)
You can use cropRect with either the cropping(to:) or cropImage functions mentioned in your question.
OK, let's assume that your image is in imageView, which is located somewhere on your screen, and that rect is a variable holding your selected frame (relative to imageView.frame). The result is then:
let croppedImage = cropImage(image!, toRect: rect, viewWidth: imageView.frame.width, viewHeight: imageView.frame.height)
I've used the info from all of your answers, and especially #matt's comment, and this is the final solution.
Using the input values that my red square returned, I've adapted the original crop function into this one:
func cropImage(_ inputImage: UIImage, width: Double, height: Double) -> UIImage? {
    let imsize = inputImage.size
    let ivsize = UIScreen.main.bounds.size

    // Scale factor used when the image fills the screen aspect-filled
    var scale: CGFloat = ivsize.width / imsize.width
    if imsize.height * scale < ivsize.height {
        scale = ivsize.height / imsize.height
    }

    let croppedImsize = CGSize(width: height / scale, height: width / scale)
    let croppedImrect = CGRect(origin: CGPoint(x: (imsize.width - croppedImsize.width) / 2.0,
                                               y: (imsize.height - croppedImsize.height) / 2.4),
                               size: croppedImsize)

    // Redraw the image shifted so only the crop rect remains
    let r = UIGraphicsImageRenderer(size: croppedImsize)
    let croppedIm = r.image { _ in
        inputImage.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
    }
    return croppedIm
}
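A hypothetical call site (originalImage and imageView are assumed names; the width and height would come from the red square selection):

// hypothetical values reported by the selection tool
let selectionWidth: Double = 120
let selectionHeight: Double = 260
if let croppedImage = cropImage(originalImage, width: selectionWidth, height: selectionHeight) {
    imageView.image = croppedImage
}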

SCNNode position in different places when image is recognized (Swift)

I have this code in the didAdd node delegate callback, which runs after an image is recognized.
The images display on top of each other in an X pattern, but I need them to sit next to each other. I have tried changing the SCNNode's x, y, and z, but the image does not move away from the middle point. I have also tried using the reference image as the anchor, but that did not get me to the goal either. When I tried SCNVector3(0, 0, -0.2), the image disappeared off the screen.
// IMAGE 1
let plane = SCNPlane(width: 0.01, height: 0.01)
if let cgImage = ImageUse!.cgImage {
    plane.firstMaterial?.diffuse.contents = cgImage
}
let overlayNode = SCNNode(geometry: plane)
overlayNode.eulerAngles.x = -Float.pi / 2
node.addChildNode(overlayNode)

// IMAGE 2
let plane2 = SCNPlane(width: 0.01, height: 0.01)
if let cgImage = ImageUse2!.cgImage {
    plane2.firstMaterial?.diffuse.contents = cgImage
}
let overlayNode2 = SCNNode(geometry: plane2)
overlayNode2.eulerAngles.x = -Float.pi / 4
node.addChildNode(overlayNode2)
Here is what shows up:
This is my goal:
I got the image node to be placed the way I need it. I had to play around with:
overlayNode2.position = SCNVector3(-0.01, -0.01, -0.01)
overlayNode2.eulerAngles.x = -Float.pi / 2
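For anyone else, a minimal sketch of that side-by-side setup (the ±0.01 offsets are assumptions to be tuned against the 0.01-wide planes):

// Give both planes the same tilt, then shift them apart along x
// so they render next to each other instead of stacking.
overlayNode.eulerAngles.x = -Float.pi / 2
overlayNode2.eulerAngles.x = -Float.pi / 2
overlayNode.position = SCNVector3(-0.01, 0, 0)
overlayNode2.position = SCNVector3(0.01, 0, 0)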

Why, when using the front-facing camera, does my face detection box move in the opposite direction?

I'm using the Vision framework to make a face detection app. The problem I'm having is that when I'm using the front-facing camera, the face detection box doesn't follow my face. When I move my face to the right, the box goes to the left, and when I move my face to the left, the box goes to the right. Why does this happen?
func drawFaceboundingBox(face: VNFaceObservation) {
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let translate = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)

    // The coordinates are normalized to the dimensions of the processed image,
    // with the origin at the image's lower-left corner.
    let facebounds = face.boundingBox.applying(translate).applying(transform)
    _ = createLayer(in: facebounds)
}
Finally got it to work. The front camera's preview is mirrored, so the x-axis has to be flipped as well. I had to change the transform to this:
func drawFaceboundingBox(face: VNFaceObservation) {
    let transform = CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -frame.width, y: -frame.height)
    let translate = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)

    // The coordinates are normalized to the dimensions of the processed image,
    // with the origin at the image's lower-left corner.
    let facebounds = face.boundingBox.applying(translate).applying(transform)
    _ = createLayer(in: facebounds)
}
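If the same drawing code ever has to serve the back camera too, the horizontal flip should presumably apply only when the front camera is active. A minimal sketch, assuming an isUsingFrontCamera flag you maintain when configuring the capture session:

func drawFaceboundingBox(face: VNFaceObservation, isUsingFrontCamera: Bool) {
    // Always flip y (Vision's origin is at the lower left);
    // flip x only for the mirrored front-camera feed.
    let transform = isUsingFrontCamera
        ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -frame.width, y: -frame.height)
        : CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let translate = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)
    let facebounds = face.boundingBox.applying(translate).applying(transform)
    _ = createLayer(in: facebounds)
}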

Crop/Mask circular image node in sprite kit gives jagged edges

Is it possible to give a circular mask/crop to an image node without jagged edges?
Following this example from Apple (https://developer.apple.com/reference/spritekit/skcropnode), the result is not ideal; you can click the link to see it.
let shapeNode = SKShapeNode()
shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: radius)
shapeNode.physicsBody?.allowsRotation = false
shapeNode.strokeColor = SKColor.clearColor()

// Add a crop node to mask the profile image
// (profile images start off with a placeholder)
let scale = 1.0
let profileImageNode = SKSpriteNode(imageNamed: "PlaceholderUser")
profileImageNode.setScale(CGFloat(scale))

let circlePath = CGPathCreateWithEllipseInRect(CGRectMake(-radius, -radius, radius * 2, radius * 2), nil)
let circleMaskNode = SKShapeNode()
circleMaskNode.path = circlePath
circleMaskNode.zPosition = 12
circleMaskNode.name = "connection_node"
circleMaskNode.fillColor = SKColor.whiteColor()
circleMaskNode.strokeColor = SKColor.clearColor()

let zoom = SKAction.fadeInWithDuration(0.25)
circleMaskNode.runAction(zoom)

let cropNode = SKCropNode()
cropNode.maskNode = circleMaskNode
cropNode.addChild(profileImageNode)
cropNode.position = shapeNode.position
shapeNode.addChild(cropNode)
self.addChild(shapeNode)
UPDATE:
OK, so here's one solution I came up with. It's not super ideal, but it works perfectly. Essentially, I size/scale and cut the image exactly the way it will go on the SKSpriteNode, so I don't have to use SKCropNode or some variation of SKShapeNode at all.
I used these UIImage extensions by Leo Dabus to resize/shape the image exactly as needed: Cut a UIImage into a circle Swift(iOS)
var circle: UIImage? {
    let square = CGSize(width: min(size.width, size.height), height: min(size.width, size.height))
    let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: square))
    imageView.contentMode = .ScaleAspectFill
    imageView.image = self
    imageView.layer.cornerRadius = square.width / 2
    imageView.layer.masksToBounds = true

    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    imageView.layer.renderInContext(context)
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
func resizedImageWithinRect(rectSize: CGSize) -> UIImage {
    let widthFactor = size.width / rectSize.width
    let heightFactor = size.height / rectSize.height

    var resizeFactor = widthFactor
    if size.height > size.width {
        resizeFactor = heightFactor
    }

    let newSize = CGSizeMake(size.width / resizeFactor, size.height / resizeFactor)
    let resized = resizedImage(newSize)
    return resized
}
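Note that resizedImage(_:) is not shown above; it comes from the same set of linked extensions. A rough sketch of what it does, in the same Swift 2 era style (this is an assumption, not Leo Dabus's exact code):

func resizedImage(newSize: CGSize) -> UIImage {
    // Redraw the image into a context of the new size.
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized
}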
The final code looks like this:
// create/shape the image
let image = UIImage(named: "TestImage")
let scaledImage = image?.resizedImageWithinRect(CGSize(width: 100, height: 100))
let circleImage = scaledImage?.circle

// create the sprite
let sprite = SKSpriteNode(texture: SKTexture(image: circleImage!))
sprite.position = CGPoint(x: view.frame.width / 2, y: view.frame.height / 2)

// set the texture/image and physics body
sprite.texture = SKTexture(image: circleImage!)
sprite.physicsBody = SKPhysicsBody(texture: SKTexture(image: circleImage!), size: CGSizeMake(100, 100))
if let physics = sprite.physicsBody {
    // add the physics properties
}

// scale the node
sprite.setScale(1.0)
addChild(sprite)
So if you have a perfectly scaled asset/image you probably don't need to do all this work, but I'm getting images from the backend that can come in any size.
There are two different techniques that can be combined to reduce the aliasing of edges created by cropping:
1. Create bigger images than you need, both the target (to be cropped) and the mask. Perform the cropping action, then scale down to the required size (sketched below).
2. Use very subtle blurring of the cropping shape to soften its edges. This is best done in Photoshop or a similar editing program, to taste and need.
When these two techniques are combined, the results can be very good.
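A minimal sketch of the first technique in SpriteKit terms, assuming bigCircleImage is the already-cropped artwork rendered at twice the needed size:

// Display the 2x-rendered crop scaled down; the downscale averages
// out the jagged mask edge.
let bigTexture = SKTexture(image: bigCircleImage)  // e.g. a 200x200 source
let sprite = SKSpriteNode(texture: bigTexture)
sprite.setScale(0.5)                               // shown at 100x100 points
addChild(sprite)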
Let the stroke color be displayed. You can also make the line width a little thicker, and the jagged edges will disappear:
circleMaskNode.strokeColor = SKColor.whiteColor()
All you have to do is set the SKShapeNode's lineWidth property to twice the circle's radius:
func circularCropNode(radius: CGFloat, add: SKNode) {
    let cropper = SKCropNode()
    cropper.addChild(add)
    addChild(cropper)

    // The mask circle has half the final radius, but its stroke is so
    // thick that the stroked ring fills the whole disc of `radius`.
    let circleMask = SKShapeNode(circleOfRadius: radius / 2)
    circleMask.lineWidth = radius
    cropper.maskNode = circleMask
}
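This works presumably because SKShapeNode antialiases its stroke but not its fill, so drawing the disc entirely as a fat stroke leaves a smooth edge. A hypothetical call site:

// Crop a profile image to a 50-point circle.
let profileImageNode = SKSpriteNode(imageNamed: "PlaceholderUser")
circularCropNode(radius: 50, add: profileImageNode)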

Nodes position within scene size

In my SKScene I am making a simple space-shooter game. How can I make sure that my enemies always appear within the screen bounds regardless of which iPhone the game is played on?
In other words, how do I calculate the maximum and minimum x-coordinates of the scene, and, more importantly, how do I know the current scene size on whichever iPhone the game is run?
Don't resize your scene depending on the iPhone model; let SpriteKit do this kind of job:
scene.scaleMode = SKSceneScaleMode.ResizeFill
From the documentation: "The scene is not scaled to match the view. Instead, the scene is automatically resized so that its dimensions always match those of the view."
About the size: when a scene is first initialized, its size property is configured by the designated initializer. The size of the scene specifies the size of the visible portion of the scene in points; it only specifies the visible portion of the scene.
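For example (a minimal sketch, assuming an skView already exists):

// The visible portion is declared as 1024x768 points; with ResizeFill
// the scene is simply resized to match the view.
let scene = SKScene(size: CGSize(width: 1024, height: 768))
scene.scaleMode = SKSceneScaleMode.ResizeFill
skView.presentScene(scene)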
Update, to help you with positioning:
If you want to compute positions yourself instead of relying on ResizeFill, you can set scene.scaleMode to .AspectFill. For this to work on all devices, the scene size has to be 2048x1536 or 1536x2048, which also makes it scale for iPad.
class StartScene: SKScene {
    let playableArea: CGRect

    override init(size: CGSize) {
        // 1. Get the aspect ratio of the device
        let deviceWidth = UIScreen.mainScreen().bounds.width
        let deviceHeight = UIScreen.mainScreen().bounds.height
        let maxAspectRatio: CGFloat = deviceWidth / deviceHeight

        // 2. For landscape orientation, compute the playable area like this:
        let playableHeight = size.width / maxAspectRatio
        let playableMargin = (size.height - playableHeight) / 2.0
        playableArea = CGRect(x: 0, y: playableMargin, width: size.width, height: playableHeight)

        // 3. For portrait orientation, use this instead:
        // let playableWidth = size.height / maxAspectRatio
        // let playableMargin = (size.width - playableWidth) / 2.0
        // playableArea = CGRect(x: playableMargin, y: 0, width: playableWidth, height: size.height)

        super.init(size: size)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
You can then position your objects with:
ball.position = CGPoint(x: CGRectGetMidX(playableArea), y: CGRectGetMaxY(playableArea) - (ball.size.height * 0.90))
This code works on the iPhone 4S, 5, 5S, 6, 6 Plus, 6S, and 6S Plus, and on iPads.
If you want to see your borders (for debugging or otherwise):
func drawWorkArea() {
    let shape = SKShapeNode()
    let path = CGPathCreateMutable()
    CGPathAddRect(path, nil, playableArea) // the rect computed in init (was `workArea`)
    shape.path = path
    shape.strokeColor = SKColor.redColor()
    shape.lineWidth = 8
    addChild(shape)
}
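A hypothetical call site, for example while debugging layout:

override func didMoveToView(view: SKView) {
    super.didMoveToView(view)
    drawWorkArea()  // draw the red playable-area border for debugging
}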