Color of pixel in ARSCNView - Swift

I am trying to get the color of a pixel at a CGPoint determined by the location of a touch. I have tried the following code but the color value is incorrect.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = event?.allTouches?.first {
        let loc: CGPoint = touch.location(in: touch.view)
        // in debugger, the image is correct
        let image = sceneView.snapshot()
        guard let color = image[Int(loc.x), Int(loc.y)] else {
            return
        }
        print(color)
    }
}
....
extension UIImage {
    subscript (x: Int, y: Int) -> [UInt8]? {
        if x < 0 || x > Int(size.width) || y < 0 || y > Int(size.height) {
            return nil
        }
        let provider = self.cgImage!.dataProvider
        let providerData = provider!.data
        let data = CFDataGetBytePtr(providerData)
        let numberOfComponents = 4
        let pixelData = ((Int(size.width) * y) + x) * numberOfComponents
        let r = data![pixelData]
        let g = data![pixelData + 1]
        let b = data![pixelData + 2]
        return [r, g, b]
    }
}
Running this and touching a spot on the screen that is a large, consistently bright orange yields a wide range of RGB values, and the color those values actually produce is completely different (dark blue in the case of the orange).
I'm guessing the coordinate systems are different and I'm actually sampling a different point on the image?
EDIT: I should also mention that the part I'm tapping on is a 3D model that is not affected by lighting, so the color should be, and appears to be, consistent at run time.

Well, it was easier than I thought. I first adjusted a few things, like registering the touch within my scene view instead:
let loc:CGPoint = touch.location(in: sceneView)
I then rescaled my CGPoint from view coordinates to image coordinates:
let image = sceneView.snapshot()
let x = image.size.width / sceneView.frame.size.width
let y = image.size.height / sceneView.frame.size.height
guard let color = image[Int(x * loc.x), Int(y * loc.y)] else {
    return
}
This finally gave me consistent RGB values (not numbers that changed on every touch even when I tapped the same color). But the values were still off: for some reason the returned array was in reverse order, so I changed it to:
let b = data![pixelData]
let g = data![pixelData + 1]
let r = data![pixelData + 2]
I'm not sure why I had to do that last part, so any insight into that would be appreciated!
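As for the reversed channels: a likely explanation is that the snapshot's backing CGImage is stored as BGRA in memory (32-bit little-endian byte order with alpha first), which is common for images rendered on iOS, so the raw bytes read back blue, green, red, alpha. Rather than hard-coding the order, you can inspect the image's bitmap info; a small diagnostic sketch (the helper name is my own):
import ARKit

func logSnapshotFormat(of sceneView: ARSCNView) {
    guard let cgImage = sceneView.snapshot().cgImage else { return }
    // .byteOrder32Little together with an alpha-first alphaInfo means the
    // bytes in memory are laid out B, G, R, A.
    print("little-endian:", cgImage.bitmapInfo.contains(.byteOrder32Little))
    print("alphaInfo rawValue:", cgImage.alphaInfo.rawValue) // 2 == premultipliedFirst
    print("bitsPerPixel:", cgImage.bitsPerPixel, "bytesPerRow:", cgImage.bytesPerRow)
}
Note also that the alpha is typically premultiplied, which can skew the channel values for translucent pixels.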

Related

Drawing boxes around each digit to be entered in UITextField

I am trying to draw boxes around each digit entered by a user in a UITextField whose keyboard type is Number Pad.
To simplify the problem statement, I assumed that each digit (0 to 9) has the same bounding box for its glyph, which I obtained using the code below:
func getGlyphBoundingRect() -> CGRect? {
    guard let font = font else {
        return nil
    }
    // As of now taking 8 as the base digit
    var unichars = [UniChar]("8".utf16)
    var glyphs = [CGGlyph](repeating: 0, count: unichars.count)
    let gotGlyphs = CTFontGetGlyphsForCharacters(font, &unichars, &glyphs, unichars.count)
    if gotGlyphs {
        let cgpath = CTFontCreatePathForGlyph(font, glyphs[0], nil)!
        let path = UIBezierPath(cgPath: cgpath)
        return path.cgPath.boundingBoxOfPath
    }
    return nil
}
I am drawing each bounding box thus obtained using the code below:
func configure() {
    guard let boundingRect = getGlyphBoundingRect() else {
        return
    }
    for i in 0..<length { // length denotes the number of allowed digits in the box
        var box = boundingRect
        box.origin.x = (CGFloat(i) * boundingRect.width)
        let shapeLayer = CAShapeLayer()
        shapeLayer.frame = box
        shapeLayer.borderWidth = 1.0
        shapeLayer.borderColor = UIColor.orange.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Now the problem: if I enter the digits 8, 8, 8 in the text field, the bounding box drawn for the first digit is aligned, but the box for the second occurrence of the same digit appears offset by a small negative x, and the offset grows with each subsequent occurrence of the digit.
Here is an image for reference:
I tried setting NSAttributedString.Key.kern to 0, but it did not change the behavior.
Am I missing an important property along the x-axis in my calculation, due to which I am unable to get a properly aligned bounding box over each digit? Please suggest.
The drift happens because the text field positions consecutive characters using the font's advance width, which is generally wider than the glyph path's bounding box, so stacking boxes by boundingRect.width accumulates a negative-x error. Rather than reconstructing that metric yourself, ask the text field where each character actually is. The key function you need to use is:
protocol UITextInput {
    func firstRect(for range: UITextRange) -> CGRect
}
Here's the solution as a function:
extension UITextField {
    func characterRects() -> [CGRect] {
        var beginningOfRange = beginningOfDocument
        var characterRects = [CGRect]()
        while beginningOfRange != endOfDocument {
            guard let endOfRange = position(from: beginningOfRange, offset: 1),
                  let textRange = textRange(from: beginningOfRange, to: endOfRange) else { break }
            beginningOfRange = endOfRange
            var characterRect = firstRect(for: textRange)
            characterRect = convert(characterRect, from: textInputView)
            characterRects.append(characterRect)
        }
        return characterRects
    }
}
Note that you may need to clip your rects if your text is too long for the text field. Here's an example of the solution without clipping:
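To tie this back to the original goal, a minimal sketch of drawing one border layer per character (the subclass name and the redraw-in-layoutSubviews choice are my assumptions, not part of the original answer):
import UIKit

final class DigitBoxTextField: UITextField {
    private var boxLayers = [CALayer]()

    override func layoutSubviews() {
        super.layoutSubviews()
        // Rebuild one bordered layer per character rect.
        boxLayers.forEach { $0.removeFromSuperlayer() }
        boxLayers = characterRects().map { rect in
            let box = CALayer()
            box.frame = rect
            box.borderWidth = 1.0
            box.borderColor = UIColor.orange.cgColor
            layer.addSublayer(box)
            return box
        }
    }
}
Calling setNeedsLayout() from an .editingChanged action keeps the boxes in sync as the user types.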

Reshape Face Coordinate in Swift

I want to reshape the face coordinates as shown in this video: https://www.dropbox.com/s/vsttylwgt25szha/IMG_6590.TRIM.MOV?dl=0 (sorry, the video is unfortunately about 11 MB in size).
I have just captured the face coordinates using the iOS Vision API:
// Facial landmarks are GREEN.
fileprivate func drawFeatures(onFaces faces: [VNFaceObservation], onImageWithBounds bounds: CGRect) {
    CATransaction.begin()
    for faceObservation in faces {
        let faceBounds = boundingBox(forRegionOfInterest: faceObservation.boundingBox, withinImageBounds: bounds)
        guard let landmarks = faceObservation.landmarks else {
            continue
        }
        // Iterate through landmarks detected on the current face.
        let landmarkLayer = CAShapeLayer()
        let landmarkPath = CGMutablePath()
        let affineTransform = CGAffineTransform(scaleX: faceBounds.size.width, y: faceBounds.size.height)
        // Treat eyebrows and lines as open-ended regions when drawing paths.
        let openLandmarkRegions: [VNFaceLandmarkRegion2D?] = [
            //landmarks.leftEyebrow,
            //landmarks.rightEyebrow,
            landmarks.faceContour,
            landmarks.noseCrest,
            //landmarks.medianLine
        ]
        // Draw eyes, lips, and nose as closed regions.
        let closedLandmarkRegions = [
            landmarks.nose
        ].compactMap { $0 } // Filter out missing regions.
        // Draw paths for the open regions.
        for openLandmarkRegion in openLandmarkRegions where openLandmarkRegion != nil {
            landmarkPath.addPoints(in: openLandmarkRegion!,
                                   applying: affineTransform,
                                   closingWhenComplete: false)
        }
        // Draw paths for the closed regions.
        for closedLandmarkRegion in closedLandmarkRegions {
            landmarkPath.addPoints(in: closedLandmarkRegion,
                                   applying: affineTransform,
                                   closingWhenComplete: true)
        }
        // Format the path's appearance: color, thickness, shadow.
        landmarkLayer.path = landmarkPath
        landmarkLayer.lineWidth = 1
        landmarkLayer.strokeColor = UIColor.green.cgColor
        landmarkLayer.fillColor = nil
        landmarkLayer.shadowOpacity = 1.0
        landmarkLayer.shadowRadius = 1
        // Locate the path in the parent coordinate system.
        landmarkLayer.anchorPoint = .zero
        landmarkLayer.frame = faceBounds
        landmarkLayer.transform = CATransform3DMakeScale(1, -1, 1)
        pathLayer?.addSublayer(landmarkLayer)
    }
    CATransaction.commit()
}
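The addPoints(in:applying:closingWhenComplete:) helper used above isn't shown here; it comes from Apple's Vision sample code, and a reconstruction looks roughly like this (treat it as an approximation, not the verbatim original):
import Vision

extension CGMutablePath {
    // Draws lines between a landmark region's normalized points, scaled
    // into layer coordinates by the given transform.
    func addPoints(in landmarkRegion: VNFaceLandmarkRegion2D,
                   applying affineTransform: CGAffineTransform,
                   closingWhenComplete closePath: Bool) {
        guard landmarkRegion.pointCount > 1 else { return }
        addLines(between: landmarkRegion.normalizedPoints, transform: affineTransform)
        if closePath {
            closeSubpath()
        }
    }
}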
How do I move forward from here? Can anyone guide me, please?

Duplicate image with touch

I'm trying to duplicate an image in Swift 4 when the user touches the screen:
var positionArray = Array(repeating: Array(repeating: 0, count: 2), count: 50)
var counter = 0
var pointNum = 0

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.view)
        let locx = Int(position.x)
        let locy = Int(position.y)
        positionArray[counter] = [locx, locy]
        print(positionArray[counter])
        counter = counter + 1
        point.center = position
        pointNum = pointNum + 1
    }
}
As you can see, this is what I use to register touches, but I only have a single image (point) that moves to wherever the user touches.
Your code saves your points to an array of points, and then moves the center of something called point. Is point an image view?
If you want to create a new copy of an image everywhere the user taps, then you need to create a new image view each time. Something like this:
let bounds = point.bounds
let newImageView = UIImageView(frame: bounds)
newImageView.translatesAutoresizingMaskIntoConstraints = true
newImageView.image = UIImage(named: "MyImageViewName")
self.view.addSubview(newImageView)
newImageView.center = position
Note that it would really be better to add Auto Layout anchors that pin the new image view's center relative to the superview's origin, but the above code should work. (I think. I've rarely used newImageView.translatesAutoresizingMaskIntoConstraints = true.)
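For illustration, the anchored variant might look like this (a sketch: the constraints measure the center from the superview's top-leading corner using the touch position; point and position come from the code above):
let newImageView = UIImageView(image: UIImage(named: "MyImageViewName"))
newImageView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(newImageView)
NSLayoutConstraint.activate([
    newImageView.widthAnchor.constraint(equalToConstant: point.bounds.width),
    newImageView.heightAnchor.constraint(equalToConstant: point.bounds.height),
    // Pin the center at the touch position, measured from the superview's origin.
    newImageView.centerXAnchor.constraint(equalTo: self.view.leadingAnchor, constant: position.x),
    newImageView.centerYAnchor.constraint(equalTo: self.view.topAnchor, constant: position.y)
])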

Placing an object in front of camera at Touch Location

The following code places the node in front of the camera, but always at the center, 10 cm away from the camera position. I want to place the node 10 cm away in the z-direction but at the x and y coordinates of where I touch the screen, so touching different parts of the screen results in a node placed 10 cm in front of the camera at the x and y location of the touch, not always at the center.
var cameraRelativePosition = SCNVector3(0, 0, -0.1)
let sphere = SCNNode()
sphere.geometry = SCNSphere(radius: 0.0025)
sphere.geometry?.firstMaterial?.diffuse.contents = UIColor.white
Service.addChildNode(sphere, toNode: self.sceneView.scene.rootNode,
                     inView: self.sceneView,
                     cameraRelativePosition: cameraRelativePosition)
Service.swift
class Service: NSObject {
    static func addChildNode(_ node: SCNNode, toNode: SCNNode,
                             inView: ARSCNView, cameraRelativePosition: SCNVector3) {
        guard let currentFrame = inView.session.currentFrame else { return }
        let camera = currentFrame.camera
        let transform = camera.transform
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = cameraRelativePosition.x
        translationMatrix.columns.3.y = cameraRelativePosition.y
        translationMatrix.columns.3.z = cameraRelativePosition.z
        let modifiedMatrix = simd_mul(transform, translationMatrix)
        node.simdTransform = modifiedMatrix
        toNode.addChildNode(node)
    }
}
The result should look exactly like this : https://justaline.withgoogle.com
We can use the unprojectPoint(_:) method of SCNSceneRenderer (SCNView and ARSCNView both conform to this protocol) to convert a point on the screen to a 3D point.
When tapping the screen we can calculate a ray this way:
func getRay(for point: CGPoint, in view: SCNSceneRenderer) -> SCNVector3 {
    let farPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 1))
    let nearPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
    let ray = SCNVector3Make(farPoint.x - nearPoint.x,
                             farPoint.y - nearPoint.y,
                             farPoint.z - nearPoint.z)
    // Normalize the ray to unit length.
    let length = sqrt(ray.x * ray.x + ray.y * ray.y + ray.z * ray.z)
    return SCNVector3Make(ray.x / length, ray.y / length, ray.z / length)
}
The ray has a length of 1, so multiplying it by 0.1 and adding the camera location gives the point you were searching for.
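Putting it together, a short sketch (touchLocation is the tap's CGPoint; the unprojected near-plane point stands in for the camera position):
let ray = getRay(for: touchLocation, in: sceneView)
let near = sceneView.unprojectPoint(SCNVector3(Float(touchLocation.x), Float(touchLocation.y), 0))
let distance: Float = 0.1 // 10 cm in front of the camera, along the touch ray
sphere.position = SCNVector3(near.x + ray.x * distance,
                             near.y + ray.y * distance,
                             near.z + ray.z * distance)
sceneView.scene.rootNode.addChildNode(sphere)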

Detect if CGPoint within polygon

I have a set of CGPoints that make up a polygon shape; how can I detect whether a single CGPoint is inside or outside of the polygon?
Say the shape were a triangle and the CGPoint were moving horizontally; how could I detect when it crossed the triangle's edge?
I can use CGRectContainsPoint when the shape is a regular four-sided shape, but I can't see how I would do it with an odd shape.
You can create a CG(Mutable)PathRef (or a UIBezierPath that wraps a CGPathRef) from your points and use the CGPathContainsPoint function to check if a point is inside that path. If you use UIBezierPath, you could also use the containsPoint: method.
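For example, a minimal sketch of the path-based check (points is assumed to be your polygon's vertices in order):
let path = UIBezierPath()
path.move(to: points[0])
for point in points.dropFirst() {
    path.addLine(to: point)
}
path.close()

let isInside = path.contains(testPoint)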
For that, you need to write a method that implements a point-in-polygon algorithm.
The method takes an array of N points (the polygon) and one specific point as arguments; it returns true if the point is inside the polygon and false if not.
See this great answer on S.O.
Here is the implementation in Swift:
extension CGPoint {
    func isInsidePolygon(vertices: [CGPoint]) -> Bool {
        guard vertices.count > 0 else { return false }
        var i = 0, j = vertices.count - 1, c = false
        while i < vertices.count {
            let vi = vertices[i]
            let vj = vertices[j]
            // Toggle on every polygon edge crossed by a horizontal ray from the point.
            if (vi.y > y) != (vj.y > y),
               x < (vj.x - vi.x) * (y - vi.y) / (vj.y - vi.y) + vi.x {
                c = !c
            }
            j = i
            i += 1
        }
        return c
    }
}
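A quick usage check with a hypothetical triangle:
let triangle = [CGPoint(x: 0, y: 0),
                CGPoint(x: 100, y: 0),
                CGPoint(x: 50, y: 80)]

CGPoint(x: 50, y: 30).isInsidePolygon(vertices: triangle)  // true
CGPoint(x: 200, y: 30).isInsidePolygon(vertices: triangle) // false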
Swift 3
A simpler way in Swift 3 is to use UIBezierPath's contains method.
When creating an instance of CAShapeLayer, make sure to also set its accessibilityPath:
shapeLayer.path = bezierPath.cgPath
shapeLayer.accessibilityPath = bezierPath
Then check whether the path contains the touch location:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let point = touches.first?.location(in: self) else { return }
    for shape in layer.sublayers ?? [] where shape is CAShapeLayer {
        guard let shapeLayer = shape as? CAShapeLayer,
              let bezierPath = shapeLayer.accessibilityPath else { continue }
        // Handle touch
        print(bezierPath.contains(point))
    }
}
}