Drawing boxes around each digit to be entered in UITextField - swift

I am trying to draw a box around each digit the user enters in a UITextField whose keyboard type is Number Pad.
To simplify the problem I assumed that every digit (0 to 9) has the same bounding box for its glyph, which I obtain with the code below:
func getGlyphBoundingRect() -> CGRect? {
    guard let font = font else {
        return nil
    }
    // As of now taking 8 as base digit
    var unichars = [UniChar]("8".utf16)
    var glyphs = [CGGlyph](repeating: 0, count: unichars.count)
    let gotGlyphs = CTFontGetGlyphsForCharacters(font, &unichars, &glyphs, unichars.count)
    if gotGlyphs {
        let cgpath = CTFontCreatePathForGlyph(font, glyphs[0], nil)!
        let path = UIBezierPath(cgPath: cgpath)
        return path.cgPath.boundingBoxOfPath
    }
    return nil
}
I draw each bounding box obtained this way with the code below:
func configure() {
    guard let boundingRect = getGlyphBoundingRect() else {
        return
    }
    for i in 0..<length { // length denotes number of allowed digits in the box
        var box = boundingRect
        box.origin.x = (CGFloat(i) * boundingRect.width)
        let shapeLayer = CAShapeLayer()
        shapeLayer.frame = box
        shapeLayer.borderWidth = 1.0
        shapeLayer.borderColor = UIColor.orange.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Now the problem: if I enter the digits 8, 8, 8 in the text field, the bounding box for the first occurrence of the digit is aligned correctly, but for the second occurrence of the same digit the box appears slightly offset in negative x, and the offset grows for each subsequent occurrence.
Here is an image for reference:
I tried to solve the problem by setting NSAttributedString.Key.kern to 0, but it did not change the behavior.
Am I missing some x-axis property in the calculation that would give me a properly aligned bounding box over each digit? Please suggest.

The key function you need to use is:
protocol UITextInput {
    func firstRect(for range: UITextRange) -> CGRect
}
Here's the solution as a function:
extension UITextField {
    func characterRects() -> [CGRect] {
        var beginningOfRange = beginningOfDocument
        var characterRects = [CGRect]()
        while beginningOfRange != endOfDocument {
            guard let endOfRange = position(from: beginningOfRange, offset: 1),
                  let textRange = textRange(from: beginningOfRange, to: endOfRange) else { break }
            beginningOfRange = endOfRange
            var characterRect = firstRect(for: textRange)
            characterRect = convert(characterRect, from: textInputView)
            characterRects.append(characterRect)
        }
        return characterRects
    }
}
Note that you may need to clip your rects if your text is too long for the text field. Here's an example of the solution without clipping:
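For illustration, here is a minimal sketch (my addition, not from the original answer) of how those rects could be used to draw a border around each entered character; the BoxedDigitTextField name and the boxLayers property are assumptions made for the example:
import UIKit

// Hypothetical subclass for illustration; it reuses the characterRects()
// extension above to draw one bordered layer per entered character.
class BoxedDigitTextField: UITextField {
    private var boxLayers = [CAShapeLayer]()

    // Call this whenever the text changes, e.g. from an .editingChanged action.
    func redrawBoxes() {
        // Remove the layers drawn for the previous text.
        boxLayers.forEach { $0.removeFromSuperlayer() }
        boxLayers.removeAll()

        // Add one bordered layer per character rect.
        for rect in characterRects() {
            let boxLayer = CAShapeLayer()
            boxLayer.frame = rect
            boxLayer.borderWidth = 1.0
            boxLayer.borderColor = UIColor.orange.cgColor
            boxLayers.append(boxLayer)
            layer.addSublayer(boxLayer)
        }
    }
}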

Related

NSTextFinder and its drawIncrementalMatchHighlight method that doesn't seem to work

NSTextFinder has a class method drawIncrementalMatchHighlight. Its description makes me think the method should let me highlight text myself with Apple's default highlight animation. In other words, I should be able to replicate the showFindIndicator(for:) method of AppKit's NSTextView.
But it does not seem to work for me. I did exactly what the documentation tells me:
func showFindIndicator(for charRange: NSRange) {
    // get the text range from the NSRange (I'm using TextKit 2)
    guard let textRange = textLayoutManager.textContentManager?.textRange(from: charRange) else { return }
    // get the screen rect for the text range
    var rangeRect: CGRect = .zero
    textLayoutManager.enumerateTextSegments(in: textRange, type: .selection, options: .rangeNotRequired, using: { _, rect, _, _ in
        rangeRect = rect
        return false
    })
    // create the graphics context to draw the highlighting into and get the layout fragment that draws the text
    guard let bitmapRep = bitmapImageRepForCachingDisplay(in: rangeRect),
          let context = NSGraphicsContext(bitmapImageRep: bitmapRep),
          let layoutFragment = textLayoutManager.textLayoutFragment(for: textRange.location)
    else { return }
    // make the context current
    NSGraphicsContext.current = context
    // draw the background
    self.backgroundColor?.setFill()
    context.cgContext.fill(rangeRect)
    // draw the highlight
    NSTextFinder.drawIncrementalMatchHighlight(in: rangeRect)
    // and finally draw the text
    let origin = layoutFragment.layoutFragmentFrame.origin
    layoutFragment.draw(at: origin, in: context.cgContext)
}
But all of this produces nothing. I cannot see any feedback on the screen.
What am I doing wrong with the drawIncrementalMatchHighlight method?

Color of pixel in ARSCNView

I am trying to get the color of a pixel at a CGPoint determined by the location of a touch. I have tried the following code but the color value is incorrect.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = event?.allTouches?.first {
        let loc: CGPoint = touch.location(in: touch.view)
        // in debugger, the image is correct
        let image = sceneView.snapshot()
        guard let color = image[Int(loc.x), Int(loc.y)] else {
            return
        }
        print(color)
    }
}
....
extension UIImage {
    subscript (x: Int, y: Int) -> [UInt8]? {
        if x < 0 || x > Int(size.width) || y < 0 || y > Int(size.height) {
            return nil
        }
        let provider = self.cgImage!.dataProvider
        let providerData = provider!.data
        let data = CFDataGetBytePtr(providerData)
        let numberOfComponents = 4
        let pixelData = ((Int(size.width) * y) + x) * numberOfComponents
        let r = data![pixelData]
        let g = data![pixelData + 1]
        let b = data![pixelData + 2]
        return [r, g, b]
    }
}
Running this and touching a spot on the screen that is a large area of consistent bright orange yields a wide range of RGB values, and the color those values actually produce is completely different (dark blue in the case of the orange).
I'm guessing the coordinate systems are different and I'm actually sampling a different point in the image?
EDIT: I should also mention that the part I'm tapping is a 3D model that is not affected by lighting, so the color should be (and appears to be) consistent at run time.
Well, it was easier than I thought. First I adjusted a few things, such as registering the touch in my scene view instead:
let loc:CGPoint = touch.location(in: sceneView)
I then rescaled my CGPoint to the snapshot image's coordinate space:
let image = sceneView.snapshot()
let x = image.size.width / sceneView.frame.size.width
let y = image.size.height / sceneView.frame.size.height
guard let color = image[Int(x * loc.x), Int(y * loc.y)] else {
    return
}
This finally gave me consistent RGB values (not just numbers that changed on every touch even when I touched the same color). But the values were still off: for some reason the returned array was in reverse order, so I changed that with:
let b = data![pixelData]
let g = data![pixelData + 1]
let r = data![pixelData + 2]
I'm not sure why I had to do that last part, so any insight into that would be appreciated!
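As a hedged aside (my note, not part of the original answer): the reversed order is most likely because the snapshot's backing CGImage stores pixels as BGRA, i.e. 32-bit little-endian with premultiplied alpha, so the blue byte comes first in memory. A small sketch for inspecting the layout of a given image:
import UIKit

// Illustrative only: print the pixel layout of a UIImage's backing CGImage.
func describePixelFormat(of image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let byteOrder = cgImage.bitmapInfo.intersection(.byteOrderMask)
    print("bits per pixel:", cgImage.bitsPerPixel)                  // typically 32
    print("alpha info:", cgImage.alphaInfo.rawValue)
    print("32-bit little endian:", byteOrder == .byteOrder32Little) // true usually means BGRA in memory
}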

swift cursor position with emojis

The standard method I use to get the cursor position in a UITextField does not seem to work with some emojis. The following code queries a text field for the cursor position after inserting two characters, first an emoji and then a letter. When the emoji is inserted into the text field, the function returns a cursor position of 2 instead of the expected 1. Any ideas what I am doing wrong or how to correct this? Thanks.
Here is the code from an Xcode playground:
class MyViewController: UIViewController {
    override func loadView() {
        // setup view
        let view = UIView()
        view.backgroundColor = .white
        let textField = UITextField()
        textField.frame = CGRect(x: 150, y: 200, width: 200, height: 20)
        textField.textColor = .black
        view.addSubview(textField)
        self.view = view

        // check cursor position
        var str = "🐊"
        textField.insertText(str)
        print("cursor position after '\(str)' insertion is \(getCursorPosition(textField))")
        textField.text = ""
        str = "A"
        textField.insertText(str)
        print("cursor position after '\(str)' insertion is \(getCursorPosition(textField))")
    }

    func getCursorPosition(_ textField: UITextField) -> Int {
        if let selectedRange = textField.selectedTextRange {
            let cursorPosition = textField.offset(from: textField.beginningOfDocument, to: selectedRange.end)
            return cursorPosition
        }
        return -1
    }
}
The code returns the following output:
cursor position after '🐊' insertion is 2
cursor position after 'A' insertion is 1
I'm trying to use the cursor position to split the text string into two pieces: the text before the cursor and the text after the cursor. To do this I use the cursor position as an index into a character array I create with map, as follows. With an emoji, the cursor position leads to an incorrect array index.
var textBeforeCursor = String()
var textAfterCursor = String()
let array = textField.text!.map { String($0) }
let cursorPosition = getCursorPosition(textField)
for index in 0..<cursorPosition {
    textBeforeCursor += array[index]
}
Your issue is that the offset derived from the UITextField's selectedTextRange is a UTF-16 (NSRange-style) offset, and it needs to be properly converted to a Swift String.Index.
func getCursorPosition(_ textField: UITextField) -> String.Index? {
    if let selectedRange = textField.selectedTextRange {
        let cursorPosition = textField.offset(from: textField.beginningOfDocument, to: selectedRange.end)
        let positionRange = NSRange(location: 0, length: cursorPosition)
        let stringOffset = Range(positionRange, in: textField.text!)!
        return stringOffset.upperBound
    }
    return nil
}
Once you have that String.Index you can split the string.
if let index = getCursorPosition(textField) {
    let textBeforeCursor = textField.text![..<index]
    let textAfterCursor = textField.text![index...]
}
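For context (my illustration, not part of the original answer): offset(from:to:) counts UTF-16 code units, which is why the crocodile emoji reports 2 even though it is a single Character:
// Illustrative: one Character can span multiple UTF-16 code units.
let croc = "🐊"
print(croc.count)        // 1 (Characters)
print(croc.utf16.count)  // 2 (UTF-16 code units, which is what the offset counts)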

How can I get the optical bounds of an NSAttributedString?

I need the optical bounds of an attributed string. I know I can call the .size() method and read its width, but that gives me the typographic bounds, with additional space to the right.
My strings will all be very short, only 1-3 characters, so every string contains exactly one glyph run.
I found the function CTRunGetImageBounds, and after following the hints in the link from the comment I was able to extract the run and get the bounds, but this does not give me the desired result.
The following Swift 4 code works in an Xcode 9 playground:
import Cocoa
import PlaygroundSupport

public func getGlyphWidth(glyph: CGGlyph, font: CTFont) -> CGFloat {
    var glyph = glyph
    var bBox = CGRect()
    CTFontGetBoundingRectsForGlyphs(font, .default, &glyph, &bBox, 1)
    return bBox.width
}

class MyView: NSView {
    init(inFrame: CGRect) {
        super.init(frame: inFrame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        // setup context properties
        let context: CGContext = NSGraphicsContext.current!.cgContext
        context.setStrokeColor(CGColor.black)
        context.setTextDrawingMode(.fill)

        // prepare variables and constants
        let alphabet = ["A","B","C","D","E","F","G","H","I","J","K","L"]
        let font = CTFontCreateWithName("Helvetica" as CFString, 48, nil)
        var glyphX: CGFloat = 10

        // draw alphabet as single glyphs
        for letter in alphabet {
            var glyph = CTFontGetGlyphWithName(font, letter as CFString)
            var glyphPosition = CGPoint(x: glyphX, y: 80)
            CTFontDrawGlyphs(font, &glyph, &glyphPosition, 1, context)
            glyphX += getGlyphWidth(glyph: glyph, font: font)
        }

        let textStringAttributes: [NSAttributedStringKey : Any] = [
            NSAttributedStringKey.font : font,
        ]
        glyphX = 10

        // draw alphabet as attributed strings
        for letter in alphabet {
            let textPosition = NSPoint(x: glyphX, y: 20)
            let text = NSAttributedString(string: letter, attributes: textStringAttributes)
            let line = CTLineCreateWithAttributedString(text)
            let runs = CTLineGetGlyphRuns(line) as! [CTRun]
            let width = (CTRunGetImageBounds(runs[0], nil, CFRange(location: 0, length: 0))).maxX
            text.draw(at: textPosition)
            glyphX += width
        }
    }
}

var frameRect = CGRect(x: 0, y: 0, width: 400, height: 150)
PlaygroundPage.current.liveView = MyView(inFrame: frameRect)
The code draws the single letters from A - L as single Glyphs in the upper row of the playground's live view. The horizontal position will be advanced after each letter by the letter's width which is retrieved via the getGlyphWidth function.
Then it uses the same letters to create attributed strings, from which it first creates a CTLine, then extracts the (only) CTRun, and finally measures its width. The result is seen in the second line of the live view.
The first line is the desired result: The width function returns exactly the width of every single letter, resulting in them touching each other.
I want the same result with the attributed string version, but here the ImageBounds seem to add an additional padding which I want to avoid.
How can I measure the exact width from the leftmost to the rightmost pixel of a given text?
And is there a less clumsy way to achieve this without having to cast four times (NSAtt.Str->CTLine->CTRun->CGRect->maxX) ?
OK, I found the answer myself:
Using the .width of the rect returned by CTRunGetImageBounds instead of .maxX gives the right result.
The same function also exists for CTLine: CTLineGetImageBounds.
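For illustration, a minimal sketch (my own, under the same setup as the playground above) of the line-level variant:
import Cocoa

// Measure the tight "ink" width of a short attributed string via CTLineGetImageBounds.
// Passing nil for the context uses the default options; inside draw(_:) you could
// pass the current CGContext instead.
func opticalWidth(of text: NSAttributedString) -> CGFloat {
    let line = CTLineCreateWithAttributedString(text)
    return CTLineGetImageBounds(line, nil).width
}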

How to highlight a UITextView's text line by line in swift?

I am trying to highlight the text in a UITextView line by line. I want to iterate over each line and highlight it for the user to see, then remove the highlighting in preparation for the next line. I have tried and failed to create a solution; this is my best attempt so far.
Here is some of what I have been working on; for some reason it currently fills the UITextView with "NSBackgroundColor 1101" and I do not know why.
func highlight() {
    let str = "This is\n some placeholder\n text\nwith newlines."
    var newStr = NSMutableAttributedString(string: "")
    var arr: [String] = str.components(separatedBy: "\n")
    var attArr: [NSMutableAttributedString] = []
    for i in 0..<arr.count {
        attArr.append(NSMutableAttributedString(string: arr[i]))
    }
    for j in 0..<attArr.count {
        let range = NSMakeRange(0, attArr[j].length)
        attArr[j].addAttribute(.backgroundColor, value: UIColor.yellow, range: range)
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            for m in 0..<attArr.count {
                newStr = NSMutableAttributedString(string: "\(attArr[m])\n")
                self.textView.attributedText = newStr
            }
        }
        attArr[j].removeAttribute(.backgroundColor, range: range)
        // remove from textView here
    }
}
As you can see, this algorithm is supposed to take the text view's text and split it into an array by separating each line at the newline delimiter.
Next it creates an array of the same lines as mutable attributed strings, so the highlight attribute can be added.
Each time a line is highlighted there should be a small delay before the next line begins to highlight. If anyone can help or point me in the right direction to implement this correctly it would help immensely.
Thank you!
So you want this:
You need the text view's contents to always be the full string, with one line highlighted, but your code sets it to just the highlighted line. Your code also schedules all the highlights to happen at the same time (.now() + 0.5) instead of at different times.
Here's what I'd suggest:
Create an array of ranges, one range per line.
Use that array to modify the text view's textStorage by removing and adding the .backgroundColor attribute as needed to highlight and unhighlight lines.
When you highlight line n, schedule the highlighting of line n+1. This has two advantages: it will be easier and more efficient to cancel the animation early if you need to, and it will be easier to make the animation repeat endlessly if you need to.
I created the demo above using this playground:
import UIKit
import PlaygroundSupport

let text = "This is\n some placeholder\n text\nwith newlines."

let textView = UITextView(frame: CGRect(x: 0, y: 0, width: 200, height: 100))
textView.backgroundColor = .white
textView.text = text

let textStorage = textView.textStorage

// Use NSString here because textStorage expects the kind of ranges returned by NSString,
// not the kind of ranges returned by String.
let storageString = textStorage.string as NSString
var lineRanges = [NSRange]()
storageString.enumerateSubstrings(in: NSMakeRange(0, storageString.length), options: .byLines, using: { (_, lineRange, _, _) in
    lineRanges.append(lineRange)
})

func setBackgroundColor(_ color: UIColor?, forLine line: Int) {
    if let color = color {
        textStorage.addAttribute(.backgroundColor, value: color, range: lineRanges[line])
    } else {
        textStorage.removeAttribute(.backgroundColor, range: lineRanges[line])
    }
}

func scheduleHighlighting(ofLine line: Int) {
    DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(1)) {
        if line > 0 { setBackgroundColor(nil, forLine: line - 1) }
        guard line < lineRanges.count else { return }
        setBackgroundColor(.yellow, forLine: line)
        scheduleHighlighting(ofLine: line + 1)
    }
}

scheduleHighlighting(ofLine: 0)
PlaygroundPage.current.liveView = textView