Cannot get foreground color of NSTextView - swift

I am trying to use the code that follows to get the foreground color of an NSTextView. Unfortunately I get a runtime error that must be related to the involved color spaces. How can I fix my code?
if let textStorage = textView.textStorage {
    let rowObj = textStorage.paragraphs[row]
    let range = NSMakeRange(0, rowObj.string.characters.count)
    colorOrRowBeforeSelection = rowObj.foregroundColor!
    if(rowObj.foregroundColor != nil) {
        let r = rowObj.foregroundColor!.redComponent
        let g = rowObj.foregroundColor!.greenComponent
        let b = rowObj.foregroundColor!.blueComponent
    } else {
        Log.e("cannot get foreground color components")
    }
} else {
    Log.e("textStorage = nil")
}
I get the following runtime error:
[General] *** invalid number of components for colorspace in initWithColorSpace:components:count:

I got it working with the following:
let components = UnsafeMutablePointer<CGFloat>.allocate(capacity: 4)
rowObj.foregroundColor!.getComponents(components)
let c = NSColor(colorSpace: NSColorSpace.sRGB, components: components, count: 4)
components.deinitialize()
components.deallocate(capacity: 4)
let r = c.redComponent
let g = c.greenComponent
let b = c.blueComponent
EDIT: I fixed the memory handling; the code now works except that it turns what should be white (in the gray color space) into yellow... I hope someone can fix that.
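A note on that remaining issue (not from the original post, just a hedged suggestion): getComponents(_:) only writes as many components as the color's own color space has, so a grayscale white fills two slots and leaves the rest of the buffer as garbage. Converting the color to sRGB first and reading the components from the converted color avoids guessing the component count. A minimal sketch, reusing the names from the snippet above:
// Hedged sketch: convert to a known RGB color space before reading components,
// so this also works for colors defined in a grayscale (or any other) color space.
if let converted = rowObj.foregroundColor?.usingColorSpace(.sRGB) {
    let r = converted.redComponent
    let g = converted.greenComponent
    let b = converted.blueComponent
    print(r, g, b)
} else {
    Log.e("cannot convert foreground color to sRGB")
}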

Related

Data via Buffer Post Xcode 11.5 Update

What I Have:
Apple's chroma key sample code states that we can create a chroma key filter cube via:
func chromaKeyFilter(fromHue: CGFloat, toHue: CGFloat) -> CIFilter? {
    // 1
    let size = 64
    var cubeRGB = [Float]()
    // 2
    for z in 0 ..< size {
        let blue = CGFloat(z) / CGFloat(size - 1)
        for y in 0 ..< size {
            let green = CGFloat(y) / CGFloat(size - 1)
            for x in 0 ..< size {
                let red = CGFloat(x) / CGFloat(size - 1)
                // 3
                let hue = getHue(red: red, green: green, blue: blue)
                let alpha: CGFloat = (hue >= fromHue && hue <= toHue) ? 0 : 1
                // 4
                cubeRGB.append(Float(red * alpha))
                cubeRGB.append(Float(green * alpha))
                cubeRGB.append(Float(blue * alpha))
                cubeRGB.append(Float(alpha))
            }
        }
    }
    let data = Data(buffer: UnsafeBufferPointer(start: &cubeRGB, count: cubeRGB.count))
    // 5
    let colorCubeFilter = CIFilter(name: "CIColorCube", withInputParameters: ["inputCubeDimension": size, "inputCubeData": data])
    return colorCubeFilter
}
I then created a function that feeds any image into this filter and returns the filtered image.
public func filteredImage(ciimage: CIImage) -> CIImage? {
    let filter = chromaKeyFilter(fromHue: 110/360, toHue: 130/360)! // green screen effect colors
    filter.setValue(ciimage, forKey: kCIInputImageKey)
    return RealtimeDepthMaskViewController.filter.outputImage
}
I can then execute this function on any image and obtain a chroma key'd image.
if let maskedImage = filteredImage(ciimage: ciimage) {
    // Do something
} else {
    print("Not filtered image")
}
Update Issues:
let data = Data(buffer: UnsafeBufferPointer(start: &cubeRGB, count: cubeRGB.count))
However, once I updated Xcode to v11.6, I get the warning Initialization of 'UnsafeBufferPointer<Float>' results in a dangling buffer pointer as well as a runtime error Thread 1: EXC_BAD_ACCESS (code=1, address=0x13c600020) on the line of code above.
I tried addressing this issue with this answer to correct Swift's new UnsafeBufferPointer warning. The warning is then corrected and I no longer have a runtime error.
Problem
Now, although the warning doesn't appear and I don't experience a runtime error, I still get the print statement Not filtered image. I assume the issue stems from the way the data is being handled or deallocated; I'm not entirely sure how to correctly handle UnsafeBufferPointer alongside Data.
What is the appropriate way to correctly obtain the Data for the Chroma Key?
I wasn't sure what RealtimeDepthMaskViewController was in this context, so I just returned the filter output instead. Apologies if this was meant to be left as-is. I also added a guard statement with the possibility of returning nil, which matches the function's optional return type.
public func filteredImage(ciImage: CIImage) -> CIImage? {
    guard let filter = chromaKeyFilter(fromHue: 110/360, toHue: 130/360) else { return nil }
    filter.setValue(ciImage, forKey: "inputImage")
    return filter.outputImage // instead of RealtimeDepthMaskViewController.filter.outputImage
}
For the dangling pointer compiler warning, I found a couple approaches:
// approach #1
var data = Data()
cubeRGB.withUnsafeBufferPointer { ptr in
    data = Data(buffer: ptr)
}
// approach #2
let byteCount = MemoryLayout<Float>.size * cubeRGB.count
let data = Data(bytes: &cubeRGB, count: byteCount)
One caveat: I looked at this with Xcode 11.6 rather than 11.5.
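One extra, hedged sanity check that isn't from the original posts but may help with the "Not filtered image" path: CIColorCube expects inputCubeData to contain exactly dimension³ RGBA float entries, so verifying the byte count before building the filter can rule out a malformed cube as the cause. A sketch using the names from chromaKeyFilter(fromHue:toHue:) above:
// Hedged sketch: sanity-check the cube data size before creating the filter.
// A 64x64x64 cube holds 64*64*64 entries of 4 Floats each.
let data = cubeRGB.withUnsafeBufferPointer { Data(buffer: $0) } // approach #1, scoped safely
let expectedByteCount = size * size * size * 4 * MemoryLayout<Float>.size
assert(data.count == expectedByteCount,
       "cube data is \(data.count) bytes, expected \(expectedByteCount)")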

Modify NSImage's pixel color

I'm having a problem modifying an NSImage's pixel colors. What I'm doing is checking each pixel's color and changing it.
Here is my code:
image.lockFocus()
guard let ctx = NSGraphicsContext.current?.cgContext else {
    image.unlockFocus()
    return nil
}
// draw
guard let buffer = ctx.data else {
    return nil
}
let pixelBuffer = buffer.bindMemory(to: UInt32.self, capacity: width * height)
let widthInPixelBuffer = ctx.bytesPerRow / (ctx.bitsPerPixel / ctx.bitsPerComponent)
let heightInPixelBuffer = ctx.height
let upperBound = Int(Float(heightInPixelBuffer) * 0.5)
for column in 0 ..< widthInPixelBuffer {
    for row in 1 ... upperBound {
        let offset = (upperBound - row) * widthInPixelBuffer + column
        if pixelBuffer[offset] == bgColor {
            break
        } else {
            pixelBuffer[offset] = 0xFFFFFF00 // yellow
        }
    }
}
image.unlockFocus()
My problem is that after I change some pixels to yellow, what I see in the NSImage is transparent (on my MacBook Pro).
But it works fine on my Mac mini: I see the yellow color as expected.
I can't figure it out; could someone tell me why?
How can I fix it?
I have searched for this for some days but can't find the reason online.
Thanks!
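(Not from the original question, but a hedged diagnostic sketch that might help narrow this down: the bitmap that lockFocus() backs the image with can differ between machines, for example in scale factor, bytes per row, byte order, and alpha layout, so logging the context's properties on both Macs would show whether the same raw 0xFFFFFF00 write is landing in a differently laid-out buffer.)
// Hedged diagnostic sketch: log the layout of the CGContext obtained after lockFocus().
// Differences in size (Retina backing store), alpha placement, or byte order between
// the MacBook Pro and the Mac mini would explain why the same raw pixel value renders
// as transparent on one machine and yellow on the other.
print("context size: \(ctx.width) x \(ctx.height)")
print("bytesPerRow: \(ctx.bytesPerRow), bitsPerPixel: \(ctx.bitsPerPixel), bitsPerComponent: \(ctx.bitsPerComponent)")
print("alphaInfo: \(ctx.alphaInfo.rawValue), bitmapInfo: \(ctx.bitmapInfo.rawValue)")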

Drawing boxes around each digit to be entered in UITextField

I am trying to draw boxes around each digit entered by a user in UITextField for which keyboard type is - Number Pad.
To simplify the problem statement, I assumed that each of the digits (0 to 9) will have the same bounding box for its glyph, which I obtained using the code below:
func getGlyphBoundingRect() -> CGRect? {
    guard let font = font else {
        return nil
    }
    // As of now taking 8 as the base digit
    var unichars = [UniChar]("8".utf16)
    var glyphs = [CGGlyph](repeating: 0, count: unichars.count)
    let gotGlyphs = CTFontGetGlyphsForCharacters(font, &unichars, &glyphs, unichars.count)
    if gotGlyphs {
        let cgpath = CTFontCreatePathForGlyph(font, glyphs[0], nil)!
        let path = UIBezierPath(cgPath: cgpath)
        return path.cgPath.boundingBoxOfPath
    }
    return nil
}
I am drawing each bounding box thus obtained using the code below:
func configure() {
    guard let boundingRect = getGlyphBoundingRect() else {
        return
    }
    for i in 0..<length { // length denotes the number of allowed digits in the box
        var box = boundingRect
        box.origin.x = CGFloat(i) * boundingRect.width
        let shapeLayer = CAShapeLayer()
        shapeLayer.frame = box
        shapeLayer.borderWidth = 1.0
        shapeLayer.borderColor = UIColor.orange.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Now the problem is:
If I enter the digits 8, 8, 8 in the text field, the bounding box drawn for the first occurrence of the digit is aligned, but for the second occurrence of the same digit the bounding box appears slightly offset (by a negative x), and the offset in negative x increases for subsequent occurrences of the same digit.
Here is an image for reference.
I tried to solve the problem by setting NSAttributedString.Key.kern to 0, but it did not change the behavior.
Am I missing an important property along the X axis in the calculation, due to which I am unable to get a properly aligned bounding box over each digit? Please suggest.
The key function you need to use is:
protocol UITextInput {
    public func firstRect(for range: UITextRange) -> CGRect
}
Here's the solution as a function:
extension UITextField {
    func characterRects() -> [CGRect] {
        var beginningOfRange = beginningOfDocument
        var characterRects = [CGRect]()
        while beginningOfRange != endOfDocument {
            guard let endOfRange = position(from: beginningOfRange, offset: 1),
                  let textRange = textRange(from: beginningOfRange, to: endOfRange) else { break }
            beginningOfRange = endOfRange
            var characterRect = firstRect(for: textRange)
            characterRect = convert(characterRect, from: textInputView)
            characterRects.append(characterRect)
        }
        return characterRects
    }
}
Note that you may need to clip your rects if your text is too long for the text field. Here's an example of the solution without clipping:
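As a follow-up, here is a hedged usage sketch (not from the original answer) that draws an orange border over each entered digit using characterRects(); the updateDigitBoxes(for:) name and the "digitBox" layer name are made up for the example, and it would typically be called from an .editingChanged handler:
// Hedged usage sketch: redraw one border layer per entered character.
// Call this whenever the text changes, e.g. from an .editingChanged action.
func updateDigitBoxes(for textField: UITextField) {
    // Remove boxes from the previous call (identified by the assumed layer name).
    textField.layer.sublayers?
        .filter { $0.name == "digitBox" }
        .forEach { $0.removeFromSuperlayer() }

    for rect in textField.characterRects() {
        let boxLayer = CALayer()
        boxLayer.name = "digitBox"
        boxLayer.frame = rect
        boxLayer.borderWidth = 1.0
        boxLayer.borderColor = UIColor.orange.cgColor
        textField.layer.addSublayer(boxLayer)
    }
}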

Color of pixel in ARSCNView

I am trying to get the color of a pixel at a CGPoint determined by the location of a touch. I have tried the following code but the color value is incorrect.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = event?.allTouches?.first {
        let loc: CGPoint = touch.location(in: touch.view)
        // in debugger, the image is correct
        let image = sceneView.snapshot()
        guard let color = image[Int(loc.x), Int(loc.y)] else {
            return
        }
        print(color)
    }
}
....
extension UIImage {
    subscript(x: Int, y: Int) -> [UInt8]? {
        if x < 0 || x > Int(size.width) || y < 0 || y > Int(size.height) {
            return nil
        }
        let provider = self.cgImage!.dataProvider
        let providerData = provider!.data
        let data = CFDataGetBytePtr(providerData)
        let numberOfComponents = 4
        let pixelData = ((Int(size.width) * y) + x) * numberOfComponents
        let r = data![pixelData]
        let g = data![pixelData + 1]
        let b = data![pixelData + 2]
        return [r, g, b]
    }
}
Running this and touching a spot on the screen that is a large, consistently bright orange yields a wide range of RGB values, and the color they actually produce is completely different (dark blue in the case of the orange).
I'm guessing that maybe the coordinate systems are different and I'm actually sampling a different point on the image?
EDIT: I should also mention that the part I'm tapping on is a 3D model that is not affected by lighting, so the color should be (and appears to be) consistent throughout runtime.
Well, it was easier than I thought. I first adjusted some things, like making the touch register within my scene view instead:
let loc:CGPoint = touch.location(in: sceneView)
I then scaled my CGPoint from the scene view's frame to the snapshot image's size by doing the following:
let image = sceneView.snapshot()
let x = image.size.width / sceneView.frame.size.width
let y = image.size.height / sceneView.frame.size.height
guard let color = image[Int(x * loc.x), Int(y * loc.y)] else {
    return
}
This finally gave me consistent RGB values (not just numbers that changed on every touch even when I tapped the same color). But the values were still off. For some reason my returned array was reversed, so I changed that with:
let b = data![pixelData]
let g = data![pixelData + 1]
let r = data![pixelData + 2]
I'm not sure why I had to do that last part, so any insight into that would be appreciated!
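A hedged guess at that last part (not from the original answer): the snapshot's underlying CGImage is most likely stored as 32-bit little-endian with premultiplied alpha, which lays the bytes out in memory as B, G, R, A rather than R, G, B, A. Rather than assuming, the subscript could inspect the image's bitmap info, for example:
// Hedged sketch: check the pixel layout of the snapshot instead of assuming RGBA.
// If the byte order is 32-bit little-endian with premultiplied alpha, the bytes read
// back as B, G, R, A - which matches the reversed components observed above.
if let cgImage = sceneView.snapshot().cgImage {
    let isLittleEndian32 = cgImage.bitmapInfo.contains(.byteOrder32Little)
    print("bitsPerPixel: \(cgImage.bitsPerPixel)")
    print("alphaInfo: \(cgImage.alphaInfo.rawValue)")
    print("little-endian 32-bit: \(isLittleEndian32)")
}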

Xcode 6 Swift: error "Gamescene does not have a member named 'spawnBlocks'"

OK, so I'm working in Swift and I initialized spawnBlocks as a function, yet I am getting the error: "Gamescene does not have a member named 'spawnBlocks'".
I looked at some other questions on SO but they don't seem to address this specific issue. I'm not sure how else to write this. The code:
let spawn = SKAction.runBlock({ () in self.spawnBlocks() }) // error occurs on this line
let delay = SKAction.waitForDuration(NSTimeInterval(2))
let spawnThenDelay = SKAction.sequence([spawn, delay])
let spawnThenDelayForever = SKAction.repeatActionForever(spawnThenDelay)
self.runAction(spawnThenDelayForever)

func spawnBlocks() {
    let blockPair = SKNode()
    blockPair.position = CGPointMake(self.frame.size.width + block1Texture.size().width * 2, 0) // change to height
    blockPair.zPosition = -10 // zPosition determines whether this is drawn in front of or behind other objects, which are at 0
}
}
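For reference, a hedged sketch (not from the original thread) of the usual arrangement that avoids this error: spawnBlocks() declared as a method of GameScene itself, at class scope, rather than nested inside another function, with the SKAction set up in didMoveToView(_:). The block1Texture asset name is assumed from the snippet above, and the APIs match the Xcode 6 / Swift 1.x era of the question.
import SpriteKit

class GameScene: SKScene {
    let block1Texture = SKTexture(imageNamed: "block1") // assumed asset name

    override func didMoveToView(view: SKView) {
        // self.spawnBlocks() resolves because spawnBlocks() is a member of GameScene below
        let spawn = SKAction.runBlock({ () in self.spawnBlocks() })
        let delay = SKAction.waitForDuration(NSTimeInterval(2))
        let spawnThenDelay = SKAction.sequence([spawn, delay])
        let spawnThenDelayForever = SKAction.repeatActionForever(spawnThenDelay)
        self.runAction(spawnThenDelayForever)
    }

    func spawnBlocks() {
        let blockPair = SKNode()
        blockPair.position = CGPointMake(self.frame.size.width + block1Texture.size().width * 2, 0)
        blockPair.zPosition = -10 // drawn behind other objects, which are at 0
        self.addChild(blockPair)
    }
}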