I've been trying to get a solid color when I draw a 0.5 pt wide line, but it gets opaque when the width is 1.0 or less (see picture).
This is the code (Swift 4):
func drawLine(from: CGPoint, to: CGPoint) {
    let path = UIBezierPath()
    path.move(to: from)
    path.addLine(to: to)

    let lineLayer = CAShapeLayer()
    lineLayer.path = path.cgPath
    lineLayer.lineWidth = linesWidth
    lineLayer.strokeColor = linesColor.cgColor
    lineLayer.isOpaque = false                    // added while trying to make it work
    lineLayer.opacity = 1                         // added while trying to make it work
    lineLayer.shadowColor = UIColor.clear.cgColor // added while trying to make it work
    lineLayer.shadowOffset = .zero                // added while trying to make it work
    lineLayer.shadowOpacity = 0                   // added while trying to make it work
    self.layer.insertSublayer(lineLayer, at: 0)
}
Thanks.
I found an answer at https://www.raywenderlich.com/411-core-graphics-tutorial-part-1-getting-started (it's all because of the anti-aliasing, as explained in the first answer by Codo):

If you have oddly sized straight lines, you'll need to position them
at plus or minus 0.5 points to prevent anti-aliasing

So, if the lineWidth is 1 pt or less, I offset the coordinates by 0.5 points or (1 / scale). Now the line is crisp.
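A minimal sketch of that fix in the half-pixel (1 / scale) variant, assuming the question's drawLine(from:to:) on iOS; the alignToPixel helper name is my own invention:

// Hypothetical helper: snap a coordinate to the nearest pixel boundary, then
// offset by half a pixel so a one-pixel-wide stroke covers a whole pixel row/column.
func alignToPixel(_ value: CGFloat) -> CGFloat {
    let scale = UIScreen.main.scale                     // pixels per point: 2.0, 3.0, ...
    return (value * scale).rounded() / scale + 0.5 / scale
}

drawLine(from: CGPoint(x: alignToPixel(10), y: alignToPixel(10)),
         to: CGPoint(x: alignToPixel(200), y: alignToPixel(10)))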
I think when you say opaque you actually mean partially transparent. And I guess we're talking about macOS here, right?
How do you expect a line less than 1 pixel wide to look on the screen? A pixel is the smallest unit of the screen; the entire pixel has a single color. It can't be partially red and partially white.
So macOS – as part of the anti-aliasing – blends the thin line and the background, i.e. it makes the pixels partially transparent before drawing them on the background. The effect is that the line is perceived as thinner even though it is still 1 pixel wide.
If you don't like this effect, do not draw lines of less than 1 pixel. But blending is the only way to make a line look thinner than 1 pixel.
BTW: Pixel size depends on the resolution. On a retina device, 1 pixel is 0.5 point; on non-retina devices it's 1 point, and there are even factors in between.
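In code, that advice might look like this (UIKit, matching the question's snippet; clamping the stroke to a one-pixel "hairline" is my own illustration):

let hairline = 1.0 / UIScreen.main.scale   // one device pixel in points: 0.5 on 2x, ~0.33 on 3x
lineLayer.lineWidth = max(linesWidth, hairline)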
The positions and sizes of my Game Pieces, as set by CGPoint(..) and CGRect(..), don't make arithmetic sense to me when compared with the width and height of the container that surrounds all the Game Pieces.
Let me illustrate with just one specific example –
I call the surrounding container the "room".
One of many specific Game Pieces is the "rock".
Here's the math:
roomWidth = Double(UIScreen.main.bounds.width)
roomHeight = Double(UIScreen.main.bounds.height)
While in Portrait mode:
roomWidth = 744.0
roomHeight = 1133.0
When rotated to Landscape mode:
roomWidth = 1133.0,
roomHeight = 744.0
So far so good .. here’s the problem:
When I look at my .sks file, the widths of the "rock" and its adjacent game pieces far exceed the roomWidth; for example,
widths of rock + paddle + tree = 507 + 768 + 998 = 2273, which obviously exceeds the room's width in either Portrait or Landscape mode – and this math doesn't even account for the separation between Game Pieces.
The final math "craziness" looks at the Swift xPos values for each Game Piece as specified in my .sks file:
Room: xPos = 40,
Rock: xPos = -390,
Paddle: xPos = -259,
Tree: xPos = 224
I cannot grasp the two large negative numbers .. to me, that means the Rock and the Paddle shouldn't even be visible .. seriously off-screen.
One significant addition: I did set the Autoresizing Mask to center horizontally and vertically.
I need a serious infusion of “smarts” here.
The default anchorPoint of an sks file (SpriteKit Scene file) is (0.5, 0.5). So the origin (0, 0) of the scene is drawn at the center of the SKView. You can change the anchor point in the Attributes inspector when editing the sks file. The default means that negative coordinates not too far from the origin will be visible in the SKView.
The scene also has a scaleMode property which determines how the scene is scaled if its size doesn't match the SKView's size. The default is .fill, which means the view scales the scene's axes independently so the scene's size exactly fills the view.
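A minimal sketch of how those defaults interact (SpriteKit; the scene name "GameScene" and the skView outlet are assumptions, not from the question):

// With the default anchorPoint (0.5, 0.5), scene x-coordinates run from
// -size.width / 2 to +size.width / 2, so xPos = -390 is on-screen as long
// as the scene is at least 780 points wide.
if let scene = SKScene(fileNamed: "GameScene") {
    scene.anchorPoint = CGPoint(x: 0, y: 0) // optional: move (0, 0) to the bottom-left corner
    scene.scaleMode = .aspectFit            // optional: keep the aspect ratio instead of .fill
    skView.presentScene(scene)
}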
I am trying to make the line spacing a little less than the default for a small window.
I have code similar to this question: How to Increase Line spacing in UILabel in Swift
let title = "This is text that will be long enough to form two lines"
let styles = NSMutableParagraphStyle()
styles.lineSpacing = 0.1
let attribs = [
    NSAttributedString.Key.paragraphStyle: styles
]
let attrString = NSAttributedString(string: title, attributes: attribs)
introText.attributedStringValue = attrString
While changing lineSpacing to 10 makes a noticeable difference, I can't see a difference if I make it less than 1.
Here is what 0.1 looks like:
Line spacing is measured in points, not lines. There's basically no such thing as a fraction of a point for drawing purposes (I'm simplifying, since retina screens do exist). Zero is the minimum, and at 0.1 you are effectively there; you can't reduce the leading any further.
Keep in mind the relationship of points to pixels. For most recent devices, a point represents two or three pixels. I have set a UIView with a height of 0.5, used as a divider, which is about 1 pixel on many devices, and been able to see the difference. A height of 0.1 is probably rounded off to nothing, though.
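As a quick sanity check of that point-to-pixel math (UIKit assumed; the divider view is my own illustration):

// One device pixel expressed in points: 0.5 pt on a 2x screen, ~0.33 pt on 3x.
let onePixel = 1.0 / UIScreen.main.scale

// A hairline divider exactly one pixel tall, like the one described above;
// a height of 0.1 pt would round off to nothing.
let divider = UIView(frame: CGRect(x: 0, y: 100, width: 320, height: onePixel))
divider.backgroundColor = .gray
view.addSubview(divider)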
I'm trying to write a script file for FrameMaker that creates a keyboard shortcut for a frame border. Everything works fine except for the BorderWidth attribute:
aframe.Pen = 0;
aframe.Color = "Black";
aframe.BorderWidth = 0.5;
I want to set the border width to 0.5pt but it always comes out as 1pt.
How can I make the border thinner using this script?
In FrameMaker, 1 pt is equal to 65536 (lengths are stored in these metric units).
By setting 0.5, you've set it to a value lower than the minimum, so it falls back to 1 pt.
To get the desired result, set BorderWidth to 65536/2.
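A minimal sketch of the corrected assignment, in the same scripting style as the question:

aframe.Pen = 0;
aframe.Color = "Black";
aframe.BorderWidth = 65536 / 2; // 32768 metric units = 0.5 pt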
This is a bizarre one for me. After spending two days trying to fix it and reading what I could find on Apple's sites and Stack Overflow, I still have no solution. Hopefully someone can help me.
So I am rotating a CAShapeLayer that is in the coordinate system of the view. After rotation the frame coordinates are updated, but those of the path are not.
On screen, both the path and the frame display as rotated! So if I use path.contains to check whether a point belongs to the CAShapeLayer after rotation, I get the wrong answer. Using the rotated frame does not work either, because the frames of adjacent paths can overlap and also give wrong answers.
Here is the code that rotates the relevant CAShapeLayer:
let shapeCopy = CAShapeLayer()
let inputShape = tempShapeList[index]

shapeCopy.backgroundColor = UIColor.red.withAlphaComponent(0.75).cgColor
shapeCopy.frame = inputShape.frame
shapeCopy.bounds = inputShape.bounds
shapeCopy.path = inputShape.path
shapeCopy.position = inputShape.position
shapeCopy.anchorPoint = inputShape.anchorPoint

print("bounding rect pre rotation: \(shapeCopy.frame)")
print("path pre rotation: \(shapeCopy.path!)")

let transform = CATransform3DMakeRotation(CGFloat(75 * Double.pi / 180.0), 0, 0, 1)
shapeCopy.transform = transform

print("bounding rect post rotation: \(shapeCopy.frame)")
print("path post rotation: \(shapeCopy.path!)")

if shapeCopy.path!.contains(newPoint) {
    containingView.layer.addSublayer(shapeCopy)
    answer = index
    print("Prize is: \(String(describing: textLabelList[index].text))")
    break
}
The message in the debugger:
bounding rect pre rotation: (139.075809065823, 236.846930318145, 174.164592138914, 163.153069681855)
path pre rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
bounding rect post rotation:(189.419925763055, 292.163148046286, 202.670877072107, 210.457199272682)
path post rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
Screenshot of the simulator:
In the screenshot you can see the rotated path and its frame: the dark-colored pie slice and the slightly translucent rectangle around it.
However, the coordinates of the path haven't changed, so the program believes that the red dot belongs to the shaded slice that was rotated away! If the path had updated correctly, the red dot would belong to the yellow slice labelled "e6 ¢".
Also note that the background fortune wheel is a view in its own coordinate system; the rotated dark pie is in the coordinate system of the top-level view, as is the red dot.
Not sure if the post is fully clear – apologies in advance for the verbose post. If I have missed any detail that could help, please let me know.
Thanks in advance....
Applying a transform to a layer doesn't change the way the layer's content is stored. If the layer contains an image, the image is stored unrotated, and if the layer contains a path, the path is stored unrotated.
Instead, when the window server builds up (“composites”) the screen image, it applies the transform as it is drawing the layer's content into the frame buffer.
The frame property is different. It is actually computed from several other properties: position, bounds.size, anchorPoint, and transform.
You want to test whether a point is inside the on-screen appearance of the layer's path—that is, the path with the transform applied.
One way to do this is to convert the point into the layer's coordinate system. To convert it, you also need to know the original coordinate system of the point. Then you can use -[CALayer convertPoint:fromLayer:] or -[CALayer convertPoint:toLayer:]. For example, suppose you have a tap gesture recognizer and you want to know whether the tap is inside the path:
@IBAction func tapperDidFire(_ tapper: UITapGestureRecognizer) {
    let newPoint = tapper.location(in: view)
    let newPointInShapeLayer = shapeLayer.convert(newPoint, from: view.layer)
    if shapeLayer.path?.contains(newPointInShapeLayer) ?? false {
        print("Hit!")
    }
}
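Note that convert(_:from:) takes the destination layer's position, anchorPoint, and transform into account (the same properties the computed frame is derived from), so the converted point can be tested directly against the stored, untransformed path.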
I have a custom UIView that draws itself in its drawRect: method.
The problem is that the anti-aliasing acts very weird: black horizontal or vertical lines are drawn very blurry.
If I disable anti-aliasing with CGContextSetAllowsAntialiasing, everything is drawn as expected.
Anti-aliasing: http://dustlab.com/stuff/antialias.png
No anti-aliasing (which looks like the expected result with AA): http://dustlab.com/stuff/no_antialias.png
The line width is exactly 1, and all coordinates are integral values.
The same happens if I draw a rectangle using CGContextStrokeRect, but not if I draw exactly the same CGRect with UIRectStroke.
Since a stroke expands by equal amounts on both sides of the path, a line one pixel wide must not be placed on an integer coordinate, but at a 0.5 pixel offset.
Calculate correct coordinates for stroked lines like this:
CGPoint alignedPos = CGPointMake(floorf(pos.x) + 0.5f, floorf(pos.y) + 0.5f); // snap to the pixel grid, then center the stroke on a pixel
BTW: Don't cast your values to int and back to float to get rid of the decimal part. There's a function for this in C called floor.
In your view frames, you probably have float values that are not whole numbers. While frames are precise enough to represent fractions of a pixel (floats), you will get blurriness unless you cast to an int:
CGRect frame = CGRectMake((int)self.frame.origin.x, (int)self.frame.origin.y, (int)self.frame.size.width, (int)self.frame.size.height);