Not sure if I am asking this question correctly, but I have two components: a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I can render this CGRect to some sort of image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as noted below, leverage UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stick within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it
// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)
// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)
// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")
// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)
// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to a CVPixelBuffer for preview on an MTKView.
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) filtered with a CIFilter, then composited back over the original. If there is a better approach (such as taking the original image, filtering it, cropping it to the path, leaving everything outside of the path transparent, and compositing), that would likely perform better.
To note, the above sample code results in a crash, as the UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no direct way to do the same with Core Image (on the GPU). However, you can try the following:
Assuming (from your previous question) that the path always has a certain shape, you can generate a mask image containing the path once, for a reference size of your choice. Make sure that the path doesn't "touch" the border.
Then, when you want to use it as a mask, move and scale the shape image into the correct place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to recycle the pathImage for every frame and thereby avoid Core Graphics and CPU-GPU synchronization.
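For illustration, here is a minimal sketch of how the pieces could fit together per frame. The targetRect parameter is a hypothetical name for the area the shape should cover in the image; pathImage is the mask rendered once as described above:
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: place the pre-rendered path mask over `targetRect` and blend a
// filtered version of `image` over the original there.
func composite(image: CIImage, pathImage: CIImage, targetRect: CGRect) -> CIImage? {
    // Scale the reference-sized mask to the target area...
    let scaleX = targetRect.width / pathImage.extent.width
    let scaleY = targetRect.height / pathImage.extent.height
    var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    // ...move it into place...
    mask = mask.transformed(by: CGAffineTransform(translationX: targetRect.minX, y: targetRect.minY))
    // ...and extend its (black) edges to cover the whole image.
    mask = mask.clampedToExtent()
    let filter = CIFilter.blendWithMask()
    filter.inputImage = image.applyingFilter("CIPhotoEffectNoir")
    filter.backgroundImage = image
    filter.maskImage = mask
    return filter.outputImage
}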
My goal is to render an SCNScene off screen with a transparent background, as a PNG. Full reproducing project here.
It works, but when I enable jittering, the resulting render is semitransparent. In this example I pasted in the resulting PNG on top of an image with black squares, and you will notice that the black squares are in fact visible:
As you can see, the black boxes are visible through the 3D objects.
But, when I disable jittering, I get this:
As you can see, the black boxes are not visible.
I'm on Monterey 12.1 (21C52). I'm testing the images in Preview and in Figma.
I'm using standard SDK features only. Here's what I do:
scene.background.contents = NSColor.clear
let snapshotRenderer = SCNRenderer(device: MTLCreateSystemDefaultDevice())
snapshotRenderer.pointOfView = sceneView.pointOfView
snapshotRenderer.scene = scene
snapshotRenderer.scene!.background.contents = NSColor.clear
snapshotRenderer.autoenablesDefaultLighting = true
// setting this to false does not make the image semi-transparent
snapshotRenderer.isJitteringEnabled = true
let size = CGSize(width: 1000, height: 1000)
let image = snapshotRenderer.snapshot(atTime: .zero, with: size, antialiasingMode: .multisampling16X)
let imageRep = NSBitmapImageRep(data: image.tiffRepresentation!)
let pngData = imageRep?.representation(using: .png, properties: [:])
try! pngData!.write(to: destination)
The docs for jittering say:
Jittering is a process that SceneKit uses to improve the visual quality of a rendered scene. While the scene’s content is still, SceneKit moves the pointOfView location very slightly (by less than a pixel in projected screen space). It then composites images rendered after several such moves to create the final rendered scene, creating an antialiasing effect that smooths the edges of rendered geometry.
To me, that doesn't sound like something that is expected to produce semi-transparency?
I'm trying to draw several shapes (rectangle, triangle, circle, ...) in Swift, and for that I use UIBezierPath, but I am not able to draw exactly what I want.
I need to draw, for example, a rectangle whose borders have different line widths.
To do that, I create different paths and then use append(_:) to merge them into one path. That works, BUT I also need a background image in this rectangle.
For that, I create a layer and set an image on it. The issue is that no background image is displayed when I use append(_:), certainly because it doesn't recognize my drawing as a rectangle.
I hope this is clear enough, but is there a way to draw a shape with a background image and different border widths?
Thanks for your help!!
There are two solutions I'd suggest you try:
1) Masking
Create a normal CALayer and set the image as its contents. Then create a CAShapeLayer with the path you like and use it as the first layer's mask.
E.g.:
let imageLayer = CALayer()
imageLayer.contents = UIImage(named: "yourImage")?.cgImage // Your image here
imageLayer.frame = ... // Define a frame
let maskPath = UIBezierPath(...) // Create your path here
let maskLayer = CAShapeLayer()
maskLayer.path = maskPath.cgPath
imageLayer.mask = maskLayer
Don't forget to set the right frames and paths, and you should be able to achieve the effect you wanted.
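If you also need the different border widths from your question, one option (just a sketch, with arbitrary example widths, assuming a plain rectangle) is to add one CAShapeLayer per edge on top of the image layer, each with its own lineWidth:
import UIKit

// Sketch: stroke each edge of `rect` with its own width on top of a host layer.
func addBorders(to hostLayer: CALayer, around rect: CGRect) {
    let edges: [(from: CGPoint, to: CGPoint, width: CGFloat)] = [
        (CGPoint(x: rect.minX, y: rect.minY), CGPoint(x: rect.maxX, y: rect.minY), 2),  // top
        (CGPoint(x: rect.maxX, y: rect.minY), CGPoint(x: rect.maxX, y: rect.maxY), 6),  // right
        (CGPoint(x: rect.maxX, y: rect.maxY), CGPoint(x: rect.minX, y: rect.maxY), 2),  // bottom
        (CGPoint(x: rect.minX, y: rect.maxY), CGPoint(x: rect.minX, y: rect.minY), 10), // left
    ]
    for edge in edges {
        let path = UIBezierPath()
        path.move(to: edge.from)
        path.addLine(to: edge.to)
        let border = CAShapeLayer()
        border.path = path.cgPath
        border.strokeColor = UIColor.black.cgColor
        border.fillColor = nil
        border.lineWidth = edge.width
        hostLayer.addSublayer(border)
    }
}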
2) Fill color
Create a CAShapeLayer with the path you like, then use your image as its fillColor.
E.g.:
let path = UIBezierPath(...) // Create your path here
let layer = CAShapeLayer()
layer.path = path.cgPath
let image = UIImage(named: "yourImage") // Your image here
layer.fillColor = UIColor(patternImage: image!).cgColor
You may find this approach easier at first, but controlling the way the image fills your shape is not trivial at all.
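One reason it's not trivial: UIColor(patternImage:) tiles the image at its native size rather than stretching it over the shape. A common workaround (a sketch, not the only way) is to redraw the image at the size of the path's bounding box first, so the "tile" is a single stretched copy:
import UIKit

// Sketch: resize `image` to the path's bounding box so the pattern fill
// shows one stretched copy rather than a tiled grid.
func patternColor(for image: UIImage, path: UIBezierPath) -> UIColor {
    let size = path.bounds.size
    let renderer = UIGraphicsImageRenderer(size: size)
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
    return UIColor(patternImage: resized)
}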
I hope this will help.
If you'd like more details, please provide an image or a sketch of what you're trying to achieve and / or the code you've written so far. Thanks!
I'm trying to draw a path on a high-resolution image. That's nothing complicated for an iPhone, but if I add a shadow to my path, everything lags. It lags only when I work on images of a certain resolution (2000 x 3000), or even less.
The Storyboard views are:
-Scroll View
-Image View
-Draw View
So I have the DrawingView on top of the ImageView when I need to draw.
So the ImageView and the DrawView (view.bounds.size) have the same resolution as the image (e.g. 2000 x 3000), and there's the problem:
I'm drawing on a view with a high resolution.
I'm not calling draw(_:) directly, but only calling setNeedsDisplay() inside touchesBegan() and touchesMoved() after doing some operations (path.move(to:), path.addCurve(to:...), array operations) and adding points to my array.
In draw(_:) I essentially have:
override func draw(_ rect: CGRect) {
    print(self.bounds.size)
    UIColor.green.setStroke()
    path.lineCapStyle = .round
    path.lineJoinStyle = .round
    path.lineWidth = 60.0
    let context = UIGraphicsGetCurrentContext()!
    context.addPath(path.cgPath)
    context.setShadow(offset: .zero, blur: 14.0, color: UIColor.white.cgColor) // <-- with this shadow it lags a lot.
    path.stroke()
}
My path is a UIBezierPath().
Any ideas to improve the speed?
Update:
I followed what @brimstone said. I now have an ImageView with a lower resolution, but I have to apply my drawn path to the high-resolution image.
(I'm trying to hand crop an image with the path that the user draws)
In this code I already got my closed path:
let layer = CAShapeLayer()
layer.path = path.cgPath
self.imageToEditView.layer.mask = layer
UIGraphicsBeginImageContext(self.imageEdited.size)
self.imageToEditView.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
let croppedCGImage = image.cgImage!.cropping(to: path.cgPath.boundingBox)
let croppedImage = UIImage(cgImage: croppedCGImage!)
self.imageToEditView.image = croppedImage
self.imageToEditView.layer.mask = nil
imageToEditView.bounds.size = low resolution
imageEdited.size = high resolution
I need to set the high resolution (I think) when I render(in:) the context. But how can I change the resolution of the imageView now?
Try downsizing it for the user to draw over (doesn't make a huge difference on small iPhone screens for user experience), then apply the edits to the high-res image.
To downsize images, either use UIImagePNGRepresentation, which may make your image sufficiently small, or (if you're still having memory issues) try the techniques in this tutorial and this answer to make it even smaller.
Then, you can take the content of what they've drawn and apply it to the high-res image.
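For the scaling step, one way (a sketch; previewSize and fullSize are hypothetical names for the on-screen size and the full-resolution image size) is to scale the drawn path up before using it as a mask on the high-res image:
import UIKit

// Sketch: map a path drawn over a downsized preview onto the full-res image.
func scaleToFullResolution(_ path: UIBezierPath, previewSize: CGSize, fullSize: CGSize) -> UIBezierPath {
    let scaled = path.copy() as! UIBezierPath
    scaled.apply(CGAffineTransform(scaleX: fullSize.width / previewSize.width,
                                   y: fullSize.height / previewSize.height))
    return scaled
}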
Alternatively, look at high-res optimisation techniques by Apple: https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreensInViews/SupportingHiResScreensInViews.html
Any suggestions on using the SpriteKit rendering engine while still having the ability to draw the game graphics programmatically? Those graphics are simple, but slightly more complex than just rectangles and ovals.
I recently found out that SKShapeNode has a cap on complexity when using a custom path. I basically have a function that takes a number and generates a random CGPath shape; the larger the number, the more complex and detailed the shape. When I use a small number there is no problem, but when I use a larger number to generate a more complex path, SKShapeNode gives me this error (I tested on a real device with iOS 9.2.1 as well):
Assertion failed: (length + offset <= _length)
Now I don't think using SKShapeNode is a good idea, so are there any suggestions for programmatically drawing graphics that work nicely with SpriteKit?
I read somewhere else that the right way to go is to draw using whatever graphics API is available, as long as the result can be converted to something an SKTexture can be initialized from. Is that correct?
If you use SKTexture, you can draw your chart with CGContext.
First of all, you need to create an array of the chart's points:
let pathPoints: [CGPoint] = [...]
Then you need to call UIGraphicsBeginImageContext(rect.size), where rect.size is a CGSize that determines the size of the region in which the chart will be inscribed (and rect is a CGRect). The next step is getting the CGContext and setting it up:
UIGraphicsBeginImageContext(size)
let context: CGContext = UIGraphicsGetCurrentContext()!
context.setStrokeColor(UIColor.red.cgColor)
context.setLineWidth(lineWidth)
where lineWidth is a CGFloat value.
After this you can create your chart in the context like this:
if pathPoints.count > 1 {
    context.move(to: pathPoints[0])
    for point in pathPoints.dropFirst() {
        context.addLine(to: point)
    }
}
context.strokePath()
Then you need to create an image from the context (and end the image context right after grabbing the image):
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
Then create a texture from the image and use it for an SKSpriteNode object that will be added to the scene:
let pathTexture = SKTexture(image: image)
let pathNode = SKSpriteNode(texture: pathTexture)
pathNode.position = CGPoint(x: rect.origin.x + rect.width / 2 - lineWidth / 2,
                            y: rect.origin.y + rect.height / 2 - lineWidth / 2)
pathNode.zPosition = 0
someParentNodeOnScene.addChild(pathNode)
That's all you need to create chart.
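Put together, a minimal self-contained sketch could look like this (makeChartNode is just an illustrative name, not from the original answer):
import SpriteKit
import UIKit

// Sketch: render a polyline into an image and wrap it in a sprite node.
func makeChartNode(pathPoints: [CGPoint], size: CGSize, lineWidth: CGFloat) -> SKSpriteNode {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()!
    context.setStrokeColor(UIColor.red.cgColor)
    context.setLineWidth(lineWidth)
    if let first = pathPoints.first {
        context.move(to: first)
        for point in pathPoints.dropFirst() {
            context.addLine(to: point)
        }
    }
    context.strokePath()
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return SKSpriteNode(texture: SKTexture(image: image))
}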
I've created a class that draws a coffee mug using code I imported from PaintCode, and I applied this class to a view. Using #IBDesignable, I can see in my storyboard that the mug is being drawn inside the view; however, the overall shape is too big. I could redraw the shape in code so that it fits the current size of the view, but isn't there a way to scale the shape after it is drawn, so that the shape scales correctly as my view changes size on different devices?
I've looked into CGContextScaleCTM(aRef, <#sx: CGFloat#>, <#sy: CGFloat#>), but I am not sure how to convert the CGRect of my view's bounds to the right scale factor.
I didn't want to post all of it, but my drawing code begins like this:
bezierPath.move(to: CGPoint(x: 64.8, y: 52.81))
bezierPath.addCurve(to: CGPoint(x: 58.89, y: 43.44), controlPoint1: CGPoint(x: 64.21, y: 48.28), controlPoint2: CGPoint(x: 62.11, y: 44.95))
bezierPath.addCurve(to: CGPoint(x: 56.82, y: 42.76), controlPoint1: CGPoint(x: 58.24, y: 43.13), controlPoint2: CGPoint(x: 57.55, y: 42.9))
This goes on, then:
bezierPath.close()
bezierPath.miterLimit = 4
bezierPath.usesEvenOddFillRule = true
Then there are two other chunks of drawing code for drawing two little lines for the coffee steam. I append these two paths to the original bezierPath, then I set a fill color and fill the whole shape.
In code, you can scale your paths as you want using the UIBezierPath Swift extension PaintCodeScale, e.g.:
bezierPath.fit(into: rect).moveCenter(to: rect.center).fill()
Since I used PaintCode to generate my drawing code, I found a way to implement @dasdom's suggestion with help from the app.
In PaintCode there is a "frame" tool which you can place around your drawing. This enables constraints for your artwork, so that the vectors are re-drawn relative to the frame size. The frame is a variable that is exported along with your code when you bring it into Xcode. When I added the drawing code to my class in Xcode and then added the class to my view in Storyboard, Xcode automatically scaled the frame to the view size, and thus the drawing code within my class was also automatically resized to fit my view. Now the artwork will be automatically re-drawn to fit whatever view I add my class to. The automatic re-sizing may be occurring due to the "Automatically resize subviews" option that is enabled in Storyboard for the view I applied my graphics class to.
func scalePath(path: UIBezierPath) -> UIBezierPath {
    let w1: CGFloat = path.bounds.size.width
    let h1: CGFloat = path.bounds.size.height
    let w2: CGFloat = self.frame.width
    let h2: CGFloat = self.frame.height
    // Use the smaller of the two ratios so the path fits in both
    // dimensions while keeping its aspect ratio.
    let s: CGFloat = min(w2 / w1, h2 / h1)
    path.apply(CGAffineTransform(scaleX: s, y: s))
    return path
}
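For example, inside a custom view's draw(_:), this could be used like so (a sketch; makeMugPath() stands in for whatever builds the imported PaintCode path):
override func draw(_ rect: CGRect) {
    let mugPath = makeMugPath() // hypothetical: builds the imported PaintCode path
    let scaled = scalePath(path: mugPath)
    UIColor.brown.setFill()
    scaled.fill()
}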