I'm trying to draw multiple images into a single PDFPage.
Looking at the docs and around Stack Overflow, it seems the best I can do is use PDFPage with a UIImage initializer, like so:
let pdfPage = PDFPage(image: image)
But that just creates a page containing a single full-page image. I tried to draw the images using CGContext, but I don't understand how to use PDFPage within a drawing context so that I can draw multiple images onto it, as in the example below.
let bounds = page.bounds(for: .cropBox)
// Create a `UIGraphicsImageRenderer` to use for drawing an image.
let renderer = UIGraphicsImageRenderer(bounds: bounds, format: UIGraphicsImageRendererFormat.default())
let image = renderer.image { (context) in
    // How do I draw multiple images here?
}
Any help will be highly appreciated!
The result I get with PDFPage(image: <UIImage>) vs expected result:
You probably need to improve the positioning logic for the images in the loop, but this should point you in the right direction.
import UIKit
import PDFKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let pdfView = PDFView()
        pdfView.frame = CGRect(x: 0, y: 50, width: view.frame.width, height: view.frame.height - 50)
        if let d = createPDFDocument() {
            pdfView.document = d
        }
        view.addSubview(pdfView)
    }

    func createPDFDocument() -> PDFDocument? {
        let pdfDocument = PDFDocument()
        let page = PDFPage()
        let bounds = page.bounds(for: .cropBox)
        let imageRenderer = UIGraphicsImageRenderer(bounds: bounds, format: UIGraphicsImageRendererFormat.default())
        let image = imageRenderer.image { (context) in
            // Draw the (empty) page into the context, flipped to match UIKit coordinates.
            context.cgContext.saveGState()
            context.cgContext.translateBy(x: 0, y: bounds.height)
            context.cgContext.concatenate(CGAffineTransform(scaleX: 1, y: -1))
            page.draw(with: .mediaBox, to: context.cgContext)
            context.cgContext.restoreGState()
            // Improve logic for image position
            (1...4).forEach { value in
                let image = UIImage(named: "YOUR_IMAGE_NAME")
                let rect = CGRect(x: 50 * value, y: 0, width: 40, height: 100)
                image?.draw(in: rect)
            }
        }
        let newPage = PDFPage(image: image)!
        pdfDocument.insert(newPage, at: 0)
        return pdfDocument
    }
}
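If the images should end up in a grid rather than a single row, the positioning loop could be replaced with something like this sketch (the column count, cell size, and padding are illustrative values, not part of the original answer):

// Hypothetical grid layout for the image loop above: 3 columns of
// 40 x 100 pt cells with 10 pt padding between them.
let columns = 3
let cellSize = CGSize(width: 40, height: 100)
let padding: CGFloat = 10
(0..<9).forEach { index in
    let column = index % columns
    let row = index / columns
    let origin = CGPoint(x: padding + CGFloat(column) * (cellSize.width + padding),
                         y: padding + CGFloat(row) * (cellSize.height + padding))
    UIImage(named: "YOUR_IMAGE_NAME")?.draw(in: CGRect(origin: origin, size: cellSize))
}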
I have run into the following problem, and unfortunately other posts have not led me to a working solution.
I have a simple app that shows the camera preview (AVCaptureVideoPreviewLayer) where the video gravity has been set to resizeAspectFill (videoGravity = .resizeAspectFill).
From my understanding, this only stretches the image in width so that it fills the screen.
On my preview layer I also have applied a CGRect as a mask with fixed x, y, width and height.
Now, once I take a photo, I'm trying to crop that exact rectangle out of the image. From my understanding, I'm supposed to use some math to convert the CGRect to the same aspect ratio as the image I get from the AVCapturePhotoOutput callback, but it never seems to crop correctly in the width.
private func cropImage(image: UIImage) {
    let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
    let scale = CGAffineTransform(scaleX: 1 / self.view.frame.width, y: 1 / self.view.frame.height)
    let flip = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bounds = rect.applying(scale).applying(flip)

    let topLeft = bounds.topLeft.scaled(to: image.size)
    let topRight = bounds.topRight.scaled(to: image.size)
    let bottomLeft = bounds.bottomLeft.scaled(to: image.size)
    let bottomRight = bounds.bottomRight.scaled(to: image.size)

    var ciImage = CIImage(image: image.forceSameOrientation())!
    ciImage = ciImage.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: bottomLeft),
        "inputTopRight": CIVector(cgPoint: bottomRight),
        "inputBottomLeft": CIVector(cgPoint: topLeft),
        "inputBottomRight": CIVector(cgPoint: topRight)
    ])

    let context = CIContext()
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    let output = UIImage(cgImage: cgImage!)

    let vc = PreviewViewController()
    vc.imageView.image = output
    self.present(vc, animated: true, completion: nil)
}
So again: it crops at the correct height, but the width does not come out right.
Image example of what I would want to capture.
https://imgur.com/a/8GryEgX
As you can see, the bounding box in the top left stops after the "Q" button.
Result:
https://imgur.com/FwKRWxK
As you can see in this image, it crops correctly in height; however, if we look at the top left, it also includes half of the button to the left of the "Q" (the Tab button).
Any help towards the solution would be appreciated!
I managed to solve the issue with the code below. The key is metadataOutputRectConverted(fromLayerRect:), which converts a rect from the preview layer's coordinate space into the normalized (0...1) coordinate space of the capture output, taking the .resizeAspectFill cropping into account.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }

    // This previewLayer is the AVCaptureVideoPreviewLayer on which resizeAspectFill
    // and a portrait videoOrientation have been set.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)

    // outputRect is normalized (0...1), so scale it up to pixel coordinates.
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)

    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of this code for my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
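For completeness, a minimal sketch of how the helper might be called from the AVCapturePhotoCaptureDelegate callback (the delegate wiring and imageView are my assumptions, not part of the original code):

// Hypothetical call site, assuming this class is the AVCapturePhotoCaptureDelegate
// and that `previewLayer` and `imageView` exist as in the snippets above.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard error == nil,
          let data = photo.fileDataRepresentation(),
          let image = UIImage(data: data) else { return }

    let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
    imageView.image = cropToPreviewLayer(from: image, toSizeOf: rect)
}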
I'm trying to draw circles and in the middle of each circle, I want to draw an image.
My circles work fine, but I'm struggling with the images.
I don't understand why I can't just draw a UIImage directly.
The code below the // draw PNGs: comment is what my question is about, but I posted the whole thing for context.
Thanks in advance for any help.
import UIKit
import CoreGraphics

enum Shape2 {
    case circle
    case Rectangle
    case Line
}

class Canvas: UIView {
    let viewModel = ViewModel(set: set1)
    var currentShape: Shape2?

    override func draw(_ rect: CGRect) {
        guard let currentContext = UIGraphicsGetCurrentContext() else {
            print("Could not get Context")
            return
        }
        drawIcons(user: currentContext)
    }

    private func drawIcons(user context: CGContext) {
        for i in 0...viewModel.iconsList.count - 1 {
            let centerPoint = CGPoint(x: viewModel.icons_coord_x[i], y: viewModel.icons_coord_y[i])
            context.addArc(center: centerPoint, radius: CGFloat(viewModel.Diameters[i]), startAngle: CGFloat(0).degreesToRadians, endAngle: CGFloat(360).degreesToRadians, clockwise: true)
            context.setFillColor(UIColor.blue.cgColor)
            //context.setFillColor(viewModel.iconsbackground_colors[i].cgColor)
            context.fillPath()
            context.setLineWidth(4.0)

            //draw PNGs:
            let image = UIImage(named: "rocket")!
            let ciImage = image.ciImage
            let imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
            let context2 = CIContext(options: nil)
            let cgImage = context2.createCGImage(ciImage ?? <#default value#>, from: ciImage?.extent ?? <#default value#>)
            context.draw(CGImage() as! CGLayer, in: imageRect)
        }
    }

    func drawShape(selectedShape: Shape2) {
        currentShape = selectedShape
        setNeedsDisplay()
    }
}
I don't understand why I can't just draw a UIImage directly.
You can.
https://developer.apple.com/documentation/uikit/uiimage/1624132-draw
https://developer.apple.com/documentation/uikit/uiimage/1624092-draw
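For example, the // draw PNGs: section inside the loop could become something like this minimal sketch (the centering on centerPoint is my assumption about the intended layout):

// Draw the image directly, centered on the circle; no CIImage/CGImage round
// trip is needed, since UIImage.draw(in:) renders into the current context.
let image = UIImage(named: "rocket")!
let diameter = CGFloat(viewModel.Diameters[i])
let imageRect = CGRect(x: centerPoint.x - diameter / 2,
                       y: centerPoint.y - diameter / 2,
                       width: diameter,
                       height: diameter)
image.draw(in: imageRect)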
I have large images uploaded by users in Swift, and I need to resize them all to 100x100 px to create thumbnails to store on my server. So far I have found that this resizes an image given a CGSize:
func resizedImage(image: UIImage, size: CGSize) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { (context) in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
Is there any way to create a CGSize knowing that my target size is strictly 100x100px?
Got this to work:
import AVFoundation // AVMakeRect comes from AVFoundation

extension UIImage {
    func resizedImage(pixelSize: (width: Int, height: Int)) -> UIImage? {
        // Divide by the screen scale so the rendered bitmap comes out at the requested pixel size.
        let size = CGSize(width: CGFloat(pixelSize.width) / UIScreen.main.scale, height: CGFloat(pixelSize.height) / UIScreen.main.scale)
        let rect = AVMakeRect(aspectRatio: self.size, insideRect: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { (context) in
            self.draw(in: rect)
        }
    }
}
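Usage might look like this (userImage is just a stand-in for one of the uploaded images):

// Hypothetical call site: produce a 100x100 px thumbnail from an upload.
let thumbnail = userImage.resizedImage(pixelSize: (width: 100, height: 100))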
You should initialize your renderer based on the user's device scale, and multiply the width and height instead of dividing them:
extension UIImage {
    func aspectFitScaled(to size: CGSize) -> UIImage {
        let format = imageRendererFormat
        format.opaque = false
        format.scale = UIScreen.main.scale

        let isLandscape = self.size.width > self.size.height
        let ratio = isLandscape ? size.width / self.size.width : size.height / self.size.height
        let drawSize = CGSize(width: self.size.width * ratio, height: self.size.height * ratio)

        // Center the scaled image inside the target size.
        let x = (size.width - drawSize.width) / 2
        let y = (size.height - drawSize.height) / 2
        let origin = CGPoint(x: x, y: y)

        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            draw(in: CGRect(origin: origin, size: drawSize))
        }
    }
}
Usage:

class ViewController: UIViewController {
    // imageView frame is 200 x 200
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // original image size is (719.0, 808.0)
        let image = UIImage(data: try! Data(contentsOf: URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!))!
        imageView.backgroundColor = .gray

        let ivImage = image.aspectFitScaled(to: imageView.frame.size)
        imageView.image = ivImage
        print("ivImage.size", ivImage.size) // (200.0, 200.0)
        print("ivImage.scale", ivImage.scale) // screen scale, 3.0 on an iPhone 8 Plus

        // let's check the real image dimensions
        let data = ivImage.jpegData(compressionQuality: 1)!
        let savedSize = UIImage(data: data)!.size
        print("savedSize", savedSize) // savedSize (600.0, 600.0)
    }
}
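Note that with format.scale = UIScreen.main.scale, the pixel dimensions of the result depend on the device (600 x 600 on a 3x screen, as the prints above show). If the server strictly needs 100 x 100 pixels on every device, one option is to pin the renderer scale to 1. A minimal sketch, assuming image is the original image from the snippet above:

// Hypothetical variant: 1 px per point, so a 100 x 100 pt render is exactly 100 x 100 px.
let format = UIGraphicsImageRendererFormat.default()
format.scale = 1
let thumbnail = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100), format: format)
    .image { _ in
        image.draw(in: CGRect(x: 0, y: 0, width: 100, height: 100))
    }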
My extension method currently takes a screenshot of the entire UIView inside the view controller. I would like to use the same function to capture an exact area of the UIView instead of the whole view; specifically, I want to capture x: 0, y: 0, width: 200, height: 200.
func screenshot() -> UIImage {
    let imageSize = UIScreen.main.bounds.size
    UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
    let context = UIGraphicsGetCurrentContext()
    for window in UIApplication.shared.windows {
        if window.responds(to: #selector(getter: UIWindow.screen)) || window.screen == UIScreen.main {
            // We must first apply the layer's geometry to the graphics context.
            context!.saveGState()
            // Center the context around the window's anchor point.
            context!.translateBy(x: window.center.x, y: window.center.y)
            // Apply the window's transform about the anchor point.
            context!.concatenate(window.transform)
            // Offset by the portion of the bounds left of and above the anchor point.
            context!.translateBy(x: -window.bounds.size.width * window.layer.anchorPoint.x,
                                 y: -window.bounds.size.height * window.layer.anchorPoint.y)
            // Render the layer hierarchy into the current context.
            window.layer.render(in: context!)
            // Restore the context.
            context!.restoreGState()
        }
    }
    let image = UIGraphicsGetImageFromCurrentImageContext()
    // End the context we began above, so it isn't leaked.
    UIGraphicsEndImageContext()
    return image!
}
How about:
extension UIView {
    func screenshot(for rect: CGRect) -> UIImage {
        return UIGraphicsImageRenderer(bounds: rect).image { _ in
            drawHierarchy(in: CGRect(origin: .zero, size: bounds.size), afterScreenUpdates: true)
        }
    }
}
This makes it a bit more reusable, but you can hardcode the value if you want. Initializing the renderer with bounds: rect offsets the drawing so that only that region of the view ends up in the image.
let image = self.view.screenshot(for: CGRect(x: 0, y: 0, width: 200, height: 200))
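Since the renderer is driven by the rect you pass in, capturing a region away from the origin works the same way; for example (coordinates are arbitrary):

// Hypothetical: capture a 200 x 200 pt region starting at (100, 300).
let cropped = self.view.screenshot(for: CGRect(x: 100, y: 300, width: 200, height: 200))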
I wrote an NSImage extension to allow me to take random samples of an image. I would like those samples to retain the same quality as the original image. However, they appear to be aliased or slightly blurry. Here's an example - the original drawn on the right and a random sample on the left:
I'm playing around with this in SpriteKit at the moment. Here's how I create the original image:
let bg = NSImage(imageLiteralResourceName: "ref")
let tex = SKTexture(image: bg)
let sprite = SKSpriteNode(texture: tex)
sprite.position = CGPoint(x: size.width/2, y:size.height/2)
addChild(sprite)
And here's how I create the sample:
let sample = bg.sample(size: NSSize(width: 100, height: 100))
let sampletex = SKTexture(image:sample!)
let samplesprite = SKSpriteNode(texture:sampletex)
samplesprite.position = CGPoint(x: 60, y:size.height/2)
addChild(samplesprite)
Here's the NSImage extension (and randomNumber func) that creates the sample:
extension NSImage {
    /// Returns the height of the current image.
    var height: CGFloat {
        return self.size.height
    }

    /// Returns the width of the current image.
    var width: CGFloat {
        return self.size.width
    }

    /// Takes a random sample of the current image at the given size.
    func sample(size: NSSize) -> NSImage? {
        let source = self

        // Make sure that we are within a suitable range.
        var checkedSize = size
        checkedSize.width = floor(min(checkedSize.width, source.size.width * 0.9))
        checkedSize.height = floor(min(checkedSize.height, source.size.height * 0.9))

        // Get a random origin for the crop.
        let x = randomNumber(range: 0...(Int(source.width) - Int(checkedSize.width)))
        let y = randomNumber(range: 0...(Int(source.height) - Int(checkedSize.height)))

        // Create the cropping frame.
        var frame = NSRect(x: x, y: y, width: Int(checkedSize.width), height: Int(checkedSize.height))
        // let ref = source.cgImage.cropping(to: frame)
        let ref = source.cgImage(forProposedRect: &frame, context: nil, hints: nil)
        let rep = NSBitmapImageRep(cgImage: ref!)

        // Create a new image with the new size and set a graphics context.
        let img = NSImage(size: checkedSize)
        img.lockFocus()
        defer { img.unlockFocus() }

        // Fill in the sample image.
        if rep.draw(in: NSMakeRect(0, 0, checkedSize.width, checkedSize.height),
                    from: frame,
                    operation: NSCompositingOperation.copy,
                    fraction: 1.0,
                    respectFlipped: false,
                    hints: [NSImageHintInterpolation: NSImageInterpolation.high.rawValue]) {
            // Return the cropped image.
            return img
        }

        // Return nil in case anything fails.
        return nil
    }
}

func randomNumber(range: ClosedRange<Int> = 0...100) -> Int {
    let min = range.lowerBound
    let max = range.upperBound
    return Int(arc4random_uniform(UInt32(1 + max - min))) + min
}
I've tried this about 10 different ways and the results always seem to be a slightly blurry sample. I even checked for smudges on my screen. :)
How can I create a sample of an NSImage that retains the exact qualities of the section of the original source image?
Switching the interpolation mode to NSImageInterpolation.none was apparently sufficient in this case.
It's also important to handle the draw destination rect correctly. Since cgImage(forProposedRect:...) may change the proposed rect, you should use a destination rect that's based on it. You should basically use a copy of frame that's offset by (-x, -y) so it's relative to (0, 0) instead of (x, y).
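Applied to the sample(size:) method above, the fix could look roughly like this (a sketch of the two changes, not verbatim code):

// Destination rect relative to (0, 0): a copy of `frame` offset by (-x, -y),
// computed after cgImage(forProposedRect:) has had a chance to adjust `frame`.
let destFrame = frame.offsetBy(dx: -frame.origin.x, dy: -frame.origin.y)

if rep.draw(in: destFrame,
            from: frame,
            operation: .copy,
            fraction: 1.0,
            respectFlipped: false,
            // Interpolation off, so pixels are copied exactly.
            hints: [NSImageHintInterpolation: NSImageInterpolation.none.rawValue]) {
    return img
}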