PDF Thumbnail Generator Always Returning Nil | Swift/Xcode - swift

Would someone please explain why this PDF thumbnail generator I'm using always returns nil? I'm trying to display a thumbnail in a UITableView alongside the filename of the PDF. Unfortunately, of the four or so thumbnail generators I've tried, none has returned anything other than nil.
func uploadPDF() {
    let types = UTType.types(tag: "pdf",
                             tagClass: UTTagClass.filenameExtension,
                             conformingTo: nil)
    let documentPickerController = UIDocumentPickerViewController(forOpeningContentTypes: types)
    documentPickerController.delegate = self
    self.present(documentPickerController, animated: true, completion: nil)
}

func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    for url in urls {
        let thumbnail = thumbnailFromPdf(withUrl: url, pageNumber: 0)
        self.modelController.bidPDFUploadThumbnails.append(thumbnail!)
    }
    tableView.reloadData()
}
func thumbnailFromPdf(withUrl url: URL, pageNumber: Int, width: CGFloat = 240) -> UIImage? {
    guard let pdf = CGPDFDocument(url as CFURL),
          let page = pdf.page(at: pageNumber)
    else {
        return nil
    }
    var pageRect = page.getBoxRect(.mediaBox)
    let pdfScale = width / pageRect.size.width
    pageRect.size = CGSize(width: pageRect.size.width * pdfScale, height: pageRect.size.height * pdfScale)
    pageRect.origin = .zero
    UIGraphicsBeginImageContext(pageRect.size)
    let context = UIGraphicsGetCurrentContext()!
    // White BG
    context.setFillColor(UIColor.white.cgColor)
    context.fill(pageRect)
    context.saveGState()
    // The next three lines rotate/flip the context so the page is drawn the right way up
    context.translateBy(x: 0.0, y: pageRect.size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    context.concatenate(page.getDrawingTransform(.mediaBox, rect: pageRect, rotate: 0, preserveAspectRatio: true))
    context.drawPDFPage(page)
    context.restoreGState()
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Generator source: Thumbnail Generator

A CGPDFDocument's pages start at 1, not 0, because page numbers are not array indices.
So simply use
let thumbnail = thumbnailFromPdf(withUrl: url, pageNumber: 1)
and you'll get your thumbnail.
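For reference, the delegate callback from the question then becomes something like this sketch (the force unwrap is replaced with an if let so a single unreadable PDF can't crash the loop; modelController and tableView are the question's own properties):
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    for url in urls {
        // CGPDFDocument page numbers are 1-based
        if let thumbnail = thumbnailFromPdf(withUrl: url, pageNumber: 1) {
            modelController.bidPDFUploadThumbnails.append(thumbnail)
        }
    }
    tableView.reloadData()
}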
Rather than working with page numbers, you can use PDFKit to access a page's thumbnail directly (index 0 is the first page), as follows:
import PDFKit
func generatePdfThumbnail(of thumbnailSize: CGSize , for documentUrl: URL, atPage pageIndex: Int) -> UIImage? {
let pdfDocument = PDFDocument(url: documentUrl)
let pdfDocumentPage = pdfDocument?.page(at: pageIndex)
return pdfDocumentPage?.thumbnail(of: thumbnailSize, for: PDFDisplayBox.trimBox)
}
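Note that PDFDocument page indices are 0-based, unlike CGPDFDocument's 1-based page numbers, so the first page is index 0 here. A quick usage sketch (the thumbnail size is an arbitrary example, and url is assumed to be one of the picked document URLs):
let thumbnail = generatePdfThumbnail(of: CGSize(width: 240, height: 320),
                                     for: url,
                                     atPage: 0) // first page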

Related

Take snapshot from UIView with lower resolution

I'm taking a snapshot of a PDFView from PDFKit for streaming (20 times per second), and I use this extension:
extension UIView {
    func asImageBackground(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
        return renderer.image { rendererContext in
            viewLayer.render(in: rendererContext.cgContext)
        }
    }
}
But the output UIImage from this extension has a high resolution, which makes it difficult to stream. I can reduce it with this extension:
extension UIImage {
    func resize(_ max_size: CGFloat) -> UIImage {
        // adjust for device pixel density
        let max_size_pixels = max_size / UIScreen.main.scale
        // work out aspect ratio
        let aspectRatio = size.width / size.height
        // variables for storing calculated data
        var width: CGFloat
        var height: CGFloat
        var newImage: UIImage
        if aspectRatio > 1 {
            // landscape
            width = max_size_pixels
            height = max_size_pixels / aspectRatio
        } else {
            // portrait
            height = max_size_pixels
            width = max_size_pixels * aspectRatio
        }
        // create an image renderer of the correct size
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height), format: UIGraphicsImageRendererFormat.default())
        // render the image
        newImage = renderer.image { (context) in
            self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        }
        // return the image
        return newImage
    }
}
but that adds an additional workload, which makes the process even worse. Is there a better way?
Thanks
You can downsample it using ImageIO, which is the approach Apple recommends:
extension UIImage {
    func downsample(to resolution: CGSize) -> UIImage? {
        let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let data = self.jpegData(compressionQuality: 0.75) as? CFData,
              let imageSource = CGImageSourceCreateWithData(data, imageSourceOptions) else {
            return nil
        }
        let maxDimensionInPixels = Swift.max(resolution.width, resolution.height) * 3
        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
        ] as CFDictionary
        guard let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
            return nil
        }
        return UIImage(cgImage: downsampledImage)
    }
}
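If the only goal is a lower-resolution snapshot, another option (a sketch, not part of the answer above) is to render at a reduced scale in the first place, so no second resize pass is needed at all. The scale parameter below is an assumption; 1.0 means one rendered pixel per point instead of the device's native scale:
extension UIView {
    func asImageBackground(viewLayer: CALayer, viewBounds: CGRect, scale: CGFloat = 1.0) -> UIImage {
        // Render directly at the reduced scale instead of resizing afterwards
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(bounds: viewBounds, format: format)
        return renderer.image { rendererContext in
            viewLayer.render(in: rendererContext.cgContext)
        }
    }
}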

Combine two images with CGAffineTransform

I am using the new Apple Vision API's VNImageTranslationAlignmentObservation to get a CGAffineTransform returned. The idea is that you pass it two images that can be merged together, and it returns the CGAffineTransform so that you can do so. I have managed to get the code working so that I get a CGAffineTransform back, but after a lot of reading I'm at a loss as to how I can merge the two images using this information.
My code is here:
import UIKit
import Vision

class ImageTranslation {
    let referenceImage: CGImage!
    let floatingImage: CGImage!
    let imageTranslationRequest: VNTranslationalImageRegistrationRequest!

    init(referenceImage: CGImage, floatingImage: CGImage) {
        self.referenceImage = referenceImage
        self.floatingImage = floatingImage
        self.imageTranslationRequest = VNTranslationalImageRegistrationRequest(targetedCGImage: floatingImage, completionHandler: nil)
    }

    func handleImageTranslationRequest() -> UIImage {
        var alignmentTransform: CGAffineTransform!
        let vnImage = VNSequenceRequestHandler()
        try? vnImage.perform([imageTranslationRequest], on: referenceImage)
        if let results = imageTranslationRequest.results as? [VNImageTranslationAlignmentObservation] {
            print("Image Transformations found \(results.count)")
            results.forEach { result in
                alignmentTransform = result.alignmentTransform
                print(alignmentTransform)
            }
        }
        return applyTransformation(alignmentTransform)
    }

    private func applyTransformation(_ transform: CGAffineTransform) -> UIImage {
        let image = UIImage(cgImage: referenceImage)
        return image
    }
}
The printed transform I get looks like this: CGAffineTransform(a: 1.0, b: 0.0, c: 0.0, d: 1.0, tx: 672.0, ty: 894.0)
How can I apply it to the two images passed in?
I've been playing with some examples (portrait images only, but it should work with landscape as well) and this has worked:
func mergeImages(first image1: UIImage, second image2: UIImage, transformation: CGAffineTransform) -> UIImage {
    let size = CGSize(width: image1.size.width + image2.size.width - (image2.size.width - transformation.tx),
                      height: image1.size.height + image2.size.height - (image2.size.height - transformation.ty))
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        let pointImg2 = CGPoint.zero.applying(transformation)
        image2.draw(at: pointImg2)
        let pointImg1 = CGPoint.zero
        image1.draw(at: pointImg1)
    }
}
Let me know if it isn't working (upload your sample images if you can) and I'll fix it.
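For completeness, here is a rough sketch of how the question's Vision request and this mergeImages helper could be wired together (the Vision calls mirror the question's code; error handling is kept minimal and the function name is mine):
import UIKit
import Vision

func mergedImage(reference: CGImage, floating: CGImage) -> UIImage? {
    let request = VNTranslationalImageRegistrationRequest(targetedCGImage: floating, options: [:])
    try? VNSequenceRequestHandler().perform([request], on: reference)
    guard let observation = (request.results as? [VNImageTranslationAlignmentObservation])?.first else {
        return nil
    }
    return mergeImages(first: UIImage(cgImage: reference),
                       second: UIImage(cgImage: floating),
                       transformation: observation.alignmentTransform)
}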

Save and Load Core Graphic UIImage Array on watchOS

I would like to save a UIImage array created on the Apple Watch with watchOS and play this series of images as an animation as a group background. I can build the image array and play it, but I cannot figure out how to save these images so I can load them the next time the app runs instead of rebuilding them every time.
Here is an example of how I am building the images with Core Graphics (Swift 3):
import WatchKit
import Foundation

class InterfaceController: WKInterfaceController
{
    @IBOutlet var colourGroup: WKInterfaceGroup!

    override func awake(withContext context: AnyObject?)
    {
        super.awake(withContext: context)
    }

    override func willActivate()
    {
        var imageArray: [UIImage] = []
        for imageNumber in 1...250
        {
            let newImage: UIImage = drawImage(fade: CGFloat(imageNumber)/250.0)
            imageArray.append(newImage)
        }
        let animatedImages = UIImage.animatedImage(with: imageArray, duration: 10)
        colourGroup.setBackgroundImage(animatedImages)
        let imageRange: NSRange = NSRange(location: 0, length: 200)
        colourGroup.startAnimatingWithImages(in: imageRange, duration: 10, repeatCount: 0)
        super.willActivate()
    }

    func drawImage(fade: CGFloat) -> UIImage
    {
        let boxColour: UIColor = UIColor(red: 1.0, green: 1.0, blue: 1.0, alpha: fade)
        let opaque: Bool = false
        let scale: CGFloat = 0.0
        let bounds: CGRect = WKInterfaceDevice.current().screenBounds
        let imageSize: CGSize = CGSize(width: bounds.width, height: 20.0)
        UIGraphicsBeginImageContextWithOptions(imageSize, opaque, scale)
        let radius: CGFloat = imageSize.height/2.0
        let rect: CGRect = CGRect(x: 0.0, y: 0.0, width: imageSize.width, height: imageSize.height)
        let selectorBox: UIBezierPath = UIBezierPath(roundedRect: rect, cornerRadius: radius)
        let boxLineWidth: Double = 0.0
        selectorBox.lineWidth = CGFloat(boxLineWidth)
        boxColour.setFill()
        selectorBox.fill()
        // return the image
        let result: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return result
    }

    override func didDeactivate()
    {
        // This method is called when watch view controller is no longer visible
        super.didDeactivate()
    }
}
Basically, I am looking for a way to save and load a [UIImage] so that I can still use UIImage.animatedImage(with: [UIImage], duration: TimeInterval) with the array.
Is there a way to save the image array so I can load it next time I run the app rather than rebuild the images?
Thanks
Greg
NSKeyedArchiver and NSKeyedUnarchiver did the trick. Here is the Swift code for Xcode 8 beta 4:
override func willActivate()
{
    var imageArray: [UIImage] = []
    let fileName: String = "TheRings"
    let fileManager = FileManager.default
    let url = fileManager.urls(for: .documentDirectory, in: .userDomainMask).first! as NSURL
    let theURL: URL = url.appendingPathComponent(fileName)!
    if let rings = NSKeyedUnarchiver.unarchiveObject(withFile: theURL.path) as? [UIImage]
    {
        print("retrieving rings - found rings")
        imageArray = rings
    }
    else
    {
        print("retrieving rings - can't find rings, building new ones")
        for imageNumber in 1...250
        {
            let newImage: UIImage = drawImage(fade: CGFloat(imageNumber)/250.0)
            imageArray.append(newImage)
        }
        NSKeyedArchiver.archiveRootObject(imageArray, toFile: theURL.path)
    }
    let animatedImages = UIImage.animatedImage(with: imageArray, duration: 10)
    colourGroup.setBackgroundImage(animatedImages)
    let imageRange: NSRange = NSRange(location: 0, length: 200)
    colourGroup.startAnimatingWithImages(in: imageRange, duration: 10, repeatCount: 0)
    super.willActivate()
}
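One caveat worth adding (not part of the original answer): archiveRootObject(_:toFile:) and unarchiveObject(withFile:) have since been deprecated. On newer SDKs the same idea would look roughly like this sketch, with secure coding enabled; the helper names are mine, and they could be called with the theURL built above:
func saveRings(_ images: [UIImage], to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: images, requiringSecureCoding: true)
    try data.write(to: url, options: .atomic)
}

func loadRings(from url: URL) -> [UIImage]? {
    guard let data = try? Data(contentsOf: url) else { return nil }
    // UIImage and NSArray both support NSSecureCoding
    return (try? NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self, UIImage.self], from: data)) as? [UIImage]
}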

change resolution and size of image with cocoa/osx/swift (no mobile apps)

I'm trying to change the size and resolution of an image programmatically and then save the image.
The image size in the image view changes, but when I look at my file "file3.png" it always has the original resolution of 640x1142.
I've googled around but can't find a solution. I tried redrawing the image, but maybe that's the wrong strategy.
Thanks
@IBAction func pickOneImageBtn(sender: AnyObject) {
    // load image from path
    pickedImage.image = loadImageFromPath(fileInDocumentsDirectory("Angebote.png"))
    let newSize = NSSize(width: 10, height: 10)
    if let image = pickedImage.image {
        print("found image")
        // cast to CGImage
        var imageRect: CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
        let imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)
        if let imageRefExists = imageRef {
            print("Cast to CGImage worked \(imageRefExists)")
        }
        // redraw to NSImage with new size
        let imageWithNewSize = NSImage(CGImage: imageRef!, size: newSize)
        // save on disk
        let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
        let bitmap: NSBitmapImageRep! = NSBitmapImageRep(data: imgData!)
        if let pngCoverImage = bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:]) {
            pngCoverImage.writeToFile("/...correctpath.../imageSourceForResize/file3.png", atomically: false)
            print("saved new image")
        }
        // the size is smaller
        pickedImage.image = imageWithNewSize
    }
}
Change
let imgData: NSData! = pickedImage.image!.TIFFRepresentation!
to
let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
I tried to change the size of an NSImage for a Mac application, and here is a working function to resize an image, written in Swift.
func resize(image: NSImage, w: Int, h: Int) -> NSImage
{
    let destSize = NSMakeSize(CGFloat(w), CGFloat(h))
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.drawInRect(NSMakeRect(0, 0, destSize.width, destSize.height), fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeCopy, fraction: 1.0)
    newImage.unlockFocus()
    newImage.size = destSize
    return NSImage(data: newImage.TIFFRepresentation!)!
}
You need to pass three parameters to this function, i.e. the NSImage, width, and height, and it will return the resized image.
targetimage = resize(source, w: Int(targetwidth), h: Int(targetheight))
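One more note (a sketch in current Swift, not taken from either answer; the function name and parameters are mine): an NSImage's size is measured in points, so changing it does not by itself change the pixel data that ends up on disk. To guarantee that the saved PNG has specific pixel dimensions, you can draw into a bitmap rep whose pixel size is set explicitly and encode that rep:
import AppKit

func savePNG(of image: NSImage, pixelSize: NSSize, to url: URL) throws {
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(pixelSize.width),
                                     pixelsHigh: Int(pixelSize.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return }
    rep.size = pixelSize // point size; the pixel size is fixed above
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    image.draw(in: NSRect(origin: .zero, size: pixelSize),
               from: .zero,
               operation: .copy,
               fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    if let png = rep.representation(using: .png, properties: [:]) {
        try png.write(to: url)
    }
}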

How to crop image swift

In my project I have a UIImageView with a square format for the user profile image, but when I add an image, for example from the library, it appears distorted. I have found some code to crop images to a square format, but I don't understand where to insert it in my code so that it crops the image selected from the library or a newly taken photo. Can you help me?
Here the code:
ImageUtil.swift
import UIKit

class ImageUtil: NSObject {

    func cropToSquare(image originalImage: UIImage) -> UIImage {
        // Create a copy of the image without the imageOrientation property so it is in its native orientation (landscape)
        let contextImage: UIImage = UIImage(CGImage: originalImage.CGImage)!
        // Get the size of the contextImage
        let contextSize: CGSize = contextImage.size
        let posX: CGFloat
        let posY: CGFloat
        let width: CGFloat
        let height: CGFloat
        // Check to see which length is the longest and create the offset based on that length, then set the width and height of our rect
        if contextSize.width > contextSize.height {
            posX = ((contextSize.width - contextSize.height) / 2)
            posY = 0
            width = contextSize.height
            height = contextSize.height
        } else {
            posX = 0
            posY = ((contextSize.height - contextSize.width) / 2)
            width = contextSize.width
            height = contextSize.width
        }
        let rect: CGRect = CGRectMake(posX, posY, width, height)
        // Create bitmap image from context using the rect
        let imageRef: CGImageRef = CGImageCreateWithImageInRect(contextImage.CGImage, rect)
        // Create a new image based on the imageRef and rotate back to the original orientation
        let image: UIImage = UIImage(CGImage: imageRef, scale: originalImage.scale, orientation: originalImage.imageOrientation)!
        return image
    }
}
AddController.swift
import UIKit

class AddController: UIViewController, UITextFieldDelegate, CameraManagerDelegate {

    @IBOutlet var immagine: UIImageView!
    @IBOutlet var fieldNome: UITextField!
    var immagineSelezionata: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()
        navigationController!.navigationBar.barStyle = UIBarStyle.BlackTranslucent
        navigationController!.navigationBar.tintColor = UIColor.whiteColor()
        navigationController!.navigationBar.barTintColor = UIColor(red: 60/255.0, green: 172/255.0, blue: 183/255.0, alpha: 1.0)
        //navigationItem.titleView = UIImageView(image: UIImage(named: "logo"))
        //tableView.backgroundColor = UIColor(red: 60/255.0, green: 172/255.0, blue: 183/255.0, alpha: 1.0)
        UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.LightContent, animated: false)
        let singleTap: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: "selezionaFoto:")
        singleTap.numberOfTapsRequired = 1
        singleTap.numberOfTouchesRequired = 1
        self.immagine.addGestureRecognizer(singleTap)
        self.immagine.userInteractionEnabled = true
        fieldNome.delegate = self
        CameraManager.sharedInstance.delegate = self
        var keyboardToolbar = UIToolbar(frame: CGRectMake(0, 0, self.view.bounds.size.width, 44))
        keyboardToolbar.barStyle = UIBarStyle.BlackTranslucent
        keyboardToolbar.backgroundColor = UIColor(red: 60/255.0, green: 172/255.0, blue: 183/255.0, alpha: 1.0)
        keyboardToolbar.tintColor = UIColor.whiteColor()
        var flex = UIBarButtonItem(barButtonSystemItem: UIBarButtonSystemItem.FlexibleSpace, target: nil, action: nil)
        var save = UIBarButtonItem(title: NSLocalizedString("Done", comment: ""),
                                   style: UIBarButtonItemStyle.Done,
                                   target: fieldNome,
                                   action: "resignFirstResponder")
        keyboardToolbar.setItems([flex, save], animated: false)
        fieldNome.inputAccessoryView = keyboardToolbar
    }

    func myUIImageViewTapped(recognizer: UITapGestureRecognizer) {
        if (recognizer.state == UIGestureRecognizerState.Ended) {
            println("myUIImageView has been tapped by the user.")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func annulla(sender: UIBarButtonItem) {
        dismissViewControllerAnimated(true, completion: nil)
    }

    @IBAction func salva(sender: UIBarButtonItem) {
        if fieldNome.text.isEmpty
        {
            return
        }
        var profilo = ProfiloModel(nomeIn: fieldNome.text,
                                   immagineIn: UIImage(named: "icon-profile")!)
        if let img = immagineSelezionata {
            profilo.immagine = img
        }
        //DataManager.sharedInstance.storage.insert(profilo, atIndex: 0)
        DataManager.sharedInstance.storage.append(profilo)
        DataManager.sharedInstance.salvaArray()
        DataManager.sharedInstance.master.tableView.reloadData()
        dismissViewControllerAnimated(true, completion: nil)
    }

    @IBAction func selezionaFoto(sender: UITapGestureRecognizer) {
        fieldNome.resignFirstResponder()
        func selezionaLibreria(action: UIAlertAction!) {
            UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.Default, animated: true)
            CameraManager.sharedInstance.newImageFromLibraryForController(self, editing: false)
        }
        func scattaFoto(action: UIAlertAction!) {
            UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.Default, animated: true)
            var circle = UIImageView(frame: UIScreen.mainScreen().bounds)
            circle.image = UIImage(named: "overlay")
            CameraManager.sharedInstance.newImageShootForController(self, editing: false, overlay: circle)
        }
        var myActionSheet = UIAlertController(title: NSLocalizedString("ACTION_IMAGE_TITLE", comment: ""),
                                              message: NSLocalizedString("ACTION_IMAGE_TEXT", comment: ""),
                                              preferredStyle: UIAlertControllerStyle.ActionSheet)
        myActionSheet.addAction(UIAlertAction(title: NSLocalizedString("BUTTON_LIBRARY", comment: ""),
                                              style: UIAlertActionStyle.Default,
                                              handler: selezionaLibreria))
        myActionSheet.addAction(UIAlertAction(title: NSLocalizedString("BUTTON_SHOOT", comment: ""),
                                              style: UIAlertActionStyle.Default,
                                              handler: scattaFoto))
        myActionSheet.addAction(UIAlertAction(title: NSLocalizedString("BUTTON_CANCEL", comment: ""),
                                              style: UIAlertActionStyle.Cancel,
                                              handler: nil))
        self.presentViewController(myActionSheet, animated: true, completion: nil)
    }

    func textFieldShouldReturn(textField: UITextField) -> Bool {
        textField.resignFirstResponder() // close the keyboard in text fields
        return true
    }

    func incomingImage(image: UIImage) {
        UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.LightContent, animated: true)
        immagine.image = image
        immagineSelezionata = image
    }

    func cancelImageSelection() {
        UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.LightContent, animated: true)
    }
}
I found a link that can help you:
CGBitmapContextCreate & CGContextDrawImage
Core Graphics / Quartz 2D offers a lower-level set of APIs that allow for more advanced configuration. Given a CGImage, a temporary bitmap context is used to render the scaled image, using CGBitmapContextCreate() and
CGBitmapContextCreateImage():
let image = UIImage(contentsOfFile: self.URL.absoluteString!).CGImage
let width = CGImageGetWidth(image) / 2.0
let height = CGImageGetHeight(image) / 2.0
let bitsPerComponent = CGImageGetBitsPerComponent(image)
let bytesPerRow = CGImageGetBytesPerRow(image)
let colorSpace = CGImageGetColorSpace(image)
let bitmapInfo = CGImageGetBitmapInfo(image)
let context = CGBitmapContextCreate(nil, width, height,
bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image)
let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context))
CGBitmapContextCreate takes several arguments to construct a context with desired dimensions and amount of memory for each channel within a given colorspace. In the example, these values are fetched from the CGImage. Next, CGContextSetInterpolationQuality allows for the context to interpolate pixels at various levels of fidelity. In this case, kCGInterpolationHigh is passed for best results. CGContextDrawImage allows for the image to be drawn at a given size and position, allowing for the image to be cropped on a particular edge or to fit a set of image features, such as faces. Finally, CGBitmapContextCreateImage creates a CGImage from the context.
From: Image Resizing Techniques
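The excerpt above uses pre-Swift 3 APIs and won't compile as-is in current Swift. A rough modern equivalent of the same bitmap-context approach might look like this sketch (the helper name and the 0.5 scale are illustrative assumptions; a fixed RGBA configuration is used because not every source image's bitmapInfo is valid for a new context):
import UIKit
import ImageIO

func scaledImage(at url: URL, scale: CGFloat = 0.5) -> UIImage? {
    guard let data = try? Data(contentsOf: url),
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }

    let width = Int(CGFloat(image.width) * scale)
    let height = Int(CGFloat(image.height) * scale)

    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }

    context.interpolationQuality = .high
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage().map { UIImage(cgImage: $0) }
}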
See Image-Cropper.
Excerpt:
func btnCropAndSaveClicked(sender: UIButton!) {
    print("save btn clicked")
    let scale = 1 / scrollView.zoomScale
    let visibleRect = CGRect(
        x: (scrollView.contentOffset.x + scrollView.contentInset.left) * scale,
        y: (scrollView.contentOffset.y + scrollView.contentInset.top) * scale,
        width: CROP_WINDOW_WIDTH * scale,
        height: CROP_WINDOW_HEIGHT * scale)
    let imageRef: CGImageRef? = CGImageCreateWithImageInRect(processedImage.CGImage, visibleRect)!
    let croppedImage: UIImage = UIImage(CGImage: imageRef!)
    UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)
    showResultMessage()
}
That code allows you to zoom the selected picture in and out, move it around, and crop the desired area. It's simple to implement; everything is done in the view controller. Hope this solves your problem.
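As for the original question of where to call cropToSquare: one option (a sketch in the question's Swift dialect, not something verified against this exact project) is to crop as soon as the picked image arrives in incomingImage(image:), so both the preview and the saved profile use the square version:
func incomingImage(image: UIImage) {
    UIApplication.sharedApplication().setStatusBarStyle(UIStatusBarStyle.LightContent, animated: true)
    // Crop to a square before displaying and storing it
    let squareImage = ImageUtil().cropToSquare(image: image)
    immagine.image = squareImage
    immagineSelezionata = squareImage
}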