How to name UIImage drawn from UIBezierPath() programmatically in Xcode-Swift?

How do we name an image programmatically? For example, how would we assign a name to the image generated below, a name we can use to distinguish it from other images drawn programmatically?
func drawOval(width: CGFloat, height: CGFloat, name: String) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
    let image = renderer.image { ctx in
        let path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: width, height: height))
        path.stroke()
    }
    // TO DO: Assign this image a name, for example "image01"
    return image
}

You can use tags on each UIImageView. I'm not aware of a way to add an identifier to a UIImage directly, since it is a subclass of NSObject rather than UIView; to give an object a tag in UIKit, it has to be a view of some kind.
To implement this, keep a variable outside the function that tracks the current tag, and increment it inside the function. For example:
var currentTag = 0

// The function now returns a UIImageView
func drawOval(width: CGFloat, height: CGFloat, name: String) -> UIImageView {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height))
    let image = renderer.image { ctx in
        let path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: width, height: height))
        path.stroke()
    }
    let imageView = UIImageView(image: image)
    imageView.tag = currentTag
    currentTag += 1
    return imageView
}
Then later in your code:
if imageView.tag == 0 {
    // Do something
}

// You can also look a view up by its tag from a containing view
// (or a view controller's root view):
let taggedImageView = view.viewWithTag(0)
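A quick usage sketch of the approach above, assuming it runs inside a view controller (the sizes and names are illustrative):

// Create and tag an oval image view, then add it to the hierarchy.
let ovalView = drawOval(width: 100, height: 60, name: "image01")
view.addSubview(ovalView)

// Later, retrieve it by the tag it was given (0 for the first oval drawn).
if let firstOval = view.viewWithTag(0) as? UIImageView {
    // Do something with firstOval
}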
EDIT: If you want to save the images and load them later via one of the available UIImage initializers, you can write them to a cache folder on disk and then retrieve them with UIImage(contentsOfFile:):
// This will store the images in the caches directory for your app, which
// the system can clear when the device is low on storage. It will not be
// cleared while your app is open, though.
func saveImageToCacheDynamically(image: UIImage, name: String) {
    let paths = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)
    let localPath = paths[0].appendingPathComponent("ImageCache", isDirectory: true)
    do {
        if !FileManager.default.fileExists(atPath: localPath.path) {
            try FileManager.default.createDirectory(at: localPath, withIntermediateDirectories: true, attributes: nil)
        }
        // Write the PNG data representation of the image to disk
        try image.pngData()?.write(to: localPath.appendingPathComponent("\(name).txt"))
    } catch {
        print("Error locally saving image: \(error.localizedDescription)")
    }
}
// Later in your code...
let paths = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)
let localPath = paths[0].appendingPathComponent("ImageCache/\(imageIdentifier).txt")
if FileManager.default.fileExists(atPath: localPath.path) {
    do {
        let fileData = try Data(contentsOf: localPath)
        let image = UIImage(data: fileData)
        // Do something with image
    } catch {
        print("Error reading image file: \(error.localizedDescription)")
    }
} else {
    print("Error: image does not exist")
}
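Tying this back to the original question, a hypothetical usage might look like this (assuming drawOval(width:height:name:) returns a UIImage as in the question, and "image01" is just an example name):

let oval = drawOval(width: 100, height: 60, name: "image01")
saveImageToCacheDynamically(image: oval, name: "image01")
// Later, load it back by the same name with the snippet above,
// using "image01" as imageIdentifier.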

I believe what you are looking for is something like NSCache.
You can define a cache like this (NSCache keys must be class types, so use NSString rather than String):
let imageCache = NSCache<NSString, UIImage>()
Then you can add objects to the cache like this, where someKeyString is the 'name' you are referring to:
imageCache.setObject(someImage, forKey: someKeyString as NSString)
And finally you can retrieve images from the cache with:
imageCache.object(forKey: someKeyString as NSString)
I would recommend using extensions or something similar to maintain a reference to your cache everywhere in your app.
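One hypothetical way to do that is to hang the cache off an extension; the type it extends (UIImage) and the member names below are illustrative assumptions, not part of the original answer:

import UIKit

// A minimal sketch of the "extension holding the cache" idea.
extension UIImage {
    static let namedCache = NSCache<NSString, UIImage>()

    // Store this image under a name.
    func cache(named name: String) {
        UIImage.namedCache.setObject(self, forKey: name as NSString)
    }

    // Look an image up by name.
    static func cached(named name: String) -> UIImage? {
        return namedCache.object(forKey: name as NSString)
    }
}

// Usage (assuming drawOval returns a UIImage, as in the question):
// drawOval(width: 100, height: 60, name: "image01").cache(named: "image01")
// let image01 = UIImage.cached(named: "image01")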
** NOTE:
NSCache contents are cleared when memory is short, when your app closes, and so on.
For more permanent storage, I would recommend UserDefaults, which Apple describes as "An interface to the user's defaults database, where you store key-value pairs persistently across launches of your app." Use this for things like profile images or data that won't change very often. I would also recommend looking into Core Data.
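If you do go the UserDefaults route, a minimal sketch might look like this (the PNG encoding and the example key are illustrative assumptions):

import UIKit

// Persist an image as PNG data in UserDefaults.
func saveImageToDefaults(_ image: UIImage, key: String) {
    UserDefaults.standard.set(image.pngData(), forKey: key)
}

// Restore it later by the same key.
func loadImageFromDefaults(key: String) -> UIImage? {
    guard let data = UserDefaults.standard.data(forKey: key) else { return nil }
    return UIImage(data: data)
}

// Usage:
// saveImageToDefaults(ovalImage, key: "image01")
// let restored = loadImageFromDefaults(key: "image01")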

Related

Swift: Loss of resolution when redrawing PDF

I'm trying to use a PDF file (containing a form with a table for several 'records') as a template.
My idea is to read in the original file as a template, then use Swift's drawing features to insert the text at the relevant positions before saving as a new PDF file and printing.
My problem is that I'm seeing a small loss of resolution (fonts are slightly woolly, gridlines are no longer crisp) when re-saving the output.
I've tried two approaches: the first with my 'template' as a file in the project, and the second with it as an asset (Scales: Single Scale, Resizing: Preserve Vector Data).
Here is my code:
func createCompletedForm(records: [MyDataObject]) -> URL? {
    let directoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = URL(fileURLWithPath: "pdfToPrint", relativeTo: directoryURL).appendingPathExtension("pdf")
    guard let templateUrl = Bundle.main.url(forResource: "MyPdfTemplate", withExtension: "pdf") else { return nil }
    guard let document = CGPDFDocument(templateUrl as CFURL) else { return nil }
    guard let page = document.page(at: 1) else { return nil }
    let pageTemplate = page.getBoxRect(.mediaBox)
    UIGraphicsBeginPDFContextToFile(fileURL.path, pageTemplate, nil)
    guard let pdfContext = UIGraphicsGetCurrentContext() else {
        print("Unable to access PDF Context.")
        return nil
    }
    // Mark the beginning of the page.
    pdfContext.beginPDFPage(nil)
    // Save the context state to restore after we are done drawing the image.
    pdfContext.saveGState()
    // Change the PDF context to match the UIKit coordinate system.
    pdfContext.translateBy(x: 0, y: StandardPageDimensions.ISO216_A4.height)
    pdfContext.scaleBy(x: 1, y: -1)
    // Option 1: Draw PDF from a file added to my project
    DrawingHelper.drawPDFfromCGPDF(page: page, drawingArea: pageTemplate)
    // Option 2: Draw PDF from Assets
    //let baseTemplate = UIImage(named: "MyPdfTemplate")
    //baseTemplate?.draw(at: CGPoint(x: 0, y: 0))
    // Draw the records over the template - NOT the source of the problem, happens even when commented out
    //addRecordsToTemplate(records: records)
    // Restoring the context back to its original state.
    pdfContext.restoreGState()
    // Mark the end of the current page.
    pdfContext.endPDFPage()
    UIGraphicsEndPDFContext()
    // Useful to find and open the file produced on the simulator
    print("pdf created at : \(fileURL.path)")
    return fileURL
}
// And the drawing function from my helper class
static func drawPDFfromCGPDF(page: CGPDFPage, drawingArea: CGRect) {
    let renderer = UIGraphicsImageRenderer(size: drawingArea.size)
    let img = renderer.image { ctx in
        UIColor.white.set()
        ctx.fill(drawingArea)
        ctx.cgContext.translateBy(x: 0.0, y: drawingArea.size.height)
        ctx.cgContext.scaleBy(x: 1.0, y: -1.0)
        ctx.cgContext.drawPDFPage(page)
    }
    img.draw(at: CGPoint(x: 0, y: 0))
}

How to save a PDF (and print it) from a UITableView (or cell) the easy way?

I've read and searched a lot on the internet and Stack Overflow, but I still have a problem.
I've used this extension:
extension UITableView {
    // Export pdf from UITableView, save the pdf in the documents directory and return the pdf file path
    func exportAsPdfFromTable() -> String {
        self.showsVerticalScrollIndicator = false
        let originalBounds = self.bounds
        self.bounds = CGRect(x: originalBounds.origin.x, y: originalBounds.origin.y, width: self.contentSize.width, height: self.contentSize.height)
        let pdfPageFrame = CGRect(x: 0, y: 0, width: self.bounds.size.width, height: self.contentSize.height)
        let pdfData = NSMutableData()
        UIGraphicsBeginPDFContextToData(pdfData, pdfPageFrame, nil)
        UIGraphicsBeginPDFPageWithInfo(pdfPageFrame, nil)
        guard let pdfContext = UIGraphicsGetCurrentContext() else { return "" }
        self.layer.render(in: pdfContext)
        UIGraphicsEndPDFContext()
        self.bounds = originalBounds
        // Save pdf data
        return self.saveTablePdf(data: pdfData)
    }

    // Save pdf file in document directory
    func saveTablePdf(data: NSMutableData) -> String {
        let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
        let docDirectoryPath = paths[0]
        let pdfPath = docDirectoryPath.appendingPathComponent("myPDF.pdf")
        if data.write(to: pdfPath, atomically: true) {
            return pdfPath.path
        } else {
            return ""
        }
    }
}
Then I save the path this way:
myPdfPath = self.tableView.exportAsPdfFromTable()
I tried several ways to show the pdf:
- UIActivityViewController (this works, but not well)
- WKWebView
- SFSafariViewController (only supports http/https; a file path does not work)
- UIDocumentInteractionController (does not show a preview and does not work)
The extension used to save the pdf does not work well (it doesn't fill the print preview page) and it generates a 125 MB pdf with a title I don't want.
Is there an easy way to do this?
1) Generate a pdf of the table view on a white background (always, even if the table view is red, for example).
2) Print the generated pdf.
Thanks

iMessage MSSticker view created from UIView incorrect sizing

Hey, I have been struggling with this for a couple of days now and can't seem to find any documentation, outside of the standard grid views, for MSStickerView sizes.
I am working on an app that creates MSStickerViews dynamically. It does this by converting a UIView into a UIImage, saving it to disk, then passing the URL to MSSticker before creating the MSStickerView; the frame of that view is then set to the size of the original view.
The problem I have is that when I drag the MSStickerView into the messages window, it shrinks while being dragged, then changes to a larger size when dropped into the messages window. I have no idea how to control the size while dragging or the final image size.
Here's my code to create an image from a view:
extension UIView {
    func imageFromView() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        defer { UIGraphicsEndImageContext() }
        if let context = UIGraphicsGetCurrentContext() {
            self.layer.render(in: context)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            return image
        }
        return nil
    }
}
And here's the code to save this to disk
extension UIImage {
    func savedPath(name: String) -> URL {
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let filePath = "\(paths[0])/\(name).png"
        let url = URL(fileURLWithPath: filePath)
        // Save image.
        if let data = self.pngData() {
            do {
                try data.write(to: url)
            } catch let error as NSError {
                print("Error writing image to disk: \(error)")
            }
        }
        return url
    }
}
Finally, here is the code that converts the saved file into a sticker:
if let stickerImage = backgroundBox.imageFromView() {
    let url = stickerImage.savedPath(name: textBox.text ?? "StickerMCSticker")
    if let msSticker = try? MSSticker(contentsOfFileURL: url, localizedDescription: "") {
        var newFrame = self.backgroundBox.frame
        newFrame.size.width = newFrame.size.width
        newFrame.size.height = newFrame.size.height
        let stickerView = MSStickerView(frame: newFrame, sticker: msSticker)
        self.view.addSubview(stickerView)
        print("** sticker frame \(stickerView.frame)")
        self.sticker = stickerView
    }
}
I wondered at first whether there was something I needed to do regarding retina sizes, but adding @2x to the file name just breaks the image, so I'm stuck on this. The WWDC sessions seem to show stickers being created from file paths without the size changing in the transition between drag and drop. Any help would be appreciated!
I fixed this issue eventually by getting the frame from the view I was copying and then calling sizeToFit():
init(sticker: MSSticker, size: CGSize) {
    let stickerFrame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    self.sticker = MSStickerView(frame: stickerFrame, sticker: sticker)
    self.sticker.sizeToFit()
    super.init(nibName: nil, bundle: nil)
}
The MSStickerView was not setting the correct size on its own. Essentially, the sticker size in my view did not match the size of the MSSticker, so the moment the drag began, the real sticker size took over (which was different from the frame size / Auto Layout I was applying in my view).

Why is filtering a cropped image 4x slower than filtering a resized image (both have the same dimensions)?

I've been trying to wrap my head around this problem with no luck. I have a very simple Swift command-line application which takes one argument: the path of an image to load. It crops the image and filters that image fragment with a SepiaTone filter.
It works just fine: it crops the image to 200x200 and filters it with SepiaTone. Here's the problem I'm facing: the whole process takes 600 ms on my MacBook Air, but when I RESIZE (instead of crop) the input image to the same dimensions (200x200), it takes 150 ms.
Why is that? In both cases I'm filtering an image which is 200x200 in size. I'm using this particular image for testing (5966x3978).
UPDATE:
It's this particular line of code that takes 4x longer when dealing with cropped image:
var ciImage:CIImage = CIImage(cgImage: cgImage)
END OF UPDATE
Code for cropping (200x200):
import AppKit
import CoreImage

// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[Int(CommandLine.argc) - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// CROP THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
if let croppedImage = cgImage.cropping(to: rect) {
    cgImage = croppedImage
} else {
    exit(EXIT_FAILURE)
}
// END CROPPING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

// perform filtering in a GPU context
let context: CIContext = CIContext()
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
Code for resizing (200x200):
import AppKit
import CoreImage

// parse args and get image path
let args: [String] = CommandLine.arguments
let inputFile: String = args[Int(CommandLine.argc) - 1]
let inputURL: URL = URL(fileURLWithPath: inputFile)

// load the image from path into NSImage
// and convert NSImage into CGImage
guard
    let nsImage = NSImage(contentsOf: inputURL),
    var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
else {
    exit(EXIT_FAILURE)
}

// RESIZE THE IMAGE TO 200x200
// THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
// IN THOSE TWO EXAMPLES
guard let CGcontext = CGContext(data: nil,
                                width: 200,
                                height: 200,
                                bitsPerComponent: cgImage.bitsPerComponent,
                                bytesPerRow: cgImage.bytesPerRow,
                                space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: cgImage.bitmapInfo.rawValue)
else {
    exit(EXIT_FAILURE)
}
CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
if let resizeOutput = CGcontext.makeImage() {
    cgImage = resizeOutput
}
// END RESIZING

// convert CGImage to CIImage
var ciImage: CIImage = CIImage(cgImage: cgImage)

// initiate SepiaTone
guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
    exit(EXIT_FAILURE)
}
sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
guard let result = sepiaFilter.outputImage else {
    exit(EXIT_FAILURE)
}

// perform filtering in a GPU context
let context: CIContext = CIContext()
guard let output = context.createCGImage(result, from: ciImage.extent) else {
    exit(EXIT_FAILURE)
}
It's very likely that the cgImage lives in video memory, and when you scale the image the hardware writes it to a new area of memory. When you crop the cgImage, the documentation implies that it just references the original image. The line
var ciImage: CIImage = CIImage(cgImage: cgImage)
must be triggering a read (maybe into main memory?), and in the case of your scaled image it can probably read the whole buffer contiguously. In the case of the cropped image it may be reading it line by line, which could account for the difference, but that's just me guessing.
It looks like you are doing two very different things. In the "slow" version you are cropping (taking a small CGRect out of the original image), and in the "fast" version you are resizing (reducing the whole original down to a CGRect).
You can prove this by adding two UIImageViews and adding these lines after each declaration of ciImage:
slowImage.image = UIImage(ciImage: ciImage)
fastImage.image = UIImage(ciImage: ciImage)
Here are two simulator screenshots, with the "slow" image above the "fast" image. The first is with your code, where the "slow" CGRect origin is (0,0); the second is with the origin adjusted to (2000,2000):
[Screenshot: origin (0,0)]
[Screenshot: origin (2000,2000)]
Knowing this, I can make a few guesses about where the time goes.
I'm including a link to Apple's documentation on the cropping function below. It explains that it does some CGRect calculations behind the scenes, but it doesn't explain how it pulls the pixel bits out of the full-sized CGImage; I think that's where the real slowdown is.
In the end, though, it looks like the timing difference is due to doing two entirely different things.
CGImage.cropping(to:)

How can I improve image quality in the pdf?

I'm working on a pdf photo report app, and I'm struggling with the low image quality in the pdfs that are generated.
func drawImage(index: Int, rectPos: Int) {
    let image = getImage(index)
    let xPosition = CGFloat(rectArray[rectPos][0])
    let yPosition = CGFloat(rectArray[rectPos][1])
    image.drawInRectAspectFill(CGRectMake(xPosition, yPosition, 325, 244))
}

func getImage(index: Int) -> UIImage {
    var thumbnail = UIImage()
    if self.photoAsset.count != 0 {
        let initialRequestOptions = PHImageRequestOptions()
        initialRequestOptions.resizeMode = .Exact
        initialRequestOptions.deliveryMode = .HighQualityFormat
        initialRequestOptions.synchronous = true
        PHImageManager.defaultManager().requestImageForAsset(self.photoAsset[index], targetSize: CGSizeMake(325, 244), contentMode: PHImageContentMode.Default, options: initialRequestOptions, resultHandler: { (result, info) -> Void in
            thumbnail = result!
        })
    }
    return thumbnail
}
I then use these functions to grab the image and place it at a position on a page after UIGraphicsBeginPDFPageWithInfo(page, nil).
I'm using the BSImagePicker pod to get the images.
And finally my photoAsset is just an array of PHAsset photos that is generated after the user selects the images from the pod's collection view.
So far I've tried all the settings for initialRequestOptions.deliveryMode; .HighQualityFormat doesn't seem to make the images any better.
What am I doing wrong here? Thanks!
The targetSize you pass when requesting the image sets the minimum resolution of the image you are going to use.
Instead of:
PHImageManager.defaultManager().requestImageForAsset(self.photoAsset[index], targetSize: CGSizeMake(325, 244)...
I simply doubled the requested size, and the image drawn into the pdf is better quality:
PHImageManager.defaultManager().requestImageForAsset(self.photoAsset[index], targetSize: CGSizeMake(650, 488)...
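For reference, a minimal sketch of the same idea in current Swift syntax (the drawSize parameter and the 2x multiplier are illustrative assumptions; you might scale by the screen's scale factor instead):

import Photos
import UIKit

// Request the asset at twice the size it will be drawn at, so the returned
// UIImage has enough pixels to stay sharp in the PDF.
func requestImage(for asset: PHAsset, drawSize: CGSize, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.resizeMode = .exact
    options.deliveryMode = .highQualityFormat
    options.isSynchronous = true

    let targetSize = CGSize(width: drawSize.width * 2, height: drawSize.height * 2)
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: targetSize,
                                          contentMode: .default,
                                          options: options) { result, _ in
        completion(result)
    }
}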