Function in Swift to append a PDF file to another PDF

I created two different PDF files in two different views using the following code:
private func toPDF(views: [UIView]) -> NSData? {
    if views.isEmpty { return nil }
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, CGRect(x: 0, y: 0, width: 1024, height: 1448), nil)
    let context = UIGraphicsGetCurrentContext()
    for view in views {
        UIGraphicsBeginPDFPage()
        view.layer.renderInContext(context!)
    }
    UIGraphicsEndPDFContext()
    return pdfData
}
In the final view I retrieve both files using:
let firstPDF = NSUserDefaults.standardUserDefaults().dataForKey("PDFone")
let secondPDF = NSUserDefaults.standardUserDefaults().dataForKey("PDFtwo")
My question is: Can anyone suggest a function that appends the second file to the first one? (Both are in NSData format.)

Swift 4:
func merge(pdfs: Data...) -> Data {
    let out = NSMutableData()
    UIGraphicsBeginPDFContextToData(out, .zero, nil)
    guard let context = UIGraphicsGetCurrentContext() else {
        return out as Data
    }
    for pdf in pdfs {
        // Skip unreadable or empty documents (a 1...0 page range would trap).
        guard let dataProvider = CGDataProvider(data: pdf as CFData),
              let document = CGPDFDocument(dataProvider),
              document.numberOfPages > 0 else { continue }
        for pageNumber in 1...document.numberOfPages {
            guard let page = document.page(at: pageNumber) else { continue }
            var mediaBox = page.getBoxRect(.mediaBox)
            context.beginPage(mediaBox: &mediaBox)
            context.drawPDFPage(page)
            context.endPage()
        }
    }
    context.closePDF()
    UIGraphicsEndPDFContext()
    return out as Data
}
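For example, with the two data blobs stored by the question (a usage sketch only, assuming both keys are present in UserDefaults):
let defaults = UserDefaults.standard
if let firstPDF = defaults.data(forKey: "PDFone"),
   let secondPDF = defaults.data(forKey: "PDFtwo") {
    // merge(pdfs:) is variadic, so both documents can be passed in one call
    let combined = merge(pdfs: firstPDF, secondPDF)
    // combined now holds the pages of both PDFs, drawn in order
}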

This can be done quite easily with PDFKit and its PDFDocument.
I'm using this extension:
import PDFKit

extension PDFDocument {
    func addPages(from document: PDFDocument) {
        let pageCountAddition = document.pageCount
        for pageIndex in 0..<pageCountAddition {
            guard let addPage = document.page(at: pageIndex) else {
                break
            }
            // Note: insert(_:at:) takes the index the new page will occupy,
            // so passing self.pageCount appends it after the current last page
            // (not self.pageCount - 1, which one might expect).
            self.insert(addPage, at: self.pageCount)
        }
    }
}
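A usage sketch for the extension, again assuming the question's two data blobs are available (PDFDocument(data:) and dataRepresentation() are PDFKit APIs):
if let firstData = UserDefaults.standard.data(forKey: "PDFone"),
   let secondData = UserDefaults.standard.data(forKey: "PDFtwo"),
   let firstDocument = PDFDocument(data: firstData),
   let secondDocument = PDFDocument(data: secondData) {
    firstDocument.addPages(from: secondDocument)
    let mergedData = firstDocument.dataRepresentation() // Data? containing the merged PDF
}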

Swift 5:
Merging with PDFKit (see the extension above) keeps links and similar PDF features intact, unlike redrawing each page into a new graphics context.

Related

Get image resolution from a URL in the background thread

I'm trying to get the resolution of an image from a URL without actually downloading it, so I have a function for that:
public func resolutionForImage(url: String) -> CGSize? {
    guard let url = URL(string: url) else { return nil }
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    let propertiesOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, propertiesOptions) as? [CFString: Any] else {
        return nil
    }
    if let width = properties[kCGImagePropertyPixelWidth] as? CGFloat,
       let height = properties[kCGImagePropertyPixelHeight] as? CGFloat {
        return CGSize(width: width, height: height)
    } else {
        return nil
    }
}
It works fine, but it blocks the UI when it runs on the main thread, so I need to run it on a background thread.
This function is called from another function in a collection view cell, and in the end the calculateImageHeight output has to come back on the main thread. Could anyone help me manage this? I'm still not good at threading.
public func calculateImageHeight(ratio: Double = 0.0) -> CGFloat {
    var calculatedRatio = ratio
    if ratio == 0.0 {
        if let size = resolutionForImage(url: imageUrl) {
            calculatedRatio = size.height/size.width
        }
    }
    let height = imageViewWidth * calculatedRatio
    return height.isFinite ? height : 100
}
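Not part of the original post, but one possible shape for the threading (a sketch only; imageUrl, imageViewWidth, and the calling cell are assumed to exist as in the question): do the Image I/O work on a background queue and hand the height back on the main queue through a completion handler.
public func calculateImageHeight(ratio: Double = 0.0, completion: @escaping (CGFloat) -> Void) {
    // If the ratio is already known, there is nothing slow to do.
    if ratio != 0.0 {
        completion(imageViewWidth * CGFloat(ratio))
        return
    }
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }
        var calculatedRatio: CGFloat = 0
        if let size = self.resolutionForImage(url: self.imageUrl) {
            calculatedRatio = size.height / size.width
        }
        let height = self.imageViewWidth * calculatedRatio
        DispatchQueue.main.async {
            // Deliver the result on the main thread so the cell can update its layout.
            completion(height.isFinite && height > 0 ? height : 100)
        }
    }
}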

My videos have a naturalSize of only (4.0, 3.0) pixels, which is also the extracted frame size

Context
I'm dealing with video files that are 1280x920; that's their actual pixel size when displayed in QuickTime or played in my AVPlayer.
I have a bunch of videos in a folder and I need to stitch them together in an AVMutableComposition and play it (a sketch of that assembly follows the diagrams below).
I also need, for each video, to extract the last frame.
What I did so far was use AVAssetImageGenerator on each of my individual AVAssets, and it worked, whether I was using generateCGImagesAsynchronously or copyCGImage.
But I thought it would be more efficient to run generateCGImagesAsynchronously on my composition asset, so I have only one call instead of looping over each original track.
Instead of:
v-Get Frame
AVAsset1 |---------|
AVAsset2 |---------|
AVAsset3 |---------|
I want to do:
v----------v----------v- Get Frames
AVMutableComposition: |---------||---------||---------|
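(Not from the original question, but for context, assembling such a composition typically looks something like this sketch; fileURLs is a hypothetical array of the video URLs from the folder:)
func makeComposition(from fileURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)
    var cursor = CMTime.zero
    for url in fileURLs {
        let asset = AVAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }
        // Append each asset's video track at the current end of the composition.
        try videoTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                        of: sourceTrack,
                                        at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    return composition
}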
Problem
Here is the actual issue:
import AVKit
var video1URL = URL(fileReferenceLiteralResourceName: "video_bad.mp4") // One of my video file
let asset1 = AVAsset(url: video1URL)
let track1 = asset1.tracks(withMediaType: .video).first!
_ = track1.naturalSize // {w 4 h 3}
var video2URL = URL(fileReferenceLiteralResourceName: "video_ok.mp4") // Some mp4 I got from internet
let asset2 = AVAsset(url: video2URL)
let track2 = asset2.tracks(withMediaType: .video).first!
_ = track2.naturalSize // {w 1920 h 1080}
Here is the actual screenshot of the playground (that you can download here):
And here is something else :
Look at the "Current Scale" information in QuickTime's inspector. The video displays just fine, but it is shown as being heavily magnified (note that no pixel is blurry or anything; it has to do with some metadata).
The video file I'm working with in QuickTime:
The video file from internet:
Question
What is that metadata, and how do I deal with it?
Why is it different on the original track than when the track is put into a composition?
How can I extract a frame from such videos?
So if you stumble across this post, it's probably because you are trying to figure out Tesla's way of writing videos.
There is no easy solution to this issue, which is caused by Tesla software incorrectly setting metadata in .mov video files. I opened an incident with Apple and they were able to confirm this.
So I wrote some code to go and fix the video file by rewriting the bytes that store the video track size.
Here it is. It's ugly, but for the sake of completeness I wanted to post a solution here, even if it's not the best one.
import Foundation

struct VideoFixer {
    var url: URL
    private var fh: FileHandle?

    static func fix(_ url: URL) {
        var fixer = VideoFixer(url)
        fixer.fix()
    }

    init(_ url: URL) {
        self.url = url
    }

    mutating func fix() {
        guard let fh = try? FileHandle(forUpdating: url) else {
            return
        }
        // Walk the atom tree: moov > trak > tkhd.
        var atom = Atom(fh)
        atom.seekTo(AtomType.moov)
        atom.enter()
        if atom.atom_type != AtomType.trak {
            atom.seekTo(AtomType.trak)
        }
        atom.enter()
        if atom.atom_type != AtomType.tkhd {
            atom.seekTo(AtomType.tkhd)
        }
        atom.seekTo(AtomType.tkhd)
        // Read the track width/height stored in the tkhd atom.
        let data = atom.data()
        let width = data?.withUnsafeBytes { $0.load(fromByteOffset: 76, as: UInt16.self).bigEndian }
        let height = data?.withUnsafeBytes { $0.load(fromByteOffset: 80, as: UInt16.self).bigEndian }
        if width == 4 && height == 3 {
            guard let offset = try? fh.offset() else {
                return
            }
            // Overwrite the bogus 4x3 dimensions with the real 1280x960 (big-endian).
            try? fh.seek(toOffset: offset + 76)
            var newWidth = UInt16(1280).byteSwapped
            var newHeight = UInt16(960).byteSwapped
            let dataWidth = Data(bytes: &newWidth, count: 2)
            let dataHeight = Data(bytes: &newHeight, count: 2)
            fh.write(dataWidth)
            try? fh.seek(toOffset: offset + 80)
            fh.write(dataHeight)
        }
        try? fh.close()
    }
}
typealias AtomType = UInt32

extension UInt32 {
    static var ftyp = UInt32(1718909296) // 'ftyp'
    static var mdat = UInt32(1835295092) // 'mdat'
    static var free = UInt32(1718773093) // 'free'
    static var moov = UInt32(1836019574) // 'moov'
    static var trak = UInt32(1953653099) // 'trak'
    static var tkhd = UInt32(1953196132) // 'tkhd'
}
struct Atom {
    var fh: FileHandle
    var atom_size: UInt32 = 0
    var atom_type: UInt32 = 0

    init(_ fh: FileHandle) {
        self.fh = fh
        self.read()
    }

    // Advance through sibling atoms until one of the requested type is found.
    mutating func seekTo(_ type: AtomType) {
        while self.atom_type != type {
            self.next()
        }
    }

    // Skip over the current atom's payload and read the next atom header.
    mutating func next() {
        guard var offset = try? fh.offset() else {
            return
        }
        offset = offset - 8 + UInt64(atom_size)
        if (try? self.fh.seek(toOffset: UInt64(offset))) == nil {
            return
        }
        self.read()
    }

    mutating func read() {
        self.atom_size = fh.nextUInt32().bigEndian
        self.atom_type = fh.nextUInt32().bigEndian
    }

    // Descend into the current atom by reading the header of its first child.
    mutating func enter() {
        self.atom_size = fh.nextUInt32().bigEndian
        self.atom_type = fh.nextUInt32().bigEndian
    }

    // Read the current atom's bytes without moving the file pointer.
    func data() -> Data? {
        guard let offset = try? fh.offset() else {
            return nil
        }
        let data = fh.readData(ofLength: Int(self.atom_size))
        try? fh.seek(toOffset: offset)
        return data
    }
}
extension FileHandle {
    func nextUInt32() -> UInt32 {
        let data = self.readData(ofLength: 4)
        let i32array = data.withUnsafeBytes { $0.load(as: UInt32.self) }
        return i32array
    }
}
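For completeness, a usage sketch (the file path is hypothetical): the fixer patches the file in place, so the asset can simply be reloaded afterwards.
let videoURL = URL(fileURLWithPath: "/path/to/video_bad.mp4") // hypothetical path
VideoFixer.fix(videoURL)
let fixedAsset = AVAsset(url: videoURL)
_ = fixedAsset.tracks(withMediaType: .video).first?.naturalSize // expected to report the patched dimensions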

Convert HTML to NSAttributedString in Background?

I am working on an app that will retrieve posts from WordPress and allow the user to view each post individually, in detail. The WordPress API brings back the post content, which is the HTML of the post. (Note: the img tags reference the WordPress URL of the uploaded image.)
Originally, I was using a WebView and loading the retrieved content directly into it. This worked great; however, the images seemed to be loading on the main thread, which caused lag and would sometimes freeze the UI until the image had finished downloading. I was advised to check out the Aztec Editor library from WordPress; however, I could not understand how to use it (could not find much documentation).
My current route is parsing the HTML content and creating a list of dictionaries (keys of type [image or text] and content). Once it is parsed, I build out the post by dynamically adding labels and image views (which allows me to download images in the background). While this does seem overly complex and probably the wrong route, it is working well (I would be open to any other solutions, though!). My only issue currently is the delay in converting an HTML string to an NSAttributedString. Before adding the text content to the dictionary, I convert it from a String to an NSAttributedString. I have noticed a delay of a few seconds, with the simulator's CPU getting up to 50-60% for a few seconds and then dropping. Is there any way I could do this conversion on a background thread and display a loading animation during this time?
Please let me know if you have any ideas or suggestions for a better solution. Thank you very much!
Code:
let postCache = NSCache<NSString, AnyObject>()
var yPos = CGFloat(20)
let screenWidth = UIScreen.main.bounds.width
...
func parsePost() -> [[String: Any]]? {
    if let postFromCache = postCache.object(forKey: postToView.id as NSString) as? [[String: Any]] {
        return postFromCache
    } else {
        var content = [[String: Any]]()
        do {
            let doc: Document = try SwiftSoup.parse(postToView.postContent)
            if let elements = try doc.body()?.children() {
                for elem in elements {
                    if elem.hasText() {
                        do {
                            let html = try elem.html()
                            if let validHtmlString = html.htmlToAttributedString {
                                content.append(["text": validHtmlString])
                            }
                        }
                    } else {
                        let imageElements = try elem.getElementsByTag("img")
                        if imageElements.size() > 0 {
                            for image in imageElements {
                                var imageDictionary = [String: Any]()
                                let width = (image.getAttributes()?.get(key: "width"))!
                                let height = (image.getAttributes()?.get(key: "height"))!
                                let ratio = CGFloat(Float(height)!/Float(width)!)
                                imageDictionary["ratio"] = ratio
                                imageDictionary["image"] = (image.getAttributes()?.get(key: "src"))!
                                content.append(imageDictionary)
                            }
                        }
                    }
                }
            }
        } catch {
            print("error")
        }
        if content.count > 0 {
            postCache.setObject(content as AnyObject, forKey: postToView.id as NSString)
        }
        return content
    }
}
func buildUi(content: [[String: Any]]) {
    for dict in content {
        if let attributedText = dict["text"] as? NSAttributedString {
            let labelToAdd = UILabel()
            labelToAdd.attributedText = attributedText
            labelToAdd.numberOfLines = 0
            labelToAdd.frame = CGRect(x: 0, y: yPos, width: 200, height: 0)
            labelToAdd.sizeToFit()
            yPos += labelToAdd.frame.height + 5
            self.scrollView.addSubview(labelToAdd)
        } else if let imageName = dict["image"] as? String {
            let ratio = dict["ratio"] as! CGFloat
            let imageToAdd = UIImageView()
            let url = URL(string: imageName)
            Nuke.loadImage(with: url!, into: imageToAdd)
            imageToAdd.frame = CGRect(x: 0, y: yPos, width: screenWidth, height: screenWidth * ratio)
            yPos += imageToAdd.frame.height + 5
            self.scrollView.addSubview(imageToAdd)
        }
    }
    self.scrollView.contentSize = CGSize(width: self.scrollView.contentSize.width, height: yPos)
}
extension String {
    var htmlToAttributedString: NSAttributedString? {
        guard let data = data(using: .utf8) else { return NSAttributedString() }
        do {
            return try NSAttributedString(data: data,
                                          options: [.documentType: NSAttributedString.DocumentType.html,
                                                    .characterEncoding: String.Encoding.utf8.rawValue],
                                          documentAttributes: nil)
        } catch {
            return NSAttributedString()
        }
    }
    var htmlToString: String {
        return htmlToAttributedString?.string ?? ""
    }
}
(Forgive me for the not-so-clean code! I just want to make sure I can achieve the desired outcome before I start refactoring. Thanks again!)
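Not an answer from the original thread, but a rough sketch of the requested pattern: show a spinner, push the parsing onto a background queue, and build the UI back on the main queue (activityIndicator is a hypothetical outlet). One caveat: Apple documents the .html importer used by htmlToAttributedString as wanting the main thread, so if the conversion misbehaves in the background it may have to stay on the main queue or be replaced by a different HTML-to-attributed-string parser.
func loadPost() {
    activityIndicator.startAnimating()
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }
        let content = self.parsePost() ?? []   // heavy parsing off the main thread
        DispatchQueue.main.async {
            self.activityIndicator.stopAnimating()
            self.buildUi(content: content)     // UIKit work back on the main thread
        }
    }
}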

Apple Vision framework – Text extraction from image

I am using Vision framework for iOS 11 to detect text on image.
The text is getting detected successfully, but how can we get the detected text?
Recognizing text in an image
VNRecognizeTextRequest is available starting with iOS 13.0 and macOS 10.15.
In Apple Vision you can easily extract text from an image using the VNRecognizeTextRequest class, which lets you make an image analysis request that finds and recognizes text in an image.
Here's a SwiftUI solution showing you how to do it (tested in Xcode 13.4, iOS 15.5):
import SwiftUI
import Vision

struct ContentView: View {
    var body: some View {
        ZStack {
            Color.black.ignoresSafeArea()
            Image("imageText").scaleEffect(0.5)
            SomeText()
        }
    }
}
The logic is the following:
struct SomeText: UIViewRepresentable {
    let label = UITextView(frame: .zero)

    func makeUIView(context: Context) -> UITextView {
        label.backgroundColor = .clear
        label.textColor = .systemYellow
        label.textAlignment = .center
        label.font = .boldSystemFont(ofSize: 25)
        return label
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        let path = Bundle.main.path(forResource: "imageText", ofType: "png")
        let url = URL(fileURLWithPath: path!)
        let requestHandler = VNImageRequestHandler(url: url, options: [:])
        let request = VNRecognizeTextRequest { (request, _) in
            guard let obs = request.results as? [VNRecognizedTextObservation]
            else { return }
            for observation in obs {
                let topCan: [VNRecognizedText] = observation.topCandidates(1)
                if let recognizedText: VNRecognizedText = topCan.first {
                    label.text = recognizedText.string
                }
            }
        }
        // Choose one recognition level; the later assignment wins, so this request runs with .fast.
        // .accurate: non-realtime, asynchronous-feeling, but accurate text recognition
        request.recognitionLevel = VNRequestTextRecognitionLevel.accurate
        // .fast: nearly realtime but less accurate text recognition
        request.recognitionLevel = VNRequestTextRecognitionLevel.fast
        try? requestHandler.perform([request])
    }
}
If you want to know the list of supported languages for recognition, please read this post.
Not exactly a dupe but similar to: Converting a Vision VNTextObservation to a String
You need to either use CoreML or another library to perform OCR (SwiftOCR, etc.)
This will return an overlay image with rectangle boxes drawn on the detected text.
Here is the full Xcode project:
https://github.com/cyruslok/iOS11-Vision-Framework-Demo
Hope it is helpful.
// Text Detect
func textDetect(dectect_image: UIImage, display_image_view: UIImageView) -> UIImage {
    let handler: VNImageRequestHandler = VNImageRequestHandler.init(cgImage: (dectect_image.cgImage)!)
    var result_img: UIImage = UIImage.init()
    let request: VNDetectTextRectanglesRequest = VNDetectTextRectanglesRequest.init(completionHandler: { (request, error) in
        if (error) != nil {
            print("Got Error In Run Text Dectect Request")
        } else {
            result_img = self.drawRectangleForTextDectect(image: dectect_image, results: request.results as! Array<VNTextObservation>)
        }
    })
    request.reportCharacterBoxes = true
    do {
        try handler.perform([request])
        return result_img
    } catch {
        return result_img
    }
}
func drawRectangleForTextDectect(image: UIImage, results: Array<VNTextObservation>) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    var t: CGAffineTransform = CGAffineTransform.identity
    t = t.scaledBy(x: image.size.width, y: -image.size.height)
    t = t.translatedBy(x: 0, y: -1)
    let img = renderer.image { ctx in
        for item in results {
            let TextObservation: VNTextObservation = item
            ctx.cgContext.setFillColor(UIColor.clear.cgColor)
            ctx.cgContext.setStrokeColor(UIColor.green.cgColor)
            ctx.cgContext.setLineWidth(1)
            ctx.cgContext.addRect(item.boundingBox.applying(t))
            ctx.cgContext.drawPath(using: .fillStroke)
            for item_2 in TextObservation.characterBoxes! {
                let RectangleObservation: VNRectangleObservation = item_2
                ctx.cgContext.setFillColor(UIColor.clear.cgColor)
                ctx.cgContext.setStrokeColor(UIColor.red.cgColor)
                ctx.cgContext.setLineWidth(1)
                ctx.cgContext.addRect(RectangleObservation.boundingBox.applying(t))
                ctx.cgContext.drawPath(using: .fillStroke)
            }
        }
    }
    return img
}

CMFormatDescription to CMVideoFormatDescription

I'm trying to get the resolution of the camera of a device using swift.
I'm using CMVideoFormatDescriptionGetDimensions which requires a CMVideoFormatDescription, but AVCaptureDevice.formatDescription returns a CMFormatDescription. I've tried a multitude of ways to cast CMFormatDescription to CMVideoFormatDescription and can't seem to get it working.
Below is a sample of the code that I'm using:
for format in device.formats as [AVCaptureDeviceFormat] {
    let videoDimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
}
This doesn't seem possible in Swift at the moment. One solution, then, would be to write a helper function in Objective-C, such as:
CMVideoDimensions CMFormatDescriptionGetDimensions(CMFormatDescriptionRef formatDescription)
{
    if (CMFormatDescriptionGetMediaType(formatDescription) == kCMMediaType_Video)
        return CMVideoFormatDescriptionGetDimensions(formatDescription);
    else
        return (CMVideoDimensions) {
            .width = 0,
            .height = 0
        };
}
Include the header with the function prototype in the Swift bridging header so that it will be accessible as a global function from your Swift code.
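Once that header is in the bridging header, the helper can be called like any global function from Swift (a sketch; device is an AVCaptureDevice as in the question):
for format in device.formats {
    let dimensions = CMFormatDescriptionGetDimensions(format.formatDescription)
    print("\(dimensions.width) x \(dimensions.height)")
}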
I was able to get the resolution using the Swift method below:
let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) as AVCaptureDevice
let formatDesc = captureDevice.activeFormat.formatDescription
let dimensions = CMVideoFormatDescriptionGetDimensions(formatDesc)
Here's a solution in pure Swift, really only usable for logging purposes and such. Paste the following function in your class or somewhere else:
func widthAndHeightFromTrack(track: AVAssetTrack) -> CGSize {
    let str = track.formatDescriptions.description
    let regex = try! NSRegularExpression(pattern: "[0-9]{2,4} x [0-9]{2,4}", options: [])
    if let result = regex.firstMatchInString(str, options: [], range: NSMakeRange(0, str.characters.count)) {
        let dimensionString = (str as NSString).substringWithRange(result.range)
        let dimensionArray = dimensionString.componentsSeparatedByString(" x ")
        let width = Int(dimensionArray[0])
        let height = Int(dimensionArray[1])
        return CGSize(width: width!, height: height!)
    }
    return CGSizeZero
}
Example usage:
let allTracks = someAVAsset.tracksWithMediaType(AVMediaTypeVideo) // returns an array of AVAssetTrack
let videoTrack = allTracks[0]
let videoTrackDimensions = widthAndHeightFromTrack(videoTrack)
// You now have a CGSize, print it
print("Dimensions: \(videoTrackDimensions)")
Of course, the above solution will completely break whenever Apple changes something in the string representation of the CMFormatDescription. But it's useful for logging the dimensions.
Maybe the question is too old, but the Swift issue is still not fixed.
public extension AVURLAsset {
    var audioFormatDescription: CMAudioFormatDescription? {
        if let track = self.tracks(withMediaType: .audio).first,
           let untypedDescription = track.formatDescriptions.first {
            // The hacks, warnings, and swiftlint disablings below are a work-around for a
            // Swift bug: it fails converting 'Any as CMFormatDescription'.
            let forceTyped: CMFormatDescription?
            //swiftlint:disable force_cast
                = untypedDescription as! CMAudioFormatDescription
            //swiftlint:enable force_cast
            if let description = forceTyped {
                return description
            } else {
                return nil
            }
        } else {
            return nil
        }
    }
}
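The same work-around can be applied to the video track; here is a sketch (videoFormatDescription is a name introduced here, not an existing API), after which CMVideoFormatDescriptionGetDimensions works as usual:
public extension AVURLAsset {
    var videoFormatDescription: CMVideoFormatDescription? {
        guard let track = self.tracks(withMediaType: .video).first,
              let untypedDescription = track.formatDescriptions.first else { return nil }
        // Same force-cast work-around as in the audio variant above.
        //swiftlint:disable force_cast
        let description = untypedDescription as! CMVideoFormatDescription
        //swiftlint:enable force_cast
        return description
    }
}

// Usage (hypothetical file URL):
// if let description = AVURLAsset(url: movieURL).videoFormatDescription {
//     let dimensions = CMVideoFormatDescriptionGetDimensions(description)
//     print("\(dimensions.width) x \(dimensions.height)")
// }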