FBSimulatorControl : Convert CVPixelBufferRef data to jpeg data for streaming - swift

I am developing a CLI in Swift that uses FBSimulatorControl, and I want to stream the data delivered to the consumeData: delegate method. As far as I can tell, the data passed to that method comes from a CVPixelBufferRef; I have also logged an issue about this in the FBSimulatorControl GitHub repo.
I tried to recreate the CVPixelBuffer from the data, obtain a CIImage from it, and convert that to JPEG data, but it does not seem to work. Can anyone help me with this? The code I've tried is below:
var pixelBuffer: CVPixelBuffer? = nil
let result = CVPixelBufferCreate(kCFAllocatorDefault, 750, 1334, kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
if result != kCVReturnSuccess {
    print("pixel buffer creation failed")
    return
}
CVPixelBufferLockBaseAddress(pixelBuffer!, .init(rawValue: 0))
let yDestPlane: UnsafeMutableRawPointer? = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
if yDestPlane == nil {
    print("failed to get yDestPlane")
    return
}
let nsData = data as NSData
let rawPtr = nsData.bytes
memcpy(yDestPlane!, rawPtr, 750 * 1334 * 4)
CVPixelBufferUnlockBaseAddress(pixelBuffer!, .init(rawValue: 0))
if #available(OSX 10.12, *) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let tempContext = CIContext(options: nil)
    let videoImage = tempContext.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(pixelBuffer!), height: CVPixelBufferGetHeight(pixelBuffer!)))
    let imageSize: NSSize = NSMakeSize(750, 1334)
    let nsImageTest = NSImage(cgImage: videoImage!, size: imageSize)
    if let bits = nsImageTest.representations.first as? NSBitmapImageRep {
        let jpegFinalData = bits.representation(using: .JPEG, properties: [:])
        if self.isStreaming {
            var simIndex: Int?
            for i in 0...(AUConnectionSimulatorMap.instance.simConnectionMap.count - 1) {
                if sim.udid == AUConnectionSimulatorMap.instance.simConnectionMap[i].sim.sim.udid {
                    simIndex = i
                    break
                }
            }
            var finalData: Data = Data()
            let finalDict = ["data": ["type": "onScreenFrame", "value": jpegFinalData!]] as Dictionary<String, Any>
            try! finalData.pack(finalDict)
            AUConnectionSimulatorMap.instance.simConnectionMap[simIndex!].ws?.write(data: finalData)
        }
    }
} else {
    // Fallback on earlier versions
}
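For comparison, a minimal sketch of the same conversion, assuming the incoming Data is tightly packed 32BGRA at 750x1334 and macOS 10.13+ is available (for CIContext's jpegRepresentation). It copies row by row because CVPixelBufferCreate may pad rows (bytes-per-row can exceed width * 4), and it reads the base address with CVPixelBufferGetBaseAddress since 32BGRA buffers are not planar:
import CoreImage
import CoreVideo
import Foundation

@available(macOS 10.13, *)
func jpegData(fromBGRA data: Data, width: Int, height: Int) -> Data? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }

    // The buffer's bytes-per-row may be larger than width * 4, so copy each row separately.
    let destBytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let srcBytesPerRow = width * 4
    data.withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        for row in 0..<height {
            memcpy(base + row * destBytesPerRow,
                   src.baseAddress! + row * srcBytesPerRow,
                   srcBytesPerRow)
        }
    }

    let ciImage = CIImage(cvPixelBuffer: buffer)
    let context = CIContext()
    return context.jpegRepresentation(of: ciImage, colorSpace: CGColorSpaceCreateDeviceRGB())
}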

Related

How can I create a spectrogram from an audio file?

I have tried to create a spectrogram using this Apple tutorial, but it uses live audio input from the microphone. I want to create one from an existing file. I have tried to convert Apple's example from live input to existing files with no luck, so I am wondering if there are any better resources out there.
Here is how I am getting the samples:
let samples: (naturalTimeScale: Int32, data: [Float]) = {
    guard let samples = AudioUtilities.getAudioSamples(
        forResource: resource,
        withExtension: wExtension) else {
            fatalError("Unable to parse the audio resource.")
    }
    return samples
}()
// Returns an array of single-precision values for the specified audio resource.
static func getAudioSamples(forResource: String,
                            withExtension: String) -> (naturalTimeScale: CMTimeScale,
                                                       data: [Float])? {
    guard let path = Bundle.main.url(forResource: forResource,
                                     withExtension: withExtension) else {
        return nil
    }
    let asset = AVAsset(url: path.absoluteURL)
    guard
        let reader = try? AVAssetReader(asset: asset),
        let track = asset.tracks.first else {
            return nil
    }
    let outputSettings: [String: Int] = [
        AVFormatIDKey: Int(kAudioFormatLinearPCM),
        AVNumberOfChannelsKey: 1,
        AVLinearPCMIsBigEndianKey: 0,
        AVLinearPCMIsFloatKey: 1,
        AVLinearPCMBitDepthKey: 32,
        AVLinearPCMIsNonInterleaved: 1
    ]
    let output = AVAssetReaderTrackOutput(track: track,
                                          outputSettings: outputSettings)
    reader.add(output)
    reader.startReading()
    var samplesData = [Float]()
    while reader.status == .reading {
        if
            let sampleBuffer = output.copyNextSampleBuffer(),
            let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
            let bufferLength = CMBlockBufferGetDataLength(dataBuffer)
            var data = [Float](repeating: 0,
                               count: bufferLength / 4)
            CMBlockBufferCopyDataBytes(dataBuffer,
                                       atOffset: 0,
                                       dataLength: bufferLength,
                                       destination: &data)
            samplesData.append(contentsOf: data)
        }
    }
    return (naturalTimeScale: track.naturalTimeScale, data: samplesData)
}
And here is how I am performing the "fft" or dct in this case:
static var sampleCount = 1024
let forwardDCT = vDSP.DCT(count: sampleCount,
transformType: .II)
guard let freqs = forwardDCT?.transform(samples.data) else { return }
This is the part where I begin to get lost/stuck in the Apple tutorial. How can I create the spectrogram from here?
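For what it's worth, a rough sketch of one way to continue, assuming samples.data from getAudioSamples above and the same 1024-point DCT: slice the signal into fixed-length frames, transform each frame, and convert the magnitudes to decibels; each frame then becomes one column of the spectrogram. The hop size and dB scaling here are illustrative, not taken from the question.
import Accelerate

func spectrogramFrames(from signal: [Float], frameLength: Int = 1024) -> [[Float]] {
    guard let dct = vDSP.DCT(count: frameLength, transformType: .II) else { return [] }
    var frames = [[Float]]()
    var start = 0
    while start + frameLength <= signal.count {
        let frame = Array(signal[start ..< start + frameLength])
        // DCT of one window of samples, then magnitude -> decibels so the
        // dynamic range survives when mapped to pixel intensities.
        let amplitudes = dct.transform(frame).map { abs($0) }
        var decibels = [Float](repeating: 0, count: amplitudes.count)
        vDSP.convert(amplitude: amplitudes,
                     toDecibels: &decibels,
                     zeroReference: Float(frameLength))
        frames.append(decibels)
        start += frameLength   // no overlap; a smaller hop gives a smoother image
    }
    return frames
}
// Each inner array is one column of the spectrogram; normalize the values and
// fill a grayscale (or false-color) bitmap with them to draw the image.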

The File "xxx" couldn’t be opened because there is no such file from directory

In my video recording app, I record a video and save it to the photo library. The ultimate goal is to take the recently taken videos and merge them with this merge function.
extension AVMutableComposition {
func mergeVideo(_ urls: [URL], completion: @escaping (_ url: URL?, _ error: Error?) -> Void) {
guard let documentDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first else {
completion(nil, nil)
return
}
let dateFormatter = DateFormatter()
dateFormatter.dateStyle = .long
dateFormatter.timeStyle = .short
let date = dateFormatter.string(from: Date())
let outputURL = documentDirectory.appendingPathComponent("mergedVideo_\(date).mp4")
// If there is only one video, we don't need to touch it, to save export time.
if let url = urls.first, urls.count == 1 {
do {
try FileManager().copyItem(at: url, to: outputURL)
completion(outputURL, nil)
} catch let error {
completion(nil, error)
}
return
}
let maxRenderSize = CGSize(width: 1280.0, height: 720.0)
var currentTime = CMTime.zero
var renderSize = CGSize.zero
// Create empty Layer Instructions, that we will be passing to Video Composition and finally to Exporter.
var instructions = [AVMutableVideoCompositionInstruction]()
urls.enumerated().forEach { index, url in
let asset = AVAsset(url: url)
print(asset)
let assetTrack = asset.tracks.first!
// Create instruction for a video and append it to array.
let instruction = AVMutableComposition.instruction(assetTrack, asset: asset, time: currentTime, duration: assetTrack.timeRange.duration, maxRenderSize: maxRenderSize)
instructions.append(instruction.videoCompositionInstruction)
// Set render size (orientation) according to the first video.
if index == 0 {
renderSize = instruction.isPortrait ? CGSize(width: maxRenderSize.height, height: maxRenderSize.width) : CGSize(width: maxRenderSize.width, height: maxRenderSize.height)
}
do {
let timeRange = CMTimeRangeMake(start: .zero, duration: assetTrack.timeRange.duration)
// Insert video to Mutable Composition at right time.
try insertTimeRange(timeRange, of: asset, at: currentTime)
currentTime = CMTimeAdd(currentTime, assetTrack.timeRange.duration)
} catch let error {
completion(nil, error)
}
}
// Create Video Composition and pass Layer Instructions to it.
let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = instructions
// Do not forget to set frame duration and render size. It will crash if you don't.
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
videoComposition.renderSize = renderSize
guard let exporter = AVAssetExportSession(asset: self, presetName: AVAssetExportPresetHighestQuality) else {
completion(nil, nil)
return
}
exporter.outputURL = outputURL
exporter.outputFileType = .mp4
// Pass Video Composition to the Exporter.
exporter.videoComposition = videoComposition
exporter.shouldOptimizeForNetworkUse = true
exporter.exportAsynchronously {
DispatchQueue.main.async {
completion(exporter.outputURL, nil)
}
}
}
static func instruction(_ assetTrack: AVAssetTrack, asset: AVAsset, time: CMTime, duration: CMTime, maxRenderSize: CGSize)
-> (videoCompositionInstruction: AVMutableVideoCompositionInstruction, isPortrait: Bool) {
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: assetTrack)
// Find out orientation from preferred transform.
let assetInfo = orientationFromTransform(assetTrack.preferredTransform)
// Calculate scale ratio according orientation.
var scaleRatio = maxRenderSize.width / assetTrack.naturalSize.width
if assetInfo.isPortrait {
scaleRatio = maxRenderSize.height / assetTrack.naturalSize.height
}
// Set correct transform.
var transform = CGAffineTransform(scaleX: scaleRatio, y: scaleRatio)
transform = assetTrack.preferredTransform.concatenating(transform)
layerInstruction.setTransform(transform, at: .zero)
// Create Composition Instruction and pass Layer Instruction to it.
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(start: time, duration: duration)
videoCompositionInstruction.layerInstructions = [layerInstruction]
return (videoCompositionInstruction, assetInfo.isPortrait)
}
static func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImage.Orientation, isPortrait: Bool) {
var assetOrientation = UIImage.Orientation.up
var isPortrait = false
switch [transform.a, transform.b, transform.c, transform.d] {
case [0.0, 1.0, -1.0, 0.0]:
assetOrientation = .right
isPortrait = true
case [0.0, -1.0, 1.0, 0.0]:
assetOrientation = .left
isPortrait = true
case [1.0, 0.0, 0.0, 1.0]:
assetOrientation = .up
case [-1.0, 0.0, 0.0, -1.0]:
assetOrientation = .down
default:
break
}
return (assetOrientation, isPortrait)
}
}
After calling this function, if the 'urls' array has only 1 item, I get an error saying "The file (insert filename) couldn't be opened because there is no such file." Otherwise, if the array has more than 1 item, the app crashes because a force-unwrapped optional value was found to be nil. This is how I'm formatting the URLs and saving them to the app directory:
func tempURL() -> URL? {
    let directory = NSTemporaryDirectory() as NSString
    if directory != "" {
        let path = directory.appendingPathComponent(NSUUID().uuidString + ".mp4")
        return URL(fileURLWithPath: path)
    }
    return nil
}
Any ideas on what's the issue or how to fix this?
assetTrack = asset.tracks.first!
.first! force-unwraps an optional, so it crashes whenever the asset has no tracks (for example, when the file could not be opened). Unwrap the track safely instead, e.g. with guard let or if let, and report the failure rather than crashing, as sketched below.

Tensor (.tflite) Model inference returning nil using Firebase SDK on Swift

Preface:
My ML (specifically NN) knowledge is very limited, and I'm really only getting more familiar as time goes on.
Essentially, I have a model that accepts an input of [1, H, W, 3] (1 image, height, width, 3 channels) and SHOULD output [1, H, W, 2] (1 image, height, width, 2 channels). The idea is that I'll grab the image data from one of the output channels and convert it to an actual image, which should essentially indicate, and sort of highlight, whether a certain "something" existed in the input image, using that one color channel (or the other one).
The model author is actively working on the model so it's nothing close to a perfect model.
So, with that:
I was initially using the TensorFlow Lite SDK to do everything, but I found that its official documentation, examples, and open-source work weren't even close to comparable with the Firebase SDK. Plus, the actual project (I'm currently testing this in a test environment) already uses the Firebase SDK. Anyway, I was able to get some form of output, but I wasn't normalizing the image properly, so the output wasn't as expected; at least there was SOMETHING.
Using this guide on Firebase, I am trying to run inference on a .tflite model.
From the code below you'll see that I have TensorFlowLite as a dependency, but I'm not actually ACTIVELY using it. I have a function that uses it, but that function isn't called.
So essentially you can ignore: parseOutputTensor, coordinateToIndex, and enum: Constants
Theories:
My modelInputs aren't set up properly.
I'm not correctly looking at the output
I'm not resizing and processing the image correctly before I use it to set the input data for inference
I don't know what I'm doing and I'm way off. D:
Below is my code:
import UIKit
import Firebase
import AVFoundation
import TensorFlowLite
class ViewController: UIViewController {
var captureSesssion : AVCaptureSession!
var cameraOutput : AVCapturePhotoOutput!
var previewLayer : AVCaptureVideoPreviewLayer!
@objc let device = AVCaptureDevice.default(for: .video)!
private var previousInferenceTimeMs: TimeInterval = Date.distantPast.timeIntervalSince1970 * 1000
private let delayBetweenInferencesMs: Double = 1000
@IBOutlet var imageView: UIImageView!
private var button1 : UIButton = {
var button = UIButton()
button.setTitle("button lol", for: .normal)
button.translatesAutoresizingMaskIntoConstraints = false
button.addTarget(self, action: #selector(buttonClicked), for: .touchDown)
return button
}()
override func viewDidLoad() {
super.viewDidLoad()
startCamera()
view.addSubview(button1)
view.bringSubviewToFront(button1)
button1.bottomAnchor.constraint(equalTo: view.bottomAnchor).isActive = true
button1.titleLabel?.font = UIFont(name: "Helvetica", size: 25)
button1.widthAnchor.constraint(equalToConstant: view.frame.width/3).isActive = true
button1.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
}
@objc func buttonClicked() {
cameraPressed()
}
private func configureLocalModel() -> CustomLocalModel {
guard let modelPath = Bundle.main.path(forResource: "modelName", ofType: "tflite") else { fatalError("Couldn't find the modelPath") }
return CustomLocalModel(modelPath: modelPath)
}
private func createInterpreter(customLocalModel: CustomLocalModel) -> ModelInterpreter{
return ModelInterpreter.modelInterpreter(localModel: customLocalModel)
}
private func setModelInputOutput() -> ModelInputOutputOptions? {
var ioOptions : ModelInputOutputOptions
do {
ioOptions = ModelInputOutputOptions()
try ioOptions.setInputFormat(index: 0, type: .float32, dimensions: [1, 512, 512, 3])
try ioOptions.setOutputFormat(index: 0, type: .float32, dimensions: [1, 512, 512, 2])
} catch let error as NSError {
print("Failed to set input or output format with error: \(error.localizedDescription)")
}
return ioOptions
}
private func inputDataForInference(theImage: CGImage) -> ModelInputs?{
let image: CGImage = theImage
guard let context = CGContext(
data: nil,
width: image.width, height: image.height,
bitsPerComponent: 8, bytesPerRow: image.width * 4,
space: CGColorSpaceCreateDeviceRGB(),
bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
) else { fatalError("Context issues") }
context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
guard let imageData = context.data else { fatalError("Context issues") }
let inputs : ModelInputs
var inputData = Data()
do {
for row in 0 ..< 512 {
for col in 0 ..< 512 {
let offset = 4 * (col * context.width + row)
// (Ignore offset 0, the unused alpha channel)
let red = imageData.load(fromByteOffset: offset+1, as: UInt8.self)
let green = imageData.load(fromByteOffset: offset+2, as: UInt8.self)
let blue = imageData.load(fromByteOffset: offset+3, as: UInt8.self)
// Normalize channel values to [0.0, 1.0]. This requirement varies
// by model. For example, some models might require values to be
// normalized to the range [-1.0, 1.0] instead, and others might
// require fixed-point values or the original bytes.
var normalizedRed = Float32(red) / 255.0
var normalizedGreen = Float32(green) / 255.0
var normalizedBlue = Float32(blue) / 255.0
// Append normalized values to Data object in RGB order.
let elementSize = MemoryLayout.size(ofValue: normalizedRed)
var bytes = [UInt8](repeating: 0, count: elementSize)
memcpy(&bytes, &normalizedRed, elementSize)
inputData.append(&bytes, count: elementSize)
memcpy(&bytes, &normalizedGreen, elementSize)
inputData.append(&bytes, count: elementSize)
memcpy(&bytes, &normalizedBlue, elementSize)
inputData.append(&bytes, count: elementSize)
}
}
inputs = ModelInputs()
try inputs.addInput(inputData)
} catch let error {
print("Failed to add input: \(error)")
}
return inputs
}
private func runInterpreter(interpreter: ModelInterpreter, inputs: ModelInputs, ioOptions: ModelInputOutputOptions){
interpreter.run(inputs: inputs, options: ioOptions) { outputs, error in
guard error == nil, let outputs = outputs else { fatalError("interpreter run error is nil or outputs is nil") }
let output = try? outputs.output(index: 0) as? [[NSNumber]]
print()
print("output?[0]: \(output?[0])")
print("output?.count: \(output?.count)")
print("output?.description: \(output?.description)")
}
}
private func gotImage(cgImage: CGImage){
let configuredModel = configureLocalModel()
let interpreter = createInterpreter(customLocalModel: configuredModel)
guard let modelioOptions = setModelInputOutput() else { fatalError("modelioOptions got image error") }
guard let modelInputs = inputDataForInference(theImage: cgImage) else { fatalError("modelInputs got image error") }
runInterpreter(interpreter: interpreter, inputs: modelInputs, ioOptions: modelioOptions)
}
private func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
let newSize = CGSize(width: targetSize.width, height: targetSize.height)
// This is the rect that we've calculated out and this is what is actually used below
let rect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
// Actually do the resizing to the rect using the ImageContext stuff
UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
image.draw(in: rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
}
extension ViewController: AVCapturePhotoCaptureDelegate{
func startCamera(){
captureSesssion = AVCaptureSession()
previewLayer = AVCaptureVideoPreviewLayer(session: captureSesssion)
captureSesssion.sessionPreset = AVCaptureSession.Preset.photo;
cameraOutput = AVCapturePhotoOutput()
previewLayer.frame = CGRect(x: view.frame.origin.x, y: view.frame.origin.y, width: view.frame.width, height: view.frame.height)
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
do {
try device.lockForConfiguration()
} catch {
return
}
device.focusMode = .continuousAutoFocus
device.unlockForConfiguration()
print("startcamera")
if let input = try? AVCaptureDeviceInput(device: device) {
if captureSesssion.canAddInput(input) {
captureSesssion.addInput(input)
if captureSesssion.canAddOutput(cameraOutput) {
captureSesssion.addOutput(cameraOutput)
view.layer.addSublayer(previewLayer)
captureSesssion.startRunning()
}
} else {
print("issue here : captureSesssion.canAddInput")
_ = UIAlertController(title: "Your camera doesn't seem to be working :(", message: "Please make sure your camera works", preferredStyle: .alert)
}
} else {
fatalError("TBPVC -> startCamera() : AVCaptureDeviceInput Error")
}
}
func cameraPressed(){
let outputFormat = [kCVPixelBufferPixelFormatTypeKey as String: kCMPixelFormat_32BGRA]
let settings = AVCapturePhotoSettings(format: outputFormat)
cameraOutput.capturePhoto(with: settings, delegate: self)
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
print("got image")
// guard let cgImageFromPhoto = photo.cgImageRepresentation()?.takeRetainedValue() else { fatalError("cgImageRepresentation()?.takeRetainedValue error") }
guard let imageData = photo.fileDataRepresentation() else {
fatalError("Error while generating image from photo capture data.")
}
guard let uiImage = UIImage(data: imageData) else {
fatalError("Unable to generate UIImage from image data.")
}
let tempImage = resizeImage(image: uiImage, targetSize: CGSize(width: 512, height: 512))
// generate a corresponding CGImage
guard let tempCgImage = tempImage.cgImage else {
fatalError("Error generating CGImage")
}
gotImage(cgImage: tempCgImage)
}
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
if let error = error {
let ac = UIAlertController(title: "Save error", message: error.localizedDescription, preferredStyle: .alert)
ac.addAction(UIAlertAction(title: "OK", style: .default))
present(ac, animated: true)
} else {
let ac = UIAlertController(title: "Saved!", message: "Your altered image has been saved to your photos.", preferredStyle: .alert)
ac.addAction(UIAlertAction(title: "OK", style: .default))
present(ac, animated: true)
}
}
}
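One cheap sanity check on the first theory (input setup), sketched here rather than taken from the original code: with an input declared as [1, 512, 512, 3] Float32, the Data handed to ModelInputs must be exactly 512 * 512 * 3 * 4 bytes. Dropping something like this just before inputs.addInput(inputData) quickly confirms whether the buffer size matches the declared shape:
let expectedByteCount = 1 * 512 * 512 * 3 * MemoryLayout<Float32>.size
assert(inputData.count == expectedByteCount,
       "Input is \(inputData.count) bytes, expected \(expectedByteCount)")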

swift generate a qrcode [duplicate]

This question already has answers here:
What does "Fatal error: Unexpectedly found nil while unwrapping an Optional value" mean?
(16 answers)
Closed 2 years ago.
I tried to generate a QR code, but it raises this error:
Thread 1: Fatal error: Unexpectedly found nil while unwrapping an Optional value
let myString = "ggigiuui"
let data = myString.data(using: .ascii, allowLossyConversion: false)
let filter = CIFilter(name: "CIQRCodeGenerator")
filter?.setValue(data, forKey: "inputMessage")
let img = UIImage(ciImage: (filter?.outputImage)!)
qponImage.image = img
I have used the following code, and it is working perfectly, where self.imgQRCode is the image view on which you want to display the QR code.
func generateQRCode(from string: String) -> UIImage?
{
let data = string.data(using: String.Encoding.ascii)
if let filter = CIFilter(name: "CIQRCodeGenerator")
{
filter.setValue(data, forKey: "inputMessage")
guard let qrImage = filter.outputImage else {return nil}
let scaleX = self.imgQRCode.frame.size.width / qrImage.extent.size.width
let scaleY = self.imgQRCode.frame.size.height / qrImage.extent.size.height
let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
if let output = filter.outputImage?.transformed(by: transform)
{
return UIImage(ciImage: output)
}
}
return nil
}
Please try this,
func generateQRCode(from string: String) -> UIImage? {
let data = string.data(using: String.Encoding.ascii)
if let filter = CIFilter(name: "CIQRCodeGenerator") {
filter.setValue(data, forKey: "inputMessage")
let transform = CGAffineTransform(scaleX: 3, y: 3)
if let output = filter.outputImage?.transformed(by: transform) {
return UIImage(ciImage: output)
}
}
return nil
}
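A small usage sketch for the function above (qrImageView here is a placeholder UIImageView outlet, not something from the question):
if let qrImage = generateQRCode(from: "https://example.com") {
    qrImageView.image = qrImage
}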
This is how you can generate a QR code and display it in a UIImageView.
First of all, create a new Cocoa Touch Class .swift file and import these two frameworks:
import UIKit
import CoreImage
As the second step, you just need to add extensions of URL and CIImage in the same .swift file.
Extensions:
extension URL {
/// Creates a QR code for the current URL in the given color.
func qrImage(using color: UIColor, logo: UIImage? = nil) -> CIImage? {
let tintedQRImage = qrImage?.tinted(using: color)
guard let logo = logo?.cgImage else {
return tintedQRImage
}
return tintedQRImage?.combined(with: CIImage(cgImage: logo))
}
/// Returns a black and white QR code for this URL.
var qrImage: CIImage? {
guard let qrFilter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
let qrData = absoluteString.data(using: String.Encoding.ascii)
qrFilter.setValue(qrData, forKey: "inputMessage")
let qrTransform = CGAffineTransform(scaleX: 12, y: 12)
return qrFilter.outputImage?.transformed(by: qrTransform)
}
}
extension CIImage {
/// Inverts the colors and creates a transparent image by converting the mask to alpha.
/// Input image should be black and white.
var transparent: CIImage? {
return inverted?.blackTransparent
}
/// Inverts the colors.
var inverted: CIImage? {
guard let invertedColorFilter = CIFilter(name: "CIColorInvert") else { return nil }
invertedColorFilter.setValue(self, forKey: "inputImage")
return invertedColorFilter.outputImage
}
/// Converts all black to transparent.
var blackTransparent: CIImage? {
guard let blackTransparentFilter = CIFilter(name: "CIMaskToAlpha") else { return nil }
blackTransparentFilter.setValue(self, forKey: "inputImage")
return blackTransparentFilter.outputImage
}
/// Applies the given color as a tint color.
func tinted(using color: UIColor) -> CIImage?
{
guard
let transparentQRImage = transparent,
let filter = CIFilter(name: "CIMultiplyCompositing"),
let colorFilter = CIFilter(name: "CIConstantColorGenerator") else { return nil }
let ciColor = CIColor(color: color)
colorFilter.setValue(ciColor, forKey: kCIInputColorKey)
let colorImage = colorFilter.outputImage
filter.setValue(colorImage, forKey: kCIInputImageKey)
filter.setValue(transparentQRImage, forKey: kCIInputBackgroundImageKey)
return filter.outputImage!
}
/// Combines the current image with the given image centered.
func combined(with image: CIImage) -> CIImage? {
guard let combinedFilter = CIFilter(name: "CISourceOverCompositing") else { return nil }
let centerTransform = CGAffineTransform(translationX: extent.midX - (image.extent.size.width / 2), y: extent.midY - (image.extent.size.height / 2))
combinedFilter.setValue(image.transformed(by: centerTransform), forKey: "inputImage")
combinedFilter.setValue(self, forKey: "inputBackgroundImage")
return combinedFilter.outputImage!
}
}
As the third step, you have to bind the outlet of the image view in which you want to display the generated QR code.
Your ViewController.swift file should look something like this:
// desired color of QRCode
let OrangeColor = UIColor(red:0.93, green:0.31, blue:0.23, alpha:1.00)
// app logo or whatever UIImage you want to set in the center.
let Logo = UIImage(named: "logo_which_you_want_to_set_in_the center_of_the_QRCode")!
#IBOutlet weak var imgQRImage: UIImageView!
As the last step, add the QR code to imgQRImage by putting this code in your viewDidLoad():
override func viewDidLoad() {
super.viewDidLoad()
let QRLink = "https://www.peerbits.com/"
guard let qrURLImage = URL(string: QRLink)?.qrImage(using: self.OrangeColor, logo: self.Logo) else { return }
self.imgQRImage.image = UIImage(ciImage: qrURLImage)
}
As mentioned in the docs, we can use CIQRCodeGenerator:
func qrCode(_ outputSize: CGSize) -> UIImage?
{
if let data = data(using: .isoLatin1),
let outputImage = CIFilter(
name: "CIQRCodeGenerator",
parameters: [
"inputMessage": data,
"inputCorrectionLevel": "Q"
]
)?.outputImage {
let size: CGRect = outputImage.extent.integral
let format = UIGraphicsImageRendererFormat()
format.scale = UIScreen.main.scale
return UIGraphicsImageRenderer(size: outputSize, format: format)
.image { _ in
outputImage
.transformed(
by: .init(
scaleX: outputSize.width/size.width,
y: outputSize.height/size.height
)
)
.uiimage
.draw(in: .init(origin: .zero, size: outputSize))
}
} else {
return nil
}
}
extension CIImage {
var uiimage: UIImage {
.init(ciImage: self)
}
}
This is a slightly modified version of this post.
And in case you need to parse a QR code image for its content:
func decodeQRCode(_ image: UIImage?) -> [CIQRCodeFeature]? {
if let image = image,
let ciImage = CIImage(image: image) {
let context = CIContext()
var options: [String: Any] = [
CIDetectorAccuracy: CIDetectorAccuracyHigh
]
let qrDetector = CIDetector(
ofType: CIDetectorTypeQRCode,
context: context,
options: options
)
if ciImage.properties.keys
.contains((kCGImagePropertyOrientation as String)) {
options = [
CIDetectorImageOrientation: ciImage
.properties[(kCGImagePropertyOrientation as String)] as Any
]
} else {
options = [CIDetectorImageOrientation: 1]
}
let features = qrDetector?.features(in: ciImage, options: options)
return features?
.compactMap({ $0 as? CIQRCodeFeature })
}
return nil
}
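And a usage sketch for the decoder (qrUIImage is a placeholder UIImage): each CIQRCodeFeature exposes its decoded payload via messageString.
if let message = decodeQRCode(qrUIImage)?.first?.messageString {
    print("QR code contains: \(message)")
}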

Function in Swift to Append a Pdf file to another Pdf

I created two different PDF files in two different views using the following code:
private func toPDF(views: [UIView]) -> NSData? {
    if views.isEmpty { return nil }
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, CGRect(x: 0, y: 0, width: 1024, height: 1448), nil)
    let context = UIGraphicsGetCurrentContext()
    for view in views {
        UIGraphicsBeginPDFPage()
        view.layer.renderInContext(context!)
    }
    UIGraphicsEndPDFContext()
    return pdfData
}
In the final view I call both files using:
let firstPDF = NSUserDefaults.standardUserDefaults().dataForKey("PDFone")
let secondPDF = NSUserDefaults.standardUserDefaults().dataForKey("PDFtwo")
My question is: can anyone suggest a function which appends the second file to the first one? (Both are in NSData format.)
Swift 4:
func merge(pdfs: Data...) -> Data {
    let out = NSMutableData()
    UIGraphicsBeginPDFContextToData(out, .zero, nil)
    guard let context = UIGraphicsGetCurrentContext() else {
        return out as Data
    }
    for pdf in pdfs {
        guard let dataProvider = CGDataProvider(data: pdf as CFData), let document = CGPDFDocument(dataProvider) else { continue }
        for pageNumber in 1...document.numberOfPages {
            guard let page = document.page(at: pageNumber) else { continue }
            var mediaBox = page.getBoxRect(.mediaBox)
            context.beginPage(mediaBox: &mediaBox)
            context.drawPDFPage(page)
            context.endPage()
        }
    }
    context.closePDF()
    UIGraphicsEndPDFContext()
    return out as Data
}
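A usage sketch for the function above, assuming the two documents are loaded as Data, e.g. from UserDefaults with the keys from the question:
if let firstPDF = UserDefaults.standard.data(forKey: "PDFone"),
   let secondPDF = UserDefaults.standard.data(forKey: "PDFtwo") {
    let combined = merge(pdfs: firstPDF, secondPDF)
    // Write the combined document somewhere, e.g. the temporary directory.
    try? combined.write(to: FileManager.default.temporaryDirectory.appendingPathComponent("merged.pdf"))
}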
This can be done quite easily with PDFKit and its PDFDocument.
I'm using this extension:
import PDFKit
extension PDFDocument {
    func addPages(from document: PDFDocument) {
        let pageCountAddition = document.pageCount
        for pageIndex in 0..<pageCountAddition {
            guard let addPage = document.page(at: pageIndex) else {
                break
            }
            self.insert(addPage, at: self.pageCount) // unfortunately this is very very confusing. The index is the page *after* the insertion. Every normal programmer would assume insert at self.pageCount-1
        }
    }
}
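A usage sketch for the extension above, wrapped in a small helper (assuming both PDFs are available as Data):
func mergedPDFData(_ firstPDF: Data, _ secondPDF: Data) -> Data? {
    guard let first = PDFDocument(data: firstPDF),
          let second = PDFDocument(data: secondPDF) else { return nil }
    first.addPages(from: second)
    return first.dataRepresentation()   // the combined document, ready to write or share
}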
Swift 5:
Merge pdfs like this to keep links, etc...
See answer here