How to generate QRCode image with parameters in swift? - swift

I need to create a QR code image from four parameters of a registered user, so that when I scan that QR code from another device I can display that user's details. That is my requirement.
Here is how I get the registered user details; I need to generate the QR code image from these parameters:
var userId = userModel?.userId
var userType = userModel?.userType
var addressId = userModel?.userAddrId
var addressType = userModel?.userAddrType
According to [this answer][1] I have created a QR code from a string, but I need to generate one from my registered user parameters.
Sample code with a string:
private func createQRFromString(str: String) -> CIImage? {
    let stringData = str.data(using: .utf8)
    let filter = CIFilter(name: "CIQRCodeGenerator")
    filter?.setValue(stringData, forKey: "inputMessage")
    filter?.setValue("H", forKey: "inputCorrectionLevel")
    return filter?.outputImage
}
var qrCode: UIImage? {
    if let img = createQRFromString(str: "Hello world program created by someone") {
        let someImage = UIImage(
            ciImage: img,
            scale: 1.0,
            orientation: UIImage.Orientation.down
        )
        return someImage
    }
    return nil
}
@IBAction func qrcodeBtnAct(_ sender: Any) {
    qrImag.image = qrCode
}
Please suggest how to do this.
[1]: Is there a way to generate QR code image on iOS

You say you need a QR reader, but here you are solely talking about QR generation. Those are two different topics.
In terms of QR generation, you just need to put your four values in the QR payload. Right now you're passing a string literal, but you can update that string to include your four properties in whatever easily decoded format you want.
That having been said, when writing apps like this, you often want to be able to scan your QR code not only from within the app, but also from any QR scanning app, such as the built-in Camera app, and have it open your app. That influences how you might want to encode your payload.
The typical answer would be to make your QR code payload be a URL, using, for example, a universal link. See Supporting Universal Links in Your App. So, first focus on enabling universal links.
Once you’ve got the universal links working (not using QR codes at all, initially), the question then becomes how one would programmatically create the universal link that you’d supply to your QR generator routine, above. For that URLComponents is a great tool for encoding URLs. For example, see Swift GET request with parameters. Just use your universal link for the host used in the URL.
FWIW, while I suggest just encoding a universal link URL into your QR code, above, another option would be some other deep linking pattern, such as branch.io.
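Putting that together, here is a minimal sketch of building the payload with URLComponents from the four properties. The host "example.com" and the "/user" path are placeholders; substitute your own universal link domain. The resulting string is what you would pass to createQRFromString(str:).

```swift
import Foundation

// Placeholder values standing in for the userModel properties.
let userId = "12345"
let userType = "customer"
let addressId = "678"
let addressType = "home"

// Build a universal-link-style URL carrying the four parameters.
// "https://example.com/user" is a placeholder; use your associated domain.
var components = URLComponents(string: "https://example.com/user")!
components.queryItems = [
    URLQueryItem(name: "userId", value: userId),
    URLQueryItem(name: "userType", value: userType),
    URLQueryItem(name: "addressId", value: addressId),
    URLQueryItem(name: "addressType", value: addressType)
]

// URLComponents percent-encodes the query items for you.
let payload = components.url!.absoluteString
// payload can now be passed to createQRFromString(str:)
```

The scanning side can then parse the same URL back into its query items with URLComponents and look up each parameter by name.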

Related

Use Images in a swift framework

I need a picture to appear in a framework. The only way I found required knowing the name of the app the framework is used in. Is there another way to get assets into your framework?
(For the record: my background search didn't help.)
Almost 5 years ago I posted this answer. It contains two pieces of code to pull out an asset from a Framework's bundle. The key piece of code is this:
public func returnFile(_ resource: String, _ fileName: String, _ fileType: String) -> String {
    let identifier = "com.companyname.appname" // replace with framework bundle identifier
    let fileBundle = Bundle(identifier: identifier)
    let filePath = (fileBundle?.path(forResource: resource, ofType: "bundle"))! + "/" + fileName + "." + fileType
    do {
        return try String(contentsOfFile: filePath)
    }
    catch let error as NSError {
        return error.description
    }
}
So what if your framework, which needs to know two things (the app bundle and light/dark mode), tweaked this code? Move identifier out so it is accessible to the app rather than local to this function. Then create either a new variable (I think this is the best way) or a new function to work with the correct set of assets based on light or dark mode.
Now your apps can import your framework and set things up appropriately in its consumers. (I haven't tried this, but in theory I think it should work.)
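That tweak might look something like this sketch. The names FrameworkAssets and AssetStyle, and the "-light"/"-dark" resource-bundle suffixes, are hypothetical, not part of the original answer; the point is only that the identifier and the mode become settable by the consuming app.

```swift
import Foundation

// Hypothetical appearance switch the consuming app can set.
public enum AssetStyle {
    case light, dark
}

public struct FrameworkAssets {
    // Moved out of the function so the consuming app can configure them.
    public static var bundleIdentifier = "com.companyname.appname"
    public static var style: AssetStyle = .light

    public static func returnFile(_ resource: String,
                                  _ fileName: String,
                                  _ fileType: String) -> String {
        // Pick a resource bundle per style, e.g. "Assets-light" vs "Assets-dark".
        let styledResource = "\(resource)-\(style == .dark ? "dark" : "light")"
        guard let bundle = Bundle(identifier: bundleIdentifier),
              let basePath = bundle.path(forResource: styledResource, ofType: "bundle") else {
            return "" // bundle or styled resource not found
        }
        let filePath = basePath + "/" + fileName + "." + fileType
        return (try? String(contentsOfFile: filePath)) ?? ""
    }
}
```

The app would set FrameworkAssets.bundleIdentifier and FrameworkAssets.style once at startup, and every later lookup resolves against the right bundle and appearance.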

Capture if a barcode was scanned inside application

I am developing an application that creates a barcode using CIFilter. The question I have is whether there is a way to capture, at the app level, every time the barcode is scanned. The barcode is a way for the device holder to redeem some sort of discount at different businesses. After it is scanned, I would want to hit an API that records that the barcode was scanned by the device holder, without having to tie into the businesses' systems. What would be the best approach for this, if there is one?
Just a snippet of how I'm creating these barcodes
func generateBarcode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Thanks
The short answer is: most likely not.
The screen that is displaying the barcode has (physically) no way of telling if it's being looked at or scanned. It only emits light, it doesn't receive any information.
I can only think of two ways of getting that information:
Use the sensors of the device, like the front camera, to determine if the screen is being scanned. But this is a very hard task since you have to analyze the video stream for signs of scanning (probably with machine learning of some sort). It would also require that the user gives permission to use the camera just for... some kind of feedback?
The scanner needs to somehow communicate with the device through some other means, like a local network or the internet. This requires an API of some sort, however.
Maybe it's enough for your use case to just track when the user opened the barcode inside the app, assuming this will most likely only happen when they let it be scanned.
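That last suggestion could be sketched like this. The endpoint "https://example.com/api/barcode-shown" and the "deviceToken" payload are placeholders for whatever your backend actually expects; the idea is simply to report when the barcode screen appears, as a proxy for it being scanned.

```swift
import UIKit

// Placeholder endpoint and identifier; substitute your real API.
class BarcodeViewController: UIViewController {

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // The barcode is on screen now, which is the closest observable
        // moment to "the user let it be scanned".
        reportBarcodeDisplayed()
    }

    // Fire-and-forget call telling your backend the barcode was shown.
    private func reportBarcodeDisplayed() {
        guard let url = URL(string: "https://example.com/api/barcode-shown") else { return }
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try? JSONSerialization.data(withJSONObject: ["deviceToken": "abc123"])
        URLSession.shared.dataTask(with: request).resume()
    }
}
```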

Using CoreML to classify NSImages

I'm trying to work with Xcode CoreML to classify images that are simply single digits or letters. To start out with, I'm just using .png images of digits. Using the Create ML tool, I built an image classifier (NOT including any Vision support stuff) and provided a set of about 300 training images and a separate set of 50 testing images. When I run this model, it trains and tests successfully and generates a model. Still within the tool, I access the model and feed it another set of 100 images to classify. It works properly, identifying 98 of them correctly.
Then I created a Swift sample program to access the model (from the Mac OS X single view template); it's set up to accept a dropped image file, access the model's prediction method, and print the result. The problem is that the model expects an object of type CVPixelBuffer and I'm not sure how to properly create this from NSImage. I found some reference code and incorporated it, but when I actually drag my classification images to the app it's only about 50% accurate. So I'm wondering if anyone has any experience with this type of model. It would be nice if there were a way to look at the Create ML source code to see how it processes a dropped image when predicting from the model.
The code for processing the image and invoking the model prediction method is:
// initialize the model
mlModel2 = MLSample() // MLSample is the model generated by the Create ML tool and imported into the project

// prediction logic for the image (included in a func)
let fimage = NSImage(contentsOfFile: fname) // fname is obtained from the dropped file
do {
    let fcgImage = fimage!.cgImage(forProposedRect: nil, context: nil, hints: nil)
    let imageConstraint = mlModel2?.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
    let featureValue = try MLFeatureValue(cgImage: fcgImage!, constraint: imageConstraint!, options: nil)
    let pxbuf = featureValue.imageBufferValue
    let mro = try mlModel2?.prediction(image: pxbuf!)
    if mro != nil {
        let mroLbl = mro!.classLabel
        let mroProb = mro!.classLabelProbs[mroLbl] ?? 0.0
        print(String(format: "M2 MLFeature: %@ %5.2f", mroLbl, mroProb))
        return
    }
}
catch {
    print(error.localizedDescription)
}
return
There are several ways to do this.
The easiest is what you're already doing: create an MLFeatureValue from the CGImage object.
My repo CoreMLHelpers has a different way to convert CGImage to CVPixelBuffer.
A third way is to get Xcode 12 (currently in beta). The automatically-generated class now accepts images instead of just CVPixelBuffer.
In cases like this it's useful to look at the image that Core ML actually sees. You can use the CheckInputImage project from https://github.com/hollance/coreml-survival-guide to verify this (it's an iOS project but easy enough to port to the Mac).
If the input image is correct, and you still get the wrong predictions, then probably the image preprocessing options on the model are wrong. For more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/

Drag a file promise from a NSView onto the desktop or another application (macOS)

I need to be able to drag a file representation (a pdf in my case) from an NSView contained within my application onto the Desktop or another application that supports opening PDF files.
I spent a few hours trying to get this working in my own app, and I thought I'd add my solution here as there are a lot of half-solutions online, some of which rely on Obj-C extensions and others which are outdated and no longer supported. I'm hoping this post is the sort of post I wish I'd found during my own searches. I'm also aware of the minutiae of the system (for example, using file coordinators instead of a direct write), but this seems to be the minimum code required to implement it.
I've also provided a simple Swift NSView implementation.
The operation occurs in three main stages.
Basic overview
You'll need to make your view (or other control) a 'Data Provider' for the drag by implementing the NSPasteboardItemDataProvider protocol. The majority of the work required (other than starting the drag) occurs in the following protocol function.
func pasteboard(_ pasteboard: NSPasteboard?, item _: NSPasteboardItem, provideDataForType type: NSPasteboard.PasteboardType)
Starting the drag
This section occurs when the drag starts. In my case I was doing this in mouseDown(with:), but you could also do it in mouseDragged(with:), for example.
1. Tell the pasteboard that we will provide the file type UTI for the drop (kPasteboardTypeFilePromiseContent)
2. Tell the pasteboard that we will provide a file promise (kPasteboardTypeFileURLPromise) for the data type specified in (1)
Responding to the receiver asking for the content that we'll provide
kPasteboardTypeFilePromiseContent
- This is the first callback from the receiver of the drop (via pasteboard(pasteboard:item:provideDataForType:))
- The receiver is asking us what type (UTI) of file we will provide.
- Respond by setting the UTI (using setString() on the pasteboard object) for the type kPasteboardTypeFilePromiseContent
Responding to the receiver asking for the file
kPasteboardTypeFileURLPromise
- This is the second callback from the receiver (via pasteboard(pasteboard:item:provideDataForType:))
- The receiver is asking us to write the data to a file on disk.
- The receiver tells us the folder to write our content to (com.apple.pastelocation)
- Write the data to disk inside the folder that the receiver has told us about.
- Respond by setting the resulting URL of the written file (using setString() on the pasteboard object) for the type kPasteboardTypeFileURLPromise. Note that the format of this string needs to be file:///..., so absoluteString needs to be used.
And we're done!
Sample
// Some definitions to help reduce the verbosity of our code
let PasteboardFileURLPromise = NSPasteboard.PasteboardType(rawValue: kPasteboardTypeFileURLPromise)
let PasteboardFilePromiseContent = NSPasteboard.PasteboardType(rawValue: kPasteboardTypeFilePromiseContent)
let PasteboardFilePasteLocation = NSPasteboard.PasteboardType(rawValue: "com.apple.pastelocation")
class MyView: NSView {
    override func mouseDown(with event: NSEvent) {
        let pasteboardItem = NSPasteboardItem()

        // (1, 2) Tell the pasteboard item that we will provide both file and content promises
        pasteboardItem.setDataProvider(self, forTypes: [PasteboardFileURLPromise, PasteboardFilePromiseContent])

        // Create the dragging item for the drag operation
        let draggingItem = NSDraggingItem(pasteboardWriter: pasteboardItem)
        draggingItem.setDraggingFrame(self.bounds, contents: image())

        // Start the dragging session
        beginDraggingSession(with: [draggingItem], event: event, source: self)
    }
}
Then, in your Pasteboard Item Data provider extension...
extension MyView: NSPasteboardItemDataProvider {
    func pasteboard(_ pasteboard: NSPasteboard?, item _: NSPasteboardItem, provideDataForType type: NSPasteboard.PasteboardType) {
        if type == PasteboardFilePromiseContent {
            // The receiver will send this asking for the content type for the drop, to figure out
            // whether it wants to/is able to accept the file type (3).
            // In my case, I want to be able to drop a file containing PDF from my app onto
            // the desktop or another app, so add the UTI for PDF (4).
            pasteboard?.setString("com.adobe.pdf", forType: PasteboardFilePromiseContent)
        }
        else if type == PasteboardFileURLPromise {
            // The receiver is interested in our data, and is happy with the format that we told it
            // about during the kPasteboardTypeFilePromiseContent request.
            // The receiver has passed us a URL where we are to write our data to (5).
            // It is now waiting for us to respond with a kPasteboardTypeFileURLPromise.
            guard let str = pasteboard?.string(forType: PasteboardFilePasteLocation),
                  let destinationFolderURL = URL(string: str) else {
                // ERROR: receiver didn't tell us where to put the file?
                return
            }

            // Here we build the file destination using the receiver's destination URL
            // NOTE: you need to manage duplicate filenames yourself!
            let destinationFileURL = destinationFolderURL.appendingPathComponent("dropped_file.pdf")

            // Write your data to the destination file (6). Do better error handling here!
            let pdfData = self.dataWithPDF(inside: self.bounds)
            try? pdfData.write(to: destinationFileURL, options: .atomicWrite)

            // And finally, tell the receiver where we wrote our file (7)
            pasteboard?.setString(destinationFileURL.absoluteString, forType: PasteboardFileURLPromise)
        }
    }
}
If anyone finds issues with this, or it's completely incorrect, please let me know! It seems to work for my app at least.
As Willeke has pointed out, Apple has some sample code for using the (newer) NSFilePromiseProvider mechanism for drag and drop.
https://developer.apple.com/documentation/appkit/documents_files_and_icloud/supporting_drag_and_drop_through_file_promises
I wish my search had started at Apple's Developer pages instead of Google 🙃. Oh well! The sample provided is valid and still works, so if this post helps someone locate more cohesive info regarding drag and drop, then fantastic.
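For reference, a rough sketch of that newer NSFilePromiseProvider route, adapted to the PDF case above. This is an untested outline; the file name "dropped_file.pdf" and the nil drag image are placeholders, and the delegate replaces the NSPasteboardItem data-provider dance entirely.

```swift
import AppKit

class PromiseView: NSView {
    override func mouseDown(with event: NSEvent) {
        // The promise provider carries both the content type and the
        // write-on-demand behaviour that we wired up by hand above.
        let provider = NSFilePromiseProvider(fileType: "com.adobe.pdf", delegate: self)
        let draggingItem = NSDraggingItem(pasteboardWriter: provider)
        draggingItem.setDraggingFrame(bounds, contents: nil) // supply a drag image here
        beginDraggingSession(with: [draggingItem], event: event, source: self)
    }
}

extension PromiseView: NSFilePromiseProviderDelegate {
    // The receiver asks what the promised file should be called.
    func filePromiseProvider(_ filePromiseProvider: NSFilePromiseProvider,
                             fileNameForType fileType: String) -> String {
        return "dropped_file.pdf"
    }

    // The receiver asks us to write the promised file to the URL it chose.
    func filePromiseProvider(_ filePromiseProvider: NSFilePromiseProvider,
                             writePromiseTo url: URL,
                             completionHandler: @escaping (Error?) -> Void) {
        do {
            try dataWithPDF(inside: bounds).write(to: url, options: .atomic)
            completionHandler(nil)
        } catch {
            completionHandler(error)
        }
    }
}

extension PromiseView: NSDraggingSource {
    func draggingSession(_ session: NSDraggingSession,
                         sourceOperationMaskFor context: NSDraggingContext) -> NSDragOperation {
        return .copy
    }
}
```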

How to show pdf data from a filestream

I am working against a REST API where you can make a call and then receive an image or a PDF. I am using URLSession.shared.dataTask to make the call. When the result is an image, the call is a success (though it takes quite a long time, more than 5 seconds) and I can show the image in a UIImageView. But when the result is a PDF, I don't know how to handle it. The API returns the image / PDF as a "filestream".
When I print the data to the console, it prints the size (bytes), and it's the same size as on the server, so in some way I have the correct data; I just don't know how to view the PDF.
I am using Swift 3, iOS 10, Xcode 8.
First of all, you may want to split your question into two parts; please edit it and ask the second part separately.
There are two parts to this topic:
1. Download the PDF and save it in the file system
2. Get the PDF saved in the file system and read it using UIWebView or UIDocumentInteractionController
So, I will explain the first one.
The first one can be done with the REST HTTP client Alamofire (Elegant HTTP Networking in Swift). There is no need to use URLSession for such a case, and you would have to write many more lines if you did. Alamofire is simple and easy, so I want you to try it. If you need URLSession instead, leave a comment.
So how to download pdf using Alamofire :
let destination: DownloadRequest.DownloadFileDestination = { _, _ in
    // .documentDirectory means it will be stored in the application's Documents folder
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = documentsURL.appendingPathComponent("data.pdf")
    return (fileURL, [.removePreviousFile, .createIntermediateDirectories])
}

Alamofire.download(urlString, to: destination).response { response in
    print(response)
    if response.error == nil, let filePath = response.destinationURL?.path {
        // do anything you want with filePath here.
        // This is where you do step 2 (read the file that you downloaded).
    }
}
This download procedure doesn't include requesting the download link with encoded parameters; it is just the simple way.
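For step 2, here is a minimal sketch of displaying the downloaded file. It uses WKWebView's loadFileURL(_:allowingReadAccessTo:) (available since iOS 9) as a stand-in for the UIWebView mentioned above; the "data.pdf" name matches the destination used in the download code.

```swift
import UIKit
import WebKit

class PDFViewController: UIViewController {
    let webView = WKWebView()

    override func viewDidLoad() {
        super.viewDidLoad()
        webView.frame = view.bounds
        view.addSubview(webView)

        // Rebuild the same file URL the download destination wrote to.
        let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                                    in: .userDomainMask)[0]
        let fileURL = documentsURL.appendingPathComponent("data.pdf")

        // WKWebView renders PDFs natively; grant read access to the folder.
        webView.loadFileURL(fileURL, allowingReadAccessTo: documentsURL)
    }
}
```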