Capture if a barcode was scanned inside the application - Swift

I am developing an application that creates a barcode using CIFilter. My question: is there a way to capture, at the app level, every time the barcode is scanned? The barcode is a way for the device holder to redeem some sort of discount at different businesses. After it is scanned, I would want to hit an API that records that the barcode was scanned by the device holder, without having to tie into the businesses' systems. What would be the best approach for this, if there is one?
Here's just a snippet of how I'm creating these barcodes:
func generateBarcode(from string: String) -> UIImage? {
    // Code 128 encodes ASCII, so encode the payload accordingly.
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CICode128BarcodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        // Scale up the tiny default output so it renders crisply.
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Thanks

The short answer is: most likely not.
The screen that is displaying the barcode has (physically) no way of telling whether it's being looked at or scanned. It only emits light; it doesn't receive any information.
I can only think of two ways of getting that information:
Use the sensors of the device, like the front camera, to determine if the screen is being scanned. But this is a very hard task, since you'd have to analyze the video stream for signs of scanning (probably with machine learning of some sort). It would also require the user to grant camera permission just for... some kind of feedback?
The scanner needs to somehow communicate with the device through some other means, like a local network or the internet. That would be an API of some sort, however.
Maybe it's enough for your use case to just track when the user opens the barcode inside the app, assuming this will most likely only happen when they're about to have it scanned.
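If you go that route, a minimal sketch might look like this; the endpoint URL and JSON payload shape are hypothetical placeholders for your own API:

import Foundation

// Hypothetical sketch: tell your own API that the barcode was displayed.
// The URL and JSON keys are placeholders - substitute your real endpoint.
func reportBarcodeDisplayed(code: String) {
    guard let url = URL(string: "https://api.example.com/barcode-displayed") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["code": code])
    URLSession.shared.dataTask(with: request) { _, _, error in
        if let error = error {
            print("Failed to report barcode display: \(error)")
        }
    }.resume()
}

Call it from viewDidAppear (or wherever the barcode image is put on screen), so a display event is recorded even though the actual scan can't be detected.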

Related

Why does AVCaptureDevice.default return nil in SwiftUI?

I've been trying to create my own built-in camera, but it's crashing when I try to set up the device.
func setUp() {
    do {
        self.session.beginConfiguration()
        let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .front)
        let input = try AVCaptureDeviceInput(device: device!)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
        }
        if self.session.canAddOutput(self.output) {
            self.session.addOutput(self.output)
        }
        self.session.commitConfiguration()
    } catch {
        print(error.localizedDescription)
    }
}
When I execute the program, it crashes on the input line because I force-unwrap a nil value, which is device.
I have set the required authorization so that the app can use the camera, and it still ends up with a nil value.
If anyone has any clue how to solve the problem, it would be very appreciated.
You're asking for a builtInDualCamera, i.e. one that supports:
Automatic switching from one camera to the other when the zoom factor, light level, and focus position allow.
Higher-quality zoom for still captures by fusing images from both cameras.
Depth data delivery by measuring the disparity of matched features between the wide and telephoto cameras.
Delivery of photos from constituent devices (wide and telephoto cameras) from a single photo capture request.
And you're requiring it to be on the front of the phone. I don't know of any iPhone that has such a camera on the front (particularly one supporting that last feature). You likely meant to request position: .back like in the example code. But keep in mind that not all phones have a dual camera on the back, either.
You might want to use default(for:) to request the default "video" camera rather than requiring a specific type of camera. Alternatively, you can use an AVCaptureDevice.DiscoverySession to find a camera based on specific characteristics.
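A minimal sketch of both approaches (the device type and position are just examples; adjust them to your needs):

import AVFoundation

// Sketch: prefer the default video device; otherwise discover a front
// wide-angle camera. Adjust deviceTypes/position for your use case.
func findCamera() -> AVCaptureDevice? {
    if let device = AVCaptureDevice.default(for: .video) {
        return device
    }
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: .front)
    return discovery.devices.first
}

Then guard on the optional instead of force-unwrapping:

guard let device = findCamera() else { return }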

How to generate a QR code image with parameters in Swift?

I need to create a QR code image from a registered app user's 4 parameters, so that if I scan that QR code image from another device I can display that user's details. That is my requirement.
Here is how I get the registered user's details; I need to generate the QR code image with these parameters:
var userId = userModel?.userId
var userType = userModel?.userType
var addressId = userModel?.userAddrId
var addressType = userModel?.userAddrType
According to [this answer][1] I have created a QR code from a string, but I need to generate one with my registered user parameters.
Sample code with a string:
private func createQRFromString(str: String) -> CIImage? {
    let stringData = str.data(using: .utf8)
    let filter = CIFilter(name: "CIQRCodeGenerator")
    filter?.setValue(stringData, forKey: "inputMessage")
    filter?.setValue("H", forKey: "inputCorrectionLevel")
    return filter?.outputImage
}

var qrCode: UIImage? {
    if let img = createQRFromString(str: "Hello world program created by someone") {
        let someImage = UIImage(
            ciImage: img,
            scale: 1.0,
            orientation: UIImage.Orientation.down
        )
        return someImage
    }
    return nil
}

@IBAction func qrcodeBtnAct(_ sender: Any) {
    qrImag.image = qrCode
}
Please suggest a solution.
[1]: Is there a way to generate QR code image on iOS
You say you need a QR reader, but here you are solely talking about QR generation. Those are two different topics.
In terms of QR generation, you just need to put your four values in the QR payload. Right now you're passing a string literal, but you can update that string to include your four properties in whatever easily decoded format you want.
That having been said, when writing apps like this, you often want to be able to scan your QR code not only from within the app but also from any QR scanning app, such as the built-in Camera app, and have it open your app. That influences how you might want to encode your payload.
The typical answer would be to make your QR code payload be a URL, using, for example, a universal link. See Supporting Universal Links in Your App. So, first focus on enabling universal links.
Once you've got the universal links working (not using QR codes at all, initially), the question then becomes how one would programmatically create the universal link that you'd supply to your QR generator routine above. For that, URLComponents is a great tool for encoding URLs. For example, see Swift GET request with parameters. Just use your universal link for the host used in the URL.
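A minimal sketch of that encoding step; the domain, path, and query parameter names are placeholders for your own universal link:

import Foundation

// Sketch: build the QR payload as a universal link carrying the four values.
// "example.com" and the parameter names are placeholders.
func makeQRPayload(userId: String, userType: String, addressId: String, addressType: String) -> URL? {
    var components = URLComponents()
    components.scheme = "https"
    components.host = "example.com"   // your universal link domain
    components.path = "/user"
    components.queryItems = [
        URLQueryItem(name: "userId", value: userId),
        URLQueryItem(name: "userType", value: userType),
        URLQueryItem(name: "addressId", value: addressId),
        URLQueryItem(name: "addressType", value: addressType)
    ]
    return components.url
}

You would then pass makeQRPayload(...)?.absoluteString to the createQRFromString routine above.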
FWIW, while I suggest just encoding a universal link URL into your QR code, above, another option would be some other deep linking pattern, such as branch.io.

Using multiple audio devices simultaneously on macOS

My aim is to write an audio app for low-latency realtime audio analysis on macOS. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this in C. As I went down this path, it came to light that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connecting AVAudioUnits, digging down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device, as I would like to treat the devices individually.
Using Core Audio I can list the audio device IDs for all devices and change the default system output device (and can do the input device using similar methods). However, this only allows me one physical device, and it will always track the device set in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, propertySize, &deviceID)
}
I then found that kAudioUnitSubType_HALOutput is the way to go for addressing a specific, fixed device, which is only accessible through this subtype. I can create a component of this type using:
var outputHAL = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_HALOutput,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)
guard component != nil else {
    print("Can't get input unit")
    exit(-1)
}
However, I am confused about how you create a description of this component and then find the next device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to it?
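From digging around, kAudioOutputUnitProperty_CurrentDevice looks like it might be that property; an untested sketch of what I mean (someDeviceID is a placeholder for an ID from the device list above):

var audioUnit: AudioUnit?
AudioComponentInstanceNew(component!, &audioUnit)
var deviceID: AudioDeviceID = someDeviceID   // placeholder: an ID from the device list
if let unit = audioUnit {
    // Point this AUHAL instance at a specific device instead of the default.
    AudioUnitSetProperty(unit,
                         kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global,
                         0,
                         &deviceID,
                         UInt32(MemoryLayout<AudioDeviceID>.size))
}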
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach it to the engine. Finally, is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
I have been trying to research this for the last week but am nowhere closer to an answer. I have read up on channel mapping and everything I need to know further down the line, but getting at the audio at this lower level seems pretty undocumented, especially when using Swift.

Performance for retrieving PHAssets with moments

I am currently building a gallery of user photos into an app. So far I simply listed all the user's photos in a UICollectionView. Now I would like to add moment clusters as sections, similar to the iOS Photos app.
What I am doing (a bit simplified):
let momentClusters = PHCollectionList.fetchMomentLists(with: .momentListCluster, options: options)
momentClusters.enumerateObjects { (momentCluster, _, _) in
    let moments = PHAssetCollection.fetchMoments(inMomentList: momentCluster, options: nil)
    var assetFetchResults: [PHFetchResult<PHAsset>] = []
    moments.enumerateObjects { (moment, _, _) in
        let fetchResult = PHAsset.fetchAssets(in: moment, options: options)
        assetFetchResults.append(fetchResult)
    }
    // save assetFetchResults somewhere and use it in UICollectionView methods
}
Turns out this is A LOT more time-intensive than what I did before - up to a minute compared to about 2 seconds on my iPhone X (with a gallery of about 15k pictures). Obviously, this is unacceptable.
Why is the performance of fetching moments so bad, and how can I improve it? Am I using the API wrong?
I tried loading assets on-demand, but it's very difficult, since I then have to work with estimated item counts per moment and reload sections while the user is scrolling - I couldn't get this to work in a way that is satisfactory (smooth scrolling, no noticeable reloads).
Any help? How is this API supposed to work? Am I using it wrong?
Update / partial solution
So after playing around, it turns out that the following was a big part of the problem:
I was fetching assets using options with a sort descriptor:
let options = PHFetchOptions()
options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: true)]
options.predicate = NSPredicate(format: "mediaType = %d", PHAssetMediaType.image.rawValue)
let assets = PHAsset.fetchAssets(in: moment, options: options)
It seems sorting doesn't allow PhotoKit to make use of indices or caches it has internally. Removing the sortDescriptors speeds up the fetch significantly. It's still slower than before and any further tips are appreciated, but this makes loading times way more bearable.
Note that, without the sort descriptor, assets will be returned oldest first, but this can easily be fixed manually by retrieving assets in reversed order in cellForItemAt: (so the cell at 0,0 gets the last asset of the first moment).
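For example, something like this in cellForItemAt (assetFetchResults is the array saved above):

let fetchResult = assetFetchResults[indexPath.section]
// Index from the end so the newest asset appears first.
let asset = fetchResult.object(at: fetchResult.count - 1 - indexPath.item)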
Disclaimer: Performance-related answers are necessarily speculative...
You've described two extremes:
Prefetch all assets for all moments before displaying the collection
Fetch all assets lazily, use estimated counts and reload
But there are in-between options. For example, you can fetch only the moments at first, then let fetching assets per moment be driven by the collection view. But instead of waiting until moments are visible before fetching their contents, use the UICollectionViewDataSourcePrefetching protocol to decide what to start fetching before those items are due to be visible on screen.
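A rough sketch of that idea, assuming your data source keeps the moments fetch result and a per-section cache (GalleryViewController, moments, and assetCache are hypothetical names):

extension GalleryViewController: UICollectionViewDataSourcePrefetching {
    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        // Warm the per-moment asset cache before those sections scroll on screen.
        let sections = Set(indexPaths.map { $0.section })
        for section in sections where assetCache[section] == nil {
            let moment = moments.object(at: section)   // PHFetchResult<PHAssetCollection>
            assetCache[section] = PHAsset.fetchAssets(in: moment, options: nil)
        }
    }
}

Remember to set collectionView.prefetchDataSource so these callbacks actually fire.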

HTTP requests simultaneously in Swift

On starting my app I do some HTTP requests, some of them heavy (downloading images), and some heavy tasks with UIGraphics (for example, building an icon for a GMSMarker from two UIImages, and other operations with a graphics context). This takes some time, so I want to run all those tasks simultaneously. Can you show me the best way to do that?
On start I have to:
Download and write to local database all devices
Download and write to local database all geofences
Download and write to local database all users
Download and write to local database all positions
Download images for devices, users and geofences
Set up GMSMarkers for devices, users and geofences (after the images for those objects become available, for setting the marker icons)
Code of my login function (it works, but too slow):
func loginPressed(_ sender: UIButton) {
    guard
        let username = self.usernameTextField.text,
        let password = self.passwordTextField.text,
        !username.isEmpty,
        !password.isEmpty
    else {
        return
    }
    self.loginButton.isEnabled = false
    self.activityIndicator.startAnimating()
    WebService.shared.connect(email: username, password: password) { error, loggedUser in
        guard
            error == nil,
            let loggedUser = loggedUser
        else {
            self.showAlert(title: "Connection error", message: error?.localizedDescription ?? "", style: .alert)
            self.activityIndicator.stopAnimating()
            self.loginButton.isEnabled = true
            return
        }
        DB.users.client.insert(loggedUser)
        print("Start loading user photo...")
        loggedUser.loadPhoto() { image in
            if let image = image {
                loggedUser.photo = UIImageJPEGRepresentation(image, 0.0)
            }
            print("User photo loaded...")
            loggedUser.marker = UserMarker(loggedUser, at: CLLocation(latitude: 48.7193900, longitude: 44.50183))
            DB.users.client.modify(loggedUser)
        }
        DB.geofences.server.getAll() { geofences in
            DB.devices.server.getAll() { devices in
                DB.positions.server.getAll() { positions in
                    for device in devices {
                        device.loadPhoto() { image in
                            if let image = image {
                                device.photo = UIImageJPEGRepresentation(image, 0.0)
                            }
                            if let position = positions.findById(device.positionId) {
                                device.marker = DeviceMarker(device, at: position)
                            }
                            device.attributes.battery = device.lastKnownBattery(in: positions)
                        }
                    }
                    geofences.forEach({ $0.marker = GeofenceMarker($0) })
                    DB.geofences.client.updateAddress(geofences) { geofences in
                        if DEBUG_LOGS {
                            print("Geofences with updated addresses: ")
                            geofences.forEach({ print("\($0.name), \($0.address ?? "")") })
                        }
                        DB.devices.client.insert(devices)
                        DB.geofences.client.insert(geofences)
                        DB.positions.client.insert(positions)
                        self.activityIndicator.stopAnimating()
                        WebService.shared.addObserver(DefaultObserver.shared)
                        self.performSegue(withIdentifier: "toMapController", sender: self)
                    }
                }
            }
        }
    }
}
I'm not sure it's a good idea to post code snippets of all the classes and objects here; I hope you get the idea.
Any help would be appreciated.
P.S. In case you're wondering what DB is, it's the database, which consists of two parts (server side and client side) for each group of objects, so the first task is to get all objects from the server and write them to memory (into the client database).
P.P.S. I've changed the logic from "download everything" on login to "download all I need right now and download the rest later". So now, after I've got all devices, geofences and positions, I perform the segue to MapController, where I show all those objects. Just after login I show deviceMarkers (GMSMarker) with a default iconView. So the question is: after showing the map with all the objects, can I start downloading the device photos in the background and then refresh the markers with those photos (on the main thread, of course)?
Network requests are asynchronous by default, so what you are asking for is already the intended behavior.
You can, however, make your life much simpler by using a promises library such as then.
An example usage might look like:
login(email: "foo@bar.com", password: "pa$$w0rd")
    .whenAll(syncDevices(), syncGeofences(), syncUsers(), syncPositions(), syncImages())
    .onError { err in
        // process error
    }
    .finally {
        // update UI
    }
Your login function is slow because you are downloading AND writing to disk (as you mentioned in your question) "all" the data in your closures (geofences, devices and/or positions). Furthermore, all your operations are being executed on the main thread. You should never do I/O (networking, writing to disk) on the main thread, as this thread is used mainly for UI updates. You should offload the expensive tasks to another thread using GCD.
Also, it is worth mentioning that writing to disk is a relatively slow operation, especially if you are doing it for EVERY item you download.
I would recommend that you JUST download the data to be displayed, and then use an async task on a DispatchQueue (GCD) to persist the downloaded data to disk AFTER you've displayed it in your UI.
I am not sure what the DB.geofences.server.getAll() lines do for geofences, devices and positions (in regard to how you handle your networking or database fetching), so I cannot advise you on that. What I can advise you on is to structure your code the following way:
When the user logs in, do the validation against the (remote) DB and guard on a valid login. Then transition to your next view controller (don't execute all your logic there), since I can see that you are delegating way too much responsibility to your login action (your login button).
From that second view controller, get your data through networking calls asynchronously on another thread using a .userInitiated priority (to get results fast).
After all your networking operations finish, call DispatchQueue.main.async { ... } to update your UI asynchronously on the main thread with the data you just got.
AFTER you've displayed the downloaded data, you can persist it to your local DB, ideally using another DispatchQueue async task, as sketched below.
If anything I said above does not make any sense to you, please read AppCoda's article about GCD here and RayWenderlich's GCD article here. They will give you the basic knowledge about GCD in iOS. Once you've done that, come back and try to structure your code the way I recommended above.
I hope this helps!