I'm a newbie with iOS. I'm learning Swift and skipped Objective-C.
Currently I'm writing a demo with Swift and Xcode 6.1 that can scan QR codes and barcodes from the camera or from an image picked from the photo library.
Before this I tried the ZBar SDK, but it produced an error I couldn't fix. I posted about it in Scan qrcode and barcode from camera and image which picked from image library in swift, but nobody answered.
Now I'm trying ZXingObjC (https://github.com/TheLevelUp/ZXingObjC) to scan QR codes and barcodes from images and the camera. I read its usage notes and tried to convert the Objective-C to Swift, but I get an error and don't know how to fix it.
Here is my code:
import UIKit
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
@IBOutlet weak var lblResult: UILabel!
@IBOutlet weak var imgView: UIImageView!
var imagePicker = UIImagePickerController()
override func viewDidLoad() {
super.viewDidLoad()
imagePicker.delegate = self
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
@IBAction func scanCode(sender: AnyObject) {
imagePicker.sourceType = .PhotoLibrary
imagePicker.allowsEditing = false
presentViewController(imagePicker, animated: true, completion: nil)
}
func imagePickerController(picker: UIImagePickerController!, didFinishPickingMediaWithInfo info: NSDictionary!) {
var tempImage:UIImage = info[UIImagePickerControllerOriginalImage] as UIImage
imgView.contentMode = .ScaleAspectFit
imgView.image = tempImage
dismissViewControllerAnimated(true, completion: nil)
//====> Objective-C code <=====
/*
ZXLuminanceSource *source = [[[ZXCGImageLuminanceSource alloc] initWithCGImage:imageToDecode] autorelease];
ZXBinaryBitmap *bitmap = [ZXBinaryBitmap binaryBitmapWithBinarizer:[ZXHybridBinarizer binarizerWithSource:source]];
NSError *error = nil;
ZXDecodeHints *hints = [ZXDecodeHints hints];
ZXMultiFormatReader *reader = [ZXMultiFormatReader reader];
ZXResult *result = [reader decode:bitmap
hints:hints
error:&error];
if (result) {
}
*/
//====> Converted to Swift; the error happens here <=====
let source: ZXLuminanceSource = ZXCGImageLuminanceSource(initWithCGImage: tempImage)
let binazer: ZXHybridBinarizer = ZXHybridBinarizer(source: source)
let bitmap: ZXBinaryBitmap = ZXBinaryBitmap(binarizer: binazer)
var error: NSError?
var hints: ZXDecodeHints = ZXDecodeHints()
var reader: ZXMultiFormatReader = ZXMultiFormatReader()
var result: ZXResult = reader(bitmap, hints:hints, error: error)
if (result) {
lblResult.text = result.text;
}
}
}
I will be very grateful if someone can tell me why this error happens and how to fix it (please give detailed instructions, because I have only been learning Swift and iOS for 3 weeks and have no Objective-C background). Thanks.
Edited:
This code worked for me.
let source: ZXLuminanceSource = ZXCGImageLuminanceSource(CGImage: tempImage.CGImage)
let binazer = ZXHybridBinarizer(source: source)
let bitmap = ZXBinaryBitmap(binarizer: binazer)
var error: NSError?
let hints: ZXDecodeHints = ZXDecodeHints.hints() as ZXDecodeHints
var reader = ZXMultiFormatReader()
if let result = reader.decode(bitmap, hints: hints, error: &error) {
lblResult.text = result.text;
}
You're almost there — this should get you the rest of the way. Note the comments:
// initializers are imported without "initWith"
let source: ZXLuminanceSource = ZXCGImageLuminanceSource(CGImage: tempImage.CGImage)
let binazer = ZXHybridBinarizer(source: source)
let bitmap = ZXBinaryBitmap(binarizer: binazer)
var error: NSError?
var hints = ZXDecodeHints()
var reader = ZXMultiFormatReader()
// 1) you missed the name of the method, "decode", and
// 2) use optional binding to make sure you get a value
if let result = reader.decode(bitmap, hints: hints, error: &error) {
lblResult.text = result.text;
}
I've been trying to figure out how Core Data works with Swift. I don't think I'm grasping the concept properly. I get that I need to interact with the context to store data in the persistent container, but it seems that every time I press the save button, the data is stored as a brand-new row. I want it to update the existing row instead. Below is my code. Any help will be greatly appreciated. Thank you.
import UIKit
import CoreData
class ViewController: UIViewController {
var editNotes: Note?
let dataFilePath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
@IBOutlet weak var textView: UITextView!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
}
@IBAction func saveButton(_ sender: UIButton) {
print (dataFilePath)
print (sender.tag)
var new: Note?
if let note = editNotes {
new = note
} else {
new = Note(context: context)
}
new?.body = textView.text
new?.date = NSDate() as Date
do {
ad.saveContext()
self.dismiss(animated: true, completion: nil)
} catch {
print("cannot save")
}
}
}
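One likely fix, as a minimal sketch (assuming the Note entity and its Xcode-generated fetchRequest() from your model; names are illustrative, not your exact setup): fetch the existing note and mutate it, so Core Data updates the row instead of inserting a new one.
// Sketch only: update the first existing Note if there is one, else insert.
func saveNote(text: String, in context: NSManagedObjectContext) {
    let request: NSFetchRequest<Note> = Note.fetchRequest()
    request.fetchLimit = 1
    let note = (try? context.fetch(request))?.first ?? Note(context: context)
    note.body = text
    note.date = Date()
    do {
        try context.save()
    } catch {
        print("cannot save: \(error)")
    }
}
The key point is that editNotes must actually hold the previously saved object when the screen is reopened; otherwise the else branch in saveButton always runs and inserts a fresh row.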
I'm trying to get depth data from the camera in iOS 11 with AVDepthData, but when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.
So I tried setting up AVCaptureDepthDataOutputDelegate with an AVCaptureDepthDataOutput, but I don't know how to capture the depth photo.
Has anyone ever got an image from AVDepthData?
Edit:
Here's the code I tried:
// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate
@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!
var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?
@IBAction func capture(_ sender: Any) {
self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
self.previewLayer?.removeFromSuperlayer()
self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)
let depth_map = photo.depthData?.depthDataMap
print("depth_map:", depth_map) // is nil
}
func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
print("depth data") // never called
}
override func viewDidLoad() {
super.viewDidLoad()
self.captureSession = AVCaptureSession()
self.captureSession?.sessionPreset = .photo
self.sessionOutput = AVCapturePhotoOutput()
self.depthOutput = AVCaptureDepthDataOutput()
self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))
do {
let device = AVCaptureDevice.default(for: .video)
let input = try AVCaptureDeviceInput(device: device!)
if(self.captureSession?.canAddInput(input))!{
self.captureSession?.addInput(input)
if(self.captureSession?.canAddOutput(self.sessionOutput!))!{
self.captureSession?.addOutput(self.sessionOutput!)
if(self.captureSession?.canAddOutput(self.depthOutput!))!{
self.captureSession?.addOutput(self.depthOutput!)
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
self.previewLayer?.frame = self.image_view.bounds
self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
self.image_view.layer.addSublayer(self.previewLayer!)
}
}
}
} catch {}
self.captureSession?.startRunning()
}
I'm trying two things, one where the depth data is nil and one where I'm trying to call a depth delegate method.
Does anyone know what I'm missing?
First, you need to use the dual camera, otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
And keep a reference to your queue
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs in your viewDidLoad() method like this
if sessionOutput?.isDepthDataDeliverySupported {
sessionOutput?.isDepthDataDeliveryEnabled = true
depthDataOutput?.connection(with: .depthData)!.isEnabled = true
depthDataOutput?.isFilteringEnabled = true
outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
outputSynchronizer!.setDelegate(self, queue: self.dataOutputQueue)
}
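The synchronized frames then arrive through the AVCaptureDataOutputSynchronizerDelegate callback. A minimal sketch, assuming your class adopts that protocol and depthDataOutput is the output configured above (note that Apple documents the synchronizer for streaming outputs such as video and depth data outputs):
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Pull this frame's depth data out of the synchronized collection
    if let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!)
        as? AVCaptureSynchronizedDepthData, !syncedDepth.depthDataWasDropped {
        let depthData = syncedDepth.depthData // AVDepthData for this frame
        print("depth frame at", syncedDepth.timestamp, depthData.depthDataType)
    }
}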
I would recommend watching WWDC session 507 - they also provide a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
To give more details to @klinger's answer, here is what you need to do to get the depth data for each pixel. I wrote some comments, hope it helps!
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
//## Convert Disparity to Depth ##
let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
let depthDataMap = depthData.depthDataMap //AVDepthData -> CVPixelBuffer
//## Data Analysis ##
// Useful data
let width = CVPixelBufferGetWidth(depthDataMap) //768 on an iPhone 7+
let height = CVPixelBufferGetHeight(depthDataMap) //576 on an iPhone 7+
CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
// Convert the base address to a safe pointer of the appropriate type
let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)
// Read the data (returns a value of type Float)
// The buffer is row-major: for a pixel (x, y), the index is y * width + x,
// with x in 0..<width and y in 0..<height
let distanceAtXYPoint = floatBuffer[y * width + x]
}
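And since the question asked about getting an image out of AVDepthData: a quick sketch of one way to visualize the map, by wrapping the pixel buffer in a CIImage (raw values are in meters, so you may need to normalize them into 0...1 before the result looks sensible):
// Sketch: make the depth map displayable
let ciImage = CIImage(cvPixelBuffer: depthDataMap)
let depthImage = UIImage(ciImage: ciImage)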
There are two ways to do this, and you are trying to do both at once:
Capture depth data along with the image. This is done by using the photo.depthData object from photoOutput(_:didFinishProcessingPhoto:error:). I explain why this did not work for you below.
Use an AVCaptureDepthDataOutput and implement depthDataOutput(_:didOutput:timestamp:connection:). I am not sure why this did not work for you, but implementing depthDataOutput(_:didDrop:timestamp:connection:reason:) might help you figure out why.
I think that #1 is a better option, because it pairs the depth data with the image. Here's how you would do that:
@IBAction func capture(_ sender: Any) {
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
settings.isDepthDataDeliveryEnabled = true
self.sessionOutput?.capturePhoto(with: settings, delegate: self)
}
// ...
override func viewDidLoad() {
// ...
self.sessionOutput = AVCapturePhotoOutput()
self.sessionOutput?.isDepthDataDeliveryEnabled = true // only valid once the output is attached to a session that supports depth (check isDepthDataDeliverySupported)
// ...
}
Then, depth_map shouldn't be nil. Make sure to read both this and this (separate but similar pages) for more information about obtaining depth data.
For #2, I'm not quite sure why depthDataOutput(_:didOutput:timestamp:connection:) isn't being called, but you should implement depthDataOutput(_:didDrop:timestamp:connection:reason:) to see if depth data is being dropped for some reason.
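A sketch of that delegate method:
func depthDataOutput(_ output: AVCaptureDepthDataOutput, didDrop depthData: AVDepthData,
                     timestamp: CMTime, connection: AVCaptureConnection,
                     reason: AVCaptureOutput.DataDroppedReason) {
    // If this fires, depth frames are being produced but discarded (e.g. late processing)
    print("dropped depth data, reason:", reason.rawValue)
}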
The way you initialize your capture device is not right.
You should use the dual camera mode.
In Objective-C it looks like this:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInDualCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];
I'm trying to read a QR code from a static image using Swift.
I can easily read it using a video source although it seems to be very different for images and I can't find too many resources online for this.
Any help appreciated, thanks.
You can make a great QR code scanner using ZXingObjC. It's a barcode image processing library designed to be used on both iOS devices and in Mac applications. It scans from live video or from images in your photo library and supports all the major barcode formats.
This is only to get you started in the right direction. You'll need more methods to set up the camera etc. ZXingObjC includes sample projects and there are camera set up solutions all over SO so it's pretty straight forward.
You'll need to install the ZXingObjC pod (pod 'ZXingObjC') and create a bridging-header.h file, of course, to be able to use the ZXingObjC library from Swift.
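The bridging header itself only needs the one import, something like:
// bridging-header.h
#import <ZXingObjC/ZXingObjC.h>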
ViewController.swift
import UIKit
class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
@IBOutlet weak var labelOutput: UILabel!
@IBOutlet weak var QRImage: UIImageView!
var imagePicker = UIImagePickerController()
// imagePicker delegate is itself (UIImagePickerController)
override func viewDidLoad() {
super.viewDidLoad()
imagePicker.delegate = self
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
}
@IBAction func scanQRCode(sender: AnyObject) {
imagePicker.sourceType = .PhotoLibrary
imagePicker.allowsEditing = false
presentViewController(imagePicker, animated: true, completion: nil)
}
// set up the picker
// initialize luminance source, scanning algorithm, decoding of bitmap, reader helpers, decoder
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
let placeHolderImage:UIImage = info[UIImagePickerControllerOriginalImage] as! UIImage
QRImage.contentMode = .ScaleAspectFit
QRImage.image = placeHolderImage
dismissViewControllerAnimated(true, completion: nil)
let luminanceSource: ZXLuminanceSource = ZXCGImageLuminanceSource(CGImage: placeHolderImage.CGImage)
let binarizer = ZXHybridBinarizer(source: luminanceSource)
let bitmap = ZXBinaryBitmap(binarizer: binarizer)
let hints: ZXDecodeHints = ZXDecodeHints.hints() as! ZXDecodeHints
let QRReader = ZXMultiFormatReader()
// throw/do/catch and all that jazz
do {
let result = try QRReader.decode(bitmap, hints: hints)
labelOutput.text = result.text
} catch let err as NSError {
print(err)
}
}
// Conform to ZXCaptureDelegate
func captureResult(capture: ZXCapture!, result: ZXResult!) {
// do some stuff
return
}
}
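For the live-video scanning mentioned above, here is a rough sketch of the ZXCapture setup that feeds the captureResult delegate (names follow the ZXingObjC sample projects; treat it as a starting point rather than a drop-in solution):
var capture: ZXCapture!

func setupLiveCapture() {
    capture = ZXCapture()
    capture.camera = capture.back()  // rear camera; capture starts once a camera is set
    capture.layer.frame = view.bounds
    view.layer.addSublayer(capture.layer)
    capture.delegate = self          // results arrive in captureResult(_:result:)
}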
One note: as of this post there is a known initializer error in the library's ZXParsedResult.m file. After installing the library, the file's location in Xcode is: Project -> Pods -> ZXingObjC -> All -> ZXParsedResult.m
On line 29, change the Objective-C code
+ (id)parsedResultWithType:(ZXParsedResultType)type {
return [[self alloc] initWithType:type];
}
to
+ (id)parsedResultWithType:(ZXParsedResultType)type {
return [(ZXParsedResult *)[self alloc] initWithType:type];
}
This question already has answers here:
Swift 2 ( executeFetchRequest ) : error handling
(5 answers)
Closed 6 years ago.
I was experimenting with Xcode Core Data and I came across a problem.
Here is the link to the tutorial I was using: LINK
I also would like to mention that I tried googling this and have had no luck figuring it out for myself. Hopefully one of you guys could help me out here.
Error Line
The following line of code: var results:NSArray = context.executeFetchRequest(request, error: nil)
has been giving me the error "Extra argument 'error' in call".
My steps towards trying to fix this:
do try catch
use Xcode 7
My code:
import UIKit
import CoreData
class vcMain: UIViewController {
@IBOutlet var txtUsername: UITextField!
@IBOutlet var txtPassword: UITextField!
@IBAction func btnSave(){
let appDel:AppDelegate = (UIApplication.sharedApplication().delegate as! AppDelegate)
let context:NSManagedObjectContext = appDel.managedObjectContext
let newUser = NSEntityDescription.insertNewObjectForEntityForName("Users", inManagedObjectContext: context) as NSManagedObject
newUser.setValue("Test Username", forKey: "username")
newUser.setValue("Test Password", forKey: "password")
//saves
do {
try context.save()
} catch {}
print(newUser)
print("Object Saved.")
}
@IBAction func btnLoad(){
let appDel:AppDelegate = (UIApplication.sharedApplication().delegate as! AppDelegate)
let context:NSManagedObjectContext = appDel.managedObjectContext
let request = NSFetchRequest(entityName: "Users")
request.returnsObjectsAsFaults = false;
// currently being worked on to restore saved data
var results:NSArray = context.executeFetchRequest(request, error: nil)
if (results.count > 0) {
for res in results{
print(res)
}
} else {
print("0 results returned... Potential Error")
}
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
}
This API does not take an error parameter. The prototype in Swift is:
func executeFetchRequest(_ request: NSFetchRequest) throws -> [AnyObject]
This call can throw if there is any issue with the fetch, so you need to enclose it in a do/try/catch and handle the error as needed.
Change the line
var results:NSArray = context.executeFetchRequest(request, error: nil)
to
do {
    let results = try context.executeFetchRequest(request) as NSArray
    // use results here
} catch {
    print("Fetch failed: \(error)")
}
I want to be able to open an image in Swift. This is my first Swift project.
@IBAction func SelectFileToOpen(sender: NSMenuItem) {
var openPanel = NSOpenPanel();
openPanel.allowsMultipleSelection = false;
openPanel.canChooseDirectories = false;
openPanel.canCreateDirectories = false;
openPanel.canChooseFiles = true;
let i = openPanel.runModal();
if(i == NSOKButton){
print(openPanel.URL);
var lettersPic = NSImage(contentsOfURL: openPanel.URL!);
imageView.image = lettersPic;
}
}
Output of my NSLog when using the open panel
Optional(file:///Users/ethansanford/Desktop/BigWriting.png)
fatal error: unexpectedly found nil while unwrapping an Optional value
How can I allow the user to open a PNG file of their choice?
When I specify the same file in code, everything works well. Here is an example of indicating which file to open in code, without the open panel, acting as the user would:
let pictureURl = NSURL(fileURLWithPath: "///Users/ethansanford/Desktop/BigWriting.png");
var lettersPic = NSImage(contentsOfURL: pictureURl!);
imageView.image = lettersPic;
Is there a problem with the format of my URL or something? Any help would be appreciated.
Add a new file to your project (swift source file) and add this extension there
Xcode 9 • Swift 4
extension NSOpenPanel {
var selectUrl: URL? {
title = "Select Image"
allowsMultipleSelection = false
canChooseDirectories = false
canChooseFiles = true
canCreateDirectories = false
allowedFileTypes = ["jpg","png","pdf","pct", "bmp", "tiff"] // to allow only images, just comment out this line to allow any file type to be selected
return runModal() == .OK ? urls.first : nil
}
var selectUrls: [URL]? {
title = "Select Images"
allowsMultipleSelection = true
canChooseDirectories = false
canChooseFiles = true
canCreateDirectories = false
allowedFileTypes = ["jpg","png","pdf","pct", "bmp", "tiff"] // to allow only images, just comment out this line to allow any file type to be selected
return runModal() == .OK ? urls : nil
}
}
In your View Controller:
class ViewController: NSViewController {
@IBOutlet weak var imageView: NSImageView!
@IBAction func saveDocument(_ sender: NSMenuItem) {
print("SAVE")
}
@IBAction func newDocument(_ sender: NSMenuItem) {
print("NEW")
}
// connect your view controller to the first responder window adding the openDocument method
@IBAction func openDocument(_ sender: NSMenuItem) {
print("openDocument ViewController")
if let url = NSOpenPanel().selectUrl {
imageView.image = NSImage(contentsOf: url)
print("file selected:", url.path)
} else {
print("file selection was canceled")
}
}
}
Hmmm... I didn't see anything wrong necessarily with your code, so I test ran this code (selecting a PNG file on my desktop):
let openPanel = NSOpenPanel()
openPanel.allowsMultipleSelection = false
openPanel.canChooseDirectories = false
openPanel.canCreateDirectories = false
openPanel.canChooseFiles = true
let i = openPanel.runModal()
if(i == NSModalResponseOK){
print(openPanel.URL)
let lettersPic = NSImage(contentsOfURL: openPanel.URL!)
print(lettersPic)
}
What I got was:
Optional(file:///Users/jwlaughton/Desktop/flame%2012-32.png)
Optional(<NSImage Size={1440, 900} Reps=(
"NSBitmapImageRep 0x6000000a4140 Size={1440, 900} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=1440x900 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x608000160cc0"
)>)
Which seems OK to me.
Maybe the issue is you need to say:
imageView.image = lettersPic!;
EDIT:
So to test further, I extended the test code a little to:
if(i == NSOKButton){
print(openPanel.URL);
var lettersPic = NSImage(contentsOfURL: openPanel.URL!);
print(lettersPic);
let view:NSImageView = NSImageView();
view.image = lettersPic
print(view)
}
Everything still works OK. Sorry I couldn't duplicate your problem.
This is the code that ended up working for me. I had to disable storyboards and make a class called Main (not to be confused with the special main.swift file, which replaces AppDelegate.swift). I also had to import Cocoa and specify that Main inherits from NSObject, so that I could make the connections in Interface Builder and add IBActions and IBOutlets in my Main.swift file.
//
// Main.swift
// Open
//
// Created by ethan sanford on 2015-01-18.
// Copyright (c) 2015 ethan D sanford. All rights reserved.
//
import Foundation
import Cocoa
class Main: NSObject{
@IBOutlet var imageWell: NSImageCell!
var myURL = NSURL(fileURLWithPath: "")
@IBAction func main(sender: AnyObject) {
imageWell.image = NSImage(byReferencingURL: myURL!)
}
@IBAction func open(sender: AnyObject) {
var openPanel = NSOpenPanel();
openPanel.allowsMultipleSelection = false;
openPanel.canChooseDirectories = false;
openPanel.canCreateDirectories = false;
openPanel.canChooseFiles = true;
let i = openPanel.runModal();
if(i == NSOKButton){
print(openPanel.URL);
myURL = openPanel.URL;
}
}
}
It works a little strangely: you have to choose your file and click Open, then hit the button connected to @IBAction func main(sender: AnyObject).