There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()

    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)

        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }

    return nil
}
This code produces the following image:
But when we magnify any corner marker, we can see the difference in border thickness:
That is, not every scale value produces a correct final image. How can this be fixed?
The behavior you show is expected whenever you use a non-integer scale such as 5.3: the QR modules no longer land on exact pixel boundaries, so some rows and columns end up a pixel thicker than others. If consistent marker widths matter to you, use only integer scales such as 5 or 6.
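If the code has to fill a particular view size, one way to stay on integer scales is to compute the largest whole-number factor that still fits. Here is a minimal sketch (the targetSize parameter and helper name are mine, not from the question):

func generateQRCode(from string: String, targetSize: CGFloat) -> UIImage? {
    guard let data = string.data(using: .ascii),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let output = filter.outputImage else { return nil }

    // e.g. a 25x25-module code into a 160 pt view -> floor(160 / 25) = 6
    let scale = max(1, floor(targetSize / output.extent.width))
    return UIImage(ciImage: output.transformed(by: CGAffineTransform(scaleX: scale, y: scale)))
}

If the result must then exactly fill the view, let the image view do the final fitting (e.g. contentMode = .center or .scaleAspectFit) rather than the filter, so the modules themselves stay an exact number of pixels wide.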
extension CIImage {
    /// Combines the current image with the given image centered.
    func combined(with image: CIImage) -> CIImage? {
        guard let combinedFilter = CIFilter(name: "CISourceOverCompositing") else { return nil }
        let centerTransform = CGAffineTransform(translationX: extent.midX - (image.extent.size.width / 2),
                                                y: extent.midY - (image.extent.size.height / 2))
        combinedFilter.setValue(image.transformed(by: centerTransform), forKey: "inputImage")
        combinedFilter.setValue(self, forKey: "inputBackgroundImage")
        return combinedFilter.outputImage
    }
}

extension URL {
    /// Creates a QR code for the current URL in the given color.
    func qrImage(using color: UIColor, logo: UIImage? = nil) -> CIImage? {
        let tintedQRImage = qrImage?.tinted(using: color)
        guard let logo = logo?.cgImage else {
            return tintedQRImage
        }
        return tintedQRImage?.combined(with: CIImage(cgImage: logo))
    }

    /// Returns a black and white QR code for this URL.
    var qrImage: CIImage? {
        guard let qrFilter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
        let qrData = absoluteString.data(using: String.Encoding.ascii)
        qrFilter.setValue(qrData, forKey: "inputMessage")
        let qrTransform = CGAffineTransform(scaleX: 12, y: 12)
        return qrFilter.outputImage?.transformed(by: qrTransform)
    }
}
I created the QR code in the color I want, but when I add the logo, it is drawn on top of the QR code and the result looks very ugly, as in the first picture. How can I create a space in the middle of the QR code and put the logo in that space?
private func configureQrCode() {
    let qrCodeColor = UIColor.qrColor
    let qrCodeLogo = UIImage(named: "signinlogo")
    guard let qrURLImage = URL(string: QRCodeConstants.qrLink)?.qrImage(using: qrCodeColor, logo: qrCodeLogo) else { return }
    qrCode.image = UIImage(ciImage: qrURLImage)
}
(screenshot of the current, overlapped result omitted)
(screenshot: "I want it to look like this, but it's like the one above")
QR codes have an error correction feature that you can use to your advantage. Give the logo a white background and put it in the center of the QR code; just make sure the logo isn't too big, or the error correction won't be able to account for it. If you need an example to follow, use this: https://www.avanderlee.com/swift/qr-code-generation-swift/
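For instance, here is a hedged sketch building on the combined(with:) extension from the question (the padding value and the "H" correction level are my additions): raise the error correction level when generating the code, and composite the logo onto an opaque white tile before centering it, so no dark modules show through behind it.

// set when building the QR image, next to the "inputMessage" call
qrFilter.setValue("H", forKey: "inputCorrectionLevel") // highest (~30%) error correction

func logoOnWhiteTile(_ logo: CIImage, padding: CGFloat = 8) -> CIImage? {
    // an opaque white square slightly larger than the logo
    let tile = CIImage(color: .white).cropped(to: logo.extent.insetBy(dx: -padding, dy: -padding))
    // combined(with:) is the extension from the question: logo centered over the tile
    return tile.combined(with: logo)
}

// then, instead of compositing the raw logo:
// tintedQRImage?.combined(with: logoOnWhiteTile(CIImage(cgImage: logo)) ?? CIImage(cgImage: logo))

Keep the tile well under roughly a quarter of the QR code's area, or even level "H" correction will not be able to recover the covered modules.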
I have followed this tutorial (https://medium.com/@dominicfholmes/generating-qr-codes-in-swift-4-b5dacc75727c) to generate a QR code, but I am trying to generate a customized one, and one of the requirements is that the corner markers are circles instead of squares. Is this possible?
func generateQR(fromString: String) -> UIImage? {
    let data = fromString.data(using: String.Encoding.ascii)

    // Get a QR CIFilter
    guard let qrFilter = CIFilter(name: "CIQRCodeGenerator") else { return nil }

    // Input the data
    qrFilter.setValue(data, forKey: "inputMessage")

    // Get the output image
    guard let qrImage = qrFilter.outputImage else { return nil }

    // Scale the image
    let transform = CGAffineTransform(scaleX: 10, y: 10)
    let scaledQrImage = qrImage.transformed(by: transform)

    // Invert the colors
    guard let colorInvertFilter = CIFilter(name: "CIColorInvert") else { return nil }
    colorInvertFilter.setValue(scaledQrImage, forKey: "inputImage")
    guard let outputInvertedImage = colorInvertFilter.outputImage else { return nil }

    // Replace the black with transparency
    guard let maskToAlphaFilter = CIFilter(name: "CIMaskToAlpha") else { return nil }
    maskToAlphaFilter.setValue(outputInvertedImage, forKey: "inputImage")
    guard let outputCIImage = maskToAlphaFilter.outputImage else { return nil }

    // Do some processing to get the UIImage
    let context = CIContext()
    guard let cgImage = context.createCGImage(outputCIImage, from: outputCIImage.extent) else { return nil }
    let processedImage = UIImage(cgImage: cgImage)
    return processedImage
}
Here is an example of the expected result:
https://www.qrcode-monkey.com/img/qrcode-logo.png
It's been a while since I've used the Core Image QR code generator filter, CIQRCodeGenerator. Looking at the docs, it only takes a couple of parameters, inputMessage and inputCorrectionLevel. There's no facility other than those parameters to customize the QR code it generates.
I guess you could do image processing on the resulting image to find the "bullseye" corner squares to change them to rounded rectangles, but that would be a fair challenge.
Alternatively, you could write your own QR code rendering library. The image processing part isn't that complicated; it's figuring out the QR code standard and how to generate the dot pattern that would be hard. I haven't looked up the specs for QR codes, but they're public.
You might take an existing open source QR code library and modify it to create the rounded corner squares you are after. I think this is the option I would pursue if it were my task. With any luck you can find a well-written library that first generates the QR code as a grid of booleans, and then uses a separate function to render that grid of booleans into an image.
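To make that last step concrete, here is a hedged sketch of the "render the boolean grid yourself" idea (the modules parameter is assumed to come from whatever QR encoding library you pick; every dark module is drawn as a dot here, and you could special-case the three 7x7 finder patterns in the corners once you know which module indices they occupy):

func renderQRGrid(_ modules: [[Bool]], moduleSize: CGFloat = 10) -> UIImage {
    let side = CGFloat(modules.count) * moduleSize
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side))
    return renderer.image { ctx in
        UIColor.white.setFill()
        ctx.fill(CGRect(x: 0, y: 0, width: side, height: side))
        UIColor.black.setFill()
        for (row, line) in modules.enumerated() {
            for (col, isDark) in line.enumerated() where isDark {
                let rect = CGRect(x: CGFloat(col) * moduleSize,
                                  y: CGFloat(row) * moduleSize,
                                  width: moduleSize,
                                  height: moduleSize)
                ctx.cgContext.fillEllipse(in: rect) // circle instead of square
            }
        }
    }
}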
I'm applying several filters on an already cropped image, and I'd like a flipped duplicate of it next to the original. This would make it twice as wide.
Problem: How do you extend the bounds so both can fit? .cropped(to:CGRect) will stretch whatever original content was there. The reason there is existing content is because I'm trying to use applyingFilter as much as possible to save on processing. It's also why I'm cropping the original un-mirrored image.
Below is my CIImage "alphaMaskBlend2" with a compositing filter, and a transform applied to the same image that flips it and adjusts its position. sourceCore.extent is the size I want the final image.
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!]).cropped(to: sourceCore.extent)
I've played around with the position of the transform in LLDB. I found that with this filter being cropped, the leftmost image becomes stretched. If I use clamped(to:) with the same extent and then re-crop the image to that extent, the image is no longer distorted, but the bounds of the image are only half the width they should be.
The only way I could achieve this, is compositing against a background image (sourceCore) that would be the size of the two images combined, and then compositing the other image:
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: alphaMaskBlend2!,
                 kCIInputBackgroundImageKey: sourceCore])
alphaMaskBlend2 = alphaMaskBlend2?.applyingFilter("CISourceAtopCompositing",
    parameters: [kCIInputImageKey: (alphaMaskBlend2?.cropped(to: cropRect).transformed(by: scaledImageTransform))!,
                 kCIInputBackgroundImageKey: alphaMaskBlend2!])
The problem is that this is more expensive than necessary. I even verified it with benchmarking. It would make a lot more sense if I could do this with one composite.
While I can "flip" a CIImage I couldn't find a way to use an existing CIFilter to "stitch" it along side the original. However, with some basic knowledge of writing your own CIKernel, you can. A simple project of achieving this is here.
This project contains a sample image, and using CoreImage and a GLKView it:
flips the image by transposing the Y "bottom/top" coordinates for CIPerspectiveCorrection
creates a new "palette" image using CIConstantColor and then crops it using CICrop to be twice the width of the original
uses a very simple CIKernel (registered as "Stitch") to actually stitch it together
Here's the code to flip:
// use CIPerspectiveCorrection to "flip" on the Y axis
let minX:CGFloat = 0
let maxY:CGFloat = 0
let maxX = originalImage?.extent.width
let minY = originalImage?.extent.height
let flipFilter = CIFilter(name: "CIPerspectiveCorrection")
flipFilter?.setValue(CIVector(x: minX, y: maxY), forKey: "inputTopLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: maxY), forKey: "inputTopRight")
flipFilter?.setValue(CIVector(x: minX, y: minY!), forKey: "inputBottomLeft")
flipFilter?.setValue(CIVector(x: maxX!, y: minY!), forKey: "inputBottomRight")
flipFilter?.setValue(originalImage, forKey: "inputImage")
flippedImage = flipFilter?.outputImage
Here's the code to create the palette:
let paletteFilter = CIFilter(name: "CIConstantColorGenerator")
paletteFilter?.setValue(CIColor(red: 0.7, green: 0.4, blue: 0.4), forKey: "inputColor")
paletteImage = paletteFilter?.outputImage
let cropFilter = CIFilter(name: "CICrop")
cropFilter?.setValue(paletteImage, forKey: "inputImage")
cropFilter?.setValue(CIVector(x: 0, y: 0, z: (originalImage?.extent.width)! * 2, w: (originalImage?.extent.height)!), forKey: "inputRectangle")
paletteImage = cropFilter?.outputImage
Here's the code to register and use the custom CIFilter:
// register and use stitch filer
StitchedFilters.registerFilters()
let stitchFilter = CIFilter(name: "Stitch")
stitchFilter?.setValue(originalImage?.extent.width, forKey: "inputThreshold")
stitchFilter?.setValue(paletteImage, forKey: "inputPalette")
stitchFilter?.setValue(originalImage, forKey: "inputOriginal")
stitchFilter?.setValue(flippedImage, forKey: "inputFlipped")
finalImage = stitchFilter?.outputImage
All of this code (along with layout constraints) in the demo project is in viewDidLoad, so please, place it where it belongs!
Here's the code to (a) create a CIFilter subclass called Stitch and (b) register it so you can use it like any other filter:
func openKernelFile(_ name: String) -> String {
    let filePath = Bundle.main.path(forResource: name, ofType: ".cikernel")
    do {
        return try String(contentsOfFile: filePath!)
    }
    catch let error as NSError {
        return error.description
    }
}
let CategoryStitched = "Stitch"

class StitchedFilters: NSObject, CIFilterConstructor {
    static func registerFilters() {
        CIFilter.registerName(
            "Stitch",
            constructor: StitchedFilters(),
            classAttributes: [
                kCIAttributeFilterCategories: [CategoryStitched]
            ])
    }

    func filter(withName name: String) -> CIFilter? {
        switch name {
        case "Stitch":
            return Stitch()
        default:
            return nil
        }
    }
}
class Stitch: CIFilter {
    let kernel = CIKernel(source: openKernelFile("Stitch"))
    var inputThreshold: Float = 0
    var inputPalette: CIImage!
    var inputOriginal: CIImage!
    var inputFlipped: CIImage!

    override var attributes: [String: Any] {
        return [
            kCIAttributeFilterDisplayName: "Stitch",
            "inputThreshold": [kCIAttributeIdentity: 0,
                               kCIAttributeClass: "NSNumber",
                               kCIAttributeDisplayName: "Threshold",
                               kCIAttributeDefault: 0.5,
                               kCIAttributeMin: 0,
                               kCIAttributeSliderMin: 0,
                               kCIAttributeSliderMax: 1,
                               kCIAttributeType: kCIAttributeTypeScalar],
            "inputPalette": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Palette",
                             kCIAttributeType: kCIAttributeTypeImage],
            "inputOriginal": [kCIAttributeIdentity: 0,
                              kCIAttributeClass: "CIImage",
                              kCIAttributeDisplayName: "Original",
                              kCIAttributeType: kCIAttributeTypeImage],
            "inputFlipped": [kCIAttributeIdentity: 0,
                             kCIAttributeClass: "CIImage",
                             kCIAttributeDisplayName: "Flipped",
                             kCIAttributeType: kCIAttributeTypeImage]
        ]
    }

    override init() {
        super.init()
    }

    override func setValue(_ value: Any?, forKey key: String) {
        switch key {
        case "inputThreshold":
            inputThreshold = value as! Float
        case "inputPalette":
            inputPalette = value as! CIImage
        case "inputOriginal":
            inputOriginal = value as! CIImage
        case "inputFlipped":
            inputFlipped = value as! CIImage
        default:
            break
        }
    }

    @available(*, unavailable)
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override var outputImage: CIImage? {
        return kernel?.apply(
            extent: inputPalette.extent,
            roiCallback: { (index, rect) in return rect },
            arguments: [
                inputThreshold as Any,
                inputPalette as Any,
                inputOriginal as Any,
                inputFlipped as Any
            ])
    }
}
Finally, the CIKernel code:
kernel vec4 stitch(float threshold, sampler palette, sampler original, sampler flipped) {
    vec2 coord = destCoord();
    if (coord.x < threshold) {
        return sample(original, samplerCoord(original));
    } else {
        vec2 flippedCoord = coord - vec2(threshold, 0.0);
        vec2 flippedCoordinate = samplerTransform(flipped, flippedCoord);
        return sample(flipped, flippedCoordinate);
    }
}
Now, someone else may have something more elegant - maybe even using an existing CIFilter - but this works well. It only uses the GPU, so performance-wise it can be used in "real time". I added code that isn't strictly needed (registering the filter, using a dictionary to define attributes) to make it more of a teaching exercise for those new to creating CIKernels, so that anyone familiar with using CIFilters can consume it. If you focus on the kernel code, you'll recognize how similar to C it looks.
Last, a caveat. I am only stitching the (Y-axis) flipped image to the right of the original. You'll need to adjust things if you want something else.
I am trying to combine Core ML and ARKit in my project, using the Inception v3 model provided on Apple's website.
I am starting from the standard template for ARKit (Xcode 9 beta 3)
Instead of instantiating a new camera session, I reuse the session that has been started by the ARSCNView.
At the end of my viewDidLoad, I write:
sceneView.session.delegate = self
I then extend my viewController to conform to the ARSessionDelegate protocol (optional protocol)
// MARK: ARSessionDelegate
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        do {
            let prediction = try self.model.prediction(image: frame.capturedImage)
            DispatchQueue.main.async {
                if let prob = prediction.classLabelProbs[prediction.classLabel] {
                    self.textLabel.text = "\(prediction.classLabel) \(String(describing: prob))"
                }
            }
        }
        catch let error as NSError {
            print("Unexpected error ocurred: \(error.localizedDescription).")
        }
    }
}
At first I tried that code, but then noticed that Inception requires a pixel buffer of type Image<RGB, 299, 299>.
Although not recommended, I thought I would just resize my frame and then try to get a prediction out of it. I am resizing using this function (taken from https://github.com/yulingtianxia/Core-ML-Sample):
func resize(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let imageSide = 299
    var ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    let transform = CGAffineTransform(scaleX: CGFloat(imageSide) / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                                      y: CGFloat(imageSide) / CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
    ciImage = ciImage.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: imageSide, height: imageSide))
    let ciContext = CIContext()
    var resizeBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, imageSide, imageSide, CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &resizeBuffer)
    ciContext.render(ciImage, to: resizeBuffer!)
    return resizeBuffer
}
Unfortunately, this is not enough to make it work. This is the error that is caught:
Unexpected error ocurred: Input image feature image does not match model description.
2017-07-20 AR+MLPhotoDuplicatePrediction[928:298214] [core]
Error Domain=com.apple.CoreML Code=1
"Input image feature image does not match model description"
UserInfo={NSLocalizedDescription=Input image feature image does not match model description,
NSUnderlyingError=0x1c4a49fc0 {Error Domain=com.apple.CoreML Code=1
"Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)"
UserInfo={NSLocalizedDescription=Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)}}}
Not sure what I can do from here.
If there is any better suggestion to combine both, I'm all ears.
Edit: I also tried the resizePixelBuffer method from YOLO-CoreML-MPSNNGraph suggested by @dfd; the error is exactly the same.
Edit2: So I changed the pixel format to be kCVPixelFormatType_32BGRA (not the same format as the pixelBuffer passed in the resizePixelBuffer).
let pixelFormat = kCVPixelFormatType_32BGRA // line 48
I do not have the error anymore. But as soon as I try to make a prediction, the AVCaptureSession stops. Seems I am running into the same issue Enric_SA describes on the Apple developer forums.
Edit3: So I tried implementing rickster's solution. It works well with Inception v3. I wanted to try a feature observation (VNClassificationObservation). At this time, it is not working with TinyYOLO; the bounding boxes are wrong. Trying to figure it out.
Don't process images yourself to feed them to Core ML. Use Vision. (No, not that one. This one.) Vision takes an ML model and any of several image types (including CVPixelBuffer) and automatically gets the image to the right size and aspect ratio and pixel format for the model to evaluate, then gives you the model's results.
Here's a rough skeleton of the code you'd need:
var request: VNRequest!

func setup() throws {
    let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
    request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
}

func classifyARFrame() {
    guard let frame = session.currentFrame else { return }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up, // fix based on your UI orientation
                                        options: [:])
    try? handler.perform([request])
}

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
See this answer to another question for some more pointers.
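One practical addition (my own suggestion, not part of the answer above): perform the request off the main thread so the ARSession's frame delivery isn't blocked while the model evaluates, which also helps with the "session stops" symptom mentioned in Edit2. The queue name and helper are assumptions, not part of the answer:

let visionQueue = DispatchQueue(label: "com.example.vision") // name is arbitrary

func classify(_ frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right, // portrait UI; adjust for yours
                                        options: [:])
    visionQueue.async {
        do {
            try handler.perform([self.request])
        } catch {
            print("Vision request failed: \(error)")
        }
    }
}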
I was wondering if it is possible to binarize an image (convert to black and white only) with Core Image?
I made it with OpenCV and GPUImage, but would prefer to use Apple's Core Image, if that's possible.
You can use MetalPerformanceShaders for that, together with a CIImageProcessorKernel.
https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel
Here is the code of the class needed.
class ThresholdImageProcessorKernel: CIImageProcessorKernel {
    static let device = MTLCreateSystemDefaultDevice()

    override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String: Any]?, output: CIImageProcessorOutput) throws {
        guard
            let device = device,
            let commandBuffer = output.metalCommandBuffer,
            let input = inputs?.first,
            let sourceTexture = input.metalTexture,
            let destinationTexture = output.metalTexture,
            let thresholdValue = arguments?["thresholdValue"] as? Float else {
                return
        }

        let threshold = MPSImageThresholdBinary(
            device: device,
            thresholdValue: thresholdValue,
            maximumValue: 1.0,
            linearGrayColorTransform: nil)

        threshold.encode(
            commandBuffer: commandBuffer,
            sourceTexture: sourceTexture,
            destinationTexture: destinationTexture)
    }
}
And this is how you can use it:
let context = CIContext(options: nil)

if let binaryCIImage = try? ThresholdImageProcessorKernel.apply(
    withExtent: croppedCIImage.extent,
    inputs: [croppedCIImage],
    arguments: ["thresholdValue": Float(0.2)]) {
    if let cgImage = context.createCGImage(binaryCIImage, from: binaryCIImage.extent) {
        DispatchQueue.main.async {
            let resultingImage = UIImage(cgImage: cgImage)
            if resultingImage.size.width > 100 {
                print("Received an image \(resultingImage.size)")
            }
        }
    }
}
Yes. You have at least two options, CIPhotoEffectMono or a small custom CIColorKernel.
CIPhotoEffectMono:
func createMonoImage(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter!.setValue(CIImage(image: image), forKey: "inputImage")
    let outputImage = filter!.outputImage
    let cgimg = ciCtx.createCGImage(outputImage!, from: (outputImage?.extent)!)
    return UIImage(cgImage: cgimg!)
}
Note, I'm writing this quickly, you may need to tighten up things for nil returns.
CIColorKernel:
The FadeToBW GLSL kernel (a factor of 0.0 keeps full color, 1.0 removes all color):
kernel vec4 fadeToBW(__sample s, float factor) {
    vec3 lum = vec3(0.299, 0.587, 0.114);
    vec3 bw = vec3(dot(s.rgb, lum));
    vec3 pixel = s.rgb + (bw - s.rgb) * factor;
    return vec4(pixel, s.a);
}
The code below loads this from a file called FadeToBW.cikernel. You can also pass the source as a String directly instead of using the openKernelFile call.
The Swift code:
func createMonoImage(image: UIImage, inputColorFade: NSNumber) -> UIImage {
    let ciKernel = CIColorKernel(source: openKernelFile("FadeToBW"))
    let ciImage = CIImage(image: image)!
    let arguments: [Any] = [ciImage, inputColorFade]
    let outputImage = ciKernel!.apply(extent: ciImage.extent, arguments: arguments)
    let cgimg = ciCtx.createCGImage(outputImage!, from: outputImage!.extent)
    return UIImage(cgImage: cgimg!)
}
Again, add some guards, etc.
I have had success by converting it to greyscale using CIPhotoEffectMono or equivalent, and then using CIColorControls with a ridiculously high inputContrast number (I used 10000). This effectively makes it black and white and thus binarized. Useful for those who don't want to mess with custom kernels.
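In code, that trick is just two applyingFilter calls; a minimal sketch (the 10000 contrast value is the one mentioned above):

func binarized(_ input: CIImage) -> CIImage {
    return input
        .applyingFilter("CIPhotoEffectMono") // to greyscale first
        .applyingFilter("CIColorControls",
                        parameters: [kCIInputContrastKey: 10000]) // push everything to 0 or 1
}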
Also, you can adapt an example like Apple's "Chroma Key" filter, which keys on hue; instead of looking at hue, you just give the rules for binarizing the data (i.e. when to set RGB all to 1.0 and when to set them all to 0.0).
https://developer.apple.com/documentation/coreimage/applying_a_chroma_key_effect
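Here is a hedged sketch of that idea, adapted from the chroma-key recipe but thresholding on luminance instead of hue (the 0.299/0.587/0.114 weights and the default threshold are my choices, not from the Apple sample):

func binarizingColorCube(threshold: Float = 0.5, cubeSize: Int = 32) -> CIFilter? {
    var cubeData = [Float]()
    cubeData.reserveCapacity(cubeSize * cubeSize * cubeSize * 4)
    for b in 0..<cubeSize {
        for g in 0..<cubeSize {
            for r in 0..<cubeSize {
                let red = Float(r) / Float(cubeSize - 1)
                let green = Float(g) / Float(cubeSize - 1)
                let blue = Float(b) / Float(cubeSize - 1)
                let luma = 0.299 * red + 0.587 * green + 0.114 * blue
                let value: Float = luma >= threshold ? 1 : 0 // all white or all black
                cubeData.append(contentsOf: [value, value, value, 1])
            }
        }
    }
    let data = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(cubeSize, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}

Set the returned filter's inputImage and read its outputImage as with any other CIFilter.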
Found this thread from a Google search, and thought I'd mention that as of iOS 14 and macOS 11.0, Core Image includes CIColorThreshold and CIColorThresholdOtsu filters (the latter uses Otsu's method to calculate the threshold value from the image histogram).
See:
https://cifilter.io/CIColorThreshold/
https://cifilter.io/CIColorThresholdOtsu/
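Usage is the same as any other CIFilter; a short sketch (iOS 14 / macOS 11 or later):

let manual = inputImage.applyingFilter("CIColorThreshold",
                                       parameters: ["inputThreshold": 0.5])
let automatic = inputImage.applyingFilter("CIColorThresholdOtsu") // threshold picked from the histogram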
let outputImage = inputImage.applyingFilter("CIColorMonochrome",
parameters: [kCIInputColorKey: CIColor.white])
If you want to play with every one of the 250 CIFilters, please check this app out: https://apps.apple.com/us/app/filter-magic/id1594986951