Colorizing an image in Swift

I am trying to figure out some basic image operations (PNG and JPG) in Swift.
I have gotten to the point where I can successfully load a given image, but am unsure how to properly apply image adjustments that will stick.
Specifically I am trying to be able to trigger the following:
colorize (HSB adjustment)
invert colors
From the samples I could find online, it seems most code examples are for Objective-C, and I've been unable to get anything working in my current playground. It would seem from the documentation that I should be able to use filters (via Core Image), but that is where I get lost.
Can anyone point me to or show me a valid (simple) approach that accomplishes this in Swift?
Many thanks in advance!
*** EDIT ***
Here's the code I've got so far; it's working a bit better thanks to that link. However, I still run into a crash when trying to output the results (that line is commented out).
So far, all the examples I could find around the filtering code are Objective-C based.
import UIKit
var img = UIImage(named: "background.png")
var context = CIContext(options:nil)
var filter = CIFilter(name: "CIColorInvert")
filter.setValue(img, forKey: kCIInputImageKey)
//let newImg = filter.outputImage

Have you tried Google? "coreimage swift" gave me: http://www.raywenderlich.com/76285/beginning-core-image-swift
If this doesn't help, please post the code you've tried that didn't work.
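For reference, here is a minimal sketch (untested) of how a named Core Image filter could be applied to a UIImage and rendered back out; the helper name applyFilter and the "background.png" asset are just placeholders:

import UIKit
import CoreImage

// Minimal sketch (untested): apply a named Core Image filter to a UIImage
// and render the result back out through a CIContext.
func applyFilter(named filterName: String,
                 to image: UIImage,
                 parameters: [String: Any] = [:]) -> UIImage? {
    // CIFilter expects a CIImage, not a UIImage
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: filterName) else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    for (key, value) in parameters {
        filter.setValue(value, forKey: key)
    }
    let context = CIContext(options: nil)
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

// Usage, e.g. in a playground:
// let original = UIImage(named: "background.png")!
// let inverted = applyFilter(named: "CIColorInvert", to: original)
// let recolored = applyFilter(named: "CIHueAdjust", to: original,
//                             parameters: [kCIInputAngleKey: Float.pi / 2])

CIHueAdjust plus CIColorControls (inputSaturation, inputBrightness) cover the HSB-style adjustments, and one likely culprit for the crash in the edited code above is that the UIImage is handed to the filter directly instead of being converted to a CIImage first.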

Related

Output shape of mlmodel NNClassifier is Multiarray, VNClassificationObservation not working?

I need help deploying a Core ML model generated from GCP so it can be built and deployed in Xcode.
The app on my iPhone opens up and I can take a picture, but the model gets stuck at 'classifying...'.
This was initially due to the input image size (I changed it to 224*224), which I was able to fix using coremltools, but it looks like the output needs to be a dictionary, while the .mlmodel I have has a multiarray (float32) output. Also, GCP's Core ML export provided two files: a label.txt file and the .mlmodel.
So, I have two questions:
How do I leverage the label.txt file during the classification/Xcode build process?
My error happens at:
guard let results = request.results as? [VNClassificationObservation] else {
    fatalError("Model failed to load image")
}
Can I change my mlmodel output from a multiarray to a dictionary with labels to suit VNClassificationObservation, or can VNCoreMLFeatureValueObservation be used in some way with a multiarray output? I tried it, but the app on the iPhone gets stuck.
I'm not sure how to use the label file in Xcode. Any help is much appreciated; I have spent a day researching this online.
You will only get VNClassificationObservation when the model is a classifier. If you're getting an MLMultiArray as output, then your model is NOT a classifier according to Core ML.
It's possible to change your model into a classifier using coremltools. You need to write a Python script that:
loads the mlmodel
assigns the layers from model._spec.neuralNetwork to model._spec.neuralNetworkClassifier
adds two outputs, one for the winning class label & one for the dictionary with the probabilities for all class labels
fills in the class labels
saves the mlmodel
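Once the mlmodel has been re-saved as a classifier, the Vision request should start returning VNClassificationObservation results, and the label file is no longer needed separately because the class labels get baked into the model. A minimal sketch of the Vision side (untested; MyClassifier is a placeholder for whatever class Xcode generates from the converted model):

import Vision
import CoreML

// Untested sketch: running the converted classifier through Vision.
// "MyClassifier" is a placeholder for the generated model class.
func classify(_ cgImage: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: MyClassifier().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // With a real classifier, the results come back as VNClassificationObservation
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else {
            print("No classification results")
            return
        }
        // identifier is one of the class labels baked into the model
        print(top.identifier, top.confidence)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}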

Using CoreML to classify NSImages

I'm trying to work with Xcode Core ML to classify images that are simply single digits or letters. To start out with, I'm just using .png images of digits. Using the Create ML tool, I built an image classifier (NOT including any Vision support stuff) and provided a set of about 300 training images and a separate set of 50 testing images. When I run this model, it trains and tests successfully and generates a model. Still within the tool, I access the model and feed it another set of 100 images to classify. It works properly, identifying 98 of them correctly.
Then I created a Swift sample program to access the model (from the Mac OS X single view template); it's set up to accept a dropped image file, access the model's prediction method, and print the result. The problem is that the model expects an object of type CVPixelBuffer, and I'm not sure how to properly create this from NSImage. I found some reference code and incorporated it, but when I actually drag my classification images to the app, it's only about 50% accurate. So I'm wondering if anyone has any experience with this type of model. It would be nice if there were a way to look at the "Create ML" source code to see how it processes a dropped image when predicting from the model.
The code for processing the image and invoking the model's prediction method is:
// initialize the model
mlModel2 = MLSample()  // MLSample is the model generated by the Create ML tool and imported into the project

// prediction logic for the image (included in a func)
let fimage = NSImage(contentsOfFile: fname)  // fname is obtained from the dropped file
do {
    let fcgImage = fimage?.cgImage(forProposedRect: nil, context: nil, hints: nil)
    let imageConstraint = mlModel2?.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
    let featureValue = try MLFeatureValue(cgImage: fcgImage!, constraint: imageConstraint!, options: nil)
    let pxbuf = featureValue.imageBufferValue
    let mro = try mlModel2?.prediction(image: pxbuf!)
    if mro != nil {
        let mroLbl = mro!.classLabel
        let mroProb = mro!.classLabelProbs[mroLbl] ?? 0.0
        print(String(format: "M2 MLFeature: %@ %5.2f", mroLbl, mroProb))
        return
    }
} catch {
    print(error.localizedDescription)
}
return
There are several ways to do this.
The easiest is what you're already doing: create an MLFeatureValue from the CGImage object.
My repo CoreMLHelpers has a different way to convert CGImage to CVPixelBuffer.
A third way is to get Xcode 12 (currently in beta). The automatically-generated class now accepts images instead of just CVPixelBuffer.
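For reference, the kind of CGImage-to-CVPixelBuffer conversion the second option wraps looks roughly like this (a rough sketch, untested, and not the repo's actual code; it renders the image into a 32BGRA buffer at whatever size the model expects):

import CoreVideo
import CoreGraphics

// Rough sketch (untested): render a CGImage into a 32BGRA CVPixelBuffer
// at the width/height the model expects.
func pixelBuffer(from cgImage: CGImage, width: Int, height: Int) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pb: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs as CFDictionary, &pb) == kCVReturnSuccess,
          let buffer = pb else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // Draw the CGImage into the pixel buffer's memory via a CGContext
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue |
                                              CGBitmapInfo.byteOrder32Little.rawValue)
    else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}

Note that drawing into a context of the model's input size scales the image implicitly, and how that resizing is done can itself be a source of the accuracy drop described above.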
In cases like this it's useful to look at the image that Core ML actually sees. You can use the CheckInputImage project from https://github.com/hollance/coreml-survival-guide to verify this (it's an iOS project but easy enough to port to the Mac).
If the input image is correct, and you still get the wrong predictions, then probably the image preprocessing options on the model are wrong. For more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/

Pulling data from a GKLeaderboard

Okay, so I have spent a large amount of time searching the internet for help on this to no success, so I would like some help.
I am making a game with SpriteKit, and I have decided to implement my own leaderboard style rather than the clunky Game Center default. I have managed to log the user into GC, but cannot find the correct (and working) Swift 3 code for pulling information from the leaderboard. I want to pull the top 10 scores, along with the current user's score (if they aren't already in the top 10). The information I would like for each entry is position, username and score.
I know this is a fairly simple concept, but every tutorial online either uses the default GC view or relies on extremely old/outdated code which no longer works. I just need to know how to pull this information from the leaderboard; then I can process it all myself!
Thank you for any help in advance!
It seems like Apple doesn't have proper example code in Swift, but here's a Swift version loosely based on their Objective-C example:
let leaderboard = GKLeaderboard()
leaderboard.playerScope = .global
leaderboard.timeScope = .allTime
leaderboard.identifier = "YourIdentifier"
leaderboard.loadScores { scores, error in
    guard let scores = scores else { return }
    for score in scores {
        print(score.value)
    }
}
Note, this is just a translation from Apple's Objective-C code, and I haven't tested it with real data.
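To get closer to what the question asks for (the top 10 plus the local player's own entry, with position and score), the same idea can be extended roughly like this (again untested; "YourIdentifier" is a placeholder):

import GameKit

// Untested sketch: fetch the top 10 global all-time entries plus the local player's own entry.
func loadTopScores() {
    let leaderboard = GKLeaderboard()
    leaderboard.identifier = "YourIdentifier"
    leaderboard.playerScope = .global
    leaderboard.timeScope = .allTime
    leaderboard.range = NSRange(location: 1, length: 10)   // leaderboard positions 1 through 10

    leaderboard.loadScores { scores, error in
        if let error = error {
            print("Leaderboard error: \(error.localizedDescription)")
            return
        }
        for score in scores ?? [] {
            // rank is the entry's position on the leaderboard, value is the raw score
            print(score.rank, score.value)
        }
        // The local player's own entry, which may fall outside the requested range
        if let myScore = leaderboard.localPlayerScore {
            print("Local player:", myScore.rank, myScore.value)
        }
    }
}

The username for each entry should be available from the GKScore's player property (a GKPlayer exposing alias/displayName), but check its exact type and optionality against the SDK version you're targeting.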

SceneKit, CIFilter: CICategoryBlur filters render nothing visible

I'm trying to blur a node or create a blur effect in SceneKit.
let ship = scene.rootNode.childNodeWithName("ship", recursively: true)!
var ciFilters = [CIFilter]()
ciFilters.append(CIFilter(name: "CIGaussianBlur", withInputParameters: [kCIInputRadiusKey: 30])!)
ship.filters = ciFilters
This results in nothing visible (screenshot omitted here). I thought, OK, maybe I'm using the CIFilter incorrectly. However, CICMYKHalftone works fine:
ciFilters.append(CIFilter(name: "CICMYKHalftone", withInputParameters: ["inputWidth": 20])!)
I've also tried CIZoomBlur, which doesn't work either; CIPixellate, however, does.
Am I missing something fundamental here?
Update: I have tried every filter in CICategoryBlur and can't get any of them to work; however, every filter from the other CICategory groups I've tried does work.
Thanks.

Texture feature extraction using Gray Level Co-occurrence Matrix

I'm doing a project in liver tumor classification. I used this code and it gave some output. I don't know whether I'm correct.
Actually, I initially used the region growing method for liver segmentation, and from that I segmented the tumor using FCM. So, for this GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct.
I gave the parameters exactly as in the example. What do they actually mean? Do I need to change them for different images? If so, how should I choose the parameters? I'm completely new to this, so kindly guide me.
I got this output. Am I correct?
stats =
autoc: [1.857855266614132e+000 1.857955341199538e+000]
contr: [5.103143332457753e-002 5.030548650257343e-002]
corrm: [9.512661919561399e-001 9.519459060378332e-001]
corrp: [9.512661919561385e-001 9.519459060378338e-001]
cprom: [7.885631654779597e+001 7.905268525471267e+001]
cshad: [1.219440700252286e+001 1.220659371449108e+001]
dissi: [2.037387269065756e-002 1.935418927908687e-002]
energ: [8.987753042491253e-001 8.988459843719526e-001]
entro: [2.759187341212805e-001 2.743152140681436e-001]
homom: [9.930016927881388e-001 9.935307908219834e-001]
homop: [9.925660617240367e-001 9.930960070222014e-001]
maxpr: [9.474275457490587e-001 9.474466930429607e-001]
sosvh: [1.847174384255155e+000 1.846913030238459e+000]
savgh: [2.332207337361002e+000 2.332108469591401e+000]
svarh: [6.311174784234007e+000 6.314794324825067e+000]
senth: [2.663144677055123e-001 2.653725436772341e-001]
dvarh: [5.103143332457753e-002 5.030548650257344e-002]
denth: [7.573115918713391e-002 7.073380266499811e-002]
inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
inf2h: [5.643539051044213e-001 5.661543271625117e-001]
indnc: [9.980238521073823e-001 9.981394883569174e-001]
idmnc: [9.993275086521848e-001 9.993404634013308e-001]
Kindly guide me. Thank you
It's OK, but I don't think we usually need all this extra information. I usually prefer to use the following code:
GLCM2 = graycomatrix(img,'Offset',[1 1]);
stats = graycoprops(GLCM2);
I hope it will help you.