I did segmentation with the tflite library for Flutter and it works fine: I load the model, build an RGB [3, 224, 224] input, and run it through the interpreter using the tflite_flutter_helper library.
But how do I convert the output of my model, [1, 1, 224, 224], back to a TensorImage, or an Image in general? When I run
TensorImage resultImage = TensorImage.fromTensorBuffer(tensorBuffer);
or
TensorImage resultImage = TensorImage(tensorBuffer.getDataType());
resultImage.loadTensorBuffer(tensorBuffer);
I get the error message:
The shape of a RGB image should be (h, w, c) or (1, h, w, c), and channels representing R, G, B in order. The provided image shape is [1, 224, 224, 1]
I tried solving it by rearranging my output to the (1, h, w, c) shape shown in the error, i.e. [1, 224, 224, 1], but I get the same error. Here's my full code:
ImageProcessor imageProcessor = ImageProcessorBuilder()
    .add(ResizeOp(224, 224, ResizeMethod.NEAREST_NEIGHBOUR))
    .add(NormalizeOp(127.5, 127.5))
    .build();
SequentialProcessor<TensorBuffer> probabilityProcessor =
    TensorProcessorBuilder().add(DequantizeOp(0, 1 / 255)).build();
TensorImage tensorImage = TensorImage(TfLiteType.float32);
tensorImage.loadImage(img.Image.fromBytes(224, 224, image.readAsBytesSync()));
tensorImage = imageProcessor.process(tensorImage);
TensorBuffer tensorBuffer;
try {
  Interpreter interpreter = await Interpreter.fromAsset('models/enet.tflite');
  tensorBuffer = TensorBuffer.createFixedSize(
      interpreter.getOutputTensor(0).shape, interpreter.getOutputTensor(0).type);
  interpreter.run(tensorImage.buffer, tensorBuffer.getBuffer());
  tensorBuffer = probabilityProcessor.process(tensorBuffer);
  // ignore: invalid_use_of_protected_member
  tensorBuffer.resize(List<int>.of([1, 224, 224, 1]));
  TensorImage resultImage = TensorImage(tensorBuffer.getDataType());
  resultImage.loadTensorBuffer(tensorBuffer);
} catch (e) {
  print('Error loading model: ' + e.toString());
}
I furthermore tried reading the buffer from the tensorBuffer directly into a Flutter Image via
Image result = Image.memory(tensorBuffer.getBuffer().asUint8List());
which resulted in an invalid image data exception.
**** EDIT ****
I also tried the ImageConversions class from tflite_flutter_helper with
img.Image resultImage = ImageConversions.convertGrayscaleTensorBufferToImage(tensorBuffer);
but still no success...
I am using the opencv package (https://pub.dev/packages/opencv_4) but cannot get fingerprints clearly from the image.
I am using this function:
Uint8List? _byte = await Cv2.morphologyEx(
  pathFrom: CVPathFrom.GALLERY_CAMERA,
  pathString: photoFinger.path,
  operation: Cv2.COLOR_BayerGB2RGB,
  kernelSize: [30, 30],
);
I have Python code using the OpenCV lib, but I don't know how to convert it into Dart using the OpenCV Dart package.
The Python code is:
import cv2
import numpy as np

# Read the input image
img = cv2.imread('input_image.jpg', cv2.IMREAD_GRAYSCALE)
# Apply Gaussian blur to remove noise
img = cv2.GaussianBlur(img, (5, 5), 0)
# Apply adaptive thresholding to segment the fingerprint
img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 5)
# Apply morphological operations to remove small objects and fill in gaps
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel, iterations=1)
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel, iterations=1)
# Estimate the orientation field of the fingerprint
sobel_x = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
theta = cv2.phase(sobel_x, sobel_y)
# Apply non-maximum suppression to thin the ridges
theta_quantized = np.round(theta / (np.pi / 8)) % 8
thin = cv2.ximgproc.thinning(img, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)
# Extract minutiae points from the fingerprint
minutiae = cv2.FastFeatureDetector_create().detect(thin)
# Display the output image with minutiae points
img_with_minutiae = cv2.drawKeypoints(img, minutiae, None, color=(0, 255, 0), flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('Output Image', img_with_minutiae)
cv2.waitKey(0)
cv2.destroyAllWindows()
In CImg, I have split an RGBA image apart into multiple single-channel images, with code like:
CImg<unsigned char> input("foo.png");
CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1), b = input.get_channel(2), a = input.get_channel(3);
Then I try to swizzle the channel order:
CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
output.channel(0) = g;
output.channel(1) = b;
output.channel(2) = r;
output.channel(3) = a;
When I save the image out, however, it turns out grayscale, apparently based on the alpha channel value; for example, this input:
becomes this output:
How do I specify the image color format so that CImg saves into the correct color space?
Simply copying a channel does not work like that; a better approach is to copy the pixel data with std::copy:
std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
This results in an output image like:
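For completeness, here's the whole swizzle as a minimal self-contained program (a sketch under the assumption that foo.png is a 4-channel RGBA image; output.data(0, 0, 0, c) is just the pointer form of &output.atX(0, 0, 0, c)):
#include <algorithm>
#include "CImg.h"
using namespace cimg_library;

int main() {
    const CImg<unsigned char> input("foo.png");
    const CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1),
                              b = input.get_channel(2), a = input.get_channel(3);
    CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
    // CImg stores channels as contiguous planes, so each one is a single linear block.
    std::copy(g.begin(), g.end(), output.data(0, 0, 0, 0)); // G into channel 0
    std::copy(b.begin(), b.end(), output.data(0, 0, 0, 1)); // B into channel 1
    std::copy(r.begin(), r.end(), output.data(0, 0, 0, 2)); // R into channel 2
    std::copy(a.begin(), a.end(), output.data(0, 0, 0, 3)); // A into channel 3
    output.save("swizzled.png");
    return 0;
}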
I'm creating a mobile client for my object-detection server. I already have a perfectly-working python client which takes an image as input, sends it to the server in an HTTP request and receives prediction as a json response. I'm trying to achieve the same in Dart which I'm fairly new to.
The python code I have converts the input JPG image into a numpy array of RGB values in the following format (using a 5x4 image as an example)-
[
[[182, 171, 153], [203, 188, 169], [242, 214, 200], [255, 235, 219], [155, 111, 98]],
[[181, 171, 146], [204, 190, 164], [255, 237, 214], [177, 142, 120], [84, 42, 20]],
[[176, 168, 129], [218, 206, 166], [180, 156, 118], [91, 59, 21], [103, 64, 25]],
[[186, 180, 132], [166, 156, 107], [91, 68, 24], [94, 63, 17], [122, 84, 39]]
]
In my dart code, I have attempted to convert the image into a list of 8-bit unsigned integers using-
Uint8List inputImg = (await rootBundle.load("assets/test.jpg")).buffer.asUint8List();
It gives me a long array of over 800 ints for the same 5x4 image.
On testing it with two single-pixel images (one black and one white), a large section of the Uint8List seems to repeat for each. I isolated the differing sections of the array, and they do not correspond to the RGB values expected for those colors (I expected [0 0 0] and [255 255 255], but got something like 255, 0, 63, 250, 0, 255, 0 and 254, 254, 40, 3 for the two respectively).
I just need the RGB values in the image. Would appreciate someone pointing me in the right direction!
Images are normally compressed when they're stored. Based on the file extension, I'm guessing you're using JPEG encoding. This means the data stored in the assets/test.jpg file is not an array of colors. That would be an inefficient use of data storage if everything were done that way. To get that array of colors, you need to decode the image. This can be done with the image package.
To do this, first add the package as a dependency by adding the following to the pubspec.yaml:
dependencies:
image: ^3.0.4
You should follow the same method of obtaining the encoded image data. But you then need to decode it.
import 'package:image/image.dart';

final Uint8List inputImg = (await rootBundle.load("assets/test.jpg")).buffer.asUint8List();
final decoder = JpegDecoder();
final decodedImg = decoder.decodeImage(inputImg)!;
final decodedBytes = decodedImg.getBytes(format: Format.rgb);
decodedBytes contains a flat list of all your pixel values in RGB format. To get it into your desired format, just loop over the values and add them to a new list.
List<List<List<int>>> imgArr = [];
for (int y = 0; y < decodedImg.height; y++) {
  imgArr.add([]);
  for (int x = 0; x < decodedImg.width; x++) {
    int red = decodedBytes[y * decodedImg.width * 3 + x * 3];
    int green = decodedBytes[y * decodedImg.width * 3 + x * 3 + 1];
    int blue = decodedBytes[y * decodedImg.width * 3 + x * 3 + 2];
    imgArr[y].add([red, green, blue]);
  }
}
To do this, first add the package as a dependency by adding the following to the pubspec.yaml:
dependencies:
  image: ^3.2.0
Import the image library into your file:
import 'package:image/image.dart' as im;
Get the brightness of the image:
int getBrightness(File file) {
  im.Image? image = im.decodeImage(file.readAsBytesSync());
  final data = image?.getBytes(); // RGBA by default, so 4 bytes per pixel
  var color = 0;
  for (var x = 0; x < data!.length; x += 4) {
    final int r = data[x];
    final int g = data[x + 1];
    final int b = data[x + 2];
    final int avg = ((r + g + b) / 3).floor();
    color += avg;
  }
  return (color / (image!.width * image.height)).floor();
}
Check whether the selected image is dark or light:
bool isImageDark(int brightness) {
  return brightness <= 127;
}
Use the above methods like this:
int brightness = getBrightness(file);
bool isDark = isImageDark(brightness);
Now you can change your icon color based on your background image.
I'm trying to use OpenCV, more specifically its HoughCircles, to detect and measure the pupil and iris. Currently I've been playing with some of the variables in the function, because it either returns 0 circles or an excessive amount. Below are the code and test image I'm using.
Code for measuring iris:
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::vector<cv::Vec3f> circles;
cv::cvtColor(eye2, eye1, CV_RGBA2GRAY);
cv::morphologyEx(eye1, eye1, 4, cv::getStructuringElement(cv::MORPH_RECT,cv::Size(3, 3)));
cv::threshold(eye1, eye1, 0, 255, cv::THRESH_OTSU);
eye1 = [self circleCutOut:eye1 Size:50];
cv::GaussianBlur(eye1, eye1, cv::Size(7, 7), 0);
cv::HoughCircles(eye1, circles, CV_HOUGH_GRADIENT, 2, eye1.rows/4);
Code for measuring pupil:
eye1 = [self increaseBlackPupil:eye1];
cv::Mat eye2 = cv::Mat::zeros(eye1.rows, eye1.cols, CV_8UC3);
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::threshold(eye2, eye1, 25, 255, CV_THRESH_BINARY);
cv::SimpleBlobDetector::Params params;
params.minDistBetweenBlobs = 75.0f;
params.filterByInertia = false;
params.filterByConvexity = false;
params.filterByCircularity = false;
params.filterByArea = true;
params.minArea = 50;
params.maxArea = 500;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);
blob_detector->create("SimpleBlob");
cv::vector<cv::KeyPoint> keypoints;
blob_detector->detect(eye1, keypoints);
I know the image is rough; I've also been trying to find a way to clean it up and make the edges clearer.
So, my question put plainly: what can I do, by adjusting the parameters of HoughCircles or by changing the images, to get the iris and pupil detected?
Thanks
OK, without experimenting too much, what I understand is that you've only applied a bilateral filter to the image before using the Hough circle detector.
In my opinion, you need to include a thresholding step in the process.
I took the sample image you provided in the post and ran it through the following steps:
conversion to greyscale
morphological gradient
thresholding
Hough circle detection
After the thresholding step, I got the following image for the left eye only:
Here's the code:
greyscale:
cvCvtColor(im_rgb, im_rgb, CV_RGB2GRAY);
morphology:
cv::morphologyEx(im_rgb, im_rgb, 4, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(size, size))); // 4 == cv::MORPH_GRADIENT
thresholding:
cv::threshold(im_rgb, im_rgb, low, high, cv::THRESH_OTSU);
hough circle detection:
cv::vector<cv::Vec3f> circles;
cv::HoughCircles(im_rgb, circles, CV_HOUGH_GRADIENT, 2, im_rgb.rows/4);
Now when I print:
NSLog(@"Found %ld circles", circles.size());
I get:
"Found 1 circles"
Hope this helps.
Basically I want to implement a color replacement feature for my paint application.
Below are the original and expected output.
Original:
After changing the wall color selected by the user, along with some threshold for replacement:
I have tried two approaches but could not get them working as expected.
Approach 1:
Queue-based flood fill algorithm for color replacement,
but with it I got the output below; it was terribly slow, and the wall shadow was not preserved.
Approach 2:
So I tried to look at another option and found the post below on SO:
How to change a particular color in an image?
but I could not understand the logic, and I am not sure about my code implementation from step 3 onwards.
Please find below my code for each step, with my understanding.
1) Convert the image from RGB to HSV using cvCvtColor (we only want to change the hue).
IplImage *mainImage = [self CreateIplImageFromUIImage:[UIImage imageNamed:@"original.jpg"]];
IplImage *hsvImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
IplImage *threshImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
cvCvtColor(mainImage, hsvImage, CV_RGB2HSV);
2) Isolate a color with cvThreshold, specifying a certain tolerance (you want a range of colors, not one flat color).
cvThreshold(hsvImage, threshImage, 0, 100, CV_THRESH_BINARY);
3) Discard areas of color below a minimum size using a blob detection library like cvBlobsLib. This will get rid of dots of similar color in the scene. Do I need to specify the original image or the thresholded image here?
CBlobResult blobs = CBlobResult(threshImage, NULL, 0);
blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10);
4) Mask the color with cvInRangeS and use the resulting mask to apply the new hue.
I am not sure how this function helps with the color replacement, and I don't understand the arguments to be provided.
5) cvMerge the new image with the new hue with an image composed of the saturation and brightness channels that you saved in step one.
I understand that cvMerge will merge the three channels H, S, and V, but how can I use the output of the three steps above?
So basically I am stuck with the OpenCV implementation. If possible, please guide me on the OpenCV implementation, or suggest any other solution to try out.
Finally, I was able to achieve some of the desired output using the javacv code below, and the same was ported to OpenCV too.
This solution has two problems:
It doesn't do edge detection; I think I can achieve that using contours.
The replaced color has a flat hue and saturation, which should instead be set based on each source pixel's hue/saturation difference, but I'm not sure how to achieve that. Maybe I should use cvAddS instead of cvSet (a sketch of this idea follows the code below).
IplImage image = cvLoadImage("sample.png");
CvSize cvSize = cvGetSize(image);
IplImage hsvImage = cvCreateImage(cvSize, image.depth(), image.nChannels());
cvCvtColor(image, hsvImage, CV_BGR2HSV); // convert to HSV before splitting the channels
IplImage hChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage sChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage vChannel = cvCreateImage(cvSize, image.depth(), 1);
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
IplImage cvInRange = cvCreateImage(cvSize, image.depth(), 1);
CvScalar source = new CvScalar(72 / 2, 0.07 * 255, 66, 0); // source color to replace
CvScalar from = getScaler(source, false);
CvScalar to = getScaler(source, true);
cvInRangeS(hsvImage, from, to, cvInRange);
IplImage dest = cvCreateImage(cvSize, image.depth(), image.nChannels());
IplImage temp = cvCreateImage(cvSize, IPL_DEPTH_8U, 2);
cvMerge(hChannel, sChannel, null, null, temp);
cvSet(temp, new CvScalar(45, 255, 0, 0), cvInRange); // destination hue and sat
cvSplit(temp, hChannel, sChannel, null, null);
cvMerge(hChannel, sChannel, vChannel, null, dest);
cvCvtColor(dest, dest, CV_HSV2BGR);
cvSaveImage("output.png", dest);
Method for calculating the threshold:
CvScalar getScaler(CvScalar seed, boolean plus) {
  if (plus) {
    return CV_RGB(seed.red() + (seed.red() * thresold), seed.green() + (seed.green() * thresold), seed.blue() + (seed.blue() * thresold));
  } else {
    return CV_RGB(seed.red() - (seed.red() * thresold), seed.green() - (seed.green() * thresold), seed.blue() - (seed.blue() * thresold));
  }
}
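On the second problem: the cvAddS idea (shifting each masked pixel's hue by an offset instead of cvSet-ing a flat value) might look roughly like this in the C++ API. This is an untested sketch, and replaceWallHue / hueOffset are my own names:
#include <opencv2/opencv.hpp>
#include <vector>

// Shift the hue of pixels inside [from, to] by hueOffset, leaving saturation
// and value untouched so the shading on the wall is preserved.
cv::Mat replaceWallHue(const cv::Mat& bgr, const cv::Scalar& from,
                       const cv::Scalar& to, double hueOffset) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, from, to, mask);   // same role as cvInRangeS above
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);                 // ch[0] = H, ch[1] = S, ch[2] = V
    // cvAddS equivalent; note H is 0..179 for 8-bit images and cv::add
    // saturates rather than wraps, so large offsets need extra care.
    cv::add(ch[0], cv::Scalar(hueOffset), ch[0], mask);
    cv::merge(ch, hsv);
    cv::Mat out;
    cv::cvtColor(hsv, out, cv::COLOR_HSV2BGR);
    return out;
}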
I know this answer will be useful to someone someday.
Try this out in your viewDidLoad() override for iOS.
image in the code snippet below should come from your UIImageView.
The seed points are also fixed here; you can make them dynamic based on a user tap event (see the sketch after the code).
// seed, seed2 and seed3 are cv::Point values fixed beforehand (e.g. from tap locations)
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
imageView.image = [self UIImageFromCVMat:image];
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
try {
    if (seed.x > 0 && seed.y > 0) {
        cv::floodFill(image, mask, seed, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed2, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed3, cv::Scalar(50, 155, 0), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
    }
} catch (const cv::Exception& ex) {
    // ignore fill failures (e.g. a seed outside the image)
}
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [self UIImageFromCVMat:image];
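To make the seeds dynamic, one option is a small wrapper around cv::floodFill that takes the tap point; a sketch (fillAtTap is my own helper name, and it assumes the tap location has already been mapped from view to image coordinates):
#include <opencv2/opencv.hpp>

// Flood-fill at a tap location with the same (2, 2, 2) tolerance used above.
void fillAtTap(cv::Mat& image, cv::Point seed, const cv::Scalar& newColor,
               int tolerance = 2) {
    if (seed.x <= 0 || seed.y <= 0 || seed.x >= image.cols || seed.y >= image.rows)
        return; // ignore taps outside (or on the border of) the image
    // floodFill requires a mask two pixels wider and taller than the image.
    cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
    cv::floodFill(image, mask, seed, newColor, 0,
                  cv::Scalar(tolerance, tolerance, tolerance),
                  cv::Scalar(tolerance, tolerance, tolerance), 8);
}

In the view controller, the cv::Point would come from a UITapGestureRecognizer location scaled into the Mat's dimensions.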