Image segmentation in Flutter using DeepLab v3 or another method besides Google ML Kit - flutter

I developed an app to remove image backgrounds using Google ML Kit's image segmentation, but it does not give precise results. From what I have read, DeepLab v3 gives precise results; an example is linked below:
http://bggit.ihub.org.cn/p30597648/tensorflow/raw/43c7b99a10f083b8ab3e2327b7068a5b5b9a7e96/tensorflow/lite/g3doc/models/segmentation/images/segmentation.gif
I tried the DeepLab example from this project, but it does not give the right result; the output looks random:
https://github.com/shaqian/flutter_tflite
Please help me remove image backgrounds with precise results in Flutter.
I will be thankful to you.
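For reference, this is roughly what running the linked DeepLab v3 TFLite model involves, sketched with the TensorFlow Lite Java Interpreter that such Flutter plugins wrap. The 257x257 input size, 21 PASCAL VOC classes, and float input format are assumptions based on the commonly published deeplabv3_257_mv_gpu model; check your model's metadata before relying on them.

import org.tensorflow.lite.Interpreter;

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DeepLabSegmenter {
    // Assumed model shape: 257x257 RGB float input, 257x257x21 class scores out.
    private static final int SIZE = 257;
    private static final int NUM_CLASSES = 21;

    private final Interpreter interpreter;

    public DeepLabSegmenter(File modelFile) {
        interpreter = new Interpreter(modelFile);
    }

    // Runs the model on normalized RGB pixels (rgb[y][x][channel], scaled the way
    // the model expects, commonly [-1, 1]) and returns a per-pixel class index map.
    public int[][] segment(float[][][] rgb) {
        ByteBuffer input = ByteBuffer.allocateDirect(SIZE * SIZE * 3 * 4)
                .order(ByteOrder.nativeOrder());
        for (int y = 0; y < SIZE; y++)
            for (int x = 0; x < SIZE; x++)
                for (int c = 0; c < 3; c++)
                    input.putFloat(rgb[y][x][c]);
        input.rewind();

        float[][][][] output = new float[1][SIZE][SIZE][NUM_CLASSES];
        interpreter.run(input, output);

        // Arg-max over the class scores gives each pixel's label
        // (class 15 is "person" in the PASCAL VOC label map).
        int[][] mask = new int[SIZE][SIZE];
        for (int y = 0; y < SIZE; y++) {
            for (int x = 0; x < SIZE; x++) {
                int best = 0;
                for (int c = 1; c < NUM_CLASSES; c++) {
                    if (output[0][y][x][c] > output[0][y][x][best]) best = c;
                }
                mask[y][x] = best;
            }
        }
        return mask;
    }
}

The quality of the cut-out then depends mostly on the model; the 257x257 mask has to be resized back to the original photo size before it is applied.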

Related

Image segmentation in Flutter using a TensorFlow Lite model

Can anybody suggest how I can extract the segmented image in Flutter? I was able to use a TensorFlow Lite model
for segmentation, but I don't know how to extract the segmented image from its output.
I also wanted to do image segmentation, so I searched around and found this open-source project:
https://github.com/kshitizrimal/Flutter-TFLite-Image-Segmentation
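To extract the segmented subject once you have a per-pixel class mask, one approach is to clear the alpha channel of every background pixel. The sketch below is plain Java and assumes the mask has already been resized to the image resolution and that class 15 ("person" in the PASCAL VOC label map) is the foreground to keep; plug in whatever class your model targets.

public class MaskExtractor {
    // argbPixels: the original image as packed 0xAARRGGBB pixels, row by row.
    // mask: per-pixel class indices at the same width/height as the image.
    // Returns a new pixel array where every non-target pixel is fully transparent.
    public static int[] extractForeground(int[] argbPixels, int[][] mask,
                                          int width, int height, int targetClass) {
        int[] out = new int[argbPixels.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = y * width + x;
                out[i] = (mask[y][x] == targetClass)
                        ? argbPixels[i]                  // keep foreground pixel
                        : (argbPixels[i] & 0x00FFFFFF);  // zero alpha = transparent background
            }
        }
        return out;
    }
}

The resulting pixels can then be written back into a bitmap or PNG with an alpha channel, which gives you the extracted segmented image.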

Is there any option in Leaflet to stitch a few images together and display them?

I want to write a web app for navigating technical illustrations (pan, zoom). For that I used a sample
that displays a single non-tiled image with Leaflet.
Now I need to take 3 or 4 images as input, stitch them together, and display the result so the technical illustrations can still be panned and zoomed.
Could you please help me find a solution?

zxing does not find readable barcode

I try to detect barcodes in webcam images and have problems finding small barcodes in a large image. zxing (svn trunk) fails to find the barcode in the first image (see the links below), even with try-harder. If I manually crop the image (second image), however, it has no problem extracting the information. So it should also be possible to find the barcode in the first image.
Is there a way to tell zxing to also find smaller barcodes? Or is there already some sliding window implemented, or maybe a gradient-based barcode localizer?
Original: no barcode found
http://postimg.org/image/lh9xf7lw1/
Cropped version: barcode extracted
http://postimg.org/image/e1kb49tw1/
Try different binarizers. You are probably using the one that computes a histogram over the whole image; the varied brightness causes the barcode itself to be treated as a more uniform patch of black. The hybrid binarizer is more localized and is likely to achieve the same effect as manually cutting the rest of the image away.
In my case this tip helped:
Android zxing NotFoundException
It suggests adding the TRY_HARDER hint:
// reader is a MultiFormatReader and bitmap the BinaryBitmap built from the frame
Hashtable<DecodeHintType, Object> decodeHints = new Hashtable<>();
decodeHints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
result = reader.decode(bitmap, decodeHints);
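Combining the two suggestions, a minimal desktop-Java sketch that decodes with the HybridBinarizer plus the TRY_HARDER hint might look like this (BufferedImageLuminanceSource comes from the zxing javase module; on Android you would build the LuminanceSource from the camera frame instead):

import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.LuminanceSource;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.EnumMap;
import java.util.Map;

public class BarcodeScan {
    public static Result decode(File imageFile) throws Exception {
        BufferedImage image = ImageIO.read(imageFile);
        LuminanceSource source = new BufferedImageLuminanceSource(image);

        // HybridBinarizer thresholds the image in local blocks, so a small barcode
        // in a large, unevenly lit frame is less likely to be washed out than with
        // the global-histogram binarizer.
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

        Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
        hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

        return new MultiFormatReader().decode(bitmap, hints);
    }
}

If the barcode is still missed, decoding a few cropped sub-regions of the frame in a loop is a simple stand-in for a sliding-window localizer, much like the manual crop that already works for you.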

Remove image background (ID card) and prepare it for OCR

I want to integrate OCR into an iOS application. I have found some helpful tutorials; in particular this article, How To: Compile and Use Tesseract (3.01) on iOS (SDK 5), helped me a lot. Now I can read plain text from any image that has a clear background, but I want to read information from an ID card that doesn't have a clear background at all!
I have also found some answers about removing the background on Stack Overflow, for example: Prepare complex image for OCR, Remove Background Color or Texture Before OCR Processing, and How to use OpenCV to remove non text areas from a business card?
But those solutions are not for iOS. I understand the steps, but I need an iOS example, and if it uses Core Image, that would be even better for me.
I have no problem on the OCR end; my problem is removing the background.
Initial image:
After removing the background, the image should look like this:
Can you refer me to an iOS example? Or is it possible to refer me to an iOS example that removes every color except black?
The best way to detect a card in a scene is to train a cascade classifier.
Training is not a very small project; the number of sample images should be more than 10K.
Once you have the trained cascade classifier, you can detect the card quickly.
The detection is very quick on iOS, but the Tesseract recognition is not very fast.
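Once a cascade has been trained, using it to crop the card region before OCR looks roughly like the sketch below. It uses OpenCV's Java bindings purely for illustration (the equivalent calls exist in the C++/Objective-C API used on iOS), and the cascade XML path stands in for whatever opencv_traincascade produced.

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class CardDetector {
    // Assumes the OpenCV native library has already been loaded
    // (System.loadLibrary(Core.NATIVE_LIBRARY_NAME) on desktop/Android).
    public static Mat cropCard(String imagePath, String cascadeXmlPath) {
        CascadeClassifier cascade = new CascadeClassifier(cascadeXmlPath);

        Mat image = Imgcodecs.imread(imagePath);
        Mat gray = new Mat();
        Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);

        // Detect card-like regions with the trained cascade.
        MatOfRect detections = new MatOfRect();
        cascade.detectMultiScale(gray, detections);

        Rect[] cards = detections.toArray();
        if (cards.length == 0) {
            return image; // nothing detected, hand the full frame to OCR
        }

        // Crop to the first detection so only the card is passed on to Tesseract.
        return new Mat(image, cards[0]);
    }
}

Cropping first also helps the OCR step, since Tesseract no longer has to cope with the cluttered background around the card.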

How to give a sketch effect to an image on iPhone?

I am implementing an iPhone application in which I have implemented the following functionality:
Select photo
Capture photo
Now I want to give that photo a sketch effect like this one.
How can I do this?
If I may once again recommend it, my open source GPUImage framework has a built-in filter that does just this. The GPUImageSketchFilter uses Sobel edge detection to highlight edges in black on images or video, leading to the exact same effect as seen in that application:
The above image was drawn from this answer, where I describe how that filter works and show a couple of other filter examples.
In fact, the SimplePhotoFilter example application that comes with the framework does exactly what you describe (capture a photo, apply a sketch filter to it, and save it to the photo library), so I'd start there if you want to get this up and running quickly.
OpenCV can also be used to give a sketch effect to an image on iPhone.
Refer to the iphone-how-to-convert-a-photo-into-a-pencil-drawing link for the details.
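As a rough illustration of the Sobel-based approach described above, here is a sketch using OpenCV's Java bindings (the same steps translate to the OpenCV build referenced in the linked iPhone answer): convert to grayscale, take Sobel gradients, then invert so the edges come out black on white.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class SketchEffect {
    public static Mat toSketch(String imagePath) {
        Mat color = Imgcodecs.imread(imagePath);
        Mat gray = new Mat();
        Imgproc.cvtColor(color, gray, Imgproc.COLOR_BGR2GRAY);

        // Sobel gradients in x and y; their combined magnitude marks the edges.
        Mat gradX = new Mat(), gradY = new Mat();
        Imgproc.Sobel(gray, gradX, CvType.CV_16S, 1, 0);
        Imgproc.Sobel(gray, gradY, CvType.CV_16S, 0, 1);
        Core.convertScaleAbs(gradX, gradX);
        Core.convertScaleAbs(gradY, gradY);
        Mat edges = new Mat();
        Core.addWeighted(gradX, 0.5, gradY, 0.5, 0, edges);

        // Invert so edges are black on a white background, like a pencil sketch.
        Mat sketch = new Mat();
        Core.bitwise_not(edges, sketch);
        return sketch;
    }
}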
Core Image filters are probably the best way to go.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CoreImageFilterReference/.