I want to develop an app that does the following:
takes a photo
searches the database for a similar photo
Now the question: how can I do the second step with Flutter?
In other words, how can I compare two photos, where the first photo is in the database and the second has just been taken with the camera?
Let's split your problem into parts.
For image comparison, or as I understand your question, finding if an image exists inside another image, OpenCV is your best friend. The first thing is to get comfortable with OpenCV and your favorite language (Python, C#, Java).
Checking images for similarity with OpenCV
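To give a feel for what that involves, here is a minimal Python sketch of one common approach (ORB feature matching); the file names are placeholders and the thresholds need tuning for your photos:

import cv2

# Stored photo vs. freshly captured photo (paths are placeholders)
img1 = cv2.imread("stored.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("captured.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
_, des1 = orb.detectAndCompute(img1, None)
_, des2 = orb.detectAndCompute(img2, None)

# Match ORB descriptors and count the "good" (close) matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
good = [m for m in matches if m.distance < 40]
print("similar" if len(good) > 20 else "different")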
I'm not sure if you edited the question while I was writing this answer, or if I just misunderstood.
If you want to make the comparison inside your app, you'll need to understand how to invoke OpenCV from your Flutter application:
https://pub.dev/packages/opencv
If the comparison is not made on the phone but happens after the user uploads the photo to a server, then you need to create a REST endpoint to upload the picture, compare it with the other photos stored on the server (using the aforementioned OpenCV) and return the response to the user. To transmit the image to the server you could convert it to base64.
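As a rough illustration of that flow (Flask and the helper functions below are my own assumptions, not something prescribed here):

import base64
from flask import Flask, request, jsonify
import cv2
import numpy as np

app = Flask(__name__)

@app.route("/compare", methods=["POST"])
def compare():
    # The client sends the captured photo as a base64 string
    raw = base64.b64decode(request.json["image"])
    query = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_GRAYSCALE)
    # stored_photo_paths() and similarity() are hypothetical helpers that
    # list the server's photos and score them with OpenCV as sketched above
    best = max(stored_photo_paths(), key=lambda p: similarity(query, p))
    return jsonify({"match": best})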
Related
I've been playing around with making a draftjs plugin that lets the user paste in mixed text & image content from websites and have the images auto-uploaded to the server. I've quickly come to the realization that it's not easy, simply because of how many different sites use different kinds of counter-measures for copy/pasting images. Standard image tags in page content are no problem - easily grab the src and handle the file upload from the URL. However, many sites use all kinds of trickery to make this a pain. For example, some will only serve small thumbnails, requiring a GET request on the image with a hash key in order to retrieve a larger version. Others somehow seem to corrupt the image so that it's unreadable by the time it's been retrieved. Others still play with weird embed tags to mess with draftjs' image blocks.
But then I open up a Google Docs file, and find that when I copy any images into that from a website, there's never any troubles whatsoever. All the problematic websites that I'm finding myself having to write specific methods for retrieving from seem to be handled by Google Docs with ease.
Am I using completely the wrong approach by trying to retrieve images from a url? Does Google use a far superior approach (yes, I presume) - in which case, does anyone have any idea what that approach might be?
Is there a way to mass convert an image? We're experimenting with replacing imagemagick and taking the load off of our servers -- I've got a version working that just loops through the sizes and calls convert on the original image, making 23 copies of differing styles (sizes and crops). However, if the user leaves the page before all the conversions are done, the script stops and I end up missing a bunch of image styles.
Is there a good way to get around this with Filepicker.io? I'd really like to be able to just pass a list of options to the convert method and have it complete in the background.
Thanks in advance,
- Jeff
The best way to do this is either to use the /convert endpoint for on-the-fly conversion (https://developers.filepicker.io/docs/web/#fpurl-images) or to POST to our REST endpoint to create the converted images and store them in S3 via a server-side call.
To make it even easier to work with user content, we enable image post-processing. This way, regardless of what type of file a user uploads from the Cloud or their local device, you can be sure it's exactly the right size. To convert an image, take the filepicker URL and append /convert, along with query parameters specifying what you want to change. See the Docs »
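For illustration, building the converted URLs for a handful of sizes could look like the following; the file handle is a placeholder and the query parameter names (w, h, fit) are from memory, so check the Docs for the current ones:

sizes = [(1200, 800), (600, 400), (200, 200)]  # your 23 styles would go here
base = "https://www.filepicker.io/api/file/YOUR_FILE_HANDLE"
urls = [f"{base}/convert?w={w}&h={h}&fit=crop" for (w, h) in sizes]
# Each URL converts on the fly when it's first requested, so nothing has to
# finish before the user leaves the page.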
filepicker.io shows an example of a cropping tool on their front page. Could that be built into the picker itself?
We've discussed building it into the upload experience a la iOS, but think that the functionality is best done as a step after the upload. The demo on our front page is done using JCrop, and at some point we'll open-source the demo as a jquery plugin or similar.
There is a very cool iPhone app called Viddy where you can download filters to apply to videos.
How can they package filters outside the app and make them available to users as downloads?
One way would be to have an in-app purchase that's just a document that describes an image processing graph. (Think of a nodal graph representation for something like Shake or Nuke.) For example, a glow is often implemented as a blurred image mixed with the original image. You could create a document which describes that processing graph. Once you've downloaded such a document into your app, you can implement it using Core Image filters, or write your own using GLSL, or even just straight CPU processing.
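Here is a toy sketch of that idea; the document format and the names in it are made up, and Pillow stands in for the "straight CPU processing" option:

from PIL import Image, ImageFilter

# The "purchased document": blur, then mix the result back with the original
glow_graph = {"nodes": [{"op": "blur", "radius": 8},
                        {"op": "mix", "amount": 0.5}]}

def apply_graph(img, graph):
    original, out = img, img
    for node in graph["nodes"]:
        if node["op"] == "blur":
            out = out.filter(ImageFilter.GaussianBlur(node["radius"]))
        elif node["op"] == "mix":
            out = Image.blend(original, out, node["amount"])
    return out

result = apply_graph(Image.open("photo.jpg").convert("RGB"), glow_graph)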
It's pretty simple: they do use shaders, and the shaders are downloaded from the internet.
Download iExplorer for Mac, connect your iPhone with Viddy installed.
Check the Library/effects folder in Viddy.app. You'll find the afx_1_0.xml and vfx_1_0.xml files there.
Download them to your Mac and open them; you'll find the filter definitions there, along with URLs to download them.
An example is the SOHO filter. Download this file and open it, and you'll see three files: shader.fx3, where the shader is defined, thumb.png for the thumbnail, and vignette.png, which is used for this shader as well.
We used the same approach in an unnamed application, but we encrypted all of this information, along with the shaders themselves, to avoid analysis like this one :)
Encryption/decryption example (requested in a comment):
Let's say you have an .fx file with your shader (or any other file).
Open Xcode and go to Build Rules, where you can define a build rule for *.fx files. Set it to run a Custom script, which can look like this one:
ENC_KEY="your-encryption-key"
${PROJECT_DIR}/../Tools/bin/crypt -e -k $ENC_KEY -i ${INPUT_FILE_PATH} \
-o "${BUILT_PRODUCTS_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/${INPUT_FILE_BASE}.cfx"
This script produces a .cfx file, which has the same content as the .fx file but is encrypted.
The crypt binary comes from this project: download the crypt Xcode project.
Download the encrypted resource demo.
Copy the EncryptedFileURLProtocol.* and NSURL+EncryptedFileURLProtocol.* files into your project.
In your app delegate, call this to register your protocol: [NSURLProtocol registerClass:[EncryptedFileURLProtocol class]];
Now, when you want to open an encrypted resource, you have to use the encrypted-file protocol instead of file://. The NSURL category from the demo project handles this, so you can simply use [NSURL encryptedFileURLWithPath:@"/path/to/my/encrypted/file"].
It's pretty simple, and you'll find most of the info you need in the sample app (link above). You can also mangle your encryption/decryption key in the application so that it isn't easily readable and people have to work for it. When you access an encrypted file via this NSURL, it's automatically decrypted for you in the app. The decryption key is set in sharedKey in the EncryptedFileURLProtocol.m file.
The easiest way to do this is to build the filters into the app itself, and have the in-app purchase simply unlock the ability to use them.
If you wanted to avoid the download time for all the additional images or other pieces needed, you could still include the code in the main app, and just download the extra resources needed. You can use something like Urban Airship's IAP support to host & download the IAP resources. (You might also want to look into new features of iOS 6 in this vein.)
GLSL shaders can be downloaded in source form and then used for processing. This is a very flexible way to create new filters after the app has been published. On the other hand, it might be enough just to update (download) additional filter data. For example, Instagram uses the same color-curve technique for most filters but with different curve data, so if they wanted to, they could update their filters online.
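Not GLSL, but the same "one technique, downloadable data" idea sketched on the CPU with Pillow; the curve below is just an example:

from PIL import Image

# Downloadable "filter data": one 256-entry curve, here a simple brightening
curve = [min(255, int(255 * (i / 255.0) ** 0.8)) for i in range(256)]
img = Image.open("photo.jpg").convert("RGB")
filtered = img.point(curve * 3)  # apply the same curve to R, G and B
filtered.save("filtered.jpg")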
Filters for video also use the CIImage class, like the Instagram application does for images. See the link here: http://www.icapps.be/face-detection-with-core-image-on-live-video/. The filters can then be downloaded (this actually happens as an In-App Purchase).
Put the purchase/download method right beneath the case:
case SKPaymentTransactionStatePurchased:
[self ...];
So what's happening is a free purchase of the filter, which can then be used on any video. The method that enables the filter runs after SKPaymentTransactionStatePurchased.
The iPhone makes it really simple to snarf down an image from the web; you can turn a URL into a UIImage in one line of code. So I'd like to enable my app (an educational puzzle game... my first!) to download some random images to make it more interesting and dynamic.
I thought about using Kodak's image of the day RSS feed, but I'm having quite a time figuring out how to parse it. Rather than being a simple list of image URLs, it seems to reference a bunch of "jhtml" URLs, which run Javascript to display the images in your RSS reader. Is this intentionally obfuscated, or am I missing some basic step to parse this?
I also tried the Astronomy Picture of the Day, via this RSS feed, but it's just the original page's HTML stuffed into CDATA... ugh.
So I guess this is really two questions:
Is there a simple way to parse these feeds to actually get at the JPG URLs on the iPhone?
Is there a better source for "picture of the day" type images?
PS: I'm using NSXMLParser, which I learned to use here.
I would recommend going with something that has an API, perhaps the Flickr "Interestingness" feed:
http://www.flickr.com/services/api/flickr.interestingness.getList.html
There is an Objective-C library written to help with accessing Flickr, but I'm not sure if this API call is included:
http://github.com/lukhnos/objectiveflickr/tree/master
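This isn't iPhone code, but a quick sketch of what that call returns and how the JPEG URLs can be built from it (the URL format is from memory of the Flickr docs, so double-check there):

import requests

resp = requests.get("https://api.flickr.com/services/rest/", params={
    "method": "flickr.interestingness.getList",
    "api_key": "YOUR_API_KEY",
    "format": "json",
    "nojsoncallback": 1,
}).json()

# Build direct JPEG URLs from the photo records in the response
for p in resp["photos"]["photo"][:5]:
    print("https://farm%s.staticflickr.com/%s/%s_%s.jpg"
          % (p["farm"], p["server"], p["id"], p["secret"]))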