I am currently trying to use rasa_nlu to extract from texts sentences or groups of words that mean "there was an anomaly on this satellite at this date regarding this equipment".
I thought of intent detection because I had already used wit.ai and recast.ai for chatbots, and I figured that extracting this kind of thing from a text could be like talking to a bot with the text and waiting for it to detect the "anomaly" intent.
However, this means that I have only one intent in my model, and rasa doesn't seem to work well for intent detection in that setup (it does good named-entity recognition, but no intent detection at all).
Is there another way to extract what I want using rasa or any other tool?
Thank you in advance
Antoine
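For what it's worth, most intent classifiers need at least two classes to separate, so a commonly suggested workaround is to add a contrastive "other" intent alongside "anomaly". Below is a minimal, dependency-free sketch that builds training data in that shape; the example phrases and entity names are invented, and the "rasa_nlu_data"/"common_examples" JSON layout is assumed from rasa_nlu 0.x's documented format:

```python
import json

def make_example(text, intent, entities=None):
    """One rasa_nlu-style training example (layout assumed from rasa_nlu 0.x)."""
    return {"text": text, "intent": intent, "entities": entities or []}

examples = [
    # Positive example for the "anomaly" intent, with hypothetical entities.
    make_example(
        "An anomaly occurred on SAT-7 on 2018-03-02 affecting the gyroscope",
        "anomaly",
        [{"start": 23, "end": 28, "value": "SAT-7", "entity": "satellite"}],
    ),
    # Contrastive counter-examples, so the classifier has two classes to separate.
    make_example("The launch window opens next Tuesday", "other"),
    make_example("Telemetry looks nominal across all channels", "other"),
]

training_data = {"rasa_nlu_data": {"common_examples": examples}}

# Serialise to the JSON you would hand to rasa_nlu's trainer.
print(json.dumps(training_data, indent=2))
```

Sentences labelled "other" then act as the negative class, so the model can actually say "no anomaly here" instead of always predicting the single intent it knows.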
I'm new to Dialogflow and I want to learn something from it. Currently I'm stuck on 2 problems. The first is: how do we know whether what the customer says matches the intent?
According to Google's tutorial, it puts in training phrases like in the screenshot. In my case, I don't know how to trigger the intent. I tried "My favorite color is Tony", and it would ask, as I expected: "What's your favorite color?". But if I just say "Tony", it goes to the fallback case. I'm confused about how the intent gets triggered. Is it by entity, or something else?
In addition, I have written a couple of lines of code on Linux (which could be converted to other languages as well) to turn a board's LED light on and off (my friend helped me). However, since I'm new to Dialogflow, I want to do things like: if I talk to Google, it turns the LED on/off. I'm wondering how I should do that. Could I get some hints? I've never used an API before, but I could certainly learn that on my own. I just need some help.
P.S.: I learned C++ for one year, so I'm not familiar with JavaScript; if completing this project requires JavaScript, I would certainly learn it. I just need some hints, please.
Thanks!!
First, you should know that you don't need to learn another language if you are already good at one, because Dialogflow offers SDKs for many languages. You can check them out here: https://dialogflow.com/docs/sdks.
Now, coming to your query: when the user enters anything, that query goes to Dialogflow, which tries to find a match among the training phrases you have entered in your intents. If a match is found that scores above the threshold, Dialogflow sends back the response defined for that intent. You can also define custom entities (for colors, for example); they help Dialogflow find the intent more accurately. The following screenshots should help you understand the situation better:
1. Intent-1
2. Intent-2
3. Custom entity
4. Output
Hope this answers your query.
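To make the threshold idea above concrete, here is a toy sketch of the matching flow; the intents, training phrases, scoring (plain token overlap) and the threshold value are all invented for illustration, since Dialogflow's real matcher is ML-based:

```python
def token_overlap(a, b):
    """Fraction of tokens shared between two phrases (Jaccard similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Stand-ins for intents and their training phrases.
INTENTS = {
    "favorite-color": ["my favorite color is red", "i like the color blue"],
    "turn-on-led": ["turn on the led", "switch the light on"],
}

def match_intent(query, threshold=0.3):
    """Return the best-scoring intent, or the fallback if nothing clears the bar."""
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = token_overlap(query, phrase)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else "fallback"
```

With this toy scorer, "My favorite color is Tony" shares four of five tokens with a training phrase and matches, while the bare "Tony" shares no tokens with any phrase and falls through to the fallback, which mirrors the behaviour described in the question.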
I'm looking to automate data entry from predefined forms that have been filled out by hand. The characters are not separated, but the fields are identifiable by lines underneath or as a part of a table. I know that handwriting OCR is still an area of active research, and I can include an operator review function, so I do not expect accuracy above 90%.
The first solution that I thought of is a combination of OpenCV for field identification (http://answers.opencv.org/question/63847/how-to-extract-tables-from-an-image/) and Tesseract to recognize the handwriting (https://github.com/openpaperwork/pyocr).
Another potentially simpler and more efficacious method for field identification with a predefined form would be to somehow subtract the blank form from the filled form. Since the forms would be scanned, this would likely require some location tolerance, noise reduction, and feature recognition.
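The subtraction idea can be sketched without any imaging library: search a small range of offsets for the alignment that best overlaps the blank template with the scan, then keep only the pixels that are dark in the scan but not in the aligned template. This is a toy on 0/1 grids with invented data; real scans would need proper registration (e.g. feature-based homography in OpenCV) and denoising first:

```python
def shift(img, dx, dy):
    """Translate a 0/1 grid by (dx, dy), padding with zeros."""
    h, w = len(img), len(img[0])
    return [[img[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else 0
             for x in range(w)] for y in range(h)]

def best_alignment(template, scan, max_shift=2):
    """Exhaustively search small shifts, maximising overlap of dark pixels."""
    best = (0, 0, -1)
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            t = shift(template, dx, dy)
            overlap = sum(a & b for rt, rs in zip(t, scan)
                          for a, b in zip(rt, rs))
            if overlap > best[2]:
                best = (dx, dy, overlap)
    return best[:2]

def subtract_template(template, scan):
    """Keep pixels dark in the scan but not in the aligned blank template."""
    dx, dy = best_alignment(template, scan)
    t = shift(template, dx, dy)
    return [[s & (1 - a) for a, s in zip(rt, rs)]
            for rt, rs in zip(t, scan)]
```

Whatever survives the subtraction is, ideally, just the handwriting, ready to be cropped per field and fed to a recogniser.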
Any suggestions or comments would be greatly appreciated.
As the Tesseract FAQ says, it is not recommended if you're looking for successful handwriting recognition. I would recommend looking into commercial products like the Microsoft OCR API (scroll down to "Read handwritten text from images"); you can try it online and use their API in your application.
Another option is ABBYY OCR, which has a lot of useful functions for recognizing tables, complicated documents, etc. You can read more here.
As for free alternatives, the only thing that comes to mind is the Lipi toolkit.
As for detecting letters, it really depends on the input. In general, if your form is more or less the same every time, it would be best to simply measure your form and use predefined positions in which to search for text. Otherwise, OpenCV is the right technology for finding text; there are plenty of tutorials online and good answers here on Stack Overflow. For example, you can take a look at the detection-using-MSER answer by Silencer.
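The "measure your form once" approach from the last paragraph can be as simple as a lookup table: record each field's rectangle in millimetres on the paper form, then convert to pixel coordinates for whatever DPI the scanner used. The field names and coordinates below are invented for illustration:

```python
# Hypothetical field boxes measured once on the blank form,
# as (left, top, width, height) in millimetres.
FIELDS_MM = {
    "name": (20.0, 40.0, 80.0, 8.0),
    "date": (120.0, 40.0, 40.0, 8.0),
}

MM_PER_INCH = 25.4

def field_pixel_boxes(dpi):
    """Convert each field's mm rectangle to pixel coordinates at a given DPI."""
    scale = dpi / MM_PER_INCH
    return {name: tuple(round(v * scale) for v in box)
            for name, box in FIELDS_MM.items()}

# At 300 DPI, crop these rectangles from the scan and OCR each one separately.
boxes = field_pixel_boxes(300)
```

Cropping fixed regions like this sidesteps table detection entirely, at the cost of assuming the scan is reasonably well aligned.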
Hi guys, I'm looking for a solution that enables a user to compare an image to a previously stored image. For example, I take a picture of a chair on my iPhone, and it is compared to a locally stored image; if the similarity is reasonable, it confirms the image and calls another action.
Most solutions I've been able to find require cloud processing on third-party servers (Visioniq, moodstocks, kooaba, etc.). Is this because the iPhone doesn't have sufficient processing power to complete this task?
A small reference library will be stored on the device and referenced when needed.
Users will be able to index their own pictures for recognition.
Is there any such solution that anyone has heard of? My searches have only shown cloud solutions from the above mentioned BaaS providers.
Any help will be much appreciated.
Regards,
Frank
Try using the OpenCV library on iOS.
OpenCV
You can try using FlannBasedMatcher to match images. Documentation and code are available here; you can see "Feature Matching FLANN" there.
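Under the hood, FLANN-based matching is (approximate) nearest-neighbour search over feature descriptors, typically followed by Lowe's ratio test to discard ambiguous matches. Here is a dependency-free sketch of that core logic on toy 2-D descriptors; real SIFT/ORB descriptors are 32 to 128 dimensional, and OpenCV's FlannBasedMatcher does the search far more efficiently:

```python
def dist(a, b):
    """Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(query_desc, train_desc, ratio=0.7):
    """Return (query_idx, train_idx) pairs passing Lowe's ratio test."""
    matches = []
    for qi, q in enumerate(query_desc):
        # Rank training descriptors by distance to this query descriptor.
        ranked = sorted(range(len(train_desc)),
                        key=lambda ti: dist(q, train_desc[ti]))
        if len(ranked) >= 2:
            d1 = dist(q, train_desc[ranked[0]])
            d2 = dist(q, train_desc[ranked[1]])
            if d1 < ratio * d2:  # best match clearly beats the runner-up
                matches.append((qi, ranked[0]))
    return matches
```

Counting how many matches survive the ratio test gives a crude similarity score between the live photo and the stored reference image, which is enough to decide whether to trigger the follow-up action.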
Currently I am trying to create an iPhone app capable of recognizing the objects in an image, such as a car, bus, building, bridge, or human, and labeling them with the object name, with the help of the Internet.
Is there any free service that provides a solution to my problem, since object recognition is itself a complex task requiring digital image processing, neural networks, and so on?
Can this be done via an API?
If you want to recognise planar images, the current generation of mobile AR SDKs from Metaio, Qualcomm and Layar will allow you to upload images to match against, and will perform the matching.
If you want to match freely against a set of 3D objects, e.g. a Toyota Prius or the Empire State Building, the same techniques might be applied to sets of images taken at different rotations. However, you might have to match just one object, due to limitations on how large an image database you can have with the service, or contact those companies for a custom solution; and it may not work very reliably, given that the state of the art is matching against planar images.
If you want to recognize general classes (human, car, building), this is a very difficult problem, and I don't know of any solutions anywhere near fast enough to operate online (which I assume is a requirement, given you want an AR solution; is that a fair assumption?). It's been a few years since I studied CV, but at that time the most promising approach to visual classification was the "bag of visual words" family of methods; you might try reading up on those.
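For reference, the "bag of visual words" idea mentioned above reduces to: quantise each local descriptor to its nearest cluster centre (a "visual word"), then describe the whole image by a histogram of word counts that an ordinary classifier can consume. Here is a toy 1-D sketch with a hand-picked vocabulary; real pipelines learn the centres with k-means over SIFT-like descriptors:

```python
# Stand-in visual vocabulary: three hand-picked cluster centres.
CENTRES = [0.0, 5.0, 10.0]

def nearest_word(desc):
    """Index of the cluster centre closest to this descriptor."""
    return min(range(len(CENTRES)), key=lambda i: abs(desc - CENTRES[i]))

def bovw_histogram(descriptors):
    """Normalised histogram of visual-word counts for one image."""
    counts = [0] * len(CENTRES)
    for d in descriptors:
        counts[nearest_word(d)] += 1
    total = sum(counts) or 1  # avoid division by zero for empty images
    return [c / total for c in counts]
```

Two images of the same object class tend to produce similar histograms, so a classifier (SVM, for example) trained on these fixed-length vectors can label new images.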
Take a look at Cortexica. Very useful for this sort of thing.
http://www.cortexica.com/
I haven't done work with mobile AR in a while, but the last time I was working on this stuff I was using Layar and starting to investigate Junaio. Those are oriented toward 3D graphics, not simply text labels, so for your use case you may be better served with OpenCV.
Note that Layar (and I believe Junaio too) works like a web app, where you put the content on your own server and give Layar the URL to link to.
I would like to start on a Chinese handwriting recognition program for iPhone, but I couldn't find any library or API that can help me do so. It's hard for me to write the algorithm myself because of my limited time.
Some suggestions recommended that I make use of a back-end server to do the recognition work, but I don't know how to set up that kind of server.
So any suggestion or basic steps that can help me to achieve this personal project?
You might want to check out Zinnia. Tegaki relies on other APIs (Zinnia is one of them) to do the actual character recognition.
I haven't looked at the code, but I gather it's written in C or C++, so it should suit your needs better than Tegaki.
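For intuition, an online (stroke-based) recogniser of the kind Zinnia implements compares the user's pen strokes against stored character templates. Below is a heavily simplified nearest-template sketch; the resampling, the Manhattan distance, and the two toy templates are all invented for illustration, and bear no relation to Zinnia's actual SVM-based scoring:

```python
def resample(stroke, n=8):
    """Pick n points evenly along the recorded stroke (by index, for brevity)."""
    if len(stroke) == 1:
        return stroke * n
    return [stroke[round(i * (len(stroke) - 1) / (n - 1))] for i in range(n)]

def stroke_distance(a, b):
    """Sum of Manhattan distances between corresponding resampled points."""
    return sum(abs(xa - xb) + abs(ya - yb)
               for (xa, ya), (xb, yb) in zip(resample(a), resample(b)))

def recognise(strokes, templates):
    """Return the template label whose strokes are closest overall.

    Assumes the input has the same stroke count as the templates.
    """
    def total(t):
        return sum(stroke_distance(s, ts) for s, ts in zip(strokes, t))
    return min(templates, key=lambda label: total(templates[label]))

# Toy one-stroke templates: a horizontal and a vertical line.
TEMPLATES = {
    "一": [[(0, 5), (10, 5)]],
    "丨": [[(5, 0), (5, 10)]],
}
```

A real recogniser would also normalise size and position and weight stroke order, but this captures the shape of the problem: handwriting input arrives as point sequences, and recognition is a similarity search over templates.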