automatically generating intent and entity from a complete sentence - chatbot

I am building a bot with Rasa.ai. When training the bot with Rasa NLU, we use a training data file where the text, intent, entities, etc. are specified. For example, for a simple restaurant chatbot, the training file data.json may contain
{
  "text": "central indian restaurant",
  "intent": "restaurant_search",
  "entities": [
    {
      "start": 0,
      "end": 7,
      "value": "central",
      "entity": "location"
    },
    {
      "start": 8,
      "end": 14,
      "value": "indian",
      "entity": "cuisine"
    }
  ]
}
We use this to train the model. But we need to create this training file manually (or through a GUI).
Is there any tool where I can feed sentences and it can automatically create intent and entity?
Sample Input: Is there any central Indian restaurant?
Sample Output: The above data.json
EDIT:
To better explain this question: suppose I have a huge set of customer service call logs. My understanding is that with Rasa (or another similar framework), a human being needs to go through the call logs, understand all possible intent/entity combinations that occurred in the past, and create a file like the data.json above before training the model. This seems like a really unscalable problem. Is there a way to generate that data.json file from those GB-sized call logs without involving a human being? Am I missing something here?

This is exactly the task which you are training Rasa NLU to perform. Take in sentences and turn them into structured output. By providing examples, you are teaching the model how this works.
So you don't have to provide annotations for gigabytes of customer logs, just some; the algorithm should generalise to the other sentences it hasn't seen yet. How well this works depends on how many intents you have, how complex they are, and other factors.
I would start by annotating a few hundred sentences (the markdown format is a bit easier, actually), keep 50 or so examples separate, and see how well Rasa NLU predicts them. Keep annotating more examples and adding them to your training data until you are happy with the performance on the held-out examples.
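For reference, that markdown format looks roughly like this (a minimal sketch in the legacy Rasa NLU markdown syntax, reusing the intent and entities from the question):

```md
## intent:restaurant_search
- any [central](location) [indian](cuisine) restaurant
- show me an [italian](cuisine) place in the [north](location)
```

Each [value](entity) annotation is converted by Rasa into the start/end character offsets shown in the JSON example above, so you never compute those by hand.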

A fast way to generate arbitrarily big training datasets with a few lines of code is Chatito.
You write down typical sentences and synonyms for the entities in an intuitive DSL.
It generates all the combinations for you and shuffles them for better training.
It splits the examples between two files, one for training and one for testing, so you can measure the accuracy of your trained language model.
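A rough sketch of the DSL (the intent and slot names reuse the question's example; check the Chatito docs for the exact syntax of your version):

```
%[restaurant_search]
    is there any @[location] @[cuisine] restaurant?
    show me a @[cuisine] place in the @[location]

@[location]
    central
    northern

@[cuisine]
    indian
    italian
```

Chatito expands every combination of slot values into annotated training examples in Rasa's format.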

What I am asking for is essentially unsupervised learning: input a bunch of natural language sentences and output them in the intent/entity format that Rasa or any other similar tool requires.
This is absent from Rasa and similar tools, as they do supervised learning. One example of a tool that might resolve my problem is lang.ai.

The idea is to provide the sample sentences only. By providing the samples, you are training the model to understand the sentence structure, where to expect the entities, what data types the entities are, etc.
However, if you are just looking for named entity recognition, you can use spaCy alone. Just throw a sentence at it and it will try to detect the entities in it. spaCy ships with pre-trained models that do this.
Reference: Spacy Named Entities
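A minimal sketch (assuming the small English model has been installed once with python -m spacy download en_core_web_sm):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

doc = nlp("Is there any central Indian restaurant?")
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
# "Indian" typically comes back as NORP (nationality/religious/political group);
# spaCy's generic labels won't match domain labels like "cuisine"
# without fine-tuning on your own annotations.
```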

Related

Productize a model with Target Encoding

I am new to data science and I am experimenting with target encoding for my dataset, which has several columns with multiple categories (I have discovered that one-hot encoding fails me on real-world datasets). While I was building my model using the insights I gained from https://github.com/groverpr/Machine-Learning/tree/9963e59823fe0ff18cc8e1b2657b71c01f133193 and https://github.com/scikit-learn-contrib/category_encoders I couldn't help but wonder how these models can be used in real-world situations after training. I also wonder how target encoding can even be used for feature selection. I would be very grateful for any clarification on this topic.
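On the post-training question specifically, the usual pattern is that the encoder learns the category-to-target-mean mapping on the training data and is then applied, unchanged, to new rows. A minimal sketch with the category_encoders package linked above (the column and values are made up):

```python
import pandas as pd
import category_encoders as ce

# Toy training data with a high-cardinality categorical column.
X_train = pd.DataFrame({"city": ["oslo", "oslo", "paris", "rome"]})
y_train = pd.Series([1, 0, 1, 1])

enc = ce.TargetEncoder(cols=["city"])
enc.fit(X_train, y_train)        # learns per-category target means

# At prediction time there is no target: reuse the fitted mapping.
X_new = pd.DataFrame({"city": ["paris", "tokyo"]})
print(enc.transform(X_new))      # with default settings, unseen "tokyo"
                                 # falls back to the global target mean
```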

Having a combination of pre trained and supervised embeddings in rasa nlu pipeline

I am new to Rasa and started creating a very domain-specific chatbot. As part of it, I understand it's better to use supervised embeddings in the NLU pipeline, since my use case is domain-specific.
I have an example intent in my nlu.md
## create_system_and_config
- create a [VM](system) of [12 GB](config)
If I use a supervised featurizer, it might work fine with my domain-specific entities, but my concern here is: by using only supervised learning, won't we lose the advantage of pre-trained models? For example, in a query such as "add a (some_system) of (some_config)", "add" and "create" are very closely related; pre-trained models will be able to pick up such verbs easily. Is it possible to have a combination of a pre-trained model and then do some supervised learning on top of it in our NLU pipeline, something like transfer learning?
If you're creating a domain-specific chatbot, it's always better to use supervised embeddings instead of pre-trained ones.
For example, in general English, the word “balance” is closely related to “symmetry”, but very different to the word “cash”. In a banking domain, “balance” and “cash” are closely related and you’d like your model to capture that.
In your case too, your model needs to capture that the words “VM” and “Virtual Machine” mean the same thing. Pre-trained featurizers are not trained to capture this; they are more generic.
The advantage of using pre-trained word embeddings in your pipeline is that if you have a training example like “I want to buy apples”, and Rasa is asked to predict the intent for “get pears”, your model already knows that the words “apples” and “pears” are very similar. This is especially useful if you don’t have enough training data.
For more details you can refer to the Rasa documentation.
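On the original question of combining both: recent Rasa versions do let you stack a pre-trained featurizer and a featurizer learned from your own data in one pipeline, feeding both into the classifier. A rough sketch of a config.yml (component names as in Rasa 2.x; adjust for your version):

```yaml
language: en
pipeline:
  - name: SpacyNLP                 # loads pre-trained spaCy word vectors
  - name: SpacyTokenizer
  - name: SpacyFeaturizer          # dense features from the pre-trained model
  - name: CountVectorsFeaturizer   # sparse features learned from your own data
  - name: DIETClassifier           # supervised intent/entity model trained on both
    epochs: 100
```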

Efficiently extract WikiData entities from text

I have a lot of texts (millions), ranging from 100 to 4000 words. The texts are formatted as written work, with punctuation and grammar. Everything is in English.
The problem is simple: How to extract every WikiData entity from a given text?
An entity is defined as every noun, proper or regular. I.e., names of people, organizations, locations and things like chair, potatoes etc.
So far I've tried the following:
1. Tokenize the text with OpenNLP, and use the pre-trained models to extract people, locations, organizations and regular nouns.
2. Apply Porter stemming where applicable.
3. Match all extracted nouns against the wmflabs API to retrieve a potential WikiData ID.
This works, but I feel like I can do better. One obvious improvement would be to cache the relevant pieces of WikiData locally, which I plan on doing. However, before I do that, I want to check if there are other solutions.
Suggestions?
I tagged the question Scala because I'm using Spark for the task.
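For the lookup step, the public Wikidata API has a search action that can replace or complement the wmflabs endpoint, and its responses are easy to cache locally. A minimal sketch in Python (the question uses Scala/Spark, so treat this as pseudocode for the HTTP call):

```python
import requests

def wikidata_id(name):
    """Return a candidate WikiData ID for an extracted noun phrase, or None."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",  # public entity-search endpoint
            "search": name,
            "language": "en",
            "format": "json",
            "limit": 1,
        },
        timeout=10,
    )
    results = resp.json().get("search", [])
    return results[0]["id"] if results else None

print(wikidata_id("Barack Obama"))  # e.g. "Q76"
```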
Some suggestions:
- Consider Stanford NER in comparison to OpenNLP to see how it compares on your corpus.
- I wonder at the value of stemming for most entity names.
- I suspect you might be losing information by dividing the task into discrete stages.
- Although Wikidata is new, the task isn't, so you might look at papers for Freebase|DBpedia|Wikipedia entity recognition|disambiguation.
In particular, DBpedia Spotlight is one system designed for exactly this task.
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/38389.pdf
http://ceur-ws.org/Vol-1057/Nebhi_LD4IE2013.pdf

How to auto-tag content, algorithms and suggestions needed

I am working with some really large databases of newspaper articles; I have them in a MySQL database and I can query them all.
I am now searching for ways to help me tag these articles with somewhat descriptive tags.
All these articles is accessible from a URL that looks like this:
http://web.site/CATEGORY/this-is-the-title-slug
So at least I can use the category to figure out what type of content we are working with. However, I also want to tag based on the article text.
My initial approach was doing this:
1. Get all articles.
2. Get all words, remove all punctuation, split by space, and count them by occurrence.
3. Analyze them, and filter common non-descriptive words out like "them", "I", "this", "these", "their", etc.
Once all the common words were filtered out, the only thing left would be tag-worthy words.
But this turned out to be a rather manual task, and not a very pretty or helpful approach.
This also suffered from the problem of words or names that are split by spaces: for example, if 1,000 articles contain the name "John Doe", and 1,000 articles contain the name "John Hanson", I would only get the word "John" out of it, not the full first and last names.
Automatically tagging articles is really a research problem and you can spend a lot of time re-inventing the wheel when others have already done much of the work. I'd advise using one of the existing natural language processing toolkits like NLTK.
To get started, I would suggest implementing a proper tokeniser (much better than splitting by whitespace), and then taking a look at chunking and stemming algorithms.
You might also want to count frequencies for n-grams, i.e. sequences of words, instead of individual words. This would take care of "words split by a space". Toolkits like NLTK have functions built in for this.
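A minimal sketch of bigram counting with NLTK (assumes the punkt tokenizer data has been downloaded once with nltk.download('punkt')):

```python
import nltk
from nltk.util import ngrams
from collections import Counter

text = "John Doe met John Hanson. John Doe left early."
tokens = nltk.word_tokenize(text)

bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(2))
# ("John", "Doe") and ("John", "Hanson") are counted separately,
# which avoids collapsing both names into a single "John" tag.
```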
Finally, as you iteratively improve your algorithm, you might want to train on a random subset of the database and then see how the algorithm tags the remaining set of articles, to check how well it works.
You should use a metric such as tf-idf to get the tags out:
Count the frequency of each term per document. This is the term frequency, tf(t, D). The more often a term occurs in the document D, the more important it is for D.
Count, per term, the number of documents the term appears in. This is the document frequency, df(t). The higher df, the less the term discriminates among your documents and the less interesting it is.
Divide tf by the log of df: tfidf(t, D) = tf(t, D) / log(df(t) + 1).
For each document, declare the top k terms by their tf-idf score to be the tags for that document.
Various implementations of tf-idf are available; for Java and .NET, there's Lucene, for Python there's scikits.learn.
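A minimal sketch of the recipe above with scikit-learn (note its TfidfVectorizer uses a slightly different weighting formula than the one given here, but the idea is the same; the documents are made up):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the mavericks won the basketball game last night",
    "the senate passed the federal budget bill",
    "the team traded its star basketball player",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)            # rows: documents, columns: terms
terms = np.array(vec.get_feature_names_out())

k = 2
for i, doc in enumerate(docs):
    row = tfidf[i].toarray().ravel()
    tags = terms[row.argsort()[::-1][:k]]  # top-k terms by tf-idf score
    print(doc, "->", list(tags))
```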
If you want to do better than this, use language models. That requires some knowledge of probability theory.
Take a look at Kea. It's an open source tool for extracting keyphrases from text documents.
Your problem has also been discussed many times at http://metaoptimize.com/qa:
http://metaoptimize.com/qa/questions/1527/what-are-some-good-toolkits-to-get-lda-like-tagging-of-my-documents
http://metaoptimize.com/qa/questions/1060/tag-analysis-for-document-recommendation
If I understand your question correctly, you'd like to group the articles into similarity classes. For example, you might assign article 1 to 'Sports', article 2 to 'Politics', and so on. Or if your classes are much finer-grained, the same articles might be assigned to 'Dallas Mavericks' and 'GOP Presidential Race'.
This falls under the general category of 'clustering' algorithms. There are many possible choices of such algorithms, but this is an active area of research (meaning it is not a solved problem, and thus none of the algorithms are likely to perform quite as well as you'd like).
I'd recommend you look at Latent Dirichlet Allocation (http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), or 'LDA'. I don't have personal experience with any of the LDA implementations available, so I can't recommend a specific system (perhaps others more knowledgeable than I might be able to recommend a user-friendly implementation).
You might also consider the agglomerative clustering implementations available in LingPipe (see http://alias-i.com/lingpipe/demos/tutorial/cluster/read-me.html), although I suspect an LDA implementation might prove somewhat more reliable.
A couple of questions to consider while you're looking at clustering systems:
- Do you want to allow fractional class membership? E.g., consider an article discussing the economic outlook and its potential effect on the presidential race; can that document belong partly to the 'economy' cluster and partly to the 'election' cluster? Some clustering algorithms allow partial class assignment and some do not.
- Do you want to create a set of classes manually (i.e., list out 'economy', 'sports', ...), or do you prefer to learn the set of classes from the data? Manual class labels may require more supervision (manual intervention), but if you choose to learn from the data, the 'labels' will likely not be meaningful to a human (e.g., class 1, class 2, etc.), and even the contents of the classes may not be terribly informative. That is, the learning algorithm will find similarities and cluster documents it considers similar, but the resulting clusters may not match your idea of what a 'good' class should contain.
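To make the fractional-membership point concrete, here is a minimal sketch with scikit-learn's LDA implementation (the documents and topic count are made up, and LDA topic indices are anonymous, as warned above):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the budget deficit dominated the presidential debate",
    "the mavericks beat the lakers in overtime",
    "slow economic growth may influence the presidential race",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # one row per document; rows sum to 1

print(doc_topics.round(2))
# Each row is a mixture over topics, so the third document can belong
# partly to an 'economy'-like topic and partly to an 'election'-like one.
```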
Your approach seems sensible, and there are two ways you can improve the tagging:
1. Use a known list of keywords/phrases for your tagging, and if the count of the instances of a word/phrase is greater than a threshold (probably based on the length of the article), then include the tag.
2. Use a part-of-speech tagging algorithm to help reduce the article to a sensible set of phrases, and use a sensible method to extract tags from them. Once you have the articles reduced with such an algorithm, you can identify some good candidate words/phrases to use in your keyword/phrase list for method 1.
If the content is an image or video, please check out the following blog article:
http://scottge.net/2015/06/30/automatic-image-and-video-tagging/
There are basically two approaches to automatically extract keywords from images and videos.
Multiple Instance Learning (MIL)
Deep Neural Networks (DNN), Recurrent Neural Networks (RNN), and the variants
In the above blog article, I list the latest research papers to illustrate the solutions. Some of them even include demo sites and source code.
If the content is a large text document, please check out this blog article:
Best Key Phrase Extraction APIs in the Market
http://scottge.net/2015/06/13/best-key-phrase-extraction-apis-in-the-market/
Thanks, Scott
Assuming you have pre-defined set of tags, you can use the Elasticsearch Percolator API like this answer suggests:
Elasticsearch - use a "tags" index to discover all tags in a given string
Are you talking about named-entity recognition? If so, Anupam Jain is right: it's a research problem tackled with deep learning and CRFs. In 2017, work on the named-entity recognition problem is focused on semi-supervised learning techniques.
The link below is a related NER paper:
http://ai2-website.s3.amazonaws.com/publications/semi-supervised-sequence.pdf
Also, the link below is about key-phrase extraction on Twitter:
http://jkx.fudan.edu.cn/~qzhang/paper/keyphrase.emnlp2016.pdf

machine learning and code generator from strings

The problem: given a set of hand-categorized strings (or a set of ordered vectors of strings), generate a categorization function to categorize more input. In my case, that data (or most of it) is not natural language.
The question: are there any tools out there that will do that? I'm thinking of some kind of reasonably polished, download-install-and-go kind of thing, as opposed to some library or a brittle academic program.
(Please don't get stuck on details as the real details would restrict answers to less generally useful responses AND are under NDA.)
As an example of what I'm looking at: the input I want to filter is computer-generated status strings pulled from logs. Error messages (as an example) would be filtered based on who needs to be informed or what action needs to be taken.
Doing Things Manually
If the error messages are being generated automatically and the list of exceptions behind the messages is not terribly large, you might just want to have a table that directly maps each error message type to the people who need to be notified.
This should make it easy to keep track of exactly who/which-groups will be getting what types of messages and to update the routing of messages should you decide that some of the messages are being misdirected.
Typically, a small fraction of the types of errors make up a large fraction of error reports. For example, Microsoft noticed that 80% of crashes were caused by 20% of the bugs in their software. So, to get something useful, you wouldn't even need to start with a complete table covering every type of error message. Instead, you could start with just a list that maps the most common errors to the right person and routes everything else to a person for manual routing. Each time an error is routed manually, you could then add an entry to the routing table so that errors of that type are handled automatically in the future.
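A minimal sketch of that routing table, including the fallback for unrecognized messages (all names here are hypothetical):

```python
# Hypothetical routing table: error type -> who gets notified.
ROUTES = {
    "DiskFull":       ["ops-team"],
    "AuthFailure":    ["security-team"],
    "PaymentTimeout": ["payments-team", "ops-team"],
}
FALLBACK = ["triage-inbox"]  # a human routes these, then extends ROUTES

def route(error_type):
    """Return the recipients for an error type, falling back to manual triage."""
    return ROUTES.get(error_type, FALLBACK)

print(route("DiskFull"))       # -> ['ops-team']
print(route("NewWeirdError"))  # -> ['triage-inbox']
```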
Document Classification
Unless the error messages are being editorialized by the people who submit them and you want to use this information when routing them, I wouldn't recommend treating this as a document classification task. However, if this is what you want to do, here's a list of reasonably good packages for document classification, organized by programming language:
Python - To do this using the Python-based Natural Language Toolkit (NLTK), see the Document Classification section in the freely available NLTK book (a minimal sketch follows this list).
Ruby - If Ruby is more of your thing, you can use the Classifier gem. Here's sample code that detects whether Family Guy quotes are funny or not-funny.
C# - C# programmers can use nBayes. The project's home page has sample code for a simple spam/not-spam classifier.
Java - Java folks have Classifier4J, Weka, Lucene Mahout, and as adi92 mentioned Mallet.
Learning Rules with Weka - If rules are what you want, Weka might be of particular interest, since it includes a rule set based learner. You'll find a tutorial on using Weka for text categorization here.
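Following up on the Python/NLTK option above, a minimal sketch of training and using an NLTK classifier on log-like strings (the messages, labels, and feature function are all made up):

```python
import nltk

def features(msg):
    """Bag-of-words features in NLTK's expected dict format."""
    return {"contains({})".format(w.lower()): True for w in msg.split()}

# Toy hand-categorized training strings: (featureset, label) pairs.
train = [
    (features("disk quota exceeded on volume"), "ops"),
    (features("login failed for user admin"), "security"),
    (features("payment gateway timeout"), "payments"),
    (features("disk almost full on server"), "ops"),
]

clf = nltk.NaiveBayesClassifier.train(train)
print(clf.classify(features("user login rejected")))
# likely "security", since "login" and "user" only occur in that class
```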
Mallet has a bunch of classifiers which you can train and deploy entirely from the command line.
Weka is nice too because it has a huge number of classifiers and preprocessors for you to play with.
Have you tried spam or email filters? By using text files that have been marked with appropriate categories, you should be able to categorize further text input. That's what those programs do, anyway, but instead of labeling your outputs as 'spam' and 'not spam', you could use other categories.
You could also try something involving AdaBoost for a more hands-on approach to rolling your own. This library from Google looks promising, but probably doesn't meet your ready-to-deploy requirements.