Why won't my neural network train? - neural-network

I have gathered over 20,000 legal pleadings in PDF format. I am an attorney, but I also write computer programs in MFC/VC++ to help with my practice. I'd like to learn to use neural networks (unfortunately my math skills are limited to college algebra) to classify documents filed in lawsuits.
My first goal is to train a three-layer feed-forward neural network to recognize whether a document is a small claims document (with the letters "SP" in the case number) or a regular document (with the letters "CC" in the case number). Every attorney puts some variant of "Case:", "Case No", "Case Number", or one of infinitely many variations of that. So I've taken the first 600 characters (all attorneys put the case number within the first 600 chars) and made a CSV database with each row being one document: 600 columns containing the ASCII codes of the first 600 characters, plus a 601st column that is either "1" for regular cases or "0" for small claims.
I then run it through the neural network program coded here:
https://takinginitiative.wordpress.com/2008/04/23/basic-neural-network-tutorial-c-implementation-and-source-code/
(Naturally I updated the program to handle 600 input neurons, with one output.) But when I run it, the accuracy is horrible: something like 2% on the training data, and 0% on the general set. 1/8 of the documents are for non-small-claims cases.
Is this the sort of problem a Neural Net can handle? What am I doing wrong?

Looking at each and every character at the beginning of the document independently will be very inaccurate. Rather than consider the characters independently, first tokenize the first 600 characters into words. Use those words as input to your neural net, rather than individual characters.
Note that once you have tokenized the first 600 characters, you may well find a distinctly finite list of tokens that mean "case number", removing the need for a neural net.
The Stanford Natural Language Processor provides this functionality. You can find a .NET compatible implementation available on NuGet.
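For illustration, here is a minimal Python sketch of that token-based approach (the SP/CC convention and the 600-character assumption come from the question; the exact patterns are hypothetical and would need tuning against real pleadings). The same logic ports directly to C++.

import re

# Hypothetical patterns: a "Case ..." label followed by a number containing SP or CC.
SP_PATTERN = re.compile(r"\bcase\s*(?:no\.?|number|:)?\s*\S*SP", re.IGNORECASE)
CC_PATTERN = re.compile(r"\bcase\s*(?:no\.?|number|:)?\s*\S*CC", re.IGNORECASE)

def classify_header(text):
    head = text[:600]                 # the question's assumption: case number appears early
    if SP_PATTERN.search(head):
        return 0                      # small claims ("SP")
    if CC_PATTERN.search(head):
        return 1                      # regular case ("CC")
    return None                       # unknown: fall back to manual review or a learned model

print(classify_header("IN THE COUNTY COURT ... Case No. 24-SP-001234"))   # prints 0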

Related

FastText window size

I'm currently working on fastText unsupervised learning. I wanted to clarify something about the context window mentioned in the fastText documentation.
In the description of the fastText library for Python (https://github.com/facebookresearch/fastText/tree/master/python), there are different arguments for training a fastText model; one of them is:
ws: size of the context window
My input file contains lines with 2 - 3 tokens.
Eg.,
Senior Database Administrator
Senior DotNet programmer
Network administrator
Head Programmer (Mainframe)
The default window size is 5. In the above example, I have lines whose token count is less than the window size. What will happen if the window size is bigger than the document length?
FastText (& related algorithms like word2vec) will simply use as much of the context window as is possible.
For example, assume a window-size of 5 and the input tokens:
['Senior', 'Database', 'Administrator']
When training with the 'center' word 'Senior', the algorithm would be ready to consult up-to-5 words in either direction.
But, there are 0 words preceding 'Senior', and only 2 words succeeding 'Senior', so only those 2 following words will be considered as neighbors.
(No 'plug' values are used as if they were blank neighbors, nor does any 'bleed-through' to neighboring texts occur.)
Two other related notes to keep in mind:
These algorithms do need neighboring words for any training to occur, so any texts with just a single word are essentially no-ops. (If there happens to be a word that only ever appears alone, you might still see a vector for it at the end of training, but in the implementations with which I am familiar, that will just be a randomly-initialized starting vector, completely untrained by real usage examples.)
Most implementations will simulate a weighting of neighboring words by not always using exactly your declared window-size, but rather, for each pass over a specific target center word, choosing a random window-size from 1 up to your chosen window-size. In this way, immediate neighbors are always part of training, while words further away are more often skipped.
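As a toy illustration of those two behaviours (boundary clipping plus the random window shrink), here is a short Python sketch; it mimics the described logic rather than quoting fastText's actual source.

import random

def context_words(tokens, center, ws=5):
    reduced = random.randint(1, ws)                       # random shrink: near words used more often
    start = max(0, center - reduced)                      # clip at the start of the line
    end = min(len(tokens), center + reduced + 1)          # clip at the end of the line
    return tokens[start:center] + tokens[center + 1:end]  # no padding, no bleed into other lines

print(context_words(['Senior', 'Database', 'Administrator'], center=0, ws=5))
# e.g. ['Database', 'Administrator']: only the two words that actually exist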

In Fasttext skipgram training, what will happen if some sentences in the corpus have just one word?

Imagine that you have a corpus in which some lines have just one word, so there is no context around some of the words. In this situation, how does FastText provide embeddings for these single words? Note that the frequency of some of these words is one, and there is no cut-off to get rid of them.
There's no way to train a context_word -> target_word skip-gram pair for such words (in either 'context' or 'target' roles), so such words can't receive trained representations. Only texts with at least 2 tokens contribute anything to word2vec or FastText word-vector training.
(One possible exception: FastText in its 'supervised classification' mode might be able to make use of, and train vectors for, such words, because then even single words can be used to predict the known-label of training texts.)
I suspect that such corpuses will still result in the model counting the word in its initial vocabulary-discovery scan, and thus it will be allocated a vector (if it appears at least min_count times), and that vector will receive the usual small-random-vector initialization. But the word-vector will receive no further training – so when you request the vector back after training, it will be of low-quality, with the only meaningful contributions coming from any char n-grams shared with other words that received real training.
You should consider any text-breaking process that results in single-word texts as buggy for the purposes of FastText. If those single-word texts come from another meaningful context where they were once surrounded by other contextual words, you should change your text-breaking process to work in larger chunks that retain that context.
Also note: it's rare for min_count=1 to be a good idea for word-vector models, at least when the training text is real natural-language material where word-token frequencies roughly follow Zipf's law. There will be many, many 1-occurrence (or few-occurrence) words, but with just one to a few example usage contexts, not likely representing the true breadth and subtleties of that word's real usages, it's nearly impossible for such words to receive good vectors that generalize to other uses of those same words elsewhere.
Training good vectors requires a variety of usage examples, and just one or a few examples will practically be "noise" compared to the tens-to-hundreds of examples of other words' usage. So keeping these rare words, instead of dropping them like a default min_count=5 (or higher in larger corpuses) would do, tends to slow training, slow convergence ("settling") of the model, and lower the quality of the other more-frequent word vectors at the end – due to the significant-but-largely-futile efforts of the algorithm to helpfully position these many rare words.
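To see the single-word and min_count effects concretely, here is a minimal sketch using gensim's FastText implementation (parameter names assume gensim 4.x; the two-line corpus is a toy stand-in):

from gensim.models import FastText

sentences = [
    ["senior", "database", "administrator"],   # multi-word line: yields skip-gram training pairs
    ["orphan"],                                 # single-word line: yields no pairs at all
]

model = FastText(sentences, vector_size=32, window=5, min_count=1, sg=1, epochs=5)

# "orphan" passed min_count, so a vector exists, but it is only the random
# initialization plus whatever char n-grams it shares with actually-trained words.
print(model.wv["orphan"][:5])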

How do Facebook's fasttext library handle numerical data in input for word vectorization?

I am using Facebook's Fasttext for performing text classification.
I wanted to know how the fasttext library handles numbers in a text string provided as input for word vectorization.
Does fasttext typecast each number as a string before creating word vectors?
For e.g. 1124 to " 1124 "
Or is some other transformation/preprocessing performed in the background before training?
For e.g. 1124 to " one one two four "
What would be the best approach to handling numerical data if my input text to fasttext contains numbers?
Fasttext doesn't do any preprocessing of numeric tokens. They are treated like other whitespace-separated "words".
Unless you already have a specific problem with fasttext and numbers in your input, I wouldn't worry about what fasttext does with the numbers. Just use it as normal.
If you have a lot of numbers and they're causing problems - this is possible since fasttext likely doesn't have any useful vectors for most specific numbers - you can pre-process your input to replace them with <NUMBER> or another dummy token. That way these sentences will be the same to fasttext:
I ate 1023 oranges.
I ate 1024 oranges.
Whether you want to treat those as the same or not depends on your application.
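If you do decide to mask numbers, a simple pre-processing step (the <NUMBER> token is an arbitrary choice) might look like this in Python:

import re

def mask_numbers(line):
    # collapse standalone digit runs into one dummy token before writing the training file
    return re.sub(r"\b\d+\b", "<NUMBER>", line)

print(mask_numbers("I ate 1023 oranges."))   # I ate <NUMBER> oranges.
print(mask_numbers("I ate 1024 oranges."))   # identical after masking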

Implementing one hot encoding

I already understand the uses and concept behind one hot encoding with neural networks. My question is just how to implement the concept.
Let's say, for example, I have a neural network that takes in up to 10 letters (not case sensitive) and uses one hot encoding. Each input will be a 26 dimensional vector of some kind for each spot. In order to code this, do I act as if I have 260 inputs with each one displaying only a 1 or 0, or is there some other standard way to implement these 26 dimensional vectors?
In your case, you have to distinguish between the various frameworks. I can speak for PyTorch, which is my go-to framework when programming a neural network.
There, one-hot encodings for sequences are generally handled by having your network expect a sequence of indices. Taking your 10 letters as an example, this could be the sequence ["a", "b", "c", ...].
The embedding layer will be initialized with a "dictionary length", i.e. the number of distinct elements (num_embeddings) your network can receive - in your case 26. Additionally, you can specify embedding_dim, i.e. the output dimension of a single character. This is already past the step of one-hot encodings, since you generally only need them to know which value to associate with that item.
Then, you would feed a coded version of the above string to the layer, which could look like this: [0, 1, 2, 3, ...]. Assuming the sequence is of length 10, this will produce an output of shape [10, embedding_dim], i.e. a 2-dimensional tensor.
To summarize, PyTorch essentially allows you to skip this rather tedious step of encoding it as a one-hot encoding. This is mainly due to the fact that your vocabulary can in some instances be quite large: Consider for example Machine Translation Systems, in which you could have 10,000+ words in your vocabulary. Instead of storing every single word as a 10,000-dimensional vector, using a single index is more convenient.
If that should not completely answer your question (since I am essentially telling you how it is generally preferred): Instead of making a 260-dimensional vector, you would again use a [10,26] Tensor, in which each line represents a different letter.
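A minimal PyTorch sketch of both routes described above (the embedding dimension of 8 is an arbitrary choice):

import torch
import torch.nn as nn
import torch.nn.functional as F

indices = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])   # ten letters coded as indices 0..25

# index route: the embedding layer looks vectors up directly, no explicit one-hot needed
embedding = nn.Embedding(num_embeddings=26, embedding_dim=8)
print(embedding(indices).shape)                    # torch.Size([10, 8])

# explicit one-hot route: a [10, 26] tensor, one row per letter
print(F.one_hot(indices, num_classes=26).shape)    # torch.Size([10, 26])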
If you have 10 distinct elements (e.g. a, b, ..., j or 1, 2, ..., 10) to be represented as one-hot encoded vectors of dimension 26, then your inputs are just 10 vectors, each of which is represented by a 26-dim vector. Do this:
import torch

y = torch.eye(26)         # a 26x26 identity matrix: one length-26 one-hot vector per letter
y[torch.arange(0, 10)]    # gives you 10 one-hot vectors, each of dimension 26
Hope this helps a bit.

How to predict word using trained CBOW

I have a question about CBOW prediction. Suppose my job is to use 3 surrounding words w(t-3), w(t-2), w(t-1) as input to predict one target word w(t). Once the model is trained, I want to predict a missing word at the end of a sentence. Does this model only work for a sentence with four words, where the first three are known and the last is unknown? If I have a sentence with 10 words, and the first nine words are known, can I use those 9 words as input to predict the last missing word in that sentence?
Word2vec CBOW mode typically uses symmetric windows around a target word. But it simply averages the (current in-training) word-vectors for all words in the window to find the 'inputs' for the prediction neural-network. Thus, it is tolerant of asymmetric windows: if fewer words are available on one side, fewer words on that side are used (perhaps even zero on that side, for words at the front/end of a text).
Additionally, during each training example, it doesn't always use the maximum window specified, but some random-sized window up to the specified size. So for window=5, it will sometimes use just 1 word on either side, and other times 2, 3, 4, or 5. This is done to effectively overweight closer words.
Finally, and most importantly for your question, word2vec doesn't really do a full prediction during training of "what exact word does the model say should be at this target location?" In either the 'hierarchical softmax' or 'negative-sampling' variants, such an exact prediction can be expensive, requiring calculations of neural-network output-node activation levels proportionate to the size of the full corpus vocabulary.
Instead, it does the much-smaller number-of-calculations required to see how strongly the neural-network is predicting the actual target word observed in the training data, perhaps in contrast to a few other words. In hierarchical-softmax, this involves calculating output nodes for a short encoding of the one target word – ignoring all other output nodes encoding other words. In negative-sampling, this involves calculating the one distinct output node for the target word, plus a few output nodes for other randomly-chosen words (the 'negative' examples).
In neither case does training know whether this target word is being predicted in preference over all other words, because it's not taking the time to evaluate all the others. It just looks at the current strength of outputs for a real example's target word, and nudges them (via back-propagation) to be slightly stronger.
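A toy numeric sketch of why negative sampling is cheap (made-up sizes, not word2vec's actual code): only the observed target word plus a few sampled negatives get scored, never the whole vocabulary.

import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10_000, 100
context_vec = rng.normal(size=dim)                   # e.g. the averaged CBOW input vector
output_weights = rng.normal(size=(vocab_size, dim))  # one output row per vocabulary word

target = 42
negatives = rng.integers(0, vocab_size, size=5)      # a few randomly-chosen 'negative' words
sampled = np.append(negatives, target)
scores = output_weights[sampled] @ context_vec       # 6 dot products instead of 10,000
probs = 1.0 / (1.0 + np.exp(-scores))                # sigmoid activation per sampled word only
print(probs.shape)                                   # (6,)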
The end result of this process is the word-vectors that are usefully-arranged for other purposes, where similar words are close to each other, and even certain relative directions and magnitudes also seem to match human judgements of words' relationships.
But the final word-vectors, and model-state, might still be just mediocre at predicting missing words from texts – because it was only ever nudged to be better on individual examples. You could theoretically compare a model's predictions for every possible target word, and thus force-create a sort of ranked-list of predicted-words – but that's more expensive than anything needed for training, and prediction of words like that isn't the usual downstream application of sets of word-vectors. So indeed most word2vec libraries don't even include any interface methods for doing full target-word prediction. (For example, the original word2vec.c from Google doesn't.)
A few versions ago, the Python gensim library added an experimental method for prediction, predict_output_word(). It only works for negative-sampling mode, and it doesn't quite handle window-word-weighting the same way as is done in training. You could give it a try, but don't be surprised if the results aren't impressive. As noted above, making actual predictions of words isn't the usual real goal of word2vec-training. (Other more stateful text-analysis, even just large co-occurrence tables, might do better at that. But they might not force word-vectors into interesting constellations like word2vec.)
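For completeness, a hedged example of that gensim method on a toy corpus (negative-sampling CBOW, gensim 4.x parameter names); don't expect meaningful rankings from a model this small:

from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]] * 50
model = Word2Vec(sentences, vector_size=32, window=5, min_count=1,
                 sg=0, negative=5, epochs=20)

# rank candidate fill-ins for a missing slot, given its surrounding context words
print(model.predict_output_word(["quick", "brown", "jumps", "over"], topn=3))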