What type of neural network should be used when we have sequential inputs?
Let's say we want to parse some text, where the output depends on the sequence of the words. For this, the output should depend on the previous state of the inputs.
Recurrent neural networks are what you are looking for -- at least in the general case, where sequences are long, or not all sequences have the same length.
Here's an example, here is a Python implementation and here are some reference papers.
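As a minimal sketch of the idea (not taken from the links above; the vocabulary size, dimensions, and binary label are illustrative placeholders), a Keras model like the following reads a variable-length sequence of token ids and produces an output that depends on the whole sequence:

```python
# Hedged sketch: a recurrent model over variable-length token sequences.
# Assumes words have already been mapped to integer ids and padded with 0.
from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.Input(shape=(None,), dtype="int32"),    # variable-length sequences
    layers.Embedding(input_dim=10000, output_dim=32, mask_zero=True),
    layers.LSTM(64),                               # state carries earlier words forward
    layers.Dense(1, activation="sigmoid"),         # e.g., one label per text
])
model.compile(loss="binary_crossentropy", optimizer="adam")
model.summary()
```

The `mask_zero=True` flag lets sequences of different lengths share a batch via zero-padding, which is what makes RNNs convenient when not all sequences have the same length.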
Much work has been done in recent years on neural networks where the input is a collection. In particular, convolutional networks often work well on N-dimensional arrays (exploiting spatial structure) and recurrent neural networks such as LSTM often work well on time-series data (exploiting temporal structure).
I'm currently looking at the question of using neural networks for heuristic guidance of search in theorem proving. One of the problems here is that the input takes the form not of an array but of a set of clauses, where there is not so much of a natural ordering between clauses, and we cannot say that each clause will mostly interact with nearby neighbors in space.
Aside from 'bag of words' for text documents (which works because there is a fixed dictionary), is anything already known about which neural network architectures work well for inputs that don't have a natural order?
I am interested in machine translation and, more specifically, I would like to examine the similarity between two strings. I would like to know if there are deep learning methods for text feature extraction. I have already tried the well-known statistical methods like cosine similarity, Levenshtein distance, word frequency, and others.
Thank you
To find the similarity between two strings, try training a Siamese network on your dataset.
Siamese networks are a special type of neural network architecture. Instead of learning to classify its inputs, the network learns to differentiate between two inputs; it learns the similarity between them.
https://medium.com/@gautam.karmakar/manhattan-lstm-model-for-text-similarity-2351f80d72f1
Below is a link to a write-up from a Kaggle competition where Siamese networks were used for text similarity:
https://medium.com/mlreview/implementing-malstm-on-kaggles-quora-question-pairs-competition-8b31b0b16a07
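As a rough sketch of the idea (assuming Keras/TensorFlow; this is not the exact model from the linked posts, and the sequence length, vocabulary size, and layer sizes are placeholders), a Siamese MaLSTM passes both inputs through the same encoder and scores similarity with a Manhattan distance:

```python
# Hedged sketch of a Siamese LSTM (MaLSTM-style) for string similarity.
# Assumes pairs of token-id sequences padded to MAX_LEN.
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, EMB_DIM = 20, 10000, 50

left = layers.Input(shape=(MAX_LEN,), dtype="int32")
right = layers.Input(shape=(MAX_LEN,), dtype="int32")

# Shared weights: the SAME embedding and LSTM encode both inputs.
embed = layers.Embedding(VOCAB, EMB_DIM)
encoder = layers.LSTM(64)

encoded_left = encoder(embed(left))
encoded_right = encoder(embed(right))

# MaLSTM similarity: exp(-||h_left - h_right||_1), a value in (0, 1].
distance = layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([encoded_left, encoded_right])

model = Model(inputs=[left, right], outputs=distance)
model.compile(loss="mean_squared_error", optimizer="adam")
model.summary()
```

The key design choice is the weight sharing: because both strings are encoded by the same network, the learned representation is forced to be comparable across inputs.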
Hope this clears your doubts.
I want to construct a neural network which will be trained based on data I create. My question is: what form should these data have? In other words, does Keras allow neural networks that take strings/characters as input? If not, and it only accepts numbers, in what range should the input/output be?
The only condition on your input data, i.e., the features, is that they should be numerical. There isn't really any constraint on range, but it's always a good idea to apply feature scaling or normalization so that training behaves well. Neural networks and other machine learning methods cannot accept strings (characters, words) directly, so you first need to convert strings to numbers. There are many ways to do that; the most common techniques include bag of words, tf-idf features, and word embeddings.
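For example, here is a small sketch with scikit-learn (the two documents are made up; in a real pipeline you would fit the vectorizer on your training corpus and reuse it at prediction time):

```python
# Hedged example: turning raw strings into numeric tf-idf features
# that can then be fed to a Keras model.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog ate my homework"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)           # sparse (n_docs, n_terms) matrix

# get_feature_names_out() requires scikit-learn >= 1.0;
# older versions use get_feature_names().
print(vectorizer.get_feature_names_out())    # the learned vocabulary
print(X.toarray())                           # dense numeric features for a NN
```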
The following tutorials (using scikit-learn) might be a good starting point:
http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words
I'm curious as to the kind of limitations even an expertly designed network might have. This one in particular is what I could use some insight on:
Given:
a set of random integers of non-trivial size (say, at least 500)
an expertly created/trained neural network

Task:
Number anagram: create the largest possible representation of an infinite sequence of integers in a given time frame, where the sequence either can be represented in closed form (e.g., n^2, 2x+5) or is registered in OEIS (http://oeis.org/). The numbers used to create the sequence can be taken from the input set in any order, so if the network is fed (3, 5, 1, 7, ...), returning (1, 3, 5, 7, ...) would be an acceptable result.
It's my understanding that an ANN can be trained to look for a particular sequence pattern (again, n^2, 2x+5, etc.). What I'm wondering is whether it can be made to recognize a more general pattern like n^y or xy+z. My thinking is that it won't be able to, because n^y can produce sequences that look different enough from one another that a stable 'base pattern' can't be established. That is, intrinsic to the way ANNs work (taking sets of input and doing fuzzy matching against a static pattern they have been trained to look for) is that they are limited in the scope of what they can be trained to look for.
Have I got this right?
Continuing from the conversation I had with you in the comments:
Neural networks still might be useful. Instead of training a neural net to search for a single pattern, the neural net can be trained to predict the data. If the data contains a predictable pattern, the NN can learn it, and the weights of the NN will represent the pattern it has learned. I think that may be what you were intending to do.
Some things that might be helpful for you if you do this:
Autoencoders do unsupervised learning and can learn the structure of individual datapoints.
Recurrent Neural Networks can model sequences of data rather than just individual datapoints. This sounds more like what you are looking for; see the sketch after this list.
Compositional Pattern-Producing Networks (CPPNs) are a fancy name for neural networks that use mathematical functions as activation functions. This would allow you to model functions that aren't easily approximated by NNs with simple activation functions like sigmoids or ReLU. But usually this isn't necessary, so don't worry too much about it until after you have a simple NN working.
Dropout is a simple technique where you randomly ignore a fraction (often half) of the hidden units on each training iteration. This seems to seriously reduce overfitting. It prevents complicated co-adaptations between neurons from forming, which should make the models more interpretable, which seems like your goal.
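As a minimal sketch of the "train the net to predict the data" idea (assuming Keras; the n^2 sequence, window size, and layer sizes are illustrative choices, and dropout is included as mentioned above):

```python
# Hedged sketch: fit a small LSTM to predict the next term of a sequence.
# If the data contains a predictable pattern, the weights come to encode it.
import numpy as np
from tensorflow.keras import layers, Sequential

# Toy data: the sequence n^2, scaled to [0, 1] to ease training.
seq = np.array([n ** 2 for n in range(1, 51)], dtype=np.float32)
seq /= seq.max()

# Sliding windows: predict the next term from the previous WINDOW terms.
WINDOW = 4
X = np.array([seq[i:i + WINDOW] for i in range(len(seq) - WINDOW)])
y = seq[WINDOW:]
X = X[..., None]                     # shape (samples, timesteps, features)

model = Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dropout(0.5),             # randomly zero half the units in training
    layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=200, verbose=0)

# If the net has learned the pattern, this should be close to the true value.
print(float(model.predict(X[-1:])[0, 0]), float(y[-1]))
```

Note that this learns one particular pattern from data; it doesn't by itself settle the question about recognizing whole families like n^y.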
I've been learning to work with neural networks as a hobby project, but am at a complete loss with how to handle categorical data. I read the article http://visualstudiomagazine.com/articles/2013/07/01/neural-network-data-normalization-and-encoding.aspx, which explains normalization of the input data and explains how to preprocess categorical data using effects encoding. I understand the concept of breaking the categories into vectors, but have no idea how to actually implement this.
For example, if I'm using countries as categorical data (e.g. Finland, Thailand, etc), would I process the resulting vector into a single number to be fed to a single input, or would I have a separate input for each component of the vector? Under the latter, if there are 196 different countries, that would mean I would need 196 different inputs just to process this particular piece of data. If a lot of different categorical data is being fed to the network, I can see this becoming really unwieldy very fast.
Is there something I'm missing? How exactly is categorical data mapped to neuron inputs?
Neural network inputs
As a rule of thumb: different classes and categories should have their own input signals.
Why you can't encode it with a single input
Since a neural network acts on its input values through activation functions, a higher input value results in a higher activation input, which makes the neuron more likely to fire.
Unless you want to tell the network that Thailand is "better" than Finland, you should not encode the country input signal as a single value like InputValue(Finland) = 24, InputValue(Thailand) = 140.
How it should be encoded
Each country deserves its own input signal so that they contribute equally to activating the neurons.
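As an illustrative sketch (the country list here is made up), one-hot encoding gives each category its own input column, which is exactly the "one input per country" scheme the question describes:

```python
# Hedged example: one-hot encoding a categorical feature by hand.
# Each distinct category becomes its own input signal (column).
import numpy as np

countries = ["Finland", "Thailand", "Finland", "Sweden"]
categories = sorted(set(countries))          # fix a category order
index = {c: i for i, c in enumerate(categories)}

one_hot = np.zeros((len(countries), len(categories)), dtype=np.float32)
for row, c in enumerate(countries):
    one_hot[row, index[c]] = 1.0             # exactly one 1 per row

print(categories)   # ['Finland', 'Sweden', 'Thailand']
print(one_hot)      # one input column per country
```

Yes, with 196 countries this means 196 inputs for that one feature, and that is the standard approach: each column is 0 or 1, so no category gets a numerically "larger" signal than another.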