I previously worked with neural networks, but mainly just for fun, using them with normalized Categorical (enum), Numeric, and Bit (bool) values. I know NNs have trouble dealing with characters, but I was wondering whether they could learn how to transform text.
So is it possible, for example, to train a NN to do the following:
13/20 = 20
aa/bb = bb
20/10 = 10
Or (replacing d with f):
abcde = abcfe
tdfg = tffg
ddhj = ffhj
If yes, how reliable would it be? Or is there something better suited for the job?
The question wasn't about implementation, but since I have just finished a lab on a similar topic, here are some practical tips:
It would be quite possible, and fairly easy, to use an RNN to transform text in that way. Clone the char-rnn repository (https://github.com/karpathy/char-rnn) and train it on a sufficiently large file placed at data/folder/input.txt, written in the same format you want as output:
abcde = abcfe
tdfg = tffg
ddhj = ffhj
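If you need to generate a training file of sufficient size, a few lines of Python will do (a sketch, assuming the d-to-f substitution task; the path matches the char-rnn layout above and the folder is assumed to exist):
import random
import string

# generate tens of thousands of "input = output" lines for data/folder/input.txt
with open("data/folder/input.txt", "w") as f:
    for _ in range(50000):
        word = "".join(random.choice(string.ascii_lowercase) for _ in range(random.randint(3, 6)))
        f.write(word + " = " + word.replace("d", "f") + "\n")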
Use this command to train:
th train.lua -data_dir data/folder
When testing the network, it should produce correct output based on the seed text you provide. For example,
th sample.lua cv/[latest_sample] -primetext "abcd" -length 7
should produce the output:
abcd = abcf
Everything depends on the complexity of the transformation. If you are interested in examples like these, then certainly yes, this is doable and reliable. The second example is trivial: you just present the NN one character at a time, encode input and output as one-hot vectors (one neuron per character), and it will do the job. The first example can be solved by converting the left and right parts into a one-hot input vector representing one of all possible combinations of the two symbols, and having two outputs that ask the NN to select whether the first or the second part should be chosen (better ways to encode the input exist, especially for long strings). Provided you have enough training examples, everything should work fine.
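To make the per-character, one-hot setup from the second example concrete, here is a small sketch (plain numpy, one neuron per lowercase letter, with the d-to-f rule from the question):
import numpy as np
import string

alphabet = string.ascii_lowercase                  # 26 characters, one neuron each

def one_hot(c):
    v = np.zeros(len(alphabet))
    v[alphabet.index(c)] = 1.0
    return v

# one training pair per character: one-hot input, one-hot target after the d -> f substitution
pairs = [(one_hot(c), one_hot("f" if c == "d" else c)) for c in alphabet]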
Generally, the days when it was prohibitively difficult for NNs to deal with text are long gone. NNs can now be trained to do machine translation (better than any other method) and even, to some extent, to predict the output of simple computer programs from the program's character string (though this is still a difficult task for NNs).
First of all, I know this question is kind of off-topic, but I have already tried asking elsewhere and got no response.
Adding a UNK token to the vocabulary is a conventional way to handle out-of-vocabulary (OOV) words in NLP tasks. It is totally understandable to have it for encoding, but what's the point of having it for decoding? I mean, you would never expect your decoder to generate a UNK token during prediction, right?
Depending on how you preprocess your training data, you might need the UNK during training. Even if you use BPE or another subword segmentation, OOV tokens can appear in the training data: usually some weird UTF-8 artifacts, fragments of alphabets you are not interested in at all, etc.
For example, if you take the WMT training data for English-German translation, run BPE, and take the vocabulary, your vocabulary will contain thousands of Chinese characters that occur exactly once in the training data. Even if you keep them in the vocabulary, the model has no chance to learn anything about them, not even to copy them. It makes sense to represent them as UNKs.
Of course, what you usually do at inference time is prevent the model from predicting UNK tokens, since UNK is always an incorrect output.
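As a minimal sketch of both points (a toy corpus and hypothetical names; singleton tokens become UNK when the vocabulary is built, and the UNK logit is masked at inference so it can never be predicted):
from collections import Counter
import numpy as np

train_sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "嗨", "sat"]]  # "嗨" occurs once
counts = Counter(tok for sent in train_sentences for tok in sent)

# tokens seen only once are mapped to UNK during training
vocab = ["<unk>"] + sorted(tok for tok, c in counts.items() if c > 1)
unk_id = 0
def encode(tok):
    return vocab.index(tok) if tok in vocab else unk_id

# at inference time, forbid the decoder from ever emitting UNK
logits = np.random.randn(len(vocab))   # stand-in for the decoder's scores over the vocabulary
logits[unk_id] = -np.inf               # UNK is always incorrect, so mask it out
prediction = vocab[int(np.argmax(logits))]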
I have used it once, in the following situation:
I had preprocessed word embeddings (word2vec-style, glove.6b.50d.txt) and my model was outputting an embedding vector. In order to transform it into a word, I used cosine similarity against all vectors in the embedding table; if the most similar vector was the UNK vector, I would output it.
Maybe I'm just guessing here, but what I think might happen under the hood is that it predicts based on previous words (e.g. it predicts the word that appeared 3 iterations ago), and if that word is the UNK, the neural net outputs it.
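For reference, the nearest-word lookup described above looks roughly like this (a sketch; the small random table stands in for the real glove.6b.50d.txt vectors):
import numpy as np

# toy stand-in for the GloVe table: word -> 50-dimensional vector
embeddings = {"<unk>": np.random.randn(50), "cat": np.random.randn(50), "dog": np.random.randn(50)}

def nearest_word(vec):
    # cosine similarity against every vector in the table
    best_word, best_sim = None, -np.inf
    for word, w in embeddings.items():
        sim = np.dot(vec, w) / (np.linalg.norm(vec) * np.linalg.norm(w))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word   # if this is "<unk>", that is what gets output

predicted = nearest_word(np.random.randn(50))   # stand-in for the model's output embedding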
I already understand the uses and the concept behind one-hot encoding with neural networks. My question is just how to implement the concept.
Let's say, for example, I have a neural network that takes in up to 10 letters (not case sensitive) and uses one-hot encoding. Each input will be a 26-dimensional vector of some kind for each spot. In order to code this, do I act as if I have 260 inputs, each displaying only a 1 or 0, or is there some other standard way to implement these 26-dimensional vectors?
In your case, you have to distinguish between the various frameworks. I can speak for PyTorch, which is my go-to framework when programming a neural network.
There, one-hot encodings for sequences are generally handled by having your network expect a sequence of indices. Taking your 10 letters as an example, this could be the sequence ["a", "b", "c", ...].
The embedding layer will be initialized with a "dictionary length", i.e. the number of distinct elements (num_embeddings) your network can receive - in your case 26. Additionally, you can specify embedding_dim, i.e. the output dimension of a single character. This is already past the step of one-hot encodings, since you generally only need those to know which value to associate with which item.
Then, you would feed a coded version of the above string to the layer, which could look like this: [0, 1, 2, 3, ...]. Assuming the sequence is of length 10, this will produce an output of shape [10, embedding_dim], i.e. a 2-dimensional Tensor.
To summarize, PyTorch essentially allows you to skip the rather tedious step of one-hot encoding. This is mainly because your vocabulary can in some instances be quite large: consider, for example, machine translation systems, in which you could have 10,000+ words in the vocabulary. Instead of storing every single word as a 10,000-dimensional vector, using a single index is more convenient.
If that does not completely answer your question (since I am essentially telling you what is generally preferred): instead of making a 260-dimensional vector, you would use a [10, 26] Tensor, in which each row represents a different letter.
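To make that concrete, here is a minimal PyTorch sketch of the index-based approach (26 letters; the embedding_dim of 8 is arbitrary), followed by the explicit [10, 26] one-hot alternative:
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=26, embedding_dim=8)   # "dictionary length" 26

word = "helloworld"                                            # 10 lowercase letters
indices = torch.tensor([ord(c) - ord("a") for c in word])      # coded version, e.g. [7, 4, 11, ...]
embedded = embedding(indices)                                  # shape [10, 8], i.e. [10, embedding_dim]

# the explicit one-hot version from the last paragraph: a [10, 26] tensor, one row per letter
one_hot = torch.nn.functional.one_hot(indices, num_classes=26).float()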
If you have 10 elements (e.g. a, b, ..., j, or 1, 2, ..., 10) to be represented as one-hot vectors of dimension 26, then your input is just 10 vectors, each of which is a 26-dimensional vector. Do this:
import torch
y = torch.eye(26)        # identity matrix: one one-hot vector of length 26 per letter
y[torch.arange(0, 10)]   # picks the first 10 rows: 10 one-hot vectors, each of dimension 26
Hope this helps a bit.
I'm using the functional API to create an input layer, feed that into a time-distributed layer, and then feed that into an LSTM. As of now, it looks something like this:
from keras.layers import Input, TimeDistributed
input_layer = Input(shape=(100, 10, 20))
layer_2 = TimeDistributed(SomeLayer(params))(input_layer)
My issue is that I'd like to feed time sequences of various lengths into my neural net, and not just sequences of a hundred time steps.
Is this doable?
If you want to allow a variable number of time steps, you can pad the sequences to length = max time steps:
from keras.preprocessing import sequence
data_array = sequence.pad_sequences(data_array, maxlen=max_timesteps)
EDIT:
I found the answer below, which might be useful. You just have to keep the number of time steps the same within a batch; it can vary across batches:
Training an RNN with examples of different lengths in Keras
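For completeness, here is a sketch of what that looks like with the functional API from the question (the layer sizes are made up, and Flatten stands in for SomeLayer; the key change is None instead of 100 for the time dimension, combined with padding each batch to its own length):
from keras.models import Model
from keras.layers import Input, TimeDistributed, Flatten, LSTM, Dense

input_layer = Input(shape=(None, 10, 20))            # None = variable number of time steps
layer_2 = TimeDistributed(Flatten())(input_layer)    # (batch, time, 200)
encoded = LSTM(64)(layer_2)                          # consumes whatever sequence length the batch has
output = Dense(1)(encoded)
model = Model(input_layer, output)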
I have a rather large (not too large, but possibly 50+) set of conditions that must be placed on a set of data (or rather, the data should be manipulated to fit the conditions).
For example, suppose I have a sequence of binary digits of length n;
if n = 5 then an element in the data might be {0,1,1,0,0} or {0,0,0,1,1}, etc.
But there might be a set of conditions such as
x_3 + x_4 = 2
sum(x_even) <= 2
x_2*x_3 = x_4 mod 2
etc...
Because the conditions are quite complex (they come from experiment, although they can be written down in logical form) and are hard to diagnose, I would like instead to use a large sample set of valid data, i.e. data that I know satisfies the conditions. In other words, it is easier to collect the data than it is to deduce the conditions that the data must abide by.
Having said that, basically what I'm doing is very similar to training a neural network. The difference is that I would like an actual algorithm, optimal in some sense, in some form of code that I can run instead of the network.
It might not be clear what I'm actually trying to do. What I have is a set of data in some raw format that is unique and unambiguous but not appropriate for my needs (in a sense, the amount of data is too large).
I need to map the data into another set that is ambiguous to some degree but also has a certain specific set of constraints that all the data follows (certain things just cannot happen, while others are preferred).
The unique constraints and preferences are hard to figure out. That is, the mapping from the non-ambiguous set to the ambiguous set is hard to describe (which is why it is ambiguous). The goal, actually, is to end up with an unambiguous map by supplying the right constraints, if at all possible.
So, in the vein of my initial example, I'm given (or supply) a set of elements and need some way to derive a list of constraints similar to what I've listed.
In a sense, I simply have a set of valid data and train on it, very much like a neural network.
Then, after this "training", I'm given a mapping function I can use on any element in my dataset, and it will produce a new element satisfying the constraints, or, if it can't, will give a result that is as close as possible to unambiguous.
The main difference between neural networks and what I'm trying to achieve is that I'd like to have an algorithm, in code, that I can use instead of having to run a neural network. Such an algorithm would probably be a lot less complex, not need potential retraining, and be a lot faster.
Here is a simple example.
Suppose my "training set" is the following set of binary sequences and mappings:
01000 => 10000
00001 => 00010
01010 => 10100
00111 => 01110
Then, from the "Magical Algorithm Finder" (tm), I would get a mapping like
f(x) = x rol 1 (rol = rotate left)
or whatever way one would want to express it.
Then I could simply apply f to any other element, such as x = 011100, and generate a hopefully unambiguous output.
Of course there are many such functions that will work on this example, but the goal is to supply enough of the dataset to narrow it down to hopefully a few functions that make the most sense (and that, at the very least, will always map the training set correctly).
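To illustrate what such a "finder" could do on this toy example, here is a sketch that searches a deliberately tiny hypothesis space (rotations only) and keeps every candidate that reproduces the whole training set; a real solution would of course need a much richer space of candidate functions:
training = {"01000": "10000", "00001": "00010", "01010": "10100", "00111": "01110"}

def rol(s, k):
    # rotate the bit string s left by k positions
    k %= len(s)
    return s[k:] + s[:k]

# keep every rotation amount that maps the entire training set correctly
candidates = [k for k in range(5) if all(rol(x, k) == y for x, y in training.items())]
print(candidates)   # [1], i.e. f(x) = x rol 1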
In my specific case I could easily convert my problem into mapping the set of binary digits of length m to the set of base-B digits of length n. The constraints prevent some numbers from having an inverse, i.e. the mapping is injective but not surjective.
My algorithm could be a simple collection of if statements acting on the digits, if need be.
I think what you are looking for here is an application of Learning Classifier Systems (LCS; see the Wikipedia article at https://en.wikipedia.org/wiki/Learning_classifier_system). There are actually quite a few open-source LCS implementations available, but you may need to experiment with the parameters in order to get a good result.
LCS/XCS/ZCS have the features you are looking for, including individual rules that can be heavily optimized, pressure to reduce the rule set, and, of course, a human-readable/understandable set of rules (unlike a neural net).
I have several datasets, i.e. matrices, that have 2 columns: one with a MATLAB date number and a second one with a double value. Here is an example excerpt from one of them:
>> S20_EavesN0x2DEAir(1:20,:)
ans =
1.0e+05 *
7.345016409722222 0.000189375000000
7.345016618055555 0.000181875000000
7.345016833333333 0.000177500000000
7.345017041666667 0.000172500000000
7.345017256944445 0.000168750000000
7.345017465277778 0.000166875000000
7.345017680555555 0.000164375000000
7.345017888888889 0.000162500000000
7.345018104166667 0.000161250000000
7.345018312500001 0.000160625000000
7.345018527777778 0.000158750000000
7.345018736111110 0.000160000000000
7.345018951388888 0.000159375000000
7.345019159722222 0.000159375000000
7.345019375000000 0.000160625000000
7.345019583333333 0.000161875000000
7.345019798611111 0.000162500000000
7.345020006944444 0.000161875000000
7.345020222222222 0.000160625000000
7.345020430555556 0.000160000000000
Now that I have these different sensor values, I need to combine them into one matrix so that I can perform clustering, train a neural net, and so on. The only problem is that the sensor data was taken with slightly different timings or timestamps, and there is nothing I can do about that from a data-collection point of view.
My first thought was interpolation to make one sensor dataset fit another one, but that seems like a messy approach, and I was wondering whether I am missing something: a toolbox or function that would enable me to do this quicker without fiddling around. To complicate things further, the number of sensors grew over time, so I am looking at different start dates as well.
Does anyone have a good idea on how to go about this? Thanks.
I think your first thought about interpolation was the correct one, at least if you plan to use NNs. Another option would be to use approaches designed to deal with missing data, like Dempster-Shafer theory (http://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory), for example.
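For the interpolation route, MATLAB's interp1 does this directly; here is the same idea as a small Python/numpy sketch, with made-up sensor arrays standing in for your matrices:
import numpy as np

# hypothetical stand-ins for two sensors sampled at slightly different times
t1 = np.array([0.0, 1.0, 2.0, 3.0]); v1 = np.array([0.10, 0.12, 0.11, 0.13])
t2 = np.array([0.1, 1.1, 2.1, 3.1]); v2 = np.array([0.20, 0.22, 0.21, 0.23])

# resample sensor 2 onto sensor 1's time grid (in MATLAB: interp1(t2, v2, t1))
v2_on_t1 = np.interp(t1, t2, v2)
combined = np.column_stack([t1, v1, v2_on_t1])   # one row per common timestamp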
It's hard to give an answer for the clustering part, because I have no idea what you're looking for in the data.
For the neural network, besides interpolating, there are at least two other methods that come to mind:
training a separate network for each matrix
feeding them all to the same network, with a flag specifying which matrix the data is coming from, i.e. something like: input (timestamp, flag_m1, flag_m2, ..., flag_mN) => target (value), where the flag_m* columns are mutually exclusive boolean values, i.e. flag_mK is 1 iff the line comes from matrix K and 0 otherwise (a sketch of this encoding follows below).
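A minimal sketch of that flag-based encoding, assuming just two matrices m1 and m2 with (timestamp, value) rows (hypothetical names and values):
import numpy as np

m1 = np.array([[0.0, 0.10], [1.0, 0.12]])   # (timestamp, value) rows from matrix 1
m2 = np.array([[0.1, 0.20], [1.1, 0.22]])   # (timestamp, value) rows from matrix 2

# inputs: (timestamp, flag_m1, flag_m2); targets: value
x1 = np.column_stack([m1[:, 0], np.ones(len(m1)), np.zeros(len(m1))])
x2 = np.column_stack([m2[:, 0], np.zeros(len(m2)), np.ones(len(m2))])
inputs = np.vstack([x1, x2])
targets = np.concatenate([m1[:, 1], m2[:, 1]])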
These are the only things I can safely say with the amount of information you provided.