PyBrain: how to interpret the results from net.activate?

I've trained a network on PyBrain for purpose of classification and am ready to fire away with specific input. However, when I do
classes = ['apple', 'orange', 'peach', 'banana']
data = ClassificationDataSet(len(input), 1, nb_classes=len(classes), class_labels=classes)
data._convertToOneOfMany()  # recommended by PyBrain
fnn = buildNetwork( data.indim, 5, data.outdim, outclass=SoftmaxLayer )
trainer = BackpropTrainer( fnn, dataset=data, momentum=m, verbose=True, weightdecay=wd)
trainer.trainUntilConvergence(maxEpochs=80)
# stop training and start using my trained network here
output = fnn.activate(input)
As expected, I get a numeric value for "output", but is there a way to determine the predicted class label directly? Even if there's not one, how can I map the value of "output" to my class label? Thank you for your help.

When you say you get a numeric value for "output", do you mean a scalar (that is, not an array)? From my understanding, you should have gotten an array of four values (i.e. as many values as you have possible output classes). The biggest value in that array corresponds to the index of the predicted class. I don't know if PyBrain provides a utility function to extract that, but you can do it like this:
class_index = max(xrange(len(output)), key=output.__getitem__)
class_name = classes[class_index]
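Since fnn.activate returns a NumPy array, argmax does the same job more directly (a minimal sketch, assuming output really is such an array):
class_index = output.argmax()
class_name = classes[class_index]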
Incidentally, you omitted the step where you actually fill the dataset with samples.
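That step might look like this (a sketch; the samples list of (features, label) pairs is my own placeholder):
for features, label in samples:
    data.addSample(features, [classes.index(label)])  # target is the class index
data._convertToOneOfMany()  # then one-hot encode the targets, as above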

Related

How to predict the outcome variables using a saved pipeline when the data set does not contain the actual outcome?

I have a data set that contains the following columns: outcome (this is the outcome that we want to predict), and raw (a column that consists of text). I want to develop an ML model that will predict the outcome from the raw column. I have trained an ML model in Databricks using the following pipeline:
regexTokenizer = RegexTokenizer(inputCol="raw", outputCol="words", pattern="\\W")
countVec = CountVectorizer(inputCol="words", outputCol="features")
indexer = StringIndexer(inputCol="outcome", outputCol="label").setHandleInvalid("skip").fit(trainDF)
inverter = IndexToString(inputCol="prediction", outputCol="prediction_label", labels=indexer.labels)
nb = NaiveBayes(labelCol="label", featuresCol="features", smoothing=1.0, modelType="multinomial")
pipeline = Pipeline(stages=[regexTokenizer, indexer, countVec, nb, inverter])
model = pipeline.fit(trainDF)
model.write().overwrite().save("/FileStore/project")
In another notebook, I load the model and try to predict the values for a new data set. This data set does not contain the outcome variable ("outcome" in this case):
model = PipelineModel.load("/FileStore/project")
score_output_df = model.transform(score_this)
When I try to predict the values for the new data set, I get an error message that the column "outcome" cannot be found. I suspect that this is because some stages in the pipeline transform this column (the indexer and inverter stages convert the outcome column to numbers and then back to string labels).
My question is this: how can I load a saved model and use it to predict values when the original pipeline contains stages that take this column as input?
Instead of using
model.write().overwrite().save("/FileStore/project")
you have to write it like this:
model.write().overwrite().save("/FileStore/project/model.sav")
and then for loading you will use this:
model = PipelineModel.load("/FileStore/project/model.sav")
score_output_df = model.transform(score_this)
I have found a solution to the problem and will post it here so that anyone who faces the same problem can benefit from it. The solution was simply to extract the stages that I want to use for prediction and assign them back to the model, like this:
model = PipelineModel.load("/FileStore/project")
stages1 = []
stages1 += [model.stages[0]]
stages1 += [model.stages[2]]
stages1 += [model.stages[3]]
stages1 += [model.stages[4]]
model.stages = stages1
score_output_df = model.transform(score_this)
In this code, I exclude the second stage (index 1) because it contains the indexer. Once I do this, I can predict values even when the "outcome" column is not available.
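Equivalently, you can build a fresh PipelineModel from the stages you want instead of mutating the loaded one in place (a sketch; constructing a PipelineModel directly from a list of fitted stages works in pyspark, but check your version):
from pyspark.ml import PipelineModel
model = PipelineModel.load("/FileStore/project")
# keep every stage except the StringIndexer at index 1
prediction_model = PipelineModel([s for i, s in enumerate(model.stages) if i != 1])
score_output_df = prediction_model.transform(score_this)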

map and sort from alphanumeric data

I have the file "global power plants" with a column "capacity_in_mw" (with numbers 30, 100, 45, ...) and another column "primary_fuel" (Coal, Hydro, Oil, Solar, Nuclear, Wind, ...).
I can generate a map as a function of "capacity_in_mw" by setting the condition
plotdata = data.query('capacity_in_mw > 50')
Now, I would like to generate a map as a function of "primary_fuel". Because the data is alphanumeric, how do I set up the condition?
Furthermore, when making the map, how do I assign color='black' for Coal, color='green' for Wind, color='yellow' for Solar, etc.?
I am using Python.
I am a novice, but I think I found the solution. It was mostly a matter of syntax: use == in the query to match the alphanumeric value.
plotdata = data.query('primary_fuel == "Hydro"')
Also, lesson learned: I will dig more before posting a question in the future.
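For the colour part of the question, one option is to map primary_fuel to a colour column and pass that to the plot (a sketch; the palette dict and the longitude/latitude column names are assumptions about the data set):
fuel_colors = {'Coal': 'black', 'Wind': 'green', 'Solar': 'yellow',
               'Hydro': 'blue', 'Oil': 'brown', 'Nuclear': 'red'}
plotdata = data.copy()
plotdata['color'] = plotdata['primary_fuel'].map(fuel_colors)
plotdata.plot.scatter(x='longitude', y='latitude', c=plotdata['color'])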

What is the correct svmlight input format in Mallet?

I am using Mallet with the SVMLight input format to do classification using the NaiveBayes classifier, but I get a NumberFormatException. I'm wondering how I can use string features when using SVMLight. As I read in the guideline [1], the features can also be strings.
Can anyone tell me what is wrong with my code or input?
Here is my code:
public void trainMalletNaiveBayes() throws Exception {
    ArrayList<Pipe> pipes = new ArrayList<Pipe>();
    pipes.add(new SvmLight2FeatureVectorAndLabel());
    pipes.add(new PrintInputAndTarget());
    SerialPipes pipe = new SerialPipes(pipes);
    //prepare training instances
    InstanceList trainingInstanceList = new InstanceList(pipe);
    trainingInstanceList.addThruPipe(new CsvIterator(new FileReader("/tmp/featureFiles_svm.csv"), "^(\\S*)[\\s,]*(.*)$", 2, 1, -1));
    //prepare test instances
    InstanceList testingInstanceList = new InstanceList(pipe);
    testingInstanceList.addThruPipe(new CsvIterator(new FileReader("/tmp/test_set.csv"), "^(\\S*)[\\s,]*(.*)$", 2, 1, -1));
    ClassifierTrainer trainer = new NaiveBayesTrainer();
    Classifier classifier = trainer.train(trainingInstanceList);
}
And here are the first three lines of my input file:
No f1:NP f2:NN f3:1 f4:1 f5:0 f6:0 f7:0 f8:0.0 f9:1 f10:true f11:false f12:false f13:false f14:false f15:ROOT f16:NN f17:NOTHING
No f1:NP f2:NN f3:8 f4:4 f5:0 f6:0 f7:1 f8:4.127134385045092 f9:8 f10:true f11:false f12:false f13:false f14:false f15:ROOT f16:DT f17:NOTHING
Yes f1:NP f2:NN f3:4 f4:3 f5:0 f6:0 f7:0 f8:0.0 f9:4 f10:true f11:false f12:false f13:false f14:false f15:NP f16:DT f17:NN
The first column is the label of the instance, and the rest of the line contains the features and their values. For example, NN shows the POS of the head word of a phrase.
However, I get the exception for the NN (NumberFormatException: For input string: "NN"). I'm wondering why it has no problem with the NP that comes before it, but stops at the NN.
All features need to have numeric values. For booleans you can use true=1 and false=0. You would also have to modify f1:NP to f1_NP=1.
The reason it's not dying on the NP is that the SvmLight2FeatureVectorAndLabel class is expecting to parse an entire line (label and data), but the code is reading the file with a CsvIterator that is splitting off the first element as a label.
The classify.tui.SvmLight2Vectors class uses this code for an iterator:
new SelectiveFileLineIterator (fileReader, "^\\s*#.+")
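For illustration, once the string and boolean features are encoded as described above, the first input line might look like this (a sketch using the feature:value syntax the SVMLight reader expects; the exact feature names are up to you):
No f1_NP:1 f2_NN:1 f3:1 f4:1 f5:0 f6:0 f7:0 f8:0.0 f9:1 f10:1 f11:0 f12:0 f13:0 f14:0 f15_ROOT:1 f16_NN:1 f17_NOTHING:1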

Why does suppressing weights improve TensorFlow neural net performance?

I have a 2-layer non-convolutional network in TensorFlow, using tanh as the activation function. I understand that weights should be initialized with a truncated normal distribution divided by sqrt(nInputs), e.g.:
weightsLayer1 = tf.Variable(tf.div(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]), math.sqrt(nInputUnits)))
Being a bit of a bumbling newbie in NN and Tensorflow, I mistakenly implemented this as 2 lines only to make it more readable:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]))
weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits))
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step. However, to my surprise, the "incorrect" implementation consistently yields better performance, in both the train and test/evaluation datasets. I thought that the incorrect 2-line implementation should be a train wreck, since it recomputes (suppresses) the weights to values other than those chosen by the optimizer, which I would expect to wreak havoc in the optimization process, but it actually improves it. Does anyone have any explanation for this? I am using the TensorFlow Adam optimizer.
Update 2016.6.22 - updated the 2nd code block above.
You are right that weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits)) is executed at each step. But that does NOT mean that the values in the weight variable are scaled down by sqrt(nInputUnits) in each step. This line is not an in-place operation that affects the values stored in the variable. It computes a new tensor, holding the values in the variable divided by sqrt(nInputUnits), and that tensor, I assume, then goes into the rest of your computation graph. This does not interfere with the optimizer. You are still defining a valid computation graph, just with a somewhat arbitrary scaling of the weights. The optimizer can still compute the gradients with respect to this variable (it will back-propagate through your division operation) and create the corresponding update operations.
In terms of the model that you are defining, the two versions are totally equivalent. For any set of values of weightsLayer1 in the original model (where you don't do the division), you can simply scale them up by sqrt(nInputUnits) and you will get the identical results with your second model. The two represent exactly the same model class, if you will.
Why does one work better than the other? Your guess is as good as mine. If you have done the same division for all your variables, you have effectively divided your learning rate by sqrt(nInputUnits). This smaller learning rate might have been beneficial to the problem at hand.
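To sketch that intuition (my notation, not from the answer above): write c = sqrt(nInputUnits) and let w_eff = w / c be the weights the network actually uses. Since dL/dw = (1/c) * dL/dw_eff, a plain SGD step of size lr moves w by (lr/c) * dL/dw_eff, so w_eff moves by (lr/c^2) * dL/dw_eff, exactly as if the undivided model were trained with learning rate lr/c^2. Adam's per-parameter steps have magnitude roughly lr regardless of gradient scale, so with Adam w_eff moves in steps of roughly lr/c; either way the division acts like a smaller learning rate.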
Edit: I think the fact that you give the same name to the variable and the newly created tensor causes confusion. When you do
A = tf.Variable(1.0)
A = tf.mul(A, 2.0)
# Do something with A
then the second line creates a new tensor (as discussed above) and you re-bind the name (and it is only a name) A to that new tensor. For the graph being defined, the naming is absolutely irrelevant. The following code defines the same graph:
A = tf.Variable(1.0)
B = tf.mul(A, 2.0)
# Do something with B
Maybe this becomes clear if you execute the following code:
A = tf.Variable(1.0)
print A
B = A
A = tf.mul(A, 2.0)
print A
print B
The output is
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
Tensor("Mul:0", shape=(), dtype=float32)
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
The first time you print A it tells you that A is a variable object. After executing A = tf.mul(A, 2.0) and printing A again, you can see that the name A is now bound to a tf.Tensor object. However, the variable still exists, as can be seen by looking at the object behind the name B.
This is what the single line of code does:
t = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
Creates a tensor with shape [ nInputUnits, nUnitsHiddenLayer1 ], filled with samples from a truncated normal distribution with standard deviation 1.0 (1.0 is the default stddev value).
t1 = tf.div( t, math.sqrt( nInputUnits ) )
divides all values in t by math.sqrt( nInputUnits )
Your two lines of code do exactly the same thing: the first line creates the tensor and the second line divides all of its values by math.sqrt( nInputUnits ).
As for your statement:
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step.
EDIT my mistake
Indeed you are right: they are divided by math.sqrt( nInputUnits ) at every execution, but not reinitialized! The point of importance is where you put tf.Variable().
Here both lines run only once, at initialization:
weightsLayer1 = tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] )
weightsLayer1 = tf.Variable( tf.div( weightsLayer1, math.sqrt( nInputUnits ) ) )
and here the second line is performed at every step:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ] ) )
weightsLayer1 = tf.div( weightsLayer1, math.sqrt( nInputUnits ) )
Why does the second yield better results? It looks like some kind of normalization to me, but somebody more knowledgeable should verify that.
P.S.
You can write it more readably like this:
weightsLayer1 = tf.Variable( tf.truncated_normal( [ nInputUnits, nUnitsHiddenLayer1 ], stddev = 1. / math.sqrt( nInputUnits ) ) )

I'm having trouble shuffling a deck of cards in MATLAB. Need help to see where I went wrong

I currently have the deck of cards coded, but it is unshuffled. This is for programming the card game of War if it helps. I need to shuffle the deck, but whenever I do, it will only shuffle together the card numbers and the suits, not the full card. For example, I have A identified as an ace and the suits come after each number. A normal card would be "AH" (an ace of hearts) or "6D" (a six of diamonds). Instead, it will output "5A" as one of the cards, as in a 5 of aces. I don't know how to fix this, but the code that I currently have is this:
card_nums = ('A23456789TJQK')';
card_suits = ('HDSC')';
unshuffled_deck = [repmat(card_nums,4,1),repmat(card_suits,13,1)];
disp(unshuffled_deck)
shuffled_deck = unshuffled_deck(randperm(numel(unshuffled_deck)));
disp(shuffled_deck)
I would appreciate any help with this, and thank you very much for your time!
You're creating a random permutation of all of the elements from both columns of unshuffled_deck combined. Instead you need to create a random permutation of the rows of unshuffled_deck:
shuffled_deck = unshuffled_deck(randperm(size(unshuffled_deck,1)),:);
The call to size gives you the number of rows in the deck array, then we get a random permutation of the row indices, and copy the row (value, suit) as a single entity.
Here's a version using a structure array, in response to @Carl Witthoft's comment. I was afraid it would add too much complexity to the solution, but it really isn't bad:
card_nums = ('A23456789TJQK')';
card_suits = ('HDSC')';
deck_nums = repmat(card_nums,4,1);
deck_suits = repmat(card_suits,13,1);
cell_nums = cellstr(deck_nums).'; %// Change strings to cell arrays...
cell_suits = cellstr(deck_suits).'; %// so we can use them in struct
%// Construct a struct array with fields 'value' and 'suit'
unshuffled_deck = struct('value',cell_nums,'suit',cell_suits);
disp('unshuffled deck:');
disp([unshuffled_deck.value;unshuffled_deck.suit]);
%// Shuffle the deck using the number of elements in the structure array
shuffled_deck = unshuffled_deck(randperm(numel(unshuffled_deck)));
disp('shuffled deck:');
disp([shuffled_deck.value; shuffled_deck.suit]);
Here's a test run:
unshuffled deck:
A23456789TJQKA23456789TJQKA23456789TJQKA23456789TJQK
HDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSCHDSC
shuffled deck:
4976TT93KTJQJATK953A75QA82Q6226K5J784J4A3372486K859Q
CHSSSHCDSCSSHDDCDSHHCDHSDDCDHCCHHCHHHDDCSCDSSCHDSCSD
To access an individual card, you can do:
>> shuffled_deck(2)
ans =
scalar structure containing the fields:
value = 9
suit = H
Or you can access the individual fields:
>> shuffled_deck(2).value
ans = 9
>> shuffled_deck(2).suit
ans = H
Unfortunately, I don't know of any way to simply index the struct array and get, for instance, 9H as you would in a regular array using disp(shuffled_deck(2,:)). In this case, the only option I know of is to explicitly concatenate each field:
disp([shuffled_deck(2).value,shuffled_deck(2).suit]);
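If you want the whole shuffled deck as two-character strings, one workaround (a sketch using arrayfun; the cards variable name is my own) is:
cards = arrayfun(@(c) [c.value, c.suit], shuffled_deck, 'UniformOutput', false); %// 1x52 cell array like {'4C','9H',...}
disp(strjoin(cards, ' ')); %// prints e.g. 4C 9H 7S ...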