incorrect result of fasttext model - pyspark

I created a fastText model to do sentiment analysis on comments. I used a training file with 51% positive comments, 47% negative and 2% neutral.
When I test it on given sentences, the probabilities always come out split roughly 0.49 positive, 0.47 negative, 0.02 neutral, even if I type a single negative word.
I have the following code:
import fasttext

# Train a supervised classifier, save it, reload it, then ask for the top-3 labels
model = fasttext.train_supervised(TRAIN_FILE, lr=0.1, dim=20, epoch=20, word_ngrams=1, loss='softmax')
model.save_model(MODEL_FILE)
model = fasttext.load_model(MODEL_FILE)
pred = model.predict(['bad'], k=3)
print(pred)
I always get a result of around 0.49 positive, 0.47 negative, 0.02 neutral:
([['__label__négatif', '__label__positif', '__label__neutre']], array([[0.49168783, 0.47954634, 0.02879585]]))
Can someone tell me where the mistake lies?
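One sanity check worth running first (a sketch, assuming the standard fastText Python bindings): predictions that mirror the class priors usually mean the model learned nothing from the text itself, which most often comes from a malformed training file. Supervised fastText expects each training line to look like __label__positif <comment text>.

# Sanity checks on the trained model
print(model.labels)               # should be exactly the three sentiment labels
print(len(model.words))           # a tiny vocabulary suggests a malformed file
print(model.predict('bad', k=3))  # predict on a plain string rather than a list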


Please help debug my call to scipy's Kolmogorov-Smirnov test

I am completing an assignment but cannot get the right results from a Kolmogorov-Smirnov test for a small sample of observations against a 'norm' distribution.
I have set up a minimal example in a Jupyter notebook with the expected kstest results, tried running it in several environments, and reviewed the call for hours. The answer key says my ks_value and p_value are wildly wrong.
But I cannot see my error.
The sample is from the test run in the answer key. It is a 1-d array, a valid input option.
The sample mean and standard deviation I compute look right.
If I change ddof it makes a small difference (the hint is to use ddof=0).
'norm' is a valid distribution for kstest.
The library documentation is at
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.kstest.html#scipy-stats-kstest
Any ideas or comments?
Would you expect sample = [0.37, 0.27, 0.69, 0.56, 0.26] compared to a normal distribution to have a KS test statistic of 0.64 or 0.24, and a p-value of 0.02 or 0.94?
TIA
import numpy as np
from scipy.stats import kstest

sample = [0.37, 0.27, 0.69, 0.56, 0.26]
# Fit location and scale from the sample (population std, per the hint)
normal_args = (np.mean(sample), np.std(sample, ddof=0))
print('mean', normal_args[0])
print('std', normal_args[1])
ks_value, p_value = kstest(sample, 'norm', normal_args)
print('ks_value', ks_value)
print('p_value', p_value)
print('')
print('#####posted solution')
print('expected ks_value = 0.63919407')
print('expected p_value = 0.01650327')
mean 0.43000000000000005
std 0.1688786546606764
ks_value 0.23881183701141995
p_value 0.9379686201081335
#####posted solution
expected ks_value = 0.63919407
expected p_value = 0.01650327
My bad, a new-guy mistake.
The function defines the third argument as args=(). I had passed normal_args in positionally. Changing the call to
ks_value, p_value = kstest(sample, 'norm', args=normal_args)
yields the correct response.
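For reference, the statistic kstest returns here can be reproduced by hand, which makes clear what the call is computing (a sketch using only numpy and scipy.stats.norm):

import numpy as np
from scipy.stats import norm

data = np.sort([0.37, 0.27, 0.69, 0.56, 0.26])
n = len(data)
# CDF of the fitted normal at each sorted observation
cdf = norm.cdf(data, loc=np.mean(data), scale=np.std(data, ddof=0))
d_plus = np.max(np.arange(1, n + 1) / n - cdf)   # ECDF above the fitted CDF
d_minus = np.max(cdf - np.arange(0, n) / n)      # fitted CDF above the ECDF
print(max(d_plus, d_minus))  # ~0.2388, matching kstest with the fitted args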

pinv(H) is not equal to pinv(H'*H)*H'

I'm testing the y = SinC(x) function with a single hidden layer feedforward neural network (SLFN) with 20 neurons.
For an SLFN, the output weights (OW) of the output layer can be written as
OW = pinv(H)*T
and after adding a regularization parameter gamma this becomes
OW = pinv(I/gamma + H'*H)*H'*T
As gamma -> Inf, pinv(H'*H)*H'*T == pinv(H)*T, and likewise pinv(H'*H)*H' == pinv(H).
But when I try to compute pinv(H'*H)*H' and pinv(H), I find a huge difference between the two once the number of neurons exceeds 5 (with 5 or fewer, they are equal or almost the same).
For example, when H is a 10*10 matrix with cond(H) = 21137561386980.3 and rank(H) = 10:
H = [0.736251410036783 0.499731137079796 0.450233920602169 0.296610970576716 0.369359425954153 0.505556211442208 0.502934880027889 0.364904559142718 0.253349959726753 0.298697900877265;
0.724064281864009 0.521667364351399 0.435944895257239 0.337878535128756 0.364906002569385 0.496504064726699 0.492798607017131 0.390656915261343 0.289981152837390 0.307212326718916;
0.711534656474153 0.543520341487420 0.421761457948049 0.381771374416867 0.360475582262355 0.487454209236671 0.482668250979627 0.417033287703137 0.329570921359082 0.315860145366824;
0.698672860220896 0.565207057974387 0.407705930918082 0.427683127210120 0.356068794706095 0.478412571446765 0.472552121296395 0.443893207685379 0.371735862991355 0.324637323886021;
0.685491077062637 0.586647027111176 0.393799811411985 0.474875155650945 0.351686254239637 0.469385056318048 0.462458480695760 0.471085139463084 0.415948455902421 0.333539494486324;
0.672003357663056 0.607763454504209 0.380063647372632 0.522520267708374 0.347328559602877 0.460377531907542 0.452395518357816 0.498449772544129 0.461556360076788 0.342561958147251;
0.658225608290477 0.628484290731116 0.366516925684188 0.569759064961507 0.342996293691614 0.451395814182317 0.442371323528726 0.525823695636816 0.507817005881821 0.351699689941632;
0.644175558300583 0.648743139215935 0.353177974096445 0.615761051907079 0.338690023332811 0.442445652121229 0.432393859824045 0.553043275759248 0.553944175102542 0.360947346089454;
0.629872705346690 0.668479997764613 0.340063877672496 0.659781468051379 0.334410299080102 0.433532713184646 0.422470940392161 0.579948548513999 0.599160649563718 0.370299272759337;
0.615338237874436 0.687641820315375 0.327190410302607 0.701205860709835 0.330157655029498 0.424662569229062 0.412610204098877 0.606386924575225 0.642749594844498 0.379749516620049];
T=[-0.806458764562879 -0.251682808380338 -0.834815868451399 -0.750626822371170 0.877733363571576 1 -0.626938984683970 -0.767558933097629 -0.921811074815239 -1]';
There is a huge difference between pinv(H'*H)*H'*T and pinv(H)*T:
pinv(H'*H)*H'*T = [-4803.39093243484 3567.08623820149 668.037919243849 5975.10699147077
1709.31211566970 -1328.53407325092 -1844.57938928594 -22511.9388736373
-2377.63048959478 31688.5125271114]';
pinv(H)*T = [-19780274164.6438 -3619388884.32672 -76363206688.3469 16455234.9229156
-135982025652.153 -93890161354.8417 283696409214.039 193801203.735488
-18829106.6110445 19064848675.0189]'.
I also find that if I round H with round(H,2), pinv(H'*H)*H'*T and pinv(H)*T return the same answer, so I guess one reason might be floating-point issues inside MATLAB.
But since cond(H) is large, any small change to H can produce a large change in its inverse, so rounding is probably not a good way to test this. As Cris Luengo mentioned, with a large condition number, numerical imprecision will affect the accuracy of the inverse.
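One way to make the effect concrete (a numpy sketch with a synthetic matrix, not the H above): cond(H'*H) equals cond(H)^2 in exact arithmetic, so forming H'*H roughly doubles the number of significant digits lost, and with cond(H) around 1e13 the normal-equations route has essentially no accurate digits left in double precision.

import numpy as np

# Build a 10x10 matrix with condition number ~1e13, like the H above
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((10, 10)))
V, _ = np.linalg.qr(rng.standard_normal((10, 10)))
H = U @ np.diag(np.logspace(0, -13, 10)) @ V.T

print(np.linalg.cond(H))        # ~1e13
print(np.linalg.cond(H.T @ H))  # cond(H)**2 in exact arithmetic; numerically
                                # it saturates near 1/eps

direct = np.linalg.pinv(H)              # pinv(H)
normal = np.linalg.pinv(H.T @ H) @ H.T  # pinv(H'*H)*H'
print(np.abs(direct - normal).max())    # huge discrepancy, as in MATLAB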
In my test I use 1000 training samples with inputs in [-10,10] and noise in [-0.2,0.2]; the test samples are noise-free, and 20 neurons are selected. OW = pinv(H)*T gives reasonable results for SinC training, while the performance of OW = pinv(H'*H)*H'*T is worse. I then tried to increase the precision of H'*H with pinv(vpa(H'*H)), but there was no significant improvement.
Does anyone know how to solve this?
After some research, the answer is that ELM is very sensitive to scaling and to the activation function.
Please refer to this paper for details: https://dl.acm.org/citation.cfm?id=2797143.2797161
The paper https://ieeexplore.ieee.org/document/8533625 demonstrates a novel algorithm that improves the performance of ELM with respect to scaling.

3-layer neural network doesn't learn properly

So, I'm trying to implement a neural network with 3 layers in Python; however, I am not the brightest person, so anything with more than 2 layers is kind of difficult for me. The problem with this one is that it gets stuck at 0.5 and does not learn, and I have no actual clue where it went wrong. Thank you to anyone with the patience to explain the error to me. (I hope the code makes sense.)
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def reduce(x):
    return x*(1-x)

l0=[np.array([1,1,0,0]),
    np.array([1,0,1,0]),
    np.array([1,1,1,0]),
    np.array([0,1,0,1]),
    np.array([0,0,1,0]),
   ]
output=[0,1,1,0,1]
syn0=np.random.random((4,4))
syn1=np.random.random((4,1))
for justanumber in range(1000):
    for i in range(len(l0)):
        l1=sigmoid(np.dot(l0[i],syn0))
        l2=sigmoid(np.dot(l1,syn1))
        l2_err=output[i]-l2
        l2_delta=reduce(l2_err)
        l1_err=syn1*l2_delta
        l1_delta=reduce(l1_err)
        syn1=syn1.T
        syn1+=l0[i].T*l2_delta
        syn1=syn1.T
        syn0=syn0.T
        syn0+=l0[i].T*l1_delta
        syn0=syn0.T
print(l2)
PS: I know it might be a piece of trash as a script, but that is why I asked for assistance.
Your computations are not fully correct. For example, reduce is called on l1_err and l2_err, where it should be called on l1 and l2.
You are performing stochastic gradient descent, and with so few parameters it oscillates hugely; use full-batch gradient descent in this case.
The bias units are also missing, although technically you can still learn without bias.
I tried to rewrite your code with minimal changes, and have commented your lines to show the changes.
#!/usr/bin/python3
import matplotlib.pyplot as plt
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def reduce(x):
    return x*(1-x)

l0=np.array([np.array([1,1,0,0]),
             np.array([1,0,1,0]),
             np.array([1,1,1,0]),
             np.array([0,1,0,1]),
             np.array([0,0,1,0]),
            ])
output=np.array([[0],[1],[1],[0],[1]])
syn0=np.random.random((4,4))
syn1=np.random.random((4,1))
final_err = list()
gamma = 0.05
maxiter = 100000
for justanumber in range(maxiter):
    syn0_del = np.zeros_like(syn0)
    syn1_del = np.zeros_like(syn1)
    l2_err_sum = 0
    for i in range(len(l0)):
        this_data = l0[i,np.newaxis]
        l1=sigmoid(np.matmul(this_data,syn0))[:]
        l2=sigmoid(np.matmul(l1,syn1))[:]
        l2_err=(output[i,:]-l2[:])
        #l2_delta=reduce(l2_err)
        l2_delta=np.dot(reduce(l2), l2_err)
        l1_err=np.dot(syn1, l2_delta)
        #l1_delta=reduce(l1_err)
        l1_delta=np.dot(reduce(l1), l1_err)
        # Accumulate gradient for this point for layer 1
        syn1_del += np.matmul(l2_delta, l1).T
        #syn1=syn1.T
        #syn1+=l1.T*l2_delta
        #syn1=syn1.T
        # Accumulate gradient for this point for layer 0
        syn0_del += np.matmul(l1_delta, this_data).T
        #syn0=syn0.T
        #syn0-=l0[i,:].T*l1_delta
        #syn0=syn0.T
        # The error for this datapoint: mean sum of squares
        l2_err_sum += np.mean(l2_err ** 2)
    l2_err_sum /= l0.shape[0]  # Mean sum of squares
    syn0 += gamma * syn0_del
    syn1 += gamma * syn1_del
    print("iter: ", justanumber, "error: ", l2_err_sum)
    final_err.append(l2_err_sum)

# Predicting
l1=sigmoid(np.matmul(l0,syn0))[:]  # 5 x 4 * 4 x 4 = 5 x 4
l2=sigmoid(np.matmul(l1,syn1))[:]  # 5 x 4 * 4 x 1 = 5 x 1
print("Predicted: \n", l2)
print("Actual: \n", output)
plt.plot(np.array(final_err))
plt.show()
The output I get is:
Predicted:
[[0.05214011]
[0.97596354]
[0.97499515]
[0.03771324]
[0.97624119]]
Actual:
[[0]
[1]
[1]
[0]
[1]]
Therefore the network was able to predict all the toy training examples. (Note that with real data you would not want to fit the training data perfectly, as that leads to overfitting.) Note that you may get slightly different results, as the weight initialisations differ between runs. Also, as a rule of thumb, initialise the weights in [-0.01, +0.01] unless you are working on a specific problem for which you know a better initialisation.
Here is the convergence plot.
Note that you do not need to iterate over each example; you can instead do the matrix multiplication for the whole batch at once, which is much faster (see the sketch below). Also, the above code does not have bias units; make sure you include bias units when you re-implement it.
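A minimal sketch of such a vectorised full-batch update, reusing the sigmoid and reduce helpers and the l0/output arrays defined above (this is the textbook delta rule rather than the exact code above):

# Vectorised full-batch training step: all five examples at once
for _ in range(maxiter):
    l1 = sigmoid(l0 @ syn0)                      # (5, 4)
    l2 = sigmoid(l1 @ syn1)                      # (5, 1)
    l2_delta = (output - l2) * reduce(l2)        # error * sigmoid'
    l1_delta = (l2_delta @ syn1.T) * reduce(l1)  # backpropagated to layer 1
    syn1 += gamma * (l1.T @ l2_delta)
    syn0 += gamma * (l0.T @ l1_delta)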
I would recommend you go through Raul Rojas' Neural Networks: A Systematic Introduction, chapters 4, 6 and 7. Chapter 7 will tell you how to implement deeper networks in a simple way.

Why does suppressing weights improve Tensorflow neural net performance?

I have a 2-layer non-convolutional network in Tensorflow, using tanh as the activation function. I understand that weights should be initialized with a truncated normal distribution divided by sqrt(nInputs), e.g.:
weightsLayer1 = tf.Variable(tf.div(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]), math.sqrt(nInputUnits)))
Being a bit of a bumbling newbie in NN and Tensorflow, I mistakenly implemented this as 2 lines, only to make it more readable:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]))
weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits))
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step. However, to my surprise, the "incorrect" implementation consistently yields better performance, on both the training and test/evaluation datasets. I thought the incorrect 2-line implementation should be a train wreck, since it recomputes (suppresses) the weights to values other than those chosen by the optimizer, which I would expect to wreak havoc on the optimization process, but it actually improves it. Does anyone have an explanation for this? I am using the Tensorflow Adam optimizer.
Update 2016.6.22 - updated the 2nd code block above.
You are right that weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits)) is executed at each step. But that does NOT mean that the values in the weight variable are scaled down by sqrt(nInputUnits) at each step. This line is not an in-place operation that affects the values stored in the variable. It computes a new tensor, holding the values in the variable divided by sqrt(nInputUnits), and that tensor, I assume, then goes into the rest of your computation graph. This does not interfere with the optimizer. You are still defining a valid computation graph, just with a somewhat arbitrary scaling of the weights. The optimizer can still compute the gradients with respect to this variable (it will back-propagate through your division operation) and create the corresponding update operations.
In terms of the model that you are defining, the two versions are totally equivalent. For any set of values of weightsLayer1 in the original model (where you don't do the division), you can simply scale them up by sqrt(nInputUnits) and you will get the identical results with your second model. The two represent exactly the same model class, if you will.
Why does one work better than the other? Your guess is as good as mine. If you have applied the same division to all your variables, you have effectively divided your learning rate by sqrt(nInputUnits). This smaller learning rate might have been beneficial to the problem at hand.
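A quick way to convince yourself that the division is not an in-place update (a sketch in the 1.x-era API used throughout this question): evaluate the divided tensor and check that the variable's stored values are untouched.

import math
import numpy as np
import tensorflow as tf

w = tf.Variable(tf.truncated_normal([3, 3]))
w_scaled = tf.div(w, math.sqrt(3.0))  # a new tensor, not an in-place update

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = sess.run(w)
    sess.run(w_scaled)  # "executing" the division...
    after = sess.run(w)
    print(np.array_equal(before, after))  # True: the variable is unchanged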
Edit: I think the fact that you give the same name to the variable and the newly created tensor causes confusion. When you do
A = tf.Variable(1.0)
A = tf.mul(A, 2.0)
# Do something with A
then the second line creates a new tensor (as discussed above) and you re-bind the name (and it is only a name) A to that new tensor. For the graph being defined, the naming is absolutely irrelevant. The following code defines the same graph:
A = tf.Variable(1.0)
B = tf.mul(A, 2.0)
# Do something with B
Maybe this becomes clear if you execute the following code:
A = tf.Variable(1.0)
print A
B = A
A = tf.mul(A, 2.0)
print A
print B
The output is
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
Tensor("Mul:0", shape=(), dtype=float32)
<tensorflow.python.ops.variables.Variable object at 0x7ff025c02bd0>
The first time you print A it tells you that A is a variable object. After executing A = tf.mul(A, 2.0) and printing A again, you can see that the name A is now bound to a tf.Tensor object. However, the variable still exists, as can be seen by looking at the object behind the name B.
This is what the single line of code does:
t = tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1])
creates a tensor of shape [nInputUnits, nUnitsHiddenLayer1], initialized from a truncated normal distribution with standard deviation 1.0 (the default stddev), and
t1 = tf.div(t, math.sqrt(nInputUnits))
divides all values in t by math.sqrt(nInputUnits).
Your two lines of code do exactly the same thing as the single line: in both versions, all values are divided by math.sqrt(nInputUnits).
As for your statement:
I now know that this is wrong and that the 2nd line causes the weights to be recomputed at each learning step.
EDIT: my mistake
Indeed you are right: they are divided by math.sqrt(nInputUnits) at every execution, but not reinitialized! The important point is where you put tf.Variable():
Here both lines are only executed once:
weightsLayer1 = tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1])
weightsLayer1 = tf.Variable(tf.div(weightsLayer1, math.sqrt(nInputUnits)))
and here the second line is performed at every step:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1]))
weightsLayer1 = tf.div(weightsLayer1, math.sqrt(nInputUnits))
Why does the second yield better results? It looks like some kind of normalization to me, but somebody more knowledgeable should verify that.
PS: you can write it more readably like this:
weightsLayer1 = tf.Variable(tf.truncated_normal([nInputUnits, nUnitsHiddenLayer1], stddev=1./math.sqrt(nInputUnits)))

OneR WEKA - wrong prediction?

I am trying to rank attributes by their predictive power by using OneR in WEKA iteratively. At every run I remove the chosen attribute to see what the next best one is.
I have done this for all my attributes, and some (3 out of 10 attributes) get 'ranked' higher than others although they have a lower percentage of correct predictions, a smaller average ROC area, and less compact rules.
As I understand it, OneR just looks at the frequency tables for the attributes it has and the class values, so it shouldn't care whether I take attributes out or not... but I am probably missing something.
Would anyone have an idea?
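For reference, the 1R procedure itself is tiny: for each attribute it builds a frequency table mapping attribute values to class counts, turns each table into a majority-class rule, and keeps the attribute whose rule classifies the most training rows correctly. A simplified sketch (a hypothetical helper, handling only categorical attributes and ignoring ties):

from collections import Counter, defaultdict

def one_r(rows, target):
    # rows: list of dicts mapping attribute names (and the target) to values
    best = None
    for attr in rows[0]:
        if attr == target:
            continue
        table = defaultdict(Counter)  # attribute value -> class frequencies
        for row in rows:
            table[row[attr]][row[target]] += 1
        # the rule predicts the majority class for each attribute value
        correct = sum(c.most_common(1)[0][1] for c in table.values())
        if best is None or correct > best[0]:
            rule = {v: c.most_common(1)[0][0] for v, c in table.items()}
            best = (correct, attr, rule)
    return best  # (training hits, attribute, rule)

rows = [{'outlook': 'sunny', 'windy': 'no',  'play': 'yes'},
        {'outlook': 'rainy', 'windy': 'yes', 'play': 'no'},
        {'outlook': 'sunny', 'windy': 'yes', 'play': 'yes'}]
print(one_r(rows, target='play'))  # picks 'outlook' (3 of 3 correct)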
As an alternative you can use the OneR package (available on CRAN; more information here: OneR - Establishing a New Baseline for Machine Learning Classification Models).
With the option verbose = TRUE you get the accuracy of all attributes, e.g.:
> library(OneR)
> example(OneR)

OneR> data <- optbin(iris)
OneR> model <- OneR(data, verbose = TRUE)

    Attribute    Accuracy
1 * Petal.Width  96%
2   Petal.Length 95.33%
3   Sepal.Length 74.67%
4   Sepal.Width  55.33%
---
Chosen attribute due to accuracy
and ties method (if applicable): '*'

OneR> summary(model)

Rules:
If Petal.Width = (0.0976,0.791] then Species = setosa
If Petal.Width = (0.791,1.63] then Species = versicolor
If Petal.Width = (1.63,2.5] then Species = virginica

Accuracy:
144 of 150 instances classified correctly (96%)

Contingency table:
            Petal.Width
Species      (0.0976,0.791] (0.791,1.63] (1.63,2.5] Sum
  setosa               * 50            0          0  50
  versicolor              0         * 48          2  50
  virginica               0            4       * 46  50
  Sum                    50           52         48 150
---
Maximum in each column: '*'

Pearson's Chi-squared test:
X-squared = 266.35, df = 4, p-value < 2.2e-16
(full disclosure: I am the author of this package and I would be very interested in the results you get)
The OneR classifier behaves a bit like nearest-neighbor in this respect. In the source code of the OneR classifier, it says:
// if this attribute is the best so far, replace the rule
if (noRule || r.m_correct > m_rule.m_correct) {
    m_rule = r;
}
Thus, it should be possible (either in 1R generally or in this implementation) for an attribute to block another, yet be removed later in your process.
Say you have attributes 1, 2 and 3, which are the best attribute on 50%, 30% and 20% of the cases respectively, and in every case where attribute 1 is best, attribute 3 is second best.
Then, when attribute 1 is left out, attribute 3 wins with 70%, even though attribute 2 ranked as "better" than 3 in the comparison of all three (see the toy simulation below).
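A toy simulation of that blocking effect (entirely hypothetical data, just to make the arithmetic concrete):

from collections import Counter

# Each case lists the attributes in order of how well they predict it
cases = ([('A1', 'A3', 'A2')] * 50    # A1 best, A3 second best
         + [('A2', 'A1', 'A3')] * 30  # A2 best
         + [('A3', 'A2', 'A1')] * 20) # A3 best

def winners(cases, removed=()):
    # each case is "won" by its highest-ranked attribute still in play
    return Counter(next(a for a in prefs if a not in removed)
                   for prefs in cases)

print(winners(cases))                  # A1: 50, A2: 30, A3: 20
print(winners(cases, removed={'A1'}))  # A3: 70, A2: 30 -> A3 overtakes A2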