Getting wrong output for [1,0] in OR gate using perceptron

def train(self,inputs,target):
    self.weights=np.ones(np.shape(inputs)[1]+1)[np.newaxis]
    for i in range(self.epochs):
        for x,t in zip(inputs,target):
            y_pred=self.predict(x)
            for i in range(np.shape(inputs)[1]):
                self.weights[0,1]+=self.learning_rate*(t-y_pred)
                self.weights[0,i+1]+=self.learning_rate*(t-y_pred)*x[i]
The code above updates the weights in a perceptron model (an implementation of an OR gate using a perceptron). Calling predict on the four inputs:
print(per.predict([0,0]))
print(per.predict([0,1]))
print(per.predict([1,0]))
print(per.predict([1,1]))
I got the output:
[0]
[1]
[0]
[1]
Please explain where the mistake is and how to resolve it.
(link for complete code: https://onlinegdb.com/rJRmz_Cpr)
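For what it's worth, the likely culprit is the line self.weights[0,1] += self.learning_rate*(t-y_pred): it updates the first feature weight as if it were the bias, and does so once per feature inside the inner loop, while the actual bias at index 0 is never touched. A minimal corrected sketch follows, assuming predict thresholds weights[0,0] + w.x at zero; the linked code is not reproduced here, so the predict below is my own stand-in:

import numpy as np

class Perceptron:
    def __init__(self, learning_rate=0.1, epochs=100):
        self.learning_rate = learning_rate
        self.epochs = epochs

    def predict(self, x):
        # Stand-in predict: weights[0, 0] is the bias, weights[0, 1:] the feature weights
        activation = self.weights[0, 0] + np.dot(self.weights[0, 1:], x)
        return 1 if activation >= 0 else 0

    def train(self, inputs, target):
        self.weights = np.ones(np.shape(inputs)[1] + 1)[np.newaxis]
        for _ in range(self.epochs):
            for x, t in zip(inputs, target):
                y_pred = self.predict(x)
                # The bias lives at index 0 and is updated once per sample
                self.weights[0, 0] += self.learning_rate * (t - y_pred)
                for i in range(np.shape(inputs)[1]):
                    self.weights[0, i + 1] += self.learning_rate * (t - y_pred) * x[i]

per = Perceptron()
per.train(np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), [0, 1, 1, 1])
print([per.predict(x) for x in [[0, 0], [0, 1], [1, 0], [1, 1]]])  # [0, 1, 1, 1]

With this change the model converges to the OR truth table within a handful of epochs, since OR is linearly separable.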

Related

Unclear documentation for ANN parameters

I'm having a problem reproducing the results of an ANN. Specifically, there are two parameters of the ANN that I cannot make sense of:
net.inputs{i}.range
net.output.range
The documentation is very terse and does not help me understand how they work. Both seem to have a massive impact on the output. Consider this MWE:
net=feedforwardnet(10);
net.inputs{1}.size=3;
%net.inputs{1}.range=[0 100;0 100;0 100];
%net.output.range=[1 200;];
net.layers{2}.size=1;
L1=[-1.1014, -2.1138, -2.6975;
-2.3545, 0.7693, 1.7621;
-1.1258, -1.4171, -3.1113;
-0.7845, -3.7105, 0.1605;
0.3993, 0.7042, 3.5076;
0.283, -3.914, -1.3428;
-2.0566, -3.4762, 1.3239;
-1.0626, 0.3662, 2.9169;
0.1367, 2.5801, 2.5867;
0.7155, 2.6237, 2.5376;];
B1=[3.5997, 3.1386, 2.7002, 1.8243, -1.9267, -1.6754, 0.8252, 1.0865, -0.0005, 0.6126];
L2=[0.5005, -1.0932, 0.34, -1.5099, 0.5896, 0.5881, 0.4769, 0.6728, -0.9407, -1.0296];
B2=0.1567;
net.IW{1}=L1;
net.LW{2,1}=L2;
net.b{1}=B1';
net.b{2}=B2;
input=[40; 30; 20];
output=net(input)
If you uncomment lines 3 and 4, the result rises from 0.1464 to 119.1379. I'm trying to reproduce this aspect of the MATLAB ANN in another environment, but the documentation is too brief and does not explain anything.
What exactly do these two parameters do? That is, what exact function is applied to the input and output data?
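For what it's worth, feedforwardnet applies mapminmax pre- and post-processing by default: each input element is linearly mapped from its configured range onto [-1, 1] before the tansig/purelin layers, and the raw output is inverse-mapped from [-1, 1] back onto its configured range. The range fields set the xmin/xmax of those mappings, which is why they change the result so drastically. Below is a sketch of the equivalent computation in plain Python, offered as my reading of the docs to verify rather than an official reference; the weights here are random placeholders, so substitute L1, B1, L2, B2 from the MWE to compare numbers:

import numpy as np

def mapminmax_apply(x, xmin, xmax, ymin=-1.0, ymax=1.0):
    # mapminmax forward: linearly map [xmin, xmax] onto [ymin, ymax]
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

def mapminmax_reverse(y, xmin, xmax, ymin=-1.0, ymax=1.0):
    # mapminmax inverse, applied to the raw network output
    return (y - ymin) * (xmax - xmin) / (ymax - ymin) + xmin

def forward(x, L1, B1, L2, B2, in_range, out_range):
    xn = mapminmax_apply(x, in_range[:, 0], in_range[:, 1])  # normalize inputs
    h = np.tanh(L1 @ xn + B1)                                # hidden layer (tansig)
    yn = L2 @ h + B2                                         # output layer (purelin)
    return mapminmax_reverse(yn, out_range[:, 0], out_range[:, 1])

rng = np.random.default_rng(0)
L1, B1 = rng.normal(size=(10, 3)), rng.normal(size=10)
L2, B2 = rng.normal(size=10), 0.1567
in_range = np.array([[0.0, 100.0]] * 3)
out_range = np.array([[1.0, 200.0]])
print(forward(np.array([40.0, 30.0, 20.0]), L1, B1, L2, B2, in_range, out_range))

When the range lines stay commented out, the ranges default to [-1, 1], so the mappings reduce to the identity, which would explain the much smaller 0.1464 result.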

Unable to load a checkpoint for a previously trained Faster R-CNN detector MATLAB

I was training a Faster R-CNN for two weeks and, somehow, it crashed last night.
I was saving checkpoints after each epoch, but now, if I try to start a new training run with the last one (or any other), I receive:
Error using vision.internal.cnn.rcnnDatasetStatistics>iGetRPNSoftmaxLayerSource (line 125)
Expected one output from a curly brace or dot indexing expression, but there were 0 results.
Error in vision.internal.cnn.rcnnDatasetStatistics (line 17)
params.RPNSoftmaxLayerSource = iGetRPNSoftmaxLayerSource(analysis);
Error in trainFasterRCNNObjectDetector>iCollectImageInfo (line 1674)
imageInfo = vision.internal.cnn.rcnnDatasetStatistics(trainingData, rpnLayerGraph, imageInfoParams);
Error in trainFasterRCNNObjectDetector (line 423)
[imageInfo,trainingData,options] = iCollectImageInfo(trainingData, rpn, iStageOneParams(params), params, options);
And the detector object seems to be empty:
fasterRCNNObjectDetector with properties:
ModelName: 'MCs'
Network: []
AnchorBoxes: [3×2 double]
ClassNames: {}
MinObjectSize: [16 16]
Can't I resume training from these checkpoints where it left off?
And if not, what went wrong during the saving of these checkpoints?
I don't know if it matters, but I was training with the "four-step" training method.
Please, any help would be appreciated.

Inputs to Encoder-Decoder LSTMCell/RNN Network

I'm creating an LSTM encoder-decoder network in Keras, following the code provided here: https://github.com/LukeTonin/keras-seq-2-seq-signal-prediction. The only change I made is to replace the GRUCell with an LSTMCell. Basically, both the encoder and the decoder consist of 2 layers of 35 LSTMCells. The layers are stacked on (and combined with) each other using an RNN layer.
The LSTMCell returns 2 states whereas the GRUCell returns 1 state. This is where I am encountering an error, as I do not know how to code for the 2 returned states of the LSTMCell.
I have created two models: first, an encoder-decoder model, and second, a prediction model. I am not encountering any problems in the encoder-decoder model, but I am encountering problems in the decoder of the prediction model.
The error I am getting is:
ValueError: Layer rnn_4 expects 9 inputs, but it received 3 input tensors. Input received: [<tf.Tensor 'input_4:0' shape=(?, ?, 1) dtype=float32>, <tf.Tensor 'input_11:0' shape=(?, 35) dtype=float32>, <tf.Tensor 'input_12:0' shape=(?, 35) dtype=float32>]
This error happens when this line below, in the prediction model, is run:
decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)
The section of code this fits into is:
encoder_predict_model = keras.models.Model(encoder_inputs,
                                           encoder_states)

decoder_states_inputs = []

# Read layers backwards to fit the format of initial_state.
# For some reason, the states of the model are ordered backwards
# (state of the first layer at the end of the list).
# If instead of a GRU you were using an LSTM Cell, you would have to
# append two Input tensors, since the LSTM has 2 states.
for hidden_neurons in layers[::-1]:
    # One state for GRU, but two states for LSTMCell
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))

decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)

decoder_outputs = decoder_outputs_and_states[0]
decoder_states = decoder_outputs_and_states[1:]

decoder_outputs = decoder_dense(decoder_outputs)

decoder_predict_model = keras.models.Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
Could somebody help me with the for loop above, and with the initial states I should pass to the decoder after it?
I had a similar error, and I solved it by doing just what the comment says: adding another input tensor.
# If instead of a GRU you were using an LSTM Cell, you would have to
# append two Input tensors, since the LSTM has 2 states.
for hidden_neurons in layers[::-1]:
    # Two states for the LSTMCell, so two Inputs per layer
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))
    decoder_states_inputs.append(keras.layers.Input(shape=(hidden_neurons,)))
That solved the problem here.
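For reference, here is a self-contained sketch of the LSTM version of that loop (my own illustration, assuming layers = [35, 35] as in the tutorial, and keeping the reversed state order the tutorial's comment describes): each LSTMCell carries a hidden state h and a cell state c, so the prediction model needs two Input tensors per layer.

import keras

layers = [35, 35]  # units of each stacked LSTMCell, assumed from the question

# Rebuild a stacked-cell decoder here so the sketch stands alone
cells = [keras.layers.LSTMCell(n) for n in layers]
decoder = keras.layers.RNN(cells, return_sequences=True, return_state=True)
decoder_inputs = keras.layers.Input(shape=(None, 1))

decoder_states_inputs = []
for hidden_neurons in layers[::-1]:
    # An LSTM state is a pair (h, c); a GRU would need only one tensor here
    state_h = keras.layers.Input(shape=(hidden_neurons,))
    state_c = keras.layers.Input(shape=(hidden_neurons,))
    decoder_states_inputs.extend([state_h, state_c])

decoder_outputs_and_states = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_outputs = decoder_outputs_and_states[0]
decoder_states = decoder_outputs_and_states[1:]

The decoder call then receives one data tensor plus two state tensors per stacked layer; a mismatch between that total and what you actually pass is exactly what the "expects N inputs, but it received 3" ValueError is counting.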

Time-Series Forecasting using SVM in Matlab

I want to forecast next week's electric load (horizon = 7) with lag = 7, using AR, KNN, and SVM, and I need help with that.
I've already written code for all of them, and the results I got are not what I expected.
I have a time series (7160-by-1), and here is part of my code:
SVM:
Part of Training Data
52538 51690 55509 56740 58106 58280 57395
51690 55509 56740 58106 58280 57395 55425
55509 56740 58106 58280 57395 55425 55755
56740 58106 58280 57395 55425 55755 58563
Part of Training Targets
55425 55755 58563 58705 58245 61880 61540
55755 58563 58705 58245 61880 61540 59791
58563 58705 58245 61880 61540 59791 57945
58705 58245 61880 61540 59791 57945 59198
Part of Validate Data
101750 97201 98986 99491 99778 99711 100701
97201 98986 99491 99778 99711 100701 102790
98986 99491 99778 99711 100701 102790 98277
99491 99778 99711 100701 102790 98277 99520
Part of Validate Data Targets
102790 98277 99520 102719 103308 103750 103582
98277 99520 102719 103308 103750 103582 103193
99520 102719 103308 103750 103582 103193 98592
102719 103308 103750 103582 103193 98592 102985
Creating SVM model using LSSVM library
model = initlssvm(Train_Data,Train_Data_Targets,'f',[],[],'RBF_kernel','o');
model = tunelssvm(model,'simplex','crossvalidatelssvm',{5,'mse'});
model = trainlssvm(model);
Predicting future values
Estimated_Value = simlssvm(model,Validate_Data(1,:));
but the results are not so good, so can you help me? I can provide the KNN and AR code too if needed.
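Not LSSVM, but as a sanity check of the setup itself, here is a minimal analogous pipeline in Python using scikit-learn's RBF-kernel SVR (all names illustrative): build the lag-7 design matrix the same way as the tables above, fit, and then forecast the 7-day horizon recursively, feeding each prediction back in as a lag.

import numpy as np
from sklearn.svm import SVR

def make_lag_matrix(series, lag=7):
    # Each row holds `lag` consecutive values; the target is the value that follows
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

def forecast(model, last_window, horizon=7, lag=7):
    # Recursive multi-step forecast: each prediction becomes a lag for the next step
    window = list(last_window)
    preds = []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[-lag:]).reshape(1, -1))[0]
        preds.append(yhat)
        window.append(yhat)
    return preds

# Synthetic stand-in for the 7160-by-1 load series
series = 50000 + 1000 * np.sin(np.linspace(0, 60, 700))
X, y = make_lag_matrix(series, lag=7)
model = SVR(kernel="rbf", C=1000.0, gamma="scale").fit(X, y)
print(forecast(model, series[-7:], horizon=7))

With RBF kernels, scaling the load values (for example to zero mean and unit variance) usually matters a great deal; raw values in the 50,000-100,000 range can make kernel-width tuning much harder.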

Keras sample_weight array error

I'm working on a brain lesion segmentation problem and I'm trying to implement a Unet with code inspired by: https://github.com/jocicmarko/ultrasound-nerve-segmentation
One of the issues I'm trying to overcome is class imbalance (many more non-lesion voxels than lesion voxels). I tried using class_weight, but that didn't work, so now I'm trying to use sample_weight, and that's also giving me all sorts of errors.
First thing I tried was to set sample_weight_mode to temporal and feed in a weight matrix of the same shape as my target data:
target_data.shape -> (n_samples,512 rows/pixels, 512 cols/pixels, 1 channel)
Weight_map.shape -> (n_samples,512 rows/pixels, 512 cols/pixels, 1 channel)
Output:
ValueError: Found a sample_weight array with shape (100, 512, 512, 1). In order to use timestep-wise sample weighting, you should pass a 2D sample_weight array.
Second thing I tried was to flatten the sample array so it would be of shape:
Weight_map.shape -> (n_samples,512x512x1).
Output:
ValueError: Found a sample_weight array with shape (100, 262144) for an input with shape (100, 512, 512, 1). sample_weight cannot be broadcast.
Next I tried following the advice of uschmidt83 (here) and flattening the output of my model along with the corresponding target data.
last_layer = keras.layers.Flatten()(second_last_layer)
target_data.shape -> (n_samples,512x512x1).
Weight_map.shape -> (n_samples,512x512x1).
Output:
ValueError: Found a sample_weight array for an input with shape (100, 262144). Timestep-wise sample weighting (use of sample_weight_mode="temporal") is restricted to outputs that are at least 3D, i.e. that have a time dimension.
Oddly enough, even if I set sample_weight=None I still get the same error as right above.
Any advice on how to fix this sample_weight error? Here is the basic code to reproduce the error:
https://gist.github.com/andreimouraviev/2642384705034da92d6954dd9993fb4d
Also, if you have advice on how to deal with the class imbalance problem, please let me know.
The weight needs to be a 1D array per sample (no channel dimension), whereas the target keeps an extra channel, like the input. Can you try sample_weight_mode="temporal" with the following dimensions:
input_image -> (n_samples, 512, 512, 1)
target_label -> (n_samples, 262144, 1)
weight_map -> (n_samples, 262144)
The following link contains information about class weights:
https://github.com/fchollet/keras/issues/2115
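A minimal sketch of that suggestion (my illustration; the tiny conv net is a hypothetical stand-in for the U-Net): reshape the model output to (timesteps, channels), flatten the targets the same way, and pass the 2D weight map to fit, with sample_weight_mode="temporal" set at compile time.

import numpy as np
import keras

n_samples, H, W = 8, 512, 512

# Hypothetical stand-in for the U-Net: anything ending in a (512, 512, 1) map
inputs = keras.layers.Input(shape=(H, W, 1))
x = keras.layers.Conv2D(8, 3, padding='same', activation='relu')(inputs)
x = keras.layers.Conv2D(1, 1, activation='sigmoid')(x)
# Reshape to (timesteps, channels) so temporal sample weighting applies
outputs = keras.layers.Reshape((H * W, 1))(x)
model = keras.models.Model(inputs, outputs)

model.compile(optimizer='adam', loss='binary_crossentropy',
              sample_weight_mode='temporal')

images = np.random.rand(n_samples, H, W, 1).astype('float32')
targets = np.random.randint(0, 2, (n_samples, H * W, 1)).astype('float32')
# One weight per voxel, e.g. up-weighting the rare lesion class
weights = np.where(targets[..., 0] == 1, 10.0, 1.0)

model.fit(images, targets, sample_weight=weights, batch_size=2, epochs=1)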