Loss and validation loss decreasing but accuracy and validation accuracy remain static (neural network)
Input data (xs):
array([[ 0.28555165, -0.03237782, 0.28525293, 0.2898103 , 0.03093571],
[ 0.28951845, -0.03555493, 0.28561172, 0.29346927, 0.03171808],
[ 0.28326774, -0.03258297, 0.27879436, 0.2804189 , 0.03079463],
[ 0.27617554, -0.03335768, 0.27927279, 0.28285823, 0.03015975],
[ 0.29084073, -0.0308716 , 0.28788416, 0.29102994, 0.03019182],
[ 0.27353097, -0.03571149, 0.26874771, 0.27310096, 0.03021105],
[ 0.26163049, -0.03528769, 0.25989708, 0.26688066, 0.0303842 ],
[ 0.26223156, -0.03429704, 0.26169114, 0.26127023, 0.02962107],
[ 0.26259217, -0.03496377, 0.26145193, 0.26773441, 0.02942868],
[ 0.26583775, -0.03354123, 0.26240878, 0.26358757, 0.02925554]])
Output data (ys):
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.]])
The data set is split 70% for training and 30% for validation.
Training this network, I can see that loss and val_loss decrease, but acc and val_acc remain static at 0.5714 and 0 respectively:
Train on 7 samples, validate on 3 samples
Epoch 1/60
0s - loss: 4.4333 - acc: 0.0000e+00 - val_loss: 4.4340 - val_acc: 0.0000e+00
Epoch 2/60
0s - loss: 4.4335 - acc: 0.0000e+00 - val_loss: 4.4338 - val_acc: 0.0000e+00
Epoch 3/60
0s - loss: 4.4331 - acc: 0.0000e+00 - val_loss: 4.4335 - val_acc: 0.0000e+00
Epoch 4/60
0s - loss: 4.4319 - acc: 0.0000e+00 - val_loss: 4.4331 - val_acc: 0.0000e+00
Epoch 5/60
0s - loss: 4.4300 - acc: 0.0000e+00 - val_loss: 4.4326 - val_acc: 0.0000e+00
Epoch 6/60
0s - loss: 4.4267 - acc: 0.0000e+00 - val_loss: 4.4320 - val_acc: 0.0000e+00
Epoch 7/60
0s - loss: 4.4270 - acc: 0.1429 - val_loss: 4.4314 - val_acc: 0.0000e+00
Epoch 8/60
0s - loss: 4.4257 - acc: 0.1429 - val_loss: 4.4307 - val_acc: 0.0000e+00
Epoch 9/60
0s - loss: 4.4240 - acc: 0.0000e+00 - val_loss: 4.4300 - val_acc: 0.0000e+00
Epoch 10/60
0s - loss: 4.4206 - acc: 0.1429 - val_loss: 4.4292 - val_acc: 0.0000e+00
Epoch 11/60
0s - loss: 4.4192 - acc: 0.1429 - val_loss: 4.4284 - val_acc: 0.0000e+00
Epoch 12/60
0s - loss: 4.4156 - acc: 0.4286 - val_loss: 4.4276 - val_acc: 0.0000e+00
Epoch 13/60
0s - loss: 4.4135 - acc: 0.4286 - val_loss: 4.4267 - val_acc: 0.0000e+00
Epoch 14/60
0s - loss: 4.4114 - acc: 0.5714 - val_loss: 4.4258 - val_acc: 0.0000e+00
Epoch 15/60
0s - loss: 4.4072 - acc: 0.7143 - val_loss: 4.4248 - val_acc: 0.0000e+00
Epoch 16/60
0s - loss: 4.4046 - acc: 0.4286 - val_loss: 4.4239 - val_acc: 0.0000e+00
Epoch 17/60
0s - loss: 4.4012 - acc: 0.5714 - val_loss: 4.4229 - val_acc: 0.0000e+00
Epoch 18/60
0s - loss: 4.3967 - acc: 0.5714 - val_loss: 4.4219 - val_acc: 0.0000e+00
Epoch 19/60
0s - loss: 4.3956 - acc: 0.5714 - val_loss: 4.4209 - val_acc: 0.0000e+00
Epoch 20/60
0s - loss: 4.3906 - acc: 0.5714 - val_loss: 4.4198 - val_acc: 0.0000e+00
Epoch 21/60
0s - loss: 4.3883 - acc: 0.5714 - val_loss: 4.4188 - val_acc: 0.0000e+00
Epoch 22/60
0s - loss: 4.3849 - acc: 0.5714 - val_loss: 4.4177 - val_acc: 0.0000e+00
Epoch 23/60
0s - loss: 4.3826 - acc: 0.5714 - val_loss: 4.4166 - val_acc: 0.0000e+00
Epoch 24/60
0s - loss: 4.3781 - acc: 0.5714 - val_loss: 4.4156 - val_acc: 0.0000e+00
Epoch 25/60
0s - loss: 4.3757 - acc: 0.5714 - val_loss: 4.4145 - val_acc: 0.0000e+00
Epoch 26/60
0s - loss: 4.3686 - acc: 0.5714 - val_loss: 4.4134 - val_acc: 0.0000e+00
Epoch 27/60
0s - loss: 4.3666 - acc: 0.5714 - val_loss: 4.4123 - val_acc: 0.0000e+00
Epoch 28/60
0s - loss: 4.3665 - acc: 0.5714 - val_loss: 4.4111 - val_acc: 0.0000e+00
Epoch 29/60
0s - loss: 4.3611 - acc: 0.5714 - val_loss: 4.4100 - val_acc: 0.0000e+00
Epoch 30/60
0s - loss: 4.3573 - acc: 0.5714 - val_loss: 4.4089 - val_acc: 0.0000e+00
Epoch 31/60
0s - loss: 4.3537 - acc: 0.5714 - val_loss: 4.4078 - val_acc: 0.0000e+00
Epoch 32/60
0s - loss: 4.3495 - acc: 0.5714 - val_loss: 4.4066 - val_acc: 0.0000e+00
Epoch 33/60
0s - loss: 4.3452 - acc: 0.5714 - val_loss: 4.4055 - val_acc: 0.0000e+00
Epoch 34/60
0s - loss: 4.3405 - acc: 0.5714 - val_loss: 4.4044 - val_acc: 0.0000e+00
Epoch 35/60
0s - loss: 4.3384 - acc: 0.5714 - val_loss: 4.4032 - val_acc: 0.0000e+00
Epoch 36/60
0s - loss: 4.3390 - acc: 0.5714 - val_loss: 4.4021 - val_acc: 0.0000e+00
Epoch 37/60
0s - loss: 4.3336 - acc: 0.5714 - val_loss: 4.4009 - val_acc: 0.0000e+00
Epoch 38/60
0s - loss: 4.3278 - acc: 0.5714 - val_loss: 4.3998 - val_acc: 0.0000e+00
Epoch 39/60
0s - loss: 4.3254 - acc: 0.5714 - val_loss: 4.3986 - val_acc: 0.0000e+00
Epoch 40/60
0s - loss: 4.3205 - acc: 0.5714 - val_loss: 4.3975 - val_acc: 0.0000e+00
Epoch 41/60
0s - loss: 4.3171 - acc: 0.5714 - val_loss: 4.3963 - val_acc: 0.0000e+00
Epoch 42/60
0s - loss: 4.3150 - acc: 0.5714 - val_loss: 4.3952 - val_acc: 0.0000e+00
Epoch 43/60
0s - loss: 4.3106 - acc: 0.5714 - val_loss: 4.3940 - val_acc: 0.0000e+00
Epoch 44/60
0s - loss: 4.3064 - acc: 0.5714 - val_loss: 4.3929 - val_acc: 0.0000e+00
Epoch 45/60
0s - loss: 4.3009 - acc: 0.5714 - val_loss: 4.3917 - val_acc: 0.0000e+00
Epoch 46/60
0s - loss: 4.2995 - acc: 0.5714 - val_loss: 4.3905 - val_acc: 0.0000e+00
Epoch 47/60
0s - loss: 4.2972 - acc: 0.5714 - val_loss: 4.3894 - val_acc: 0.0000e+00
Epoch 48/60
0s - loss: 4.2918 - acc: 0.5714 - val_loss: 4.3882 - val_acc: 0.0000e+00
Epoch 49/60
0s - loss: 4.2886 - acc: 0.5714 - val_loss: 4.3871 - val_acc: 0.0000e+00
Epoch 50/60
0s - loss: 4.2831 - acc: 0.5714 - val_loss: 4.3859 - val_acc: 0.0000e+00
Epoch 51/60
0s - loss: 4.2791 - acc: 0.5714 - val_loss: 4.3848 - val_acc: 0.0000e+00
Epoch 52/60
0s - loss: 4.2774 - acc: 0.5714 - val_loss: 4.3836 - val_acc: 0.0000e+00
Epoch 53/60
0s - loss: 4.2714 - acc: 0.5714 - val_loss: 4.3824 - val_acc: 0.0000e+00
Epoch 54/60
0s - loss: 4.2696 - acc: 0.5714 - val_loss: 4.3813 - val_acc: 0.0000e+00
Epoch 55/60
0s - loss: 4.2641 - acc: 0.5714 - val_loss: 4.3801 - val_acc: 0.0000e+00
Epoch 56/60
0s - loss: 4.2621 - acc: 0.5714 - val_loss: 4.3790 - val_acc: 0.0000e+00
Epoch 57/60
0s - loss: 4.2569 - acc: 0.5714 - val_loss: 4.3778 - val_acc: 0.0000e+00
Epoch 58/60
0s - loss: 4.2556 - acc: 0.5714 - val_loss: 4.3767 - val_acc: 0.0000e+00
Epoch 59/60
0s - loss: 4.2492 - acc: 0.5714 - val_loss: 4.3755 - val_acc: 0.0000e+00
Epoch 60/60
0s - loss: 4.2446 - acc: 0.5714 - val_loss: 4.3744 - val_acc: 0.0000e+00
Out[23]:
<keras.callbacks.History at 0x7fbb9c4c7a58>
The source for my network is:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import History
from keras import optimizers

history = History()

# inputDim is defined elsewhere in the notebook (the number of input features)
model = Sequential()
model.add(Dense(100, activation='softmax', input_dim=inputDim))
model.add(Dropout(0.2))
model.add(Dense(200, activation='softmax'))
model.add(Dropout(0.2))
model.add(Dense(84, activation='softmax'))

sgd = optimizers.SGD(lr=0.0009, decay=1e-10, momentum=0.9, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(xs, ys, validation_split=0.3, verbose=2, callbacks=[history],
          epochs=60, batch_size=32)
Some simple statistics of my training data:
0 1 2 3 4
count 10.000000 10.000000 10.000000 10.000000 10.000000
mean 0.275118 -0.033855 0.273101 0.277016 0.030270
std 0.011664 0.001594 0.011386 0.012060 0.000746
min 0.261630 -0.035711 0.259897 0.261270 0.029256
25% 0.263404 -0.035207 0.261871 0.267094 0.029756
50% 0.274853 -0.033919 0.273771 0.276760 0.030201
75% 0.284981 -0.032777 0.283758 0.288072 0.030692
max 0.290841 -0.030872 0.287884 0.293469 0.031718
generated using:
import pandas as pd
pd.DataFrame(xs).describe()
The standard deviation is very low for this dataset; could this be a cause of my network not converging?
Are there other modifications I can try in order to improve the training and validation accuracies of this network?
Update:
The first and fourth training examples:
[0.28555165, -0.03237782, 0.28525293, 0.2898103 , 0.03093571]
[0.27617554, -0.03335768, 0.27927279, 0.28285823, 0.03015975]
contain the same target mapping:
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.
Is there a property of these training examples that could be skewing the results? I understand that a large amount of training data is required to train a neural network, but this does not explain why loss and val_loss decrease while the training and validation accuracy metrics (acc and val_acc) remain static.
First, I must warn you about some things that are not quite right here:
It seems that you are attempting a classification problem with 84 classes using only 10 data samples (7 for training and 3 for validation). This is definitely far too little data to build a successful deep learning model (most deep learning problems require at least thousands of data samples, some even millions). For starters, you don't even have one data sample for each of your categories, so with that little data it looks like a lost cause.
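A quick way to see this (assuming ys as in your question) is to count the samples per class from the one-hot matrix:

import numpy as np

# Column sums of the one-hot target matrix give the sample count per class;
# with 10 samples and 84 classes, most classes necessarily have zero samples.
samples_per_class = np.sum(ys, axis=0)
print(np.count_nonzero(samples_per_class), 'of', ys.shape[1], 'classes have at least one sample')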
It seems you are already aware of this from what you indicate in your post. You also say that it does not explain the strange behavior of your accuracy, but I would not draw conclusions under these conditions: having that few data samples can certainly result in unexpected or erratic behavior during training, so it is no wonder your metrics are behaving strangely.
I see you have used softmax activation in all the layers of your model. In my experience this is not a good idea for a classification problem. A current "standard" for classification with deep learning models is to use ReLU activations for the inner layers and leave softmax only for the output layer.
This is because softmax returns a probability distribution over your N classes (where they all sum to 1), so it helps to obtain the most probable class among your choices. This also means that softmax will "squash" or modify the input values so they all fall within [0, 1], which could affect your training process when applied to all layers, as it will not give you the same activation values that other sigmoidal functions would. In simpler words, you are somewhat "normalizing" your values on each layer of your model, not letting the data "speak for itself".
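As a minimal sketch of this suggestion (keeping the layer sizes, optimizer settings, and the inputDim variable from your question), the model could look like this:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers

# ReLU in the hidden layers; softmax only on the 84-way output layer,
# where it produces a probability distribution over the classes.
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=inputDim))
model.add(Dropout(0.2))
model.add(Dense(200, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(84, activation='softmax'))

sgd = optimizers.SGD(lr=0.0009, decay=1e-10, momentum=0.9, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])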
Now, if we look at your four metrics during training, we can see that your acc is not as static as you think: for the first epochs it stays at 0.0, then around epoch 7 it starts to increase, until around epoch 17 it settles at 0.5714 and seems to reach an asymptotic limit.
We can also see that your loss metric improved very little, starting at 4.4333 and ending at 4.2446, with several ups and downs in between. Given this evidence, it seems that your model has overfitted: it memorized your 7 training samples but did not really learn the underlying representation of your data. When given 3 samples it had never seen, it failed on all of them. This is no surprise, given how little and how unbalanced your data is, along with the other aspects mentioned before.
Are there other modifications I can try in order to improve the training and validation accuracies of this network?
Besides getting more data and possibly redesigning your network architecture, there is one other thing that could be affecting you: the validation_split parameter. You have used it correctly, specifying the desired ratio between validation and training data. However, reading the Keras FAQ entry How is the validation split computed?, we can see that:
If you set the validation_split argument in model.fit to e.g. 0.1, then the validation data used will be the last 10% of the data. If you set it to 0.25, it will be the last 25% of the data, etc. Note that the data isn't shuffled before extracting the validation split, so the validation is literally just the last x% of samples in the input you passed.
This means that by specifying a validation split of 0.3 you are always using the last 3 data elements as validation. What you can do is either shuffle all your data before calling fit, or use the validation_data parameter instead, passing the (X_test, Y_test) tuple you wish to validate on (obtained with sklearn's train_test_split, for example).
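As a sketch of that last option (assuming xs and ys as in your question, and that scikit-learn is available):

from sklearn.model_selection import train_test_split

# train_test_split shuffles by default, so the held-out 30% is no longer
# simply the last rows of the input arrays.
X_train, X_test, Y_train, Y_test = train_test_split(
    xs, ys, test_size=0.3, random_state=42)

model.fit(X_train, Y_train,
          validation_data=(X_test, Y_test),
          verbose=2, callbacks=[history],
          epochs=60, batch_size=32)

I hope that this helps you with your problem, good luck.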