Encog Neural Network for Classifying Images: Training

I'm having a problem training a neural network for image classification. Based on the performance of the NN, it doesn't seem to be learning.
I basically run the program given in encog-examples (ImageNeuralNetwork) to classify a set of images. The following is sample program output:
Training set created
Added input image:./faces/at33_straight_neutral_open.png; identity: shadesNone
Added input image:./faces/night/night_up_happy_sunglasses_4.png; identity: shades
...<more files here>...
Added input image:./faces/cheyer/cheyer_up_neutral_open_4.png; identity: shadesNone
Downsampling images...
Created network: [BasicNetwork: Layers=3]
Training Beginning... Output patterns=2
Beginning training...
Iteration #1 Error:199.591952% elapsed time = 00:00:00 time left = 00:01:00
Iteration #2 Error:196.384178% elapsed time = 00:00:00 time left = 00:01:00
Iteration #3 Error:160.422574% elapsed time = 00:00:00 time left = 00:01:00
...
Iteration #16 Error:99.733657% elapsed time = 00:00:00 time left = 00:01:00
...
Iteration #202 Error:99.489796% elapsed time = 00:00:04 time left = 00:01:00
...
Iteration #203 Error:199.605091% elapsed time = 00:00:04 time left = 00:01:00
As you can see, the NN's error oscillates between values close to 200% and values close to 100%.
In the first place, I'm not even sure if an error above 100% is possible, much less 200%.
Below is my input file containing the commands and parameters for the NN:
CreateTraining: width:16,height:15,type:Brightness
Input: image:./faces/at33_straight_neutral_open.png, identity:shadesNone
Input: image:./faces/night/night_up_happy_sunglasses_4.png, identity:shades
Input: image:./faces/choon/choon_up_angry_open_4.png, identity:shadesNone
Input: image:./faces/cheyer/cheyer_left_angry_sunglasses_4.png, identity:shades
<more files...>
Network: hidden1:10, hidden2:10
Train: Mode:console, Minutes:1, StrategyError:0.25, StrategyCycles:100
Whatis: image:./faces/tammo/tammo_right_sad_sunglasses_4.png
<more files...>
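For reference, my understanding is that the example roughly does the following with this configuration (a hedged sketch from memory of the Encog 3.x API; class and method names such as ImageMLDataSet, SimpleIntensityDownsample, EncogUtility.simpleFeedForward and trainConsole are my assumptions about the library and may differ slightly from the actual example code):

import java.awt.Image;
import java.io.File;
import javax.imageio.ImageIO;

import org.encog.ml.data.basic.BasicMLData;
import org.encog.ml.data.image.ImageMLData;
import org.encog.ml.data.image.ImageMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;
import org.encog.util.downsample.SimpleIntensityDownsample;
import org.encog.util.simple.EncogUtility;

// Hedged sketch of what the ImageNeuralNetwork example roughly does with the
// configuration above; Encog class/method names here are assumptions from memory.
public class ImageTrainingSketch {
    public static void main(String[] args) throws Exception {
        // CreateTraining: type:Brightness -> intensity (brightness) downsampling
        SimpleIntensityDownsample downsample = new SimpleIntensityDownsample();
        ImageMLDataSet training = new ImageMLDataSet(downsample, false, 1, -1);

        // Input: one entry per image; the ideal vector marks the image's identity
        Image img = ImageIO.read(new File("./faces/at33_straight_neutral_open.png"));
        BasicMLData ideal = new BasicMLData(new double[] { 1, -1 }); // shadesNone vs. shades
        training.add(new ImageMLData(img), ideal);
        // ... the remaining images are added the same way ...

        // CreateTraining: width:16, height:15
        training.downsample(15, 16);

        // Network: hidden1:10, hidden2:10; two output patterns (shades / shadesNone)
        BasicNetwork network = EncogUtility.simpleFeedForward(
                training.getInputSize(), 10, 10, 2, true);

        // Train: Minutes:1 on the console
        ResilientPropagation train = new ResilientPropagation(network, training);
        EncogUtility.trainConsole(train, network, training, 1);
    }
}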
My initial guess was that either the actual images or the ideal values were not properly fed to the NN, but I checked the inputs (e.g. by printing the images and ideal values that I had read), and they were OK.
Now my hunch is that some directories (perhaps for the Java library, or the location the input files are read from) are not set properly. Below is my Eclipse Run Configuration:
Classpath Tab
Bootstrap Entries
JRE System Library [jre7]
User Entries
encog-examples (default classpath)
encog-examples
encog-core-3.2.0-SNAPSHOT.jar \encog-examples\lib
Incidentally, I also can't properly run the Forest Cover example (which requires an input file), while I can run the Lunar Lander and XOR examples (which don't require input files). This strengthens my suspicion that my problem is directory-related.
Any help is much appreciated. Thanks.

Ignore my post above; 3.2.0 works just fine.
I had the same problem as newind27: the network just didn't seem to learn anything, and the error was jumping around wildly. After doing a bit more research, I found that Encog seems to have trouble with pictures that contain too much white.
The solution that worked for me was reducing the brightness of the pictures used for training. One way to do this (without destroying the original image in the process) is to use the RescaleOp class with a BufferedImage, as sketched below.
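A minimal sketch of that approach (the 0.7f scale factor and the file names are placeholders chosen for illustration, not values from my actual setup):

import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;
import java.io.File;
import javax.imageio.ImageIO;

// Darken a copy of an image with RescaleOp before feeding it to the network.
// Note: indexed (palette-based) images would need converting to RGB first,
// since RescaleOp cannot filter images with an IndexColorModel.
public class DarkenImage {
    public static void main(String[] args) throws Exception {
        BufferedImage original = ImageIO.read(new File("face.png"));

        // scaleFactor < 1 reduces brightness; offset 0 leaves the black level alone
        RescaleOp darken = new RescaleOp(0.7f, 0f, null);

        // Passing null as the destination creates a new image, so the original stays untouched
        BufferedImage darker = darken.filter(original, null);

        ImageIO.write(darker, "png", new File("face_darker.png"));
    }
}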
Another possible (untested) solution could be changing the order in which the images are fed to the network during training.

This seems to be a bug in 3.2.0; I had the same problem.
Try training the network with 3.1.0, which should fix it.

Related

Neural Network Oscillates Around 0.5

I wanted to create my own neural network - mainly for the fun of it, but also because Khan Academy doesn't allow libraries, and I hadn't seen any good neural nets on the site.
Neural Network Info:
The one I am showing in the images is a 1-2-3-2-1 neural network, although it does this for all layer sizes and layer counts. The thicker line is the first training run, which is 5,000 iterations. The thinner line shows another 1,000 iterations after the first training run.
Training Data Info:
I'm making it switch 0 to 1 and 1 to 0. The graphs shown are the loss when trying to change 1 to 0. The dataset looks like this:
[{
  inputs: [0],
  outputs: [1]
}, {
  inputs: [1],
  outputs: [0]
}]
Before each iteration, the dataset is randomized.
I put a neural net together, but when testing I ran across an interesting issue:
It oscillates around 0.5 about three quarters of the time; the other quarter of the time it works as intended. Sometimes (about a quarter of the time) it goes to where it is supposed to (these graphs show the loss, with the line in the center being 0):
Another part of the time (maybe 1/20th, so pretty rarely), it will "stick" at 0.5, but then kick itself out:
Or it'll get it right, but then just mess itself up for no reason (very rare, almost never happens):
And the rest of the time, it will just stay at around 0.5:
I have no clue what's causing these to happen (although I think it might be my implementation of Gradient Descent, found on line 137 of the program), or how to fix them.
You can find the program here:
khanacademy.org/cs/-/6305674778411008
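For comparison, here is a minimal self-contained sketch (written in Java for illustration; it is not the code from the linked program) of a plain gradient-descent update for a single sigmoid unit learning the same 0 -> 1 / 1 -> 0 mapping. My network is larger, but the update step has the same shape:

// One sigmoid unit learning NOT (0 -> 1, 1 -> 0) with plain gradient descent
// on a squared-error loss. Hypothetical illustration, not the linked program.
public class NotNeuron {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    public static void main(String[] args) {
        double w = 0.1, b = -0.1;          // small initial weights
        double lr = 0.5;                   // learning rate
        double[] inputs  = {0, 1};
        double[] targets = {1, 0};

        for (int iter = 0; iter < 5000; iter++) {
            for (int i = 0; i < inputs.length; i++) {
                double y = sigmoid(w * inputs[i] + b);
                // dLoss/dz for squared error with a sigmoid output: (y - t) * y * (1 - y)
                double delta = (y - targets[i]) * y * (1 - y);
                w -= lr * delta * inputs[i];   // gradient step for the weight
                b -= lr * delta;               // gradient step for the bias
            }
        }
        // Should approach f(0) = 1 and f(1) = 0
        System.out.printf("f(0)=%.3f  f(1)=%.3f%n", sigmoid(b), sigmoid(w + b));
    }
}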
I think this could be overfitting: the neural network reaches the minimum, but after some time the loss starts to grow again and settles in a local minimum.
However, this depends on how your neural network is implemented. You should check whether your data is normalized, for example to between 0 and 1 or between -1 and 1, because if the data is not normalized the gradient can "break out".
Standardization is important too; a small sketch of both is below.
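As a concrete illustration of that advice (a minimal sketch assuming plain arrays of doubles, not tied to any particular framework):

// Min-max normalization (to [0, 1]) and z-score standardization of a feature vector.
public class Scaling {
    static double[] minMax(double[] x) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double v : x) { min = Math.min(min, v); max = Math.max(max, v); }
        double range = (max == min) ? 1.0 : max - min;    // avoid division by zero
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = (x[i] - min) / range;
        return out;
    }

    static double[] standardize(double[] x) {
        double mean = 0, var = 0;
        for (double v : x) mean += v;
        mean /= x.length;
        for (double v : x) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / x.length);
        if (std == 0) std = 1.0;                          // constant feature
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = (x[i] - mean) / std;
        return out;
    }

    public static void main(String[] args) {
        double[] raw = {12.0, 15.0, 20.0, 9.0};           // made-up example values
        System.out.println(java.util.Arrays.toString(minMax(raw)));
        System.out.println(java.util.Arrays.toString(standardize(raw)));
    }
}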

Is it necessary to initialize the weights every time the same model is retrained in MATLAB with nntool?

I know that for an ANN model the initial weights are random. If I train a model and repeat training 10 times with nntool, are the weights re-initialized every time I click the training button, or does training continue from the weights that were just adjusted?
I am not sure whether the nntool you refer to uses the train method (see https://de.mathworks.com/help/nnet/ref/train.html).
I have used this method quite extensively, and it works in a similar way to TensorFlow: you store a number of checkpoints and load the latest state to continue training from that point. The code would look something like this:
[feat,target] = iris_dataset;   % example dataset shipped with the toolbox
my_nn = patternnet(20);         % pattern-recognition network with 20 hidden neurons
my_nn = train(my_nn,feat,target,'CheckpointFile','MyCheckpoint','CheckpointDelay',30);
Here we have requested that checkpoints be stored no more often than once every 30 seconds. When you want to continue training, the net must be loaded from the checkpoint file:
[feat,target] = iris_dataset;
load MyCheckpoint               % loads the 'checkpoint' variable saved by train
my_nn = checkpoint.my_nn;       % recover the partially trained network
my_nn = train(my_nn,feat,target,'CheckpointFile','MyCheckpoint');
This solution involves training the network from the command line or via a script rather than using the GUI supplied by MathWorks. I honestly think the GUI is great for beginners, but if you want to do anything more interesting, start using the command line, or even better switch to libraries like Torch or TensorFlow!
Hope it helps!

MATLAB script node in LabVIEW with different timing

I have a DAQ for temperature measurement. I sample continuously and, during acquisition, calculate the temperature difference per minute (cooling rate, CR). The CR and temperature values are fed into a MATLAB script that runs a physical model (predicting the temperature drop for the next 30 s). I then record and compare the predicted and experimental values in LabVIEW.
What I am trying to do is have the MATLAB model execute every 30 s and send out its predictions as outputs of the MATLAB script. One of these outputs is used to change the air blower motor speed until the next MATLAB run (which in turn affects the temperature drop over the next 30 s as well, so it becomes a closed loop). After 30 s, while the main process is still running, the CR and temperature values are sent to the MATLAB model again, and so on.
I have a case structure around this MATLAB script node, and inside the case structure I applied an Elapsed Time function to control the timing of the MATLAB script, but it is not working.
Short answer: I believe (one of) the reasons the program behaves strangely when the timing changes is that there are several race conditions in the code.
The part of the diagram presented shows several big problems with the code:
Local variables lead to race conditions; use dataflow instead. For example, you are writing to the Tinitial local variable and reading from the Tinitial local variable in chunks of code with no data dependency between them, so it is not known whether the read or the write will happen first. This may not manifest itself badly with small delays, while with big delays it may become an issue (see the sketch after this list). Solution: rewrite your program following this example:
From bad: [original screenshot]
To good: [reworked screenshot] (never mind the broken wires)
The MATLAB script node executes in the main UI execution system. If it executes for a long time, it may freeze indicators and controls as well as the execution of other pieces of code. Change the execution system of the other VIs in your program (say, to "other 1") and see if the situation improves.
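To show the race-condition idea in text form (a hypothetical Java analogy, since LabVIEW code is graphical): two pieces of code that share a variable without any data dependency between them can run in either order, so the value that gets read is not deterministic.

// Java analogy for the LabVIEW local-variable race: a writer and a reader share
// 'tInitial' with no data dependency, so the read may see the old or the new value.
public class RaceConditionDemo {
    private static double tInitial = 20.0;   // plays the role of the Tinitial local variable

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> tInitial = 85.0);                              // writes it
        Thread reader = new Thread(() -> System.out.println("Tinitial = " + tInitial)); // reads it

        // Nothing forces an order between the two, so either may happen first.
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}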

Weka classifier MultilayerPerceptron

I have a problem with Weka.
I'm using Weka for data mining of time series with a neural network, in other words the MultilayerPerceptron classifier.
My configuration is "MultilayerPerceptron -L 0.3 -M 0.1 -N 1000 -V 0 -S 0 -E 20 -H a".
Here is the problem: Weka never finishes.
I have 1904 instances and 18 attributes, corresponding to five days of the time series, which is not much data.
The last time, Weka ran for 8 days and then stopped running, but didn't give me a result.
Any ideas?
I have run a MultilayerPerceptron with 10-fold cross-validation on a generated dataset containing 1904 instances and 18 attributes.
With the configuration outlined above, each fold took 12 seconds on my PC and completed fine. Given the size of the dataset and the number of training runs, it shouldn't take very long to train the MLP.
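For reference, this is roughly how such a run can be reproduced with Weka's Java API (a sketch; the file name series.arff, the class index and the random seed are placeholders, not details from your setup):

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.Utils;
import weka.core.converters.ConverterUtils.DataSource;

// 10-fold cross-validation of a MultilayerPerceptron with the options from the question.
public class MlpCrossValidation {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("series.arff");      // placeholder ARFF file
        data.setClassIndex(data.numAttributes() - 1);          // assume the class is the last attribute

        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setOptions(Utils.splitOptions("-L 0.3 -M 0.1 -N 1000 -V 0 -S 0 -E 20 -H a"));

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(mlp, data, 10, new Random(1)); // placeholder seed
        System.out.println(eval.toSummaryString());
    }
}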
Perhaps there is something odd about the data you are using (perhaps you could supply the ARFF header and some sample lines), or the system stopped training for some reason. You could try another computer, but I'm not sure whether that would resolve the issue.
I can't see why it would take 8 days to train a network like this. You probably don't need to wait that long before realising that there is an issue in the training. :)
Hope this helps!

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried increasing the grid interval to reduce the relative error, but the simulation still advances too fast. I have only read about approaches that reduce model complexity so that the simulation can keep up with the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
However, I did not use inline integration.
Do I need hard real time or inline integration to improve these results? Shouldn't it be possible to get a deviation lower than 4.5% using soft real time?
Edit 2:
I used the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after the simulation starts, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation, it becomes about 10% faster than real time. In addition, the jitter in the results increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time, the deviation appears to be equal to the deviation obtained with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of these large deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95 (about 1.053), which asks the simulation to run roughly 5% slower and should compensate for the observed speed-up.
There is also a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will look for it again later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola problems with an embedded OPC server requires administrator rights (which I did not have before), and the active folder of Dymola must not be write-protected.