I am trying to create a delay block in Simulink that has 3 options within it. The 3 options I want are x, y, and z. I would like to be able to choose a different type of statistical probability function within each one. However, I am having a hard time finding documentation on this. Any help would be greatly appreciated.
I am working on my simulation model in Simulink, where I am using the "NN Predictive Controller" block. I am trying to display the output of the NN controller on Scope 2.
As you can see in the picture, I have 2 signals from the workspace. Both are discrete signals (sampling frequency 360 Hz), and both are vectors of 3600 rows.
In the NN Predictive Controller I have trained the neural network on signals of the same size and set the sampling interval to 0.1.
When I run this simulation, it takes approximately 10 hours. Is there any way to reduce the simulation time without increasing the sampling interval in the NN controller? My second question: why does it take so long?
If you need additional information about this model, please let me know.
Thank you
Picture:
Use Accelerator Mode or Rapid Accelerator Mode in Simulink. You can switch the mode in the drop-down list where Normal is currently selected.
In Accelerator Mode, part of the model is compiled. In Rapid Accelerator Mode, all scopes are additionally deactivated; for data evaluation you then need to store the results to the workspace and/or a file and analyse them later.
See: http://de.mathworks.com/help/simulink/ug/how-the-acceleration-modes-work.html
and: http://de.mathworks.com/company/newsletters/articles/improving-simulation-performance-in-simulink.html
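For what it's worth, here is a minimal MATLAB sketch of switching the mode programmatically and pulling the logged results back into the workspace instead of watching the Scopes live; 'myModel' is a placeholder model name, not something from the question.

    % Minimal sketch: run a model in Rapid Accelerator mode and log the
    % results to the workspace for later analysis. 'myModel' is a placeholder.
    model = 'myModel';
    load_system(model);

    % Equivalent to picking Rapid Accelerator in the drop-down list;
    % use 'accelerator' instead for plain Accelerator mode.
    simOut = sim(model, 'SimulationMode', 'rapid', ...
                        'ReturnWorkspaceOutputs', 'on');

    % Signals marked for logging (or To Workspace blocks) can then be
    % inspected after the run, e.g.:
    logsout = simOut.get('logsout');

Note that the first Rapid Accelerator run spends extra time building a standalone executable, so the speed-up only pays off for long simulations like this one.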
I am currently studying a doctoral thesis in control theory. At the end of every chapter there is a simulation of a problem related to the subject. I have finished the theory, but for further understanding I would like to reproduce the simulations. The first simulation is as follows:
The solution of the problem concludes in a system of differential equations whose right-hand side consists of functions with unknown parameters. The author states the following: "We will use neural networks with one hidden layer, sigmoid basis functions and 5 weights in the external layer in order to approximate every parameter of the unknown functions. More specifically, the weights of the hidden layer are selected through iterative trials and are kept stable during the simulation." He then states the logic with which he selects the initial values of the unknown parameters, and then shows the results of the simulation.
Could anyone give me a lead on where to look and what I need to know in order to solve this specific problem myself in MATLAB (since this is the environment I am most familiar with)? The results of a Google search are chaotic, since I don't really know what I'm looking for.
If you need any more info, feel free to ask!
You can try MATLAB's Neural Network Toolbox. This gives you a nice UI where you can configure the network, train it with data to find the parameter values, and test its performance. No coding involved.
Or you can program it by hand. Since you are working with one hidden layer, it should be very simple. I am sure any machine learning or neural network (NN) textbook would have an example of it. You can also look on GitHub for projects; there should be many NN projects there, in case you are looking to salvage code from an existing project.
Most importantly, you should start by learning about NNs, if you haven't done so already. An NN with a single hidden layer is easy to implement once you understand the equations for forward and back propagation.
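To make the structure from the thesis quote concrete, here is a minimal MATLAB sketch under the assumptions described there: one sigmoid hidden layer whose weights are chosen once and then kept fixed, and 5 trainable weights in the outer layer. The data and all sizes below are made-up placeholders, and the outer weights are fitted offline by least squares just to show the structure; in the thesis they would instead be updated online by an adaptive law during the simulation.

    % Sketch: one-hidden-layer approximator with sigmoid basis functions,
    % fixed hidden-layer weights, and 5 trainable outer weights.
    x = linspace(-2, 2, 100)';            % sample inputs (placeholder)
    y = sin(3*x) + 0.1*x.^2;              % "unknown" function to approximate

    nHidden = 5;                          % 5 weights in the external layer
    rng(1);
    V = randn(nHidden, 1);                % hidden-layer weights, kept fixed
    b = randn(nHidden, 1);                % hidden-layer biases, kept fixed

    sigm = @(z) 1 ./ (1 + exp(-z));       % sigmoid basis function
    Phi  = sigm(x * V' + ones(size(x)) * b');   % 100-by-5 matrix of basis outputs

    w    = Phi \ y;                       % least-squares fit of the outer weights
    yhat = Phi * w;                       % network output

    plot(x, y, 'b', x, yhat, 'r--');
    legend('target', 'NN approximation');

For the actual simulation you would embed this forward pass in the right-hand side of the ODE system and replace the least-squares step with the weight-update law given in the thesis.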
Is there any way I could program MATLAB/Simulink to automatically generate circuits on its own? I am using the PLECS blockset (Piece-wise Linear Electrical Circuit Simulation) embedded in Simulink.
For example, I need to have hundreds of identical blocks in a single .mdl file. Instead of inserting them one by one by calling the block that I previously saved in a Simulink library, is it possible to program Simulink to automatically generate hundreds of blocks by itself?
The only way I was told is by "using vectorization for most components. Most components are vectorized if they have a vectorized input signal or if one of their parameters is specified as a vector." However, I could not find any further information/details; I would appreciate it if any of you could give an opinion on this.
I just want to know if this is possible; otherwise, I would have to try another approach.
Thanks!
Edited on 10 July 2013: Further to my question, I have confirmed with Plexim that such features (add_block and add_line) do not exist in PLECS (Piece-wise Linear Electrical Circuit Simulation). Does anyone know of any way I could automate the PLECS model? I appreciate any suggestions... Thanks
You can probably use functions like add_block and add_line to automate the creation of your Simulink model from a library.
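A rough sketch of what that can look like for ordinary Simulink library blocks is below; 'genModel' is a placeholder name, the Gain block just stands in for whatever block you need hundreds of, and (per the edit to the question) PLECS components themselves may not be scriptable this way.

    % Sketch: create a model and add many copies of a block in a loop.
    load_system('simulink');              % library containing the source block
    modelName = 'genModel';
    new_system(modelName);
    open_system(modelName);

    nBlocks = 100;
    for k = 1:nBlocks
        blockPath = sprintf('%s/Gain%d', modelName, k);
        add_block('simulink/Math Operations/Gain', blockPath, ...
                  'Position', [100, 40 + 60*(k-1), 160, 80 + 60*(k-1)]);
    end

    % Wiring works the same way (output port 1 of Gain1 to input port 1 of Gain2):
    add_line(modelName, 'Gain1/1', 'Gain2/1');

    save_system(modelName);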
I'm quite new to this topic, so any help would be great. What I need is to optimize a neural network in MATLAB by using a GA. My network has a [2x98] input and a [1x98] target. I've tried consulting the MATLAB help, but I'm still kind of clueless about what to do :( so any help would be appreciated. Thanks in advance.
Edit: I guess I didn't say what is there to be optimized, as Dan said in the 1st answer. I guess the most important thing is the number of hidden neurons, and maybe the number of hidden layers and training parameters such as the number of epochs. Sorry for not providing enough info, I'm still learning about this.
If this is a homework assignment, do whatever you were taught in class.
Otherwise, ditch the MLP entirely. Support vector regression ( http://www.csie.ntu.edu.tw/~cjlin/libsvm/ ) is much more reliably trainable across a broad swath of problems, and pretty much never runs into the stuck-in-a-local-minimum problem often hit with a back-propagation-trained MLP, which forces you to solve a network topology optimization problem just to find a network that will actually train.
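If you want to try that suggestion without leaving MATLAB, a rough sketch using the built-in fitrsvm from the Statistics and Machine Learning Toolbox (rather than the libsvm package linked above) could look like this; the data is a placeholder generated in the question's [2x98] / [1x98] shapes.

    % Sketch: support vector regression with fitrsvm instead of libsvm.
    % X (2x98 inputs) and T (1x98 targets) are placeholder data; fitrsvm
    % expects observations in rows, hence the transposes.
    X = rand(2, 98);
    T = sin(2*pi*X(1,:)) + 0.5*X(2,:);

    mdl = fitrsvm(X', T', ...
                  'KernelFunction', 'rbf', ...   % RBF kernel, common for SVR
                  'Standardize', true);          % scale each input feature

    pred = predict(mdl, X');                     % 98x1 predictions
    fprintf('Training RMSE: %.4f\n', sqrt(mean((pred - T').^2)));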
Well, you need to be more specific about what you are trying to optimize. Is it the size of the hidden layer? Do you have a hidden layer? Is it parameter optimization (learning rate, kernel parameters)?
I assume you have a set of parameters (# of hidden layers, # of neurons per layer, ...) that needs to be tuned. Instead of brute-force searching all combinations to pick a good one, a GA can help you "jump" from one combination to another, so you can "explore" the search space for potential candidates.
A GA can also help in selecting "helpful" features. Some features might appear redundant and you want to prune them; however, the data may have too many features to search for the best subset with approaches such as forward selection. Again, a GA can "jump" from one candidate set to another.
You will need to find a way to encode the data (input parameters, features, ...) fed to the GA. For finding a set of input parameters or a good set of features, I think binary encoding should work. In addition, choosing the operators the GA uses to reproduce offspring is also important. Note that the GA needs to be tuned too (e.g., early stopping, which can also be applied to the ANN).
Here are just some ideas. You might want to search for more info about GA, feature selection, ANN pruning...
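To make the binary-encoding idea concrete, here is a rough sketch of GA-based feature selection with MATLAB's ga solver (Global Optimization Toolbox), where each gene is 0/1 for dropping/keeping a feature. The data and the cheap least-squares fitness below are placeholders; in practice the fitness would train and validate your ANN on the selected subset. (Local functions in scripts need R2016b or newer; otherwise put featureFitness in its own file.)

    % Sketch: binary-encoded feature selection with ga. Placeholder data.
    rng(0);
    nFeatures = 10;
    X = randn(200, nFeatures);
    y = X(:,1) - 2*X(:,3) + 0.1*randn(200, 1);   % only features 1 and 3 matter

    fitness = @(mask) featureFitness(mask, X, y);

    lb = zeros(1, nFeatures);
    ub = ones(1, nFeatures);
    intcon = 1:nFeatures;                        % integer genes in [0,1] => binary

    opts = optimoptions('ga', 'PopulationSize', 30, ...
                              'MaxGenerations', 40, 'Display', 'off');
    bestMask = ga(fitness, nFeatures, [], [], [], [], lb, ub, [], intcon, opts);
    disp(find(bestMask > 0.5));                  % indices of the selected features

    function err = featureFitness(mask, X, y)
        % ga minimizes, so return a large value for an empty selection and
        % otherwise the error of a quick least-squares fit on the kept features.
        idx = mask > 0.5;
        if ~any(idx)
            err = 1e6;
            return;
        end
        Xs  = X(:, idx);
        w   = Xs \ y;
        err = mean((Xs*w - y).^2) + 0.01*sum(idx);  % small penalty per feature
    end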
Since you're using MATLAB already I suggest you look into the Genetic Algorithms solver (known as GATool, part of the Global Optimization Toolbox) and the Neural Network Toolbox. Between those two you should be able to save quite a bit of figuring out.
You'll basically have to do 2 main tasks:
Come up with a representation (or encoding) for your candidate solutions
Code your fitness function (which basically tests candidate solutions) and pass it as a parameter to the GA solver.
If you need help coming up with a fitness function or an encoding of candidate solutions, then you'll have to be more specific.
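As a rough sketch of those two tasks, under some assumptions (a single hidden layer whose size is the only thing being tuned, fitnet from the Neural Network Toolbox, and placeholder data in the question's 2x98 / 1x98 shapes), the GA solver can be wired to a fitness function that trains a network and returns its validation error. Training is stochastic, so in practice you would average a few runs per candidate.

    % Sketch: use ga to pick the number of hidden neurons for a fitnet.
    % The encoding is a single integer gene; the fitness trains the network
    % and returns its validation MSE. The data below is a placeholder.
    rng(0);
    inputs  = rand(2, 98);
    targets = sum(inputs) + 0.05*randn(1, 98);

    fitness = @(n) nnFitness(round(n), inputs, targets);

    lb = 1; ub = 30; intcon = 1;                 % search 1..30 hidden neurons
    opts = optimoptions('ga', 'PopulationSize', 10, ...
                              'MaxGenerations', 15, 'Display', 'iter');
    bestN = ga(fitness, 1, [], [], [], [], lb, ub, [], intcon, opts);
    fprintf('Best hidden layer size found: %d\n', bestN);

    % (Put nnFitness in its own file on MATLAB releases older than R2016b.)
    function err = nnFitness(nHidden, inputs, targets)
        % Train a one-hidden-layer fitting network and score it on the
        % validation split chosen by the toolbox's default data division.
        net = fitnet(nHidden);
        net.trainParam.showWindow = false;       % keep the GA run quiet
        [net, tr] = train(net, inputs, targets);
        out = net(inputs(:, tr.valInd));
        err = mean((out - targets(tr.valInd)).^2);
    end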
Hope it helps.
MATLAB has a simple but great explanation for this problem here. It explains both the ANN and GA parts.
For more info on using an ANN from the command line, see this.
There is also plenty of literature on the subject if you google it. It is, however, not related to MATLAB, but simply covers the results and the method.
Look up Matthew Settles on Google Scholar. He did some work in this area at the University of Idaho in the last 5-6 years. He should have citations relevant to your work.