Is there anyone working with Dymola in the field of electrical drive simulations?
You can find a very nice implementation here: [Haumer].
Read also the publications from this group: there are a lot of improvements on this approach. Its advantage is that it captures the fundamental physics/symmetries of all types of machines.
We want to publish an open-source framework for integrating Reinforcement Learning into smart-grid optimization. We use OpenModelica as the GUI, PyFMI for the import into Python, and Gym.
Nearly everything is running, but a way to connect or disconnect additional loads during the simulation is missing. All we can do for now is vary the parameters of existing loads, which gives some flexibility, but far less than being able to switch loads on and off.
Using the switches implemented in OpenModelica is not really an option: they just place a resistor at that spot and give it either a very low or a very high resistance. First, it's not really decoupled, and second, high resistances make the ODE system stiff, which makes it really hard (and costly) to solve. In our tests the LSODA solver (in stiff cases basically a BDF method) often ran into numerical errors, regardless of how the Jacobian was calculated (analytically, by directional derivatives, or with finite differences).
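The stiffness problem described above can be demonstrated with a toy model (made-up numbers, not our actual grid): a switch modeled as a low/high resistance introduces a very fast time constant next to the slow dynamics of the rest of the circuit, and an explicit fixed-step method then diverges unless the step is shrunk to that fast scale.

```python
# Toy illustration of stiffness: y' = -y / tau.
# A "switch resistor" introduces a fast time constant tau_fast
# next to the slow time constant tau_slow of the rest of the circuit.

def explicit_euler(tau, h, steps, y0=1.0):
    """Integrate y' = -y/tau with fixed-step explicit Euler."""
    y = y0
    for _ in range(steps):
        y += h * (-y / tau)
    return y

h = 1e-3          # step size chosen for the slow dynamics
tau_slow = 1.0    # normal load dynamics
tau_fast = 1e-9   # artificial fast mode created by the switch resistor

y_slow = explicit_euler(tau_slow, h, 1000)  # decays smoothly toward 0
y_fast = explicit_euler(tau_fast, h, 40)    # |1 - h/tau| ~ 1e6 per step: blows up

print(y_slow)               # close to exp(-1) ≈ 0.368
print(abs(y_fast) > 1e100)  # True: the explicit method diverges
```

An implicit (BDF-type) solver stays stable here, but only by paying for Jacobian factorizations at every step, which is exactly the cost mentioned above.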
Does anyone have an idea how we could implement a real "switching" effect?
Best regards,
Henrik
Ideal connection and disconnection of components during simulation requires structure variability, which is not fully supported by Modelica (yet). See also this answer: https://stackoverflow.com/a/30487641/8725275
One solution to this problem is to translate all possible model structures in advance and switch the simulation model when certain conditions are met. As there is some overhead involved, this approach only makes sense when the model is not switched very often.
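The pre-compiled-modes idea can be sketched in a few lines. The two mode functions below are hypothetical stand-ins for the separately compiled models (with PyFMI each mode would be its own FMU loaded via load_fmu); the point is the switch-and-transfer-state loop, not the dynamics:

```python
# Sketch of mode switching: every circuit configuration is prepared in
# advance; when a switching condition fires, the simulation stops, the
# state is transferred, and the other mode continues from there.

def mode_load_connected(v, t):
    # voltage dynamics with the extra load attached (made-up numbers)
    return -0.5 * v + 1.0

def mode_load_disconnected(v, t):
    # voltage dynamics without the extra load (made-up numbers)
    return -0.1 * v + 1.0

def simulate(mode, v0, t0, t_end, h=0.01, switch_condition=None):
    """Explicit-Euler integration of one mode; stops early if the
    switching condition becomes true and reports why it stopped."""
    v, t = v0, t0
    while t < t_end:
        if switch_condition and switch_condition(v, t):
            return v, t, "switched"
        v += h * mode(v, t)
        t += h
    return v, t, "finished"

# Run mode 1 until t = 5, when the "disconnect load" event fires.
v, t, status = simulate(mode_load_connected, 0.0, 0.0, 10.0,
                        switch_condition=lambda v, t: t >= 5.0)
if status == "switched":
    # state transfer: the final state of mode 1 initializes mode 2
    v, t, status = simulate(mode_load_disconnected, v, t, 10.0)

print(round(v, 3))
```

Because each mode is an ordinary, non-stiff model, no artificial high-resistance branch is needed, which is the whole appeal of the approach.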
There is a Python framework that was built to support this process: DySMo. The tool was written by Alexandra Mehlhase, who has published a lot of interesting work on structure variability, e.g. An example of beneficial use of variable-structure modeling to enhance an existing rocket model.
The paper Simulating a Variable-structure Model of an Electric Vehicle for Battery Life Estimation Using Modelica/Dymola and Python by Moritz Stueber is also worth a look. It contains a nice introduction to variable-structure systems and the available solutions.
I have tried and searched, and found that RNNs give better results. Which should I use: LSTMs, GRUs, traditional RNNs, or CNNs?
The architectures you mention are really loose families of architectures. Performance depends on the details and (of course) the task. Moreover, the styles are often combined in various ways, so it isn't really an either-or choice.
Nevertheless, at the time of writing the Transformer-based BERT and the RNN-based ELMo architectures are popular. Pre-trained models and code are available for both, and they both perform well across a variety of tasks, including classification. Why not try them both?
These architectures can be considered "vanilla", because many more advanced architectures build on them. A newer one called ULMFiT is actually giving some state-of-the-art results in classification and is simple to understand and implement using the fast.ai library. BERT is also a good one, but more complicated to understand in my opinion.
I am looking for a distributed simulation algorithm that allows me to couple multiple standalone systems. The systems I am targeting for interconnection use different formalisms, e.g. discrete-time and continuous simulation paradigms. So far, the only algorithms I have found come from the field of parallel discrete event simulation (PDES), such as the classical Chandy/Misra null-message protocol, which has some very undesirable problems. My question is: what other approaches to interconnecting simulation systems, besides PDES algorithms, are known, i.e. can be used for this purpose?
Not an algorithm, but there are two IEEE standards out there that define protocols intended to address your issue: High-Level Architecture (HLA) and Distributed Interactive Simulation (DIS). HLA has a much greater presence in the analytic discrete-event simulation community where I hang out; DIS tends to get more use in training applications. If you'd like to check out some applications papers, go to the Winter Simulation Conference / INFORMS-sponsored paper archive site and search for HLA; you'll get 448 hits.
Be forewarned, trying to make this stuff work in general requires some pretty weird plumbing, lots of kludges, and can be very fragile.
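For reference, the Chandy/Misra null-message idea mentioned in the question can be sketched in a few lines (a hypothetical two-process setup): each logical process may only advance to the earliest timestamp it could still receive, and null messages carry the sender's clock plus its lookahead so that bound keeps moving even when no real events flow.

```python
# Minimal sketch of conservative (Chandy/Misra) synchronization with
# null messages between two logical processes (LPs). Hypothetical
# setup: each LP has a local event list and a fixed lookahead.

LOOKAHEAD = 1.0

class LP:
    def __init__(self, name, event_times):
        self.name = name
        self.events = sorted(event_times)  # pending local event timestamps
        self.clock = 0.0
        self.neighbor_bound = 0.0          # latest guarantee heard from neighbor

    def receive_null(self, timestamp):
        # null message: "I will send nothing with a timestamp below this"
        self.neighbor_bound = max(self.neighbor_bound, timestamp)

    def step(self):
        """Process all events that are safely below the neighbor bound,
        then return the content of our own null message."""
        while self.events and self.events[0] <= self.neighbor_bound:
            self.clock = self.events.pop(0)
        return self.clock + LOOKAHEAD

a = LP("A", [0.5, 2.0, 3.5])
b = LP("B", [1.0, 2.5])

# Exchange null messages in rounds until both event lists drain.
for _ in range(10):
    b.receive_null(a.step())
    a.receive_null(b.step())

print(a.events, b.events)  # both empty: no deadlock, no rollback needed
```

The "undesirable problems" show up right here: with a small lookahead the processes exchange many null messages per real event, which is one reason optimistic (Time Warp) and standards-based (HLA time management) alternatives exist.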
I have a couple of slightly modified / non-traditional setups for feedforward neural networks which I'd like to compare for accuracy against the ones used professionally today. Are there specific data sets, or types of data sets, that can be used as a benchmark? E.g. "the style of ANN typically used for such-and-such a task is 98% accurate against this data set." It would be great to have a variety of these: a couple for statistical analysis, a couple for image and voice recognition, etc.
Basically, is there a way to compare an ANN I've put together against ANNs used professionally, across a variety of tasks? I could pay for data or software, but would prefer free of course.
CMU has some benchmarks for neural networks: Neural Networks Benchmarks
The Fast Artificial Neural Networks library (FANN) has some benchmarks that are widely used: FANN. Download the source code (version 2.2.0) and look at the directory datasets; the format is very simple. There is always a training set (x.train) and a test set (x.test). At the beginning of the file are the number of instances, the number of inputs, and the number of outputs. The following lines are the inputs of the first instance, then the outputs of the first instance, and so on. You can find example programs with FANN in the directory examples. I think they even had detailed comparisons to other libraries in previous versions.
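The file layout described above is easy to parse yourself if you want to feed the benchmarks to your own ANN. A short sketch, assuming the format is exactly as described (header with instance/input/output counts, then alternating input and output lines):

```python
# Parse the FANN dataset format described above: a header line with
# "num_instances num_inputs num_outputs", followed by alternating
# lines of inputs and outputs, one pair of lines per instance.

def parse_fann(text):
    lines = text.strip().splitlines()
    n, n_in, n_out = map(int, lines[0].split())
    instances = []
    for i in range(n):
        inputs = [float(x) for x in lines[1 + 2 * i].split()]
        outputs = [float(x) for x in lines[2 + 2 * i].split()]
        assert len(inputs) == n_in and len(outputs) == n_out
        instances.append((inputs, outputs))
    return instances

# Tiny XOR-style example in that format:
sample = """4 2 1
0 0
0
0 1
1
1 0
1
1 1
0
"""
data = parse_fann(sample)
print(len(data))   # 4 instances
print(data[1])     # ([0.0, 1.0], [1.0])
```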
I think most of FANN's benchmarks, if not all, are from Proben1. Google for it; there is a paper by Lutz Prechelt with detailed descriptions and comparisons.
I am new to neural networks and I need to determine the pattern among a given set of inputs and outputs. How do I decide which neural network to use for training, or even which learning method to use? I have little idea about the pattern or relation between the given inputs and outputs.
Any sort of help will be appreciated. If you want me to read some material, it would be great if links are provided.
If any more info is needed, please say so.
Thanks.
Choosing the right neural network is something of an art form. It's difficult to give generic suggestions, as the best NN for a situation will depend on the problem at hand. As with many such problems, neural networks may or may not be the best solution. I'd highly recommend trying out different networks and testing their performance against a test data set. When I did this, I usually used the ANN tools in the R software package.
Also keep your mind open to other statistical learning techniques; things like decision trees and Support Vector Machines may be a better choice for some problems.
I'd suggest the following books:
http://www.amazon.com/Neural-Networks-Pattern-Recognition-Christopher/dp/0198538642
http://www.stats.ox.ac.uk/~ripley/PRbook/#Contents