Libraries to model district heating systems with Modelica Dymola

I want to model a district heating system with Modelica Dymola.
Which libraries can be used to model consumers, heat sources, and the district heating network?
- typically used
- open source or commercial
I would also appreciate tips on how to assemble a simple first model.

The open-source library TransiEnt can also model district heating networks. It is developed as part of a research project: https://www.tuhh.de/transient-ee/en/download.html

You can get a commercial, supported, and industrially proven library for modeling district heating systems from Modelon, the Thermal Power Library; see http://www.modelon.com/products/modelica-libraries/thermal-power-library/. Among other things, you can do optimization-based scheduling of production units for district heating networks.

Or you can look at the commercial libraries that are part of the Dymola product portfolio, I’m sure you can find a good match.
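If you just want a simple first model, you do not even need a dedicated district heating library: the lumped Modelica.Thermal.FluidHeatFlow package in the free Modelica Standard Library already provides pumps, pipes with heat ports, and ambients. Below is a minimal sketch of a plant feeding one consumer through a supply pipe. Component and parameter names assume MSL 3.2.x, the topology follows the library's own SimpleCooling example (an open loop from ambient to ambient), and all parameter values are placeholders you will want to replace.

```modelica
model FirstDistrictHeating
  "Minimal district heating loop: plant -> supply pipe -> consumer"
  import Modelica.Thermal.FluidHeatFlow;
  import Modelica.Thermal.HeatTransfer;

  // Circulation pump with a prescribed constant volume flow (1 l/s)
  FluidHeatFlow.Sources.VolumeFlow pump(
    m=0.1, T0=293.15, constantVolumeFlow=1e-3);
  // "Plant": a pipe whose heat port receives the heat of the source
  FluidHeatFlow.Components.HeatedPipe plant(
    m=1, T0=293.15,
    V_flowLaminar=1e-4, dpLaminar=100,
    V_flowNominal=1e-3, dpNominal=1000);
  // Distribution network reduced to a single adiabatic supply pipe
  FluidHeatFlow.Components.IsolatedPipe supplyPipe(
    m=10, T0=293.15,
    V_flowLaminar=1e-4, dpLaminar=100,
    V_flowNominal=1e-3, dpNominal=1000);
  // "Consumer": a pipe that gives off heat to a room
  FluidHeatFlow.Components.HeatedPipe consumer(
    m=1, T0=293.15,
    V_flowLaminar=1e-4, dpLaminar=100,
    V_flowNominal=1e-3, dpNominal=1000);
  // Heat source: constant 10 kW fed into the plant pipe
  HeatTransfer.Sources.FixedHeatFlow heatSource(Q_flow=10e3);
  // Room at 20 degC behind a radiator modeled as a thermal conductance
  HeatTransfer.Sources.FixedTemperature room(T=293.15);
  HeatTransfer.Components.ThermalConductor radiator(G=500);
  // Ambients fix pressure and inlet temperature at both ends of the open loop
  FluidHeatFlow.Sources.Ambient ambient1(
    constantAmbientPressure=1e5, constantAmbientTemperature=293.15);
  FluidHeatFlow.Sources.Ambient ambient2(
    constantAmbientPressure=1e5, constantAmbientTemperature=293.15);
equation
  connect(ambient1.flowPort, pump.flowPort_a);
  connect(pump.flowPort_b, plant.flowPort_a);
  connect(plant.flowPort_b, supplyPipe.flowPort_a);
  connect(supplyPipe.flowPort_b, consumer.flowPort_a);
  connect(consumer.flowPort_b, ambient2.flowPort);
  connect(heatSource.port, plant.heatPort);
  connect(radiator.port_a, consumer.heatPort);
  connect(radiator.port_b, room.port);
end FirstDistrictHeating;
```

Once this simulates, you can replace the FixedHeatFlow with a boiler or CHP model, put several consumer branches in parallel, and close the loop with a return line; TransiEnt and the Thermal Power Library then provide ready-made, validated components for exactly those steps.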

Related

Evaluation of user-based collaborative filtering K-Nearest Neighbor Algorithm

I was trying to find evaluation mechanisms for the collaborative k-nearest-neighbor algorithm, but I am confused about how to evaluate it. How can I be sure that the recommendations made by this algorithm are correct or good? I have also developed an algorithm of my own that I want to compare with it, but I am not sure how to compare and evaluate the two. The data set I am using is MovieLens.
Your help on evaluating this recommender system will be highly appreciated.
Evaluating recommender systems is a major concern of the research and industry communities around them. Look at "Evaluating collaborative filtering recommender systems", a paper by Herlocker et al. The people who publish the MovieLens data (the GroupLens research lab at the University of Minnesota) also publish many papers on recsys topics, and the PDFs are often freely available at http://grouplens.org/publications/.
Check out https://scholar.google.com/scholar?hl=en&q=evaluating+recommender+systems.
In short, you should use a method that hides some data. You will train your model on a portion of the data (called "training data") and test on the remainder of the data that your model has never seen before. There's a formal way to do this called cross-validation, but the general concept of visible training data versus hidden test data is the most important.
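For example, with MovieLens you can hide a test set $T$ of (user, item, rating) triples, train on the remaining ratings, and score both of your algorithms on the same hidden set with an accuracy metric such as the root mean squared error:

$$\mathrm{RMSE} = \sqrt{\frac{1}{|T|} \sum_{(u,i) \in T} \left( \hat{r}_{ui} - r_{ui} \right)^2}$$

where $r_{ui}$ is the true held-out rating of user $u$ for item $i$ and $\hat{r}_{ui}$ is your algorithm's prediction. Whichever algorithm has the lower RMSE on the same hidden test set is the more accurate one; ranking metrics such as precision@k (the fraction of the top-k recommendations that appear in the user's held-out items) work the same way.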
I also recommend https://www.coursera.org/learn/recommender-systems, a Coursera course on recommender systems taught by GroupLens folks. In that course you'll learn to use LensKit, a recommender systems framework in Java that includes a large evaluation suite. Even if you don't take the course, LensKit may be just what you want.

Simulation of Electrical drives using Dymola

Is there anyone working with Dymola in the field of electrical drive simulation?
You can find a very nice implementation here: [Haumer].
Also read the publications from this group: there are a lot of improvements on this approach. Its advantage is that it builds on the fundamental physics and symmetries of all types of machines.
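For a quick start, independent of that link: Modelica.Electrical.Machines in the free Modelica Standard Library (Anton Haumer is one of its main authors) contains ready-made machine models. A direct-on-line start of a squirrel-cage induction machine could look like the following sketch; component names assume MSL 3.2.x, the machine uses the library's default data, and the load values are placeholders.

```modelica
model InductionMachineDOL
  "Direct-on-line start of a squirrel-cage induction machine"
  import Modelica.Electrical.Machines;
  import Modelica.Electrical.MultiPhase;

  // Induction machine with the library's default parameter values
  Machines.BasicMachines.AsynchronousInductionMachines.AIM_SquirrelCage aimc;
  // Terminal box selects the stator connection (here: delta)
  Machines.Utilities.TerminalBox terminalBox(terminalConnection="D");
  // Symmetric three-phase grid: 50 Hz, 230 V RMS phase voltage
  MultiPhase.Sources.SineVoltage grid(
    V=fill(sqrt(2)*230, 3), freqHz=fill(50, 3));
  MultiPhase.Basic.Star star(m=3);
  Modelica.Electrical.Analog.Basic.Ground ground;
  // Mechanical load: inertia plus a quadratic (fan-type) load torque
  Modelica.Mechanics.Rotational.Components.Inertia loadInertia(J=0.29);
  Modelica.Mechanics.Rotational.Sources.QuadraticSpeedDependentTorque load(
    tau_nominal=-50, TorqueDirection=false,
    w_nominal=Modelica.SIunits.Conversions.from_rpm(1440));
equation
  connect(grid.plug_n, star.plug_p);
  connect(star.pin_n, ground.p);
  connect(grid.plug_p, terminalBox.plugSupply);
  connect(terminalBox.plug_sp, aimc.plug_sp);
  connect(terminalBox.plug_sn, aimc.plug_sn);
  connect(aimc.flange, loadInertia.flange_a);
  connect(loadInertia.flange_b, load.flange);
end InductionMachineDOL;
```

The examples package (Modelica.Electrical.Machines.Examples) contains fully parameterized versions of this and many other drive configurations.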

Theoretically, can everyday computing tasks be broken down into ones solvable by a neural network?

MIT Technology Review recently published this article about a chip from IBM which is more or less an artificial neural network: Why IBM’s New Brainlike Chip May Be “Historic” | MIT Technology Review
The article suggests that the chip might have borrowed a page from the future and could be the beginning of an era of new and evolved computing power. It also talks about programming for the chip:
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Which brings me to the question: can everyday computing tasks be broken down into ones solvable by a neural network (theoretically and/or practically)?
It depends on the task.
There are plenty of tasks for which von Neumann computers are good enough: computing precise values of functions over some range, applying a filter to an image, storing text in a database and reading it back, or storing the prices of products. These are not areas where neural networks are needed. It is of course theoretically possible to train a neural network to decide where and how to save data, or even to do accounting. But for cases where large amounts of data need to be analyzed and current methods don't fit, a neural network can be an option to consider: speech recognition, for example, or picking pictures that will become masterpieces in the future.

What are the available approaches to interconnecting simulation systems?

I am looking for a distributed simulation algorithm that allows me to couple multiple standalone systems. The systems I am targeting for interconnection use different formalisms, e.g. discrete-time and continuous simulation paradigms. So far, the only algorithms I have found come from the field of parallel discrete event simulation (PDES), such as the classical Chandy/Misra null-message protocol, which has some very undesirable problems. My question is: what other approaches to interconnecting simulation systems, besides PDES algorithms, are known and usable?
Not an algorithm, but there are two IEEE standards out there that define protocols intended to address your issue: High-Level Architecture (HLA) and Distributed Interactive Simulation (DIS). HLA has a much greater presence in the analytic discrete-event simulation community where I hang out; DIS tends to get more use in training applications. If you'd like to check out some application papers, go to the Winter Simulation Conference / INFORMS-sponsored paper archive site and search for HLA; you'll get 448 hits.
Be forewarned, trying to make this stuff work in general requires some pretty weird plumbing, lots of kludges, and can be very fragile.

Which open source physical simulation methods are worth porting to GPU?

I am writing a report, and I would like to know which open source physical simulation methods (like molecular dynamics, Brownian dynamics, etc.) that have not yet been ported would, in your opinion, be worth porting to GPU or other special hardware that could potentially speed up the calculation.
Links to the projects would be really appreciated.
Thanks in advance
Any physical simulation technique, be it finite difference, finite element, or boundary element, could benefit from a port to GPU. The same goes for Monte Carlo simulations of financial models. Anything that could use that smoking floating-point processing power, really.
I am currently working on a quantum chemistry application on GPU. As far as I am aware, quantum chemistry is one of the most demanding areas in terms of total CPU time. There have been a number of papers on GPU and quantum chemistry; you can research those.
As for methods, all of them are open source. Are you asking about a particular program? Then you can look at PyQuante or MPQC. For molecular dynamics, look at HOOMD. You can also Google QCD on GPU.