Simulation and Software Engineering [closed] - simulation

I want to start a simulation project, which will be a discrete-time simulation. The purpose is simulating agent communication with some non-autonomous physical models involved, so it is not necessarily limited to a pure agent-based simulation. Before starting, I wanted to ask what software engineering practices specific to simulation exist, for example testing practices (is TDD suitable? Simulation tends to be highly non-deterministic), and which problems commonly occur from a software engineering point of view. I am not talking about the modelling process, but about the process of realizing a system that uses existing models. Related book recommendations are very welcome.
Thanks.

Marcin is right: this question is much too broad to have a correct answer apart from It Depends.™
The main reason for this is that simulation software is, first and foremost, still 'just' software, and the engineering part very much depends on your requirements (programming language, purposes of the software, time budget, constraints on resources, etc.).
Of course, there might be additional steps involved (such as VV&A: verification, validation, and accreditation) and certain tasks, such as testing, need extra care, but all this depends on the context.
Also, before you start hacking away, have you looked at existing tools - maybe there is a library or framework that you can rely on? If so, what approaches have worked there?
Apart from general introductions (like this), most books and papers focus on specific subsets of simulation software (e.g. simulation software in C++, or agent-based simulations, or parallel and distributed simulations). So without more context it is hard to even point you to relevant material.
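On the testing question raised above: one practice that makes TDD workable for stochastic simulations is to inject a seeded random number generator instead of hiding randomness inside the model, so a test can assert exact reproducibility. A minimal C++ sketch, where step() is a hypothetical stand-in for a model update:

    #include <cassert>
    #include <random>

    // Hypothetical model update: randomness is passed in, never created
    // from a time-based or global seed inside the model itself.
    double step(std::mt19937& rng) {
        std::uniform_real_distribution<double> noise(0.0, 1.0);
        return noise(rng);
    }

    double run(unsigned seed, int steps) {
        std::mt19937 rng(seed);          // fixed seed -> deterministic run
        double state = 0.0;
        for (int i = 0; i < steps; ++i)
            state += step(rng);
        return state;
    }

    int main() {
        // Same seed, same trajectory: the test is exact, not statistical.
        assert(run(42u, 1000) == run(42u, 1000));
        return 0;
    }

Separate test runs can then use different seeds to probe statistical behaviour, while regression tests stay exact.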

One common problem in software engineering and development of (agent-based) simulation software is dealing with floating point numbers.
Since not all real numbers can be represented exactly in the floating point formats used by computers, small errors can accumulate over the many operations of a simulation run and influence the final results.
Moreover, small differences in how floating point operations are implemented on various hardware and software platforms may result in different outcomes when a simulation is run on different systems.
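As a quick illustration (a minimal C++ sketch, not taken from the studies linked below), naive single-precision summation drifts visibly over many operations, while compensated (Kahan) summation keeps the error small:

    #include <cstdio>

    int main() {
        const int n = 10'000'000;
        float naive = 0.0f;
        float sum = 0.0f, comp = 0.0f;   // Kahan: running sum + compensation
        for (int i = 0; i < n; ++i) {
            naive += 0.1f;               // rounding error accumulates freely
            float y = 0.1f - comp;       // re-inject the error lost last step
            float t = sum + y;
            comp = (t - sum) - y;        // capture the new low-order error
            sum = t;
        }
        // Exact result is 1000000; the naive sum is off by several percent.
        // Compile without -ffast-math, which would optimize the trick away.
        std::printf("naive: %.1f  kahan: %.1f\n", naive, sum);
        return 0;
    }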
See these links for some extensive studies into the effects of this on agent-based models:
http://www.macaulay.ac.uk/fearlus/floating-point/
http://jasss.soc.surrey.ac.uk/8/1/5.html

Related

Master thesis on developing a TwinCAT 3 driver [closed]

If there is any PLC programmer or TwinCAT 3 user out there: I would like to write my master thesis on TwinCAT 3 in a company. Basically, they have different kinds of test benches, and they want someone to develop drivers for them. I have no experience with PLCs, C++, or the IEC 61131 languages. Is it possible to learn any of these in three months and then start writing the thesis? I have three months of internship time before starting. I am having a bit of doubt. Even though it is daunting for an electrical engineering student, I have no other options.
I thank you in advance.
Also, the test benches are mainly power electronics or electrical machine test benches. I believe I need to automate the test processes in TwinCAT 3.
Best Regards
Good choice with TwinCAT 3. It is very capable and quite easy to learn (depending on your background, of course, but generally a good platform to work on).
All I can support you with is a link to a TwinCAT 3 tutorial that I've created that is free of charge (available on YouTube):
https://www.youtube.com/playlist?list=PLimaF0nZKYHz3I3kFP4myaAYjmYk1SowO
There are also some other resources available both on YouTube and on the website. I've created a set of links here to help you find all the resources you might need.
To answer your question, I would say it depends. Three months is not much time, especially considering everything else that has to fit in (background study, writing the thesis itself, implementation, conclusions, etc.). It also depends on the complexity of your project ("writing drivers for them" is not very specific). If it's a simple project (including a very basic set of I/Os) it might be doable. If it's anything more complex (like needing to add a front-end, doing motion control, and maybe even safety), then it's most likely going to be hard to finish in three months.
But again, I think more details on what you want to achieve are necessary.

New Intel processors KPTI bug. Which slowdown to expect for floating point computation? [closed]

Some media have reported that a new hardware bug in Intel processors, allowing user mode processes to access kernel mode memory, has been discovered:
It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the layout or contents of protected kernel memory areas.
The effects [of fixes] are still being benchmarked, however we're looking at a ballpark figure of five to 30 per cent slow down, depending on the task and the processor model.
After the bug is fixed, what slowdown should I expect for multicore floating point computations?
To my understanding, only the performance of switches between kernel and user mode is affected. For example, handling a lot of I/O is a workload where these switches are frequent, but CPU-intensive processes should not be affected as much.
To quote from one article that analyzes performance of the Linux KPTI patch:
Most workloads that we have run show single-digit regressions. 5% is a good round number for what is typical. The worst we have seen is a roughly 30% regression on a loopback networking test that did a ton of syscalls and context switches.
...
So PostgreSQL SELECT command is about ~20% slower with KPTI workaround, and I/Os in general seem to be impacted negatively according to Phoronix benchmarks especially with fast storage, but not gaming performance, Linux kernel compilation, H.264 encoding, etc…
Source: https://www.cnx-software.com/2018/01/03/intel-hardware-security-bug-fix-to-hit-performance-on-windows-linux/
So, if your FP computations rely mostly on in-memory data rather than on I/O and frequent syscalls, they should be largely unaffected.
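If you want a rough feel for this on your own machine, a microbenchmark along the following lines contrasts a syscall-heavy loop (which KPTI penalizes) with a pure floating point loop (which it barely touches). A minimal POSIX-only C++ sketch; absolute numbers will vary by kernel, patch level, and CPU:

    #include <chrono>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    template <class F>
    double seconds(F f) {                // wall-clock time of a callable
        auto t0 = std::chrono::steady_clock::now();
        f();
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        const int n = 1'000'000;
        int fd = open("/dev/null", O_RDONLY);
        char buf;
        // Syscall-bound: every iteration crosses the user/kernel boundary,
        // so this loop pays the KPTI entry/exit cost n times.
        double sys = seconds([&] {
            for (int i = 0; i < n; ++i) read(fd, &buf, 1);
        });
        // Compute-bound: stays in user mode, so KPTI has little effect.
        volatile double x = 1.0;
        double fp = seconds([&] {
            for (int i = 0; i < n; ++i) x = x * 1.0000001 + 1e-9;
        });
        std::printf("syscalls: %.3fs  fp loop: %.3fs\n", sys, fp);
        close(fd);
        return 0;
    }

Running it on a kernel with and without KPTI enabled (e.g. via the pti= boot option on Linux) should show the gap on the syscall loop but not on the FP loop.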

PLC Programming Best Practices [closed]

I have recently inherited a PLC project. We are using Automation Direct PLCs and using the C-more software for writing ladder logic.
C-more allows me to add rungs for "Execute on every scan", "Execute when called", etc.
It also allows me to break out separate sections under each of these headings to attempt some organization.
Are there some agreed upon best practices for structuring ladder logic programs? I'm trying to bring some sanity to the development process.
Document as you go: logic, elements, memory map, etc. Document for "the other person", even if that person is YOU. PLCs and their programs tend to have a LONG life, so 1 year, 5 years, or even 20 years down the road, when you have to tweak or debug that PLC, you'll be glad you explained things in a little more detail by documenting them for "the other person".
Do NOT wait until "the end" to document. Yes, that implies that you need to keep the documentation up-to-date.
There are no established norms in the PLC programming realm. I've been developing, commissioning, maintaining (and reverse-engineering) PLC programs for 26 years; many organizations develop in-house standards, but there are no accepted industry-wide ones. However, a method I gleaned from an old pro dictates placing decision-making rungs first (evaluating conditions and setting flags), making control decisions in the next segment, turning outputs on and off in the next section, and monitoring performance and upset conditions in the last.
It's based on how older machines evaluated I/O and handled ladder execution. The advent of ladder 'subroutines' has helped enormously; I generally treat each motor as a 'sub-system' element and give it its own subroutine.
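Ladder logic itself is graphical, but the ordering can be illustrated in pseudo-code. A hypothetical C++ sketch of one scan organized into those four sections (all names are made up for illustration):

    // Hypothetical sketch of one PLC scan, organized as described above.
    // Real ladder logic is graphical; this only illustrates the ordering.
    struct Io    { bool start_pb = false, overload = false, motor_out = false; };
    struct Flags { bool run_request = false, fault = false; };

    void evaluate_conditions(const Io& io, Flags& f) {  // 1. set flags
        f.run_request = io.start_pb && !f.fault;
    }
    void make_decisions(Flags& f) {                     // 2. control decisions
        if (f.fault) f.run_request = false;
    }
    void drive_outputs(const Flags& f, Io& io) {        // 3. outputs on/off
        io.motor_out = f.run_request;
    }
    void monitor(const Io& io, Flags& f) {              // 4. upset conditions
        if (io.overload) f.fault = true;                // latch the alarm
    }

    void scan(Io& io, Flags& f) {                       // one full scan cycle
        evaluate_conditions(io, f);
        make_decisions(f);
        drive_outputs(f, io);
        monitor(io, f);
    }

    int main() {
        Io io; Flags f;
        io.start_pb = true;
        scan(io, f);                     // motor_out becomes true
        return io.motor_out ? 0 : 1;
    }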
Hope this helps!

Undergraduate project related to High Performance Computing or similar fields [closed]

I am looking for ideas for my undergraduate project, and I quite like the area of High Performance Computing, which has a lot of scope for research. Are there any ideas or existing open source projects worth looking at?
One hot field right now is algorithmic trading. You can sign up at InteractiveBrokers.com for $3000 if you're under 21 (it's $10k for over 21), and they will give you a free paper trading account (fake money traded using realtime data) of $10,000,000. They have APIs in C#, C++, VB, and Java, and reasonable support. You could write your own stock pair trading algorithm; they have good documentation on how to get started.
You can scale this as high as you want. A lot of people also do high-frequency trading, which requires HPC and in-depth knowledge of Unix and C++.
Worth looking into, my 2 cents.
Perhaps massively parallel processing? Libraries like CUDA, OpenCL, and DirectCompute are just blossoming and have a high likelihood of becoming commonplace. In my company, we are researching uses for OpenCL, and we're finding that it has the potential to revolutionize our industry.
Just a thought.
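To get a first taste of data parallelism without GPU tooling, C++17's standard parallel algorithms are a low-barrier entry point. A minimal sketch (not OpenCL itself; with GCC you typically need to link TBB, e.g. -ltbb, for the parallel execution policies):

    #include <algorithm>
    #include <cstdio>
    #include <execution>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 22;   // ~4M elements
        std::vector<double> x(n, 1.5), y(n, 2.0);
        // saxpy-style update; the runtime spreads the work across cores.
        std::transform(std::execution::par, x.begin(), x.end(),
                       y.begin(), y.begin(),
                       [](double a, double b) { return 2.0 * a + b; });
        std::printf("y[0] = %f\n", y[0]);
        return 0;
    }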
I would suggest looking at Sandia National Labs' SST (the Structural Simulation Toolkit). It's a highly parallel simulator framework used for HPC that incorporates other simulators from academia and industry; for instance, it currently integrates GEM5, QSim, MacSim, DRAMSim, Merlin, Portals, DRAMSim2, Iris, etc. Moreover, it is open source, so you can contribute to the development.
You could work on integrating other academic components into SST, improve the interface between SST and one of the components, or just improve the components themselves.

Advantages of the Unified Software Development Process [closed]

Why adopt the UP process over others? What are the relative advantages? I know that it is closely coupled with UML, but clearly this cannot be the only advantage. Why choose this approach over others?
I think it really depends on what process/methodology you compare it against. Without more details, only general characteristics of UP can be mentioned.
It is an iterative, incremental methodology with well-described roles and activities, using modeling techniques in object-oriented analysis and design. It is divided vertically (in time) into phases and those into iterations, and horizontally into groups of activities concerning different aspects of software development, such as requirements, analysis, design, testing, deployment, etc.
Although we do not practice the full UP process, we use it frequently to see what types of products we need and which roles are responsible for the activities that produce them. We like it because it details the various aspects from design through the deployment phase and comes with various templates, guidelines, and processes which help out in the development life-cycle.
Take a look at: http://epf.eclipse.org/wikis/openup/
As we are a team whose members can play different roles depending on the project, we simply navigate to the role and check what products are needed for the project at hand. Depending on the weight/complexity of the project, we will choose the products that will help us in our daily duties. UML is an asset which we depend upon highly, and it comes as a benefit within OpenUP (or other UP incarnations).
I am certified in RUP and a Scrum Master. Most teams find that no process "off the shelf" is a perfect fit. That being said, the Unified Process focuses on driving risk out of a project early. However, I have seen many implementations where UP introduces a level of risk simply by being overly complex. Depending on the nature of the project, organizational structure, and other factors such as compliance and scale, UP offers a set of practices that can be easily tailored.