What is Visio Process Simulator?

Can anybody please tell me what Visio Process Simulator is? How can I use it with Rockwell Arena simulation software?

First off, I just want to disclose that I am the developer of a tool that interfaces with Process Simulator and Arena, among other simulation tools, which my company plans to sell.
That being said, Process Simulator is a product from the ProModel company, and is basically a Visio-based front-end to the ProModel simulation engine. It allows you to lay out a chain of processes in Visio, and apply cycle times, resources, and several other simulation attributes. It then runs a simulation within Visio and produces an output statistics report.
To answer your question, you can interface a Process Simulator map (PSM) with the Arena simulation software, though it will take a fair amount of skill in programming for Visio as well as for Arena.
Basically, the solution entails compiling the PSM from Visio into some in-memory representation. This representation should list all the resources in the PSM, as well as all the processes and all their attributes. It's fairly easy to retrieve these attributes, as Process Simulator stores them in each shape's ShapeSheet.
With this in-memory representation of the model elements, you then transfer them to Arena using Arena's COM API. Basically, you create module objects in Arena (process and resource modules) corresponding to the PSM elements, mapping the PSM attributes to the appropriate Arena attributes.
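For illustration, here is a minimal sketch (Python with pywin32) of the extraction step, reading Shape Data rows from a running Visio instance. The property names "Prop.CycleTime" and "Prop.Resource" are placeholders, not the actual Process Simulator row names - inspect the ShapeSheet of a real Process Simulator shape to find the rows it uses. The Arena-side mapping is omitted here, since the exact COM calls depend on Arena's object model.

import win32com.client

# Attach to the running Visio instance that has the PSM open.
visio = win32com.client.GetActiveObject("Visio.Application")
page = visio.ActiveDocument.Pages(1)

elements = []
for shape in page.Shapes:
    record = {"name": shape.Name}
    # Hypothetical Shape Data row names; replace with the real Prop.* rows.
    for prop in ("Prop.CycleTime", "Prop.Resource"):
        if shape.CellExistsU(prop, 0):          # 0 = visExistsAnywhere
            record[prop] = shape.CellsU(prop).ResultStr("")
    elements.append(record)

print(elements)  # the in-memory representation to map onto Arena modules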
As I mentioned above, I have created a software product that does this, but it uses the Core Manufacturing Simulation Data (CMSD) standard. That means I extract the PSM data into an intermediary XML format created for storing simulation data, and I have another translator that builds an Arena model from the CMSD data. You can find out more here if you're interested, or you can build your own (which really isn't too hard to do).

Related

Agent Based Modeling in Modelica

Is it possible to simulate multi-agent systems in Modelica? I'm talking about a system such as MASON, written in Java. How easy or difficult would it be?
As I understand it, Modelica is not a typical programming language, so would it be particularly helpful, or would the basic design of the Modelica language be a hindrance? And more importantly, how would we model the "messaging" systems that are common in agent-based modeling?
Modelica can simulate discrete event systems. Some libraries exist: ModelicaDEVS, ARENALib etc.
Maybe the syntax is not perfect yet for this "Messaging", but maybe the language will be improved further in this direction.
An advantage might be that real-time capable code can be generated, so the agents could run on embedded systems even with hard real-time constraints - only some of the other tools, like Ptolemy II, support this.
P.S. (added; see the first comment):
From the start, Modelica was designed to create code that is capable of running in real time. So you could take the unchanged Modelica model of your agent, connect its I/O to sensors and actuators, and download it to real-time hardware (e.g. PowerPC). Your swarm of agents will then fulfill exactly the timing behaviour you modeled, and exist for real. You could also have only one real agent in hardware (maybe this hardware is expensive) and simulate its interaction with all the other agents in real time on real-time simulator hardware, again using your unchanged models.
This is one of the major reasons why Modelica's semantics are not as dynamic as, for example, Java's. If you want to run your MASON agent on real hardware you are in trouble: you have to move to something like Safety Critical Java, which means that a lot of constructs in your code, and also in the standard Java libraries, must be rewritten or are not allowed at all. Without this you will have to live with the possibility that your agent will miss its mission and burn down the house ...

Simplify a very complex/detailed 3D model for mobile apps

tl;dr
How can a software developer who gets a very highly detailed 3D model quickly and easily optimize it for mobile apps, so he can focus his time and energy on developing the app logic?
I think it's a pretty common use case and there may be a tool for this already out somewhere.
Long story
I have a 3D model (Collada) of a machine. This model, created by the machine's engineering team, contains a lot of minute detail essential for building the machine hardware.
Now, I am developing a mobile app with Unity that needs to render this machine along with 10 other machines in a single scene. The thing I like about the available models is that they look exactly like the real stuff. At the same time, I am not interested in the internal parts; the external shell is enough for me. I have no interaction with the 3D modelling team (let's assume I downloaded the model from some archive), and hence can't ask them to make any changes for me. The model is all I have. I am on my own.
There are two problems I am facing
How to get rid of the interiors of the models?
How to get rid of the high-resolution details in the external shell which the human eye can't detect on a mobile phone?
To give a sense of the scale, the real equipment and hence the model can be as big as 100 ft. (30 m.) while these will never occupy more than a 5 inch HD display. The size of the models ranges from 50MB to 400MB. The entire scene hence can go up to 2GB. Each model has nearly 300k faces.
The other challenge I am facing is that I am a software developer familiar with code and my familiarity with 3D modelling tools is very limited and I would like it to be that way :) I can play around with these tools, but I don't want to start spending half my time with these tools.
I have tried Blender's Decimate modifier, but the results aren't good: the amount of detail lost is very uniform, instead of being targeted at the interiors. I don't want to spend time going through each mesh and deleting faces manually.
Also, for some reason, when I import a model exported from Blender into Unity, it looks horrible (some faces/polygons that I can see in Blender are missing in Unity), even with zero decimation.
I find it hard to accept that the manual process is the only way. I feel that with today's technology this should be simple to automate (a rough Blender scripting sketch of the idea follows the list). The steps as I see them are:
Detect polygons that aren't directly reachable by any exterior raycast. If required, I can define the set of raycast origin points (14 may be enough), basically camera locations.
Delete these polygons
Detect polygons with dimensions less than a threshold
Delete these polygons
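For reference, a rough Blender Python (bpy) sketch of this idea, written against the Blender 2.8+ Python API, might look like the following. It is only a starting point, not a finished tool: the viewpoint set is a made-up example, visibility is sampled at face centres only, and steep grazing angles and the size-threshold step are not handled.

import bpy
from mathutils import Vector

obj = bpy.context.active_object
mesh = obj.data
inv = obj.matrix_world.inverted()

# Hypothetical viewpoint set: six axis-aligned positions well outside the model.
radius = max(obj.dimensions) * 2.0
viewpoints = [Vector(p) * radius for p in
              [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

visible = set()
for origin_world in viewpoints:
    origin = inv @ origin_world                  # Object.ray_cast works in object space
    for poly in mesh.polygons:
        direction = (poly.center - origin).normalized()
        hit, _loc, _normal, index = obj.ray_cast(origin, direction)
        if hit:
            visible.add(index)                   # the first face each ray hits is "exterior"

# Select every face that was never hit and delete it.
for poly in mesh.polygons:
    poly.select = poly.index not in visible
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.delete(type='FACE')
bpy.ops.object.mode_set(mode='OBJECT')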
Blender-to-Unity exports can have slight problems if you don't export them the right way. How to do this is outside my field, as I am also a developer and personally prefer 3ds Max.
What I would recommend you do is exactly what you don't want to do; it is the easiest way. Select the faces (just drag and select) and then delete them all (the inside faces, that is). You should be able to hide the outer shell in any proper 3D program; just google how to do it if it seems complicated.
If you want to delete smaller details on the outside, do exactly the same... just select the polys and delete them. I wouldn't recommend using a built-in tool, because most of those take the whole object and uniformly reduce (or increase) its polygon count, depending on which tool you use.
In the end, next time just try to get into the program. As a programmer I dislike having to use 3D modelling software as well, but it is part of the job, so put some effort in and just learn the tools. It's less work than it seems.
Edit: As for the tools you are asking for, those do not really exist; you don't normally take a high-poly model and turn it into a low-poly model for a mobile game. Instead you usually get a 3D artist to make a low-poly model. The fact that you have no communication with the team is a bit odd, but so be it. I'd recommend either getting in touch with them or, like I said before, putting the effort in and learning a 3D program. What you want to do literally sounds like click, drag, select, and then press delete to remove some polys you wouldn't see anyway.
-Lars
with vcglib
vcglib may work for you; you can see its sample for simplifying a PLY 3D file, and it can be applied to many other 3D file formats such as STL, OBJ, etc. As vcglib is a C++ library, you can write a simple program that uses it to simplify your STL model. This method works on an OS without X, such as Ubuntu Server. You can refer to my question "Failed to simplify 3D models with vcglib, Assertion `0' failed" for how to use vcglib to simplify a PLY 3D file.
with meshlabserver
If you want to do the automatic simplification on an OS with X, or on Windows or macOS, it's much easier: you can use meshlabserver (MeshLab is also built on vcglib). You can run a command such as the one below, where PLYmesher_script.mlx is the filter file; you can write this file yourself or generate it with MeshLab (refer here).
meshlabserver -i ./option-0000.ply -o ./meshed.ply -m vc vn -s scripts/PLYmesher_script.mlx
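If you would rather drive this from code, here is a small sketch of wrapping that command in Python. The .mlx content is an assumption based on common MeshLab versions (the "Quadric Edge Collapse Decimation" filter with TargetFaceNum/QualityThr parameters); the reliable way to get the exact filter and parameter names for your installation is to apply the filter once in the MeshLab GUI and save the current filter script.

import subprocess

MLX = """<!DOCTYPE FilterScript>
<FilterScript>
 <filter name="Quadric Edge Collapse Decimation">
  <Param type="RichInt" name="TargetFaceNum" value="30000"/>
  <Param type="RichFloat" name="QualityThr" value="0.5"/>
 </filter>
</FilterScript>
"""

def simplify(src, dst, script_path="decimate.mlx"):
    # Write the (assumed) filter script, then invoke meshlabserver as shown above.
    with open(script_path, "w") as f:
        f.write(MLX)
    subprocess.run(
        ["meshlabserver", "-i", src, "-o", dst, "-m", "vc", "vn", "-s", script_path],
        check=True)

simplify("option-0000.ply", "meshed.ply")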

Use Case Diagram and Activity Diagram, Chicken and Egg?

I want to question and/or perhaps challenge the school of thought on UML behavioral diagrams.
Firstly, I want to ask, what comes first: Use Case or Activity?
I was taught that Use Case diagrams come first and then for each Use Case, you have one or more Activity diagrams to represent successful and alternate flows. From the Activity diagrams, you can identify nouns to establish classes.
I have, however, read other articles which say you create an Activity diagram for the end to end process and then from that, you can identify Use Cases.
I can see both scenarios working, and I am confused, as to me it seems a matter of hierarchy. For example, say I have a high-level business process, 'Grading Student Results'. If I map it as an Activity diagram, I would see swim-lanes within it, and I would be able to pick out Use Cases such as 'Determine Grade Boundaries,' 'Submit Results,' 'Convert Result to Grade,' and so on.
You could argue they are the same thing, i.e. both diagrams would meet this process modeling need. I then want to model the next level, for example, how you 'Submit Results.'
Can someone advise on the best practice: whether a Use Case diagram comes before or after an Activity diagram?
First:
There is no competition between any of the UML diagrams to be the "first
one". Sometimes it is better to work on some diagrams simultaneously and iteratively.
Second:
Each diagram can be used in different contexts and for different purposes.
Use Case Diagram vs. Activity Diagram
"Use Cases" are scenarios which show how the user will use the system to achieve their goals.
So:
Instead of showing this "scenario" with written use cases, you can visualize its steps with an activity diagram.
But in order to find use cases, you should discover the system requirements to some degree (e.g. the scope, broad feature set, priority, cost, etc.).
In some business domains, such as for an automation project, in order to discover requirements/use cases, you may have to investigate current business flow. Sometimes this business flow can be complex, so you may want to investigate it with an activity diagram.
So:
An activity diagram can be used to investigate a business process to
understand and discover the flow, to better discover requirements.
So:
An Activity Diagram can be used at different levels of software
development stages for different purposes.
Just like other diagrams, you can use the Activity Diagram at any time, anywhere, as soon as it can help you to ask the right question, to understand and explore any issue related to your purpose.
Here is a summary purpose of Activity Diagrams:
The purpose of the activity diagram is to model the procedural flow of
actions that are part of a larger activity. In projects in which use
cases are present, activity diagrams can model a specific use case at
a more detailed level. However, activity diagrams can be used
independently of use cases for modeling a business-level function,
such as buying a concert ticket or registering for a college class.
Activity diagrams can also be used to model system-level functions,
such as how a ticket reservation data-mart populates a corporate
sales system's data warehouse.
UML Basics: The activity diagram, by Donald Bell
To get a quick grasp of which diagrams can be used for which purposes, I advise you to check out Scott W. Ambler's mini book: The Elements of UML(TM) 2.0 Style
The Activity diagram is one of those with the widest abstraction range in UML. An activity can be used for anything between a business process (very abstract compared with the software system) and a single method's algorithm (code-level, practically a blueprint, i.e. abstraction ground level).
Use Cases, on the other hand, are in practice quite limited in their abstraction. They show the interaction between a user and the system and sit somewhere in the middle of the abstraction scale: not as abstract as a business process, and definitely a lot more abstract than implementation diagrams.
Software projects tend to start at a very abstract level (a business goal, for example) and finish at abstraction zero (the implemented system). During the project, analysts, architects and developers work together to gradually lower this abstraction, producing ever less abstract artifacts/models - business processes, use cases, architecture, design, code.
After this introduction it is not hard to answer your question - either of them can be used first, and that depends on the type and size of your project. Some examples:
A large project developing an ERP system. It is almost certain that in this kind of project the first thing to model is the business process. Long before even thinking about its functionality, the team must understand the business background. The best UML diagram for this is naturally the activity diagram. Some time later, when the process is clear and the high-level requirements are known, use case modelling can start.
A middle-sized or relatively small project with no complex processes in the background (for example, a mobile app) can start directly with use cases, identifying the users and their features. Later on, these can be further refined using activities.
In a very small development of some interface or communication-gateway driver, highly technical, where even the user interaction is minimal, the modelling can start with activities again, showing the concrete algorithm to be implemented. Use cases can be completely skipped.
As a summary, I would conclude that there are no unbreakable rules of this kind in software development. Each project is unique, each development methodology is unique, even each development team is special and unique. To think about "which diagram" to do first is simply WRONG! Think about what kind of analysis or specification you need at a given moment - what is easiest and most useful to model. When this is clear, there are 13 UML diagrams to pick from in order to optimally fulfill the aim.
The choice of UML diagram is the "HOW". More important than that, more often than not, is the "WHAT".
The Use Case diagram shows the functionalities, and the Activity diagram shows the operations (one functionality can have many operations).
E.g. the Use Case diagram is like a mother (who can have many children), and
an Activity diagram is like describing one child of that mother, i.e. of the Use Case diagram.

Extensive comparison between SIMULINK and LabVIEW

I am trying to determine which of these two to buy for my work. I have used SIMULINK but not LabVIEW. Is there anyone who has used both and would like to provide some details? My investigation criteria are the user friendliness, availability of libraries and template functions, real-time probing facility, COTS hardware interfacing opportunity, quality of code generation, design for testability (i.e. ease of generating unit/acceptance tests), etc. However, if anyone would like to educate me with more criteria, please do so by all means!
For anyone who does not know about SIMULINK and LabVIEW - These are both Domain-Specific Languages (DSLs) intended for graphical dataflow modelling (and also code generation). These are multi-industrial tools and quite heavily used for engineering design and modelling.
IMPORTANT - I am quite interested to know if SIMULINK and LabVIEW offer real-time probing. For example, I have a model that I want to simulate. If there are variables associated to certain building blocks in that model, could I view them changing as the simulation continues? I know that it is certainly not possible with SIMULINK as it has a step-by-step debugger. I am not aware of anything similar in LabVIEW.
I really have not used LabVIEW and cannot obtain it temporarily as my work internet has got download restrictions and administrative privilege issues. This is the reason why I simply cannot use only NI website to draw conclusions. If there is any white paper available that addresses this issue, I would also love to know :)
UPDATE SINCE LAST POST
I have used the MATLAB code generator and will not say that it is the best. However, I hear now that Simulink Embedded Coder is the best code generator and almost one of a kind. Can anyone confirm whether or not it is good for safety-critical system design, i.e. generating code from safety-critical subsystem models? I know that The MathWorks is constantly trying to close the gap to achieve fully flexible, production-level C/C++ code generation.
I know that an ideal answer would be, "Depending on what you are trying to do, use a bit of both." Interestingly, I think I am heading in that direction. At the end of the day, it is a lot of money and it needs to be spent wisely.
Thanks in advance.
I have used LabVIEW since 1995 and Simulink since 2000. Now I am involved in control system design and simulation of robotic systems using LabVIEW Real-Time, and of automotive ECUs using MATLAB/Simulink/dSPACE.
LabVIEW is focused on measurement systems, and MATLAB/Simulink on dynamic simulation, so:
If you run complex simulations, and your work is creating/debugging complex simulation models of controllers or plants, use Simulink + Real-Time Workshop + Stateflow. LabVIEW has no efficient code generator for dynamic simulation; RTW generates smaller and faster code.
If your main work is developing systems with controllers and a GUI for machines, or you want to deploy the controllers in the field, use LabVIEW.
If your main work is developing flexible HIL or SIL systems with a good GUI, you can use VeriStand. VeriStand can mix Simulink and LabVIEW code.
And if you have a big budget (VERY BIG) and you are working on automotive control prototypes, dSPACE hardware is a very good choice for fast development of automotive ECUs, or OPAL-RT for developing electric power circuits. But only for prototyping or HIL testing of controllers.
From the point of view of COTS hardware:
The MathWorks doesn't manufacture hardware -> MATLAB/Simulink supports hardware from several vendors.
National Instruments produces/sells hardware -> LabVIEW Real-Time is focused on supporting National Instruments hardware. There is no full COTS replacement.
I have absolutely no experience with Simulink, so I'll comment only on LV, although a quick read about Simulink on Wikipedia seems to indicate that it's focused mainly on simulation and modelling, which is certainly not the case with LabVIEW.
OK, so first of all, LV is NOT a DSL. While you might not want to use it for every kind of project, it's a general-purpose programming language and you should take that into account. I know that NI has a simulation toolkit for LV, which might help you if that's what you're after, but I have absolutely no experience with it. The images I saw of it seemed to indicate that it adds a special kind of diagram to LV for simulation.
Second, LV is not restricted to any kind of hardware. It's a general purpose language, so you can write code which won't use any hardware at all, code which will use or run on NI's hardware or code which will use any hardware (be it through DLL calls, .NET assemblies, RS232, TCP, GPIB or any other option you can think of). There is quite a large collection of LV drivers for various devices and the quality of the driver usually depends on who wrote it.
Third, you can certainly probe in real time in LV. You write your code, just as you would in C or Java, and when you run it, you have several debugging options:
Single stepping. This isn't actually all that common, partially because LV is parallel.
Execution highlighting. This runs the code in slow motion, while showing all the values in the various wires.
Probes, which show you the last value that each wire had, where wires fill the same function that variables do in text based languages. This updates in real time and I assume is what you want.
Retain wire values, which allows you to probe a wire even after data passed through it. This is similar to what you get in text based IDEs with variables. In LV you don't usually have it because wire values are transient, so the value is not kept around unless you explicitly ask for it.
Of course, since you're talking about code, you could also simply write the code to display the values to the screen on a graph or a numeric indicator or to log them to a file, so there should be no need for actual probing. You could also add analysis code, etc.
Fourth, you could try downloading and running LV in a fully functional evaluation mode. If I remember correctly, NI currently gives you 7 days and then 45 days if you register on their site. If you can't do that on a work computer, you could try at home. If your problem is only with downloading, you could try contacting your local NI office and asking them to send you a DVD.
Note that I don't really know anything about modelling and simulation, so I have no idea what kind of code you would actually have to write in order to do what you want. I assume that if NI has a special module for it, then it's not something that you can completely cover in regular code (at least not if you want the original notation), but I would say that if you could write the code that does what you want in C, there's no reason you shouldn't be able to write it in LV (assuming, of course, that you know how to write code in LV).
A lot of the best answer would have to depend on your ultimate design requirements. Are you developing a product? If so, in what stage of development are you? Or are you doing research?
I recently did a comparison just as you are doing. I know LV, but was wanting to move towards a more hardware-scalable option, since NI HW is very expensive in volume. That is, my company was wanting to move towards a product. What LV and NI HW give you is flexibility. You can change code very quickly compared to C. On the other hand, LV does not run on nearly as many different HW platforms as C. So I wanted to find an inexpensive platform that would work well for real-time control and data acquisition, such that if we wanted to sell a product for, say, $30k, our controller wouldn't be costing $15k of that. We ended up with Diamond Systems Linux SBC's. Interestingly, Simulink ended up using the most expensive hardware! It did have a lot of flexibility, and could generate code, as well as model plants and controllers. But then, LV can do that as well.
As Yair wrote, LV has plenty of good debugging tools. One of the more interesting tools that is not so well known is the Suspend when Called option for a SubVI. This allows you to play with the inputs and outputs of a SubVI as much as you want while execution is paused.
MATLAB and Simulink are the defacto standard for control system design and simulation. Simulink controller models can be used for offline simulation in conjunction with plant models, all the way to realtime implementation on embedded targets. It is a general simulation framework with extensive built-in libraries as well as a la carte special purpose libraries, and can be extended through creation of custom blocks (S-function blocks) in C and other languages. It includes the ability to display values in graphs, numeric displays, gages, etc. while a nonrealtime simulation is taking place. Realtime target support from The Mathworks includes x86 (xPC Target) and several embedded targets (MPC555, etc.), and there is 3rd party support for other targets. The aforementioned dSPACE provides complete prototyping controllers including support for their quite powerful hardware. xPC Target includes support for a plethora of COTS PC data acquisition cards. Realtime target support includes GUI elements such as graphs, numeric displays gages, etc.
As I understand it (I have never really used it in anger), LabView only supports NI hardware, and is more hardware-oriented. Simulink supports hardware from multiple vendors, be it for data acquisition, or real-time implementation, but it may require a bit more work for the user to interface to his or her own hardware (less plug & play than LabView). On the other hand, Simulink provides tools to support the whole model-based design process, from modelling & simulation, control design, verification & validation, code generation, hardware-in-the-loop, etc...
Disclaimer: I used to work for MathWorks.
You may really be interested in the Control Design and Simulation Module for LabVIEW. It does a lot of simulation and in the future may be competitive with Simulink. I'm not a control engineer, but I use it sometimes for simple testing, and I'm glad that I don't have to learn Simulink from scratch to do some work, since I'm familiar with the LabVIEW philosophy.

Real time system concept proof project

I'm taking an introductory course (3 months) on real-time systems design, but without any implementation.
I would like to build something that lets me better understand what I'll learn in theory, but since I have never built a real-time system, I can't estimate how long any project would take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some idea? Thank you in advance.
I program in T-SQL, Delphi and C#, but I won't have any problem learning another language.
Suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple: 1) Build a work-generator that you can tune to consume a given amount of CPU time; 2) Put this into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread) and a mechanism for logging the work produced; 3) Produce charts of the execution time, sojourn time, deadline, slack/overrun of these tasks versus their priority; 4) demonstrate that tasks running in the context of real-time threads (vice timesharing) behave differently.
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
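If it helps to see the bookkeeping involved, here is a plain-Python sketch of the work-generator idea above. Python on a desktop OS is not a real-time platform, so treat this purely as a way to prototype the measurement (release time, deadline, misses) before rebuilding it on JavaRTS or whichever RT environment you pick; the period, work and deadline values are arbitrary examples.

import time

def burn_cpu(ms):
    # Tunable work generator: spin for roughly `ms` milliseconds of wall time.
    end = time.perf_counter() + ms / 1000.0
    while time.perf_counter() < end:
        pass

def run_periodic(period_ms, work_ms, deadline_ms, iterations=100):
    missed = 0
    next_release = time.perf_counter()
    for _ in range(iterations):
        burn_cpu(work_ms)
        response_ms = (time.perf_counter() - next_release) * 1000.0
        if response_ms > deadline_ms:
            missed += 1                          # this job overran its deadline
        next_release += period_ms / 1000.0
        sleep_for = next_release - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)
    return missed

# Raise work_ms towards period_ms and watch when deadlines start being missed.
print(run_periodic(period_ms=10, work_ms=6, deadline_ms=10))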
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives : either go bare-metal, or use a real-time OS.
Going bare-metal, you can learn :
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
Going with the RT OS :
You'll fast-track your project, and will be able to learn how to fine-tune a RT OS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most realtime systems are still implemented in C or C++ it may be good to brush up your knowledge of these programming languages. Many realtime systems are also embedded systems, so you might want to play around with a cheap open source one like BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross compiling etc.