What is the difference between a workflow and a flowchart, if any?

I'm doing a study on how to visualize a process of an application. I found the terms workflow and workflow management, but they're a little confusing.
When I searched further, the term flowchart also showed up.
My question is: is there any difference between the two? I see them used in the same context.
Thanks

http://en.wikipedia.org/wiki/Workflow
http://en.wikipedia.org/wiki/Flowchart
"A workflow" is a definition of a process for how an item of work should be done.
A flowchart is a diagram for describing a process. So a workflow can be described by a flowchart.
Perhaps it is the case that workflows are commonly described by flowcharts, that some people may use the two terms interchangably.
Does that help?

Based on my limited experience, I would add that workflow diagrams tend to be more relaxed in their layout. Flowcharts are extremely strict and rigid; every step must be followed by a decision point (a diamond). So: A == B? (where == is a true/false question). If TRUE, proceed to process X; if FALSE, proceed to the next decision point, B >= C?.
Workflows, by contrast, are much more relaxed. They don't have to have decision points, though they often do. Basically, there are cases in which the rigidity of a flowchart just does not work, and workflow diagrams have that flexibility.
You also have to remember your target audience. True flowcharts are so rigid because what they represent is so rigid. Flowcharts are very commonly used by programmers, and software development leaves no room for error.

What alternatives are there to dynamic patching (to deal with variables passed at creation time)?

I have heard people describe dynamic patching as a bit of a hack or at risk of breaking in future releases of Pd. This is reasonable enough, but it seems to imply that there are alternatives when building abstractions.
Dynamic patching seems to be useful both for instantiating a variable number of objects and for connecting up a variable number of inlets and outlets within an abstraction (a number defined at creation time; I personally don't need it to change after the fact, at this stage).
Now I understand that the [clone] object can solve the problem of creating objects. I can see too that looping through send and receive objects would solve much of the connection issues with careful planning, but what I do not understand is how objects like [trigger], [route] and [select] can be adjusted or replaced. I fail to see how you would avoid using dynamic patching to, for example, create a [trigger f f] when the creation arg to your abstraction is 2, and a [trigger f f f] when the creation arg is 3. The same goes for [route], [select] and similar objects.
EDIT: The original question was perceived as too vague. I later posed a follow-up question in the comments which should really be here instead. As it happens, the answer to the follow-up provided a good answer to the original question, in my opinion. So to summarise and hopefully clarify, I was after a few "tools" to use when building abstractions so that I could limit my use of dynamic patching, if possible. These tools turned out to be:
using send and receive instead of inlets and outlets (although [initbang] can be used for creating inlets and outlets at instantiation).
using [clone]
chaining trigger, route and select objects using send and receive - for example, using [t b b] - [t b b] instead of [t b b b]. This means that the number of arguments in these objects can be defined at creation time, with the help of [clone] for example. This is discussed on the Pd mailing list.
using [initbang] as indicated in the answer below.
After having attempted to build a drum machine with presets and an arbitrary number of tracks with my limited knowledge of dynamic patching techniques, I realised that there must be many ways of avoiding the problems I had when doing this, which were several! Of course, some things have to be done with dynamic patching and that's fine. It's just about creating manageable code.
This is really an answer to the "follow-up question" in the comments¹, rather than the original question (which I consider too broad to be answered):
Is there a way to define an abstraction that has an argument that defines how many outlets the abstraction exposes?
Sure, just use $1 for that.
E.g. [gates 10] could create 10 outlets...
Presumably it could dynamically patch itself, but that doesn't seem like a good idea.
Well, if you want an abstraction to have a dynamic API (that is: a variable number of inlets/outlets), then there is no way around dynamic patching.
Is this a good case for building your own external?
That depends on what you actually want the external to do.
The iemguts library (disclaimer: I am the author) has everything in place to allow you to dynamically patch what you need.
Most importantly, there is [initbang], to create iolets before Pd tries to connect them (if you use [loadbang], the iolets will only be created after Pd has already failed to connect to them).
It also includes a [canvasargs] object, which lets you get all the arguments passed to the abstraction (which simplifies, for example, the task of having the number of outlets equal the number of arguments, as with [trigger] or [pack]).
If instead you want to wrap the entire functionality of your abstraction into an external, that is of course also possible (and pretty simple in the realm of C).
Also keep in mind that others might have already coded what you need.
¹ Please don't abuse the comment field for follow-up questions. Either update your original question (if the follow-up is a mere clarification of the original) or post a new one.

Assigning weights to intents

I'm just getting started with Watson Conversation and I ran into the following issue.
I have a general intent for #greetings (hello, hi, greetings...) and another intent for #general-issue (I have a problem, I have an issue, stuff's not working, etc.). If the user says "hello, I have a problem with your product.", it matches the #greetings intent and replies to that, but in this case I want it to match the #general-issue intent.
As per: https://console.bluemix.net/docs/services/conversation/dialog-overview.html#dialog-overview
I would expect the nodes at the top of the list to be matched first, so I have the #greetings node at the bottom of the dialogue tree, to give a chance for a "higher weight" node to be matched first, but it doesn't seem to work every time.
Is duplicating the #greeting intents in #general-issue the only solution here?
So, trying to help you based on my experience: you can use intents[0].confidence in your favor.
For example, in my dialog I create one condition with:
intents[0].confidence > 0.75
Watson will then recognize this intent only if the user types something very similar to the trained examples for the #greetings intent.
In practice, it works very well.
See more about building a complex dialog using Watson Conversation.
See more about confidence in Conversation here.
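If you prefer to enforce the threshold at the application layer rather than in a dialog node, the idea is the same. Here is a minimal Python sketch; it assumes the JSON shape returned by the Conversation message API (an intents array sorted by descending confidence), and the response value is a made-up example:
# Made-up response from the Conversation message API, parsed from JSON.
response = {
    "intents": [
        {"intent": "greetings", "confidence": 0.62},
        {"intent": "general-issue", "confidence": 0.58},
    ]
}

CONFIDENCE_THRESHOLD = 0.75

intents = response.get("intents", [])
if intents and intents[0]["confidence"] > CONFIDENCE_THRESHOLD:
    top_intent = intents[0]["intent"]  # confident enough to act on it
else:
    top_intent = None  # too weak: ask the user to clarify instead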
So here are two other approaches you can take.
Treat off-topic as contamination.
When building a conversational system, it's important to know what your end users are actually saying. So collect questions from real users.
You will find that not many people say a greeting and a question together. I personally haven't done the statistics over the projects I've worked on, but anecdotally I have not seen it happen often.
Knowing this, you can try removing off-topic/chit-chat from your intents, as it does not fully reflect the domain you want to train on.
To cover it, you can then create a more detailed second workspace for off-topic/chit-chat. If you do not get a good hit on the primary workspace, you can call out to the second one (see the sketch below). You can improve this by adding chit-chat as counterexamples in the primary workspace.
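A minimal sketch of that two-workspace fallback, assuming a hypothetical query_workspace(workspace_id, text) helper that wraps the Conversation message call and returns the parsed JSON; the workspace IDs and the 0.2 cut-off are illustrative:
PRIMARY_WS = "primary-workspace-id"    # domain intents (hypothetical ID)
CHITCHAT_WS = "chitchat-workspace-id"  # off-topic / chit-chat (hypothetical ID)
MIN_CONFIDENCE = 0.2                   # illustrative cut-off

def classify(text, query_workspace):
    """Try the domain workspace first; fall back to chit-chat."""
    primary = query_workspace(PRIMARY_WS, text)
    intents = primary.get("intents", [])
    if intents and intents[0]["confidence"] >= MIN_CONFIDENCE:
        return ("domain", intents[0])
    # No good hit on the primary workspace: try the off-topic one.
    fallback = query_workspace(CHITCHAT_WS, text)
    intents = fallback.get("intents", [])
    if intents and intents[0]["confidence"] >= MIN_CONFIDENCE:
        return ("chitchat", intents[0])
    return ("unknown", None)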
You can also mitigate this by simply wording your initial response to the user. For example, if your initial response is a hello, have the system also ask a question, or have it progress the conversation to a point where a hello becomes redundant.
Detect possible compounded intents.
At the moment, this is only easily possible at the application layer.
Setting alternate_intents to true will return the top 10 intents and their confidences.
Before going further: if the top intent's confidence is < 0.2, the system needs more training, so there is no need to proceed.
If it is > 0.2, you can map the confidences on a graph and visually check whether more than one intent stands out.
To have your application see this, you can use the k-means algorithm to create two buckets (k=2): relevant and irrelevant (a small sketch follows at the end of this answer).
Once you see more than one intent in the relevant bucket, you can take action to ignore the chit-chat/off-topic one.
There are more details and sample code here.
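To make the two-bucket idea concrete, here is a minimal Python sketch using scikit-learn's KMeans with k=2 on the returned confidences; the confidence values are made up for illustration:
import numpy as np
from sklearn.cluster import KMeans

# Confidences of the top intents returned with alternate_intents=true
# (made-up values for illustration).
confidences = np.array([0.55, 0.48, 0.07, 0.05, 0.04, 0.03]).reshape(-1, 1)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(confidences)

# The cluster with the higher centre is the "relevant" bucket.
relevant_label = int(np.argmax(kmeans.cluster_centers_))
relevant = [i for i, lbl in enumerate(kmeans.labels_) if lbl == relevant_label]

if len(relevant) > 1:
    # More than one relevant intent: likely a compounded utterance,
    # e.g. a greeting plus a real question.
    print("Compound intents detected at positions:", relevant)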

Two-Phase Modelica Media example

I am trying to develop a simulation in OpenModelica of a flow that has a single substance that will be liquid or vapor. The Modelica.Media.Water models do have two phases, but are extremely complicated, and would be very hard to reproduce for a completely different substance.
I would like to find a simple example of a two-phase medium that I can work from. There is a partial package TemplateMedium and a partial package PartialTwoPhaseMedium, but I don't see any examples of how to write a completely new medium that can be in either of the two phases.
If anyone can provide a simple example, or just a list of the minimum set of properties and equations that are required, that would be extremely helpful as a starting point.
To address some of the question in comments:
I am just getting started on this model, so I am trying to understand the details of how the Media model is constructed, and which specifics are already included in the model versus what has to be added for each new medium. I am working with propylene, so there is good data available. It is one of the media included in CoolProp, so being able to use ExternalMedia and CoolProp would be very useful, but I believe these are not yet working with OpenModelica, judging from a number of comments and bug reports.
Generally, your medium model can be written in Modelica or you can reuse an existing external library. Writing good medium models is a lot of work, so reusing existing libraries is usually a good idea. This is the approach taken by ExternalMedia (open source) or TILMedia (commercial).
If you are interested in an open-source workflow, ExternalMedia in combination with CoolProp is a reasonable choice. All three projects (OpenModelica, ExternalMedia and CoolProp) accept contributions from the community, so maybe you should help improve these instead of writing your own library. There is a lot of work going on already; I am unsure of the current status. Writing qualified bug reports (including steps to reproduce the problem) is also a very welcome way to contribute.
For some applications, it might be good to have the medium model directly in Modelica. This is the approach taken by Modelica.Media (obviously), HelmholtzMedia and the commercial media libraries from XRG or Modelon (not 100% sure about the latter). There are some more implementations, but these are neither open source nor commercially available; the only information on them is in, e.g., conference papers.
The examples you can look at include the R134a medium from the MSL or the code from the HelmholtzMedia library.
Also, looking at the ExternalMedia implementation might help.
For fluids that cannot change phase, there are some good examples in the Annex60 library.
As you have a pure substance that can change phase, your new medium should extend from PartialTwoPhaseMedium.
PartialTwoPhaseMedium is partial, defining only what functions are there, but (mostly) not the algorithms of the functions.
To be fully compatible, you will have to write an algorithm for every function that is available in the interface and does not already have one.
For a start, you should implement at least one of the setState functions, e.g. the setState_ph function.
Then later, implement at least one setSat function and the BaseProperties.
If you implement your own medium, you also have the choice of how to model it: Using the full multiparameter Helmholtz energy equation of state, a simpler equation of state like Peng-Robinson or other cubic EoS, some polynomials or splines, table-based methods like TTSE or SBTL and probably many more options.

What are "not so well defined problems" that LISP is supposed to solve?

Most people agree that LISP helps to solve problems that are not well defined, or that are not fully understood at the beginning of the project.
"Not fully understood"" might indicate that we don't know what problem we are trying to solve, so the developer refines the problem domain continuously. But isn't this process language independent?
All this refinement does not take away the need for, say, developing algorithms/solutions for the final problem that does need to be solved. And that is the actual work.
So, I'm not sure what advantage LISP provides if the developer has no idea where he's going, i.e. when solving a problem that is not yet finalised.
Lisp (not "LISP") has a number of advantages when you're facing problems that are not well-defined. First of all, you have a REPL where you can quickly experiment with -- that helps in sketching out quick functions and trying to play with them, leading to a very rapid development cycle. Second, having a dynamically typed language is working well in this context too: with a statically typed language you need to "design more" before you begin, and changing the design leads to changing more code -- in contrast, with Lisps you just write the code and the data it operates on can change as needed. In addition to these, there's the usual benefits of a functional language -- one with first class lambda functions, etc (eg, garbage collection).
In general, these advantage have been finding their way into other languages. For example, Javascript has everything that I listed so far. But there is one more advantage for Lisps that is still not present in other languages -- macros. This is an important tool to use when your problem calls for a domain specific language. Basically, in Lisp you can extend the language with constructs that are specific to your problem -- even if these constructs lead to a completely different language.
Finally, you need to plan ahead for what happens when the code becomes more than a quick experiment. In this case you want your language to cope with "growing scripts into applications" -- for example, having a module system means that you can get a more "serious" application. In Racket, for instance, you can separate your solution into such modules, where each one can be written in its own language -- it even has a statically typed language, which makes it possible to start with a dynamically typed development cycle and, once the code becomes stable and/or big enough that maintenance becomes difficult, switch some modules to the static language and get the usual benefits from that. Racket is actually unique among Lisps and Schemes in this kind of support, but even with the others the situation is still far more advanced than in non-Lisp languages.
Historically, in AI (artificial intelligence), Lisp was seen as the AI assembly language. It was used to build higher-level languages that helped work with the problem domain in a more direct way. Many of these domains need a lot of 'knowledge' for finding usable answers.
A typical example is an expert system for, say, oil exploration. The expert system gets (geological) observations as inputs and gives information about the chances of finding oil, what kind of oil, at what depths, etc. To do that it needs 'expert knowledge' about how to interpret the data. When you start a project to develop such an expert system, it is typically not clear what kinds of inferences are needed, what kind of 'knowledge' the experts can provide, and how this 'knowledge' can be written down for a computer.
In this situation one typically develops new languages on top of Lisp, rather than working with a fixed, predefined language.
As an example, see this old paper about the Dipmeter Advisor, a Lisp-based expert system developed by Schlumberger in the 1980s.
So, Lisp does not solve any problems by itself. But it was originally used to attack problems that are complex to program, by providing new language layers that make it easier to express the domain 'knowledge', rules, constraints, etc., in order to find solutions that are not straightforward to compute.
The "big" win with a language that allows for incremental development is that you (typically) has a read-eval-print loop (or "listener" or "console") that you interact with, plus you tend to not need to lose state when you compile and load new code.
The ability to keep state around from test run to test run means that lengthy computations that are untouched by your changes can simply be kept around instead of being re-computed.
This allows you to experiment and iterate faster. Being able to iterate faster means that exploration is less of a hassle. Very useful for exploratory programming, something that is typical with dealing with less well-defined problems.

Why use a post compiler?

I am battling to understand why a post compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributes appear in the original code, so why doesn't the developer just write that code themselves?
I expect that someone will say it's easier since you can use attributes on methods and then not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought, without a post compiler. I know that since I have said reflection, the performance elephant will now enter the room - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2 );
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        // Double-checked lookup: another thread may have populated the
        // cache while we were waiting for the lock.
        if ( !cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yes, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass the method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it.
What will happen?
First, some junior developer will look at the code and say: "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke -- ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, so you don't have to change many things when you decide to change, say, the logging component (just update the aspect); it therefore improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
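To make that concrete, here is a minimal Python sketch of the same caching pattern expressed once as an "aspect" (a decorator standing in for what PostSharp does with attributes at post-compile time; the single global lock and the key format are simplified placeholders):
import functools
import threading

_cache = {}
_lock = threading.Lock()

def cached(func):
    """The caching pattern from above, written once instead of thousands of times."""
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__qualname__, args)
        if key in _cache:
            return _cache[key]
        with _lock:
            # Double-checked: another thread may have filled the cache
            # while we were waiting for the lock.
            if key not in _cache:
                _cache[key] = func(*args)
        return _cache[key]
    return wrapper

@cached
def my_method(param1, param2):
    return param1 + param2   # stand-in for the real job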
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
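For comparison, here is a minimal sketch of the "write the timing code once, mark the methods you want timed" idea in Python, where a decorator plays the role that the attribute plus the post-compile step play in PostSharp (this is an analogy, not how PostSharp itself works; OrderService and place_order are hypothetical):
import functools
import time

def timed(func):
    """The timing code, written exactly once."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{func.__qualname__} took {elapsed * 1000:.2f} ms")
    return wrapper

class OrderService:          # hypothetical, already well-tested class
    @timed                   # the only change: mark the method
    def place_order(self, order_id):
        time.sleep(0.01)     # stand-in for the real work
        return order_id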
AOP (PostSharp) is for attaching code to all sorts of points in your application from one location, so you don't have to place it at each of those points yourself.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to declarative, functional style being held together by an object oriented framework - and the cross cutting concerns being handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.