I am doing a project using the CPLEX solver, in NetBeans with Java. We have several optimization problems to solve; I have already solved one of them by coding all the constraints, the objective and the variables directly in Java, without using AMPL. However, some people in my team want to use AMPL.
Thus, as I don't want to read the whole AMPL book to find the answer: is there an obvious reason to use AMPL rather than coding all the constraints "manually"? Moreover, can AMPL be integrated in NetBeans? I did not find any documentation about that.
Is AMPL useful when the constraints need to be "flexible" (I mean, we can't guess the exact number of constraints in advance; it depends on the parameters set by the user, and modularity is a high-importance factor)?
I am really curious to hear about this!
Thanks for your help
AMPL is an algebraic modeling language and, quoting from that link:
One advantage of AMPL is the similarity of its syntax to the
mathematical notation of optimization problems.
For example, this can allow you to define groups of constraints without knowing in advance the dimensions of the model. And, perhaps, you can make big changes to your model more quickly. (You'll have to think about how often you will actually do that.)
However, one could argue that the "obvious advantage" of AMPL is that it supports dozens of different solvers. You can create your model and solve it with CPLEX, but then decide that you want to use a different solver (e.g., Gurobi, Xpress, etc.). On the AMPL Solvers web page, they have the following recommendation:
We recommend that you then test alternative solvers to determine which
offers the best tradeoff of price and performance for your needs.
The AMPL API web page says that there is a Java API, so that should allow you to include it in a NetBeans project, but I have no experience with that.
At the end of the day, you could also argue that these "advantages" are a matter of taste. Using the CPLEX Java API directly, as you have already done, is certainly a valid solution if it meets your requirements. It may allow you to build the model more efficiently, to use solver-specific or advanced features that might not be supported by AMPL, and to have more fine-grained control over the model formulation.
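To make the "flexible constraints" point concrete, here is a minimal sketch (hypothetical data; written in Scala for brevity, but these are the same Concert API calls you would make from Java) of a model whose number of variables and constraints is only known at run time:

    import ilog.cplex.IloCplex

    // Hypothetical data: in practice these arrive as user-set parameters.
    val capacity = Array(4.0, 2.0, 7.0)
    val profit   = Array(3.0, 5.0, 1.0)
    val n = capacity.length // model size follows the data; nothing is hard-coded

    val cplex = new IloCplex()
    val x = cplex.numVarArray(n, 0.0, Double.MaxValue)

    // One capacity constraint per data row: the constraint count follows the input.
    for (i <- 0 until n)
      cplex.addLe(x(i), capacity(i))

    cplex.addMaximize(cplex.scalProd(profit, x))

    if (cplex.solve())
      println(s"objective = ${cplex.getObjValue}")
    cplex.end()

An AMPL model expresses the same thing declaratively with an indexed constraint (roughly, `subject to Cap {i in 1..n}: x[i] <= capacity[i];`), so neither route forces you to fix the number of constraints in advance; the difference is where that flexibility lives.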
You have just coded an optimisation model to optimise your company's production of widgets. Your company got a really good deal on $SOLVER1 so that's what you're using.
Over the next ten years, you improve and extend that model as your bosses throw new requirements at you. By the end of that time, you may have tens of thousands of lines of optimisation code as part of a system that, by now, is absolutely critical to your company's operations.
Your company's original licensing deal has expired, and the manufacturers of $SOLVER1 have massively increased the licensing fees, so you're now paying hundreds of thousands a year in licensing costs.
Meanwhile, the boffins at a rival company have just released a new version of $SOLVER2. It has fancy new algorithms that could solve the widget optimisation problem 20% faster and find better solutions than $SOLVER1 is giving you. It doesn't cost any more than $SOLVER1 and the performance is better.
Meanwhile, the open-source community has released $FREESOLVER. It might not be quite as powerful as the top commercial options, but it's as good as $SOLVER1 was ten years ago, and if you weren't paying $100k/year for licensing you could rent an awful lot of server time to make up for it.
...so, did you write your optimisation model on a platform that lets you switch to a new solver and take advantage of these opportunities without having to jettison ten years' worth of code?
There are huge advantages to being able to switch solvers quickly and easily. I know of one company who uses three different solvers for their work: they try two different open-source solvers both running in the cloud, and if neither of those can find an adequate solution then they throw it to an expensive solver with smarter algorithms. The open-source solvers handle 90% of their problems, so they only have to use the commercial solver for the last 10%, which allows them to make significant savings on their licensing costs.
One option we've discussed at my work is to use a commercial solver for mission-critical work, and open-source alternatives for applications like training or small-scale prototyping where we don't have the same requirements. That way we can minimise the number of concurrent users we need to license for the commercial solver.
(And, yes, there is still an issue of lock-in with the platform, but platforms like AMPL are significantly cheaper than a high-end commercial solver.)
Totally agree with everything that rkersh says. Also note that you should never write your model in a way that hard-codes details like your problem sizes, whether you write in an algebraic modelling language or against one of the more direct APIs.
Also, working with a modelling language gives you an extra level/layer of abstraction, which can help, especially when sharing or explaining your model to others, or when comparing it with a range of standard problem types. That said, I prefer the more nuts-and-bolts 'feel' of working with the direct APIs, and I almost never need (or have the time and budget) to reformulate my models that deeply.
Even a GPL (general-purpose language) means "general", yet newer and newer GPLs keep coming to life, so a given GPL is "more general" for some tasks than for others... :-) In theory the choice should not matter (one should be able to write an equally efficient compiler for Pascal or Perl), so you could write in whatever language you want without losing expressivity or efficiency; in practice it does matter (e.g. for C#, which is in the same league as Java now, MS writes a better compiler than the open-source equivalent).
Humans specialize - this is why we have gotten this far. :-) It is no different when the task is converting a business problem into a math model (aka modeling). The whole idea of having a dedicated modeling layer is that:
A. you have the utmost expressivity for that particular task (math modeling);
B. it enforces some best practices for modeling that a GPL does not "force" on you (in a GPL you are free to do anything, and that freedom is marketed to you as flexibility). E.g. AMPL, GAMS and others mix declarative code (the model) with procedural code (flow control), which is not good practice. On the other hand, separating the data from an abstract model is making its way into ALL modeling languages, though interestingly enough very slowly...
C. through A you can maintain the code more efficiently than otherwise (contrary to API modeling; I have clients who say they turned to a modeling language because API modeling was a liability for rapid model revamps);
D. in theory you could be solver-independent.
If you look around, all modeling languages try to maintain D (solver independence), except OPL (for historical reasons). But even in the case of OPL, you get constraint programming and constraint-based scheduling (besides math programming), which you don't get with AMPL/GAMS, however solver-independent they are...
The $SOLVER1 vs. $SOLVER2 + $FREESOLVER comparison is a bit broken, for four reasons:
A. open-source solvers are still very far from commercial solvers in terms of performance on large/complex problems (LP is probably becoming the exception). I have clients (the fastest sales I can remember) who switched the moment they tested commercial solvers after their "free ride".
B. while the scenario described for $SOLVER1 and $SOLVER2 is indeed plausible ($SOLVER1, the incumbent, gets more expensive over time), we have also witnessed exactly the opposite, where $SOLVER2 (a newcomer) actually increased its pricing 4x in 7 years (and in some cases doubled it), while $SOLVER1 (the incumbent) did not change at all.
C. mixing up modeling capabilities and solvers is a mistake. Writing models against an API IS what ties you to a solver, much more so than a modeling language does. At a minimum, as the Hungarians say, "what you gain on the customs, you lose on the ferry"; in other words, freedom (i.e. flexibility) comes with the obligation to use it responsibly.
D. owning a solver for development is NOT expensive at all: a company can maintain a large number of solvers (for less than $10k it could have 4+ solvers for development), test which is fastest for any given model, and then choose the best-suited one for deployment.
In addition, the solver is just one piece of the puzzle. E.g. I have a client with disparate data sources where it takes 8 hours to create a model and 4 hours to solve it. Would this client welcome a more efficient data-handling suite, or would it insist that the solver be faster? Modelers are too isolated from the business in most cases; while in their minds a given model is perfect, how it is populated with data is treated as secondary, yet that is what makes or breaks performance.
What I see is API modelers moving to modeling languages, not the other way around, for various reasons...
But as somebody wrote above, there is a lot of "taste in the game", so if you feel more comfortable with a given approach, nobody can blame you for choosing it... :-) After all, it is very difficult to compare with the other approach, since it is almost never tried on the same case... so in the end what counts is the speed from business problem to a model that solves fast in the given application context. :-)
Phew, that was long... but I gave it my best shots... :-)
To keep it short: to illustrate the advantages/disadvantages of using AMPL, just compare using Java (AMPL) with using assembly language (CPLEX).
I recently had a friend tell me
"see perl was never designed to be fast"
Is that true?
The relevant piece of information I can find is this from Wikipedia:
The language is intended to be practical (easy to use, efficient, complete) rather than beautiful (tiny, elegant, minimal).
But it doesn't directly talk about speed. I think that with all the text processing that it needs to do, speed of execution really matters for a language like Perl. And with all the weird syntax, elegance was never an objective, I agree.
Was high speed of execution one of the design objectives of Perl?
There is one important aspect to be considered: algorithms. Perl's secret weapons are the algorithms backing certain language features, and the CPAN library.
Good algorithms trump raw execution speed for non-trivial problems. It typically takes more effort to select and implement algorithms in C-like languages than in Perl. This means that for a little tool coded up in half a day, the Perl version often outperforms a C version, because it was easier to build good data structures with hashes and to use the features provided by the language and libraries.
Once a Perl script starts running (i.e. after loading and compiling everything), it can be very speedy. It's that yucky compile-every-time that's a bit nasty.
However, I find that people don't really have to worry about how fast Perl can be. They waste all of their time by implementing stupid designs that do a lot more work than they need to do, misunderstanding key technologies, or just being boneheaded. It's not uncommon for me to help someone make their stuff go orders of magnitude faster by just tuning in the right places. That's not particular to Perl though. People have that problem with every language.
Perl has always aimed toward practicality, not anything (even close to) some sort of ivory tower purity, where a few goals are given absolute priority, and others are ignored (completely or nearly so).
As such, I think it's reasonable to say that maintaining a reasonable speed of execution has always been seen as important for Perl, but there are other factors (especially things like flexibility and ease of use) that are generally more important, so if a choice has to be made between one of them and speed of execution, the other factor will generally win unless the effect on execution speed is really serious.
I would say that a language designed for optimal run-time performance would not have constructs that allow compiling while running. So no, perhaps.
It became a design objective as of Perl 5.0. But keep in mind it is still interpreted, so it is fast for an interpreted language.
I've been looking at LLVM for quite some time as a new back-end for the language I'm currently implementing. It seems to have good performance, rather high-level generation APIs, and enough low-level support to implement exotic optimizations. In addition, and although I haven't checked it myself, Apple seems to have successfully demonstrated the use of LLVM for garbage-collected multi-core programs.
So far, so good. As I'm interested in both garbage collection and multi-core, the next step would be to choose an LLVM garbage collector that can handle multiple cores. Which brings me to the question: what is available? I'm aware of Jon Harrop's HLVM work, but that's about it.
Note that I need cross-platform, so Apple's GC is probably not what I'm looking for (unless there's a cross-platform version). Also note that I have nothing against stop-the-world garbage-collectors.
Thanks in advance,
Yoric
LLVM docs say that it does not support multi-threaded collectors yet.
As the matrix indicates, LLVM's garbage collection infrastructure is already suitable for a wide variety of collectors, but does not currently extend to multithreaded programs. This will be added in the future as there is interest.
The docs do say that to do multi-threaded garbage collection you need to stop the world and that this is a non-portable thing:
Threaded: Denotes a multithreaded mutator; the collector must still stop the mutator ("stop the world") before beginning reachability analysis. Stopping a multithreaded mutator is a complicated problem. It generally requires highly platform specific code in the runtime, and the production of carefully designed machine code at safe points.
However, shared state between threads is a nasty scaling issue. If your language communicates solely through message passing between 'tasks', so that there is no shared state between worker threads, then you could use a per-thread collector for each per-thread heap.
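A minimal sketch of that share-nothing shape (hypothetical names, in Scala, with a plain blocking queue standing in for the message channel): each worker keeps its mutable state strictly thread-private, so a runtime with per-thread heaps could collect each heap independently.

    import java.util.concurrent.LinkedBlockingQueue

    // An immutable message: the only thing that crosses the thread boundary.
    final case class Job(n: Int)

    val inbox = new LinkedBlockingQueue[Job]()

    val worker = new Thread(() => {
      var localTotal = 0       // thread-private state, never shared
      while (true) {
        val job = inbox.take() // receive an immutable message
        localTotal += job.n    // mutate only this worker's own data
        println(s"total so far: $localTotal")
      }
    })
    worker.start()

    inbox.put(Job(42))         // the producer hands over data by message only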
The quotes that Will gave are about LLVM's intrinsic support for GC, where you augment LLVM with C++ code telling it how to walk the stack, interpret stack frames, inject read and write barriers and so on. The primary goal of my HLVM project is to become useful with minimal effort and risk so I chose to use the shadow stack for an "uncooperative environment" in order to avoid hacking on immature internals of LLVM. Consequently, those statements about LLVM's intrinsic support for GC do not apply to HLVM's garbage collector because it does not use that infrastructure at all. My results are extremely compelling: you can achieve excellent performance with minimal effort (serial performance and parallel performance).
I believe HLVM already runs out of the box across Unixes, including Mac OS X, because it requires only POSIX threads. I strongly disagree with the claim that writing a stop-the-world GC is difficult: it took me 5 days to write a 100-line multicore garbage collector, and I barely know anything about computers. I cannot believe it would be difficult to port to Windows either.
How do I learn PLC programming? Would it differ greatly for different brands of PLCs? Is ladder programming the same as PLC programming?
I did a lot of PLC programming, and now do quite a bit of .NET programming. It's very dangerous to make the switch either way, because a lot of the skills that you think should be transferable (patterns and such) lead you very far astray.
The biggest difference that I tell people about is that PC program code should be written as if other programmers are the audience, but PLC programs (ladder logic) must be written as if maintenance people are the audience. Maintenance people in most facilities (particularly manufacturing) frequently connect directly to PLCs, and in online mode they can watch the code execute graphically to figure out what's wrong.
For instance, if an output isn't turning on, they'll type the output's electrical device ID into the find function of the programming software, find that output coil, and start tracing back from there looking for issues. One of the frequent mistakes that some PLC programmers make is to "map" their I/O into a structure (in PLCs, these are called user-defined types) and use a copy instruction to move all the inputs or outputs over to the structure at once. Makes sense from a PC programming perspective, but it makes the maintenance person want to kill you. Typically the programming software provides a cross-reference feature where they can specify that output coil, and it will tell them everywhere in the program that it's used. If you use a copy instruction to move 10 words of I/O into a 10-word data structure, he's got to sit there and count bits to figure out which bit in the source of the copy maps to which bit on the destination side.
True, comments can help, but there's a problem with that too: PLCs store the whole program and allow you to upload it in an emergency if you need to troubleshoot and you don't have a copy of the original program. The problem is that, for space reasons, the PLC doesn't store the comments. So if the line is down, it's costing $5,000 per minute in downtime, and a guy runs out there with a laptop, he might have to do a quick upload without comments and try to troubleshoot it. Having those copy instructions in there, wasting 10 minutes of his time, just cost the company $50,000 in downtime. These are the things you have to be aware of when writing PLC programs.
Some other tips: some PLCs have support for FOR loops. Never use them, for the same reason as above: they make the code very difficult for a maintenance person to troubleshoot. If you have a piece of code in the PLC that gets executed more than once per scan (like the contents of a loop), then when you go into online debugging mode the software can't show you the values for each of the 10 iterations that executed this scan, so you really have no idea what value you're looking at. You then have to write tricky code to pull the loop values for a specific loop index out into other tags (variables) that you can monitor. That's just one more impediment to fixing the problem in an emergency. Using a subroutine more than once per scan suffers from the same problem.
Indirect addressing (what we would call arrays) is very difficult for maintenance people to understand. It's generally OK to use it when you're dealing with recipe management (storing and retrieving values for how to build your part), but you should try to stay away from it in the control part of the program.
In PC programming, of course, we seek to re-use code as much as possible. However, in PLCs and control systems, downtime is extremely expensive and hardware is expensive, while memory is cheap, and actually PLC programmers are cheap. Therefore, it's expected that if you have 10 identical things on your machine (like conveyor drives or something), you will have 10 different files (subroutines), one for each drive, and each drive will have its own variables associated with it: e.g. Drive1_Run, Drive2_Run, Drive3_Run, etc. This is going to feel very "wrong" when you come from a PC programming background, but it all follows from the points I've made above. When you're in a downtime situation and someone says that Drive 3 isn't working, you crack open the laptop, go to the file for Drive 3 and look at the Run output rung. You start troubleshooting from there, while the program is executing. There are no breakpoints (the program never stops).
Good luck on your endeavors. I wrote up some more insights from my years of programming PLCs, if you want to check them out.
You can learn PLC programming from various sources on the internet, one of which is this (wikibooks) or this.
The program that you write will be pretty much the same across different brands of PLCs for LLDs (Ladder Logic Diagrams), unless you use PLC-specific functions. There will be many more differences if you use a language like IL (Instruction List). And once you have written the program, the format of storage and execution differs widely across brands.
Ladder logic is one of the 5 programming languages for PLCs, the others being FBD (Function Block Diagram), ST (Structured Text, similar to the Pascal programming language), IL (Instruction List, similar to assembly language) and SFC (Sequential Function Chart). These are just various representations of the programming language, various flavours if you will, but usually a given brand supports only one of these. In the USA LLDs are widely used, while in Europe ILs are more popular.
Ladder, often called LD, is one of several language styles defined in the IEC 61131 automation programming standard. The others are SFC (sequential function chart), FBD (function block diagram), ST (structured text), and IL (instruction list). IL is similar to assembler and very few people use it. ST is text-based programming, much like early versions of BASIC; it is not often used either. LD is designed to resemble relay contacts off an electrical control panel (which many PLCs replaced). FBD looks more like a circuit diagram, and SFC is basically a flow chart.
Some PLCs support all of these, others only some, or even just one. While LD is the most common, FBD and SFC are gaining popularity.
Different brands do use slightly different programming languages. They are usually similar enough that once you understand one brand you can work with any of them, but you cannot directly take code from one PLC and use it on another brand.
The answers given so far are pretty much on target. One thing I found is that PLCs have a split personality when it comes to their languages and setup. Their core design goal is to give the electrical guys a flexible means of setting up control logic for their overall design. PLCs are basically a bunch of inputs and a bunch of outputs, and how they are connected is controlled by the software you load into the device.
One emphasis of the languages used for PLCs is that they be accessible to people coming from an electrical background. So the idioms and structures seem counterintuitive to a person used to high-level languages or even assembly languages. Ladder logic, for example, is very accessible to electrical folks.
In recent years, however, PLCs have been supporting a multitude of languages for maximum flexibility. That said, in my opinion the handful of PLCs I worked with were very lacking as programming environments. Simple things like assigning variable names to memory locations are often not designed into the language being used. The ones that are easy to work with are often not the most cost-effective for the job.
Despite these handicaps, they are excellent for simplifying complex electrical systems. If you are working with others on a project, you will find that your knowledge of programming will help the project solve thorny problems. I was able to take a 100-rung ladder logic program and rewrite it in a third of the rungs. Once I learned the ladder logic language, I was able to implement various optimizations that reduced the complexity of the program.
One tip is that you will need to learn about latching. Sometimes you will need to store or hold some output, and unless you latch it, the result will disappear on the next scan cycle. Once you understand the issue it becomes clear, but at first it was a great source of frustration for me.
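For readers coming from conventional programming, the classic "seal-in" latch is just this boolean shape (a hypothetical Scala stand-in for a ladder rung, evaluated once per scan):

    // The output seals itself in: once 'start' pulses true, 'run' stays
    // true on every following scan until 'stop' goes true.
    var run = false

    def scan(start: Boolean, stop: Boolean): Unit =
      run = (start || run) && !stop

    scan(start = true,  stop = false) // run becomes true
    scan(start = false, stop = false) // run stays true: latched
    scan(start = false, stop = true)  // run drops out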
PLC programming should be viewed as the implementation activity of PLC software engineering output, unless you are using the PLC purely as an alternative to mechanical or electrical components.
With this as the basis: the PLC programming environment is typically IEC 61131-driven, with guaranteed cycle time and "pre-emptive" real-time behavior, so there is no need to handle real-time-OS issues. Code is scanned continuously rather than followed by a program pointer, which is a different concept from the typical computer-style task-spawning kind of multitasking. Code execution is naturally atomic, with no need for monitors between tasks.
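That scan model can be caricatured in a few lines (a hypothetical sketch with stub I/O; real PLC runtimes add timing supervision, diagnostics, etc.):

    // One PLC scan: latch the inputs, evaluate the whole program against
    // that snapshot, then write all outputs at once. Nothing blocks
    // mid-scan, which is what bounds the cycle time and makes execution
    // effectively atomic.
    final case class Image(bits: Vector[Boolean])

    def readInputs(): Image            = Image(Vector(true, false)) // stub
    def solveLogic(in: Image): Image   = Image(in.bits.map(!_))     // stub "program"
    def writeOutputs(out: Image): Unit = println(out.bits)

    while (true) {
      val inputs  = readInputs()       // 1. snapshot the physical inputs
      val outputs = solveLogic(inputs) // 2. scan the program top to bottom
      writeOutputs(outputs)            // 3. update the physical outputs together
    }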
Each of the languages differs in how closely your code can mirror the logic model you want to implement.
Ladder is based on the electrical power-flow, interlocking style. Code resolution within a single network is either horizontal or vertical scanning (you can find resources on this topic from the manufacturer or other sites). If your code relies on single-scan resolution within one network, some hard-to-explain behavior can result from the scanning type (it is important to remember that ladder only emulates an electrical circuit; it is still sequential in execution).
FBD, or function block diagram, was originally electronic signal flow, but today it can be data flow, depending on the type of PLC. FBD shows the execution sequence more clearly, quite similar to horizontal-scanning ladder. Today FBD is typically used as a container for object function blocks, although dependency implementation and visual similarity to the process model depend on the PLC type.
Literal is very similar to BASIC, but in syntax only; execution is still scan-through. The literal language is good for mathematical calculation. For higher-level implementations, methods or the derivation of attributes within an object can be easier in literal code. State-machine programming using English-like state representations or constants makes a program very readable.
Statement list looks similar to assembly mnemonics, but again execution is scan-through, not program-pointer driven. It is strong in bit operations and parenthesis-style discrete logic. It can be a very efficient language to use, given proper structuring and commenting.
SFC, or sequential function chart, is a complementary language for sequence implementation. SFC has inherent rules on action-block activation, state transitions, and parallel sequence activation and merging. However, complex exception branching or concurrent action management can make the implementation complicated and the chart difficult to read.
PLC system management (I/O handling, communication, hot standby) is a hardware-configuration effort and is product-dependent. Generally it can be treated separately from the software engineering. However, data related to PLC system management are of the "located" type (an independent data-addressing area), so a good data-modeling approach in the software engineering can help with the manageability of system data.
The Online PLC Simulator may be useful.
You can use Structured Text (ST), which consists of a series of instructions that, as in high-level languages, can be executed conditionally (IF..THEN..ELSE) or in loops (WHILE..DO).
I find it better than ladder, as it is close to a standard programming language.
I did a little PLC programming at university. It seemed to me to be one level lower than assembly, but the device we were using wasn't the newest one.
I believe you need to have a PLC driver, but I would first look for simulators and read more about it before buying.
Allen-Bradley has a free DOS-based software PLC, specifically for training. You can probably find it if you go to their site, or Google it. It's used to teach PLC programming in schools.
For a beginner trying to learn ladder logic, the best way is to attend free online training at http://plcs.net
PLC is the term used for the devices that use ladder logic. Devices that are programmed in more typical programming languages are generally called microcontrollers. However, some of us occasionally lump them all under the PLC name. :-) Not sure how much ladder logic varies, but microcontroller code can vary significantly.
It seems that there has been a recent surge of interest in STM (software transactional memory) frameworks and language extensions. Clojure in particular has an excellent implementation which uses MVCC (multi-version concurrency control) rather than a rolling commit log. GHC Haskell likewise has an extremely elegant STM monad, which allows transaction composition. Finally, so as to toot my own horn just a bit, I've recently implemented an STM framework for Scala which statically enforces reference restrictions.
All of these are interesting experiments, but they seem to be confined to that sphere alone (experimentation). So my question is: have any of you seen or used STM in the real world? If so, why? What sort of benefits did it bring? What about performance? (there seems to be a great deal of conflicting information on this point) Would you use STM again or would you prefer to use some other concurrency abstraction like actors?
I participated in the hobbyist development of a BitTorrent client in Haskell (named conjure). It uses STM quite heavily to coordinate the different threads (one per peer, plus one for storage management and one for overall management).
Benefits: fewer locks, readable code.
Speed was not an issue, at least not due to STM usage.
Hope this helps
We use it pretty routinely for high-concurrency apps at Galois (in Haskell). It works, it's used widely in the Haskell world, and it doesn't deadlock (though of course you can have too much contention). Sometimes we rewrite things to use MVars, if we've got the design right, as they're faster.
Just use it. It's no big deal. As far as I'm concerned, STM in Haskell is "solved". There's no further work to do. So we use it.
The article "Software Transactional Memory: Why is it Only a Research Toy?" (Călin Caşcaval et al., Communications of the ACM, Nov. 2008), fails to look at the Haskell implementation, which is a really big omission. The problem for STM, as the article points out, is that implementations must chose between either making all variable accesses transactional unless the compiler can prove them safe (which kills performance) or letting the programmer indicate which ones are to be transactional (which kills simplicity and reliability). However the Haskell implementation uses the purity of Haskell to avoid the need to make most variable uses transactional, while the type system provides a simple model together with effective enforcement for the transactional mutation operations. Thus a Haskell program can use STM for those variables that are truly shared between threads whilst guaranteeing that non-transactional memory use is kept safe.
We, factis research GmbH, are using Haskell STM with GHC in production. Our server receives a stream of messages about new and modified "objects" from a clinical "data server"; it transforms this event stream on the fly (by generating new objects, modifying objects, aggregating things, etc.) and calculates which of these new objects should be synchronized to connected iPads. It also receives form inputs from iPads, which are processed, merged with the "main stream", and synchronized to the other iPads. We're using STM for all channels and mutable data structures that need to be shared between threads. Threads are very lightweight in Haskell, so we can have lots of them without impacting performance (at the moment, 5 per iPad connection). Building a large application is always a challenge and there were many lessons to be learned, but we never had any problems with STM. It always worked as you'd naively expect. We had to do some serious performance tuning, but STM was never the problem. (80% of the time we were trying to reduce short-lived allocations and overall memory usage.)
STM is one area where Haskell and the GHC runtime really shines. It's not just an experiment and not for toy programs only.
We're building a different component of our clinical system in Scala and have been using Actors so far, but we're really missing STM. If anybody has experience of what it's like to use one of the Scala STM implementations in production, I'd love to hear from you. :-)
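Not an answer about production use, but for anyone curious what the Scala side looks like, here is a minimal sketch assuming the scala-stm library (Refs are read and written only inside atomic, and retry blocks the transaction until its inputs change):

    import scala.concurrent.stm._

    // Two transactional references; all access goes through a transaction.
    val checking = Ref(100)
    val savings  = Ref(0)

    def transfer(amount: Int): Unit =
      atomic { implicit txn =>
        if (checking() < amount) retry // wait until another txn adds funds
        checking() = checking() - amount
        savings()  = savings()  + amount
      }

    transfer(30) // the two updates commit together; no user-visible locks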
We have implemented our entire system (an in-memory database and runtime) on top of our own STM implementation in C. Prior to this, we had a log- and lock-based mechanism to deal with concurrency, but this was a pain to maintain. We are very happy with STM, since we can treat every operation the same way. Almost all locks could be removed. We now use STM for almost anything at any size; we even have a memory manager implemented on top of it.
The performance is fine, but to speed things up we have now developed a custom operating system in collaboration with ETH Zurich. The system natively supports transactional memory.
But there are some challenges caused by STM as well, especially with larger transactions and hotspots that cause unnecessary transaction conflicts. If, for example, two transactions put an item into a linked list, an unnecessary conflict will occur that could have been avoided using a lock-free data structure.
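The linked-list hotspot is easy to picture in any STM. A hypothetical scala-stm sketch: because every push rewrites the single head reference, two concurrent pushes always collide and one of them must roll back, even though the two insertions are logically independent.

    import scala.concurrent.stm._

    // One Ref holding the list head is a hotspot: every push writes it.
    val head = Ref(List.empty[Int])

    def push(x: Int): Unit =
      atomic { implicit txn =>
        head() = x :: head() // concurrent pushes conflict here; one retries
      }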
I'm currently using Akka in some PGAS systems research. Akka is a Scala library for developing scalable concurrent systems using Actors, STM, and built-in fault tolerance capabilities modeled after Erlang's "Let It Fail/Crash/Crater/ROFL" philosophy. Akka's STM implementation is supposedly built around a Scala port of Clojure's STM implementation. An overview of Akka's STM module can be found here.