Q# versus LIQUi|>

Is Q# meant to be a LIQUiD replacement? It's unclear from the website whether this is true, but I haven't heard anything about LIQUiD since the announcement of Q#. There seems to be a large set of tools in LIQUiD that are not available in Q#, so are these meant to be linked together via .NET? In particular, I am interested in the circuit class and the optimizations for QECC in LIQUiD.

LIQUi|> (which I'll write Liquid from now on :-)) and Q# have different goals. Liquid is an F#-based platform for simulating quantum circuits. It provides a lot of handy tools and features, like the QECC and noise modelling components you mention. It provides full access to (and manipulation of, if desired) the quantum state, so you can simulate things with Liquid that you could never do on a real quantum system. Finally, Liquid includes some highly optimized capabilities for Hamiltonian simulation that rely on linear algebra tricks that are not available on a real quantum system.
Q# is a high-level language for coding quantum algorithms. Its goal is to let you easily code large quantum applications that would eventually be run on a large quantum system (hundreds of logical, error-corrected qubits). It does support simulation, but as a debugging aid. Put another way, Q# isn't primarily a language for programming quantum simulations, even though that's the way it's used today because of the low availability of large-scale quantum systems.
Liquid is still alive. If your focus is on simulation, Liquid is a great choice: you can get direct access to the innards of the simulator, but still code at a high level.
Q# is active and growing. While the focus is on actual execution, the Quantum Development Kit already includes both a full state vector simulator and a resource-estimating simulator (the trace simulator). I wouldn't be too surprised if more debugging features, including simulation, are on the way; for instance, the June release added the DumpMachine and DumpRegister operations to allow debugging access to the full state vector.

Agent Based Modeling in Modelica

Is it possible to simulate multi-agent systems in Modelica? I'm talking about a system such as MASON, written in Java. How easy or difficult would it be?
As I understand it, Modelica is not a typical programming language, so would it be particularly helpful, or would the basic design of the Modelica language get in the way? And more importantly, how would we model the "messaging" systems that are common in agent-based modeling?
Modelica can simulate discrete-event systems. Some libraries exist: ModelicaDEVS, ARENALib, etc.
The syntax is perhaps not yet ideal for this kind of "messaging", but the language may well be improved further in this direction.
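To make concrete the kind of "messaging" meant here, below is a toy event-queue sketch in plain Python (not Modelica; the agent and message names are made up): agents exchange timestamped messages through a queue and react when a message arrives, which is the pattern a DEVS-style library has to express.

    # Toy sketch (plain Python, not Modelica) of agent messaging:
    # timestamped messages are queued and delivered to agents,
    # which may react by sending further messages.
    import heapq

    class Agent:
        def __init__(self, name):
            self.name = name

        def receive(self, t, sender, msg, queue):
            print(f"t={t:.1f}: {self.name} received {msg!r} from {sender}")
            if msg == "ping":
                # reply to the sender after a 1.0 time-unit delay
                heapq.heappush(queue, (t + 1.0, self.name, sender, "pong"))

    agents = {"a": Agent("a"), "b": Agent("b")}
    queue = [(0.0, "b", "a", "ping")]   # (time, sender, receiver, message)

    while queue:
        t, sender, receiver, msg = heapq.heappop(queue)
        if receiver in agents:
            agents[receiver].receive(t, sender, msg, queue)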
One advantage is that real-time capable code can be generated, so the agents could run on embedded systems even under hard real-time constraints; only some of the other tools, such as Ptolemy II, support this.
P.S. (added in response to the first comment):
From the start, Modelica was designed to generate code capable of running in real time. So you could take the unchanged Modelica model of your agent, connect its I/O to sensors and actuators, and download it to real-time hardware (e.g. a PowerPC). Your swarm of agents would then exhibit exactly the timing behaviour you modeled, and exist in the real world. You could also build only one real agent in hardware (perhaps that hardware is expensive) and simulate its interaction with all the other agents in real time on real-time simulator hardware, again using your unchanged models.
This is one of the major reasons why Modelica's semantics are not as dynamic as, say, Java's. If you want to run your MASON agent on real hardware you are in trouble: you have to move to something like Safety-Critical Java, which means that many constructs in your code, and in the standard Java libraries, must be rewritten or are not allowed at all. Without this you will have to live with the possibility that your agent will miss its mission and burn down the house...

Extensive comparison between SIMULINK and LabVIEW

I am trying to determine which of these two to buy for my work. I have used SIMULINK but not LabVIEW. Is there anyone who has used both and would like to provide some details? My investigation criteria are user-friendliness, availability of libraries and template functions, real-time probing facilities, COTS hardware interfacing opportunities, quality of code generation, design for testability (i.e. ease of generating unit/acceptance tests), etc. However, if anyone would like to educate me on further criteria, please do so by all means!
For anyone who does not know about SIMULINK and LabVIEW: they are both domain-specific languages (DSLs) intended for graphical dataflow modelling (and also code generation). They are used across many industries and quite heavily for engineering design and modelling.
IMPORTANT: I am quite interested to know whether SIMULINK and LabVIEW offer real-time probing. For example, I have a model that I want to simulate. If there are variables associated with certain building blocks in that model, could I view them changing as the simulation runs? As far as I know this is not possible with SIMULINK, which only has a step-by-step debugger. I am not aware of anything similar in LabVIEW.
I really have not used LabVIEW and cannot obtain it temporarily, as my work internet connection has download restrictions and administrative privilege issues. This is why I cannot rely solely on the NI website to draw conclusions. If there is any white paper that addresses this issue, I would also love to know :)
UPDATE SINCE LAST POST
I have used the MATLAB code generator and would not say it is the best. However, I now hear that Simulink Embedded Coder is the best code generator around and almost in a class of its own. Can anyone confirm whether it is suitable for safety-critical system design, i.e. generating code from safety-critical subsystem models? I know that MathWorks is constantly trying to close the gap towards fully flexible, production-level C/C++ code generation.
I know that an ideal answer would be, "Depending on what you are trying to do, use a bit of both." Interestingly, I think I am heading in that direction. At the end of the day, it is a lot of money and it needs to be spent "nicely".
Thanks in advance.
I have used LabVIEW since 1995 and Simulink since 2000. I am now involved in control system design and in the simulation of robotic systems using LabVIEW Real-Time, and of automotive ECUs using MATLAB/Simulink/dSPACE.
LabVIEW is focused on measurement systems, and MATLAB/Simulink on dynamic simulation, so:
If you run complex simulations, and your work is creating/debugging complex simulation models of controllers or plants, use Simulink + Real-Time Workshop + Stateflow. LabVIEW has no efficient code generators for dynamic simulation; RTW generates smaller and faster code.
If your main work is developing systems with controllers and GUIs for machines, or you want to deploy the controllers in the field, use LabVIEW.
If your main work is developing flexible HIL or SIL systems with a good GUI, you can use VeriStand. VeriStand can mix Simulink and LabVIEW code.
And if you have a big (VERY big) budget and you are working on automotive control prototypes, dSPACE hardware is a very good choice for fast development of automotive ECUs, or OPAL for developing electric power circuits. But only for prototyping or HIL testing of controllers.
From the point of view of COTS hardware:
MathWorks doesn't manufacture hardware -> MATLAB/Simulink supports hardware from several vendors.
National Instruments produces/sells hardware -> LabVIEW Real-Time is focused on supporting National Instruments hardware. There is no full COTS replacement.
I have absolutely no experience with Simulink, so I'll comment only on LV, although a quick read about Simulink on Wikipedia seems to indicate that it's focused mainly on simulation and modelling, which is certainly not the case with LabVIEW.
OK, so first of all, LV is NOT a DSL. While you wouldn't necessarily want to use it for every kind of project, it's a general-purpose programming language and you should take that into account. I know that NI has a simulation toolkit for LV, which might help you if that's what you're after, but I have absolutely no experience with it. The images I saw of it seemed to indicate that it adds a special kind of diagram to LV for simulation.
Second, LV is not restricted to any kind of hardware. It's a general purpose language, so you can write code which won't use any hardware at all, code which will use or run on NI's hardware or code which will use any hardware (be it through DLL calls, .NET assemblies, RS232, TCP, GPIB or any other option you can think of). There is quite a large collection of LV drivers for various devices and the quality of the driver usually depends on who wrote it.
Third, you can certainly probe in real time in LV. You write your code, just as you would in C or Java, and when you run it, you have several debugging options:
Single stepping. This isn't actually all that common, partially because LV is parallel.
Execution highlighting. This runs the code in slow motion, while showing all the values in the various wires.
Probes, which show you the last value that each wire had, where wires fill the same function that variables do in text-based languages. This updates in real time and I assume is what you want.
Retain wire values, which allows you to probe a wire even after data has passed through it. This is similar to what you get in text-based IDEs with variables. In LV you don't usually have it because wire values are transient, so the value is not kept around unless you explicitly ask for it.
Of course, since you're talking about code, you could also simply write the code to display the values to the screen on a graph or a numeric indicator or to log them to a file, so there should be no need for actual probing. You could also add analysis code, etc.
Fourth, you could try downloading and running LV in a fully functional evaluation mode. If I remember correctly, NI currently gives you 7 days and then 45 days if you register on their site. If you can't do that on a work computer, you could try at home. If your problem is only with downloading, you could try contacting your local NI office and asking them to send you a DVD.
Note that I don't really know anything about modelling and simulation, so I have no idea what kind of code you would actually have to write in order to do what you want. I assume that if NI has a special module for it, then it's not something that you can completely cover in regular code (at least not if you want the original notation), but I would say that if you could write the code that does what you want in C, there's no reason you shouldn't be able to write it in LV (assuming, of course, that you know how to write code in LV).
A lot of the best answer would have to depend on your ultimate design requirements. Are you developing a product? If so, in what stage of development are you? Or are you doing research?
I recently did a comparison just as you are doing. I know LV, but wanted to move towards a more hardware-scalable option, since NI HW is very expensive in volume. That is, my company wanted to move towards a product. What LV and NI HW give you is flexibility: you can change code very quickly compared to C. On the other hand, LV does not run on nearly as many different HW platforms as C. So I wanted to find an inexpensive platform that would work well for real-time control and data acquisition, such that if we wanted to sell a product for, say, $30k, our controller wouldn't be costing $15k of that. We ended up with Diamond Systems Linux SBCs. Interestingly, Simulink ended up using the most expensive hardware! It did have a lot of flexibility, and could generate code, as well as model plants and controllers. But then, LV can do that as well.
As Yair wrote, LV has plenty of good debugging tools. One of the more interesting tools that is not so well known is the Suspend when Called option for a SubVI. This allows you to play with the inputs and outputs of a SubVI as much as you want while execution is paused.
MATLAB and Simulink are the de facto standard for control system design and simulation. Simulink controller models can be used for offline simulation in conjunction with plant models, all the way to real-time implementation on embedded targets. It is a general simulation framework with extensive built-in libraries as well as a la carte special-purpose libraries, and it can be extended through the creation of custom blocks (S-function blocks) in C and other languages. It includes the ability to display values in graphs, numeric displays, gauges, etc. while a non-realtime simulation is taking place. Realtime target support from The MathWorks includes x86 (xPC Target) and several embedded targets (MPC555, etc.), and there is 3rd-party support for other targets. The aforementioned dSPACE provides complete prototyping controllers, including support for their quite powerful hardware. xPC Target includes support for a plethora of COTS PC data acquisition cards. Realtime target support includes GUI elements such as graphs, numeric displays, gauges, etc.
As I understand it (I have never really used it in anger), LabVIEW only supports NI hardware, and is more hardware-oriented. Simulink supports hardware from multiple vendors, be it for data acquisition or real-time implementation, but it may require a bit more work for the user to interface to his or her own hardware (less plug & play than LabVIEW). On the other hand, Simulink provides tools to support the whole model-based design process, from modelling & simulation, control design, verification & validation, code generation, hardware-in-the-loop, etc.
Disclaimer: I used to work for MathWorks.
You guys may really be interested in the Control Design and Simulation Module for LabVIEW. It does a lot of simulation and in the future may be competitive with Simulink. I'm not a control engineer, but I use it sometimes for simple testing, and I'm glad that I don't have to learn Simulink from scratch to do some work, since I'm familiar with the LabVIEW philosophy.

Neural Network simulator in FPGA? [closed]

To learn FPGA programming, I plan to code up a simple Neural Network in FPGA (since it's massively parallel; it's one of the few things where an FPGA implementation might have a chance of being faster than a CPU implementation).
Though I'm familiar with C programming (10+ years), I'm not so sure about FPGA development. Can you provide a guided list of what I should do / learn / buy?
Thanks!
Necroposting, but for others like me who come across this question: there is an in-depth, though old, treatment of implementing neural networks using FPGAs.
It's been three years since I posted this, but it is still being viewed, so I thought I'd add two more papers from last year that I recently found.
The first talks about FPGA Acceleration of Convolutional Neural Networks; Nallatech performed the work. It's more marketing than an academic paper, but still an interesting read, and it might be a jumping-off point for someone interested in experimenting. I am not connected to Nallatech in any way.
The second paper came out of the University of Birmingham, UK, written by Yufeng Hao. It presents A General Neural Network Hardware Architecture on FPGA.
Most attempts at building a 'literal' neural network on an FPGA hit the routing limits very quickly; you might get a few hundred cells before place & route takes longer to finish than your problem is worth waiting for. Most of the research into NNs on FPGAs takes this approach, concentrating on a minimal 'node' implementation and suggesting that scaling will now be trivial.
The way to make a reasonably sized neural network actually work is to use the FPGA to build a dedicated neural-network number-crunching machine. Put your initial node values in a memory chip, have a second memory chip for your next-timestep results, and a third area to store your connectivity weights. Pump the node values and connection data through using techniques to keep the memory buses saturated (order node loads by CAS line, read ahead using pipelines). It will take a large number of passes over the previous dataset as you pair off weights with previous values, run them through DSP MAC units to evaluate the new node values, then push the results out to the result memory area once all connections have been evaluated. Once you have a whole timestep finished, reverse the direction of flow so that the next timestep writes back to the original storage area.
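As a rough illustration of that data flow, here is a software-only model in plain Python/NumPy (made-up sizes, tanh chosen arbitrarily as the node function; this is not FPGA code, just the shape of the per-timestep computation and the buffer swap):

    # Software model of the ping-pong scheme: node values are read from
    # one buffer, each new value is a multiply-accumulate over all inputs
    # (what the DSP MAC units would do), results go to a second buffer,
    # and the buffers swap roles for the next timestep.
    import numpy as np

    n_nodes = 256
    weights = np.random.randn(n_nodes, n_nodes).astype(np.float32)  # connectivity
    current = np.random.randn(n_nodes).astype(np.float32)           # buffer A
    nxt = np.empty_like(current)                                     # buffer B

    def timestep(weights, current, nxt):
        for i in range(len(nxt)):
            nxt[i] = np.tanh(np.dot(weights[i], current))   # MAC + node function
        return nxt, current          # swap: results feed the next timestep

    for _ in range(10):
        current, nxt = timestep(weights, current, nxt)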
I want to point out a potential issue with implementing a neural network in an FPGA. FPGAs have a limited amount of routing resources. Unlike logic resources (flops, look-up tables, memories), routing resources are difficult to quantify. Maybe a simple neural network will work, but a "massively parallel" one with mesh interconnects might not.
I'd suggest starting with a simple core from OpenCores.org just to get familiar with the FPGA flow, and then moving on to prototyping a neural network. Downloading the free Xilinx WebPack, which includes the ISim simulator, is a good start. Later on you can purchase a cheap dev board with a small FPGA (e.g. a Xilinx Spartan-3) to run your designs on.
A neural network may not be the best starting point for learning how to program an FPGA. I would initially try something simpler like a counter driving LEDs or a numeric display and build up from there. Sites that may be of use include:
http://www.fpga4fun.com/ - Excellent examples of simple projects and some boards.
http://opencores.org/ - Very useful reference code for many interfaces, etc...
You may also like to consider using a soft processor in the FPGA to help your transition from C to VHDL or Verilog. That would allow you to move small code modules from one to the other to see the differences in hardware. The choice of language is somewhat arbitrary - I code in VHDL (syntactically similar to Ada) most of the time, but some of my colleagues prefer Verilog (syntactically similar to C). We debate it once in a while, but really it's a personal choice.
As for the buyers / learners guide, you need:
Patience :) - The design cycle for FPGAs is significantly longer than for software due to the number of extra 'free parameters' in the build, so don't be surprised if it takes a while to get designs working exactly the way you want.
A development board - For learning, I would buy one from one of the three bigger FPGA vendors: Xilinx, Altera or Lattice. My preference is Xilinx at the moment, but all three are good. For learning, don't buy one based on the higher-end parts - you don't need to when starting out with FPGAs. For Xilinx, get one based on the Spartan series, such as the SP601 (I have one myself). For Altera, buy a Cyclone one. The development boards will be significantly cheaper than those for the higher-end parts.
A programming cable - Most companies produce a USB programming cable with a special connector to program the devices on the board (often using JTAG). Some boards have the programming interface built in (such as the SP601 from Xilinx) so you don't need to spend extra money on it.
Build tools - There are many varieties of these but most of the big FPGA vendors provide a solution of their own. Bear in mind that the tools are only free for the smaller lower-performance FPGAs, for example the Xilinx ISE Webpack.
The software comprises stages with which you may not be familiar having come from the software world. The specifics of the tool flow are always changing, but any tool you use should be able to get from your code to your specific device. The last part of this design flow is normally provided by the FPGA vendor because it's hardware-specific and proprietary.
To give you a brief example, the software you need should take your VHDL and Verilog code and (this is the Xilinx version):
'Synthesise' it into constructs that match the building blocks available inside your particular FPGA.
'Translate & map' the design into the part.
'Place & route' the logic in the specific device so it meets your timing requirements (e.g. the clock speed you want the design to run at).
Regardless of what Charles Stewart says, Verilog is a fine place to start. It reminds me of C, just as VHDL reminds me of Ada. No one uses Occam in industry and it isn't common in universities.
For a Verilog book, I recommend these, especially Verilog HDL. Verilog does parallel work trivially, unlike C.
To buy, get a relatively cheap Cyclone III eval board from Altera (e.g. this Cyclone III one with NIOS for $449, or this one for $199) or from Xilinx.
I'll give you yet a third recommendation: use VHDL. Yes, on the surface it looks like Ada, while Verilog bears a passing resemblance to C. However, with Verilog you only get the types that come with it out of the box, whereas with VHDL you can define your own new types, which lets you program at a higher level (still RTL, of course). I'm pretty sure the free Xilinx and Altera tools support both VHDL and Verilog. "The Designer's Guide to VHDL" by Ashenden is a good VHDL book.
VHDL has a standard fixed-point math package which can make NN implementation easier.
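For readers who have not used fixed point before, here is a small illustration in plain Python (not VHDL; the 12-fractional-bit format and the numbers are arbitrary) of the multiply-accumulate arithmetic such a package handles for you:

    # Fixed-point multiply-accumulate: values are stored as integers
    # scaled by 2**FRAC_BITS; a product has twice the fractional bits,
    # so it is shifted back down before accumulating.
    FRAC_BITS = 12

    def to_fixed(x):
        return int(round(x * (1 << FRAC_BITS)))

    def fixed_mac(acc, a, b):
        return acc + ((a * b) >> FRAC_BITS)

    weights = [0.75, -0.5, 0.25]
    inputs  = [0.5, 1.0, -1.5]

    acc = 0
    for w, x in zip(weights, inputs):
        acc = fixed_mac(acc, to_fixed(w), to_fixed(x))

    print(acc / (1 << FRAC_BITS))   # ~ -0.5, matching the float result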
My suggestion is an old one, because I haven't thought much about FPGAs in nearly 20 years, and it uses a concurrent programming language that is rather obscure, but Page & Luk, 1991, "Compiling Occam into FPGAs", covers some crucial topics in a nice way; enough, I think, for your purposes. Two links for trying stuff out:
KRoC is an actively maintained, linux-based Occam compiler, which I know has an active user base.
Roger Peel has a logic synthesis page that has some documentation of his linux-based workflow from Occam code synthesis through to FPGA I/O.
Occam->FPGA isn't where the action is, but it may be a much better place to start than, say, Verilog.
I would recommend looking into Xilinx high-level synthesis (HLS), especially if you are coming from a C background. It abstracts away the technical details of using an HDL so the designer can focus on the algorithmic implementation.
There are restrictions on the type of C code you can write. For example, you can't use dynamically sized data structures, as they would infer dynamically sized hardware.

Real-time system proof-of-concept project

I'm taking an introductory course (3 months) on real-time systems design, but without any implementation.
I would like to build something that lets me better understand what I'll learn in theory, but since I have never built a real-time system, I can't estimate how long any project will take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some idea? Thank you in advance.
I program in T-SQL, Delphi and C#, but I won't have any problem learning another language.
Suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple:
1) Build a work generator that you can tune to consume a given amount of CPU time.
2) Put this into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread) and a mechanism for logging the work produced.
3) Produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority.
4) Demonstrate that tasks running in the context of real-time threads (as opposed to timesharing) behave differently.
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
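To show the shape of steps 1 and 2, here is a minimal sketch in plain Python (my own illustration on a vanilla OS, not RTSJ/JavaRTS; the task parameters are made up): a tunable CPU-burning work generator plus per-task logging of response time and slack, from which the charts and deadline-miss counts above can be produced.

    # Work generator: burn a requested amount of CPU time, then log how
    # the task did against its deadline (negative slack = deadline miss).
    import time, random

    def burn_cpu(seconds):
        # Busy-loop until roughly `seconds` of CPU time has been used.
        start = time.process_time()
        while time.process_time() - start < seconds:
            pass

    def run_task(work_s, deadline_s, log):
        release = time.perf_counter()
        burn_cpu(work_s)
        response = time.perf_counter() - release
        log.append({"work": work_s, "deadline": deadline_s,
                    "response": response, "slack": deadline_s - response})

    log = []
    for _ in range(20):
        run_task(random.uniform(0.005, 0.02), deadline_s=0.015, log=log)

    misses = sum(1 for r in log if r["slack"] < 0)
    print(f"{misses}/{len(log)} deadline misses")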
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare-metal, or use a real-time OS.
Going bare-metal, you can learn:
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
Going with an RT OS:
You'll fast-track your project, and you will be able to learn how to fine-tune an RT OS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most realtime systems are still implemented in C or C++ it may be good to brush up your knowledge of these programming languages. Many realtime systems are also embedded systems, so you might want to play around with a cheap open source one like BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross compiling etc.

Robot simulation environments

I would like to make a list of notable robot simulation environments, including their advantages and disadvantages. Some examples I know of are Webots and Player/Stage.
ROS will visualize your robot and any data you've recorded from it.
Packages to check out would be rviz and nav_view.
This made me remember the breve project.
breve is a free, open-source software package which makes it easy to build 3D simulations of multi-agent systems and artificial life.
There is also a wiki page listing Robotics simulators.
Microsoft Robotics Studio/Microsoft Robotics Developer Studio 2008
Also read this article on MSDN Magazine
It all depends on what you want to do with the simulation.
I do legged-robot simulation, so I am coming from a perspective that is different from mobile robotics, but...
If you are interested in dynamics, then one of the oldest but most difficult to use is SD/FAST. The company that originally made it was acquired by a large CAD outfit.
You might try heading to: http://www.sdfast.com/
It will cost you a bit of money, but I trust the accuracy of the simulation. There is no contact or collision model, so you have to roll your own. I have used it to simulate bipeds, swimming fish, etc. There is also no visualization, so it is for the hardcore programmer. However, it is well respected among us old folk.
The Open Dynamics Engine (http://www.ode.org/) is widely used for "easier" simulation. It comes with an integrator and a primitive visualization package. There are Python bindings (hurray for Python!).
The built-in friction model is... well, not very well documented, and it did not make sense to me. Also, the simulations can suddenly "fly apart" for no apparent reason. The results may or may not be accurate.
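For what it's worth, the Python bindings mentioned above (pyode) make ODE easy to try; a minimal falling-sphere sketch in the style of the pyode tutorial looks roughly like this (exact API details may vary with the version you install):

    # Drop a sphere under gravity with ODE's Python bindings (pyode).
    import ode

    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))

    body = ode.Body(world)
    mass = ode.Mass()
    mass.setSphere(2500.0, 0.05)       # density, radius
    body.setMass(mass)
    body.setPosition((0.0, 2.0, 0.0))

    dt = 0.01
    for step in range(100):
        world.step(dt)
        x, y, z = body.getPosition()
        print(f"t={step * dt:.2f}s  y={y:.3f}")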
Now, Maplesoft (in beautiful Waterloo, Canada) has come out with MapleSim. It will set you back a bit of money, but here is what I like about it:
It goes beyond just robotics. You can simulate virtually anything. I am sure you can simulate the suspension system of a car, gears, engines... I think it even interfaces with electrical circuit simulation. So, if you are building a high-performance product, then MapleSim is a strong contender. Go to www.maplesoft.com and search for it.
They are pretty nice about giving you an eval copy for 30 days.
Of course, you can go homebrew. You can derive the Lagrange-Euler equations of motion for most simple robots using a symbolic computation program like Maple or Mathematica.
EDIT: I have not been able to do certain derivatives elegantly in Maple; I had to resort to a hack.
However, be aware of speed issues.
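As an illustration of the homebrew route with free tools (using Python's sympy here instead of Maple or Mathematica), the Euler-Lagrange derivation for a simple pendulum looks like this; for a real robot you would do the same with more coordinates, which is where the speed issues show up:

    # Symbolic Euler-Lagrange derivation for a simple pendulum.
    import sympy as sp

    t, m, l, g = sp.symbols('t m l g', positive=True)
    theta = sp.Function('theta')(t)        # generalized coordinate
    theta_d = sp.diff(theta, t)            # angular velocity

    T = sp.Rational(1, 2) * m * (l * theta_d) ** 2   # kinetic energy
    V = -m * g * l * sp.cos(theta)                   # potential energy
    L = T - V                                        # Lagrangian

    # d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
    eom = sp.diff(sp.diff(L, theta_d), t) - sp.diff(L, theta)
    print(sp.simplify(eom))   # -> g*l*m*sin(theta(t)) + l**2*m*theta''(t)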
Finally, for more biologically motivated work, you might want to look at OpenSim (not to be confused with OpenSimulator).
EDIT: OpenSim shares a team member with SD/Fast.
There are lots of other specialized simulators. But beware.
In sum, here are my evaluation criteria for a simulator for robot-oriented work:
(1) What kind of collision model does it have? If it is a very stiff elastic collision model, you may have problems with numerical stability during collisions.
(2) Visualization: can you add different terrains, etc.?
(3) Handy graphical building tools, so you don't have to code and then see what you get.
Handling a complex system (say a full-scale humanoid) is hard to do in your head.
(4) What is the complexity of the underlying simulation algorithm? If it is O(N), that is great. But it could be O(N^4), as would be the case for a straight Lagrange-Euler derivation; then your system just will not scale, no matter how fast your machine is.
(5) How accurate is it and do you care?
(6) Does it help you integrate sensors? For mobile robots you need to have a "robot's-eye view".
(7) If it does visualization, can it do things like automatically following the object as it moves, or do you have to chase it around?
Hope that helps!
It's not as impressive looking as Webots, but RobotBasic is free, easy to learn, and useful for prototyping simple robot movement algorithms. You can also program a BasicStamp from the IDE.
I've been programming against SimSpark. It's the open-source simulation engine behind the RoboCup 3D Simulated Soccer League.
It's extensible for different simulations. You can plug in your own sensors, actuators and models using C++, Ruby and/or RSG (Ruby Scene Graph) files.
ABB has quite a solution called RobotStudio for simulating their huge industrial robots. I don't think it's free, and I doubt you'll get much fun out of it, but it's quite impressive. Here's a page about it.
I have been working with Carmen http://carmen.sourceforge.net/ and find it useful.
One of the disadvantages of Carmen is the documentation; with all due respect, I think the webpage is a bit outdated and insufficient. So I would like to hear from other people with experience working with Carmen, or about student reports/projects dealing with Carmen.
You can find a great list of simulation environments at http://www.intorobotics.com/robotics-simulation-softwares-with-3d-modeling-and-programming-support/
MRDS is one of the best, and it's free. LabVIEW is also good for use in robotics.
National Instruments' LabView is a graphical programming environment for developing measurement, test, and control systems.
It could be used for 3D control simulation with SolidWorks.
MRDS is free and is one of the best simulation environments for robotics. Workspace can also be used; please check this link if you want a complete list of robotics simulation software.
TRIK Studio has a nice and clear 2D model simulator and also visual and textual programming environments. They will also soon support 3D modeling tools based on the Morse simulator. It is free and open source and has a multi-language interface.