Take the example below:
model HelloWorld "A Simple Model"
  Real x(start = 1);
equation
  der(x) = -x;
  annotation (uses(Modelica(version="3.2")));
end HelloWorld;
I plan to write some code that transforms Modelica source into Java, but I cannot find the source code of special operators like der() and so on.
I mean that the example above could be written in Java in this format:
class HelloWorld {
    ModelicaReal x = new ModelicaReal(start, 1);
    public void run() {
        while (time...) {
            ...der(x)...
        }
    }
}
I want to process der(x) as a Java function call, but I must find the source code of the der() operator first so that I can transform it into a Java function. Is the source code for Modelica operators not in the Modelica Standard Library?
There is no source code for der(). (At least not like the one you are looking for.)
Why?
Because it is, as you said, an operator, not a function. What you are asking for is (almost) like asking for the source code of the + operator or connect.
I am sure you have come across constructs like that before. For example, C++ has sizeof(), which is not really a function but looks and acts like one.
der() is implemented by each simulator's integration method and provided to you as a built-in operator. It is not implemented as a library function like sin, cos, etc., and quite frankly it can't be: it is not evaluated the way you see it in the source code.
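To make that concrete, here is a rough sketch (in Python rather than Java, purely to keep it short) of what a fixed-step solver effectively does with der(x) = -x. Nothing here comes from any real tool; the names and the forward-Euler scheme are only illustrative:

    # Purely illustrative: what a fixed-step forward-Euler simulator
    # effectively does with the equation der(x) = -x. There is no
    # der() "call" anywhere; the derivative lives in the solver update.
    def simulate(x0=1.0, stop_time=5.0, step=0.01):
        t, x = 0.0, x0          # Real x(start = 1);
        while t < stop_time:
            dx = -x             # right-hand side of der(x) = -x
            x += step * dx      # Euler step: x(t+h) = x(t) + h*der(x)
            t += step
        return x

    print(simulate())  # ~0.0066, close to exp(-5)

In other words, a translator has to move the whole equation into a solver loop; there is no single Java method that der() could map to.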
I am not sure how you would go about translating it, but there are various ODE solvers and integrators out there. I hope someone will soon give you an alternative approach.
Just a friendly heads-up: it might not be as easy as you are approaching it now. You can't translate Modelica code (or any source code, for that matter) to another language line by line like that. That may be OK for translating Java to C/C++ or vice versa, since those languages are closely related and used for the same kind of programming paradigm, but Modelica is different.
It is easier if you stick to translating the algorithmic parts of Modelica and leave the equations out of it for now. Then you can go ahead with your current approach.
Good luck.
I'm browsing the Swift for TensorFlow code, and stumbled upon instances of
var result = #tfop("Mul", a, b)
#tfop is well explained in the doc here, in the sense of 'what it does', but I'm also interested in what it actually is from a language standpoint, or as a function implementation.
What does #tfop represent, besides a handle to the computation graph? Why the '#'? Where can I find the #tfop implementation if I want to? (I browsed the code, but no luck, although I can't guarantee that I didn't miss anything.)
Per Chris Lattner:
#tfop is a “well known” representation used for tensor operations. It is an internal implementation detail of our stack that isn’t meant to be user visible, and is likely to change over time.
In Swift, “#foo(bar: 42)” is the general syntax used for “macro like” and “compiler magic” operations. For example, C things like __FILE__ are spelled as #file in Swift:
https://github.com/apple/swift-evolution/blob/master/proposals/0034-disambiguating-line.md
And the “#line 42” syntax used by the C preprocessor is represented with arguments like this: #sourceLocation(file: "foo", line: 42)
In the case of #tfop specifically, this is represented in the Swift AST as an ObjectLiteralExpr, which is the normal AST node for this sort of thing:
https://github.com/google/swift/blob/tensorflow/include/swift/AST/Expr.h#L1097
We use special lowering magic to turn it into SIL builtin instructions in SILGen, which are prefixed with "__tfop_":
https://github.com/google/swift/blob/tensorflow/lib/SILGen/SILGenExpr.cpp#L3009
I’d like to move away from using builtin instructions for this, and introduce a first-class SIL instruction instead; that’s tracked by:
https://github.com/google/swift/issues/16
These instructions are specially recognized by the partitioning pass of GPE:
https://github.com/google/swift/blob/tensorflow/lib/SILOptimizer/Mandatory/TFUtilities.cpp#L715
source here
I need to rewrite the linkage function in MATLAB. As I examined it, I realized there is a method called linkagemex inside it, but I simply cannot step into this method to see its code. Can anyone help me out with this strange situation?
function Z = linkage(Y, method, pdistArg, varargin)
Z = linkagemex(Y, method);
PS. I think I am pretty good at learning, but MATLAB is not so easy to learn. If you have good references for learning it well, feel free to let me know. Thanks very much for your time and attention.
As @m.s. mentions, you've found a call to a MEX function. MEX functions are implemented as C code that is compiled into a function callable from MATLAB.
As you've found, you can't step into this method (as it is compiled C code, not MATLAB code), and you don't have access to the C source code, as it's not supplied with MATLAB.
Normally, you would be at kind of a dead end here. Fortunately, that's not quite the case with linkagemex. You'll notice on line 240 of linkage.m that it actually does a test to see whether linkagemex is present. If it isn't, it instead calls a local subfunction linkageold.
I think you can assume that linkageold does at least roughly the same thing as linkagemex. You may like to test them out with a few suitable input arguments to see if they give the same results. If so, then you should be able to rewrite linkage using the code from linkageold rather than linkagemex.
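To illustrate the shape of that dispatch, here is a toy Python sketch; every name in it is a stand-in for illustration, not the real MATLAB code:

    # Toy sketch of the pattern in linkage.m: call the compiled routine
    # when it exists, otherwise fall back to a plain reimplementation.
    def linkage_old(y, method):       # stand-in for the pure fallback
        return sorted(y)              # placeholder computation

    def linkage_mex(y, method):       # stand-in for the compiled fast path
        return sorted(y)              # same answer, faster in real life

    HAVE_MEX = True                   # linkage.m does a similar existence test

    def linkage(y, method="average"):
        impl = linkage_mex if HAVE_MEX else linkage_old
        return impl(y, method)

    # Sanity check that the two paths agree, as suggested above:
    assert linkage_mex([3, 1, 2], "avg") == linkage_old([3, 1, 2], "avg")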
I'm going to comment more generally, related to your PS. Over the last few days I've been answering a few of your questions - and you do seem like a fast learner. But it's not really that MATLAB is hard to learn - you should realize that what you're attempting (rewriting the clustering behaviour of phytree) is not an easy thing to do for even a very advanced user.
MathWorks write their stuff in a way that makes it (hopefully) easy to use - but not necessarily in a way that makes it easy for users to extend or modify. Sometimes they do things for performance reasons that make it impossible for you to modify, as you've found with linkagemex. In addition, phytree is implemented using an old style of OO programming that is no longer properly documented, so even if you have the code, it's difficult to work out what it even does, unless you happen to have been working with MATLAB for years and remember how the old style worked.
My advice would be that you might find it easier to just implement your own clustering method from scratch, rather than trying to build on top of phytree. There will be a lot of further headaches for you down the road you're on, and mostly what you'll learn is that phytree is implemented in an obscure old-fashioned way. If you take the opportunity to implement your own from scratch, you could instead be learning how to implement things using more modern OO methods, which would be more useful for you in the future.
Your call though, that's just my thoughts. Happy to continue trying to answer questions when I can, if you choose to continue with the phytree route.
You came across a MEX function; MEX functions "are dynamically linked subroutines that the MATLAB interpreter loads and executes". Since these subroutines are natively compiled, you cannot step into them. See also the MATLAB documentation about MEX functions.
I'm looking for a function in Julia to estimate coefficients for an ARMA process.
For example, something that implements the prediction error method, as pem and armax in MATLAB (part of the System Identification Toolbox) do. See the pem documentation and armax documentation.
I've looked at the following packages, but can't see that they do what I'm looking for:
TimeSeries.jl
TimeModels.jl
One solution is of course to use Matlab.jl and call the MATLAB functions, but I was hoping to do it all in Julia.
If there isn't anything right now, does anyone know if there are any good Julia functions for multidimensional numerical minimisation (like Newton-Raphson) that could be used for implementing a PEM function?
UPDATE: I've just pushed a module to GitHub called RARIMA.jl. This module can be used to estimate, forecast, and simulate ARIMA models (of which ARMA is a special case). Some of the functions are implemented in Julia; others (particularly estimation) call equivalent R functions using the RCall package, which you will need to install and verify works prior to using RARIMA. The package isn't officially registered (yet), so Pkg.add("RARIMA") won't work for now. If you want to use RARIMA, try Pkg.clone("https://github.com/colintbowers/RARIMA.jl") instead. If this fails, you can file an issue on the repository's GitHub page, but be sure to check that RCall is installed and working before doing so. Cheers, I'll come back and update here if/when the package is officially registered.
ORIGINAL ANSWER: I just had a glance at the source, and TimeModels does not appear to have any functionality for estimating ARIMA models, although it does have one function for simulating them. Given time, though, I suspect this will be the package that deals with ARIMA modelling. The TimeSeries package is more about building the TimeSeries object type than implementing time series models, so I would be surprised if ARIMA modelling were ever merged into that package.
As near as I can tell, at this point if you want a fully functioning ARIMA package you'll need to use MATLAB or R. The R option is very good (see the forecast package written by Rob Hyndman - it is very nice) and is probably easier to interface with from Julia than the MATLAB option. Of course, the other option is to start one yourself and merge the code into the TimeModels package :-)
In terms of optimization procedures, Julia has a fair few that are written in Julia, and they can be found under the JuliaOpt umbrella. The Optim package in particular is quite popular and well developed. However, most of the people I know who are really into this stuff use NLopt, which is a free, open-source library callable from many languages (including Julia). I have heard nothing but good things about this library from people who work with this stuff 24/7.
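If you do end up rolling your own, the core of PEM for an ARMA model is just nonlinear least squares on the one-step prediction errors, so any of those optimizers would do. Here is a minimal sketch of the idea, written in Python/SciPy purely for illustration (a Julia version built on Optim would have the same shape):

    import numpy as np
    from scipy.optimize import minimize

    def arma11_prediction_errors(params, y):
        # One-step prediction errors for y[t] = a*y[t-1] + e[t] + c*e[t-1]
        a, c = params
        e = np.zeros_like(y)
        for t in range(1, len(y)):
            e[t] = y[t] - a * y[t - 1] - c * e[t - 1]
        return e

    def pem_arma11(y):
        # Minimise the sum of squared prediction errors (the PEM criterion).
        obj = lambda p: np.sum(arma11_prediction_errors(p, y) ** 2)
        return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x

    # Simulate an ARMA(1,1) with a = 0.7, c = 0.3, then recover the parameters.
    rng = np.random.default_rng(0)
    e = rng.standard_normal(2000)
    y = np.zeros(2000)
    for t in range(1, 2000):
        y[t] = 0.7 * y[t - 1] + e[t] + 0.3 * e[t - 1]
    print(pem_arma11(y))  # roughly [0.7, 0.3]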
A friend of mine needs to implement some statistical calculations in hardware.
She wants it to be accomplished using VHDL.
(cross my heart, I haven't written a line of code in VHDL and know nothing about its subtleties)
In particular, she needs a direct analogue of MATLAB's betainc function.
Is there a good package around for doing this?
Any hints on the implementation are also highly appreciated.
If it's not a good idea at all, please tell me about it as well.
Thanks a lot!
There isn't a core available in the Xilinx toolset that computes the incomplete beta function. I can't speak for the other toolsets available, although I doubt that there is such a thing.
What Xilinx does offer is a set of signal processing blocks, like multipliers, adders and RAM Blocks (amongst other things, filters, FFTs), that can be used together to implement various custom signal transforms.
In order for this to be done, there needs to be a complete understanding of the inner workings of the transform to be applied.
A good first step is to implement the function "manually" in MATLAB as a proof of concept:
Instead of using the built-in function in MATLAB, your friend can try to implement the function using only fundamental operators like multipliers and adders.
The results can be compared with those produced by the built-in function for verification.
The concept can then be moved to VHDL using the building blocks that are provided.
Doing this for the incomplete beta function isn't something for the faint-hearted, but it can be done.
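For the "manual" proof-of-concept step, here is a rough Python sketch of the regularized incomplete beta built from elementary operations (a continued-fraction evaluation in the style of Numerical Recipes), checked against a library routine. It is illustrative only, not production-grade numerics:

    import math
    from scipy.special import betainc as betainc_ref  # reference for checking

    def betacf(a, b, x, max_iter=200, eps=3e-12, tiny=1e-30):
        # Continued fraction for the incomplete beta (modified Lentz method).
        def clamp(v):
            return v if abs(v) > tiny else tiny
        qab, qap, qam = a + b, a + 1.0, a - 1.0
        c = 1.0
        d = 1.0 / clamp(1.0 - qab * x / qap)
        h = d
        for m in range(1, max_iter + 1):
            m2 = 2 * m
            aa = m * (b - m) * x / ((qam + m2) * (a + m2))           # even step
            d = 1.0 / clamp(1.0 + aa * d)
            c = clamp(1.0 + aa / c)
            h *= d * c
            aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))  # odd step
            d = 1.0 / clamp(1.0 + aa * d)
            c = clamp(1.0 + aa / c)
            delta = d * c
            h *= delta
            if abs(delta - 1.0) < eps:
                break
        return h

    def betainc_manual(x, a, b):
        # Regularized incomplete beta I_x(a, b); MATLAB's betainc(x, a, b) order.
        if x == 0.0 or x == 1.0:
            return x
        front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                         + a * math.log(x) + b * math.log1p(-x))
        if x < (a + 1.0) / (a + b + 2.0):
            return front * betacf(a, b, x) / a
        return 1.0 - front * betacf(b, a, 1.0 - x) / b

    # SciPy uses the argument order betainc(a, b, x); both print ~0.5248.
    print(betainc_manual(0.4, 2.0, 3.0), betainc_ref(2.0, 3.0, 0.4))

Note that the divisions in the continued fraction are the expensive part in hardware; that is where the "not for the faint-hearted" warning bites.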
As far as I know there is no tool which allows interfacing VHDL and MATLAB.
But interfacing VHDL and C is fairly easy, so if you can implement your code (MATLAB's betainc function) in C, then it can be done easily with an FLI (foreign language interface).
If you are using ModelSim, the link below can be helpful.
link
First of all a word of warning, if you haven't done any VHDL/FPGA work before, this is probably not the best place to start. With VHDL (and other HDL languages) you are basically describing hardware, rather than a sequential line of commands to execute on a processor (as you are with C/C++, etc.). You thus need a completely different skill- and mind-set when doing FPGA-development. Just because something can be written in VHDL, it doesn't mean that it actually can work in an FPGA chip (that it is synthesizable).
With that said, Xilinx (one of the major manufacturers of FPGA chips and development tools) does provide the System Generator package, which interfaces with Matlab and can automatically generate code for FPGA chips from this. I haven't used it myself, so I'm not at all sure if it's usable in your friend's case - but it's probably a good place to start.
The System Generator User guide (link is on the previously linked page) also provides a short introduction to FPGA chips in general, and in the context of using it with Matlab.
You COULD write it yourself. However, the incomplete beta function is an integral. For many values of the parameters (as long as both are greater than 1) it is fairly well behaved. However, when either parameter is less than 1, a singularity arises at an endpoint, making the problem a bit nasty. The point is, don't write it yourself unless you have a solid background in numerical analysis.
Anyway, there are surely many versions in C available. Netlib must have something, or look in Numerical Recipes. Or compile it from MATLAB. Then link it in as nav_jan suggests.
As an alternative to VHDL, you could use MyHDL to write and test your beta function - it can produce synthesisable (i.e. able to go into an FPGA chip) VHDL (or Verilog, as you wish) from its back end.
MyHDL is an extra set of modules on top of Python which allows hardware to be modelled, verified and generated. Python will be a much more familiar environment to write validation code in than VHDL (which is missing many of the abstract data types you might take for granted in a programming language).
The code under test will still have to be written with a "hardware mindset", but that is usually a smaller piece of code than the test environment, so in some ways less hassle than figuring out how to work around the verification limitations of VHDL.
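For a flavour of what that looks like, here is a minimal MyHDL sketch (it assumes the myhdl package is installed, and the block is deliberately trivial - nothing like a real betainc core):

    from myhdl import block, always_comb, Signal, intbv

    @block
    def add_const(y, x, k=3):
        # Trivial combinational block: y = x + k.
        @always_comb
        def logic():
            y.next = x + k
        return logic

    x = Signal(intbv(0, min=0, max=256))
    y = Signal(intbv(0, min=0, max=260))
    inst = add_const(y, x)
    inst.convert(hdl='VHDL')  # emits add_const.vhd for synthesis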
I realize that it is impossible to have one language that is best for everything.
But there is a class of simple programs, whose source code looks virtually identical in any language.
I am thinking not just "hello world", but also arithmetic, maybe string manipulation - the basic stuff that you would typically see in utility classes.
I would like to keep my utilities in this meta-language and have it automatically translated to a bunch of popular languages. I do this by hand right now.
Again, I do not ask for the translation of every possible program. I am thinking of a very limited, simple, but super-portable language.
Do you know of anything like that? Is there a reason why it should not exist?
Check out Haxe and its Wikipedia page. It's open source, and its main purpose is exactly what you describe: generating code in many languages from a single source.
Just about any language that you choose is going to have some feature that doesn't map to another in a natural way. The closest thing I can think of is probably a useful subset of JavaScript. Of course, if you are the language author you can limit it as much as you want, providing only constructs that are common to just about any language (loops, conditionals, etc.)
For purposes of mutability, an XML representation would be best, but you wouldn't want to code in it.
If you find that there is no universal language, you can try a pragmatic model-driven development approach, using a template-based code generator.
In the template you keep the underlying concepts of an algorithm. Then, you would add code for this algorithm in one or more specific languages (C++, Java, JS, Python) when necessary. You would have to do that anyway, whatever language or approach you choose. A configuration switch picks the correct language for any template you apply.
AtomWeaver is a code generator that works with templates and employs ABSE as the modeling approach.
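As a toy illustration of the template idea in Python (nothing to do with AtomWeaver itself; the templates and the model format are invented for the example):

    # Toy template-based generator: one abstract "model" of a function,
    # rendered into several target languages. Purely illustrative.
    TEMPLATES = {
        "python": "def {name}({arg}):\n    return {expr}\n",
        "java":   "static int {name}(int {arg}) {{ return {expr}; }}\n",
        "js":     "function {name}({arg}) {{ return {expr}; }}\n",
    }

    def generate(model, language):
        # The configuration switch: pick the template for the target language.
        return TEMPLATES[language].format(**model)

    square = {"name": "square", "arg": "x", "expr": "x * x"}
    for lang in TEMPLATES:
        print(generate(square, lang))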
I did some looking and found this:
https://www.indiegogo.com/projects/universal-programming-language
It looks interesting.
Classic Pascal is very simple. Oberon is another similar option. Or you could invent your own derivative language, similar to the pseudocode from computer science textbooks. It is trivial to implement a translator from one of those languages into any decent modern imperative language.