Is there a supported way to have an Ada program call out to GNAT to compile a source file, and then load the result dynamically?
(By 'supported' I mean: better than shelling out to gnatmake.)
Background: I have a program whose configuration files contain code. Right now this is handled by an LLVM-based library, written in C++, which I call from Ada via C bindings: it loads the configuration files and JIT-compiles them into (commendably fast) machine code, which I then call from Ada.
This is horrible.
It would be far cleaner to simply write the configuration in Ada, and compile and link it against the program. But I don't want to force the end user to have to set up a build system. What I'd like is for the end user to just point my program at the configuration files, and then have the program compile and dynamically load them invisibly, without the user needing to care.
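To make the shape of it concrete, here is roughly what the hand-rolled version would look like. This is a minimal sketch only: the file names, flags, and the configure entry-point symbol are invented, error handling is omitted, and real code would also have to run the binder and call adainit before touching the loaded Ada code.

    with GNAT.OS_Lib;          use GNAT.OS_Lib;
    with Interfaces.C;         use Interfaces.C;
    with Interfaces.C.Strings; use Interfaces.C.Strings;
    with System;

    procedure Load_Config is
       type Entry_Point is access procedure;
       pragma Convention (C, Entry_Point);

       --  Thin imports of the POSIX dynamic loader.
       function dlopen (Path : chars_ptr; Mode : int) return System.Address;
       pragma Import (C, dlopen, "dlopen");

       function dlsym (Handle : System.Address; Name : chars_ptr)
         return Entry_Point;
       pragma Import (C, dlsym, "dlsym");

       RTLD_NOW : constant int := 2;  --  Value from glibc's dlfcn.h.

       Args : Argument_List :=
         (1 => new String'("-c"),
          2 => new String'("-fPIC"),
          3 => new String'("config.adb"));
       OK     : Boolean;
       Handle : System.Address;
       Init   : Entry_Point;
    begin
       --  Step 1: shell out to gnatmake (assumed to be on PATH).  The
       --  bind-and-link step that actually produces libconfig.so is
       --  elided here; it is the fiddly part.
       Spawn (Locate_Exec_On_Path ("gnatmake").all, Args, OK);

       --  Step 2: load the shared object and jump to its entry point.
       Handle := dlopen (New_String ("./libconfig.so"), RTLD_NOW);
       Init   := dlsym (Handle, New_String ("configure"));
       Init.all;
    end Load_Config;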
This sounds like exactly the sort of thing that someone's already done. Have they? (I only care about Unixoids like Linux, if that helps.)
I don't get how a standard library like libc is linked. I use the MinGW compiler.
I see that there is no libc.dll file in its bin folder. Then how is libc linked?
How does the compiler know the difference between a custom and a dynamic library?
We use build tools because they are a practical way to compile code and create executables, deployables, etc.
For example, consider a large Java application consisting of thousands of separate source files split over multiple packages. Suppose it has a number of JAR file dependencies: some for libraries that you have developed, and others for external libraries that can be downloaded from standard places.
You could spend hours manually downloading external JAR files and putting them in the right place. Then you could manually run javac for each of the source files, and jar multiple times, each time in the correct directory with the correct command-line arguments ... and in the correct order.
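Concretely, the by-hand version is a pile of commands along these lines (a sketch with invented paths, not a real project):

    # Compile each package in dependency order, by hand:
    javac -cp lib/ext.jar -d build src/com/example/util/*.java
    javac -cp lib/ext.jar:build -d build src/com/example/app/*.java
    # ...repeat for every package...

    # Then package the result, again by hand:
    jar cf dist/app.jar -C build .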
And each time you change the source code ... repeat the relevant parts of the above process.
And make sure you don't make a mistake that will cause you to waste time tracking down test failures, runtime errors, etc. caused by not building correctly.
Or ... you could use a build tool that takes care of it all. And does it correctly each time.
In summary, the reasons we use build tools are:
They are less work than doing it by hand.
They are more accurate.
The results are more reproducible.
I want to know why the compiler can't do it?
Because compilers are not designed to perform the entire build process. Just like your oven is not designed to cook a three-course meal and serve it up at the dinner table.
A compiler is (typically) designed to compile individual source code files. There is more to building than doing that.
So, I have a project where I need to program a real-time system on the micro:bit using Ada: https://blog.adacore.com/ada-on-the-microbit
I've come across a problem: using the arm-elf library and compiler, I seem to lose access to all the Ada standard libraries. That is, the only one I can use is Ada.Text_IO; all others can't seem to be found by the IDE.
I want to debug my code by printing the data I'm receiving from the accelerometer, but it's a number, and Ada.Text_IO only works with strings, so I tried to use Ada.Integer_Text_IO, which was not found.
But if I switch the project settings to the native Ada compiler, I can compile and build my code (which means the code is correct), though then I'm missing the button to flash it to the micro:bit.
Well, the runtime provided for the micro:bit is a ZFP, which means Zero Footprint runtime.
So you shouldn't expect all of the standard library to be implemented... expect instead that there's nothing :)
In fact, you only have what exists in the Ada Drivers Library.
Moreover, what would IO even be on such a microcontroller? Where do you expect it to output?
If you want to output something, take a look at this example and use the 'Image attribute of your number.
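For instance, a minimal sketch, where X stands in for whatever your accelerometer driver actually returns:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Debug_Accel is
       X : Integer := -42;  --  placeholder for the accelerometer reading
    begin
       --  'Image converts a scalar to a String, so Put_Line is enough.
       Put_Line ("acc x =" & Integer'Image (X));
    end Debug_Accel;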
I would like to write some pieces of code in Scala. It is important for me that this code can be called from Matlab. AFAIK Java code can easily be integrated with Matlab: http://www.mathworks.co.uk/help/matlab/ref/javamethod.html
Is this also valid for Scala code?
It is also valid for Scala code. Just pretend that you're doing Java interop (e.g. a method called + would actually be $plus) and make sure scala-library.jar is in Matlab's classpath (e.g. using javaaddpath).
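To make that concrete, here is a minimal sketch (class and method names invented). A top-level Scala object gets static forwarder methods in its compiled class file, so Matlab can call it like a static Java method:

    // Greeter.scala -- compile and package into Greeter.jar
    object Greeter {
      // Visible to Java (and Matlab) as static method Greeter.greet(String)
      def greet(name: String): String = "Hello, " + name
    }

From Matlab, after javaaddpath('scala-library.jar') and javaaddpath('Greeter.jar'), something like javaMethod('greet', 'Greeter', 'world') should then work.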
Incidentally, I've done Java/Matlab interop before, and it's not as "easily integrated" as one might hope. Passing data is kind of awkward and you tend to trip on classloader and/or classpath issues every now and then.
I'd be wary of planning a big project with lots of tightly-connected interop. In my experience it usually works better using the Unix mindset: make independent tools that do their own thing well, and chain them together to get what you want. For instance, I routinely have Scala write Matlab code and call Matlab to run it, or have Matlab use system to fire up a Scala program to process a subset of the data.
So: calling Scala from Matlab is completely possible, and for simple interfaces it looks just like Java. Just try to keep the interface simple.
You could encapsulate your Scala code in another language that's easily called from Matlab, such as Java or C. Since you seem to know about Java, you could use these examples to learn to encapsulate your Scala methods inside Java methods that you can call from Matlab. A number of other approaches are available to you with different languages (MEX files, loadlibrary, etc.).
I have been on a "cleaning spree" lately at work, doing a lot of touch-up stuff that should have been done a while ago. One thing I have been doing is deleting modules that were imported into files and never used, or were used at one point but aren't anymore. To do this I have just been deleting an import and running the program's test file, which gets really, really tedious.
Is there any programmatic way of doing this? Short of me writing a program myself to do it.
Short answer: you can't.
Longer, possibly more useful answer: you won't find a general-purpose tool that will tell you with 100% certainty whether the module you're purging is actually used. But you may be able to build a special-purpose tool to help with the manual search you're currently doing on your codebase. Maybe try a wrapper around your test suite that removes the use statements for you and ignores any error messages except ones like Undefined subroutine &__PACKAGE__::foo and other messages that occur when accessing missing features of a module. The wrapper could then perform a dumb source scan of the module being purged to see whether the missing subroutine foo (or other feature) might be defined in that unwanted module.
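A rough sketch of what such a wrapper could look like (the module name, source path, and error patterns are invented; this assumes a prove-runnable test suite):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $module = 'Foo::Bar';        # the module being purged
    my $source = 'lib/My/App.pm';   # a file that uses it

    # Slurp the source and comment out the suspect use statement.
    my $code = do { local (@ARGV, $/) = ($source); <> };
    (my $patched = $code) =~ s/^(\s*use\s+\Q$module\E\b)/# $1/mg;
    open my $fh, '>', $source or die $!;
    print {$fh} $patched;
    close $fh;

    # Rerun the suite, keeping only the interesting failures.
    my $out = `prove -l t 2>&1`;
    print grep { /Undefined subroutine|Can't locate object method/ }
          split /^/m, $out;

    # Restore the original source.
    open $fh, '>', $source or die $!;
    print {$fh} $code;
    close $fh;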
You can supplement this with Devel::Cover to determine which parts of your code don't have tests so you can manually inspect those areas and maybe get insight into whether they are using code from the module you're trying to purge.
Due to the halting problem, you can't statically determine whether any program of sufficient complexity will exit or not. This applies to your problem because the "last" instruction of your program might be the one that uses the module you're purging. And since it is impossible to determine what the last instruction is, or whether it will ever be executed, it is impossible to statically determine whether that module will be used. Further, in a dynamic language, which can extend the program during its run, analysis of the source or even of the post-compile symbol tables would only tell you what was calling the unwanted module just before run time (whatever that means).
Because of this you won't find a general-purpose tool that works for all programs. However, if you are positive that your code doesn't use certain run-time features of Perl, you might be able to write a tool suited to your program that can determine whether code from the module you're purging will actually be executed.
You might create alternative versions of the modules in question that contain only an AUTOLOAD method (and an import; both are sketched below). Make this AUTOLOAD method croak on use, and put this module first in the include path.
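A minimal sketch of such a decoy for a hypothetical suspect module Foo::Bar (put it in a directory that precedes the real one in the include path, e.g. via perl -I):

    # Foo/Bar.pm -- the decoy that shadows the real module
    package Foo::Bar;
    use strict;
    use warnings;
    use Carp ();

    # Croak as soon as anything says "use Foo::Bar;".
    sub import { Carp::croak('Foo::Bar is still imported by ' . caller) }

    our $AUTOLOAD;
    sub AUTOLOAD {
        return if $AUTOLOAD =~ /::DESTROY\z/;   # ignore object destruction
        Carp::croak("$AUTOLOAD is still called");
    }

    1;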
You might refine this method by making AUTOLOAD only log the usage and then load the real module and forward the original function call. You could also put a subroutine first in @INC which creates the fake module on the fly if necessary.
Of course, you need good test coverage to detect even rare uses.
This concept is definitely not perfect, but it might work with lots of modules and simplify the testing.
I have a function written in C (say, in a HelloWorld.c file).
I want to compile this and create a static library, HelloWorld.a.
Finally, I need to call this from a Perl program (HelloWorld.pl).
To call from Perl to C, one usually compiles a shared, not a static, library from the C code, and then loads that into the perl interpreter using the XSLoader or DynaLoader module.
To then be able to call the C code from perl space there's many ways. The most common one is writing something called XSUBs, which have a perl-side interface, map the perl calling-conventions to C calling-conventions, and call the C functions. Those XSUBs are usually also linked into the shared library that'll be loaded into perl, and written in a language called XS, which is extensively documented in perlxs and perlxstut.
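For a taste of what that looks like, here is a minimal hand-rolled XSUB (module name and layout invented; perlxstut walks through the real h2xs-generated scaffolding):

    /* HelloWorld.xs -- sketch of a minimal XSUB.  The xsubpp
       preprocessor turns this into C that maps Perl's stack onto
       the C calling convention. */
    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"

    /* An ordinary C function, as it might appear in HelloWorld.c */
    int add(int a, int b) { return a + b; }

    MODULE = HelloWorld    PACKAGE = HelloWorld

    int
    add(a, b)
        int a
        int b

Once built with the usual Makefile.PL machinery, Perl code can simply say use HelloWorld; and call HelloWorld::add(2, 3).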
There are also other ways to build that wrapper layer, such as various XS code generators, as well as SWIG. But you could also call the C functions directly using an NCI (native call interface); Perl has many of those: P5NCI is one example, and the ctypes module developed in this year's Google Summer of Code program is another.
Another related technique that should probably be mentioned here is Inline::C, and the other modules of the Inline family. They allow you to write code in other languages directly in perl and call into it. Under the hood, Inline::C just builds XS code and loads the result into the interpreter.
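For comparison, a sketch of the Inline::C route (assuming Inline::C and a working C compiler are installed):

    # Inline::C compiles the embedded C on first run, caches the
    # result, and binds add() into the current package.
    use Inline C => <<'END_C';
    int add(int a, int b) { return a + b; }
    END_C

    print add(2, 3), "\n";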
As @rafl says, you should use a shared library.
If you must use a static library, then you have to rebuild Perl with the static library built in. You'll need some XS glue too. However, this is messy enough that you really, really don't want to do it.
According to perlxstut:
It is commonly thought that if a system does not have the capability to dynamically load a library, you cannot build XSUBs. This is incorrect. You can build them, but you must link the XSUBs subroutines with the rest of Perl, creating a new executable. This situation is similar to Perl 4.
This tutorial can still be used on such a system. The XSUB build mechanism will check the system and build a dynamically-loadable library if possible, or else a static library and then, optionally, a new statically-linked executable with that static library linked in.
Should you wish to build a statically-linked executable on a system which can dynamically load libraries, you may, in all the following examples, where the command "make" with no arguments is executed, run the command "make perl" instead.
If you have generated such a statically-linked executable by choice, then instead of saying "make test", you should say "make test_static". On systems that cannot build dynamically-loadable libraries at all, simply saying "make test" is sufficient.