Is there an equivalent of march=native in the Crystal compiler? - compiler-optimization

GCC and Clang support a compiler option named -march=native, which is handy if you want to optimize for the current machine's architecture. The resulting binary might not be portable, but that is OK if it will only be executed on the same machine.
I wondered whether the Crystal compiler supports it. I can see the options --mcpu, --mattr, and --mcmodel, which might be what I need. Unfortunately, I could not find much information about them.
Is there a recommended way in Crystal to optimize for the current machine? Ideally, it should figure out the available CPU instructions automatically (like -march=native).
Background: How to see which flags -march=native will activate?

The Crystal compiler doesn't support -march. Maybe that should be added. From what I hear, there's often no clear separation between -mcpu and -march.
As a workaround, you could ask the compiler to emit LLVM IR or bytecode. That allows you to compile the binary with LLVM tools directly, which would give you full access to LLVM options like -march.
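A sketch of that workaround (hello.cr is a placeholder, and the exact linker flags needed for the final executable depend on your platform and Crystal version):
$crystal build --emit llvm-ir hello.cr
(this writes hello.ll next to the normal binary)
$clang -O2 -march=native hello.ll -o hello_native -lgc -lpcre -levent -lpthread -lm
(this recompiles the IR for the host CPU; the -l flags are typical guesses, so adjust them to whatever crystal itself links on your system)
Note also that --mcpu is handed to LLVM's code generator, so crystal build --mcpu native may already give you host-tuned code without the round trip through clang, if your Crystal version resolves "native" to the host CPU.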

Related

How can I compile, run AML with ACPI, or how should I utilize ACPI functions?

I'm trying to write a kernel and plan to use ACPI for several tasks (e.g. identifying the interrupt source on the APIC).
However, I'm really a beginner at this; I have read the related documentation and still don't have any clue how to configure and use ACPI functions.
My basic understanding is that:
1. Some ACPI tables are mapped into the memory space, among which the DSDT and SSDT provide definition blocks.
2. The definition blocks are AML code.
3. I can retrieve some information directly from the ACPI tables (e.g. the I/O APIC base address).
4. Further information sometimes requires running ACPI objects.
That is basically my understanding of ACPI. However, how should I use the AML code, and how should I run ACPI objects? I don't have a clue.
So, can anyone provide a basic picture of how this mechanism works, and how an OS can realize the basic functions that ACPI provides?
Thanks a lot! I'll keep reading the documentation and try to find something that helps me understand it.
My advice is:
a) If you're a beginner, implement support for "PIC chips" while taking into account future support for things like IO APIC and MSI but not implementing that support yet (e.g. just dummy stubs, etc); and then worry about adding support for IO APICs (and MSI) and ACPI later (e.g. after most of your OS has been done, including device drivers, file systems, etc). Note that this is a big part of why I advocate a "kernel tells device driver which resources it should use" approach (rather than a "device driver tells the kernel which resources it wants" approach) - so you can add support for IO APIC and MSI later without touching any of the code for any of the device drivers.
b) For ACPI's AML; it's a nasty festering mess. Specifically, the OS has to tell AML what the OS is (e.g. using an \_OS object in AML to tell AML the operating system's name); if the OS isn't recognized by the computer's AML, then the AML will typically fall back to a crippled "bare minimum functionality" mode, and the AML for lots of computers will only recognize (various versions of) Windows. The result is that to use the full functionality provided by AML, your OS has to pretend that it is (a version of) Windows, and has to have the same behaviour as that version of Windows, which is not well documented (e.g. not included in the ACPI specs at all) and not easily discovered by "trial and error" techniques. If that's not bad enough, various computers have buggy AML, and you need "who knows how many" workarounds for these bugs. The most practical way to work around this problem is to rely on well-tested code written by other people. More specifically, you will probably want to port ACPICA (see https://acpica.org/ ), which is an open-source, OS-independent implementation of ACPI that includes an AML interpreter and hides/abstracts a lot of the pain.
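To give a flavour of what using ACPICA looks like once ported, here is a minimal sketch of its documented initialization sequence (it assumes you have already implemented the AcpiOs* OS services layer that ACPICA calls back into for memory mapping, locking, printing, and so on):

#include "acpi.h"

/* Minimal ACPICA bring-up; error handling collapsed for brevity. */
ACPI_STATUS init_acpica(void)
{
    ACPI_STATUS status;

    status = AcpiInitializeSubsystem();              /* core data structures */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiInitializeTables(NULL, 16, FALSE);  /* find the RSDP, index the tables */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiLoadTables();                       /* parse DSDT/SSDTs into the namespace */
    if (ACPI_FAILURE(status)) return status;

    status = AcpiEnableSubsystem(ACPI_FULL_INITIALIZATION);   /* switch to ACPI mode */
    if (ACPI_FAILURE(status)) return status;

    /* runs _INI and friends; afterwards AML objects can be evaluated */
    return AcpiInitializeObjects(ACPI_FULL_INITIALIZATION);
}

Static facts such as the I/O APIC base address still come straight from a table (the MADT, via AcpiGetTable); evaluating named AML objects with AcpiEvaluateObject is only needed for the dynamic parts.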
If you are working with Linux, try the following (as root); it will give you a good start (you should install your distro's relevant package, e.g. acpica-tools):
$acpidump > dump.bin
$acpixtract -x dump.bin
(this will create a binary file for each table in the initial dump file; lots of ".dat" files)
$iasl -d *.dat
(this will disassemble the binary files to human readable format)
You can also download Intel's implementation of the iasl compiler from GitHub (look it up; it is very easy to compile).

Ada on microbit, GNAT

So, I have a project where I need to program a real-time system on the micro:bit using Ada: https://blog.adacore.com/ada-on-the-microbit
I've come across a problem: when using the arm-elf library and compiler, I seem to lose access to all the Ada base libraries. That is, the only one I can use is Ada.Text_IO; all the others can't seem to be found by the IDE.
I want to debug my code by printing the data I'm receiving from the accelerometer, but it's a number, and Ada.Text_IO only works with strings, so I tried to use Ada.Integer_Text_IO, which was not found.
But if I switch the project settings to the base Ada compiler, I can compile and build my code (which means the code is correct); however, I'm then missing the button to flash it to the micro:bit.
Well, the runtime provided for the micro:bit is a ZFP, which means Zero FootPrint runtime.
So you shouldn't expect all of the standard library to be implemented... rather, expect that there's almost nothing :)
In fact, you only have what exists in the Ada drivers library.
Moreover, what would IO even mean on such a microcontroller? Where do you expect it to output?
If you want to output something, take a look at this example and use the 'Image attribute of your number (e.g. Integer'Image (Value)).

Using Inline::CPP vs SWIG - when?

In this question I saw two different answers about how to directly call functions written in C++:
Inline::CPP (and here are more, like Inline::C, Inline::Lua, etc..)
SWIG
Handmade (as daxim said, the majority of modules are handwritten)
I just browsed nearly all the questions on SO tagged [perl][swig] looking for answers to the following questions:
What are the main differences to consider when choosing between SWIG, Inline::CPP, and handwritten bindings?
When is it good practice (recommended) to use Inline::CPP (or Inline::C), and when is it recommended to use SWIG or handwritten bindings?
As I think about it, SWIG is more universal, since it serves other uses too (as asked in this question), while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of a similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers. And those last 6% almost always boil down to issues of the installation process not correctly deciphering which C++ compiler and libraries to use. ...and of those, it usually comes down to the libraries.
Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into that same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (it will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive to specify a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL-type parameters and hope that Inline::CPP will know how to convert them. It deals with POD (basic data types), not STL stuff. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know which type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods, not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
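As a concrete illustration of that wrapper idea (the class and its method names here are invented for the example): give the template a fixed type and expose only plain signatures.

#include <vector>

// A concrete, non-template facade over std::vector<int>.
// Inline::CPP can bind this class because every signature it
// sees involves plain types (int), never template parameters.
class IntStack {
    std::vector<int> data;   // the template stays a private detail
public:
    IntStack() {}
    void push(int value) { data.push_back(value); }
    int  pop()           { int v = data.back(); data.pop_back(); return v; }
    int  size()          { return (int)data.size(); }
};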
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages to Inline::CPP over Swig is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home already. With Swig, you'll have to learn the Swig way of doing things first, and then figure out how to apply that to Perl, and possibly, how to do it in a way that is CPAN-distributable.
Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used Swig with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with Swig.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and we do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof-of-concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer, and author of InlineX::CPP2XS has actually included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.

What's the state of compilers that generate X86 assembly today?

Whenever I talk to people who work on real-time performance, they tend to point out that the generated X86 assembly instructions are not that efficient.
With things like VMX on the horizon, I have to ask: how likely is it that commercial C++ compilers will utilize these instruction sets? I get the feeling that compiler vendors don't emit particularly fancy assembly or focus on keeping their compilers up to date.
And for that matter, what constitutes good X86 assembly in the first place?
The guys you're talking to must be performance nuts. Most modern compilers will generate very efficient code that makes use of branch-prediction and pipeline-stall tables and a host of optimisation techniques. They will generally emit better code than all but the smartest programmers can match. There are oddball exceptions, which is why it's nice to have __asm and intrinsics on standby, but the situations in which these prove necessary (and helpful) are few and far between these days.
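For the rare hot spot where you really can beat the compiler, intrinsics are usually the saner choice over raw __asm. A hypothetical sketch using SSE (the function name is invented, and whether it actually beats the plain loop is exactly the kind of claim you have to measure):

#include <immintrin.h>   /* SSE intrinsics */

/* Sum an array four floats at a time; leftover elements
   that don't fill a whole vector are added scalar-wise. */
float sum_sse(const float *a, int n)
{
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));   /* unaligned 4-float load */
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)                                /* scalar tail */
        sum += a[i];
    return sum;
}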
"Good assembly" means that the compiled program utilizes resources optimally. There's a wisdom "write code in clear manner and let the compiler do the optimizations". For this wisdom to hold true compilers go to great extent to generate really fast code.
From my experience Visual C++ often produces surprisingly consice code for complex looking C++ construct, so the idea that compiler vendors don't care about code emisson is not that true.

Code generation for Java JVM / .NET CLR

I am taking a compilers course at college, and we must generate code for our invented language for any platform we want. I think the simplest case is generating code for the Java JVM or the .NET CLR. Any suggestions on which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
Thank you
From what I know, at a higher level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tail calls. So, if the implementation of your language needs any of the above (e.g. the Scheme spec mandates tail calls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
The other advantage there is that you get a stock API to emit bytecode, System.Reflection.Emit; even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability and the fact that the bytecode itself is arguably simpler (because it has fewer features).
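To make the "classic stack-based machines" point concrete, here is roughly how x = a + b * c (integer locals in slots 0 to 3) comes out on each VM. These are illustrative listings, not complete method bodies:

JVM bytecode:
    iload_0    ; push a
    iload_1    ; push b
    iload_2    ; push c
    imul       ; b * c
    iadd       ; a + (b * c)
    istore_3   ; x = result

CIL (CLR):
    ldloc.0    // push a
    ldloc.1    // push b
    ldloc.2    // push c
    mul        // b * c
    add        // a + (b * c)
    stloc.3    // x = result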
Another option I came across is a library called RunSharp, which can generate MSIL code at runtime using Emit, but in a nicer, more user-friendly way that feels more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the Reflection.Emit namespace to generate MSIL code.
See the MSDN link: http://msdn.microsoft.com/en-us/library/3y322t50.aspx