Set up RISC-V toolchain with specific instruction set

I'm developing a processor using a form of the RISC-V ISA and I'm currently trying to set up the toolchain.
My current processor design uses the RV32I base instruction set and I want to compile for this ISA. However, the default configuration of the toolchain, as described on the http://RISCV.org site, is to compile for the RV64I ISA.
How can I reconfigure this toolchain to produce a binary for the RV32I ISA?

If you are using the RISC-V port of gcc, you can use the -march flag to constrain which instruction sets and extensions it will emit.
Example:
riscv64-unknown-elf-gcc -march=rv32i -mabi=ilp32 etc.
(Newer toolchains expect the lowercase spelling and an explicit -mabi flag; older ones accepted -march=RV32I.)
The fact that the compiler name begins with riscv64 is irrelevant. x86 works the same way: the 64-bit x86 compiler can generate 32-bit IA-32 code via -m32.

Related

How to use FMIKit in 64-bit Matlab / Simulink to generate 32-bit binary FMU?

I have successfully used FMIKit 2.7 (from https://github.com/CATIA-Systems/FMIKit-Simulink) in 64-bit Matlab/Simulink (Matlab 2017a) to generate a 64-bit binary FMU, which works well. However, other simulation tools now need a 32-bit binary FMU for co-simulation, so I followed this guide:
https://ww2.mathworks.cn/help/coder/ug/build-32-bit-dll-on-64-bit-windows(r)-platform-using-msvc-toolchain.html?s_tid=srchtitle
This adds a 32-bit compiler toolchain to 64-bit Matlab.
In addition, Intel -> x86-32 (Windows32) is selected for Hardware Implementation, rtwsfcnfmi.tlc is selected for System Target File, and the FMI Option sets the output to a Co-Simulation type FMU.
[screenshots: Hardware Implementation, System Target File, FMI Option]
However, in fact, the FMU generated by FMIKit is still a 64-bit binary FMU.
My guess is that FMIKit automatically chose a 64-bit compiler during the compilation and linking process. What do I need to do to modify the FMIKit configuration file (such as a .tlc file or others) or Matlab / Simulink to generate a 32-bit binary FMU?
You should use the grtfmi.tlc target instead of the rtwsfcnfmi.tlc target.
Then you can try to configure the CMake build to directly generate a 32-bit FMU. (I am not familiar with this, but different CMake generators are offered for the different Visual Studio versions, some of them with a 64 in the name; presumably the others produce 32-bit builds.)
As an alternative:
check "Include Sources in FMU"
Then you can afterwards compile the FMU, e.g. using FMPy (command line, or GUI -> Help -> Add Platform Binary), from a 32-bit Python installation. Note that you cannot use an Anaconda installation for the latest FMPy version in 32-bit; see https://github.com/CATIA-Systems/FMPy/issues/64.

Why are there different packages for the same architecture, but different OSes?

My question is rather conceptual. I noticed that there are different packages for the same architecture, like x86-64, but for different OSes. For example, RPM offers different packages for Fedora and OpenSUSE for the same x86-64 architecture: http://www.rpmfind.net/linux/rpm2html/search.php?query=wget - not to mention different packages served up by YUM and APT (for Ubuntu), all for x86-64.
My understanding is that a package contains binary instructions suitable for a given CPU architecture, so that as long as CPU is of that architecture, it should be able to execute those instructions natively. So why do packages built for the same architecture differ for different OSes?
Considering just different Linux distros:
Besides being compiled against different library versions (as Hadi described), the packaging itself and default config files can be different. Maybe one distro wants /etc/wget.conf, while maybe another wants /etc/default/wget.conf, or for those files to have different contents. (I forget if wget specifically has a global config file; some packages definitely do, and not just servers like Exim or Apache.)
Or different distros could enable / disable different sets of compile-time options. (Traditionally set with ./configure --enable-foo --disable-bar before make -j4 && make install).
For wget, choices may include which TLS library to compile against (OpenSSL vs. gnutls), not just which version.
So ABIs (library versions) are important, but there are other reasons why every distro has their own package of things.
Completely different OSes, like Linux vs. Windows vs. OS X, have different executable file formats: ELF vs. PE vs. Mach-O. All three of those formats contain x86-64 machine code, but the metadata is different. (And OS differences mean you'd want the machine code to do different things.)
For example, opening a file on Linux or OS X (or any POSIX OS) can be done with the int open(const char *pathname, int flags, mode_t mode); system call. So the same source code works on both of those platforms, although it can still compile to different machine code. In this case it is actually very similar machine code that calls a libc wrapper around the system call (OS X and Linux use the same function calling convention), but with a different symbol name: OS X would compile it to a call to _open, while Linux doesn't prepend underscores to symbol names, so the dynamic-linker symbol name would be open.
The mode constants for open might be different. e.g. maybe OS X defines O_RDWR as 4, but maybe Linux defines it as 2. This would be an ABI difference: same source compiles to different machine code, where the program and the library agree on what means what.
But Windows isn't a POSIX system. The WinAPI function for opening a file is HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);
If you want to do anything invented more recently than opening / closing files, especially drawing a GUI, things are even less similar between platforms and you will use different libraries. (Or a cross platform GUI library will use a different back-end on different platforms).
OS X and Linux both have Unix heritage (real or as a clone implementation), so the low-level file stuff is similar.
These packages contain native binaries that require a particular Application Binary Interface (ABI) to run. The CPU architecture is only one part of the ABI. Different Linux distros have different ABIs and therefore the same binary may not be compatible across them. That's why there are different packages for the same architecture, but different OSes. The Linux Standard Base project aims at standardizing the ABIs of Linux distros so that it's easier to build portable packages.

Develop Simple OS under Mac OS X, how to build the boot img from the Mach-O?

I am writing a simple OS under the Mac OS X environment. I can build a simple bootloader with NASM. When I develop the larger part in C, I have to build them together. GCC on Mac OS X compiles to the Mach-O output format. I want to know how to extract the instruction part of the output object and link it together with the NASM part.
Thanks.
There's a bigger problem which you aren't seeing.
GCC does not (ordinarily) generate 16-bit x86 code, only 32-bit or 64-bit. An x86 PC bootloader starts execution in real mode, a 16-bit mode that can run 16-bit code only.
So, even if you manage to link together the C code compiled with gcc and the assembly code compiled with NASM, you won't be able to execute the C code part (any 32-bit code part for that matter) until after you've switched into the 32-bit protected mode, which is not a very easy thing to do.
And you don't want to switch into protected mode in the 512-byte-long boot sector either. BIOS functions cannot be used in protected mode. If you switch too early, you won't be able to load any more stuff from the disk.
The most practical strategy is to split the bootloader into several parts. The 512-byte-long bootsector would load the next part(s) using the BIOS disk I/O functions. And those other parts will either contain the whole OS, or enough code that would load the rest of the OS either by using the same BIOS I/O functions or by using its own disk driver(s) in the real or protected mode.
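The one hard constraint on the boot sector is its layout: exactly 512 bytes, ending in the 0xAA55 signature the BIOS checks. A sketch of producing such an image from any 510-byte stage-1 blob (here /dev/zero stands in for real 16-bit code):

```shell
# Pad stage-1 code to 510 bytes, then append the 2-byte BIOS boot signature.
head -c 510 /dev/zero > boot.bin    # stand-in for real 16-bit stage-1 code
printf '\x55\xaa' >> boot.bin       # 0xAA55 stored little-endian at offset 510
stat -c %s boot.bin                 # prints 512
```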
So, you are doomed to writing 16-bit code in assembly language by hand for the bootsector, no C, no 32-bit.
You can, however, use other C compilers capable of producing 16-bit x86 code for the other parts of the bootloader. There are at least two such compilers freely available online:
Turbo C++ 1.01 (runs only in DOS, Windows XP or below, VMs with DOS/Windows, e.g. DosBox)
Open Watcom C/C++ 1.9 (runs in DOS, Windows and probably Linux)

Is there an equivalent of ${LIB} for dyld?

I'm working on a Mac launcher for a trace library - the tracing works by adding the library to DYLD_INSERT_LIBRARIES (the Mac equivalent of LD_PRELOAD). The DYLD_INSERT_LIBRARIES variable is then propagated by the trace library as further processes are spawned.
The trouble is that I need the 32-bit version of the trace library to be used for 32-bit tracee processes and the 64-bit version for 64-bit tracee processes. In the Linux launcher I have, this is achieved by using ${LIB} in LD_PRELOAD - the dynamic loader (ld.so) then replaces this with "the right thing" when loading a process.
Is there an equivalent of ld.so's ${LIB} variable for dyld on Mac? I couldn't immediately see one when I looked through the man page (https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/dyld.1.html), but I may just be reading it wrong. If not, is there another way of achieving the same effect please?
I think what you want is to compile your inserted library as a fat binary (i.e., multiple architectures in the same binary). This should allow a single value of DYLD_INSERT_LIBRARIES to work for subprocesses of various architectures.

Cross-compilation from Solaris SPARC to Solaris x86

Can I cross-compile an x86 Solaris library on a Solaris SPARC server?
The source code is mainly C++ (some C). I need to use the Solaris C++ compiler CC. I understand that some compile or link flags differ between SPARC and x86; I have checked that the flags I use are common to both.
Is it possible to simply copy the library compiled on SPARC to x86, or do I need to apply specific flags during compilation and linking?
Thanks,
The Sun/Oracle Studio C++ compilers do not support cross-compilation. You would need to use another compiler that does, like a specially built gcc.
Simply copying the library can't work - SPARC and x86 are very different instruction sets, with no binary compatibility between the two.
Even if you could cross-compile the Solaris libraries on SPARC for x86, it would seem a lot simpler to just install the x86 compilers and libraries. The interdependencies of these libraries are probably so complex that such a project would not work.
What's preventing you from just downloading and installing the Studio software on x86 Solaris?
The Oracle Sun Studio C++ compiler (CC) has the -xarch option with a big variety of architectures, among them sparc, amd64, and pentium_pro, plus various extensions/modifications. This flag should be provided to both the compiler and the linker if you compile and link in separate steps.
You can verify the target architecture with the file command, e.g.:
bash-3.2$ file /usr/bin/CC
/usr/bin/CC: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, stripped
Please, refer to CC manual for details:
Sun Studio 11 C++ Man Page