Disable interrupts on Raspberry Pi

I want to disable all the local interrupts on my Raspberry Pi.
There is a function for this: local_irq_disable();
But my g++ compiler cannot find it. I tried common header files like system.h and irq.h, but that didn't work.
Which header file do I have to include to get local_irq_disable() to build on a Raspberry Pi running Raspbian?

I am assuming you are working on a device driver, since this is a function that is meant for code that lives in kernel space, not in user space; that is also why g++ cannot find it in a user-space build. The function you are looking for seems to be defined as a preprocessor macro in linux/irqflags.h. See the Linux Cross Reference for more information.
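For illustration, a minimal sketch of using it from kernel code, assuming a loadable module built against the Raspbian kernel headers (the module name is a placeholder; local_irq_save/local_irq_restore are the state-preserving variants of local_irq_disable/local_irq_enable):

```
#include <linux/module.h>
#include <linux/init.h>
#include <linux/irqflags.h>

static int __init irqtest_init(void)
{
    unsigned long flags;

    /* Save the current interrupt state and disable local IRQs. */
    local_irq_save(flags);

    /* ... a short critical section would go here ... */

    /* Restore the previous interrupt state. */
    local_irq_restore(flags);
    return 0;
}

static void __exit irqtest_exit(void)
{
}

module_init(irqtest_init);
module_exit(irqtest_exit);
MODULE_LICENSE("GPL");
```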

How do I add a missing peripheral register to an STM32 MCU model in Renode?

I am trying out this MCU / SoC emulator, Renode.
I loaded their existing model template under platforms/cpus/stm32l072.repl, which just includes the repl file for stm32l071 and adds one little thing.
When I then load and run a program binary built with STM32CubeIDE and ST's LL library, the code gets stuck in the initial SystemClock_Config() function, where the Flash:ACR register is probed in a loop to observe an expected change in value, while the Renode Monitor window outputs:
[WARNING] sysbus: Read from an unimplemented register Flash:ACR (0x40022000), returning a value from SVD: 0x0
This seems to be expected; not every existing template models everything out of the box. I also found that the stm32l071 model is missing some of the USARTs and NVIC channels. I saw how the latter might be added, but there seems to be not a single one among the default models defining that Flash:ACR register that I could use as an example.
How would one add such a missing register for this particular MCU model?
Note 1: For this test, I'm using an STM32 firmware binary which works as intended on actual hardware, e.g. a dev board for this MCU.
Note 2:
The stated advantage of Renode over QEMU, which apparently does not emulate peripherals, is that it also allows sticking together a more complex system out of mocked external devices, e.g. I2C and other devices (apparently C# modules; I have not looked into that yet).
They say "use the same binary as on the real system".
That is my reason for trying this out: it sounds like a lot of potential for implementing systems where the hardware is not yet fully available, and also for automated testing.
So the obvious workaround of commenting out large parts of the init code, to test only some hardware-independent code while sidestepping such issues, would defeat the purpose here.
If you want to just provide the ACR register for the flash to pass your init, use a tag.
You can either provide it via REPL (recommended, like here https://github.com/renode/renode/blob/master/platforms/cpus/stm32l071.repl#L175) or via RESC.
Assuming that your software expects to read the value 0xDEADBEEF, in the REPL you'd use:
sysbus:
    init:
        Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
In the RESC script or in the Monitor it would be just:
sysbus Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
If you want more complex logic, you can use a Python peripheral, as described in the docs (https://renode.readthedocs.io/en/latest/basic/using-python.html#python-peripherals-in-a-platform-description):
```
flash: Python.PythonPeripheral @ sysbus 0x40022000
    size: 0x1000
    initable: false
    filename: "script_with_complex_python_logic.py"
```
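For reference, a sketch of what such a script might contain; the request object is how Renode hands each access to the script, and 0xDEADBEEF again stands in for whatever value the firmware's busy loop expects:

```
# Renode evaluates this script on every access to the peripheral's range;
# 'request' describes the access (read/write, offset, value).
if request.isRead:
    # Pretend the bits the init loop polls for are already set.
    request.value = 0xDEADBEEF
elif request.isWrite:
    # This sketch simply ignores writes.
    pass
```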
If you really need advanced implementation, then you need to create a complete C# model.
As you correctly mentioned, we do not want you to modify your binary. But we're ok with mocking some parts we're not interested in for a particular use case if the software passes with these mocks.
Disclaimer: I'm one of the Renode developers.

Device Drivers vs the /dev + glibc Interface

I am looking to have the processor read from I2C and store the data in DDR in an embedded system. While looking at solutions, I have been introduced to Linux device drivers as well as the GNU C Library. It seems that many operations you can perform with the basic Linux drivers can also be performed with basic glibc system calls. I am somewhat confused about when one should be used over the other. Both interfaces can be accessed from user space.
When should I use a kernel driver to access a device like I2C or USB and when should I use the GNU C Library system functions?
The GNU C Library forwards function calls such as read, write, and ioctl directly to the kernel. These functions are just very thin wrappers around the system calls. You could invoke the system calls all by yourself using inline assembly, but that is rarely helpful. So in this sense, all interactions with the kernel driver go through these glibc functions.
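To make that concrete, a minimal sketch of reading from an I2C device through the kernel's i2c-dev driver; the bus device /dev/i2c-1 and the slave address 0x48 are placeholder assumptions. Every call below is a glibc wrapper that ends up in the kernel driver:

```
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    /* Each of these glibc calls is a thin wrapper around a system call
       that is handled by the kernel's i2c-dev driver. */
    int fd = open("/dev/i2c-1", O_RDWR);     /* bus number: assumption */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {    /* slave address: assumption */
        perror("ioctl");
        return 1;
    }

    unsigned char buf[2];
    if (read(fd, buf, sizeof buf) != sizeof buf) { /* read two bytes */
        perror("read");
        return 1;
    }

    printf("read 0x%02x 0x%02x\n", buf[0], buf[1]);
    close(fd);
    return 0;
}
```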
If you have questions about specific interfaces and their trade-offs, you need to name them explicitly.
In ARM:
Privilege states are built into the processor and are changed via assembly instructions. A memory protection unit, part of the chip, is configured to disallow access to arbitrary ranges of memory depending on the privilege state.
In the case of the Linux kernel, ALL physical memory is privileged: memory addresses in userspace are virtual (fake) addresses, translated to real addresses once in privileged mode.
So, to access a privileged memory range, the mechanics are like a function call: you set up parameters indicating what you want, and then issue a supervisor call ('SVC'), an instruction that takes control of the program away from userspace and hands it to the kernel. The kernel looks at your parameters and does what you need.
The standard library basically makes that whole process easier.
Drivers create interfaces to physical memory addresses and provide an API through the SVC call and whatever 'arguments' it is passed.
If physical memory is not reserved by a driver, the kernel generally won't allow anyone to access it.
Accessing physical memory you're not privileged to will cause a "bus error".
BTW: You can use a driver like UIO to put physical memory into userspace.
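As an illustration of that last point, a minimal sketch of mapping a UIO device into userspace; /dev/uio0 and the 4 KiB size are assumptions (the real size of a map region is published under /sys/class/uio/uio0/maps/map0/size):

```
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* /dev/uio0 and the 4 KiB map size are assumptions for this sketch. */
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    volatile unsigned int *regs =
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* The device's physical memory is now accessible as an ordinary pointer. */
    printf("first word: 0x%08x\n", regs[0]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```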

Nsight 5.0 on Mac Pro, Lion 10.7.4

I'm new to CUDA development and I'm using Nsight 5 on a Mac Pro.
I'm doing a very simple simulation with two particles (ver1 and ver2 here, which are two structs holding pointers to another struct type, links).
The code compiles, but it seems to run into a problem when it reaches the end of this block, and never steps into integrate_functor():
...
thrust::device_vector<Vertex> d_vecGlobalVec(2);
d_vecGlobalVec[0] = ver1;
d_vecGlobalVec[1] = ver2;
thrust::for_each(
    d_vecGlobalVec.begin(),
    d_vecGlobalVec.end(),
    integrate_functor(deltaTime)
);
...
So my questions are:
1. In Nsight, I can see the values of the member variables of ver1 and ver2; but right before the last line of code in this block, when I expand the hierarchy of d_vecGlobalVec, I can't see any of these values: the corresponding fields (e.g. of the first element in this vector) are just empty. Why is this the case? Obviously, ver1 and ver2 are in host memory while the values in d_vecGlobalVec are on the device.
2. A member of the Nsight team posted this. Following that, does it mean that, in general, I should be able to step in and out between host and device code, and see host/device variables as if there were no barrier between them?
System:
NVIDIA GeForce GT 650M 1024 MB
Mac OS X Lion 10.7.4 (11E2620)
Make sure your device code is actually called. Check all return codes and confirm that the device actually worked on the output. Thrust may sometimes run the code on the host if it believes that would be more efficient.
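As a sketch of what checking return codes can look like with the CUDA runtime API (the check helper is a made-up name; note also that Thrust itself reports errors by throwing exceptions, which you can catch separately):

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Hypothetical helper: abort with a readable message if a CUDA
   runtime call did not succeed. */
static void check(cudaError_t err, const char *what)
{
    if (err != cudaSuccess) {
        fprintf(stderr, "%s: %s\n", what, cudaGetErrorString(err));
        exit(EXIT_FAILURE);
    }
}

void verify_device_work(void)
{
    /* Catches errors from the launch itself (bad configuration etc.). */
    check(cudaGetLastError(), "kernel launch");

    /* Waits for the device and surfaces errors from kernel execution;
       if this fails, the device code did not run to completion. */
    check(cudaDeviceSynchronize(), "kernel execution");
}
```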
I would really recommend updating to 10.8: it has the latest drivers, with the best support for the NVIDIA GeForce 6xx series.
Also note that for the optimum experience you need different GPUs for display and for CUDA debugging; otherwise Mac OS X may interfere and kill the debugger.

Why would the address of reset vector differ when two firmwares are linked with the same linker script?

I have a Cortex-M3 chip and on it I am running a bootloader that uses eCos. The bootloader, after checking for firmware updates etc., jumps to another location (BASE_ADDRESS_OF_APP + 0x19) in the ROM where the actual application resides (also compiled with eCos).
Now, instead of running the normal firmware, I want to run my CppUTest tests compiled for the Cortex-M3 target. I am able to compile and link my tests for the target platform, using the eCos C library but not the actual operating system. But when I load the result onto my board using JTAG, it doesn't run.
After some investigation using arm-eabi-objdump, I found out that the reset vector of the CppUTest firmware is at an offset of 0x490, as opposed to an offset of 0x18 for the normal firmware. My suspicion is that this is the reason why the tests are never executed. Is this correct?
How is it possible that the two firmwares have different starting addresses when I am linking them with the same linker script?
How can I make sure that the starting point of the test program is the same as the starting point of the application?
It depends on how your linker script is written: if your entry point is not pinned to a fixed address by the linker script, then there is a chance that other data/code gets placed in the image before your entry point, effectively moving its location and indeed causing problems.
I typically solve this by creating a new section with only one symbol in it, a jump/branch instruction, as follows:
.section entryPointSection
b myCodeEntryPoint
Then, in your linker script, put the entryPointSection at the hard-coded address that your bootloader jumps to.
The myCodeEntryPoint label can be the name of a C function (or an assembly label if necessary) that lives in the normal .text section and can be linked anywhere within reach of the branch. It becomes your entry point, but you don't really care where it ends up, because the linker will find it and link it properly. A sketch of what the linker-script side could look like follows.
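For illustration, a minimal GNU ld fragment along those lines; the base address 0x08010000 is purely a placeholder for wherever your bootloader expects the application (adjust it, and the list of remaining sections, to your real layout):

```
SECTIONS
{
    /* Placeholder base address: where the bootloader expects the app. */
    . = 0x08010000;

    /* Pin the one-instruction entry section first, so the entry point
       always lands at the address the bootloader jumps to. */
    .entry : { KEEP(*(entryPointSection)) }

    /* Everything else can float behind it. */
    .text : { *(.text*) }

    /* ... remaining sections (.rodata, .data, .bss, ...) ... */
}
```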
Consider posting your linker script if you have further questions.

Enabling GRUB to boot the kernel automatically

I am developing a kernel for an operating system. In order to execute it, I've decided to use GRUB. Currently, I have a script that joins GRUB's stage1, stage2, a pad file, and the kernel itself into one image, which makes it bootable. The only problem is that when I run it, I have to tell GRUB manually where the kernel is and how big it is, and then boot it, like this:
kernel 200+KERNELSIZE
boot
KERNELSIZE is the size of the kernel in blocks. This is fine for a start, but is it possible to embed these values in the binary and make GRUB boot the kernel automatically? Any suggestions on how to accomplish that?
http://www.gnu.org/software/grub/manual/grub.html#Embedded-data gives some general information about block-list storage in GRUB. Most importantly, it mentions that block lists are stored in well-defined locations in stage2.
You will probably want to look at the GRUB source code to figure out the exact location.
I would imagine you could just make your own menu.lst config file, load it at the GRUB shell with "configfile /path/to/menu.lst", and then do "setup (hd0)", replacing values as needed. I'm just guessing though; there's no telling what the differences are in your custom setup.
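For what it's worth, a sketch of such a menu.lst, assuming GRUB legacy and reusing the block-list notation from the question (the entry title is arbitrary, and KERNELSIZE again stands for the kernel's size in blocks):

```
default 0
timeout 0

title  MyKernel
# Same block-list form as the manual command: start at block 200,
# KERNELSIZE blocks long.
kernel 200+KERNELSIZE
```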