How does Qemu locate its image file, or how does Eclipse interact with Qemu?

To study Qemu I installed the 'QEMU Eclipse' plugin and built the 'blinky' project from https://gnu-mcu-eclipse.github.io/tutorials/blinky-arm/. When I run it from Eclipse, it produces the following output:
xPack 64-bit QEMU v2.8.0-9 (qemu-system-gnuarmeclipse).
Board: 'STM32F4-Discovery' (ST Discovery kit for STM32F407/417 lines).
Board picture: '/home/hgiese/Work/qemu-arm-dev/linux-x64/install/qemu-arm/share/qemu/graphics/STM32F4-Discovery.jpg'.
Device file: '/home/hgiese/Work/qemu-arm-dev/linux-x64/install/qemu-arm/share/qemu/devices/STM32F40x-qemu.json'.
Device: 'STM32F407VG' (Cortex-M4 r0p0, MPU, 4 NVIC prio bits, 82 IRQs), Flash: 1024 kB, RAM: 128 kB.
Command line: 'blinky' (6 bytes).
...
and everything ran fine.
Now, when I called Qemu directly (outside of Eclipse) with the exact same command line, the output was this:
xPack 64-bit QEMU v2.8.0-9 (qemu-system-gnuarmeclipse).
Board: 'STM32F4-Discovery' (ST Discovery kit for STM32F407/417 lines).
Board picture: '/home/hgiese/Work/qemu-arm-dev/linux-x64/install/qemu-arm/share/qemu/graphics/STM32F4-Discovery.jpg'.
Device file: '/home/hgiese/Work/qemu-arm-dev/linux-x64/install/qemu-arm/share/qemu/devices/STM32F40x-qemu.json'.
Device: 'STM32F407VG' (Cortex-M4 r0p0, MPU, 4 NVIC prio bits, 82 IRQs), Flash: 1024 kB, RAM: 128 kB.
Command line: 'blinky' (6 bytes).
qemu-system-gnuarmeclipse: Guest image must be specified (using --image)
no matter how I specified 'blinky':
running it from inside the Debug directory
calling it as blinky.elf
calling it with its complete path
It got even stranger: in an attempt to provoke an error message from Qemu (which might tell me something), I replaced 'blinky' with 'xyz' (in the command line field of the 'Debugger' pane in the 'Debug Configuration' dialog) - and it still worked. The only difference was the line
Command line: 'xyz' (3 bytes).
in Qemu's output, which showed that Qemu had received the dummy parameter - and it made not the slightest difference: Qemu still loaded the correct image file.
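My guess from the error text is that a direct invocation needs the ELF passed explicitly via --image, along the lines of the sketch below - but I have not been able to confirm which options the Eclipse plug-in actually passes, so the --board/--mcu/--semihosting options here are assumptions:
qemu-system-gnuarmeclipse --board STM32F4-Discovery --mcu STM32F407VG \
  --image Debug/blinky.elf \
  --semihosting-config enable=on,target=native --semihosting-cmdline blinky
If that is right, the 'Command line:' line in the output above would just be the semihosting command line handed to the guest program, not the image path - which would also explain why 'xyz' changed nothing.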
How do I communicate the image file to Qemu (or: 'How does Eclipse do its magic?')?
Any insight will be greatly appreciated.
Helmut

Related

How to fix error [USF-XSim-62] when simulating project with Vivado Xilinx (also using DPI-C)?

I have a Verilog project that makes use of a testbench written in SystemVerilog and a few imports/exports of functions through the DPI-C interface.
When attempting to simulate, I get an xsim error (as far as I can tell) and the simulation stops. I have struggled with this issue for a while, and there is no specific info given in the Vivado terminal together with the error.
The exact error received is:
ERROR: [USF-XSim-62] 'elaborate' step failed with error(s) while executing 'path/to/proj/<proj_name>/<proj_name>.sim/sim_1/behav/xsim/elaborate.bat' script. Please check that the file has the correct 'read/write/execute' permissions and the Tcl console output for any other possible errors or warnings.
ERROR: [Vivado 12-4473] Detected error while running simulation. Please correct the issue and retry this operation.
launch_simulation: Time (s): cpu = 00:00:01 ; elapsed = 00:00:06 . Memory (MB): peak = 1412.562 ; gain = 0.000
ERROR: [Common 17-39] 'launch_simulation' failed due to earlier errors.
Not only is the file created by Vivado itself (so it should have permission to access it), but this error appears and disappears at random. For example, modifying the testbench file and then changing it back to the original (to force recompilation) sometimes allows the simulation to run, seemingly at random.
What is even more confusing is that there are some "safe states" of the testbench code that always allow the simulation to run. My initial hunch was that it was related to the DPI-C functions, but I have altered the files in many ways and found no obvious correlation.
Notes
I am using Vivado 2021.2.
I have implemented an automated Tcl script to compile the C files using xsc and insert them into the project directory.
I am using the xelab command-line arguments -sv_root path/to/xsc -sv_lib dpi to link the C code compiled with xsc (see the sketch after this list).
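For context, the C side of such a DPI-C import looks roughly like the sketch below. The function name and the trivial body are made up for illustration (my real functions are more involved); the xsc/xelab usage follows the flags above:
/* dpi_funcs.c - illustrative C side of a DPI-C import.
 * Compile with:  xsc dpi_funcs.c        (by default this yields the shared library "dpi")
 * Link with:     xelab ... -sv_root path/to/xsc -sv_lib dpi ...
 * Matching SystemVerilog declaration in the testbench:
 *   import "DPI-C" function int add_llr(input int a, input int b);
 */
#include "svdpi.h"   /* DPI header shipped with the simulator */

int add_llr(int a, int b)
{
    return a + b;    /* trivial body, just to show the linkage */
}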
Update
When I run xelab by itself (with -v), it gives the following output (after static elaboration and simulation data flow analysis passed):
SDG Object Count: 1284, SDG Object Memory Usage: 133 KB.
Time Resolution for simulation is 1ps
Compiling package std.std
ICR Memory Usage: 491KB, 8192KB
Compiling package xil_defaultlib.$unit_tb_sv
ICR Memory Usage: 491KB, 8192KB
Compiling module xil_defaultlib.decoder_interface
ICR Memory Usage: 500KB, 8192KB
Compiling module xil_defaultlib.lut_biases
ICR Memory Usage: 584KB, 8192KB
Compiling module xil_defaultlib.saturate_default
ICR Memory Usage: 758KB, 8192KB
Compiling module xil_defaultlib.variable_nodes_default
ICR Memory Usage: 1832KB, 8192KB
Compiling module xil_defaultlib.check_nodes_default
INFO: [XSIM 43-4009] "abs_prev_proc_elem", written at line 26 in file "D:/Projects/Matlab/NN_BP_BCH/nn-min-sum-decoding/hardware/check_nodes.v", has also been read in this always_comb/always_latch block and is not added to the sensitivity list.
INFO: [XSIM 43-4009] "reg_min", written at line 30 in file "D:/Projects/Matlab/NN_BP_BCH/nn-min-sum-decoding/hardware/check_nodes.v", has also been read in this always_comb/always_latch block and is not added to the sensitivity list.
INFO: [XSIM 43-4009] "reg_sign", written at line 31 in file "D:/Projects/Matlab/NN_BP_BCH/nn-min-sum-decoding/hardware/check_nodes.v", has also been read in this always_comb/always_latch block and is not added to the sensitivity list.
INFO: [XSIM 43-4009] "temp_reg", written at line 52 in file "D:/Projects/Matlab/NN_BP_BCH/nn-min-sum-decoding/hardware/check_nodes.v", has also been read in this always_comb/always_latch block and is not added to the sensitivity list.
ICR Memory Usage: 6896KB, 8192KB
Compiling module xil_defaultlib.interm_layer
ICR Memory Usage: 6902KB, 8192KB
Compiling module xil_defaultlib.llr_to_out_default
ICR Memory Usage: 6927KB, 8192KB
Compiling module xil_defaultlib.out_layer_default
ICR Memory Usage: 7674KB, 8192KB
Compiling module xil_defaultlib.decoder_top_default
ICR Memory Usage: 8281KB, 16384KB
Compiling module xil_defaultlib.tb
child killed: unknown signal

"image is too large" keeps on happening to openbmc image for Raspberrypi platform

Could someone please give me advice on making an OpenBMC image for the Raspberry Pi platform?
Before trying, I looked through related documents and came to believe an OpenBMC image can work on a Raspberry Pi,
like OpenBMC with Raspberry Pi (2 or 3) and build bmcweb?
and https://kevinleeblog.github.io/project1/2019/11/25/openbmc-for-raspberry-pi-zero/.
So I followed these instructions and tried the following steps.
#1: Git clone openbmc.git to my local PC.
tm#tm-VB1:~/Rpi4-64$ git clone https://github.com/openbmc/openbmc.git
The logs are snipped, but they show no problem.
Receiving objects: 100% (182121/182121), 84.10 MiB | 5.55 MiB/s, done.
Resolving deltas: 100% (96860/96860), done.
#2: set TEMPLATECONF for raspberrypi
tm#tm-VB1:~/Rpi4-64$ export TEMPLATECONF=meta-evb/meta-evb-raspberrypi/conf
tm#tm-VB1:~/Rpi4-64$ echo $TEMPLATECONF
meta-evb/meta-evb-raspberrypi/conf
#3: set up the environment by "openbmc-env"
tm#tm-VB1:~/Rpi4-64/openbmc$ . openbmc-env
### Initializing OE build env ###
The logs are snipped, but they show no problem. As you know, the script automatically creates a subdirectory, build, under openbmc.
Common targets are:
obmc-phosphor-image
tm#tm-VB1:~/Rpi4-64/openbmc/build$
#4: Change directory and edit local.conf for my Raspberry Pi platform.
tm#tm-VB1:~/Rpi4-64/openbmc/build$ cat ./conf/local.conf
The unchanged parts of the log are snipped.
MACHINE ??= "raspberrypi4-64" <<< Change here for my platform.
DL_DIR ?= "/home/tm/Yocto/downloads" <<< Add here for build-time reduction at retry.
SSTATE_DIR ?= "/home/tm/Yocto/sstate-cache" <<< Add here for build-time reduction at retry.
#5: Change the FLASH_SIZE variable based on the following suggestion: https://github.com/openbmc/openbmc/issues/3590
tm#tm-VB1:~/Rpi4-64/openbmc/meta-phosphor/classes$ cat image_types_phosphor.bbclass
Snip the log.
# Flash characteristics in KB unless otherwise noted
FLASH_SIZE ?= "131072" <<< I changed only this variable from 32768 to 131072.
#6: bitbake starts.
tm#tm-VB1:~/Rpi4-64/openbmc/build$ bitbake obmc-phosphor-image
Then an ERROR occurred.
ERROR: Logfile of failure stored in: /home/tm/Rpi/openbmc/build/tmp/work/raspberrypi-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2055074
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.09147 s, 367 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=495980 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/u-boot.bin
DEBUG: Spanning start=0K end=512K
DEBUG: Compare needed=495980 available=524288 margin=28308
484+1 records in
484+1 records out
495980 bytes (496 kB, 484 KiB) copied, 0.00120141 s, 413 MB/s
DEBUG: Considering file size=8266960 name=/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi
DEBUG: Spanning start=512K end=4864K
>>>DEBUG: Compare needed=8266960 available=4456448 margin=-3810512
ERROR: Image '/home/tm/Rpi/openbmc/build/tmp/deploy/images/raspberrypi/fitImage-obmc-phosphor-initramfs-raspberrypi-raspberrypi' is too large!
DEBUG: Python function do_generate_static finished
It said margin=-3810512.
Now, my 2nd try.
I removed the whole openbmc directory and did the same steps above.
But this time, I changed FLASH_SIZE from 32768 to 262144.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3915 tasks of which 2633 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm#tm-VB1:~/Rpi4/openbmc/build$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2061792
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.177223 s, 189 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm#tm-VB1:~/Rpi4/openbmc/build$
It said margin=-23936.
OK, the image is too large. So, my 3rd try.
I removed the whole openbmc directory and did the same steps above.
But this time, I changed FLASH_SIZE from 32768 to 9437184.
The result was the same, as shown below.
ERROR: obmc-phosphor-image-1.0-r0 do_generate_static: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
ERROR: Logfile of failure stored in: /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
ERROR: Task (/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3935 tasks of which 0 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/openbmc/meta-phosphor/recipes-phosphor/images/obmc-phosphor-image.bb:do_generate_static
Summary: There were 4 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
tm#tm-VB1:~/Rpi4/openbmc$
tm#tm-VB1:~/Rpi4/openbmc$ cat /home/tm/Rpi4/openbmc/build/tmp/work/raspberrypi4-openbmc-linux-gnueabi/obmc-phosphor-image/1.0-r0/temp/log.do_generate_static.2058361
DEBUG: Executing python function do_generate_static
DEBUG: Executing shell function do_mk_static_nor_image
32768+0 records in
32768+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 0.173685 s, 193 MB/s
DEBUG: Shell function do_mk_static_nor_image finished
DEBUG: Considering file size=548224 name=/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin
DEBUG: Spanning start=0K end=512K
>>>DEBUG: Compare needed=548224 available=524288 margin=-23936
ERROR: Image '/home/tm/Rpi4/openbmc/build/tmp/deploy/images/raspberrypi4/u-boot.bin' is too large!
DEBUG: Python function do_generate_static finished
tm#tm-VB1:~/Rpi4/openbmc$
It reported the same margin as in the 256 MB case.
My 4th try.
I removed the whole openbmc directory and did the same steps above.
I changed MACHINE ??= "raspberrypi4-64" to "raspberrypi2".
But this time, I changed FLASH_SIZE from 32768 to 33554432.
The result was the same as before.
My 5th try.
I removed the whole openbmc directory and did the same steps above.
I used MACHINE ??= "raspberrypi2".
But this time, I changed FLASH_SIZE from 32768 to 67108864.
The result was the same as before.
After trying several variations, it always said "image is too large", even though I changed FLASH_SIZE to much larger values.
So I am wondering whether I have missed some important configuration, or whether another parameter besides FLASH_SIZE needs to be changed to fix this.
By the way, I tried romulus and it built successfully.
My environment is ubuntu-20.04.2.0-desktop-amd64.
I would really appreciate it if someone could kindly give me advice to make this work.
Interesting. I don't have a quick fix for you, but I did notice that the partition that is oversized is the u-boot partition. U-boot is a smaller, separate binary installed on the machine. It looks as if your u-boot build is over 512 KB while the partition is set to 512 KB. Your flash size is massive:
FLASH_SIZE = "9437184" is more than a gigabyte - 9 GiB, in fact - because FLASH_SIZE is in KB.
If I were you, I would first try to build an older version of OpenBMC for Raspberry Pi (it used to work, so you just need to find the commit before u-boot grew too big). Use git to move back a month at a time until you find one that works.
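A rough sketch of that walk-back (the date and commit id are only placeholders):
cd openbmc
git log --oneline --before="2021-06-01" -1    # pick a point a month or more back in history
git checkout <commit-id>                      # then redo the build steps above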
If that does not work, I would try to modify the partition table.
Here is where you are failing: the static image layout only reserves 512 KB for u-boot, and your u-boot binary no longer fits. The build of the u-boot image itself looks fine. Increasing the kernel offset makes it build, but the other targets in OpenBMC will not be happy with that solution, so maybe meta-raspberrypi will have to override the partition table (if u-boot cannot be shrunk).
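If you do experiment with that, the 0K-512K and 512K-4864K spans in your failure logs come from offset variables that sit next to FLASH_SIZE in image_types_phosphor.bbclass. Something along these lines in local.conf might be worth a try, but check the exact variable names and defaults against your tree first, since I am quoting them from memory:
FLASH_SIZE = "131072"
FLASH_UBOOT_OFFSET = "0"
FLASH_KERNEL_OFFSET = "1024"   <<< give u-boot 1 MB instead of 512 KB (values are in KB)
FLASH_ROFS_OFFSET = "5376"     <<< shift the later partitions down by the same 512 KB
FLASH_RWFS_OFFSET = "29184"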
Whatever you do, open an issue on the GitHub and share your changes. Also use the Discord and Gerrit.
I just replicated this issue. We should fix it.

Gem5 in full system mode running SPEC2006 runs out of memory

What I am trying to do is run a SPEC2006 benchmark (namely 410.bwaves) in full system mode.
I have made a .rcS script to pass to the fs.py script and the command I type to start the simulation is as follows:
build/X86/gem5.opt configs/example/fs.py --script="../run_bwaves.rcS" --disk-image=ubuntu-14.04.img --kernel=x86_64-vmlinux-2.6.22.9
The result after some time is:
Free swap: 0kB
131072 pages of RAM
3650 reserved pages
18 pages shared
0 pages swap cached
Out of memory: kill process 807 (bwaves) score 13154 or a child
Killed process 807 (bwaves)
/tmp/script: line 39: 807 Killed ./bwaves
Full linux output here
Gem5 output here
I am guessing it has something to do with this line: warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
but I am not sure.
I have tried adding a --mem-size=... flag, but it breaks the simulation with a Memory size not divisible by page size error.
If anyone could help me I would be glad.
Edit: As suggested in a comment, I used a large enough --mem-size value that is divisible by the page size (a sketch of the full command is shown after the error below). The error has now turned into:
bwaves[807]: segfault at 00007ffee647624c rip 0000000000410eb5 rsp 00007fff664761c0 error 4
/tmp/script: line 39: 807 Segmentation fault ./bwaves
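For reference, the kind of command I mean in the edit above; the 8192MB value is only an illustration here, taken from the DRAM-capacity warning:
build/X86/gem5.opt configs/example/fs.py --script="../run_bwaves.rcS" \
    --disk-image=ubuntu-14.04.img --kernel=x86_64-vmlinux-2.6.22.9 --mem-size=8192MB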

JFR.dump fails. "No data in specified interval"

I get the following error message when executing jcmd 115 JFR.dump name=continuous_recording:
115:
Dump failed. No data found in the specified interval.
I started the recording with the following configuration:
-XX:StartFlightRecording=disk=true,dumponexit=true,
filename=/home/site/diagnostics/recording.jfr,
maxsize=1024m,maxage=1d,name=continuous_recording
It could be that the buffer has not yet filled the minimum chunk size. But the JFR.check command does not provide that information.
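For what it's worth, the only extra detail JFR.check seems to offer is its verbose option, which appears to report per-event settings rather than how full the buffers are:
jcmd 115 JFR.check name=continuous_recording verbose=true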
Update:
I can get a dump from the Java app if I run JFR.dump without specifying the name of the recording. I tried wrapping the recording name in quotes (escaped and unescaped) and got the same error as before.
005c736ce3ee:/home# jcmd 115 JFR.dump filename="home/6_10_dump1.jfr"
Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true
115:
Dumped recording, 155.8 MB written to: /home/6_10_dump1.jfr
What version of the JDK are you using? The chunk should not need to be filled up.
There is a bug [1] that happens in JDK 11 or later, if you specify the filename of the file you want to dump at startup, but don't specify it when you dump the file.
Try this as a work around:
$ jcmd 115 JFR.dump filename=recording.jfr
[1] https://bugs.openjdk.java.net/browse/JDK-8220657
Azul support helped figure this out. If you don't specify the filename when starting the VM, then the following works:
-XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d
This bug was filed and fixed as JDK-8220657. Azul said they will backport this to Zulu 8 and 11 in the future.
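For completeness, a sketch of the combination that, as I understand it, avoids the bug: start the recording without filename=..., then give an explicit filename at dump time (the PID and application jar below are placeholders; the dump path is the one from my original configuration):
java -XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d -jar app.jar
jcmd <pid> JFR.dump name=continuous_recording filename=/home/site/diagnostics/recording.jfr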

SIGSEGV on memory access in Beaglebone Black (ARM) cross-compilation w/ Eclipse - Linaro

I am new to this, or rather rusty (being 62).
I am trying to develop for a BeagleBone Black running Debian, over IP, using Eclipse Luna CDT and the Linaro tools.
I succeeded in running and debugging a standard helloworld.c.
I need to control GPIO fast (to connect an uncommon peripheral), but all attempts to read or write memory-mapped registers fail.
Instruction
i = (*((volatile unsigned int *)(0x4804c130)))
which should read the GPIO status register, results in
Child terminated with signal = 0xb (SIGSEGV)
GDBserver exiting
logout
This is the source (hellobone.c) I compile without errors:
int main(void)
{
unsigned int i = 1;
i = (*((volatile unsigned int *)(0x4804c130))) ;
}
(I tried all variations on this pointer arithmetic)
Makefile trace: (ignore includes)
---COMPILE--- C:/hellobone/source/hellobone.c
"C:\gcc-linaro\bin\arm-linux-gnueabihf-gcc.exe" -c -o C:/hellobone/object/hellobone.o C:/hellobone/source/hellobone.c -marm -O0 -g -I. -IC:/hellobone/include
.
---LINK---
"C:\gcc-linaro\bin\arm-linux-gnueabihf-gcc.exe" -o hellobone C:/hellobone/object/hellobone.o C:/hellobone/object/tools.o C:/hellobone/object/gpio_v2.o -marm -O0 -g -I. -IC:/hellobone/include
.
The binary also crashes running as root from TTY:
root#beaglebone:~# ./hellobone
Segmentation fault
I installed Eclipse on the BBB's Debian itself, and there reading and writing memory works just fine. It is just too slow at compiling, and too unstable, to be practical.
Reading memory should be doable. What am I doing wrong?
I suspect
GNU gdbserver (GDB) 7.4.1-debian
This gdbserver was configured as "arm-linux-gnueabihf"
But maybe I am missing something obvious; I have not seen any post on this problem...
I am really stuck - I have been working on this for months now. Setting up the toolchain is very frustrating; nothing works as in the YouTube videos.
Any help would be really appreciated
Marco
You need to mmap /dev/mem to access memory-mapped peripherals through physical addresses. The easiest example code I know of that does this goes by the name devmem2.
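In case it helps, here is a minimal sketch of that /dev/mem + mmap approach, in the spirit of devmem2 (this is not the devmem2 source; the base address and offset simply reuse the 0x4804c130 address from the question, so check the AM335x TRM for the register you actually need):
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define GPIO_BASE   0x4804C000u   /* page-aligned base that covers 0x4804C130 */
#define REG_OFFSET  0x130u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* Map one page of physical address space into this process */
    volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    uint32_t value = gpio[REG_OFFSET / 4];   /* read the register through the mapping */
    printf("0x%08X = 0x%08X\n", GPIO_BASE + REG_OFFSET, value);

    munmap((void *)gpio, 4096);
    close(fd);
    return 0;
}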
Thank you a lot, that certainly helped.
I compiled the program you gave me and it worked perfectly in run mode in Eclipse, and in a terminal on the remote machine.
Curiously, when running the Eclipse debugger, it crashes executing:
if((fd = open("/dev/mem", O_RDWR | O_SYNC)) == -1) FATAL;
I get this error message from gdbserver
Remote debugging from host 192.168.1.2
/root/hellobone: relocation error: /root/hellobone: symbol �pen, version GLIBC_2.4 not defined in file libc.so.6 with link time reference
Child exited with status 127
GDBserver exiting
I have been trying to use fopen instead, but that gives a segmentation fault. Anyhow, I think that is a toolchain issue and not a programming issue.