I am trying to debug a Raspberry Pi Pico from VSCode using a picoprobe. After a lot of pain I managed to get everything running from an MSYS2 MinGW64 shell (I built OpenOCD in that shell). However, debugging from VSCode results in a popup saying OpenOCD: GDB Server Quit Unexpectedly. My Debug Console reads
Cortex-Debug: VSCode debugger extension version 1.6.7 git(b0f5563). Usage info: https://github.com/Marus/cortex-debug#usage
Reading symbols from arm-none-eabi-objdump --syms -C -h -w C:/VSARM/sdk/pico/pico-examples/build/blink/blink.elf
Reading symbols from arm-none-eabi-nm --defined-only -S -l -C -p C:/VSARM/sdk/pico/pico-examples/build/blink/blink.elf
Launching GDB: arm-none-eabi-gdb -q --interpreter=mi2
1-gdb-version
Launching gdb-server: "C:/VSARM/debug_tools/openocd/src/openocd.exe" -c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" -s "C:V/SARM/debug_tools/openocd/tcl" -f "c:/Users/micha/.vscode/extensions/marus25.cortex-debug-1.6.7/support/openocd-helpers.tcl" -f interface/cmsis-dap.cfg -f target/rp2040.cfg
Please check TERMINAL tab (gdb-server) for output from C:/VSARM/debug_tools/openocd/src/openocd.exe
Finished reading symbols from objdump: Time: 86 ms
Finished reading symbols from nm: Time: 115 ms
OpenOCD: GDB Server Quit Unexpectedly. See gdb-server output in TERMINAL tab for more details.
and my terminal (set to the MSYS2 MinGW64 terminal in the VSCode settings) reads
[2022-12-05T14:08:43.239Z] SERVER CONSOLE DEBUG: onBackendConnect: gdb-server session connected. You can switch to "DEBUG CONSOLE" to see GDB interactions.
"C:/VSARM/debug_tools/openocd/src/openocd.exe" -c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" -s "C:V/SARM/debug_tools/openocd/tcl" -f "c:/Users/micha/.vscode/extensions/marus25.cortex-debug-1.6.7/support/openocd-helpers.tcl" -f interface/cmsis-dap.cfg -f target/rp2040.cfg
[2022-12-05T14:08:43.310Z] SERVER CONSOLE DEBUG: onBackendConnect: gdb-server session closed
GDB server session ended. This terminal will be reused, waiting for next session to start...
My launch.json is
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Pico Debug",
            "cwd": "${workspaceRoot}",
            "executable": "${command:cmake.launchTargetPath}",
            "request": "launch",
            "type": "cortex-debug",
            "servertype": "openocd",
            // This may need to be arm-none-eabi-gdb depending on your system
            "gdbPath" : "arm-none-eabi-gdb",
            "device": "RP2040",
            "configFiles": [
                "interface/cmsis-dap.cfg",
                "target/rp2040.cfg"
            ],
            "svdFile": "${env:PICO_SDK_PATH}/src/rp2040/hardware_regs/rp2040.svd",
            "runToMain": true,
            // Work around for stopping at main on restart
            "postRestartCommands": [
                "break main",
                "continue"
            ],
            "searchDir": ["C:/VSARM/debug_tools/openocd/tcl"],
            "showDevDebugOutput": "raw",
        }
    ]
}
Does anybody see a mistake in my setup?
Currently, my best guess is that there is some sort of dependency that is only satisfied in the MSYS2 MinGW64 environment and not in the terminal (Windows PowerShell?) that VSCode uses to run the GDB/OpenOCD server.
Does anybody know how I can force VSCode (or the Cortex-Debug extension) to use the MSYS2 MinGW64 shell to run OpenOCD?
Another possible solution/workaround I see is to start the OpenOCD server manually in MSYS2 MinGW64 and then connect to that server from VSCode. Does anybody know if and how I can do this? I have only found solutions where VSCode starts both the GDB and OpenOCD server.
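Cortex-Debug does appear to support an "external" server type for exactly this case, where the extension only starts GDB and connects to a GDB server you run yourself. A minimal sketch of such a configuration (assuming OpenOCD is started by hand in the MSYS2 MinGW64 shell and is listening on its default GDB port 3333; the configuration name is made up):

{
    "name": "Pico Attach (external OpenOCD)",
    "cwd": "${workspaceRoot}",
    "executable": "${command:cmake.launchTargetPath}",
    "request": "launch",
    "type": "cortex-debug",
    "servertype": "external",
    // host:port of the manually started OpenOCD GDB server
    "gdbTarget": "localhost:3333",
    "gdbPath": "arm-none-eabi-gdb",
    "device": "RP2040",
    "svdFile": "${env:PICO_SDK_PATH}/src/rp2040/hardware_regs/rp2040.svd"
}

OpenOCD would then be started manually in the MSYS2 shell with the same scripts as before, e.g. openocd -f interface/cmsis-dap.cfg -f target/rp2040.cfg.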
Thank you for your help.
Related
I am trying to remotely debug an ARM Linux embedded device with Native Debug in VSCode on a Windows host (no WSL).
Host launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "gdb",
            "request": "attach",
            "name": "gdb",
            "executable": "${workspaceRoot}\\myprogram\\myprogram ",
            "stopAtConnect": true,
            "target": "192.168.xxx.xxx:2000",
            "remote": true,
            "cwd": "${workspaceRoot}/myprogram",
            "gdbpath": "C:\\msys64\\mingw64\\bin\\gdb-multiarch.exe",
            "debugger_args": ["-iex", "set osabi none"],
        }
    ]
}
Target
debarm:~# gdbserver --version
GNU gdbserver (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
gdbserver is free software, covered by the GNU General Public License.
This gdbserver was configured as "arm-linux-gnueabi"
debarm:~# gdbserver :2000 --attach 1966
Attached; pid = 1966
Listening on port 2000
Remote debugging from host 192.168.xxx.xxx
However, stepping gives warning: Remote failure reply: E01, similar to: GDB remote debugging fails with error E01.
I also tried the arm-none-eabi-gdb.exe from https://developer.arm.com/downloads/-/gnu-rm but it gives the same problem.
I also tried the arm-linux-gnueabi-gdb.exe from https://releases.linaro.org/components/toolchain/binaries/latest-5/arm-linux-gnueabi/ but it gives Error while reading shared library symbols for target:/lib/ld-linux.so.3.
Any suggestions for what the problem is with this approach?
Got it working with Linaro arm-linux-gnueabi-gdb.exe and adding "-iex", "set auto-solib-add 0" to debugger_args in launch.json.
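Applied to the launch.json above, the debugger_args line would then read roughly as follows (everything else stays the same):

"debugger_args": ["-iex", "set osabi none", "-iex", "set auto-solib-add 0"],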
Windows 10, Eclipse esp-idf, latest version 2021-03.
With the command line idf.py I can build and flash the esp-idf\examples\get-started\blink program, which runs on an ESP32.
In Eclipse, the build works, but the run command only displays the idf.py usage text in the console:
Usage: C:\Users\peter\esp-idf\tools\idf.py [OPTIONS] COMMAND1 [ARGS]...
[COMMAND2 [ARGS]...]...
ESP-IDF CLI build management tool. For commands that are not known to
idf.py an attempt to execute it as a build system target will be made.
.... bla bla ...
Can anybody tell me what is wrong?
Regards
There is a bug in the Eclipse/esp-idf launch bar.
The ESP target for esp32 is defined with serial port COM3.
But that info is not used.
If one defines a new ESP Target with the same serial port under a different name, then the run command will work!
See below for the details, for those interested.
cmd.exe /C "cd /D C:\Users\peter\esp-idf\components\esptool_py && C:\Users\peter\.espressif\tools\cmake\3.16.4\bin\cmake.exe -D IDF_PATH="C:/Users/peter/esp-idf" -D ESPTOOLPY="C:\Users\peter\.espressif\python_env\idf4.2_py3.8_env\Scripts\python.exe C:/Users/peter/esp-idf/components/esptool_py/esptool/esptool.py --chip esp32" -D ESPTOOL_ARGS="--before=default_reset --after=hard_reset write_flash @flash_args" -D WORKING_DIRECTORY="C:/Users/peter/eclipse-workspace/blink/build" -P C:/Users/peter/esp-idf/components/esptool_py/run_esptool.cmake"
esptool.py --chip esp32 -p COM3 -b 460800 --before=default_reset --after=hard_reset write_flash --flash_mode dio --flash_freq 40m --flash_size 2MB 0x8000 partition_table/partition-table.bin 0x1000 bootloader/bootloader.bin 0x10000 blink.bin
esptool.py v3.0
VSCode Version: 1.53.0-insider (x64)
OS Version: Microsoft Windows [Version 10.0.21292.1010]
WSL version: WSL 2
distribution: Ubuntu-20.04
Steps to Reproduce:
(in wsl.exe) Type code-insiders ~ && exit
Press ctrl+j in the VSCode window
Type explorer.exe .
Result:
It shows me this error:
<3>init: (632) ERROR: UtilConnectUnix:466: connect failed 111
Question:
How can I fix it, and why is it happening?
Extensions on WSL:
ms-vscode.cpptools
eamodio.gitlens
ms-toolsai.jupyter
ms-python.vscode-pylance
ms-python.python
I think I had a similar issue here; you have to check the environment variables of your WSL2 session.
WSL_INTEROP=/run/WSL/197_interop
If this points to a non-existent socket, the connection fails.
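A quick way to check this from inside the WSL2 session (a sketch; the final export is a workaround some people use to re-point the variable at a socket that actually exists, not an official fix):

echo $WSL_INTEROP            # the interop socket VS Code and explorer.exe will use
ls -l "$WSL_INTEROP"         # does that socket actually exist?
ls /run/WSL/                 # interop sockets that do exist
# hypothetical workaround: point WSL_INTEROP at an existing socket
export WSL_INTEROP=/run/WSL/$(ls /run/WSL | head -n 1)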
cheers
Marco
I have the following devcontainer.json file in a project.
When I try to open VSCode in a container, it crashes. The container builds successfully, but the following logs are emitted during startup. When I remove the environment variable configuration, the container starts up and stays running just fine.
I followed the example for configuring environment variables inside the dev container, according to the Visual Studio Code documentation for Advanced Container Configuration.
Question: How do I properly configure the PATH environment variable in my devcontainer.json file?
devcontainer.json
{
    "name": "Ubuntu 18.04 & Git",
    "dockerFile": "Dockerfile",
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    "containerEnv": {
        "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
    }
}
Logs
[6499 ms] Successfully built 096d41dceada
[6503 ms] Successfully tagged vsc-asdf-73cee28d5205fdd4a6063fc596248885:latest
[6506 ms] Start: Run: git rev-parse --show-toplevel
[6533 ms] Start: Starting container
[6533 ms] Start: Run: docker run -a STDOUT -a STDERR --mount type=bind,source=/Users/username/git/asdf,target=/workspaces/asdf,consistency=cached --mount source=/Users/username/.aws/credentials,target=/root/.aws/credentials,type=bind -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=/Users/username/git/asdf -e PATH=${containerEnv:PATH}:/root/.customfolder/bin/ --entrypoint /bin/sh vsc-pulumi-73cee28d5205fdd4a6063fc596248885 -c echo Container started ; while sleep 1; do :; done
[6852 ms] /bin/sh: 1: sleep: not found
[6852 ms] Container started
[6873 ms] Start: Inspecting container
[6879 ms] Start: Run in container: uname -m
[7031 ms] Start: Run in container: cat /etc/passwd
[7035 ms] Shell server terminated (code: 1, signal: null)
Error response from daemon: Container 8e0f6eeb22c358b0dfd8f1c1410c10b382ea66aa432e7e400a4564671619046f is not running
An error occurred setting up the container
Environment
MacOS Catalina
Docker Desktop 2.2.0.0
Microsoft Visual Studio Code 1.42.0
VSCode Remote-Containers extension 0.101.0
You should be able to change the property from containerEnv to remoteEnv to resolve the issue.
Only the remoteEnv property supports referencing existing container env vars. The containerEnv property is like -e for the Docker CLI and is therefore evaluated before the container is created. This is mainly useful when your Dockerfile itself depends on certain env vars being set (though you can modify the PATH inside your Dockerfile if you so desire).
For everything else, remoteEnv is the way to go, since VS Code and all sub-processes like terminals use it. Since this is evaluated after the container is created, you can update the path as the example illustrates.
"remoteEnv": {
"PATH": "${containerEnv:PATH}:/some/other/path",
"MY_REMOTE_VARIABLE": "some-other-value-here",
"MY_REMOTE_VARIABLE2": "${localEnv:SOME_LOCAL_VAR}"
}
"containerEnv": {
"PATH": "${localEnv:PATH}:/workspaces/v8/depot_tools"
}
I think that is what you need. Note that ${localEnv:PATH} resolves against the local (host) environment where the container is created, not against the container itself.
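Putting the remoteEnv suggestion together with the devcontainer.json from the question, the corrected file might look roughly like this (a sketch; only the environment block changes, and the custom bin path is the one from the question):

{
    "name": "Ubuntu 18.04 & Git",
    "dockerFile": "Dockerfile",
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    // remoteEnv is resolved after the container is created, so it can
    // safely reference the PATH that already exists inside the container
    "remoteEnv": {
        "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
    }
}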
I'm doing automated testing for an Android app. My test code is ready and runs fine both from Android Studio and from a manual command on the Android device. But when I put the command into a sh script, it fails with INSTRUMENTATION_FAILED. Can anyone help me fix this? I just don't understand why it works when run directly from the terminal but fails when run from sh.
Manual input command, which is working:
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
Result:
INSTRUMENTATION_STATUS: numtests=9
INSTRUMENTATION_STATUS: stream=
com.amap.auto.androidautomation.testcases.basemap.SmokeTest:
INSTRUMENTATION_STATUS: id=AndroidJUnitRunner
INSTRUMENTATION_STATUS: test=smoke01
INSTRUMENTATION_STATUS: class=com.amap.auto.androidautomation.testcases.
Run from sh (the command is the same as the manual one, just put into a sh file):
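For clarity, a hypothetical reconstruction of what r.sh presumably contains (nothing beyond the command already shown above):

#!/system/bin/sh
# r.sh - the same am instrument invocation as the manual command
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner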
sh r.sh
Result:
INSTRUMENTATION_STATUS: id=ActivityManagerService
INSTRUMENTATION_STATUS: Error=Unable to find instrumentation info for: Component
Info{com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUn
}tRunner
INSTRUMENTATION_STATUS_CODE: -1
android.util.AndroidException: INSTRUMENTATION_FAILED: com.amap.auto.androidauto
mation.test/android.support.test.runner.AndroidJUnitRunner
at com.android.commands.am.Am.runInstrument(Am.java:1093)
at com.android.commands.am.Am.onRun(Am.java:371)
at com.android.internal.os.BaseCommand.run(BaseCommand.java:47)
at com.android.commands.am.Am.main(Am.java:100)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:251)
: not foundnload/r.sh[2]: exit