Raspberry Pi Pico W MicroPython execution freezes a few seconds after disconnecting screen from UART - micropython

I got my program running fine as explained at: How can you make a micropython program on a raspberry pi pico autorun?
I'm installing a main.py that does:
import machine
import time

led = machine.Pin('LED', machine.Pin.OUT)
# For the RPi Pico (non-W) it was apparently like this instead:
# led = machine.Pin(25, machine.Pin.OUT)
i = 0
while True:
    led.toggle()
    print(i)
    time.sleep(.5)
    i += 1
When I power the device on by plugging the USB to my laptop, it seems to run fine, with the LED blinking.
Then, if I connect from my laptop to the UART with:
screen /dev/ttyACM0 115200
I can see the numbers coming out on my host terminal correctly, and the LED still blinks, all as expected.
However, when I disconnect from screen with Ctrl-A K, the LED stops blinking after a few seconds! It takes around 15 seconds to stop, but it has done so every time I've tested.
If I reconnect the UART again with:
screen /dev/ttyACM0 115200
it starts blinking again.
I also noticed that after I reconnect the UART and execution resumes, the count has increased by much less than the actual time passed, so one possibility is that the Pico is going into some slow low-power mode?
If I remove the print() from the program, I noticed that it does not freeze anymore after disconnecting the UART (which of course shows no data in this case).
screen -fn, screen -f and screen -fa made no difference.
Micropython firmware: rp2-pico-w-20221014-unstable-v1.19.1-544-g89b320737.uf2, Ubuntu 22.04 host.
Some variants follow.
picocom /dev/ttyACM0 instead of screen and disconnect with Ctrl-A Ctrl-Q: still freezes like with screen.
If I exit from picocom with Ctrl-A Ctrl-X instead, however, then it works. The difference between the two seems to be that Ctrl-Q logs:
Skipping tty reset...
while Ctrl-X doesn't, making Ctrl-X a good possible workaround.
The following C analog of the MicroPython program, hacked together from:
https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/pico_w/blink
https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/hello_world/usb
did not show the same problem, tested on https://github.com/raspberrypi/pico-sdk/tree/2e6142b15b8a75c1227dd3edbe839193b2bf9041:
#include <stdio.h>

#include "pico/stdlib.h"
#include "pico/cyw43_arch.h"

int main() {
    stdio_init_all();
    if (cyw43_arch_init()) {
        printf("WiFi init failed");
        return -1;
    }
    int i = 0;
    while (true) {
        printf("%i\n", i);
        cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, i % 2);
        i++;
        sleep_ms(500);
    }
    return 0;
}
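For reference, a minimal sketch of the CMakeLists.txt fragment used to build that analog might look like the following, assuming the standard pico-sdk project boilerplate with PICO_BOARD=pico_w; the target name is made up:
add_executable(blink_usb main.c)
# pico_cyw43_arch_none drives the Pico W LED without bringing up WiFi
target_link_libraries(blink_usb pico_stdlib pico_cyw43_arch_none)
pico_enable_stdio_usb(blink_usb 1)   # send stdio over the USB CDC serial port
pico_enable_stdio_uart(blink_usb 0)
pico_add_extra_outputs(blink_usb)    # also generate the flashable .uf2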
Reproduction speed can be greatly increased, from a few seconds to almost instant, by printing more data faster, as in:
import machine
import time

led = machine.Pin('LED', machine.Pin.OUT)
i = 0
while True:
    led.toggle()
    print('asdf ' * 10 + str(i))
    time.sleep(.1)
    i += 1
This corroborates the theory that the problem is linked to flow control: the sender appears to stop sending once the consumer can no longer receive fast enough.
Also asked at:
https://github.com/orgs/micropython/discussions/9633
Possibly related:
https://forums.raspberrypi.com/viewtopic.php?p=1833725&hilit=uart+freezes#p1833725

What appears to be happening here is that exiting screen (or exiting picocom without the tty reset) leaves the DTR line on the serial port high. We can verify this by writing some simple code to control the DTR line, like this:
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <termios.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <signal.h>

int main(int argc, char **argv)
{
    int fd;
    int dtrEnable;
    int flags;

    if (argc < 3) {
        fprintf(stderr, "Usage: setdtr <device> <1 or 0 (DTR high or low)>\n");
        exit(1);
    }

    if ((fd = open(argv[1], O_RDWR | O_NDELAY)) < 0) {
        perror("open:");
        exit(1);
    }

    sscanf(argv[2], "%d", &dtrEnable);

    /* Read the current modem control flags, then set or clear DTR. */
    ioctl(fd, TIOCMGET, &flags);
    if (dtrEnable != 0) {
        flags |= TIOCM_DTR;
    } else {
        flags &= ~TIOCM_DTR;
    }
    ioctl(fd, TIOCMSET, &flags);

    close(fd);
    return 0;
}
Compile this into a tool called setdtr:
gcc -o setdtr setdtr.c
Connect to your Pico using screen, start your code, and then disconnect. Wait for the LED to stop blinking. Now run:
./setdtr /dev/ttyACM0 0
You will find that your code starts running again. If you run:
./setdtr /dev/ttyACM0 1
You will find that your code gets stuck again.
The USB serial implementation on the RP2040 interprets a high DTR line to mean that a device is still connected. If nothing is reading from the serial port, it eventually blocks. Setting the DTR pin to 0 -- either using this setdtr tool or by explicitly resetting the serial port state on close -- avoids this problem.
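As a sketch of the "resetting the serial port state on close" option: on Linux, the HUPCL ("hang up on close") termios flag tells the kernel to drop DTR when the last file descriptor on the port is closed. A small tool along these lines (untested against the Pico, so treat it as an assumption) should therefore unblock it simply by opening the port with HUPCL set and closing it again:
#include <stdio.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "Usage: sethup <device>\n");
        return 1;
    }
    int fd = open(argv[1], O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    struct termios tio;
    tcgetattr(fd, &tio);
    tio.c_cflag |= HUPCL;         /* drop modem control lines (DTR) on last close */
    tcsetattr(fd, TCSANOW, &tio);
    close(fd);                    /* DTR should fall here, unblocking the Pico */
    return 0;
}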

I don't know why it works, but based on advice from larsks:
sudo apt install picocom
picocom /dev/ttyACM0
and then quit with Ctrl-A Ctrl-X (not Ctrl-A Ctrl-Q) does do what I want. I'm not sure what exactly screen is doing differently.
When quitting, Ctrl-Q shows on the terminal:
Skipping tty reset...
and Ctrl-X does not, which may be a major clue.
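If the DTR explanation above is right, another possibility (an untested assumption) is to enable "hang up on close" on the device beforehand, so that whatever closes the port also drops DTR:
stty -F /dev/ttyACM0 hupcl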

Related

VS Code with stdin/out .exe server on Windows

I have an LSP server that works fine with the VS 2019 IDE. I am now trying to get it to work with VSCode. I wrote a simple extension for VSCode that was working with the server at one point, but VSCode is now not working at all. So I decided to write a simple C program that just reads stdin and echoes the characters to stderr, not expecting it to work, but to verify that VSCode is at least trying to communicate with the server. As with my original server, this program receives absolutely nothing: VSCode is not sending any packets to the server, and I don't know why.
Here is the simple "echo" server code. All it does is read stdin one character at a time, indefinitely, echoing each character (more or less) to stderr and flush()ing each time.
#include <iostream>
#include <stdio.h>
#include <fcntl.h>
#include <errno.h>
#include <io.h>

int main()
{
    for (;;)
    {
        char buf[10];
        int readc = _read(0, buf, 1);
        if (readc <= 0) break; /* stop on EOF or read error */
        fprintf(stderr, "%d\n", buf[0]);
        fflush(stderr);
    }
    return 0;
}
Here is a stripped-down VSCode client extension, derived from the docs, which happen to provide zero information on spawning a server as a process. This spawn()s the server with a window.
import * as vscode from 'vscode';
import * as vscodelc from 'vscode-languageclient';

export function activate(context: vscode.ExtensionContext) {
    const server: vscodelc.Executable = {
        command: `C:/Users/kenne/source/repos/ConsoleApplication1/Debug/ConsoleApplication1.exe`,
        args: [],
        options: { shell: true, detached: true }
    };
    const serverOptions: vscodelc.ServerOptions = server;
    let clientOptions: vscodelc.LanguageClientOptions = {
        // Register the server for plain text documents
        documentSelector: [{ scheme: 'file', language: 'plaintext' }]
    };
    const client = new vscodelc.LanguageClient('Antlr Language Server', serverOptions, clientOptions);
    console.log('Antlr Language Server is now active!');
    client.start();
}
(Via debugging, I figured out that I needed options: { shell: true, detached: true } in the ServerOptions struct to make spawn() create a detached window for the process.) Running the client, the server is spawned with a window, but there are indeed no characters written to the server, even for the simple C "echo" program. In the debugger, I can even see that write() is called in the client code, down into the JSON write code, and then into the Node write code. With the VS2019 IDE, this all works perfectly fine.
Does anyone have any ideas on how to get an executable server that uses stdin/stdout to work with VSCode?
The answer is that the tables in the package.json file were messed up. It must contain the tables required for the server: "activationEvents" describes all the languages supported; "languages" associates a file extension with a language. In addition, the language table is duplicated in the LanguageClientOptions in the activate() function. Without these tables, VSCode may not send an open-file request to the LSP server, or may not even start the LSP server. There is also a bug in libuv that prevents "windowHidden" from being processed correctly on Windows, so the server process cannot be created with a window until that is fixed; instead, send server debugging output to a file. The server now works great with VSCode for Antlr2, 3, 4, Bison, and W3C EBNF.
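For illustration, a minimal sketch of what those package.json tables might look like; the language id and file extension here are assumptions, not the actual values:
{
  "activationEvents": [
    "onLanguage:antlr"
  ],
  "contributes": {
    "languages": [
      { "id": "antlr", "extensions": [ ".g4" ] }
    ]
  }
}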

OpenOCD exit on breakpoint

I'm developing an application on an STM32F042.
I drive everything from a makefile, including my unit tests.
I use OpenOCD and ST-LINK to flash the target.
My unit tests run on the host and on the target.
The host unit test driver returns 0 from main() on success and non-zero on failure, so the makefile knows if the tests pass.
The makefile flashes and starts tests on the target, but doesn't know if they succeed or fail.
The embedded test application turns on a red LED for fail and green for pass, so I know--now I want to automate this step.
I want to set two breakpoints in the code, one in the failure handler and one at the end of main, and tell OpenOCD to exit with zero or non-zero status if it hits one or the other breakpoint.
So my question boils down to two specific ones:
To set a breakpoint, I need to know the PC value at a specific line of code. How do I get that value from the arm-gcc toolchain?
Can I configure OpenOCD to exit on specific breakpoints, and with specific status?
Here's what I ended up with. For each target unit test, I start an OpenOCD server and connect to it with gdb. gdb runs a script that sets two breakpoints: one for success, one for failure. If it hits either breakpoint, it shuts down the OpenOCD server and exits with a code that communicates success or failure to the shell. To run the same tests on the host, I simply compile them as regular executables.
Makefile:
# target unit test binaries
foo_tests.elf bar_tests.elf baz_tests.elf bop_tests.elf: unit_test_runner.ao

# disable optimization for target unit test driver to avoid optimizing
# away functions that serve as breakpoint labels
unit_test_runner.ao: CFLAGS += -O0 -g

# link target unit test binaries for semihosting
%_tests.elf: ARM_LDLIBS += -specs=rdimon.specs -lrdimon

# host unit test binaries
foo_tests bar_time_tests baz_tests bop_tests: unit_test_runner.o

# run target unit test binaries through gdb and OpenOCD; redirecting stderr
# leaves printf output from `assert()' clearly visible on the console
%.tut: %.elf
	openocd -f interface/stlink-v2-1.cfg -f target/stm32f0x.cfg 2> $@.log &
	gdb-multiarch -batch-silent -x tut.gdb $< 2> $@-gdb.log

# run host binary
%.run: %
	./$*

tests: foo_tests.run bar_time_tests.run baz_tests.run bop_tests.run \
       foo_tests.tut bar_time_tests.tut baz_tests.tut bop_tests.tut
tut.gdb:
target remote localhost:3333
monitor arm semihosting enable  # let assert()'s printf() through
monitor reset halt
load
monitor reset init
break success                   # set breakpoint on function `success()'
commands                        # on hitting this bp, execute the following:
  monitor shutdown              # shut down the OpenOCD server
  quit 0                        # exit GDB with success code
end
break failure                   # set breakpoint on function `failure()'
commands
  monitor shutdown
  quit 1                        # exit GDB with failure code
end
continue
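An individual target test can then be run through the pattern rule above as, e.g.:
make foo_tests.tut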
unit_test_runner.c:
#include <stdio.h>
#include <stdlib.h>

/* These two functions serve as labels where gdb can place
   breakpoints. */
void success() {}
void failure() {}

/* Implementation detail for `assert()' macro */
void assertion_failure(const char *file,
                       int line,
                       const char *function,
                       const char *expression)
{
    printf("assertion failure in %s:%d (%s): `%s'\n",
           file, line, function, expression);
    failure();
    exit(1);
}

/* This function is necessary for ARM semihosting */
extern void initialise_monitor_handles(void);

/* client code implements this function */
extern void tests(void);

int main(int argc, char* argv[])
{
#ifdef __arm__
    initialise_monitor_handles();
#endif
    tests();
    success();
    return 0;
}
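The `assert()' macro itself is not shown; a sketch of what it might look like, given the assertion_failure() signature above (a guess, not the original):
/* Hypothetical assert() macro matching assertion_failure() above. */
#define assert(expr) \
    ((expr) ? (void)0 \
            : assertion_failure(__FILE__, __LINE__, __func__, #expr))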

qDebug and cout don't work

I have this simple code
#include <QtCore/qdebug.h>
#include <QtCore/qcoreapplication.h>
#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    cout << "pluto" << endl;
    QCoreApplication app(argc, argv);
    qDebug() << "pippo" << endl;
    return app.exec();
    //return 0;
}
I compiled it with MinGW in Eclipse with no errors, but when I run the code no string message appears on the console. What is wrong? Thanks.
Luca
For cout to work on Windows, you need to have CONFIG+=console in the .pro file. It shouldn't have any effect on other platforms, so you can just add it unconditionally. You can use qmake conditionals if you only want it for debug builds or something, or you can pass it to qmake as a command line option if that is more convenient for your workflow:
qmake ...other args... CONFIG+=console
Under Windows, qDebug() output by default goes to the Windows debug log. You can get at it in two ways:
Use an application such as an IDE, or the standalone DebugView tool from Microsoft.
Use the qInstallMessageHandler Qt function in your program code to catch the debug output and do what you want with it, such as printing it with cout and/or logging it (a sketch follows).
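A minimal sketch of that approach (Qt 5 API; the handler name is made up), forwarding qDebug() messages to stderr:
#include <QtGlobal>
#include <QString>
#include <cstdio>

// Hypothetical handler: forward every Qt message to stderr and flush.
void consoleHandler(QtMsgType type, const QMessageLogContext &context,
                    const QString &msg)
{
    Q_UNUSED(type);
    Q_UNUSED(context);
    fprintf(stderr, "%s\n", qPrintable(msg));
    fflush(stderr);
}

// In main(), before the first qDebug() call:
// qInstallMessageHandler(consoleHandler);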
If you really need to have that on standard output, you can try QTextStream:
#include <QTextStream>

QTextStream cout(stdout);
cout << "string\n";
QTextStream cerr(stderr);
cerr << "error!\n";

"Failed to open X display" when trying to run project from within Eclipse

I have a simple OpenGL/GLFW test program in Eclipse
#include <iostream>
#include <string>

#include <GL/glew.h>
#define GLFW_INCLUDE_GLU
#include <GLFW/glfw3.h>

void errorCallback(int error, const char *description)
{
    std::cerr << description << " (GLFW error " << error << ")" << std::endl;
}

int main(int argc, char **argv)
{
    int returnValue = 0;
    try {
        // Initialise GLFW.
        glfwSetErrorCallback(errorCallback);
        if(!glfwInit()) throw std::string("Could not initialise GLFW");
        /* ...do everything else... */
    } catch(std::string const &str) {
        std::cerr << "Error: " << str << std::endl;
        returnValue = 1;
    }
    return returnValue;
}
However, running it causes the following to come up in the console:
X11: Failed to open X display (GLFW error 65542)
Error: Could not initialise GLFW
i.e. it fails during glfwInit() (I commented out all the subsequent code just to make sure the error doesn't actually happen during window creation or something). However, navigating to the build directory (using my file manager, not Eclipse, that is) and manually launching the program from there works just fine.
Anyone know what the problem could be?
Sounds to me like Eclipse clears all or some of the environment variables when launching the program. The environment variable DISPLAY tells the program how to connect to the X11 server. Without that information it can't open the display, giving you that error.
A simple test to verify this: add the following line right before glfwInit() (never mind that this is not C++ and doesn't use iostream; that's okay for a quick test):
fprintf(stderr, "DISPLAY=%s\n", getenv("DISPLAY"));
You must include the headers stdio.h and stdlib.h.
Eclipse indeed wasn't passing any environment variables to my program (thanks datenwolf for getting me started). It's possible to select which environment variables to pass to the program by going to Run Configurations, selecting the appropriate launch configuration under "C/C++ Application" (I only had the default one), opening the Environment tab, hitting the Select button (it lists all available environment variables), and picking the ones you want.

custom boot sector virtual CD

Following lots of "How to build your own operating system" tutorials, I'm supposed to write a custom loader to the floppy disk boot sector via:
#include <sys/types.h> /* unistd.h needs this */
#include <unistd.h>    /* contains read/write */
#include <fcntl.h>

int main()
{
    char boot_buf[512];
    int floppy_desc, file_desc;

    file_desc = open("./boot", O_RDONLY);
    read(file_desc, boot_buf, 510);
    close(file_desc);

    /* Boot signature in the last two bytes of the sector. */
    boot_buf[510] = 0x55;
    boot_buf[511] = 0xaa;

    floppy_desc = open("/dev/fd0", O_RDWR);
    lseek(floppy_desc, 0, SEEK_SET); /* write at the start of the disk */
    write(floppy_desc, boot_buf, 512);
    close(floppy_desc);
    return 0;
}
I don't have a PC with a floppy drive, and I'd prefer to try the whole project in a virtual machine via VirtualBox.
So how do I write a custom boot sector to a virtual CD image that will be booted by my virtual machine? :)
If you have any alternative way, please suggest it :)
(note: this assumes you are on linux)
Instead of writing to /dev/fd0, which requires a real floppy drive, you could write to a disk image which can then be used to boot VirtualBox. However, you need to pad the file to 1.44 MiB, since that's the size of a typical floppy; a sketch of one way to do that in C follows.
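Here is a hedged sketch (file names are examples, mirroring the question's code) that writes the boot sector into a blank 1.44 MiB image:
#include <stdio.h>

int main(void)
{
    static char image[2880 * 512]; /* 1.44 MiB: 2880 sectors of 512 bytes */

    FILE *boot = fopen("./boot", "rb");
    if (!boot) { perror("fopen boot"); return 1; }
    fread(image, 1, 510, boot);    /* loader occupies sector 0 */
    fclose(boot);

    image[510] = 0x55;             /* boot signature, as in the question */
    image[511] = (char)0xaa;

    FILE *out = fopen("floppy.img", "wb");
    if (!out) { perror("fopen floppy.img"); return 1; }
    fwrite(image, 1, sizeof(image), out);
    fclose(out);
    return 0;
}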
An even better way would be to first create the boot sector binary (with the 0xAA55 'magic code'), and then pad it out with dd: something like dd if=/dev/zero of=Floppy.flp bs=512 count=2880 to create a blank Floppy.flp, followed by dd if=MyBootsectorBin of=Floppy.flp conv=notrunc to drop the boot sector into it (dd stops at the end of its input, so a single invocation would not pad the image). This can then be booted via VirtualBox (or my preference, QEMU, via qemu -fda Floppy.flp).
I'm not sure about virtual CDs, but you can easily create an ISO image to write to a disc. The required program for this is mkisofs, and more can be read about it here.