Unable to debug PyCharm Remote Python scripts (docker-compose)

Using: PyCharm 2021.3.2 (Professional Edition)
Situation:
I have a docker-compose.yml based deployment.
It contains one image that's deployed several times, with different behaviour selected by environment variables.
What I'm finding is that the built-in Remote Python debugging works when the
image runs gunicorn (i.e. I can set breakpoints and pause the program),
but I cannot debug a plain Python script that exists in the same image (i.e. I cannot set breakpoints or pause the program).
I'm using a separate Python interpreter for each service in the docker-compose file,
as recommended. Each service uses the same image, controlled by
variations in the environment variables. I have an interpreter for the
gunicorn app in the docker-compose file, and another for the Python script.
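For reference, the compose file looks roughly like this (a trimmed sketch; the service and image names here are made up):

version: "3.8"
services:
  web:
    image: myapp:latest
    environment:
      APP_MODE: gunicorn    # entrypoint starts gunicorn
  worker:
    image: myapp:latest     # same image, different behaviour
    environment:
      APP_MODE: script      # entrypoint runs a plain Python script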
Summary of the gunicorn run-config
Script path: /usr/local/bin/gunicorn
Parameters: app.app:app
This runs, as expected, and can be debugged (i.e. I can set
breakpoints and pause the app).
Summary of the Python script/module config
This is the same image, but instead of running gunicorn with arguments
I'm just running some Python that's in the image.
Script path: test.py
Parameters:
I can launch the app in its container and see the container logs.
But, unlike the gunicorn run-config above,
debugging does not work. In this container I cannot set breakpoints
or pause the script's execution.
I've tried everything I can think of, but it seems that plain scripts/modules can't be
debugged in my image, whereas gunicorn can be.
I have other containers that run Flask and Celery as the main program, and
they too can be debugged. But my attempts at debugging 'raw' Python scripts all fail.
This is the latest PyCharm Pro, and I am perplexed as to why the image can be
debugged when running gunicorn but not when running a Python script.
Has anyone else encountered this?
What am I doing wrong?

Related

MongoDB init script doesn't run on Docker run by Jenkins SSH agent

I have a Docker container with MongoDB, with a /docker-entrypoint-initdb.d/... script with my initial mongo config.
The problem is that it doesn't get executed when run by Jenkins (through docker-compose up via SSH). When I start the container "manually" (same command through the console, also over SSH), everything is fine.
I'm a newbie at Jenkins, and I think that's the issue: the Jenkins SSH agent creates a workspace that differs from the directory Docker uses when I run it through the terminal. But all the required files are there. Maybe it has something to do with the script being executed only on initial startup? I tried removing it from the agent's workspace so it would be initialized again, but still no luck.
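One thing that may be worth checking (a guess, based on the fact that the official images run /docker-entrypoint-initdb.d/ scripts only when the data directory is empty): if a volume from an earlier run persists on the Jenkins host, the init script is skipped. Something like

docker-compose down -v   # also removes the volumes (this destroys existing data!)
docker-compose up -d

would force the database to be re-initialized.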

How to run initialization commands after SSH in VS Code Remote?

Problem
I am trying to connect to my school's computing cluster (a Linux server with a "login node" and a "computing node") using VS Code's Remote SSH, but I cannot figure out how to run a command after SSH-ing.
Goal
I simply want to view Python code and test some small lines in a .ipynb jupyter notebook in the computing platform's environment.
Description
Basically, in the command line of my local machine (or MobaXterm on a Windows machine), I first log onto the computing platform's login node with ssh -Y -L PORT:127.0.0.1:PORT username@computing.cluster.ip, and then run srun -t 0-12:00 --pty -p gpu --gres=gpu:1 --x11 --tunnel PORT:PORT /bin/bash to log onto the computing node interactively (the shown command allows for port forwarding). The problem is that in VS Code I can only connect to the login node, and after that there's no way for me to run another command and log onto the computing node. The reason I need to get to the computing node is that I want to test something with a .ipynb file interactively in VS Code while reading the code, and the login node does not allow me to perform computation.
Failed trials
I've been trying code-server, but it does not support .ipynb well (it keeps asking me to install Jupyter Notebook even though I have installed it in my conda env), possibly because it by default picks up the HPC cluster's Python interpreter, which I cannot modify (I can't even select a Jupyter kernel in code-server). I also tried to use Jupyter Notebook directly (opening Jupyter with port forwarding after getting onto the computing node), but reading code in it is much more inconvenient.
Would greatly appreciate your suggestions!
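One possible direction (a sketch, not tested on your cluster; it assumes your VS Code Remote-SSH version supports the remote.SSH.enableRemoteCommand setting): put the srun invocation into ~/.ssh/config as a RemoteCommand, then point VS Code at that host:

Host cluster-compute
    HostName computing.cluster.ip
    User username
    RequestTTY yes
    RemoteCommand srun -t 0-12:00 --pty -p gpu --gres=gpu:1 --x11 --tunnel PORT:PORT /bin/bash

With "remote.SSH.enableRemoteCommand": true in VS Code's settings, connecting to cluster-compute should land you on the computing node rather than the login node.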

How can I launch postgres server headless (without terminal) on Windows?

Using Postgres 9.5 and the libpqxx C++ bindings, I want to launch a copy of Postgres that is not installed on the user's machine, but is instead packaged in my application directory.
Currently, I am using pg_ctl.exe to start and stop the server, however when we do this, pg_ctl.exe seems to launch postgres.exe in a new terminal window.
I want it to launch postgres.exe in a headless state, but can't work out how.
I have tried enabling/disabling the logging collector, setting the logging method to a csv file (instead of stdout/stderr), and a couple of other logging related things, but I don't think the issue is the logging.
I have also tried running postgres.exe manually (without pg_ctl) and can get that to run headless by spawning it as a background process and redirecting the logs, but I would prefer to use the "pg_ctl start" api for the "wait for startup" (-w), and "timeout" (-t) options that it provides.
I believe you won't be able to do that with pg_ctl.
It is perfectly fine to start PostgreSQL directly through the server executable postgres.exe. Alternatively, you can use pg_ctl register to create a service and start the service.
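For example (a sketch; the service name and data directory are placeholders):

pg_ctl register -N "MyAppPostgres" -D "C:\myapp\pgdata"
net start MyAppPostgres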
In my use case, I was able to resolve the issue by running pg_ctl.exe using
CreateProcess, and providing the dwCreationFlags CREATE_NEW_PROCESS_GROUP | CREATE_NO_WINDOW.
I was originally using CREATE_NEW_PROCESS_GROUP | DETACHED_PROCESS, but DETACHED_PROCESS still allowed a Postgres terminal to appear. This is because DETACHED_PROCESS spawns pg_ctl without a console, but any process that inherits stdin/stdout from pg_ctl will try to use its console, and since there isn't one, one gets spawned. CREATE_NO_WINDOW, by contrast, launches the process attached to a conhost.exe whose console has no window. When the executables spawned by pg_ctl try to write to the terminal, they successfully write to the windowless console owned by that conhost.exe.
I am now able to run pg_ctl from code with no console appearing.
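For reference, the call looks roughly like this (a sketch; the pg_ctl path, data directory, and timeout are placeholders):

#include <windows.h>

int main()
{
    // Command line must be writable for CreateProcessW.
    wchar_t cmd[] = L"\"C:\\myapp\\pg\\bin\\pg_ctl.exe\" start -D \"C:\\myapp\\pgdata\" -w -t 60";
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CREATE_NO_WINDOW: pg_ctl and every process that inherits its console
    // get a conhost.exe with no window, so no terminal ever appears.
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                        CREATE_NEW_PROCESS_GROUP | CREATE_NO_WINDOW,
                        NULL, NULL, &si, &pi))
        return 1;

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}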

Ansible return status

I have an Ansible playbook which deploys a JBoss EAP instance alongside some other things. The playbook runs fine until it reaches the point of starting JBoss using the provided standalone.sh script. I am using the shell module to start this script, and it works in that JBoss starts; however, when the task is executed Ansible does not return any status message like 'changed' or 'ok' and just seems to hang.
Is there a way I can force Ansible to see this as something which has changed the system state?
I don't personally use JBoss, but this sounds to me like the standalone.sh script simply isn't launching JBoss in the background, so Ansible is waiting for it to end, and it never does.
There are a few potential ways you can address this. Given the information in this thread you might want to try a task like this:
- name: start jboss
  shell: nohup standalone.sh > /dev/null
  async: True
  poll: 0
The poll: 0 tells Ansible to 'fire and forget' the command. If you don't include this then Ansible will kill the process when it returns from the remote server.
Another possibility is to use an init script. The thread I linked to above points to a location where you should be able to find an init script. You could install that (and leave it disabled if you don't want jboss to start up when the system reboots), and then simply run it via the service command from Ansible:
- name: start jboss
  service: name=jboss state=started
Edit: If you're unwilling or unable to use nohup or an init script that puts jboss into the background then another alternative is to make use of screen if you have that installed and available to you. I regularly do this when invoking long-running commands from Ansible so that I can check back well after the playbook has been run to check on the status of the command:
- name: Invoke long running command
  command: /usr/bin/screen -d -m /path/to/my/command
  async: True
  poll: 0
This will launch a detached screen session and invoke your command inside it. Ansible will promptly forget about it, leaving the command to run on its own.

How to make a server daemon that restarts automatically when it's terminated unexpectedly?

I'm trying to run OrientDB on Ubuntu. Currently, I'm running it with bin/server.sh. This works fine, except that it runs in the foreground of the shell. I can send it to the background with Ctrl+Z and the bg command, but that doesn't mean it's running as a daemon.
I want the program to keep running after I log out, and to be started again when it terminates unexpectedly or the OS restarts, like MS Windows services. The problem is I don't know how to do this.
How can I run a program as a long-running service?
If you do not own the server, look into using the "screen" command. It will allow you to run a command, detach from the console where the command is running, then log out while leaving it running. You may reconnect to the running screen to see output or restart the script. Here's more info about the screen command:
http://www.manpagez.com/man/1/screen/
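For example (the session name is arbitrary):

screen -dmS orientdb bin/server.sh    # start the server in a detached session
screen -r orientdb                    # reattach later to check on it

Press Ctrl+A then D to detach again without stopping the server.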
If you own the server, you should write an init script. It's not very hard, and you can set it up to run automatically on startup. The system will run the script with a "start" parameter when you want it to start, and a "stop" parameter when you want it to stop. Here's more detailed information:
http://www.novell.com/coolsolutions/feature/15380.html
If the command doesn't already detach from the console (run in daemon mode), then in the init script place the command in parentheses so it runs in its own shell. You will not see any output unless you pipe it to a file within the parentheses.
(bin/server.sh >> /var/log/server.log)
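A minimal sketch of such an init script, using the sub-shell trick above (the paths and the shutdown command are placeholders for your installation):

#!/bin/sh
case "$1" in
  start)
    (/opt/orientdb/bin/server.sh >> /var/log/server.log 2>&1 &)
    ;;
  stop)
    /opt/orientdb/bin/shutdown.sh
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac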