Prefect ShellTask fails in Supervisor - supervisord

I'm running a Prefect local agent under Supervisor.
I have two tasks in the pipeline: one runs a shell script and the other runs a Python script.
I'm able to execute the Python task with no issues, but I'm unable to execute the shell task under Supervisor.
(I'm able to execute both tasks outside Supervisor with no issues.)
This is my supervisor config:
[program:prefect-agent]
environment=PATH="/home/user/venv/bin"
command=/bin/bash -c 'source activate; exec prefect agent local start'
autostart=true
autorestart=true
This is the Prefect task:
import prefect
from prefect import task
from prefect.tasks.shell import ShellTask

@task(name="task1", skip_on_upstream_skip=False, log_stdout=True)
def task1(data):
    logger = prefect.context.get("logger")
    cmd = ShellTask(
        command="/home/user/some_shell_script.sh",
        stream_output=True,
        log_stderr=True)
    cmd.run()
    return data
I understand it has something to do with Supervisor needing to initialize a bash shell (and its environment) before executing the shell task, but I'm not sure how to achieve this.

Figured it out.
I needed to append the existing PATH, via the %(ENV_PATH)s expansion, to the environment setting in the conf:
[program:prefect-agent]
environment=PATH="/home/user/venv/bin:%(ENV_PATH)s"
command=/bin/bash -c 'source activate; exec prefect agent local start'
autostart=true
autorestart=true
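For reference, supervisord expands %(ENV_X)s to the value of the environment variable X from supervisord's own environment, so the line above prepends the virtualenv bin directory while preserving the system PATH that the shell script needs. A quick way to confirm what the supervised agent actually sees is to run a throwaway flow with a ShellTask that prints the PATH (a minimal Prefect 1.x sketch; the flow and project names are assumptions, not from the original post):

from prefect import Flow
from prefect.tasks.shell import ShellTask

# Throwaway task that echoes the PATH visible to the supervised agent
print_path = ShellTask(command="echo $PATH", stream_output=True)

with Flow("debug-path") as flow:
    print_path()

flow.register(project_name="my-project")  # project name is an assumption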

Related

Execute powershell script with gitlab-runner on local windows machine

I have the following setup:
a Windows PC with gitlab-runner installed (working)
a PowerShell script on the same PC that starts an application
a GitLab server that connects to this local PC and starts the PowerShell script
Now, when I start the PowerShell script directly on the local PC, the application starts and terminates when it is done, working as expected. When I start the same PowerShell script via the GitLab server (yml file), I can see that the application has been started (a new process in Task Manager), but it is not actually running and it never terminates.
When I manually end the task, I see that the GitLab job terminates as well.
Questions:
What could be the root cause?
Is it possible to run the PowerShell script with gitlab-runner? I think there is a way with the "exec" command. What does the command look like when calling the PowerShell script?
Is it possible to run the application not in the background, in order to see what's going on?
Anything else?
Thanks in advance.
I think there is a bug in the GitLab runner on Windows: no matter which shell you configure in config.toml, the runner will always use cmd.exe for a local exec run.
Specify the --shell argument to override the default cmd.exe shell:
> gitlab-runner exec shell your_job --shell pwsh
If you run this locally in your project, it outputs to .builds/, so add this to your .gitignore because git will see it and think you might want to add a submodule.
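For illustration, a minimal .gitlab-ci.yml job that calls the script might look like this (the job name run_app and the script path are assumptions, not from the original post):

run_app:
  script:
    - powershell -NoProfile -File C:\path\to\your_script.ps1

You can then try it locally on the runner PC with the exec command above:

gitlab-runner exec shell run_app --shell pwsh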

Run PowerShell script when container starts

How do I run a simple PowerShell script after a Docker container starts?
FROM ...
ENTRYPOINT ["powershell", "C:\scripts\remotetools.ps1"]
or
FROM ...
CMD ["powershell", "C:\scripts\remotetools.ps1"]
Neither worked.
Take the ENTRYPOINT/CMD out of the Dockerfile, build the image again, and run it. Find the container ID with:
docker container ls
Now run your command through docker exec so you can see whether it works and get better debug output:
docker exec <HEX_CONTAINER_ID> powershell C:\scripts\remotetools.ps1
You may also need the --privileged flag; if the script still doesn't run, you may be looking at a permissions issue.
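One thing worth checking once the script runs cleanly via docker exec: in the JSON (exec) form of ENTRYPOINT/CMD, backslashes in Windows paths must be escaped, otherwise the line is not valid JSON and Docker falls back to treating it as a shell-form string. A hedged sketch of the entrypoint line, assuming the script really lives at C:\scripts\remotetools.ps1 inside the image:

# Exec form: backslashes escaped so the JSON array parses correctly
ENTRYPOINT ["powershell", "-NoProfile", "-File", "C:\\scripts\\remotetools.ps1"]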

Windows - Docker CMD does not execute

For the life of me, I cannot get my provisioning script to execute when I run my container. Down the road, I will need to pass arguments into the docker run command to replace 'Hiiii' and '123' for multiple container deployments.
This is my Dockerfile:
FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1198
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
COPY *.ps1 /Container/
COPY ["wwwroot", "/inetpub/wwwroot"]
COPY ["Admin", "/Program Files (x86)Admin"]
COPY ["Admin/web.config", "/Program Files (x86)/Admin/web_default.config"]
#ENTRYPOINT ["powershell"]
CMD ["powershell.exe", -NoProfile, -File, C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -parm2 '123']
I have also tried the shell version of CMD as follows
CMD powershell -NoProfile -File C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -Parm2 '123'
These are the commands I am using:
docker image build -t image:v1 .
docker run --name container -p 8001:80 -d image:v1
After I create and run the container, I see that the script did not run or failed. However, I can exec into PowerShell in the container and run the script manually, and it works fine; I see all the changes that I need.
docker exec --interactive --tty container powershell
C:\Container\Start-Admin.Docker.Cmd.ps1 -Parm1 'Hiiii' -Parm2 '123'
I am just at a loss as to what I am missing regarding CMD.
Thanks!
I was able to get it working the way I had hoped. Though I am still working out some details in the provisioning script, this is how I ended up getting the result I wanted from the Docker side of things.
FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1198
# The SHELL line needs to be removed, and any RUN commands need to be immediately preceded by 'powershell', e.g. RUN powershell ping google.com
#SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
COPY *.ps1 /Container/
COPY ["wwwroot", "/inetpub/wwwroot"]
COPY ["Admin", "/Program Files (x86)Admin"]
COPY ["Admin/web.config", "/Program Files (x86)/Admin/web_default.config"]
ENV myParm1 Hiiii
ENV myParm2 123
ENTRYPOINT ["powershell", "-NoProfile", "-Command", "C:\\Container\\Start-Admin.Docker.Cmd.ps1"]
CMD ["-parm1 $Env:myParm1 -parm2 $Env:myParm2"]
The docker run command looks like this
docker run -d -p 8001:80 -e "myParm1=byeeeee" --name=container image:v1
I hope this helps someone else that is in my boat. Thanks for all the answers!
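As a side note (a generic Docker check, not something from the original post): if it is ever unclear what the container will actually execute, docker inspect shows the Entrypoint and Cmd baked into the image, which makes it easy to see how the two lines above combine:

docker inspect --format "{{.Config.Entrypoint}} {{.Config.Cmd}}" image:v1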
You can give this a try. ARG passes the variable to the build shell, where it is accessible in PowerShell via $env:.
ARG parmOne=Hiiii
ARG parmTwo=123
# Run a native PowerShell session and pass the script as a command.
RUN ["powershell", "-NoProfile", "-Command", "C:\\Container\\Start-Admin.Docker.Cmd.ps1 -Parm1 $env:parmOne -parm2 $env:parmTwo"]

Running Python Script in Background Infinitely

I am trying to write a Python script that runs on another server, so that even if I close the connection from my PC's terminal, it keeps running on that server. While the script is alive, it runs indefinitely, listening for events from a website (UI); when an event occurs it starts the appropriate Docker containers and keeps listening for PostgreSQL events.
When I tried to use nohup (to run the script in the background), it did run in the background but was unable to listen to any of the events. Has anyone worked on something similar before? Please share your thoughts.
I am sharing a part of my script.
# Fragment from inside an async method of a class; the surrounding code
# (and the imports: asyncio, re, subprocess, asyncpg) is omitted here.
self.pool = await asyncpg.create_pool(user='alg_user', password='algy',
                                       database='alg', host='brain', port=6543)
async with self.pool.acquire() as conn:

    def enqueue_listener(*args):
        self.queue.put_nowait(args)

    await conn.add_listener('task_created', enqueue_listener)
    print("Added the listener")

    while True:
        print("---- Listening for new job ----")
        conn2, pid, channel, payload = await self.queue.get()
        x = re.sub(r"[^\w]", " ", payload).split()
        print(x)
        if x[5] == '1':
            tsk = 'TASK_ID=%s' % str(x[1])
            if x[3] == '1':
                command = ("docker run --rm -it -e ALGORITHM_ID=1 -e TASK_ID=%s "
                           "--network project_default project/docked_prol:1.0" % str(x[1]))
                subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
            if x[3] == '8':
                command = ("docker run --rm -it -e ALGORITHM_ID=8 -e TASK_ID=%s "
                           "--network project_default project/docked_pro:1.0" % str(x[1]))
                subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
The script runs absolutely fine when I keep it running manually; I just need some advice on how to deploy it.
First of all, I am here 3 years later.
To run a script indefinitely as a background task, you need a process manager tool. PM2 is my favorite process manager; it is made in Node.js, but it can run any terminal task because it is a CLI.
Basically, you install Node.js and npm to get pm2. (You can visit NodeJs.org to download the installer.)
You need to install pm2 as a global module using npm install -g pm2 in your terminal.
You can check that it is installed with pm2 -v.
Then you can start your Python script with pm2 start file_name.py.
It will create a background process for your script and restart it forever.
If you are running something that takes a long time and you don't want it restarted when it finishes, you can disable restarting by adding the parameter --no-autorestart to the command (pm2 start file_name.py --no-autorestart).
If you want to see the logs or the state of the task, you can use pm2 status, pm2 logs and pm2 monit.
If you want to stop a task, you can use pm2 stop task_name.
You can use pm2 reload all or pm2 update to start all the tasks back up.
You can shut down the pm2 daemon (and everything it is running) with pm2 kill.
For more information you can visit the PM2 Python documentation.
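For completeness, a slightly more explicit start command might look like this (the script name listener.py, the process name pg-listener, and the python3 interpreter are illustrative assumptions, not from the original post):

pm2 start listener.py --name pg-listener --interpreter python3
pm2 logs pg-listener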
Running something in the background via nohup will only work if the process/script runs without needing external input, because there is no way to provide manual input to a background process.
First, check whether the process is still running in the background (ps -fe | grep processname).
If it is running, check the nohup.out file to see where the process is getting stuck. It is generated in the directory where you started the process and will give you some idea of what is going on inside it.
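For reference, a typical way to start the script with nohup so it survives the SSH session ending and keeps a log (the file names here are assumptions):

# run detached, append stdout/stderr to a log file, and record the PID
nohup python3 listener.py >> listener.log 2>&1 &
echo $! > listener.pid
# later, check it is still alive
ps -fe | grep listener.py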

Run a perl script at startup in Ubuntu

I have a Perl script that I need to run once at startup, with an argument, under my user account.
So when the system boots up, it needs to execute a command like this:
./path/to/script.pl start
Any ideas?
You could use a line in your crontab (crontab -e).
To run a command at startup:
Edit /etc/crontab
Add the following line:
@reboot root perl ./path/to/script.pl start
^^^ Runs as root. Change "root" to "BlackCow" to run as BlackCow.
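Since the question asks to run the script under a specific user account, the same thing can be done from that user's own crontab (crontab -e as that user), where no user field is given:

@reboot perl ./path/to/script.pl start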
Or, you could use upstart (add a .conf file to /etc/init/). Here's a copy and paste from my notes:
Use upstart to run a daemon at reboot/start
e.g. /etc/init/prestocab.conf:
#!upstart
description "node.js server"
author "BlackCow"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on shutdown
script
export HOME="/root"
exec sudo -u root /usr/local/bin/node /home/prestocab/prestocab.com/www/socket.io/server.js 2>&1 >> /var/log/prestocab.log
end script
To use:
start prestocab
stop prestocab
restart prestocab
You might want to use some sort of process monitor to restart the daemon if it crashes.
It depends on what init system you are using. If your version of Ubuntu is using Upstart,
you have to configure the appropriate Upstart start scripts; if not,
use the rc scripts for your runlevel. Check update-rc.d.
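As a rough sketch of that SysV route (the init script name myscript is an assumption, and the wrapper itself would still need the usual LSB headers and a start/stop case statement):

sudo cp myscript /etc/init.d/myscript
sudo chmod +x /etc/init.d/myscript
sudo update-rc.d myscript defaults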
On Ubuntu, the simplest way is to add this line to your /etc/rc.local file (before the exit 0 line, substituting username with your own user name):
su -c "./path/to/script.pl start" username &