How do I capture an "eth1 is ready" event in an /etc/init/ job on RHEL 6 with Upstart?

I've tried the following, but neither works:
start on (net-device-up IFACE!=lo and runlevel [2345])
start on started network-interface INTERFACE=eth1
I see that the networking service is still brought up by calling the init.d scripts,
so I doubt that on RHEL 6 there is a real Upstart "event" for a NIC coming up.
Does anyone have any ideas?

I figured it out, more or less, with:
start on stopped rc RUNLEVEL=[2345]
The Upstart job rc (/etc/init/rc.conf) just calls all the init scripts via
exec /etc/rc.d/rc $RUNLEVEL
When all the rc scripts are done, all the NICs are up, and the "rc" job in Upstart does go through a state change and emits its stopped event.
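For reference, here is a minimal sketch of a complete job built around that stanza (the file name and exec command are placeholders, not from the original question):
# /etc/init/myapp.conf - hypothetical example job
description "run once the rc job has finished, i.e. after all init scripts (and the NICs) are up"
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [016]
exec /usr/local/bin/myapp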

Related

Supervisor kills Prefect agent with SIGTERM unexpectedly

I'm using a Raspberry Pi 4, v10 (buster).
I installed supervisor per the instructions here: http://supervisord.org/installing.html
Except I changed "pip" to "pip3" because I want to monitor running things that use the python3 kernel.
I'm using Prefect, and supervisord.conf runs the program with command=/home/pi/.local/bin/prefect "agent local start" (I tried this with and without the double quotes).
Looking at the supervisord.log file, it seems like the Prefect agent does start; I see the ASCII art that normally shows up when I start it from the command line. But then it shows it was terminated by SIGTERM; not expected, and WARN received SIGTERM indicating exit request.
I saw this post: "Supervisor gets a SIGTERM for some reason, quits and stops all its processes", but I don't even have the 10Periodic file it references.
Anyone know why/how Supervisor processes are getting killed by SIGTERM?
It could be that your process exits immediately because you don't have an API key in your command; one is required to connect your agent to the Prefect Cloud API. Additionally, it's a best practice to always assign a unique label to your agents; below is an example with "raspberry" as a label.
You can also check the logs/status:
supervisorctl status
Here is a config you can try; you can also specify a working directory in your Supervisor config (I'm not sure whether the environment variables are needed, but I saw them used by another Raspberry Pi Supervisor user):
[program:prefect-agent]
command=/home/pi/.local/bin/prefect agent local start -l raspberry -k YOUR_API_KEY --no-hostname-label
directory=/home/pi
user=pi
environment=HOME="/home/pi",USER="pi"
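After editing the config, something along these lines (assuming a system-wide supervisord with the default socket) should apply the change and show whether the agent stays up:
sudo supervisorctl reread                 # re-parse changed config files
sudo supervisorctl update                 # apply them, (re)starting affected programs
sudo supervisorctl status prefect-agent   # should settle on RUNNING, not FATAL
sudo supervisorctl tail prefect-agent     # inspect the agent's own output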

Stop Mosquitto autorestart on CentOS 7

I'm trying to stop the Mosquitto broker service on a CentOS 7 server.
I've stopped the service with
sudo systemctl stop mosquitto.service
then I've disabled it with
sudo systemctl disable mosquitto.service
With ps I still get
/usr/sbin/mosquitto -c /etc/mosquitto/mosquitto.conf
If I kill it, it restarts automatically, and even after a reboot it's still running.
The process is owned by another user (admin).
How can I stop it definitively?
This has nothing to do with Mosquitto but with how systemd manages its services.
systemctl disable only affects the autostart of the service; a disabled service will still get started if another service depends on it.
Let's say you have a service mqtt-client depending on mosquitto via e.g. Wants=mosquitto.service. Every time mqtt-client is started, the mosquitto service gets started as well, even if it is disabled.
So one way is to prevent mqtt-client (and every other service that depends on mosquitto) from starting, or to remove the dependency.
Another approach is to prevent the service from being loaded at all by masking it:
systemctl mask mosquitto - this way you can neither start it manually nor can another service start it.
In the long run I would recommend reworking your dependencies, since masking just creates a symlink to /dev/null: nothing happens if the service gets loaded, and you cannot start it yourself without unmasking it first.
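Putting that together, a sketch of the commands involved (the reverse-dependency listing is how you would hunt down whichever unit keeps pulling mosquitto back in):
systemctl status mosquitto.service                        # shows which unit started it
systemctl list-dependencies --reverse mosquitto.service   # what depends on it
sudo systemctl stop mosquitto.service
sudo systemctl mask mosquitto.service                     # symlinks the unit to /dev/null
sudo systemctl unmask mosquitto.service                   # undo, if you change your mind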

sbt docker:publish - app crashes but container doesn't

I'm building docker images for my Scala applications using the sbt-native-packager plugin. I noticed that when the process inside a container crashes (log shows Exception in thread "main"... and the process is definitely dead), the container is still "alive":
me@my-laptop$ docker exec 5cca ps
PID TTY TIME CMD
1 ? 00:00:08 java
152 ? 00:00:00 ps
The generated Dockerfile is:
FROM java:openjdk-8-jre
WORKDIR /opt/docker
ADD opt /opt
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/the-app-name"]
CMD []
where bin/the-app-name is a pretty big auto-generated bash script that gathers all the necessary parameters (classpath, main class name, etc.) and runs the app using the java command. So my guess is that something about this setup makes Docker consider the container "running" as long as the JVM is running, regardless of my code crashing...
Any idea how I can cause my container to exit when the app crashes?
When running naked pods this behavior is expected, because naked pods are not rescheduled in the event of node failure.
When you deploy the pod, do you set the restartPolicy to "Always", "OnFailure" or "Never"?
The current status of the pod might be "Ok" right now, but this does not necessarily mean that the pod was not restarted before.
Can you run kubectl get po and print the output to check if the pod was restarted or not?
Info on naked pods here: https://kubernetes.io/docs/concepts/configuration/overview/#naked-pods-vs-replication-controllers-and-jobs
More info on restart policy: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
After some experimenting, it looks like there's a thread leak somewhere that prevents the application from exiting. I suspect it may be coming from the Akka ActorSystem, but I haven't found it yet.
Either way, catching the exception on the main thread and calling System.exit(1) causes the java process to die, and the container stops.
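A minimal Scala sketch of that guard (the Main object and runApp method are placeholders, not names from the original project):
object Main {
  def main(args: Array[String]): Unit =
    try {
      runApp(args) // the real application entry point (placeholder)
    } catch {
      case e: Throwable =>
        e.printStackTrace()
        // Force the JVM down even if non-daemon threads (e.g. an Akka
        // ActorSystem's dispatchers) are still alive, so PID 1 exits and
        // Docker marks the container as stopped with a non-zero exit code.
        System.exit(1)
    }

  private def runApp(args: Array[String]): Unit = {
    // application logic goes here
  }
}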

Jenkins kills JBoss server when job finishes

I use Ant to start/shutdown a JBoss 5 server through Jenkins. The Ant java task's spawn and fork attributes are set to "true", so the command is executed in the background.
Jenkins successfully starts up the server, waits two minutes (a "sleep" command in Jenkins), then after the sleep, for some strange reason, shuts down the server. The sleep command is the last step in the build job. The shutdown log says:
2013-01-29 17:03:39,332 INFO [org.jboss.bootstrap.microcontainer.ServerImpl] Runtime shutdown hook called, forceHalt: true
I googled it and tried the suggested -Xrs JVM option, but it didn't help. What is happening here?
Jenkins has something called the process tree killer, which will kill all processes created by the job (even those started with spawn and fork set to true).
There are some workarounds for this behavior.
Disable the process tree killer:
-Dhudson.util.ProcessTreeKiller.disable=true
or
Set the environment variable BUILD_ID=dontKillMe in the JBoss process:
export BUILD_ID=dontKillMe
You can browse the ProcessTreeKiller wiki article or the Jenkins JIRA to find various workarounds for this issue.
This source (in the comments) suggests other environment variables, apparently for older versions of Jenkins. For me it didn't work until I started using JENKINS(_SERVER)_COOKIE.
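For a freestyle job's shell build step, the workaround looks roughly like this (the JBoss path is an assumption; for Pipeline jobs the variable to set is JENKINS_NODE_COOKIE=dontKillMe instead):
# keep the spawned JBoss out of the process tree killer's reach
export BUILD_ID=dontKillMe
/opt/jboss-5.1/bin/run.sh -b 0.0.0.0 >/dev/null 2>&1 &
sleep 120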

Upstart initctl start|restart on Ubuntu

When using Upstart on Ubuntu, how do I issue a command that starts a job if it is not running and restarts it if it is already running? When deploying an app to a new node, the job is not yet defined.
initctl restart JOB complains if not already running
initctl start JOB complains if already running.
I can script it to do
initctl start JOB
initctl restart JOB
But it doesn't seem to be the nicest thing to do.
I was facing the same problem.
Short of a built-in "lazy-stop-then-start" command in initctl, we have to script it.
Invoke start, and restart if that fails:
initctl start JOB || initctl restart JOB
This one-liner is probably not the answer either of us was looking for, but it is short enough to mention.
As long as the service behaves nicely, it will do the trick.
When the service fails, it fails twice: for example, if the job was stopped and actually fails to start, it will also fail to restart.
I'm definitely looking for an improvement to this.
I hope this helps.
I also tried the 'start or restart' method that hmalphettes suggested, but ran into trouble: with that approach, updates to the Upstart job file would not be applied. Instead I use this, which works as I would expect:
sudo stop JOB || true && sudo start JOB
This basically reads 'Stop the job if it's running, then start it.'
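Wrapped up as a sketch for a deploy script (the job name myapp is a placeholder), with the "Unknown instance" error from stopping an already-stopped job swallowed explicitly:
# stop if running; ignore the error when the job is already stopped
sudo stop myapp 2>/dev/null || true
# start with the freshly installed job definition
sudo start myapp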
sudo service JOB restart
The service command was patched in Ubuntu to make it work the same on Upstart as it does in the most common cases on sysvinit.
A bare restart JOB, on the other hand, has some unexpected effects and in general should be studied carefully before use. It is mostly there so you can restart a job without re-loading the job definition, which is a really uncommon case.