I've been trying to get the timing and sequencing ironed out in my upssched-cmd script, but I just can't seem to find the right path. I have an RPi 3B+ running Raspbian that operates as a controller for my UniFi network, and I also have an EdgeRouter Pro. I can SSH into both systems, but I cannot (or am perhaps too afraid to) install NUT on the EdgeRouter. Since I can SSH into each device, I had planned to simply deliver a halt command to the Pi and a shutdown command to the ER when the DietPi running as the NUT server detected the UPS moving from On Battery to Low Battery. I could then set a timer for, say, 60 seconds before the FSD is initiated, which shuts down the DietPi and eventually reboots the UPS before it goes completely dead. I set the Low Battery point to 4 minutes and/or 33%, so hopefully I would have plenty of time. I've never written a script before, so I have only copied from other examples and attempted to work out what each part is doing:
#!/bin/sh
# SSH connection settings (user@host)
ssh_host1='ControllerUser@ControllerIP'
ssh_host2='RouterUser@RouterIP'
# Misc logging
UPS="apc"
# upsmon exports UPSNAME for notify events; fall back to $UPS if it is unset
UPSNAME="${UPSNAME:-$UPS}"
STATUS=$( upsc $UPS ups.status )
CHARGE=$( upsc $UPS battery.charge )
CHMSG="[$STATUS]:$CHARGE%"
logger -i -t upssched-cmd "Calling upssched-cmd $1"
case "$1" in
onbatt)
message="Power Failure on UPS ${UPSNAME}!"
echo -e "Warning: UPS $UPSNAME experienced a power failure and is now running on battery!" \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
online)
message="Power restored on UPS $UPSNAME"
echo -e "Power on UPS $UPSNAME has been restored." \
| mail -s"$message" root
remote_cmd="log info message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
lowbatt)
message="Low battery on UPS ${UPSNAME}!"
echo -e "Warning: UPS $UPSNAME is low on battery! All connected Systems will be shut down soon." \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
fsd)
message="Forced Shutdown from UPS ${UPSNAME}!"
echo -e "Warning: All Systems connected to UPS $UPSNAME will be shut down now!" \
| mail -s"Warning: $message" root
remote_cmd="log error message=\"${message}\" ; beep 0.5 ; delay 4000ms ; beep 0.5 ; system shutdown!"
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
# send the remote log command before halting host1, while it can still be reached
ssh $ssh_host1 $remote_cmd
ssh $ssh_host2 'sudo shutdown'
ssh $ssh_host1 'sudo halt'
;;
commok)
message="Communications restored with UPS $UPSNAME"
echo -e "Communications with UPS $UPSNAME have been restored." \
| mail -s"$message" root
remote_cmd="log info message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
commbad)
message="Lost communications with UPS ${UPSNAME}!"
echo -e "Warning: Lost communications with UPS ${UPSNAME}!" \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
shutdown)
message="System $HOST is shutting down now!"
echo -e "Warning: System $HOST is shutting down now!" \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
# send the remote log command before halting host1, while it can still be reached
ssh $ssh_host1 $remote_cmd
ssh $ssh_host2 'sudo shutdown'
ssh $ssh_host1 'sudo halt'
;;
replbatt)
message="Replace battery on UPS ${UPSNAME}!"
echo -e "Warning: The UPS $UPSNAME needs to have its battery replaced!" \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
nocomm)
message="The UPS $UPSNAME can’t be contacted for monitoring!"
echo -e "Warning: The UPS $UPSNAME can’t be contacted for monitoring!" \
| mail -s"Warning: $message" root
remote_cmd="log warning message=\"${message}\""
#ssh $SSH_HOST -l $SSH_USER -i $SSH_KEY $remote_cmd
ssh $ssh_host1 $remote_cmd
;;
*)
logger -t upssched-cmd "Unrecognized command: $1"
;;
esac
logger -i -t upssched-cmd "$message"
Any guidance would be greatly appreciated!!
Thanks in advance!
I am not quite sure what question you are asking; however, at a quick glance, that script looks like it will work.
I would suggest making a copy of the script, removing all of the shutdown and reboot commands and replacing them with a simple command for debugging (touch a file, echo to the remote terminal tty, etc.), then set everything up and start testing.
The only way you will find out if it works or not is to test it.
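To make that concrete, here is one way such a dry-run stub could look (the log path and host names are illustrations, not taken from the original script):

```shell
#!/bin/sh
# Dry-run sketch: stand-in for the destructive ssh/halt commands while
# testing upssched-cmd sequencing. Nothing is shut down; each action that
# *would* have run is appended to a scratch log instead.
DRYRUN_LOG=$(mktemp)

log_action() {
    echo "$(date '+%F %T') would run: $*" >> "$DRYRUN_LOG"
}

handle_event() {
    case "$1" in
    fsd)
        log_action "ssh ControllerUser@ControllerIP sudo halt"
        log_action "ssh RouterUser@RouterIP sudo shutdown"
        ;;
    *)
        log_action "unhandled event: $1"
        ;;
    esac
}

# simulate the event upssched would deliver as $1
handle_event fsd
grep -c 'would run' "$DRYRUN_LOG"
```

Running it prints the number of actions logged; once the timing and ordering look right, the log_action calls can be swapped back for the real ssh commands.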
I am looking at doing something similar with a Raspberry Pi as a NUT server to monitor a couple of UPS units and using it to shut down some VMware ESXi servers and other network-attached equipment.
If it is alright with you, I may take some ideas from your script for my solution.
I am using kubectl port-forward in a shell script but I find it is not reliable, or doesn't come up in time:
kubectl port-forward ${VOLT_NODE} ${VOLT_CLUSTER_ADMIN_PORT}:${VOLT_CLUSTER_ADMIN_PORT} -n ${NAMESPACE} &
if [ $? -ne 0 ]; then
echo "Unable to start port forwarding to node ${VOLT_NODE} on port ${VOLT_CLUSTER_ADMIN_PORT}"
exit 1
fi
PORT_FORWARD_PID=$!
sleep 10
Often after I sleep for 10 seconds, the port isn't open or forwarding hasn't happened. Is there any way to wait for this to be ready? Something like kubectl wait would be ideal, but I'm open to shell options also.
I took @AkinOzer's comment and turned it into this example, where I port-forward a PostgreSQL database's port so I can make a pg_dump of the database:
#!/bin/bash
set -e
localport=54320
typename=service/pvm-devel-kcpostgresql
remoteport=5432
# This would show that the port is closed
# nmap -sT -p $localport localhost || true
kubectl port-forward $typename $localport:$remoteport > /dev/null 2>&1 &
pid=$!
# echo pid: $pid
# kill the port-forward regardless of how this script exits
trap '{
# echo killing $pid
kill $pid
}' EXIT
# wait for $localport to become available
while ! nc -vz localhost $localport > /dev/null 2>&1 ; do
# echo sleeping
sleep 0.1
done
# This would show that the port is open
# nmap -sT -p $localport localhost
# Actually use that port for something useful - here making a backup of the
# keycloak database
PGPASSWORD=keycloak pg_dump --host=localhost --port=$localport --username=keycloak -Fc --file keycloak.dump keycloak
# the 'trap ... EXIT' above will take care of kill $pid
I am using telepresence for remote debugging of a Kubernetes cluster, and I log in to the cluster using the command:
telepresence
but when I want to install some software in the telepresence pod:
sudo apt-get install wget
I do not know the password of the telepresence pod, so what should I do to install software?
You can use this script to log in to the pod as root:
#!/usr/bin/env bash
set -xe
POD=$(kubectl describe pod "$1")
NODE=$(echo "$POD" | grep -m1 Node | awk -F'/' '{print $2}')
CONTAINER=$(echo "$POD" | grep -m1 'Container ID' | awk -F 'docker://' '{print $2}')
CONTAINER_SHELL=${2:-bash}
set +e
ssh -t "$NODE" sudo docker exec --user 0 -it "$CONTAINER" "$CONTAINER_SHELL"
if [ "$?" -gt 0 ]; then
set +x
echo 'SSH into pod failed. If you see an error message similar to "executable file not found in $PATH", please try:'
echo "$0 $1 sh"
fi
Log in like this:
./login-k8s-pod.sh flink-taskmanager-54d85f57c7-wd2nb
I need to allow user "vine" to transfer files with SFTP to my server into a certain folder, /data/xxx/. SSH shell access should not be allowed.
In addition, another user "beer" needs to be able to read and delete transferred files from the xxx folder.
I am using RHEL 6.6 and OpenSSH 5.3p1.
I have tried several options, but no breakthrough. Any help on this?
This is the latest attempt, but it is giving the following error:
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer
Something to do with the access rights.
#!/bin/sh
######################################################
# Create a new SFTP user and configure their chroot
######################################################
# "Create sftpusers Group"
groupadd sftpusers
# "Create vine user"
useradd -g sftpusers -s /sbin/nologin vine
# "Create password for vine user."
echo -e vine#1234 | passwd vine --stdin
# "Modify beer to sftpusers group."
usermod -a -G sftpusers beer
# "Setup sftp-server Subsystem in sshd_config."
sed -e '/Subsystem/ s/^#*/#/' -i /etc/ssh/sshd_config
echo "Subsystem sftp internal-sftp" >> /etc/ssh/sshd_config
echo "Match Group sftpusers" >> /etc/ssh/sshd_config
echo " ChrootDirectory /data/sftp/%u" >> /etc/ssh/sshd_config
echo " ForceCommand internal-sftp" >> /etc/ssh/sshd_config
# "Create sftp Home Directory"
mkdir -p /data/sftp/vine/
chown -R root:root /data/sftp/vine
#mkdir /data/sftp/vine
#chown root:root /data/sftp/vine
mkdir /data/sftp/vine/incoming
# "Setup Appropriate Permission"
chown vine:sftpusers /data/sftp/vine/incoming
# "Restart sshd and Test Chroot SFTP"
service sshd restart
chmod -R 755 /data
The usual way to handle this involves setting up a chroot. This worked well for me: https://www.the-art-of-web.com/system/sftp-logging-chroot/ (Ubuntu)
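The script in the question is close; the usual sticking point is that sshd requires every component of the ChrootDirectory path to be root-owned and not group- or world-writable, otherwise it drops the session with exactly the "Broken pipe" / "Connection reset by peer" errors shown above. A minimal sketch of the config fragment (written to a temp file here rather than the real /etc/ssh/sshd_config):

```shell
#!/bin/sh
# Sketch of the sshd_config fragment for a chrooted, SFTP-only group.
# Writing to a temp file for illustration; on a real box this is appended
# to /etc/ssh/sshd_config and followed by "service sshd restart".
conf=$(mktemp)
cat >> "$conf" <<'EOF'
Subsystem sftp internal-sftp
Match Group sftpusers
    ChrootDirectory /data/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
EOF
# The chroot path itself must satisfy sshd's ownership rules, e.g.:
#   chown root:root /data /data/sftp /data/sftp/vine
#   chmod 755 /data /data/sftp /data/sftp/vine
# with the writable upload area below it (/data/sftp/vine/incoming).
grep -q 'ChrootDirectory /data/sftp/%u' "$conf" && echo "fragment written"
```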
I'm using postgres 9.4.9, pgpool 3.5.4 on centos 6.8.
I'm having a major hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely detects the secondary) but if I use pcp_attach_node to tell it what nodes are up, then everything is hunky dory.
So I figured until I could properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
it'll only work when I use parameters like
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no parameter for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?
You have to create a PCPPASSFILE. Search the pgpool documentation for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass). The file content is hostname:port:username:password, for example:
127.0.0.1:9897:user:pass
Set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create the PCPPASSFILE elsewhere (vi /usr/local/etc/.pcppass) with the same hostname:port:username:password content and 0600 permissions (chmod 0600 /usr/local/etc/.pcppass), then point the PCPPASSFILE variable at it (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should then run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
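Both examples boil down to a few shell commands; a sketch using a temp file (so nothing real is overwritten) could be:

```shell
#!/bin/sh
# Sketch: create a PCPPASSFILE so the pcp_* tools can run non-interactively
# with -w. A temp file is used here for illustration; normally this is
# ~/.pcppass, or any path exported via the PCPPASSFILE variable.
PCPPASSFILE=$(mktemp)
export PCPPASSFILE
# format is hostname:port:username:password, one entry per line
echo '127.0.0.1:9897:user:pass' > "$PCPPASSFILE"
# the pcp tools ignore the file unless its permissions are 0600
chmod 0600 "$PCPPASSFILE"
ls -l "$PCPPASSFILE" | cut -c1-10
# with the file in place, -w suppresses the password prompt:
#   pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
```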
Script to auto-attach the node
You can schedule this script with, for example, crontab.
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if [ "$STATUS_0" = "3" ]
then
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if [ "$STATUS_1" = "3" ]
then
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
Yes, you can trigger execution of this command using a customised failover_command (failover.sh in your /etc/pgpool).
An automated way to bring up your down pgpool nodes:
Copy this script into a file with execute permission, in your desired location, with postgres ownership, on all nodes.
Run the crontab -e command as the postgres user.
Finally, set that script to run every minute in crontab. To execute it every second you may create your own service instead.
#!/bin/bash
# This script will up all pgpool down node
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
for server_node in "${server_node_list[@]}"
do
source $HOME/.bash_profile
export PCPPASSFILE=/var/lib/pgsql/.pcppass
node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n $server_node -w | cut -d ' ' -f 3);
if [[ $node_status == 3 ]]
then
pcp_attach_node -n $server_node -U pgpool -p 9898 -w -v
fi
done
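Both answers rely on cron to re-run the check. Assuming the script is saved as /var/lib/pgsql/attach_nodes.sh (a hypothetical path; use wherever you actually put it), the every-minute crontab entry described above would look like:

```shell
# crontab entry (added via "crontab -e" as the postgres user)
# script path and log file are illustrative, not from the original posts
* * * * * /var/lib/pgsql/attach_nodes.sh >> /var/log/pgpool_attach.log 2>&1
```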
I have an application that runs in a user account (Plack-based) and want an init script.
It seems as easy as "sudo $user start_server ...". I just wrote an LSB script using start-stop-daemon and it is really clumsy and verbose. It doesn't feel like the right way.
After scouring for a bit and looking at a lot of examples, I'm still not sure what the best way to do this is, and there isn't a cohesive guide that I've found.
Right now I have it working with:
start-stop-daemon --background --quiet --start --pidfile $PIDFILE \
--make-pidfile --chuid $DAEMONUSER \
--exec $DAEMON -- $DAEMON_OPTS
With DAEMON and DAEMON_OPTS as:
DAEMON="/home/mediamogul/perl5/perlbrew/perls/current/bin/start_server"
DAEMON_OPTS="--port $PORT -- starman --workers $WORKERS /home/mediamogul/MediaMogul/script/mediamogul.psgi"
This then requires me to adjust how running is detected, because it's a Perl script, so perl shows up as the command rather than "start_server".
(I'm running this out of a perlbrew on that user account so it is completely separate from the system perl, that's why the paths are pointing to a perl in the user dir)
Is this really the best way to go about doing this? It seems very clunky to me, but I'm not an admin type.
You can use the --pid option to starman to have it write the PID when the app starts; if you use the same filename as you give start-stop-daemon then it will work nicely.
For example, from one of my init.d scripts:
SITENAME=mysite
PORT=5000
DIR=/websites/mysite
SCRIPT=bin/app.pl
USER=davidp
PIDFILE=/var/run/site-$SITENAME.pid
case "$1" in
start)
start-stop-daemon --start --chuid $USER --chdir $DIR \
--pidfile=$PIDFILE \
--exec /usr/local/bin/starman -- -p $PORT $SCRIPT -D --pid $PIDFILE
;;
stop)
start-stop-daemon --stop --pidfile $PIDFILE
;;
*)
echo "Usage: $SCRIPTNAME {start|stop}" >&2
exit 3
;;
esac
It's very close to what you are already doing, and I'll admit it is a little clumsy, granted, but it works - having Starman write the PID file means that start-stop-daemon can reliably start & stop it.