gsutil rsync -C "continue" option not working - google-cloud-storage

gsutil rsync -C "continue" option is not working from backup_script:
$GSUTIL rsync -c -C -e -r -x $EXCLUDES $SOURCE/Documents/ $DESTINATION/Documents/
From systemd log:
$ journalctl --since 12:00
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (CommandException: Error opening file "file:///home/wolfv/Documents/PC_maintenance/backup_systems/gsutil/ssmtp.conf": .)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
Jul 25 12:00:14 localhost.localdomain CROND[9694]: (wolfv) CMDOUT (Caught ^C - exiting)
This is because the file's owner is root rather than the user:
$ ls -l ssmtp.conf
-rw-r-----. 1 root root 1483 Jul 24 21:30 ssmtp.conf
rsync worked fine after deleting the root-owned file.
This happened on a Fedora 22 machine, when cron called backup_script, which in turn called gsutil rsync.

Thanks for reporting that problem. We'll get a fix for this bug in gsutil release 4.14.
Mike
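In the meantime, a minimal workaround sketch: before invoking gsutil rsync, report any files under the source tree that the backup user cannot read, so they can be chowned or excluded instead of aborting the run. This assumes GNU find (its -readable test checks readability for the invoking user) and reuses the $SOURCE value from the backup script above:
#!/usr/bin/env bash
# Sketch: list files the current (cron) user cannot read, so a single
# root-owned file does not make the gsutil rsync run fail.
SOURCE=/home/wolfv
unreadable=$(find "$SOURCE/Documents" -type f ! -readable 2>/dev/null)
if [ -n "$unreadable" ]; then
    echo "Warning: the following files are not readable by $(whoami):" >&2
    echo "$unreadable" >&2
fi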

Related

container process label is set to spc_t after setting selinux-enable=true in containerd-config.toml

containerd: the file label is container_ro_file_t but the container process runs as spc_t. Is the spc_t process label correct when SELinux is enabled for containerd, or did I miss some containerd setting?
K8s version: 1.23.8
Containerd version: 1.6.6
SELinux is enabled by setting enable_selinux = true in /etc/containerd/config.toml.
# Create a pod using the official Tomcat image, then check the process and file labels:
$kubectl exec tomcat -it -- ps -eZ
system_u:system_r:spc_t:s0 1 ? 00:00:26 java
system_u:system_r:spc_t:s0 45 pts/0 00:00:00 ps
$kubectl exec tomcat -it -- ls -FlaZ
drwxr-xr-x. 1 root root system_u:object_r:container_ro_file_t:s0 4096 Jun 28 00:54 ./
drwxr-xr-x. 1 root root system_u:object_r:container_ro_file_t:s0 4096 Jun 28 00:50 ../
drwxr-xr-x. 2 root root system_u:object_r:container_ro_file_t:s0 4096 Jun 28 00:54 bin/
#containerd is running as container_runtime_t:
$ps -eZ | grep containerd
system_u:system_r:container_runtime_t:s0 912 ? 00:00:10 containerd
system_u:system_r:container_runtime_t:s0 1327 ? 00:00:00 containerd-shim
# It seems that running as spc_t is correct, given this type transition rule:
$sesearch -T -t container_var_lib_t | grep spc_t
type_transition container_runtime_t container_ro_file_t : process spc_t;
The issue was resolved after adding the config version to /etc/containerd/config.toml:
version = 2
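For reference, a minimal sketch of the relevant part of /etc/containerd/config.toml under the version 2 schema (the CRI plugin section name shown is the v2 one; adjust it if your configuration differs):
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  enable_selinux = true
Without the version = 2 line, containerd may parse the file against the legacy v1 schema, in which case a setting placed under a v2-style section name is silently ignored.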

How to run a process in daemon mode with systemd service?

I've googled and read quite a few blogs and posts on this, and I've also been trying things out manually on my EC2 instance. However, I'm still not able to configure the systemd service unit so that it runs the process in the background as I expect. The process I'm running is the Nessus agent service. Here's my service unit definition:
$ cat /etc/systemd/system/nessusagent.service
[Unit]
Description=Nessus
[Service]
ExecStart=/opt/myorg/bin/init_nessus
Type=simple
[Install]
WantedBy=multi-user.target
and here is my script /opt/myorg/bin/init_nessus:
$ cat /opt/myorg/bin/init_nessus
#!/usr/bin/env bash
set -e
NESSUS_MANAGER_HOST=...
NESSUS_MANAGER_PORT=...
NESSUS_CLIENT_GROUP=...
NESSUS_LINKING_KEY=...
#-------------------------------------------------------------------------------
# link nessus agent with manager host
#-------------------------------------------------------------------------------
/opt/nessus_agent/sbin/nessuscli agent link --key=${NESSUS_LINKING_KEY} --host=${NESSUS_MANAGER_HOST} --port=${NESSUS_MANAGER_PORT} --groups=${NESSUS_CLIENT_GROUP}
if [ $? -ne 0 ]; then
echo "Cannot link the agent to the Nessus manager, quitting."
exit 1
fi
/opt/nessus_agent/sbin/nessus-service -q -D
When I run the service, I always get the following:
$ systemctl status nessusagent.service
● nessusagent.service - Nessus
Loaded: loaded (/etc/systemd/system/nessusagent.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2020-08-24 06:40:40 UTC; 9min ago
Process: 27787 ExecStart=/opt/myorg/bin/init_nessus (code=exited, status=0/SUCCESS)
Main PID: 27787 (code=exited, status=0/SUCCESS)
...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + /opt/nessus_agent/sbin/nessuscli agent link --key=... --host=... --port=8834 --groups=...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] HostTag::getUnix: setting TAG value to '8596420322084e3ab97d3c39e5c92e00'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] Successfully linked to <myorg.com>:8834
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + '[' 0 -ne 0 ']'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[28506]: + /opt/nessus_agent/sbin/nessus-service -q -D
However, I can't see the process that I expect to see:
$ ps faux | grep nessus
root 28565 0.0 0.0 12940 936 pts/0 S+ 06:54 0:00 \_ grep --color=auto nessus
If I run the last command manually, I can see it:
$ /opt/nessus_agent/sbin/nessus-service -q -D
$ ps faux | grep nessus
root 28959 0.0 0.0 12940 1016 pts/0 S+ 07:00 0:00 \_ grep --color=auto nessus
root 28952 0.0 0.0 6536 116 ? S 07:00 0:00 /opt/nessus_agent/sbin/nessus-service -q -D
root 28953 0.2 0.0 69440 9996 pts/0 Sl 07:00 0:00 \_ nessusd -q
What is it that I'm missing here?
Eventually I figured out that this was because of the extra -D (daemonize) option in the last command. Removing the -D option fixed the issue. Running the process in daemon mode inside a service manager is not the way to go: run it in the foreground and let the service manager handle it.
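In other words, the wrapper script should end by starting nessus-service in the foreground so that systemd (with Type=simple) tracks it as the main process. A sketch of the corrected last line, where exec is my addition so the service replaces the shell and receives signals directly, and -q is the flag from the original script:
# Run the agent in the foreground; systemd tracks this as the main process.
exec /opt/nessus_agent/sbin/nessus-service -q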

rsync ignoring file permissions

I'm sorting and rationalising my backups onto my Raspberry Pi + external drive.
One source is a Windows 10 PC that I mount with:
sudo mount -t cifs //192.168.1.92/blah/PC /mnt/PC -o username=xx,password=xx,ro,uid=pi,gid=pi
and then the source directory is, e.g.:
pi@raspberrypi:~ $ ls -la /mnt/PC/pictures/
total 24284
pi@raspberrypi:~ $ ls -l /mnt/PC/..../Pictures/2009_07_15/ | grep 70
-rwxr-xr-x 1 pi pi 2387027 Jul 15 2009 IMG_0063.JPG
-rwxr-xr-x 1 pi pi 2385117 Jul 15 2009 IMG_0070.JPG
-rwxr-xr-x 1 pi pi 3457076 Jul 15 2009 IMG_0071.JPG
I've rationalised my backups, so some pictures in the destination already came from another source.
So the (existing) destination is:
pi@raspberrypi:~ $ ls -l /mnt/seagate/PC/../2009_07_15/ | grep 70
-rwxrwxrwx 1 pi pi 2387027 Jul 15 2009 IMG_0063.JPG
-rwxrwxrwx 1 pi pi 2385117 Jul 15 2009 IMG_0070.JPG
-rwxrwxrwx 1 pi pi 3457076 Jul 15 2009 IMG_0071.JPG
When I do
rsync -n -vv -rtdiz --no-owner --no-perms --no-group --progress --log-file=/tmp/rsynclog --backup-dir=/mnt/seagate/deletedfiles/backup-2020-01-07 --delete /source /destination
it says it will delete the files in 'destination' and then copy the same files from 'source':
pi@raspberrypi:~ $ cat /tmp/rsynclog | grep 2009_07_15
2020/01/08 11:55:02 [31205] backed up Documents.../2009_07_15/Thumbs.db to /mnt/seagate/deletedfiles/backup-2020-01-08/....2009_07_15/Thumbs.db
2020/01/08 11:55:02 [31205] backed up Documents..../2009_07_15/IMG_0071.JPG to /mnt/seagate/deletedfiles/backup-2020-01-08/....2009_07_15/IMG_0071.JPG
etc etc
2020/01/08 11:56:45 [31205] .d..t...... Documents/.....Pictures/2009_07_15/
2020/01/08 11:56:45 [31205] .f Documents/.. .2009_07_15/IMG_0061.JPG
2020/01/08 11:56:45 [31205] .f Documents/../Pictures/2009_07_15/IMG_0062.JPG
2020/01/08 11:56:45 [31205] .f Documents../Pictures/2009_07_15/IMG_0063.JPG
etc
I assume this is because the permissions are different.
OK, I could just chmod all the files in 'destination', but I'd like to know what I've done wrong in the rsync command.
The difference I have found which could account for the transfers is:
on the source, the mounted Windows machine:
pi@raspberrypi:/mnt/PC/Documents/path,... $ ls -l | grep ALL
drwxr-xr-x 2 pi pi 0 Jan 1 07:54 ALL PHOTOS
and on the destination, the Seagate attached HD:
pi@raspberrypi:/mnt/seagate/PC/Documents/path,... $ ls -la | grep ALL
drwxrwxrwx 1 pi pi 20480 Dec 18 22:08 ALL PHOTOS
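One way to see exactly which attribute rsync thinks differs is the itemized flag string it already prints (the ".d..t......" and ".f" lines above): each position encodes a reason for the transfer. A diagnostic dry-run sketch, with the paths shortened and the delete/backup options left out:
# Dry run with itemized output. Each line starts with a flag string of the
# form YXcstpoguax, where c=checksum, s=size, t=mod time, p=perms,
# o=owner, g=group. The 't' in ".d..t......" above means the modification
# time differs for that directory.
rsync -n -rt --itemize-changes --no-owner --no-perms --no-group /mnt/PC/Documents/ /mnt/seagate/PC/Documents/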

Program.service ExecStart fails but the program itself runs

I am testing how to run a script using a .service file on CentOS 7.
The script is a very simple loop, just to make sure it runs:
if [ "$1" == "start" ] || [ "$1" == "cycle" ]
then
/u/Test/Bincustom/haltrun_wrap.sh run &
echo $! > /u/Test/Locks/start.pid
exit
elif [ "$1" == "stop" ] || [ "$1" == "halt" ]
then
killall -q -9 haltrun_wrap.sh
echo " " /u/Test/Locks/start.pid
elif [ "$1" == "run" ]
then
process_id=$(pidof haltrun_wrap.sh)
#echo $process_id /u/Test/Locks/start.pid
while [ 1 ]
do
CurTime=$(date)
echo $CurTime >> /u/Test/Logs/log
sleep 30s
done
else
cat /u/Test/Locks/start.pid
cat /u/Test/Logs/log
fi
That script runs fine as the root or test user if I launch it manually.
The Program.service file looks like this:
[Unit]
Description=Program
[Service]
Type=forking
RemainAfterExit=yes
PIDFile=/u/Test/Locks/start.pid
EnvironmentFile=/u/Test/Config/environ
Environment="Base="sudo -u sirsi '/u/Test/Bincustom/Program " "Stop=halt force'" "Start=cycle force'""
ExecStart=/bin/sh $Base$Start
ExecStop=/bin/sh $Base$Stop
[Install]
WantedBy=multi-user.target
WantedBy=WebServices
WantedBy=BCA
The error is always:
● Program.service - Program
Loaded: loaded (/usr/lib/systemd/system/Program.service; enabled; vendor preset: disabled)
Active: failed (Result: resources) since Wed 2017-01-11 14:53:10 MST; 1s ago
Process: 12014 ExecStart=/bin/sh $Base$Start (code=exited, status=0/SUCCESS)
Jan 11 14:53:09 localhost.localdomain systemd[1]: Starting Program...
Jan 11 14:53:10 localhost.localdomain systemd[1]: PID file /u/Test/Locks/start.pid not readable (yet?) after start.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Failed to start Program.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Unit Program.service entered failed state.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Program.service failed.
Obviously I'm doing something wrong in the .service file, but for the life of me I am still missing it.
The issue was in these lines:
Environment="Base="sudo -u sirsi '/u/Test/Bincustom/Program " "Stop=halt force'" "Start=cycle force'""
ExecStart=/bin/sh $Base$Start
ExecStop=/bin/sh $Base$Stop
Apparently .service files do not handle variables the way a shell does.
I also had an issue with sudo not being allowed to run my test script;
I had to add the sudo inside the test script instead.
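A sketch of the unit written without building the command line from Environment= variables: put the full commands in ExecStart/ExecStop and use User= instead of sudo. The "cycle force" and "halt force" arguments come from the original Environment= line; whether Type=forking and the PID file are still appropriate depends on what the Program script ultimately does:
[Unit]
Description=Program
[Service]
Type=forking
RemainAfterExit=yes
User=sirsi
PIDFile=/u/Test/Locks/start.pid
EnvironmentFile=/u/Test/Config/environ
ExecStart=/u/Test/Bincustom/Program cycle force
ExecStop=/u/Test/Bincustom/Program halt force
[Install]
WantedBy=multi-user.target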

Mongodb over Lustre?

I need to install a MongoDB instance with a lot of data storage.
We have a Lustre FS with hundreds of terabytes, but when mongod starts it shows this error:
Mon Jul 15 12:06:50.898 [initandlisten] exception in initAndListen: 10310 Unable to lock file: /var/lib/mongodb/mongod.lock. Is a mongod instance already running?, terminating
Mon Jul 15 12:06:50.898 dbexit:
But the permissions should be fine:
# ls -lart /project/mongodb/
total 8
drwxr-xr-x 19 root root 4096 Jul 15 11:12 ..
-rwxr-xr-x 1 mongod mongod 0 Jul 15 11:54 mongod.lock
drwxr-xr-x 2 mongod mongod 4096 Jul 15 12:10 .
And no other running process:
# ps -fu mongod
UID PID PPID C STIME TTY TIME CMD
#
Has anyone done this (Lustre + MongoDB)?
Removing the lock file and retrying does not help either:
# rm mongod.lock
rm: remove regular empty file `mongod.lock'? y
# ls -lrt
total 0
# ls -lart
total 8
drwxr-xr-x 19 root root 4096 Jul 15 11:12 ..
drwxr-xr-x 2 mongod mongod 4096 Jul 15 12:10 .
# ps aux | grep mongod
root 25865 0.0 0.0 103296 884 pts/15 S+ 13:04 0:00 grep mongod
# service mongod start
Starting mongod: about to fork child process, waiting until server is ready for connections.
forked process: 25935
all output going to: /var/log/mongo/mongod.log
ERROR: child process failed, exited with error number 100
[FAILED]
I realize that this is an old question, but I feel I should set the record straight.
MongoDB, or any database or any application, can run against a Lustre file system without issues. However, by default, Lustre clients are not mounted with the user_xattr or flock options enabled.
Mounting the file system with -o flock, or even -o localflock, would have resolved the issue.
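For example, a client mount with flock enabled would look something like this (the MGS node name and file system name are placeholders):
# Mount the Lustre client with flock support so mongod can lock mongod.lock.
mount -t lustre -o flock mgsnode@tcp0:/lustrefs /project
# Or, if locks only need to be consistent within this single client:
mount -t lustre -o localflock mgsnode@tcp0:/lustrefs /project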