I'm sorting and rationalising my backups to my Raspberry Pi + external drive.
One source is a Win10 PC that I mount with:
sudo mount -t cifs //192.168.1.92/blah/PC /mnt/PC -o username=xx,password=xx,ro,uid=pi,gid=pi
and then the source directory (eg)
pi@raspberrypi:~ $ ls -la /mnt/PC/pictures/
total 24284
pi@raspberrypi:~ $ ls -l /mnt/PC/..../Pictures/2009_07_15/ | grep 70
-rwxr-xr-x 1 pi pi 2387027 Jul 15 2009 IMG_0063.JPG
-rwxr-xr-x 1 pi pi 2385117 Jul 15 2009 IMG_0070.JPG
-rwxr-xr-x 1 pi pi 3457076 Jul 15 2009 IMG_0071.JPG
pi@raspberrypi:~ $
I've rationalised my backups, so the destination already holds some pictures copied from another source.
So the (existing) destination is:
pi@raspberrypi:~ $ ls -l /mnt/seagate/PC/../2009_07_15/ | grep 70
-rwxrwxrwx 1 pi pi 2387027 Jul 15 2009 IMG_0063.JPG
-rwxrwxrwx 1 pi pi 2385117 Jul 15 2009 IMG_0070.JPG
-rwxrwxrwx 1 pi pi 3457076 Jul 15 2009 IMG_0071.JPG
When I do
rsync -n -vv -rtdiz --no-owner --no-perms --no-group --progress --log-file=/tmp/rsynclog --backup-dir=/mnt/seagate/deletedfiles/backup-2020-01-07 --delete /source /destination
it says it will delete the files in the destination and then copy the same files back from the source:
pi@raspberrypi:~ $ cat /tmp/rsynclog | grep 2009_07_15
2020/01/08 11:55:02 [31205] backed up Documents.../2009_07_15/Thumbs.db to /mnt/seagate/deletedfiles/backup-2020-01-08/....2009_07_15/Thumbs.db
2020/01/08 11:55:02 [31205] backed up Documents..../2009_07_15/IMG_0071.JPG to /mnt/seagate/deletedfiles/backup-2020-01-08/....2009_07_15/IMG_0071.JPG
etc etc
2020/01/08 11:56:45 [31205] .d..t...... Documents/.....Pictures/2009_07_15/
2020/01/08 11:56:45 [31205] .f Documents/.. .2009_07_15/IMG_0061.JPG
2020/01/08 11:56:45 [31205] .f Documents/../Pictures/2009_07_15/IMG_0062.JPG
2020/01/08 11:56:45 [31205] .f Documents../Pictures/2009_07_15/IMG_0063.JPG
etc
I assume this is because the permissions are different.
OK, I could just chmod all the files in the destination, but I'd like to know what I've done wrong in the rsync command.
The difference I have found which could account for the transfers is:
on the source (the mounted Windows machine):
pi@raspberrypi:/mnt/PC/Documents/path,... $ ls -l | grep ALL
drwxr-xr-x 2 pi pi 0 Jan 1 07:54 ALL PHOTOS
on the destination (the Seagate attached HD):
pi@raspberrypi:/mnt/seagate/PC/Documents/path,... $ ls -la | grep ALL
drwxrwxrwx 1 pi pi 20480 Dec 18 22:08 ALL PHOTOS
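The .f..t...... lines in the log are rsync's itemized-changes codes: the 't' column means the modification time differs, not the content. CIFS mounts often report timestamps at a coarser resolution than the local filesystem, so identical files can look "changed". A minimal sandbox sketch (hypothetical temp dirs, not your real paths) that reproduces the 't' flag:

```shell
# Sandbox reproduction: two dirs, identical file content, different mtimes
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/IMG_0001.JPG"
cp -p "$src/IMG_0001.JPG" "$dst/IMG_0001.JPG"      # same content, same mtime
touch -d '2009-07-15 12:00' "$src/IMG_0001.JPG"    # now only the mtime differs
# Dry run with itemized changes: expect a 't' in the change flags
changes=$(rsync -n -rti "$src/" "$dst/")
echo "$changes"
```

If the only real differences are timestamps, adding --modify-window=2 (or, more bluntly, --size-only) to the rsync command may stop the re-transfers; that's a guess based on the 't' flags above, not a confirmed diagnosis.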
I am stuck with the following error when starting the mongod service with systemctl start mongod:
{"t":{"$date":"2020-08-27T20:48:20.219+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /var/lib/mongo"}}
I have already checked the /var/lib/mongo folder permissions and they seem to be OK:
[root@**system]# ls -l / | grep var
drwxr-xr-x. 21 root root 4096 Jun 25 07:43 var
[root@**system]# ls -l /var | grep lib
drwxr-xr-x. 6 root root 56 Aug 27 20:38 lib
[root@**system]# ls -l /var/lib | grep mongo
drwxr-xr-x. 4 mongod mongod 4096 Aug 27 20:16 mongo
Any idea on why I am getting the error?
On my system, /home and /etc have exactly the same permissions:
$ ls -ld /home /etc
drwxr-xr-x 67 root root 4096 Nov 13 15:59 /etc
drwxr-xr-x 3 root root 4096 Oct 18 13:45 /home
However, Postgres can read one, but not the other:
test=# select count(*) from (select pg_ls_dir('/etc')) a;
count
-------
149
(1 row)
test=# select count(*) from (select pg_ls_dir('/home')) a;
ERROR: could not open directory "/home": Permission denied
Even though the user the DB is running as can, in fact, run ls /home:
$ sudo -u postgres ls /home > /dev/null && echo "ls succeeded"
ls succeeded
What is going on?
My postgres version is 11.5, running on Arch Linux.
I figured it out: Arch's bundled postgresql.service file sets ProtectHome=true, causing systemd to use Linux mount namespaces to block the postgres processes from accessing /home.
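If Postgres does need to read /home, a systemd drop-in overrides the packaged unit without editing it; ProtectHome=read-only should be enough for pg_ls_dir. A sketch:

```shell
# Create a drop-in override for the packaged unit (opens $EDITOR):
sudo systemctl edit postgresql.service
# In the editor, add:
#   [Service]
#   ProtectHome=read-only
# Then apply it:
sudo systemctl daemon-reload
sudo systemctl restart postgresql.service
```

The drop-in survives package upgrades, unlike a direct edit of the unit file.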
After configuring a standalone Concourse 2.4.0 per the instructions, everything seems to be up and running. However, when trying to run the "hello world" example, I can see the following error in the Concourse UI:
runc create: exit status 1: rootfs ("/volumes/live/a72f9a0d-3506-489b-5b9b-168744b892c1/volume") does not exist
"web" start command:
./concourse web \
--basic-auth-username admin \
--basic-auth-password admin \
--session-signing-key session_signing_key \
--tsa-host-key host_key \
--tsa-authorized-keys authorized_worker_keys \
--external-url http://myconcoursedomain:8080 \
--postgres-data-source postgres://user:pass@mydbserver/concourse
"worker" start command:
./concourse worker \
--work-dir worker \
--tsa-host 127.0.0.1 \
--tsa-public-key host_key.pub \
--tsa-worker-private-key worker_key
I'm wondering if the problem occurs because the "missing" directory is created under the directory specified in the worker start command, instead of at the actual root directory:
~/concourse# ls -la worker
total 145740
drwxr-xr-x 5 root root 4096 Nov 15 23:07 .
drwxr-xr-x 3 root root 4096 Nov 15 23:07 ..
drwxr-xr-x 3 root root 4096 Nov 15 23:07 2.4.0
drwxr-xr-x 2 root root 4096 Nov 15 23:09 depot
drwxr-xr-x 1 root root 24 Nov 15 23:07 volumes
-rw-r--r-- 1 root root 42142052352 Nov 15 23:15 volumes.img
Concourse is installed on Ubuntu 14.04:
uname -r
4.4.0-47-generic
uname -a
Linux ubuntu-2gb-nyc3-01 4.4.0-47-generic #68~14.04.1-Ubuntu SMP Wed Oct 26 19:42:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
For reasons that I still do not understand, it appears that if you specify the --work-dir value as /opt/concourse/worker, the worker runs on this kernel version without issue.
I had been using a relative path to a worker directory inside my user folder as my --work-dir value.
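Based on that, a sketch of the adjusted worker start command with an absolute --work-dir (the /opt path is just the one that happened to work; other absolute paths may too):

```shell
sudo mkdir -p /opt/concourse/worker
./concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key
```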
I'm currently reading a book on programming in C, and I've got to a part where I have to write a program that displays the real UID and effective UID under which the file is executed. After compiling the code with gcc, I run ls -l id_demo to see the current owner and group; the output is this:
-rwxrwxr-x 1 user user 8629 Sep 21 13:04 id_demo
I then execute the program itself, this is what I get:
real uid: 1000 effective uid: 1000
...so far so good. I then input a command to change the owner of the file:
sudo chown root:root ./id_demo
The ls -l confirms that the owner has been changed to root:
-rwxrwxr-x 1 root root 8629 Sep 21 13:04 id_demo
Again, executing the program shows the real uid and effective uid as 1000. The last step, after which the effective uid should be 0, is sudo chmod u+s ./id_demo, but for me both stay 1000, whereas in the book the output is clearly shown as:
real uid: 1000
effective uid: 0
Any ideas why this is happening?
UPDATE
id_demo source code:
#include <stdio.h>
#include <unistd.h>   /* getuid(), geteuid() */

int main(void)
{
    printf("real uid: %d\n", (int) getuid());
    printf("effective uid: %d\n", (int) geteuid());
    return 0;
}
UPDATE 2
Screen shots
PLEASE HELP. I'm going crazy; I've spent 6+ hours looking for the solution and I need to move on.
We've figured it out. The cause is an ecryptfs-mounted home directory. The mount output contains the following line:
/home/evgeny/.Private on /home/evgeny type ecryptfs
That means the home directory isn't actually part of the root filesystem (which is mounted with the necessary suid flag), but is its own virtual filesystem that apparently doesn't support setuid binaries by default. I have successfully reproduced the issue with a test user that has an encrypted home directory.
It is possible to add the suid flag to the ecryptfs with the following command:
sudo mount -i -o remount,suid /home/evgeny
I'm not certain though how safe that is, nor how to change it permanently so that it would survive reboots.
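An alternative workaround, if remounting the encrypted home feels risky: keep the setuid binary outside the home directory, on the root filesystem where suid is honoured. A sketch, assuming /usr/local/bin is on the root fs:

```shell
# install(1) copies, chowns and chmods in one step; mode 4755 = setuid root
sudo install -o root -g root -m 4755 ./uid_demo /usr/local/bin/uid_demo
/usr/local/bin/uid_demo
```

This also survives reboots without touching the ecryptfs mount options.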
This works for me:
compile
$ gcc uid_demo.c -o uid_demo
$ ll
total 12
-rwxrwxr-x 1 saml saml 6743 Sep 21 17:05 uid_demo
-rw-rw-r-- 1 saml saml 116 Sep 21 16:58 uid_demo.c
chown
$ sudo chown root:root uid_demo
$ ll
total 12
-rwxrwxr-x 1 root root 6743 Sep 21 17:05 uid_demo
-rw-rw-r-- 1 saml saml 116 Sep 21 16:58 uid_demo.c
chmod
$ sudo chmod u+s uid_demo
$ ll
total 12
-rwsrwxr-x 1 root root 6743 Sep 21 17:05 uid_demo
-rw-rw-r-- 1 saml saml 116 Sep 21 16:58 uid_demo.c
run
$ ./uid_demo
real uid: 500
effective uid: 0
I have a yum repository I've set up where I store custom rpms.
I have no problem finding information about other packages that were built and stored in this custom repo.
# yum --disablerepo=rhui-us-east-rhel-server-1y,epel,epel-testing --enablerepo=customrepo install php53-pecl-xdebug
php53-pecl-xdebug x86_64 2.2.1-2 customrepo 132 k
No problem.
Now I drop somerpm.rpm into centos/repo/5/noarch, run createrepo --update . in that directory, and try the same command, yet it shows no results.
I also tried running createrepo --update in the root of the repo, but that did not work either (I'm actually not sure where it should be run, and whether it needs a repodata directory in each subdir).
[root@reposerver mnt]# ls -l /var/www/repo/
total 12
-rw-r--r-- 1 root root 203 Jun 8 00:13 REPO_README
drwxr-xr-x 3 root root 4096 Jun 10 2011 centos
drwxr-xr-x 2 root root 4096 Oct 18 20:02 repodata
[root@reposerver mnt]# ls -l /var/www/repo/centos/5/
SRPMS/ i386/ noarch/ repodata/ x86_64/
[root@reposerver mnt]# ls -l /var/www/repo/centos/5/noarch/
total 7324
-rw-r--r-- 1 root root 1622 Jun 28 2011 compat-php-5.1.6-1.noarch.rpm
drwxr-xr-x 2 root root 4096 Oct 18 19:55 repodata
-rw-r--r-- 1 root root 1066928 Oct 18 19:54 salt-0.10.3-1.noarch.rpm
-rw-r--r-- 1 root root 6363197 Oct 18 19:54 salt-0.10.3-1.src.rpm
-rw-r--r-- 1 root root 21822 Oct 18 19:54 salt-master-0.10.3-1.noarch.rpm
-rw-r--r-- 1 root root 14294 Oct 18 19:54 salt-minion-0.10.3-1.noarch.rpm
I also tried adding the exactarch=0 flag to my repo config to ignore arch restrictions, and this did not work either; it was a shot in the dark anyway, since my rpm is noarch and should show regardless.
# cat /etc/yum.repos.d/mycompany.repo
[mycompany]
name=mycompany custom repo
baseurl=http://config/repo/centos/5/$basearch
enabled=1
exactarch=0
I'm at a loss at this point. Usually createrepo --update does the trick, but for some reason it cannot find the new rpms.
repo]# find . -type f -name "*.gz" | xargs zcat | grep salt-minion
returns results as well, so it's definitely in the repo data.
Running yum clean all on the server I was trying to install on worked.
Also make sure to run createrepo --update in the specific subdirectory instead of the root of the repo.
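To summarise the sequence that worked (server paths taken from the listing above; adjust for your layout):

```shell
# On the repo server: rebuild metadata in the subdir that holds the new rpm
createrepo --update /var/www/repo/centos/5/noarch
# On the client: drop cached metadata so yum fetches the fresh repodata
yum clean all
yum list salt-minion
```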