I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, preserving owners, groups and permissions, as well as symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy they are lost.
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do this with the / folder, to keep all my configuration just in case. I have 3 services that have their own users and may make modifications in folders outside their home directories.
In the destination folder the files appear owned by my destination user, whether I run the copy from the server or from the destination; it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same but doesn't work over ssh; in any case I tried it on the server and got many errors. As I remember, tar also keeps permissions and owners, but I ran into errors because the server is in use, and restoring that way isn't fast. I also remember the magic dd command, but I made a big partition. Rsync looked like the best option, and it would also keep the backup synchronized. I read that newer versions of rsync handle owners well, but my package is already up to date.
Does anybody have an idea how to do this, or what the normal process is to keep my server properly backed up, so I can restore it just by recreating the partition?
The services are Taiga (a project management platform), a git repository, a code reviewer, and so on; all are working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the job.
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
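For example, a restore could look roughly like this (a sketch reusing the paths from the command above; it runs as root locally and connects as root so every backed-up file is readable):
sudo rsync -az -H root@192.168.0.30:/home/myUser/myBackupDirectory/ /directorySource/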
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
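If you go the tar route, a rough sketch could look like this (the paths are just examples; tar stores ownership and permissions in the archive, keeps symlinks as symlinks, and restoring as root puts everything back):
# on the server: create a compressed archive of the tree
sudo tar -czf /tmp/backup.tar.gz -C /directorySource .
# copy the archive to the backup machine
scp /tmp/backup.tar.gz myUser@192.168.0.30:/home/myUser/myBackupDirectory/
# later, after copying the archive back to the server, restore it as root
sudo tar -xpzf backup.tar.gz -C /directorySource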
When I run singularity exec foo.simg whoami I get my own username from the host, unlike in Docker where I would get root or the user specified by the container.
If I look at /etc/passwd inside this Singularity container, an entry has been added to /etc/passwd for my host user ID.
How can I make a portable Singularity container if I don't know the user ID that programs will be run as?
I have converted a Docker container to a Singularity image, but it expects to run as a particular user ID it defines, and several directories have been chown'd to that user. When I run it under Singularity, my host user does not have access to those directories.
It would be a hack but I could modify the image to chmod 777 all of those directories. Is there a better way to make this image work on Singularity as any user?
(I'm running Singularity 2.5.2.)
There is actually a better approach than just chmod 777, which is to create a "vanilla" folder with your application data/conf in the image, and then copy it over to a target directory within the container, at runtime.
Since the copy will be carried out by the user actually running the container, you will not have any permission issues when working within the target directory.
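As a rough sketch, the entrypoint idea looks like this (the directory names and start command are made up for illustration):
#!/bin/sh
# /opt/app-vanilla holds the pristine data/conf baked into the image at build time.
# Copying it at startup means the copy is owned by whoever runs the container
# (under Singularity the target path needs --writable-tmpfs to be writable).
mkdir -p /opt/app-rundir
cp -a /opt/app-vanilla/. /opt/app-rundir/
cd /opt/app-rundir
exec ./start-my-app   # hypothetical start command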
You can have a look at what I did here to create a portable remote desktop service, for example: https://github.com/sarusso/Containers/blob/c30bd32/MinimalMetaDesktop/files/entrypoint.sh
This approach is compatible with both Docker and Singularity, but it depends on your use-case if it is a viable solution or not. Most notably, it requires you to run the Singularity container with --writable-tmpfs.
As a general comment, keep in mind that even if Singularity is very powerful, it behaves more like an environment than a container engine. You can make it work in a more container-like way using some specific options (in particular --writable-tmpfs --containall --cleanenv --pid), but it will still have limitations (variable usernames and user IDs will not go away).
First, upgrade to v3 of Singularity if at all possible (and/or bug your cluster admins to do it). v2 is no longer supported, and several versions below 2.6.1 have security issues.
Singularity is actually mounting the host system's /etc/passwd into the container so that it can be run by any arbitrary user. Unfortunately, this also effectively clobbers any users that may have been created by a Dockerfile. The solution is as you thought, to chmod any files and directories to be readable by all. chmod -R o+rX /path/to/base/dir in a %post step is simplest.
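In a Singularity definition file that could look roughly like this (the path is an example):
%post
    # make application files readable, and directories traversable,
    # by whatever user the container ends up running as
    chmod -R o+rX /path/to/base/dir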
Since the final image is read-only, allowing write permission doesn't do anything, and it's useful to get into the mindset of only writing to files/directories that have been mounted into the container.
I'm currently running a WHM/cPanel server on CentOS. The server seems to be running fine, no issues there. However, I'm using a deployment process that puts files outside of the document root, e.g.
~/deployment
instead of:
~/public_html
Obviously I need to point public_html to this folder so my site will run. So I'm removing public_html and creating a symlink pointing to the new deployment folder. This results in a 500 error.
So looking at the logs I've discovered that it produces the following error:
Directory "/home/xyz/deployment" is writeable by group
Checking the file permissions, it looks as though the symlink is 777, where I need it to be 755 for the server to allow viewing.
Is there a setting in WHM? Is there a setting in CentOS? I have another box running that doesn't have this issue, so I'm assuming this is related to the current setup of this machine.
Any help would be appreciated, thanks.
When you create a hard link to a file or folder, it inherits the permissions of the original file/folder, whereas a soft link always shows up as 777. So I think you can use rsync options for both purposes (see the sketch after this list):
1- have a folder with all the files from the source
2- have your own permissions on that folder
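A rough sketch of what that could look like (the paths and modes are examples; --chmod forces the permissions you want onto the copy):
rsync -a --delete --chmod=D755,F644 ~/deployment/ ~/public_html/
Run from a deploy hook or cron job, this keeps public_html as a real 755 directory instead of a 777 symlink.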
I have a zillion machines in different places (home network, cloud, ...) and I use fish on each of them. The problem is that I have to synchronize their configuration every time I change something in there.
Is there a way to load the configuration from a remote repository? (= a place where it would be stored, not necessarily git but ideally I would manage them in GitHub). In such a case I would just have a one liner everywhere.
I do not care too much about startup time; loading the config each time would be acceptable
I cannot push the configuration to the machines (via Ansible for instance) - not all of them are reachable from everywhere directly - but all of them can reach the Internet
There are two parts to your question. Part one is not specific to fish. For systems I use on a regular basis I use Dropbox. I put my ~/.config/fish directory in a Dropbox directory and symlink to it. For machines I use infrequently, such as VMs I use for investigating problems unique to a distro, I use rsync to copy from my main desktop machine. For example,
rsync --verbose --archive --delete -L --exclude 'fishd.*' krader@macpro:.config .
Note the exclusion of the fishd.* pattern. That's part two of your question and is unique to fish. Files in your ~/.config/fish directory named with that pattern are the universal variable storage and are currently unique for each machine. We want to change that -- see https://github.com/fish-shell/fish-shell/issues/1912. The problem is that file contains the color theme variables. So to copy your color theme requires exporting those vars on one machine:
set -U | grep fish_color_
Then run set -U on the new machine for each line of output from the preceding command. Obviously, if you have other universal variables you want synced, you should just run set -U without the grep and import all of them.
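For example, if the old machine prints a line like fish_color_command 005fd7 (a made-up value), on the new machine you would run:
set -U fish_color_command 005fd7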
Disclaimer: I wouldn't choose this solution myself. Using a cloud storage client as Kurtis Rader suggested, or a periodic cron job to pull changes from a git repository (+ symlinks), seems a lot easier and more foolproof.
On those systems where you can't or don't want to sync with your cloud storage, you can download the configuration file specifically, using curl for example. Some precious I/O time can be saved by utilizing HTTP cache-control mechanisms. With or without cache control, you will still need to open a connection to a remote server each time (or every X runs, or every Y amount of time), and that already wastes quite some time.
Following is a suggestion for such a fish script, to get you started:
#!/usr/bin/fish
set -l TMP_CONFIG /tmp/shared_config.fish

# Fetch the shared config, sending the ETag from the last successful download
# so the server can answer 304 Not Modified with an empty body.
curl -s -o $TMP_CONFIG -D $TMP_CONFIG.headers \
    -H "If-None-Match: \"$SHARED_CONFIG_ETAG\"" \
    https://raw.githubusercontent.com/woj/dotfiles/master/fish/config.fish

# A non-empty body means the file changed: install it and remember the new ETag.
if test -s $TMP_CONFIG
    mv $TMP_CONFIG ~/.config/fish/conf.d/shared_config.fish
    set -U SHARED_CONFIG_ETAG (sed -En 's/ETag: "(\w+)"/\1/p' $TMP_CONFIG.headers)
end
Notes:
Warning: Not tested nearly enough
Assumes fish v2.3 or higher.
sed behavior varies from platform to platform.
Replace woj/dotfiles/master/fish/config.fish with the repository, branch and path that apply to your case.
You can run this from a cron job, but if you insist on updating the configuration file on every init, change the script to place the configuration in a path that isn't already automatically loaded by fish, e.g.:
mv $TMP_CONFIG ~/.config/fish/shared_config.fish
and in your config.fish run this whole script file, followed by a
source ~/.config/fish/shared_config.fish
I have 2 computers in different places (so it's impossible to use the same wifi network).
One contains about 50 GB of data (MongoDB files) that I want to move to the second one, which has much more computing power for analysis. But how can I make MongoDB on the second machine recognize that folder?
When you start the mongod process you provide it an argument, --dbpath /directory, which is how it knows where the data folder is.
All you need to do is:
Stop the mongod process on the old computer and wait until it exits.
Copy the entire /data/db directory to the new computer.
Start the mongod process on the new computer, giving it the --dbpath /newdirectory argument.
The mongod on the new machine will use the folder you indicate with --dbpath. There is no need to "recognize" anything, as there is nothing machine-specific in that folder; it's just data.
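A rough sketch of those steps on a Linux system using systemd (the hostname and paths are examples):
# on the old computer
sudo systemctl stop mongod
sudo rsync -a /data/db/ newhost:/newdirectory/
# on the new computer
mongod --dbpath /newdirectory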
I did this myself recently, and I wanted to provide some extra considerations to be aware of, in case readers (like me) run into issues.
The following information is specific to *nix systems, but it may be applicable with very heavy modification to Windows.
If the source data is in a mongo server that you can still run (preferred)
Look into and make use of mongodump and mongorestore. That is probably safer, and it's the official way to migrate your database.
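A minimal sketch (the dump path is an example; both commands assume mongod is reachable on the default localhost port):
# on the old machine, with mongod running
mongodump --out /tmp/mongo-dump
# copy /tmp/mongo-dump to the new machine (scp, rsync, ...), then there:
mongorestore /tmp/mongo-dump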
If you never made a dump and can't anymore
Yes, the data directory can be directly copied; however, you also need to make sure that the mongodb user has complete access to the directory after you copy it.
My steps are as follows. On the machine you want to transfer an old database to:
Edit /etc/mongod.conf and change the dbPath field to the desired location.
Use the following script as a reference, or tailor it and run it on your system, at your own risk.
I do not guarantee this works on every system --> please verify it manually.
I also cannot guarantee it works perfectly in every case.
WARNING: will delete everything in the target data directory you specify.
I can say, however, that it worked on my system, and that it passes shellcheck.
The important part is simply copying over the old database directory, and giving mongodb access to it through chown.
#!/bin/bash
TARGET_DATA_DIRECTORY=/path/to/target/data/directory # modify this
SOURCE_DATA_DIRECTORY=/path/to/old/data/directory # modify this too
echo shutting down mongod...
sudo systemctl stop mongod
if test "$TARGET_DATA_DIRECTORY"; then
    echo removing existing data directory...
    sudo rm -rf "$TARGET_DATA_DIRECTORY"
fi
echo copying backed up data directory...
sudo cp -r "$SOURCE_DATA_DIRECTORY" "$TARGET_DATA_DIRECTORY"
sudo chown -R mongodb "$TARGET_DATA_DIRECTORY"
echo starting mongod back up...
sudo systemctl start mongod
sudo systemctl status mongod # for verification
Quite easy for Windows: just move the data folder to the target location, then from cmd run:
"C:\your\mongodb\bin-path\mongod.exe" --dbpath="c:\what\ever\path\data\db"
In the case of Windows, if you just need to configure a new path for the data, all you need to do is create a new folder, for example D:\dev\mongoDb-data, open C:\Program Files\MongoDB\Server\6.0\bin\mongod.cfg and change the path there:
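The relevant setting is storage.dbPath; with the example folder above, that part of mongod.cfg would presumably look something like this:
storage:
  dbPath: D:\dev\mongoDb-data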
Then restart your PC. Check the folder - it should contain new files/folders with data.
Maybe what you didn't do was export or dump the database.
Databases aren't portable and therefore must be exported or created as a dump file.
Here is another question where the answer is further explained
I am trying to create an exact mirror of a Magento production server on my local server for further development, but I have run into a few issues.
On the production server, our Magento is configured to run without displaying the index.php, but after attempting a migration to my local server, the index.php is required to access any links. Additionally, when I select a category to visit (for example), I am directed to http://localhost/category.html instead of http://localhost/my-magento-store.com/index.php/category.html
The other issue I've noticed is that I am unable to log in to the admin section. After entering the correct login credentials, I am redirected to the login screen again without any error messages.
I am running a MAMP stack on the local server, and here is what I have done:
Created a tar of the entire production server
Created a database backup in Magento System > Tools > Backups
Downloaded and extracted tar into local directory
Imported database dump into local MySQL using Alexey Ozerov's big dump script. (The .sql file is 1.3m lines)
Changed values of web/unsecure/base_url and web/secure/base_url in core_config_data table. (As I don't have a self-signed SSL cert, I put http://localhost:8888/my-magento-store/ for both values)
Dumped contents of var/cache and var/session
Changed permissions to 755 for all files on local dev server
Navigated to http://localhost:8888/my-magento-store/ but got the "Index of /" page instead.
Navigated to http://localhost:8888/my-magento-store/index.php and got an error.
Followed these steps to solve the error, reloaded the page, and the home page loaded correctly.
Any ideas?
URL Rewriting depends on your .htaccess file, so there are a couple of things to check:
1. web/seo/use_rewrites in core_config_data should be true.
2. When you created your tarball, did it include the dot files in the root directory, especially .htaccess? If you used tar -cvf archive.tar * then it may have missed them. (Nice "feature" of *nix.) See the example after this list.
3. Check that your MAMP httpd.conf has AllowOverride All, otherwise your local .htaccess will be ignored.
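A way to create the tarball that does include hidden files such as .htaccess is to archive the directory itself instead of using a shell glob (the path is an example):
tar -cvf archive.tar -C /path/to/magento/root .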
I'm not familiar with MAMP, but it's possible that it's having a problem reading/interpreting your .htaccess, though this is unlikely. I'd focus on options 1 thru 3 first.
HTH,
JD