How to load fish configuration from a remote repository? - fish

I have a zillion machines in different places (home network, cloud, ...) and I use fish on each of them. The problem is that I have to synchronize their configuration every time I change something in there.
Is there a way to load the configuration from a remote repository? (That is, a place where it would be stored; not necessarily git, though ideally I would manage it on GitHub.) In that case I would just need a one-liner on every machine.
I do not care too much about startup time; loading the config each time would be acceptable.
I cannot push the configuration to the machines (via Ansible, for instance) because not all of them are reachable from everywhere directly, but all of them can reach the Internet.

There are two parts to your question. Part one is not specific to fish. For systems I use on a regular basis I use Dropbox. I put my ~/.config/fish directory in a Dropbox directory and symlink to it. For machines I use infrequently, such as VMs I use for investigating problems unique to a distro, I use rsync to copy from my main desktop machine. For example,
rsync --verbose --archive --delete -L --exclude 'fishd.*' krader@macpro:.config .
Note the exclusion of the fishd.* pattern. That's part two of your question and is unique to fish. Files in your ~/.config/fish directory matching that pattern hold the universal variable storage and are currently unique to each machine. We want to change that; see https://github.com/fish-shell/fish-shell/issues/1912. The problem is that this file contains the color theme variables, so copying your color theme requires exporting those vars on one machine:
set -U | grep fish_color_
Then run set -U name value on the new machine for each line of output from the preceding command. Obviously, if you have other universal variables you want synced, just run set -U and import all of them.
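A minimal sketch of that round trip, assuming you can copy a file between the machines (the file name and path are arbitrary):
# On the source machine, dump the color variables:
set -U | grep fish_color_ > /tmp/fish_colors
# On the target machine, re-create each one; values containing
# spaces may need extra quoting:
while read -l name value
    set -U $name $value
end < /tmp/fish_colors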

Disclaimer: I wouldn't choose this solution myself. Using a cloud storage client as Kurtis Rader suggested or a periodic cron job to pull changes from a git repository (+ symlinks) seems a lot easier and fail-proof.
On systems where you can't or don't want to sync with your cloud storage, you can download the configuration file directly, using curl for example. Some precious I/O time can be saved by using HTTP cache-control mechanisms, but with or without them you still need to open a connection to a remote server each time (or every X runs, or every Y interval), and that alone wastes quite some time.
Following is a suggestion for such a fish script, to get you started:
#!/usr/bin/fish
set -l TMP_CONFIG /tmp/shared_config.fish

# Fetch the shared config; send the ETag from the previous run so the
# server can answer 304 Not Modified (leaving $TMP_CONFIG empty).
curl -s -o $TMP_CONFIG -D $TMP_CONFIG.headers \
    -H "If-None-Match: \"$SHARED_CONFIG_ETAG\"" \
    https://raw.githubusercontent.com/woj/dotfiles/master/fish/config.fish

# A non-empty download means the file changed: install it and
# remember the new ETag for the next run.
if test -s $TMP_CONFIG
    mv $TMP_CONFIG ~/.config/fish/conf.d/shared_config.fish
    set -U SHARED_CONFIG_ETAG (sed -En 's/ETag: "(\w+)"/\1/p' $TMP_CONFIG.headers)
end
Notes:
Warning: Not tested nearly enough
Assumes fish v2.3 or higher.
sed behavior varies from platform to platform.
Replace woj/dotfiles/master/fish/config.fish with the repository, branch and path that apply to your case.
You can run this from a cron job, but if you insist on updating the configuration file on every init, change the script to place the configuration in a path that fish does not already load automatically, e.g.:
mv $TMP_CONFIG ~/.config/fish/shared_config.fish
and in your config.fish run this whole script file, followed by a
source ~/.config/fish/shared_config.fish
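Put together, the wiring in config.fish might look like this (the update script's path and name are assumptions):
# ~/.config/fish/config.fish
# Refresh the shared config on every shell start, then load it.
~/.config/fish/update_shared_config.fish
source ~/.config/fish/shared_config.fish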

Related

File ownership and permissions in Singularity containers

When I run singularity exec foo.simg whoami I get my own username from the host, unlike in Docker where I would get root or the user specified by the container.
If I look at /etc/passwd inside the Singularity container, I see that an entry has been added for my host user ID.
How can I make a portable Singularity container if I don't know the user ID that programs will be run as?
I have converted a Docker container to a Singularity image, but it expects to run as a particular user ID it defines, and several directories have been chown'd to that user. When I run it under Singularity, my host user does not have access to those directories.
It would be a hack but I could modify the image to chmod 777 all of those directories. Is there a better way to make this image work on Singularity as any user?
(I'm running Singularity 2.5.2.)
There is actually a better approach than just chmod 777: create a "vanilla" folder with your application data/config in the image, and then copy it over to a target directory within the container at runtime.
Since the copy is carried out by the user actually running the container, you will not have any permission issues when working within the target directory.
You can have a look at what I have done here to create a portable remote desktop service, for example: https://github.com/sarusso/Containers/blob/c30bd32/MinimalMetaDesktop/files/entrypoint.sh
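A minimal sketch of that entrypoint idea, with illustrative paths (the linked script does considerably more):
#!/bin/sh
# Copy the pristine ("vanilla") application data into a writable
# location on first start, so the invoking user owns the copy.
if [ ! -d /tmp/appdata ]; then
    cp -a /opt/appdata-vanilla /tmp/appdata
fi
exec "$@"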
This approach is compatible with both Docker and Singularity, but whether it is viable depends on your use case. Most notably, it requires you to run the Singularity container with --writable-tmpfs.
As a general comment, keep in mind that even if Singularity is very powerful, it behaves more like an environment than a container engine. You can make it behave more container-like using some specific options (in particular --writable-tmpfs --containall --cleanenv --pid), but it will still have limitations (variable usernames and user IDs will not go away).
First, upgrade to v3 of Singularity if at all possible (and/or bug your cluster admins to do it). v2 is no longer supported, and several versions below 2.6.1 have security issues.
Singularity mounts the host system's /etc/passwd into the container so that it can be run by any arbitrary user. Unfortunately, this also effectively clobbers any users that may have been created by a Dockerfile. The solution is, as you suspected, to chmod any files and directories so they are readable by all; chmod -R o+rX /path/to/base/dir in a %post step is simplest.
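For example, in the definition file used to build the image (the directory path is illustrative):
%post
    # make the application tree readable, and directories
    # traversable, by every user
    chmod -R o+rX /opt/myapp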
Since the final image is read-only, granting write permission doesn't accomplish anything; it's useful to get into the mindset of writing only to files and directories that have been mounted into the image.

make server backup, and keep owner with rsync

I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, preserving owners, groups, permissions, and symlinks.
I tried rsync to keep the owner and group, but on the machine that receives the copy they are lost:
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do this with the / folder, to keep all my configuration just in case; I have three services that have their own users and may modify folders outside their home directories.
In the destination the files appear owned by the destination user, whether I run the copy from the server or from the destination; it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same but doesn't work over ssh; anyway, I tried it on the server itself and got many errors. As I recall, tar also keeps permissions and owners, but it gave errors because the server is in use, and restoring from it would not be fast. I also remember the magic dd command, but I made a big partition. rsync looked like the best option, both for the copy and for keeping the backup synchronized. I read that newer versions of rsync handle owners well, but my package is already up to date.
Does anybody have an idea how to do this, or what the normal process is to keep my server properly backed up, so that restoring is just a matter of recreating the partition?
The services are Taiga (a project management platform), a git repository, a code reviewer, and so on, all working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the job.
Your command would be fine, but you need to run it as the root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need rsync's -o option to preserve owners and -g to preserve groups, but as these are implied by -a, your command is OK. I removed -p because that's also implied by -a.
You'll also need root access on the local end to do the reverse transfer (if you want to restore your files).
If that doesn't work for you (no root access), then you might consider doing this with tar. A proper archive is probably the correct tool for the job and will record all the ownership data. Again, root access will be needed to write it back to the filesystem.
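A rough sketch of the tar route, reusing the host and paths from the question (adjust to taste):
# Create an archive that records owners and permissions, streaming
# it over ssh to the backup host:
sudo tar -cpzf - /directorySource | ssh myUser@192.168.0.30 'cat > myBackupDirectory/backup.tar.gz'
# Restore later; extracting as root restores the recorded owners:
sudo tar -xpzf backup.tar.gz -C /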

Issue with crc32c verification using gsutil

crc32c signature computed for local file (Rgw3kA==) doesn't match cloud-supplied digest (5A+KjA==). Local file (/home/blah/pgdata.tar) will be deleted.
I did a bit of diagnosing and noticed that the cloud-supplied digest was always "5A+KjA==", but the failure usually occurred at a different point in the file, with a different local crc32c. This happens with either:
gsutil -m rsync gs://bucket/ /
or
gsutil -m cp gs://bucket/pgdata.tar /
I get this error almost every time I transfer a large 415 GB tar database file. It always exits with an error at a different part of the file, and it doesn't resume. Are there any workarounds? If it were legitimate file corruption, I would expect it to fail at the same point in the file.
The file seems fine, as I loaded it onto various instances and into PostgreSQL about a week ago.
I'm not sure of the version of gsutil, but it is the one natively installed on the GCE Ubuntu 14.04 image, and I followed the GCE-provided instructions for crcmod installation on Debian/Ubuntu.

Does Net::SCP use multiple connections to transfer multiple files?

Hi,
I am trying to reduce the time taken to transfer N files from a remote machine to the local machine using secure file transfer. Previously I used the scp system command, which establishes a separate connection for each file transferred. Does Net::SCP reuse a single connection?
Thanks in advance.
Unless you have a bandwidth cap on each individual TCP connection, you are not going to get a significant reduction in download time by using multiple SCP connections.
You can check whether you get a speed-up by putting a separate scp command for each file in a shell script and timing the script. Then rerun the shell script with & at the end of each scp line, as sketched below. If this speeds up the transfer and you really want to do it in Perl, look into fork or Parallel::ForkManager.
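A minimal sketch of the two variants (host and file names are placeholders); put each in its own script and run it under time to compare:
#!/bin/sh
# Sequential: one scp invocation, and one connection, per file.
scp user@remote:/data/file1 .
scp user@remote:/data/file2 .

# Parallel: background each transfer, then wait for all of them.
scp user@remote:/data/file1 . &
scp user@remote:/data/file2 . &
wait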
I think this would create a separate connection each time. However, scp with the -r flag (which Net::SCP uses) recursively copies all of the files in a directory over a single connection. That might be the way to go if your files live in a few directories and you want to copy everything in them.
Otherwise, rsync with the --files-from option should use only one connection. (Don't forget -z for compression, or -a).
If the only reason you're considering Perl is that you want a single session, then just use the command-line rsync (with --files-from) to get this effect. If you want Perl's power to generate the files-from list, File::Rsync supports that.
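For example, with the remote paths listed one per line in files.txt (names and paths are placeholders):
# One connection for all listed files; paths in files.txt are
# relative to the remote source directory (here /data):
rsync -az --files-from=files.txt user@remote:/data/ /local/dest/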

How do I copy data from a remote system without using ssh or FTP Perl modules?

I have to write a Perl script to automatically copy data from remote servers to my local system. The directory structure on the remote systems is:
../log/D1/<date>.tar.gz
../log/D2/<date>.gz
../log/D3/<date>.tar.gz
../log/D4/<date>
and the same on the other servers. I want to copy the data to the local system in the layout below:
../log/S1/D1/<date>.tar.gz
../log/S1/D2/<date>.gz
../log/S1/D3/<date>.tar.gz
../log/S1/D4/<date>
and the same for the other servers, i.e. S2, S3, etc.
Also, no SSH-capable Perl modules are available on the remote server or on the local server, and I don't have permission to install any Perl modules. The only good thing is that connectivity is through password-less ssh keys.
Can anyone suggest Perl code to get this done?
I believe you can invoke shell commands from Perl, so you can do this:
# Copy a file from the remote host with scp; scp is secure copy,
# a buddy of ssh.
my $cmd = "/usr/bin/scp remotehost:remotefile localfile";
system($cmd) == 0 or die "scp failed: $?";
This does not require an ssh Perl module, but it does require ssh support on both ends (which you have).
Hope this helps.
I started to suggest the scp command line program, but it seems that there's a CPAN module for that (no surprise). Check out Net::SCP.
By using scp on your client (where you can install new Perl modules) you can copy files without having to install any new software on the remote system. It just needs to have the ssh server running - which you've said it does.
I'd say stop trying to make life difficult for yourself and get the system to support the features you require.
Trying to develop for such a limited, locked-down platform is not going to be cost-effective in the long run: you'll develop more slowly and produce more bugs.
A little developer time is way more expensive than a decent hosted VM / hardware box.
Get a proper host, it will definitely save money (talk to your manager about this).
From your query above, I understand that you don't have permission to install Perl modules or to make any changes that require administrative privileges. I love Perl, but to automate things like this you can use bash instead. Below is sample code using password-less ssh keys.
#!/bin/bash
# Pull today's archive from each server over password-less ssh keys.
DATE=$(date +%Y%m%d)   # match this to the date format in the file names
BASEDIR="/basedir"
cd "$BASEDIR" || exit 1
for HOST in S1 S2 S3
do
    mkdir -p "$HOST/D1"
    scp -q "$HOST:$BASEDIR/D1/$DATE.tar.gz" "$HOST/D1/"
    echo "Data copy from $HOST done"
done
exit 0
You can use different date formats, like date +%Y%m%d (used above) for the current date in YYYYMMDD form; see the date man page for other formats.
Hope this helps.
You may not be able to install anything in system-wide lib directories, but there is nothing preventing you from installing modules in a location to which you have write-access. See How do I keep my own module/library directory?
This creates no more of a security issue than allowing you to write scripts on this system in the first place.
So, go forth and install Net::SCP.
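A sketch of such a per-user install, assuming cpanm and local::lib are available (if not, the perlfaq entry above covers the manual Makefile.PL route):
# Install into ~/perl5 instead of the system directories:
cpanm --local-lib=~/perl5 local::lib Net::SCP
# Teach perl where to find it (add this to your shell startup):
eval "$(perl -I ~/perl5/lib/perl5 -Mlocal::lib)"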
It sounds like you want rsync. You shouldn't have to do any programming at all.
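For example, one invocation per server reproduces the layout from the question (the host aliases are assumptions):
# Mirror each server's log tree into its own subdirectory,
# using a single connection per server:
rsync -az server1:log/ log/S1/
rsync -az server2:log/ log/S2/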