Remotely restarting services on several servers

I have around 1000 servers on which I need to restart the SNMP service. Is there an easy way to do this via a script or a batch file?

Do you have any sort of collection of the IPs and the root usernames and passwords (or SSH keys)?
If so, you could use a for loop to cycle through them (the implementation depends on how they're stored), pick out the username and password with regular-expression filtering or by selecting a field, and use expect to supply the password.
If you don't have a collection like that, you'll have to build a database of them first. At that point it may just be easier to do this round manually, but it is probably worth creating the database anyway in case you ever need to do this again.
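As a minimal sketch, assuming key-based SSH access as root and a plain-text list of hosts (servers.txt, one hostname or IP per line, and the service name snmpd are placeholders here):
#!/bin/bash
# Restart snmpd on every host listed in servers.txt.
# Assumes passwordless SSH as root; wrap the ssh call in expect if you only have passwords.
while read -r host; do
    # -n stops ssh from swallowing the rest of the host list on stdin
    ssh -n -o ConnectTimeout=5 "root@$host" 'service snmpd restart' \
        && echo "$host: restarted" \
        || echo "$host: FAILED" >&2
done < servers.txt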

You should take a look at the Ansible provisioning tool.
The steps would be something like this:
Install Ansible: sudo apt-get install ansible (on Ubuntu)
Define your server groups in /etc/ansible/hosts:
[snmpservers]
myhostnames[01:10000].example.com
Restart the service on all servers
ansible snmpservers -m service -a "name=snmp state=restarted"
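Before restarting anything, you can check that Ansible can actually reach every host in the group (the ping module verifies SSH connectivity and Python on the target; it is not an ICMP ping):
ansible snmpservers -m ping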

Having Trouble with a First Time Installation of PostgreSQL 14.1 on Ubuntu 18.04

I've been having a bit of trouble trying to install PostgreSQL 14 for the first time.
I would like to apologize in advance if this question has been asked in the manner that I am about to ask it, but I do not think it has. If it has been, please direct me to the appropriate location!
I've done a fair amount of Googling on the matter, and all the information that I find seems to be rather fragmented, or I end up following a spaghetti trail of hyperlinks (à la do-this-and-follow-this-other-link-with-more-information-than-you-need-to-understand-this-other-required-portion).
Personally, I don't want to jump around to 50 different locations on the web to try and conjure up a piecemeal solution that I believe works, only to be proven wrong later. I want to know what to do and why it works. I've tried reading the documentation, and have given up on it, because to me, it seems to assume that the server has already been set up by a database administrator.
Instead of articulating my problem directly (as I seem to be having more trouble than I would like by trying to do so), I believe it would be easier to articulate my problem indirectly by stating what my expectations would be after installing PostgreSQL for the first time.
So to start, I will mention that I'm running Ubuntu 18.04.6 LTS, and am installing PostgreSQL 14.1 with the following command:
sudo apt install postgresql-14
Before continuing, I would like to add a side note in advance, that I do not want suggestions for an alternative OS or install method. I just want to be able to get "up and running" in a common-sense fashion from this exact point.
Moving on, I know that the aforementioned command creates a *nix user called postgres.
From here, I can now indirectly state my problem using an outline of what my goals and expectations are immediately after installing the software via that command.
After installing PostgreSQL via apt, these are my expected goals:
I want any client to be able to connect to the database server from any computer where a route exists from the client to the server.
For the sake of simplicity with these stated goals, when it is directly or implicitly stated that I am trying to connect a client to the database server, I am making the assumption that the client is able to, at a minimum, ping the machine that the server is running on, and vice versa.
For now, I'm not completely worried about the database being accessible from the public Internet.
I expect to be able to access the database from any computer on my LAN, whether it is an actual LAN, or some sort of logical LAN (like a WAN or a VPN).
If I change the PostgreSQL password of the postgres user, I expect that any client logging into the database server as the postgres user will be required to supply that password.
This means if I want to change the password to some_password via \password postgres or ALTER USER postgres WITH PASSWORD 'some_password'; (I am assuming this is how you change the login password of a PostgreSQL user), then...
I expect running psql [-h host] -U postgres -W from any host...
That when I am prompted to enter the password... 
I can only log in by entering the exact password of some_password.
Entering any other arbitrary text for the password should not allow me to log in.
I am adding this as a requirement because previous install attempts have shown me that this is NOT the case.
I expect to be able to create a PostgreSQL user account other than postgres (e.g. db_user) with a password and have it be subject to the same requirements as the postgres user.
i.e. once the new account is given permission to log in, the same common-sense login requirements must be imposed: you can't get in if you don't have the correct username/password combination.
If the process to achieve the aforementioned can be explained in such a way that it can be understood with minimal mental friction, I would be extremely grateful.
Feel free to assume that my knowledge is on par with that of an undergraduate CS student who has just completed their first year of university and who also understands Linux filesystems and basic computer networking. I just want the answer to be as accessible to as many people as possible, as I am sure I'm not the only person who has struggled with installing PostgreSQL in spite of having a power user's level of computer literacy.
sudo apt install postgresql
sudo -u postgres psql
Set a password for this user with \password or the other method you mention
sudo vi /etc/postgresql/10/main/pg_hba.conf
Make the only uncommented, non-blank line in this file be:
host all all all md5
sudo vi /etc/postgresql/10/main/postgresql.conf
Uncomment the listen_addresses line and set it to '*'
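So that the line reads:
listen_addresses = '*'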
sudo service postgresql restart
When you make a new user, you should also make a new database with the same name as the user. Otherwise you will need to specify the database name when you try to log in with psql -U, such as psql -U newname -d postgres -h [hhh]. If you are actually running 14 rather than 10, change the paths of the config files accordingly (14 in place of 10).
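As a sketch of that last point (db_user is the example name from the question, and the password is a placeholder), run on the server:
sudo -u postgres psql -c "CREATE USER db_user WITH PASSWORD 'some_password';"
sudo -u postgres createdb -O db_user db_user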

I don't know how PostgreSQL created a user on my Mac

Two days ago, I started learning PostgreSQL. Most tutorials I followed online were either old or the code just wouldn't work on my Mac. I followed a lot of tutorials that did totally different things.
When I switched on my system today, I noticed PostgreSQL had created a user on my Mac. I don't know what this is, or maybe I used the wrong CLI command.
When I tried viewing the user, I saw this:
Should I delete this user, or does it have a function?
postgres user account
Creating a user account specifically for Postgres, commonly named postgres, is a normal part of a Postgres installation. Your installer app likely prompted you for a password to assign to this new user account.
One reason for this is security: The database’s data files and security configuration files are stored in folders owned by the postgres user. So if your main user account is hijacked, the intruder does not yet have access to the database (often the most valuable thing in storage). The intruder must jump through more hoops to compromise Postgres. Also, the separate ownership prevents other apps from inadvertently stomping on the Postgres files.
You will find Postgres is much more enterprise-oriented than other products such as MySQL. This means locking things down for security. Another example: Postgres by default is configured not to accept connections over the network. To enable connections from other computers, you must change the configuration. Inconvenient for the beginner, but more secure. Like a bar on your car steering wheel and deadbolts on your doors, more security always means more steps to take and more annoyance.
Use a virtual machine
Installing the postgres user account is one of the things that makes Postgres a rather heavyweight installation. I suggest that those learning Postgres use a virtual machine for it. Something like:
Parallels or Fusion or VirtualBox on your own computer
Cloud server such as FreeBSD on DigitalOcean.com.
To remove Postgres, simply discard the VM.
Postgres.app for macOS
Another option for a Mac user is Postgres.app, created by the person who built one of the first Postgres-as-a-Service implementations (on Heroku). I have not used Postgres.app, but I understand it wraps Postgres so that it does not install the postgres user account. Also, Postgres starts and stops when you launch and quit the app, rather than running in the background all the time.
Be aware: you may have conflicts with Postgres.app on a Mac where you already have a conventional installation. I suggest you first carefully remove the conventional Postgres from your Mac before installing Postgres.app. Uninstalling involves finding and deleting various files and folders in various places.
Database-as-a-Service (DBaaS)
Another option that avoids a local installation is the growing number of services for running Postgres. This is sometimes referred to as “managed Postgres” because the vendor maintains the installation of Postgres on your behalf. You simply use Postgres to create your database, but you do not fully control Postgres in such a service.
Some examples:
Heroku
Digital Ocean
Azure
Amazon Web Services (AWS)
ElephantSQL
My experience
Personally, I often install Postgres on a Mac using the installer by EnterpriseDB.com. That company sells added-value versions of Postgres, but kindly provides an installer for plain-vanilla Postgres, as a service to the community.
I have also used that same installer from EnterpriseDB.com to install onto a Parallels VM running macOS as the guest OS within the VM on a MacBook Pro running macOS as the host OS. You can easily configure the VM to share the host Mac’s IP address on the network, or you can give the VM its own network address which might be handy for demo/dev/test work.
Thirdly, I have installed Postgres on FreeBSD on DigitalOcean.com.
All three of these options have worked quite well for me. Which is preferable depends on the scenario. For example, the DigitalOcean.com approach is good if I want colleagues to be able to reach the database 24x7 without my own MacBook being available.
This discussion is for development work. For mission-critical deployment, I strongly recommend using heavy-duty server equipment with error-correcting memory and redundant storage such as RAID or ZFS pool. Postgres is extremely reliable but depends, of course, on reliable hardware.
Your tag says Postgres 9.1. That version is quite old now; I suggest using the latest version. By the way, the version-numbering system has changed for Postgres: the first number is now the roughly-annual major release (upgrading will likely require you to dump and reload your data), and the second number indicates compatible bug-fix updates.
As pointed out by Basil Bourque, the account is required for several reasons.
That said, if it annoys you to have the PostgreSQL user showing up in the login screen (as it did me), you can remove it as long as you have admin rights in macOS.
Apple Support gives the following command to hide a user from the login screen:
$ sudo dscl . create /Users/hiddenuser IsHidden 1
However, since the PostgreSQL user was not listed in the login window at installation, that command will yield no result (at least in Catalina, which is my OS).
You should use the following two commands instead, as suggested by josemarluedke:
## add postgres to the list of hidden users on login screen
$ sudo defaults write /Library/Preferences/com.apple.loginwindow HiddenUsersList -array-add 'postgres'
## instruct not to show any hidden accounts at login
$ sudo defaults write /Library/Preferences/com.apple.loginwindow SHOWOTHERUSERS_MANAGED -bool FALSE
Worked for me!

Run tasks as another user

Using Capistrano v3, how can I run all remote tasks through su as another user? I cannot find anything in the official documentation (http://capistranorb.com/).
For my use case, there is one SSH user and one user for every virtual host. User A connects to the server and should run all commands as user B.
This isn't much of an answer, but I don't think what you are trying to do is possible without code modifications. Here's why:
There are two primary cases where you would use a different user:
Deployment needs to run as a particular user because of file ownership.
Deployment needs to run with root permissions.
In the first case, you generally would simply tell Capistrano to ssh as that user.
In the second case, you would tell Capistrano to run certain commands with passwordless sudo (http://capistranorb.com/documentation/getting-started/authentication-and-authorisation/#authorisation).
I can see a situation where only one user is available via SSH, but file ownership and permissions are based on another user, so you want to make su part of the workflow. I'm sure it is possible, but if I had to do it, I would be reading the source code of Capistrano and overriding how shell commands are executed. This would be non-trivial.
If you have a specific command like rm which needs to run as a different user, you may be able to use the SSHKit.config.command_map[:rm] = 'sudo rm' mechanism to do it.
In a nutshell, I don't think what you are asking for is, on its face, easily done with Capistrano. If you have a specific use case, we may be able to offer suggestions as to how you may approach the problem differently which plays better to Capistrano's strengths.
Good luck!
Update
Looking further, the capistrano-rbenv gem has a mechanism by which it has overridden the execution of all commands:
task :map_bins do
  SSHKit.config.default_env.merge!({ rbenv_root: fetch(:rbenv_path), rbenv_version: fetch(:rbenv_ruby) })
  rbenv_prefix = fetch(:rbenv_prefix, proc { "#{fetch(:rbenv_path)}/bin/rbenv exec" })
  SSHKit.config.command_map[:rbenv] = "#{fetch(:rbenv_path)}/bin/rbenv"
  fetch(:rbenv_map_bins).each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift(rbenv_prefix)
  end
end
https://github.com/capistrano/rbenv/blob/master/lib/capistrano/tasks/rbenv.rake#L17
You might have success with something similar.
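For example, here is a rough, untested sketch along the same lines, assuming the SSH user can run sudo -u userb without a password (userb and the list of commands are placeholders):
task :prefix_bins_as_userb do
  # Prefix the given commands so SSHKit executes them as user B.
  %w[rm mkdir bundle].each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift("sudo -u userb")
  end
end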
In order to run all remote tasks through su as another user, I think you need to change the ownership for that user.
I'm assuming that the deployment folder is /public_html/test.
sudo chown User:User /public_html/test   # chown changes the ownership so that the User user can read/write
umask 0002
sudo chown User:User public_html/test/releases
sudo chown User:User public_html/test/shared
Hope this solves your issue!

Make a server backup and keep owners with rsync

I recently configured a little server to test some services. Now, before upgrading or installing new software, I want to make an exact copy of my files, with owners, groups, and permissions, and also the symlinks.
I tried rsync so as to keep the owner and group, but on the machine that receives the copy I lose them.
rsync -azp -H /directorySource/ myUser@192.168.0.30:/home/myUser/myBackupDirectory
My intention is to do this with the / folder, to keep all my configuration just in case; I have three services that have their own users and may make modifications in folders outside their homes.
In the destination folder the files appear owned by my destination user. Whether I run the copy from the server or from the destination, it doesn't keep the users and groups! I created the same user, tried with sudo, and a friend even tried with a 777 folder :)
cp theoretically does the same job but doesn't work over ssh; anyway, I tried it on the server and got many errors. As I recall, the tar command also keeps permissions and owners, but I got errors because the server is in use, and the restore process isn't fast. I also remember the magic dd command, but I made a big partition. rsync looked like the best option, both to do the copy and to keep the backup synchronized. I read that newer versions of rsync handle owners well, and my package is already upgraded.
Does anybody have an idea how to do this, or what the normal process is to keep my own server properly backed up, so that I can restore it just by recreating the partition?
The services are Taiga (a project-management platform), a Git repository, a code reviewer, and so on; all are working well with nginx on Ubuntu Server. I haven't looked at other backup methods because I thought rsync with a cron job would do the job.
Your command would be fine, but you need to run as root user on the remote end (only root has permission to set file owners):
rsync -az -H /directorySource/ root@192.168.0.30:/home/myUser/myBackupDirectory
You also need to ensure that you use rsync's -o option to preserve owners, and -g to preserve groups, but as these are implied by -a your command is OK. I removed -p because that's also implied by -a.
You'll also need root access, on the local end, to do the reverse transfer (if you want to restore your files).
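For instance, the restore could look like this (a sketch that simply reverses the original command; run it with sudo so rsync can set owners locally):
sudo rsync -az -H root@192.168.0.30:/home/myUser/myBackupDirectory/ /directorySource/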
If that doesn't work for you (no root access), then you might consider doing this using tar. A proper archive is probably the correct tool for the job, and will contain all the correct user data. Again, root access will be needed to write that back to the file-system.
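A minimal sketch of that approach (paths are placeholders; tar stores owners and groups in the archive, and extracting as root restores them):
# create the archive as root so every file can be read
sudo tar -czpf /home/myUser/backup.tar.gz /directorySource/
# later, restore as root so owners and permissions can be set back
sudo tar -xzpf /home/myUser/backup.tar.gz -C /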

How do I copy data from a remote system without using ssh or FTP Perl modules?

I have to write a Perl script to automatically copy data from remote server to my local system. The directory structure on remote systems is:
../log/D1/<date>.tar.gz
../log/D2/<date>.gz
../log/D3/<date>.tar.gz
../log/D4/<date>
and the same on the other server. I want to copy the data to the local system in the format below.
../log/S1/D1/<date>.tar.gz
../log/S1/D2/<date>.gz
../log/S1/D3/<date>.tar.gz
../log/S1/D4/<date>
and the same for the other servers, i.e. S2, S3, etc.
Also, no SSH-related Perl modules are available on the remote server or on the local server, and I don't have permission to install any Perl modules. The only good thing is that the connectivity is through password-less SSH keys.
Can anyone please suggest Perl code to get this done?
I believe you can run shell commands from Perl.
So you can do this:
my $cmd = "/usr/bin/scp remotehost:remotefile localfile";  # scp uses the same key-based auth as ssh
system($cmd) == 0
    or die "scp failed: $?";
NOTE: scp is secure copy -- a buddy of ssh.
This does not require an SSH Perl module, but it does require ssh support on both ends (which I have).
Hope this helps.
I started to suggest the scp command line program, but it seems that there's a CPAN module for that (no surprise). Check out Net::SCP.
By using scp on your client (where you can install new Perl modules) you can copy files without having to install any new software on the remote system. It just needs to have the ssh server running - which you've said it does.
I'd say stop trying to make life difficult for yourself and get the system to support the features you require.
Trying to develop for such a limited, locked-down platform is not going to be cost-effective in the long run: you'll develop more slowly and the result will have more bugs.
A little developer time is far more expensive than a decent hosted VM or hardware box.
Get a proper host; it will definitely save money (talk to your manager about this).
From your query above I understand that you don't have permission to install Perl modules or make any changes that require administrative privileges. I love Perl, but to automate things like this you should use bash instead. Below is a sample script I use with password-less SSH keys.
#!/bin/bash
# The date format must match the remote file names; adjust as needed (see below).
DATE=$(date +%Y%m%d)
BASEDIR="/basedir"
cd "$BASEDIR" || exit 1
for HOST in S1 S2 S3
do
    for DIR in D1 D2 D3 D4
    do
        mkdir -p "$HOST/$DIR"
        # Quoting the whole remote path lets the remote shell expand the glob.
        scp -q "$HOST:$BASEDIR/$DIR/$DATE*" "$HOST/$DIR/"
    done
    echo "Data copy from $HOST done"
done
exit 0
You can use different date formats, like date +%Y%m%d for the current date in YYYYMMDD format; see the date man page (man date) for the other format specifiers.
Hope this helps.
You may not be able to install anything in system-wide lib directories, but there is nothing preventing you from installing modules in a location to which you have write-access. See How do I keep my own module/library directory?
This creates no more of a security issue than allowing you to write scripts on this system in the first place.
So, go forth and install Net::SCP.
It sounds like you want rsync. You shouldn't have to do any programming at all.
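For example, something like this (a sketch using the layout from the question; server1 and server2 are placeholder hostnames, and each line can go in a cron job):
rsync -az server1:/log/ /log/S1/
rsync -az server2:/log/ /log/S2/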