What is a simple way to disable a given postgresql database cluster? - postgresql

I can use pg_ctlcluster version cluster stop to stop the service for a given cluster, but if the machine reboots it starts right back up again. I could do something like changing the data directory in its postgresql.conf file, but that smells to me.

PostgreSQL installations that have pg_ctlcluster (Debian-based) look for a start.conf file in /etc/postgresql/<version>/<clustername> with these contents:
# Automatic startup configuration
# auto: automatically start/stop the cluster in the init script
# manual: do not start/stop in init scripts, but allow manual startup with
#         pg_ctlcluster
# disabled: do not allow manual startup with pg_ctlcluster (this can be easily
#           circumvented and is only meant to be a small protection for
#           accidents).
auto
Replace auto with manual to keep that particular cluster from starting automatically at boot; it can still be started and stopped by hand with pg_ctlcluster.
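A minimal shell sketch, assuming PostgreSQL 9.3 and a cluster named main (adjust the version and cluster name to match yours):
sudo pg_ctlcluster 9.3 main stop
sudo sed -i 's/^auto$/manual/' /etc/postgresql/9.3/main/start.conf
# the cluster now stays down across reboots, but can still be started manually:
# sudo pg_ctlcluster 9.3 main start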

Which config files could disable the automatically starting ssh server, so a headless connect becomes impossible?

I need to know which config files might prevent the ssh server from starting up normally at boot.
I believe that you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop stops the service immediately. disable prevents the service from starting at boot. Additionally, mask makes it impossible to load the service at all.
Digging deeper into what each command does: on modern Linux distributions there are configuration files for each service called unit files, usually stored under /usr/lib/systemd/system. These are basically the evolution of the scripts that used to start services.
The stop command tells systemd to stop the unit described by sshd.service, shutting the server down immediately.
The disable (or enable) command removes (or creates) a symlink to the unit file in the directory systemd scans when starting services at boot (under /etc/systemd/system).
systemctl mask creates a symlink to /dev/null in place of the unit file, so the service can't be loaded at all.
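A quick way to check the effect of each command (a minimal sketch; it assumes the unit really is named sshd.service, while on Debian/Raspbian it may be ssh.service):
systemctl is-active sshd    # reports "inactive" after stop
systemctl is-enabled sshd   # reports "disabled" after disable, "masked" after mask
ls -l /etc/systemd/system/sshd.service   # points at /dev/null once the unit is masked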

AWS Elasticbeanstalk ebextensions server restart error "Error occurred during build: [Errno 4] Interrupted function call"

I've got an Elastic Beanstalk environment that needs to run a PowerShell script and restart before the application is deployed. According to the documentation, this is supported:
If the system requires a reboot after the command completes, the system reboots after the specified number of seconds elapses. If the system reboots as a result of a command, Elastic Beanstalk will recover to the point after the command in the configuration file. The default value is 60 seconds. You can also specify forever, but the system must reboot before you can run another command.
However, when I add a reboot command to an .ebextensions .config file I get the following exception from Elastic Beanstalk:
Error occurred during build: [Errno 4] Interrupted function call
The logs on the server after it has rebooted show that the command was executed so I assume the error is caused by a restart during the app deploy stage.
If I remove the restart command, deploy, wait for it to be ready then trigger a restart manually it works fine. But this is obviously not acceptable.
I've looked into the deployment hooks file system approach, but that doesn't work either, and it seems unnecessary given that this sounds like it should be supported out of the box.
Does anybody have any ideas?
We've had the same issue. We needed to disable SSL and TLS < 1.2, which requires registry changes and a reboot. Our workaround is to do the reboot in the container_commands section with a wait of forever. This seems to properly reboot and then trigger success in the deployment. However, it never actually does any of the steps after the reboot, which includes the built-in deployment of the code from the staging location to the actual final file destination (inetpub/wwwroot most likely). To get around this, have a step just before the reboot to copy the files from the local staging directory to the web root yourself.
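A rough sketch of that workaround as an .ebextensions config (the C:\staging path is an assumption for illustration; check your platform's deployment logs for the real staging location):
container_commands:
  00_copy_to_webroot:
    # NOTE: C:\staging is an assumed staging path, adjust to your platform
    command: powershell.exe -Command "Copy-Item -Path 'C:\staging\*' -Destination 'C:\inetpub\wwwroot' -Recurse -Force"
  01_reboot:
    command: powershell.exe -Command "Restart-Computer -Force"
    waitAfterCompletion: forever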
We also needed to set a registry value and reboot. Our solution was to put the script in the commands section and set waitAfterCompletion to forever. There is a Restart-Computer -Force in our PowerShell script to cause the reboot.
disable_secure_time_seeding:
  command: powershell.exe -ExecutionPolicy Bypass -File "C:\\scripts\\DisableSecureTimeSeeding.ps1" # This will cause a reboot
  waitAfterCompletion: forever

Moving MongoDB dbpath to an AWS EBS device

I'm using CentOS 7 via AWS.
I'd like to store MongoDB data on an attached EBS instead of the default /var/lib path.
However, when I edit /etc/mongod.conf to point to a new dbpath, I'm getting a permission denied error.
Permissions are set correctly to mongod.mongod on the dir.
What gives?
TL;DR - The issue is SELinux, which affects what daemons can access. Run setenforce 0 to temporarily disable.
You're using a flavour of Linux that uses SELinux.
From Wikipedia:
SELinux can potentially control which activities a system allows each user, process and daemon, with very precise specifications. However, it is mostly used to confine daemons like database engines or web servers that have more clearly defined data access and activity rights. This limits potential harm from a confined daemon that becomes compromised. Ordinary user-processes often run in the unconfined domain, not restricted by SELinux but still restricted by the classic Linux access rights.
To fix temporarily:
sudo setenforce 0
This should disable SELinux policies and allow the service to run.
To fix permanently:
Edit /etc/sysconfig/selinux and set this:
SELINUX=disabled
Then reboot.
The service should now start up fine.
The data dir will also work with Docker, i.e. something like:
docker run --name db -v /mnt/path-to-mounted-ebs:/data/db -p 27017:27017 mongo:latest
Warning: Both solutions DISABLE the security that SELinux provides, which will weaken your overall security. A better solution is to understand how SELinux works, and create a policy on your new data dir that works with mongod. See https://wiki.centos.org/HowTos/SELinux for a more complete tutorial.
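For the policy-based route, a sketch along these lines relabels the new data directory so mongod can use it with SELinux left enforcing (the mongod_var_lib_t type assumes the stock targeted policy on CentOS 7, and semanage comes from the policycoreutils-python package):
sudo yum install -y policycoreutils-python        # provides semanage
sudo semanage fcontext -a -t mongod_var_lib_t "/mnt/path-to-mounted-ebs(/.*)?"
sudo restorecon -Rv /mnt/path-to-mounted-ebs
sudo setenforce 1                                 # re-enable enforcing mode if it was disabled earlier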

Vagrant & Chef - Postgresql not starting on reboot

I decided to create my own chef script to install Postgres. The installation works perfectly fine, but postgres doesn't start on boot when I vagrant reload
Here's my recipes/default.rb:
include_recipe "apt"
apt_repository 'apt.postgresql.org' do
uri 'http://apt.postgresql.org/pub/repos/apt'
distribution node["lsb"]["codename"] + '-pgdg'
components ['main', node["postgres"]["version"]]
key 'http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc'
action :add
end
package 'postgresql-' + node["postgres"]["version"] do
action :install
end
file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
action :delete
end
link "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
to node["postgres"]["conf_path"]
action :create
notifies :reload, "service[postgresql]", :delayed
end
service "postgresql" do
action [:enable, :start]
supports :status=>true, :restart=>true, :start => true, :stop => true, :reload=>true
end
And here's my attributes/default.rb:
default["postgres"]["version"] = "9.3"
default["postgres"]["conf_path"] = "/home/vagrant/postgres/postgresql.conf"
Any help would be greatly appreciated!!
============ EDIT 1 ============
Here is the output when running vagrant up for the first time with chef.log_level = :debug: http://pastebin.com/w8Lp8gzv
Here is /etc/init.d/postgresql: http://pastebin.com/dQ5Zb1yj
Here is /var/log/postgresql/postgresql-9.3-main.log: http://pastebin.com/0Y2RhWvL
============ EDIT 2 ============
I'm now fairly confident that it's my postgresql.conf file, which looks like: http://pastebin.com/rjX89iU0
shared_buffers might be too high...
When you run vagrant reload, is the Chef Client running? I suspect not. Mitchell changed the behavior in a recent version of vagrant to only provision if the machine hasn't already been provisioned. This information is stored in the .vagrant directory in your working directory. In short, since you already provisioned your machine with vagrant up, it is not provisioned when you run vagrant reload.
You run vagrant up - this is actually going to run vagrant up --provision, which executes the Chef Client provisioner on the node, executing your Chef Recipe.
You run vagrant reload - this actually runs vagrant up --no-provision, because the .vagrant directory indicates the machine has already been provisioned. So your machine is rebooted, but the Chef Client provisioner is not executed.
Solution
Run vagrant reload with the --provision flag
vagrant reload --provision
Notes
This still doesn't explain why upstart (or whatever you're using to ensure the postgres service is running at boot) isn't starting the server for you automatically. In order to answer that question, I'll need to see more information. Can you set chef.log_level = :debug in your Vagrantfile and update your question with the output? It would also be helpful to see the init.d script this postgres installer creates, and any log output from /var/log related to postgres.
Alright, it looks like PostgreSQL doesn't play nice with postgresql.conf being a symbolic link. Copying the file instead did the trick.
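For reference, a minimal sketch of what that change could look like in the recipe above, replacing the file/link pair with a copy (remote_file accepts file:// sources; this assumes the postgres user and group already exist by this point, which the package install takes care of):
remote_file "/etc/postgresql/#{node['postgres']['version']}/main/postgresql.conf" do
  # copy the managed config into place instead of symlinking it,
  # so the init script sees a regular file at boot
  source "file://#{node['postgres']['conf_path']}"
  owner 'postgres'
  group 'postgres'
  mode '0644'
  notifies :reload, "service[postgresql]", :delayed
end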
Turns out PostgreSQL was starting before the postgresql.conf file was mounted.
If you're starting services with Upstart that depend on something in Vagrant's shared folders, have your upstart conf file listen for the vagrant-mounted event.
# /etc/init/start-postgresql.conf
start on vagrant-mounted

script
  # commands to start postgresql...
end script
The vagrant-mounted event is emitted after Vagrant is done setting up shared folders; this way you can restart dependent services after vagrant reload without having to run your provisioners again.

How to run postgres on centos when installed via YUM repo as default daemon user

With a freshly installed version of Postgres 9.2 via the yum repository on CentOS 6, how do you run postgres as a different user when it is configured to run as 'postgres:postgres' (u:g) out of the box?
In addition to AndrewPK's explanation, I'd like to note that you can also start new PostgreSQL instances as any user by stopping and disabling the system Pg service, then using:
initdb -D /path/to/data/directory
pg_ctl start -D /path/to/data/directory
This won't auto-start the server on boot, though. For that you must integrate with your init system. On CentOS 6 a simple System V-style init script in /etc/init.d/ and a suitable symlink into /etc/rc3.d/ or /etc/rc5.d/ (depending on the default runlevel) is sufficient.
If running more than one instance at a time they must be on different ports. Change the port directive in postgresql.conf in the datadir or set it on startup with pg_ctl -o "-p 5433" .... You may also need to override the unix_socket_directories if your user doesn't have write permission to the default socket directory.
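As a rough sketch of bringing up a second, user-owned instance on another port (the paths and port number are just examples):
initdb -D ~/pgdata
pg_ctl start -D ~/pgdata -l ~/pgdata/server.log -o "-p 5433"
psql -p 5433 postgres   # connect to the new instance on its non-default port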
This is only for a fresh installation (as it pertained to my situation) as it involves blowing away the data dir.
The steps I took to resolve this issue while utilizing the packaged startup scripts for a fresh installation:
Remove the postgres data dir /var/lib/pgsql/9.2/data if you've already gone through the initdb process with the postgres user:group configured as default.
Modify the startup script (/etc/init.d/postgresql-9.2) to replace all instances of postgres:postgres with NEWUSER:NEWGROUP.
Modify the startup script to replace all instances of postgres in any $SU -l postgres lines with the NEWUSER.
Run /etc/init.d/postgresql-9.2 initdb to regenerate the cluster using the new username.
Make sure any logs created are owned by the new user, or remove the old logs if initdb errors out (the configuration file in my case was found in /var/lib/pgsql/9.2/data/postgresql.conf).
Start up postgres and it should now be running under the new user/group.
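A rough shell sketch of the steps above (NEWUSER and NEWGROUP are placeholders; back up the init script before editing, and only do this on a fresh install since it wipes the data dir):
sudo rm -rf /var/lib/pgsql/9.2/data                                                # wipe the default-owned cluster
sudo sed -i 's/postgres:postgres/NEWUSER:NEWGROUP/g' /etc/init.d/postgresql-9.2    # swap the user:group pairs
sudo sed -i 's/\$SU -l postgres/$SU -l NEWUSER/g' /etc/init.d/postgresql-9.2       # swap the su user
sudo service postgresql-9.2 initdb                                                 # regenerate the cluster as NEWUSER
sudo service postgresql-9.2 start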
I understand this might not be what other people are looking for if they have existing postgres db's and want to restart the server to run as a different user/group combo - this was not my case, and I didn't see an answer posted anywhere for a 'fresh' install utilizing the pre-packaged startup scripts.