PuppetDB and PostgreSQL module: How to manage PostgreSQL but NOT manage the PostgreSQL REPO using Hiera?

I am trying to disable management of the PostgreSQL repo via Hiera when using the puppetlabs/postgresql module. I have tried every Hiera combination I can think of (from reading the docs/code), but nothing works.
puppetdb::database::postgresql::manage_package_repo: false
puppetdb::globals::manage_package_repo: false
postgresql::globals::manage_package_repo: false
It still adds /etc/apt/sources.list.d/apt.postgresql.org.list, which won't work: we use our own Aptly mirror and the servers cannot reach the internet directly, so the apt update fails and the entire Puppet agent run fails with it.
How do I disable the management of /etc/apt/sources.list.d/apt.postgresql.org.list using Hiera?
System:
OS: Ubuntu 16.04 and 20.04
puppet version: 6.16.0
puppetserver version: 6.12.0
mod 'puppetlabs/postgresql', '6.5.0'
mod 'puppetlabs/puppetdb', '7.4.0'
mod 'puppetlabs/stdlib', '6.3.0'

According to the source for the puppetlabs/postgresql module, the repo is disabled by setting
postgresql::globals::manage_package_repo: false
And according to the source for PuppetDB, its repos are disabled with
puppetdb::manage_package_repo: false
You'll need to set both.
Note that setting these values to false won't remove the repo if it has already been installed, so if that has happened, you'll need to remove it by hand before running Puppet.
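As a minimal Hiera sketch combining both settings (key names as of the module versions listed in the question; verify them against the versions you actually have installed):

```yaml
# common.yaml (or another level of your Hiera hierarchy)
# Key names assume puppetlabs/postgresql 6.5.0 and puppetlabs/puppetdb 7.4.0.
postgresql::globals::manage_package_repo: false
puppetdb::manage_package_repo: false
```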

I don't think you should be doing anything with puppetdb; that's Puppet's own database, where it stores node data and agent run reports. The configuration is definitely going to be somewhere under postgresql::.

Related

Bareos Postgres Plugin NOT backing up remote PostgreSQL13 database

I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database; apparently there are three possible scenarios, and the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentation, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
What I'm failing to understand is: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked the source code for this module on GitHub, and I see that the plugin tries to find files locally, which supports the statement above.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, and I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which is actually looking for the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm backing up a remote database, I'm using its DNS address. I have verified that Bareos can connect to the database, and I made sure the backup_label file is created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D

PostgreSQL log configuration on Ubuntu

I have PostgreSQL 9.5 (yes, I know it's no longer supported) installed on Ubuntu Server 18.04 using these instructions: https://www.postgresql.org/download/linux/ubuntu/
I want to change the log path and have a separate log for every database. However, the package maintainer has configured things so that the log* settings in the PostgreSQL configuration are ignored and everything is logged to files some other way, and I can't find out how. Currently it logs to /var/log/postgresql/postgresql-9.5-clustername.log; I want it to be /var/log/postgresql/clustername/database.log, but I don't know where to configure that. In the PostgreSQL configuration, log_destination is set to stderr.
The Ubuntu packages have logging_collector disabled by default, so the log is not handled by PostgreSQL but by the startup script.
However, there is no way in PostgreSQL to get a separate log file per database, so the only way to get what you want is to put the databases in individual clusters rather than into a single cluster.
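If moving the log path is enough (per-cluster rather than per-database), one option is to enable PostgreSQL's own log collector in postgresql.conf; a sketch, assuming the target directory exists and is writable by the postgres user:

```ini
# postgresql.conf -- take logging away from the startup script
logging_collector = on
log_destination = 'stderr'
log_directory = '/var/log/postgresql/clustername'  # assumed path; must be writable by postgres
log_filename = 'postgresql-%Y-%m-%d.log'
```

For per-database files, the only route remains one cluster per database (e.g. created with pg_createcluster), so each cluster logs separately.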

Is processManagement (fork to true) needed when MongoDB is only application running on Linux?

I am having problems logging into the mongo shell when my configuration file has processManagement fork set to true. Each time fork is true, I have issues connecting to the mongo shell; with fork off (false), I have no issues. I am using Vagrant to build 3 Debian 10 boxes where the only thing I am adding to each box is MongoDB. Each MongoDB box will be part of a replica set, and the boxes will communicate with each other over a private network IP. Do I need to set mongod to run in the background of a Linux OS? If I do, what is the advantage of doing so?
You do not necessarily need to fork/daemonize mongod. For example, this isn't necessary in docker.
For some of the reasons why programs daemonize, see https://unix.stackexchange.com/questions/287793/why-do-we-daemonize-processes.
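As a sketch: when mongod is supervised by systemd (as with the Debian packages) or runs in the foreground of a container, forking is unnecessary, so the config can simply leave it off. The replica set name and private IP below are placeholders:

```yaml
# /etc/mongod.conf -- sketch; systemd (or Docker) supervises the process,
# so no fork is needed
processManagement:
  fork: false
net:
  bindIp: 127.0.0.1,192.168.56.10  # placeholder private-network IP
replication:
  replSetName: rs0                  # placeholder replica set name
```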

Can't connect Orange to local PGSQL server

I installed Orange and I have data in a local PGSQL server.
PGSQL listens on the default port which is 5432.
I have the psycopg2 lib installed, and I also wrote a short Python script which pulls some data from the database to check that the module is installed correctly.
Firewall is down.
The Python env path is set to use 3.4.4, which is what Orange3 uses.
When I add a SQL Table widget, I get an error saying "please install a backend to use this widget".
The documentation on the Orange site says that all that needs to be done for the DB integration is installing the Python module, but this doesn't work for me.
Help would be appreciated.
Links:
https://docs.orange.biolab.si/3/visual-programming/widgets/data/sqltable.html

Ideal VM setup for Meteor with shared folder

Situation
Hello, I run Arch Linux, for which there is no Meteor package, and have an Ubuntu server running in VirtualBox for web development. There is a shared folder I mount into the VM, which means I can code directly in the active environment.
However, like many others, I have a problem with mongodb starting up, specifically the exit code 100.
Tracing the problem:
I created the /data/DB directory
gave access rights to my user
ran mongod on its own with no problems
Still, the issue persists.
Question
Where is the configuration file for the MongoDB that is installed with Meteor, so I can move it? And do I need to create rights for a 'mongodb' user?
Question
What would be the ideal virtual machine for running a Meteor development environment in the above setup? Having to create the data directory in the first place tells me Ubuntu Server isn't ideal. Some extra documentation answering this second question on the Meteor website would be beautiful.
MongoDB does not work correctly on VirtualBox shared folders. By default, Meteor creates a Mongo database in your project's directory; however, you can override this behavior with the MONGO_URL environment variable. If you set this variable, Meteor will not try to start Mongo and will instead connect directly to the Mongo endpoint you specify. This allows you to set up Mongo however you like (e.g. using the Ubuntu mongodb package), with the data somewhere outside the shared folder.
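For example (a sketch; the database name and port are assumptions, adjust to your setup), point Meteor at a mongod whose data lives outside the shared folder:

```shell
# Point Meteor at an existing mongod (e.g. the Ubuntu package's instance,
# whose data lives in /var/lib/mongodb, outside the shared folder).
export MONGO_URL="mongodb://localhost:27017/meteor"  # assumed db name "meteor"
```

With MONGO_URL exported, start the app with meteor run as usual; Meteor skips launching its bundled Mongo.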