I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database. Apparently, there are three possible scenarios, and the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentation, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
What I'm failing to understand is this: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked out the source code for this module on their GitHub, and I see that the plugin tries to find files locally, which supports the aforementioned statement.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, and I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which shows it is trying to find the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm targeting a remote database, I'm using its DNS address. I verified that Bareos can connect to the database and that the backup_label file gets created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D
I'm trying to follow the diesel.rs tutorial using PostgreSQL. When I get to the Diesel setup step, I get an "authentication method 10 not supported" error. How do I resolve it?
You have to upgrade the PostgreSQL client software (in this case, the libpq used by the Rust driver) to a later version that supports the scram-sha-256 authentication method introduced in PostgreSQL v10.
Downgrading password_encryption in PostgreSQL to md5, changing all the passwords, and using the md5 authentication method is a possible but bad alternative: it is more effort, and you get worse security and old, buggy software.
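A quick way to check both sides (a sketch; the host and user are placeholders, and note that what matters is the libpq your driver actually loads, not necessarily the psql binary):
# On the server: scram-sha-256 means pre-v10 clients fail with "authentication method 10 not supported"
psql -h dbhost -U postgres -c "SHOW password_encryption;"
# On the client: psql usually reports the matching libpq version; anything below 10 cannot speak SCRAM
psql --version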
This isn't a Rust-specific question; the issue applies to any application connecting to a Postgres DB with a client library that doesn't support the scram-sha-256 authentication method. In my case it was a problem with a Perl application connecting to Postgres.
These steps are based on a post.
You need to have the latest Postgres client installed.
The client bin directory (SRC) is "C:\Program Files\PostgreSQL\13\bin" in this example. The target (TRG) directory is where my application binary is installed: "C:\Strawberry\c\bin". My application failed during an attempt to connect to the Postgres DB with the error "... authentication method 10 not supported ...".
set SRC=C:\Program Files\PostgreSQL\13\bin
set TRG=C:\Strawberry\c\bin
rem Inspect the source DLL
dir "%SRC%\libpq.dll"
rem Inspect the target DLL that will be replaced from SRC
dir "%TRG%\libpq__.dll"
rem Copy the new libpq into the target directory
copy "%SRC%\libpq.dll" "%TRG%"
cd /d "%TRG%"
pexports libpq.dll > libpq.def
dlltool --dllname libpq.dll --def libpq.def --output-lib ..\lib\libpq.a
rem Back up the original DLL, then give the new DLL the original name
move "%TRG%\libpq__.dll" "%TRG%\libpq__.dll_BUP"
move "%TRG%\libpq.dll" "%TRG%\libpq__.dll"
At this point I was able to successfully connect to Postgres from my Perl script.
The initial post shown above also suggested copying other DLLs from the source to the target:
libiconv-2.dll
libcrypto-1_1-x64.dll
libssl-1_1-x64.dll
libintl-8.dll
However, I was able to resolve my issue without copying these libraries.
Downgrading to PostgreSQL 12 helped
I have PostgreSQL 9.5 (yes, I know it's not supported anymore) installed on Ubuntu Server 18.04 using these instructions: https://www.postgresql.org/download/linux/ubuntu/
I want to change the path and have a separate log for every database, but it's configured by the package maintainer in such a way that it ignores the log* settings in the PostgreSQL configuration and uses some other way to log everything to files, and I can't find out how. Currently it logs to /var/log/postgresql/postgresql-9.5-clustername.log. I want it to be /var/log/postgresql/clustername/database.log, but I don't know where to configure that. In PostgreSQL, log_destination is set to stderr.
The Ubuntu packages have logging_collector disabled by default, so the log is not handled by PostgreSQL, but by the startup script.
However, there is no way in PostgreSQL to get a separate log file per database, so the only way to get what you want is to put the databases in individual clusters rather than into a single cluster.
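If per-cluster (rather than per-database) log locations are enough, you can hand logging back to PostgreSQL itself. A minimal sketch, assuming a cluster named clustername and using ALTER SYSTEM (available since 9.4); note that logging_collector only takes effect after a restart:
# Create a log directory the postgres user can write to
sudo mkdir -p /var/log/postgresql/clustername
sudo chown postgres:postgres /var/log/postgresql/clustername
# Let PostgreSQL collect its own stderr output into that directory
sudo -u postgres psql -c "ALTER SYSTEM SET logging_collector = on;"
sudo -u postgres psql -c "ALTER SYSTEM SET log_directory = '/var/log/postgresql/clustername';"
sudo systemctl restart postgresql@9.5-clustername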
I'm trying to migrate a PostgreSQL DB hosted in the cloud (on a DigitalOcean droplet) to RDS using AWS Database Migration Service (DMS).
I've successfully configured the replication instance and endpoints.
I've created a task with Migrate existing data and replicate ongoing changes. When I start the task, it fails with the error ERROR: could not access file "test_decoding": No such file or directory.
I've tried to create a replication slot manually from my DB console; it throws the same error.
I've followed the procedures suggested in the DMS documentation for Postgres.
I'm using PostgreSQL 9.4.6 on my source endpoint.
I presume that the problem is that the output plugin test_decoding is not accessible for replication.
Please assist me to resolve this. Thanks in advance!
You must install the postgresql-contrib additional supplied modules on your source endpoint.
If it is installed, make sure the directory where the test_decoding module is located is the same as the directory where PostgreSQL expects it.
On *nix, you can check the module directory with the command:
pg_config --pkglibdir
If it is not the same, copy the module, make a symlink, or use whatever other solution you prefer.
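For example, on a Debian/Ubuntu source host (the 9.4 package name is an assumption; adjust it for your distribution):
# Install the contrib package that ships the test_decoding output plugin
sudo apt-get install postgresql-contrib-9.4
# Show the directory where PostgreSQL looks for loadable modules
pg_config --pkglibdir
# Verify the plugin is actually there
ls "$(pg_config --pkglibdir)"/test_decoding*
# Re-test: creating a logical replication slot should now succeed
sudo -u postgres psql -c "SELECT pg_create_logical_replication_slot('dms_test', 'test_decoding');"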
After a painful installation of hadoop_fdw into our running PostgreSQL 9.3.4, I am trying to connect it to a Cloudera cluster 5.2.0 with no luck.
Is there a way to debug the FDW? After creating the foreign table and selecting from it, I just get an error: ERROR: failed to connect to Hive: No more data to read.
By the way: some old version of hadoop_fdw was capable of using a URL (jdbc://server:port/args), but not the recent version; there's just address & port.
hadoop_fdw didn't make it; there's probably something wrong/old/obsolete in hive.c. But with even more effort, we managed to make jdbc_fdw work with the Cloudera JDBC drivers. The steps were as follows:
1) Install the jdbc_fdw extension.
2) Merge all driver JAR files into one (see the sketch after this list).
3) Create the server:
CREATE SERVER cloudera2
  FOREIGN DATA WRAPPER jdbc_fdw
  OPTIONS (
    drivername 'com.cloudera.hive.jdbc4.HS2Driver',
    url 'jdbc:hive2://fqdn:10000;user=hive',
    querytimeout '15',
    jarfile '/opt/cloudera/combined.jar'
  );
Mental note: SET client_min_messages TO debug5; can help you identify where the problem is, e.g. driver not found, etc.
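For step 2, one rough way to merge the driver JARs is to unpack them all into one directory and re-pack (the paths are assumptions, and colliding META-INF entries may need manual cleanup):
# Unpack every driver JAR into a scratch directory
mkdir /tmp/combined && cd /tmp/combined
for f in /opt/cloudera/jars/*.jar; do jar xf "$f"; done
# Re-pack everything into a single JAR for the jarfile option
jar cf /opt/cloudera/combined.jar .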
SQL distributes a pre-initialized catalog cluster, but for PostgreSQL we need to initialize the cluster using initdb and a network service account. It fails in a few cases and causes a bit of misery!
Can we initialize the cluster ourselves and distribute a pre-initialized cluster?
Thanks
The "cluster" (or data directory) depends on the operating system and the architecture. So a data directory that was initialized with initdb on a 32bit Linux will not work on a 64bit Windows.
But you don't need to do that. A service account is only necessary if you want to run PostgreSQL as a service.
You can easily use the ZIP distribution to install and start Postgres without the need for a full-fledged installation or a service account.
The steps to do so are:
1) Unzip the binaries.
2) Run initdb, pointing it to the directory where the database cluster should be created.
3) Run pg_ctl to start the server.
Note that steps 2) and 3) must be run as the same user; otherwise the server will have no privileges to write to the data directory.
These steps can easily be put into a batch file or shell script.
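A minimal batch sketch, assuming the ZIP was extracted to C:\pgsql (all paths are placeholders):
rem Initialize the cluster and start the server as the same (non-service) user
cd /d C:\pgsql\bin
initdb -D C:\pgsql\data -U postgres -E UTF8
pg_ctl -D C:\pgsql\data -l C:\pgsql\pg.log start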
Hard to understand your question, but I think you are talking about the Windows installer for PostgreSQL. Right? What version, what installer, what about error messages, logging, etc.?
The installer can be found here.
SQL = database language, SQL Server = Microsoft database product