I have PostgreSQL 9.5 (yes, I know it's not supported anymore) installed on Ubuntu Server 18.04 using these instructions: https://www.postgresql.org/download/linux/ubuntu/
I want to change the log path and have a separate log for every database. But the package maintainer has configured it in such a way that it ignores the log* settings in the PostgreSQL configuration and logs everything to files some other way, and I can't find out how. Currently it logs to /var/log/postgresql/postgresql-9.5-clustername.log. I want it to be /var/log/postgresql/clustername/database.log, but I don't know where to configure that. In PostgreSQL, log_destination is set to stderr.
The Ubuntu packages have logging_collector disabled by default, so the log is not handled by PostgreSQL, but by the startup script.
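If you want PostgreSQL to write the log files itself instead, you can enable the collector; a minimal sketch of the relevant postgresql.conf settings (the directory and filename pattern are just examples):
logging_collector = on
log_directory = '/var/log/postgresql'      # any absolute path works
log_filename = 'postgresql-%Y-%m-%d.log'   # one file per day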
However, there is no way in PostgreSQL to get a separate log file per database, so the only way to get what you want is to put the databases in individual clusters rather than into a single cluster.
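On Debian/Ubuntu you can create one cluster per database with the packaging tools; a sketch, where salesdb is a placeholder name and 5433 is whatever port pg_lsclusters reports for the new cluster:
sudo pg_createcluster --start 9.5 salesdb   # logs to /var/log/postgresql/postgresql-9.5-salesdb.log
sudo -u postgres createdb -p 5433 salesdb   # create the database inside the new cluster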
I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database. Apparently there are three possible scenarios, and the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentation, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
Now what I'm failing to understand is: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked out the source code for this module on their GitHub, and I see that the plugin tries to find files locally, which supports the aforementioned statement.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, and I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which is actually trying to find the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm connecting to a remote database, I'm using its DNS address. I verified the Bareos connection to the database and made sure the backup_label file gets created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D
I'm new to Postgres and can't seem to change the logging settings.
At the moment it logs ALL queries executed by any application. The app writes millions of queries a day, so the log files get too big. I only need it to log errors.
How can I change that in Postgres? I've installed it using Homebrew on Mac OS X.
You need to configure this in the postgresql.conf file (data/postgresql.conf).
Change the log_statement = 'all' setting to the desired value; the available values are described under When To Log.
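For the original question here (log only errors), a minimal sketch of the relevant postgresql.conf settings:
log_statement = 'none'            # stop logging every statement
log_min_error_statement = error   # still log the statements that fail
log_min_messages = error          # server messages at ERROR and above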
see this SO question for more info.
@jacob You can restart the postgresql service with sudo /etc/init.d/postgresql restart or sudo service postgresql restart.
With Amazon announcing a PostgreSQL service in RDS, can we set up chef-server (11) with an Amazon RDS PostgreSQL database rather than using the default database that gets installed with the omnibus installer? I have read that we can achieve this by configuring the chef-server.rb file and then reconfiguring the chef-server, but I can't work out what settings to specify in the chef-server.rb file. Any help will be appreciated.
I did not try this, but it seems rather straightforward.
Check your chef-server-running.json; it should be in /etc/chef-server. The file lists the properties with which chef-server is running. Scroll down to "postgresql" and you will see the properties used for connecting to the PostgreSQL database.
Add the properties you would like to change (such as *listen_address* or *sql_password*) to the chef-server.rb file like this:
postgresql['listen_address'] = 'new_host'
postgresql['sql_password'] = 'new_password'
and run
chef-server-ctl reconfigure
Then start chef-server and make sure that chef-server-running.json now has new values for postgresql.
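For example, a quick way to check (a sketch, using the path mentioned above):
sudo grep -A 5 '"postgresql"' /etc/chef-server/chef-server-running.json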
Hopefully that helps.
I'd like to monitor the queries getting sent to my database from an application. To that end, I've found pg_stat_activity, but more often than not the rows which are returned read "<IDLE> in transaction". I'm either doing something wrong, am not fast enough to see the queries come through, am confused, or all of the above!
Can someone recommend the most idiot-proof way to monitor queries running against PostgreSQL? I'd prefer some sort of easy-to-use UI based solution (example: SQL Server's "Profiler"), but I'm not too choosy.
PgAdmin offers a pretty easy-to-use tool called server monitor (Tools -> Server Status).
With PostgreSQL 8.4 or higher you can use the contrib module pg_stat_statements to gather query execution statistics of the database server.
Run the SQL script of this contrib module, pg_stat_statements.sql (on Ubuntu it can be found in /usr/share/postgresql/<version>/contrib), in your database and add this sample configuration to your postgresql.conf (requires a restart):
shared_preload_libraries = 'pg_stat_statements'   # required so the module is loaded at server start
custom_variable_classes = 'pg_stat_statements'    # only needed on PostgreSQL 9.1 and earlier
pg_stat_statements.max = 1000
pg_stat_statements.track = top   # top, all, none
pg_stat_statements.save = off
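Once the module is loaded, you can query the gathered statistics; a sketch (note that in PostgreSQL 13 and later the column is total_exec_time instead of total_time):
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;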
To see which queries are executed in real time, you might want to just configure the server log to show all queries, or queries with a minimum execution time. To do so, set the logging configuration parameters log_statement and log_min_duration_statement in your postgresql.conf accordingly.
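For example, to log only statements slower than a threshold (the 200 ms value is an arbitrary assumption):
log_min_duration_statement = 200   # milliseconds; 0 logs everything, -1 disables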
pg_activity is what we use.
https://github.com/dalibo/pg_activity
It's a great tool with a top-like interface.
You can install and run it on Ubuntu 21.10 with:
sudo apt install pg-activity
pg_activity
If you are using Docker Compose, you can add this line to your docker-compose.yaml file:
command: ["postgres", "-c", "log_statement=all"]
Now you can see the Postgres query logs in the docker-compose logs with
docker-compose logs -f
or, if you want to see only the Postgres logs,
docker-compose logs -f [postgres-service-name]
https://stackoverflow.com/a/58806511/10053470
I haven't tried it myself unfortunately, but I think that pgFouine can show you some statistics.
It seems it does not show you queries in real time, though, but rather generates a report of queries afterwards; perhaps that still satisfies your need?
You can take a look at
http://pgfouine.projects.postgresql.org/
SQL distributes a pre-initialized catalog cluster, but for PostgreSQL we need to initialize the cluster using initdb and a network service account. It fails in a few cases and is causing a bit of misery!
Can we initialize the cluster ourselves and distribute a pre-initialized cluster?
Thanks
The "cluster" (or data directory) depends on the operating system and the architecture. So a data directory that was initialized with initdb on a 32bit Linux will not work on a 64bit Windows.
But you don't need to do that. A service account is only necessary if you want to run PostgreSQL as a service.
You can easily use the ZIP distribution to install and start Postgres without the need for a full-fledged installation or a service account.
The steps to do so are:
1. Unzip the binaries.
2. Run initdb, pointing it to the directory where the database cluster should be created.
3. Run pg_ctl to start the server.
Note that steps 2 and 3 must be run using the same user; otherwise the server will have no privileges to write to the data directory.
These steps can easily be put into a batch file or shell script.
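A minimal sketch of such a script (POSIX shell; the install and data paths are assumptions):
PGHOME=/opt/pgsql          # where the ZIP distribution was unpacked
PGDATA=/opt/pgsql/data     # where the database cluster will live

"$PGHOME/bin/initdb" -D "$PGDATA" -U postgres -E UTF8
"$PGHOME/bin/pg_ctl" -D "$PGDATA" -l "$PGDATA/server.log" start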
Hard to understand your question, but I think you are talking about the Windows installer for PostgreSQL. Right? What version, what installer, and what about error messages, logs, etc.?
The installer can be found here.
SQL = database language; SQL Server = Microsoft database product.