I'm using PostgreSQL 9.2 on Windows XP (4 GB RAM), and sometimes the database service shuts down unexpectedly.
I am new to PostgreSQL, so I cannot figure out the reason behind it. Can anyone help me?
log 1
log 2
Current backend access config (pg_hba.conf)
host all all 0.0.0.0/0 trust
postgresql.conf settings
effective_cache_size = 131072
lc_messages = English_United States.1252
lc_monetary = English_United States.1252
lc_numeric = English_United States.1252
lc_time = English_United States.1252
listen_addresses = *
log_destination = stderr
log_line_prefix = %t
log_timezone = Asia/Calcutta
logging_collector = on
max_connections = 10000
max_files_per_process = 1000
port = 5432
restart_after_crash = on
shared_buffers = 131072
temp_buffers = 131072
TimeZone = Asia/Calcutta
work_mem = 1048576
I'm trying to drop a user from a Redshift cluster but receive the following error:
drop user "xxx";
ERROR: user "xxx" cannot be dropped because permission dependency is found
I've installed the admin views and revoked all privileges from all tables and schemas. I cannot find any reference to this specific error, and it is also not covered in this knowledge-center article: https://aws.amazon.com/premiumsupport/knowledge-center/redshift-user-cannot-be-dropped/
select ddl from admin.v_generate_user_grant_revoke_ddl where ddltype='revoke' and grantee='xxx' order by objseq, grantseq desc;
ddl
-----
(0 rows)
select ddl, grantor, grantee from admin.v_generate_user_grant_revoke_ddl where grantee='xxx' and ddltype='grant' and objtype <>'default acl' order by objseq,grantseq;
ddl | grantor | grantee
-----+---------+---------
(0 rows)
select * from pg_user where usename = 'xxx';
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
xxx | 110 | f | f | f | xxx | |
(1 row)
select * from pg_default_acl where defacluser=110;
defacluser | defaclnamespace | defaclobjtype | defaclacl
------------+-----------------+---------------+-----------
(0 rows)
The user is not in any groups either. Any guidance is appreciated.
The user had not run queries for at least the last two weeks; he was not very active. His only access was through a Redshift/Excel ODBC setup. I did not check initially, but a day later there were no active sessions for him. I re-ran the DROP USER command and got the expected result, so there must have been some lingering 'something'. For reference, I ran this command to see who had active sessions: select * from stv_sessions;. My problem was resolved by trying again the next day. https://docs.aws.amazon.com/redshift/latest/dg/r_STV_SESSIONS.html
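For anyone hitting the same thing, here is a hedged sketch of the check I would run before retrying the drop; the column names come from the STV_SESSIONS documentation, and the process id in the terminate call is only an example value:
select process, user_name, starttime, db_name from stv_sessions where user_name = 'xxx';
-- if a session is still lingering, end it by its process id (example value), then retry the drop
select pg_terminate_backend(12345);
drop user "xxx";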
I have master-slave (primary-standby) streaming replication set up on 2 physical nodes. Although replication is working correctly and both walsender and walreceiver are running fine, the files in the pg_wal folder on the slave node are not being removed. I have faced this problem every time I try to bring the slave node back after a crash. Here are the details of the problem:
postgresql.conf on master and slave/standby node
# Connection settings
# -------------------
listen_addresses = '*'
port = 5432
max_connections = 400
tcp_keepalives_idle = 0
tcp_keepalives_interval = 0
tcp_keepalives_count = 0
# Memory-related settings
# -----------------------
shared_buffers = 32GB # Physical memory 1/4
##DEBUG: mmap(1652555776) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
#huge_pages = try # on, off, or try
#temp_buffers = 16MB # depends on DB checklist
work_mem = 8MB # Need tuning
effective_cache_size = 64GB # Physical memory 1/2
maintenance_work_mem = 512MB
wal_buffers = 64MB
# WAL/Replication/HA settings
# --------------------
wal_level = logical
synchronous_commit = remote_write
archive_mode = on
archive_command = 'rsync -a %p /TPINFO01/wal_archive/%f'
#archive_command = ':'
max_wal_senders=5
hot_standby = on
restart_after_crash = off
wal_sender_timeout = 5000
wal_receiver_status_interval = 2
max_standby_streaming_delay = -1
max_standby_archive_delay = -1
hot_standby_feedback = on
random_page_cost = 1.5
max_wal_size = 5GB
min_wal_size = 200MB
checkpoint_completion_target = 0.9
checkpoint_timeout = 30min
# Logging settings
# ----------------
log_destination = 'csvlog,syslog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql_%Y%m%d.log'
log_truncate_on_rotation = off
log_rotation_age = 1h
log_rotation_size = 0
log_timezone = 'Japan'
log_line_prefix = '%t [%p]: [%l-1] %h:%u#%d:[PG]:CODE:%e '
log_statement = all
log_min_messages = info # DEBUG5
log_min_error_statement = info # DEBUG5
log_error_verbosity = default
log_checkpoints = on
log_lock_waits = on
log_temp_files = 0
log_connections = on
log_disconnections = on
log_duration = off
log_min_duration_statement = 1000
log_autovacuum_min_duration = 3000ms
track_functions = pl
track_activity_query_size = 8192
# Locale/display settings
# -----------------------
lc_messages = 'C'
lc_monetary = 'en_US.UTF-8' # ja_JP.eucJP
lc_numeric = 'en_US.UTF-8' # ja_JP.eucJP
lc_time = 'en_US.UTF-8' # ja_JP.eucJP
timezone = 'Asia/Tokyo'
bytea_output = 'escape'
# Auto vacuum settings
# -----------------------
autovacuum = on
autovacuum_max_workers = 3
autovacuum_vacuum_cost_limit = 200
auto_explain.log_min_duration = 10000
auto_explain.log_analyze = on
include '/var/lib/pgsql/tmp/rep_mode.conf' # added by pgsql RA
recovery.conf
primary_conninfo = 'host=xxx.xx.xx.xx port=5432 user=replica application_name=xxxxx keepalives_idle=60 keepalives_interval=5 keepalives_count=5'
restore_command = 'rsync -a /TPINFO01/wal_archive/%f %p'
recovery_target_timeline = 'latest'
standby_mode = 'on'
Result of pg_stat_replication on master/primary
select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid | 8868
usesysid | 16420
usename | xxxxxxx
application_name | sub_xxxxxxx
client_addr | xx.xx.xxx.xxx
client_hostname |
client_port | 21110
backend_start | 2021-06-10 10:55:37.61795+09
backend_xmin |
state | streaming
sent_lsn | 97AC/589D93B8
write_lsn | 97AC/589D93B8
flush_lsn | 97AC/589D93B8
replay_lsn | 97AC/589D93B8
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
-[ RECORD 2 ]----+------------------------------
pid | 221533
usesysid | 3541624258
usename | replica
application_name | xxxxx
client_addr | xxx.xx.xx.xx
client_hostname |
client_port | 55338
backend_start | 2021-06-12 21:26:40.192443+09
backend_xmin | 72866358
state | streaming
sent_lsn | 97AC/589D93B8
write_lsn | 97AC/589D93B8
flush_lsn | 97AC/589D93B8
replay_lsn | 97AC/589D93B8
write_lag |
flush_lag |
replay_lag |
sync_priority | 1
sync_state | sync
Steps I followed to bring the standby node back after a crash
On the master, ran select pg_start_backup('backup');
rsync'ed the data folder and wal_archive folder from the master/primary to the slave/standby
On the master, ran select pg_stop_backup();
Restarted postgres on the slave/standby node.
This brought the slave/standby node back in sync with the master, and it has been working fine since then.
On the primary/master node, the files in the pg_wal folder get removed after roughly 2 hours, but the files on the slave/standby node are not removed. On the standby, almost all of the files also have a corresponding <filename>.done entry in the archive_status folder inside pg_wal.
I guess the problem would go away if I performed a switchover, but I still want to understand why it is happening.
I am also trying to find answers to the following questions:
Which process writes the files to pg_wal on the slave/standby node? I am following this link:
https://severalnines.com/database-blog/postgresql-streaming-replication-deep-dive
Which parameter removes the files from the pg_wal folder on the standby node?
Do they need to go to wal_archive folder on the disk just like they go to wal_archive folder on the master node?
You didn't describe omitting pg_replslot during your rsync, as the docs recommend. If you didn't omit it, then your replica now has a replication slot which is a clone of the one on the master. But if nothing ever connects to that slot on the replica and advances the cutoff, the WAL never gets released for recycling. To fix it, you just need to shut down the replica, remove that directory, restart it, and wait for the next restart point to finish.
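If you want to confirm that this is what happened before touching anything, here is a quick check to run on the standby (a sketch; the slot name will be whatever was cloned from the master):
-- run on the standby: a cloned slot shows up inactive with a restart_lsn that never advances,
-- which is exactly what pins the WAL in pg_wal
select slot_name, slot_type, active, restart_lsn from pg_replication_slots;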
Do they need to go to wal_archive folder on the disk just like they go to wal_archive folder on the master node?
No, that is optional, not necessary. It is controlled by archive_mode = always if you want it to happen.
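If you do want the standby to archive the WAL it receives, a sketch of the relevant lines in the standby's postgresql.conf, reusing the archive_command from the question:
# optional, only if the standby should populate its own wal_archive folder
archive_mode = always
archive_command = 'rsync -a %p /TPINFO01/wal_archive/%f'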
I'm trying to save a file from Azure File Storage into an Azure SQL Database table's varbinary(max) column (storing the whole content, as advised in this answer). I've tried a few times to adjust my SQL query, but without success. Here's the code, which results in the error 'Bad or inaccessible location specified in external data source "my_Azure_Files".' when it invokes OPENROWSET:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'mypassword123'
GO
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageaccount.file.core.windows.net/]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sas_token_generated_on_azure_portal';
CREATE EXTERNAL DATA SOURCE my_Azure_Files
WITH (
LOCATION = 'https://mystorageaccount.file.core.windows.net/test',
CREDENTIAL = [https://mystorageaccount.file.core.windows.net/],
TYPE = BLOB_STORAGE
);
Insert into dbo.myTable(targetColumn)
Select BulkColumn FROM OPENROWSET(
BULK 'test.csv',
DATA_SOURCE = 'my_Azure_Files',
SINGLE_BLOB) AS testFile;
CLOSE MASTER KEY;
GO
I'm able to download the test.csv file in a web browser using the same SAS token and URL path. I can also verify that the credential and the external data source were successfully created in the database:
 data_source_id | name           | location                                            | type_desc    | type             | resource_manager_location | credential_id | database_name | shard_map_name | connection_options | pushdown
----------------+----------------+-----------------------------------------------------+--------------+------------------+---------------------------+---------------+---------------+----------------+--------------------+----------
 65540          | my_Azure_Files | https://mystorageaccount.file.core.windows.net/test | BLOB_STORAGE | 05/01/1900 00:00 | NULL                      | 65539         | NULL          | NULL           | NULL               | ON

 name                                             | principal_id | credential_id | credential_identity     | create_date      | modify_date      | target_type | target_id
--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+-------------+-----------
 https://mystorageaccount.file.core.windows.net/  | 1            | 65539         | SHARED ACCESS SIGNATURE | 15/07/2020 13:14 | 15/07/2020 13:14 | NULL        | NULL
When creating the SAS in the Azure portal, I checked all allowed resource types and all allowed permissions except 'Delete'. I also removed the leading '?' from the SAS before using it in the SECRET field.
I've tried variations of TYPE = BLOB_STORAGE and TYPE = HADOOP as well as SINGLE_BLOB, SINGLE_CLOB and SINGLE_NCLOB parameters.
Please help me solve my problem.
By following the steps below, I was able to successfully insert into the target table:
While generating the SAS, select the Allowed Resource Types 'Container' and 'Object'.
Copy the SAS and use the command below to create a master key:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password#123'
Use the SAS token (without the leading '?') and create the scoped credential:
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2019-10-10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
Create External Data Source referencing your blob path:
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://mystorageaccount.file.core.windows.net'
, CREDENTIAL= MyAzureBlobStorageCredential
);
Run the insert using OPENROWSET:
Insert into dbo.test(name1)
Select BulkColumn FROM OPENROWSET(
BULK 'test/test.csv',
DATA_SOURCE = 'MyAzureBlobStorage',
SINGLE_BLOB) AS testFile;
You can also use BULK INSERT:
BULK INSERT dbo.test
FROM 'test/test.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
FORMAT = 'CSV');
This assumes the table dbo.test has already been created.
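For completeness, a minimal sketch of that table; the column name name1 comes from the INSERT above, and VARBINARY(MAX) is an assumption based on the question's target column type:
CREATE TABLE dbo.test (
    name1 VARBINARY(MAX)  -- receives the whole file content from SINGLE_BLOB
);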
I am trying to create shortcuts for DBAs/admins in psql.
I have created some functions and queries, which give me information such as schema size in bytes, or a list of tables in order of storage used, etc. These will be used whenever an admin or DBA wants to get some info about the database.
I can run these queries naturally with select, or as a function like get_size(), but I want them to be accessible as shortcuts, similar to the native backslash commands (\dx, \dt, etc).
So I have used psql's \set feature to store queries/functions as variables, which I will put in the psqlrc file:
\set size 'select pg_size_pretty(my_size_function(''public''));'
Then when I type :size in psql I would get the size of the "public" schema.
What I want though, is to be able to dynamically pass a schema name, so I could run things like
:size public, :size schema2 etc.
I tried changing the \set to: \set size 'select pg_size_pretty(my_size_function(:schema));', but I can only call that by executing \set schema '''public''' first.
Since the whole point is to use these universally as shortcuts, having to manually run \set commands each time defeats the purpose.
In Oracle this would be a bind variable (the colon syntax), which is read at runtime.
How can I do this with psql?
I use the following approaches.
Postgres Functions (Inside Postgres CLI)
postgres=# CREATE OR REPLACE FUNCTION getSize(tableName varchar) RETURNS varchar LANGUAGE SQL as
postgres-# $$
postgres$# SELECT pg_size_pretty(pg_relation_size(tableName));
postgres$# $$;
CREATE FUNCTION
postgres=# select getSize('test');
getsize
------------
8192 bytes
(1 row)
From Shell (Outside Postgres CLI)
psqlgettablesize(schema, tablename) - Get table size. Pass 'all' as the schema argument to get sizes for all schemas.
$ psqlgettablesize customer events
table_name | total_size | table_size | index_size
--------------------------+---------------------+-------------+---------------
test(complete database) | 19039892127 (18 GB) | |
--------- | --------- | --------- | ---------
customer.events | 24576 (24 kB) | 0 (0 bytes) | 24576 (24 kB)
(3 rows)
psqlgettablecount(schema, tablename) - Get count of rows in a table, in a schema
$ psqlgettablecount customer events
count
----------
51850000
(1 row)
psqlgetvacuumdetails(schema, tablename) - Get vacuum details of a table, in a schema
$ psqlgetvacuumdetails customer events
schemaname | relname | n_live_tup | n_dead_tup | last_analyze | analyze_count | last_autoanalyze | autoanalyze_count | last_vacuum | vacuum_count | last_autovacuum | autovacuum_count
------------+-----------+------------+------------+----------------------------+---------------+----------------------------+-------------------+---------------------------+--------------+-----------------+------------------
customer | events | 0 | 0 | 2019-12-02 18:25:04.887653 | 2 | 2019-11-27 18:49:19.002405 | 1 | 2019-11-29 13:11:15.92002 | 1 | | 0
(1 row)
psqltruncatetable(schema, tablename) - Truncate a table, in a schema, after confirmation.
$ psqltruncatetable customer events
Are you sure to truncate table 'customer.events' (y/n)? y
Time: 4.944 ms
psqlsettings(category) - Get Postgres settings, optionally filtered by category
$ psqlsettings Autovacuum
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
-------------------------------------+-----------+------+------------+-------------------------------------------------------------------------------------------+------------+------------+---------+---------+---------+------------+----------+-----------+-----------+------------+------------+-----------------
autovacuum | on | | Autovacuum | Starts the autovacuum subprocess. | | sighup | bool | default | | | | on | on | | | f
autovacuum_analyze_scale_factor | 0.1 | | Autovacuum | Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples. | | sighup | real | default | 0 | 100 | | 0.1 | 0.1 | | | f
psqlselectrows(schema, tablename) - Get rows from a table, in a schema
$ psqlselectrows customer events
id | name
----+------
1 | Clicked
2 | Page Loaded
(10 rows)
#Colors
B_BLACK='\033[1;30m'
B_RED='\033[1;31m'
B_GREEN='\033[1;32m'
B_YELLOW='\033[1;33m'
B_BLUE='\033[1;34m'
B_PURPLE='\033[1;35m'
B_CYAN='\033[1;36m'
B_WHITE='\033[1;37m'
RESET='\033[0m'
#Postgres Command With Params
psqlcommand="$POSTGRES_BIN/psql -U postgres test -q -c"
function psqlgettablesize()
{
[ -z "$1" ] && echo -e "${B_RED}Argument 1 missing. Schema name needed. ${B_YELLOW}(Pass 'all' to get details from all schema(s)${RESET}" ||
{
criteria="and table_schema = '$1'"
if [ "$1" == "all" ]; then
criteria=""
fi
if [ "$2" != "" ]; then
criteria+=" and table_name = '$2'"
fi
[ -z "$2" ] && echo -e "${B_YELLOW}Table name not given. ${B_GREEN}Showing size of all tables in $1 schema(s)${RESET}"
query="SELECT
concat(current_database(), '(complete database)') AS table_name, concat(pg_database_size(current_database()), ' (', pg_size_pretty(pg_database_size(current_database())), ')') AS total_size, '' AS table_size, '' AS index_size
UNION ALL SELECT '---------','---------','---------','---------'
UNION ALL SELECT
table_name,
concat(total_table_size, ' (', pg_size_pretty(total_table_size), ')'),
concat(table_size, ' (', pg_size_pretty(table_size), ')'),
concat(index_size, ' (', pg_size_pretty(index_size), ')')
FROM (
SELECT
concat(table_schema, '.', table_name) AS table_name,
pg_total_relation_size(concat(table_schema, '.', table_name)) AS total_table_size,
pg_relation_size(concat(table_schema, '.', table_name)) AS table_size,
pg_indexes_size(concat(table_schema, '.', table_name)) AS index_size
FROM information_schema.tables where table_schema !~ '^pg_' AND table_schema <> 'information_schema' $criteria ORDER BY total_table_size) AS sizes";
$psqlcommand "$query"
}
}
function psqlgettablecount()
{
[ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need schema name${RESET}"
[ -z "$2" ] && echo -e "${B_RED}Argument 2 missing: Need table name${RESET}" ||
$psqlcommand "select count(*) from $1.$2;"
}
function psqlgetvacuumdetails()
{
[ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need schema name${RESET}" ||
[ -z "$2" ] && echo -e "${B_RED}Argument 2 missing: Need table name${RESET}" ||
$psqlcommand "SELECT schemaname, relname, n_live_tup, n_dead_tup, last_analyze::timestamp, analyze_count, last_autoanalyze::timestamp, autoanalyze_count, last_vacuum::timestamp, vacuum_count, last_autovacuum::timestamp, autovacuum_count FROM pg_stat_user_tables where schemaname = '$1' and relname='$2';"
}
function psqltruncatetable()
{
[ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need schema name${RESET}" ||
[ -z "$2" ] && echo -e "${B_RED}Argument 2 missing: Need table name${RESET}" ||
{
read -p "$(echo -e ${B_YELLOW}"Are you sure to truncate table '$1.$2' (y/n)? "${RESET})" choice
case "$choice" in
y|Y ) $psqlcommand "TRUNCATE $1.$2;";;
n|N ) echo -e "${B_GREEN}Table '$1.$2' not truncated${RESET}";;
* ) echo -e "${B_RED}Invalid option${RESET}";;
esac
}
}
function psqlsettings()
{
query="select * from pg_settings"
if [ "$1" != "" ]; then
query="$query where category like '%$1%'"
fi
query="$query ;"
$psqlcommand "$query"
if [ -z "$1" ]; then
echo -e "${B_YELLOW}Passing Category as first argument will filter the related settings.${RESET}"
fi
}
function psqlselectrows()
{
[ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need schema name${RESET}" ||
[ -z "$2" ] && echo -e "${B_RED}Argument 2 missing: Need table name${RESET}" ||
$psqlcommand "SELECT * from $1.$2"
}
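A hedged note on wiring these up: the functions assume POSTGRES_BIN is exported and that the file containing them is sourced into your shell; the paths below are only illustrative, and psqlcommand above targets the 'test' database as user postgres.
# illustrative paths - adjust to your installation
export POSTGRES_BIN=/usr/pgsql/bin
source ~/psql-shortcuts.sh   # file containing the functions above
psqlgettablesize customer events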
I couldn't figure out how to dynamically pass variables, so I settled for a system of aliases and variables that work together to provide what I was looking for. I can run :load_schema <schemaname> and then run commands that interact with the "loaded" schema. Loading a schema is simply aliasing a \set command. This way, people who are unfamiliar with psql can just press : and tab autocompletion will show them all of my shortcuts.
All my code is here.
This isn't exactly what I was looking for, so I won't accept it, but it's close enough to post in case anyone else wants to do this.
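For reference, here is the two-step pattern the question already demonstrates, written out end to end (a sketch; my_size_function is the questioner's own helper). The first \set line goes in psqlrc, and the schema is then "loaded" once per session before the shortcut is used:
\set size 'select pg_size_pretty(my_size_function(:schema));'
\set schema '''public'''
:size
\set schema '''schema2'''
:size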
Here's the error:
$ psql -h localhost -U kMbjQ6pR9G -d fzvqFILx0d
Password for user kMbjQ6pR9G:
psql: FATAL: password authentication failed for user "kMbjQ6pR9G"
FATAL: password authentication failed for user "kMbjQ6pR9G"
I'm probably missing a configuration step in setting up a fresh PostgreSQL.
Here are the command lines I tried in order to create a new user with his own database:
sudo -u postgres bash -c "psql -c \"CREATE USER $USER WITH PASSWORD '$PASSWORD';\"" &&
sudo -u postgres bash -c "psql -c \"CREATE DATABASE $DB;\"" &&
sudo -u postgres bash -c "psql -c \"GRANT ALL PRIVILEGES ON DATABASE $DB to $USER;\"" &&
Here's the PostgreSQL configuration:
$ sudo grep -v ^# /etc/postgresql/9.5/main/pg_hba.conf |grep -v ^$
local all postgres md5
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
Here are all the accounts I created to test:
$ sudo -u postgres bash -c 'psql -c "select * from pg_shadow;"'
Password:
usename | usesysid | usecreatedb | usesuper | userepl | usebypassrls | passwd | valuntil | useconfig
------------+----------+-------------+----------+---------+--------------+-------------------------------------+----------+-----------
av6izmbp9l | 16384 | f | f | f | f | md5a49721ef3f5428e269badc03931baf48 | |
rqmejchq7n | 16386 | f | f | f | f | md54edf3a05ca96a435f94152b75495e9dc | |
yyfiknesu8 | 16388 | f | f | f | f | md5b3d4a03913569abbf318bdc490d0f821 | |
qgv2ryqvdw | 16390 | f | f | f | f | md5d0959b4b5e1ed2982e19e4d0af574b11 | |
postgres | 10 | t | t | t | t | md5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | |
pf09joszuj | 16392 | f | f | f | f | md5a920dc31666459e5f0a96e9430d07f02 | |
I think it's something simple, but I'm missing the point.
Thanks for your support,
David.
Did you explicitly specify the database when trying to connect?
psql -h localhost -U myuser -d mydb
This is how I usually set up a user to completely own a given database:
create role myuser with createdb login encrypted password 'mypassword';
create database mydb with owner 'myuser' encoding 'utf8';
pg_hba.conf:
local myuser mydb md5
...
The problem was obvious: user names and database names are lowercase!
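To spell out the gotcha: PostgreSQL folds unquoted identifiers to lowercase, so a user created as kMbjQ6pR9G without quotes is actually stored as kmbjq6pr9g. A quick illustration, using the role name from the error above (the password is a placeholder):
CREATE USER kMbjQ6pR9G WITH PASSWORD 'secret';   -- stored as kmbjq6pr9g
-- either connect with the lowercased name (psql -U kmbjq6pr9g),
-- or quote the identifier to preserve the exact case:
CREATE USER "kMbjQ6pR9G" WITH PASSWORD 'secret';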
Here's the script I use to create a user and his own database:
#!/bin/bash
if [ -n "$1" ]; then
DB="$1"
else
DB=$(php -r "echo substr(str_shuffle(str_repeat('abcdefghijklmnopqrstuvwxyz', ceil(10/63) )),1,10);")
fi
USER=$(php -r "echo substr(str_shuffle(str_repeat('abcdefghijklmnopqrstuvwxyz', ceil(10/63) )),1,10);")
PASSWORD=$(php -r "echo substr(str_shuffle(str_repeat('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ', ceil(20/63) )),1,20);")
sudo -u postgres psql -c "CREATE USER $USER WITH PASSWORD '$PASSWORD';"
sudo -u postgres psql -c "CREATE DATABASE $DB;"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE $DB to $USER;"
cat << EndOfMessage
POSTGRESQL_ADDON_DB="$DB"
POSTGRESQL_ADDON_HOST="localhost"
POSTGRESQL_ADDON_PASSWORD="$PASSWORD"
POSTGRESQL_ADDON_PORT="5432"
POSTGRESQL_ADDON_USER="$USER"
EndOfMessage
https://github.com/davidbouche/linux-install/blob/master/pgsql-database-create.sh
Thanks for your contribution.
David