Invalid Value for Parameter 'recovery_target_time' in Postgres Database

Wishing you all a Happy and Prosperous New Year!
I am getting the error below while restoring a Postgres database to a specific target time (PITR):
Invalid value for parameter recovery_target_time in file postgresql.conf
The database is not getting restored. I have tried the following values for recovery_target_time:
recovery_target_time = '2022-12-31 12:45:00 UTC'
recovery_target_time = '2022-12-31 12:45:00 UTC+00'
recovery_target_time = '2022-12-31 12:45:00 UTC+00:00'
recovery_target_time = '2022-12-31 12:45:00 Etc/UTC'
recovery_target_time = '2022-12-31 12:45:00.000'
But none of the above works.
What value should be used for recovery_target_time?
I am using PostgreSQL 14 on Ubuntu 22.04 and am trying to restore the database using pg_basebackup. The timezone is UTC. Let me know if I need to add more details.
Update: here are the logs:
2022-12-30 13:32:19.686 UTC [34563] LOG: invalid value for parameter "recovery_target_time": "2022–12–30 12:45:00 UTC+00:00"
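One observation about that log line (a guess at the cause, not a confirmed diagnosis): the quoted value contains en dashes (2022–12–30) rather than ASCII hyphens (2022-12-30), and PostgreSQL's timestamp parser accepts only the latter. Since recovery_target_time is parsed with the same rules as timestamptz input, the value's syntax can be checked in psql independently of any recovery attempt:
-- sketch: run on any PostgreSQL 14 instance; the first cast uses ASCII
-- hyphens and parses, the second (en dashes, as in the log) fails
SELECT '2022-12-31 12:45:00 UTC'::timestamptz;   -- ok
SELECT '2022–12–31 12:45:00 UTC'::timestamptz;   -- ERROR: invalid input syntax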

Related

filebeat Grok for postgres log file does not work

I am using Filebeat with Kibana to export and manage PostgreSQL database log files.
The version I am using is 7.13.3.
I followed the instructions at
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-postgresql.html
log_line_prefix = '%m [%p] %q%u#%d '
log_duration = 'on'
log_statement = 'none'
log_min_duration_statement = 0
The log was exported and shipped to Kibana successfully, but the Grok parsing fails with the following error:
Provided Grok expressions do not match field value: [2021-07-20 16:07:24.606 +07,"postgres","hr",4988,"[local]",60f6920d.137c,3,"SELECT",2021-07-20 16:06:21 +07,9/0,0,LOG,00000,"duration: 445.927 ms statement: select * from events_102078;",,,,,,,,,"psql"]
The raw log line in postgres_log.csv is
2021-07-20 16:07:24.606 +07,"postgres","hr",4988,"[local]",60f6920d.137c,3,"SELECT",2021-07-20 16:06:21 +07,9/0,0,LOG,00000,"duration: 445.927 ms statement: select * from events_102078;",,,,,,,,,"psql"
So how can I fix this?
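One thing worth checking (an assumption on my part, based on the file name postgres_log.csv): the failing line is in PostgreSQL's csvlog format, while the log_line_prefix setting quoted above only shapes the plain stderr log, which is the format the module documentation configures. A sketch of postgresql.conf settings that would emit stderr-format lines for Filebeat to pick up:
# postgresql.conf -- sketch; assumes the module's Grok patterns target
# the plain-text (stderr) log rather than the csvlog output
log_destination = 'stderr'
logging_collector = on
log_line_prefix = '%m [%p] %q%u#%d '
log_duration = 'on'
log_statement = 'none'
log_min_duration_statement = 0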

Is there an issue with Central Africa Time (CAT) PostgreSQL TIMESTAMPTZ conversion?

One of our PostgreSQL 11.4 deployments in Congo uses the CAT timezone (Africa/Kigali +02), and one of our functions chokes when trying to convert human-input timestamps to actual TIMESTAMPTZ data.
For example:
SELECT '2019-10-17 00:00:00 CAT'::TIMESTAMPTZ;
ERROR: invalid input syntax for type timestamp with time zone: "2019-10-17 00:00:00 CAT"
LINE 2: SELECT '2019-10-17 00:00:00 CAT'::TIMESTAMPTZ
^
SQL state: 22007
Character: 9
But when I try with CEST (Central European, also +02) it works.
SELECT '2019-10-17 00:00:00 CEST'::TIMESTAMPTZ;
"2019-10-17 00:00:00+02"
Incidentally, converting from epoch to CAT also works
select to_timestamp(1571263200);
"2019-10-17 00:00:00+02"
Version:
"PostgreSQL 11.4 (Ubuntu 11.4-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit" on Ubuntu 18.04.2 LTS
For whatever reason, 'CAT' is not valid for input by default, presumably someone felt it was ambiguous or something. You could append the line
CAT 7200 # Central Africa Time
to the file "$SHAREDIR/timezonesets/Default" to make this work.
Or you could create a file "$SHAREDIR/timezonesets/Africa" with the contents:
#INCLUDE Default
#OVERRIDE
CAT 7200 # Central Africa Time
And then set the parameter timezone_abbreviations to 'Africa'.
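For example, once that file is in place (a sketch; 'Africa' is the file name created above, and the setting can also be changed per session):
SET timezone_abbreviations = 'Africa';
SELECT '2019-10-17 00:00:00 CAT'::TIMESTAMPTZ;  -- now parses as +02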
I am not a horologist; you might want to research why CAT is missing before blindly adding it. Also, if you go either of the above routes, you should document it clearly somewhere: you will need to repeat the steps you took when you upgrade PostgreSQL, or restore or move your database.
Or, you could preprocess your user input to replace 'CAT' with 'Africa/Kigali'.
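A minimal sketch of that preprocessing approach, assuming 'CAT' only ever appears as the trailing zone abbreviation in the input (full zone names are accepted in timestamptz input, so no configuration change is needed):
SELECT replace('2019-10-17 00:00:00 CAT', 'CAT', 'Africa/Kigali')::TIMESTAMPTZ;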
Regarding the question's remark quoted above:
Incidentally, converting from epoch to CAT also works
select to_timestamp(1571263200);
"2019-10-17 00:00:00+02"
'CAT' does not appear in this example, so it is not clear what it is meant to demonstrate.

How to upload data into a Redshift Table with a Date Format 'MMDDYYYY'

I need to upload data with dates in the format 'MMDDYYYY'.
This is the current code I am using to send it via psql:
SET BaseFolder=C:\
psql -h hostname -d database -c "\copy test_table(id_test,
colum_test,columndate DATEFORMAT 'MMDDYYYY')
from '%BaseFolder%\test_table.csv' with delimiter ',' CSV HEADER;"
Here test_table is the table in the Postgres DB; its columns are:
Id_test: float8
Column_test: float8
columndate: timestamp
id_test colum_test colum_date
94 0.3306 12312017
16 0.3039 12312017
25 0.5377 12312017
88 0.6461 12312017
I am getting the following error when I run the above query in CMD on Windows 10:
ERROR: date/time field value out of range: "12312017"
HINT: Perhaps you need a different "datestyle" setting.
CONTEXT: COPY test_table, line 1, column columndate : "12312017"
The DATEFORMAT applies to the whole COPY command, not a single field.
I got it to work as follows...
Your COPY command suggests that the data is comma-separated, so I used this input data and stored it in an Amazon S3 bucket:
id_test,colum_test,colum_date
94,0.3306,12312017
16,0.3039,12312017
25,0.5377,12312017
88,0.6461,12312017
I created a table:
CREATE TABLE foo (
foo_id BIGINT,
foo_value DECIMAL(4,4),
foo_date DATE
)
Then loaded the data:
COPY foo (foo_id, foo_value, foo_date)
FROM 's3://my-bucket/foo.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-Role'
CSV
IGNOREHEADER 1
DATEFORMAT 'MMDDYYYY'
Please note that the recommended way to load data into Amazon Redshift is from files stored in Amazon S3. (I haven't tried using the native psql \copy command with Redshift, and would recommend against it, particularly for large data files. You certainly can't mix options from the Redshift COPY command into psql's client-side \copy command.)
Then, I ran SELECT * FROM foo and it returned:
16 0.3039 2017-12-31
88 0.6461 2017-12-31
94 0.3306 2017-12-31
25 0.5377 2017-12-31
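On the psql point above: the Redshift COPY is ordinary server-side SQL, so it can still be issued from psql with -c (a sketch reusing the bucket and role from this answer; what you can't do is put Redshift options such as DATEFORMAT inside psql's client-side \copy):
psql -h my-redshift-host -d database -c "COPY foo (foo_id, foo_value, foo_date) FROM 's3://my-bucket/foo.csv' IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-Role' CSV IGNOREHEADER 1 DATEFORMAT 'MMDDYYYY';"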
That is a horrible format for dates. Don't break your date type; convert your data to a saner format.
=> select to_date('12312017', 'MMDDYYYY');
to_date
------------
2017-12-31
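If the target really is a plain PostgreSQL table loaded with \copy, as the question's code suggests, a sketch of the usual staging-table approach (table and column names are taken from the question; the staging table itself is illustrative):
-- stage the raw text, then convert on insert
CREATE TABLE test_table_stage (id_test float8, colum_test float8, columndate text);
\copy test_table_stage from 'test_table.csv' with delimiter ',' CSV HEADER
INSERT INTO test_table (id_test, colum_test, columndate)
SELECT id_test, colum_test, to_date(columndate, 'MMDDYYYY')
FROM test_table_stage;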

locale issue workaround in PostgreSQL data directory upgrade from 8.4 to 9.5

A description of the issue is given at the link. It seems to be a PostgreSQL bug. To resolve it, there seems to be only a single workaround: create a locale map with keys of the form <Language>_<Country>.<CodePage> and values of the form <Language>, <Country>.
For example:
English_United States.1252 = English, United States
...
The --locale parameter accepts values in the format <Language>, <Country>, whereas the output of SHOW LC_COLLATE is in the format <Language>_<Country>.<CodePage>. So, during an upgrade, I will get the value of lc_collate, look up the corresponding value in the map, and provide it during the PostgreSQL 9.5 installation.
How do I convert <Language>_<Country>.<CodePage> to the appropriate format for a successful installation?
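The transformation itself is mechanical; here is a sketch in SQL for illustration (the same regex works in any scripting language): drop the code page and replace the underscore between language and country with a comma and a space:
SELECT regexp_replace('English_United States.1252',
                      '^([^_]+)_([^.]+)\..*$', '\1, \2');
-- English, United States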

Loading Data from PostgreSQL into Stata

When I load data from PostgreSQL into Stata some of the data has unexpected characters appended. How can I avoid this?
Here is the Stata code I am using:
odbc query mydatabase, schema $odbc
odbc load, exec("SELECT * FROM my_table") $odbc allstring
Here is an example of the output I see:
198734/0 one/0/r April/0/0/0
893476/0 two/0/r May/0/0/0
324192/0 three/0/r June/0/0/0
In Postgres the data is:
198734 one April
893476 two May
324192 three June
I see this mostly in larger tables and with fields of all data types in PostgreSQL. If I export the data to a CSV there are no trailing characters.
The odbc.ini file I am using looks like this:
[ODBC Data Sources]
mydatabase = PostgreSQL
[mydatabase]
Debug = 1
CommLog = 1
ReadOnly = no
Driver = /usr/lib64/psqlodbcw.so
Servername = myserver
Servertype = postgres
FetchBufferSize = 99
Port = 5432
Database = mydatabase
[Default]
Driver = /usr/lib64/psqlodbcw.so
I am using odbc version unixODBC 2.3.1 and PostgreSQL version 9.4.9 with server encoding UTF8 and Stata version 14.1.
What is causing the unexpected characters in the data imported into Stata? I know that I can clean the data once it’s in Stata but I would like to avoid this.
I was able to fix this by adding the line
set odbcdriver ansi
to the Stata code.
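Putting that together with the load code from the question (the only new line is the first one; set odbcdriver ansi tells Stata to use the ANSI rather than the Unicode ODBC driver interface):
set odbcdriver ansi
odbc query mydatabase, schema $odbc
odbc load, exec("SELECT * FROM my_table") $odbc allstring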