How to set timezone showing on heroku pg:backups results? - postgresql

When I run
heroku pg:backups
I get results like
=== Backups
ID    Backup Time                Status                               Size   Database
----  -------------------------  -----------------------------------  -----  --------
b015  2016-08-05 04:43:16 +0000  Completed 2016-08-05 04:43:19 +0000  132kB  DATABASE
a014  2016-08-04 21:03:23 +0000  Completed 2016-08-04 21:06:15 +0000  132kB  DATABASE
...
Can I set it to show timestamps in a different time zone, say UTC+7? I tried setting the Heroku config with
heroku config:add TZ="Asia/Bangkok"
and it didn't work.

Heroku exclusively uses UTC for this. This is pretty much the standard for all web services as it ensures no times can be misinterpreted. You cannot change this output.
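Since the output itself is fixed to UTC, one option is to convert the timestamps locally after fetching them. A minimal sketch in Python (assuming the timestamp format shown in the output above):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def to_bangkok(ts: str) -> str:
    """Convert a 'YYYY-MM-DD HH:MM:SS +0000' timestamp to Asia/Bangkok."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")
    return dt.astimezone(ZoneInfo("Asia/Bangkok")).strftime("%Y-%m-%d %H:%M:%S %z")

print(to_bangkok("2016-08-05 04:43:16 +0000"))  # 2016-08-05 11:43:16 +0700
```

You could pipe the `heroku pg:backups` output through a small filter built on this to display local times without touching any server configuration.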

Related

Cannot set the timezone of Postgresql role

The official PostgreSQL list of recognized time zones says Iran Time is IT, so why do I get this error? It's version 12, if that matters.
postgres=# alter role myuser set timezone to 'IT';
ERROR: invalid value for parameter "TimeZone": "IT"
Well, you shouldn't be reading the manual of an outdated and discontinued version. The fact that the page no longer exists for supported versions should have made you suspicious.
Modern Postgres versions provide the view pg_timezone_names to check for valid timezone names.
If you run
select *
from pg_timezone_names
where name ilike '%iran%'
you will get this result:
        name         | abbrev | utc_offset | is_dst
---------------------+--------+------------+--------
 Europe/Tirane       | CET    | 01:00:00   | f
 posix/Europe/Tirane | CET    | 01:00:00   | f
 posix/Iran          | +0330  | 03:30:00   | f
 Iran                | +0330  | 03:30:00   | f
so there is no such abbreviation (any more). You will need to use the name:
alter role myuser set timezone to 'Iran';
Note that the result also depends on the operating system on which the Postgres server is running. On Windows you wouldn't get the posix time zones.
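Both Postgres and most language runtimes draw their zone names from the IANA tz database, so you can sanity-check a candidate name outside the database too. A small sketch using Python's zoneinfo module (this assumes your system tzdata ships the backward-compatibility link 'Iran', as full Linux installs do):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo, available_timezones  # Python 3.9+

names = available_timezones()
print("Iran" in names)  # True: 'Iran' is a valid IANA zone name
print("IT" in names)    # False: the abbreviation 'IT' is not a zone name

# The winter offset matches the +0330 shown by pg_timezone_names.
offset = datetime(2024, 1, 15, tzinfo=ZoneInfo("Iran")).utcoffset()
print(offset == timedelta(hours=3, minutes=30))  # True
```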

db2 rollforward command output shows '-' in "Log files processed" column for non-catalog partitions

I rolled the database forward after a restore. Below are the outputs of the commands:
db2 "rollforward db hdpf1 query status"
                                 Rollforward Status

 Input database alias                   = hdpf1
 Number of members have returned status = 3

Member ID    Rollforward                 Next log             Log files processed        Last committed transaction
              status                      to be read
-----------  --------------------------  -------------------  -------------------------  --------------------------
           0  DB  pending                 S0001422.LOG                     -              2019-10-22-17.26.46.000000 UTC
           1  DB  pending                 S0004726.LOG                     -              2019-10-22-17.40.25.000000 UTC
           2  DB  pending                 S0004583.LOG                     -              2019-10-22-17.52.59.000000 UTC
db2 "rollforward db hdpf1 to end of logs on all dbpartitionnums OVERFLOW LOG PATH ('/home/db2inst1/logs/db2inst1/HDPF1')"
 
                                 Rollforward Status
 
Input database alias                   = hdpf1
Number of members have returned status = 3
 
Member ID    Rollforward                 Next log             Log files processed        Last committed transaction
              status                      to be read
-----------  --------------------------  -------------------  -------------------------  --------------------------
           0  DB  working                 S0001423.LOG         S0001422.LOG-S0001422.LOG  2019-10-27-07.32.56.000000 UTC
           1  DB  working                 S0004727.LOG                     -              2019-10-25-03.05.53.000000 UTC
           2  DB  working                 S0004584.LOG                     -              2019-10-25-03.04.32.000000 UTC
 
DB20000I  The ROLLFORWARD command completed successfully.
$ db2_all "db2 get db cfg for hdpf1 | grep -i 'First active log file'"
 
First active log file                                   = S0001421.LOG
db2 get db cfg for ... completed ok
 
First active log file                                   = S0004725.LOG
db2 get db cfg for ... completed ok
 
First active log file                                   = S0004582.LOG
db2 get db cfg for ... completed ok
 
It seems that the state before applying the log was :
For NODE0000, the log number was on: S0001421.LOG
For NODE0001, the log number was on: S0004725.LOG
For NODE0002, the log number was on: S0004582.LOG
Then the user provided the logs with below range :
For NODE0000 : S0001421.LOG - S0001423.LOG
For NODE0001 : S0004725.LOG - S0004726.LOG
For NODE0002 : S0004582.LOG - S0004583.LOG
The provided logs were applied to the database, but I am not sure why the "Log files processed" column is blank ('-') for partitions 1 and 2. I can see the log number advance on partitions 1 and 2 (the "Next log to be read" column was updated), yet the output shows no range in the "Log files processed" column for them. What could be the reason?
When I tried another database, this value was reported for the non-catalog nodes as well:
db2 "rollforward db hdpf2 query status"

                                 Rollforward Status

 Input database alias                   = hdpf2
 Number of members have returned status = 4

Member ID    Rollforward                 Next log             Log files processed        Last committed transaction
              status                      to be read
-----------  --------------------------  -------------------  -------------------------  --------------------------
           0  DB  pending                 S0000052.LOG                     -              2019-10-30-07.36.45.000000 UTC
           1  DB  pending                 S0000038.LOG                     -              2019-10-30-07.37.01.000000 UTC
           2  DB  pending                 S0000040.LOG                     -              2019-10-30-07.37.07.000000 UTC
           3  DB  pending                 S0000038.LOG                     -              2019-10-30-07.37.13.000000 UTC

db2 "rollforward db hdpf2 to end of logs on all dbpartitionnums OVERFLOW LOG PATH ('/home/db2inst1/log/HDPF')"

                                 Rollforward Status

 Input database alias                   = hdpf2
 Number of members have returned status = 4

Member ID    Rollforward                 Next log             Log files processed        Last committed transaction
              status                      to be read
-----------  --------------------------  -------------------  -------------------------  --------------------------
           0  DB  working                 S0000060.LOG         S0000052.LOG-S0000059.LOG  2019-10-31-10.32.32.000000 UTC
           1  DB  working                 S0000040.LOG         S0000038.LOG-S0000039.LOG  2019-10-30-07.37.01.000000 UTC
           2  DB  working                 S0000042.LOG         S0000040.LOG-S0000041.LOG  2019-10-30-07.37.07.000000 UTC
           3  DB  working                 S0000040.LOG         S0000038.LOG-S0000039.LOG  2019-10-30-07.37.13.000000 UTC

DB20000I  The ROLLFORWARD command completed successfully.
What Db2 level are you using? Are the two environments on different Db2 levels?
Since you mentioned the Db2 levels are the same, please see the following additional information:
The "Log files processed" range reported can be less than what was really processed. The true indication of what has been recovered is the "Last committed transaction" timestamp.
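To make that concrete: you can compare the "Last committed transaction" timestamps before and after the rollforward to confirm that a member advanced, even when its "Log files processed" column shows '-'. A sketch (the parse_db2_ts helper is hypothetical, written for the timestamp format shown in the output above):

```python
from datetime import datetime

def parse_db2_ts(ts: str) -> datetime:
    """Parse a Db2 'Last committed transaction' timestamp
    such as '2019-10-25-03.05.53.000000 UTC'."""
    return datetime.strptime(ts.replace(" UTC", ""), "%Y-%m-%d-%H.%M.%S.%f")

# Member 1 before and after the rollforward shown above:
before = parse_db2_ts("2019-10-22-17.40.25.000000 UTC")
after = parse_db2_ts("2019-10-25-03.05.53.000000 UTC")
print(after > before)  # True: member 1 did advance despite the '-' column
```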

PostgreSQL on Corda enterprise node throws relation errors

I am running Corda Enterprise with PostgreSQL in a Docker container. I have followed the instructions in the docs and set up the database schema. On startup I see the following errors. Can anyone help with what is going on?
2018-10-11 06:57:57.491 UTC [1506] ERROR: relation "node_checkpoints" does not exist at character 22
2018-10-11 06:57:57.491 UTC [1506] STATEMENT: select count(*) from node_checkpoints
2018-10-11 06:58:22.440 UTC [1506] ERROR: relation "corda-schema.databasechangeloglock" does not exist at character 22
2018-10-11 06:58:22.440 UTC [1506] STATEMENT: select count(*) from "corda-schema".databasechangeloglock
It seems the database user name and schema name don't have the same value; ensure that the correct default schema is set for the user by running, as database administrator:
ALTER ROLE "[USER]" SET search_path = "[SCHEMA]";
Another possible issue is mixing upper/lower case or other characters in the schema name; ensure that the schema name is all lower case (e.g. corda-schema, not CORDA-SCHEMA or Corda-Schema).
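PostgreSQL folds unquoted identifiers to lower case but takes double-quoted ones literally, so a hyphenated name like corda-schema has to be quoted anyway. If you generate the statement, quoting both identifiers avoids case and punctuation surprises. An illustrative sketch (the set_search_path_sql helper is hypothetical):

```python
def quote_ident(ident: str) -> str:
    """Double-quote a PostgreSQL identifier, escaping embedded quotes."""
    return '"' + ident.replace('"', '""') + '"'

def set_search_path_sql(user: str, schema: str) -> str:
    """Build the ALTER ROLE statement from the answer above."""
    return f"ALTER ROLE {quote_ident(user)} SET search_path = {quote_ident(schema)};"

print(set_search_path_sql("corda-user", "corda-schema"))
# ALTER ROLE "corda-user" SET search_path = "corda-schema";
```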

PostgreSQL timezone error with DbSchema

I want to set my PostgreSQL server's time zone to 'Europe/Berlin', but I get an error:
SET time zone 'Europe/Berlin';
ERROR: invalid value for parameter "TimeZone": "Europe/Berlin"
But the real issue is with DbSchema: when I want to connect to my DB, I get the error
FATAL: invalid value for parameter "TimeZone": "Europe/Berlin"
DbSchema works when I connect to my local DB, but not with my Synology NAS DB.
Any ideas?
Found a way to solve the problem:
You have to start Java with the proper time zone. In my case, my server is GMT, so I had to add the argument -Duser.timezone=GMT.
For DbSchema, edit the file DbSchema.bat or DbSchema.sh:
Find the declaration of SWING_JVM_ARGS
Add the argument -Duser.timezone=GMT at the end of the line
Start DbSchema with this script (DbSchema.bat or DbSchema.sh)
I think your solution is only a workaround for the actual problem concerning the zoneinfo on the synology diskstation.
I got exactly the same error when trying to connect to the postgres database on my diskstation. The query select * from pg_timezone_names; gives you all timezone names postgresql is aware of.
There are 87 entries all starting with "Timezone":
name | abbrev | utc_offset | is_dst
------------------------+--------+------------+--------
Timezone/Kuwait | AST | 03:00:00 | f
Timezone/Nairobi | EAT | 03:00:00 | f
...
The configured postgres timezonesets contain many more entries, so there must be another source from which postgres builds this view at startup. I discovered that there is a compile option --with-system-tzdata=DIRECTORY that tells postgres to obtain its values from the system zoneinfo.
I looked in /usr/share/zoneinfo and found one subdirectory called Timezone with exactly 87 entries, and there obviously was no subdirectory called Europe (with a timezone file called Berlin). I did not quickly find a way on the diskstation to update the tzdata, either automatically or manually by unpacking tzdata2016a.tar.gz and building it (make not found...). As a quick fix I copied the Berlin timezone file from another Linux system, and the problem was solved: I can now connect via Java/JDBC using the correct timezone "Europe/Berlin"!
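If you want to check up front whether a given zone file exists in the system tzdata that a --with-system-tzdata build reads, a quick sketch (the has_system_tzfile helper is hypothetical, and the default directory is an assumption matching typical Linux installs):

```python
import os

def has_system_tzfile(name: str, tzdir: str = "/usr/share/zoneinfo") -> bool:
    """Check whether the system tzdata contains a zone file,
    e.g. 'Europe/Berlin' -> /usr/share/zoneinfo/Europe/Berlin."""
    return os.path.isfile(os.path.join(tzdir, *name.split("/")))

# True on a full Linux install; False on the stripped-down NAS described above.
print(has_system_tzfile("Europe/Berlin"))
```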

Sysdate different than database date in SQL Developer

I am using SQL Developer as a client for an Oracle 11g RAC. The database server is set to the Pacific Daylight Time zone (PDT), and when I query sysdate in SQL*Plus it always shows PDT time. But SQL Developer displays the time as GMT-4.
The system clock on the machine where SQL Developer is running is also set to PDT (even after I changed it from the Central time zone). I tried adding this parameter to the SQL Developer configuration files:
AddVMOption -Duser.timezone=GMT-7
But I continue to see the following results:
From SQL Developer:
select to_char(current_date,'DD-MON-YY HH:MI:SS'), to_char(sysdate,'DD-MON-YY HH:MI:SS'), sessiontimezone from dual;
CURRENT_DATE        SYSDATE             SESSIONTIMEZONE
09-AUG-13 12:57:11  10-AUG-13 03:57:11  -07:00
From sqlplus:
SQL> select to_char(current_date,'DD-MON-YY HH:MI:SS'), to_char(sysdate,'DD-MON-YY HH:MI:SS'), sessiontimezone from dual;
TO_CHAR(CURRENT_DATE,'DD-MO TO_CHAR(SYSDATE,'DD-MON-YYH
SESSIONTIMEZONE
09-AUG-13 12:55:11 09-AUG-13 12:55:11 -07:00
Does anyone know how to get the same output as SQL*Plus generates?
I have to schedule jobs in a production environment, and I guess it is better to use sysdate than current_date.