psql: intermittent segmentation fault: server closed the connection unexpectedly - postgresql

I looked at similar-sounding questions but none seemed to address my case:
On macOS Sierra, 16 GB RAM, localhost (no other postgres running anywhere):
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
The logs say:
2019-03-23 08:12:04.076 MDT [841] LOG: server process (PID 1175) was terminated by signal 11: Segmentation fault
2019-03-23 07:13:10.459 MDT [841] LOG: terminating any other active server processes
2019-03-23 07:13:10.459 MDT [951] WARNING: terminating connection because of crash of another server process
2019-03-23 07:13:10.459 MDT [951] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-03-23 07:13:10.459 MDT [951] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2019-03-23 07:13:10.460 MDT [980] FATAL: the database system is in recovery mode
2019-03-23 07:13:10.461 MDT [841] LOG: all server processes terminated; reinitializing
2019-03-23 07:13:10.470 MDT [981] LOG: database system was interrupted; last known up at 2019-03-23 07:06:47 MDT
2019-03-23 07:13:10.744 MDT [981] LOG: database system was not properly shut down; automatic recovery in progress
2019-03-23 07:13:10.746 MDT [981] LOG: redo starts at 28/15BF74F0
2019-03-23 07:13:10.746 MDT [981] LOG: invalid record length at 28/15BF7528: wanted 24, got 0
2019-03-23 07:13:10.746 MDT [981] LOG: redo done at 28/15BF74F0
2019-03-23 07:13:10.755 MDT [841] LOG: database system is ready to accept connections
psql version:
psql --version
psql (PostgreSQL) 11.1
Happens in both the psql terminal and pgAdmin. No CPU or memory spikes when this happens.
It doesn't happen on simple result sets. See this example: it's the same query, the first time returning a count, the second time returning rows (which triggers the error):
shill=# with yards_manual as (
select device_id,loc, sum(sq_meters)*10.7639 as manual_yard_sq_ft from device d
inner join zones z on (z.device_id=d.id)
where z.enabled and z.sq_meters<46 or z.sq_meters>47
group by 1,2
)
select count(device_id) from yards_manual;
count
-------
84983
shill=# with yards_manual as (
shill(# select device_id,loc, sum(sq_meters)*10.7639 as manual_yard_sq_ft from device d
shill(# inner join zones z on (z.device_id=d.id)
shill(# where z.enabled and z.sq_meters<46 or z.sq_meters>47 --and z.crop_type in ('WARM_SEASON_GRASS','COOL_SEASON_GRASS')
shill(# group by 1,2
shill(# )
shill-#
shill-# select distinct device_id, y.manual_yard_sq_ft, build_area_ft2 , prop_area_ft2,(prop_area_ft2-build_area_ft2) as gis_yard_sq_ft2 --, st_npoints(property_geom) as corners
shill-# from yards_manual y inner join yards b on st_contains(b.property_geom,y.loc)
shill-# where (prop_area_ft2-build_area_ft2)>0 and (prop_area_ft2-build_area_ft2)<20000
shill-# ;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
Although, this last query sometimes returns. Once it errors out, it always errors out until I start/stop the db. But starting/stopping does not always work. I have tried restarting postgres and backing up and restoring the database, to no avail. The problem just started happening. VACUUM FULL worked fine; the error still happens. The db is 24 GB.
Here is the same query now randomly returning:
device_id | manual_yard_sq_ft | build_area_ft2 | prop_area_ft2 | gis_yard_sq_ft2
----------+-------------------+------------------+------------------+------------------
0022682e | 3999.9944068 | 1666.25757779497 | 12948.051385913 | 11281.793808118
002a4379 | 1934.99812741536 | 2907.60847006035 | 15872.352961764 | 12964.7444917037
002adeb4 | 1599.9984516096 | 2856.54321331877 | 9800.49184470172 | 6943.94863138295
But when I ran it a second time, it errored out as described above.
Here's the SQL execution plan:
Unique  (cost=137590686.48..137602981.21 rows=819649 width=548)
  Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
  CTE yards_manual
    ->  Finalize GroupAggregate  (cost=163766.01..227836.10 rows=519752 width=77)
          Output: z.device_id, d.loc, (sum(z.sq_meters) * '10.7639'::double precision)
          Group Key: z.device_id, d.loc
          ->  Gather Merge  (cost=163766.01..218090.75 rows=433126 width=77)
                Output: z.device_id, d.loc, (PARTIAL sum(z.sq_meters))
                Workers Planned: 2
                ->  Partial GroupAggregate  (cost=162765.98..167097.24 rows=216563 width=77)
                      Output: z.device_id, d.loc, PARTIAL sum(z.sq_meters)
                      Group Key: z.device_id, d.loc
                      ->  Sort  (cost=162765.98..163307.39 rows=216563 width=77)
                            Output: z.device_id, d.loc, z.sq_meters
                            Sort Key: z.device_id, d.loc
                            ->  Parallel Hash Join  (cost=8564.46..133948.71 rows=216563 width=77)
                                  Output: z.device_id, d.loc, z.sq_meters
                                  Hash Cond: ((z.device_id)::text = (d.id)::text)
                                  ->  Parallel Seq Scan on public.zones z  (cost=0.00..118450.79 rows=216563 width=45)
                                        Output: z.device_id, z.sq_meters
                                        Filter: ((z.enabled AND (z.sq_meters < '46'::double precision)) OR (z.sq_meters > '47'::double precision))
                                  ->  Parallel Hash  (cost=5648.76..5648.76 rows=120376 width=69)
                                        Output: d.loc, d.id
                                        ->  Parallel Seq Scan on public.device d  (cost=0.00..5648.76 rows=120376 width=69)
                                              Output: d.loc, d.id
  ->  Sort  (cost=137362850.38..137364899.50 rows=819649 width=548)
        Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
        Sort Key: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, ((b.prop_area_ft2 - b.build_area_ft2))
        ->  Nested Loop  (cost=0.41..136878917.80 rows=819649 width=548)
              Output: y.device_id, y.manual_yard_sq_ft, b.build_area_ft2, b.prop_area_ft2, (b.prop_area_ft2 - b.build_area_ft2)
              ->  CTE Scan on yards_manual y  (cost=0.00..10395.04 rows=519752 width=556)
                    Output: y.device_id, y.loc, y.manual_yard_sq_ft
              ->  Index Scan using prop_geom_idx on public.yards b  (cost=0.41..263.31 rows=2 width=173)
                    Output: b.block_id, b.property_geom, b.building_geom, b.prop_area_ft2, b.build_area_ft2, b.yard_area_ft, b.vegetation, b.yard_id
                    Index Cond: (b.property_geom ~ y.loc)
                    Filter: (((b.prop_area_ft2 - b.build_area_ft2) > '0'
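Since the crash only appears when the row-returning PostGIS query executes (the count-only variant is fine), a few hedged diagnostics may help isolate the faulting component. This is a sketch of things to check, not a fix; all the functions and settings named below are standard PostgreSQL/PostGIS:
SELECT version();                          -- exact server build
SELECT postgis_full_version();             -- PostGIS/GEOS versions (st_contains is evaluated by GEOS)
SET max_parallel_workers_per_gather = 0;   -- the plan uses 2 workers; rule out a parallel-worker crash
Then re-run the failing query. If it still segfaults, bisecting the result set (e.g. adding LIMIT 10000 OFFSET 0, then OFFSET 10000, 20000, ...) can narrow the crash down to a single bad row or geometry.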

Related

PostgreSQL 14.2: out of memory - Failed on request of size 24576 in memory context "TupleSort main"

I have recently installed PostgreSQL 14.1 in parallel to my old 12.9 on my RedHat server. Both instances are running their default configurations. The server itself has 48 CPUs and 188 GB RAM, which seemed to be more than sufficient for 12.9.
Everything worked as expected, but I keep receiving this error message:
out of memory - Failed on request of size 24576 in memory context "TupleSort main"
SQL state: 53200
SQL tables: pos has 18 584 522 rows // orderedposteps has 18 rows // posteps has 18 rows
CREATE TEMP TABLE actualpos ON COMMIT DROP AS
SELECT DISTINCT lsa.id
FROM pos sa
JOIN orderedposteps osas ON osas.stepid = sa.stepid
JOIN posteps sas ON sas.id = osas.stepid
JOIN LATERAL
(
    SELECT innersa.*
    FROM pos innersa
    JOIN orderedposteps innerosas ON innerosas.stepid = innersa.stepid
    WHERE (innersa.id = sa.id) AND
          (innersa.iscached IS FALSE) AND
          (innersa.isobsolete IS FALSE)
    ORDER BY innersa.createdtimestamp DESC, innerosas.stepindex DESC
    LIMIT 1
) lsa ON TRUE
LEFT JOIN LATERAL
(
    SELECT innersa.*
    FROM pos innersa
    JOIN orderedposteps innerosas ON innerosas.stepid = innersa.stepid
    WHERE (innersa.id = sa.id) AND
          (innersa.iscached IS TRUE) AND
          (innersa.isobsolete IS FALSE)
    ORDER BY innersa.createdtimestamp DESC, innerosas.stepindex DESC
    LIMIT 1
) sacheck ON TRUE
LEFT JOIN orderedposteps osascheck ON osascheck.stepid = sacheck.stepid
WHERE ((sacheck IS NULL) OR (sacheck.createdtimestamp < sa.createdtimestamp) OR (osascheck.stepindex < osas.stepindex))
  AND (((osas.stepindex < v_laststepindex) AND (sa.isfailure != sas.isvalidsum) AND (sa.iscached IS FALSE)) OR ((osas.stepindex = v_laststepindex) AND (sa.iscached IS FALSE)))
ORDER BY lsa.createdtimestamp DESC
LIMIT 50000
The only difference I can see is the RAM utilization, as shown by htop: while 12.9 only consumes up to 10 GB of RAM, 14.1 grows to roughly 62 GB and crashes once it reaches that point.
I have already tried to increase work_mem via
ALTER SYSTEM SET work_mem = '4MB';
I used pgtune as well to change some other values, but nothing had a significant effect.
I am pretty sure the SQL can be simplified and tuned, which I could do, but I want to understand where the difference between 12.9 and 14.1 lies, or what to change configuration-wise, instead of refactoring a function to work with the latest version.
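One planner difference between 12 and 14 that may be worth ruling out (an assumption on my part, not a confirmed diagnosis): PostgreSQL 14 introduced the Memoize plan node, which caches the inner side of parameterized nested loops, exactly the shape those LATERAL joins produce, and that cache is memory 12.9 never allocated. A minimal experiment on the 14.1 instance:
-- hedged sketch: see whether the plan changes and memory stays bounded
SET enable_memoize = off;   -- this GUC exists only on 14+
SET work_mem = '4MB';       -- per-session override, takes effect immediately
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT lsa.id FROM pos sa ...;  -- the SELECT from the CREATE TEMP TABLE above
Comparing the EXPLAIN ANALYZE output of both instances side by side should show where 14.1 chooses a different plan.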

Simple batch DELETE then INSERT procedure some 1000 times slower than executing the statements one after the other

In a rather simple table with a composite primary key (see DDL below) there are about 40k records.
create table la_ezg
(
be_id integer not null,
usage text not null,
area numeric(18, 6),
sk_area numeric(18, 6),
count_ezg numeric(18, 6),
...
...
constraint la_ezg_pkey
primary key (be_id, usage)
);
There is also a simple procedure whose purpose is to delete rows with a certain be_id and persist the rows from another view, where they are "generated":
CREATE OR REPLACE function pr_create_la_ezg(pBE_ID numeric) returns void as
$$
begin
    delete from la_ezg where be_id = pBE_ID;
    insert into la_ezg (BE_ID, USAGE, ...)
    select be_id, usage, ...
    from vw_la_ezg_with_usage
    where be_id = pBE_ID;
end;
$$ language plpgsql;
The procedure needs about 7 minutes to execute, yet both statements (DELETE and INSERT) execute in less than 100 ms on the very same be_id.
There are a lot of different locks showing up in pg_locks during those 7 minutes, but I wasn't able to figure out what exactly is going on inside this transaction and whether there is some kind of deadlocking. After all, the procedure returns successfully; it just needs way too much time doing it.
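To see what the call is actually waiting on while those locks pile up, a sketch against pg_stat_activity may help (hedged: pg_blocking_pids() and the wait_event columns exist since PostgreSQL 9.6; adjust for older versions):
-- run in a second session while pr_create_la_ezg() is executing
select pid, state, wait_event_type, wait_event,
       pg_blocking_pids(pid) as blocked_by,
       left(query, 80) as query_head
from pg_stat_activity
where state <> 'idle';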
EDIT (activated 'auto_explain' and ran all three queries again):
duration: 1.420 ms plan:
Query Text: delete from la_ezg where be_id=790696
Delete on la_ezg  (cost=4.33..22.89 rows=5 width=6)
  ->  Bitmap Heap Scan on la_ezg  (cost=4.33..22.89 rows=5 width=6)
        Output: ctid
        Recheck Cond: (la_ezg.be_id = 790696)
        ->  Bitmap Index Scan on sys_c0073325  (cost=0.00..4.33 rows=5 width=0)
              Index Cond: (la_ezg.be_id = 790696)
1 row affected in 107 ms
duration: 71.645 ms plan:
Query Text: insert into la_ezg(BE_ID,USAGE,...)
select be_id,USAGE,... from vw_la_ezg_with_usage where be_id=790696
Insert on la_ezg  (cost=1343.71..2678.87 rows=1 width=228)
  ->  Nested Loop  (cost=1343.71..2678.87 rows=1 width=228)
        Output: la_ezg_geo.be_id, usage.nutzungsart, COALESCE(round(((COALESCE(st_area(la_ezg_geo.geometry), '3'::double precision) / '10000'::double precision))::numeric, 2), '0'::numeric), NULL::numeric, COALESCE((count(usage.nutzungsart)), '0'::bigint), COALESCE(round((((sum(st_area(st_intersection(ezg.geometry, usage.geom)))) / '10000'::double precision))::numeric, 2), '0'::numeric), COALESCE(round(((((sum(st_area(st_intersection(ezg.geometry, usage.geom)))) * '100'::double precision) / COALESCE(st_area(la_ezg_geo.geometry), '3'::double precision)))::numeric, 2), '0'::numeric), NULL::character varying, NULL::timestamp without time zone, NULL::character varying, NULL::timestamp without time zone
        ->  GroupAggregate  (cost=1343.71..1343.76 rows=1 width=41)
              Output: ezg.be_id, usage.nutzungsart, sum(st_area(st_intersection(ezg.geometry, usage.geom))), count(usage.nutzungsart)
              Group Key: ezg.be_id, usage.nutzungsart
              ->  Sort  (cost=1343.71..1343.71 rows=1 width=1834)
                    Output: ezg.be_id, usage.nutzungsart, ezg.geometry, usage.geom
                    Sort Key: usage.nutzungsart
                    ->  Nested Loop  (cost=0.42..1343.70 rows=1 width=1834)
                          Output: ezg.be_id, usage.nutzungsart, ezg.geometry, usage.geom
                          ->  Seq Scan on la_ezg_geo ezg  (cost=0.00..1335.00 rows=1 width=1516)
                                Output: ezg.objectid, ezg.be_id, ezg.name, ezg.se_anno_cad_data, ezg.benutzer_geaendert, ezg.datum_geaendert, ezg.status, ezg.benutzer_erstellt, ezg.datum_erstellt, ezg.len, ezg.geometry, ezg.temp_char, ezg.vulgo, ezg.flaeche, ezg.hauptgemeinde, ezg.prozessart, ezg.verbauungsgrad, ezg.verordnung_txt, ezg.gemeinden_txt, ezg.hinderungsgrund, ezg.kompetenz, ezg.seehoehe_min, ezg.seehoehe_max, ezg.neigung_min, ezg.neigung_max, ezg.exposition
                                Filter: (ezg.be_id = 790696)
                          ->  Index Scan using dkm_nutz_fl_geom_1551355663100174000 on dkm.dkm_nutz_fl usage  (cost=0.42..8.69 rows=1 width=318)
                                Output: usage.gdo_gid, usage.gst, usage.nutzungsart, usage.nutzungsabschnitt, usage.statistik, usage.flaeche, usage.kennung, usage.von_datum, usage.bis_datum, usage.von_az, usage.bis_az, usage.projekt, usage.fme_basename, usage.fme_dataset, usage.fme_feature_type, usage.fme_type, usage.oracle_srid, usage.geom
                                Index Cond: ((usage.geom && ezg.geometry) AND (usage.geom && ezg.geometry))
                                Filter: _st_intersects(usage.geom, ezg.geometry)
        ->  Seq Scan on la_ezg_geo  (cost=0.00..1335.00 rows=1 width=1516)
              Output: la_ezg_geo.objectid, la_ezg_geo.be_id, la_ezg_geo.name, la_ezg_geo.se_anno_cad_data, la_ezg_geo.benutzer_geaendert, la_ezg_geo.datum_geaendert, la_ezg_geo.status, la_ezg_geo.benutzer_erstellt, la_ezg_geo.datum_erstellt, la_ezg_geo.len, la_ezg_geo.geometry, la_ezg_geo.temp_char, la_ezg_geo.vulgo, la_ezg_geo.flaeche, la_ezg_geo.hauptgemeinde, la_ezg_geo.prozessart, la_ezg_geo.verbauungsgrad, la_ezg_geo.verordnung_txt, la_ezg_geo.gemeinden_txt, la_ezg_geo.hinderungsgrund, la_ezg_geo.kompetenz, la_ezg_geo.seehoehe_min, la_ezg_geo.seehoehe_max, la_ezg_geo.neigung_min, la_ezg_geo.neigung_max, la_ezg_geo.exposition
              Filter: (la_ezg_geo.be_id = 790696)
1 row affected in 149 ms
duration: 421851.819 ms plan:
Query Text: select pr_create_la_ezg(790696)
Result (cost=0.00..0.26 rows=1 width=4)
Output: pr_create_la_ezg('790696'::numeric)
1 row retrieved starting from 1 in 7 m 1 s 955 ms (execution: 7 m 1 s 929 ms, fetching: 26 ms)
P.S. I shortened some of the queries and names for the sake of readability
P.P.S. This database is a legacy migration project. As in this case, there are often views dependent on views in multiple layers. I'd like to streamline all this, but I am in desperate need of a way to debug what's going on inside such a transaction; otherwise I would have to rebuild nearly everything, with the risk of breaking things.
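One detail in the posted code that may be worth ruling out (an observation, not a confirmed cause): the parameter pBE_ID is declared numeric while la_ezg.be_id is integer. Inside the function, be_id = pBE_ID is an integer-to-numeric comparison, which a plain btree index on be_id cannot serve, whereas the standalone statements above use integer literals and do hit the index. A sketch of the experiment:
-- hedged sketch: align the parameter type with the column type and retime the call
-- note: a different signature creates a new overload; drop the numeric version first if desired
CREATE OR REPLACE function pr_create_la_ezg(pBE_ID integer) returns void as
$$
begin
    delete from la_ezg where be_id = pBE_ID;
    insert into la_ezg (BE_ID, USAGE, ...)   -- column list shortened as in the original
    select be_id, usage, ...
    from vw_la_ezg_with_usage
    where be_id = pBE_ID;
end;
$$ language plpgsql;
-- alternatively, keep the numeric signature and cast inside:
--   delete from la_ezg where be_id = pBE_ID::integer;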

Need help understanding NpgSQL connection opening process

I have been trying to optimize a web service that is using NpgSQL 3.2.7 to connect to a PostgreSQL 9.3 database. Today I installed pgBouncer and noticed when running "select * from pg_stat_activity;" that all of my NpgSQL connections had this query listed:
SELECT ns.nspname, a.typname, a.oid, a.typrelid, a.typbasetype,
CASE WHEN pg_proc.proname='array_recv' THEN 'a' ELSE a.typtype END AS type,
CASE
WHEN pg_proc.proname='array_recv' THEN a.typelem
WHEN a.typtype='r' THEN rngsubtype
ELSE 0
END AS elemoid,
CASE
WHEN pg_proc.proname IN ('array_recv','oidvectorrecv') THEN 3 /* Arrays last */
WHEN a.typtype='r' THEN 2 /* Ranges before */
WHEN a.typtype='d' THEN 1 /* Domains before */
ELSE 0 /* Base types first */
END AS ord
FROM pg_type AS a
JOIN pg_namespace AS ns ON (ns.oid = a.typnamespace)
JOIN pg_proc ON pg_proc.oid = a.typreceive
LEFT OUTER JOIN pg_type AS b ON (b.oid = a.typelem)
LEFT OUTER JOIN pg_range ON (pg_range.rngtypid = a.oid)
WHERE
(
a.typtype IN ('b', 'r', 'e', 'd') AND
(b.typtype IS NULL OR b.typtype IN ('b', 'r', 'e', 'd')) /* Either non-array or array of supported element type */
)
When I run this query in pgAdmin it takes 3 to 5 seconds to complete the second time I run it when everything should be cached. When I have run my code interactively executing the first open command in a web service call has taken 3 to 5 seconds.
Does this run every time a connection is created? It looks to me like an expensive query to fetch some relatively static data. If it does have to run every time a connection is created, does anyone have suggestions on how to architect around this in a web service? 3 to 5 seconds is just too much overhead for every call to a web service. Does using pooling have any effect on whether or not this query is run?
ADDED: 03/14/2018
These are log entries I am seeing after creating a table to hold the results of the types query. The select against it runs successfully, and then later the table cannot be found for some reason.
2018-03-14 15:35:42 EDT LOG: duration: 0.715 ms parse : select nspname,typname,oid,typrelid,typbasetype,type,elemoid,ord from "public"."npgsqltypes"
2018-03-14 15:35:42 EDT LOG: duration: 0.289 ms bind : select nspname,typname,oid,typrelid,typbasetype,type,elemoid,ord from "public"."npgsqltypes"
2018-03-14 15:35:42 EDT LOG: execute : select nspname,typname,oid,typrelid,typbasetype,type,elemoid,ord from "public"."npgsqltypes"
2018-03-14 15:35:42 EDT LOG: duration: 0.391 ms
2018-03-14 15:35:44 EDT ERROR: relation "public.npgsqltypes" does not exist at character 71
2018-03-14 15:35:44 EDT STATEMENT: select nspname,typname,oid,typrelid,typbasetype,type,elemoid,ord from "public"."npgsqltypes"
2018-03-14 15:35:44 EDT LOG: statement: DISCARD ALL
2018-03-14 15:35:44 EDT LOG: duration: 0.073 ms
ADDED: 03/15/2018
Explain output of types query:
Sort  (cost=3015139.78..3018795.67 rows=1462356 width=213)
  Sort Key: (CASE WHEN (pg_proc.proname = ANY ('{array_recv,oidvectorrecv}'::name[])) THEN 3 WHEN (a.typtype = 'r'::"char") THEN 2 WHEN (a.typtype = 'd'::"char") THEN 1 ELSE 0 END)
  ->  Hash Left Join  (cost=920418.37..2779709.53 rows=1462356 width=213)
        Hash Cond: (a.oid = pg_range.rngtypid)
        ->  Hash Join  (cost=920417.24..2752289.21 rows=1462356 width=209)
              Hash Cond: ((a.typreceive)::oid = pg_proc.oid)
              ->  Hash Join  (cost=919817.78..2724270.58 rows=1462356 width=149)
                    Hash Cond: (a.typnamespace = ns.oid)
                    ->  Hash Left Join  (cost=919305.50..2687199.40 rows=1462356 width=89)
                          Hash Cond: (a.typelem = b.oid)
                          Filter: (((a.typtype = ANY ('{b,r,e,d}'::"char"[])) AND ((b.typtype IS NULL) OR (b.typtype = ANY ('{b,r,e,d}'::"char"[])))) OR ((a.typname = ANY ('{record,void}'::name[])) AND (a.typtype = 'p'::"char")))
                          ->  Seq Scan on pg_type a  (cost=0.00..694015.89 rows=13731889 width=89)
                          ->  Hash  (cost=694015.89..694015.89 rows=13731889 width=5)
                                ->  Seq Scan on pg_type b  (cost=0.00..694015.89 rows=13731889 width=5)
                    ->  Hash  (cost=388.79..388.79 rows=9879 width=68)
                          ->  Seq Scan on pg_namespace ns  (cost=0.00..388.79 rows=9879 width=68)
              ->  Hash  (cost=465.87..465.87 rows=10687 width=68)
                    ->  Seq Scan on pg_proc  (cost=0.00..465.87 rows=10687 width=68)
        ->  Hash  (cost=1.06..1.06 rows=6 width=8)
              ->  Seq Scan on pg_range  (cost=0.00..1.06 rows=6 width=8)
You're right, this query is issued by Npgsql to load all the types from a PostgreSQL backend - different databases can have different data types (due to extensions, user-defined types, etc.).
However, this query is sent only on the first physical connection to a specific database, as identified by its connection string. In other words, if you connect to the same database X times with the same connection string, you should only see this query being sent once; Npgsql caches this information internally. I just verified that this is the behavior in 3.2.7 - are you seeing something else?
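If you want to check whether that matches what you see through pgBouncer, one sketch (assuming you can query pg_stat_activity on the server) is to watch for the type-loading query while your service opens connections:
select pid, state, query_start, left(query, 60) as query_head
from pg_stat_activity
where query like 'SELECT ns.nspname, a.typname%';
If pooling works as described, a matching row should appear only around the first physical open per connection string.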

Redshift COPY command: getting the number of rows loaded

Can we get the number of rows inserted through the COPY command? Some records might fail, so what is the number of records successfully inserted?
I have a file with JSON objects in Amazon S3 and am trying to load the data into Redshift using the COPY command. How do I know how many records were successfully inserted and how many failed?
Loading some example data:
db=# copy test from 's3://bucket/data' credentials '' maxerror 5;
INFO: Load into table 'test' completed, 4 record(s) loaded successfully.
COPY
db=# copy test from 's3://bucket/err_data' credentials '' maxerror 5;
INFO: Load into table 'test' completed, 1 record(s) loaded successfully.
INFO: Load into table 'test' completed, 2 record(s) could not be loaded. Check 'stl_load_errors' system table for details.
COPY
Then the following query:
with _successful_loads as (
select
stl_load_commits.query
, listagg(trim(filename), ', ') within group(order by trim(filename)) as filenames
from stl_load_commits
left join stl_query using(query)
left join stl_utilitytext using(xid)
where rtrim("text") = 'COMMIT'
group by query
),
_unsuccessful_loads as (
select
query
, count(1) as errors
from stl_load_errors
group by query
)
select
query
, filenames
, sum(stl_insert.rows) as rows_loaded
, max(_unsuccessful_loads.errors) as rows_not_loaded
from stl_insert
inner join _successful_loads using(query)
left join _unsuccessful_loads using(query)
group by query, filenames
order by query, filenames
;
Giving:
query | filenames | rows_loaded | rows_not_loaded
-------+------------------------------------------------+-------------+-----------------
45597 | s3://bucket/err_data.json | 1 | 2
45611 | s3://bucket/data1.json, s3://bucket/data2.json | 4 |
(2 rows)
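For just the most recent COPY in the current session there is also a simpler route using two documented Redshift system functions (a sketch; exact output depends on your cluster version):
-- rows loaded by the last COPY in this session, and its query id
select pg_last_copy_count(), pg_last_copy_id();
-- errors for that same load
select query, filename, line_number, colname, err_reason
from stl_load_errors
where query = pg_last_copy_id()
order by starttime desc;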

Why are long-running queries blank in postgresql log?

I'm running a log (log_min_duration_statement = 200) to analyse some slow queries in PostgreSQL 9.0 but the statements for worst queries aren't being logged. Is there any way I can find out what the queries actually are?
(some values replaced with *** for brevity and privacy.)
2012-06-29 02:10:39 UTC LOG: duration: 266.658 ms statement: SELECT *** FROM "oauth_accesstoken" WHERE "oauth_accesstoken"."token" = E'***'
2012-06-29 02:10:40 UTC LOG: duration: 1797.400 ms statement:
2012-06-29 02:10:49 UTC LOG: duration: 1670.132 ms statement:
2012-06-29 02:10:50 UTC LOG: duration: 354.336 ms statement: SELECT *** FROM ***
...
There are some log file destination options in postgresql.conf, as shown below. I suggest using csvlog.
log_destination = 'csvlog'
logging_collector = on
log_directory = '/var/applog/pg_log/1922/'
log_rotation_age = 1d
log_rotation_size = 10MB
log_statement = 'ddl' # none, ddl, mod, all
log_min_duration_statement = 200
After making any changes, you need to reload the postgresql.conf file.
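The reload itself can be done from SQL as well (a small sketch; ALTER SYSTEM only arrived in 9.4, so on 9.0 edit postgresql.conf by hand first):
-- after editing postgresql.conf, reload without a restart and verify
select pg_reload_conf();
show log_destination;
show log_min_duration_statement;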
It turns out that because I was keeping an eye on the logs with tail -f path | grep 'duration .+ ms', any statement starting with a newline was not visible. I was mainly doing this to highlight the duration string.