Why is the duration not written to the log file? - postgresql

I use PostgreSQL 14.
My Postgres conf:
logging_collector = on
log_min_duration_statement = 400
log_connections = on
log_disconnections = on
log_duration = off
log_line_prefix = '%Q %r %d ' #%Q = query ID %r = remote host and port %d = database name
log_statement = 'all'
And I get this log:
2022-06-24 10:08:36.668 UTC [92] LOG: statement:
UPDATE ***
SET ****
WHERE ***
Why does it not have a 'duration'?
I want to get a log like this:
2022-06-24 10:08:36.668 UTC [92] LOG: duration: 5944.540 ms statement:
UPDATE ***
SET ****
WHERE ***

Try setting log_statement = 'none'; then all statements running for more than 0.4 seconds will be logged along with their duration. The reason you see no combined message is that log_statement = 'all' logs each statement as soon as it starts, before the duration is known, so log_min_duration_statement does not repeat the statement text and only emits a separate duration line.
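For reference, the two settings that interact here (a minimal sketch; the rest of the configuration can stay as it is):
log_statement = 'none'              # do not log every statement as it starts
log_min_duration_statement = 400    # completed statements over 400 ms are logged with their duration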

Related

Undefined function: 7 ERROR: function hour(timestamp without time zone) does not exist

I get the error PG::Error: ERROR: function hour(timestamp without time zone) does not exist.
I am trying to convert a MySQL database to PostgreSQL.
My Query:
SELECT volt as volt1, max(current) as current1, max(total_load) as total_load, HOUR(created_at) as hourly
from "rms_dc_power_history"
where "rms_id" = 2
  and "created_at"::date = 2022-09-03
  and "created_at"::time > 00:00
  and "created_at"::time <= 24:00
group by HOUR(created_at)
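PostgreSQL has no HOUR() function; its equivalent is extract(hour from ...), and date/time literals must be quoted. A sketch of how the query might translate (note that PostgreSQL will also reject the bare volt column outside GROUP BY, so max(volt) is used here as an assumption):
SELECT max(volt) as volt1, max(current) as current1, max(total_load) as total_load,
       extract(hour from created_at) as hourly
from "rms_dc_power_history"
where "rms_id" = 2
  and "created_at"::date = '2022-09-03'
  and "created_at"::time > '00:00'
  and "created_at"::time <= '24:00'
group by extract(hour from created_at)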

How to get local UTC time Flutter

I need to translate the time (UTC+7) into the local time that the user has on the phone.
My code:
String date = '20211231031500';
var dateTime = DateFormat("yyyyMMddHHmmss").parse(date, true);
dateLocal = dateTime.toLocal();
Error: Trying to read MM from 20211231031500 at position 14
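For reference, a minimal sketch of the intended parse-and-convert flow, assuming package:intl's DateFormat. The pattern 'yyyyMMddHHmmss' consumes exactly the 14 input characters, so an error at position 14 suggests the pattern that actually ran contained extra letters after ss:
import 'package:intl/intl.dart';

void main() {
  const date = '20211231031500';
  // Parse the 14-character timestamp; the second argument (utc: true)
  // interprets it as UTC rather than local time.
  final dateTime = DateFormat('yyyyMMddHHmmss').parse(date, true);
  // Convert to the time zone set on the device.
  print(dateTime.toLocal());
  // NB: the source time is stated to be UTC+7, so a further 7-hour
  // adjustment would still be needed after parsing as UTC.
}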

How to stream data sorted from a table in PostgreSQL?

I am trying to get the data from a PostgreSQL table in sorted order using Java. The problem lies in the query planning of PostgreSQL - have a look at these queries:
select *
from the_table
order by the_indexed_column asc
;
The query plan for this is:
Gather Merge  (cost=16673025.39..28912422.53 rows=104901794 width=64)
  Workers Planned: 2
  ->  Sort  (cost=16672025.36..16803152.60 rows=52450897 width=64)
        Sort Key: "time"
        ->  Parallel Seq Scan on raw  (cost=0.00..4030550.63 rows=52450897 width=64)
The Sort at the top prevents streaming of the data, because the sort must consume all the data first. This is problematic for sorts over large amounts of data, e.g. 20 GB in my case, as they have to be saved to disk.
Compare this query:
select *
from raw
order by the_indexed_column asc
limit 10000000
;
Plan:
Limit  (cost=0.57..9871396.70 rows=10000000 width=64)
  ->  Index Scan using raw_time_idx on raw  (cost=0.57..124263259.38 rows=125882152 width=64)
This data can be easily streamed.
I think PostgreSQL only optimizes for total query speed here, not for other properties like disk usage or streaming capability. Is there a way to tune PostgreSQL so that it chooses the second plan instead of the first?
EDIT:
This is the code that executes the query. The string at the end is never printed.
Connection database = DriverManager.getConnection(DatabaseConstants.DATABASE_URL, DatabaseConstants.USER, DatabaseConstants.PASSWORD);
String sql = "select " +
"column_a, column_b, some_expression, morestuff " +
"from the_table " +
"order by the_indexed_column asc " +
";";
database.setAutoCommit(false);
PreparedStatement statement = database.prepareStatement(sql);
statement.setFetchSize(1024);
ResultSet set = statement.executeQuery();
System.out.println("Got first results...");
The value of cursor_tuple_fraction was lowered to 0.05, 0.01 and 0.0 with no effect.
PostgreSQL Version: 10.7,
Driver Version: 42.2.5.jre7 (the most recent in Maven (now for real)),
OS: Fedora 29 (Minimal with KDE on top)
This is the output in the log with log_min_duration_statement = 0:
2019-03-29 17:11:52.532 CET [15068] LOG: database system is ready to accept connections
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.397 ms parse <unnamed>: SET extra_float_digits = 3
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.008 ms bind <unnamed>: SET extra_float_digits = 3
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.046 ms execute <unnamed>: SET extra_float_digits = 3
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.024 ms parse <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.006 ms bind <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2019-03-29 17:12:04.615 CET [15119] LOG: duration: 0.026 ms execute <unnamed>: SET application_name = 'PostgreSQL JDBC Driver'
2019-03-29 17:12:04.662 CET [15119] LOG: duration: 0.023 ms parse <unnamed>: BEGIN
2019-03-29 17:12:04.662 CET [15119] LOG: duration: 0.006 ms bind <unnamed>: BEGIN
2019-03-29 17:12:04.662 CET [15119] LOG: duration: 0.004 ms execute <unnamed>: BEGIN
2019-03-29 17:12:04.940 CET [15119] LOG: duration: 277.705 ms parse <unnamed>: [the query...]
2019-03-29 17:12:05.162 CET [15119] LOG: duration: 222.742 ms bind <unnamed>/C_1: [the query...]
During this, the disk usage increases.
That shouldn't be a problem. Use a cursor by applying setFetchSize with a non-zero value to the prepared statement.
Then PostgreSQL will choose a plan that returns the first rows quickly, that is, an index scan.
If PostgreSQL still chooses a sort, lower cursor_tuple_fraction from its default value of 0.1 (10% of the total result set).
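For reference, a minimal JDBC sketch of the cursor-based approach (URL, credentials, and table/column names are placeholders):
import java.sql.*;

public class StreamSorted {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            // Cursors are only used when autocommit is off.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Optionally bias the planner further towards fast-start plans.
                stmt.execute("SET cursor_tuple_fraction = 0.01");
                // A non-zero fetch size makes the driver pull rows in batches
                // through a server-side cursor instead of materializing them all.
                stmt.setFetchSize(1024);
                try (ResultSet rs = stmt.executeQuery(
                        "select * from the_table order by the_indexed_column asc")) {
                    while (rs.next()) {
                        // Process each row as it arrives.
                    }
                }
            }
            conn.commit();
        }
    }
}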
For the record: This is how it should look in the log:
duration: 0.126 ms parse S_1: BEGIN
duration: 0.015 ms bind S_1: BEGIN
duration: 0.034 ms execute S_1: BEGIN
duration: 0.998 ms parse S_2: SELECT /* the query */
duration: 1.752 ms bind S_2/C_3: SELECT /* the query */
duration: 0.081 ms execute S_2/C_3: SELECT /* the query */
duration: 0.060 ms execute fetch from S_2/C_3: SELECT /* the query */
duration: 0.065 ms execute fetch from S_2/C_3: SELECT /* the query */
duration: 0.070 ms execute fetch from S_2/C_3: SELECT /* the query */
duration: 0.078 ms execute fetch from S_2/C_3: SELECT /* the query */

PowerShell string comparison for taking backup

I have a file whose contents look like this:
DateTime: 2018-02-09 02:00:12
Database: [master]
Status: ONLINE
Mirroring role: None
Standby: No
Updateability: READ_WRITE
User access: MULTI_USER
Is accessible: Yes
Recovery model: SIMPLE
Differential base LSN: 940000000008500178
Last log backup LSN: NULL
DateTime: 2018-02-09 02:00:12
Command: DECLARE #ReturnCode int EXECUTE #ReturnCode = [master].dbo.xp_create_subdir N'J:\dump_data\sqlserver234\db_dump' IF #ReturnCode <> 0 RAISERROR('Error creating directory.', 16, 1)
Outcome: Succeeded
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
DateTime: 2018-02-09 02:00:12
Command: BACKUP DATABASE [master] TO DISK = N'J:\dump_data\sqlserver234\db_dump\master_FULL_20180209_020012.bak' WITH CHECKSUM, COMPRESSION
Processed 512 pages for database 'master', file 'master' on file 1.
Processed 3 pages for database 'master', file 'mastlog' on file 1.
BACKUP DATABASE successfully processed 515 pages in 0.088 seconds (45.693 MB/sec).
Outcome: Succeeded
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
DateTime: 2018-02-09 02:00:12
Command: RESTORE VERIFYONLY FROM DISK = N'J:\dump_data\sqlserver234\db_dump\master_FULL_20180209_020012.bak'
The backup set on file 1 is valid.
Outcome: Succeeded
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
DateTime: 2018-02-09 02:00:12
Database: [model]
Status: ONLINE
Mirroring role: None
Standby: No
Updateability: READ_WRITE
User access: MULTI_USER
Is accessible: Yes
Recovery model: SIMPLE
Differential base LSN: 31000001141300037
Last log backup LSN: NULL
DateTime: 2018-02-09 02:00:12
Command: DECLARE #ReturnCode int EXECUTE #ReturnCode = [master].dbo.xp_create_subdir N'J:\dump_data\sqlserver234\db_dump' IF #ReturnCode <> 0 RAISERROR('Error creating directory.', 16, 1)
Outcome: Succeeded
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
DateTime: 2018-02-09 02:00:12
Command: BACKUP DATABASE [model] TO DISK = N'J:\dump_data\sqlserver234\db_dump\model_FULL_20180209_020012.bak' WITH CHECKSUM, COMPRESSION
Processed 320 pages for database 'model', file 'modeldev' on file 1.
Processed 2 pages for database 'model', file 'modellog' on file 1.
BACKUP DATABASE successfully processed 322 pages in 0.048 seconds (52.256 MB/sec).
Outcome: Failed
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
DateTime: 2018-02-09 02:00:12
Command: RESTORE VERIFYONLY FROM DISK = N'J:\dump_data\sqlserver234\db_dump\model_FULL_20180209_020012.bak'
The backup set on file 1 is valid.
Outcome: Failed
Duration: 00:00:00
DateTime: 2018-02-09 02:00:12
I have written a PowerShell script which gives me the database name and the corresponding outcome:
param(
    [Parameter(Mandatory=$True)][string]$path
)
#param([string]$path)
# <context>
# <description>
# Sending output to console
# </description>
# </context>
try
{
    foreach ($line in [System.IO.File]::ReadLines("E:\utility\TEDM_DBA_M_MNT_BACKUP_System.txt"))
    {
        $database = $line | Select-String -Pattern 'Database:'
        $outcome = $line | Select-String -Pattern 'Outcome:'
        Write-Host $outcome
        if ($outcome -eq 'Outcome: Failed')
        {
            Write-Host $outcome
        }
    }
}
catch
{
    Write-Host -BackgroundColor Red -ForegroundColor White "Fail"
    $errText = $Error[0].ToString()
    if ($errText.Contains("network-related"))
    {
        Write-Host "Connection Error. Check server name, port, firewall."
    }
    Write-Host $errText
    continue
}
But when I compare the string with 'Outcome: Failed', it passes for all conditions:
if($outcome -eq 'Outcome: Failed')
What am I doing wrong in the string comparison?
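A sketch of one possible explanation, assuming the culprit is that Select-String returns MatchInfo objects (or $null when nothing matches) rather than plain strings, so -eq against a string does not behave as intended. Comparing the line text directly avoids that:
foreach ($line in [System.IO.File]::ReadLines($path))
{
    if ($line -like 'Database:*')
    {
        $database = $line
    }
    # $line is a plain string here, so -eq works as expected.
    if ($line.Trim() -eq 'Outcome: Failed')
    {
        Write-Host "$database -> $line"
    }
}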

Rails 4, Heroku, Comparing times not working

Using Heroku, I am trying to compare the saved times (open and close) of a place to the current time to determine what to output.
Code:
def get_place_search_display(open_time, close_time)
open_time_formatted = open_time.strftime('%l:%M %p') if !open_time.nil?
close_time_formatted = close_time.strftime('%l:%M %p') if !close_time.nil?
current_time = Time.current #.strftime('%l:%M %p')
if open_time.nil? || close_time.nil?
"Closed today"
elsif current_time < open_time
"Opens at #{open_time} until #{close_time}"
elsif current_time >= open_time && current_time <= close_time
"Open #{open_time} - #{close_time}"
elsif current_time > close_time
"Closed at #{close_time} today"
else
"Open until #{close_time} today"
end
end
Example returned values:
Open time: 7:00 AM
Close time: 8:45 PM, 2000-01-01 20:45:00 UTC
Current time: 6:35 PM, 2014-09-07 18:35:36 -0700
Returns: "Closed at 2000-01-01 20:45:00 UTC today"
Any ideas why the comparison isn't working with this logic?
Looking at the term "today" in your output string, and also the '%l:%M %p' in your time formatting method, I assume you are trying to compare Time objects without date info.
However, in Ruby a Time object always carries a date. You can still achieve what you want by converting the times to the same day before comparing them.
a = Time.current
=> Mon, 08 Sep 2014 02:30:44 UTC +00:00
b = Time.current.months_ago(1)
=> Fri, 08 Aug 2014 02:30:59 UTC +00:00
a > b
=> true
a2 = a.change(day:1, month:1, year:2000)
=> Sat, 01 Jan 2000 02:30:44 UTC +00:00
b2 = b.change(day:1, month:1, year:2000)
=> Sat, 01 Jan 2000 02:30:59 UTC +00:00
a2 > b2
=> false
Hope it helps.
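Applied to the method in the question, a sketch might look like this (assuming ActiveSupport is available and all three times are in the same time zone; mixed zones would need converting, e.g. with in_time_zone, first):
def get_place_search_display(open_time, close_time)
  return "Closed today" if open_time.nil? || close_time.nil?

  # Normalize everything onto the same date so that only the
  # time of day takes part in the comparisons.
  same_day = { day: 1, month: 1, year: 2000 }
  now     = Time.current.change(same_day)
  open_t  = open_time.change(same_day)
  close_t = close_time.change(same_day)

  open_s  = open_time.strftime('%l:%M %p')
  close_s = close_time.strftime('%l:%M %p')

  if now < open_t
    "Opens at #{open_s} until #{close_s}"
  elsif now <= close_t
    "Open #{open_s} - #{close_s}"
  else
    "Closed at #{close_s} today"
  end
end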