Is there a way to use a SQL profiler for NMemory (an in-memory database) - entity-framework

I'm using Entity Framework with Effort, which uses NMemory, to test without having actual database side-effects. Is there any way to view the SQL that's being sent to the NMemory database?
Edit:
Thanks to @Gert_Arnold I have been looking into DbContext.Database.Log. Unfortunately my output looks like the below. Can anyone comment on this? I'm assuming I'm getting these null entries instead of my SQL.
Opened connection at 4/27/2015 11:08:22 AM -05:00
Started transaction at 4/27/2015 11:08:22 AM -05:00
<null>
-- Executing at 4/27/2015 11:08:23 AM -05:00
-- Completed in 132 ms with result: 1
<null>
-- Executing at 4/27/2015 11:08:23 AM -05:00
-- Completed in 5 ms with result: 1
Committed transaction at 4/27/2015 11:08:23 AM -05:00
Closed connection at 4/27/2015 11:08:23 AM -05:00
Disposed transaction at 4/27/2015 11:08:23 AM -05:00
Opened connection at 4/27/2015 11:08:24 AM -05:00
Started transaction at 4/27/2015 11:08:24 AM -05:00
<null>
-- Executing at 4/27/2015 11:08:24 AM -05:00
-- Completed in 8 ms with result: 1
Committed transaction at 4/27/2015 11:08:24 AM -05:00
Closed connection at 4/27/2015 11:08:24 AM -05:00
Disposed transaction at 4/27/2015 11:08:24 AM -05:00

You can intercept and log the commands.
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;
using System.Diagnostics;

// Before any command is sent, tell EF about the new interceptor
// (typically once at application startup):
DbInterception.Add(new MyEFDbInterceptor());

// The interceptor class is called back by EF. IDbCommandInterceptor declares
// six methods (Executing/Executed for NonQuery, Reader and Scalar); the two
// reader methods are shown here, the rest are left as empty stubs.
public class MyEFDbInterceptor : IDbCommandInterceptor
{
    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        Debug.WriteLine(command.CommandText);
    }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        Debug.WriteLine(command.CommandText);
        //Debug.WriteLine(interceptionContext.Result); // might be interesting as well
    }

    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}

Related

PostgreSQL crashing: terminating connection due to unexpected postmaster exit

I am doing a data migration task running 60 parallel processes from Python using Python threading.
After a while, PG crashes, see the end of the PG log file:
2021-11-17 09:30:09.973 CET [19372] FATAL: terminating connection due to unexpected postmaster exit
2021-11-17 09:30:09.973 CET [17944] FATAL: terminating connection due to unexpected postmaster exit
2021-11-17 09:30:09.973 CET [16628] FATAL: terminating connection due to unexpected postmaster exit
2021-11-17 09:30:09.973 CET [19508] FATAL: terminating connection due to unexpected postmaster exit
2021-11-17 09:30:09.973 CET [21636] FATAL: postmaster exited during a parallel transaction
2021-11-17 09:30:09.973 CET [21636] CONTEXT: SQL statement "......"
PL/pgSQL function swn.nearest_pgr_node(geometry,numeric) line 7 at SQL statement
SQL statement "select swn.nearest_pgr_node(structure_end_geom, 2.0)"
PL/pgSQL function swn.migrate_cable(character varying) line 316 at SQL statement
2021-11-17 09:30:09.973 CET [21636] STATEMENT: select swn.migrate_cable('{F97554BF-59BA-44D6-9D70-DD9B6B5927EA}')
2021-11-17 09:30:09.973 CET [17944] FATAL: could not duplicate handle for "Global/PostgreSQL.2398935412": Permission denied
2021-11-17 09:30:11.227 CET [14284] FATAL: terminating connection due to unexpected postmaster exit
Has anyone got a clue what the problem is? I am running this on a physical Windows server using local disk. PG version 14.
Thank you for any input. Googling the problem did not give me any usable hints.
The SQL code above (the "......") is
2021-11-17 09:30:09.973 CET [21636] CONTEXT: SQL statement "with ptest as
(
(select the_geom as point, id as id
from swn.pni_route_vertices_pgr
where st_distance(the_geom, _point) < _dist)
)
SELECT
closest_route.id FROM ptest p1
CROSS JOIN LATERAL
(SELECT
id,
st_closestpoint(p2.point, _point) as nearest_point,
ST_Distance(_point, p2.point) as dist
FROM ptest p2
ORDER BY _point <-> p2.point
LIMIT 1
) AS closest_route
order by 1"
PL/pgSQL function swn.nearest_pgr_node(geometry,numeric) line 7 at SQL statement
SQL statement "select swn.nearest_pgr_node(structure_end_geom, 2.0)"
PL/pgSQL function swn.migrate_cable(character varying) line 316 at SQL statement
2021-11-17 09:30:09.973 CET [21636] STATEMENT: select swn.migrate_cable('{F97554BF-59BA-44D6-9D70-DD9B6B5927EA}')
I have browsed through my PG log files looking for this error, and it turns out that it crashes on this very same code every time.
Extensions installed:
pg_routing
plpgsql
postgis
postgis_sfcgal
uuid-ossp
The cause is that the postmaster, the parent process of all PostgreSQL processes, died unexpectedly.
For further clarification, check if there are other log messages that shed light on that event. On Linux you should also look into the kernel log for messages from the OOM killer: very often such events are caused by the system going out of memory while memory overcommit hasn't been deactivated (tune vm.overcommit_memory and vm.overcommit_ratio!).
I notice that your query is using PostGIS, which can consume a lot of memory when dealing with complicated geometries. Reduce work_mem and/or shared_buffers to avoid going out of memory.
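As a sketch of the second suggestion (the values below are placeholders, not tuning advice), both settings can be lowered without hand-editing postgresql.conf:

```sql
-- Placeholder values; pick numbers appropriate for your workload.
ALTER SYSTEM SET work_mem = '16MB';       -- per-sort/hash memory, multiplied by parallel workers
ALTER SYSTEM SET shared_buffers = '2GB';  -- takes effect only after a server restart
SELECT pg_reload_conf();                  -- applies work_mem; shared_buffers still needs a restart
```

Note that work_mem is allocated per sort/hash node per backend, so 60 parallel sessions multiply its effect considerably.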

Postgres.exe crashes and tears down all apps, recovers and is running again

I'm running an application with about 20 processes connected to a Postgres DB (10.0) on Windows Server 2016.
For about a month now I have been getting unexpected crashes of postgres.exe.
To isolate the problem I extended the logging by setting log_min_duration_statement = 0
This creates a more detailed log file. What I can see is:
LOG: server process (PID xxxxx) was terminated by exception 0xFFFFFFFF
DETAIL: Failed process was running: COMMIT
HINT: See C include file "ntstatus.h" for a description of the hexadecimal value.
Then it tears down all 20 processes like this:
LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
Then the DB recovers:
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted; last known up at 2021-06-11 18:17:18 CEST
DB enters recovery mode
FATAL: the database system is in recovery mode
FATAL: the database system is in recovery mode
FATAL: the database system is in recovery mode
FATAL: the database system is in recovery mode
LOG: database system was not properly shut down; automatic recovery in progress
...
LOG: redo starts at 1B2/33319E58
FATAL: the database system is in recovery mode
LOG: invalid record length at 1B2/33D29930: wanted 24, got 0
LOG: redo done at 1B2/33D29908
LOG: last completed transaction was at log time 2021-06-11 18:21:39.830526+02
FATAL: the database system is in recovery mode
...
FATAL: the database system is in recovery mode
LOG: database system is ready to accept connections
Now it's running normally again.
I can match the crashed PID xxxxx to the postgres.exe serving one of the 20 application processes. It's not always the same one. This happens about every 5-10 days.
Can anybody give me some advice on how to track down the reason for this crash?
Extensions used:
oracle_fdw 2.0.0, PostgreSQL 10.0, Oracle client 11.2.0.3.0, Oracle server 11.2.0.2.0
Crash dump:
I followed this link:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows
Although the postgres user has "full control" of the crashdump folder in the security tab, nothing is written there; the folder stays empty.
Follow-up on the comment by @Laurenz_Albe:
The COMMIT is not the reason for the crash. It is the last successfully executed command of the session, as the following example shows:
The process gets a job and starts doing its work
2021-06-15 16:27:51.100 CEST [25604] LOG: duration: 0.061 ms statement: DISCARD ALL
2021-06-15 16:27:51.100 CEST [25604] LOG: duration: 0.012 ms statement: BEGIN
2021-06-15 16:27:51.100 CEST [25604] LOG: duration: 0.015 ms statement: SET TRANSACTION ISOLATION LEVEL READ COMMITTED
now a lot of action is going on within session 25604,
among others via the Oracle foreign data wrapper
2021-06-15 16:28:13.792 CEST [25604] LOG: duration: 0.016 ms execute <unnamed>: FETCH ALL FROM "<unnamed portal 689>"
the action finishes successfully (the transaction's data is in the database)
2021-06-15 16:28:13.823 CEST [25604] LOG: duration: 0.059 ms statement: COMMIT
a lot of action is going on in different sessions,
among others via the Oracle foreign data wrapper
more than 7 minutes later the next job is requested, and now postgres.exe crashes:
2021-06-15 16:36:01.524 CEST [17904] LOG: server process (PID 25604) was terminated by exception 0xFFFFFFFF
The process does not execute DISCARD ALL, BEGIN or SET TRANSACTION ISOLATION LEVEL READ COMMITTED.
It crashes immediately.
My Conclusion:
The "possibly corrupted shared memory" was caused by one of the processes earlier, somewhere between the last successful COMMIT and the new request.
That is a 7-minute time span in which the problem must have occurred.
Some feedback on this conclusion?

PSQL TimescaleDB, ERROR: the database system is in recovery mode

We have an application pipeline and Postgres 12 (TimescaleDB, managed through Patroni) on a separate server (a VM with Ubuntu 18.04 LTS).
We are facing an issue with the DB: it suddenly gets stuck in recovery mode, we can't access it from the psql client, and SELECT queries hang.
After an hour or so everything went back to normal (as my current pipeline terminated) and we were able to run queries against the DB server.
Master DB error details:
2020-11-03 18:35:08.612 IST [9773] [unknown]#[unknown] LOG: connection received: host=x.x.x.x port=58780
2020-11-03 18:35:08.612 IST [9773] FATAL: the database system is in recovery mode
2020-11-03 18:35:08.596 IST [18276] LOG: could not send data to client: Broken pipe
Replica server error details:
2020-11-03 18:34:55 IST [18316]: [85649-1] user=postgres,db=postgres,app=[unknown],client=x.x.x.x LOG: duration: 10.228 ms statement: SELECT * FROM pg_stat_bgwriter;
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-11-03 18:35:08 IST [18322]: [2-1] user=,db=,app=,client= FATAL: could not receive data from WAL stream: SSL SYSCALL error: EOF detected
2020-11-03 18:35:08 IST [20500]: [1-1] user=,db=,app=,client= FATAL: could not connect to the primary server: FATAL: the database system is in recovery mode
FATAL: the database system is in recovery mode
Pipeline error details:
Job aborted due to stage failure: Task 4 in stage 0.0 failed 3 times, most recent failure: Lost task 4.2 in stage 0.0 (TID 29, ip-x-x-x-x.ap-southeast-1.compute.internal, executor 19): org.postgresql.util.PSQLException: FATAL: the database system is in recovery mode at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:514) at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:141) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195) at org.postgresql.Driver.makeConnection(Driver.java:454) at org.postgresql.Driver.connect(Driver.java:256) at org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper.connect(DriverWrapper.scala:45)
Does anyone have any advice on this issue?
What version of TimescaleDB are you running? In particular, there were some issues in 1.7.x when querying a read replica; we recommend upgrading to 1.7.4.
(Otherwise, there's not much information to go on to suggest what might have happened.)
https://github.com/timescale/timescaledb/releases/tag/1.7.4
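To check which version is installed and, once the 1.7.4 packages are on the server, upgrade the extension in place, something along these lines should work:

```sql
-- Show the currently installed extension version
SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';

-- After installing the new packages, update the extension objects.
-- TimescaleDB wants this run as the first command in a fresh session
-- (e.g. psql -X) per its documented update procedure.
ALTER EXTENSION timescaledb UPDATE TO '1.7.4';
```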

Debugging AccessExclusiveLock in Postgres 9.6

We have an application, backed by Postgres which briefly locked up. The Postgres logs showed a series of AccessExclusiveLock entries for pg_database:
[13-1] sql_error_code = 00000 LOG: process 7045 still waiting for AccessExclusiveLock on object 0 of class 1262 of database 0 after 1000.123 ms
[6-1] sql_error_code = 00000 LOG: process 7132 still waiting for AccessExclusiveLock on object 0 of class 1262 of database 0 after 1000.118 ms
[6-1] sql_error_code = 00000 LOG: process 8824 still waiting for AccessExclusiveLock on object 0 of class 1262 of database 0 after 1000.133 ms
[14-1] sql_error_code = 00000 LOG: process 7045 acquired AccessExclusiveLock on object 0 of class 1262 of database 0 after 39265.319 ms
[7-1] sql_error_code = 00000 LOG: process 7132 acquired AccessExclusiveLock on object 0 of class 1262 of database 0 after 12824.407 ms
[7-1] sql_error_code = 00000 LOG: process 8824 acquired AccessExclusiveLock on object 0 of class 1262 of database 0 after 6362.509 ms
1262 here refers to pg_database:
=> select 1262::regclass;
+-------------+
| regclass |
+-------------+
| pg_database |
+-------------+
We are running Postgres 9.6.5 on Heroku.
From what I understand, an AEL will be taken for "heavy" operations such as DROP TABLE, TRUNCATE, REINDEX [1]... Our runtime operations consist of a number of stored procedures, each of which insert/update/delete on multiple tables (deletes are rarer). We do not perform any of the operations listed above and in the linked documentation at runtime, and there were no releases/maintenance (by us) running at this time.
I haven't managed to find any documentation giving examples of when this lock could be taken during the "normal operation" outlined above. My questions are:
What could have caused the AELs (if I have given enough information to speculate) and are there any best-practices for avoiding them in future?
What else could I look at to help debug the cause?
[1]: Postgres docs - explicit locking
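For the second question, one thing worth capturing while a lock-up is in progress is the content of the lock and activity views. A sketch (pg_blocking_pids() is available from 9.6 on):

```sql
-- Sessions waiting on the pg_database object lock (classid 1262, as in the log),
-- together with the PIDs blocking them and what the waiters were running.
SELECT l.pid,
       pg_blocking_pids(l.pid) AS blocked_by,
       a.state,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'object'
  AND l.classid = 1262
  AND NOT l.granted;
```

Looking up the query of each PID in blocked_by usually identifies the statement that is actually holding the lock.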

UWSGI Flask SQLAlchemy Intermittent PostgreSQL errors with "WARNING: there is already a transaction in progress"

In my UWSGI Flask application I'm getting intermittent errors like the following :
DatabaseError: (psycopg2.DatabaseError) error with no message from the libpq
ResourceClosedError: This result object does not return rows. It has been closed automatically.
NoSuchColumnError: "Could not locate column in row for column 'my_table.my_column_name_that_exists'"
DatabaseError: (psycopg2.DatabaseError) insufficient data in "D" message...lost synchronization with server: got message type "2", length 740303471
In my postgresql log I see: WARNING: there is already a transaction in progress
Refreshing the web page in flask usually resolves the error.
Here are the steps I take to reproduce the error:
stop the application
sudo service postgresql restart
start the application
navigate to a web page in my flask app that does several simultaneous queries
expected behavior: no database errors logged
actual behavior: one or more of the errors listed above occur
I tried increasing the verbosity of postgresql logging and saw what appears to be inappropriate sharing of virtual transactions; e.g. the following shows all log entries with virtual transaction 2/53 and corresponds to the above errors:
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: BEGIN
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: SELECT 1
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: SELECT my_table.id AS my_table_id, ...
FROM my_table
WHERE my_table.id = 'my_id'
LIMIT 1
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: BEGIN
process 8548 session 5901589a.2164 vtransaction 2/53 WARNING: there is already a transaction in progress
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: SELECT 1
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: SELECT my_other_table.id AS my_other_table_id, ...
FROM my_other_table
WHERE 'my_other_id' = my_other_table.id
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: SELECT my_table.id AS my_table_id, ...
FROM my_table
WHERE my_table.id = 'my_id'
LIMIT 1
process 8548 session 5901589a.2164 vtransaction 2/53 LOG: statement: ROLLBACK
These errors are symptoms of database connections being shared incorrectly by multiple threads or processes.
By default, uwsgi forks the process after the application is created in the wsgi-file. If creating the application opens database connections that may later be re-used, you will likely end up with forked processes sharing corrupt database state. To resolve this in uwsgi there are two options:
do not create database connections until after the application is created, OR
call uwsgi with the --lazy-apps option, which changes uwsgi to fork before the application is created
There are negative performance consequences to lazy-apps mode (see the uwsgi docs on preforking vs lazy-apps vs lazy), so avoiding database usage during app creation is generally the better option.
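For the second option, the switch can also live in the uwsgi ini file; a minimal sketch (the module path is a placeholder for your own app):

```ini
[uwsgi]
# Load the application after forking, separately in each worker,
# so no database connection is ever shared across processes.
lazy-apps = true
# Placeholder: point this at your Flask application object.
module = myapp:app
```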
Thanks to univerio for explaining this in the comments.