Is there any way/tool to trace and debug a query in PostgreSQL 9.3.18?
I'm a SQL programmer, and sometimes I need to trace and debug my queries and see the values of different fields at execution time. I've Googled this but didn't get any relevant results.
Any ideas would be appreciated.
pgAdmin (a GUI for interacting with the database), which is sometimes bundled with PostgreSQL, includes a step-through debugger for calls to Postgres database functions (as opposed to every query that goes to the server).
https://www.pgadmin.org/docs/pgadmin4/4.29/debugger.html
Before using it you have to enable it as a plugin/library in pgAdmin.
The debugger steps from statement to statement, so a complex single statement will sometimes execute without letting you step through its details. Still, if you need a basic step-through of a longer multi-statement function, or variable values at certain points, it can be useful. Note that this debugging applies to database functions, not general queries.
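For reference, enabling it generally means installing the pldebugger extension on the server and preloading its library; a minimal sketch, assuming the extension binaries are already installed:
shared_preload_libraries = 'plugin_debugger'   # in postgresql.conf; requires a server restart
CREATE EXTENSION pldbgapi;                     -- then run this in the database you want to debug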
Related
For years I have used MS Access and its query tool for building queries. The tool is so good that I have never been forced to do much SQL programming myself; the query tool has done all the work.
However, now I have started using Postgres and the Graphical Query Builder in pgAdmin. It seems that the Graphical Query Builder is rather primitive: it cannot be used if you need an aggregate function like SUM or COUNT.
Is this really correct?
Does anybody know a better graphical query builder for PostgreSQL?
Until now I have 'solved' the problem by linking MS Access to the Postgres database through ODBC and using MS Access's query tool to generate SQL commands, which I then copy into pgAdmin's Graphical Query Builder and edit until everything is correct.
SQL Maestro has a query designer, although it is PC-only, AFAIK:
http://www.sqlmaestro.com/products/postgresql/maestro/screenshots/getting_started/visual_query_builder/
EMS SQL Manager also has a query designer:
http://www.sqlmanager.net/en/products/postgresql/manager/screenshots
Try SQLeo. It's free and has a specific feature (autosavepoint) to prevent:
ERROR: current transaction is aborted, commands ignored until end of transaction block.
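For context, that error appears after any statement inside a transaction fails: every later command is rejected until you roll back. An autosavepoint feature essentially automates the following pattern (a minimal sketch of the underlying SQL):
BEGIN;
SAVEPOINT sp;                        -- set automatically before each statement
SELECT * FROM missing_table;         -- fails and poisons the transaction
ROLLBACK TO SAVEPOINT sp;            -- the transaction is usable again
SELECT 1;                            -- later statements now work
COMMIT;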
Try a browser-based query builder for PostgreSQL called Skyvia. You don't have to worry about which operating system it works on; it works wherever a browser exists. Simply make a connection to your PostgreSQL database and start building your queries.
Note that you can still switch between SQL code and query builder if you want to, just like in MS Access. Get the details here.
I want to log the actual SQL statements executed against a Postgres instance. I am aware that I can enable logging of the SQL statements. Unfortunately, this doesn't log the actual SQL, but rather a parsed version, with certain parameters stripped out and listed separately.
Is there a tool for reliably reconstituting this output into executable sql statements?
Or is there a way of intercepting the SQL that is sent to the Postgres instance, so that it can be logged?
We want to be able to replay these sql statements against another database.
Thanks for your help!
Actually, PostgreSQL does log exactly the SQL that got executed. It doesn't strip parameters out. Rather, it doesn't interpolate them in, it logs what the application sent, with bind parameters separate. If your app sends insert into x(a,b) values ($1, $2) with bind params 42 and 18, that's what gets logged.
There's no logging option to interpolate bind parameters into the query string.
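For illustration, with statement logging enabled, the example above typically shows up in the server log along these lines (the exact prefix depends on log_line_prefix):
LOG:  execute <unnamed>: insert into x(a,b) values ($1, $2)
DETAIL:  parameters: $1 = '42', $2 = '18'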
Your last line is the key part. You don't want logging at all. You're trying to do statement based replication via the logs. This won't work well, if at all, due to volatile functions, the search_path, per-user settings, sequence allocation order/gap issues, and more. If you want replication don't try to do it by log parsing.
If you want to attempt statement-based replication look into PgPool-II. It has a limited ability to do so, with caveats aplenty.
Set log_statement to all in postgresql.conf. See the documentation for runtime-config-logging.
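A minimal sketch of the relevant postgresql.conf settings (logging_collector needs a restart; log_statement can be changed with a reload):
log_statement = 'all'            # none | ddl | mod | all
logging_collector = on           # write logs to files instead of stderr only
log_directory = 'pg_log'         # relative to the data directory; name varies by version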
I have a SQL Server 2008 R2 with a database in it.
How can I find a certain query that was executed, and from what IP?
I have tried to go through the transaction logs, but I can't make sense of anything there.
You should use SQL Server Profiler. It's usually installed by default - look in the SQL Server folder on the Start Menu. When you open it, start a new trace and select the database. In the Trace Properties dialog choose the TSQL template. This will then record all the queries running on the database, along with a whole lot of other stuff. It's not massively easy to track stuff down in here, but look for the BatchStarting events to find the SQL that gets run. Then you should run the procedure sp_who2 on the database so you can match up SPIDs in the profiler to logins.
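For the SPID-matching step, sp_who2 can simply be run in a query window alongside the trace:
EXEC sp_who2;   -- lists SPIDs with login, host name, status and current database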
I'm encountering some major performance problems with simple SQL queries generated by the Entity Framework (4.2) running against SQL Server 2008 R2. In some situations (but not all), EF uses the following syntax:
exec sp_executesql 'DYNAMIC-SQL-QUERY-HERE', @param1...
In other situations it simply executes the raw SQL with the provided parameters baked into the query. The problem I'm encountering is that queries executed with sp_executesql ignore all indexes on my target tables, resulting in an extremely poorly performing query (confirmed by examining the execution plan in SSMS).
After a bit of research, it sounds like the issue might be caused by 'parameter sniffing'. If I append the OPTION(RECOMPILE) query hint like so:
exec sp_executesql 'DYNAMIC-SQL-QUERY-HERE OPTION(RECOMPILE)', @param1...
The indexes on the target tables are used and the query executes extremely quickly. I've also tried toggling on the trace flag used to disable parameter sniffing (4136) on the database instance (http://support.microsoft.com/kb/980653), however this didn't appear to have any effect whatsoever.
This leaves me with a few questions:
Is there any way to append the OPTION(RECOMPILE) query hint to the SQL generated by Entity Framework?
Is there any way to prevent Entity Framework from using exec sp_executesql, and instead simply run the raw SQL?
Is anyone else running into this problem? Any other hints/tips?
Additional Information:
I did restart the database instance through SSMS; however, I will try restarting the service from the service management console.
Parameterization is set to SIMPLE (is_parameterization_forced: 0)
Optimize for ad hoc workloads has the following settings:
value: 0
minimum: 0
maximum: 1
value_in_use: 0
is_dynamic: 1
is_advanced: 1
I should also mention that if I restart the SQL Server service via the service management console AFTER enabling trace flag 4136 with the script below, that appears to actually clear the trace flag... perhaps I should be doing this a different way...
DBCC TRACEON(4136,-1)
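That behavior is expected: DBCC TRACEON only lasts until the service restarts. A sketch of both approaches:
DBCC TRACEON(4136, -1);   -- global, but cleared at the next service restart
-- To persist the flag across restarts, add -T4136 to the service's
-- Startup Parameters in SQL Server Configuration Manager, then restart.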
tl;dr
update statistics
We had a delete query with one parameter (the primary key) that took ~7 seconds to complete when called through EF and sp_executesql. Running the query manually, with the parameter embedded in the first argument to sp_executesql, made the query run quickly (~0.2 seconds). Adding option (recompile) also worked. Of course, those two workarounds aren't available to us, since we're using EF.
Probably due to cascading foreign key constraints, the execution plan for the long-running query was, uhmm..., huge. When I looked at the execution plan in SSMS I noticed that the arrows between the different steps were in some cases wider than others, possibly indicating that SQL Server had trouble making the right decisions. That led me to think about statistics. I looked at the steps in the execution plan to see which table was involved in the suspect steps. Then I ran update statistics Table for that table. Then I re-ran the bad query. And I re-ran it again. And again, just to make sure. It worked. Our perf was back to normal. (Still somewhat worse than non-sp_executesql performance, but hey!)
It turned out that this was only a problem in our development environment. (And it was a big problem because it made our integration tests take forever.) In our production environment, we had a job running that updated all statistics on a regular basis.
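In case it helps, the fix boiled down to something like this (the table name is just an example):
UPDATE STATISTICS dbo.ParentTable;   -- refresh statistics for the suspect table
EXEC sp_updatestats;                 -- or, more bluntly, refresh the whole database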
At this point I would recommend:
Set the optimize for ad hoc workloads setting to true.
EXEC sp_configure 'show advanced', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'optimize for ad hoc', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'show advanced', 0;
GO
RECONFIGURE WITH OVERRIDE;
GO
If after some time this setting doesn't seem to have helped, only then would I try the additional support of the trace flag. Trace flags are usually reserved as a last resort. Set the trace flag as a startup parameter via SQL Server Configuration Manager, as opposed to in a query window using the global flag. See http://msdn.microsoft.com/en-us/library/ms187329.aspx
I need to see the queries submitted to a PostgreSQL server. Normally I would use SQL Server profiler to perform this action in SQL Server land, but I'm yet to find how to do this in PostgreSQL. There appears to be quite a few pay-for tools, I am hoping there is an open source variant.
You can use the log_statement config setting to get a list of all the queries sent to a server:
https://www.postgresql.org/docs/current/static/runtime-config-logging.html#guc-log-statement
Just set that and the log file path, and you'll have the list. You can also configure it to log only long-running queries.
You can then take those queries and run EXPLAIN on them to find out what's going on with them.
https://www.postgresql.org/docs/9.2/static/using-explain.html
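For example, taking a query from the log and inspecting its plan (table and filter are placeholders):
EXPLAIN ANALYZE SELECT * FROM users WHERE id = 123;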
Adding to Joshua's answer, to see which queries are currently running simply issue the following statement at any time (e.g. in PGAdminIII's query window):
SELECT datname,procpid,current_query FROM pg_stat_activity;
Sample output:
datname | procpid | current_query
---------------+---------+---------------
mydatabaseabc | 2587 | <IDLE>
anotherdb | 15726 | SELECT * FROM users WHERE id=123 ;
mydatabaseabc | 15851 | <IDLE>
(3 rows)
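Note that on PostgreSQL 9.2 and later the columns were renamed (procpid became pid, current_query became query) and idle/active is reported in a separate state column, so the equivalent query is:
SELECT datname, pid, state, query FROM pg_stat_activity;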
I discovered pgBadger (https://pgbadger.darold.net/) and it is a fantastic tool that has saved my life many times. Here is an example of a report.
If you open it and go to the 'top' menu, you can see the slowest queries and the most time-consuming queries. Then you can ask for details and see nice graphs that show the queries by hour, and if you use the detail button you can see the SQL text in a pretty form. This tool is free and works perfectly.
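If it helps, a typical invocation points pgBadger at the server's log files and writes an HTML report (paths are examples; the logs must use a log_line_prefix pgBadger understands):
pgbadger /var/log/postgresql/postgresql-*.log -o report.html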
I need to see the queries submitted to a PostgreSQL server
As an option, if you use pgAdmin (in my screenshot, pgAdmin 4 v2.1), you can observe queries via the "Dashboard" tab:
Update, June 2022: answering the questions in the comments.
Question 1: My long SQL query gets truncated, is there any workaround?
Follow steps below:
Close pgAdmin
Find the postgresql.conf file. On my computer it is located at c:\Program Files\PostgreSQL\13\data\postgresql.conf. If you can't find it, read this answer for more details.
Open the postgresql.conf file and find the property called track_activity_query_size. By default the value is 1024, which means any query longer than 1024 characters will be truncated. Uncomment this property and set a new value, for example:
track_activity_query_size = 32768
Restart the PostgreSQL service on your computer
P.S.: now everything is ready, but keep in mind that this change can slightly decrease performance. From a development/debugging standpoint you won't see any difference, but don't forget to revert this property in the production environment. For more details read this article.
Question 2: I ran my function/method that triggers a SQL query, but I still can't see it in pgAdmin; or sometimes I see it, but it runs so quickly that I can't even expand the session on the 'Dashboard' tab?
Answer: Try to run your application in 'debug' mode and set a breakpoint right before you close the connection to the database. At the same time (while you are debugging), click the 'refresh' button on the 'Dashboard' tab in pgAdmin.
You can use the pg_stat_statements extension.
If running the db in Docker, just add this command in docker-compose.yml; otherwise look at the installation instructions for your setup:
command: postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c max_connections=200
And then in the db run this query:
CREATE EXTENSION pg_stat_statements;
Now, to see the operations that took the most time, run:
SELECT * FROM pg_stat_statements ORDER BY total_time/calls DESC LIMIT 10;
Or play with other queries over that view to find what you are looking for.
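One caveat: on PostgreSQL 13 and later the timing columns were renamed (total_time became total_exec_time), so the per-call ordering above becomes something like:
SELECT query, calls, total_exec_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY avg_ms DESC
LIMIT 10;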
All those tools like pgBadger or pg_stat_statements require access to the server, and/or altering the server settings/server log settings, which is not such a good idea, especially if it requires a server restart, and because logging slows everything down, including production use.
In addition, extensions such as pg_stat_statements don't really show the queries, let alone in chronological order, and pg_stat_activity doesn't show you anything that isn't running right now; on top of that, it also shows queries from users other than you.
Instead of running any such crap, you can add a TCP-proxy in between your application and PostgreSQL-server.
Then your TCP-proxy reads all the sql-query-statements from what goes over the wire from your application to the server, and outputs it to console (or wherever). Also it forwards everything to PostgreSQL and returns the answer(s) to your application.
This way, you don't need to stop/start/restart your db-server, you don't need admin/root rights !ON THE DB-SERVER! to change the config file, and you don't need any access to the db-server. All you need to do is change the db connection string in your application (e.g. in your dev environment) to point to the proxy server instead of the sql-server (the proxy-server then needs to point to the sql-server). Then you can see (in chronological order) what your <insert_profanity_here> application does on the database - and other people's queries don't show up (which makes it even better than sql-server-profiler). [Of course, you can also see what other people do if you put it on the db server on the old db port and assign the db a new port.]
I have implemented this with pg_proxy_net
(it runs on Windows, Linux and Mac and doesn't require OS dependencies, as it is a self-contained .NET Core deployment).
That way, you get approximately "the same" as you get with sql-server profiler.
Wait, if you aren't disturbed by other people's queries, what you get with pg_proxy_net is actually better than what you get with sql-server profiler.
Also, on github, I have a command-line MS-SQL-Server profiler that works on Linux/Mac.
And a GUI MS-SQL-Express-Profiler for Windows.
The funny thing is, once you have written one such tool, writing some more is just a piece of cake and done in under a day.
Also, if you want to get pg_stat_statements to work, you need to alter the config file (postgresql.conf), adding tracking and preloading libraries, and then restart the server:
CREATE EXTENSION pg_stat_statements;
-- D:\Programme\LessPortableApps\SQL_PostGreSQL\PostgreSQLPortable\Data\data\postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
You find the documentation for the PostgreSQL protocol here:
https://www.postgresql.org/docs/current/protocol-overview.html
You can see how the data is written into the TCP-buffer by looking at the source code of a postgresql-client, e.g. FrontendMessages of Npgsql on github:
https://github.com/npgsql/npgsql/blob/main/src/Npgsql/Internal/NpgsqlConnector.FrontendMessages.cs
Also, just in case you have a .NET application (with source code) that uses Npgsql, you might want to have a look at Npgsql.OpenTelemetry.
PS:
To configure the logs, see ChartIO Tutorial and TablePlus.
Cheers !
Happy "profiling" !