Run an execution plan directly in PostgreSQL

Is it possible to run an execution plan directly in PostgreSQL?
I did not find anything about it after quite a bit of searching in the PostgreSQL documentation and on the internet.

No, it is not possible to directly execute a query plan in PostgreSQL. You must run actual SQL.
In theory you could customise the PostgreSQL executor to accept plans without the corresponding SQL by feeding in plan trees. This would be a pretty big job and I'm sure there are many things that'd make it harder that I don't even know about.
You really need to just run SQL.
There is no reverse-compiler to turn an execution plan back into SQL.

Related

Logging SQL commands but not committing them

I'm looking for some advice for a system that has a "dry run" mode that will be used to help with the QA of data transformations. The idea is to be able to log and report what would happen to the database, and then apply that log when not in dry-run mode.
We've considered creating a sort of common-format CSV file, but that has limited us. My other thought was to log the SQL inserts/updates/deletes so we can view them in dry-run mode and apply them in production mode, but that certainly adds work and raises concerns about SQL injection (though I can use parameterized queries).
What would be ideal is to use something within Postgres, such as transactions that roll back but still let me get the SQL output from within them.
Has anyone had to solve something like this? Is this kind of a pipe dream that may be more trouble than it's worth?
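For what it's worth, here's a minimal sketch of the rollback idea (the orders table and its columns are hypothetical): the RETURNING rows go back to the client, which can write them to a dry-run report before the transaction is discarded, and nothing persists after the ROLLBACK. In apply mode you would COMMIT instead.

    BEGIN;

    -- Perform the transformation; RETURNING sends the affected rows to the client.
    UPDATE orders
       SET status = 'archived'
     WHERE created_at < now() - interval '1 year'
    RETURNING id, status AS would_be_status;

    -- Dry run: throw the changes away (use COMMIT in apply mode).
    ROLLBACK;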

How to DoS the Postgres Database

I am doing some small research work on how to protect a database against DDoS attacks.
I am using a Postgres database in my testing.
I want to perform a DDoS of the database on my local machine, but I don't really know how to do it.
My plan is to create a script that runs a bunch of queries, but I want these queries to take as much time to complete as possible.
I saw this example: select tab1 from (select decode(encode(convert(compress(post) using latin1),concat(post,post,post,post)),sha1(concat(post,post,post,post))) as tab1 from table_1)a;
But I am failing to replicate it in Postgres.
I need help translating this query to Postgres, or other examples of functions or queries that would take a long time to complete.
Edit:
Sleep functions might not work; they don't load the system enough.
In my understanding, a DDoS should be performed with functions that take a long time to run and consume lots of the system's compute power.
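For illustration, one kind of CPU-bound query Postgres will happily chew on is hashing a few million generated rows; the series size here is arbitrary and just controls how long it runs:

    -- Burns CPU rather than sleeping: hash every value in a large generated series.
    SELECT count(md5(i::text))
    FROM generate_series(1, 5000000) AS s(i);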

Azure Database, EF, Time out issues

I have taken over an existing MVC website which uses Entity Framework and Hangfire, is hosted on Azure, and uses an Azure SQL database. Every so often the website times out.
I'm new to the Azure portal, Entity Framework, and Hangfire.
If I increase the DTUs, the timeout issues clear up.
I'm looking for ways to diagnose why the website times out. I have added error logging using ELMAH and checked Hangfire, but this doesn't give me any further information.
Is there anything in the Azure portal that can help?
If it "times out" and if "increasing DTU resolves timeouts" and these observations are true (I think it's on you to really convince yourself this is absolutely true, don't make this assumption lightly) then the usual and obvious candidate is "a slow sql query". Entity Framework is often used with linq to create sql queries without writing sql. These queries are often fine for very simple tasks, such as someData.Where(x=>x.Id == 1).First(), however, if linq is used to join tables, or create complex associations, the generated sql can become monstrously bad, from a performance perspective. You can add logging to write out the sql generated by linq, or you can try to trace the database to see what sql is running on it. If tracing is out of the question, there are still meta queries you can use to view things like cached query plans and SQL Server can give you estimated costs and cached execution counts.
You can still hang yourself without using LINQ, since you can still use stored procedures with EF. Far too many developers are naive about SQL performance; you need to comb over your back end, learn the schema and the stored procedures, and inspect the SQL contents of everything. Check for any database triggers (easy to miss). Red flags are subqueries, too many joins, too many results from a query, lots of string manipulation in a query, joining tables on strings, and XML/JSON-based SQL work.
Be aware that slow SQL queries become even slower when load is high, and as slow queries pile up, they only take more time to resolve. This can also cause debilitating table locking, depending on the nature of the query.
But queries can be performant and still cause locking, e.g. one table is written to often and it blocks other writes or reads on that table. This is a little harder to diagnose, but you can figure it out by carefully inspecting logs of database calls and how long they take to execute. There are also SQL queries you can run on the database to diagnose long-running queries, or to see which tables are locked at a given point in time.
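For example, something along these lines (the column choice is illustrative) shows what is running and what is blocked right now on SQL Server / Azure SQL:

    -- Currently executing requests, with whoever is blocking them (times in ms).
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.total_elapsed_time AS elapsed_ms,
           SUBSTRING(st.text, 1, 200) AS query_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE r.session_id <> @@SPID
    ORDER BY r.total_elapsed_time DESC;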
Finally, check for any back-end WebJobs for your application. If timeouts occur on recurring days or at recurring times, then somebody's batch SQL could be blocking your production database from being read.
But this is all speculation. I think you need to do more research to determine what is actually causing the site to become unresponsive. If you can log response times for common queries, you can rule SQL-based latency in or out as the culprit and work from there. There's nothing inherently "amiss" about any of the technologies you specified.
If queries are performant but still causing issues, a long-term solution is to add something like a message queue and batch your SQL work intelligently, or just make the database work asynchronous so it doesn't block the UI.
You should correlate any logged timeouts with Azure's monitoring. Azure can give you CPU/RAM/page visits and such on the dashboard.
SQL Azure is a bit of a different beast. It doesn't have the on-demand performance of a dedicated DB unless you're prepared to throw serious $$ at it. And even then ...
EF, when used well, can perform quite well. When used poorly it can be a dog, and those problems are compounded on a platform like SQL Azure.
The first thing is to check that your EF contexts are set up to use an execution strategy suited to Azure: https://learn.microsoft.com/en-us/ef/ef6/fundamentals/connection-resiliency/retry-logic
The next thing would be to see what kinds of SQL tracing you can run on Azure. Tracing is essential to see what EF is doing behind the scenes. I'm not familiar with the tools available for Azure; in my case, my Azure experience was running SQL Server on VMs, because SQL Azure was too immature, not HIPAA-compliant at the time, and expensive for the DTU estimates we were able to get. Worst case, can you restore a database backup into a SQL Server instance and point a copy of your application environment temporarily at that to run through common usage scenarios? Using a SQL trace you can pick up on exactly when and how often EF is executing queries, and what queries it is executing.
Things to look at:
How many queries are running? If you are loading a set of records and expect one query, are there a whole heap of queries getting sent? This would indicate lazy-load calls being triggered.
What queries are being run? Is it selecting a lot more fields than are being displayed? This is potentially a case where entire entities are being loaded when a .Select() could be used to reduce the amount of data, or even where entire sets of entities are being loaded that aren't relevant to what is displayed/done, such as someone calling .ToList() prior to just doing a .Count() or .Any(), or doing a .FirstOrDefault() just to perform a != null check.
Is the database properly indexed? Copy some of the heavier queries into SQL Manager and execute them with an execution plan. Are there indexing suggestions?
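As a sketch of that manual check (the table names below are made up; paste your own generated SQL in their place), turning on the I/O and timing statistics alongside the actual execution plan makes index problems obvious:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Hypothetical stand-in for a query EF generated.
    SELECT TOP (50) o.Id, o.CreatedAt, c.Name
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.Id = o.CustomerId
    ORDER BY o.CreatedAt DESC;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;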
The common sins of developing with EF and other ORMs boil down to "pulling too much, too often." It's surprising how many of the clients I've worked with have development teams that have never used a profiler to inspect their ORM usage efficiency. (And I'm talking 0% so far.)

Giving access to execute SQL queries on a static database

I am working on a project where I want to give people the ability to execute SQL queries on a PostgreSQL database. I then need to prevent people from hacking/attacking my database.
I thought that maybe a way to do that is by giving only view access on the database connection, and using EXPLAIN ANALYSE to calculate the cost of the SQL query.
Is EXPLAIN ANALYSE trustworthy enough to make sure there are no cheap ways to take the website down?
Do you have suggestions?
EXPLAIN ANALYSE will execute the query, including any side-effects it may have. PostgreSQL also allows running arbitrary Perl and Python code if configured to do so, so be careful. You're likely better off running PostgreSQL instances in per-request VMs or in similar highly isolated environments.
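To make that concrete: the statement under EXPLAIN ANALYZE really runs, so the DELETE below (against a hypothetical accounts table) actually removes rows. Wrapping it in a transaction and rolling back is the usual way to keep the timing information without keeping the side effects:

    BEGIN;
    -- The plan and timings are reported, but the rows are genuinely deleted here...
    EXPLAIN ANALYZE DELETE FROM accounts WHERE balance < 0;
    -- ...until the transaction is thrown away.
    ROLLBACK;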

Does PostgreSQL allow running stored procedures in parallel?

I'm working with an ETL tool, Business Objects Data Services, which has the capability of specifying parallel execution of functions. The documentation says that before you can do this, you have to make sure that your database, which in our case is Postgres, allows "a stored procedure to run in parallel". Can anyone tell me if Postgres does that?
Sure. Just run your queries in different connections, and they will run in parallel transactions. Beware of locking though.
You can also call different stored procedures from the same connection (and effectively still run them in parallel) by using DBLink.
See this SO answer for an example.
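As a rough sketch of the dblink approach (the connection string and the slow_report() function are hypothetical, and slow_report() is assumed to return text), the second connection does its work while the first carries on:

    CREATE EXTENSION IF NOT EXISTS dblink;

    SELECT dblink_connect('worker', 'dbname=mydb');
    SELECT dblink_send_query('worker', 'SELECT slow_report()');   -- returns immediately

    -- ... do other work on this connection while slow_report() runs ...

    SELECT * FROM dblink_get_result('worker') AS t(result text);  -- wait for the result
    SELECT dblink_disconnect('worker');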