I need access to the complete source code of database objects in order to automate certain tasks. For example, the complete source of a view is the view itself plus its rules, triggers, privileges, and so on.
Different PostgreSQL tools like pgAdmin, pg_dump and psql can fetch this easily, but I need to be able to access it through a (SQL/plpgsql) function call.
It's not too difficult to implement an API along the lines of getFunctionSource, getTableSource, getViewSource. However, it looks like this code would need a lot of maintenance across different database versions.
Is there an officially maintained or well-tested extension, API, pg_dump wrapper, or anything else I can use?
If you run psql -E, you'll see the hidden queries psql runs against the system catalogs to output data definitions.
A function's raw source, for instance, can be found by running \df foo, reading the query, and subsequently trying:
select prosrc from pg_proc where proname = 'foo'
\sf foo doesn't reveal the relevant query using that approach, but a cursory peek at the docs on system information functions (of which there are many) suggests that it's just a wrapper around:
select pg_get_functiondef('foo'::regproc);
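There's a whole family of similar pg_get_*def() system information functions for other object types. A quick sketch (the my_view / my_table names below are just placeholders):
-- View, trigger, index and constraint definitions work the same way:
select pg_get_viewdef('my_view'::regclass, true);
select pg_get_triggerdef(t.oid, true)
  from pg_trigger t
 where t.tgrelid = 'my_table'::regclass and not t.tgisinternal;
select pg_get_indexdef(i.indexrelid)
  from pg_index i
 where i.indrelid = 'my_table'::regclass;
select pg_get_constraintdef(c.oid, true)
  from pg_constraint c
 where c.conrelid = 'my_table'::regclass;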
A few views to get you started, if you go that route, posted on GitHub:
https://gist.github.com/ddebernardy/7893922
(You'll want to create a "system" schema before running the file using \i in psql.)
I see how to debug queries stored as functions in the database. But my problem is with an external QGIS plugin that connects to my PostgreSQL 10.4 over the network, runs a complex query and calculations, and stores the results back into PostGIS tables:
FOR r IN c LOOP
SELECT
(1 - ST_LineLocatePoint(path.geom, ST_Intersection(r.geom, path.geom))) * ST_Length(path.geom)
INTO
station
(continues ...)
When it errors out, it just returns that line number as the failing location, with no clue where it was in the loop over hundreds of features. (And any features it has already processed are not stored to the output tables when it fails.) I don't know nearly enough about the plugin or about SQL to hack the external query, and I suspect that if it were a reasonable task the plugin author would have included more revealing debug messages.
So is there some way I could use pgAdmin4 (or anything) from the server side to watch the query process? Even being able to see if it fails the first time through the loop or later would help immensely. Knowing the loop count at failure would point me to the exact problem feature. Being able to see "station" or "r.geom" would make it even easier.
Perfectly fine if the process is miserably slow or interferes with other queries, I'm the only user on this server.
This is not actually a way to watch the RiverGIS query in action, but it is the best I have found. It extracts the failing ST_Intersects() call from the RiverGIS code and runs it under your control, where you can display any clues you want.
When you're totally mystified where the RiverGIS problem might be, run this SQL query:
SELECT
xs."XsecID" AS "XsecID",
xs."ReachID" AS "ReachID",
xs."Station" AS "Station",
xs."RiverCode" AS "RiverCode",
xs."ReachCode" AS "ReachCode",
ST_Intersection(xs.geom, riv.geom) AS "Fraction"
FROM
"<your project name>"."StreamCenterlines" AS riv,
"<your project name>"."XSCutLines" AS xs
WHERE
ST_Intersects(xs.geom, riv.geom)
ORDER BY xs."ReachID" ASC, xs."Station" DESC
Obviously replace <your project name> with the QGIS project name.
Also works for the BankLines step if you replace "StreamCenterlines" with "BankLines". Probably could be adapted to other situations where ST_Intersects() fails without a clue.
You'll get a listing with shorter geometry strings for good cross sections and double-length strings for bad ones. Probably need to widen your display column a lot to see this.
Works for me in pgAdmin4, or in QGIS3 -> Database -> DB Manager -> (click the wrench icon). You could select only bad lines (see the sketch below), but I find the background info helpful.
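If you do want to list only the bad lines, here is an untested sketch of one way to do it. The idea is that a clean crossing intersects the centerline in a single point, while a problem feature yields a MultiPoint (or longer) geometry:
SELECT
    xs."XsecID", xs."ReachID", xs."Station",
    ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) AS intersection_type
FROM
    "<your project name>"."StreamCenterlines" AS riv,
    "<your project name>"."XSCutLines" AS xs
WHERE
    ST_Intersects(xs.geom, riv.geom)
    AND ST_GeometryType(ST_Intersection(xs.geom, riv.geom)) <> 'ST_Point'
ORDER BY xs."ReachID" ASC, xs."Station" DESC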
I am trying to create a trigger that sends an email based on a database event; specifically, when a record is INSERTed into a certain table, I want an email stating that fact to go to the SysAdmin.
I can successfully do the following from a SQL window in iSeries Navigator:
CL:SNDDST TYPE(*LMSG)
TOINTNET(('sysadmin@mycompany.com'))
DSTD('this is the Subject Line')
LONGMSG('This is an Email sent from iSeries box via Navigator')
...and an email gets sent. Which means that the necessary SMTP stuff is there and working.
So all I'm trying to do is encapsulate this code, perhaps with some data changes (e.g. "A record has been added to the XYZ table on whatever-the-sysdate-is"). Navigator has some tantalizing examples that call CL to do some plain-vanilla things, but no clue as to how to make it work in a trigger. I know how to write triggers that do "database stuff", but not this CL stuff. And this is iSeries DB2, so I don't have access to UTL_MAIL.
I know next to nothing about CL, DDS or other iSeries internals... I would prefer not to have to create an external Java program, but will do that as a last resort...but even then, I'm having a hard time finding straightforward examples.
Thanks in advance.
First off, note that SNDDST isn't the best choice for internet mail from the IBM i. Basically, SNDDST is a relic from the SNADS networking days that IBM hacked into supporting SMTP emails. There are free alternatives, or if you're reasonably current on fixes for 7.1 then you should have the Send SMTP E-mail (SNDSMTPEMM) command available.
The Run SQL Scripts window of iNav does indeed support CL commands using the CL: prefix. But that's not the same thing as having the query engine itself understand CL.
The CL: prefix isn't going to work inside an SQL trigger.
You could, however, use the QCMDEXC stored procedure to call a CL command. But I wouldn't necessarily call that the best option.
The IBM i supports using "external" stored procedures and triggers. Theoretically, you could use a CL program that invokes the SNDSMTPEMM command directly. But given your desire to include data from the table, I wouldn't recommend that approach, as you'd be tied to the table structure.
Instead, create your own UTLMAILSND CL program that invokes SNDSMTPEMM. Then define the UTLMAILSND program as an external stored procedure (you can even give it a longer SQL name, such as UTIL_MAIL_SEND).
Now you can call your UTIL_MAIL_SEND() procedure from your SQL trigger.
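As a rough sketch (not tested; MYLIB, UTLMAILSND and the parameter list are placeholders you'd adjust to your own objects), registering the program as an external procedure could look something like this:
-- Register the compiled CL/CLLE program object as an SQL procedure.
-- MYLIB, UTLMAILSND and the two parameters are placeholders.
CREATE PROCEDURE UTIL_MAIL_SEND (
    IN P_SUBJECT VARCHAR(256),
    IN P_BODY    VARCHAR(4000)
)
LANGUAGE CL
PARAMETER STYLE GENERAL
EXTERNAL NAME 'MYLIB/UTLMAILSND';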
You need to try the SNDSMTPEMM command. It's like sliced bread compared to SNDDST TYPE(*LMSG). It supports HTML too, which makes for a lot of fun.
Yes, I used SNDSMTPEMM (skipping the HTML for now...).
One big note, however: using this command in a CL program doesn't work when being called from SQL. I had to change it to a CLLE program.
So the final answer is as follows: a) an INSERT trigger on the table in question, which calls: b) an (external) PROCEDURE created in the database, which in turn calls: c) the compiled CLLE program object. Works like a charm.
p.s. I create the whole body of the email in the INSERT trigger, and pass it along, eventually to the CLLE program. This allows me to have just this one CLLE program to report on any INSERT/UPDATE/DELETE anywhere in the database.
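For completeness, a minimal sketch of what the trigger side of that chain might look like on DB2 for i (the XYZ table, its ID column and the message text are all placeholders; when creating it from a script you may also need to change the statement delimiter because of the semicolon inside the body):
-- AFTER INSERT trigger builds the email text and hands it to the procedure.
CREATE TRIGGER XYZ_INSERT_NOTIFY
AFTER INSERT ON XYZ
REFERENCING NEW AS N
FOR EACH ROW
MODE DB2SQL
BEGIN
    -- N.ID is a placeholder for whatever key column the table has.
    CALL UTIL_MAIL_SEND(
        'Record added to XYZ',
        'A record with key ' || VARCHAR(N.ID) ||
        ' was added to XYZ on ' || VARCHAR(CURRENT TIMESTAMP));
END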
I'm starting to move more logic into the database, using triggers, views, functions, CTEs, etc. When plv8/json comes out for postgres, I can see myself putting lots of logic in there.
I'm having problems with the "standard" way of doing database migrations in Sequel and ActiveRecord. Both let you put arbitrary SQL code into timestamped files. When each file is run, a schema_versions table is updated with the filename (or the timestamp in the filename), which keeps a record of which migrations have been applied to the current database.
If a lot of coding is being done at the database level, that means that modifications to existing views, functions, etc. follow the pattern below:
Migration 1 defines a function and a view that uses that function.
-- Migration 1
create function calculate(x int) returns int as $$
select x + 1;
$$ language sql;
create view foos as (
select something, calculate(something) from a_table
);
Requirements change, and I need to change the function's type. In Migration 2 I have to drop all objects that depend on calculate, and recreate them by copying their entire bodies, even if most of that code didn't change at all!
-- Migration 2
-- Have to drop all views and functions that depend on the
-- `calculate(int)` function.
drop view foos;
-- I could do `drop function calculate(int) cascade`,
-- but I might accidentally drop some objects that wouldn't get recreated below.
drop function calculate(int);
create function calculate(x bigint) returns bigint as $$
select x + 1;
$$ language sql;
-- Now I have to recreate foos.
create view foos as (
select something, calculate(something) from a_table
);
If I'm building a system based on views and functions and triggers, my migrations would be filled with duplicated code, and it's difficult to find the latest version of the code. You might say "don't do that!", but for my purposes (e-commerce, shipping, transactions), I'm finding it's a lot easier and faster to have the database ensure the integrity of the data by doing the logic inside the database.
You can (of course) dump the current database schema (which includes all the code definitions), but I think you lose comments. And you wouldn't generally want to edit a giant file that contains the whole schema.
Any ideas on how to solve this problem?
My best idea is to have the SQL code contained in its own canonical files (app/sql/orders/shipping.sql, app/sql/orders/creation.sql, etc.). Everyone develops directly against these. Whenever it's time for a release, you make a new migration file, look at all the code that changed since the previous release, figure out the dependency chain of the database objects that need to be dropped and recreated, and then copy the SQL from the canonical files into a new Sequel/ActiveRecord migration file. But it's a pain. :/
Thoughts are very welcome. I hope I explained this well enough, I'm cutting back on my caffeine intake and I'm a little groggy atm.
Oh, I asked a similar question on Stack Overflow: "Changing the type of a column used in other views". The answer was a function that let me pass in:
sql code to run
database views to drop and recreate
The function would retrieve the view definition, drop the views, run the sql code, then recreate the view definition (in reverse order of dropping). Perhaps a system of functions like this would help solve the problem of having to copy/paste sql code into the migration files.
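A minimal sketch of what such a function could look like on PostgreSQL (the name run_with_views_recreated is just a placeholder, this is not the exact function from that answer, and it ignores anything attached to the views beyond their definitions, so grants and comments on the views would be lost):
-- Usage (hypothetical): pass the DDL to run and the dependent views,
-- dependents first so they can be dropped in order:
-- select run_with_views_recreated(
--     'drop function calculate(int); create function calculate(x bigint) ...',
--     array['foos']);
create or replace function run_with_views_recreated(ddl text, view_names text[])
returns void as $$
declare
    defs text[] := '{}';
    v text;
    i int;
begin
    -- Remember each view's definition, then drop it.
    foreach v in array view_names loop
        defs := defs || pg_get_viewdef(v::regclass, true);
        execute format('drop view %s', v);
    end loop;

    -- Run the migration code (e.g. the function change).
    execute ddl;

    -- Recreate the views in reverse order of dropping.
    for i in reverse array_length(view_names, 1) .. 1 loop
        execute format('create view %s as %s', view_names[i], defs[i]);
    end loop;
end;
$$ language plpgsql;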
I'd recommend Liquibase.
You create files which track the changes to your database, and these will be run against the database in the correct migration order.
You might find Dave Wheeler's blog posts interesting, starting from here:
http://justatheory.com/computers/databases/simple-sql-change-management.html
My rate of database change is fairly small but I tend to be careless and make small changes to the schema directly, so I've had to come up with a fair bit of infrastructure to catch when I've done so. The basic elements are:
1. A makefile that can rebuild a development database from scratch
2. A set of schema files separated into "modules" (lookups_schema.sql, lookup_data.sql)
3. A set of update files that transition from one revision to the next
4. The corresponding downgrade scripts (I don't usually have these, though some people do)
5. A script to populate my database with a plausible amount of test data
6. Crucially, a test suite via pgTAP that checks my various functions, views and also the upgrade scripts. The upgrade tests can be run against a live database too.
If you have a separate instance of PostgreSQL set up with fsync turned off, or running on a ramdisk, etc., then rebuilding the whole DB and populating it can take seconds (if you don't have too much test data).
Start with #1, #2, then add #6 (pgTAP is very cool), then the rest. The crucial thing is a test suite that checks your in-database code.
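For a flavour of what that looks like, here is a minimal pgTAP sketch (using the calculate() function and foos view from the question as stand-ins):
-- tests/calculate.sql, run with pg_prove or plain psql
begin;
select plan(3);

select has_function('calculate', array['integer']);
select has_view('foos');
select is(calculate(1), 2, 'calculate() adds one');

select * from finish();
rollback;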
There are tools that try to automate schema changes for you, but they are really only good at adding a new column to a table and that sort of thing. Once you have code in your db then they're not much help.
I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it should display its data. Easy enough to understand.
My question now is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, but my theory is that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from TSQL; xp_readerrorlog is widely known, but it's also undocumented so relying on it is risky although of course you can use it if you don't mind that risk.
Using SMO you can find the file location but there is no special API for reading it because it's just a text file.
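To locate the file without SMO, a couple of options (hedged: SERVERPROPERTY('ErrorLogFileName') is only available from SQL Server 2012 onwards, if memory serves, and xp_enumerrorlogs is just as undocumented as xp_readerrorlog):
-- Path of the current error log file (SQL Server 2012+).
SELECT SERVERPROPERTY('ErrorLogFileName') AS current_errorlog;
-- List the error log files with their dates and sizes (undocumented).
EXEC master.dbo.xp_enumerrorlogs;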
Is it possible to set a database role before running a report? I have a number of databases each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management and so on. None of these are default roles.
In SQL*Plus or TOAD I can issue SET ROLE before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event for the ODA Data Source, but I have not found any examples on how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen method of the Data Source using Java. You can use JavaScript or a Java Event Handler to execute the SET ROLE statement, or to call a stored procedure that will execute it for you. This happens after the initial db connection is made, but before the Data Set query runs. It will be a little tricky to use the data source connection to make that call however, and I don't have the code right now to provide as an example.
Another way is to create a stored procedure Data Set that will execute the desired command, and have that execute first. Drag and drop the Data Set into the report design and make it invisible. It will run before any other queries. Not the cleanest solution, but easy to do.
Hope that helps
Le Birt Expert
You can write a logon trigger and do a SET ROLE in this trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, OS user, program and machine of the user who wants to log in.
The approach of using a stored procedure to set the role won't work, at least not on Apache Derby. Reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself; after returning from the procedure, the role will be the same as before the procedure was called, i.e. when the report executes it is as if no role had ever been set.