I would like to delete a file from the server using plpgsql.
First I lo_import and lo_get the contents of the file, perform some updates/inserts, then lo_export the file to another folder.
Now what I would like to do after a successful lo_export is to remove the original file.
Is there a way to do this?
PostgreSQL has no native way to manipulate files; that is a task for the application server. As a workaround, you can use external procedures - stored procedures (functions) written in an untrusted language such as PL/Perlu or PL/Pythonu.
PL/pgSQL is a trusted language. That means no access to files, the operating system, or any network functionality is possible.
As Pavel pointed out, you must use an untrusted language. However, functions in any untrusted language must be created by a superuser; no regular user can do that.
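For example, a minimal sketch using PL/Python (assuming the plpython3u extension is available and you are a superuser; the function name rm_file and the path are only illustrations):

CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION rm_file(path text) RETURNS void AS $$
    import os
    os.remove(path)  # raises an error if the file does not exist
$$ LANGUAGE plpython3u;

-- then, after a successful lo_export:
SELECT rm_file('/path/to/original/file');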
If your user role has the pg_write_server_files permission, you can do COPY (select '') TO '/tmp/yourfilename', which wipes the file's contents and leaves a new text file with a single empty line in its place.
If your user role has pg_execute_server_program, you can run COPY (select '') TO PROGRAM 'rm -f /tmp/yourfilename', and your file will be gone.
Of course, everything here must be carried out with great care. I assume that if your user role has one of these permissions, you are skilled enough to take all precautions.
I am trying to create a trigger that sends an email based on a database event, specifically, when a record is INSERTed in a certain table, I want an email stating that fact to go to the SysAdmin.
I can successfully do the following from a SQL window in iSeries Navigator:
CL:SNDDST TYPE(*LMSG)
TOINTNET(('sysadmin@mycompany.com'))
DSTD('this is the Subject Line')
LONGMSG('This is an Email sent from iSeries box via Navigator')
...and an email gets sent. Which means that the necessary SMTP stuff is there and working.
So all I'm trying to do is encapsulate this code, perhaps with some data changes (e.g. "A record has been added to the XYZ table on whatever-the-sysdate-is"). Navigator has some tantalizing examples that call CL to do some plain-vanilla things, but no clue as to how to make it work in a trigger. I know how to write triggers that do "database stuff", but not this CL stuff. And this is iSeries DB2, so I don't have access to UTL_MAIL.
I know next to nothing about CL, DDS or other iSeries internals... I would prefer not to have to create an external Java program, but will do that as a last resort...but even then, I'm having a hard time finding straightforward examples.
Thanks in advance.
First off, note that SNDDST isn't the best choice for internet mail from the IBM i. Basically, SNDDST is a relic from the SNADS networking days that IBM hacked into supporting SMTP emails. There are free alternatives, or if you're reasonably current on fixes for 7.1 then you should have the Send SMTP E-mail (SNDSMTPEMM) command available.
The Run SQL Scripts window of iNav does indeed support CL commands using the CL: prefix. But that's not the same thing as having the query engine itself understand CL.
The CL: prefix isn't going to work inside an SQL trigger.
You could, however, use the QCMDEXC stored procedure to call a CL command. But I wouldn't necessarily call that the best option.
The IBM i supports using "external" stored procedures and triggers. Theoretically, you could use a CL program that invokes the SNDSMTPEMM command directly. But given your desire to include data from the table, I wouldn't recommend that approach, as you'd be tied to the table structure.
Instead, create your own UTLMAILSND CL program that invokes SNDSMTPEMM. Then define the UTLMAILSND program as an external stored procedure (you can even give it a longer SQL name of UTIL_MAIL_SEND).
Now you can call your UTIL_MAIL_SEND() procedure from your SQL trigger.
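A rough sketch of the registration, assuming a CL program UTLMAILSND compiled into a library MYLIB (the library, parameter names, and lengths are all placeholders):

CREATE PROCEDURE UTIL_MAIL_SEND (
    IN TO_ADDR VARCHAR(256),
    IN SUBJECT VARCHAR(256),
    IN BODY    VARCHAR(4000))
LANGUAGE CL
PARAMETER STYLE GENERAL
EXTERNAL NAME 'MYLIB/UTLMAILSND'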
You need to try the SNDSMTPEMM command. It's like sliced bread compared to SNDDST TYPE(*LMSG). It supports HTML too, which makes for a lot of fun.
Yes, I used SNDSMTPEMM (skipping the HTML for now...).
One big note, however: using this command in a CL program doesn't work when called from SQL. I had to change it to a CLLE program.
So the final answer is as follows: a) an INSERT trigger on the table in question, which calls: b) an (external) PROCEDURE created in the database, which in turn calls: c) the compiled CLLE program object. Works like a charm.
p.s. I create the whole body of the email in the INSERT trigger, and pass it along, eventually to the CLLE program. This allows me to have just this one CLLE program to report on any INSERT/UPDATE/DELETE anywhere in the database.
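In outline, the trigger side of that arrangement might look like the following (the table, trigger name, and message text are hypothetical):

CREATE TRIGGER XYZ_INSERT_TRG
AFTER INSERT ON XYZ
REFERENCING NEW AS N
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    -- build the message body here and pass it to the external procedure
    CALL UTIL_MAIL_SEND(
        'sysadmin@mycompany.com',
        'Record added to XYZ',
        'A record has been added to the XYZ table on ' || VARCHAR(CURRENT TIMESTAMP));
END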
I need access to the complete source code of objects in order to automate certain tasks. For example: the complete source of a view is the view itself, its rules, triggers, privileges...
By using different PostgreSQL tools like PgAdmin, pg_dump, psql, this can easily be fetched, but I need to be able to access it through a (sql/plpgsql) function call.
It's not too difficult to implement an API looking like this: getFunctionSource, getTableSource, and so on. However, it looks like this code would need a lot of maintenance across different database versions.
Is there an officially maintained or well-tested extension, API, pg_dump wrapper, or anything else I can use?
If you run psql -E, you'll see the hidden queries that psql runs to output data definitions.
A function's raw source, for instance, can be found by running \df foo, reading the query, and subsequently trying:
select prosrc from pg_proc where proname = 'foo'
\sf foo doesn't reveal the underlying query using that approach, but a cursory peek at the docs on system information functions (of which there are many) should suggest that it's just a wrapper around:
select pg_get_functiondef('foo'::regproc);
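The same pg_get_* family covers most other object types; for example (the object names here are placeholders):

select pg_get_viewdef('myview'::regclass);
select pg_get_triggerdef(oid) from pg_trigger where tgname = 'mytrigger';
select pg_get_ruledef(oid) from pg_rewrite where rulename = 'myrule';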
A few views to get you started, if you go down that route:
https://gist.github.com/ddebernardy/7893922
(You'll want to create a "system" schema before running the file using \i in psql.)
I'm attempting to do a code first migration to turn on file streaming so that it doesn't have to be done manually.
I can of course configure file streaming at a database level (although I can't set the setting on the service...)
But I need to execute something like this:
ALTER DATABASE DBNAME
ADD FILEGROUP FILESTREAMGroupName
CONTAINS FILESTREAM
So I need to get the name of the database that I'm working from.
Once I have that, I can figure out the path to the main MDF by looking up the files, and then set the path for the filestream group as my next command. But I just cannot figure out how to get the database name while in the DbMigration.
Ideas?
If you are using MSSQL 2012 or later, you can use
ALTER DATABASE CURRENT
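For example, combined with the statement from the question:

ALTER DATABASE CURRENT
ADD FILEGROUP FILESTREAMGroupName
CONTAINS FILESTREAM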
Colin's comment about being a dupe is correct. The easiest way to do this is to use db_name() in your script.
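A sketch of what that could look like as the SQL you pass from the migration (the filegroup name is taken from the question; QUOTENAME guards against unusual database names):

DECLARE @sql nvarchar(max) =
    N'ALTER DATABASE ' + QUOTENAME(DB_NAME()) +
    N' ADD FILEGROUP FILESTREAMGroupName CONTAINS FILESTREAM';
EXEC (@sql);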
I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it should display its data. Easy enough to understand.
The question I have now is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, and my theory is that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from TSQL; xp_readerrorlog is widely known, but it's also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file location but there is no special API for reading it because it's just a text file.
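If you only need the file's location from TSQL, SERVERPROPERTY should expose it on reasonably recent versions:

select SERVERPROPERTY('ErrorLogFileName');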
Is it possible to set a database role before running a report? I have a number of databases each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management and so on. None of these are default roles.
In SQL*Plus or TOAD I can do SET ROLE before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event for the ODA Data Source, but I have not found any examples on how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen method of the Data Source. You can use JavaScript or a Java event handler to execute the SET ROLE statement, or to call a stored procedure that will execute it for you. This happens after the initial db connection is made, but before the Data Set query runs. It will be a little tricky to use the data source connection to make that call, however, and I don't have the code right now to provide as an example.
Another way is to create a stored proc Data Set that will execute the desired command, and have that execute first. Drag and drop the Data Set into the report design and make it invisible. It will run first, before any other queries. Not the cleanest solution, but easy to do.
Hope that helps
Le Birt Expert
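If you do go the afterOpen route, the statement sent through the connection can be an anonymous PL/SQL block, which is one of the contexts Oracle permits DBMS_SESSION.SET_ROLE to run in (the role name here is hypothetical):

BEGIN
    DBMS_SESSION.SET_ROLE('REPORT_ROLE');
END;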
You can write a login trigger and do a SET ROLE in this trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, osuser, program, and machine of the user who wants to log in.
The approach of using a stored procedure to set the role won't work, at least not on Apache Derby. The reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself. After returning from the procedure, the role is the same as before the procedure was called, i.e. the report executes as if no role had ever been set.