Save stored procedure result as file - sql-server-2008-r2

I have a stored procedure that PRINTs certain results, like:
Print '-- Start Transection--'
Print 'Transection No = ' + @TransectionId
...
...
Print 'Transection Success'
Print '-- End Transection--'
Is it possible to save the printed result to a file when the procedure is called from the UI? After that we have to mail the file to the user and also offer it for download.

PRINT is for logging and debugging purposes and shouldn't be used to return anything to the caller.
Here's a suggestion: instead of PRINTing, write into a logging table and return a loggingID. Then, query this table from your app and write to a file.
Example: create two tables
CREATE TABLE Logging
(
LoggingID int IDENTITY(1,1) PRIMARY KEY,
Created datetime
)
CREATE TABLE LoggingDetail
(
LoggingDetailID int IDENTITY(1,1) PRIMARY KEY,
LoggingID int FOREIGN KEY REFERENCES Logging,
LoggingText varchar(500)
)
At the beginning of a transaction, create a new loggingID:
INSERT INTO Logging (Created) VALUES (GETUTCDATE())
DECLARE @loggingID INT = @@IDENTITY
Instead of PRINTing the logging messages, do something like
INSERT INTO LoggingDetail (LoggingID, LoggingText) VALUES (@loggingID, '-- Start Transection--')
At the end of your sproc, return @loggingID to the caller. You can now retrieve the log messages from the LoggingDetail table and write them to a file:
SELECT LoggingText FROM LoggingDetail WHERE LoggingID=<loggingID> ORDER BY LoggingDetailID
It might be a good idea to encapsulate the INSERTs in separate sprocs; those sprocs can then write to the log tables and also PRINT the log messages, as sketched below.
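For illustration, here is a minimal sketch of such a helper sproc, assuming the Logging/LoggingDetail tables above; the name LogMessage and its parameters are placeholders, not something from the original answer:
CREATE PROCEDURE dbo.LogMessage
    @loggingID INT,
    @message VARCHAR(500)
AS
BEGIN
    -- write the message to the detail table so the app can read it back later
    INSERT INTO LoggingDetail (LoggingID, LoggingText) VALUES (@loggingID, @message)
    -- still PRINT it, so interactive debugging keeps working
    PRINT @message
END
Your transaction sproc would then call EXEC dbo.LogMessage @loggingID, '-- Start Transection--' instead of PRINTing directly.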

Related

Kafka/KsqlDb : Why is PRIMARY KEY appending chars?

I intend to create a TABLE called WEB_TICKETS where the PRIMARY KEY is equal to the key->ID value. For some reason, when I run the CREATE TABLE instruction the PRIMARY KEY value is appended with the chars 'JO' - why is this happening?
KsqlDb Statements
These work as expected
CREATE STREAM STREAM_WEB_TICKETS (
ID_TICKET STRUCT<ID STRING> KEY
)
WITH (KAFKA_TOPIC='web.mongodb.tickets', FORMAT='AVRO');
CREATE STREAM WEB_TICKETS_REKEYED
WITH (KAFKA_TOPIC='web_tickets_by_id') AS
SELECT *
FROM STREAM_WEB_TICKETS
PARTITION BY ID_TICKET->ID;
PRINT 'web_tickets_by_id' FROM BEGINNING LIMIT 1;
key: 5d0c2416b326fe00515408b8
The following successfully creates the table but the PRIMARY KEY value isn't what I expect:
CREATE TABLE web_tickets (
id_pk STRING PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', VALUE_FORMAT = 'AVRO');
select id_pk from web_tickets EMIT CHANGES LIMIT 1;
|ID_PK|
|J05d0c2416b326fe00515408b8
As you can see the ID_PK value has the characters JO appended to it. Why is this?
It appears as though I wasn't properly setting the KEY FORMAT. The following command produces the expected result.
CREATE TABLE web_tickets_test_2 (
id_pk VARCHAR PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', FORMAT = 'AVRO');
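A likely reason for the stray leading characters: the rekeyed topic's key is Avro-serialized (it inherits AVRO from the source stream), so reading it with the default KAFKA key format exposes the Avro header bytes as text. If you prefer to set the formats explicitly rather than via FORMAT, the following should behave the same (a sketch, untested; web_tickets_test_3 is just a placeholder name):
CREATE TABLE web_tickets_test_3 (
id_pk VARCHAR PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', KEY_FORMAT = 'AVRO', VALUE_FORMAT = 'AVRO');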

SELECT from result of UPDATE ... RETURNING in jOOQ

I'm transforming some old PostgreSQL code to jOOQ, and I'm currently struggling with SQL that has multiple WITH clauses, where each one depends on the previous one. It would be best to keep the SQL logic the way it was written and not change it (e.g. by splitting it into multiple queries to the DB).
It seems there is no way to SELECT from an UPDATE ... RETURNING, for example:
dsl.select(DSL.asterisk())
.from(dsl.update(...)
.returning(DSL.asterisk())
)
I've created some test tables, trying to create some sort of MVCE:
create table dashboard.test (id int primary key not null, data text); --test table
with updated_test AS (
UPDATE dashboard.test SET data = 'new data'
WHERE id = 1
returning data
),
test_user AS (
select du.* from dashboard.dashboard_user du, updated_test -- from previous WITH
where du.is_active AND du.data = updated_test.data
)
SELECT jsonb_build_object('test_user', to_jsonb(tu.*), 'updated_test', to_jsonb(ut.*))
FROM test_user tu, updated_test ut; -- from both WITH clauses
So far this is my jOOQ code (written in Kotlin):
dsl.with("updated_test").`as`(
dsl.update(Tables.TEST)
.set(Tables.TEST.DATA, DSL.value("new data"))
.returning(Tables.TEST.DATA) //ERROR is here: Required Select<*>, found UpdateResultStep<TestRecord>
).with("test_user").`as`(
dsl
.select(DSL.asterisk())
.from(
Tables.DASHBOARD_USER,
DSL.table(DSL.name("updated_test")) //or what to use here?
)
.where(Tables.DASHBOARD_USER.IS_ACTIVE.isTrue
.and(Tables.DASHBOARD_USER.DATA.eq(DSL.field(DSL.name("updated_test.data"), String::class.java)))
)
)
.select() //here goes my own logic for jsonBBuildObject (which is tested and works for other queries)
.from(
DSL.table(DSL.name("updated_test")), //what to use here
DSL.table(DSL.name("test_user")) //or here
)
Are there any workarounds for this? I'd like to avoid changing SQL if possible.
Also, in this project this trick is used very often to get JSON(B) from an UPDATE clause (the table has JSON(B) columns too):
with _updated AS (update dashboard.test SET data = 'something' WHERE id = 1 returning *)
select to_jsonb(_updated.*) from _updated;
and it will be a real step back for us if there is no workaround for this.
I'm using jOOQ version 3.13.3 and Postgres 12.0.
This is currently not supported in jOOQ, see:
https://github.com/jOOQ/jOOQ/issues/3185
https://github.com/jOOQ/jOOQ/issues/4474
The workaround, as always when some vendor-specific syntax is unsupported, is to resort to plain SQL templating.
E.g.
// If you don't need to map data types
dsl.fetch("with t as ({0}) {1}", update, select);
// If you need to map data types
dsl.resultQuery("with t as ({0}) {1}", update, select).coerce(select.getSelect()).fetch();

PostgreSQL getting new id during insert

I need to create a customer number on record insert; the format is 'A' + 4 digits, based on the ID. So record ID 23 -> A0023, and so on. My solution is currently this:
-- Table
create table t (
id bigserial unique primary key,
x text,
y text
);
-- Insert
insert into t (x, y) select concat('A',lpad((currval(pg_get_serial_sequence('t','id')) + 1)::text, 4, '0')), 'test';
This works perfectly. Now my question is: is that 'safe', in the sense that currval(seq) + 1 is guaranteed to be the same value the id column will receive? I think it should be locked during statement execution. Is this the correct way to do it, or is there any shortcut to access the to-be-created ID directly?
Instead of storing this data, you could just query it each time you need it, making the whole thing a lot less error-prone:
SELECT id, 'A' || LPAD(id::varchar, 4, '0')
FROM t
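If the formatted number is needed in many places, one way to follow this advice is to wrap the expression in a view (a sketch; the view and column names are placeholders, not part of the original answer):
CREATE VIEW t_with_customer_number AS
SELECT id,
       'A' || LPAD(id::text, 4, '0') AS customer_number,
       x,
       y
FROM t;
Callers then select customer_number from the view instead of storing a value that could drift out of sync with id.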

Save file (.pdf) in database with Python 2.7

Craig Ringer: I cannot work with large object functions.
My database looks like this; this is my table:
-- Table: files
--
DROP TABLE files;
CREATE TABLE files
(
id serial NOT NULL,
orig_filename text NOT NULL,
file_data bytea NOT NULL,
CONSTRAINT files_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE files
I want to save a .pdf in my database. I saw your last answer (read the file and convert it to a buffer object, or use the large object functions), but I am using Python 2.7.
My code looks like this:
path="D:/me/A/Res.pdf"
listaderuta = path.split("/")
longitud=len(listaderuta)
f = open(path,'rb')
f.read().__str__()
cursor = con.cursor()
cursor.execute("INSERT INTO files(id, orig_filename, file_data) VALUES (DEFAULT,%s,%s) RETURNING id", (listaderuta[longitud-1], f.read()))
But when downloading, I save it like this:
fula = open("D:/INSTALL/pepe.pdf",'wb')
cursor.execute("SELECT file_data, orig_filename FROM files WHERE id = %s", (int(17),))
(file_data, orig_filename) = cursor.fetchone()
fula.write(file_data)
fula.close()
But the downloaded file cannot be opened; it is damaged. I repeat, I cannot work with large object functions.
I tried this but it did not work for me. Can you help?
I am thinking that the psycopg2 Binary function does not use lob functions.
Thus I used:
path="salman.pdf"
f = open(path,'rb')
dat = f.read()
binary = psycopg2.Binary(dat)
cursor.execute("INSERT INTO files(id, file_data) VALUES ('1',%s)", (binary,))
conn.commit()
Just a correction to the INSERT statement: it will fail with null value in column "orig_filename" violates not-null constraint, since orig_filename is defined as NOT NULL. Use instead:
("INSERT INTO files(id, orig_filename,file_data) VALUES ('1','filename.pdf',%s)", (binary,))

UDTF returning a Table on DB2 V5R4 with Dynamic SQL

I need to write a UDF returning a table. I've done it with static SQL.
I've created procedures that prepare a dynamic, complex SQL statement and return a cursor.
But now I need to create a UDF with dynamic SQL that returns a table to be used in an IN clause inside another SELECT.
Is it possible on DB2 V5R4? Do you have an example?
Thanks in advance...
I don't have V5R4, but I have i 6.1 and V5R3. I have a 6.1 example, and I poked around in V5R3 to find how to make the same example work there. I can't guarantee V5R4, but this ought to be extremely close. Generating the working V5R3 code into 'Run SQL Scripts' gives this:
DROP SPECIFIC FUNCTION SQLEXAMPLE.DYNTABLE ;
SET PATH "QSYS","QSYS2","SYSPROC","SYSIBMADM","SQLEXAMPLE" ;
CREATE FUNCTION SQLEXAMPLE.DYNTABLE (
SELECTBY VARCHAR( 64 ) )
RETURNS TABLE (
CUSTNBR DECIMAL( 6, 0 ) ,
CUSTFULLNAME VARCHAR( 12 ) ,
CUSTBALDUE DECIMAL( 6, 0 ) )
LANGUAGE SQL
NO EXTERNAL ACTION
MODIFIES SQL DATA
NOT FENCED
DISALLOW PARALLEL
CARDINALITY 100
BEGIN
DECLARE DYNSTMT VARCHAR ( 512 ) ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.TCUSTCDT
( CUSTNBR DECIMAL ( 6 , 0 ) NOT NULL ,
CUSTNAME VARCHAR ( 12 ) ,
CUSTBALDUE DECIMAL ( 6 , 2 ) )
WITH REPLACE ;
SET DYNSTMT = 'INSERT INTO Session.TCustCDt SELECT t2.CUSNUM , (t2.INIT CONCAT '' '' CONCAT t2.LSTNAM) as FullName , t2.BALDUE FROM QIWS.QCUSTCDT t2 ' CONCAT CASE WHEN SELECTBY = '' THEN '' ELSE SELECTBY END ;
EXECUTE IMMEDIATE DYNSTMT ;
RETURN SELECT * FROM SESSION . TCUSTCDT ;
END ;
COMMENT ON SPECIFIC FUNCTION SQLEXAMPLE.DYNTABLE
IS 'UDTF returning dynamic table' ;
And in 'Run SQL Scripts', the function can be called like this:
SELECT t1.* FROM TABLE(sqlexample.dyntable('WHERE STATE = ''TX''')) t1
The example is intended to work over IBM's sample QCUSTCDT table in library QIWS. Most systems will have that table available. The table function returns values from two QCUSTCDT columns, CUSNUM and BALDUE, directly through two of the table function's columns, CUSTNBR and CUSTBALDUE. The third table function column, CUSTFULLNAME, gets its value by a concatenation of INIT and LSTNAM from QCUSTCDT.
However, the part that apparently relates to the question is the SELECTBY parameter of the function. The usage example shows that a WHERE clause is passed in and used to help build a dynamic INSERT INTO ... SELECT ... statement. The example shows that rows containing STATE = 'TX' will be returned. A more complex clause could be passed in, or the needed condition(s) could be retrieved from somewhere else, e.g., from another table.
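Because of the CASE expression on SELECTBY, passing an empty string should simply omit the WHERE clause and return every row from QCUSTCDT, e.g. (a sketch along the same lines, untested on V5R4):
SELECT t1.* FROM TABLE(sqlexample.dyntable('')) t1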
The dynamic statement inserts rows into a GLOBAL TEMPORARY TABLE named SESSION.TCUSTCDT. The temporary table is defined in the function. The temporary column definitions are guaranteed (by the developer) to match the RETURNS TABLE columns of the table function, because no dynamic changes can be made to any of those elements. This allows SQL to reliably handle the columns returned from the function, and that lets it compile the function.
The RETURN statement simply returns whatever rows are in the temporary table after the dynamic statement completes.
The various field definitions take into account the somewhat unusual definitions in the QCUSTCDT file. Those don't make great sense, but they're useful enough.