I have a CLOB(2000000) field in a DB2 (v10) database, and I would like to run a simple UPDATE query on it to replace each occurrence of "foo" with "baaz".
Since the contents of the field are more than 32k, I get the following error:
"{some char data from field}" is too long.. SQLCODE=-433, SQLSTATE=22001
How can I replace the values?
UPDATE:
The query was the following (changed UPDATE into SELECT for easier testing):
SELECT REPLACE(my_clob_column, 'foo', 'baaz') FROM my_table WHERE id = 10726
UPDATE 2
As mustaccio pointed out, REPLACE does not work on CLOB fields (or at least not without casting the data to VARCHAR - which in my case is not possible since the size of the data is more than 32k) - the question is about finding an alternative way to achieve the REPLACE functionality for CLOB fields.
Thanks,
krisy
Finally, since I have found no way to do this with an SQL query, I ended up exporting the table, editing its LOB content in Notepad++, and importing the table back again.
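For reference, a rough sketch of that export/edit/import round trip from the DB2 command line (file and directory names are placeholders, adjust to your environment):
db2 "EXPORT TO my_table.del OF DEL LOBS TO ./lobs/ MODIFIED BY LOBSINFILE SELECT * FROM my_table"
then, after editing the exported LOB files in Notepad++:
db2 "IMPORT FROM my_table.del OF DEL LOBS FROM ./lobs/ MODIFIED BY LOBSINFILE REPLACE INTO my_table"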
Not sure if this applies to your case: There are 2 different REPLACE functions offered by DB2, SYSIBM.REPLACE and SYSFUN.REPLACE. The version of REPLACE in SYSFUN accepts CLOBs and supports values up to 1 MByte. In case your values are longer than that, you would need to write your own (SQL-based?) function.
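If your values do fit within 1 MByte, explicitly qualifying the function with the SYSFUN schema might work, along these lines (table and column names taken from your example):
UPDATE my_table
SET my_clob_column = SYSFUN.REPLACE(my_clob_column, 'foo', 'baaz')
WHERE id = 10726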
BTW: You can check function resolution by executing "values(current path)"
Hi, I am trying to split a string column which has a comma (',') delimiter.
drop table #address
CREATE TABLE #Address(stir VARCHAR(max));
GO
INSERT INTO #Address(stir)
values('aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"')
,('ca,"23","42316eg3","pom","","1800-01-01","9999","","1"')
,('daa,"","1324567a","","catty","","756432","213",""')
GO
Expected output:
I am using PARSENAME but it is returning NULL values. Please guide me on getting my expected output.
thanks in advance
The best solution here would be to just create a flat CSV file based on your current insert data, and then use SQL Server's bulk import tool to load it into a table. The following CSV data should be workable here:
aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"
ca,"23","42316eg3","pom","","1800-01-01","9999","","1"
daa,"","1324567a","","catty","","756432","213",""
Just make sure that you specify the double quote as the field quote character.
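Assuming SQL Server 2017 or later and a destination table with one VARCHAR column per CSV field, a BULK INSERT along these lines should load it (the table name and file path are placeholders):
BULK INSERT dbo.Address
FROM 'C:\data\address.csv'
WITH (FORMAT = 'CSV', FIELDTERMINATOR = ',', FIELDQUOTE = '"');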
On the IBM i system there's a way to copy from a structured file to one without structure using CPYF with FMTOPT(*NOCHK).
How can it be done with SQL?
The answer may be "you can't" - not if you are using DDL-defined tables, anyway. The problem is that *NOCHK just dumps data into the file as if it were a flat file. Files defined with CRTPF, whether they have DDS source or are program-described, don't care about bad data until read time, so they can contain bad data. In fact, you can even read bad data out of a file if you use a program-described definition for that file.
But, an SQL Table (one defined using DDL) cannot contain bad data. No matter how you write it, the database validates the data at write time. Even the *NOCHK option of the CPYF command cannot coerce bad data into an SQL table.
There really isn't an easy way
Closest would be to just build a big character string using CONCAT...
insert into flatfile
select mycharfld1
concat cast(myvchar as char(20))
concat digits(zonedFld3)
from mytable
That works for fixed-length character, varchar (if cast to char) and zoned decimal...
Packed decimal would be problematic.
I've seen user-defined functions that can return the binary character string that makes up a packed decimal... but it's very ugly.
I question why you think you need to do this.
You can use QSYS2.QCMDEXC stored procedure to execute OS commands.
Example:
call qsys2.qcmdexc ( 'CPYF FROMFILE(QTEMP/FILE1) TOFILE(QTEMP/FILE2) MBROPT(*replace) FMTOPT(*NOCHK)' )
I'm converting SQL Server code to DB2.
I need a DB2 equivalent of STUFF and FOR XML PATH.
Example:
SELECT STUFF((SELECT something
              FROM table_name
              WHERE condition
              FOR XML PATH('')), 1, 1, '')
Please convert this into DB2.
Your code is an old-school XML "trick" to convert multiple values into a single string (often comma-separated, but in this case space-separated). Since those days, DB2 (and the SQL standard) has added a new function called LISTAGG, which is designed to solve this exact problem:
SELECT LISTAGG(something, ' ')
FROM table_name
WHERE condition
DB2 docs:
https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_listagg.html
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/db2/rbafzcollistagg.htm
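As a fuller illustration, an aggregation grouped per department against the DB2 sample EMPLOYEE table (assuming it exists in your environment) might look like:
SELECT workdept,
       LISTAGG(lastname, ' ') WITHIN GROUP (ORDER BY lastname) AS names
FROM employee
GROUP BY workdept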
I have a job with a flow going into tOracleOutput, where I am updating the table. Now I have to update that table using an SQL statement, which I guess is an option in the Advanced settings of tOracleOutput, but I don't know how to use it - I am not understanding the settings properly. I referred to the official documentation but could not understand it. Can anyone explain the fields Name, SQL expression, Position and Reference Column in a better way?
The SQL query which I am using is:
update set COL1=SOMETHING1
where COL2=SOMETHING2
Now, value for COL1 is coming from the flow but COL2 is some column in the table which is not coming from the flow.
Have a look at tOracleRow for such a case.
Hope this helps.
TRF
Using tOracleOutput is helpful when you have a ready data source (a table or file with the same columns as the destination). The more elaborate your query is, the more you should do as TRF said (and use tOracleRow), but here's an example for your question:
the file contains 3 columns,
the destination DB table contains 4 columns, where the 4th is the date of update (the first 3 are identical to the input),
so you add the destination column's name in Name, put the SQL function for the date (e.g. SYSDATE) in SQL expression, and say where to put it (Position) in reference to a column of your choice (Reference Column).
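With that setup, the statement the component generates should look roughly like this (the table and column names here are placeholders, just to show where the additional column ends up):
INSERT INTO my_table (col1, col2, col3, update_date)
VALUES (?, ?, ?, SYSDATE)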
In my view this helps avoid using tMap just for one extra column when you want to Insert; but when you want to Update, the component doesn't offer the Additional Columns section, and I don't think you can add the WHERE clause there either.
Hope it helps
I have a PostgreSQL database. I had to extend an existing, big table with a few more columns.
Now I need to fill those columns. I thought I could create a .csv file (out of Excel/Calc) which contains the IDs / primary keys of existing rows - and the data for the new, empty fields. Is it possible to do so? If so, how?
I remember doing exactly this pretty easily using Microsoft SQL Server Management Studio, but for PostgreSQL I am using pgAdmin (though I am of course willing to switch tools if that would be helpful). I tried using the import function of pgAdmin, which uses PostgreSQL's COPY, but it seems like COPY isn't suitable as it can only create whole new rows.
Edit: I guess I could write a script which loads the csv and iterates over the rows, using UPDATE. But I don't want to reinvent the wheel.
Edit2: I've found this question here on SO which provides an answer by using a temp table. I guess I will use it - although it's more of a workaround than an actual solution.
PostgreSQL can import data directly from CSV files with COPY statements; this will, however, only work for new rows, as you stated.
Instead of creating a CSV file you could just generate the necessary SQL UPDATE statements.
Suppose this were the CSV file:
PK;ExtraCol1;ExtraCol2
1;"foo";42
4;"bar";21
Then just generate the following UPDATE statements:
UPDATE my_table SET ExtraCol1 = 'foo', ExtraCol2 = 42 WHERE PK = 1;
UPDATE my_table SET ExtraCol1 = 'bar', ExtraCol2 = 21 WHERE PK = 4;
You seem to work under Windows, so I don't really know how to accomplish this there (probably with PowerShell), but under Unix you could generate the SQL from a CSV easily with tools like awk or sed. An editor with regular expression support would probably suffice too.
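Alternatively, the temp-table approach you mention in your second edit would look roughly like this with the same sample data, using psql's \copy (the target table and column names are placeholders):
-- staging table for the CSV contents
CREATE TEMP TABLE new_values (pk integer, extracol1 text, extracol2 integer);
-- load the semicolon-delimited CSV client-side via psql
\copy new_values FROM 'new_values.csv' WITH (FORMAT csv, DELIMITER ';', HEADER true)
-- update the existing rows by joining on the primary key
UPDATE my_table AS t
SET extracol1 = nv.extracol1,
    extracol2 = nv.extracol2
FROM new_values AS nv
WHERE t.pk = nv.pk;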