SQL3116W The field value in row and column is missing, but the target column is not nullable. How to specify use of the column default - db2

I'm using the LOAD command to get data into a table where one of the columns has a default value of the current timestamp. I had NULL values in the data being read, as I thought that would cause the table to use the default value, but based on the above error that's not the case. How do I avoid the above error in this case?
Here is the full command; the input file is a text file: LOAD FROM ${LOADDIR}/${InputFile}.exp OF DEL MODIFIED BY COLDEL| INSERT INTO TEMP_TABLE NONRECOVERABLE

Try:
LOAD FROM ${LOADDIR}/${InputFile}.exp OF DEL MODIFIED BY USEDEFAULTS COLDEL| INSERT INTO TEMP_TABLE NONRECOVERABLE
The usedefaults modifier has been available in Db2-LUW since V7.x, as long as the version is fully serviced (i.e. has had the final fixpack correctly applied).
Note that some Db2-LUW versions place restrictions on the usage of the usedefaults modifier, as detailed in the documentation: for example, restrictions relating to its use with other modifiers, load modes, or target table types.
Always specify your Db2-server version and platform when asking for help, because the answer can depend on these facts.
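For context, here is an untested sketch of the kind of input usedefaults handles (the column layout and the file name input.exp are made up for illustration; the column with the default is left empty between adjacent delimiters):
$ db2 "create table TEMP_TABLE (id int, created timestamp not null with default current timestamp, note varchar(10))"
$ cat input.exp
1||first
2||second
$ db2 "LOAD FROM input.exp OF DEL MODIFIED BY USEDEFAULTS COLDEL| INSERT INTO TEMP_TABLE NONRECOVERABLE"
With usedefaults in effect, the empty second field in each record is loaded as the column default instead of being rejected with SQL3116W.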

You can specify which columns from the input file go into which columns of the table using METHOD P. If you omit the column you want the default for, the load throws a warning, but the default is populated:
$ db2 "create table testtab1 (cola int, colb int, colc timestamp not null default)"
DB20000I The SQL command completed successfully.
$ cat tt1.del
1,1,1
2,2,2
3,3,99
$ db2 "load from tt1.del of del method P(1,2) insert into testtab1 (cola, colb)"
SQL27967W The COPY NO recoverability parameter of the Load has been converted
to NONRECOVERABLE within the HADR environment.
SQL3109N The utility is beginning to load data from file
"/home/db2inst1/tt1.del".
SQL3500W The utility is beginning the "LOAD" phase at time "07/12/2021
10:14:04.362385".
SQL3112W There are fewer input file columns specified than database columns.
SQL3519W Begin Load Consistency Point. Input record count = "0".
SQL3520W Load Consistency Point was successful.
SQL3110N The utility has completed processing. "3" rows were read from the
input file.
SQL3519W Begin Load Consistency Point. Input record count = "3".
SQL3520W Load Consistency Point was successful.
SQL3515W The utility has finished the "LOAD" phase at time "07/12/2021
10:14:04.496670".
Number of rows read = 3
Number of rows skipped = 0
Number of rows loaded = 3
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 3
$ db2 "select * from testtab1"
COLA        COLB        COLC
----------- ----------- --------------------------
          1           1 2021-12-07-10.14.04.244232
          2           2 2021-12-07-10.14.04.244232
          3           3 2021-12-07-10.14.04.244232

  3 record(s) selected.

Related

value too long for type character varying(512) -- why can't I import the data?

The maximum size of limited character types (e.g. varchar(n)) in Postgres is 10485760.
description on max length of postgresql's varchar
Please download the file for testing and extract it to /tmp/2019q4; we only use pre.txt to import data.
sample data
Enter your psql and create a database:
postgres=# create database edgar;
postgres=# \c edgar;
Create the table according to the webpage:
fields in pre table definitions
edgar=# create table pre(
id serial ,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel varchar(512),
negating boolean
);
CREATE TABLE
Try to import data:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
We analyse the error info:
ERROR: value too long for type character varying(512)
CONTEXT: COPY pre, line 1005798, column plabel: "LIABILITIES AND STOCKHOLDERS EQUITY 0
0001493152-19-017173 2 11 BS 0 H LiabilitiesAndStockholdersEqu..."
Time: 1481.566 ms (00:01.482)
1. The size I set for the field is just 512, much less than 10485760.
2. The content in line 1005798 is not the same as shown in the error info:
0001654954-19-012748 6 20 EQ 0 H ReclassificationAdjustmentRelatingToAvailableforsaleSecuritiesNetOfTaxEffect 0001654954-19-012748 Reclassification adjustment relating to available-for-sale securities, net of tax effect" 0
Now I drop the previous table, change the plabel field to text, and re-create it:
edgar=# drop table pre;
DROP TABLE
Time: 22.763 ms
edgar=# create table pre(
id serial ,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel text,
negating boolean
);
CREATE TABLE
Time: 81.895 ms
Import the same data with same copy command:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
COPY 275079
Time: 2964.898 ms (00:02.965)
edgar=#
No error info in the psql console. Let me check the raw data '/tmp/2019q4/pre.txt', which contains 1043000 lines.
wc -l /tmp/2019q4/pre.txt
1043000 /tmp/2019q4/pre.txt
There are 1043000 lines; how many lines were imported then?
edgar=# select count(*) from pre;
count
--------
275079
(1 row)
Why was so little data imported, and with no error info?
The sample data you provided is obviously not the data you are really loading. It does still show the same error, but of course the line numbers and markers are different.
That file occasionally has double quote marks where there should be single quote marks (apostrophes). Because you are using CSV mode, these stray double quotes will start multi-line strings, which span all the way until the next stray double quote mark. That is why you have fewer rows of data than lines of input, because some of the data values are giant multiline strings.
Since your data clearly isn't CSV, you probably shouldn't be using \copy in CSV format. It loads fine in text format as long as you specify "header", although that option didn't become available in text format until v15. For versions before that, you could manually remove the header line, or use PROGRAM to skip the header, like FROM PROGRAM 'tail +2 /tmp/pre.txt'. Alternatively, you could keep using CSV format but choose a different quote character, one that never shows up in your data, such as with (delimiter E'\t', format csv, header, quote E'\b').
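Untested sketches of those three options, using the same table, column list, and file as above ('tail -n +2' is just the portable spelling of the header-skipping command). The first form needs PostgreSQL 15 or later for header in text format; the other two also work on older versions:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with (format text, delimiter E'\t', header);
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from program 'tail -n +2 /tmp/2019q4/pre.txt' with (format text, delimiter E'\t');
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with (delimiter E'\t', format csv, header, quote E'\b');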

Data too long error for a column of type Varchar(600) in db2 (Windows, version 10.0.22)

I have set a column of type Varchar with size 600, but I get the below error while updating the table.
[Code: -404, SQL State: 22001] THE SQL STATEMENT SPECIFIES A STRING THAT IS TOO LONG. SQLCODE=-404, SQLSTATE=22001, DRIVER=4.25.1301
I am using an update query as shown below.
update test set description= <here I am passing a description of character length 600> where id=1;
The above error doesn't appear if I pass a description with a character count of 597, but it does appear if the character count is greater than 597, even though I have set the column size to 600. Is there any specific explanation for this?
A quick test:
db2 "create table varchar_test(c1 int, c2 varchar(10))"
db2 "insert into varchar_test values (1, '0123456789')"
DB20000I The SQL command completed successfully.
Obviously I can insert a 10-byte string here. But if I replace the last character with the multi-byte UTF-8 character "ą" (2 bytes in UTF-8), it fails:
db2 "insert into varchar_test values (2, '012345678ą')"
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0433N Value "012345678ą" is too long. SQLSTATE=22001
With one character fewer it is OK:
db2 "insert into varchar_test values (2, '01234567ą')"
DB20000I The SQL command completed successfully.
so I can test now:
db2 "select c1, length(c2) c2_len_bytes,CHARACTER_LENGTH(c2) c2_char_len, hex(c2) c2_hex from varchar_test"
C1          C2_LEN_BYTES C2_CHAR_LEN C2_HEX
----------- ------------ ----------- --------------------
          1           10          10 30313233343536373839
          2           10           9 3031323334353637C485
-> it confirms that the second row is 10 bytes in size but has 9 characters.
I suggest repeating the same exercise for the longest string you can fit and checking whether you indeed have only single-byte characters in your VARCHAR. For a more detailed examination of UTF-8 characters in a Db2 database you can review my answer here:
When I importing gujarati data using csv file that time data show like?
(nothing Db2-specific there, just regular UTF-8 troubleshooting)
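As an untested sketch against the original problem (assuming the table and column are called test and description, as in the question), you can list the rows whose byte length differs from their character length, i.e. the ones containing multi-byte characters:
db2 "select id, length(description) as len_bytes, character_length(description) as len_chars from test where length(description) <> character_length(description)"
Any row returned there can hit SQLCODE -404 before the visible character count reaches 600, because VARCHAR(600) counts bytes by default, not characters.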

Read current "metadata change counter" (version count) of a table

In Firebird every table has an internal 1-byte "metadata change counter" which limits the number of alterations of each table to 255.
Is there a way to read the current value of this counter?
Each change you make to a table's structure is recorded in the RDB$FORMATS
system table. When you reach 255 changes, you must do a backup and
subsequent restore, which resets the counter for all tables.
Source
To get the number of changes for a table you can use:
select max(t.rdb$format) from rdb$formats t
where
t.rdb$relation_id = (select t2.rdb$relation_id from rdb$relations t2
where (t2.rdb$relation_name = 'MY_TABLE_NAME'))
The simplest query to get the current (highest) format version of a table is
select rdb$format
from rdb$relations
where rdb$relation_name = 'TABLE_NAME'
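Building on that, a small untested sketch that also reports how many metadata changes remain before the 255 limit mentioned above:
select rdb$relation_name,
       rdb$format as current_format,
       255 - rdb$format as changes_left
from rdb$relations
where rdb$relation_name = 'TABLE_NAME';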

How to insert first character of one column into another column?

I have a table with more than 30,000 entries and have to add a new column (zip_prefixes) containing the first digit of a zip code (zctea).
I created the column successfully:
alter table zeta add column zip_prefixes text;
Then I tried to put the values in the column with:
update zeta
set zip_prefixes = (
    select substr(cast(zctea as text), 1, 1)
    from zeta
);
Of course I got:
ERROR: more than one row returned by a subquery used as an expression
How can I get the first digit of the value from zctea into column zip_prefixes of the same row?
No need for sub-select:
update zeta
set zip_prefixes = substr(zctea, 1, 1);
update zeta
set zip_prefixes = substr(zctea, 1, 1);
There is no need for a select query or casting.
Consider not adding a functionally dependent column. It's typically cleaner and cheaper overall to retrieve the first character on the fly. If you need a "table", I suggest adding a VIEW.
Why the need to cast(zctea as text)? A zip code should be text to begin with.
Name it zip_prefix, not zip_prefixes.
Use the simpler and cheaper left():
CREATE VIEW zeta_plus AS
SELECT *, left(zctea::text, 1) AS zip_prefix FROM zeta; -- or without cast?
If you need the additional column in the table and the first character is guaranteed to be an ASCII character, consider the data type "char" (with double quotes). 1 byte instead of 2 (on disk) or 5 (in RAM). Details:
What is the overhead for varchar(n)?
Any downsides of using data type "text" for storing strings?
And run both commands in one transaction if you need to minimize lock time and / or avoid a visible column with missing values in the meantime. Faster, too.
BEGIN;
ALTER TABLE zeta ADD COLUMN zip_prefix "char";
UPDATE zeta SET zip_prefix = left(zctea::text, 1);
COMMIT;

How to write a DB2 stored procedure to insert/update/delete with random values?

1. I want to write a DB2 procedure to do common insert/update/delete operations on a table; the problem is how to generate SQL statements with random values. For example, for a column of integer type the stored procedure could generate numbers between 1 and 10000, and for a column of varchar type it could generate a string of randomly chosen characters with a fixed length, say 10.
2. Does the DB2 SQL syntax support something (sth) to put data from a file into a LOB column for a randomly chosen row? Say I have a table t1(c0 integer, c1 clob); how could I do something like "insert into t1 values(100, some_path_to_a_text_file)"?
3. When using DB2 "import" to load data, if the file contains 10000 rows, it seems DB2 by default will commit the entire 10000-row insertion in one single transaction. Is there any configuration/option I could use to divide the "import" process into, say, 10 transactions, each with 1000 rows?
Thank you very much!
1) To do a random operation, get a random value and process it according to a set of rules. I have a similar case in a utility I am currently developing.
https://github.com/angoca/log4db2/blob/master/src/examples/sql-pl/bank/DemoBankRandom.sql
It performs an insert, a select, an update or a delete based on a random value.
2) No idea. What is sth?
3) For more frequent commits, use COMMITCOUNT. For more info please check the Information Center: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0008304.html
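For points 1 and 3, an untested sketch (myfile.del and mytable are just placeholder names): RAND() plus integer arithmetic gives a random integer, CHR() gives a random character that you can concatenate (for example in a loop inside the procedure) to build a fixed-length string, and IMPORT accepts COMMITCOUNT to commit in batches:
db2 "values int(rand() * 10000) + 1"
db2 "values chr(65 + int(rand() * 26))"
db2 "import from myfile.del of del commitcount 1000 insert into mytable"
The first returns a random integer between 1 and 10000, the second a random upper-case letter, and the last commits after every 1000 imported rows instead of once at the end.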