Generated key in DB2 - db2

We have a table in DB2 with an auto-generated primary key. When I run
SELECT IDENTITY_VAL_LOCAL() as ID FROM SYSIBM.SYSDUMMY1
in the same AQT session just after the insert, it gives us the ID of the record just inserted.
The problem is with iBatis. When we use iBatis 2 as below
<selectKey resultClass="int" keyProperty="id">
SELECT IDENTITY_VAL_LOCAL() as ID FROM SYSIBM.SYSDUMMY1
</selectKey>
it doesn't return anything, which causes a NullPointerException.
Can someone please provide any input on what is required to do this operation in DB2?
Thanks,
Rajesh
I need to capture the ID of the last record inserted in DB2 using iBatis 2.3.4.
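For what it's worth, IDENTITY_VAL_LOCAL() is connection-scoped, so the selectKey query only returns a value when it runs on the same connection as the insert. DB2 can also return the generated key in a single statement via a data-change table reference; a minimal sketch, with illustrative table and column names (not tested from iBatis):
SELECT id
FROM FINAL TABLE (
    INSERT INTO myTable (some_col) VALUES ('some value')
)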

Related

Rollback doesn't work with Amazon Redshift

I am practicing with Redshift. I have:
created a table,
inserted values from another table,
and deleted the data from the table.
I have tried to roll back both of these steps, but it doesn't work. What am I doing wrong?
Open two psql terminals connected to the same Redshift instance and database, say terminal-1 and terminal-2.
Execute the following queries on terminal-1.
create table sales(
salesid integer not null Identity,
commission decimal(8,2),
saledate date,
description varchar(255),
created_at timestamp default sysdate,
updated_at timestamp);
begin;
insert into sales(commission,saledate,description,created_at,updated_at) values('3.55','2018-12-10','Test description','2018-05-17 23:54:51','2018-05-17 23:54:51');
insert into sales(commission,saledate,description,created_at,updated_at) values('5.67','2018-11-10','Test description1','2018-05-17 23:54:51','2018-05-17 23:54:51');
Pause here and go to terminal-2 (don't close terminal-1), and execute the following query:
select * from sales;
You will not see the two records inserted from terminal-1.
Now go back to terminal-1 and execute the query below.
commit;
Go back to terminal-2 and execute the following query again:
select * from sales;
Now you will see both records.
Point proven.
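The rollback counterpart behaves the same way; a sketch continuing the same session (the values are illustrative):
begin;
insert into sales(commission,saledate,description,created_at,updated_at) values('7.25','2018-10-10','Rollback test','2018-05-17 23:54:51','2018-05-17 23:54:51');
rollback;
select * from sales;
-- the 'Rollback test' row is gone: rollback works as long as the insert runs
-- inside an explicitly opened transaction rather than being auto-committed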

hive insert current date into a table using date function errors

I have to insert the current date (a timestamp) into a table via a Hive query. The query is failing for some reason. Can someone please help me out?
CREATE EXTERNAL TABLE IF NOT EXISTS dataFlagTest(
date string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://bckt1/hive_test/dateFlag/';
Now, to insert into it, I run the following query:
INSERT OVERWRITE TABLE dataFlagTest
SELECT from_unixtime(unix_timestamp()) ;
It failed with the following error:
FAILED: NullPointerException null
Can someone please help me out?
The solution is that you have to select from a table; you cannot run a SELECT without a FROM clause.
So, create a sample table with one row, or use an existing table, as below:
Insert OVERWRITE TABLE dataflagtest SELECT from_unixtime(unix_timestamp()) as date FROM EXISTING_TABLE TABLESAMPLE(1 ROWS);
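If you would rather not depend on a large existing table each time, a small one-row helper table also works; a sketch, where dummy_one_row is an illustrative name and EXISTING_TABLE stands for any table you already have:
-- create and seed a one-row helper table once
CREATE TABLE IF NOT EXISTS dummy_one_row (x INT);
INSERT OVERWRITE TABLE dummy_one_row SELECT 1 FROM EXISTING_TABLE TABLESAMPLE(1 ROWS);
-- afterwards, current-timestamp inserts no longer need to scan a big table
INSERT OVERWRITE TABLE dataflagtest
SELECT from_unixtime(unix_timestamp()) AS date FROM dummy_one_row;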

Nxlog im_dbi is not working

I am able to insert data into PostgreSQL using nxlog (om_dbi).
But I am not able to select (fetch) data from PostgreSQL using nxlog. I have tried many options and nothing is working.
In the nxlog documentation, the description for the im_dbi module only says "FIXME".
Document Link: http://nxlog.org/documentation/nxlog-community-edition-reference-manual-v20928#im_dbi
Please help me to solve this.
Logs:
<Input dbiin>
    Module    im_dbi
    SavePos   TRUE
    SQL       SELECT * FROM NEW_TABLE
    Driver    pgsql
    Option    host 127.0.0.1
    Option    username chitta
    Option    password ''
    Option    dbname db
</Input>
2014-10-16 14:29:17 WARNING nxlog-ce received a termination request signal, exiting...
2014-10-16 14:29:18 INFO nxlog-ce-2.8.1248 started
2014-10-16 14:29:18 ERROR im_dbi failed to execute SQL statement. ERROR: column "id" does not exist;LINE 1: SELECT * FROM NEW_TABLE WHERE id = 1;
Note:
the module automatically adds a "WHERE id > %d" clause to the query.
Not an answer, but here's some help.
The most important directive is missing: SQL Select ID as id,
DateOccured as EventTime, data from logtable
Source: https://www.mail-archive.com/nxlog-ce-users#lists.sourceforge.net/msg00225.html
I'm currently in the same boat. My assumption is that your data is not formatted in a way that nxlog can interpret. I'm troubleshooting and will get back to you if I can find a resolution.
I'm also digging through the source code for the im_dbi module.
https://github.com/lamby/pkg-nxlog-ce/blob/master/src/modules/input/dbi/im_dbi.c
The answer by SoMuchToGrok is valid.
Actually, the question already contains the key clue: "ERROR: column "id" does not exist".
The table must have an id column, or you must use SELECT x AS id so that the result set has an id column in it.
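In other words, the SQL directive needs to expose a column named id for the implicit "WHERE id > %d" filter to work against. A sketch, assuming the table's key column is actually called record_id (an illustrative name):
-- alias the real key column to id so im_dbi's position tracking can filter on it
SELECT record_id AS id, * FROM NEW_TABLE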

How to retrieve the autoincrement value after inserting 1 record in a single query (SQL Server)

I have two fields in my table:
one is a primary key auto-increment value and the second is a text value,
let's say: xyzId & xyz
So I can easily insert like this
insert into abcTable(xyz) Values('34')
After performing the above query, it should insert this information:
xyzId=1 & xyz=34
and to retrieve it I can do this:
select xyzId from abcTable
But this way I have to run two operations. Can't I retrieve it in a single query / subquery?
Thanks
If you are on SQL Server 2005 or later, you can use the OUTPUT clause to return the auto-created id.
Try this:
insert into abcTable(xyz)
output inserted.xyzId
values('34')
I think you can't do an insert and a select in a single query.
You can use a stored procedure to execute the two instructions as an atomic operation, or you can build a query in code with the two instructions using ';' (semicolon) as a separator between instructions.
Anyway, for selecting identity values in SQL Server you should look at @@IDENTITY, SCOPE_IDENTITY() and IDENT_CURRENT(). It's faster and cleaner than a select on the table.
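For completeness, a minimal sketch of the two-statement batch using SCOPE_IDENTITY() (table and column names follow the question):
INSERT INTO abcTable(xyz) VALUES ('34');
-- SCOPE_IDENTITY() returns the identity generated by the insert above, in the current scope
SELECT SCOPE_IDENTITY() AS xyzId;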

how to emulate "insert ignore" and "on duplicate key update" (sql merge) with postgresql?

Some SQL servers have a feature where INSERT is skipped if it would violate a primary/unique key constraint. For instance, MySQL has INSERT IGNORE.
What's the best way to emulate INSERT IGNORE and ON DUPLICATE KEY UPDATE with PostgreSQL?
With PostgreSQL 9.5, this is now native functionality (like MySQL has had for several years):
INSERT ... ON CONFLICT DO NOTHING/UPDATE ("UPSERT")
9.5 brings support for "UPSERT" operations.
INSERT is extended to accept an ON CONFLICT DO UPDATE/IGNORE clause. This clause specifies an alternative action to take in the event of a would-be duplicate violation.
...
Further example of new syntax:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username)
DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
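The INSERT IGNORE counterpart just swaps the conflict action; a sketch using the same illustrative table:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username) DO NOTHING;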
Edit: in case you missed warren's answer, PG9.5 now has this natively; time to upgrade!
Building on Bill Karwin's answer, to spell out what a rule based approach would look like (transferring from another schema in the same DB, and with a multi-column primary key):
CREATE RULE "my_table_on_duplicate_ignore" AS ON INSERT TO "my_table"
WHERE EXISTS(SELECT 1 FROM my_table
WHERE (pk_col_1, pk_col_2)=(NEW.pk_col_1, NEW.pk_col_2))
DO INSTEAD NOTHING;
INSERT INTO my_table SELECT * FROM another_schema.my_table WHERE some_cond;
DROP RULE "my_table_on_duplicate_ignore" ON "my_table";
Note: The rule applies to all INSERT operations until the rule is dropped, so not quite ad hoc.
For those of you that have Postgres 9.5 or higher, the new ON CONFLICT DO NOTHING syntax should work:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT field_one, field_two, field_three
FROM source_table
ON CONFLICT (field_one) DO NOTHING;
For those of us on an earlier version, this left join will work instead:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT source_table.field_one, source_table.field_two, source_table.field_three
FROM source_table
LEFT JOIN target_table ON source_table.field_one = target_table.field_one
WHERE target_table.field_one IS NULL;
Try to do an UPDATE. If it doesn't modify any row, that means the row didn't exist, so do an INSERT. Obviously, you do this inside a transaction.
You can of course wrap this in a function if you don't want to put the extra code on the client side. You also need a retry loop to handle the very rare race condition in that approach.
There's an example of this in the documentation: http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html, example 40-2 right at the bottom.
That's usually the easiest way. You can do some magic with rules, but it's likely going to be a lot messier. I'd recommend the wrap-in-function approach over that any day.
This works for single-row, or few-row, values. If you're dealing with large numbers of rows, for example from a subquery, you're best off splitting it into two queries, one for INSERT and one for UPDATE (with an appropriate join/subselect of course; no need to write your main filter twice).
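A condensed sketch of that wrap-in-function approach, modeled on the merge example in the linked documentation (table and column names are illustrative):
CREATE OR REPLACE FUNCTION merge_mytable(k INT, v TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE mytable SET val = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- no row to update, so try to insert; a concurrent session may insert
        -- the same key first, hence the exception handler and the retry loop
        BEGIN
            INSERT INTO mytable(key, val) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, loop back and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;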
To get the INSERT IGNORE logic, you can do something like the following. I found that simply inserting from a SELECT of literal values worked best; then you can mask out the duplicate keys with a NOT EXISTS clause. To get the update-on-duplicate logic, I suspect a pl/pgsql loop would be necessary.
INSERT INTO manager.vin_manufacturer
(SELECT * FROM( VALUES
('935',' Citroën Brazil','Citroën'),
('ABC', 'Toyota', 'Toyota'),
('ZOM',' OM','OM')
) as tmp (vin_manufacturer_id, manufacturer_desc, make_desc)
WHERE NOT EXISTS (
--ignore anything that has already been inserted
SELECT 1 FROM manager.vin_manufacturer m where m.vin_manufacturer_id = tmp.vin_manufacturer_id)
)
INSERT INTO mytable(col1,col2)
SELECT 'val1','val2'
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE col1='val1')
As #hanmari mentioned in his comment, when inserting into a postgres table, ON CONFLICT (..) DO NOTHING is the best code to use to avoid inserting duplicate data:
query = "INSERT INTO db_table_name(column_name)
VALUES(%s) ON CONFLICT (column_name) DO NOTHING;"
The ON CONFLICT line of code allows the insert statement to still insert the non-duplicate rows of data. The query-and-values code is an example of inserting data from an Excel spreadsheet into a postgres db table.
I have constraints added to a postgres table I use to make sure the ID field is unique. Instead of running a delete on rows of data that are the same, I add a line of SQL code that renumbers the ID column starting at 1.
Example:
q = 'ALTER SEQUENCE table_name_id_column_seq RESTART WITH 1'  # sequence name follows the usual <table>_<column>_seq convention
If my data has an ID field, I do not use it as the primary ID/serial ID; instead, I create a separate ID column and set it to serial.
I hope this information is helpful to everyone.
*I have no college degree in software development/coding. Everything I know in coding, I study on my own.
Looks like PostgreSQL supports a schema object called a rule.
http://www.postgresql.org/docs/current/static/rules-update.html
You could create a rule ON INSERT for a given table, making it do NOTHING if a row exists with the given primary key value, or else making it do an UPDATE instead of the INSERT if a row exists with the given primary key value.
I haven't tried this myself, so I can't speak from experience or offer an example.
This solution avoids using rules:
BEGIN
INSERT INTO tableA (unique_column,c2,c3) VALUES (1,2,3);
EXCEPTION
WHEN unique_violation THEN
UPDATE tableA SET c2 = 2, c3 = 3 WHERE unique_column = 1;
END;
but it has a performance drawback (see PostgreSQL.org):
A block containing an EXCEPTION clause is significantly more expensive
to enter and exit than a block without one. Therefore, don't use
EXCEPTION without need.
For bulk loads, you can always delete the rows before the insert. Deleting a row that doesn't exist doesn't cause an error, so it is safely skipped.
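A sketch of that delete-then-insert pattern for a bulk load (table and column names are illustrative):
BEGIN;
-- remove any rows that would collide with the incoming batch
DELETE FROM target_table
WHERE key IN (SELECT key FROM staging_table);
-- then insert the whole batch without risking duplicate-key errors
INSERT INTO target_table
SELECT * FROM staging_table;
COMMIT;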
For data import scripts, as a replacement for "IF NOT EXISTS", there's a slightly awkward formulation that nevertheless works:
DO
$do$
BEGIN
   PERFORM id
   FROM whatever_table;

   IF NOT FOUND THEN
      -- INSERT stuff
   END IF;
END
$do$;