Is the output of sqlite in "mode .insert" correct? - postgresql

Consider a table I create in an SQLite database with CREATE TABLE tbl(x); and populate with: INSERT INTO tbl VALUES(1); INSERT INTO tbl VALUES(2);. Now I wish to create an SQL file of this schema and data to import into PostgreSQL, so I do the following:
.mode insert
.output "tbl.sql"
.schema tbl
select * from tbl order by x;
.output stdout
And the output is:
CREATE TABLE tbl(x);
INSERT INTO table VALUES(1);
INSERT INTO table VALUES(2);
Shouldn't the output of the insert statements be INSERT INTO tbl VALUES(1); INSERT INTO tbl VALUES(2); ?
This is not really a problem, because I can easily do a find/replace to fix it, but that might introduce unforeseen problems (such as changing data inside an INSERT statement).

From the fine SQLite manual:
When specifying insert mode, you have to give an extra argument which is the name of the table to be inserted into. For example:
sqlite> .mode insert new_table
sqlite> select * from tbl1;
INSERT INTO "new_table" VALUES('hello',10);
INSERT INTO "new_table" VALUES('goodbye',20);
sqlite>
So saying just .mode insert leaves SQLite using the default table name, which is apparently table. You should be saying:
.mode insert tbl
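If you are generating the export from a script rather than the sqlite3 shell, the same statements can be built with Python's sqlite3 module. A minimal sketch, using the table and data from the question (the dump_inserts helper is hypothetical, not part of any API):

```python
import sqlite3

# Rebuild the sample table from the question in an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl(x)")
con.executemany("INSERT INTO tbl VALUES(?)", [(1,), (2,)])

def dump_inserts(con, table):
    """Emit INSERT statements naming the real table, like `.mode insert <table>`."""
    stmts = []
    for row in con.execute(f"SELECT * FROM {table} ORDER BY x"):
        values = ",".join(repr(v) for v in row)
        stmts.append(f"INSERT INTO {table} VALUES({values});")
    return stmts

print("\n".join(dump_inserts(con, "tbl")))
```

The standard library's Connection.iterdump() also emits a complete schema-plus-data dump with the correct table names, which may be enough on its own.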

Related

sqldeveloper exports integer inserts as strings

This seems to be a bug in version 19.4 that is fixed in 20+.
I exported the content of my tables in SQL Developer, and the INSERT statements all have the numbers as strings.
Example:
Insert into testtable(id,stuff) values ('1','Hello')
ID 1 becomes '1' in the export, and I have trouble reading it back in. This is the case for every table. Is there a way to avoid the two quotes?
The DDL is:
create table TESTTABLE(
ID INTEGER not null
);
After executing, it looks like this in SQL Developer:
create table testtable(
"ID" Number(*,0) NOT NULL ENABLE
);
I noticed that I'm able to insert such a row if the constraints are not active. It seems like SQL Developer converts the string to a number internally.
create table testtable(
"ID" Number(*,0) NOT NULL ENABLE
);
insert into testtable values (1);
commit;
select /*insert*/ * from testtable;
Running this, I get:
Table TESTTABLE created.
1 row inserted.
Commit complete.
REM INSERTING into TESTTABLE
SET DEFINE OFF;
Insert into TESTTABLE (ID) values (1);
No quotes on the number value/field. I did this with version 20.2 of SQL Developer.
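The silent string-to-number conversion that the inactive constraints were masking can be seen in miniature with SQLite's type affinity. This is only an analogy to what SQL Developer does for Oracle, sketched in Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE testtable(id INTEGER NOT NULL)")
# Insert the value as a string, the way the exported script does.
con.execute("INSERT INTO testtable VALUES ('1')")
# INTEGER column affinity converts the well-formed string to a number on storage.
value, vtype = con.execute("SELECT id, typeof(id) FROM testtable").fetchone()
print(value, vtype)  # the stored value is a real integer, not text
```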

DB2 Temp Table - Retrieving inserted Data in DB2 Z OS

I have created a temp table in DB2 z/OS and inserted rows as shown below:
CREATE GLOBAL TEMPORARY TABLE tmp2 (col1 INT)
INSERT INTO tmp2 (col1) VALUES (10687);
INSERT INTO tmp2 (col1) VALUES (10689);
INSERT INTO tmp2 (col1) VALUES (10691);
The data inserted without any issues, but when I try to retrieve it with a SELECT query, I can't see any of the values I inserted above:
select * from tmp2
I have earlier experience with SQL Server, where the queries below work without any issues:
Drop table #tmp2
CREATE TABLE #tmp2 (col1 INT)
INSERT INTO #tmp2 (col1) VALUES (10687);
INSERT INTO #tmp2 (col1) VALUES (10689);
INSERT INTO #tmp2 (col1) VALUES (10691);
select * from #tmp2
How to get to see the inserted data?
Check the documentation for details; sometimes this is faster than waiting for an answer.
The CGTT (CREATE GLOBAL TEMPORARY TABLE) object differs from a regular table in what happens at COMMIT: it empties the table if there are no WITH HOLD cursors open on it. If you have autocommit enabled for your database connection, the result is that your CGTT table may appear to be empty.
If you want more control over the commit behaviour (and rollback behaviour, logging options, etc.), consider using DGTT (DECLARE GLOBAL TEMPORARY TABLE) instead, because that syntax lets you use additional non-default options such as ON COMMIT PRESERVE ROWS and ON ROLLBACK PRESERVE ROWS. But a DGTT object has more restrictions, including that its qualifier must always be SESSION and its definition is not catalogued, so the table is invisible to any other session.
Thanks for all replies.
Below is the set of queries I was actually looking for:
DECLARE GLOBAL TEMPORARY TABLE SESSION.tmp2 (col1 INTEGER)
CCSID EBCDIC ON COMMIT PRESERVE ROWS;
INSERT INTO SESSION.tmp2 (col1) VALUES (10687);
INSERT INTO SESSION.tmp2 (col1) VALUES (10689);
INSERT INTO SESSION.tmp2 (col1) VALUES (10691);
select * from SESSION.tmp2;

Too much content of 'table_name' are deleted during executing alter distkey command

I've been getting this error when trying to alter sortkeys on a table (the table has ~1M rows x ~12 columns). The table has no sortkeys before doing the alter as seen below:
alter table table_name
alter sortkey (date_col, col2, col3);
This tries to run for a few seconds before returning the following:
ERROR: Too much content of 'table_name' are deleted during executing alter distkey command. Please Retry.
Has anyone run into this particular error before? My solution has been to create a new table with my desired sortkeys, which works fine.
create table table_name_sorted
sortkey (date_col, col2, col3)
as (
select * from table_name
);
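If other queries reference the original table name, the usual follow-up to this create-and-copy workaround is a rename swap. A sketch, assuming nothing writes to the table during the swap; note also that a table built with CREATE TABLE ... AS does not inherit the original table's grants, so permissions may need to be re-applied:

```sql
ALTER TABLE table_name RENAME TO table_name_old;
ALTER TABLE table_name_sorted RENAME TO table_name;
DROP TABLE table_name_old;
```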

Violation of primary key in an INSERT INTO statement - does entire statement get terminated?

I have a statement to populate a table from another table:
INSERT INTO table1 SELECT * FROM table2
Due to some rubbish test data still being in table1 (but a scenario that may occur in live, anything is possible), I get the error
Violation of PRIMARY KEY constraint 'PK_xxx'. Cannot insert duplicate key in object 'table1'
The statement has been terminated
If the SELECT from table2 returns 100 rows, is only the violating insert not committed to table1 or is the ENTIRE INSERT INTO statement not committed/rolled back due to the PK violation?
The entire statement is not committed. It's easy enough to test, like this:
Create Table #Target (Id Int Primary Key)
Insert Into #Target Values(1)
Insert Into #Target Values(3)
Insert Into #Target Values(5)
Insert Into #Target Values(7)
Create Table #Source (Id Int)
Insert Into #Source Values(1)
Insert Into #Source Values(2)
Insert Into #Source Values(3)
Insert Into #Source Values(4)
Insert Into #Source Values(5)
Insert Into #Source Values(6)
Insert Into #Source Values(7)
Insert Into #Source Values(8)
Insert Into #Target(Id)
Select Id From #Source
Select * From #Target
Drop Table #Target
Drop Table #Source
The code above creates a target table with a primary key, then a source table with the same column but different values. Then a command similar to the one you posted is executed, inserting rows from the source table into the target table.
Then we select from the target table. As you can see, only the original values are there.
If you use this code instead, only the missing rows will be inserted.
Insert
Into #Target
Select #Source.*
From #Source
Left Join #Target
On #Source.Id = #Target.Id
Where #Target.Id Is NULL
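The same statement-level atomicity can be reproduced outside SQL Server. A sketch using SQLite from Python, where the default ON CONFLICT ABORT behavior likewise backs out only the failed statement (the table names mirror the T-SQL demo above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO target VALUES (?)", [(1,), (3,), (5,), (7,)])
con.execute("CREATE TABLE source (id INTEGER)")
con.executemany("INSERT INTO source VALUES (?)", [(i,) for i in range(1, 9)])

try:
    # One multi-row statement: the duplicate key makes the WHOLE statement fail.
    con.execute("INSERT INTO target SELECT id FROM source")
except sqlite3.IntegrityError as e:
    print("statement aborted:", e)

rows = [r[0] for r in con.execute("SELECT id FROM target ORDER BY id")]
print(rows)  # target still holds only the original rows
```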

Copy Postgres table while maintaining primary key autoincrement

I am trying to copy a table with the Postgres command below, but the primary key autoincrement feature does not copy over. Is there any quick and simple way to accomplish this? Thanks!
CREATE TABLE table2 AS TABLE table;
Here's what I'd do:
BEGIN;
LOCK TABLE oldtable;
CREATE TABLE newtable (LIKE oldtable INCLUDING ALL);
INSERT INTO newtable SELECT * FROM oldtable;
SELECT setval('the_seq_name', (SELECT max(id) FROM oldtable)+1);
COMMIT;
... though this is a moderately unusual thing to need to do and I'd be interested in what problem you're trying to solve.
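If the sequence name is not known up front, Postgres can resolve it from the table and column. A sketch, assuming the serial/identity column is named id:

```sql
SELECT setval(pg_get_serial_sequence('oldtable', 'id'),
              (SELECT max(id) FROM oldtable) + 1);
```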